Source: HAL tel-01749805 (2013), https://hal.univ-lorraine.fr/tel-01749805/file/DDOC_T_2013_0067_MAZURENKO.pdf
INTRODUCTION

The progress of modern analytical chemistry is closely associated with increasing the sensitivity and selectivity of analysis, as well as with the development of new equipment that allows quick and easy analysis of complex samples outside the laboratory. These requirements have led to the emergence of a new trend in analytical chemistry: chemical sensors. Among the most promising chemical sensors are biosensors, which contain a bio-recognition element. Biosensors are characterized, above all, by extraordinary selectivity due to the high specificity of the bio-recognition element, such as an enzyme, which in some cases allows separate determination of stereoisomers. Secondly, biosensors demonstrate a fast response time, enabling express analysis. Thirdly, analysis with biosensors is highly sensitive because, in most cases, it belongs to the kinetic methods based on measuring the reaction rate. Biosensors are particularly promising for the analysis of organic compounds, including biologically active substances. The acute need for such analysis stems from the high demands of quality control of food and medicines, and from the importance of determining such substances in biological fluids for the diagnosis of a large number of diseases. However, routine analysis of these samples is complicated by the large variety of components in the matrix, which reduces sensitivity and selectivity. In fact, at present only chromatography allows such samples to be analyzed with confidence, and it suffers from such disadvantages as long analysis times, high cost, complex sample preparation and high demands on reagent purity. Free of these disadvantages, biosensors are a real alternative to chromatographic methods for the analysis of organic substances in complex samples.
The research conducted so far on new biosensors has been limited, in most cases, to glucose oxidase as the bio-recognition element. This enzyme allows different approaches to enzyme immobilization on the electrode surface to be implemented, owing to its high activity and stability. At the same time, other available enzymes from the oxidoreductase class offer more opportunities in the choice of analyte and sample. However, these enzymes are often characterized by poorer stability and activity, which prevents the application of traditional immobilization approaches and directs research efforts toward finding new matrices for bio-encapsulation. One promising method of enzyme immobilization is encapsulation in a thin SiO2 film on the electrode surface by sol-gel technology. This keeps the enzymes active and ensures unhindered access of substrate molecules to the enzyme, increasing the sensitivity of the modified electrodes. The electrodeposition method has acquired a good reputation for depositing such films on the electrode surface, yielding uniform and homogeneous coatings. This method was previously proposed in our laboratory for the encapsulation of glucose oxidase and hemoglobin. Another pathway to increasing biosensor sensitivity is the use of metallic and carbonaceous nanomaterials, which are characterized by high surface area and catalytic properties.

CHAPTER 1. LITERATURE SURVEY

General overview of amperometric biosensors

The development of electrochemical sensors is one of the most active areas of analytical chemistry in recent years. The electrode serves as the transducer in such sensors, contributing to their low cost and availability. Owing to the easy registration of an electrical signal, electrochemical sensors were the first to reach commercialization and are widely used in clinical, industrial, agrochemical and environmental analysis [1].
Electrochemical biosensors combine the analytical power of electrochemical techniques with the unprecedented specificity of biological recognition processes. The general idea of such sensors is the immobilization of a bio-recognition element in close proximity to the electrode surface and the generation of an electrochemical (most often amperometric or potentiometric) signal whose magnitude depends on the concentration of the analyte. Modern technologies make it possible to produce tiny, cheap and easy-to-use biosensors, which are already exploited in many fields of analytical chemistry: food analysis [2,3], environmental [4] and clinical analysis [START_REF] Rastislav | Application of Electrochemical Biosensors in Clinical Diagnosis / R. Monošík[END_REF]. Unanimously adopted by the world scientific community as a powerful analytical method, biosensors have their own chapters in modern analytical chemistry textbooks [START_REF] Otto | Modern methods in analytical chemistry[END_REF][START_REF] David | Modern Analytical Chemistry / D. Harvey[END_REF][8]. The term "biosensor" usually refers to the use of any material of biological origin as the recognition element. Microorganisms, organelles, nucleic acids and antibodies all find application in biosensors. The most popular, however, are biosensors containing an enzyme (or enzymes), because of the relative simplicity of their preparation and the fact that enzymes, as natural catalysts, are characterized by high specificity, efficiency and reaction rate. Therefore, below the term "biosensor" will refer to enzyme-based biosensors.

Concept of amperometric biosensors

Unlike many chemical reactions, electrochemical processes always occur at the interface between the electrode and the solution. Depending on the conditions, electrochemical measurements can be carried out in potentiometric (equilibrium) or potentiostatic (non-equilibrium) mode.
In the first case, the experiment is carried out in static mode, with no current flowing through the electrochemical cell. The established electrode potential allows the concentration of the analyte in solution to be determined. Potentiometry is an important method in analytical chemistry, and the new ion-selective membranes developed over the past 10-20 years allow direct monitoring of many ions in complex samples [START_REF] Evans Alun | Potentiometry and Ion Selective Electrodes[END_REF]. Potentiostatic, or voltammetric, methods of analysis are based on a dynamic non-equilibrium situation: the potential applied to the electrode induces electrochemical reactions at the electrode-solution boundary, and the current passing through the cell is non-zero. This current can be used to characterize the occurring reaction and to detect electroactive substances in solution. The advantages of voltammetry are high sensitivity and selectivity, a wide linear range, portable and cheap equipment, and a large number of available electrodes [START_REF] Joseph | Analytical Electrochemistry[END_REF]. The enzyme immobilized on the electrode catalyzes a reaction that can be schematically represented as follows:

Substrate + Co-reactant (coenzyme) --enzyme--> Product + Co-reactant (coenzyme)'

The choice of transducer therefore depends primarily on the enzymatic system used in each case. For example, the enzymatic reaction of urease leads to changes in pH, so pH-sensitive electrodes are the best choice, while decarboxylases, which release carbon dioxide, can be coupled with potentiometric gas sensors. For most enzymes, however, voltammetry is advantageous, because the electroactive substances released or consumed in the reaction are easily detected at electrodes. Historically, amperometric biosensors can be divided into three generations [START_REF] Brian | Chemical Sensors and Biosensors[END_REF] (Fig.
1.1):

1) In first-generation biosensors (Fig. 1.1a), the enzymatic oxidation of the substrate involves dissolved oxygen. The first such biosensor was developed on the basis of glucose oxidase and the Clark oxygen electrode [START_REF] Clark | Continuous recording of blood oxygen tensions by polarography[END_REF]. It measured the concentration of glucose by detecting the decrease in the concentration of dissolved oxygen consumed in the enzymatic reaction [START_REF] Updike | The enzyme electrode / S[END_REF]. However, these biosensors have significant limitations arising from the need to maintain a constant concentration of dissolved oxygen and from the very low potential of oxygen reduction (-0.7 V). Therefore, the cathodic reduction of oxygen was replaced by anodic oxidation of the hydrogen peroxide released in the reaction.

2) Second-generation biosensors (Fig. 1.1b) use a so-called mediator, or electron carrier, involved in the enzymatic reaction. Mediators are usually molecules that can be easily and reversibly oxidized and reduced at the electrode at low potential (e.g., ferrocene and ferrocyanides). Their role is to transfer electrons from the enzyme molecule to the electrode, inducing a current that depends on the substrate concentration. The use of mediators gave a significant boost to the development of new types of biosensors, although problems remain, such as effective immobilization of the mediator on the electrode surface and the strict demands on the mediator molecule itself (low redox potential, pH independence, lack of reaction with other components of the biosensor) [START_REF] Dzyadevych | Amperometric enzyme biosensors[END_REF].

3) Third-generation biosensors (Fig. 1.1c) operate on the basis of direct electron transfer between the electrode and the active center of the enzyme.
This approach makes it possible to dispense with any intermediary molecules and converts the substrate concentration almost directly into a measurable electrochemical signal. However, the design of such biosensors is not an easy task, especially because the electrochemically active group of the enzyme is usually located deep inside the protein molecule, shielded by protein groups [START_REF] Freire | Direct Electron Transfer : An Approach for Electrochemical Biosensors with Higher Selectivity and Sensitivity[END_REF]. Regardless of the biosensor generation, the enzyme must be firmly fixed in close proximity to the electrode, while unobstructed diffusion of the substrate to the enzyme, and of the reaction products to the electrode, must be preserved. As noted above, the concentration of analyte in solution is determined from the current flowing through the working electrode. Two measurement methods give the best results. Voltammograms (linear or cyclic) are obtained by sweeping the potential of the working electrode at a certain rate. At some potential an electrochemical reaction occurs, marked by an increase in current and the appearance of a peak on the voltammogram. The peak value is proportional to the concentration of electrochemically active species according to the Randles-Sevcik equation [START_REF] Brian | Chemical Sensors and Biosensors[END_REF] (see Appendix B). However, potential-scanning methods are not very practical for biosensor applications, owing to the contribution of the charging current of the electrical double layer [START_REF] Budnikov | Fundamentals of modern electrochemical analysis[END_REF] and the considerable time required for one scan. A more convenient method is amperometry, in which a constant potential applied to the working electrode induces the electrochemical reaction.
Under these conditions the current first decreases sharply, owing to depletion of the near-electrode layer, and then approaches a constant value that can be calculated from the modified Cottrell equation (see Appendix B). This quasi-stationary current depends on the concentration of electrochemically active species, and stirring the solution reduces the thickness of the diffusion layer and increases the current.

Enzyme types used in biosensors

Enzymes are substances of protein nature that catalyze chemical reactions in living systems. At present, several thousand individual enzymes in living organisms are known [17]. The International Union of Biochemistry and Molecular Biology has developed a four-level system of classification and nomenclature according to the type of reaction accelerated by the enzyme [18]. In this classification, oxidoreductases are the first, and probably one of the largest, of the six enzyme classes. These enzymes catalyze biological redox reactions that transfer electrons from one molecule to another, which makes them ideal for the creation of electrochemical biosensors. The oxidoreductase class can be divided into two subclasses depending on the oxidant used. If molecular oxygen acts as the electron acceptor, being converted into hydrogen peroxide, the enzymes are oxidases. If a special molecule (coenzyme) acts as the oxidant, the enzymes belong to the subclass of dehydrogenases. Among the variety of coenzymes (PQQ, FMN, TPP, coenzyme A, etc.), two are most widespread in the oxidoreductase class: nicotinamide adenine dinucleotide (NAD or NADP) and flavin adenine dinucleotide (FAD). These molecules can be oxidized and reduced in a reversible redox process.
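The two detection relations invoked earlier (the Randles-Sevcik equation for potential scanning and the Cottrell equation for amperometry; see Appendix B) both predict currents that scale linearly with analyte concentration, which is what makes calibration straightforward. A minimal numerical sketch in their standard 25 °C textbook forms; all parameter values below are illustrative assumptions, not taken from this work:

```python
import math

F = 96485.0  # Faraday constant, C/mol

def randles_sevcik(n, A, D, C, v):
    """Peak current (A) for a reversible couple at 25 degC.
    n: electrons transferred; A: electrode area, cm^2; D: diffusion
    coefficient, cm^2/s; C: concentration, mol/cm^3; v: scan rate, V/s."""
    return 2.69e5 * n**1.5 * A * math.sqrt(D) * C * math.sqrt(v)

def cottrell(n, A, D, C, t):
    """Diffusion-limited current (A) at time t (s) after a potential step."""
    return n * F * A * C * math.sqrt(D / (math.pi * t))

# Illustrative (assumed) values: 1-electron process, 0.07 cm^2 electrode,
# D = 6.7e-6 cm^2/s, 1 mM analyte (= 1e-6 mol/cm^3), 50 mV/s scan rate.
i_p = randles_sevcik(1, 0.07, 6.7e-6, 1e-6, 0.05)
i_t = cottrell(1, 0.07, 6.7e-6, 1e-6, 5.0)  # current 5 s after the step
print(f"peak current ~{i_p*1e6:.1f} uA, step current after 5 s ~{i_t*1e6:.2f} uA")
```

Doubling C doubles both currents; the Cottrell current additionally decays as t^(-1/2), which is the sharp initial drop described above before the quasi-stationary value is reached.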
The wide popularity of oxidases as sensing elements in biosensors is primarily due to the relative ease of electrochemical detection of hydrogen peroxide at metallic electrodes. Electrochemical detection of the dehydrogenase coenzyme NAD+/NADH is more complicated (see paragraph 1.5.2), but dehydrogenases are also widely used in biosensor development.

Methods of biomolecule immobilization on the electrode surface

Biomolecules such as enzymes may quickly lose their activity in aqueous solution because of gradual oxidation or destruction of the quaternary structure at the liquid/air interface [START_REF] Nikolas | Enzyme stabilization strategies based on electrolytes and polyelectrolytes for biosensor applications[END_REF]. Given the relatively high cost of pure enzyme preparations, their cost-effective use requires reusable enzyme-based biosensors. For this reason, the use of enzymes in solution is an exception, and research efforts are aimed at finding new ways of enzyme immobilization, i.e. attachment to a surface. When choosing an immobilization method, attention should be paid to the retention of enzyme activity and of the conformation of the active center. In addition, the enzyme should be in a biocompatible environment, protected from microbial attack and pollutants, while substrate molecules should be able to diffuse to it freely from the external solution [START_REF] Iqbal | Bioencapsulation within synthetic polymers (Part 1): sol-gel encapsulated biologicals[END_REF][START_REF] Sassolas | Immobilization strategies to develop enzymatic biosensors[END_REF]. Biosensor characteristics depend strongly on the method of enzyme immobilization. The purpose of immobilization is to ensure close contact between the enzyme and the transducer while maintaining (and sometimes even improving) the stability of the enzyme.
The physical and chemical methods of immobilization are as follows [START_REF] Wilhelm | Immobilized enzymes: methods and applications[END_REF][START_REF] Krajewska | Application of chitin-and chitosan-based materials for enzyme immobilizations: a review / B. Krajewska // Enzyme and Microbial Technology[END_REF]:

1) Adsorption. A simple and cheap method, but often reversible: the enzyme gradually desorbs from the surface during measurements, leading to poor biosensor stability.

2) Micro-encapsulation using solid or liquid membranes. Often used in early biosensors, this method places the biomolecules inside a semipermeable membrane in close contact with the transducer. Its disadvantages are the complexity and high cost of such membranes and, most importantly, hindered diffusion of reactants through the membrane.

3) Encapsulation. The enzyme is mixed with a polymer solution and polymerization is then initiated, producing a gel containing the encapsulated enzyme. The absence of chemical bonding and the biocompatible environment keep the enzyme more active. Unfortunately, the enzyme can leach from the gel over time, resulting in signal loss.

4) Cross-linking. The enzyme is bound using so-called bifunctional reagents such as glutaraldehyde, which forms Schiff bases with the amino groups of the protein. This mild binding method hardly changes the steric configuration of the enzyme, but such electrodes have poor mechanical properties and diffusion of the substrate through the material is rather slow.

5) Covalent binding. The enzyme is chemically bound to the carrier through a variety of functional groups. This method provides the best stability of the immobilized enzyme; however, it is difficult and tedious, with poor reproducibility of results.
In addition, chemical binding can disrupt the steric configuration of the enzyme molecule, leading to degradation of its activity and denaturation. Summing up, the choice of immobilization method depends on each particular case [START_REF] Linqiu | Immobilised enzymes: science or art? / L. Cao[END_REF]. It can be concluded, however, that very strong or very weak binding of the enzyme does not give good results, because of a lack of electrode activity or stability, respectively. In searching for the optimal immobilization method, preference should be given to the "golden mean": encapsulation and cross-linking, which can provide firm fixation of the enzyme together with preservation of its activity [START_REF] Sassolas | Immobilization strategies to develop enzymatic biosensors[END_REF]. One of these soft methods is the encapsulation of the enzyme in a film of silicon oxide (SiO2). This material is biocompatible and highly porous, and has several other important advantages that make the development of SiO2-based biosensors promising [START_REF] Gupta | Entrapment of biomolecules in sol-gel matrix for applications in biosensors: problems and future prospects[END_REF].

Amperometric SiO2-based biosensors

Hybrid silica materials obtained by sol-gel technology are widely used in electroanalytical chemistry [START_REF]intersection of functionalized sol-gel materials with electrochemistry[END_REF]. Interest in these materials stems from their ease of synthesis and their unique properties, including a variety of chemical compositions and structures (monoliths or thin films). Although solid inorganic substances, they combine a high specific surface area (200-1500 m2/g) with a three-dimensional structure consisting of a large number of open, interconnected pores, which provides a high diffusion rate of analytes inside the material.
Together with the abundance of accessible active sites, this is a key factor in the development of highly sensitive electrochemical sensors [START_REF] Walcarius | Electroanalysis with Pure, Chemically Modified and Sol-Gel-Derived Silica-Based Materials / A. Walcarius // Electroanalysis[END_REF]. Another advantage of silica-based materials is the ease of modification with various mediators, which can alter their characteristics, increasing the selectivity of analysis or providing electrocatalytic properties [START_REF] Walcarius | Electroanalysis with Pure, Chemically Modified and Sol-Gel-Derived Silica-Based Materials / A. Walcarius // Electroanalysis[END_REF][START_REF] Walcarius | Electrochemical Applications of Silica-Based Organic-Inorganic Hybrid Materials / A[END_REF]. Recently it was shown that these materials can also encapsulate biomolecules while preserving their activity [START_REF]Biochemically active sol-gel glasses: the trapping of enzymes[END_REF][START_REF] Dave | Sol-gel encapsulation methods for biosensors[END_REF][START_REF]Organically modified sol-gel sensors / O[END_REF]. Silica materials possess several key characteristics that make them promising for bioencapsulation. The simple low-temperature synthesis avoids protein denaturation, and the formation of polymer chains around the enzyme molecule does not disturb its steric configuration [START_REF] Gupta | Entrapment of biomolecules in sol-gel matrix for applications in biosensors: problems and future prospects[END_REF]. These materials may contain a large amount of water in their structure, which improves the long-term stability of immobilized biorecognition elements [START_REF]Enzymes and Other Proteins Entrapped in Sol-Gel Materials / D[END_REF]. In addition, silica materials have excellent biocompatibility and the ability to protect against microbial attack [START_REF] Vivek | Immobilization of Biomolecules in Sol-Gels: Biological and Analytical Applications / V. Kandimalla[END_REF]. However, SiO2-based materials have several shortcomings that need to be overcome. The first is gradual leaching of modifier molecules from the film and destruction of the film; this problem can be solved by introducing structuring and stabilizing agents (e.g., surfactants and polyelectrolytes) [START_REF] Nadzhafova | Heme proteins sequestered in silica sol-gels using surfactants feature direct electron transfer and peroxidase activity[END_REF][START_REF]Surfactant-Induced Modification of Dopants Reactivity in Sol-Gel Matrixes / C. Rottman[END_REF][START_REF]silica-nanocomposite films electrogenerated on pyrolitic graphite electrode[END_REF]. The second is the need for a uniform distribution of enzymes in the film (without the formation of conglomerates), which can be achieved by selecting an appropriate method of obtaining the SiO2 film.

Biomolecule encapsulation into a silica matrix on the electrode surface

In general, the sol-gel method involves hydrolysis of the precursor (an alkoxide) in acidic or alkaline medium, with subsequent condensation and polycondensation of monomers leading to the formation of a porous gel [START_REF] Brinker | Sol-Gel Science: the physics and chemistry of sol-gel processing[END_REF]. The properties of the formed gel, such as porosity, surface area, polarity and hardness, largely depend on the rates of the hydrolysis and condensation reactions (Fig. 1.2), as well as on the choice of precursor, the molar ratio, the choice of solvent, the temperature, and the drying and aging processes [START_REF] Joseph | Sol-gel materials for electrochemical biosensors[END_REF]. Moreover, aging may continue long after gel formation, forming additional bonds inside the sol-gel matrix.
During aging, solvent can be removed from the pores, which changes the polarity and viscosity and reduces the pore diameter [START_REF] Brinker | Sol-Gel Science: the physics and chemistry of sol-gel processing[END_REF][START_REF] Kulwinder | Characterization of the Microenvironments of PRODAN Entrapped in Tetraethyl Orthosilicate Derived Glasses / K. K. Flora[END_REF]. For analytical purposes, sol-gel materials can be obtained either as monoliths or as thin films [START_REF] Gupta | Entrapment of biomolecules in sol-gel matrix for applications in biosensors: problems and future prospects[END_REF]. Monoliths can range from hundreds of micrometers to several centimeters in thickness and can effectively immobilize a large number of biomolecules, which are retained inside because of their size and molecular weight. However, the main drawback of monoliths is a very long response time due to slow diffusion. In addition, they usually find no use in electrochemistry, owing to the lack of conductivity of thick silica layers. One way to solve this problem is to create a composite bioelectrode [START_REF] Joseph | Sol-gel materials for electrochemical biosensors[END_REF] by mixing the sol-gel precursor with the enzyme and a conductive material: carbon paste [START_REF] Pankratov | Sol-gel derived renewable-surface biosensors / I. Pankratov[END_REF], graphite [START_REF] Gun | Sol-gel derived, ferrocenyl-modified silicate-graphite electrode: Wiring of glucose oxidase / J. Gun[END_REF] or metal particles [START_REF] Bai | Gold nanoparticles-mesoporous silica composite used as an enzyme immobilization matrix for amperometric glucose biosensor construction[END_REF]. At the same time, thin sol-gel films, less than one micrometer thick, offer significantly faster diffusion of analyte to the biorecognition centers, guaranteeing a fast response.
They are therefore considered more promising for application in electrochemical sensors [START_REF]Organically modified sol-gel sensors / O[END_REF]. There are several ways of immobilizing an enzyme within a sol-gel film on the electrode surface. Covalent binding of the enzyme to the SiO2 matrix (Fig. 1.3) by the carbodiimide coupling reaction was used for glucose oxidase [START_REF]Covalent immobilization of an enzyme (glucose oxidase) onto a carbon sol-gel silicate composite surface as a biosensing platform / X[END_REF] and lactate dehydrogenase [START_REF] Cheng-Li | Amperometric L-lactate sensor based on sol-gel processing of an enzyme-linked silicon alkoxide[END_REF], but this method has not gained widespread use, because enzyme activity is lost upon binding to the rigid matrix. The so-called "sandwich" configuration (Fig. 1.3) implies placing the enzyme between two layers of sol-gel film. It was first used for glucose oxidase [START_REF]Glucose Biosensor Based on a Sol-Gel-Derived Platform / U. Narang[END_REF] and showed higher activity and faster response than conventional methods; it was later also used for lactate dehydrogenase [START_REF] Ramanathan | Immobilization and Characterization of Lactate Dehydrogenase on TEOS Derived Sol-Gel Films[END_REF][START_REF]Immobilization of lactate dehydrogenase on tetraethylorthosilicate-derived sol-gel films for application to lactate biosensor[END_REF]. Unfortunately, this configuration results in an uneven distribution of the enzyme throughout the modifying film, reducing the reproducibility of the analytical response. A double-layer SiO2 film (Fig.
1.3) was used for the immobilization of lactate oxidase [START_REF]Sol-gel based amperometric biosensor incorporating an osmium redox polymer as mediator for detection of L-lactate[END_REF] and peroxidase [START_REF] Stephen | Development of a sol-gel based amperometric biosensor for the determination of phenolics[END_REF][50], together with an osmium redox mediator. But in this configuration the enzyme and the mediator can contact each other only at the boundary between the two layers, making such biosensors significantly less efficient. Encapsulation of the enzyme throughout a thin SiO2 film (with or without a mediator) is one of the most popular methods, as it achieves a uniform distribution of modifier molecules in the film and close contact with the electrode. The key factors in thin-film formation are the homogeneity and thickness of the film, its adhesion to the electrode, its resistance to cracking, and minimization of enzyme leaching. The film thickness is a major parameter of the modified electrodes: increasing it slows diffusion of the analyte to the active centers inside the film, reducing the response [START_REF] Vivek | Immobilization of Biomolecules in Sol-Gels: Biological and Analytical Applications / V. Kandimalla[END_REF]. However, the amount of immobilized enzyme in very thin films is low, which also leads to a drop in signal.

Methods of obtaining thin sol-gel films on the electrode surface

The main methods for obtaining thin sol-gel coatings on electrode surfaces are dip-coating and spin-coating [START_REF]Review of sol-gel thin film formation[END_REF]; less common are drop-coating and spray-coating (Table 1.1).
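The trade-off between monoliths and thin films discussed above follows from the characteristic diffusion time, τ ≈ L²/D. A rough sketch, assuming a solution-like diffusion coefficient (in a real gel D is lower, so these are optimistic estimates; the ratio between the two geometries is the point):

```python
def diffusion_time(L_cm, D_cm2_s):
    """Characteristic time (s) for an analyte to cross a layer of thickness L."""
    return L_cm ** 2 / D_cm2_s

D = 6e-6  # cm^2/s, assumed small-molecule diffusion coefficient in solution

t_film = diffusion_time(100e-7, D)   # 100 nm thin film (1e-5 cm)
t_monolith = diffusion_time(0.1, D)  # 1 mm monolith (0.1 cm)

print(f"100 nm film: ~{t_film*1e6:.0f} us; 1 mm monolith: ~{t_monolith/60:.0f} min")
```

Because τ scales with L², the four-orders-of-magnitude difference in thickness translates into an eight-orders-of-magnitude difference in diffusion time, which is why sub-micrometer films give fast responses while monoliths do not.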
Despite the variety of methods for obtaining thin sol-gel films, several difficulties must be overcome on the road to their successful application in biosensors [START_REF] Gupta | Entrapment of biomolecules in sol-gel matrix for applications in biosensors: problems and future prospects[END_REF]. First, most of the techniques do not allow reproducible modification of surfaces with complex morphology, such as fibers. Second, to achieve a measurable signal level, thin films require a large biomolecule content, which can create problems for enzymes that are insoluble or precipitate in the sol. Third, homogeneous films often require significant amounts of alcohol as a viscosity modifier, which can denature the immobilized biomolecules. Finally, unlike in monoliths, where drying and aging are slow, in thin films these processes occur simultaneously and very fast, potentially leading to film cracking and dehydration of the immobilized biomolecules. Thus, researchers face the task of developing new, simple and effective ways of modifying electrodes with SiO2-enzyme bio-composite films that would solve the above problems. One promising method for this purpose is electrochemically-assisted deposition.

Electrochemically-assisted deposition method

Electrochemically-assisted sol-gel deposition (EAD) is a relatively new way of obtaining thin coatings, first described in 1999 [START_REF] Shacham | Electrodeposition of Methylated Sol-Gel Films on Conducting Surfaces[END_REF]. At present this method can be applied only to conductive surfaces, but it solves the basic problem of traditional sol-gel processing: the impossibility of modifying surfaces with complex morphology and small size [START_REF] Shacham | Pattern recognition in oxides thin-film electrodeposition: Printed circuits[END_REF].
The method consists in immersing the electrode in a solution of pre-hydrolyzed sol-gel precursor and applying a sufficiently negative potential (EAD at positive potential is also possible [START_REF] Collinson | Electrodeposition of Porous Silicate Films from Ludox Colloidal Silica[END_REF]) for some time (Fig. 1.4). Upon application of a negative potential, water (and oxygen) reduction reactions occur at the electrode according to the scheme [START_REF]Electrochemically deposited sol-gel-derived silicate films as a viable alternative in thin-film design[END_REF]:

2H2O + 2e- → 2OH- + H2
O2 + 2H2O + 4e- → 4OH-
O2 + 2H2O + 2e- → H2O2 + 2OH-    (1.1)

The hydroxide ions generated by these reactions raise the pH in the near-electrode region, significantly accelerating the polycondensation of the SiO2 precursor (Fig. 1.2b, c) and the formation of a SiO2 film on the electrode surface. Since the potential is typically applied for less than a few minutes, or even seconds, the overall pH of the bulk solution does not change significantly. The thickness of the formed film can easily be tuned by controlling the value [START_REF] Shacham | Electrodeposition of Methylated Sol-Gel Films on Conducting Surfaces[END_REF] and duration [START_REF]the functionalization of macroporous electrodes / F. Qu[END_REF] of the applied negative potential, which determine the amount of catalyst (OH- ions) formed and thus the rate of precursor polycondensation. In particular, the EAD method has been used to obtain ultra-thin SiO2 films with ordered, vertically oriented pores [START_REF]Electrochemically assisted self-assembly of mesoporous silica thin films[END_REF].
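The link between deposition current, deposition time and the amount of catalyst generated follows from Faraday's law: in the first reaction of scheme (1.1), each electron passed yields one OH⁻ ion. A back-of-the-envelope sketch; the current, time, area and layer thickness are assumed illustrative values, and diffusion and consumption of OH⁻ by the precursor (which would lower the real concentration) are deliberately ignored:

```python
F = 96485.0  # Faraday constant, C/mol

def oh_generated_mol(current_A, time_s):
    """Moles of OH- produced, assuming 2H2O + 2e- -> H2 + 2OH-
    (one OH- per electron) and 100% faradaic efficiency."""
    return current_A * time_s / F

# Assumed: 1 mA cathodic current on a 0.07 cm^2 electrode for 30 s
n_oh = oh_generated_mol(1e-3, 30)

# If all of it stayed within a ~10 um layer above the electrode:
layer_volume_L = 0.07 * 10e-4 / 1000  # area (cm^2) * thickness (cm) -> L
local_conc = n_oh / layer_volume_L

print(f"{n_oh:.2e} mol OH- generated; nominal local concentration ~{local_conc:.1f} M")
```

Even this crude estimate shows why the pH jump is confined to the electrode surface: sub-micromole quantities of OH⁻ are negligible for the bulk solution, yet correspond to a large concentration within the thin near-electrode layer where gelation is triggered.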
Since gelation and drying during EAD occur independently of each other, coatings obtained by this method are much more porous than those obtained by classical sol-gel deposition methods [START_REF]intersection of functionalized sol-gel materials with electrochemistry[END_REF][START_REF] Collinson | Electrodeposition of Porous Silicate Films from Ludox Colloidal Silica[END_REF]. The higher porosity facilitates diffusion of reactants through the film, increasing the sensitivity and shortening the response time, which is especially important for electrochemical sensors. Adding mediators such as ferrocenedimethanol and ruthenium bipyridyl to the sol during EAD yields films containing these species in encapsulated form [START_REF]Electrochemically deposited sol-gel-derived silicate films as a viable alternative in thin-film design[END_REF]. The EAD method is also suitable for the encapsulation of biomolecules, in particular owing to the lower amount of alcohol [START_REF] Collinson | Electrodeposition of Porous Silicate Films from Ludox Colloidal Silica[END_REF][START_REF]Electrochemically deposited sol-gel-derived silicate films as a viable alternative in thin-film design[END_REF] formed during deposition, which could otherwise denature proteins. It has been used for the immobilization of biomolecules such as glucose oxidase [START_REF]A glucose biosensor based on chitosan-glucose oxidase-gold nanoparticles biocomposite formed by one-step electrodeposition[END_REF][START_REF] Jia | One-Step Immobilization of Glucose Oxidase in a Silica Matrix on a Pt Electrode by an Electrochemically Induced Sol-Gel Process[END_REF][START_REF] Oksana | Direct electrochemistry of hemoglobin and glucose oxidase in electrodeposited sol-gel silica thin films on glassy carbon / O. Nadzhafova[END_REF], hemoglobin [START_REF]silica-nanocomposite films electrogenerated on pyrolitic graphite electrode[END_REF][START_REF] Oksana | Direct electrochemistry of hemoglobin and glucose oxidase in electrodeposited sol-gel silica thin films on glassy carbon / O. Nadzhafova[END_REF] and peroxidase [START_REF] Yang | Simple approach for efficient encapsulation of enzyme in silica matrix with retained bioactivity[END_REF]. These biomolecules are among the most stable and can withstand relatively harsh chemical conditions, so they are widely used in the development of new types of biosensors [START_REF] Wilson | Glucose oxidase: an ideal enzyme / R. Wilson[END_REF]. However, the application of the EAD method to the immobilization of other types of enzymes remains an unresolved and poorly researched issue. In addition, owing to the fine control it offers over film thickness, this method is of interest for the modification of nanostructured electrodes [START_REF] Shacham | Pattern recognition in oxides thin-film electrodeposition: Printed circuits[END_REF][START_REF]the functionalization of macroporous electrodes / F. Qu[END_REF][START_REF] Mandakini | Quantitative Control over Electrodeposition of Silica Films onto Single-Walled Carbon Nanotube Surfaces[END_REF][START_REF] Stefanie | Local electrocatalytic induction of sol-gel deposition at Pt nanoparticles[END_REF].

Application of nanomaterials in biosensor development

Nanomaterials, which have a size of less than 100 nm in at least one dimension, are widely used in modern analytical chemistry, particularly in the field of chemical sensors. Their uniqueness stems from the significant differences between the properties of nanosized particles and those of macroscale objects made of the same material [START_REF] Pratibha | Prospects of Nanomaterials in Biosensors[END_REF].
This makes it possible to tune the properties of nanomaterials by varying their size [START_REF] Tewodros | Recent advances in nanostructured chemosensors and biosensors[END_REF]. They have exceptional thermal and electrical properties, high activity and surface area, and thus can be used to improve the sensitivity and response time of (electrochemical) sensors [START_REF] Joseph | Nanomaterial-based electrochemical biosensors[END_REF]. Nanosized materials have been used to achieve direct electron transfer between the electrode and the enzyme, to accelerate electrochemical reactions, to amplify the signal of the biorecognition element, etc. [68].

1.3.1. General properties of nanomaterials: nanoparticles, nanotubes, nanofibers.

Nanomaterials that have found application in the development of sensors can be classified by their dimensionality [START_REF] Valentini | Nanomaterials and Analytical Chemistry[END_REF][START_REF]Functional One-Dimensional Nanomaterials: Applications in Nanoscale Biosensors[END_REF]:
- zero-dimensional (0D) nanomaterials: particles whose size in all three dimensions is less than 100 nm, i.e. nanoparticles and quantum dots;
- one-dimensional (1D) nanomaterials: particles larger than 100 nm in only one dimension, the other two not exceeding 100 nm; these include various nanofibers, nanotubes and nanowires;
- two-dimensional (2D) nanomaterials: materials larger than 100 nm in two dimensions, represented by various nanosheets and nanofilms with a thickness of less than 100 nm. The rapid development of sensors based on this type of material in recent years is associated with the discovery of graphene [71,72].

Although nanomaterials may be made of any material, mainly metal or carbon nanomaterials and conductive polymers are used in electrochemistry because of their high electrical conductivity.
Gold and platinum nanoparticles are the most widely used types of nanomaterials for the development of amperometric biosensors [START_REF] Eugenii | Electroanalytical and Bioelectroanalytical Systems Based on Metal and Semiconductor Nanoparticles[END_REF]. Several layers of nanoparticles deposited on the electrode form a porous layer with a large surface area, which can adsorb and concentrate a large number of substances [START_REF] Fang | Electrochemical sensors based on metal and semiconductor nanoparticles[END_REF]. Such a layer can also be considered as an array of nanoelectrodes with its own advantages [START_REF] Wenlong | Colloid chemical approach to nanoelectrode ensembles with highly controllable active area fraction[END_REF]. Gold nanoparticles are able to provide stable immobilization of enzymes while preserving their activity, and to achieve direct electron transfer between the electrode and the enzyme without the addition of mediators [START_REF] José | Gold nanoparticle-based electrochemical biosensors[END_REF][START_REF] Shaojun | Synthesis and electrochemical applications of gold nanoparticles[END_REF]. Platinum nanoparticles are mainly used in amperometric biosensors based on oxidases, owing to their ability to greatly facilitate the oxidation and reduction of hydrogen peroxide and oxygen [START_REF]Amperometric glucose biosensor based on integration of glucose oxidase with platinum nanoparticles/ordered mesoporous carbon nanocomposite / X[END_REF][START_REF] Hrapovic | Electrochemical biosensing platforms using platinum nanoparticles and carbon nanotubes[END_REF][START_REF] Minhua | Electrocatalysis on platinum nanoparticles: particle size effect on oxygen reduction reaction activity[END_REF].
However, one-dimensional materials are of the greatest interest for the development of electrochemical biosensors [START_REF] Joseph | Nanomaterial-based electrochemical biosensors[END_REF][START_REF]Functional One-Dimensional Nanomaterials: Applications in Nanoscale Biosensors[END_REF]. Owing to their considerable length-to-diameter ratio, they can act as nanowires, increasing the conductivity of the modifying film and connecting the electrode surface with molecules encapsulated in the film [START_REF] Joseph | Nanomaterial-based electrochemical biosensors[END_REF]. Metal nanofibers can be used for the direct detection of biological and chemical substances [START_REF] Fernando | Nanowire-Based Biosensors / F[END_REF], but their use in the design of enzymatic biosensors, apart from a few examples [START_REF]Platinum nanowire nanoelectrode array for the fabrication of biosensors[END_REF][START_REF] Qu | Electrochemical biosensing utilizing synergic action of carbon nanotubes and platinum nanowires prepared by template synthesis[END_REF], has not been sufficiently explored. Among carbon nanomaterials, carbon nanotubes (CNT) have attracted great interest since their discovery [START_REF] Sumio | Helical microtubules of graphitic carbon / S. Iijima[END_REF]. Single-walled CNT consist of a single atomic layer of graphite rolled into a cylinder with a large length-to-diameter ratio. Multi-walled CNT consist of several cylinders of different diameters placed one inside another (Fig. 1.5). CNT have unique electrical, mechanical and structural properties that make them very attractive for use in electrochemical sensors [START_REF]New materials for electrochemical sensing VI: Carbon nanotubes / A[END_REF][START_REF] Lourdes | Role of carbon nanotubes in electroanalytical chemistry: a review[END_REF].
The high sensitivity of CNT conductivity to adsorbed molecules allows their use as nanoscale DNA sensors, while their ability to accelerate the electron transfer of many important biomarkers can significantly improve the characteristics of CNT-based enzyme electrodes [START_REF] Joseph | Carbon-Nanotube Based Electrochemical Biosensors[END_REF]. CNT can also accumulate important biomolecules (e.g., nucleic acids [START_REF] Joseph | Carbon-nanotube-modified glassy carbon electrodes for amplified label-free electrochemical detection of DNA hybridization[END_REF]) and mitigate poisoning of the electrode surface by reaction products [START_REF] Musameh | Low-potential stable NADH detection at carbon-nanotube-modified glassy carbon electrodes[END_REF]. The possibility of detecting H2O2 and NADH at low potential, together with limited surface passivation during the oxidation of NADH, makes CNT an ideal material for amperometric biosensors based on oxidases and dehydrogenases [START_REF] Joseph | Carbon-Nanotube Based Electrochemical Biosensors[END_REF]. Moreover, studies have shown that a vertically-oriented CNT can serve as a direct conductor between the electrode and the active center of an enzyme (which is usually located deep inside the molecule and isolated by protein chains) covalently bound to the end of the nanotube [START_REF]Protein electrochemistry using aligned carbon nanotube arrays[END_REF][START_REF] Fernando | Long-range electrical contacting of redox enzymes by SWCNT connectors[END_REF].

Methods of electrode modification with carbon nanotubes. The electrophoretic deposition method

Successful implementation of CNT in amperometric biosensors requires adequate control of their chemical and physical properties, as well as of their functionalization and immobilization on the surface [START_REF] Joseph | Carbon-Nanotube Based Electrochemical Biosensors[END_REF].
Simply mixing CNT with an enzyme solution in most cases leads to a very low final concentration of nanotubes, and often to enzyme denaturation or aggregation of the CNT. There are two basic approaches to surface modification with CNT: direct synthesis of CNT from metal catalysts located on the electrode surface, or surface modification with pre-synthesized CNT obtained by various techniques. Direct synthesis methods allow a large number of CNT to be firmly fixed on the electrode surface and sometimes even yield vertically-oriented CNT [START_REF]Vertically Aligned Carbon Nanotube Electrodes Directly Grown on a Glassy Carbon Electrode[END_REF]. However, most synthesis methods require high temperatures, pressures and sophisticated equipment, which limits their use in ordinary laboratories. Therefore, most modified electrodes are obtained by deposition of a CNT dispersion followed by drying, or by preparation of composite electrodes based on CNT mixed with carbon paste [START_REF]Carbon nanotube purification: preparation and characterization of carbon nanotube paste electrodes[END_REF][START_REF] María | Enzymatic Biosensors Based on Carbon Nanotubes Paste Electrodes[END_REF], Teflon [START_REF] Joseph | Carbon nanotube/teflon composite electrochemical sensors and biosensors[END_REF], polymers [START_REF]Carbon nanotube-polymer composites: Chemistry, processing, mechanical and electrical properties[END_REF] or ceramics [START_REF] Biuck | Simultaneous determination of acetaminophen and dopamine using SWCNT modified carbon-ceramic electrode by differential pulse voltammetry[END_REF]. Because as-synthesized CNT usually contain many impurities of other allotropic modifications of carbon, purification by treatment with oxidizing acids is often necessary before use [98]. Besides eliminating impurities, this treatment also creates carboxyl functional groups at defect sites in the CNT structure.
The presence of these groups enables covalent immobilization of biorecognition molecules or integration of CNT into a polymer structure [99]. A limitation on the broad application of CNT in biosensor development is their negligible solubility in most solvents (including complete insolubility in inorganic solvents) [100], which makes the preparation of composite electrodes with a high content of CNT quite challenging. Moreover, the difficulty of manipulating CNT is related to their small size and tendency to aggregate, which prevents the formation of homogeneous and reproducible coatings on the electrode surface [101]. Therefore, additives such as surfactants, Nafion, chitosan and DNA are often used to improve the dispersion of CNT in solvents [102]. Two conditions are desirable for the fabrication of enzyme electrodes based on CNT: a) a sufficiently high content of CNT in the final modifying film, in order to take full advantage of their properties, and b) biocompatibility and mild conditions of bio-composite electrode fabrication, to ensure retention of the enzymatic activity (hence the undesirability of organic solvents and large quantities of surfactants). Given this, the simultaneous deposition and concentration of CNT from aqueous solution would be optimal. To date, CNT coatings obtained by this method have mainly been used for the creation of new composite materials [108], field emission devices [109,110], supercapacitors [111,112], fuel cells [113][114][115] and biomedical applications [116][117][118]. However, the EPD method, owing to inherent advantages such as the possibility of depositing CNT from dilute aqueous dispersions, can also be used for the fabrication of a CNT matrix for amperometric biosensors.
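The attainable deposit in EPD can be estimated with the Hamaker relation, according to which the deposited mass is proportional to the electrophoretic mobility of the particles, the field strength, the dispersion concentration, the electrode area and the deposition time. A rough sketch (the mobility and sticking-efficiency values below are assumed, order-of-magnitude figures, not data from this work):

```python
def epd_mass(mobility, field, conc, area, time, efficiency=1.0):
    """Hamaker estimate of the electrophoretically deposited mass (g).
    mobility: electrophoretic mobility, cm^2/(V*s); field: V/cm;
    conc: dispersion concentration, g/cm^3; area: cm^2; time: s;
    efficiency: fraction of arriving particles that stick (<= 1)."""
    return efficiency * mobility * field * conc * area * time

field = 60.0 / 0.6  # 60 V across a 6 mm gap gives 100 V/cm
# assumed mobility 1e-4 cm^2/(V*s); 0.1 mg/mL dispersion = 1e-4 g/cm^3,
# 1 cm^2 electrode, 60 s deposition
mass = epd_mass(mobility=1e-4, field=field, conc=1e-4, area=1.0, time=60.0)
print(f"estimated deposit: {mass * 1e6:.0f} ug")
```

The linear dependence on time and concentration is what makes EPD convenient for dosing the amount of CNT transferred from a dilute aqueous dispersion onto the electrode.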
Currently there are only a few examples in the literature of the application of electrophoretically-deposited CNT (as part of a CNT-polyaniline composite) for the construction of biosensors based on cholesterol oxidase [119] and glycerol dehydrogenase [120], and the influence of the characteristics of the CNT layer is not discussed in those works.

Amperometric biosensors based on SiO2 and oxidases

Overview of oxidases used in amperometric biosensors

The oxidases subclass contains more than 100 enzymes that catalyze the specific oxidation of many important biologically-active organic compounds by molecular oxygen [START_REF] Joseph | Carbon-Nanotube Based Electrochemical Biosensors[END_REF]. In most cases, the enzymatic reactions catalyzed by oxidases follow the scheme:

Substrate + O2 →(oxidase) Product + H2O2     (1.2)

The enzyme glucose oxidase (GOx) is the "pioneer" of amperometric biosensors; the rapid development of this field began with the report of the first GOx-based biosensor [121]. The reason for its popularity lies in the spread of diabetes, which affects people around the globe. For diabetic patients, regular screening of the glucose concentration in blood is a daily routine task, and amperometric biosensors may be the solution to this problem [122]. However, the popularity of GOx is also linked to its unique properties: high specificity, activity and stability [START_REF] Wilson | Glucose oxidase: an ideal enzyme / R. Wilson[END_REF]. That is why it is commonly used for testing new methods of enzyme immobilization and biosensor construction. In addition to the above, there are biosensors based on xanthine oxidase, ascorbate oxidase, bilirubin oxidase, lysine oxidase and other enzymes [1]. There is a considerable number of works on the immobilization of the mentioned enzymes into SiO2 films by the sol-gel method for the development of amperometric biosensors.
This particularly concerns cholesterol oxidase [139,141,[155][156][157], lactate oxidase [START_REF]Sol-gel based amperometric biosensor incorporating an osmium redox polymer as mediator for detection of L-lactate[END_REF]126,158,159], tyrosinase [160][161][162][163] and glucose oxidase [START_REF]Glucose Biosensor Based on a Sol-Gel-Derived Platform / U. Narang[END_REF][START_REF] Oksana | Direct electrochemistry of hemoglobin and glucose oxidase in electrodeposited sol-gel silica thin films on glassy carbon / O. Nadzhafova[END_REF][164][165][166][167]. Among other enzymes of this type, choline oxidase has an important substrate, choline, whose determination is needed both in biomedical practice and in food analysis. In addition, this enzyme can be used to analyze important inhibitors such as pesticides. However, we found only a few reports in the literature on the immobilization of ChOx in a SiO2 film [168,169]. Taking into account the above-mentioned properties of SiO2 materials, the development of ChOx- and SiO2-based biosensors is a promising possibility.

Problems of the electrochemical detection of H2O2

As noted above (see section 1.1.1), the indicator reaction of first-generation oxidase-based biosensors may be the electrochemical detection of consumed oxygen through its reduction at the electrode at a potential of -0.7 V. Although the first known biosensors used this principle, the approach has significant drawbacks that led to its abandonment. First, the oxygen content varies significantly with the pH, temperature and composition of the solution; second, hydrogen peroxide is reduced simultaneously at the same potential. Therefore, the electrochemical detection of hydrogen peroxide, which is a product of the enzymatic reactions of almost all oxidases, is considered more promising [170].
The detection can be performed through its oxidation at a potential of about 0.6 V; dissolved oxygen does not interfere with the reaction under these conditions. However, the use of such biosensors in the analysis of real samples faces problems associated with side reactions that may occur at the electrode at this potential. For example, the analysis of biological fluids is complicated because many of their components, such as ascorbic acid, dopamine and bilirubin, can be oxidized at the electrode at 0.6 V [START_REF] Brian | Chemical Sensors and Biosensors[END_REF].

Amperometric biosensors based on SiO2 and dehydrogenases

Overview of dehydrogenases used in amperometric biosensors

The dehydrogenases subclass includes more than 300 enzymes that use the coenzyme NAD+/NADH (or its modification NAD(P)+/NAD(P)H) as an electron acceptor. Owing to the reversibility of NAD+/NADH, dehydrogenases can in most cases also catalyze the reverse reaction [187]. In general, the enzymatic reaction of dehydrogenases can be represented as:

Substrate + NAD+ ↔(dehydrogenase) Product + NADH     (1.3)

The number and diversity of dehydrogenases provide a wide choice for the construction of various amperometric biosensors. The absence of oxygen from the enzymatic reaction scheme eliminates most of the disadvantages associated with it (difficulty of detection, interfering influence, etc.) [188]. The reversibility of the enzymatic reactions broadens the analytical applications, making it possible to determine the reaction product instead of the substrate. Nevertheless, biosensors based on dehydrogenases are much less common than those based on oxidases. This is associated with the difficulty of the electrochemical detection of NAD+/NADH (see paragraph 1.5.
2). The most widespread dehydrogenases used in the development of biosensors (alcohol dehydrogenase, glucose dehydrogenase, lactate dehydrogenase, glutamate dehydrogenase) more or less duplicate the functions and applications of the corresponding enzymes of the oxidases subclass (see paragraph 1.4.1). However, alcohol dehydrogenase is used more often than its oxidase analog owing to its higher activity and stability in sensors and its much higher specificity towards ethanol [145]. Lactate dehydrogenase is also characterized by higher selectivity than lactate oxidase [189]. (Form)aldehyde dehydrogenase is an enzyme that catalyzes the oxidation of formaldehyde to formic acid. Besides the coenzyme NAD+, the reaction also requires the coenzyme glutathione. Biosensors based on this enzyme can be used in the food, pharmaceutical and cosmetic industries to determine formaldehyde, which has allergenic, mutagenic and toxic effects [190][191][192][193]. The disadvantages of this enzyme are its low activity, high cost and difficulty of application owing to the need for two coenzymes [193]. Glycerol dehydrogenase provides the oxidation of glycerol by the coenzyme NAD+. Biosensors based on this enzyme may be useful for wine quality control during fermentation [194,195] as well as in clinical blood analysis [196]. At the same time, a lack of selectivity of this enzyme and the reversibility of the enzymatic reaction have been reported [197]. Sorbitol dehydrogenase (DSDH) is an enzyme that catalyzes the conversion of the polyhydric alcohol sorbitol to fructose. Measurement of the sorbitol content is important in the analysis of diabetic food and in clinical analysis to prevent the development of diabetes. To date there are only a few reports on the development of biosensors based on this enzyme [198][199][200][201]; among them, only in [198] was it used for the analysis of real samples.
In addition, immobilized DSDH can be used in the design of bioreactors for electro-enzymatic synthesis [202,203]. Malate dehydrogenase catalyzes the oxidation of malic acid and its salts. Biosensors based on this enzyme can be used to determine malic acid in foods such as fruits, juices and wines, where it affects the organoleptic properties [187,204,205]. 3-Hydroxybutyrate dehydrogenase can be used in biosensors for clinical blood analysis, where the determination of 3-hydroxybutyrate is important to avoid life-threatening ketoacidosis in diabetic patients [206][207][208]. As mentioned above, data on the immobilization of dehydrogenases in SiO2 films by the sol-gel method and on the development of amperometric biosensors based on such modified electrodes are largely missing from the literature. Among all the above enzymes, such a procedure has been described only for lactate dehydrogenase [209][210][211] and malate dehydrogenase [212]. Given the small number of publications and the importance of the substrate, the development of a biosensor based on sorbitol dehydrogenase immobilized in a SiO2 film is of particular interest.

Problems of the electrochemical detection of NAD+/NADH

The coenzyme NAD+ serves as an electron acceptor in most enzymatic reactions catalyzed by dehydrogenases. Of particular interest is the well-developed aromatic system of phenothiazine dyes, which determines their ability to adsorb easily and firmly on carbon surfaces, including carbon nanotubes [233,234]. Such a combination of a nanostructured electrode and a mediator leads to a synergistic effect: a decrease of the detection potential and an increase in the sensitivity and stability of NADH detection [235,236]. This allows the approach to be applied in the development of dehydrogenase-based biosensors [189,[237][238][239].

Conclusions from the literature survey

Analysis of the literature has shown that electrochemical biosensors are a promising branch of sensor development.
An approach that deserves attention for the immobilization of biomolecules on the electrode surface is their encapsulation in a thin silica film, owing to the large porosity and biocompatibility of the latter. However, there are unresolved issues concerning the use of such bio-composite films in biosensor development, e.g., achieving firm fixation of the enzyme in the film while maintaining sufficient activity. In addition, fabrication of SiO2 films by traditional methods does not always give reproducible results. The method of electrochemically-assisted deposition is an alternative that allows reproducible, porous films with controlled thickness to be obtained. As exemplified by glucose oxidase and hemoglobin, this method has also been applied successfully to bioencapsulation, but information on its application to other types of enzymes, particularly dehydrogenases, is absent from the literature. The use of nanomaterials can significantly improve the analytical characteristics of biosensors based on oxidases and dehydrogenases, including their sensitivity and selectivity. Platinum nanoparticles are promising for choline oxidase immobilization, since they can increase the sensitivity of the detection of hydrogen peroxide. Carbon nanotubes are suitable for the stable, low-potential detection of the coenzyme NADH, so they can be used for the immobilization of sorbitol dehydrogenase. However, there is little information in the literature on the selection of nanomaterials and the methods of their immobilization.

CHAPTER 2. EXPERIMENTAL PART.

Chemicals and reagents

Enzymes

Three enzymes from the oxidoreductases class were used in this work:
- Glucose oxidase (GOx, EC 1.1. Solution with a concentration of 10 mg/mL (activity 100 units/mg). Isoelectric point 4.3.

Enzyme solutions (except DSDH) were prepared by dissolving an appropriate amount (final concentration 10 mg/mL) in 0.067 M PBS (pH 6.0) and stored at 4 °C when not in use.
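The stock preparation above reduces to a simple mass and activity calculation; a minimal helper is sketched below (the 1 mL volume is illustrative; the 10 mg/mL concentration and 100 units/mg specific activity are taken from the description above):

```python
def stock_mass_mg(conc_mg_ml, volume_ml):
    """Mass of lyophilized enzyme needed for a stock solution of the
    given concentration and volume."""
    return conc_mg_ml * volume_ml

def total_activity_units(mass_mg, specific_activity_u_mg):
    """Total enzymatic activity contained in the weighed portion."""
    return mass_mg * specific_activity_u_mg

m = stock_mass_mg(10.0, 1.0)                    # 10 mg/mL stock, 1 mL
print(total_activity_units(m, 100.0), "units")  # 100 units/mg
```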
Reagents for sol-gel synthesis

For the synthesis of the SiO2-based sol, tetraethoxysilane (TEOS, 98%, «Alfa Aesar»), concentrated hydrochloric acid (HCl, 36%, «Prolabo») and deionized water from a «Purelab Option» water purification system were used. Polyelectrolytes and surfactants were added to the sol as additives: poly-

The addition of positively-charged polyelectrolytes and surfactants was intended to improve the interaction of the enzyme with the silanol groups of the sol-gel film. Silanol groups are negatively charged at the pH values used during the sol-gel synthesis (5-7) owing to their deprotonation [240]. Given that the enzyme molecules are also negatively charged at such pH, this results in electrostatic repulsion. Therefore, the use of a positively-charged polyelectrolyte, which plays the role of a stabilizing agent between SiO2 and the protein, leads to a significant improvement of the enzyme encapsulation [241]. In addition, cationic surfactants such as CTAB improve the structure of the sol-gel film and can enhance the response stability of biomolecules encapsulated in the SiO2 film owing to their inclusion in micelles [START_REF] Nadzhafova | Heme proteins sequestered in silica sol-gels using surfactants feature direct electron transfer and peroxidase activity[END_REF][START_REF]silica-nanocomposite films electrogenerated on pyrolitic graphite electrode[END_REF]. For the immobilization of the coenzyme NAD+ in the SiO2 film, 3-glycidoxypropyltrimethoxysilane (GPS, 98%, «Sigma») was used as a bonding agent: it is able to bind the adenine residue of the coenzyme molecule and to react with silanol groups in the condensation reaction [203].

Reagents for voltammetric measurements

All solutions were prepared using deionized water by dissolving accurately weighed amounts and/or by appropriate dilution. Standardization of the solutions was performed by titrimetry.
1,1'-Ferrocenedimethanol (98%, «Aldrich»), hydrogen peroxide (H2O2, 35%, «Acros») and the oxidized and reduced forms of β-nicotinamide adenine dinucleotide (NAD+/NADH, 98%, «Sigma») were used as electrochemical probes. Glucose («Acros»), D-sorbitol (98%, «Sigma») and choline chloride (97%, «Fluka») were used as substrates of the corresponding enzymes. Glucose solutions were left to mutarotate for at least 24 h before use. The voltammetric measurements were conducted using phosphate and Tris-HCl buffer solutions as background electrolytes unless otherwise specified. The acidity and composition of the buffer solutions were chosen taking into consideration the optimal pH of enzymatic activity and the absence of interfering influence of the solution components on the substrate determination. The adsorption of the dye methylene green (MG, >80%, «Sigma») was used for the determination of the electroactive surface area of the electrodes.

Substances studied for interfering influence

To investigate interfering effects on the determination of sorbitol, a number of carbohydrates that can be found in food were chosen, as well as representatives of the homologous series of polyhydric alcohols, including a stereoisomer of sorbitol (mannitol). Components that can be part of cosmetic products (glycerol, sodium lauryl sulfate) and biological fluids (ascorbic acid, urea) were tested as well. All solutions were prepared by dissolving accurately weighed amounts of the substances or by dilution of stock solutions. Iron(III) nitrate, acidified to prevent its hydrolysis, was used for ascorbic acid masking. To investigate interfering effects on the determination of choline, we studied mono- and disaccharides that can be part of food (glucose, sucrose, lactose), substances present in biological fluids (uric and ascorbic acid, urea), ethanol and some metal ions (Pb(II), Zn(II), Cu(II), Fe(III)), which are classical enzyme inhibitors.
All solutions were prepared by dissolving accurately weighed amounts of the substances or by dilution of stock solutions.

Apparatus

- For the purpose of comparison, platinum and gold macroelectrodes were used. For the Pt-Nfb and GCE, a specially designed Teflon cell was used, in which the working electrode is located on the bottom side and its working surface area is limited by a rubber ring (ø = 6 and 9.5 mm). The electrode was connected to the potentiostat using a copper wire and silver glue (see Scheme 2.1 for Pt-Nfb).

Scheme 2.1. Connection of the Pt-Nfb electrode to the electrochemical cell.

For the electrophoretic deposition of CNT, a specially designed device was used, consisting of a DC voltage source and two steel plate electrodes placed strictly parallel to each other at a distance of 6 mm (Fig. 1.6). The plate acting as the anode was shorter, allowing connection of the GCE. The electrode surfaces were studied by scanning electron microscopy (Hitachi FEG S4800 microscope, SCMEM, University of Nancy) and by atomic force microscopy (Asylum Research MFP-3D-Bio microscope). Magnetic stirrers with adjustable speed were used to ensure continuous mixing of the solutions. The acidity of the solutions was checked using Mettler Toledo S220 and Piccolo HI 1290 pH-meters.

Argumentation of the choice of objects and methods

Modification of electrodes by the electrochemically-assisted deposition method

The modification of electrodes with a SiO2 film was performed by the electrochemically-assisted deposition method using an alcohol-free sol composition, which does not inhibit the enzyme activity [START_REF] Oksana | Direct electrochemistry of hemoglobin and glucose oxidase in electrodeposited sol-gel silica thin films on glassy carbon / O. Nadzhafova[END_REF].
- For the preparation of the sol for GOx immobilization, 2.28 mL of TEOS, 2.0 mL of H2O and 2.5 mL of 0.01 M HCl were mixed with a magnetic stirrer for 16 h.
Then, prior to the introduction of the enzyme into the medium, 1.66 mL of 0.1 M NaOH was added to neutralize the sol (to avoid possible enzyme denaturation in acidic medium). The enzyme solution (50 μL of PBS (0.067 M, pH 6.0) and 100 μL of 10 mg/mL GOx solution) was added to 0.5 mL of the hydrolyzed sol and left to stand for 1 h.
- For the preparation of the sol for ChOx immobilization, 0.21 mL of TEOS, 0.15 mL of H2O and 0.26 mL of 0.01 M HCl were mixed with a magnetic stirrer for 12 h. Prior to EAD, 0.4 mg of CTAB, 0.03 mL of 0.067 M PBS (pH 6.0) and 0.01 mL of ChOx solution (10 mg/mL) were added to 0.5 mL of the hydrolyzed sol.
- For the preparation of the sol for DSDH immobilization, 2.28 mL of TEOS, 2.0 mL of H2O and 2.5 mL of 0.01 M HCl were mixed with a magnetic stirrer for 16 h. The final sol was diluted three times with pure water, and a 100 μL aliquot of this solution was then mixed with 100 μL of PDDA (20 wt% in water) and 100 μL of DSDH solution.
- For the preparation of the sol for the co-immobilization of DSDH and NAD+ [203], 2.28 mL of TEOS, 2.0 mL of H2O and 2.5 mL of 0.01 M HCl were mixed with a magnetic stirrer for 16 h. The sol was diluted five times with water, and a 20 μL aliquot was mixed with 10 μL of 20% PEI solution, 10 μL of NAD+-GPS solution (prepared by mixing 25 mg of NAD+ and 37.6 mg of GPS in 400 μL of Tris-HCl buffer solution, pH 7.5, for 14 h at room temperature) and 20 μL of DSDH solution.

The prepared sol was introduced into the electrochemical cell, where a negative potential (typically from -1.1 to -1.3 V) was applied to the working electrode in order to initiate the generation of OH- ions. The potential and/or duration of its application were optimized for each individual case. After electrodeposition, the working electrode was kept in the sol for 5 min, then gently washed with water and dried at room temperature for 1 h before use.
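As a cross-check of the sol compositions listed above, the water-to-TEOS molar ratio can be estimated from the volumes used. A minimal sketch (the densities and molar masses are standard literature values, not data from this work; the dilute HCl aliquot is counted as water):

```python
# Molar ratio of water to TEOS in the hydrolysis sol (2.28 mL TEOS,
# 2.0 mL H2O, 2.5 mL of 0.01 M HCl counted as water).
# Densities (g/mL) and molar masses (g/mol) are literature values.
RHO_TEOS, M_TEOS = 0.933, 208.33
RHO_H2O, M_H2O = 0.998, 18.02

n_teos = 2.28 * RHO_TEOS / M_TEOS        # mol TEOS
n_h2o = (2.0 + 2.5) * RHO_H2O / M_H2O    # mol water (HCl aliquot included)
print(f"H2O : TEOS = {n_h2o / n_teos:.1f} : 1")
# A large excess over the stoichiometric 4 : 1 required for
# complete hydrolysis of the four ethoxy groups of TEOS.
```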
Modification of electrodes with carbon nanotubes by the electrophoretic deposition method

This work used single-walled carbon nanotubes functionalized with carboxylic groups (CNT, > 90%, 4-5 nm × 0.5-1.5 μm, 1-3 at.% COOH groups, «Sigma»). The presence of carboxyl groups improves the dispersibility of CNT in water and gives them the negative charge required for their movement in a constant electric field. For the suspension preparation, the appropriate amount of CNT was weighed on an analytical balance (to give a concentration of 0.1 mg/mL), deionized water was added, and the suspension was treated in an ultrasonic bath for 12 hours. Glassy carbon plates were polished before the modification with wet emery paper 4000 and Al2O3 powder (0.05 μm, «Buehler»). The same procedure was used to renew the electrode surface after successful modification and measurements. The two parallel electrodes were introduced into a 4-mL aliquot of the dispersion and a constant potential difference of 60 V (corresponding to an electric field of 100 V/cm between the electrodes) was applied for the required time, usually from 5 to 120 seconds. The applied voltage of 60 V was used throughout because it constitutes a good compromise between a high speed of deposition and limited decomposition of ultra-pure water (which would generate oxygen bubbles that may affect CNT assembly). The dipping depth of the electrodes was kept constant in order to ensure the same area (1 cm²) of their contact with the dispersion and the reproducibility of deposition. After deposition, the glassy carbon plate was carefully removed from the remaining dispersion, gently washed in water, first dried horizontally at room temperature and then put in an oven at 450 °C for 1 h. For the preparation of macroporous CNT layers, a mixture of CNT (0.1 mg/mL) and polystyrene beads (0.05 mg/mL to 0.5 mg/mL) was used.
It was obtained by adding certain aliquots of a concentrated (5%) suspension of polystyrene beads (PS beads, 500 nm), synthesized according to [243], to the CNT suspension. Polystyrene was chosen because of the facility of homogeneous bead synthesis and of their removal by heating. This mixture was stirred, brought into the cell and subjected to EPD as described above. After the deposition, the electrode was carefully removed from the suspension and left at room temperature until dry. The template removal was carried out in an oven at 450 °C for 1 h with a 15 °C/min ramp. For the purpose of comparison, GCE was also modified with a CNT layer by the drop-coating method: 10 μL of the same aqueous suspension as for EPD was dropped onto the electrode and left to dry completely. For the electrochemical generation of platinum nanoparticles on the surface of CNT, we used the technique described in [Stefanie et al.]. The electrode was dipped into a solution of 1 mM Pt(NO3)2, which also contained 0.1 M NaNO3, and was exposed to a series of pulses. The sequence of pulses in each series was as follows: 0.035 V for 1 s, -0.7 V for 0.2 s, 0.035 V for 1 s. The electrochemical reduction of platinum and the formation of nanoparticles occur during the application of the negative potential. The positive potential serves as a resting step, at which no electrochemical reaction takes place. The choice of the number of pulse series is justified in section 3.2.1.1.

Voltammetric measurements

To study the properties of the modified electrodes, we used the methods of cyclic and linear voltammetry, amperometry and hydrodynamic voltammetry. All voltammograms were obtained using a saturated silver chloride reference electrode. The sensitivity of the modified electrode to the substrate was calculated as the slope of the current vs. substrate concentration graph [244].
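The sensitivity calculation described above (slope of the current vs. substrate concentration graph) can be sketched numerically. The calibration points below are invented for illustration; in practice they would be the steady-state currents measured at known substrate concentrations.

```python
# Hypothetical calibration data: substrate concentrations (mM) and
# steady-state currents (uA). Values are illustrative only.
conc = [0.1, 0.2, 0.5, 1.0, 2.0]          # mM
current = [0.21, 0.40, 1.02, 1.99, 4.05]  # uA

def linfit(x, y):
    """Ordinary least-squares line y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

# Sensitivity = slope of the calibration line, uA/mM
sensitivity, intercept = linfit(conc, current)
print(f"sensitivity = {sensitivity:.2f} uA/mM")
```

A least-squares fit is preferable to a two-point slope because it averages out the scatter of individual calibration points.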
Surface observations

For the investigation of the surface of modified electrodes by scanning electron microscopy (SEM), the modified electrode was attached to the sample holder with a special conductive tape. Since the conductivity of the electrodes was sufficient, metal sputter-coating was not used. The energy of the scanning electron beam ranged from 1 to 15 kV and the magnification from 1,000 to 400,000 times. If necessary, the electrode was placed at a certain angle to the electron beam. The investigation of the surface by atomic force microscopy (AFM) was conducted in a temperature-controlled room using a V-shaped silicon nitride probe (MLCT-EXMT-BF, «Veeco Instruments», USA) with a radius of curvature of 50 nm and an elasticity constant of 0.1 N/m. Scanning was carried out in contact mode, the speed ranged from 0.5 to 2 Hz, and the data were captured from the laser optical detector. For profile and film thickness measurements, a thin scratch was made with a needle. AFM images were captured at the edge of the scratch, allowing the film thickness to be estimated from the height difference. Processing of the AFM images was carried out using the software WSxM 5.0 («Nanotec Electronica SL», Spain) [245]. Scanning electrochemical microscopy (SECM), with a special device developed on the basis of a commercial Sensolytics (Germany) instrument, was used to study the conductivity of the bio-composite films. The measurements were performed using a platinum microelectrode (ø = 25 μm) in a solution containing 0.1 M KCl and 1 mM ferrocenedimethanol. Profilometry with SECM was also applied to study the morphology of CNT layers thicker than 500 nm; in this case a glass needle was used, the distance from which to the surface was measured with piezoelectric sensors [246]. For the electrochemical studies of the electroactive surface area, the dye methylene green was adsorbed on the electrode surface by immersing the electrode in a 1 mM solution of the dye and stirring on a magnetic stirrer for 12 hours.
Then the electrode was washed thoroughly with water and dried at room temperature.

Calculations based on electrochemical measurements

Calculation of the apparent Michaelis constant

The apparent Michaelis constant was calculated in order to determine the degree of affinity of the corresponding enzyme for its substrates, as well as to determine the upper limit of the linear range of the modified electrodes. The assumption was made that the enzymatic reaction is single-substrate, or that the concentration of the second substrate (in the case of a coenzyme) is saturating, which allows the Michaelis-Menten kinetics to be applied [247]:

V = V_max × S / (K_M + S)   (2.1)

where V is the rate of the enzymatic reaction; V_max is the maximum rate of the enzymatic reaction under the given conditions; S is the equilibrium concentration of the substrate, mM; K_M is the Michaelis constant, mM.

The current flowing through the electrochemical cell at a known concentration of the substrate was taken as the rate of the enzymatic reaction. In this case, the graph of V against S for equation (2.1) has the form of a hyperbola approaching the straight line V = V_max. The processing of such curves is difficult (although possible with modern software), so the equation was linearized by the method of Lineweaver-Burk [248]:

1/V = 1/V_max + K_M / (V_max × S)   (2.2)

In the coordinates 1/V vs. 1/S, the graph of equation (2.2) has the form of a straight line that intersects the abscissa at the point -1/K_M, making it possible to find the Michaelis constant. Further, in all cases the apparent Michaelis constant was found from the equation of the 1/V vs. 1/S graph, calculated by the method of least squares, and from the point of intersection of this line with the abscissa. As an alternative, direct processing of the V-S graph with the «Hill» fit in the software «Origin 8.5» was used.
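The Lineweaver-Burk procedure above can be illustrated with a short numerical sketch. The K_M and V_max values below are assumed in order to generate synthetic Michaelis-Menten data; with real measurements, V would be the steady-state current at each substrate concentration S.

```python
# Synthetic Michaelis-Menten data (Eq. 2.1) with assumed constants.
K_M_true, V_max_true = 0.5, 10.0          # mM, arbitrary rate units
S = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]        # substrate concentrations, mM
V = [V_max_true * s / (K_M_true + s) for s in S]

def linfit(x, y):
    """Ordinary least-squares line y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

# Double-reciprocal (Lineweaver-Burk) coordinates, Eq. 2.2:
# 1/V = 1/V_max + (K_M/V_max) * (1/S)
inv_S = [1.0 / s for s in S]
inv_V = [1.0 / v for v in V]
slope, intercept = linfit(inv_S, inv_V)

V_max_app = 1.0 / intercept       # ordinate intercept gives 1/V_max
K_M_app = slope / intercept       # abscissa intercept of the line is -1/K_M
print(K_M_app, V_max_app)
```

With noise-free data the fit recovers the assumed constants exactly; with experimental data the same code yields the apparent values.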
Calculation of the electroactive surface area using dye adsorption

The adsorption of the dye methylene green (MG) was used for the calculation of the approximate electroactive surface area of the electrodes modified with CNT. It was assumed that the dye molecules form a monolayer on the surface of the nanotubes [234]. The area of the anodic or cathodic peak of the adsorbed dye on the voltammogram of the modified electrode was calculated using the computer program «Origin 8.5» (e.g. Fig. 4.5a). The electroactive surface area was calculated using the formula:

S_el.act = (S_peak × 6.24·10^18 × S_1) / (ν × n)   (2.3)

where S_peak is the area of the MG peak on the voltammogram, A·V; ν is the potential scan rate, V/s; n is the number of electrons transferred in the electrochemical reaction (n = 2 for MG); 6.24·10^18 is the number of electrons per charge of 1 C; S_1 is the area occupied by one adsorbed dye molecule (for MG 0.8 nm², or 8·10^-13 mm² [249]), mm².

Conclusions to the chapter

The chapter lists and argues the choice of reagents and methods for carrying out the necessary experiments and describes the objects of study. The choice of the interfering substances investigated in the development of amperometric methods for the detection of sorbitol and choline was argued. The techniques of sol-gel processing and of the generation of bio-composite SiO2-films on the electrode surface by the electrodeposition method were described. The method of formation of CNT coatings, including macroporous CNT layers, by electrophoretic deposition on the surface of a glassy carbon electrode was defined. The techniques of voltammetric measurements and spectroscopic studies, and the formulas used for calculations, were listed.

CHAPTER 3. ENCAPSULATION OF OXIDASES BY ELECTROCHEMICALLY-ASSISTED DEPOSITION OF SOL-GEL BIOCOMPOSITE

One of the needs of electroanalytical chemistry is the development of new, simple and effective methods of enzyme immobilization on the surface of electrodes.
The EAD method allows biomolecules to be encapsulated in a SiO2-film on the electrode surface by a simple one-step process, which has so far been applied only to certain enzymes. Meanwhile, the immobilization of less active enzymes, such as ChOx, requires ways to increase the sensitivity of the resulting modified electrodes. As mentioned in the literature survey (paragraph 1), one possible approach to achieving this goal is the use of platinum-based nanomaterials, including nanoparticles and nanofibers [«Platinum nanowire nanoelectrode array for the fabrication of biosensors», 253]. Such materials can exhibit electrocatalytic properties towards hydrogen peroxide oxidation due to their small size and large surface area. This chapter covers the results of the application of the EAD method to the immobilization of oxidases in a bio-composite SiO2-film on the electrode surface. The prospects of combining this method with nanomaterials to improve the sensitivity of modified electrodes are demonstrated, and the influence of platinum nanostructured materials on the analytical signal of the modified electrodes is studied.

Immobilization of glucose oxidase into the SiO2-film on the surface of platinum nanofibers

Platinum nanofibers (Pt-Nfb), forming a conductive network with a large number of intersections, can be quite easily obtained by the electrospinning method, which allows the thickness and density of the network to be controlled [242, 254, 255]. Despite attractive characteristics such as the small diameter and high density of the fibers, there is no information in the literature about the use of electrospun Pt-Nfb for the development of biosensors. The first part of this section presents the results of the modification of Pt-Nfb with a bio-composite SiO2-film containing an enzyme from the oxidase class. As a model, we have chosen the enzyme glucose oxidase (GOx), which is often used in amperometric biosensors [122].
This is due to the acute need for quick monitoring of the glucose concentration in blood (and food) for people with diabetes. Another reason for the popularity of GOx is its high stability, which allows the application of different immobilization routes without loss of enzyme activity. Given this, we have chosen this particular enzyme as a model to verify the applicability of the Pt-Nfb network for oxidase immobilization in a SiO2-film by the electrochemically-assisted deposition method. In this case, the advantage of the method is the ability to selectively modify only the Pt-Nfb, without affecting the glass substrate.

Morphological and electrochemical characteristics of the platinum nanofiber network

Given that electrospun Pt-Nfb had not previously been used as an electrode, we have investigated their morphological and electrochemical properties, as well as the response of hydrogen peroxide on electrodes of this type.

(b) AFM pictures of fibers with a scratch and the cross-section profile.

Electrochemical properties of the platinum nanofiber network. The mechanical stability of the assembly in solution is good enough to perform electrochemical measurements. As shown in Fig. 3.2, the fiber density affects the electroactive surface area. Samples displaying various densities of Pt nanofibers have been characterized by cyclic voltammetry in sulfuric acid solution [256]. The electroactive surface area measurement was performed by calculating the area of the anodic oxidation peak of adsorbed hydrogen (Fig.
3.2a, shaded), which corresponds to the hydrogen desorption charge. The electroactive surface area was obtained from this charge using the conversion S_el.act = Q_H / 210 μC·cm⁻², where Q_H is the charge under the hydrogen desorption peak and 210 μC·cm⁻² is the charge density commonly accepted for a hydrogen monolayer on polycrystalline platinum. The estimation made from the integration of the hydrogen desorption peak shows that one fiber layer exhibits an electroactive surface area (0.29 cm²) similar to the geometric surface area defined by the O-ring of the electrochemical cell (0.28 cm²).

Immobilization of glucose oxidase using the electrochemically-assisted deposition method

In an attempt to cover only the surface of the Pt-Nfb, sol-gel electrochemically-assisted deposition was preferred over the classical evaporation method, which is basically restricted to film deposition onto flat surfaces. We carried out the immobilization of the model enzyme GOx on the surface of Pt-Nfb by the EAD method and studied the influence of the EAD parameters on the thickness of the formed SiO2-film and on the electrochemical response of the immobilized enzyme.

Voltammetric characteristics of platinum nanofibers modified with the SiO2-glucose oxidase film

The cyclic voltammogram of Pt-Nfb in phosphate buffer has a complex shape, typical of platinum electrodes (Fig. 3.4, curve 1). A current increase is noted in the anodic region at a potential of 0.7 V, which can be attributed to the formation of platinum oxides. A significant peak at a potential of 0.0 V can be noticed in the cathodic region, caused by several reactions, including the reduction of platinum oxides. As mentioned in the literature survey (paragraph 1.2.3), the properties of the formed film, and therefore the response of the immobilized enzyme, are affected by the electrochemically-assisted deposition parameters, such as the potential and the duration. Therefore, the EAD duration was further optimized to achieve the best response of the immobilized enzyme.
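The hydrogen-adsorption estimate of the Pt electroactive surface area described earlier in this section can be sketched numerically. The peak area and scan rate below are hypothetical values chosen to reproduce the ~0.29 cm² figure for one fiber layer, and the 210 μC/cm² monolayer charge density is the convention commonly used for polycrystalline platinum.

```python
# Hypothetical inputs: integrated hydrogen-desorption peak area (A*V)
# and the scan rate used for the cyclic voltammogram.
peak_area = 3.045e-6      # A*V (illustrative value)
scan_rate = 0.05          # V/s (illustrative value)

# Dividing the peak area by the scan rate converts it into a charge.
Q_H = peak_area / scan_rate          # C

# Conventional monolayer charge density for H on polycrystalline Pt.
Q_MONOLAYER = 210e-6                 # C/cm^2

S_el = Q_H / Q_MONOLAYER             # electroactive area, cm^2
print(f"Q_H = {Q_H * 1e6:.1f} uC, S = {S_el:.2f} cm^2")
```

The same two-step conversion (peak area to charge, charge to area) applies to any adsorption-based surface-area estimate; only the per-monolayer charge density changes.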
Influence of the duration of electrochemically-assisted deposition on the amperometric response of immobilized glucose oxidase

To find the optimal conditions for obtaining the electrogenerated SiO2-GOx film, we investigated the dependence of the amperometric response of the modified electrode to glucose on the duration of EAD at a constant potential of -1.2 V. At 1-2 s the response is negligible; the signal then sharply increases for an electrodeposition duration of 3 s, after which the sensitivity gradually decreases, approaching zero at 10 s (Fig. 3.5). To conclude this section, it can be stated that silica deposition occurs essentially on the Pt nanofibers for very short deposition times (< 3 s). More prolonged electrolysis resulted in larger amounts of the OH- catalyst diffusing over the overall surface, thereby inducing film formation on both the platinum nanofibers and the glass substrate, which can then lead to the full encapsulation of the fibers in the silica material. Such morphological variations affect the electroactivity of the immobilized GOx. When the biocomposite film totally covers the nanofiber assembly (i.e., 10 s deposition, Fig. 3.6f), it totally blocks the electrochemical detection of glucose.

Comparison of platinum nanofibers and a platinum macroelectrode for the immobilization of glucose oxidase

A comparison was performed between Pt-Nfbs and a bare Pt electrode using 3 s electrochemically-assisted deposition under the same conditions. Similar results were obtained, which suggests that here the electrochemical response of the assembly is mainly limited by the diffusion of glucose in the sol-gel layer (Fig. 3.8). Under these conditions, Pt-Nfbs behave as a partially-blocked electrode [257], and the diffusion layers of the individual nanofibers overlap to form a common layer, similar to the diffusion layer of a macroelectrode [258,260,261].
However, such a control experiment has to be interpreted very carefully, as the electrolysis of the sol occurs at a more negative potential on the bare Pt electrode than on the Pt-Nfbs, potentially affecting the thickness of the deposit. In any case, these results highlight the interest of the low-dimensional Pt-Nfbs and of well-controlled sol-gel electrochemically-assisted deposition to improve the response of biomolecule-doped thin silica films produced by the electrochemically-assisted deposition method.

Immobilization of choline oxidase into the SiO2-film

Choline oxidase (ChOx), like glucose oxidase, belongs to the class of oxidases, which makes it possible to use the common approach of sensitive hydrogen peroxide detection to develop a biosensor based on it. However, unlike GOx, ChOx is characterized by much lower stability, which makes it necessary to find ways to increase its stability in the immobilized state. Thanks to the biocompatibility of SiO2-based materials, they are promising for use as a matrix for the encapsulation of the enzyme on the electrode surface. The use of the rapid one-step EAD method allows immobilization under mild conditions and shortens the enzyme's exposure to unfavorable conditions.

Electrochemical generation of platinum nanoparticles on the surface of a glassy carbon electrode for choline oxidase immobilization

Pt-Nfb did not demonstrate significant advantages in sensitivity and selectivity compared with a planar platinum electrode, and the method of their preparation requires sophisticated equipment. Therefore, in order to obtain a choline biosensor, we decided to investigate another type of electrode, a glassy carbon electrode modified with platinum nanoparticles (Pt-NPs). Nanoparticles were chosen because they can easily be obtained on the electrode surface by electrochemical generation.
At the same time, the size of such particles can be much smaller than the diameter of Pt-Nfb, which may contribute to an increased electrocatalytic effect compared with the nanofibers.

Optimization of the conditions of glassy carbon electrode modification with platinum nanoparticles

For the electrochemical generation of nanoparticles on the surface of GCE, we used the method described in [Stefanie et al.], which involves the alternation of short reducing and resting pulses. While the negative potential is imposed, reduction of the platinum precursor occurs on the electrode surface, leading to the formation of platinum nanoparticles; during the resting pulse no electrochemical reaction occurs. The SEM image of the modified electrode (Fig. 3.9a) reveals the presence of nanoparticles uniformly distributed over the electrode surface. It is noticeable that unmodified GCE is almost inactive towards hydrogen peroxide in the potential range -0.5 to 1.0 V: there is no difference between the voltammograms of this electrode in the presence and in the absence of H2O2 (Fig. 3.9b, curves 1, 2). A slight increase of current is seen from a potential of 0.9 V, which apparently precedes the peak of H2O2 oxidation observed at potentials higher than 1 V. In contrast, the voltammogram of the modified GCE-Pt-NPs in the presence of hydrogen peroxide differs significantly from the voltammogram in the buffer solution (Fig. 3.9c). Two half-waves, cathodic and anodic, are visible at a potential of about 0.35 V, corresponding to the reduction and oxidation of hydrogen peroxide, respectively (Fig. 3.9). They appear due to the presence of platinum nanoparticles on the surface of GCE, which catalyze the electrochemical reactions of H2O2.
Thus, the electrodeposition of Pt-NPs on the surface of GCE significantly shifts the potential of H2O2 oxidation in the negative direction and improves the electrochemical response. Such behavior makes the electrode a promising sensor for hydrogen peroxide. The assumption can be made that the pulse duration affects the size of the nanoparticles, while the number of pulses affects their quantity; changing these parameters may therefore affect the catalytic properties. To increase the sensitivity of GCE-Pt-NPs to hydrogen peroxide, the number and duration of the pulses were optimized. The sensitivity was calculated as described in paragraph 2.3.3. The results are shown in Table 3.1.

Table 3.1. Influence of the duration (1) and number of pulses (2) of Pt-NPs electrogeneration on the sensitivity of GCE-Pt-NPs to H2O2

The sensitivity increases with decreasing pulse duration, which can be explained by the fact that smaller particles are likely to exhibit stronger catalytic properties. The sensitivity also increases with the number of pulses from one to three; a further increase leads to a decrease of the signal, possibly due to the formation of a large number of particles forming agglomerates. Thus, the optimal parameters of the electrochemical generation of Pt-NPs were three pulses of 0.3 s each.

Investigation of the stability of the amperometric response of the glassy carbon electrode modified with platinum nanoparticles

In addition to high sensitivity to hydrogen peroxide, an electrochemical transducer must demonstrate stability and reproducibility of the response. Therefore, we have tested the stability of the amperometric and voltammetric responses of GCE-Pt-NPs to hydrogen peroxide (Fig. 3.10). In an effort to improve the stability and to prevent leaching of the nanoparticles from the electrode surface, we coated the GCE-Pt-NPs electrode with the biocomposite SiO2-ChOx film using the same electrodeposition parameters as in section 3.1.
This modified electrode was active towards choline (Appendix C), but its stability was still quite low. Thus, the glassy carbon electrode modified with platinum nanoparticles is characterized by poor response stability. This can be explained by the gradual leaching of the platinum nanoparticles from the electrode surface or by their deactivation. Therefore, given the high demands on the stability of amperometric sensors, GCE-Pt-NPs was not suitable for use as a transducer in an amperometric sensor for choline.

The choice of electrode for choline oxidase immobilization

Given the poor stability of the nanoparticles, a comparative study of other types of electrodes was conducted in order to find a new type of transducer for the immobilization of ChOx (Fig. 3.11). As objects of comparison we selected screen-printed electrodes, owing to their low cost, easy fabrication and small size, with the working, auxiliary and reference electrodes combined on one strip. At the same time, being specially designed for use in sensors, they usually have high electrochemical activity. Gold and platinum were chosen as working electrode materials because of their high sensitivity to H2O2. The cyclic voltammograms of the platinum (Fig. 3.11c) and gold (Fig. 3.11d) screen-printed electrodes (AuSPE) show active oxidation of hydrogen peroxide on them: the current increase in both cases is noticeable starting from a potential of 0.2 V (Fig. 3.11c, d, curve 2). The value of the current is approximately the same for these two electrodes and commensurate with the current on GCE modified with platinum nanoparticles (Fig. 3.11b). Given this, either of the two screen-printed electrodes could be used as a transducer for the immobilization of ChOx. However, taking into account the slightly higher response to H2O2, the lower cost and the advantages of the gold electrode [251], we have chosen the AuSPE for the immobilization of ChOx.
Immobilization of choline oxidase into the SiO2-film on the surface of the gold screen-printed electrode

For the one-step immobilization of ChOx in a SiO2-film on the surface of AuSPE, the EAD method was applied with parameters similar to those used for the immobilization of GOx on the surface of Pt-Nfb. However, to achieve a significant response, the duration of deposition was increased to 20 seconds. The AuSPE modified with the SiO2-ChOx film demonstrates a significant increase of the anodic current on the voltammogram in the presence of choline (Fig. 3.12a). The method of hydrodynamic voltammetry (Fig. 3.12b) confirms this finding. Indeed, the voltammogram of AuSPE-SiO2-ChOx in the presence of choline (Fig. 3.12b, curve 2) depicts two half-waves, whose presence can be explained by the complex mechanism of hydrogen peroxide oxidation on gold electrodes [262]. From this voltammogram the optimum working potential (0.7 V) was selected.

Choice of the deposition potential of the SiO2-choline oxidase film

The negative potential imposed during EAD affects the rate of formation of the electrogenerated catalyst and thus the parameters of the formed SiO2-film, such as its thickness and porosity. In turn, these characteristics influence the response of the immobilized enzyme. The results shown in Fig. 3.13 indicate that a change of the deposition potential leads to changes in the sensitivity of the modified electrode. The greatest response is observed for EAD at a potential of -1.1 to -1.2 V, which correlates with the optimal parameters of deposition of the GOx-containing film on the surface of Pt-Nfb (section 3.1). At a less negative potential (Fig. 3.13b, curve 1) the response is almost absent, due to the slow rate of electrogeneration resulting in the absence of a SiO2-film on the electrode. More negative potentials (-1.3 V) (Fig. 3.13b, curve 4), on the other hand, lead to too violent an electrolysis of water with active formation of hydrogen bubbles, which prevents the formation of the film and causes its destruction.
Investigation of the response stability of the gold screen-printed electrode modified with the SiO2-choline oxidase film

Stability is an important parameter of biosensors, so we performed a study of the long-term stability of AuSPE-SiO2-ChOx. The study was conducted by recording the calibration curves every 4 days for 2 weeks and comparing the sensitivity of the modified electrode to choline with the initial value obtained immediately after modification. As known, the main causes of the sensitivity loss of biosensors are the inactivation of the enzymes in the film and their leaching, including as a result of film destruction. The presence of a surfactant in the sol during EAD significantly affects the latter factor, enhancing the interaction of the enzyme with the silanol groups of the film, as well as acting as a structuring agent and improving the film's morphological properties (paragraphs 1.2, 2.1.2). Therefore, we have investigated the effect of the concentration of CTAB in the sol on the long-term stability of AuSPE-SiO2-ChOx. To assess the stability of the biosensor, the relative sensitivity of the electrode modified with the SiO2-CTAB-ChOx film was calculated using the formula:

S = (S_x / S_1) × 100%   (3.5)

where S is the relative sensitivity, %; S_1 is the sensitivity of the electrode to the substrate on the first day after modification, μA/mM; S_x is the sensitivity of the electrode to the substrate on the x-th day after modification, μA/mM.

The modified electrode was stored in a refrigerator at +4 °C between experiments. It should be noted that the free enzyme in buffer solution under these conditions completely lost its activity within a week, which contrasts with the preservation of activity of the immobilized enzyme. The data are presented in Table 3.2. They show that the greatest stability of the modified electrode is observed at a CTAB concentration in the sol of 12.2 mM, which is much greater than its CMC in aqueous solutions (≈ 1 mM [C. Rottman et al., «Surfactant-Induced Modification of Dopants Reactivity in Sol-Gel Matrixes»]). This correlates with the data on the range of CTAB concentrations in the sol giving the highest analytical response of immobilized biomolecules [O. Nadzhafova et al., «Heme proteins sequestered in silica sol-gels using surfactants feature direct electron transfer and peroxidase activity»; «...silica-nanocomposite films electrogenerated on pyrolitic graphite electrode»]. High concentrations of CTAB lead to strong foaming and a flotation effect, and may cause a restructuring of the micelles and of the film, reducing the amperometric response of the immobilized enzyme. The operational stability of AuSPE-SiO2-ChOx, which is necessary during continuous operation of the biosensor, was investigated in the voltammetric and amperometric modes at the optimum concentration of CTAB in the sol (Fig. 3.14). A constant response is also observed in the amperometric mode: the catalytic oxidation current of hydrogen peroxide, formed by the enzymatic oxidation of choline, decreases by no more than 6% after 50 minutes of continuous measurements under stirring (Fig. 3.14b). Thus, these data indicate the high operational stability of AuSPE-SiO2-ChOx, which is achieved through the sol-gel encapsulation of the enzyme and the addition of the surfactant CTAB. The resulting stability is quite sufficient for the application of the developed modified electrode as the sensitive element of a biosensor and for its use in the analysis of different objects. Further on, the optimum parameters were EAD at a potential of -1.1 V for 20 s and the presence of 12.2 mM CTAB in the sol.

Dependence of the amperometric response of the gold screen-printed electrode modified with the SiO2-choline oxidase film on the choline concentration

The dependence of the amperometric response of AuSPE-SiO2-ChOx on the concentration of choline resembles a saturation curve (Fig.
Thus, the sensitivity of the modified electrode is sufficient for the determination of choline in foods and biological fluids. Conclusions to chapter 3 The oxidation current of hydrogen peroxide on the surface of platinum Characterization of the electrophoretically deposited carbon nanotubes layers The electrophoretic deposition method allows the accumulation of charged particles from their suspension onto one of the two electrodes used in the device. For this reason, carbon nanotubes bearing a high content of negatively-charged carboxylic groups on their surface have been chosen to facilitate their dis-persion (as non-modified CNT of this sort are almost insoluble in the pure water) and to make them likely to move in the electric field in order to get fast deposition of high quality CNT layers. The rate of the precipitation of charged particles depends on the applied potential difference, surface area of electrodes, distance between them and the process duration [266]. Therefore, the quantity of deposited nanotubes can be finely controlled by varying the time of potential application while keeping other parameters constant. The optimal applied potential difference of 60 V was used throughout because it constitutes a good compromise between a high speed of deposition and limited decomposition of ultra-pure water (which would generate oxygen bubbles that may affect CNT assembly). Morphological characteristics A necessary condition for the efficiency of electrode modification with CNT is the formation of a homogeneous and reproducible coating. The application of EPD method allows obtaining of such coatings. All images can clearly show the presence of distinct horizontally-oriented nanotubes, indicating the absence of aggregation, occuring, for example, when CNT dispersion is simply dropped on the electrode. The influence of the duration of deposition on the CNT-layer thickness The formation of CNT assemblies was also monitored by atomic force microscopy. 
Despite the difficulty of recognizing single nanotubes (for a more detailed image see Appendix D), the overall thickness and uniformity of the film can be evaluated. The film thickness grows with the deposition time, which gives the possibility to control the quantity of deposited carbon nanotubes, and consequently the film thickness, by tuning the electrophoretic deposition time. Such variation could influence the electrochemical characteristics of the modified electrode.

Measurement of the electroactive surface area of the CNT layer

The electroactive surface area is one of the most important parameters of an electrode; its increase usually leads to an increase in current, and hence in the sensitivity of the electrode. To estimate the surface area of the GCE-CNT electrodes, we have used the adsorption of the dye methylene green (MG). It is known that phenothiazine dyes can easily adsorb onto the surface of carbon nanotubes owing to favorable electrostatic and π-π stacking interactions [233]. The present electrophoretic method thus allows obtaining, in a controlled way, a porous conductive matrix with an extended electroactive surface area, which might be of interest for the immobilization of a large quantity of enzyme. According to formula (2.3), by integrating the peak area of the adsorbed MG one may calculate the approximate electroactive surface area of the electrode modified with CNT. For the electrode modified with CNT by EPD of 120 s duration (Fig. 4.5a, curve 5), the electroactive surface area was 1033 mm², while the geometric area of an electrode of the same diameter (6 mm) is 28 mm² (according to the formula S = πr²). Thus, modification of the electrode with CNT by the EPD method increases the electroactive surface area of the electrode by almost 40 times, leading to an improvement of its sensitivity to electroactive substances.
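The electroactive-area estimate from adsorbed methylene green (formula 2.3) can be sketched numerically. The MG peak area and scan rate below are hypothetical values chosen so that the result reproduces the ~1033 mm² figure reported above; n = 2 electrons for MG and S_1 = 0.8 nm² per adsorbed molecule are taken from the formula's definitions.

```python
# Inputs for formula (2.3); S_peak and nu are illustrative values.
S_peak = 2.07e-5          # area of the MG peak on the voltammogram, A*V (hypothetical)
nu = 0.05                 # potential scan rate, V/s (hypothetical)
n = 2                     # electrons transferred per MG molecule
S1 = 8e-13                # area of one adsorbed MG molecule, mm^2 (0.8 nm^2)
ELECTRONS_PER_COULOMB = 6.24e18

# S_el.act = (S_peak * 6.24e18 * S1) / (nu * n):
# peak area / scan rate gives the charge, times electrons per coulomb gives
# electrons, divided by n gives molecules, times S1 gives the covered area.
S_el_act = S_peak * ELECTRONS_PER_COULOMB * S1 / (nu * n)   # mm^2

geometric_area = 28.0     # mm^2, disk of 6 mm diameter
print(f"{S_el_act:.0f} mm^2, ratio = {S_el_act / geometric_area:.0f}x")
```

Reading the intermediate quantities off the chain of unit conversions is a useful sanity check: the charge under the peak here is about 0.41 mC, i.e. roughly 1.3·10¹⁵ adsorbed MG molecules.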
Electrocatalytic properties of CNT assemblies towards NADH oxidation

To reveal the prospects of applying the hereby deposited carbon nanotube film to bioelectrode development, we examined its electrocatalytic effect towards the oxidation of NADH, an enzymatic cofactor required by most enzymes of the dehydrogenase group. The bare glassy carbon electrode shows no significant anodic current below 0.6 V (actually a peak, not shown in the figure, can be observed at ca. 0.8 V), in agreement with previous literature [218]. On the opposite, the cyclic voltammogram of NADH recorded at the GC-CNT electrode (Fig. 4.6b) displays a well-defined oxidation peak at a much lower potential. Comparing parts a and b in Fig. 4.6 and curves 1 and 2 in Fig. 4.7 reveals a significant decrease in the overpotential for NADH oxidation on the carbon electrodes, mainly due to the expected electrocatalytic action of the CNT. These results are similar to those obtained in [218]. The potential of NADH oxidation is not as low as the ca. 0.0 V observed by some researchers on nanotubes specially treated to induce the formation of quinone-like functional groups [219,220]. The absence of visible peaks in the low-potential range indicates that the high-voltage excitation during the EPD does not produce quinone species, and that the catalytic effect of the carbon nanotubes relies mostly on their edge-plane sites exposed to the solution [217].

Comparison of the voltammetric characteristics of NADH oxidation at CNT-modified and non-modified glassy carbon electrodes

A comparison with the electrochemical behavior of CNT simply drop-coated on the electrode surface was also performed.
One has to note that it is not easy to obtain a homogeneous film by drop-coating on the electrode surface, because carbon nanotubes tend to form aggregates during the drying process. Here, the CNT were deposited from aqueous solution by dropping 10 μL of the same suspension as used for EPD (deposition from a DMF dispersion of the CNT gave a similar result). The electrochemical behavior of the drop-coated CNT film was then compared with that of the film deposited by EPD. On the voltammogram recorded in PBS at the GC electrode with drop-coated CNT (Fig. 4.6c), an anodic peak at about -0.05 V can be noticed that is absent for the electrophoretically deposited CNT (Fig. 4.6b). This peak can be attributed to catalytic impurities or to easily oxidized groups (e.g., quinone-like) on the surface of the as-produced CNT. Note also that the quantity of CNT deposited by drop-coating (compared indirectly from the magnitude of the background current) was several times higher than that deposited electrophoretically. Nevertheless, the NADH oxidation current at 0.42 V was higher for the electrophoretically deposited CNT, possibly because of the better distribution of the CNT in the porous structure obtained with EPD, which allows good access to the internal surface of the electrode.

Stability of the NADH voltammetric response

Signal stability is an important biosensor parameter. Oxidation of NADH at non-modified electrodes often leads to fouling of the surface with reaction products and a decrease of the response over time [218]. The stability of the NADH oxidation peak at GCE and GCE-CNT was investigated (Fig. 4.8).
Thus, the NADH oxidation current on the unmodified GCE decreases significantly after the second scan, and after the third cycle the signal is only 50% of the original one (Fig. 4.8a). At the same time, the current on the GCE-CNT decreases much more slowly: after the third potential scan the decrease does not exceed 15% of the original response (Fig. 4.8b). For the subsequent scans the current approaches a constant value and remains stable over time.

Summing up, modification of the GCE with CNT by the EPD method significantly improves its electrochemical characteristics towards NADH oxidation. In particular, the peak potential is shifted by 0.25-0.30 V towards negative values, which facilitates the oxidation of the coenzyme at the electrode. In addition, the stability of the electrode response to NADH increases about 3-fold owing to the absence of electrode fouling by reaction products. It should be emphasized that the EPD method yields more homogeneous and porous coatings than the conventional drop-coating method, which increases the sensitivity of the electrode to NADH by about 2.5 times.

Electrochemically-assisted deposition of SiO2-sorbitol dehydrogenase film on the CNT-modified electrode

To highlight the potential of such assemblies for bioelectrochemical devices, the electrophoretically deposited CNT layer was used as a support for the deposition of a sol-gel biocomposite comprising a dehydrogenase enzyme (d-sorbitol dehydrogenase, DSDH). Owing to the heterogeneous topography of the porous CNT layer, the evaporation-based sol-gel deposition approach is limited, at least as far as uniform film formation around the CNT is expected, so the EAD method (which had already led to successful sol-gel film formation on non-flat supports) was applied here.
One of the advantages of sol-gel EAD is the possibility of selectively modifying only the required part of the electrode. For instance, we showed earlier the feasibility of controlled modification with a sol-gel biocomposite of platinum nanofibers lying on a glass substrate (see paragraph 3.1). The situation of the GC-CNT electrode is more complex, as the glassy carbon substrate is itself conductive, so the formation of the silica film could be triggered not only by the carbon nanotubes but also by the underlying support. Therefore, linear sweep voltammograms were recorded at the different types of electrodes dipped in the silica sol in order to examine the activity of the reactions responsible for the generation of the catalyst for silica condensation (i.e., water and/or oxygen reduction (1.1)). Comparison of the curves obtained for the bare GC and GC-CNT electrodes (curves 1 and 2 in Fig. 4.9) clearly indicates much higher reduction rates on the carbon nanotubes than on the bare electrode (more than one order of magnitude at -1.3 V). The reason lies in the much larger electroactive surface area of GC-CNT as well as in the catalytic effect of the carbon nanotubes, leading to a faster reaction rate. Thus, by applying a negative potential of -1.3 V to the GC-CNT electrode, hydroxyl ions are generated mainly in the vicinity of the nanotube walls and much less on the glassy carbon support, which should lead to preferential coverage of the CNT with DSDH-doped silica, as demonstrated before for localized sol-gel deposition on Pt nanoparticles lying on a glassy carbon surface [267]. The bioelectrocatalytic responses were then compared using either the bare GC or the GC-CNT electrode covered with a silica film containing DSDH generated by the EAD method.
Note that, to make the data comparable, a more cathodic potential had to be applied to the bare GC electrode (-1.8 V, with respect to -1.3 V for GC-CNT) to ensure the same electrolysis current and thus the same amount of electrogenerated hydroxyl-ion catalyst [268] (which should lead to similar quantities of deposited biocomposite on both electrodes). NADH oxidation was facilitated at the GC-CNT electrode (by ca. 0.3 V). Significantly larger bioelectrocatalytic currents were also obtained in the presence of CNT. Assuming the same amount of deposited material in both cases (as discussed above), and thus the same quantity of enzyme in the film, the smaller current observed with bare GC can be explained by diffusional restrictions for the analyte to reach the active enzymatic centers through the silica layer, whereas the porous structure of the carbon nanotube assembly resembles a large-surface-area 3D electrode ensuring fast diffusion of reactants to the proximal electrode surface. Thereby, the biocomposite electrode constructed on electrophoretically deposited carbon nanotubes offers the advantages of a lower detection potential and a larger current due to its porous structure combined with the intrinsic electrocatalytic properties of the nanotubes.

Optimization of the parameters of the biocomposite film

In order to obtain the best response of the immobilized enzyme, the influence of the main parameters affecting the characteristics of both the silica-based biocomposite layer and the underlying carbon nanotube assembly was studied in a two-step optimization of the deposition parameters. First, we examined the effect of the electrodeposition time (in the range 5-25 s), which directly determines the quantity of deposited biocomposite, while keeping the parameters of the carbon nanotube layer constant (electrophoretic deposition time of 60 s).
The naked-eye appearance of the electrode did not change when the potential was applied for 5 s, whereas a clearly visible silica layer was observed after 25 s of deposition. Analysis of the amperometric response of such electrodes to d-sorbitol (Fig. 4.11a) revealed a very strong impact of the deposition time on the bioelectrode sensitivity. The highest responses appear only in a narrow time window, between 12 and 20 s, with a maximum reached at 16 s (Fig. 4.11b). Such a great influence of the EAD duration stems from the catalytic nature of this sol-gel deposition process. As previously assessed using AFM and EQCM techniques [269], sol-gel deposition can basically be divided into two successive regimes, the first characterized by a slow deposition rate and the second by the fast deposition of much larger quantities of sol-gel material. As a consequence, the particular shape of the variation depicted in Fig. 4.11b can be rationalized as follows: short biocomposite deposition times result in low amounts of incorporated enzyme and thus a low bioelectrochemical response, whereas long deposition times lead to thick deposits that prevent fast mass transport in the biocomposite film, thereby causing a loss of sensitivity, as described before for sol-gel deposition on Pt nanofibers (see paragraph 3.1) and on macroporous gold electrodes [Qu et al.]. The optimum thus corresponds to the best compromise between a sufficiently high amount of incorporated enzyme and a structure porous enough to ensure fast diffusion of reactants.

A second important point is the impact of the carbon nanotube film thickness on the electrochemically assisted bioencapsulation. Fig.
4.12 shows the dependence of the bioelectrode sensitivity on the electrophoretic deposition time of the CNT assembly while keeping the sol-gel deposition parameters constant (-1.3 V, 16 s). This relation also has a peaked shape, with the best sensitivities obtained for 20 to 60 s of electrophoretic deposition. According to equation (4.1), this corresponds to a CNT layer thickness of 100 to 150 nm. In fact, not only the sol-gel electrodeposition time but also the quantity of carbon nanotubes on the glassy carbon electrode affects the quantity of deposited sol-gel material, because sol electrolysis is facilitated on carbon nanotubes (see Fig. 4.9). A higher quantity of carbon nanotubes on the electrode surface leads to faster sol-gel deposition. Great care has thus to be taken when optimizing sol-gel electrodeposition on porous and catalytically active materials. At the same time, the results are reproducible as long as the parameters are kept constant.

Analytical performance of the modified electrode

The d-sorbitol dehydrogenase immobilized in the silica film on the surface of the carbon nanotubes showed typical Michaelis-Menten kinetics, with gradual saturation of the response above a d-sorbitol concentration of 5 mM (Fig. 4.13a). The apparent Michaelis-Menten constant (Km) calculated from the Lineweaver-Burk plot was 4.1 mM, very similar to the value measured in solution [270], indicating good activity of the DSDH entrapped in the electrogenerated silica film and rapid mass transfer of reagents. Finally, the analytical characteristics of the bioelectrode were determined by recording its amperometric response at an applied potential of +0.5 V as a function of the d-sorbitol concentration. Using the optimal carbon nanotube layer thickness and the best sol-gel EAD parameters, the calibration plot for the oxidation of d-sorbitol was found to be linear in the range from 0.5 mM to 3.5 mM.
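The two numerical treatments used above and below, the Lineweaver-Burk estimation of the apparent Km and the 3s detection limit from the calibration slope, can be sketched in Python. The current values are synthetic: the Michaelis-Menten constant is set to the reported 4.1 mM, the calibration slope to the reported 2.77 μA/mM, and the blank standard deviation is an assumed value.

```python
import numpy as np

# --- Lineweaver-Burk estimate of the apparent Michaelis-Menten constant ---
# Synthetic steady-state currents I = Imax*C/(Km + C); illustrative data only.
Km_true, Imax = 4.1, 10.0                      # mM, arbitrary current units
C = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])   # substrate concentration, mM
I = Imax * C / (Km_true + C)

# Linear fit of 1/I vs 1/C: 1/I = (Km/Imax)*(1/C) + 1/Imax, so Km = slope/intercept
slope, intercept = np.polyfit(1.0 / C, 1.0 / I, 1)
Km_est = slope / intercept
print(f"Km (Lineweaver-Burk) = {Km_est:.2f} mM")

# --- Detection limit by the 3s criterion from the linear calibration ---
# Calibration slope b from the electrode; s_blank is an assumed standard
# deviation of the blank signal.
b, s_blank = 2.77, 0.15
LOD = 3.0 * s_blank / b
print(f"LOD = {LOD:.2f} mM")
```

With experimental data, `s_blank` would be estimated from replicate blank measurements and the fit of 1/I vs 1/C would be weighted, since the reciprocal transform amplifies the noise of low-current points.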
The linear regression equation was I(ΔA) = (-0.09 ± 0.15) + (2.77 ± 0.07) × C (mM), with a correlation coefficient of 0.995. The detection limit by the 3s criterion was 0.16 mM. The modification method can be reproduced on different electrodes and on different days with a precision of 9% (n = 3). The bioelectrode is sufficiently stable for measurements during one month, as shown before for a film of the same composition [241]; the signal drop after 20 consecutive measurements did not exceed 10%.

Electrophoretic deposition of macroporous CNT assemblies for electrochemical applications

The application of macroporous electrodes is a promising approach to increase the sensitivity of electrochemical analysis [271]. They can be used in particular for the immobilization of biomolecules [272-274; Qu et al.]. Owing to the porous structure, the electrode has a higher electroactive surface area, and the pore size is large enough to allow quick and unhindered diffusion of reagents from the solution to the electrode. In particular, a macroporous gold electrode was used to co-immobilize an enzyme with its coenzyme, which was possible thanks to the large pore volume [275]. The application of carbon nanotubes as nanoscale building blocks for the construction of analogous macroporous electrodes appears to be a promising way to combine the advantages of the macroporous structure [276] with the electrocatalytic properties of the individual nanotubes [Agüí et al.]. Nevertheless, there are only a few examples of three-dimensional (3D) carbon nanotube ensembles with engineered and controlled macroporosity. Direct synthesis allows the creation of 3D CNT ensembles with long tube-to-tube distances [277,278], but it offers rather limited possibilities for porosity control.
Interesting results can be obtained by the method of ice-segregation-induced self-assembly (ISISA), which was applied for the fabrication of self-supported porous CNT monoliths [279]. Another widespread method is the template approach, which consists in mixing single-walled carbon nanotubes with sacrificial particles, forming a composite and subsequently removing the particles [280]. A two-dimensional network can also be obtained by patterning methods from solution [281]. To date, the above-mentioned ensembles have not been applied in the area of electrochemical biosensors. The method of vacuum filtration allows obtaining rather thick porous CNT films [282], but it can lead to segregation of the particle and nanotube layers during the filtration process [283] and does not permit obtaining the films on a solid support. A possible way to overcome these limitations is to resort to electrophoretic deposition for the fabrication of macroporous CNT electrodes. One drawback of plain CNT assemblies can be the uncontrolled packing of the material, which hinders access to the CNT at the bottom of the film, especially in biosensor applications. In this chapter, the formation of 3D macroporous CNT assemblies in the form of thin films on electrode surfaces by means of electrophoretic deposition is described. To evaluate the prospects of such films for biosensing, their electrochemical responses to hydrogen peroxide and nicotinamide adenine dinucleotide were investigated, and the first example of enzyme immobilization on an electrophoretically deposited 3D macroporous CNT electrode is shown.

Fabrication of macroporous CNT layers on the electrode surface

The method of electrophoretic deposition basically allows the formation of deposits from suspensions of charged particles. Carbon nanotubes are negatively charged over a broad pH range due to the presence of carboxylic groups on their surface.
On the other hand, the polystyrene (PS) beads are also negatively charged owing to the sulfonate groups on their surface [284] (potassium persulfate was used as the initiator of polymerization in their synthesis) and can be electrophoretically deposited as well. However, we did not succeed in fabricating a uniform 3D film consisting of PS beads only, as they tended to form a monolayer with big gaps between the deposited beads (see Appendix A). This likely occurs because of weak adhesion between the polystyrene and the glassy carbon, together with unfavorable packing originating from their spherical shape and the electrostatic repulsion between particles. Considering these observations, the deposition of a mixture of carbon nanotubes and PS beads could overcome the shortcomings of each individual component, the polymeric particles bringing volume while the nanotubes reinforce the composite. In contrast, the appearance of the film formed in the presence of larger amounts of PS beads is different, demonstrating the absence of well-defined 500 nm macropores and showing rather a disordered sparse packing of nanotubes (Fig. 4.15C and D), suggesting the collapse of such a thick macroporous film.

Optimization of the PS-bead concentration in the suspension

To find the optimal concentration of PS beads leading to thick and highly porous CNT films, we analyzed different samples obtained with the same 135 s deposition time and PS bead contents varying from 0.05 to 0.5 mg/mL. The dependence of the deposit thickness on the PS bead concentration is linear with quite a big slope, since higher amounts of PS beads significantly increase the volume of the deposit (Fig. 4.16a). At the same time, when the PS template is removed, this dependence has a dramatically different shape (compare curves 1 and 2 in Fig. 4.16a) and can basically be divided into three parts.
In the initial part of the plot, the thickness of the template-free macroporous CNT deposit is close to (yet lower than) that of the CNT-PS one, demonstrating that burning out the template does not significantly influence the macrostructure of the deposit because of the small quantity of beads. For the same reason, such films suffer from rather small thicknesses and low pore volumes. In the second part, the rate of thickness increase slows down and reaches a maximum at a PS bead concentration of 0.2 mg/mL. At higher concentrations the thickness suddenly drops below 1 μm and does not change significantly in the third part of the plot. This demonstrates that too large concentrations of polystyrene have to be avoided, as they lead to pore collapse during calcination. In conclusion, the maximal thickness of the macroporous film is obtained at a PS bead concentration of about 0.2 mg/mL, which defines a maximal PS beads / CNT ratio above which collapse of the macropores starts to occur.

As a rule, the quantity of electrophoretically deposited material depends strongly on the deposition time [266]. We therefore verified this approach and its impact on the feasibility of macroporous film formation. Fig. 4.16b shows the dependence of the film thickness on the electrophoretic deposition time. This dependence is linear when only carbon nanotubes (no PS beads) are present in the dispersion (curve 3 in Fig. 4.16b). The deposit thickness grows in the same manner, but with a much bigger slope, when PS beads are introduced into the mixture (Fig. 4.16b, curve 1). However, this increase occurs only up to 100 s of deposition; it then tends to level off and can even decrease at much longer deposition times. We suppose that the rate of PS bead deposition decreases significantly above 100 s because of substantial depletion of the particle concentration in the suspension as the beads are consumed during deposition.
The loss in thickness at longer times is more difficult to explain, but it might be due to film destruction by the oxygen bubbles formed over a longer time window during the deposition (they were clearly observed with the naked eye). In agreement with the observation made in Fig. 4.16a for a PS-bead content of 0.2 mg/mL, the thickness of the deposit decreases by about a factor of 2 after template removal (Fig. 4.16b, curve 2) but still remains much bigger than that of the deposit fabricated without PS beads (Fig. 4.16b, curve 3). At low deposition times (less than 75 s), the thickness of the macroporous CNT film is below 500 nm, which likely corresponds to a sparse thin film similar to the one obtained by patterning from solution. After 75 s of deposition, the thickness of the deposit increases considerably, up to 2 μm, indicating the formation of a true macroporous film with a cellular structure.

Methylene green adsorption and electrochemical characterization

The strong adsorption of MG on the CNT surface and its electroactive nature were exploited to compare the surface areas of the different films prepared here.

Co-immobilization of sorbitol dehydrogenase and coenzyme on the macroporous CNT assembly

Though the macroporous structure does not provide a true advantage for the direct detection of NADH in solution, it can still be useful for the development of biosensors based on NAD-dependent enzymes. In that case, enhancement can be achieved by immobilization of both the enzyme and the cofactor inside the porous film. The larger inner surface area together with the bigger pore volume leads to an increase in the number of active sites where the enzymatic reaction can take place.
We used here a recently described protocol for the immobilization of NAD+ together with the enzyme in the silica film. Thus, the porous structure of the macroporous CNT film provides higher sensitivity for the detection of d-sorbitol when NAD+ and DSDH are immobilized together in the silica film, and this could be further exploited for the construction of advanced dehydrogenase-based biosensors.

Conclusions to chapter 4

Modification of a glassy carbon electrode with carbon nanotubes by the electrophoretic deposition method significantly improves its efficiency in detecting the coenzyme NADH. The presence of carbon nanotubes shifts the NADH oxidation potential by 0.25-0.30 V in the negative direction and significantly increases the stability of the response. The electrochemically-assisted deposition method allows the immobilization of dehydrogenases in a SiO2 film with preservation of enzymatic activity, as illustrated with sorbitol dehydrogenase. The electrode modified with this enzyme shows a stable and reproducible response to changes in the sorbitol concentration in solution.

The electrode modified with carbon nanotubes and sorbitol dehydrogenase demonstrates significant advantages in comparison with the electrode without carbon nanotubes. In particular, the sensitivity and selectivity of sorbitol detection are increased thanks to the use of a low working potential of 0.5 V. The response of the modified electrode depends on the thickness of the carbon nanotube layer and on the SiO2-film deposition parameters. The best analytical characteristics were obtained using a carbon nanotube layer thickness of 100-150 nm and electrochemically-assisted deposition parameters of -1.3 V and 16 s.

A macroporous layer of carbon nanotubes obtained by electrophoretic deposition is a promising matrix for developing biosensors based on oxidases and dehydrogenases.
In particular, it improves the sensitivity and selectivity of the electrode modified with a SiO2-dehydrogenase film when combined with adsorbed methylene green dye and immobilized coenzyme.

CHAPTER 5. APPLICATION OF MODIFIED ELECTRODES AS SENSITIVE ELEMENTS OF BIOSENSORS FOR THE DETERMINATION OF SORBITOL AND CHOLINE

Selective determination of biologically active organic substances is an important task of modern analytical chemistry. Such substances are often used in food production, resulting in their significant intake with food. However, an excess or deficiency of such substances in the human body can cause serious physiological disorders. Currently, the analytical determination of these substances is a complex task due to the similarity of the chemical properties of individual substances within homologous series (e.g., polyhydric alcohols, monosaccharides and polysaccharides all bear OH-groups). In fact, the only method that allows sensitive and selective analysis of such organic substances is chromatography. However, it has disadvantages, such as the considerable duration of the analysis, complex sample preparation, the high cost and complexity of the equipment, and the need for pure reagents. All this makes screening of organic substances impossible. At the same time, a sensitivity at the level of 10-5 - 10-4 M is usually quite sufficient for the analysis of such substances, particularly in foods [285]. Examples of substances that require determination in food are sorbitol and choline, important participants in metabolism in the human body.

Electrochemical biosensors are a promising alternative for the determination of organic compounds in complex objects with minimal sample preparation. They are characterized by sufficient sensitivity, speed of analysis and, most importantly, high selectivity thanks to the use of enzymes as recognition elements. The price of one analysis is rather low due to the possibility of multiple use, the small amount of enzyme required and the cheapness of the electrodes (e.g.
screen-printed electrodes). This section describes the results of the application of the developed modified electrodes for the determination of choline and sorbitol. The determination of sorbitol was carried out using the glassy carbon electrode modified with a non-porous CNT layer and a biocomposite SiO2-DSDH film with optimal parameters (as described in chapter 4, paragraph 4.3). The choline analysis was conducted using a gold screen-printed electrode modified with a biocomposite SiO2-ChOx film with optimal parameters (as described in chapter 3, paragraph 3.2). The deposition of the silica biocomposite was performed in both cases by the electrochemically-assisted deposition method.

Determination of sorbitol using a glassy carbon electrode modified with CNT and a SiO2-sorbitol dehydrogenase film

Sorbitol is an alcohol with a sweet flavor contained in fruits. It is widely used in the food industry (food additive E420) as a sweetener [286], as well as in the cosmetic and pharmaceutical industries. Very often it is used along with fructose as a sweetener in foods for people with diabetes. In the human body, sorbitol is formed from glucose and is converted to fructose by DSDH. Its abundance in the tissues leads to the swelling that occurs in diabetes [287]. It is also fermented by the bacteria of the large intestine to acetate and H2, which leads to diarrhea, severe disorders of the gastrointestinal tract and weight loss when consumed in excess [288]. These symptoms have been reported for the abuse of sorbitol in amounts of more than 10 g/day. Thus, monitoring of sorbitol in food is an important analytical task.

For verification, the "added - found" method was used. The results of the determination of sorbitol additions in buffer solution are presented in Table 5.1.

Interference of some substances with the determination of sorbitol

As objects of study, we selected products of the food and cosmetic industries that may contain sorbitol.
These products are dietary sweets, chewing gum, and cosmetic products. Carbohydrates and polyhydric alcohols do not interfere with the determination of 1 mM sorbitol, including mannitol, a stereoisomer of sorbitol, indicating the high selectivity of the modified electrode. Ascorbic acid has quite a significant interfering influence due to its non-enzymatic oxidation on the electrode at the operating potential of 0.5 V. However, the low working potential (0.5 V) enables an approach for its masking: the introduction of Fe(III). It was found that even a fivefold excess of Fe(III) does not interfere with the determination of sorbitol (Table 5.2). Given that the objects of study contain sorbitol in significantly higher amounts than ascorbic acid, 3 mM of Fe(NO3)3 is sufficient for the masking and does not interfere with the determination of sorbitol. According to literature data [292,293], the masking effect of Fe(III) can be explained by two factors: the formation of a transient complex of Fe(III) with ascorbic acid and the subsequent oxidation of ascorbic acid by the ferric iron in the complex, forming Fe(II) and dehydroascorbic acid. The presence of anionic surfactant leads to a gradual decrease of the signal when the contact time of the electrode with the solution exceeds 10 minutes, probably due to leaching of the enzyme from the film [294]. Consequently, when analyzing cosmetic products containing surfactants, prolonged (over 10 minutes) contact of the modified electrode with the test solutions should be avoided. Thus, given the absence of interfering effects of most of the investigated compounds and the ability to mask ascorbic acid, the developed modified electrode can be used for the determination of sorbitol in food and cosmetic samples.
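The "added - found" verification used above reduces to a simple recovery calculation. The sketch below illustrates it with made-up replicate values, not the data of Table 5.1, using a tabulated Student t value for the 95% confidence interval with n = 3 replicates.

```python
import numpy as np

# Illustrative "found" concentrations (mM) for a 1.00 mM sorbitol addition
added = 1.00
found = np.array([0.97, 1.03, 0.99])

mean = found.mean()
sd = found.std(ddof=1)            # sample standard deviation
recovery = 100.0 * mean / added   # %
rsd = 100.0 * sd / mean           # relative standard deviation, %

# 95% confidence interval half-width, t(0.95, df=2) = 4.303
t95 = 4.303
ci = t95 * sd / np.sqrt(len(found))
print(f"found = {mean:.2f} ± {ci:.2f} mM, recovery = {recovery:.0f}%, RSD = {rsd:.1f}%")
```

A recovery close to 100% with a small confidence interval covering the added value indicates the absence of significant systematic error in the determination.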
Determination of sorbitol in food and cosmetic samples

As objects of study we used the toothpaste for children «Oral B Stages berry bubble» (manufacturer «Procter & Gamble Co.», Germany), chewing gum «Orbit» (manufacturer LLC «Wrigley», Russia), biscuits "Diabetic" (producer PJSC "Kharkiv Biscuit Factory", Ukraine) and shower gel "White honey" (producer "Yaka", Ukraine). The composition of the objects is given in Appendix G. The results of sorbitol determination in these objects of the food and cosmetic industries are shown in Table 5.3. The obtained data are characterized by satisfactory accuracy and reproducibility. The sorbitol content found in the "shower gel" object is perhaps somewhat underestimated due to the presence of anionic surfactants, which lead to a decrease in the signal.

For the determination of sorbitol, electrochemical, titrimetric, polarimetric, chromatographic and enzymatic methods are used. The titrimetric method of sorbitol analysis used in the pharmacopoeia is based on the formation of its complex with copper, the concentration of which is determined by iodometric titration [295]. There is also a method of back iodometric titration of the periodate excess remaining after sorbitol oxidation in sulfuric acid medium [296]. However, these methods are neither sensitive nor selective; in fact, they determine the total content of polyols and glucose. Another approach is the direct electrochemical detection of sorbitol on platinum electrodes [297,298]. Although chromatographic methods of analysis provide selective and sensitive determination of sorbitol in many objects, their common drawbacks are expensive equipment, complex sample preparation and the significant duration of the analysis, as well as the need for specially trained personnel and high requirements for reagent purity.

In enzymatic methods of analysis, DSDH is most often used together with diaphorase, which catalyzes the reduction of iodonitrotetrazolium chloride to iodonitrotetrazolium formazan (Fig.
5.2), while measuring the absorbance of the latter photometrically at λ = 492 nm [303]. There are also ready-made test kits for the determination of sorbitol and xylitol in food by this method [304,305].

Sorbitol + NAD+ --(sorbitol dehydrogenase)--> Fructose + NADH

Fig. 5.2. Schemes of the reactions of the enzymatic determination of sorbitol

This method is very selective and sensitive, but the significant consumption of enzymes and cofactors, together with their high cost (especially of diaphorase), makes it unattractive from an economic point of view. In addition, the method is not rapid, and the error of determination increases because of the use of a complex two-enzyme system. The advantages of the developed method for sorbitol determination with the modified GCE-CNT-SiO2-DSDH electrode over the titrimetric, polarimetric and electrochemical methods are the high selectivity of determination in the presence of other polyhydric alcohols and carbohydrates, as well as the lower limit of detection. Compared with chromatographic methods, the technique requires minimal sample preparation, is rapid, and can be used by unqualified personnel. The sensitivity of this method is sufficient to determine sorbitol in the majority of objects. The method can be applied for screening the sorbitol content in food and cosmetic products, as well as for the analysis of biological fluids if the sensitivity is increased. To date there are only a few examples of the development of amperometric biosensors based on DSDH for sorbitol determination (Table 5.4). However, in most cases these sensors were used only for the analysis of model solutions, not real objects. Compared with them, the developed GCE-CNT-SiO2-DSDH electrode has high stability and reproducibility of results, and its limit of detection is sufficient to determine sorbitol in most objects. An additional advantage is the simple modification procedure and the absence of a mediator.
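Under the scheme in Fig. 5.2, one mole of formazan is produced per mole of sorbitol, so the photometric readout at 492 nm maps linearly to concentration through the Beer-Lambert law. The sketch below illustrates this conversion; the molar absorptivity value and the function name are illustrative assumptions, not literature constants.

```python
def sorbitol_from_absorbance(a_492, epsilon=1.99e4, path_cm=1.0, dilution=1.0):
    """Sorbitol concentration (mol/L) from formazan absorbance at 492 nm,
    assuming 1:1 stoichiometry and Beer-Lambert behavior (A = eps * l * c).
    `epsilon` (L mol^-1 cm^-1) is an illustrative placeholder value."""
    return a_492 / (epsilon * path_cm) * dilution
```

Any dilution made during sample preparation is folded back in through the `dilution` factor.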
Determination of choline using a gold screen-printed electrode modified with a SiO2-choline oxidase film

Choline is a quaternary ammonium base belonging to the B group of vitamins (B4), although the human body is able to synthesize it. Nevertheless, it must be present in the human diet; the recommended dose is 425 and 550 mg per day for women and men, respectively [306]. In the body, choline participates in the synthesis of the important neurotransmitter acetylcholine; it also takes part in the regulation of insulin levels, in the transport of fat in the liver (as part of some phospholipids) and in the formation of cell membranes. Choline deficiency leads to serious disorders: lipopexia and liver damage, kidney damage, and bleeding. Excessive consumption of choline, however, leads to the so-called "fish odor syndrome", sweating, excessive salivation and reduced blood pressure [129]. Foods rich in choline are veal liver, egg yolks, milk, spinach and cauliflower. In addition, choline is artificially added to some specialized foods such as baby food, vitamin formulas and sports drinks. Control of the choline content in these foods is an important analytical task. For example, infant vitamin formula is almost the only source of choline for infants, and its lack in food can lead to severe developmental

Results of the determination of choline additions in phosphate buffer solution are presented in Table 5.5. They are characterized by satisfactory accuracy and reproducibility.

Table 5.5. Results of the determination of choline additions in phosphate buffer

Interference of some substances with the determination of choline

The influence of substances contained in food and able to interfere with the determination of choline with the modified electrode was studied. The effect of the substances at their average concentrations in the corresponding objects was investigated (Appendix G).
Since urea and uric acid are lacking in food, their

As seen from the results, the main macrocomponents do not interfere with the determination of choline. One should note the absence of an interfering effect of ethanol even at relatively high concentrations. The presence of equimolar amounts of Cu(II) leads to a decrease in the signal; it is known from the literature that copper cations inhibit ChOx [309], but the presence of copper at such high concentrations in the studied objects is ruled out. Ascorbic acid significantly interferes with the determination of choline due to its non-enzymatic oxidation on the electrode at 0.7 V.

Elimination of the interfering influence of ascorbic acid

Given that ascorbic acid is a common interfering component in biosensor development, the literature offers several ways to eliminate its impact. The most common is the use of semipermeable membranes that restrict the access of interfering substances to the electrode according to the size or charge of their molecules [310,311]. However, this approach often reduces the sensitivity of the biosensor to the analyte [176]. Another promising approach is therefore the elimination of the effect of reductants by their prior oxidation on an oxidant membrane [175]. The composition of such membranes can vary, but one of the best oxidants for this purpose is manganese(IV) oxide [173,174]. The disadvantages of such membranes are the low reproducibility of the signal and the slow response time. Therefore, to simplify the fabrication of the modified electrode, we decided to add MnO2 powder directly to the analyte solution so that it could oxidize ascorbic acid before the determination. Dry MnO2 powder (10 mg per 10 mL of solution) was added to the solution, which was stirred with a magnetic stirrer for 30 min. The sample was then filtered through a paper filter and used for the determination of choline as described in paragraph 5.2.1.
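The masking capacity implied by this dosage (10 mg of MnO2 per 10 mL, i.e. 1 mg/mL) can be cross-checked with a short sketch, assuming a 1:1 molar reaction of MnO2 with ascorbic acid (an assumption consistent with the dosage figures quoted in this section; the function name is ours).

```python
M_MNO2 = 86.94  # molar mass of MnO2, g/mol

def mno2_needed_mg_per_ml(ascorbic_mM, excess=1.0):
    """Minimal MnO2 dosage (mg per mL of sample) assuming a 1:1 molar
    oxidation of ascorbic acid; `excess` adds a safety factor."""
    # mmol per mL of sample times g/mol gives mg per mL
    return ascorbic_mM * 1e-3 * M_MNO2 * excess
```

For 10 mM ascorbic acid this gives about 0.87 mg/mL, so the 1 mg/mL dosage used here covers ascorbic acid concentrations up to roughly 11.5 mM at 1:1 stoichiometry.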
Applied potential +0.7 V.

To test the impact of MnO2 on the choline signal, its determination was conducted with AuSPE-SiO2-ChOx in a series of model solutions (Table 5.7). From the obtained data it can be concluded that MnO2 does not interact with choline, because the amount of choline found remains constant before and after treatment with MnO2 (solutions 1, 2, Table 5.7). At the same time, the presence of ascorbic acid leads to an overestimation of the expected results of the choline determination (solution 3, Table 5.7). However, after treatment of this solution with MnO2 (solution 4, Table 5.7), the amount of choline found matches the amount added. Thus, treatment of a 0.1 mM ascorbic acid solution with MnO2 eliminates its interfering impact on choline determination with AuSPE-SiO2-ChOx. According to the stoichiometry of the reaction [312], an MnO2 amount of 1 mg/mL is sufficient to eliminate the interfering effect of ascorbic acid up to a solution concentration of 10 mM. Since the ascorbic acid content should not exceed 5 mM after sample preparation, this quantity of MnO2 is enough to mask its influence.

Determination of choline in food products

The infant formula «Bebi» was chosen as the object of study; its composition is listed in Appendix G. However, attempts to determine choline in the food mixture using the standard photometric method with Reinecke salt [314] were unsuccessful because of the low sensitivity and reproducibility of the method, while the chromatographic technique requires the use of special columns.

Comparison of the standard methods of choline determination with the developed technique

Spectrophotometric, chromatographic and enzymatic methods of analysis are used for the determination of choline in foods. Its analysis is complicated because choline can occur either in free form or as ester derivatives of phosphoric acid, so preliminary hydrolysis is necessary before determining total choline [129].
Historically, the first photometric method for choline determination is based on the formation of an insoluble colored compound with Reinecke salt, whose absorbance is measured in methanolic solution at 520 nm [314]. A gravimetric variant of this method is also possible [315]. Despite its simplicity, this method is time-consuming and of low sensitivity (loss of choline during washing is possible), and it requires the use of toxic reagents. In addition, the method is not selective: the determination of choline is hindered by most other amines. Chromatographic determination of choline is possible using liquid [316-318] or gas [318,319] chromatography.

NMR spectroscopy can also be used for the determination of choline [322]. The method allows the choline content to be determined in its various forms with great accuracy, but it is unsuitable for most laboratories because of the high cost of the equipment and its low linearity and sensitivity. Enzymatic methods for choline determination are based on its oxidation with ChOx (after preliminary hydrolysis of the esters by phospholipase) with the release of H2O2. The hydrogen peroxide reacts with phenol and 4-aminoantipyrine contained in the mixture in the presence of peroxidase, giving a colored reaction product, which is detected photometrically at a wavelength of 505 nm [313,323]. There are also modifications of this method with other dyes [324]. This method of choline determination is quite expensive, given the high cost of the enzymes, and has a significant error of determination with photometric signal detection. In addition, the determination of choline by this method is influenced by many reductants that can react with hydrogen peroxide; their influence can be removed using activated carbon [325]. The developed method of choline determination with AuSPE-SiO2-ChOx has significant advantages over the existing methods.
In particular, the use of planar technology, relatively cheap reagents and small quantities of enzyme allows a large number of electrodes to be produced at little cost. The sensitivity and selectivity of the modified electrode are sufficient to determine choline in foods and biological fluids with minimal sample preparation. The short analysis time and its simplicity allow the developed technique to be used by unskilled personnel. Compared with other known biosensors based on ChOx, the developed modified electrode is characterized by a fast response and a low detection limit, which can be explained by the porous structure of the SiO2 film that facilitates the diffusion of reactants inside it (Table 5.9). The modification procedure is simple and, unlike most other biosensors, does not use a mediator, which could worsen the analytical characteristics and reproducibility of the biosensor. The sensitivity of the developed electrode is slightly lower than in some works [168,307,326], but it is quite sufficient to determine choline in foods, as was shown in the analysis of real objects. It should be noted that ChOx immobilized on the electrode surface is characterized by a low apparent Michaelis constant (Table 5.9), indicating the preservation of its native structure due to the biocompatibility of SiO2 materials. In addition, immobilization in the SiO2 film increases the stability of the ChOx structure; for example, large amounts of ethanol do not lead to its deactivation. The developed electrode is stable and can be used repeatedly (as was shown in section 3.2.5), which is particularly important for biosensors.

Conclusions to chapter 5.

The prospects of applying the developed modified electrodes as amperometric biosensors for the analysis of real objects were shown. A technique for the amperometric determination of sorbitol using a glassy carbon electrode modified with a biocomposite SiO2-sorbitol dehydrogenase film was developed.
The calibration graph was linear in the concentration range 5×10⁻⁴–3.5×10⁻⁵ M, and the detection limit was 1.6×10⁻⁴ M. Equimolar amounts of sucrose, glucose, urea, mannitol and glycerol do not interfere with the determination of sorbitol.

ABBREVIATIONS

Fig. 1.1. Schemes of amperometric biosensors of the 1st (a), 2nd (b) and 3rd (c) generations.
Fig. 1.2. Scheme of the reactions occurring during sol-gel synthesis (on the example of TEOS): hydrolysis (a), condensation (b), polycondensation (c).
Fig. 1.3. Scheme of different biosensor configurations based on sol-gel materials [33].
Fig. 1.4. Scheme of electrochemically-assisted deposition.
Fig. 1.5. Structure of single-walled (a) and multi-walled (b) carbon nanotubes.
Fig. 1.6. Scheme of electrophoretic deposition of CNT.

reactions of dehydrogenases, turning into the reduced form NADH. Owing to the electrochemical activity of the NAD+/NADH pair, amperometric detection of concentration changes of either component of this pair can be linked to the initial concentration of the substrate analyte. However, NADH interacts with the electrode, so the position and height of the peak depend on the material and structure of the electrode as well as on the pH and the buffer solution [213]. Although the formal redox potential of the NAD+/NADH pair is quite low (approximately -0.515 V vs. Ag/AgCl [213-215]), the process is characterized by a significant overvoltage on conventional electrodes (0.7 V on platinum, 1.0 V on gold) because of the irreversible oxidation of NADH [215].
Electrochemical detection at such high potentials would lead to significant interference from other electroactive substances and even water electrolysis, preventing any analytical application of such biosensors. The electrochemical oxidation of NADH at carbon electrodes has been reported to cause electrode poisoning due to contamination of the surface by reaction products [216], including adsorbed NAD+ dimers [217]. This leads to a decrease of the anodic current of NADH oxidation and consequently to a loss of electrode sensitivity with time, greatly complicating the development of stable biosensors. Therefore, research efforts are aimed at searching for new materials and for ways to reduce the NADH oxidation potential. Carbon nanotubes have established a reputation as a promising material for such enzyme electrodes [Joseph, «Carbon-Nanotube Based Electrochemical Biosensors»; 100]. Thus, the use of an electrode modified with carbon nanotubes reduces the NADH oxidation potential by almost 0.5 V compared to an unmodified carbon electrode [Musameh, «Low-potential stable NADH detection at carbon-nanotube-modified glassy carbon electrodes»]. Even better results can be obtained by pre-activating the carbon nanotubes by anodic oxidation [218] or microwave processing [219]. This treatment leads to partial destruction of the nanotube structure and the emergence of highly active centers [217,218], as well as to the appearance of quinone-like groups on the nanotube surface, which act as mediators [219,220]. Another way to solve the problem of the high NADH oxidation potential may be the use of electrochemical mediators, including various quinones, aromatic diamines, phthalocyanines, ruthenium complexes and others [215,221]. Such mediators may simply be added to the solution or immobilized on the electrode by adsorption, covalent binding or (electro)polymerization [221,222].
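A rough sense of the overvoltage problem described above can be obtained with a one-line calculation; the -0.515 V formal potential is the value quoted in the text, while the function name is an illustrative convention of ours.

```python
E_FORMAL = -0.515  # formal potential of NAD+/NADH vs. Ag/AgCl, V (from the text)

def nadh_overpotential(e_applied_v):
    """Overpotential of NADH oxidation at a given applied potential (V)."""
    return e_applied_v - E_FORMAL
```

On platinum (oxidation near +0.7 V) the overpotential exceeds 1.2 V, while the roughly 0.5 V reduction reported for CNT-modified electrodes brings it down toward 0.7 V.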
Phenothiazine and phenoxazine redox dyes [223] attract attention as mediators due to the high rate of their heterogeneous electron-transfer reactions with NADH and are considered among the most promising electrochemical mediators for coenzyme oxidation [222,224]. Biosensors based on such dyes have already been developed: Meldola Blue [225,226], Nile Blue [227], Methylene Blue [228-230] and Methylene Green [231,232].

3.4, «Sigma») from Aspergillus niger (Mw ≈ 160000, activity 15-50 units/mg), isoelectric point 4.2;
- Choline oxidase (ChOx, EC 1.1.3.17, «Sigma») from Alcaligenes sp. (Mw ≈ 95000, activity ≥ 10 units/mg), isoelectric point 4.1;
- D-Sorbitol dehydrogenase (DSDH, EC 1.1.1.15), synthesized at the Department of Microbiology of the University of Saarland (Germany).

Poly(dimethyldiallylammonium chloride) (PDDA, low molecular weight, 20 wt% in water, «Aldrich»), poly(ethyleneimine) (PEI, high molecular weight, water-free, «Aldrich») and the cationic surfactant cetyltrimethylammonium bromide (CTAB, «Sigma») were also used.

The electrochemical experiments were carried out with a PalmSens potentiostat/galvanostat («Palm Instruments BV», The Netherlands), an EmStat2 potentiostat («Palm Instruments BV», The Netherlands) and a μStat 400 bipotentiostat/galvanostat («DropSens», Spain). A three-electrode cell was used containing (except for the gold screen-printed electrode) the corresponding working electrode, an Ag/AgCl reference electrode (Ag/AgCl, 3 M KCl, «Metrohm», Germany) and a platinum auxiliary electrode. Data from the potentiostat were transferred to a personal computer and processed with the «PSTrace», «DropView» and «Origin» software. The following working electrodes were used:
- Platinum nanofibers (Pt-Nfb), synthesized and deposited on the surface of a glass substrate at the Physico-Chemical Institute of the University of Giessen (Germany) by electrospinning [242].
There were one-, two- and four-layer Pt-Nfb assemblies with resistances of 10, 2 and 0.5 kΩ, respectively;
- Glassy carbon electrode (GCE) in the form of plates (Sigradur®, «HTW Hochtemperatur-Werkstoffe», Germany), modified with CNT;
- Gold screen-printed electrode (AuSPE) («DropSens», Spain). This type of electrode contains a gold working electrode, a silver reference electrode and a gold auxiliary electrode.

(Ag/AgCl); all potentials are given versus this reference. All voltammograms were obtained in buffer solutions of the appropriate pH (unless otherwise specified) in static mode (without stirring). The potential scan rate varied from 20 to 100 mV/s; in some cases an extended range from 5 to 200 mV/s was used. The coenzyme NAD+ (1 mM) was added to the solution during measurements in the study of the voltammetric characteristics of immobilized DSDH. Amperograms were obtained under constant stirring, and the applied potential was kept constant throughout the measurement. At the beginning of each measurement, an equilibration time of at least 200 s was allowed. Hydrodynamic voltammetry was carried out in dynamic mode (with constant stirring): the potential was changed stepwise in 50 mV increments every 30 s, and the equilibrium current at the end of each interval was measured. When necessary, voltammetric measurements were performed in anaerobic mode: oxygen was removed by bubbling argon for 15 min prior to the experiment, and this atmosphere was kept in the cell during the whole measurement.

4.2), the general feature of all oxidases is the release of hydrogen peroxide in the enzymatic reaction. The initial concentration of the substrate can be determined by registering the change in the H2O2 concentration. One of the best materials for the electrochemical detection of H2O2 is platinum [250]. The oxidation of hydrogen peroxide on a platinum surface occurs with high signal reproducibility and at low potential due to the catalytic action of platinum oxides.
This fact can be exploited to avoid the interfering effects of reductants, thereby increasing the selectivity of platinum electrodes in comparison with other types of electrodes [251]. However, even with the relatively low reaction potential [252], a further reduction of the H2O2 oxidation potential would be desirable.

3.1.1. Morphological characterization of platinum nanofiber assemblies

Fig. 3.1 depicts typical microscopic characterization data for the Pt-Nfb assemblies. SEM (Fig. 3.1a) and AFM (Fig. 3.1b) imaging reveals that the diameter of individual fibers is in the range 30-60 nm. Under the conditions used here, the overall thickness of the two-layer assembly remains limited to ca. 100 nm, as shown in the AFM profile (Fig. 3.1b). Platinum nanofibers deposited by electrospinning are highly interconnected, forming a 2D network of Pt nanoelectrodes on the glass substrate, suggesting good conductivity. This is indeed the case, as the conductivity of the assembly varied typically from 0.5 to 10 kΩ/cm (two-point measurement), depending on the film density (which can be tuned by adjusting the deposition time and/or the number of deposited layers). As also shown, individual Pt nanofibers can be easily evidenced by AFM, and their diameter can be evaluated quite accurately when they are deposited directly onto the glass support (see the first two features around X = 1 μm in the line scan in Fig. 3.1b).

Fig. 3.1. Characterization of platinum nanofibers. (a) SEM picture;

Fig. 3.2. (a) Cyclic voltammogram of Pt-Nfbs in 0.5 M H2SO4 solution, scan rate 100 mV/s; (b) dependence of the estimated electroactive surface area of Pt-Nfbs on the number of nanofiber layers.

Fig. 3.4. (a) Cyclic voltammograms of Pt-Nfb (1), Pt-Nfb-SiO2 (2) and Pt-Nfb-SiO2-GOx (3) in PBS (pH 6.0) containing 10 mM glucose (2, 3).
Scan rate 20 mV/s; (b) amperometric response of Pt-Nfb (1), Pt-Nfb-SiO2 (2) and Pt-Nfb-SiO2-GOx (3) to the additions of glucose (on the graph). Supporting electrolyte: PBS pH 6.0. Applied potential +0.6 V.

Fig. 3.5. (a) Amperometric response of Pt-Nfb-SiO2-GOx, modified with EAD times of 1 s (1), 2 s (2), 3 s (3), 4 s (4), 5 s (5) and 10 s (6), to the additions of glucose (on the graph). Supporting electrolyte: PBS pH 6.0. Applied potential +0.6 V; (b) dependence of the Pt-Nfb-SiO2-GOx electrode sensitivity to glucose at +0.6 V as a function of the sol-gel electrodeposition time.

Fig. 3.6. AFM images and cross-section profiles (along the white lines shown in the images) of silica-modified Pt-Nfbs prepared using increasing electrolysis times: 1 s (a), 2 s (b), 3 s (c), 4 s (d), 5 s (e) and 10 s (f).

Fig. 3.7. SEM images of silica-modified Pt-Nfbs prepared for various electrodeposition times: 1 s (a), 2 s (b), 3 s (c) and 10 s (d).

Fig. 3.8. Amperometric response of a bare Pt electrode (1), 2-layer Pt-Nfb (2) and 4-layer Pt-Nfb (3) modified with a SiO2-GOx film by the EAD method (-1.2 V, 3 s), to the additions of glucose (on the graph). Supporting electrolyte: PBS pH 6.0. Applied potential +0.6 V.

Fig. 3.9. (a) SEM image of GCE modified with Pt-Nps by means of electrochemical generation; (b, c) cyclic voltammograms of GCE (b) and GCE-Pt-Nps (c) in 0.025 M PBS (pH 7.0) in the absence (1) and presence of 3 mM H2O2 (2). Scan rate 50 mV/s.

Fig. 3.10. (a) Amperometric response of GCE-Pt-Nps to the additions of H2O2 (on the graph). Supporting electrolyte: 0.025 M PBS (pH 7.0), applied potential +0.7 V; (b) cyclic voltammograms (20 cycles) of GCE-Pt-Nps in the presence of H2O2 (1 mM). Supporting electrolyte: 0.025 M PBS (pH 7.0), scan rate 50 mV/s.

Fig. 3.11.
Comparative cyclic voltammograms of GCE (a), GCE-Pt-Nps (b), platinum (c) and gold (d) screen-printed electrodes in the absence (1) and presence (2) of 1 mM H2O2. Supporting electrolyte: 0.025 M PBS (pH 7.0), scan rate 50 mV/s. The dotted line shows the proportionality of the scales of (a, b) and (c, d).

Fig. 3.12. Cyclic voltammograms (a) and hydrodynamic voltammograms (b) of AuSPE modified with a SiO2-ChOx film in the absence (1) and presence (2) of 2 mM choline. Supporting electrolyte: 0.025 M PBS (pH 7.5), scan rate 50 mV/s.

Fig. 3.13. Amperometric curves (a) and corresponding dependences of the current on the choline concentration in the initial range (b) for AuSPE-SiO2-ChOx, obtained at potentials of -1.0 (1), -1.1 (2), -1.2 (3) and -1.3 (4) V with a deposition time of 20 s. Supporting electrolyte: 0.025 M PBS (pH 7.5), applied potential +0.7 V.

Fig. 3.14. (a) Cyclic voltammograms of AuSPE-SiO2-ChOx (at the optimum CTAB concentration) in the absence (1) and presence of 2 mM choline: 1st (2), 10th (3), 20th (4) and 30th (5) cycles of the potential scan (50 mV/s); (b) amperometric curve of AuSPE-SiO2-ChOx in the presence of choline (0.5 mM). Applied potential +0.7 V. Supporting electrolyte: 0.025 M PBS (pH 7.5).

Fig. 3.15. (a) Dependence of the amperometric response of AuSPE-SiO2-ChOx at a potential of +0.7 V on the choline concentration. Supporting electrolyte: 0.025 M PBS (pH 7.5); (b) Lineweaver-Burk plot for the determination of the apparent Michaelis constant.

Fig. 3.16. Dependence of the amperometric response of AuSPE-SiO2-ChOx at +0.7 V on the choline concentration in the solution. Supporting electrolyte: 0.025 M PBS (pH 7.5).
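The apparent Michaelis constant extracted via the Lineweaver-Burk plot (Fig. 3.15b) amounts to a simple linear fit of 1/I versus 1/[S]. The sketch below illustrates the procedure on synthetic, noiseless data; the function names and the example numbers are illustrative, not the data of this work.

```python
def lineweaver_burk(concentrations, responses):
    """Return (Km_app, Imax) from a least-squares fit of the linearized
    Michaelis-Menten equation: 1/I = (Km/Imax)*(1/[S]) + 1/Imax."""
    xs = [1.0 / c for c in concentrations]
    ys = [1.0 / i for i in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    imax = 1.0 / intercept
    return slope * imax, imax  # Km = slope * Imax

# Synthetic check: data generated with Km = 2.0 mM and Imax = 10 (arb. units)
conc = [0.5, 1.0, 2.0, 4.0, 8.0]             # mM
resp = [10.0 * c / (2.0 + c) for c in conc]  # Michaelis-Menten responses
km, imax = lineweaver_burk(conc, resp)
```

On noiseless data the fit recovers the generating parameters exactly; with real data, the reciprocal transform amplifies the noise of the low-concentration points, which is the known weakness of this linearization.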
The response of glucose oxidase immobilized in SiO 2 -film on the surface of platinum nanofibers significantly depends on the thickness of biocomposite film.Electrogeneration of platinum particles on the surface of the glassy carbon electrode increases the sensitivity of the latter to hydrogen peroxide. However, response of such modified electrode is unstable due to the gradual leaching of nanoparticles from the electrode surface, showing little promise of its application in the development of biosensors.The use of gold screen-printed electrode is promising for the immobilization of choline oxidase in SiO 2 -film. The application of simple one-step method of electrochemically-assisted deposition allows the immobilization of the enzyme on the surface of the gold electrode with preservation of its activity. The increase of the stability and sensitivity of the modified electrode to choline can be achieved by altering the deposition parameters and by introduction of the surfactants in the sol.The best performance of modified electrode was obtained using deposition at -1.1 V for 20 s and CTAB concentration in sol 12.2 mM. The developed sensitive element of the choline biosensor has a wide linear range and low detection limit.CHAPTER 4. IMMOBILIZATION OF SORBITOL DEHYDROGENASE ON THE ELECTROPHORETICALLY-DEPOSITED CARBON NANOTUBESThe use of dehydrogenases in biosensors are usually less common than corresponding oxidases due to their low stability and difficulty of coenzyme NAD + /NADH detection. For example, there are only a few works about the development of biosensors based on sorbitol dehydrogenase (DSDH) -enzyme that can be used to identify the substrate sorbitol. However, these works are limited to the definition of sorbitol in model solutions, data on the analysis of real objects are missing. 
Encapsulation in a thin silica film can increase the stability of enzymes in the immobilized state, making this method appropriate for the development of dehydrogenase-based biosensors. Given the lack of literature data on the application of the EAD method to immobilize dehydrogenases, the development of methodological approaches for applying EAD to the immobilization of DSDH in a SiO2 film is of interest, as is the use of the developed biosensor for sorbitol determination in real objects. The development of sensitive dehydrogenase-based biosensors requires selective, sensitive and reliable detection of the coenzyme NADH formed in the enzymatic reaction (see paragraph 1.5.2). As noted in the literature review, metal electrodes (including platinum and gold) are unsuitable for such purposes because of the high potential of NADH oxidation and the poisoning of the electrode surface by reaction products, which leads to a decrease in sensitivity. A better choice is electrodes based on allotropic modifications of carbon. For example, carbon nanotubes (CNT) are promising as a matrix for such biosensors due to the low NADH oxidation potential on them. However, their handling is difficult because of their small size, negligible solubility in inorganic solvents and tendency to aggregate. As previously stated in the literature review (paragraph 1.3.2), the electrophoretic deposition method is able to solve these problems, allowing controlled electrode modification with CNT from low-concentration dispersions. CNT coatings obtained in this way have not yet been used as a matrix for enzyme immobilization, but they have great potential and are promising for electrode modification, including by the sol-gel method. This chapter presents the results of the study of the CNT layer obtained by EPD on the surface of GCE, including its activity towards NADH oxidation. The prospects of its combination with the EAD method for the immobilization of dehydrogenases are shown.
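Because the peak current of methylene green adsorbed on the CNT layer scales linearly with scan rate (cf. Fig. 4.4b), the dye behaves as a surface-confined couple, and its surface coverage can be estimated from the charge under the voltammetric peak. A minimal sketch of this textbook estimate follows; the numbers in the example are illustrative, not measured values from this work.

```python
F = 96485.0  # Faraday constant, C/mol

def surface_coverage(peak_charge_c, n_electrons, area_cm2):
    """Surface coverage (mol/cm^2) of an adsorbed redox species from the
    charge under its voltammetric peak: Gamma = Q / (n * F * A)."""
    return peak_charge_c / (n_electrons * F * area_cm2)

# Illustrative numbers: 19.3 uC of peak charge, 2-electron couple, 0.1 cm^2
gamma = surface_coverage(19.3e-6, 2, 0.1)  # on the order of 1e-9 mol/cm^2
```

Integrating the background-corrected peak of the cyclic voltammogram supplies the charge Q used here.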
Fig. 4.1. SEM images of the nanotube deposit on the surface of GCE for (a) a short deposition time (t = 5 s) and (b) a long deposition time (t = 60 s).

Fig. 4.2. (a) AFM image and (b) cross-section profile (along the white line) of the CNT layer after 30 s of deposition. Image size 20 × 20 μm.

Fig. 4.3. Dependence of the thickness of the carbon nanotube layer measured by AFM on the time of electrophoretic deposition.

Fig. 4.4. (a) Cyclic voltammograms under anaerobic conditions of GCE-CNT with adsorbed methylene green at different scan rates: (1) 5 mV/s, (2) 10 mV/s, (3) 20 mV/s, (4) 50 mV/s, (5) 100 mV/s, (6) 200 mV/s. (b) Oxidation (1) and reduction (2) peak currents of methylene green versus scan rate.

Fig. 4.6. Cyclic voltammograms of (a) GCE, (b) GCE-CNT (prepared by EPD) and (c) GCE-CNT (drop-coated) in 0.067 M PBS (pH 7.5) without (solid lines) and with (dashed lines) 5 mM NADH. Scan rate 20 mV/s.

Fig. 4.7. Hydrodynamic voltammograms for 5 mM NADH in 67 mM PBS (pH 7.5) at (1) GCE and (2) GCE-CNT electrode.

Fig. 4.8. Cyclic voltammograms of GCE (a) and GCE-CNT (b) in the presence of 5 mM NADH: 1st (1), 2nd (2) and 3rd (3) cycles. Supporting electrolyte: 0.067 M PBS (pH 7.5), scan rate 20 mV/s.

Fig. 4.9. Linear sweep voltammograms of the GC electrode (1) and GC-CNT electrode (2) in the sol-gel solution. Scan rate 100 mV/s.

Fig. 4.10. Cyclic voltammograms of GCE (1) and GCE-CNT electrode (2) modified with a thin SiO2-DSDH film by sol-gel EAD in 0.1 M Tris-HCl buffer (pH 9.0) containing 5 mM D-sorbitol and 1 mM NAD+. Scan rate 20 mV/s. The sol-gel EAD conditions were -1.8 V, 16 s for GCE and -1.3 V, 16 s for the GCE-CNT electrode.

Fig. 4.11.
(a) Amperometric response of the GCE-CNT electrode modified with a SiO2-DSDH film at different EAD times to the additions of D-sorbitol: (a) 0 s, (b) 10 s, (c) 14 s, (d) 16 s, (e) 17 s, (f) 25 s. E = +0.5 V. (b) Sensitivity of the GCE-CNT electrode modified with the SiO2-DSDH film to D-sorbitol at +0.5 V as a function of the time used for the sol-gel EAD. Supporting electrolyte: 0.1 M Tris-HCl buffer (pH 9.0) containing 1 mM NAD+. The time of electrophoretic deposition of the carbon nanotubes was 60 s for all samples.

Fig. 4.12. Electrochemical response of the GCE-CNT electrode modified with SiO2-DSDH under optimal parameters (-1.3 V, 16 s) to D-sorbitol at +0.5 V as a function of the electrophoretic deposition time and the corresponding carbon nanotube layer thickness (from equation (4.1)). Supporting electrolyte: 0.1 M Tris-HCl buffer (pH 9.0) containing 1 mM NAD+.

Fig. 4.13. (a) Dependence of the amperometric response of GCE-CNT-SiO2-DSDH on the D-sorbitol concentration at a potential of +0.5 V. Supporting electrolyte: 0.1 M Tris-HCl (pH 9.0) with 1 mM NAD+.

Fig. 4.14A and B show edge views of a typical CNT-PS film obtained by electrophoretic deposition on the surface of a glassy carbon electrode. The PS beads and carbon nanotubes are homogeneously mixed, forming a uniform deposit with a thickness of about 4.5 μm. The apparently small amount of carbon nanotubes in the interspace between the beads is a visual artifact explained by the low contrast of the image. The real quantity of nanotubes is much larger, as proved by the images of the film after template removal (see parts C and D in Fig. 4.14). After calcination of the PS beads, the film structure is maintained, forming a cellular 3D network with pore sizes corresponding to those of the removed beads (≈ 500 nm). These pores are separated by a 40-60 nm thick CNT layer. The thickness of the template-free macroporous CNT film (Fig.
4.14C) was only slightly smaller than the one of CNT-PS film (Fig. 4.14A). Fig. 4 . 4 Fig. 4.14. SEM-images on the edge (left column) and under the angle of 35 (right column) of a CNT-PS composite film deposited in the presence of PS-beads at 0.2 mg/mL, before (A, B) and after (C, D) template removal. Time of electrophoretic deposition was 135 s. Fig. 4 . 4 Fig. 4.16. (a) Variation of the CNT-PS layer thickness (135 s deposition) as a function of the concentration of PS-beads in the initial dispersion before (1) and after (2) template removal. (b) Variation of the CNT-PS (0.2 mg/mL) deposit thickness as a function of the electrophoretic deposition time, as measured with AFM or profilometry, before (1) and after template removal (2), as well as for the samples deposited without PS beads (3). 4 .Fig. 4 .Fig. 4 . 19 .Fig. 4 .Fig. 4 . 4441944 Fig. 4.17. (a) Cyclic voltammograms of MG adsorbed on the macroporous CNT layers ([PS-beads] = 0.1 mg/mL) with different duration of electrophoretic deposition: (1) 15 s, (2) 45 s, (3) 75 s, (4) 105 s, (5) 135 s. Supporting electrolyte: 0,067 М PBS (рН 7,5), scan rate 20 mV/s. (b) Peak current of MG as a function of electrophoretic deposition time of non-macroporous CNT layers (1,1') and macroporous CNT layers ([PS beads] = 0.1 mg/mL) (2,2'). Fig. 4 . 4 Fig. 4.22. Anodic peak current values corresponding to the adsorbed MG alone or in the presence of 1mM NADH, as a function of the electrophoretic deposition time applied to prepare the macroporous CNT electrodes. Fig. 4 . 23 . 423 Fig. 4.23. Cyclic voltammograms of (a, b) non-macroporous and (c, d) macroporous CNT film ([PS beads] = 0.2 mg/mL) prepared by electrophoretic deposition for (a, c) 15 s and (b, d) 135 s and modified with adsorbed MG and SiO 2 - Fig. 5 . 1 . 51 Fig. 5.1. Typical amperogram obtained during D-sorbitol determination with GCE-CNT-SiO 2 -DSDH using method of standard addition. Applied potential +0,5 V. 
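The standard-addition evaluation behind amperograms such as the one in Fig. 5.1 can be sketched as follows. This is a minimal illustration with hypothetical currents; the function and variable names are ours, not from the thesis.

```python
# Standard-addition method: the steady-state current is fitted linearly
# against the added standard concentration; the unknown concentration is
# read from the (negative) x-intercept of the fitted line.

def standard_addition(added, currents):
    """Return the sample concentration (same units as `added`) from the
    least-squares line current = a*added + b, giving C_x = b/a."""
    n = len(added)
    mx = sum(added) / n
    my = sum(currents) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, currents))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Hypothetical currents (arbitrary units) after spiking 0, 0.2, 0.4, 0.6 mM:
c_sample = standard_addition([0.0, 0.2, 0.4, 0.6], [1.0, 1.5, 2.0, 2.5])
# -> 0.4 (mM)
```

Because the sample matrix is present in every measurement, matrix effects cancel to first order, which is why standard additions are used throughout this chapter for food and cosmetic samples.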
Sample preparation was performed in accordance with the State Standard [294]. Samples were weighed on an analytical balance so that the sorbitol concentration in the final volume would be between 20 and 50 mM. The sample was manually crushed in a porcelain mortar, quantitatively transferred into a 100 mL beaker, 40 mL of bidistilled water heated to 60-70 °C was added, and the mixture was stirred for one hour on a heated magnetic stirrer. Additions of a standard 50 mM sorbitol solution were injected into some of the samples. The resulting suspension was filtered through a paper filter into a 50 mL flask, and the volume was adjusted to the mark. Sorbitol in the resulting solution was then determined by the method of standard additions, as described in paragraph 5.1.1.
* The stability of the electrode is expressed in the format X% / Y, where X is the percentage of the original response that persists after the electrode has been used over the time interval Y.
Fig. 5.3. Typical amperogram obtained during choline determination with AuSPE-SiO2-ChOx using the method of standard additions. Applied potential +0.7 V.
Fig. 5.4 shows that the current increase on the AuSPE upon addition of an ascorbic acid solution after contact with MnO2 is very small (Fig. 5.4a). For comparison, the …
Sample preparation was carried out according to [313]. The sample was weighed on an analytical balance, transferred to a beaker, and 30 mL of 1 M HCl was added. The beaker was covered with a watch glass and heated at 60-70 °C in a water bath for 4 hours with occasional stirring. Heating with acid promotes the separation of milk proteins and the hydrolysis of ester-bound choline to the free form. After cooling, the mixture was filtered through filter paper into a 50 mL flask. The pH of the solution was adjusted to 3.0-4.0 with 10 M NaOH.
Treatment with 50 mg of MnO2 was performed as described in section 5.2.2.1, and the volume of the solution was adjusted to the mark with water. The resulting solution was stored in a refrigerator at 4 °C for up to one week. Choline in the resulting solution was determined by the method of standard additions, as described in paragraph 5.2.1. The results of the determination are given in Table 5.8. The data obtained show satisfactory accuracy and reproducibility and agree with the choline content declared by the manufacturer (Appendix G).
…ion chromatography [320] and capillary electrophoresis [321]. Mass spectrometric [316, 317], ion-exchange membrane [320] and indirect UV [321] detection can be applied. The general drawbacks of chromatographic techniques are long and complex sample preparation, high cost, and the impossibility of analysis in the field.
* The stability of the electrode is expressed in the format X% / Y, where X is the percentage of the original response that persists after the electrode has been used over the time interval Y.
CONCLUSIONS
…sorbitol. The interfering effect of ascorbic acid can be eliminated by the introduction of iron(III). The results of sorbitol determination by the developed technique in food and cosmetic products show satisfactory accuracy and reproducibility.
A technique for the amperometric determination of choline using a gold screen-printed electrode modified with a SiO2-choline oxidase film was proposed. The calibration graph was linear in the concentration range from 1·10-5 to 6·10-4 M, and the detection limit was 5·10-6 M. Ten-fold excesses of sucrose, lactose, urea and heavy metals, except Cu(II), do not interfere with the choline determination. The interfering effect of ascorbic acid can be eliminated by prior mixing with MnO2. The results of choline determination in food by the developed technique show satisfactory accuracy and reproducibility.
The advantages of the developed techniques over known methods of sorbitol and choline determination are simple sample preparation, higher selectivity and speed of analysis, and the low cost of a single determination. A simple one-step procedure for the immobilization of oxidases and dehydrogenases in a SiO2 film by electrodeposition was unified for different types of electrodes. The examples of choline oxidase and sorbitol dehydrogenase showed that the immobilized enzymes bind their substrates more actively and retain their activity longer than in solution.
- Electrodes modified with platinum nanomaterials are more sensitive to hydrogen peroxide than platinum macroelectrodes. At the same time, these electrodes coated with a biocomposite SiO2-enzyme film exhibit analytical characteristics similar to those of platinum macroelectrodes.
- Modification of a glassy carbon electrode with carbon nanotubes (CNT) by electrophoretic deposition increases the sensitivity and stability of the amperometric response to the coenzyme NADH, which is promising for the development of dehydrogenase-based biosensors. The use of CNT shifts the NADH oxidation potential by 0.25-0.3 V towards negative values, thereby increasing the selectivity of its detection.
- The use of a gold screen-printed electrode is promising for the immobilization of choline oxidase in a SiO2 film. The sensitivity of the modified electrode to choline is increased by optimizing the parameters of electrochemically assisted deposition; the maximum response was obtained with deposition for 20 s at a potential of -1.1 V.
- An electrode modified with CNT and sorbitol dehydrogenase shows significant advantages in the sensitivity and selectivity of sorbitol detection compared with an electrode without CNT. The analytical signal of the modified electrode depends on the electrodeposition parameters and the CNT layer thickness.
Best characteristics were obtained using electrochemically assisted deposition at -1.3 V for 16 s and a CNT layer thickness of 100 to 150 nm.
Fig. D.1. AFM image of the surface of a GCE modified with CNT by EPD.

Table 1.1. Main methods of obtaining sol-gel films
Method | Advantages | Drawbacks

Table 3.2. Influence of the CTAB concentration in the sol on the long-term response stability of AuSPE-SiO2-ChOx
C(CTAB), mM | Relative sensitivity after two weeks, %
0.0 | < 1
6.1 | 9.8
12.2 | 52.7
21.3 | 18.9
42.7 | 8.5

Table 5.1. Results of the determination of sorbitol additions in buffer solution using GCE-CNT-SiO2-DSDH; n = 3, P = 0.95
Added, mM | Found, mM | Sr
0.40 | 0.41 ± 0.04 | 0.04
0.60 | 0.60 ± 0.05 | 0.03
0.90 | 0.90 ± 0.04 | 0.02

Table 5.3. Results of sorbitol determination in food and cosmetics samples; n = 3, P = 0.95
Sample | Added, mg/g | Found, mg/g | Sr
Toothpaste for children «Oral B Stages berry bubble» | 0 | 117 ± 20 | 0.06
Toothpaste for children «Oral B Stages berry bubble» | 296 | 411 ± 33 | 0.04
Chewing gum «Orbit» | 0 | 549 ± 46 | 0.03
Chewing gum «Orbit» | 694 | 1251 ± 122 | 0.04

Table 5.4. Comparative characteristics of known amperometric biosensors based on DSDH
Electrode | Modifier | Response time, s | Linear range, mM | Limit of detection, μM | Detection potential, mV | Stability* | RSD, % | Real object analysis | Ref.

Table 5.7. Influence of MnO2 on the results of 0.8 mM choline determination using AuSPE-SiO2-ChOx
No. | Solution content | Choline found, mM
1 | Choline | 0.81 ± 0.06
2 | Choline + 10 mg MnO2 | 0.81 ± 0.06
3 | Choline + 0.1 mM ascorbic acid | 0.91 ± 0.09
4 | Choline + 0.1 mM ascorbic acid + 10 mg MnO2 | 0.81 ± 0.07

Table 5.8. Results of choline determination in a model solution and food products.
n = 3, P = 0.95
Sample | Added | Found, x ± Δx | Sr
Model solution* | 0.30 mM | 0.34 ± 0.04 mM | 0.05
Model solution* | 1.00 mM | 1.09 ± 0.12 mM | 0.05
Infant formula «Bebi» | 0 mg/g | 0.95 ± 0.14 mg/g | 0.06
Infant formula «Bebi» | 1.12 mg/g | 1.99 ± 0.19 mg/g | 0.04

Table 5.9. Comparative characteristics of known amperometric biosensors based on ChOx
Electrode | Modifier | Response time, s | Km, mM | Linear range, mM | Limit of detection, μM | Detection potential, mV | Stability* | RSD, % | Real object analysis | Ref.
Pt | Diaminobenzene-Prussian blue | 30 | 1.2 | 0.05 ÷ 2 | 50 | 0 | 85% / 2 m. | 8 | - | [327]
Pt | Polyvinylferrocene chlorate | 70 | 2.32 | 0.004 ÷ 1.2 | 4 | 750 | - | 4.6 | - | [328]
Carbon paste | Prussian blue | 30 | 2 | 0.02 ÷ 2 | 20 | 50 | 1 m. | 14 | - | [329]
Pt | Au-NPs-PVA-glutaraldehyde | 20 | 0.78 | 0.02 ÷ 0.4 | 10 | 400 | 80% / 14 d. | 7.4 | - | [264]
GCE | PDDA-FePO4-Prussian blue | 2 | 0.47 | 0.002 ÷ 3.2 | 0.4 | 0 | 95% / 14 d. | 3.2 | - | [307]
Pt | SiO2-CNT | 15 | - | 0.005 ÷ 0.1 | 0.1 | 160 | 75% / 1 m. | 4.8 | + | [168]
Cookies "Diabetic":
halid: 01749849
lang: en
domain: sdv.aen
timestamp: 2024/03/05 22:32:07
year: 2013
url: https://hal.univ-lorraine.fr/tel-01749849/file/DDOC_T_2013_0075_SHEVCHENKO_ZAITSEVA.pdf
ABBREVIATIONS
DPC - diphenylcarbazide;
-ED3A - ethylenediaminetriacetic acid groups immobilized on the surface of a silica-based material;
FTIR - Fourier transform infrared spectroscopy;
ICP-AES - inductively coupled plasma atomic emission spectroscopy;
TPD-MS - temperature-programmed desorption with mass spectrometric detection;
XPS - X-ray photoelectron spectroscopy;
XFS - X-ray fluorescence spectroscopy;
MCM-41 - silica-based mesoporous material with structurally ordered hexagonal pores (d = 1.5-10 nm);
MCM-41-SH - thiol-functionalized mesoporous silica of the MCM-41 type;
MCM-41-SH/SO3H-X - a series of thiol-functionalized mesoporous silicas partially oxidized by H2O2, where X = 1-6 (6 corresponds to the most fully oxidized sample and 1 to the sample not oxidized by H2O2);
MCM-41-SH/SO3-2.X - a series of thiol-functionalized mesoporous silicas partially oxidized by air oxygen after exposure to air for different periods of time, from 1 day (X = 1) to 2 months (X = 3);
SBA-15 - structured mesoporous silica-based material with two-dimensional hexagonal pores (d = 5-9 nm);
-SH - mercaptopropyl groups immobilized on the surface of a silica-based material;
SiO2-ED3A - silica gel grafted with ethylenediaminetriacetic acid groups;
SiO2-SH - silica gel grafted with mercaptopropyl groups;
SiO2-SH/ED3A - silica gel grafted simultaneously with mercaptopropyl and ethylenediaminetriacetic acid groups;
-SO3H - propylsulfonic acid groups immobilized on the surface of a silica-based material;
-S-S - dipropyl disulfide groups immobilized on the surface of a silica-based material.

Introduction
Mainly used in industry for various purposes, chromium compounds are classified as persistent environmental pollutants.
Chromium is used in the manufacture of alloys and steel (up to 136 thousand tonnes per year), of pigments and paints, in leather tanning and in electroplating processes (up to 90 thousand tonnes per year). The presence of chromium in the environment is mainly attributable to anthropogenic activities. For example, chromium concentrations measured in industrial effluents in India reach 2-5 g L-1, so the total chromium discharge amounts to as much as one tonne per year. Unlike most other metals, the toxicity of chromium depends on its oxidation state. Chromium(III) compounds, which exist mainly as positively charged species in aqueous solution, are characterized by low kinetic lability and are therefore less prone to biological adsorption. In contrast, chromium(VI), a strong oxidant, exists in solution in anionic form. Being isomorphous with essential mineral salts, chromium(VI) is a hundred times more toxic than chromium(III), which translates into strong carcinogenic activity. Modern methods for treating effluents containing toxic chromium(VI) are mainly based on the immobilization of chromium compounds in solid form. These include, in particular: 1) electrodialysis or electrocoagulation, and 2) selective adsorption on activated carbon, functionalized resins, mineral oxides, or natural biopolymers. These methods are often applied to chromium removal from concentrated solutions (100 mg L-1). A significant breakthrough in the field of selective Cr(VI) adsorption methods concerns the use of polyfunctional materials exhibiting both reducing and binding properties. Examples of such materials are adsorbents composed of sulfides derived from biomass, of iron hydroxides and metal sulfides, or hybrid organo-mineral adsorbents.
At the surface of such materials, Cr(VI) can be reduced to Cr(III), which can in turn be specifically bound by a functional group or co-precipitated as an oxide. The efficiency of such polyfunctional materials depends on a set of factors, such as the reducing capacity, the affinity of the functional groups for the Cr(III) and Cr(VI) species, and the surface engineering of the materials. That said, adsorbents of this type can have drawbacks, the main ones being non-reproducible adsorption properties and slow kinetics (particularly true for biomass), rather high operational pH ranges (pH > 5, whereas most chromium-containing effluents are acidic), low adsorption efficiencies (< 80 %) and the formation of huge amounts of sludge. Intelligent design of polyfunctional materials could circumvent the main problems mentioned above. In this thesis, we examine the behaviour of two bi-functionalized silicas, either mesostructured or not (i.e., silica gel, denoted here SiO2), and their reactivity towards chromium species. The functional groups selected to modify the silica samples for this purpose are, on the one hand, mercaptopropyl and propylsulfonic acid (MCM-SH,SO3H), and on the other hand mercaptopropyl and ethylenediaminetriacetate (SiO2-SH/ED3A). The research began with structurally ordered materials of the MCM-41 type, offering a very high specific surface area while ensuring fast and easy access to the functional groups.
Starting from MCM-41 modified with thiol groups oxidized to various degrees, a set of adsorbent samples characterized by different ratios of grafted thiol/sulfonic acid groups (at a constant sulfur content of 1 mmol g-1) was synthesized. Particular attention was paid to characterizing the surface chemical composition, which is expected to strongly influence the sorption properties. A simple method based on a single instrumental technique (conductometric titration) was applied for the simultaneous determination of thiol and sulfonic acid groups on MCM-SH,SO3H. In a second step, the experimental conditions likely to allow effective trapping of Cr(VI) on MCM-SH,SO3H were defined, in particular by studying the effects of pH, solid/solution ratio, and adsorbent composition (i.e., SH/SO3H ratio). Based on the collected data, a reduction-sorption mechanism explaining the immobilization process was proposed. In a second approach, another type of bi-functional silica (SiO2-SH,ED3A) was suggested in order to improve the affinity (sorption properties) of the material towards the Cr(III) species generated upon Cr(VI) reduction. Silica gel was chosen as the matrix for grafting controlled amounts of mercaptopropyl and ethylenediaminetriacetate groups on its surface. The performance of such bi-functional adsorbents was evaluated with respect to various experimental parameters likely to influence the sorption-reduction process (pH, solid/solution ratio, concentration) in order to determine the sequestration mechanism and compare it with the previous adsorbents. Finally, it will be shown that the second adsorbent also offers the advantage of being usable under dynamic conditions (column experiments).
This thesis is structured as follows:
Chapter I presents the literature review;
Chapter II lists the materials and reagents used for the synthesis of the bi-functionalized silicas, the methods implemented to examine their physico-chemical properties, and the procedures applied to study the sorption processes;
Chapter III presents a conductometric method for the simultaneous determination of thiol and sulfonic acid groups on MCM-SH,SO3H, discussing in parallel the effects of the SH/SO3H ratio (set by controlled partial oxidation of thiolated silicas with varying amounts of H2O2);
Chapter IV is dedicated to the study of Cr(III) adsorption on thiolated (MCM-SH) and sulfonated (MCM-SO3H) silicas, of the reduction-sorption of Cr(VI) on MCM-SH, and to the discussion of the mechanisms of chromium fixation at the surface of bi-functional MCM-SH,SO3H;
Chapter V is devoted to the evaluation of the optimal conditions for the removal …

Aqueous chemistry of chromium
Chromium oxidation states range from 0 to +VI; in the environment, however, it mainly occurs in only two oxidation states, Cr(III) and Cr(VI). Cr(IV) and Cr(V) are formed as intermediates during the oxidation or reduction of Cr(III) and Cr(VI), respectively. The most stable oxidation state is Cr(III), and considerable energy would be required to convert it into other states (see Fig. 1.1). The negative standard potential (E0) of the Cr(III)/Cr(II) couple indicates that Cr(II) is readily oxidized to Cr(III); Cr(II) is stable only in the absence of oxidants (anaerobic conditions).
Cr(VI) in acidic medium shows a very high positive redox potential [Ball], which means that it is strongly oxidizing and unstable in the presence of electron donors [Shriver]. Considering the equilibria between Cr(III) and Cr(VI), the decisive role played by pH and redox potential must be emphasized. The formation of hydroxo complexes by Cr(III), and of polynuclear species by both Cr(III) and Cr(VI), should also be taken into account. To show the conditions of pH and potential under which each species is thermodynamically stable, a Pourbaix diagram is useful (see Fig. 1.2, drawn for a total chromium concentration in solution of 10-4 M) [Nieboer]. The approach, however, does not take kinetic constraints into account, and when Cr is introduced into, or exists in, the natural environment, its actual form may differ from that predicted by the diagram. The composition of aerated waters containing trivalent chromium changes in time, since such solutions are affected by hydrolysis, complexation, redox reactions and sorption. In the absence of complexing agents (other than H2O and OH-), Cr(III) exists as the hexaaquachromium(III) complex (a relatively strong acid with pK ~ 4) and its hydrolysis products [Baes] CrOH^2+, Cr(OH)2^+ and Cr(OH)3. These dominate successively within pH 4-10. This range includes the pH values characteristic of natural waters, so the cited hydroxo complexes are the main forms in which Cr(III) exists in the environment.
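The steep pH dependence that shapes the Cr(VI) region of the Pourbaix diagram can be illustrated with the Nernst equation for the half-reaction HCrO4^- + 7 H+ + 3 e- = Cr3+ + 4 H2O. This is only a sketch: the standard potential E0 ≈ 1.35 V is an assumed value from common tables, not taken from this chapter.

```python
import math

E0 = 1.35        # V, assumed literature value for the HCrO4-/Cr3+ couple
NERNST = 0.0592  # V per decade at 25 °C

def formal_potential(pH, ratio=1.0):
    """Nernst potential (V vs SHE) of HCrO4- + 7 H+ + 3 e- -> Cr3+ + 4 H2O
    for a given activity ratio [HCrO4-]/[Cr3+] (unity by default)."""
    return E0 - (7 * NERNST / 3) * pH + (NERNST / 3) * math.log10(ratio)
```

The potential falls by 7 × 0.0592/3 ≈ 0.138 V per pH unit, which is why Cr(VI) is a strong oxidant only in acidic media.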
An increase of the Cr(III) concentration (> 10-4 M) leads to the formation of polynuclear hydroxo complexes (Cr2(OH)2^4+, Cr3(OH)4^5+, Cr4(OH)6^6+) [Rai]. The process of condensation through the formation of hydroxo bridges is known as "olation" (eq. 1.4). Cr(III) aqua complexes are highly inert, meaning a low rate of exchange of water molecules for other ligands; the half-time of such exchange can reach several days [Perrin]. Besides complexation with water molecules and hydroxide ions, the mobility of trivalent chromium may also decrease through binding with macromolecular systems such as humic acids [Geisler]. The main forms of Cr(VI) found in natural waters are HCrO4^- and CrO4^2-. At pH < 1 the strong acid H2CrO4 is formed [Saleh; Sperling]. The distribution of these species according to pH is described by the following equations and constants [Allison] and is illustrated by Fig.
In acidic media, when the Cr(VI) concentration is higher than 10 mM, chromate species dimerize to form the orange-red dichromate ion [Allison]:
2 HCrO4^- = Cr2O7^2- + H2O, pK = -1.54 (1.8)
Cr(VI) does not give rise to an extensive series of polyacids and polyanions [Cotton; Greenwood] characteristic of somewhat less acidic oxides, such as those of V(V), Mo(VI) or W(VI). The reason for this is perhaps the greater extent of multiple bonding (Cr=O) for the smaller chromium atom. Polymerization beyond dichromate is apparently limited to the formation of trichromate (Cr3O10^2-) and tetrachromate (Cr4O13^2-) [Cotton]. Cr(VI) compounds are well soluble in water and are therefore highly mobile in the environment. They can be reduced to Cr(III) by electron donors present in water in the form of organic matter and/or reduced inorganic species. Since the reduction of HCrO4^- is accompanied by the consumption of H+ (eq. 1.9), a decrease in acidity decreases the formal potential (see Fig.). The high redox potential of the Cr(VI)/Cr(III) couple makes the oxidation of Cr(III) by dissolved oxygen insignificant, and only the presence of manganese oxides (γ-MnO2, β-MnO2, α-MnO2, δ-MnO2) or H2O2 leads to its effective oxidation in ecological systems [Pettine]. The probability of transformation of Cr(III) into Cr(VI) increases with increasing pH [15].
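The ~10 mM threshold for dimerization follows directly from equilibrium (1.8). A short numerical sketch (the function name and the neglect of CrO4^2- at low pH are our simplifying assumptions):

```python
import math

K_DIM = 10 ** 1.54  # 2 HCrO4- <-> Cr2O7^2- + H2O, pK = -1.54 (eq. 1.8)

def dimer_fraction(c_total):
    """Fraction of total Cr(VI) (mol/L) present as dichromate in acidic
    solution, neglecting CrO4^2-.  Mass balance:
    c_total = m + 2*K_DIM*m**2, with m = [HCrO4-]."""
    m = (math.sqrt(1 + 8 * K_DIM * c_total) - 1) / (4 * K_DIM)
    return 2 * K_DIM * m * m / c_total
```

At 0.1 mM total Cr(VI) less than 1 % is dimerized, at 10 mM about a third, and at 0.1 M roughly two thirds, consistent with the statement above.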
It becomes less effective for "aged" solutions of Cr(III), in which inert precipitates of chromium hydroxides have formed [Csoba]. Yet, if precipitates have not formed, manganese oxide can effectively oxidize Cr(III) (10-5 M) within 90 days (3 < pH < 10) [Pettine]. Cr(III) complexes with organic ligands are also not as easily oxidized as their aqua/hydroxo analogues, which means that the trivalent state is better stabilized by ligands other than H2O and/or OH-. In contrast to Cr(III) complexes, Cr(VI) species are only weakly sorbed on inorganic surfaces, making them the most mobile form of Cr in the environment. Hydrogen peroxide, which reduces Cr(VI) to Cr(III) under acidic conditions [Pettine], is one of the possible reductants in aqueous solutions; others include Fe(II), sulfide, sulfite and a number of organic compounds [Pettine]. The nature and behaviour of the various Cr forms found in wastewater can be quite different from those present in natural waters because of the altered physicochemical conditions of effluents originating from various industrial sources. The presence and concentration of Cr forms in discharged effluents depend mainly on the Cr compounds applied in the technological process, on pH, and on the organic and/or inorganic wastes coming from the material processing.
Thus, hexavalent Cr will dominate in wastewater from the metallurgical industry, the metal finishing industry (hard chromium plating), the refractory industry, and the production or application of pigments (chromate colour pigments and corrosion-inhibition pigments). Cr(III) will be found mainly in wastewater from tanneries, the textile industry (printing, dyeing) and decorative plating. The presence of various inorganic and organic ligands, as well as the pH of the effluents, determines the Cr forms by influencing their solubility, sorption and redox reactions. For example, although Cr(III) is the most expected Cr form in tannery wastewater, redox reactions occurring in sludge can increase the concentration of the hexavalent form. Under slightly acidic or neutral pH conditions the poorly soluble Cr(OH)3·aq should be the preferred form in this type of wastewater, but the high content of organic matter originating from hide processing is effective in forming soluble organic Cr(III) complexes [Stein; Walsh].
Occurrence and biological effects of chromium
In Nature, chromium occurs quite extensively, mostly in its trivalent state in the form of minerals, mainly chromite. Cr(III) can also be found in fruits, vegetables and meat. It is recognised as an essential trace element in human and animal diets and is important in glucose metabolism. Most diets are considered deficient in chromium, for which the recommended daily intake is 200 µg for adults [Anderson]. Chromium can also occur as hexavalent chromium, persisting in polyatomic anionic form as CrO4^2- under strongly oxidizing conditions. Natural chromates are rare; Cr(VI) and Cr(0) are mainly formed as a result of manufacturing processes.
Chromium at zero valency exists as metallic chromium and in many chromium-containing alloys, including stainless steels. In these cases, chromium at the surface is spontaneously oxidized to Cr(III), creating a passive film which prevents further oxidation and which is responsible for corrosion resistance. Hexavalent chromium occurs predominantly in chemical manufacturing processes and, to a much lesser extent, in metallurgical processes such as ferrochromium and stainless steel production, stainless steel welding, and some high-temperature furnace operations that use chromium-containing refractories. Chemical manufacturing processes which cause the formation of Cr(VI) are, namely:
- The manufacture of chromates and dichromates through the roasting of chromite ore. All other industrial chromium chemicals are in turn made from sodium dichromate or, to a much lesser extent, from sodium chromate.
- Chromium plating and surface treatment of metals. Conventional electrolytic processes for chromium plating use chromic acid to deposit chromium metal on the surfaces of other metals.
- Leather tanning. Basic chromium sulphate, in which chromium is present in the trivalent state, has been used in leather tanning for nearly 150 years. Although hexavalent chromium salts were historically converted to chromium sulphate by tanners, this practice is now rare in most countries, where tanneries are supplied with chromium tanning agents that contain no detectable levels of hexavalent chromium.
- Spray painting. There is evidence of an increased risk of lung cancer in workers engaged in the manufacture of zinc chromate and the sparingly soluble chromate compounds classified as carcinogenic. These materials are used in anti-corrosion primer paints applied to metal surfaces. It is therefore essential to minimise exposure by using local exhaust ventilation and personal protective equipment.
- Refractory industries.
Although chromium-based refractories are generally considered to be inert, some hexavalent chromium compounds may be present during the manufacturing stages. Many chromium-containing refractories are used in processes where the conditions may lead to the formation of hexavalent chromium, particularly high temperature operations in atmospheres containing oxygen. -Wood preservation industry. Chromium based wood preservatives are commonly used in the treatment of timber to extend its useful life. The chromium acts to fix the The toxicity of trivalent and metallic forms of chromium by conventional exposure routes is low. Trivalent chromium is poorly absorbed by the body and does not easily cross cell membranes. Metallic and alloyed forms need to be ionized in order to cross any cell membrane. The most significant occupational health effects are related to hexavalent chromium compounds. In aqueous solutions Cr(VI) exists as oxo-anions which are isomorphic to vitally important sulfates and phosphates [START_REF] Langrrd | One hundred years of chromium and cancer: A review of epidemiological evidence and selected case reports[END_REF]. The carcinogenic effect of hexavalent chromium is thought to relate to the ability of chromate ions to cross cell membranes, where subsequent chemical valence reduction is accompanied by genetic damage (see Fig. 1.5.) The relationship between exposure and effect is complicated by the fact that extra-cellular body fluids can detoxify hexavalent chromium by reducing it to the trivalent state [START_REF] Kotas | Chromium Occurrence in the Environment and Methods of Its Speciation[END_REF]. ! 15! Fig. 1.5. Hypothetical model of chromium transport and toxicity in plant roots [START_REF] Shanker | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
Toxicological profile of chromium[END_REF].
Exposure to such compounds may result in acute effects such as skin and nasal irritation, ulceration, nasal septum perforation and respiratory sensitisation. The most serious health effect is respiratory cancer. Epidemiological studies have confirmed that long-term exposure to high levels of hexavalent chromium, as encountered historically in chromate chemicals and chromate pigments manufacture and in electrolytic plating processes using chromic acid, has led to a measurable excess incidence of respiratory cancer with a latency period in excess of 15 years [25, 26, 27, 28, 29]. Because of these health effects, all commercially available hexavalent chromium compounds are heavily regulated and in many areas are classified as occupational carcinogens. Table 1.1 shows the classification of the carcinogenicity of chromium compounds established by the international organization IARC [START_REF]Monographs on the Evaluation of Carcinogenic Risks to Humans[END_REF].
Chromium removal techniques
Several well-documented reviews and monographs are available dealing with chromium removal from wastewaters [START_REF] Beszedits | Chromium removal from industrial wastewaters[END_REF][START_REF] Soundararajan | Biosorption of chromium ion[END_REF][START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Gode | Removal of chromium ions from aqueous solutions by adsorption method[END_REF][START_REF] Sharma | Chromium removal from water: a review[END_REF][START_REF] Sen | Chromium removal using various biosorbents
[END_REF][START_REF] Miretzky | Cr(VI) and Cr(III) removal from aqueous solution by raw and modified lignocellulosic materials: A review[END_REF][START_REF] Malaviya | Physicochemical technologies for remediation of chromium-containing waters and wastewaters[END_REF]. It turns out that the remediation schemes proposed first were directed at reducing the carcinogenic, soluble and mobile Cr(VI) (i.e., in acidic medium, pH ~2) to the less toxic and less mobile Cr(III), which forms insoluble or sparingly soluble precipitates (i.e., in alkaline medium, above pH ~9-10). Such methods, however, are applicable only to concentrated industrial wastewaters and have been considered undesirable due to the use of expensive chemicals, poor removal efficiency with respect to regulatory standards, and the production of large amounts of chemical sludge [START_REF] Cabatingan | Potential of biosorption for the recovery of chromate in industrial wastewaters[END_REF]. This has generated a huge amount of work directed at investigating other approaches to chromium remediation. Nowadays, existing technologies are mainly based on immobilization on solid supports, or on separation and filtration processes, associated or not with reduction/precipitation. They include:
-membrane filtration (membrane separation [START_REF] Kozlowski | Removal of chromium(VI) from aqueous solutions by polymer inclusion membranes[END_REF], ultrafiltration [START_REF] Ghosh | Hexavalent chromium ion removal through micellar enhanced ultrafiltration[END_REF][START_REF] Mungray | Removal of heavy metals from wastewater using micellar enhanced ultrafiltration technique: a review[END_REF], reverse osmosis [START_REF] Ozaki | Performance of an ultra-low-pressure reverse osmosis membrane (ULPROM) for separating heavy metal: effects of interference parameters[END_REF], dialysis/electrodialysis [START_REF] Mohammadi | Modeling of metal ion removal from wastewater by electrodialysis[END_REF][START_REF] Li | Concentration and purification of chromate from electroplating wastewater by two-stage electrodialysis processes[END_REF] or electro-deionization [START_REF] Xing | Variable effects on the performance of continuous electrodeionization for the removal of Cr(VI) from wastewater[END_REF], 48]) and other separation techniques (coagulation/electrocoagulation [START_REF] Parga | Characterization of electrocoagulation for removal of chromium and arsenic[END_REF][START_REF] Akbal | Comparison of electrocoagulation and chemical coagulation for heavy metal removal[END_REF], reduction/coagulation/filtration [START_REF] Qin | Hexavalent chromium removal by reduction with ferrous sulfate, coagulation, and filtration: a pilot-scale study[END_REF], flotation [START_REF] Matis | Recovery of metals by ion flotation from dilute aqueous solutions[END_REF]) or extraction processes (solvent extraction [START_REF] Salazar | Equilibrium and kinetics of Cr(VI) extraction with Aliquat 336[END_REF], chemical or electrochemical precipitation [START_REF] Roundhill | Methods and techniques for the selective extraction and recovery of oxoanions[END_REF], electrokinetic extraction [START_REF] Roundhill | Methods and techniques for the selective extraction and recovery of
oxoanions[END_REF], sedimentation [START_REF] Song | Sedimentation of tannery wastewater[END_REF], coagulation-flocculation-sedimentation [START_REF] Haydar | Coagulation-flocculation studies of tannery wastewater using combination of alum with cationic and anionic polymers[END_REF]); -ion exchange (resins, mainly anion exchangers for Cr(VI) [START_REF] Shi | Removal of hexavalent chromium from aqueous solutions by D301, D314 and D354 anion-exchange resins[END_REF][START_REF] Rafati | Removal of chromium (VI) from aqueous solutions using Lewatit FO36 nano ion exchange resin[END_REF][START_REF] Neagu | Removal of hexavalent chromium by new quaternized crosslinked poly(4-vinylpyridines)[END_REF] but also cation exchangers for Cr(III) [60,[START_REF] Cavaco | Evaluation of chelating ionexchange resins for separating Cr(III) from industrial effluents[END_REF], and ion exchange columns [START_REF] Kabir | Removal of chromate in trace concentration using ion exchange from tannery wastewater[END_REF][START_REF] Sahu | Removal of chromium(III) by cation exchange resin, Indion 790 for tannery waste treatment[END_REF][START_REF] Tang | Column study of Cr(VI) removal by cationic hydrogel for in-situ remediation of contaminated groundwater and soil[END_REF]); -selective adsorption (on various, often low cost, adsorbents [START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Pollard | Low cost adsorbents for waste and wastewater treatment: a review[END_REF][START_REF] Babel | Low cost adsorbents for heavy metals uptake from contaminated water: a review[END_REF] such as activated carbon [START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Fang | Cr(VI) removal from aqueous solution by activated carbon coated with quaternized poly(4-vinylpyridine)[END_REF], mineral oxides [START_REF] Bois | Experimental study 
of chromium adsorption on minerals in the presence of phthalic and humic acids[END_REF], functionalized resins [START_REF] Misra | Iminodiacetic acid functionalized cation exchange resin for adsorptive removal of Cr(VI), Cd(II), Ni(II) and Pb(II) from their aqueous solutions[END_REF][START_REF] Gode | Column study on the adsorption of Cr(III) and Cr(VI) using Pumice, Yarikkaya brown coal, Chelex-100 and Lewatit MP 62[END_REF], sol-gel-derived functional materials [71,[START_REF] Park | Adsorption of chromium (VI) from aqueous solutions using an imidazole functionalized adsorbent[END_REF][START_REF] Liu | Removal of Cr(III, VI) by quaternary ammonium and quaternary phosphonium ionic liquids functionalized silica materials[END_REF] or natural (bio)polymers [START_REF] Miretzky | Cr(VI) and Cr(III) removal from aqueous solution by raw and modified lignocellulosic materials: A review[END_REF][START_REF] Crini | Recent developments in polysaccharide-based materials used as adsorbents in wastewater treatment[END_REF]) and bioadsorption (on various biological materials [START_REF] Soundararajan | Biosorption of chromium ion[END_REF][START_REF] Moussavi | Biosorption of chromium(VI) from industrial wastewater onto pistachio hull waste biomass[END_REF][START_REF] Saha | Biosorbents for hexavalent chromium elimination from industrial and municipal effluents[END_REF][START_REF] Sahmoune | Advanced biosorbents materials for removal of chromium from water and wastewaters[END_REF]);
-and some other processes (photocatalytic reduction [START_REF] Kajitvichyanukul | Sol-gel preparation and properties study of TiO 2 thin film for photocatalytic reduction of chromium(VI) in photocatalysis process[END_REF], phytoremediation, … [START_REF] Roundhill | Methods and techniques for the selective extraction and recovery of oxoanions[END_REF]).
All these methods exhibit advantages and disadvantages and are most often applied to the removal of chromium from solutions containing relatively high initial chromium concentrations (i.e., > 100 mg L⁻¹). Adsorptive filtration and ion exchange are suitable for small-scale applications. Membrane technology is effective in removing both the hexavalent and trivalent species of chromium, but can suffer from membrane fouling and high costs [START_REF] Malaviya | Physicochemical technologies for remediation of chromium-containing waters and wastewaters[END_REF]. Adsorption, though likely to generate non-negligible amounts of sludge with associated disposal problems, has emerged recently among the most promising approaches for simple, efficient and selective chromium removal [START_REF] Mohan | Activated carbons and low cost adsorbents for remediation of tri-and hexavalent chromium from water[END_REF][START_REF] Gode | Removal of chromium ions from aqueous solutions by adsorption method[END_REF]. An interesting breakthrough in the field is the possibility of using adsorbents with both reductive and sorption properties in a single solid, giving rise to chromium immobilization according to a reduction-sorption process. In doing so, one part of the adsorbent has the propensity to reduce the most toxic Cr(VI) species while another part is likely to immobilize the so-generated Cr(III) moieties.
Examples of materials reported to exhibit such reduction-sorption capabilities include mainly sludge and/or sulfur-containing biomass and other biosorbents [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF]82,[START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF][START_REF] Escudero | Modeling of kinetics of Cr(VI) sorption onto grape stalk waste in a stirred batch reactor[END_REF][START_REF] Wu | Cr(VI) removal from aqueous solution by dried activated sludge biomass[END_REF][START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF][START_REF] Liu | Polyethylenimine modified eggshell membrane as a novel biosorbent for adsorption and detoxification of Cr(VI) from water[END_REF], but also organic-inorganic hybrids [71] or inorganic iron metal or sulfides [START_REF] Demoisson | Pyrite oxidation by hexavalent chromium: investigation of the chemical processes by monitoring of aqueous metal species[END_REF][START_REF] Cao | Remediation of Cr(VI) using zero-valent iron nanoparticles: kinetics and stoechiometry[END_REF]. Even if some
redox active centers (for Cr(VI) reduction) and/or complexing groups (for Cr(III) binding) can be identified [71,[START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF][START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF][START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF], the intrinsic complexity of these materials often makes it difficult to gain a deep understanding of the main chemical parameters affecting the overall uptake process.
Functionalized adsorbents for chromium sequestration
Today there exists a variety of modified sorbents created for the preconcentration of heavy metals. They can be classified by the type of matrix or of modifier. As modifiers, organic reagents and their complex compounds, mineral (heteropoly)acids and their salts, natural compounds and some microorganisms can be used [START_REF] Ostrovska | Voda. Indikatornye sistemy (Water, Indicator systems)[END_REF]. As matrices for adsorbents, synthetic and natural polymers, mineral carriers or inert materials which can be grafted with modifying compounds are used.
The chemical modification of silica-based adsorbents can be divided into groups according to the method of synthesis:
1. Covalent (chemical) bonding of functional groups at the surface of an adsorbent [START_REF] Lisichkin | Khimiya privitykh poverkhnostnykh soedinenii (Chemistry of Grafted Surface Compounds)[END_REF]:
by grafting [START_REF] Zaitsev |[END_REF], 93] (scheme 1.1, a). Modification of an adsorbent matrix by attachment of functional molecules to the surface of the pores; the groups are relatively isolated.
However, by employing just enough water in the process to form a monolayer on the pore surface, more continuous coatings of organosilanes may be obtained, leading to a high concentration of organic groups in the product. Excess water must be avoided, because it can lead to uncontrolled polymerization of the silylation reagents within the channels or external to the mesoporous adsorbent.
by the co-condensation principle [START_REF] Melde | Hybrid Inorganic -Organic Mesoporous Silicates Nanoscopic Reactors Coming of Age[END_REF] (polymerization, sol-gel technology), scheme 1.2. In this case the modifier can be incorporated into the matrix of the final material during its synthesis;
(1.2)
2. Non-covalent immobilisation [START_REF] Nadzhafova | Test paper for the determination of aluminim in solutions[END_REF] of functional groups at the surface by the principle of:
-impregnation (soaking) of the adsorbent matrix with a solution of the modifier;
-dispersion, electrostatic, dipole-dipole or hydrogen-bonding interactions.
Ion-exchange
The exchange reactions of Cr(VI) with anion exchangers are described by the following equations:
2R⁺-OH⁻ + CrO₄²⁻ ↔ R⁺₂-CrO₄²⁻ + 2OH⁻ (1.11)
R⁺₂-CrO₄²⁻ + CrO₄²⁻ + H⁺ ↔ R⁺₂-Cr₂O₇²⁻ + OH⁻
where R is the ion-exchanger matrix. NaOH solutions are applied for the regeneration of anion exchangers. As mentioned earlier, the industrial wastewaters formed during the chromium plating of metal surfaces also contain ions of other elements besides chromium. Therefore, the problem of separating the individual components, and first of all of recovering both chromium and the other metal ions, occupies an important position. The performance of the chromate ion-exchange process has been reported to be greatly influenced by the properties of the anion-exchange resins. Commercially available polymeric sorbents containing weak and/or strong base groups on the surface are used as anion exchangers.
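The 2:1 site-to-chromate stoichiometry implied by equilibria (1.11) sets an upper bound on the Cr(VI) uptake achievable for a resin of given exchange capacity. A minimal sketch of this conversion (the 3 meq g⁻¹ capacity used below is an illustrative figure, not a value from this work):

```python
# Illustrative conversion between ion-exchange capacity and Cr(VI) uptake,
# assuming the 2:1 site:chromate stoichiometry of equilibrium (1.11).
M_CR = 51.996  # g/mol, atomic mass of chromium

def max_cr_uptake_mg_per_g(capacity_meq_per_g):
    """Upper-bound Cr(VI) uptake (mg Cr per g resin) for a resin of given
    exchange capacity, if every pair of R+ sites binds one CrO4(2-)."""
    mmol_chromate = capacity_meq_per_g / 2.0  # two sites per divalent anion
    return mmol_chromate * M_CR               # 1 mmol CrO4(2-) carries 1 mmol Cr

# e.g. a typical strong-base resin with ~3 meq/g of exchange sites:
print(round(max_cr_uptake_mg_per_g(3.0), 1))  # 78.0 mg g-1
```

This ceiling is rarely reached in practice, since chromate competes with other anions present in the wastewater.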
Examples of strong anion exchangers are Amberlite IRA 400 [START_REF] Mustafa | Selectivity reversal and dimerization of chromate in the exchanger Amberlite IRA-400[END_REF] and IRA-900 [97]. Weak anion exchangers are represented by such resins as Amberlite [START_REF] Tenorio | Treatment of chromium plating process effluents with ion exchange resins[END_REF]. The capacity of ion exchangers depends on the type of their functional groups, the type of the matrix and the concentration of functional groups. A comparison of these values is given in
A high distribution coefficient for Cr(VI) removal was also found when applying anion-exchange fibres with pyridine groups [110]. Investigations showed that the basicity of the sorbent does not affect its selectivity for chromium ions. The capacity of the obtained resin with pyridine groups was estimated to be 130 mg g⁻¹. It was also found that fibres containing pyridine groups of low basicity are resistant to the oxidizing effect of Cr(VI) ions, which stabilizes their ion-exchange capacity in sorption-desorption cycles.
The principle of cation exchange (scheme 1.12) is used in order to isolate chromium in its trivalent form:
nR-SO₃⁻H⁺ + Mⁿ⁺ ↔ (R-SO₃⁻)ₙMⁿ⁺ + nH⁺ (1.12)
where Mⁿ⁺ is either hexaaquachromium(III) or one of its hydrolysis products.
For this purpose, cation exchangers with strong sulphonic acid groups (Lewatit 100 S), iminodiacetic acid groups (Amberlite IRC 718) or weak carboxylic acid groups (Chelex-100, Amberlite IRC 76) are used [115,116]. Removal of Cr(III) from industrial wastewaters is accompanied by some difficulties, particularly in the presence of sulphates. Literature data indicate the existence of many Cr(III) complexes in solution. Cr(III) complexes in amounts from 10% to 32% were found which were not sorbed by sulphonic cation exchangers [117]. Besides, in acid solutions the behaviour of the Cr(III)-cation-exchanger system becomes more complicated due to the formation of differently charged complexes.
Their relative amounts depend on the composition of the solutions and the conditions of their preparation. Thus, purple aqueous solutions of Cr(III) salts are highly inert due to the formation of aqua-complexes. Heating of such solutions in the presence of Cl⁻, Br⁻, I⁻ or NO₃⁻ promotes the substitution of water molecules in the inner coordination sphere by these anions.
Adsorption of Cr(III) by cation exchangers is complicated by the formation of coordination bonds between the acidic groups (carboxylic and sulfonic acid groups) and the main complexes of Cr(III), which affects its quantitative removal. In particular, it is seen from the adsorption isotherm of Cr(III) on a strongly acidic cation exchanger (sulfonic acid groups) [115] (see Figure 1.7) that in the concentration range 0.1-1 mmol L⁻¹ the binding of Cr(III) is not as effective as one would predict.
Cation exchange is thus less effective (due to the formation of coordination bonds between the acid groups of the cation exchanger and Cr(III) aqua-complexes, even in the case of such strong acid groups as sulfonic and phosphinic ones) in comparison with the anion exchange applied for Cr(VI) removal. Moreover, the sorption properties of cation exchangers are strongly affected by increasing ionic strength, and so they cannot be recommended for the effective removal of chromium in its trivalent form after preliminary reduction of Cr(VI).
Chelating adsorbents
Alternatively, selective adsorption of chromium can be performed with chelating adsorbents.
Reduction of Cr(VI) before its adsorption was used by Sumida [124] for the selective determination of both forms of chromium. This method is based on the selective adsorption of Cr(III) on the surface of a polymer sorbent modified with iminodiacetic acid groups. To do this, two identical columns were used, with a cartridge containing a reducing agent placed between them. Cr(III) was adsorbed on the first column, while Cr(VI) was reduced in the cartridge and then adsorbed on the second column in the form of Cr(III). Elution was performed with a 2 M solution of HNO₃.
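Isotherms such as the one in Figure 1.7 are commonly rationalized with the Langmuir model, q = q_max·K·C/(1 + K·C). A minimal sketch of recovering q_max and K from the linearized form C/q = C/q_max + 1/(K·q_max) (the data points below are synthetic, not those of ref. [115]):

```python
# Linearized Langmuir fit: C/q = C/q_max + 1/(K*q_max).
# The data below are synthetic, for illustration only.

def langmuir_fit(C, q):
    """Least-squares fit of C/q vs C; returns (q_max, K)."""
    x = C
    y = [c / qi for c, qi in zip(C, q)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    q_max = 1.0 / slope
    K = slope / intercept
    return q_max, K

# Synthetic data generated from q_max = 0.5 mmol/g, K = 4 L/mmol:
C = [0.1, 0.25, 0.5, 1.0]                   # equilibrium conc., mmol/L
q = [0.5 * 4 * c / (1 + 4 * c) for c in C]  # adsorbed amount, mmol/g
q_max, K = langmuir_fit(C, q)
print(round(q_max, 3), round(K, 2))  # 0.5 4.0
```

Deviations from such a fit at low concentration are one symptom of the Cr(III) speciation effects discussed above.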
To achieve selective binding and extraction of heavy metal ions, 4-amino-3-hydroxy-2-(2-chlorobenzene)-azo-1-naphthalenesulfonic acid (AHCANSA) was used as a chelating agent [125]. AHCANSA groups were immobilized either physically (I) or covalently (II) at the surface of two silica samples. The concentrations of immobilized groups were 0.488 and 0.473 mmol g⁻¹, respectively. It was found that these adsorbents effectively adsorb such heavy metals as Cr(III), Ni(II), Cu(II), Zn(II), Cd(II) and Pb(II). The sorption capacities of these adsorbents for these metals are in the range from 0.250 to 0.483 mmol g⁻¹. Both silicas were found to be effective metal adsorbents (adsorption yields of 95.2-98.1% and 92.5-97.1%, respectively). Under optimal conditions, the sorbents can remove trace amounts of metal ions (up to 1.0 and 2.00-2.50 µg mL⁻¹, respectively).
Quinolinol was immobilized on a silica surface for the removal of trace amounts of metal ions.
In contaminated surface waters at pH lower than neutral, chromates can be accompanied by other heavy metal ions like Cu(II), Zn(II) and Ni(II). The above-described chelating ion exchanger can be applied for the simultaneous removal of both heavy metal ions and chromates(VI) from a given medium (Fig. 1.9). The application of organic ligands capable of forming complex compounds with Cr(III) can be affected by other heavy metal ions, but the examples given in this section have demonstrated that the use of special experimental conditions (pH, chromatographic separation) can guarantee appropriate parameters of chromium adsorption. Quantitative sequestration of Cr(VI) on chelating adsorbents, after its reduction to Cr(III), cannot be considered a satisfactory approach to the problem. Cr(VI) reduction in solution requires special optimization in each and every case (the qualitative and quantitative content of the effluent must be considered) to avoid incomplete reduction.
Moreover, additional impurities, such as the reducing agent and the products of the redox reaction, are introduced into the treated wastewater.
Low cost (bio)sorbents from natural materials
Various natural and/or biological materials can relatively efficiently adsorb different metal ions, a process generally referred to as biosorption [130]. A huge amount of work has been devoted to the sorption of Cr(VI) on such materials as sludge and/or sulfur-containing biomass [START_REF] Wu | Cr(VI) removal from aqueous solution by dried activated sludge biomass[END_REF], inanimate bacteria [START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF], algae [START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF], fungi [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF], bio-waste products [82,[START_REF] Escudero | Modeling of kinetics of Cr(VI) sorption onto grape stalk waste in a stirred batch reactor[END_REF][START_REF] Liu | Polyethylenimine modified eggshell membrane as a novel biosorbent for adsorption and detoxification of Cr(VI) from water[END_REF] and iron sulfides [START_REF] Demoisson | Pyrite oxidation by hexavalent chromium: investigation of the chemical processes by monitoring of aqueous metal species[END_REF][START_REF] Cao | Remediation of Cr(VI) using zero-valent iron nanoparticles: kinetics and stoechiometry[END_REF]. Over the past decade, more than 200 articles have been published in various international journals concerning Cr(VI) biosorption. Most of the early studies suggest that Cr(VI) is removed from the aqueous phase by electrostatic interaction of chromates with positively charged groups of the biomaterials.
Recently it has been suspected that such an interpretation was incorrect [131], since no attention was paid to chromium speciation in the equilibrium solution and at the surface of the biomaterials and, in many cases, the contact time was insufficient to establish equilibrium.
The latest review on biosorbents [139] proposes the existence of four models of Cr(VI) biosorption. They are as follows:
1. Anionic adsorption [137,138,140,141]. Negatively charged chromium species (CrO₄²⁻, HCrO₄⁻, Cr₂O₇²⁻) are bound by electrostatic interaction to positively charged groups of the biomass.
2. Adsorption-coupled reduction. Cr(VI) is reduced to Cr(III) in contact with the biomass, and the Cr(III) formed is then adsorbed; the amount of adsorption depends on the nature of the biomass.
3. Anionic and cationic adsorption [146,147,148]. According to this mechanism, a part of the hexavalent chromium is reduced to trivalent chromium; the hexavalent chromium (anionic) and the trivalent chromium (cationic) are both adsorbed by the biomass.
4. Reduction and anionic adsorption [149]. According to this mechanism, a part of the hexavalent chromium is reduced to Cr(III) by the biosorbent; mainly Cr(VI) is adsorbed by the biomass, while Cr(III) remains in the solution.
It is interesting to consider the removal efficiency of the different forms of chromium (Table 1.7) and of total chromium (Table 1.8) on bio-adsorbents as a function of pH. The efficiency of Cr(III) removal increases in more basic media, while the effectiveness of Cr(VI) reduction is then restricted. The highest removal efficiencies of total chromium are observed when spruce bark (85%) and pine cones (71.8%) are used at pH = 2.17, and when coal (97%) is used at pH = 5. For the other materials the adsorption efficiency is very low (from 25 to 60%).
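The removal efficiencies and capacities quoted in Tables 1.7 and 1.8 follow from the standard batch-sorption expressions R = (C₀ − Ce)/C₀ × 100 and q = (C₀ − Ce)·V/m; a small sketch with illustrative numbers (not taken from the tables):

```python
# Standard batch-sorption figures of merit used throughout the literature
# summarized here (the numbers below are illustrative, not from Tables 1.7-1.8).

def removal_efficiency(c0, ce):
    """Removal efficiency, %; c0/ce are initial/equilibrium concentrations."""
    return (c0 - ce) / c0 * 100.0

def sorption_capacity(c0, ce, volume_L, mass_g):
    """Amount adsorbed per gram of sorbent, in the units of c0 times L/g."""
    return (c0 - ce) * volume_L / mass_g

# 50 mL of a 20 mg/L Cr solution treated with 0.1 g of biosorbent, 3 mg/L left:
print(round(removal_efficiency(20.0, 3.0), 1))            # 85.0 %
print(round(sorption_capacity(20.0, 3.0, 0.05, 0.1), 2))  # 8.5 mg/g
```

Note that a high percentage removal at low sorbent dose and a high capacity are distinct criteria; both are needed to compare materials fairly.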
Several attempts were made to determine the active redox centers [START_REF] Deng | Polyethylenimine-modified fungal biomass as a high-capacity biosorbent for Cr(VI) anions: sorption capacity and uptake mechanisms[END_REF][START_REF] Sanghi | Fungal bioremediation of chromates: conformational changes of biomass during sequestration, binding, and reduction of hexavalent chromium ions[END_REF][START_REF] Li | Mechanism of electron transfer in the bioadsorption of hexavalent chromium within Leersia hexandra Swartz granules by X-ray photoelectron spectroscopy[END_REF] (responsible for Cr(VI) reduction) and the binding groups (responsible for chromium binding) present at the surface of bio-materials. Still, the chemical composition of such complicated systems as biological materials is never fixed and may vary from sample to sample, depending on many factors (origin, method of preparation). Consequently, bio-adsorbents are characterized by a low reproducibility of their properties, and their sorption capacities are often lower than those of the ion exchangers applied for Cr(III) removal. Moreover, in the case of (bio)sorbents the mechanism of chromium binding is not fully understood. For these reasons, biosorption of Cr(VI) is still mainly confined to laboratory studies. To understand the mechanism, a synthetic system reproducing the qualities of bio-adsorbents should be used. This would also help to obtain systems with better sorption capacity and specificity towards the target metal ion.
Application of silica for Cr(VI) removal via the 'adsorption-coupled reduction' mechanism
The pioneering work applying silica-based materials for Cr(VI) sequestration by a reduction-adsorption mechanism was the paper published by Deshpande et al. in 2005 [71]. They introduced a bifunctional organic-inorganic hybrid material chemically modified with amino and/or thiol groups for the one-step sequestration of Cr(VI). A comparative study of mono- and bifunctionalized materials showed the advantages of the materials modified with both amino and thiol groups.
In this work, no attention was paid to the distribution of the reductively generated Cr(III) between the solid and liquid phases. Chromium was considered to be adsorbed as Cr(III) on the basis of the green colour developed by the treated adsorbent. The adsorption capacity of this bifunctionalized adsorbent was not calculated.
Another approach, a reductive-precipitation mechanism of Cr(VI) removal involving silica, was described in 2007 [151] and pursued in a subsequent series of works [152,153,154]. In the presence of silica gel, Cr(VI) was proposed to be reduced by a zero-valent metal (Fe(0), Zn(0)). Silica was suggested to catalyze the reduction and to assist the subsequent precipitation of the freshly generated Cr(III). This novel approach combining a strong inorganic reductant and an advanced OH-bearing adsorbent revealed several disadvantages: the formation of huge amounts of sludge and the simultaneous contamination of the waters by the products of the reduction (Zn(II), Fe(III), Cr(III)). However, the Fe@SiO₂ nanocomposite designed in [154] should be singled out as the most efficient among the materials proposed for Cr(VI) sequestration.
Cr(III) forms complexes with numerous inorganic ligands (I⁻, CN⁻, SCN⁻, OH⁻, NO₃⁻, NO₂⁻, SO₃²⁻, SO₄²⁻, CO₃²⁻, C₂O₄²⁻, etc.). The ability of these ligands to form complexes decreases in the range: OH⁻ > C₂O₄²⁻ > SO₃²⁻ > CH₂COO⁻ > HCOO⁻ > SO₄²⁻ > Cl⁻ > NO₃⁻.
Thiol-disulfide modified silica has been proposed for Cr(VI) adsorption: as a result, Cr(III) and two disulfide groups are formed, with the possibility of chromium detection in the phase of the sorbent. Among the articles dedicated to the mechanism of Cr(VI) interaction with thiol-bearing organic compounds, it was found that Cr(VI) reacts stepwise, with the formation of thiochromates as intermediates (see scheme 1.3) [159]. First Cr(VI) is reduced to Cr(IV), then the thiyl radicals interact and form disulfide bonds [160]. In more recent works it is shown that sulfonic acids can also be formed [161].
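The driving force for such reductions can be put on a quantitative footing via ΔG = −nFΔE°; a minimal sketch per mole of electrons transferred, using the standard potentials quoted in this chapter for the Cr(VI)/Cr(III) and S²⁻/SO₄²⁻ couples (an order-of-magnitude illustration, not a full stoichiometric treatment):

```python
# Thermodynamic driving force for Cr(VI) reduction by sulfur-containing
# reductants, per mole of electrons transferred, using the standard
# potentials quoted in this chapter. Illustrative estimate only.
F = 96485.0  # C/mol, Faraday constant

def delta_g_per_electron_kJ(e_cathode, e_anode):
    """dG = -F * (E_cathode - E_anode), in kJ per mole of electrons."""
    return -F * (e_cathode - e_anode) / 1000.0

# Cr(VI)/Cr(III) couple (1.47 V) vs the S2-/SO4(2-) couple (0.303 V):
print(round(delta_g_per_electron_kJ(1.47, 0.303), 1))  # about -112.6 kJ/mol e-
```

The strongly negative value confirms that the reduction is thermodynamically favourable, so the kinetics and the fate of the Cr(III) formed become the controlling questions.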
The great difference between the standard redox potentials of the Cr(VI)/Cr(III) couple (E° = 1.47 V) and the S²⁻/SO₄²⁻ (E° = 0.303 V) or S²⁻/S₂²⁻ (E° = -0.524 V) couples means that the reduction of Cr(VI) by sulfur-containing groups is thermodynamically favourable.
Generally, the interaction of a metal cation with the EDTA ligand leads to a 1:1 complex; in particular, EDTA forms a complex with Cr(III) with coordination number 6 in a 1:1 ratio. The high stability of these complexes (see Table 1.9) is explained by the presence in the EDTA molecule of six functional groups acting as donor atoms (containing nitrogen and oxygen). Trivalent chromium forms complexes with EDTA and its derivatives, but very slowly. In the literature one can find different information about the time of complex formation.
The manuscript dedicated to the study of Cr(III)-DTPA complex formation [180] mentions that in aqueous solutions there might exist 5 main types of species, with cumulative stability constants (log β) of 22.05, 28.18, 31.03 and 32.48. One could assume that the distribution diagram for similar processes on the surface of silica gel would have a slightly different appearance: the curves would be shifted toward lower pH, as the strength of an immobilized acid, and hence the value of its protonation constants, generally increases. Elemental analysis of the so-synthesized silica revealed that the surface layer contains a mixture of ethylenediamine, ethylenediamineacetic acid, ethylenediaminediacetic acid and ethylenediaminetriacetic acid groups.
Finally, the technology of synthesis of the ED3A-containing silane was improved; it has become commercially available, so it is now possible to graft silica with ED3A groups in one stage. Despite the fact that this silane exists as a 55-65% aqueous solution of its sodium salt, it was stated that the ED3A groups were covalently bonded [185, 186, 187].
Reagents and solutions
All reagents were of analytical grade and solutions were prepared with high-purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. Working solutions were prepared by dilution, and pH was adjusted with nitric acid.
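As an illustration of the solution-preparation arithmetic (the target concentration below is hypothetical, not a value from this work), the mass of K₂Cr₂O₇ required for a Cr(VI) stock of given chromium concentration follows from the two Cr atoms per formula unit:

```python
# Mass of K2Cr2O7 required to prepare a Cr(VI) stock solution of a given
# chromium concentration (two Cr atoms per formula unit of the dichromate).
M_K2CR2O7 = 294.18  # g/mol
M_CR = 51.996       # g/mol

def k2cr2o7_mass_g(cr_conc_g_per_L, volume_L):
    mol_cr = cr_conc_g_per_L * volume_L / M_CR
    return (mol_cr / 2.0) * M_K2CR2O7  # 1 mol dichromate per 2 mol Cr

# e.g. 1 L of a 1.000 g/L Cr(VI) stock:
print(round(k2cr2o7_mass_g(1.000, 1.0), 4))  # 2.8289 g
```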
The following reagents were used: K₂Cr₂O₇, Cr(NO₃)₃·9H₂O, Cu(NO₃)₂·10H₂O, Pb(NO₃)₂, Co(NO₃)₂·6H₂O, Ni(NO₃)₂·6H₂O and FeCl₃·6H₂O.
-Stock solutions of Cr(VI) were prepared by dissolving accurately weighed portions of K₂Cr₂O₇ powder in deionized water, from which working solutions were prepared by dilution and pH adjustment using nitric acid.
-Other metal solutions were prepared by dissolving the appropriate analytical-grade reagents.
-Ag(I) solutions were obtained from the nitrate salt (AgNO₃). They were prepared by diluting a 0.10 M stock solution (in 0.1 M HNO₃) in distilled water and were stored in dark glass. Their concentrations were checked by conductimetric titration using a certified standard solution of NaCl.
-The DPC solution was prepared by dissolving 0.1 g of DPC in 50 mL of ethanol and adding 20 mL of H₂SO₄ (1:9). The obtained solution was stored for up to 1 month at 3-5 °C.
Preparation of functionalized mesoporous silicas
Synthesis of thiol-functionalized mesoporous silicas
They were prepared by co-condensation of TEOS and MPTMS at room temperature in the presence of a CTAB template, according to a published procedure [189,190]. Briefly, 0.9 mole of a TEOS/MPTMS mixture (9:1 molar ratio) was added under stirring to a surfactant solution made of CTAB (0.3 mole), water (50 mole), ethanol (150 mole) and ammonia (10 mole). After precipitation, the medium was left stirring for 2 hours and the resulting solid was filtered, washed with ethanol, and dried under vacuum (<10⁻² bar) for 24 h. Template extraction was achieved in ethanol/HCl (1 M) under reflux for 24 h. The whole synthetic procedure was carried out under an inert atmosphere (Ar) in order to avoid any oxidation of the thiol groups.
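For the 9:1 TEOS/MPTMS mixture above, the nominal -SH loading expected for complete condensation can be estimated from the unit masses of SiO₂ and (HS-C₃H₆)SiO₁.₅; a rough upper-bound sketch (assuming full condensation of both precursors, for illustration only):

```python
# Nominal -SH loading expected for a 9:1 TEOS:MPTMS co-condensation,
# assuming complete condensation to SiO2 and RSiO1.5 units (R = propylthiol).
# A rough upper-bound estimate, for illustration only.
M_SIO2 = 60.08    # g/mol, SiO2 unit from TEOS
M_RSIO15 = 127.3  # g/mol, (HS-C3H6)SiO1.5 unit from MPTMS (approx.)

def nominal_sh_loading(frac_mptms):
    """mmol of -SH per gram of fully condensed silica."""
    mass = (1 - frac_mptms) * M_SIO2 + frac_mptms * M_RSIO15
    return frac_mptms / mass * 1000.0

print(round(nominal_sh_loading(0.1), 2))  # ~1.5 mmol/g for a 9:1 mixture
```

The measured loading of about 0.9 mmol g⁻¹ is thus well below this nominal ceiling, as expected for incomplete incorporation and residual uncondensed groups.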
The resulting solid was denoted MCM-41-SH; it contained 0.9 mmol g-1 of -SH groups and was characterized by a well-defined hexagonal mesostructure (powder XRD d100 spacing of 33 Å), a specific surface area of 1598 m2 g-1, a total pore volume of 0.76 cm3 g-1 and a pore diameter of 20 Å. Another sample was prepared without taking care about oxidation and used for comparison purposes (noted MCM-41-SH/SO3H-1, as it contained some sulfonic acid moieties; see Chapter IV for explanation). A further sample, MCM-41-SH/SO3H-2, was synthesized according to the aforementioned procedure and subsequently oxidized with H2O2. After oxidation, the material was still mesostructured, yet with a small lattice contraction (d100 at 30 Å), but its specific surface area fell to 860 m2 g-1 and the total pore volume to 0.42 cm3 g-1, due to the bigger size of the sulfonic acid groups and possible degradation of the long-range order in the material [191] (parameters of the functionalized materials are listed in Table 2.1). Materials oxidized to different degrees were obtained by adjusting the amount of H2O2 in the oxidizing medium (i.e., 0, 0.19, 0.38, 1.15, 1.9 and 3.6, respectively). These materials are denoted in the text as MCM-41-SH/SO3H-X, where X corresponds to the number of the sample (from 1 to 6).
Pretreatment of silica gel before functionalization
In order to remove possible trace metal impurities from silica, a pre-treatment was carried out as follows: 400 mL of a concentrated H2SO4/HNO3 (9:1) acid mixture was poured onto 20 g of pure silica and stirred with a mechanical agitator overnight. The SiO2 was then filtered, washed with distilled water (up to neutral pH) and dried at 100 °C. Purified silica was stored in a tightly closed vessel. Before silanization it was annealed in a muffle furnace at 450 °C for 3 hours. To avoid absorption of water from the air, the calcined silica was cooled in a vacuum desiccator over P2O5 before transferring it into the reactor.
Silica gel with covalently attached mercaptopropyl groups
Thiol-functionalized silica samples (SiO2-SH) were synthesized according to a grafting procedure described in the literature [192,193]. Silica gel (5 g) was placed into a flask, which was then filled with toluene (60 mL) and left for blending for several minutes. Then an aliquot of MPTMS solution (2 mL) in toluene was added to the slurry to reach a final content of 1 mmol of organosilane per 1 g of silica gel.
Instrumentation
Solution analysis
Solution-phase analysis of total chromium was performed by inductively coupled plasma atomic emission spectroscopy (ICP-AES, Plasma 2000, Perkin-Elmer). Distinction between Cr(VI) and Cr(III) species was made using the conventional diphenylcarbazide (DPC)-UV/Vis spectrometric method [195]. Other metal ions were analyzed by atomic absorption spectrometry (AAS) using a flame-atomization "Saturn" apparatus and a propane-butane-air flame.
Characterization of materials
The composition of the adsorbents was determined by elemental analysis (CHONS) using a Thermofinnigan FlashEA 1112 analyzer. The amounts of thiol and sulfonic acid groups in the MCM-41-SH/SO3H-X materials were also determined by conductimetric titration, performed in an Arrhenius cell. FTIR spectra were measured from self-supporting transparent tablets on a "Nexus-470" spectrometer ("Thermo-Nicolet") in nitrogen at 120 °C. X-ray fluorescence (XRF) spectra were taken with an "ElvaX-Light" spectrometer equipped with an energy-dispersive detector sensitive to radiation of 2-45 keV. The accumulation time for each sample was 300 seconds. The tablets for XRF spectra were made by pressing the sorbent with a filler (poly(vinyl alcohol)) in a 1:1 ratio (0.05 g : 0.05 g) using a mold of 10 mm diameter. Diffuse reflectance spectra (DRS) were recorded on a "Specord M-40" spectrometer in the wavenumber region 12000-40000 cm-1.
The precision of the absorption-maximum position is ±40 cm-1.
Equilibration procedures
Static mode
Batch equilibrations were performed at room temperature under magnetic stirring. Solid and liquid phases were separated by filtration and analyzed with the spectroscopic methods described in section 2.4.1. Equilibrated solutions of Cr(VI) were analyzed for the content of either Cr(VI) or Cr(III).
Sorption of Cr(III) and Cr(VI) versus pH
The sorption/reduction efficiency for Cr(III) and/or Cr(VI) versus pH was studied in suspensions containing selected amounts of adsorbents and constant volumes (see Table 2.2) in the pH range 1-7.
Sorption of Cr(III) and Cr(VI) versus solid-to-solution ratio
The effect of the solid-to-solution ratio on the sorption of Cr(III) and Cr(VI) was studied in suspensions with constant concentrations of Cr species, constant volumes and selected amounts of adsorbents (see Table 2.3).
Sorption of Cr(III) and Cr(VI) versus time of interaction
These experiments were conducted in suspensions with constant amounts of sorbents, constant concentration of Cr species and constant pH (corresponding to optimal sorption conditions); for more details see Table 2.4.
Sorption of Cr(III) and Cr(VI) versus concentration of chromium species in solution
Adsorption isotherms were studied at optimal pH, constant volumes and constant amounts of sorbents; detailed information is listed in Table 2.5.
Sorption of Cr(VI) on SiO2-SH/ED3A versus ionic strength
Equilibrations were performed for 24 hours in 25 mL of solution with a constant SiO2-SH/ED3A amount (0.05 g), a constant Cr(VI) concentration (10-3 mol L-1) and pH 2.5. The ionic strength was adjusted with NaCl (from 0.01 to 1 M).
Dynamic mode
For all tests performed in dynamic mode, the same column (d = 7 mm) was filled with 0.1 g of sorbent (SiO2-SH/ED3A). Before placing SiO2-SH/ED3A into the column, it was soaked in distilled water for 12 hours.
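Throughout the batch equilibration experiments described above, results are reported as sorption yields and adsorbent capacities derived from the difference between initial and equilibrium concentrations. A minimal sketch of these routine calculations (the numerical values below are illustrative, not measured data from this work):

```python
def sorption_yield(c0, ce):
    # percentage of solute removed from solution at equilibrium
    return 100.0 * (c0 - ce) / c0

def capacity(c0, ce, volume_L, mass_g):
    # adsorbed amount per gram of solid; with c in mmol L-1
    # the result is in mmol g-1
    return (c0 - ce) * volume_L / mass_g

# e.g. 1 mM Cr initially, 0.2 mM at equilibrium, 25 mL, 0.05 g solid:
print(sorption_yield(1.0, 0.2))         # ≈ 80 %
print(capacity(1.0, 0.2, 0.025, 0.05))  # ≈ 0.4 mmol g-1
```

The same two expressions apply to every static-mode experiment in this chapter, whatever the variable (pH, time, solid-to-solution ratio) being scanned.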
After the column was packed with sorbent, it was stored under a layer of distilled water. Before use, the column was washed with distilled water adjusted to the working pH of 2.5. The pH of the Cr(VI) solutions was also adjusted to 2.5.
Sorption of Cr(VI) versus solution flow rate
Solutions (25 mL) containing 0.2 mmol L-1 of Cr(VI) were passed through the column at different flow rates: 0.1, 0.2, 0.5, 0.7 and 1.0 mL min-1. 2 mL aliquots collected at the output of the column were adjusted to 25 mL, and the concentrations of total chromium and Cr(VI) were determined in each sample using ICP-AES and the DPC-UV/Vis spectrometric method, respectively.
Study of the breakthrough volume for the column filled with SiO2-SH/ED3A
The experiments were performed at two Cr(VI) concentrations (2 mM and 4 mM) at pH 2.5. 25 mL of these solutions were passed through the column (parameters mentioned in section 2.5.2) at three different flow rates (0.1, 0.4 and 1.0 mL min-1). Aliquots at the output of the column (2 mL) were diluted up to 25 mL and analyzed by the ICP-AES method.
Desorption of metals from the column filled with SiO2-SH/ED3A
A series of metals, Cu(II), Pb(II), Co(II), Ni(II), Fe(III) and Cr(VI), were pre-adsorbed on SiO2-SH/ED3A from aqueous solutions with C = 80 µmol L-1. Desorption was accomplished by passing hydrochloric acid of variable concentration through the column filled with SiO2-SH/ED3A in the salt (Na+) form. In order to achieve a smooth change in the concentration of hydrochloric acid, a system of "communicating vessels" was built (see Scheme 2.1). Two 25 mL cylinders were interconnected with a teflon tube, so that the liquids in both cylinders were always at the same level. If liquid was drawn off from one cylinder (flow rate 1 mL min-1), liquid from the other cylinder flowed via the tube into the first one and the levels in both cylinders were balanced again.
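Assuming ideal mixing, the equal-cylinder device described above delivers a linearly increasing acid concentration at the outlet; this linear form is a standard result for two identical communicating vessels and is a derived assumption here, not a formula stated in the text:

```python
def gradient_conc(v_delivered_mL, c_reservoir_M, v0_mL):
    """Outlet concentration of an equal-cylinder gradient mixer.
    For identical cross-sections and ideal mixing the gradient is
    linear: c = C0 * V_delivered / (2 * V0), where V0 is the initial
    volume in each cylinder."""
    return c_reservoir_M * v_delivered_mL / (2.0 * v0_mL)

# 0.1 M HCl reservoir, 25 mL per cylinder: after 25 mL delivered
print(gradient_conc(25.0, 0.1, 25.0))  # 0.05 M, i.e. half-way to C0
```

At full delivery (V_delivered = 2·V0) the outlet concentration reaches the reservoir value C0, consistent with the monotonic increase described above.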
Thus the solution of hydrochloric acid (0.1 M) from the second cylinder was gradually added to the first cylinder, filled with distilled water, where a magnetic stirrer constantly mixed the liquid. As liquid moved from one cylinder to the other, the concentration of hydrochloric acid in the first cylinder increased monotonically. This solution with a concentration gradient of hydrochloric acid was passed through the column filled with SiO2-SH/ED3A, and the pH of each 2 mL aliquot at the output of the column was checked. Then the heavy metals were pre-concentrated under the same conditions as Cr(VI) and desorbed by the previously described technique. Each aliquot was diluted to 25 mL and analyzed by the spectroscopic methods.
Intensity of the Kα line of Cr (XRF spectroscopy) versus the content of chromium in the sorbent matrix
Solutions (25 mL) with different concentrations of Cr(VI) were passed through the column at a flow rate of 1 mL min-1. Solutions at the output of the column were analyzed by the DPC-UV/Vis spectrometric method. The solid samples were dried in air and tablets for solid-phase XRF detection of Cr were prepared.
The influence of Cr(VI) solution volume on chromium sorption on the column filled with SiO2-SH/ED3A
Different volumes of solution (from 0.01 to 1 L) with a constant pH of 2.5 and a constant amount of Cr(VI) (10-5 mol) were passed through the column at a flow rate of 1 mL min-1. After passing through the column, the solution was analyzed by the ICP-AES method. The solids were taken out of the column and dried in air. Tablets were prepared for XRF analysis by the technique mentioned in section 2.4.2. Most commonly, the contents of immobilized thiol or sulfonic acid groups are
calculated from the results of elemental analysis of the adsorbent [189,204]. However, this method cannot selectively determine S-containing groups with different degrees of oxidation. The features of a direct acid-base titration were first studied for a material (SiO2-SO3H) characterized by a monofunctional layer of sulfonic acid groups [212], so as to be safe from overlapping with the deprotonation of thiol groups. Similarly to the neutralization of a strong acid in solution, the curve of conductimetric titration of SiO2-SO3H in aqueous suspension is V-shaped (see curve 1). This observation is typical for neutralization of a weak acid with pKa ≥ 6 in solution and was also expected for immobilized mercaptopropyl groups. So the concentration of bonded -SO3H groups was determined from the position of the V-type minimum on the titration curve as 0.28 mmol g-1. This number is in good agreement with the concentration calculated for the same material in [212]. The sharp V-type minimum on the acid-base titration curve of SiO2-SO3H, together with the weak interference from other immobilized groups as well as from the silica matrix, suggests that conductimetry is a suitable tool for such measurements. The titration curve of MCM-41-SH (which was not treated with H2O2 but was stored in air) also exhibits a small V-shaped minimum, which corresponds to 30 µmol g-1 of -SO3H (Fig. 3.3, solid square). The origin of the sulfonic acid groups present at the MCM-41-SH surface is related to spontaneous oxidation in air and is discussed hereafter. Complete transformation of thiol into sulfonic acid groups is never reached, neither in this research nor in other known sources [191,214]. Indeed, from Table 3.1, where the compositions of the immobilized layer for the different MCM-41-SH/SO3H-X samples are summarized, it can be seen that under the selected conditions the maximum degree of transformation (ωH+) from -SH to -SO3H groups is 65%. The results obtained from conductimetric titration of MCM-41-SH/SO3H-X agree with the data obtained from elemental analysis of the MCM-41-SH/SO3H-X samples (see Table 3.1).
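The end-point of such V-shaped conductimetric curves can be located numerically as the intersection of straight lines fitted to the two branches on either side of the minimum. A minimal sketch of this evaluation, using synthetic data rather than the thesis measurements:

```python
import numpy as np

def v_minimum(volumes, conductivities):
    """Locate a V-shaped titration end-point as the intersection of
    straight lines fitted to the descending and ascending branches
    (split at the raw minimum of the curve)."""
    v = np.asarray(volumes, float)
    k = np.asarray(conductivities, float)
    i = int(k.argmin())
    a1, b1 = np.polyfit(v[:i + 1], k[:i + 1], 1)  # descending branch
    a2, b2 = np.polyfit(v[i:], k[i:], 1)          # ascending branch
    return (b2 - b1) / (a1 - a2)                  # x of intersection

# synthetic V-curve with its end-point at 3.0 mL of titrant:
vols = [0, 1, 2, 3, 4, 5]
kappa = [abs(x - 3.0) * 10 + 5 for x in vols]
print(round(v_minimum(vols, kappa), 2))  # 3.0
```

Fitting both branches rather than taking the raw minimum makes the end-point estimate less sensitive to noise in individual conductivity readings.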
Indeed, for all MCM-41-SH/SO3H-X samples the overall concentration of -SH and -SO3H groups (ΣCL) determined from conductimetric titration is about equal to the concentration of immobilized groups determined from elemental analysis on sulfur. In this regard, all MCM-41-SH/SO3H-X samples synthesized by hydrogen-peroxide oxidation are polyfunctional and contain at least two types of immobilized groups: propylthiol and sulfonic acid. The concentration of -SH groups was verified by back-titration with silver, based on the reaction
SiO2(CH2)3SH + Ag+ ⇒ SiO2(CH2)3SAg + H+
To explain the increased oxidation stability of the immobilized layer on TSFH, its FTIR, XPS and TPD-MS spectra were studied.
FTIR spectra of MCM-41-SH/SO3H-X
FTIR spectra of the MCM-41-SH/SO3H-X samples show characteristic peaks of stretching (2982, 2929, 2898 and 2858 cm-1) and deformation (1450, 1411 and 1344 cm-1) vibrations of aliphatic CH2 groups. This is evidence of the stability of MCM-41-SH/SO3H-X against the destructive effect of oxidation. A low-intensity S-H stretching band of propylthiol was also registered at 2569 cm-1, for MCM-41-SH only.
XPS study of MCM-41-SH/SO3H-X
As an alternative approach to confirm the above titration results, we performed XPS analysis of the same series of materials. The XPS spectra (S2p core level) of the MCM-41-SH/SO3H-X series were fitted using 2p1/2 and 2p3/2 doublets separated by a spin-orbit splitting of 1.2 eV. The S(2p1/2) peak area was constrained to one half of the area of the major S(2p3/2) band. For the MCM-41-SH sample the energy of the major band in the doublet is observed at 163.3 ± 0.1 eV (Fig. 3.5a) and is typically attributed to S(II) in mono- or poly-sulfide species [162]. This position remains constant for the whole MCM-41-SH/SO3H-X set, while the peak area decreases with increasing oxidation of the immobilized layer. For the oxidized MCM-41-SH/SO3H-X samples, together with the doublet at 165 eV, another asymmetric peak at higher energy is observed (Fig. 3.5).
There is also no direct evidence for the absolute selectivity of the back-conductimetric titration used in this research to determine the concentration of -SH groups. Therefore the methods applied above can neither prove nor refute the presence of disulfide bridges on the TSFH surface.
Study of MCM-41-SH/SO3H-X by TPD-MS
In order to reveal the products of incomplete oxidation of immobilized thiols, the method of thermo-programmed desorption with mass-spectrometric detection (TPD-MS) was applied. Some results of the TPD-MS study of MCM-41-SH/SO3H-X are presented below; the H2S thermal-desorption curve shows an evident shoulder at 400 °C. All samples also discharge H2S2 at 440-450 °C, but the intensity of thermal desorption of this compound is very low (less than 5% of the H2S intensity). H2S2 can only be generated from thermal decomposition of disulfides like 1-3 (Scheme 3.1). So, from the TPD-MS spectra of MCM-41-SH/SO3H-X, it can be suggested that the contribution of disulfide moieties to the composition of the immobilized layer on oxidized MCM-41-SH is not significant. In this context, the hypothesis of disulfide-bridge formation as an explanation of the incomplete -SH to -SO3H transformation seems to be wrong, at least for MCM-41-SH/SO3H-X materials synthesized by co-condensation, where propylthiol groups are incorporated into the matrix during the sol-gel synthesis. On the contrary, the intensity of the high-temperature shoulder of H2S discharge for MCM-41-SH/SO3H-6, which has only 35% of its -SH groups left, is higher than the low-temperature one. For the latter sample a sulfonic acid micro-environment for the residual -SH moieties is more likely than for the first one. This microenvironment can stabilize the -SH fragment due to formation of thiosulfonate, according to Scheme 3.5. A similar effect is observed for the SO2 thermo-desorption curve: for the MCM-41-SH/SO3H-6 sample the high-temperature shoulder is twice as high as the low-temperature one. As can be seen from the figure, a typical minimum for sulfonic acid groups is observed on each titration curve.
The results for all the curves are in close agreement, indicating the presence of 15 µmol g-1 of sulfonate groups and 572 µmol g-1 of thiol groups. Comparison of these curves with the curve of the sample kept in an argon atmosphere shows that exposure of thiol groups to air provokes only insignificant oxidation, which does not increase with prolongation of the contact.
Conclusions
The proposed single instrumental method is applicable for the simultaneous determination of thiol and sulfonic acid groups present at the surface of silica-based mesoporous organic-inorganic materials containing up to 1 mmol g-1 of propylthiol groups and 15-600 µmol g-1 of sulfonic acid groups. The reliability of the obtained results is confirmed by the repeatable coincidence of the total concentration of thiol and sulfonic acid groups calculated by conductimetric titration, which in turn agrees with the data of elemental analysis. It is demonstrated that thiol and sulfonic acid groups are present at the surfaces of all studied samples except the one synthesized under an inert atmosphere. The concentrations of strong acid moieties calculated from the conductimetric titration curves were found to be in linear proportion to the quantity of added oxidant, which proves the transformation of one group into the other. Still, in agreement with the XPS and TPD-MS data, the proposed titration method shows incomplete transformation of thiol groups into sulfonic acid moieties (65%), even at a 100-fold molar excess of oxidant. TPD-MS analysis of the MCM-41-SH/SO3H-X samples did not reveal any formation of disulfide bonds during oxidation by hydrogen peroxide. On the contrary, it was suggested that thiol groups surrounded by sulfonic acid groups are stable to oxidation by hydrogen peroxide owing to the formation of thiosulfonate bonds.
At the same time, it is also shown that thiol-functionalized materials synthesized in air contain small concentrations of sulfonic acid groups (up to 3% of the total concentration of sulfur-containing groups) and are not prone to further oxidation when kept in air at room conditions for 2 months. Quantitative Cr(III) uptake was achieved at 1.5 g L-1 of adsorbent, whereas lower contents of material in suspension gave rise to lower Cr(III) adsorption (e.g., 53% at 0.5 g L-1, a value corresponding nevertheless to an excess of sulfonate groups with respect to the amount of chromium in solution). These results indicate that rather high contents of adsorbent would be necessary to enable the uptake of all Cr(III) species from dilute solutions (the sorption yield being a function of pH) and that the experimentally observed capacities are much lower than the amount of binding sites in the material. On the other hand, the uptake process was very fast, as equilibrium was reached within 1 min, indicating minimal mass-transfer resistance, in agreement with other observations made for metal-ion binding to ordered mesoporous silica bearing organo-functional groups [189,191]. Cr(III) sorption on MCM-41-SH/SO3H-6 at pH 2 was further characterized by drawing the corresponding isotherm (inset in Fig. 4.1.B), indicating a maximum adsorption capacity of 32 mg g-1 (0.62 mmol g-1), which is of the same order of magnitude as those reported for other cation exchangers bearing sulfonic acid groups (see Fig. 1.7 in Chapter I) and for solid-phase extractants used for Cr(III) removal (for comparison see Table 1.8 in Chapter I). This value corresponds to an adsorbed quantity exactly equal to the amount of sulfonic acid groups in the material (as determined by titration; more details in Chapter IV), confirming again the optimal accessibility of binding sites in ordered mesoporous adsorbents [189,191].
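The capacity conversion quoted above (32 mg g-1 ↔ 0.62 mmol g-1) follows directly from the molar mass of chromium; a quick check, together with a generic Langmuir expression for the isotherm shape (the Langmuir model is assumed here for illustration, not stated in the text):

```python
M_CR = 51.996  # molar mass of chromium, g mol-1

def mg_to_mmol_per_g(q_mg_per_g, molar_mass=M_CR):
    # capacity unit conversion: mg g-1 -> mmol g-1
    return q_mg_per_g / molar_mass

def langmuir(ce, q_max, k):
    # generic Langmuir isotherm: q = q_max * K*Ce / (1 + K*Ce);
    # q approaches q_max at high equilibrium concentration Ce
    return q_max * k * ce / (1.0 + k * ce)

print(round(mg_to_mmol_per_g(32.0), 2))  # 0.62
```

The plateau of such an isotherm (q_max) is what is compared here with the titrated content of sulfonic acid groups.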
By comparison and unsurprisingly, no Cr3+ binding to thiol-functionalized mesoporous silica (MCM-41-SH) was detected, at least when the adsorbent was prepared in an inert atmosphere (see curve "a" in Fig. 4.1.A). This indicates that Cr3+ is not likely to be complexed by thiol groups (under these conditions) and further confirms the negligible binding to surface silanol groups in acidic medium. Interestingly, some Cr3+ binding was observed on thiol-functionalized mesoporous silica prepared without strict atmosphere control (i.e., in air), but the uptake yield was significantly lower than that observed on MCM-41-SH/SO3H-6 (compare curves "b" and "c" in Fig. 4.1.A). This can be explained by the assumption that material preparation in the presence of oxygen (especially the template-removal step in acidic medium) led to a bi-functionalized system containing both thiol and sulfonic acid groups. On the opposite, the remaining amount of Cr(III) in solution was found to be more important when decreasing pH (see curve "a"). The experiment was thus repeated using a large excess of Cr(VI) in solution, and the resulting spectrum (curve "b") was now clearly visible, consisting of two main peaks corresponding to the 2p3/2 and 2p1/2 core levels of chromium. The main 2p3/2 peak was located at a binding energy of 577.3 eV, which corresponds to Cr(III) on the basis of the values between 577 and 577.5 eV reported for Cr2p3/2 in Cr(III)-containing materials [Fang et al., 218, 219]. The Cr2p1/2 signal located at 586.7 eV also supports the existence of Cr(III) [218]. This demonstrates that Cr(VI) species have indeed been reduced by thiol groups and that the adsorbed species are really present as Cr(III) on the material.
This appears to be advantageous in comparison to the commonly used activated-carbon adsorbents, for which the presence of both Cr(III) and the more toxic Cr(VI) species has been identified on the solid [Fang et al.]. Desorption experiments indicate that Cr(III) immobilization is more complicated than a simple ion exchange of Cr3+ species at sulfonic acid centers in the material, as total desorption cannot be achieved (i.e., only 60% desorption in 2 M HCl, as measured after Cr(VI) reduction / Cr(III) sorption on MCM-SH/SO3H-1). No attempt was made to characterize the exact coordination of Cr(III) in the material, but it could involve the participation of silanol groups, as suggested by IR spectroscopic measurements (a decrease in the absorption band of free silanol groups at 3750 cm-1 observed after reductive adsorption of Cr(VI) on MCM-SH/SO3H). It should also be noted that the presence of a small contribution of Cr(VI) cannot be discarded from the XPS data (a possible contribution of the Cr2p3/2 signal around 579.5-580 eV [218,219]), which could be due to some impregnated Cr(VI) that was not washed out after the treatment in excess.
Influence of solid-to-solution ratio
Sorption yields can be dramatically improved by increasing the solid-to-solution ratio. Again, pH was found to play an important role in the reduction-sorption process (Fig. 4.5) and the trend was similar to that observed with less solid in suspension.
Overall mechanism and optimization of the process
It is clear from the above results that MCM-41-SH/SO3H-1 is likely to reduce Cr(VI) via its thiol groups and to immobilize the Cr(III) species generated by this reaction via its sulfonate groups. One could be surprised, however, that Cr(III) sorption gave rise to more than 50% binding of the reduced chromium.
This suggests that the reaction of thiol groups with Cr(VI) would have increased the amount of sulfonate groups in the material (in agreement with the increased Cr(III) binding capacities observed for adsorbents containing higher amounts of sulfonate groups, see Fig. 4.1.A). To demonstrate that point, experiments were performed with MCM-41-SH (i.e., a thiol-functionalized mesoporous silica containing no oxidized groups), thus maintaining the reduction ability of the material but not its sequestration properties (no sulfonate groups in the starting material). In that case, Cr(VI) reduction was always quantitative (at pH 2.2) and some of the generated Cr(III) species were indeed sequestered in the material, in a proportion reaching up to about 35% depending on the solid-to-solution ratio (see part "a" in Fig. 4.6). Meanwhile, XPS measurements made on the solid before and after reaction point to oxidation of some thiol groups into sulfonic acid moieties (a decrease in the S2p line located at 163.4 eV (-SH) with a concomitant increase of that situated at 168.5 eV (-SO3H), the more pronounced the higher the Cr(VI) concentration in solution, see Fig. 4.7). One can rationalize the above data by the corresponding redox equation. This also explains why the use of a bi-functionalized material containing both thiol and sulfonic acid groups (i.e., MCM-41-SH/SO3H) gave rise to better performance in the sequestration process (inset in Fig. 4.6). In an attempt to optimize the performance of the reduction/sequestration scheme, we have thus evaluated the influence of the SH/SO3H ratio on the sorption yields (see parts "b-d" in Fig. 4.6).
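The oxidation of a thiol to a sulfonic acid group is, formally, a six-electron process, which is consistent with the stoichiometry invoked in this work (two Cr(VI) ions reduced per -SO3H group generated). A bookkeeping sketch, keeping in mind that formal oxidation states are a counting convention, not a physical measurement:

```python
# formal sulfur oxidation states (bookkeeping convention):
S_IN_THIOL = -2      # S in R-SH
S_IN_SULFONIC = +4   # S in R-SO3H

e_released_per_SH = S_IN_SULFONIC - S_IN_THIOL  # 6 electrons released
e_per_Cr = 3                                    # Cr(VI) -> Cr(III)

cr_reduced_per_SH = e_released_per_SH // e_per_Cr
print(cr_reduced_per_SH)  # 2 Cr(III) formed per -SO3H generated
```

This electron balance explains why quantitative Cr(VI) reduction can outpace Cr(III) sequestration: each oxidized thiol creates only one new sulfonate binding site for two reduced chromium species.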
As shown, the presence of sulfonic acid groups in the starting material is of major importance for reaching the best performance (sorption yields close to 100% under optimized conditions and at high solid-to-solution ratio), but their amount relative to the thiol-group content is not critical, as no distinguishable variation was found between MCM-41-SH/SO3H-X samples respectively oxidized at 15, 20 and 40% (for more information see Table 3.1). Only small amounts of sulfonate groups are thus necessary to ensure efficient reduction-sorption on the adsorbent. As reduction is equally quantitative in all cases, one can conclude that the limiting step in the total sequestration of chromium is always the binding of the reduced products (i.e., Cr3+).
Conclusions
Once optimized (adsorbent content higher than 5 g L-1, pH 2.2), the reduction-sorption process using MCM-41-SH/SO3H-1 is efficient enough to ensure residual chromium concentrations as low as the tolerance level accepted for industrial wastes (0.05-0.1 mg L-1). The SiO2-SH/ED3A adsorbent was obtained by reaction with the ED3A-silane coupler in water-methanol medium, as described in section 2.3.5. Again, it was possible to adjust the amount of immobilized groups by tuning the ED3A-silane-to-silica ratio (see Fig. 5.1). A maximum ED3A content of ca. 0.7 mmol g-1 was achieved. When using the SiO2-SH material instead of bare silica, the amount of immobilized ED3A groups was lower due to the presence of SH groups occupying a significant portion of the silica surface. This is confirmed by chemical analysis, indicating the presence in the final bi-functional adsorbent (SiO2-SH/ED3A) of SH groups (0.38 ± 0.02 mmol g-1 from elemental analysis; 0.40 ± 0.03 mmol g-1 from silver titration) and ED3A groups (0.39 ± 0.02 mmol g-1, from elemental analysis). The presence and integrity of the organo-functional groups were further checked by FTIR (see Fig. 5.2).
The spectra of the thiol-bearing samples were characterized by C-H stretching vibrations of the propyl chains at 2855 and 2960 cm-1 and a weak vibration corresponding to the -SH group at 2578 cm-1 [221]. The ED3A groups can be identified via the vibrations of their carboxylate/carboxylic acid moieties, leading to a band at 1728 cm-1 (COOH) and two others (COO-) at 1631 cm-1 (the latter superimposed on that of weakly physisorbed water [222]) and 1405 cm-1, in agreement with previous observations [223], whereas the characteristic band of ED3A at 1332 cm-1 [224] was almost invisible because it is located too close to the huge signal of the siloxane moieties in the 1000-1300 cm-1 range [221]. Cr(III) uptake on SiO2-ED3A was then examined [217]. The accumulation process is very slow, requiring several hundreds of hours to reach significant sorption yields (see curve a in Fig. 5.4). This is explained by the kinetic inertness of Cr(H2O)63+ ions with respect to ligand-exchange reactions (see section 1.4.2), which constitutes the rate-determining step in complex formation of Cr(III) with ethylenediaminetriacetic acid. On the other hand, one should mention that no measurable Cr(III) uptake was observed up to pH 4 using the SiO2-SH adsorbent, confirming the absence of any interaction between thiol groups and Cr(III) species (discussed in section 3.1).
Cr(VI) reduction-sorption on SiO2-SH/ED3A
While no interaction between the SiO2-ED3A material and Cr(VI) species was observed, Figure 5.3 (part b) reveals that using the bi-functionalized SiO2-SH/ED3A adsorbent led to significant chromium uptake, especially between pH 1 and 3. At this stage it is difficult to distinguish unambiguously between these two processes, but several features indicate that the ED3A chelate plays an important role. First, the stoichiometry of the redox reaction (Eq. 5.1) shows that 2 Cr(III) species are formed when only one SO3H group is generated, whereas sorption yields as high as 90% have been observed (Fig.
5.3), demonstrating that ion exchange (Eq. 5.2a) cannot be the only process explaining Cr(III) immobilization and that, therefore, complex formation with ED3A should occur. Secondly, the pH range for effective reduction-sorption using SiO2-SH/ED3A (i.e., pH 1-3) is consistent with a two-step binding: first a strong immobilization on the ED3A chelates (which are indeed present in the material at a content of 0.4 mmol g-1) via complex formation (Eq. 5.2a), and then a weaker binding to the generated sulfonic acid species via ion exchange/electrostatic interactions (Eq. 5.2b). Only this second (weak-interaction) binding process was likely to occur with the mono-functionalized SiO2-SH material. These results also suggest the formation of a 1:1 complex between Cr(III) and ED3A on silica, consistent with the stoichiometry of the corresponding Cr(III)-HEDTA complex in solution. One can distinguish the characteristic blue/violet colour of the Cr(III)-ED3A complex on SiO2-SH/ED3A, the intensity of which increases with the chromium loading. This is more quantitatively evidenced by recording UV-Vis diffuse reflectance spectra (see Fig. 5.7), for which the main peak at 565 nm was found to increase linearly up to a chromium loading of 0.4 mmol g-1 (corresponding to the content of ED3A groups in the material) and then tended to level off.
Column sorption experiments
The potential of SiO2-SH/ED3A for use in column was evaluated at two Cr(VI) concentrations and three different flow rates (0.1, 0.4 and 1.0 mL min-1). Several conclusions can be drawn from these results. First, while the maximum sorption yields are independent of the solution flow rate (in the 0.1-1.0 mL min-1 range), this parameter has some effect on the overall speed of the reduction-sorption process, more markedly at 4 mM Cr(VI) than at 2 mM. This can be evidenced from the breakthrough data (Fig. 5.9.A), in which the S-shaped curves were better defined at lower flow rates, as expected from the longer contact times for reaching steady state. This is also evident from the adsorbent capacity variations (Fig.
5.9.B), showing that higher flow rates required larger solution volumes to fill the column. Secondly, the maximum uptake capacity was ca. 0.37 mmol g-1 (i.e., near the content of ED3A groups in the material), suggesting that only the strongly chelating ED3A groups are likely to retain the Cr(III) species generated by reduction with the thiol groups. Under such dynamic conditions, the lability of the SiO2-SO3-,Cr3+ ion pair does not allow durable immobilization of Cr(III) species via electrostatic interactions with the sulfonic acid moieties. This confers a definite advantage to the SiO2-SH/ED3A adsorbent over the previously reported SiO2-SH/SO3H one, which cannot be used in column (no measurable Cr(III) retention). Thirdly, under optimal conditions (i.e., low flow rates), there is an inverse relationship between the breakthrough volume and the chromium solution concentration (i.e., 0.02 L for [Cr(VI)] = 2 mM and 0.01 L for [Cr(VI)] = 4 mM, see Fig. 5.9.A), suggesting that the chromium concentration does not affect the kinetics of the reduction-sorption process. To establish the optimal volume suitable for effective preconcentration of Cr(VI), different volumes of solution (10-1000 mL) with a constant amount of Cr(VI) (10 µmol) were passed through the column filled with SiO2-SH/ED3A. It should be mentioned that the intensity of the Kα line of Cr (XRF), adsorbed on the SiO2-SH/ED3A surface from different volumes, also varies only slightly when the solution is diluted up to 250 mL (see Figure 5.11). The signal intensities obtained after preconcentration of Cr(VI) from 500 and 1000 mL differ by 500 conventional units and cannot be attributed to method error. Thus, under the conditions of the described experiment (4-fold excess of ED3A groups over the amount of chromium, flow rate 1 mL min-1, column ∅ = 6 mm and m(SiO2-SH/ED3A) = 100 mg), the efficiency of chromium adsorption is sufficient when passing up to 250 mL of Cr(VI) solution.
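Dynamic capacities such as the ca. 0.37 mmol g-1 quoted above are obtained by integrating the area between the feed concentration and the breakthrough curve. A sketch of this evaluation using the trapezoid rule; the outlet profile below is synthetic, chosen only to illustrate the calculation:

```python
def column_capacity(volumes_L, c_out, c_in, mass_g):
    """Dynamic adsorbent capacity from a breakthrough curve:
    q = integral of (C_in - C_out) dV, divided by sorbent mass
    (trapezoid rule over the sampled effluent volumes)."""
    q = 0.0
    for i in range(1, len(volumes_L)):
        dv = volumes_L[i] - volumes_L[i - 1]
        q += 0.5 * ((c_in - c_out[i]) + (c_in - c_out[i - 1])) * dv
    return q / mass_g

# 2 mM feed, 0.1 g sorbent, synthetic outlet profile (mmol L-1):
v = [0.0, 0.01, 0.02, 0.03]
c = [0.0, 0.0, 1.0, 2.0]
print(round(column_capacity(v, c, 2.0, 0.1), 3))  # mmol g-1
```

With concentrations in mmol L-1 and volumes in L, the result is directly in mmol g-1, the unit used for the capacities reported in this chapter.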
This makes it possible to apply columns filled with SiO2-SH/ED3A for the adsorption of toxic Cr(VI) from diluted solutions.

Influence of the presence of concomitant foreign species

The effect of high ionic strength on the adsorbent capacity was first considered in static conditions, by working in excess Cr(VI), for which a decrease in the reduction-sorption capacity by about 40 % was observed at high ionic strength (>0.5 M, see Fig. 5.12). At the same time, the distribution coefficients (Kd values) were found to decrease by a factor of about 2 (see inset in Fig. 5.12), but remained rather high even at high ionic strength (e.g., 435 mL g-1 in the presence of 1 M NaCl). This behaviour can be rationalized by considering the reduction-sorption mechanism discussed above. Indeed, only the Cr(III) species weakly bonded to -SO3- moieties contribute to the loss in capacity (i.e., from 0.69 to 0.43 mmol g-1). This confirms again the major importance of the strong ED3A chelates in maintaining the immobilization properties of the bi-functional adsorbent. Because ED3A-functionalized materials are likely to adsorb other metal ions, we then investigated the possible effect of the presence of such species, which can occur along with chromium in typical acidic wastewaters (e.g., from electroplating). Figure 5.13.A shows that under pure thermodynamic competition conditions (i.e., excess adsorbent over solute), the presence of foreign species (Fe(III), Cu(II), Ni(II)) at concentrations ranging from half to twice that of chromium did not dramatically affect the sorption yields (less than 20 % decrease when the interfering species were in two-fold excess over Cr(VI)), even if these species were likely to bind to the material (Fig. 5.13.B), albeit at ca. 10 times lower contents in comparison to chromium. More importantly, pH was found to play a major role in the selectivity series.
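The quoted Kd values follow from the standard batch mass balance. A short sketch (the equilibrium concentration below is hypothetical, chosen only to reproduce the order of magnitude of the 435 mL g-1 figure quoted for 1 M NaCl):

```python
def batch_sorption(c0_mM, ceq_mM, volume_mL, mass_g):
    """Sorption yield (%), capacity (mmol g^-1) and distribution
    coefficient Kd = (C0 - Ceq)/Ceq * V/m (mL g^-1) for a batch test."""
    yield_pct = 100.0 * (c0_mM - ceq_mM) / c0_mM
    q = (c0_mM - ceq_mM) * (volume_mL / 1000.0) / mass_g   # mmol g^-1
    kd = (c0_mM - ceq_mM) / ceq_mM * volume_mL / mass_g    # mL g^-1
    return yield_pct, q, kd

# 2 mM Cr(VI), 25 mL, 50 mg adsorbent (2 g L^-1); Ceq is hypothetical
y, q, kd = batch_sorption(2.0, 1.07, 25.0, 0.05)
print(round(y, 1), round(q, 3), round(kd))
```

Note that Kd, unlike the yield, normalizes out the solid-to-solution ratio, which is why it is the quantity compared across ionic strengths in the inset of Fig. 5.12.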
Conclusion

Adequately engineered bi-functional silica adsorbents, SiO2-SH/ED3A, bearing one component designed to reduce toxic Cr(VI) (thiol groups in this case) and another one specifically selected for immobilizing the so-generated, yet less toxic, Cr(III) species (i.e., ethylenediaminetriacetate moieties), have proven to offer good performance for the efficient removal of chromium from aqueous medium, according to a reduction-sorption mechanism. Actually, the produced Cr(III) species can be sequestered in the material via either complexation with the ethylenediaminetriacetate ligand (ED3A) or ion exchange/electrostatic interactions with sulfonic acid moieties generated concomitantly with Cr(VI) reduction by thiol groups. The strong chelating properties of ED3A towards Cr(III) species, however, gave rise to significant improvement (in terms of operational pH range, durable adsorbent capacity, and low residual chromium concentration in solution) in comparison to the former SiO2-SH/SO3H material, which suffers from weak interactions between Cr(III) and the sulfonic acid groups. This has been notably exploited here in dynamic mode (column experiments), for which the presence of ED3A groups was essential to immobilize the reduced Cr(III) species on the adsorbent, these species not being retained by "simple" electrostatic interactions with sulfonate moieties. Cr(VI) reduction and subsequent Cr(III) uptake was also possible in the presence of other metal species, and the immobilized Cr(III) was found to be more stable at lower pH values than the other metal ions. Finally, optical (UV-Vis) and spectroscopic (XRF) techniques can be used to quantify the adsorbed species and to distinguish between the strong ED3A-Cr3+ complex and the weaker SO3-,Cr3+ ion pair on the solid.

General conclusions

This work proposes bifunctionalized silica-based adsorbents, MCM-SH,SO3H-X and SiO2-SH/ED3A, as effective adsorbents for selective removal of Cr(VI).
For this purpose, methods of synthesis of the corresponding mono- and bifunctional materials were developed. In order to distinguish between the concentrations of the mercaptopropyl and propylsulfonic acid groups simultaneously present at the surface of MCM-SH,SO3H-X, a method of conductimetric titration has been proposed. It allows the detection of sulfonic acid groups at concentrations as low as 15-600 µmol g-1, which is suitable for quality control of thiol-containing adsorbents. Using the conductimetric method, it is shown that prolonged exposure of a thiol-containing adsorbent to an oxic atmosphere does not cause the formation of sulfonic acid groups. The compositions of the MCM-SH,SO3H-X grafted layers were further refined using spectroscopic methods (TPD-MS, IR and XPS), which confirmed their bifunctionality and the various ratios of -SH and -SO3H groups at their surfaces. It was found that the bifunctionalized adsorbents operate much more effectively than the corresponding monofunctionalized silica-based materials. It is shown that the presence of minor amounts (0.2 mmol g-1) of binding groups (e.g. SO3H) helps to improve the sorption efficiency up to 100 %. Although the adsorption capacities (20-30 mg g-1) are not greater (yet of the same order of magnitude) than those of conventional sorbents, the proposed bifunctionalized adsorbents help to reduce toxic Cr(VI) (thiol groups) and selectively immobilize the so-generated, yet less toxic, Cr(III). In the case of MCM-SH,SO3H-X, the binding of Cr(III) is achieved by ion exchange with SO3H groups, while at the surface of SiO
4 describes the removal of Cr(VI) from wastewaters using bi-functionalized SiO2-SH/ED3A adsorbents, and discusses the mechanism of chromium fixation on this adsorbent via two types of interactions (electrostatic with -SO3H, or complex formation with -ED3A); the possibility of using SiO2-SH/ED3A for solid-phase analysis of Cr(VI) by X-ray fluorescence spectrometry is also demonstrated.

Fig. 1.1. The Frost diagram for chromium (Cr) species in acidic solution [2].

Fig. 1.2. A Pourbaix diagram for the Cr species dominating in aerated aqueous solutions in the absence of any complexing agents other than OH- and H2O.

Cr(OH)3 exhibits amphoteric properties, so an increase in pH leads to the formation of the soluble tetrahydroxo complex Cr(OH)4-. The mole-ratio distribution of Cr(III) ions in aqueous solution, calculated for a 5·10-4 M solution over the pH range from 2 to 12, is shown in Fig. 1.3.

Fig. 1.3. Mole ratio of Cr(III) species in aqueous solution with a total Cr(III) concentration of 5·10-4 M (calculated with the help of ScQuery Vn.5.34)

Fig. 1.4. Distribution of Cr(VI) species in aqueous solutions versus pH (C Cr(VI) = 10-6 M)

scheme 1.1, b). In the grafting processes noted above, silylation reagents were typically added under dry conditions to avoid hydrolysis and condensation away from the pore walls. Under anhydrous conditions the hydrophilic portion of the silica surface is preserved during silylation and surface

Fig. 1.7. Isotherm of Cr(III) sorption on Lewatit S 100 (C original (Cr(III)) = 0.1-1 mmol L-1; m adsorbent = 0.5 g; V solution = 30 mL; pH = 3.8; contact time: 150 min)

adsorbents. Since chromium is able to form complexes only in the trivalent state, Cr(VI) should be reduced before contact with a chelating adsorbent. The methods of chemical reduction were listed in Part 1.3.
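Returning to the Cr(VI) speciation shown in Fig. 1.4, the distribution can be reproduced from the two equilibria governing dilute Cr(VI) solutions: HCrO4- = CrO4 2- + H+ (pKa ≈ 6.5) and 2 HCrO4- = Cr2O7 2- + H2O (log K ≈ 1.5). The constants below are approximate literature values, not taken from this work; a minimal sketch:

```python
import math

PKA = 6.5          # HCrO4- = CrO4^2- + H+  (approximate literature value)
LOGK_DIM = 1.5     # 2 HCrO4- = Cr2O7^2- + H2O (approximate literature value)

def cr6_fractions(pH, c_total):
    """Mole fractions (as Cr) of HCrO4-, CrO4^2- and Cr2O7^2- at a given
    pH and total Cr(VI) concentration (mol L^-1), from the mass balance
    c_total = h*(1 + Ka/[H+]) + 2*Kdim*h**2 solved for h = [HCrO4-]."""
    H, Ka, Kdim = 10.0**(-pH), 10.0**(-PKA), 10.0**LOGK_DIM
    a, b = 2.0 * Kdim, 1.0 + Ka / H
    h = (-b + math.sqrt(b * b + 4.0 * a * c_total)) / (2.0 * a)
    return {"HCrO4-": h / c_total,
            "CrO4 2-": h * Ka / H / c_total,
            "Cr2O7 2-": 2.0 * Kdim * h * h / c_total}

f = cr6_fractions(6.5, 1e-6)
print({k: round(v, 3) for k, v in f.items()})
```

At the 10-6 M level chosen for Fig. 1.4, the dichromate dimer is negligible at any pH, which is why the diagram shows essentially only HCrO4- and CrO4 2-.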
Besides these, photocatalytic reduction on the improved organic-inorganic hybrid material ZnS(en)0.5 (Zn(II) sulfide modified with ethylenediamine (EN)) [122] and γ-ray irradiation in the presence of TiO2, Al2O3 or SiO2 [123] are used for Cr(VI) transformation.

(Fig. 1.8) is a specific ion-exchanger based on a chelating adsorbent. The polymer matrix, built of crosslinked polystyrene-divinylbenzene, was covalently combined with the functional bis-picolylamine group containing a nitrogen atom, which is a donor of an electron pair for the Lewis acid Cu(II) cation. Copper ions are coordinated in such a way that complete neutralization of the positive charge does not

Fig. 1.10. Proposed mechanism of Cr(VI) biosorption by natural biomaterials. Cr(VI) is removed from an aqueous system by natural biomaterials through both direct (I) and indirect (II) reduction mechanisms.

charged functional groups on the surface of biosorbents. This mechanism is based on the observation that Cr(VI) adsorption increases at low pH and decreases at high pH. At low pH the functional groups of the biosorbent become protonated and easily attract negatively charged chromium, whereas at high pH deprotonation occurs: the functional groups become negatively charged, repelling the negatively charged chromium.

2. Adsorption-coupled reduction [131, 142, 143]

Reduction followed by cationic adsorption was first proposed by Volesky for algal Sargassum biomass [144]. This mechanism was popularized by Prabhakaran on the basis of experiments [145]. According to this mechanism, Cr(VI) is totally reduced to Cr(III) by the biomass in the presence of acid. Then part of the Cr(III) is adsorbed onto the biomass.

Fig. 1.11. Sorption efficiency of Cu(II) (1), Bi(II) (2), Cd(II) (3), Co(II) (4), Ni(II) (5), Zn(II) (6) by silica gel chemically modified with mercaptopropyl groups versus HCl concentration and pH (C Me = 5 mg L-1, V = 10 mL, mc = 0.1 g, t = 5 min)

Fig. 1.12.
UV-VIS spectra of chromium solutions: ( ) Cr(III) 1000 mg L-1; ( ) CrO4 2- 25 mg L-1; ( ) Cr2O7 2- 50 mg L-1; ( ) Cr(III)-EDTA 250 mg L-1

In the pH range from 4 to 7, together with the main CrY2- form (where Y is DTPA), the monoprotonated CrHY- species is present. At pH = 3 the complex is protonated with two protons. When the pH is >7 the monohydrate complex predominates, the metal being coordinated with one water molecule (Fig. 1.14). The Cr(III) complexes with DTPA (CrY2-, CrHY-, CrH2Y, CrH3Y+) are characterized by the following constants:

Fig. 1.13. The structure of the Cr(III)-DTPA complex

Conductivities of suspensions were measured by means of an AC Conductivity Bridge R-5058 at an operational frequency of 1000 Hz at room temperature. For acid-base conductimetric titration, solutions of 0.011-0.024 M NaOH were used to titrate residual and bulk concentrations of sulfonic acid groups, respectively. A batch of each sample (∼0.15 g) was previously soaked in 25 mL of deionised water. Titration of the equilibrated suspension was performed after 24 h. For back conductimetric titration, a 0.1 M NaCl solution was used to titrate the excess of silver ions. The sample (∼0.15 g) was equilibrated with a mixture of 10 mL of 0.04 M AgNO3 and 20 mL of deionised water. The suspension was kept without access of light during 12 h. It was then filtrated, and an aliquot of the equilibrated solution was titrated in the Arrhenius cell. The pH of the silver nitrate mixture was checked before and after contact with the samples; it was always found to be 4-5. Solid desiccation under reduced pressure (over P2O5) preceded all of the titration experiments.

photons. Powders were pressed at room temperature onto the adhesive side of a copper adhesive electrical tape. The binding energies were corrected on the basis of the standard value of C 1s from contaminants at 284.6 eV. Narrow-scan spectra were used to obtain the chemical state information for sulfur and chromium.
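The back titration described above gives the thiol content by difference: the Ag+ consumed by -SH groups equals the Ag+ initially added minus the excess found with NaCl. A minimal sketch of this arithmetic (the NaCl equivalence amount is hypothetical, and the aliquot dilution factor is ignored for simplicity):

```python
def thiol_content(v_ag_mL, c_ag_M, v_nacl_mL, c_nacl_M, mass_g):
    """Thiol-group content (mmol g^-1) from back titration: Ag+ consumed
    by -SH groups = initial Ag+ minus excess titrated with NaCl."""
    n_initial = v_ag_mL * c_ag_M      # mmol Ag+ added
    n_excess  = v_nacl_mL * c_nacl_M  # mmol Ag+ left after equilibration
    return (n_initial - n_excess) / mass_g

# 0.15 g sample equilibrated with 10 mL of 0.04 M AgNO3 (conditions above);
# the NaCl equivalence volume (2.5 mL of 0.1 M) is hypothetical
content = thiol_content(10.0, 0.04, 2.5, 0.1, 0.15)
print(round(content, 2))   # → 1.0 mmol g^-1
```

With the hypothetical numbers chosen here, the result matches the nominal 1 mmol g-1 mercaptopropyl loading of the MCM-SH precursor.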
Thermogravimetric studies were performed on a Q-1500 D derivatograph of the Paulik-Paulik-Erdey system (Hungary) at atmospheric pressure, with open platinum crucibles and Al2O3 as the standard for comparison. The heating rate was 10 °C min-1 in the range between 20 and 800 °C. Interference studies were performed during 24 hours by adding metal ions (Fe(III), Cu(II), Ni(II)) at concentrations ranging from 0.004 to 0.48 mmol L-1 to suspensions made of 0.05 g of adsorbent in 25 mL of solution containing 0.23 mmol L-1 of Cr(VI).

This chapter is dedicated to the quantitative determination, by conductimetric titrations, of the thiol and sulfonic acid groups present at the surface of organically modified mesoporous silicas (MCM-SH,SO3H) comprising different thiol/sulfonic acid ratios. To this end, a set of MCM-41-type samples containing 1 mmol g-1 of mercaptopropyl groups (MCM-SH) was oxidized with selected amounts of hydrogen peroxide in order to produce materials with different SH/SO3H ratios. Conductimetric titration was proposed as a technique capable of distinguishing the respective contents of the thiol and sulfonic acid groups simultaneously present at the material surface. This method allows the quantitative estimation of sulfonic acid present at contents below 1 % by mass. The strong acid (SO3H) concentrations, also calculated from the conductimetric titration curves, were found to be directly proportional to the amount of added oxidant. The data calculated on the basis of the conductimetric titrations were confirmed by XPS analyses, demonstrating a good correlation between the results obtained by these two techniques. From these data, and in agreement with the literature, the incomplete transformation of thiol groups into sulfonic acid (65 %), even when using a 100-fold molar excess of oxidant, was confirmed.
Temperature-programmed desorption mass spectrometry (TPD-MS) was also used to complete the study. It was demonstrated that the oxidation of MCM-SH by hydrogen peroxide at increasing concentrations does not lead to the formation of noticeable amounts of disulfide groups. The formation of stable poly-sulfonate groups was suggested as one of the possible reasons for the incomplete oxidation of the thiol groups at the material surface.

Mesoporous silicas functionalized with thiol groups (the obvious case was mentioned in Chapter III: surfactant-templated MCM-41-SH), which are generally used to remove heavy metals [164, 196], often serve as precursors for the preparation of alkylsulfonic acid functionalized silica [197, 198] (MCM-41-SH/SO3H-X in Chapters II and III, where X corresponds to the number of the sample, from 1 to 6). The extensive use of both thiol-functionalized materials and their oxidized derivatives has motivated a vigorous interest in studying the composition of their immobilized layer. It was shown that oxidation of immobilized propylthiol species with H2O2 leads to the generation of a polyfunctional layer [199, 200] that consists of both unreacted propylthiol groups and S-containing groups with different oxidation states [201] (demonstrated in scheme (3.1)). With the help of 13C CP/MAS NMR [201], Raman, IR and XPS methods [202] it was demonstrated that the functional layer of materials such as MCM-41-SH/SO3H-X mainly consists of propylthiol (type 0, scheme 3.1) and propylsulfonate (type 5, scheme 3.1) groups. On the other hand, it was demonstrated that the surface layer of MCM-41-SH could also be polyfunctional. For instance, during sol-gel synthesis of MCM-41-SH in alkaline aqueous solution, atmospheric oxygen can cause oxidation of surface alkylthiol groups to disulfide moieties (type 1, scheme 3.1) [203].
At the same time, the conductivity of an aqueous suspension of an unmodified structurally ordered silica-based material of SBA type (see curve 2, Fig. 3.1) increases in the selected range in linear proportion to the quantity of added base. Similarly to silanol groups, immobilized propylthiol groups should not affect the smooth increase of the suspension conductivity [162]. Indeed, acid-base titration of MCM-41-SH demonstrates no inflexion on the conductimetric curve, similar to the titration of SBA (see later discussion).

Fig. 3.1. Curves of conductimetric titration of SiO2 with covalently attached ethylsulfonic acid groups (1) and of the unmodified structurally ordered silica-based material (SBA type) (2)

Fig. 3.2. Curves of direct conductimetric titration of differently oxidized samples MCM-41-SH/SO3H-X, where (1) X = 1, (2) X = 2, (3) X = 3, (4) X = 4, (5) X = 5

Fig. 3.4. Fragments of FTIR spectra of MCM-41-SH (a), MCM-41-SH/SO3H-4 (b) and MCM-41-SH/SO3H-6 (c)

Fig. 3.5. XPS spectra (S 2p core level) of MCM-41-SH (1), MCM-41-SH/SO3H-3 (2), MCM-41-SH/SO3H-4 (3), MCM-41-SH/SO3H-5 (4), MCM-41-SH/SO3H-6 (5)

The intensity of this peak grows with increasing oxidation degree of the immobilized layer (Fig. 3.5). The latter peak is shifted by 5 eV to higher binding energy with respect to the position of the S(II) one, and is attributed to S(VI) sulfur species in alkylsulfonic fragments [162]. Consequently, the XPS data confirm the formation of alkylsulfonic acid species on the MCM-41-SH/SO3H-X surface.
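In titration curves like those of Fig. 3.2, the equivalence volume is conventionally taken as the intersection of the two linear branches (falling conductance during strong-acid neutralization, rising conductance from excess base). A minimal sketch of this endpoint construction on synthetic data:

```python
import numpy as np

def endpoint(v, g, split):
    """Equivalence volume from a V-shaped conductimetric titration curve:
    fit straight lines to the points before and after index `split` and
    return the abscissa of their intersection."""
    m1, b1 = np.polyfit(v[:split], g[:split], 1)
    m2, b2 = np.polyfit(v[split:], g[split:], 1)
    return (b2 - b1) / (m1 - m2)

# Synthetic curve: conductance falls until V_eq = 2.0 mL, then rises
v = np.linspace(0.0, 4.0, 17)
g = np.where(v < 2.0, 5.0 - 1.5 * v, 2.0 + 0.8 * (v - 2.0))
print(round(endpoint(v, g, split=8), 2))   # → 2.0 (mL)
```

Multiplying the equivalence volume by the titrant concentration and dividing by the sample mass then gives the SO3H content in mmol g-1.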
Similar to the results of conductimetric titration, the XPS data denote incomplete transformation of propylthiol into propylsulfonic groups under H2O2 treatment: the spectrum of the MCM-41-SH/SO3H-6 sample, obtained at the highest concentration of H2O2, is still characterized by both S(II) and S(VI) bands (Fig. 3.5). The XPS data cannot be used directly for quantitative determination of the concentration of bonded species because of the unavailability of standards (and because this technique only analyzes the extreme surface of the materials), but the ratio of intensities (h) for different XPS peaks correlates with the concentration ratio of the corresponding fragments. Table 3.2 summarizes the results obtained from the XPS spectra.

Fig. 3.6. Correlation of the -SO3H to -SH ratio, calculated from conductimetric titration curves (hollow circles) and from heights of XPS peaks (solid squares), with the concentration of added oxidant

Under thermal treatment, MCM-41-SH/SO3H-X samples release H2S, H2S2, SO2 and CH3-CH=CH2. Hydrogen sulfide is observed for all MCM-41-SH/SO3H-X samples at high enough temperature (337 °C), Fig. 3.7. This peak is asymmetric and has

Fig. 3.7. TPD-MS study of MCM-41-SH/SO3H-1 (a), MCM-41-SH/SO3H-3 (b), MCM-41-SH/SO3H-6 (c)

The predominance of the low-temperature H2S discharge for the lightly oxidized MCM-41-SH/SO3H-X (where X = 1 and 3) assigns the peak observed at 320-330 °C to the thermal decomposition of thiol groups in accordance with scheme (3.2). H2S2 is generated from the decomposition of dipropyl disulfides (type 1, scheme 3.1), according to scheme (3.3). The SO2 discharge peak observed from MCM-41-SH/SO3H-X at low temperature (260 °C) is assigned to the thermal decomposition of propylsulfonic acid groups according to scheme (3.4).
As the high-temperature peaks of hydrogen sulfide and sulfur dioxide discharge are observed at the same temperature (400 °C), simultaneous decomposition of a surface fragment releasing both of these gases can be suggested. This fragment is presented in scheme 3.1 as the thiosulfonate with formula 4. The formation and thermal decomposition of the immobilized thiosulfonate under TPD-MS is presented by scheme 3.5.

Fig. 4.1. (A) Variation of Cr(III) sorption yields as a function of pH, using various adsorbents: MCM-41-SH (a), MCM-41-SH/SO3H-1 (b), and MCM-41-SH/SO3H-6 (c, d); experimental conditions: solid-to-solution ratios of 0.8 g L-1 (a-c) or 6 g L-1 (d); starting Cr(III) concentration in solution equal to 50 µM. (B) Variation of Cr(III) sorption yields on MCM-41-SH/SO3H-6 at pH 2 as a function of the solid-to-solution ratio (Inset: adsorption isotherm obtained in the same conditions).

Fig. 4.2. XPS spectra (S 2p core-level) of MCM-41-SH (a), MCM-41-SH/SO3H-1 (b) and MCM-41-SH/SO3H-6 (c).

Figure 4.3.A shows that adding MCM-41-SH/SO3H-1 particles into Cr(VI) solutions results in the uptake of some chromium species. The process starts at pH lower than 5 and increases rapidly with decreasing pH, giving rise to a maximal effectiveness of about 55 % sequestration at pH 1-2. Chemical analyses after equilibration have revealed the presence of both Cr(III) and Cr(VI) in the supernatant solution. The Cr(VI) concentration in solution was found to decrease sharply with lowering pH (Fig. 4.3.B). This demonstrates the possible reduction of Cr(VI) by thiol groups immobilized in the mesoporous material, in agreement with observations made with other thiol-containing solids, and this process is more quantitative at lower pH values. Such pH dependence is consistent with the variation of the apparent potentials for the Cr(VI)/Cr(III) redox couple (i.e., increasing when decreasing pH, see Fig. 1.2 in Chapter I).

Fig. 4.3.
Variation as a function of pH of (A) total chromium sorption yield and (B) equilibrium concentration of Cr(VI) and Cr(III) in solution after reaction with MCM-41-SH/SO3H-1 (experimental conditions: solid-to-solution ratio of 0.5 g L-1; starting Cr(VI) concentration in solution equal to 50 µM). Inset in part (A): variation of chromium sorption yields at pH 2 as a function of the solid-to-solution ratio.

Fig. 4.4. XPS spectra (Cr 2p core-level) of MCM-41-SH/SO3H-1 after contacting (a) equimolar and (b) large excess (0.5 M) quantities of Cr(VI) with respect to thiol groups.

(Fig. 4.3.A), as expected from a higher amount of organo-functional groups to reduce Cr(VI) and to immobilize the generated Cr(III) species. These data show that quantitative uptake (i.e., 100 % sorption yield) was reached at about 5 g L-1, a solid-to-solution ratio corresponding to an excess of about 2 orders of magnitude of organo-functional groups in the adsorbent with respect to the initial amount of Cr(VI) in solution. In fact, the variation in sorption yields for Cr(VI) on MCM-41-SH/SO3H-1 (inset in Fig. 4.3.A) follows a trend rather similar to that for Cr(III) sorption on the most fully oxidized sample MCM-41-SH/SO3H-6 (Fig. 4.1.B), suggesting that the limiting step would be the immobilization of the Cr(III) species arising from Cr(VI) reduction and not the redox transformation of Cr(VI) by thiol groups. This is also sustained by the fact that complete Cr(VI) reduction was already achieved at a solid-to-solution ratio of 1 g L-1, although only 64 % of the generated Cr(III) species were bonded to the adsorbent in these conditions.

Fig. 4.5. Variation as a function of pH of (A) total chromium sorption yield and (B) equilibrium concentration of Cr(VI) and Cr(III) in solution after reaction with MCM-41-SH/SO3H-1; experimental conditions: solid-to-solution ratio of 7.5 g L-1; starting Cr(VI) concentration in solution equal to 50 µM.
allowing MCM-41-SH/SO3H-1 to react with Cr(VI) than directly with Cr(III) (compare the data in Fig. 4.3.A with curve "b" in Fig. 4.1.A). For example, at pH 2, 0.8 g L-1 of MCM-41-SH/SO3H-1 was likely to immobilize less than 20 % of Cr(III) from a 50 µM solution, whereas 0.5 g L-1 of MCM-41-SH/SO3H-1 in 50 µM of Cr(VI)

Fig. 4.6. Variation of chromium sorption yields, as a function of the solid-to-solution ratio, using MCM-41-SH/SO3H-X particles suspended in 50 µM Cr(VI) solution at pH 2.2: (a) MCM-41-SH, (b, ○) MCM-41-SH/SO3H-2, (b, □) MCM-41-SH/SO3H-3, (b, ∇) MCM-41-SH/SO3H-5.

Fig. 4.7. XPS spectra (S 2p core-level) of MCM-41-SH after reaction with solutions containing Cr(VI) at increasing concentrations: (a) 50 µM, (b) 2 mM, (c) 5 mM and (d) 0.5 M.

Fig. 5.1. Variation of the amount of ED3A groups attached to the silica surface, as a function of the concentration of the ED3A-silane coupler solution.

Fig. 5.2. FTIR spectra of SiO2-SH (a), SiO2-SH/ED3A (b) and SiO2-ED3A (c) in the 1250-3000 cm-1 range.

Fig. 5.3. (a) Variation of Cr(III) sorption yields as a function of pH, using the SiO2-ED3A adsorbent; experimental conditions: solid-to-solution ratio of 2.0 g L-1, starting Cr(III) concentration in solution equal to 100 µM (25 mL solution). (b) Variation of Cr(VI) reduction-sorption yields as a function of pH, using the SiO2-SH/ED3A adsorbent; experimental conditions: solid-to-solution ratio of 2.0 g L-1, starting Cr(VI) concentration in solution equal to 100 µM (25 mL solution).

Fig. 5.4.
Effect of contact time between (a) a Cr(III) solution and the SiO2-ED3A adsorbent, or (b) a Cr(VI) solution and the SiO2-SH/ED3A adsorbent, on the chromium sorption yields; experimental conditions: solid-to-solution ratios of 4.0 g L-1 (a) or 2.0 g L-1 (b), starting Cr(III) and Cr(VI) concentrations in solution equal to 192 µM and 100 µM, respectively (25 mL solution), and pH in the medium equal to 5.0 (a) or 2.5 (b).

Figure 5.5.B shows that SiO2-SH/ED3A is likely to decrease the residual chromium concentration below low threshold values, but this requires the use of high enough solid-to-solution ratios. For instance, working in a solution containing initially 0.1 mM Cr(VI), the use of SiO2-SH/ED3A contents increasing from 2 to 4 and then to 6 g L-1 resulted in sorption yields of 88, 97, and 99 %, respectively. The adsorbent is thus likely to reduce the chromium concentration below the µM concentration level. In view of the importance of the solid-to-solution ratio (Fig. 5.5.B) and considering the fast sorption kinetics (Fig. 5.4.b), flow-through experiments could be

Fig. 5.7. UV-Vis diffuse reflectance spectra of Cr(VI)-treated SiO2-SH/ED3A at various chromium contents, ranging from 0.04 to 25 mg g-1.

Fig. 5.9. Cr(VI) reduction-sorption onto SiO2-SH/ED3A in dynamic mode. Variation of (A) the concentration of remaining (non-adsorbed) chromium in solution at the output of the column, and (B) the amount of adsorbed chromium onto the solid phase, as a function of the solution volume passed through the column; the experiments have been performed at two Cr(VI) concentrations (2 mM (■,▲,•) and 4 mM (□,∆,○), at pH 2.5) and three different flow rates (0.1 mL min-1 (■,□), 0.4 mL min-1 (▲,∆), and 1.0 mL min-1 (•,○)).

As shown in Figure 5.10 (curve 2), the sorption efficiency decreases by 15 % as the volume increases to 100 mL.
When the solution is diluted to 500 mL, the sorption efficiency remains constant (71-75 %), but it drops sharply (by 10 %) when the volume of Cr(VI) solution passed through the column is 1000 mL. The efficiency of Cr(VI) reduction (curve 1, Fig. 5.10) exceeds 90 % when low volumes (10-25 mL) are passed through the column, and is reduced to 80 % when 1000 mL are used. Apparently, dilution of the Cr(VI) solution has little effect on the reducing ability of the SiO2-SH/ED3A column, but the sorption efficiency towards the reductively generated Cr(III) is somewhat reduced when the solution is diluted to 500 mL, and is rather low (56 %) when diluted to 1000 mL.

Fig. 5.10. Efficiency of Cr(VI) reduction (1) and sorption (2) after interaction with SiO2-SH/ED3A in dynamic mode versus the volume of Cr(VI) solution; Inset: corresponding variation of distribution coefficients (m(SiO2-SH/ED3A) = 100 mg, n(Cr(VI)) = 10 µmol, ν = 1 mL min-1)

Fig. 5.12. Influence of the ionic strength on the adsorption capacity of the SiO2-SH/ED3A adsorbent towards Cr(VI) (2.0 g L-1; starting Cr(VI) concentration in solution equal to 2 mM; pH 2.5), and corresponding variation of distribution coefficients (inset).

Fig. 5.14. Variations of the desorption yields, as a function of pH, of a column made of 100 mg of SiO2-SH/ED3A material pre-treated with a solution (25 mL) containing 80 µM of various metal species; desorption was made at a flow rate of 1.0 mL min-1 in a pH-gradient elution mode (see variation in pH values as a function of the elution volume in the inset). Metal species were Pb(II) (○), Co(II) (▼), Fe(III) (◊), Ni(II) (▲), Cu(II) (□), and Cr(VI) (adsorbed as Cr(III) species, •).

2-SH/ED3A the binding mainly occurs through complexation with ED3A groups (if the quantity of Cr(VI) exceeds the molar ratio with ED3A, the excess Cr(III) is bound by ion exchange with the generated SO3H groups).
Due to the reduction-sorption mechanism of Cr(VI) sequestration, the selectivity of the bifunctionalized adsorbents is higher in comparison to conventional Cr(VI) adsorbents. It is shown that high concentrations of either competing anions (high ionic strength is typical for wastewaters from the plating industry) or heavy metals do not prevent efficient Cr(VI) removal. The selectivity of Cr(VI) extraction is based on the kinetic inertness of the Cr(III)-ED3A complex under the applied conditions (pH = 1-3). Both bifunctionalized adsorbents are advantageous in comparison to anion exchangers, which can suffer from competition with other anions such as sulfate, nitrate or phosphate. Besides, they operate under conditions typical for industrial wastewaters (pH = 1-3), which facilitates the process of wastewater treatment. Experiments in dynamic mode revealed the possibility of using SiO2-SH/ED3A for column treatment of Cr(VI)-contaminated waters. It was shown that such a column can be used for complete sequestration of Cr(VI) from solutions with Cr(VI) concentrations as low as the tolerance level accepted for industrial wastes (0.05-0.1 mg L-1). Pre-concentration of Cr(VI) at the surface of SiO2-SH/ED3A in combination with X-ray fluorescence analysis can also be used to create a test method for Cr(VI) detection. The occurrence of the characteristic colour of the Cr(III)-ED3A complex in the SiO2-SH/ED3A phase after interaction with solutions containing Cr(VI) allows its presence to be detected visually.

[Cr(H2O)6]3+ + H2O = [Cr(OH)(H2O)5]2+ + H3O+    (1.1)
[Cr(OH)(H2O)5]2+ + H2O = [Cr(OH)2(H2O)4]+ + H3O+    (1.2)
[Cr(OH)2(H2O)4]+ + H2O = Cr(OH)3(H2O)3 + H3O+ (abbreviated as Cr(OH)3)    (1.3)

Table 1.1. Evaluation of carcinogenicity of chromium compounds [30]

Taking into account the drastically different biochemical reactivity of Cr(III) and Cr(VI), present-day regulations and quality guidelines call for an accurate distinction between Cr(VI) and Cr(III) species.
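The test-method idea mentioned in the general conclusions above (pre-concentration of Cr(VI) from a large sample volume onto 100 mg of SiO2-SH/ED3A prior to XRF detection) amounts to a simple enrichment estimate; a sketch (the 75 % recovery is taken near the value observed above for a 250 mL sample, and is otherwise illustrative):

```python
def preconcentration(volume_mL, mass_g, recovery):
    """Nominal enrichment achieved by collecting the analyte from a
    solution volume onto a small mass of solid phase (mL of solution
    'compressed' per gram of adsorbent)."""
    return recovery * volume_mL / mass_g

# Column conditions above: 250 mL passed over 100 mg at ~75 % recovery
ef = preconcentration(250.0, 0.1, 0.75)
print(round(ef))   # → 1875
```

Such three-orders-of-magnitude enrichment is what makes the solid-phase XRF signal measurable even for solutions near the 0.05-0.1 mg L-1 tolerance level.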
As shown in Table 1.2, the maximum allowable concentrations (MAC) of hexavalent chromium in some countries can be 1000 times lower than that of total chromium.

*Group 1: carcinogenic to humans; Group 2: probably carcinogenic to humans; Group 3: not classifiable as to its carcinogenicity to humans

Table 1.2. Maximum allowable concentrations for waste waters, mg L-1 [30]

Total Cr:
0.5 — France, Ukraine, Germany (metallurgy, chemical industry), South Africa (waste waters)
1 — Germany (tanning)
2 — Japan (public water systems)
0.005 — South Africa (ecosystems)

Cr(VI):
0.05-0.1 — France, Ukraine, Germany (metallurgy, chemical industry), South Africa (agricultural use)
0.5 — Germany (tanning), Japan (public water systems), USA

At the same time, the concentrations of hexavalent chromium at the output of a regular galvanizing plant (see Table 1.3) reach values a thousand times higher than the MAC.
Table 1 .4 Comparison of the Adsorption Capacity of Several Amine or Quaternary Ammonium Based adsorbents 1 Sample Functional groups Sorption capacity, mg g -1 references Amberlite IRA 400 60 102 Amberlite IRA 900 quaternary ammonium 153 103 Amberlite IR 67 RF imine 56 98 Amberlite IRA-96 amine 32 102 QAPAN fiber amine, quaternary ammonium 248 111 APAN fiber amine 35 104 APAN fiber amine 133 105 4-VP grafted PET fiber 4-vinyl pyridine 72 106 PANI-jute fiber amine and imine 63 107 RQA resin amine, quaternary ammonium 48 108 RCl resin N-methylimidazolium 132 109 VION AN-1 Pyridinium ion 130 110 Table 1 .5 Distribution coefficients of Cr(III) and Cr(VI) between AG-1X8 and solution of H 2 SO 4 with different concentrations [113] 1 Transformation of pyridine groups into more basic forms decreases chemical stability of resin in Cr(VI) adsorption from solutions. A novel fibrouse adsorbent functionalized with both amino and quaternary ammonium groups [111] can be repeatedly used for removal of aqueous Cr(VI). It is also stated that Cr(VI) is partially reduced to Cr(III) at its surface. The anion-exchanger Table 1 .6 Cr(III) distribution coefficients between cation-exchangers and acid solutions [118] 1 3 -, ClO 3 -, ClO 4 -anions cause formation of complex ions and change the colour from purple to green. Distribution coefficients of Cr(III) between cation-exchangers and acid solutions depends on acid concentrations, (see Table 1.6). 
Table 1.7 Efficiency of adsorption of different chromium species on low-cost (bio)adsorbents at different pH [150]

                    Adsorption yields, %
                    pH = 2                pH = 5
  Adsorbent         Cr(VI)    Cr(III)     Cr(VI)    Cr(III)
  Wool              69.3      0.0         5.8       58.3
  Olive pits        47.1      0.0         8.4       74.8
  Sawdust           53.5      0.0         13.8      96.8
  Pine needles      42.9      0.0         13.0      79.4
  Almond pits       23.5      0.0         2.3       60.0
  Coal              23.6      -           2.4       99.4
  Cactus            19.8      0.0         8.2       55.2

Table 1.8 Final solution pH, removal efficiencies and sorption capacities of total Cr at equilibrium on different types of low-cost (bio)sorbents [77, 131]

  Adsorbent         Removal efficiency of total Cr, %   Final solution pH   Sorption capacity, mg g-1
  Pine needle       38.0                                2.21                21.5
  Pine bark         85.0                                2.17                -
  Pine cone         71.8                                2.17                -
  Banana skin       25.5                                2.37                -
  Green tea waste   64.8                                2.27                5.7
  Oak leaf          48.7                                2.29                -
  Walnut shell      24.6                                2.21                5.88
  Rice straw        26.3                                2.24                -
  Peanut shell      41.0                                2.19                -
  Sawdust           19.9                                2.24                53.5
  Orange peel       49.9                                2.27                -
  Rice husk         25.2                                2.21                45.6
  Rhizopus          27.2                                2.32                23.9
  Ecklonia          77.2                                2.5                 -
  Sargassum         64.1                                2.46                32.6
  Enteromorpha      15.8                                2.37                -

As one can see from Table 1.8, the sorption capacities of polyfunctional (bio)sorbents are of the same order of magnitude as those reported for cation-exchangers.

In 2011, Guo et al. published data on the reductive-adsorption capability of bacteria immobilized in a sol-gel [153]. Unfortunately, the chromium adsorption parameters were not examined in detail. Moreover, the proposed adsorbent exhibits slow interaction kinetics: it takes days to reach complete reduction of Cr(VI).

In this work we propose to examine the application of bifunctionalized silica-based materials for selective Cr(VI) removal from acidic solutions. By designing the chemical composition of the surface and the structure of the bifunctionalized adsorbent, we will try to improve the sorption properties and increase the efficiency of sequestration of chromium in its less toxic form.
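Removal efficiency and sorption capacity in Table 1.8 are linked through the batch conditions; a minimal sketch of the standard relation q = (C0 - Ce)·V/m (illustrative conditions, not those of refs [77, 131]):

```python
def sorption_capacity(c0, removal_pct, volume_l, mass_g):
    """q = (C0 - Ce)*V / m, in mg of Cr per g of sorbent."""
    removed = c0 * removal_pct / 100.0   # mg L-1 of Cr removed
    return removed * volume_l / mass_g   # mg g-1

# illustrative batch: 100 mg L-1 total Cr, 0.5 L, 1 g sorbent, 40 % removal
print(sorption_capacity(100.0, 40.0, 0.5, 1.0))  # 20.0
```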
At a dose of 15 g L-1, Fe@SiO2 can remove 70 mg L-1 of Cr(VI) at pH 6; the removal capacity of Fe@SiO2 reaches 467 mg of Cr per g. By analogy to [71], in this work we propose to consider silica-based materials modified with thiolpropyl as a reductant group and with either propylsulfonic acid or ethylenediaminetriacetate as a binding group. We should now review the literature concerned with the interaction of the above-mentioned ligands with Cr(III), Cr(VI) and other heavy metals.

1.4 Justification for functional group choice

1.4.1 Interaction of Cr(III) and Cr(VI) with thiols

Cr(III) belongs to the hard acids [155]; it forms inert complexes with coordination number 6 and an octahedral configuration of ligands containing oxygen, nitrogen and sulfur as donor atoms. These include complexes with neutral groups (H2O, NO, NH3, NO2, SO2, S, P, CO, N2H4, NH2OH, C2H5OH, C6H6, etc.) and with ions (F-, Cl-, Br-, …). In aqueous solutions, because of the kinetic inertness of its aqua-complexes, Cr(III) does not interact with sulfides [155] or thiols. Chromium(III) sulfide (Cr2S3), formed by treatment of CrCl3 with gaseous H2S, is a poorly soluble substance that decomposes very slowly in contact with water. This is the reason why there are no publications devoted to the adsorption of Cr(III) from aqueous solutions by thiol-functionalized materials. Nevertheless, thiol- and sulfide-bearing materials are used for Cr(VI) reduction [156, 157]. Trofimchuk reports in his work [158] the possibility of using …

Potassium dichromate in the presence of EDTA in weakly alkaline medium is gradually reduced; the process is significantly accelerated when boiled in the presence of manganese salts [168]. As a result, a purple compound is formed, which is likely to be the CrY- complex.

Table 1.9 Stability constants of some EDTA complexes; t = 20 °C; µ = 0.1 in KNO3 [179]
  Cation   Complex   log K(MY)
  Cu2+     CuY2-     18.8
  Cd2+     CdY2-     16.46
  Pb2+     PbY2-     18.04
  Co2+     CoY2-     16.31
  Ni2+     NiY2-     18.62
  Zn2+     ZnY2-     16.5
  Al3+     AlY-      16.13
  Fe3+     FeY-      25.1
  Cr3+     CrY-      23.4

EDTA (H4Y) binds metal cations according to the general reaction:

M^n+ + H4Y → MY^(n-4)+ + 4H+        (1.13)

If a reducing agent (iodide) is present in solution, a violet complex is formed immediately, most likely due to the formation of the anhydrous ion Cr3+ [176]. The UV-vis spectrum of this complex (Fig. 1.12) reveals two absorption peaks at pH 2 (396 and 538 nm) [177]. In basic medium (pH 11), the absorption peaks are shifted to 390 and 590 nm. This change in colour can easily be detected by eye. At higher pH, the purple CrY- complex turns into the dark blue complexes Cr(OH)Y2- and Cr(OH)2Y3-. Complete formation of the CrY- complex takes 30 min [170, 171], 45 min [172] or 5 min [173]; the process can be accelerated by heating. Complex formation at ordinary temperature can also be sped up by adding a catalyst such as acetate or bicarbonate [174], as well as trace amounts of Cr(II) [175]. The CrY- complex is very stable (see Table 1.9) and is characterized by constant light absorption at pH 1.5-4. The optimal light absorption is observed at a Cr-to-EDTA ratio of 1:6. Under such conditions, the spectrophotometric method enables the detection of up to 6 mg of Cr(III) per liter [178].
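The spectrophotometric determination mentioned above rests on a linear calibration (Beer-Lambert law); a minimal sketch using the 538 nm peak from the text, with invented absorbance values (not the data of ref. [178]):

```python
# Least-squares calibration line A = k*C through the origin for a
# Cr(III)-EDTA complex read at 538 nm; C in mg L-1, A values invented.
concs = [1.0, 2.0, 4.0, 6.0]
absorb = [0.05, 0.10, 0.20, 0.30]

# slope through the origin: k = sum(C*A) / sum(C^2)
k = sum(c * a for c, a in zip(concs, absorb)) / sum(c * c for c in concs)

unknown_abs = 0.15                 # absorbance of an unknown sample
print(round(unknown_abs / k, 2))   # 3.0 mg L-1
```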
Chemicals used to prepare the adsorbents were: (1) the silica support, a chromatographic-grade silica Kieselgel 60 obtained from Merck (characterized by an average pore size of about 7 nm, an average particle size of 125 ± 25 µm, a specific surface area of 425 ± 25 m2 g-1, and a surface hydroxyl content of 3.6 mmol OH per g); (2) organosilanes: tetraethoxysilane (TEOS, >98%, Merck), mercaptopropyltrimethoxysilane (MPTMS, 95%, Lancaster) and N-[(3-trimethoxysilyl)propyl]-ethylenediamine triacetate (ED3A, 50% (w/w) aqueous solution of the sodium salt of the ED3A-silane coupler, Petrarch Systems Inc.); (3) solvents: toluene (95%, Merck), ethanol and methanol (Merck); and (4) reagents: cetyltrimethylammonium bromide (CTAB, Merck), ammonia (28%, Merck), hydrogen peroxide (35%, Merck), concentrated HCl (37%), HNO3 (65%) and H2SO4 (95-97%), standard solutions of NaCl, NaOH and HNO3, as well as NaHSO3 and diphenylcarbazide (DPC).

Solutions — The stock solution of Cr(III) was prepared by dissolving 0.7073 g of K2Cr2O7 in 100 mL of 2 M H2SO4 and adding 5 g of NaHSO3 in small portions under vigorous stirring. The obtained medium was then boiled to complete the reduction of Cr(VI) into Cr(III) and to expel SO2, and was diluted to 1 L to get a final Cr(III) concentration of 0.250 g L-1.

Oxidation of thiol groups into sulfonic acid moieties

MCM-41-SH/SO3H-1 was synthesized without taking care to exclude oxidation by air oxygen. The material was divided into three parts and titrated after different times of storage in air: conductimetric titration was performed on samples exposed to air for 1 day (MCM-41-SH/SO3H-2.1), 2 weeks (MCM-41-SH/SO3H-2.2) and 2 months (MCM-41-SH/SO3H-2.3). The structural parameters of the functionalized materials are given in Table 2.1.
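The stock-solution figures above are internally consistent; a minimal sketch checking the Cr mass balance (molar masses from standard atomic weights):

```python
# Cr(III) stock: 0.7073 g of K2Cr2O7 reduced with NaHSO3 and diluted to 1 L.
M_K2Cr2O7 = 2 * 39.098 + 2 * 51.996 + 7 * 15.999   # g mol-1
M_Cr = 51.996                                      # g mol-1

m_salt = 0.7073                            # g of K2Cr2O7 weighed in
m_cr = m_salt * (2 * M_Cr) / M_K2Cr2O7     # g of Cr ending up in 1 L
print(f"{m_cr:.3f}")  # 0.250 g L-1, as stated in the text
```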
MCM-41-SH was partially oxidized by adapting a previously published procedure involving the use of H2O2 in a methanol-water mixture [191]. To get a fully oxidized material (MCM-41-SH/SO3H-6), 0.7 g of MCM-41-SH was suspended in a mixture of 10 mL of aqueous 35% H2O2 (3.5 g) and 35 mL of methanol. After 24 h of stirring, the solid particles were filtered off and washed with water and ethanol. The wet materials were re-suspended in 0.1 M H2SO4 and stirred for another 4 h, filtered again, washed with water and ethanol, and dried under vacuum (<10^-2 bar) for 24 h.

The grafting reaction was conducted at 100 °C for 6 hours. The resulting precipitate was then washed in a Soxhlet apparatus with toluene (1 h) and then with methanol for 24 h. The final material contained about 0.4 mmol of covalently immobilized thiol groups per gram of adsorbent (for more information about the structural parameters, see Table 2.1).

2.3.5 Silica gel with covalently attached mercaptopropyl and ethylenediaminetriacetate groups

Thiol- and ethylenediaminetriacetate-functionalized silica samples (SiO2-SH/ED3A) were prepared according to a procedure similar to that reported for ethylenediaminetriacetate-bonded silica gel (SiO2-ED3A) [194]. Briefly, 5.0 g of SiO2-SH was placed into 20 mL of a water-methanol solution (1:1 by volume) and blended for 30 minutes. Then 4.0 mL of the ED3A-silane coupler solution was added to the SiO2-SH slurry, and the mixture was blended for 3 days at room temperature. The adsorbent was filtered, washed with distilled water and dried at 100 °C under reduced pressure. It contained about 0.4 mmol of immobilized ED3A groups per gram of material, i.e., the same amount as SH groups (for the other adsorbent parameters, see Table 2.1). These optimal conditions for the functionalization of SiO2-SH with ED3A groups were defined after investigating the binding of the ED3A-silane coupler to pure silica gel (1 g mixed with 4 mL of a 1:1 water-methanol solution containing the ED3A-silane coupler at concentrations varying from 13 to 290 g L-1).
Table 2.1 Parameters of the functionalized materials

  Sample           C(-SO3H),   C(-SH),    C(-ED3A),   BET surface     Mesopore          Pore
                   mmol g-1    mmol g-1   mmol g-1    area, m2 g-1    volume, cm3 g-1   diameter, Å
  MCM-SH/SO3H-1    0.035       0.898      -           ≥860            ≥0.42             ≥20
  …                …           …          -           (values for the whole MCM-SH/SO3H-1…6 series)
  MCM-SH/SO3H-6    0.606       0.327      -
  MCM-SH           -           0.931      -           1598            0.76              20
  SiO2-SH/ED3A     -           0.382      0.401       346             -                 5
  SiO2-SH          -           0.392      -           448             -                 6
  SiO2-ED3A        -           -          0.181       364             -                 5

Table 2.2 Conditions applied in the experiments studying the effect of pH on Cr uptake

  Cr(III): MCM-41-SH/SO3H-6 (m = 0.02-0.15 g); MCM-41-SH/SO3H-1, MCM-41-SH (m = 0.02 g); SiO2-ED3A (m = 0.05 g); V = 25 mL; t = 24 h
  Cr(VI):  MCM-41-SH/SO3H-1 (m = 0.01-0.15 g; V = 20 mL); SiO2-SH/ED3A (m = 0.05 g; V = 25 mL); t = 24 h

Table 2.3 Conditions applied in the experiments studying the effect of the solid-to-solution ratio on Cr uptake

  Cr species   Sorbent            m, g        V, mL   C, mol L-1           pH    t, h
  Cr(III)      MCM-41-SH/SO3H-6   0.01-0.35   50      5·10^-5 - 10^-4      2.2   24
               MCM-41-SH          0.01-0.20
  Cr(VI)       MCM-41-SH/SO3H-2   0.03-0.2
               MCM-41-SH/SO3H-3   0.03-0.2
               MCM-41-SH/SO3H-5   0.05-0.2
               SiO2-SH/ED3A       0.01-0.3                                 3

Table 2.4 Conditions applied in the experiments studying the kinetics of Cr(III) and Cr(VI) interaction with the sorbents

  Cr species   Sorbent        m, g    C, mol L-1    pH    t, h
  Cr(III)      SiO2-ED3A      0.1     1.92·10^-4    5     72
  Cr(VI)       SiO2-SH/ED3A   0.05    1.00·10^-4    2.5   24

Table 2.5 Conditions applied in the experiments studying the effect of chromium concentration on sorption

  Cr species   Sorbent            V, mL   m, g    C, mol L-1            pH    t, h
  Cr(III)      MCM-41-SH/SO3H-6   25      0.02    10^-5 - 1.7·10^-3     2     24
  Cr(VI)       SiO2-SH            25      0.125   4·10^-5 - 4·10^-3     2.5   24
               SiO2-SH/ED3A       25      0.125   2·10^-6 - 2·10^-2     2.5   24

Table 3.1 Concentrations of different sulfur groups in the MCM-41-SH/SO3H-X series
(b: values calculated from the total concentration of sulfur-bearing groups; c: concentration determined by elemental analysis.)

As illustrated in Fig. 3.3, a linear correlation exists between the H2O2 concentration and the -SO3H loading of MCM-41-SH/SO3H-X.
So one could expect that adding MCM-41-SH to a 5 mM H2O2 solution would lead to complete oxidation of the surface layer to -SO3H groups. Nevertheless, such a transformation is not observed at any H2O2 concentration and can only be achieved when the surface concentration of thiol groups is low [201]. This restriction, in the case of mild oxidation with H2O2, is generally attributed to the formation of disulfide groups that are stable towards oxidation [215].

  Sample              C(H2O2),   C(-SO3H),   C(-SH),     ΣCL,       ΣCL(c),    ωH+, %
                      mmol L-1   mmol g-1    mmol g-1    mmol g-1   mmol g-1
  MCM-41-SH           0          0           0.931       0.931      -          0
  MCM-41-SH/SO3H-1    0 (a)      0.035       0.898       0.933      -          4
  MCM-41-SH/SO3H-2    0.16       0.164       0.769 (b)   -          -          18
  MCM-41-SH/SO3H-3    0.31       0.194       0.739 (b)   -          1.06       21
  MCM-41-SH/SO3H-4    0.91       0.354       0.599       0.953      -          37
  MCM-41-SH/SO3H-5    1.45       0.361       0.550       0.911      -          40
  MCM-41-SH/SO3H-6    2.49       0.606       0.327 (b)   -          -          65
  (a: oxidation in air)

Table 3.2 summarizes the results obtained from the XPS spectra of the different MCM-41-SH/SO3H-X samples, together with the data obtained from their conductimetric titration. A better correlation between the S(VI)/S(II) ratio and ωH+, %, is observed for the data obtained from the peak intensities than for the mass concentrations given in the quantification report of the XPS spectra analyzer.

[Figure: S 2p XPS spectra of samples 1-5; x-axis: binding energy, 160-172 eV; y-axis: intensity, CPS]

Table 3.2 Data from the XPS spectra

                       Peak height h, cm     S(VI)/S(II), %        Conductimetric determination
  Sample               S(II)     S(VI)       from h    from (a)    C(SH), mmol g-1   C(SO3H), mmol g-1   ωH+, %
  MCM-41-SH/SO3H-6     1.7       2.6         60        68          0.33              0.61                65
  MCM-41-SH/SO3H-5     2.1       1.9         48        65          0.55              0.36                40
  MCM-41-SH/SO3H-4     2.4       1.5         38        48          0.60              0.35                37
  MCM-41-SH/SO3H-3     4.4       1.6         26        40          0.74              0.19                21
  (a: from the quantification report of the XPS spectra analyzer)

Table 3.3 Concentrations of different sulfur groups in the MCM-41-SH/SO3H-2.X series

… from 35 to 100% of propylthiol groups.
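The oxidation degrees ωH+ in Table 3.1 follow directly from the titrated group concentrations; a minimal sketch of the calculation (values taken from Table 3.1):

```python
# Oxidation degree of the surface layer: fraction of sulfur-bearing
# groups converted to -SO3H, from conductimetric titration data.
samples = {  # name: (C(-SO3H), C(-SH)) in mmol g-1, from Table 3.1
    "MCM-41-SH/SO3H-1": (0.035, 0.898),
    "MCM-41-SH/SO3H-4": (0.354, 0.599),
    "MCM-41-SH/SO3H-6": (0.606, 0.327),
}
for name, (so3h, sh) in samples.items():
    omega = 100 * so3h / (so3h + sh)
    print(f"{name}: {omega:.0f} %")   # 4 %, 37 %, 65 %
```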
No essential amount of dipropyl disulfide is detected for any sample with an oxidation degree from 0 to 65%, so the stabilization towards complete oxidation by H2O2 cannot be explained by surface disulfide formation.

…, holding great promise for wastewater treatment by reducing the volume of toxic sludges with respect to the conventional reduction-precipitation process.

[Figure: S 2p XPS spectra (a-d) showing sulfonic acid and thiol components; x-axis: binding energy, 160-175 eV; y-axis: intensity, cps]

Chapter V. Cr(VI) removal via reduction-sorption by SiO2-SH/ED3A

5.1 Adsorbent preparation and characteristics

Attachment of mercaptopropyl groups (SH), on the one hand, and ethylenediaminetriacetate moieties (ED3A), on the other hand, onto the silica surface requires two distinct grafting procedures. The first one is the "classical" grafting reaction of MPTMS on silica in refluxing toluene (see section 2.3.4), but special care should be taken to avoid complete coverage of the whole silica surface, so as to enable further immobilization of ED3A groups; this can easily be done by adjusting the MPTMS/silica ratio [192]. Then, ED3A groups were attached to the SiO2-SH material …

5.2 Batch sorption experiments

5.2.1 Cr(III) sorption on SiO2-ED3A

Cr(III) species (Cr3+, or more accurately the aqua complex Cr(H2O)6^3+ in acidic medium) can be immobilized on SiO2-ED3A and, as illustrated in Figure 5.3 (part a), this process is pH-dependent. As shown, no adsorption was observed at pH lower than 2.5, but sorption yields then increased rapidly at pH 3, to reach maximum values above pH 5.
The sorption process is expected to be due to metal-ligand complex formation (on the basis of known Cr(III)-chelate complexes with chelators such as N-(2-hydroxyethyl)ethylenediaminetriacetic acid (HEDTA) or ethylenediaminetetraacetic acid (EDTA)), but one cannot exclude a contribution of non-selective bonding of Cr(H2O)6^3+ species to silanolate groups, especially at higher pH values (i.e., above pH 4). Chromium uptake by SiO2-SH/ED3A (Fig. 5.3) is significantly larger than for SiO2-SH at low pH (i.e., between pH 2 and 3 for MCM-SH/SO3H-X), indicating more efficient Cr3+ binding in acidic medium for the SiO2-SH/ED3A sorbent, which can be explained by complex formation with ED3A (even at pH values as low as 0.5-1, consistent with the highly stable Cr(III) complexes with HEDTA [170] or EDTA [226] in strongly acidic media). The kinetics associated with Cr(VI) reduction and subsequent Cr(III) immobilization onto SiO2-SH/ED3A are very fast (see curve b in Fig. 5.4), showing steady-state values for maximum sorption yields in less than 15 min (i.e., the first measured data point). This supports the idea of fast binding of freshly generated Cr(III) in the form of Cr3+ species, which are formed close to ED3A groups in the porous material and are thereby likely to undergo rapid complexation, contrary to as-prepared solutions of Cr(III), in which chromium is in the form of Cr(H2O)6^3+ and requires slow ligand exchange before being complexed by the ED3A chelate (as supported by curve a in Fig. 5.2).
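Batch kinetic curves of this kind are commonly compared with a pseudo-first-order (Lagergren) model; a minimal sketch with an invented rate constant (the thesis only states qualitatively that steady state is reached in under 15 min, so k1 here is an assumption, not a fitted value):

```python
import math

# Pseudo-first-order (Lagergren) uptake: q(t) = qe * (1 - exp(-k1 * t)).
# With the assumed k1 = 0.3 min-1, uptake is essentially complete
# within 15 min, consistent with the fast kinetics described above.
def q(t, qe=1.0, k1=0.3):
    return qe * (1 - math.exp(-k1 * t))

print(round(q(15) / q(1e9), 3))  # fraction of equilibrium uptake at 15 min
```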
2012
https://hal.univ-lorraine.fr/tel-01749959/file/DDOC_T_2012_0397_SANTINI.pdf
Karen Burga, Anna Maria Becker, Cédric Mondy, Hugues Clivot, Audrey Cordi, Anne-Sophie Foltete, Eric Gismondi, Nelly Jacquet, Stéphane Jomini, Huong Thi Thuy Ngo, Nubia Quiroz, Guillermo Restrepo, Benjamin Schmidt, Didier Techer, Rosy Toda, Hela Toumi, Philippe Wagner

Article 2: Copper effects on Na+/K+-ATPase and H+-ATPase in the freshwater bivalve Anodonta anatina
Santini O., Vasseur P., Frank H.
Keywords: calcium homeostasis, copper, Na+/K+-ATPase, H+-ATPase, Anodonta anatina

Effects of copper on the activities of the cell plasma membrane H+-ATPase and Na+/K+-ATPase of the freshwater mussel Anodonta anatina were assessed after 4, 7, and 15 days of exposure to Cu2+ at the environmentally relevant concentration of 0.35 µmol L-1. The H+-ATPase was measured in the mantle, and the Na+/K+-ATPase in the gills, the digestive gland and the mantle. The Na+/K+-ATPase activities showed significant inhibition after 4 days in the gills (72 %) and in the digestive gland (80 %) relative to control mussels; in the mantle, no inhibition of the enzyme activities was noted. Incipient recovery of the Na+/K+-ATPase activity was registered after 7 and 15 days in the gills and the digestive gland, yet in the gills activity did not return to the basal level within 15 days. H+-ATPase activity remained unaffected by Cu2+ at the test concentration.

Keywords: metal homeostasis, freshwater bivalve, phytochelatins, Anodonta cygnea

Article 4: Phytochelatins, a group …
Keywords: copper, freshwater bivalve, phytochelatins, Anodonta cygnea, metal tolerance

Metallothionein redox cycle and function (Kang). MT: metallothionein, ROS: reactive oxygen species, GSH: glutathione, GSSG: glutathione disulfide
Fig. 8: PC synthesis (Vatamaniuk, "Worms take the 'phyto' out of 'phytochelatins'")
Peak "a" is an unidentified compound originating from the derivatization reaction with the reagent.
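The inhibition percentages quoted in the abstract are relative to controls; a minimal sketch of the calculation (activities in µmol Pi/mg protein/min are invented to reproduce the 72 % figure, not measured values):

```python
def percent_inhibition(control, exposed):
    """Inhibition of enzyme activity relative to the control group, %."""
    return 100 * (control - exposed) / control

# illustrative gill Na+/K+-ATPase activities after 4 d of Cu2+ exposure
control_activity = 0.100   # µmol Pi / mg protein / min (invented)
exposed_activity = 0.028   # µmol Pi / mg protein / min (invented)
print(round(percent_inhibition(control_activity, exposed_activity)))  # 72
```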
Standard concentrations were 10 µmol L-1 for cysteine (Cys), glutathione (GSH) and γ-glutamylcysteine (γ-GluCys), 5 µmol L-1 for the internal standard N-acetylcysteine (NAC), and 2 µmol L-1 for phytochelatins 2-5 (PC2-5).

List of tables

Background
Table 1: Functions of the main cuproproteins (Tapiero et al., 2003)

Extended summary
Table 1: Basal enzymatic activities of plasma membrane ATPase(s) (µmol Pi/mg protein/min) and cytoplasmic carbonic anhydrase (U/mg protein) in Anodonta anatina.
Table 2: Basal levels of phytochelatins 2-4 (µg PC/g tissue wet weight), γ-GluCys (µg γ-GluCys/g tissue wet weight), and metallothionein (mg MT/g protein) in Anodonta cygnea.

Article 1
Table 1: PMCA activities (µmol Pi L-1 mg-1 protein min-1) in different tissues of freshwater and marine organisms.
Table 2: Carbonic anhydrase activities in freshwater and marine organisms evaluated by measurement of the pH decrease caused by enzymatic hydration of CO2.

Article 2
Table 1: Plasma membrane Na+/K+-ATPase and H+-ATPase activities (µmol/L Pi/mg protein/min) in different tissues of freshwater and marine organisms.

Article 3
Table 1: Average retention times in minutes (n = 20) ± SD, limit of detection (LOD), and limit of quantification (LOQ) in pmol per 20 µL injected, for cysteine-rich metal-binding peptide standards. Standard curves were run with 7 concentrations: 1 to 10 µmol L-1 for Cys, GSH and γ-GluCys, and 0.2 to 2 µmol L-1 for PC2-5.
Table 2: PC content [µg PC g-1 tissue wet weight] in digestive gland and gills of Anodonta cygnea, means (n = 6) ± SD.
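LOD and LOQ values such as those in Article 3, Table 1 are commonly derived from the calibration curve via the 3σ/10σ convention; a sketch with invented numbers (not the thesis data):

```python
# LOD = 3*s/b and LOQ = 10*s/b, with s the standard deviation of the
# blank signal and b the calibration slope (signal per pmol injected).
s_blank = 0.5   # SD of the blank signal, arbitrary units (invented)
slope = 1.0     # calibration slope, signal units per pmol (invented)

lod = 3 * s_blank / slope    # pmol per injection
loq = 10 * s_blank / slope
print(lod, loq)  # 1.5 5.0
```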
Article 4
Table 1: γ-GluCys and MT content in the digestive gland and the gills of A. cygnea exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d. Means of results (n = 6) ± SD. * = significant difference between the exposed group and its respective control (Kruskal-Wallis and Mann-Whitney two-sided tests, n = 6, α = 0.05).

Summary

Copper (Cu) is one of the metals contaminating European freshwater ecosystems. Filter-feeding bivalves have a high bioaccumulation potential for transition metals such as Cu. While copper is an essential micronutrient for living organisms, it causes serious metabolic and physiological impairments when in excess. The objectives of this thesis are to gain knowledge on the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two mussel species widely distributed in continental waters. Because Ca plays a fundamental role in shell formation and in numerous biological processes, Cu2+ effects on cellular plasma membrane calcium transport were studied first. In a second step, the investigations focused on Cu2+ detoxification mechanisms involving cysteine (Cys)-rich compounds, known to play a major role in the homeostasis of essential trace metals and in cellular metal detoxification. Under our experimental conditions, copper inhibition of Ca2+-ATPase activity was observed in the gills and the kidneys, and inhibition of Na+/K+-ATPase in the gills and the digestive gland (DG), upon 4 d of exposure. At day 7 of exposure to environmental Cu2+ concentrations, total recovery was observed in the kidneys and the gills for Ca2+-ATPase activity, and in the DG for Na+/K+-ATPase, but not at high doses. Inhibition of Ca and Na transport may entail disturbance of osmoregulation and lead to a continuous undersupply of Ca. Recovery of Na+/K+-ATPase and Ca2+-ATPase enzyme function suggests that metal detoxification is induced.
Phytochelatins (PC) are Cys-rich oligopeptides synthesized from glutathione by phytochelatin synthase in plants and fungi. Phytochelatin synthase genes have recently been identified in invertebrates; this allows us to hypothesize a role of PC in metal detoxification in animals. In the second part of this work, PC and their precursors, as well as metallothionein, were analyzed in the gills and in the DG of Anodonta cygnea exposed to Cu2+. Our results showed for the first time the presence of PC2-4 in invertebrates. PC were detected in control mussels not exposed to metal, suggesting a role in essential metal homeostasis. Compared to controls, PC2 induction was observed during the first 12 h of Cu2+ exposure. These results confirm the role of PC as a first-line detoxification mechanism in A. cygnea.

Key words: calcium homeostasis, copper, Anodonta freshwater bivalve, phytochelatins

Zusammenfassung

Copper is one of the transition metals most widespread in the freshwater ecosystems of Europe, occurring at concentrations that can be of ecotoxicological significance. Filter-feeding bivalve mussels have the capacity to enrich transition and trace metals such as copper through bioaccumulation. Copper is also an essential metal for all living organisms; upon excessive exposure to this trace metal, however, severe metabolic and physiological disturbances are caused. This doctoral thesis aims at extending the knowledge of the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two species of freshwater bivalve mussels widely distributed in European waters. Since calcium plays a fundamental role in the composition of mussel shells and in numerous biological processes, the effects of Cu2+ on the cellular transport of calcium at the plasma membrane level were studied first.
The focus of the study was then placed on the detoxification mechanisms of copper, with cysteine-rich peptides and proteins in the foreground. Cysteine is known for its role as a functional element in the homeostasis of essential trace metals and in cellular metal detoxification. Under the chosen experimental conditions, after four days of Cu2+ exposure at 0.35 µmol L-1, inhibition of the Ca2+-ATPase was observed in the gills and kidneys, and inhibition of the Na+/K+-ATPase in the gills and the digestive gland. After seven days of exposure, recovery of enzyme activity was observed in the kidneys and the gills for the Ca2+-ATPase, and in the digestive gland for the Na+/K+-ATPase. The recovery of enzyme activities points to an induction of metal detoxification capacity. At twice the Cu concentration, the inhibition of all the enzymes mentioned persisted over the entire 15-day exposure period. Inhibition of calcium and sodium transport can cause disturbances of osmoregulation and lead to a calcium deficit. Phytochelatins (PC) are cysteine-rich polypeptides synthesized from glutathione by PC synthase in plants and yeasts. Genes that can give rise to functional PC synthases have been identified in invertebrates, which led us to assume that PC could also play a role in metal detoxification in animals. Therefore, in the second part of this work, PC and their precursors, as well as metallothioneins, were analyzed in the gills and the digestive gland of copper-exposed Anodonta cygnea. The results show for the first time the presence of PC2-4 in invertebrates. PC were also found in control mussels, which points to their role in the homeostasis of essential metals. An induction of PC2 was observed within the first 12 hours of Cu2+ exposure, confirming its role as a first-line metal detoxification mechanism in A. cygnea.
Keywords: calcium homeostasis, ionic copper, freshwater bivalve mussels Anodonta, phytochelatins

Résumé

Copper is one of the most widely represented metal contaminants of freshwater ecosystems in Europe. Filter-feeding bivalves have a high capacity for bioaccumulation of transition metals such as copper. Copper is an essential trace element for living organisms, but in excess it causes severe metabolic and physiological disturbances. The objective of this thesis is to gain knowledge on the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two freshwater bivalve species widely distributed in continental waters. Because calcium plays a fundamental role in shell composition and in numerous biological processes, the effects of Cu2+ were first studied on cellular calcium transport at the plasma membrane level. In a second step, the study focused on Cu2+ detoxification mechanisms involving cysteine (Cys)-rich compounds, known to play a major role in the homeostasis of essential trace metals and in cellular metal detoxification. Under our experimental conditions (0.35 and 0.64 µmol L-1), inhibition of the Ca2+-ATPase by Cu2+ was observed in the gills and kidneys, and inhibition of the Na+/K+-ATPase in the gills and the digestive gland, after 4 days of exposure. Beyond 7 days of exposure at 0.35 µmol L-1 Cu2+, total recovery of the enzymatic activity was observed in the kidneys and gills for the Ca2+-ATPase, and in the digestive gland for the Na+/K+-ATPase. At the high dose (0.64 µmol L-1), the inhibition persisted. Inhibition of calcium and sodium transport may cause disturbances of osmoregulation and lead to calcium deficiencies.
Recovery of the enzymatic activity of the Ca2+-ATPase and the Na+/K+-ATPase suggests an induction of metal detoxification functions. Phytochelatins (PC) are Cys-rich oligopeptides synthesized from glutathione by phytochelatin synthase in plants and fungi. Genes coding for functional phytochelatin synthases have been identified in invertebrates such as the manure worm. This prompted us to look for PC in bivalves. Our results showed, for the first time, the presence of PC2-4 in invertebrates. PC were detected in control mussels not exposed to metals, which suggests a function in the homeostasis of essential metals. We therefore studied their possible role in metal detoxification in these organisms and in animals in general; until now, phytochelatins were considered to play a role only in plants. In the second part of this work, PC and their precursors were sought in the gills and the digestive gland of Anodonta cygnea exposed to Cu2+. An induction of PC2 was observed from the first 12 hours of Cu2+ exposure, compared with control bivalves. These results confirm the role of PC as a first-line metal detoxification mechanism in A. cygnea. Metallothioneins were analyzed in parallel, but no induction was demonstrated in the presence of copper.

(Tapiero et al., 2003). The properties that make this metal essential …

Bivalves are in close contact with their environment and are widely used for pollution monitoring in aquatic ecosystems. Among these animals, unionid bivalves are widely used as indicator organisms of the bioaccumulation and toxic effects of metallic and organic pollutants (Winter, 1996; Falfushynska et al., 2009).
For these reasons, we chose Anodonta cygnea and Anodonta anatina, which belong to the Unionidae, as biological models for the study of copper toxicity. (Coimbra et al., 1993). In our study, inhibitions of the activity of the Ca…

Following the identification of genes able to yield functional PCS in invertebrates (Clemens et al., 2001; Vatamaniuk et al., 2001; Brulle et al., 2008), we searched for PC in animal organisms. Homologous phytochelatin synthase genes have been found to be widespread among invertebrates (Clemens and Peršoh, 2009).

Materials and methods

Acclimation of the bivalves
Mussel maintenance has been described in detail in our previous first article (Santini et al., 2011a).

Copper exposure
The effects of copper in A. anatina were assessed on the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase, and of cytosolic CA. The Ca2+-ATPase, Na+/K+-ATPase and H+-ATPase activities were determined in each organ from the resuspended C75600 pellet (Fig. 1). The inorganic phosphate released upon ATP degradation was quantified by the method of Chifflet et al. (1988), through spectrophotometric assay of the ammonium molybdate complex at 850 nm. CA activity was determined by the method of Vitale et al. (1999), by measuring the pH drop in the presence of tissue extracts and CO2.

Analysis of thiol-rich compounds

Results

Plasma membrane Ca2+-ATPase and carbonic anhydrase
The mean Ca2+-ATPase activity (± standard deviation) in control animals was 0.087 ± 0.023 µmol Pi/mg protein/min in the kidneys, 0.45 ± 0.13 in the gills, and 0.095 ± 0.03 in the digestive gland. Copper significantly inhibited PMCA activities in the kidney after 4 days of exposure at all tested concentrations, i.e., from 0.26 to 1.15 µmol L-1 Cu2+.
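Specific activities such as those below (µmol Pi/mg protein/min) are computed from the phosphate released, the protein content and the incubation time; a sketch with invented assay values (the assay itself follows Chifflet et al., 1988, but these numbers are illustrative):

```python
# Specific ATPase activity from a phosphate-release assay:
# Pi is measured spectrophotometrically via the ammonium molybdate complex.
pi_released_umol = 0.54   # µmol Pi released during the assay (invented)
protein_mg = 0.6          # mg of protein in the assay (invented)
time_min = 10.0           # incubation time, min (invented)

activity = pi_released_umol / protein_mg / time_min
print(round(activity, 3))  # 0.09 µmol Pi / mg protein / min
```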
At 0.35 µmol L-1 Cu2+, recovery of PMCA activity was observed beyond 7 days of exposure. At the high concentration (0.64 µmol L-1), a 20 % inhibition of PMCA activity was observed over the 15 days of exposure without any recovery. In the gills, a profile similar to that of the kidneys, but with non-significant variations, was observed. No significant effect of exposure to 0.35 µmol L-1 Cu2+ was noted on carbonic anhydrase, except for a slight, non-significant inhibition after 15 days of exposure.

Na+/K+-ATPase and H+-ATPase
The mean Na+/K+-ATPase activity (± standard deviation) in control animals in September was 0.098 ± 0.006 µmol Pi/mg protein/min in the gills, 0.045 ± 0.007 in the digestive gland, and 0.042 ± 0.009 in the mantle. The H+-ATPase activity was 0.002 ± 0.0012 in the mantle. Significant inhibition of Na+/K+-ATPase activity relative to controls was observed at 4 days of exposure to 0.35 µmol L-1 Cu2+ in the gills (72 % inhibition) and the digestive gland (80 % inhibition). Recovery of activity was observed between the fourth and the fifteenth day of exposure to 0.35 µmol L-1 Cu2+; at the end of the test, an inhibition (54 %) persisted. In the mantle, Na+/K+-ATPase activity decreased in the same way (26 % inhibition at 4 days), but the difference was not significant relative to controls. No significant effect of copper on the H+-ATPase was noted in the mantle of mussels after 15 days of exposure. In general, and in all tissues, the activities of both ATPases were higher in July and September than the values measured in January, March and April.

Metallothionein, phytochelatins and their precursors
The mean elution times of the phytochelatin standards were: PC2 at 13.09 min, PC3 at 16.62 min, PC4 at 18.59 min, and PC5 at 19.75 min.
mBBr-labelled compounds with retention times matching the PC2, PC3, and PC4 standards were detected in extracts of the digestive gland and gills of control mussels. PC5 was below the detection limit. PC2 showed the highest concentration, with 2.17 ± 0.59 and 0.88 ± 0.15 mg PC2/g fresh weight in the digestive gland and gill tissues, respectively. In both organs, the PC2-4 concentrations ranked as follows: PC2 > PC3 > PC4. The proportions of PC2 and PC3 were two to three times higher in the digestive gland than in the gills, whereas the PC4 level was roughly equivalent in both tissues. The PC2 level increased markedly and significantly in the gills of mussels exposed to 0.35 µmol L-1 Cu2+. Compared with the respective control bivalves, a 50 % induction of PC2 was observed from the first 12 hours up to 4 days.

Discussion

We studied the effects of copper on the enzymes involved in calcium transport and in ionoregulation processes, and the detoxification mechanisms by phytochelatins and metallothioneins in Anodonta bivalves. Calcium plays a fundamental role in many biological processes (energy production, cellular metabolism, muscle contraction, reproduction) and has important mechanical functions (shell, skeleton) in living organisms (Mooren and Kinne, 1998). Unlike molluscs of marine ecosystems, which are generally hypo-osmotic and for which calcium absorption is facilitated by higher ambient calcium concentrations, freshwater bivalves are hyperosmotic and require strict regulation of their calcium metabolism.
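Peak identification against the phytochelatin standards relies on matching retention times; a sketch of such a comparison, using the standard elution times quoted above (the ±0.2 min tolerance window is an assumption for illustration, not a parameter from the study):

```python
# Mean elution times of the phytochelatin standards (min), from the text
PC_STANDARDS = {"PC2": 13.09, "PC3": 16.62, "PC4": 18.59, "PC5": 19.75}


def identify_peak(rt, standards=PC_STANDARDS, tol=0.2):
    """Return the standard whose retention time is closest to rt,
    provided the difference falls within the tolerance window;
    otherwise return None (unidentified peak)."""
    name, ref = min(standards.items(), key=lambda kv: abs(kv[1] - rt))
    return name if abs(ref - rt) <= tol else None
```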
The present study allowed us to determine baseline levels of the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase and of cytosolic carbonic anhydrase in the organs involved in calcium homeostasis. There is a lack of physiological data on freshwater bivalves compared with marine models such as Mytilus edulis or Mytilus galloprovincialis. These data are necessary for understanding calcium transport mechanisms, since freshwater bivalves are subject to an osmo-iono-regulation different from that of marine bivalves. In our study, the plasma membrane Ca2+-ATPase (PMCA) activity in the gills of A. anatina was four times higher than that found in Mytilus edulis (Burlando et al., 2004). This reflects the importance of active calcium transport in Anodonta anatina. Calcium concentrations in fresh water are lower than those found in seawater, where calcium absorption is easier for Mytilus edulis. PMCA activity is also high in the kidneys and the digestive gland, favouring calcium absorption from food and calcium reabsorption from the renal ultrafiltrate. These physiological data allowed us to understand to what extent the PMCA is important for calcium homeostasis in freshwater bivalves compared with marine organisms. Moreover, in unionids calcium plays a direct role in reproduction, since the glochidia are incubated in the marsupium (gills); it is therefore decisive for the growth of mussel populations. The Na+/K+-ATPase maintains the transmembrane cellular sodium gradient required for the facilitated diffusion of calcium by the Na+/Ca2+ antiporter.
A significant increase in Na+/K+-ATPase activity was measured in the gills and digestive gland in July and September compared with the rest of the year. These two months correspond to the biomineralization period in Anodonta sp. (Moura et al., 2000). The kidneys play an essential role in the filtration and reabsorption of ions, water, and organic molecules from the ultrafiltrate. As freshwater bivalves are hyperosmotic, the osmotic pressure resulting from the concentration gradient between the internal compartments and the environment leads to water uptake by osmosis and to ionic loss by diffusion. Water uptake is compensated by urine production, and ionic loss is limited by ionic reabsorption. In freshwater bivalves, daily urine production is high. The kidneys play an essential role in Ca homeostasis by limiting ionic losses in urine through active reabsorption of Ca2+ ions from the filtrate (Turquier, 1994). Inhibition of calcium reabsorption can therefore considerably disturb calcium homeostasis. Our study showed an inhibition of PMCA enzymatic activity in the gills and kidneys of mussels exposed to 0.35 µmol L-1 Cu2+. A total recovery of PMCA activity was observed from 7 days of exposure to 0.35 µmol L-1 Cu2+, but the inhibition persisted at the higher concentration (0.64 µmol L-1). Owing to analytical problems inherent to the low mass of the kidneys, this tissue has so far been little studied in freshwater bivalves. Yet this organ plays an important role in detoxification processes (Viarengo and Nott, 1993); moreover, the kidney is essential for ionoregulation. Our results showed the high sensitivity of this organ to Cu2+ at environmentally realistic concentrations (0.35 µmol L-1 = 22.3 µg L-1).
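The equivalence between the molar and mass concentrations quoted above follows from the molar mass of copper; a minimal check (the text rounds the result to ≈22.3 µg L-1):

```python
CU_MOLAR_MASS = 63.546  # g mol-1 (atomic mass of copper)


def umol_to_ug_per_L(umol_per_L, molar_mass=CU_MOLAR_MASS):
    """Convert a molar concentration (µmol L-1) to a mass concentration (µg L-1).

    1 µmol L-1 corresponds to molar_mass µg L-1, since
    µmol L-1 × g mol-1 = µg L-1.
    """
    return umol_per_L * molar_mass


cu_mass_conc = umol_to_ug_per_L(0.35)  # ≈ 22.2 µg L-1
```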
In the present work, an inhibition of the Na+/K+-ATPase was observed in the gills and in the digestive gland. Such a disturbance of ionoregulation could lead to calcium deficiency; it can also affect calcium cell-signalling pathways, in addition to disturbing biomineralization and shell formation in mussels and glochidia. The recovery of the enzymatic functions (Ca2+-ATPase and Na+/K+-ATPase) suggests that a metal detoxification mechanism is induced. Consequently, in the second part of this study we focused on the detoxification mechanisms by Cys-rich compounds, which chelate metals through their thiol group. Freshwater unionid bivalves are widely recognized for their capacity to accumulate in their tissues a wide variety of environmental contaminants, including metals (Bonneris et al., 2005). This metal tolerance is made possible by biochemical strategies involving their sequestration. Intracellular metal sequestration follows a cascade of events involving different ligands with increasing binding strength towards metallic elements. At high concentrations, metals can inhibit these detoxification mechanisms. In our results, the return of the Ca2+-ATPase and Na+/K+-ATPase to baseline levels observed beyond 7 days of exposure to 0.35 µmol L-1 Cu2+ indicates adaptive capacities of the bivalves, via detoxification systems that are efficient at low copper concentration. At the higher Cu2+ concentration (0.64 µmol L-1), no recovery of enzymatic activity was noted. Interestingly, recovery was observed only at the low, environmentally relevant concentration (0.35 µmol L-1).
This underlines the importance of using environmental concentrations in ecotoxicological research; extrapolating results observed at high doses to environmental situations can be criticized because of the different toxicity mechanisms developed at low and high doses. The study of phytochelatins in Anodonta cygnea led us to develop and optimize an HPLC analytical protocol for PC quantification in animal tissues, based on the method of Minocha et al. (2008). The present work confirmed the presence of phytochelatins in invertebrates. To our knowledge, this is the first time that PCs have been found in animals and in invertebrates. In plants, PCs are rapidly induced in cells and tissues exposed to transition metals. PCs play an important role in metal detoxification. PCs could also be involved in essential metal homeostasis (Hirata et al., 2005). In our study, PC2, PC3, and PC4 were detected in the gills and digestive gland of unexposed control A. cygnea. In both organs, PC2 was found at the highest concentration, followed by PC3, itself at a higher concentration than PC4. PC2 and PC3 concentrations were about two to three times higher in the digestive gland than in the gills. The baseline level of PC2-4 in the absence of exposure to copper and other metals suggests their role in essential metal homeostasis. After 12 h of copper exposure, a significant induction of PC2 was observed relative to the corresponding controls in the gills and, to a lesser extent, in the digestive gland. These results confirm the role of PCs as metal chelators involved in the first line of detoxification mechanisms in A. cygnea.
Beyond 7 days and up to 21 days, PC2 in the gills of copper-exposed mussels decreased back to the baseline level of the controls. This decrease suggests that copper detoxification was taken over by other mechanisms in the long term. In the Unionidae, metallothioneins and granules (insoluble intracellular concretions, most often mineral but also organic) are known to play this role. Bonneris et al. (2005) showed that cadmium, zinc, and copper concentrations in the granule fraction of the gills were significantly correlated with the environmental concentrations of these metals. Granules are known to be preferred sites for copper storage in the gills of unionids. About 65 % of total gill copper was found sequestered in granules in Anodonta grandis grandis, where calcium concretions represent 51 % of gill dry weight. Similar values were found in Anodonta cygnea (Bonneris et al., 2005). No significant variation in MT level was observed during the 21 days of copper exposure in the present study. The MT isoform (<10 kDa) eluted from the mussel extract after separation under our HPLC conditions was not induced by copper. Detoxification by other MT isoforms not detected by our HPLC method in A. cygnea cannot be excluded; indeed, a high MT polymorphism is known in invertebrates (Amiard et al., 2006). Metallothioneins, granules, and antioxidant systems have been described as being involved in the detoxification mechanisms of freshwater bivalves. Cossu-Leguille et al. (1997) showed the major role played in the Unionidae by antioxidants, and in particular by reduced glutathione (GSH), in metal detoxification.
In the gills of Unio tumidus, a 45 % decrease in GSH was observed in mussels exposed at metal-contaminated sites compared with controls. This decrease in GSH level in copper-exposed Unionidae was confirmed by Doyotte et al. (1997) under controlled laboratory conditions, indicating metal sequestration directly by the SH group of GSH or its use as a substrate by antioxidant enzymes. Indeed, a parallel involvement of antioxidant enzymes has been described; the activity of these enzymes increases upon exposure to low metal concentrations (Vasseur and Leguille, 2004). GSH also plays a role in PC synthesis. This dual role, in direct metal detoxification and as a precursor for PC synthesis, could explain the decrease. Unionidae populations represent the greatest part of the total biomass in many aquatic systems. They take an active part in water purification, in sedimentation, and in the modification of phytoplankton and detritivorous invertebrate communities (Aldridge, 2000). Consequently, the disappearance of the Unionidae could produce structural and functional disturbances in aquatic ecosystems. The potentially negative effect of competition between native and invasive bivalves is a debated question. The invasive freshwater bivalves Corbicula fluminea and Dreissena polymorpha, which do not belong to the Unionidae, colonize freshwater hydrosystems with detrimental effects on other invertebrates. These invasive species do not play the same functional role in ecosystems as the unionids. Their presence is one of the hypotheses explaining the decline of unionid populations. A combination of different factors explains the progressive disappearance of the Unionoidea in favour of invasive species.
The depletion of salmonids, which are potential host fish for unionid larvae, is presented as another possible hypothesis besides competition from invasive species. Salmonid populations are affected by water pollution, which can indirectly reinforce natural diseases such as parasitism, known for its negative impact on salmonid populations (Voutilainen et al., 2009). The other main reasons for this decline are the physical degradation of streams through modification of river beds and channels, as well as the degradation of water quality. Indeed, interactions between pollutants are a cause of disturbance even at low concentrations (Vighi et al., 2003). In order to protect native Unionidae populations, it is important to determine how and to what extent these factors are involved in their disappearance. The results acquired during this thesis work contribute to the understanding of the effects of metal pollution on calcium homeostasis in freshwater bivalves. Aquatic metal pollution is likely to be one of the reasons for the widespread decline of Unionoidea populations in European rivers (Frank and Gerstmann, 2007). Our results on the effects of copper on A. anatina support this hypothesis.

Conclusion

Macrobenthos communities are excellent indicators of water quality (Ippolito et al., 2010). The objectives of our work were to study the toxic effects of copper and the detoxification mechanisms in freshwater bivalve species of the genus Anodonta, belonging to the Unionidae. First, the study of Cu2+ as a potential inhibitor of enzymatic proteins playing a role in calcium transport and biomineralization processes was carried out in Anodonta anatina.
Second, the study focused on the mechanisms of Cu2+ detoxification by metal-chelating Cys-rich compounds in Anodonta cygnea. In the first part of the present study, the effects of Cu2+ exposure on the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase, and of the cytosolic CA were assessed in the freshwater bivalve Anodonta anatina. From 4 days onwards, an inhibition of the enzymatic activity of the Ca2+-ATPase in the gills and kidneys, and of the Na+/K+-ATPase in the gills and digestive gland, was observed in Anodonta anatina exposed to 0.35 µmol L-1 Cu2+. A total recovery of Ca2+-ATPase activity was observed at 7 days of exposure to 0.35 µmol L-1 Cu2+, indicating the induction of metal detoxification mechanisms. In bivalves exposed to 0.64 µmol L-1 of copper, no recovery of enzymatic activity was observed over 15 days. It would be interesting to study the long-term consequences of the disturbance of the osmoregulatory functions of the kidneys in A. anatina exposed to 0.64 µmol L-1 Cu2+. In a second step, we searched for the presence of phytochelatins in bivalves. In the course of our research we identified the phytochelatins PC2, PC3, and PC4. This is the first time that the presence of PCs has been demonstrated in animal organisms, and in A. cygnea in particular. In Anodonta cygnea exposed to 0.35 µmol L-1 Cu2+, PC2 was induced relative to controls at 12 h in the digestive gland and the gills, and the induction persisted after 7 days in the gills. These results suggest a role of PCs in essential metal homeostasis and confirm the role of PCs as a first-line mechanism for metal detoxification in A. cygnea.
Copper has been reported as a weak inducer of metal-chelating thiol-rich polypeptides (Zenk, 1996). Exposure to a non-essential metal that is a strong PC inducer, such as cadmium, could be of interest for studying PC3-4 induction in A. cygnea. Moreover, comparing the effects of exposure to an essential metal such as copper and to a non-essential metal such as cadmium could allow us to determine the role played by PCs in essential metal homeostasis and in detoxification mechanisms. Our HPLC analyses of metallothionein (MT) showed no variation in its expression level after copper exposure, indicating that the MT isoform quantified with our method is not induced by this metal. Complementary MT analyses in A. cygnea exposed to copper under the same conditions, with a spectrometric method allowing quantification of all MT isoforms, would be of interest to determine whether other MT isoforms are induced in this species. In parallel, it would also be necessary to optimize our HPLC protocol so as to separate and measure the MT isoforms induced by copper in A. cygnea. Copper is an inducer of reactive oxygen species (ROS) through the Fenton reaction. The antioxidant role played by PCs (Hirata et al., 2005) would be interesting to study for a better understanding of the mechanism of copper detoxification by PCs in A. cygnea. Chemical pollution is one cause of the degradation of the aquatic environment responsible for the decline of freshwater bivalve populations. Toxicity depends not only on the bioavailability of pollutants and their intrinsic toxicity, but also on the efficiency of detoxification systems in eliminating reactive chemical species.
This thesis is a significant contribution to the study of metal detoxification systems in the freshwater bivalves A. cygnea and A. anatina. The originality of our work is the demonstration, for the first time, of phytochelatins in animal organisms. We demonstrated their role in protection against Cu2+ toxicity at realistic and environmentally relevant concentrations.

Introduction

1 General introduction

Human activity is associated with the development of industry and agriculture, which have become indispensable. These sectors are responsible for the production and release of many pollutants. Their chemical and physical properties and the different types of transport determine pollutant diffusion in all compartments of ecosystems. The aquatic environment is a sink for many pollutants, including metals. Copper belongs to the metals most commonly used because of its physical (particularly its electrical and thermal conductivity) and chemical properties. As pure metal, as alloy, or in the ionic state it is employed in many industrial and agricultural sectors. Therefore, copper is a metal frequently detected in the continental aquatic environment, where it is present in the water column and accumulates in sediments (INERIS). The chemical properties of copper (including as catalyst) make it essential to many biological processes involving such vital functions as respiration or photosynthesis (Tapiero et al., 2003). The properties that make this metal an essential element are at the same time the reasons for its toxicity when in excess. Copper is bioaccumulative and can become a threat to biocenoses. Mechanisms regulating copper concentration and detoxification are essential to all living organisms. Molluscs represent an important group of macroinvertebrates in aquatic ecosystems.
Among this group, bivalves are particularly interesting. Through their intense filtering activity, which satisfies their respiratory and nutritional needs, bivalves have the capacity to accumulate a variety of environmental contaminants. In freshwater ecosystems they play an important role in matter transfer from the water column to the sediment. Faeces and pseudofaeces of bivalves make the plankton fraction available to detritivores and can change sediment quality by pollutant sedimentation and concentration. In close contact with their environment, they are widely used for pollution monitoring in aquatic ecosystems. Among these animals, freshwater mussels of the Unionidae family are employed for monitoring the bioaccumulation and toxic effects of metallic and organic pollutants (Winter, 1996; Falfushynska et al., 2009). For these reasons we chose Anodonta cygnea and Anodonta anatina, which belong to the Unionidae, as biological models to study copper toxicity. These invertebrates are autochthonous species in European hydrosystems, but a regression of Unionidae populations, as of many other freshwater bivalves, has been observed in recent decades, the main reason for ecotoxicological studies such as this one. The objective of this thesis was to gain knowledge on the mechanisms of disturbance of calcium metabolism in Anodonta anatina by copper. Calcium is a critical element in the functioning of eukaryotic organisms. It controls multiple processes of reproduction, life, and death (Ermak and Davies, 2001). Absorption, biomineralization, and maintenance of intracellular calcium concentrations are effected and controlled by its passage through cell membranes. This takes place by simple diffusion and also by means of transport proteins. In bivalves, calcium is also very important for exoskeleton synthesis by biomineralization. In addition to calcium, biomineralization requires carbonate ions, produced by carbonic anhydrase (CA) catalysis.
The effects of Cu2+ on calcium transport were tested in Anodonta anatina by assessment of the enzymatic activities of the plasma membrane Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase, and of the cytosolic CA. These enzymes are involved in calcium absorption and in biomineralization processes. The organs studied were the gills, the digestive gland, the kidney, and the mantle, which play an important role in the absorption of calcium and the synthesis of the shell (Coimbra et al., 1993). Enzymatic inhibition was observed in A. anatina exposed to Cu2+ at concentrations ranging from 0.26 to 1.15 µmol L-1. At the low concentration of 0.35 µmol L-1, a total recovery following the inhibition indicated the induction of detoxification mechanisms. The second part of this study focused specifically on metal-binding Cys-rich compounds. These compounds are oligopeptides with high Cys content, such as phytochelatins, or proteins, such as metallothioneins. They play a major role in animals and plants as metal chelators, in essential metal homeostasis, and in non-essential metal detoxification. Phytochelatins (PC) are thiol-rich oligopeptides with (γ-Glu-Cys)n-Gly as general structure, synthesised from glutathione by phytochelatin synthase. PCs bind metal ions and form metal complexes that reduce the free intracellular metal ion concentration in cells of plants, fungi, and microalgae. Since the identification of homologous genes able to give a functional phytochelatin synthase in invertebrates (Clemens et al., 2001; Vatamaniuk et al., 2001; Brulle et al., 2008), PCs have been strongly suspected to play a wider role in trace metal detoxification in animals. In invertebrates, including bivalves, PC synthase genes were found to be widespread (Clemens and Peršoh, 2009). Therefore invertebrates such as the Unionidae, known to bioaccumulate metals, are likely to express PC.
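The general structure (γ-Glu-Cys)n-Gly means that the index in PC2, PC3, PC4 counts the γ-Glu-Cys repeat units; a trivial sketch writing the sequence out for a given chain length:

```python
def phytochelatin(n):
    """Write out the general phytochelatin structure (γ-Glu-Cys)n-Gly.

    n is the number of γ-Glu-Cys repeat units, i.e. PC2 has n = 2,
    PC3 has n = 3, and so on; the C-terminal residue is always Gly.
    """
    return "-".join(["γGlu-Cys"] * n + ["Gly"])


pc2 = phytochelatin(2)  # "γGlu-Cys-γGlu-Cys-Gly"
```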
Induction of PC, their precursors, and MT was assessed in the gills and the digestive gland of Anodonta cygnea under Cu2+ exposure. This thesis manuscript is divided into eight chapters. After this general introduction, the second chapter contains a literature review on freshwater bivalves and data on copper: its chemical and biological properties, its distribution in aquatic ecosystems, bioaccumulation, toxic effects, and detoxification mechanisms. An extended summary is presented in the third chapter, followed by a conclusion (chapter 4). A published article on the inhibition of Ca2+-ATPase and CA activities by Cu2+ is presented in the fifth chapter, and the sixth chapter is on the inhibition of the Na+/K+-ATPase and H+-ATPase activities; both deal with the effects of Cu2+ exposure in A. anatina.

Origin and use

Copper (symbol Cu, atomic number 29, atomic mass 63.546, two stable isotopes) is a transition metal belonging to group 11 (IB) of the periodic table of elements, together with silver and gold. Copper is a ductile metal with good electrical and thermal conductivities. It occurs naturally in oceans, lakes, rivers, and soils. The average natural levels of copper in the Earth's crust vary from 40 to 70 mg kg-1. Values in ores range from 80 to 150 mg kg-1 (IFEN, 2011). Copper is found mainly as the sulfides CuS and Cu2S in tetrahedrite (Cu12Sb4S13) and enargite (Cu3AsS4), and as the oxide Cu2O (cuprite). The most important ore is chalcopyrite (Cu2S, Fe2S3). It is also found in malachite (CuCO3(OH)2), azurite (Cu3(CO3)2(OH)2), chalcocite (Cu2S), and bornite (Cu5FeS4). Since 1980, an increase in global mining production has been observed due to the growing demand for copper, in particular in the field of renewable energy. Copper is commonly used because of its physical properties, particularly its electrical and thermal conductivity and its resistance to corrosion. It is mainly used as metal in electronics (42 %), construction (28 %), and in vehicles (12 %).
Seventy percent of pure copper is used for electrical wires, laminates, and pipes. Among alloys, the main families are the brasses (copper-zinc), the bronzes (copper-tin), copper-aluminium, and copper-nickel blends. Many other alloys are also in use for special purposes, such as copper-nickel-zinc, copper-silicon, copper-lead, copper-silver, copper-gold, and copper-zinc-aluminium-magnesium, as well as copper alloyed with cadmium, tellurium, chromium, or beryllium (Blazy). Salts of copper at different oxidation states are also used for their chemical and biochemical properties. Copper is encountered in many fields of technical application:
- Electricity: used pure or alloyed in wires of electrical equipment (coils, generators, transformers, connectors) (Blazy).
- Electronics and communication: pure or alloyed, it plays an important role in communications technology, computers, and mobile phones (Blazy), in printed circuit boards, in solar cells of solar panels, and in the semiconductor industry.
- Building: copper and brass are used for pipes and plumbing fixtures, in heating and guttering, and as copper sheets for roofing.
- Transportation: alloys of copper and nickel are used in boat hulls and in anti-fouling paint to reduce hull fouling by algae. It is also in motors, radiators, connectors, and brakes; an automobile contains about 22.5 kilograms of copper, and trains can contain between 1 and 4 tons (INERIS).
- In various industrial equipment, copper and copper alloys are used in the manufacture of turbine blades, gears and bearings, heat exchangers, tanks and pressure equipment, marine equipment, pressed steel, and smelters.
Some commercial preparations are particularly polluting for the aquatic compartment since they contain copper in ionic form, directly soluble in water. Salts of copper are used in many preparations and in industrial quantities:
- As copper acetate, cupric chloride, and copper sulphate for the colouring of textiles, glass, ceramics, paints, and varnishes, and for tanning (OECD, 1995). Copper acetate, cuprous and cupric oxide, and cuprous and cupric chloride are used as catalysts in petrochemical organic synthesis and for rubber production. They are also used in solder pastes, in electroplating, as polishing agents for optical glass, and for the refining of copper, silver, and gold.
- In pyrotechnics, copper oxides are used to colour fireworks, and brass in ammunition.
- In general consumer products: pool algicides, decorative objects made of copper and its alloys (buttons, zippers, jewellery), and coins.
- In agriculture, the farming of cattle, pigs, and poultry is a large sector using copper as a dietary supplement (INERIS). Copper acetate, copper sulphate, copper hydroxide, cuprous oxide, and tetracupric oxychloride are used in fungicides, herbicides, insecticides, and molluscicides, and as antiseptics. Copper treatment is practised in vineyards and on fruit trees with average doses of 1000 to 2500 g Cu/ha/year and 3750 to 5000 g Cu/ha/year, respectively (ADEME-SOGREAH, 2007).
- In wood preservation, copper sulphate, cupric oxide, and cupric chloride are used (INERIS, 2005). Copper is also used as a substitute for other substances such as lead, arsenic, and tributyltin.
This enumeration illustrates the widespread use of copper.
Sources of emissions and environmental levels

Natural inputs, in order of decreasing importance, are volcanic eruptions, plant decomposition, forest fires, and marine aerosols (INERIS). Water and soil surrounding sites of agricultural and industrial activities are the most heavily exposed to copper and, to a lesser extent, the air contaminated by road and rail traffic. In the European Union, some countries account for a large part of copper emissions, such as France with 43 % to water, 15 % to soil, and 13 % to air. According to IFEN (2011), anthropogenic inputs of copper originating from industrial activities go mostly into waters and soils, while urban and agricultural activities as well as road traffic emit mostly into the air. In 2007, the emissions of copper into the environment in Europe were estimated at 371,000 kg year-1 to water, 146,000 kg year-1 to air, and 139,000 kg year-1 to soil (INERIS). Air emissions are mainly non-industrial. The road transport sector is a major contributor, representing 51 % of emissions in France in 2004; these are mainly caused by wear of brake pads containing copper. The railway transport sector represents 32 % of total air emissions of copper, caused by wear of overhead lines (ADEME-SOGREAH, 2007). Copper emissions related to road transport are increasing with the development of this sector. Industrial discharges of copper come mainly from the synthesis of organic chemicals, the production of nonferrous metals from ore, ferrous metal smelting, and the production of iron and steel. In the atmosphere, copper metal oxidizes slowly to Cu2O, which covers the metal with a protective layer against corrosion.
Copper is released into the atmosphere as particulate oxide, sulphate, or carbonate. Releases to the aquatic environment are mainly due to corrosion of equipment made from copper or brass, and urban waste is another important source of release. One of the most significant non-industrial discharges of copper is the treatment of urban wastewater. This sector represents 28 % (15,617 kg year -1) of the total French emissions of copper and its derivatives to the aquatic environment [START_REF] Ineris | Données technico-économiques sur les substances chimiques en France : cuivre, composés et alliages[END_REF]. Copper is one of the compounds always detected at the input and output of secondary sewage treatment plants (but not always in sewage treatment plants with tertiary treatment). Copper is one of the compounds found at the highest concentrations in sewage treatment plant inflow, generally greater than 10 µg L -1; at the outlet it is usually found at concentrations between 1 and 10 µg L -1. In Europe in 2007, the main emitters of industrial origin were the United Kingdom, France, Germany, and Romania, representing respectively 35 % (129,000 kg), 15 % (55,700 kg), 11 % (39,800 kg), and 7 % (24,500 kg) of the European industrial emissions to water [START_REF] Ineris | Données technico-économiques sur les substances chimiques en France : cuivre, composés et alliages[END_REF]. In the European Union, the most significant sectors are thermal power plants and other combustion plants. In France, another important sector is the production of nonferrous metals from ore, from concentrates, and from secondary materials. Speciation and behaviour of copper in the environment directly influence its bioavailability. In aquatic environments, the fate of copper is influenced by many processes and factors such as chelation by organic ligands (especially via NH 2 and SH groups, and to a lesser extent the OH group).
Adsorption phenomena can also occur on metal oxides, clays, or particulate organic matter; bioaccumulation and the presence of competing cations (Ca 2+, Fe 2+, Mg 2+) or of anions (OH -, S 2-, PO 4 3-, CO 3 2-) also play a role in copper behaviour [START_REF] Ineris | Données technico-économiques sur les substances chimiques en France : cuivre, composés et alliages[END_REF]. The greater part of the copper released into water is in particulate form and tends to settle, to precipitate, or to adsorb to organic matter, hydrous iron and manganese oxides, or clay particles (ATSDR, 1990; INERIS, 2005). In hard water (carbonate concentration up to 1 mg L -1), the largest fraction of copper is precipitated as insoluble compounds. Cuprous oxide Cu 2 O is insoluble in water. Except in the presence of a stabilizing ligand such as sulfide, cyanide, or fluoride, the oxidation state Cu(I) is easily oxidized to Cu(II), forming CuSO 4, Cu(OH) 2, and CuCl 2, which are more soluble in water. The Cu 2+ ion forms many stable complexes with inorganic ligands such as chloride or ammonium, or with organic ligands. When copper enters the aquatic environment, the chemical equilibrium between oxidation states and between soluble and insoluble species is usually reached within 24 hours (INERIS, 2005). In aquatic environments, copper is mainly adsorbed on particles, and suspended solids are often heavily loaded. In Europe, according to the Forum of the European Geological Surveys (FOREGS, 2010), copper concentrations up to 3 µg L -1 are found in continental water. In regions with intensive human activities (agricultural, urban), copper concentrations of 40 µg L -1 and even 100 µg L -1 can be found seasonally [START_REF] Neal | A summary of river water quality data collected within the Land-Ocean Interaction Study: core data for eastern UK rivers draining to the North Sea[END_REF]; Falfushynska et al., 2009). Trace metals such as copper are mainly transported to the marine environment by rivers through estuaries.
The magnitude of metal input to the marine environment depends on the levels in the river waters and on the physico-chemical processes that take place in the estuaries [START_REF] Waeles | Distribution and chemical speciation of dissolved cadmium and copper in the Loire estuary and North Biscay continental shelf, France[END_REF]. In sea water, copper is found in concentrations ranging from 0.1 to 4 µg L -1 [START_REF] Waeles | Seasonal variations of dissolved and particulate copper species in estuarine waters[END_REF][START_REF] Levet | Agence de l'eau Seine-Normandie[END_REF]. Depositions of copper into soils come from different sources. With respect to industrial activities in Europe, in 2007 the major industrial emitters to soil were the United Kingdom, France, and Germany with respectively 74 tons, 60 tons, and 5 tons. The most significant activities are landfilling or recycling of non-hazardous waste (45 %), abattoirs (21 %), and sewage sludge from urban waste water treatment plants (12 %) (E-PRTR, 2010). The main agricultural sources identified (ADEME-SOGREAH, 2007) for the release of copper are animal wastes (faeces and manure), sewage sludge from water treatment plants, compost, mineral fertilizers, and lime and magnesium soil amendments. Animal waste represents an important input into soils (53 %). This reflects the fact that the feed in cattle, pig, and poultry production is supplemented with copper for promotion of growth and prevention of diseases. As copper is poorly absorbed, it is added in great quantities to the feed (sometimes at levels 30 times in excess of the needs of the animal), and the surplus is then found in the droppings. The spreading of animal manure is thus an important input for agricultural soils [START_REF] Jondreville | Le cuivre dans l'alimentation du porc : oligoélément essentiel, facteur de croissance et risque potentiel pour l'Homme et l'environnement[END_REF].
Thirty-four percent of the copper found in agricultural soils comes from phytosanitary treatment (ADEME-SOGREAH, 2007). Furthermore, mineral fertilizers and lime and magnesium soil amendments contribute to copper inputs to agricultural soils. Copper inputs to soil from urban activities occur mainly through spreading and composting of sludge from wastewater treatment plants. For example, the average concentration of copper in sewage sludge of the Rhone Mediterranean basin was about 350 mg kg -1 in 2004 (Agence de l'eau Rhône Méditerranée Corse, 2004). In soils, copper occurs in the oxidation states I or II in the form of sulfides, sulphates, carbonates, and oxides. The behaviour of copper in soils depends on many factors such as soil pH, redox potential, cation exchange capacity, type and distribution of organic matter, presence of oxides, rate of decomposition of the organic matter, proportions of clay, silt, and sand, climate, and type of vegetation [START_REF] Ineris | Données technico-économiques sur les substances chimiques en France : cuivre, composés et alliages[END_REF]. Acidification, for instance, causes a decrease of the metal bound to solids and thus its release from solid matter [START_REF] Hlavackova | Evaluation du comportement du cuivre et du zinc dans une matrice de type sol à l'aide de différente méthodologie[END_REF]. Copper binds preferentially to organic matter, which holds between 25 and 50 % of soil copper, as well as to iron oxides, manganese oxides, carbonates, and clays. This characteristic means that the majority of copper is strongly adsorbed in the upper few centimeters of soil, especially on organic matter. Copper does not migrate deeply, except under special circumstances of drainage or in a highly acidic environment [START_REF] Ineris | Données technico-économiques sur les substances chimiques en France : cuivre, composés et alliages[END_REF].
In the European Union, in 2010 the most polluted solid matrices were the sediments of alluvial plains with 25 mg kg -1, river sediments with 22 mg kg -1, and soil with 17 mg kg -1. The maximum concentration determined in a river sediment was 877 mg kg -1, more than 40 times the average (FOREGS, 2010). In France, concentrations of copper in the surface layers of soil can reach 46 mg kg -1. The most copper-polluted soils, in both surface and deep layers, are found mainly in the South of France and in western Brittany.
Copper metabolism and its physiological role in animals (generalities)
Copper is an essential trace element in microorganisms, plants, and animals. It plays a basic role, being in the active centre of enzymes involved in connective tissue formation with lysyl oxidase, in respiration with cytochrome c oxidase (López de Romana, 2011), in photosynthesis with plastocyanin [START_REF] Grotz | Molecular aspects of Cu, Fe and Zn homeostasis in plants[END_REF], and in controlling the level of oxygen radicals with Cu/Zn-superoxide dismutase (Table 1). It also allows the transport of oxygen in the haemolymph of many invertebrates. Copper is used in several cell compartments, and the intracellular distribution of copper is regulated in response to metabolic demands and changes in the cell environment (Tapiero et al., 2003). The same properties that make transition metal ions indispensable for life at low exposure levels are also the ones responsible for toxicity when they are present in excess. Copper metabolism must therefore be tightly regulated, ensuring a sufficient supply without toxic accumulation. Copper homeostasis involves a balance between absorption, distribution, use, storage, and detoxification.
Table 1: Functions of the main cuproproteins (Tapiero et al., 2003)
Metalloenzymes | Functions
Cu/Zn-superoxide dismutase | Control of oxygen radical levels (dismutation of superoxide)
Plastocyanin | Photosynthesis (electron transfer)
Cytochrome c oxidase | Respiration (electron transport chain)
Lysyl oxidase | Connective tissue formation
In the mammalian intestine, where low pH and the presence of ligands promote its solubility, copper is taken up by the enterocytes. Copper is absorbed by saturable active transport through membrane transporters. In the cytosol of enterocytes, copper binds to metallothioneins, which play a role in copper sequestration and transport. Once absorbed, copper is transported to the liver, predominantly by albumin, but also by transcuprein and free amino acids including histidine. In the liver, depending on the status of the animal, copper is bound to metallothioneins to be stored, incorporated into ceruloplasmin [START_REF] Cousins | Absorption, transport and hepatic metabolism of copper and zinc: special reference to metallothionein and ceruloplasmin[END_REF] and then transported to other organs, or secreted into the intestine via the bile. In monogastric animals, copper homeostasis is largely maintained by increased excretion, avoiding excessive accumulation in the liver (fig. 1). Biliary excretion is the major route; losses via urine, skin, or cellular desquamation in the gut are minor [START_REF] Arredondo | Iron and copper metabolism[END_REF]. In aquatic organisms, especially bivalves, copper associated with the particulate phase or in solution is ingested, solubilized in the gastrointestinal tract, and absorbed through the intestinal wall. Copper in the dissolved phase can also be directly absorbed by the tissues (gills, mantle, etc.) in contact with the surrounding water (Viarengo and Nott, 1993).
Fig. 1: Copper metabolism in mammals [START_REF] Jondreville | Le cuivre dans l'alimentation du porc : oligoélément essentiel, facteur de croissance et risque potentiel pour l'Homme et l'environnement[END_REF].
The entry of copper into the cell occurs mainly by mechanisms that depend on copper-transporting membrane channels.
In this process, the Cu 2+ ion is first reduced to Cu + by membrane-associated reductases to facilitate its entry. Once inside the cytoplasm, it is likely that reduced glutathione (GSH) and metallothioneins (MT) bind the copper, serving as intracellular stores. Copper appears to be interchanged between MT, with which it forms a stable complex, and its bound state with GSH. As copper bound to GSH turns over more rapidly than copper bound to MT, it becomes available for other uses and for transport. The delivery of copper ions to their specific pathways in the cell is mediated by metallochaperones, which protect the metal from intracellular scavengers and deliver it directly to the respective target proteins and cellular compartments [START_REF] Kozlowski | Copper, iron, and zinc ions homeostasis and their role in neurodegenerative disorders (metal uptake, transport, distribution and regulation)[END_REF]. The copper chaperone for Cu/Zn superoxide dismutase guides copper to superoxide dismutase (Cu/Zn SOD), which participates in the defence against oxidative stress within the cytoplasm. The cytochrome c oxidase copper chaperone is another protein that channels copper to cytochrome c oxidase in the inner mitochondrial membrane, which plays a critical role in the electron transport chain for cellular respiration. Antioxidant protein 1 presents copper to the P-type ATPases ATP7A or ATP7B [START_REF] Lutsenko | Human copper homeostasis: a network of interconnected pathways[END_REF]. These three transporters play an essential role in copper homeostasis. They perform distinct functions depending on their cellular localization: at the Golgi apparatus they enable the loading of copper into ceruloplasmin and its incorporation into the enzymes that require it as a cofactor.
When localized in vesicular compartments, they allow the removal of copper from the cell and thus participate in copper homeostasis [START_REF] Arredondo | Iron and copper metabolism[END_REF][START_REF] Hejl | Du gène à la maladie : les anomalies des transporteurs du cuivre From gene to disease: Copper-transporting P ATPases alteration[END_REF].
Toxicology of copper (general)
In living organisms, the cupric ion (Cu 2+) is fairly soluble, whereas cuprous (Cu +) solubility is in the sub-micromolar range. Cu is present mainly as Cu 2+ since, in the presence of oxygen or other electron acceptors, Cu + is readily oxidized. Strong reductants such as ascorbate or reduced glutathione can reduce Cu 2+ back to Cu + [START_REF] Arredondo | Iron and copper metabolism[END_REF]. As in the case of iron, through the Haber-Weiss and Fenton reactions, free copper ions can catalyse the production of hydroxyl radicals (HO •). Copper toxicity also results from nonspecific binding, which can inactivate important regulatory enzymes by displacing other essential metal ions from catalytic sites, by binding to catalytic Cys groups, or by allosterically altering the functional conformation of proteins [START_REF] Mason | Metal detoxification in aquatic organisms[END_REF]. Thus, the mechanisms of toxicity are associated both with oxidative stress and with direct interactions with cellular compounds. Free copper ions have a high affinity for sulfur-, nitrogen-, and oxygen-containing functional groups in biological molecules, which they can inactivate and damage. The cytotoxicity observed in copper poisoning results from inhibition of the pyruvate oxidase system by competition for the protein's sulfhydryl groups. Glucose-6-phosphate dehydrogenase and glutathione reductase (GR) are also inhibited (competitive inhibition) proportionally to the concentration of intracellular copper [START_REF] Barceloux | Copper[END_REF].
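The copper-catalysed radical chemistry invoked above can be summarized, by analogy with iron, as the Fenton-like reduction of hydrogen peroxide coupled to the re-reduction of copper by superoxide (these are the textbook forms of the reactions, not taken from the cited references):

Cu⁺ + H₂O₂ → Cu²⁺ + OH⁻ + HO•   (Fenton-like reaction)
Cu²⁺ + O₂•⁻ → Cu⁺ + O₂
Net: O₂•⁻ + H₂O₂ → O₂ + OH⁻ + HO•   (Haber-Weiss reaction)

Because copper cycles between its two oxidation states, catalytic amounts of the free metal suffice to sustain hydroxyl radical production as long as superoxide and hydrogen peroxide are available.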
The same applies to some transporters, such as ATPases, which are also inhibited by copper, causing disruption of the homeostasis of the respective transported entities. Toxic effects of copper can also result from its affinity for DNA [START_REF] Agarwal | Effects of copper on mammalian cell components[END_REF][START_REF] Bremner | Manifestations of copper excess[END_REF][START_REF] Sagripanti | Interaction of copper with DNA and antagonism by other metals[END_REF]. Another mechanism of toxicity of excessive concentrations of copper is the modification of the zinc-finger structures of transcription factors, which can then no longer bind to DNA [START_REF] Pena | A delicate balance: homeostatic control of copper uptake and distribution[END_REF]. Copper in excess can also promote apoptosis [START_REF] Kozlowski | Copper, iron, and zinc ions homeostasis and their role in neurodegenerative disorders (metal uptake, transport, distribution and regulation)[END_REF], while copper deficiency may be the cause of many diseases due to inhibition of cuproproteins and copper-dependent reactions. In mammals, copper homeostasis is primordial. Wilson's and Menkes diseases are caused by genetic mutations in copper-transporter proteins. The former results from accumulation of copper in several organs and tissues. There are different clinical presentations, the most common being liver disease and anaemia [START_REF] Hejl | Du gène à la maladie : les anomalies des transporteurs du cuivre From gene to disease: Copper-transporting P ATPases alteration[END_REF]. The accumulation of copper arises from a defect in the P-type Cu-ATPase ATP7B (called Wilson protein), a specific transporter of copper. The gene encoding this protein is located on autosome 13 in humans. The protein also allows the incorporation of copper into cuproproteins and the excretion of copper into the bile. Accumulation of copper leads to liver cirrhosis and neurodegeneration. Menkes disease is a neurodegenerative disease.
Copper, after ingestion, accumulates in the intestine, and uptake by other organs and tissues (blood, liver, brain) is defective. Menkes syndrome is caused by a mutation in the ATP7A gene located on chromosome X, which encodes the P-type Cu-ATPase ATP7A. This membrane protein was the first specific transporter of copper found in eukaryotes. Copper (Cu), besides cadmium, is one of the major metals causing environmental problems in fresh water ecosystems. Since it is highly toxic to fish, it is also used as a piscicide [START_REF] Manzl | Copper-induced formation of reactive oxygen species causes cell death and disruption of calcium homeostasis in trout hepatocytes[END_REF].
For Anodonta cygnea the common names are: swan mussel (English), anodonte des cygnes (French), Große Teichmuschel (German); for Anodonta anatina: duck mussel (English), mulette des canards (French), Gemeine Teichmuschel (German).
Description: In both species, the shell is nacreous, and there are no teeth on the ligament. Anodonta anatina (fig. 2A): shell slightly elongated, triangular because of the presence of a large rear wing or crest; the top and bottom edges form an angle of greater or lesser extent; the upper edge rises in an almost straight line towards the back to the highest point of the crest, then descends in a more concave line towards the posterior end; the anterior end is broadly rounded and rather short, the posterior is longer, ending in an obtuse rostrum; the ligament is rather long and prominent; the top is a little bulging, covered with fine wrinkles, a little curved, obliquely cutting the growth lines; the shell is rather thick, solid, shiny, greenish grey, or brownish (length: 14 cm, height: 10.5 cm, thickness: 6 cm) [START_REF] Vrignaud | Clef de détermination des Naïades d'Auvergne[END_REF]. Anodonta cygnea (fig.
2B): shell oval, more or less elongated, with the top and bottom edges roughly parallel or convex; the upper edge is straighter than the lower; the posterior area is much longer; the ligament is rather long and prominent; the top is a little bulging, covered with fine wrinkles parallel to the growth lines; the shell is thin, not strong, fairly shiny, greenish-yellow (length: 20 cm, height: 10 cm, thickness: 6 cm) [START_REF] Vrignaud | Clef de détermination des Naïades d'Auvergne[END_REF].
Fig. 2: A: Anodonta anatina, B: Anodonta cygnea (www.biopix.eu)
Geographic distribution
Anodonta cygnea and Anodonta anatina are autochthonous bivalves, widely distributed below the Arctic Circle in European continental hydrosystems. The two mussels have a similar distribution, which is wider for Anodonta cygnea. The geographical distribution (fig. 3) of Anodonta anatina extends from the British Isles to the eastern limit of Europe and from Sweden to Spain. Anodonta cygnea is present from the British Isles to Siberia and from Sweden to North Africa [START_REF] Başçınar | A Preliminary study on reproduction and larval development of swan mussel [Anodonta cygnea (Linnaeus, 1758)] (Bivalvia: Unionidae), in Lake Çıldır (Kars, Turkey)[END_REF]; www.discoverlife.org).
Life Cycle
Among the unionids, individuals are usually gonochoric. No age limit is known for gametogenesis. The spermatozoids are released into the water through the exhalant siphon and are filtered by mussels located downstream. After fertilization, the eggs are incubated in the marsupium, a modification of the gills of the mussel, where the larvae or glochidia hatch (fig. 4).
They are produced in large quantities, from 50,000 to 2 million [START_REF] Başçınar | A Preliminary study on reproduction and larval development of swan mussel [Anodonta cygnea (Linnaeus, 1758)] (Bivalvia: Unionidae), in Lake Çıldır (Kars, Turkey)[END_REF]; some of them attach themselves with hooks at the end of their valves to the gills of a host fish, and then live as encysted parasites. After a few weeks, the cyst bursts and releases a small bivalve similar in its anatomy to an adult mussel. This juvenile grows buried in the substratum before becoming adult and gradually rising to the surface. The expulsion of glochidia begins at the end of winter and may continue until September [START_REF] Mouthon | Les mollusques dulcicoles -Données biologiques et écologiques -Clés de détermination des principaux genres de bivalves et de gastéropodes de France[END_REF]. The Unionidae produce only one generation per year. It is estimated that mussels of the genus Anodonta can live very long, about fifteen to twenty years [START_REF] Taskinen | Age-, size-and sex-specific infection of Anodonta piscinalis (Bivalvia, Unionidae) with Rhipidocotyle fennica (Digenea, Bucephalidae) and its influence on host reproduction[END_REF].
Diet
According to the conventional nomenclature, freshwater molluscs are classed as vegetarians, detritivores, and more rarely omnivores; there are no true carnivores. Most bivalves, including the Unionidae, have a mixed diet dominated by detritivory and herbivory throughout the year. Except during their larval stage, when the glochidium parasitizes a host fish and feeds on its plasma [START_REF] Uthaiwan | Culture of glochidia of the freshwater pearl mussel Hyriopsis myersiana (Lea, 1856) in artificial media[END_REF], Anodonta mussels are filter feeders, although their diet is not precisely known. They feed on seston (phytoplankton, filamentous algae, detritus, protists, epipelic) suspended in the water.
Water enters the mantle cavity through the inhalant siphon, passes through the gills, and is exhaled through the exhalant siphon. Food particles in the water are intercepted by rows of cilia or cirri which extend between the gill filaments. The particles are either transferred and incorporated into mucus strings or carried in a concentrated suspension as they are transported to the labial palps. Particles selected for ingestion are transferred to the mouth, while rejected particles are bound up into a mucus ball and expelled from the inhalant siphon or the pedal gap (gap between the valves) as pseudofaeces.
Anatomy and ecology
Anatomy
The mantle cavity is divided by the gills into a ventral inhalant chamber and a dorsal exhalant chamber. Water passes through the inhalant siphon into the branchial chamber, flows across the gills and then into the exhalant suprabranchial chamber. As the water crosses the gills, food particles are filtered from it and oxygen is removed. From the suprabranchial chamber the water flows out of the exhalant siphon. The gills consist of a sheet of coalescent filaments folded into a "W" shape and attached to the dorsal wall of the mantle. This sheet divides the mantle cavity into the ventral chamber and the dorsal suprabranchial chamber. To get from the ventral chamber to the dorsal one, water must pass through pores in the gills.
Ecology
Anodonta cygnea and Anodonta anatina frequent ponds, oxbow lakes, slow rivers, and canals with weak current, which often have a high trophic level. They prefer bottoms with fine granulometry (silt, sand, gravel), often with accumulations of organic matter. These species tolerate a wide range of average water temperatures, as evidenced by their presence from Spain to Sweden. They are found in areas of the bed away from strong currents that cause erosion, generally at depths between 0.2 and 2.5 m. They are burrowing organisms, living vertically, partially embedded in the substrate.
These bivalves are generally sedentary organisms. In Spain, a displacement of about ten meters per year was found for A. cygnea and A. anatina mussels marked by Aldridge (2000). However, flooding can cause passive dispersal from upstream to downstream. The most mobile stage is the larval stage, which is capable of moving long distances through the host fish.
Threats
Over the past 50 years, a general decline of freshwater mussels has been observed. It seems that several factors combine to cause the gradual disappearance of these species; among the potential causes, human activities leading to chemical pollution and habitat change have been suggested (Aldridge, 2000; [START_REF] Kádár | Avoidance responses to aluminium in the freshwater bivalve Anodonta cygnea[END_REF]). Given the lack of ecological and ecotoxicological knowledge for these bivalves (including their tolerance to pollutants), the following threats deserve particular attention:
- The virtual disappearance or severe depletion of potential host fish.
- The physical degradation of streams and reworking of the beds of rivers and canals (Aldridge, 2000): rectification, dams, and channel dredging, but also the impact of intensive agriculture on the quality, quantity, and transit of sediments.
- Degradation of water quality, including eutrophication, as well as pollutants from human activity.
- The fragmentation of populations, one of the main causes of biodiversity loss.
- Introduced species: the potential effects of competition from bivalves such as Corbicula fluminea, Sinanodonta woodiana, and Dreissena polymorpha are poorly documented. Zebra mussels appear to have a negative impact by attaching to the valves of certain freshwater mussels, hampering their opening. S. woodiana also seems to have a negative impact on native unionid populations [START_REF] Adam | L'Anodonte chinoise Sinanodonta woodiana (Lea, 1834) (Mollusca, Bivalvia, Unionidae) : une espèce introduite qui colonise le bassin Rhône-Méditerranée[END_REF].
Ecotoxicological interest
Molluscs are common, highly visible, and ecologically and commercially important on a global scale as food and as non-food resources [START_REF] Rittschof | Molluscs as multidisciplinary models in environment toxicology[END_REF]. In some aquatic ecosystems (lakes, slow streams) molluscs can represent up to 80 % of the total biomass of the benthic macroinvertebrates, so their impact can become major. Populations of bivalves filter large amounts of water (Unionidae: 300 mL per individual per hour) and take an active part in sedimentation and water purification. Faeces and pseudofaeces sometimes concentrate a large fraction of unused planktonic microorganisms, making them accessible to detritivorous invertebrates such as oligochaetes and many Diptera. Bivalves also change the quality of the sediment by concentration and excretion of many substances (metals, pesticides, radionuclides). Because of their sedentary lifestyle, their filtration capacity, and their wide distribution, molluscs and bivalves are excellent sentinels for monitoring the fraction of bioavailable pollutants in their environment [START_REF] Hayer | Accumulation of extractable organic halogens (EOX) by the freshwater mussel Anodonta cygnea L., exposed to chlorine bleached pulp and paper mill effluents[END_REF]. In close contact with water, suspended particles, and sediment, they are widely used for monitoring the bioaccumulation and toxic effects of metallic and organic pollutants in aquatic ecosystems [START_REF] Viarengo | Critical evaluation of an intercalibration exercise undertaken in the framework of the MED POL biomonitoring program[END_REF][START_REF] Rittschof | Molluscs as multidisciplinary models in environment toxicology[END_REF]. Several unionid mussels, especially from the genus Anodonta, have thus been used as biomonitor organisms in toxicity assessment of numerous compounds released in continental water.
The anatomy and physiology of these animals have been studied for a long time, allowing the study of the toxic effects of compounds but also of the mechanisms of detoxification. Anodonta cygnea and Anodonta anatina are biological models widely used in ecotoxicology (Falfushynska et al., 2009). They are present in large quantities, sedentary, and easy to collect and to acclimatize in aquaria.
2.3 Effects of copper exposure and detoxification mechanisms
Copper exposure effects
Copper bioaccumulation
Trace elements are known to be highly accumulated by aquatic molluscs. Bivalves are in close contact with sediments, which constitute a major environmental sink for metals; they have an important filtering activity to satisfy respiration and nutrition, and tolerance mechanisms that involve metal sequestration rather than metal exclusion or elimination. They provide accurate and integrated information about the environmental impact and bioavailability of chemicals. They are therefore extensively applied in marine environments using mussels and oysters, but are also employed in freshwater systems using other bivalve species such as Anodonta sp., Dreissena polymorpha, Elliptio complanata, and Asiatic clams. Among freshwater organisms, unionid molluscs are widely recognised for their capacity to accumulate a variety of environmental contaminants, including metals, in their tissues (Winter, 1996; [START_REF] Bilos | Trace metals in suspended particles, sediments and Asiatic clams (Corbicula fluminea) of the Rio de la Plata Estuary, Argentina[END_REF]; [START_REF] Kádár | Avoidance responses to aluminium in the freshwater bivalve Anodonta cygnea[END_REF]; Falfushynska et al., 2009). The widespread recent decline in the species diversity and population density of freshwater mussels (Lydeard et al., 2004) may be partly related to chronic, low-level exposure to toxic metals (Frank and Gerstmann, 2007).
Freshwater mussels are exposed to metals that are dissolved in water, associated with suspended particles, and deposited in bottom sediments. Thus, freshwater mussels can bioaccumulate certain metals to concentrations that greatly exceed those dissolved in water. In adult mussels, the most common site of metal uptake is the gills, followed by the digestive gland, the mantle, and the kidneys ([START_REF] Pagliarani | Mussel microsomal Na + /Mg 2+ -ATPase Sensitivity to Waterborne Mercury, Zinc and Ammonia[END_REF]; Bonneris et al., 2005). Bioaccumulation of metals varies strongly according to the water chemistry conditions, pH and water hardness being important parameters. Low aqueous concentrations of calcium probably enhance the bioavailability and toxicity of metal cations, because the permeability of membranes is inversely related to the aqueous calcium concentration; calcium ions apparently compete with other metal cations for binding sites on the gill surface, decreasing the direct uptake of other cationic metals. The pH influences both the chemistry of metals and the macromolecules of surface structures. A modification of membrane permeability causes an alteration in metal diffusion. Additionally, changes in membrane potential modify the transport of polar metal species (Winter, 1996). In bivalves, the biological barriers are the gill epithelium, the wall of the digestive tract, and the shell (which is often reported as a site of bioaccumulation). Metals or metalloids in solution are more easily absorbed by the surfaces in direct contact with the external environment, while those associated with the particulate phase are rather ingested and internalized after solubilization in the digestive tract, or transferred by endocytosis to then undergo lysosomal digestion (Wang and [START_REF] Rainbow | Influence of metal exposure history on trace metal uptake and accumulation by marine invertebrates[END_REF]).
Once past the first barrier, the mechanisms of transfer of metals or metalloids into the cell involve intracellular diffusion (passive or facilitated), active transport, and endocytosis (phagocytosis and pinocytosis). The oxygen concentration of the water and the density of microalgae are parameters that directly influence the ventilatory activity of the bivalve and thus its exposure to metals in dissolved or particulate form [START_REF] Tran | Mechanism of oxygen consumption maintenance under varying levels of oxygenation in the freshwater clam Corbicula fluminea[END_REF][START_REF] Tran | Copper detection in the Asiatic clam Corbicula fluminea: optimum valve closure response[END_REF]. Copper and metal bioaccumulation can vary between individuals of the same species with differences in size, or between mussel species owing to differences in physiology (respiration, feeding) (Winter, 1996;[START_REF] Bilos | Trace metals in suspended particles, sediments and Asiatic clams (Corbicula fluminea) of the Rio de la Plata Estuary, Argentina[END_REF]Gundacker, 2000;[START_REF] Hédouin | Allometric relationships in the bioconcentration of heavy metals by the edible tropical clam Gafrarium tumidum[END_REF]Falfushynska et al., 2009). In bivalves, the determination of trace metal concentrations in whole individuals is of little interest, since the determination of bioconcentration factors in various tissues suggests that the principal accumulating organs are the gills, the digestive gland, the kidney, and the mantle, with the shell also acting as a storage matrix (Viarengo and Nott, 1993;[START_REF] Roméo | Metal distribution in different tissues and in subcellular fractions of the Mediterranean clam Ruditapes decussatus treated with cadmium, copper, or zinc[END_REF][START_REF] Das | Dose-dependent uptake and Eichhornia-induced elimination of cadmium in various organs of the freshwater mussel, Lamellidens marginalis (Linn.)[END_REF]Bonneris et al., 2005).
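The organ-level comparison above rests on the bioconcentration factor, BCF = metal concentration in the tissue divided by the metal concentration in the water. A minimal sketch of the calculation follows; the organ names mirror those listed above, but every concentration value is invented for illustration and is not a measurement from this work:

```python
def bcf(c_tissue, c_water):
    """Bioconcentration factor: tissue concentration over water concentration.

    Units must cancel (e.g. both expressed per gram, taking 1 mL of water as ~1 g).
    """
    return c_tissue / c_water

# Invented copper concentrations, for illustration only (µg/g)
c_water = 0.02
tissues = {"gills": 45.0, "digestive gland": 30.0, "kidney": 12.0, "mantle": 8.0}

# BCF per organ; the organ with the highest BCF is the principal accumulator
bcfs = {organ: bcf(c, c_water) for organ, c in tissues.items()}
principal_organ = max(bcfs, key=bcfs.get)

print(bcfs)
print(principal_organ)  # "gills" with these invented values
```

Comparing BCFs tissue by tissue, rather than a whole-body concentration, is what reveals the accumulation pattern described in the paragraph above.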
The intracellular sequestration of metals is based on a sequence of cellular events involving a cascade of different ligands with increasing metal binding strengths. Anodonta sp. and other molluscs accumulate metals to high levels in their tissues (Falfushynska et al., 2009). Metal tolerance in such accumulator organisms involves sequestration of metals in non-toxic forms. Among the best studied intracellular sites involved in the sequestration of essential and non-essential metals in aquatic invertebrates are lysosomes, granules, and soluble Cys-rich ligands such as metal-binding peptides and proteins. Unionids also lay down calcium microspherule concretions, particularly in the connective tissue of the gills, in the mantle, and in the digestive gland [START_REF] Pynnönen | Occurrence of calcium concretions in various tissues of freshwater mussels, and their capacity of cadmium sequestration[END_REF]Moura et al., 2000;[START_REF] Lopes-Lima | Isolation, purification and characterization of glycosaminoglycans in the fluids of the mollusc Anodonta cygnea[END_REF]. Dissimilar mechanisms for copper and metal uptake, storage, mobilisation, and excretion performed by different cell types in different organs explain the pattern of metal accumulation and tissue distribution [START_REF] Soto | The contribution of metal/shell-weight index in target-tissues to metal body burden in sentinel marine molluscs. 1. Littorina littorea[END_REF].

Oxidative stress

Reactive oxygen species (ROS) comprise a variety of unstable oxygen derivatives [START_REF] Viarengo | Heavy metal effects on lipid peroxidation in the tissues of mytilus galloprovincialis lam[END_REF]Géret et al., 2002a;[START_REF] Sheehan | Oxidative stress and bivalves: a proteomic approach[END_REF]. An excess of ROS puts the cell at risk of oxidative stress. Free radicals are naturally produced and may also have a physiological role.
The major role of ROS is the regulation of molecules containing sulfhydryl groups. This type of regulation can in particular affect molecules implicated in the mechanisms of signal transduction, such as protein kinase C [START_REF] Lander | An essential role for free radicals and derived species in signal transduction[END_REF][START_REF] Dalton | Regulation of gene expression by reactive oxygen[END_REF]. Significant sources of free oxygenated radicals are redox cycles and oxidations catalysed by cytochrome P450 monooxygenases implicated in detoxification. Like many exogenous compounds that can stimulate the production of ROS, copper is known as a hydroxyl radical inducer [START_REF] Company | Antioxidant biochemical responses to long-term copper exposure in Bathymodiolus azoricus from Menez-Gwen hydrothermal vent[END_REF]. Metal deficiency can also lead to oxidative stress. Indeed, copper belongs to the molecular composition of antioxidant enzymes such as Cu/Zn-SOD; copper deficiency therefore disrupts the dismutation of the superoxide anion. ROS can potentially react with every cellular component. They influence in particular the thiol groups of proteins, leading to the formation of intra- or inter-molecular disulphide bridges. The most widely studied action of ROS is lipid peroxidation, mainly carried out by HO •. After rearrangement and addition of oxygen, ROO • and RO • radicals are generated. Oxidation of phospholipids modifies the membranes, with loss of permeability and membrane potential, leading to inactivation of membrane receptors and enzymes. In a general way, during this reaction various compounds are produced, such as malondialdehyde (MDA) and 4-hydroxynonenal, both able to bind to proteins and to form DNA adducts [START_REF] Blair | Lipid hydroperoxide-mediated DNA damage[END_REF]. Thus, lipid peroxidation may trigger endogenous DNA damage, as nuclear DNA and mitochondrial DNA are targets of ROS.
Four main classes of damage can be noted: single and double strand breaks, modified bases, DNA-DNA and DNA-protein bridges, and abasic sites. Proteins are essential structural and functional cellular components which may undergo oxidative modification, thus inducing their aggregation or digestion by proteases [START_REF] Davies | The oxidative environment and protein damage[END_REF]. Oxidation of amino acids, particularly of sulphur-containing and aromatic amino acids, results in structural modifications of proteins. HO • is the main initiator of the oxidation of polypeptide chains producing free radicals. In the absence of oxygen, two radicals can react together to form intra- or inter-chain cross-links. In the presence of oxygen, an addition reaction may take place to yield a peroxyl radical. A series of reactions follows, leading to the formation of alkoxyl radicals, precursors to the fragmentation of polypeptide chains. The oxidation of glucose can be performed in the presence of metal ions, leading to the release of aldehydes and hydrogen peroxide. This leads to glycation of proteins by attachment of aldehydes, often entailing the cleavage of protein chains [START_REF] Wolff | Autoxidative glycosylation": free radicals and glycation theory[END_REF]. Glycation of proteins promotes their oxidizability. The structural modifications induce functional changes, in particular of cellular metabolism. The oxidation by ROS can disturb ionic transport, enzymatic activities, and calcium homeostasis. Antioxidant defense systems can directly inhibit the production of ROS, limit their propagation, or destroy them. These antioxidant systems may act by reducing ROS species or by trapping them to form stable compounds. Two categories of antioxidant systems are generally defined: enzymatic systems and antioxidant molecules acting as free radical scavengers. In the antioxidant system glutathione peroxidases (GPxs) / glutathione reductase (GR), the GPxs are also able to reduce H 2 O 2 and other peroxides.
The enzymatic activities of GPxs are coupled with the oxidation of glutathione (GSH). Oxidized glutathione (GSSG) is then reduced by GR using NADPH. These enzymes are localised in the cytoplasm and the mitochondrial matrix [START_REF] Lushchak | Environmentally induced oxidative stress in aquatic animals[END_REF]. Antioxidant molecules or free radical scavengers such as Cys thiol-rich compounds, principally GSH and the metallothioneins, will be discussed later. Their antioxidant capacity is conferred by the thiol group of the Cys residue. Vitamin E is also known for its powerful anti-radical activity operating in lipid membranes. Other scavenger molecules include vitamin C, carotenoids, and uric acid.

Calcium transport and perturbation of bio-mineralization

Calcium transport perturbation

Normally, intracellular Ca homeostasis is maintained by a balance between extrusion and compartmentation systems (Mooren and Kinne, 1998). Alteration of these processes during cell injury can result in inhibition of Ca 2+ extrusion or intracellular compartmentation mechanisms, as well as in enhanced Ca 2+ influx and release of Ca 2+ from intracellular stores such as the endoplasmic reticulum and mitochondria [START_REF] Marchi | Mercury-and copper-induced lysosomal membrane destabilisation depends on [Ca 2+ ] i dependent phospholipase A2 activation[END_REF]. This can lead to uncontrolled rises in cytosolic Ca 2+ concentration. The biological functions of Ca 2+ are versatile; they control multiple processes of birth, life, and death. The location, duration, and amplitude of calcium signals form a complex code that controls vital physiological processes. Any disturbance in these signals would change the "Ca 2+ code" and modify multiple life processes, usually with loss of cell viability (Ermak and Davies, 2001).
Filter-feeding bivalves have a high bioaccumulation potential for trace metals, which at higher concentrations can cause serious metabolic, physiological, and structural impairments. The continuous decline of freshwater mussels during the past decades could be partly attributed to calcium homeostasis perturbation by trace metals (Frank and Gerstmann, 2007). Slight deregulations of Ca 2+ homeostasis, like those deriving from low-dose trace metal contamination, can affect the cellular ability to maintain and modulate Ca 2+ signaling. Copper is known to increase the intracellular Ca 2+ concentration by releasing the calcium stock of the endoplasmic reticulum, leading to lysosomal membrane destabilisation in mussels [START_REF] Marchi | Mercury-and copper-induced lysosomal membrane destabilisation depends on [Ca 2+ ] i dependent phospholipase A2 activation[END_REF] and to apoptosis or cytotoxicity and necrosis (Ermak and Davies, 2001). Cellular Ca 2+ extrusion systems allowing maintenance of the physiological calcium concentration appear to be essential for cell viability. The absorption and release of calcium to and from the environment, biomineralization, and the maintenance of the intracellular calcium concentration all involve its passage through cell membranes. Ca 2+ passively enters the apical membrane down its concentration gradient through carrier-mediated facilitated diffusion (Na + /Ca 2+ antiporters) and via simple diffusion through Ca 2+ channels. Intracellular Ca 2+ gradients are maintained by Ca 2+ -ATPases across the plasma membrane (PMCA) and the membranes of intracellular stores. In Mytilus edulis, copper is known to inhibit PMCA. Thiol groups of PMCA proteins are directly oxidized, or they bind copper because of their high affinity for the metal. Hydroxyl radicals OH • induced by copper also indirectly cause PMCA protein impairment [START_REF] Viarengo | Possible role of Ca 2+ in heavy metal cytotoxicity[END_REF]Viarengo et al., 1996;Burlando et al., 2004).
Besides the Ca 2+ -ATPases, another important system for the maintenance of calcium homeostasis is the plasma membrane Na + /Ca 2+ antiporter, whose activity is based upon the transmembrane Na + electrochemical gradient. Inhibition of the Na + /K + -ATPase modifies the Na + gradient, and can therefore also affect the activity of the Na + /Ca 2+ antiporter. Na + /K + -ATPase inhibition was observed in Mytilus edulis and Perna viridis after Ag, Cr, and copper exposure (Viarengo et al., 1996;[START_REF] Vijayavel | Sublethal effect of silver and chromium in the green mussel Perna viridis with reference to alterations in oxygen uptake, filtration rate and membrane bound ATPase system as biomarkers[END_REF]. Owing to their physiological functions in respiration, nutrient absorption, and excretion, the gills, the digestive gland, and the kidneys are preferential sites of metal uptake and bioaccumulation in bivalves. These organs play important roles in iono-regulation, in particular for calcium homeostasis in freshwater bivalves (Coimbra et al., 1993). Effects on the Ca 2+ transport systems in these tissues lead to perturbation of calcium homeostasis in the whole organism.

Perturbation of biomineralization

Biomineralization is a complex process; in bivalves it principally enables the formation, growth, and repair of the shell. In addition to the shell, mineral concretions are produced which also play a role in detoxification mechanisms (Viarengo and Nott, 1993). In freshwater molluscs such as Anodonta, several internal tissues bathed by haemolymph also produce calcified structures, namely microspherules [START_REF] Lopes-Lima | Isolation, purification and characterization of glycosaminoglycans in the fluids of the mollusc Anodonta cygnea[END_REF]. Microspherules are usually present between both epithelia of the mantle as CaCO 3 and / or Ca 3 (PO 4 ) 2 deposits. These transitory calcium reserves are devoted to shell growth or to the glochidia larvae.
The mantle of lamellibranchs is a leaflet that covers the internal surface of the shell and surrounds the body of the animal. It consists of two epithelia: an internal one in contact with the external medium, and an external one, the outer mantle epithelium (OME), facing the shell. The mantle is the tissue responsible for shell synthesis (Coimbra et al., 1993). Shell growth and maintenance of its mineral content are thus in a dynamic equilibrium involving a continuous exchange of Ca 2+ between the shell and the OME through the extrapalleal fluid separating the two. The dynamic calcium exchange may result in a net accumulation of Ca 2+ in the shell, or in its re-absorption, depending on the developmental stage and the metabolic state of the animal. The biomineralization mechanism depends on pH changes [START_REF] Lopes-Lima | Isolation, purification and characterization of glycosaminoglycans in the fluids of the mollusc Anodonta cygnea[END_REF]. H + -ATPases in cells of the OME regulate the deposition of calcium or its reuptake into the haemolymph by pH control. These H + -ATPases probably play a role equivalent to that of the two bone cell lines, the osteoblasts and the osteoclasts [START_REF] Machado | Ultrastructural and cytochemical studies in the mantle of Anodonta cygnea[END_REF]. H + -ATPases of the OME induce calcium mobilisation from the shell mainly by pumping protons into the extrapalleal fluid. H + -ATPase inhibition under bis(tributyltin)-oxide exposure has been observed (Machado et al., 1989). Carbonic anhydrase (CA) is a ubiquitous enzyme; in the mantle of bivalves it controls the bicarbonate balance between haemolymph and extrapalleal fluid for CaCO 3 formation. CA is a bioindicator of biomineralization already studied in the ecotoxicology of metals (Vitale et al., 1999[START_REF] Rousseau | Biomineralisation markers during a phase of active growth in Pinctada margaritifera[END_REF].
Thus, any alteration of the ion transport mechanisms across the mantle of bivalves may result in a direct inhibition of biomineralization. In freshwater bivalves, trace metals, and in particular copper, seem to interfere with the shell calcification conditions by direct action on the proton pump and on CA. Perturbation of biomineralization could cause subsequent shell thickening (Machado et al., 1989) or thinning.

Detoxification mechanisms

Cysteine thiol rich compounds

Glutathione

The tripeptide glutathione (GSH, γ-glutamyl-cysteinylglycine) is a non-protein thiol. The GSH/glutathione disulfide (GSSG) system is the most abundant redox system in eukaryotic cells, playing a fundamental role in cell homeostasis and metal detoxification, and being involved in signalling processes associated with programmed cell death, termed apoptosis [START_REF] Canesi | Heavy metals and glutathione metabolism in mussel tissues[END_REF][START_REF] Camera | A nalytical methods to investigate glutathione and related compounds in biological and pathological processes[END_REF]. It is present mainly in its reduced form, GSH, and represents the most abundant thiol in eukaryotic cells (0.2 to 10 mM). GSH biosynthesis requires two enzymes (fig. 6): first, glutamate-cysteine ligase (γ-ECS) catalyses the fusion of glutamic acid (Glu) and Cys into γ-glutamyl-cysteine (γ-GluCys), which in turn is converted into GSH by addition of glycine (Gly) by glutathione synthase (GS). γ-ECS is feedback-inhibited by GSH [START_REF] Monostori | Determination of glutathione and glutathione disulfide in biological samples: An in-depth review[END_REF]. GSH exerts many functions in the cell. It intervenes in reduction processes such as the synthesis and degradation of proteins, and the formation of deoxyribonucleotides.
GSH plays a role as a co-enzyme of various reactions, and it is also combined with either endogenous (oestrogens, prostaglandins, and leucotrienes) or exogenous compounds (drugs and xenobiotics), thus taking part in their metabolism. GSH is thus regarded as a central component of antioxidant defense [START_REF] Cossu | Antioxidant biomarkers in freshwater bivalves, Unio tumidus, in response to different contamination profiles of aquatic sediments[END_REF][START_REF] Manduzio | The point about oxidative stress in molluscs[END_REF]. The biologically active site of GSH is the thiol group of the Cys residue. The high nucleophilicity of the thiol function underlies the role of GSH as a free radical scavenger both under physiological conditions and in xenobiotic toxicity. GSH also helps in the regeneration of other antioxidants such as vitamin E, ascorbic acid, and metallothionein [START_REF] Knapen | Glutathione and glutathione-related enzymes in reproduction: A review[END_REF]. GSH is a cofactor for glutathione peroxidase in the decomposition of hydrogen peroxide and organic peroxides (fig. 6), for glyoxalase 1 in the detoxification of methylglyoxal and other oxo-aldehydes, and for maleylacetoacetate isomerase in the conversion of maleylacetoacetate and maleylpyruvate to the corresponding fumaryl derivatives [START_REF] Monostori | Determination of glutathione and glutathione disulfide in biological samples: An in-depth review[END_REF]. GS-conjugates of endogenous compounds are involved in metabolism, transport, and storage in the cell. In addition, conjugation of xenobiotics to GSH initiates a detoxification pathway that generally leads to excretion or compartmentation of the biotransformed compound. Although some adducts can be formed directly, glutathione-S-transferase (GST)-mediated reactions generally predominate. GSTs belong to the cellular mechanisms of detoxification and elimination of molecules (Wünschmann et al., 2009).
In animals, GS-conjugates are hydrolyzed and degraded by γ-glutamyltransferases, followed by carboxypeptidation of the glycine residue. GR reduces GSSG to GSH, which, together with the synthesis of new GSH, maintains the cellular GSH stock (fig. 6).

Metallothioneins

Metallothioneins (MT) are low molecular weight, cysteine-rich, metal-binding proteins [START_REF] Vašák | Advances in metallothionein structure and functions[END_REF]. The designation MT reflects the extremely high thiolate sulfur and metal content, both of the order of 10% (w/w). MT have been identified in a wide range of organisms, from bacteria to mammals, including many fish and aquatic invertebrates, mainly molluscs. Classically these extremely heterogeneous polypeptides were grouped into three classes of MT [START_REF] Fowler | Nomenclature of metallothionein[END_REF]Viarengo and Nott, 1993). Metallothioneins play a role in the homeostatic control of essential metals (Cu, Zn), as they can act as essential metal stores ready to fulfil enzymatic and other metabolic demands (Amiard et al., 2006). Their involvement in metal metabolism is based on their capacity to complex metals, effectively buffering free-metal ion concentrations in the intracellular environment. Additionally, the biosynthesis of these metalloproteins may be induced by exposure to essential and non-essential metals (Bonneris et al., 2005). The vital roles of this pleiotropic protein include the homeostasis of the essential trace metals zinc and copper, and the sequestration of environmental non-essential metals. Moreover, MT can protect cells from oxidative stress (Géret et al., 2002b). In the presence of ROS, zinc can be removed [START_REF] Vašák | Advances in metallothionein structure and functions[END_REF]. The release of zinc is accompanied by the formation of the MT-disulfide (or thionin, the oxidized form of the protein), which in turn can be reduced via the GSH / GSSG couple to restore the ability of the protein to bind zinc. This redox cycle of MT (fig.
7) plays a crucial role in maintaining the physiological homeostasis of metals, in the detoxification of toxic metals, and in protection against oxidative stress [START_REF] Kang | Metallothionein redox cycle and function[END_REF]. MT also plays an essential role in other metabolic processes, since MT expression is rapidly induced by a variety of factors such as cold, heat, hormones, and cytokines [START_REF] Monserrat | Pollution biomarkers in estuarine animals: Critical review and new perspectives[END_REF]; in some molluscs, important seasonal variations in MT levels also correlate with the gametogenesis process. MT, like other proteins, are degraded in lysosomes [START_REF] Ng | Metallothionein turnover, cytosolic distribution and the uptake of Cd by the green mussel Perna viridis[END_REF].

Fig. 7. Redox cycle of MT: oxidants (GSSG, ROS, …) convert the MT-thiols to the MT-disulfide with release of the bound metal (Zn 2+ , …); reductants (GSH, …) restore the MT-thiols and metal binding; the MT pool is maintained by synthesis and degradation.

Phytochelatins

Phytochelatins (PC) are Cys-rich metal-binding peptides (Hirata et al., 2005). These peptides play important roles not only in the chelation of trace metals but also in antioxidant defence. PC are synthesized from glutathione, homo-glutathione, hydroxymethyl-glutathione or γ-glutamylcysteine, catalysed by a transpeptidase named phytochelatin synthase (fig. 8), which is a constitutive enzyme requiring post-translational activation by transition metals [START_REF] Grill | Phytochelatins, the heavy-metalbinding peptides of plants, are synthesized from glutathione by a specic -glutamylcysteine dipeptidyl transpeptidase (phytochelatin synthase)[END_REF]Clemens, 2006). Phytochelatin synthase has been shown to be activated by a broad range of metals and metalloids, in particular Cd, Ag, Pb, Cu, Hg, Zn, Sn, Au, and As, both in vivo and in vitro. After the completion of the full genome sequence of the nematode Caenorhabditis elegans, two publications independently described a functional PC synthase able to synthesize PC in this model invertebrate (Clemens et al., 2001;Vatamaniuk et al., 2001).
In addition, sequences similar to the PC synthase gene have been identified in the aquatic midge Chironomus and in a species of earthworm (Brulle et al., 2008;[START_REF] Cobbett | A family of phytochelatin synthase genes from plant, fungal and animal species[END_REF]. There is as yet no direct evidence that these animal genes encode PC synthase activity; however, it seems likely that they do. It has become clear that PC could play a wider role in trace metal detoxification than previously thought. Organisms with an aquatic or soil habitat are more likely to express PC (Cobbett, 2000). Clemens et al. (2001) hypothesized that PC may be ubiquitously involved in the tolerance and homeostasis of metals in all eukaryotic organisms. Clemens and Peršoh (2009) showed that PC synthase genes are far more widespread than anticipated; homologous sequences are found throughout the entire animal kingdom, including in bivalves.

Metal detoxification mechanisms in bivalves

Bivalves, and particularly the freshwater unionid mussels, are widely recognised for their capacity to accumulate a variety of environmental contaminants, including metals, in their tissues and yet survive in these polluted environments (Winter, 1996;Cossu et al., 1997;[START_REF] Das | Dose-dependent uptake and Eichhornia-induced elimination of cadmium in various organs of the freshwater mussel, Lamellidens marginalis (Linn.)[END_REF]Bonneris et al., 2005). Such tolerance depends on the ability of these animals to regulate essential metal concentrations and to detoxify non-essential metals. In molluscs, three physiological and biochemical routes allow metal regulation: binding to specific, soluble, Cys-rich ligands; compartmentalization within organelles; and formation of insoluble non-toxic precipitates (Viarengo and Nott, 1993;[START_REF] Rainbow | Influence of metal exposure history on trace metal uptake and accumulation by marine invertebrates[END_REF].
Metal sub-cellular partitioning depends on the organ and on the nature of the metal. For example, in mussels copper bound to granules is dominant in the gills, whereas in the digestive gland copper is principally bound to soluble Cys-rich compounds (Bonneris et al., 2005). Over long periods of exposure, metals might be displaced from soluble metal-binding ligands to granules [START_REF] Roméo | Metal distribution in different tissues and in subcellular fractions of the Mediterranean clam Ruditapes decussatus treated with cadmium, copper, or zinc[END_REF].

Insoluble storage, compartmentalization and elimination of metals

In molluscs, an important storage or detoxification system is the sequestration of metals in intracellular, cytosolic, and compartmentalized precipitated structures named granules [START_REF] Howard | The composition of intracellular granules from the metal-accumulating cells of the common garden snail (Helix aspersa)[END_REF]Viarengo and Nott, 1993). Granules are observed in different tissues of bivalves, mainly in the digestive gland, gills, and kidneys, which are implicated in metal homeostasis and detoxification (Wang and [START_REF] Rainbow | Influence of metal exposure history on trace metal uptake and accumulation by marine invertebrates[END_REF]. The lysosome is mainly involved in the catabolism of both endogenous and exogenous molecules, and lysosomes also appear to be linked with metal detoxification. The insoluble granules are often present in cells from which they can be eliminated by exocytosis [START_REF] Howard | The composition of intracellular granules from the metal-accumulating cells of the common garden snail (Helix aspersa)[END_REF]Viarengo and Nott, 1993). Granules, alone or within lysosomes, are finally eliminated, principally by exocytosis into urine, haemolymph, and faeces. In addition, a specificity of unionids is extracellular calcium concretions named microspherules.
They are found in the extrapalial fluid and haemolymph, particularly in the gills and in the mantle. The calcium of these microspherules is usually bound to phosphate and carbonate, either as calcium orthophosphate Ca 3 (PO 4 ) 2 or as calcium carbonate CaCO 3 (Moura et al., 2000;[START_REF] Lopes-Lima | Isolation, purification and characterization of glycosaminoglycans in the fluids of the mollusc Anodonta cygnea[END_REF]. These microspherules appear to act as a calcium reservoir, serving as a source of calcium for embryonic shell development, but they can also play a role in the detoxification of metals [START_REF] Pynnönen | Effects of episodic low pH exposure on the valve movements of the freshwater bivalve Anodonta cygnea L[END_REF]Bonneris et al., 2005). Another site of metal storage in molluscs is the shell, which may act as a safe storage matrix for toxic contaminants resistant to soft tissue detoxification mechanisms [START_REF] Das | Dose-dependent uptake and Eichhornia-induced elimination of cadmium in various organs of the freshwater mussel, Lamellidens marginalis (Linn.)[END_REF]. Substantial bioaccumulation of copper and other metals has been shown in the shells of bivalves and unionids (Gundacker, 2000).

Cysteine rich metal binding compounds detoxification mechanisms

A strategy for cells to detoxify non-essential metal ions and excess essential metal ions is the synthesis of high-affinity binding sites that prevent the blockage of physiologically important functional groups (Clemens, 2006). Metal ions have high reactivity with thiol, amino, or hydroxyl groups, making molecules carrying these functional groups candidates for metal detoxification processes (Viarengo and Nott, 1993). Therefore, the best-known, and presumably the first, line of chelation of Cu, Zn, and non-essential metals comprises the Cys-containing peptides glutathione and phytochelatins and the small Cys-rich proteins, the metallothioneins.
Metal cations with a high affinity for SH residues displace Zn 2+ from a physiological metallothionein pool always present in the cells. An excess of trace metal cations, including the Zn 2+ released from pre-existing metallothioneins, induces in the nucleus the synthesis of the mRNA coding for metallothioneins, consequently increasing MT in the cytosolic compartment. These MT chelate the trace metal cations, thus reducing their cytotoxic effects (Viarengo and Nott, 1993). The metal-thiolate clusters within the MT molecules allow rapid exchanges of metallic ions between clusters and with other MT molecules [START_REF] Monserrat | Pollution biomarkers in estuarine animals: Critical review and new perspectives[END_REF]. MT are usually not saturated by a single metal but contain several atoms of Cu, Zn, Cd, or Hg and Ag when present (Amiard et al., 2006). In Zn-thioneins the seven metal atoms of Zn or Cd are distributed between two clusters: the β cluster of 3 Zn and the α cluster of 4 Zn [START_REF] Maret | Fluorescent probes for the structure and function of metallothionein[END_REF]. Functionally, the two clusters show different affinities for metal cations. At pH 7 or below, the α cluster is the first to be saturated and the β cluster the first to release the metal. The situation is different for copper (Géret et al., 2002b). Indeed, the stability constant for copper is 100 times higher than for cadmium and 1000 times higher than for zinc. Owing to the high affinity of Cu(I) for SH residues, the complex is stable and the metal is not easily released. It is important to note that Cu-thionein has distinct chemical characteristics, including the capacity to produce oxidized insoluble polymers. In the digestive gland of bivalves, the metal is detoxified in the lysosomes, trapped in the form of oxidized insoluble Cu-thioneins, which are subsequently eliminated by exocytosis of residual bodies.
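The consequence of these relative stability constants can be illustrated with a simple 1:1 binding equilibrium, M + L ⇌ ML, where K = [ML]/([M][L]) and the fraction of metal bound at a given free-ligand concentration is K[L]/(1 + K[L]). In the sketch below only the ratios follow the text (K for Cu is 100× that for Cd and 1000× that for Zn); the absolute baseline constant and the free thiol-site concentration are arbitrary illustration values, not data from this work:

```python
def fraction_bound(K, free_ligand):
    """Fraction of total metal present as the 1:1 ML complex at equilibrium."""
    return K * free_ligand / (1.0 + K * free_ligand)

K_Zn = 1.0e4          # arbitrary baseline stability constant (M^-1)
K_Cd = 10.0 * K_Zn    # Cu is 100x Cd and 1000x Zn, hence Cd is 10x Zn
K_Cu = 1000.0 * K_Zn
L_free = 1.0e-5       # arbitrary free thiol-site concentration (M)

for name, K in [("Cu", K_Cu), ("Cd", K_Cd), ("Zn", K_Zn)]:
    print(f"{name}: {fraction_bound(K, L_free):.3f}")
```

With these illustrative numbers copper stays almost fully complexed while zinc remains mostly free, mirroring the observation that the Cu(I)-thiolate complex is stable and the metal is not easily released.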
Nevertheless, literature data indicate that MT synthesis is not always induced in freshwater mussels: laboratory studies in A. cygnea by [START_REF] Tallandini | Regulation and subcellular distribution of copper in the freshwater molluscs Anodonta cygnea (L.) and Unio elongatulus (Pf.)[END_REF] and in Dreissena polymorpha by [START_REF] Lecoeur | Evaluation of metallothionein as a biomarker of single and combined Cd/Cu exposure in Dreissena polymorpha[END_REF] showed no induction of MT after Cu 2+ exposure. The MT response to a metal exposure might be reflected in an increased turnover (synthesis and breakdown) of the protein, but not necessarily in changes in MT concentration [START_REF] Mouneyrac | Partitioning of accumulated trace metals in the talitrid amphipod crustacean Orchestia gammarellus: a cautionary tale on the use of metallothionein-like proteins as biomarkers[END_REF]. The Cys-containing peptides PC and GSH have been shown to form complexes with various metals through their thiolate sulfur atoms. In aquatic organisms, glutathione is believed to play a fundamental role in detoxifying metals. The soluble tripeptide GSH complexes and detoxifies trace metal cations soon after their entrance into the cells, thus representing a first line of defence against trace metal cytotoxicity [START_REF] Canesi | Heavy metals and glutathione metabolism in mussel tissues[END_REF]. In particular, reduction of Cu(II) by GSH produces a stable Cu(I)-SG complex which is also a physiological donor of Cu(I) to copper apoproteins in both mammalian and marine invertebrate cells. Methylmercury also has a high affinity for the SH group of GSH, and methylmercury-SG complexes have been identified in various animal tissues. Glutathione-metal complexes are transported across the plasma membrane and therefore represent a carrier for the elimination of the metal from the cells (Viarengo and Nott, 1993).
Conjugation of glutathione to metals prevents them from interacting deleteriously with other cellular components. Enhanced cadmium toxicity after glutathione depletion has been observed in both in vitro and in vivo mammalian studies. There is also evidence that glutathione depletion enhances metal toxicity in aquatic organisms. Glutathione depletion by an inhibitor of glutathione synthesis (buthionine sulfoximine) enhances copper toxicity in the oyster Crassostrea virginica [START_REF] Conners | Effects of glutathione depletion on copper cytotoxicity in oysters (Crassostrea virginica)[END_REF]. Biochemical and genetic studies have confirmed that GSH is the substrate for PC biosynthesis. PC have been assumed to function in the cellular homeostasis of essential transition metal nutrients, particularly Cu and Zn [START_REF] Schat | The role of phytochelatins in constitutive and adaptive heavy metal tolerances in hyperaccumulator and nonhyperaccumulator metallophytes[END_REF]. PC are high-affinity chelators of metals and play major roles in detoxification. Unequivocal evidence has been obtained for the involvement of PC synthesis in metal detoxification [START_REF] Courbot | Cadmium-Responsive Thiols in the Ectomycorrhizal Fungus Paxillus involutus[END_REF]Morelli and Scano, 2004;[START_REF] Thangavel | Changes in phytochelatins and their biosynthetic intermediates in red spruce (Picea rubens Sarg.) cell suspension cultures under cadmium and zinc stress[END_REF]. Induction of PC is triggered by exposure to various physiological and non-physiological metal ions. Some of these metals (Cd, Ag, Pb, Cu, Hg, Zn, Sn, Au, and As) form complexes with PC in vivo in algae, plants, and fungi (Clemens, 2006). PC are suspected to play a role in animals as well as in plants (Cobbett, 2000;[START_REF] Vatamaniuk | Worms take the 'phyto' out of 'phytochelatins[END_REF]Clemens and Peršoh, 2009). 
In the structural model of a PC-Cd complex, for example, the Cd coordinately binds one, two, three or, at maximum capacity, four sulfur atoms from either single or multiple PC molecules, resulting in amorphous complexes. In vivo, the size of the PC chain molecules and the pH stability are essential and determine the metal binding capacity per molecule of PC (Hirata et al., 2005). [START_REF] Konishi | Enhancing the tolerance of zebrafish (Danio rerio) to heavy metal toxicity by the expression of plant phytochelatin synthase[END_REF] introduced mRNA coding for PC synthase from Arabidopsis thaliana into early embryos of zebrafish. As a result, the heterologous expression of PC synthase and the synthesis of PC from GSH could be detected in the embryos. The developing embryos expressed PC synthase and became more tolerant to Cd exposure.

Extended summary

The present work was undertaken in order to investigate some potential causes involved in the Europe-wide observed phenomenon of the decline of various freshwater bivalve species. As stated in the IUCN Red List of Threatened Species, 44 % of all freshwater molluscs are under threat (http://www.iucnredlist.org/news/european-red-list-press-release). One of the potential causes is the increasing use and corresponding local or regional release of industrially and technologically important metals, besides many other potential factors involved in such a complex ecotoxicological sequence of events. Clearly, a restricted toxicological and pathobiochemical study such as the few investigations presented here is far too limited to arrive at a distinct and exclusive delineation of one single explanation for the widespread decline of freshwater molluscs, including the bivalves, one class of this important, threatened phylum. 
On the other hand, without such detailed and necessarily limited studies concentrating on the potential pathobiochemical and toxicological mechanisms of a single likely pollutant, the delineation of the multitude of facets of such ecotoxicological effects will remain blurred. Therefore, the overall conclusions derived from the investigations presented in more detail in the following are not meant as proof of the exclusive relevance of the increasingly used, technologically important metal copper, but as a demonstration of its potential relevance when considering the various possibilities which can have an impact on the stability of bivalve populations, including the most spectacular and most strongly threatened, the European pearl mussel.

Optimization of analytical protocols

Bivalve maintenance and copper exposure

Particular attention was paid to mussel maintenance in order to carry out the tests under stable, homogeneous, and reproducible conditions (detailed in article 1). The bottom of the tank was covered with a layer of glass beads of 10 mm diameter, so the mussels could find conditions for burying. This was important for giving the animals a substrate for close-to-nature behaviour, at the same time avoiding the use of natural river sediment as substrate; thus, unpredictable or unaccountable metal accumulation by adsorption onto sediment particles, or interference of microorganisms by metal species conversion, could be avoided without excessively disturbing the behaviour of the animals, as indicated by their regular ventilation activity. Moreover, the large beads facilitated a deep daily cleaning of the tank by suction. The water, artificially reconstituted from deionised water, was renewed daily in order to ensure equal conditions during the whole experimental course for all replicates and to avoid any uncontrolled metal accumulation or cross-contamination. 
Under these conditions, the copper concentrations could be maintained within close limits of the target concentration during the whole exposure period, as verified by graphite furnace atomic absorption spectrometry (GFAAS) or inductively coupled plasma mass spectrometry (ICP-MS).

Enzymatic analyses

The enzymatic analyses were carried out as described in articles 1 and 2. The activities of Ca2+-ATPase and Na+/K+-ATPase were determined in the suspension of the microsomal pellet obtained by homogenization of the mantle, digestive gland, gills, and kidney, followed by centrifugation at 75600 g (fig. 1); the activity of H+-ATPase was determined in the supernatant of the first centrifugation step at 2000 g (fig. 1). Inorganic phosphate released by the ATPases was quantified by spectrophotometry of the ammonium molybdate complex according to Chifflet et al. (1988). The CA activity was evaluated in the supernatants (S2000, fig. 1) by measuring the pH decrease according to Vitale et al. (1999). The study of phytochelatins in Anodonta cygnea led us to develop and optimise an HPLC analytical protocol for phytochelatin (PC) quantification in animal tissues, based on the method for plants developed by Minocha et al. (2008). A protein removal step by acidification and centrifugation was added (fig. 2), and the mobile phase gradient profile was modified as described in articles 3 and 4. PC were quantified in the cytosolic fractions of deproteinized tissue homogenates and metallothionein (MT) in the non-deproteinized cytosolic fractions, after thiol reduction with tris-(2-carboxyethyl)-phosphine hydrochloride (TCEP) and 1,4-dithiothreitol (DTT), respectively. Monobromobimane (mBBr) was used as a fluorescent tag. Initially non-fluorescent, the dye mBBr becomes fluorescent upon binding to thiol groups under dehalogenation. It is selective for thiol groups and allows their quantification with high sensitivity by fluorimetry. 
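Quantification of the mBBr-derivatized thiols by HPLC-fluorescence rests on an external calibration against the PC standards. The following is a minimal sketch of such a calibration, not taken from the article; the standard concentrations, peak areas, and function names are illustrative assumptions only.

```python
# Illustrative sketch of external calibration for mBBr-derivatized thiols
# quantified by HPLC with fluorescence detection. All numerical values
# below are hypothetical, not data from the study.

def fit_calibration(concentrations, areas):
    """Least-squares line through the origin: area = slope * concentration."""
    num = sum(c * a for c, a in zip(concentrations, areas))
    den = sum(c * c for c in concentrations)
    return num / den  # slope, in area units per (µmol/L)

def quantify(area, slope):
    """Concentration (µmol/L) of the thiol in the injected extract."""
    return area / slope

# Hypothetical PC2 standards (µmol/L) and their fluorescence peak areas
slope = fit_calibration([0.2, 0.4, 0.6], [1000.0, 2000.0, 3000.0])
print(quantify(1500.0, slope))  # -> 0.3 (µmol/L)
```

A forced-through-origin fit is only one possible design choice; a full calibration would normally include an intercept and blank correction.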
Each sample was spiked with 0.6 µmol L-1 of each standard PC2-5 to certify PC identity. The quantification limits per 20 µL sample injected into the HPLC column were 1.5 pmol for PC2-3, 2.7 pmol for PC4, and 5 pmol for PC5.

Effects of copper on calcium transport in Anodonta anatina

Physiological data

A good physiological knowledge of the biomarkers is important for their ecotoxicological interpretation. The present study (articles 1 and 2) allowed us to determine basal levels of the enzymatic activities of Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase of the plasma membrane. These enzymes were studied in gills, digestive gland, kidneys, and mantle, organs playing an important role in calcium input. The Ca2+-ATPase controls the cellular calcium concentration by active transport. The Na+/K+-ATPase and the H+-ATPase maintain the sodium and proton gradients necessary for the Na+/Ca2+ and H+/Ca2+ transporters. H+-ATPase and cytoplasmic CA were also studied for their implication in the biomineralization process of the shell and in the maturation of glochidia. There is a lack of physiological data for freshwater bivalves compared to marine models such as Mytilus edulis or M. galloprovincialis. These data are useful in understanding calcium transport mechanisms, since freshwater bivalves are subject to osmo- and iono-regulation different from marine mussels. The enzymatic activities found in the gills of A. anatina (table 1) were consistent with those reported in the gills of A. cygnea for Na+/K+-ATPase (0.11 µmol Pi/mg protein/min; Lagerspetz and Senius, 1979) and Ca2+-ATPase (0.58 µmol Pi/mg protein/min; [START_REF] Pivovarova | Effect of cadmium on the ATPase activity in gills of Anodonta cygnea at different assay temperatures[END_REF]), and in the gills of A. anatina for CA (2.1 U/mg protein) [START_REF] Ngo | Sub-chronic effects of environment-like cadmium levels on the bivalve Anodonta anatina (Linnaeus 1758): III. 
Effects on carbonic anhydrase activity in relation to calcium metabolism[END_REF]. The plasma membrane Ca2+-ATPase (PMCA) activity in the gills (article 1) was fourfold higher than in the marine bivalve Mytilus edulis (Burlando et al., 2004). This reflects how important active calcium transport is to the freshwater mussel Anodonta anatina. Calcium concentrations in seawater are higher than in freshwater, which explains why calcium absorption is easier for Mytilus edulis. PMCA activity is also high in the kidneys and the digestive gland, favouring calcium absorption from food and calcium reuptake from the renal filtrate. These physiological data allowed us to understand how crucial PMCA is for calcium homeostasis in freshwater bivalves compared to marine organisms. Moreover, in Unionidae, calcium plays a direct role in reproduction, since glochidia are incubated in the marsupium (gills), and is therefore important for population growth of mussels. Na+/K+-ATPase maintains the transmembrane sodium gradient necessary for calcium facilitated diffusion by Na+/Ca2+ antiporters. In Anodonta anatina, Na+/K+-ATPase activity varied in a consistent manner as a function of the seasons (article 2). The enzymatic activities of Na+/K+-ATPase and H+-ATPase were found to be maximal in summer, the season of shell and glochidia calcification in Anodonta mussels (Taskinen, 1998;Moura et al., 2000). This seasonal variation is an important aspect which is imperative to consider for data interpretation in ecotoxicological investigations. Calcium is also involved in cellular signalling pathways in addition to biomineralization and shell formation in mussels and in glochidia. The kidneys have an essential role in filtration and reabsorption of ions, water, and organic molecules from the ultrafiltrate. 
As freshwater organisms are hyperosmotic, the osmotic pressure resulting from the concentration gradient between the internal and surrounding compartments leads to water uptake by osmosis and ionic loss by diffusion. The water uptake is compensated by urine production, and ionic loss is limited by active ion reabsorption. In freshwater bivalves, the daily output of urine is high. The kidneys play an essential role in calcium homeostasis by limiting ionic loss in urine through active Ca2+ reuptake from the filtrate (Turquier, 1994). An inhibition of calcium reabsorption may therefore dramatically disturb calcium homeostasis. Because of analytical problems inherent in the small biomass of the kidneys, this tissue has been poorly studied in freshwater bivalves. This organ plays a role in detoxification (Viarengo and Nott, 1993) and is essential to ionoregulation. Our results (article 1) showed the high sensitivity of Ca2+-ATPase to Cu2+ in this organ. The Unionidae freshwater bivalves are widely recognized for their capacity to accumulate a variety of environmental contaminants, including metals, in their tissues (Bonneris et al., 2005). This marked metal tolerance is achieved through biochemical strategies that involve metal sequestration. The intracellular sequestration of metals is based on a sequence of cellular events involving a cascade of different ligands with increasing metal binding strengths. High concentrations of metal could also inhibit detoxification mechanisms. In our results (articles 1 and 2), recovery of Ca2+-ATPase and Na+/K+-ATPase activity within 7 days at the low Cu2+ concentration (0.35 µmol L-1) indicated an adaptive ability. This suggested the mobilisation of detoxification systems efficient at low Cu2+ concentrations. At the higher Cu2+ concentration (0.64 µmol L-1), no recovery was noted. 
It is interesting to note that recovery was observed only at 0.35 µmol L-1, a concentration environmentally more relevant than the higher concentration studied (article 1). Therefore, it is important to use environmental concentrations in ecotoxicological research; extrapolation of results observed at high doses to environmental situations may be critical and not pertinent, owing to different mechanisms of adaptation and toxicity at high and low doses.

Metal detoxification mechanisms in Anodonta cygnea

Aquatic molluscs have a number of properties that make them one of the most popular sentinels for monitoring water quality. Molluscs take up and accumulate high levels of trace metals, although body concentrations show wide variability across metals and invertebrate taxa. The interest in bivalves for the assessment of water quality is primarily linked to their tendency to accumulate trace metals even at high concentrations (Doyotte et al., 1997). Tolerance depends on the ability to regulate the metal concentrations in the cells and to accumulate excess metals in non-toxic forms (Viarengo and Nott, 1993). In our study, the recovery of Ca2+-ATPase (article 1) and Na+/K+-ATPase (article 2) activity observed within 7 days of exposure indicates an induction of detoxification mechanisms. Copper belongs to the transition metals, most of which are known to show various degrees of affinity for thiol groups. This property makes chelating ligands carrying thiol functions the first mechanism of metal detoxification. In articles 3 and 4, the study was focused on the mechanisms of metal detoxification by thiol-rich compounds. In most animals, tolerance to trace metals depends on the induction of MT, a family of thiol-rich proteins of low molecular weight. 
These metalloproteins are known to play an important role in the homeostatic control of essential metals such as Cu and Zn, but also in the detoxification of excessive amounts of essential and non-essential trace metals. In the tissues of metal-exposed mussels there is a rapid increase of metallothioneins, the soluble proteins involved in transition metal detoxification, which acts synergistically with lysosomal compartmentalization (Amiard et al., 2006). Phytochelatins (PC) are thiol-rich oligopeptides which have been characterized in a wide range of plant species (Grill et al., 1985). PC play a major role in the detoxification of trace metals in plants by chelating metals with high affinity. In 2001, two publications independently described a functional phytochelatin synthase in the invertebrate nematode Caenorhabditis elegans (Clemens et al., 2001;Vatamaniuk et al., 2001). Brulle et al. (2008) provided the first evidence of a phytochelatin synthase from the earthworm Eisenia fetida implicated in dose-dependent cadmium detoxification. Neither study furnished evidence of the existence of the PC peptides themselves in invertebrates or other animals. Recent cloning of the gene encoding phytochelatin synthase in E. fetida (Brulle et al., 2008;[START_REF] Bernard | Metallic trace element body burdens and gene expression analysis of biomarker candidates in Eisenia fetida, using an ''exposure/depuration'' experimental scheme with field soils[END_REF]) suggested the existence of this detoxification pathway in this species. A superficial view of the limited selection of species in which such sequences have been identified might suggest that invertebrates with an aquatic or soil habitat are more likely to express PC (Cobbett, 2000). The objectives of our study were to determine the ability of A. cygnea to synthesize PC (article 3) and to study the possible role played by PC in copper detoxification in this freshwater bivalve (article 4). 
In plants, PC are rapidly induced in cells and tissues exposed to a range of transition metals and play an important role in detoxification. The results obtained in the present work (articles 3 and 4) showed, to our knowledge for the first time, the presence of phytochelatins in invertebrates. Basal levels were determined for PC2-4, γ-GluCys, and MT in the gills and the digestive gland (table 2). Besides the interest of these data for freshwater bivalve studies, the detection of PC2-4 in the absence of excessive copper and other metals suggests their role in essential metal homeostasis. In the gills and the digestive gland, PC2 was found in higher concentrations, followed by PC3, which again was higher in concentration than PC4. PC2 and PC3 were found to be two or three times higher in the digestive gland than in the gills.

Table 2: Basal levels of phytochelatins 2-4 (µg PC/g tissue wet weight), γ-GluCys (µg γ-GluCys/g tissue wet weight), and metallothionein (mg MT/g protein) in Anodonta cygnea. G: gills; DG: digestive gland; PC: phytochelatin; MT: metallothionein.

In the European continental hydrosystems affected by agricultural and/or urban activities, levels of up to 0.6 µmol L-1 of Cu2+ are seasonally found ([START_REF] Neal | A summary of river water quality data collected within the Land-Ocean Interaction Study: core data for eastern UK rivers draining to the North Sea[END_REF]Falfushynska et al., 2009). In our work (articles 1 and 2), inhibition followed by recovery of Ca2+-ATPase and Na+/K+-ATPase activities was observed in mussels exposed to 0.35 µmol L-1 Cu2+, an environmentally relevant concentration. Therefore, in our studies on the detoxification mechanisms by metal-binding Cys-rich compounds (articles 3 and 4), the mussels were exposed to the same Cu2+ concentration. A statistically significant PC2 induction in the gills of A. cygnea exposed to 0.35 µmol L-1 Cu2+ was observed (article 4), i.e. 
50 % from 12 h to 4 d of exposure, and 30 % upon 7 d. In the digestive gland, significant PC2 induction was observed only at 12 h of Cu2+ exposure. Relative to the respective controls, γ-GluCys increased significantly in the gills between 48 h and 7 d of exposure, and between 48 h and 4 d in the digestive gland. The induction of PC2 in A. cygnea exposed to Cu2+ suggests its key role as a metal chelator in a first line of detoxification, together with other compounds such as GSH. The higher sensitivity of PC2 induction in the gills could be explained by the water route of Cu exposure. The gills may play a more important role than the digestive gland in the uptake of copper dissolved in the test media. However, beyond 7 d of exposure, PC2 declined to control values within 21 d of exposure. This decrease suggests that long-term Cu2+ detoxification was shifting to other mechanisms. In Unionidae, MT and insoluble granules are known to play a role in long-term metal detoxification. Bonneris et al. (2005) showed that cadmium, zinc, and copper concentrations in the gill granule fraction were significantly correlated with environmental concentrations of these metals. The granules are known to be a preferential site for copper storage in the gills of Unionidae. Around 65 % of the total copper in the gills was found to be bound to granules in Anodonta grandis grandis, where the calcium concretions represented 51 % of the gill dry weight. Similar values were found in Anodonta cygnea (Bonneris et al., 2005). No increase of MT was observed in the present study upon Cu2+ exposure for 21 d. The MT isoform (< 10 kDa) identified in the mussel extract by HPLC separation in our study was not induced by Cu2+. Detoxification in A. cygnea by other MT isoforms not detected by the HPLC method cannot be excluded. Indeed, MT polymorphism is known to be important in invertebrates (Géret et al., 2002b;Amiard et al., 2006). 
A good example of such polymorphism and functional divergence is found for MT in the snail Helix pomatia. These snails can tolerate exceptionally high concentrations of cadmium. Additionally, they accumulate relatively high amounts of copper, needed for the biosynthesis of the oxygen carrier hemocyanin. The specific metal accumulation is paralleled by the presence of MT forms which specifically bind one type of metal (Cd or Cu), both containing 18 conserved Cys residues but differing in other amino acids [START_REF] Vašák | Advances in metallothionein structure and functions[END_REF]. Metallothioneins, granules, and antioxidant systems have been described to be involved in detoxification mechanisms in freshwater bivalves. Cossu et al. (1997) have shown that in Unionidae antioxidants, and especially GSH, play major roles in metal detoxification. In the gills of Unio tumidus, a decrease of GSH by 45 % was found in mussels exposed at a metal-contaminated site. A decrease in GSH level in Unionidae exposed to copper was confirmed by Doyotte et al. (1997) under controlled laboratory conditions, indicating either metal blockage of SH groups or the use of GSH as a substrate of antioxidant enzymes. Indeed, a parallel involvement of antioxidant enzymes had been described, with increased activity at low metal concentrations (Vasseur and Leguille, 2004). GSH also plays a role in PC synthesis. This double role, in direct metal detoxification and as a PC precursor, could explain this decrease (fig. 1).

Decline of the Unionidae populations

Over the past 50 years, a world-wide decline of autochthonous freshwater molluscs has been observed (Lydeard et al., 2004). Among the different species, the Unionoida taxon seems to be particularly endangered. Species such as the pearl mussels Margaritifera margaritifera and Margaritifera auricularia are declining dramatically in European rivers. 
Other species belonging to the Unionidae family are included in the red list of threatened species established by the World Conservation Union, as is the case for Anodonta anatina and Anodonta cygnea. The Unionidae populations represent the largest part of the total biomass in many aquatic systems. These detritivorous invertebrates take an active part in sedimentation and water purification, modifying the phytoplankton community (Aldridge, 2000;[START_REF] Vaughn | Community and foodweb ecology of freshwater mussels[END_REF]). Therefore, the disappearance of Unionidae may result in structural and functional perturbations of aquatic ecosystems. The competition with invasive bivalves is also a matter of debate. Invasive freshwater bivalves such as Corbicula fluminea or Dreissena polymorpha, not belonging to the Unionidae, colonise freshwater hydrosystems with detrimental effects on other invertebrates. These invasive species do not play the same functional role as Unionidae in ecosystems. Their presence is hypothesized to contribute to the decline of the Unionidae. A combination of various factors may explain the gradual disappearance of Unionoidea in their competition with invasive species. Depletion of salmonids as host fish has also been mentioned as a possible cause. Salmonid fish populations are affected by water pollution; moreover, pollutants may indirectly promote natural disease factors such as parasitism, which is known to endanger salmonids (Voutilainen et al., 2009) and bivalves; trematode parasitism of Anodonta populations entails complete infertility [START_REF] Taskinen | Age-, size-and sex-specific infection of Anodonta piscinalis (Bivalvia, Unionidae) with Rhipidocotyle fennica (Digenea, Bucephalidae) and its influence on host reproduction[END_REF]. Other major reasons for the decline are the physical degradation of streams and the reworking of river beds and canals, as well as the degradation of water quality. 
Indeed, interactions between pollutants may disturb these biocoenoses even at low concentrations (Vighi et al., 2003). In order to protect the autochthonous Unionidae, it is important to determine how and to which extent these factors contribute to their disappearance. The present work is meant as a contribution to understanding the role of metal pollution in affecting calcium homeostasis in freshwater mussels, likely one of the reasons for the widespread decline of mussel populations in European rivers (Frank and Gerstmann, 2007).

Conclusion

A macrobenthos community is an excellent indicator of water quality (Ippolito et al., 2010). The objectives of the present work were to study some toxic effects of ionic copper and its detoxification mechanisms in freshwater mussels of the genus Anodonta belonging to the Unionidae. First, Cu2+ as a potential inhibitor of proteins playing a role in calcium transport and biomineralization was assessed with A. anatina. Secondly, mechanisms of copper detoxification by MT, PC, and γ-GluCys were studied in A. cygnea. In the present study, the effects of Cu2+ exposure on the activities of Ca2+-ATPase, Na+/K+-ATPase, and H+-ATPase of the plasma membrane, and of cytosolic CA, have been evaluated in the freshwater bivalve A. anatina. Upon 4 d of exposure, inhibition of Ca2+-ATPase activity in the gills and the kidneys, and of Na+/K+-ATPase activity in the gills and the digestive gland, was observed in A. anatina exposed to 0.35 µmol L-1 Cu2+. A total recovery of Ca2+-ATPase was observed upon 7 d of exposure, indicating that detoxification mechanisms are activated, except at the higher Cu2+ concentration of 0.64 µmol L-1. Ca2+-ATPase was particularly sensitive to inhibition in the kidney, an organ playing an important role in calcium reuptake and iono-regulation. It would be interesting to study the long-term consequences of disturbances of osmoregulatory functions in the kidney of A. 
anatina exposed to 0.64 µmol L-1 Cu2+. The present work is the first to identify phytochelatins, i.e. PC2, PC3, and PC4, in freshwater mussels. In A. cygnea exposed to 0.35 µmol L-1 of Cu2+, PC2 was induced in the digestive gland and the gills within 12 h. The induction in the gills persisted for 7 days. These results suggest a role of PC in essential metal homeostasis and their involvement in first-line detoxification. Our HPLC results showed no variation of MT levels after Cu2+ exposure, at least for the MT isoform quantified with our method. Additional MT experiments with A. cygnea exposed to Cu2+ under the same conditions, employing a spectrometric method which allows the quantification of all MT isoforms, would be interesting in order to compare our results with those found in A. anatina by Nugroho and Frank (2012). In parallel, the HPLC method should be optimized to allow the detection of other MT isoforms induced by copper. Copper has been reported as a weak inducer of metal-binding thiol peptides (Zenk, 1996). Exposure to a non-essential metal that is a strong inducer of PC, such as cadmium, could be interesting. Comparison of the effects of an essential metal like copper and a non-essential metal like cadmium could allow determination of the role played by PC in essential metal homeostasis and in detoxification mechanisms. Copper is an inducer of reactive oxygen species (ROS) through Fenton-type reactions. The activity of PC in scavenging ROS in A. cygnea exposed to copper would be interesting to study. Chemical pollution is one of the environmental factors that may impact bivalve populations. Toxicity depends not only on the bioavailability of pollutants and their intrinsic toxicity, but also on the efficiency of detoxifying systems in eliminating reactive chemical species.

Articles

INTRODUCTION

Na+/K+-ATPase and H+-ATPase are important enzymes mainly involved in osmoregulation and acid-base balance. 
As a proton pump, H+-ATPase also plays an essential role in shell synthesis through pH control, a decisive factor in the biomineralization process (Machado et al., 1989). The plasma membrane Na+/K+-ATPase and H+-ATPase maintain the sodium and proton gradients necessary for the calcium antiporter system. Although calcium uptake is mainly achieved by the active flux generated by Ca2+-ATPase, facilitated diffusion through Ca2+/Na+ or Ca2+/H+ antiporters is another important route of calcium entry into the cell (Wheatly et al., 2002). Perturbation of calcium homeostasis has been hypothesized as a possible cause of the decline of the pearl mussel populations (Frank and Gerstmann, 2007). Another freshwater mussel belonging to the Unionoida order known to bioaccumulate copper is Anodonta anatina (Cossu et al., 1997;Nugroho and Frank, 2011). Inhibition of Ca2+-ATPase activity in the plasma membrane was shown in A. anatina exposed to 0.35 µmol L-1 of copper (Santini et al., 2011a). Both H+-ATPase and Na+/K+-ATPase of the plasma membrane are metal-sensitive enzymes with functional SH groups. Due to its thiol affinity, copper is likely to affect the activity of these enzymes (Viarengo et al., 1991), and calcium homeostasis could thus be indirectly affected by copper: alteration of the Na+ and H+ gradients will entail decreased efficacy of the Ca2+/Na+ or Ca2+/H+ antiporters. Therefore, it seemed interesting to evaluate the activities of both enzymes in mussels in order to determine their sensitivity to this transition metal, which nowadays is widely found in the environment at elevated levels. In the present study, the effects on the activities of Na+/K+-ATPase and H+-ATPase have been evaluated with A. anatina mussels exposed to copper ions (Cu2+) at 0.35 µmol L-1. The H+-ATPase activity was measured in the mantle, which plays the dominant role in the generation, growth, and repair of the shell. 
Na+/K+-ATPase activity was determined in the mantle, the gills, and the digestive gland, organs which are mainly involved in calcium uptake and homeostasis (Coimbra et al., 1993).

MATERIAL AND METHODS

Chemicals

All chemicals used for maintenance and exposure of bivalves, for sample preparation, and for the biochemical assays were of analytical grade. Ethylene glycol-bis(β-aminoethylether)-N,N,N',N'-tetraacetic acid (EGTA), ouabain, phenylmethylsulfonyl fluoride (PMSF), sodium azide, sodium dodecyl sulphate (SDS), N-ethylmaleimide (NEM), and sodium orthovanadate were from Fluka (Schnelldorf, Germany). Adenosine-5'-triphosphate (ATP), N-(2-hydroxyethyl)piperazine-N'-(2-ethanesulfonic acid) (HEPES), and tris-(hydroxymethyl)-aminomethane (Tris) were from Carl Roth (Karlsruhe, Germany).

Animal maintenance and copper exposure

Mussel maintenance was described in detail in a previous article (Santini et al., 2011a). Briefly, adult mussels A. anatina with shell lengths of 6.5-7.5 cm were kept at 17 °C in a glass aquarium containing 1.5 L per animal of filtered and aerated artificial pond water. The water was renewed every 48 h. The bottom of the tank was covered with a 5-cm layer of glass beads (about 10 mm diameter), so the mussels could find conditions for burying. They were fed daily with the unicellular alga Chlorella kessleri from a culture in the exponential growth phase, added to a final algal density of 2×10^5 cells mL-1. Animals were acclimatized for 3 weeks in September before any experiment. For the copper (Cu2+) exposure, animals were placed in a 20 L aquarium lined with dye- and pigment-free high-density polyethylene foils, with a glass bead layer, and filled with 1.5 L per mussel of experimental medium (pH 7.2, temperature 17 ± 0.5 °C). The bivalves were fed daily as described above and the test media were renewed every day. A concentration of 0.35 µmol L-1 Cu2+ was chosen for the exposure test. 
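Feeding to a fixed final algal density in a known water volume amounts to a simple dilution balance. The sketch below is illustrative only; the stock-culture density is a hypothetical value not given in the article.

```python
# Illustrative dilution balance for the daily feeding: how much algal stock
# culture brings the tank to the target density of 2e5 cells/mL.
# The stock density used in the example is hypothetical.

TARGET_DENSITY = 2e5  # cells/mL, final density in the tank (from the text)

def stock_volume_ml(tank_volume_l, stock_density_cells_per_ml):
    """Volume of stock culture (mL) giving the target density in the tank.

    Dilution balance: V_stock * C_stock = V_tank * C_target
    (assumes V_stock << V_tank, so the tank volume change is negligible).
    """
    tank_ml = tank_volume_l * 1000.0
    return tank_ml * TARGET_DENSITY / stock_density_cells_per_ml

# Example: the 20 L exposure aquarium, hypothetical stock at 5e6 cells/mL
print(f"{stock_volume_ml(20.0, 5e6):.0f} mL")  # -> 800 mL
```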
The concentration was controlled by inductively coupled plasma mass spectrometry, with a limit of Cu detection of 0.5 µg L-1 (i.e. 8 nmol L-1). The test was conducted with 12 mussels randomly assigned in groups of 3 to each treatment. At the beginning (time 0), a group of 3 mussels unexposed to Cu2+ was used as control. A group of 3 mussels was taken from the aquarium after 4 d, 7 d, and 15 d of Cu2+ exposure. The mussels of each treatment were dissected separately for biochemical analyses, i.e. 3 replicates per treatment.

Tissue sample preparation for enzymatic analysis

Tissue samples of the gills, the mantle, and the digestive gland were prepared. The samples were suspended in 6 vol. ice-cold HEPES buffer (10 mmol L-1 HEPES, 250 mmol L-1 sucrose, 1 mmol L-1 EDTA, 1 mmol L-1 phenylmethylsulfonyl fluoride (PMSF) as protease inhibitor, adjusted to pH 7.4 with 1 mol L-1 HCl) and homogenized by means of a motor-driven Teflon pestle homogeniser with 30 up-and-down strokes. The resulting homogenates were centrifuged at 2000 g (10 min, 4 °C), and the supernatants (S2000) were diluted with 12 mL ice-cold HEPES buffer per g tissue wet weight and centrifuged at 10000 g (20 min, 4 °C). The supernatants (S10000) were ultracentrifuged at 75600 g (60 min, 4 °C), and the final pellets were suspended in 6 vol. ice-cold Tris-HCl buffer (25 mmol L-1 Tris, 1 mmol L-1 PMSF, adjusted to pH 7.4 with 1 mol L-1 HCl) for the plasma membrane Na+/K+-ATPase assay. Plasma membrane H+-ATPase activity was determined in the S2000 supernatant used as mantle extract. All samples were frozen in liquid nitrogen and stored at -80 °C until the analyses were carried out, not later than one week after sampling. Protein concentrations were determined according to Bradford (1976), using bovine serum albumin as standard.
Plasma membrane H+-ATPase and Na+/K+-ATPase activity assay

Plasma membrane H+-ATPase and Na+/K+-ATPase activities were determined by measurement of the inorganic phosphate released (Chifflet et al., 1988), quantified by spectrophotometry of the ammonium molybdate complex at 850 nm (spectrophotometer Unicon, Kontron 930). The H+-ATPase reaction medium contained (final concentrations) 6 mmol L-1 MgSO4, 50 mmol L-1 HEPES, 0.5 mmol L-1 sodium orthovanadate (P-ATPase inhibitor), 0.5 mmol L-1 sodium azide as inhibitor of mitochondrial ATPase activities, and 4 mmol L-1 Na2ATP, adjusted to pH 7.4. The mantle extract (160 µg protein) was incubated in a 1 mL final volume for 60 min in a shaking water bath at 25 °C, with or without addition of NEM as H+-ATPase inhibitor (Lin and Randall, 1993). The reaction was stopped by addition of a 400 µL sample to 400 µL of a 12 % solution of sodium dodecyl sulphate (SDS). Blanks were prepared in the same way except that the tissue samples were added after the SDS-dilution step. H+-ATPase activity, expressed in micromoles of Pi released per mg protein per min, was determined as the difference between the ATPase activity without NEM and the residual activity in the presence of 10 mmol L-1 NEM. The Na+/K+-ATPase reaction medium contained (final concentrations) 100 mmol L-1 NaCl, 0.5 mmol L-1 EGTA, 5 mmol L-1 MgCl2, 25 mmol L-1 Tris, 0.5 mmol L-1 sodium azide as inhibitor of mitochondrial ATPase activities, and 4 mmol L-1 Na2ATP, adjusted to pH 7.4. Samples of the membrane fractions (suspensions of the 75600 g pellet) of the gills, the digestive gland, and the mantle (20, 70, and 60 µg protein, respectively) were incubated in a 1 mL final volume for 20 min in a shaking water bath at 37 °C, with or without addition of K+ and ouabain as Na+/K+-ATPase inhibitor (Lionetto et al., 1998).
The reaction was stopped by addition of a 400 µL sample to 400 µL of a 12 % solution of sodium dodecyl sulphate (SDS). Blanks were prepared in the same way except that the tissue samples were added after the SDS-dilution step. Na+/K+-ATPase activity, expressed in micromoles of Pi released per mg protein per min, was determined as the difference between the ATPase activity in the presence of 20 mmol L-1 KCl and the ATPase activity without KCl and in the presence of ouabain (1 mmol L-1).

Statistical analyses

Data distributions were not normal, so statistical analysis for comparison of the enzymatic activities between treated and control mussels was done using the non-parametric Mann-Whitney test (Statistica, StatSoft France 2001, data analysis software, version 6, Maisons-Alfort, France). All data are reported as means (n = 3) ± standard deviation (S.D.). Differences were considered significant when p < 0.05.

RESULTS

No significant effect of copper on H+-ATPase activity was noted in the mantle of mussels over 15 days of exposure (fig. 2). The H+-ATPase activities exhibited a continuously declining trend, although statistically not significant owing to the high variability between mussels. The basal activities (expressed in micromoles of Pi released per mg protein per min) varied consistently with season: they were highest in July and September compared with the values measured in January, March and April (fig. 3). In July, the Na+/K+-ATPase activity, expressed in µmol Pi/mg protein/min, was 0.107 ± 0.027 in the gills, 0.047 ± 0.050 in the digestive gland, and 0.053 ± 0.025 in the mantle (fig. 3). The H+-ATPase activity of the mantle did not differ significantly over the study period (from July to April), with 0.0025 ± 0.001 µmol Pi/mg protein/min in July (fig. 4).
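Both assays express activity as an inhibitor- or ion-sensitive difference in Pi release, normalized to the amount of protein and the incubation time. A minimal sketch of this arithmetic, with hypothetical Pi readings (not measured values):

```python
def atpase_activity(pi_total_umol, pi_baseline_umol, protein_mg, time_min):
    """Inhibitor-sensitive ATPase activity in µmol Pi / mg protein / min.

    pi_total_umol:    Pi released in the stimulated condition (e.g. +K+, or without NEM)
    pi_baseline_umol: Pi released in the inhibited condition (e.g. +ouabain, or +NEM)
    Only the difference between the two readings is attributed to the enzyme of interest.
    """
    return (pi_total_umol - pi_baseline_umol) / (protein_mg * time_min)

# Hypothetical gill membrane fraction: 20 µg protein, 20 min incubation
activity = atpase_activity(pi_total_umol=0.0498, pi_baseline_umol=0.0070,
                           protein_mg=0.020, time_min=20)
print(f"{activity:.3f} µmol Pi/mg protein/min")
```

With these illustrative numbers the result lands near the July gill value reported above (0.107 µmol Pi/mg protein/min); the subtraction removes phosphate released by ATPases other than the one assayed.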
Minimal values of Na+/K+-ATPase and H+-ATPase activities were observed in April in every tissue, with strong decreases to about one-half and one-tenth of the July values, depending on the organ.

DISCUSSION

Among freshwater bivalves, whose general decline has been observed (Lydeard et al., 2004), the Unionoida taxon is particularly endangered. Alteration of ionic transport systems such as Na+/K+-ATPase and H+-ATPase by copper has been poorly investigated in the freshwater mussel Anodonta anatina, which belongs to this taxon. Copper, a metal extensively used in industry, the building sector, public energy supply and transportation systems, and agriculture, was studied in the present work as a potential factor implicated in this complex phenomenon of decline. The mean basal activities of Na+/K+-ATPase in the different organs studied ranked gills > mantle > digestive gland. The high activity measured in the gills reflects the specialization and implication of this organ in ionoregulation: the gills are known to play a major role in the active uptake of mineral ions from the surrounding water (Turquier, 1994). The mantle maintains the ionic balance between the haemolymph and the extrapallial fluid, necessary for shell growth or calcium body reabsorption (Coimbra et al., 1993). The basal activities of Na+/K+-ATPase determined in the gills of control A. anatina mussels were of the same order of magnitude as those found by Lagerspetz and Senius (1979) in A. cygnea, and in the freshwater fish Bidyanus bidyanus (Alam and Frankel, 2006) (Table 1). The hyperosmotic status of A. anatina and A. cygnea is reflected in the high basal activities of the enzyme. In comparison, the marine mussel Mytilus galloprovincialis showed a Na+/K+-ATPase activity threefold lower than that of A. anatina (Viarengo et al., 1996). There is little information in the scientific literature on the H+-ATPase activity in the mantle of mussels.
Investigations of the enzyme activity, expressed in µmol Pi/mg protein/min (table 1), were carried out mostly in the gills of crustaceans (0.023) and fish (0.003 to 0.041). In the present work, the mean H+-ATPase activity of 0.0025 ± 0.001 found in the mantle of A. anatina was lower. In this study, seasonal fluctuations of Na+/K+-ATPase basal activity were observed, with clearly and significantly elevated levels in July and September. It is known that freshwater mussels show seasonal changes in calcification (Taskinen, 1998). Our results correspond to the cycle of calcification found in Anodonta cygnea, a closely related species. The growth of clam shells and the level of glycosaminoglycans, known to be important for biomineralisation, both increased in summer with a maximum in July and August (Taskinen, 1998; Moura et al., 2000). These fluctuations, parallel to the calcification cycle, suggest an indirect implication of Na+/K+-ATPase in calcium transport via the Ca2+/Na+ antiporter. H+-ATPase activities showed comparable, though not statistically significant, seasonal fluctuations. Na+/K+-ATPase activities of the gills and the digestive gland were strongly affected after 4 days of exposure to 0.35 µmol L-1 of Cu2+, and to a lesser extent in the mantle. Therefore, even though our results do not provide direct evidence of Ca2+/Na+ antiporter inhibition, a decrease of calcium ionic transport can be assumed. This is consistent with the results observed by Viarengo et al. (1996) in M. galloprovincialis exposed for 4 d to 0.6 µmol L-1 of Cu2+. In A. anatina, inhibition of Na+/K+-ATPase modified the cellular Na+ gradient, which could result in a reduced activity of the Ca2+/Na+ antiporter and disturbance of calcium homeostasis. A similar profile of inhibition of the H+-ATPase activity was observed in the mantle, but the inter-individual variations were too high for the effect to be statistically significant. Some recovery of the Na+/K+-ATPase was observed in the digestive gland of A.
anatina within 7 days of Cu2+ exposure. Viarengo et al. (1996) also showed a return to the basal level of Na+/K+-ATPase activity in M. galloprovincialis exposed for 7 d to 0.6 µmol L-1 of Cu2+. This may indicate detoxification systems mobilized early in exposure to cope with stressors through metal sequestration or binding to metallothioneins. Despite a trend indicating partial recovery beyond 4 days, the Na+/K+-ATPase activity in the gills of A. anatina was still significantly inhibited after 15 days of the experiment. An inhibition of Ca2+-ATPase activity was also observed in the gills and the kidneys of A. anatina exposed to Cu2+ under the same conditions as in the present study (Santini et al., 2011a). Taken together, these results indicate an inhibition of ionic transport by Cu2+, which may perturb calcium homeostasis and, more generally, ionoregulation. In the long term, this may lead to biomineralization perturbation (thinner shells, decreased glochidia viability) and physiological impairments.

INTRODUCTION

Several transition metals (Cr, Mn, Fe, Co, Cu, Zn, Mo) have chemical properties that make them essential for biological systems but toxic in excess. Essential and non-essential metals pose the problem of being toxic in the micromolar concentration range. For homeostasis and detoxification of such metals, bacteria, plants, and animals employ a common strategy: synthesizing cysteine-rich peptides and proteins with high-affinity metal-binding sites in the form of thiol groups (Clemens, 2006), viz. glutathione, phytochelatins, and metallothioneins. Until recently, in animal studies on trace metal detoxification, only reduced glutathione (GSH) and the family of metallothioneins (MT) were taken into consideration. The ubiquitous tripeptide GSH (γ-GluCysGly) is present in most eukaryotic cells at concentrations of about 0.2 to 10 mmol L-1.
Metallothioneins are cysteine-rich, ubiquitous cytosolic proteins of low molecular weight (~4-14 kDa). In addition to GSH and MT, in plants, some fungi, and yeasts, phytochelatins (PCn) represent a third type of thiol-bearing entity (Grill et al., 1985; Clemens, 2006). Phytochelatin synthesis is induced by a range of metals, e.g. Ni, Cu, Zn, Ag, Hg, and Pb, and the metalloid arsenic (Clemens, 2006). Homologous genes encoding a functional PCS were identified in the nematode Caenorhabditis elegans (Clemens et al., 2001; Vatamaniuk et al., 2001) and more recently in the oligochaete Eisenia fetida (Brulle et al., 2008). These findings suggest that PCn may play a role in metal homeostasis in animals as well. Bivalves are known to bioaccumulate persistent organic pollutants and metals, which is one of the suspected reasons for the general decline of freshwater bivalves (Frank and Gerstmann, 2007). According to Clemens and Peršoh (2009), bivalves also possess a homologous PCS gene. Anodonta cygnea is a freshwater bivalve belonging to the Unionidae, a filter-feeding and burrowing species living at the water/sediment interface. Being in close contact with the aquatic environment, bivalves bioaccumulate transition metals (Gundacker, 2000; Nugroho and Frank, 2011). The aim of the present study was to determine whether PCn are actually synthesised by A. cygnea in the gills and the digestive gland. Indeed, the presence of PCn in animals was established here for the first time.

MATERIALS AND METHODS

Chemicals

Phytochelatin standards (PC2, PC3, PC4, and PC5) [PCn, (γ-Glu-Cys)n-Gly, where n = 2-5] and monobromobimane (mBBr) were from Anaspec (San Jose, CA, USA). Acetonitrile …

The organs from two mussels were pooled in order to obtain sufficient mass for analysis. The samples were immediately analysed after sampling or frozen in liquid N2 and stored at -80 °C for no more than 4 weeks until analysis.
To avoid enzymatic degradation or PCn oxidation, the tissues were homogenised in acid buffer (6.3 mmol L-1 DTPA, 0.1 % TFA) with a manual Potter-Elvehjem homogenizer with glass pestle. The homogenization was performed with 500 mg tissue in 1 mL buffer. The homogenate was centrifuged at 3500 g for 10 min (model 1-15PK, Sigma-Aldrich, St. Quentin Fallavier, France). The resulting supernatant (500 µL) was deproteinized by addition of 125 µL of 5 mol L-1 perchloric acid and centrifuged again at 13000 g for 30 min. The supernatant (500 µL) was neutralised with 100 µL of 5 mol L-1 NaOH and used for PCn analyses.

Reduction and derivatization: Just after extraction, 99 µL of tissue extract was mixed with 244 µL of HEPPS buffer (200 mmol L-1 HEPPS, 6.3 mmol L-1 DTPA, pH 8.2), 10 µL of TCEP solution (20 mmol L-1 TCEP in HEPPS buffer, pH 8.2) as disulfide reductant, and 4 µL of a 0.5 mmol L-1 NAC solution as internal standard. Reduction was conducted in a water bath at 45 °C for 10 min. The sample injection volume was 20 µL. Fluorescence of mBBr-labeled compounds was monitored at an excitation wavelength of 382 nm and an emission wavelength of 470 nm. Derivatized PCn were separated on a reversed-phase column (Phenomenex Synergi Hydro-RP C18, 100 mm × 4.6 mm, 4 µm particle size), protected by a C18 guard column (4 mm × 3 mm, 5 µm; Phenomenex SecurityGuard cartridge). The temperature of the column oven was 40 °C. Peak areas were integrated using dedicated software (UniPoint system, version 1.90, Gilson, Villier-le-Bel, France). The bimane derivatives were separated using a gradient of mobile phase A (99.9 vol-% ACN, 0.1 vol-% TFA) and B (89.9 vol-% water, 10 vol-% ACN, 0.1 vol-% TFA). The gradient profile started with a linear increase of mobile phase A from 0 to 10.6 % over 11.2 min at 1 mL min-1; solvent A was then raised linearly from 10.6 to 28.6 % over 13.6 min.
Before injecting a new sample, the column was rinsed with 100 % of solvent A for 5.5 min at a flow rate of 2.5 mL min-1. The column was then equilibrated with 100 % of solvent B for a total of 10 min at 1 mL min-1. The total run time for each sample was 40.3 min, including column rinsing and re-equilibration. Purified standards at seven increasing concentrations, ranging from 0.2 to 2 µmol L-1 for PC2-5 and from 1 to 10 µmol L-1 for Cys, GSH, and γ-GluCys, were used to plot the calibration curves. Thiol compound concentrations in the samples were determined using the calibration-curve equations. PC content was expressed in µg of PC per g of tissue wet weight. Each sample was spiked with 0.6 µmol L-1 of each standard PC2-5 to confirm PC identity.

Statistical analyses

Homogeneity of variances and normality of the data were not verified (Bartlett and Shapiro-Wilk tests), so statistical analysis was done using the non-parametric Kruskal-Wallis and Mann-Whitney two-sided tests (R Development Core Team, 2010, R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, URL http://www.Rproject.org). All data means were reported with standard deviations (SD). Differences were considered significant when p < 0.05.

RESULTS AND DISCUSSION

Table 1: Average retention times in minutes (n = 20) ± SD, limit of detection (LOD), and limit of quantification (LOQ) in pmol per 20 µL injected, for the cysteine-rich metal-binding peptide standards. Standard curves were run with 7 concentrations: 1 to 10 µmol L-1 for Cys, GSH, and γ-GluCys, and 0.2 to 2 µmol L-1 for PC2-5.
-: Not determined for the NAC used as internal standard.
r2: Pearson correlation coefficients of the standard curves.

In this study, with precolumn mBBr derivatization and reversed-phase HPLC analyses, excellent linearity was obtained for the calibration curves, as shown by the Pearson coefficients (table 1).
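The quantification described above amounts to fitting peak area against standard concentration and inverting the fitted line for each sample. A minimal sketch with hypothetical peak areas (real areas are instrument-specific):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical 7-point PC2 calibration over 0.2-2 µmol L-1 (areas in arbitrary units)
conc  = [0.2, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0]
areas = [21.0, 51.0, 81.0, 111.0, 141.0, 171.0, 201.0]

slope, intercept = fit_line(conc, areas)
# Invert the line for a sample with peak area 65 (arbitrary units)
c_sample = (65.0 - intercept) / slope
print(f"{c_sample:.2f} µmol L-1")
```

From the µmol L-1 value, the PC content in µg per g wet weight then follows from the extract volume and the tissue mass used for homogenization, as in the protocol above.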
The quantification limits per 20 µL sample injected into the HPLC column were 1.5 pmol for PC2-3, GSH, and γ-GluCys, 2.7 pmol for PC4, 5 pmol for PC5, and 1 pmol for Cys. In the samples, part of the mBBr underwent reductive dehalogenation with TCEP to give tetramethylbimane (Me4B) (Graham et al., 2003). PCn were detected in both the digestive gland and the gills, i.e. 2.2 ± 0.6 and 0.9 ± 0.2 µg g-1 wet weight, respectively. In both organs, the PCn levels decreased in the order PC2 > PC3 > PC4. The concentrations of PC2 and PC3 were two to three times higher in the digestive gland than in the gills, whereas the PC4 levels were nearly equivalent in both organs. The Wilcoxon-Mann-Whitney test showed no significant difference in PC2 content (fig. 4) or PC3-4 (data not shown) between fresh and frozen tissues. PC are important for detoxification of the non-essential metal Cd in plants and fungi.

The test media were renewed every day, and the bivalves were fed daily as previously described. For sampling, the 12 mussels used in each treatment were dissected. The gills and digestive glands of two mussels were pooled, giving 6 replicates per treatment for analysis.

Determination of PC and other cysteine-rich metal-binding peptides

PC were determined according to Minocha et al. (2008), with a slight modification in the protein-removal step and the HPLC mobile phase gradient profile (Santini et al., 2011b). Cysteine-rich peptides were determined in the cytosolic fractions of the deproteinized tissue homogenates after reduction.

Extraction: All dissection, extraction, and centrifugation steps were carried out at 4 °C. The gills and digestive gland were carefully dissected, the digestive content was removed, and the tissues were washed in Tris buffer (25 mmol L-1 Tris, 50 mmol L-1 NaCl, pH 8.0). The organs from two mussels were pooled in order to obtain sufficient mass for analysis. The samples were immediately analysed after sampling, or frozen in liquid N2 and stored at -80 °C until analysis.
For extraction, to avoid enzymatic degradation or PC oxidation, the tissues were homogenised in acid buffer (6.3 mmol L-1 DTPA, 0.1 % TFA) with a manual Potter-Elvehjem homogenizer with glass pestle. The homogenization was performed with 500 mg of tissue in 1 mL of buffer. The homogenate was centrifuged at 3500 g for 10 min (model 1-15PK, Sigma-Aldrich, St. Quentin Fallavier, France). The resulting supernatant (500 µL) was deproteinized by addition of 125 µL of 5 mol L-1 perchloric acid, and centrifuged again at 13000 g for 30 min. The supernatant (500 µL) was neutralised with 100 µL of 5 mol L-1 aqueous NaOH and used as tissue extract for PC analyses. Peak areas were integrated using dedicated software (UniPoint system, version 1.90, Gilson, Villier-le-Bel, France). The bimane derivatives were separated using a gradient of mobile phase A (99.9 vol-% ACN, 0.1 vol-% TFA) and B (89.9 vol-% water, 10 vol-% ACN, 0.1 vol-% TFA). The gradient profile started with a linear increase of mobile phase A from 0 to 10.6 vol-% over 11.2 min at 1 mL min-1; solvent A was then raised linearly from 10.6 to 28.6 vol-% in 13.6 min at 1 mL min-1. Before injecting a new sample, the column was rinsed with 100 % of solvent A for 5.5 min at a flow rate of 2.5 mL min-1. The column was equilibrated with 100 % of solvent B for a total of 10 min at 1 mL min-1. The total run time for each sample was 40.3 min, including column rinsing and re-equilibration. Purified standards at seven increasing concentrations, ranging from 0.2 to 2 µmol L-1 for PC2-5 and from 1 to 10 µmol L-1 for Cys, GSH, and γ-GluCys, were used to plot the calibration curves. Thiol compound concentrations in the samples were determined using the calibration-curve equations. PC content was expressed in µg of PC per g of tissue wet weight. Each sample was spiked with 0.6 µmol L-1 of each standard PC2-5 to confirm PC identity.
The quantification limits per 20 µL sample injected into the HPLC column were 1.5 pmol for PC2-3, GSH, and γ-GluCys, 2.7 pmol for PC4, 5 pmol for PC5, and 1 pmol for Cys.

Determination of metallothionein

The MT content was determined by the method of Romero-Ruiz et al. (2008). All steps were carried out at 4 °C. Organs from two mussels were pooled in order to obtain sufficient mass for analysis. The samples were immediately frozen in liquid N2 and stored at -80 °C until analysis. The gills and digestive gland were homogenised in buffer (0.1 mol L-1 Tris, 1 mmol L-1 DTT, 50 mmol L-1 PMSF, 6 mmol L-1 leupeptin, pH 9.5) at a ratio of 5 mL g-1 with a manual Potter-Elvehjem homogenizer with glass pestle. The homogenate was centrifuged at 3500 g for 10 min. The resulting supernatant was centrifuged again at 25000 g for 30 min, and the supernatant was used as extract for MT analyses. Tissue extract (125 µL) was mixed with 30 µL of 0.23 mol L-1 Tris buffer, pH 9.5, 10 µL of 0.3 mol L-1 DTT, 5 µL of 0.1 mol L-1 EDTA, pH 7, and 63 µL of a 12 % SDS aqueous solution. Reduction and denaturation were conducted in a water bath at 70 °C for 20 min. For derivatization, 16.7 µL of a 0.18 mol L-1 mBBr solution in ACN was added and the sample was incubated for 15 min at room temperature in the dark. The derivatized samples were chromatographically analyzed on a VWR reversed-phase column (LiChrocart LiChrospher RP C18, Strasbourg, France; 250 mm × 4 mm, 5 µm particle size), fitted to a C18 guard column (4 mm × 3 mm, 5 µm; Phenomenex SecurityGuard cartridge). The fluorescence detector parameters were the same as for the PC analyses. MT was separated using a gradient of mobile phase A (99.9 vol-% ACN, 0.1 vol-% TFA) and B (99.9 vol-% water, 0.1 vol-% TFA). After injection, mobile phase A was kept at 30 % for 10 min, then changed by a 1 min linear rise to 70 % of phase A. These conditions were maintained for 10 min before the initial conditions (30 % phase A) were re-established by a 1 min linear decrease.
Before injecting a new sample, the column was rinsed and equilibrated at the initial conditions for 8 min. The flow was kept at 1 mL min-1 throughout the analysis. The injection volume was 20 µL. A plot of peak area versus purified standard content was established using seven increasing concentrations, from 0.4 to 1 µmol L-1 rabbit liver MT-I. The equation obtained from this calibration line was used for MT quantification in the samples. The MT content was expressed in mg per g of protein. Protein was determined according to Bradford (1976) using bovine serum albumin as standard.

Statistical analyses

As homogeneity of variances and normality of the data were not verified (Bartlett and Shapiro-Wilk tests), statistical analysis was done using the non-parametric Kruskal-Wallis and Mann-Whitney two-sided tests (R Development Core Team, 2010, R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, URL http://www.Rproject.org). All data were reported as means ± standard deviations (SD). Differences were considered significant when p < 0.05.

RESULTS

A broad peak of tetramethylbimane, resulting from the reaction between mBBr and TCEP, eluted at approximately 10.5 min (fig. 1 A). At 17.96 min, an unidentified compound (peak "a") from the reagents eluted. In the standards profile (fig. 1 B), the mean phytochelatin elution times were: PC2 at 13.09 min, PC3 at 16.62 min, PC4 at 18.59 min, and PC5 at 19.75 min.

(Amiard et al., 2006). In the present work, MT identification and quantification were carried out with a purified rabbit liver MT-I standard. After HPLC separation, the MT in the mussel extract eluted at the same time as the rabbit liver standard, i.e. it has a molecular weight and cysteine content similar to those of mammalian MT-I (Alhama et al., 2006). This MT isoform from A. cygnea does not appear to be induced by copper. Cu2+ induction of MT in A.
anatina was shown using a spectrophotometric method which allowed total MT isoform quantification (Nugroho and Frank, 2012). Since the discovery of PC synthase genes able to give a functional PC synthase in invertebrates, the question of the involvement of PC in metal detoxification and homeostasis in animals has arisen. The present study shows a clear PC2 induction in freshwater mussels under Cu2+ exposure, indicating a key role of this cysteine-rich polypeptide as a metal chelator. PC2 increased in the gills from 12 h to 7 d, so it may serve as a first-line defense, together with other compounds such as GSH. The decline of PC2 to control values after 21 d suggests that at later times copper is handled by other detoxification mechanisms such as MT and insoluble granules (Viarengo and Nott, 1993). Sequestration of 65 % of the intracellular copper in granules was shown in the gills of Unionidae bivalves (Bonneris et al., 2005). In the digestive gland, PC2 increased significantly only up to 12 h of Cu2+ exposure. The waterborne route of Cu exposure could explain the higher sensitivity of PC2 induction in the gills, which may play a more important role than the digestive gland in the uptake of copper dissolved in the surrounding water. Moreover, PC induction depends on the metal species, Cu having been reported as a weak inducer compared to Cd (Zenk, 1996). Further studies with Cd would be interesting, in order to determine whether PC2-4 may be efficient in the detoxification of such a non-essential metal. Additional investigations are necessary to assess the relevance of PC as a potential biomarker of metal exposure in molluscs.

Annexe 2: Algae culture medium

Summary: Copper (Cu) is one of the metals contaminating European freshwater ecosystems. Filter-feeding bivalves have a high bioaccumulation potential for transition metals such as Cu. While Cu is an essential micronutrient for living organisms, it causes serious metabolic and physiological impairments when in excess.
The objectives of this thesis are to gain knowledge of the toxic effects and detoxification mechanisms of copper in Anodonta cygnea and Anodonta anatina, two mussel species widely distributed in continental waters. Because calcium (Ca) plays a fundamental role in shell formation and in numerous biological processes, Cu2+ effects on Ca transport across the cellular plasma membrane were studied first. In a second step, the investigations focused on Cu2+ detoxification mechanisms involving Cys-rich compounds, known to play a major role in the homeostasis of essential trace metals and in cellular metal detoxification. Under our experimental conditions, Cu2+ inhibition of Ca2+-ATPase activity was observed in the gills and the kidneys, and inhibition of Na+/K+-ATPase in the gills and the digestive gland, after 4 d of exposure. At day 7 of exposure to 0.35 µmol L-1 Cu2+, total recovery was observed in the kidneys and the gills for Ca2+-ATPase activity and in the digestive gland for Na+/K+-ATPase, but not at high doses. Inhibition of Ca and Na transport may entail disturbance of osmoregulation and lead to a continuous under-supply of Ca. The recovery of Na+/K+-ATPase and Ca2+-ATPase function suggests that metal detoxification is induced. Phytochelatins (PC) are Cys-rich oligopeptides synthesised from glutathione by phytochelatin synthase in plants and fungi. Phytochelatin synthase genes have recently been identified in invertebrates; this allows us to hypothesize a role of PC in metal detoxification in animals. In the second part of this work, PC and their precursors, as well as metallothionein, were analyzed in the gills and in the digestive gland of A. cygnea exposed to Cu2+. Our results showed for the first time the presence of PC2-4 in invertebrates. PC were detected in control mussels not exposed to metal, suggesting a role in essential metal homeostasis. Compared to the control, PC2 induction was observed during the first 12 h of Cu2+ exposure.
These results confirm the role of PC as a first-line detoxification mechanism in A. cygnea.

Key words: calcium homeostasis, copper, Anodonta freshwater bivalve, phytochelatins

Résumé : Le cuivre (Cu) est l'un des métaux contaminant les écosystèmes dulcicoles européens. Les bivalves filtreurs ont une grande capacité de bioaccumulation des métaux de transition tels que le Cu. Le Cu est un oligo-élément essentiel pour les organismes vivants, mais en excès il provoque de graves perturbations métaboliques et physiologiques. L'objectif de cette thèse est d'acquérir des connaissances sur les effets toxiques et les mécanismes de détoxification du cuivre chez Anodonta cygnea et Anodonta anatina, deux espèces de bivalves largement distribuées dans les eaux continentales. Parce que le calcium (Ca) joue un rôle fondamental dans la composition de la coquille et pour de nombreux processus biologiques, les effets du Cu2+ ont d'abord été étudiés sur le transport cellulaire du Ca au niveau de la membrane plasmique. Dans un deuxième temps, l'étude a été axée sur les mécanismes de détoxification du Cu2+ impliquant des composés riches en Cys, connus pour jouer un rôle majeur dans l'homéostasie des métaux traces essentiels et dans la détoxification des métaux dans les cellules. Dans nos conditions expérimentales, l'inhibition de la Ca2+-ATPase par le Cu2+ a été observée dans les branchies et les reins, et l'inhibition de la Na+/K+-ATPase dans les branchies et la glande digestive, après 4 jours d'exposition à des concentrations environnementales. Au-delà de 7 jours d'exposition à 0,35 µmol L-1 de Cu, une récupération totale de l'activité enzymatique a été observée dans les reins et les branchies pour la Ca2+-ATPase, et dans la glande digestive pour la Na+/K+-ATPase. À dose élevée, l'inhibition persiste. L'inhibition du transport du Ca et du Na peut entraîner des perturbations de l'osmorégulation et conduire à des carences en Ca.
La récupération de l'activité enzymatique de la Ca2+-ATPase et de la Na+/K+-ATPase suggère une induction de fonctions de détoxification des métaux. Les phytochélatines (PC) sont des oligopeptides riches en Cys synthétisés par la phytochélatine synthase à partir du glutathion, chez les plantes et les champignons. Des gènes codant pour des phytochélatine synthases fonctionnelles ont été identifiés chez des invertébrés. Dans la seconde partie de ce travail, les PC et leurs précurseurs, ainsi que les métallothionéines, ont été étudiés dans les branchies et la glande digestive d'A. cygnea exposé au Cu2+. Nos résultats ont montré pour la première fois la présence de PC2-4 chez les invertébrés. Les PC ont été détectées dans des moules témoins non exposées aux métaux, ce qui suggère une fonction dans l'homéostasie des métaux essentiels. Une induction de PC2 a été observée dès les 12 premières heures d'exposition au Cu2+, comparé aux bivalves témoins. Ces résultats confirment le rôle des PC en tant que mécanisme de première ligne de détoxification des métaux chez A. cygnea.

Mots clés : homéostasie du calcium, cuivre, bivalves dulcicoles Anodonta, phytochélatines

Fig. 1: Copper metabolism in mammals (Jondreville, « Le cuivre dans l'alimentation du porc : oligoélément essentiel, facteur de croissance et risque potentiel pour l'Homme et l'environnement »).
Fig. 2: A: Anodonta anatina, B: Anodonta cygnea (www.biopix.eu)
Fig. 3: Geographical distribution of Anodonta anatina, and distribution of Anodonta cygnea
Fig. 4: Biological cycle of Unionidae (http://biodidac.bio.uottawa.ca)
Fig. 5: Longitudinal section of an Unionidae (Mouthon, 1982), p.a.m.: posterior adductor muscle, a.a.m.: anterior adductor muscle
Fig. 6: GSH metabolism (Mendoza-Cózatl and Moreno-Sánchez, 2006), γ-ECS: glutamate-cysteine ligase, GS: glutathione synthase, GR: glutathione reductase, GPx: glutathione peroxidase, GST: glutathione transferase, Xe: xenobiotic, GSH: glutathione, GSSG: glutathione disulfide
Fig. 7: Redox cycle of MT
Fig. 1: Protocol of tissue fraction preparation for enzyme analysis of Ca2+-ATPase, Na+/K+-ATPase, H+-ATPase, and carbonic anhydrase (CA), in the mantle (M), digestive gland (DG), gills (G), and kidneys (K) of the freshwater mussel A. anatina.
Fig. 2: Protocol of tissue fraction preparation for thiol-rich compound analysis, i.e. of metallothionein (MT), phytochelatins (PC) and precursors, in the digestive gland (DG) and gills (G) of the freshwater mussel A. anatina.
Fig. 3: Phytochelatin and precursor biosynthesis in plants (Mendoza-Cózatl and Moreno-Sánchez, 2006), PCS: phytochelatin synthase, γ-ECS: glutamate-cysteine ligase, GS: glutathione synthase, GST: glutathione transferase, Xe: xenobiotic, GSH: glutathione, GSSG: glutathione disulfide, HMWC: high molecular weight complexes.
Article 1
Fig. 1: Schematic presentation of the Ca flow through the different compartments of a freshwater bivalve, and inhibition of PMCA by Cu in kidney and gills. MCE: mantle cavity epithelium, DG: digestive gland, PMCA: plasma membrane Ca2+-ATPase.
Fig. 2: PMCA activity in kidneys (K), gills (G), and digestive gland (DG) of A. anatina upon 0 (control), 4, 7, 15 days of exposure to 0.35 µmol L-1 (a) and 0.64 µmol L-1 Cu (b). Means of results (n = 3) ± SE are presented as ratios of enzymatic activity at the respective Cu concentration and time vs. control (mentioned on the graph). Notes: For K, the error bars reflect the variability of the analytical determination of the pooled tissue sample, for G and DG the biological variation between animals.
* = significantly lower than control, = significantly higher than control (U test, p < 0.05).
Article 2
Fig. 1: Na+/K+-ATPase activity determination in September in gills (G), the digestive gland (DG), and mantle (M) of A. anatina upon 0 (control), 4, 7, 15 days of exposure to 0.35 µmol L-1 Cu2+. Means of results (n = 3) ± SD are presented as enzymatic activity in µmol Pi/mg protein/min. * = significantly lower than control (Mann-Whitney two-sided test, α = 0.05).
Fig. 2: H+-ATPase activity determination in September in mantle (M) of A. anatina upon 0 (control), 4, 7, 15 days of exposure to 0.35 µmol L-1 Cu2+. Means of results (n = 3) ± SD are presented as enzymatic activity in µmol Pi/mg protein/min. * = significantly lower than control (Mann-Whitney two-sided test, α = 0.05).
Fig. 3: Na+/K+-ATPase basal activity in gills (G), the digestive gland (DG), and mantle (M) of A. anatina in July, September, January, March, and April 2007/2008. Means of results (n = 3) ± SD are presented as enzymatic activities in µmol Pi/mg protein/min. # = significantly higher than in January, March, and April (Mann-Whitney two-sided test, α = 0.05).
Fig. 4: H+-ATPase basal activity in mantle (M) of A. anatina in July, September, January, March, and April 2007/2008. Means of results (n = 3) ± SD are presented as enzymatic activity in µmol Pi/mg protein/min. # = significantly higher than in January, March, and April (Mann-Whitney two-sided test, α = 0.05).
Article 3
Fig. 1: Chromatograms of (A) reagents blank with homogenisation buffer, and (B) mix of the eight cysteine-rich peptide standards. The broad peak is tetramethylbimane (Me4B).
Fig. 2: Chromatograms of digestive gland samples obtained with the same extract: alone (A), and spiked with 0.6 µmol L-1 of each standard PC2-5 (B).
Fig. 3: Chromatograms of gills samples made with the same extract: alone (A), and spiked with 0.6 µmol L-1 of each standard PC2-5 (B).
Fig. 4: PC2 content in the digestive gland (DG) and the gills (G) of Anodonta cygnea, means (n = 6) ± SD. No significant difference was found between fresh (white bars) and frozen (grey bars) tissues (Wilcoxon, Mann-Whitney test).
Article 4
Fig. 1: Chromatograms of (A) reagent blank with homogenisation buffer, and (B) mix of the eight cysteine-rich peptide standards. The broad peak is tetramethylbimane (Me4B).
Fig. 2: Chromatograms of gills samples: mussels exposed for 4 days to 0.35 µmol L-1 Cu2+ (A), mussel controls 4 days (B), mussel controls 4 days spiked with 0.6 µmol L-1 of each standard PC2-5 (C), a.u.: area unit.
Fig. 3: PC2 content in the digestive gland (DG) and the gills (G) of A. cygnea exposed to 0.35 µmol L-1 of Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d. Means of results (n = 6) ± SD. * = significant difference between exposed group and its respective control (Kruskal-Wallis and Mann-Whitney two-sided tests, n = 6, α = 0.05).

Les propriétés chimiques qui font du cuivre un élément essentiel (respiration avec la cytochrome c oxydase, formation des tissus connectifs avec la lysyl oxydase) sont aussi à l'origine de sa toxicité lorsqu'il est en excès. Le cuivre est bioaccumulable et peut devenir une menace pour la biocénose. Des mécanismes de régulation de la concentration et de la détoxication du cuivre sont essentiels pour les organismes vivants. Les mollusques représentent une forte proportion des macroinvertébrés dans les écosystèmes aquatiques. Parmi ce groupe, les bivalves sont particulièrement intéressants. Du fait de leur importante activité de filtration, nécessaire pour satisfaire leur respiration et leur nutrition, les bivalves ont la capacité d'accumuler de nombreux contaminants. Dans l'écosystème, ils jouent un rôle important dans le transfert de matière de la colonne d'eau vers les sédiments.
Les excréments et pseudofèces de bivalves rendent le phytoplancton disponible aux détritivores, et peuvent modifier la qualité des sédiments par concentration des polluants. Des inhibitions des activités Ca2+-ATPase et Na+/K+-ATPase ont été observées chez A. anatina exposée au Cu2+. Une inhibition suivie d'une reprise totale de l'activité enzymatique, observée à la concentration d'exposition de 0,35 µmol L-1 de Cu2+ mais non à forte concentration (0,64 µmol L-1), indique un mécanisme de détoxication. Ces résultats suggèrent l'induction de mécanismes de détoxication métallique. La deuxième étape de cette étude a porté spécifiquement sur les composés riches en Cys chélateurs de métaux. Nous avons posé l'hypothèse que des phytochélatines pouvaient être présentes chez les bivalves et jouer un rôle dans la détoxification du cuivre. Les composés riches en Cys sont des polypeptides ou des protéines telles que les phytochélatines ou les métallothionéines, avec une teneur en Cys élevée. Ils jouent un rôle majeur comme chélateurs de métaux, dans l'homéostasie des métaux essentiels et pour la détoxication des métaux non essentiels. Les phytochélatines (PC) sont des polypeptides riches en thiols de formule générale (γ-Glu-Cys)n-Gly, synthétisés par la phytochélatine synthase (PCS) à partir du glutathion. Les PC complexent les ions métalliques, réduisant la concentration intracellulaire en ions métalliques libres chez les plantes, les champignons et les microalgues. Brièvement, des A. cygnea et A. anatina adultes (7,5 ± 0,5 cm de long) ont été placées en aquariums à 20 ± 0,5 °C, dans de l'eau d'étang artificiel, 1,5 L par moule, pH 7,25 ± 0,10 (en mmol L-1 : 0,40 Ca2+, 0,20 Mg2+, 0,70 Na+, 0,05 K+, 1,35 Cl-, 0,20 HCO3-). Les bivalves ont été nourris quotidiennement avec une culture de Chlorella kessleri en phase exponentielle de croissance, ajoutée à une densité finale d'algues de 2 × 10⁵ cellules/mL. Les animaux ont été acclimatés à ces conditions pendant deux semaines avant toute expérimentation.

Fig. 2 : Protocole d'analyse des composés riches en thiols. GD : glande digestive, B : branchies.

Dans les branchies, une augmentation significative de la PC2 a été observée à 48 h d'exposition au cuivre, avec 30 % d'augmentation à 7 jours d'exposition. Au-delà de 7 jours, la PC2 revient au même niveau que celui mesuré dans les témoins. Dans la glande digestive, aucune variation significative de la PC2 n'a été mise en évidence, excepté à 12 h d'exposition au Cu2+. Les concentrations de γ-GluCys sont significativement supérieures dans les branchies des moules exposées au cuivre durant 48 heures et 7 jours. La concentration de γ-GluCys augmente également dans la glande digestive des bivalves à 48 h et 4 j d'exposition au cuivre. Aucune variation significative du niveau de MT n'a été observée dans les branchies et la glande digestive des moules sur les 21 jours d'exposition au cuivre.

[…] disruption of calcium transport and biomineralization mechanisms by copper. Detoxification mechanisms by metal-binding thiol compounds are presented in the seventh and […] construction, and transport (LME, 2011). Primary production of copper increased from 9.6 million tons in 1980 to 16.9 million tons in 2006 (INERIS, 2010). In 2007, according to INERIS (2010), 35 % of the world copper consumption came from recycled copper. In Europe, the recycling rate of copper was 41 % in 2007, and in 2006 copper consumption was about 4.7 million tons (21 % of global demand). The demand for copper in Europe was estimated in 2007 at 3.85 million tons (European Copper Institute, 2009).

Fig. 3: Geographical distribution (enclosed by the red line) of Anodonta anatina, and distribution of Anodonta cygnea in European continental hydrosystems (Başçınar, "A preliminary study on reproduction and larval development of swan mussel (Anodonta cygnea (Linnaeus, 1758)) (Bivalvia: Unionidae), in Lake Çıldır (Kars, Turkey)"; www.discoverlife.org)

Fig. 4: Biological cycle of Unionidae (http://biodidac.bio.uottawa.ca)

Fig. 5: Longitudinal section of an Unionidae (Mouthon, 1982), p.a.m.: posterior adductor muscle, a.a.m.: anterior adductor muscle

[…] but especially of larvae and juveniles), little is available to define the main threats and causes of extinction.

Some ROS are free radicals presenting unpaired electrons (e.g. the hydroxyl radical HO•); others are non-radical species such as hydrogen peroxide H2O2. To reach a better level of stability, these radicals capture electrons from reductant molecules, ROS reduction causing oxidations in chains. All the biomolecules of the cell (nucleic acids, lipids, proteins, polysaccharides) are potential reductant substrates of ROS. The level of ROS instability characterises their diffusion capacity. A low-reactive form tends to act far from its site of production, as it has a significant diffusion radius. On the contrary, a very reactive species acts very quickly and its diffusion is limited. ROS include the superoxide anion radical (O2•-), the hydroperoxy (ROO•) and alkoxy (RO•) radicals, nitric oxide (NO•), and the hydroxyl radical (HO•). Molecular oxygen O2 can also be regarded as a radical species since it has two single electrons. The superoxide anion is produced during various reactions with transition metals, and enzymes are implied in its formation. H2O2 reacts weakly, diffuses freely, and has a long lifetime. The ROO• and RO• radicals arise from the peroxidation of lipids. These radicals allow the gradual propagation of lipid peroxidation. Free radicals are produced physiologically during normal cell metabolism. They can also be formed in response to a wide range of exogenous agents including radiation, metal ions, solvents, particulate matter, nitrogen oxides, and ozone. NO• plays at the same time a role in the destruction and the production of radicals. It is not very reactive with cellular components and reacts with radicals, generating less reactive species.
Combined with the superoxide anion radical, it may be involved in the formation of peroxynitrite, a highly toxic species. Due to its high reactivity, HO• is quite non-specific in its targets for oxidation, whereas ROS with lower rate constants react more specifically. ROS are at the origin of lipid peroxidation. Copper ions are involved in redox reactions which result in the production of ROS. In the Fenton reaction, cuprous ions react with H2O2, giving rise to the extremely reactive HO• (Labieniec et al., "Antioxidative and oxidative changes in the digestive gland cells of freshwater mussels Unio tumidus caused by selected phenolic compounds in the presence of H2O2 or Cu2+ ions"). In the case of organic hydroperoxides (ROOH), a homologous reaction is thought to occur, leading to the formation of ROO• and of the more reactive RO•. Copper ions may participate both in the initiation and the propagation steps of lipid peroxidation, thus stimulating the in vivo degradation of membrane lipids. Antioxidant defences comprise antioxidant enzymes and molecules without enzymatic activity. Antioxidant enzymes are superoxide dismutases (SODs) and catalase (CAT): SODs catalyze the dismutation of the superoxide anion radical under formation of molecular oxygen and H2O2, the latter being detoxified by CAT. SOD and CAT are localised in peroxisomes, and also in mitochondria and cytosol. Isoenzymes of SOD are found in various compartments of the cell.

Fig. 6: GSH metabolism (Mendoza-Cózatl and Moreno-Sánchez, 2006), γ-ECS: glutamate-cysteine ligase, GS: glutathione synthase, GR: glutathione reductase, GPx: glutathione peroxidase, GST: glutathione transferase, Xe: xenobiotic, GSH: glutathione, GSSG: glutathione disulfide

Three classes of MT were initially distinguished: Class I, including the MT which show biochemical homology and close elution time by chromatography with horse MT; Class II, including the rest of the MT with no homology with horse MT; and Class III, which includes phytochelatins, Cys-rich enzymatically synthesised peptides. A second classification was performed (Binz, "Metallothionein: molecular evolution and classification"; http://www.bioc.uzh.ch/mtpage/classif.html) which takes into account taxonomic parameters and the patterns of distribution of Cys residues along the MT sequence. Cysteine (Cys) residues are distributed in typical motifs consisting of Cys-Cys, Cys-X-Cys or Cys-X-X-Cys sequences (X denoting amino-acid residues other than Cys). It results in a classification of 15 families for proteinaceous MT. Mollusc MT belong to family 2, divided in two subfamilies.

Fig. 7: Redox cycle of MT (Kang, "Metallothionein redox cycle and function"), MT: metallothionein, ROS: reactive oxygen species, GSH: glutathione, GSSG: glutathione disulfide

Fig. 8: PC synthesis (Vatamaniuk, "Worms take the 'phyto' out of 'phytochelatins'")

[…] for Zn/Cu-thionein indicate that copper is present in the form of Cu(I). Unlike Zn, Cu is arranged differently in the MT clusters. Cu(I) atoms are bound by one bivalent connection, which allows 10 copper atoms to bind per MT protein. The relative affinities of each metal […]

Fig. 1: Protocol of tissue fraction preparation for enzyme analysis of Ca2+-ATPase, Na+/K+-ATPase, H+-ATPase, and carbonic anhydrase (CA), in the mantle (M), digestive gland (DG), gills (G), and kidneys (K) of the freshwater mussel A. anatina.

Fig. 2: Protocol of tissue fraction preparation for thiol-rich compound analysis, i.e. of metallothionein (MT), phytochelatins (PC) and precursors in the digestive gland (DG) and gills (G) of the freshwater mussel A. anatina.

Fig. 3: Phytochelatin and precursor biosynthesis in plants (Mendoza-Cózatl and Moreno-Sánchez, 2006), PCS: phytochelatin synthase, γ-ECS: glutamate-cysteine ligase, GS: glutathione synthase, GST: glutathione transferase, Xe: xenobiotic, GSH: glutathione, GSSG: glutathione disulfide, HMWC: high molecular weight complexes.

The artificial pond water contained (in mmol L-1): 0.40 Ca2+, 0.20 Mg2+, 0.70 Na+, 0.05 K+, 1.35 Cl-, 0.20 HCO3-.

Fig. 1: Na+/K+-ATPase activity determination in September in the gills (G), the digestive gland (DG), and the mantle (M) of A. anatina upon 0 (control), 4, 7, 15 days of exposure to 0.35 µmol L-1 Cu2+. Means of results (n = 3) ± SD are presented as enzymatic activity in µmol Pi/mg protein/min. * = significantly lower than control (Mann-Whitney two-sided test, α = 0.05).

Fig. 2: H+-ATPase activity determination in September in mantle (M) of A. anatina upon 0 (control), 4, 7, 15 days of exposure to 0.35 µmol L-1 Cu2+. Means of results (n = 3) ± SD are presented as enzymatic activity in µmol Pi/mg protein/min. * = significantly lower than control (Mann-Whitney two-sided test, α = 0.05).

Fig. 3: Na+/K+-ATPase basal activity in gills (G), the digestive gland (DG), and mantle (M) of A. anatina in July, September, January, March, and April 2007/2008. Means of results (n = 3) ± SD are presented as enzymatic activities in µmol Pi/mg protein/min. # = significantly higher than in January, March, and April (Mann-Whitney two-sided test, α = 0.05).

Fig. 4: H+-ATPase basal activity in mantle of A. anatina in July, September, January, March, and April 2007/2008. Means of results (n = 3) ± SD are presented as enzymatic activity in µmol Pi/mg protein/min. # = significantly higher than in January, March, and April (Mann-Whitney two-sided test, α = 0.05).
PCn are synthesised by phytochelatin synthase (PCS), which catalyzes the transpeptidation of the γ-Glu-Cys moiety of GSH onto a second GSH moiety or onto PCn ((γ-Glu-Cys)n-Gly, n = 2-11) to form PCn+1. PCn are rapidly induced in cells and tissues when exposed to a range of transition metal ions, including the cations Cd, […]. Derivatization was carried out by addition of 4 µL of a solution of 50 mmol L-1 mBBr in acetonitrile and incubation in a water-bath at 45°C for 30 min in the dark. The reaction was stopped by addition of 40 µL of an aqueous solution of MSA 1 mol L-1. The high-performance liquid chromatographic analysis was performed with an HPLC instrument (Gilson, Roissy, France) equipped with a dual solvent pump (model 322), an autosampler (model 234), a 100 µL injection loop, and a fluorescence detector (model 122).

Fig. 1: Chromatograms of (A) reagents blank with homogenisation buffer, and (B) mix of the eight cysteine-rich peptide standards. The broad peak is tetramethylbimane (Me4B). Peak "a" is an unidentified compound originating from the derivatization reaction with reagent. Standard concentration was 10 µmol L-1 for cysteine (Cys), glutathione (GSH), γ-glutamylcysteine (γ-GluCys), 5 µmol L-1 for the internal standard N-acetyl-cysteine (NAC), and 2 µmol L-1 for phytochelatins 2-5 (PC2-5).

Fig. 4: PC2 content in the digestive gland (DG) and the gills (G) of Anodonta cygnea, means (n = 6) ± SD. No significant difference was found between fresh (white bars) and frozen (grey bars) tissues (Wilcoxon, Mann-Whitney test).

Reduction and derivatization: Just after extraction, 244 µL HEPPS buffer (HEPPS 200 mmol L-1, DTPA 6.3 mmol L-1, pH 8.2) was mixed with 10 µL TCEP (TCEP 20 mmol L-1 in HEPPS buffer, pH 8.2) used as disulfide reductant, 4 µL of NAC 0.5 mmol L-1 as an internal standard, and 99 µL of tissue extract. Disulfide reduction was conducted in a water-bath at 45°C for 10 min.
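The PCS reaction described at the beginning of this section builds the (γ-Glu-Cys)n-Gly repeat structure by adding one γ-Glu-Cys unit per transpeptidation step. As an illustrative aside (the function names and string representation are ours, not from the thesis), this repeat logic can be sketched as:

```python
def pc_sequence(n: int) -> str:
    """Return the (gamma-Glu-Cys)n-Gly repeat structure of phytochelatin PCn."""
    if not 2 <= n <= 11:  # PCn is described for n = 2-11
        raise ValueError("PCn is described for n = 2-11")
    return "-".join(["gGlu-Cys"] * n) + "-Gly"

def elongate(pc_n: str) -> str:
    """Model one PCS transpeptidation step: PCn + GSH -> PCn+1 (one more gGlu-Cys unit)."""
    return "gGlu-Cys-" + pc_n

print(pc_sequence(2))                               # gGlu-Cys-gGlu-Cys-Gly
print(elongate(pc_sequence(2)) == pc_sequence(3))   # True
```

The sketch only mirrors the bookkeeping of chain elongation; the actual enzymatic mechanism (γ-Glu-Cys transfer from GSH) is as described in the text.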
Derivatization was carried out by addition of 4 µL of mBBr (50 mmol L-1 in acetonitrile) and incubation in a water-bath at 45°C for 30 min in the dark. The reaction was stopped by addition of 40 µL of an aqueous solution of MSA 1 mol L-1. Analyses were performed using an HPLC instrument (Gilson, Roissy, France) equipped with a dual solvent pump (model 322), an autosampler (model 234), a 100 µL injection loop, and a fluorescence detector (model 122). The fluorescence of mBBr-labeled molecules was monitored with an excitation wavelength of 382 nm and emission at 470 nm. The injection volume was 20 µL. The derivatized PC were separated on a reversed-phase column (Phenomenex Synergi Hydro RP C18), 100 mm × 4.6 mm, 4 µm particle size, fitted with a C18 guard column (Phenomenex SecurityGuard cartridge), 4 mm × 3 mm, 5 µm. The temperature of the column oven was 40°C. Peak areas were integrated using the instrument software.

Fig. 1: Chromatograms of (A) reagent blank with homogenisation buffer, and (B) mix of the eight cysteine-rich peptide standards. The broad peak is tetramethylbimane (Me4B). Peak "a" is an unidentified compound originating from the derivatization reaction with reagent. Standard concentration was 10 µmol L-1 for cysteine (Cys), glutathione (GSH), γ-glutamylcysteine (γ-GluCys), 5 µmol L-1 for the internal standard N-acetyl-cysteine (NAC), and 2 µmol L-1 for phytochelatins 2-5 (PC2-5).

Fig. 2: Chromatograms of gills samples: mussels exposed for 4 days to 0.35 µmol L-1 Cu2+ (A), mussel controls 4 days (B), mussel controls 4 days spiked with 0.6 µmol L-1 of each standard PC2-5 (C), a.u.: area unit.

Fig. 3: PC2 content in the digestive gland (DG) and the gills (G) of A. cygnea exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d. Means of results (n = 6) ± SD. * = significant difference between exposed group and its respective control (Kruskal-Wallis and Mann-Whitney two-sided tests, n = 6, α = 0.05).

Le milieu aquatique est le réservoir final pour la plupart des polluants, dont les métaux.
Liste des abréviations :
PC : phytochélatine(s)
PCS : phytochélatine synthase
Pi : phosphate inorganique
PMCA : Ca2+-ATPase de la membrane plasmique
R : reins
S : surnageant
SH : groupement thiol
TCEP : tris-(2-carboxyethyl)-phosphine hydrochloride

Introduction

L'activité humaine est associée au développement de l'industrie et de l'agriculture, qui sont devenues indispensables. Ces secteurs sont responsables de la production et de la diffusion de nombreux polluants. Les propriétés chimiques et physiques et les différents types de transport déterminent la diffusion des polluants dans tous les compartiments des écosystèmes.

Le cuivre appartient aux métaux les plus couramment utilisés du fait de ses propriétés physiques et chimiques (particulièrement pour ses qualités de conductivité électrique et thermique). Comme métal pur, en alliage, ou à l'état ionique, il est utilisé dans un grand nombre de secteurs industriels et agricoles. De ce fait, le cuivre est un métal fréquemment détecté dans les milieux aquatiques continentaux, où il est présent dans la colonne d'eau et s'accumule dans les sédiments (INERIS, 2010). Les propriétés chimiques du cuivre en font un élément surtout utilisé en tant que catalyseur biologique des réactions enzymatiques, et un élément essentiel à de nombreux processus biologiques impliquant des fonctions vitales comme la respiration ou la photosynthèse.

Ces deux invertébrés sont autochtones des systèmes hydrologiques européens. Au cours des dernières décennies, une régression des populations d'Unionidae a été observée en Europe. L'objectif de cette thèse a été d'acquérir des connaissances sur les mécanismes de perturbation par le cuivre du métabolisme du calcium chez Anodonta anatina. Le calcium est un élément essentiel dans le fonctionnement des organismes eucaryotes. Il contrôle plusieurs processus vitaux (Ermak et Davies, 2001).
L'absorption, le maintien de la concentration intracellulaire de calcium dans l'organisme, et les processus de biominéralisation sont rendus possibles par le contrôle de son passage à travers les membranes cellulaires. Ce passage se fait par simple diffusion, mais aussi par des protéines de transport. Le phénomène de biominéralisation nécessite, en plus du calcium, des ions carbonate produits en partie par l'anhydrase carbonique (AC). Nous avons étudié les effets du Cu2+ sur le transport du calcium chez Anodonta anatina par l'évaluation des activités enzymatiques de la Ca2+-ATPase, de la Na+/K+-ATPase et de la H+-ATPase de la membrane plasmique, et de l'activité enzymatique cytosolique de l'AC, enzymes impliquées dans l'absorption du calcium et dans les processus de biominéralisation. Les organes étudiés ont été les branchies, la glande digestive, le rein et le manteau, qui jouent un rôle important dans l'absorption du calcium et la synthèse de la coquille. Les invertébrés appartenant aux Unionidae, connus pour leur facilité à bioaccumuler les métaux, pourraient être susceptibles de synthétiser des PC : c'est l'hypothèse que nous émettons dans la deuxième partie de la thèse. Leur présence est connue chez les végétaux, non chez les organismes animaux. L'induction des PC et de leurs précurseurs a été évaluée dans les branchies et la glande digestive chez Anodonta cygnea exposée au cuivre.

Analyses statistiques

Comme l'homogénéité des variances et la normalité des données ne se sont pas révélées vérifiées (tests de Bartlett et de Shapiro-Wilk), l'analyse statistique a été réalisée par les tests non paramétriques de Kruskal-Wallis et de Mann-Whitney. Les différences ont été considérées comme significatives lorsque p < 0,05.

Analyses

Des expériences préliminaires ont été effectuées avec trois bivalves exposés à chacune des concentrations de Cu2+ : 0,26, 0,54 et 1,15 µmol L-1 pendant quatre jours, pour trouver les concentrations les plus pertinentes.
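Le test de Mann-Whitney évoqué ci-dessus se réduit au calcul d'une statistique U. Esquisse minimale en Python pur, sans bibliothèque statistique externe (nom de fonction illustratif) :

```python
def mann_whitney_u(a, b):
    """Two-sample Mann-Whitney U statistic (the smaller of U_a and U_b).

    U_a counts, over all pairs, how often a value of `a` exceeds one of `b`
    (ties count 0.5). The reported U is then compared with a critical value
    for the chosen alpha (e.g. 0.05) to decide significance.
    """
    u_a = 0.0
    for x in a:
        for y in b:
            if x > y:
                u_a += 1.0
            elif x == y:
                u_a += 0.5
    return min(u_a, len(a) * len(b) - u_a)

# Complete separation of two n = 3 groups gives U = 0,
# the most extreme value possible for these sample sizes.
print(mann_whitney_u([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 0.0
```

En pratique, on utiliserait une bibliothèque éprouvée (p. ex. scipy.stats.mannwhitneyu) ; l'esquisse sert uniquement à expliciter la statistique.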
Pour l'étude des effets du cuivre sur ces activités enzymatiques, l'exposition a été effectuée à 0,35 et 0,64 µmol L-1 de Cu2+ sur une durée de 15 jours. Trois moules ont été échantillonnées et disséquées pour chaque traitement : jour 0 (témoin), 4 j, 7 j et 15 j. Les organes de chaque individu ont été préparés et étudiés séparément, ce qui donne 3 réplicats par traitement. Les effets du cuivre sur les mécanismes de détoxification des métaux ont été évalués chez A. cygnea sur les paramètres suivants : PC et leurs précurseurs, et MT. Les bivalves ont été exposés à 0,35 µmol L-1 de Cu2+ durant 0 h, 12 h, 48 h, 4 j, 7 j et 21 j ; les témoins ont été maintenus en parallèle, sur la même durée, dans l'eau artificielle. Les milieux tests ont été renouvelés chaque jour, et les bivalves ont été nourris quotidiennement, comme décrit précédemment. Douze moules ont été échantillonnées par traitement avant d'être disséquées. Les branchies et la glande digestive de deux moules ont été regroupées, donnant 6 réplicats par traitement pour l'analyse.

Fig. 1 : Protocole d'analyse enzymatique. M : manteau, GD : glande digestive, B : branchies, R : reins, S : surnageant, C : culot.

In molluscs, intracellular granules composed of calcium/magnesium orthophosphate (Ca3(PO4)2, Mg3(PO4)2) and pyrophosphate (Ca2P2O7, Mg2P2O7) can contain Mn, Zn, Cu, Fe, Co, Cd, and […]. This organelle accumulates high concentrations of trace metals in non-toxic granule forms and thus represents an important detoxification pathway. The ferritin-rich and copper-sulphur granules are related to Fe and Cu metabolism in the respiratory pigment and also to copper detoxification. Lipofuscins are mainly lipid peroxidation end-products which are accumulated in the lysosomes as insoluble lipoprotein granules. Metals such as Cu, Cd, and Zn are trapped by the lipofuscin and sterically prevented from moving in or out of the granule.

Table 1: Basal enzymatic activities of plasma membrane ATPase(s) (µmol Pi/mg protein/min) and cytoplasmic CA (U/mg protein) in Anodonta anatina.

Enzyme           G       DG      M       K       Reference
Ca2+-ATPase      0.45    0.095   -       0.087   Article 1
Na+/K+-ATPase    0.098   0.015   0.042   -       Article 2
H+-ATPase        -       -       0.002   -       Article 2
CA               2.4     1.76    -       -       Article 1

G: gills; DG: digestive gland; M: mantle; K: kidneys; CA: carbonic anhydrase; -: not determined

Table 1: Plasma membrane Na+/K+-ATPase and H+-ATPase activities (µmol Pi/mg protein/min) in different tissues of freshwater and marine organisms.

Species                       Na+/K+-ATPase   H+-ATPase   Tissues   Reference
Freshwater bivalve
  Anodonta anatina            0.098           -           G         The present study
                              0.015           -           DG
                              0.042           0.002       M
  Anodonta cygnea             0.109           -           G         Lagerspetz and Senius, 1979
  Asellus aquaticus           0.018           -           G         Bouskill et al., 2006
  Dreissena polymorpha        0.006           -           G
Marine bivalve
  Mytilus galloprovincialis   0.033           -           G         Viarengo et al., 1996
Freshwater crustacean
  Dilocarcinus pagei          0.023           -           G         Firmino et al., 2011
Brackish-water crustacean
  Acartia tonsa               0.014           -           total     Pedroso et al., 2007
Freshwater fish
  Oncorhynchus mykiss         0.025           -           G         Lin and Randall, 1993
  Trichogaster microlepis     0.015           0.003       G         Huang et al., 2010
  Bidyanus bidyanus           0.124           0.041       G         Alam and Frankel, 2006
  Macquaria ambigua           0.052           0.032       G
  Perca flavescens            0.064           -           G         Packer and Garvin, 1998

Plasma membrane ATPase activity in µmol Pi/mg protein/min; G: gills, DG: digestive gland, M: mantle.
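The activities in the tables above are specific activities: inorganic phosphate released per mg protein per minute. A minimal sketch of that normalisation (function and variable names are illustrative, not from the articles):

```python
def specific_activity(pi_umol: float, protein_mg: float, minutes: float) -> float:
    """ATPase specific activity in µmol Pi / mg protein / min."""
    if protein_mg <= 0 or minutes <= 0:
        raise ValueError("protein mass and incubation time must be positive")
    return pi_umol / (protein_mg * minutes)

# Illustrative numbers: 2.94 µmol Pi released by 1.0 mg protein in 30 min
# corresponds to 0.098 µmol Pi/mg protein/min, the gill Na+/K+-ATPase level.
print(round(specific_activity(2.94, 1.0, 30.0), 3))  # 0.098
```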
Component   Retention time (min)   LOD (pmol/20 µL)   LOQ (pmol/20 µL)   r²
Cys         3.14 ± 0.08            0.21               1.09               0.999
GSH         5.99 ± 0.12            0.24               1.40               0.999
γ-GluCys    6.43 ± 0.13            0.37               1.46               0.999
NAC         9.16 ± 0.23            -                  -                  -
PC2         13.09 ± 0.33           0.59               1.39               0.999
PC3         16.62 ± 0.28           0.79               1.52               0.998
PC4         18.59 ± 0.25           1.93               2.70               0.991
PC5         19.75 ± 0.18           4.71               5.09               0.961

Table 2: PC content [µg PC/g tissue wet weight] in the digestive gland and the gills of Anodonta cygnea, means (n = 6) ± SD.

                  PC2           PC3           PC4           PC5
Digestive gland   2.17 ± 0.59   1.10 ± 0.12   0.47 ± 0.52   < LOD
Gills             0.88 ± 0.15   0.72 ± 0.57   0.40 ± 0.44   < LOD

< LOD: below limit of detection

Table 2 shows the PCn content in the gills and the digestive gland in µg per g tissue wet weight. The concentrations in the digestive gland and the gills were the highest for PC2. Clemens and Peršoh (2009) and Gonzalez-Mendoza et al. (2007) support the hypothesis that PC have dual functions: metal detoxification and essential metal homeostasis (Zn, Cu). In our work, before analyses the mussels were kept for at least 2 weeks in artificial pond water (see animal maintenance) without exposure to non-essential metals like Cd or the like, but PC2-4 could be detected anyway. The presence of PC2-4 without the animals being exposed to Cd or other transition metals suggests that PC could have functions comparable to other cysteine-rich compounds, i.e. for homeostasis of essential trace metals or as reducing agents. The evidence of homologous genes for functional PCS in animals suggests that PCn play a wider role in heavy-metal detoxification than previously thought. The results obtained in this study highlight, for the first time, the ability of the freshwater bivalve Anodonta cygnea to synthesize PC2-4.

INTRODUCTION

Copper is extensively used for various technical applications, especially as conducting metal for electro-technical equipment and electrical power lines, as catalyst in the chemical industry, and in the building industry for water pipes and roofings; a small amount goes into the use as fungicide, by which way it is directly emitted into the environment. Annual copper consumption in Europe alone is 3.5 × 10⁶ tons, worldwide about 17 × 10⁶ tons (INERIS, 2010). In the present work, 0.35 µmol L-1 Cu2+ (controlled by graphite furnace atomic absorption spectrometry, GFAAS; limit of detection for Cu = 0.5 µg L-1 = 8 nmol L-1) was chosen as concentration for the exposure test. The experiments were performed in aquaria lined with dye- and pigment-free high-density polyethylene foil, filled with 1.5 L of experimental medium (pH 7.25) per mussel. The mussels were kept at 20°C in a thermo-regulated room with a photoperiod of 16 h light and 8 h darkness. The animals were divided in groups of 12 mussels. These groups were exposed to 0.35 µmol L-1 Cu2+ for 0 h, 12 h, 48 h, 4 d, 7 d, and 21 d, or kept for the same duration in artificial pond water as corresponding time controls. Test media […]

Acknowledgements

I am very grateful to Professor Konrad Dettner, Professor Britta Planer-Friedrich, and Professor Jouni Taskinen for participating in the jury of the defense of this Ph.D. and for accepting to review this Ph.D. thesis. I thank Professor Pascale Bauda, Professor Laure Giambérini, and Professor Jean François Ferard, the present and past directors of the laboratory, for welcoming me into the Laboratory Interactions Ecotoxicology Biodiversity Ecosystems (LIEBE). I wish to express my sincere gratitude to my two supervisors, Professor Hartmut Frank and Professor Paule Vasseur, who welcomed me in their teams. I am indebted to my supervisors for their guidance, and for all of their valuable advice and remarks throughout this Ph.D. thesis. I thank Doctor Naima Chahbane, Lecturer-HDR Carole Cossu-Leguille, and Doctor Silke Gerstmann for their interest in this research and their help. My warm thanks to Mrs Agnes Bednorz for her kindness and her help.
I am indebted to Lecturer Sylvie Cotelle for allowing me to use the HPLC equipment. I acknowledge Mr Philippe Rousselle for copper determinations by GFAAS and FAAS. Many thanks to Mrs Catherine Drui, Mrs Maryline Goergen and Mrs Irmgard Lauterbach for their help in administrative issues. I am pleased to acknowledge all the colleagues for their kindness and their cheerfulness: […]

ACKNOWLEDGEMENTS
This research was supported by the Universities of Bayreuth (Germany) and Lorraine (France) and the CPER (Contrat de Projet Etat Région) in Lorraine. The authors thank Dr. Silke Gerstmann for helpful discussion. Financial support of the Oberfranken-Stiftung and by Dr. Robert Klupp of the fisheries department of the Regional Government of Upper Franconia is appreciated.

ACKNOWLEDGEMENTS
This research was supported by the Universities of Lorraine (France) and Bayreuth (Germany), the PRST Region Lorraine, and the French Ministry of Research.

ACKNOWLEDGEMENTS
This research was supported by the Universities of Lorraine (France) and Bayreuth (Germany), PRST Region Lorraine in France, and the French Ministry of Research.

List of abbreviations
a.a.m.: anterior adductor muscle
ACN: acetonitrile

Copper and enzymatic perturbation

The enzymatic activity of the plasma membrane Ca2+-ATPase was significantly inhibited in the kidneys of A. anatina upon 4 days of exposure at all concentrations of Cu2+ tested in the range of 0.26 to 1.15 µmol L-1 (article 1). In the kidneys and the gills, a significant inhibition of Ca2+-ATPase activity was observed upon 4 d of exposure at 0.35 µmol L-1, followed by a recovery at 7 d of exposure. Significant Ca2+-ATPase activity inhibition with no recovery was observed upon 15 d in the kidneys at the higher concentration of 0.64 µmol L-1 of Cu2+. No significant effect was noted on CA activity in gills and digestive gland of A. anatina exposed to Cu2+ (article 1).
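For context on the exposure levels used throughout (0.35 and 0.64 µmol L-1 Cu2+, GFAAS limit of detection 0.5 µg L-1 ≈ 8 nmol L-1), mass and molar concentrations are interconverted via the molar mass of copper (≈ 63.55 g/mol). A small sketch with illustrative function names:

```python
CU_MOLAR_MASS = 63.55  # g/mol

def ug_l_to_nmol_l(ug_per_l: float, molar_mass: float = CU_MOLAR_MASS) -> float:
    """Convert a mass concentration (µg/L) to a molar concentration (nmol/L)."""
    return ug_per_l / molar_mass * 1000.0

def umol_l_to_ug_l(umol_per_l: float, molar_mass: float = CU_MOLAR_MASS) -> float:
    """Convert a molar concentration (µmol/L) to a mass concentration (µg/L)."""
    return umol_per_l * molar_mass

# GFAAS detection limit: 0.5 µg/L corresponds to ~8 nmol/L, as stated in the text.
print(round(ug_l_to_nmol_l(0.5)))        # 8
# Exposure level: 0.35 µmol/L Cu2+ is about 22.2 µg/L.
print(round(umol_l_to_ug_l(0.35), 1))    # 22.2
```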
Compared to control mussels, the Na + /K + -ATPase activity was significantly inhibited in the gills (72 % inhibition) and the digestive gland (80 % inhibition) of A. anatina upon 4 days of exposure at 0.35 µmol L -1 of Cu 2+ (article 2). The Na + /K + -ATPase activity was inhibited by 26 % in the mantle of mussels exposed for 4 d to the same conditions, but not significantly relative to controls. No recovery was observed over the 15 days of exposure in the gills, which still showed 54 % inhibition at the end of the test, and a partial recovery was observed at day 7 in the digestive gland. In the mantle, the H + -ATPase activity declined continuously, but the decline was not statistically significant due to high variability between mussels (article 2). Calcium plays a fundamental role in numerous biological processes (energy production, cellular metabolism, muscle contraction, reproduction) and has important mechanical functions (shell, skeleton) in many organisms (Mooren & Kinne, 1998). Contrary to molluscs from marine ecosystems, which are generally hyposmotic and for which calcium uptake is easy, freshwater bivalves are hyperosmotic and require tight regulation of their calcium metabolism. Despite some recovery beyond 4 days in the digestive gland, the Na + /K + -ATPase activity in the gills of A. anatina was still significantly inhibited upon 15 days of experiment; an inhibition of Ca 2+ -ATPase activity was also observed in the gills and the kidneys under the same conditions (articles 1 and 2). Inhibition of these enzymes was consistent with the results obtained with Mytilus galloprovincialis exposed to copper by Viarengo et al. (1996) and Burlendo et al. (2004). Inhibition of Na + /K + -ATPase modified the cellular Na + gradient, which could result in a reduced activity of the Ca 2+ /Na + antiporter; this, in addition to direct inhibition of Ca 2+ -ATPase, may lead to a disturbance of calcium homeostasis.
Such a disturbance of ion regulation could lead to a continuous under-supply of calcium which may also affect Ca 2+ VWR). Water was purified (18.2 MΩ cm) by a Milli-Q system (Millipore, France).

Animal maintenance

Mussel maintenance is described in detail in a previous article (Santini et al., 2010a). Adult mussels A. cygnea with shell lengths of 7.5 ± 0.5 cm were provided by a commercial supplier (Amazon fish, Pfaffenhoffen, France). The mussels were kept in 10 L aquaria under a photoperiod of 16 h illumination and 8 h darkness. The bottom of the tank was covered with a layer of glass beads (10 mm diameter) so mussels could find conditions for burying. The artificial pond water (in mmol L -1 : 0.40 Ca 2+ , 0.20 Mg 2+ , 0.70 Na + , 0.05 K + , 1.35 Cl - , 0.20 SO 4 2- , 0.20 HCO 3 - ) was renewed every day. Bivalves were fed daily with the unicellular alga Chlorella kessleri from a culture in the exponential growth phase, added to a final algal density of 2×10 5 cells mL -1 . Air was bubbled continuously to ensure aeration and water column homogeneity. Animals were acclimatized to these conditions for 2 weeks before any experiment.

Determination of PC n and other cysteine-rich metal-binding peptides

PC n s were determined according to Minocha et al. (2008) with a slight modification in the protein removal step and in the gradient profile of the mobile phase in HPLC. Cysteine-rich peptide levels were studied in the cytosolic fractions of deproteinized tissue homogenates after reduction. Extraction: All dissection, extraction, and centrifugation steps were carried out at 4 °C. The gills and the digestive gland were carefully dissected, the digestive content was removed, and the organs were washed in Tris buffer (Tris 25 mmol L -1 , NaCl 50 mmol L -1 , pH 8.0). Organs from two mussels were 2010). Another possible source is fly ash from coal combustion, containing up to 20 g Cu per ton of coal.
Copper is easily found in aquatic ecosystems (Waeles et al.), since they are the ultimate sink for numerous contaminants. Copper is an essential trace element for the function of many cellular enzymes and proteins. However, copper becomes toxic when excessive intracellular accumulation occurs (Viarengo and Nott, 1993). Copper toxicity results both from non-specific metal binding to proteins and from its involvement in Fenton reactions leading to the formation of reactive oxygen species (ROS) and oxidative stress. Through the intense filtering activity by which they satisfy their respiratory and nutritional needs, bivalves have the capacity to accumulate a variety of environmental contaminants. The freshwater bivalve Anodonta cygnea belongs to the Unionidae family and is a species well distributed in continental waters. Unionidae are widely recognised for metal bioaccumulation, including copper (Cossu et al., 1997). The level up to which mussels can tolerate transition metals depends on their ability to regulate the metal cation concentration in cells. The proteins of the metallothionein (MT) group and the polypeptides glutathione (GSH) and phytochelatins (PC) are protective compounds rich in the amino acid cysteine, which contains a thiol group (SH). Cu, like other transition metals, has a high affinity for SH groups, making cysteine-rich peptides the principal biological reagents for transition metal sequestration (Viarengo and Nott, 1993; Clemens, 2006). PC bind transition metals with high thiol complexation constants, reducing the intracellular concentration of free ions of such metals in plants, fungi and microalgae. The general structure of PC is (γ-Glu-Cys) n -Gly (n = 2 to 11), synthesized by the constitutive enzyme phytochelatin synthase, which is activated by the presence of metal ions, with GSH as substrate (Grill et al., 1985).
A PC synthase homologous sequence has been found in the genomes of the invertebrates Caenorhabditis elegans (Clemens et al., 2001; Vatamaniuk et al., 2001), Eisenia fetida (Brulle et al., 2008) and Chironomus (Cobbett, 2000), and more generally throughout the invertebrates (Clemens and Peršoh, 2009). In our previous article (Santini et al., 2011b)

Animal maintenance and copper exposure

Mussel maintenance was described in detail in our previous article (Santini et al., 2011a). Briefly, adult individuals of A. cygnea with shell lengths of 7.5±0.5 cm were kept in aquaria at a temperature of 20 ± 0.5 °C. The bottom of the tank was covered with a layer of glass beads so the mussels could find conditions for burying. Artificial pond water, 1.5 L/mussel, pH 7.25 ± 0.10 (in mmol L -1 : 0.40 Ca 2+ , 0.20 Mg 2+ , 0.70 Na + , 0.05 K + , 1.35 Cl - , 0.20 SO 4 2- , 0.20 HCO 3 - ), was renewed every day. The bivalves were fed daily with Chlorella kessleri from a culture in the exponential growth phase, added to a final algal density of 2×10 5 cells mL -1 . The animals were acclimatized to these conditions for two weeks before any experiment. Previous results (Santini et al., 2011a) showed inhibition of enzymes involved in osmoregulation in A. cygnea exposed to 0.35 µmol L -1 of Cu 2+ . A total recovery of Ca 2+ -ATPase upon 7 d followed, indicating the induction of detoxication mechanisms. In the organs, PC levels varied as follows: PC 2 > PC 3 > PC 4 ; no significant induction of PC 3-4 was observed. The quantities of PC 2-4 found in the digestive gland were higher than in the gills (data not shown). In the gills of mussels exposed to Cu 2+ for 48 h and 7 d, γ-GluCys levels (Table 1) were significantly higher than the respective controls. γ-GluCys increased as well in the digestive gland of bivalves at 48 h and 4 d of exposure. Table 1: No induction of MT was observed upon Cu 2+ exposure for 21 d, a result similar to that found in A. cygnea (Amiard et al., 2006).
MT polymorphism appears to be particularly important in invertebrates compared to mammals. Different isoforms of MT play

Controlled by inductively coupled plasma mass spectrometry (ICPMS) (detection limit: Cu = 0.5 ng L -1 = 0.008 nmol L -1 ), means (n = 3) ± standard deviations (SD)

Controlled by graphite furnace atomic absorption spectrometry (GFAAS) (detection limit: Cu = 0.5 µg L -1 = 8 nmol L -1 ), means (n = 3) ± standard deviations (SD)

Annexe 4: Pictures of mussel maintenance and exposure

Anodonta cygnea in acclimatization aquarium

Anodonta cygnea copper exposure in aquaria lined with polyethylene foil, in a thermo-regulated room
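The unit conversions in the detection-limit notes above can be checked with a short mass-to-molar calculation. This is a sketch: the molar mass of Cu is about 63.55 g mol -1, so the stated 0.008 nmol L -1 for ICP-MS corresponds to a mass limit of 0.5 ng L -1 (not µg L -1), while 0.5 µg L -1 for GFAAS indeed rounds to 8 nmol L -1.

```python
M_CU = 63.55  # molar mass of copper, g/mol

def ug_per_l_to_nmol_per_l(c_ug_per_l, molar_mass_g_per_mol):
    """Convert a mass concentration in ug/L to a molar concentration in nmol/L."""
    # ug/L divided by g/mol gives umol/L; multiply by 1000 for nmol/L
    return c_ug_per_l / molar_mass_g_per_mol * 1000.0

gfaas_limit = ug_per_l_to_nmol_per_l(0.5, M_CU)     # 0.5 ug/L -> ~7.9, i.e. ~8 nmol/L
icpms_limit = ug_per_l_to_nmol_per_l(0.5e-3, M_CU)  # 0.5 ng/L -> ~0.008 nmol/L
```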
Alex Coad email: [email protected] Rekha Rao Hans Gersbach Karin Hoisl Camilla Lenzi Pierre Mohnen Christian Seiser

The employment effects of innovation *

JEL codes: L25, O33, J01. Keywords: Technological Unemployment, Innovation, Firm Growth, Weighted Least Squares, Aggregation, Quantile Regression

The issue of technological unemployment receives perennial popular attention. Although there are previous empirical investigations that have focused on the relationship between innovation and employment, the originality of our approach lies in our choice of method. We focus on four 2-digit manufacturing industries that are known for their high patenting activity. We then use Principal Components Analysis to generate a firm- and year-specific 'innovativeness' index by extracting the common variance in a firm's patenting and R&D expenditure histories. To begin with, we explore the heterogeneity of firms by using semi-parametric quantile regression. Whilst some firms may reduce employment levels after innovating, others increase employment. We then move on to a weighted least squares (WLS) analysis, which explicitly takes into account the different job-creating potential of firms of different sizes. As a result, we focus on the effect of innovation on the total number of jobs, whereas previous studies have focused on the effect of innovation on firm behavior. Indeed, previous studies have typically taken the firm as the unit of analysis, implicitly weighting each firm equally according to the principle of 'one firm equals one observation'. Our results suggest that firm-level innovative activity leads to employment creation that may have been underestimated in previous studies.

Firm innovation and employment growth. Summary: We present an analysis of the relationship between innovation and employment whose originality lies in the statistical methodology. We focus on four manufacturing industries with a strong propensity to patent. We then use principal components analysis to construct an innovativeness index. First, we explore the heterogeneity of firms by applying quantile regression. While innovation may lead to job losses in some firms, in others it gives rise to an increase in employment. We then carry out a weighted least squares (WLS) analysis that takes into account the different job-creating capacities of firms of various sizes. We thus focus squarely on the effect of innovation on the total number of jobs, whereas

Introduction

Whilst firm-level innovation can be expected to have a positive influence on the growth of a firm's sales, the overall effect on employment growth is a priori ambiguous. Innovation is often associated with increases in productivity that lower the amount of labour required for the production of goods and services. In this way, an innovating firm may change the composition of its productive resources, in favour of machines and at the expense of employment. As a result, the general public has often expressed concern that technological progress may bring about the 'end of work' by replacing men with machines. Economists, on the other hand, are usually more optimistic. To begin with, theoretical discussions have found it useful to decompose innovation into product and process innovation. Product innovations are often associated with employment gains, because the new products create new demand (although it is possible that they might replace existing products). Process innovations, on the other hand, often increase productivity by reducing the labour requirement in manufacturing processes, e.g. via the introduction of robots (Fleck, 'The Adoption of Robots in Industry').
Thus, process innovations are often suspected of bringing about 'technological unemployment'. The issue becomes even more complicated, however, when we consider that there are not only direct effects of innovation on employment, but also a great many indirect effects operating through various 'substitution channels'. For example, the introduction of a labour-saving production process may lead to an immediate and localized reduction in employees inside the plant (the 'direct effect'), but it may lead to positive employment changes elsewhere in the economy via an increased demand for new machines, a decrease in prices, an increase in incomes, an increase in new investments, or a decrease in wages (for an introduction to the various 'substitution channels', see Spiezia and Vivarelli, 2000). As a result, the overall effect of innovation on employment needs to be investigated empirically. Although Van Reenen recently lamented the "dearth of microeconometric studies on the effect of innovation on employment" (Van Reenen, 1997: 256), the situation has improved over the last decade. Research into technological unemployment has been undertaken in different ways and at various levels of aggregation. The results emerging from different studies are far from harmonious though -"[e]mpirical work on the effect of innovations on employment growth yields very mixed results" (Niefert 2005: 9). Doms et al. (1995) analyse survey data on US manufacturing establishments, and observe that the use of advanced manufacturing technology (which would correspond to process innovation) has a positive effect on employment. At the firm level of analysis, Hall (1987) observes that employment growth is related positively and significantly to R&D intensity in the case of large US manufacturing firms.
Similarly, Greenhalgh et al. (2001) observe that R&D intensity and also the number of patent publications have a positive effect on employment for British firms. Nevertheless, Evangelista and Savona (2003) observe a negative overall effect of innovation on employment in the Italian services sector. When the distinction is made between product and process innovation, the former is usually linked to employment creation whereas the consequences of the latter are not as clear-cut. Evidence presented in Brouwer et al. (1993) reveals a small positive employment effect of product-related R&D, although the combined effect of innovation is imprecisely defined. Relatedly, work by Van Reenen (1997) on listed UK manufacturing firms and Smolny (1998) for West German manufacturing firms shows a positive effect on employment for product innovations. Smolny also finds a positive employment effect of process innovations, whereas Van Reenen's analysis yields insignificant results. Harrison et al. consider the relationship between innovation and employment growth in four European countries (France, Italy, the UK and Germany) using data for 1998 and 2000 on firms in the manufacturing and services industries. Whilst product innovations are consistently associated with employment growth, process innovation appears to have a negative effect on employment, although the authors acknowledge that this latter result may be attenuated (or even reversed) through compensation effects.
To summarize, therefore, we can consider that product innovations generally have a positive impact on employment, whilst the role of process innovations is more ambiguous (Hall et al., 2006). We must emphasize, however, that investigations at the level of the firm do not allow us to infer the aggregated and cumulative effect of innovation on 'total jobs' -this is because datasets are composed of firms of different sizes which need to be weighted accordingly. Previous research in this area, however, has implicitly given equal weights to firms, by treating each firm as one 'observation' in a larger database. These studies can shed light on the effect of innovation on employment decisions in the 'average firm', but they do not yield conclusions on the total employment effects of innovation, for society as a whole.1 We have strong theoretical motivations for suspecting that the relationship between innovation and employment is not invariant over the firm size distribution. For example, it may be the case that larger firms are more prone to introduce labour-saving process innovations, whereas smaller firms are often associated with product innovations. In this way, innovation in larger firms may be associated with job destruction whereas the innovative activity of small firms would be associated with job creation. On the other hand, smaller firms have less restrictive hiring-and-firing regulations, and so innovation may lead to reductions in employment that are more frequent in smaller firms than in their larger counterparts. Although there may be a relationship between the size of a firm and the employment effects of innovation, however, we consider the sign and magnitude to be an empirical question. Our empirical framework enables us to evaluate the effect of innovation on total employment by attributing weights to firms of different sizes. 
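The weighting idea can be sketched in a few lines: regressing employment growth on innovativeness with each firm counted equally ('one firm equals one observation') answers a different question than the same regression with each firm weighted by its employment, which measures the effect on total jobs. The sketch below uses simulated data; all variable names and numbers are illustrative, not taken from our dataset.

```python
import numpy as np

def wls_slope(x, y, w):
    """Weighted least-squares slope of y on x, i.e. the b that minimises
    sum_i w_i * (y_i - a - b * x_i)^2."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(np.asarray(w, dtype=float))
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[1]

rng = np.random.default_rng(0)
n = 500
employees = rng.lognormal(4.0, 1.5, n)           # skewed firm-size distribution
innov = rng.normal(size=n)                       # 'innovativeness' index
growth = 0.02 * innov + rng.normal(0.0, 0.1, n)  # employment growth rates

b_ols = wls_slope(innov, growth, np.ones(n))     # one firm = one observation
b_wls = wls_slope(innov, growth, employees)      # firms weighted by their size
```

With `weights = employees`, the coefficient is dominated by the large firms that account for most jobs, which is exactly why the unweighted and weighted estimates can differ when the innovation-employment relationship varies with firm size.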
"Linking more explicitly the evidence on the patterns of innovation with what is known about firms growth and other aspects of corporate performance -both at the empirical and at the theoretical level -is a hard but urgent challenge for future research" (Cefis and Orsenigo, 2001: 1157). We are now in a position to rise to this challenge. In Section 2 we discuss the methodology, focusing in particular on the shortcomings of using either patent counts or R&D figures individually as proxies for innovativeness. We describe how we use Principal Component Analysis to extract a synthetic 'innovativeness' index from patent and R&D data. Section 3 describes how we matched the Compustat database to the NBER innovation database, and we describe how we created our synthetic 'innovativeness' index. Indeed, we have made efforts to obtain the best possible observations for firm-level innovative activity. Whilst our database does not allow any formal distinction between 'product' and 'process' innovation, we do not consider this to be a fatal caveat for the purposes of this investigation. Section 4 contains the semi-parametric quantile regression analysis, where we can observe how the influence of innovation on employment change varies across the conditional growth rate distribution. We then move on to the parametric analysis in Section 5. In particular, we compare the estimates obtained from conventional regressions (OLS and FE) with those obtained from weighted least squares (WLS). We observe that WLS estimation consistently yields a slightly more positive (although never statistically significant) estimate than other techniques, which suggests that previous studies may have underestimated the total employment gains from innovation. Section 6 concludes.

2 Methodology -How can we measure innovativeness?
Activities related to innovation within a company can include research and development; acquisition of machinery, equipment and other external technology; industrial design; and training and marketing linked to technological advances. These are not necessarily identified as such in company accounts, so quantification of the related costs is one of the main difficulties encountered in innovation studies. Each of the above-mentioned activities has some effect on the growth of the firm, but the singular and cumulative effect of each of these activities is hard to quantify. Data on innovation per se has thus been hard to find (Van Reenen, 1997). Also, some sectors innovate extensively, some don't innovate in a tractable manner, and the same is the case with organizational innovations, which are hard to quantify in terms of impact on the overall growth of firms. However, we believe that no firm can survive without at least some degree of innovation. We use two indicators for innovation in a firm: first, the patents applied for by a firm and, second, the amount of R&D undertaken. Cohen et al. (2000) suggest that no industry relies exclusively on patents, yet the authors go on to suggest that patents may add sufficient value at the margin when used with other appropriation mechanisms. Although patent data has drawbacks, patent statistics provide unique information for the analysis of the process of technical change (Griliches, 1990). We can use patent data to assess the patterns of innovation activity across fields (or sectors) and nations. The number of patents can be used as an indicator of inventive as well as innovative activity, but it has its limitations.
One of the major disadvantages of patents as an indicator is that not all inventions and innovations are patented (or indeed 'patentable'). Some companies -including a number of smaller firms -tend to find the process of patenting expensive or too slow and implement alternative measures such as secrecy or copyright to protect their innovations (Archibugi, 1992; Arundel and Kabla, 1998). Another bias in studies using patenting can arise from the fact that not all patented inventions become innovations. The actual economic value of patents is highly skewed, and most of the value is concentrated in a very small percentage of the total (OECD, 1994). Furthermore, another caveat of using patent data is that we may underestimate innovation occurring in large firms, because these typically have a lower propensity to patent (Dosi, 1988). The reason we use patent data in our study is that, despite the problems mentioned above, patents reflect the continuous developments within technology. We complement the patent data with R&D data. R&D can be considered as an input into the production of inventions, and patents as outputs of the inventive process. R&D data may lead us to systematically underestimate the amount of innovation in smaller firms, however, because these often innovate on a more informal basis outside of the R&D lab (Dosi, 1988). For some of the analysis we consider the R&D stock and also the patent stock, since past investments in R&D as well as past patent applications have an impact not only on the future values of R&D and patents, but also on firm growth.
Hall (2005) suggests that the past history of R&D spending is a good indicator of the firm's technological position. Taken individually, each of these indicators of firm-level innovativeness has its drawbacks. Each indicator on its own provides useful information on a firm's innovativeness, but also idiosyncratic variance that may be unrelated to a firm's innovativeness. One particular feature pointed out by Griliches (1990) is that, although patent data and R&D data are often chosen to individually represent the same phenomenon, there exists a major statistical discrepancy in that there is typically a great randomness in patent series, whereas R&D series are much smoother. Figure 1 shows that the variable of interest (i.e. ∆K -additions to economically valuable knowledge) is measured with noise if one takes either innovative input (such as R&D expenditure or R&D employment) or innovative output (such as patent statistics). In order to remove this noise, one needs to collect information on both innovative input and output, and to extract the common variance whilst discarding the idiosyncratic variance of each individual proxy, which includes noise, measurement error, and specific variation. In this study, we believe we have obtained useful data on a firm's innovativeness by considering both innovative input and innovative output simultaneously in a synthetic variable.2 Principal Component Analysis (PCA) is appropriate here as it allows us to summarize the information provided by several indicators of innovativeness into a composite index, by extracting the common variance from correlated variables whilst separating it from the specific and error variance associated with each individual variable (Hair et al., 1998).
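As an illustration, the extraction of the common variance can be sketched as the first principal component of the standardised indicator matrix. The data below are simulated (a latent 'innovativeness' factor plus indicator-specific noise); in our application the four columns would be the patent and R&D intensities and the two stock variables.

```python
import numpy as np

def innovativeness_index(indicators):
    """Score each firm on the first principal component of the standardised
    indicator matrix (rows = firms, columns = innovation indicators)."""
    Z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    first_pc = eigvec[:, -1]               # eigh returns ascending eigenvalues
    explained = eigval[-1] / eigval.sum()  # share of total variance captured
    return Z @ first_pc, explained

rng = np.random.default_rng(1)
common = rng.normal(size=300)                          # latent 'innovativeness'
noisy = [common + rng.normal(0.0, 0.5, 300) for _ in range(4)]
X = np.column_stack(noisy)                             # four correlated indicators
index, share = innovativeness_index(X)
```

The `share` value plays the role of the 'percentage of total variance explained' reported for the index: when the indicators mostly reflect the same underlying factor, the first component captures most of the variance and the scores recover the latent factor up to scale and sign.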
We are not the only ones to apply PCA to studies of firm-level innovation, however -this technique has also been used by Lanjouw and Schankerman (2004) to develop a composite index of 'patent quality' using multiple characteristics of patents (such as the number of citations, patent family size and patent claims). Another criticism of previous studies is that they have lumped together firms from all manufacturing sectors -even though innovation regimes (and indeed appropriability regimes) vary dramatically across industries. In this study, we focus on specific 2-digit sectors that have been hand-picked according to their intensive patenting and R&D activity. However, even within these sectors, there is significant heterogeneity between firms, and using standard regression techniques to make inferences about the average firm may mask important phenomena. Using quantile regression techniques, we investigate the relationship between innovativeness and growth at a range of points of the conditional growth rate distribution. We observe three types of relationship between innovation and employment. First, most firms do not experience much employment change in any given year, and what little change they have appears to be largely idiosyncratic and not strongly related to innovative activity. Second, for those firms that grow the fastest, innovation seems to be strongly positively associated with increases in employment. Third, for those firms that are rapidly shedding workers, this decline is strongly associated with innovative activity. We note that this heterogeneity of the response of employment change to innovation cannot be detected if we focus on conventional regression estimators that estimate 'the average effect for the average firm'. We only consider certain specific sectors, and not the whole of manufacturing.
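The three types of relationship just described imply that innovation shifts the tails of the conditional growth-rate distribution while leaving the middle largely untouched. That pattern can be illustrated with simulated data (purely hypothetical numbers) by comparing growth-rate quantiles of innovative and non-innovative firms:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
innov = rng.binomial(1, 0.5, n)          # 1 = innovative firm (simulated)
# innovation leaves the median growth rate untouched but fattens both tails
scale = np.where(innov == 1, 0.20, 0.10)
growth = rng.normal(0.0, scale)

lo, hi = growth[innov == 0], growth[innov == 1]
effect = {p: np.percentile(hi, p) - np.percentile(lo, p) for p in (10, 50, 90)}
# negative at the 10th percentile, near zero at the median,
# positive at the 90th percentile
```

An OLS regression on these data would report an average effect near zero, even though innovation matters strongly in both tails; this is the phenomenon that quantile regression is designed to reveal.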
This way we are not affected by aggregation effects; we are grouping together firms that can plausibly be compared to each other. We are particularly interested in looking at the growth of firms classified under 'complex' technology classes. We base our classification of firms on the typology put forward by Hall (2005) and Cohen et al. (2000). The authors define 'complex product'3 industries as those industries where each product relies on many patents held by a number of other firms, and 'discrete product' industries as those industries where each product relies on only a few patents and where the importance of patents for appropriability has traditionally been higher. 4 We chose four sectors that can be classified under the 'complex products' class. The two-digit SIC codes that match the 'complex technology' sectors are 35, 36, 37, and 38. 5 By choosing these sectors that are characterised by high patenting and high R&D expenditure, we hope that we will be able to get the best possible quantitative observations for firm-level innovation.

3 Database description

Database

We create an original database by matching the NBER patent database with the Compustat file database, and this section is devoted to describing the creation of the sample which we will use in our analysis. The patent data has been obtained from the NBER database (Hall et al., 2001b), and we have used the updates available on Bronwyn Hall's website 6 to obtain data until 2002. The NBER database comprises detailed information on almost 3 416 957 U.S. utility patents in the USPTO's TAF database granted during the period 1963 to December 2002 and all citations made to these patents between 1975 and 2002. A firm's patenting history is analysed over the whole period represented by the NBER patent database.
The initial sample of firms was obtained from the Compustat 7 database for the aforementioned sectors comprising 'complex product' sectors. These firms were then matched with the firm data files from the NBER patent database and we found all the firms8 that have patents. The final sample thus contains both patenters and non-patenters. The NBER database has patent data for over 60 years and the Compustat database has firms' financial data for over 50 years, giving us a rather rich information set. As Van Reenen (1997) mentions, the development of longitudinal databases of technologies and firms is a major task for those seriously concerned with the dynamic effect of innovation on firm growth. Hence, having developed this longitudinal dataset, we feel that we will be able to thoroughly investigate whether innovation drives sales growth at the firm-level. Table 1 shows some descriptive statistics of the sample before and after cleaning. Initially using the Compustat database, we obtain a total of 4274 firms which belong to the SICs 35-38 6 See http://elsa.berkeley.edu/∼bhhall/bhdata.html 7 Compustat has the largest set of fundamental and market data representing 90% of the world's market capitalization. Use of this database could indicate that we have oversampled the Fortune 500 firms. Being included in the Compustat database means that the number of shareholders in the firm was large enough for the firm to command sufficient investor interest to be followed by Standard and Poor's Compustat, which basically means that the firm is required to file 10-Ks to the Securities and Exchange Commission on a regular basis. It does not necessarily mean that the firm has gone through an IPO. Most of them are listed on NASDAQ or the NYSE. and this sample consists of both innovating and non-innovating firms. These firms were then matched to the NBER database. 
After this initial match, we further matched the year-wise firm data to the year-wise patents applied for by the respective firms (in the case of innovating firms) and, finally, we excluded firms that had less than 7 consecutive years of good data. Thus, we have an unbalanced panel of 1920 firms belonging to 4 different sectors. Since we intend to take into account sectoral effects of innovation, we will proceed on a sector-by-sector basis, to have (ideally) 4 comparable results for 4 different sectors.

Summary statistics and the 'innovativeness' index

Table 2 provides some insights into the firm size distribution for each of the four sectors. We can observe a certain degree of heterogeneity between the sectors, with SIC 37 (Transportation equipment) containing a relatively large proportion of large firms. Figure 2 shows the number of patents per year in our final database. For some of the sectors there appears to be a strong structural break at the beginning of the 1980s, which may well be due to changes in patent regulations (see Hall, 2005, for a discussion). Table 3 presents the firm-wise distribution of patents, which is noticeably right-skewed. We find that 46% of the firms in our sample have no patents. Thus the intersection of the two datasets gave us 1028 patenting firms who had taken out at least one patent between 1963 and 1998, and 892 firms that had no patents during this period. The total number of patents taken out by this group over the entire period was 317 116, where the entire period for the NBER database represented the years 1963 to 2002, and we have used 274 964 of these patents in our analysis, i.e. representing about 87% of the total patents ever taken out at the US Patent Office by the firms in our sample.
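The sample-cleaning rule of requiring at least 7 consecutive years of good data can be sketched as follows (a hypothetical helper for illustration, not the code actually used to build the panel):

```python
def has_consecutive_years(years, min_run=7):
    """True if `years` (the calendar years with good data, in any order)
    contains a run of at least `min_run` consecutive years."""
    ys = sorted(set(years))
    best = 1 if ys else 0
    run = 1
    for a, b in zip(ys, ys[1:]):
        run = run + 1 if b == a + 1 else 1   # extend or restart the run
        best = max(best, run)
    return best >= min_run
```

A firm with data for 1990-1996 passes the filter, while one with a gap in 1993 does not, even if it has seven years of data in total.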
Though the NBER database provides data on patents applied for from 1963 to 2002, it contains information only on granted patents, and hence we might see some bias affecting firms that applied towards the end of the period covered by the database, owing to the lags between application and grant. To avoid this truncation bias (on the right), we consider patents only up to 1997, so as to allow for a 5-year gap between application and grant of the patent [9]. Concerning R&D, 1867 of the 1920 firms report positive R&D expenditure. Table 4 shows that patent numbers are well correlated with (deflated) R&D expenditure, albeit without controlling for firm size. To take firm size into account, we take employees as a measure of size and scale down the R&D and patent measures [10]. Table 5 reports the rank correlations between firm-level patent intensity and R&D intensity. For each of the sectors we observe positive and highly significant rank correlations, which nonetheless take values lower than 0.25. These results would thus appear to be consistent with the idea that, even within industries, patent and R&D statistics contain large amounts of idiosyncratic variance, and that either of these variables taken individually would be a rather noisy proxy for 'innovativeness' [11]. Indeed, as discussed in Section 2, these two variables are quite different not only in terms of statistical properties (patent statistics are much more skewed and less persistent than R&D statistics) but also in terms of economic significance. However, they both yield valuable information on firm-level innovativeness.

[10] We also investigate the robustness of our results by scaling down a firm's R&D and patents by its sales instead of its employees, and obtain similar results. For a brief discussion, see the Appendix (Section A).
[11] Further evidence of the discrepancies between patent statistics and R&D statistics is presented in the regression results in Tables 5 and 6 of Coad and Rao (2006a).

Our synthetic 'innovativeness' index is created by extracting the common variance from a series of related variables: both patent intensity and R&D intensity at time t, and also the actualized stocks of patents and R&D. These stock variables are calculated using the conventional amortization rate of 15%, and also at a rate of 30%, since we suspect that the 15% rate may be too low (Hall and Oriani, 2006). Information on the factor loadings is shown in Table 6. We consider the summary 'innovativeness' variable a satisfactory indicator of firm-level innovativeness in all the sectors under analysis, because it loads reasonably well on the stock variables and explains between 50% and 83% of the total variance. Our composite variable has worked well in previous studies (e.g. Coad and Rao 2006a,b,c) and in this study too we find that it works reasonably well. Nevertheless, we check the robustness of our results in the Appendix (Section B) by taking either a firm's R&D stock or its patent stock as alternative indicators of 'innovativeness'. An advantage of this composite index is that a lot of information on a firm's innovative activity can be summarized in one variable (this will be especially useful in the following graphs). A disadvantage is that the units have no ready interpretation (unlike 'one patent' or '$1 million of R&D expenditure'). In this study, however, we are less concerned with the quantitative point estimates than with the qualitative variation in the importance of innovation over the conditional growth rate distribution (i.e. the 'shape' of the graphs).
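As a concrete sketch of the construction just described, the code below builds perpetual-inventory knowledge stocks at a 15% depreciation rate and then extracts the first principal component of the standardized indicators. The data are simulated and the variable names are illustrative assumptions; this is not the paper's actual code or data.

```python
import numpy as np

def knowledge_stock(flows, delta=0.15):
    """Perpetual-inventory stock: S_t = (1 - delta) * S_{t-1} + flow_t."""
    stock, out = 0.0, []
    for f in flows:
        stock = (1.0 - delta) * stock + f
        out.append(stock)
    return np.array(out)

def first_principal_component(X):
    """First principal component of the standardized columns of X.

    Returns the component score, the loading vector, and the
    proportion of total variance explained."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    v = eigvec[:, np.argmax(eigval)]      # loadings of the 1st component
    share = eigval.max() / eigval.sum()   # variance explained
    return Z @ v, v, share

# Simulated intensities (treated here as one illustrative series each).
rng = np.random.default_rng(0)
rd_int = rng.lognormal(size=200)                          # R&D intensity
pat_int = 0.5 * rd_int + 0.5 * rng.lognormal(size=200)    # correlated patent intensity

X = np.column_stack([rd_int, pat_int,
                     knowledge_stock(rd_int),             # R&D stock (delta = 15%)
                     knowledge_stock(pat_int)])           # patent stock (delta = 15%)
score, loadings, share = first_principal_component(X)
```

The `score` vector plays the role of the composite 'innovativeness' index, and `share` corresponds to the proportion of variance explained reported in Table 6.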
Semi-parametric analysis

In this section we use semi-parametric quantile regression techniques to explore the heterogeneity between firms with regard to their innovation and employment behavior. We begin with an introduction to quantile regression before presenting the results.

An Introduction to Quantile Regression

Standard least squares regression techniques provide summary point estimates that capture the average effect of the independent variables on the 'average firm'. However, this focus on the average firm may hide important features of the underlying relationship. As Mosteller and Tukey explain in an oft-cited passage:

"What the regression curve does is give a grand summary for the averages of the distributions corresponding to the set of x's. We could go further and compute several regression curves corresponding to the various percentage points of the distributions and thus get a more complete picture of the set. Ordinarily this is not done, and so regression often gives a rather incomplete picture. Just as the mean gives an incomplete picture of a single distribution, so the regression curve gives a correspondingly incomplete picture for a set of distributions" (Mosteller and Tukey, 1977:266).

Quantile regression techniques can therefore help us obtain a more complete picture of the underlying relationship between innovation and employment growth. In our case, estimation of linear models by quantile regression may be preferable to the usual regression methods for a number of reasons. First of all, we know that the standard least-squares assumption of normally distributed errors does not hold for our database, because growth rates follow an exponential rather than a Gaussian distribution. The heavy-tailed nature of the growth rates distribution is illustrated in Figure 3 (see also Stanley et al. (1996) and Bottazzi and Secchi for the growth rates distribution of Compustat firms). Whilst the optimal properties of standard regression estimators are not robust to modest departures from normality, quantile regression results are characteristically robust to outliers and heavy-tailed distributions. In fact, the quantile regression solution \hat{\beta}_\theta is invariant to outliers of the dependent variable that tend towards ±∞ (Buchinsky, 1994). Another advantage is that, while conventional regressions focus on the mean, quantile regressions are able to describe the entire conditional distribution of the dependent variable. In the context of this study, high-growth firms are of interest in their own right: we do not want to dismiss them as outliers, but on the contrary believe it worthwhile to study them in detail. This can be done by calculating coefficient estimates at various quantiles of the conditional distribution. Finally, a quantile regression approach avoids the restrictive assumption that the error terms are identically distributed at all points of the conditional distribution. Relaxing this assumption allows us to acknowledge firm heterogeneity and consider the possibility that estimated slope parameters vary at different quantiles of the conditional growth rate distribution. The quantile regression model, first introduced in Koenker and Bassett's (1978) seminal contribution, can be written as:

    y_{it} = x_{it} \beta_\theta + u_{\theta it},   with   Quant_\theta(y_{it} | x_{it}) = x_{it} \beta_\theta    (1)

where y_{it} is the dependent variable, x is a vector of regressors, \beta is the vector of parameters to be estimated, and u is a vector of residuals. Quant_\theta(y_{it} | x_{it}) denotes the \theta-th conditional quantile of y_{it} given x_{it}.
The \theta-th regression quantile, 0 < \theta < 1, solves the following problem:

    \min_\beta \frac{1}{n} \left[ \sum_{i,t: y_{it} \ge x_{it}\beta} \theta |y_{it} - x_{it}\beta| + \sum_{i,t: y_{it} < x_{it}\beta} (1-\theta) |y_{it} - x_{it}\beta| \right] = \min_\beta \frac{1}{n} \sum_{i=1}^{n} \rho_\theta(u_{\theta it})    (2)

where \rho_\theta(\cdot), known as the 'check function', is defined as:

    \rho_\theta(u_{\theta it}) = \theta u_{\theta it}         if u_{\theta it} \ge 0
                               = (\theta - 1) u_{\theta it}   if u_{\theta it} < 0    (3)

Equation (2) is then solved by linear programming methods. As one increases \theta continuously from 0 to 1, one traces the entire conditional distribution of y, conditional on x (Buchinsky, 1998). More on quantile regression techniques can be found in the surveys by Buchinsky (1998) and Koenker and Hallock (2001); for some applications, see the special issue of Empirical Economics (Vol. 26 (3), 2001).

Quantile regression results

We now apply quantile regression to estimate the following linear regression model:

    GROWTH_{i,t} = \alpha + \beta_1 INN_{i,t-1} + \beta_2 CONTROL_{i,t-1} + y_t + \epsilon_{i,t}    (4)

where INN is the 'innovativeness' variable for firm i at time t-1. CONTROL includes the control variables that may potentially influence a firm's employment growth [12]; namely, lagged growth, lagged size and 3-digit industry dummies. We also control for common macroeconomic shocks by including year dummies (y_t). The regression results for each of the four 2-digit sectors can be seen in Figure 4 (see also Table 7). We observe considerable variation in the regression coefficient over the conditional quantiles. At the upper quantiles, the coefficient increases, which means that innovation has a strong positive impact on employment for those firms that have the fastest employment growth.
At the lower quantiles, however, the coefficient on our 'innovativeness' variable often becomes negative (although not statistically significant), which indicates that innovation is associated with job destruction for those firms that are losing the most jobs. To sum up, it may be useful to distinguish between three groups of firms. First of all, the 'average firm' stays at roughly the same size. Such firms do not change their employment levels by much and, furthermore, innovation seems to have little effect on their employment decisions; this is indicated by the fact that the coefficient on 'innovativeness' is close to zero at the median quantile. The second group consists of those fast-growing firms that are experiencing the largest increases in employment. For these firms, innovation has a strong positive effect on employment. The third group contains the firms that are losing the most jobs. In this case, increases in firm-level innovative activity are associated with subsequent reductions in employment. This could be due to two effects, however. On the one hand, it could be due to innovation leading to a reduction in the required labour inputs (this is the bona fide 'technological unemployment' argument). On the other hand, it could be because some firms are unsuccessful in their attempts at innovation; this is the 'tried and failed' category of innovators described in Freel (2000) and discussed in Coad and Rao. We suspect that both of these effects are present for this third group of firms. In the Appendix (Section B), we check the robustness of our results using alternative (cruder) measures of firm-level innovation: 3-year R&D and patent stocks, depreciated at the conventional rate of 15%. As expected, these two variables taken on their own are less clear-cut than our preferred composite 'innovativeness' variable.
Broadly speaking, however, the results from this exercise appear to support the main results presented in this section. We have thus observed that in some cases innovation is associated with employment creation, whilst in other cases it is associated with job destruction. It is of interest to see whether these two categories are correlated with a firm's size. For example, we could suspect that the latter category corresponds to the largest firms, which are more likely to introduce process innovations. Previous studies have not been able to test this hypothesis because they implicitly attribute equal weights to firms of different sizes. We argue that this approach is flawed, however, given that larger firms, because of their size, have a greater impact on the absolute number of jobs. We investigate this issue in the next section.

Parametric analysis

We begin this section with a brief introduction to the weighted least squares estimator, and then apply it to our dataset.

An Introduction to Weighted Least Squares

"As Mosteller and Tukey (1977, p. 346) suggested, the action of assigning 'different weights to different observations, either for objective reasons or as a matter of judgement' in order to recognize 'some observations as "better" or "stronger" than others' has an extensive history." (Willett and Singer, 1988:236)

Consider the regression equation:

    y_i = \beta x_i + \epsilon_i    (5)

The OLS regression solution seeks to minimize the sum of the squared residuals, i.e.:

    \min Q = \sum_{i=1}^{n} (y_i - \beta x_i)^2 \equiv \sum_{i=1}^{n} \epsilon_i^2    (6)

Implicit in the basic OLS solution is that the observations are treated as equally important, each being given equal weight. Weighted Least Squares, in contrast, attributes weights w_i to specific observations, which determine how much each observation influences the final parameter estimates:

    \min Q = \sum_{i=1}^{n} w_i (y_i - \beta x_i)^2    (7)

It follows that the WLS estimators are functions of the weights w_i.
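The weighted objective in Equation (7) has the familiar closed form \beta = (X'WX)^{-1} X'Wy. The minimal sketch below uses simulated data (the variables and weights are illustrative, not the paper's dataset) to show how size weights change the estimate relative to equal-weight OLS:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i @ beta)^2.

    Closed-form solution: beta = (X'WX)^{-1} X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(1)
n = 500
employees = rng.lognormal(mean=2.0, size=n)   # firm size, used as the weight
x = rng.normal(size=n)                        # e.g. lagged 'innovativeness'
y = 0.5 * x + rng.normal(size=n)              # simulated growth rates
X = np.column_stack([np.ones(n), x])          # intercept + regressor

beta_ols = wls(X, y, np.ones(n))   # equal weights -> ordinary least squares
beta_wls = wls(X, y, employees)    # employment-weighted estimate
```

With unit weights the formula collapses to OLS; with employment weights, the largest firms dominate the objective, which is exactly the property exploited in the regressions below.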
Although WLS can be used in situations where observations are attributed different levels of 'importance', it is most often used for dealing with heteroskedasticity. In the context of this study, the weight w_i corresponds to the firm's size, measured in terms of employees.

Regression results

We estimate equation (4) using conventional estimators such as OLS and the fixed-effects (FE) estimator, as well as the Weighted Least Squares estimator; the results are presented in Table 7. The main feature of the regression results is that the coefficients obtained from the standard OLS and FE estimators are positive (though not always significant) for each of the four sectors. Furthermore, we observe that the R² coefficients are rather low, always below 7%. The standard interpretation of these results would be that, if anything, innovation is positively associated with subsequent employment growth. However, our preferred interpretation is informed by the quantile regression analysis presented in the preceding section: for the fast-growing firms, innovation is positively associated with employment, whilst increases in innovation may also be associated with job destruction for those firms shedding the most jobs. This heterogeneity is masked by standard regression techniques, which focus on 'the average effect for the average firm'. We also observe that the WLS coefficient estimates are in most cases higher than the results obtained by either OLS or FE. This evidence hints that innovation in large firms is more likely to be associated with employment creation than innovation in small firms. This is an interesting finding, given that larger firms have a greater potential for large increases in the absolute number of new jobs.
In addition, this result is perhaps surprising given that large firms are usually associated with process innovations (see for example Klepper, 1996), and process innovations, in turn, are usually classified as labour-saving.

Conclusion

Our main results are twofold. The first emerges when we apply semi-parametric quantile regressions to explore the relationship between innovation and employment growth. We observe three categories of firms. First, most firms do not grow by much, and what little growth they do experience seems unrelated to innovation. Second, those firms that experience rapid employment growth owe a large amount of it to their previous attempts at innovation. Third, for those firms that are shedding the most jobs, increases in innovative activity seem to be associated with job destruction. The distinction between these three categories is effectively masked whenever conventional parametric regressions are used, because the latter focus on 'the average effect for the average firm' and are relatively insensitive to heterogeneity between firms. Our second main result is observed when we investigate whether the relationship between innovation and employment varies with firm size. Our previous observations on the heterogeneity of firm behavior vis-à-vis innovation and employment effectively fuelled such suspicions. Our results indicate that, if anything, innovative activity in large firms is more positively associated with employment growth than innovative activity undertaken by their smaller counterparts. We should mention the limitations of our results that are brought on by the specificities of our dataset.

[Footnote] The fact that the WLS R² is higher than the OLS or FE R² may simply be a spurious statistical result (Willett and Singer, 1988).
In the US, the labour market is more fluid than in other countries, and this may reduce the generality of our results. Furthermore, we focus only on high-tech manufacturing sectors. Although this particular sectoral focus allows us to obtain relatively accurate measures of firm-level innovation, it reduces the scope of our analysis: it may be the case that the relationship between innovation and unemployment is different for other sectors of the economy.

Figure 1: The Knowledge 'Production Function': A Simplified Path Analysis Diagram (based on Griliches 1990:1671).

Figure 2: Number of patents per year. SIC 35: Machinery & Computer Equipment; SIC 36: Electric/Electronic Equipment; SIC 37: Transportation Equipment; SIC 38: Measuring Instruments.

Figure 3: The (annual) employment growth rates distribution for our four two-digit sectors.

Figure 4: Variation in the coefficient on 'innovativeness' (i.e. \beta_1 in Equation (4)) over the conditional quantiles. Confidence intervals extend to 2 standard errors in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals.
Panels: SIC 35: Machinery & Computer Equipment (top-left); SIC 36: Electric/Electronic Equipment (top-right); SIC 37: Transportation Equipment (bottom-left); SIC 38: Measuring Instruments (bottom-right). Graphs made using the 'grqreg' Stata module (Azevedo).

Figure 5: Variation in the coefficient on a firm's 3-year R&D stock (i.e. \gamma_1 in Equation (8)) over the conditional quantiles. Confidence intervals extend to 2 standard errors in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals. Panels as in Figure 4. Graphs made using the 'grqreg' Stata module (Azevedo).

Figure 6: Variation in the coefficient on a firm's 3-year patent stock (i.e. \gamma_1 in Equation (8)) over the conditional quantiles. Confidence intervals extend to 2 standard errors in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals. Panels as in Figure 4. Graphs made using the 'grqreg' Stata module (Azevedo).

Table 1: Summary statistics before and after data-cleaning

                        sample before cleaning      sample used
                        (n = 4274 firms)            (n = 1920 firms)
                        mean        std. dev.       mean        std. dev.
Total Sales             1028.156    6775.733        1178.169    7046.354
Patent applications     6.387029    45.35137        9.267999    56.86579
R&D expenditure         58.6939     363.402         57.06897    351.7708
Total Employees         8.5980      40.1245         10.21704    46.0217

Table 2: Firm size distribution in SICs 35-38, 1963-1998

No. of Employees              SIC 35      SIC 36      SIC 37      SIC 38
<= 250          Mean          0.104332    0.112787    0.101951    0.095604
                Std. Dev.     0.069861    0.06795     0.071023    0.070228
                obs           2570        3196        266         3667
> 250 & <= 500  Mean          0.371858    0.36585     0.375686    0.36347
                Std. Dev.     0.071885    0.071307    0.075044    0.069809
                obs           969         1347        204         879
> 500 & <= 5000 Mean          1.802632    1.684009    2.091483    1.641482
                Std. Dev.     1.187339    1.109291    1.174895    1.094161
                obs           3317        2941        937         2018
> 5000          Mean          33.91514    43.34083    91.30289    25.02034
                Std. Dev.     50.19058    64.77395    165.8062    28.9475
                obs           1729        1322        1312        935
Note: employee numbers are given in thousands.

Table 3: The distribution of firms by total patents, 1963-1998 (SICs 35-38 only)

        0 or more   1 or more   10 or more   25 or more   100 or more   250 or more   1000 or more
Firms   1920        1028        641          435          195           119           53

Table 4: Contemporaneous rank correlations between patents and R&D expenditure

          SIC 35    SIC 36    SIC 37    SIC 38
rho       0.4277    0.4560    0.4326    0.4591
p-value   0.0000    0.0000    0.0000    0.0000
Obs.      8533      8751      2696      7475

Table 5: Contemporaneous rank correlations between 'patent intensity' (patents/employees) and 'R&D intensity' (R&D/employees)

          SIC 35    SIC 36    SIC 37    SIC 38
rho       0.1631    0.2321    0.2248    0.1990
p-value   0.0000    0.0000    0.0000    0.0000
Obs.      7906      8119      2505      6935

Table 6: Extracting the 'innovativeness' index used for the quantile regressions - Principal Component Analysis results (first component only, unrotated)

          SIC 35    SIC 36    SIC 37    SIC 38

Table 7: Regression estimation of equation (4). Quantile regression estimates obtained using 1000 bootstrap replications.

Columns: 10% / 25% / 50% / 75% / 90% quantiles, then OLS, FE, WLS.

SIC 35:
beta_1        -0.0124   -0.0003   0.0111    0.0196    0.0392    0.0114    0.0195    0.0261
Std. Error    0.0082    0.0043    0.0035    0.0046    0.0124    0.0035    0.0057    0.0086
t-stat        -1.51     -0.06     3.11      4.24      3.16      3.23      3.42      3.03
R² within 0.0424;  R² between 0.0001
R² overall    0.0723    0.0582    0.0479    0.0735    0.0894    0.0599    0.0273    0.1989
obs (groups) 601;  obs 6682 (all columns)

SIC 36:
beta_1        0.002     0.0052    0.0067    0.0179    0.0255    0.0103    0.0153    0.0282
Std. Error    0.0063    0.002     0.0024    0.004     0.0044    0.0024    0.0056    0.0057
t-stat        0.32      2.58      2.8       4.5       5.73      4.29      2.75      4.96
R² within 0.043;  R² between 0.0005
R² overall    0.0429    0.041     0.0361    0.0487    0.048     0.0479    0.018     0.1427
obs (groups) 614;  obs 6891 (all columns)

SIC 37:
beta_1        -0.0017   0.0024    0.0038    0.0096    0.0179    0.0043    0.0149    0.0149
Std. Error    0.0176    0.0031    0.0026    0.0035    0.0114    0.005     0.008     0.0056
t-stat        -0.1      0.75      1.47      2.72      1.56      0.85      1.86      2.67
R² within 0.0548;  R² between 0.0048
R² overall    0.1036    0.0787    0.065     0.0617    0.0716    0.0685    0.0261    0.2417
obs (groups) 178;  obs 2154 (all columns)

SIC 38:
beta_1        -0.0011   -0.0008   0.0125    0.041     0.0688    0.0073    0.0136    0.0044
Std. Error    0.011     0.0058    0.0086    0.0099    0.0191    0.0074    0.0107    0.0068
t-stat        -0.1      -0.14     1.45      4.13      3.6       0.99      1.27      0.65
R² within 0.0329;  R² between 0.0884
R² overall    0.0459    0.0288    0.028     0.0507    0.065     0.0284    0.0049    0.1247
obs (groups) 527;  obs 5870 (all columns)

Table 8: Extracting the 'innovativeness' index used for the quantile regressions - Principal Component Analysis results (first component only, unrotated)

                               SIC 35    SIC 36    SIC 37    SIC 38
R&D / Sales                    0.1631    0.1351    0.3076    0.0302
Patents / Sales                0.2669    0.1239    0.4294    0.1614
R&D stock / Sales (δ=15%)      0.4628    0.4945    0.3530    0.4645
Pat. stock / Sales (δ=15%)     0.4840    0.4958    0.4830    0.5199
R&D stock / Sales (δ=30%)      0.4659    0.4888    0.3540    0.4653
Pat. stock / Sales (δ=30%)     0.4865    0.4870    0.4877    0.5200
Prop'n variance explained      0.5031    0.6155    0.4752    0.3762
No. Obs.                       7858      8079      2559      6940

Table 9: Regression estimation of equation (4).
Note: the quantile regression standard errors have not been bootstrapped here.

Columns: 10% / 25% / 50% / 75% / 90% quantiles, then OLS, FE, WLS.

SIC 35:
beta_1        -0.03868   -0.00390   0.00225    0.00114    0.01109    -0.00127   0.00276    0.02374
Std. Error    0.00154    0.00142    0.00066    0.00066    0.00145    0.00346    0.00494    0.01522
t-stat        -25.10     -2.74      3.40       1.71       7.64       -0.37      0.56       1.56
R² within 0.0401;  R² between 0.0006
R² overall    0.06880    0.05350    0.04020    0.06060    0.07350    0.05360    0.02260    0.16560
obs (groups) 661;  obs 7273 (all columns)

SIC 36:
beta_1        -0.01129   -0.00984   -0.00063   0.01048    0.01895    -0.00240   -0.00235   0.02051
Std. Error    0.00178    0.00091    0.00049    0.00085    0.00123    0.00276    0.00279    0.01403
t-stat        -6.33      -10.84     -1.29      12.28      15.45      -0.87      -0.84      1.46
R² within 0.0352;  R² between 0.0050
R² overall    0.04190    0.0368     0.0332     0.0436     0.0400     0.0408     0.0135     0.1333
obs (groups) 614;  obs 7495 (all columns)

SIC 37:
beta_1        0.00479    0.00148    0.00304    0.00673    0.02609    0.00658    0.00362    0.02215
Std. Error    0.00376    0.00129    0.00118    0.00165    0.00269    0.00379    0.00490    0.01533
t-stat        1.28       1.14       2.57       4.06       9.72       1.74       0.74       1.44
R² within 0.0588;  R² between 0.0046
R² overall    0.0841     0.0750     0.0594     0.0529     0.0661     0.0659     0.0233     0.2213
obs (groups) 178;  obs 2389 (all columns)

SIC 38:
beta_1        0.00038    -0.00128   0.00540    0.00347    0.00121    0.00173    0.00164    0.00174
Std. Error    0.00110    0.00061    0.00063    0.00117    0.00122    0.00268    0.00333    0.00276
t-stat        0.35       -2.11      8.52       2.97       0.99       0.64       0.49       0.63
R² within 0.0261;  R² between 0.0502
R² overall    0.04030    0.02570    0.02560    0.03900    0.03940    0.02510    0.00610    0.11840
obs (groups) 527;  obs 6421 (all columns)

Footnotes

[1] Note, however, that Evangelista and Savona estimate the effect of firm-level innovation on total employment by attributing observation-specific weights to firms.
Nonetheless, their analysis is rather limited because their employment growth variable is a qualitative survey response rather than a quantitative growth rate.

[2] Following Griliches (1990), we consider here that patent counts can be used as a measure of innovative output, although this is not entirely uncontroversial. Patents have a highly skewed value distribution and many patents are practically worthless. As a result, patent numbers have limitations as a measure of innovative output; some authors would even prefer to consider raw patent counts as indicators of innovative input.

[3] During our discussion, we will use the terms 'products' and 'technology' interchangeably to indicate generally the same idea.

[4] It would have been interesting to include 'discrete technology' sectors in our study, but unfortunately we did not have a comparable number of observations for these sectors. This remains a challenge for future work.

[5] The 'complex technology' sectors that we consider are SIC 35 (industrial and commercial machinery and computer equipment), SIC 36 (electronic and other electrical equipment and components, except computer equipment), SIC 37 (transportation equipment) and SIC 38 (measuring, analyzing and controlling instruments; photographic, medical and optical goods; watches and clocks).

[8] The patent ownership information (obtained from the above-mentioned sources) reflects ownership at the time of patent grant and does not include subsequent changes in ownership. Attempts have also been made to combine data based on subsidiary relationships and, where possible, spelling variations and variations based on name changes have been merged into a single name.
While every effort is made to accurately identify all organizational entities and report data by a single organizational name, achievement of a totally clean record is not expected, particularly in view of the many variations which may occur in corporate identifications. Moreover, the NBER database does not cumulatively assign the patents obtained by subsidiaries to their parents; we have taken this limitation into account and have subsequently tried to count the patents obtained by subsidiaries towards the patent count of the parent. We have thus attempted to create an original database that gives complete firm-level patent information.

[9] The gap between application and grant of a patent has been referred to by many authors, among others Bloom and Van Reenen (2002), who mention a lag of two years between application and grant, and Hall et al. (2001a), who state that 95% of the patents that are eventually granted are granted within 3 years of application. However, we allow for a five-year gap here because it has been suggested that this gap has become longer in recent years.

[12] For a survey of firm growth, see Coad ('Firm Growth: A Survey').

[13] Scherer (1965) discusses the possibility of scale measurement errors entering into various firm-level data. Although he is unable to verify the hierarchy of these errors, he speculates that the measurement problems are likely to be larger for assets, followed by sales and (to a lesser extent) employment (Scherer 1965: 259).

Appendices

A. Scaling down according to firm size

There are at least two ways of scaling down indicators of innovative activity according to firm size (Small, 'R&D performance of UK Companies'). The first, and perhaps most common, way is to use a firm's sales as an indicator of its size. The second involves scaling down according to a firm's employment. Our analysis in this paper uses the second approach, which some authors have nonetheless identified as the preferable method [13].
However, we also investigate the robustness of our analysis by scaling down according to total sales. Table 8 contains the corresponding results for the generation of our composite 'innovativeness' indicator. We observe that this indicator does not appear to perform as well when we scale down innovative activity by a firm's total sales (compare the results here with those in Coad and Rao (2006b,c)). We nonetheless pursue the analysis using this indicator, and we obtain similar results (see Table 9).

B. Alternative measures of innovative activity

In this section we verify the robustness of the quantile regression results presented in Section 4 by using simpler and cruder measures of firm-level innovative activity. We now estimate the following linear regression model:

    GROWTH_{i,t} = \alpha + \gamma_1 INNOV_{i,t-1} + \gamma_2 CONTROL_{i,t-1} + y_t + \epsilon_{i,t}    (8)

where INNOV_{i,t-1} refers to either a firm's 3-year stock of R&D intensity (i.e. R&D/Sales) or patent intensity (Patents/Sales); the conventional depreciation rate of 15% has been used for both of these variables. The results are presented in Figures 5 and 6. Broadly speaking, these results offer support to our earlier analysis. In general, we observe that the coefficient is close to zero at the median quantile. The coefficient decreases at the very lowest quantiles, often becoming negative. In contrast, the coefficient becomes increasingly positive at the upper quantiles.
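As a closing numerical illustration of the estimator used throughout, minimizing the average check ('pinball') loss of Equation (3) over a constant recovers the corresponding empirical quantile, which is the mechanism behind all the quantile estimates reported above. The sketch below uses simulated heavy-tailed data (like the growth rates in Figure 3); it is an illustration, not the paper's estimation code.

```python
import numpy as np

def check_function(u, theta):
    """Koenker-Bassett 'check' (pinball) loss rho_theta(u) from Equation (3)."""
    return np.where(u >= 0, theta * u, (theta - 1.0) * u)

def quantile_by_loss(y, theta):
    """Constant c minimizing the mean pinball loss; equals the theta-quantile of y."""
    grid = np.linspace(y.min(), y.max(), 2001)
    losses = [check_function(y - c, theta).mean() for c in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(2)
y = rng.laplace(size=5000)   # heavy-tailed draws, mimicking growth rates

# The loss-minimizing constant matches the empirical quantile at each theta.
recovered = {theta: quantile_by_loss(y, theta) for theta in (0.1, 0.5, 0.9)}
```

With covariates, the same objective is minimized over the full coefficient vector by linear programming, as noted after Equation (2).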
https://hal.univ-lorraine.fr/tel-01750444/file/DDOC_T_2013_0189_GHACH.pdf
Keywords: sol-gel, hybrid materials, bioencapsulation, electrodeposition, bacteria, artificial biofilm, membrane-associated enzyme, bacteriophage, electron transfer, cytochrome c

Abbreviations
TEOS: Tetraethoxysilane
TMOS: Tetramethoxysilane
PEI: Poly(ethylenimine)
ITO: Indium-tin-oxide electrode
PGE: Pyrolytic graphite electrode
PBS: Phosphate buffer solution
PEG: Poly(ethylene glycol)
SWCNT: Single-walled carbon nanotubes
SWCNT-(EtO)8-Fc: Single-walled carbon nanotubes chemically modified with a poly(ethylene glycol) linker and a ferrocene mediator
MWCNT: Multi-walled carbon nanotubes
MWCNT-Os: Multi-walled carbon nanotubes wrapped with an osmium polymer
AuNP: Gold nanoparticles
GFP: Green fluorescent protein
PI: Propidium iodide
PVA: Poly(vinyl alcohol)
PVA-g-P(4-VP): Poly(vinyl alcohol) and 4-vinylpyridine complex

Techniques
BacLight: bacterial viability test
AFM: Atomic force microscopy
TEM: Transmission electron microscopy

Scientific terms
EC: Electrochemical communication
ET: Electron transfer
DET: Direct electron transfer
MET: Mediated electron transfer

General introduction

The work described in this thesis was carried out at the interface between three disciplines: electrochemistry, materials science and microbiology. The aim of this research was first of all to study the activity of bacteria immobilized in a silica film deposited on electrode surfaces by the sol-gel process. The film can be prepared electrochemically or by simply depositing a drop of sol on the electrode surface. It is then essential to preserve the viability of the bacteria in this sol-gel film and to promote electron transfer reactions between the bacteria and the electrode. The immobilization of membrane proteins within sol-gel thin films was then considered and applied to enzymatic electrocatalysis.
Ces protéines sont associées à des fragments membranaires ou vésicules qui peuvent être stabilisés dans le film sol-gel en utilisant des stratégies similaires à celles impliquées pour l'immobilisation de cellules entières. Les études proposées ici sont de natures fondamentales, mais peuvent trouver des applications dans les domaines des biocapteurs ou bioréacteurs. Un dernier travail, périphérique par rapport aux travaux précédents, a concerné l'encapsulation de bactériophage pour étudier l'influence de mode d'immobilisation sur l'infectivité du virus. L'électrochimie a tout d'abord été utilisée pour induire l'encapsulation de bactéries au sein de films hybrides organique-inorganique préparés par le procédé sol-gel (chapitre 2). En effet, il est possible d'utiliser l'électrolyse du sol pour produire localement de façon contrôlée à la surface de l'électrode les espèces OH -qui catalysent alors la gélification du matériau. Cette approche a été initialement proposée par Schacham et al. à la fin des années 90 avant d'être appliqué par Walcarius et al. à la génération de films silicatés mésostructurés. Cette même approche a ensuite été appliquée en 2007 à l'immobilisation de protéines rédox par le groupe ELAN du LCPME. L'application du dépôt sol-gel par assistance électrochimique pour l'encapsulation de bactéries se situe ainsi dans la continuité des travaux précédents, l'enjeu étant ici de protéger la viabilité des bactéries dans cette couche mince, malgré le stress de l'électrolyse. Deux modes de dépôt ont été développés pour l'encapsulation des bactéries par assistance électrochimique, en une étape par introduction des bactéries dans le sol avant dépôt, et en deux étapes, avec une première étape d'immobilisation des bactéries à la surface de l'électrode avant application de l'électrolyse dans un sol qui cette fois ci ne contient pas de bactéries. 
Dans cette première série d'expériences, la viabilité des bactéries a été évaluée en utilisant un kit de marqueurs fluorescents de l'ADN, Live/dead BacLight, permettant de caractériser l'intégrité membranaire. L'activité métabolique a également été testée en utilisant des bactéries génétiquement modifiées pour exprimer des protéines fluorescentes ou luminescentes en réponse à leur environnement. L'électrochimie a ensuite été utilisée comme un moyen analytique pour caractériser les réactions de transfert électronique entre des bactéries et différents médiateurs rédox (chapitre 3). Dans une première approche, un médiateur soluble, Fe(CN) 6 3-, a été utilisé, en nous inspirant ainsi des travaux décrits par le groupe de Dong. L'immobilisation du médiateur a ensuite été considérée. Deux stratégies ont été mises en oeuvre. La première fait appel à des nanotubes de carbone fonctionnalisés par des groupements ferrocène à l'aide d'un bras poly(oxyde d'éthylène). La seconde implique un médiateur naturel, le cytochrome c, qui est alors immobilisé dans la matrice de silice avec les bactéries, mimant de cette façon une stratégie développé par certains biofilms naturels pour augmenter les transferts électroniques vers l'accepteur final, minéral ou électrode. Cette dernière stratégie a d'abord été testée avec Shewanella Putrefaciens avant d'être approfondie avec Pseudomonas Fluorescens. Le troisième sujet traité dans cette thèse est l'électrochimie des protéines membranaires immobilisées dans un film sol-gel (Chapter 4). Plusieurs travaux décrits dans la littérature ont montré que ces protéines, associées à des fragments membranaires, pouvaient être stabilisés au sein de matériaux sol-gel, mais l'utilisation de ce mode d'immobilisation pour l'électrochimie reste rare ou inexistant selon le type de protéine considéré. Deux systèmes ont été étudiés. Le premier était un cytochrome de type P450 (CYP1A2) qui peut être utilisé pour l'élaboration de biocapteurs. 
Une propriété de ce type de protéine rédox est de pouvoir accepter un transfert électronique direct entre l'électrode et le centre catalytique. Nous avons étudié ici l'intérêt de la chimie sol-gel pour protéger la protéine afin de favoriser la stabilité de ce transfert électronique. Le second système est la mandélate déshydrogénase (ManDH). L'électrochimie de cette protéine implique un transfert électronique médié. La stabilité de cette réponse électrochimique, notamment en milieu convectif, a été testée et comparée à la réponse du système impliquant une simple adsorption des vésicules contenant cette protéine à la surface de l'électrode de carbone vitreux. La dernière section s'intéresse à l'influence de l'encapsulation du bactériophage ΦX174 dans une matrice sol-gel hybride sur son infectivité (chapitre 5). Ce type de virus, découvert au début du 20 ème siècle présente une infectivité spécifique pour certaines souches bactériennes pouvant être utilisée pour le traitement antibactérien, en remplacement ou complément des antibiotiques. L'encapsulation de virus dans un matériau sol-gel a été récemment considérée afin de permettre un relargage contrôlé dans l'organisme pour la thérapie génétique des cancers. Ce mode de relargage a également l'avantage de ralentir le développement d'anticorps en réponse à ce virus. L'apparition de souches bactériennes développant des résistances aux antibiotiques actuels a récemment remis en lumière les bactériophages comme moyen de lutte contre les infections bactériennes. Leur immobilisation contrôlée permettrait sans doute d'étendre le champ d'application de ces virus. Dans ce travail, l'effet de l'encapsulation des bactériophages dans une matrice sol-gel a été étudié, en tenant compte notamment de l'effet du séchage du matériau sur l'infectivité. Les matériaux ont été préparés sous la forme de monolithes et conservés en atmosphère humide ou sèche. 
Ces monolithes ont ensuite été réintroduits en solution pour relargage des bactériophages dont l'infectivité a été mesurée en présence d'Escherichia Coli. Différents additifs ont été introduits dans le gel de silice comme le glycérol ou de polymères chargés afin d'évaluer leur effet sur l'infectivité résiduelle. Toutes ces études sont précédées par une introduction sur l'intérêt des matériaux biocomposites dans les domaines médicaux et biotechnologiques (chapitre 1). Des généralités sur le procédé sol-gel, certains additifs organiques pouvant être introduits dans les matrices inorganiques pour l'élaboration de matériaux hybrides sont présentées en discutant leur intérêt potentiel pour notre travail. L'intérêt de l'encapsulation de protéines ou de microorganismes est également présenté en considérant les applications possibles, notamment en bioélectrochimie. Le principe de l'électrochimie des protéines rédox et des microorganismes et notamment les stratégies pour favoriser les réactions de transfert d'électron sont décrites. Enfin, le positionnement du sujet par rapports à la littérature est donné. Les méthodes et les techniques utilisées pour décrire les propriétés physico-chimiques des systèmes étudiés dans cette thèse, les différents protocoles sol-gel, notamment pour la modification des électrodes sont décrits dans la partie expérimentale. Enfin, une conclusion générale est proposée. General introduction The work reported in this thesis has been developed at the interface between three disciplines, i.e., electrochemistry, material science and microbiology. The purpose of this research was first to study the activity of bacteria immobilized in silica-based films prepared by the sol-gel process on electrode surfaces. The film can be prepared either by electrochemistry or drop-coating on the electrode surface. 
In such systems, the fundamental keys are to retain cell viability in the sol-gel film [1] and to promote electron transfer reactions between the entrapped bacteria and the electrode material [2]. The immobilization of membrane-associated redox proteins in sol-gel films has then been considered and applied to electrocatalysis. These proteins are associated with membrane fragments or vesicles that can be stabilized in sol-gel films using strategies similar to those reported for whole cells [3]. The studies reported here on bacteria and membrane-associated proteins are fundamental in nature, but may find applications, for example, in electrochemical biosensors or bioreactors. A more peripheral part of this PhD work is also presented: the influence of encapsulation in a sol-gel matrix on bacteriophage infectivity.

Electrochemistry was first used to induce the encapsulation of bacteria in hybrid sol-gel films (Chapter 2). Indeed, controlled cathodic electrolysis of the sol can be used to produce locally, at the electrode surface, a basic pH that catalyzes the rapid gelification of the sol. The approach was initially proposed by Shacham et al. at the end of the nineties [4] before being applied by Walcarius et al. to the generation of mesostructured films [5]. In 2007, the same approach was applied to the immobilization of redox proteins by the ELAN group of LCPME [6]. The application of electrochemically assisted sol-gel deposition to the encapsulation of bacteria is a continuation of these previous investigations, the challenge here being to protect the viability of the bacteria in such a thin film despite the stress of electrolysis.
Two approaches have been developed for electrochemically assisted bacteria encapsulation (EABE): a one-step approach, with the bacteria introduced into the sol before deposition, and a two-step approach, with the bacteria first immobilized on the electrode surface (ITO) before applying the electrolysis in a sol that, in this case, does not contain any bacteria. In this first set of experiments, the viability of the microorganisms was simply controlled with the commercial Live/Dead BacLight test, based on staining of the bacterial DNA with fluorescent dyes. Metabolic activity was also tested using genetically modified bacteria that express fluorescent proteins or emit luminescence in response to their environment.

Electrochemistry was then considered as an analytical method. Bacteria were encapsulated in silica-based films and the electron transfer reactions from the bacteria to different redox mediators were monitored (Chapter 3). The first approach simply involved ferricyanide as a soluble mediator, as reported by the group of Dong [7]. Then, immobilization of the mediator was considered, following two different strategies. The first one involved carbon nanotubes functionalized with ferrocene moieties through a long poly(ethylene oxide) chain. The second strategy involved a natural mediator, cytochrome c from bovine heart, immobilized inside the silica gel, mimicking in some respects strategies used by natural biofilms to increase their ability to transfer electrons towards a final acceptor, an electrode or a mineral. This latter strategy was initially tested with Shewanella putrefaciens before being studied systematically with Pseudomonas fluorescens.
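As an aside on how Live/Dead-type staining data are commonly reduced to a single number, the minimal sketch below illustrates the idea that viability is estimated from the relative intensities of the membrane-permeant (green, SYTO 9) and membrane-impermeant (red, PI) dyes. The helper function is hypothetical and not the actual BacLight protocol, which relies on a calibration curve of the green/red fluorescence ratio.

```python
def viability_fraction(green_intensity: float, red_intensity: float) -> float:
    """Illustrative viability estimate from Live/Dead-type staining.

    green_intensity: signal from the membrane-permeant dye (all cells)
    red_intensity:   signal from the membrane-impermeant dye (damaged cells)
    Returns the fraction of total signal attributed to membrane-intact cells.
    """
    total = green_intensity + red_intensity
    if total <= 0:
        raise ValueError("no fluorescence signal")
    return green_intensity / total

# Example: a strongly green-dominated sample reads as mostly viable.
print(viability_fraction(80.0, 20.0))  # 0.8
```

In practice, a standard curve made from known mixtures of live and killed cells is used to convert the measured ratio into a percentage of viable cells.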
Several reports in the literature have shown that such proteins, associated with membrane fragments, can be stabilized in sol-gel materials, but applications to bioelectrochemistry remain rare [8,9] or, depending on the protein considered, nonexistent. Two systems have been studied. The first one was cytochrome P450 (CYP1A2), which is currently considered for biosensor development [10,11]. One major interest of this protein is its ability to undergo direct electron transfer (DET) during the bioelectrochemical reaction. Here, the potential of sol-gel chemistry to provide a protective environment for the protein, in order to promote a stable direct electron transfer reaction, was studied. The second system studied in this section was membrane-associated mandelate dehydrogenase (ManDH). The electrochemical application of this protein involves mediated electron transfer (MET). The stability of the electrochemical response, notably in a convective environment, was tested and compared with that obtained by simple adsorption of the proteins on glassy carbon electrodes.

A final section considered the influence of encapsulation in a hybrid sol-gel matrix on the infectivity of bacteriophage ΦX174 (Chapter 5). Such viruses, discovered at the beginning of the 20th century [12], show infectivity towards specific bacterial strains and can be considered for antibacterial treatment. Encapsulation of viruses in sol-gel materials has recently been considered in order to achieve controlled release in the organism for cancer gene therapy [13]. Silica gel-based delivery moreover has the advantage of slowing down the development of anti-adenovirus antibodies. With the appearance of bacterial strains displaying resistance to antibiotics, interest in bacteriophages for the development of antibacterial treatments has recently reemerged [14]. Their controlled immobilization could help extend their applications.
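Bacteriophage infectivity in such studies is typically quantified by a plaque assay (an assumption here; the exact protocol is not detailed in this introduction). A minimal sketch of the standard titer calculation, with a hypothetical helper name:

```python
def pfu_per_ml(plaque_count: int, dilution_factor: float,
               plated_volume_ml: float) -> float:
    """Standard plaque-assay titer: PFU/mL = plaques / (dilution x volume plated).

    dilution_factor: fraction of the original sample present in the plated
                     dilution, e.g. 1e-6 for a 10^-6 serial dilution.
    plated_volume_ml: volume of diluted sample plated on the bacterial lawn.
    """
    if dilution_factor <= 0 or plated_volume_ml <= 0:
        raise ValueError("dilution factor and plated volume must be positive")
    return plaque_count / (dilution_factor * plated_volume_ml)

# Example: 42 plaques from 0.1 mL of a 10^-6 dilution -> 4.2e8 PFU/mL.
print(pfu_per_ml(42, 1e-6, 0.1))
```

Comparing the titer released from a stored gel with that of the initial suspension gives the residual infectivity discussed below.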
Here, the effect of their encapsulation in a sol-gel matrix was studied, taking into account the effect of drying on their infectivity. Materials were prepared in the form of monoliths and stored in a humid or dry atmosphere. After storage, the gels were introduced into solution to release the bacteriophage, whose infectivity was measured in the presence of E. coli. Different organic additives, such as glycerol or a positively charged polyelectrolyte, were introduced into the gel, and their effects on the infectivity of the released bacteriophage were studied.

All these studies are preceded in the thesis by a brief introduction to the interest of biocomposites in the biomedical and biotechnological fields (Chapter 1). Background on sol-gel materials and on the organic additives that can be incorporated to fabricate sol-gel based hybrid materials is discussed. The advantages of protein or microorganism encapsulation and their applications in bioelectrochemical and biotechnological fields, as well as the principles of wiring redox proteins or microorganisms in bioelectrochemical devices, are presented as reported in the literature. Finally, the positioning of the subject with respect to the literature is described. A methods and techniques section then describes the physico-chemical characterization of the studied compounds, the various sol-gel preparation methods, the electrode modifications, and the experimental techniques and analyses used in this work. A conclusion closes the manuscript.

Interest of biocomposite materials

Enzymes, bacteria and viruses are nowadays attractive systems for technological devices owing to their simplicity of manipulation, high selectivity and sensitivity. Their bioanalytical features are increasingly attracting attention in the pharmaceutical, environmental, medical, and industrial fields.
The real challenge faced by biological devices is to fulfill the ever-growing requirements of environmental legislation in terms of response time, selectivity, stable reactivity and cost [1,2]. To achieve this, enzymes, bacteria and viruses should be entrapped in a biodegradable composite that maintains a high viability/activity rate during storage and enhances communication with the surrounding environment [1]. Entrapped cells indeed offer advantages over free cells, such as increased metabolic activity, protection from environmental stresses and toxicity, and increased plasmid stability; they may also act as cell reservoir systems, prolonging reaction times [3,4].

Several chemical or physical methods for bio-entrapment have been proposed in the literature. In chemical methods, enzymes and bacteria can be attached to an inert and biocompatible matrix through cross-linking using a bifunctional reagent. Proteinic supports, e.g., bovine serum albumin [5,6] and gelatin [7,8], are typically used to constitute the network, with glutaraldehyde (GA) as the cross-linking agent. This method is rather simple and rapid. Nevertheless, it also has some drawbacks, especially when considering microorganisms. In particular, cross-linking involves the formation of covalent bonds between GA and the functional groups located on the outer membrane of the microorganisms or on the enzymes. This mode of immobilization is consequently not suited when cell viability is absolutely required or when the enzymes involved in the detection are expressed at the cell surface [2]. Among physical methods, physical adsorption is the simplest and softest method for immobilizing biomolecules and microorganisms. Nevertheless, it results in weak electrostatic bonding that may cause easy desorption of the proteins and microorganisms from the surface during storage or analysis [2,9]. Another way to avoid leakage is to entrap them in an adequate material.
Different strategies have been developed in this context. In a first strategy, the cells are filtered through a porous membrane, e.g., Teflon [10], polycarbonate [11], cellulose nitrate [12], silicon [13] or nylon [14], which is subsequently fixed to the electrode. In a second strategy, cells are retained near the transducer surface by a dialysis membrane [15]. These two modes of immobilization have been widely used for biosensor construction.

Chapter I. Knowledge and literature survey

In a third approach, proteins and microorganisms are entrapped in a chemical or biological polymeric matrix. Sol-gel silica [16][17][18] or hydrogels such as poly(vinyl alcohol) (PVA) [19] and alginate [20] are typically used for that purpose. These polymers can efficiently protect microorganisms from external aggression. On the other hand, they may form a diffusion barrier that restricts the accessibility of the cells to the substrate and/or decreases electron transfer reactions [2]. The swelling properties of hydrogels may also limit their practical application in some cases. The combination of sol-gel and hydrogel materials, using a sol-gel-derived composite based on silica sol and a PVA-4-vinylpyridine copolymer, prevents the silica glass from cracking during the sol-gel transition, limits hydrogel swelling and protects bacterial viability [21]. To date, natural or engineered proteins, antibodies, antigens, DNA, RNA, cell membrane fractions, organelles and whole cells have been encapsulated in a diverse range of inorganic, organic, and hybrid materials [22]. The realm of sol-gel biocomposites offers significant advances in technologies associated with biology, from the fabrication of biosensors, biocatalysts, and bioartificial organs to the fabrication of high-density bioarrays and bioelectronic devices.
Indeed, it is expected that the coming years will witness the realization of a variety of research and industrial applications, especially in the catalysis, sensing/monitoring, diagnostics, biotechnology, and biocomputing sectors [22]. The next section provides some general information about the sol-gel process.

Fig I-1. Applications and issues of bioencapsulation. (a) Summary of the major fabrication requirements of bio-immobilization technologies. Critical demands are broad applicability to a diverse range of biological materials, accommodation of specific co-immobilization, activation and/or stabilization prerequisites, availability of a variety of polymer chemistries, and amenability to macro- and micro-fabrication in different formats. (b) Overview of the performance characteristics of ideal bio-immobilizates. The major issues are the physicochemical robustness of the polymer matrix and the stabilization of the immobilized biological material. These are especially important in relation to storage stability, sustained performance in gaseous and liquid media and heterogeneous environments, efficient functioning in biologically, chemically and physically aggressive environments, sustained catalytic performance in large-scale bioreactors, and close-tolerance functioning of electronics-interfaced micro-fabricated immobilizates (adapted from reference [23]). Abbreviations: liq, liquid; UF, ultrafiltration; NF, nanofiltration; NCF, near-critical fluid; SCF, super-critical fluid.

Principles of sol-gel chemistry and processing

The sol-gel process is a wet-chemical technique widely used in the fields of materials science and ceramic engineering. In this procedure, the initial precursor sol gradually evolves towards the formation of a gel-like diphasic system containing both a liquid phase and a solid phase, whose morphologies range from discrete particles to continuous polymer networks. A sol is a stable dispersion of colloidal particles in a liquid.
Colloids are solid particles with diameters of 1-1000 nm. A gel is an interconnected, rigid network with pores of submicrometer dimensions and polymeric chains whose average length is greater than a micrometer [24]. In the sol-gel process, the starting precursor can be either an alkoxide or an aqueous form. Alkoxides are available as pure, single molecules whose reactivity toward hydrolysis can be efficiently controlled, so that the nature of the species that will effectively condense to form a gel can be selected [25]. Overall, the alkoxide-based sol-gel route is more flexible in terms of reaction conditions, chemical nature, functionality, and processing. However, when one considers biology-related applications of sol-gel chemistry, alkoxide precursors exhibit several limiting features. Alkoxide hydrolysis leads to the release of parent alcohol molecules that may be detrimental to biological systems. This is a major concern for material design itself, but also in terms of ecological impact when large-scale applications are to be developed [25,26]. Considering the current knowledge in the field of sol-gel materials, it can be proposed that the "alkoxide" and "aqueous" routes may be suitable for different applications, related to their processing. Silicon alkoxides appear convenient precursors for the …

Fig I-2. Overview of the sol-gel process (adapted from references [25,27]).

Hydrolysis

Sol-gel silica synthesis is based on the controlled condensation of Si(OH)4 entities. These may be formed by hydrolysis of soluble alkali metal silicates or alkoxysilanes. The most commonly used compounds are sodium silicates, tetraethoxysilane (TEOS) and tetramethoxysilane (TMOS) [23].

Hydrolysis of aqueous silica

The speciation of silicate solutions is controlled by the pH of the medium and the silica concentration [25]. Monomolecular Si(OH)4 is the predominant solution species below pH 7; at higher pH, anionic and polynuclear species are formed.
For this reason, sodium silicate solutions are acidified in order to generate Si(OH)4 species that then condense to form siloxane bonds. The characteristics of sodium silicate solutions depend on the SiO2:Na2O ratio.

Hydrolysis of alkoxysilanes

The preparation of a silica glass can begin with an appropriate alkoxide, such as Si(OR)4, where R is mainly CH3, C2H5, or C3H7, which is mixed with water and/or a mutual solvent to form a homogeneous solution. Hydrolysis leads to the formation of silanol groups (SiOH) (Eq. I-1). It has been well established that the presence of H3O+ in the solution increases the rate of the hydrolysis reaction [24,25].

Eq I-1. Hydrolysis of alkoxide silica precursor in acidic medium.

Condensation

In a condensation reaction, two hydrolyzed or partially hydrolyzed molecules can link together by forming siloxane bonds (Si-O-Si) (Eq. I-2). This type of reaction can continue to build larger and larger silicon-containing molecules and eventually results in an extended silica network. In acidic conditions, the rate of hydrolysis is faster than that of condensation [24,25]. Sol-gel materials, purely inorganic or hybrid organic-inorganic in nature, are generated by polycondensation induced either by evaporation of the sol solvent or by electrochemically assisted deposition [28,29].

Solvent evaporation

- Coating mode (drop-, spin- and dip-coating): a process where the substrate (biomolecules or microorganisms) is mixed with the sol, which is then dropped, spun or dipped onto the surface, allowing the solvent to evaporate (Fig. …).
- Monolith: defined as a bulk gel (smallest dimension ≥ 1 mm). Monolithic gels are potentially of interest because they are stable at room and low temperature [25]. In addition, a bulk gel can be more convenient for the entrapment of microorganisms and viruses because its thickness provides greater protection [17].
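For reference, the hydrolysis (Eq. I-1) and condensation (Eq. I-2) reactions of a silicon alkoxide discussed above can be written in their generic textbook forms. Complete hydrolysis is shown for simplicity; in practice, partially hydrolyzed species Si(OR)4-x(OH)x also condense.

```latex
% Eq. I-1: acid-catalyzed hydrolysis of a silicon alkoxide
\mathrm{Si(OR)_4 + 4\,H_2O \xrightarrow{\;H_3O^+\;} Si(OH)_4 + 4\,ROH}

% Eq. I-2: condensation of (partially) hydrolyzed species into siloxane bonds
\mathrm{{\equiv}Si{-}OH + HO{-}Si{\equiv} \;\longrightarrow\; {\equiv}Si{-}O{-}Si{\equiv} + H_2O}
```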
Electrochemically assisted deposition

An alternative method for the polycondensation of hydrolyzed silica precursors has been proposed by Shacham et al., coupling electrochemistry with sol-gel processing [30]. The basic idea of electrodeposition is to facilitate the polycondensation of sol precursors by electrochemical control of the pH at the electrode/solution interface [31], thereby affecting the kinetics of the sol-gel process. Starting from a sol in which hydrolysis is optimal (i.e., pH 3) and condensation very slow, applying a negative potential increases the pH at the electrode/solution interface (Fig. I-4), thereby catalyzing polycondensation at the electrode surface [29]. The thickness of the film deposited on the electrode is affected by the applied potential, the electrodeposition time, and the nature of the electrode [30]. Electrochemically assisted deposition can also be applied to non-silica precursors such as zirconia or titania [32,33]. In addition, it can be advantageously combined with surfactant templating to generate highly ordered mesoporous sol-gel films with a unique mesopore orientation normal to the underlying support [34,35]; this has also been exploited to prepare vertically aligned silica mesochannels bearing organo-functional groups [36,37].

Gelation

The hydrolysis and condensation reactions discussed in the preceding sections lead to the growth of clusters that eventually collide and link together to form a network called a gel. Gels are defined as "strong" or "weak" according to the stability of the bonds formed, which can be permanent or reversible; the difference between strong and weak gels is related to the gelation time [25]. The gelation rate also influences the pore structure.
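The local pH increase exploited in electrochemically assisted deposition is generally attributed to cathodic reactions that consume protons or generate hydroxide ions at the electrode surface. The reactions commonly invoked in the literature are:

```latex
% Proton and water reduction at the cathode
\mathrm{2\,H^+ + 2\,e^- \longrightarrow H_2}
\mathrm{2\,H_2O + 2\,e^- \longrightarrow H_2 + 2\,OH^-}

% Oxygen reduction (in aerated sols)
\mathrm{O_2 + 2\,H_2O + 4\,e^- \longrightarrow 4\,OH^-}
```

The relative contribution of each reaction depends on the applied potential, the electrode material and the dissolved oxygen content of the sol.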
Fast gelation gives an open structure because the particles are quickly connected and cannot undergo further rearrangements [38].

Aging

Aging takes place once the solution starts to lose its fluidity and takes on the appearance of an elastic solid. During aging, four processes affect the porous structure and the surface area of the silica gel [25,45]:

- Polycondensation: the further reaction of silanol and alkoxy groups in the structure to form siloxane bonds. The reaction results in densification and stiffening of the siloxane network.
- Syneresis: the shrinkage of the gel network, caused by the condensation of surface groups inside pores, resulting in pore narrowing. In aqueous gel systems, the pore size is controlled by an equilibrium between electrostatic repulsion and van der Waals forces; in this case, shrinkage is produced by addition of an electrolyte.
- Coarsening, or Ostwald ripening: the dissolution and re-deposition of small particles. Necks between particles also grow and small pores may be filled in. This results in an increase in the average pore size and a decrease in the specific surface area.
- Phase transformation: this can occur during aging in several forms, such as separation of the solid phase from the liquid (microsyneresis), segregation of partially reacted alkoxide droplets (the gel turns white and opaque), crystallization and precipitation.

In summary, the pore structure, surface area and stiffness of the network are changed and controlled by the following parameters: time, temperature, pH and pore fluid [9].

Drying

The gel drying process consists of the removal of water from the interconnected pore network, with simultaneous collapse of the gel structure, under conditions of constant temperature, pressure, and humidity. Large capillary stresses can develop during drying when the pores are small (< 20 nm).
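The magnitude of these capillary stresses can be estimated with the Young-Laplace relation, P = 2γ cos(θ)/r. The sketch below (the function name and numerical values are illustrative, not taken from the thesis) shows that water-filled pores of about 10 nm radius already sustain pressures on the order of 10 MPa, which is why such small pores crack the gel:

```python
import math

def capillary_pressure(surface_tension: float, pore_radius: float,
                       contact_angle_deg: float = 0.0) -> float:
    """Young-Laplace capillary pressure (Pa) in a cylindrical pore.

    surface_tension:   liquid surface tension in N/m (water: ~0.072)
    pore_radius:       pore radius in m
    contact_angle_deg: liquid/solid contact angle (0 = perfectly wetting)
    """
    return 2.0 * surface_tension * math.cos(math.radians(contact_angle_deg)) / pore_radius

# Water in a 10 nm radius pore: 2 * 0.072 / 1e-8 ~ 1.44e7 Pa (~14 MPa).
print(capillary_pressure(0.072, 10e-9))
```

The inverse dependence on pore radius makes clear why eliminating the smallest pores, or removing the liquid-vapor interface altogether, relieves the drying stress.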
These stresses will cause the gels to crack catastrophically unless the drying process is controlled: by decreasing the liquid surface energy through the addition of surfactants, by eliminating very small pores, by supercritical evaporation (which avoids the solid-liquid interface), or by obtaining monodisperse pore sizes through control of the rates of hydrolysis and condensation [24,25].

Sol-gel based bio-hybrid materials

Sol-gel is a suitable matrix for the immobilization of enzymes, viruses and bacteria in biotechnological applications. However, sol-gel precursors and processing conditions are sometimes not mild enough to design a stable and safe biocomposite. For instance, the release of alcohol during the hydrolysis-condensation of silicon alkoxides has been considered an obstacle, due to its potential detrimental effects on the entrapped proteins and microorganisms [39,40]; the ionic strength of aqueous precursors has also been considered a harmful environment for microorganism entrapment [39,41]. In addition, the constraints and shrinkage occurring during gelation and drying, respectively, lead to fracture of the material, pore collapse and loss of biological activity [42]. In order to overcome the limitations of inorganic matrices, sol-gel based hybrid materials have been proposed as stable systems containing an inorganic precursor and organic and/or polymeric additives [22]. Hybrid materials can effectively eliminate the brittleness of purely inorganic sol-gels and the swelling of some pure polymers or hydrogels.

Biocompatible silica precursors

Biocompatible silica precursors have to be used in order to avoid denaturation of the entrapped biological material. TMOS [Si(OMe)4] and TEOS [Si(OEt)4] are commonly used as biocompatible precursors after evaporation, under vacuum, of the released methanol and ethanol, respectively.
Aqueous sol-gel precursors are otherwise commonly used in order to avoid any trace of alcohol, for example a sodium silicate solution [43] or a mixture of sodium silicate and Ludox® suspension [18,39]. Another way to avoid denaturation by alcohol is to use silanes bearing biocompatible polyol-based hydrolyzable groups that can be hydrolyzed under mild pH conditions [44].

Organic additives

Sugars

Sugars can be used as additives to stabilize biological material within sol-gel matrices. Chymotrypsin and ribonuclease T1 have been entrapped in the presence of sorbitol and N-methylglycine. Glucose and amino acids significantly increase the thermal stability and biological activity of the proteins by altering their hydration state and increasing the pore size of the silica matrix [45]. D-glucolactone and D-maltolactone have also been covalently bonded to the silica network via the coupling reagent aminopropyltriethoxysilane (APTES), giving rise to non-hydrolyzable sugar moieties. Firefly luciferase, entrapped in such matrices, has been used for the ultra-sensitive detection of ATP via bioluminescent reactions [46]. Trehalose and sucrose are non-reducing disaccharides that may be accumulated at high concentrations by many organisms capable of surviving complete dehydration. They have been shown to be excellent stabilizers of many biomolecules and living cells in the dry state, superior to other sugars [47,48]. The stabilizing effects of trehalose have been attributed to several mechanisms and interactions with critical biomolecules or cellular structures such as membranes and proteins. The stabilization mechanism for membranes is often referred to as the water replacement hypothesis: trehalose is thought to form hydrogen bonds with membrane phospholipids and proteins, replacing the water molecules removed during drying [49].
This, in turn, inhibits van der Waals interactions between adjacent lipids, thus preventing the elevation of the membrane phase transition temperature (Tm). Such a Tm change could cause membranes to shift from their native liquid-crystalline state to a gel state under environmentally relevant temperatures. As the membranes of biomolecules and microorganisms pass through this phase transition, regions with packing defects make the membranes leaky. Also, trehalose may interact by hydrogen bonding of its -OH groups to polar residues in proteins, hence preserving the conformational state against dehydration processes [50]. Trehalose has also been demonstrated to decrease the oxidative damage caused by oxygen radicals [51] and to be accumulated in response to heat shock and cold exposure, allowing bacterial viability at low temperatures. It was shown in the literature that trehalose considerably increases the tolerance of E. coli to drying processes [41].

Glycerol

Glycerol (or glycerin) is a simple polyol compound. It is a water-soluble, sweet-tasting and non-toxic compound which is widely used in pharmaceutical formulations. As known from the literature, glycerol is an example of a compatible solute which can be accumulated in cells under hyperosmotic conditions to allow them to tolerate osmotic stresses [47]. Thus, glycerol is used as a powerful osmotic stabilizer, which lowers water activity, prevents contraction of the gel, and also protects cells against encapsulation and freezing stresses [17,18,41,47]. Thanks to its water solubility, biocompatibility and ability to maintain silica gel cohesion, glycerol in aqueous sol-gel monoliths succeeded in preserving 60% of bacterial viability against the stresses of sol gelation and aging over one month [17].

Polymer additives

Poly(ethylene glycol) (PEG)

PEG is a flexible, water-soluble polyether polymer with applications ranging from industrial manufacturing to medicine.
Apart from structure modification, it can play several important roles in sol-gel applications: it decreases the non-specific binding of proteins and living cells to the surrounding matrix and fills the pores and spaces in the matrix, thereby preventing matrix shrinkage and collapse [52]. In turn, this contributes to protecting the entrapped biomolecules and microorganisms from denaturation and activity loss [42,47,53]. The introduction of PEG into sol-gel material has proven to be efficient for the safe immobilization of bacteria in a stable sol-gel film [54].

Poly(ethyleneimine) (PEI)

Cationic polymers such as PEI, poly(allylamine) and poly(dimethyldiallylammonium chloride) have shown critical effects on the stability of encapsulated enzymes [55]. PEI, a polymer whose repeating unit is composed of an amine group and a two-carbon aliphatic (CH2CH2) spacer, has been used for irreversible bacterial adhesion on solid materials via electrostatic interactions between its amine groups and the membranes of proteins and bacteria [56]. Thus, PEI could be a safe agent for the immobilization of biological material on the surface of conducting electrodes for the construction of electrochemical biosensors.

Chitosan

Chitosan is a very abundant biopolymer obtained by the alkaline deacetylation of chitin. It is a heteropolymer containing both glucosamine and acetylglucosamine units. The introduction of the chitosan polymer aims at counterbalancing some of the silica disadvantages, especially cracking upon aging/drying. Moreover, it may provide additional properties such as ion binding capacity, photoluminescence or chirality [57].
On the basis of IR studies evidencing interactions between chitosan and silica materials, a model proposed H-bonds between silanol groups and the amide and hydroxyl groups of chitosan, ionic bonds between chitosan amino groups and silanolate, as well as covalent links resulting from the transesterification of chitosan hydroxyl groups by silanol (Fig. I-5) [57,58]. However, the fact that amino groups remain available for further binding suggests that these functions are only weakly bonded to the silica network [59]. Due to the strong compatibility and interactions between chitosan and sol-gel, these hybrid materials were also shown to be suitable for the encapsulation of enzymes [60][61][62]. Moreover, chitosan has shown a considerable possibility to be electrodeposited with proteins for biosensor construction [63,64]. The electrodeposition mechanism involves a cathodic electrolysis, as for the electrodeposition of silica, which could be interesting for the co-deposition of both chitosan and silica to form electrodeposited hybrid materials. First, chitosan can be protonated and dissolved in acidic solutions. At low pH, protonated chitosan becomes a cationic polyelectrolyte (Eq. I-3) [63].

Chit-NH2 + H3O+ → Chit-NH3+ + H2O
Eq I-3. Protonation of chitosan polymer in acidic medium.

However, an increase in solution pH results in a decreasing charge, and at pH = 6.5 the amino groups of chitosan become deprotonated. A high pH can be generated at the cathode surface using the cathodic reduction of water discussed before in section 2.2.2 (Eq. I-4, Fig. I-4) [63].

2H2O + 2e- → H2 + 2OH-
Eq I-4. Cathodic reduction of water molecules.

The electric field then drives the electrophoretic motion of the charged chitosan macromolecules toward the cathode, where chitosan forms an insoluble deposit (Eq. I-5) [63].

Chit-NH3+ + OH- → Chit-NH2 (insoluble) + H2O
Eq I-5. Cathodic electrodeposition of chitosan polymer by precipitation.
The development of an electrochemically-assisted co-deposition of silica gel and chitosan polymer could provide a stable, biodegradable and safe composite for biomolecule and cell entrapment towards biosensing applications [62]. Finally, sol-gel based hybrid materials have gained interest by combining the attractive properties of both organic and inorganic materials for biotechnologies and electrochemistry [65]. Hybrid materials exhibit chemical and physicochemical features that might be readily exploited when used to modify electrode surfaces, owing to the versatility of sol-gel chemistry in the design of electrochemical devices. Applications cover various fields such as sensors, reactors, batteries, and fuel cells [27]. The next section will discuss the interesting features of sol-gel based hybrid materials for the entrapment of biological material and their potential applications in biotechnologies.

Sol-gel bioencapsulation and applications

The last decade has seen a revolution in the area of sol-gel-derived biocomposites since the demonstration that these materials can be used to encapsulate biological species in a safe manner. The use of proteins, nucleic acids and even whole cells in these applications relies largely on the successful immobilization of the biological objects in a physiologically active form [26,66].

Encapsulation of proteins

Proteins can be broadly categorized as either soluble or membrane-associated. Soluble proteins, which include enzymes, antibodies, regulatory proteins and many others, reside in an aqueous environment, and thus are generally amenable to aqueous processing methods for immobilization. The solubility of these proteins arises from the presence of polar or charged amino acid residues on the exterior surface. Therefore, immobilization techniques for soluble proteins must provide a hydrated environment at a pH that does not alter the structure of the proteins and does not have significant polarity differences relative to water.
The group of Avnir was one of the pioneers in developing the sol-gel encapsulation technique for soluble proteins such as enzymes [67], a field which has become an important area of research and technology [68]. More recently, the group of Walcarius has succeeded in preserving and stabilizing the activity of different enzymes in sol-gel films, based on a solvent evaporation protocol [69,70] and an electrochemically-assisted protocol [71,72], for bio-electrochemical applications. In addition, the bioencapsulation field has also included the entrapment of the redox protein c-cytochrome in silica nanoparticles for designing electrochemical biosensors [73]. On the other hand, membrane-associated proteins are either completely (intrinsic membrane proteins) or partially (extrinsic membrane proteins) embedded within cellular lipid membranes [66] (Fig. I-7). The distinguishing characteristic of membrane proteins, with respect to soluble ones, is the presence of both hydrophobic residues, which associate with the lipophilic membrane, and hydrophilic amino acid residues, which associate with the aqueous environment on either side of the membrane [66]. Therefore, successful immobilization of membrane proteins must address two important issues. Firstly, the method must allow retention of the tertiary folded structure of the protein, as is the case for soluble proteins. Secondly, it must accommodate the phospholipid membrane structure, which is held together mainly through hydrophobic interactions [66]. These issues are necessary to retain the membrane intact, or at a minimum the protein-associated lipids, in order to accommodate both the hydrophilic and hydrophobic portions of the protein. This latter requirement suggests that the use of organic solvents should be avoided and highlights the need for aqueous processing methods during immobilization [66].
A wide number of strategies have been directed at the immobilization of bilayer lipid membranes [75], including physical adsorption of a bilayer through deposition, covalent attachment of a monolayer or bilayer of phospholipids to a solid surface, and attachment via avidin-biotin linkages. Some of these typical strategies are illustrated in Fig. I-8. The biocompatibility and stability of silica-based gels have attracted attention for the immobilization of membranes and membrane proteins such as pyrene-labeled liposomes and dye-encapsulating liposomes [76,77], the photoactive proton pump bacteriorhodopsin (bR) [78], two ligand-binding receptors, the nicotinic acetylcholine receptor (nAChR) and the dopamine D2 receptor [79], hydrogenases [80] and cytochrome P450s [81]. In most cases, water-soluble poly(ethylene glycol) (PEG) or glycerol was required as an additive to maintain the receptors in an active state (ca. 40-80% activity relative to solution) [74,80]. The transition from traditional silica precursors and processing methods toward more biocompatible processing methods has been critical for the successful entrapment of many membrane-bound proteins. The use of polyols such as glycerol or poly(ethylene glycol) has also proven to be highly beneficial for the stabilization of entrapped membrane proteins, yet the underlying mechanisms of stabilization by these additives are still not fully understood. Fundamental and technological advances would expedite the use of these membrane-protein-doped materials for applications including microarrays, bioaffinity chromatography, biosynthesis, biocatalysis, bioselective solid-phase microextraction, and energy storage [66].
Encapsulation of bacteria

An ideal immobilization matrix should be functional at ambient temperature and biocompatible, should enable complete retention of the cells, and should allow the flow of nutrients, oxygen, and analytes through the matrix. In theory, encapsulated cells should remain viable but not growing [21]. This aspect is of paramount importance when immobilized cells have to be stored for long periods of time without loss of viability. In this manner, living cells can be regarded as a "ready-to-use reagent" that can be stored for a prolonged time under carefully controlled conditions; once activated, e.g., by a defined temperature change or addition of nutrients, a constant and reproducible number of viable cells can be obtained for selected applications (e.g., biosensing, bioremediation, fermentation) [1]. Cells have been successfully encapsulated in organic polymers (e.g., hydrogels such as agar) [82], in inorganic matrices (e.g., sol-gel) [41] and in sol-gel based organic-inorganic hybrid materials [17,18,21,41,47]. All of them performed well in terms of solute diffusion and cell retention, although encapsulation in organic polymers may suffer from cell leakage due to their swelling behavior. The real challenges in this field are to avoid the sol-gel constraints during the gelation and aging processes and the impact of gel drying on long-term cell viability. Most of the cell encapsulations described in the literature have applied the solvent evaporation mode and the monolithic form [18,21,39,83,84], in addition to thick sol-gel layers [85,86], in order to maintain the water/solvent ratio as long as possible for preserving cell viability.
Only a limited number of works succeeded in keeping cell viability in thin films, either by cell encapsulation in multi-layered silica thin films [87], which protect the encapsulated cells from direct contact with the surrounding media, avoiding leaching and rapid drying, or by a cell-directed assembly protocol [88], in which phospholipids can self-organize around cells and confine water in their vicinity, allowing long-term preservation under ambient conditions. To date, only a limited number of works have been based on electrochemically-assisted cell encapsulation. One approach succeeded in encapsulating protein and bacteria in a thick electrodeposited hybrid sol-gel film, but the authors only confirmed the conserved membrane integrity of the encapsulated bacteria without providing accurate details, such as the ratio of PEG to silica material, the duration and the percentage of conserved viability [89]. Other approaches performed the deposition with alginate polymer instead of sol-gel polymer [90,91]. Photosynthetic microorganisms have also attracted interest as a renewable source of energy to replace the rapidly depleting fossil fuels [92]. The encapsulation approach in sol-gel materials is likely to retain their photosynthetic activity for the conversion of carbon dioxide into biofuels under light energy [83,84]. Thus, the scientific impact of bioencapsulation in sol-gel relies on cell viability, accompanied by cell integrity, metabolic activity and protein synthesis of the encapsulated bacteria, for environmental biotechnologies [4,17]. All these works have exploited the sol-gel deposition protocol based on solvent evaporation and the monolithic form.

Bioreporter

A genetically-engineered reporter consists of a gene promoter as a sensing element for chemical or physical changes and a reporter gene(s) coding for a measurable protein. The ZntA gene promoter is known for the detection of toxic metals [95]; it could be incorporated into a luminescent or GFP reporter construct for the early detection of toxic metals such as cadmium, mercury, etc.
The approach based on encapsulation in sol-gel materials has opened a route for the design of long-term efficient fluorescent and luminescent reporters based on cells genetically modified with the green fluorescent protein (GFP) or with luminescent properties, responding to general toxicity and genotoxicity [85,86,96]. All these works have used the sol-gel deposition protocol based on solvent evaporation and thick sol-gel films.

Fig I-10. Bacterial toxicity assay principle. A quantifiable molecular reporter is fused to specific gene promoters, known to be activated by the target chemical(s) (adapted from reference [93]).

Biosensor

A biosensor is a device that enables the identification and quantification of an analyte of interest from a sample matrix, for example water, food, blood, or urine. As a key feature of the biosensor architecture, biological recognition elements that selectively react with the analyte of interest are employed (Fig. I-11) [Borgmann et al.; Souza et al.]. The approach based on encapsulation in sol-gel matrices has also opened a route for the development of sol-gel based biosensors, such as for biochemical oxygen demand (BOD) or for the detection of toxic organophosphates, which result from the extensive use of pesticides and insecticides as well as from potential chemical warfare agents [Yu et al.]. BOD is used to characterize the organic pollution of water and wastewater, which is estimated by determining the amount of oxygen required by aerobic microorganisms for degrading organic matter in wastewater [21,100,101]. The encapsulation approach has succeeded in preserving cell viability and metabolic activity in a solvent evaporation based protocol [Souza et al.].
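As an illustrative aside (not taken from the cited works), the readout of such whole-cell reporters and biosensors is commonly expressed as a fold induction of the reporter signal relative to an unexposed control; the helper names and the 2-fold decision threshold below are assumptions for the sketch.

```python
def fold_induction(signal: float, control: float) -> float:
    """Reporter output (luminescence or GFP fluorescence) normalized to an unexposed control."""
    if control <= 0:
        raise ValueError("control signal must be positive")
    return signal / control

def is_induced(signal: float, control: float, threshold: float = 2.0) -> bool:
    """Flag the analyte as detected when induction reaches an (assumed) 2-fold threshold."""
    return fold_induction(signal, control) >= threshold
```

For example, a GFP signal of 450 a.u. against a control of 100 a.u. gives a 4.5-fold induction and would be scored as a positive response under this criterion.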
Moreover, the electrochemically-assisted cell encapsulation in a sol-gel matrix could extend the biotechnology to the micro- and nano-scale level of environmental and diagnostic analysis for the development of bioelectronics and biosensors [90,91].

Biofuel cells

A new form of green energy, based on the efficient conversion of organic matter into electricity, is now feasible using biofuel cells. In these devices, microorganisms support their growth by oxidizing organic compounds while the anodic electrode serves as the sole electron acceptor, so that electricity can be harvested. These electrons then flow from the anode, through the device to be powered (or a resistor in experimental studies), and onto the cathode (Fig. …). The encapsulation approach has preserved the viability and activity of these biofilms for a few months [104], but up to now the encapsulated biofilms have found only limited application in the construction of microbial fuel cells [105].

Encapsulation of virus

The possibility of immobilizing biologically active agents such as viruses and bacteriophages within silica gels has opened the route for the development of biomedical applications [107][108][109]. These microorganisms are important agents for the medical field, such as oncolytic viruses, including replication-competent adenoviruses, which are emerging as a promising tool for the treatment of cancer [110], and bacteriophages, which reduce food-borne pathogens during the preharvest and postharvest stages of food production [111]. The key factors for these biomedical applications are the conservation of viral infectivity and, sometimes, the extended release of viruses to reach a specific release rate and target. For that, the possibility of immobilizing viruses into sol-gel materials without loss of biological activity has led to the development of vaccination and viral epitope carriers for the treatment of diseases [107,109]. The process is based on sol-gel polymerization under conditions compatible with biological life.
Silica materials are biodegradable in vivo, which allows control of the release rate of the encapsulated viruses, which are subsequently excreted in urine [112]. The sol-gel encapsulation of viruses is designed to contain substantial amounts of water so as to ensure large pores in an aqueous environment, which can be a hydrophilic environment for viruses. In principle, viruses might therefore remain active within such matrices.

Investigation of bioelectrochemical communication

As discussed above, redox proteins and bacteria have supplied the driving force for the development of bio-electrochemical systems such as electrochemical bioreactors, biosensors and biofuel cells. The key criterion in bio-electrochemical devices is the electrochemical communication (EC) between these biomolecules/cells and an electrode surface [106; Borgmann et al.]. In the case of redox proteins (such as cytochromes and enzymes), the active site is the critical domain for electron transfer (ET) between the protein, its cofactor and the electrode surface [Kohlmann et al.]. In the case of bacteria, the EC between their intracellular enzymes and the electrode surface is expected to occur via extracellular electron transfer (EET) [118,119]. The EET results from the catalytic oxidation of organic substrates by different intracellular enzymes (respiration or fermentation for aerobic and anaerobic species, respectively). Electrons produced by microbial respiration at the microbial ET chain are transported through the periplasmic space and cell wall to the outer electron acceptor (i.e., the electrode). Electrodes can serve either as electron donors or as electron acceptors for microorganisms/redox proteins, depending on whether the electrode is operated as a cathode or an anode, respectively [120]. Electron transfer pathways are generally divided into 2 categories: 1) direct electron transfer (DET) and 2) mediated electron transfer (MET).
The DET takes place via a physical contact of the bacterial cell membrane, or of the protein active site, with the electrode surface, without requiring any electron shuttle [106; Kohlmann et al.]. The drawbacks of DET are (i) its restriction to electroactive bacteria (since most living cells are assumed to be electronically non-conducting, such a transfer mechanism has long been considered impossible) and (ii) the working electrode overpotential, which could be toxic to microbial and enzymatic life [Borgmann et al.; 106]. The MET, as an alternative to direct-contact pathways, involves an additional molecule capable of ensuring a redox cycling process (i.e., an electron shuttle or mediator). An artificial redox mediator can enhance the ET by transferring electrons between enzymatic active sites or microbial cell membranes and the electrode surface. Moreover, it can operate at low overpotentials in order to avoid the drawbacks discussed for DET [Borgmann et al.; 106]. In the case of bacteria, the MET mechanism of electron shuttling can be further divided into 2 categories: a) endogenous shuttling and b) exogenous shuttling. An endogenous shuttle, which is microbially produced, is secreted outside the bacteria to exchange electrons with the electrode surface by a redox mechanism. This process is known as self-mediated ET [118]. Few bacteria have this self-mediated ET property, such as S. oneidensis strain MR-1, which is able to excrete quinone/flavin molecules to serve this purpose [121,122; Marsili et al.].
An exogenous shuttle is an artificial mediator that enters or contacts the bacteria to shuttle electrons to the electrode surface by redox cycling, such as ferricyanide as a chemical example [Alferov et al.]. The development of different mechanisms for facilitating the ET between enzymes/bacteria and an electrode surface has extended electrochemical technologies such as sensors and fuel cells into biological life.

Electrical wiring of protein

The development of EC between redox proteins and an electrode surface has attracted interest for the fabrication of biosensors and enzymatic fuel cells [71,75,125,126]. Some redox proteins such as laccase, bilirubin oxidase, hydrogenase, hemoglobin and c-cytochrome are likely to conduct electrons directly with the electrode surface [125,127-129]. Mediators such as ferrocene dimethanol (FDM) [72], osmium polymers [55] or vitamin K3 [70] attached to carbon nanotubes (CNT) have been investigated for the wiring of redox proteins. Furthermore, biological mediators such as cytochromes have also been utilized to wire enzymes efficiently and to avoid the use of toxic chemical substances [130-133]. In addition, the immobilization of cytochrome c in layer-by-layer systems has improved the EC between enzymes and the electrode surface compared to monolayer systems [134,135]. Research works have shown that an enzyme can be durably immobilized by simple entrapment in porous gel networks without requiring any covalent bond between the support and the enzyme, thus maintaining its native properties and biological activity [68,70,136]. The immobilization of an artificial mediator along with the enzyme and its cofactor, for the fabrication of reagentless biosensors, has attracted attention owing to the stability of the enzymatic activity and of the electrical wiring in porous materials [16,70].
Electrical wiring of bacteria

Whole cells have also gained increasing attention for EC with electrode surfaces in the fabrication of biosensors and microbial fuel cells [20,103]. Whole cells can easily and rapidly communicate with their surrounding environment, and their utilization avoids the expensive and time-consuming enzyme purification step while preserving the enzymes in their natural environment [Souza et al.]. In the case of DET, only electroactive bacteria are able to conduct electrons directly with the electrode surface [106]. The DET in electroactive species is facilitated by outer-membrane (OM) redox proteins (c-cytochromes), which allow the ET from the ET chain located in the cell membrane to the electron acceptor outside the cell, as in the case of Shewanella species [102], or by conductive appendages such as nanowires in flagella and pili, as in the case of Geobacter species [103]. Electroactive bacteria have been considered as building blocks for the construction of microbial fuel cells owing to their efficiency in DET with electrode materials [Kim et al.]. In the case of MET, soluble mediators such as ferricyanide, K3[Fe(CN)6], have gained attention, especially for wiring gram-negative bacteria in BOD sensors, because of their solubility, higher than that of oxygen, and their ability to enter through the porins of the cell walls to accept electrons directly from the bacterial electron transfer chains [Alferov et al.]. However, high ferricyanide concentrations (≥ 50 mM) can also damage the membranes of these bacteria [138]. One approach immobilized ferricyanide in an ion-exchangeable polysiloxane to fabricate a BOD sensor for the detection of organic pollution in industrial wastewater, effluents or natural water. This approach has improved the BOD assay compared with the conventional one, which requires a 5-day period, complicated procedures and skilled analysts to obtain reproducible results [100].
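To make the mediator-based BOD principle concrete, the charge collected at the electrode from ferricyanide reduction can be converted into an oxygen-equivalent concentration via Faraday's law: each ferricyanide ion carries one electron, whereas each O2 molecule would have accepted four. This back-of-the-envelope sketch is illustrative and not taken from the cited works; the function name and sample values are assumptions.

```python
F = 96485.33  # Faraday constant, C/mol

def bod_equivalent_mg_per_L(charge_C: float, sample_volume_L: float) -> float:
    """Convert the charge passed by mediator (ferricyanide) reduction into a
    BOD-equivalent oxygen concentration (mg O2/L).
    1 e- per ferricyanide ion; 4 e- per O2 molecule; M(O2) = 32 g/mol."""
    mol_electrons = charge_C / F
    mol_O2_equivalent = mol_electrons / 4.0
    return mol_O2_equivalent * 32.0e3 / sample_volume_L  # mg O2 per litre
```

For example, 0.1 C collected from a 5 mL sample corresponds to roughly 1.7 mg O2/L of BOD-equivalent oxidizable matter.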
Polymeric mediators have also gained significant attention in the electrical wiring field, especially the flexible osmium polymers, which can be stably bound on the electrode surface [139,140]. They can efficiently wire different gram-negative bacteria owing to their cationic charge and flexibility [139,140; Alferov et al.]. Electrode modification with osmium polymers for wiring bacteria has been described either by using organic polymers with/without CNT in the presence of glutaraldehyde as a cross-linker [139,140; Alferov et al.], or by using a conducting polymer to enhance the ET with the electrode surface [141]. Up to now, c-cytochrome has not been utilized as an artificial mediator for wiring bacteria. Only limited work has been done on the electrochemical communication of bacteria entrapped in sol-gel materials [100,142]. Following the development of EC between planktonic bacteria and electrode surfaces, especially the current and electric responses in electrochemical biosensors and microbial fuel cells, respectively, the building of multilayers of electroactive bacteria as a biofilm has gained more and more interest, especially for microbial and hybrid fuel cells [125,143,144]. A biofilm can facilitate bacterial ET efficiency in biofuel cells, primarily due to much higher biomass densities and higher bacterial viability caused by anode respiration. However, natural anodic biofilms usually have a considerable thickness, varying from several to tens of microns, which may cause diffusion limitation of nutrients and insufficient interaction of the bacteria with the anode materials [144]. For that reason, mimicking the biofilm within a silica matrix and/or conducting polymers eliminates the significant cultivation times and variability typically associated with natural biofilm formation and avoids the loss of biomass, in addition to serving as efficiently as a natural biofilm [105,141,143].
All the discussed mechanisms for ET between bacteria or biofilms and the electrode surface are summarized in Fig. I-14.

Fig I-14. Summary of the proposed mechanisms for ET to the anodic electrode. Red dots represent outer-surface cytochromes, black lines represent nanowires, and the blue clouds represent the possible extracellular matrix, which contains c-type cytochromes conferring conductivity (adapted from reference [102]).

Positioning of the thesis

As discussed in the literature survey, the encapsulation of bacteria, proteins [55,70] or even viruses [107,113] in silica-based materials prepared by the sol-gel process has already been described. The electrochemistry of bacteria or redox proteins is also a major field of research, providing opportunities for applications in biosensor, bioreactor and biofuel cell technologies. In these active areas, the complementary expertise in electrochemistry, sol-gel chemistry and microbiology still provides opportunities for original research. Several directions will be considered in this thesis. First, we will investigate the application of electrochemistry to trigger the encapsulation of bacteria in thin hybrid sol-gel films (Chapter 2). Electrochemistry was then considered as an analytical method to study the reactions of electron transfer.

Chapter II. Electrochemically-assisted bacteria encapsulation in thin hybrid sol-gel film

Introduction

There is a tremendous interest in the immobilization of living cells (bacteria, yeast, microalgae, …) in porous matrices for biotechnological applications [1-5]. Bacteria can be encapsulated in organic polymers (e.g., agar hydrogels) [6] or inorganic matrices (e.g., sol-gel-derived silica-based materials) [7,8]. Compared to organic polymers, silica gel offers several advantages such as improved mechanical strength, chemical inertness, biocompatibility, resistance to microbial attack and retention of the bacterial viability out of the growing state [9,10].
In order to combine the advantages of both organic and inorganic components, sol-gel based hybrid materials have also been proposed (e.g., sol-gel including gelatin [9], glycerol [11], or copolymers of poly(vinyl alcohol) and poly(4-vinyl pyridine) [10]). A challenge in this field is to avoid the impact of drying on cell viability, so that bacteria have essentially been encapsulated in monoliths [9,11] or thick films [7,8] while keeping enough water in the final materials. Both encapsulation in monoliths and in thick films have succeeded in preserving the viability and activity of the encapsulated microorganisms for several months, whereas only a limited number of works succeeded in preserving cell viability in thin films (i.e., a configuration often required for practical applications). This was notably achieved by multilayered silica thin films [12] or by introducing phospholipids in cell-directed assembly [13]. On the other hand, the incorporation of organic polymers, especially those bearing amine or amide groups (e.g., chitosan), in the sol-gel matrix allows the formation of organic-inorganic hybrids (often stabilized by strong hydrogen bonding) with improved protection against cracking upon aging/drying [14]. The incorporation of glycerol or poly(ethylene glycol) prevents excessive contraction of sol-gel matrices and protects the cells against osmotic stresses [15,16]. Finally, sugars (particularly trehalose) have been shown to stabilize bacteria during freezing and aging [17,18]. At the end of the nineties, besides the classical evaporation methods applied to generate thin sol-gel films [19], a novel approach was described to deposit such thin films on electrode surfaces, based on an electrochemically-driven pH increase likely to accelerate the gelation of a sol, and hence film deposition [20].
This has been rapidly extended to the generation of functionalized sol-gel layers [21] or ordered and oriented mesoporous silica films [22], as well as to the encapsulation of biological objects (e.g., haemoglobin [23] or dehydrogenases [24]) in thin sol-gel films. This approach offers some advantages compared to other sol-gel deposition methods that are based on solvent evaporation (spin-, dip-, spray-coatings), especially in terms of enabling homogeneous film deposition on small electrodes (e.g., ultra-microelectrodes) [25,26] or non-flat supports [27] and on conducting supports, which can be useful for bioprocessing and electrochemical biosensing applications [28-31]. The electrochemical encapsulation has recently been developed for bacteria in alginate and studied for short-term analysis [30,31], but it has not yet been reported for bacteria in sol-gel films. The aim of this chapter is to present the application of the electro-assisted deposition technique to the encapsulation of bacteria in thin hybrid sol-gel films. This technique utilizes low voltages for the base-catalyzed condensation of a sol doped with suitable additives on an indium-tin-oxide (ITO) electrode. The approach involves the immersion of the electrode into a stable hydrolyzed sol in moderately acidic medium, followed by applying a constant negative potential to the electrode surface in order to induce a local pH increase, leading thereby to the polycondensation of the hydrolyzed monomers and thus to film deposition, along with the entrapment of bacteria in the final deposit. This led to thin films, in contrast to the thick films described elsewhere [26]. Here, we also show how organic additives can improve the electrochemical encapsulation of bacteria in hydrophilic thin sol-gel films by protecting the bacteria against both electrochemical and aging stresses.
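The chemistry underlying this electro-assisted deposition can be summarized by two standard reactions (a reminder of textbook sol-gel electrochemistry, not equations reproduced from the cited papers): the cathodic reduction of water generates hydroxide ions at the electrode surface, and the resulting local pH increase catalyzes the condensation of the hydrolyzed silica precursor.

```latex
\begin{align*}
&\text{cathode:} & \mathrm{2\,H_2O + 2\,e^-} &\rightarrow \mathrm{H_2\!\uparrow\; + 2\,OH^-}\\
&\text{condensation:} & \equiv\!\mathrm{Si{-}OH} + \mathrm{HO{-}Si}\!\equiv \;&\xrightarrow{\;\mathrm{OH^-}\;}\; \equiv\!\mathrm{Si{-}O{-}Si}\!\equiv \; + \; \mathrm{H_2O}
\end{align*}
```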
In order to assess the state of encapsulated bacteria in the film, the Live/Dead BacLight viability kit has been applied. It enables differentiation between bacteria with intact and damaged cytoplasmic membranes (membrane integrity) [32]. E. coli C600 has been used as a simple model for adjusting the optimum protocol of electrochemically-assisted encapsulation in sol-gel films. In addition, the metabolic activity of encapsulated bacteria was evaluated by measuring the luminescent activity of E. coli MG1655 pUCD607 [33] and the fluorescence activity of E. coli MG1655 pZNTA-GFP, as expressed in the presence of low concentrations of Cd 2+ .
Factors affecting the electrochemically assisted deposition of the hybrid sol-gel film
The electrochemically assisted bacteria encapsulation has been optimized with specific parameters for successful entrapment of bacteria and optimal viability. The sol-gel deposition was typically performed at -1.3 V. This potential was found optimal in preliminary experiments, as a more negative potential was detrimental for the bacterial viability and a less negative potential was inefficient to produce rapidly the films containing, or covering, the bacteria. Under these conditions, the thickness of the material could be controlled from less than 100 nm to more than 2 µm by varying the deposition time from 10 to 60 s. The film thickness was also significantly affected by the sol composition, especially the presence of the polymeric additives (chitosan, PEG) used to improve the cell viability in the final films. To assess the effect of these components, thickness measurements were made using AFM on films deposited under the same conditions, i.e., at -1.3 V for 30 s, for various sol compositions. Electrolysis from the starting sol containing only the silica precursor (0.25 M) did not produce a visible deposit under these conditions. Note that changing these conditions can allow faster film deposition for other applications [23].
The addition of PEG (12.5% m/v as a final concentration in the hybrid sol-gel film) increased the deposition rate but the thickness remained very small, i.e., 30 ± 7 nm. In contrast, the introduction of chitosan (0.25% m/v as a final concentration in the hybrid sol-gel film) in the sol led to a dramatic increase in the film thickness, by a factor of 10, to reach 350 ± 70 nm. The thickness increased even more by mixing together the three components, i.e., the silica precursor (TMOS), chitosan and PEG, to reach 1.9 ± 0.1 µm (Fig. II-1). Such variations could be basically explained by different polycondensation rates, which are affected by the presence of organic additives, in agreement with previously reported observations for other related systems [22,24]. But this is not the only reason. Indeed, chitosan has the ability to be electrodeposited by applying a negative potential on the conducting electrode [34,35]. The thickness of a film prepared with chitosan and PEG only (no silica precursor) was rather small (24 ± 4 nm), suggesting little contribution of this process. But in the presence of the silica precursor, chitosan can be co-deposited to form a hybrid material displaying improved properties compared to the individual components taken separately. Moreover, PEG strongly influences the film drying and limits shrinkage during the aging process, as reported before [36], which could also be beneficial for preventing mechanical stress to the bacterial membrane encapsulated in the film.
Electrochemically assisted bacteria encapsulation in two steps (EABE(2S))
Principle and preliminary characterization
Immobilization of bacteria was first performed using a two-step procedure (EABE(2S)), implying first the adhesion of the bacteria on the ITO surface, as described elsewhere [37], and then the electrochemically assisted sol-gel film deposition and bacteria entrapment, as illustrated in Fig. II-2. Thicknesses of the films deposited with various electrolysis times were measured using AFM. Values of 82 ± 5 nm, 130 ± 20 nm and 1.9 ± 0.1 µm were obtained for deposition times of 10, 20, and 30 s, respectively (Fig. II-3). This variation was not linear with time, as reported previously for other electrochemically assisted sol-gel depositions [21].
Viability of bacteria immobilized by EABE(2S)
Live/Dead BacLight viability analysis is a dual DNA fluorescent staining method used to determine simultaneously the total population of bacteria and the ratio of damaged bacteria. It was applied here to bacteria encapsulated in sol-gel films. The BacLight kit is composed of two fluorescent nucleic acid-binding stains: SYTO 9 and propidium iodide (PI). SYTO 9 penetrates all bacterial membranes and stains the cells green, while PI only penetrates cells with damaged membranes. The combination of the two stains produces red fluorescing cells (damaged cells), whereas those stained with only SYTO 9 (green fluorescent cells) are considered viable [32]. At first, the short-term viability, i.e., one hour after the encapsulation of E. coli C600 in the optimal sol-gel containing silica precursor, PEG and chitosan, was studied, and a high viability was observed.
Influence of film components on the long-term viability of EABE(2S)
The positive influence of silica gel on the cell viability was already discussed in the literature [9,10]. In addition, the presence of silica precursor, PEG and chitosan in the electrolysis bath allows here the rapid gelification of a hybrid film, thick enough to protect the bacteria, as discussed in section (4.1) where the thickness reached 2 µm (approximately 4 times the diameter of a bacterial cell ≈ 0.5 µm). In particular, chitosan, which is characterized by the presence of -NH 3 + groups, can interact with silicate monomers to facilitate the deposition of a stable hybrid sol-gel film [38].
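The viability percentage reported throughout this chapter follows directly from the two stain counts: it is the fraction of green-only (SYTO 9) cells among all stained cells. A minimal Python sketch of this calculation is given below; the cell counts used in the example are hypothetical, not data from this work.

```python
# Illustrative calculation of the Live/Dead BacLight viability ratio.
# Green-only cells (SYTO 9) are scored as viable; cells also stained red
# (propidium iodide) are scored as membrane-damaged.

def viability_percent(green_only: int, red: int) -> float:
    """Percentage of viable cells among the total stained population."""
    total = green_only + red
    if total == 0:
        raise ValueError("no cells counted")
    return 100.0 * green_only / total

# Hypothetical counts: 95 green-only cells and 5 red cells in a field of view
print(viability_percent(95, 5))  # → 95.0
```

In practice such counts would be averaged over several microscope fields before reporting a viability percentage.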
As a result, the samples prepared with chitosan exhibited larger film thicknesses (≥ 100 times) than those without chitosan (see the first section of Results and discussion). In addition, PEG lowers the interaction of entrapped bacteria with the silicate matrix, minimizes the interaction with the toxic solvent produced during the hydrolysis and prevents excessive contraction during the sol-gel transition phase; it fills the pores and spaces in the matrix, thereby eventually preventing matrix shrinkage and collapse [36,39].
Influence of storage conditions on the long-term viability of EABE(2S)
Fig. II-7. Long-term BacLight analysis for EABE(2S) of E. coli C600 (4×10 6 cells mL -1 ) in KCl 1 mM (a), moist air only (b), and moist air + trehalose inside the sol-gel film (c). All films have been prepared by electrolysis at -1.3 V for 30 s. N.B. (zero: 1 hour after encapsulation).
According to the results of Fig. II-7, storage in moist air without introducing trehalose into the film did not succeed in preserving bacterial viability for more than 3 days, due to complete drying of the sol-gel film (curve b). Storage in KCl solution could solve the problem of bacterial damage due to sol-gel film drying (curve a). On the other hand, storage in wet medium also has some disadvantages in terms of limiting long-term viability, due to aqueous dissolution of the sol-gel material under long storage conditions [12], and because it provides a higher probability of contamination than dry storage. Storage in moist air with trehalose introduced inside the sol-gel film has solved these problems, showing the best preservation of membrane integrity, with 95% of viable cells after 30 days of encapsulation (curve c). Here, trehalose was the water-replacing molecule in dry storage. As described in the literature [17,18], it provides a hydrophilic humid medium by replacing water bonding during gel drying to avoid protein denaturation and lipid phase transition of the bacterial membrane. This, in turn, inhibits van der Waals interactions between adjacent lipids, thus preventing the elevation of the membrane phase transition, and stabilizes the structural conformation of bacterial membrane proteins against denaturation. The conditions used in this experiment are not favourable for cell division. We suppose that the observed increase in cell viability for short times, i.e. between 0 and 5 days (curve c), could be due to recovery of cell membrane integrity. The error affecting these measurements is higher for short times than for longer times. This variability seems to be related to the stress that affected some cells, long-term measurement allowing a higher ratio of viable cells and lower variability in the measurement.
Influence of the electrodeposition time on the long-term viability of EABE(2S)
Finally, we have studied the effect of the electrolysis time, from 10 to 30 s, on the viability of encapsulated E. coli C600 (Fig. II-8). The films have been produced from the optimal sol composition defined before, in the presence of silica gel, PEG, chitosan and trehalose. All samples were stored at +4 ºC in humid air. No significant viability could be found after 3 days when the film was produced using 10 s electrolysis (curve a). The viability after 15 days was greatly improved by increasing the deposition time, as 40 % (curve b) and 96 % (curve c) viability were observed in the films prepared with 20 s and 30 s electrolysis, respectively. The effect of the electrolysis time has to be correlated with the film thickness, which increases dramatically from 82 ± 5 nm and 132 ± 17 nm for 10 and 20 s, respectively, to 1.9 ± 0.1 µm for 30 s.
Here, the film deposited by applying 30 s electrolysis was thick enough for effective protection of the encapsulated bacteria, with no significant cell membrane destruction by electrochemical stress. Thinner films did not ensure enough protection because the bacteria were not properly covered with the hybrid layer. Thicker films can be produced by using longer electrolysis times, but it was observed that too long electrodeposition processes became detrimental for the cell integrity (data not shown). Practical applications of immobilized bacteria for biosensing could involve a large number of bacteria. Also, a large amount of encapsulated bacteria is necessary for studying the metabolic activity, notably in the luminescent mode of analysis. As shown in Fig. II-5d-f, the cell density of bacteria can be controlled during the first adhesion step. However, one disadvantage of the EABE(2S) protocol is the limited viability preservation when such a high bacterial density is encapsulated. We observed that a dense bacterial film strongly limits the electrodeposition process and/or alters the film stability. For this reason, we have investigated the electrochemically assisted bacterial encapsulation in one step (EABE(1S)) in order to achieve larger amounts of encapsulated bacteria and to be able to analyze their metabolic activity once entrapped in the hybrid sol-gel films. The main results are described hereafter.
Electrochemically assisted bacteria encapsulation in one step (EABE(1S))
Principle and preliminary characterization
As explained before, EABE(1S) implies the electrodeposition of a hybrid sol including the bacterial suspension on the electrode surface. Thicknesses of the films deposited with various electrolysis times, but at the optimal potential of -1.3 V, were measured using AFM.
Using the same parameters as EABE(2S) (optimal potential, hybrid sol-gel components and optimization, time of electrodeposition), the AFM measurements for EABE(1S) have shown much higher film thicknesses than those of EABE(2S), probably due to the presence of bacteria that affected the electrodeposition rate on the ITO surface. AFM characterization also provides additional information concerning the gradual encapsulation of E. coli C600 at a cell density of 8×10 9 cells mL -1 . With the one-step protocol, bacteria can be observed on all samples, even when using a long electrodeposition time, i.e. 30 s, leading to relatively thick films. This confirms the random distribution of bacteria encapsulated by EABE(1S) inside the deposited gel. The roughness of the film (measured after drying) also strongly increased when increasing the film thickness.
Viability analysis of bacteria immobilized by EABE(1S)
Analysis of bacterial metabolic activity
In order to confirm the viability of the encapsulated bacteria, the EABE(1S) protocol was applied to the bioluminescent bacterial biosensor E. coli MG1655 pUCD607. The pUCD607 plasmid contains the lux-CDABE operon under the control of the constitutive tetracycline resistant promoter [41], which is responsible for continuous light production. The bioluminescent reaction is catalyzed by luciferase (encoded by the luxA and luxB genes), which requires a long-chain aldehyde (provided by the luxCDE gene products) as a substrate, oxygen, and a source of reducing equivalents, usually reduced flavin mononucleotide (FMNH 2 ) [42]. Since the FMNH 2 production depends on a functional electron transport system, only metabolically active cells produce light. Here, the E. coli MG1655 pUCD607 system was used to assess the effect of encapsulation and subsequent storage conditions on the overall metabolic activity of bacteria. E. coli MG1655 pUCD607 (8 × 10 9 cells per mL) was encapsulated in the sol-gel matrix with the optimal composition (as described above) and compared to non-encapsulated E. coli (only deposited on a PEI-modified ITO electrode). All samples have been stored at -80 ºC prior to the reactivation step of bacteria in medium for long-term luminescent analysis. All results are presented in Table II-1. After four weeks of storage, the electrochemical encapsulation of E. coli MG1655 pUCD607 in a film with optimal composition has preserved 50% of the continuous luminescence production, compared to an approximately null metabolic activity of non-encapsulated bacteria, which is most probably due to the absence of cell stabilizers against freezing stresses [18]. On the other hand, the same encapsulation protocol has been applied to the biosensor strain E. coli MG1655 pZNTA-GFP, which expresses GFP in response to cadmium exposure (P. Billard, unpublished data). This bacterial strain could be used for the design of a fluorescent microbial sensor based on the assessment of heavy metal bioavailability [43]. After one week of
Conclusions
Three E. coli strains (C600, MG1655 pUCD607 and MG1655 pZNTA-GFP) have been used as models of microorganisms encapsulated in thin sol-gel films generated in a controlled way by electrochemical protocols. The encapsulation can be performed with bacteria pre-immobilized on the electrode surface or from a suspension of the bacteria in the starting sol. The possibility of utilizing electrochemistry for effective bacteria encapsulation in sol-gel films while preserving a high percentage of viability and activity for the entrapped microorganisms was demonstrated here.
The presence of organic additives was proved to be essential, as they contributed to modify significantly both the reactivity and the stability of the deposited films.
Introduction
Development of living cell-based electrochemical devices holds great promise for biosensors, bioreactors and biofuel cells. It eliminates the need for the isolation of individual enzymes, and allows the active biomaterials to work under conditions close to their natural environment, thus at a high efficiency and stability [1][2][3][4]. Living cells are able to metabolize a wide range of chemical compounds. Microbes are also amenable to genetic modification through mutation or through recombinant DNA technology, and serve as an economical source of intracellular enzymes [2]. However, the main issues are the preservation of cell viability and efficient electron transfer (ET) between bacteria and electrodes for high performance and feasible large-scale implementation of these electrochemical devices [1,4]. The preservation of cell viability could be materialized by encapsulation technology in biocompatible matrices resembling the natural environment, in order to keep the cells alive in an open porous matrix allowing the passage of oxygen, solvents and nutrients while avoiding cell leaching [3]. Whole cells have been successfully encapsulated either by physical or by chemical methods. The latter includes covalent interactions between functional groups located on the outer membrane of the microorganisms and glutaraldehyde as cross-linking agent. This encapsulation mode is consequently not suited when cell viability is absolutely required, as excessive degrees of cross-linking can be toxic to the cells [4,5]. On the other hand, physical entrapment is a simple and soft method to encapsulate microorganisms onto a transducer. Microorganisms can be encapsulated in an organic or inorganic polymeric matrix, e.g., sol-gel [6,7], or hydrogels such as poly(vinyl alcohol) (PVA) [8], alginate [9] or agarose [10].
In order to combine the advantages of both organic and inorganic matrices, microorganisms have been entrapped in hybrid materials based on silica sol and poly(vinyl alcohol) grafted with poly(4-vinylpyridine) (PVA-g-P(4-VP)) [11,12]. The encapsulation can protect the cells from external aggression but at the same time might form a barrier that decreases the electrochemical interaction with the electrode surface [4]. Immobilization of the bacteria onto an electrode surface is thus also a key issue in the development of such devices. Direct ET (DET) has been developed between electrodes and bacterial monolayers through c-type cytochromes (c-Cyts) associated with the bacterial outer membrane (OM), such as in Shewanella oneidensis [14], through putatively conductive nanowires, such as in Geobacter sulfurreducens strains [15], or with conductive biofilms [16]. Mediated ET has been performed either with bacterial self-mediated transfer, such as in Pseudomonas aeruginosa [17], or with an artificial mediator to enhance the ET between microbial cells and electrodes. Thus, the natural final electron acceptor (e.g., dioxygen in the case of aerobic bacteria, or other oxidants such as Fe(III) oxides in the case of anaerobically respiring organisms) can be replaced to prevent the problem of limiting concentrations of the electron acceptors [18]. Soluble artificial mediators such as ferricyanide, p-benzoquinone and 2,6-dichlorophenol indophenol (DCPIP) have been successfully used for applications in microbial fuel cells and electrochemical biosensors [19][20][21] due to their ability to circumvent the cell membrane, collect and transfer the maximum current to the electrode surface. Ferricyanide (Fe(CN) 6 3- ) has received particular attention, especially for gram-negative bacteria, because the high solubility of this synthetic electron acceptor overcomes the limitation arising from the lower oxygen solubility, and because of its ability to enter through the porins of cell walls. However, high ferricyanide concentrations (≥ 50 mM) can also damage the membrane of some bacteria like E. coli [22].
Polymeric mediators have also succeeded in facilitating the electrochemical communication (EC) of different bacterial species, through a flexible polymer stably bound on the electrode surface, avoiding the problem of releasing potentially human-toxic compounds in the environment [20,[23][24][25]. The conductive properties of osmium polymers promote a good EC between the electron-donating system and the electrode surface, due to the permanent positive charge of the redox polymer, which favors its electrostatic interactions with the charged cell surface [25]. In addition to the mediators, carbon nanotubes (CNT) have attracted considerable attention in electrochemical biosensors for providing large surface areas enhancing the ET between bacteria and the electrode surface, alone [26,27] or in the presence of osmium polymer [28]. Actually, it seems to be of interest to consider using a covalently-linked chemical mediator or a biological mediator for wiring bacteria to the electrode surface, but this approach has not been described so far in the literature.
Investigation of bioelectrochemical communication
Electrochemical activity of bacteria in a silica gel layer
The voltammetric response of the encapsulated bacteria displays a cathodic signal around -0.22 V and a smaller signal at -0.32 V (peak potentials). On the reverse scan, the anodic response appears even more complex, with three successive redox signals visible at -0.28 V, -0.15 V and +0.06 V. One can reasonably suppose that only the bacteria located close to the electrode can transfer efficiently some electrons to the GCE. These signals are compatible with redox proteins secreted by the bacteria or present in the outer membrane of the S. putrefaciens cells, such as membrane-bound cytochromes [28,29] or secreted riboflavin molecules [30], which could participate in electron transfer reactions to the electrode surface. Curve "a" in Fig. III-2B (see inset for a closer view) reports the chronoamperometric response of this electrode upon two successive additions of 20 mM sodium formate.
Despite the visible electrochemistry of S. putrefaciens in the sol-gel layer, the resulting current in the presence of the electron donor remains very low, almost undetectable, in the range of a few nA. Such a low response can certainly be explained by the limited number of bacteria that are able to transfer electrons to the GCE in the insulating environment formed by the silica gel [2,4]. (The inset shows the measurement for (b) at a lower current scale in order to give a better resolution.) On the other hand, it is already known that soluble mediators like ferricyanide have been used successfully to shuttle electrons between gram-negative bacteria physically encapsulated in a sol-gel layer and an electrode surface [21,31,32], notably for the fabrication of biochemical oxygen demand (BOD) sensors [21,32]. Ferricyanide can indeed diffuse through the porins of the outer cell membrane to collect the electrons from the bacteria [20,31]. Curve "b" in Fig. III-2B reports the response measured in the presence of ferricyanide. In these conditions, a well-defined current increase, in the range of hundreds of nA, was measured after each addition of 20 mM formate in the solution, which is an indication of the good activity of the bacteria in this environment. The comparison between experiments performed in the absence (curve "a") and in the presence (curve "b") of ferricyanide in the solution clearly indicates that electron transfer from bacteria encapsulated in a sol-gel layer can be strongly hindered if no electron shuttle is added. The presence of a soluble mediator is indeed an effective approach to improve this reaction, but other strategies could be implemented in order to reach a more efficient electrochemical communication. The next sections will discuss the possible interest of using SWCNT for the enhancement of the electrochemical communication. This will be made first in the presence of a soluble mediator and then by attaching the mediator, i.e. ferrocene, on the surface of SWCNT.
Finally, a last strategy based on the encapsulation of cytochrome c as a natural mediator in the sol-gel layer along with bacteria will be investigated and compared with the strategies involving SWCNT and chemical mediators.
Influence of SWCNT on the electrochemical communication with entrapped bacteria in a silica gel layer
The introduction of carbon nanotubes has been performed following two different protocols: dispersion of the SWCNT within the bacteria-doped sol-gel layer (corresponding to case B in Fig. III-1), or deposition of the bacteria-doped sol-gel layer on top of a chitosan/SWCNT underlayer film (Fig. III-3B, corresponding to case C in Fig. III-1). On the other hand, it is noteworthy that the two-layer configuration suffers from a longer response time, probably because of some resistance to mass and charge transfer in the insulating silica layer to reach the conductive electrode surface (a phenomenon which is expected to be less constraining in the composite layer made of CNTs distributed in the whole film). For both configurations, the electrochemical response recorded in the presence of carbon nanotubes was improved by comparison to that observed with the bacteria alone in the gel (i.e., without SWCNT, see curve "b" in Fig. III-2B). SWCNT could be in principle appropriate for networking bacteria encapsulated in the sol-gel material. The good electrochemical responses measured in the presence of soluble mediators indicate that the bacteria kept a good activity in the sol-gel layer and were not disturbed by the presence of SWCNT or of the additives used to disperse them. However, the small improvement observed in these experiments in the absence of soluble mediators indicates that an optimal use of this nano-material cannot be reached here, possibly because of the limited dispersion of SWCNT in the sol that would lead to insufficient networking. Moreover, it was observed that silica is likely to cover the nanotube surface, and this effect could hinder the electron transfer reaction between the nanotubes or, here, with the bacteria.
One way to overcome this limitation is the surface functionalization of SWCNT with a suitable mediator that would allow efficient interaction between bacteria and neighboring carbon nanotubes.
Fig. III-3. Amperometric current responses measured (A) in the absence and (B) in the presence of 5 mM Fe(CN) 6 3- mediator in solution upon successive additions of sodium formate (arrows) using glassy carbon electrodes modified with (a) a sol-gel layer containing both SWCNT-COOH and S. putrefaciens (as reported in Fig. III-1B) and (b) a sol-gel overlayer containing S. putrefaciens on a chitosan/SWCNT-COOH underlayer film (as reported in Fig. III-1C). The measurements were performed in PBS at room temperature by applying +0.35 V versus the Ag/AgCl reference electrode.
Application of SWCNT-(EtO) 8 -Fc for electrochemical communication with bacteria entrapped in a silica gel layer
Our group recently described the functionalization of SWCNT with a ferrocene mediator through flexible poly(ethylene glycol) linkers, and a spacer length of 8 ethylene glycol moieties was reported as the most effective one [33]. This functionalization provides both a good mobility to the ferrocene moieties, in order to get a well-defined cyclic voltammetric response (Fig. III-4A) and to achieve suitable electrocatalysis, and an improved dispersion of the nanotubes in the aqueous environment used here for bacterial immobilization. The poly(ethylene glycol) chain could also provide a hydrophilic protection for preserving microbial viability [12].
Amperometric current responses measured upon addition of sodium formate using glassy carbon electrodes modified with (a) a sol-gel layer containing SWCNT-(EtO) 8 -Fc and S. putrefaciens (as reported in Fig. III-1B); (b) a sol-gel overlayer containing S. putrefaciens on a chitosan/SWCNT-(EtO) 8 -Fc underlayer film (as reported in Fig. III-1C). The measurement was performed under stirring in PBS at room temperature by applying +0.35 V versus the Ag/AgCl reference electrode.
Effect of biological mediator on cell communication
The electrochemical behavior of S. putrefaciens in sol-gel has been studied in the presence of cytochrome c (biological mediator), to be compared with the results obtained with SWCNT-Fc described in the previous section. This film displayed a significantly different behavior from the previous systems with chemical mediators (ferrocene or Fe(CN) 6 3- ). The amperometric response was indeed measured upon successive additions of 0.2 mM formate in the solution, i.e., a 100 times lower concentration than those used in the previous investigations of this section. Under these conditions, a maximum sensitivity of 1 mA M -1 cm -2 was observed, which is ten times higher than the best sensitivity observed with SWCNT-(EtO) 8 -Fc. Moreover, the operating potential was lower with cytochrome c (i.e., +0.15 V) than with the chemical mediators (i.e., +0.35 V), which is clearly advantageous for several applications, including biosensing and biofuel cells. But the linear range of the response was much narrower when using cytochrome c as mediator (mM range) by comparison to the other mediators (in the range of 100 mM). In the present state of our investigations, it is difficult to clearly explain the different behavior of the two classes of mediators, but their different nature could be at the origin of distinct interactions with the bacteria. The interaction mechanism of chemical mediators such as Fe(CN) 6 3- or of flexible osmium polymers linked to CNT has been described in the literature as the entrance of the mediator through the porins of the outer cell membrane in order to accept electrons from the electron transport chain located at the cell membrane [20,31], or even from cytosolic enzymes [35], while a biological mediator could collect electrons only from the outer cell membrane, without entering through its porins.
(A) Cyclic voltammogram measured with a GCE modified with a sol-gel layer containing bovine heart cytochrome c. The measurement was performed in PBS at room temperature and at a scan rate of 20 mV s -1 . (B) Amperometric current responses recorded upon successive additions of 0.2 mM sodium formate, as measured using a GCE modified with a sol-gel layer containing bovine heart cytochrome c and S. putrefaciens. The measurement was performed under stirring in PBS at room temperature by applying +0.15 V versus the Ag/AgCl reference electrode.
As a matter of comparison, a recent report involving S. putrefaciens CIP 8040, in connection with an osmium redox polymer, displayed a maximum current of 45 µA cm -2 in the presence of 18 mM sodium lactate [24], which would correspond to a sensitivity of 2.5 mA M -1 cm -2 , twice the sensitivity reported here with cytochrome c as electron mediator in a sol-gel film. Interestingly, the artificial immobilization in the same osmium polymer led to a sensitivity comparable to the one reported in our work. Some limitations arise from the forced immobilization of the bacteria in the sol-gel material in this kind of artificial biofilm. Further improvement in the strategies used for bacteria immobilization will be necessary to make sol-gel encapsulated bacteria competitive with more freely growing biofilms. But one also has to consider that freely growing biofilms take more than a day to express the maximum current, while the artificial biofilm reported here is rapidly operating, which is clearly an advantage when considering biosensing applications. To summarize this section, different ways of electrochemical communication between bacteria encapsulated in a sol-gel layer and a glassy carbon electrode have been investigated. In the absence of mediator, the bacteria display a poor ability to transfer electrons to the electrode surface, because of the insulating nature of the encapsulation matrix.
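The sensitivity quoted for the cited osmium-polymer report is a simple ratio of the maximum current density to the substrate concentration; a quick Python check with the values given above reproduces the 2.5 mA M -1 cm -2 figure:

```python
# Illustrative sanity check of the sensitivity comparison quoted in the text.
# Values from the cited report: 45 µA cm-2 at 18 mM sodium lactate.

i_max = 45e-6        # maximum current density, A cm-2
c_lactate = 18e-3    # sodium lactate concentration, M

sensitivity = i_max / c_lactate  # A M-1 cm-2
print(sensitivity * 1e3)         # ≈ 2.5 mA M-1 cm-2
```

The same conversion applied to the present work (1 mA M -1 cm -2 with cytochrome c) underlies the factor-of-two comparison made above.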
The introduction of SWCNT on the glassy carbon surface or in the sol-gel layer does not improve significantly this transfer of electrons, while the bacteria kept a good activity, as demonstrated in the presence of a soluble mediator. SWCNT-(EtO) 8 -Fc and cytochrome c have demonstrated their ability to facilitate the electron transfer reaction between S. putrefaciens encapsulated in the silica layer and the glassy carbon electrode. The covalent linkage of ferrocene on SWCNT with a poly(ethylene glycol) linker has provided the suitable flexibility for an efficient interaction with S. putrefaciens. The best sensitivity (1 mA M -1 cm -2 ) was obtained with cytochrome c as mediator, while the widest linear response versus the concentration was observed with SWCNT-Fc. The modification of the strain was only motivated by practical reasons: the latter strain was simply found easier to cultivate than Shewanella putrefaciens.
Investigation of biologically-mediated bioelectrochemical communication
The electrochemical investigation of cytochrome c as a redox-active protein at electrodes [36] has become an interesting area of research. Indeed, cytochrome c has succeeded in mediating the ET between an electrode surface and some enzymes [37,38] without any chemical mediator, forming a stable protein-protein system for the construction of biosensors [39,40]. Up to now, this biological mediator has not been tested for wiring bacteria, especially those having membrane-bound cytochromes. Cytochrome c displays many important features, such as a harmless effect on bacterial viability and good electroactive properties, and is of prime importance for the electrochemical communication within biofilms [16]. The encapsulation of both cytochrome c and bacteria in porous sol-gel matrices could mimic natural biofilms and avoid their limitations, such as the insufficient interaction of bacteria with the electrode materials or the poor control over the bacterial strains involved.
The silica matrix mimics in these conditions the exopolysaccharide 'glue' that binds cells in a natural biofilm. The goal of this study is to evaluate an original strategy based on ET between Pseudomonas fluorescens CIP 69.13 cells and an electrode using bovine heart cytochrome c as biological mediator. The encapsulation of this bacteria-protein couple in a sol-gel film produces an artificial biofilm likely to be considered in electrochemical biosensors, biofuel cells or bioreactors.
Chapter III. Investigations of the electrochemical communications between bacteria and the electrode surface
Electrochemistry of bovine heart cytochrome c in sol-gel
The immobilization of cytochrome c was achieved in situ by physical entrapment in the silicate film in the course of the sol gelification. The redox-active protein cytochrome c is likely to mediate ET between other proteins and an electrode surface [36]. The first experiments were thus directed at evaluating the electrochemical characteristics of the entrapped cytochrome mediator in the sol-gel film. A small signal can be observed in the same potential region (anodic peak located at +0.08 V versus Ag/AgCl), but a more complex feature can also be observed at -0.20 V (see inset in Fig. III-7A). This signal could be assigned to redox proteins present in the bacterial membrane. The electrochemical comparison of P. fluorescens with S. putrefaciens species (anodic and cathodic peaks located in the ranges of -0.1 to +0.1 V and -0.2 to -0.4 V, respectively, depending on the bacterial strain) [29,42] has demonstrated the consistency between the redox peaks of P. fluorescens and the peaks related to membrane-bound cytochromes present in electroactive bacteria. These membrane-bound cytochromes are able to transfer electrons to the electrode surface and are probably involved in the electron transfer reactions to the cytochrome c introduced in the sol-gel film.
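Peak potentials like those cited above are commonly summarized by the formal (midpoint) potential of the redox couple, taken as the average of the anodic and cathodic peak potentials of a cyclic voltammogram; a minimal sketch, where the peak values are illustrative and not measured data from this work:

```python
def midpoint_potential(e_pa, e_pc):
    """Formal potential E0' estimated as the midpoint between the anodic
    (e_pa) and cathodic (e_pc) peak potentials, both in V vs Ag/AgCl."""
    return (e_pa + e_pc) / 2

# Illustrative (hypothetical) peak values, V vs Ag/AgCl
print(round(midpoint_potential(0.08, -0.02), 3))  # 0.03 V
```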
Feasibility of bacteria/protein communication in sol-gel
It is well known that viable cells in microbial biosensors and biofuel cells are likely to catalyze the oxidation of various substrates (such as glucose, lactose, methanol, etc.), producing electrons in a process called microbial respiration [2,43]. The role of artificial mediators is to enhance the ET between microbial cells and the electrode. In order to study the feasibility of the bacteria/cytochrome c ET in the sol-gel layer, amperometric measurements have been applied to detect the microbial communication with the electrode upon successive additions of glucose as a substrate for respiration (Fig. III-8). The big difference in intensity between the measurements reported in Figures III-8B and III-8C can be explained by the critical role of the cytochrome c mediator added into the film, ensuring an electron relay between the microorganism and the electrode surface, while in the absence of this mediator, extracellular ET in the sol-gel film only occurs for bacteria located very near the electrode surface. The mediation with cytochrome c probably occurs via electron hopping between the heme centers of the proteins [44], which obviously improves the microbial electrochemical communication with the electrode. A direct relation was found between increasing glucose concentration and current density, up to a maximal saturation value (see the inset in Fig. III-8C).
Communication disruption/competition
In order to further confirm the mechanism of microbial communication with the electrode, some toxic chemicals have been tested as inhibitors of ET between P. fluorescens and the electrode surface, acting either by killing bacteria [46] or by inhibiting the cell respiration [45].
On the basis of an amperometric measurement performed upon addition of NaClO (200 mg L-1), it was shown that a rapid inhibition of the electrochemical communication occurred, with a decrease of approximately 80 % in the current response within 1 minute (Fig. III-9A). species in the presence of glucose as the substrate [5,27]. In view of potential applications in the biosensors field, efforts should also be made to increase the sensitivity of the bioelectrode response. In this context, it was checked whether the electron transfer pathway between P. fluorescens, cytochrome c and the glassy carbon electrode surface can be further improved by the introduction of gold nanoparticles in the hybrid sol-gel layer.
Factors affecting the bioelectrochemical communication
Fig III-10. Amperometric current responses measured with glassy carbon electrodes modified with a sol-gel film containing P. fluorescens and bovine heart cytochrome c with respect to (A) P. fluorescens cell density in the presence of 1 mM cytochrome c, (B) cytochrome c concentration in the presence of P. fluorescens (9 x 10^9 cells mL-1 as initial cell density), (C) temperature and (D) pH of the medium. Other conditions are similar as in
In the presence of gold nanoparticles (i.e. 0.04 mg mL-1 in the starting sol), the electrochemical response was doubled (Fig. III-11B). However, the introduction of more nanoparticles, i.e. 0.08 mg mL-1 in the sol, led to a significant decrease in the electrochemical responses. These limitations could be related to toxicity to biological life or to modifications in the composite structure, and the resulting change in charge transfer properties could lower the current values [28].
In an attempt to circumvent this limitation, the incorporation of multi-walled carbon nanotubes (instead of gold nanoparticles) was also evaluated as an alternative way of improving the electrochemical communication between the encapsulated bacteria and the electrode (using 0.4 and 0.8 mg mL-1 in the sol), but no significant enhancement of the current responses was observed in these conditions. As a final control experiment, the introduction of gold nanoparticles in the absence of added cytochrome c in the gel showed only a small current response, which confirms the importance of cytochrome c as electron shuttle between P. fluorescens and the electrode surface in the construction of the artificial biofilm for electrochemical biosensing. The sensitivity of the bioelectrode was found to be 0.38 × 10-3 mA mM-1 cm-2 of glucose in the presence of the optimal concentration of gold nanoparticles. Note that the concept of artificial biofilm was reported only recently in the literature, with planktonic bacteria wrapped with graphite fiber and encapsulated in polypyrrole [16] and a biofilm covered with a sol-gel layer [47]. Both considered the facilitated electrochemical communication between bacteria and the electrode surface by accumulation of a flavin mediator in the polypyrrole or sol-gel layer, respectively. The strategy proposed in this thesis is original, providing a new approach for the elaboration of artificial biofilms to be applied in electrochemical studies or devices.
Conclusion
The poor electrochemical communication between bacteria encapsulated in a silica-based sol-gel film and a glassy carbon electrode can be increased by using a soluble mediator, Fe(CN)6 3-, or a ferrocene mediator chemically linked to single-walled carbon nanotubes. This "chemical" strategy, as tested with Shewanella putrefaciens, leads to rather comparable results, which would be indicative of a similar mechanism being involved with these two different mediators.
The interpretation of these results, in agreement with other reports from the literature, is that the mediators are able to enter the outer membrane of the bacteria, through the porins, to collect electrons [48]. Another strategy, of "biological" nature, involves cytochrome c that can be co-immobilized with the bacteria in the silica-based gel. However, cytochrome c cannot enter the outer membrane of the bacteria and only collects electrons from the outer membrane.
The development of biosensors depends largely on the successful immobilization of the biomolecule in a physiologically active form [1]. In aqueous solutions, biological molecules such as enzymes lose their catalytic activity rapidly [2]. For this reason, the utilization of immobilized enzymes has drawn considerable attention as an active sensing element for the fabrication of biosensors, due to their stable inherent properties such as specific recognition and catalytic activity [1,2]. In general, the key challenge of immobilization techniques is the stabilization of enzymes without altering their catalytic activity. The limitation of adsorption techniques is the weakness of the bonding: adsorbed enzymes lack a sufficient degree of stabilization and leak easily into the solution. On the other hand, strengthening the bonds between support and enzyme by chemical linkage can inactivate or reduce most of the enzymatic activity [2]. For these reasons, the physical entrapment of enzymes in a porous carrier such as sol-gel materials has drawn great interest in recent years. This is due to the simplicity of the encapsulation process, the chemical inertness and biocompatibility of silica, its tunable porosity, mechanical stability, and negligible swelling behavior [2,3]. The sol-gel encapsulation approach has succeeded in maintaining enzymatic activity for the fabrication of reagentless biosensors and bioreactors [4-8]. A diversity of works has been reported on the sol-gel encapsulation of enzymes as soluble proteins, but only a limited number concern membrane-associated proteins.
Chapter IV. Sol-gel encapsulation of membrane-associated proteins for bioelectrochemical applications
Membrane-associated proteins are either completely (intrinsic membrane proteins) or partially (extrinsic membrane proteins) embedded within an amphiphilic bilayer lipid membrane (BLM). It is thus important to retain at least the essential bound lipids, and often the entire BLM, to keep the membrane-associated protein properly folded and functional. This makes such species particularly difficult to immobilize by conventional methods and highlights the need for protein-compatible immobilization methods [1,9]. Some membrane-associated proteins have been successfully encapsulated in sol-gel materials, as reported for photoactive membrane proteins [10], membrane-bound receptors [11], a two-ligand channel receptor [12] and cytochromes P450 [13,14]. The use of polyols such as glycerol or polyethylene glycol with sol-gel materials has been proven to be highly beneficial for the stabilization of entrapped membrane proteins [1]. The goal of this study is to evaluate the interest of such approaches for the application of membrane-associated proteins in bioelectrochemical processes, with the aim of immobilizing these proteins in an active form on the surface of electrodes. Two systems have been considered. First, cytochrome P450 was immobilized in a sol-gel layer on pyrolytic graphite electrodes, with the idea to favor and keep direct electron transfer reactions between the electrode and the protein (this work was done in collaboration with the group of G.M. Almeida, University of Lisbon). The second system was L-mandelate dehydrogenase, isolated as vesicles from Pseudomonas putida. This protein was kindly provided by Gert Kohring (Saarland University, Saarbrücken, Germany). Direct electron transfer reactions are not possible with this protein, which had to be wired using mediators, either soluble or immobilized on the electrode surface.
Direct communication of membrane-associated cytochrome P450
The basis of third-generation biosensors is the direct communication between electrodes and proteins, especially those containing heme groups such as cytochromes [15]. For biosensors based on DET, the absence of mediator is the main advantage: it provides a superior selectivity and allows operation in a potential window closer to the redox potential of the enzyme itself, therefore less prone to interfering reactions [16]. Cytochromes P450 belong to the group of external monooxygenases [17,18]. In nature, the iron-heme cyt P450s utilize oxygen and electrons, delivered from NADPH by a reductase enzyme, to oxidize substrates stereo- and regioselectively. Significant research has been directed toward achieving these events electrochemically [19]. Improving the electrochemical performance of cytochrome P450 enzymes is highly desirable due to their versatility in the recognition of different biological and xenobiotic compounds for applications in biosensors [20] and biocatalysis [21,22]. Interfacing these enzymes to electrode surfaces and electrochemically driving their catalytic cycle has proven to be very difficult [23]. The protein was immobilized directly on the electrode surface, taking advantage of electrostatic interactions on graphite electrodes or of covalent linkage on gold surfaces, in lipid membranes, or using additional additives, i.e., surfactants, polymers or nanoparticles [23]. The application of sol-gel chemistry for the immobilization of CYP has rarely been considered. One report by Iwuoha et al. described the immobilization of P450cam in a complex hybrid matrix prepared with methyltriethoxysilane in the presence of didodecyldimethylammonium bromide (DDAB) vesicles, bovine serum albumin (BSA) and glutaraldehyde (GA) [13].
Direct electron transfer reactions between the electrode surface and the immobilized protein are described and applied for camphor or pyrene oxidation, and an improved stability is discussed, considering the application in organic solvents. However, DDAB is already known for stabilizing P450 proteins, and the critical role of the inorganic precursor is not clearly demonstrated in this complex matrix including BSA and GA. Another report by Waibel et al. described a screen-printed bienzymatic sensor based on sol-gel-immobilized acetylcholinesterase and a cytochrome P450 BM-3 (CYP102-A1) mutant [14]. P450 was stabilized in the sol-gel matrix and applied for the catalytic oxidation of parathion in the presence of NADPH and O2. The resulting molecule, i.e. paraoxon, could inhibit the enzymatic activity of acetylcholinesterase, which was electrochemically monitored. Different protocols for sol-gel immobilization were described, but no attempt at direct electron transfer reactions was considered with the immobilized CYP.
Critical role of the sol-gel matrix for direct electron transfer electrochemistry
Figure IV-1A shows the electrochemical response of CYP simply adsorbed on the pyrolytic graphite electrode (PGE) surface. A small signal could be observed in some experiments, corresponding to the DET from the electrode to the CYP protein. This signal was not systematically observed, and when observed its intensity was very small. This is probably due to the small amount of protein presenting a good orientation and conformation for the DET reaction. PEG, which is known as a stabilizer for the electrochemistry of P450 at high temperature [24], was tested here to immobilize CYP1A2 (Fig. IV-1B). However, no signal was observed in these conditions. The immobilization in silicate showed a slight improvement in the measured electrochemical signal (Fig. IV-1C).
Electrocatalytic reduction of O2
CYPs are very sensitive to oxygen, which can be used to evaluate the DET reactions [25]. Figure IV-3A reports the electrochemical response of CYP1A2 immobilized in the sol-gel-PEG hybrid film to increasing amounts of oxygen. The experiment was initiated in oxygen-free solution, deaerated with argon. Then, a small volume of air-saturated solution was introduced into the cell, leading to a gradual increase in the electrocatalytic response of the P450 protein due to oxygen reduction. The mechanism of oxygen reduction with CYP can involve the production of H2O2 species with a detrimental effect on the protein activity [19]. This is verified here by further increasing the concentration of oxygen in the solution, by equilibration with the atmosphere (Fig. IV-3B). The first cyclic voltammogram measured was well defined, with a peak current intensity higher than 8 µA and a peak potential at -0.355 V vs Ag/AgCl. However, continuous cycling at 50 mV s-1 leads to a continuous degradation in both peak current intensity and peak potential. The last CV measured here was indeed located at -700 mV vs Ag/AgCl and displayed an intensity of only 3 µA.
To summarize, the direct electrochemistry of P450 proteins in a water-based sol-gel thin film is described here for the first time. The electrochemistry of the heme center and the electrocatalytic reaction with O2 are greatly improved in the presence of PEG. A careful comparison of the film composition shows that only the combination of the inorganic matrix and the PEG additive allows the observation of the DET reaction and the electrocatalytic activity for immobilized P450. A second system has thus been studied here, involving another membrane protein, i.e.
L-mandelate dehydrogenase. The possible immobilization of the vesicles containing the proteins in the silica-based layer developed in this section was first considered in order to evaluate the interest of this approach to stabilize the electrochemical response of this protein, whose response is based on mediated electron transfer.
Mediated communication of membrane-associated L-mandelate dehydrogenase
L-mandelate dehydrogenase (ManDH) could be applied for the production of enantiopure α-hydroxy acids as educts for the preparation of semi-synthetic antibiotics [26,27]. In the case of mediated communication, redox mediators shuttle electrons between the active site of the enzyme, i.e. containing the FMN cofactor, and the electrode. The main advantage of employing mediated electron transfer (MET) within a biosensor device is that the electron transfer process is independent of the presence of natural electron acceptors or donors [32]. Most dehydrogenases communicate directly with the external mediator instead of the electrode surface. As mentioned above, they are interesting enzymes for electrosynthesis applications, especially for the production of rare sugars as building blocks for the pharmaceutical and food industries [33]. One of the main requirements for electrosynthesis is the stable immobilization of a large amount of active proteins on the electrode surface of the reactor. A huge diversity of soluble dehydrogenases has been encapsulated in sol-gel materials without altering their enzymatic activity and stability [2,34]. However, the sol-gel encapsulation of a membrane-associated dehydrogenase has not been reported yet. All the membrane-associated dehydrogenases studied in the literature rely on simple adsorption on the electrode surface for bioelectrochemical biosensors [35-37].
Feasibility of L-mandelate dehydrogenase in a hybrid SiO2/PEG film
The sol-gel used for the encapsulation of L-mandelate dehydrogenase has been prepared according to the protocol optimized previously for the encapsulation of cytochrome P450 studied in section A (SiO2 55 mM / PEG 6 wt% - pH 7). The L-mandelate dehydrogenase activity in the sol-gel film has been measured in the presence of ferrocene dimethanol (FDM) as a soluble mediator in buffer medium using electrochemical measurements (Fig. IV-4). The measurement was performed in phosphate buffer (pH 7) and FDM (0.1 mM) at room temperature. The potential scan rate was 20 mV s-1. (B) Amperometric measurements for (a) bare GCE, (b) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE in the absence of FDM, (c) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE. The measurements were performed under stirring in 50 mM phosphate buffer (pH 7) and FDM (0.1 mM) at room temperature by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode. The CV measured in the absence of mandelic acid only shows the typical electrochemical response of the ferrocene mediator, whose signal is controlled by diffusion into the sol-gel film (Fig. IV-4A). The addition of mandelic acid, from 0.5 to 3 mM, leads to a gradual increase in the anodic signal concomitant with a decrease of the cathodic signal, which is the signature of an electrocatalytic process. This electrocatalytic response was further tested under hydrodynamic conditions, using chronoamperometric measurement by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode.
Encapsulation of L-ManDH using other sol-gel protocols
The experiments made in the presence of P450 have shown that some components like PEG could be necessary for the stabilization of the proteins or to favor a conformation suitable for direct electron transfer reaction.
Two different conditions have been tested: 1) the same sol as in section 3.1, but in the absence of PEG, and 2) a sol based on TEOS, prepared at slightly acidic pH (5), in which a small amount of EtOH was produced during the hydrolysis reactions (Fig. IV-7). Surprisingly, all systems led to quite comparable responses, showing increasing current upon addition of increasing amounts of mandelic acid, measured by CV or by chronoamperometry under hydrodynamic conditions. The vesicles as prepared from E. coli are rather robust and were efficiently encapsulated using these different protocols. The better stability of the ManDH response compared to P450 could be due to the lower requirements of MET by comparison with DET. Indeed, DET requires a suitable conformation to be achieved by the protein on the electrode surface in order to operate the final electron transfer reaction. By contrast, in the MET mechanism, the mediator can diffuse through the gel to the protein located in the vesicle. Here the only requirement is that the vesicle should not be dissolved in the sol-gel environment, which is obviously the case.
Conclusion
The electrochemistry of membrane-associated proteins can be promoted in a sol-gel environment. Direct electron transfer (DET) has been found to be more sensitive to this environment than the mediated electron transfer (MET) reaction. DET requires the proteins to be located at close distance to the electrode surface, in a conformation that is suitable to operate this transfer of electrons.
We suppose that a fraction of the total P450 CYP1A2 proteins introduced in the sol-gel material is not able to react with the electrode, and that only those located at the film/electrode interface lead to a measured current. Nevertheless, the sol-gel material and the presence of PEG clearly allowed the electrocatalytic signal for oxygen reduction. Further experiments should involve the electrochemical detection of other substrates of P450 proteins, e.g. propranolol or paracetamol. Mediated electron transfer does not display the same requirements. Proteins located far from the electrode surface can transfer their electrons to the electrode if the mediator can diffuse in the silica gel to the redox center of L-mandelate dehydrogenase, react with the FMN cofactor, and then react again with the electrode. The immobilization of vesicles with ManDH was found rather insensitive to the sol-gel route used for immobilization on the electrode surface. The benefit of encapsulation has been shown for electrochemical measurement under convection. The electrochemical response remained stable for at least 15 hours, while no response was observed with the vesicles simply dropped on the glassy carbon electrode surface. These experiments were all developed using a mediator in solution, i.e. ferrocenedimethanol. We could finally show that the mediator, in this case an osmium polymer wrapped on carbon nanotubes, could be co-immobilized with the protein in a reagentless configuration. Further studies could involve the implementation of such biohybrid materials in electrochemical reactors for electroenzymatic synthesis.
The effect of the encapsulation of bacteriophage ΦX174 in silica-based sol-gel matrices on its infectivity has been studied. The purpose of this study is to investigate some strategies to protect the bacteriophage against deactivation.
The encapsulation was achieved via an aqueous-based sol-gel, similar to those used in Chapters 3 and 4, but the protocol here is applied to the preparation of small monoliths. The materials were aged and then introduced into water for release of the virus and assessment of its infectivity. Organic and polymeric additives have been introduced to design a protective environment for the encapsulation of the sensitive bacteriophage. The efficiency of sol-gel encapsulation has been evaluated by studying the viral infectivity as a function of lysis plaques in the presence of Escherichia coli.
Introduction
Bacteriophages were discovered by Félix d'Hérelle in 1917. They have the ability to infect selectively target bacteria, allowing, e.g., the detection and control of foodborne diseases [1,2]. They can also find application in biosensing, as molecular biology tools or for drug delivery [3]. Upon infection of a host bacterium, a bacteriophage utilizes the cell machinery for the amplification of new virions, thereby causing the death of the host bacterium. Bacteriophages can be produced in large quantities, they display a better stability than antibodies and they are able to recognize only living bacteria [2]. The research on bacteriophages as antibacterial agents was important before the discovery of antibiotics, but then decreased, remaining active only in a few locations, notably in Georgia. The recent emergence of bacterial strains that have developed resistance to existing antibiotic treatments has refocused the research on this biological antibacterial treatment. Maintaining the infectious character of the virus is also crucial when considering its use in medical applications [4]. Bacteriophages can be produced, concentrated and stored in aqueous solutions at low temperature, showing in these conditions a slow decay of infectivity with time.
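The slow decay of infectivity during storage mentioned above is often approximated by first-order kinetics; a sketch of such a model, assuming a hypothetical decay constant that is not fitted to any data in this work:

```python
import math

def titer_after_storage(titer0, k_per_day, days):
    """First-order decay model for phage infectivity during storage:
    titer(t) = titer0 * exp(-k * t), with k in day^-1 and t in days."""
    return titer0 * math.exp(-k_per_day * days)

# Hypothetical example: a 1e6 UFP/mL stock with k = 0.05 day^-1,
# stored for 30 days at low temperature
print(round(titer_after_storage(1e6, 0.05, 30)))  # about 2.2e5 UFP/mL left
```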
Immobilization in a protective matrix is another approach that could be used to maintain the infectivity of the virus in conditions that would be suitable for specific applications [3]. As discussed in previous chapters, sol-gel encapsulation technology has already been successfully used to preserve the viability and activity of many encapsulated biological species, such as vegetal cells [5], enzymes [6,7], bacteria [8,9], algae [10] and yeast [11]. Several examples of encapsulation of viruses in sol-gel matrices can be found in the literature, with the goal of using this biological entity as a template during the gelification of the inorganic matrix [12,13], but it is important to note that these studies did not consider the infectious character of the immobilized virus. Only recently, this approach was used to encapsulate adenovirus with the goal of extending the release time from implants [4]. One has to note that in this example the gel containing the virus was stored in aqueous solution before implantation in mice, conditions which are favorable for keeping a high ratio of infectious virus. The approach involves the polymerization of aqueous silicate monomers in the presence of silica nanoparticles for the encapsulation of the bacteriophage in a stable bio-composite matrix. Various additives were introduced in the sol with the goal of improving the protection of the immobilized bacteriophage, especially after drying of the gel. After aging, the gel was reintroduced in solution for phage release and the infectivity of ΦX174 was analyzed.
Assessment of viral infectivity
Principle of infectivity determination for encapsulated bacteriophage
The infectivity of encapsulated ΦX174 has been tested using the single-agar-layer plaque assay [17]. The protocol is reported in Figure V-2. First, ΦX174 was encapsulated in a sol-gel monolith and aged at +4 °C. Then the monolith containing the bacteriophage was dissolved in water to release free ΦX174, followed by entrapment together with E.
coli CN in an agar plate and incubation overnight at 37 °C, to allow the interactions between ΦX174 and E. coli and the formation of lysis plaques in the agar plate.
Fig V-2. Schematic representation of bacteriophage ΦX174 encapsulation and determination of its infectivity against E. coli CN.
As shown in the picture reported in Figure V-2, lysis plaques can be counted, this parameter being used to compare the influence of aging and the impact of additives on viral infectivity after release in solution (plaque-forming units per mL of solution, UFP mL-1).
Encapsulated in semi-dry silica monolith
The compatibility of the aqueous sol-gel route (sodium silicate solution and Ludox nanoparticles), i.e. not involving alcohol [18], with bacteriophage encapsulation was first studied by dispersion of the bacteriophage in a saturated solution of sodium silicate, pH 7.
Chapter V. Encapsulation of infectious bacteriophage in a hybrid sol-gel monolith
The infectivity was then tested for 6 days (Fig. V-3). The infectivity of ΦX174 against bacteria remained approximately constant (250-300 UFP mL-1) during this time, which confirms that the sol was harmless to the viral infectivity and could be applied for the encapsulation of bacteriophages. The infectivity in the presence of polyethyleneimine (PEI) or poly(diallyldimethylammonium chloride) (PDDA) decreased from 180 UFP mL-1 to 10 UFP mL-1 and 1 UFP mL-1, respectively, after 6 days at +4 °C. The introduction of PEI into the sol-gel apparently improved the protection against rapid deactivation of the viral infectivity over 1 week, while PDDA did not. This improvement with PEI could be explained by favorable electrostatic and H-bond interactions between PEI and the bacteriophage capsids, protecting the bacteriophage from the surrounding sol-gel material, as reported before for some redox proteins [19].
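The UFP mL-1 titers discussed above are derived from plaque counts corrected for the dilution and the plated volume; a minimal sketch of that calculation, where the counts, dilution and volume are hypothetical illustration values, not data from this work:

```python
def titer_ufp_per_ml(plaque_count, dilution_factor, plated_volume_ml):
    """Titer of the original suspension in plaque-forming units per mL:
    the counted plaques are scaled up by the dilution factor and
    normalized by the volume actually plated."""
    return plaque_count * dilution_factor / plated_volume_ml

# Hypothetical example: 30 plaques after plating 0.5 mL of a 1:10 dilution
print(titer_ufp_per_ml(30, 10, 0.5))  # 600.0 UFP per mL
```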
Encapsulated in dried silica monolith
In a second set of experiments, bacteriophage ΦX174 has been encapsulated in a monolith of hybrid sol-gel and stored at 4 °C for 6 days in a Petri dish without lid, allowing complete drying. A TEM analysis of the resulting material has been performed in order to evaluate the influence of the additives on the gel texture, but no obvious influence could be observed, as the texture of the material was dominated by the silica nanoparticles of Ludox (Fig. V-6). No bacteriophage could be observed, due to their relatively low concentration. In the present state of our investigation, we first suppose that the beneficial influence of PEI and glycerol on the bacteriophage activity is due to interactions established between the additives and the viral capsid, preventing complete deactivation in the harsh conditions of the study. Note that the composition of the gel could also affect the liberation of the bacteriophages from the entrapped into the free, active state. To the best of our knowledge, this work
Conclusion
The bacteriophage ΦX174 has been used as a model for a systematic investigation of the conditions that favor infectivity after encapsulation in a sol-gel inorganic material. The encapsulation process has been performed by mixing a suspension of the bacteriophage into the starting sol, followed by rapid gelification. The presence of glycerol and polyethyleneimine was proved to be essential to significantly preserve the viral infectivity against bacteria in a dry environment for 6 days, the better protection being obtained with the polyelectrolyte. The favorable electrostatic and H-bond interactions between the bacteriophage and the polymer could explain this better resistance to inactivation. This work could provide more opportunities for applications of immobilized bacteriophage in antibacterial protection and in medicine.
Conclusion and perspective

The main topic of this thesis was related to the immobilization and the characterization of the activity of bacteria in silica-based sol-gel films. Electrochemistry was successfully used to induce the immobilization of Escherichia coli strains in a hybrid sol-gel layer by using the controlled electrolysis of the starting sol containing the silica precursors, polyethylene glycol, chitosan and trehalose. Bacteria were either adsorbed on the electrode surface before sol-gel electrodeposition or codeposited during the electrolysis. We could show that the membrane integrity of the bacteria was kept for more than one month and that bacteria were able to express luminescence or fluorescence in response to their environment. Electrochemistry was then considered for the communication with Shewanella putrefaciens and Pseudomonas fluorescens strains in the presence of sodium formate and glucose as electron donors, respectively. Both strains are able to transfer electrons from their outer membrane. The immobilization of these bacteria strongly limits the electron transfer reactions from the bacteria to the electrode surface because of the insulating character of the silica matrix. Strategies have been proposed to increase this electron transfer pathway in the gel, using either ferrocene mediators immobilized on carbon nanotubes or co-immobilization of a natural mediator, i.e. cytochrome c, in the silica gel including bacteria. Both strategies lead to very different behaviors from the point of view of sensitivity to the addition of the substrate, as ferrocene could be able to cross the outer membrane of the bacteria, through the porins, and to collect electrons in the inter-membrane space. In contrast, cytochrome c only collects electrons from the outer membrane.
The implication of cytochrome c within a silica-based gel entrapping bacteria mimics in some respects the strategies developed in natural biofilms to favor electron exchanges with a solid substrate, mineral or electrode, and this system can for this reason be defined as an artificial biofilm. The experience developed on the immobilization of bacteria was further applied to the immobilization of membrane-associated proteins for bioelectrochemistry. Two different proteins have been evaluated, P450 (CYP1A2) and L-mandelate dehydrogenase (ManDH). Both systems display electrochemical activity after immobilization, by direct electron transfer and mediated electron transfer, respectively. We observed that the electrochemistry of P450 (CYP1A2) was more sensitive to the sol-gel environment than that of L-ManDH, a result that can be explained by the nature of the electron transfer reaction between the protein and the electrode. Nevertheless, in both systems, the immobilization in the sol-gel matrix improved the intensity and the stability of the electrocatalytic response. Finally, some preliminary studies on the encapsulation of bacteriophage in a hybrid sol-gel inorganic material have shown that the infectivity of this virus could be improved by careful control of the material composition. The presence of glycerol and polyethyleneimine was proved to be essential to preserve significantly the viral infectivity against bacteria in a dry environment for 6 days, the better protection and/or release being obtained with the polyelectrolyte. The favorable interaction between the negatively charged bacteriophage and the positively charged polymer could partially explain this better efficiency (resistance to inactivation). This work could provide more opportunities for applications of immobilized bacteriophage in antibacterial protection and in medicine.
In an attempt to propose some perspectives to this thesis, we can first comment that many different systems have been studied, each of them providing new opportunities of continuation. These future developments concern the application of electrochemically assisted bacteria encapsulation, further development of the artificial biofilm, application of immobilized membrane-associated proteins and new developments on bacteriophages. The reader has certainly noticed that the electrochemically assisted bacterial encapsulation developed in chapter II was not applied to the electrochemical studies reported in chapter III. These two studies have been developed in parallel and the link between these two different approaches was not made here. The difficulty is that the film composition suitable for electrodeposition was different from the one used in the electrochemical communication studies, so optimization is necessary for the application of electrodeposited biohybrid films to electrochemical devices. The artificial biofilm appears as a promising route for the controlled elaboration of bioelectrodes. Such a concept has some drawbacks, as the immobilization hinders the natural growth of bacteria. However it also provides some qualities. It allows the careful control of the bacterial strain; it would allow fundamental studies in which several strains would be mixed together in a controlled manner in order to study their synergy for bioelectrochemical reactions. Biofilm growth needs time; the artificial biofilm would be a valuable strategy for the rapid elaboration of microbial devices for electrochemical applications such as biosensors or bioreactors. Immobilization of membrane-associated proteins in sol-gel materials could be a valuable approach for increasing the stability of the electrochemical response, which could be further tested in an electrochemical reactor for electroenzymatic synthesis.
The sol could be further optimized to improve as much as possible the stability of the electrocatalysis without affecting the enzymatic activity. Other membrane proteins are of interest for energy conversion, like hydrogenase. It could be of interest to evaluate sol-gel immobilization for this latter class of proteins. Finally, studies on bacteriophage immobilization in sol-gel materials for controlled release are in their infancy. Additional studies should consider more deeply the effect of additives on the release of infective bacteriophage from the silica matrix and the effect of their concentrations on the viral infectivity. Further studies could consider the potential application of such biomaterial in a form suitable for controlled release as an antibacterial treatment.

Methods and Techniques

To conduct the work presented in this thesis, several chemicals and techniques have been used to prepare and characterize the active films. This chapter presents the description of the sol-gel precursors, polymeric and organic additives, chemical and biological mediators, and conducting nanomaterials used for the fabrication of the biocomposites. The preparation protocols of biomaterials such as bacteria, bacteriophage and membrane-associated enzymes are described in detail. A series of protocols used to encapsulate biomolecules and microorganisms in pure inorganic and organic/inorganic hybrid sol-gels are also described. In addition, the analyses used to investigate viability and bioactivity (electrochemical measurements, thickness measurements, membrane integrity and metabolic activity) are described in this chapter. Finally, the techniques and analytical methods used to support these studies are described and illustrated.

Chemicals and biological species

Additives

A series of additives with different physical properties have been used in this work.
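Additive amounts throughout this work are specified as weight percent of the final mixture. A minimal helper for converting a target wt % into the mass of additive to weigh (the function name and example values are illustrative, not from the protocols):

```python
def additive_mass_g(target_wt_percent, final_mass_g):
    """Mass of additive (g) needed to reach a target wt % in a final mixture."""
    return final_mass_g * target_wt_percent / 100.0

# Example: 10 wt % of glycerol in 20 g of final sol
print(additive_mass_g(10, 20.0))  # 2.0 g
```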
These additives have been introduced either for the adsorption of bacteria on the electrode surface or for the optimization of hybrid sol-gel materials. The effects of additive introduction into sol-gels for bioencapsulation are also investigated later. Table 3 provides some information on these microorganisms.

Strains and physical properties:
- Escherichia coli K12 derivative C600: E. coli C600 is an aerobic Gram-negative, rod-shaped bacterium. Optimal growth occurs at 37 °C.
- Bacteriophage ΦX174: ΦX174 is an aerobic isometric virus, having single-stranded circular DNA and an icosahedral shape of 20 triangular faces and 12 vertices. Optimal growth occurs at 37 °C in the presence of bacteria and nalidixic acid 25 mg/mL.

Redox mediators (table columns: redox mediator, formula, MW (g mol-1), supplier).

Preparation of nano-material suspensions

1) MWCNT-COOH: 2 mg of MWCNT-COOH were dispersed in 1 mL of deionized water. The solution was sonicated for 10 min followed by stirring overnight at room temperature.
2) SWCNT-EtO: 2 mg of SWCNT were dispersed in 1 mL of chitosan. The solution was sonicated for 10 min followed by stirring overnight at room temperature.
3) SWCNT-(EtO)8-Fc: the chemical modification of SWCNT with a poly(ethylene glycol) linker and ferrocene has been performed by the SRSMC laboratory [3,4]. Its dispersion in chitosan was obtained using the same protocol as for SWCNT-EtO.
4) MWCNT-Os: first, 2 mg of MWCNT-COOH were dispersed in 1 mL of deionized water by sonication. Then, a mixture of 150 µL of osmium polymer P0072 and 250 µL of dispersed MWCNT-COOH was sonicated for 1 h followed by stirring overnight at room temperature. Finally, MWCNT wrapped with osmium P0072 was dispersed in chitosan 0.2 wt % (ratio 1:1) [5].
5) Gold nanoparticles: HAuCl4 was dissolved in 20 mL of deionized water to get a final concentration of 1 mM and heated. When the solution started boiling, trisodium citrate dihydrate (1 wt %, 2 mL) was added.
The final solution was stirred and boiled until the color changed to brick-red. The final concentration of gold nanoparticles was estimated at 0.2 mg mL-1 [6].

Electrodes

Glassy

Preparation of sol-gels for bioencapsulation

We have explored various starting sol compositions for bioencapsulation (Table 5).

Sol B (aqueous) [7]: Sodium silicate solution (0.27 M, 13 mL) was mixed with LUDOX HS-40 (40 wt %, 7 mL) and 5 wt % of pure glycerol. HCl (1 M, 2.4 mL) was added to adjust the pH at 7. Finally, this sol mixture was diluted 10 times with deionized water for further use.

Sol C [8]: The starting sol was prepared by mixing 4 mL TMOS with 0.5 mL of 0.1 M HCl and 2 mL of distilled water. The mixture was sonicated (Transsonic T 080) for 10 min. The acidified sol was then diluted to a concentration of 1 M with deionized water and evaporated for a weight loss of 3.52 g of methanol. The alcohol-free sol was then stored at 4 °C for further use.

Sol D: Sodium silicate solution (55 mM, 20 mL) was mixed with PEG 6 wt %. HCl (1 M, 0.7 mL) was added to adjust the pH at 7.

Sol E: The starting sol was prepared by mixing 2.25 mL TEOS with 2.5 mL of 0.01 M HCl and 2 mL of distilled water. The mixture was mixed overnight at room temperature. NaOH (1 M, 0.7 mL) was added to adjust the pH at 5. Finally, this sol mixture was diluted 3 times with deionized water for further use.

Table 5. Information on the different methods used to prepare the starting sols.

Microorganism cultures

Bacteria

All the strains used in encapsulation experiments are from our laboratory collection. All the bacterial strains (except E. coli CN) were streaked on TSA plates from cultures stored at -80 ºC on beads supplemented with glycerol. When needed, a 0.1 mL overnight culture from a single colony was inoculated in 200 mL TSB and incubated at the optimal growth temperature for each strain under stirring (160 rpm) for 24 h (stationary growth phase). Except S.
putrefaciens, which needs stirring for 48 h (stationary growth phase) in order to reach oxygen depletion inside the growth medium; the strain then starts to produce cytochromes to adapt to the anaerobic medium. Ampicillin sodium salt (100 µg/mL) and kanamycin (25 µg/mL) were added to the growth media for the cultivation of E. coli MG1655 pUCD607 and E. coli MG1655 zntA-GFP strains, respectively, in order to ensure the growth of the genetically-modified bacteria only. Bacterial growth (turbidity) was measured by monitoring the optical density at 600 nm. The culture was harvested by centrifugation at 5000 g for 10 min at room temperature. The pellet was washed twice with 1 mM KCl, and then suspended in 1 mM KCl. This washing procedure eliminated nutrients to avoid any bacterial growth in the sol-gel. Finally, the bacterial suspension in KCl (1 mM) could be used directly for sol-gel bioencapsulation or stored at +4 °C for a few days. The strain E. coli CN has been cultured following a different protocol in order to adapt its physiological properties at the exponential growth phase. Bacteria were stored at -80 ºC on beads supplemented with glycerol. When needed, a 0.1 mL overnight culture from a single colony was inoculated in 200 mL MSB Nal medium and incubated at 37 ºC under stirring (160 rpm) for 3 h (exponential growth phase). Bacterial growth was measured by monitoring the optical density at 600 nm.

Bacteriophage

The bacteriophage ΦX174 was produced according to ISO 10705-2:2000(F). A suspension of ΦX174 (10^7 PFU/mL as a final concentration) was incubated in MSB Nal medium containing E. coli CN cells (adapted at the exponential growth phase) for 5 h at 37 °C. The culture was harvested by centrifugation at 3000 g for 20 min at room temperature. The supernatant was then filtered through a membrane filter (pores of 0.22 µm), divided into aliquots and stored at 4 °C in the dark until further use.

Sol-gel bioencapsulation protocols

6.1.
Bacteria encapsulation by electrodeposition (Chapter II)

The preparation of the sol used for the EAD protocols has been described in the previous section (preparation of sol C) without any additives. The introduction of additives allowed enhancing the deposition and stability of the sol-gel film in addition to its protective role for the bacteria. or E. coli MG1655 zntA-GFP (2 × 10^9 cells/mL).

EABE(2S): It is divided into 2 steps: adsorption of the bacteria on the ITO plate surface with the aid of the polycationic PEI polymer, followed by the electrodeposition of the hybrid sol-gel film through/over the bacteria layer (the scheme is illustrated later).

Bacteria encapsulation by drop-coating (Chapter III)

The preparation of the sol used for the drop-coating protocol has been described in the previous section (preparation of sol B). The protocol is divided into 2 branches:

One-layer encapsulation

In

Bacteriophage encapsulation (Chapter V)

The preparation of the sol used for monolith bioencapsulation has been described in the previous section (preparation of sol A). A collection of organic additives has been introduced individually into sol A in each experiment, such as PEI (0.2 wt %), glycerol (10 wt %), trehalose (10 wt %), PDDA (5 wt %) and a mixture of trehalose-glycerol (5 wt % - 5 wt %) as final concentrations in the sol-gel monolith. 2 mL of HCl solution (1 M) was added to raise the pH up to 7.4. Then, ΦX174 (1 mL, 6 × 10^7 UFP/mL) was added to the sol and/or hybrid sol. Finally, aliquot samples (15 µL) of the mixture containing sol and bacteriophage were aged at 4 °C for 6 days in humid air or harsh dry air for further analysis. Note that a control experiment was performed in the absence of any additive. Electrochemical measurements were performed using glassy carbon electrodes (GCE) as working electrode in the presence of platinum counter and Ag/AgCl reference electrodes.
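Alkoxide-based sols such as sol C (4 mL TMOS, 0.5 mL of 0.1 M HCl and 2 mL water) are commonly characterized by the hydrolysis ratio R = [H2O]/[Si(OR)4] mentioned in the literature survey. A hedged estimate for sol C; the TMOS density and the molar masses below are handbook values, not taken from the thesis:

```python
# Hydrolysis ratio R = [H2O]/[Si(OR)4] for a TMOS-based sol.
# Assumed handbook constants (not from the thesis):
RHO_TMOS = 1.03    # g/mL, density of tetramethyl orthosilicate (TMOS)
M_TMOS   = 152.22  # g/mol, molar mass of TMOS
M_H2O    = 18.02   # g/mol, molar mass of water (density ~1.00 g/mL)

def hydrolysis_ratio(v_tmos_mL, v_aqueous_mL):
    """Moles of water per mole of TMOS in the starting sol."""
    n_tmos = v_tmos_mL * RHO_TMOS / M_TMOS
    n_h2o = v_aqueous_mL * 1.00 / M_H2O  # dilute HCl treated as water
    return n_h2o / n_tmos

# Sol C: 4 mL TMOS + 0.5 mL of 0.1 M HCl + 2 mL water ~ 2.5 mL aqueous phase
print(round(hydrolysis_ratio(4.0, 2.5), 1))  # ~5.1
```

A ratio above the stoichiometric value of 4 is consistent with complete hydrolysis of the alkoxide being possible in this recipe.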
Amperometric measurements (Chapter II) were performed using ITO plates as working electrode in the presence of platinum counter and Ag/AgCl reference electrodes for electrodeposition.

Methods of analysis

Atomic force microscopy (AFM)

Atomic force microscopy experiments were carried out using an MFP3D-BIO instrument (Asylum Research Technology, Atomic Force F&E GmbH, Mannheim, Germany). Triangular cantilevers with square pyramidal tips (radius of curvature 15 nm) were purchased from Atomic Force (OMCL-TR400PSA-3, Olympus, Japan). The spring constant of the cantilevers, measured using the thermal noise method, was found to be 20 pN/nm. The films have been imaged in AFM contact mode and images were collected at a fixed scan rate of 1 Hz with a resolution of 512 x 512 pixels. Experiments were performed in air at room temperature. The film was scratched and its thickness was measured from the cross-section profiles of the AFM topographical images. Values were calculated by statistical analysis, as averages of 5 measurements obtained from several regions of the film.

Infectivity analysis

The infectivity of encapsulated ΦX174 has been tested by a single-agar-layer plaque assay [16]. The aliquots of encapsulated ΦX174 samples were dissolved in sterile water (50 mL) at room temperature for testing the preserved infectivity towards bacteria. 10 mL of MSA Nal soft medium (MSB containing 10 g/L agar) was steamed to liquefy and adjusted at 55 °C in a water bath. 1 mL of E. coli CN (adapted at the exponential phase) and 1 mL of the dissolved aliquot samples were introduced into the melted MSA. The final mixture was gelified in a petri dish. Finally, the latter was incubated overnight at 37 °C and lysis plaques were counted by the naked eye. Values were calculated as averages of 2-3 samples. Note that a control test was performed using the same protocol for a ΦX174 suspension dispersed in a saturated solution of silica.

Chapter I.
Knowledge and literature survey

Alkoxide-based sols are thus more adapted to the encapsulation of proteins and enzymes which are not highly sensitive to alcoholic byproducts, while silicates and colloidal silica are more adapted to encapsulate sensitive microorganisms such as bacteria and viruses [26]. The sol-gel process consists of hydrolysis, condensation, gelation, drying and aging steps in order to form gels or xerogels, as summarized in Figure I-2. Condensation reactions build up the SiO2 network. The H2O (or alcohol) expelled from the reaction remains in the pores of the network. When sufficient interconnected Si-O-Si bonds are formed in a region, they respond cooperatively as colloidal (submicrometer) particles or a sol [25].

Eq I-2. Condensation of silica precursors.

The gel morphology is influenced by temperature, the concentrations of each species (especially the R ratio, R = [H2O]/[Si(OR)4]) and mainly pH. Hydrolysis and condensation occur simultaneously.

Fig I-3. Schematic representation of coating processes via solvent evaporation such as A) dip coating; B) spin coating and C) drop-coating (adapted from reference [28]).

Fig I-4. The principle of electrochemically-assisted generation of sol-gel on the electrode surface.

Fig I-5. Possible interactions between chitosan and silicates (adapted from reference [57]).

Fig I-6. Chemical structures of the components of sol-gel based hybrid materials, i.e. A: tetramethyl orthosilicate; B: glycerol; C: poly(ethylene glycol); D: chitosan; E: trehalose dihydrate; F: poly(ethyleneimine); G: sodium silicate with colloidal silica.

functional state. The technological advancements in the immobilization of biological species over several decades have also resulted in a revolution in the use of biological objects for the selective extraction, delivery, separation, conversion and detection of a wide range of chemical and biochemical reagents. The use of biological species such as proteins, peptides,
Fig I-7. Simplified illustration showing the different types of proteins which are utilized for various immobilization formats. Soluble proteins are located within the hydrophilic intracellular compartment of the cell. Intrinsic trans-membrane proteins span the cellular phospholipid bilayer membrane, whereas extrinsic membrane proteins are partially embedded within the membrane and are exposed to either the intracellular or extracellular regions (adapted from reference [74]).

Fig I-8. Illustration of some typical strategies used for the immobilization of membrane proteins. (A) Direct physical adsorption of the lipid bilayer and membrane protein onto a solid support. (B) Adsorption or covalent attachment of the lipid bilayer onto a solid support, with an intermediate layer of hydrated polymer. (C) Immobilization of the lipid bilayer and membrane protein in the pores of a solid support. (D) Immobilization of a phospholipid vesicle through covalent or avidin-biotin bioaffinity attachment (adapted from reference [66]).

Fig I-9. Photosynthesis of cyanobacteria encapsulated in porous silica (adapted from reference [83]).

(s) relative to the environmental effects. The promoter senses the presence of the target molecule(s) and activates transcription of the reporter gene, and subsequent translation of the reporter mRNA produces a protein/light as a detectable signal (Fig. I-10) [93,94]. The genetic-engineering process has generated a diversity of cells for designing bioreporters, such as bacteria modified with zntA.

Fig I-11. Typical biosensor setup (adapted from Borgmann, Advances in Electrochemical Science and Engineering: Bioelectrochemistry).

(Fig. I-12) [102,103]. Since the efficiency of biofuel cells depends on the bacterial density, most biofuel cells have been operated with microbial biofilms.
The sol-gel encapsulation

Fig I-12. Schematic illustration of a microbial fuel cell (adapted from reference [106]).

encapsulated and infective for months [113]. In addition, the encapsulation of viruses and bacteriophages in sol-gel materials has been developed for biotechnologies such as ordered mesoporous silica with pores designed according to the size and morphology of the biomaterials (Fig I-13) [114,115].

Fig I-13. Mesoporous template synthesis of hierarchically structured composites (adapted from reference [114]).

between bacteria immobilized in a sol-gel layer and the electrode material. The immobilization of these bacteria could limit the electron transfer reactions to the electrode surface because of the insulating character of the silica matrix; thus strategies to favor electron transfer reactions will be implemented for the preparation of an artificial biofilm (Chapter 3). The expertise developed for the electrochemistry of whole cells will then be applied to the electrochemistry of membrane-associated proteins, i.e. P450 and mandelate dehydrogenase, with the goal to improve the stability of the electrocatalytic response (Chapter 4). Finally, a last topic has been investigated, which concerns the infectivity of bacteriophages after their encapsulation in a silica-based sol-gel monolith (Chapter 5).

In this chapter, a novel method, based on the electrochemical manipulation of the sol-gel process, was developed to immobilize bacteria in thin hybrid sol-gel films. This enabled the safe immobilization of Escherichia coli on electrode surfaces. E.
coli strains C600, MG1655 pUCD607 and MG1655 pZNTA-GFP were incorporated and physically encapsulated in a hybrid sol-gel matrix and the metabolic activity and membrane integrity of the bacteria were assessed as a function of the aging time in the absence of nutrients at +4 °C or +80 °C. Live/Dead BacLight bacterial viability analysis detected by epifluorescence microscopy indicated good preservation of the E. coli C600 membrane integrity in the sol-gel film. The presence of chitosan, trehalose and polyethylene glycol additives was shown to strongly improve the viability of E. coli cells in the electrodeposited matrix for 1 month after encapsulation. Finally, the bioluminescent activity of E. coli MG1655 pUCD607 was preserved by approximately half of the cells present in such composite films.

Chapter II. Electrochemically-assisted bacteria encapsulation in thin hybrid sol-gel film

catalysed by the local variation of pH that can be induced by the electrolysis of the sol. In this work, a potential of -1.3 V versus Ag/AgCl (3 M) was applied to the working ITO electrode.

Fig II-1. AFM profiles measured on films composed with (a) TMOS, PEG, and chitosan, (b) TMOS and PEG, (c) TMOS and chitosan, (d) chitosan and PEG. N.B. E = -1.3 V, deposition time = 30 s.

Fig II-3. AFM profiles for electrodeposition of optimal hybrid sol-gel films varying with deposition times: (a) 10 s, (b) 20 s, (c) 30 s. N.B. E = -1.3 V.

AFM characterization also provides additional information concerning the gradual encapsulation of E. coli C600 at a cell density of 4×10^6 cells mL-1 (bacteria displaying a diameter of about 500 nm) with electrodeposition time (Figures II-3 and II-4). Bacterial cells were still visible on the planar substrate after deposition of an 80-nm thick film (Fig. II-3a & 4a) but were gradually masked after 20 s (Fig. II-3b & 4b) and 30 s (Fig. II-3c) deposition.
Note that the roughness of the film (measured after drying) also strongly increased when increasing the film thickness. A high short-term viability was observed (close to 100 %). One great advantage of EABE(2S) is that the bacterial density can be easily controlled by the first bacteria adsorption step, and the uniform orientation of the cells allows precise analysis of the viability (Fig. II-5). At the opposite, the encapsulation in one step (EABE(1S)) leads to randomly distributed bacteria in the whole volume of the film (see next section).

Fig II-5. Short-term BacLight analysis of EABE(2S) for six E. coli C600 concentrations encapsulated in optimal sol-gel: a) 4×10^4 cells mL-1, b) 4×10^5 cells mL-1, c) 4×10^6 cells mL-1, d) 4×10^7 cells mL-1, e) 4×10^8 cells mL-1 and f) 4×10^9 cells mL-1. Same magnification for the 3 upper figures a, b and c; same magnification for the 3 lower figures d, e and f. N.B. E = -1.3 V, deposition time = 30 s.

The effects of several parameters, i.e. the hybrid sol-gel components (Fig. II-6), the storage conditions (Fig. II-7) and the deposition time parameters of EABE(2S) (Fig. II-8), were then analyzed by Live/Dead BacLight over several weeks.

Fig II-6. Long-term BacLight analysis for EABE(2S) of E. coli C600 (4×10^6 cells mL-1) in films composed with (a) sol-gel silica, PEG and chitosan, (b) PEG and chitosan, (c) sol-gel silica and chitosan, (d) sol-gel silica and PEG. Trehalose was also introduced in all films. The first measurement was done 1 hour after electrochemically assisted bacteria encapsulation. All films have been prepared by electrolysis at -1.3 V for 30 s. All samples were stored at +4 °C in moist air.

Fig. II-7 reports the evolution with time of the bacterial viability of encapsulated E. coli C600 under different storage conditions, i.e., 1 mM KCl solution (curve a) or moist air (curves b and c).
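The viability percentages obtained from the Live/Dead BacLight images correspond to the fraction of green-stained (membrane-intact) cells among all counted cells. A minimal sketch of that counting arithmetic (the per-field counts below are invented for illustration):

```python
def viability_percent(green_counts, red_counts):
    """Percent of viable (green, membrane-intact) cells from per-field counts.

    green_counts: green-stained (viable) cells counted in each image field
    red_counts  : red-stained (membrane-damaged) cells in the same fields
    """
    green = sum(green_counts)
    total = green + sum(red_counts)
    return 100.0 * green / total if total else 0.0

# Illustrative counts from three image fields
print(round(viability_percent([120, 98, 110], [3, 5, 2]), 1))  # ~97.0 %
```

Pooling several image fields before computing the ratio mirrors the averaging over several regions described for the AFM thickness measurements.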
All samples have been stored at +4 ºC, which was given as an optimum temperature for the storage of encapsulated E. coli inside inorganic matrices in the literature [40].

Fig II-8. Long-term BacLight analysis for EABE(2S) of E. coli C600 (4×10^6 cells mL-1) with 10 s (a), 20 s (b) and 30 s (c) deposition times. All films have been prepared by electrolysis at -1.3 V. N.B. (zero: 1 hour after encapsulation).

For all samples, the short-term viability measured one hour after encapsulation was high, close to 100 %. However, this viability dramatically decreased for thinner films and almost no viable cells could be observed.

Fig. II-9 illustrates the EABE(1S) protocol. Here, bacteria are introduced in the starting sol, in the presence of the silica precursor, PEG, chitosan and trehalose. When the electrolysis is performed, the sol-gel transition occurs and bacteria are immobilized in the course of the hybrid sol-gel layer formation. During the immobilization, the bacteria are expected to be homogeneously distributed in the material but are randomly oriented.

Fig II-10. AFM profiles for electrodeposition of optimal hybrid sol-gel films obtained with various deposition times: (a) 10 s, (b) 20 s (30 s was not measurable). All films have been prepared by electrolysis at -1.3 V with a final cell density of 2 × 10^9 cells/mL.

Fig II-11. AFM images of EABE(1S) for E. coli C600 (8×10^9 cells mL-1) encapsulated in optimal sol-gel: a) 10 s, b) 20 s and c) 30 s. All films have been prepared by electrolysis at -1.3 V.

Fig. II-12 reports the short-term viability of E. coli C600 immobilized by EABE(1S) in films produced with electrolysis times varying from 10 s (Fig. II-12a) to 60 s (Fig. II-12d). The bacterial cells exhibited high viability, but precise counting of viable bacteria was difficult because of their random orientation, especially for the thickest films.
The quantity of bacteria immobilized on the electrode surface was controlled by the bacterial density introduced in the sol and the time of electrodeposition. As observed previously for EABE(2S), a too long electrolysis time in EABE(1S) can be detrimental for the bacteria, as shown in Fig. II-12d (corresponding to the film prepared with 60 s electrolysis). In this experiment, a significant number of non-viable cells were observed (but it was not possible to quantify them accurately). Note that a cell density of 4 × 10^8 cells per mL was used for the EABE(1S).

Fig II-12. Short-term BacLight analysis of EABE(1S) for E. coli C600 (4×10^8 cells mL-1) along deposition times of optimal sol-gel: a) 10 s; b) 20 s; c) 30 s and d) 60 s. Same magnification for all pictures. All films have been prepared by electrolysis at -1.3 V with a final cell density of 10^8 cells/mL.

Fig II-13. Fluorescence metabolic activity for EABE(1S) of MG1655 pZNTA-GFP (4×10^8 cells mL-1) encapsulated in optimal sol-gel after 1 week: a) stressed by CdCl2 and b) unstressed (control).

surface biocomposite films, notably by providing high protection levels for the encapsulated microorganisms for a rather long period (1 month). Bacteria immobilized in such films can be reactivated for the expression of luminescent or fluorescent proteins and they kept their sensitivity to their environment, as demonstrated by the fluorescent signal in the presence of cadmium. The development of such an electrochemical protocol provides new opportunities for bacteria immobilization on electrode surfaces, with possible applications in the field of electrochemical biosensors for environmental monitoring, or to optimize the electrochemical communication between bacteria and electrode collectors, which constitutes a major challenge in the development of biotechnological devices.
Chapter III. Investigations of the electrochemical communications between bacteria and the electrode surface

In this chapter, the work focuses on the electrochemical communication between Shewanella putrefaciens CIP 8040 or Pseudomonas fluorescens CIP 69.13 encapsulated in a silica-based sol-gel film and a glassy carbon electrode. As silica is an insulating material, strategies have been found to improve the electron transfer from the bacteria to the electrode in this environment. Several configurations have been considered, i.e., direct electron transfer from the bacteria to the glassy carbon electrode, introduction of a soluble Fe(CN)6^3- mediator in the solution, introduction of carbon nanotubes on the electrode surface or in the sol-gel layer, functionalization of the carbon nanotubes with a ferrocene mediator and a long ethylene glycol arm and, finally, introduction of a natural mediator, i.e., cytochrome c, in the gel, in close interaction with the bacteria. The comparison of these different approaches allows a discussion on the interest and limitations of these different strategies. Sodium formate and glucose have been used as electron donors for S. putrefaciens or P. fluorescens, respectively.

of cell-based electrochemical devices by providing close proximity between the viable cells and the solid electrode surface to achieve fast and efficient electron transfer (ET) reactions [4]. The challenge is to enhance the link between the microbial catabolism and the communication with the electrode surface. Microorganisms can adopt different strategies for this purpose, the most common of which are: (1) direct electron transfer (DET) and (2) mediated electron transfer (MET), through either exogenous or endogenous diffusive electron mediators [13].
The goal of this study is to investigate an original strategy based on ET between Shewanella putrefaciens and an electrode material using novel artificial mediators. Sodium formate was used as a substrate for this bacterium. For the first investigation, Fe(CN)6^3- has been used to optimize the electrochemical communication (EC) of encapsulated bacteria, or of films modified with bacteria and bare CNT, with the glassy carbon electrode (GCE). In addition, bare CNT and CNT covalently linked to a flexible ferrocene mediator have been compared in different entrapment protocols in order to investigate and evaluate the EC of encapsulated bacteria with the GCE. The different electrode configurations are schematically described in Figure III-1.

Fig III-1. The various electrode configurations used in this study: in (A) only the bacteria are immobilized in the sol-gel layer deposited on GCE; in (B) SWCNT have been introduced in the sol-gel layer with bacteria; in (C) a first layer of SWCNT was deposited on GCE and then a second layer of silica gel containing bacteria was overcoated; and in (D) cytochrome c was introduced in the sol-gel layer with bacteria in order to serve as electron mediator.

Figure III-2A shows a typical cyclic voltammetric response of S. putrefaciens encapsulated in a sol-gel layer deposited onto the GCE surface (i.e., configuration as in Fig. III-1A). In these conditions, a complex cathodic response can be observed, with a well-defined redox signal.

Fig III-2. A) Cyclic voltammograms recorded with a GCE modified with a sol-gel layer containing S. putrefaciens. The measurement was performed in phosphate buffer at room temperature at a scan rate of 20 mV s^-1.
B) Amperometric response obtained upon successive additions of 20 mM sodium formate (arrows) with a similar electrode as in (A). The measurements were performed at room temperature in PBS under stirring, by applying +0.35 V versus the Ag/AgCl reference electrode (a) in the absence and (b) in the presence of 5 mM Fe(CN)6^3-.

Figure III-2B confirms such observation through the electrochemical response measured with the same system as reported in curve "a", but this time in the presence of 5 mM Fe(CN)6^3-. Carbon nanotubes were then introduced in the assembly, i.e., cases B and C as reported in Figure III-1. In case B, SWCNT were introduced in the sol-gel layer with the idea to promote a network of bacteria and nanotubes. In case C, a layer of SWCNT was first deposited on the glassy carbon electrode and covered by a second layer containing the bacteria, in order to increase the underlying electrode surface area, with the objective to improve the interactions between the bacteria and the SWCNT at the interface between these two layers. The electrochemical responses of these assemblies were first tested in the absence of mediator in solution. The electrochemical response observed upon additions of 20 mM formate with the one-layer configuration (curve "a" in Fig. III-3A, corresponding to case B in Fig. III-1) was slightly improved when compared to the response observed in the absence of SWCNT (curve "a" in Fig. III-2B), but remained very low. The current response was enhanced with the two-layer configuration (curve "b" in Fig. III-3A, corresponding to case C in Fig. III-1); however, this response dropped dramatically and rapidly with time. The same systems, i.e., cases B and C (Fig. III-1), were then tested in the presence of 5 mM ferricyanide. As expected, the current increased dramatically upon addition of 50 mM formate, the current value in the one-layer configuration (curve "a" in Fig. III-3B, corresponding to case B in Fig.
III-1) being two times lower than in the two-layer configuration (curve "b" in Fig. III-3B, corresponding to case C in Fig. III-1).

Figure III-3. Amperometric current responses measured (A) in the absence and (B) in the presence of 5 mM Fe(CN)6^3- mediator in solution upon successive additions of sodium formate (arrows) using glassy carbon electrodes modified with (a) a sol-gel layer containing both SWCNT-COOH and S. putrefaciens (as reported in Fig. III-1B) and (b) a sol-gel overlayer containing S. putrefaciens on a chitosan/SWCNT-COOH underlayer film (as reported in Fig. III-1C). The measurements were performed in PBS at room temperature by applying +0.35 V versus the Ag/AgCl reference electrode.

Fig III-4. (A) Scheme of SWCNT functionalized with ferrocene linked by a flexible poly(ethylene glycol) linker. (B) Cyclic voltammogram measured with GCE modified with SWCNT-(EtO)8-Fc. The measurement was performed in PBS at room temperature and at a scan rate of 20 mV s^-1.

Fig. III-6A reports the electrochemical response of cytochrome c in the sol-gel layer. A pair of well-defined current peaks is observed, located at 0.07 V. Contrary to the response measured with S. putrefaciens, which displays several irreversible peaks (Fig. III-2A), the signal coming from cytochrome c is clearly reversible. Investigations have shown that the electrochemical response of cytochrome c was controlled by diffusion, indicative of the good mobility of this small protein in the silica gel (see section 3 of this chapter).

Fig III-6. (A) Cyclic voltammogram measured with GCE modified with a sol-gel layer containing bovine heart cytochrome c. The measurement was performed in PBS at room temperature and at a scan rate of 20 mV s^-1.
(B) Amperometric current responses recorded upon successive additions of 0.2 mM sodium formate, as measured using GCE modified with a sol-gel layer containing bovine heart cytochrome c and S. putrefaciens. The measurement was performed under stirring in PBS at room temperature by applying +0.15 V versus the Ag/AgCl reference electrode.

The different behavior observed in the presence of SWCNT-(EtO)8-Fc and cytochrome c is indicative of the different mechanisms involved by the bacteria to transfer the electrons to the mediator: soluble mediators or ferrocene are susceptible to enter the porins of the outer bacterial membrane, while cytochrome c does not, the latter collecting only the electrons transferred through the outer membrane of the bacteria. Further optimization has been done using cytochrome c as a natural mediator. Experiments have been performed with Pseudomonas fluorescens and are presented in the next section.

Fig III-7. (A) Cyclic voltammograms for (a) bare GCE, (b) bovine heart cytochrome c in a sol-gel film on GCE, (c) cytochrome c and P. fluorescens in a sol-gel film on GCE. Inset: cyclic voltammetric response of P. fluorescens alone in the sol-gel film. (B) Variation of the current response measured by cyclic voltammetry at different scan rates for a sol-gel film containing cytochrome c.

As shown in Fig. III-8A, cytochrome c alone was not able to generate any current response upon addition of glucose, as expected from the absence of bacterial respiration, while P. fluorescens CIP 69.13 alone generated only a very small current response upon addition of glucose (Fig. III-8B). By contrast, P. fluorescens
CIP 69.13 encapsulated with bovine heart cytochrome c has shown much higher current responses upon successive additions of glucose (> 10 times that of bacteria alone, Fig. III-8C).

Fig III-8. Amperometric current responses measured upon the addition of glucose using glassy carbon electrodes modified with a sol-gel film containing (A) bovine heart cytochrome c, (B) P. fluorescens, (C) P. fluorescens with bovine heart cytochrome c and (D) E. coli with bovine heart cytochrome c.

Fig III-9. Amperometric current responses measured in the presence of 3 mM glucose with glassy carbon electrodes modified with a sol-gel film containing P. fluorescens and bovine heart cytochrome c.

For further optimization of the conditions to improve the electrochemical communication between P. fluorescens and bovine heart cytochrome c immobilized in sol-gel films, the effects of cytochrome c concentration, cell density, temperature, pH and applied potential have been investigated via amperometric measurements (Fig. III-10). Figure III-10A shows that increasing the cell density (from 4 to 9 × 10^9 cells mL^-1) encapsulated with a constant concentration of cytochrome c generated a higher current response. Fig. III-10B displays the electrochemical response of the biofilm as a function of the initial concentration of bovine heart cytochrome c (from 0.1 to 1.5 mM) in the sol used for electrode modification, co-encapsulated with a constant density of P. fluorescens (9 × 10^9 cells mL^-1). The highest response was obtained using 1 mM cytochrome c. The small decrease in the current response observed at higher concentration could be explained by some leaking of cytochrome c due to alteration of the sol-gel film stability at this high protein concentration (indeed, too high contents of additives are expected to result in more open silica structures and, thereby, easier leaching of entrapped species).
Temperature variations from 20 to 35 °C resulted in significant variations in sensitivity (Fig. III-10C), with a maximum current response for the electrochemical communication obtained at 30 °C, which actually corresponds to the temperature used for growth of this bacterium. Finally, pH 7 was observed as optimal (Fig. III-10D) for this bioelectrode. The optimal pH and temperature are thus consistent with the growth conditions of this bacterium.

Fig III-10. Amperometric current responses measured with glassy carbon electrodes modified with a sol-gel film containing P. fluorescens and bovine heart cytochrome c with respect to (A) P. fluorescens cell density in the presence of 1 mM cytochrome c, (B) cytochrome c concentration in the presence of P. fluorescens (9 × 10^9 cells mL^-1 as initial cell density), (C) temperature and (D) pH of the medium. Other conditions are similar as in Figure III-8, with the exception of the parameters under study.

Fig III-11. Influence of (A) the quantity of artificial biofilm and cytochrome c (volume of deposited sol solution) and (B) the presence of gold nanoparticles in the biocomposite film on the electrochemical response of glassy carbon electrodes modified with sol-gel films containing P. fluorescens and bovine heart cytochrome c, upon the addition of 3 mM glucose as substrate. Other conditions are similar as in Figure III-8. (N.B.: PF and AuNP are the abbreviations of P. fluorescens and gold nanoparticles, respectively.)

Cytochrome c only collects the electrons relayed by the outer-membrane cytochromes of the bacteria, leading to a different behavior, by comparison with chemical mediators, both in terms of sensitivity and range of response to the addition of electron donor. The application of cytochrome c for electron transfer from bacteria to the electrode was then studied systematically with Pseudomonas fluorescens.
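The diffusion-controlled character of a voltammetric response (cf. the scan-rate study of Fig. III-7B and the cytochrome c mobility discussed above) is classically assessed from the slope of a log-log plot of peak current versus scan rate. A minimal sketch of this diagnostic, using synthetic data rather than values from this work:

```python
from math import log10

def loglog_slope(scan_rates, peak_currents):
    """Least-squares slope of log10(ip) vs log10(v).
    A slope near 0.5 indicates a diffusion-controlled response;
    a slope near 1.0 indicates a surface-confined (adsorbed) species."""
    xs = [log10(v) for v in scan_rates]
    ys = [log10(i) for i in peak_currents]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Synthetic data: ip proportional to sqrt(v), as for a freely diffusing probe
v = [0.005, 0.01, 0.02, 0.05, 0.1]        # scan rates, V/s
ip = [3.2e-5 * vv ** 0.5 for vv in v]     # peak currents, A
print(round(loglog_slope(v, ip), 2))      # -> 0.5
```

With experimental peak currents, a slope close to 0.5 would thus support the diffusion-controlled behavior reported for cytochrome c in the silica gel.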
The resulting systems could be considered as an artificial biofilm, with the silica matrix mimicking the exopolymers involved in natural biofilms, but not only. Here, the strategy used to increase the electron transfer rate also mimics a strategy found in natural biofilms, in which cytochromes can be expressed to relay the electrons from the bacteria to a final acceptor, the latter being a mineral or the electrode of a microbial fuel cell. The system displayed high competition against O2 as electron acceptor, and a clear response to the environment, as demonstrated by the dependence of the current on glucose concentration. The introduction of gold nanoparticles slightly enhanced the electron transfer reaction between the electrode and the bacteria/cytochrome c composite. This concept of artificial biofilm is probably an important topic to be developed in future works for application as biosensors or bioreactors.

Technological advancements in the immobilization of biological molecules over several decades have resulted in a revolution of biodevices for the selective extraction, delivery, separation, conversion and detection of a wide range of chemical and biochemical reagents. The use of biological molecules such as proteins, peptides and nucleic acids in these applications relies on the preservation of their native structure and activity.

This work was performed by Patrícia Rodrigues in cooperation with Maria Gabriela Almeida (Chemistry Department, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa) under our co-supervision. It reports the immobilization of cytochrome P450 1A2 (CYP1A2) in sol-gel thin films prepared from a water-based precursor in the presence of polyethylene glycol (PEG), with a special focus on the direct electron transfer reaction. The use of sodium silicate prevents the introduction of alcohol in the starting sol, while PEG improves the stabilization of the membrane proteins.
The direct electron transfer reaction was characterized electrochemically in the absence and in the presence of O2.

Fig IV-1. Electrochemical responses of CYP1A2 (A) simply adsorbed on the PGE surface, (B) immobilized in PEG400, (C) in sodium silicate and (D) in a hybrid sol-gel matrix made of sodium silicate and PEG400. All measurements have been done in 50 mM phosphate buffer at pH 7 after degassing the solution for 20 min with argon. The 40 successive scans have been performed at 50 mV/s.

Fig IV-3. (A) Influence of O2 addition on the electrochemical response of CYP1A2 immobilized in the sodium silicate-PEG400 hybrid sol-gel matrix. Detrimental influence of a higher concentration of O2 on the direct electron transfer reaction. All measurements have been done in 50 mM phosphate buffer at pH 7 after degassing the solution for 20 min with argon. Potential scan rate was 50 mV s^-1.

Mandelate dehydrogenase (ManDH) catalytic activity is expressed by several proteins, which can be soluble or membrane bound and use different co-factors like nicotinamide adenine dinucleotide (NAD+) or flavin mononucleotide (FMN). The UniProt databank lists 63 ManDH-related proteins from bacteria and fungi, but there are many more α-hydroxy acid dehydrogenases expressing mandelate-oxidizing or benzoylformate-reducing activity. Here we chose L-ManDH from Pseudomonas putida (EC 1.1.99.31), encoded in the mdlB gene, as a membrane-bound FMN-dependent protein [28]. The enzyme has been cloned and heterologously expressed in Escherichia coli as an active catalyst, and it was characterized in detail [29-31]. The enzymatic activity, determined by reduction of 2,6-dichlorophenol indophenol, could be demonstrated with S-ManDH bound to membrane vesicles.

Fig IV-4. (A) Cyclic voltammetry for a sol-gel film containing L-mandelate dehydrogenase. The measurement was performed in phosphate buffer (pH 7) and FDM (0.1 mM) at room temperature.
Potential scan rate was 20 mV s^-1. (B) Amperometric measurements for (a) bare GCE, (b) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE in the absence of FDM, (c) L-mandelate dehydrogenase in a hybrid sol-gel film on GCE. The measurements were performed under stirring in 50 mM phosphate buffer (pH 7) and FDM (0.1 mM) at room temperature by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode.

The comparison of the responses measured with the bare GCE (curve "a" in Fig. IV-4B) and with the GCE modified with ManDH vesicles in sol-gel, but in the absence of electrochemical mediator in solution (curve "b" in Fig. IV-4B), confirms that the electrochemical response was due to the enzymatic activity, and that it could be observed only in the presence of mediator. Additionally, a few tests have been performed with MWCNT-Os (Fig. IV-6).

Fig IV-6. (A) Amperometric measurement for a hybrid sol-gel film containing L-mandelate dehydrogenase and MWCNT-Os on GCE. The measurements were performed under stirring.

Fig IV-7. Cyclic voltammetry for L-mandelate dehydrogenase in (A) SiO2 (55 mM, pH 7) and (C) TEOS (0.25 M, pH 5). The measurements were performed in phosphate buffer (pH 7) and FDM (0.1 mM) at room temperature. Potential scan rate was 20 mV s^-1. Amperometric measurement for L-mandelate dehydrogenase in (B) SiO2 (55 mM, pH 7) and (D) TEOS (0.25 M, pH 5). The measurements were performed under stirring in 50 mM phosphate buffer (pH 7) and FDM (0.1 mM) at room temperature by applying a constant potential of +0.3 V versus the Ag/AgCl reference electrode.

To our knowledge, the influence of sol-gel encapsulation on the infectivity of bacteriophages has not been reported.
Elaboration of hybrid materials allowing the preservation of this infectivity would be of great interest for the development of antibacterial coatings or to control the release of the bacteriophage for biomedical purposes, enhancing the persistence of the phage at the site to be treated. The aim of this work is to study the influence of encapsulation in an inorganic matrix on the biological activity of a bacteriophage, considering both wet and dried gels. Bacteriophage ΦX174 was chosen as the model system. It is an isometric virus, about 25 nm in diameter, having a single-stranded circular DNA and an icosahedral shape of 20 triangular faces and 12 vertices (Fig. V-1) [14]. It is capable of infecting a number of enterobacteria strains such as Shigella sonnei, Salmonella typhimurium Ra, or E. coli C, by recognition of its target cells through the lipopolysaccharides (LPS) present on the bacterial membrane [15,16].

Fig V-1. Representation of bacteriophage ΦX174 (image by Jean-Yves Sgro, ©2004, virology.wisc.edu/virus world).

Fig V-3. Assessment of bacteriophage ΦX174 infectivity along 6 days of dispersion in a saturated solution of sodium silicate.

Bacteriophages ΦX174 have been encapsulated in sol-gel monoliths (thickness > 1 mm) and then stored at 4 °C in a Petri dish covered with a lid in order to avoid, in this first set of experiments, the complete drying of the gel. Some additives have been introduced so as to protect the bacteriophage against constraints during the sol-gel transition and aging phases, and to evaluate their influence on the viral infectivity after release in solution. Fig. V-4 reports the effect of encapsulation in pure and hybrid sol-gel on the preservation of infectivity against bacteria along 6 days.
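Infectivity is quantified throughout this chapter by plaque counting, expressed in plaque-forming units per mL (PFU mL^-1, noted UFP in French). The underlying titer calculation can be sketched as follows; the counts and dilution below are hypothetical, not data from this work:

```python
def titer_pfu_per_ml(plaque_count, plated_volume_ml, dilution_factor):
    """Phage titer from a plaque assay:
    PFU/mL = plaques / (plated volume x dilution of the plated sample)."""
    return plaque_count / (plated_volume_ml * dilution_factor)

# Hypothetical example: 45 plaques counted after plating 0.1 mL
# of a 10^-3 dilution of the released phage suspension
print(titer_pfu_per_ml(45, 0.1, 1e-3))  # -> 450000.0 PFU/mL
```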
The infectivity of ΦX174 encapsulated in pure sol-gel (no additives) decreased from 180 PFU mL^-1 (initial value) to 13 PFU mL^-1 and 1 PFU mL^-1 after 1 and 6 days, respectively; this rapid loss of viral infectivity over 6 days could be explained by capsid protein denaturation during the sol-gel transition and aging phases or, as this cannot be totally excluded, by a somehow limited release in solution. At first, we made the hypothesis that deactivation mainly occurred during encapsulation in the gel. In order to improve the protection against the rapid viral deactivation in the sol-gel matrix, trehalose and glycerol, which are known in the literature as microorganism stabilizers against sol-gel constraints and aging stresses [19-21], have been introduced into the sol-gel material. The infectivity of ΦX174 encapsulated in hybrid sol-gel containing either glycerol, trehalose or PEI was evaluated in the same conditions.

Fig V-4. Assessment of bacteriophage ΦX174 infectivity along 6 days of encapsulation in hybrid sol-gel monoliths and storage in a semi-dry environment.

Fig V-5. Assessment of bacteriophage ΦX174 infectivity along 6 days of encapsulation in hybrid sol-gel monoliths and storage in a harsh-dry environment.

Fig V-5. Representative transmission electron micrographs (TEM) for A) gel made with sodium silicate and Ludox nanoparticles, B) with glycerol and C) with PEI as additives. Same scale for all figures: 20 nm.

Aqueous sol-gel routes (Sols A, B and D) have been applied in order to avoid any trace of alcohol in the drop-coating and monolith protocols of bioencapsulation. Alkoxide-based sol-gel (Sol C) has been applied for electrochemically-assisted bioencapsulation and (Sol E) for the bioencapsulation of a membrane-associated enzyme.
The release of alcohol after the hydrolysis of TMOS has been considered an obstacle, due to its potential denaturing activity on the entrapped bacteria. Methanol has thus been removed from the TMOS sol by evaporation. A natural polymer (chitosan) and PEG were used as additives, which were advantageous in providing biological protection and deposition improvement for electrochemically-assisted bioencapsulation. In addition, trehalose and glycerol have been used for better bacterial protection against sol-gel constraints and aging. PEI has been used either for better protection of viral infectivity in sol-gel monoliths or for bacterial adsorption in electrochemically-assisted bioencapsulation. Note that control experiments have been performed in the absence of one of the hybrid sol-gel components to check its effective role in sol-gel bioencapsulation.

Materials

Procedures used to prepare the starting sols for film formation

Sol A: Sodium silicate solution (0.22 M, 10 mL) was mixed with LUDOX HS-40 (40 wt%, 10 mL). HCl (1 M, 2.4 mL) was added to adjust the pH to 7.

One-step protocol: electrodeposition of a film from a sol containing the bacteria in suspension, in a single step (the scheme is illustrated later). The preparation of the hybrid sol for the one-step protocol is as follows: 5 mL of sol C (1 M) was mixed with 5 mL of 50 % PEG (w/v), 5 mL of chitosan (1 % w/v) and 1 % (w/v) of trehalose. Then, NaOH solution (0.2 M, 1.6 mL) was added to raise the pH to 5.3 and 5 mL of E. coli MG1655 pUCD607 or E. coli MG1655 zntA-GFP were introduced into the hybrid sol. Finally, the final mixture was poured into the electrochemical cell, where electrochemically-assisted deposition was performed at -1.3 V at room temperature for several tens of seconds (deposition time). Then, the ITO plates were rinsed immediately and carefully with deionized water in order to remove non-deposited sol, and stored at -80 °C for further analysis. N.B.
the final concentrations of the hybrid sol-gel components are: TMOS (0.25 M), PEG (12.5 % w/v), chitosan (0.25 % w/v), trehalose (0.25 % w/v) and E. coli MG1655 pUCD607.

1 - Adsorption protocol: the ITO plate was successively treated in nitric acid (65 %) for 1 h, sodium hydroxide solution (1 M) for 20 min, and PEI (0.2 % w/v) for 3 h. Between each step, the ITO plate was washed with ultrapure water and dried in sterilized air at room temperature. The treated ITO plate was then introduced in an E. coli C600 suspension (4 × 10^6 cells mL^-1) and incubated at room temperature in a sterile environment for 2 h. Finally, the ITO plate was washed and stored in a KCl solution (1 mM) for the two-step EAB [9]. The preparation of the hybrid sol for the two-step protocol is as follows: 5 mL of sol C (1 M) was mixed with 5 mL of 50 % PEG (w/v), 5 mL of chitosan (1 % w/v) and 1 % (w/v) of trehalose. Then, NaOH solution (0.2 M, 1.6 mL) was added to raise the pH to 5.3. Note that some control experiments were done by replacing PEG, chitosan or sol C with deionized water. N.B. the final concentrations of the hybrid sol-gel components are: TMOS (0.25 M), PEG (12.5 % w/v), chitosan (0.25 % w/v) and trehalose (0.25 % w/v).

2 - Electrodeposition step: the ITO plate modified with bacteria was introduced into the hybrid sol-gel mixture (described above) and electrochemically-assisted deposition was performed at -1.3 V for several tens of seconds (deposition times). Then, the ITO plates were rinsed immediately and carefully with deionized water in order to remove non-deposited sol, and stored at 4 °C in humid air for further analysis. Note that some control samples have been stored either in KCl solution (1 mM) or in humid air without introduction of trehalose into sol C.

One-layer encapsulation: for section A, 10 μL of sol B was mixed with 10 µL of S. putrefaciens CIP 8040 (9 × 10^9 cells/mL) and/or 10 µL of SWCNT-EtO/SWCNT-(EtO)8-Fc (2 mg/mL) dispersed in chitosan (0.5 wt%).
Finally, 5 µL of the obtained mixture was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis. N.B. the final concentrations of the hybrid components are: S. putrefaciens (3 × 10^9 cells/mL) and SWCNT-EtO/SWCNT-(EtO)8-Fc (0.67 mg/mL). For section B, 30 μL of sol B was mixed with 10 µL of bovine heart cytochrome c (1 mM), 10 µL of one of these strains (E. coli C600, P. fluorescens CIP 69.13 or S. putrefaciens CIP 8040, 9 × 10^9 cells/mL) and/or 10 µL of MWCNT-COOH dispersed in water (2 mg/mL), or 20 µL of gold nanoparticles. Finally, 5 µL of the obtained mixture was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis. N.B. the final concentrations of the hybrid components are: bacteria (1.8 × 10^9 cells/mL), SWCNT-EtO/SWCNT-(EtO)8-Fc (0.4 mg/mL) and bovine heart cytochrome c (0.2 mM).

Two-layer encapsulation: first, 5 µL of SWCNT-EtO/SWCNT-(EtO)8-Fc (2 mg/mL) dispersed in chitosan was drop-coated on the glassy carbon electrode and kept for 2 h to be fully dried. Then, 5 µL of a suspension containing 10 μL of sol B and 10 µL of S. putrefaciens CIP 8040 (9 × 10^9 cells/mL) was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis. N.B. the final concentrations of the hybrid components are: S. putrefaciens (4.5 × 10^9 cells/mL) and SWCNT-EtO/SWCNT-(EtO)8-Fc (2 mg/mL).

Membrane-associated protein encapsulation (Chapter IV)

Cytochrome P450: 10 μL of sol D was mixed with 10 µL of cytochrome P450 (CYP1A2) membrane protein. 5 µL of the obtained mixture was drop-coated on the pyrolytic graphite electrode and kept in humid air at 4 °C for further analysis.

L-Mandelate dehydrogenase: 10 μL of sol D or E was mixed with 10 µL of L-ManDH membrane protein. 5 µL of the obtained mixture was drop-coated on the glassy carbon electrode and kept in humid air at 4 °C for further analysis.

7.1. Electrochemical measurements
7.1.1. Cyclic voltammetry (CV) [10]

Cyclic voltammetry is the most widely used technique for acquiring qualitative information about electrochemical reactions. It is often the first experiment performed in an electroanalytical study. In particular, it offers a rapid location of the redox potential of the electroactive species, and a convenient evaluation of the effect of various parameters on the redox process. This technique is based on varying the applied potential at a working electrode in both forward and reverse directions (at selected scan rates) while monitoring the resulting current. The corresponding plot of current versus potential is termed a cyclic voltammogram. Figure 1 shows the response of a reversible redox couple during a single potential cycle. It is assumed that only the oxidized form O is present initially. Thus, a negative-going potential scan is chosen for the first half-cycle, starting from a value where no reduction occurs. As the applied potential approaches the characteristic E° for the redox process, a cathodic current begins to increase, until a peak is reached. After traversing the potential region in which the reduction process takes place, the direction of the potential sweep is reversed. During the reverse scan, R molecules (generated in the forward half-cycle, and accumulated near the surface) are reoxidized back to O and an anodic peak results. The important parameters in a cyclic voltammogram are the two peak potentials (Epc, Epa) and the two peak currents (ipc, ipa) of the cathodic and anodic peaks, respectively. Cyclic voltammetry can be used for the study of reaction mechanisms and adsorption processes, and can also be useful for quantitative purposes, based on measurements of the peak current.

Figure 1. Typical cyclic voltammogram for a reversible redox reaction (O + ne- → R and R → O + ne-).
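For a reversible, diffusion-controlled couple at 25 °C, the peak current used for such quantitative purposes follows the Randles-Sevcik equation. A minimal sketch, in which the electrode area, diffusion coefficient and concentration are illustrative assumptions rather than values from this work:

```python
from math import sqrt

def randles_sevcik_peak_current(n, area_cm2, diff_cm2_s, conc_mol_cm3, scan_v_s):
    """Peak current (A) for a reversible, diffusion-controlled couple at 25 C:
    ip = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2)."""
    return 2.69e5 * n ** 1.5 * area_cm2 * sqrt(diff_cm2_s) \
        * conc_mol_cm3 * sqrt(scan_v_s)

# Illustrative values (assumed, not from this work): 3 mm diameter GCE
# (A ~ 0.071 cm2), 1 mM probe, D = 7.6e-6 cm2/s, v = 20 mV/s
ip = randles_sevcik_peak_current(1, 0.071, 7.6e-6, 1e-6, 0.020)
print(f"{ip * 1e6:.1f} uA")  # -> 7.4 uA
```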
7.1.2. Amperometric measurements [10]

The basis of the amperometric technique is the measurement of the current response to an applied potential. A stationary working electrode and a stirred solution are used, and the resulting current-time dependence is monitored. As mass transport under these conditions occurs by convection (steady-state diffusion), the current-time curve reflects the change in the concentration gradient in the vicinity of the surface, which is directly related to the concentration in solution (Fig. 2).

Figure 2. Typical amperometric detection for a chemical reaction upon successive additions of substrate.

All the electrochemical measurements (amperometric or voltammetric) were carried out with a PalmSens potentiostat monitored by the PSTrace software, using a conventional three-electrode cell.

Amperometric and cyclic voltammetry measurements (Chapters III and IV)

Figure 3. Typical AFM 3D image for a sol-gel film scratched on the left side.

Figure 4. Transmission electron microscopy of an E. coli cell trapped within a silica gel (adapted from reference [11]).

7.4. BacLight™ bacterial viability analysis

LIVE/DEAD BacLight viability analysis is a dual DNA fluorescent staining method used to determine simultaneously the total population of bacteria and the ratio of damaged bacteria. It was applied here on bacteria encapsulated in sol-gel films. BacLight is composed of two fluorescent nucleic acid-binding stains: SYTO 9 and propidium iodide (PI). SYTO 9 penetrates all bacterial membranes and stains the cells green, while PI only penetrates cells with damaged membranes. The combination of the two stains produces red fluorescing cells (damaged cells), whereas those stained with SYTO 9 only (green fluorescent cells) are considered viable [12]. The two dye solutions, SYTO 9 (1.67 µM) and PI (1.67 µM), were mixed together in equal volumes. Then, 3 µL of this solution was diluted in 1 mL of non-pyrogenic water.
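The viability value extracted from such dual staining reduces, for each microscope field, to the ratio of green (intact membrane) cells to the total stained population, averaged over the counted fields. A minimal sketch with hypothetical counts (not data from this work):

```python
def viability_percent(field_counts):
    """Average LIVE/DEAD viability over counted microscope fields.
    Each field gives (green_count, red_count): green = intact membrane
    (SYTO 9 only), red = damaged membrane (propidium iodide)."""
    ratios = [100.0 * g / (g + r) for g, r in field_counts if (g + r) > 0]
    return sum(ratios) / len(ratios)

# Hypothetical counts from four fields
fields = [(88, 12), (90, 10), (85, 15), (93, 7)]
print(round(viability_percent(fields), 1))  # -> 89.0
```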
The sample of sol-gel film encapsulating bacteria was covered with 0.2 mL of the diluted dye solution and incubated for 15 min in the dark. Finally, the sample was washed in KCl solution (1 mM) and the counting was performed with an epi-fluorescence microscope (Olympus BX51) with immersion oil at 100X magnification. Viability values were calculated as averages of ten fields obtained from two samples.

Figure 5. Image series for the comparison of dye combinations in mixed E. coli suspensions (adapted from reference [13]).

Figure 6. Typical luminescence scheme for bacteria modified with plasmid pUCD607.

Figure 7. Typical fluorescence scheme for bacteria modified with plasmid zntA-GFP.

Figure 8. Typical plaque assay for encapsulated bacteriophage upon exposure to bacteria.

Chapter I. Knowledge and literature survey

4.2.1. Bioreactor

Some cells like cyanobacteria are classified as photosynthetic prokaryotes, employing the same reaction as plants to synthesize bio-organic compounds such as sucrose, starch and cellulose from CO2 and water in the presence of light (Fig. I-9). A cyanobacteria-based bioreactor could minimize the CO2 level in the environment and produce a novel, reusable carbon source. Nowadays, there is a tremendous interest in the immobilization of living cells (bacteria, yeast, or microalgae, …) in porous matrices for biotechnological applications such as bioreactors, bioreporters, biosensors and biofuel cells [1,2].

Chapter II. Electrochemically-assisted bacteria encapsulation in thin hybrid sol-gel film

Fig II-2. Mechanism of EABE(2S) of bacteria in a thin sol-gel film (scheme: 1st step, bacteria immobilization on the PEI-modified ITO electrode; 2nd step, sol electrolysis leading to the hybrid gel). PEI: poly(ethylene imine), ITO: indium tin oxide.

Table II-1. Long-term metabolic activity for EABE(1S) of E.
coli MG1655 pUCD607 stored at -80 °C. All films were prepared by electrolysis at -1.3 V for 30 s.

The main subject of this thesis was the immobilization and characterization of the activity of bacteria in silica films prepared by the sol-gel process. Electrochemistry was successfully used to induce the immobilization of Escherichia coli in a hybrid sol-gel film, using controlled electrolysis of the initial sol containing the silica precursors, poly(ethylene glycol), chitosan and trehalose. The bacteria were either adsorbed on the electrode surface before the sol-gel deposition or co-deposited during electrolysis. We showed that membrane integrity was preserved for more than one month and that the immobilized bacteria could express luminescence or fluorescence in response to their environment. Electrochemistry was then used to characterize the electronic communication with Shewanella putrefaciens and Pseudomonas fluorescens in the presence of electron donors, sodium formate and glucose respectively. These two bacterial strains are able to transfer electrons through their outer membrane, but immobilization strongly limits electron transfer from the bacteria to the electrode because of the insulating nature of the silica matrix. Two strategies were proposed to enhance these electron transfer processes in the gel: the use of carbon nanotubes functionalized with ferrocene groups, or the co-immobilization in the gel, together with the bacteria, of a natural mediator, cytochrome c. These two approaches led to appreciably different results in terms of sensitivity to substrate addition, owing to the mechanisms involved.
Indeed, ferrocene can cross the outer membrane, for example through porins, to collect electrons from respiration, whereas the cytochrome can only collect electrons transmitted through the outer membrane. The use of cytochrome c within the silica matrix that immobilizes the bacteria mimics, to some extent, a strategy developed in certain natural biofilms to promote electron transfer towards a final acceptor, mineral or electrode. It is thus possible to consider that our approach leads to the formation of an artificial biofilm.

The experience developed with bacterial immobilization was then applied to the immobilization of membrane proteins for bioelectrochemistry. Two types of redox proteins were studied, a cytochrome P450 (CYP1A2) and a mandelate dehydrogenase (ManDH). For both systems, an electrocatalytic activity could be measured after encapsulation in a sol-gel film at the electrode surface, through direct electron transfer (P450-CYP1A2) or mediated transfer (ManDH). We observed that the cytochrome was more sensitive than the dehydrogenase, a result that can be explained by the direct nature of the electron transfer between the electrode and the protein. However, for both systems, immobilization in a sol-gel film increased the intensity and the stability of the electrocatalytic response.

Finally, preliminary studies on the encapsulation of bacteriophage ΦX174 within a hybrid sol-gel matrix showed that the infectivity of the virus could be enhanced by judicious modification of the material. The presence of glycerol or poly(ethylene imine) protected this infectivity in a dry atmosphere for more than 6 days, the best infectivity being observed in the presence of the polyelectrolyte. The favorable interaction between the negatively charged bacteriophage and the positively charged polymer could explain, at least in part, this better resistance to inactivation. This work could open new application opportunities for immobilized bacteriophages in the fight against antibiotic-resistant bacterial strains or in medicine.

Conclusion and perspective

In an attempt to propose perspectives for this thesis, we can first say that many different systems were studied, each one suggesting new research directions. These potential developments concern the application of electrochemically-assisted bacterial encapsulation, the exploitation of artificial biofilms, the application of immobilized membrane proteins for electrocatalysis, and new developments with bacteriophages.

The reader will certainly have noted that the electrochemically-assisted bacterial encapsulation protocol developed in Chapter 2 was not applied to the studies described in Chapter 3. These two studies were indeed carried out in parallel, and the link between the two works could not be made here. One difficulty is that the film composition suited for electrodeposition differs from the one used for the electronic communication studies. New optimizations will therefore be necessary to apply this original approach to the electrodeposition of artificial biofilms applicable in electrochemistry.

Artificial biofilms appear to be an interesting research route for the controlled elaboration of bacteria-based bioelectrodes. This approach has its limits insofar as it prevents the bacteria or the biofilm from growing naturally. However, it also has its merits. First, it allows very good control of the bacterial strain present in the biofilm, which would make it possible to conduct fundamental studies in which several bacterial strains are associated in a well-controlled way in order to evaluate their synergy in a biofilm configuration. Growing a biofilm can take a long time; the artificial biofilm is an interesting strategy to rapidly elaborate microbiological devices for electrochemical applications such as biosensors or bioreactors.

The immobilization of membrane redox proteins in a sol-gel material is an interesting route to increase the stability of their electrochemical response, which could then be tested in an electrochemical reactor for enzymatic electrosynthesis. The sol could be optimized to increase as much as possible the stability of the electrocatalytic response without disturbing the enzymatic activity. Other membrane proteins are of interest for energy conversion, such as hydrogenases. It would be interesting to consider the relevance of the sol-gel route for the immobilization of this last class of proteins.

Finally, the studies on the encapsulation of bacteriophages to control their release in an infectious form are only at their beginnings. The parameters that influence both the release of the bacteriophages and the maintenance of their infectivity should be studied in more detail. Future studies could consider the application of this type of material in film form for the controlled release of this antibacterial agent.

1.1. Sol-gel reagents

This work focuses on developing safe bioencapsulation techniques based on pure silica or hybrid sol-gel films to construct effective whole-cell biosensors. A series of precursors with different properties were used (Table 1). Tetramethoxysilane is the most common precursor used to electrodeposit sol-gel films on conducting electrodes.
Sodium silicate solution and Ludox® HS-40 colloidal silica were used for bacteria and protein encapsulation by drop-coating in order to avoid any trace of alcohol (i.e., the aqueous sol-gel route).

Table 1. Silica precursors.

Chemicals | Formula | Grade | MW (g mol-1) | Supplier
Tetramethoxysilane (TMOS) | Si(OCH3)4 | 98 % | 152.22 | Aldrich
Sodium silicate solution | Na2O 14 %, SiO2 27 % | - | - | Aldrich
Ludox® HS-40 colloidal silica (40 wt. % suspension in H2O) | SiO2 40 % | - | 60.08 | Aldrich
Tetraethoxysilane (TEOS) | Si(OC2H5)4 | 98 % | 208.33 | Alfa Aesar

Table 2 provides some technical information on the additives used for the preparation of hybrid materials.

Table 2. Additives for preparation of hybrid materials.

Bacteriophage ΦX174 and a series of microbial species such as Escherichia coli, Shewanella putrefaciens and Pseudomonas fluorescens were used in this work. These microorganisms were encapsulated following different sol-gel protocols and materials. The physical properties of the microorganisms encapsulated in sol-gel were also investigated.

Table 1.3. Bacteria and bacteriophage species.

Table II-4. Chemical components of culture media.

Minimal salt broth (MSB): 10 g/L peptone, 3 g/L yeast extract, 12 g/L beef extract, 3 g/L NaCl, 5 mL of Na2CO3 (150 g/mL), 0.3 mL of MgCl2·6H2O (2 g/mL) and 10 mL of nalidixic acid (25 mg/mL), dissolved in 1 L of deionized water (pH = 7.3 ± 0.2).

Minimal salt agar (MSA): 10 g/L peptone, 3 g/L yeast extract, 12 g/L beef extract, 3 g/L NaCl, 5 mL of Na2CO3 (150 g/mL), 0.3 mL of MgCl2·6H2O (2 g/mL), 10 mL of nalidixic acid (25 mg/mL), and 10 g/L agar, dissolved in 1 L of deionized water (pH = 7.3 ± 0.2).

A glassy carbon electrode (GCE, 3 mm in diameter), indium tin oxide glass slides (ITO, Delta Technologies) or pyrolytic graphite electrodes (PGE) served as working electrodes.
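As a worked example of Table II-4, the per-litre amounts can be scaled to an arbitrary batch volume. The sketch below scales the MSB recipe; the 250 mL batch size is an illustrative choice, not a value from the text:

```python
# Scale the MSB recipe of Table II-4 (amounts given per litre) to a batch.

MSB_PER_LITRE = {
    "peptone (g)":           10.0,
    "yeast extract (g)":      3.0,
    "beef extract (g)":      12.0,
    "NaCl (g)":               3.0,
    "Na2CO3 soln (mL)":       5.0,
    "MgCl2.6H2O soln (mL)":   0.3,
    "nalidixic acid (mL)":   10.0,
}

def scale_recipe(per_litre, batch_ml):
    """Return the amount of each component for a batch of batch_ml mL."""
    factor = batch_ml / 1000.0
    return {name: round(amount * factor, 3) for name, amount in per_litre.items()}

batch = scale_recipe(MSB_PER_LITRE, 250.0)   # hypothetical 250 mL batch
for component, amount in batch.items():
    print(f"{component:>22}: {amount}")
```

For MSA the same scaling applies, with 10 g/L agar added to the recipe.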
Prior to each measurement, the GCE was first polished on wet 4000-grade emery paper using Al2O3 powder (0.05 µm, Buehler), then rinsed thoroughly with water to remove embedded alumina particles. ITO electrodes were cleaned only with acetone to remove any chemical or biological contamination. Basal plane pyrolytic graphite electrodes (3 mm diameter disks) were also used: their surfaces were treated by abrasion with sandpaper and polished with alumina slurry (0.3 µm), followed by brief sonication in deionized water. Finally, the electrodes were washed with deionized water and dried with compressed air. The counter electrode was a platinum wire and the reference electrode was Ag/AgCl (3 M KCl).

In this chapter, the work focuses on the immobilization of membrane-associated proteins in a hybrid sol-gel matrix for bioelectrochemical applications. According to the literature, the sol-gel material can provide a suitable environment for the safe immobilization of these proteins, which is beneficial for bioelectrocatalysis. The bioencapsulation process was achieved via drop-coating of a starting sol prepared from aqueous silicate and containing the enzymes. PEG was introduced to create a protective environment for the encapsulation of the membrane fragments. Two different proteins were chosen for this study: cytochrome P450 (CYP1A2), for its ability to transfer electrons directly to the electrode surface, and L-mandelate dehydrogenase, which can transfer electrons only through electrochemical mediators. The biohybrid materials developed could be further used for the construction of reagentless bioelectrochemical devices such as biosensors and bioreactors.

Escherichia coli MG1655 pUCD607 and Escherichia coli MG1655 zntA-GFP were kindly provided by P. Billard (Laboratory of Interactions between Microorganisms, Minerals and Organic Matter in Soils; LIMOS, Lorraine University, France). Escherichia coli CN and bacteriophage ΦX174 were kindly provided by Prof. C.
Gantzer (Laboratory of Chemistry, Physics and Microbiology for the Environment; LCPME, Lorraine University, France).

Membrane-associated enzymes

The membrane-associated L-mandelate dehydrogenase (L-ManDH) solution (30 units/mg) isolated from Pseudomonas putida was provided by Prof. G. W. Kohring (Laboratory of Microbiology, Saarland University, Germany). The membrane-associated cytochrome P450 CYP1A2 was provided by G. Almeida (Faculty of Science and Technology, Nova de Lisboa University, Portugal).

Culture medium for microorganisms

All the culture media used for bacteria and bacteriophage growth were sterilized before each experiment by autoclaving for 15 min at 121 °C to avoid contamination with other species. Table 4 shows the chemical composition of the culture media.

The immobilization of membrane redox proteins was also considered in these inorganic thin films to promote the stability of the electrocatalytic response. The proteins considered involve different electron transfer mechanisms, either direct for cytochrome P450 (CYP1A2) or mediated for mandelate dehydrogenase. Finally, the influence of encapsulation in a hybrid sol-gel matrix on the infectivity of bacteriophage ΦX174 was studied, showing the protective effect of poly(ethylene imine) or glycerol.

Keywords: sol-gel, hybrid materials, bioencapsulation, electrodeposition, bacteria, artificial biofilm, membrane redox protein, bacteriophage, electron transfer, cytochrome c.

Abstract

The work reported in this thesis has been developed at the interface between three disciplines, i.e.
2013
https://hal.univ-lorraine.fr/tel-01750454/file/DDOC_T_2013_0201_MAGHEAR.pdf
Development of new types of composite electrodes based on natural clays and their analytical applications

"We are what we repeatedly do. Excellence, then, is not an act, but a habit." ~Aristotle~

To my supervisors...

I would like to express my deep and special gratitude to my thesis supervisor, Prof. Robert Săndulescu, for providing me the opportunity to join his research group and for making everything possible during all these four years. His optimism and dedication to science are impressive, and I must thank him for his continuous and endless support and also for all the fruitful scientific and non-scientific discussions. I am grateful to Dr. Alain Walcarius, also my PhD supervisor, for receiving me in his research group and for his key contributions to my studies. I respect his contagious dedication to science. I dedicate this thesis to both my supervisors, hoping I have fulfilled their ambitions and expectations.

Special thanks to Mr. and Mrs. Iuliu and Ana Marian for their endless support, encouragement, availability, understanding, and priceless lessons in science and life. I would also like to thank Dr. Cecilia Cristea and Dr. Mathieu Etienne for their help and guidance in the research lab, for their support and encouraging ideas. Special thanks to my colleagues Mihaela Tertiş and Luminiţa Fritea for their help, support, friendship, and hard work. I would also like to acknowledge the valuable contribution of all co-authors and collaborators from Romania and France for making this thesis possible, especially Tamara Topală, Dr. Emil Indrea, Dr. Cosmin Farcău, Ludovic Mouton, and Pierrick Durand. For financial support I am grateful to the Agence Universitaire de la Francophonie and to UMF "Iuliu Haţieganu" for the research project POS-DRU 88/1.5/S/58965. And, of course, I thank my family for their encouragement and support throughout my life.
INTRODUCTION

Along with the progress in the industrial and technological fields, pollution has been one of the main concerns all over the world. Its impact on the environment has led, over time, to the development of different approaches to detect, prevent, or minimize its damaging effects. In this context, electrochemistry offers a wide variety of techniques that aim to control, by different means, the impact of pollution on the living world.

Heavy metals are among the most important soil and biological contaminants, and the interest in their detection has therefore increased considerably in recent years. Heavy metals are highly toxic, they can accumulate in human, animal, and plant tissues, and they are not biodegradable, so their quantitative determination in different media is an issue of primary importance nowadays. Electrochemistry is well placed to meet this demand by offering new types of electrodes for real-time detection of trace metal contaminants in natural waters as well as in biological and biomedical samples. Clay-modified electrodes are mostly used for this type of application.
While in past centuries clays were used mainly in cosmetics or to produce ceramics, over the last three decades they have attracted the interest of electrochemists due to their catalytic, adsorbent, and ion-exchange properties, which have been exploited in the development of chemical sensors. Moreover, the adsorption of proteins and enzymes on clay mineral surfaces has been intensively applied in biosensor fabrication.

This research was directed towards the modification of different types of electrodes using indigenous Romanian clays, for the development of sensors applied to the detection of heavy metals in matrices of biopharmaceutical and biomedical interest, and of biosensors for the detection of different pharmaceuticals. The studies presented here are mainly focused on electrochemical methods (voltammetry, differential pulse methods, anodic stripping) and ion-selective electrodes based on different types of clays, in order to improve the performance of existing devices and also to develop other methods for the determination of heavy metals in various matrices.

The aims of this thesis are summarized as follows:

• To confirm the physico-chemical and structural characterization of the indigenous Romanian clays.

• To develop new clay-modified electrodes by employing different methods to immobilize the clay at the electrode surface:
a. the embedment of the clay in carbon paste;
b. the incorporation of the clay in polymeric conductive films;
c. the immobilization of the clay using a semipermeable membrane;
d. the entrapment of the clay in a sol-gel matrix.

• To study the electrochemical behavior of the new electrode configurations and to apply them in trace heavy metal detection or in pharmaceutical analysis:
a. the analysis of acetaminophen, ascorbic acid, and riboflavin using clay-carbon paste-modified electrodes;
b. the development of a HRP/clay/PEI/GCE biosensor for acetaminophen detection;
c.
the development of tetrabutylammonium-modified clay film electrodes covered with cellulose membranes for heavy metal detection;
d. the development of a copper(II) sensor using clay-mesoporous silica composite films generated by electro-assisted self-assembly.

Clay-modified electrodes

A chemical sensor is "a small device that, as the result of a chemical interaction or process between the analyte and the sensor device, transforms chemical or biochemical information of a quantitative or qualitative type into an analytically useful signal" [START_REF] Stetter | Sensors, chemical sensors, electrochemical sensors, and ECS[END_REF]. A chemical sensor consists of two basic components: a chemical recognition system (receptor) and a transducer 2. In the case of biosensors, the recognition system uses a biological mechanism instead of a chemical process. The role of the transducer is to transform the response measured at the receptor into a detectable signal. Due to their remarkable sensitivity, experimental simplicity, and low cost, electrochemical sensors are the most attractive chemical sensors reported in the literature. The signal detected by the transducer can be a current (amperometry), a voltage (potentiometry), or impedance/conductance changes (conductimetry). [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF]

An important feature of chemically modified electrodes is their ability to preconcentrate the analyte into a small volume at the electrode, allowing lower concentrations to be measured than would be possible in the absence of a preconcentration step (adsorptive stripping voltammetry) 2. Among the wide range of electrode modifiers, clays have attracted the interest of electrochemists, in particular for their analytical applications. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF]
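The detection-limit benefit of preconcentration mentioned above can be illustrated with a toy accumulation model; this is a sketch only, and the linear-isotherm assumption, partition coefficient and rate constant are invented for illustration rather than taken from the cited works:

```python
import math

# Toy model: at open circuit the analyte accumulates into the modifier film,
# approaching an equilibrium loading proportional to the bulk concentration
# (dilute, linear-isotherm regime).  The read-out signal is taken as
# proportional to the loading, so gain = loading / bulk concentration.

def loading(c_bulk, t_s, k_per_s=0.01, partition=500.0):
    """Film loading (arbitrary units) after t_s seconds of accumulation."""
    return partition * c_bulk * (1.0 - math.exp(-k_per_s * t_s))

c = 1e-7                        # 0.1 µM analyte (illustrative)
for t in (10, 60, 300, 1200):
    gain = loading(c, t) / c    # enhancement vs. a direct measurement
    print(f"t = {t:>5d} s  ->  signal enhancement x{gain:.0f}")
```

Longer open-circuit accumulation increases the signal per unit bulk concentration, up to the equilibrium partition limit, which is why a separate, optimized accumulation step lowers the attainable detection limit.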
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] Clays -definition, classification, properties The use of clay minerals as electrode modifiers is the result of electrochemists desire to achieve a high quality electrode surface with the required properties, but also the result of their three decades effort to understand and control the processes that take place at electrode surface. The clay minerals employed in this case belong to the class of phyllosilicates-layered hydrous aluminosilicates. Their layered structures comprises either one sheet of SiO 4 tetrahedra and one sheet of AlO 6 octahedra (1 : 1 phyllosilicates)or an Al-octahedral sheet is sandwiched between two Si-tetrahedral sheets (2 :1 phyllosilicates). [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Bedioui | Zeolite-encapsulated and clay-intercalated metal porphyrin, phthalocyanine and Schiff-base complexes as models for biomimetic oxidation catalysts: an overview[END_REF][START_REF]Layer Charge Characteristiscs of 2:1 Silicate Clay Minerals[END_REF] The role of the exchangeable cations (Na + , K + , NH 4 + etc.) bound on the external surfaces for 1 :1 phyllosilicates and also in the interlayer in the case of 2 :1 phyllosilicates is to balance the positive charge deficiency of the layers. An important characteristic of the clay mineral is the basal distance which depends on the number of intercalated water and exchangeable cations within the interlayer space. Other important properties of this structure regard the relatively large specific surface, ion-exchange properties, and ability to adsorb and intercalate organic compounds. All these features recommend phyllosilicates, especially a group of smectites, for preparation of clay-modified electrodes. 
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] There are two classes of clays: cationic clays, which have negatively charged aluminosilicate layers, and anionic clays, with positively charged hydroxide layers; the neutrality of these materials is ensured by charge-balancing ions (cations or anions, depending on the clay type) in the interlayer space. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Cationic clays are among the most common minerals on the earth's surface, being well known for their applications in ceramic production, pharmacy, cosmetics, catalysts, adsorbents, and ion exchangers. [START_REF] Vaccari | Preparation and catalytic properties of cationic and anionic clays[END_REF][START_REF] Vaccari | Clays and catalysis: a promising future[END_REF] Smectite clays are mostly used in the development of electrochemical sensors due to their ability to incorporate ions by an ion-exchange process and also due to their adsorption properties. Protein adsorption, in particular, is intensively used in the development of biosensors. [START_REF] Gianfreda | Enzymes in soil: properties, behavior and potential applications[END_REF] Due to its cation exchange capacity (CEC, typically between 0.80 and 1.50 mmol g-1), with an anion exchange capacity about four times lower, montmorillonite is the most widely used smectite. In addition, its thixotropy allows montmorillonite to be used as a stable and adhesive clay film. Besides smectites (montmorillonite, nontronite, hectorite), the literature describes some other clay minerals (vermiculite, kaolinite, sepiolite) which can be exploited for electrode modification. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]

Clay-modified electrode preparation

The preparation method is the factor that most strongly influences the electrochemical behavior of CLMEs.
Several techniques have been employed to cast the clay films, such as the slow evaporation of colloidal suspensions on electrode surfaces (e.g., platinum, glassy carbon, indium tin oxide, and screen-printed electrodes), physical adsorption (the most widely used technique due to its simplicity and ease), spin-coating of thin clay films, and clay-carbon paste-modified electrodes. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] A more sophisticated strategy uses silane linkages to couple the clay to the underlying electrode surface. [START_REF] Rong | Electrochemistry and photoelectrochemistry of pillared claymodified-electrodes[END_REF] The Langmuir-Blodgett method has been applied to prepare thin films of clays on electrode surfaces. [START_REF] Hotta | Electrochemical behavior of hexaammineruthenium (III) cations in clay-modified-electrodes prepared by the Langmuir-Blodgett method[END_REF][START_REF] Okamoto | Preparation of a clay-metal complex hybrid film by the Langmuir-Blodgett method and its application as an electrode modifier[END_REF][START_REF] He | Preparation of hybrid films of an anionic Ru(II) cyanide polypytidyl complex with layered double hydroxides by the Langmuir-Blodgett method and their use as electrode modifiers[END_REF][START_REF] He | Electrocatalytic response of GMP on an ITO electrode modified with a hybrid film of Ni(II)-Al(III) layered double hydroxide and amphiphilic Ru(II) cyanide complex[END_REF] Another study reported a hybrid film of a chiral metal complex and clay, prepared by the Langmuir-Blodgett method for the purpose of chiral sensing.
[START_REF] He | Creation of stereoselective solid surface by self-assembly of a chiral metal complex onto a nanothick clay film[END_REF][START_REF] He | Preparation of a novel clay/metal complex hybrid film and its catalytic oxidation to chiral 1,1'-binaphtol[END_REF] The electrodeposition of kaolin and montmorillonite on rotating quartz crystal microbalance disk electrodes or indium tin oxide (ITO) has also been realized. [START_REF] Shirtcliffe | Deposition of clays onto a rotating electrochemical quartz crystal microbalance[END_REF][START_REF] Song | Preparation of clay-modified electrodes by electrophoretic deposition of clay films[END_REF]

Electrochemistry at clay-modified electrodes

Electron transfer at clay-modified electrodes (CLMEs) was extensively studied in the 1990s. [START_REF] Baker | Electrochemistry with clays and zeolites[END_REF][START_REF] Bard | Electrodes modified with clays, zeolites and related microporous solids[END_REF][START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF][START_REF] Therias | Electrodes modified with synthetic anionic clays[END_REF] Poor charge transport is an important issue when using nonconductive solids to modify electrodes; it is related to physical diffusion in the channels and/or to charge hopping. In the case of clays, electroactive species can be accumulated within the clay layer at different places, but only a small fraction of these species displays redox activity (≈10-30%) (Figure 1). Mass transport processes through cationic and anionic clays have been investigated by electrochemical quartz crystal microbalance measurements.
[START_REF] Yao | Clay-modified electrodes as studied by the quartz crystal microbalance: adsorption of ruthenium complexes[END_REF][START_REF] Yao | Clay-modified electrodes as studied by the quartz crystal microbalance: redox processes of ruthenium and iron complexes[END_REF][START_REF] Yao | Mass transport on an anionic claymodified electrode as studied by a quartz crystal microbalance[END_REF] The results showed that the charge balancing during a redox reaction was accomplished by the leaching or insertion of mobile ions at the clay/solution interface. Taking into account that this phenomenon depends on the nature and the concentration of the electrolyte, modifications in the swelling properties of the clays can occur. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] The enhancement of charge transport within the clay film can be achieved by delamination processes that give less ordered coatings and consequently allow access to the channels. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Different strategies can be applied for this: the use of smaller particles (laponite instead of montmorillonite), pillaring agents (alumina, silicate) to obtain a porous clay heterostructure, or the intercalation of molecules (surfactants or polymers). [START_REF] Carrero | Electrochemically active films of negatively charged molecules, surfactants and synthetic clays[END_REF][START_REF] Falaras | Al-pillared acid activated montmorillonite modified electrodes[END_REF] Other methods describe the enhancement of electron hopping using electron relays, making use of the redox-active cation sites within the crystal lattice (i.e., iron, cobalt, or copper for cationic clays and nickel, cobalt or manganese for anionic clays) to transfer electrons from intercalated ions to the conductive substrate.
[START_REF] Qiu | Anionic clay modified electrodes: electron transfer mediated by electroactive nickel, cobalt or manganese sites in layered double hydroxide films[END_REF][START_REF] Xiang | Electron transport in clay-modified electrodes: study of electron transfer between electrochemically oxidized tris(2,2'-bipyridyl)iron cations and clay structural iron(II) sites[END_REF][START_REF] Xiang | Electrodes modified with synthetic clay minerals: evidence of direct electron transfer from structural iron sites in the clay lattice[END_REF][START_REF] Xiang | Electrodes modified with synthetic clay minerals: electron transfer between adsorbed tris(2,2'-bipyridyl) metal cations and electroactive cobalt centers in synthetic smectites[END_REF][START_REF] Xiao | Preparation, characterization and electrochemistry of synthetic copper clays[END_REF] Other possibilities for delivering charges consist of using a conductive polymer (polypyrrole) within the clay interlayer [START_REF] Rudzinski | Polypyrrole-clay modified electrodes[END_REF][START_REF] Faguy | Conducting polymer-clay composites for electrochemical applications[END_REF] or a composite conducting material (V2O5). [START_REF] Anaissi | Modified electrode based on mixed bentonite vanadium (V) oxide xerogels[END_REF] In spite of all these inconveniences regarding electron transfer at CLMEs, many applications have been found for these modified electrodes, such as electrocatalysts, photocatalysts, sensors, and biosensors. The direct electrochemistry of heme proteins (i.e., cytochrome c, cytochrome P450, myoglobin (Mb), etc.) has been reported at CLMEs.
[START_REF] Lei | Clay-bridged electron transfer between cytochrome P450cam and electrode[END_REF][START_REF] Sallez | Electrochemical behavior of c-type cytochromes at claymodified carbon electrodes: a model for the interaction between proteins and soils[END_REF][START_REF] Bianco | Protein modified and membrane electrodes: strategies for the development of biomolecular sensors[END_REF][START_REF] Scheller | Bioelectrocatalysis by redox enzymes at modified electrodes[END_REF][START_REF] Dai | Direct electrochemistry of myoglobin based on ionic liquid-clay composite films[END_REF] In most of these cases, the protein-clay films were prepared by depositing a certain concentration of protein onto the CLME, the heterogeneous electron transfer between the protein and the electrode being facilitated by the clay modification of the electrode surface. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] It is stated that the interaction of the proteins with the clay particles did not seriously affect the heme Fe(III)/(II) electroactive group of the incorporated proteins.

Heavy metals are well known as important environmental and biological contaminants: they are not bio- or photodegradable, they accumulate in human, animal, and plant tissues, and they have highly toxic effects. Their density is 6.0 g/cm3 or more (much higher than the average particle density of soils, which is 2.65 g/cm3). Even though they occur naturally in rocks, their concentrations are frequently elevated as a result of contamination. Arsenic, cadmium, chromium, mercury, lead, and zinc are the most important heavy metals with regard to potential hazards and occurrence in contaminated soils.
41 Metal mining, metal smelting, metallurgical industries, and other metal-using industries, waste disposal, corrosions of metals in use, agriculture and forestry, fossil fuel combustion, and sports and leisure activities are the main sources of heavy metal pollutants. Large areas worldwide are affected by heavy metal contamination. Heavy metal pollution is mostly present close to industrial sites, around large cities and in the vicinity of mining and smelting plants. Heavy metals can transfer into crops and subsequently into the food chain, which affects seriously the agriculture in these areas. 41 The increasing requirement for an efficient and real-time monitoring of trace heavy metals that pollute the environment led to the development of new detection methods and new specific sensors capable to perform in situ measurements with minimum disturbance of natural system. The ion exchange capacity and the ion exchange selectivity of clays recommended them to be applied to the accumulation of charged electroactive analytes. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Cationic exchanging clays are essentially used for sensor devices. The electroactivity of the adsorbed ions depends on the soaking time of the CLME in the analyte solution (accumulation time), on the nature and concentration of analyte, on the mode of preparation of the CLME, on the electrolyte nature, etc. Smectite clays are frequently employed at CLME because they can serve as matrices for electroactive ions as they are able to incorporate ions by an ion-exchange process, like polymeric ionomers. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Due to its cation exchange capacity typically between 0.80-1.50 mM g -1 , while anion exchange is about four times lower, montmorillonite is the most often used smectite. 
[START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Carbon paste electrodes (CPEs) were generally used for the preconcentration of cationic heavy metals at CLMEs. The detection limits depend on the nature of the metal cations or on the clay species and are lower than the European norms required for drinking water. 2

Inorganic clay heavy metal detection sensors

Determination of inorganic analytes (metal ions in most cases) can be achieved through ion exchange reactions on the clay modifier. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Thus, the procedure consists of an accumulation step under open circuit conditions with subsequent voltammetric determination after medium exchange (Figure 2). The fact that the accumulation process is separated from the measurement step, so that optimum conditions can be found and applied for each of them, represents a great advantage of this procedure. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Preconcentration conditions must be optimized because of the ion exchange and sorption reactions of clays and zeolites. After removal from the preconcentration medium, the electrode is carefully washed with redistilled water before being transferred into the measurement cell containing the background electrolyte. The composition and concentration of the background electrolyte have a great influence on the current response of the clay electrode. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] There is competition between electrolyte cations and the analyte for ion-exchange sites, while electrolyte anions can influence the ion-pairing mechanism. Electrolyte concentration can also affect the structure of the clay modifier. These aspects should therefore be taken into account during optimization of the analytical method.
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The ion exchange capability of clays was first demonstrated using clay CPEs. The preconcentration of free Fe(III) cations into montmorillonite [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF] takes place through the replacement of the exchangeable cations in the interlayer of the clay. A preconcentration model of Ag + and Cu 2+ on a vermiculite modified paste electrode under open circuit conditions [START_REF] Kalcher | The vermiculite-modified carbon paste electrode as a model system for preconcentrating mono and divalent cations[END_REF] was investigated based on a simplified ion exchange process involving the negatively charged groups of the clay minerals. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The electrochemically determined concentration equilibrium constants for the ion exchange of Ag + corresponded to those evaluated by atomic absorption spectroscopy. The model is therefore expected to be applicable to other systems. The exchange of metal cations onto montmorillonite, vermiculite and kaolinite was studied by repetitive CV on clay CPEs. [START_REF] Kula | Voltammetric Study of Clay Minerals Properties[END_REF] Metal sorption is reflected in the dependence of the current on time or on potential cycling, and thus individual clay minerals and metal cations can be distinguished, at least to a first approximation. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Copper(II) could be determined under open circuit conditions owing to its preconcentration by means of cation exchange.
[START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] Other cations present in the electrolyte can compete in the cation exchange [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF][START_REF] Švegl | Vermicullite clay mineral as an effective carbon paste electrode modifier for the preconcentration and voltammetric determination of Hg(II) and Ag(I) ions[END_REF][START_REF] Navrátilová | Cation and anion exchange on clay modified electrodes[END_REF] , and the formation of metal complexes must also be considered in this case (e.g., exchange of the cationic complex [Cu(ac)] + occurred on montmorillonite and vermiculite 47 ). A vermiculite CPE was employed as a model for a soil-like phase, and the binding interactions of Cu(II) ions with the mineral were studied. [START_REF] Švegl | A methodological approach to the application of a vermiculite modified carbon paste electrode in interaction studies: Influence of some pesticides on the uptake of Cu(II) from a solution to the solid phase[END_REF] When investigating the influence of selected pesticides on Cu(II) uptake from solution to vermiculite, it was shown that Cu(II) uptake depended significantly on the binding affinity of the pesticides to vermiculite, and hence on their ability to form coordination compounds with Cu(II) ions. The different effects of some substances of environmental importance on metal ion uptake by the clay mineral were also described. The research was later extended by employing different soils in their native form as electrode modifiers, allowing soil-heavy metal ion interactions to be studied by means of CPEs for the first time.
[START_REF] Švegl | Soil-modified carbon paste electrode: a useful tool in environmental assessment of heavy metal ion binding interactions[END_REF] The binding capabilities of the soils were examined in a model solution of Cu(II) ions, together with the correlation between copper ion accumulation and the standard soil parameters (ion exchange capacity, soil pH, organic matter and clay content). [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The nature of the soil samples should be considered when developing an environmental sensor based on a soil modified CPE suitable for on-site testing of soils. [START_REF] Sallez | Electrochemical behavior of c-type cytochromes at claymodified carbon electrodes: a model for the interaction between proteins and soils[END_REF] For example, the soil used as a modifier should be fully characterized, because the behavior of soil modified electrodes strongly depends on the type of soil. [START_REF] Švegl | Soil-modified carbon paste electrode: a useful tool in environmental assessment of heavy metal ion binding interactions[END_REF] Renewal of the CPE surface is important for obtaining reproducible measurements. The carbon paste surface can easily be removed by polishing, and a new surface prepared. It was shown that the standard deviation is 5% when each measurement is performed on a mechanically renewed surface. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] Regeneration by washing the electrode with water, with measurements carried out on one surface over a single day, was also described.
[START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF] In the case of Cu(II) determination on a vermiculite modified electrode [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] , a short-term regeneration step (0.05 M KCN for 2 min and 0.1 M HClO 4 for 1 min) was applied after each voltammetric measurement, while a long-term preconditioning step (0.1 M HClO 4 for 24 hours, 0.05 M KCN for 2 min and 0.1 M HClO 4 for 1 min) was applied after mechanical renewal of the surface. The regenerated surface was usable for one week without mechanically renewing the paste. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Another study described renewal of the surface by exposing the electrode to 0.2 M KCN in 2 M NH 3 for 2 min. [START_REF] Kalcher | The vermiculite-modified carbon paste electrode as a model system for preconcentrating mono and divalent cations[END_REF] During the preconcentration step, the ion exchange reaction takes place by replacing the cations initially present in the clay, as in the case of Fe(III). [START_REF] Wang | Trace analysis at clay-modified carbon paste electrodes[END_REF] The reduction current peak results from the reduction of the surface-bound iron, while enrichment of iron in the carbon paste is indicated by the increase of the peak height at longer preconcentration times. The electrode surface must be renewed so that a freshly smoothed surface can be used again. Bathing the electrode for 1 min in 0.1 M CH 3 COONa or 0.1 M Na 2 CO 3 achieved 100% removal of iron from montmorillonite, although this regeneration step can be a limiting point of the procedure, because the stated time may be insufficient at higher Fe(III) concentrations.
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The method showed no interference from coexisting metal ions, except that Mg(II) and Al(III) in a 2-fold excess caused 32% and 100% decreases of the current response, respectively. This can be explained by the ability of high-valence cations to replace exchangeable cations in clays. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Historically, it can be noted that greater attention was given to copper(II) determination by means of clay CPEs. The first works reported the voltammetric response of zeolite modified CPEs for Cu(II). [START_REF] Shaw | Carbon composite electrodes containing alumina, layered double hydroxides, and zeolites[END_REF][START_REF] El Murr | The zeolite-modified carbon paste electrode[END_REF] However, the related detection limits (about 10 -4 mol L -1 ) were not low enough for practical use. In addition, regeneration of the electrode surfaces was not an easy step. Over time, Cu(II) determination methods based on clay modified CPEs achieved better detection limits. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] Both the vermiculite CPE [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] and the montmorillonite modified CPE 45 used the anodic current response of the accumulated Cu(II). In the first method, the clay CPE was prepared using the dry clay modifier, so in this case activation of the electrode surface complicated the measurement.
[START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] The electrode was exposed to 0.1 M HClO 4 for 24 h, which can damage the electrode surface, even though the authors do not state this. The electrode was treated with 0.05 M KCN for 2 min and with 0.1 M HClO 4 for 1 min before use. A short-term (3 min) regeneration step in 0.05 M KCN and 0.1 M HClO 4 was needed to remove copper residue from the vermiculite, given that the electrode surface was used for at least one week without mechanical renewal of the paste. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] This step can also be an inconvenience in the Cu(II) determination. However, the method was promising, with a relative error of 2% at ppm Cu(II) concentrations. When the wetted modifier was used, no activation of the electrode surface was necessary in the case of the montmorillonite modified CPE. No regeneration step was required in this method, because the paste was regularly removed mechanically and a newly smoothed surface was used for each measurement. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] The limit of detection achieved by this method was ten times higher than that of the vermiculite CPE, although the area of the montmorillonite CPE was 20 times smaller than that of the vermiculite electrode.
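Detection limits like those compared above are commonly estimated from the slope (sensitivity) of the low-concentration calibration line together with the scatter of replicate blank measurements, as LOD ≈ 3·s_blank/slope. A minimal sketch of this calculation, using purely illustrative numbers (none taken from the cited studies):

```python
# Hypothetical anodic stripping calibration for Cu(II): concentration (mol L^-1)
# versus peak current (uA). All numbers are illustrative, not from the cited work.
conc = [0.0, 2e-7, 4e-7, 6e-7, 8e-7, 1e-6]
current = [0.02, 0.41, 0.83, 1.22, 1.60, 2.05]

# Replicate blank measurements (uA) used to estimate the blank standard deviation.
blanks = [0.020, 0.025, 0.018, 0.022, 0.021]

# Least-squares slope of the calibration line (sensitivity, uA L mol^-1).
n = len(conc)
mx, my = sum(conc) / n, sum(current) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, current)) / sum(
    (x - mx) ** 2 for x in conc
)

# Sample standard deviation of the blank replicates.
mb = sum(blanks) / len(blanks)
s_blank = (sum((b - mb) ** 2 for b in blanks) / (len(blanks) - 1)) ** 0.5

# IUPAC-style estimate: LOD = 3 * s_blank / sensitivity.
lod = 3 * s_blank / slope
print(f"sensitivity = {slope:.3g} uA L/mol, LOD = {lod:.2g} mol/L")
```

With these illustrative numbers the estimate comes out near 4×10 -9 mol L -1 , i.e. in the same 10 -9 -10 -8 M range as the detection limits quoted for the clay CPE methods discussed here.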
Some interferences were present for both methods: bivalent metal cations (Pb, Hg, Cd, Zn) in a 100-fold excess [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF][START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] and the cations Fe(II, III), Co(II), Ni(II), Mn(II), and Bi(III) in a 200-fold excess [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] , while surfactants such as Triton X-100, butylammonium bromide, and methylammonium bromide did not perturb the copper signal even in a 1000-fold excess. [START_REF] Ogorevc | Determination of traces of copper by anodic stripping voltammetry after its preconcentration via an ion-exchange route at carbon paste electrodes modified with vermiculite[END_REF] In contrast, the Cu(II) current response decreased by 50% in a 4-fold excess of humic ligands. [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] Thus, vermiculite and montmorillonite CPEs are sensitive enough for Cu(II) determination, while their selectivity remains questionable. However, the different selectivities of Cu(II) and Zn(II) towards nontronite are expected to eliminate the interference of Cu(II) in the determination of Zn(II) in their mixture.
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The Nafion/nontronite modified mercury film electrode proved suitable for this purpose [START_REF] Zen | Disposable claycoated screen-printed electrode for amitrole analysis[END_REF] as it exhibited square-wave stripping peak currents linear over the ranges 0-5 and 6-12 µM Zn(II) in the presence of 10 µM Cu(II). Monovalent silver and divalent mercury could be determined using a vermiculite modified CPE. [START_REF] Švegl | Vermicullite clay mineral as an effective carbon paste electrode modifier for the preconcentration and voltammetric determination of Hg(II) and Ag(I) ions[END_REF] Preconditioning in this case was achieved by exposing the modified electrode to 0.1 M HClO 4 for 24 h. Electrode regeneration was performed in 0.1 M KCN for 2 min, followed by activation with 0.1 M HClO 4 for 2 min before each measurement. The limits of detection were around 10 -8 mol/L for both silver and mercury. The anionic complexes HgCl 4 2- and HgCl 3 - could be determined by means of a montmorillonite modified CPE at the same concentration level. [START_REF] Kula | Sorption and determination of Hg(II) on clay modified carbon paste electrodes[END_REF] Chlorocomplexes were also applied to determine gold on the montmorillonite modified CPE [START_REF] Navratilova | Determination of gold using clay modified carbon paste electrode[END_REF][START_REF] Kula | Anion exchange of gold chloro complexes on carbon paste electrode modified with montmorillonite for determination of gold in pharmaceuticals[END_REF] , the determination of gold in pharmaceuticals also being possible with this method. 100 A sodium modified montmorillonite was applied to cadmium sorption from nitrate solutions in order to simulate a cadmium-polluted clay mineral.
The cadmium concentration adsorbed by the clay at equilibrium was analyzed by means of differential pulse polarography, showing a Freundlich sorption profile. After the cadmium sorption procedure, the clay mineral was included in a carbon paste in order to investigate the cadmium content by voltammetric determination. A linear response of the CPE was observed in the 5.0×10 -5 -1.8×10 -4 mol g -1 range with good reproducibility. [START_REF] Marchal | Determination of cadmium in bentonite clay mineral using a carbon paste electrode[END_REF] Selective analysis of copper(II) was also achieved at a carbon microelectrode coated with monolayers of laponite clay and polythiophene, by an originally developed double-step voltcoulometry. [START_REF] Barančok | Surface modified microelectrodes for selective electroanalysis of metal ions in environmental components[END_REF] The Langmuir-Blodgett technique was used for the surface modification of the electrodes, and a detection limit of 5 mg L -1 was reached. The authors discuss the characteristic features of the "memory effect" of the laponite coating. [START_REF] Barančok | Surface modified microelectrodes for selective electroanalysis of metal ions in environmental components[END_REF] When testing the effect of montmorillonite on the preparation of a calcium ion electrode, by combining and immobilizing an ionophore and montmorillonite in a polymer membrane, it was shown that the montmorillonite-modified electrode exhibited higher performance than one without montmorillonite. [START_REF] Wang | Immobilized ionophore calcium ion sensor modified by montmorillonite[END_REF] Good accumulation ability for the [Cu(NH 3 ) 4 ] 2+ complex ion via the ion-exchange ability of nontronite was obtained when testing a nontronite/cellulose acetate-coated GCE for the preconcentration and electroanalysis of Cu 2+ in ammoniacal medium by square wave voltammetry (SWV).
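The Freundlich sorption profile mentioned above for cadmium on the clay (q = K_F·C^(1/n)) is conveniently checked by linearization, since log q = log K_F + (1/n)·log C. A minimal fitting sketch with hypothetical equilibrium data (the values below are illustrative, not taken from the cited study):

```python
import math

# Hypothetical equilibrium data: dissolved Cd(II) concentration C (mol L^-1)
# and adsorbed amount q (mol g^-1). Illustrative values only.
C = [1e-5, 5e-5, 1e-4, 5e-4, 1e-3]
q = [6.0e-6, 1.6e-5, 2.4e-5, 6.3e-5, 9.5e-5]

# Freundlich isotherm q = K_F * C**(1/n); fit log10(q) vs log10(C)
# by ordinary least squares to recover log10(K_F) and the exponent 1/n.
x = [math.log10(c) for c in C]
y = [math.log10(v) for v in q]
n_pts = len(x)
mx, my = sum(x) / n_pts, sum(y) / n_pts
inv_n = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
K_F = 10 ** (my - inv_n * mx)

print(f"K_F = {K_F:.3g}, 1/n = {inv_n:.2f}")
```

An exponent 1/n between 0 and 1 (here about 0.6) indicates the favorable sorption on a heterogeneous surface typically reported for heavy metals on clay minerals.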
The cellulose acetate permselective membrane was applied to strengthen the mechanical stability of the nontronite coating on the GCE and to prevent interference from surface-active compounds. The obtained detection limit was 1.73 ppb in pH 10 ammonia solution. The practical application of the developed sensor was illustrated by the measurement of Cu 2+ in tap water, groundwater, and pond water. [START_REF] Zen | Multianalyte sensor for the simultaneous determination of hypoxanthine, xanthine and uric based on a preanodized nontronite-coated screen-printed electrode[END_REF] An electrodeposition method was used to develop a CPE modified with montmorillonite and covered with a mercury film. In this case, both in situ electrodeposition and preliminary electrodeposition were applied to deposit the mercury film. Anodic stripping of metals deposited on Hg film electrodes is said to be a complicated process limited by factors such as the time and potential of the metal electrodeposition, while the accumulation and stripping of metals depend on the nature of the metal. By using open-circuit sorption of Cd, Pb, and Cu with subsequent anodic stripping voltammetry, higher current responses were obtained. Besides enhanced sensitivity, superior separation of the current responses during simultaneous stripping of the metals was achieved. [START_REF] Navrátilová | Electrodeposition of mercury film on electrodes modified with clay minerals[END_REF] Ag(I) could be detected using two natural clays (kaolinite and montmorillonite) deposited onto a platinum electrode surface, by two deposition techniques and under different experimental variables. For both clays, complete coverage of the electrode surface was achieved using the spin-coating technique. The detection limit of Ag(I) ions was as low as 10 -10 M.
[START_REF] Issa | Deposition of two natural clays on a Pt surface using potentiostatic and spin-coating techniques: a comparative study[END_REF] Recently, the simultaneous determination of trace amounts of Pb 2+ and Cd 2+ with montmorillonite-bismuth-carbon electrodes was developed. The method was applied to determining the Pb 2+ and Cd 2+ contents in real water samples by square wave anodic stripping voltammetry. The detection limits were 0.2 μg L -1 for Pb 2+ and 0.35 μg L -1 for Cd 2+ . [START_REF] Luo | Voltammetric determination of Pb 2+ and Cd 2+ with montmorillonite-bismuth-carbon electrodes[END_REF] A different approach to heavy metal detection was proposed by investigating the feasibility of amperometric sucrose and mercury biosensors based on invertase, glucose oxidase, and mutarotase entrapped in a clay matrix (laponite). The enzyme-clay gel, cross-linked with glutaraldehyde (GA), was deposited on platinum electrodes. Inhibition of the invertase activity enabled the measurement of mercury concentration. It was shown that the use of a clay matrix, a cation exchanger, for invertase immobilization allows the accumulation of metal cations in the vicinity of the enzyme, enhancing the inhibition effect, though this is associated with a decrease of the biosensor recovery. The inhibition of the biosensor by inorganic and organic mercury was evaluated, and good selectivity towards mercury was observed when studying interferences from other metal ions. Under optimized conditions, Hg(II) was determined in the concentration range of 10 -8 to 10 -6 M. 64

Organo-clay heavy metal detection sensors

Organo-clays have found applications in the treatment of wastewaters contaminated with several inorganic cations. [START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] 'Organo-clays' are, in fact, clay materials with enhanced sorption capacity obtained by introducing suitable organic molecules.
[START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] An important feature here is that, depending on the nature of the organic molecule introduced, the surface of the clay becomes more hydrophobic or hydrophilic. [START_REF] Yariv | Organo-clay complexes and interactions[END_REF] The organic molecule incorporated relates closely to the specific purpose of the modification. Therefore, the organic functionalities contain selective functional groups with high affinity towards the target species. From this perspective, organosilanes intercalated into clays for Hg 2+ detection 66 demonstrated that, besides the uptake of this cation at the surface charge sites of the clays, the intercalated functional groups were also involved in the binding of this metal ion, which caused the enhanced uptake. [START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] In a similar way, the enhanced removal of Hg(II) from aqueous solutions using clay materials grafted with 1,3,4-thiadiazole-2,5-dithiol 67 or 3-mercaptopropyl 68 groups was also described. A CPE modified with a natural 2:1 phyllosilicate clay functionalized with either amine or thiol groups was also evaluated as a sensor for Hg(II). Functionalization was achieved by grafting the pristine clay via its reaction with 3-aminopropyltriethoxysilane or 3-mercaptopropyl-trimethoxysilane. The electroanalytical procedure followed two steps: chemical accumulation of the analyte under open-circuit conditions and electrochemical detection of the preconcentrated species using differential pulse anodic stripping voltammetry (DPASV). The detection limits were 8.7•10 -8 and 6.8•10 -8 M for the amine- and thiol-functionalized clays, respectively.
The authors maintain that the sensor can be useful as an alerting device in environmental monitoring or for rapid screening of areas polluted with mercury species. [START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF] Na-montmorillonite was rendered organophilic by exchanging the inorganic interlayer cations for hexadecyltrimethylammonium ions (HDTA). Based on the high affinity of an organo-clay for non-ionic organic molecules, 1,3,4-thiadiazole-2,5-dithiol was loaded on the HDTA-montmorillonite surface, resulting in the 1,3,4-thiadiazole-2,5-dithiol-HDTA-montmorillonite complex, which has been shown to be an effective selective solid-phase sorbent for Hg(II) and can also be applied in the preparation of a chemically modified CPE. The detection limit was estimated as 0.15 µg L -1 . [START_REF] Dias Filho | Study of an organically modified clay: Selective adsorption of heavy metal ions and voltammetric determination of mercury(II)[END_REF] A lower detection limit for Hg(II) was obtained using thiol-functionalized porous clay heterostructures, which were prepared by intragallery assembly of mesoporous organosilica in natural smectite clay. The material was prepared by surfactant-directed co-condensation of tetraethoxysilane (TEOS) and 3-mercaptopropyltrimethoxysilane (MPTMS), at various MPTMS/TEOS ratios, in the interlayer region of the clay; it was deposited as thin films onto the surface of GCEs and applied to the voltammetric detection of Hg(II) subsequent to open-circuit accumulation. The films displayed attractive features, such as fast diffusion rates, and thus high sensitivity, and good mechanical stability due to the layered morphology.
[START_REF] Jieumboué-Tchinda | Thiol-functionalized porous clay heterostructures (PCHs) deposited as thin films on carbon electrode: Towards mercury(II) sensing[END_REF] The use of clays and humic acids to simulate the soil clay-organic complex (clay humate) represents another approach to understanding soil processes related to the transport and accumulation of heavy metals. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] The clay-humate modified CPE was employed to study the clay humate system and its reaction with copper in comparison with the clays alone [START_REF] Kula | Voltammetric copper (II) determination with a montmorillonitemodified carbon paste electrode[END_REF] , and it was concluded that Cu(II) sorption was significantly decreased in the case of montmorillonite-humate. Some organic molecules with different functional groups ( -NH 2 , -COOH, -SH, or -CS 2 ) have been introduced within the interlayer space of montmorillonite using organic compounds such as hexamethylenediamine, 2-(dimethylamino)ethenethiol, 5-aminovaleric acid, and hexamethylenediamine-dithiocarbamate. [START_REF] Stathi | Physicochemical study of novel organoclays as heavy metal ion adsorbents for environmental remediation[END_REF] These organo-intercalated montmorillonite samples showed an increased uptake capacity for cations (i.e., Cd 2+ , Pb 2+ and Zn 2+ ). The suggested mechanism was that the increase in the interlayer space (between 0.3-0.4 nm) occurred due to the intercalation of the organic compound, which allowed easier access of the M 2+ and M(OH) + species to the intercalated organic compound. Meanwhile, the strong binding of the M(OH) + species to the organic compounds contributed additionally to the metal uptake, as proved by the obtained binding constants.
[START_REF] Lee | Organo and inorgano-organo-modified clays in the remediation of aqueous solutions: An overview[END_REF] A voltammetric sensor based on a chemically modified bentonite-porphyrin CPE has been introduced for the determination of trace amounts of Mn(II) in wheat flour, wheat rice, and vegetables. The method showed good Mn(II) responses in the pH range 3.5-7.5 and a detection limit of 1.07•10 -7 mol L -1 Mn(II). The authors showed that a 1000-fold excess of added ions did not interfere with the determination of Mn(II). [START_REF] Rezaei | A selective modified bentonite-porphyrin carbon paste electrode for determination of Mn(II) by using anodic stripping voltammetry[END_REF] By combining the ion exchange capability of Na-montmorillonite with the electronic conductor function of an anthraquinone, a sensitive electrochemical technique for the simultaneous determination of trace levels of Pb 2+ and Cd 2+ by DPASV was developed. This method used a non-electrolytic preconcentration via an ion exchange mechanism, followed by an accumulation period via complex formation in the reduction stage at -1.2 V, followed by an anodic stripping process. The detection limit was 1 and 3 nM for Pb 2+ and Cd 2+ , respectively. This method found application in the detection of trace levels of Pb 2+ and Cd 2+ in milk powder and lake water samples. [START_REF] Yuan | Simultaneous determination of cadmium (II) and lead (II) with clay nanoparticles and anthraquinone complexly modified glassy carbon electrode[END_REF] A low-cost electrochemical sensor for lead detection was developed using an organoclay obtained by the intercalation of 1,10-phenanthroline within montmorillonite. The results showed that the amount of accumulated Pb(II) increased with accumulation time and remained constant after saturation. The limit of detection for lead was in the sub-nanomolar range (4•10 -10 M), and there was no interference from copper at concentrations of 0.1×[Pb(II)].
[START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF]

Amperometric biosensors based on clays applied in pharmaceutical and biomedical analysis

Enzyme modified electrodes are considered the most popular and reliable kind of biosensors. A crucial feature in the commercial development of biosensors is the stable immobilization of an enzyme on an electrode surface, with complete retention of its biological activity and good diffusional properties for substrates. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] The various enzyme immobilization methods reported in the literature include cross-linking of proteins by bifunctional reagents, covalent binding, and entrapment in a suitable matrix. Due to their hydrophilicity, swelling, and porosity, clays occupy a privileged place among all the inorganic and organic matrices described. They have therefore been used as host matrices for enzymes (first-generation biosensors), as a means of concentrating different redox mediators (second-generation biosensors), then as mediators themselves (clays and LDHs for electron transfer), and finally for direct enzyme regeneration (third-generation biosensors).

Oxygen based biosensors (first generation)

In the first generation of biosensors, during the glucose oxidation process, the flavin adenine dinucleotide (FAD) component of GOX is converted into FADH 2 . After glucose oxidation, FADH 2 is reconverted to FAD in the presence of oxygen (Figure 3A). Besombes et al.
[START_REF] Besombes | Improvement of analytical characteristic of an enzyme electrode for free and total cholesterol via laponite clay additives[END_REF][START_REF] Besombes | Improvement of poly(amphiphilic pyrrole) enzyme electrodes via the incorporation of synthetic laponite-clay-nanoparticles[END_REF] have shown that the analytical performance of amperometric biosensors based on polyphenol oxidase (PO) and cholesterol oxidase (CO) can be greatly improved by the incorporation of laponite particles within an electrogenerated polypyrrole matrix. A similar study showed that a laponite matrix improved the analytical characteristics and the long-term stability of biosensors compared with the corresponding biosensors simply obtained by the chemical cross-linking of glucose oxidase (GOX) or polyphenol oxidase (PPO) on the electrode surface. [START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF][START_REF] Shan | A new polyphenol oxidase biosensor mediated by Azure B in laponite clay matrix[END_REF] An inexpensive, fast, and easy method for the elaboration of enzyme electrodes is the entrapment of biomolecules in a clay matrix, which consists of the adsorption of an enzyme/clay aqueous colloidal mixture onto the electrode surface. 2 However, slow release of enzymes into solution can considerably reduce the biosensor lifetime. This problem was overcome by using enzyme cross-linking agents, such as glutaraldehyde (GA) [START_REF] Poyard | A new method for the controlled immobilization of enzyme in inorganic gels (laponite) for amperometric glucose biosensing[END_REF] , polymethyl methacrylate, or poly(o-phenylenediamine) [START_REF] Shyu | Characterization of iron containing clay modified electrodes and their applications for glucose sensing[END_REF] , which have been added to the enzyme/clay coating film. 2 GOX 80 was also immobilized by the clay sandwich method.
A polycationic organosilasesquioxane laponite clay matrix was adopted by Coche-Guérente et al. [START_REF] Coche-Guérente | Characterization of organosilasesquioxaneintercalated-laponite-clay modified electrodes and (bio)electrochemical applications[END_REF][START_REF] Coche-Guérente | Amplification of amperometric biosensor responses by electrochemical substrate recycling: Part II. Experimental study of the catechol-polyphenol oxidase system immobilized in a laponite clay matrix[END_REF][START_REF] Coche-Guérente | Amplification of amperometric biosensor response by electrochemical substrate recycling. 3. Theoretical and experimental study of phenol-polyphenol oxidase system immobilized in laponite hydrogels and layer-by-layer selfassembled structures[END_REF] for biosensor development. The group proposed an enzymatic kinetic model for the PPO amplification process. Amperometric detection of glucose has been intensively studied, mainly because of the physiological importance of this analyte, the stability of glucose oxidase, and the diversity of sensing methods applied. The oxidizing agent used by the GOX electrode is molecular oxygen, while the amperometric detection of glucose can be carried out via the electrooxidation of the enzymatically generated H₂O₂ at a Pt electrode: 2

Glucose + O₂ --GOX--> gluconic acid + H₂O₂
H₂O₂ → O₂ + 2H⁺ + 2e⁻

Interferents like ascorbic acid and uric acid, which are commonly present in biological fluids, can also be oxidized at the high polarizing voltage applied (Eapp ≈ 0.6-0.8 V), leading to nonspecific signals 2 . To overcome this problem, Poyard et al.
[START_REF] Poyard | Optimization of an inorganic/bio-organic matrix for the development of new glucose biosensor membranes[END_REF][START_REF] Poyard | Association of a poly(4vinylpyridine-costyrene) membrane with an inorganic/organic mixed matrix for the optimization of glucose biosensors[END_REF] used clay-semipermeable polymer composite electrodes to decrease the permeability to organic interfering compounds. Since hydrogen peroxide can be reduced by redox mediators immobilized within the CLME, another possibility consists in using redox mediators to lower the applied electrode potential. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Based on this concept, cationic methylviologen (MV²⁺) [START_REF] Zen | A glucose sensor made of an enzymatic clay modified electrode and methyl viologen mediator[END_REF] , ruthenium complexes [START_REF] Shyu | Characterization of iron containing clay modified electrodes and their applications for glucose sensing[END_REF][START_REF] Ohsaka | A new amperometric glucose sensor based on bilayer film coating of redox-active clay film and glucose oxidase enzyme film[END_REF] associated with structural iron cations, or underlying TiO₂ films [START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF] were employed in the development of CLME-based glucose biosensors. A particular example of an amperometric biosensor for glucose detection is the GOX/luminol/clay electrode, which works by means of electrochemiluminescence. 87 A biosensor for phosphate detection has been fabricated by the co-immobilization of three enzymes (GOX, maltose phosphorylase, and mutarotase) with complementary activities. [START_REF] Mousty | Trienzymatic biosensor for the determination of inorganic phosphate[END_REF] The amperometric detection in this case corresponded to H₂O₂ oxidation.
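The steady-state current of such amperometric enzyme electrodes typically saturates with substrate concentration following apparent Michaelis-Menten kinetics, I = Imax·C/(KM + C). As an illustrative sketch (the calibration values below are hypothetical, not taken from the cited works), the apparent parameters can be recovered from calibration data by a Lineweaver-Burk linearization:

```python
# Estimate apparent Michaelis-Menten parameters of an amperometric
# enzyme electrode from calibration data (hypothetical values).
# Model: I = Imax * C / (Km + C); linearized: 1/I = (Km/Imax)*(1/C) + 1/Imax

def lineweaver_burk(concentrations, currents):
    """Return (Imax, Km) from a least-squares fit of 1/I vs 1/C."""
    x = [1.0 / c for c in concentrations]
    y = [1.0 / i for i in currents]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    intercept = mean_y - slope * mean_x          # intercept = 1/Imax
    imax = 1.0 / intercept
    km = slope * imax                            # slope = Km/Imax
    return imax, km

# Synthetic calibration: Imax = 2.5 uA, Km = 1.2 mM (assumed for the demo)
conc = [0.2, 0.5, 1.0, 2.0, 5.0, 10.0]          # mM glucose
curr = [2.5 * c / (1.2 + c) for c in conc]      # uA
imax, km = lineweaver_burk(conc, curr)
print(round(imax, 3), round(km, 3))             # recovers 2.5 and 1.2
```

In practice a nonlinear fit of the raw I-C data is preferable, since the reciprocal transform amplifies the noise of low-concentration points.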
[START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] Taking into account that a large percentage of environmental pollutants are known to act as enzyme inhibitors, a series of sensors has been developed based on the measurement of this property. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] In this way, a novel hypoxanthine sensor based on xanthine oxidase immobilized within a polyaniline film was developed by Hu et al. [START_REF] Hu | Biosensor for detection of hypoxanthine based on xanthine oxidase immobilized on chemically modified carbon paste electrode[END_REF] The detection in this case is based on the oxygen consumed by the enzymatic reaction, measured at a montmorillonite-MV²⁺ carbon paste-modified electrode. 2

Mediator-based biosensors (second generation)

In the second generation of biosensors, O₂ was replaced by redox mediators (e.g., tetrathiafulvalene or dopamine) as oxidizing agents. [START_REF] Lei | Hydrogen peroxide sensor based on coimmobilized methylene green and horseradish peroxidase in the same montmorillonite-modified bovine serum albuminglutaraldehyde matrix on a glassy carbon electrode surface[END_REF][START_REF] Zen | A selective voltammetric method for uric acid and dopamine detection using clay-modified electrodes[END_REF][START_REF] Zen | An enzymatic clay modified electrode for aerobic glucose monitoring with dopamine as mediator[END_REF] The amperometric measurement of glucose concentration (Figure 3B) therefore reflects the current produced by the reoxidation of the electron-transfer agents at the electrode surface.
[START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] In a similar way, bienzymatic configurations, consisting of GOX, horseradish peroxidase (HRP), and redox mediators coated on a CLME, could be applied to the detection of glucose at 0.0 V with good sensitivity, without interference, and preventing any possible oxidation of ascorbate and urate. [START_REF] Cosnier | A composite clay glucose biosensor based on an electrically connected HRP[END_REF][START_REF] Shan | HRP wiring by redox active layered double hydroxides: application to the mediated H2O2 detection[END_REF] Laponite in particular was used as an immobilization matrix for several other oxidase enzymes on CLMEs, and the resulting biosensors have been used to detect NADH, lactate, ethanol, and also a wide variety of water pollutants, such as cyanide. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF][START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF][START_REF] Shan | A new polyphenol oxidase biosensor mediated by Azure B in laponite clay matrix[END_REF][START_REF] Shan | HRP wiring by redox active layered double hydroxides: application to the mediated H2O2 detection[END_REF][START_REF] Cosnier | A new strategy for the construction of amperometric dehydrogenase electrodes based on laponite gel-methylene blue polymer as the host matrix[END_REF][START_REF] Cosnier | Amperometric detection of pyridine nucleotide via immobilized viologen-accepting pyridine nucleotide oxidoreductase or immobilized diaphorase[END_REF][START_REF] Cosnier | An original electroenzymatic system: flavin reductase-riboflavin for the improvement of dehydrogenase-based biosensors.
Application to the amperometric detection of lactate[END_REF][START_REF] Shan | A composite poly azure B-Clay-enzyme sensor for the mediated electrochemical determination of phenols[END_REF][START_REF] Shan | Layered double hydroxides: an attractive material for electrochemical biosensor design[END_REF][100] The biosensors achieved by Cosnier's group were based on laponite-electrogenerated polymer composites. [START_REF] Cosnier | Mesoporous TiO2 films: new catalytic electrode materials for fabricating amperometric biosensors based on oxidases[END_REF][START_REF] Cosnier | A composite clay glucose biosensor based on an electrically connected HRP[END_REF][START_REF] Cosnier | A new strategy for the construction of amperometric dehydrogenase electrodes based on laponite gel-methylene blue polymer as the host matrix[END_REF][START_REF] Cosnier | Amperometric detection of pyridine nucleotide via immobilized viologen-accepting pyridine nucleotide oxidoreductase or immobilized diaphorase[END_REF][START_REF] Cosnier | An original electroenzymatic system: flavin reductase-riboflavin for the improvement of dehydrogenase-based biosensors. Application to the amperometric detection of lactate[END_REF] Hydrogenase 101 was immobilized by the clay sandwich method, and bovine serum albumin + GA [START_REF] Lei | Hydrogen peroxide sensor based on coimmobilized methylene green and horseradish peroxidase in the same montmorillonite-modified bovine serum albuminglutaraldehyde matrix on a glassy carbon electrode surface[END_REF] were used as enzyme cross-linking agents. The redox mediators methylene green, poly(3,4-dihydroxybenzaldehyde), and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonate) were co-immobilized within the clay matrix as electron shuttles between the redox center of HRP and the electrode.
[START_REF] Cosnier | A composite clay glucose biosensor based on an electrically connected HRP[END_REF][START_REF] Shan | HRP wiring by redox active layered double hydroxides: application to the mediated H2O2 detection[END_REF]102 A biosensor configuration based on the entrapment of diaphorase or dehydrogenase within a laponite gel containing methylene blue (MB) as an electropolymerized mediator for NADH oxidation was described. [START_REF] Cosnier | A new strategy for the construction of amperometric dehydrogenase electrodes based on laponite gel-methylene blue polymer as the host matrix[END_REF][START_REF] Cosnier | Amperometric detection of pyridine nucleotide via immobilized viologen-accepting pyridine nucleotide oxidoreductase or immobilized diaphorase[END_REF] The presence of polyMB in the biolayer allowed electron transfer communication between the enzymes and the electrode surface, and the polyMB/dehydrogenase/laponite-modified electrode was successfully applied for the electroenzymatic detection of lactate and ethanol via the mediated oxidation of NADH at 0 V. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] A comparative study described the properties of two different biosensors based on the immobilization of PPO within two different kinds of clay matrices: one cationic (laponite) and the other anionic (layered double hydroxides, LDHs). [START_REF] Shan | Layered double hydroxides: an attractive material for electrochemical biosensor design[END_REF] Due to the high permeability and to the structure and charge of the particles of the LDH-based biosensors, the PPO/[Zn-Al-Cl] LDH biosensor showed remarkable properties such as high sensitivity and good storage stability. 2

Directly coupled enzyme electrodes (third generation)

The enzymatic detection of hydrogen peroxide has been reported either at mediated HRP-CLMEs or by direct enzyme regeneration at CLMEs (third generation, Figure 3C).
[START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF] The synthesis of carbon nanotubes (CNTs) on clay mineral layers and the preparation of an H₂O₂ sensor based on a CNT/Nafion/Na-montmorillonite composite film were achieved, and the developed sensor showed high sensitivity and applicability. 103 Step-by-step self-assembly was employed to incorporate HRP into a laponite/chitosan-modified glassy carbon electrode (GCE). The self-assembled enzyme maintained its biological activity and showed excellent electrocatalytic performance for the reduction of H₂O₂, with a fast amperometric response (10 s), a broad linear range, good sensitivity, and a low detection limit. 104 The direct electron transfer of heme proteins (HRP, myoglobin, hemoglobin, cytochrome c) found its applications in sensors for the determination of H₂O₂, as well as nitrite, NO, and trichloroacetic acid. [105][106][107][108][109] The direct electron transfer of Mb was realized by immobilizing Mb onto an ionic liquid-clay composite film-modified GCE. The composite was biocompatible and promoted direct electron transfer between Mb and the electrode. The reduction of hydrogen peroxide demonstrated the biocatalytic activity of Mb in the composite film. [START_REF] Dai | Direct electrochemistry of myoglobin based on ionic liquid-clay composite films[END_REF] A nanocomposite matrix based on chitosan/laponite was employed to construct an amperometric glucose biosensor. GOX immobilized in this material maintained its activity well, since the use of GA was avoided.
110 Taking into consideration that the majority of amperometric glucose biosensors need stabilization membranes to prevent enzyme release and to improve selectivity, a comparative study between some of the most common membranes and two more innovative systems based on Ti and Pd hexacyanoferrate hydrogels was carried out on electrodes modified by a Ni/Al hydrotalcite (HT) electrochemically deposited on Pt surfaces, together with the GOX enzyme. The results showed that the Pd hexacyanoferrate hydrogel was the best system in terms of sensitivity and selectivity, and it allowed the determination of glucose levels in bovine serum samples in excellent agreement with the value declared in the analysis certificate. 111 Direct electron transfer between GOX and the underlying GCE could be achieved using colloidal laponite nanoparticles as an immobilization matrix. Owing to the decrease of the oxygen electrocatalytic signal, the laponite/GOX/GCE was successfully applied to reagentless glucose sensing at -0.45 V. The electrode exhibited a fast amperometric response (8 s), a low detection limit (1.0•10⁻⁵ M), and very good sensitivity and selectivity. 112 An amperometric biosensor for phenol determination was fabricated based on a chitosan/laponite nanocomposite matrix. The composite film enabled PPO immobilization on the surface of a GCE. The role of chitosan was to improve the analytical performance of the pure clay-modified bioelectrode. The biosensor had good affinity for its substrate, high sensitivity to catechol, and remarkable long-term storage stability (it retained 88% of the original activity after 60 days). 113 A comparison between four amperometric biosensors for phenol, prepared by immobilization of tyrosinase in organic and inorganic matrices, has also been reported. In this case, the enzyme was entrapped within two organic matrices (polyacrylamide microgels and polyvinylimidazole).
On the other hand, the enzyme was adsorbed on two inorganic matrices (laponite clay and a calcium phosphate cement called brushite) and subsequently cross-linked with GA. Phenolic compounds were detected in aqueous and organic media by direct electrochemical reduction of the enzymatic product, o-quinone, at -0.1 V versus the saturated calomel electrode (SCE). The authors attributed the large differences found in biosensor performance to the environment surrounding the enzyme and to the biomaterial layers used in the fabrication of the biosensor. The best results in the detection of monophenols and catechol in aqueous solution were achieved with the sensors based on inorganic matrices. 114 The immobilization of lactate oxidase on a GCE modified with laponite/chitosan hydrogels for the quantification of L-lactate in alcoholic beverages and dairy products was also described. The study used ferrocene-methanol as an artificial mediator and aimed to determine the best hydrogel composition from the analytical point of view. 115

Mesoporous silica materials and their applications in electrochemistry

Silica-based mesoporous materials have had a great impact on electrochemistry research in the past two decades, especially due to the tremendous development in the field of mesoporous materials. 116 Ordered mesoporous silicas (MPS) present attractive intrinsic features which have enabled their application in electrochemical science. Being the first reported class of mesoporous materials, ordered MPS are usually obtained by inorganic polymerization in the presence of a liquid-crystal-forming template (an ionic or non-ionic molecular surfactant or a block copolymer). These materials are in fact amorphous solids with cylindrical mesopores ranging from 20 to more than 100 Å in diameter, spatially organized into periodic arrays that often mimic the liquid crystalline phases exhibited by the templates. [117][118][119][120] The removal of the hybrid mesophases can be achieved by calcination or extraction.
This process gives rise to stable mesoporous materials with extremely high surface area (up to 1400 m² g⁻¹), mesopore volume greater than 0.7 ml g⁻¹, and narrow pore size distribution. 121 In the large field of chemically modified electrodes, researchers have recently included sol-gel materials [121][122][123][124][125] as inorganic modifiers (along with clays [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]126 or zeolites 127,128 ) as a result of their need to couple the intrinsic properties of a suitable modifier to a particular redox process. More recently, owing to developments in nanoscience and nanotechnology, nanomaterials have found applications in electrochemistry, in particular to build nanostructured electrodes with improved performance. [129][130][131][132] The most important features that recommend nanomaterials for applications in electrochemistry are: (i) large surface areas (a high number of surface-active sites, an ideal support for the immobilization of suitable reagents); (ii) a widely open and interconnected porous structure (fast mass transport); (iii) good conductivity and intrinsic electrocatalytic properties (for noble metals and carbons); and (iv) high mechanical stability owing to their multidimensional structure. 116 The two approaches to generate nanoarchitectures on electrode surfaces are: (i) the assembly of one-dimensional nanostructures (nanoparticles, nanowires, or nanorods) into functional 2D or 3D architectures 133 or nanoparticulate films, 134 and (ii) the preparation of single-phase continuous ordered and porous mesostructures from supramolecular template assemblies and their integration into electrochemical devices.
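As a quick plausibility check (simple geometry, not taken from the cited references), the textural values quoted above are mutually consistent with a cylindrical-pore model, for which the specific surface area is S = 4·Vp/d (pore volume per gram divided by pore diameter):

```python
# Consistency check of quoted textural values with a cylindrical-pore
# model: for cylinders, S = 4 * Vp / d (surface area per gram from pore
# volume per gram and pore diameter). Purely illustrative arithmetic.

def cylinder_surface_area(pore_volume_ml_g, pore_diameter_nm):
    """Specific surface area (m^2/g) of cylindrical pores."""
    v = pore_volume_ml_g * 1e-6        # ml/g -> m^3/g
    d = pore_diameter_nm * 1e-9        # nm -> m
    return 4.0 * v / d

# Vp = 0.7 ml/g with 2 nm pores reproduces the ~1400 m^2/g quoted above
print(round(cylinder_surface_area(0.7, 2.0)), "m2/g")   # -> 1400 m2/g
```

With 10 nm pores the same pore volume gives only about 280 m² g⁻¹, which is why the highest surface areas are associated with the smallest mesopores.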
116

The sol-gel process

Silica-based materials represent robust inorganic solids displaying a high specific surface area and a three-dimensional structure made of highly open spaces interconnected via SiO₄ tetrahedra, generating highly porous structures. 116 Generally, the sol-gel process involves the following steps:

• Hydrolysis
The preparation of a silica glass begins with an appropriate alkoxide, such as Si(OR)₄ (R = CH₃, C₂H₅, or C₃H₇), which is mixed with water and a mutual solvent to form a solution. Hydrolysis leads to the formation of silanol groups (Si-OH):

Si(OR)₄ + H₂O → HO-Si(OR)₃ + ROH

The presence of H₃O⁺ in the solution increases the rate of the hydrolysis reaction. 121

• Condensation
In a condensation reaction, two partially hydrolyzed molecules can link together by forming siloxane bonds (Si-O-Si). This type of reaction can continue to build larger and larger silicon-containing molecules (through linkage of additional Si-OH groups) and eventually results in a SiO₂ network. The H₂O (or alcohol) expelled by the reaction remains in the pores of the network. When sufficient interconnected Si-O-Si bonds are formed in a region, they respond cooperatively as colloidal (submicrometer) particles, i.e., a sol. 121 The gel morphology is influenced by temperature, the concentrations of each species (attention focuses on the ratio R = [H₂O]/[Si(OR)₄]), and especially acidity:
- Acid catalysis generally produces weakly cross-linked gels which easily compact under drying conditions, yielding low-porosity microporous (smaller than 2 nm) xerogel structures;
- Conditions of neutral to basic pH result in relatively mesoporous xerogels after drying, as rigid clusters a few nanometers across pack to form mesopores. The clusters themselves may be microporous.
- Under some conditions, base-catalyzed and two-step acid-base catalyzed gels (initial polymerization under acidic conditions and further gelation under basic conditions) exhibit hierarchical structure and complex network topology. 121

The condensation reaction can be written as:

Si-OH + HO-Si → Si-O-Si + H₂O

• Aging
Gel aging is an extension of the gelation step in which the gel network is reinforced through further polymerization, possibly under different temperature and solvent conditions. During aging, polycondensation continues along with localized dissolution and reprecipitation of the gel network, which increases the thickness of interparticle necks and decreases the porosity. The strength of the gel thereby increases with aging. An aged gel must develop sufficient strength to resist cracking during drying. 116

• Drying
The gel drying process consists in the removal of water from the interconnected pore network, with simultaneous collapse of the gel structure, under conditions of constant temperature, pressure, and humidity. Large capillary stresses can develop during drying when the pores are small (<20 nm). These stresses will cause the gels to crack catastrophically unless the drying process is controlled: by decreasing the liquid surface energy through the addition of surfactants or the elimination of very small pores; by hypercritical evaporation, which avoids the solid-liquid interface; or by obtaining monodisperse pore sizes through control of the rates of hydrolysis and condensation. 121

• Electrochemically-assisted generation of silica films
Electrochemistry was recently proven to be attractive for synthesizing ordered mesoporous (and macroporous) deposits on electrode surfaces. The principle is based on the electrochemical manipulation of the pH at the electrode surface, thereby affecting the kinetics associated with the sol-gel process.
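As a rough order-of-magnitude illustration of this principle (the current density, deposition time, and electrode area below are assumed values, not taken from a specific study), Faraday's law gives the amount of hydroxide electrogenerated at the cathode:

```python
# Rough estimate of OH- electrogenerated at a cathode via Faraday's law:
# n(OH-) = j * t * A / F, assuming one electron per OH- produced
# (e.g., 2H2O + 2e- -> H2 + 2OH-). Values below are illustrative only.

F = 96485.0          # Faraday constant, C mol^-1

def moles_oh_generated(current_density, time, area):
    """current_density in A cm^-2, time in s, area in cm^2 -> mol OH-."""
    charge = current_density * time * area   # total charge passed, C
    return charge / F                        # 1 e- per OH- assumed

# Illustrative deposition: 0.1 mA cm^-2 for 30 s on a 1 cm^2 electrode
n_oh = moles_oh_generated(1e-4, 30.0, 1.0)
print(f"{n_oh:.2e} mol OH- generated")       # ~3.1e-8 mol
```

The actual interfacial pH also depends on buffering by the sol and on diffusion of OH⁻ away from the surface, so this is only an upper-bound estimate of the base available to catalyze gelation.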
The electrode is immersed in a stable silica sol (mildly acidic medium, pH 3-4) and a negative potential is applied to increase the pH locally at the electrode/solution interface, thereby inducing the polycondensation of the silica precursors only on the electrode surface; this makes the process suitable for depositing thin films on non-planar surfaces (Figure 4). 121

Silica and silica-based organic-inorganic hybrids

Well-ordered MPS have monodisperse pore sizes (typically between 2 and 10 nm), and various ionic or non-ionic surfactants can be employed for the preparation of these materials. [135][136][137] It is well known that the morphology of mesoporous silica materials depends on the synthesis conditions. "Evaporation-Induced Self-Assembly" (EISA) is the most common method employed to obtain mesoporous silica thin films with controlled mesostructure and pore size. 138,139 The EISA process involves a dilute sol, and the film forms at the electrode surface through evaporation of the solvent after deposition of the sol (e.g., by dip-coating, spin-coating, or dropping techniques), the solid template being extracted afterwards. 116 The sol solution consists of inorganic precursors, alone or mixed with organically-modified metal oxides, and a surfactant, usually dissolved in a water-ethanol mixture. The evaporation of the volatile components takes place at the air/film interface, and the concomitant self-assembly and polycondensation of the surfactant and the precursors give rise to a homogeneous mesostructured deposit (with typical thicknesses in the range of 50-700 nm).
116 If, in past decades, the generation of ordered porous films on solid electrode surfaces led to poorly permeable deposits (probably due to unfavourable pore orientation) and suffered from crack formation arising from surfactant extraction 140 , the ability to precisely control the structural arrangement of silica films on the mesoscale has recently led to more accurately engineered mesoporous film electrodes with continuous structural order over wide areas and variable permeation properties depending on the film structure. 141 Mesostructures with different symmetries (cubic, hexagonal, double gyroid, rhombohedral, etc.) could be obtained by controlling the sol composition, the nature of the template, and the post-treatment temperature and humidity level. 139 The use of hard templates (nano- or microbeads) enabled the generation of ordered macroporous thin films 142 together with the achievement of multimodal hierarchical porosity. 143 Due to their defined multiscale porous networks with adjustable pore size and connectivity, high surface area, and accessibility, these rigid three-dimensional matrices can be applied on solid supports (like electrode surfaces), being very promising for effective transport reactions at electrode/solution interfaces. 116 Organically-functionalized silica-based materials are of interest in electrochemistry firstly because of their highly porous and regularly ordered 3D structure, which ensures good accessibility and fast mass transport to the active centres. 116 This can be useful to improve sensitivity in preconcentration electroanalysis (voltammetric detection subsequent to open-circuit accumulation) and to enable the immobilization of a great variety of organo-functional groups, improving at the same time the selectivity of the recognition event.
Secondly, due to their redox-active moieties, these materials should be able to induce intra-silica electron transfer chains, or they can be applied in electrocatalysis as electron shuttles or mediators. 116 Thirdly, these materials can also find applications in the field of electrochemical biosensors, considering the possibility of nanobioencapsulation (e.g., enzyme immobilization [144][145][146][147] ), which can lead to the development of integrated systems combining molecular recognition, catalysis, and signal transduction. 116

Ordered and oriented mesoporous sol-gel films

It was recently shown that electrochemistry makes it possible to prepare ordered and oriented mesoporous silica thin films, by the so-called electro-assisted self-assembly (EASA) method. 148,149 In this method, the formation of surfactant assemblies under potential control with concomitant growth of a templated inorganic film also takes place but, unlike in EISA, the electrogenerated species (e.g., OH⁻) do not serve to precipitate a metal hydroxide; instead, they act as catalysts to gelify a sol onto the electrode surface. 116 The mechanism of EASA implies a suitable cathodic potential applied to an electrode immersed in a hydrolyzed sol solution, in order to locally generate OH⁻ species at the electrode/solution interface, which then induces the polycondensation of the silane precursors and the growth of silica or organically-modified silica films onto the electrode surface. [150][151][152][153] When a cationic surfactant (i.e., cetyltrimethylammonium bromide, CTAB) is present in the sol, the obtained configuration is that of hexagonally-packed mesopore channels growing perpendicularly to the electrode surface, which is the result of the electrochemically-driven cooperative self-assembly of surfactant micelles with simultaneous silica formation.
154 The main advantages of EASA are the possibility of obtaining homogeneous deposits over wide areas (cm²), even on non-flat surfaces (with thicknesses typically in the 50-200 nm range), and of growing vertically-aligned mesoporous silica films on various electrode materials such as glassy carbon, platinum, gold, copper, or ITO. This type of orientation, with pores accessible from the surface, is expected to enhance mass transport rates through the film and hence improve the sensitivity of voltammetric analysis. 116 A short deposition time is required in order to avoid the formation of aggregates. 149 Insulating supports can also be employed to prepare these films using higher electric fields. 155

Mass transport in mesoporous (organo)silica particles

The regular 3D structure consisting of mesochannels of monodisperse dimensions favors fast mass transport, in contrast to diffusion in the non-ordered silica gel homologues. 116 This could be demonstrated through electrochemical methods, by means of which the kinetics associated with mass transfer reactions between a solution and solid particles in suspension can be monitored in situ and in real time. 156 Thus, by potentiometric pH monitoring of aqueous suspensions of mesoporous silica particles grafted with aminopropyl groups, the kinetics associated with the protonation of the amine groups located in the material could be determined. 156,157 This approach aimed to study the variation of the apparent diffusion coefficients (Dapp) as a function of the reaction progress by assuming the diffusion of protons (and associated counter-anions) inside the functionalized particles to be the rate-determining step, and fitting the "proton consumption versus time" plot to a spherical diffusion model (silica particles have been considered as spherical in a first approximation).
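The fractional uptake for diffusion into a sphere of radius r is classically described by Crank's series solution, M(t)/M∞ = 1 - (6/π²) · Σ (1/n²)·exp(-n²π²·Dapp·t/r²). A minimal sketch of evaluating this model (the Dapp value and particle radius below are assumed, purely for illustration; fitting experimental uptake data to this curve yields Dapp):

```python
import math

# Crank's solution for fractional uptake by a sphere of radius r:
# M(t)/M_inf = 1 - (6/pi^2) * sum_{n>=1} exp(-n^2*pi^2*D*t/r^2) / n^2
# Fitting experimental "consumption vs time" data to this curve
# yields the apparent diffusion coefficient D_app.

def fractional_uptake(t, d_app, radius, n_terms=200):
    """Fraction of the final uptake reached at time t (seconds)."""
    if t <= 0:
        return 0.0
    s = sum(math.exp(-n**2 * math.pi**2 * d_app * t / radius**2) / n**2
            for n in range(1, n_terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s

# Illustrative values (assumed): D_app = 1e-12 cm^2/s, r = 0.5 um = 5e-5 cm
d_app, r = 1e-12, 5e-5
for t in (10, 100, 1000):                 # seconds
    print(t, round(fractional_uptake(t, d_app, r), 3))
```

A decrease of the fitted Dapp with reaction progress, as reported in the study above, then reflects the slowing of proton transport as the channel walls become charged.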
157 This study was carried out on two amine-functionalized mesoporous silica samples and one non-ordered silica gel grafted with the same aminopropyl groups. It was concluded that mass transfer was faster in the well-ordered mesoporous samples than in the non-ordered homologues, but only at low protonation levels; Dapp values decreased dramatically in the mesostructured materials due to major electrostatic shielding effects when charged moieties were generated on the internal surface of the regular mesochannels. 157

2.4 Selected applications of mesoporous silica materials in electrochemistry

Electroanalysis, sensors and biosensors

Applications in this field include mesoporous silica and organically-modified silicates in electroanalysis, 128 mesoporous silica-based materials for sensing 158 or biosensing, 159 and templated porous film electrodes in electrochemical analysis. 160,161

Direct detection - electrocatalysis

Mesoporous silica is a non-conductive metal oxide which can be used as such or functionalized with appropriate catalysts. 116 In this way, mesoporous silica-based materials were employed to host mediators, being dispersed in carbon paste electrodes for electrocatalytic purposes. 116 The immobilization of polyoxometalates or Prussian Blue derivatives has been achieved in (protonated) amine-functionalized mesoporous silica due to favorable electrostatic interactions. [162][163][164] The amperometric detection of NO₂⁻ 164 and ClO₃⁻/BrO₃⁻ 163 was thereby achieved. A redox polymer based on the poly(4-vinylpyridine) complex of [Os(bpy)₂Cl]⁺ quaternized with methyl iodide was immobilized onto a mesoporous silica for the electrocatalytic reduction of nitrite ions.
165 A more sophisticated approach is that of zinc phthalocyanine adsorbed on Ag/Au noble metal nanoparticles (NPs) anchored onto thiol-functionalized MCM-41 (MCM = Mobil Composition of Matter), which exhibited synergistic effects for the electrocatalytic reduction of molecular oxygen. 166 2.4.1.2 Preconcentration electroanalysis Due to their sorption properties, ordered mesoporous silicas were employed for the preconcentration electroanalysis of metal cations, 167 nitroaromatic compounds, 168 bisphenol A, 169 ascorbic and uric acids and xanthine, 170 nitro- and aminophenol derivatives, 171,172 and some drugs. 173,174 Chlorophenol was also successfully accumulated onto mesoporous titania prior to sensitive detection. 175 Heavy metal species can accumulate on mesoporous silica by adsorption (e.g., Hg(II)). 176 Even so, the adsorbent properties of this material can be significantly improved by functionalization with suitable organic groups. Mesoporous organosilicas have thus found many applications as nanoengineered adsorbents for pollutant removal, 177 but also as electrode modifiers for preconcentration electroanalysis. 128 Their main advantages are, firstly, the possibility of tuning the selectivity of the recognition event (and therefore the selectivity of the detection) by an appropriate selection of the organo-functional group and, secondly, the well-ordered and rigid mesostructure ensuring fast transport of reactants (and thus high preconcentration efficiency and good sensitivity for the sensor). 116 The sensitivity was significantly enhanced when using an electrode modified with a mesostructured adsorbent instead of the non-ordered homologue, when comparing Hg(II) detection subsequent to open-circuit accumulation at thiol-functionalized silica materials (Figure 5). This also demonstrated the faster mass transport processes in the mesostructured silica. 178 Many early works focused on simple functional groups such as thiol [179][180][181] or amine groups.
182 Several other organo-functional groups have afterwards been used when trying to improve the selectivity of the accumulation step, such as quaternary ammonium, 183 sulfonate, 184 glycinylurea, 185 salicylamide, 186 carnosine, 187 acetamide phosphonic acid, 188 benzothiazolethiol, 189 acetylacetone, 190 cyclam derivatives, 191,192 β-cyclodextrin, 193 5-mercapto-1-methyltetrazole, 194,195 or ionic liquids. 196 These materials are likely to accumulate analytes via complexation and ion exchange. 116 Electrodes modified with mesoporous organic-inorganic hybrid silica have been employed to detect several analytes, such as Ag(I), 197 Cu(II), 192 Cd(II), 188 Hg(II), 180,189 Pb(II), 179,194 Eu(III), 186 U(VI). 198 The procedure generally involves first the preconcentration of the analyte in the mesostructured hybrid material, usually followed by its desorption in the detection medium, and subsequent quantitative electrochemical detection. 116 The detection of cations in mixtures (e.g., {Cd(II) + Pb(II) + Cu(II)} 181,190 or {Hg(II) + Cd(II) + Pb(II) + Cu(II)} 196 ) was recently reported: the cations were accumulated together before being selectively detected via voltammetric signals located at distinct potential values. Selectivity was sometimes achieved by the preferential recognition of the organo-functional group for the target metal in the presence of potentially-interfering species, which can be tuned by molecular engineering of the immobilized ligands. 116 Good selectivity for Cu(II) 191 was thus obtained by using mesoporous silica bearing cyclam groups, while the addition of acetamide arms on the cyclam centres resulted in shifting the selectivity towards Pb(II).
192 Both film and bulk composite electrode configurations were tested for preconcentration/voltammetric detection. The response time of film electrodes turned out to be often slower, especially at low analyte concentration, because accumulation starts on the upper part of the film contacting the solution; when operating in dilute solutions, only very small amounts of species are accumulated, some of them being lost after desorption in the detection medium and therefore not detectable on the electrode surface. 195 But when very thin films with mesopore channels oriented normal to the underlying electrode are employed, this effect is minimized and the recognition event is accelerated. 116 It should be noted that, unlike the non-ordered silica gels, ordered mesoporous adsorbents can give rise to more sensitive detection in preconcentration electroanalysis, but only when the rate-determining step is the diffusion of the analyte in the material, not when the kinetics are dominated by the complexation reaction itself. 199 Organic species like bisphenol A 200 or dihydroxybenzene isomers 201 could also be determined using functionalized mesoporous silica, after accumulation at aminopropyl-grafted SBA-15 (SBA = Santa Barbara Amorphous). Selectivity can also be improved by rejecting interferences with organically-modified mesoporous silica films acting as permselective barriers. This was demonstrated for aminopropyl-functionalized films exhibiting ion-permselective pH-switchable behavior. [202][203][204] 2.4.1.3 Electrochemical biosensors and related devices An electrochemical biosensor requires an effective and durable immobilization of large amounts of biomolecules in an active form, and a favorable environment for efficient electron transfer reactions.
Due to their attractive properties (large surface areas for immobilization of reactants and biocomponents, interconnected pore systems ensuring fast accessibility to active centres, intrinsic electrocatalytic properties or support for electrocatalysts, possibility of functionalization), silica-based mesoporous materials have been intensively applied in this field. 116 • Immobilization of heme proteins, direct electrochemistry and electrocatalysis Direct electrochemistry could be observed for heme proteins, such as cytochrome c, 205,206 hemoglobin, 207,208 or myoglobin, 209,210 when immobilized in nonconductive mesoporous silica particles or continuous mesoporous silica layers deposited as thin films on electrode surfaces. The well-defined voltammetric responses obtained in these cases indicate that the proteins retain their biological activity once immobilized within the mesoporous material. The amounts of immobilized proteins are influenced by the pore structure and size of the host material (e.g., hemoglobin is almost excluded from MCM-41 while fitting well inside SBA-15 211 ) and, at similar loadings, the interconnected 3D or bimodal mesostructures resulted in higher current responses. 211,212 Additives were used in some cases to improve the electron transfer reactions (e.g., ionic liquids 213 ), or to prevent leakage of the protein out of the material (e.g., chitosan 212 ). As a result of their peroxidase activity, the immobilized heme proteins were sensitive to the presence of hydrogen peroxide, showing an electrocatalytic response towards it. 205,207,209,210,212,213 The electrode configuration set the sensitivity of the device, while the addition of gold nanoparticles or CdTe quantum dots was reported to enhance the biosensor response.
214,215 • Enzyme immobilization and electrochemical biosensors Mesoporous silica-based materials are suitable immobilization matrices for enzymes, as they keep their biological activity, [144][145][146][147] and for biosensor construction. 159 First-generation biosensors involved the immobilization of the enzyme on the material, and the electrode was used to detect the enzymatically-generated products (e.g., GOX or HRP entrapped in mesoporous silica particles deposited as thin films on GCE [216][217][218] ). Larger pore sizes were necessary for the nonconductive silica matrices, to ensure fast mass transport processes. 116 To overcome the conductivity limitation, gold nanoparticles were incorporated, 219 conducting polymers were used, 220 or silica-carbon composite nanostructures were formed. 216 A hydrogen peroxide biosensor was also constructed using a purely organic ordered mesoporous polyaniline film, fabricated by electrodeposition from a lyotropic liquid crystalline phase, as an immobilization matrix for HRP. 221 The development of bienzymatic systems based on the co-immobilization of two enzymes (e.g., tyrosinase and HRP 222 or 2-hydroxybiphenyl 3-monooxygenase and GOX 223 ) suggested that the confined environment of the mesoporous silica host preserves the necessary interactions. • Electrochemical immunosensors and aptasensors Due to their hosting properties, MPS were employed in fabricating ultrasensitive electrochemical immunosensors. Two different approaches can be distinguished for the electrochemical detection of the antigen-antibody recognition event. The first one refers to the use of mesoporous silica nanoparticles (in which the enzyme, e.g., HRP, a mediator and a first antibody have been immobilized) as nanolabels.
224 They are expected to bind to an electrode surface (bearing a second antibody) in the presence of the analyte (the antigen, in a sandwich configuration between the two complementary antibodies), and the resulting current response is directly proportional to the amount of nanolabels on the electrode surface, and thus to the analyte concentration. 224 By improving the electronic conductivity of the nanolabel and of the electrode surface, better performance of the device could be achieved. [225][226][227] The second approach deals with an antibody still immobilized on the mesoporous material, but this time coated on the electrode surface, while the mediator is in the solution; when the immunoconjugates are formed, the electrode becomes progressively blocked, and the electrochemical response decreases proportionally to the target analyte concentration. [228][229][230] More recently, some new kinds of label-free aptasensors were described, based on graphene-mesoporous silica-gold NP hybrids as an enhanced element of the sensing platform [231][232][233] for the detection of ATP 231 or DNA. 232 Other electrochemical sensors Due to their sorption properties, mesoporous silica could be exploited in the development of electrochemiluminescence sensors. 234 Some examples dealing with electrochemical detection methods (i.e., conductivity changes, surface photovoltage measurements) include mesoporous silica for sensing alcohol vapours 235 or detecting humidity changes, 236 and tin-doped silica for NO 2 sensing. 237 Energy conversion and storage Nanoscale engineered materials have become an important means of designing various devices for energy conversion and storage. 116 Taking into consideration the increasing need for improved systems like batteries, supercapacitors, fuel cells, or dye-sensitized solar cells, finding innovative electrode materials with architecturally tailored nanostructures is an important focus in the research field.
116 Therefore, proton-conducting organic-inorganic hybrids, like mesoporous silica containing sulfonic, phosphonic, or carboxylic acid groups, can be used as membranes for fuel cells, because they are likely to exhibit good thermostability and efficient proton conductivity at high temperature and low humidity. 238 PERSONAL CONTRIBUTION 1 Clays -physico-chemical and structural characterization Introduction "Clay" is a collective term for a large group of sedimentary rocks with clay minerals as main components. They are generally fine-grained crystalline hydrated aluminum silicates and they exhibit plasticity when mixed with water in certain proportions. Bentonites are clay materials and represent secondary rocks formed from the devitrification, hydration, and hydrolysis of other underlying rocks (e.g. volcanic tuffs, pegmatites, etc.). An important feature of all bentonites is their high content of montmorillonite with its cryptocrystalline aggregate structure, to which small fractions of quartz, feldspar, volcanic glass, amphibole, pyroxene, chlorite, limonite, halloysite, etc. are added. 239 The name "montmorillonite" was assigned to the clay mineral with the theoretical formula Al 2 (Si 4 O 10 )(OH) 2 , which has a relatively high content of water molecules adsorbed between its layers. The most commonly accepted structure for montmorillonite minerals is similar to that of pyrophyllite, from which it differs only in the distribution of constitutive ions and in the overlapping of multiple sheets. Therefore, montmorillonite consists of two tetrahedral silica sheets and a central octahedral aluminum sheet. An invariable feature of the montmorillonite structure is that water and other polar organic molecules can enter the interlayer space, resulting in an expansion along the c axis.
The dimensions along the c axis in montmorillonite are not fixed, but vary from 9.6 Å, when there are no polar molecules in the interlayer space, up to 15 Å in the presence of polar molecules (Figure 6). The thickness of the water layer between the structural units depends on the nature of the adsorbed cation and on the water vapor pressure of the working environment. 240 Romanian clays are cationic clays, with negatively charged alumino-silicate layers. The clays presented in this thesis are bentonites obtained from the Răzoare and Valea Chioarului deposits in operation (Maramureş County, Romania). In order to compare their electrochemical behavior with their internal structure and properties, the structural characterization of the above-mentioned clays was carried out. Materials and methods Physico-chemical composition studies, as well as X-ray diffraction (XRD) and transmission electron microscopy (TEM), established the properties of the two clays. For the structural characterization of Răzoare and Valea Chioarului bentonites, their impurities were removed, in order to obtain a higher content of montmorillonite in the resulting samples. The refinement was achieved by washing and decantation, yielding a more homogeneous product, rich in montmorillonite, the main component of their structure. All of the following characterization procedures and analytical experiments are based on these refined clay samples. Separation into different granulometric particle sizes was performed by sedimentation, decantation, centrifugation, and ultracentrifugation following the procedures reported in the literature 242 , according to Stokes' law. Several fractions, below 50 µm, 20 µm, 8 µm, 5 µm, 2 µm, 1 µm, and below 0.2 µm, were separated and characterized. The chemical composition, the ion exchange capacity, the surface area, and the structural characteristics, e.g. particle size and shape, of each separated fraction were determined by XRD and TEM.
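The Stokes' law separation mentioned above can be illustrated with a short calculation of settling velocities and times. The densities, viscosity, and 10 cm settling height below are illustrative assumptions (the thesis does not report the actual sedimentation parameters); the calculation merely shows why gravity sedimentation suffices for the coarse fractions while the sub-micron fractions require (ultra)centrifugation:

```python
G = 9.81          # gravitational acceleration, m/s^2
RHO_P = 2650.0    # assumed particle density, kg/m^3 (quartz-like)
RHO_F = 1000.0    # water density, kg/m^3
MU = 1.0e-3       # water viscosity at ~20 C, Pa*s

def stokes_velocity(d):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m)."""
    return (RHO_P - RHO_F) * G * d**2 / (18 * MU)

def settling_time(d, height):
    """Time (s) for a particle of diameter d to settle through `height` (m)."""
    return height / stokes_velocity(d)

# Fractions used in the thesis, settling through an assumed 10 cm column
for d_um in (50, 20, 8, 5, 2, 1, 0.2):
    t = settling_time(d_um * 1e-6, 0.10)
    print(f"{d_um:5} um -> {t / 3600:.2f} h")
```

Since the settling time scales as 1/d², a 0.2 µm particle takes four orders of magnitude longer to settle than a 20 µm one, which is why the finest fractions were isolated by centrifugation and ultracentrifugation.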
Chemical composition was determined by gravimetry (Si), complexometry (Fe, Al, Ca, Mg), colorimetry (Ti at 436 nm), and flame photometry (Na at 589 nm, K at 768 nm). Ion exchange capacity was obtained by treating the clay sample with an ammonium chloride solution, followed by filtration and determination of Ca and Mg (by complexometry) and of Na and K (by flame photometry). For TEM studies, an aqueous suspension of clay was deposited on a thin layer of collodion, followed by evaporation. Examination of the samples was achieved using a JEOL JEM 1010 microscope. Diffractometry was performed on fine powder material using a Shimadzu XRD 6000 N X-ray diffractometer, equipped with a monochromator and a position-sensitive detector. The X-ray source was a Cu anode (40 kV, 30 mA). The diffractograms were recorded in the 0-90° 2θ range, with a 0.02° step size and a collection time of 0.2 s per point. Surface area analyses were recorded with a Thermo Q Finnigan analyzer type SURF 9600 by the single-point method, on the Răzoare clay fraction below 20 µm and on the Valea Chioarului clay sample below 50 µm, respectively, without any previous thermal treatment. 239 Thermal behavior was also studied using differential thermal analysis performed with a MOM derivatograph, type 1500D, at a heating rate of 10 °C/min, in the range of 20 °C to 1000 °C. 243,244 IR analyses were recorded with a Bruker FTIR spectrometer, on the Răzoare clay fraction below 20 µm and the Valea Chioarului clay fraction below 0.2 µm, in a KBr matrix, from 4000 to 400 cm -1 . 243 Results and discussions Elemental analyses confirmed the chemical composition of the clays and revealed the differences between the two (Table 1). The higher SiO 2 quantity in the composition of Răzoare bentonite indicates the presence of an increased quantity of impurities (e.g., cristobalite). TEM images of Răzoare (Figure 7A) and Valea Chioarului clays (Figure 7C) at higher magnification showed a diffusive, irregular, and opalescent surface.
The very finely dispersed montmorillonite (fraction below 0.2 µm) formed extremely thin lamellar layers with nanometer dimensions (Figure 7B). The XRD diffractogram obtained for Valea Chioarului clay using a sample with particle size below 0.2 μm (Figure 8B) showed a high content of montmorillonite (with its characteristic peaks at 2θ: 6. The presence of montmorillonite was also proved by comparing the XRD data of the below 0.2 μm sample before and after treatment with ethylene glycol. After adsorption of ethylene glycol, an increase of the reticular distance along the c axis from 12.72 Å to 17.18 Å was noticed, corresponding to the peaks at 2θ = 6.9434° and 5.137°, respectively, characteristic for montmorillonite. The XRD patterns (see Figure 8A and 8B) illustrate the fact that the two investigated samples are multiple-phase bentonite materials containing mainly hexagonal montmorillonite (Na 0.3 Al 2 Si 4 O 16 H 10 ) and cristobalite (SiO 2 ) crystalline phases. The fact that the intense peak that appears in the XRD pattern at 2θ = 7.9° (d = 1.23 nm) for the montmorillonite phase in the X1 sample is shifted to 2θ = 8.2° (d = 1.21 nm) for the R1 sample (where it appears less intense) seems to indicate that the layered montmorillonite structure in Răzoare bentonite is largely deformed by different impurities. The value of the specific surface of the below 20 μm sample of Răzoare clay was 50 m 2 /g. Due to the aggregation effect which occurred during the drying process of the finely granulated clay, the specific surface of the Valea Chioarului clay was determined on the sample below 50 μm. The obtained value was 190.86 m 2 /g. The ion exchange capacity of the analyzed clays was determined by replacing the compensatory ions with NH 4 + ions, followed by their quantitative determination. Therefore, the ion exchange capacity of Răzoare clay was 68.32 mE/100g, while for Valea Chioarului clay it was estimated at 78.03 mE/100g.
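The d-spacings reported above for the ethylene glycol swelling test can be checked with Bragg's law, d = λ/(2 sin θ), using the Cu Kα wavelength of the diffractometer (λ = 1.5406 Å is the standard Cu Kα1 value, assumed here). A minimal sketch:

```python
import math

WL_CU_KA = 1.5406  # Cu K-alpha wavelength, angstroms (assumed Kalpha1 value)

def d_spacing(two_theta_deg, wl=WL_CU_KA):
    """Interplanar distance d (angstrom) from Bragg's law: d = lambda / (2 sin theta)."""
    return wl / (2 * math.sin(math.radians(two_theta_deg / 2)))

print(f"before glycolation: d = {d_spacing(6.9434):.2f} A")  # ~12.72 A
print(f"after glycolation:  d = {d_spacing(5.137):.2f} A")   # ~17.19 A
```

The computed values reproduce the reported 12.72 Å and 17.18 Å spacings, confirming the interlayer expansion of montmorillonite upon ethylene glycol intercalation.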
239 In order to complete the structural characterization of the Romanian clays, several experiments were performed. The FTIR studies and the thermodifferential analysis showed the presence of a high percentage of montmorillonite. Thus, the FTIR spectrum of Răzoare clay (Figure 9A) revealed the characteristic groups of montmorillonite. The broad band at 3447 cm -1 , with its specific peak at 3620 cm -1 , was attributed to the stretching vibration of the hydroxyl group. The bands at 1000-1200 cm -1 and 466 cm -1 were produced by the Si-O stretching vibration. The bands at 793 cm -1 and 519 cm -1 were assigned to the Si-O-Al group. 243,246 The FTIR spectrum of Valea Chioarului clay (Figure 9B) presented a broad band at 3446 cm -1 with the specific peak at 3625 cm -1 attributed to the stretching vibration of the hydroxyl group. 244 The broad band at 1000-1200 cm -1 was assigned to the Si-O stretching vibration and the band at 520 cm -1 to the Si-O-Al group, all attributed to the characteristic groups of montmorillonite. In both cases, the band at 1637 cm -1 was assigned to the bending vibration of the H-O-H group. 243,246 (Table: lattice parameters a, b, c [Å] and Rp factor for the montmorillonite phase in samples X1 and R1; for X1, a = b = 5.17 Å.) Regarding the thermodifferential analysis (Figure 10) of Răzoare and Valea Chioarului clays, a good superposition with the characteristic thermodifferential curves of montmorillonite was noticed, confirming that montmorillonite is the main constituent of both Romanian clays. The thermogravimetric (TGA) and thermodifferential (TDA) analysis of Răzoare clay (Figure 10A) showed the loss of the adsorbed water between 60 °C and 200 °C, accompanied by a strong endothermic effect at 115 °C. The elimination of hydroxyl groups from the mineral network, with an endothermic effect and a decrease in mass, occurs between 600 °C and 750 °C. The last endothermic effect, at 900 °C, immediately followed by an exothermic one, showed a modification of the crystal structure to a lower energetic state.
The first pronounced endothermic effect in the TDA curve (Figure 10B) appeared between 60 °C and 250 °C due to the water loss. A substantial decrease in the clay mass occurred in the meantime, according to the TG curve. Dehydration was a reversible process: after heating to 250 °C the clay was still able to readsorb water molecules, which could then be eliminated again upon a new temperature exposure. 243,247,248 The second endothermic effect appeared between 600 °C and 750 °C, followed by a loss in the clay mass due to the elimination of hydroxyl groups. A third endothermic effect occurred at 880 °C, immediately followed by an exothermic effect at 900 °C due to the crystal structure modifications. 243,247,248 Conclusions Two Romanian clays from the Răzoare and Valea Chioarului deposits (Maramureş County) were refined and characterized by X-ray diffraction, transmission electron microscopy, FTIR, and thermogravimetric and thermodifferential analysis. The ion exchange capacity of the purified clays was determined by replacing the compensatory ions with NH 4 + ions. 239,243 The clay chemical composition was confirmed by elemental analyses and the characteristic formulas for the two clays could be calculated. The diffusive, irregular, and opalescent surface of Răzoare and Valea Chioarului clays was evidenced by TEM images, and the extremely thin lamellar layers of montmorillonite with nanometer dimensions were also described. The XRD diffractograms of both clays showed the characteristic diffraction peaks of montmorillonite. The presence of other minerals like quartz and feldspar was also evidenced. The specific surface for Răzoare clay was 50 m 2 /g, while the value obtained for Valea Chioarului clay was 190.86 m 2 /g. The ion exchange capacity of Răzoare clay was 68.32 mE/100g, while for Valea Chioarului clay it was estimated at 78.03 mE/100g.
FTIR spectra of both clays revealed the characteristic peaks of montmorillonite, and the good superposition of the thermodifferential characteristic curves confirmed its presence in both analyzed clays. In conclusion, all the characterization methods employed revealed montmorillonite as the main component of the Răzoare and Valea Chioarului clay structures. Due to its higher CEC value and larger specific surface, together with a negligible quantity of impurities, Valea Chioarului clay is highly recommended for further electroanalytical applications. Electroanalytical characterization of two Romanian clays with possible applications in pharmaceutical analysis Introduction The high demand for simple, fast, accurate, and sensitive detection methods in pharmaceutical analysis has driven the development of novel electrochemical sensors. CLME are likely to be used for this application. Montmorillonite is well known to have been used for centuries to produce ceramics. Furthermore, its applications in pharmacy, as an adsorbent, and as an ion exchanger are also reported in the literature. [START_REF] Vaccari | Preparation and catalytic properties of cationic and anionic clays[END_REF][START_REF] Vaccari | Clays and catalysis: a promising future[END_REF] These last applications are particularly useful for the development of electrochemical sensors. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF]239 Clay-modified electrodes have attracted considerable attention in attempts to control the path and scale of electrode reactions.
[START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Bard | Electrodes modified with clays, zeolites and related microporous solids[END_REF][START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF][249][250][251] The composition of the CPE modified with clay can be described as a complex heterogeneous system consisting of conductive solids, semiconductors, and insulators, including a clay-induced aqueous phase. Phenomena of charge and mass transfer in such mixtures are extremely complicated and require thorough characterization, all the more so because the clays included in the electrode material are natural compounds whose composition and structure depend on their place of origin. In spite of the wide range of electrode modifiers, clays have attracted the interest of electrochemists, in particular for their analytical applications. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]239,[252][253][254] Electrochemical methods are well known as very sensitive, but they lack selectivity. Electrode modification is therefore the main route to improving selectivity. This is why the different ways of modifying the electrode surface in order to obtain improved electrochemical signals represent a concern for researchers all over the world. Electrode modifications concern either the improvement of sensor selectivity, by increasing the affinity for a specific analyte while rejecting, at the same time, other interfering chemical species, or the improvement of the electroanalytical performances (higher accuracy and reproducibility, lower detection and quantification limits, the possibility to determine several electroactive species without any separation process, at different oxidation or reduction potentials).
Generally, chemical sensors contain two basic functional units: a receptor part, which transforms the chemical information into a measurable form of energy, and a transducer part, capable of converting the energy carrying the chemical information about the sample into a useful analytical signal. Biological sensors can be defined either as devices able to detect the presence, the movement, and the number of organisms in a given environment, or as sensors which contain in their structure a biological component (bacteria, algae, tissues, cells) as receptor, this type of device being known as biosensors. 255 In the living world, there are many examples of sensors consisting of biological receptors (proteins, nucleic acids, signaling molecules) located on the cell membrane, in all the tissues and organs, or even in the circulating blood stream. Enzymes have been used for decades in sensor development, and this led to the emergence of a niche field of research: biosensors. More specifically, biosensors could be defined as sensitive and selective analytical devices which associate a biocomponent with a transducer. 256 Biosensors are applied with success in several fields (environment, food security, biomedical and pharmaceutical analysis), especially because of the stable source of the biomaterial (enzymes produced by bacteria, plants, or animals as by-products), and due to their catalytic properties and the possibility of modifying the surface of transducers in various ways. A key step in the development and optimization of biosensors is related to the entrapment of the enzymes at the surface of the electrode, another challenge being to preserve the microenvironment of the enzyme and hence the lifetime of the biosensor.
Besides previously used methods, like adsorption, cross-linking, covalent binding, biological membranes, magnetic microparticles, entrapment in sol-gel, etc., immobilization into an electrochemical polymer or polymerizable matrix was successfully used in the development of amperometric biosensors. 256 In this case, the procedure was effective and simple, and the enzyme was less affected than during other methods of entrapment. Adsorption of proteins on clay mineral surfaces represents an important application in fields related to the agricultural and environmental sciences, but also in pharmaceutical and biomedical analysis. [START_REF] Gianfreda | Enzymes in soil: properties, behavior and potential applications[END_REF]243 Organic molecules, macromolecules, and biomolecules can be easily intercalated in solids with a 2D structural arrangement that have an open structure. Therefore, clay minerals are likely to be exploited to improve the analytical characteristics of biosensors. This type of biosensor is based on three smectite clays (laponite, montmorillonite, and nontronite) and on layered double hydroxides. [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF]243 Due to their low price and accessibility, clays offer a new immobilization method for biomolecules like enzymes. Hydrated clays represent a good environment for enzyme functioning and can improve the lifetime of the biosensor. On the other hand, many electrochemical processes are controlled by diffusion, and clays, due to their adsorbent properties, can improve or accelerate the diffusion of different molecules (pharmaceuticals in our case) at the electrode surface. In this way, electrochemical parameters can also be improved, allowing the recording of higher currents at lower potentials, and thus developing new electroanalytical methods with enhanced performances.
Clays also show some disadvantages, the clay deposition and the thickness of the clay film on the electrode surface being factors that can decrease the electric conductivity. In this case, the use of conductive polymers, like PEI or polypyrrole, for the immobilization of the clay film at the electrode surface represents a good alternative. In spite of the water and alcohol solubility of PEI, this polymer does not involve any further polymerization process (such as heating, polymerization initiators, or potential scanning) which could damage the enzyme structure and functioning. The development of composite electrodes for the construction of biosensors based on HRP and clay films for acetaminophen (N-acetyl-p-aminophenol) detection is described. Acetaminophen is widely used as an analgesic and antipyretic drug, having actions similar to aspirin. It is a suitable alternative for patients who are sensitive to aspirin and is safe in therapeutic doses. HRP has been a powerful tool in biomedical and pharmaceutical analysis. Many biosensors based on HRP applied in biomedical and pharmaceutical analysis are mentioned in the literature. 256,257 The enzyme immobilization was performed by retention in a PEI and clay porous gel film, a technique that offers good entrapment and, at the same time, a "protective" environment for the biocomponent. 243 In this study, the refined bentonites obtained from the Răzoare and Valea Chioarului deposits were used to modify CPEs. The electrochemical behavior of acetaminophen, ascorbic acid, and riboflavin phosphate was tested by cyclic voltammetry on the clay-modified CPEs with different clay particle sizes. The resulting CPEs revealed either better electroanalytical signals or oxidation at lower potential. The exploitation of Romanian clays in developing a biosensor for acetaminophen detection with good sensitivity and reproducibility was also achieved.
The development of new clay-modified sensors using such composite materials based on micro- and nanoparticles could be applied in pharmaceutical analysis. 239 The studies presented in this chapter are preliminary studies. By employing the three standard pharmaceutical substances, these studies did not aim at the development of new analytical methods with improved sensitivity and selectivity (even though they showed this could be achieved further on), but at emphasizing the adsorbent and ion exchange properties of the Romanian clays. Based on these principles, the tested molecules chosen were neutral (acetaminophen), anionic (ascorbic acid), and cationic (riboflavin). Materials and methods Clay water suspensions of 50 mg/mL were prepared for the fractions below 20 µm and 0.2 µm for Valea Chioarului clay and below 20 µm for Răzoare clay. Standard solutions of acetaminophen, riboflavin, and ascorbic acid were prepared to provide a final concentration of 10 -3 M. For biosensor development, standard solutions of acetaminophen and hydrogen peroxide were prepared to provide final concentrations of 10 -4 M for acetaminophen and 0.1 mM for hydrogen peroxide. The stock solutions of acetaminophen were dissolved in phosphate buffer and kept in the refrigerator. All the experiments were performed in PBS (phosphate buffered saline) (pH 7.4; 0.1 M) at room temperature (25 °C). CPEs were modified by mixing different Răzoare clay concentrations (1%, 2.5%, 5%, and 10%) with "homemade" carbon paste prepared with solid paraffin. 239,258 Electrochemical studies, like cyclic voltammetry (CV) and chronoamperometry, were performed in a conventional three-electrode system: the new modified carbon-based electrodes (working electrodes), platinum (auxiliary electrode), and Ag/AgCl 3M KCl (reference electrode), under stirring conditions. All the CV experiments were recorded at 100 mV s -1 . During chronoamperometry experiments, the biosensor potential was kept at 0 V vs. Ag/AgCl under continuous stirring conditions.
The working potential was applied and the background current was allowed to reach a steady-state value. Different amounts of acetaminophen standard solution were then added, every 100 seconds, into the stirred electrochemical cell and the current was recorded as a function of time. The obtained configuration was used to study the biocatalytic oxidation of acetaminophen in the presence of hydrogen peroxide. 243 The experiments were performed with an AUTOLAB PGSTAT 30 (EcoChemie, Netherlands) equipped with GPES and FRA2 software. The pH of the solutions was measured using a ChemCadet pH-meter. All solutions were prepared using high-purity water obtained from a Millipore Milli-Q water purification system. Paraffin (Ph Eur, BP, NF), graphite powder, acetaminophen (minimum 99.0%), L-ascorbic acid (99.0%), and riboflavin (Ph Eur) were provided by Merck, and KCl (analytical grade) by Chimopar Bucureşti. HRP (peroxidase type II from horseradish, EC 232-668-6), acetaminophen, hydrogen peroxide, monosodium phosphate, and disodium phosphate were provided by Sigma-Aldrich; PEI (50% in water, Mr 600,000-1,000,000, density 1.08 g/cm 3 at 20 °C) was purchased from Fluka. All reagents were of analytical grade and were used as received, without further purification. Composite film electrodes (PEI/clay/GCE) were prepared as follows: PEI (5 mg) was stirred for 15 minutes in absolute ethanol (125 μL) and distilled water (120 μL), then 6.5 μL of nanoporous clay gel were added and the mixture was stirred again for 15 minutes. Two different suspensions (20 μL) containing Valea Chioarului clay particles with diameters below 20 μm and below 0.2 μm were deposited on the surfaces of two different GCEs and dried for 4 hours at 4 °C. 243 GCEs were provided by BAS Inc. (West Lafayette, USA) and were carefully washed with demineralized water and polished using diamond paste (BAS Inc.).
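As a side note on the stepwise-addition protocol above, the cumulative analyte concentration in the stirred cell can be tracked with a short script. This is a minimal sketch: the 10 mL cell volume and 50 µL aliquots of the 10 -4 M stock are assumed values for illustration, not the exact volumes used in this work.

```python
def concentration_after_additions(c_stock, v_add, v_cell, n_additions):
    """Return the analyte concentration (M) after each addition of
    v_add litres of a c_stock (M) standard into a cell of v_cell litres,
    accounting for the dilution caused by the added volume."""
    concentrations = []
    moles, volume = 0.0, v_cell
    for _ in range(n_additions):
        moles += c_stock * v_add
        volume += v_add
        concentrations.append(moles / volume)
    return concentrations

# e.g. five 50 uL aliquots of a 1e-4 M stock into 10 mL of buffer
steps = concentration_after_additions(1e-4, 50e-6, 10e-3, 5)
print([f"{c:.2e}" for c in steps])
```

Because each aliquot also dilutes the cell contents, the concentration steps are slightly sub-additive; this matters when building calibration plots from successive additions.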
Results and discussions

The electrochemical behavior of several clay-modified electrodes was tested in the presence of some pharmaceutical compounds: acetaminophen, ascorbic acid, and riboflavin phosphate (Figure 11). The electrochemical behavior of these three selected redox probes (neutral, negatively, and positively charged, respectively) is widely discussed in the literature for different electrode materials (Pt, carbon paste, glassy carbon, etc.), which enables comparison of the resulting electrochemical performances with those described by other authors. Taking into consideration the adsorbent properties of the investigated clays, the study aimed to improve the oxidation and reduction potentials obtained on unmodified CPEs. Acetaminophen (N-acetyl-p-aminophenol) is one of the most commonly used analgesics in pharmaceutical formulations for the reduction of fever and also as a painkiller for the relief of mild to moderate pain associated with headache, backache, arthritis, and postoperative pain. Acetaminophen is metabolized primarily in the liver, into toxic and non-toxic products. The hepatic cytochrome P450 enzyme system metabolizes acetaminophen, resulting in a minor yet significant alkylating metabolite known as NAPQI (N-acetyl-p-benzoquinone imine). NAPQI is then irreversibly conjugated with the sulfhydryl groups of glutathione. Ascorbic acid is a naturally occurring organic compound with antioxidant properties. As a mild reducing agent, it degrades upon exposure to air, reducing oxygen to water. The redox reaction is accelerated by the presence of metal ions and light. Ascorbic acid can be oxidized by one electron to a radical state or doubly oxidized (with the loss of 2 H + and 2 e - ) to the stable form called dehydroascorbic acid. Riboflavin, also known as vitamin B 2 , is a colored micronutrient, easily absorbed, with an important role in human and animal health maintenance.
This vitamin is the central component of the cofactors FAD and FMN and is therefore required by all flavoproteins. As such, vitamin B 2 is involved in a wide variety of cellular processes, playing a key role in energy metabolism, but also in the metabolism of fats, ketone bodies, carbohydrates, and proteins. Riboflavin undergoes reversible oxidation and reduction processes. Acetaminophen and ascorbic acid (Figures 12A and 12B) showed relatively similar electrochemical behavior in the CV investigations, a single irreversible oxidation peak being obtained at 0.78 V, with currents of 150 μA for acetaminophen and 200 μA for ascorbic acid at the 1% clay-modified CPEs (due to the two-electron transfer described in the chemical reactions above). In both cases, the increase in the clay content was followed by an important shift of the oxidation potential towards lower values, 0.70 V for acetaminophen and 0.60 V for ascorbic acid, showing that the increasing clay concentration facilitates the oxidation process (ΔE = 100-150 mV). In both cases, during the anodic potential sweep, voltammetric peaks were formed, revealing that the electrochemical oxidation reactions of acetaminophen and ascorbic acid are diffusion-controlled processes. Moreover, in both cases no significant currents were detected during the cathodic potential scan, so both oxidation processes are irreversible. In the case of acetaminophen, an increase in the current from 150 to 350 μA could be observed in the oxidation range at the 2.5% clay-modified CPE, while for the 5% clay-modified CPE the current had the same order of magnitude as at the 1% clay-modified CPE (Figure 12A). Ascorbic acid showed a different behavior, the increase in the clay content having no influence on the current range but facilitating the oxidation reactions, as proved by the above-mentioned shift of the anodic potential towards lower values (Figure 12B).
This can be attributed to the electrostatic repulsion between the negatively-charged clay sheets and the negatively-charged molecule. Riboflavin phosphate exhibited a typical reversible cyclic voltammetric response at the unmodified carbon paste electrode, with an oxidation peak at -0.45 V and a reduction peak at -0.60 V, as presented in Figure 13. The electrostatic attraction between the positively-charged molecule and the negatively-charged clay sheets is very clear in this case. Anodic currents increased proportionally with the clay content, by about 5 nA and 10 nA for the 5% and the 10% clay-modified CPEs versus the unmodified CPEs, respectively. The increase in the cathodic current was higher (20 nA) than that in the anodic current (10 nA) for the 10% clay-modified CPEs. Thus, it can be concluded that an increase in the clay concentration favors riboflavin detection. A significant difference could be observed when the current at the 5% clay-modified CPE was compared with that at the unmodified CPE. In the oxidation range, the current was 5 nA higher than the current measured at the unmodified electrode, while in the reduction range the value was about 10 nA lower than the one measured at the unmodified electrode. This proved that a lower concentration of clay was not enough for riboflavin detection.

Biosensor for acetaminophen detection with Romanian clays and conductive polymers

Two types of transducers were studied (CPE and GCE) using two types of clay particles (below 20 μm and below 0.2 μm). CPEs were prepared by adding various amounts of clay (1, 2.5, 5, and 10%). CPEs made of a mixture of graphite powder and solid paraffin are simple to prepare and offer a renewable surface, essential for the electron transfer. 258 The use of CPEs in electroanalysis is due to their simplicity, minimal cost, and the possibility of facile modification by adding other compounds, thus giving the electrodes certain predetermined properties such as high selectivity and sensitivity.
243,259 In order to realize a biosensor for the detection of acetaminophen, the electrochemical behavior of the doped CPEs was compared with that of the thin-PEI-film GCEs. The thin PEI film deposited on the surface of a GCE exhibited better mechanical stability in spite of its relative water solubility, as well as an improved hydration layer, essential for the immobilization of the enzyme. The difference between the two electrode configurations was clear. In the CV recordings of acetaminophen on the modified CPE, only the oxidation process could be observed (Figure 12A). By comparison, the PEI film electrodes showed reversible oxidation and reduction processes (Figure 14). The porosity of the clay film is obvious in this case, as it lets the neutral acetaminophen species reach the electrode surface quite easily. The current obtained on the polymeric film electrodes was, however, lower (10-15 μA) than that obtained on the clay-modified CPEs (100-350 μA). An increase of the current was noticed for the different particle sizes of Valea Chioarului clay, the best response for acetaminophen being recorded for the 0.2 μm fraction, due to its greater active surface area (Figure 14). In the human body, acetaminophen is metabolized to N-acetylbenzoquinonimine (NAPQI). 260 The same conversion can be achieved in vitro by HRP in the presence of hydrogen peroxide. The amperometric studies (Figure 15) were made by recording the electrochemical reduction of the enzymatically generated electroactive oxidized species of acetaminophen (NAPQI) in the presence of hydrogen peroxide after stepwise addition of small amounts of a 10 -4 M acetaminophen solution. 243,260

Acetaminophen + H 2 O 2 → NAPQI + 2 H 2 O (catalyzed by HRP)
NAPQI + 2 e - + 2 H + → Acetaminophen

The detection limit was calculated from the ratio between the standard deviation of the blank baseline noise (0.1 M phosphate buffer, pH 7.4) and the slope of the biosensor's response to acetaminophen.
256 The amperometric assays were carried out at -0.2 V, which represents the reduction potential of NAPQI. The HRP/clay/PEI/GCE biosensor had a detection limit of 6.28×10 -7 M and a linear range between 5.25×10 -6 M and 4.95×10 -5 M. The calibration curve equation of the amperometric biosensor was y = 0.0139x + 3×10 -8 (R 2 = 0.996), linear between 5×10 -6 and 4.95×10 -5 M. The reproducibility was also tested on the same electrode after 10 successive analyses on three different days. The RSDs of the slopes of the linear responses, calculated by the Lineweaver-Burk method, were less than 15%. 243 The results are comparable with those recently presented in the literature.

Conclusions

New composite materials based on clay micro- and nanoparticles were developed for electrochemical sensors. The electrochemical behavior of acetaminophen and riboflavin phosphate was tested for the first time on clay-modified CPEs with different clay particle sizes using CV, and new electrochemical methods were elaborated for their detection and applied in pharmaceutical analysis. The obtained results emphasized the large active surface area and the adsorbent and ion-exchange properties, and showed the advantages offered by the Răzoare and Valea Chioarului clays for the development of novel modified electrodes applied in pharmaceutical analysis. 239 Carbon paste was chosen in this work because it is accessible, low-cost, easy to manipulate, and showed good electrochemical properties. It also offered good entrapment of the clay at the electrode surface. The main disadvantage of the carbon paste was the impossibility of quantifying the amount of clay at the electrode surface that comes into contact with the electrolyte and is actually involved in the electrochemical measurements. Therefore, the results obtained in this study did not show higher performance in comparison with the already existing methods.
A novel biosensor for acetaminophen, based on HRP immobilization in a porous PEI-clay gel film, was developed. When comparing the CV recordings of acetaminophen on the modified CPE and on the PEI film electrodes, it was concluded that in the first case only the oxidation process could be observed, while in the second case reversible oxidation and reduction processes were visible. For the modified GCE, the current response increased proportionally with the active surface area as a result of the decrease in particle size. The amperometric detection of acetaminophen was successfully achieved, with a detection limit of 6.28×10 -7 M and a linear range between 5×10 -6 M and 4.95×10 -5 M. The clay offered both good entrapment and a "protective" environment for the biocomponent. This immobilization strategy could be exploited with other biomolecules for the detection of pharmaceuticals. 243 These are preliminary results, as the studies did not aim at developing new analytical methods for the detection of the selected pharmaceutical probes (acetaminophen, ascorbic acid, and riboflavin), but at emphasizing the specific properties of the employed Romanian clays, which recommend them for further exploitation in electrode modification and in the development of new composite materials applied in sensor and biosensor fabrication.
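The calibration treatment used above (a linear fit of current vs. concentration and a 3σ/slope detection limit) can be sketched with a short script. The calibration points and blank-noise values below are synthetic placeholders built around the reported slope (0.0139) and intercept (3×10 -8 ), not measured data.

```python
import statistics

# Synthetic calibration data lying on the reported line y = 0.0139 x + 3e-8
conc = [5e-6, 1e-5, 2e-5, 3e-5, 4e-5, 5e-5]      # M
current = [0.0139 * c + 3e-8 for c in conc]      # A

# Ordinary least-squares fit of current vs. concentration
n = len(conc)
mx, my = sum(conc) / n, sum(current) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, current)) / \
        sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

# Detection limit as 3 * (standard deviation of blank noise) / slope;
# the blank readings below are assumed values for illustration
blank_noise = [2.9e-8, 3.1e-8, 3.0e-8, 2.8e-8, 3.2e-8]  # A
lod = 3 * statistics.stdev(blank_noise) / slope
print(f"slope = {slope:.4f}, intercept = {intercept:.2e}, LOD = {lod:.2e} M")
```

With a blank noise of a few nA and the reported slope, this 3σ/slope estimate lands in the 10 -7 M range, the same order of magnitude as the detection limit quoted above.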
3 Tetrabutylammonium-modified clay film electrodes: characterization and application to the detection of metal ions

Introduction

Even though their development started a few decades ago, clay-modified electrodes (CLMEs) still represent a notable field of interest, especially for their applications in electroanalysis, as discussed in several reviews [START_REF] Mousty | Sensors and biosensors based on clay-modified electrodes-new trends[END_REF][START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF][START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]126,249,261 and illustrated in recent research papers where CLMEs have been used as sensors or biosensors (see examples, e.g., from our group 239,243,262 and from the groups of Ngameni [START_REF] Jieumboué-Tchinda | Thiol-functionalized porous clay heterostructures (PCHs) deposited as thin films on carbon electrode: Towards mercury(II) sensing[END_REF][START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF][START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF][263][264][265][266][267], Mousty 268,269, or others [START_REF] Dias Filho | Study of an organically modified clay: Selective adsorption of heavy metal ions and voltammetric determination of mercury(II)[END_REF]270,271 ). Clay minerals used as electrode modifiers are primarily (but not only) phyllosilicates, i.e., layered hydrous aluminosilicates.
An important characteristic of those minerals is their interlayer distance, which depends on the amount of intercalated water and on the exchangeable cations within the interlayer space [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF] . They also exhibit attractive properties such as a relatively large specific surface area, ion exchange capacity, and the ability to adsorb and intercalate organic species. Smectites have been mostly used for CLME preparation in thin layer configuration, especially montmorillonite (MMT), due to its high cation exchange capacity (typically 0.80-1.50 mmol g -1 ) and its thixotropy, which is likely to generate stable and adhesive clay films on electrode surfaces. [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]249 One should recall here that clays are insulating materials, so their use in electrochemistry requires close contact with an electrode surface, which can be achieved via either the dispersion of clay powders in a conductive composite matrix (e.g., a carbon paste electrode 272 ) or the deposition of clay particles as thin films on solid electrode surfaces. An advantage of clay film modified electrodes is that they are binder-free, thanks to the particular platelet morphology of clay particles, which gives them self-adhesive properties toward polar surfaces [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF] , ensuring better interaction with most electrode materials and, consequently, more durable immobilization.
Clay films can be attached to solid electrode surfaces by physical means (through solvent casting, spin-coating, layer-by-layer assembly [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]249,273, or electrophoretic deposition [START_REF] Song | Preparation of clay-modified electrodes by electrophoretic deposition of clay films[END_REF] ), by covalent bonding (via silane or alkoxysilane coupling agents) [START_REF] Rong | Electrochemistry and photoelectrochemistry of pillared claymodified-electrodes[END_REF]274 , or, more recently, in the form of clay-silica composite films 275 . At the beginning, CLMEs were mainly prepared from bare (unmodified) clay materials [START_REF] Macha | Clays as architectural units at modified-electrodes[END_REF]249 , but recent advances have mostly been based on organically-modified clays (obtained either by intercalation or by grafting of organic moieties in the interlayer region of the clay [276][277][278][279] ), because they make it possible to tune, control, and extend the clay properties, therefore resulting in better analytical performance in terms of selectivity and sensitivity [START_REF] Jieumboué-Tchinda | Thiol-functionalized porous clay heterostructures (PCHs) deposited as thin films on carbon electrode: Towards mercury(II) sensing[END_REF][START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF][START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF][263][264][265][266][267] .
Dealing with sensitivity, preconcentration electroanalysis at modified electrodes (in which the analyte is first accumulated at open circuit and then electrochemically detected) has proven to be a powerful method to improve the performance of electrochemical sensors. In this respect, the ion exchange capacity of clays and the binding properties of organoclays have been exploited for the detection of metal ions using CLMEs (see examples in reviews [START_REF] Navrátilová | Clay modified electrodes: present applications and prospects[END_REF]126 ). Until now, however, very few examples have been based on the use of intercalated organoclay materials for that purpose [START_REF] Bouwe | Structural characterisation of 1,10phenanthroline-montmorillonite intercalation compounds and their application as low-cost electrochemical sensors for Pb(II) detection at the sub-nanomolar level[END_REF]265 , in spite of intercalation being a simpler modification procedure than, for instance, grafting (which requires particular organoalkoxysilane reagents [START_REF] Tonlé | Preconcentration and voltammetric analysis of mercury(II) at a carbon paste electrode modified with natural smectite-type clays grafted with organic chelating groups[END_REF]267 ). Here, we have thus examined the interest of CLMEs based on clay particles intercalated with tetrabutylammonium (TBA + ) moieties for the preconcentration electroanalysis of some metal ions (i.e., Cd 2+ , Pb 2+ and Cu 2+ ). The choice of this tetraalkylammonium intercalation reagent was motivated by at least two features: (1) TBA + ions can be easily intercalated in the interlayer region of smectite clays by ion exchange 280 , and (2) TBA + modifies the packing configurations in the clay interlayer, thus influencing the sorptive properties of the organoclay 281 , notably with respect to the adsorption of metal ions such as Cu 2+ or Cd 2+ 282,283 , which could be advantageously exploited here with CLMEs.
The present study describes the deposition of tetrabutylammonium-modified clay particles (a montmorillonite-rich natural clay from Romania) onto a glassy carbon electrode surface, subsequently covered with a dialysis tubing cellulose membrane, a configuration ensuring fast mass transport of analytes from the solution through the film to the electrode surface. The permeability properties of the novel thin clay films towards selected redox probes (cationic, neutral, and anionic) were characterized. The permeability properties and long-term operational stability of the modified electrodes are discussed in relation to their physico-chemical characteristics. Surprisingly, the fully doped tetrabutylammonium clay showed lower electrochemical performance towards cations than the unmodified clay. This is explained by the positive electrostatic barrier and by the blockage of interlayer adsorption sites by the tetrabutylammonium ions. In order to diminish the positive electrostatic barrier and also to create free adsorption sites, partial removal of the tetrabutylammonium ions was performed, thus improving the electrochemical performance of the modified clays. The modified electrode was then applied to the detection of some metal ions chosen as relevant biological and environmental contaminants (Cd 2+ , Pb 2+ and Cu 2+ ). After optimization of various experimental parameters, a stable and reliable sensor was obtained.

Experimental

Clays, reagents, and electrochemical instrumentation

NaNO 3 (99%, Fluka), HCl (37%, Riedel de Haen), and tetrabutylammonium bromide (TBAB, 99%, Sigma) were used as received without further purification. The redox probes employed for the permeability tests were of analytical grade: ferrocene dimethanol (Fc(MeOH) 2 , Alfa Aesar); potassium hexacyanoferrate(III) (K 3 Fe(CN) 6 , Fluka); and hexaammineruthenium(III) chloride (Ru(NH 3 ) 6 Cl 3 , Sigma-Aldrich).
Single-component and multi-component cation solutions were prepared daily by diluting standardized stock solutions (1000 mg/L of each metal ion, from Sigma-Aldrich). These standards were also used to certify the copper(II) solutions prepared from Cu(NO 3 ) 2 •3H 2 O and 0.05 M HNO 3 , the lead(II) solutions prepared from Pb(NO 3 ) 2 and 0.5 M HNO 3 , and the cadmium(II) solutions prepared from Cd(NO 3 ) 2 •4H 2 O and 0.5 M HNO 3 (salts from Sigma-Aldrich), which were used to prepare diluted solutions for the preconcentration studies (the final pH in the electrolyte was 5.5 if not stated otherwise). The electrolyte employed was 0.1 M NaNO 3 . All solutions were prepared with high-purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. The clay sample used in this study was a natural Romanian clay from Valea Chioarului (Maramureş County), consisting mainly of MMT with minor amounts of quartz. Its physico-chemical characterization is provided elsewhere 239 . The structural formula is (Ca 0.06 Na 0.27 K 0.02 ) Σ=0.35 (Al 1.43 Mg 0.47 Fe 0.10 ) Σ=2.00 (Si 3.90 Al 0.10 ) Σ=4.00 O 10 (OH) 2 •nH 2 O. It is characterized by a surface area (N 2 , BET) of 190 m 2 g -1 . Only the MMT-rich fine fraction of the clay (< 1 μm, collected by sedimentation according to Stokes' law: the raw clay was suspended in water, ultrasonicated for about 15 min and allowed to settle, followed by centrifugation and ultracentrifugation of the supernatant phase) was used here. This fine fraction has a CEC of 0.78 mEq g -1 and was previously used as a template for an amperometric biosensor for acetaminophen detection 243 . Its XRD diffractogram showed a high content of MMT (with characteristic peaks at 2θ: 6.94°; 19.96°; 21.82°; 28.63°; 36.14°; 62.01°) and also confirmed the almost negligible presence of other minerals.

Apparatus and characterization procedures

Electrochemical experiments were carried out using a PGSTAT-12 potentiostat (EcoChemie) equipped with the GPES software.
A conventional three-electrode cell configuration was employed for the electrochemical measurements. Film-modified GCEs were used as working electrodes, with an Ag/AgCl/KCl 3 M electrode (Metrohm) as reference and a platinum wire as counter electrode. Cyclic voltammetry (CV) determinations were carried out in 1 mM K 3 Fe(CN) 6 , 0.1 mM Ru(NH 3 ) 6 Cl 3 , and 5 mM Fc(MeOH) 2 , respectively (in 0.1 M NaNO 3 ). CV curves were typically recorded in multisweep conditions at a potential scan rate of 20 mV s -1 and used to qualitatively characterize accumulation/rejection phenomena and mass transport issues through the various films. Accumulation-detection experiments were also performed using copper(II), lead(II), and cadmium(II) as model analytes. Typically, open-circuit accumulation was made from diluted cation solutions (5×10 -7 -10 -6 M) at pH 5.5 and voltammetric detection was achieved, after medium exchange to a cation-free electrolyte solution (0.1 M NaNO 3 ), by square wave voltammetry (SWV) at a scan rate of 5 mV s -1 , a pulse amplitude of 50 mV, and a pulse frequency of 100 Hz. CNH elemental analysis was performed for the unmodified clay and for the fully TBAB-doped MMT before and after partial TBAB extraction, using an Elementar Vario Micro Cube, with the following experimental conditions: combustion temperature 950 °C; reduction temperature 550 °C; He flow 180 mL/min; O 2 flow 20 mL/min; pressure 1290 mbar. The film structure was characterized by X-ray diffraction (XRD), FTIR, and Raman spectroscopy. XRD measurements were performed using a BRUKER D8 Advance X-ray diffractometer, with a goniometer equipped with a germanium monochromator in the incident beam, using Cu Kα1 radiation (λ = 1.54056 Å) in the 2θ range 15-85°. The FTIR spectra were measured on a Jasco FT/IR-4100 spectrophotometer equipped with Jasco Spectra Manager Version 2 software (550-4000 cm -1 ).
The Raman spectra were acquired with a confocal Raman microscope (Alpha 300R from WiTec) using WiTec Control software for data processing (-1000 to 3600 cm -1 , resolution > 0.5 cm -1 ). Electrochemical impedance spectroscopy (EIS) was used to characterize the electron transfer properties of the modified electrodes. The Nyquist plots were recorded with an Autolab potentiostat equipped with a FRA2 module and version 4.9 software.

Clay modification with TBAB

A MMT sample (10 g, particle size < 1 μm) was suspended in ultrapure demineralized water (4% clay concentration in water). A quantity of Na 2 CO 3 equivalent to 100 mEq per 100 g of clay was then added to the clay suspension, which was stirred for 30 minutes at 97 °C. An aqueous solution of TBAB (corresponding quantitatively to 1.1 times the montmorillonite cation exchange capacity (CEC MMT = 0.78 mEq g -1 ), i.e., 0.85 mEq g -1 of TBAB) was then added and the suspension was stirred for 30 more minutes at room temperature. The obtained solid was separated by centrifugation and washed until it was free of any residual Br - . The organoclay material was then dried for 48 h at 60 °C. Depending on the analytical purpose, the TBAB was kept in the clay or partially solvent-extracted from the powder in an ethanol solution containing 0.1 M NaClO 4 for 30 min under moderate stirring.

Electrode assembly

Glassy carbon electrodes purchased from e-DAQ (GCEs, 1 mm in diameter) were first polished using 1 and 0.05 μm alumina powder, then washed with water and sonicated for 15 min in distilled water to remove any alumina traces; the electrochemically active surface area was about 7.85×10 -3 cm 2 . Clay or organoclay suspensions (5 mg/mL) were then prepared in distilled water, stirred for 20 min, sonicated for 10 min, and left quiescent at room temperature. The clay or organoclay film was deposited on the GCE by spin-coating.
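As a quick sanity check on the preparation steps described in this section, the TBAB dose for the ion exchange and the per-electrode clay loading can be recomputed from the stated quantities. This is a minimal sketch; the TBAB molar mass (322.37 g/mol, from the formula C16H36NBr) is an added assumption, not a value quoted in the text.

```python
# (1) TBAB dose for the ion exchange: 1.1 x CEC for 10 g of clay.
# TBAB is a monovalent salt, so 1 mEq corresponds to 1 mmol.
cec = 0.78            # mEq per g of clay (fine fraction)
clay_mass = 10.0      # g of clay
excess = 1.1          # exchange performed at 1.1 x CEC
m_tbab = 322.37       # g/mol, assumed molar mass of TBAB

meq_needed = excess * cec * clay_mass       # total mEq of TBA+
grams_tbab = meq_needed * 1e-3 * m_tbab     # g of TBAB to weigh out
print(f"TBAB dose: {meq_needed:.2f} mEq -> {grams_tbab:.2f} g")

# (2) Clay loading per electrode: 2.5 uL of a 5 mg/mL suspension.
loading_ug = 2.5e-3 * 5 * 1000              # mL x mg/mL x 1000 = micrograms
print(f"Clay deposited per electrode: {loading_ug:.1f} ug")
```

These figures (a few grams of TBAB for the batch exchange, a few tens of micrograms of clay per electrode) give a sense of the scales involved in the two modification steps.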
A volume of 2.5 μL of the clay suspension (5 mg/mL) was deposited on the electrode surface, which was then spun for 20 minutes at 2000 rotations per minute. The electrode was then dried at room temperature for 1 hour. The clay or organoclay film was covered with a dialysis tubing cellulose membrane (Sigma), fixed first with a rubber O-ring and then with laboratory film to avoid solution penetration under the membrane. The five systems characterized in this study are: the bare glassy carbon electrode (GCE), the bare glassy carbon electrode with cellulose membrane (GCE/M), the unmodified MMT film coated on GCE with cellulose membrane (GCE+MMT/M), and the TBAB-modified MMT film coated on GCE with cellulose membrane before (GCE+MMT+TBAB/M) and after TBAB partial extraction (GCE+MMT+TBAB(-TBAB)/M).

Results and discussions

Physico-chemical characterization of clays

XRD was first used to characterize the possible structural changes of the smectite clay upon intercalation of TBAB. As expected, prior to surfactant entrapment, the clay film exhibited the same MMT characteristics as those reported for the raw clay particles in the experimental section (diffraction lines at 2θ values (°) of 6.9; 19.9; 21.8; 28.6; 36.1; 62.0, data not shown). The unit cell parameters and the profile discrepancy index values (Table 5) indicate an expansion of the interlayer region between the clay sheets. As the clay was in contact with a TBAB solution, this expansion is certainly due to the incorporation of TBA + species in the clay interlayer (in agreement with a similar process described elsewhere for other surfactants such as cetyltrimethylammonium bromide (CTAB) 284 ). After partial removal of the TBAB, the clay interlayer distance was found to maintain almost the same value as that obtained for the fully doped clay, MMT+TBAB (see Table 5).
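As a consistency check on the interlayer-distance discussion above, the 2θ positions reported for the raw clay can be converted to d-spacings with Bragg's law (nλ = 2d sinθ, n = 1, Cu Kα1, λ = 1.54056 Å). This is a minimal sketch, not data from this work:

```python
import math

wavelength = 1.54056  # angstrom, Cu K-alpha1
two_theta = [6.94, 19.96, 21.82, 28.63, 36.14, 62.01]  # degrees, raw clay peaks

# Bragg's law with n = 1: d = lambda / (2 sin(theta))
d_spacings = [wavelength / (2 * math.sin(math.radians(tt / 2))) for tt in two_theta]
for tt, d in zip(two_theta, d_spacings):
    print(f"2theta = {tt:6.2f} deg  ->  d = {d:6.2f} angstrom")
```

The first reflection (2θ = 6.94°) corresponds to a basal spacing of about 12.7 Å, consistent with the d001 of a Na-rich montmorillonite; intercalation of the bulky TBA + cation is expected to shift this reflection to lower angles (larger d001), in line with the expansion reported in Table 5.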
The FTIR spectrum of Valea Chioarului clay (Figure 16, black line) presented the typical bands attributed to the characteristic groups of MMT, described elsewhere. 243,244 The FTIR spectrum of TBAB (Figure 16, blue line) presented a band at 739 cm -1 corresponding to the rocking vibration of the alkane C-H groups. The band near 1020-1250 cm -1 can be assigned to the C-N stretching vibrations, and the bands near 1450-1470 cm -1 are due to the alkane C-H bending vibrations. The bands near 1350-1370 cm -1 correspond to the alkane C-H rocking vibrations and the bands near 2945-2969 cm -1 to the alkane C-H stretching vibrations. The bands near 600-1300 cm -1 can be assigned to the C-C aliphatic chain vibrations. 246 For the fully TBAB-doped clay sample (Figure 16, red line), the peak at 3625 cm -1 attributed to the stretching vibration of the hydroxyl group, the band near 1637 cm -1 due to the bending vibration of the H-O-H group, and the broad band near 1000-1200 cm -1 assigned to the Si-O stretching vibration are common with the MMT spectrum. At the same time, the bands near 2945-2969 cm -1 due to the alkane C-H stretching vibrations, the bands near 1475 cm -1 assigned to the alkane C-H bending vibrations, and the bands near 1350 cm -1 corresponding to the alkane C-H rocking vibrations all indicate the presence of the surfactant in the clay. It can be assumed that the broad band at 1000-1200 cm -1 includes the C-N stretching vibrations. 243,244,246 After the partial removal of TBAB, the FTIR spectrum (Figure 16, green line) presents almost all the bands that were attributed to the surfactant, but with lower intensities. The Raman spectrum of MMT is characterized by three strong bands near 200, 425, and 700 cm -1 (Figure 17, black line). [285][286][287] The sharp Raman peak at 706 cm -1 is due to SiO 4 vibrations, and the broader feature near 420 cm -1 has been assigned to M-OH bending vibrations 285,287 and to Si-O-Si(Al) bending modes.
286,287 The position of the strong band near 201 cm -1 varies depending on the clay mineral type 285,287 ; it is probably due to SiO 4 vibrations, influenced by Al substitution and by the dioctahedral or trioctahedral character. Weaker bands due to the OH bending vibration are observed near 850-920 cm -1 . 287 In the TBAB spectrum (Figure 17, blue line), the sharp Raman bands near 2800-3000 cm -1 are due to the C-H stretching vibrations. The strong bands near 250-400 cm -1 can be assigned to the C-C aliphatic chain vibrations. The broader feature near 1380 cm -1 can be assigned to the CH 3 bending vibration, while the feature near 1460 cm -1 is due to the asymmetric CH 3 vibrations. The band near 1331 cm -1 corresponds to the C-N bending vibration, and the weaker band near 1057 cm -1 is due to the C-C aliphatic chain vibrations. 288 The Raman spectrum of the surfactant-modified MMT shows, as expected, characteristic bands of both MMT and TBAB. The sharp peak at 705 cm -1 attributed to the SiO 4 vibrations, the broader feature near 421 cm -1 due to the M-OH bending vibrations (Figure 17, red line) 285,287 and to Si-O-Si(Al) bending modes 286,287 , and also the weaker bands near 850-920 cm -1 due to the OH bending vibration 287 all correspond to the MMT Raman spectrum. The sharp Raman bands near 2800-3000 cm -1 due to the C-H stretching vibrations and the bands near 250-400 cm -1 assigned to the C-C aliphatic chain vibrations are common with the TBAB Raman spectrum. Also, the feature near 1450 cm -1 due to the asymmetric CH 3 vibrations, the band near 1318 cm -1 corresponding to the C-N bending vibration, and the weaker band near 1058 cm -1 due to the C-C aliphatic chain vibrations 288 clearly demonstrate the incorporation of the surfactant in the clay interlayer.
After the partial removal of the surfactant from MMT, the assigned bands of TBAB are still present in the Raman spectrum (Figure 17, green line), but with lower intensities, which explains why the interlayer spaces were maintained (Table 2). The X-ray diffraction, FTIR and Raman data were confirmed by the elemental analysis results presented in Table 6. The TBA+ content, calculated from the N% values (the doping agent being a quaternary ammonium salt), is 10.17% for the fully doped clay and decreases to 2.07% after the partial extraction of the surfactant.

Clay film permeability

The presence of the MMT film at the GCE surface increases its voltammetric response in Ru(NH3)6 3+ (Figure 18C). This is due to the accumulation of the positively-charged electroactive probe by ion exchange in the clay film. For GCE+MMT/M, GCE+MMT+TBAB/M, and GCE+MMT+TBAB(-TBAB)/M, the stationary value of the current intensity is reached after 30 potential scans at 0.020 V s-1 (Figures 18C, 18D, 18E). This behavior was observed for all the MMT samples used (unmodified or modified with different compounds) and corresponds to diffusion from the solution/clay interface to the electrode surface, in agreement with other authors 251 . It proves the cation exchange capacity of the modified clay, which can be exploited in ion exchange voltammetry (detection of cations after preconcentration). The first voltammetric cycles recorded with the MMT-modified GCE show less intense peaks than those obtained on bare GCE, because the clay film reduces the active area of the electrode surface. This was observed only when cycling the potential immediately after immersion of the electrode in the analyte solution. If the electrode was kept in the electrolyte before testing, the first voltammetric cycle showed peak currents comparable with those obtained on the bare electrode (data not shown).
The presence of TBA+/TBAB in the clay is indicated by the lower response of the film to the Ru(NH3)6 3+ probe (lower accumulation by cation exchange, see Figure 18D). This signal decrease also supports the presence of some holes in the film (with a crack-free film, the response to Ru(NH3)6 3+ would have been totally suppressed). On the other hand, after partial removal of the surfactant, the accumulation response of the Ru(NH3)6 3+ species upon continuous potential cycling is more marked (compare Figures 18E and 18C) due to the increase in the clay interlayer spacing and to cation exchange in the clay. An important feature here is that extraction of the surfactant was carried out using ethanol and NaClO4 (not the classically used ethanol/HCl mixture) to avoid any chemical degradation of the clay (i.e., acid hydrolysis of the aluminum sites in the aluminosilicate) and to maintain its cation exchange capacity. These results indicate the promising use of GCE+MMT+TBAB(-TBAB)/M for preconcentration electroanalysis of cationic analytes (see section 3.3 for confirmation). The negatively charged clay repels the negatively charged [Fe(CN)6]3- (Figure 19C) and the electrochemical signal decreases. On the basis of the net negative charge of the MMT clay, negatively charged species would be expected to be totally rejected. The results in Figure 19 also indicate that the MMT and TBAB-modified MMT films may present some cracks that permit [Fe(CN)6]3- diffusion to the electrode surface. However, some studies in the literature mention that even cation-exchanging clays can accumulate anionic species to some extent.
251,289,290 Prior to surfactant extraction, a very small but clearly visible signal can be observed when using the Fe(CN)6 3- probe (see Figure 19D), growing slightly and continuously upon successive cycling. This behavior suggests some accumulation of the negatively-charged Fe(CN)6 3- species, which can be attributed to the fact that the tetrabutylammonium cation, TBA+, is likely to remain associated with the clay particles. 284 It is known that when smectite-type clays are treated with a cationic surfactant at a concentration ranging between 0.5 and 1.5 times the CEC of the clay, the surfactant molecules adopt a bilayer or pseudotrimolecular arrangement within the clay platelets, with both vertical and horizontal orientations of the alkylammonium chains. 291 The aggregation of the organic cations therefore occurs via both ion exchange and hydrophobic bonding, which creates positive charges in the clay layer and on the clay surface, thereby inducing a possible uptake of anionic species by the clay composite via the formation of surface-anion complexes. 292,293 In the present case, the slight accumulation of redox species observed in Figure 19D can be ascribed to the formation of ion pairs between TBA+ cations and Fe(CN)6 3- anions. The accumulation effect is, however, much lower than for classical surfactant-modified clay films (e.g., smectite clay modified with hexadecyltrimethylammonium 265 ), perhaps due to the presence of the cellulose membrane around the surfactant-modified clay particles.
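The surfactant loading regime mentioned above (0.5-1.5 times the CEC) can be translated into a mass dose per gram of clay. A minimal sketch, using as an example the CEC of 0.78 meq g-1 reported for the fine clay fraction in Chapter 4 and the molar mass of TBAB; the function name and the one-ion-per-equivalent assumption (monovalent alkylammonium cation) are illustrative, not taken from this work:

```python
# Sketch: surfactant dose corresponding to a given multiple of the clay CEC.
# Assumption: a monovalent alkylammonium cation, so 1 meq = 1 mmol surfactant.

M_TBAB = 322.37   # g/mol, tetrabutylammonium bromide

def surfactant_dose(cec_meq_per_g, ratio, molar_mass):
    """Mass of surfactant (mg) per gram of clay for ratio x CEC."""
    mmol = cec_meq_per_g * ratio        # mmol surfactant per g clay
    return mmol * molar_mass            # mg per g clay

# For a clay with CEC = 0.78 meq/g (fine fraction used in this thesis):
low  = surfactant_dose(0.78, 0.5, M_TBAB)   # ~126 mg TBAB per g clay
high = surfactant_dose(0.78, 1.5, M_TBAB)   # ~377 mg TBAB per g clay
```

This gives roughly 0.13-0.38 g of TBAB per gram of clay for the bilayer/pseudotrimolecular regime described above.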
After partial removal of the surfactant, the absence of a signal for the negatively-charged Fe(CN)6 3- species (see Figure 19E) is again explained by electrostatic repulsion from the negatively-charged clay sheets. All the clay films used in this work greatly attenuate the current response in comparison to the bare GCE (about 70% signal suppression). For the neutral Fc(MeOH)2 redox probe, the signal recorded on the MMT-modified GCE is quite similar to that obtained on bare GCE (compare Figures 20C and 20A), which proves that the clay film is porous enough to let the neutral Fc(MeOH)2 species reach the electrode surface quite easily. The reduction potential is shifted in the negative direction in the presence of the clay films, from 0.034 V on bare GCE to -0.09 V on the MMT-modified GCE and -0.02 V on GCE+MMT+TBAB(-TBAB)/M. The oxidation peak potential is shifted to less positive values, from 0.16 V on bare GCE to 0.03 V for GCE+MMT/M. The neutral Fc(MeOH)2 probe is still detectable on the GCE+MMT+TBAB(-TBAB)/M film electrode, but less easily than for GCE+MMT/M (compare Figures 20E and 20C), because the increase in the clay interlayer spacing makes it more difficult for the probe molecules to reach the underlying electrode surface.

EIS determinations

EIS was employed to characterize the electron transfer properties of the modified electrodes (Figure 21). The typical Nyquist plot includes a semicircle and a linear zone, which correspond to the electron-transfer-limited process and the diffusion-limited process, respectively. The proposed equivalent circuit is R1(Q1[R2W1]), both for the unmodified GCE and for the GCE modified with MMT (plain and with TBAB). The conventional double-layer capacitance (Cdl) is replaced by a constant phase element (CPE), which represents the non-uniform behavior of adsorbed species on the irregular geometry of the small electrode surface.
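The impedance of the R1(Q1[R2W1]) circuit described above can be written out explicitly: R1 in series with a CPE in parallel with (R2 + Warburg element). The sketch below uses illustrative parameter values chosen for shape only (the text reports only Rct = R2, in the range ~129-395 Ω); it is not a fit to the measured spectra:

```python
# Sketch: impedance of the R1(Q1[R2 W1]) equivalent circuit.
# Z_CPE = 1/(Q (jw)^n); semi-infinite Warburg Z_W = sigma (1 - j) / sqrt(w).
# All parameter values below are assumed, illustrative ones.
import numpy as np

def z_circuit(f, R1, Q, n, R2, sigma):
    w = 2 * np.pi * f
    z_cpe = 1.0 / (Q * (1j * w) ** n)          # constant phase element
    z_w = sigma * (1 - 1j) / np.sqrt(w)        # semi-infinite Warburg
    return R1 + 1.0 / (1.0 / z_cpe + 1.0 / (R2 + z_w))

f = np.logspace(5, -1, 60)                     # 100 kHz -> 0.1 Hz
Z = z_circuit(f, R1=100.0, Q=2e-6, n=0.9, R2=395.0, sigma=50.0)
# Nyquist plot: real(Z) vs -imag(Z); the semicircle diameter approximates
# R2 (i.e., Rct), and the low-frequency 45-degree line is the Warburg tail.
```

At high frequency Z tends to R1 (the solution resistance), while at low frequency the Warburg branch dominates, reproducing the semicircle-plus-line shape described in the text.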
The reaction seems to occur in a single step, and a combination of kinetic and diffusion processes (with infinite diffusion thickness) describes the whole behavior. The high-frequency section of the Nyquist curves describes an arc whose diameter gives the Rct value (the electron transfer resistance), which increases in the presence of MMT from 128.58 Ω (GCE) to 395.25 Ω (GCE+MMT+TBAB(-TBAB)/M) due to the coverage of the electrode surface with non-conductive clay films that obstruct electron transfer. The best electrode surface coverage is achieved using TBAB-modified MMT films after the surfactant extraction. The EIS results confirmed that the clay films were fixed at the GCE surface and that the presence of the clay decreases the electron transfer rate of [Fe(CN)6]3-.

Optimization of experimental conditions

Supporting electrolyte for optimal metal ion detection

Different supporting electrolytes are known to give different electroanalytical responses at CLMEs towards the detection of certain analytes, by increasing/decreasing the catalytic current response and/or lowering the detection potential. To choose a suitable detection medium, the electrochemical behavior of the tested ions was studied by SWV on GCE+MMT+TBAB(-TBAB)/M in different media (0.1 M HCl; 0.2 M HNO3; 0.1 M KCl, pH 4.0; 0.1 M NaNO3, pH 5.5; all in ultrapure water). The Cu(II) oxidation peak current is higher in 0.2 M HNO3 (Figure 22), and in this case the peak potential is shifted to less negative values (-0.17 V vs. Ag/AgCl). In 0.1 M NaNO3 the peak potential appears at -0.22 V vs. Ag/AgCl, but the peak shape is more appropriate for analytical work; moreover, an acidic medium might destroy the internal structure of the clay. Therefore, 0.1 M NaNO3 was used as the supporting electrolyte in all subsequent electrocatalytic experiments for Cu(II) detection.
Accumulation time

The effect of accumulation time was studied over a range from 0 to 25 min in Cu2+ and Cd2+ solutions at different concentrations, and the results are presented in Figure 23. The peak height (current) increases with the accumulation time for both Cu(II) and Cd(II) and, at higher concentrations, reaches steady state after more than 15 min. This can be attributed to the saturation of the accessible adsorption sites in the clay film. The accumulation time was therefore set according to the amount of analyte in the solution. For the analysis of Cu2+ and Cd2+ in solutions of lower cation concentration, a longer period may be required in order to accumulate the metal ions at detectable levels. However, for the most sensitive system (GCE+MMT+TBAB(-TBAB)/M), the relatively large linear response range (from 0 to 20-25 minutes) demonstrates that the system is not easily saturated with metal ions and would require longer accumulation periods to reach saturation. These variations are typical of preconcentration electroanalysis at modified electrodes: accumulation occurs by analyte binding to active centers, with a first linear increase of the signal followed by leveling off when steady state is reached, due to the ion exchange equilibrium and the saturation of the ion exchange sites. This is in agreement with previous observations made for copper(II) and cadmium(II) electroanalysis at other kinds of modified electrodes 191,294,295 . For the further determinations, preconcentration times of 5 to 10 minutes were used, depending on the experiment type.

TBAB effect on montmorillonite

The effect of TBAB insertion in the clay structure was studied for both Cu2+ and Cd2+ aqueous solutions after 5 min accumulation at open circuit.
Figures 24A and 24B show that the modification of MMT with TBAB (after surfactant removal) led to an approximately two-fold increase of the electroanalytical signal for all the cations tested (compare curves a and b in Figures 24A and 24B). This is consistent with the increase of the interlayer distance between the clay sheets as a result of TBAB ion exchange. Ten consecutive voltammetric measurements were performed using the same modified electrode in order to determine the stability of the system towards Cu2+ and Cd2+, respectively. The MMT+TBAB(-TBAB)/M modified GCE shows good long-term operational stability for Cu2+ (Figure 25) and Cd2+ (Figure 26) and good reproducibility for successive preconcentration-detection steps on the same electrode (see example signals in the insets of Figures 25 and 26). This can be ascribed to the durable mechanical immobilization of the clay material by the cellulose membrane. Both unmodified and TBAB-modified MMT (tested after partial TBAB extraction) films present good stability, but the clay modification improves the preconcentration properties towards Cu(II) and Cd(II) ions. After each measurement, the modified electrode was regenerated by immersing it in 0.1 M NaNO3 solution and washing under magnetic stirring for 2 minutes. The resulting relative standard deviation (RSD) is 0.37% for Cu(II) and 1.554% for Cd(II) detection, indicating that the sensor obtained by fixing the clay film on the electrode with the cellulose membrane can easily be used for repetitive measurements.

Calibration data

SWV was used to determine the relationship between the analyte concentration (Cu(II) and Cd(II)) and the peak current intensity in the oxidation potential range, using GCE+MMT+TBAB(-TBAB)/M.
As shown in Figures 27 and 28, no anodic peaks appear when the experiments are performed in Cu(II)- and Cd(II)-free solutions. In the case of Cu(II), an anodic peak appears at about 0.1 V vs. Ag/AgCl after accumulation at open circuit in Cu2+ solution, and the peak current increases with the cation concentration. The oxidation peak potential is slightly shifted to more positive values as the analyte concentration in the preconcentration solution increases. The LOD values were estimated on the basis of a signal-to-noise ratio of 3, 267 while the LOQ values were estimated on the basis of a signal-to-noise ratio of 10. The relationship between Cu(II) concentration and current intensity obtained by SWV was linear in the range from 1.2×10-7 M to 7.5×10-6 M, according to the equation I (μA) = (2.9823 ± 0.1077) × [Cu2+] (μM) + (0.2333 ± 0.03601), with a correlation coefficient of 0.9922 (8 points considered for the linear fit). For Cu(II), the LOD was estimated at 3.62×10-8 M and the LOQ at 10.86×10-8 M. In the case of Cd(II), an anodic peak appears at about -0.9 V vs. Ag/AgCl after accumulation at open circuit in Cd2+ solution, and the peak current increases with the cation concentration. No shift of the oxidation peak potential was observed when increasing the Cd2+ concentration in the preconcentration solution. The relationship between Cd(II) concentration and current intensity obtained by SWV was linear in the range from 2.16×10-7 M to 2.5×10-6 M, according to the equation I (μA) = (0.69791 ± 0.0144) × [Cd2+] (μM) - (0.0277 ± 0.0167), with a correlation coefficient of 0.9983 (6 points considered for the linear fit). The Cd(II) detection limit was estimated at 7.2×10-8 M, while the calculated LOQ value was 21.6×10-8 M.
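The LOD/LOQ values above follow directly from the calibration slope and a blank-noise estimate through the S/N = 3 and S/N = 10 criteria. A minimal sketch; the noise standard deviation used here is not the measured one but a hypothetical value back-calculated from the reported Cu(II) LOD:

```python
# Sketch: LOD/LOQ from a calibration slope and baseline noise (S/N = 3 and 10).
# sigma_blank is an assumed illustrative value, not the measured noise.

def lod_loq(slope, sigma_blank):
    """slope in uA/uM, sigma_blank in uA -> (LOD, LOQ) in uM."""
    return 3 * sigma_blank / slope, 10 * sigma_blank / slope

slope_cu = 2.9823          # uA/uM, slope of the Cu(II) calibration line
sigma = 0.036              # uA, hypothetical blank noise
lod, loq = lod_loq(slope_cu, sigma)
# lod ~ 0.036 uM = 3.6e-8 M, close to the reported 3.62e-8 M for Cu(II)
```

By construction, LOQ/LOD = 10/3 whatever the noise level, which is why the reported LOQ values are about 3 times the LODs.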
Simultaneous determination of multicomponent cation solutions

Figure 29 shows the voltammetric response of a multi-component solution of Cd2+, Pb2+, and Cu2+ recorded on GCE+MMT+TBAB(-TBAB)/M after 5 min preconcentration in a 10-5 M solution of each metal ion, in comparison with the signals obtained for monocomponent solutions determined under the same conditions. The interactions between the three cations are minimal in terms of peak current intensity, but the peak potential is shifted to less negative values for the Cd(II) signal, while for Cu(II) the shift is to more negative values. The position of the Pb(II) peak is less affected by the presence of the other metal ions.

Interference study

The selectivity of the MMT-based sensor for Cu2+ and Cd2+ was evaluated in the presence of some common metal ions expected to influence the signals of electrochemical sensors: Pb2+, Co2+, Ni2+, Zn2+, Ba2+, Na+, and K+, tested at the same concentration. 10-6 M solutions of Cu2+ and Cd2+ and of the possible interfering ions were tested by SWV using the GCE+MMT+TBAB(-TBAB)/M system. The current intensity was measured for each determination, and the results are presented in Table 7. The presence of Pb2+, Co2+, Zn2+, Ba2+, Na+, and K+ does not significantly influence the electrochemical signal of Cu2+ and Cd2+, but an important decrease (12-16%) in current intensity was observed in the presence of Ni2+ at the same concentration as the main analytes. To avoid interference in real sample analysis, dilution of the samples prior to measurement is recommended when they contain high concentrations of cations.

Conclusions

This work demonstrates that a low-cost pillared clay is attractive as an electrode modifier for electrochemical sensors.
TBAB-modified montmorillonite was prepared, and the effective grafting and functionalization of MMT with TBAB were confirmed by XRD, FT-IR, FT-Raman, and EIS determinations. After partial removal of the surfactant, the resulting material preserved the basic properties of the clay (i.e., cation exchange) and exhibited excellent permeability properties and long-term mechanical stability, which could notably be exploited in preconcentration electroanalysis. The selectivity of the MMT-based sensor for Cu2+ and Cd2+ was not significantly influenced by the presence of Pb2+, Co2+, Zn2+, Ba2+, Na+, and K+ ions. The detection limits were estimated at 3.62×10-8 M for Cu2+ and 7.2×10-8 M for Cd2+, respectively. The new material exhibited good permeability properties towards selected redox probes (cationic, neutral, and anionic). The partial removal of TBA+ ions minimized the positive electrostatic barrier towards cation adsorption and created free adsorption sites, thus improving the electrochemical performance of the new sensor material. In conclusion, the sensor was easily fabricated and showed linear response ranges and good reproducibility, being a promising electrode modifier for the accumulation of other cationic toxic species. This low-cost device will be a useful alternative in the environmental monitoring of highly toxic contaminants.

4 Clay-mesoporous silica composite films generated by electro-assisted self-assembly

Introduction

Structuration of electrode surfaces with inorganic thin films has become a well-established field of interest, notably for applications in electroanalysis.
116,121,123,126,128,160,249,253,262,296-306 Various materials have been used for that purpose, including zeolites 128,297-299 , clays 126,249,262,300 and layered double hydroxides 300,301 , silica 128,253,302 and silica-based organic-inorganic hybrids 121,302 , sol-gel materials 121,123,303,304 and, more recently, ordered mesoporous materials. 116,128,160,305,306 The driving force for selecting one or another of these electrode modifiers often relies on their particular properties (ion exchange, selective recognition, hosting capacity, size selectivity, redox activity, permselectivity, etc.), which can be useful for the final application (preconcentration electroanalysis, electrocatalysis, permselective coatings, biosensors, ...). As most of these materials are electronic insulators, their use in connection with electrochemistry requires close contact with an electrode surface, which can be achieved either via dispersion of as-synthesized powdered materials in a conductive composite matrix (e.g., CPEs 272 ) or via deposition as thin films on solid electrode surfaces. In the latter case, a critical point is the uniformity and long-term mechanical stability of the thin coatings, which can be challenging when attempting to deposit particulate materials onto electrodes, often requiring an additional polymeric binder.
307 This is especially the case for zeolite film modified electrodes; the situation is somewhat less problematic for clay film modified electrodes because the particular platelet morphology of clay particles and their self-adhesive properties towards polar surfaces ensure better interaction with most electrode materials and, consequently, more durable immobilization. Nevertheless, besides the classical physical attachment of a clay film to a solid electrode surface (through solvent casting, spin-coating or layer-by-layer assembly as the mainstream techniques 249,273 , or electrophoretic deposition), other strategies based on covalent bonding (via silane or alkoxysilane coupling agents) have also been developed. 274 On the other hand, the versatility of the sol-gel process makes it especially suitable for coating electrode surfaces with uniform deposits of metal or semimetal oxides (mainly silica) and organic-inorganic hybrid thin films of controlled thickness, composition and porosity. The method is intrinsically simple and exploits the fluidic character of a sol (typically made of alkoxysilane precursors for silica-based materials), which is cast on the surface of a solid electrode and allowed to gel and dry into a xerogel film 121,123,303,304 . The driving force for film formation is thus solvent evaporation, and this approach is often sufficient to yield high-quality films on flat surfaces 308 , including the ordered mesoporous ones generated by evaporation-induced self-assembly in the presence of surfactant templates 139 .
Sol-gel thin films can also be generated on electrode surfaces by electrolytic deposition. The method involves immersing the electrode in a hydrolyzed sol solution (typically in moderately acidic medium) and applying a suitable cathodic potential to generate hydroxyl ions locally at the electrode/solution interface; this local pH increase catalyzes the polycondensation of the precursors and the growth of the silica film on the electrode surface. 150,309,310 The approach can be extended to the generation of organically-functionalized silica films 152,153 or to the co-deposition of sol-gel/metal nanocomposites 311 or conductive polymer-silica hybrids 312 , and is compatible with the encapsulation of biomolecules to build bioelectrocatalytic devices. 313,314 When applied in the presence of a surfactant template (i.e., CTAB), the method enables the deposition of highly ordered mesoporous silica films with mesopore channels oriented normal to the underlying support 148,149 , a configuration ensuring fast mass transport of analytes from the solution through the film to the electrode surface and thus offering great promise for the elaboration of sensitive electroanalytical devices 182 . Moreover, contrary to the evaporation method, electro-assisted generation enables uniform deposition of sol-gel layers onto electrode surfaces of complex geometry or complex conductive patterns (i.e., gold CD-trodes 153 , macroporous electrodes 315 , metal nanofibers 316 or printed circuits 317 ) or at the local scale using ultramicroelectrodes 318,319 . The method has also proven suitable for generating sol-gel materials through nano- or micro-objects deposited onto electrode surfaces, such as nanoparticles 320 or bacteria 321 , which then act somewhat as templates for film growth.
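To give a sense of scale for the electro-assisted mechanism described above, the amount of hydroxyl ions generated during a short cathodic pulse can be estimated from Faraday's law. The current value below is a purely hypothetical assumption for illustration (no deposition current is reported here):

```python
# Sketch: moles of OH- generated at the cathode during electro-assisted
# deposition, from Faraday's law. The current value is an assumed one.
F = 96485.0            # C/mol, Faraday constant

def oh_generated(current_A, time_s, n_electrons=1):
    """Moles of OH- produced, assuming one OH- per n_electrons transferred."""
    return current_A * time_s / (n_electrons * F)

# e.g. a hypothetical 1 mA cathodic current applied for a 10 s pulse:
n_oh = oh_generated(1e-3, 10.0)    # ~1.0e-7 mol of OH-
```

Even such sub-micromole quantities, confined to the thin interfacial region, are sufficient to raise the local pH and trigger the polycondensation described above.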
In the present study, we have further extended the electro-assisted deposition method to generate clay-mesoporous silica composite films and have characterized their permeability properties towards selected redox probes (cationic, neutral, and anionic). The synthesis procedure first involves the deposition of clay particles (montmorillonite-rich natural clay from Romania) onto a glassy carbon electrode surface, followed by the electro-assisted self-assembly of a surfactant-templated mesoporous silica around the clay particles. The use of a cationic surfactant (i.e., CTAB) in the synthesis medium was motivated by at least two considerations: it contributes to templating the silica film (which exhibits distinct permeability properties before and after extraction 148,149,182 ), and it can be incorporated into the interlayer region of the clay by cation exchange (which is known to modify the interlayer spacing between the clay sheets 284 ). Even if some silica-clay composites have been described in the literature 322-324 , including two as thin layers on electrodes 325,326 , the present work provides, to the best of our knowledge, the first example of electrogenerated clay-mesoporous silica composite films. Their permeability properties and the long-term operational stability of the modified electrodes are discussed with regard to their physicochemical characteristics. 275

Experimental

Reagents and materials

Tetraethoxysilane (TEOS, 98%, Alfa Aesar), ethanol (95-96%, Merck), NaNO3 (99%, Fluka), HCl (37%, Riedel de Haen), and CTAB (99%, Acros) were used as received for sol-gel film synthesis. The redox probes employed for permeability characterization were of analytical grade: ferrocene dimethanol (Fc(MeOH)2, Alfa Aesar), potassium hexacyanoferrate(III) (K3Fe(CN)6, Fluka), and hexaammineruthenium chloride (Ru(NH3)6Cl3, Sigma-Aldrich); they were typically used in 0.1 M NaNO3 solution.
A certified copper(II) standard solution (1000 ± 4 mg L-1, Sigma-Aldrich) was used to prepare diluted solutions for the preconcentration studies. The electrolytes KCl (99.8%) and HCl (36% solution) were obtained from Reactivul Bucureşti. All solutions were prepared with high-purity water (18 MΩ cm) from a Millipore Milli-Q water purification system. The clay sample used in this study was a natural Romanian clay from Valea Chioarului (Maramureş County), which consisted mainly of smectite with minor amounts of quartz. Its physico-chemical characterization is provided elsewhere 239 . The structural formula is (Ca0.06Na0.27K0.02)Σ=0.35(Al1.43Mg0.47Fe0.10)Σ=2.00(Si3.90Al0.10)Σ=4.00O10(OH)2·nH2O. It is characterized by a surface area (N2, BET) of 190 m2 g-1. Only the montmorillonite-rich fine fraction of the clay (< 0.2 μm) was used here; it was collected by sedimentation according to Stokes' law, after the raw clay was suspended in water, ultrasonicated for about 15 min and allowed to settle, followed by centrifugation and ultracentrifugation of the supernatant phase. This fine fraction has a cation exchange capacity (CEC) of 0.78 meq g-1. Its XRD diffractogram showed a high content of montmorillonite (with its characteristic peaks at 2θ: 6.

Preparation of the clay-mesoporous silica films

Glassy carbon electrodes (GCE, 5 mm in diameter) were first polished on wet silicon carbide paper using 1 and 0.05 μm Al2O3 powder sequentially and then washed in water and ethanol for a few minutes. The GCEs were then coated with a clay film, prepared by depositing a 10 μL aliquot of an aqueous clay suspension (5 mg mL-1) by spin-coating onto the GCE surface, as in 251 . The film was dried for 1 h at room temperature prior to any further use. This electrode is denoted below as "GCE-clay".
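The sedimentation step above (collecting the < 0.2 μm fraction via Stokes' law) can be checked with a quick settling-velocity estimate. The particle density below is an assumed typical value for montmorillonite, and clay platelets are not spheres, so this is only an order-of-magnitude sketch:

```python
# Sketch: Stokes settling velocity of a spherical particle in water,
# v = (rho_p - rho_f) * g * d^2 / (18 * mu). rho_p is an assumed density.
g = 9.81           # m/s^2
rho_p = 2600.0     # kg/m^3, assumed montmorillonite particle density
rho_f = 1000.0     # kg/m^3, water
mu = 1.0e-3        # Pa s, water viscosity near room temperature

def stokes_velocity(d_m):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m."""
    return (rho_p - rho_f) * g * d_m ** 2 / (18.0 * mu)

v = stokes_velocity(0.2e-6)   # ~3.5e-8 m/s, i.e. only ~3 mm per day
# -> gravity settling alone is impractically slow for < 0.2 um particles,
#    hence the centrifugation/ultracentrifugation steps mentioned above.
```

The v ∝ d² scaling is what makes sedimentation an effective size cut-off: particles larger than the target diameter settle out quickly, while the fine fraction stays in the supernatant until it is recovered by (ultra)centrifugation.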
A mesoporous silica film was then electrogenerated through this film, around the clay particles, onto the electrode surface under potentiostatic conditions. This was typically achieved from a precursor solution containing 20 mL ethanol and 20 mL of an aqueous solution of 0.1 M NaNO3 and 0.1 M HCl, to which 13.6 mmol TEOS and 4.35 mmol CTAB were added under stirring (optimized conditions as in 148,149,182 ); the resulting sol was aged for 2 h prior to use. The GCE-clay electrode was immersed in this sol, and electro-assisted deposition was performed by applying -1.3 V for 10 s. The electrode was then quickly removed from the solution, rinsed with water, and dried/aged overnight in an oven at 130 °C. The resulting composite film electrode is denoted "GCE-clay-mesopSiO2". It can be used as such or after template removal; in the latter case, the CTAB template is solvent-extracted with an ethanol solution containing 0.1 M NaClO4 for 5 min under moderate stirring. For comparison purposes, a template-free silica film was also deposited onto the GCE-clay electrode, under exactly the same conditions but without CTAB in the starting sol. It is denoted "GCE-clay-SiO2". 275

Apparatus and characterization procedures

All electrochemical experiments were performed using a PGSTAT-12 potentiostat (EcoChemie) monitored by the GPES software. A conventional three-electrode cell configuration was employed for the electrochemical measurements. Film-modified GCEs were used as working electrodes, with a saturated Ag/AgCl electrode (Metrohm) as reference and a platinum wire as counter electrode. CV measurements were carried out in 5 mM Fc(MeOH)2, 1 mM K3Fe(CN)6, or 0.1 mM Ru(NH3)6Cl3 (in 0.1 M NaNO3). CV curves were typically recorded in multisweep conditions (ca.
20 cycles) at a potential scan rate of 20 mV s-1, and were used to qualitatively characterize accumulation/rejection phenomena and mass transport through the various films. Accumulation-detection experiments were also performed using copper(II) as a model analyte. Typically, open-circuit accumulation was carried out from diluted copper(II) solutions (10-7-10-6 M) at pH 5.5, and voltammetric detection was achieved after medium exchange to a copper(II)-free electrolyte solution (0.1 M KCl + 0.1 mM HCl) by SWV, at a scan rate of 5 mV s-1, a pulse amplitude of 50 mV and a pulse frequency of 100 Hz. The film morphology was observed by scanning electron microscopy (SEM); micrographs were recorded with a Hitachi FEG S-4800 apparatus. The film structure was characterized by X-ray diffraction (XRD) in Bragg-Brentano geometry using a Panalytical X'Pert Pro diffractometer operating with a copper cathode (λKα = 1.54056 Å). Atomic force microscopy (AFM, Thermomicroscope Explorer Ecu+, Veeco Instruments SAS) was also used to evaluate the film thickness. 275

Results and discussions

Films preparation and permeability properties evaluated by cyclic voltammetry

The various electrode configurations investigated here are schematically represented in Figure 30. In the raw clay modified electrode (GCE-clay, Figure 30a), the clay particles are expected to lie randomly on the electrode surface 249 . Then, the non-occupied volume between these particles is filled upon electro-assisted generation of the surfactant-templated silica material (GCE-clay-mesopSiO2) or the non-templated silica layer (GCE-clay-SiO2).
It was previously proven that electro-assisted deposition of sol-gel films proceeds by a mechanism involving the generation of hydroxyl ions under potential control, leading to a pH increase at the electrode/solution interface that catalyzes the sol-gel film deposition 148-150, 152, 182, 309, 310, 320 . The method is thus conceptually different from the direct electrodeposition of composites such as clay-metallic films (i.e., using the clay particles as quasi-templates 327 ) or conductive polymer-clay films 328 , for which the deposited matter results from the direct electrochemical transformation of the precursors (reduction of metal ions or electropolymerization of monomers), so that film growth necessarily starts from the underlying electrode surface. In the present case, the electron transfer reaction does not contribute to the electrochemical transformation of the precursors but generates the catalyst (i.e., hydroxyl ions) that induces the polycondensation of the precursors and the formation of the silica network. These catalytic species are expected to be present in a rather thick region at the electrode/solution interface (i.e., the diffusion layer), so that the gelification process can occur simultaneously in the whole non-occupied volume between the clay particles to form the clay-mesoporous silica film (GCE-clay-mesopSiO2, Figure 30b-c) or the clay-silica composite film (GCE-clay-SiO2, Figure 30d). Indeed, it has been reported that electro-assisted deposition of mesoporous silica through an assembly of microparticles deposited onto an electrode surface happens in a multi-step process, with the material starting to deposit around the particles (in agreement with the faster gelification onto solid surfaces compared to bulk gelification from homogeneous sols 329 ) and then tending to fill the entire interstitial volume to give the final composite film 320 .
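The claim above that the hydroxyl ions occupy a "rather thick region" can be supported by an order-of-magnitude estimate of the diffusion layer, δ ≈ √(D·t). The diffusion coefficient below is an assumed typical value for small ions in water, not one measured in this work:

```python
# Sketch: diffusion-layer thickness after a deposition pulse, delta ~ sqrt(D*t).
# D is an assumed typical diffusion coefficient for small ions in water.
import math

def diffusion_layer_um(D_m2_s, t_s):
    """Approximate diffusion length in micrometers."""
    return math.sqrt(D_m2_s * t_s) * 1e6

delta = diffusion_layer_um(1e-9, 10.0)   # ~100 um for a 10 s pulse
# -> far thicker than a deposited layer of sub-micrometer clay platelets,
#    so the OH- catalyst indeed spans the whole volume between the particles.
```

This is consistent with the picture of simultaneous gelification throughout the interparticle volume rather than layer-by-layer growth from the electrode surface.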
The schemes in Figure 30, though simplified, thus provide a realistic view of the different systems studied here. No real morphology difference is expected between GCE-clay-mesopSiO2 electrodes before and after surfactant extraction, except that mesoporosity would be revealed after template removal.148 The cyclic voltammetry characterization of these four film electrodes (GCE-clay, GCE-clay-mesopSiO2 before and after surfactant extraction, and the non templated GCE-clay-SiO2 film), using three relevant redox probes (Fe(CN)6 3-, Fc(MeOH)2, Ru(NH3)6 3+), is summarized in Figure 31. Several observations and conclusions can be drawn from these data.

Figure 30 Schematic view of the various electrode configurations used in this work

First, the raw clay deposit acts as a barrier to the negatively-charged Fe(CN)6 3- species (see part B1 in Figure 31), as a result of electrostatic repulsions by the negatively-charged clay sheets. Nevertheless, the clay film is still porous, as it lets the neutral Fc(MeOH)2 species quite easily reach the electrode surface, which resulted in a CV response almost identical to that on bare GCE, at least when reaching steady-state after 20 cycles (compare parts B2 and A2 in Figure 31). Finally, the CV signal of the positively-charged Ru(NH3)6 3+ species was larger than at the bare electrode (compare parts B3 and A3 in Figure 31) and was found to grow continuously upon multiple successive potential scans (see part B3 in Figure 31), owing to the preconcentration of these Ru(NH3)6 3+ species by cation exchange in the clay particles. Overall, this behavior is consistent with previous observations on similar systems251 and the preconcentration capacity was exploited in ion exchange voltammetry.330 The situation was definitely different for the GCE-clay-mesopSiO2 electrode. Let us first consider the case prior to surfactant extraction.
Using the Fe(CN)6 3- probe resulted in a very small, yet clearly visible, signal (see part C1 in Figure 31), slightly and continuously growing upon successive cycling. This behavior suggests some accumulation of the negatively-charged Fe(CN)6 3- species. To interpret this result, one has to bear in mind that sol-gel electro-assisted deposition was performed in the presence of CTAB in the solution (at ca. 0.1 M), and that the cetyltrimethylammonium cation, CTA+, is likely to be taken up by the clay particles.284 Actually, when smectite type clays are treated with a cationic surfactant at a concentration ranging between 0.5 and 1.5 times the CEC of the clay, the surfactant molecules adopt a bilayer or pseudotrimolecular arrangement within the clay platelets, with both vertical and horizontal orientations of the alkylammonium chains.291 The aggregation of the organic cations thus occurs via both ion exchange and hydrophobic bonding, which leads to the creation of positive charges in the clay layer and on the clay surface, thereby inducing possible uptake of anionic species by the clay composite via the formation of surface-anion complexes.292,293 In the present case, even if it is impossible to quantitatively determine the amount of surfactant molecules in the clay, one can reasonably ascribe the slight accumulation of redox species observed in part C1 in Figure 31 to the replacement of the weakly retained counterions (Br-) of the surfactant in the clay by the Fe(CN)6 3- probe, leading to the formation of ion pairs between CTA+ cations and Fe(CN)6 3- anions. The accumulation effect is however much lower than that for classical surfactant-modified clay films (e.g., smectite clay modified with hexadecyltrimethylammonium265) because of the presence of the surfactant-templated mesoporous material around the clay particles in the present case.
An additional indication of the presence of CTA+/CTAB in the clay is the almost totally suppressed response of the film to the Ru(NH3)6 3+ probe (no more accumulation possible by cation exchange, see part C3 in Figure 31). This signal suppression also tends to support a good coverage of the whole electrode surface with a crack-free clay-mesoporous silica composite film (the presence of some holes would have resulted in a noticeable CV response). Finally, the response to the neutral Fc(MeOH)2 species was lower than before, but still significant (40 % of the intensity on bare GCE), and shifted by ca. 0.1 V towards more anodic potentials (compare parts B2 and C2 in Figure 31). This is explained by the solubilization of the neutral probe in the surfactant phase, as previously reported for electrodes covered with surfactant-templated mesoporous silica films.141,148 After surfactant removal, the GCE-clay-mesopSiO2 electrode again exhibited a distinct behavior (compare parts C1-3 and D1-3 in Figure 31). Note that extraction of the surfactant template was made using ethanol and NaClO4 (not the classically used ethanol/HCl mixture) to avoid any chemical degradation of the clay (i.e., acid hydrolysis of the aluminum sites in the aluminosilicate) and to maintain its cation exchange capacity. The voltammetric characteristics of the surfactant-extracted GCE-clay-mesopSiO2 electrode somewhat resemble those observed for the raw clay film electrode (GCE-clay), yet with some differences, as can be seen by comparing parts B1-3 and D1-3 in Figure 31. The absence of signal for the negatively-charged Fe(CN)6 3- species (see part D1 in Figure 31) is again explained by electrostatic repulsions from both the negatively-charged clay sheets and the negatively-charged silica surface (as also evidenced for pure mesoporous silica films141).
The neutral Fc(MeOH)2 probe is still detectable on the GCE-clay-mesopSiO2 film electrode but less easily than for GCE-clay (compare parts D2 and B2 in Figure 31) because the probe molecules now have to cross the mesoporous silica binder to reach the underlying electrode surface. On the other hand, the accumulation response of Ru(NH3)6 3+ species upon continuous potential cycling is more marked (compare parts D3 and B3 in Figure 31) due to the synergistic properties of the composite film (cation exchange in the clay and favorable electrostatic interactions with the negatively charged silica surface; this last phenomenon was previously demonstrated for the accumulation of both Ru(NH3)6 3+ and Ru(bpy)3 2+ at mesoporous silica modified electrodes141,148). These results indicate the promising use of GCE-clay-mesopSiO2 for preconcentration electroanalysis of cationic analytes (see section 3.3 for confirmation). Some control experiments were also performed using a non templated GCE-clay-SiO2 composite film electrode. The results indicate a behavior comparable to that of GCE-clay-mesopSiO2 (suppressed response to Fe(CN)6 3-; significant signal for Fc(MeOH)2, suggesting the existence of some porosity; good response to the positively-charged Ru(NH3)6 3+ species, in agreement with previous observations made for sol-gel derived clay-silicate film electrodes using Fe(CN)6 3- and methylviologen as redox probes325), but a less effective accumulation of the Ru(NH3)6 3+ probe in comparison to the templated composite film (compare parts E3 and D3 in Figure 31).275

Physico-chemical characterization

XRD was first used to characterize the possible structural changes of the smectite clay upon entrapment within the CTAB-templated mesoporous silica film.
As expected, prior to sol-gel electrodeposition, the clay film exhibited the same montmorillonite characteristics as those reported for the raw clay particles in the experimental section (diffraction lines at 2θ values (°) of 6.9; 19.9; 21.8; 28.6; 36.1; 62.0, data not shown). Focusing on the low angle range (Figure 32), corresponding to the d001 reflection, one can see the effect of the various treatments (electro-assisted deposition in the presence of CTAB and template removal, respectively). Prior to any treatment, the d001 reflection appears at 2θ = 6.9° (see curve "a" in Figure 32), which corresponds to a d spacing of 12.9 Å (i.e., a classical interlayer distance for montmorillonite clays331,332). After electro-assisted deposition of the mesoporous silica, this line at 2θ = 6.9° almost disappears, being replaced by new lines at much lower 2θ values (i.e., 4.59° and 2.19°, see curve "b" in Figure 32). This indicates an expansion of the interlayer region between the clay sheets. As the clay was in contact with a CTAB solution prior to and during electro-assisted deposition of the mesoporous silica material, this expansion is certainly due to the incorporation of CTA+ and CTAB species in the clay interlayer (a well-documented process284,333,334). Actually, the XRD pattern (curve "b" in Figure 32) is very similar to those previously observed for montmorillonite treated with CTAB at a concentration at least 3 times higher than the clay cation exchange capacity (which is the case here), indicating the existence of a CTAB-clay material with the surfactant in a paraffin-bilayer configuration (i.e., with surfactant binding to the clay by ion exchange and via hydrophobic interactions).
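The interlayer distances discussed here follow directly from Bragg's law, d = λ/(2 sin θ), with the Cu Kα wavelength quoted in the experimental section. A quick numerical check:

```python
import math

CU_KALPHA = 1.54056  # Å, Cu Kα wavelength from the experimental section

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Bragg's law (first order): d = λ / (2 sin θ), with θ = 2θ/2."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

# Raw clay d001 (2θ = 6.9°), close to the ~12.9 Å quoted above
print(round(d_spacing(6.9), 1))    # 12.8
# After CTAB intercalation (2θ = 2.19° and its higher-order line at 4.59°)
print(round(d_spacing(2.19), 1))   # 40.3
print(round(d_spacing(4.59), 1))   # 19.2
```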
This result supports the above interpretation of the CV data obtained for the Fe(CN)6 3- probe at GCE-clay-mesopSiO2 (part C1 in Figure 31), for which the observed small increase of the voltammetric signal was attributed to the formation of ion pairs between CTA+ cations immobilized in/on the clay particles and Fe(CN)6 3- anions.284 After removal of the surfactant template from GCE-clay-mesopSiO2, the clay interlayer distance was found to recover almost its initial value (see curve "c" in Figure 32), with a d spacing of 13.2 Å (i.e., close to the 12.9 Å value measured for the raw clay). This indicates that the CTAB loading/removal from the clay is reversible and that no silica condensation occurred in the interlayer of the clay (as might occur in porous clay heterostructures335,336). Control XRD measurements performed for samples prepared without clay or without CTAB (see curves "d" and "e" in Figure 32) further confirmed the above discussed phenomena. SEM characterization of films similar to those analyzed by XRD provided additional information on their texture. Both top views and cross-sections are shown (Figure 33). Clay particles are clearly visible on the cross-sectional view of the spin-coated clay film (Figure 33B). By contrast, the composite material prepared by electro-assisted deposition of mesoporous silica with CTAB is characterized by a more homogeneous texture, which can be explained by a rather good filling of the inter-particle region of the clay film with the surfactant-templated silica matrix (Figure 33D). Some increase in the film thickness can also be evidenced (as confirmed by AFM measurements in Figure 34), consistent with the expansion of the clay in the presence of CTAB. On the other hand, the composite sample prepared by electro-assisted deposition of silica from a CTAB-free sol solution leads to a texture (Figure 33F) quite comparable to the one observed with the initial clay film.
This is explained by the much slower electro-assisted deposition of silica in the absence of CTAB148, resulting in smaller amounts of silica binder deposited around the clay particles. Note that SEM top views did not show different film features between the initial clay film and the composite material prepared by electro-assisted deposition of mesoporous silica with CTAB (compare parts A and C in Figure 33), which is also supported by AFM imaging (Figure 34), indicating that mesoporous silica deposition was essentially restricted to the clay layer. It can be concluded that electro-assisted deposition of a surfactant-templated mesoporous silica matrix through a clay film electrode allows the fabrication of a composite material displaying a good homogeneity in the whole thickness of the composite film, which is consistent with the above voltammetric behavior and suggests good mechanical stability, as discussed below on the basis of successive uses in preconcentration electroanalysis.275

Effect on copper(II) preconcentration and detection

To further characterize the novel composite films and to discuss their potential interest for voltammetric sensing, the modified electrodes were subjected to preconcentration analysis using copper(II) as a model analyte. The accumulation was made at open-circuit from an unbuffered copper(II) solution and detection was made by SWV after medium exchange to a slightly acidic chloride solution (pH 4) likely to desorb the previously accumulated copper(II) species. Actually, the detection sensitivity was highly pH-dependent (the signal intensity increasing continuously when decreasing the pH from 5 to 1) but, as far as multiple analyses with the same electrode are concerned, too acidic media have to be avoided to maintain the chemical integrity of the clay (i.e., its ion exchange capacity), so that pH 4 was chosen as the best compromise between sufficient sensitivity and good reusability.
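The pH of the detection medium follows from the HCl concentration given in the experimental section (0.1 mM, a fully dissociated strong acid); a one-line check:

```python
import math

c_hcl = 1e-4               # mol/L, HCl in the detection medium (0.1 mM)
pH = -math.log10(c_hcl)    # strong acid: [H+] ≈ c(HCl)
print(pH)                  # 4.0
```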
In these conditions, well-defined SWV curves can be obtained and the peak intensity depended on the electrode type, growing from GCE-clay-SiO2 to GCE-clay and to surfactant-extracted GCE-clay-mesopSiO2, in agreement with the trend observed in their CV response to the cationic Ru(NH3)6 3+ redox probe (compare parts E3, B3 and D3 in Figure 31). Focusing on the most sensitive system (GCE-clay-mesopSiO2), one can see in Figure 35 that the SWV response was a function of both the copper(II) concentration in the accumulation medium and the preconcentration time. The variations were as expected for preconcentration electroanalysis at modified electrodes involving accumulation by analyte binding to active centers (i.e., a first linear increase of the signal followed by a leveling off when reaching steady-state (ion exchange equilibrium or saturation of ion exchange sites)), in agreement with previous observations made for copper(II) electroanalysis at other kinds of modified electrodes.191,294,295 Interestingly, the time response was very fast (the voltammetric response of the electrode increased linearly as a function of accumulation time from the earliest accumulation times), contrary to the delay sometimes observed before a detectable signal is obtained, e.g., in the case of restricted diffusion rates or slow binding processes.191,294 Such good performance can be explained by the highly porous structure of the composite film after surfactant extraction. Another attractive feature is the rather good long-term operational stability of the GCE-clay-mesopSiO2, as illustrated in Figure 36 (curve "b"), and the good reproducibility observed for successive preconcentration-detection steps with the same electrode (see some typical signals in the inset of Figure 36). This can be ascribed to the durable immobilization of the clay material by physical entrapment in the surrounding mesoporous silica matrix.
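The accumulation-time behavior described above (initial linear rise, then leveling off at ion-exchange equilibrium) can be illustrated with a generic first-order saturation model. This is a sketch with hypothetical parameters, not a fit to the reported data:

```python
import math

def accumulated_signal(t_s, i_max=1.0, tau_s=120.0):
    """First-order approach to saturation: I(t) = I_max * (1 - exp(-t/τ)).
    For t << τ this reduces to the initial linear regime I ≈ I_max * t/τ.
    i_max and tau_s are arbitrary illustrative values."""
    return i_max * (1 - math.exp(-t_s / tau_s))

times = [0, 30, 60, 120, 300, 600]          # accumulation times, s
profile = [accumulated_signal(t) for t in times]
```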
By contrast, the response of GCE-clay to successive experiments was found to rapidly decrease to almost zero (see curve "a" in Figure 36), probably as a result of progressive leaching of the clay particles into the solution due to the rather poor mechanical stability of the film in stirred medium (it should be recalled here that preconcentration was performed in stirred solution to facilitate mass transport of the analyte). Finally, it is noteworthy that a clay-free GCE-mesopSiO2 electrode also gave rise to stable signals upon successive preconcentration-detection experiments (see curve "c" in Figure 36), but with much lower sensitivity as a result of the poor Cu2+ preconcentration efficiency of the mesoporous silica matrix, confirming again the interest of the clay-mesoporous silica composite material developed here. The modified electrode developed here offers good performance in terms of sensitivity and long-term stability but, of course, not in terms of selectivity, as the Cu2+ recognition process is ion exchange, which is obviously not so selective with respect to other metal ions. Anyway, the method could be applied to generate composite films based on other clay materials, such as organically-grafted clays, which are known to exhibit much better selectivity towards target metal ions depending on the nature of the organo-functional groups used to modify the clay (Tonlé et al.).251,267,275

Conclusions

This work has demonstrated the possible electro-assisted generation of clay-mesoporous silica composite films on electrodes by combining the electro-assisted self-assembly process148,149 with spin-coated clay films.
After surfactant removal, the resulting materials kept the basic properties of the clay (i.e., cation exchange) and exhibited favorable permeability properties and long-term mechanical stability, which could notably be exploited in preconcentration electroanalysis. A particular feature of the developed method was the reversible intercalation/exchange of the cationic surfactant in the interlayer region of the clay (with a concomitant variation in the interlayer distance), which avoided any deposition of silica between the clay sheets and probably also ensured fast mass transport through the film by creating high porosity upon surfactant extraction. The method appears rather general and could be applied, for example, to the preparation of organically-functionalized clay-mesoporous silica materials by adding suitable organosilanes in the synthesis medium, which would lead to multifunctional composite films.275

Final conclusions

The high demand for simple, fast, accurate, and sensitive detection methods in pharmaceutical and environmental analysis has led to the development of novel electrochemical sensors. Due to their ion exchange capacity and adsorbent properties, clay-modified electrodes are likely to be used for this application. Montmorillonite-rich indigenous Romanian clays were employed for the modification of different types of electrodes in order to develop sensors applied to the detection of heavy metals from matrices of biopharmaceutical and biomedical interest, as well as biosensors for the detection of different pharmaceuticals. Bentonites obtained from the Răzoare and Valea Chioarului deposits (Maramureş County, Romania) were refined and characterized by X-ray diffraction, transmission electron microscopy, FTIR, and thermodifferential analysis. The ion exchange capacity of the purified clays was determined by replacing the compensatory ions with NH4+ ions. All physico-chemical studies revealed montmorillonite as the main component of their structure.
The electrochemical behavior of acetaminophen, ascorbic acid, and riboflavin phosphate was tested by cyclic voltammetry on clay-modified CPEs with different clay particle sizes. The resulting CPEs revealed either better electroanalytical signals or oxidation at lower potential values. These results recommend the application of the new clay-modified sensors in pharmaceutical analysis. The development of a biosensor based on the immobilization of HRP within a Romanian clay-polyethylenimine film at the GCE surface for acetaminophen detection is described. In this case, HRP was immobilized on the surface of the GCE by retention in a polyethylenimine and clay porous gel film, a technique that offered good entrapment and a protective environment for the biocomponent due to the hydration properties of the immobilization layer. The amperometric detection of acetaminophen was successfully achieved with a detection limit of 6.28×10-7 M and a linear range between 5.25×10-6 M and 4.95×10-5 M. The use of a low-cost pillared clay as electrode modifier for the development of electrochemical sensors is also demonstrated in this work. For this, montmorillonite was modified with TBAB. By partial removal of the surfactant, the resulting material preserved the basic properties of the clay (i.e., cation exchange) and could therefore be exploited in the development of sensors able to detect cationic toxic species (e.g., Cu(II), Cd(II)) in different matrices with good reproducibility and sensitivity. Finally, this thesis presents the electro-assisted generation of clay-mesoporous silica composite films onto GCEs. The method involved the deposition of clay particles by spin-coating on GCE and the subsequent growth of a surfactant-templated silica matrix around these particles by EASA.
EASA typically consisted of applying a cathodic potential to the electrode immersed in a hydrolyzed sol (containing TEOS as the silica source and CTAB as surfactant) in order to generate the hydroxyl catalysts necessary to induce the formation of the mesoporous silica. In such conditions, alongside the silica deposition process, the interlayer distance between the clay sheets was found to increase as a result of CTAB ion exchange. After removal of the surfactant template, the composite film became highly porous (i.e., permeable to redox probes) and the clay recovered its pristine interlayer distance and cation exchange properties. This made it promising for application in preconcentration electroanalysis, as pointed out here using copper(II) as a model analyte, especially because it offered much better long-term operational stability than the conventional (i.e., without silica binder) clay film electrode. This is the first example of electrogenerated clay-mesoporous silica composite films in the literature, and the promising applications of these new composite materials are also discussed here. The rather general methods presented in this thesis can be further exploited to develop new reliable methods for environmental and pharmaceutical monitoring of highly toxic contaminants with improved selectivity.

Originality of the thesis

This thesis aims at developing new composite materials by exploring the ion exchange and adsorbent properties of two montmorillonite-rich Romanian natural clays (from the Răzoare and Valea Chioarului deposits, Maramureş County) for the construction of sensors and biosensors.
I believe that the results of these studies will have an important impact on the pharmaceutical and environmental fields as they:
1) Present for the first time the applications of Romanian clays in electroanalysis for heavy metal detection and for the development of biosensors with applications in pharmaceutical analysis;
2) Describe for the first time the complete structural characterization of Răzoare and Valea Chioarului bentonites;
3) Establish new performances for the existing sensor devices, as the developed systems show good long-term operational stability and good reproducibility (e.g., GCE-clay-mesopSiO2);
4) Provide new approaches for the determination of heavy metals in various matrices;
5) Give the first example of electrogenerated clay-mesoporous silica composite films with promising applications;
6) Acetaminophen and riboflavin phosphate were tested for the first time on clay-modified CPEs, and new electrochemical methods are proposed for their detection in pharmaceutical analysis.
This study describes reliable methods for environmental monitoring of highly toxic contaminants. The methods presented here are rather general and could be further exploited to generate composite films based on other clay materials with improved selectivity.

List of figures

Figure 1 Different adsorption sites at a clay-modified electrode
Figure 2 Preconcentration step at a clay-modified electrode
Figure 3 Amperometric biosensors: (A) first generation, (B) second generation, (C) third generation
Figure 4 Electrochemically-assisted generation of silica films
Figure 5 Influence of the preconcentration time on the voltammetric response of carbon paste electrodes modified with (A) a mercaptopropyl-functionalized mesoporous silica prepared from a TEOS/MPTMS 80:20 mixture and (B) an amorphous silica gel grafted with the mercaptopropyl group; accumulation medium: 5·10-7 M Hg(NO3)2 in 0.1 M HNO3; detection medium: 3 M HCl; other conditions of the detection process: 1 min electrolysis at -0.7 V, followed by anodic stripping differential pulse voltammetric detection. Inset: variation of the stripping peak area with the accumulation time116
Figure 6 Smectites structure241
Figure 7 TEM characterization of Răzoare (A, B) and Valea Chioarului (C) clays

The XRD diffractogram of Răzoare clay, fraction below 20 μm (Figure 8A), displayed the characteristic diffraction peaks of montmorillonite at 2θ (7.12°; 19.68°; 21.57°; 28.14°; 36.04°; 61.66°) and also the presence in smaller quantities of other minerals, such as cristobalite at 2θ (20.68°; 26.50°; 36.36°; 42.10°; 54.70°; 59.67°), feldspar at 2θ (23.22°; 24.10°; 27.74°; 35.08°), etc.239,245 The Valea Chioarului clay displayed the montmorillonite peaks at 2θ (6.94°; 19.96°; 21.82°; 28.63°; 36.14°; 62.01°), which confirmed the position of the diffraction peaks, in agreement with literature data245, and also the almost negligible presence of other minerals.
Figure 10 Thermodifferential analysis of Răzoare (A) and Valea Chioarului (B) clays
Figure 11 Chemical structures of investigated pharmaceuticals: (A) acetaminophen, (B) ascorbic acid, and (C) riboflavin239
Figure 12 Cyclic voltammograms of 10-3 M acetaminophen (A) and 10-3 M ascorbic acid (B) on 1% (solid line), 2.5% (square line), and 5% (dot line) Răzoare clay-modified CPEs (KCl 0.1 M, 100 mV s-1)239
Figure 13 Cyclic voltammograms for 10-3 M riboflavin phosphate at unmodified CPE (dot line), at 5% (square line), and 10% (solid line) Răzoare clay-modified CPEs (KCl 0.1 M, 100 mV s-1)239
Figure 14 Cyclic voltammograms of 10-4 M acetaminophen solution using 20 μm (dot line) and 0.2 μm (solid line) Valea Chioarului clay immobilized in a 1 mg/mL PEI film (0.1 M phosphate buffer pH 7.4, 50 mV s-1)243
Figure 15 Amperometric response of the clay-modified electrode (0.2 μm Valea Chioarului clay in 1 mg/mL PEI/HRP/GCE) after successive additions of 50 μL of 10-4 M acetaminophen in phosphate buffer pH 7.4 and 0.2 mM H2O2 243
Figure 16 FTIR spectra of MMT samples, simple (black line), modified with TBAB (red line) and after the partial TBAB extraction (green line), compared with the TBAB spectrum (blue line)
Figure 17 Raman spectra of MMT samples, simple (black line), modified with TBAB (red line) and after the partial TBAB extraction (green line), compared with the TBAB spectrum (blue line)
Figure 18 Multisweep cyclic voltammograms recorded in 10-3 M Ru(NH3)6Cl3 in 0.1 M NaNO3 using: bare GCE (10 cycles) (A); GCE/M (10 cycles) (B); GCE+MMT/M (30 cycles) (C); GCE+MMT+TBAB/M (30 cycles) (D); and GCE+MMT+TBAB(-TBAB)/M (30 cycles) (E)
Figure 19 Multisweep CVs recorded in 10-3 M [Fe(CN)6]3- (in 0.1 M NaNO3) using bare GCE (5 cycles) (A); GCE/M (10 cycles) (B); GCE+MMT/M (10 cycles) (C); GCE+MMT+TBAB/M (10 cycles) (D); and GCE+MMT+TBAB(-TBAB)/M (10 cycles) (E)
Figure 20 Multisweep cyclic voltammograms recorded in 10-3 M Fc(MeOH)2 using bare GCE (5 cycles) (A); GCE/M (5 cycles) (B); GCE+MMT/M (10 cycles) (C); GCE+MMT+TBAB/M (10 cycles) (D); and GCE+MMT+TBAB(-TBAB)/M (10 cycles) (E)
Figure 21 Nyquist plots for: GCE/M (e-DAQ, d = 1 mm) (black); GCE+MMT/M (< 1 μm; 5 mg mL-1 suspension) (red); GCE+MMT+TBAB/M (5 mg mL-1 suspension) (green); GCE+MMT+TBAB(-TBAB)/M (5 mg mL-1 suspension) (blue); in 10 mM K3[Fe(CN)6] in PBS (0.1 M; pH 7.4). Inset: Rct variation with the electrode type (5 mg mL-1 water suspensions; amplitude 0.005; begin frequency: 100 kHz; end frequency: 0.01 Hz; number of frequencies: 71)
Figure 22 The electroanalytical response of Cu(II) in different electrolytes. SWVs were recorded using GCE+MMT+TBAB(-TBAB)/M, after 5 min accumulation at open-circuit in 10-5 M Cu(II) solution in ultrapure water. Detection performed in the supporting electrolyte after 180 s electrolysis at -0.6 V
Figure 23 Variation of current peak intensity with accumulation time using GCE+MMT+TBAB(-TBAB)/M after open circuit accumulation of Cu(II) (A) and Cd(II) (B) at different concentrations: (1) 5·10-7 M; (2) 10-6 M; and (3) 5·10-6 M. Detection performed in 0.1 M NaNO3
Figure 24 SWVs recorded using clay/GCE: (a) unmodified MMT; (b) GCE+MMT+TBAB(-TBAB) (after TBAB removal) after 5 min accumulation at open-circuit in 10-6 M Cu(II) (A) and Cd(II) (B) solutions in ultrapure water. Detection performed in 0.1 M NaNO3 after 180 s electrolysis at -0.6 and -1.0 V. Regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution ((c) and (d) represent the clay film regeneration curves)
Figure 25 SWV responses obtained with (a) GCE+MMT/M and (b) GCE+MMT+TBAB(-TBAB)/M to successive preconcentration of 10-6 M Cu2+ solution (5 min accumulation at open circuit, detection in 0.1 M NaNO3 after 180 s electrolysis at -0.6 V, regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution). The inset shows SWVs obtained using GCE+MMT+TBAB(-TBAB)/M after preconcentration in Cu2+ solution
Figure 27 Variation of current intensity on GCE+MMT+TBAB(-TBAB)/M for Cu(II) at different concentrations: 0 (1); 0.25 (2); 0.50 (3); 0.75 (4); 1 (5); 2.5 (6); 5 (7); 7.5 (8) and 10 μM (9) in 0.1 M NaNO3 solution. Inset: the calibration curve for Cu(II) obtained under optimized conditions. Experimental conditions: 10 min accumulation at open-circuit in Cu2+ solutions in ultrapure water; detection performed in 0.1 M NaNO3 solution, after 180 s electrolysis at -0.6 V; regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution
Figure 28 Variation of current intensity on GCE+MMT+TBAB(-TBAB)/M for Cd(II) at different concentrations: 0 (1); 0.25 (2); 0.50 (3); 0.75 (4); 1 (5); 5 (6); 7.5 μM (7) in 0.1 M NaNO3 solution. Inset: the calibration curve for Cd(II) obtained under optimized conditions. Experimental conditions: 10 min accumulation at open-circuit in Cd2+ solutions in ultrapure water; detection performed in 0.1 M NaNO3 solution, after 180 s electrolysis at -1.1 V; regeneration after 120 s magnetic stirring in 0.1 M NaNO3 solution
Figure 29 SWVs recorded using GCE+MMT+TBAB(-TBAB)/M, after 5 min accumulation at open-circuit in 10-5 M Cd2+ (black), Pb2+ (red), Cu2+ (green) and a mixture of these three cations (blue) solutions in ultrapure water. Detection performed in 0.1 M NaNO3 solution, after 180 s electrolysis at -1.1 V (frequency: 100 Hz; amplitude: 0.05 V; potential step: 0.02 V; counter electrode: Pt/Ti; reference electrode: Ag/AgCl without internal solution)
Figure 31 Cyclic voltammetric curves recorded at 20 mV s-1 for 20 successive cycles using (A) bare GCE, (B) GCE-clay, (C, D) GCE-clay-mesopSiO2 respectively before (C) and after (D) surfactant removal, and (E) GCE-clay-SiO2 electrodes, for three redox probes (1: Fe(CN)6 3-; 2: Fc(MeOH)2; 3: Ru(NH3)6 3+) in 0.1 M NaNO3
Figure 32 X-Ray diffractograms for (a) GCE-clay, (b, c) GCE-clay-mesopSiO2 respectively before (b) and after (c) surfactant removal, (d) GCE-clay-SiO2 and (e) GCE-mesopSiO2
Figure 33 SEM images of a spin-coated clay film (A, B), composite materials obtained by electro-assisted deposition of surfactant-templated silica around the clay (clay-mesopSiO2: C, D) and non-templated composite materials (E, F). Both top views (A, C, and E) and cross-sections (B, D, and F) are shown
Figure 35 (A) Variation of SWV peak currents recorded using GCE-clay-mesopSiO2 after open circuit accumulation of Cu2+ at various concentrations: (a) 10-7 M, (b) 5·10-7 M, and (c) 10-6 M. Detection medium composition: 0.1 M KCl + 0.1 mM HCl. (B) Typical SWV curves obtained at the above different concentrations of Cu2+ but at the same preconcentration time (5 min)
Figure 36 SWV responses obtained with (a) GCE-clay, (b) GCE-clay-mesopSiO2 after surfactant removal, and (c) GCE-mesopSiO2, to successive preconcentration of 10-6 M Cu2+ (2 min accumulation at open circuit; detection in 0.1 M KCl + 0.1 mM HCl). The inset shows some typical curves obtained with GCE-clay-mesopSiO2 after surfactant removal (corresponding to data in part "b" of the figure)
2.4.1 Electroanalysis, sensors and biosensors ............................................... 1.1 2.3 Mass transport in mesoporous (organo)silica particles ............................ 2.4 1 Clay-modified electrodes ............................................................................ 1.2 Clay-modified electrode preparation ........................................................ 1.3 Electrochemistry at clay-modified electrodes ........................................... 1.4 Applications in environmental and biomedical analysis ........................... 1.4.1 Heavy metal detection using clay-modified electrodes ....................... 1.4.1.1 Clays implication in heavy metal detection ............................ 1.4.1.2 Inorganic clay heavy metal detection sensors ......................... 1.4.1.3 Organo-clay heavy metal detection sensors ............................ 1.4.2 Amperometric biosensors based on clays applied in pharmaceutical and biomedical analysis ........................................................................ 1.4.2.1 Clays Oxygen based biosensors (first generation) ................... 1.4.2.2 Mediator based biosensors (second generation) .................... 1.4.2.3 Directly coupled enzyme electrodes (third generation) ........... 2.1 The sol-gel process .................................................................................... 2.2 Silica and silica-based organic-inorganic hybrids ..................................... 2 ). Bentonites have a high content of SiO 2 and Al 2 O 3 and also significant water content. The components present in small quantities and in varying proportions are: MgO, CaO, K 2 O, Na 2 O, Fe 2 O 3 and TiO 2 . Elements such as Mg 2+ and Fe 3+ act as substitutes of Al 3+ in the octahedral configuration. Alkaline metals and Ca 2+ can fix by adsorption means in the spaces between the structural packages of the clay. 
The structural formulas of the clay minerals are:
Răzoare: (Ca0.03 Na0.30 K0.06)Σ=0.39 (Al1.54 Mg0.37 Fe0.10)Σ=2.01 (Si3.84 Al0.16)Σ=4.00 O10 (OH)2 · nH2O
Valea Chioarului: (Ca0.06 Na0.27 K0.02)Σ=0.35 (Al1.43 Mg0.47 Fe0.10)Σ=2.00 (Si3.90 Al0.10)Σ=4.00 O10 (OH)2 · nH2O

Table 1 Chemical composition of Răzoare and Valea Chioarului bentonites

Sample           | SiO2  | TiO2 | Al2O3 | Fe2O3 | CaO  | MgO  | Na2O | K2O  | L.C.*
Răzoare          | 68.60 | 0.22 | 13.89 | 1.36  | 0.30 | 3.38 | 1.50 | 0.45 | 11.30
Valea Chioarului | 59.82 | 0.25 | 16.14 | 1.67  | 0.70 | 3.92 | 1.75 | 0.25 | 15.50
* Loss on calcination at 1000 °C

Table 2 Quantitative crystalline phase analysis, effective crystallite mean size Deff (nm), root mean square (rms) of the microstrains <ε²>^1/2 m, and profile (Rp) discrepancy indices calculated by Rietveld refinement for the montmorillonite crystalline phases (X1 = Valea Chioarului clay; R1 = Răzoare clay)

Sample | montmorillonite [% vol.] | cristobalite [% vol.] | amorphous [% vol.] | Deff [nm] | <ε²>^1/2 m × 10³ | Rp
X1     | 74.7 | 14.5 | 10.8 | 15.2 | 0.413 | 12.2
R1     | 19.6 | 43.0 | 17.4 | 10.4 | 2.846 | 14.3

Table 3 Unit cell parameters and profile (Rp) discrepancy indices calculated by Rietveld refinement analysis for the montmorillonite and cristobalite crystalline phases (X1 = Valea Chioarului clay; R1 = Răzoare clay)

Sample               | a [Å] | b [Å] | c [Å]  | Rp
X1 - montmorillonite | n/a   | n/a   | 12.624 | 17.2
X1 - cristobalite    | 6.46  | 6.46  | 6.46   | 22.3
R1 - montmorillonite | 5.06  | 5.06  | 12.544 | 19.6
R1 - cristobalite    | 6.64  | 6.64  | 6.64   | 26.8

Figure 9 FTIR spectra of Răzoare (A) and Valea Chioarului (B) clays

Table 4 HRP amperometric biosensors for acetaminophen analysis

Electrode configuration | Enzyme/Transducer | Sensitivity (μA M⁻¹) | Linear range | LOD / LOQ | Reference
Zr alkoxide in PEI with HRP on GC | HRP/GC | 1.17×10⁻⁷ | 1.96×10⁻⁵ - 2.55×10⁻⁴ M | Not published | Sima et al., 2008
Nanoporous magnetic microparticles (MMPs)-HRP | HRP/Carbon paste | Not published | 2×10⁻⁶ - 5.7×10⁻⁵ M | Not published | Yu et al., 2006
Zr alkoxide in PEI with HRP on SPE | HRP/SPE | Not published | 4.35×10⁻⁷ - 4.98×10⁻⁶ M | 6.21×10⁻⁸ M / 2.07×10⁻⁷ M | Sima et al., 2010
HRP/PEI/SWCT/GCE | HRP/GC | Not published | 9.99 - 79.01 μM | 7.82×10⁻⁶ M | Tertis et al., 2013
HRP/Ppy/SWCT/SPE | HRP/SPE | Not published | 19.96 - 118.06 μM | 8.09×10⁻⁶ M | Tertis et al., 2013

Table 5 Unit cell parameters and profile (Rp) discrepancy indices calculated by Rietveld refinement analysis for the clay (MMT), the clay with TBAB (MMT+TBAB), and the clay after TBAB removal (MMT+TBAB(-TBAB))

Sample          | a [Å] | b [Å] | c [Å]  | Rp
MMT             | 5.17  | 5.17  | 12.624 | 17.2
MMT+TBAB        | 5.24  | 5.24  | 15.487 | 18.6
MMT+TBAB(-TBAB) | 5.23  | 5.23  | 15.475 | 17.6

Table 6 CHN elemental analysis for the modified clay before and after the partial TBAB extraction

Sample | C% | N% | H% | Assignment
MMT (unmodified) | 0.14 | 0.14 | 2.839 | Organic and/or inorganic impurities containing C and N; H2O and Si-OH groups for H
MMT+TBAB (fully doped) | 8.82 | 0.73 | 4.068 | TBAB⁺ and impurities
MMT+TBAB(-TBAB) (after extraction) | 3.85 | 0.26 | 3.386 | TBAB⁺ and impurities

Table 7 Interference study

Interfering ion | Cu²⁺ analyte signal (%) | Cd²⁺ analyte signal (%)
Cu²⁺ | -      | 98.89
Cd²⁺ | 99.59  | -
Co²⁺ | 100.03 | 99.54
Pb²⁺ | 101.22 | 99.76
Ni²⁺ | 84.45  | 88.59
Zn²⁺ | 97.28  | 93.19
Ba⁺  | 99.36  | 99.10
Na⁺  | 98.88  | 99.32
K⁺   | 100.47 | 99.05

(…94°; 19.96°; 21.82°; 28.63°; 36.14°; 62.01°) and also confirmed the almost negligible presence of other minerals.

Acknowledgements

Now, when I've reached the end of the path I started four years ago, I look back to express a special gratitude, and I realize the list of the people I have to thank is quite long… My work is the result of a fruitful collaboration between two research groups: the Department of Analytical Chemistry, Faculty of Pharmacy, University of Medicine and Pharmacy "Iuliu Haţieganu", Cluj-Napoca, Romania, and the Laboratoire de Chimie Physique et Microbiologie pour l'Environnement, Université de Lorraine, Villers-lès-Nancy, France.

STATE OF THE ART
Exploring the "mechanics" of firm growth: evidence from a short-panel VAR

Alex Coad (email: [email protected])

With thanks to Thomas Brenner, Tom Broekel, Guido Buenstorf, Andreas Chai, Christian Cordes, Giovanni Dosi, Corinna Manig, Dick Nelson, Jason Potts, Kerstin Press, Rekha Rao, Andrea Roventini, Erik Stam, Vera Troeger, Marco Valente, Karl Wennberg, Claudia Werker and Ulrich Witt.

Keywords: Firm Growth, Panel VAR, Employment Growth, Industrial Dynamics, Productivity Growth
JEL codes: L25, L20

Abstract: This paper offers many new insights into the processes of firm growth by applying a vector autoregression (VAR) model to longitudinal panel data on French manufacturing firms. We observe the co-evolution of key variables such as growth of employment, sales, gross operating surplus, and labour productivity growth. Preliminary results suggest that employment growth is succeeded by the growth of sales, which in turn is followed by growth of profits. Generally speaking, however, growth of profits is not followed by much employment growth or sales growth.
Introduction

The literature on firm growth, at present, consists mainly of empirical investigations along the framework of Gibrat's Law, where firm growth features as the dependent variable and firm size is an independent variable. In such regressions, different indicators of firm growth (e.g. sales growth or employment growth) are considered almost interchangeably as proxies for the same underlying phenomenon (i.e. firm growth). There are also many other 'augmented' versions of Gibrat's law, in which other characteristics of the firm at time (t-1) are included in the regression to explain the firm's growth from (t-1) to t (for an extensive survey of the literature on firm growth, see Coad, 'Firm growth: a survey'). Regressions of this kind have had limited success, however: their explanatory power is typically very low, and the characteristics (in levels) of firms at a point in time t seem to have limited influence on the rate of change of firm size. Geroski thus remarked in despair: "The most elementary 'fact' about corporate growth thrown up by econometric work on both large and small firms is that firm size follows a random walk" (Geroski, 2000, p. 169). The empirical framework presented here is admittedly rather simple but, we argue, has the potential of shedding light on what happens inside growing firms. The approach we take is quite different to conventional empirical analysis of firm growth. Whilst sales, employment and profits are usually taken as alternative proxies for firm growth, we consider each of these indicators to be essentially different, each yielding unique information on different facets of firm growth. We therefore view firm growth as a multidimensional phenomenon. By considering the co-evolution of these series, we can improve our understanding of the processes of firm growth.
Implicit in our model is the idea that the growth of profits is not just a final outcome for firms but also an input, because it provides firms with the means for expansion. Furthermore, employment growth can be seen as an input (in the production process) but also as an output if, for example, the policy maker is interested in the generation of new jobs. We suggest that this conception of the growing firm as a dynamic co-evolving system of interdependent variables is best described in the context of a panel vector autoregression (VAR) model.

Theoretical considerations

Whilst many theoretical propositions about firm growth have been made, these have largely escaped empirical investigation. For example, it has long been supposed that the evolutionary principle of 'growth of the fitter' should apply to firms, such that the more productive or profitable firms should grow and the least productive or profitable should shrink and exit (see, for example, Alchian, 'Uncertainty, evolution and economic theory'; Friedman, 'The methodology of positive economics', in Essays in Positive Economics; and Nelson and Winter, An Evolutionary Theory of Economic Change). Many (evolutionary) economists would probably accept this idea without much thought. However, a growing empirical literature casts doubt on the relevance of these theoretical assertions. Recent work on productivity dynamics suggests that, if anything, there appears to be a mild negative relationship between productivity and firm growth, with relatively low-productivity firms growing more.
Similarly, (scant) evidence suggests that profits do not appear to lead to higher firm growth (see Coad, 'Testing the principle of "growth of the fitter": the relationship between profits and firm growth', and Bottazzi et al., 'Productivity, profitability and financial fragility: evidence from Italian business firms'; for a review of the empirical literature on the influence of productivity and profitability on firm growth, see Dosi, 'Statistical regularities in the evolution of industries: a guide through some evidence and challenges for the theory', in Perspectives on Innovation, and Coad, 'Firm growth: a survey'). Another topic that we feel is under-developed in the current literature is our understanding of the microfoundations of employment growth decisions (i.e. at the firm level). For instance, we could expect that the benefits of employment growth on profits may not be manifest immediately if it takes time for firms to adequately train new employees. Instead, new employees and new positions in the organization may make their most significant contribution to firm profitability only after a certain time lag. It is also of interest to investigate the elasticity of employment growth to profit growth. Do profitable firms create new jobs? Is there any justification in the popular vision of industrial development as being characterized by 'jobless growth'? Our empirical framework also allows us to investigate firm-level productivity dynamics. Previous theoretical contributions have suggested that there may be 'dynamic increasing returns' (à la Kaldor-Verdoorn) according to which firm growth would be positively associated with productivity growth.
On the other hand, Penrose (The Theory of the Growth of the Firm) suggested that firm growth is associated with decreases in productive efficiency, because planning for growth takes managerial focus away from keeping production costs down. The association between firm growth and productivity has not been resolved in theoretical discussions, and we therefore consider it to be an empirical question.

Structure of the paper

In Section 2 we present the database along with some summary statistics. In Section 3 we discuss our regression methodology. In Section 4 we present our main results. The robustness of these results is explored in Section 5. In Section 6 we discuss these results, and we conclude in Section 7.

Database and summary statistics

This research draws upon the EAE databank collected by SESSI and provided by the French Statistical Office (INSEE). This database contains longitudinal data on a virtually exhaustive panel of French firms with 20 employees or more over the period 1989-2004. We restrict our analysis to the manufacturing sectors. For statistical consistency, we only utilize the period 1996-2004 and we consider only continuing firms over this period. Firms that entered midway through 1996 or exited midway through 2004 have been removed. Since we want to focus on internal, 'organic' growth rates, we exclude firms that have undergone any kind of modification of structure, such as merger or acquisition. In contrast to some previous studies (e.g. Bottazzi et al. (2001)), we do not attempt to construct 'super-firms' by treating firms that merge at some stage during the period under study as if they had been merged from the start of the study, because of limited information on restructuring activities. To start with we had observations for around 22 000 firms per year for each year of the period, but at this stage we have a balanced panel of 8503 firms for each year.
In order to avoid misleading values and the generation of NaNs whilst taking logarithms and ratios, we now retain only those firms with strictly positive values for Gross Operating Surplus (GOS), Value Added (VA), and employees in each year. This creates some missing values, especially for our growth of gross operating surplus variable (see Table 2). By restricting ourselves to strictly positive values for the gross operating surplus, we lose 13-14% of the observations in 1997 and 2000, whereas we lose about 26% of the observations in 2004. In keeping with previous studies, our measure of growth rates is calculated by taking the differences of the logarithms of size:

GROWTH_it = log(X_it) - log(X_{i,t-1})    (1)

where, to begin with, X is measured in terms of employment, sales, gross operating surplus, or labour productivity for firm i at time t. In keeping with previous work (e.g. Bottazzi et al., 'Corporate growth and industrial dynamics: evidence from French manufacturing'), the growth rate distributions have been normalized around zero in each year, which effectively removes any common trends such as inflation.

Table 2 Summary statistics of the growth rate distributions (mean, standard deviation, and five quantiles from lowest to highest)

Year | Variable     | Mean   | Std. dev. | Quantiles
1997 | Empl. growth | 0.0000 | 0.1352 | -0.1049  -0.0437  -0.0096  0.0417  0.1156
1997 | Sales growth | 0.0000 | 0.2314 | -0.1759  -0.0740  -0.0038  0.0785  0.1803
1997 | GOS growth   | 0.0000 | 0.8068 | -0.7630  -0.3152   0.0043  0.3191  0.7675
1997 | Prod. growth | 0.0000 | 0.2173 | -0.1956  -0.0910  -0.0019  0.0861  0.1987
2000 | Empl. growth | 0.0000 | 0.1333 | -0.1168  -0.0526  -0.0117  0.0466  0.1317
2000 | Sales growth | 0.0000 | 0.2084 | -0.1708  -0.0800  -0.0077  0.0752  0.1845
2000 | GOS growth   | 0.0000 | 0.7988 | -0.7688  -0.2995  -0.0029  0.3232  0.7550
2000 | Prod. growth | 0.0000 | 0.2181 | -0.2025  -0.0936  -0.0007  0.0905  0.2068
2004 | Empl. growth | 0.0000 | 0.1295 | -0.1157  -0.0373   0.0164  0.0475  0.1050
2004 | Sales growth | 0.0000 | 0.2191 | -0.1763  -0.0729  -0.0002  0.0810  0.1779
2004 | GOS growth   | 0.0000 | 0.8603 | -0.8465  -0.3151   0.0176  0.3257  0.8312
2004 | Prod. growth | 0.0000 | 0.2580 | -0.2138  -0.0904   0.0001  0.0931  0.2150

Summary statistics

Table 1 presents some year-wise summary statistics, which give the reader a rough idea of the range of firm sizes in our dataset. Table 2 presents some summary statistics of the growth rate distributions. Figure 1 shows the unconditional growth rate distributions for our four variables of interest. These growth rate distributions are visibly heavy-tailed. This gives an early hint that standard regression estimators such as OLS, which assume Gaussian residuals, may perform less well than Least Absolute Deviation (LAD) techniques, which are robust to extreme observations. We also observe that the distribution of growth rates of gross operating surplus has a particularly wide support, which would indicate considerable heterogeneity between firms in terms of the dynamics of their profits. Table 3 and Figure 2 show the correlations between our indicators of firm growth and firm performance. Spearman's rank correlation coefficients are also shown since these are more robust to outliers. All of the series are correlated between themselves at levels that are highly significant. However, the correlations are indeed far from perfect, as has been noted elsewhere (Delmar et al., 'Arriving at the high-growth firm'). The largest correlation (0.5959) is between growth of gross operating surplus and that of labour productivity. Indeed, the positive correlation between profits and productivity has also been observed in work on Italian data; see Bottazzi et al., 'Productivity, profitability and financial fragility: evidence from Italian business firms'. We also observe relatively large positive correlations between these two variables and the growth of sales (0.3922 and 0.4452 respectively).
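The growth-rate construction in Eq. (1) and the year-wise normalization around zero can be sketched in a few lines. This is a minimal pure-Python illustration; the firm names and sales figures below are invented for the example.

```python
import math

# Hypothetical toy panel: firm -> sales by year (names and values invented)
sales = {
    "firm_A": {1996: 120.0, 1997: 150.0, 1998: 135.0},
    "firm_B": {1996: 80.0,  1997: 88.0,  1998: 110.0},
}

def log_growth(panel):
    """Growth as the log-difference of size, as in Eq. (1)."""
    g = {}
    for firm, series in panel.items():
        years = sorted(series)
        g[firm] = {t: math.log(series[t]) - math.log(series[t - 1])
                   for t in years[1:]}
    return g

def demean_by_year(growth):
    """Normalize growth rates around zero in each year (removes common trends)."""
    years = sorted({t for s in growth.values() for t in s})
    out = {f: dict(s) for f, s in growth.items()}
    for t in years:
        vals = [s[t] for s in growth.values() if t in s]
        mean_t = sum(vals) / len(vals)
        for f in out:
            if t in out[f]:
                out[f][t] -= mean_t
    return out

g = demean_by_year(log_growth(sales))
```

After demeaning, the cross-firm mean of the normalized growth rates is zero in every year, which is the sense in which common trends such as inflation are removed.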
Although there is a large degree of multicollinearity between these series, the lack of persistence in firm growth rates (despite a high degree of persistence of firm size) will, we hope, aid in identification in the regression analysis. Furthermore, the large number of observations will also be helpful in identification. Multicollinearity has the effect of making the coefficient estimates unreliable in the sense that they may vary considerably from one regression specification to another. With this in mind, we therefore pursue a relatively lengthy robustness analysis in Section 5. (Earlier work on these French manufacturing firms estimates the functional form of the growth rates density in terms of the Subbotin family of distributions, of which the Gaussian (normal) and the Laplace (symmetric exponential) distributions are special cases, and observes that the growth rates density is even fatter-tailed than the Laplace.)

Methodology

Introducing the VAR

The regression equation of interest is of the following form:

w_it = c + β w_{i,t-1} + ε_it    (2)

where w_it is an m × 1 vector of random variables for firm i at time t, and β corresponds to an m × m matrix of slope coefficients that are to be estimated. In our particular case, m = 4 and w corresponds to the vector (Empl. growth(i,t), Sales growth(i,t), GOS growth(i,t), labour productivity growth(i,t))'. ε is an m × 1 vector of disturbances. We do not include any dummy control variables (such as year dummies or industry dummies) in the VAR equation because we anticipate that, if indeed there are any temporal or sectoral effects at work, then dummy variables will be of limited use in detecting these effects. Instead, we suspect that the specificities of individual years or sectors may have non-trivial consequences on the structure of interactions of the VAR series, and these cannot be detected through the use of appended dummy variables alone.
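A system like equation (2) can be estimated equation by equation: each component of w_it is regressed separately on the full lagged vector w_{i,t-1}. The toy sketch below (not the paper's estimator; the coefficient matrix and starting vectors are invented, and the data are noise-free) shows that per-equation least squares recovers the known coefficient matrix exactly in this deterministic case.

```python
# A VAR(1) w_t = PHI w_{t-1}: per-equation least squares, normal equations by hand.
PHI = [[0.10, 0.30],
       [0.20, -0.40]]

def matvec(A, x):
    """Matrix-vector product for small lists of lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Three "firms", one transition each, no noise (all values invented)
w0 = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
w1 = [matvec(PHI, x) for x in w0]

def ols2(X, y):
    """OLS with two regressors and no intercept, via the normal equations."""
    s11 = sum(x[0] * x[0] for x in X)
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    t1 = sum(x[0] * yi for x, yi in zip(X, y))
    t2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Equation-by-equation estimation recovers the rows of PHI
phi_hat = [ols2(w0, [y[k] for y in w1]) for k in range(2)]
```

With real, noisy growth-rate data each of these per-equation fits would be replaced by a robust (e.g. median/LAD) regression, for the heavy-tail reasons discussed below.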
We explore the influence of temporal disaggregation and sector of activity in detail in Section 5. Furthermore, since previous work on this dataset has not observed any dependence of sales growth on size (Bottazzi et al., 'Corporate growth and industrial dynamics: evidence from French manufacturing'), we do not attempt to clean the series of size dependence before applying the VAR. However, we explore how our results change across firm size groups in detail in Section 5.

We estimate equation (2) via 'reduced-form' VARs, which do not impose any a priori causal structure on the relationships between the variables, and are therefore suitable for the preliminary nature of our analysis. These reduced-form VARs effectively correspond to a series of m individual OLS regressions (Stock and Watson, 'Vector autoregressions'). One problem with OLS regressions in this particular case, however, is that the distribution of firm growth rates is typically exponentially distributed and has much heavier tails than the Gaussian. In this case OLS may provide unreliable results and, as argued in Bottazzi et al., 'Corporate growth and industrial dynamics: evidence from French manufacturing', we would prefer Least Absolute Deviation (LAD) estimation.

Allowing for firm-specific fixed effects

A further reason why OLS (and also LAD) estimation of equation (2) is likely to perform poorly is if there is unobserved heterogeneity between firms in the form of time-invariant firm-specific effects. If these 'fixed effects' are correlated with the explanatory variables, then OLS (and LAD) estimates will be biased. One way of accounting for these fixed effects would be to introduce a dummy variable for each firm and to include this in the regression equation to obtain a standard 'fixed-effects' panel data model.
The drawback with this, however, is that the inclusion of lagged dependent variables can be a source of bias for fixed-effect estimation of dynamic panel-data models. The intuition is that the fixed effect would be in some sense 'double-counted' if the dependent variable is included in the regression equation at time t and also at previous times due to the lag structure (this problem is known as 'Nickell bias' after Nickell (1981)). Nickell bias is often observed to be rather small, however, and so its importance is a matter of debate. This 'Nickell bias' problem can be dealt with by using instrumental variables (IV) techniques, such as the 'System GMM' estimator (Blundell and Bond, 'Initial conditions and moment restrictions in dynamic panel data models'). The performance of instrumental variables estimators, however, depends on the quality of the instruments. If the instruments are effective then the estimates will be relatively precisely defined. If the instruments are weak, however, the confidence intervals surrounding the resulting estimates will be large. This is likely to be the case in this study because it is difficult to find suitable instruments for firm growth rates, which are characteristically random and lack persistence (see the discussion in Geroski, 'The growth of firms in theory and practice', in Competence, Governance and Entrepreneurship, and Coad, 'Testing the principle of "growth of the fitter": the relationship between profits and firm growth'). IV estimation of a panel VAR with weak instruments thus leads to imprecise estimates. Binder et al. ('Estimation and inference in short panel vector autoregressions with unit roots and cointegration') present a panel VAR model which can include firm-specific fixed effects but does not require the use of instrumental variables. The model is estimated using quasi-maximum-likelihood optimization techniques.
They propose the following model:

w_it = (I_m - Φ)μ_i + Φ w_{i,t-1} + ε_it    (3)

where μ_i corresponds to the firm-specific fixed effects and Φ is the m × m coefficient matrix to be estimated; ε_it is the usual vector of disturbance terms. BHP (2005) present evidence from Monte Carlo simulations demonstrating that their estimator is more efficient (i.e. the estimates have lower standard errors) than IV GMM. The drawback with the BHP estimator for this particular application, however, is that it assumes normally distributed errors (whereas the distributions of firm growth rates are approximately Laplace-distributed). In this paper our estimator of choice is therefore the LAD estimator, which is best suited to the case of Laplacian error terms.

Causality or association?

Our intention in this paper is to summarize the comovements of the growth series. We remind the reader of the important distinction between correlation and causality. We have no strong a priori theoretical positions, and we make no attempt at any serious identification of the underlying causality at this early stage, instead preferring to describe the associations. Indeed, much can be learned simply by considering the associations between the variables without mentioning issues of causality (see Moneta, 'Causality in macroeconometrics: some considerations about reductionism and realism', for a discussion).

Aggregate analysis

The regression results obtained from the OLS, fixed-effects, and LAD estimators are presented in Tables 4, 5 and 6 respectively. It is encouraging to observe that the results obtained from these estimators, and from the different regression specifications (one or two lags), are not too dissimilar. One major difference between the Gaussian estimators (OLS and FE) and LAD is that the magnitudes of the autocorrelation coefficients (along the 'diagonals') are much smaller using the LAD estimator.
This was observed by Bottazzi et al. ('Corporate growth and industrial dynamics: evidence from French manufacturing') and is explored in Coad, 'A closer look at serial growth rate correlation'. We also note that the fixed-effects regressions yield fewer significant results than the OLS regressions, which in turn yield fewer significant results than the LAD regressions. The coefficients on the variables lagged twice are, roughly speaking, less significant than those on the first lag. It is also worth mentioning that whilst the growth of GOS seems to be slightly negatively associated with subsequent growth of sales and employment in the LAD results, these coefficients appear to be positive in the OLS and FE regressions (we are therefore cautious in our interpretations of this result). We base our interpretations mainly on the LAD results. A first observation is that most of the series (except for employment growth) exhibit negative autocorrelation; this is shown along the diagonals of the coefficient matrices for the lags. This is in line with previous work (see Coad, 'A closer look at serial growth rate correlation'). The autocorrelation coefficients for the growth of profits and of labour productivity display a particularly large negative sign. Whilst a substantial previous literature has emphasized the 'persistence of profits', the growth of profits has little persistence. This pronounced negative autocorrelation for profits and productivity growth may well be due to 'behavioural' factors whereby an increase (or decrease) in performance in one year may be followed by a 'slackening off' (or 'extra effort') of the workforce. Indeed, it may be that a period of successful achievement is followed by a renegotiation of the organization's goals in the direction of a redistribution of the rents towards the employees, or the fostering of a more relaxed working environment.
Our results suggest that growth of a firm's employment is associated with previous growth of sales and of labour productivity. Sales growth and labour productivity growth have a relatively small positive effect, and the magnitude is of a similar order even at the second lag. Employment growth, however, appears to be relatively strongly associated with subsequent growth of sales and of profits. As could be expected, sales growth and productivity growth also appear to make a relatively large contribution to the subsequent growth of profits. Indeed, sales growth has a sizeable impact on GOS growth even at the second lag. It is rather straightforward to interpret the magnitudes of the coefficients. If we observe that the employment growth rate increases by 1 percentage point, then ceteris paribus we can expect sales growth to rise by about 0.15 percentage points in the following year. Similarly, a 1 percentage point increase in sales growth can be expected to be followed by a 0.04 percentage point increase in employment growth. This latter result is apparently far more modest than results reported for a sample of Dutch manufacturing firms in (Brouwer et al., 1993, p. 156), who observe that a 1% increase in sales leads to a (statistically significant) 0.33% increase in employment. However, we warn against putting too much faith in specific point estimates at this early stage. We also observe that growth in labour productivity seems to be preceded by growth of employment and of sales, although the (positive) coefficient is rather small. In addition, it appears that growth of profits is associated with a relatively small subsequent growth in sales, and an even smaller growth of employment. Growth of profits may have a more persistent effect on employment growth than on sales growth, however. Growth of sales, on the other hand, is very strongly associated with subsequent growth of profits.
We also observe that the R² statistics are rather low, always lower than 5% in our preferred LAD specification (Table 6).

Robustness analysis

In the following section we explore the robustness of our results in a number of ways. First, we consider a simpler regression specification and investigate whether we obtain similar coefficient estimates when we exclude one of the VAR series (Section 5.1). We also investigate the robustness of our findings by repeating the analysis at a more disaggregated level. We disaggregate firms according to size (Section 5.2) and sector of activity (Section 5.3), as well as repeating our regressions for individual years (Section 5.4). We also explore potential asymmetries in the growth process between growing and shrinking firms (Section 5.5).

Sensitivity to specification

In Table 3 we observed that the highest contemporaneous correlations between the VAR series were between profits growth and labour productivity growth. This high degree of multicollinearity may lead to excessively sensitive coefficient estimates. To explore this sensitivity, we repeat the analysis excluding either the productivity growth or the GOS growth variables, and we hope to obtain similar coefficient estimates to those obtained earlier. Table 7 presents the regression results when productivity growth is excluded, and Table 8 presents the results when GOS growth is excluded. It is encouraging that we still find that employment growth is relatively strongly associated with subsequent growth of sales in all specifications, which in turn is relatively strongly associated with the growth of profits. Sales growth is also observed to have a feedback effect on subsequent employment growth, of a similar magnitude to that found in Table 6. However, in this simplified specification we no longer observe the direct influence of employment growth on subsequent profits growth (or on productivity growth).
Another difference concerns the relationship between growth of GOS and subsequent growth of employment or sales. The results obtained from the different specifications are admittedly different in a few respects, and so we should be especially cautious in drawing conclusions from this rather preliminary analysis.

Size disaggregation

Due care needs to be taken to deal with how growth dynamics vary with factors such as firm size. We cannot suppose that it will be meaningful to take a 'grand average' over a large sample of firms and assume a common structural specification. Coad ('A closer look at serial growth rate correlation') shows how the time scale of growth processes varies between small and large firms. For example, whilst small firms display significant negative autocorrelation in annual growth rates, larger firms experience positive autocorrelation, which is consistent with the idea that they plan their growth projects over a longer time horizon. As a result, before we can feel confident about the robustness of our results, we should investigate the possible coexistence of different growth patterns for firms of different sizes. We split our sample into 5 size groups, according to their sales in 1996, and the results are presented in Table 9. Sorting growing entities into size groups is not a straightforward statistical task, however. In Table 10, therefore, we use an alternative methodology for sorting the firms into size groups (i.e. according to mean number of employees, 1996-2004). Although similar patterns are observed in each of the size groups, we observe that the autocorrelation coefficients (along the diagonals) do seem to vary with firm size (more on this in Coad, 'A closer look at serial growth rate correlation'). We also observe that, as we move towards larger firms, the contribution of sales growth to both employment growth and growth of profits seems to increase in magnitude.
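The sorting of firms into equally populated size groups can be sketched as follows. The sales figures are hypothetical simulated values; the quintile split mirrors the five-group scheme used for Table 9.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical firm sizes (sales in the base year); real firm sizes are
# roughly lognormally distributed.
sales_1996 = rng.lognormal(mean=10.5, sigma=1.2, size=1000)

# Split into 5 equally populated size groups, as for Table 9:
edges = np.quantile(sales_1996, [0.2, 0.4, 0.6, 0.8])
group = np.digitize(sales_1996, edges)  # 0 = smallest firms, 4 = largest

counts = np.bincount(group)
print(counts)  # → [200 200 200 200 200]
```

The alternative scheme of Table 10 simply replaces `sales_1996` with each firm's mean number of employees over the sample period before computing the quantile edges.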
It is also interesting to observe that employment growth has less of an effect on subsequent productivity growth for larger firms, which is consistent with the idea that small firms have to struggle to reach the minimum efficient scale (MES), and until they reach the MES increases in employment will be associated with increases in productivity.

Sectoral disaggregation

One possibility that deserves investigation is that there may be a sector-specific element in the dynamics of firm growth. For example, the evolution of the market may be easier to foresee in some industries (with mature technologies, for example) than in others. Industries may also vary in relation to the importance of employment growth for the growth of output. We explore how our results vary across industries by loosely following Bottazzi et al. ('Corporate growth and industrial structure: Some evidence from the Italian manufacturing industry'), and comparing the results from four particular sectors: precision instruments, basic metals, machinery and equipment, and textiles. These sectors have been chosen to represent the different sectors of Pavitt's taxonomy of industries (Pavitt, 'Sectoral patterns of technical change: towards a taxonomy and a theory'); that is, science-based industries, scale-intensive industries, specialized supply industries, and supplier-dominated industries respectively.10 The regression results are presented in Table 11. Our results emphasize a certain degree of heterogeneity between diverse sectors. For example, in the Precision Instruments and Machinery/Equipment sectors, employment growth seems to make a particularly large contribution to subsequent sales growth. In addition, it appears that sales growth has a relatively large influence on subsequent growth of profits in the Machinery/Equipment and Textiles sectors. We also observe that productivity growth is relatively strongly associated with growth of profits in the Textiles sector.
Temporal disaggregation

It may well be the case that the processes of firm growth are not insensitive to the business cycle. To investigate this possibility, we repeat our analysis for individual years (i.e. the years 1998, 2000, 2002 and 2004). The results are presented in Table 12. We do indeed observe that the regression results vary over time. In particular, the contribution of sales growth and employment growth to the growth of profits seems to vary considerably. Employment growth seems to have a relatively consistent effect on the growth of sales, however.

Asymmetric effects for growing or shrinking firms

One potential caveat of the preceding analysis is that there may be asymmetric effects for firms that increase employment and for firms that decrease employment. It may be relatively easy for firms to hire new employees while firing costs may limit their ability to lay workers off. In this section we therefore explore differential effects of the explanatory variables over the employment growth distribution. To do this, we perform quantile regressions, which are able to describe variation in the regression coefficients over the conditional employment growth quantiles. (For an introduction to quantile regression, see Koenker, 'Quantile Regression'.) Figure 3 and Table 13 present the quantile regression coefficient estimates. Roughly speaking, the lower quantiles (closer to 0) represent firms with net employment losses whilst the upper quantiles (closer to 1) represent firms with net employment gains. We observe that the coefficient on lagged growth of profits is slightly higher at the lower quantiles. This suggests that, for those firms that are shedding employees, growth of profits seems to attenuate the firing of employees. Put differently, if a firm is firing employees, it can be expected to fire even more workers if it is experiencing poor financial performance. The magnitude of this effect is not very large, however.
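A quantile regression can be sketched via the standard linear-programming formulation of the estimator. The data below are simulated with heteroskedastic noise purely to illustrate how the slope can vary across conditional quantiles; this is not the grqreg routine used for Figure 3, and the variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, tau):
    """Quantile regression in its standard LP form: minimize the pinball
    loss tau*u + (1-tau)*v subject to X b + u - v = y, with u, v >= 0.
    A bare-bones stand-in for the estimator behind Table 13 / Figure 3."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0.0, 1.0, n)                # e.g. lagged growth of profits
y = 0.5 * x + (0.2 + 0.5 * x) * rng.normal(size=n)  # heteroskedastic noise
X = np.column_stack([np.ones(n), x])

b_low = quantile_reg(X, y, 0.10)
b_high = quantile_reg(X, y, 0.90)
print(b_low[1], b_high[1])  # the slope differs across conditional quantiles
```

Because the noise scale here rises with x, the slope at the 90% conditional quantile exceeds that at the 10% quantile, which is precisely the kind of asymmetry the quantile regression plots are designed to reveal.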
We also check for analogous effects in the relationships between other pairs of variables by looking at the quantile regression plots. Concerning the autocorrelation coefficients, we find results similar to those reported in Coad ('A closer look at serial growth rate correlation'). For the other relationships, we sometimes obtain interesting results.11 We therefore conclude this section by acknowledging that although there may be asymmetric effects for growing and shrinking firms, these asymmetries do not appear to be so large as to make our previous estimates unhelpful.

Discussion

The coefficient estimates from the preceding section have allowed us to observe the comovements of the four series: employment growth, sales growth, growth of the gross operating surplus, and growth of labour productivity. Figure 4 provides a simple summary representation of our results, which is based on an (admittedly subjective) synthesis of the LAD estimates reported in Tables 6, 7, and 8. It should certainly be remembered, however, that we cannot rely too heavily on our regression estimates because our results do appear to be sensitive to regression specification, firm size, sector of activity, and year. Figure 4 illustrates that employment growth appears to contribute positively to sales growth, which in turn is associated with subsequent profits growth. These early results provide (limited) support to the idea that employment growth may perhaps be seen as the 'stimulus' which drives growth in other domains of the firm. Indeed, among the series that we consider here, employment growth is the firm's main decision variable. Our results allow us to comment on two theories of firm growth. First, the replicator dynamics model, frequently found in neo-Schumpeterian simulation models, supposes that retained profits are the main source of firm growth.
In this vein, we should expect profitable firms to grow whilst struggling firms would lose market share (see Coad, 'Testing the principle of "growth of the fitter": the relationship between profits and firm growth', for a discussion).

Figure 3: Quantile regression analysis of the relationship between growth of profits (t-1) and employment growth (t). Variation in the coefficient on lagged growth of profits over the conditional quantiles of the employment growth rate distribution. Conditional quantiles (on the x-axis) range from 0 (for the extreme negative-growth firms) to 1 (for the fastest-growing firms). Confidence intervals (non-bootstrapped) extend to 95% confidence intervals in either direction. Horizontal lines represent OLS estimates with 95% confidence intervals. Graphs made using the 'grqreg' Stata module (Azevedo, 'grqreg: Stata module to graph the coefficients of a quantile regression').

Second, and not altogether unrelated, the 'accelerator' models of firm investment suppose that growth of sales leads to subsequent reinvestment in the firm, which would thus result in employment growth. The results presented here do not offer much support to these two theories of firm growth. Instead, it seems that firm growth is very much a discretionary phenomenon. The decision to take on new employees seems to be largely exogenous, and the mere generation of profits certainly does not automatically imply that these profits will be reinvested in the firm.

Two stories of firm growth

Our results are consistent with (at least) two possible stories of firm growth. First, one may believe that firms are incapable of accurately seeing into the future. At any time some firms may take a risk and decide to grow, and this increase in resources eventually results in an increase in sales and also an increase in profits.
Other firms may be hesitant about hiring new employees, and thus they may miss out on growth opportunities.12 Second, an alternative view is that firms can accurately anticipate the evolution of the market (demand shocks or technology shocks, for example). These rational firms take on new employees with the aim of exploiting these anticipated opportunities. In this case, employment growth is merely a response to new information about market conditions, and it would be quite incorrect to say that employment growth causes sales growth, because it is the successful anticipation of sales growth opportunities that leads to employment growth. We note however that many intermediate cases are also possible, whereby managers do not know for sure how the business climate will evolve but they are willing to take a bet on a 'hunch' they might have. In order to decide upon the level of foresight of business firms (i.e. is employment growth an exogenous event?), we note here that qualitative empirical work (interviews and questionnaires) may be informative.

Conclusion

We have presented some preliminary investigations of a regression framework that, hopefully, will allow us to better understand the growth behaviour of business firms. The application of a VAR framework to firm growth has been introduced here, and we have investigated the robustness of our results along a number of dimensions, but our analysis should be seen as preliminary. In particular, there is a considerable degree of multicollinearity between the individual statistical series that make up the PVAR model, and this seems to make our results rather 'wobbly'. We are therefore wary of putting too much confidence in any specific point estimates. We can identify (at least) three important caveats in our analysis.
First, despite our efforts to conduct a robustness analysis, there are still unresolved issues of sample selection bias that stem from the fact that we have not included firms with negative values for their gross operating surplus. Furthermore, our results are obtained from analyzing a balanced panel of surviving firms and do not deal with entry or exit. Second, we observe that the R² statistics are rather low (typically lower than 5%). Could this be due to measurement error, aggregation effects, or perhaps some statistical fallacy? How does the R² improve when we include contemporaneous effects or longer lags? Does the R² statistic improve when we use data covering shorter time periods (e.g. quarterly data)? This issue clearly deserves to be explored. Third, we do not know to what extent our results are specific to the case of French manufacturing firms. A next step would be to begin thinking about moving from a reduced-form VAR to a structural VAR, in which some of the contemporaneous relationships are presented in more detail. If anything, our results would inform a structural VAR in which profits growth at (t) depends on sales growth and employment growth at (t), and where sales growth at (t) depends on employment growth at (t). Employment growth at (t), however, would not depend on any contemporaneous values of the other variables. One unresolved question concerns how we should aggregate over firms. Our results seem to suggest that, at the firm level, employment growth precedes sales growth, and sales growth is associated with subsequent growth in profits. At the aggregate level, however, there is some evidence (from monthly US data) that increases in output are followed by a less than proportionate increase in labour hours, and that this increase occurs mostly within a 6-month interval (Sims, 1974). We also outline some directions for future research.
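The recursive ordering proposed above (employment growth first, then sales growth, then profits growth) corresponds to a Cholesky identification of the contemporaneous relationships. The following sketch, using an illustrative residual covariance rather than our data, shows how the reduced-form residual covariance would be mapped to structural impact coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reduced-form residuals (n obs x 3 series, ordered as
# employment growth, sales growth, GOS growth), generated from a known
# lower-triangular impact matrix purely for illustration.
L_true = np.array([[1.0, 0.0, 0.0],
                   [0.4, 1.0, 0.0],
                   [0.2, 0.6, 1.0]])
u = rng.normal(size=(5000, 3)) @ L_true.T
Sigma = np.cov(u, rowvar=False)

# The Cholesky factor implements the recursive ordering: row i gives the
# contemporaneous impact of structural shocks 1..i on variable i, so
# employment responds only to its own shock, sales to the employment
# shock plus its own, and profits to all three.
P = np.linalg.cholesky(Sigma)
print(np.round(P, 2))
```

With real estimates, `Sigma` would be the covariance matrix of the PVAR residuals, and the zeros above the diagonal of `P` encode exactly the exclusion restrictions stated in the text.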
It may be fruitful to use data at quarterly intervals, in line with macroeconomic applications of VAR models. The Compustat database provides quarterly data on sales, investment, and other series and may be a suitable database in this respect. Furthermore, series such as R&D expenditure could be added to the VAR model. These would give additional information on the relationship between innovation and firm growth.13 In addition, we might want to include investment in fixed assets in our VAR framework, even though there are specific issues related to this variable that would have to be dealt with.14

Figure 1: Distribution of the unconditional growth rates of our sample of French manufacturing firms. Top left: employment growth. Top right: sales growth. Bottom left: growth of gross operating surplus. Bottom right: growth of labour productivity. Note the log scale on the y axis.

Figure 4: A stylized depiction of the process of firm growth, based on the PVAR(1) specification in Table 6.

Table 1: Summary statistics after cleaning the data
             Mean     Std. Dev.   10%     25%     Median   75%     90%
1996  Sales  99328    340574      11733   17531   30693    68306   179629
      Empl   101.01   235.79      25      32      45       86      190
2000  Sales  125609   447165      13670   21199   38342    84011   227723
      Empl   106.16   234.71      27      34      47       93      200
2004  Sales  135671   527168      13237   21128   40046    88751   239982
      Empl   104.35   238.96      25      33      47       92      200

Table 2: Summary statistics for the growth rate series.

Table 3: Matrix of contemporaneous correlations for the indicators of firm growth. Scatterplot matrix of contemporaneous values of employment growth, sales growth, growth of GOS and growth of productivity in a typical year (2000).

Table 4: OLS estimation of equation (2).

Table 5: Fixed-effect estimation of equation (2).

Table 6: LAD estimation of equation (2).

Table 7: LAD estimation of equation (2) where m=3 and corresponds to the vector (Empl. growth(i,t), Sales growth(i,t), GOS growth(i,t))'.
w_t            Empl. gr.(t-1)  Sales gr.(t-1)  GOS gr.(t-1)  Empl. gr.(t-2)  Sales gr.(t-2)  GOS gr.(t-2)    R²      obs
Empl. growth   -0.0205          0.0529          0.0013                                                       0.0060  50277
  t-stat       -5.64           20.86            2.32
Sales growth    0.1194         -0.0767          0.0014                                                       0.0054  50279
  t-stat       23.06          -21.25            1.77
GOS growth     -0.0073          0.2144         -0.2845                                                       0.0341  46826
  t-stat       -0.31           13.10          -73.61
Empl. growth   -0.0216          0.0581          0.0021        0.0119          0.0260          0.0013         0.0087  40924
  t-stat       -6.03           22.11            3.35          3.32           10.00            2.09
Sales growth    0.1381         -0.0850          0.0024        0.0655         -0.0358          0.0004         0.0066  40925
  t-stat       24.62          -20.72            2.50         11.72           -8.83            0.36
GOS growth     -0.0004          0.3269         -0.3633       -0.0276          0.1059         -0.1475         0.0472  38084
  t-stat       -0.02           19.82          -89.52         -1.25            6.54          -36.76

Table 8: LAD estimation of equation (2) where m=3 and corresponds to the vector (Empl. growth(i,t), Sales growth(i,t), labour productivity growth(i,t))'.
w_t            Empl. gr.(t-1)  Sales gr.(t-1)  Prod. gr.(t-1)  Empl. gr.(t-2)  Sales gr.(t-2)  Prod. gr.(t-2)  R²      obs
Empl. growth   -0.0034          0.0428          0.0186                                                          0.0075  59332
  t-stat       -0.97           17.84            8.98
Sales growth    0.1286         -0.0888          0.0230                                                          0.0053  59334
  t-stat       24.93          -25.02            7.50
Prod. growth   -0.0030          0.0111         -0.2287                                                          0.0258  59266
  t-stat       -0.52            2.80          -65.88
Empl. growth   -0.0068          0.0473          0.0217          0.0221          0.0178          0.0172          0.0109  50809
  t-stat       -1.81           17.59            9.44            5.83            6.61            7.21
Sales growth    0.1496         -0.0982          0.0221          0.0620         -0.0391          0.0025          0.0066  50810
  t-stat       26.46          -24.43            6.43           10.90           -9.69            0.69
Prod. growth   -0.0085          0.0298         -0.2837         -0.0313          0.0145         -0.1486          0.0354  50751
  t-stat       -1.25            6.15          -68.03           -4.59            3.00          -34.52

Table 9: LAD estimation of equation (2) across different size groups. Firms sorted into size groups according to their initial size (sales in 1996). Group 1 contains the smallest firms. Standard errors (and hence t-statistics) obtained from using 500 bootstrap replications.
w_t            Empl. gr.(t-1)  Sales gr.(t-1)  GOS gr.(t-1)  Prod. gr.(t-1)  R²      obs
Size group 1
Empl. growth   -0.0697          0.0306          0.0001        0.0308          0.0079
  t-stat       -3.81            2.42            0.06          2.01
Sales growth    0.1547         -0.1489         -0.0012        0.0405          0.0106
  t-stat        5.44           -5.95           -0.49          1.54
GOS growth      0.2102          0.0945         -0.3039        0.0564          0.0412
  t-stat        1.87            1.19          -12.87          0.49
Prod. growth    0.0990         -0.0030          0.0003       -0.2480          0.0388
  t-stat        3.39           -0.15            0.10        -10.85
Size group 2
Empl. growth   -0.0116          0.0348         -0.0037        0.0490          0.0069  10292
  t-stat       -0.71            3.79           -2.35          4.18
Sales growth    0.1800         -0.1452          0.0005        0.0339          0.0106  10292
  t-stat        7.28           -8.68            0.19          1.75
GOS growth     -0.0223          0.1304         -0.2861        0.0880          0.0348
  t-stat       -0.27            2.56          -12.04          1.06
Prod. growth    0.0203          0.0048          0.0008       -0.2284          0.0260  10288
  t-stat        0.79            0.27            0.35        -11.48
Size group 3
Empl. growth    0.0004          0.0230         -0.0015        0.0503          0.0076  10166
  t-stat        0.03            3.03           -1.18          5.25
Sales growth    0.1623         -0.1155         -0.0040        0.0753          0.0065  10166
  t-stat        8.75           -7.68           -1.45          3.91
GOS growth      0.2829          0.0045         -0.3263        0.3062          0.0357
  t-stat        3.17            0.06          -11.47          2.73
Prod. growth    0.0400         -0.0035         -0.0025       -0.2028          0.0239  10166
  t-stat        1.49           -0.20           -0.71         -8.10
Size group 4
Empl. growth    0.0053          0.0491         -0.0015        0.0158          0.0071
  t-stat        0.40            5.67           -1.02          1.46
Sales growth    0.1147         -0.0641         -0.0027        0.0323          0.0030
  t-stat        5.63           -4.08           -1.02          2.26
GOS growth      0.0652          0.2084         -0.3691        0.2732          0.0422
  t-stat        0.66            3.20          -11.34          2.66
Prod. growth   -0.0296          0.0462         -0.0048       -0.1850          0.0157
  t-stat       -1.09            2.85           -1.42         -7.04
Size group 5
Empl. growth    0.1309          0.0486         -0.0016        0.0253          0.0237  10092
  t-stat        8.67            7.66           -1.01          3.65
Sales growth    0.1587         -0.0040         -0.0014        0.0266          0.0061  10093
  t-stat        5.60           -0.26           -0.55          1.79
GOS growth      0.1851          0.2978         -0.2939        0.2161          0.0233
  t-stat        2.05            4.57           -8.91          2.46
Prod. growth   -0.0406          0.0254         -0.0005       -0.1566          0.0102  10080
  t-stat       -1.38            1.06           -0.10         -5.52

Table 10: LAD estimation of equation (2) across different size groups. Firms sorted into size groups according to their mean size (average number of employees, 1996-2004). Group 1 contains the smallest firms. Standard errors (and hence t-statistics) obtained from using 500 bootstrap replications.
w_t            Empl. gr.(t-1)  Sales gr.(t-1)  GOS gr.(t-1)  Prod. gr.(t-1)  R²      obs
Size group 1
Empl. growth   -0.0826          0.0319          0.0003        0.0309          0.0129
  t-stat       -5.36            3.56            0.18          3.08
Sales growth    0.1127         -0.1397          0.0019        0.0248          0.0089
  t-stat        5.46           -7.66            0.78          1.45
GOS growth      0.1853          0.0108         -0.2909        0.1741          0.0370  9294
  t-stat        1.94            0.15          -11.07          2.25
Prod. growth    0.0828          0.0020         -0.0005       -0.2310          0.0309
  t-stat        2.69            0.12           -0.20         -9.73
Size group 2
Empl. growth   -0.0612          0.0273         -0.0005        0.0174          0.0051
  t-stat       -3.90            3.14           -0.28          1.53
Sales growth    0.1282         -0.1417         -0.0022        0.0221          0.0107
  t-stat        5.71           -7.78           -0.94          1.20
GOS growth      0.1707          0.0909         -0.3208        0.1344          0.0386  9454
  t-stat        1.91            1.82          -11.90          1.42
Prod. growth    0.0789         -0.0019          0.0002       -0.2298          0.0279
  t-stat        3.03           -0.12            0.06        -10.19
Size group 3
Empl. growth   -0.0006          0.0308         -0.0043        0.0443          0.0072
  t-stat       -0.05            4.67           -3.07          5.13
Sales growth    0.1226         -0.0929         -0.0047        0.0426          0.0049
  t-stat        6.76           -5.90           -1.72          2.64
GOS growth      0.0554          0.1877         -0.3332        0.2393          0.0386  9512
  t-stat        0.58            3.11          -14.53          2.73
Prod. growth    0.0310          0.0097         -0.0006       -0.2156          0.0228
  t-stat        1.23            0.55           -0.17         -9.28
Size group 4
Empl. growth    0.0670          0.0531         -0.0012        0.0309          0.0121  9903
  t-stat        4.80            5.52           -1.17          3.09
Sales growth    0.1927         -0.0696         -0.0033        0.0642          0.0076  9903
  t-stat        8.99           -4.41           -1.20          3.37
GOS growth      0.2840          0.0991         -0.3577        0.3899          0.0379  9166
  t-stat        2.89            1.76           -9.80          3.35
Prod. growth   -0.0009          0.0194         -0.0073       -0.1558          0.0163  9900
  t-stat       -0.05            1.18           -2.87         -7.71
Size group 5
Empl. growth    0.1335          0.0606         -0.0035        0.0402          0.0263
  t-stat        7.26            6.82           -1.96          3.99
Sales growth    0.1832         -0.0035         -0.0022        0.0408          0.0086
  t-stat        7.74           -0.22           -0.81          2.93
GOS growth      0.1909          0.2519         -0.2857        0.2412          0.0235  9392
  t-stat        1.66            2.93           -9.08          2.12
Prod. growth   -0.0421          0.0289          0.0003       -0.1595          0.0105
  t-stat       -1.44            1.39            0.07         -5.96

Table 11: LAD estimation of equation (2) across different industries. Standard errors (and hence t-statistics) obtained from using 1000 bootstrap replications.
w_t            Empl. gr.(t-1)  Sales gr.(t-1)  GOS gr.(t-1)  Prod. gr.(t-1)  R²      obs
NAF 33: Precision instruments
Empl. growth    0.0939          0.0721          0.0008        0.0149          0.0195
  t-stat        2.53            2.95            0.26          0.55
Sales growth    0.2458         -0.0766         -0.0046        0.0739          0.0120
  t-stat        3.63           -1.41           -0.95          1.26
GOS growth     -0.0172          0.2630         -0.2924        0.0852          0.0406
  t-stat       -0.07            1.34           -4.23          0.36
Prod. growth   -0.0583          0.0741         -0.0025       -0.3129          0.0322
  t-stat       -0.99            1.52           -0.45         -5.95
NAF 27: Basic metals
Empl. growth    0.0120          0.0553         -0.0023        0.0024          0.0074
  t-stat        0.33            2.50           -0.50          0.10
Sales growth    0.1060         -0.0875         -0.0111        0.0717          0.0069
  t-stat        1.67           -1.77           -1.71          1.42
GOS growth      0.3084         -0.0253         -0.3701        0.2307          0.0552
  t-stat        1.36           -0.24           -4.94          1.00
Prod. growth   -0.1107         -0.0387         -0.0108       -0.1921          0.0342
  t-stat       -1.66           -0.83           -1.27         -3.27
NAF 29: Machinery and equipment
Empl. growth   -0.0016          0.0307          0.0002        0.0140          0.0038
  t-stat       -0.08            3.00            0.12          1.10
Sales growth    0.2059         -0.1896          0.0010        0.0431          0.0151
  t-stat        5.30           -7.29            0.22          1.36
GOS growth      0.1371          0.2054         -0.3506        0.0937          0.0493
  t-stat        0.96            2.05           -8.28          0.65
Prod. growth    0.0072         -0.0136          0.0005       -0.2332          0.0278
  t-stat        0.17           -0.54            0.13         -6.08
NAF 17: Textiles
Empl. growth    0.0133          0.0512          0.0000        0.0103          0.0061
  t-stat        0.72            2.70           -0.01          0.78
Sales growth    0.0660          0.0429          0.0030       -0.0491          0.0051
  t-stat        1.78            1.54            0.55         -1.76
GOS growth      0.2778          0.3242         -0.3617        0.2510          0.0383
  t-stat        1.89            3.25           -7.27          2.36
Prod. growth    0.0521          0.0411         -0.0064       -0.1773          0.0178
  t-stat        0.84            0.99           -0.80         -3.43

Table 13: Quantile regression estimation of equation (2), focusing on the relationship between profits growth (t-1) and employment growth (t). 50269 observations. Standard errors (and hence t-statistics) obtained from using 100 bootstrap replications.
           10%      25%      50%      75%      90%
Coeff.     0.0007  -0.0003  -0.0019  -0.0030  -0.0033
t-stat     0.40    -0.34    -2.94    -2.61    -1.37
R²         0.0098   0.0067   0.0069   0.0074   0.0084

The EAE databank has been made available to the author under the mandatory condition of censorship of any individual information.
This database has already featured in several other studies into firm growth - see Bottazzi et al. ('Corporate growth and industrial dynamics: Evidence from French manufacturing') and Coad ('Testing the principle of "growth of the fitter": the relationship between profits and firm growth'; 'A closer look at serial growth rate correlation').
More specifically, we examine firms in the two-digit NAF sectors 17-36, where firms are classified according to their sector of principal activity (the French NAF classification matches with the international NACE and ISIC classifications). We do not include NAF sector 37, which corresponds to recycling industries.
22 319, 22 231, 22 305, 22 085, 21 966, 22 053, 21 855, 21 347 and 20 723 firms respectively.
NAN is shorthand for Not a Number, which refers to the result of a numerical operation which cannot return a valid number value. In our case, we may obtain a NAN if we try to take the logarithm of a negative number, or if we try to divide a number by zero.
GOS is sometimes referred to as 'profits' in the following.
Labour productivity is calculated in the usual way by dividing Value Added by the number of employees.
In fact, this choice of strategy for deflating our variables was to some extent imposed upon us, since I was unable to find a suitable sector-by-sector series of producer price indices to be used as deflators.
Bottazzi et al. ('Corporate growth and industrial dynamics: Evidence from French manufacturing') present a parametric investigation of the distribution of sales growth rates of French manufacturing firms.
The sectors we study are NAF 33 (manufacturing of medical, precision and optical instruments, watches and clocks), NAF 27 (manufacturing of basic metals), NAF 29 (manufacturing of machinery and equipment, nec.) and NAF 17 (manufacturing of textiles).
Note that we do not follow exactly the methodology in Bottazzi et al. ('Corporate growth and industrial structure: Some evidence from the Italian manufacturing industry') because we consider only 2-digit sectors, for want of a suitable number of observations for our empirical model.
For example, there is considerable variation in the coefficient on lagged employment growth on growth of profits. At the lower quantiles of profits growth, lagged employment growth has a positive effect whilst having a negative effect at the upper quantiles.
Remember however that we have excluded those unfortunate firms that obtained negative profits - which is a source of sample selection bias. We should thus be extremely wary of saying that employment growth always leads to sales or profits growth, because it may be that in some cases employment growth leads to failure.
One hypothesis we could test here (recently put forward by Giovanni Dosi) is that firms have 'behavioural' decision rules for R&D expenditures, whereby they make no attempt to 'maximize expected return on all future innovation opportunities', as some neoclassical economists might suggest, but instead they simply try to adjust their R&D expenditures in an attempt to keep a roughly constant R&D/sales ratio. This line of research is currently being pursued with Rekha Rao.
First, there are problems distinguishing between expansionary and replacement investment, which obscures the relationship between investment in fixed assets and firm growth. Second, there is a remarkable lumpiness in the time series of investment in fixed assets. For example, Doms and Dunne (1998) consider a large sample of US manufacturing plants 1972-1988 and observe that, on average, half a plant's total investment was performed in just three years.
01751139
en
[ "spi.other" ]
2024/03/05 22:32:07
2014
https://hal.univ-lorraine.fr/tel-01751139/file/DDOC_T_2014_0193_YANG.pdf
Dr Claude Esling, Yudong Zhang, Dr Yuping Ren, Dr Song Li, Dr Na Xiao, Dr Jing Bai, Ms Nan Xu, Ms Shiying Wang, Dr Zhangzhi Shi, Mr Yan Haile, Ms Xiaorui Liu, Ms Jianshen

[Translated from Chinese; the passage begins and ends mid-sentence:] ...martensite variants are in a compound twin relationship. In the straight, low-contrast zones, the compound-twinned NM martensite variants are distributed symmetrically about the plate interfaces, whereas in the curved, high-contrast zones the NM martensite variants are distributed asymmetrically about the plate interfaces. For the seven-layer modulated martensite, in the straight, low-contrast zones the NiMnGa film contains several NM martensite variant groups, each comprising several NM martensite variants, so that NM martensite variants of several distinct orientations exist across the whole NiMnGa film. For the 7M martensite, the same variant groups exist, but each group contains several differently oriented 7M martensite variants. This indicates that the NM martensite variants in NiMnGa must pass through the 7M state in order to attain the full set of variants, each martensite variant transforming into two differently oriented variants. From the variant orientations obtained by EBSD and from crystallographic calculations, the two NM...

Keywords: ferromagnetic shape memory alloys (FSMAs), Ni-Mn-Ga thin films, martensitic transformation, EBSD, misorientation, texture

Epitaxial Ni-Mn-Ga thin films have attracted considerable attention, since they are promising candidates for magnetic sensors and actuators in micro-electro-mechanical systems. Comprehensive information on the microstructural and crystallographic features of the NiMnGa films and their relationship with the constraints of the substrate is essential for further property optimization. In the present work, epitaxial Ni-Mn-Ga thin films were produced by DC magnetron sputtering and then characterized by X-ray diffraction (XRD) and electron backscatter diffraction in a scanning electron microscope (SEM-EBSD). Epitaxial NiMnGa thin films with a nominal composition of Ni50Mn30Ga20 and a thickness of 1.5 µm were successfully fabricated on a monocrystalline MgO substrate by DC magnetron sputtering, after optimization of sputtering parameters such as sputtering power, substrate temperature and seed layer in the present work. XRD measurements demonstrate that the epitaxial NiMnGa thin films are composed of three phases: austenite, NM martensite and 7M martensite. With the optimized measurement geometries, the maximum number of diffraction peaks of the phases concerned, especially of the low-symmetry 7M martensite, are acquired and analyzed.
The lattice constants of all three phases in the films, under the constraints of the substrate, are fully determined. These serve as prerequisites for the subsequent EBSD crystallographic orientation characterizations. Depth-resolved SEM-EBSD analyses further verified the coexistence of the three constituent phases: austenite, 7M martensite and NM martensite. NM martensite is located near the free surface of the film, austenite above the substrate surface, and 7M martensite in the intermediate layers between austenite and NM martensite. Microstructure characterization shows that both the 7M martensite and the NM martensite have a plate morphology and are organized into two characteristic zones featuring low and high relative secondary electron image contrast. Local martensite plates with similar morphological orientation are organized into plate groups or variant colonies. Further EBSD characterization indicates that there are four distinct martensite plates in each variant group for both NM and 7M martensite. Each NM martensite plate is composed of paired major and minor (in terms of their thicknesses) lamellar variants with a coherent interlamellar interface, whereas each 7M martensite plate contains one orientation variant. Thus, there are four orientation variants of 7M martensite and eight orientation variants of NM martensite in one variant group. According to the crystallographic orientations of the martensites and the crystallographic calculations, for NM martensite the inter-plate interfaces are composed of compound twins in adjacent NM plates. The symmetric distribution of compound twins results in the long and straight plate interfaces in the low relative contrast zone. The asymmetric distribution leads to the change of inter-plate interface orientation in the high relative contrast zone.
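The crystallographic calculations referred to here rest on misorientation computations between variant orientations. As a generic illustration only (assuming cubic parent symmetry; the actual analysis involves the monoclinic 7M and tetragonal NM lattices and the measured EBSD orientations), the minimum misorientation angle between two orientations can be computed as follows:

```python
import numpy as np
from itertools import permutations, product

def cubic_rotations():
    """The 24 proper rotations of the cubic point group, built as
    signed permutation matrices with determinant +1."""
    ops = []
    for perm in permutations(range(3)):
        for signs in product([1, -1], repeat=3):
            R = np.zeros((3, 3))
            for i, (p, s) in enumerate(zip(perm, signs)):
                R[i, p] = s
            if np.isclose(np.linalg.det(R), 1.0):
                ops.append(R)
    return ops

def misorientation_deg(g1, g2, sym_ops):
    """Smallest rotation angle between orientation matrices g1 and g2,
    minimized over the crystal symmetry operators."""
    best = 180.0
    for S in sym_ops:
        dg = S @ g1 @ g2.T
        cos_t = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
        best = min(best, np.degrees(np.arccos(cos_t)))
    return best

def rot_z(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

# A 95-degree rotation about z is symmetry-equivalent to a 5-degree one:
print(misorientation_deg(np.eye(3), rot_z(95.0), cubic_rotations()))
```

Twin relationships between variants are identified in exactly this way: a variant pair is classified as a given twin type when its minimum misorientation matches the characteristic twinning rotation within experimental tolerance.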
For 7M martensite, both Type-I and Type-II twin interfaces are nearly perpendicular to the substrate surface in the low relative contrast zones. The Type-I twin pairs appear with a much higher frequency than the Type-II twin pairs. However, there are two Type-II twin interface trace orientations and one Type-I twin interface trace orientation in the high relative contrast zones, and there the Type-II twin pairs are more frequent than the Type-I twin pairs. The inconsistent occurrences of the different types of twins in the different zones originate from the substrate constraint. The crystallographic calculation also indicates that the martensitic transformation sequence is from austenite to 7M martensite and then to NM martensite (A→7M→NM). The present study intends to offer deep insights into the crystallographic features and martensitic transformation of epitaxial NiMnGa thin films.

Résumé

Epitaxial NiMnGa thin films with a nominal composition of Ni50Mn30Ga20 and a thickness of 1.5 µm were successfully fabricated on the monocrystalline MgO substrate by DC magnetron sputtering, after optimization of parameters such as sputtering power, substrate temperature and seed layer within the present work. XRD diffraction measurements show that the epitaxial NiMnGa thin films are composed of three phases: austenite, NM martensite and modulated 7M martensite. With the optimized measurement geometries, the maximum possible number of diffraction peaks of the relevant phases, in particular given the low symmetry of the 7M martensite, are acquired and analyzed. The lattice constants of all three phases under the constraints of the substrate in the films are fully determined.
SEM-EBSD analyses through the film depth further verified the coexistence of the three constituent phases: austenite, 7M martensite and NM martensite. The NM martensite is located near the free surface of the film, the austenite above the substrate surface, and the 7M martensite in the intermediate layers between the austenite and the NM martensite. Microstructure characterization shows that both the 7M martensite and the NM martensite have a plate morphology and are organized into two characteristic zones featuring low and high relative contrast in secondary electron images. Local martensite plates with similar morphological orientation are organized into plate groups or variant colonies. Further EBSD characterization indicates that there are four distinct martensite plates in each variant colony for both the NM and the 7M martensite. Each NM martensite plate is composed of paired major and minor lamellar variants, in terms of their thicknesses, with a coherent interlamellar interface, whereas each 7M martensite plate contains one orientation variant. Thus, there are four orientation variants of 7M martensite and eight orientation variants of NM martensite in one variant colony. According to the crystallographic orientation of the martensites and the crystallographic calculations, for the NM martensite the inter-plate interfaces consist of compound twins in adjacent NM plates. The symmetric distribution of the compound twins results in long and straight plate interfaces in the low relative contrast zone.
The asymmetric distribution leads to a change of the inter-plate interface orientation in the high relative contrast zone. For the 7M martensite, both the Type-I and the Type-II twin interfaces are nearly perpendicular to the substrate surface in the low relative contrast zones. The Type-I twin pairs appear with a much higher frequency than the Type-II twin pairs. However, there are two Type-II twin interface traces and one Type-I twin interface trace in the high relative contrast zones, where the Type-II twin pairs are more frequent than the Type-I twin pairs. The inconsistent occurrences of the different types of twins in the different zones are due to the substrate constraint. The crystallographic calculation also shows that the martensitic transformation sequence runs from austenite to 7M martensite, which then transforms into NM martensite (A→7M→NM). The present study intends to offer an in-depth study of the crystallographic features and the martensitic transformation of epitaxially grown NiMnGa thin films. Keywords: ferromagnetic shape memory alloys (FSMAs); Ni-Mn-Ga thin films; martensitic transformation; EPCA; misorientation; texture.

General introduction

The miniaturization of electronic systems and the increase of their functionality require the implementation of active and sensitive devices on a small scale [START_REF] Yang | Substrate constraint induced variant selection of 7M martensite in epitaxial NiMnGa thin films[END_REF]. Micro-electro-mechanical systems (MEMS) and nano-electro-mechanical systems (NEMS) are among the potential technologies for decreasing the size of these devices and have been established in wide-ranging domains such as information technology, automotive, aerospace and bio-medical applications [2,3].
Developing excellent active and sensitive materials is of technological interest and also represents a dominating challenge for the design of MEMS and NEMS. In particular, active and sensitive materials that can exhibit large strains with rapid response (also referred to as smart materials) are desirable [4,5]. A number of active materials, such as magnetostrictive materials, piezoelectric ceramics and shape memory alloys, that show strains of a few percent under an applied external field have been proposed as actuator and sensor materials [6][7][8][START_REF] Fukuda | Giant magnetostriction in Fe-based ferromagnetic shape memory alloys[END_REF][START_REF] Handley R C | Modern magnetic materials: principles and applications[END_REF][START_REF] Kohl | Ferromagnetic Shape Memory Microactuators[END_REF]. Among these advanced materials, the ferromagnetic shape memory alloys (FSMAs), also referred to as magnetic shape memory alloys (MSMAs), are a fascinating group of materials that can provide a significant and reversible strain at high frequency driven by an external magnetic field [START_REF] Faehler | An Introduction to Actuation Mechanisms of Magnetic Shape Memory Alloys[END_REF][START_REF] Jani | A review of shape memory alloy research, applications and opportunities[END_REF][START_REF] Jiles | Recent advances and future directions in magnetic materials[END_REF][START_REF] Wilson | New materials for micro-scale sensors and actuators: An engineering review[END_REF][START_REF] Nespoli | The high potential of shape memory alloys in developing miniature mechanical devices: A review on shape memory alloy mini-actuators[END_REF][START_REF] Hauser | Chapter 8 -Thin magnetic films[END_REF][START_REF] Acet | Chapter Four -Magnetic-Field-Induced Effects in Martensitic Heusler-Based Magnetic Shape Memory Alloys[END_REF].
FSMAs not only overcome the low efficiency of thermally controlled shape memory actuators, but also exhibit much larger output strains than the magnetostrictive, piezoelectric or electrostrictive materials. The unique properties of FSMAs have attracted extensive research interest during the past few years. The discovery of ferromagnetic Ni-Mn-based Heusler alloys quickly promoted a breakthrough in the application of ferromagnetic shape memory alloys [START_REF] Sozinov | 12% magnetic field-induced strain in Ni-Mn-Ga-based non-modulated martensite[END_REF][START_REF] Sozinov | Giant magnetic-field-induced strain in NiMnGa seven-layered martensitic phase[END_REF][START_REF] Kohl | Recent Progress in FSMA Microactuator Developments[END_REF]. To date, magnetic field induced strains as high as 12% have been achieved in bulk NiMnGa single crystals. However, although extensive research has centered on bulk polycrystalline NiMnGa alloys to understand their mechanical and magnetic properties and phase transformation behaviors [START_REF] Xu | Ni-Mn-Ga shape memory alloys development in China[END_REF][START_REF] Cong | Crystal structures and textures in the hot-forged Ni-Mn-Ga shape memory alloys[END_REF][START_REF] Cong | Crystal structure and phase transformation in Ni53Mn25Ga22 shape memory alloy from 20 K to 473 K[END_REF][START_REF] Cong | Crystal structures and textures of hot forged Ni48Mn30Ga22 alloy investigated by neutron diffraction technique[END_REF][START_REF] Li | Composition-dependent ground state of martensite in Ni-Mn-Ga alloys[END_REF][START_REF] Li | Microstructure and magnetocaloric effect of melt-spun Ni52Mn26Ga22 ribbon[END_REF][START_REF] Li | Twin relationships of 5M modulated martensite in Ni-Mn-Ga alloy[END_REF][START_REF] Wang | Twinning stress in shape memory alloys: Theory and experiments[END_REF][START_REF] Pond R C | Deformation twinning in Ni2MnGa[END_REF][START_REF] Matsuda | Transmission Electron
Microscopy of Twins in 10M Martensite in Ni-Mn-Ga Ferromagnetic Shape Memory Alloy[END_REF][START_REF] Murray | 6% magnetic-field-induced strain by twin-boundary motion in ferromagnetic Ni--Mn--Ga[END_REF], magnetic field induced strains in bulk NiMnGa polycrystals are still on the order of 1%, owing to the complex microstructure. In addition, bulk NiMnGa alloys are very brittle in the polycrystalline state, making them difficult to deform into desired shapes. Ductility can be improved in single-crystal or thin-film form. NiMnGa thin films deposited by various physical vapor deposition methods have shown superior mechanical properties [3,[START_REF] Backen | Epitaxial Ni-Mn-Ga Films for Magnetic Shape Memory Alloy Microactuators[END_REF][START_REF] Eichhorn | Microstructure of freestanding single-crystalline Ni2MnGa thin films[END_REF][START_REF] Dunand | Size Effects on Magnetic Actuation in Ni-Mn-Ga Shape-Memory Alloys[END_REF][START_REF] Chernenko | Magnetic domains in Ni-Mn-Ga martensitic thin films[END_REF][START_REF] Hakola | Ni-Mn-Ga films on Si, GaAs and Ni-Mn-Ga single crystals by pulsed laser deposition[END_REF][START_REF] Dong | Shape memory and ferromagnetic shape memory effects in single-crystal Ni[sub 2]MnGa thin films[END_REF][START_REF] Castano | Structure and thermomagnetic properties of polycrystalline Ni--Mn--Ga thin films[END_REF][START_REF] Dong | Epitaxial growth of ferromagnetic Ni[sub 2]MnGa on GaAs(001) using NiGa interlayers[END_REF][START_REF] Dong | Molecular beam epitaxy growth of ferromagnetic single crystal (001) Ni[sub 2]MnGa on (001) GaAs[END_REF].
However, most of the as-deposited NiMnGa thin films are composed of a complex microstructure and several phases, which are the main obstacles to achieving huge magnetic field induced strains [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Kallmayer | Compositional dependence of element-specific magnetic moments in Ni 2 MnGa films[END_REF][START_REF] Annadurai | Composition, structure and magnetic properties of sputter deposited Ni-Mn-Ga ferromagnetic shape memory thin films[END_REF]. Revealing the local crystallographic orientations and the interfaces between microstructural constituents in epitaxial NiMnGa thin films has therefore been an essential issue, in order to provide useful guidance for post-treatments to eliminate the undesired martensite variants [START_REF] Witherspoon | Texture and training of magnetic shape memory foam[END_REF][START_REF] Wang | A variational approach towards the modeling of magnetic field-induced strains in magnetic shape memory alloys[END_REF][START_REF] Gaitzsch | Magnetomechanical training of single crystalline Ni-Mn-Ga alloy[END_REF][START_REF] Chulist | Twin boundaries in trained 10M Ni-Mn-Ga single crystals[END_REF][START_REF] Chmielus | Training, constraints, and high-cycle magneto-mechanical properties of Ni-Mn-Ga magnetic shape-memory alloys[END_REF].
Ferromagnetic shape memory effect

Of all the shape memory effects discovered, the magnetic field induced shape memory effect is the most conducive to application, owing to its high response frequency and giant actuation strain. The giant magnetic-field-induced strain originates either from orientation rearrangements of martensite variants or from a structural transition of these materials [START_REF] Acet | Chapter Four -Magnetic-Field-Induced Effects in Martensitic Heusler-Based Magnetic Shape Memory Alloys[END_REF][START_REF] Liu | Giant magnetocaloric effect driven by structural transitions[END_REF][START_REF] Kainuma | Magnetic-field-induced shape recovery by reverse phase transformation[END_REF]. Orientation rearrangement of martensite variants, also referred to as magnetic field induced reorientation, is based on a magnetostructural coupling that takes advantage of the large uniaxial magnetocrystalline anisotropy of the martensite phase [START_REF] Acet | Chapter Four -Magnetic-Field-Induced Effects in Martensitic Heusler-Based Magnetic Shape Memory Alloys[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF]. Normally, the c-axis (short axis) is the easy axis in the modulated martensites, while it is the hard axis in the non-modulated martensite, which displays easy magnetization planes perpendicular to the c-axis. As shown in Fig. 1.1, in practical materials the twin variants in martensite have magnetic moments pointing in different directions. Upon application of a magnetic field, the variants that are not aligned with the applied field de-twin to align their moments with the external magnetic field. This movement results in a macroscopic change in length, i.e. a strain.
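As a back-of-the-envelope illustration of the reorientation mechanism, the attainable strain is bounded by the lattice tetragonality: when the short c-axis of a variant replaces the long a-axis, the maximum strain is ε_max = 1 - c/a. A minimal sketch with assumed, literature-typical lattice constants for 5M NiMnGa martensite (illustrative values, not results of this work):

```python
# Maximum magnetic-field-induced reorientation strain from tetragonality.
# Lattice constants are assumed, literature-typical values for 5M NiMnGa
# martensite, not data measured in this work.
a = 5.95  # long axis, angstrom (assumed)
c = 5.60  # short (easy) axis, angstrom (assumed)

# Strain obtained when a variant's c-axis replaces the a-axis direction.
eps_max = 1.0 - c / a
print(f"maximum reorientation strain: {eps_max:.1%}")  # -> 5.9%
```

With these inputs the bound is about 5.9%, consistent in magnitude with the few-percent strains discussed in this chapter for modulated martensites.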
The unique mechanism of magnetic field induced reorientation has been observed in NiMnGa alloys [START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Tickle | Ferromagnetic Shape Memory Materials[END_REF], together with the corresponding microstructure evolution due to magnetic field induced variant rearrangement [START_REF] Tickle | Ferromagnetic Shape Memory Materials[END_REF]. The other mechanism to achieve a giant magnetic field induced strain is a structural transition under an external magnetic field. Such structural transitions comprise the transition from the paramagnetic parent phase to ferromagnetic martensite observed in Fe-Mn-Ga alloys [START_REF] Zhu | Magnetic-field-induced transformation in FeMnGa alloys[END_REF] and the reverse transition from antiferromagnetic martensite to the ferromagnetic parent phase observed in Ni-Mn-In(Sn) alloys [START_REF] Hernando | Thermal and magnetic field-induced martensite-austenite transition in Ni(50.3)Mn(35.3)Sn(14.4) ribbons[END_REF][START_REF] Krenke | Martensitic transitions and the nature of ferromagnetism in the austenitic and martensitic states of Ni-Mn-Sn alloys[END_REF]. Since the external magnetic field required to induce a structural transition is much larger than that needed to induce the reorientation of martensite variants, the NiMnGa Heusler alloys are the more suitable candidates for practical actuators and sensors.

Bulk NiMnGa Heusler alloys

NiMnGa alloys are typical Heusler alloys and exhibit magnetic field induced strains as large as 6-12% at frequencies on the order of kHz [START_REF] Sozinov | 12% magnetic field-induced strain in Ni-Mn-Ga-based non-modulated martensite[END_REF][START_REF] Murray | 6% magnetic-field-induced strain by twin-boundary motion in ferromagnetic Ni--Mn--Ga[END_REF], even under relatively low bias magnetic fields.
However, the phase constituents, the crystal structures of the various phases, the crystallographic features and the mechanical properties of Ni-Mn-Ga alloys are highly sensitive to the chemical composition and temperature. A deep and complete understanding of NiMnGa alloys is crucial for dealing with the magnetic field induced reorientation of NiMnGa martensite variants. Their crystal structures, crystallographic features and some specific properties are introduced here.

Phase constituents and their crystal structures of NiMnGa alloys

NiMnGa alloys have an L21 structure in the austenite state, which is based on the bcc structure and consists of four interpenetrating face centered cubic (FCC) sublattices, as shown for Ni2MnGa in Fig. 1.3(a). When the temperature is decreased, Ni2MnGa and off-stoichiometric Ni-Mn-Ga Heusler alloys undergo a martensitic transformation and transform to the L10 tetragonal structure at low enough Ga concentrations, as shown in Fig. 1.3(b) [START_REF] Pons | Structure of the layered martensitic phases of Ni-Mn-Ga alloys[END_REF][START_REF] Pons | Crystal structure of martensitic phases in Ni-Mn-Ga shape memory alloys[END_REF][START_REF] Banik | Structural studies of Ni_{2+x}Mn_{1-x}Ga by powder x-ray diffraction and total energy calculations[END_REF]. Modulated structures other than the tetragonal structure can also be found in the martensite state, especially at higher Ga concentrations. The most common crystal structures of the modulated martensites are the 5M and 7M modulated structures (also denoted as 10M and 14M), as shown in Fig. 1.3(c) and Fig. 1.3(d) [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF][START_REF] Righi | Commensurate and incommensurate "5M" modulated crystal structures in Ni-Mn-Ga martensitic phases[END_REF][START_REF] Righi | Incommensurate modulated structure of the ferromagnetic shape-memory Ni2MnGa martensite[END_REF]. From Fig. 1.3(c) and Fig.
1.3(d), the generated modulations can be seen for the 5M and 7M cases. The letter 'M' refers to the monoclinic structure resulting from the distortion associated with the modulation. Although this particular example is given for Ni2MnGa, similar modulated states are observed in off-stoichiometric martensitic Heusler alloys incorporating other Z (Z = Ga, In, Sn) elements as well. The crystal structures of martensite given here are only typical examples; for individual alloys, the structure parameters of the martensite depend on the particular composition.

Twin boundaries of martensites in NiMnGa alloys

For the shape memory performance, a low twinning stress is the key property of ferromagnetic shape memory alloys, since the magnetically induced rearrangement is mediated by twin boundary motion. In the 5M and 7M modulated NiMnGa martensites, magnetically induced rearrangement can be realized by the motion of Type I or Type II twin boundaries. However, Type I twin boundaries exhibit a strongly temperature-dependent twinning stress, typically of about 1 MPa at room temperature, whereas Type II twin boundaries exhibit a much lower and almost temperature-independent twinning stress of 0.05-0.3 MPa [START_REF] Wang | Twinning stress in shape memory alloys: Theory and experiments[END_REF][START_REF] Chulist | Diffraction study of bending-induced polysynthetic twins in 10M modulated Ni-Mn-Ga martensite[END_REF][START_REF] Heczko | Different microstructures of mobile twin boundaries in 10M modulated Ni-Mn-Ga martensite[END_REF][START_REF] Chulist | Characterization of mobile type I and type II twin boundaries in 10M modulated Ni-Mn-Ga martensite by electron backscatter diffraction[END_REF]. Therefore, the Type II twin boundaries are preferred for achieving giant magnetic field induced strains at lower external magnetic fields.
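The role of the twinning stress can be made concrete with an order-of-magnitude stress balance: the maximum magnetically generated stress is roughly σ_mag ≈ K_u/ε_0, where K_u is the uniaxial magnetocrystalline anisotropy constant and ε_0 the reorientation strain, and a twin boundary can move only if σ_mag exceeds its twinning stress. A hedged sketch; K_u and ε_0 below are assumed, literature-typical values for modulated NiMnGa, not measurements from this work:

```python
# Order-of-magnitude check: can the magnetic driving stress move a twin
# boundary? Inputs are assumed, literature-typical values, not film data.
K_u = 1.7e5    # uniaxial magnetocrystalline anisotropy constant, J/m^3
eps_0 = 0.06   # reorientation strain, roughly 1 - c/a

sigma_mag = K_u / eps_0 / 1e6  # maximum magnetostress, MPa
print(f"maximum magnetostress ~ {sigma_mag:.2f} MPa")

# Twinning stresses quoted in the text: ~1 MPa (Type I, room temperature)
# and up to 0.3 MPa (Type II).
for name, sigma_tw in [("Type I", 1.0), ("Type II", 0.3)]:
    print(f"{name} boundary ({sigma_tw} MPa): movable = {sigma_mag > sigma_tw}")
```

With these inputs both boundary types are movable, but the margin over the Type II twinning stress is considerably larger, which is why Type II boundaries allow actuation at much lower external fields.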
Orientation relationship between martensite variants in NiMnGa alloys

Clarification of the orientation relationship between the martensitic variants is quite helpful for understanding the rearrangement of martensitic variants and for finding a training process or loading scheme that will eliminate or control the twin boundaries. To date, the orientation relationships between martensite variants of both non-modulated and modulated martensite have been successfully determined by electron backscatter diffraction (EBSD) [START_REF] Cong | Microstructural and crystallographic characteristics of interpenetrating and non-interpenetrating multiply twinned nanostructure in a Ni-Mn-Ga ferromagnetic shape memory alloy[END_REF][START_REF] Cong | Experiment and theoretical prediction of martensitic transformation crystallography in a Ni-Mn-Ga ferromagnetic shape memory alloy[END_REF] orientation determination, using the accurate crystal structure information. For NM martensite, it has been shown that there are two martensite variants with a compound twin relationship.

Orientation relationship of martensitic transformation in NiMnGa alloys

The martensitic transformation is a diffusionless, displacive phase transformation with a specific orientation relationship between the parent phase and the product phase. Determination of the orientation relationship during the martensitic transformation enables us not only to predict the microstructural configuration of the martensite variants and their crystallographic correlation, but also to control the microstructure of the martensite via thermal treatment. Depending on the chemical composition, there are three possible transformation sequences for NiMnGa alloys: A-5M-7M-NM, A-7M-NM and A-NM. Thus, the martensitic transformation may result in three different types of martensite (i.e. 5M, 7M and NM martensites), which also depends on the chemical composition.
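Orientation relationships like these are conventionally quantified as misorientations: the rotation that carries one variant's orientation into the other's, whose angle follows from the trace of the rotation difference. The sketch below is generic; the two input orientations (one identity, one rotated 86.5° about [100]) are purely hypothetical illustrative values, not variant orientations determined in this work:

```python
import numpy as np

def rotation_about_axis(axis, angle_deg):
    """Rodrigues rotation matrix about a (not necessarily unit) axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    t = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def misorientation_angle(g1, g2):
    """Angle (deg) of the rotation carrying orientation g1 into g2."""
    dg = g2 @ g1.T
    cos_t = (np.trace(dg) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical pair of variant orientations (illustrative only).
g1 = np.eye(3)
g2 = rotation_about_axis([1, 0, 0], 86.5)
print(f"misorientation angle: {misorientation_angle(g1, g2):.1f} deg")
```

In a real EBSD analysis the crystal symmetry operators of each phase would additionally be applied to dg and the minimum angle kept (the disorientation).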
Up to now, Li et al. [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF] have determined the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys.

NiMnGa thin films

Shape memory materials in thin-film form have appeared with the advancement of fabrication technology, where shape memory alloys are deposited directly onto micromachined materials or as stand-alone thin films. Micro-actuation models taking advantage of magnetic field induced reorientation have been proposed for NiMnGa thin films. Magnetically induced reorientation in a constrained NiMnGa film has been reported by Thomas et al. [START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Thomas | Stress induced martensite in epitaxial Ni-Mn-Ga films deposited on MgO(001)[END_REF]. With magnetic field operation, such films are bound to have the advantages of fast response and high frequency operation. To date, since Ni-Mn-Ga thin films are promising candidates for magnetic sensors and actuators in MEMS, they have attracted considerable attention, focused on the fabrication and free-standing film preparation, the characterization of crystallographic features and microstructure, as well as the martensitic transformation of NiMnGa thin films.

Fabrication process

The fabrication method needs to tune both the microstructure and the composition well, since the microstructure and chemistry of NiMnGa films affect the phase constituents, the mobility of the martensite variants and the critical stress.
To date, a variety of techniques such as sputtering, melt spinning, pulsed laser deposition, molecular beam epitaxy and flash evaporation have been used to deposit NiMnGa thin films [START_REF] Dong | Shape memory and ferromagnetic shape memory effects in single-crystal Ni[sub 2]MnGa thin films[END_REF][START_REF] Castano | Structure and thermomagnetic properties of polycrystalline Ni--Mn--Ga thin films[END_REF][START_REF] Dong | Epitaxial growth of ferromagnetic Ni[sub 2]MnGa on GaAs(001) using NiGa interlayers[END_REF][START_REF] Dong | Molecular beam epitaxy growth of ferromagnetic single crystal (001) Ni[sub 2]MnGa on (001) GaAs[END_REF][START_REF] Hakola | Pulsed laser deposition of NiMnGa thin films on silicon[END_REF]. Via sputtering, it is easy to tailor the composition, which is a critical parameter for many desired properties related to actuation and the shape memory effect. For instance, using co-sputtering, the composition of NiMnGa can be finely tuned by adding Ni or Mn to vary the overall composition. Because NiMnGa thin films are found to have a ~3-5% increase in Ni with respect to the targets, with a corresponding decrease in Mn and Ga depending upon the deposition parameters, the desired thin film composition can also be obtained by appropriate adjustment of the target alloy composition. Significant texturation, through reducing the number of variant colonies or groups in the film, can be realized by epitaxial growth. The preparation of epitaxial NiMnGa thin films on single-crystal substrates such as MgO, Al2O3 and GaAs [START_REF] Hakola | Ni-Mn-Ga films on Si, GaAs and Ni-Mn-Ga single crystals by pulsed laser deposition[END_REF][START_REF] Dong | Epitaxial growth of ferromagnetic Ni[sub 2]MnGa on GaAs(001) using NiGa interlayers[END_REF] by magnetron sputtering has been performed.
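The composition transfer from target to film described above suggests a simple linear correction of the target composition. The sketch below assumes a fixed +4 at.% Ni enrichment in the film (the middle of the quoted 3-5% range) compensated equally by Mn and Ga; both the offset and its split between Mn and Ga are illustrative assumptions, not calibrated values:

```python
# Back-calculate a sputtering-target composition from a desired film
# composition, assuming a fixed film-minus-target transfer offset.
# The offsets are illustrative assumptions, not measured data.

desired_film = {"Ni": 50.0, "Mn": 30.0, "Ga": 20.0}   # at.%
offset = {"Ni": +4.0, "Mn": -2.0, "Ga": -2.0}         # film - target, at.%

target = {el: desired_film[el] - offset[el] for el in desired_film}
print("suggested target composition (at.%):", target)

# Sanity check: the composition must still sum to 100 at.%.
assert abs(sum(target.values()) - 100.0) < 1e-9
```

For a Ni50Mn30Ga20 film these assumed offsets give a Ni46Mn32Ga22 target; in practice the offsets would be calibrated from test depositions.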
To date, the procedure to fabricate continuous films with homogeneous chemical composition and controllable thicknesses has been parameterized [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Yeduru S R, Backen | Microstructure of free-standing epitaxial Ni-Mn-Ga films before and after variant reorientation[END_REF][START_REF] Jetta | Phase transformations in sputtered Ni-Mn-Ga magnetic shape memory alloy thin films[END_REF]. For instance, Heczko et al. [START_REF] Heczko | Epitaxial Ni-Mn-Ga films deposited on SrTiO[sub 3] and evidence of magnetically induced reorientation of martensitic variants at room temperature[END_REF] showed that thin films deposited on a SrTiO3 substrate exhibit an epitaxial structure with a twinned orthorhombic martensite. Khelfaoui et al. [START_REF] Khelfaoui | A fabrication technology for epitaxial Ni-Mn-Ga microactuators[END_REF] reported epitaxial thin film deposition on MgO (100) substrates using DC magnetron sputtering at a substrate temperature of 350 °C.

Phase constituent, microstructural and crystallographic features

In spite of the progress made in studying epitaxial NiMnGa alloys, the microstructural and crystallographic characterization of the produced films has been challenging due to the specific constraints from the substrate, the specific geometry of the film, and the ultra-fineness of the microstructural constituents.
Many pioneering efforts [34, 42-44, 55, 77, 81-88] have been made to characterize the phase constituents and the microstructural and crystallographic features, in order to advance the understanding of the physical and mechanical behaviors of these films; nevertheless, significant scientific and technical challenges remain.

Crystal structure of phases in epitaxial NiMnGa thin films

X-ray diffraction (XRD) [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Thomas | Stray-Field-Induced Actuation of Free-Standing Magnetic Shape-Memory Films[END_REF] was first used to analyze the phase constituents and texture components of epitaxial NiMnGa films. Multiple phases (austenite, modulated martensite and non-modulated martensite) have been identified in most of the epitaxial NiMnGa films, and strong textures of each phase have been revealed. For the austenite and the non-modulated martensite, which have relatively simple crystal structures, the lattice constants have been unambiguously determined from the limited number of XRD reflection peaks [START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kauffmann-Weiss | Magnetic Nanostructures by Adaptive Twinning in Strained Epitaxial Films[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF].
For the modulated martensite (7M or 14M), the maximum attainable number of observed reflection peaks has in most cases not been sufficient [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], so a complete lattice constant determination has not been possible. Without information on the monoclinic angle, the modulated martensite has had to be simplified as pseudo-orthorhombic rather than the monoclinic structure that has been identified in the bulk materials [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF].
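The peak-counting problem can be made concrete with Bragg's law: for cubic austenite a single lattice constant fixes every d-spacing, so a handful of reflections over-determines the cell, whereas the monoclinic 7M cell needs a, b, c and the monoclinic angle. A sketch of expected peak positions, assuming Cu Kα radiation and a literature-typical austenite lattice constant (an assumed value, not the one determined in this work):

```python
import math

WAVELENGTH = 1.5406   # Cu K-alpha1 wavelength, angstrom
A_AUSTENITE = 5.82    # assumed cubic lattice constant, angstrom

def two_theta_cubic(h, k, l, a=A_AUSTENITE, lam=WAVELENGTH):
    """Bragg angle 2-theta (deg) of reflection (hkl) for a cubic lattice."""
    d = a / math.sqrt(h * h + k * k + l * l)   # interplanar spacing
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

for hkl in [(2, 2, 0), (4, 0, 0), (4, 2, 2)]:
    print(hkl, f"-> 2theta = {two_theta_cubic(*hkl):.2f} deg")
```

For the monoclinic 7M cell the analogous 1/d² expression involves four independent parameters, so many more reflections are required; this is exactly what the constrained film geometry often fails to provide.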
However, it is commonly accepted that the 7M (or 14M) modulated NiMnGa martensite has a monoclinic crystal structure, as in bulk NiMnGa materials [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF][START_REF] Righi | Commensurate and incommensurate "5M" modulated crystal structures in Ni-Mn-Ga martensitic phases[END_REF][START_REF] Righi | Incommensurate modulated structure of the ferromagnetic shape-memory Ni2MnGa martensite[END_REF]. Generally, to describe the lattice parameters of 7M martensite, two different coordinate reference systems are frequently used in the literature, i.e. an orthogonal coordinate system attached to the original cubic austenite lattice and a non-orthogonal coordinate system attached to the conventional monoclinic Bravais lattice [27, 28, 63-65, 72, 73]. Since the monoclinic angle is very close to 90°, the modulated structure may be approximated as a pseudo-orthorhombic structure under the former scheme [START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. Fig.
1.4(a) illustrates the settings of the cubic coordinate system for austenite, the pseudo-orthorhombic coordinate system [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF] for 14M modulated martensite under the adaptive phase model [START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], and the monoclinic coordinate system for 7M modulated martensite [START_REF] Li | Twin relationships of 5M modulated martensite in Ni-Mn-Ga alloy[END_REF][START_REF] Li | Evidence for a monoclinic incommensurate superstructure in modulated martensite[END_REF][START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF][START_REF] Li | New approach to twin interfaces of modulated martensite[END_REF]. It is seen from Fig. 1.4(b) that the deviations of the respective monoclinic basis vectors (amono, bmono and cmono) from the corresponding directions in the cubic coordinate system are very small (ranging from 1° to 3°), depending on the lattice constants of materials with different chemical compositions. The two settings have their own disadvantages. On the one hand, a direct atomic or lattice correspondence between the austenite structure and the martensite structure is not so conveniently accessible under the monoclinic coordinate system. On the other hand, in the pseudo-orthorhombic setting, the basis vectors shown in Fig. 1.4(a) are not true lattice vectors, because the basis vectors of the original cubic cell become irrational in the newly formed structure after the martensitic transformation. This makes a precise description of the twin relationships of martensitic variants using well-defined twinning elements (such as the twinning plane, the twinning direction, etc.)
particularly difficult, especially when there are irrational twinning elements, as presented in Table 1.2, which lists the twinning elements in both the 14M-adaptive and the 7M monoclinic crystal reference systems. It can be clearly seen that the precise twinning elements of the Type II and compound twins have not been determined in the 14M-adaptive crystal system. Moreover, the pseudo-orthorhombic cell does not possess the same symmetry group as the monoclinic Bravais lattice. This affects the determination of the number of martensitic variants and of the orientation relationships between adjacent variants.
Microstructure and crystallographic features of epitaxial NiMnGa thin films
For epitaxial NiMnGa thin films, microstructure examinations have so far been made mainly using scanning electron microscopy (SEM), atomic force microscopy (AFM) and high resolution scanning tunneling microscopy (STM). The crystallographic aspects have been deduced by examining the surface corrugations of the films. Microstructure analyses by scanning electron microscopy (SEM) revealed that the martensite is plate-shaped and organized in groups, as in the bulk material but much finer. An important feature is that there are two kinds of martensite groups in terms of secondary electron (SE) image contrast. One shows relatively homogeneous contrast and consists of long strips running parallel to the two edge directions (MgO [100] and MgO [010]) of the substrate. The other shows relatively high contrast and contains plates with their length direction running roughly at 45° with respect to the substrate edges [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF].
Close observations using atomic force microscopy [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF] revealed that the different SE contrasts correspond to different surface corrugations of the films: the low SE contrast zones correspond to low surface corrugation, whereas the high SE contrast zones correspond to high surface relief. According to the height profile of the surface relief in the high surface corrugation zones, which are considered to be of 7M martensite, a twin relationship between the adjacent martensite plates has been deduced: the so-called a-c twin relationship [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF]. It means that the pseudo-orthorhombic unit cells of the neighboring plates share a common b axis. The plate boundaries correspond to (101)orth and are inclined roughly 45° to the substrate surface. Later, high resolution scanning tunneling microscopy (STM) examination indicated that within the 7M martensite plates there are still fine periodic corrugations [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF].
The fine corrugations have been considered to be related to the structure modulation of the 7M martensite [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF], and preferentially the 7M modulated martensite was considered to be composed of nano-twinned NM martensite (the so-called adaptive phase), as the lattice constants of the three phases fulfill the relation proposed by the adaptive phase theory (a14M = cNM + aNM − aA, aA = b14M, c14M = aNM [START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]). For the low contrast zones (composed of strips running parallel to the edges of the substrate, MgO [100] and MgO [010]), the nature of the martensite has not been clearly revealed. Some consider them to be of NM tetragonal martensite [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF], whereas others regard them as 7M martensite [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF]. The neighboring plates were deduced to also be a-c twin related, with the plate interfaces ((101)orth) perpendicular to the substrate surface, according to the epitaxial relationship between the MgO substrate and the austenite and the Bain relationship between the austenite and the NM martensite. As for the substructure within the NM plates, since no significant corrugations are detected there, the microstructure and crystallographic features in the NM plates remain unclear.
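As a quick numerical illustration, the adaptive-phase lattice relation quoted above can be evaluated directly; the sketch below uses illustrative, literature-order lattice constants, not values measured in this work.

```python
# Numerical check of the adaptive-phase relation a_14M = c_NM + a_NM - a_A
# (with b_14M = a_A and c_14M = a_NM).  The lattice constants are
# illustrative, literature-order values, not measurements from this work.
a_A  = 0.582   # austenite lattice constant, nm (illustrative)
a_NM = 0.547   # NM martensite a axis, nm (illustrative)
c_NM = 0.658   # NM martensite c axis, nm (illustrative)

a_14M = c_NM + a_NM - a_A   # predicted a axis of the modulated cell
b_14M = a_A
c_14M = a_NM
print(f"a_14M = {a_14M:.3f} nm, b_14M = {b_14M:.3f} nm, c_14M = {c_14M:.3f} nm")
```

For these inputs the relation predicts a14M of about 0.623 nm, of the order of the 14M a-parameter reported in the literature, which is why the nano-twinned interpretation is considered plausible.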
Recently, further attempts have been made at revealing the characteristic microstructures of NiMnGa films [START_REF] Niemann | The Role of Adaptive Martensite in Magnetic Shape Memory Alloys[END_REF]. Two types of twins have been proposed, namely type I and type II twins, and built as paper models. Apparently, the type I twin is the so-called a-c twin with (101)A as the twinning plane. However, it is not easy to correlate the proposed "type II twin" with the twin relationships in NiMnGa already published in the literature, as no clear crystallographic description (such as the twinning plane and the twinning direction) was given [START_REF] Niemann | The Role of Adaptive Martensite in Magnetic Shape Memory Alloys[END_REF]. It has been found that the type I twins lie mainly in the low surface corrugation zones (the so-called Y pattern) and the type II twins mainly in the high surface corrugation zones (the so-called X pattern) [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. Cross-section observations [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF] of the films have evidenced that in the low surface corrugation zones there are two interface orientations, one perpendicular and the other parallel to the substrate surface. In the high corrugation zones, there is only one interface orientation, tilted to the substrate surface by roughly 45° [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. The apparently different occurrences of the different twins in the two variant zones are due to the constraints of the substrate [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF].
So far, the short intra-plate interfaces in NiMnGa films have not been identified, and the orientation relationship between the variants connected by such interfaces is not known. Clearly, due to the lack of a direct correlation between the microstructure and the crystallographic orientations of the martensite variants (either 7M or NM martensite), and to the inaccuracy of the orientation relationship between the parent austenite and the martensite, precise information on the crystallographic configuration of the martensite lamellae inside each martensite plate and, further, on the crystallographic organization of the martensite plates in each colony or group, especially the lamellar interfaces and plate interfaces, is still unavailable. In consequence, the possible number of variants in one variant group, their precise orientation relationships and their interface planes stay unclear, which hinders the uncovering of the microstructural singularities of NiMnGa films with respect to their counterparts in bulk materials.
Content of the present work
Based on the state of the art mentioned above, epitaxial Ni-Mn-Ga thin films were produced by DC magnetron sputtering and then characterized by X-ray diffraction (XRD) and electron backscatter diffraction in a scanning electron microscope (SEM-EBSD). The scientific aim of the present work is first to produce qualified NiMnGa films with designed chemical compositions and film thicknesses, and then to clarify the crystal structures of the phase constituents, the configurations of the martensite variants and their orientation correlations. The following are the main contents of the present work: (1) Produce NiMnGa thin films with designed chemical compositions and film thicknesses by magnetron sputtering, through optimizing the deposition parameters such as the sputtering power, the substrate material and the seed layer material.
Chapter 2 Experimental and crystallographic calculation
In this chapter, the techniques used to prepare and characterize the Ni-Mn-Ga thin films are summarized. The essential crystallographic calculation methods are also described in detail.
Sample preparation
Preparation of sputtering target
In order to obtain NiMnGa thin films with different phases at ambient temperature, NiMnGa alloys with various nominal compositions were dedicatedly prepared by arc melting using high purity elements Ni (99.97 wt.%), Mn (99.9 wt.%) and Ga (99.99 wt.%). The ingots with the designed nominal compositions were melted by arc-melting in a water-cooled copper crucible under argon atmosphere. In order to obtain ingots with homogeneous composition, each ingot was re-melted four times and electromagnetic stirring was applied during the melting process. To further reduce the composition inhomogeneity of the as-cast ingots, the ingots were homogenized at 900 °C for 12 h in a vacuum quartz tube and then cooled within the furnace. Magnetron sputtering is a physical vapor deposition technique in which ions accelerated under a high voltage bombard the target surface to eject atoms towards the substrate to be coated, as illustrated in Fig. 2.2(a). In a typical deposition, a substrate with a clean surface is attached to the substrate holder. Sputtering is done under high vacuum conditions to avoid contamination of the film. The base pressure is pumped down to 10^-6 to 10^-9 Torr using turbo-molecular pumps. By applying a very high negative potential to the cathode target, a plasma is generated. The argon atoms in the plasma are ionized and accelerated towards the target at very high velocities. Due to the collision cascade, sputtered atoms with sufficient velocity are deposited on the substrate surface. The residual kinetic energy of the target atoms is converted into thermal energy, which increases their mobility and facilitates surface diffusion.
Hence a continuous layer of atoms is formed, resulting in a high quality film. A magnetic field produced by a permanent magnet is used to increase the path of the electrons and hence the number of ionized inert gas atoms. It also changes the path of the accelerated ions, which results in ring-like ejection zones on the surface of the target. In the present thesis, a custom-built multi-source magnetron sputtering system (JZCK-400DJ) was used for thin film deposition. As illustrated in Fig. 2.2(b), the magnetron sputtering system has three sputtering sources arranged in confocal symmetry, which makes it capable of multilayer deposition, co-sputtering and deposition with a seed layer or cap layer. The base pressure before depositing is pumped below 9.0×10^-5 Pa. In order to obtain a continuous NiMnGa thin film, the working Ar pressure is fixed at 0.15 Pa during the whole deposition process. The deposition parameters, such as the sputtering power, substrate material, substrate temperature, deposition time and seed layer, have been optimized through examining their effects on the film thickness, film composition, phase constituents and microstructures.
Fig. 2.3 (b) The X-ray diffractometer equipped with a curved position-sensitive detector. Here, ω is the angle between the incident beam and the sample surface, θ is the angle between the incident beam and the diffracting plane, 2θ is the angle between the incident and the reflected beam, χ is the tilting angle with respect to the substrate surface, and φ is the rotation angle about the film normal; φ = 0° represents that the sample is not rotated.
Characterization
X-ray diffraction
X-ray diffraction is a well developed technique for determining the crystal structure and the macroscopic crystallographic features of crystalline materials. In the present study, three X-ray diffractometers were employed to determine both the crystal structure and the macroscopic crystallographic orientations.
All the diffractometers are four-circle X-ray diffractometers with a cobalt cathode source (Co Kα, λ = 0.178897 nm). For the determination of the crystal structure of the phases in the NiMnGa thin films, two X-ray diffractometers were employed: a conventional diffractometer with a point detector (Fig. 2.3a) and an X-ray diffractometer with a curved position-sensitive (CPS) detector (Fig. 2.3b). Compared with conventional diffractometers, the X-ray diffractometer with a curved position-sensitive detector offers fast data collection over a wide 2θ range. As illustrated in Fig. 2.3(b), the reflection-mode Debye-Scherrer geometry for CPS detectors is unique in the sense that the angle of the incident X-ray beam is kept fixed with respect to the normal of a flat diffracting sample, while the reflected beams are measured at multiple angles with respect to the sample normal. Considering that the thin films possess an in-plane texture, the X-ray diffractometer with the curved position-sensitive detector was used to obtain the maximum number of diffraction peaks from the NiMnGa films.
SEM and SEM-EBSD
The microstructure, composition and microscopic crystallographic features of the NiMnGa thin films were investigated with a field emission gun scanning electron microscope (FEG-SEM, JEOL-6500F). The composition of the NiMnGa films was measured by the energy dispersive X-ray spectrometry system (EDS, Bruker) attached to this SEM. The morphology and microstructure of the NiMnGa thin films were characterized by secondary electron and backscattered electron imaging. The microstructure and the microscopic crystallographic orientations of the NiMnGa martensite variants were analyzed using the same SEM equipped with an EBSD acquisition camera (Oxford HKL) and acquisition software (Oxford Channel 5). The EBSD patterns of the martensite variants were manually acquired using Channel 5 Flamenco's interactive option.
The samples for SEM-EBSD analysis were electrolytically polished at room temperature with a solution of 20 vol.% HNO3 in CH3OH.
TEM
A Philips CM 200 TEM with 200 kV accelerating voltage and a LaB6 filament was used to observe the fine martensite variants in the NiMnGa thin films. A JEOL-2100F HR-TEM with a field-emission gun was used to investigate the atomic-level stacking faults between variants within the martensite plates. Specimens for TEM investigation were cut out of the freestanding NiMnGa thin films and further thinned by twin-jet electrolytic polishing with an electrolyte of 20 vol.% HNO3 in CH3OH at ambient temperature.
Crystallographic calculations
With the precise crystallographic orientations of the martensite variants, the orientation relationships between the martensite variants and their interfaces, as well as the orientation relationship between austenite and martensite, can be calculated. The following presents the calculation methods used in the present work.
Fundamentals and definitions
Reference systems
Considering the characteristics of the crystal structures of the phases concerned in the present study (austenite, modulated martensite and non-modulated martensite), as given in Table 2.1, the lattice vectors in reciprocal space can be expressed by the lattice vectors in direct space, as shown in Eq. (2-1):

a* = (b × c) / (a · (b × c)),  b* = (c × a) / (a · (b × c)),  c* = (a × b) / (a · (b × c))   (2-1)

For the orthonormal crystal coordinate systems, the basis vectors are three orthogonal unit vectors.
Metric tensors
The metric tensor contains the same information as the set of crystal lattice parameters, but in a form that allows direct computation of the dot product between two vectors. The computational and algebraic aspects of these mutually reciprocal bases can be conveniently expressed in terms of the metric tensors of these bases.
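The reciprocal-basis construction of Eq. (2-1) can be sketched in a few lines of Python; the cubic lattice constant below is an illustrative value, not a measured one.

```python
# A minimal sketch of Eq. (2-1): the reciprocal basis a*, b*, c* built from a
# direct basis with plain Python 3-vectors.
from math import isclose

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def reciprocal_basis(a, b, c):
    """a* = (b x c)/(a.(b x c)), b* = (c x a)/(a.(b x c)), c* = (a x b)/(a.(b x c))."""
    vol = dot(a, cross(b, c))                   # cell volume a.(b x c)
    astar = tuple(x / vol for x in cross(b, c))
    bstar = tuple(x / vol for x in cross(c, a))
    cstar = tuple(x / vol for x in cross(a, b))
    return astar, bstar, cstar

aA = 0.582                                      # nm, illustrative cubic cell
a, b, c = (aA, 0, 0), (0, aA, 0), (0, 0, aA)
astar, bstar, cstar = reciprocal_basis(a, b, c)
# Defining property of the reciprocal basis: a.a* = 1 and a.b* = 0
assert isclose(dot(a, astar), 1.0) and isclose(dot(a, bstar), 0.0)
print(astar[0])   # equals 1/aA for a cubic cell
```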
The matrix of the metric tensor of the direct basis, G, or briefly the direct metric tensor, is

G = | a·a  a·b  a·c |   | a²       ab cosγ  ac cosβ |
    | b·a  b·b  b·c | = | ab cosγ  b²       bc cosα |   (2-2)
    | c·a  c·b  c·c |   | ac cosβ  bc cosα  c²      |

The corresponding reciprocal metric tensor is

G* = (1/V²) | b²c² sin²α              abc²(cosα cosβ − cosγ)  ab²c(cosγ cosα − cosβ) |
            | abc²(cosα cosβ − cosγ)  a²c² sin²β              a²bc(cosβ cosγ − cosα) |   (2-4)
            | ab²c(cosγ cosα − cosβ)  a²bc(cosβ cosγ − cosα)  a²b² sin²γ             |

where V² = a²b²c²(1 − cos²α − cos²β − cos²γ + 2 cosα cosβ cosγ).

With the metric tensors, vectors in direct and reciprocal space can easily be transformed into each other according to Eq. (2-5), where [u v w]T is a direction vector in direct space and [h k l]T is the Miller index of a plane in direct space and also a direction vector in reciprocal space:

[u v w]T = G* [h k l]T  and  [h k l]T = G [u v w]T   (2-5)

In the present study, for austenite, the metric tensors in direct and reciprocal space are

G_A = diag(aA², aA², aA²)  and  G_A* = diag(1/aA², 1/aA², 1/aA²)   (2-6)

For NM martensite (tetragonal), the metric tensors in direct and reciprocal space are

G_NM = diag(aNM², aNM², cNM²)  and  G_NM* = diag(1/aNM², 1/aNM², 1/cNM²)   (2-7)

For 7M martensite (monoclinic, α = γ = 90°), Eq. (2-2) and Eq. (2-4) reduce to

G_7M = | a²       0   ac cosβ |
       | 0        b²  0       |   (2-8)
       | ac cosβ  0   c²      |

G_7M* = (1/V²) | b²c²        0           −ab²c cosβ |
               | 0           a²c² sin²β  0          |   with V² = a²b²c² sin²β   (2-9)
               | −ab²c cosβ  0           a²b²       |

Coordinate transformation
As presented in Fig. 2.6, the sample coordinate system and the Bravais lattice cell are related by two coordinate transformations. The first transformation is from the sample coordinate system to the orthonormal crystal coordinate system.
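A minimal numerical sketch of the metric tensors and of the conversion in Eq. (2-5), assuming NumPy is available; the lattice constant is illustrative.

```python
# Sketch of the metric tensors of Eqs. (2-2) and (2-4) and the conversion of
# Eq. (2-5).  G* is obtained numerically as the inverse of G.
import numpy as np

def metric_tensor(a, b, c, alpha, beta, gamma):
    """Direct metric tensor G (Eq. 2-2); angles in degrees."""
    ca, cb, cg = np.cos(np.radians([alpha, beta, gamma]))
    return np.array([[a * a,      a * b * cg, a * c * cb],
                     [a * b * cg, b * b,      b * c * ca],
                     [a * c * cb, b * c * ca, c * c     ]])

aA = 0.582                                 # cubic austenite, nm (illustrative)
G = metric_tensor(aA, aA, aA, 90, 90, 90)  # Eq. (2-6): diag(aA^2)
G_star = np.linalg.inv(G)                  # reciprocal metric tensor

# Eq. (2-5): the direction [uvw] parallel to the normal of plane (hkl)
hkl = np.array([1.0, 1.0, 0.0])
uvw = G_star @ hkl
print(np.allclose(uvw / uvw[0], [1, 1, 0]))   # True for a cubic lattice
```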
Normally, this transformation can be expressed by the Euler angles (φ1, Φ, φ2) in Bunge notation [START_REF] Bunge | The role of the inversion centre in texture analysis[END_REF][START_REF] Bunge | Texture analysis in materials science-mathematical methods[END_REF], which can be directly determined by the EBSD system. According to Eq. (2-10), the transformation matrix can be constructed from the acquired Euler angles:

M_E = | cosφ1 cosφ2 − sinφ1 sinφ2 cosΦ    sinφ1 cosφ2 + cosφ1 sinφ2 cosΦ    sinφ2 sinΦ |
      | −cosφ1 sinφ2 − sinφ1 cosφ2 cosΦ   −sinφ1 sinφ2 + cosφ1 cosφ2 cosΦ   cosφ2 sinΦ |   (2-10)
      | sinφ1 sinΦ                        −cosφ1 sinΦ                        cosΦ       |

The second transformation is from the orthonormal crystal coordinate system to the Bravais lattice cell. In general, let [u v w] be a vector in the old coordinate system (a b c) and [U V W] be the same vector expressed in the new coordinate system (i j k). Since the coordinate transformation does not change the vector itself, it follows that

u a + v b + w c = U i + V j + W k   (2-11)

The coordinate transformation can be made by

[U V W]T = M_C [u v w]T   (2-12)

If the transformation is from the orthonormal crystal coordinate system to a triclinic Bravais lattice unit cell,

M_C = | a  b cosγ  c cosβ                                         |
      | 0  b sinγ  c (cosα − cosβ cosγ)/sinγ                      |   (2-13)
      | 0  0       c sqrt(1 − cos²β − ((cosα − cosβ cosγ)/sinγ)²) |

In the present study, there are three phases: austenite, NM martensite and 7M martensite. The transformation matrices from the orthonormal coordinate system to their Bravais crystal bases are given in Eq.
(2-14) to Eq. (2-16). For austenite, a = b = c = aA and α = β = γ = 90°, so the coordinate transformation matrix is

M_C^A = diag(aA, aA, aA)   (2-14)

For non-modulated martensite, a = b = aNM, c = cNM and α = β = γ = 90°, so

M_C^NM = diag(aNM, aNM, cNM)   (2-15)

For 7M modulated martensite (α = γ = 90°), Eq. (2-13) reduces to

M_C^7M = | a7M  0    c7M cosβ |
         | 0    b7M  0        |   (2-16)
         | 0    0    c7M sinβ |

Misorientation calculation
The orientation of each measured crystal is expressed by the Euler angles (φ1, Φ, φ2) in Bunge's convention [START_REF] Bunge | The role of the inversion centre in texture analysis[END_REF]. The rotations are between the orthonormal sample coordinate system and the orthonormal crystal coordinate system, as described in the coordinate transformation part (2.3.1.3).
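The Euler-angle matrix of Eq. (2-10) can be sketched as follows (illustrative angles; NumPy assumed available):

```python
# Sketch of the Bunge Euler-angle matrix of Eq. (2-10); the angles below are
# illustrative, not measured orientations.
import numpy as np

def bunge_matrix(phi1, Phi, phi2):
    """Rotation matrix g(phi1, Phi, phi2) in the Bunge convention (degrees)."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1, cP, sP = np.cos(p1), np.sin(p1), np.cos(P), np.sin(P)
    c2, s2 = np.cos(p2), np.sin(p2)
    return np.array([[ c1*c2 - s1*s2*cP,  s1*c2 + c1*s2*cP, s2*sP],
                     [-c1*s2 - s1*c2*cP, -s1*s2 + c1*c2*cP, c2*sP],
                     [ s1*sP,            -c1*sP,            cP   ]])

g = bunge_matrix(30, 45, 60)
# A proper rotation: orthonormal rows/columns and determinant +1
print(np.allclose(g @ g.T, np.eye(3)), round(np.linalg.det(g), 6))   # True 1.0
```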
In the present study, this method was employed to calculate the misorientation between two martensite variants. If the orientations of variants A and B are specified by the rotation matrices G_A and G_B, the misorientation Δg going from variant A to variant B can be defined as follows:

Δg = S_i G_A^-1 G_B S_j   (2-17)

where S_i and S_j are the rotational symmetry elements of the respective crystals (Eq. (2-18)). The misorientation angle θ and the rotation axis d = (d1, d2, d3) can then be calculated. The misorientation relationship, expressed in rotation axis/angle pairs, can be used to determine the twinning type and certain twin elements. According to the definition of the twin orientation relationship, there is at least one 180° rotation for a twin. As shown in Table 2.3, if there is one 180° rotation and the Miller indices of the plane normal to the rotation axis are rational, the crystal twin belongs to type I and K1 is identified. If there is one 180° rotation and the Miller indices of the rotation axis are rational, the crystal twin belongs to type II and η1 is identified. If there are two independent 180° rotations and the planes normal to the rotation axes are also rational, the twin is a compound twin, with one rotation axis being the twinning direction and the other being the normal to the twinning plane [START_REF] Zhang | A general method to determine twinning elements[END_REF]. With the determined twin type and rotation axis, the other twinning elements can be calculated using the general method developed by our group [START_REF] Zhang | A general method to determine twinning elements[END_REF].
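A minimal sketch of the twin-type test described above, on a cubic toy example; the rationality test and its tolerance are illustrative choices, not the thesis implementation.

```python
# Twin-type criterion sketch: a twin requires a 180-degree rotation among the
# symmetry-equivalent misorientations, and the rationality of the rotation
# axis (as a direction [uvw] for Type II, or as a plane normal (hkl) for
# Type I) decides the twin type.
import numpy as np

def rotation_angle(g):
    """Misorientation angle of Eq. (2-19), in degrees."""
    return np.degrees(np.arccos(np.clip((np.trace(g) - 1.0) / 2.0, -1.0, 1.0)))

def is_rational(v, max_index=12, tol=1e-3):
    """True if v is parallel to a low-index integer triple (rationality test)."""
    v = np.asarray(v, float)
    v = v / np.max(np.abs(v))
    return any(np.allclose(n * v, np.round(n * v), atol=tol)
               for n in range(1, max_index + 1))

# 180-degree rotation about [110] of a cubic crystal (Rodrigues at 180 deg:
# R = 2 d d^T - I).  For a cubic lattice the axis is rational both as a
# direction and as a plane normal, the hallmark of a compound twin.
d = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
g = 2.0 * np.outer(d, d) - np.eye(3)
print(rotation_angle(g), is_rational(d))   # 180.0 True
```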
The misorientation angle θ and the rotation axis d are obtained from the components g_ij of Δg:

θ = arccos((g11 + g22 + g33 − 1)/2)   (2-19)

(1) If θ = 180°:

(d1, d2, d3) = (±sqrt((g11 + 1)/2), ±sqrt((g22 + 1)/2), ±sqrt((g33 + 1)/2)), with d_m = max|d_i| (i = 1, 2, 3), d_m > 0 by convention, and sgn(d_i) = sgn(g_im)   (2-20)

(2) If θ = 0°:

(d1, d2, d3) = (1, 0, 0) (the axis is arbitrary)   (2-21)

(3) If θ ≠ 0° and θ ≠ 180°:

(d1, d2, d3) = ((g23 − g32)/(2 sinθ), (g31 − g13)/(2 sinθ), (g12 − g21)/(2 sinθ))   (2-22)

In addition, if the twin type and the twinning elements are known, the rotation angle and the rotation axis are known, and the misorientation matrix can be constructed by Eq. (2-23). Suppose the rotation angle is θ and the rotation axis, expressed in the orthonormal crystal coordinate system, is the unit vector d = [d1, d2, d3]T:

M_R = | cosθ + d1²(1 − cosθ)          d1 d2 (1 − cosθ) + d3 sinθ    d1 d3 (1 − cosθ) − d2 sinθ |
      | d1 d2 (1 − cosθ) − d3 sinθ    cosθ + d2²(1 − cosθ)          d2 d3 (1 − cosθ) + d1 sinθ |   (2-23)
      | d1 d3 (1 − cosθ) + d2 sinθ    d2 d3 (1 − cosθ) − d1 sinθ    cosθ + d3²(1 − cosθ)       |

Stereographic projection and trace determination
The stereographic projection is a particular mapping that projects a sphere onto a plane, and is one of the projection methods used to calculate pole figures. To calculate the pole figure of crystal directions or plane normal vectors, the vectors have to be expressed in a macroscopic coordinate system set with respect to the equatorial plane (for example the sample coordinate system) before the stereographic projection. For instance, when a plane normal vector is V_P = [h k l]*, the corresponding vector V_S in the sample coordinate system can be calculated using Eq. (2-24):

V_S = M_E M_C S_i G* V_P   (2-24)

Here, M_E is the Euler angle matrix, M_C is the coordinate transformation matrix from the orthonormal crystal coordinate system to the Bravais lattice basis, S_i is a rotational symmetry element of the crystal, and G* is the metric tensor in reciprocal space.
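The vector chain of Eq. (2-24) can be sketched numerically for cubic austenite; the Euler angles are illustrative, and the crystal-to-sample sense assumed for M_E is a convention choice.

```python
# Sketch of Eq. (2-24): a plane normal [hkl]* is taken through the reciprocal
# metric tensor and the structure matrix to Cartesian crystal coordinates,
# then rotated into the sample frame by the Euler-angle matrix.
import numpy as np

def bunge(phi1, Phi, phi2):
    """Bunge Euler-angle rotation matrix of Eq. (2-10), angles in degrees."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1, cP, sP = np.cos(p1), np.sin(p1), np.cos(P), np.sin(P)
    c2, s2 = np.cos(p2), np.sin(p2)
    return np.array([[ c1*c2 - s1*s2*cP,  s1*c2 + c1*s2*cP, s2*sP],
                     [-c1*s2 - s1*c2*cP, -s1*s2 + c1*c2*cP, c2*sP],
                     [ s1*sP,            -c1*sP,            cP   ]])

aA = 0.582                          # illustrative austenite lattice constant, nm
M_C = aA * np.eye(3)                # structure matrix, Eq. (2-14)
G_star = np.eye(3) / aA**2          # reciprocal metric tensor, Eq. (2-6)
S_i = np.eye(3)                     # identity symmetry element

M_E = bunge(10.0, 20.0, 30.0).T     # crystal -> sample (convention assumption)
V_P = np.array([1.0, 1.0, 0.0])     # normal of plane (110)
V_S = M_E @ M_C @ S_i @ G_star @ V_P
V_S = V_S / np.linalg.norm(V_S)     # unit vector, ready for projection
print(np.isclose(np.linalg.norm(V_S), 1.0))   # True
```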
Then, the vector V_S = [t1, t2, t3]T should be normalized to a unit vector. As presented in Fig. 2.8a, the vector OP represents V_S in the sphere. P, the intersection of OP with the sphere, is defined as the pole of the plane (hkl). Let a line connect the point P and the south pole S; the line PS intersects the equatorial plane at the point P'. P' is the stereographic projection of the pole P. On the equatorial plane, P is expressed with the polar angle α and the azimuth angle β (α, β), where

α = arccos(ON·OP / (|ON| |OP|))   (2-25)

and

β = 2π − arccos(OE·OP' / (|OE| |OP'|))   (2-26)

The line HK perpendicular to the vector OP' in the equatorial plane represents the trace of the plane (hkl). As shown in Fig. 2.9, the orthonormal reference frame is set with respect to the basis vectors of the sample coordinate system. Knowing the orientations of the martensite variants, the orientation of the austenite with respect to the sample coordinate system, expressed as a matrix, can be calculated with Eq. (2-28):

M_E^A = M_E^m M_C^m (T_j^(A-M))^-1 S_i (M_C^A)^-1   (2-28)

where M_E^m and M_C^m are the Euler angle matrix and the coordinate transformation matrix of the measured martensite variant, T_j^(A-M) is the lattice transformation from austenite to martensite prescribed by the assumed orientation relationship, and S_i and M_C^A are the rotational symmetry elements and the coordinate transformation matrix of the austenite. For the NiMnGa alloys, the possible orientation relationships between austenite and martensite are shown in Table 2.4. If the orientation relationship between austenite and martensite is assumed to be one of them, the orientation of the austenite can be derived from the measured orientations of the martensitic variants with Eq. (2-28). The schematic illustration of the determination of the OR between austenite and martensite is shown in Fig. 2.10. As presented in Fig. 2.10, if the distinct austenite orientations calculated from the observed variants have at least one orientation in common, it can be deduced that the assumed orientation relationship indeed exists. The common orientation is the orientation of the austenite.
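The projection of Eqs. (2-25) and (2-26) can be sketched as follows; here the azimuth is computed with arctan2 instead of the arccos form, which is equivalent up to the quadrant convention.

```python
# Sketch of the stereographic projection: a unit vector in the sample frame is
# mapped to the polar angle alpha, the azimuth beta and the projection point
# P' on the equatorial plane (projection through the south pole).
import numpy as np

def stereographic(v):
    v = np.asarray(v, float)
    v = v / np.linalg.norm(v)
    if v[2] < 0:                    # take the antipode for the lower hemisphere
        v = -v
    alpha = np.degrees(np.arccos(v[2]))               # polar angle from N
    beta = np.degrees(np.arctan2(v[1], v[0])) % 360   # azimuth in the plane
    x, y = v[0] / (1 + v[2]), v[1] / (1 + v[2])       # coordinates of P'
    return alpha, beta, (x, y)

alpha, beta, P = stereographic([1, 1, 1])             # pole of (111), cubic
print(round(alpha, 2), round(beta, 1))                # 54.74 45.0
```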
Table 2.4 Possible orientation relationships between austenite (A) and martensite (M)

Bain: (100)A || (001)M, [001]A || [11̄0]M [96]
Kurdjumov-Sachs (K-S): (111)A || (101)M, [11̄0]A || [111̄]M [97]
Nishiyama-Wassermann (N-W): (111)A || (101)M, [2̄11]A || [101̄]M [98, 99]
Pitsch: (110)A || (11̄2)M, [11̄0]A || [1̄11̄]M [100]

Displacement gradient tensor
It is well known that the martensitic transformation is a displacive transformation realized by coordinated atomic movements. As shown in Fig. 2.11, the transformation can be regarded as the deformation of a reference cell, set artificially on the orientation relationship plane and direction of the austenite, into the corresponding reference cell of the martensite by a certain lattice distortion. The reference cells of austenite and martensite can be built according to the orientation relationship ((hkl)A || (hkl)M, [uvw]A || [uvw]M). Normally, the reference cell of the austenite is an orthogonal unit cell. To analyze the atomic displacements during the martensitic transformation, orthonormal coordinate systems are again helpful. As presented in Fig. 2.11, orthonormal coordinate systems (i-j-k) are fixed in both the austenite and the martensite. The lattice vectors of both austenite and martensite can be expressed in the orthonormal coordinate system, as presented in Eq. (2-29) and Eq. (2-30):

xA = xA i + 0 j + 0 k,  yA = 0 i + yA j + 0 k,  zA = 0 i + 0 j + zA k   (2-29)

xM = xM^x i + xM^y j + xM^z k,  yM = yM^x i + yM^y j + yM^z k,  zM = zM^x i + zM^y j + zM^z k   (2-30)

Then the displacements describing the lattice change from austenite to martensite are obtained as D = Δx + Δy + Δz, where

Δx = (xM^x − xA) i + xM^y j + xM^z k,  Δy = yM^x i + (yM^y − yA) j + yM^z k,  Δz = zM^x i + zM^y j + (zM^z − zA) k   (2-31)

Therefore, the displacement D can be rewritten as Eq.
(2-32):

D = ((xM^x − xA) + yM^x + zM^x) i + (xM^y + (yM^y − yA) + zM^y) j + (xM^z + yM^z + (zM^z − zA)) k = Dx i + Dy j + Dz k   (2-32)

According to the definition, the displacement gradient tensor with respect to the orthonormal coordinate system is

T = | (xM^x − xA)/xA   yM^x/yA          zM^x/zA         |
    | xM^y/xA          (yM^y − yA)/yA   zM^y/zA         |   (2-33)
    | xM^z/xA          yM^z/yA          (zM^z − zA)/zA  |

Vectors and tensors can further be transformed between the reference frames of the MgO substrate, the austenite and the martensite through the rotation matrices of Eq. (2-34), where M_MgO^A is a 45° rotation about the film normal (from the MgO basis to the austenite basis) and M_A^Pitsch transforms from the austenite basis to the reference frame set on the Pitsch orientation relationship:

M_MgO^A = | cos45°   sin45°  0 |        M_A^Pitsch = | √2/2  √2/2   0 |
          | −sin45°  cos45°  0 |   and               | 0     0      1 |   (2-34)
          | 0        0       1 |                     | √2/2  −√2/2  0 |

Experimental procedure
In the present study, the films are produced with a base pressure better than 9.0×10^-5 Pa and with the working pressure fixed at a low argon pressure (0.15 Pa) in the DC magnetron sputtering machine (JZCK-400DJ), in order to obtain continuous films. For Si substrates, the substrate temperature, the target composition, the sputtering power and the deposition time used to produce the NiMnGa films are given in Fig. 3.1. The deposition times were selected to ensure the same film thickness at all the sputtering powers used; the targeted film thickness is about 1 micrometer. The film deposited under 75 W further underwent annealing at 750°C, 800°C and 900°C.
Using the optimized sputtering power from the above tests, the other deposition parameters to produce the films were further refined on MgO substrates, without and with a seed layer.
Fig. 3.1 NiMnGa thin films deposited on Si substrates with various sputtering powers.
As presented in Fig. 3.1, the concentrations of Ni and Mn in the NiMnGa thin films increase slightly with the sputtering power, whereas the concentration of Ga decreases slightly with the sputtering power. The changes are, however, not linear with the sputtering power.
Influence of annealing
The film deposited under 75 W was further annealed at different temperatures for 2 h.
Influence of seed layer
To obtain a continuous film, a 100 nm seed layer of either Ag or Cr was deposited on the substrate. The misfit between MgO and the austenite of NiMnGa is 2.42%, whereas that between Cr and NiMnGa is 0.75%; it is easier to generate epitaxial thin films with a small misfit.
Discussion
When the sputtered atoms arrive at the substrate, a thin film is formed. The morphology, the microstructure and the crystallographic orientation of the thin films depend on the sputtering conditions, as depicted in the so-called structure zone model [START_REF] Charpentier | X-Ray diffraction and Raman spectroscopy for a better understanding of ZnO:Al growth process[END_REF]. This model correlates the argon pressure and the substrate temperature with the morphological properties of the produced films. As shown in Fig. 3.12, low argon pressure produces films with smooth and featureless microstructures, while high argon pressure produces films with microvoids surrounding amorphous islands. In addition, the grain growth is also influenced by these conditions.
Summary
In summary, NiMnGa thin films with continuous microstructures were successfully produced by DC magnetron sputtering, after the optimization of sputtering parameters such as the substrate temperature, sputtering power, substrate material, seed-layer material, film composition and film thickness.
Low argon pressure, high substrate temperature and a seed layer are necessary to fabricate epitaxial NiMnGa thin films with a continuous microstructure. The optimum fabrication parameters to produce NiMnGa films with a continuous microstructure and with the martensite phase at room temperature are:
 Target material: Ni 46 at.% - Mn 32 at.% - Ga 22 at.%;
 Substrate: MgO(100) substrate with a 100 nm Cr seed layer;
 Working pressure: 0.15 Pa;
 Sputtering power: 75 W;
 Substrate temperature: 500 °C.

Chapter 4 Determination of crystal structure and crystallographic features by XRD

Introduction
A key requirement for considerable magnetic field-induced strains in ferromagnetic shape memory alloys, as reported for bulk NiMnGa samples, is the presence of modulated martensite. Precise knowledge of the martensites in terms of crystal structure and crystallographic texture is therefore of utmost importance. To conduct such characterizations, XRD has been one of the privileged techniques to analyze the phase constituents and texture components in many previous studies on NiMnGa epitaxial thin films [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Thomas | Stray-Field-Induced Actuation of Free-Standing Magnetic Shape-Memory Films[END_REF].
For crystal structure analyses of the constituent phases, the crystal structures of the cubic austenite and the tetragonal non-modulated (NM) martensite have been unambiguously determined from the attainable XRD reflection peaks [START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kauffmann-Weiss | Magnetic Nanostructures by Adaptive Twinning in Strained Epitaxial Films[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF]. However, for the monoclinic modulated martensite (7M or 14M), the lattice constant determination has suffered from an insufficient number of measured reflection peaks [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], which leaves the monoclinic angle of the crystal unattainable.
Without knowing the monoclinic angle precisely, the monoclinic modulated martensite has to be simplified to a pseudo-orthorhombic structure [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. This simplification to a pseudo-orthorhombic structure makes the precise description of the orientation relationships between martensitic variants using well-defined twinning elements (such as the twinning plane, the twinning direction, etc.) particularly difficult, especially when there are irrationally indexed twinning elements. For crystallographic texture analyses, several studies have been performed on the preferred orientations of the two kinds of martensite. Special attention has been paid to linking the orientation relationship between the two martensites and to providing experimental evidence for the "adaptive phase model". In this chapter, the phase constituents and texture components of NiMnGa thin films were investigated by X-ray diffraction techniques. The crystal structure of the modulated martensite and its lattice constants expressed in the monoclinic Bravais cell were fully determined, and the texture characteristics were precisely detected.

Experimental
Ni-Mn-Ga thin films with a nominal composition of Ni50Mn30Ga20 and a nominal thickness of 1.5 μm were deposited from a cathode target of Ni46Mn32Ga22 by DC magnetron sputtering.
A 100 nm thick Cr buffer layer was pre-coated on the MgO(100) monocrystal substrate. The crystal structures of the as-deposited NiMnGa thin films were determined by X-ray diffraction using Co-Kα radiation (λ = 0.178897 nm). Considering that the thin films may possess an in-plane texture, two four-circle X-ray diffractometers, one with a conventional θ-2θ coupled scan and the other with a rotating anode generator (RIGAKU RU300) and a large-angle position sensitive detector (INEL CPS120), were used to collect a sufficient number of diffraction peaks. The geometrical configurations of the two X-ray diffractometers are schematically illustrated in Fig. 2.3. In the former case (Fig. 2.3a), the θ-2θ coupled scans were performed between 45° and 90° at tilt angles χ ranging from 0° to 10° with a step size of 1°. In the latter case (Fig. 2.3b), the 2θ scans were conducted at tilt angles χ from 0.75° to 78.75° with a step size of 1.25°. At each tilt angle, the sample was rotated from 0° to 360° with a step size of 5°. Two incident angles ω (27.9° and 40°) were selected in order to obtain the possible diffraction peaks in the low 2θ (48°-58°) and the high 2θ (around 82.5°) regions. The final diffraction patterns were obtained by integrating all diffraction patterns acquired at the different sample positions.

Results and discussion
Determination of crystal structure
Fig. 4.1a shows the χ-dependent XRD patterns of the as-deposited Ni-Mn-Ga thin films obtained by conventional θ-2θ coupled scanning at ambient temperature. At each tilt angle χ, there are only a limited number of diffraction peaks. Besides, no diffraction peaks are visible over the 2θ range of 48°-55°.
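The mapping between peak positions and interplanar spacings used throughout these scans is Bragg's law with the Co-Kα wavelength quoted above. A minimal sketch (the example d-spacing is made up for illustration):

```python
import math

WAVELENGTH = 0.178897  # Co-K alpha wavelength in nm, as quoted in the text

def two_theta(d_nm: float, wavelength_nm: float = WAVELENGTH) -> float:
    """Bragg's law (n = 1): diffraction angle 2-theta in degrees for spacing d."""
    s = wavelength_nm / (2.0 * d_nm)
    if not 0.0 < s <= 1.0:
        raise ValueError("no reflection: lambda / 2d outside (0, 1]")
    return 2.0 * math.degrees(math.asin(s))

def d_spacing(two_theta_deg: float, wavelength_nm: float = WAVELENGTH) -> float:
    """Inverse relation: interplanar spacing from a measured 2-theta."""
    return wavelength_nm / (2.0 * math.sin(math.radians(two_theta_deg) / 2.0))
```

For example, an assumed spacing of 0.21 nm diffracts at roughly 2θ ≈ 50°, i.e. inside the low-2θ window (48°-58°) scanned in the text.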
This could be attributed to the texture effect of the as-deposited thin films, as several peaks at these positions were well observed in previous powder XRD measurements [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF][START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF]. A close look at the measured and calculated peak positions shows that fairly good matches can be found between the measured and recalculated positions for the two kinds of martensite, although deviations of 0.2-2° exist. The good fits confirm that the modulated martensite and the NM martensite possess the same crystal structures as their counterparts in bulk materials, whereas the angular deviations indicate that the lattice constants of the 7M and NM martensite in the as-deposited thin films are not exactly the same as the ones derived from their powder counterparts, as the substrate imposes a constraint on the as-deposited thin films during the phase transitions. Using the measured peak positions of each phase, the lattice constants of the 7M and NM martensite were resolved. Table 4.1 summarizes the complete crystal structure information on the three different phases (austenite, 7M martensite, and NM martensite) involved in the as-deposited thin films. The unit cells of the 7M and NM martensite are shown in Fig. 4.2. This information is a prerequisite for the subsequent EBSD orientation analyses. It should be noted that the macroscopic crystallographic features obtained in the present work are similar to those identified in the previous studies [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF].
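Resolving monoclinic lattice constants from measured peak positions rests on the interplanar-spacing relation for a monoclinic cell (b unique axis). A small sketch of that relation (the numerical lattice constants used in the check are arbitrary, not the thesis values):

```python
import math

def d_monoclinic(h: int, k: int, l: int,
                 a: float, b: float, c: float, beta_deg: float) -> float:
    """Interplanar spacing of (hkl) for a monoclinic lattice, b unique axis.

    1/d^2 = [ h^2/a^2 + k^2 sin^2(beta)/b^2 + l^2/c^2
              - 2 h l cos(beta)/(a c) ] / sin^2(beta)
    """
    beta = math.radians(beta_deg)
    s2 = math.sin(beta) ** 2
    inv_d2 = (h * h / a**2 + k * k * s2 / b**2 + l * l / c**2
              - 2.0 * h * l * math.cos(beta) / (a * c)) / s2
    return 1.0 / math.sqrt(inv_d2)
```

A useful sanity check is the cubic limit: with β = 90° and a = b = c the relation collapses to d = a / sqrt(h² + k² + l²), so fitting code built on this function can be validated against cubic austenite peaks first.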
The only difference is that the choice of the lattice cell for the two martensites is not the same. The (2 0 20)mono, (2 0 2̄0)mono and (0 4 0)mono planes in the present work refer to the respective (0 4 0)orth, (4 0 0)orth and (0 0 4)orth planes in the pseudo-orthorhombic coordinate system [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], whereas the (004)Tetr and (220)Tetr planes refer to the respective (004)NM and (400)NM planes in the previous studies [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF].

Summary
X-ray crystal structure analysis shows that three different phases, i.e. austenite, 7M modulated martensite, and NM martensite, co-exist in the as-deposited Ni50Mn30Ga20 thin films. The austenite phase has a cubic L21 crystal structure (Fm-3m).

Experimental procedure
The microstructures and crystallographic orientations of the thin films with a nominal composition (at.%) of Ni50Mn30Ga20 and a nominal thickness of 1.5 μm were analyzed using the same SEM equipped with an EBSD system, where the EBSD patterns from the martensitic variants were manually acquired using Channel 5 Flamenco's interactive option. Prior to the microstructural observations and orientation measurements, the thin-film samples were subjected to thickness-controlled electrolytic polishing with a solution of 20 % HNO3 in CH3OH at 12 V at room temperature. The NiMnGa thin films were also investigated by transmission electron microscopy (TEM) at an accelerating voltage of 200 kV.
Specimens for the TEM investigation were cut out of the free-standing NiMnGa thin film and further thinned by twin-jet electrolytic polishing with an electrolyte of 20 vol.% HNO3 in CH3OH at ambient temperature.

The high relative contrast zones consist of shorter and somewhat bent plates that are oriented roughly at 45° with respect to the substrate edges. In fact, the SE image contrast is related to the surface topography of the observed object. Thus, the low relative contrast zones and the high relative contrast zones are expected to have low and high surface reliefs, respectively. Further examination at a higher magnification in backscattered electron imaging (BSE) mode reveals that fine lamellae having two different brightness levels are distributed alternately inside each plate, as highlighted with green and blue lines in Fig. 5.1(b). Of the two contrasted neighboring lamellae, one is thicker and the other is thinner. As the BSE image contrast for a monophase microstructure with a homogeneous chemical composition originates from the orientation differences of the microstructural components, the thicker and thinner lamellae distributed alternately in each plate should be correlated with two distinct orientations. The crystal structures of the coarse plates in the near-surface layer of the film and of the fine plates well under the film surface were further verified by EBSD, using the determined crystal structure information for the 7M and NM martensite. In contrast, the pattern in Fig. 5.2(f) represents a single pattern and can be well indexed with the monoclinic superstructure of the 7M martensite (Fig. 5.2(g)). Large misfits between the acquired and calculated patterns are generated if the tetragonal crystal structure of the NM martensite (Fig. 5.2(h)) is used to index this pattern. In this context, the coarse plates in the top layer of the film are composed of the NM martensite, whereas the fine plates in the film interior are of the 7M martensite.
Taking the X-ray measurement results into account, one may deduce that the NM martensite is located near the free surface of the film, the austenite above the substrate surface, and the 7M martensite in the intermediate layers between them.

Results
Microstructure of epitaxial NiMnGa thin films
Crystallographic features of NM martensite
Using the determined lattice constants of the NM and 7M martensites and the published atomic position data [START_REF] Righi | Crystal structure of 7M modulated Ni-Mn-Ga martensitic phase[END_REF] as initial input, the constituent phases in the surface layers of the as-deposited thin films were verified by EBSD analysis. The Kikuchi line indexation has evidenced that both the low and high relative contrast zones displayed in Fig. 5.3(a) can be identified as the tetragonal NM martensite rather than the 7M martensite. Apparently, there is a difference in interpreting the microstructural features of Ni-Mn-Ga thin films between the present result and previous work by other groups [42-44, 55, 82, 85, 86, 88, 89]. This may be related to the different thicknesses of the thin films used for the microstructural characterizations. In the present study, the NM martensite was found at the surface of the produced thin films of about 1.5 µm thickness, while the others reported the existence of the 7M martensite at the surface of thin films with a maximum thickness of about 0.5 µm. Indeed, the formation of the different types of martensite is very sensitive to local constraints. Obviously, at the film surface, the constraint from the substrate decreases with increasing film thickness. Thus, the surface layers of thick films, without much constraint from the substrate, easily transform to the stable NM martensite, as demonstrated in the present case.

The specimen was tilted to set the {1̄12}Tetr lamellar interfaces in P1 and P2 on edge, as shown in Fig. 5.7. The selected area diffraction patterns acquired from plates P1 and P3 in Fig. 5.6 and from plates P1 and P2 in Fig. 5.7 well revealed that the adjacent fine lamellae in the same plate have their {1̄12}Tetr planes in parallel. This further confirms the compound twin relationship between them.

Orientation correlation
Detailed EBSD orientation analyses were conducted on the NM martensite plates in the low and high relative contrast zones (Z1 and Z2 in Fig. 5.3(a)). It is noted that the orientations of the major and minor variants in the low relative contrast zones are different from those in the high relative contrast zones. For the low relative contrast zones (Z1), the major and minor variants are oriented with their (110)Tetr planes and (001)Tetr planes, respectively, nearly parallel to the substrate surface (Fig. 5.8(c)). In the high relative contrast zones (Z2), such plane parallelisms hold for plates 2 and 4 but with an exchange of the planes between the major and minor variants, whereas both the major and minor variants in plates 1 and 3 are oriented with their (110)Tetr planes nearly parallel to the substrate surface (Fig. 5.8(d)). In correlation with the microstructural observations, plates 2 and 4 are featured with higher brightness and plates 1 and 3 with lower brightness.

Misorientation relationships
Based on the misorientation calculations in part 2.3.2 and Eq.
(2-8), the orientation relationships between adjacent lamellar variants were further determined from their orientation data manually acquired by EBSD. For both the low and high relative contrast zones, the in-plate lamellar variants are found to have a compound twin relationship with the twinning elements K1 = (112)Tetr, K2 = (112̄)Tetr, η1 = [111̄]Tetr, η2 = [111]Tetr, P = (1̄10)Tetr and s = 0.412. Here, all the crystallographic elements are expressed in the crystal basis of the tetragonal Bravais lattice for convenience in interpreting the EBSD orientation data and in introducing the symmetry elements of the tetragonal structure to find the proper misorientation axes and angles.
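For a {112}<111̄> compound twin in a body-centred tetragonal description, the twinning shear depends only on the axial ratio c/a. The sketch below uses the assumed textbook relation s = |r² − 2| / (√2 r) with r = c/a (it reduces to the bcc value 1/√2 at r = 1); the c/a value is chosen for illustration, not taken from the thesis tables.

```python
import math

# Twinning shear of a {112}<11-1> compound twin in a bct lattice.
# The relation below is an assumed textbook expression; the c/a used
# in the example is illustrative only.

def twinning_shear(c_over_a: float) -> float:
    r = c_over_a
    return abs(r * r - 2.0) / (math.sqrt(2.0) * r)

# A c/a around 1.73-1.74 reproduces a shear magnitude near the s = 0.412
# quoted in the text.
s = twinning_shear(1.735)
print(f"s = {s:.3f}")
```

Two built-in consistency checks: at c/a = 1 (bcc) the shear is 1/√2 ≈ 0.707, and at c/a = √2 (fct describing fcc) it vanishes, bracketing the intermediate value found here.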
The present twin relationship is the same as that reported for the bulk alloys [START_REF] Cong | Microstructural and crystallographic characteristics of interpenetrating and non-interpenetrating multiply twinned nanostructure in a Ni-Mn-Ga ferromagnetic shape memory alloy[END_REF][START_REF] Zhang | A general method to determine twinning elements[END_REF], and it is also consistent with the so-called a-c twin relationship found in thin films [START_REF] Eichhorn | Microstructure of freestanding single-crystalline Ni2MnGa thin films[END_REF][START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. By means of the indirect two-trace method [START_REF] Zhang | Indirect two-trace method to determine a faceted low-energy interface between two crystallographically correlated crystals[END_REF], the in-plate interlamellar interface planes in the two relative contrast zones were further determined. They coincide well with the (112)Tetr twinning plane K1.

As for the high relative contrast zones (Fig. 5.8(d)), the interlamellar interfaces in one plate (e.g. plate 1) are roughly perpendicular (88.6°) to the substrate surface, whereas the interlamellar interfaces in its neighboring plates (e.g. plate 2) are inclined at 44.4° toward the substrate surface. Moreover, the orientation relationships between two lamellae connected by an inter-plate interface in the low and high relative contrast zones were calculated.
The respective minimum rotation angles and the corresponding rotation axes are displayed in Table 5.1 and Table 5.2. The counterpart major and minor variants in adjacent plates are related by a rotation of ~83° around the <110>Tetr axes and by a rotation of ~11-14° around the <301>Tetr axes, respectively, with certain degrees of deviation. For the low relative contrast zones, the closest atomic planes from the counterpart major variants are referred to the (1̄21̄)Tetr planes with an angular deviation of 1.9° and those from the counterpart minor variants to the {010}Tetr planes with an angular deviation of 4.2°, as outlined with black dotted rectangles in the {010}Tetr and {112}Tetr pole figures in Fig. 5.8(c). These characteristic planes are all nearly perpendicular to the substrate surface, and the corresponding planes of the counterpart major and minor variants are positioned symmetrically with respect to the (010)MgO plane. As for the high relative contrast zones (Fig. 5.8(d)), the closest atomic planes from the counterpart major and minor variants are the respective (1̄21)Tetr and (010)Tetr planes, similar to the case in the low relative contrast zones. However, they are no longer perpendicular to the substrate surface, as detailed calculations on plates 1 and 2 show for their characteristic planes. It should be noted that the present results on the orientation relationships between the lamellar variants in two neighboring NM plates (e.g. plates A and B in the low relative contrast zones and plates 1 and 2 in the high relative contrast zones), the orientations of the in-plate interlamellar interfaces and those of the inter-plate interfaces with respect to the substrate are similar to those reported in the literature [42-44, 82, 89].
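The minimum rotation angles in Tables 5.1 and 5.2 come from a disorientation calculation: the smallest rotation angle of S·(g_B·g_A^T) over the proper rotation group of the crystal. A self-contained sketch with the 8 proper rotations of the tetragonal lattice (an illustrative implementation, not the thesis code):

```python
import numpy as np

# Disorientation between two crystal orientations under tetragonal
# symmetry: minimise the rotation angle of S @ (gB @ gA.T) over the
# 8 proper rotations, generated from 90 deg turns about z and a
# 180 deg turn about x.

def axis_angle(axis, angle_deg):
    """Rotation matrix from an axis-angle pair (Rodrigues formula)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    t = np.radians(angle_deg)
    c, s = np.cos(t), np.sin(t)
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return c * np.eye(3) + s * K + (1.0 - c) * np.outer(axis, axis)

TETRAGONAL = [axis_angle([0, 0, 1], 90.0 * k) @ axis_angle([1, 0, 0], 180.0 * m)
              for k in range(4) for m in range(2)]

def disorientation_deg(gA: np.ndarray, gB: np.ndarray) -> float:
    """Minimum misorientation angle (degrees) between gA and gB."""
    dg = gB @ gA.T
    angles = []
    for S in TETRAGONAL:
        c = (np.trace(S @ dg) - 1.0) / 2.0
        angles.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
    return min(angles)
```

For instance, two orientations related by a 95° rotation about the c axis have a disorientation of only 5°, because a 4-fold symmetry rotation absorbs the remaining 90°.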
However, our direct EBSD orientation measurements have clarified that plates C and D in the low relative contrast zones (or plates 3 and 4 in the high relative contrast zones) are not a repetition of plates A and B (or plates 1 and 2), in terms of the orientations of the lamellar variants, the intra-plate interfaces and the inter-plate interfaces.

Discussion
As demonstrated above, the morphology and the surface topology of the NM martensite plates in the low and high relative contrast zones are clearly different, although they appear to be composed of the same (112)Tetr compound twins acting as the primary microstructural elements. In essence, the crystallographic orientations of the in-plate martensitic variants with respect to the substrate surface are not the same. This may be the origin of the morphological and topological differences observed for the two relative contrast zones, as discussed below.

(1) Low relative contrast zone
Fig. 5.9(a) illustrates the atomic correspondences of eight lamellar variants organized in four NM plates (representing one variant group) for the low relative contrast zones (Z1), viewed from the top of the as-deposited thin films. The atomic correspondences were constructed from the individually measured orientations of the lamellar variants and the determined intra- and inter-plate interface planes. The width ratio (expressed as the number ratio of atomic layers) between the minor and major variants is 0.492, as determined according to the phenomenological theory of martensitic transformation (known as the WLR theory [START_REF] Wechsler | On the Theory of the Formation of Martensite[END_REF][START_REF] Wayman | The phenomenological theory of martensite crystallography: Interrelationships[END_REF]) under the assumption that the invariant plane is parallel to the MgO substrate surface. This width ratio is very close to 2:4 (0.5), i.e. that of the ideal (5 2̄) stacking sequence.
It should be noted that, for the sake of saving space, only two atomic layers for the minor variants and five atomic layers for the major variants were taken in Fig. 5.9(a) to illustrate the thickness ratio between the minor and major variants, the structures of the intra- and inter-plate interfaces, and the orientations of the lamellar variants with respect to the substrate. In reality, the lamellar variants are much thicker, in the nanometer range. One should not confuse this with the structure model from the "adaptive phase theory" [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Kauffmann-Weiss | Magnetic Nanostructures by Adaptive Twinning in Strained Epitaxial Films[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF][START_REF] Niemann | The Role of Adaptive Martensite in Magnetic Shape Memory Alloys[END_REF], where the unit cell of the monoclinic 7M martensite is built from a fixed number of atomic layers in each tetragonal variant. Fig. 5.9(b) presents a 3D display of the atomic correspondences between two alternately distributed lamellar variants with the (112)Tetr compound twin relationship in one NM martensite plate. It is seen from Figs. 5.9(a) and 5.9(b) that the in-plate interlamellar interfaces are coherent, with a perfect atomic match. To further reveal the inter-plate interface features, a 3D configuration of two adjacent NM plates was constructed using the twinned lamellar variants as building blocks, as illustrated in Fig. 5.9(c). From Figs. 5.9(a) and 5.9(c), it is seen that the inter-plate interfaces are incoherent, with a certain amount of atomic mismatch.
These inter-plate interfaces correspond to the so-called a-c twin interfaces mentioned in the literature [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF]. Interestingly, the atoms from two adjacent NM plates are not totally disordered at the inter-plate interfaces but show some periodicity. For instance, each pair of major and minor lamellae constitutes one period, and the two end atoms of one period possess a perfect match at the inter-plate interface. Choosing those coherent atoms as reference, the atoms within one period experience an increasing mismatch, symmetric with respect to the plate interface, when approaching the interlamellar interface enclosed in the period. Both the periodic coherence and the symmetrical mismatch define a straight inter-plate interface, which acts as another invariant plane for the NM martensite in the low relative contrast zones. Indeed, the width ratio required by this invariant plane is the same as that required by the invariant plane parallel to the MgO substrate surface. Due to such an atomic construction, the inter-plate interfaces are always straight, without any need to bend to accommodate unbalanced interfacial atomic misfits. This boundary character has been evidenced by the microstructural observations (Fig. 5.3(a)). As the combinations of the lamellar variants ((112)Tetr compound twins) are the same in neighboring plates, the atomic structures of all inter-plate interfaces are the same. Therefore, all plate interfaces in the low relative contrast zones are parallel to one another, as demonstrated in Fig. 5.3(b). Moreover, as the major and minor lamellar variants ((112)Tetr compound twins) have the same orientation combination for all NM plates and are distributed symmetrically with respect to the inter-plate interfaces (Fig.
5.9(c)), there appear no microscopic height misfits across the inter-plate interfaces in the film normal direction. The NM plates in these zones are relatively flat, without pronounced surface relief or corrugation. Therefore, no significant relative contrast is visible between adjacent NM plates in the SE image.

(2) High relative contrast zone
Similarly to the above analyzing scheme, the atomic correspondences of eight lamellar variants in four NM plates, the 3D atomic correspondences of the twinned lamellar variants in one NM plate, and the 3D configuration of two adjacent NM plates were also constructed for the high relative contrast zone (Z2), as shown in Fig. 5.10. Note that each high relative contrast zone contains two plates with distinct orientations (e.g. plates 1 and 2 or plates 3 and 4 in Fig. 5.10(a)) in terms of the orientations of the paired lamellar variants (the (112)Tetr compound twins). Due to the orientation differences, the width ratios between the minor and major variants in plate 1 and plate 2 are 0.47 and 0.48, respectively, compared with 0.492 for the low relative contrast zones. Here, the ideal width ratio of 0.5 was used to construct Fig. 5.10. For a real material, the deviations from the ideal width ratio may be accommodated by stacking faults. From Fig. 5.10(a), it is seen that both the major and minor lamellar variants in plate 1 have their (110)Tetr planes near-parallel to the substrate surface, whereas the major and minor lamellar variants in plate 2 have their (001)Tetr and (110)Tetr planes, respectively, near-parallel to the substrate surface. Taking the coherent (112)Tetr compound twin interfaces in plate 2 as reference, the (112)Tetr compound twin interfaces in plate 1 can be generated by a rotation of about 90° around the [110]MgO direction. In this manner, the (112)Tetr compound twin interfaces become perpendicular to the substrate surface.
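The geometric content of this 90° rotation argument can be checked numerically: rotating an interface by 90° about an axis lying in the film plane exchanges "parallel to the substrate" and "perpendicular to the substrate". The vectors below are generic examples, not the measured film orientations.

```python
import numpy as np

# Rotate the normal of an interface initially parallel to the film
# surface by 90 deg about an in-plane axis (e.g. the [110] substrate
# direction) and measure its tilt relative to the film normal.

def rodrigues(axis, angle_deg):
    """Rotation matrix from an axis-angle pair."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    t = np.radians(angle_deg)
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.cos(t) * np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * np.outer(axis, axis)

surface_normal = np.array([0.0, 0.0, 1.0])   # film normal
in_plane_axis = np.array([1.0, 1.0, 0.0])    # an in-plane <110>-type axis
n = np.array([0.0, 0.0, 1.0])                # normal of an interface parallel to the surface

n_rot = rodrigues(in_plane_axis, 90.0) @ n
tilt = np.degrees(np.arccos(abs(n_rot @ surface_normal)))
print(f"interface normal tilt after rotation: {tilt:.1f} deg")
```

The rotated normal ends up lying in the film plane (tilt of 90° from the film normal), i.e. the interface becomes perpendicular to the substrate surface, matching the statement above.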
It is commonly considered that the (112)Tetr compound twinning may bring about atomic corrugations in the NM plates, as illustrated in Fig. 5.10(b). As the direction of the atomic corrugations in plate 1 lies in the film plane, no significant surface relief is created on the free surface of the thin films. Scanning tunneling microscopy (STM) imaging has evidenced that the free surface of the NM plates - being equivalent to plate 1 in the present work - stays smooth [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF]. In contrast, the (1̄21)Tetr and (010)Tetr planes from the neighboring plates are not in mirror relation with respect to the inter-plate interface. As the lengths of one pair of major and minor variants (considered as one period) in the two plates are not the same along the macroscopic plate interface (Fig. 5.10(c)), an unbalanced atomic misfit can be expected. This misfit is cumulative and increases with the increasing length (parallel to the film surface) and height (normal to the film surface) of the plates. Therefore, the plate interface orientation could be dominated by the orientation of the (1̄21)Tetr plane of the major variant in either plate 1 or plate 2, depending on the local constraints. This may be the reason why the inter-plate interfaces in the high relative contrast zones are bent after running a certain length. As displayed in Figs. 5.10(b) and 5.10(c), the atomic misfit also arises in the film normal direction. For example, the planar spacing of the major and minor variants in the film normal direction is 0.272 nm in plate 1, but 0.334 nm (major) and 0.272 nm (minor) in plate 2. If it is assumed that the major variants in plate 2 make the dominant contribution to the atomic misfit between the two plates at the plate interface, the region of plate 2 is elevated by 23 % in height with respect to that of plate 1.
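The ~23 % elevation follows directly from the planar spacings quoted above; a quick arithmetic check:

```python
# Quick check of the ~23% height elevation quoted in the text, using the
# planar spacings given there (in nm) for the film normal direction.
d_plate1 = 0.272        # major and minor variants in plate 1
d_plate2_major = 0.334  # major variants in plate 2

elevation = (d_plate2_major - d_plate1) / d_plate1
print(f"relative elevation: {elevation:.1%}")
```

The ratio evaluates to about 0.228, i.e. consistent with the ~23 % figure stated in the text.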
In the present work, the as-deposited thin films were subjected to electrolytic polishing before the microstructural observations. The constraints induced by the atomic misfits in the height direction may be fully released at the free surface. Thus, a significant height difference between plate 1 and plate 2 can be expected. In the high relative contrast zones of Fig. 5.3(a), the plates with higher brightness are those with the larger planar spacing in the film normal direction (major variants in plate 2), whereas the plates with lower brightness are those with the smaller planar spacing in the film normal direction. This could well account for the distinct levels of brightness between neighboring plates observed in the high relative contrast zones.

The angle between the dotted yellow and green lines is about 15°, bisected by one solid black line that is parallel to the [1̄10]MgO direction. The observed microstructural features are similar to those reported in recent studies [42-44, 82, 86]. A low relative contrast zone (Group 1) corresponds to the so-called Type Y pattern, and a high relative contrast zone (Group 2 or Group 3) to the Type X pattern [START_REF] Backen | Mesoscopic twin boundaries in epitaxial Ni-Mn-Ga films[END_REF]. In most cases, the neighboring plates in both the low and high relative contrast zones have almost the same width. The microstructure-correlated characterizations of the crystallographic orientations of the 7M martensite plates were conducted by EBSD, where the macroscopic reference frame was set to the crystal basis of the MgO monocrystal substrate. It is found that each 7M martensite plate is specified by a single crystallographic orientation, designated as one orientation variant.

Crystallographic features of 7M martensite
There are a total of four different orientation variants distributed in one plate group, as illustrated in Fig. 5.12(a) and Fig. 5.12(b).
Here, the four orientation variants - representing one plate group with low or high relative contrast - are denoted by the symbols 7M-V1, V2, V3, V4 (Fig. 5.12(a)) and 7M-VA, VB, VC, VD (Fig. 5.12(b)), respectively. Fig. 5.12(c) displays the orientations of the variants in Group 1 and Group 2 in the form of {2 0 20}mono, {2 0 -20}mono and {0 4 0}mono pole figures. The subscript "mono" stands for monoclinic, as the three indices of the crystallographic planes are defined in the monoclinic Bravais lattice frame. Clearly, in the low relative contrast zone (Group 1), the four variants 7M-V1, V2, V3 and V4 all have their (2 0 20)mono plane nearly parallel to the substrate surface (Fig. 5.12(c), left). These variants correspond to the a-c twins with their common b axis perpendicular to the substrate surface, the so-called b-variants in the literature [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF]. However, in the high relative contrast zone (Group 2), the variants VA and VD (colored in yellow and green in Fig. 5.12(b)) have their (2 0 -20)mono plane nearly parallel to the substrate surface (Fig. 5.12(c), middle), and the variants VB and VC (colored in blue and red in Fig. 5.12(b)) have their (0 4 0)mono plane nearly parallel to the substrate surface (Fig. 5.12(c), right). Furthermore, the variants VB and VC are of low brightness and the variants VA and VD of high brightness. They correspond to the a-c twins with their common b axis parallel to the substrate surface, the so-called a-variants for VA and VD and c-variants for VB and VC [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF].
It should be noted that in the high relative contrast zones, one plate group is composed of two (2 0 -20)mono variants (VA and VD) and two (0 4 0)mono variants (VB and VC). However, in the literature [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF], a pair of the (2 0 -20)mono variants (VA and VD) was designated as one single variant (in the pseudo-orthorhombic coordinate system, the corresponding planes are indexed as the (0 4 0)orth, (4 0 0)orth and (0 0 4)orth planes, respectively [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]). In consequence, only two variants were found in each plate group [START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF][START_REF] Müller | Nanoscale mechanical surface properties of single crystalline martensitic Ni-Mn-Ga ferromagnetic shape memory alloys[END_REF], instead of the four variants evidenced in the present work. From the calculated misorientation data, the complete twinning elements - K1, K2, η1, η2, P and s - of the above three types of twins were derived using a general method developed by our group [START_REF] Li | New approach to twin interfaces of modulated martensite[END_REF][START_REF] Zhang | A general method to determine twinning elements[END_REF], as shown in Table 5.5.
The twinning modes are exactly the same as those in bulk materials [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF][START_REF] Li | New approach to twin interfaces of modulated martensite[END_REF]. Here, an attempt is made to correlate the present twin relationships (identified by EBSD) under the monoclinic crystal system with the published ones (deduced from XRD) under the pseudo-orthorhombic crystal system. It is seen from Table 5.6 that the so-called a-c twins with the (1 0 1)A plane as the twin interface are the Type-I twins in the present work. Interestingly, the Type-II twins in the present work also belong to the so-called a-c twins. Clearly, the specification of twin relationships under the pseudo-orthorhombic crystal system has difficulty in distinguishing the subtle differences between the two types of twins. Apart from this, the compound twins could not be differentiated, due to the simplification of the 7M martensite with a pseudo-orthorhombic crystal structure. To have a general view of the configuration of the 7M variants, the occurrences of the three types of twins in the two relative contrast zones were further examined. In the low relative contrast zone (Fig. 5.12(a)), the Type-I twins (V1:V3 and V2:V4) and the Type-II twins (V1:V2 and V3:V4) are those with the interface traces marked by solid black lines and dotted white lines, respectively. In these zones, the traces of the Type-I and Type-II interfaces are parallel, and the majority of variant pairs are Type-I twins. Moreover, no compound twin relationship (V1:V4 and V2:V3) can be observed between two adjacent variants. However, in the high relative contrast zone (Fig. 5.12(b)), the majority of variant pairs are Type-II twins.
The interface traces of Type-I twins have only one orientation (marked by the solid black lines), whereas those of Type-II twins have two orientations (marked by the dotted yellow and green lines). Table 5.6 correlates the two descriptions: the Type-I twin, with K1 = (1 2 10)mono, and the Type-II twin, with K1 close to (1 2 1)mono, both correspond to the a14M-c14M twin of the literature, whose K1 is close to (1 0 1)14M and parallel to (1 0 1)A. The interfaces with compound twin relationship (VA:VD and VB:VC) are the intra-plate interfaces, which occur with the bending of the martensite plates. Using the indirect two-trace method [START_REF] Zhang | Indirect two-trace method to determine a faceted low-energy interface between two crystallographically correlated crystals[END_REF][START_REF] Cong | Determination of microstructure and twinning relationship between martensitic variants in 53 at.%Ni-25 at.%Mn-22 at.%Ga ferromagnetic shape memory alloy[END_REF], the twin interface planes in the low and high relative contrast zones were calculated from the measured crystallographic orientations of the individual variants and the orientations of their interface traces on the film surface. Since the epitaxial NiMnGa thin films were prepared on the MgO monocrystal substrate at elevated temperature, the thin films were composed of monocrystalline austenite when the deposition process was finished. According to the scanning tunneling microscopy (STM) analysis [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF], the orientation relationship between the MgO substrate and the NiMnGa austenite is (0 0 1)MgO[1 1 0]MgO || (0 0 1)A[1 0 0]A, as illustrated in Fig. 5.14(a). Upon cooling, the cubic austenite transforms to the monoclinic 7M modulated martensite below the martensitic transformation start temperature.
Due to the displacive nature of the martensitic transformation, the phase transformation from austenite to 7M martensite in the present thin films is realized by a coordinated lattice deformation of the parent phase following a certain orientation relationship between the two phases. The Pitsch orientation relationship ((1 1 0)A [1 1 0]A || (1 2 1)mono [1 1 1]mono), previously revealed for the austenite to 7M martensite transformation in bulk NiMnGa [START_REF] Li | Determination of the orientation relationship between austenite and incommensurate 7M modulated martensite in Ni-Mn-Ga alloys[END_REF], is further confirmed in the present work, as illustrated in Fig. 5.14(b). It has been demonstrated that the austenite to 7M martensite transformation should be self-accommodated to minimize the macroscopic strain by forming specific variant pairs or groups with specific variant volume fractions [START_REF] Hane | Microstructure in the cubic to monoclinic transition in titanium-nickel shape memory alloys[END_REF][START_REF] Yang | Self-accomodation and shape memory mechanism of ϵ-martensite - II. Theoretical considerations[END_REF][START_REF] Bhattacharya | Wedge-like microstructure in martensites[END_REF]. This self-accommodation character, in accordance with the substrate constraint, may be the origin of the characteristic organizations of the 7M variants in the different variant zones, notably the low and high relative contrast zones in the present work. With the aid of the Pitsch orientation relationship, the lattice deformation during the martensitic transformation can be described by a displacement gradient tensor (M) in an orthonormal basis referenced to the austenite crystal basis. Owing to their different crystallographic orientations, the individual 7M variants should be subject to different constraints from the MgO substrate.
To quantify these differences and their influence on the occurrences of the different variant pairs in the different martensite groups, the displacement gradient tensor of each variant was calculated and expressed in the macroscopic reference frame set to the basis of the MgO substrate ([1 0 0]MgO - [0 1 0]MgO - [0 0 1]MgO). For the low relative contrast zone (Group 1), the displacement gradient tensors of the four 7M variants (V1, V2, V3, V4) in the macroscopic reference frame are given in Eq. (5-2). According to our experimental observations on the low relative contrast zones, the variant pair V1:V3 (or V2:V4) belongs to the Type-I twins, and the variant pair V1:V2 (or V3:V4) to the Type-II twins. To estimate the overall transformation deformation of the two types of twin pairs, a mean tensor matrix is defined as M̄ = d1 × M1 + (1 - d1) × M2 (or M3), where d1 represents the relative volume fraction of one variant. Fig. 5.15 plots the non-zero eij components of the matrix M̄ against d1 for the Type-I and Type-II twin variant pairs. In the case of the Type-I twins, the eii components, representing the dilatations in the three principal directions, and the shear components can be largely cancelled at an appropriate volume ratio (Fig. 5.15). This may be the reason why, in the low relative contrast zones, the Type-I twins appear with much higher frequency than the Type-II twins. Moreover, the formation of the Type-I twins does not create any height difference between adjacent variants in the film normal direction (e31 = e32 = 0, and e33 is the same for the constituent variants). Thus, a homogeneous SE image contrast should be obtained for all the variants.
For the high relative contrast zone (Group 2), the displacement gradient tensors of the four 7M variants (VA, VB, VC, VD) in the macroscopic reference frame are given in Eq. (5-3). Following the above analysis scheme for the low relative contrast zones, the non-zero eij components of the matrix M̄ (M̄ = d1 × MA + (1 - d1) × MB (or MC)) were calculated as a function of the volume fraction d1 of variant VA, as shown in Fig. 5.17. When the Type-II twin pair forms, the situation for the dilatation deformations is similar to that for the Type-I twin pair. Fig. 5.17(c) shows that the e22 and e33 terms are nearly cancelled at d1 = 0.4964, whereas the e11 term is cancelled at d1 = 0.6020. For the shear deformations, all the components converge towards zero around d1 = 0.5 (Fig. 5.17(d)). The e3j terms can nearly be cancelled at two very close volume ratios, d1 = 0.5284 and d1 = 0.5364. When e31 equals zero, e32 is -0.0006; when e32 equals zero, e31 is 0.0008. This indicates that the formation of the Type-II twins in the high relative contrast zone can effectively eliminate the shear deformation along the [0 0 1]MgO direction. Moreover, at these volume ratios, the other components have relatively small values, as shown in the matrices in Fig. 5.17(d). These volume ratios are very close to the experimentally observed thickness ratio and close to the values reported in the literature [START_REF] Leicht | Microstructure and atomic configuration of the (001)-oriented surface of epitaxial Ni-Mn-Ga thin films[END_REF][START_REF] Kaufmann | Modulated martensite: why it forms and why it deforms easily[END_REF]. In this connection, the Type-II twins should appear with higher frequency, in accordance with our experimental observations. In addition, there exist height differences between adjacent variants, as presented by the e33 components of the matrices in Eq. (5-3) for the four 7M variants.
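The search for a self-accommodating volume fraction can be sketched numerically. The matrices below are placeholders (the actual tensors of Eq. (5-3) are not reproduced here); only the linear-mixture logic M̄(d1) = d1·MA + (1 - d1)·MB and the analytic root for a vanishing component are illustrated:

```python
import numpy as np

# Hypothetical displacement gradient tensors of two twin-related variants
# (placeholders, not the actual Eq. (5-3) values from the thesis).
M_A = np.array([[ 0.020, 0.000,  0.015],
                [ 0.000, -0.010, 0.000],
                [ 0.015, 0.000, -0.012]])
M_B = np.array([[-0.018, 0.000, -0.013],
                [ 0.000,  0.011, 0.000],
                [-0.013, 0.000,  0.014]])

def mean_tensor(d1):
    """Volume-weighted mean displacement gradient of the twin pair."""
    return d1 * M_A + (1.0 - d1) * M_B

def cancelling_fraction(i, j):
    """Volume fraction d1 at which component e_ij of the mean tensor vanishes.
    The mixture is linear in d1, so the root is analytic: d1*a + (1-d1)*b = 0."""
    a, b = M_A[i, j], M_B[i, j]
    return b / (b - a)

d1 = cancelling_fraction(2, 0)  # shear e_31 involving the film normal
print(d1, mean_tensor(d1)[2, 0])
```

With the measured tensors, the same root-finding yields the d1 values quoted above (e.g. 0.5284 and 0.5364 for the two e3j components).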
This accounts for the high relative contrast between the pairs of variants in the SE images.

Crystallography and sequence of martensitic transformation

As mentioned above, the local microstructural and crystallographic features of specific martensite variant groups were correlated by EBSD. To obtain the macroscopic crystallographic features, several martensite variant groups in the present NiMnGa thin films were analyzed by EBSD. It is seen that, macroscopically, the martensite variants are organized in colonies or groups, demonstrated by the different brightness levels in Fig. 5.18. There are two differently oriented bright long straight strips parallel to the substrate edges ([1 0 0]MgO or [0 1 0]MgO) that correspond to the low relative contrast zones mentioned above.

Summary

Microstructure characterization shows that both the 7M martensite and the NM martensite appear as plate-shaped variants clustered in groups. In the low relative contrast zones, the Type-I twin pairs are more frequent than the Type-II twin pairs. The rigid constraint from the substrate accounts for this preference, due to the fact that the formation of a Type-I twin pair can cancel the shear deformation in the film normal direction at the macroscopic scale (this shear tends to "peel" the film off the substrate) and thus requires much lower effort. In the high relative contrast zones, there exists one interface trace orientation for Type-I twins but two interface trace orientations for Type-II twins on the film surface. In contrast to the low relative contrast zones, the Type-II twin pairs are more frequent than the Type-I twin pairs, due to the constraint from the rigid substrate. The formation of a Type-II twin pair allows cancelling the shear deformation in the film normal direction at the macroscopic scale. Large relative height differences between adjacent variants account for the high relative contrast.
Crystallographic calculation indicates that the martensitic transformation sequence is from austenite to 7M martensite and then to NM martensite (A→7M→NM). The present study intends to offer deep insights into the crystallographic features and the martensitic transformation of epitaxial NiMnGa thin films.

Epitaxial Ni-Mn-Ga thin films have attracted considerable attention, as they are promising candidates for magnetic sensors and actuators in microelectromechanical systems. Complete information on the microstructural and crystallographic characteristics of NiMnGa films and their relation to the substrate constraints is essential for property optimization. In the present work, epitaxial Ni-Mn-Ga thin films were produced by direct current magnetron sputtering and then characterized by X-ray diffraction (XRD) and by electron backscatter diffraction in a scanning electron microscope equipped with EBSD analysis (SEM-EBSD).

At present, the fabricated NiMnGa thin films contain phases with several crystal structures, martensite variants of various orientations and complex microstructures, which makes it difficult to obtain large magnetic-field-induced strains. Revealing the intrinsic relation between the microstructure and the crystallographic orientations in NiMnGa thin films, and clarifying the martensitic transformation process, are prerequisites for eliminating the unfavorable martensite variants by training and for obtaining large magnetic-field-induced strains. However, the martensite plates in NiMnGa thin films are far smaller than those in bulk materials, which makes the crystallographic orientation characterization of the martensite phases extremely difficult; hence, detailed orientation information on local martensite variants in NiMnGa thin films had not been reported. Addressing this difficulty, this thesis performs a systematic and comprehensive crystallographic analysis of epitaxial NiMnGa thin films by X-ray diffraction (XRD), scanning electron microscopy, electron backscatter diffraction (EBSD) and transmission electron microscopy (TEM). The results are as follows.

First, by adjusting the magnetron sputtering process parameters (sputtering power, pressure, substrate temperature) as well as the film composition and thickness, columnar-grained and strongly oriented epitaxial NiMnGa thin films were successfully prepared. Since the grain boundaries of columnar films restrict the magnetically induced shape change and thus limit the magnetic-field-induced strain, this thesis focuses on the epitaxial NiMnGa thin films.

The XRD results show that the epitaxial NiMnGa thin films consist of three phases: austenite, non-modulated martensite (NM) and seven-layered modulated martensite (7M), the crystal structure of the 7M martensite being monoclinic. Analysis of the EBSD Kikuchi patterns shows that the coarse martensite plates in the surface layer are NM martensite, whereas the fine plates in the underlying layer have the seven-layered modulated structure. Each martensite variant group is composed of four differently oriented martensite plates, and each plate consists of lamellae of two orientations.

Fig. 1.1 Schematic illustration of the magnetic-field-induced deformation due to the rearrangement of martensite variants.
Fig. 1.2 shows the magnetic-field-induced variant rearrangement occurring in a rectangular bar specimen under an applied field along the [1 0 0] direction of the specimen. The twinned microstructure is visible using polarized microscopy on a polished (001) surface of the specimen.

Fig. 1.2 The microstructure evolution due to magnetic-field-induced variant rearrangement [START_REF] Tickle | Ferromagnetic Shape Memory Materials[END_REF].

Fig. 1.3 The illustration of the crystal structures of NiMnGa alloys in austenite (a), non-modulated martensite (b) and modulated martensite (c, d).

Twin relationships exist in each martensite plate, with in total eight martensite variants in each variant group. The twin-related variants have a minimum misorientation angle of ~79° around the <110>Tetr axis [69, 70]. For the modulated martensites (5M and 7M), four types of alternately distributed martensite variants (A, B, C and D) in one martensite variant group were determined to be twin-related: A and C (or B and D) possess a type I twin relation, A and B (or C and D) a type II twin relation, and A and D (or B and C) a compound twin relation. All the twin interfaces are in coincidence with the respective twinning planes (K1) [26-28, 71-74].

Fig. 1.4 (a) The relationship between the crystal coordinate systems of the austenite, the adaptive phase and the monoclinic seven-layered modulated martensite. (b) The pole figure presents the projection of the monoclinic seven-layered modulated martensite coordinate system along cA.

(2) Determination of the phase constituents, their crystal structures and the macroscopic crystallographic features of the various types of martensite in NiMnGa thin films by the X-ray diffraction technique. (3) Correlation of the microstructure with the local crystallographic orientations of NM martensite and 7M modulated martensite by means of EBSD.
(4) Investigation of the orientation relationships between martensite variants and the orientation relationship of the martensitic transformation, based on the determined local crystallographic orientations and crystallographic calculations. (5) Examination of the influence of the substrate constraint on the martensitic transformation and on the preferential selection of martensite variants in NiMnGa thin films, based on crystallographic analyses.

Fig. 2.1 The schematics of the fabrication process for the NiMnGa target.

Fig. 2.2 Illustration of the direct current magnetron sputtering process (a) and the confocal magnetron sputtering system (b).

Fig. 2.3 Schematic illustration of the X-ray diffractometers. (a) The conventional four-circle X-ray diffractometer.

As shown in Fig. 2.4, in connection with the SEM/EBSD orientation definition (Oxford-Channel 5 convention), two crystal coordinate systems are selected by convenience. One is referred to the Bravais lattice cell of each phase, and the other is a Cartesian coordinate system set to the lattice cell. The macroscopic sample coordinate system is referenced to the three edges of the MgO substrate.

Fig. 2.4 Schematic representation of a general (triclinic or anorthic) unit cell.

The orthonormal vectors i, j and k are chosen with j parallel to b (j // b) and i perpendicular to the b-O-c plane (i ⊥ bOc), following the Channel 5 convention, as shown in Fig. 2.5 for the three phases in the present work.

Fig. 2.5 The schematics of the selection of the orthonormal coordinate systems.

The matrix tensors in direct and reciprocal space are given in Eq. (2-…).

Fig. 2.6 Illustration of the coordinate transformation from the sample coordinate system to the Bravais lattice cell.

Fig. 2.7 The schematics of the misorientation between two martensite variants.

G is the Euler angle matrix, transforming from the sample frame to the crystal frame. The misorientation matrix is then ΔG(A→B) = G_B · G_A^(-1), where G_A^(-1) is the inverse matrix of G_A, transforming from crystal frame A back to the sample frame.
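The construction of the Cartesian crystal basis from a general Bravais cell, with j // b and i perpendicular to the b-O-c plane as stated above, can be sketched as follows. The monoclinic lattice constants used here are illustrative placeholders, not the measured values:

```python
import numpy as np

def lattice_vectors(a, b, c, alpha, beta, gamma):
    """Direct lattice vectors of a general (triclinic) cell in an arbitrary
    Cartesian frame; cell angles are given in degrees."""
    al, be, ga = np.radians([alpha, beta, gamma])
    va = np.array([a, 0.0, 0.0])
    vb = np.array([b * np.cos(ga), b * np.sin(ga), 0.0])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    cz = np.sqrt(c**2 - cx**2 - cy**2)
    return va, vb, np.array([cx, cy, cz])

def channel5_basis(a, b, c, alpha, beta, gamma):
    """Orthonormal basis (i, j, k) with j // b and i perpendicular to the
    b-O-c plane, following the convention described in the text."""
    _, vb, vc = lattice_vectors(a, b, c, alpha, beta, gamma)
    j = vb / np.linalg.norm(vb)
    i = np.cross(vb, vc)          # normal to the plane spanned by b and c
    i /= np.linalg.norm(i)
    k = np.cross(i, j)            # completes the right-handed triad
    return i, j, k

# Illustrative monoclinic cell (placeholder values, not the measured constants)
i, j, k = channel5_basis(4.26, 5.50, 42.0, 90.0, 90.0, 93.0)
print(np.dot(i, j), np.dot(j, k), np.dot(k, i))  # all ~0: the triad is orthonormal
```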
S_i are the generic elements of the rotation symmetry group. This provides an alternative description of the misorientation as the successive operation of transforming from the first variant frame (A) back to the sample frame and subsequently to the new variant frame (B). Various methods can be used to represent this transformation operation, such as Euler angles, axis/angle pairs (where the axis is specified as a crystallographic direction), or unit quaternions. The misorientation expressed as a rotation angle and axis is of particular interest for the present work, as it brings first information about the twin orientation relationship between martensite variants. The determination of the angle and axis is explained as follows [START_REF] Humbert | Determination of the Orientation of a Parent [beta] Grain from the Orientations of the Inherited [alpha] Plates in the Phase Transformation from Body-Centred Cubic to Hexagonal Close Packed[END_REF].

Fig. 2.8 The stereographic projection of the normal of crystal 2.

Fig. 2.9 Schematic illustration of the determination of the OR between austenite and martensite.

G_M is the Euler angle matrix of the martensite, which can be determined by SEM-EBSD. S_i and S_j are the symmetry elements of austenite and martensite, respectively. T_(M→A) is the transformation matrix from martensite to austenite. T_A and T_M are the transformation matrices from the Bravais lattice cells of austenite and martensite to the orthonormal reference frames set with respect to the parallel plane and direction; their inverses are the transformation matrices from the orthonormal crystal coordinate systems back to the respective Bravais lattice cells of austenite and martensite.

Fig. 2.10 Schematic illustration of the determination of the orientation relationship between austenite and martensite.

g_1n^A and g_2n^A (n = 1, 2, …, N) represent possible distinct orientations of the austenite.
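The angle/axis extraction from a misorientation matrix ΔG = G_B · G_A^(-1) can be sketched as below; the rotation matrices here are simple illustrative rotations, not measured EBSD orientations:

```python
import numpy as np

def rot_z(theta_deg):
    """Rotation matrix about the z axis (illustrative orientation)."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def misorientation_angle_axis(G_A, G_B):
    """Rotation angle (degrees) and axis of dG = G_B . G_A^-1."""
    dG = G_B @ G_A.T  # for rotation matrices the inverse is the transpose
    cos_w = np.clip((np.trace(dG) - 1.0) / 2.0, -1.0, 1.0)
    w = np.arccos(cos_w)
    # rotation axis from the antisymmetric part of dG
    axis = np.array([dG[2, 1] - dG[1, 2],
                     dG[0, 2] - dG[2, 0],
                     dG[1, 0] - dG[0, 1]]) / (2.0 * np.sin(w))
    return np.degrees(w), axis

angle, axis = misorientation_angle_axis(rot_z(10.0), rot_z(40.0))
print(angle, axis)  # 30 deg about [0, 0, 1]
```

In practice, the disorientation between two variants is obtained by repeating this calculation over all symmetry elements S_i of the rotation group and keeping the smallest angle.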
Fig. 2.11 The schematics of the martensitic transformation.

By examining the change of the lengths of the basis vectors of the reference cell before and after the transformation, the displacement gradient tensor can be readily built. To clearly analyze the constraint from the MgO substrate during the martensitic transformation process, the deformation gradient tensors of the variants were transformed to the basis coordinate system of the MgO substrate. The components of the displacement gradient tensor in the MgO substrate coordinate system represent the dilatations and shears: the components eij (i = j) stand for the normal strains, which represent the dilatations along the corresponding axes of the MgO frame, whereas the components eij (i ≠ j) are shear strains, which indicate the shear magnitude in the j plane (the plane whose normal is the j axis) along the i direction.

Fig. 2.12 Illustration of the coordinate transformation from the coordinates of the MgO substrate to those of the austenite, and from the coordinates of the austenite to the Pitsch reference frame.

Fig. 3.1 Composition of NiMnGa thin films deposited on Si substrate at various sputtering powers.

Fig. 3.2 Microstructure of thin films deposited on Si substrate at various sputtering powers: (a) 30 W, (b) 50 W, (c) 75 W, (d) 100 W.

Fig. 3.3 shows the microstructures of the thin films annealed at different temperatures. It can be seen that the column thicknesses increase with the annealing temperature. The film annealed at 900 °C for 2 h shows a plate structure in each column, with a plate width of about 200 nm.

Fig. 3.3 SEM microstructure of NiMnGa thin films deposited on Si and annealed at different temperatures for 2 h: (a) as-deposited; (b) 750 °C; (c) 800 °C; (d) 900 °C.

Fig. 3.6 NiMnGa thin films deposited on MgO(100) at different substrate temperatures: (a) 400 °C, (b) 500 °C.
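The construction described above (comparing the reference cell's basis vectors before and after transformation, then expressing the tensor in the substrate frame) can be sketched as follows. The stretch values and the 45° in-plane rotation standing for the austenite-to-MgO coordinate change are illustrative assumptions, not the thesis's measured tensors:

```python
import numpy as np

# Hypothetical principal stretches of a martensite variant in the austenite
# basis (placeholder values): the deformation gradient maps the parent
# basis vectors onto the product ones.
F_aust = np.diag([1.04, 0.98, 0.97])

# Illustrative 45 deg rotation about the film normal, standing for the
# coordinate transformation from the austenite basis to the MgO basis.
t = np.radians(45.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# Displacement gradient e_ij in the MgO substrate coordinate system:
# diagonal terms are dilatations, off-diagonal terms are shears.
e_MgO = R @ F_aust @ R.T - np.eye(3)
print(np.round(e_MgO, 4))
```

Note that the trace (the total dilatation) is invariant under the frame change, while in-plane shears appear once the principal stretches are rotated away from the substrate axes.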
Fig. 3.6 shows the X-ray diffraction patterns of the NiMnGa thin film deposited on the MgO substrate. The X-ray diffraction spectra show that the crystal structures present in these thin films are those of 7M martensite and austenite. Figure 3.7 shows the microstructures of the NiMnGa thin films deposited on MgO(100) and on MgO(100) with an Ag or Cr seed layer. As can be seen in Fig. 3.7, only the NiMnGa thin films deposited on MgO\Cr show a continuous character. On the one hand, the Cr seed layer improves element diffusion on the substrate; on the other hand, it decreases the lattice misfit between the substrate and the NiMnGa thin films, since the lattice parameters of Cr and of NiMnGa austenite are 2.888 Å and 5.803 Å, respectively. The orientation relationship between MgO, Cr and NiMnGa is such that the Cr and NiMnGa cells are rotated by 45° about the film normal with respect to the MgO cell.

Fig. 3.7 NiMnGa thin films deposited on MgO(100) with and without seed layer: (a) MgO(100), (b) MgO\Ag (100 nm), (c) MgO\Cr (100 nm).

3.3.3 Influence of film composition and thickness

Fig. 3.8 SEM image of MgO\Cr\NiMnGa thin films with different compositions: (a-b) Ni 51.42 at.%-Mn 29.25 at.%-Ga 19.33 at.%, (c-d) Ni 55.45 at.%-Mn 26.62 at.%-Ga 17.93 at.%.

Figure 3.10 presents the microstructures of MgO\Cr\NiMnGa thin films with different thicknesses prepared in the present work. It is seen from the figures that, with the increase of the film thickness, the amount of terrace-shaped constituents increases. SEM-EDX analyses show that the composition of the terraces is the same as that of the matrix.

Fig. 3.11 XRD patterns of MgO\Cr\NiMnGa thin films with two different thicknesses: (a) 1.5 μm, (b) 3 μm.

The microstructure of the deposited films is classified into different zones depending on the temperature (TS is the substrate temperature).
A high substrate temperature also produces films with a continuous microstructure. In the present study, in order to fabricate NiMnGa thin films with a continuous rather than columnar microstructure, both a low argon pressure (0.15 Pa) and a high substrate temperature (500 °C) were employed. Possibly due to the limitations of the present deposition equipment, films with a continuous microstructure were not attainable directly on the substrates without a seed layer. Introducing a seed layer proved to be an alternative: for NiMnGa deposited on the MgO single crystal substrate, the Cr seed layer improves the element diffusion on the substrate, which results in the continuous microstructure.

Fig. 3.12 Thornton structure zone model correlating the argon pressure and the substrate temperature to the morphological properties of the films [101].

Fig. 4.1 XRD patterns of as-deposited Ni-Mn-Ga thin films on the MgO/Cr substrate: (a) measured by conventional θ-2θ coupled scanning at different tilt angles; (b) measured by 2θ scanning at two incidence angles ω and integrated over the rotation angle φ.

Fig. 4.1b presents the XRD patterns measured using a large-angle position-sensitive detector under two different incident beam conditions. Compared with Fig. 4.1a, extra diffraction peaks are clearly seen in the 2θ range of 48°-55° and at higher 2θ positions (around 82°).
These additional peaks in the 2θ range of 48°-55° were not detected in the previous studies on Ni-Mn-Ga thin films [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Tillier | Martensite structures and twinning in substrate-constrained epitaxial Ni-Mn-Ga films deposited by a magnetron co-sputtering process[END_REF][START_REF] Thomas | Magnetically induced reorientation of martensite variants in constrained epitaxial Ni-Mn-Ga films grown on MgO(001)[END_REF][START_REF] Backen | Comparing properties of substrate-constrained and freestanding epitaxial Ni-Mn-Ga films[END_REF][START_REF] Buschbeck | In situ studies of the martensitic transformation in epitaxial Ni-Mn-Ga films[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF]. By combining all the characteristic diffraction peaks in Figs. 4.1a and 4.1b, we are able to conduct more reliable phase identification and lattice constant determination on the constituent phases of the as-deposited thin films, especially for the modulated martensite.

Fig. 4.2 The illustration of the crystal structures of NM martensite and 7M martensite in NiMnGa thin films.

Fig. 4.3 presents the pole figures of NM martensite and 7M martensite. The edges of the MgO substrate are taken as the macroscopic sample coordinate system, as shown in Fig. 4.3(a).

Fig. 4.3 Pole figures of NM and 7M martensite in NiMnGa thin films: (a) NM martensite, (b) 7M martensite.

The twin relationships cannot be determined from the pole figures and local surface morphologies alone: to determine the twin relationships and the precise twinning elements, the local crystallographic orientation correlated with the microstructure is the prerequisite information.
In this chapter, a spatially-resolved orientation analysis is conducted on the martensites of NiMnGa thin films by means of electron backscatter diffraction (EBSD), a SEM-based microstructural-crystallographic characterization technique. The microstructural features of both the NM martensite and the 7M modulated martensite variants are directly correlated with their crystallographic orientations. The roles of the substrate constraint in the preferential selection of martensite variants are addressed. Based on crystallographic calculations, the martensitic transformation sequence in the present NiMnGa thin films is verified.

Fig. 5.1 (a) SE image of electrolytically polished Ni-Mn-Ga thin films, showing martensite plates that are clustered in groups with low and high relative contrasts. The sample coordinate system is set in accordance with the basis vectors of the MgO substrate. (b) High-magnification BSE image of the squared area Z1 in Fig. 5.1a, showing fine lamellae distributed alternately inside each plate. The inter-plate interfaces are marked with white dotted lines, and the intra-plate interfaces with blue and green solid lines.

Fig. 5.1(a) shows a SE image of the as-deposited thin films after slight electrolytic polishing. It is clearly seen that the martensite appears in plate shape, and individual plates are clustered into groups that can be distinguished locally by the alignment of parallel or near-parallel inter-plate boundaries. According to the SE image contrast of neighboring martensite plates, the clustered groups are characterized by two different relative contrasts - low relative contrast (Z1) or high relative contrast (Z2), as illustrated in Fig. 5.1(a). The low relative contrast zones consist of long and straight plates running with their length direction parallel to one edge of the substrate.
This sample was prepared by a controlled polishing to obtain an increased polishing depth from the left to the right, as schematically illustrated in Fig. 5.2(b). The right side and the left side of Fig. 5.2(a) represent the microstructure near the film surface and that deep inside the film, respectively. It can be seen from Fig. 5.2(a) that the thin film has an overall plate-like microstructure, with plate thickening from its interior to its surface. Although the polishing depth of the film changes smoothly, an abrupt change in the plate thickness occurs without any transition zone. This indicates a complete change of microstructural constituents or phases along the film thickness. Fig. 5 . 5 2(c) and Fig. 5.2(f) present the respective Kikuchi patterns, acquired from one coarse plate (with high brightness in the dotted blue square of Fig. 5.2(a)) and one thin plate (with high brightness in the dotted yellow square of Fig. 5.2(a)). Obviously, the measured patterns bear intrinsic differences, as arrowed in Fig. 5.2(c) and Fig. 5.2(f). Fig. 5.2(d) and Fig. 5.2(e) demonstrate that each of the two overlapping patterns appeared in Fig. 5.2(c) can be indexed with the tetragonal crystal structure of the NM martensite. It means that one coarse plate in the top surface layer of the film contains two NM martensitic variants. This is coherent with the BSE observations, shown in Fig. 5.1(b). Fig. 5 . 2 . 52 Fig.5.2. (a) SE image showing the plate-like microstructure of an electropolished sample with gradient film thickness. (b) Schematic illustration of the sample thickness change from the left side to the right side, produced by gradually electrolytic polishing. (c-e) Kikuchi patterns acquired from one coarse plate with high brightness in the dotted blue square of Fig. 5.2a. The mixed patterns in Fig. 1c are indexed as two NM variants (Fig. 5.2d and Fig. 5.2e) with Euler angles of (156.7 o , 175.2 o , 38.1 o ) and (163.8 o , 96.7 o , 44.7 o ), respectively. 
(f) Kikuchi pattern acquired from one fine plate with high brightness in the dotted yellow square of Fig. 5.2a. (g and h) Calculated Kikuchi patterns using the monoclinic superstructure of 7M martensite and the tetragonal structure of NM martensite, respectively. Note that in the latter case there appear large mismatches with the measured Kikuchi pattern.

Fig. 5.3 (a) SE image of the as-deposited thin films after slight electrolytic polishing. (b) High-magnification BSE image of the squared area Z1 in Fig. 5.3(a). The insets represent the crystallographic orientations of the major and minor lamellar variants. V1 and V2 are the major and minor variants in one martensite plate. V3 and V4 are the major and minor variants in the left adjacent martensite plate.

Fig. 5.4 presents an example of the Kikuchi line pattern acquired from one martensite plate with high brightness in the high relative contrast zone, together with the patterns calculated using the tetragonal NM martensite structure. As the in-plate lamellar variants are too fine and beyond the resolution of the present EBSD analysis system, we could not obtain a single set of Kikuchi lines from one variant. For each acquisition, there are always mixed patterns from two neighboring variants in one image, as displayed in Fig. 5.4(a). However, the high intensity reflection lines belonging to the two different variants can be well separated in the image, as outlined by the green and red triangles in Fig. 5.4(b). By comparing Figs. 5.4(c) and 5.4(d) with Fig. 5.4(a), perfect matches between the acquired Kikuchi lines of the two variants and the ones calculated with the tetragonal NM martensite structure are evident.

Fig. 5.4. (a) Kikuchi line pattern acquired from one martensite plate in the high relative contrast zone (Z2) in Fig. 5.3a. (b) Indication of high intensity reflection lines from two adjacent variants 1 (the green triangle) and 2 (the red triangle).
(c-d) Calculated Kikuchi line patterns using the tetragonal structure for variants 1 and 2, respectively.

Fig. 5.5 TEM bright field images of NiMnGa thin films: (a) the low relative contrast zone, (b) the high relative contrast zone. The yellow dotted lines represent traces of the inter-plate interfaces. The blue dashed lines represent traces of the intra-plate interfaces.

Fig. 5.6 TEM bright field images in the high relative contrast zones of NiMnGa thin films (a-b) and the corresponding diffraction patterns of P1 and P3 (c-d). The yellow dotted lines represent traces of the inter-plate interfaces. The blue dashed lines represent traces of the intra-plate interfaces. Here the electron beam is parallel to the <110>Tetr zone axes of P1 and P3.

Fig. 5.7 TEM bright field images in the high relative contrast zones of NiMnGa thin films (a-b) and the corresponding diffraction patterns of P1 and P2 (c-d). Here the electron beam is parallel to the <201>Tetr zone axes of P1 and P2.

Fig. 5.6 and Fig. 5.7 show the TEM bright field images from the same area as that displayed in Fig. 5.3(a). Individual plates were identified to be composed of two alternately distributed orientation variants (lamellae) with different thicknesses, as schematically illustrated in Fig. 5.8(a) and 5.8(b). The clustered martensite plates running parallel to the substrate edges (Z1) represent one kind of variant group, whereas those running roughly at 45° to the substrate edges (Z2) represent another kind. According to the orientations of the thicker variants in the plates, two sets of orientation plates can be distinguished, i.e. the set of plates A, B, C and D in the low relative contrast zones and the set of plates 1, 2, 3 and 4 in the high relative contrast zones. Thus, there are in total eight orientation variants of the NM martensite in one variant group. For easy visualization, they are denoted as V1, V2, …, V8 in Fig.
5.8(a) and SV1, SV2, …, SV8 in Fig. 5.8(b), where the symbols with odd subscripts correspond to the thicker (major) variants and those with even subscripts to the thinner (minor) variants. Taking the basis vectors of the MgO(100) monocrystal substrate as the sample reference frame, as defined in the "Experimental" part, the measured orientations of the NM variants in the two relative contrast zones are presented in the form of {001}Tetr, {110}Tetr, {010}Tetr and {112}Tetr pole figures, as displayed in Figs. 5.8(c) and 5.8(d).

Fig. 5.8. (a-b) Schematic illustration of the geometrical configuration of NM plates. V1, V2, …, V8 and SV1, SV2, …, SV8 denote the eight orientation variants in the low relative contrast zones (Z1) and in the high relative contrast zones (Z2), respectively. (c-d) Representation of the measured orientations of in-plate lamellar variants in the form of {001}Tetr, {110}Tetr, {010}Tetr and {112}Tetr pole figures. The orientations of the intra- and inter-plate interface planes are respectively indicated by light green solid squares and black dotted rectangles in the {010}Tetr and {112}Tetr pole figures.

The paired lamellar variants in each plate share a common (112)Tetr twinning plane (roughly corresponding to the (101) planes of the austenite [44, 55, 88, 89]) and are perfectly coherent, as indicated by the light green solid squares in Figs. 5.8(c) and 5.8(d). According to the {112}Tetr pole figure of the low relative contrast zones shown in Fig. 5.8(c), the interlamellar interfaces in each plate are inclined roughly 47.5° toward the substrate surface, and two interlamellar interfaces from adjacent plates are positioned symmetrically either to the (010)MgO plane (e.g. plates A and B) or to the (100)MgO plane (e.g. plates B and C).

Fig. 5.9. (a) Atomic correspondences of the eight lamellar variants in four NM plates for the low relative contrast zones (viewed from the top of the as-deposited thin films). Only Mn and Ga atoms are displayed.
(b) 3D atomic correspondences of two alternately distributed lamellae with the (112)Tetr compound twin relationship in one NM plate. The coherent twinning planes are outlined in green. The two marked angles are the dihedral angle between the (110)Tetr plane of the major variants and the MgO substrate, and the dihedral angle between the (001)Tetr plane of the minor variants and the MgO substrate. (c) 3D configuration of two adjacent NM plates using the in-plate lamellar variants as blocks to illustrate the plate interface misfits.

Fig. 5.10. (a) Atomic correspondences of the eight lamellar variants in four NM plates for the high relative contrast zones (viewed from the top of the as-deposited thin films). Only Mn and Ga atoms are displayed. (b) 3D atomic illustration of the combination of two lamellar variants ((112)Tetr compound twins) in plates 1 and 2. The twinning planes between lamellar variants are colored in green. The two marked angles are the dihedral angle between the (001)Tetr plane of the major variants and the MgO substrate, and the dihedral angle between the (110)Tetr plane of the minor variants and the MgO substrate. (c) 3D construction of two adjacent plates showing the plate interface misfits.

Fig. 5.11 presents a selected SE image of 7M martensite plates with full-featured microstructural constituents. It is evident from Fig. 5.11(a) that the individual martensite plates are clustered into groups, exhibiting either low relative contrast (Group 1) or high relative SE contrast (Group 2 and Group 3), as in the case of the NM martensite. The low relative contrast zones contain long straight plates parallel to one of the substrate edges, whereas the high relative contrast zones contain shorter and bent plates with their length direction roughly at 45° to the substrate edges.

Fig. 5.11. (a) SE image of plate groups of 7M martensite with low relative contrast (Group 1) and high relative contrast (Group 2 and Group 3). (b) Magnified image of individual plates belonging to Group 2.
Note that the traces of inter-plate interfaces have three distinct spatial orientations, as highlighted by the dotted yellow lines (Type-II twin interfaces), the dotted green lines (Type-II twin interfaces), and the solid black lines (Type-I twin interfaces).

Fig. 5.12. (a and b) Illustration of 7M martensite plates with four orientation variants in Group 1 (denoted by V1, V2, V3, V4) and Group 2 (denoted by VA, VB, VC, VD). The insets show the corresponding microstructures in the same zones. (c) Representation of the individually measured orientations of 7M variants in Group 1 and Group 2 in the form of {2 0 20}mono, {2 0 -20}mono and {0 4 0}mono pole figures.

Fig. 5.12(c) presents the individually measured orientations of the 7M variants in Group 1 and Group 2. Note that in previous studies, a pair of the (2 0 -20)mono variants was taken as one single a-variant and a pair of the (0 4 0)mono variants (VB and VC) as one single c-variant. The same situation has also happened to the identification of constituent variants in the low relative contrast zones, e.g. a pair of the (2 0 20)mono variants (V1 and V4, or V2 and V3) was previously taken as one single b-variant. Such a discrepancy in specifying the number of variants in one plate group arises from the fact that in the other work the crystallographic orientations of constituent variants in thin films were determined by conventional X-ray diffraction (XRD). The limited spatial resolution of this technique has prevented differentiating the corresponding (2 0 20)mono, (2 0 -20)mono and (0 4 0)mono planes of the variant pairs in the pole figures.

For the low relative contrast zone (Group 1), as shown in Fig. 5.13(a), left, the Type-I twin interfaces are nearly perpendicular to the substrate surface and their interface traces on the film surface are parallel to each other. Similar cases can be found for the Type-II twin interfaces: as shown in Fig. 5.13(a), middle, the two Type-II twin interfaces that have parallel traces on the film surface intersect the substrate surface at +85° (V3 and V4) and -85° (V1 and V2), respectively.
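The correspondence between a twin-interface plane and its trace on the film surface, used repeatedly in the above analysis, can be computed directly: the trace is the line common to the interface plane and the surface, i.e. the cross product of the two plane normals, and the inclination of the interface to the substrate equals the angle between the normals. A minimal numerical sketch in Python is given below; the interface normal used in the example is hypothetical and serves only to illustrate the geometry.

```python
import numpy as np

def trace_and_inclination(plane_normal, film_normal=(0.0, 0.0, 1.0)):
    """Surface trace direction of an interface plane and its inclination
    (degrees) to the substrate; the plane must not be parallel to the surface."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    z = np.asarray(film_normal, dtype=float)
    z /= np.linalg.norm(z)
    trace = np.cross(n, z)              # line common to interface and surface
    trace /= np.linalg.norm(trace)
    # dihedral angle between the interface plane and the substrate
    incl = np.degrees(np.arccos(abs(np.dot(n, z))))
    return trace, incl

# hypothetical interface normal, inclined 45 deg to the substrate
trace, incl = trace_and_inclination((1.0, 0.0, 1.0))
print(trace, round(incl, 1))            # trace along +/-[0 1 0], 45.0 deg
```

Two interfaces with the same trace direction on the film surface can still intersect the substrate at different dihedral angles, which is exactly the situation described above for the Type-I and Type-II interface pairs.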
Apparently, the four Type-I and Type-II twin interfaces are oriented differently through the film thickness, but they have the same interface trace orientation on the film surface. As for the Compound twin interfaces, the interfaces between adjacent variants are all parallel to the film surface, as shown in Fig. 5.13(a), right. That is why no Compound twin interfaces could be observed by examining solely the film surface microstructure in the present work. Such Compound twin interfaces have been detected by cross-section SEM observation [Kaufmann et al.].

For the high relative contrast zone (Group 2), it is seen from Fig. 5.13(b), left, that the Type-I twin interfaces between variants VA and VC and those between variants VB and VD possess roughly the same orientation in the film. They are inclined at about 45° to the substrate surface and their traces on the film surface run along a common direction. In contrast, for the Type-II twin interfaces, as shown in Fig. 5.13(b), middle, although the interface between VA and VB and that between VC and VD are inclined at about 45° to the substrate surface, their interface traces on the film surface do not possess the same orientation. One of them (between VA and VB) deviates by 37.3° from the substrate edge direction, and the angle between the two differently oriented Type-II twin interface traces on the film surface is 15° (as shown in Fig. 5.11(b)), in accordance with the results reported in the literature [Backen et al.]. As for the Compound twin interfaces, the interface between VA and VD and that between VB and VC possess roughly the same orientation (Fig. 5.13(b), right).
They are nearly perpendicular to the substrate surface and their traces on the film surface are nearly parallel to one another. It is found that the bending of the martensite plates in the high relative contrast zone is associated with the orientation change from VA to VD or from VB to VC, which results in a change of the plate interface from a Type I twin interface to a Type II twin interface, or vice versa.

Fig. 5.14. (a) Illustration of the orientation relationship between NiMnGa austenite and the MgO substrate. (b) Schematic of the Pitsch orientation relationship between the cubic austenite lattice and the monoclinic 7M martensite lattice. Note that only the average monoclinic unit cell [73] is used for describing the 7M structure.

In the displacement gradient tensor, the diagonal term eii represents the dilatation in the i direction, and the off-diagonal term eij represents the shear in the i direction on the plane whose normal is in the j direction. It is seen that the largest deformation during the transformation happens as a shear on the twinning plane of the 7M martensite and along its twinning direction. Here, for simplicity, we ignore the structural modulation of the 7M martensite, and the 7M superstructure is reduced to an average monoclinic unit cell that corresponds to one cubic unit cell [73]. Due to the cubic crystal symmetry of the austenite phase, there are six distinct but equivalent lattice correspondences. It is confirmed that the four 7M variants in Group 1 in the low relative contrast zone originate from two austenite planes; the corresponding displacement gradient tensors are given in Eq. (5-2). It is seen that the diagonal components of the four displacement gradient tensors are exactly the same, whereas the off-diagonal components are of the same absolute value but sometimes with opposite signs.
The maximum deformation appears as the shear e21, as highlighted in bold in the matrices in Eq. (5-2). As shown in Fig. 5.15(a), the e31 component, as well as the e23 component representing the shear in the (0 0 1)MgO plane (Fig. 5.15(b)), remains unchanged with the variation of d1. This means that these transformation deformations cannot be accommodated by forming twin variants. However, when the variant volume ratio d1 reaches 0.5, the largest shear e21 and the accompanying shear component are cancelled out. This volume ratio is very close to the thickness ratio observed in the present work. For the case of the Type-II twins, the eii and e31 terms stay unchanged under the variant volume change. When the volume ratio reaches 0.5, the largest shear e21 and the shear e23 are both cancelled out.

Fig. 5.15. Displacement gradient tensor components of (a-b) Type-I twins and (c-d) Type-II twins in the low relative contrast zone, averaged with the volume fractions of the two constituent variants.

Fig. 5.16. Illustration of the shear deformations from austenite to 7M martensite in the low relative contrast zone with respect to the substrate, viewed on (a) the (0 0 1)MgO plane and (b) the (0 1 0)MgO plane.

For the high relative contrast zone, the displacement gradient tensors are given in Eq. (5-3). Here, all the components in each displacement gradient tensor are non-zero and their absolute values are of the same order of magnitude. The maximum deformation appears as a shear component, as highlighted in bold in the matrices in Eq. (5-3). The averaged tensor components are plotted against the variant volume fraction in Figs. 5.17(a-b) (Type-I twin pair) and Figs. 5.17(c-d) (Type-II twin pair). For the Type-I twin pair, it is seen from Fig. 5.17(a) that the dilatation deformations can be accommodated to some extent through the variant volume change, but they cannot all be accommodated at one single volume ratio. Indeed, the e11 terms are nearly cancelled out at d1 = 0.4876, but the e33 term at d1 = 0.5935. Similarly, shear deformation accommodation exists but cannot be achieved at one single volume ratio, as shown in Fig. 5.17(b).
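The averaged tensors plotted in Figs. 5.15 and 5.17 follow a simple rule of mixtures over the two constituent variants, ē = d1 e(V1) + (1 − d1) e(V2). The cancellation behavior discussed above can be sketched numerically; the tensors below are hypothetical, constructed only so that the two variants share their diagonal terms and carry opposite e21 shears, as for a twin pair.

```python
import numpy as np

def averaged_tensor(e1, e2, d1):
    """Rule-of-mixtures average of two displacement gradient tensors,
    weighted by the volume fraction d1 of the first variant."""
    return d1 * np.asarray(e1, dtype=float) + (1.0 - d1) * np.asarray(e2, dtype=float)

# hypothetical twin-related tensors: identical diagonal terms,
# e21 shears of equal magnitude and opposite sign
e_v1 = np.array([[0.01,  0.00, 0.00],
                 [0.12, -0.02, 0.00],
                 [0.00,  0.00, 0.01]])
e_v2 = np.array([[ 0.01,  0.00, 0.00],
                 [-0.12, -0.02, 0.00],
                 [ 0.00,  0.00, 0.01]])

e_bar = averaged_tensor(e_v1, e_v2, 0.5)
print(e_bar[1, 0])   # the e21 shear cancels exactly at d1 = 0.5
print(e_bar[0, 0])   # the dilatation terms are unaffected by the ratio
```

This makes explicit why components that are identical in both variants (the diagonal terms here) cannot be accommodated by any volume ratio, whereas components of opposite sign vanish at d1 = 0.5.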
The e13, e21, e23 and e31 terms are roughly cancelled at d1 = 0.4698, but the e12 and e32 terms at d1 = 0.5940. If we consider that the e3j terms are the most important components to be accommodated due to the constraints from the rigid substrate, no volume ratio can be reached that cancels these components. When the e31 component of the Type-I twin pair equals zero, the e32 component is -0.0111; when the e32 component of the Type-I twin pair equals zero, the e31 component is 0.0111. Therefore, the formation of Type-I twins in the high relative contrast zone is energetically costly.

Fig. 5.17. Displacement gradient tensor components of (a-b) Type-I twins and (c-d) Type-II twins in the high relative contrast zone, averaged with the volume fractions of the two constituent variants.

Fig. 5.18. (a) BSE image of the NiMnGa thin films; (b) magnified view of area A; (c) magnified view of area B. The dotted lines are the variant group boundaries.

Fig. 5.18 presents the BSE images of NiMnGa thin films after slight electrolytic polishing. Adjacent to the long strips (G5 and G6), there are several either bright or dark polygonal zones, which are the corresponding high relative contrast zones. Fig. 5.18(b) and Fig. 5.18(c) are magnified views that present the martensite variant groups. As shown in Fig. 5.18(c), although the traces of the plate interfaces in G3 are close to those in G4, the image brightness of the two martensite variant groups (G3 and G4) is different, indicating that groups G3 and G4 possess different crystallographic features. As the contrast of a BSE image for a monophase microstructure with homogeneous chemical composition originates from the orientation differences of the microstructural components, the difference in contrast between two martensite variant groups indicates the difference in their crystallographic orientations.

Fig.
5.19 (004)Tetr pole figure of NM martensite from the six variant groups determined by EBSD.

Fig. 5.22 Pole figures of NM martensite variants in the six variant groups.

The NM martensite variants are of plate morphology and organized into two characteristic zones featured with low and high relative SEM secondary electron image contrast. Local martensite plates with similar morphology and orientation are organized into variant groups. The low relative contrast zones consist of long straight plates running with their length direction parallel to the substrate edges, whereas the high relative contrast zones consist of shorter and bent plates with the length direction roughly at 45° to the substrate edges. SEM-EBSD in-film-depth analyses further verified the co-existence of the three constituent phases: austenite, 7M martensite and NM martensite. The NM martensite is located near the free surface of the film, the austenite above the substrate surface, and the 7M martensite in the intermediate layers between the austenite and the NM martensite.

Further EBSD characterization indicates that there are four distinct martensite plates in each variant group for both NM and 7M martensite. Each NM plate is composed of paired major and minor lamellar variants in terms of their thicknesses (i.e. (112)Tetr compound twins) having a coherent interlamellar interface, whereas each 7M martensite plate contains one orientation variant. Thus, there are four orientation 7M martensite variants and eight orientation NM martensite variants in one variant group.

For NM martensite, in the low relative contrast zones, the long and straight inter-plate interfaces between adjacent NM plates result from the configuration of the counterpart (112)Tetr compound twins that have the same orientation combination and are distributed symmetrically to the macroscopic plate interfaces.
As there are no microscopic height misfits across the plate interfaces in the film normal direction, the relative contrasts of adjacent NM plates are not distinct in the SE image. However, in the high relative contrast zones, the asymmetrically distributed (112)Tetr compound twins in adjacent NM plates lead to a change of the inter-plate interface orientation. The pronounced height misfits across the inter-plate interfaces in the film normal direction give rise to surface reliefs, hence the high relative contrast between adjacent plates.

For 7M martensite, both Type-I and Type-II twin interfaces are nearly perpendicular to the substrate surface in the low relative contrast zones. The Type-I twin pairs appear with much higher frequency, as compared with that of the Type-II twin pairs. However, there are two Type-II twin interface trace orientations and one Type-I twin interface trace orientation in the high relative contrast zones, and there the Type-II twin pairs are more frequent than the Type-I twin pairs. The inconsistent occurrences of the different types of twins in the different zones originate from the substrate constraint.

The crystallographic calculation also indicates that the martensitic transformation sequence is from austenite to 7M martensite and then to NM martensite (A→7M→NM). The present study intends to offer deep insights into the crystallographic features and martensitic transformation of epitaxial NiMnGa thin films.

In conclusion, NiMnGa thin films with both columnar and continuous microstructures were successfully fabricated by DC magnetron sputtering, after the optimization of sputtering parameters such as substrate temperature, sputtering power, substrate, seed-layer, film composition and thickness.

X-ray diffraction analysis demonstrates that three different phases, i.e. austenite, 7M modulated martensite, and NM martensite, co-exist in the as-deposited epitaxial Ni50Mn30Ga20 thin films. The austenite phase has a cubic L21 crystal structure (Fm-3m, No.
225) with lattice constant aA = 0.5773 nm. The 7M martensite phase has an incommensurate monoclinic crystal structure (P2/m, No. 10) with lattice constants a7M = 0.4262 nm, b7M = 0.5442 nm, c7M = 4.1997 nm, and β = 93.7°. The NM martensite phase is of tetragonal crystal structure (I4/mmm, No. 139) with lattice constants aNM = 0.3835 nm and cNM = 0.6680 nm.

By the combined XRD and EBSD orientation characterizations, it is revealed that the as-deposited microstructure is mainly composed of the tetragonal NM martensite at the film surface and the monoclinic 7M martensite beneath the surface layer. For the NM martensite, there are two characteristic zones featured respectively with low and high relative SE image contrast. The NM martensite in the low relative contrast zones consists of long straight plates running with their length direction parallel to the substrate edges, whereas the NM martensite in the high relative contrast zones consists of shorter and bent plates with the length direction roughly at 45° to the substrate edges.

Each NM plate is composed of paired major and minor lamellar variants (i.e. (112)Tetr compound twins) having a coherent interlamellar interface. There are in total eight orientation variants in one variant group. Indeed, in the low relative contrast zones, the long and straight inter-plate interfaces between adjacent NM plates result from the configuration of the counterpart (112)Tetr compound twins that have the same orientation combination and are distributed symmetrically to the macroscopic plate interfaces. As there are no microscopic height misfits across the plate interfaces in the film normal direction, the relative contrasts of adjacent NM plates are not distinct in the SE image. However, in the high relative contrast zones, the asymmetrically distributed (112)Tetr compound twins in adjacent NM plates lead to a change of the inter-plate interface orientation.
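The inclinations between tetragonal NM planes quoted in this work follow from the standard interplanar-angle relation for a tetragonal lattice, evaluated with reciprocal-lattice vectors g(hkl) = (h/a, k/a, l/c). A minimal sketch using the refined lattice constants above; the plane pair in the example is chosen only for illustration.

```python
import numpy as np

A_NM, C_NM = 0.3835, 0.6680   # tetragonal NM lattice constants (nm)

def interplanar_angle(hkl1, hkl2, a=A_NM, c=C_NM):
    """Angle (degrees) between two (h k l) planes of a tetragonal lattice,
    computed from the reciprocal-lattice vectors g = (h/a, k/a, l/c)."""
    g1 = np.array([hkl1[0] / a, hkl1[1] / a, hkl1[2] / c])
    g2 = np.array([hkl2[0] / a, hkl2[1] / a, hkl2[2] / c])
    cos_t = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# e.g. angle between the (112)Tetr twinning plane and the (001)Tetr plane
print(round(interplanar_angle((1, 1, 2), (0, 0, 1)), 1))   # 50.9
```

Note that the inclination of a given plane to the substrate additionally depends on the orientation of the variant with respect to the MgO reference frame, which is why the measured interface inclinations differ from plate to plate.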
The pronounced height misfits across the inter-plate interfaces in the film normal direction give rise to surface reliefs, hence the high relative contrast between adjacent plates.

The plate-like microstructures of the 7M martensite are also composed of two distinct kinds of plate groups. Each 7M martensite plate contains one orientation variant, and there are four orientation variants in one plate group. The inter-plate interfaces are either Type-I or Type-II twin interfaces. The Type-I twin pairs appear with much higher frequency, as compared with the Type-II twin pairs.

Table 2.1 List of NiMnGa targets with various compositions for deposition

No. | Nominal composition of the targets | Phase of targets at room temperature | Targeted composition in thin films | Targeted phase of thin film at room temperature
A | Ni46at.%-Mn32at.%-Ga22at.% | A | Ni50at.%-Mn30at.%-Ga20at.% | 7M
B | Ni48at.%-Mn30at.%-Ga22at.% | A | Ni50at.%-Mn28at.%-Ga22at.% | 5M
C | Ni50at.%-Mn28at.%-Ga22at.% | 5M | Ni52at.%-Mn26at.%-Ga22at.% | 7M
D | Ni50at.%-Mn30at.%-Ga20at.% | 7M | Ni52at.%-Mn28at.%-Ga20at.% | NM

2.1.2 Thin film deposition

Several deposition methods have been developed to deposit thin films with thicknesses ranging from several nanometers to several microns. Compared with other deposition techniques, the magnetron sputtering process has a number of advantages. First of all, the plasma can be sustained at lower argon pressures, resulting in a low level of impurities. In addition, since the sputtering mechanism is not a thermal evaporation process, even materials with the highest melting points can be deposited at ambient temperature. The sputtering technique also produces films with better adhesion than films produced by evaporation techniques. Substrate heating can be done by attaching a heater to the sample holder for epitaxial deposition: the thermal energy imparted to the depositing atoms increases their mobility, so that the growing film attains a nearly perfect atomic stacking during deposition.
Usually, the deposited alloy films have a composition close to their target composition.

Table 2.2 The selection of coordinate systems (phases, crystal structures and lattice parameters; e.g. austenite: cubic L21, crystal coordinate system).

Table 2.3 Characteristic features of the different twinning types

| Type I twin | Type II twin | Compound twin
Number of 180° rotation axes | 1 | 1 | 2
Twinning plane K1 | Rational | Irrational | Rational
Conjugate twinning plane K2 | Irrational | Rational | Rational
Twinning shear direction η1 | Irrational | Rational | Rational
Conjugate twinning direction η2 | Rational | Irrational | Rational

Table 2.4 Orientation relationships of the martensitic transformation (relationship, planes and directions; e.g. Bain: (001)A || (001)M).

Chapter 3 Fabrication of NiMnGa thin films

3.1 Introduction

In spite of the effort made recently to use sputtering technology for the preparation of thick Ni-Mn-Ga films (from the sub-micron to the micron range (<2 µm)), there are still some open questions in the scientific and technical aspects of producing NiMnGa thin films, such as the control of the film composition during sputtering and the attainability of ultra-thick epitaxial films (several microns thick). The present work is dedicated to tackling these issues by optimizing the substrate parameters and the deposition parameters. Two substrate materials were used in the present work: one is thermally oxidized silicon and the other is monocrystalline MgO. It is known that, compared with MgO, the Si substrate is not optimum for producing qualified epitaxial NiMnGa films in terms of the mono-crystallinity of austenite at high temperature, but it is of low cost. It allows the determination of the optimum deposition parameters, which requires large numbers of trials and errors, at a relatively low cost.
Thus, in the present work, the Si substrate is used to optimize the primary deposition parameters, and the MgO substrate is used to finally produce the qualified films with a limited number of tests.

Table 3.1 Parameters of the NiMnGa thin films deposited on Si with various sputtering powers

Sample | Power (W) | Time (h) | Thickness (µm) | Sputtering rate (nm/s) | Film composition (at.%) Ni / Mn / Ga
Si 1 | 30 | 6 | 1.142 | 0.059 | 54.85 / 23.92 / 21.23
Si 2 | 50 | 3.5 | 1.142 | 0.091 | 55.36 / 24.22 / 20.42
Si 3 | 75 | 1.5 | 0.714 | 0.132 | 55.76 / 23.92 / 20.02
Si 4 | 100 | 2 | 1.285 | 0.159 | 55.17 / 24.97 / 19.87

Table 4.1 Calculated lattice constants of austenite, 7M martensite and NM martensite for the as-deposited Ni-Mn-Ga thin films

Phase | Space group | Crystal system | a (nm) | b (nm) | c (nm) | Angles
7M | P2/m (No. 10) | Monoclinic | 0.4262 | 0.5442 | 4.199 | α = γ = 90°, β = 93.7°
NM | I4/mmm (No. 139) | Tetragonal | 0.3835 | 0.3835 | 0.6680 | α = β = γ = 90°
Austenite | Fm-3m (No. 225) | Cubic L21 | 0.5773 | 0.5773 | 0.5773 | α = β = γ = 90°

4.3.2 Determination of crystallographic texture

The (121)Tetr planes are inclined to the substrate surface at 42.57° and 47.24°, and the (010)Tetr planes at 48.29° and 41.68°, respectively.

Table 5.1 Minimum rotation angle and rotation axis between two lamellar variants connected by an inter-plate interface in the low relative contrast zones

Neighboring plates | Variant pairs | Misorientation angle (°) | Rotation axis
A/B | V1-V3 | 82.6518 | 3.4° from the <110>Tetr direction
A/B | V2-V4 | 13.6865 | 5.4° from the <031>Tetr direction
C/D | V5-V7 | 83.0965 | 3.7° from the <110>Tetr direction
C/D | V6-V8 | 13.1146 | 4.6° from the <301>Tetr direction
B/C | V3-V5 | 83.1537 | 3.6° from the <110>Tetr direction
B/C | V4-V6 | 14.4210 | 4.0° from the <301>Tetr direction
D/A | V7-V1 | 83.0093 | 3.7° from the <110>Tetr direction
D/A | V8-V2 | 14.4189 | 3.1° from the <301>Tetr direction

Table 5.2 Minimum rotation angle and rotation axis between two lamellar variants connected by an inter-plate interface in the high relative contrast zones
Neighboring plates | Variant pairs | Misorientation angle (°) | Rotation axis
1/2 | SV1-SV3 | 82.9699 | 3.9° from the <110>Tetr direction
1/2 | SV2-SV4 | 14.1760 | 2.1° from the <301>Tetr direction
3/4 | SV5-SV7 | 82.7963 | 3.4° from the <110>Tetr direction
3/4 | SV6-SV8 | 14.8473 | 3.6° from the <031>Tetr direction
2/3 | SV3-SV5 | 82.5782 | 4.7° from the <110>Tetr direction
2/3 | SV4-SV6 | 12.2859 | 3.5° from the <031>Tetr direction
4/1 | SV7-SV1 | 82.8110 | 5.0° from the <110>Tetr direction
4/1 | SV8-SV2 | 11.8090 | 9.9° from the <031>Tetr direction

Table 5.3 Calculated misorientations between adjacent 7M variants in Group 1, expressed as rotation axis d = (d1, d2, d3) and angle (ω) in the orthonormal crystal reference frame. The Euler angles of the four variants V1, V2, V3 and V4 are respectively (137.94°, 46.93°, 90.45°), (221.82°, 133.15°, 270.46°), (41.68°, 133.18°, 270.44°), and (318.32°, 46.80°, 90.50°).

Variant pairs | Misorientation angle ω (°) | Rotation axis d (d1, d2, d3) | Twin type
V1:V3 | 83.1337 | (-0.7342, 0.4504, -0.0010) | Type I
V1:V3 | 179.9229 | (0.7482, -0.6789, -0.4871) | Type I
V2:V4 | 82.8455 | (-0.7339, 0.4493, -0.0007) | Type I
V2:V4 | 179.9432 | (0.7498, -0.6791, -0.4856) | Type I
V1:V2 | 96.7436 | (0.7265, -0.5136, -0.0005) | Type II
V1:V2 | 179.9519 | (0.6643, 0.6871, 0.5430) | Type II
V3:V4 | 97.2851 | (0.7254, 0.5166, 0.0002) | Type II
V3:V4 | 179.9806 | (-0.6607, 0.6882, -0.5445) | Type II
V1:V4 | 179.3063 | (0.6837, -0.7297, 0.0036) | Compound
V1:V4 | 179.5858 | (-0.0060, -0.7297, -0.6837) | Compound
V2:V3 | 179.3435 | (0.6841, 0.7293, -0.0011) | Compound
V2:V3 | 179.8736 | (0.0057, -0.7293, 0.6841) | Compound

Table 5.4 Calculated misorientations between adjacent 7M variants in Group 2, expressed as rotation axis d = (d1, d2, d3) and angle (ω) in the orthonormal crystal reference frame.

With the crystallographic orientations of the four 7M variants in each plate group determined, the orientation relationships between neighboring variants can be further calculated. Table 5.3 and Table 5.4 show the calculated misorientations of adjacent variants in the low and high relative contrast zones, expressed in terms of rotation axis and angle.
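The misorientations listed in Tables 5.3 and 5.4 can be recomputed from the Euler angles by forming the Bunge (Z-X-Z) orientation matrices g1 and g2 and extracting the axis and angle of Δg = g2·g1⁻¹. The sketch below omits the crystal symmetry operators, so it returns only one of the equivalent axis/angle descriptions; the sanity check therefore uses a simple 90° rotation rather than the tabulated data.

```python
import numpy as np

def bunge_matrix(phi1, Phi, phi2):
    """Orientation matrix from Bunge Euler angles (degrees, Z-X-Z convention)."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    Rz1 = np.array([[ np.cos(p1), np.sin(p1), 0],
                    [-np.sin(p1), np.cos(p1), 0],
                    [0, 0, 1]])
    Rx  = np.array([[1, 0, 0],
                    [0,  np.cos(P), np.sin(P)],
                    [0, -np.sin(P), np.cos(P)]])
    Rz2 = np.array([[ np.cos(p2), np.sin(p2), 0],
                    [-np.sin(p2), np.cos(p2), 0],
                    [0, 0, 1]])
    return Rz2 @ Rx @ Rz1

def misorientation(euler1, euler2):
    """Rotation angle (degrees) and axis between two orientations,
    without crystal-symmetry reduction."""
    dg = bunge_matrix(*euler2) @ bunge_matrix(*euler1).T
    cos_w = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_w))
    # axis from the skew-symmetric part of dg (sign is convention dependent)
    axis = np.array([dg[2, 1] - dg[1, 2], dg[0, 2] - dg[2, 0], dg[1, 0] - dg[0, 1]])
    norm = np.linalg.norm(axis)
    return angle, (axis / norm if norm > 1e-8 else axis)

# sanity check: a pure 90 deg rotation about the sample Z axis
ang, ax = misorientation((0, 0, 0), (90, 0, 0))
print(round(ang, 1))   # 90.0
```

For twin identification, all symmetry-equivalent descriptions would have to be generated by multiplying Δg with the rotational symmetry operators of the monoclinic point group before selecting the axis closest to a rational plane normal or direction.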
Detailed analysis has revealed that each pair of adjacent variants can be identified as either Type-I, Type-II or Compound twins, depending on whether the rotation axis is close to the normal of a rational plane or/and to a rational direction in the monoclinic crystal basis. It is seen from Table 5.3 that, in the low relative contrast zone, V1:V3 and V2:V4 belong to Type I twins, V1:V2 and V3:V4 to Type II twins, and V1:V4 and V2:V3 to Compound twins. For Group 2, the Euler angles of the four variants VA, VB, VC and VD are respectively (48.43°, 136.75°, 95.28°), (178.19°, 92.35°, 2.78°), (271.81°, 87.65°, 182.78°), and (41.57°, 43.25°, 275.28°).

Variant pairs | Misorientation angle ω (°) | Rotation axis d (d1, d2, d3) | Twin type
VA:VC | 82.3098 | (-0.7414, -0.4414, 0.0195) | Type I
VA:VC | 178.5255 | (-0.7529, -0.6707, 0.4880) | Type I
VB:VD | 82.2390 | (-0.7476, 0.4364, -0.0277) | Type I
VB:VD | 177.9083 | (0.7534, -0.6635, -0.4917) | Type I
VA:VB | 97.8691 | (0.7295, -0.5156, 0.0049) | Type II
VA:VB | 179.5722 | (-0.6569, 0.6839, 0.5500) | Type II
VC:VD | 97.6586 | (0.7246, 0.5187, -0.0032) | Type II
VC:VD | 179.7170 | (0.6583, 0.6891, -0.5454) | Type II
VA:VD | 177.3972 | (0.6871, 0.7263, -0.0004) | Compound
VA:VD | 179.9432 | (0.0227, -0.7265, 0.6869) | Compound
VB:VC | 179.6341 | (0.6693, 0.7429, -0.0013) | Compound
VB:VC | 179.8470 | (0.0031, -0.7429, 0.6693) | Compound

With the correctly determined crystallographic orientations (represented with three Euler angles in Bunge's notation [92]) of the four 7M variants in each plate group, the orientation relationships between neighboring variants can be further calculated, as summarized in Tables 5.3 and 5.4, where VA:VC and VB:VD belong to Type I twins, VA:VB and VC:VD to Type II twins, and VA:VD and VB:VC to Compound twins.

Table 5.5 Twinning elements of the 7M variants represented in the monoclinic crystal coordinate frame. K1 is the twinning plane, K2 the reciprocal or conjugate twinning plane, η1 the twinning direction, η2 the conjugate twinning direction, P the shear plane, and s the magnitude of shear.
Twin elements | Type I twin (VA:VC, VB:VD; V1:V3, V2:V4) | Type II twin (VA:VB, VC:VD; V1:V2, V3:V4) | Compound twin (VA:VD, VB:VC; V1:V4, V2:V3)
K1 | (1 2 10)mono | (1.1240 2 8.7602)mono | (1 0 10)mono
K2 | (1.1240 2 8.7602)mono | (1 2 10)mono | (1 0 10)mono
η1 | [11.0973 10 0.8903]mono | [10 10 1]mono | [10 0 1]mono
η2 | [10 10 1]mono | [11.0973 10 0.8903]mono | [10 0 1]mono
P | (1 0.1161 11.1610)mono | (1 0.1161 11.1610)mono | (0 1 0)mono
s | 0.2537 | 0.2537 | 0.0295

Table 5.6. Comparison of twin orientation relationships identified in the present work with those reported in the literature.

14M-adaptive [44, 82, 91] | 7M modulated super cell | 7M-average unit cell | Notes
a14M-c14M twin | Type I twin | K1 = (1 0 1)14M

Table 5.8. The published orientation relationships of martensitic and intermartensitic transformation in Ni-Mn-Ga alloys.

Orientation relationship | Austenite (plane, direction) | Martensite (plane, direction)
Bain | (001)A [13]

This work is supported by the Sino-French Cai Yuanpei Program (N°24013QG), the Northeastern University Research Foundation for Excellent Doctor of Philosophy Candidates (No. 200904), and the French State through the program "Investment in the future" operated by the National Research Agency (ANR) and referenced by ANR-11-LABX-0008-01 (LabEx DAMAS). The work presented in this thesis was completed at LEM3 (former LETAM, University of Lorraine, France) and at the Key Laboratory for Anisotropy and Texture of Materials (Northeastern University, China).

seed layer (100 nm Cr or Ag). The corresponding parameters are displayed in Table 3.2. The film thickness was verified with a stylus profiler (DEKTAK 150). SEM (JEOL 6500F) equipped with EDS was used to analyze the microstructure and composition of these films. A home-made X-ray diffraction machine (FSM) with a cobalt cathode source was used to determine the phase constituents of the produced films.
for Compound twins, as in the case for bulk materials. For easy visualization of the through-film-thickness orientations of twin interfaces in the two relative contrast zones, the

With the crystallographic orientations of the NM martensite variants in the six variant groups, the crystallographic orientations of the 7M martensite variants can be calculated based on the published orientation relationship between 7M and NM martensites [71]. Fig. 5.20 presents the (001)mono pole figures of the 7M martensite calculated from the orientations of the NM martensite variants. As shown in Fig. 5.20, there are also six 7M martensite variant groups, each containing 4 distinct variants, i.e. twenty-four 7M martensite variants in total.

Results
Influence of sputtering parameters and post annealing
Influence of sputtering power
Crystallographic orientation of 7M martensite variants

The calculated crystallographic orientations of the 7M martensite variants are consistent with those acquired by EBSD in Section 5.3.3. This indicates that the intermartensitic transformation from 7M martensite to NM martensite follows the orientation relationship determined by Li et al. [71]. In order to compare the pole figures obtained by EBSD with those obtained by XRD (Fig. 4.3) in the present study and in previous studies [START_REF] Tillier | Tuning macro-twinned domain sizes and the b-variants content of the adaptive 14-modulated martensite in epitaxial Ni-Mn-Ga films by co-sputtering[END_REF][START_REF] Kaufmann | Adaptive Modulations of Martensites[END_REF], the (2 0 20)mono, (2 0 20)mono and (0 4 0)mono
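The misorientation angle/axis pairs listed in Tables 5.3 and 5.4 follow directly from the Bunge Euler angles of the variants. Below is a minimal pure-Python sketch of that computation (an illustration, not the thesis code): it omits the monoclinic symmetry operators, so it returns only the single axis/angle solution of Δg = g2·g1ᵀ rather than all crystallographically equivalent ones.

```python
from math import sin, cos, acos, radians, degrees

def bunge_matrix(phi1, Phi, phi2):
    """Orientation matrix g (sample -> crystal frame) from Bunge Euler angles (degrees)."""
    p1, P, p2 = radians(phi1), radians(Phi), radians(phi2)
    c1, s1, cP, sP, c2, s2 = cos(p1), sin(p1), cos(P), sin(P), cos(p2), sin(p2)
    return [
        [c1 * c2 - s1 * s2 * cP,  s1 * c2 + c1 * s2 * cP, s2 * sP],
        [-c1 * s2 - s1 * c2 * cP, -s1 * s2 + c1 * c2 * cP, c2 * sP],
        [s1 * sP,                 -c1 * sP,                cP],
    ]

def misorientation(euler1, euler2):
    """Rotation angle (degrees) and axis taking orientation 1 onto orientation 2.

    The axis sign depends on the active/passive convention chosen;
    crystal symmetry operators are deliberately not applied.
    """
    g1, g2 = bunge_matrix(*euler1), bunge_matrix(*euler2)
    # g1 is orthonormal, so inv(g1) = transpose(g1); dg = g2 . g1^T
    dg = [[sum(g2[i][k] * g1[j][k] for k in range(3)) for j in range(3)]
          for i in range(3)]
    trace = dg[0][0] + dg[1][1] + dg[2][2]
    w = acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    if abs(sin(w)) < 1e-9:           # 0 deg (or exactly 180 deg): axis degenerate
        return degrees(w), None
    n = 2.0 * sin(w)
    axis = [(dg[1][2] - dg[2][1]) / n,
            (dg[2][0] - dg[0][2]) / n,
            (dg[0][1] - dg[1][0]) / n]
    return degrees(w), axis
```

For instance, misorientation((137.94, 46.93, 90.45), (41.68, 133.18, 270.44)) evaluates the V1:V3 pair of Table 5.3; because the symmetry operators are omitted here, the raw solution is not necessarily the tabulated 83.13° one.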
Gérard Bohner, Roméo Bonnefoy, Rémi Cornat, Pascal Gay, Jacques Lecoq, Samuel Manen, Laurent Royer

A very-front-end ADC for the electromagnetic calorimeter of the International Linear Collider

Keywords: ADC, pipeline, CMOS integrated circuit, comparator, differential, amplifier, ILC, CALICE, calorimeter, very-front-end electronics

A 10-bit pipeline Analog-to-Digital Converter (ADC) is introduced in this paper, and the measurements carried out on prototypes produced in a 0.35 µm CMOS technology are presented. This ADC is a building block of the very-front-end electronics dedicated to the electromagnetic calorimeter of the International Linear Collider (ILC). Based on a 1.5-bit-per-stage architecture, it reaches 10-bit precision at a sampling rate of 4 MSamples/s with a consumption of 35 mW. The Integral and Differential Non-Linearity obtained are within ±1 LSB and ±0.6 LSB respectively, and the measured noise is 0.47 LSB at 68% C.L. The performance obtained confirms that the pipeline ADC architecture is suitable for the ECAL readout requirements.

I. INTRODUCTION

The Electromagnetic Calorimeter (ECAL) of the International Linear Collider (ILC) requires high-performance very-front-end readout electronics, which implies an ambitious R&D effort within the CALICE collaboration [START_REF] Brinkmann | Tesla Technical Design Report, part II[END_REF]. This integrated electronics has to process 10^8 channels, each delivering a signal with a 15-bit dynamic range and a precision of 8 bits. Moreover, the minimal cooling available for the embedded readout electronics imposes an ultra-low power budget limited to 25 µW per channel. This goal will be reached thanks to the timing of the ILC, which allows the implementation of power pulsing with a duty ratio of 1%. A key component of the very-front-end electronics is the Analog-to-Digital Converter (ADC), which has to reach a precision of 10 bits.
In order to save die area on the chip and to limit the power consumption, one ADC will be shared by several channels. To fulfill this requirement, an ADC operating at a sampling rate of the order of one MSample/s has been designed. This paper presents the design and the performance of a 35 mW, 4 MSamples/s, 10-bit ADC. After a description of the 1.5-bit-per-stage architecture, the gain-2 amplifier and the comparator are detailed. Measurement results of the engineering samples fabricated in a 0.35 µm CMOS technology are then reported.

II. 1.5-BIT/STAGE PIPELINE ADC ARCHITECTURE

Among the various efficient ADC architectures developed and improved over the last decades, the pipeline architecture is well suited to reach high resolution, speed and dynamic range with relatively low power consumption and low component count. Typically, resolutions in the range of 10-14 bits at sampling frequencies up to 100 MSamples/s are achieved in CMOS technologies with a power below 100 mW [START_REF] Sumanen | A 10-bit 200-MS/s CMOS Parallel Pipeline A/D Converter[END_REF]-[START_REF] Kurose | 55-mW 200-MSPS 10bit Pipeline ADCs for Wireless Receivers[END_REF]. The block diagram of a conventional m-bit pipeline ADC [START_REF] Van De Plassche | Integrated analog to digital and digital to analog converters[END_REF] with n output bits per stage is given in Fig. 1. Each stage consists of a sample-and-hold (S/H), a low-resolution n-bit flash sub-ADC, a subtractor and a residue amplifier. The stage converts its input voltage into the corresponding n-bit code and provides the amplified residual voltage to the next stage. The complete m-bit system consists of m cascaded stages and an Error Correction Logic block which delivers the final digital code. The simplest architecture is designed with a resolution of 1 bit per stage.
In such a basic ADC, each sub-ADC is composed of one comparator, with the reference signal VREF fixed at the middle of the analog input dynamic range of the ADC, and the gain of the residue amplifier is 2. The algorithm for the i-th stage is as follows: if (VIn)i > VREF then bi = 1 and (VIn)i+1 = 2(VIn)i - VREF, else bi = 0 and (VIn)i+1 = 2(VIn)i, where (VIn)i is the input voltage of the i-th stage and bi the output bit of this stage. The accuracy of this type of ADC is limited by three main parameters [START_REF] Cho | Low power, low voltage analog to digital conversion techniques using pipelined architectures[END_REF]-[START_REF] Lee | A 15-bit 1-Ms/s digitally self-calibrated pipeline ADC[END_REF]:

• the interstage gain-2 accuracy, limited by the gain-bandwidth product of the amplifier and the mismatch of its feedback components;
• the offset voltage of the comparators, caused by the mismatch of the components of their input stage;
• the thermal noise, which varies between samples.

The noise contribution of the sampling in each stage, represented by the kT/C expression, is generally predominant, but the noise contribution of a latter stage is effectively attenuated by the gain of the previous stages. An architecture with a resolution of 1.5 bits per stage is a solution to attenuate the contributions of the gain error and of the offset voltage to the non-linearity of the ADC. The Integral Non-Linearity (INL) obtained with algorithmic simulations is displayed in Fig. 2 and Fig. 3. A 2-bit word [b2 b1]i is delivered by each stage i. The corresponding algorithm is given by:

• if (VIn)i < VLowTh then [b2 b1]i = [00] and (VIn)i+1 = 2(VIn)i
• if VLowTh < (VIn)i < VHighTh then [b2 b1]i = [01] and (VIn)i+1 = 2((VIn)i - VLowRef)
• if (VIn)i > VHighTh then [b2 b1]i = [10] and (VIn)i+1 = 2((VIn)i - VHighRef)

III. SCHEMATIC OF ONE STAGE

The global schematic of one ADC stage with a resolution of 2 bits is given in Fig. 4.
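The 1-bit and 1.5-bit algorithms above are easy to exercise numerically. The sketch below is an idealized illustration (not the chip implementation): it uses signed digits d in {-1, 0, +1}, thresholds at ±VREF/4 and the usual overlap-add digital correction; the hypothetical `offset` argument shows why comparator offsets below VREF/4 leave the result intact, as in Fig. 2.

```python
def pipeline_adc(vin, nbits=10, vref=1.0, offset=0.0):
    """Ideal 1.5-bit/stage pipeline ADC with signed-digit error correction.

    vin lies in [-vref, +vref]; `offset` is a common comparator offset (V).
    Returns a signed integer code such that vin ~ code * vref / 2**nbits.
    """
    code, v = 0, vin
    for _ in range(nbits - 1):              # cascade of 1.5-bit stages
        if v > vref / 4 + offset:
            d = 1
        elif v < -vref / 4 + offset:
            d = -1
        else:
            d = 0
        code = 2 * code + d                 # overlap-add digital correction
        v = 2 * v - d * vref                # amplified residue to the next stage
    return 2 * code + (1 if v > 0 else -1)  # final 1-bit flash

def reconstruct(code, nbits=10, vref=1.0):
    """Analog value corresponding to a pipeline code."""
    return code * vref / 2 ** nbits
```

With offset = 0.2 V, i.e. below vref/4 = 0.25 V (the ±250 mV tolerance quoted for the 2 V range), the reconstructed value still agrees with the input to within a few LSB; in this idealized model, vref/4 is exactly the margin that the 1.5-bit redundancy buys.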
In order to reject common-mode noise, a fully differential structure has been adopted. In the very-front-end electronics of the electromagnetic calorimeter of the ILC, the differential mode is particularly relevant considering the presence in the chip of digital electronics, which could induce common-mode noise into the analog part. Since the sensor delivers a common-mode signal, a common-mode to differential-mode conversion is required before the digitization stage. This interface should be inserted as early as possible in the analog processing of the signal, in order to take advantage of the signal-to-noise improvement brought by the differential mode. As represented in Fig. 4, the value of the reference signal added to the (VIn)i input signal is selected by the comparator outputs through switches. The circuit then operates on two clock phases: during the sampling phase, the input signal and the reference signal are summed through the 2C capacitors, while during the hold phase, the summed signal is amplified by a factor 2.

A. The gain-2 amplifier

The gain-2 amplifier is built with a differential amplifier and a capacitive feedback loop. Better matching is obtained with capacitors, so they have been preferred to resistors [START_REF] Hastings | The art of analog layout[END_REF]. This matching is particularly important because it affects the precision of the gain of 2 and, therefore, the linearity of the ADC. Thus, the feedback capacitors must be large enough to minimize both the kT/C thermal noise and the component mismatch, which is proportional to 1/√C. At the same time, a small die area and good dynamic performance at low supply current also have to be achieved. Capacitor values of 300 fF and 600 fF are therefore used. To match these components with the required precision of 0.1%, the capacitor array has been drawn using a common-centroid layout composed of six poly-poly capacitor unit cells of 300 fF each.
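As a back-of-envelope cross-check of the kT/C argument above (my numbers, not the paper's: T = 300 K and a 2.0 V full scale at 10 bits are assumptions):

```python
from math import sqrt

K_BOLTZMANN = 1.380649e-23   # J/K
T = 300.0                    # K, assumed
LSB = 2.0 / 2 ** 10          # 2.0 V full scale, 10 bits -> ~1.95 mV

for c_farads in (300e-15, 600e-15):
    v_noise = sqrt(K_BOLTZMANN * T / c_farads)   # rms kT/C sampling noise
    print(f"C = {c_farads * 1e15:.0f} fF: {v_noise * 1e6:.0f} uV rms "
          f"= {v_noise / LSB:.3f} LSB")
```

Each 300 fF sampler thus contributes roughly 0.06 LSB rms, well below the 0.47 LSB total measured noise, which is consistent with the choice of unit capacitor.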
Dummy switches have been introduced in order to counterbalance the parasitic capacitances introduced by the reset switches.

B. The operational amplifier

As represented in Fig. 5, the operational amplifier is based on a fully differential architecture with rail-to-rail inputs and outputs [START_REF] Gharbiya | Operational Amplifiers rail to rail input stages using complementary differential pairs[END_REF]-[START_REF] Kaiser | Introduction to CMOS analog integrated circuit[END_REF]. It includes a resistive Common Mode FeedBack circuit (CMFB) to control the common-mode output voltage. Table I shows that despite a large gain-bandwidth product of 50 MHz, the consumption is limited to 1.9 mW with a 5 V power supply. The design of the differential amplifier must guarantee a stable behavior of the gain-2 feedback amplifier over process parameter fluctuations. Both the differential output signal and the common-mode output signal must be stable with a total load capacitance evaluated at 1 pF. For that purpose, a pole-zero compensation network is connected at the differential output and its parameters adjusted. Stability simulations then give a phase margin better than 66 degrees for both differential and common-mode signals. Statistical simulations with process parameter fluctuations give standard deviations of 3 degrees and 5 degrees respectively for the differential and common-mode phase margins.

C. The comparator

As represented in Fig. 6, the comparator consists of a latched comparator [START_REF] Crolla | A fast latching current comparator for 12-bit A/D applications[END_REF] followed by a dynamic memory.
This architecture is fully differential with two differential inputs: one for the differential signal (VIn)i from the previous stage and one for the differential reference signal. For simplicity, however, only one differential input is represented in Fig. 6. The very high impedance of the load represented by the latch gives this comparator its very high sensitivity. The high gain is reached when the switch of the latch is open, causing the latch output to switch into a logical state corresponding to the sign of the differential input voltage, namely (VIn)i - VRef. The intrinsic sensitivity of this comparator is so high that the sensitivity is finally dominated by the comparator input noise. The characteristics of the designed comparator are summarized in Table II.

The code occurrence histogram at the code value of 512 is given in Fig. 9. All the codes delivered fall within 512 ± 1 LSB, with about 90% of them at the center value. At a sampling rate of 4 MSamples/s, the dissipation of the chip is 35 mW. With a time of 250 ns to convert one analog signal and considering the number of events stored per channel to be less than 5, the power consumption integrated over the ILC duty cycle of 200 ms is evaluated at 0.22 µW/channel. Assuming that the ON-setting time and the pipeline latency of the conversion are negligible when the ADC is shared by tens of channels, the integrated power dissipation of the ADC is then limited to 1% of the total power available for the very-front-end electronics of the ECAL.

V. CONCLUSION

A 10-bit, 4-MSamples/s, 35-mW CMOS ADC based on a pipeline 1.5-bit/stage architecture has been designed and tested. Its performance confirms that this architecture fulfills the ADC requirements of the ECAL at the ILC. Nevertheless, such results cannot be obtained without the efficient comparator and amplifier presented in this paper.
These building blocks have been optimized to fulfill the offset, sensitivity, speed and stability requirements with low power consumption. Their schematics are kept as simple as possible in order to improve integration and reliability. Bearing in mind that the consumption is a key point, the next step will consist of porting the design to a 3 V supply.

Fig. 1. Conventional m-bit pipeline ADC architecture with n bits delivered per stage.
Fig. 2. 1.5-bit/stage ADC: variation of INL as a function of the offset voltage Voff of the comparators.
Fig. 3.

The Integral Non-Linearity refers to the deviation, in LSB, of each individual output code from the ideal transfer-function value. Fig. 2 shows that, for an input dynamic range of 2 V, the INL of the ADC is not affected by the offset voltage Voff of the comparators up to ±250 mV. Moreover, as reported in Fig. 3, the precision is less sensitive to the gain-2 accuracy with the 1.5-bit/stage architecture compared to a basic 1-bit/stage one. This 1.5-bit/stage pipeline ADC architecture [9] involves two comparators per stage, with separate threshold voltages VLowTh and VHighTh, and two reference voltages VLowRef and VHighRef. A 2-bit word [b2 b1]i is delivered by each stage i.

Fig. 4. Simplified schematic of one ADC stage: 2 comparators give the output bits of the stage and determine the reference signals subtracted from the input signal; the amplifier amplifies the residual voltage by a factor 2 and delivers it to the next stage.
Fig. 5.
Fig. 9. ADC code occurrence histogram at the code centre.

TABLE I. OPEN LOOP AMPLIFIER CHARACTERISTICS
Power supply: 5.0 V
Consumption: 1.9 mW
Gain-Bandwidth: 50 MHz
Differential phase margin: 66 ± 3 degrees @ 68% C.L.
Common-mode phase margin: 66 ± 5 degrees @ 68% C.L.

The main characteristics of the open-loop amplifier are reported in Table I.

Fig. 8. Integral nonlinearity measurement.
IV. MEASUREMENT RESULTS

A 10-bit ADC prototype has been fabricated using the Austriamicrosystems 0.35 µm 2-poly 4-metal CMOS process. The total area of the ADC is 1.35 mm² and the chip is bonded into a JLCC 44-pin package. The circuit is measured with a 5.0 V supply and a differential input swing of 2.0 Vpp, at a clock frequency of 4 MHz. The main performance is reported in Table III. A dynamic input range of 2.0 V is measured, with zero and gain errors of 0.5% and 0.8% of the full scale respectively. The standard deviation of the noise is lower than 0.5 LSB at 68% C.L. The static linearity curves of the 1.5-bit/stage ADC are given in Fig. 7 and Fig. 8. The Differential Non-Linearity (DNL) is displayed in Fig. 7. This error is defined as the difference between an actual step width and the ideal value of one LSB. The DNL measured is
Christophe Mougel, Denis Tagu, Jean-Christophe Simon, Achim Quaiser, Marie Duhamel, Sarah Ben Maamar, Romain, Olivier, Guillaume, Maxime, Kevin H, Kevin T, Alice, Manon, Youn, Tiphaine, Jean, Kaïna, Armand, Hélène, Gorenka, Delphine. Thanks to the lab's PhD students, Diana, Sarah, Mê

Acknowledgments

It is with only moderate relief that I write these acknowledgments before pressing the button that will bring these three years of adventure to a close. Moderate, because with the support and help of everyone I am about to mention, these three years, although intense in work, have been a real pleasure. It was great! I hope I will be lucky enough to find people as kind as these throughout my next adventures. I would like to start by thanking my wonderful supervisors, Cendrine Mony, Anne-Kristel Bittebière and Philippe Vandenkoornhuyse, for their invaluable help. Thank you for your availability, your kindness and, above all, your everyday good humor. I think we made a great team, and even if four stubborn people arguing can make for complicated meetings, I believe we managed to get ideas to emerge from it all. Thank you too, Cendrine, for all that magnesium available in your office (three years of PhD, five of dieting) and for the refreshing, if somewhat damp, field trips! Thank you too, Philippe, for your good humor and for the beers of scientific inspiration. Thank you too, Anne-Kristel, for your precious advice (even 700 km away you supported me a great deal), notably thanks to your bionic eyes, able to spot the slightest change of font or misaligned item in a slide.

Summary

For several years now, the scientific community has been sounding the alarm about the increased risk of food crises caused by climatic events that are becoming more frequent and more extreme (Battisti et al., 2009).
Global agricultural productivity is thus decreasing even as, in parallel, the human population keeps growing steadily, with an estimated 10 billion people by 2100 (Lutz et al., 2001). To meet such a demand for food, agricultural production will have to nearly double [START_REF] Cirera | Income distribution trends and future food demand[END_REF]. To increase production, the required agricultural land will largely be obtained by converting natural ecosystems (forests) into cropland; the conversion of 10^9 hectares into agrosystems by 2050 has been projected [START_REF] Tilman | Forecasting agriculturally driven global environmental change[END_REF], inducing a loss of biodiversity and of ecosystem functions and considerably affecting the amount of CO2 emitted to the atmosphere. The challenge for agriculture during this century is therefore to produce more efficiently and sustainably in order to feed the planet while preserving natural ecosystems as much as possible. In agriculture, local environmental constraints have been circumvented by the use of inputs in crops (fertilizers, pesticides, etc.). However, the availability of the resources needed to produce these fertilizers is limited, which raises the question of the sustainability of conventional agriculture (Duhamel & Vandenkoornhuyse, 2013).
Over recent decades, symbiotic communities of bacteria, fungi and Archaea have been recognized as major determinants of ecosystem functions at the global scale [START_REF] Van Der Heijden | Mycorrhizal fungal diversity determines plant biodiversity, ecosystem variability and productivity[END_REF][START_REF] Urich | A rchaea predominate among ammonia-oxidizing prokaryotes in soils[END_REF][START_REF] Falkowski | The microbial engines that drive Earth's biogeochemical cycles[END_REF] (van der Heijden et al., 2008), notably through the stress-resistance and resource-acquisition functions they provide. Many researchers have thus begun to recognize the potential of plant-associated microorganisms for resolving the dilemma between agricultural productivity and the maintenance of biodiversity. The next step for tomorrow's agriculture would therefore be to take the microbial communities of crops into account in order to optimize the functions they perform. To reach this goal of using microbial communities for sustainable agriculture, it is necessary to understand the rules that govern the functioning of these communities. This thesis aims to determine the assembly rules of plant-associated microbial communities and their influence on the phenotype of the host plant. It is organized into three chapters, each composed of articles in preparation, submitted or published.

Chapter 1: Ecological and evolutionary consequences of mutualist-induced plasticity

In natura, plants are colonized by a great diversity of microorganisms inside and outside their tissues. These microorganisms constitute their microbiota.
This microbiota provides its host with key functions such as resource acquisition or resistance to biotic and abiotic stresses (Friesen et al., 2011; Müller et al., 2016), thereby influencing every aspect of a plant's life, from establishment to growth and reproduction (Müller et al., 2016). The composition of the plant microbiota is determined by a wide range of environmental factors, such as soil properties including pH and the availability of nutrients and water [START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF][START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014). Indeed, plant-associated microorganisms are mostly recruited from the soil. Because of their sessile (i.e. immobile) lifestyle, plants must therefore cope with the heterogeneous abundance of microorganisms in the soil. Plants have thus developed various mechanisms to buffer the effects of environmental constraints, such as plastic modifications (i.e. the production of several phenotypes from a single genotype) (Bradshaw, 1965). These modifications are classically considered to be determined by the organism's genome, yet epigenetic mechanisms (which regulate genome expression) and symbiotic microorganisms have repeatedly been identified as sources of phenotypic variation (Richards, 2006; Friesen et al., 2011; Holeski et al., 2012; Vandenkoornhuyse et al., 2015). In the first chapter of this thesis, we studied the influence of these two sources of phenotypic variation on plant phenotype and adaptation.
The literature review presented in this thesis (Article I) highlighted the importance of the microbiota and of epigenetics as sources of phenotypic plasticity. Recent advances in epigenetics and on plant symbionts have demonstrated their major impact on plant survival, development and reproduction. These sources of plasticity allow plants to adapt to environmental constraints over short time scales (within the plant's lifetime) as well as long ones (i.e. evolutionary time). Moreover, the phenotypic variations thus produced can be integrated into the genome by genetic accommodation and thereby influence evolutionary trajectories. More generally, this review also indicates that interactions between the microbiota and epigenetics may exist and should therefore be studied alongside the respective effects of these mechanisms on plant survival and evolution. In the second article presented in this chapter (Article II), we examined the role of microorganisms in the response of clonal plants to environmental heterogeneity. Clonal plants are particularly plastic organisms. Indeed, their network structure (i.e. connected clonal individuals) allows propagation in space as well as the sharing of resources and information (i.e. physiological integration; Oborny et al., 2001). Numerous studies have shown that this network allows the development of two main types of plastic response to resource heterogeneity: foraging (i.e. exploring space for resources) and specialization (i.e. division of labor for resource exploitation) (Slade & Hutchings, 1987; Dong, 1993; Hutchings & de Kroon, 1994; Birch & Hutchings, 1994).
By extension, the presence of symbiotic microorganisms in the soil could induce the same types of plastic response, given their key role for the plant (see the reviews by Friesen et al., 2011; Müller et al., 2016). We tested this hypothesis experimentally by manipulating the heterogeneity of presence of arbuscular mycorrhizal (AM) fungi and measuring the foraging and specialization plastic-response traits of the clonal plant Glechoma hederacea. In our experiments, we demonstrated that 1) Glechoma hederacea produces no specialization response and only a limited foraging response to AM fungal heterogeneity, and 2) the architectural traits involved in plant foraging responses are not affected by the AM fungal species tested, unlike the resource-allocation traits linked to specialization responses. The fact that G. hederacea responds little or not at all to the heterogeneous distribution of AM fungi suggests that plants could homogenize this distribution by transmitting the fungi from one ramet to another within the network. This hypothesis is supported by scanning electron microscopy observations revealing the presence of hyphae at the surface of stolon samples, and by cross-section observations showing fungus-like structures colonizing stolon cells.

Chapter 2: The heritability of the plant microbiota, toward the meta-holobiont concept

In this chapter, we experimentally tested the hypothesis of microorganism transmission between clonal ramets. This chapter also aims to assess the consequences of such a transmission mechanism for plant performance and for our understanding of plant functioning and evolution.
Previous studies have demonstrated the transmission of endophytic symbionts through colonization of the host plant's seeds. This type of transmission has mainly been described for the fungus Neotyphodium coenocephalum, which colonizes seeds and confers resistance to certain environmental stresses (Clay & Schardl, 2002; Selosse et al., 2004). This example is the only one described in the literature showing vertical transmission of microorganisms between plant generations, and it involves the transfer of only a single fungal species at a specific moment of the host's life. As highlighted in the first chapter, clonal plants could potentially transmit microorganisms, in addition to resources and information, to their clonal offspring, constituting an additional level of physiological integration. In the first article presented in this chapter (Article III), we sought to demonstrate experimentally the existence of microorganism transmission between generations of clonal plants. In our experiments, we detected the presence of archaea, bacteria and fungi in the endosphere of G. hederacea mother ramets. Some of these microorganisms (fungi and bacteria) were detected in the roots and stolons of the daughter ramets, thus demonstrating the heritability of part of the plant microbiota to its offspring. Moreover, the transmitted communities were similar among daughter ramets, although poorer in OTU richness than the assemblage present in the roots of the initial mother ramet.
This result demonstrates the existence of a filtering of microorganisms during transmission, through a reduction in richness and a homogenization of the transmitted communities, constituting a "core microbiota". From a theoretical point of view, transmitting the microbiota to offspring is advantageous because it ensures the presence of beneficial symbionts and thereby the quality of the offspring's environment (Wilkinson & Sherratt, 2001). Our results open perspectives on the capacity of clonal plants to preferentially select and transmit specific microorganisms. However, the modalities of this filtering are not yet known and therefore represent an important research perspective of this thesis. The transmitted microorganisms could be filtered according to their dispersal abilities (i.e. their capacity to colonize internodes or stolons) or according to the functions they perform for the mother ramet (for example, resource acquisition or resistance to a stress). This vertical transmission of microorganisms constitutes a continuation of the partnership between the plant and its symbionts (Zilber-Rosenberg & Rosenberg, 2008). Recently, the understanding of host-symbiont interactions has moved toward a holistic approach to the host and its symbionts. The holobiont theory provides a theoretical framework for the study of host-symbiont interactions (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016). In the hologenome (i.e. the ensemble formed by the host genome and those of its symbionts), a microorganism can be likened to a gene in a genome. Just as genes are inherited during sexual reproduction, the heritability of microorganisms between host generations is thus a key parameter of hologenome evolution.
In this context, the mechanism demonstrated here places plants within the range of organisms that can be considered and apprehended as holobionts. Moreover, the network structure of clonal plants suggests an additional level of complexity in holobiont assembly, since within such a network holobionts can exchange or share part of their microbiota. In the second article of this chapter (Article IV), we propose an extension of the holobiont concept to clonal organisms organized in networks: the meta-holobiont. Beyond the theoretical foundations for understanding clonal networks as meta-holobionts, we propose in this article to transpose bodies of knowledge from the ecological theories of meta-communities and of ecological networks. The meta-community framework should make it possible to apprehend the impact of microorganism transmission on the assembly and survival of microbial communities and on the plants' management of microorganisms of interest. The network ecology framework should provide perspectives on the optimization of the resilience and performance of the clonal network as a function of its structure and of the microorganism transmission that depends on this structure.

Chapter 3: Importance of the plant community context for the assembly of a host plant's microbiota

The composition of soil microbial communities depends on various environmental factors, including for example the soil type and its properties (e.g. pH, moisture, nutrient concentration) (Berg & Smalla, 2009; Lundberg et al., 2012; Bulgarelli et al., 2012; Shakya et al., 2013; Schreiter et al., 2014).
Because plants are immobile, this pool of microorganisms available for recruitment largely determines their microbiota (Lundberg et al., 2012; Bulgarelli et al., 2012). Beyond soil type and properties, plants themselves can also modify this pool of microorganisms through different mechanisms. Plants are able to preferentially recruit certain microorganisms from the soil (Vandenkoornhuyse et al., 2015) and to promote the most beneficial symbionts (Bever et al., 2009; Kiers et al., 2011). Moreover, plant root exudates can increase or decrease the abundance of specific microorganisms depending on the plant species considered (for a review, see Berendsen et al., 2012). Plants are therefore expected to modify the composition of soil microbial communities locally. Following this hypothesis, the neighborhood of a given plant (i.e. the identity and abundance of neighboring plants) should locally influence soil microbial communities and hence the microorganisms available for recruitment by other plants of the community. The neighborhood of a given plant should thus influence the assembly of the focal plant's microbiota. However, this potential role of the plant community context in microbiota assembly has not been studied experimentally.
Furthermore, different plant species (Oh et al., 2012; Bonito et al., 2014) and ecotypes (Bulgarelli et al., 2012; Lundberg et al., 2012) harbor specific microbiota, which suggests that plants can promote different microorganisms and therefore have contrasting effects on soil communities. Considering the wide diversity of functions provided by plant symbionts (Friesen et al., 2011; Müller et al., 2016) and their effects on plant phenotype and performance highlighted in this thesis (Articles I and II), changes in microbiota composition should affect the performance of the focal plant. In this chapter, which consists of one article in preparation, we experimentally tested the hypothesis of an effect of the plant community context on the assembly of a host plant's microbiota. We used a mesocosm design manipulating grassland plant communities. We sampled the soil fungal assemblages of these mesocosms using Medicago truncatula as a trap plant and mapped plant cover over several years. In our experiment (Article V), we detected a significant effect of the abundance of neighboring plant species on the composition of the microbiota colonizing M. truncatula roots. This effect depended on the neighboring plant species, with some plants having a positive and others a negative effect on the richness and evenness of the fungal assemblages colonizing M. truncatula.
We thus demonstrated that a specific plant can filter and determine the local fungal assemblages present and available for recruitment in the soil, as proposed by Valyi et al. (2016). Our results also indicate that this plant neighborhood effect is not limited to AM fungi but influences the whole fungal assemblage. The influence of the plant neighborhood on the root microbiota occurs at a fine scale (i.e. from 5 to 25 cm). Moreover, both past and present plant cover impacted root fungal assemblages, suggesting that plants can leave a lasting fingerprint on the composition of soil fungal assemblages. Furthermore, we demonstrated that changes in the composition of root-colonizing fungal assemblages ultimately impact the performance of the host plant M. truncatula: richer or more even fungal assemblages induced better host-plant performance. Taken together, these results demonstrate the existence of plant-plant interactions mediated by fungi, ultimately impacting host-plant fitness, and open perspectives on manipulating the plant root microbiota via its neighborhood with the aim of improving productivity. This work has thus improved our understanding of the modalities of microbiota assembly and demonstrates the existence of a fingerprint of the plant neighborhood on the composition of microbial communities. A second major result is the demonstration of the heritability of a fraction of this microbiota, probably a transmitted microbiota essential to the development of its host. This work offers new research opportunities that should lead to a better understanding of the plant microbiota, its assembly, its functions and its influence on the host phenotype.
Beyond the conceptual reflections prompted by this thesis, this work raises, from an applied point of view, the question of the role of the plant neighborhood in agriculture as a contributor to the diversity of the reservoir of symbiotic microorganisms. A major perspective of this work is thus the development of polycultures in which plant species induce facilitation effects through the promotion of microbiota diversity.

Table of contents

GENERAL INTRODUCTION
General context of the thesis
I. Scientific context
II. Structure of the PhD thesis
Literature review
1. Microbiota composition
1.1 Plants and microorganisms, an ubiquitous alliance
1.3 The root microbiota: Diversity and composition
2. Endophytes induced functions and phenotypic modifications
2.1 Resources acquisition
2.2 Resistance to environmental constraints
3. Plant microbiota assembly
3.1 Microbiota recruitment
3.1.1 The soil as a microbial reservoir
3.1.2 Rhizosphere
3.1.3 Endosphere
3.2 Controls of the plant over its microbiota
3.3 Vertical transmission or heritability
4. The plant is a "holobiont"
5. Objectives of the thesis
Chapter I: Consequences of mutualist-induced plasticity
I.1 Introduction
I.2 Article I: Epigenetic mechanisms and microbiota as a toolbox for plant phenotypic adjustment to environment
I.3 Article II: AM fungi patchiness and the clonal growth of Glechoma hederacea in heterogeneous environments
Chapter II: The heritability of the plant microbiota, toward the meta-holobiont concept
II.1 Introduction
Scientific context
Objectives of the chapter
Methods
Main results
II.2 Article III: A microorganisms journey between plant generations
II.3 Article IV: Introduction of the metaholobiont concept for clonal plants
Chapter III: Importance of the plant community context for the individual plant microbiota assembly
III.1 Introduction
Scientific Context
Objectives of the chapter

General context of the thesis

I. Scientific context

have raised the alarm on the risks of food crises becoming more and more frequent (Battisti et al., 2009) due to extreme climatic events or the spread of pests. Agricultural productivity is thus expected to decline worldwide while, in parallel, the human population is continuously increasing and is expected to reach 10 billion people before 2100 (Lutz et al., 2001). By this time, it is estimated that agricultural production would need to almost double to meet world demand (Cirera & Masset, 2010).
This need for space will be met by the transformation of natural ecosystems into cropland; according to current estimations, about 10⁹ ha are likely to be lost by 2050 (Tilman et al., 2001), constituting a massive loss of biodiversity and ecosystem functions. The challenge for agriculture during this century is thus to produce enough food to support world population expansion while limiting collateral damage to the environment. In agriculture, local environmental constraints have been overcome by the use of additives in cultures (fertilizers, insecticides, etc.). However, the phosphorus used in fertilizers relies on high-quality rock phosphate, which is a finite resource, and major agricultural regions such as India, America and Europe are already dependent on P imports (Duhamel & Vandenkoornhuyse, 2013). The use of inputs has thus failed to solve the current challenges and has even impoverished soils. In recent decades, the symbiotic communities of bacteria, archaea and fungi have been recognized as major drivers of global ecosystem functions (van der Heijden et al., 1998; Leininger et al., 2006; Falkowski et al., 2008; van der Heijden et al., 2008). Researchers have thus started to recognize the potential of plant-associated microorganisms to solve the agricultural productivity-biodiversity loss conundrum. Together with root nodules, AM fungi are now recognized as one of the important symbioses that help feed the world (Marx et al., 2004, in "The roots of plant-microbe collaborations") and are promising for the development of organic farming (Gosling et al., 2006).
Industry and farmers have even started to conduct experiments using microorganism inoculation to enhance crop resistance to pathogens or survival on deprived soils (de Vrieze, 2015, Science, "The littlest farmhands"). The next step will be to engineer crop microbiota in order to optimize the functional characteristics they provide. For example, flowering time and biomass are suitable candidate plant traits for the search for microorganism-mediated services. In order to fulfill this goal, scientists will have to describe precisely the links between microbiota assembly and functioning and the resulting plant phenotype. In natura, plants are colonized by a high diversity of microorganisms both inside and outside their tissues, constituting the microbiota. These microorganisms affect plant health and growth in a beneficial, harmful or neutral way. Because of their sessile (i.e. immobile) lifestyle, plants rely on these microorganisms for many ecological needs. Indeed, plant-associated microorganisms provide important ecological functions to the plant, such as the acquisition of resources or resistance against biotic and abiotic stresses (Friesen et al., 2011; Müller et al., 2016), all along the plant life cycle (development, survival and reproduction) (Müller et al., 2016). Considering the importance of the microbiota, the last decade has seen significant advances in the description of the factors shaping its assembly. Improvements in sequencing technologies allow the composition of the microbiota to be described ever more finely.
The composition and dynamics of this microbiota are driven by a large range of environmental factors, such as soil properties comprising pH, nutrient and water availability (Berg & Smalla, 2009; Bulgarelli et al., 2012; Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014). In addition, the complexity and versatility of plant-microorganism associations makes it difficult to disentangle all the rules leading to a given microbiota composition. Determining the assembly rules of the plant microbiota is thus a current conundrum in ecology. Because they are immobile, plants have to cope with environmental heterogeneity, such as the heterogeneous presence of microorganisms in soil. Plants classically respond to environmental heterogeneity through phenotypic plasticity (i.e. the production of different phenotypes from a single genotype; Bradshaw, 1965). In nature, clonal plants represent up to 70% of plant species in temperate ecosystems (van Groenendael & de Kroon, 1990). Clonal plants are organized as a network of ramets (i.e. clonal individuals consisting of leaves and roots) connected by above- or below-ground modified stems (i.e. stolons and rhizomes, respectively) (see Harper, 1977 for a description of clonal structure). This network allows the sharing of resources and information (i.e. physiological integration, Oborny et al., 2001) and thus the production of plastic responses at the scale of the network. These plants are able to modify the structure of the network to explore the environment (i.e. foraging), ensuring resource acquisition (e.g. light or nutrients; Slade & Hutchings, 1987; Dong, 1993; Hutchings & de Kroon, 1994; Birch & Hutchings, 1994).
Considering that microorganisms can have large effects on plant morphology (see for review Friesen et al., 2011; Müller et al., 2016), they may alter the plant's ability to produce such plastic responses, although this has not been demonstrated. In addition, because microorganisms provide key functions to the plant, the expectation is that plants should develop plastic responses to ensure the availability of beneficial symbionts, as they do for resources. Despite the ubiquity of clonal plants, the possibility that they forage for microorganisms to construct their microbiota has never been investigated. A major mechanism ensuring the availability of beneficial microorganisms for macroorganisms is the transmission of microorganisms between individuals (investigated in Chapter 2). Two kinds of transmission have been theorized: vertical (i.e. direct transmission from parents to offspring) and pseudo-vertical (i.e. transmission through sharing the same environment) (Wilkinson, 1997). In plants, vertical transmission has only been evidenced through seed colonization, and the best-known example is the transmission of the stress-protective endophyte Neotyphodium coenocephalum to the descendants of several grass species (Clay & Schardl, 2002; Selosse et al., 2004). In clonal plants, the progeny is connected to the parent plant within the clonal network. Stuefer et al. (2004) suggested that in these plants, microorganisms could be transmitted in addition to resources and information. Although it has been suggested and could represent a major factor structuring microbiota assembly, such transmission of microorganisms within the clonal network has never been investigated so far. In grassland ecosystems, many plant species coexist and interact in competitive or facilitative ways.
A large number of studies, mainly focusing on Arbuscular Mycorrhizal Fungi (AM fungi hereafter), have investigated how the plant community can be influenced by the diversity of soil fungi (Grime et al., 1987; van der Heijden et al., 1998a; Klironomos et al., 2000; Bever et al., 2001). Reciprocally, microcosm studies based on a focal plant design have demonstrated that the composition of the surrounding plants influences the fungal microbiota associated with the focal plant (Johnson et al., 2004). Such microcosm-based studies have provided insights into the possible effect of the community context on microbiota assembly. However, microcosms are oversimplifications of the real plant community context, where many species co-exist in a spatiotemporally dynamic system. In multispecific plant communities, Hausmann & Hawkes (2009) showed that the plant community composition influenced the composition of a focal plant's microbiota. As plants harbor specific microbiota (Hardoim et al., 2008; Redford et al., 2010; Bulgarelli et al., 2012; Lundberg et al., 2012), we can expect a plant to leave a specific fingerprint on the soil pool of microorganisms. A plant's neighborhood could thus determine its microbiota assembly (investigated in Chapter 3).
Nevertheless, the spatial and temporal scales of this influence are still unclear, although they could determine the fingerprint of the plant community on the soil pool of fungi. In addition, the above-described studies mainly focused on a particular type of plant symbiont, the AM fungi. Although many other plant endophytes are known to provide important functions (see Introduction section), the influence of the plant community on bacteria, fungi and archaea has not yet been described. My PhD thesis aimed to address hypotheses related to these three current frontiers of research in Ecology and Plant Science: (1) the importance of plant-microorganism interactions for plant phenotype and plasticity, (2) the understanding of the vertical heritability of the plant microbiota, and (3) the ability of plants to manipulate the local symbiotic compartment and thus to leave a fingerprint on the pool of microorganisms. The work I did also aimed to question different theoretical concepts of ecology and evolution and to provide new perspectives regarding these concepts.

II. Structure of the PhD thesis

This thesis begins with a literature review of the existing knowledge on the composition, assembly rules and dynamics of the plant microbiota and on the links between this microbiota and plant phenotypes. This state of the art aims at delimiting the current knowledge and gaps in our understanding of the plant microbiota and at identifying the objectives of this thesis work. The second part of this work consists of articles presented in three chapters addressing the questions raised by the literature review. Each of these chapters begins with a short presentation of the scientific context and the experimental procedure, as well as a brief summary of the main results. The results are then presented and discussed in the form of articles published, submitted or in preparation for peer-reviewed scientific journals.

Literature review
1. Microbiota composition

1.1 Plants and microorganisms, an ubiquitous alliance

Plants and microorganisms have long been considered as separate but interacting organisms. This conception, dividing two fields of research on plants and on microorganisms, has progressively been replaced by an integrated approach to plant-associated microorganisms. This understanding of the plant has slowly evolved through the numerous studies describing how plants are systematically associated with microorganisms. For example, all plant species that were investigated were colonized by leaf endophytes (Arnold et al., 2003; Schulz & Boyle, 2005; Albrectsen et al., 2010; Gilbert et al., 2010), and parallel studies also highlighted the difficulty of growing transplants of different species without bacteria and fungi (Hardoim et al., 2008). Indeed, microorganisms colonize every known plant species and are found in various parts of these plants: roots, stems, xylem vessels, apoplast, seeds and nodules (Rosenblueth & Martinez-Romero, 2006; Figure 1). Some of these microorganisms live within or on the surface of plant organs, in the endosphere or episphere, and are called endophytes and epiphytes, respectively. The endosphere and episphere constitute the plant microbiota. On the surface of leaves, prokaryotes are found at densities of 10⁶ to 10⁷ cells per cm² (Lindow & Brandl, 2003), and lab-based estimates of endophytic bacterial populations range from 10⁷ to 10¹⁰ cells per gram of tissue (Hardoim et al., 2008).
Among the endophytic microorganisms colonizing plants, Arbuscular Mycorrhizal Fungi (AM fungi hereafter) have been found in 80% of all terrestrial plants (Bonfante & Anca, 2009). Evidence of colonization of plant roots by AM fungi has been found in fossils dating from the early Devonian and Ordovician, suggesting that these fungi were already associated with plants over 400 to 460 million years ago (Remy et al., 1994; Redecker et al., 2000). AM fungi are suggested to be at the origin of major evolutionary events such as the colonization of land and the evolution and diversification of plant phototrophs (Selosse & Le Tacon, 1998; Heckman et al., 2001; Brundrett, 2002; Bonfante & Genre, 2010). All these elements converge towards a consideration of the plant as an obligate host of numerous microorganisms. In the light of these observations, "[…] a plant that is completely free of microorganisms represents an exotic exception, rather than the - biologically relevant - rule […]" (Partida-Martinez & Heil, 2011). Symbiotic interactions between plants and microorganisms can be cooperative, neutral or antagonistic, forming a continuum of interactions ranging from mutualism to parasitism (an interaction being parasitic if it is disadvantageous for one of the organisms and mutualistic when beneficial for both partners; e.g. Rico-Gray, 2001).
The level of intimacy (physical association) is likely to be variable among symbionts, and interactions are considered symbioses when the physical association is strong. This is the case for many known associations in nature, such as corals with the algae Symbiodinium (along with other symbionts; Rosenberg et al., 2007), lichens (an association between fungi and photobionts; Spribille et al., 2016), legumes with Rhizobia, and more generally plants with Arbuscular Mycorrhizal Fungi. Such symbioses have long been considered beneficial to both organisms. Nevertheless, symbiont behavior cannot be considered as binary (i.e. either parasitic or mutualistic) but rather as more or less beneficial. This continuum in symbiont behavior has been demonstrated in different plant symbioses. For example, different AM fungal isolates can provide quantitatively different amounts of phosphorus in exchange for a given quantity of carbohydrates (Kiers et al., 2011), with cheaters racketeering their host whereas cooperators display 'fair-trade' behavior (see section 3.2.3 for more details). Furthermore, a given symbiont can behave as a parasite or a mutualist depending on the ecological context. Hiruma et al. (2016) demonstrated in an elegant experimental study that the endophyte Colletotrichum tofieldiae transfers phosphorus to shoots of Arabidopsis thaliana via root-associated hyphae only under phosphorus-deficient conditions. This shift in C. tofieldiae behavior was associated with a phosphate starvation response of the plant, indicating that the behavior of the symbiont was dependent on the nutrient status of the host.
In addition, symbiosis generally involves more than two partners, which can either be parasites or mutualists (Johnson et al., 1997; Saikkonen et al., 1998) and can colonize different parts of the plant at the same time. In the above-described experiment, C. tofieldiae was shown to first colonize the roots and then spread to the leaves (Hiruma et al., 2016).

1.3 The root microbiota: Diversity and composition

The diversity of microorganisms associated with plant roots is estimated to be in the order of tens of thousands of species (Berendsen et al., 2012). Tremendous advances in the approaches used to describe the microorganisms living in association with plants have been made in recent decades. Approaches based on the cultivation of organisms were for a long time the only methods available to characterize microbial communities (Vartoukian et al., 2010; Margesin & Miteva, 2011; Rosling et al., 2011). However, since most microorganisms, and especially fungi, are not cultivable (Epstein, 2013), the diversity of plant-associated microorganisms has long been underestimated. Estimations of plant-associated fungal diversity for a long time relied on spore counts and morphology (see for example Burrows & Pfleger, 2002; Landis et al., 2005).
Molecular methods developed in recent decades, such as Sanger sequencing of cloned products, PCR (polymerase chain reaction) amplicons or PCR-RFLP (restriction fragment length polymorphism), revolutionized the experimental approaches and were rapidly adopted for microorganism identification and classification (Balint et al., 2016). This ability to detect DNA rapidly and at low cost provided information about the diversity of organisms in an environmental sample and led to numerous descriptions of the wide diversity of microorganisms associated with plants. Thanks to these studies, we now know that plants harbor an extreme diversity of archaea (Edwards et al., 2015), bacteria (Bulgarelli et al., 2012; Lundberg et al., 2012) and fungi (Vandenkoornhuyse et al., 2002). These studies extended our knowledge of plant-associated microorganisms and led to an upward revision of their diversity. In 2002, Vandenkoornhuyse et al. demonstrated that the diversity of fungi colonizing plant roots was much greater than previously believed and included non-mycorrhizal fungi of the phyla Basidiomycota and Ascomycota.
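To make the amplicon-based diversity estimates mentioned above more concrete, here is a toy sketch of the greedy, threshold-based clustering commonly used to group sequences into operational taxonomic units (OTUs) at 97% identity. The sequences, the simple position-wise identity measure and all names are illustrative assumptions; real pipelines (alignment, chimera removal, abundance sorting) are considerably more involved.

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otu_clustering(seqs, threshold=0.97):
    """Assign each sequence to the first OTU centroid it matches at or
    above the threshold; otherwise the sequence founds a new OTU."""
    centroids = []   # one representative sequence per OTU
    labels = []      # OTU index assigned to each input sequence
    for s in seqs:
        for i, c in enumerate(centroids):
            if identity(s, c) >= threshold:
                labels.append(i)
                break
        else:
            centroids.append(s)
            labels.append(len(centroids) - 1)
    return centroids, labels

# Toy 40-bp reads: one mismatch = 97.5% identity (same OTU),
# two mismatches = 95% identity (new OTU)
base = "ACGT" * 10
reads = [base, "G" + base[1:], "GG" + base[2:]]
centroids, labels = greedy_otu_clustering(reads)
print(len(centroids), labels)
```

With these three toy reads, the first two fall into one OTU and the third founds a second one, so OTU richness here is 2. The 97% cut-off is the conventional proxy for species-level grouping in the amplicon studies cited above.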
The composition of this microbiota varies significantly between plant species [START_REF] Hardoim | Properties of bacterial endophytes and their proposed role in plant growth[END_REF][START_REF] Redford | The ecology of the phyllosphere: geographic and phylogenetic variability in the distribution of bacteria on tree leaves[END_REF][START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012) and also depends on the genotype of the plant (Inceoğlu et al., 2010, 2011), but to a limited extent [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; see section 3.2.4 for the host genetics effect). Although the composition of the plant microbiota has been extensively described for different species of bacteria, with estimations of the relative abundances and species richness in different organs (Figure 2), such knowledge is lacking for fungal and archaeal communities, for which only a few seminal papers have been published (Vandenkoornhuyse et al., 2002a; Edwards et al., 2015; Coleman-Derr et al., 2016). Within the fungal compartment, the most studied organisms are root-associated mycorrhizae that are either endomycorrhizal (within the plant cells) or ectomycorrhizal (outside the cell walls), forming hyphae around the roots. All AM fungi belong to the phylum Glomeromycota, whereas the ectomycorrhizal fungi are spread among the Ascomycota and Basidiomycota phyla (Arnold, 2007; [START_REF] Rodriguez | Fungal endophytes: diversity and functional roles[END_REF][START_REF] Rudgers | A fungus among us: broad patterns of endophyte distribution in the grasses[END_REF]). In roots and shoots, endophytic Basidiomycota and Ascomycota also exist in a non-mycorrhizal form.
Recent studies enlarging the fungal compartment to groups other than mycorrhizae detected over 3000 OTUs in total in the soil, episphere and endosphere of Agave species, with a third of these OTUs (1007 OTUs belonging to nine orders) being found in the endosphere (Coleman-Derr et al., 2016).

2 Endophyte-induced functions and phenotypic modifications

Endophytic microorganisms are involved in a variety of phenotypic changes in plants. These changes have been widely studied, especially in plants of agricultural importance. Because of their sessile lifestyle, plants have to cope with the local environmental conditions. Abiotic conditions especially, such as light and water, are spatially variable, so plants must cope with this heterogeneity [START_REF] Hodge | The plastic plant: root responses to heterogeneous supplies of nutrients[END_REF]. There are numerous reports of fungal and bacterial symbionts conferring tolerance to a variety of stresses to host plants, as well as other benefits (e.g. Friesen et al., 2011; Müller et al., 2016). The range of functions provided by root microorganisms is reviewed in the following section.

2.1 Resources acquisition

Microorganisms living in association with plants improve the acquisition of different resources and, more especially, allow the acquisition of otherwise inaccessible ones. Leaf epiphytic cyanobacteria, for example, transfer atmospherically fixed nitrogen to plants, which can represent 10 to 20% of the leaf nitrogen [START_REF] Bentley | Direct transfer of newly-fixed nitrogen from free-living epiphyllous microorganisms to their host plant[END_REF]. This ability to fix atmospheric nitrogen is displayed by at least six different bacterial phyla, the most common being Proteobacteria, and by several archaeal lineages [START_REF] Martinez-Romero | Dinitrogen-fixing prokaryotes[END_REF] (Friesen et al., 2011).
Nitrogen acquisition also occurs within tree roots in boreal forests thanks to ectomycorrhizal fungi (Lindahl et al., 2007). In parallel, other nutrients can be more easily obtained through the help of endophytic organisms, and a given organism can provide multiple resources. For instance, the arbuscular mycorrhizal fungi living in association with plants supply these plants with water, phosphorus, nitrogen, and trace elements [START_REF] Smith | Mycorrhizal Symbiosis[END_REF] (Smith et al., 2009). In the arbuscular mycorrhizal symbiosis, mineral nutrient uptake can be 5 and 25 times higher for nitrogen and phosphorus, respectively, in mycorrhized as compared to non-mycorrhized roots (Van Der Heijden et al., 2003; Vogelsang et al., 2006). AM fungi acquire these resources more efficiently because their hyphae can access narrower soil pores and increase the uptake of immobile resources (especially inorganic phosphate) by acquiring nutrients beyond the depletion zone of the root and by stimulating the production of exudates that release immobile soil nutrients (Maiquetía et al., 2009; [START_REF] Courty | The role of ectomycorrhizal communities in forest ecosystem processes: new perspectives and emerging concepts[END_REF][START_REF] Cairney | Ectomycorrhizal fungi: the symbiotic route to the root for phosphorus in forest soils[END_REF]). In addition, AM fungi are able to acquire both organic and inorganic nitrogen, which is not the case for plants (see [START_REF] Hodge | Arbuscular mycorrhiza and nitrogen: implications for individual plants through to ecosystems[END_REF] for a review of nitrogen uptake by AM fungi). One example is the nitrogen in soil organic matter released by hydrolytic and oxidative enzymes produced by ericoid mycorrhizae.
This enhanced acquisition of resources not only directly affects the individual plant's fitness but also its phenotype and competitive interactions with other plants (see section II 2.2.1 for examples; for a review see [START_REF] Hodge | Arbuscular mycorrhiza and nitrogen: implications for individual plants through to ecosystems[END_REF]). Indeed, a phenotypic consequence for the plant of the more efficient nutrient uptake of mycorrhizae is a reduced number of fine roots, together with a lower root:shoot ratio and specific root length (Smith et al., 2009). Improved resource acquisition also allows the plant to cope more efficiently with environmental constraints.

2.2 Resistance to environmental constraints

2.2.1 Abiotic stresses

Numerous studies indicate that plant adaptation to stressful conditions may be explained by the fitness benefits conferred by mutualistic fungi (for example the resources acquisition described in section 2.1) [START_REF] Stone | An overview of endophytic microbes: endophytism defined[END_REF][START_REF] Rodriguez | The role of fungal symbioses in the adaptation of plants to high stress environments[END_REF]. In addition to these indirect effects of resource acquisition, fungi and bacteria also provide direct resistance to a large range of stresses through the production of certain compounds. Among the stresses alleviated by plant endophytes, the main ones are salinity, extreme heat, drought and heavy-metal pollutants. Salinity tolerance, for example, can be increased by metabolites such as trehalose produced by bacteria like rhizobia in nodules (Lopez et al., 2008). Such improved tolerances are not only provided by bacteria: the fungal endophyte Curvularia found on geothermal soils in Yellowstone National Park has been shown to increase tolerance to extreme heat (Redman et al., 2002).
Fungal endophytes such as Fusarium culmorum, colonizing the above- and below-ground tissues and seed coats of Leymus mollis, also confer salinity tolerance (Rodriguez et al., 2008). In addition, a given microorganism can at the same time increase resource acquisition and provide resistance to an abiotic stress (independently of its effect on resource acquisition). AM fungi increase stomatal conductance when inoculated into plants under either normal or drought conditions. This increase in stomatal conductance has been linked to the greater drought tolerance of rice and tomato plants inoculated with such fungi [START_REF] Raven | Plant nutrient-acquisition strategies change with soil age[END_REF] (Rodriguez et al., 2008). Following observation of the described tolerance, several microorganisms have been considered as suitable candidates for the bioremediation of polluted soils. This is the case of plant growth-promoting rhizobacteria that elicit tolerance to heavy metals [START_REF] Glick | Phytoremediation: synergistic use of plants and bacteria to clean up the environment[END_REF][START_REF] Zhuang | New advances in plant growth-promoting rhizobacteria for bioremediation[END_REF].

2.2.2 Biotic stresses

In addition to their role in resistance against abiotic stresses, plant-associated microorganisms also mediate plant resistance against biotic constraints, among which the most studied are aggressions by pathogens. Indeed, endophytes are able to secrete defensive chemicals in plant tissues (Arnold et al., 2003; Clay & Schardl, 2002). Different studies have identified defensive chemicals against pathogens produced by a myriad of microorganisms associated with plants. The range of compounds produced by symbionts consists of antimicrobials with direct effects on the pathogen or indirect effects such as a diminution of its pathogenicity.
Compounds with a direct and immediate antibacterial effect include antimicrobial auxin and other phytohormones [START_REF] Morshed | Toxicity of four synthetic plant hormones IAA, NAA, 2,4-D and GA against Artemia salina (Leach)[END_REF]. Compounds with indirect effects include, for example, nonanoic acid produced by the fungus Trichoderma harzianum, which inhibits mycelial growth and spore germination of two pathogens in the tissues of Theobroma cacao (Aneja et al., 2005). Alternatively, microorganisms can also produce compounds such as AHL-degrading lactonases that are able to alter the communication between pathogens and thus prevent the expression of virulence genes (Reading & Sperandio, 2006). In addition, different strains and species can produce different compounds, and each endophyte can itself produce several compounds. Even within a species, each strain can produce multiple antibiotics, as in Bacillus, where these antibiotics have synergistic interactions against pathogens (Haas et al., 2000). Such protection against biotic aggressors can also be mediated by stimulation of the plant immune system. [START_REF] Boller | A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors[END_REF] highlighted mutualist-induced signaling pathways initiated by the flagellin/FLS2 and EF-Tu/EFR recognition receptors, allowing plants to respond to viruses, pests and pathogens. The defensive responses induced by mutualists are not always localized, and given microorganisms, like the fungus Trichoderma, may induce both systemic and localized resistances to a variety of plant pathogens [START_REF] Harman | Trichoderma species-opportunistic, avirulent plant symbionts[END_REF]. Such resistances often protect against crop damage, and many plant-associated microorganisms like the latter can be used for biocontrol [START_REF] Harman | Trichoderma species-opportunistic, avirulent plant symbionts[END_REF].
Biotic constraints are not limited to pathogens, and many other aggressors or competitors can affect plants. One of the most studied biotic stresses alleviated by plant-associated microorganisms is herbivory. Mutualist-induced resistance to herbivory has been identified and described for various plant-feeding herbivores. Such resistance often involves the production by the endophyte of compounds that are toxic for herbivores or that diminish plant palatability. For example, [START_REF] Tanaka | A symbiosis expressed nonribosomal peptide synthetase from a mutualistic fungal endophyte of perennial ryegrass confers protection to the symbiotum from insect herbivory[END_REF] showed that the fungal endophyte Epichloë festucae produces the secondary metabolite peramine, which protects its host, perennial ryegrass, from insect herbivory. As for abiotic stress tolerance (see above, section 2.2.1), a single microorganism can produce various compounds. For instance, clavicipitaceous endophytes such as Neotyphodium induce the production of alkaloids, lolitrems, lolines, and peramines, allowing grasses to defend against herbivores [START_REF] Rowan | Lolitrems, peramine and paxilline: mycotoxins of the ryegrass/endophyte interaction[END_REF] (Siegel et al., 1989; [START_REF] Clay | Fungal endophytes of grasses[END_REF] Clay & Schardl, 2002). Such patterns of defense against herbivores have mostly been evidenced in grasses and also comprise other endophytes limiting mammalian herbivory through the production of lysergic acid amide [START_REF] White J R | Endophyte-host associations in forage grasses. VIII. Heterothallism in Epichloë typhina[END_REF][START_REF] Gentile | Origin divergence and phylogeny of Epichloë endophytes of native argentine grasses[END_REF][START_REF] Zhang | Neotyphodium endophyte increases Achnatherum inebrians (drunken horse grass) resistance to herbivores and seed predators[END_REF].
Non-toxic compounds conferring antifeeding properties include, for example, alkaloids that reduce rabbit herbivory on plants [START_REF] Panaccione | Effects of ergot alkaloids on food preference and satiety in rabbits, as assessed with gene-knockout endophytes in perennial ryegrass (Lolium perenne)[END_REF].

2.3 Growth and reproductive strategy

As has been shown in other hosts, such as insects with Wolbachia, endophytes can also alter plant growth and reproductive strategy. In clonal plants, Streitwolf-Engel et al. (1997, 2001) showed that colonization of the roots by Arbuscular Mycorrhizal Fungi could alter growth and reproduction. In this experiment, the authors found that ramet (i.e. clonal units composed of roots and shoots) production by Prunella vulgaris was differentially affected by the inoculation of three AM fungal isolates (see Figure 3 for a description of clonal plant growth). The number of ramets produced changed by a factor of up to 1.8, independently of the isolates' effects on plant biomass. AM fungi can thus alter the trade-off between growth and reproduction. In addition, microorganisms can also alter the trade-off between different compartments of plant growth. In the above-described experiment, the authors also found that branching (lateral ramification) was affected by the inoculation, suggesting that foraging by the plant (i.e., its strategy for resources acquisition) was modified by Arbuscular Mycorrhizal Fungi inoculation. In another experiment, Sudova (2009) showed that the growth and reproductive strategy modifications induced by AM fungi vary both with the fungal identity and with the plant species. Using five co-occurring plant species with three AM fungal isolates, this study showed that plant growth responses to inoculation varied widely from negative to positive depending on the inoculum. AM inoculation led to changes in clonal growth traits, such as an increase in stolon number and length, only in some plant species.
The effects of microorganisms on the growth and reproductive strategy of plants thus appear to depend on the matching between plant and microorganism identities, although this idea has not been extensively tested. As shown in this section, the vast diversity of plant-associated microorganisms ensures essential functions impacting plant growth, development, survival and resistance to environmental constraints in general. However, the described studies tended to focus on describing the effects of the microbiota, not on the use of this microbiota by the plant. Considering the benefits of symbiotic associations, evolution should favor a plant that optimizes its interactions with microbes. However, the extent to which plants might forage for microorganisms has not been investigated to date.

3 Plant microbiota assembly

The additive ecological functions supplied by the plant mutualists described in the previous section extend the plant's adaptation ability (e.g., Vandenkoornhuyse et al., 2015), leading to fitness benefits for the host in highly variable environments [START_REF] Conrath | Priming: getting ready for battle[END_REF] and can therefore affect evolutionary trajectories (e.g., [START_REF] Brundrett | Coevolution of roots and mycorrhizas of land plants[END_REF]). In addition, because microbial communities may produce a mixture of antipathogen molecules that are potentially synergistic (see section 2.2.2), we can predict that plant hosts will be better protected against biotic stresses in the presence of more diverse microbial communities (Friesen et al., 2011). The same idea has been proposed for resources acquisition, following results showing complementarity between symbionts in the acquisition of resources [START_REF] Van Der Heijden | Mycorrhizal fungal diversity determines plant biodiversity, ecosystem variability and productivity[END_REF].
Thus the composition of the plant microbiota is of major importance in determining the ecological success (the fitness) of plants. In this context, the assembly rules shaping microbiota diversity and composition have only started to be described in recent years, and the current knowledge is reviewed in the following section (see Figure 4 for an overview of the factors shaping microbiota assembly). The factors inducing a filtering of the microbial communities from the soil to the rhizosphere and subsequently within the endosphere are presented in this section.

3.1 Microbiota recruitment

3.1.1 The soil as a microbial reservoir

The soil is a highly rich and diversified reservoir of microorganisms [START_REF] Curtis | Estimating prokaryotic diversity and its limits[END_REF][START_REF] Torsvik | Prokaryotic diversity--magnitude, dynamics, and controlling factors[END_REF][START_REF] Gams | Biodiversity of soil-inhabiting fungi[END_REF] (Buee et al., 2009). For example, forest soil samples of 4 g were found to contain approximately 1000 molecular operational taxonomic units assigned to fungal species (Buee et al., 2009). Recent studies investigating the composition of the microbiota in different compartments (soil, rhizosphere, root and leaf endosphere) of Agave species showed that the majority of bacterial and fungal OTUs found in the endosphere were present in the soil [START_REF] Coleman-Derr | Plant compartment and biogeography affect microbiome composition in cultivated and native Agave species[END_REF]. This observation was independently evidenced in the common model plant Arabidopsis thaliana, where most of the root-associated bacteria were also found in the soil [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Schlaeppi et al., 2014).
Root-associated microorganisms are thus mainly recruited from the surrounding soil, and the pool of microorganisms available for recruitment in the soil determines the diversity and composition of the plant root microbiota. The above-described recent studies, in line with others, have clearly demonstrated through experimental approaches that the main environmental factors determining the soil pool (and thus the endophytic microbiota of plant roots) are the soil type and properties such as pH, temperature or water availability [START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF][START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Shakya et al., 2013; Schreiter et al., 2014).

3.1.2 Rhizosphere

While the composition of soil microbial communities seems to be mostly determined by soil parameters (see above), the rhizosphere is a thin layer of soil at the periphery of the roots. This specific habitat of soil microorganisms contains a great abundance of microbes, up to 10^11 microbial cells per g of soil, and is also highly diversified, with more than 30,000 prokaryotic species (e.g. [START_REF] Egamberdieva | High incidence of plant growth-stimulating bacteria associated with the rhizosphere of wheat grown on salinated soil in Uzbekistan[END_REF][START_REF] Mendes | Deciphering the rhizosphere microbiome for disease-suppressive bacteria[END_REF]). The rhizosphere corresponds to the zone of influence of plant root exudates and oxygen release (Vandenkoornhuyse et al., 2015). Exudates are highly complex mixtures comprising carbon-rich molecules and other compounds secreted by the plant as products of its metabolism. These exudates have a strong effect on the fungal and bacterial communities because they can be resources available for microorganisms, signal molecules (e.g.
chemotaxis), and at the same time anti-microbial compounds (Vandenkoornhuyse et al., 2015). The large quantity of organic molecules secreted allows plants to manipulate the rhizosphere from the surrounding bulk soil reservoir (Bais et al., 2006). Differences in the microbial communities in the rhizosphere of the same soil, depending on the plant species, have been highlighted (Berg et al., 2006; [START_REF] Garbeva | Rhizosphere microbial community and its response to plant species and soil history[END_REF][START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF][START_REF] Micallef | Influence of Arabidopsis thaliana accessions on rhizobacterial communities and natural variation in root exudates[END_REF]), demonstrating the strong influence of plants and the induced "rhizosphere effect". Differences have also been detected within species, as genotypes were shown to harbor distinct rhizosphere communities [START_REF] Micallef | Influence of Arabidopsis thaliana accessions on rhizobacterial communities and natural variation in root exudates[END_REF]. In addition, some species have also been shown to create similar communities in different soils [START_REF] Miethling | Variation of microbial rhizosphere communities in response to crop species, soil origin, and inoculation with Sinorhizobium meliloti L 33[END_REF].
Although species richness seems to be lower in the rhizosphere than in the surrounding soil, suggesting that organisms are filtered from the original soil pool [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF], the intensity of the rhizosphere effect seems to vary [START_REF] Uroz | Pyrosequencing reveals a contrasted bacterial diversity between oak rhizosphere and surrounding soil[END_REF][START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Schlaeppi et al., 2014). The importance and intensity of the rhizosphere effect still need to be clarified, but it has been hypothesized that plant roots select specific microorganisms to prosper in the rhizosphere through the composition of the exudates that they release (discussed in section 3.2).

3.1.3 Endosphere

The endosphere constitutes the area within the plant roots and is colonized by a large variety of organisms comprising mycorrhizal and non-mycorrhizal fungi [START_REF] Vandenkoornhuyse | Extensive fungal diversity in plant roots[END_REF][START_REF] Smith | Mycorrhizal Symbiosis[END_REF], bacteria (Reinhold-Hurek & Hurek, 2011) and Archaea [START_REF] Sun | Endophytic bacterial diversity in rice (Oryza sativa L.) roots estimated by 16S rDNA sequence analysis[END_REF]. The plant's influence on microbial assemblages is stronger in this area than in the rhizosphere because of the influence of the plant immune system (Vandenkoornhuyse et al., 2015). In this area, the diversity and abundance of organisms are lower than in the surrounding soil [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF] (Lundberg et al., 2012; Schlaeppi et al., 2014), with specific assemblages differing from the soil communities (Vandenkoornhuyse et al., 2015).
Several key microorganisms are indeed enriched (i.e. present at higher relative abundance) in the endosphere, as compared to the soil and rhizosphere compartments, whereas others are depleted (i.e. present at lower relative abundance). Lundberg et al. (2012) detected 96 OTUs that were enriched in the endosphere of A. thaliana and 159 OTUs that were depleted. These results have been confirmed in a more recent study using a larger set of Arabidopsis and Cardamine hosts, where Actinobacteria, Betaproteobacteria and Bacteroidetes dominated the datasets obtained from the rhizosphere and endosphere samples (Schlaeppi et al., 2014). These patterns of organism enrichment and depletion can be attributed to the plant's ability to control its microbiota through its immune system.

3.2 Controls of the plant over its microbiota

3.2.1 Microorganism recruitment through compound secretion

The microbial communities colonizing the rhizosphere, and those that penetrate the endosphere and form the plant microbiota, have been described with regard to their role in plant health. Studies have highlighted a large panel of mechanisms that allow the plant to control this microbiota by filtering out pathogens and promoting beneficial microbes [START_REF] Doornbos | Impact of root exudates and plant defense signaling on bacterial communities in the rhizosphere. A review[END_REF]. These control mechanisms have been identified for both the rhizosphere and the endosphere, and involve exudate secretion and the plant's immune system. Berendsen et al. (2012) reviewed the mechanisms that enable the plant to control the composition of its rhizosphere and showed that plants can exude secondary metabolites, such as benzoxazinoids, that inhibit the growth of specific microbes in the rhizosphere (Bais et al., 2002; [START_REF] Zhang | Secondary metabolites from the invasive Solidago canadensis L. accumulation in soil and contribution to inhibition of soil pathogen Pythium ultimum[END_REF]).
Some studies have also evidenced plant-secreted compounds that suppress microbial cell-to-cell communication (i.e. "quorum sensing") and thus alter the expression of microbial virulence genes. Such compounds have been identified in a variety of species such as pea (Pisum sativum), rice (Oryza sativa) and green algae (Chlamydomonas reinhardtii) [START_REF] Teplitski | Plants secrete substances that mimic bacterial N-acyl homoserine lactone signal activities and affect population density-dependent behaviors in associated bacteria[END_REF][START_REF] Teplitski | Chlamydomonas reinhardtii secretes compounds that mimic bacterial signals and interfere with quorum sensing regulation in bacteria[END_REF][START_REF] Ferluga | OryR is a LuxR-family protein involved in interkingdom signaling between pathogenic Xanthomonas oryzae pv. oryzae and rice[END_REF]. A study on Medicago truncatula [START_REF] Gao | Production of substances by Medicago truncatula that affect bacterial quorum sensing[END_REF] found that ~20 compounds produced and released in seedling exudates altered quorum sensing. These exudates also exhibited anti-microbial properties against pathogens while at the same time attracting beneficial organisms. An example of this attractive effect is the beneficial rhizobacterium Pseudomonas putida KT2440, which is tolerant to, and chemotactically attracted by, exudate compounds [START_REF] Neal | Benzoxazinoids in root exudates of maize attract Pseudomonas putida to the rhizosphere[END_REF]. Such recruitment of microorganisms, mediated by the root exudates, has been described in fungal antagonists as a response to colonization of the plant by the pathogen Verticillium dahliae.
Other studies have indicated that upon attack by pathogens, plants respond by recruiting specific beneficial microorganisms [START_REF] Rudrappa | Root-secreted malic acid recruits beneficial soil bacteria[END_REF][START_REF] Ryu | Foliar aphid feeding recruits rhizosphere bacteria and primes plant immunity against pathogenic and non-pathogenic bacteria in pepper[END_REF]. For example, when the leaves of Arabidopsis were infected by Pseudomonas syringae pv., the roots were more heavily colonized by Bacillus subtilis FB17, a beneficial endophyte. These observations led to the suggestion that the recruitment of specific microorganisms within the endosphere constitutes a means for plants to repress pathogens (Berendsen et al., 2012). The plant is thus able to preferentially recruit microorganisms depending on its needs by repressing or facilitating microorganisms within the rhizosphere. The control of the plant over the microbial communities is not limited to the area around the root and is even greater within the roots.

3.2.2 The immune system

Another way for the plant to control the composition of its microbiota is through its innate immune system. The forms of innate immunity and their role in controlling microbiota composition have been reviewed (see for example Jones & [START_REF] Dangl | The plant immune system[END_REF]). Two forms of innate immunity exist, depending on the molecules triggering the immune response: pathogen-associated molecular pattern-triggered immunity (PTI) and effector-triggered immunity (ETI). [START_REF] Boller | A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors[END_REF] described how host plant receptors detect molecular patterns of damage and microbes and initiate the immune response by producing reactive oxygen species, thickening the cell walls and activating defense genes in response to bacterial flagellin.
In the "arms race" between plants and their pathogens, pathogens have evolved the use of effectors to suppress the above-described PTI mechanisms (Kamoun, 2006), whereas plants have developed the perception of these effectors through ETI, which induces a strong response of cell apoptosis and local necrosis [START_REF] Boller | A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors[END_REF]. In a recent paper, Hiruma et al. (2016) suggested that the immune system has wider physiological roles than simply restricting pathogen growth and may be involved in symbiont behavior. In an experiment described previously in section 1.2, they used metabolic profiling and experimental inoculations of endophytic fungi to demonstrate that the plant's innate immune system favored the colonization of Colletotrichum tofieldiae during phosphate starvation, and defense responses under phosphate-sufficient conditions [START_REF] Hiruma | Root endophyte Colletotrichum tofieldiae confers plant fitness benefits that are phosphate status dependent[END_REF]. At the same time, the immune system stimulated the indole glucosinolate metabolism [START_REF] Pant | Identification of primary and secondary metabolites with phosphorus status-dependent abundance in Arabidopsis, and of the transcription factor PHR1 as a major regulator of metabolic changes during phosphorus limitation[END_REF], which induced a shift in symbiont behavior, triggering the transfer of phosphorus by the fungi from root-associated hyphae to the plant shoots. The immune system could thus be a tool allowing the plant to manipulate symbionts for its own good.

3.2.3 Regulation of symbiotic interactions

As explained in section 1.2, symbiosis consists of a continuum of partnerships ranging from parasitism to mutualism. The level of symbiosis efficiency is a consequence of the evolutionary trajectory of the symbiont under consideration.
This means that in any kind of symbiosis, cheaters and cooperators can co-exist. But why would cheating occur? A large proportion of the plant microbiota provides beneficial functions to the plant and is thus believed to consist of mutualists. Mutualism implies reciprocal benefits with an associated cost for both partners [START_REF] Cameron | Giving and receiving: measuring the carbon cost of mycorrhizas in the green orchid, Goodyera repens[END_REF][START_REF] Davitt | Do the costs and benefits of fungal endophyte symbiosis vary with light availability?[END_REF] (Fredericksson et al., 2012). This type of interaction favors "the tragedy of the commons" since it creates a conflict of interest between organisms. From an evolutionary point of view, mutualism is thus expected to be unstable and to evolve toward an asymmetric relationship with one partner benefiting from the interaction at the expense of the other. By doing so, a cheater symbiont, or a host providing few benefits (at low cost) to the other in exchange for high rewards, would selfishly improve its own fitness and thus invade the community. The stability of mutualism has been proposed to rely on control mechanisms on both sides of the symbiosis [START_REF] Bever | Preferential allocation to beneficial symbiont with spatial structure maintains mycorrhizal mutualism[END_REF] (Kiers et al., 2011). It has been clearly demonstrated that plants can exert a carbon sanction on the less beneficial Arbuscular Mycorrhizal fungal cooperators (Kiers et al., 2011), thus reducing the fitness of any cheaters. This mechanism of carbon sanction by the host plant has been demonstrated for both phosphorus [START_REF] Bever | Preferential allocation to beneficial symbiont with spatial structure maintains mycorrhizal mutualism[END_REF] (Kiers et al., 2011) and nitrogen transfer [START_REF] Fellbaum | Carbon availability triggers fungal nitrogen uptake and transport in arbuscular mycorrhizal symbiosis[END_REF] by AM fungi.
Reciprocally, on the fungal side, phosphorus is only delivered to the host in exchange for photosynthates. In this symbiosis, being a cheater thus means increasing the carbon cost of delivering phosphorus (Kiers et al., 2011). Considering that a given host plant is colonized by multiple symbionts at the same time, cheaters can only proliferate if the plant is not able to favor other symbionts. The level of cooperation is thus likely correlated with the level of diversity of symbionts.

3.2.4 The host plant effect: genetics and biogeography

The colonization of plants by microorganisms occurs in the light of a finely tuned innate plant immune system allowing a selective recognition and filtration of microorganisms. A plant is thus capable of constructing its own microbiota. The inherent expectation is thus a strong effect of plant identity on the assembly of the microbiota. Although the initial microbial communities depend on the original soil, they become more and more plant-specific and less diverse during colonization of the rhizosphere and endosphere (see section 3.1 above). Even within these compartments, communities become increasingly plant-specific during plant growth and development [START_REF] Chaparro | Rhizosphere microbiome assemblage is affected by plant development[END_REF][START_REF] Sugiyama | Changes in the bacterial community of soybean rhizospheres during growth in the field[END_REF]Edwards et al., 2015;[START_REF] Shi | Successional trajectories of rhizosphere bacterial communities over consecutive seasons[END_REF]. Plants represent habitats for microorganisms, and thus root architecture or nutrient quality and quantity can determine the plant microbiota depending on the plant species [START_REF] Oh | Distinctive bacterial communities in the rhizoplane of four tropical tree species[END_REF][START_REF] Bonito | Plant host and soil origin influence fungal and bacterial assemblages in the roots of woody plants[END_REF].
Differences between microbiota compositions can even be detected at a finer scale, such as the ecotype. A. thaliana ecotypes establish rhizosphere communities that differ from each other at a statistically significant level [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]Lundberg et al., 2012). However, the same studies, along with others, also highlighted that environmental factors seem to exert greater control on root-associated bacteria than the plant genotype or even the plant species [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]Lundberg et al., 2012;Shakya et al., 2013;Schlaeppi et al., 2014). This suggests that plant identity might not be the primary, or at least the strongest, factor determining differences in plant microbiota. Recent advances indicate that the importance of the host species might be microorganism-specific. [START_REF] Coleman-Derr | Plant compartment and biogeography affect microbiome composition in cultivated and native Agave species[END_REF] found that while prokaryotic communities of Agave species were primarily shaped by plant compartment and soil properties, fungal communities were structured by the biogeography of the plant host species. Such differences in biogeographic effect have been independently found in other studies where root fungal communities (ectomycorrhiza) were differentiated more by geographic distance than bacterial communities [START_REF] Peay | A strong species-area relationship for eukaryotic soil microbes: island size matters for ectomycorrhizal fungi[END_REF]Shakya et al., 2013;[START_REF] Meiser | Meta-analysis of deep-sequenced fungal communities indicates limited taxon sharing between studies and the presence of biogeographic patterns[END_REF].
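As an illustrative aside (not taken from the cited studies), the compositional differences between microbial communities discussed above are commonly quantified with a dissimilarity index such as Bray-Curtis, computed between the taxon abundance vectors of two samples. A minimal sketch, with hypothetical toy OTU abundances:

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two taxon abundance vectors.

    0 means identical composition, 1 means no shared taxa.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.abs(u - v).sum() / (u + v).sum()

# Hypothetical OTU abundance profiles for two rhizosphere samples
sample_a = [12, 0, 3, 5]
sample_b = [10, 4, 0, 5]

d = bray_curtis(sample_a, sample_b)
```

Pairwise matrices of such dissimilarities are then typically submitted to ordination (e.g. PCoA) or permutation tests (e.g. PERMANOVA) to partition the variation explained by host genotype versus environmental factors.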
Biotic interactions

Microorganisms living within the plant interact with each other in different ways and these interactions may determine their co-existence. The plant itself also interacts with other plants in a community context. In this section we review the importance of biotic interactions in the assembly of the microbiota, focusing primarily on recent discoveries regarding microbe-microbe interactions and then emphasizing the under-appreciated role of the plant community context.

Microbe-microbe interactions

Considering the fact that plant roots constitute a habitat for microorganisms and that the roots represent a finite volume, co-existing microorganisms within the plant are expected to be in competition for space and resources. Competition between bacterial and fungal species has been demonstrated at the scale of the species. [START_REF] Werner | Partner selection in the mycorrhizal symbiosis[END_REF] demonstrated that the order of arrival of AM fungi structured the colonization of plant roots, suggesting that mycorrhizal fungi compete for root space. Specifically, they showed that the AM fungi that first colonized plant roots (through an earlier inoculation) benefited from a priority effect and were thus found in greater abundance. At higher taxonomic levels, a recent study focusing on the whole endospheric root fungal microbiota (Lê Van et al., 2017) highlighted the coexistence of Glomeromycota, Ascomycota and Basidiomycota in Agrostis stolonifera. An important result of this work was the demonstration that the phylogenetic structure and phylogenetic signal measured differed among phyla. A phylogenetic overdispersion was observed for Glomeromycota and Ascomycota, indicating facilitative interactions between fungi or competitive interactions resulting in the coexistence of dissimilar fungi.
Conversely, the fungi within Basidiomycota displayed a clustered phylogenetic pattern, suggesting an assemblage governed by environmental filtering (i.e. the plant host) favoring the coexistence of closely related species (Lê Van et al., 2017). These mechanisms of facilitation, competition or environmental filtering acting on the fungal microbiota composition are observed at the level of an individual plant. However, competitive or facilitative interactions between microorganisms in the plant microbiota are not stable over time, and colonization of the root by a new microorganism can bring major changes. For example, colonization by the fungal pathogen Rhizoctonia solani caused a significant shift in the composition of the sugar beet and lettuce rhizosphere communities, often with an increase in the abundance of specific families [START_REF] Chowdhury | Effects of Bacillus amyloliquefaciens FZB42 on lettuce growth and health under pathogen pressure and its impact on the rhizosphere bacterial community[END_REF][START_REF] Chapelle | Fungal invasion of the rhizosphere microbiome[END_REF]. Moreover, plant microorganisms may themselves be colonized by bacteria and viruses (i.e. three-way symbiosis), affecting symbiotic communication. For example, the endophyte Curvularia protuberata contains a virus that is required to induce heat tolerance [START_REF] Márquez | A virus in a fungus in a plant: three-way symbiosis required for thermal tolerance[END_REF], the fungus being asymptomatic without the virus. Current knowledge thus indicates that microbe-microbe interactions can, like host-microbe interactions, drive plant microbiota assembly and its effect on phenotype. However, information acquired about these microbe-microbe interactions is still very limited. An example is the above-described three-way symbioses, which can mediate the effect of a given symbiont but whose occurrence in natural ecosystems is unknown.
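The phylogenetic overdispersion and clustering patterns described above are typically diagnosed by comparing the observed mean pairwise phylogenetic distance (MPD) among the taxa of a community against a null distribution obtained by drawing random taxa from the regional pool. The following is only a minimal sketch of that logic with hypothetical toy data, not the cited study's actual pipeline:

```python
import numpy as np

def mpd(dist, idx):
    """Mean pairwise phylogenetic distance among community members in `idx`."""
    sub = dist[np.ix_(list(idx), list(idx))]
    iu = np.triu_indices(len(idx), k=1)
    return sub[iu].mean()

def ses_mpd(dist, idx, n_null=999, seed=42):
    """Standardized effect size of MPD against a random-draw null model.

    Negative values suggest phylogenetic clustering (environmental
    filtering); positive values suggest overdispersion (e.g. competition
    excluding similar taxa, or facilitation of dissimilar ones).
    """
    rng = np.random.default_rng(seed)
    obs = mpd(dist, idx)
    pool = dist.shape[0]
    null = np.array([
        mpd(dist, rng.choice(pool, size=len(idx), replace=False))
        for _ in range(n_null)
    ])
    return (obs - null.mean()) / null.std()

# Toy distance matrix: taxa 0-2 form one clade, taxa 3-5 another
d = np.full((6, 6), 10.0)
for block in ([0, 1, 2], [3, 4, 5]):
    for i in block:
        for j in block:
            d[i, j] = 0.0 if i == j else 1.0

# A community drawn entirely from one clade should appear clustered
ses = ses_mpd(d, [0, 1, 2])
```

In practice, dedicated community-phylogenetics packages compute these standardized effect sizes (often reported as NRI/NTI) directly from sequence-derived trees.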
The plant community context

The pool of microorganisms available in the soil conditions the plant's recruitment and thus determines the plant microbiota (see section 3.1.1). This soil pool depends on local environmental filtering and microorganism dispersal. Another factor structuring this pool is plant host filtering. Indeed, a plant is able to selectively recruit and promote microorganisms depending on its needs (see section …). … (2016) proposed that the degree of host filtering on the AM fungal community would depend on the spatial scale considered and could be stronger than environmental factors at a local scale. Taking into account that a given plant recruits its microbiota within its local environment, there is a need to evaluate the intensity of this plant community context in determining both the AM fungal and whole microbiota assembly.

Microbiota transmission

Diverse modes of transmission of microorganisms between generations, also called heritability, have been described, reviewed, and organized into different classifications [START_REF] Mcfall-Ngai | Unseen forces: the influence of bacteria on animal development[END_REF]Zilber-Rosenberg & Rosenberg, 2008). Herein, we present these modes of heritability in two categories: pseudo-vertical (or horizontal) and vertical transmission.

Horizontal or pseudo-vertical transmission

Pseudo-vertical transmissions are indirect transmissions mediated by the environment. Two individuals sharing the same environment are able to recruit similar microorganisms from the soil pool, and thus plants releasing microorganisms over short dispersal distances may share a part of their microbiota through this environmental transmission.
Such transmission has been widely described for a wide range of organisms with significant impact on plant phenotype, such as growth-promoting rhizobacteria [START_REF] Smith | Genetic basis in plants for interactions with disease-suppressive bacteria[END_REF][START_REF] Singh | Unravelling rhizosphere microbial interactions: opportunities and limitations[END_REF][START_REF] Somers | Rhizosphere bacterial signalling: a love parade beneath our feet[END_REF][START_REF] Egamberdieva | High incidence of plant growth-stimulating bacteria associated with the rhizosphere of wheat grown on salinated soil in Uzbekistan[END_REF], rhizobia [START_REF] Stougaard | Regulators and regulation of legume root nodule development[END_REF][START_REF] Davies | How rhizobial symbionts invade plants: the Sinorhizobium-Medicago model[END_REF], and even arbuscular mycorrhizal fungi and non-photosynthetic fungi when seeds fall close to the mother plant [START_REF] Wilkinson | Mycorrhizal evolution[END_REF]Wang & Qiu, 2006).

Vertical transmission or heritability

For plants, vertical transmission has primarily been reported for the cytoplasmic inheritance of chloroplasts [START_REF] Margulis | Origins of species: acquired genomes and individuality[END_REF]. Other true vertical transmission in plants does exist, even if it has not been extensively studied. A limited diversity of endophytic fungi and bacteria is contained in the germplasm and is thus vertically transmitted in grasses (Clay & Schardl, 2002), conifers [START_REF] Ganley | Fungal endophytes in seeds and needles of Pinus monticola[END_REF], and tropical trees (U'Ren et al., 2009). Another kind of vertical transmission has been proposed for plants through vegetative propagules [START_REF] Wilkinson | Mycorrhizal evolution[END_REF]Zilber-Rosenberg & Rosenberg, 2008). Plants are modular organisms, and thus a fragment of a plant (i.e. a propagule) falling on the ground is able to grow and produce a new plant.
In this context, it is probable that microorganisms developing within the plant fragment are transmitted to the progeny, even if this has not been experimentally demonstrated. Another form of vertical microbiota transmission was suggested for clonal plants by Stuefer et al. (2004). Clonal plants are organized as a network of ramets (i.e. clonal, potentially autonomous individuals) connected by above- or below-ground modified stems (i.e. stolons) (Figure 3). This network structure promotes plant propagation in space (i.e. physical integration) and, in some species, the sharing of resources within the network to optimize the fitness of the network as a whole (i.e. physiological integration; Oborny et al., 2001). Another layer of integration could occur in such a network because the stolons could permit the transfer of microorganisms between ramets through the vascular system or via the surface of the stolons. This hypothesis has never been addressed despite its importance regarding microorganism dispersal and plant adaptation to environmental constraints. Considering all the benefits that can be attributed to symbiotic microorganisms (described in section 2), vertical transmission of part of or the whole microbiota would be advantageous because it would limit the costs of searching for suitable symbionts (Wilkinson & Sherratt, 2001) and thus could ensure habitat quality for the progeny. In terms of evolution, the transmission of symbiotic microorganisms between plant generations constitutes a "continuity of partnership" between the plant and the transmitted symbionts (Zilber-Rosenberg and Rosenberg, 2008). There is thus a need in plant science to search for vertical transmission of a significant part of the microbiota influencing the plant's fitness. [START_REF] Schlichting | Phenotypic plasticity-an evolving plant character[END_REF].
The plant is a "holobiont"

Following the observations described in the previous sections, a new understanding of plant-microorganism interactions has been proposed. Experiments and observations have clearly demonstrated the fundamental role of microorganisms in all aspects of host life, with important consequences for host phenotype and fitness. Thus the plant should not be considered as a standalone entity but needs to be viewed as a host together with its microbiota. In this context, the holobiont idea, originally introduced in 1994 during a symposium lecture by Richard Jefferson, and the holobiont hypothesis later developed by Zilber-Rosenberg and Rosenberg (2008), allow a collective view of the functions and interactions of a host and its symbionts. Holobiont is a term derived from the Greek word holos (whole or entire) and is used to describe an individual host and its microbial community. This holobiont theory proposes that an entity comprising a host and its associated microbiota can be considered as a unit of selection (i.e. not the only unit) in evolution (Zilber-Rosenberg & Rosenberg, 2008;Theis et al., 2016). The host and microbial genomes of a holobiont are collectively defined as its hologenome [START_REF] Rosenberg | Role of microorganisms in adaptation, development, and evolution of animals and plants: the hologenome concept[END_REF]Bordenstein & Theis, 2015). This concept relies on different tenets, among which the idea that microorganisms can be vertically or horizontally transmitted between generations, constituting a continuity of partnership, which shapes the holobiont phenotype.
Genomic novelties can thus occur within any component of the hologenome (host or microorganisms), and holobiont evolution could lead to variations in either the host or the microbial genomes (Zilber-Rosenberg & Rosenberg, 2008;[START_REF] Rosenberg | Role of microorganisms in adaptation, development, and evolution of animals and plants: the hologenome concept[END_REF]. Furthermore, variations in the hologenome can also arise by recombination in the host and/or microbiome and by acquisition of new microbial strains from the environment [START_REF] Rosenberg | Role of microorganisms in adaptation, development, and evolution of animals and plants: the hologenome concept[END_REF]. In this context, selection acts on the result of the interactions both between the host and its symbionts and between symbionts: the holobiont phenotype. The recent development of the holobiont concept and hologenome theory (Bordenstein & Theis, 2015) has triggered a debate on its validity and its usefulness as a scientific framework (Moran & Sloan, 2015;[START_REF] Douglas | Holes in the hologenome: why host-microbe symbioses are not holobionts[END_REF]Theis et al., 2016). The primary role of this holobiont concept and related hologenome theory is to provide adequate vocabulary to describe and study host-microbiota symbiosis (Theis et al., 2016) while blending the effects of complex microbiota on host phenotypes, which may be cooperative or competitive [START_REF] Bosch | Metaorganisms as the new frontier[END_REF]Vandenkoornhuyse et al., 2015). Holobionts and their hologenomes are theoretical entities that require understanding. Metaphorically, skepticism and criticism are the selection pressures shaping the evolution of science, and "...the hologenome concept requires evaluation as does any new idea" (Theis et al., 2016).
One of the main criticisms and prospects regarding the validity and significance of the holobiont and hologenome theory is the lack of studies showing vertical transmission of microorganisms between holobiont generations (i.e. heritability). Observational and experimental studies addressing the ubiquity (i.e. frequency of occurrence in natural populations), the fidelity (i.e. homogeneity in the organisms transmitted) and the significance (i.e. amount of microorganisms transmitted and impact on the plant phenotype) of this heritability are thus needed to build a comprehensive holobiont theory. Another aspect of interest is the role of heritability in holobiont assembly, and especially in the covariance of genomes between hosts and microbiota components, which can induce variation in the phenotypes on which selection acts. In order for the holobiont and hologenome theory to be comprehensive, it should encompass many symbiosis models. The current understanding of the plant as a holobiont suggests another layer of complexity that has not been theorized yet. Many organisms are modular, but plants harbor an additional level of modularity through clonal reproduction. The network structure of clonal plants potentially allows exchanges between holobionts, and thus provides a particular model for the comprehension of holobiont assembly and evolution. An extension of the holobiont and hologenome theory to clonal organisms is thus required.

Objectives of the thesis

This thesis is organized around two major objectives that follow from one another. This work aims first at identifying the assembly rules governing the structure of the plant microbiota and the consequences of this assembly regarding plant phenotypes.
More precisely, we aimed at determining:
(i) the importance of the microbiota regarding plant phenotypic plasticity and adaptation (Chapter 1), with the hypothesis that fungal endophytes, and specifically AM fungi, influence the spatial spreading and performance of plants;
(ii) the occurrence of microbiota heritability between clonal generations in plants and its impact on the assembly of the microbiota (Chapter 2). Specifically, we tested the hypothesis that clonal plants are able to transmit to their clonal progeny a part of their microbiota, including symbionts important for plant fitness, through the connective structures forming the clonal network.
The second objective of the thesis is to determine the importance of the plant community context for plant-microorganism interactions and the outcome for plant performance (Chapter 3). The hypothesis is that the abundance of a given plant in a community can modify the composition and structure of the fungal community colonizing a focal host plant, ultimately determining its performance.

Chapter I: Consequences of mutualist-induced plasticity

I.1. Introduction

Scientific context

Plants are sessile organisms and thus have to cope with environmental constraints. In nature, the spatial distribution of resources, and of environmental constraints in general, is heterogeneous. Plants have thus evolved different mechanisms to buffer these constraints, such as plastic modifications (i.e. the production of different phenotypes from a single genotype) to adapt to local conditions (Bradshaw, 1965). However, the tools available for plant plastic responses are not limited to the genome. Indeed, epigenetic mechanisms (regulating the expression of the genome) and microorganisms have repeatedly been shown to be sources of variation (Richards, 2006;Friesen et al., 2011;Holeski et al., 2012;Vandenkoornhuyse et al., 2015).
A large number of studies have separately identified the effects of specific plant-associated microorganisms and of epigenetic marking on plant phenotype. There is, however, a need to bring together these findings to evaluate the importance of these sources of phenotypic variation for the plant's ability to produce plastic responses and adapt to environmental constraints. Clonal plants are particularly plastic organisms. In clonal plants, the network structure (i.e. connected clonal individuals) allows propagation in space, together with resource and information sharing (i.e. physiological integration; Oborny et al., 2001). The network thus allows plastic responses to heterogeneity from low modular levels, such as a leaf or a root, up to higher modular levels such as the ramet. Many studies have evidenced two plastic responses to the heterogeneity of resources that are specific to this network structure: foraging and specialization (Slade & Hutchings, 1987;Dong, 1993;Hutchings & de Kroon, 1994;Birch & Hutchings, 1994). Plant-associated symbionts provide a large range of key functions that can be useful for the plant (Friesen et al., 2011;Müller et al., 2016). From a theoretical point of view, plants should thus consider microorganisms as a resource and develop foraging and specialization behaviors in response to their heterogeneous distribution. Based on the optimal foraging theory (de Kroon & Hutchings, 1995), clonal plants are expected to aggregate ramets (i.e. clonal units comprising shoot and roots) in favorable patches and to avoid unfavorable patches through an elongation of stolons (i.e. connective structures). In agreement with the division of labor theory (Stuefer & de Kroon, 1996), plants are also expected to develop a specialization response by preferentially developing roots in nutrient-rich soil patches. Such a plant response to the heterogeneous distribution of microorganisms has never been investigated to date.
In addition, as microorganisms affect plant phenotype, they could alter its ability to produce plastic responses.

Objectives of the chapter

This chapter aims at determining the role of microorganisms in the production of phenotypic plasticity in response to environmental heterogeneity. More precisely, we address the following questions:
1) Is the genome the only source of phenotypic plasticity for plants? Are microbiota and epigenetics significant sources of phenotypic plasticity allowing plants to adapt to environmental constraints? (Article I)
2) Do plants develop foraging and specialization plastic responses to the spatial heterogeneity of arbuscular mycorrhizal fungi? Is AM fungal identity likely to alter plant traits, thus impacting the plant's ability to produce foraging and specialization responses? (Article II)

Methods

The work presented in this chapter is based on two different approaches. The first article presented is a review investigating the current knowledge of, and the limits to, plant phenotypic plasticity induced by epigenetics and microbiota. This paper aims at identifying perspectives on the roles of microorganisms in plant responses to environmental constraints. The second article presented herein is based on experimental cultivation of the clonal plant Glechoma hederacea in controlled conditions (light, temperature, water and nutrient availability, no exterior contamination). Two different experiments are presented in this article. We tested the G. hederacea foraging and specialization responses to the heterogeneous distribution of AM fungi. We manipulated the heterogeneity of AM fungal distribution by setting up artificial patches (separate cultivation pots) that were inoculated (presence) or not (absence) with a mixture of three AM fungal species.
To test the effect of the fungal species inocula on plant traits, plants were cultivated in larger pots, and the treatments consisted of the inoculation of individual fungal species (three treatments) or no inoculation at all.

Main results

The review presented herein (Article I) evidenced the importance of microbiota and epigenetics as sources of phenotypic plasticity for plant adaptation to environmental constraints. Recent discoveries on epigenetics and plant-associated microorganisms have demonstrated their impact on plant survival, development and reproduction. Microbiota and epigenetics represent non-genome-based sources of phenotypic variation that can be used by the plant to cope with environmental conditions from very short (within-lifetime) to longer (i.e. evolutionary) time scales. These phenotypic variations can furthermore be integrated into the genome through genetic accommodation and thus impact plant evolutionary trajectories. Our review of the existing knowledge overall evidenced that an interplay between microbiota and epigenetic mechanisms could occur and deserves to be investigated, together with the respective impacts of these mechanisms on plant survival and evolution. In our experiments, we demonstrated (1) in Glechoma hederacea, no specialization and a limited foraging response to the heterogeneous distribution of AM fungi, and (2) that the architectural traits involved in the plant's foraging response were not affected by the AM fungal species tested, contrary to resource allocation traits (linked to the specialization response). Two possible explanations were proposed: (i) plant responses are buffered by the differences observed in the individual effects of the fungal species.
Indeed, as AM fungal species have contrasting effects on plant traits and resource acquisition, their cumulative effects in mixture could be neutral; (ii) the initial heterogeneous distribution of AM fungi is perceived as homogeneous by the plant, either through reduced physiological integration or due to the transfer of AM fungal propagules through the stolons. Indeed, the fact that G. hederacea does not forage for arbuscular mycorrhizal fungi suggests that the plant could homogenize the distribution of its symbiotic microorganisms through their transmission between ramets, i.e. generations. Additional scanning electron microscopy observations revealed the presence of hyphae on the stolon surface, and several cells close to the external surface of the stolon cross-section were invaded by structures which could be interpreted as fungi. DNA sequencing of stolon samples confirmed these results and demonstrated the presence of AM fungi in the stolons. This suggests that fungi can be transferred from one ramet to another, at least by colonization of the stolon surface and/or within the stolon tissues. The possible transfer of AM fungi between ramets suggests that plants could have evolved a mechanism of vertical transmission of a part of their microbiota. This mechanism would ensure habitat quality for their progeny and constitute a continuity of partnership [START_REF] Wilkinson | Mycorrhizal evolution[END_REF]. To date, examples of vertical transmission only concern a few microorganisms colonizing plant seeds (Clay & Schardl, 2002;Selosse et al., 2004). Such mechanisms would be of importance since induced phenotypic modifications inherited between generations allow short-, medium- and long-time-scale adaptation of plants to environmental constraints (Article I). The hypothesis of microbiota transmission between ramets is addressed in the following chapter.
the sources of such variation, which has led to the diversification of living organisms, is therefore of major importance in evolutionary biology. Diversification is largely thought to be controlled by genetically-based changes induced by ecological factors [START_REF] Schluter | Experimental evidence that competition promotes divergence in adaptive radiation[END_REF]2000). Phenotypic plasticity, i.e., the ability of a genotype to produce different phenotypes (Bradshaw, 1965;[START_REF] Schlichting | The evolution of phenotypic plasticity in plants[END_REF][START_REF] Pigliucci | Evolution of phenotypic plasticity: where are we going now?[END_REF]), is a key developmental parameter for many organisms and is now considered as a source of adjustment and adaptation to biotic and abiotic constraints (e.g. …). Because of their sessile lifestyle, plants are forced to cope with local environmental conditions, and their survival subsequently relies greatly on plasticity [START_REF] Sultan | Phenotypic plasticity for plant development, function and life history[END_REF]. Plastic responses may include modifications in morphology, physiology, behavior, growth or life history traits [START_REF] Sultan | Phenotypic plasticity for plant development, function and life history[END_REF]. In this context, the developmental genetic pathways supporting plasticity allow a rapid response to environmental conditions (Martin and Pfennig, 2010) and the genes underlying … The links between genotype and phenotypes are often blurred by factors including (i) epigenetic effects inducing modifications of gene expression, post-transcriptional and post-translational modifications, which allow a quick response to an environmental stress (Shaw and Etterson, 2012) and (ii) the plant symbiotic microbiota recruited to dynamically adjust to environmental constraints (Vandenkoornhuyse et al., 2015).
We investigate current knowledge regarding the evolutionary impact of epigenetic mechanisms and the symbiotic microbiota and call into question the suitability of the current gene-centric view in the description of plant evolution. We also address the possible interactions between the responsive epigenetic mechanisms and the symbiotic interactions shaping the biotic environment and phenotypic variations.

Genotype-phenotype link: still appropriate?

In the neo-Darwinian synthesis of evolution (Mayr and Provine, 1997), phenotypes are determined by genes. The underlying paradigm is that phenotype is a consequence of genotype (Alberch, 1991) in a nonlinear interaction due to overdominance, epistasis, pleiotropy, and covariance of genes (see Alberch, 1991;[START_REF] Pigliucci | Evolution of phenotypic plasticity: where are we going now?[END_REF]). Both genotypic variations and the induction of phenotypic variation through environmental changes have been empirically demonstrated, thus highlighting the part played by the environment in explaining phenotypes. These phenotypes are consequences of the perception, transduction and integration of environmental signals. The latter is dependent on environmental parameters, including (i) the reliability or relevance of the environmental signals (Huber and Hutchings, 1997), (ii) the intensity of the environmental signal, which determines the response strength [START_REF] Hodge | The plastic plant: root responses to heterogeneous supplies of nutrients[END_REF], (iii) the habitat patchiness (Alpert and Simms, 2002) and (iv) the predictability of future environmental conditions from current environmental signal information [START_REF] Reed | Phenotypic plasticity and population viability: the importance of environmental predictability[END_REF]. The integration of all these characteristics of the environmental stimulus regulates the triggering and outcomes of the plastic response (e.g. Alpert and Simms, 2002).
In this line, recent works have shown that plant phenotypic plasticity is in fact determined by the interaction between plant genotype and the environment rather than by genotype alone (El-Soda et al., 2014). Substantial variations in molecular content and phenotypic characteristics have been repeatedly observed in isogenic cells (Kaern et al., 2005). Moreover, recent analyses of massive datasets on genotypic polymorphism and phenotype often struggle to identify single genetic loci that control phenotypic trait variation (Anderson et al., 2011). The production of multiple phenotypes is not limited to the genomic information, and the idea of a genotype-phenotype link no longer seems fully appropriate in the light of these findings. Besides, evidence has demonstrated that phenotypic variations are related to gene transcription and RNA translation, which are often linked to epigenetic mechanisms, as discussed in the following paragraph [START_REF] Rapp | Epigenetics and plant evolution[END_REF].

Epigenetics as a fundamental mechanism for plant phenotypic plasticity

"Epigenetics" often refers to a suite of interacting molecular mechanisms that alter gene expression and function without changing the DNA sequence (Richards, 2006;Holeski et al., 2012). Epigenetics is now regarded as a substantial source of phenotypic variations (Manning et al., 2006;Crews et al., 2007;Kucharski et al., 2008;Bilichak, 2012;[START_REF] Zhang | Epigenetic variation creates potential for evolution of plant phenotypic plasticity[END_REF]) in response to environmental conditions. More importantly, studies have suggested the existence of epigenetic variation that does not rely on genetic variation for its formation and maintenance (Richards, 2006;[START_REF] Vaughn | Epigenetic natural variation in Arabidopsis thaliana[END_REF]).
However, to date, only a few studies have demonstrated the existence of pure natural epi-alleles (Cubas et al., 1999), although they are assumed to play an important role in relevant trait variation of cultivated plants (Quadrana et al., 2014). Phenotypic plasticity can have adaptive value through modifications in the form, regulation or phenotypic integration of the trait. In the "adaptation loop", the effect of environment on plant performance induces the selection of the most efficient phenotype. The epigenetic processes are not the only engines of plant phenotypic plasticity adjustment. Indeed, plants also maintain symbiotic interactions with microorganisms to produce phenotypic variations.

Plant phenotypic plasticity and symbiotic microbiota

Plants harbor an extreme diversity of symbionts including fungi (Vandenkoornhuyse et al., 2002) and bacteria (Bulgarelli et al., 2012; Lundberg et al., 2012). During the last decade, substantial research efforts have documented the range of phenotypic variations allowed by symbionts. Examples of mutualist-induced changes in plant functional traits have been reported (Streitwolf-Engel et al., 1997; 2001; Wagner et al., 2014), which modify the plant's ability to acquire resources, reproduce, and resist biotic and abiotic constraints. The detailed pathways linking environmental signals to this mutualist-induced plasticity have been identified in some cases.
For instance, Boller and Felix (2009) highlighted several mutualist-induced signaling pathways allowing a plastic response of plants to viruses, pests and pathogens, initiated by the flagellin/FLS2 and EF-Tu/EFR recognition receptors. Mutualist-induced plastic changes may affect plant fitness by modifying the plant's response to its environment, including (i) plant resistance to salinity (Lopez et al., 2008), drought (Rodriguez et al., 2008) and heat (Redman et al., 2002), and (ii) plant nutrition (e.g., Smith et al., 2009). These additive ecological functions supplied by plant mutualists extend the plant's adaptation ability (Bulgarelli et al., 2013; Vandenkoornhuyse et al., 2015), leading to fitness benefits for the host in highly variable environments (Conrath et al., 2006), and can therefore affect evolutionary trajectories (e.g. Brundrett, 2002). In fact, mutualism is a particular case of symbiosis (i.e. long-lasting interaction) and is supposed to be unstable in terms of evolution because a mutualist symbiont is expected to improve its fitness by investing less in the interaction. Reciprocally, to improve its fitness a host would provide fewer nutrients to its symbiont. Thus, from a theoretical point of view, a continuum from parasites to mutualists is expected in symbioses. However, the ability of plants to promote the best cooperators by a preferential C flux has been demonstrated in both Rhizobium/ and Arbuscular Mycorrhiza/Medicago truncatula interactions (Kiers et al., 2007; 2011).
Thus, the plant may play an active role in the process of mutualist-induced environment adaptation, as it may be able to recruit microorganisms from the soil (for review see Vandenkoornhuyse et al., 2015) and preferentially promote the best cooperators through a nutrient embargo toward less beneficial microbes (Kiers et al., 2011). In parallel, vertical transmission or environmental inheritance of a core microbiota is suggested (Wilkinson and Sherratt, 2001), constituting a "continuity of partnership" (Zilber-Rosenberg and Rosenberg, 2008). Thus the impact on phenotype is not limited to the individual's lifetime but is also extended to reproductive strategies and to the next generation. Indeed, multiple cases of alteration in reproductive strategies mediated by mutualists such as arbuscular mycorrhizal fungi (Sudová, 2009) or endophytic fungi (Afkhami and Rudgers, 2008) have been reported. Such microbiota, being selected by the plant and persisting through generations, may therefore influence the plant phenotype and be considered as a powerhouse allowing rapid buffering of environmental changes (Vandenkoornhuyse et al., 2015). The idea of a plant as an independent entity on the one hand and its associated microorganisms on the other has therefore recently matured towards understanding the plant as a holobiont or integrated "super-organism" (e.g., Vandenkoornhuyse et al., 2015).

Holobiont plasticity and evolution

If the holobiont can be considered as the unit of selection (Zilber-Rosenberg and Rosenberg, 2008), even though this idea is still debated (e.g. Leggat et al., 2007; Rosenberg et al., 2007), then the occurrence of phenotypic variation is enhanced by the versatility of the holobiont composition, both in terms of genetic diversity (i.e. through microbiota genes mainly) and phenotypic changes (induced by mutualists).
Different mechanisms allowing a rapid response of the holobiont to these changes have been identified, including the recruitment of new mutualists within the holobiont (Vandenkoornhuyse et al., 2015). In this model, genetic novelties in the hologenome (i.e. the combined genomes of the plant and its microbiota, the latter supporting more genes than the host) are a consequence of interactions between the plant and its microbiota. The process of genetic accommodation described in section 3 impacts not only the plant genome but can also be expanded to all components of the holobiome, and may thus be enhanced by the genetic variability of the microbiota. In the holobiont, phenotypic plasticity is produced at different integration levels (i.e., organism, super-organism) and is also genetically accommodated or assimilated at those scales (i.e., within the plant and mutualist genomes and therefore the hologenome). The holobiont thus displays greater potential phenotypic plasticity and a higher genetic potential for mutation than the plant alone, thereby supporting selection and the accommodation process in the hologenome. In this context, the variability of both mutualist-induced and epigenetically-induced plasticity in the holobiont could function as a "toolbox" for plant adaptation through genetic accommodation. Consequently, mechanisms such as epigenetics allowing a production of phenotypic variants in response to the environment should be of importance in the holobiont context.

Do microbiota and epigenetic mechanisms act separately or can they interact?

Both epigenetic and microbiota interactions allow plants to rapidly adjust to environmental conditions and subsequently support their fitness (Figure 1).
Phenotypic changes ascribable to mutualists, and mutualist transmission to progeny, are often viewed as epigenetic variation (e.g., Gilbert et al., 2010). However, this kind of plasticity is closer to an "interspecies induction of changes" mediated by epigenetics rather than "epigenetics-induced changes" based solely on epigenetic heritable mechanisms (see the section on epigenetics for a restricted definition). Apart from the difficulty of drawing a clear line between epigenesis and epigenetics (Jablonka and Lamb, 2002), evidence is emerging of the involvement of epigenetic mechanisms in mutualistic interactions. An experiment revealed changes in DNA adenine methylation patterns during the establishment of symbiosis (Ichida et al., 2007), suggesting an effect of this interaction on the bacterial epigenome or, at least, a role of epigenetic mechanisms in symbiosis development. Correct methylation status also seems to be required for efficient nodulation in the Lotus japonicus - Mesorhizobium loti symbiosis (Ichida et al., 2009). These epigenetic mechanisms and microbiota sources of plant phenotypic plasticity may act synergistically, although this idea has never convincingly been addressed.
As far as we know, different important issues bridging epigenetic mechanisms and microbiota remain to be elucidated, such as (1) the frequency of epigenetic marking in organisms involved in mutualistic interactions, (2) the range of phenotypic plasticity associated with these marks either in the plant or in microorganisms, (3) the consequences of these marks for holobiont phenotypic integration, (4) the functional interplay between epigenetic mechanisms and microbiota in plant phenotype expression, and (5) the inheritance of epigenetic mechanisms and thus their impact on symbiosis development, maintenance and co-evolution. To answer these questions, future studies will need to involve surveys of plant genome epigenetic states (e.g., the methylome) in response to the presence/absence of symbiotic microorganisms. Recent progress made on bacterial methylome survey methods should provide useful tools to design future experiments on this topic (Sánchez-Romero et al., 2015). Although research on the interaction between microbiota and epigenetics is in its infancy in plants, recent work, mostly on humans, supports existing linkages. Indeed, a clear link has been evidenced between microbiota and human behavior (Dinan et al., 2015). Other examples of microbiota effects are their (i) deep physiological impact on the host through serotonin modulation (Yano et al., 2015) and (ii) incidence on adaptation and evolution of the immune system (Lee and Mazmanian, 2010). Such findings should echo in plant-symbiont research and encourage further investigations on this topic.
More broadly, and despite the above-mentioned knowledge gaps, our current understanding of both epigenetic mechanisms and the impact of microbiota on the expression of plant phenotype invites us to take those phenomena into consideration in species evolution and diversification (Martin and Pfennig, 2010). Rapid shifts in plant traits, as allowed by both microbiota and epigenetics, would provide accelerated pathways for evolutionary divergence. In addition, such rapid trait shifts also permit rapid character displacement. Induction of DNA methylation may occur more rapidly than genetic modifications and could therefore represent a way to cope with environmental constraints on very short time scales (during the individual's lifetime; Rando and Verstrepen, 2007). In parallel, microbiota-induced plasticity is achieved both at a short time scale (i.e. through recruitment) and at larger time scales (i.e. through symbiosis evolution; Figure 2). Because of the observation of transgenerational epigenetic inheritance, the relevance of epigenetically-induced variations is a current hot topic in the contexts of evolutionary ecology and environmental changes (Bossdorf et al., 2008; Slatkin, 2009; Zhang et al., 2013; Schlichting and Wund, 2014). This has stimulated renewed interest in the 'extended phenotype' (Dawkins, 1982). The central idea of Dawkins' 'extended phenotype' (Dawkins, 1982) is that phenotype cannot be limited to biological processes related to gene/genome functioning but should be 'extended' to consider all effects that a gene/genome (including organism behavior) has on its environment.
For example, the extended phenotype invites us to consider not only the effect of the plant genome on its resource acquisition but also the effect of the genome on the plant's symbionts as well as on nutrient availability for competing organisms.

No specialization and only a limited foraging response to the heterogeneous distribution of AM fungi was found. An effect of the AM fungal species on plant mass allocation and ramet production, but not on spacer length, was detected.

In nature, environmental conditions, especially resources, vary spatially and temporally even at a fine scale. The spatial variations in resource abundance are perceived by organisms as environmental heterogeneity as long as the patches of resources are smaller than the organism and larger than the response unit 1,2. Plants, because of their sessile lifestyle, have to cope with this heterogeneity and have evolved complex and diverse buffering mechanisms, such as phenotypic plasticity (i.e. production of different phenotypes from a single genotype 3). Phenotypic plasticity improves the plant's ability to respond to resource heterogeneity during its lifetime by allowing trait adjustment to current environmental conditions 4,5,6,7. Plasticity is expressed at different modular levels in plants 8, ranging from first-order modules such as the leaf or root to a superior modular level such as the ramet (see Harper, 1977 for a description of modular structure 9). This plastic response results from a trade-off between environment exploration for a resource (e.g. foraging for nutrient-rich patches) and resource exploitation (e.g. uptake of the resource and establishment in the patches). In clonal plants, each individual consists of a set of ramets connected through belowground (i.e. rhizomes) or aboveground horizontal modified stems (i.e. stolons). These connections result in a network structure and promote plant propagation in space (i.e. physical integration).
In some species they also allow sharing of information and resources within the physical clone (i.e. physiological integration 10). As a result of this network architecture, clonal individuals experience spatial heterogeneity at centimetric scales. They also share information about this environmental signaling between ramets. This leads to plastic responses at the local scale to optimize performance, through resource-sharing, at the clone level 11. The response of clonal individuals to this small-scale heterogeneity results from a resource exploitation-exploration trade-off. Exploration responses are mostly linked to ramet positioning and induce modifications in clonal network architecture to allow foraging for available resources 12,13. The optimal foraging theory predicts that ramets should maximize resource acquisition by aggregating in rich patches and avoiding poor patches 12,14,15,16. Such aggregation may be achieved through modifications of the horizontal architecture of clonal plants, such as internode shortening or increased branching 12,17,18. Exploitative responses involve changes in resource acquisition traits. As a result of physiological integration, each ramet may specialize in acquiring the most abundant resource (division of labor theory 19) and share it throughout the network. This specialization can involve modifications in ramet resource allocation patterns 20,21, whereby a higher root/shoot ratio is observed in ramets developing in nutrient-rich patches, and a lower ratio in light-rich patches 20,22. Clonal foraging and ramet specialization have been demonstrated in response to soil nutrient heterogeneity 22,23,24,25. However, under natural conditions, plant nutrient uptake is mostly mediated by symbiotic micro-organisms such as Arbuscular Mycorrhizal (AM) fungi, which colonize ~80% of terrestrial plants 26. AM fungal symbionts (i.e.
Glomeromycota) colonize roots and develop a dense hyphal network exploring the soil to 'harvest' mineral nutrients for the plant's benefit 26. Plants with mycorrhized roots can thus attain higher rates of phosphorus and nitrogen absorption (×5 and ×25, respectively) than plants with non-mycorrhized roots 27,28. In turn, AM fungi obtain from plants the carbohydrates required for their survival and growth 29,30. In natural conditions, plant roots are colonized by a complex community of AM fungi 31. These fungi display different levels of cooperation, ranging from good mutualists to more selfish ones (i.e. cheaters 32). Within the root-colonizing fungal assemblage, plants have been shown to preferentially allocate carbon toward the best cooperators, thereby favoring their maintenance over cheaters 33. The additive nutrient supply provided by AM fungi can be regarded as a resource for the plant. An important expectation arising from this is that plants may respond to the heterogeneous presence of AM fungi as they do to a nutritive resource. The plant could thus forage (optimal foraging theory) or specialize (division of labor theory) in response to AM fungi presence. The opposite hypothesis is that AM fungi and foraging or specialization are alternative ways to cope with resource heterogeneity, implying that plants with clonal mobility do not rely on AM fungi to respond to heterogeneity. Our aim in this study was to analyze a plant's plastic response to AM fungal heterogeneity by performing two experiments under controlled conditions with the clonal herb Glechoma hederacea. In the first experiment, we tested the plant's foraging and specialization response to the heterogeneous distribution of AM fungi. The treatments consisted of a mixture of three species of AM fungi that have been shown to display varying degrees of cooperativeness in previous studies.
Two assumptions were tested: (i) according to the optimal foraging theory, clones should aggregate ramets in the patches containing AM fungi by reducing their internode lengths, and (ii) according to the division of labor theory, clones should specialize ramets with a higher allocation to roots in the presence of AM fungi than in their absence. To better understand the results obtained in experiment 1, and because of the potential impact of different cooperation levels in the fungi involved in this symbiosis, we carried out a second experiment to test the effect of AM fungal identity on the foraging and specialization response of G. hederacea. We tested (i) the effect on plant traits of the individual presence of the three different species of AM fungi used in the assemblage treatment and (ii) the assumption that AM fungal species differ in their effects on the traits involved in foraging and specialization responses. In both experiments, the performance of clonal individuals was expected to be reduced in the absence of AM fungi.

METHODS

Biological material

We used the clonal, perennial herb Glechoma hederacea, a common Lamiaceae in woods and grasslands. G. hederacea clones produce new erect shoots at the nodes, at regular intervals of 5 to 10 cm, on plagiotropic monopodial stolons (i.e. aboveground connections). Each ramet consists of a node with two leaves, a root system and two axillary buds. In climatic chambers with constant conditions, G. hederacea does not flower and displays only vegetative growth 12. This species is known to exhibit foraging behavior 12,22,45 and organ specialization 22 in response to nutrient or light heterogeneity. The ramets used in our experiments were obtained from the vegetative multiplication of 10 clonal fragments taken from 10 different locations sufficiently spaced to obtain different genotypes.
Plants were cultivated for three months under controlled conditions to avoid parental effects linked with their original habitats 51. Vegetative multiplication was carried out on a sterilized substrate (50% sand and 50% vermiculite, autoclaved at 120°C for 20 minutes) to ensure the absence of AM fungal propagules. For each experiment, the transplanted clonal unit consisted of a mature ramet (leaves and axillary buds) with one connective internode (to provide resources to support ramet survival) 52, and without roots (to avoid prior mycorrhization). The AM fungal inocula used in both experiments were Glomus species: Glomus intraradices (see 53 for discussion on G. intraradices reclassification), Glomus custos, and Glomus clarum. These AM species were chosen to limit phylogenetic differences between fungal life-history traits 54. G. intraradices has been shown to provide beneficial P uptake in Medicago truncatula 33. The use of three different AM species also ensures a range of cooperativeness among the symbionts. The inocula used in the two experiments consisted of a single-species inoculum produced in in vitro root cultures (provided by S.L. Biotechnologia Ecologica, Granada, Spain) or a mixture of equal proportions of all three inocula. The inoculations consisted of an injection of 1 mL of inoculum directly above the roots, and were administered when the plants had roots of 0.5 to 1 cm in length.

Experimental conditions

Experiment 1 was designed to test the foraging and specialization responses of G. hederacea to the heterogeneous distribution of AM fungi. Experiment 2 tested the effect of the species of AM fungus on the plant traits involved in these responses.
Both experiments were carried out with cultures grown on the same sterile substrate (50% sand, 50% vermiculite) in a climate-controlled chamber with a diurnal cycle of 12 h day/12 h night at 20°C. Plants were watered with deionized water every two days to control nutrient availability. Necessary nutrients were supplied by watering the plants every 10 days with a fertilizing Hoagland's solution with strongly reduced phosphorus content to ensure ideal conditions for mycorrhization (i.e. phosphorus stress) 55,56,57. At each watering, the volumes of deionized water and fertilizing solution per pot were 25 mL and 250 mL for the first and second experiments, respectively. We also controlled nutrient accumulation during the experimental period by using pierced pots that allowed evacuation of the excess watering solution. To prevent nutrient enrichment due to the inoculum, AM fungi-free pots were also inoculated with a sterilized inoculum (autoclaved at 100°C for five minutes).

Experiment 1: Effect of heterogeneous AM fungal distribution on G. hederacea foraging and specialization responses

The responses of G. hederacea to four different spatial distributions of AM fungi were tested. G. hederacea was grown in series of 11 consecutive pots: two homogeneous treatments with the presence (P) or absence (A) of AM fungi in all pots; and two heterogeneous treatments with two patches of 5 pots, either presence then absence (PA) or absence then presence (AP) (Fig. 1). The two latter treatments were used to take into account a potential effect of ramet age on the plant's response to heterogeneity. These treatments were replicated for 10 clones of Glechoma hederacea (see Methods section "Biological material" for details on the plants used). Each clone was grown in plastic pots (8 × 8 × 7 cm) filled with sterile substrate. Only one ramet was allowed to root in each pot and plant growth was oriented in a line by removing lateral ramifications.
The initial ramet, in all treatments, was planted in a pot without AM fungi. For each treatment, the inoculum consisted of a mixture of the three AM fungal species (G. clarum, G. custos and G. intraradices). Inoculations were started in the second pot of each line, which actually contained the fourth ramet of the clone (exceptionally, the first three ramets rooted in the same first pot due to internode shortness, see Fig. 1). Inoculations were administered for each ramet separately when the ramet had roots of 0.5 to 1 cm in length, to avoid a ramet-age effect on the AM fungal colonization process. The clones were harvested when the final ramet (number 13) had rooted in the 11th pot. This ensured that each clone had 10 points for sampling environmental quality. The 5th, 6th, 10th and 11th ramets of each clone in the pot line (Fig. 1) were used for statistical analyses. These ramets corresponded to the second and third ramets experiencing the current patch quality. Indeed, Louâpre et al. (2012) emphasized the role of the "past experience" of the clone in developing a plastic response. The choice of these four ramets thus ensured that the clone had enough sampling points to assess the quality of its habitat, i.e. in the patches where AM fungi were present or absent in the heterogeneous treatments, and to adjust accordingly when initiating new ramets 35. Each ramet was carefully washed after harvesting. The foraging response was assessed by measuring the length of the internode just after the ramet. An aggregation of ramets with shortened internodes was expected in patches where AM fungi were present, and an avoidance of patches, i.e. production of longer internodes, was expected where AM fungi were absent. Modifications in ramification production linked to the effect of the treatment were checked by recording the number of ramifications produced by the ramets throughout the experiment.
The specialization response was examined by measuring the root/shoot ratio (R/S), i.e. the biomass allocated to the below- and above-ground resource acquisition systems, after separating the roots and shoots and oven-drying for 72 h at 65°C. We expected a higher R/S ratio in patches where AM fungi were present than in patches where they were absent. Clone performance was assessed from (i) the total biomass of the clone, calculated as the sum of ramet roots, shoots and stolons after oven-drying for 72 h at 65°C, and (ii) the growth rate, calculated as the number of days needed for the clone to develop the 10 sampling ramets, i.e. the number of days between rooting of the 4th ramet and final harvesting.

Experiment 2: Effect of AM fungal identity on G. hederacea performance and traits

The effects of individual AM fungal species on G. hederacea foraging and specialization traits were tested using four culture treatments: 1) no AM fungi, 2) with Glomus custos, 3) with Glomus intraradices, and 4) with Glomus clarum. Each treatment was replicated eight times with four related ramets assigned to each treatment replicate (32 clones in total), to control for plant-genotype effects. The initial ramet of each clone had previously been cultivated on sterile substrate to ensure root system development and facilitate survival after transplanting. The initial ramets were then transplanted into pots (27.5 × 12 × 35 cm) filled with substrate. The AM fungal inoculations consisted of three injections of 1 mL of inoculum directly onto the roots of the first three rooted ramets, to ensure colonization of the whole pot. The plants were harvested after six weeks. The following traits involved in foraging were measured: (i) the length of the longest primary stolon (of order 1) as an indicator of the maximum spreading distance of space colonization, and (ii) the number of ramifications as an indicator of lateral spreading and clone densification.
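The performance and specialization measures described above are simple aggregates of ramet dry weights and dates. A minimal sketch of these calculations (with hypothetical field names and example values, not the authors' data or code) could be:

```python
from datetime import date

def total_biomass(ramets):
    """Total clone biomass: summed dry weights (g) of roots, shoots and stolons."""
    return sum(r["roots"] + r["shoots"] + r["stolons"] for r in ramets)

def growth_rate_days(rooting_of_4th_ramet, harvest_day):
    """Growth-rate proxy (experiment 1): number of days the clone needed to
    develop the 10 sampling ramets, i.e. days between rooting of the 4th ramet
    and final harvest (fewer days = faster growth)."""
    return (harvest_day - rooting_of_4th_ramet).days

def root_shoot_ratio(ramet):
    """Specialization proxy: below- vs above-ground biomass allocation (R/S)."""
    return ramet["roots"] / ramet["shoots"]

# Hypothetical dry weights (g) for two ramets of one clone
ramets = [{"roots": 0.12, "shoots": 0.30, "stolons": 0.08},
          {"roots": 0.10, "shoots": 0.25, "stolons": 0.07}]
print(round(total_biomass(ramets), 2))                        # 0.92
print(growth_rate_days(date(2024, 3, 1), date(2024, 4, 10)))  # 40
print(round(root_shoot_ratio(ramets[0]), 2))                  # 0.4
```

The R/S ratio is computed per ramet (patch level), whereas total biomass and growth rate are computed once per clone, matching the levels at which the foraging and specialization hypotheses are tested.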
We also measured biomass allocation to the roots, shoots and stolons at the clone level, i.e. traits involved in the specialization response, after oven-drying for 72 h at 65°C. Plant performance for the entire clone was determined from (i) the total biomass, calculated as the sum of the dry weights of the shoots, roots and stolons after oven-drying for 72 h at 65°C, and (ii) the number of ramets, i.e. the number of potential descendants. Performance was expected to be higher in pots inoculated with fungi and to differ depending on the fungal species.

Statistical analysis

For experiment 1, to test whether G. hederacea develops a plastic foraging (internode length) or specialization (R/S ratio) response to the heterogeneous distribution of AM fungi, ANOVA analyses were performed using the linear mixed-effects model procedure in R 3.1.3 58 with the packages "nlme" 59 and "car" 60. Ramets of the same age were compared between genotypes to control for a possible effect of ramet age. For experiment 2, to determine whether the species of AM fungi induced changes in plant traits and performance, ANOVA analyses were performed using linear mixed models with the same R packages and version described above. Resource allocation was tested by using the clone total biomass as a covariate to take into account the trait variance associated with clone growth. For both experiments, genotype-induced variance and data dependency were controlled by considering the treatment (four modalities) as a fixed factor and the plant-clone genotype as a random factor. The effect of genotype was assessed by comparing the intra- and inter-genotype variance and was considered significant when the inter-genotype variance was strictly superior to the intra-genotype variance. When a significant effect of treatment was detected by ANOVA, post hoc contrast tests were performed using the "doBy" package 61 to test for significant differences between modalities.
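The genotype-effect criterion stated above (inter-genotype variance strictly greater than intra-genotype variance) can be sketched as follows. This is an illustrative reading of the stated rule with invented trait values, not the authors' R code (which fitted nlme mixed models):

```python
from statistics import mean, pvariance

def genotype_effect(values_by_genotype):
    """Genotype effect per the stated criterion: significant when the
    inter-genotype variance (variance of the genotype means) strictly
    exceeds the mean intra-genotype variance."""
    means = [mean(v) for v in values_by_genotype.values()]
    inter = pvariance(means)
    intra = mean(pvariance(v) for v in values_by_genotype.values())
    return inter > intra

# Hypothetical internode lengths (cm) for three genotypes: variation within
# genotypes dominates, so no significant genotype effect is detected.
data = {"g1": [2.0, 2.1, 1.9], "g2": [2.05, 2.0, 2.1], "g3": [1.95, 2.0, 2.05]}
print(genotype_effect(data))  # False

# With strongly diverging genotype means, the criterion flips.
data2 = {"g1": [1.0, 1.1, 0.9], "g2": [3.0, 3.1, 2.9]}
print(genotype_effect(data2))  # True
```

The first case mirrors the outcome reported in the Results section, where the inter-genotypic variance never exceeded the intra-genotypic variance.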
When necessary, the normality of the residuals was ensured by subjecting the data to log transformation. The total clone biomass (summed dry weights of shoots, roots, and stolons) was used as a covariate to account for variance due to differences in clone performance.

RESULTS

In both experiments, G. hederacea trait variation was not significantly influenced by plant genotype (i.e. the inter-genotypic variance was not greater than the intra-genotypic variance).

Experiment 1: Effect of heterogeneous AM fungal distribution on G. hederacea foraging and specialization responses

The hypothesis of modified foraging and specialization responses of Glechoma hederacea to the patchiness of AM fungal presence was tested by comparing the internode lengths and the R/S ratio between the treatments for the 5th, 6th, 10th and 11th ramets (see Methods for details on ramet selection and experimental design). We found a significant effect of the AM fungal treatment on the 10th internode length (P=0.005; F=5.74) (Tab. 1, Fig. 2), with a longer internode in the PA treatment (AM fungi present then absent) than in the absence (A) and presence (P) treatments. Conversely, no significant effect was found for the 5th ramets (P=0.71; F=0.45) and 6th ramets (P=0.15; F=1.92) (Fig. 2). The 11th ramets seemed to display the same response patterns as the 10th ramets, but no significant differences were detected between the treatments (P=0.93; F=0.15), due to a partially bimodal distribution of the data in the P treatment, with a few individuals exhibiting longer stolons. In addition, the number of ramifications for the 5th, 6th, 10th, and 11th ramets was not significantly affected by treatment (Tab. 1). No changes in the R/S ratio in response to AM fungal treatment were detected in any of the four tested ramets (Tab. 1). In experiment 2, resource allocation depended on the AM fungal species, with plants inoculated with Glomus intraradices allocating significantly fewer resources to stolons (Fig. 3) and more to shoots (P=0.019, F=4.24) than plants without AM fungi.
The allocation to roots, however, was not dependent on the treatment (P=0.68; F=0.50).

DISCUSSION

The plants did display some foraging behavior in response to AM fungal heterogeneity, as elongation of the internodes was observed in patches without AM fungi after the plant had experienced patches with AM fungi. This behavior would correspond to an avoidance of resource-poor patches, as expected from the optimal foraging theory. However, this behavior was only detected at a particular ramet age (10th ramets), indicating a possible role of the ontogenic state in the development of the plastic response 34. This may be due to a "lag time" in the plant's response based on the need for environmental sampling. Indeed, Louâpre et al. (2012) demonstrated that clonal plants may need a minimum number of sampling points as benchmarks in order to perceive and respond to resource availability 35. In their study, Potentilla reptans and P. anserina started to respond to the treatment after the 5th internode, suggesting a strong effect of patch size. A similar patch-size effect had already been demonstrated in modeling studies 10,36. No plastic modifications corresponding to a ramet specialization of G. hederacea in response to AM fungal spatial heterogeneity were found either. Contrary to the results expected under the specialization theory, biomass was not preferentially allocated to the roots in patches with AM fungi or to the shoots in patches without AM fungi. This absence of response was recorded for all the ramet ages tested. These results (a mild foraging response and no specialization) lend credit to the theory, supported by Ornitchenko & Zobel (2000), that species with high mobility do not rely on AM fungi to cope with resource heterogeneity 37. Glechoma, with its high clonal mobility, should thus show no response to AM fungi. However, our results do not fit the literature predictions for specialization and foraging responses 38.
This divergence may be explained by two alternative hypotheses that are developed in the following sections. The first explanation is linked with the occurrence of an individual effect of the species of AM fungus on plant traits, which may predominate or modify the response to the presence/absence of AM fungi when all three species exist together (experiment 2); the second is linked with reduced physiological integration, due either to a direct effect of AM fungi on this plant trait or to the absence of a clear contrast between the different patches sensed by the plant. In our second experiment, we demonstrated that the architectural traits involved in the plant's foraging response were not affected by the species of AM fungi tested, consistent with the weak response detected in the first experiment. On the contrary, significant changes in resource allocation traits (linked to the specialization response) were detected, depending on the species of AM fungus. Only one species, G. intraradices, induced a change in allocation by the plant in comparison to the treatment without AM fungi, which led to an increased allocation to shoots at the expense of stolons. Modifications of plant phenotype depending on the AM fungal species have already been observed in such traits 39,40. These authors identified a significant effect of Glomus species isolates on branching, stolon length and ramet production in Prunella vulgaris and Prunella grandiflora. In the first analysis of an AM fungal genome, Tisserant et al. (2013) revealed existing pathways attributed to the synthesis of phytohormones or analogues 41. Such molecules would have a direct effect on host phenotype. Regarding the individual effect observed, the plant response in the presence of the G. intraradices symbiosis was coupled with decreased plant performance due to a diminution of ramet production relative to biomass in this treatment. In contrast, the G.
custos treatment led to a decrease in the potential number of descendants of the clone. According to experiment 1, root colonization by an inoculum containing three species had no effect on plant traits associated with specialization and foraging. This suggests two alternative hypotheses: i) G. intraradices may be less cooperative than G. custos with Glechoma hederacea and the result is a consequence of the plant's rewarding process toward the more cooperative fungus 33; and/or ii) root colonization by G. custos or G. clarum buffers the effect of G. intraradices due to a 'priority effect' (i.e. the order of arrival during colonization as a key to fungal community structure in roots) 41. To test this, the mycorrhization intensity of the three AM fungal species inoculated in the first experiment would need to be assessed by qPCR. Alternatively, the combined effects of the three AM fungal species on plant phenotype might result in the environment not being perceived as heterogeneous by the plant. This hypothesis is developed in the following section. The intraclonal plasticity predicted by the foraging and division-of-labor theories is based on the ability of ramets to sense environmental heterogeneity, share information and resources within the clonal network, and locally adapt and optimize the performance of the whole clone. The weak response of G. hederacea to AM fungal heterogeneity could thus be explained by a decrease in physiological integration that reduces the level of resource-sharing within the clone and prevents the plant from developing an optimized foraging or specialization response. This diminution could initially be due to the presence of AM fungi. Only a few studies have been carried out on the effect of AM fungi on the degree of integration 43. These authors demonstrated that AM fungi induced a decrease of physiological integration in the clonal plant Trifolium repens when grown in a heterogeneous environment.
This effect was dependent on the presence and richness of AM fungal species. Whether this observed diminution of physiological integration is due to a direct manipulation of the host plant phenotype by the fungi remains, as far as we know, unknown. Secondly, this diminution may depend on the individual plant's perception of environmental conditions, which might be sensed as homogeneous because the patch contrast is less important than expected. A reduction of plant integration is expected when the maintenance of high physiological integration is more costly than beneficial 44,45, such as when the environment is resource-rich, not spatially variable 46 or not sufficiently contrasted 10,47. Such a reduced contrast might result from the effect of the three AM fungal species on the plant phenotype (when used as a mixed inoculum), which is unlikely. A more probable mechanism of environment homogenization could result from AM fungal transfer through the stolons. Scanning electron microscopy of the clone cultures (see protocol in supplementary material) revealed the presence of hyphae on the stolon surface (Fig. 5). In addition, several cells close to the external surface of the stolon cross-section were invaded by structures which could be interpreted as fungi. DNA sequencing of stolon samples (Fig. 6) confirmed these results and demonstrated the presence of AM fungi in stolons. This suggests that fungi can be transferred from one ramet to another, at least by colonization of the stolon surface (as shown in Fig. 5a) and/or within the stolon (Fig. 5b). Whether fungi are passively or actively transferred through the plant's stolon tissues, and hence to all related ramets, remains an open question. Further studies are therefore needed to confirm these fungal transfers to plant clones and to measure their intensities in contrasted environments. Studies of the response of clonal plants to environmental heterogeneity have classically focused on abiotic heterogeneity 48,49.
Our study is the first to investigate the clonal response to a heterogeneous distribution of AM fungi, based on the assumption that AM fungi can be regarded as a resource for the plant. However, in response to the heterogeneous distribution of AM fungi, G. hederacea clones displayed only a weak foraging response and no specialization, which suggests that clones neither aggregate preferentially in patches with AM fungi nor maximize the proportion of their roots in contact with AM fungi. We provide a first explanation by highlighting the impact of AM fungal identity on plant phenotypes and more particularly on the allocation traits involved in specialization. More importantly, we provide evidence that stolons might be vectors for the transfer of micro-organisms between ramets, thereby buffering (through this dispersion of fungi) the initial heterogeneous distribution. If this is true, stolons will have to be regarded in a different way, and be seen as ecological corridors for the dispersion of micro-organisms allowing a continuity of partnership along the clone. Considering the plant as a holobiont 31,50, this novel view of stolon function is expected to stimulate new ideas and understanding about the heritability of microbiota in clonal plants. Acknowledgements This work was supported by a grant from the CNRS-EC2CO program (MIME project), the CNRS-PEPS program (MYCOLAND project) and by the French ministry for research and higher education. We thank S. Le Panse for performing the microscopy analysis and L. Leclercq for performing preliminary experiments. We are also grateful to D. Warwick for helpful comments and suggestions for modifications on a previous version of the manuscript. Author contribution NV, AKB, PV and CM conceived the ideas and experimental design. NV, AKB and CM did the experiments. NV did the data analyses. NV, AKB, PV and CM did the interpretations and writing of the publication.
Competing Financial Interests statement No competing financial interests. Chapter II: The heritability of the plant microbiota, toward the meta-holobiont concept II.1 Introduction Scientific context The findings of the first chapter suggested that the elongation of clonal plant stolons is accompanied by the transmission of a 'cohort' of microorganisms, including arbuscular mycorrhizal fungi, from the mother ramet to spatially distant descendants (i.e. other developing ramets). This chapter aims at experimentally testing this hypothesis and at reviewing its consequences for both plant performance and the conceptual understanding of plant functioning and evolution. Previous studies have evidenced the existence of vertical inheritance of endophytic symbionts colonizing host plants through seeds. The most described example of such transmission is the stress-protective endophyte Neotyphodium coenocephalum, which colonizes plant seeds and is transmitted to descendants in several grass species (Clay & Schardl, 2002; Selosse et al., 2004). However, this process is the only known example of true vertical transmission in plants. It represents the transfer of only a few plant-associated microorganisms, and at a single moment of the plant's life. In clonal plant networks, information and resources can be shared within the physical clone (i.e. physiological integration; Oborny et al., 2001). An additional level of integration may then occur through the sharing of microorganisms within the clonal network, as previously proposed by Stuefer et al. (2004) and suggested by our previous results (Article II). From a theoretical point of view, vertical and pseudo-vertical transmissions (i.e. inheritance of conspecific symbionts from parents to offspring sharing the same environment; Wilkinson, 1997) are advantageous because they limit the costs of foraging for suitable symbionts (Wilkinson & Sherratt, 2001).
In this context, microbiota heritability allows the plant to ensure environmental quality for its progeny. Vertical transmission would thus permit a "continuity of partnership" between the plant and its symbionts (Zilber-Rosenberg & Rosenberg, 2008). Recently, the understanding of host-symbiont interactions has evolved toward a holistic perception of the host and its associated microorganisms. The holobiont theory provides a theoretical framework for the study of host-symbiont interactions (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016). In the hologenome, a microorganism can be equated to a gene in the genome. Just as genes are inherited during sexual reproduction, a key parameter for the evolution of the hologenome is the heritability of microorganisms between host generations. In this context, the mechanism suggested in the previous chapter would integrate clonal plants into the range of organisms that can be considered as holobionts. Furthermore, the network structure of clonal plants suggests another layer of complexity in holobiont assembly, since holobionts would be susceptible to share a part of their microbiota within the clonal network. There is thus a need to develop an extension of the holobiont concept for clonal organisms organized as networks, "the meta-holobiont concept". Objectives of the chapter This chapter aims at testing the hypothesis of the heritability of a core microbiota in clonal plants and intends to extend the current theories regarding holobiont assembly and evolution to clonal plant networks. More precisely, we address the following questions: 1) Is there a transmission of a cohort of microorganisms in the clonal plant G. hederacea through stolon elongation? Is this cohort composed of specific microorganisms, thus constituting an inherited core microbiota? (Article III) 2) To what extent does this mechanism redefine the holobiont theory for clonal organisms organized in networks?
What are the perspectives of clonal plants as model organisms for the study of holobiont assembly? (Article IV) Methods This chapter is composed of two articles (III and IV). The first paper is based on an experimental approach to demonstrate the existence of a microbiota heritability mechanism in clonal plants. The second paper is an opinion paper addressing the consequences of this mechanism for the understanding of holobiont assembly in clonal networks. We tested the hypothesis of microorganism transmission to progeny through clonal integration with an experimental approach using the clonal herbaceous species Glechoma hederacea. Plants from 10 ecotypes were grown under controlled conditions in individual pots. The mother pot was filled with field soil to provide an initial microbial inoculum, and newly emitted ramets were forced to root in separate pots containing sterilized substrate. To detect endophytic microorganisms transferred from mother to daughter ramets, we sampled the roots of both mother ramets (growing in pots containing microorganisms) and daughter ramets (growing in sterile substrate without microorganisms), as well as the internodes connecting them. High-throughput amplicon sequencing of 16S and 18S rRNA genes was used to detect and identify Bacteria, Archaea and Fungi within the root endosphere and internodes. We constructed an opinion paper to address the significance of the above heritability mechanism for our understanding of plant fitness and evolution. In this review, we mobilize knowledge on clonal plant networks to propose hypotheses on whether they may use microorganism transfer as a tool for adaptation. We also mobilize knowledge from network theory and meta-community ecology to provide directions for future research on clonal networks linked to microbiota assembly and plant fitness. Main results In our experiment (Article III), we detected the presence of Archaea, Bacteria and Fungi within the root endosphere of the mother ramets.
Some of these microorganisms were also found within the stolons and the roots of the daughter ramets, comprising fungi and bacteria but not Archaea. We thus demonstrated the heritability of a part of the plant microbiota to its progeny. In addition, the endophytic communities of daughter ramet roots were found to be similar to each other, while they were different from the original mother communities. We thus demonstrated a filtration process during the transmission of the microbiota (decrease in richness and homogenization of the transmitted communities). Our results confirm the hypothesis of microorganism transmission between ramets, thus constituting a heritable core microbiota (Article III). Whether the transmission could occur in the reverse direction, i.e. from the daughter to the mother ramets, remains an open question. Microbiota transmission to the progeny is advantageous because it provides suitable symbionts and thus ensures habitat quality for the progeny (Wilkinson & Sherratt, 2001). Our results open new questions on the ability of clonal plants to preferentially select and transmit particular sets of microorganisms. Indeed, we observed that microorganisms were filtered during the transmission process, but the modalities of this filtration remain unknown. Microorganisms could be filtered based on their ability to colonize the internodes (i.e. dispersal abilities), or alternatively they could be filtered depending on the functions they provide to the mother ramet. In the latter case, it would suggest an active filtering by the plant. The expectation is thus that beneficial organisms such as cooperative AM fungi would be preferentially transmitted to the progeny. Our results invite us to revise our understanding of clonal plants. In particular, an extension of the holobiont concept toward the meta-holobiont has been introduced for clonal plant networks (Article IV).
In addition, we propose that the sharing of microorganisms within the clonal network should be apprehended through the network theory and meta-community frameworks. The network theory framework should provide insights on how the network structure and the associated sharing of microorganisms could enhance the resilience and performance of the network as a whole. The meta-community framework should help to understand the impact of microorganism transmission on microbial community assembly and survival, and on the plant's management of useful microorganisms. Introduction All living plants experience interactions with ectospheric and endospheric microorganisms and are known to harbor a great diversity of symbionts including fungi (Vandenkoornhuyse et al., 2002; Lê Van et al., 2017), bacteria (Bulgarelli et al., 2012; Lundberg et al., 2012; Schlaeppi et al., 2014) and Archaea (Edwards et al., 2015), which collectively form the plant microbiota. This microbiota performs ecological functions that extend the plant's ability to adapt to environmental conditions (Bulgarelli et al., 2013; Vandenkoornhuyse et al., 2015). Studies using maize cultivars demonstrated that genetic control of the composition of the microbial rhizosphere by the host plant was detectable, even if limited (Peiffer et al., 2013). Plant microbiota composition is thus, at least in part, not only a consequence of the pool of microorganisms available for recruitment in the surrounding soil but also of the plant's selective recruitment within the endosphere. This filtering system includes plant defense mechanisms (Berendsen et al., 2012; Yamada et al., 2016) and promotion of the best cooperators through a nutrient embargo toward less beneficial fungi (Vandenkoornhuyse et al., 2015; Kiers et al., 2011).
From a theoretical point of view, vertical and pseudo-vertical transmissions (i.e. inheritance of conspecific symbionts from parents to offspring sharing the same environment; Wilkinson et al., 1997) are advantageous because they limit the costs of foraging for suitable symbionts (Wilkinson and Sherratt, 2001). Vertical transmission would thus permit a "continuity of partnership" between the plant and its symbionts (Zilber-Rosenberg and Rosenberg, 2008). In this context, microbiota heritability is also a way for the plant to ensure environmental quality for its progeny. In natura, plants can reproduce either by seed production or by clonal multiplication (van Groenendael and de Kroon, 1990; Hendricks et al.). Some studies have evidenced a vertical inheritance of endophytic symbionts colonizing host plants through the seeds: the most well-known example is perhaps the transmission of the stress-protective endophyte Neotyphodium coenocephalum to the descendants in several grass plant species (Clay and Schardl, 2002; Selosse et al., 2004). Recent findings suggest that the vegetative elongation of the horizontal stems forming the clonal plant network is accompanied by the transmission of a 'cohort' of microorganisms, including arbuscular mycorrhizal fungi, to spatially distant descendants (Vannier et al., 2016). This form of heritability of microorganisms to plant progeny is not mediated environmentally (i.e. through environment sharing) or sexually. Such a process would support the niche construction of plant progeny, while microorganisms could benefit from a selective dispersal vector allowing them to reach a similar and hence suitable host. Transmission in clonal plants has been demonstrated to involve information- and resource-sharing within the physical clone (i.e. physiological integration; Oborny et al., 2001).
An additional level of integration might occur through the sharing of microorganisms within the clonal network, as previously proposed by Stuefer et al. (2004). We tested this hypothesis of microorganism transmission to progeny through clonal integration and addressed the new concept of core microbiota heritability in clonal plants, using the clonal herbaceous species Glechoma hederacea as a model. The growth form of this plant consists of a network of ramets connected through horizontal stems (i.e. aerial stolons), one of the most widespread forms of clonality (Zilber-Rosenberg and Rosenberg, 2008). Plants from 10 ecotypes were grown under controlled conditions. First, a juvenile ramet without roots (mother ramet) was transplanted into a pot containing field soil. Plant growth was oriented by forcing the newly emitted ramets (daughter ramets) of the two ramifications to root into separate pots containing sterilized substrate (Figure 1). Our aim was to detect the endophytic microorganisms present in the mother ramet roots and transferred to the daughter ramets through the clone's stolons. High-throughput amplicon sequencing of 16S and 18S rRNA genes was used to detect and identify Bacteria, Archaea and Fungi within the root endosphere and the stolon internodes. Control pots randomly distributed in the experiment were also analyzed to remove from the dataset all operational taxonomic units (OTUs) which could not be attributed to a plant-mediated transfer of microorganisms (see methods in supplementary information). Material and methods Biological material We used the clonal, perennial herb Glechoma hederacea, which is a common model for studying clonal plant responses to environmental constraints (Slade and Hutchings, 1987; Birch and Hutchings, 1994; Stuefer et al., 1996). G. hederacea clones produce new erect shoots at the nodes at regular intervals of 5 to 10 cm (the internodes) on plagiotropic monopodial stolons (i.e. aboveground connections).
Each ramet consists of a node with two leaves, a root system and two axillary buds. In climatic chambers with controlled conditions and in the absence of enriched substrate, G. hederacea does not invest in flowering but displays only vegetative growth (Birch and Hutchings, 1994). The ramets used in our experiments were obtained from the vegetative multiplication of 10 clonal fragments taken at 10 different locations separated by at least 1 km, to sample different ecotypes. Plants were grown for three months under controlled conditions to limit parental effects related to their geographic location and habitats (Dyer et al.). Vegetative multiplication was carried out on a sterilized substrate (50% sand and 50% vermiculite, autoclaved twice at 120°C for 1 h). Experimental conditions Experiments were carried out with cultures grown on the same sterile substrate (50% sand, 50% vermiculite) in a climate-controlled chamber with a diurnal cycle of 12 h day/12 h night at 20°C. Plants were watered with deionized water every two days to avoid a bias in nutrient availability. Necessary nutrients were supplied by watering the plants every 10 days with a low-phosphorus watering solution to favor mycorrhization (Oborny et al., 2001). At each watering, the volumes of deionized water and fertilizing solution per pot were 25 mL. To test for the transmission of microorganisms within the clonal network, we transplanted an initial ramet (mother ramet) into a pot with field soil and oriented its growth to force the newly emitted ramets to root in different individual pots containing sterilized substrate (Figure 1).
During the experiment, secondary ramifications of daughter ramets were removed to limit spread and confine the growth of the plant to a simple network of five ramets comprising the mother ramet and four daughter ramets equally distributed between two stolons (two on each primary stolon). By using two stolons, we could test whether the potential transmission was systematic within the clone or whether this transmission varied between stolons (i.e. transfer of random organisms from the mother pool). The transplanted clonal unit (i.e. the mother ramet) consisted of a mature ramet (leaves and axillary buds) with one connective stolon internode (to provide resources to support ramet survival; Huber and Stuefer, 1997), and without roots (to avoid prior colonization of the roots by micro-organisms). Field soil was collected from a grassland harboring native Glechoma hederacea and located in the experimental garden of the University of Rennes. The soil was then sieved through a 0.5 cm mesh to remove stones and roots. The experiment was stopped and the ramets harvested when the clone had reached the stage with a mother ramet and four rooted daughter ramets. The composition of endospheric microorganisms in the root and internode samples was analyzed by separating the clonal network into stolon internodes, roots and shoots for both the mother and the daughter ramets. Each internode and root sample was meticulously washed, first with water, secondly with a 1% Triton X-100 (Sigma) solution (three times) and lastly with sterile water (five times). This procedure ensured removal of the ectospheric microorganisms (Vandenkoornhuyse et al., 2007). In order to control for potential contamination, three control pots were also randomized into the experimental design. These pots were filled with the same sterile substrate and watered similarly to the other pots.
Substrate from these control pots was sampled at the end of the experiment so that all contaminant microorganisms that were not plant-transmitted could be removed from the sequence analyses and from all subsequent statistical analyses. All root, internode and substrate samples were frozen at -20°C before DNA extraction and subsequent molecular work. DNA extraction and amplification DNA was extracted from cleaned roots and internodes, as well as from the substrate of control pots, using the DNeasy plant mini kit (Qiagen). The 18S rRNA gene was PCR-amplified using the fungal primers NS22b (5'-AATTAAGCAGACAAATCACT-3') and SSU817 (5'-TTAGCATGGAATAATRRAATAGGA-3') (Lê Van et al., 2017). The conditions for this PCR comprised an initial denaturation step at 95°C for 4 min followed by 35 cycles of 95°C for 30 s, 54°C for 30 s and 72°C for 1 min, with a final extension step at 72°C for 7 min. The 16S rRNA gene was amplified using the bacterial primers 799F (5'-AACMGGATTAGATACCCKG-3') and 1223R (5'-CCATTGTAGTACGTGTGTA-3'). The conditions for this PCR consisted of an initial denaturation step at 94°C for 4 min followed by 32 cycles of 94°C for 30 s, 54°C for 30 s and 72°C for 1 min, with a final extension step at 72°C for 10 min. The 16S rRNA gene was also amplified using a nested PCR with Archaea primers. The first PCR primers were Wo_17F (5'-ATTCYGGTTGATCCYGSCGRG-3') and Ar_958R (5'-YCCGGCGTTGAMTCCAATT-3'), and the PCR conditions comprised an initial denaturation step at 94°C for 2 min followed by 35 cycles of 94°C for 30 s, 57.5°C for 50 s and 72°C for 50 s, with a final extension step at 72°C for 10 min. The second PCR primers were Ar_109F (5'-ACKGCTCAGTAACACGT-3') and Ar_915R (5'-GTGCTCCCCCGCCAATTCCT-3'), and the PCR conditions comprised an initial denaturation step at 94°C for 4 min followed by 32 cycles of 94°C for 30 s, 57°C for 30 s and 72°C for 1 min, with a final extension step at 72°C for 10 min.
All amplification reactions were prepared using Illumina RTG PCR beads with 2 µL of extracted DNA, and target PCR products were visualized by agarose gel electrophoresis. Sequencing and data trimming All PCR amplification products were purified using the Agencourt AMPure XP kit. After purification, the amplification products were quantified and their quality checked using an Agilent high-sensitivity DNA chip for Bioanalyzer and Invitrogen fluorimetric quantification. All PCR amplification products were then subjected to an end-repair step and adaptor ligation using the NEB library preparation kit. Multiplexing was done with a PCR step using NEBNext Ultra 2 multiplex oligos (dual index). Multiplexed products were then quantified and quality-checked using an Agilent high-sensitivity DNA chip for Bioanalyzer and quantitative PCR with SmartChip Wafergen. Amplicon libraries were pooled at equimolar concentration and paired-end sequenced (2x250 bp) with an Illumina MiSeq instrument. Data trimming consisted of different steps: primer removal (Cutadapt) and classical sequence quality filtering. An additional step consisted of checking the sequence orientation using a homemade script. This stringent data trimming resulted in 9,592,312 reads. Trimmed sequences were then analyzed using the FROGS pipeline (Escudié et al.) (X.SIGENAE [http://www.sigenae.org/]). The FROGS pre-process was performed with a custom protocol (Kozich et al., 2013) for Archaea and Fungi and with the FROGS standard protocol for bacterial reads. In this pre-process, bacterial reads were assembled using FLASH (Magoč and Salzberg, 2011). The clustering step was performed with SWARM to avoid, in an innovative manner, the use of identity thresholds to group sequences into OTUs (Mahé et al., 2014).
Following the pipeline designers' recommendations, a de-noising step was performed with a maximum distance of aggregation of 1, followed by a second step with a maximum distance of aggregation of 3. Chimeras were filtered with the FROGS remove-chimera tool. A filter was also applied to keep those OTUs with sequences in at least three samples, to avoid the presence of artificial OTUs. All statistical analyses were also done with a five-sample filter and the results were similar. We herein present only the R2 Fungi and R1 Archaea results, based on affiliation statistics that indicated a better quality of affiliation. OTU affiliation was performed using Silva 123 16S for Bacteria and Archaea and Silva 123 18S for Fungi. OTUs were then filtered based on the quality of the affiliations, with a threshold of at least 95% coverage and 95% BLAST identity. The stringent parameters used in FROGS enabled us to finally obtain 4,068,634 bacterial reads, 2,222,950 fungal reads and 113,008 archaeal reads. Rarefaction curves were generated using R (package vegan 2.2-1; Oksanen et al., 2015) to determine whether the sequencing depth was sufficient to cover the expected number of operational taxonomic units (OTUs). The sequencing depth was high enough to describe the microbial communities in detail (Supplementary Figure S1). To homogenize the number of reads per sample for subsequent statistical analyses, samples were normalized to the same number of reads based on graphical observation of the rarefaction curves, using the same R package. During this step, samples with fewer reads than the normalization value were removed from the dataset. All OTUs found in the soil of the control pots were then removed from the dataset. Sequence data are available through the accession number PRJEB20603 at the European Nucleotide Archive. Statistical analyses The position and stolon of each ramet within the network were recorded as two factors for the statistical analyses.
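The per-sample normalization described above (random subsampling of each sample's reads to a common depth, discarding samples below that depth) can be sketched as follows. The actual study used the R vegan package; this Python function is only an illustrative, hypothetical re-implementation.

```python
import random
from collections import Counter

def rarefy(sample_counts, depth, seed=0):
    """Randomly subsample a sample (mapping OTU -> read count) to `depth` reads.

    Returns a Counter of OTU counts summing to `depth`, or None if the sample
    has fewer reads than `depth` (mirroring the removal of under-sequenced
    samples from the dataset).
    """
    total = sum(sample_counts.values())
    if total < depth:
        return None
    # Expand counts into one entry per read, then sample without replacement.
    reads = [otu for otu, n in sample_counts.items() for _ in range(n)]
    rng = random.Random(seed)
    return Counter(rng.sample(reads, depth))
```

For example, `rarefy({"otu1": 5, "otu2": 3}, 4)` returns a random 4-read subsample, while a sample with only 2 reads would be dropped (`None`).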
We considered three positions in the network: the mother ramet, the 1st daughter ramet and the 2nd daughter ramet. The stolon was considered as a factor with two levels: the 1st and the 2nd stolon emitted during growth. We analyzed the heritability, richness and composition of microorganism assemblages in G. hederacea ecotypes. We analyzed fungal and bacterial assemblages separately. No statistical analyses were performed on the Archaea data, as Archaea were found in the mother ramet roots and in the stolon internodes following the mother ramets, but not in the daughter roots. All statistical analyses were performed using the R software (R Development Core Team). Heritability calculation and null model construction Heritability was measured for each taxonomic group in each ecotype as the number of OTUs present in the mother ramet and shared by at least two daughter ramets (we also tested the heritability calculation for three and four daughter ramets). To determine whether the observed heritability could be expected stochastically, we compared the observed heritability against a null model. This procedure is designed to test the null hypothesis that species from the mother ramets are randomly distributed within each daughter ramet and do not reflect the selection or the dispersal of a particular set of species from the mother pool. It allows assessment of the probability that the observed heritability indexes are greater than would be expected under a null distribution (Mason et al., 2008). We built a null model for each of the 10 ecotypes by generating daughter ramet communities with a random sampling of microorganism species within the mother's pool. The probability of species sampling was the same for all species in the mother's pool (i.e. independent of their initial abundance in the mother roots).
Only species identity was changed from one model to another, while species richness within the daughter communities remained unmodified. For each daughter ramet community within the 10 ecotypes, 9,999 virtual communities were randomly sampled from the mother's pool and the heritability indexes calculated for each of these models. Results were similar when a less stringent heritability was used (e.g. OTU present in at least one daughter ramet), but the heritability could not be made more stringent because it would create null communities with zero inherited OTUs for most of the null communities and thus overestimate the difference between the observed and the random heritability values. For each ecotype, we computed the Standard Effect Size (SES), calculated as described by Gotelli; negative SES values indicate lower heritability than expected by random, whereas positive SES values reveal higher heritability than expected by random (heritability of microorganisms from the mother ramet). A one-sample t-test with the alternative hypothesis "greater" was then applied to the SES values to determine whether they were significantly greater than zero, after checking for data normality.

Analyses of richness through linear mixed models

Richness was calculated as the number of OTUs present in the sample. Richness was calculated separately for bacteria and fungi, at the scale of the whole community and at the scale of the phyla (OTU richness in each phylum). We chose these two scales to detect general patterns in microorganism richness and also to detect potential variation in these patterns between taxonomic groups (phyla). We conducted our analyses at the phylum scale rather than at a more precise taxonomic level because we were constrained by the sequence affiliation, which produced multi-affiliation of OTUs at lower taxonomic levels. To test whether richness was affected by the sample position in the clonal network, we performed linear mixed-effects models using the R packages "nlme" (Pinheiro et al., 2015) and "car" (Fox and Weisberg, 2011).
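As a concrete illustration of the null-model procedure, the sketch below computes the observed heritability index and its Standard Effect Size, using Gotelli's standard formulation SES = (observed − mean(null)) / SD(null). The authors worked in R; this pure-Python version, with invented toy communities, is only a sketch of the logic.

```python
import random
import statistics

def heritability(mother, daughters, min_daughters=2):
    """Number of OTUs present in the mother and shared by at least
    `min_daughters` daughter ramets."""
    return sum(1 for otu in mother
               if sum(otu in d for d in daughters) >= min_daughters)

def ses_heritability(mother, daughters, n_null=9999, seed=1):
    """Standard Effect Size: (observed - mean(null)) / sd(null).
    Null daughter communities are drawn from the mother's pool with
    equal probability per species (abundance-independent), keeping
    each daughter's observed richness."""
    rng = random.Random(seed)
    pool = sorted(mother)
    observed = heritability(mother, daughters)
    null_values = []
    for _ in range(n_null):
        null_daughters = [set(rng.sample(pool, len(d))) for d in daughters]
        null_values.append(heritability(mother, null_daughters))
    return (observed - statistics.mean(null_values)) / statistics.stdev(null_values)

# Invented toy ecotype: a mother pool of 20 OTUs and four daughter ramets.
mother = {f"otu{i}" for i in range(20)}
daughters = [{"otu0", "otu1", "otu2"}, {"otu0", "otu1", "otu5"},
             {"otu0", "otu2", "otu9"}, {"otu1", "otu3", "otu4"}]
print(ses_heritability(mother, daughters, n_null=999))  # SES > 0 when sharing exceeds the random expectation
```

The one-sample t-test with alternative "greater" is then applied across the per-ecotype SES values.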
We initially tested for differences in richness between mother and daughter ramets. We then tested for differences in richness between 1st and 2nd daughter ramets by considering the position in the clone (1st daughter or 2nd daughter) and the stolon (1st stolon, 2nd stolon) within the plant ecotype as explanatory variables. Ecotype-induced variance and data dependency were controlled for by considering the position in the clone (mother or daughter) and the stolon as fixed factors and the plant ecotype as a random factor in the mixed models. Normality of the model residuals was verified using a graphical representation of the residuals, and the data were log- or square-root-transformed when necessary. For several fungal and bacterial groups exhibiting low abundances in the samples, the models testing differences in richness did not respect the normality of the residuals, and these results are therefore not presented.

Analysis of microorganism community composition

A PLS-DA analysis was used to test whether the microbiota composition varied significantly between mother and daughter ramets and between daughter ramets. The PLS-DA consists of a Partial Least Squares (PLS) regression analysis in which the response variable is categorical (y-block, describing the position in the ecotype), expressing the class membership of the statistical units (Sjöström et al.; Sabatier et al.; Mancuso et al.). This procedure makes it possible to determine whether the variance of the x-blocks can be significantly explained by the y-block. The x-blocks (OTU abundances) are pre-processed in the PLS-DA analysis using an autoscale algorithm (i.e. columns are centered to zero mean and scaled to unit variance).
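The autoscale pre-processing (z-scoring each OTU-abundance column) can be sketched as follows. This pure-Python sketch with invented data only illustrates the transformation, not the full PLS-DA, which the authors ran in a dedicated statistical environment.

```python
import statistics

def autoscale(matrix):
    """Center each column to zero mean and scale it to unit variance.
    `matrix` is a list of rows (samples) of OTU abundances."""
    columns = list(zip(*matrix))
    means = [statistics.mean(col) for col in columns]
    sds = [statistics.stdev(col) for col in columns]
    return [[(x - m) / s for x, m, s in zip(row, means, sds)]
            for row in matrix]

# Invented toy x-block: 3 samples x 2 OTUs.
x = [[10.0, 1.0], [20.0, 3.0], [30.0, 5.0]]
z = autoscale(x)
# Each column now has mean 0 and (sample) standard deviation 1.
for col in zip(*z):
    assert abs(statistics.mean(col)) < 1e-9
    assert abs(statistics.stdev(col) - 1.0) < 1e-9
```

Autoscaling gives every OTU the same weight in the regression regardless of its raw abundance, which matters here because read counts span several orders of magnitude between dominant and rare taxa.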
The PLS-DA procedure includes a cross-validation step producing a p-value that expresses the validity of the PLS-DA method with regard to the dataset. The PLS-DA procedure also expresses the statistical sensitivity, indicating the modeling efficiency in the form of the percentage of misclassification of samples into the categories accepted by the class model. Our aim in using this model was to test the variance of community composition that could be explained by the position of the ramet in the clone. The entire dataset was subdivided into two or three groups depending on the comparison tested (i.e. mother ramets vs 1st daughter ramets vs 2nd daughter ramets, mother ramets vs all daughter ramets, and 1st daughter ramets vs 2nd daughter ramets).

Results

Archaeal, bacterial and fungal communities in the roots of Glechoma hederacea

Archaea (Thaumarchaeota), fungi and bacteria were found in mother ramets. Archaea were not detected in the daughter ramets, but fungi and bacteria were found in daughter roots (Figure 2). Comparison of the sequences obtained from the roots of mother and daughter ramets revealed a subset of 100% identical reads in both mother and daughter ramets, representing 34% and 15% of the daughter fungal and bacterial reads respectively. Heritability, calculated as the number of OTUs found in the mother and in the roots of at least two daughters, varied from 15 to 374 OTUs (µ = 100.2 ± 118.6) for bacteria and from 0 to 12 OTUs (µ = 6.1 ± 3.63) for fungi, depending on the ecotype. To test whether this observed heritability was higher than would be expected stochastically (i.e. under random dispersal of OTUs), we used a null model approach in which the identity of the fungal or bacterial species in the experimental samples was randomized while keeping the OTU richness identical.
For each ecotype, we thus generated bacterial and fungal random daughter communities by sampling species from all the mother root communities (regional pool) and compared the observed heritability in our dataset to this distribution of random heritability values. The null model approach indicated that the observed communities displayed significantly higher OTU heritability between the roots of mothers and daughters than expected stochastically (one-sample t-test with alternative hypothesis "higher", P < 0.01, t = 3.03, df = 8, and P < 0.001, t = 6.11, df = 9 for fungi and bacteria respectively) (Supplementary Figure S2). In addition to the non-random presence of OTUs in daughter roots, we also found communities of fungi and bacteria in the stolon internodes connecting the ramets in the network (Supplementary Figure S3). These internodes exhibited phylum richness similar to that observed in the daughter roots. The transmission of bacteria and fungi within the G. hederacea clonal network was thus clearly demonstrated.

Microbial community filtration during transmission

Endophytic microorganisms were strongly filtered during the transmission process. Daughter roots displayed significantly lower fungal OTU richness than mother ramet roots, with mother communities averaging 40 OTUs compared to an average of 10 OTUs in the daughter ramets (linear mixed model, F1,31 = 280, P < 0.001; mother ramet: 40 ± 7; daughter ramet: 10 ± 3) (Figure 2; Supplementary Table S1). The same significant pattern was observed for bacteria, with mother communities averaging 800 OTUs compared to an average of 100 OTUs in the daughter ramets (linear mixed model, F1,39 = 410, P < 0.001; mother ramet: 800 ± 131; daughter ramet: 100 ± 100; Figure 2, Supplementary Table S1). The observed 'low' richness of the transmitted communities indicates that the transmitted microbiota is filtered from the original pool (i.e. the mother microbiota).
A significant effect of ecotype on the richness of the transmitted microbiota was also found (see Methods for details on the statistics and the random factor used). Comparison of the microorganisms in the roots of mothers and daughters revealed a general decrease in the richness of most phyla during the transmission process. The fungal communities colonizing the roots were mostly from the phyla Ascomycota (106 OTUs) and Basidiomycota (39 OTUs) and, to a lesser extent, from Glomeromycota (24 OTUs), Zygomycota (7 OTUs) and Chytridiomycota (4 OTUs) (Figure 2a). The mean OTU richness of Ascomycota and Glomeromycota was significantly lower in daughter roots than in mother roots (Supplementary Table S1), whereas no significant variation was observed in the OTU richness of Basidiomycota (Supplementary Table S1). This striking observation clearly argues for the presence of a fungus-dependent filtering mechanism. The bacterial communities colonizing the roots were distributed in 3,384 OTUs, mostly belonging to Proteobacteria (2,009 OTUs) and Bacteroidetes (715 OTUs), which together represented about 80% of all the sequences, the remaining 20% belonging to 6 additional phyla (Figure 2b). Consistently with fungi, the bacterial OTU richness was significantly lower in daughter roots than in mother roots for the Proteobacteria, Bacteroidetes, Acidobacteria, Actinobacteria and Firmicutes (Supplementary Table S1). This observation suggests that bacterial phyla are indiscriminately affected by the filtering mechanism.

The heritability of a core microbiota

The differences in microorganism community composition between mother and daughter roots were assessed using a multi-regression approach with a Partial Least Squares Discriminant Analysis procedure (PLS-DA) (see Material and Methods, Supplementary Information). The advantage of this analysis is its ability to test a hypothesis based on a grouping factor of the samples in the dataset (i.e.
an explicative factor) and to obtain the significance of the factor as well as the part of the variance explained by the factor. With this analysis the entire dataset can be used and most of the variance conserved, in contrast to NMDS approaches in which distances between samples, such as Bray-Curtis or Jaccard, summarize the variance between samples. Significant differences in the composition of daughter communities compared to mother communities were detected for both fungi (P(PLS-DA) = 0.001, P(Mothers vs Daughters) < 0.01, explained variance = 87.3%, Figure 3a) and bacteria (P(PLS-DA) = 0.001, P(Mothers vs Daughters) < 0.01, explained variance = 72.4%, Figure 3b; Supplementary Table S2). These differences in composition between mothers and daughters can be explained by the observed diminution in richness during the transmission process. This result indicates that only a portion of the original pool of microorganisms is transmitted from the mother to the daughters (i.e. a specific set of organisms). To test the validity of the hypothesis that a plant filtering mechanism allows the transmission of a core microbiota, we analyzed the filtering consistency between daughters by comparing the microbiota composition within the daughter roots using a PLS-DA procedure. The composition of the root communities was not significantly different between the 1st and 2nd daughter ramets (P(PLS-DA) = 0.09 and P(PLS-DA) = 0.33 for fungi and bacteria respectively, Supplementary Table S2), thus confirming that a specific set of organisms was similarly transmitted to the daughter plants of all ecotypes.

Effect of dispersal distance and dispersal time

We found patterns of richness dilution in bacterial communities along the stolons (linear mixed model, F1,18 = 6.13, P < 0.05, Supplementary Table S3), showing that the ramets most distant from the mother were less rich in bacteria. This finding suggests that colonization of the daughters by bacteria is limited by dispersal distance.
This pattern of richness dilution also followed the course of plant development, as stolons produced earlier in the experiment (i.e. the 1st stolon emitted by the plant) were found to be richer (linear mixed model, F1,9 = 4.92, P < 0.05, Supplementary Table S3), which suggests that the richness of the bacterial community also depends on dispersal time. Alternatively, these patterns may be linked to a cumulative filtering effect at each node of the clonal network, reducing the pool of transmitted bacteria. Conversely, these richness dilution patterns were not detected for fungal communities (Supplementary Table S1), suggesting either that the dispersion of the transmitted species was not limited or that the fungal community was already strongly filtered during the initial transmission. These two non-exclusive hypotheses are supported by our observation of a variation in the diminution of fungal community richness between mothers and daughters, probably dependent on the life-history and dispersal traits of the different fungal taxonomic groups.

Discussion

This work provides the first demonstration of the vertical transmission and heritability of a specific endospheric microbiota (fungi and bacteria) in plants. Along with other studies, it supports an understanding of the plant as a complex (rather than a standalone) entity and is aligned with the idea that the plant and its microbiota have to be considered as holobionts (Zilber-Rosenberg and Rosenberg, 2008; Vandenkoornhuyse et al., 2015; Theis et al., 2016). Our demonstration of core-microbiota transmission supports the idea that microbial consortia and their host constitute a combined unit of selection.
This finding does not conflict with the idea that this heritability of the microbiota within clonal plants (the microbial components metaphorically called 'singers' in Doolittle's "It's the song, not the singer") in fact consists of the heritability of a selected set of functions (the 'song' in the same work). Thus, our work reconciles aspects of the ongoing debate regarding the evolutionary process at work within the holobiont entity (Bordenstein et al., 2015; Moran and Sloan, 2015; Theis et al., 2016). For the plant, the transmission of a microbiota along plant clonal networks extends to microorganisms the concept of physiological integration previously demonstrated for information and resources. This integrated network architecture raises the question of a meta-holobiont organization in which ramets (i.e. holobionts) can act as sinks or sources of microorganisms. Such a structure may ensure exchanges between the holobionts, and especially between the mother source and the daughter "sinks", thereby increasing the fitness of the clone as a whole. Indeed, the inheritance of a cohort of microorganisms that has already gone through the plant filtering system provides a pool of microorganisms available for recruitment in the newly colonized environments. This "toolbox" of microorganisms could allow the plant to rapidly adjust to environmental conditions and therefore provide fitness benefits in a heterogeneous environment (Clay and Schardl, 2002). This may be assimilated to plant niche construction and provide a competitive advantage when colonizing new habitats. From the perspective of microorganisms, the stolons can be seen as ecological corridors facilitating dispersal at a fine scale.
In addition to propagule transport in the environment, this process ensures a spread of the transmitted organisms from one suitable host to another. As a consequence, transmitted symbiotic partners may benefit from a priority effect when colonizing the rooting system within the new environment (Werner and Kiers, 2015). Future work will thus need to address (i) the direction (uni- vs. bidirectional) of microorganism transmission within the clonal network, as well as the modalities of (ii) the transmission mechanism (active or passive) and of (iii) microorganism filtering during this transmission, to determine (iv) the significance of the process in

The genome is not the only support of phenotypic variations

The theoretical basis of neo-Darwinian evolution (i.e. the modern synthesis) is linked to the idea that a genetic variation, such as a mutation, is an accidental random event having neutral consequences or inducing advantageous or disadvantageous effects on fitness, with natural selection increasing the frequency of advantageous variants in large populations (e.g. Charlesworth et al.). This sorting of genetic variations by natural selection acts on the individual phenotypic value. It is thus generally believed that an organism, its functions and its ability to adapt to environmental constraints can be addressed by the analysis of its genome (i.e. the information repository of the organism). Behind this idea is the assumption that organisms develop through programmed genes. This view of genomes is an oversimplification (Goldman and Landweber). There are existing examples of physical transience in genomes (i.e. genome composition and stability are not fixed at all times in every organism), as in ciliates (e.g. Oxytricha) (Bracht et al.).
Elsewhere, and less anecdotally, additional information exists beside the genome to support phenotype expression. This information is of two main types. First, the epigenetic marks (i.e. DNA methylation, histone modifications, histone variants and small RNAs), which can be heritable and reversible. They induce a suite of interacting molecular mechanisms that impact gene expression and function, and thus phenotypes, without changing the DNA sequence (e.g. Richards, 2006; Holeski et al., 2012; Huang et al.). Second, all macro-organisms, animals and plants, interact as hosts with symbiotic partners forming the microbiota. Such associations deeply impact phenotypic variations and determine host fitness (e.g. Hooper et al.; Tremaroli and Bäckhed; McFall-Ngai et al., 2013; Blaser; Vandenkoornhuyse et al., 2015; Vannier et al., 2015). A large proportion of population genetics studies considers the host genome solely, whereas microbe-free plants are not facts but artifacts (e.g. Partida-Martinez and Heil). Thus phenotypic variations are often mistakenly attributed to genome variants, and the neo-Darwinian approach of species evolution fails to explain these variations. A holistic understanding of plant-microbe associations is thus needed.

The holobiont and hologenome concepts

A given macro-organism can no longer be considered as an autonomous entity but rather as the result of the host and its associated micro-organisms forming a holobiont (e.g. Bordenstein & Theis, 2015), with their collective genome forming the hologenome.
The holobiont is a unit of biological organization and encompasses not solely the host and obligate symbionts but also includes "[…] the facultative symbionts and the dynamic associations with the host […]" (Theis et al., 2016). This new understanding of what a macro-organism is deeply modifies our perception of evolutionary processes in complex organisms. From this, an important idea is that genetic variations occurring in any genomic subunit have to be considered as hologenomic variations, which may be neutral, deleterious or beneficial for the holobiont (Zilber-Rosenberg & Rosenberg, 2008; Bordenstein & Theis, 2015). This parsimonious idea is the keystone of the hologenome theory of evolution (Zilber-Rosenberg & Rosenberg, 2008). Besides these genetic changes in the hologenome, a plant, for example, can recruit micro-organisms within the phyllosphere and the rhizosphere to buffer environmental constraints (e.g. Vandenkoornhuyse et al., 2015). The modification of the microbiota in a plant holobiont, and thus of the hologenome, can be seen as a rapid acquisition of new functions to adjust to biotic and abiotic environmental fluctuations. Because the recruited micro-organisms can be vertically or pseudo-vertically transmitted (Rosenberg et al.), this adjustment would be a reboot of Lamarckian deterministic evolution (i.e. inheritance of acquired characteristics). In this context, the microbiota assembly is the repository of the information transmitted between generations.

The microbiota assembly of the holobiont

Natural selection acts on the holobiont phenotype, and thus on the hologenome. The case of neutrality (neither advantage nor disadvantage) or a circum-neutral effect on individual fitness could explain the heterogeneity of microbiota community structure observed among hosts.
From ecological theories, both niche partitioning (through environmental filtering or through the sorting of species interactions) and neutral processes (Hubbell, 2001; 2005) are classically admitted to drive community assembly and to explain diversity patterns. Coexistence is mainly assumed to result from complementarity in resource use (Webb et al., 2002). The neutral and stochastic vs deterministic nature of host-microbiota assembly has been debated (Nemergut et al.; Bordenstein & Theis, 2015). Under a random community assembly model (Nemergut et al.), if a large proportion of the microbiota is recruited from the environment (i.e. horizontal transmission), the microbiota community composition is not expected to differ from a random assembly. However, because the functions assumed by the microbiota are important for holobiont fitness, a specialization of the microbiota at the metabolic level is expected from a strong selection process on hologenomes. In this case, the resulting observation would be a deterministic host-microbiota assembly (i.e. non-random assembly). This microbiota specialization could act at two different levels. The first level is the recruitment and filtering of micro-organisms by the host, a process that should culminate in the vertical or pseudo-vertical transmission of microbiota components. The second level of specialization is, for particular micro-organisms within the microbiota, the relaxation of selection pressure on 'useless' genes and the accumulation of mutations in these particular genes, toward a loss-of-function evolution. In addition, microorganisms within the holobiont experience selection pressures both at the individual level within the holobiont and at the whole-holobiont level.

Holobiont and hologenome: the special case of plants

Plants are sessile macro-organisms.
This conditions their interactions with their environment. First, they are not able to escape environmental constraints and need to adapt to either abiotic (e.g. resource limitation) or biotic (e.g. competition, predation) stresses. Second, plants act on their own environmental conditions. For instance, they deplete through their nutrition the mineral and water resources of their habitat, or modify the microclimate and local soil characteristics through the development of above- and belowground organs (Marschner et al.; Orwin et al.; Veen et al.). This retroaction of plants on their environment drives environmental fluctuations occurring at different time scales. In these two above-described situations, the recruitment of microorganisms within the microbiota enables the acquisition of new ecological functions increasing resource acquisition or buffering stressful conditions (see for review Friesen et al., 2011; Bulgarelli et al., 2013; Müller et al., 2016). Changes in the microbiota composition along seasons or years may thus reflect the temporal needs of the plants. Considering that environmental changes can be either continuous or disruptive and occur over short or long timescales, the recruitment of microorganisms represents a quick and long-term adaptation to environmental constraints that is less costly for the plant than genomic plastic responses (see Alpert & Simms, 2002 for a review of the advantages of plasticity). Plant-associated micro-organisms then condition plant survival and fitness and provide quick phenotypic adjustments allowing adaptation to rapid and long-term environmental changes (Vannier et al., 2015).
Plants are modular organisms and can be seen as a discrete organization of units (modules) forming a system (Harper, 1977). Their growth is iterative, through undifferentiated and totipotent tissues (meristems). Such modularity is expressed at different levels of integration: from simple subunits such as a leaf, a root or a flower (order 1 of modularity) to more complex units such as ramets (i.e. potentially autonomous clonal units composed of root and shoot modules) produced by clonal multiplication (e.g. order 2 of modularity) (Fig. 2a). Most plants are thus constituted as reticulated networks that sample the environment and can adjust their structure to the environment (van Groenendael & de Kroon, 1990). A physiological integration involving information- and resource-sharing within the clonal network has been demonstrated (e.g. Oborny et al., 2001). Physiological integration occurs for all first-order modules within each second-order module, at least partially during its life cycle. Some species display physiological integration between all or part of the second-order modules (Price & Hutchings, 1992), for a short period or the whole clonal plant life (e.g. splitter vs. integrator strategies). In response to small-scale environmental heterogeneity, such modularity and integration enable plastic adjustments of order-2 modules with an impact on the fitness of either or both 1st- and 2nd-order modules, depending on the integration level (Stuefer et al., 2004; Hutchings & Wijesinghe, 2008). More specifically, plastic changes have been reported in the network architecture in response to the patchy distribution of favorable habitats (i.e. foraging, Fig. 2b).
Such changes occur along a morphological gradient from a 'phalanx' form (a dense clonal network with high branching and low internode length) to a 'guerilla' form (a loose network with low branching and high internode length) (Lovett Doust, 1981). Phalanx forms promote the exploitation of nutrient-rich habitats through ramet aggregation, whereas guerilla forms enable space exploration and the colonization of habitats at a distance from the mother plant. In addition to the sampling of resources in the environment, this plastic network also allows the sampling of potentially symbiotic microorganisms within the soil pool. In clonal plants, this process has never been taken into account for microbiota assembly, despite the importance of the microbiota for plant fitness. Pseudo-vertical (i.e. acquisition through environment sharing) and vertical transmission of the microbiota ensure the presence of suitable symbionts and thus limit the associated foraging costs (Wilkinson & Sherratt, 2001). The question of the vertical or pseudo-vertical transmission of the microbiota is then of fundamental importance. In aggregated networks (i.e. phalanx), the clonal progeny is expected to be in contact with a pool of microorganisms similar to that of the mother plant (i.e. strong pseudo-vertical transmission). Conversely, in scattered networks (i.e. guerilla), a weak pseudo-vertical transmission is expected (i.e. the progeny encounters different microorganisms than the parents). However, it has recently been demonstrated that clonal plants are able to vertically transmit a core microbiota (i.e. a fraction of the mother microbiota) containing bacteria and fungi through their stolons (Vannier et al., 2016; Vannier et al., submitted). This vertical transmission of a subset of the mother microbiota could be seen as an insurance of habitat quality for the progeny.
From holobiont to meta-holobiont concept

Considering that a particular clone is colonized by a complex microbiota, that at least part of this microbiota is transmitted between clonal generations, and that these transmitted microorganisms alter the clone fitness (among others, arbuscular mycorrhizal fungi, hereafter AM fungi) (Vannier et al., submitted), a clonal individual thus satisfies the holobiont and hologenome theory (Zilber-Rosenberg & Rosenberg, 2008; Theis et al., 2016). The plant clonal network represents, however, an additional level of organization in which holobionts are inter-connected and integrated in a higher modularity model (the clonal network). To account for this level of organization, we introduce the concept of the meta-holobiont.

The meta-holobiont concept

Bordenstein & Theis (2015) proposed a framework of ten principles for the holobiont and hologenome theory which do not change the rules of evolutionary biology but redefine what a macro-organism really is. The meta-holobiont concept is primarily dedicated to better understanding and defining clonal organisms forming a network (i.e. like clonal plants) through which holobionts can share molecules and micro-organisms (Fig. 3). The meta-holobiont concept thus posits that, through a network, passive or active transfers of microorganisms, resources and information between holobionts can impact an individual holobiont. The meta-holobiont concept also posits the existence of specific physical host structures building the network used for these exchanges. The physical structures linking clonal plant holobionts can be stolons or rhizomes. This physical link is of crucial importance as it supports the plastic responses developed at both the holobiont and meta-holobiont scales.
In the case of mycorrhizal hyphae, for example, the physical link (hyphae) connecting two plants is not produced by the host plant and thus does not allow integrated plastic responses at the meta-holobiont scale (in this case, two plants linked by a mycorrhizal hypha). In this case, the selection

The inherent physiological integration of the meta-holobiont network (Oborny et al., 2001) and the transmission of microorganisms from mother to clonal progeny (demonstrated in Vannier et al.; Wijesinghe & Hutchings, 1999; Roiloa et al.). Another important response of clonal plants to heterogeneity is their ability to specialize individual modules in the acquisition of the most abundant resource, to the benefit of the network (Fig. 2c; Stuefer et al.). This specialization process has been extended to the concept of division of labour (sensu Stuefer et al., 1996), occurring when the spatial distributions of two resources are negatively correlated (Stuefer et al.; van Kleunen et al.). On the one hand, microbiota transfer within the clonal network may be an alternative to these plastic mechanisms by providing ramets with the ability to compensate for nutrient limitation. For instance, the transfer of arbuscular mycorrhizal fungi (AM fungi) with high resource uptake ability at the ramet scale is probably less costly for the plant than developing an increased rooting system or than increasing spacer length to forage for better patches. Specialization may then be seen in a wider context at the holobiont level (i.e.
ramets that do specialize because of the recruitment of particular microbiota). Foraging through plant trait modifications could likely not be as beneficial as using microbiota functions. Note that such a trade-off between foraging plant traits versus the use of microbial activity for resource uptake has already been well described at the individual level with root development (see for instance [START_REF] Eissenstat | Linking root traits to nutrient foraging in arbuscular mycorrhizal trees in a temperate forest[END_REF][START_REF] Iu | Complementarity in nutrient foraging strategies of absorptive fine roots and arbuscular mycorrhizal fungi across 14 coexisting subtropical tree species[END_REF]. What we suggest herein at the meta-holobiont scale thus only consists in an extension of plant foraging trade-offs. On the other hand, foraging and specialization mechanisms may be indirectly mediated by the holobiont microbiota. Studies have demonstrated that microorganisms often manipulate plant traits [START_REF] Cheplick | Recovery from drought stress in Lolium perenne (Poaceae): are fungal endophytes detrimental?[END_REF], inducing changes in plant architecture (e.g. connection branching or elongation; Streitwolf-Engel et al., 1997; Sudova, 2009; Streitwolf-Engel, 2001; Vannier et al., 2015) and in biomass allocation (Du et al., 2009; Vannier et al., 2015). In parallel, other works have demonstrated that a high diversity of AM fungi can reduce plant physiological integration in heterogeneous environments (Du et al., 2009). These results thus suggest a retroaction between plants and their microbiota in plastic adjustments to environmental heterogeneity developed at the clone level. In these two cases, the sharing of microorganisms between holobionts in the meta-holobiont impacts both the holobiont and the meta-holobiont phenotypes.
The meta-holobiont concept may shed light on a new understanding of these integration and plastic response mechanisms, and explain the large range of strategies observed between plants.
Holobionts coexistence and dynamics within the meta-holobiont network: transposition of metacommunity-based theories
Microbiota transmission within the meta-holobiont induces consequences at the holobiont scale. As previously exposed, the meta-holobiont could specialize if the environment is heterogeneous and thus alter individual holobiont microbiota. Conversely, a homogenization of holobiont microbiota within the meta-holobiont can be expected in homogeneous environments. This may condition the dynamics of microbiota assembly at the meta-holobiont level because, in the first case, several holobionts may represent poorer pools of genomes than others to be transferred within the meta-holobiont, while in the second case all holobionts represent the same potentialities. Understanding microbiota assembly within meta-holobiont networks and its consequences on microorganism species dynamics is likely to be achieved based on metacommunity theories. Different models of metacommunities have been described (see Leibold et al., 2004 for a review): specialization of microbiota within the meta-holobiont is close to the source-sink metacommunity model, whereas homogenization corresponds to the neutral or patch dynamics models. Transposing such theories to the framework of microorganism transfer through the plant network should provide an interesting basis for understanding the meta-holobiont impacts on microorganism dispersal in natural ecosystems.
Network theory and meta-holobiont properties
Because microorganisms alter holobiont phenotypes, their recruitment in each holobiont and their transmission dynamics within the meta-holobiont may condition the properties of the meta-holobiont. Such properties can for instance include productivity (e.g.
fitness of the meta-holobiont, plant mass productivity), adaptation to heterogeneous conditions or resilience to disturbance. There is a wide theoretical corpus of knowledge based on graph theory about the relationships between network topology and network properties. For instance, the number of nodes, their connectance or shape (i.e. modularity, nestedness) within a network condition its stability to perturbations, since they determine individual fluxes at population and community levels (see review of [START_REF] Proulx | Network thinking in ecology and evolution[END_REF]. In plants in general and in clonal plants in particular, the topology of modules has been demonstrated to be shaped by structural blueprint, ontogeny, and plastic responses to the environmental conditions (Huber et al., 1999; [START_REF] Bittebiere | Structural blueprint and ontogeny determine the adaptive value of the plastic response to competition in clonal plants: a modelling approach[END_REF]. Much research has been done specifically on clonal plants to investigate how the network topology provides emergent functions to the clonal plant, such as responses to heterogeneous conditions, fluctuating environments, or disturbance (see examples given in the review of [START_REF] Oborny | From virtual plants to real communities: a review of modelling clonal growth[END_REF]. Transposing graph theories to the meta-holobiont concept may for instance allow the identification of keystone holobionts or of specific network structures that maximize the whole network performance. This could be achieved through the maximization of resilience based on holobionts' positions within the network or on the degree of redundancy of microbiota compositions in the network. The meta-holobiont concept may provide a new way to consider these questions while taking into account plant-microorganism associations.
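To make the graph-theoretic vocabulary above concrete, the following sketch computes two such descriptors, connectance and a degree-based candidate "keystone" node, for a small hypothetical meta-holobiont network. The ramet names and stolon connections are invented for illustration; real analyses would likely rely on a dedicated graph library.

```python
import itertools

# Hypothetical meta-holobiont network: nodes are ramet holobionts,
# edges are stolon/rhizome connections (illustrative data only).
edges = [("r1", "r2"), ("r2", "r3"), ("r2", "r4"), ("r4", "r5")]
nodes = sorted(set(itertools.chain.from_iterable(edges)))

n = len(nodes)
# Connectance: realized links over the n(n-1)/2 possible undirected links.
connectance = len(edges) / (n * (n - 1) / 2)

# Degree centrality as a naive proxy for a "keystone" holobiont.
degree = {v: 0 for v in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
keystone = max(degree, key=degree.get)

print(round(connectance, 2), keystone)  # 0.4 r2
```

Here the central ramet `r2` concentrates the connections, so removing it would fragment the network, illustrating how topology conditions resilience.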
Chapter III: Importance of the plant community context for the individual plant microbiota assembly
III.1 Introduction
Scientific Context
The composition of the soil pool of microorganisms depends on different environmental factors comprising soil type and properties (e.g. pH, water content, nutrient concentration) ([START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF]; Lundberg et al., 2012; [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]; Shakya et al., 2013; Schreiter et al., 2014). Because plants are sessile, this pool of microorganisms available for recruitment determines the plant microbiota (Lundberg et al., 2012; [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]. In addition to soil type and properties, plants can also modify this soil pool of microorganisms through different mechanisms. Plants are able to selectively recruit microorganisms from the soil (Vandenkoornhuyse et al., 2015) and promote the most beneficial symbionts ([START_REF] Bever | Preferential allocation to beneficial symbiont with spatial structure maintains mycorrhizal mutualism[END_REF]; Kiers et al., 2011). In addition, plant exudates have been shown to either enhance or reduce particular microorganism abundances depending on the plant species (for a review see Berendsen et al., 2012). The expectation is thus that plants can locally alter the composition of the soil microbial pool. Following this idea, the neighborhood of a given plant (i.e. the identity and abundance of neighboring plants) should influence the local soil pool of microorganisms and thus the microorganisms available for recruitment by other plants in the community. Thus the neighborhood of a given plant should influence the focal plant microbiota assembly.
However, this potential role of the plant community context in plant microbiota assembly has not been extensively described yet. In addition, plants have been shown to harbor contrasted microbiota between species [START_REF] Oh | Distinctive bacterial communities in the rhizoplane of four tropical tree species[END_REF][START_REF] Bonito | Plant host and soil origin influence fungal and bacterial assemblages in the roots of woody plants[END_REF] and ecotypes ([START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]; Lundberg et al., 2012). An expectation is thus that plant species select and promote different microorganisms and thus have contrasted effects on the soil pool and on the different microbial groups within this soil pool. Considering the diversity of functions provided by plant-associated endophytes (Friesen et al., 2011; Müller et al., 2016) and their effects on plant phenotypes and performance evidenced herein (Articles I and II), changes in endophyte community composition are likely to affect individual plant performance.
Objectives of the chapter
This chapter aims at testing the hypothesis that the plant community context, and especially the close neighborhood of a plant, determines its fungal microbiota assembly, ultimately impacting its performance. More precisely, we will address the following questions:
1) Do the abundances of neighborhood plant species influence the richness and equitability of the focal plant fungal microbiota? Do neighborhood plant species have contrasted effects? (Article V)
2) At which temporal and spatial scales does this neighborhood effect act? (Article V)
3) Are changes in fungal group richness and equitability mediated by the neighborhood plants determining the focal plant performance? (Article V)
Methods
This chapter is composed of a single article (V).
In this study, we tested the hypothesis of a neighborhood effect on plant microbiota assembly impacting plant performance, using an outdoor experimental mesocosm design with grassland plant communities. This mesocosm design comprises a range of grassland plant communities varying in composition and richness. Soil cores were sampled within each mesocosm and Medicago truncatula was used as a trap plant to capture soil fungal species. High-throughput amplicon sequencing of 18S rRNA genes was used to detect and identify fungi within the root endosphere of M. truncatula individuals. To determine plant neighborhoods, plant species abundances were mapped in two consecutive years (one year before and the same year as soil sampling). We thus explained the richness and equitability (equality of species abundances) of the fungal community trapped by M. truncatula by the abundances of plant species within the neighborhood of the sampling point. We considered two temporal scales of the neighborhood, the past cover (one year before sampling) and the present cover (the same year as sampling). We additionally assessed the effect of the microbiota composition on M. truncatula biomass as a measure of its performance.
Main results
In our experiment (Article V), we detected a significant effect of plant neighborhood species abundances on the composition of the microbiota colonizing M. truncatula's roots. This effect was plant species dependent, with several plants affecting positively and others negatively the richness and the equitability of the M. truncatula fungal endophytes. We thus demonstrated that a particular plant can filter and determine the local fungal pool available for recruitment in the soil, as proposed by Valyi et al. (2016). This was true not solely at the AM fungal diversity level but was also the case for the other fungal groups and more widely for the whole fungal community. Plant neighborhood influence on soil microbiota occurred at small scale (i.
Heijden et al., 2008; Wagg et al., 2014; Fester et al., 2014) through nutrient cycling and symbioses (e.g. [START_REF] Bardgett | Belowground biodiversity and ecosystem functioning[END_REF]. These symbioses between plants and soil microorganisms are widespread in natura and are compulsory for plant development and growth ([START_REF] Hacquard | Survival trade-offs in plant roots during colonization by closely related beneficial and pathogenic fungi[END_REF]; Bulgarelli et al., 2013; Vandenkoornhuyse et al., 2015). Among symbiotic microorganisms, the Arbuscular Mycorrhizal (AM) fungi have received much attention since they are arguably the world's most abundant mutualists and among the most important terrestrial symbionts 'that help feed the world' (Marx, 2004). In exchange for carbohydrates, the AM association provides many ecological functions to plants such as nutrient acquisition and protection under stressful conditions. Despite their ubiquity, the existence of such mutualisms is unexpected in terms of evolutionary trajectories [START_REF] Hardin | The tragedy of the commons[END_REF] because cheaters (i.e. fungi providing little phosphorus in exchange for carbohydrates) are predicted to spread at the expense of cooperative partners and thus indirectly favor competing strains (West et al., 2002). The ability of plants to preferentially reward AM fungal symbionts according to their level of "cooperativeness" was demonstrated using Medicago truncatula (Kiers et al., 2011). This selective rewarding mechanism mitigates the fitness of less cooperative fungi, stabilizes the AM fungal symbiosis (Kiers et al., 2011) and is supposed to allow plants to filter and exclude part of the colonizers. A consequence of this filtering phenomenon is the observation of a 'host-plant preference' (Vandenkoornhuyse et al., 2002; Duhamel & Vandenkoornhuyse, 2013).
This filtering effect occurring at the individual plant level is then likely to influence the local diversity and abundance of fungal propagules. At the plant community scale, the very first experimental studies based on a matrix-focal plant design showed that the identity of the plant species in the neighborhood determines the composition of the focal plant fungal community (Johnson et al., 2004; Hausmann & Hawkes, 2009). This observation remains consistent for AM fungi in more complex mixtures of plants and can be extended to their temporal dynamics (Hausmann & Hawkes, 2010). The first established plant species has a filtering effect on the pool of AM fungi colonizing the second established plant species (Hausmann & Hawkes, 2010). Following these findings, the temporal scale effect has been suggested as a key parameter in the understanding of the whole fungal community assembly ([START_REF] Bennett | Arbuscular mycorrhizal fungal networks vary throughout the growing season and between successional stages[END_REF]; Cotton et al., 2015). However, in the above-described experiment, the temporal scale tested was very short (several weeks) and the fingerprint of a plant on later fungal communities has never been tested over a long period of time [START_REF] Hausmann | Order of plant host establishment alters the composition of arbuscular mycorrhizal communities[END_REF]. Although the influence of plant community composition on fungal assemblages has been described (e.g. Johnson et al., 2004; [START_REF] Hausmann | Order of plant host establishment alters the composition of arbuscular mycorrhizal communities[END_REF], the spatial scale of this influence is still unclear. In a recent review, Valyi et al. (2016) proposed that the relative influence of environmental conditions, dispersal and host filtering on the AM fungal community is dependent on the spatial scale considered. The effect of the host plant (i.e.
host filter) would then be stronger at a local scale (Valyi et al., 2016). As far as we know, there is very little empirical or experimental support for this hypothesis. A single seminal paper demonstrated that the relationship between plant and AM fungal compositions was detectable at the 25 cm² point scale but not at the 1 m² plot scale (Landis et al., 2005). However, the fact that the influence of a given plant on the composition of fungal endophytes can be detectable at a fine scale suggests that plant communities and their dynamics may condition the AM fungal community assembly. At a given location, we can make the parsimonious assumption that the available fungal propagule reservoir is necessarily a consequence of the fungal community structure and composition within the previous hosts. If the fungal diversity pool results from the cumulative influences of plants, the consequence is that we can explain AM fungal community assembly and quantify the respective influences of the surrounding plant species. This assumption drives the idea that the plant neighborhood determines the assembly of the fungal pool available for recruitment by the focal plant. Plant-associated fungal endophytes are known to provide benefits for plant nutrition and resistance to abiotic and biotic constraints (e.g. Friesen et al., 2011). Changes in fungal group richness should therefore affect the ecological functions performed by the fungi for the focal plant and ultimately impact its fitness. We herein address the hypothesis that a plant neighborhood fingerprint on the root fungal community impacts host-plant fitness. We used a mesocosm design comprising experimental assemblages of different grassland species, with varying levels of plant diversity. We spatially sampled soil cores from plots on which Medicago truncatula was then grown as a trap plant. Root fungal assemblages were characterized by amplicon sequencing.
Plant neighborhoods around the soil sampling positions were characterized using centimetric maps of plant species occurrences recorded over two years. We analyzed the effect of past and current plant neighborhoods on endophyte assemblages of M. truncatula and ultimately on its biomass. In this study we considered two diversity scales, the sample scale (alpha diversity) and the plot scale (gamma diversity). More specifically, we tested the following hypotheses: (i) the root fungal community structure is determined by the local plant neighborhood, and the relationships between plant composition and endophyte diversity and richness should be weaker at the plot than at the sample scale; the scale of influence is likely to be a few centimeters; (ii) this relationship occurs across different fungal taxonomic groups; (iii) the changes in the fungal communities colonizing the plant should ultimately impact the focal plant performance.
Materials and Methods
Analyses were performed in two stages. We first analyzed the influence of past and present plant neighborhoods (i.e. plant species around the sampling point) on the fungal assemblages at the plot scale and at the sample scale. We based our analyses on indices describing the fungal communities, calculated for different taxonomic groups following a down-scaling approach (from the whole assemblage to phyla and classes).
Experimental design
To determine whether fungal communities respond to the overlying plant communities depending on their taxonomic groups, and at which spatio-temporal scales, we used 112 experimental plant communities settled in 1.30 × 1.30 × 0.25 m mesocosms in 2009 (see [START_REF] Benot | Fine-scale spatial patterns in grassland communities depend on species clonal dispersal ability and interactions with neighbours[END_REF][START_REF] Bittebiere | Clonal traits outperform foliar traits as predictors of ecosystem function in experimental mesocosms[END_REF] for additional information on the experimental design). This study was conducted in the experimental garden of the University of Rennes 1. Communities were constituted from a set of plant species widely distributed in temperate grasslands of Western France (des Abbayes et al., 1971), with one to 12 plant species in each mixture. Plants were grown on a homogeneous substrate composed of sand (20%) and ground soil (80%). Weeds were regularly removed, and the mesocosms were watered every two days during the dry season. Above-ground vegetation was mown once a year in late September and plant flowers were cut off to suppress sexual reproduction. Plant community dynamics was therefore dependent only on the plants' clonal growth. The present plant neighborhood therefore resulted from the previous one.
Plant neighborhood characterization
The plant species spatial distributions in the plots changed over time due to the ongoing community dynamics. To take these dynamics into account, species occurrences were mapped in all mesocosms after two and three years of experimental community cultivation (i.e. early spring 2011 and 2012), using an 80 × 80 cm squared lattice centered on the mesocosm (Fig. 1). We recorded presence/absence data in 5 × 5 cm cells of the lattice (i.e. 256 cells in total per mesocosm).
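The mapping protocol above can be illustrated with a minimal sketch: given a presence/absence lattice of 5 × 5 cm cells for one species, the abundance within a given neighborhood radius of the central sampling point is the number of occupied cells falling within that radius (as detailed below, the study used five radii from 5 to 25 cm). The lattice here is randomly generated for illustration; the actual calculations were done in GIS on the real maps.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical presence/absence lattice for one species: 16 x 16 cells of
# 5 x 5 cm, i.e. an 80 x 80 cm grid as in the mapping protocol.
lattice = rng.random((16, 16)) < 0.3

def neighborhood_abundance(lattice, center_rc, radius_cm, cell_cm=5):
    """Number of occupied cells whose centers lie within radius_cm of the
    sampling point (a simple stand-in for the GIS-based calculation)."""
    rows, cols = np.indices(lattice.shape)
    # Distances between cell centers, in cm.
    dist = cell_cm * np.hypot(rows - center_rc[0], cols - center_rc[1])
    return int(np.sum(lattice & (dist <= radius_cm)))

center = (8, 8)  # central sampling point
abundances = {r: neighborhood_abundance(lattice, center, r)
              for r in (5, 10, 15, 20, 25)}
print(abundances)
```

By construction the counts are non-decreasing with the radius, and the plot-scale abundance is simply the count over the whole lattice.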
A plant species was considered as present when at least one individual rooted within the cell, each individual belonging to a single cell. GIS (ArcGIS ver. 9.3, ESRI) was used to calculate the number of cells colonized by each plant species (i.e. their abundances) at each spatial scale tested. Plant species abundances were calculated at the plot scale as the total number of cells occupied over the square lattice. Within the plot, we analyzed plant species abundances at five spatial scales around the central sampling point, ranging from 5 to 25 cm (Fig. 1) (see Bittebiere & Mony, 2015 for details on the method). We considered two temporal scales by performing these calculations in 2011 (past plant neighborhood) and 2012 (present plant neighborhood). No per-species correlations between 2011 and 2012 exceeding 90% were detected at any spatial scale.
Endophyte assemblage analyses through a trap plant bioassay
To determine the fungal pool of species available for plant recruitment, we sampled five soil cores per mesocosm within the 80 × 80 cm lattice (the four corners and the center) (i.e. 560 soil samples in total). This design enabled the fungal community to be captured both at the sample scale (sample in the center of the mesocosm) and the plot scale (the five sampling points of the plot). These soil samples were used as substrates for the cultivation of Medicago truncatula individuals, used as trap plants. M. truncatula was transplanted as seedlings after being germinated in sterile conditions for 3 weeks. This species is known to display a very low host-preference, trapping most of the fungal species present in the soil (Cook, 1999). M. truncatula individuals were cultivated for seven weeks under controlled conditions (constant temperature and water availability) with a 12 h day/light cycle, and nutrients were provided with a watering solution before harvesting. To evaluate M.
truncatula performance, root and shoot samples from each individual were weighed at the end of the experiment. Plant total fresh mass was used as a proxy of performance.
DNA extraction and amplicon preparation
Medicago truncatula root samples were carefully washed with detergent (Triton X-100, 1% V/V), thoroughly rinsed in sterile distilled water and ground to powder using a pestle and mortar under liquid nitrogen. Total DNA was then extracted using the DNeasy plant kit (Qiagen) according to the manufacturer's recommendations. A 480 bp fragment of the fungal SSU rRNA gene was specifically amplified by PCR using NS22/0817 primers (Lê Van et al., 2017) with PuReTaq Ready-To-Go PCR beads (GE Healthcare). All the PCRs were done using fusion primers containing sequencing adapters and multiplex identifiers in addition to the PCR primer (more details about amplifications in Lê Van et al., 2017). For each of the 560 samples, true technical amplicon replicates were performed (i.e. two independent PCRs for each extracted DNA sample). Amplicons were purified using the AMPure XP PCR kit (Agencourt/Beckman-Coulter). Purified amplicons were then quantified (Quant-iT PicoGreen dsDNA assay, Invitrogen). An equimolar amount of each amplicon was pooled to prepare the sequencing library. Traces of concatemerized primers were removed (LabChip XT, Caliper) before emPCR and sequencing on a GS FLX+ instrument (Roche), following the manufacturer's instructions.
Data trimming and contingency matrix preparation
Trimming, filtering, clustering, OTU identification and taxonomic assignments were performed as described elsewhere (e.g. Ben Maamar et al., 2015; Lê Van et al., 2017). To summarize the strategy, short sequences (<200 bp), sequences with homopolymers (>8 nucleotides) or ambiguous nucleotides, and sequences containing errors in the multiplex identifier or primer were deleted from the dataset. Chimeric sequences detected using chimera.uchime were deleted.
After these steps, and from the two replicates, only sequences displaying 100% identity were kept. The remaining sequences were grouped into OTUs using DNAclust [START_REF] Ghodsi | DNACLUST: accurate and efficient clustering of phylogenetic marker genes[END_REF] with a 97% sequence identity threshold, and a contingency matrix was built. After these steps, the sequencing depth (i.e. number of sequences per sample to describe the community) was checked from rarefaction curves computed using the VEGAN package (Oksanen et al., 2015) in R (R Core Team, 2013). We removed 154 samples which did not satisfy these criteria, and samples were normalized to 1351 sequences.
OTU affiliation and taxonomic group selection
A total of 3471 fungal OTUs was obtained for the 406 samples. A large proportion of these OTUs were rare (i.e. >70% of the OTUs were represented by fewer than 25 sequences). Similarly to plant community studies, OTUs occurring in at least 1% of the samples were used for the statistical analyses. To check the sensitivity of the statistical signal to this additional filtering step, we tested other thresholds and obtained similar statistical results (data not shown). The resulting dataset contained 2057 fungal OTUs. All the statistical analyses were performed at three taxonomic levels: all fungi, within phyla (i.e. Ascomycota, Basidiomycota, Glomeromycota), and within classes (i.e. Sordariomycetes, Glomeromycetes, and Agaricomycetes) (i.e. seven datasets in total). Phyla and classes were selected according to their respective dominance in the whole assemblage and within the phyla. The Ascomycota, the Glomeromycota, and the Basidiomycota contained 1587 OTUs (77.2% of the total richness), 308 OTUs (15% of the total richness) and 100 OTUs (4.86% of the total richness), respectively. In each of these three fungal phyla, Sordariomycetes (186 OTUs), Glomeromycetes (80 OTUs) and Agaricomycetes (86 OTUs) were the dominant classes in terms of OTU richness.
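As an illustration of the two post-clustering steps described above (occurrence-based OTU filtering and per-sample normalization), here is a minimal sketch on a toy contingency matrix. The numbers are invented, and the 40% occurrence threshold is chosen only to make the filter visible on five toy samples; the study kept OTUs present in at least 1% of the 406 samples and rarefied to 1351 sequences.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy contingency matrix (5 samples x 6 OTUs); the real matrix
# had 406 samples x 3471 OTUs.
counts = np.array([
    [10, 0, 3, 0, 1, 0],
    [ 8, 1, 2, 0, 0, 0],
    [12, 0, 4, 1, 0, 0],
    [ 9, 0, 5, 0, 0, 1],
    [11, 0, 2, 0, 0, 0],
])

# Step 1: occurrence-based filtering (40% threshold for this toy example).
occurrence = (counts > 0).mean(axis=0)
kept = counts[:, occurrence >= 0.4]

# Step 2: normalize every sample to a common depth by random subsampling
# without replacement (the analogue of rarefying to 1351 sequences).
def rarefy(sample, depth):
    pool = np.repeat(np.arange(sample.size), sample)
    drawn = rng.choice(pool, size=depth, replace=False)
    return np.bincount(drawn, minlength=sample.size)

depth = int(kept.sum(axis=1).min())
normalized = np.vstack([rarefy(s, depth) for s in kept])
print(depth, normalized.sum(axis=1))  # 10 [10 10 10 10 10]
```

After both steps, the four OTUs seen in a single sample are discarded and every sample carries exactly the same number of sequences, which makes richness and equitability comparable across samples.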
Diversity indices calculation
Indices were calculated at the plot scale (i.e. the five samples from each mesocosm pooled) and at the sample scale (i.e. based on the sample from the center of the mesocosm only) for the seven fungal datasets (see above). To characterize the fungal community at the plot scale (gamma diversity), we calculated the OTU richness as the total number of OTUs occurring in at least one of the five samples from the plot. At the sample scale (alpha diversity), we calculated the OTU richness (S), the Pielou equitability index (J), and the Shannon and Simpson diversity indices. Because strong correlations (i.e. >90%) between Shannon diversity, Simpson diversity, and equitability were found, the Shannon and Simpson indices were discarded before further analyses. No strong correlations (i.e. >90%) were found between the two remaining indices regardless of the taxonomic level analyzed. Indices were calculated using the VEGAN package (Oksanen et al., 2013) in R (R Core Team, 2013).
Statistical analyses
To determine whether the fungal community structure was influenced by the past and present plant neighborhoods at the sample and plot scales, we used multiple regression analyses with plant species abundances as explanatory variables in linear model (LM) procedures. These analyses were performed on the seven fungal datasets. First, to test for the influence of past and present plant neighborhoods on the fungal pool at the plot scale, we tested the effect of the plant abundances for each date (see above section Plant neighborhood characterization) on the fungal OTU richness. We constructed two models for each taxonomic level analyzed (i.e. 14 models), which were optimized using a backward stepwise selection procedure of the explanatory variables based on Akaike's Information Criterion (described below).
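The sample-scale indices and the small-sample model-selection criterion mentioned above can be sketched as follows; this is a minimal illustration of the standard formulas, not the vegan-based code used in the study.

```python
import numpy as np

def diversity_indices(abundances):
    """Richness S, Shannon H', and Pielou equitability J = H'/ln(S)
    for one sample's OTU abundance vector."""
    x = np.asarray(abundances, dtype=float)
    x = x[x > 0]
    S = x.size
    p = x / x.sum()
    H = -np.sum(p * np.log(p))
    J = H / np.log(S) if S > 1 else 0.0
    return S, H, J

def aicc(n, k, rss):
    """Second-order AIC for a least-squares model with k estimated
    parameters fitted to n observations (Burnham & Anderson's form)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

S, H, J = diversity_indices([40, 30, 20, 10])
print(S, round(J, 2))  # 4 0.92
```

In a stepwise selection, candidate models differing in their retained plant-abundance variables would be compared through `aicc`, keeping the model with the smallest value when the difference exceeds 2.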
Second, to test for the influence of past and present plant neighborhoods on the fungal pool at the sample scale (i.e. corresponding to the soil sample from the mesocosm center), we tested the effect of the plant abundances for each date (see above section Plant neighborhood characterization) on the fungal richness and equitability for the five neighborhood sizes examined. This enabled us to determine the spatio-temporal scale of response of taxonomic groups from the local fungal pool to the plant neighborhood. One model was developed for each date and neighborhood size. We therefore constructed a total of ten models per index (two indices) and per taxonomic level analyzed (i.e. 140 models in total), and each model was optimized using a backward stepwise selection procedure of the explanatory variables. We used the information-theoretic model comparison approach based on Akaike's Information Criterion (AIC) and compared, for each index, all the optimized models through the second-order AIC corrected for small sample sizes (AICc) [START_REF] Burnham | Model selection and multimodel inference: A practical information-theoretic approach[END_REF]. In our analyses, we considered models with smaller AICc values and with a substantial level of empirical support (i.e. a difference of AICc > 2 with other models) as the most probable [START_REF] Burnham | Model selection and multimodel inference: A practical information-theoretic approach[END_REF]. This procedure thus enabled us to compute multiple models and to compare them while minimizing the risk of producing false positives.
richness of the phylum Basidiomycota (P<0.05, 0.06≥R²≥0.09) but decreased the richness of the class Agaricomycetes (P<0.05, 0.07≥R²≥0.1). The same pattern was observed for F. rubra, which increased the richness of the phylum Ascomycota (P<0.05, 0.04≥R²≥0.12) but decreased the richness of the class Sordariomycetes (P<0.05, 0.1≥R²≥0.11).
On the contrary, several species had a consistent effect between taxonomic levels. A. tenuis for example significantly increased the richness of the whole fungal community, at the phylum level for both Ascomycota and Basidiomycota, and at the class level for Sordariomycetes. The same pattern was observed with B. pinnatum, which increased the richness of both the phylum Glomeromycota and the class Glomeromycetes.
Fungal equitability at the sample scale
To investigate the effect of the plant neighborhood on the equitability of the fungal community at fine scale, we produced linear models at the sample scale (center of the mesocosm) with plant species abundances as explanatory variables. The multiple linear model analysis revealed that the equitability of the whole fungal community, of the Ascomycota, Basidiomycota, Glomeromycota, Sordariomycetes, Glomeromycetes, and Agaricomycetes were all significantly determined by the plant neighborhood (Tab. 2). At the level of the whole fungal community, the equitability significantly decreased only with the abundance of Holcus mollis in the neighborhood, whereas the abundances of other plant species had no significant effect on fungal community equitability (P<0.05; 0.04≥R²≥0.07).
Table 1. Responses of each fungal group richness at the plot scale to the past and present plant neighborhoods (results of linear models). t-values and adjusted R² are presented. Only species participating significantly in model building are shown, as well as their effect on the fungal richness (+ increasing the richness; − decreasing the richness).
Effect of temporal and spatial scales on the link between plant landscape and fungal community

We determined the spatio-temporal scale of response of the fungal community to the plant neighborhood by producing linear models with past and present plant species distributions at five neighborhood sizes (5 to 25 cm). These analyses were performed at the scale of the sample (center of the mesocosm) and the best models were selected according to the AICc criterion (see Material & Methods section). These selected models showed that fungal community richness and equitability responded equally to present and past plant neighborhoods (i.e. the AICc values of the models were not different; Tab. 2). Furthermore, the species explaining the variations in richness and equitability of the fungal community were the same at both temporal scales (i.e. past and present). Only the equitability of Glomeromycota, Basidiomycota, and Sordariomycetes and the richness of Glomeromycetes responded to a single temporal scale. The whole fungal community richness and equitability responded indifferently to the plant neighborhood at the five neighborhood sizes analyzed (i.e. 5, 10, 15, 20 and 25 cm around the sampling point) (Tab. 2). Only the Glomeromycota equitability index responded to a specific neighborhood size (i.e. 25 cm), whereas the two other fungal phyla and the classes analyzed responded to at least two of the five neighborhood sizes tested for both richness and equitability. Thus a clear neighborhood effect was highlighted, but no specific neighborhood size was detected in the response of the fungal community to the plant neighborhood. To investigate the effect of changes in fungal community richness and equitability on the fitness of the trap plant, we produced linear models at the sample scale (center of the mesocosm) with fungal group richness or equitability as explanatory variables.
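The two response indices can be computed from an OTU count vector as below. The thesis does not specify which equitability formula was used; Pielou's evenness J' = H'/ln(S), a standard choice in community ecology, is assumed here.

```python
import math

def richness(counts):
    """OTU richness: number of OTUs with at least one read."""
    return sum(1 for c in counts if c > 0)

def pielou_equitability(counts):
    """Pielou's evenness J' = H'/ln(S): Shannon diversity H' scaled
    by its maximum; equals 1 when all OTUs are equally abundant."""
    present = [c for c in counts if c > 0]
    s = len(present)
    if s < 2:
        return 0.0  # evenness undefined for a single OTU
    total = sum(present)
    h = -sum((c / total) * math.log(c / total) for c in present)
    return h / math.log(s)
```

A community dominated by one OTU scores low (e.g. counts [97, 1, 1, 1]), while a perfectly even one (e.g. [25, 25, 25, 25]) scores 1, which is exactly the "competitive balance" contrast discussed in the text.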
The biomass of the trap plant Medicago truncatula was not affected by the richness of its whole fungal community (Tab. 3). However, the biomass increased significantly with the richness of the Basidiomycota and Glomeromycota phyla (P<0.01 and P<0.05, respectively) (Tab. 3) but not with that of the phylum Ascomycota (Tab. 3). The combined effects of Basidiomycota and Glomeromycota richness explained ~12% of the variations in plant biomass (P<0.05; R² = 0.12). At the class level, the same positive effect on M. truncatula biomass was found for Glomeromycetes and Agaricomycetes richness but not for Sordariomycetes richness (P<0.01; R² = 0.13; Tab. 3). If a higher fungal microbiota richness positively impacted the M. truncatula biomass, we would expect a reciprocal effect of equitability, because the detection of rare OTUs, at the same sequencing depth, is less likely within a more evenly distributed community. As expected, the data analyses indicated that the biomass of the trap plant increased with the equitability of its whole fungal microbiota community (P<0.01; R² = 0.07; Tab. 3). However, the biomass increased only with the equitability of the phylum Ascomycota, not with that of the Basidiomycota and Glomeromycota phyla (P<0.01; R² = 0.1; Tab. 3). At the class level, the same positive effect on M. truncatula biomass was found for Sordariomycetes equitability (P<0.01; R² = 0.1; Tab. 3) but not for Agaricomycetes and Glomeromycetes equitability.

This result suggests that the biotic interactions structuring the fungal community (i.e., host preference and host filtering) mostly act at a centimetric scale (Hazard et al., 2013; Valyi et al., 2016). At the sample level, our results indicated that the plant neighborhood determined, at least in part, the richness and competitive balance (i.e. equitability) of the whole fungal community colonizing the trap plant, M. truncatula. Such heterogeneity of microbiota composition within a given host-plant species is a recurrent observation (e.g.
[START_REF] Davison | Arbuscular mycorrhizal fungal communities in plant roots are not random assemblages[END_REF]; Schlaeppi et al., 2014; Lê Van et al., 2017) and has been linked to plant recruitment from the soil "reservoir" (Vandenkoornhuyse et al., 2015). Indeed, root-associated fungi in Agave species have been shown to be mainly recruited from the surrounding soil [START_REF] Coleman-Derr | Plant compartment and biogeography affect microbiome composition in cultivated and native Agave species[END_REF]. The importance of abiotic factors, notably soil properties, as determinants of the soil microbial pool composition (Shakya et al., 2013; Schreiter et al., 2014; [START_REF] Coleman-Derr | Plant compartment and biogeography affect microbiome composition in cultivated and native Agave species[END_REF]) has been repeatedly demonstrated and is considered the main source of variation of the plant microbiota (Vandenkoornhuyse et al., 2015). We demonstrated here that the plant neighborhood at a centimetric scale is an additional determinant of this soil pool.

The fungal community also results from past plant neighborhoods

In agreement with our expectations (hyp. 2), we demonstrated that the past neighborhood also impacted the fungal community structure of the following year and that this was not due to correlations between past and current neighborhoods. This result was found at the level of the whole fungal microbiota community as well as for the three phyla and classes analyzed. The key interpretation of this result is that past plants do leave a "fingerprint" on the composition of microorganisms colonizing the present plant community. Although the fungal communities responded at both temporal scales, there was no difference in the intensity of response between the two years. The persistence of the effect of the plant over these years could be due to the short period investigated in the present study (two years), because the majority of fungal spores and propagules can survive in soil for a long time.
A study involving a longer period (i.e. >2 years), for example mapping the plant communities 5 years before sampling, would allow the temporal limits of this potential soil fungal bank "memory" to be determined.

The structure of the root endospheric fungal community impacts M. truncatula biomass

In agreement with our expectations (hyp. 3), we demonstrated that the performance of the Medicago truncatula plant was affected by the composition of the fungal endophyte community. The measures of Medicago truncatula performance indicated that the individual biomass production depended on the richness of Glomeromycota and Basidiomycota (but not on the total richness of fungal OTUs or on the Ascomycota richness). This relationship between Glomeromycota richness and plant performance has already been demonstrated for AM fungi (Van der Heijden et al., 1998; Klironomos et al., 2000; [START_REF] Hiiesalu | Species richness of arbuscular mycorrhizal fungi: associations with grassland plant richness and biomass[END_REF]), due to their beneficial effects. An increase in AM fungal diversity has experimentally been shown to result in more efficient exploitation of available resources such as soil phosphorus (Van der Heijden et al., 1998) and to decrease plant pathogens [START_REF] Van Der Putten W H | Empirical and theoretical challenges in aboveground-belowground ecology[END_REF]. To our knowledge, however, the positive effect of fungal species richness on plant performance has never been demonstrated for the phylum Basidiomycota or the class Agaricomycetes. As far as we know, little is known about the functions of the endospheric Agaricomycetes in grass plants, and the role of this group in plant growth has still to be clarified. Interestingly, Medicago truncatula performance depended only on the equitability of the whole community, the phylum Ascomycota and the class Sordariomycetes, which represented the largest fungal groups in the samples.
This result suggests that a community dominated by a few Ascomycota and Sordariomycetes species has a less beneficial effect on plant growth. Ascomycota is a phylum known to be composed of very diversified organisms performing various functions for host plants. In this context, an equitable community could represent a higher diversity of organisms available for recruitment in the plant "toolbox" for adjustment to environmental conditions (Vannier et al., 2015). A positive effect of the plant neighborhood on the productivity of the trap plant M. truncatula, mediated by an increase in fungal richness or equitability, was evidenced. This change in fungal soil assemblage may be due to a complementarity of the niches that plants provide to fungal symbionts. The degree of host preference differs between plants [START_REF] Helgason | Selectivity and functional diversity in arbuscular mycorrhizas of co-occurring fungi and plants from a temperate deciduous woodland[END_REF], which may then constitute different niches for fungi. These differences in host preference can be due to the level of mycotrophy of the host, with generalist hosts favoring a rich fungal community contrary to more specialist hosts. Such positive effects of particular plant identities have already been suggested: for instance, spore abundance in a salt marsh was determined by the proximity of mycotrophic hosts [START_REF] Carvalho | Spatial variability of arbuscular mycorrhizal fungal spores in two natural plant communities[END_REF], whereas the presence of the grass Anthoxanthum odoratum increased the abundance of AM fungi in the soil regardless of the plant mixture (De Deyn, 2010). Conversely, we also demonstrated in our experiment the negative effects of several plant species on the richness and equitability of key fungal groups, which ultimately reduced the trap plant biomass. For example, H. mollis, A. stolonifera and H. lanatus indirectly decreased the productivity of the trap plant M. truncatula.
This negative impact might be due to a host-preference effect. Indeed, several groups of fungi can harbor a limited range of host plants. A given plant can thus represent either a good or a bad choice of host. It can also be interpreted as an allelopathic-like phenomenon, although it has not been evidenced before for these species. For example, a species belonging to the genus Festuca has been shown to produce allelopathic compounds suppressing D. glomerata growth, but depending on the competition context of the community (Viard-Crétat et al., 2012). Plant production of allelopathic compounds may suppress the germination of spores or the formation of mycorrhizal associations ([START_REF] Stinson | Invasive plant suppresses the growth of native tree seedlings by disrupting belowground mutualisms[END_REF]; Callaway et al., 2008) and thus alter the reservoir of soil microorganisms.

More importantly, we showed that the AM fungal species used have contrasting effects on plant traits. In our experiments, the difficulty of identifying the role of a given symbiont in determining the plant phenotype resided in the fact that a given plant was colonized by different symbionts with contrasting effects at the same time (Article II). It is thus necessary to identify the range of variations due to symbiont identity. A perspective of this work would thus be to test a wider range of AM fungal partners to determine whether they specifically modify plant traits and act on performance. We performed a preliminary study (not presented in the document) testing this hypothesis. We grew individuals of G. hederacea in controlled conditions, either inoculated with a fungal isolate or not inoculated. We tested the effect of nine fungal species selected to constitute a large range of cooperative behavior (i.e. providing high, low or no benefits). We found significant differences in various plant traits, including performance traits, depending on the fungal partner identity.
In addition to the direct effects described herein (Article II), the microbiota can also indirectly impact the plant phenotype through its interplay with epigenetic mechanisms (Article I). These two sources of phenotypic variation occurring during the plant's life currently constitute two separate fields of research. The few observations of a link between epigenomic modifications and the establishment of symbiosis ([START_REF] Ichida | DNA adenine methylation changes dramatically during establishment of symbiosis[END_REF]; [START_REF] Ichida | Epigenetic modification of rhizobial genome is essential for efficient nodulation[END_REF]; De Luis et al., 2012) suggest however that interplays do exist. These two separate fields of research will thus need to develop a common framework to determine the occurrence of microbiota-epigenetics interplays and their significance for the plant phenotype. Monitoring epigenetic marking while inoculating different experimentally engineered microbiota would surely provide insightful results and could help disentangle complex endophyte-induced phenotypic changes.

Plants have evolved mechanisms ensuring the presence of symbionts

Plants have been interacting with microorganisms for a very long time (over 400 million years for AM fungi; [START_REF] Remy | Four hundred-million-year-old vesicular arbuscular mycorrhizae[END_REF]; [START_REF] Redecker | Glomalean fungi from the Ordovician[END_REF]), allowing the establishment of co-evolution patterns [START_REF] Brundrett | Coevolution of roots and mycorrhizas of land plants[END_REF]. Subsequently, plants should have evolved mechanisms to optimize their association with beneficial symbionts. Many studies have described how plants have evolved filtration, defense, promotion or regulation mechanisms regarding microorganism colonization.
For example, mechanisms allowing plants to regulate the symbiosis and avoid cheating behavior with AM fungi have already been evidenced ([START_REF] Bever | Preferential allocation to beneficial symbiont with spatial structure maintains mycorrhizal mutualism[END_REF]; Kiers et al., 2011). However, since plants are sessile organisms, their "recruitment choice" depends on the microorganisms available in the local environment (e.g. Vandenkoornhuyse et al., 2015). In this context, the heterogeneous presence of beneficial microorganisms in the soil could be considered a source of stress for the plant (i.e. absence of beneficial microbes). Regarding this heterogeneity, different forms of "continuity of partnership" could be expected to ensure beneficial interactions. Clonal plants are known to display plastic responses maximizing either exploration (foraging) or exploitation (specialization) of heterogeneous resources, relying on their ability to share resources and information within the clonal network. Such responses could then be expected to buffer the heterogeneity of beneficial microorganisms. In our experiments (Article II), we however demonstrated that Glechoma hederacea displayed only a low foraging and no specialization response to AM fungi heterogeneity. Clonal plants have however evolved another mechanism allowing them to ensure the presence of microorganisms and thus their habitat quality. We evidenced the existence of vertical transmission (or heritability) of microorganisms through the connections between ramets (Article III). This previously unknown mechanism allows the transfer of a core microbiota that ensures the habitat quality of the plant progeny. This result suggests that plant physiological integration (Oborny et al., 2001) comprises, in addition to resources and information, the sharing of microorganisms, constituting a prospect of interest. These results open a large avenue for future research questions such as 1.)
the direction of this transfer (unidirectional or bidirectional, i.e. from progeny to parent plants); 2.) the mechanisms of transfer, active through the vascular network fluxes or passive by colonization of the stolon surface; 3.) the modalities of microorganism filtering during the process. Considering the latter hypothesis, we indeed evidenced that transmitted communities are similar, and an important consequence is the existence of a filtration process during the transmission that creates this similarity. This filtration process could be based either on the microorganisms' dispersal ability (i.e. ability to colonize stolons) or on the functions they provide to plants. Different hypotheses can be drawn from these results, such as the transmission of a core microbiota necessary for the clone establishment or, alternatively, the transfer of only the most cooperative and important fraction of the microbiota. We are currently testing the hypothesis that more beneficial symbionts could be preferentially transmitted. This experiment comprises two phases. A first phase (already completed, see above) aims at identifying the impact of the inoculation of single AM fungal species on plant traits and performance to detect beneficial and deleterious AM species. The second phase of the experiment consists in the inoculation of clonal plants with a designed mixture of AM fungal species comprising a range of symbiont qualities. This second phase aims at determining which AM fungal species are transmitted to the progeny, depending on their level of cooperation and their effects on plants. Estimating the ecological significance of this mechanism should be the main focus of future research by determining its ubiquity, fidelity and significance for plant phenotype expression. The next step would thus be to test if specific microbial cores can be recruited depending on plant ecological needs.
For example, when the plant is subjected to a stress, it should preferentially transfer organisms conferring an increased resistance to this particular stress.

II. Redefining the individual plant: from holobiont to meta-holobiont

Plants can no longer be considered as standalone entities

Plants are colonized by a high diversity of symbionts that individually (Article II) and collectively (Articles I and V) determine the plant phenotype and its subsequent ability to grow, reproduce and adapt to environmental conditions (Article I). This microbiota is present in every known plant and can be transmitted between generations (Article II). The plant can thus no longer be seen as an independent entity. Regarding selection and evolution processes, the plant has to be considered as a complex and multiple entity comprising the plant and its microbiota, forming the holobiont, as well as their genomes, forming the hologenome (Vandenkoornhuyse et al., 2015). Selection processes can act at different organization levels in such complex organisms (i.e. on the plant phenotype, on the microorganisms' phenotypes or on the result of their interactions).

Toward a novel understanding of clonal network organisms

The discovery of a novel microbiota heritability mechanism deeply impacts our understanding of clonal networks. This idea could be extended to other clonal networks as soon as organisms can be shared within the network. Future research is needed on clonal organisms in order to determine whether such a structure exists in other organisms or if the meta-holobiont only fits clonal plants. Even if the meta-holobiont theory developed herein (Article V) cannot be extended to other organisms, we emphasize the suitability of the clonal plant model (and thus the meta-holobiont) for the study of microbiota assembly in the context of complex organisms. The role of the meta-holobiont in determining microorganism dispersal and plant fitness in natural ecosystems is a major perspective.
Especially, in the context of a plant community, transmission and filtration mechanisms in the meta-holobiont could impact the dispersal of beneficial symbionts in the environment. If the meta-holobiont is able to selectively facilitate the spreading of the most useful microorganisms within the plant community, it could impact the microbiota assembly of the whole plant community.

III. From individuals to community

The community context shapes the microbiota

At the plant community scale, plants interact with each other in neutral, competitive, or facilitative ways. A given plant in the community is thus interacting with its neighborhood, and it is likely that this neighborhood could affect the assembly of its microbiota. More importantly, at a given location, the fungal propagule reservoir (i.e. the soil pool) available for recruitment by the plant is a consequence of the fungal community within the previous host plants. We demonstrated that the plant neighborhood determines in part this fungal soil pool available for plant recruitment (Article V). We explained the competitive balance and richness of the fungal communities colonizing the trap plant Medicago truncatula based on the close neighborhood of plants surrounding the soil sampling point. Valyi et al. (2016) proposed that the relative influence of host filtering on the AM fungal community is stronger at local scale. Our results fit perfectly with this hypothesis and introduce the role of the local plant community context (i.e. the plant neighborhood) as a factor structuring the assemblage of the plant microbiota, aside from plant genetics and environmental factors like soil properties. The results obtained herein (Article V) suggest that the host preference and host filtering structuring the fungal community mostly act at a centimetric scale (Hazard et al., 2013; Valyi et al., 2016). The effect of the plant neighborhood occurred at a finer scale than previously thought (i.e.
5 cm to 25 cm around the sampling point) on the fungal community richness and composition. Previous studies focusing on AM fungi reported that AM fungal community composition was highly heterogeneous, even at local scale ([START_REF] Brundrett | Mycorrhizal fungus propagules in the jarrah forest[END_REF]; [START_REF] Carvalho | Spatial variability of arbuscular mycorrhizal fungal spores in two natural plant communities[END_REF]; [START_REF] Wolfe | Origins of major human infectious diseases[END_REF]). Bahram et al. (2015) also reported in a meta-analysis the existence of a spatial autocorrelation of AM fungi, thus suggesting dispersal patterns of fungi at the scale of the meter. This corpus of knowledge, together with our results, suggests that fungi, and possibly more widely the microorganisms, are dispersal-limited at the scale of the plant community. However, evidence that isolation and dispersal limitation exist for microbial assemblages is scarce [START_REF] Telford | Dispersal limitations matter for microbial morphospecies[END_REF]. There is thus a need to reconsider the scale at which we study plant microbiota assembly rules. In addition, the mechanism of microbiota heritability evidenced herein (Article III) allows the dispersal of fungi in the community and calls for the development of a framework on microorganism dispersal.

Toward the concept of a microorganism micro-landscape

The plant host has been shown to affect the microbiota structure, in particular its composition and diversity ([START_REF] Berg | Plant species and soil type cooperatively shape the structure and function of microbial communities in the rhizosphere[END_REF]; [START_REF] Bulgarelli | Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota[END_REF]; Lundberg et al., 2012). In addition, we demonstrated that the plant neighborhood can have contrasting effects on the richness and equitability of fungal taxonomic groups (e.g. different phyla or classes).
Theoretically, niche partitioning and neutral processes (Hubbell, 2001, 2005) are classically admitted to drive community assembly and explain diversity patterns. In the context of symbiotic microorganisms, plants can be considered as habitat, and the assembly of the microbial community is driven by the niche partitioning of microorganisms. This result thus suggests that a particular plant constitutes a host of a specific quality (favorable or unfavorable) that can be assimilated to a patch of habitat for symbiotic organisms. From here, the plant community can be assimilated to a dynamic mosaic of patches. The composition of the patches and their spatial arrangement could thus determine the microbiota assembly of plant species within the mosaic. In this context, a major perspective is to transpose and adapt macro-landscape ecological frameworks to the scale of the microorganism landscape. The plant community can be redefined from the fungal perspective in an attempt to characterize what habitat size, isolation and dispersal should be considered in microbial ecology. From our results, we develop the idea that, for a given fungus, the plant community is a dynamic mosaic of plants of various quality that can be assimilated to patches in a landscape. An important consequence of this understanding of the fungal landscape is the transposition of macro-landscape elements to a micro-scale. For example, a perspective of this approach is to investigate the impact of connectivity in the fungal micro-landscape. Connectivity describes the permeability of landscape elements to the dispersal of organisms. If plants can be considered as habitats of different quality for a fungus, then the abundance of a favorable host within the fungal landscape would increase the dispersal of the fungus.
We are currently engaged in writing a review (not presented herein) that aims at transposing this knowledge of the macro-landscape to the micro-landscape, introducing the micro-landscape concept. In parallel, we conducted a collaborative experiment (not presented herein) using experimental mesocosms to test the hypothesis that landscape parameters such as the connectivity between two hosts (defined as the presence of favorable hosts between the focal plants) could explain the similarity of their microbiota.

IV. Microbiota assembly and agricultural practices

The ability to engineer the plant microbiota is a key challenge to resolve the productivity-biodiversity loss conundrum. Microbiota engineering aims at optimizing specific attributes of the focal host organisms through the selection of a specific microbiota (Müller et al., 2016). In plants, several candidate traits have been identified, comprising for example flowering time and plant biomass [START_REF] Panke-Buisse | Selection on soil microbiomes reveals reproducible impacts on plant function[END_REF]. To fulfill this goal, future research will have to describe precisely, on the one hand, the rules driving microbiota assembly and, on the other hand, the host-plant fitness resulting from a given assembly. While beneficial microorganisms have started to be commercialized ([START_REF] Berg | Plant-microbe interactions promoting plant growth and health: perspectives for controlled use of microorganisms in agriculture[END_REF]; and see de Vrieze, 2015 for a chronicle), they are not always reliable on crops, especially depending on the local context. Our results provide evidence that it is not only the presence of one particular beneficial microbe that increases plant performance but also the global richness of the plant fungal microbiota (Article V).
For a few years, scientists have proposed improvements to agricultural practices, such as increasing plant crop richness to promote microorganism diversity and thereby enhance the diversity of ecological functions provided to the plant (Duhamel & Vandenkoornhuyse, 2013). The work presented herein demonstrated that the rules of plant microbiota assembly, comprising the richness of major fungal groups, and the resulting effects on plant phenotype depend on the plant community context (Article V). There are thus perspectives for agricultural practices to consider not only plant richness but also plant composition and spatial arrangement. Studies have until now focused on either the identification of key beneficial microorganisms or the maximization of microorganism diversity. We additionally demonstrate that opportunities exist for maximizing the equitability of the microbiota to increase plant fitness. In addition, the effect on plant phenotype does not necessarily rely on the plant community richness as previously thought, but can also depend on plant identities in the host neighborhood. This encourages the development of polyculture practices. For example, combinations of facilitative plants can be used in cultures to maximize the richness of key fungal groups, thus ultimately increasing productivity.

To conclude, I can see two major perspectives arising from this PhD thesis. The first perspective is linked to the above-mentioned limits of the neo-Darwinian synthesis of evolution in describing and understanding plant-microbe interactions and their evolutionary consequences. Both microbiota and epigenetics are not linked to the genome and are thus not integrated in the synthesis. The synthesis does not encompass biological units of organization like holobionts and meta-holobionts.
From my perception, one of the major challenges of the next few years relies on the development of an extension of the neo-Darwinian synthesis of evolution integrating non-genome-based and interaction-mediated evolutionary patterns. The second perspective is linked to the concrete application of the acquired knowledge for the development of a sustainable agriculture. The results obtained during this PhD highlight the complexity and the dynamics of plant microbiota assembly and of its resulting effects on plant phenotype. From my perception, thus, the ability to maximize crop resistance, resilience and productivity does not reside in the study of a given beneficial microorganism but rather in the understanding of the mechanics regulating the emerging properties and functioning of microbiota assemblages.

ABSTRACT

Plants live in association with a wide diversity of microorganisms forming the microbiota. The plant microbiota provides a variety of key functions that influence many aspects of a plant's life, comprising establishment, growth and reproduction. The present thesis aims at determining the assembly rules of the plant microbiota and its consequences for plant phenotype, adaptation and evolution. To fulfill this objective, we used different experimental approaches, using either clonal plants as model organisms or grassland mesocosms for community-wide analyses. Our results demonstrated i) that Arbuscular Mycorrhizal Fungi induce important phenotypic variations in clonal plant traits involved in space exploration and resource exploitation. These changes depended on the identity of the symbionts and altered the plants' ability to produce plastic responses to environmental heterogeneity. ii) Plants have evolved a mechanism allowing the transmission of a part of their microbiota to their progeny, thus ensuring their habitat quality. iii) The plant community context is a major factor structuring local plant microbiota assembly.
Particular plant species identities in the neighborhood increase or decrease the microbiota diversity and ultimately determine the focal plant performance. This thesis overall demonstrates the importance of symbiotic microorganisms for the understanding of plant adaptation and evolution. From the knowledge acquired, we developed a novel understanding of symbiotic interactions in clonal plants by extending the holobiont theory to the meta-holobiont theory.

3.2.1 Microorganisms recruitment through compounds secretion
3.2.2 The immune system
3.2.3 Regulation of symbiotic interactions
3.2.4 The host plant effect: genetics and biogeography
3.3 Biotic interactions
3.3.1 Microbe-microbe interactions
3.3.2 The plant community context
3.4 Microbiota transmission

of global changes (i.e. global warming), local conditions are changing and scientists

Figure 1. a) Schematic plant showing the different compartments of the plant and its environment colonized by microorganisms comprising bacteria, fungi and archaea. Numbers of bacterial cells are indicated for the phyllosphere, atmosphere, rhizosphere, and root and soil bacterial communities and were taken from Bulgarelli et al. (2013). b) Schematic root cross-section of the root hair zone showing epiphytic colonization and endophytes colonizing the endosphere. c) Schematic longitudinal section of a root with the rhizosphere and bulk soil around the root. The bulk soil is colonized by a wide diversity of microorganisms, which are filtered within the rhizosphere and subsequently in the endosphere.
The figure was modified from Müller et al. (2016).

Figure 2. Phylogenetic structure of bacteria associated with the roots and leaves of different plant species. This figure was realised from sequencing data of different papers, analyzed using a reference-based operational taxonomic unit (OTU) picking method, and subsequently combined at the family level. Abbreviations: n.d., not detected. From Müller et al. (2016).

Figure 3. Growth form of the clonal plant Glechoma hederacea. The plant is organized as a network of ramets that are potentially independent individual units composed of roots, shoots and a node. These ramets are connected by stolons, and the section of the stolon separating two nodes (i.e. two ramets) is called an internode. Modified from Birch & Hutchings, 1994.

Figure 4. Schematic root surface and root endosphere as well as the surrounding rhizosphere (zone of the soil under plant influence) and bulk soil. The figure shows the different abiotic and biotic factors structuring the assembly of the microorganism community within the different compartments.

3.2.2) A given plant will influence the local soil pool. Thus at the scale of the plant community, the effect of a plant on the local soil pool of microorganisms indirectly affects the microbiota assembly of the other plants in the community. More importantly, as the filtering effect is dependent on plant identity, studies based on matrix-focal plant designs have demonstrated that this identity determines the composition of the focal plant fungal community (Johnson et al., 2004; Hausmann & Hawkes, 2009). Such host filtering effects have long been considered negligible for plant microbiota assembly in comparison to abiotic environmental factors. However, a recent review by Valyi et al.
I.2 Article I: Epigenetic mechanisms and microbiota as a toolbox for plant phenotypic adjustment to environment

Evolution is driven by selection forces acting on variation among individuals. These induced phenotypes are subjected to selection (Pfennig, 2010). If selection acts primarily on phenotype, the environmental constraints an organism has to face can lead either to directional selection or disruptive selection of new phenotypes (Pfennig, 2010). Thus, novel traits can result from environmental induction followed by genetic accommodation of the changes (West-Eberhard, 2005). These accommodated novelties, because they are acting in response to the environment, are proposed to have greater evolutionary impact than mutation-induced novelties (West-Eberhard, 2005).

Figure 1: (A) Plant phenotypic plasticity is triggered by environmental constraints. The phenotypic changes induced are not solely genetically controlled but are also based on either epigenetic marks (box 2) or the plant microbiota through recruitment of mutualists. This plant 'toolbox' allows a rapid response to environmental constraints. (B) The control over plant phenotypic plasticity may crosstalk or synergistically interplay through different possible interactions: 1) co-evolution plant-symbiont, 2) interplay genetics/epigenetics, 3) cross-talk epigenetics/microbiota. These mechanisms also act at the modular scale of plant structure.

Kiers, E. T., Hutton, M. G. and Denison, R. F. (2007). Human selection and the relaxation of legume defences against ineffective rhizobia. Proc. R. Soc. B 274, 3119-3126
Kiers, E. T., Duhamel, M., et al. (2011). Reciprocal rewards stabilize cooperation in the mycorrhizal symbiosis. Science 333, 880-882
Kucharski, R., Maleszka, J., Foret, S., and Maleszka, R. (2008). Nutritional control of reproductive status in honeybees via DNA methylation.
Science 319, 1827-1830
Lee, Y. K., and Mazmanian, S. K. (2010). Has the microbiota played a critical role in the evolution of the adaptive immune system? Science 330, 1768-1773
Leggat, W., Ainsworth, T., Bythell, J., Dove, S., Gates, R., Hoegh-Guldberg, O., et al. (2007). The hologenome theory disregards the coral holobiont. Nat. Rev. Microbiol. 5, doi:10.1038/nrmicro1635-C1
Lira-Medeiros, C. F., Parisod, C., Fernandes, R. A., Mata, C. S., Cardoso, M. A., and Ferreira, P. C. G. (2010). Epigenetic variation in mangrove plants occurring in contrasting natural environment. PLoS ONE 5, e10326
Lopez, M., Herrera-Cervera, J. A., Tejera, N. A., et al. (2008). Growth and nitrogen fixation in Lotus japonicus and Medicago truncatula under NaCl stress: Nodule carbon metabolism. J. Plant Physiol. 165, 641-650
De Luis, A., Markmann, K., Cognat, V., Holt, D. B., Charpentier, M., Parniske, M., et al. (2012). Two microRNAs linked to nodule infection and nitrogen-fixing ability in the legume Lotus japonicus. Plant Physiol. 160, 2137-2154
Lundberg, D. S., Lebeis, S. L., Paredes, S. H., Yourstone, S., Gehring, J., Malfatti, S., et al. (2012). Defining the core Arabidopsis thaliana root microbiome. Nature 488, 86-90
Manning, K., Tör, M., Poole, M., Hong, Y., Thompson, A. J., King, G. J., et al. (2006). A naturally occurring epigenetic mutation in a gene encoding an SBP-box transcription factor inhibits tomato fruit ripening. Nat. Genet. 38, 948-952
Martin, R. A. and Pfennig, D. W. (2010). Field and experimental evidence that competition and ecological opportunity promote resource polymorphism. Biol. J. Linn. Soc. 100, 73-88
Mayr, E. and Provine, W. B. (Eds.) (1998). The evolutionary synthesis: perspectives on the unification of biology. Harvard University Press
Molinier, J., Ries, G., Zipfel, C., and Hohn, B. (2006).
Transgeneration memory of stress in plants. Nature 442, 1046-1049
Peck, L. S., Thorne, M. A., Hoffman, J. I., Morley, S. A., and Clark, M. S. (2015). Variability among individuals is generated at the gene expression level. Ecology 96, 2004-2014
Pfennig, D. W., Wund, M. A., Snell-Rood, E. C., Cruickshank, T., Schlichting, C. D., and Moczek, A. P. (2010). Phenotypic plasticity's impacts on diversification and speciation. Trends Ecol. Evol. 25, 459-467
Pigliucci, M. (2005). Evolution of phenotypic plasticity: where are we going now? Trends Ecol. Evol. 20, 481-486

I.3 Article II: AM fungi patchiness and the clonal growth of Glechoma hederacea in heterogeneous environments

1 Université de Rennes 1, CNRS, UMR 6553 EcoBio, Campus Beaulieu, Avenue du Général Leclerc, RENNES Cedex (France)
2 Université de Lyon 1, CNRS, UMR 5023 LEHNA, 43 Boulevard du 11 Novembre 1918, VILLEURBANNE Cedex (France)

Abstract: The effect of the spatial distribution of AM fungi on individual plant development may determine the dynamics of the whole plant community. We investigated whether clonal plants display a foraging or a specialization response, as for other resources, to adapt to the heterogeneous distribution of AM fungi. Two separate experiments were done to investigate the response of Glechoma hederacea to a heterogeneous distribution of a mixture of 3 AM fungi species, and the single effects of each species on colonization and allocation traits. was detected. Two possible explanations are proposed: (i) the plant's responses are buffered by differences in the individual effects of the fungal species or their root colonization intensity; (ii) the initial AM fungi heterogeneity is sensed as homogeneous by the plant, either through reduced physiological integration or due to the transfer of AM fungi propagules through the stolons.
Microscopic and DNA sequencing analyses provided evidence of this transfer, thus demonstrating the role of stolons as dispersal vectors of AM fungi within the plant clonal network.

Key words: Glechoma hederacea, Arbuscular Mycorrhizal Fungi, Phenotypic Plasticity, Clonality, Heterogeneity, Scanning Electron Microscopy, Patches

Fig.-1: Schematic drawing of the experimental design composed of pots arranged in lines. Ramets were forced to root in different pots and lateral ramifications were removed to orient growth in a line. Four treatments of AM fungal distribution were applied based on the presence or absence of AM fungi in the pots: Absence (A) (10 pots without AM fungi); Presence (P) (10 pots with AM fungi); Presence-Absence (PA) (five pots with AM fungi followed by five pots without AM fungi); Absence-Presence (AP) (five pots without AM fungi followed by five pots with AM fungi).

Fig.-2: Foraging response: internode length under the four treatments applied (cm per gram of ramet total biomass) (A). Specialization response: root:shoot ratio (R/S) of the 5th, 6th, 10th and 11th ramets under the four applied treatments (g of roots per g of shoots after drying) (B). Absence (blue bars), Presence (grey bars), Presence-Absence (orange bars), Absence-Presence (green bars). Statistical significance of the internode length or R/S variations between treatments: NS, not significant; **, P<0.01.

Fig.-3: Allocation traits of the whole clone for the four treatments of AM fungi inoculation: T1 = no AM fungi (white bars), T2 = Glomus custos (blue bars), T3 = Glomus intraradices (yellow bars), T4 = Glomus clarum (red bars). Means of each organ (shoots, roots and stolons) biomass in grams per gram of total clone biomass. Statistical significance of the organ biomass variations between treatments: NS, not significant; *, P<0.05.
Fig.-4: Performance traits of the clone for the four treatments of AM fungi inoculation: T1 = no AM fungi (white bars), T2 = Glomus custos (blue bars), T3 = Glomus intraradices (yellow bars), T4 = Glomus clarum (red bars). Total clone biomass in grams after drying (A). Number of ramets per gram of total clone biomass (B). Statistical significance of the total biomass and number of ramets variations between treatments: NS, not significant; *, P<0.05.

Fig.-5: Maximum likelihood tree of the GTR+I+G model using PhyML. Multiple alignment was produced with MUSCLE 62. Bootstrap values at the nodes were produced from 200 replicates. Only values above 50 are shown. Multiple alignment and tree reconstruction were performed using SEAVIEW 63. OTUs were obtained from a Glechoma hederacea stolon after DNA extraction using the DNeasy plant mini kit (Qiagen), PCR amplification using fungal primers NS22b and SSU817, and Illumina MiSeq sequencing. In addition to reference sequences within the Glomeromycota phylum, we sampled 13 sequences among the best BLAST hits (†).

Fig.-6: Results of the microscopy analysis of stolons harvested from G. hederacea pre-cultures. Scanning electron microscopy of the stolon surface showing hyphae attached to the stolon hairs (A). Stolon microscopy cross-section observed with an optical microscope. Arrows indicate cortical cells invaded by structures which may be interpreted as fungi (B).

Figure 1 | Experimental design. (a), clonal ramets of 10 ecotypes were forced to root in separate individual pots and connected by stolons. At the end of the experiment, the clonal network consisted of the mother ramet and 4 daughter ramets. The daughter ramets (1st and 2nd daughter ramets) were positioned along the two primary stolons produced by the mother ramet.
Pots with mother ramets were filled with homogenized field soil, those with daughter ramets contained sterilized substrate, and contact was only through the internode that separated two consecutive ramets. M = mother, D1 = 1st daughter, D2 = 2nd daughter. (b), picture of the experimental design: the pots are only connected by the internodes.

Figure 2 | Composition of the bacterial and fungal communities within the root endosphere at the different positions in the clonal network. (a), mean number of OTUs of each fungal phylum and mean total number of OTUs for all phyla together found in the root samples at the different positions in the clonal network (mother, 1st daughter or 2nd daughter). Vertical bars represent the standard error of the mean for each phylum. Significance of the linear mixed models testing the differences in OTU richness between mothers and daughters in the clonal network is indicated: *** P<0.001. (b), mean number of OTUs of each bacterial phylum and mean total number of OTUs for all phyla together found in the root samples at the different positions in the clonal network (mother, 1st daughter or 2nd daughter). Vertical bars represent the standard error of the mean for each phylum. Significance of the linear mixed models testing the differences in OTU richness between mothers and daughters in the clonal network is indicated: *** P<0.001.

Figure 3 | Partial Least Square Discriminant Analysis (PLS-DA). (a), PLS-DA testing the significance of the position (mothers, 1st daughters and 2nd daughters) on the composition of the root bacterial communities. (b), PLS-DA testing the significance of the position (mothers, 1st daughters and 2nd daughters) on the composition of the root fungal communities. The groups used as grouping factor in the model are represented on the graphs. They correspond to mother, 1st and 2nd daughter ramets. 1st and 2nd ramets were grouped independently of the stolon to which they belonged.
This analysis was used to test the hypothesis that roots at different ramet positions in the clonal network exhibit similar compositions of fungal and bacterial communities. The percentage of variance indicated on each axis represents the variance of the community composition explained by the grouping factor.

ecosystems. As regards this last aspect, plant communities are dominated by clonal plants and our findings demonstrate their fundamental role in the spreading of microorganisms between trophic levels and reveal a new ecological function of plant clonality. Considering that the heritability process demonstrated herein affects different compartments within the ecosystem, this novel ecosystem process consisting of microbiota filtering and transfer by clonal plants is of paramount importance.

between plant generations

Nathan Vannier 1, Cendrine Mony 1, Anne-Kristel Bittebiere 2, Sophie Michon-Coudouel 1, Marine Biget 1, Philippe Vandenkoornhuyse 1*

II.3 Article IV: Introduction of the meta-holobiont concept for clonal plants.

Fig 1: Glechoma hederacea phenotypic plasticity induced by its microbiota composition. All the plants have the same genotype (i.e. same clone) and were grown under controlled conditions (light, temperature, hygrometry, duration, water supply). Only one component of their microbiota differed (i.e. the mycorrhizal colonizer).

Figure 2. a) Schematic clonal plant showing the two levels of modularity within clonal plants: 1st order modules and 2nd order modules. b) Schematic clonal plant showing a plastic response of foraging in a resource-heterogeneous environment. The foraging response consists of ramet aggregation in good patches through the modification of internode length while avoiding poor patches. c) Schematic clonal plant showing a plastic response of specialization to combined heterogeneity of light and nutrients.
The response consists of preferentially developing roots in high-nutrient patches and shoots in high-light patches, and redistributing resources within the network thanks to physiological integration.

Fig 3: The meta-holobiont. The holobiont entity and the meta-holobiont entity and their respective compositions are explained in (A) and (B) respectively. Colored dots correspond to epiphytic or endophytic microorganisms forming the root microbiota. Notice that roots and shoots are colonized by microorganisms.

al., Submitted) and possibly reciprocally (i.e. from clonal offspring to ascendant ramets), although this has not been demonstrated yet, clearly represents an important level to consider in the understanding of the clonal holobiont. If holobionts and hologenomes are units of biological organization for the observation and understanding of a given macro-organism (principles 1, 2 and 3 in Bordenstein & Theis, 2015), then the meta-holobiont could be considered as a supra-organism.

Holobiont impacts on meta-holobiont structure and reciprocal effects

In patchy resource distributions, many authors have demonstrated that clonal individuals aggregate ramets through internode shortening and increased branching in resource-rich patches, and avoid resource-poor patches through spacer elongation (i.e. foraging, Fig 2b; de Kroon & Hutchings, 1995).

(i.e. from 5 to 25 cm), stressing the importance of the local plant community context for endophyte assembly. In addition, both the present and the past plant neighborhood impacted the root fungal community, thus demonstrating that plants can leave a durable fingerprint on the composition of the soil fungal pool. Furthermore, we demonstrated that such changes in the fungal composition of the host plant roots impacted M. truncatula performance. Higher species richness and more equitable endophyte assemblages increased the fitness of the plant.
Both results demonstrated the existence of plant-plant interactions mediated by fungi ultimately impacting plant fitness. This offers a new understanding of plant microbiota assembly through the underestimated role of the local plant community. The role of leaf endophyte diversity in the diversity-productivity relationship has recently been proposed (Laforest-Lapointe et al., 2017) and our results suggest that it could be extended to the root microbiota and depend on the plant community context.

III.2 Article V: Plant-plant interactions mediated by fungi impact plant fitness

Introduction

The great diversity of soil microorganisms, micro- and meso-fauna is a key driver of aboveground ecosystem functioning which determines vegetation composition and dynamics (e.g. van der

Figure 1. Sampling protocol. Plant neighborhoods were determined by mapping the abundances of the different plant species in the past (one year before sampling) and the present (the same year as sampling). Five soil cores were sampled within each plot and an individual of M. truncatula was grown on each soil sample as a trap plant.

fungal richness and equitability on the trap plant biomass

(i.e. 5 cm to 25 cm around the sampling point) also determines in part the fungal soil pool available for plant recruitment. Our findings thus open up new avenues in the understanding of plant microbiota assembly rules by introducing the role of the local plant community context (i.e. the plant neighborhood) as a structuring factor.

herein explain the competitive balance and richness of the fungal communities colonizing Medicago truncatula based on the close neighborhood of plants surrounding the sampling point. These findings expand the rules of plant fungal microbiota assembly to consideration of the fingerprint that a plant in the local neighborhood can leave on the soil fungal pool.
These plant filtering mechanisms operated on the fungal community at a finer scale than previously thought (below 25 cm). The clear influence of the plant neighborhood on the local soil microbial reservoir is an additional ecological force driving microbiota complexity and heterogeneity with strong consequences on the focal plant fitness. Recent advances demonstrated a significant impact of the leaf-associated microbiota diversity on ecosystem productivity (Laforest-Lapointe et al., 2017). We extend this idea to include the root-associated microbiota and support the correlation between plant-associated microbial diversity and ecosystem productivity, suggesting that manipulations of plant neighborhoods can be used to improve biodiversity-ecosystem functioning relationships.

Viard-Crétat F, Baptist F, Secher-Fromell H, Gallet C. 2012. The allelopathic effects of Festuca paniculata depend on competition in subalpine grasslands. Plant Ecology 213: 1963-1973.
Wagg C, Jansa J, Schmid B, van der Heijden MG. 2011. Belowground biodiversity effects of plant symbionts support aboveground productivity. Ecology Letters 14: 1001-1009.
Wagg C, Bender SF, Widmer F, van der Heijden MG. 2014. Soil biodiversity and soil community composition determine ecosystem multifunctionality. Proceedings of the National Academy of Sciences 111: 5266-5270.
West SA, Kiers ET, Pen I, Denison RF. 2002. Sanctions and mutualism stability: when should less beneficial mutualists be tolerated? Journal of Evolutionary Biology 15: 830-837.
Zhang Q, Sun Q, Koide RT, Peng Z, Zhou J, Gu X, Gao W, Yu M. 2014. Arbuscular mycorrhizal fungal mediation of plant-plant interactions in a marshland plant community. The Scientific World Journal: 923610.

Acknowledgements

This work was supported by a grant from the ANR program, by a grant from the CNRS-EC2CO program (MIME project) and by the French ministry for research and higher education. We thank Kevin Potard for advice on statistical analysis.
We are also grateful to D. Warwick for helpful comments and suggestions for modifications on a previous version of the manuscript. We also thank the Institut National de la Recherche Agronomique (INRA, Montpellier, France) for providing M. truncatula seeds.

Author Contributions

A.K.B., P.V. and C.M. conceived the ideas and experimental design. A.K.B. and C.M. did the experiments. N.V. did the data analyses. N.V., A.K.B., P.V. and C.M. did the interpretations and writing of the publication.

I. Plants and microorganisms: a tight association

Microbiota matters for plant phenotype

Considering the diversity of functions provided by the plant microbiota, it is clear that endophytes have important impacts on plant phenotype (Article I). Clonal plants, which represent most of the known plants in temperate ecosystems (van Groenendael & de Kroon, 1990), are no exception to this rule (Article II). We demonstrated that the inoculation of symbionts (AM fungal species) can alter architectural, allocative and reproductive traits (Article II). The effects of these plant-associated microorganisms on the traits involved in clonal plant foraging and specialization have been poorly described to date. Our experiments thus provided indications that AM fungi could be responsible for changes in the ability of clonal plants to produce foraging and specialization responses.

plant phenotype and evolution. Not only can plants be considered as holobionts, but the structure of clonal plants in networks connecting clonal ramets extends the holobiont concept to a more complex scale that we have called the meta-holobiont (Article IV). The clonal network indeed allows the dynamic sharing of microorganisms between ramets. Different holobionts are connected in the clonal network, allowing them to exchange part of their hologenome. The network thus constitutes an additional level on which selection can act.
Selection can act on the phenotype of the individual holobiont or on the phenotype of the network. In addition, these two levels of organization are intertwined because the fitness of one member of the network can affect the fitness of the whole network.

Keywords: Clonal plants; Microbiota; Arbuscular Mycorrhizal Fungi; Holobiont; Plant-plant interactions; Community assembly

RÉSUMÉ

Plants live in association with a great diversity of microorganisms which form their microbiota. This microbiota provides key functions that influence all aspects of a plant's life, from its establishment and growth up to its reproduction. This thesis aims to determine the assembly rules of the microbiota and its consequences for plant phenotype, adaptation and evolution. To reach this objective, we used different experimental approaches comprising clonal plants as model organisms as well as grassland mesocosms for community-scale analyses. Our results demonstrated i) that Arbuscular Mycorrhizal Fungi induce important phenotypic variations in the clonal plant traits involved in space exploration and resource exploitation. These changes depend on the identity of the symbionts and alter the plants' capacity to develop plastic responses to environmental heterogeneity. ii) Plants have evolved a mechanism allowing the transmission of part of their microbiota to their progeny, thus ensuring the quality of their habitat. iii) The specific context of plant communities is a major factor structuring the assembly of the plant microbiota at the local scale.
The abundance of certain plant species in the neighborhood of a focal plant increases or decreases the diversity of its microbiota, ultimately determining its performance. Overall, this thesis demonstrates the importance of symbiotic organisms in the understanding of plant adaptation and evolution. From the knowledge acquired, we have developed a new understanding of symbiotic interactions in clonal plants by introducing the meta-holobiont theory as an extension of the holobiont theory.

Keywords: Clonal plants; Microbiota; Arbuscular Mycorrhizal Fungi; Holobiont; Plant-plant interactions; Community assembly
The best-known epigenetic mechanisms involve DNA methylation, histone modifications and histone variants, and small RNAs. These epigenetic mechanisms lead to enhanced or reduced gene transcription and RNA translation (e.g. Richards, 2006; Holeski, 2012). A more restricted definition applied in this paper considers as epigenetic the states of the epigenome regarding epigenetic marks that affect gene expression: DNA methylation, histone modifications (i.e. histone amino-terminal modifications that act on affinities for chromatin-associated proteins) and histone variants (i.e. structure and functioning), and small RNAs. These epigenetic marks may act separately or concomitantly, and can be heritable and reversible (e.g. Molinier et al., 2006; Richards et al.; Bilichak, 2012).
The induction of defense pathways and metabolite synthesis against biotic and abiotic constraints by epigenetic marks has been demonstrated during the last decade mainly in the model plant species A rabidopsis and tomato (e.g. [START_REF] Rasmann | Herbivory in the previous generation primes plants for enhanced insect resistance[END_REF][START_REF] Slaughter | Descendants of primed Arabidopsis plants exhibit resistance to biotic stress[END_REF][START_REF] Sahu | Epigenetic mechanisms of plant stress responses and adaptation[END_REF] . horizontal gene transfer between members of the holobiont (i.e., transfer of genetic material between bacteria; Dinsdale et al., 2008) (2) microbial amplification (i.e., variation of microbes abundance in relation to environment variation) and ( The extended phenotype: The gene as the unit of selection. Oxford University Press Dinan,T. G., Stilling, R. M., Stanton, C., Cryan, J . F (2015). Collective unconcious: how gut microbes shape human behaviour. J Psychiatr.Res. 63,[START_REF] Simms | The relative advantages of plasticity and fixity in different environments: when is it good for a plant to adjust ?[END_REF][START_REF] Wijesinghe | The effects of environmental heterogeneity on the performance of Glechoma hederacea: the interactions between patch contrast and patch scale[END_REF][START_REF] Bradshaw | Evolutionary significance of phenotypic plasticity in plants[END_REF][START_REF] Schlichting | Phenotypic plasticity-an evolving plant character[END_REF][START_REF] Pigliucci | Reaction norms of Arabidopsis. V. 
flowering time controls phenotypic architecture in response to nutrient stress[END_REF][START_REF] Sultan | Phenotypic plasticity for plant development, function and life history[END_REF][START_REF] Miner | Ecological consequences of phenotypic plasticity[END_REF][START_REF] Pigliucci | Evolution of phenotypic plasticity: where are we going now?[END_REF][START_REF] Harper | Population biology of plants[END_REF] Dinsdale, E.A ., Edwards, R. A ., Hall, D., A ngly, F., Breitbart, M., Brulc, J . M., et al. (2008). . Symbiosis lost: Imperfect vertical transmission of fungal endophytes in grasses. Am. Nat. 172, 405-416 A lberch, P. (1991). From genes to phenotype: dynamical systems and evolvability. Genetica 84, 5-11 Dawkins, R. (1982). Functional metagenomic profiling of nine biomes. Nature 452, 629-632 A lpert, P., and Simms, E. L . (2002). The relative advantages of plasticity and fixity in different El-Soda, M., Malosetti, M., Zwaan, B. J ., K oornneef, M., and A arts, M. G. (2014). Genotype × environments: when is it good for a plant to adjust? Evol. Ecol. 16, 285-297 environment interaction QTL mapping in plants: lessons from Arabidopsis. Trends Plant Sci. A nderson, J . T., Willis, J . H., and Mitchell-Olds, T. (2011). Evolutionary genetics of plant 19, 390-398 adaptation. Trends Genet. 27, 258-266 Gilbert, S.F., McDonald, E., Boyle, N., Buttino, N., Gyi, L ., Mai, et al. (2010). Symbiosis as a Transgenerational epigenetic imprints on mate preference. Proc. Natl. Acad. Sci. U.S.A. 104, 5942-5946 Cubas, P. V incent, C., and Coen, E. (1999). A n epigenetic mutation responsible for natural variation in floral symmetry. Nature 401, 157-161 Bilichak, A . (2012) . The Progeny of Arabidopsis thaliana plants exposed to salt exhibit changes in DNA methylation, histone modifications and gene expression. 
PLoS ONE 7, e30515 Boller, [START_REF] Boller | A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors[END_REF] . A renaissance of elicitors: perception of microbe-associated molecular patterns and danger signals by pattern-recognition receptors. Ann. Rev. Plant Biol. 60, 379-406 Bossdorf, O., Richards, C. L ., and Pigliucci, M. (2008) . Epigenetics for ecologists. Ecol. Let. 11, 106-115 Bradshaw, A . D. (1965) . Evolutionary significance of phenotypic plasticity in plants. Adv. Genet. 115-155 Brundrett, M. C. (2002) . Coevolution of roots and mycorrhizas of land plants. New Phytol. 154, 275-304 Bulgarelli, D., Rott, M., Schlaeppi, K ., Ver L oren van Themaat, E., A hmadinejad, N., A ssenza, F, et al. (2012) . Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota. Nature 488: 91-95 Conrath, U., Beckers, G. J ., Flors, V., García-A gustín, P., J akab, G., Mauch, F.,et al. (2006) . Priming: getting ready for battle. Mol. Plant-Microbe Interact. 19, 1062-1071 Crews, D., Gore, A . C., Hsu, T. S., Dangleben, N. L ., Spinetta, M., Schallert, T., et al. (2007) . source of selectable epigenetic variation: taking the heat for the big guy. Proc. R. Soc. B 365, 671-678 Hodge, A . (2004) . The plastic plant: root responses to heterogeneous supplies of nutrients. New Phytol. 162, 9-24 Holeski, L . M., J ander, G., and A grawal, A . A . (2012) . Transgenerational defense induction and epigenetic inheritance in plants. Trends Ecol. Evol. 27, 618-626 Huber, H. and Hutchings, M.J . (1997) . Differential response to shading in orthotropic and plagiotropic shoots of the clonal herb Glechoma hirsuta. Oecologia 112, 485-491 Ichida, H. Matsuyama, T., A be, T., and K oba, T. (2007) . DNA adenine methylation changes dramatically during establishment of symbiosis. F EBS J . 274, 951-962 Ichida, H. Yoneyama, K ., K oba, T., and A be, T. (2009) . 
Experiment 2: Effect of AM fungi identity on G. hederacea traits. The hypothesis that modifications in G. hederacea foraging and specialization traits were affected by the AM fungal species was tested by comparing the allocation, architectural and growth traits of four treatments inoculated with different AM fungal species (see Methods for details on experimental design). Primary stolon length (an architectural trait) tended to vary (P=0.07; F=2.83) in response to the presence and species of AM fungi, whereas the number of ramifications (P=0.25; F=1.49) did not (Tab. 2). Allocation to stolons was significantly affected by the presence and species of AM fungi (P=0.017; F=4.51), with plants inoculated

Table 1: Results of linear models for each trait linked to the plants' foraging, specialization and performance. F-values and P-values of the treatment and of total biomass (when used as a covariable) are presented, as well as lower, estimate and upper values of the intra- and inter-genotype variance (random factor: Genotype).

Trait | Treatment F; P (α=0.05) | Total biomass F; P (α=0.05) | Intra (lower/estimate/upper) | Inter (lower/estimate/upper)
Total biomass | 0.39; 0.75 | --; -- | 0.72/0.95/1.25 | 0.43/0.81/1.53
Growth time | 2.7; 0.067 | --; -- | 3.98/5.24/6.89 | 0.37/1.79/8.53
5th internode length | 0.45; 0.71 | 1.58; 0.22 | 1.19/1.61/2.18 | 0.75/1.45/2.8
6th internode length | 1.92; 0.15 | 8.32; <0.01 | 1.03/1.4/1.9 | 0.34/0.83/2.05
10th internode length | 5.74; <0.01 | 4.38; <0.05 | 0.59/0.81/1.12 | 0.54/0.97/1.74
11th internode length | 0.15; 0.93 | 0.02; 0.87 | 0.96/1.34/1.86 | 0.41/0.94/2.17
11th ramet number of ramifications | 0.48; 0.69 | --; -- | n/a | n/a
10th ramet number of ramifications | 0.18; 0.9 | --; -- | n/a | n/a
6th ramet number of ramifications | 1.09; 0.37 | --; -- | n/a | n/a
5th ramet number of ramifications | 0.46; 0.7 | --; -- | n/a | n/a
11th ramet root/shoot | 1.1; 0.36 | 14.49; <0.01 | 0.99/1.31/1.7 | 0.38/0.83/1.8
10th ramet root/shoot | 0.46; 0.7 | 5.2; <0.01 | 1.06/1.40/1.85 | 0.81/1.46/2.64
6th ramet root/shoot | 0.26; 0.84 | 1.91; 0.18 | 0.89/1.18/1.55 | 0.36/0.77/1.66
5th ramet root/shoot | 0.88; 0.46 | 1.08; 0.3 | 0.89/1.18/1.56 | 0.22/0.59/1.63

Table 2: Results of linear models for each trait linked to the plants' resource allocation and performance.
Trait | Treatment F; P (α=0.05) | Total biomass F; P (α=0.05) | Intra (lower/estimate/upper) | Inter (lower/estimate/upper)
Total biomass | 0.67; 0.57 | --; -- | 0.27/0.38/0.52 | 0.03/0.14/0.67
Number of ramets (allocation) | 3.55; <0.05 | 46.6; <0.001 | 5.97/8.45/11.96 | 7.58/13.7/24.8
Primary stolon length | 2.84; 0.07 | 1.99; 0.17 | 10.99/15.45/21.75 | 4.53/10.69/25.23
Number of ramifications | 1.49; 0.25 | 5.8; <0.05 | 0.46/0.66/0.93 | 0.24/0.53/1.18
Stolons weight (allocation) | 4.51; <0.05 | 91.37; <0.001 | 0.08/0.11/0.17 | 0.03/0.09/0.22
Shoots weight (allocation) | 3.96; <0.05 | 1528; <0.001 | 0.06/0.09/0.13 | 0.04/0.08/0.18
Roots weight (allocation) | 0.5; 0.68 | 30.72; <0.001 | 0.06/0.09/0.12 | 0.006/0.03/0.19

F-values and P-values of the treatment and of total biomass (when used as a covariable) are presented, as well as lower, estimate and upper values of the intra- and inter-genotype variance (random factor: Genotype).

SES aims to quantify the direction and magnitude of each ecotype heritability index compared to the null distribution (Gotelli and McCabe, 2002): SES = (I_obs - I_null) / σ_null, where I_obs is the observed heritability index value, I_null is the mean of the null distribution and σ_null is its standard deviation. Negative SES values indicate lower heritability than in the random model (heritability of microorganism species not present in the mother ramet).

Considering the fungal phyla and classes separately, plant-species effects were better revealed. For example, the equitability of the phylum Ascomycota significantly increased with B. pinnatum but decreased with H. mollis abundances (P<0.01; 0.11≤R²≤0.14), whereas past Glomeromycota equitability increased with F. rubra abundance (P<0.01; R²=0.09). Several plant species had the same effect on the equitability for every fungal group whereas others had different effects between groups: the abundance of B. pinnatum always increased fungal equitability within the Sordariomycetes, Glomeromycetes, and Agaricomycetes, whereas A. stolonifera increased the equitability of Glomeromycetes but decreased the equitability of Agaricomycetes. The explained variance in the models increased when lower as compared to higher fungal taxonomic levels were considered (e.g. 14% of the variance was explained for the Ascomycota phylum (P<0.01) while ~24% of the variance was explained for the Sordariomycetes class (P<0.001)).

Table 1 (plot scale). Taxonomic group | Present: P-value; R²; significant plant species | Past: P-value; R²; significant plant species
All Fungi | 0.04; 0.04; Asto (-) | 0.11; --; --
Ascomycota | 0.07; 0.03; Hmol (-) | 0.09; --; --
Glomeromycota | 0.04; 0.05; Cnig (-) | 0.08; 0.02; --
Basidiomycota | 0.12; 0.03; Bpin (+) | 0.03; 0.04; Hlan (-)
Agaricomycetes | 0.02; 0.04; Hlan (-) | 0.02; 0.04; Hlan (-)
Sordariomycetes | 0.03; 0.07; Erep (+), Dglo (+), Lper (+) | 0.01; 0.08; Aten (+), Cnig (+), Lper (+)
Glomeromycetes | 0.004; 0.08; Hmol (-), Cnig (+) | 0.01; 0.06; Cnig (-)

Table 2. Responses of each fungal group richness and equitability at the sample scale to the past and present plant neighborhoods (results from linear models). P-values and the range of adjusted R² of best models are presented. Significant time and spatial scales of neighborhood are also indicated. Only species significantly participating to the model building are shown, as well as their effect on the fungal richness and equitability (+ increasing; - decreasing). *, P<0.05; **, P<0.01; ***, P<0.001.
Taxonomic group | Index | Time scale of response | Spatial scale (radius) | R² | Significant plant species
All Fungi | Richness | Present/Past | 5 to 20 cm | 0.07-0.1 (*) | Aten (+), Hmol (-)
All Fungi | Equitability | Present/Past | 10 to 25 cm | 0.04-0.07 (*) | Aten (+)
Ascomycota | Richness | Present/Past | 5 to 25 cm | 0.04-0.12 (*) | Frub (+), Hmol (-)
Ascomycota | Equitability | Present/Past | 5 to 20 cm | 0.11-0.14 (**) | Bpin (+), Erep (-)
Glomeromycota | Richness | Present/Past | 5 to 25 cm | 0.06-0.09 (*) | Bpin (+), Dglo (-)
Glomeromycota | Equitability | Present | 25 cm | 0.09 (**) | Frub (+)
Basidiomycota | Richness | -- | -- | -- | --

Table 3. Results of linear models testing the effect of each fungal group richness and equitability on the biomass of the trap plant M. truncatula. ANOVA P-values, F-values and adjusted R² of the best models are presented. Only fungal groups significantly participating to the best model building are presented.

Taxonomic group | Richness: P-value; F-value; R² | Equitability: P-value; F-value; R²
All Fungi | 0.24; 1.42; 0.005 | 0.007; 7.67; 0.08
Ascomycota | --; --; -- | 0.003; 9.57; 0.1
Basidiomycota | 0.01; 2.56; 0.12 | --; --; --
Glomeromycota | 0.02; 2.38; 0.14 | --; --; --
Sordariomycetes | --; --; -- | 0.007; 7.62; 0.1
Agaricomycetes | 0.01; 6.83; -- | 0.057; 3.71; --
Glomeromycetes | 0.04; 4.34; -- | --; --; --

1 Université de Rennes 1, CNRS, UMR 6553 EcoBio, Campus de Beaulieu, Avenue du Général Leclerc, 35042 Rennes Cedex (France)
2 Université de Lyon 1, CNRS, UMR 5023 LEHNA, 43 Boulevard du 11 Novembre 1918, 69622 Villeurbanne Cedex (France)
*Corresponding author.
Email: [email protected] Université de Rennes 1, CNRS, UMR6553 EcoBio, campus Beaulieu, Avenue L eclerc, 35042 RENNES Cedex (France) Université de Lyon 1, CNRS, UMR 5023 L EHNA 43 Boulevard du 11 Novembre 1918, 69622 V IL L EURBA NNE Cedex (France) ~In preparation~1 A cknowledgments: This work was supported by a grant from the CNRS-EC2CO program (MIME project), CNRS-PEPS program (MY COLA ND project) and by the French ministry for research and higher education. We also acknowledge E. T. K iers and D. Warwick for helpful comments and suggestions for modifications on a previous version of the manuscript and A . Salmon for helpful discussions about epigenetics. Summary The classic understanding of organisms focuses on genes as the main source of species evolution and diversification. The recent concept of genetic accommodation questions this gene centric view by emphasizing the importance of phenotypic plasticity on evolutionary trajectories. Recent discoveries on epigenetics and symbiotic microbiota demonstrated their deep impact on plant survival, adaptation and evolution thus suggesting a novel comprehension of the plant phenotype. In addition, interplays between these two phenomena controlling plant plasticity can be suggested. Because epigenetic and plant-associated (micro-) organisms are both key sources of phenotypic variation allowing environmental adjustments, we argue that they must be considered in terms of evolution. This 'non-conventional' set of mediators of phenotypic variation can be seen as a toolbox for plant adaptation to environment over short, medium and long time-scales. K ey-words : Plant, Symbiosis, Phenotypic Plasticity, Epigenetic, Evolution More recently the development of the 'hologenome theory' (Zilber- Rosenberg and Rosenberg, 2008) posits that evolution acts on composite organisms (i.e., host and its microbiome) with the microbiota being fundamental for their host fitness by buffering environmental constraints. 
Both the 'extended phenotype' concept and the 'hologenome theory' admit that the environment can leave a "footprint" on the transmission of induced characters. Thus, opportunities exist to revisit our understanding of plant evolution to embrace both environmentally-induced changes and related 'genetic accommodation' processes.

A microorganisms journey between plant generations

Nathan Vannier 1, Cendrine Mony 1, Anne-Kristel Bittebiere 2, Sophie Michon-Coudouel 1, Marine Biget 1, Philippe Vandenkoornhuyse

~In revision for ISME J~

Abstract

Plants are colonized by a great diversity of symbiotic microorganisms which form a microbiota and perform additional functions for their host. This microbiota can thus be considered a toolbox enabling plants to buffer local environmental changes, with a major influence on plant fitness. In this context, the transmission of the microbiota to the progeny represents a way to ensure habitat quality. However, examples of such transmission are scarce and their importance unclear. We investigated the transmission of symbiotic partners to plant progeny within the clonal network, using Glechoma hederacea as plant model. We demonstrated the vertical transmission of a significant proportion of the mother's symbiotic Bacteria and Fungi to the daughters, and the heritability of a specific core microbiota. In this clonal plant, microorganisms are transmitted between individuals through connections, thereby ensuring the availability of symbiotic partners for the newborn plants as well as the dispersion between hosts for the microorganisms. This previously unknown ecological process allows the dispersal of microorganisms in space and across plant generations. The vast majority of plants are clonal; this process might therefore be a strong driver of ecosystem functioning and of the assembly of plant and microorganism communities in a wide range of ecosystems.
Supplementary Figure 1 | (a), mean rarefaction curves for the bacterial communities in the mother roots samples (red), the daughter roots samples (green), the internode samples (brown) and the control pots (blue). (b), mean rarefaction curves for the fungal communities in the mother roots samples (red), the daughter roots samples (green), the internode samples (brown) and the control pots (blue). (c), mean rarefaction curves for the fungal communities in the mother roots samples (red), the daughter roots samples (green), the internode samples (brown) and the control pots (blue). Coloured areas indicate ± SE.

We built null models for each of the ecotypes by generating daughter communities with random samples of the microorganism species occurring within the species pool (regional pool) of the mother ramet. Only the species identity was changed, while species richness within the null daughter communities remained unmodified. We created 9999 null datasets for each daughter ramet and measured the OTU heritability for each ecotype created in this way as the number of OTUs shared between the mother and at least 2 daughters. We then calculated the standard effect size (SES) values of each ecotype. Negative SES values indicated that the observed heritability was lower than would be expected in the null model (heritability of OTUs not specific to the ecotype mother ramet), whereas positive SES values revealed a higher heritability than expected (heritability of microorganisms from the mother). The horizontal line represents an SES value of 0 (no difference between the observed heritability and the null heritability). P-values indicate the significance of the one-sample t-test used to determine whether SES values are significantly higher than 0 (alternative hypothesis "greater").

Plant-plant interactions mediated by fungi impact plant fitness

~In preparation~

Summary

• The plant microbiota is now recognized as a major driver of plant health.
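The null-model and SES computation described above can be sketched in a few lines. The following Python snippet is a minimal illustration, not the authors' code: OTU communities are represented as sets of hypothetical OTU labels, heritability is counted as the number of mother OTUs found in at least two daughter ramets, null daughter communities are random draws of the same richness from the regional species pool, and SES = (I_obs - mean(I_null)) / sd(I_null).

```python
import random
import statistics

def heritability(mother, daughters, min_daughters=2):
    """Number of mother OTUs shared with at least `min_daughters` daughter ramets."""
    return sum(1 for otu in mother
               if sum(otu in d for d in daughters) >= min_daughters)

def ses_heritability(mother, daughters, regional_pool, n_null=9999, seed=42):
    """Standard effect size of the observed heritability against a null model in
    which each daughter keeps its richness but draws OTUs at random from the pool."""
    rng = random.Random(seed)
    pool = sorted(regional_pool)  # fixed ordering for reproducible draws
    obs = heritability(mother, daughters)
    null = [heritability(mother, [set(rng.sample(pool, len(d))) for d in daughters])
            for _ in range(n_null)]
    return (obs - statistics.mean(null)) / statistics.stdev(null)

# toy data with hypothetical OTU labels
pool = {f"OTU{i}" for i in range(20)}
mother = {"OTU1", "OTU2", "OTU3", "OTU4"}
daughters = [{"OTU1", "OTU2", "OTU9"}, {"OTU1", "OTU2", "OTU3"}, {"OTU3", "OTU7"}]
print(f"SES = {ses_heritability(mother, daughters, pool, n_null=999):.1f}")
```

In this toy example the SES comes out positive, since OTU1, OTU2 and OTU3 are shared between the mother and at least two daughters far more often than random draws from the pool would produce.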
The rules governing root microbiota assembly have been recently investigated and the importance of abiotic determinants has been highlighted. The role of the biotic context of the plant community, however, remains unclear.
• We tested whether the plant neighborhood may leave a fingerprint on the fungal soil pool. We used outdoor experimental mesocosms comprising various floristic compositions and mapped plant distribution over two years. Medicago truncatula was used as a trap plant on soil samples, and root DNA was sequenced to describe fungal communities. The trap plant performance was estimated through biomass measures.
• Fungal community richness and equitability were influenced by the abundance of key plant species in the neighborhood. A given plant had contrasted effects on fungal phyla and classes. Additionally, we demonstrated that changes in fungal group richness and equitability influenced the biomass of the trap plant.
• Our results suggest the existence of plant-plant interactions mediated by fungi and impacting plant fitness. The shift between facilitation and competition may be mediated by the fungal community. The ecosystem diversity-productivity relationship could be extended to the root microbiota and depend on the plant community context.

To determine the impact of fungal taxonomic groups, OTU richness, and equitability on the trap plant performance, we used linear models with the indices as explanatory variables and M. truncatula biomass as the dependent variable. This was done at the sample scale (central point of the mesocosm) and at the three taxonomic levels analyzed (all fungi, phyla, classes). For all models, data were log- or square-root-transformed when necessary to satisfy the assumption of a normal distribution of the residuals. The model coefficients and the proportion of index variation that was accounted for by the regression (R²) were calculated. The significance of each explanatory variable was tested with an ANOVA procedure.
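To make this modelling step concrete, the sketch below (in Python rather than the R stack actually used, and with hypothetical toy data) computes per-sample richness and an equitability index and then regresses trap-plant biomass on equitability with ordinary least squares. Equitability is assumed here to be Pielou's evenness J = H/ln(S), a common choice that the text does not specify.

```python
import math

def richness_equitability(counts):
    """OTU richness S and Pielou equitability J = H / ln(S) from abundance counts."""
    counts = [c for c in counts if c > 0]
    s = len(counts)
    total = sum(counts)
    shannon = -sum((c / total) * math.log(c / total) for c in counts)
    return s, (shannon / math.log(s) if s > 1 else 0.0)

def ols_r2(x, y):
    """Simple linear regression y = b0 + b1*x; returns (b0, b1, R^2)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b0, b1, 1.0 - ss_res / ss_tot

# hypothetical OTU count tables for four soil samples and their trap-plant biomass
samples = [[30, 1, 1], [12, 10, 8], [10, 10, 10], [25, 3, 2]]
biomass = [1.1, 2.0, 2.3, 1.3]
equit = [richness_equitability(c)[1] for c in samples]
b0, b1, r2 = ols_r2(equit, biomass)
print(f"slope = {b1:.2f}, R^2 = {r2:.2f}")
```

With these made-up counts, evenness rises from strongly dominated samples to the perfectly even one, and the fitted slope is positive, mimicking a positive equitability-biomass relationship.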
All the statistical analyses were performed using the packages "car" (Fox & Weisberg, 2011) and "AICcmodavg" [START_REF] Mazerolle | Package 'A ICcmodavg[END_REF] in R (R Core Team, 2013).

Results

Endospheric fungal microbiota in Medicago truncatula roots

The 2057 fungal OTUs found in the M. truncatula root endosphere belonged to five phyla (i.e. Zygomycota, Chytridiomycota, Glomeromycota, Ascomycota, and Basidiomycota), with Zygomycota and Chytridiomycota together accounting for less than 3% of the total number of OTUs.

Fungal community richness response to the plant neighborhood

Fungal richness at the plot scale

To investigate the effect of the plant neighborhood on the richness of the fungal community at the scale of the whole mesocosm (i.e. gamma diversity), we produced linear models at the plot scale (i.e. the five sampling points of the mesocosm) with the plant species abundances as explanatory variables (Tab. 1). The plot richness of the whole fungal community was significantly determined by the present plant neighborhood. However, the proportion of the variation in fungal richness explained by the model was low (P=0.04; R²=0.04). In addition, the fungal richness within the Ascomycota and Basidiomycota was not determined by the plant neighborhood, and only 5% of the variance in Glomeromycota richness could be attributed to the present-plant neighborhood (P=0.04; R²=0.05) (Tab. 1). When considering the past-plant neighborhood, only Basidiomycota richness was weakly determined by the plant neighborhood (P=0.03; R²=0.04), whereas Ascomycota and Glomeromycota, as well as the fungal community as a whole, were not.

Fungal richness at the sample scale

To investigate the effect of the plant neighborhood on the richness of the fungal community at fine scale, we produced linear models at the sample scale (i.e. center of the mesocosm) with plant species abundances as explanatory variables.
Our multiple linear models analysis revealed that the richness of the fungal community was significantly determined by the plant neighborhood for all taxonomic groups tested (Tab. 2). In comparison with the models produced at the plot scale, a larger proportion of the variations in fungal community richness could be explained at this alpha-diversity scale. At the level of the whole fungal community, the richness increased significantly only with the abundance of Agrostis tenuis in the neighborhood, whereas the abundance of the other plants had no significant effect on fungal community richness (P<0.05; 0.07≤R²≤0.1). In addition, A. tenuis was one of the rarest species in the experiment and the effect detected could be an artifact of this rarity. Ascomycota richness increased with the abundance of A. tenuis and Festuca rubra (P<0.05; 0.04≤R²≤0.12), whereas Glomeromycota richness increased with Brachypodium pinnatum and Dactylis glomerata and decreased with Elytrigia repens (P<0.05; 0.06≤R²≤0.09). The effects (positive or negative) of plant species at the phylum level were not necessarily the same at the class level. The presence of D. glomerata in the plant neighborhood significantly increased the

Plant-plant interaction mediated by fungi affects plant performance

Knowing that the fungal community compositions were determined by the plant neighborhood and that these changes affected plant performance (biomass productivity), this study demonstrates the existence of plant-plant interactions mediated by the fungal communities. Our results indicate that these shifts in fungal community composition can have positive or negative effects on the trap plant biomass, suggesting that the neighborhood can have either a facilitative or a competitive effect on the trap plant.
Studies investigating the shift between plant-plant facilitation and competition suggested that this shift is linked to environmental stress or disturbance intensity and is spatially heterogeneous [START_REF] O'brien | The shift from plant-plant facilitation to competition under severe water deficit is spatially explicit[END_REF]. Previous studies have indicated that plant-plant facilitation (i.e. a beneficial effect of a given plant's presence) may be linked to the composition of their AM fungal communities ([START_REF] Montesinos-Navarro | The network structure of plant-arbuscular mycorrhizal fungi[END_REF]; Zhang et al., 2014). [START_REF] Montesinos-Navarro | The network structure of plant-arbuscular mycorrhizal fungi[END_REF] argued that stronger facilitation occurs between pairs of plant species with different associated AM fungi. This phenomenon underlies a potential mechanism which increases AM fungal diversity in the shared rhizosphere and promotes complementarity between the beneficial effects of each AM fungus (Van der Heijden et al., 1998; Wagg et al., 2011). We herein provide the first experimental evidence of this assumption by showing that the richness of Glomeromycota increased with the abundance of specific plants in the neighborhood, which ultimately increased M. truncatula's biomass. Our results also indicate that changes in plant fungal communities can be detrimental, as several plant species had a negative effect on fungal richness and/or equitability. We thus propose that the shift from plant-plant facilitation to competition is mediated by changes in the fungal communities available for recruitment in the surrounding soil. More importantly, we demonstrated here that these phenomena of plant-plant facilitation and competition mediated by fungi are not restricted to AM fungi but have to be extended to other fungal groups such as Ascomycota and Basidiomycota.
We thus encourage future studies to consider the whole fungal community and to take into account the feedbacks between plant and fungal communities that can affect ecosystem properties such as productivity (Cadotte et al., 2008).

Positive and negative effects of given plants on the trap plant biomass

An increase in the trap plant biomass associated with an increase in fungal group richness (e.g. Glomeromycetes and Agaricomycetes) and equitability (e.g. Ascomycota and Sordariomycetes) was
2008
https://hal.science/hal-00175142/file/BFstringfinal.pdf
Extended matter coupled to BF theory

Winston J. Fairbairn (email: [email protected]), Alejandro Perez (email: [email protected])

In this paper, we discuss various aspects of the four-dimensional theory. Firstly, we study classical solutions leading to an interpretation of the theory in terms of strings propagating on a flat spacetime. We also show that the general classical solutions of the theory are in one-to-one correspondence with solutions of Einstein's equations in the presence of distributional matter (cosmic strings). Secondly, we quantize the theory and present, in particular, a prescription to regularize the physical inner product of the canonical theory. We show how the resulting transition amplitudes are dual to evaluations of Feynman diagrams coupled to three-dimensional quantum gravity. Finally, we remove the regulator by proving the topological invariance of the transition amplitudes.

I. INTRODUCTION

Based on the seminal results [START_REF] Deser | Three-Dimensional Einstein Gravity: Dynamics of Flat Space[END_REF] of 2+1 gravity coupled to point sources, recent developments [START_REF] Freidel | Effective 3d quantum gravity and non-commutative quantum field theory[END_REF], [START_REF] Noui | Three dimensional Loop Quantum Gravity: Towards a self-gravitating Quantum Field Theory[END_REF] in the non-perturbative approach to 2+1 quantum gravity have led to a clear understanding of quantum field theory on a three-dimensional quantum geometrical background spacetime. The idea is to first couple free point particles to the gravitational field before going through the second quantization process. In this approach, particles become local conical defects of spacetime curvature and their momenta are recast as holonomies of the gravitational connection around their worldlines. It follows that momenta become group-valued, leading to an effective notion of non-commutative spacetime coordinates.
The Feynman diagrams of such theories are related via a duality transformation to spinfoam models. Although conceptually very deep, these results remain three-dimensional. The next step is to probe all possible extensions of these ideas to higher dimensions. Two ideas have recently been put forward. The first is to consider that fundamental matter is indeed pointlike and to study the coupling of worldlines to gravity by using the Cartan geometric framework [START_REF] Wise | MacDowell-Mansouri gravity and Cartan geometry[END_REF] of the MacDowell-Mansouri formulation of gravity as a de Sitter gauge theory [START_REF] Macdowell | Unified geometric theory of gravity and supergravity[END_REF]. The second is to generalize the description of matter as topological defects of spacetime curvature to higher dimensions. This naturally leads to matter excitations supported by co-dimension two membranes [START_REF] Baez | Quantization of strings and branes coupled to BF theory[END_REF], [START_REF] Baez | Exotic statistics for loops in 4-D BF theory[END_REF]. Before studying the coupling of such sources to quantum gravity, one can consider, as a first step, the BF theory framework as an immediate generalization of the topological character of three-dimensional gravity to higher dimensions. This paper is dedicated to the second approach, namely the coupling of string-like sources to BF theory in four dimensions. The starting point is the action written in [START_REF] Baez | Quantization of strings and branes coupled to BF theory[END_REF], generating a theory of flat connections except at the location of two-dimensional surfaces, where the curvature picks up a singularity, or in other words, where the gauge degrees of freedom become dynamical. The goal of the paper is twofold. Firstly, to acquire a physical intuition of the algebraic fields involved in the theory, which generalize the position and momentum Poincaré coordinates of the particle in three dimensions.
Secondly, to provide a complete background-independent quantization of the theory in four dimensions, following the work done in [START_REF] Baez | Quantization of strings and branes coupled to BF theory[END_REF]. The organization of the paper is as follows. In section II, we study some classical solutions guided by the three-dimensional example. We show that some specific solutions lead to the interpretation of rigid strings propagating on a flat spacetime. More generally, we prove that the solutions of the theory are in one-to-one correspondence with distributional solutions of general relativity. In section III, we propose a prescription for computing the physical inner product of the theory. This leads us to an interesting duality between the obtained transition amplitudes and Feynman diagrams coupled to three-dimensional gravity. We finally prove in section IV that the transition amplitudes only depend on the topology of the canonical manifold and of the spin network graphs.

II. CLASSICAL THEORY

A. Action principle and classical symmetries

Let G be a Lie group with Lie algebra g equipped with an Ad(G)-invariant, non-degenerate bilinear form noted 'tr' (e.g. the Killing form if G is semi-simple). Consider the principal bundle P with G as structure group and as base manifold a (d+1)-dimensional, compact, connected, oriented differential manifold M. We will assume that P is trivial, although it is not essential, and choose once and for all a global trivializing section. We will be interested in the following first-order action principle, describing the interaction between closed membrane-like sources and BF theory [START_REF] Baez | Quantization of strings and branes coupled to BF theory[END_REF]:

S[A, B; q, p] = S_BF[A, B] - ∫_W tr((B + d_A q) p),   (1)

where the action of free BF theory in d+1 dimensions is given by

S_BF[A, B] = (1/κ) ∫_M tr(B ∧ F[A]).   (2)
Here, B is a g-valued (d-1)-form on M, F is the curvature of a g-valued one-form A, which is the pull-back to M by the global trivializing section of a connection on P, and κ ∈ R is a coupling constant. In the coupling term, W is the (d-1)-brane worldsheet defined by the embedding φ : E ⊂ R^(d-1) → M, d_A is the covariant derivative with respect to the connection A, q is a g-valued (d-2)-form on W and p is a g-valued function on W. The physical meaning of the matter variables p and q will be discussed in the following section. Essentially, p is the momentum density of the brane and q is the first integral of the (d-1)-volume element; the integral of a line and surface element in three and four dimensions (d = 2, 3) respectively. The equations of motion governing the dynamics of the theory are those of a topological field theory:

F[A] = κ p δ_W,   (3)
d_A B = κ [p, q] δ_W,   (4)
φ*(B + d_A q) = 0,   (5)
d_A p|_W = 0.   (6)

Here, δ_W is a distributional two-form, also called a current, which has support on the worldsheet W. It is defined such that for all (d-1)-forms α, ∫_W α = ∫_M (α ∧ δ_W). The symbol φ* denotes the pull-back of forms on W by the embedding map φ. We can readily see that the above action describes a theory of local conical defects along brane-like (d-1)-submanifolds of M through the first equation. The second states that the obstruction to the vanishing of the torsion is measured by the commutator of p and q. The third equation is crucial: it relates the background field B to the dynamics of the brane. For instance, this equation describes the motion of a particle's position in 3d gravity [START_REF] Ph | On spin and (quantum) gravity in (2+1)-dimensions[END_REF]. The last states that the momentum density is covariantly conserved along the worldsheet. It is in fact a simple consequence of equation (3) together with the Bianchi identity d_A F = 0.
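The way the conservation law (6) follows from the curvature equation (3) and the Bianchi identity can be spelled out in one line (a sketch, using that the worldsheet is closed, so that its current satisfies dδ_W = 0):

```latex
0 = d_A F = \kappa \, d_A\!\left(p\,\delta_W\right)
  = \kappa \left[(d_A p)\wedge \delta_W + p\, \mathrm{d}\delta_W\right]
  = \kappa \,(d_A p)\wedge \delta_W ,
```

which forces d_A p to vanish on the support of δ_W, i.e. d_A p|_W = 0.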
We will see how this is a sign of the reducibility of the constraints generated by the theory. The total action is invariant under the following (pull-back to M of) vertical automorphisms of P, for all g ∈ C∞(M, G):

B → B' = g B g⁻¹, A → A' = g A g⁻¹ + g dg⁻¹, p → g p g⁻¹, q → g q g⁻¹,   (7)

and the 'topological', or reducible, transformations, for all η ∈ Ω^(d-2)(M, g):

B → B + d_A η, A → A, p → p, q → q - η,   (8)

where Ω^p(M, g) is the space of g-valued p-forms on M.

B. Physical interpretation: the flat solution

In this section, we discuss some particular solutions of the theory leading to an interpretation of matter propagating on flat backgrounds. We discuss the d = 2 and d = 3 cases, where the gauge degrees of freedom of BF theory become dynamical along one-dimensional worldlines and two-dimensional worldsheets respectively.

The point particle in 2+1 dimensions

We now restrict our attention to the d = 2 case with structure group the isometry group G = SO(η) of the diagonal form η of a three-dimensional metric on M; η = (σ², +, +) with σ = {1, i} in respectively Riemannian (G = SO(3)) and Lorentzian (G = SO(1, 2)) signatures. We denote (π, V_η) the vector (adjoint) representation of so(η) = R{J_a}, a = 0, 1, 2, i.e., V_η = R³ and V_η = R^(1,2) in Riemannian and Lorentzian signatures respectively. The bilinear form 'tr' is defined such that tr(J_a J_b) = ½ η_ab. In this case, the free BF action (2) describes the dynamics of three-dimensional general relativity, where the B field plays the role of the triad e. The matter excitations are 0-branes, that is, particles, and the worldsheet W reduces to a one-dimensional worldline that we will denote γ. The degrees of freedom of the particle are encoded in the algebraic variables q and p, which are both so(η)-valued functions with support on the worldline γ. Firstly, we consider the open subset U of M constructed as follows. Consider the three-ball B³ centered on a point x₀ of the worldline γ and call x and y the two punctures ∂B³ ∩ γ.
Pick two non-intersecting paths γ 1 and γ 2 on ∂B 3 , both connecting x to y. The open region bounded by the portion of ∂B 3 contained between the two paths and by two arbitrary non-intersecting disks contained in B 3 and bounded by the loops γγ 1 and γγ 2 defines the open subset U ⊂ M . Next, we define the coordinate function X : M → V η mapping spacetime into the 'internal space' so(η), isomorphic, as a vector space, to its vector representation space V η . The coordinates are chosen to be centered around a point x in M traversed by the worldline; X(x) = 0. Associated to the coordinate function X, there is a natural solution to the equations of motion (3), (4), (5), (6) in U:

e = dX = δ (9)
A = 0
q = -X | γ
p = constant,

where δ is the unit of End(T p M, V η ), δ(v) = v for all v in T p M and all p in U . The field configuration e = δ (together with the A = 0 solution) provides a natural notion of flat Riemannian or Minkowskian spacetime geometry via its relation to the spacetime metric g = 2tr(e ⊗ e). This flat background is defined in terms of a special gauge (notice that one can make e equal to zero by a transformation of the form (8)). From now on, we will call such a gauge a flat gauge. The solution for q is obtained through the equation (5) relating the background geometry to the geometry of the worldline. Here, we can readily see that q represents the particle's position X, first integral of the line element defined by the background geometry e. Below we show that equation (4) forces the worldline to be a straight line. Finally, p = constant trivially satisfies the conservation equation (6). In fact, the curvature equation of motion (3) constrains p to remain in a fixed adjoint orbit, so we can introduce a constant m ∈ R * + such that p = mv with v ∈ so(η) such that tr v 2 = -σ 2 .
Consequently, p satisfies the mass shell constraint p 2 := tr(p 2 ) = -σ 2 m 2 and acquires the interpretation of the particle's momentum. We can now relate the position q and momentum p, independent in the first order formulation, by virtue of (4). Indeed, the chosen flat geometry solution e = δ, A = 0 leads to an everywhere vanishing torsion d A e. Hence, the commutator [p, q] = X × p, where × denotes the usual cross product on V η , vanishes on the worldline. This vanishing of the relativistic angular momentum (which is conserved by virtue of equation (3)) implies, together with the flatness of the background fields, that the worldline γ of the particle defines a straight line passing through the origin and tangent to its momentum p. Equivalently, we can think of the momentum p as Hodge dual to a bivector * p, in which case the worldline is normal to the plane defined by * p. Note that translating γ off the origin, which requires the introduction of spacetime torsion, can be achieved by the gauge transformation q → q + C with C = constant, which leaves all the other fields invariant. In this way we conclude that the previous solution of our theory can be (locally) interpreted as the particle following a geodesic of flat spacetime. More formally, we can also recover the action of a test particle in flat spacetime by simply 'switching off' the interaction of the particle with gravity. This can be achieved by evaluating the action (1) on the flat solution and neglecting the interactions between geometry and matter, namely the equations of motion linking the background fields to the matter degrees of freedom (e.g. e = dX). This formal manipulation leads to the following Hamilton function

S[p, X, N ] = ∫ γ tr(p Ẋ) + N (p 2 -m 2 ), (10)

which is the standard first order action for a relativistic spinless particle.

The string in 3 + 1 dimensions

We now focus on the four-dimensional (d = 3) extension of the above considerations.
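The geometric content of the vanishing commutator [p, q] = X × p can be checked numerically. The sketch below (the worldline parametrization X(t) = (t/m)p and all numerical values are illustrative assumptions, not taken from the text) verifies that the cross product vanishes precisely on the straight line through the origin tangent to p, and fails to vanish once the line is translated off the origin:

```python
import numpy as np

# Numerical sketch of the flat-gauge particle solution (9); the momentum,
# mass and sample points below are illustrative choices.
m = 2.0
p = m * np.array([1.0, 0.0, 0.0])        # particle momentum, p = m v

# A straight worldline through the origin, tangent to p: X(t) = (t/m) p.
ts = np.linspace(-3.0, 3.0, 7)
X = np.outer(ts / m, p)                  # particle positions along the line

# The commutator [p, q] with q = -X is the cross product X x p on V_eta:
# it vanishes identically along this worldline ...
assert np.allclose(np.cross(X, p), 0.0)

# ... but not once the line is translated off the origin (q -> q + C),
# which, as stated in the text, requires spacetime torsion.
C = np.array([0.0, 1.0, 0.0])
assert not np.allclose(np.cross(X + C, p), 0.0)
```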
Here again we consider the isometry group G = SO(η) of a given four-dimensional metric structure η = (σ 2 , +, +, +), in which case the value σ = 1 leads to the Riemannian group G = SO(4), while σ = i encodes a Lorentzian signature, G = SO(1, 3). As in three dimensions, we denote (π, V η ), with V η = R{e I } I , I = 0, ..., 3, the vector representation of so(η) = R{J ab } a,b=0,...,3 . Finally, we choose the bilinear form 'tr' such that, for all a, b in so(η), it is associated to the trace tr(ab) = (1/2) a IJ b IJ in the vector representation. We are using the notation α IJ = α ab π(J ab ) IJ := α ab J IJ ab for the matrix elements of the image of an element α ∈ so(η) in End(V η ) under the vector representation. The dynamics of the theory is governed by the action (1), where the matter excitations are string-like and the worldsheet W is now a two-dimensional submanifold of the four-dimensional spacetime manifold M . The string degrees of freedom are described by an so(η)-valued one-form q and an so(η)-valued function p living on the worldsheet W . As before, we construct an open subset U ⊂ M by cutting out a section of the four-ball B 4 , and define the coordinate function X : M → V η , centered around a point x in M ∩ W .
Consider the following field configurations, which define a flat solution to the equations of motion (3), (4), (5), (6) in U:

B = * (e ∧ e), with e = dX = δ (11)
A = 0
q = - * XdX
p = constant,

where the star ' * ' is the Hodge operator * : Ω p (V η ) → Ω 4-p (V η ) acting on the internal space; ( * α) IJ = (1/2) ǫ KL IJ α KL , with the totally antisymmetric tensor ǫ normalized such that ǫ 0123 = +1. The solution B = * (δ ∧ δ) (together with A = 0) leads to a natural notion of flat Riemannian or Minkowski background geometry through the standard construction of a metric out of B when B is a simple bivector; B = * (e ∧ e) with e = δ. We can readily see that the q one-form is the first integral of the area element defined by the background field B. As in 3d, the equations of motion constrain p to remain in a fixed adjoint orbit, so that we can introduce a constant τ ∈ R * + such that p = τ v, where v ∈ so(η) has a fixed norm; tr v 2 = -σ 2 . We call τ the string tension, or mass per unit length, and p the momentum density, which satisfies a generalized mass shell constraint. This momentum density p is related to the q field by analysis of equation (4). The solution (B = * δ ∧ δ, A = 0) has zero torsion d A B. Accordingly, the commutator [p, q] = [ * XdX, p] vanishes on the worldsheet. This leads to the constraint X I p IJ = 0. Putting everything together, we see that the flat solution in the open subset U leads to the picture of a locally flat worldsheet (a locally straight, rigid string) in flat spacetime, dual1 , as a two-surface, to the momentum density bivector p (if p is simple, namely if it defines a two-plane). If we consider more general solutions admitting torsion, the plane can be translated off the origin.
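The algebraic properties of the internal Hodge operator used above are easy to verify numerically. The following sketch is restricted, as a simplifying assumption, to the Riemannian case (indices raised with η = δ; in Lorentzian signature the index raising introduces extra signs): it checks that * squares to the identity on two-forms and that it exchanges 'electric' and 'magnetic' generators such as σ 01 and σ 23 :

```python
import numpy as np
from itertools import permutations

# Internal Hodge dual on so(4) ~ Lambda^2(R^4), Euclidean signature only:
# (*alpha)_IJ = (1/2) eps_IJKL alpha_KL, with eps_0123 = +1.
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    # Parity of the permutation by counting inversions.
    s, q = 1, list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if q[i] > q[j]:
                s = -s
    eps[perm] = s

def hodge(alpha):
    """(*alpha)_IJ = 1/2 eps_IJKL alpha_KL (indices raised with delta)."""
    return 0.5 * np.einsum('ijkl,kl->ij', eps, alpha)

# A random antisymmetric 4x4 matrix, i.e. an element of so(4).
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
alpha = a - a.T

# In Euclidean signature the Hodge star squares to the identity on two-forms.
assert np.allclose(hodge(hodge(alpha)), alpha)

# It exchanges generators, e.g. sigma_01 <-> sigma_23.
sigma01 = np.zeros((4, 4)); sigma01[0, 1], sigma01[1, 0] = 1, -1
sigma23 = np.zeros((4, 4)); sigma23[2, 3], sigma23[3, 2] = 1, -1
assert np.allclose(hodge(sigma01), sigma23)
```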
Indeed, the equation (5) determines the field q in terms of the geometry of the B field up to the addition of an exact one-form β = dα which encodes the translational information. For instance, the translation X → X + C of the plane yields q → q - * CdX and consequently corresponds to a function α defined by dα = - * CdX. This potential is in turn determined by the torsion T = d A B of the B field via the equation (4). More general solutions can be found for arbitrary α's, as discussed below. Following the same path as in the particle case, we can 'turn off' the interaction between the topological BF background and the string by evaluating the action on the flat solution (this implies, here again, that we ignore the equations of motion of the coupled theory, i.e. the relation between the matter and geometrical degrees of freedom). We obtain the following Hamilton function

S[p, X, N ] = ∫ W tr( * p dX ∧ dX) + N (p 2 -τ 2 ), (12)

up to a constant. This is the Polyakov action on a non-trivial background with metric G µν = 0 and antisymmetric field b = v ∈ so(η). Now, the previous action leads to trivial equations of motion that are satisfied by arbitrary X (because p = constant, and so the Lagrangian is a total differential). This is to be expected: from the string theory viewpoint, this would be a charged string moving in a constant potential, so the field strength is zero. This seems in sharp contrast to the particle case, where the effective action leads to straight line solutions. Here any string motion is allowed; however, from the point of view of the full theory, all these possibilities are pure gauge. The reason for this is that in 2 + 1 dimensions the flat gauge condition e = δ fixes the freedom (8) up to a global translation, and hence gauge considerations are not necessary in interpreting the effective action. In the string case, B = * (δ ∧ δ) partially fixes the gauge; the remaining freedom being encoded in η = dα for any α.

C.
Geometrical interpretation: cosmic strings and topological defects

The above discussion shows that particular solutions of the theory in a particular open subset lead to the standard propagation of matter degrees of freedom on a flat (or degenerate) background spacetime. In fact, we can go further in the physical interpretation by considering other solutions, defined everywhere, which are in one-to-one correspondence with solutions of four-dimensional general relativity in the presence of distributional matter. These solutions are called cosmic strings.

Cosmic strings

It is well known that the metric associated to a massive and spinning particle coupled to three-dimensional gravity is that of a locally flat spinning cone. The lift of this solution to 3 + 1 dimensions corresponds to a spacetime around an infinitely thin and long straight string (see for instance [Deser, Time travel?] and references therein). Let us endow our spacetime manifold M with a Riemannian structure (M, g) and let x ∈ M label a point traversed by the string. We can choose as a basis of the tangent space T x M the coordinate basis {∂ t , ∂ r , ∂ ϕ , ∂ z } associated to local cylindrical coordinates such that the string is lying along the z axis and goes through the origin. The embedding of the string is given by φ(t, z) = (t, 0, 0, z). Let τ and s respectively denote the mass and intrinsic (spacetime) spin per unit length of the string. Note that τ is the string tension. Solving Einstein's field equations for such a stationary string carrying the above mass and spin distribution produces a two-parameter (τ, s) family of solutions described by the following line element written in the specified cylindrical coordinates

ds 2 = g µν dx µ ⊗ dx ν = σ 2 (dt + βdϕ) 2 + dr 2 + α 2 r 2 dϕ 2 + dz 2 , (13)

where β = 4Gs, α = (1 -4Gτ ), and G is the Newton constant.
In fact this family of metrics is the general solution to Einstein's equations describing a spacetime outside any matter distribution in a bounded region of the plane (r, ϕ) and having a cylindrical symmetry. Exploiting the absence of structure along the z axis, by simply suppressing the z direction, reduces the theory to that of a point particle coupled to gravity in 2 + 1 dimensions, where the location of the particle is given by the point where the string punctures the z = 0 plane. We will come across such a duality again in the quantization process of the next sections. The dual co-frame for the above metric is written

e 0 = dt + βdϕ (14)
e 1 = cos ϕ dr -αr sin ϕ dϕ
e 2 = sin ϕ dr + αr cos ϕ dϕ
e 3 = dz,

such that ds 2 = e I ⊗ e J η IJ . If we assume that the connection A associated to the above metric is Riemannian, it is straightforward to calculate its components by exploiting Cartan's first structure equation (d A e = 0). The result reads

A = A IJ µ σ IJ dx µ = 4Gτ σ 12 dϕ, (15)

where {σ IJ } I,J is a basis of Ω 2 (V η ) ≃ so(η). Using the distributional identity ddϕ = 2πδ 2 (r)dxdy (x = r cos ϕ, y = r sin ϕ, and dxdy is a wedge product), it is immediate to compute the torsion T = T 0 e 0 and curvature F = F 12 σ 12 of the cosmic string induced metric:

T 0 = 8πGs δ 2 (r) dxdy, F 12 = 8πGτ δ 2 (r) dxdy. (16)

These equations state that the torsion and curvature associated to the cosmic string solution are zero everywhere except where the radial coordinate r vanishes, i.e. at the location of the string worldsheet lying in the zt plane.
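Two features of this geometry can be verified symbolically: that the co-frame (14) squares to the line element (13), with angular coefficient α 2 r 2 for α = 1 -4Gτ, and that the holonomy angle of the connection (15) around the string is the conical deficit 8πGτ = 2π(1 -α). The sketch below (a consistency check, treating dt, dr, dϕ, dz as commuting symbols, which is legitimate for the symmetric tensor ds 2 ) performs both checks:

```python
import sympy as sp

# Symbols mirroring the notation of eqs. (13)-(15).
r, phi, alpha, beta, sigma, G, tau = sp.symbols('r phi alpha beta sigma G tau')
dt, dr, dphi, dz = sp.symbols('dt dr dphi dz')

# (i) The co-frame (14) reproduces the line element (13).
e0 = dt + beta * dphi
e1 = sp.cos(phi) * dr - alpha * r * sp.sin(phi) * dphi
e2 = sp.sin(phi) * dr + alpha * r * sp.cos(phi) * dphi
e3 = dz
ds2_frame = sigma**2 * e0**2 + e1**2 + e2**2 + e3**2
ds2_metric = (sigma**2 * (dt + beta * dphi)**2 + dr**2
              + alpha**2 * r**2 * dphi**2 + dz**2)
assert sp.simplify(sp.expand(ds2_frame) - sp.expand(ds2_metric)) == 0

# (ii) The rotation angle accumulated by A = 4*G*tau*dphi (in the sigma_12
# plane) around a loop encircling the string is the deficit angle.
theta = sp.integrate(4 * G * tau, (phi, 0, 2 * sp.pi))
assert sp.simplify(theta - 8 * sp.pi * G * tau) == 0
# With alpha = 1 - 4*G*tau this is 2*pi*(1 - alpha), the conical deficit.
assert sp.simplify(theta - 2 * sp.pi * (1 - (1 - 4 * G * tau))) == 0
```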
If we now focus on the spinless cosmic string case s = 0, we can establish a one-to-one correspondence between the above solutions of general relativity and the following solutions of BF theory coupled to string sources:

B 01 = sin ϕ dr dz + αr cos ϕ dϕ dz,
B 02 = -(cos ϕ dz dr -αr sin ϕ dz dϕ),
B 03 = αr dr dϕ,
B 12 = -σ 2 dz dt,
B 13 = -σ 2 (sin ϕ dt dr + αr cos ϕ dt dϕ),
B 23 = σ 2 (cos ϕ dt dr -αr sin ϕ dt dϕ),
A 12 = 4Gτ dϕ, (17)
q 12 = σ 2 (z dt -t dz),
p 12 = τ,

where only the non-vanishing components have been written and the coupling constant κ in (1) has been set to 8πG. In this way, solutions of our theory are in one-to-one correspondence with solutions of Einstein's equations. The converse is obviously not true, as our model does not allow for physical local excitations such as gravitational waves. However, augmenting the action (1) with a Plebanski term constraining the B field to be simple would lead, starting from the theory considered in this paper, to the full Einstein equations in the presence of distributional matter,

ǫ IJKL e J ∧ F KL = 8πGτ ǫ IJKL e J J KL 12 δ W , (18)

where J KL 12 = δ [K 1 δ L] 2 .

Many-strings-solution

One can also construct a many-string solution by 'superimposing' solutions of the previous kind at different locations. Here we explicitly show this for two strings. We do this as the example will illustrate the geometric meaning of torsion in our model. Assume that we have two worldsheets W 1 and W 2 respectively traversing the points p 1 and p 2 . We will work with two open patches U i ⊂ M , i = 1, 2, such that p 1 and p 2 both belong to the overlap U 1 ∩ U 2 . The cylindrical coordinates (t i , r i , ϕ i , z i ) associated to the charts (U i ⊂ M, X µ i : U i → R 4 ) are chosen such that the strings lie along the z axis, are separated by a distance x 0 in the x-direction, and are such that r i (p i ) = 0. The coordinate transformation occurring in the overlap U 1 ∩ U 2 is immediate; it yields t i = t, x 2 = x 1 + x 0 , y i = y and z i = z, for i = 1, 2.
The two embeddings are consequently given by φ 1 (t, z) = (t, 0, 0, z) and φ 2 (t, z) = (t, x 0 , 0, z). Our notations are such that a field φ expressed in the coordinate system associated to the open subset U i is denoted φ Ui . Our strategy to construct the two-string solution is the following. We need to realize the fact that, regarded from a particular coordinate frame, one of the two strings is translated off the origin. We will choose to observe the translation of W 2 from the coordinate frame 1. Now, the study of the flat solution discussed in the previous section has shown that translations of the worldsheet are related to the torsion T of the B field. In particular, we know how to recognize a translation of the form X → X + C, with C = x 0 e 1 . It corresponds to a torsion of the form T = κ[p, dα], with dα = - * CdX. Hence, the two-string solution is based on the tetrad field which leads to the desired value of the B field torsion, taking into account the separation of the two worldsheets. For simplicity, here we assume that the two strings are parallel, hence that they have the same momentum density

p U1 = p U2 = τ σ 12 , (19)

and accordingly create the same curvature singularity in both coordinate frames 1 and 2. The associated connection yields

A Ui = 4Gτ dϕ i σ 12 , ∀i = 1, 2. (20)

The dual co-frame e Ui = e I Ui ⊗ e IUi is defined by the following components

e 0 Ui = dt (21)
e 1 Ui = cos ϕ i dr i -αr i sin ϕ i dϕ i
e 2 Ui = sin ϕ i dr i + (αr i cos ϕ i + δ i2 (κ/4π) τ x 0 ) dϕ i
e 3 Ui = dz.

By integrating the B = * e ∧ e solution with e given by (21), we can now calculate the q field, up to the addition of an exact form β = dα:

q Ui = σ 2 (z dt -t dz) σ 12 + dα IJ i σ IJ . (22)

The potential α is derived from the equation of motion (4) relating the commutator of p and q to the B field torsion three-form T = d A B = * d A e ∧ e + * e ∧ d A e :

T Ui = δ i2 (1/2) κ τ x 0 δ(r) (dx 2 dy 2 dz σ 01 + σ 2 dt dx 2 dy 2 σ 13 ).
(23) This torsion indeed corresponds to a two-string solution since it yields the desired value - * CdX for the form dα,

dα i = δ i2 (1/2) x 0 (dz σ 02 + σ 2 dt σ 23 ). (24)

One can add more than one string in a similar fashion, leading to multiple cosmic string solutions. It is interesting to notice that the torsion of the multiple string solution is related to the distance x 0 separating the worldsheets. Of course, this is a distance defined in the flat gauge where B = * δ ∧ δ. This concludes our discussion of the physical aspects of the action (1) of string-like sources coupled to BF theory. We now turn toward the quantization of the theory.

III. QUANTUM THEORY

For the entire quantization process to be well defined, we will restrict our attention to the case where the symmetry group G is compact. For instance, we can think of G as being SO(4). We will also concentrate on the four-dimensional theory and set the coupling constant κ to one. Also, to rely on the canonical analysis performed in [Baez, Quantization of strings and branes coupled to BF theory], we will work with a slightly different theory where the momentum p is replaced by the string field λ ∈ C ∞ (W , G). This new field enters the action only through the conjugation τ Ad λ (v) of a fixed unit element v in g, and the theory is consequently defined by the action (1) with p set to τ λvλ -1 . The field λ transforms as λ → gλ under gauge transformations of the type (7), and the theory acquires a new invariance under the subgroup H ⊆ G generated by v. The link between the two theories is established by the fact that, as remarked before, the equation of motion F = pδ W implies that p remains in the same conjugacy class along the worldsheet. Here, we choose to label the class by τ v and to consider λ as a dynamical field instead of p.

A.
Canonical setting

As a preliminary step, we assume that the spacetime manifold M is diffeomorphic to the canonical split R × Σ, where R represents time and Σ is the canonical spatial hypersurface. The intersection of Σ with the string worldsheet W forms a one-dimensional manifold S that we will assume to be closed2 . We choose local coordinates (t, x a ) for which Σ is given as the hypersurface {t = 0}. By definition, x a , a = 1, 2, 3, are local coordinates on Σ. We also choose local coordinates (t, s) on the two-dimensional worldsheet W , where s ∈ [0, 2π] is a coordinate along the one-dimensional string S . We will denote x S = φ | Σ the embedding of the string S in Σ. We pick a basis {X i } i=1,...,dim(g) of the real Lie algebra g, raise and lower indices with the inner product 'tr', and define the structure constants by [X i , X j ] = f k ij X k . Next, we choose a polarization on the phase space such that the degrees of freedom are encoded in the configuration variable (A, λ) ∈ A × Λ defined by the couples formed by (the pull-back to Σ of) connections and string momenta. The canonical analysis of the coupled action (1) shows that the Legendre transform from configuration space to phase space is singular: the system is constrained. Essentially3 , the constraints are first class and are given by the following set of equations:

G i := D a E a i + f k ij q j a π a k δ S ≈ 0 (25)
H a i := ǫ abc F ibc -π a i δ S ≈ 0. (26)

Here, E a i = ǫ abc B ibc is the momentum canonically conjugate to A a i , and π a i = ∂ s x a S p i is conjugate to q and satisfies D a π a i = 0, where p i = tr(X i p) denote the components of the Lie algebra element p in the chosen basis of g. The symbols D and F denote respectively the covariant derivative and curvature of the spatial connection A.
The first constraint (25), the Gauss law, generates kinematical gauge transformations, while the second (26), the curvature constraint, contains the dynamical data of the theory. To quantize the theory, one can follow Dirac's program of quantization of constrained systems, which consists in first quantizing the system before imposing the constraints at the quantum level. The idea is to construct an algebra A of basic observables, that is, simple phase space functions which admit unambiguous quantum analogues, which is then represented unitarily, as an involutive and unital ⋆-algebra of abstract operators, on an unphysical or auxiliary Hilbert space H. Since the classical constraints are simple functionals of the basic observables, they can be unambiguously quantized, that is, promoted to self-adjoint operators on H. The kernel of these constraint operators is spanned by the physical states of the theory. The structure of the constraint algebra enables us to solve the constraints in different steps. One can first solve the Gauss law to obtain a quantum kinematical setting, and then impose the curvature constraint on the kinematical states to fully solve the dynamical sector of the theory. In [Baez, Quantization of strings and branes coupled to BF theory], the kinematical subset of the constraints is solved and a kinematical Hilbert space H kin solution to the quantum Gauss law is defined. We first review this kinematical setting before exploring the dynamical sector of the theory.

B. Quantum kinematics: the Gauss law

The Hilbert space H kin of solutions to the Gauss constraint is spanned by so-called string spin network states.
String spin network states are the gauge invariant elements of the auxiliary Hilbert space H of cylindrical functions, which is constructed as follows.

Auxiliary Hilbert space H

Firstly, we define the canonical BF states. Let Γ ⊂ Σ denote an open graph, that is, a collection of one-dimensional oriented sub-manifolds4 of Σ called edges e Γ , meeting, if at all, only at their endpoints, called vertices v Γ . The vertices forming the boundary of a given edge e Γ are called the source s(e Γ ) and target t(e Γ ) vertices, depending on the orientation of the edge. We will call n ≡ n Γ the cardinality of the set of edges {e Γ } of Γ. Let φ : G ×n → C denote a continuous complex-valued function on G ×n and let A(e Γ ) ≡ g eΓ = P exp ( ∫ eΓ A) denote the holonomy of the connection A along the edge e Γ ∈ Γ. The cylindrical function associated to the graph Γ and to the function φ is a complex-valued map Ψ Γ,φ : A → C defined by

Ψ Γ,φ [A] = φ(A(e 1 Γ ), ..., A(e n Γ )), (27)

for all A in A. The space of such functions is an abelian ⋆-algebra denoted Cyl BF,Γ , where the ⋆-structure is simply given by complex conjugation on C. The algebra of all cylindrical functions will be called Cyl BF = ∪ Γ Cyl BF,Γ . Next, we define string states. Since the configuration variable is a zero-form, we expect to consider wave functions associated to points x ∈ Σ. Accordingly, we define the ⋆-algebra Cyl S of cylindrical functions on the space Λ of λ fields as follows. An element Φ X,f of Cyl S is a continuous map Φ X,f : Λ → C, where X = {x 1 , ..., x n } is a finite set of points in S and f : G ×n → C is a complex-valued function on the Cartesian product G ×n , defined by

Φ X,f [λ] = f (λ(x 1 ), ..., λ(x n )). (28)

Both algebras Cyl BF and Cyl S , regarded as vector spaces, can be given a pre-Hilbert space structure.
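The way a cylindrical function probes the connection only through finitely many holonomies, and the transformation rule g e → h t(e) g e h s(e) -1 that will be used below to build gauge invariant states, can be illustrated numerically. In the sketch below, the two-edge loop graph, the group G = SO(3) and the function φ(g 1 , g 2 ) = tr(g 2 g 1 ) are illustrative choices, not fixed by the text:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_so3():
    """Random rotation: exponential of a random antisymmetric matrix."""
    a = rng.standard_normal((3, 3))
    return expm(a - a.T)

# Holonomies A(e1), A(e2) along two edges v1 -e1-> v2 -e2-> v1 forming a loop.
g1, g2 = random_so3(), random_so3()

def psi(h1, h2):
    """Cylindrical function Psi_{Gamma,phi}[A] = phi(A(e1), A(e2))."""
    return np.trace(h2 @ h1)

value = psi(g1, g2)

# Under a gauge transformation (one group element per vertex) each holonomy
# transforms as g_e -> h_{t(e)} g_e h_{s(e)}^{-1}; the trace around the
# closed loop, the simplest gauge invariant cylindrical function, is unchanged.
h1, h2 = random_so3(), random_so3()
g1_t = h2 @ g1 @ np.linalg.inv(h1)   # edge e1: v1 -> v2
g2_t = h1 @ g2 @ np.linalg.inv(h2)   # edge e2: v2 -> v1
assert np.isclose(psi(g1_t, g2_t), value)
```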
Fixing a graph Γ ⊂ Σ with n edges and a set of m points X ⊂ Σ, we define the scalar products on Cyl BF,Γ and Cyl S,X respectively as

< Ψ ′ Γ,φ , Ψ Γ,ψ > = ∫ G ×n φ̄ ψ, (29)

and

< Φ ′ X,f , Φ X,g > = ∫ G ×m f̄ g, (30)

where the integration over the group is realized through the Haar measure on G. These scalar products can be extended to the whole of Cyl BF (resp. Cyl S ), i.e. to cylindrical functions defined on different graphs (resp. sets of points), by redefining a larger graph (resp. set of points) containing the two different ones. The resulting measure, precisely constructed via projective techniques, is the Ashtekar-Lewandowski (AL) measure. The string Hilbert space was in fact introduced by Thiemann as a model for the coupling of Higgs fields to loop quantum gravity [Thiemann, Kinematical Hilbert spaces for fermionic and Higgs quantum field theories] via point holonomies. Completing these two pre-Hilbert spaces in the respective norms induced by the AL measures, one obtains the BF and string auxiliary Hilbert spaces, respectively denoted H BF and H S . Tensoring the two Hilbert spaces yields the auxiliary Hilbert space H = H BF ⊗ H S of the coupled system. Using harmonic analysis on G, one can define an orthonormal basis in H BF and H S , the elements of which are respectively called (open) spin networks and n-point spin states. Using the isomorphism of Hilbert spaces L 2 (G ×n ) ≃ ⊗ eΓ L 2 (G eΓ ), any cylindrical function Ψ Γ,φ in H BF decomposes according to the Peter-Weyl theorem into the basis of matrix elements of the unitary, irreducible representations of G:

Ψ Γ,φ [A] = Σ ρ1,...,ρn φ ρ1,...,ρn ρ 1 [A(e 1 Γ )] ⊗ ... ⊗ ρ n [A(e n Γ )], (31)

where ρ : G → Aut(V ρ ) denotes the unitary, irreducible representation of G acting on the vector space V ρ and the mode φ ρ1,...,ρn := ⊗ n i=1 φ ρi is an element of ⊗ n i=1 (V ρi ⊗ V ρi * ). The functions appearing in the above sum are called open spin network states.
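The orthogonality relations behind the Peter-Weyl decomposition (31) can be checked by Monte Carlo integration over the Haar measure. For concreteness the sketch below uses G = SU(2), sampled uniformly via unit quaternions, and tests ∫ dg D j mn (g) D j m'n' (g)* = δ mm' δ nn' /(2j + 1) for j = 1/2, where D 1/2 (g) = g itself; the sample size and tolerances are illustrative choices:

```python
import numpy as np

# Haar-uniform samples of SU(2) are uniform points on S^3 (unit quaternions).
rng = np.random.default_rng(2)
N = 200_000

q = rng.standard_normal((N, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
a, b, c, d = q.T

# Matrix elements of g = [[a + i b, c + i d], [-c + i d, a - i b]].
g00 = a + 1j * b
g01 = c + 1j * d

norm00 = np.mean(np.abs(g00) ** 2)    # should approach 1/(2j+1) = 1/2
cross = np.mean(g00 * np.conj(g01))   # should approach 0

assert abs(norm00 - 0.5) < 0.01
assert abs(cross) < 0.01
```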
Equivalently, the string cylindrical functions decompose as

Φ X,f [λ] = Σ ρ1,...,ρm f ρ1,...,ρm ρ 1 [λ(x 1 )] ⊗ ... ⊗ ρ m [λ(x m )], (32)

and a given element in the sum is called an n-point spin state.

String spin network states

One can now define a unitary action of the gauge group C ∞ (Σ, G) on H by using the transformation properties of the holonomies and of the string fields λ → gλ under the gauge group, and derive the subset of G-invariant states, that is, the states solution to the Gauss constraint. A vectorial basis of the vector space of gauge invariant states can be constructed, in analogy with 3d quantum gravity coupled to point particles [Noui, Three-dimensional loop quantum gravity: coupling to point particles], by tensoring the open spin network basis elements with the n-point spin states. Such a tensorial element is required to satisfy the following consistency conditions in order to be G-invariant. The graph Γ of the open spin network has a set of vertices V Γ including the points {x 1 , ..., x n } forming the set X. The vertices of Γ are coloured with a chosen element ι v of an orthonormal basis of the vector space of intertwining operators

Hom G ( ⊗ eΓ|t(eΓ)=vΓ V ρe Γ , ⊗ eΓ|s(eΓ)=vΓ V ρe Γ ), (33)

if the vertex v Γ is not on the string. If a vertex v Γ is on the string, it coincides with some point x k ∈ X. In this case, we choose an element ι vΓ in an orthonormal basis of

Hom G ( ⊗ eΓ|t(eΓ)=vΓ V ρe Γ , ( ⊗ eΓ|s(eΓ)=vΓ V ρe Γ ) ⊗ V ρ k ), (34)

where V ρ k is the representation space associated to the point x k . By finally implementing the invariance under the subgroup H ⊆ G generated by v of the n-point spin states, by choosing the modes to be H-invariant, one obtains a vectorial basis of the kinematical Hilbert space H kin , where the inner product is that of (BF and string) cylindrical functions. The elements of this basis are called string spin network states and are of the form (see fig.
1):

Ψ Γ,X [A, λ] := (Ψ Γ ⊗ Φ X )[A, λ] = ( ⊗ eΓ∈Γ ρ eΓ [A(e Γ )] ⊗ x∈X ρ x [λ(x)] ) . ( ⊗ vΓ∈Γ ι vΓ ), (35)

where the dot '.' denotes tensor index contraction. This concludes the quantum kinematical framework of strings coupled to BF theory constructed in [Baez, Quantization of strings and branes coupled to BF theory]. We now solve the curvature constraint and compute the full physical Hilbert space H phys .

C. Quantum dynamics: the curvature constraint

In this section, we explore the dynamics of the theory by constructing the physical Hilbert space H phys of solutions to the last constraint of the system, that is, the curvature or Hamiltonian constraint (26). Note that the physical states that we construct below are also solutions to the constraints of four-dimensional quantum gravity coupled to distributional matter, as in the classical case. We first underline a crucial property of the curvature constraint of (d + 1)-dimensional BF theory with d > 2, namely its reducible character, which has to be taken into account during the quantization process. We then proceed (as in [Noui, Three-dimensional loop quantum gravity: coupling to point particles]) à la Rovelli and Reisenberger [Reisenberger, A left-handed simplicial action for Euclidean general relativity], [Rovelli, The projector on physical states in loop quantum gravity] by building and regularizing a generalized projection operator mapping the kinematical states into the kernel of the curvature constraint operator. This procedure automatically provides the vector space of solutions with a physical inner product and a Hilbert space structure, and leads to an interesting duality with the coupling of Feynman loops to 3d gravity [Barrett, Feynman loops and three-dimensional quantum gravity], [Freidel, Ponzano-Regge model revisited I: Gauge fixing, observables and interacting spinning particles] from the covariant perspective.

The reducibility of the curvature constraint

A naive imposition of the curvature constraint on the kinematical states leads to severe divergences. This is due to the fact that there is a redundancy in the implementation of the constraint; the components of the curvature constraint of (d + 1)-dimensional BF theory are not linearly independent if d > 2: they are said to be reducible. The same is true for the theory coupled to sources under study here. As an illustration of this fact, let us simply count the degrees of freedom of source-free (τ = 0) BF theory in d + 1 dimensions. The configuration variable of the theory, A i a , is a g-valued connection one-form, thus containing d × dim(g) independent components for each space point of Σ. In turn, the number of constraints is given by the dim(g) components of the Gauss law (25) plus the d × dim(g) components of the Hamiltonian constraint (26) for each space point x ∈ Σ. Hence, we have N C = (d + 1) × dim(g) constraints per space point. This leads to a negative number of degrees of freedom. What is happening5 ? The point is that the N C constraints are not independent: the Bianchi identity (D (2) F = d (2) F + [A, F ] = 0, where the superscript (p) indicates the degree of the form acted upon) implies the reducibility equation

D a H a i = 0. (36)

In the case where sources are present, the reducibility equation remains valid because the curvature constraint F = pδ S together with the Bianchi identity automatically implements the momentum density conservation Dp = 0. We will come back to this reducibility of the matter sector of the theory. The system is said to be (d -2)-th stage reducible in the first class curvature constraints.
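The counting argument above reduces to a few lines of arithmetic. The helper below simply mirrors the counting used in the text (per point of the spatial slice Σ, with the constraint-component count as stated for the d = 3 theory of interest here):

```python
# Local degree-of-freedom count for source-free BF theory, following the
# text's counting per point of Sigma (an illustrative bookkeeping sketch).
def bf_local_dof(d, dim_g):
    config = d * dim_g               # components of the connection A_a^i
    n_c = (d + 1) * dim_g            # Gauss law + curvature constraint
    n_r = dim_g                      # Bianchi identity: D_a H^a_i = 0
    n_i = n_c - n_r                  # independent first-class constraints
    return config - n_i

# d = 3, G = SO(4) (dim g = 6): the correct, vanishing number of local
# degrees of freedom of the topological theory.
assert bf_local_dof(3, 6) == 0

# Ignoring the reducibility gives the naive negative count mentioned in the
# text: d*dim(g) - (d+1)*dim(g) = -dim(g).
assert 3 * 6 - 4 * 6 == -6
```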
This designation is due to the fact that the operator d (2) is itself reducible, since d (3) d (2) ≡ 0. In turn, d (3) is reducible, and so on. The chain stops after precisely d -2 steps, since the action of the de Rham differential operator d (d) on d-forms is trivial. Accordingly, the N R = dim(g) reducibility equations (36) imply a linear relation between the components of the curvature constraint. The number N I of independent constraints is thus given by N C -N R = d × dim(g). Using N I to count the number of degrees of freedom leads to the correct answer, namely zero degrees of freedom for topological BF theory. The standard procedure to quantize systems with such reducible constraints consists in selecting a subset H | irr of constraints which are linearly independent and imposing solely this subset of constraints on the auxiliary states of H. Keeping this issue in mind, we now proceed to the definition and regularization of the generalized projector on the physical states and construct the Hilbert space H phys of solutions to all of the constraints of the theory.

Physical projector: formal definition and the particle/string duality

We start by introducing the rigging map

η phys : Cyl → Cyl * (37)
Ψ → δ( Ĥ | irr ) Ψ,

where Cyl * is the (algebraic) dual vector space of Cyl = Cyl BF ⊗ Cyl S . The range of the rigging map η phys formally lies in the kernel of the Hamiltonian constraint of the coupled model. The power of the rigging map technology is that it automatically provides the vector space η phys (Cyl) = Cyl * phys ⊂ Cyl * of solutions to the Hamiltonian constraint with a pre-Hilbert space structure encoded in the physical inner product

< η phys (Ψ 1 ), η phys (Ψ 2 ) > phys = [η phys (Ψ 2 )](Ψ 1 ) := < Ψ 1 , δ( Ĥ | irr ) Ψ 2 >, (38)

for any two string spin network states Ψ 1 , Ψ 2 ∈ H kin . The scalar product used in the last equality is the kinematical inner product (29), (30).
The physical Hilbert space H phys is then obtained by the associated Cauchy completion of the quotient of H kin by the Gel'fand ideal defined by the set of zero norm states. Accordingly, the construction of the physical inner product can explicitly be achieved if we can rigorously make sense of the formal expression δ( Ĥ | irr ). This task is greatly simplified by virtue of the following duality. Indeed, we can re-express the above formal quantity as follows δ( Ĥ | irr ) = x∈Σ δ( Ĥ | irr (x)) = N Dµ[N ] exp i Σ tr(N ∧ Ĥ) . (39) Here, N ∋ N is the space of regular g-valued one-forms on Σ and Dµ[N ] denotes a formal functional measure on N imposing constraints on the test one-form N to remove the redundant delta functions on H. Simply plugging in the explicit expression of the exponent in (39) leads to H[N ] = Σ tr(N ∧ H) = Σ tr(N ∧ F ) + S tr(N p) = S 3d BF +part [N, A], (40) which, in the case where G = SO(η), where η is a three-dimensional metric, is the action of 3d gravity coupled to a (spinless) point particle [START_REF] Ph | On spin and (quantum) gravity in (2+1)-dimensions[END_REF], [START_REF] Freidel | Ponzano-Regge model revisited. I: Gauge fixing, observables and interacting spinning particles[END_REF] : the role of the triad is played by N , the mass and the worldline of the particle are respectively given by the string tension τ (hidden in the string variable p = τ Ad λ (v)) and S . Finally, the role of the Cartan subalgebra generator J 0 is played by v (also hidden in p). This relation is reminiscent of the link between cosmic strings in 4d and point particles in three-dimensional gravity discussed in the first sections.
More generally, we have in fact the following duality : P (Ω) d+1 BF +(d-2)-branes = Z d BF +(d-3)-branes , (41) where Ω denotes the ((d+1)-dimensional) no-spin-network vacuum state, Z is the path integral of BF theory in d spacetime dimensions and we have introduced the linear form6 P on Cyl ⊂ A defined by ∀Ψ ∈ Cyl, P (Ψ) = < η phys (Ω), η phys (Ψ) > phys (42) = < Ω , δ( Ĥ | irr ) Ψ > . Furthermore, when d = 3, the formal functional measure Dµ[N ] introduced above to take into account the reducible character of the four-dimensional theory corresponds to the Faddeev-Popov determinant gauge-fixing the translational topological symmetry of the 3d theory; the reducibility of the 4d theory is mapped via this duality onto the gauge redundancies of the three-dimensional theory. Now, because of the above duality (41), regularizing the formal expression (39) is, roughly speaking, equivalent to regularizing the path integral for 3d gravity coupled to point particles, up to the insertion of spin network observables. The physical inner product in our theory will therefore be related to amplitudes computed in [START_REF] Freidel | Ponzano-Regge model revisited. I: Gauge fixing, observables and interacting spinning particles[END_REF], [START_REF] Oriti | Group field theory formulation of 3-D quantum gravity coupled to matter fields[END_REF], although they would have here a quite different physical interpretation. Following [START_REF] Noui | Three-dimensional loop quantum gravity: coupling to point particles[END_REF][START_REF] Noui | Three-dimensional loop quantum gravity: Physical scalar product and spin foam models[END_REF], we will regularize the Hamiltonian constraint at the classical level by defining a lattice-like discretization of Σ and by constructing holonomies around the elementary plaquettes of the discretization as a first-order approximation of the curvature. However, there are two major obstacles to the direct and naive implementation of such a program.
The first is the reducible character of the curvature constraint and the second is the presence of spin network edges ending on the string. We will use the above duality to treat the first issue while the second will be dealt with by introducing an appropriate regularization scheme. Physical projector : regularization Throughout this section, we will concentrate on the definition of the linear form (42) evaluated on the most general string spin network state Ψ ∈ H kin , since it contains all the necessary information to compute transition amplitudes between any two arbitrary elements of the kinematical Hilbert space. We will consider a string spin network basis element Ψ of H kin defined on the (open) graph Γ. The set of end points of the graph living on the string S will be denoted X. We follow the natural generalization of the regularization defined in [START_REF] Noui | Three-dimensional loop quantum gravity: coupling to point particles[END_REF] for 2 + 1 gravity coupled to point particles. In order to deal with the curvature singularity at the string location, we thicken the smooth curve S into a smooth, non-intersecting tube T η of torus topology and constant radius η > 0 centered on the string S . The radius η is defined in terms of an arbitrary local coordinate system. If the string is disconnected, we blow up each string component in a similar fashion. Next, we remove the tube T η from the spatial manifold Σ. We are left with a three-manifold with torus boundary Σ \ T η noted Σ η . For instance, if Σ has the topology of S 3 , we know, by Heegaard splitting, that the resulting manifold has the topology of a solid torus whose boundary surface is the Heegaard surface defined by the string tube. In this way we construct a new three-manifold with boundary where each boundary component is in one-to-one correspondence with a string component and has the topology of a torus.
Finally, the open graph Γ is embedded in the bulk manifold and its endpoints lie on the boundary torus. The next step is to choose a simplicial decomposition7 of Σ η or more generally any cellular decomposition, i.e., a homeomorphism φ : Σ η → ∆ from our spatial bulk manifold Σ η to a cellular complex ∆. The discretized manifold ∆ ≡ ∆ ǫ depends on a parameter ǫ ∈ R + controlling the characteristic (coordinate) 'length scale' of the cellular complex. We will see that, by virtue of the three-dimensional equivalence between smooth, topological and piecewise-linear (PL) categories, together with the background independent nature of our theory, no physical quantities will depend on this extra parameter. We will note ∆ k the k-cells of ∆. To make contact with the literature, we will in fact work with the dual cellular decomposition ∆ * . The dual cellular complex ∆ * is obtained from ∆ by placing a vertex v in the center of each three-cell ∆ 3 , linking adjacent vertices with edges e topologically dual to the two-cells ∆ 2 of ∆, and defining the dual faces f , punctured by the one-cells ∆ 1 , as closed sequences of dual edges e. The intersection between ∆ * and the boundary tube T η induces a closed, oriented (trivalent if ∆ is simplicial) graph which is the one-skeleton of the cellular complex ∂∆ * = (v, e, f ) dual to the cellular decomposition ∂∆ of the 2d boundary T η induced by the bulk complex ∆. We will note F the set of faces f of the cellular pair (∆ * , ∂∆ * ) and require that each dual face of F admits an orientation (induced by the orientation of Σ η ) and a distinguished vertex. Finally, among all possible cellular decompositions, we select a subsector of two-complexes which are adapted to the graph Γ. Namely, we consider dual cellular complexes (∆ * , ∂∆ * ) whose one-skeletons admit the graph Γ as a subcomplex. In particular, the open edges of Γ end on the vertices v of the boundary two-complex ∂∆ * . 
The meaning of the curvature constraint F = p δ S is that the physical states have support on the space of connections which are flat everywhere except at the location of the string where they are singular. In other words, the holonomy g γ = A(γ) of an infinitesimal loop γ circling an empty, simply connected region yields the identity, while the holonomy g γ circling the string around a point x ∈ S is equal to exp p(x), the image of the fixed group element u = e τ v under the inner automorphism Ad λ : G → G; u → λ(x) u λ -1 (x), with the string field λ evaluated at the point x. The integration over the string field λ appearing in the computation of the physical inner product then forces the holonomy of the connection around the string to lie in the same conjugacy class Cl(u) as the group element u. To impose the F = 0 part of the curvature constraint, we will require that the holonomy A : F → G (43) ∂f → g f = e⊂∂f A(e), around all the oriented boundaries of the faces f of F be equal to one 8 . Each such flat connection defines a monodromy representation of the fundamental group π 1 (Σ η ) in G. Concretely, the holonomies are computed by taking the edges in the boundary ∂f of the face f in cyclic order, following the chosen orientation, starting from the distinguished vertex. Reversing the orientation maps the associated group element to its inverse. It is here crucial to take into account the reducible character of the curvature constraint to avoid divergences due to redundancies in the implementation of the constraints (i.e. coming from the incorrect product of redundant delta functions). As discussed above, the reducibility equation induced by the Bianchi identity implies that the components of the curvature are not independent.
In the discretized framework, we know [START_REF] Kawamoto | Lattice Chern-Simons gravity via Ponzano-Regge model[END_REF] that forall set of faces f forming a closed surface S with the topology of a two-sphere, f ∈S g f = 1 1, (44) modulo orientation and some possible conjugations depending on the base points of the holonomies. Accordingly, there is, for each three-cell of the dual cellular complex ∆ * , one group element g f , among the finite number of group variables attached to the faces bounding the bubble, which is completely determined by the others. It follows that imposing g f = 1 1 on all faces of the cellular complex ∆ * is redundant and would create divergences in the computation of the physical inner product. The proper way [START_REF] Freidel | Spin networks for noncompact groups[END_REF], [START_REF] Freidel | Ponzano-Regge model revisited. I: Gauge fixing, observables and interacting spinning particles[END_REF] to address the reducibility issue, or over determination of the holonomy variables, is to pick a maximal tree T of the cellular decomposition ∆ and impose F = 0 only on the faces of ∆ * that are not dual to any one simplex contained in T . A tree T of a cellular decomposition ∆ is a sub-complex of the one-skeleton of ∆ which never closes to form a loop. A tree T of ∆ is said to be maximal if it is connected and goes through all vertices of ∆. The fact that T is a maximal tree implies that one is only removing redundant flatness constraints taking consistently into account the reducibility of the flatness constraints. Finally, we need to impose the F = p part of the curvature constraint. 
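Before turning to that, note that the closed-surface identity (44) can be checked explicitly on a single tetrahedron. The sketch below is my own construction: it draws random SU(2) edge holonomies, builds the four face loops based at vertex 1 (the face (234) conjugated by g_12, as allowed by the "possible conjugations" caveat), and verifies that the based face holonomies are not independent.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Haar-uniform SU(2) element from a normalized Gaussian quaternion."""
    a, b, c, d = rng.normal(size=4)
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

# Edge holonomies g[i, j] of a tetrahedron with vertices 1..4, g_ji = g_ij^-1.
g = {}
for i in range(1, 5):
    for j in range(i + 1, 5):
        g[i, j] = random_su2()
        g[j, i] = g[i, j].conj().T

# Face holonomies as loops based at vertex 1 (face (234) conjugated by g_12).
l_123 = g[1, 2] @ g[2, 3] @ g[3, 1]
l_134 = g[1, 3] @ g[3, 4] @ g[4, 1]
l_124 = g[1, 2] @ g[2, 4] @ g[4, 1]
l_234 = g[1, 2] @ (g[2, 3] @ g[3, 4] @ g[4, 2]) @ g[2, 1]

# Discrete Bianchi identity: the four based face loops satisfy one relation,
# so fixing three of the face holonomies determines the fourth.
bianchi_holds = np.allclose(l_123 @ l_134, l_234 @ l_124)
```

Both sides collapse algebraically to g_12 g_23 g_34 g_41, which is the one-relation redundancy the maximal-tree prescription removes.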
The idea is to require that the holonomy g γ around any loop γ in ∂∆ * based at a point x, belonging to the homology class of loops of the boundary torus T η normal to the string S (these loops are the ones wrapping around the cycle of the torus circling the string, i.e., the non-contractible loops in Σ η ), be equal to the image of the group element u under the adjoint automorphism λ(x)uλ(x) -1 , i.e., belong to Cl(u). Intuitively, this could be achieved by picking a finite set {γ i } i of such homologous paths all along the tube T η and imposing g γi = Ad λi (u), with the field λ i evaluated at the base point of the holonomy g γi . However, here again, care must be taken in addressing the reducibility issue induced by the equation D a H a i = 0. In the presence of matter, the reducibility implies that the curvature constraint F = pδ S together with the Bianchi identity DF = 0 induce the momentum density conservation Dp = 0. In our setting, this is reflected in the fact that the holonomies g γ1 and g γ2 associated to two distinct homologous loops γ 1 and γ 2 circling the string satisfy the property Cl(g γ1 ) = Cl(g γ2 ) on shell. This is due to the Bianchi identity in the interior of the cylindrically shaped section of the torus T η bounded by γ 1 and γ 2 , and the flatness constraint F = 0 imposing the holonomies around all the dual faces on the boundary of the cylindrical section to be trivial (see e.g. ( 44)). Accordingly, imposing g γ1 ∈ Cl(u) naturally implies that g γ2 belongs to the conjugacy class labeled by u. In other words, choosing one arbitrary closed path circling the string, say γ 1 based at a point x 1 , and imposing F = p only along that path naturally propagates via the Bianchi identity and the flatness constraint and forces the holonomy g γ2 around any other homologous loop γ 2 based at a point x 2 to be of the form g γ2 = huh -1 ∈ Cl(u), for some h ∈ G. This shows that imposing F = p more than once, e.g.
also around γ 2 , would lead to divergences which can be traced back to the reducibility of the constraints. However, for the prescription to be complete 9 , it is not sufficient to have g γ2 in Cl(u); we need to recover the fact that the holonomy along the loop γ 2 is the conjugation of the group element u under the dynamical field λ evaluated at the base point x 2 of the holonomy, namely λ 2 = λ(x 2 ). This suggests an identification of the group element h conjugating u with the value of the string field λ 2 , which leads to a relation between the holonomy g β of a path β connecting the points x 1 and x 2 and the value of the string field λ at x 1 and x 2 : g β = λ s(β) λ t(β) -1 , stating that λ is covariantly constant along the string. We have seen that the Bianchi identity together with the full curvature constraint induces the momentum density conservation. Our treatment of the reducibility issue consists in truncating the curvature constraint, i.e., in imposing F = p only once, and using the Bianchi identity supplemented with the momentum conservation Dp = 0 to recover the truncated components of the curvature constraint without any loss of information. Accordingly, the full prescription is defined via a choice of a closed, oriented path α and a finite set C of open, oriented paths β in ∂∆ * . The closed path α circles the string (it is non-contractible in the three-manifold Σ η ). This loop is based at a point x ∈ X lying on a dual vertex v ∈ ∂∆ * supporting a spin network endpoint. The open paths β ∈ C are defined as follows. Let γ ∈ ∂∆ * be an oriented loop based at x, non-homologous to α (along the cycle of T η contractible in Σ η ) and connecting all the spin network endpoints x k ∈ X. Define the open path γ̄ by erasing the segment of γ supported by the edge ē which is such that x = t(ē). The paths β of C are 1d sub-manifolds of γ̄ each connecting x to a vertex v traversed by γ̄.
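The propagation argument above can be verified numerically for G = SU(2). The sketch below is my own check, with Haar-random elements standing in for u, λ(x 1 ) and λ(x 2 ): imposing g_γ1 = λ 1 u λ 1 ^-1 along γ 1 alone, together with the cylinder relation g γ2 = g β ^-1 g γ1 g β and g β = λ 1 λ 2 ^-1, forces g γ2 = λ 2 u λ 2 ^-1.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_su2():
    """Haar-uniform SU(2) element from a normalized Gaussian quaternion."""
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

u = random_su2()                          # fixed 'string' group element
lam1, lam2 = random_su2(), random_su2()   # string field at x1 and x2

g_gamma1 = lam1 @ u @ lam1.conj().T       # F = p imposed along gamma_1 only
g_beta = lam1 @ lam2.conj().T             # g_beta = lam(x1) lam(x2)^-1

# Fundamental-polygon relation of the cylinder: g_gamma2 = g_beta^-1 g_gamma1 g_beta.
g_gamma2 = g_beta.conj().T @ g_gamma1 @ g_beta

# g_gamma2 lies in Cl(u) (conjugacy classes of SU(2) are labeled by the trace)
# and equals lam2 u lam2^-1 exactly, as claimed.
in_class = np.isclose(np.trace(g_gamma2), np.trace(u))
propagated = np.allclose(g_gamma2, lam2 @ u @ lam2.conj().T)
```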
If the graph Γ is closed, one reiterates the same prescription simply dropping the requirements on the spin network endpoints x k ∈ X, in particular, the base point x is chosen arbitrarily. We then impose g α = exp p with p evaluated at the point x, and g β = λ s(β) λ t(β) -1 , where x = s(β), on each open path β of C. To summarize, we choose a regulator R (η,ǫ) = (T η , (∆ ǫ , ∂∆ ǫ ), T, α, C) consisting of a thickening T η of the string, a cellular decomposition (∆ ǫ , ∂∆ ǫ ) of the manifold (Σ η , T η ) adapted to the graph Γ, a maximal tree T of ∆, a closed path α in ∂∆ * , and a collection C of open paths β in ∂∆ * . The associated regularized physical scalar product is then given by P [Ψ] := lim η,ǫ→0 P [R (η,ǫ) ; Ψ], (45) with P [R (η,ǫ) ; Ψ] =< Ω ,   f / ∈T δ (g f ) α δ (g α exp p) β ∈ C δ(g β λ t(β) λ s(β) -1 )   Ψ >, (46) where the product over α is to take into account the possible multiple connected components of the string. It is important to point out that, in addition to the expression of the generalized projection above, we can use the regularization to give an explicit expression of the regularized constraint corresponding to H[N ] in equation ( 40). With the notation introduced so far the regulated quantum curvature constraint becomes Ĥη,ǫ [N ] = f ∈∆ * Tr[N (x f )g f ] + α Tr[N (x p )g α exp p], (47) 9 To understand these last points, consider two loops γ 1 and γ 2 belonging to the same homology class circling the string at two neighboring points, say x 1 and x 2 . These two loops define a section of the torus Tη homeomorphic to a cylinder. Suppose that the dual cellular complex ∆ * is such that the cylinder is discretized by a single face with two opposite edges glued along a dual edge β in ∂∆ * connecting x 1 and x 2 .
The flatness constraint on the boundary of the cylinder implies the following presentation of the cylinder's fundamental polygon: gγ 1 g β g -1 γ 2 g -1 β = 1 1 , which relates the holonomies gγ 1 and gγ 2 by virtue of the Bianchi identity in the interior of the tube. Hence, imposing F = p only along one of the two loops, say γ 1 , naturally leads to the constraint gγ 2 ∈ Cl(u). Finally, plugging the relation g β = λ 1 λ -1 2 in the value of the holonomy gγ 2 leads to the required constraint gγ 2 = λ 2 uλ -1 2 . where x f is an arbitrary point in the interior of the face f and x p an arbitrary point on the string dual to the loop α (the sum over α is over all the string components). It is easy to check that the regulated quantum curvature constraints satisfy an off-shell anomaly-freeness condition. For instance U [g] Ĥη,ǫ [N ]U † [g] = Ĥη,ǫ [gN g -1 ], (48) where U [g] is the unitary generator of G-gauge transformations. Therefore the regulator does not break the algebraic structure of the classical constraints. The quantization is consistent. The quantum constraint operator is defined as the limit where η and ǫ are taken to zero. Instead of doing this in detail we shall simply concentrate on the regulator independence of the physical inner product in the following section. The inner product is computed using the AL measure, that is, by Haar integration along all edges of the graph (∆ * , ∂∆ * ) ∋ Γ and along all endpoints x k ∈ X ⊂ ∂∆ * of the graph Γ. We can now promote the classical delta function on the lattice phase space to a multiplication operator on H kin by using its expansion in irreducible unitary representations : ∀g ∈ G, δ(g) = ρ dim(ρ)χ ρ (g), ( 49 ) where χ ρ ≡ tr ρ : G → C is the character of the representation ρ. Each χ ρ is then promoted to a self-adjoint Wilson loop operator χρ on H kin creating loops in the ρ representation around each plaquette defined by our regularization; the face bounded by the loop α is the one charged by the string.
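For G = SU(2), the characters appearing in (49) are χ j (θ) = sin((2j+1)θ/2)/sin(θ/2), and the delta-function expansion rests on their orthonormality under the Haar measure. The following is a numerical sketch of mine, using the Weyl measure sin²(θ/2) dθ/π for class functions:

```python
import numpy as np

def chi(j, theta):
    """SU(2) character of the spin-j irrep at class angle theta."""
    return np.sin((2*j + 1) * theta / 2) / np.sin(theta / 2)

def haar_class_integral(f, n=400001):
    """(1/pi) * integral_0^{2pi} f(theta) sin^2(theta/2) dtheta (Riemann sum)."""
    t = np.linspace(1e-8, 2*np.pi - 1e-8, n)
    return np.sum(f(t) * np.sin(t / 2) ** 2) * (t[1] - t[0]) / np.pi

# Orthonormality: integral of chi_j chi_k over SU(2) equals delta_jk.
norm_half = haar_class_integral(lambda t: chi(0.5, t) ** 2)       # ≈ 1
cross = haar_class_integral(lambda t: chi(0.5, t) * chi(1.0, t))  # ≈ 0
```

It is this orthonormality that makes the truncated sum over ρ in (49) act as a projector onto the trivial holonomy plaquette by plaquette.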
To summarize, we have, for each face f of the regularization, a sum over the unitary, irreducible representations ρ f of G, a weight given by the dimension dim(ρ f ) of the representation ρ f summed over and a loop around the oriented boundary of the face in the representation ρ f . See for instance [START_REF] Noui | Three-dimensional loop quantum gravity: Physical scalar product and spin foam models[END_REF], [START_REF] Thiemann | Quantum spin dynamics (QSD)[END_REF], [21] for details. This concludes our regularization of the transition amplitudes of string-like sources coupled to four-dimensional BF theory. Now, the physical inner product that we have constructed above depends manifestly on the regulating structure R (η,ǫ) . To complete the procedure, we have to calculate the limit in which the regulating parameters η and ǫ go to zero. IV. REGULATOR INDEPENDENCE Throughout this section we will suppose that the cellular complex (∆, ∂∆) is simplicial, i.e. a triangulation of (Σ η , T η ). We will also make the simplifying assumption that G = SU(2), the unitary irreducible representations of which will be noted ( j π, V j ), with the spin j in N/2. The generalization to arbitrary cellular decompositions and arbitrary compact Lie groups can be achieved by using the same techniques that we develop below. If ∆ is now a regular triangulation, it cannot be adapted to an arbitrary graph Γ. It can only be so for graphs with three- or four-valent vertices. Now, we can always decompose an n-valent intertwiner, with n > 3, into three-valent intertwiners by using repeatedly the complete reducibility of the tensor product of two representations10 . We will therefore decompose all n-valent intertwiners ι v with n > 3 into three-valent vertices. We now show how to remove the regulator R (η,ǫ) from the regularized scalar product (45). Instead of computing the η, ǫ → 0 limit, we demonstrate that the transition amplitudes are in fact independent of the regulator.
To prove such a statement, we show that the expression (45) does not depend on any component of the regulator. The transition amplitudes are proven to be invariant under any finite combination of elementary moves called regulator moves R : R (η,ǫ) → R ′ (η,ǫ ′ ) , where each regulator move is a combination of elementary moves acting on the components of the regulator : • Bulk and boundary (adapted) Pachner moves (∆, ∂∆) → (∆ ′ , ∂∆ ′ ), • Elementary maximal tree moves T → T ′ , • Elementary curve moves γ → γ ′ . We will see how the invariance under the above moves also implies an invariance under dilatation/contraction of the string thickening radius η. To conclude on the topological invariance of the amplitudes from the above elementary moves, we will furthermore prove that the transition amplitudes are invariant under elementary moves acting on the string spin network graph G : Γ → Γ ′ which map ambient isotopic PL-graphs into ambient isotopic PL-graphs. We now detail the regulator and graph topological moves. A. Elementary regulator moves The regulator moves are finite combinations of the following elementary moves acting on the simplicial complex (∆, ∂∆), the maximal tree T and the paths α and β ∈ C. Adapted Pachner moves The first invariance property that we will need is that under moves acting on the simplicial pair (∆, ∂∆), leaving the one-complex Γ invariant and mapping a Γ-adapted triangulation into a PL-homeomorphic Γ-adapted simplicial structure. We call these moves adapted Pachner moves. There are two types of moves to be considered : the bistellar moves [START_REF] Pachner | Ein Henkeltheorem f 'ur geschlossene semilineare Mannigfaltigkeiten[END_REF], acting on the bulk triangulation ∆ and leaving the boundary simplicial structure unchanged, and the elementary shellings [START_REF] Pachner | homeomorphic manifolds are equivalent by elementary shellings[END_REF], deforming the boundary triangulation ∂∆ with induced action in the bulk. a. Bistellar moves.
There are four bistellar moves in three dimensions: the (1, 4), the (2, 3) and their inverses. In the first, one creates four tetrahedra out of one by placing a point p in the interior of the original tetrahedron whose vertices are labeled p i , i = 1, ..., 4, and by adding the four edges (p, p i ), the six triangles (p, p i , p j ) i ≠ j , and the four tetrahedra (p, p i , p j , p k ) i ≠ j ≠ k . The (2, 3) move consists in the splitting of two tetrahedra into three : one replaces two tetrahedra (u, p 1 , p 2 , p 3 ) and (d, p 1 , p 2 , p 3 ) (u and d respectively refer to 'up' and 'down') glued along the (p 1 , p 2 , p 3 ) triangle with the three tetrahedra (u, d, p i , p j ) i ≠ j . The dual moves, that is the associated moves in the dual triangulation, follow immediately. See FIG. 2. b. Elementary shellings. Since the manifold (Σ η , T η ) has non-empty boundary, extra topological transformations have to be taken into account to prove discretization independence. These operations, called elementary shellings, involve the cancellation of one 3-simplex at a time in a given triangulation (∆, ∂∆). In order to be deleted, the tetrahedron must have some of its two-dimensional faces lying in the boundary ∂∆. The idea is to remove three-simplices admitting boundary components such that the boundary triangulation admits, as new triangles after the move, the faces along which the given tetrahedron was glued to the bulk simplices. Moreover, for each elementary shelling there exists an inverse move which corresponds to the attachment of a new three-simplex to a suitable component in ∂∆. These moves correspond to bistellar moves on the boundary ∂∆ and there are accordingly three distinct moves for a three-manifold with boundary, the (3, 1), its inverse and the (2, 2) shellings, where the numbers (p, q) here correspond to the number of two-simplices of a given tetrahedron lying on the boundary triangulation.
In the first, one considers a tetrahedron admitting three faces lying in ∂∆ and erases it such that the remaining boundary component is the unique triangle which did not belong to the boundary before the move. The inverse move follows immediately. The (2, 2) shelling consists in removing a three-simplex intersecting the boundary along two of its triangles such that, after the move, ∂∆ contains the two remaining faces of the given tetrahedron. These shellings and the associated boundary bistellars are depicted in FIG. 3. The subset of bistellar moves and shellings which map Γ-adapted triangulations into Γ-adapted triangulations will be called adapted Pachner moves, and, considering a local simplicial structure T k = ∪ k n=1 ∆ n 3 , k = 1, ..., 4, an adapted (p, q) Pachner move T p → T q will be noted P (p,q) , or more generally P. Any two PL-homeomorphic, Γ-adapted triangulations (∆, ∂∆) of the PL-pair (Σ η , T η ) are related by a finite sequence of such moves. Here it is important to take into account the PL-embedding of the string spin network graph Γ in the (dual) triangulation (∆, ∂∆). We will call Γ k = Γ ∩ T * k the restrictions of the graph Γ to the local simplex configurations T k appearing in the moves. If the graph Γ k is not the null graph, we will consider that it is open and does not contain any loop. If this was not the case, the set of adapted moves would reduce to the identity move, under which the transition amplitudes are obviously invariant. Hence, Γ k , if it is not the null graph, can only be either an edge (or, more generally, a collection of edges) or a (three-valent) vertex. The associated string spin network functional Ψ Γ k will be represented by a group function φ k , which is the constant map φ k = 1 if Γ k is the null graph. We will make sure to check that, under a (p, q) Pachner move, φ k transforms as φ p → φ q .
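A quick sanity check on the bistellar moves described above is that they preserve the Euler characteristic of the complex. The bookkeeping below is my own tally of the simplex counts quoted in the text for the (1, 4) and (2, 3) moves:

```python
def euler(v, e, f, t):
    """Euler characteristic of a 3-complex: V - E + F - T."""
    return v - e + f - t

# (1, 4) move: one tetrahedron -> four, adding the interior point p,
# the 4 edges (p, p_i) and the 6 triangles (p, p_i, p_j).
before_14 = euler(4, 6, 4, 1)
after_14 = euler(4 + 1, 6 + 4, 4 + 6, 4)

# (2, 3) move: two tetrahedra glued along (p1, p2, p3) -> three glued
# pairwise, adding the edge (u, d) and trading 1 interior triangle for 3.
before_23 = euler(5, 9, 7, 2)
after_23 = euler(5, 9 + 1, 7 - 1 + 3, 3)
```

In both cases the characteristic stays at 1, the value for a 3-ball, as it must for moves realizing PL-homeomorphisms.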
Maximal tree moves It is also necessary to define topological moves for the trees [START_REF] Freidel | Spin networks for noncompact groups[END_REF], [START_REF] Freidel | Ponzano-Regge model revisited II: Equivalence with Chern-Simons[END_REF]. Any two homologous trees T 1 and T 2 are related by a finite sequence of the following elementary tree moves T : T 1 → T 2 . Definition 1 (Tree move) Considering a vertex ∆ 0 belonging to a tree T , choose a pair of edges ∆ 1 , ∆ ′ 1 in ∆ touching the vertex ∆ 0 such that ∆ 1 is in T , ∆ ′ 1 is not in T and such that ∆ ′ 1 combined to the other edges of T does not form a loop. The move T consists in erasing the edge ∆ 1 from T and replacing it by ∆ ′ 1 . [Figure: the tree move exchanging the edge ∆ 1 for ∆ ′ 1 at the vertex ∆ 0 ] There is another operation on trees that we need to define. When acting on the simplicial complex (∆, ∂∆) with a bistellar move or a shelling, one can possibly map (∆, ∂∆) into a simplicial complex (∆ ′ , ∂∆ ′ ) with a different number of vertices. Hence, a maximal tree T of ∆ is not necessarily a maximal tree of ∆ ′ ; the Pachner moves have a residual action on the trees. This leads us to define the notion of maximal tree extension (or reduction) accompanying Pachner moves modifying the number of vertices of the associated simplicial complex. Definition 2 (Tree extension or reduction) An extended (or reduced) tree T , associated to a Pachner move P : ∆ → ∆ ′ modifying locally the number of vertices of a simplicial complex ∆, is a maximal tree of ∆ ′ obtained from a maximal tree T of ∆ by adding (or removing) the appropriate number of edges to T as a means to transform T into a maximal tree T P of ∆ ′ . Obviously, there is an ambiguity in the operation of tree extension or reduction. But, because of the fact that the regularized physical inner product will turn out to be independent of a choice of maximal tree, there will be no trace of this ambiguity in the computations of the transition amplitudes.
Curve moves Finally, we define the PL analogue of the Reidemeister moves, which were in fact a crucial ingredient in the proof of Reidemeister's theorem. Any two ambient isotopic PL embeddings γ 1 and γ 2 of a curve γ in the dual complex (∆ * , ∂∆ * ) are related by a finite sequence of the following elementary topological moves C : γ 1 → γ 2 . Definition 3 (Curve move) Consider a PL path γ lying along the p boundary edges e 1 , e 2 , ..., e p of a two-cell f of the dual pair (∆ * , ∂∆ * ), where f has no other edges nor vertices traversed by the curve γ. Erase the path γ along the edges e 1 , e 2 , ..., e p and add a new curve along the complementary ∂f \ {e 1 , e 2 , ..., e p } of the erased segment in f . [Figure: the curve move trading the segment along e 1 , ..., e p for the complementary boundary ∂f \ {e 1 , ..., e p } of the face f ] We have now defined all of the elementary regulator moves. To summarize, an elementary regulator move R : R (η,ǫ) → R ′ (η,ǫ ′ ) is a finite combination of all the above moves : R[R (η,ǫ) ] = (T η , P[(∆, ∂∆)], T[T P ], C[α], C[C]). Proving the invariance of the regularized physical inner product (45) under all of these elementary regulator moves is equivalent to showing the independence of the regulating structure R (η,ǫ) . Note however that we have not included contractions or dilatations of the radius η of the string tube T η in the regulator moves. This is because the invariance under shellings implies the invariance under increasing or decreasing of η. Indeed, the bistellars and shellings are the simplicial analogues of the action of the homeomorphisms Homeo[(Σ η , T η )]. In particular, the topological group Homeo[(Σ η , T η )] contains transformations deforming continuously the boundary T η , like for instance maps decreasing or increasing the (non-contractible) radius η > 0 of the boundary torus T η .
Hence, showing the invariance under elementary shellings is sufficient to prove the independence on the string thickening radius, and the moves defined above are sufficient to conclude on the regulator independence of the definition of the regularized physical inner product. To push the result further and conclude on the topological invariance of the transition amplitudes, we need extra ingredients that we define in the following section. B. String spin network graph moves We now introduce the following elementary moves respectively acting on the edges and vertices of the open graph Γ. All ambient isotopic PL embeddings of the one-complex Γ are related by a finite sequence of the following elementary moves noted G. Definition 4 (Edge move) An edge move is a curve move applied to an edge e Γ of the graph Γ. Note that these moves apply also to the open edges of the graph Γ. However, there exist other moves which displace the endpoints. Definition 5 (Endpoint move) Considering an open string spin network edge e Γ ending on the point x k ∈ X supported by a dual vertex v of ∂∆ * , which is such that its neighbouring vertex v ′ not touched by e Γ belongs to ∂∆ * , an endpoint move consists in adding a section to e Γ connecting v to v ′ . [Figure: the endpoint move extending e Γ from v to v ′ along the boundary of a face f ] We also need similar moves for the vertices. Definition 6 (Vertex translation) Let v Γ denote a three-valent spin network vertex sitting on the vertex v of the dual complex (∆ * , ∂∆ * ). Choose one edge e Γ among the three edges emerging from v Γ and call v ′ the dual vertex adjacent to v which is traversed by e Γ . Call e, e ′ ⊂ (∆ * , ∂∆ * ) the dual edges locally supporting e Γ , i.e., such that ∂e = {v, v ′ } and v ′ = e Γ ∩ (e ∩ e ′ ). The move consists in translating the vertex v Γ along e from v to v ′ . This is achieved by choosing one dual face sharing the dual edge e and not containing the dual edge e ′ , and acting upon it with the edge move.
[Figure: the vertex translation moving v Γ from v to v ′ along the dual edge e ] Note that the use of rectangular faces in the above picture is only for the clarity of the picture; the move is defined for faces of arbitrary shape. It is important to remark that the above moves respect the topological structure of the embedding because no discontinuous transformations are allowed and because the number and nature of the crossings are preserved since the faces used to define the moves are required to have empty intersections with the string or graph apart from the specified ones. The combination of the adapted Pachner moves and the spin network moves are the simplicial analogues of the action of the homeomorphisms Homeo[(Σ η , T η )] on the triple ((Σ η , T η ), Γ). C. Invariance theorem We can now prove the following theorem. Theorem 1 (Invariance theorem) Let Ψ Γ denote a string spin network element of a given basis of H kin defined with respect to the one-complex Γ. Choose a regulator R (η,ǫ) = (T η , (∆ ǫ , ∂∆ ǫ ), T, α, C) consisting of a thickening T η of the string, a cellular decomposition (∆ ǫ , ∂∆ ǫ ) of the manifold (Σ η , T η ) adapted to the graph Γ, a maximal tree T of ∆, a closed path α in ∂∆ * , and a collection C of open paths β in ∂∆ * . Let R : R (η,ǫ) → R ′ (η,ǫ ′ ) (resp. G : Γ → Γ ′ ) denote an elementary regulator move (resp. a string spin network move). The evaluated linear form (46) is invariant under the action of R and G: P [R (η,ǫ) ; Ψ Γ ] = P [ R[R (η,ǫ) ]; Ψ Γ ] (50) = P [R (η,ǫ) ; Ψ G(Γ) ]. Proof. We proceed by separately showing the invariance under each elementary regulator move, before proving the invariance under graph moves. • Invariance under maximal tree moves. Here, we simply adapt the proof of invariance under maximal tree moves written in [START_REF] Freidel | Ponzano-Regge model revisited II: Equivalence with Chern-Simons[END_REF]. Firstly, we need to endow the tree T of the left hand side of the move with a partial order.
To this aim, we pick a distinguished vertex r of T, chosen to be the other vertex of the edge ∆′_1. The rooted tree (T, r) thus acquires a partial order: a vertex ∆′_0 of (T, r) is under a vertex ∆_0, written ∆′_0 ≤ ∆_0, if it lies on the unique path connecting r to ∆_0. We can now define the tree T_{∆_0} to be the subgraph of T connecting all the vertices above ∆_0: ∆_0 is the root of T_{∆_0}. The second ingredient that we need is the notion of Bianchi identity (44) applied to trees. Indeed, if T is a tree of the (regular) simplicial complex ∆, its tubular neighborhood has the topology of a 3-ball and its boundary has the topology of a 2-sphere. This surface S can be built as the union of the faces f dual to the edges ∆_1 in ∆ touching the vertices of T without belonging to T. Hence, applying the Bianchi identity to the tree T_{∆_0} yields

g_{f′_0} = (∏_{f∈S_1} g_f) g_{f_0} (∏_{f∈S_2} g_f).  (51)

Here, f_0 and f′_0 are the faces dual to the segments ∆_0 and ∆′_0 (note that ∆_0 does not belong to T_{∆_0}). The sets S_1, S_2 are the sets of faces dual to the segments ∆_1 touching the vertices of T_{∆_0} without belonging to T_{∆_0}, and which are neither f_0 nor f′_0. The presence of two different sets S_1 and S_2 is simply to take into account the arbitrary positioning of the group element g_{f_0} among the product over all faces. As usual, the group elements are defined up to orientation and conjugation. Next, we apply a delta function to both sides of the above equation and multiply the result as follows:

∏_{f∉T, f≠f_0,f′_0} δ(g_f) δ(g_{f′_0}) = ∏_{f∉T, f≠f_0,f′_0} δ(g_f) δ((∏_{f∈S_1} g_f) g_{f_0} (∏_{f∈S_2} g_f)),  (52)

∏_{f∉T} δ(g_f) = ∏_{f∉T′} δ(g_f).

In the second step, we have simply used the delta functions with which the expression has been multiplied to set the group elements associated to the sets S_1 and S_2 to the identity (the faces of S_1 and S_2 are dual to segments not belonging to T).
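The delta-function calculus used in these manipulations can be made concrete on a finite group, where the normalized Haar measure is the averaged sum over the group and the delta function is the character sum δ(g) = Σ_j d_j χ_j(g) over the irreducible representations, weighted by their dimensions. The following sketch (an illustration, not part of the original derivation) checks this on S_3, together with the inversion invariance δ(g⁻¹) = δ(g) and the centrality δ(hgh⁻¹) = δ(g) that the proofs use repeatedly.

```python
from itertools import permutations
import random

def compose(p, q):          # (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

G = list(permutations(range(3)))   # the symmetric group S_3
e = (0, 1, 2)

def sign(p):                # parity of the permutation
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def fixed_points(p):
    return sum(1 for i, pi in enumerate(p) if i == pi)

# the three irreps of S_3: trivial (d=1), sign (d=1), standard (d=2, χ = #fixed points - 1)
irreps = [(1, lambda g: 1), (1, sign), (2, lambda g: fixed_points(g) - 1)]

def delta(g):
    # group delta function as the dimension-weighted character sum
    return sum(d * chi(g) for d, chi in irreps)

# δ equals |G| at the identity and vanishes elsewhere
assert all(delta(g) == (len(G) if g == e else 0) for g in G)

# with the normalized Haar measure (1/|G|)Σ_g, δ picks the value at the identity
random.seed(0)
f = {g: random.random() for g in G}
assert abs(sum(delta(g) * f[g] for g in G) / len(G) - f[e]) < 1e-12

# centrality and inversion invariance, used for base-point and orientation independence
assert all(delta(compose(compose(h, g), inverse(h))) == delta(g) for g in G for h in G)
assert all(delta(inverse(g)) == delta(g) for g in G)
```

Because the characters are class functions, the last two assertions hold for any finite group; the compact-group case works identically with the Plancherel expansion.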
One can then check that the various steps of the proof remain valid if the boundaries of the dual faces carry string spin networks. This shows the invariance of the regularized inner product (46) under maximal tree moves.

• Invariance under adapted Pachner moves. To prove the invariance under Pachner moves, we introduce a simplifying lemma [Oeckl, Generalized lattice gauge theory, spin foams and state sum invariants; Girelli, Spin foam diagrammatics and topological invariance].

Lemma 1 (Gauge fixing identity) To each vertex of the dual triangulation ∆* are associated four group elements {g_a}_{a=1,...,4}, six unitary irreducible representations {j_ab}_{a<b=1,...,4} of G and a string spin network function φ_1({g_a}_{a=1,...,4}). If φ_1 is the constant map φ_1 = 1, or depends on its group arguments only through monomial combinations {g_a g_b}_{a≠b} of degree two, then the following identity holds:

∫_{G^4} ∏_{a=1}^{4} dg_a ∏_{a<b} ^{j_ab}π(g_a g_b) φ_1({g_d}_d) = ∫_{G^4} ∏_{a=1}^{4} dg_a δ(g_c) ∏_{a<b} ^{j_ab}π(g_a g_b) φ_1({g_d}_d),  (53)

for c = 1, 2, 3 or 4.

Proof of Lemma 1. The above equality is trivially proven by using the invariance of the Haar measure and performing the change of variables γ_cb = g_c g_b, for c < b (resp. γ_bc = g_b g_c, for c > b), in the left-hand side. This translation is always possible since the group function φ is either the constant map or depends on the group elements only through monomials of degree two.

Let us comment here on the validity of the hypothesis made on the spin network function φ_1 associated to the graph Γ_1 = Γ ∩ T_1, with T_1 = ∆_3, in the above Lemma. In fact, φ_1 necessarily depends locally on combinations of the form g_a g_b if the graph Γ_1 is not the null graph.
Indeed, Γ_1 can be either a collection of edges, in which case this requirement simply states that the edges are open, or a vertex, where one can always use the invariance of the associated intertwining operator to satisfy the desired assumption. Hence, this requirement is always locally satisfied. We can now show the invariance under bistellar moves and shellings.

- Bistellar moves:

* The (4, 1) move. Consider the configuration T_4 of four simplices in (∆, ∂∆) (FIG. 2). Since the amplitudes do not depend on the maximal tree T of ∆, we are free to choose it. The simplest choice consists in considering a maximal tree T whose intersection T_4 with the simplex configuration T_4 reduces to the four external vertices and to a single one-simplex touching the central vertex. We work in the dual picture and label the four external dual edges from one to four. The face dual to the internal tree segment is chosen to be the face 142. We denote by g_a and h_ab, a, b = 1, ..., 4, the group elements^11 associated to the external and internal dual edges respectively, while the representations assigned to the dual faces are denoted j_ab. The general PL string spin network state restricted to the configuration T*_4 is denoted φ_4({g_a}_a, {h_ab}_{a<b}). Obviously, φ_4 is not a function of all of its ten arguments, otherwise it would contain a loop, but can generally depend on any one of these ten group elements, as suggested by the notation. The regularized physical inner product (46) restricted to these four simplices yields

∫_{G^10} dg_1 dg_2 dg_3 dg_4 dh_12 dh_13 dh_14 dh_23 dh_24 dh_34 ^{j_12}π(g_1 h_12 g_2) ^{j_13}π(g_1 h_13 g_3) ^{j_14}π(g_1 h_14 g_4) ^{j_23}π(g_2 h_23 g_3) ^{j_24}π(g_2 h_24 g_4) ^{j_34}π(g_3 h_34 g_4) δ(h_12 h_23 h_31) δ(h_13 h_34 h_41) δ(h_23 h_34 h_24) φ_4({g_a}_a, {h_ab}_{a<b}),

where we have omitted the sum over representations weighted by the associated dimensions. We start by implementing Lemma 1 at vertices 2, 3 and 4 to eliminate the three variables h_1b, b ≠ 1. We obtain

∫_{G^7} dg_1 dg_2 dg_3 dg_4 dh_23 dh_24 dh_34 ^{j_12}π(g_1 g_2) ^{j_13}π(g_1 g_3) ^{j_14}π(g_1 g_4) ^{j_23}π(g_2 h_23 g_3) ^{j_24}π(g_2 h_24 g_4) ^{j_34}π(g_3 h_34 g_4) δ(h_23) δ(h_34) δ(h_23 h_34 h_24) φ({g_a}_a, {h_23, h_34, h_24})  (54)

= ∫_{G^4} dg_1 dg_2 dg_3 dg_4 ^{j_12}π(g_1 g_2) ^{j_13}π(g_1 g_3) ^{j_14}π(g_1 g_4) ^{j_23}π(g_2 g_3) ^{j_24}π(g_2 g_4) ^{j_34}π(g_3 g_4) φ_1({g_a}_a),  (55)

where we have integrated over the delta functions to eliminate the interior variables in the second step.
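The gauge fixing identity driving this computation can be checked numerically. The sketch below (an illustration, not part of the paper) uses the abelian toy group Z_n, whose irreducible representations are the characters χ_k(g) = e^{2πikg/n}; the degree-two monomials are taken as differences g_a − g_b, the abelian stand-in for the oriented products g_a g_b, and inserting δ(g_c) is verified to leave the amplitude unchanged for every choice of c.

```python
import cmath
import random

n = 7                                        # Z_n as a toy gauge group
chi = lambda k, g: cmath.exp(2j * cmath.pi * k * g / n)   # irreducible characters
delta = lambda g: n if g % n == 0 else 0     # δ(g) = Σ_k χ_k(g)

random.seed(0)
reps = {(a, b): random.randrange(n) for a in range(4) for b in range(4) if a < b}
# φ_1 depends on its arguments only through degree-two combinations
phi1 = lambda g: chi(3, g[0] - g[1]) * chi(5, g[2] - g[3])

def amplitude(gauge_fix=None):
    total = 0
    for g0 in range(n):
        for g1 in range(n):
            for g2 in range(n):
                for g3 in range(n):
                    g = (g0, g1, g2, g3)
                    w = phi1(g)
                    for (a, b), jab in reps.items():
                        w *= chi(jab, g[a] - g[b])
                    if gauge_fix is not None:
                        w *= delta(g[gauge_fix])  # insert δ(g_c)
                    total += w
    return total / n**4                      # normalized Haar measure on Z_n^4

free = amplitude()
for c in range(4):
    assert abs(amplitude(gauge_fix=c) - free) < 1e-9
```

The equality follows from the translation invariance of the summation measure, exactly as in the proof of Lemma 1; only the abelian simplification of the monomials is an assumption of this sketch.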
The right-hand side of (55) corresponds to the one-simplex configuration of the (4, 1) move with the associated maximal tree reduction (the obvious removal of the internal tree segment; T_1 is given by the four vertices of the resulting tetrahedron). In other words, we have just proved the invariance under the transformation P_(4,1) : (T_4, T_4) → (T_1, T′_1), with T_k = T ∩ T_k (the restriction of the maximal tree to the configuration) and T′ = T_{P_(4,1)}.

* The (3, 2) move. Here, we consider the configuration T_3 of three simplices (see FIG. 2) and choose a tree T intersecting T_3 only on its five vertices. Concentrating on the dual graph, we label the three vertices from one to three and respectively denote by g^α_a, h_ab, and j^{αβ}_ab, a, b = 1, ..., 3, α, β = 1, 2, the external and internal group elements, and the representation labels. The associated string spin network state is called φ_3. The transition amplitude, restricted to these three simplices, yields (omitting the sum over representations and associated dimensions)

∫_{G^9} dg^1_1 dg^2_1 dg^1_2 dg^2_2 dg^1_3 dg^2_3 dh_12 dh_13 dh_23 ^{j^{11}_12}π(g^1_1 h_12 g^1_2) ^{j^{11}_13}π(g^1_1 h_13 g^1_3) ^{j^{11}_23}π(g^1_2 h_23 g^1_3) ^{j^{22}_12}π(g^2_1 h_12 g^2_2) ^{j^{22}_13}π(g^2_1 h_13 g^2_3) ^{j^{22}_23}π(g^2_2 h_23 g^2_3) ^{j^{12}_11}π(g^1_1 g^2_1) ^{j^{12}_22}π(g^1_2 g^2_2) ^{j^{12}_33}π(g^1_3 g^2_3) δ(h_12 h_23 h_31) φ_3({g^α_a}_a, {h_ab}_{a<b}).  (56)

Using the gauge fixing identity at the vertices 2 and 3 to eliminate the variables h_1a, a ≠ 1, integrating over the remaining flatness constraint, and reintroducing a group element h on the new internal dual edge, we obtain

∫_{G^7} dg^1_1 dg^2_1 dg^1_2 dg^2_2 dg^1_3 dg^2_3 dh ∏_{a<b} ^{j^{11}_ab}π(g^1_a g^1_b) ^{j^{22}_ab}π(g^2_a g^2_b) ^{j^{12}_11}π(g^1_1 h g^2_1) ^{j^{12}_22}π(g^1_2 h g^2_2) ^{j^{12}_33}π(g^1_3 h g^2_3) φ_2({g^α_a}_a, h),  (57)

where we have used the inverse gauge fixing identity in the last step. This expression corresponds to the configuration T_2 of two simplices of the (3, 2) move.

- Shellings:

* The (3, 1) move. Remarkably, writing the amplitudes associated to the left- and right-hand sides of the (3, 1) shelling leads to the same expression as the (4, 1) bistellar, even if the geometrical interpretation is obviously different. This is due to the fact that we are imposing the flatness constraint F = 0 also on the faces of the boundary^12 simplicial complex ∂∆ and integrating also over the boundary edges.

^11 The notation takes the appropriate orientations into account.
The only difference is in the presence of possible open string spin network edges, reflected in the group function φ = φ({g_a}_a, {λ_a}_a), without any incidence on the steps of the proof given for the (4, 1) bistellar. Accordingly, the proof of invariance under the (3, 1) shelling is the one sketched above.

* The (2, 2) move. The same remark applies here: the amplitudes are exactly identical to the ones of the (3, 2) bistellar. Accordingly, we have proven the invariance under adapted Pachner moves.

• Invariance under curve moves. We show here that the regularized physical inner product is invariant under curve moves. The proof uses the flatness constraint F = 0. Consider a particular dual face f of (∆*, ∂∆*) containing n boundary edges positively oriented from vertex 1 to vertex n. Suppose that there are p < n dual edges e_1, ..., e_p supporting a curve positively oriented w.r.t. the face f, to which is associated a spin j representation. We want to prove that the associated amplitude is equal to the amplitude corresponding to the curve lying along the n − p edges of ∂f \ {e_1, ..., e_p} after the edge move. We start from the initial configuration

∫_{G^n} ∏_{a=1}^{n} dg_a ^{j}π(g_1 ⋯ g_p) δ(g_1 ⋯ g_p g_{p+1} ⋯ g_n) δ(g_1 G_1) δ(g_1 H_1) ⋯ δ(g_n G_n) δ(g_n H_n),  (58)

where the capital letters G_a, H_a, a = 1, ..., n, represent the sequences of group elements associated to the two other faces sharing the edge a. We then simply integrate over the group element g_1 to obtain

∫_{G^{n−1}} ∏_{a=2}^{n} dg_a ^{j}π(g_n^{−1} ⋯ g_{p+1}^{−1}) δ(g_n^{−1} ⋯ g_2^{−1} G_1) δ(g_n^{−1} ⋯ g_2^{−1} H_1) ⋯ δ(g_n G_n) δ(g_n H_n)  (59)

= ∫_{G^n} ∏_{a=1}^{n} dg_a ^{j}π(g_n^{−1} ⋯ g_{p+1}^{−1}) δ(g_1 ⋯ g_p g_{p+1} ⋯ g_n) δ(g_1 G_1) δ(g_1 H_1) ⋯ δ(g_n G_n) δ(g_n H_n).

Note the reversal of orientations intrinsic to the move.
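As a sanity check, this trade of the curve holonomy for the complementary path, enforced by the flatness delta function around the face, can be verified numerically on the toy group Z_n (an illustration, not part of the original proof; the deltas δ(g_a G_a) δ(g_a H_a) of the neighbouring faces are lumped into an arbitrary spectator function).

```python
import cmath
import itertools
import random

n, N, p, spin = 5, 4, 2, 3              # Z_n; face with N boundary edges; curve along the first p edges
chi = lambda k, g: cmath.exp(2j * cmath.pi * k * g / n)
delta = lambda g: n if g % n == 0 else 0

random.seed(1)
# arbitrary spectator weight standing in for the neighbouring-face deltas
H = {g: random.random() for g in itertools.product(range(n), repeat=N)}

def amp(holonomy):
    total = 0
    for g in itertools.product(range(n), repeat=N):
        # flatness constraint δ(g_1⋯g_N) around the face (additive in Z_n)
        total += chi(spin, holonomy(g)) * delta(sum(g)) * H[g]
    return total / n**N

before = amp(lambda g: sum(g[:p]))      # ^jπ(g_1⋯g_p), as in (58)
after = amp(lambda g: -sum(g[p:]))      # ^jπ(g_n⁻¹⋯g_{p+1}⁻¹), as in (59)
assert abs(before - after) < 1e-12
```

On the support of the flatness delta the two holonomies coincide pointwise, which is exactly the mechanism of the curve move.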
This closes the proof of invariance under curve moves.

We finish the proof of Theorem 1 by showing the second part, namely the invariance of the transition amplitudes under string spin network graph moves.

• Invariance under edge moves. The proof is the one given for the curve move.

• Invariance under endpoint moves. Here, we use the momentum conservation Dp = 0. Consider a particular dual face f containing n boundary edges, with p < n dual edges e_1, ..., e_p supporting an open string spin network edge (positively oriented w.r.t. f) ending on the boundary of the edge p, to which is associated a spin j representation. We call λ_k the string field evaluated at the target of the kth edge. We choose the holonomy starting point x to be on the endpoint of the edge p (we prove below that nothing depends on this choice) and, since nothing depends on the paths β by virtue of the invariance under curve moves, we choose a path β of C along the edge p + 1. The relevant amplitude is given by

∫_{G^n} ∏_{a=1}^{n} dg_a dλ_p dλ_{p+1} ^{j}π(g_1 ⋯ g_p λ_p) δ(g_{p+1} λ_{p+1} λ_p^{−1}),  (60)

where the notations are the same as above. It is immediate to rewrite the above quantity as

∫_{G^n} ∏_{a=1}^{n} dg_a dλ_p dλ_{p+1} ^{j}π(g_1 ⋯ g_p g_{p+1} λ_{p+1}) δ(g_{p+1} λ_{p+1} λ_p^{−1}),  (61)

which concludes the proof of endpoint move invariance.

• Invariance under vertex translations. Here, we consider three dual faces f_i, i = 1, 2, 3, of (∆*, ∂∆*), each containing n_i boundary edges positively oriented from vertex 1 to vertex n_i. The three faces meet on the common edge e, which is such that 1 = t(e), i.e., e = e^i_{n_i} for all i. Suppose that there are p_1 < n_1 dual edges e^1_1, ..., e^1_{p_1} (resp. p_2 < n_2 dual edges e^2_1, ..., e^2_{p_2}) of the face f_1 (resp. f_2) supporting a string spin network edge e^1_Γ (resp. e^2_Γ) colored by a spin j_1 (resp. j_2) representation and oriented negatively w.r.t. the orientation of f_1 (resp. f_2).
Suppose also that the face f_3 contains n_3 − p_3 dual edges e^3_{p_3+1}, ..., e^3_{n_3}, along which lies a positively oriented (w.r.t. the orientation of the face) string spin network edge e^3_Γ colored by a spin j_3 representation. Consider that the three edges meet on the vertex v_Γ supported by the vertex 1 of (∆*, ∂∆*). Denoting by g the group element associated to the common edge e, one can write the spin network function associated to the three-valent vertex v_Γ lying on 1 and use the invariance property of the associated intertwining operator ι to 'slide' the vertex along the edge e:

^{j_1}π((g^1_{p_1})^{−1} (g^1_{p_1−1})^{−1} ⋯ (g^1_1)^{−1}) ^{j_2}π((g^2_{p_2})^{−1} (g^2_{p_2−1})^{−1} ⋯ (g^2_1)^{−1}) ^{j_3}π(g^3_{p_3+1} ⋯ g^3_{n_3−1} g) ι^{j_1 j_2 j_3}  (62)

= ^{j_1}π((g^1_{p_1})^{−1} ⋯ (g^1_1)^{−1} g^{−1}) ^{j_2}π((g^2_{p_2})^{−1} ⋯ (g^2_1)^{−1} g^{−1}) ^{j_3}π(g^3_{p_3+1} ⋯ g^3_{n_3−1}) ι^{j_1 j_2 j_3}.

It is then possible to use the flatness constraint F = 0 on either of the faces f_1 or f_2 to implement an edge move on e^1_Γ or e^2_Γ, thus completing the vertex move.

By virtue of all the above derivations, we have now fully proven Theorem 1. To be perfectly complete, we need to verify that the amplitudes are also independent under orientation and holonomy base point changes. This leads to the following proposition.

Proposition 1 The regularized physical inner product (46) is independent of the choice of orientations of the dual edges and faces of (∆*, ∂∆*), and does not depend on the choice of holonomy base points.

Proof of Proposition 1. It is immediate to see that the amplitudes do not depend on the orientations of the dual faces and dual edges of (∆*, ∂∆*), nor on the holonomy starting points on the boundaries of the dual faces [Freidel, Ponzano-Regge model revisited II: Equivalence with Chern-Simons].
Indeed, a dual face and a dual edge orientation change correspond respectively to a change g_e → g_e^{−1} and g_f → g_f^{−1}, which are respectively compensated by the invariance of the Haar measure, dg_e = dg_e^{−1}, and of the delta function, δ(g_f) = δ(g_f^{−1}). A change in the holonomy base point associated to a dual face f will have as a consequence the conjugation of the group element g_f by some element h in G. Since the delta function is central, δ(g_f) = δ(h g_f h^{−1}), the regularized physical inner product (46) will remain unchanged under such a transformation. Concerning the base point x used to define the holonomies along the loop α and the paths β ∈ C in (46), the situation is similar. Let x be denoted x_1 and suppose that we change the point x_1 to another point x_2 in X neighbouring x_1. Since we have shown the invariance under bistellars and shellings, we are free to choose the simplest discretization^13 of the manifold (Σ_η, T_η). We choose it such that the cylindrical section of T_η between x_1 and x_2 is discretized by a single dual face with two opposite sides glued along a dual edge e linking x_1 to x_2. By virtue of the curve move invariance, we are also free to choose the path β to be along e. The amplitude based on x_2 as a starting point for the paths α and β, restricted to this section of T_η, yields

∫_{G^5} ∏_{a=1}^{2} dg_a dλ_a dg_β δ(g_2 λ_2 u λ_2^{−1}) δ(g_β λ_1 λ_2^{−1}) δ(g_2 g_β g_1^{−1} g_β^{−1}) f({g_a}_a, {λ_a}_a, g_β),  (63)

where g_a is the holonomy around the disk bounding the tube section at the point x_a, and the function f describes the string spin network function together with the other delta functions containing the group elements g_a and g_β. It is immediate to rewrite the above expression as

∫_{G^5} ∏_{a=1}^{2} dg_a dλ_a dg_β δ(g_1 λ_1 u λ_1^{−1}) δ(g_β^{−1} λ_2 λ_1^{−1}) δ(g_1^{−1} g_β^{−1} g_2 g_β) f({g_a}_a, {λ_a}_a, g_β),  (64)

which is the amplitude based on x_1 as a starting point for the paths α and β.

There are two major consequences of the above theorem and proposition.
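Before turning to these consequences, note that the equality of the two base-point amplitudes can be checked by brute force on a small nonabelian group. The sketch below (an illustration with the finite group S_3 standing in for G, an arbitrarily chosen element u, and a random function standing in for f) enumerates all configurations and compares the two constraint sets of (63) and (64).

```python
from itertools import permutations, product
import random

S3 = list(permutations(range(3)))
E = (0, 1, 2)
mul = lambda p, q: tuple(p[i] for i in q)   # group product as composition
def inv(p):
    out = [0] * 3
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)
delta = lambda g: len(S3) if g == E else 0

random.seed(2)
u = (1, 2, 0)                               # the defect element (arbitrary choice for the test)
f = {k: random.random() for k in product(S3, repeat=5)}   # stand-in for f({g_a},{λ_a},g_β)

def amp(constraints):
    s = 0.0
    for g1, g2, l1, l2, gb in product(S3, repeat=5):
        w = 1
        for c in constraints(g1, g2, l1, l2, gb):
            w *= delta(c)
        s += w * f[(g1, g2, l1, l2, gb)]
    return s / len(S3)**5                   # normalized Haar measure on G^5

# δ(g_2 λ_2 u λ_2⁻¹) δ(g_β λ_1 λ_2⁻¹) δ(g_2 g_β g_1⁻¹ g_β⁻¹), as in (63)
eq63 = amp(lambda g1, g2, l1, l2, gb: [mul(mul(mul(g2, l2), u), inv(l2)),
                                       mul(mul(gb, l1), inv(l2)),
                                       mul(mul(mul(g2, gb), inv(g1)), inv(gb))])
# δ(g_1 λ_1 u λ_1⁻¹) δ(g_β⁻¹ λ_2 λ_1⁻¹) δ(g_1⁻¹ g_β⁻¹ g_2 g_β), as in (64)
eq64 = amp(lambda g1, g2, l1, l2, gb: [mul(mul(mul(g1, l1), u), inv(l1)),
                                       mul(mul(inv(gb), l2), inv(l1)),
                                       mul(mul(mul(inv(g1), inv(gb)), g2), gb)])
assert abs(eq63 - eq64) < 1e-9
```

Both constraint sets reduce to g_β = λ_2 λ_1⁻¹, g_a = λ_a u⁻¹ λ_a⁻¹, parametrized by (λ_1, λ_2), so the two amplitudes agree for any f, in line with the argument in the text.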
First, there is no continuum limit to be taken in (45). Since the transition amplitudes are invariant under elementary regulator moves, the regularized physical inner product (46) is independent of the regulator and the expression (46) is consequently exact; there is no need to take the limits^14 ε, η → 0. In particular, we have shown that the amplitudes are invariant under any finite sequence of bistellar moves and shellings, which implies, by Pachner's theorem, that the physical inner product is well defined and invariant on the equivalence classes of PL-manifolds^15 (∆, ∂∆) up to PL-homeomorphisms. Accordingly, the transition amplitudes are invariant under triangulation change and thus under refinement. This leads to the second substantial consequence of Theorem 1. The crucial point is that the equivalence classes of PL-manifolds up to PL-homeomorphism are in one-to-one correspondence with those of topological manifolds up to homeomorphism. See for instance [Pfeiffer, Quantum general relativity and the classification of smooth manifolds] for details. Hence, showing the invariance of the regularized physical inner product under triangulation change is equivalent to showing homeomorphism invariance: the discretized expression (46) is in fact a topological invariant of the manifold (Σ_η, T_η). In particular, the amplitudes are invariant on the equivalence classes of boundary tori T_η up to homeomorphisms. It follows that they do not depend on the embedding of the string S. Combining these results with the second part of Theorem 1, stating that the regularized physical inner product is invariant under string spin network graph moves, we obtain the following corollary.

Corollary 1 The physical inner product (46) is a topological invariant of the triple ((Σ_η, T_η), Γ):

P[R_(η,ε); Ψ_Γ] = P[[(Σ_η, T_η)]; Ψ_[Γ]],  (65)

where [(Σ_η, T_η)] and [Γ] denote the equivalence classes of topological open manifolds and one-complexes up to homeomorphisms and ambient isotopy respectively.

This corollary concludes our study of the topological invariance of the theory of extended matter coupled to BF theory studied in this paper.

V.
CONCLUSION

In the first part of this paper we have studied the geometrical interpretation of the solutions of BF theory with string-like conical defects. We showed the link between solutions of our theory and solutions of general relativity of the cosmic string type. We provided a complete geometrical interpretation of the classical string solutions and explained (by analyzing the multiple-string solution) how the presence of strings at different locations induces torsion. In turn, torsion can in principle be used to define localization in the theory. We have achieved the full background-independent quantization of the theory introduced in [Baez, Quantization of strings and branes coupled to BF theory]. We showed that the implementation of the dynamical constraints at the quantum level requires the introduction of regulators. These regulators are defined by a (suitable but otherwise arbitrary) space discretization. Physical amplitudes are independent of the ambiguities associated with the way this regulator is introduced and are hence well defined. There are other regularization ambiguities arising in the quantization process that have not been explicitly treated here. For an account of these, as well as for a proof that they have no effect on physical amplitudes, see [Perez, On the regularization ambiguities in loop quantum gravity]. The results of this work can be applied to the more general type of models introduced in [Montesinos, Two-dimensional topological field theories coupled to four-dimensional BF theory], where it is shown that a variety of physically interesting 2-dimensional field theories can be coupled to the string world sheet in a consistent manner. An interesting example is the one where, in addition to the degrees of freedom described here, the world sheet carries Yang-Mills excitations.
There is an intriguing connection between this type of topological theory and certain field theories in the 2+1 gravity plus particles case. One would expect a similar connection to exist in this case. However, due to the higher-dimensional character of the excitations in this model, this relationship allows for the inclusion of more general structures: only spin and mass are allowed in 2+1 dimensions. The study of the case involving Yang-Mills world-sheet degrees of freedom is of special interest. This work provides the basis for the computation of amplitudes in the topological theory. A clear understanding of the properties of string transition amplitudes should shed light on the eventual relation with field theories with infinitely many degrees of freedom.

FIG. 1: A typical string spin network (the string is represented by the bold line).

FIG. 2: The (4, 1) and (3, 2) bistellar moves.

FIG. 3: The (3, 1) and (2, 2) shellings, their dual moves and the associated boundary bistellars.
Note that this is exactly the same result as the one obtained for the point particle, if we think of the 3d momentum as Hodge dual to a bivector.

In fact, if Σ is compact, the equations of motion (3) imply that the string must be closed (or have zero tension).

See the original work [Baez, Quantization of strings and branes coupled to BF theory] for a detailed canonical analysis.

More precisely, one usually endows the canonical hypersurface Σ with a real, analytic structure and restricts the edges to be piecewise analytic or semi-analytic manifolds, as a means to control the intersection points.

We thank Merced Montesinos for pointing out this property of BF theory's constraints.

More precisely, the linear form P, once normalized by the evaluation P(1), is a state P/P(1) : Cyl ⊂ A → C, whose associated GNS construction leads equivalently to the physical Hilbert space H_phys. The associated Gel'fand ideal I is immense by virtue of the topological nature of the theory under consideration. Indeed, one can show that any element of Cyl based on a contractible graph is equivalent to a complex number. The associated physical representation π_phys : Cyl → End(H_phys) is defined such that, for all cylindrical functions a_Γ[A] ∈ Cyl defined on a contractible graph Γ, π_phys(a_Γ[A])Ψ = a_Γ[0]Ψ, for all Ψ in H_phys.

Note that in dimension d ≤ 3, each topological d-manifold admits a piecewise-linear structure (this is the so-called 'triangulation conjecture').

Note that the blow-up of the string, reflected here in the presence of a flatness constraint on the boundary torus, gives us the opportunity to impose that the connection is flat also on the string.
One can show that the amplitudes obtained using the three-valent decomposition (where the virtual edges are assumed to be real) are identical to the ones obtained using a suitable non-simplicial cellular decomposition adapted to spin networks with arbitrary-valence vertices.

In this sense, the boundary amplitudes are very different from a 2+1 quantum gravity model defined on an open manifold.

Anticipating the next paragraph, we are using the fact that the invariance under Pachner moves implies the topological invariance of the amplitudes. Accordingly, we can use a cellular decomposition which is not necessarily a triangulation.

More precisely, we have shown that (46) is invariant when going from R_(η,ε) to R_(η′,ε′) for all (η′, ε′) ≠ (η, ε), which implies regulator independence.

More precisely, a triangulation ∆ is not a PL-manifold. It is a combinatorial manifold which is PL-isomorphic to a PL-manifold.

Acknowledgements

We thank Romain Brasselet for his active participation in the early stages of this project. WF thanks Etera Livine for discussions. This work was supported in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes) through a visiting professor fellowship.
Source: https://hal.univ-lorraine.fr/tel-01751474/file/DDOC_T_2014_0309_XU.pdf (HAL, 2014)
After all the confusion and epiphany, depression and inspiration, sadness and happiness, it finally comes to the stage where we are modest and grateful, which is the last thing that I learned from my doctoral studies. I would like to begin by expressing my most sincere gratitude to my Ph.D. advisor, Professor Michel Potier-Ferry, an inspiring and extraordinary person to work with during the entire span of this thesis. In addition to introducing me to the subject of instability and guiding my growth in computational mechanics, his patient guidance and relentless pursuit of excellence have helped shape me into not only a more rigorous researcher but also a more responsible and considerate person than I otherwise would have been. Plainly put, he has been a great mentor. It has been an honor and a privilege working with him over the last three years. Sincere appreciation is extended to my co-advisor, Dr. Salim Belouettar, for his kind support and help as well as his deep trust in giving me full autonomy in my research activity. I would like to express my gratitude to the other thesis committee members, Professor Basile Audoly, Professor Yibin Fu, Professor Martine Ben Amar and Professor Hachmi Ben Dhia, for taking the time to read this thesis and for offering helpful comments and suggestions. Professor Basile Audoly and Professor Yibin Fu deserve a special thank-you for their inspiring and encouraging reports on this thesis. Many thanks go in particular to Professor Yanping Cao at Tsinghua University for his expert advice and valuable scientific discussions, which enlightened me and cleared up my confusion on some technical points. I would also like to record my gratitude to my colleagues and my friends, Dr. Yu Cong and Dr. Yao Koutsawa, for our scientific (and not-so-scientific) discussions. It has been a pleasure working with them. Special thanks go to Professor Hamid Zahrouni for allowing me to use his powerful workstation to perform heavy simulations.
My thanks also go to my friends, Kodjo Attipou, Cai Chen, Yajun Zhao, Junliang Dong, Qi Wang, Dr. Wei Ye, Dr. Jingfei Liu, Dr. Kui Wang, Alex Gansen, Qian Shao, Sandra Hoffmann, Dr. Duc Tue Nguyen, etc., for sharing my joy and sadness, and for offering help and support whenever needed. I treasure every minute that I have spent with them. Lastly, I would like to give my most special thanks to my parents and my fiancée for their unconditional love, 24/7 support, shared happiness and for always believing in me. Without their encouragement I would never have made it this far. Financial support for this research was provided by an AFR Ph.D. Grant from the Fonds National de la Recherche of Luxembourg (Grant No. FNR/C10/MS/784868).

2.1 An elastic stiff film resting on a compliant substrate under in-plane compression. The wrinkle wavelength λ_x is much larger than the film thickness h_f. The ratio of the substrate thickness h_s to the wavelength, h_s/λ_x, can vary from a small fraction to a large number.

4.4 Buckling of a clamped beam under uniform compression: one real envelope (v_0) and four complex envelopes (v^R_1, v^I_1, v^R_2, v^I_2). The reduction is performed over the domain [π, 29π]. The instability pattern for µ = 2.21.

4.5 Buckling of a clamped beam under uniform compression: one real envelope (u_0) and four complex envelopes (u^R_1, u^I_1, u^R_2, u^I_2). The reduction is performed over the domain [π, 29π]. The instability pattern for µ = 2.21.

Thin films on hyperelastic substrates under equi-biaxial compression. The left column shows a sequence of wrinkling patterns, while the right column presents the associated instability shapes at the line X = 0.5L_x. Localized folding mode and checkerboard mode appear in the bulk.
General introduction

Surface wrinkles of stiff thin layers bound to soft materials have been widely observed in nature, such as the wrinkles of hornbeam leaves and human skin, and have raised considerable research interest for several decades. When a stiff thin film is deposited on a polymeric soft substrate, the residual compressive stresses that develop in the film during the cooling process, due to the large mismatch of thermal expansion coefficients between the film and the substrate, are relieved by buckling into wrinkle patterns, as pioneered by Bowden et al. in 1998 [Bowden et al.]. These surface wrinkles are a nuisance in some applications, but are widely exploited in modern industry, ranging from the micro/nanofabrication of flexible electronic devices with controlled morphological patterns [Bowden et al.; Wang et al.] to the measurement of mechanical material properties [Howarter et al.].
Over the last decade, several theoretical, numerical and experimental works have been devoted to stability analyses in order to determine the critical conditions of instability and the corresponding wrinkling patterns [Cai et al.; Chen et al.; Huang et al.; Wang et al.; Song et al.; Audoly et al., Parts I-III]. Although linear perturbation analyses can predict the wavelength at the onset of instability, determining the subsequent post-buckling morphological evolution requires nonlinear buckling analyses. During post-buckling, the wavelength and amplitude of the wrinkles may vary with the externally applied compressive load. However, instability problems are complex and often involve strong geometrical nonlinearities, large rotations, large displacements, large deformations, loading-path dependence and multiple symmetry-breakings.
That is why most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact analytical solutions can be obtained. In most previous works, the 2D or 3D film/substrate system is discretized by a spectral method or a Fast Fourier Transform (FFT) algorithm, which is fairly computationally inexpensive but prescribes periodic boundary conditions and simple geometries. Moreover, within the spectral or FFT framework, it is rather difficult to describe the localized behavior that often occurs in soft matter. By contrast, such systems can be modeled using finite element methods, which are more computationally expensive but more flexible for describing complex geometries and general boundary conditions, and which allow the use of commercial computer codes. In addition, there is a shortage of studies concerning the effect of boundary conditions on instability patterns, which is important in practice. Localizations are often caused by stress concentration near the boundary or by symmetry-breakings, and the finite element method is a good way to capture localized behavior such as folding, creasing or ridging, while the spectral or FFT technique has difficulty achieving this. Overall, pattern formation modeling and post-buckling analysis deserve new numerical investigations, especially through the finite element method, which can provide a whole view of, and insight into, the formation and evolution of wrinkle patterns under any conditions. Therefore, the main objective of the present thesis is to apply advanced numerical methods to multiple-bifurcation analyses of film/substrate systems. These advanced numerical approaches include path-following techniques, bifurcation indicators, bridging techniques, multi-scale analyses, etc.
The contribution of this thesis lies in, but is not limited to, the application of the following numerical methods to instability pattern formation in film/substrate systems:
• Finite element method, to be able to deal with all geometries, behaviors and boundary conditions;
• Path-following technique, for nonlinear problem resolution;
• Bifurcation indicator, to detect bifurcation points and the associated instability modes;
• Reduction techniques of models by multi-scale approaches;
• Bridging techniques to couple full models and reduced-order models concurrently.
The thesis is outlined as follows. In Chapter 1, the literature review revisits previous contributions and reports cutting-edge works as well as research trends in the related fields. Challenges and problems that need to be overcome and solved are positioned and discussed. Besides, the main methods and techniques that will be applied and developed in the following chapters are introduced. This includes firstly an advanced numerical continuation technique to solve nonlinear differential equations, namely the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF]; secondly a Fourier-related multi-scale modeling technique for instability pattern formation; and lastly the well-known Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] for model coupling between different scales or different meshes.
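To give a concrete feel for how the ANM cited above turns one nonlinear problem into a succession of linear ones, here is a minimal Python sketch applied to the scalar residual R(u, λ) = u + u² − λ = 0. The problem, series order, accuracy parameter and branch starting point are illustrative choices for this sketch, not the film/substrate implementation developed in the thesis.

```python
import numpy as np

def anm_series(u0, lam0, c=1.0, order=15):
    """One ANM step: Taylor series u(a), lam(a) from the point (u0, lam0)
    for the scalar residual R(u, lam) = u + c*u**2 - lam = 0."""
    u = np.zeros(order + 1)
    lam = np.zeros(order + 1)
    u[0], lam[0] = u0, lam0
    Rt = 1.0 + 2.0 * c * u0               # tangent operator dR/du at u0
    # order 1: Rt*u1 = lam1, with the arc-length normalization u1^2 + lam1^2 = 1
    u[1] = 1.0 / np.sqrt(1.0 + Rt**2)
    lam[1] = Rt * u[1]
    for p in range(2, order + 1):
        # quadratic right-hand side gathered from the lower-order terms
        Fp = c * sum(u[i] * u[p - i] for i in range(1, p))
        # solve Rt*u_p - lam_p = -Fp together with u1*u_p + lam1*lam_p = 0
        u[p] = -Fp / (Rt + u[1] / lam[1])
        lam[p] = -u[1] * u[p] / lam[1]
    return u, lam

def anm_step(u, eps=1e-8):
    """Range of validity of the truncated series (standard ANM criterion)."""
    n = len(u) - 1
    return (eps * abs(u[1]) / abs(u[n])) ** (1.0 / (n - 1))

# follow the equilibrium branch starting from (u, lam) = (0, 0)
u0, lam0 = 0.0, 0.0
for _ in range(5):
    us, ls = anm_series(u0, lam0)
    a = anm_step(us)
    u0 = sum(us[k] * a**k for k in range(len(us)))
    lam0 = sum(ls[k] * a**k for k in range(len(ls)))
    print(f"lambda = {lam0:.4f}, u = {u0:.4f}, residual = {u0 + u0**2 - lam0:.2e}")
```

Each continuation step requires a single tangent-operator factorization (here the scalar Rt), and the step length is chosen a posteriori from the decay of the series terms, which is the key practical advantage of the ANM over incremental-iterative schemes.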
In Chapter 2, we apply advanced numerical methods for bifurcation analyses to typical film/substrate systems, focusing on the post-bifurcation evolution involving secondary bifurcations and the associated instability modes. A finite element model based on the ANM is developed for nonlinear analysis of pattern formation, with particular attention to the effect of boundary conditions. Up to four successive bifurcation points have been detected. The evolution of wrinkling patterns and post-bifurcation modes including period-doubling has been observed beyond the first bifurcation. Next, in Chapter 3, following the same strategy, we extend the 2D work to 3D cases. Spatial pattern formation in stiff thin films on compliant substrates is investigated based on a 3D finite element model coupling shell elements representing the film and block elements describing the substrate. Typical post-bifurcation patterns include sinusoidal, checkerboard and herringbone shapes, with possible spatial modulations, boundary effects and localizations. Up to four successive bifurcation points have been found on the nonlinear response curves. Chapter 4 presents a very original nonlocal bridging technique between microscopic and macroscopic models for wrinkling analysis. We discuss how to connect a fine and a coarse model within the Arlequin framework. We propose a nonlocal reduction-based coupling operator that allows us to accurately describe the response of the system near the boundary and to avoid locking phenomena in the coupling zone. The proposed method can be viewed as a guide for coupling techniques involving other reduced-order models, and it shows a flexible way to analyze cellular instability problems involving thin boundary layers, e.g. membrane wrinkling, buckling of thin metal sheets, etc.
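To make the Arlequin coupling idea behind Chapter 4 concrete, the following self-contained 1D sketch glues a fine and a coarse bar mesh over an overlap zone: the strain energy is weighted by a linear partition of unity and the two displacement fields are matched through an L2 Lagrange-multiplier coupling. The meshes, the load case and the plain L2 coupling are illustrative choices for this sketch (not the nonlocal reduction-based operator proposed in the thesis); for a homogeneous bar under an end load, the coupled solution reproduces the exact linear displacement field.

```python
import numpy as np

EA, F = 1.0, 1.0                   # bar stiffness and end load (illustrative values)
x1 = np.linspace(0.0, 0.6, 13)     # fine model on [0, 0.6]
x2 = np.linspace(0.4, 1.0, 4)      # coarse model on [0.4, 1]
S = (0.4, 0.6)                     # gluing (overlap) zone

def alpha1(x):                     # linear partition of unity: alpha1 + alpha2 = 1
    return np.clip((S[1] - x) / (S[1] - S[0]), 0.0, 1.0)

def stiffness(x, weight):          # weighted linear-element stiffness (midpoint rule)
    n = len(x); K = np.zeros((n, n))
    for e in range(n - 1):
        h = x[e + 1] - x[e]
        w = weight(0.5 * (x[e] + x[e + 1]))
        K[e:e+2, e:e+2] += w * EA / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

K1 = stiffness(x1, alpha1)
K2 = stiffness(x2, lambda x: 1.0 - alpha1(x))

def hat(x, nodes, i):              # piecewise-linear shape function of node i
    return np.interp(x, nodes, np.eye(len(nodes))[i])

lam_nodes = np.array([0.4, 0.6])   # multiplier lives on the coarse element covering S
g = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule (exact here)

def couple(nodes):                 # C[i, j] = integral over S of N_lam_i * N_j
    C = np.zeros((2, len(nodes)))
    for e in range(len(x1) - 1):   # integrate on the fine partition of the overlap
        a, b = x1[e], x1[e + 1]
        if a < S[0] - 1e-9 or b > S[1] + 1e-9:
            continue
        xg = 0.5 * (a + b) + 0.5 * (b - a) * g
        for i in range(2):
            for j in range(len(nodes)):
                C[i, j] += 0.5 * (b - a) * np.sum(hat(xg, lam_nodes, i) * hat(xg, nodes, j))
    return C

C1, C2 = couple(x1), couple(x2)

# assemble the mixed (saddle-point) system, clamp u1(0) = 0 and load the far end
n1, n2 = len(x1), len(x2)
A = np.zeros((n1 + n2 + 2, n1 + n2 + 2)); b = np.zeros(n1 + n2 + 2)
A[:n1, :n1] = K1;             A[:n1, n1 + n2:] = C1.T
A[n1:n1+n2, n1:n1+n2] = K2;   A[n1:n1+n2, n1 + n2:] = -C2.T
A[n1+n2:, :n1] = C1;          A[n1+n2:, n1:n1+n2] = -C2
b[n1 + n2 - 1] = F
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
u = np.linalg.solve(A, b)
u1, u2 = u[:n1], u[n1:n1 + n2]
print("errors:", np.max(np.abs(u1 - F * x1 / EA)), np.max(np.abs(u2 - F * x2 / EA)))
```

The multiplier compensates, inside the gluing zone, for the fictitious distributed force introduced by the weighting functions, so the patch test (a linear field) is passed exactly; richer coupling operators, such as the nonlocal one proposed in Chapter 4, modify only the definition of the matrices C1 and C2.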
In the last Chapter 5, a macroscopic modeling framework for film/substrate systems is developed based on the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. More specifically, a 2D macroscopic film/substrate model is derived from the 2D model presented in Chapter 2, with all the mechanical fields being represented by Fourier coefficients. In this way, the computational cost can be reduced significantly, since only a few elements are sufficient to describe nearly periodic wrinkles. In the same spirit, a 3D macroscopic film/substrate model that couples a nonlinear envelope membrane-wrinkling model and a linear elastic macroscopic model is then established. Unless particularly mentioned, computations performed throughout the thesis have been based on self-developed computer codes in MATLAB.

Chapter 1: Literature review

Wrinkles of a stiff thin layer attached on a soft substrate have been widely observed in nature (see Fig. 1.1) and these phenomena have raised considerable research interest over the last decade [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Efimenko | Nested self-similar wrinkling patterns in skins[END_REF][START_REF] Dervaux | Morphogenesis of growing soft tissues[END_REF]128,[START_REF] Rogers | Materials and mechanics for stretchable electronics[END_REF][START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF]. The underlying mechanism of wrinkling is generally understood as a stress-driven instability, analogous to Euler buckling of an elastic column under compressive stress. By depositing a stiff film on an elastomeric soft substrate, the residual compressive stresses developed in the film during the cooling process, due to the large thermal expansion coefficient mismatch between the film and the substrate, are relieved by buckling with a pattern of wrinkles, while the film remains bonded to the substrate, which was pioneered by Bowden et al.
[START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Bowden | The controlled formation of ordered, sinusoidal structures by plasma oxidation of an elastomeric polymer[END_REF] in 1998. These surface wrinkles are a nuisance in some applications, but can also be exploited in a wide range of applications: micro/nano-fabrication of surfaces with ordered patterns and unique wetting properties [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Bowden | The controlled formation of ordered, sinusoidal structures by plasma oxidation of an elastomeric polymer[END_REF], buckled single-crystal silicon ribbons [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF], the optics of electronic eye cameras, the measurement of surface mechanical properties of materials [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF], biomedical engineering [START_REF] Genzer | Soft matter with hard skin: From skin wrinkles to templating and material characterization[END_REF] as well as biomechanics [START_REF] Amar | Swelling instability of surface-attached gels as a model of soft tissue growth under geometric constraints[END_REF][START_REF] Dervaux | Buckling condensation in constrained growth[END_REF][START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF], the design of flexible semiconductor devices and stretchable electronics [START_REF] Rogers | Materials and mechanics for stretchable electronics[END_REF], conformable skin sensors, smart surgical gloves, and structural health monitoring devices [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF].
When subjected to sufficiently large in-plane compressive stresses, in order to minimize the total potential elastic energy, a film/substrate system may buckle into different intricate patterns depending on loading and boundary conditions, e.g. sinusoidal, checkerboard, herringbone, hexagonal and triangular patterns, as shown in Figs. 1.2 and 1.3. One mathematical expression of these pattern shapes is a combination of trigonometric functions [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF], which can be written as

1D sinusoidal mode: $z = \sin(kx)$,
Checkerboard mode: $z = \cos(kx)\cos(ky)$,
Herringbone mode: $z = \cos(kx) + \sin(kx)\cos(ky)$,
Hexagonal mode: $z = \cos(kx) + 2\cos\left(\tfrac{1}{2}kx\right)\cos\left(\tfrac{\sqrt{3}}{2}ky\right)$,
Triangular mode: $z = -\sin(kx) + 2\sin\left(\tfrac{1}{2}kx\right)\cos\left(\tfrac{\sqrt{3}}{2}ky\right)$.  (1.1)

First, let us distinguish three typical phenotypes in the morphological instability of soft materials: wrinkling, folding and creasing (see Fig. 1.4). Wrinkling refers to periodic or aperiodic surface undulations appearing on a flat surface. It often occurs during the buckling of thin structures on lateral foundations; for example, in 2D cases, a stiff film bonded to a compliant substrate may buckle into sinusoidal waves. Folding usually refers to a buckling-induced surface structure with a localized surface valley. Folds are often observable during the post-buckling evolution of surface wrinkles in a hard layer lying on a gel substrate or floating on a liquid. The term folding has been extensively adopted in some fields, like biomedicine and tectonophysics, to denote traditional wrinkling. By contrast, creasing usually appears at the surface of soft materials without hard skins, when an initially smooth surface forms a sharp self-contacting sulcus [START_REF] Li | Mechanics of morphological instabilities and surface wrinkling in soft materials: a review[END_REF].
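The mode shapes in Eq. (1.1) are straightforward to tabulate numerically. The short Python sketch below evaluates each pattern on a regular grid; the wavenumber k and the grid extent are arbitrary choices made purely for illustration:

```python
import numpy as np

k = 2.0 * np.pi                    # illustrative wavenumber: one period per unit length
x, y = np.meshgrid(np.linspace(0.0, 2.0, 201), np.linspace(0.0, 2.0, 201))

# the five periodic mode shapes of Eq. (1.1)
modes = {
    "sinusoidal":   np.sin(k * x),
    "checkerboard": np.cos(k * x) * np.cos(k * y),
    "herringbone":  np.cos(k * x) + np.sin(k * x) * np.cos(k * y),
    "hexagonal":    np.cos(k * x)
                    + 2.0 * np.cos(0.5 * k * x) * np.cos(0.5 * np.sqrt(3.0) * k * y),
    "triangular":   -np.sin(k * x)
                    + 2.0 * np.sin(0.5 * k * x) * np.cos(0.5 * np.sqrt(3.0) * k * y),
}
for name, z in modes.items():
    print(f"{name:12s} min = {z.min():+.3f}, max = {z.max():+.3f}")
```

Such grids can be fed directly to a surface-plotting routine to reproduce the pattern gallery of Figs. 1.2 and 1.3; note the different peak amplitudes of the modes (e.g. the hexagonal mode peaks at 3 for a unit-amplitude basis), which matters when comparing their elastic energies.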
The broad interest in surface morphological instabilities of stiff thin layers attached on soft substrates has motivated recent studies, especially focusing on stability analysis. Several theoretical, numerical and experimental works have been devoted to linear perturbation analyses and nonlinear buckling analyses in order to determine the critical conditions of instability and the corresponding wrinkling patterns [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF].
In particular, there are many analytical solutions of models linearized from homogeneous finite deformations, in the case of half-spaces [START_REF] Hayes | Surface waves in deformed elastic materials[END_REF][START_REF] Dowaikh | On surface waves and deformations in a pre-stressed incompressible elastic solid[END_REF] as well as film/substrate systems [START_REF] Steigmann | Plane deformations of elastic solids with intrinsic boundary elasticity[END_REF][START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF]. Cai and Fu [START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF] analytically studied the buckling of a pre-stressed coated elastic half-space with the aid of the exact theory of nonlinear elasticity, treating the coating as an elastic layer and using its thickness as a small parameter. Besides, they also determined the imperfection sensitivity of a neo-Hookean surface layer bonded to a neo-Hookean half-space [START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF]. Although linear perturbation analyses can predict the wavelength at the initial stage of the instability threshold, determination of the post-buckling morphological evolution and the mode transition of surface wrinkles requires nonlinear buckling analyses. During post-buckling, the wrinkle wavelength and amplitude vary with the externally applied compressive load. Due to this well-known difficulty, most post-buckling analyses have resorted to numerical and experimental approaches, since only a limited number of exact analytical solutions can be obtained in very simple or simplified cases.
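For the benchmark case of a stiff film on a compliant elastic half-space, the linearized analyses quoted above yield well-known closed-form results for the critical wavelength and strain (see e.g. Chen and Hutchinson). The short Python sketch below evaluates them; the material parameters (a ~100 nm metal film on an elastomer) are illustrative values chosen here, not data from the thesis.

```python
import numpy as np

def plane_strain_modulus(E, nu):
    """Plane-strain modulus Ebar = E / (1 - nu^2)."""
    return E / (1.0 - nu**2)

def critical_wrinkling(E_f, nu_f, E_s, nu_s, h_f):
    """Classical linearized results for a stiff film of thickness h_f on a
    compliant elastic half-space:
        lam_c = 2*pi*h_f * (Ebar_f / (3*Ebar_s))**(1/3)
        eps_c = 0.25 * (3*Ebar_s / Ebar_f)**(2/3)."""
    Ef = plane_strain_modulus(E_f, nu_f)
    Es = plane_strain_modulus(E_s, nu_s)
    lam_c = 2.0 * np.pi * h_f * (Ef / (3.0 * Es)) ** (1.0 / 3.0)
    eps_c = 0.25 * (3.0 * Es / Ef) ** (2.0 / 3.0)
    return lam_c, eps_c

# illustrative values: a metal film (~100 GPa, 100 nm) on an elastomer (~1 MPa)
lam_c, eps_c = critical_wrinkling(E_f=100e9, nu_f=0.3, E_s=1e6, nu_s=0.48, h_f=100e-9)
print(f"critical wavelength = {lam_c * 1e6:.2f} um, critical strain = {eps_c:.2e}")
```

The strong contrast between film and substrate moduli explains the typical orders of magnitude observed experimentally: micrometer-scale wavelengths and critical strains well below one percent, i.e. wrinkling is triggered very easily during cooling.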
Some researchers considered the wrinkling of the thin film by using the Föppl-von Kármán nonlinear elastic plate theory together with linear elasticity theory for the substrate, and then carried out the minimization of the potential energy to characterize the effective instability parameters, which is quite a classical and simple way to model film/substrate systems. Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] elucidated nonlinear aspects of the buckling behavior of some periodic modes and developed a closed-form solution. They calculated the wavelength and amplitude of sinusoidal wrinkles by considering an infinitely thick substrate. Numerically, by using the finite element method and simulating a single elementary cell, a square or a parallelogram with periodic boundary conditions, Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] found that the herringbone pattern has the minimum energy among several patterns including the sinusoidal and checkerboard ones, which is the reason why it is frequently observed in experiments. Huang et al. [START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF] extended the work of Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] to the case of a film bound to a substrate of finite thickness. Instead of modeling the substrate as a Winkler foundation (a foundation made of an array of springs and dashpots) [START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF], they developed a spectral method to evolve the two-dimensional wrinkle patterns, representing the three-dimensional elastic field of the substrate in the Fourier space.
The calculation of wrinkle patterns was performed in a square cell in the plane, with periodic boundary conditions replicating the cell over the entire plane. They observed stripe wrinkles, checkerboard, labyrinth or herringbone patterns depending on the loading conditions, and showed that the wavelength of the wrinkles remains constant as their amplitude increases. Independently, Mahadevan and Rica [START_REF] Mahadevan | Self-organized origami[END_REF] proposed an analysis of herringbone patterns based on amplitude equations, which is suitable for the analysis of large-wavelength perturbations on top of straight wrinkles, but relies on an assumption that does not apply to the geometry of herringbones. Audoly and Boudaoud [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF] proposed a simplified buckling model considering a film attached to an isotropic half-infinite substrate and solved it in the Fourier space. They found that the undulating stripes evolve smoothly towards a pattern similar to the herringbone one under increasing loads. In their companion papers [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF], they employed asymptotic methods and a Fast Fourier Transform (FFT) algorithm to explore aspects of the behavior expected in the range of very large overstress, with emphasis on the herringbone mode. Most previous studies consider a homogeneous substrate, while systems consisting of a stiff thin layer resting on an elastic graded substrate are often encountered both in nature and in industry.
In nature, many living soft tissues including skins, brains, mucosa of esophagus and pulmonary airway can be modeled as a soft substrate covered by a stiff thin surface layer [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF]. It is noted that the sub-surface layer (i.e. substrate) usually has gradient mechanical properties because of the spatial variation in the microstructure or composition. Besides, many practical systems in industry have a functionally graded substrate [START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF][START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. For instance, the deposition process of a stiff thin film on a soft substrate may lead to a variation of the mechanical properties of the substrate along the thickness direction (functionally graded), which would affect the wrinkling of film/substrate system [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF]. Lee et al. [START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF] performed a stability and bifurcation analysis for the surface wrinkling of an elastic half space under in-plane compression, with Young's modulus arbitrarily varying along the depth. They developed a finite element method to solve this problem. Cao et al. [START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF] carried out theoretical analyses and finite element simulations for a hard film wrinkling on an elastic graded substrate subjected to in-plane compression. 
In particular, they investigated two typical variations of the substrate modulus along the depth direction, expressed by a power function and an exponential function, respectively. Nevertheless, up to now, there is a shortage of theoretical or numerical investigations into the instability of a stiff layer lying on an elastic graded substrate. On the other hand, when the substrate is made of viscous or viscoelastic materials, wrinkling patterns may evolve with time due to their time-dependent mechanical properties [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF]. In such systems, the instability characteristics can be determined by integrating the methods of energetics and kinetics [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF]. A spectral method was developed by Huang and Im [START_REF] Huang | Dynamics of wrinkle growth and coarsening in stressed thin films[END_REF] for numerical simulations of wrinkle growth and coarsening in stressed thin films on a viscoelastic layer. A random perturbation distribution of lateral deflection is imposed to trigger disordered labyrinth patterns. Later, a Fourier transform method was employed by Im and Huang [START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF] for the wrinkling analysis of an anisotropic crystal film on a viscoelastic substrate layer. Even so, investigations of the effects of viscosity on wrinkling pattern evolution are still very limited and deserve much further effort. Notwithstanding the effort that has been made on the modeling of morphological wrinkling in film/substrate systems, most of these previous studies are mainly confined to determining the critical conditions of instability and the corresponding wrinkling patterns near the instability threshold.
The post-buckling evolution and mode transition of surface wrinkles have only recently been pursued [128,129,[START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF][START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. Experimentally, with thin elastic membranes supported on a much softer elastic solid or on a fluid, Pocivavsek et al. [128] found a transition from periodic surface wrinkling to symmetry-broken folding when the compression exceeds a critical value. When the fluid substrate is replaced by a polydimethylsiloxane (PDMS) foundation, the film/substrate system shows a distinctly different pattern evolution with increasing compression. Brau et al. [START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF] experimentally discovered that further compression above the onset of buckling triggers multiple bifurcations: one wrinkle grows in amplitude at the expense of its neighbors. These bifurcations create a period-doubling or even a period-quadrupling surface topography under progressive compression. Li et al.
[START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF] numerically reproduced these interesting phenomena in the volumetric growth and surface wrinkling of a mucosa and submucosa by using a pseudo-dynamic solution method. Through numerical simulations, Cao and Hutchinson [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF] uncovered advanced post-bifurcation modes including period-doubling, folding and a newly identified mountain ridge mode, in the post-buckling of a bilayer system wherein an unstretched film is bonded to a prestretched compliant neo-Hookean substrate, with buckling arising as the stretch in the substrate is relaxed. Then, Zang et al. explored this localized mountain ridge mode in greater depth by finite element simulation and an analytical film/substrate model. Meanwhile, Cao and Hutchinson [START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF] provided further insight into the connection between wrinkling and an alternative surface mode, the finite amplitude crease or sulcus. Additionally, hierarchical folding of an elastic membrane on a viscoelastic substrate can be observed under a continuous biaxial compressive stress: the folds delineate individual domains and each domain subdivides into smaller ones over multiple generations [START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF]. By experimentally modifying the boundary conditions and geometry, Kim et al. [START_REF] Kim | Hierarchical folding of elastic membranes under biaxial compressive stress[END_REF] demonstrated control over the final network morphology.
Challenges and discussion

Although considerable progress has been made over the last decade on the modeling of morphological wrinkling in film/substrate systems, there remain many significant and interesting problems that deserve further investigation. Advances in theoretical and numerical modeling in this field are impeded by a number of mechanical, mathematical and numerical complexities. For instance, the surface instability of stiff layers on soft materials usually involves strong geometrical nonlinearities, large rotations, large displacements, large deformations, loading path dependence, multiple symmetry-breakings, nonlinear constitutive relations, localizations, and other complexities, which makes theoretical and numerical analyses quite difficult. The morphological post-buckling evolution and mode shape transition beyond the critical load are highly complicated, especially in 3D cases, and the conventional numerical methods of post-buckling analysis have difficulties in predicting and detecting all the bifurcations and the associated wrinkling modes on their complex response curves. Reliable and robust path-following techniques are in strong demand for post-buckling analyses of film/substrate systems, especially for predicting and tracing surface mode transitions, yet such techniques have rarely been explored in the literature. Moreover, in conventional finite element analysis, the post-buckling simulation may suffer from convergence issues if the film is much stiffer than the substrate. In most previous works, the 2D or 3D spatial problem is often discretized by a spectral method or an FFT algorithm, which is fairly computationally inexpensive but prescribes periodic boundary conditions and simple geometries. Furthermore, within the spectral or FFT framework, it is quite difficult to capture the localized behavior that often occurs in soft matter under complex geometries and boundary conditions.
It was recognized early on by Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] that such systems can also be studied by using finite element methods, which are more computationally expensive but more flexible for describing complex geometries and general boundary conditions, and which allow the use of commercial computer codes. However, only a few subsequent works can be found in the literature, and 3D finite element simulations of film/substrate buckling were studied only in a few papers [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. Besides, the post-buckling evolution and mode transition of surface wrinkles in 3D film/substrate systems have only been studied in the case of periodic cells [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. In particular, Cai et al. [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF] employed an analytical upper-bound method through a 3D model considering a nonlinear Föppl-von Kármán plate bonded to a linear elastic foundation. Through performing 3D finite element simulations on a specific unit cell with periodic boundary conditions, a new equilateral triangular mode was identified and the mode transition from a triangular mode to an asymmetric three-lobed mode under increasing overstress was analyzed. Still, there is a shortage of studies concerning the effect of boundary conditions on instability patterns, which is important in practice. Localizations are often caused by stress concentration due to the real boundary and loading conditions or by symmetry-breaking, and the finite element method is a good way to capture localized behavior such as folding or ridging, while the spectral or FFT technique has difficulty doing so.
Overall, pattern formation and evolution deserve further numerical investigation, especially through the finite element method, which can provide an overall view and insight into the formation and evolution of wrinkle patterns under any condition. Can one obtain the variety of wrinkling patterns reported in the literature by using classical finite element models? Can one predict and trace the whole evolution path of buckling and post-buckling of this system? Can one capture the exact post-buckling modes on strongly nonlinear response curves? Under what kind of loading and boundary conditions can each type of pattern be observed, and at what values of the bifurcation loads? What are the effects of boundary conditions and material properties on pattern formation and evolution? What are the critical parameters influencing their wavelengths and amplitudes? These questions will be addressed in this thesis, from 2D to 3D cases, from analytical to numerical methods, from classical to multi-scale perspectives. We will start from developing 2D and 3D classical finite element film/substrate models that consider nonlinear geometry for the film and linear elasticity for the substrate, as often employed in the literature [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF], by taking into account the boundary effects and localizations, by following the post-buckling evolution path,
by predicting the bifurcation points, and then move to the multi-scale standpoint. The main advanced numerical methods needed to solve these problems and to develop these subjects are briefly introduced in the following sections: firstly, an advanced nonlinear resolution perturbation technique, namely the Asymptotic Numerical Method (ANM), as a path-following approach to predict bifurcations; secondly, a recent Fourier-series-based multi-scale modeling technique for cellular instability pattern formation; and lastly, the well-known Arlequin method for coupling multiple models/domains between different scales or levels.

Asymptotic Numerical Method for nonlinear resolution

As mentioned in the last section, the post-buckling evolution of film/substrate systems beyond the critical load is usually complicated, while conventional numerical resolution approaches have difficulties in obtaining a reliable convergent solution and in predicting all the bifurcations as well as the associated instability patterns on their evolution paths. These challenges pose a need for developing an effective nonlinear resolution technique that can follow the post-buckling evolution path of film/substrate instability problems. In this thesis, the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] will be incorporated into 2D and 3D models of film/substrate systems to resolve extremely strong geometrical nonlinearities. It offers considerable advantages in terms of efficiency and reliability, providing a robust path-following technique compared with classical iterative algorithms.
For cases involving multiple bifurcations, it would probably be rather difficult to achieve a convergent solution on very strongly nonlinear response curves by using conventional numerical methods. The solution to many physical problems can be achieved through the resolution of nonlinear problems depending on a real parameter λ. The corresponding nonlinear system of equations can be written as

R(U, λ) = 0,   (1.2)

where U ∈ R^n is the unknown vector and R ∈ R^n is a vector of n equations that are supposed to be sufficiently smooth with respect to U and λ. The main idea of the ANM is to compute a solution path (U, λ) of the nonlinear system (1.2) using a step-by-step method, with each step corresponding to a truncated Taylor series. The starting point (U_{j+1}, λ_{j+1}) of step (j + 1) is determined using the last solution point (U_j, λ_j) of the previous step j (see Fig. 1.5). The ANM has been proven to be an efficient path-following technique to deal with various nonlinear problems both in solid and fluid mechanics [START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][START_REF] Cadou | ANM for stationary navierstokes equations ans with petrov-galerkin formulation[END_REF][4][START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF][START_REF] Nezamabadi | A multilevel computational strategy for handling microscopic and macroscopic instabilities[END_REF][START_REF] Nezamabadi | Solving hyperelastic material problems by asymptotic numerical method[END_REF][START_REF] Lazarus | Continuation of equilibria and stability of slender elastic rods using an asymptotic numerical method[END_REF][START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF][START_REF] Cong | Simulation of instabilities in thin nanostructures by a perturbation approach[END_REF]. The idea of the ANM is to associate a perturbation technique with an appropriate numerical resolution scheme, such as the finite element method. This allows transforming a given nonlinear problem into a set of linear problems to be solved successively, leading to a numerical representation of the solution in the form of power series truncated at relatively high orders. Once the series are fully computed, an accurate approximation of the solution path is available inside a determined radius of convergence. Unlike classical incremental-iterative algorithms, this method does not require iterative corrections thanks to the high-order predictor [START_REF] Lahman | High-order predictor-corrector algorithms[END_REF][5]. For efficiency concerns, all governing equations need to be set into quadratic form before applying the series expansion. Details of these procedures can be found in [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF].

Perturbation technique

Starting from a known solution point (U_j, λ_j), the solution path is represented by truncated power series in a path parameter a:

U(a) = U_j + Σ_{p=1}^{∞} a^p U_p = U_j + a U_1 + a² U_2 + …,
λ(a) = λ_j + Σ_{p=1}^{∞} a^p λ_p = λ_j + a λ_1 + a² λ_2 + …   (1.3)

This solution path satisfies Eq. (1.2), which can be written as

0 = R(U_j + a U_1 + a² U_2 + …, λ_j + a λ_1 + a² λ_2 + …)
  = R(U_j, λ_j) + ∂R/∂U|_j (a U_1 + a² U_2 + …) + ∂R/∂λ|_j (a λ_1 + a² λ_2 + …)
    + (1/2) ∂²R/∂U²|_j (a U_1 + a² U_2 + …)(a U_1 + a² U_2 + …) + …   (1.4)

Considering the fact that R(U_j, λ_j) = 0, and after rearranging the terms in increasing powers of a, the above equation becomes

0 = a { ∂R/∂U|_j U_1 + ∂R/∂λ|_j λ_1 }
  + a² { ∂R/∂U|_j U_2 + ∂R/∂λ|_j λ_2 + (1/2) ∂²R/∂U²|_j U_1 U_1 + (1/2) ∂²R/∂λ²|_j λ_1² + ∂²R/∂U∂λ|_j λ_1 U_1 }
  + a³ { ∂R/∂U|_j U_3 + ∂R/∂λ|_j λ_3 + terms depending on U_1, U_2, λ_1, λ_2 }
  + …
  + a^p { ∂R/∂U|_j U_p + ∂R/∂λ|_j λ_p − F^nl_p }
  + …   (1.5)

where −F^nl_p gathers the terms depending only on U_1, …, U_{p−1} and λ_1, …, λ_{p−1}. Or, in a condensed form:

R(U(a), λ(a)) = a R_1 + a² R_2 + a³ R_3 + … = 0.   (1.6)

Eq. (1.6) should be verified for each value of a. Therefore, the resolution of the nonlinear system (1.2) leads to the resolution of a recurrent system of linear equations of the following form:

R_p = 0  for p ≥ 1.   (1.7)

At each order p, the vector of equations R_p = 0 is a linear system with respect to U_p and λ_p:

∂R/∂U|_j U_p + ∂R/∂λ|_j λ_p = F^nl_p,   (1.8)

where the right-hand side F^nl_p depends only on the terms of the previous orders.

Path parameter

Eq. (1.8) represents a system of n linear equations with (n + 1) unknowns. Therefore, a complementary condition is needed, as is also required in predictor-corrector methods. This complementary condition can be found by defining the path parameter a as a quasi-arc-length parameter (the projection of the increment onto the tangent direction (U_1, λ_1)):

a = (U − U_j) · U_1 + (λ − λ_j) λ_1.   (1.9)

After substituting Eq. (1.3) into Eq. (1.9), one obtains the supplementary condition at each order:

∥U_1∥² + λ_1² = 1,
U_p · U_1 + λ_p λ_1 = 0.   (1.10)

The calculation of step j is achieved by computing the N right-hand sides F^nl_p and solving the N linear problems in Eqs. (1.8) and (1.10). In contrast to other predictor-corrector methods, only one matrix ∂R/∂U|_j needs to be assembled and factorized, which saves a significant amount of computation time.
Continuation approach

To achieve an efficient algorithm, the analysis of the validity range and the definition of a new starting point should be adaptive, i.e. the value of the path parameter a has to be automatically determined by satisfying a given accuracy tolerance. A simple way to define the value of a_max follows from the remark that polynomial solutions are very similar inside the radius of convergence of the series, but tend to separate rapidly when this radius is reached. Thus, a simple criterion is to require that the difference of displacements between two successive orders be smaller than a given precision parameter δ:

Validity range:  a_max = ( δ ∥u_1∥ / ∥u_n∥ )^{1/(n−1)},   (1.11)

where the notation ∥•∥ stands for the Euclidean norm. The method for determining a_max consists in imposing an accuracy parameter δ and requiring that the relative difference between the solutions of two successive orders remain small in comparison with δ [START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF]. Note that a_max is computed in an a posteriori manner based on the available series coefficients. The step length determination in the ANM framework can therefore be considered as fully adaptive and completely automatic, as opposed to classical iterative algorithms. When there is a bifurcation point on the solution path, the radius of convergence is defined by the distance to the bifurcation. Thus, the step length becomes smaller and smaller, which looks as if the continuation process "knocks" against the bifurcation [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF]. This accumulation of small steps is a very good indicator of the presence of a bifurcation point on the path. All the bifurcations can be easily identified in this way by the user without any special tool.
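To make the procedure concrete, the following self-contained sketch applies Eqs. (1.3)-(1.11) to a one-degree-of-freedom problem. It is purely illustrative: the scalar model R(u, λ) = u + u³ − λ, the truncation order N and the tolerance δ are our own choices, not values taken from the thesis.

```python
import numpy as np

# ANM continuation for the scalar problem R(u, lam) = u + u**3 - lam = 0.
# Each step expands (u, lam) in power series of the path parameter a,
# Eqs. (1.3)-(1.10), then advances by the adaptive length a_max, Eq. (1.11).

N = 15        # truncation order of the series (illustrative choice)
delta = 1e-6  # accuracy parameter for the step length

def anm_step(uj, lj):
    """Compute the series coefficients (u_p, l_p), p = 1..N, at (uj, lj)."""
    K = 1.0 + 3.0 * uj**2          # tangent operator dR/du at (uj, lj)
    u = np.zeros(N + 1)
    l = np.zeros(N + 1)
    # Order 1: K*u1 - l1 = 0 with the normalization u1**2 + l1**2 = 1 (1.10)
    l[1] = 1.0 / np.sqrt(1.0 + 1.0 / K**2)
    u[1] = l[1] / K
    for p in range(2, N + 1):
        # F gathers the lower-order contributions of the cubic term u**3
        F = 3.0 * uj * sum(u[i] * u[p - i] for i in range(1, p)) \
            + sum(u[i] * u[k] * u[p - i - k]
                  for i in range(1, p) for k in range(1, p - i))
        # Solve K*u_p - l_p = -F together with u_p*u1 + l_p*l1 = 0
        u[p] = -F * l[1] / (K * l[1] + u[1])
        l[p] = -u[1] * u[p] / l[1]
    return u, l

uj, lj = 0.0, 0.0
for step in range(5):
    u, l = anm_step(uj, lj)
    amax = (delta * abs(u[1]) / abs(u[N]))**(1.0 / (N - 1))  # Eq. (1.11)
    a = np.array([amax**p for p in range(N + 1)])
    uj = uj + np.dot(a[1:], u[1:])
    lj = lj + np.dot(a[1:], l[1:])
    print(f"step {step}: lam = {lj:.4f}, u = {uj:.4f}, "
          f"residual = {uj + uj**3 - lj:.2e}")
```

Note that each step forms the scalar tangent K only once and reuses it at every order, which mirrors the single matrix factorization per step mentioned above.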
Bifurcation indicator

As mentioned in Section 1.1.1, complex systems such as stiff thin layers bound to soft materials often involve strong geometrical nonlinearities and multiple bifurcations, which makes the numerical resolution quite difficult. The detection of bifurcation points is a real challenge. In the literature, test functions are widely introduced to compute critical points. They are scalar functions vanishing at a singular point. Once a critical point is detected between two states, its accurate position can be determined by using a bisection or secant iteration scheme. Two classes of methods, i.e. direct methods and indirect methods, can be distinguished. In direct methods, the existence condition of critical points is embedded in the system of equations to be solved [START_REF] Weinitshke | On the calculation of limit and bifurcation points in stability problems of elastic shells[END_REF][START_REF] Wriggers | A quadratically convergent procedure for the calculation of stability point in finite element analysis[END_REF][START_REF] Wriggers | A general procedure for the direct computation of turning and bifurcation point[END_REF], and the solution of the system is exactly the critical point. The indirect methods consist in computing a solution branch and evaluating test functions. The most popular test functions are the determinant or the smallest eigenvalue of the tangent stiffness matrix. To distinguish a bifurcation point from a limit point, the current stiffness parameter is used [START_REF] Wagner | A simple method for the calculation of postcritical branches[END_REF]. Once the bifurcation points are obtained, the associated eigenmodes can be captured. The resulting nonlinear problems are usually solved by the Newton-Raphson method with a choice of piloting strategy [START_REF] Batoz | Incremental displacement algorithms for nonlinear problems[END_REF].
Although much progress has been made using the Newton-Raphson method, an efficient and reliable algorithm is quite difficult to establish. Indeed, considerable computing time would be spent in the bisection sequence and corrector iterations because of the very small step lengths close to the bifurcation. In the framework of the ANM, a bifurcation indicator has been proposed to detect bifurcation points [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. It is a scalar function, obtained by introducing a fictitious perturbation force into the problem, which becomes zero exactly at the bifurcation point. Indeed, this indicator measures the intensity of the system response to perturbation forces, and it can be computed explicitly along the equilibrium branch through the perturbation technique. The roots of this function characterize singular points easily and exactly, since the function is known in a closed form. By evaluating it along an equilibrium branch, all the critical points existing on this branch and the associated bifurcation modes can be determined. In this thesis, these generic bifurcation schemes will be explicitly incorporated into 2D and 3D models of film/substrate systems in order to carry out multiple-bifurcation analyses that involve capturing exact bifurcation points and the corresponding instability modes.
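As a deliberately minimal scalar illustration of such a test function (not the actual ANM indicator of the cited works, which is expanded in power series along the branch), one can monitor the response to a fictitious perturbation force on a pitchfork model problem; the model R(u, λ) = λu − u³ and all names below are our own illustrative choices.

```python
# Toy test function on the pitchfork problem R(u, lam) = lam*u - u**3,
# whose trivial branch is u = 0. A fictitious unit perturbation force is
# applied to the tangent problem K_T(lam) * v = mu; the indicator mu is
# the force intensity producing a unit response, so it is proportional
# to K_T and vanishes exactly at the singular (bifurcation) point.

def tangent(lam, u=0.0):
    # dR/du = lam - 3*u**2, evaluated along the trivial branch u = 0
    return lam - 3.0 * u**2

def indicator(lam):
    # mu such that K_T * v = mu with |v| = 1, i.e. mu = K_T here
    return tangent(lam)

# Locate the zero of the indicator by bisection on a bracketing interval
a, b = -1.0, 0.8
assert indicator(a) * indicator(b) < 0
for _ in range(60):
    m = 0.5 * (a + b)
    if indicator(a) * indicator(m) <= 0:
        b = m
    else:
        a = m
lam_c = 0.5 * (a + b)
print(lam_c)  # close to 0.0, the pitchfork bifurcation of lam*u - u**3
```

In the genuine ANM version, the perturbed response is obtained in closed form along the equilibrium branch, so the root search above is replaced by solving a known analytic function for its zeros.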
Multi-scale modeling for instability pattern formation

Instability pattern formation is a very common phenomenon in nature [START_REF] Mahadevan | Self-organized origami[END_REF] and in scientific fields [START_REF] Cross | Pattern formation out of equilibrium[END_REF][START_REF] Aranson | The world of the complex Ginzburg-Landau equation[END_REF][START_REF] Hoyle | Pattern formation, an introduction to methods[END_REF]. In these cases, the spatial shape of the system response looks like a slowly modulated oscillation. Direct calculation of such cellular instabilities in a large sample often requires numerous degrees of freedom, as in membrane wrinkling [START_REF] Rossi | Simulation of light-weight membrane structures by wrinkling model[END_REF], Rayleigh-Bénard convection in large boxes [START_REF] Newell | Finite band width, finite amplitude convection[END_REF][START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF][START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF], buckling of long structures [START_REF] Damil | Wavelength selection in the postbuckling of a long rectangular plate[END_REF][START_REF] Léotoing | Nonlinear interaction of geometrical and material properties in sandwich beam instabilities[END_REF][START_REF] Abdelmoula | Influence of distributed and localized imperfections on the buckling of cylindrical shells[END_REF], microbuckling of carbon nanotubes [START_REF] Ru | Axially compressed buckling of a double-walled carbon nanotube embedded in an elastic medium[END_REF][START_REF] He | Buckling analysis of multi-walled carbon nanotubes: a continuum model accounting for van der Waals interaction[END_REF], fiber microbuckling of composites [START_REF] Kyriakides | On the compressive failure of fiber reinforced composites[END_REF][START_REF] Waas | Compressive failure of composites, part II: Experimental studies[END_REF][START_REF] Drapier | A structural approach of plastic microbuckling in long fibre composites: comparison with theoretical and experimental results[END_REF], surface morphological instabilities of stiff thin films attached to compliant substrates [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF], and flatness defects in sheets induced by industrial processes [START_REF] Fischer | Buckling phenomena related to rolling and levelling of sheet metal[END_REF][START_REF] Jacques | Buckling and wrinkling during strip conveying in processing lines[END_REF][2]. Therefore, from the computational point of view, it is better to apply reduced-order models, not only to satisfy the desired accuracy but also to dramatically cut down the computational time and cost. Classically, such cellular instabilities can be modeled by bifurcation analysis according to the famous Ginzburg-Landau theory [153,[START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]. The Ginzburg-Landau equation is derived from an asymptotic double-scale analysis. At the local level, one accounts for the periodic nature of the buckles, while the slow variations of the envelopes are described at the macroscopic scale. Nevertheless, the macroscopic evolutions governed by the Ginzburg-Landau equation have some drawbacks. First, this bifurcation equation is valid only close to the critical state. Second, it cannot account for the coupling between global nonlinear behavior and the appearance of patterns, for instance, for structures undergoing both local and global buckling.
Third, within the Ginzburg-Landau double-scale approach, it is not easy to deduce consistent boundary conditions. A new approach based on the concept of Fourier series with slowly varying coefficients has been presented recently, which was developed to study instabilities with nearly periodic patterns [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. It has been successively applied to the buckling of a long beam lying on a nonlinear elastic foundation [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF], to the global and local instability interaction of sandwich structures [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF], and to membrane wrinkling [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. This multi-scale approach is based on the Ginzburg-Landau theory [153,[START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]. In the proposed theory, the envelope equation is derived from an asymptotic double-scale analysis and the nearly periodic fields (reduced model) are represented by Fourier series with slowly varying coefficients. This mathematical representation yields macroscopic models in the form of generalized continua. In this technique, the macroscopic field is defined by the Fourier coefficients of the microscopic field.
It has been established that the models obtained in this way are consistent with the Ginzburg-Landau technique, but they can remain valid away from the bifurcation [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Besides, the coupling between global and local buckling can be taken into account in a computationally efficient manner [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF]. Moreover, this approach could be very useful for analyzing instability problems like Rayleigh-Bénard convection, whose discretization requires a huge number of degrees of freedom. As in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], the multi-scale approach based on the concept of Fourier series with slowly varying coefficients will be adopted in this thesis. Let us consider a physical phenomenon described by the field U(x), x ∈ R. The instability wave number q is supposed to be known. All the unknowns of the model, U = {u(x), v(x), n(x), …}, are described as Fourier series, whose coefficients vary more slowly than the harmonics:

U(x) = Σ_{j=−∞}^{+∞} U_j(x) e^{ijqx},   (1.12)

where the Fourier coefficient U_j(x) denotes the envelope of the j-th order harmonic and U_{−j}(x) denotes its conjugate. The macroscopic unknown fields U_j(x) vary slowly over a period [x, x + 2π/q] of the oscillation. It is worth mentioning that at least two functions, U_0(x) and U_1(x), are necessary to describe the nearly periodic patterns, as depicted in Fig. 1.6. The zero-order variable U_0(x) is identified as the mean value and U_1(x) represents the envelope or amplitude of the spatial oscillations. Notice that U_0(x) is real-valued and U_1(x) can be expressed as U_1(x) = r(x)e^{iφ(x)}.
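A minimal numerical sketch of the representation (1.12), truncated to the harmonics j = −1, 0, 1, may help fix ideas: a synthetic signal with a known mean U_0 and a slowly varying envelope U_1 is built, and both envelopes are then recovered by demodulation and averaging over one period. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Truncated form of Eq. (1.12): U(x) = U0(x) + 2*Re( U1(x) * e^{iqx} ).
# The envelopes are recovered by multiplying by e^{-iqx} (demodulation)
# and low-pass filtering with a moving average over one period 2*pi/q.

q = 20.0                                 # instability wave number
x = np.linspace(0.0, 10.0, 4000)
U0 = 0.3                                 # mean field (zero-order envelope)
amp = np.exp(-((x - 5.0) / 3.0)**2)      # slow amplitude r(x), phase = 0
U1 = 0.5 * amp                           # first-order envelope
u = U0 + 2.0 * np.real(U1 * np.exp(1j * q * x))   # the oscillating field

# Moving average over one period as a crude low-pass filter
n = int(round(2.0 * np.pi / q / (x[1] - x[0])))   # samples per period
kernel = np.ones(n) / n
U1_est = np.convolve(u * np.exp(-1j * q * x), kernel, mode="same")
U0_est = np.convolve(u, kernel, mode="same").real

# Check the recovery away from the boundaries of the domain
inner = slice(n, -n)
err1 = np.max(np.abs(U1_est[inner] - U1[inner]))
err0 = np.max(np.abs(U0_est[inner] - U0))
print(err0, err1)  # both small, since the envelopes vary slowly
```

The averaging step is exactly where the slow-variation hypothesis enters: it suppresses the e^{±iqx} and e^{−2iqx} contributions only because U_0 and U_1 are nearly constant over one period. Note also that the estimate degrades near the boundaries, which is precisely the motivation for the Arlequin coupling discussed next.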
The latter mathematical expression represents the first harmonic, where r(x) is the amplitude modulation and φ(x) is the phase modulation. The main idea of macroscopic modeling is to deduce differential equations satisfied by the amplitudes U_j(x). In the present thesis, this Fourier-related multi-scale modeling methodology will first be applied to the buckling of an elastic beam subjected to a nonlinear Winkler foundation. Then, a generalized framework for the macroscopic modeling of film/substrate systems will be proposed. Lastly, it will be specialized to 2D and 3D cases with simplifications and assumptions to save computational cost when studying nearly periodic morphological instabilities in film/substrate systems.

Arlequin method for model coupling

In this thesis, the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] will be used to analyze the influence of boundary conditions on the multi-scale modeling of instability pattern formation. More specifically, the coarse model, obtained from a suitable Fourier-related technique [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] presented in the last section, is inaccurate in the boundary region. Hence, near the boundary, the full model is employed, which is then coupled to the coarse model in the remainder of the domain within the Arlequin framework. This multiple-domain coupling strategy offers a flexible way to design multi-scale models so as to balance the desired accuracy and the computational cost. Despite considerable advances in computational techniques and computing power, direct simulation of those cellular instability problems is still not a viable option.
For instance, the discretization of Rayleigh-Bénard convection problems in large boxes [START_REF] Newell | Finite band width, finite amplitude convection[END_REF][START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF] requires a huge number of degrees of freedom [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF], which is a challenge for direct computations. Therefore, there is a need for reliable and efficient techniques, formulated in a consistent manner, that can take into account the most important scales involved in the goal of the simulation, while allowing one to flexibly choose the desired level of accuracy and detail of description. Generally, direct finite element modeling of structures involving local defects such as cracks, holes and inclusions is very cumbersome when local refinement needs to be considered. To overcome these difficulties, important innovations and efficient numerical methods have been developed over several decades to improve the flexibility of the finite element method. Let us list a few: in particular, the meshless method [START_REF] Belytschko | Element-free Galerkin methods[END_REF], the sequential adaptation methods (i.e. h-adaptation, p-adaptation and hp-adaptation), the multigrid (MG) method, the partition of unity finite element method (PUFEM) [START_REF] Babuška | The partition of unity method[END_REF], the generalized finite element method (GFEM) [START_REF] Strouboulis | The design and analysis of the Generalized Finite Element Method[END_REF], and the extended finite element method (XFEM) [START_REF] Belytschko | Elastic crack growth in finite-elements with minimal remeshing[END_REF][START_REF] Moës | A finite element method for crack growth without remeshing[END_REF]. All these approaches are essentially monomodel and may either lack flexibility or relevance to address the above issues.
Later, hierarchical global-local strategies, including the s-version method by Fish [START_REF] Fish | The s-version of finite element method[END_REF][START_REF] Fish | Adaptive s-method for linear elastostatics[END_REF][START_REF] Fish | Adaptive and hierarchical modelling of fatigue crack propagation[END_REF] and the Arlequin method by Ben Dhia et al. [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF], allow the superposition of different mechanical models and different meshes. The s-version method is a multilevel solution scheme where each level is discretized using a finite element mesh of arbitrary element size and polynomial order. It superimposes additional locally refined meshes on an existing global one, thus allowing different modeling in the superimposed meshes. Like the s-method, the Arlequin method aims at creating a multimodel framework. As opposed to the s-method, the models are not added but overlapped and glued to each other in the Arlequin framework. In addition, since not only displacement fields but also complete mechanical states (e.g. stress and strain) are potentially allowed to concurrently exist in the superposition zone, the Arlequin method has no redundancy problem. Besides, the iteration of the superposition process (by taking care of gluing zones) can potentially lead to relevant multi-scale models. Over the last decade, the Arlequin method or the bridging domain method [START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF] have been successfully applied to couple heterogeneous models in various cases.
One can couple classical continuum and shell models [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF], particle and continuum models [START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF],130,[START_REF] Bauman | Adaptive multiscale modeling of polymeric materials with Arlequin coupling and Goals algorithms[END_REF][START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF][START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF], heterogeneous meshes [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF] or, more generally, heterogeneous discretizations [START_REF] Ben Dhia | On the use of XFEM within the Arlequin framework for the simulation of crack propagation[END_REF][START_REF] Biscani | Variable kinematic plate elements coupled via Arlequin method[END_REF]. By superposing and gluing models, the Arlequin method offers an extended modeling framework for the design of various mechanical models for engineering materials and structures in a rather flexible way. In this thesis, within the Arlequin framework, the transition between a fine and a coarse model will be discussed, which is a generic but difficult topic when applying bridging techniques to reduced-order models or multi-scale models. In particular, a new bridging technique based on a nonlocal reduction operator defined by Fourier series will be presented and highlighted.
Energy distribution

In the Arlequin framework, the domain of the whole mechanical system is partitioned into two overlapping sub-zones Ω_1 and Ω_2. Let S_g denote the gluing zone, supposed to be a non-zero measured polyhedral subset of S = Ω_1 ∩ Ω_2. The potential energy contributions of the whole system and of the external load can be respectively expressed as

$$P^{int}_i(u_i) = \frac{1}{2}\int_{\Omega_i} \alpha_i\,\sigma(u_i):\varepsilon(u_i)\,\mathrm{d}\Omega, \qquad (1.13)$$

$$P^{ext}_i(u_i) = \int_{\Omega_i} \beta_i\, f\cdot u_i\,\mathrm{d}\Omega. \qquad (1.14)$$

In order to have consistent modeling and not to count the energy in the overlapping domain twice, the energy associated to each domain is balanced by weight functions, represented by α_i for the internal work and β_i for the external work. These weight functions are assumed to be positive and piecewise continuous in Ω_i and satisfy the following equations:

$$\begin{cases}
\alpha_1 = \beta_1 = 1, & \text{in } \Omega_1\setminus S,\\
\alpha_2 = \beta_2 = 1, & \text{in } \Omega_2\setminus S,\\
\alpha_1+\alpha_2 = \beta_1+\beta_2 = 1, & \text{in } S.
\end{cases} \qquad (1.15)$$

One can choose constant, linear, cubic, or higher-order polynomial functions for energy distribution (see Fig. 1.8). More details on the selection of these functions can be found in [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF][START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF].

Coupling choices

The Arlequin method aims at connecting two spatial approximations of an unknown field, generally a fine approximation U_f and a coarse approximation U_r. The idea is to require that these two approximations are neighbors in a weak and discrete sense and to introduce Lagrange multipliers in the corresponding differential problems.
A major concern in the Arlequin framework is to define an appropriate coupling operator. At the continuous level, a bilinear form must be chosen, which can be of L²-type, H¹-type or energy type [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. The H¹-type coupling operator C can be defined as

$$C(\lambda, u) = \int_{S_g}\left(\lambda\cdot u + \ell^2\,\varepsilon(\lambda):\varepsilon(u)\right)\mathrm{d}\Omega. \qquad (1.16)$$

When ℓ = 0, it becomes an L²-type coupling operator. The choice of the length ℓ and the comparisons between H¹ and L² couplings have been well discussed in the literature [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Guidault | On the L 2 coupling and the H 1 couplings for an overlapping domain decomposition method using Lagrange multipliers[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF]. The first and important application of the Arlequin method is the coupling between two different meshes discretizing the same continuous problem: in this case, the mediator problem should be discretized by a coarse mesh to avoid locking phenomena [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] and spurious stress peaks [START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF].
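To make the discrete form of such a coupling concrete: for ℓ = 0, the L²-type operator reduces, after discretization, to a rectangular mass-like matrix between the multiplier basis and the displacement basis. A minimal sketch, assuming a hypothetical gluing zone S_g = [1, 2], a two-element coarse multiplier mesh and an eight-element fine mesh, with piecewise-linear hat functions and trapezoidal quadrature:

```python
import numpy as np

def hat(nodes, i, x):
    # piecewise-linear hat function of node i: 1 at nodes[i], 0 elsewhere
    v = np.zeros(len(nodes)); v[i] = 1.0
    return np.interp(x, nodes, v)

xc = np.linspace(1.0, 2.0, 3)      # coarse multiplier mesh (2 elements)
xf = np.linspace(1.0, 2.0, 9)      # fine displacement mesh (8 elements)
xq = np.linspace(1.0, 2.0, 4001)   # quadrature points on S_g
w = np.full(len(xq), xq[1] - xq[0]); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid weights

# L2 coupling matrix C_ij = ∫_{S_g} φ_i(x) ψ_j(x) dx
C = np.array([[np.sum(hat(xc, i, xq)*hat(xf, j, xq)*w)
               for j in range(len(xf))] for i in range(len(xc))])
```

Since the fine hat functions form a partition of unity on S_g, each row of C sums to the integral of the corresponding coarse hat (here 0.25, 0.5, 0.25), and the total sum equals the measure of S_g.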
In a recent paper [START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF], the origin of these so-called "ghost forces" was carefully analyzed, and some corrections were proposed with an appropriate choice of weights and especially by introducing interaction forces between the coarse and the fine model. We now present a very representative example [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] to explicitly show the differences in the displacement field caused by the choice of mediator space. This example was previously studied by Ben Dhia and Rateau [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] to illustrate the relationship between the coupling operator and the deterioration of the condition number. Let us consider a 1D small-strain linear elastic mechanical problem, which consists in evaluating the vertical displacement field, u, in a vertical column with uniform cross section, clamped at both ends and loaded by its own weight. For convenience, the Young's modulus E, the section S, the density ρ and the gravity factor g are chosen to satisfy ρg = ES. Continuous 1D linear elements are used to approximate the displacement fields as well as the Lagrange multiplier fields. Different refinements of the superimposed models are considered. Precisely, we define the coarse domain as Ω_c = [0, 2] and the fine domain as Ω_f = [1, 3]. The weight functions α_f and β_f are associated to the fine model. The analytical solution of the displacement field is taken as the reference. Figs. 1.9 and 1.10 illustrate the influence of the mediator space used to discretize the Lagrange multiplier field on the mechanical states solution.
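The analytical reference of this benchmark is easy to reproduce by a direct monomodel computation. A minimal sketch, assuming the normalization ES = ρg = 1 and a total column length L = 3, so that the equilibrium equation ES u'' + ρg = 0 with clamped ends gives u(x) = x(L − x)/2:

```python
import numpy as np

L, ne = 3.0, 30                  # column length and number of linear elements (assumed)
h = L/ne
x = np.linspace(0.0, L, ne + 1)
K = np.zeros((ne + 1, ne + 1)); f = np.zeros(ne + 1)
for e in range(ne):
    K[e:e+2, e:e+2] += (1.0/h)*np.array([[1.0, -1.0], [-1.0, 1.0]])  # ES = 1
    f[e:e+2] += (h/2.0)*np.ones(2)                                   # ρg = 1
free = np.arange(1, ne)          # clamp both ends: u(0) = u(L) = 0
u = np.zeros(ne + 1)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
u_exact = x*(L - x)/2.0          # analytical reference
```

For this constant-coefficient 1D problem, linear elements with a consistent load vector reproduce the exact solution at the nodes, which is what the reference curves in Figs. 1.9 and 1.10 represent.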
It can be seen that depending on whether the gluing forces space is chosen based on the glue zone of the fine or the coarse finite element model, the fine mechanical state is either tightly locked to the coarse one (see Fig. 1.9), or linked to an average value in a weak sense (see Fig. 1.10). However, the two connected problems are not always in the same space, as for example when dealing with particle and continuous problems. In this case, a prolongation operator has to be introduced to convert the discrete displacement into a continuous one, and then a connection between continuous fields is performed [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]: this is consistent because the continuous model can be seen as the coarsest one. A similar approach has been applied in the coupling between shell and 3D models. A prolongation operator has been introduced (i.e. from the coarse to the fine level) and the integration is done in the 3D domain but the discretization of the Lagrange multiplier corresponds to a projection on the coarsest problem: thus, in this sense, this coupling of shell/3D is also achieved at the coarse level. In the same spirit, for the coupling between a fine model and a macroscopic envelope model that is discussed in this thesis, the connection should also be done at the coarse level, i.e. between Fourier coefficients. On the contrary, a prolongation operator from the coarse to the fine model had been introduced in the previous paper [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] and the connection had been done at this level. Therefore, one can wonder if the imperfect connection observed in [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] could be improved by introducing a coupling at the relevant level. 
This thesis tries to answer this question by studying again the Swift-Hohenberg equation [START_REF] Swift | Hydrodynamic fluctuations at the convective instability[END_REF], which is a simple and representative example of quasi-periodic instabilities. Very probably, the same ideas can be applied to the 2D macroscopic membrane models that were recently introduced in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. Note that the presented new technique can be considered as a nonlocal coupling since it connects Fourier coefficients involving integrals over a period. A similar nonlocal coupling has been introduced in [START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF] in the case of an atomic-to-continuum coupling, where the atomic model is reduced by averaging over a representative volume.

Chapter conclusion

This chapter aims at positioning the context and focus of the present thesis with consideration of research trends in the field of instability pattern formation modeling of film/substrate systems. The challenges in modeling and post-buckling resolution, especially from a numerical standpoint, were discussed as well. We presented the main technical ingredients needed to solve the problems and to develop the subjects in the following chapters: firstly, an advanced numerical nonlinear resolution technique, namely the Asymptotic Numerical Method (ANM), which is a robust path-following technique; secondly, a Fourier-related multi-scale modeling technique for cellular instability pattern formation; and thirdly, the well-known Arlequin method for coupling multiple models/domains across different scales or different meshes. A simple benchmark case was presented to demonstrate the effect of different choices of mediator space when using the Arlequin method.
In the following chapters, we will present how to model surface wrinkling of film/substrate systems, from 2D to 3D cases and from classical to multi-scale perspectives; how the ANM framework is adapted and incorporated in various finite element models for nonlinear problem resolution and bifurcation analysis; and how to extend and improve the Arlequin framework for instability pattern formation from a multi-scale standpoint.

Chapter 2 Multiple bifurcations in wrinkling analysis of film/substrate systems

Introduction

Wrinkles of a stiff thin layer attached on a soft substrate have been widely observed in nature, and these phenomena have raised considerable interest over the last decade. The underlying mechanism of wrinkling is generally understood as a stress-driven instability, analogous to Euler buckling of an elastic column under compressive stress. The pioneering work of Bowden et al. [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] implied that surface wrinkling of film/substrate systems has a wide range of applications: micro/nano-fabrication of surfaces with ordered patterns and unique wetting properties, buckled single-crystal silicon ribbons, the optics of electronic eye cameras, the measurement of surface mechanical properties of materials, and the design of flexible semiconductor devices and stretchable electronics.
This has led to several theoretical and experimental works on stability, devoted to linear perturbation analysis and nonlinear buckling analysis [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF].
In particular, there are many analytical solutions of models linearized from homogeneous finite deformation, in the case of half-spaces [START_REF] Hayes | Surface waves in deformed elastic materials[END_REF][START_REF] Dowaikh | On surface waves and deformations in a pre-stressed incompressible elastic solid[END_REF][START_REF] Shield | The buckling of an elastic layer bonded to an elastic substrate in plane strain[END_REF] as well as film/substrate systems [START_REF] Steigmann | Plane deformations of elastic solids with intrinsic boundary elasticity[END_REF][START_REF] Cai | On the imperfection sensitivity of a coated elastic half-space[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF]. It has also been recognized that wrinkling can be the precursor of delamination and failure of the film [START_REF] Shield | The buckling of an elastic layer bonded to an elastic substrate in plane strain[END_REF]. However, most previous studies have mainly been constrained to determining the critical conditions of instability and the corresponding wrinkling patterns near the instability threshold. The post-buckling evolution and mode transitions of surface wrinkles have only recently begun to be pursued [START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF]. To the best of our knowledge, the effect of boundary conditions on wrinkling has not yet been investigated.
This study aims at applying advanced numerical methods for bifurcation analysis to typical models of film/substrate systems, and focuses on the post-bifurcation evolution involving secondary bifurcations and advanced wrinkling modes like the period-doubling mode. For this purpose, a finite element (FE) model based on the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] is developed for the nonlinear analysis of wrinkle formation. In this model, the film undergoing moderate deflections is described by Föppl-von Kármán nonlinear elastic plate theory, while the substrate is considered to be a linear elastic solid. This idea is analogous to the modeling of sandwich structures, where each layer is described by its own kinematic formulation and displacement continuity is satisfied at each interface [START_REF] Léotoing | Nonlinear interaction of geometrical and material properties in sandwich beam instabilities[END_REF][START_REF] Léotoing | First applications of a novel unified model for global and local buckling of sandwich columns[END_REF][START_REF] Hu | A novel finite element for global and local buckling analysis of sandwich beams[END_REF]. Instead of solving the resulting nonlinear equations using classical predictor-corrector algorithms such as the Newton-Raphson procedure, we adopt the ANM, which is a particularly efficient continuation technique [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF] requiring no corrector iteration.
It has been proven to be an efficient path-following technique to deal with various nonlinear problems, both in solid mechanics [START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF]4,[START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF][START_REF] Assidi | Regularization and perturbation technique to solve plasticity problems[END_REF][START_REF] Nezamabadi | Solving hyperelastic material problems by asymptotic numerical method[END_REF][START_REF] Lazarus | Continuation of equilibria and stability of slender elastic rods using an asymptotic numerical method[END_REF][START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF] and in fluid mechanics [START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Jawadi | Asymptotic numerical method for steady flow of power-law fluids[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF]. The underlying principle of the ANM is to build up the nonlinear solution branch in the form of relatively high-order truncated power series. The resulting series are then introduced into the nonlinear problem, which transforms it into a sequence of linear problems that can be solved numerically. In this way, one gets approximations of the solution path that are very accurate inside the radius of convergence. Moreover, by taking advantage of the local polynomial approximations of the branch within each step, the algorithm is remarkably robust and fully automatic.
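This series machinery can be illustrated on a toy scalar problem (an assumed example, not the film/substrate model): for u + u³ = λ expanded about the origin with path parameter a = λ, injecting the series u(a) = Σ_p u_p a^p turns the nonlinear equation into one linear equation per order:

```python
import numpy as np

# ANM-style series for the toy problem u + u^3 = λ about (u, λ) = (0, 0),
# with path parameter a = λ (illustrative sketch, assumed example).
order = 20
up = np.zeros(order + 1)
up[1] = 1.0                      # order 1: u_1 = 1
for p in range(2, order + 1):
    # order p: u_p = -[u^3]_p, with [u^3]_p = Σ_{i+j+k=p} u_i u_j u_k
    cube = sum(up[i]*up[j]*up[p - i - j]
               for i in range(1, p) for j in range(1, p - i))
    up[p] = -cube

lam = 0.2                        # evaluation point inside the radius of convergence
u_series = sum(up[p]*lam**p for p in range(1, order + 1))
residual = u_series + u_series**3 - lam
```

For this problem the nearest singularity of the branch lies at |λ| = 2/(3√3) ≈ 0.38, so the truncated series is very accurate well inside this radius, as the vanishing residual confirms; the series is u(λ) = λ − λ³ + 3λ⁵ − 12λ⁷ + …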
Furthermore, unlike incremental-iterative methods, the arc-length step size in the ANM is fully adaptive since it is determined a posteriori by the algorithm. A small radius of convergence and step accumulation appear around a bifurcation and indicate its presence. The detection of bifurcation points is a real challenge. Direct computation of the eigenvalues of the Jacobian matrix is possible, but it costs considerable computing time in bisection sequences. Such methods are expensive and difficult to manage, their only advantage being their availability in libraries like ARPACK [START_REF] Lehoucq | ARPACK User's Guide: Solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods[END_REF]. A more efficient approach is to iteratively solve the nonlinear system characterizing the bifurcation points. This technique was initiated in [140] for ordinary bifurcations and in [START_REF] Jepson | Numerical Hopf bifurcation[END_REF] for Hopf bifurcation, but the convergence of the process depends strongly on the initial guess. There is another class of methods, named "indirect methods", where one computes a "bifurcation indicator" that vanishes at singular points. The determinant of the Jacobian matrix is such a bifurcation indicator, but it is not easy to compute for large-scale problems. That is why we prefer another bifurcation indicator that is a sort of measure of the tangent stiffness, is easily implemented and applied in the ANM framework, and yields the bifurcation mode.
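The determinant-type indicator can be illustrated on a toy pitchfork (an assumed scalar example, not the structural problem): along the trivial branch of R(u, λ) = λu − u³, the tangent stiffness changes sign exactly at the bifurcation point, and scanning for that sign change locates it:

```python
import numpy as np

# Toy pitchfork R(u, λ) = λu - u^3 (assumed example). On the trivial branch
# u = 0, the tangent stiffness J = ∂R/∂u = λ - 3u^2 = λ; its sign change
# (the vanishing of the determinant-type indicator) locates the bifurcation λ = 0.
lams = np.linspace(-1.0, 1.0, 2001)
u0 = 0.0
J = lams - 3.0*u0**2                       # tangent stiffness along the branch
idx = np.where(np.sign(J[:-1]) != np.sign(J[1:]))[0]
lam_crit = 0.5*(lams[idx[0]] + lams[idx[0] + 1])
```

For large systems the scalar J is replaced by the tangent stiffness matrix, whose determinant is expensive to monitor, which is precisely why the ANM indicator mentioned above is preferred.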
Its reliability has been assessed by many applications in solid mechanics [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF] as well as for stationary and non-stationary bifurcations in fluid flows [START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF]. Alternative techniques are also available within the ANM framework, like the method of Padé approximants [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF] or an attractive method of series re-analysis introduced recently in [START_REF] Cochelin | Power series analysis as a major breakthrough to improve the efficiency of Asymptotic Numerical Method in the vicinity of bifurcations[END_REF], but we limited ourselves to the most secure and validated technique. Note that it is also possible to combine several methods to get a very reliable detection technique [START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF].
This chapter explores in greater depth the occurrence and post-bifurcation evolution of aperiodic modes beyond the onset of the primary sinusoidal wrinkling mode. The work presented in this chapter, i.e. multiple bifurcations in wrinkling analysis of thin films on compliant substrates, is, to the best of our knowledge, the first work that addresses the post-bifurcation response of film/substrate systems from a quantitative standpoint; it has been published in the International Journal of Non-Linear Mechanics [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF].

Mechanical model and dimensional analysis

We consider an elastic stiff film of thickness h_f bound to an elastic compliant substrate of thickness h_s, which can vary by orders of magnitude in applications, and whose surface can buckle under in-plane compression (see Fig. 2.1). Upon wrinkling, the film elastically buckles to relax the compressive stress and the substrate concurrently deforms to maintain perfect bonding at the interface. In the following, the elastic potential energy of the system is considered in the framework of Hookean elasticity. The film/substrate system is considered to be two-dimensional. Let x and z be the longitudinal and the transverse coordinates. The top surface of the film is traction free. The deformation of the system is described by a deflection w along the z direction and a displacement u along the x direction.
In the literature, most authors model the film by Föppl-von Kármán nonlinear elastic plate theory [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF], which implies moderate rotations and small strains. When the film wrinkles, the wavelength is much larger than the film thickness, so that Föppl-von Kármán nonlinear elastic plate theory can adequately model the thin film [START_REF] Landau | Theory of Elasticity[END_REF]. Moreover, the substrate is generally considered to be a linear elastic solid with 2D plane strain deformation. These assumptions make sense in the case of a large stiffness ratio E_f/E_s, E_f and E_s being Young's modulus of the film and the substrate, respectively. Typically, we will consider a ratio E_f/E_s of order O(10^4). In this range, critical strains are very small and thus the linear elastic framework is relevant. Other studies consider much softer films or stiffer substrates, typically with a stiffness ratio E_f/E_s of order O(10) [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF], where the critical strain is relatively large and the small-strain framework is no longer appropriate.
Therefore, large-strain constitutive laws such as neo-Hookean hyperelasticity have to be chosen for E_f/E_s ≈ O(10) [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF]. In this chapter, we limit ourselves to the large stiffness ratio E_f/E_s ≈ O(10^4), so that we can choose the most common framework:

1. The film is represented as a geometrically nonlinear beam with the Föppl-von Kármán approximation.
2. The constitutive law of the substrate is 2D linear elasticity.

The same framework, and especially the beam model, was chosen in most of the previously mentioned papers, but obviously a careful application of 2D finite elements should be possible, as for instance in [START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF], with the drawback of leading to very large-scale problems due to the film thinness. Such limitations are not necessary when analytically solving incremental systems, and 2D/3D models can be considered for the film. Nevertheless, beam/plate models are better suited for numerical studies of very thin films. Their consistency with 3D approaches is well established for film/substrate systems, see for instance [START_REF] Ciarlet | A justification of the von Kármán equations[END_REF][START_REF] Cai | Exact and asymptotic stability analyses of a coated elastic halfspace[END_REF].

Figure 2.1: An elastic stiff film resting on a compliant substrate under in-plane compression. The wrinkle wavelength λ_x is much larger than the film thickness h_f. The ratio of the substrate thickness h_s to the wavelength, h_s/λ_x, can vary from a small fraction to a large number.
Based on the above assumptions, the internal potential energy P int of the film/substrate system on a cellular wavelength can be given by a sum of two parts: P int = 1 2 ∫ λx 0 ( E f h f ε 2 x + E f h 3 f K 2 12 ) dx + 1 2 ∫ λx 0 ∫ λz 0 t ε s L s ε s dzdx, (2.1) in which                ε x = du f dx + 1 2 ( dw f dx ) 2 , K = d 2 w f dx 2 , t ε s = { ∂u s ∂x , ∂w s ∂z , ∂u s ∂z + ∂w s ∂x } , (2.2) where E f and L s are Young's modulus of the film and elastic matrix of the substrate, respectively. Let u f and w f be the longitudinal and transverse displacement of the film, while u s and w s denote the longitudinal and transverse displacement of the substrate, respectively. The response of the film can be considered periodic or nearly periodic, but the wavelength λ x is not given a priori. The surface instability does not necessarily affect the entire substrate, but only an influence zone whose depth is of the order of λ z . In Appendix A, it is explained why the ratio α = λ x /λ z is of the order of unity. In order to carry out dimensional analysis, firstly, we introduce some dimensionless variables as follows:                  x = λ x x, z = λ z z, u = h 2 f λ x u, w = h f w, L s = E s L s , (2.3) where the notation • stands for dimensionless variables. By introducing Eq. ( 2.3) into Eq. ( 2.2), one can obtain                ε x = h 2 f λ 2 x ε x , K = h f λ 2 x K, t ε s = h f λ x { 0, 1 α ∂w s ∂z , ∂w s ∂x } + ( h f λ x ) 2 { ∂u s ∂x , 0, 1 α ∂u s ∂z } . (2.4) Since the surface wrinkling wavelength λ x is much larger than the film thickness h f , the term (h f /λ x ) 2 is much smaller than the term h f /λ x and it can be reasonably neglected. Then substituting Eq. (2.4) into Eq. (2.1), consequently, P int = E f h 5 f 2λ 3 x ∫ 1 0 ( ε 2 x + K 2 12 ) dx + E s h 2 f α 2 ∫ 1 0 ∫ 1 0 t ε s L s ε s dzdx. 
(2.5)

The critical wavelength is obtained when the two coefficients E_f h_f^5/(2λ_x^3) and E_s h_f^2 α/2 have the same order, that is,

(E_f h_f^5/(2λ_x^3)) / (E_s h_f^2 α/2) = O(1). (2.6)

Then the critical wavelength λ_x^c reads

λ_x^c = O[ h_f (E_f/(α E_s))^(1/3) ], (2.7)

which is consistent with the analytical solution obtained from linearized stability analysis in [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF]. Therefore, only the dimensionless stiffness matrix L_s appears in the energy (2.5). For an isotropic substrate, the only significant parameter is Poisson's ratio ν_s, and one can suspect that its influence is relatively weak. In other words, for a very large system (L ≫ λ_x, h_s ≫ λ_x) and a given ν_s, the film/substrate system is generic and all these systems are equivalent under the change of variables in Eq. (2.3). In more general situations, some parameters can affect the response of the film/substrate system, especially the wave number (L/λ_x), the relative thickness of the substrate (h_s/λ_x), the anisotropy of the substrate (L_s) and the heterogeneity of the substrate. In what follows, we will discuss the effects of boundary conditions, graded material properties and anisotropy of the substrate in the case of a not too large wave number.

1D reduced model

The film/substrate system will be studied numerically within the previously described framework: a linear elastic substrate and a film modeled with the Föppl-von Kármán approximation.
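As a quick sanity check of the dimensional analysis above, the scaling (2.7) can be evaluated numerically. The sketch below is illustrative only: the material values are assumptions chosen to be representative of a stiff film on an elastomer (they are not taken from this chapter's tables), and the prefactor 2π(1/3)^(1/3) is the classical linearized estimate, consistent with the order of magnitude in Eq. (2.7).

```python
import math

# Order-of-magnitude check of the wavelength scaling in Eq. (2.7). The
# material values below are ASSUMED, representative of a stiff film on an
# elastomeric substrate; they are not taken from this chapter's tables.
E_f, nu_f = 1.3e5, 0.3     # film: Young's modulus (MPa), Poisson's ratio
E_s, nu_s = 1.8, 0.48      # substrate: Young's modulus (MPa), Poisson's ratio
h_f = 1e-3                 # film thickness (mm)

Ef_bar = E_f / (1.0 - nu_f**2)   # plane-strain moduli
Es_bar = E_s / (1.0 - nu_s**2)

# Classical linearized estimate for sinusoidal wrinkling on a deep substrate,
# consistent with lambda_c = O(h_f (E_f/E_s)^(1/3)) of Eq. (2.7):
lam_c = 2.0 * math.pi * h_f * (Ef_bar / (3.0 * Es_bar)) ** (1.0 / 3.0)
print(lam_c)   # a fraction of a millimetre, so lambda_c / h_f >> 1 as assumed
```

The result, a wavelength two orders of magnitude larger than the film thickness, confirms the separation of scales invoked when neglecting the (h_f/λ_x)^2 terms in Eq. (2.4).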
In the literature, this system is generally discretized by Fast Fourier Transform (FFT) [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF], which disregards boundary effects. A standard finite element model would be a good candidate for this problem. For convenience, we will adapt a finite element procedure [START_REF] Léotoing | Nonlinear interaction of geometrical and material properties in sandwich beam instabilities[END_REF][START_REF] Léotoing | First applications of a novel unified model for global and local buckling of sandwich columns[END_REF][START_REF] Hu | A novel finite element for global and local buckling analysis of sandwich beams[END_REF] originally used for sandwich beams. First, the substrate is divided into layers, which leads to a 1D multilayer model. Then this 1D model is discretized by standard 1D elements. The advantages of this approach are to ensure a perfect continuity between the film and the substrate, and to preserve the basic symmetries of the continuous system as well as the bifurcation points arising from these symmetries. The film/substrate system is considered to be two-dimensional and the geometry is shown in Fig. 2.2. Let x and z be the longitudinal and transverse coordinates. The length of the system is denoted by L. The parameters h_f, h_s and h_t are, respectively, the thickness of the film, the thickness of the substrate and the total thickness of the system.
Since the analytical kinematics of the substrate is unknown, a finite element methodology has to be applied: the substrate is first discretized into sublayers along the z direction, and then along the x direction. Consequently, h_i denotes the thickness of each sublayer in the substrate. Since the film is bound to the substrate, the displacement must be continuous at the interface, which is a restrictive assumption used to derive general governing equations.

Kinematics

The kinematics of the film/substrate system after the first discretization along the z direction is given in Eqs. (2.8)-(2.10).

Film, for h_s ≤ z ≤ h_t:
U^f(x, z) = u_f(x) - (z - h_f/2 - h_s) ∂W^f(x, z)/∂x,
W^f(x, z) = w_f(x). (2.8)

1st sublayer, for -1 ≤ η ≤ 1 and (h_s - h_1) ≤ z ≤ h_s:
U^{s1}(x, η) = (1-η)/2 u_0(x) + (1+η)/2 u_1(x),
W^{s1}(x, η) = (1-η)/2 w_0(x) + (1+η)/2 w_1(x). (2.9)

n-th sublayer, for -1 ≤ η ≤ 1 and 0 ≤ z ≤ h_n:
U^{sn}(x, η) = (1-η)/2 u_{n-1}(x) + (1+η)/2 u_n(x),
W^{sn}(x, η) = (1-η)/2 w_{n-1}(x) + (1+η)/2 w_n(x). (2.10)

Here, the longitudinal and transverse displacement fields are represented by U and W, respectively. Let u_f and w_f denote the longitudinal and transverse displacements of the neutral fiber of the film, while u_n and w_n are the longitudinal and transverse displacements at the interfaces between the sublayers of the substrate. The superscripts f and sn stand for the film and the n-th sublayer of the substrate, respectively. The local coordinate along the z direction is denoted by η. Note that in the above kinematic model, the displacement continuity is automatically satisfied at the interfaces between different sublayers.

Finite element formulation

The expression of the internal virtual work of the film/substrate system can be simplified by neglecting the stresses whose energetic contributions are quite low, i.e.
σ^f_zz and σ^f_xz. Consequently, the following constitutive and geometric equations are taken into account:

σ^f_xx = E_f ε^f_xx,
σ^{sn}_xx = (λ_s + 2G_s) ε^{sn}_xx + λ_s ε^{sn}_zz,
σ^{sn}_zz = (λ_s + 2G_s) ε^{sn}_zz + λ_s ε^{sn}_xx,
σ^{sn}_xz = G_s γ^{sn}_xz, (2.11)

ε^f_xx = U^f_{,x} + (1/2)(W^f_{,x})^2,
ε^{sn}_xx = U^{sn}_{,x},
ε^{sn}_zz = W^{sn}_{,z},
γ^{sn}_xz = U^{sn}_{,z} + W^{sn}_{,x}, (2.12)

where E_f, E_s and ν_s are Young's modulus of the film, Young's modulus of the substrate and Poisson's ratio of the substrate, respectively. Lamé's first parameter λ_s is expressed as λ_s = E_s ν_s / [(1 + ν_s)(1 - 2ν_s)], while G_s is the shear modulus of the substrate, expressed as G_s = E_s / [2(1 + ν_s)]. The notation ( ),x stands for the partial derivative ∂/∂x. The principle of virtual work reads

P_int(δu) + P_ext(δu) = 0, ∀ δu ∈ K.A., (2.13)

where δu is the virtual displacement and K.A. represents the space of kinematically admissible displacements, while P_int(δu) and P_ext(δu) are the internal and external virtual work, respectively. The internal virtual work of the system is given as

P_int(δu) = -∫_{Ω_f} σ^f_xx δε^f_xx dΩ - Σ_{sn} ∫_{Ω_{sn}} (σ^{sn}_xx δε^{sn}_xx + σ^{sn}_zz δε^{sn}_zz + σ^{sn}_xz δγ^{sn}_xz) dΩ, (2.14)

where Ω_f and Ω_{sn} stand for the domain of the film and of the n-th sublayer in the substrate, respectively. Considering a load proportional to a scalar parameter λ, the external virtual work is defined as

P_ext(δu) = λ ∫_Ω F δu dΩ, (2.15)

where F denotes the external load. By substituting Eqs. (2.8)-(2.12) into Eq. (2.13), the film/substrate model can be developed in the following parts.

Internal virtual work of the substrate

First, let us define the unknown variables in each sublayer:

⟨q^{sn}_1⟩ = ⟨u_{n-1} w_{n-1} u_n w_n⟩, (2.16)
⟨q^{sn}_2⟩ = ⟨u_{n-1,x} w_{n-1,x} u_{n,x} w_{n,x}⟩. (2.17)

According to the kinematics (2.10), the displacement field reads

{U^{sn}, W^{sn}}^T = [N_z] {q^{sn}_1}, (2.18)

where

[N_z] = [ (1-η)/2   0        (1+η)/2   0      ;
          0         (1-η)/2  0         (1+η)/2 ].
(2.19)

The strain vector {ε^{sn}} and the stress vector {s^{sn}} can be expressed as

{ε^{sn}} = {ε^{sn}_xx, ε^{sn}_zz, γ^{sn}_xz}^T = [B_1] {q^{sn}_1} + [B_2] {q^{sn}_2}, (2.20)
{s^{sn}} = [C^{sn}] {ε^{sn}}, (2.21)

in which

[B_1] = [ 0        0        0       0      ;
          0        -1/h_n   0       1/h_n  ;
          -1/h_n   0        1/h_n   0      ], (2.22)

[B_2] = [ (1-η)/2  0        (1+η)/2  0       ;
          0        0        0        0       ;
          0        (1-η)/2  0        (1+η)/2 ], (2.23)

[C^{sn}] = [ λ_s + 2G_s   λ_s          0   ;
             λ_s          λ_s + 2G_s   0   ;
             0            0            G_s ]. (2.24)

Note that the membrane strain ε^{sn}_xx = U^{sn}_{,x} involves only the derivative vector {q^{sn}_2}, the transverse strain ε^{sn}_zz = W^{sn}_{,z} involves only {q^{sn}_1}, and the shear strain γ^{sn}_xz combines both. The internal virtual work of the substrate can be represented as the sum over all the sublayers:

P^s_int(δu) = -∫_0^L ∫_0^{h_s} ⟨δε^s⟩ {s^s} dz dx = -∫_0^L [ Σ_{sn} ⟨δq^{sn}_1⟩ ∫_0^{h_n} ᵀ[B_1] {s^{sn}} dz + Σ_{sn} ⟨δq^{sn}_2⟩ ∫_0^{h_n} ᵀ[B_2] {s^{sn}} dz ] dx, (2.25)

where the two through-thickness integrals are denoted Φ and Ψ, respectively. By taking Eqs. (2.20) and (2.21) into account, one obtains

Φ = Σ_{sn} ( ∫_0^{h_n} ᵀ[B_1][C^{sn}][B_1] dz {q^{sn}_1} + ∫_0^{h_n} ᵀ[B_1][C^{sn}][B_2] dz {q^{sn}_2} ), (2.26)
Ψ = Σ_{sn} ( ∫_0^{h_n} ᵀ[B_2][C^{sn}][B_1] dz {q^{sn}_1} + ∫_0^{h_n} ᵀ[B_2][C^{sn}][B_2] dz {q^{sn}_2} ). (2.27)

The two equations above can be combined in the form

{Φ, Ψ}^T = [C_s] {q^{sn}_1, q^{sn}_2}^T. (2.28)

Now we consider the discretization of the substrate along the x direction. The unknown vectors are given as

{q^{sn}_1} = [N_s] {v^s}, (2.29)
{q^{sn}_2} = [N_{s,x}] {v^s}, (2.30)

where {v^s} is the elementary unknown vector of the substrate and [N_s] is the shape function matrix. Note that the longitudinal displacement u is discretized by linear Lagrange functions, while the transverse displacement w is discretized by Hermite functions. Consequently, the internal virtual work of the substrate can be written as

P^s_int(δu) = -Σ_e ⟨δv^s⟩ ∫_0^{l_e} ( ᵀ[N_s] Φ + ᵀ[N_{s,x}] Ψ ) dx = -Σ_e ⟨δv^s⟩ ∫_0^{l_e} ( [ᵀN_s, ᵀN_{s,x}] [C_s] [N_s; N_{s,x}] ) dx {v^s}, (2.31)

where l_e is the length of a 1D element.
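The through-thickness integration leading to [C_s] in Eq. (2.28) can be sketched numerically for a single sublayer. In the sketch below, the strain-displacement matrices are rebuilt directly from the interpolation (2.10), and the material data are assumptions for illustration only:

```python
import numpy as np

# Sketch of the through-thickness integration behind Eqs. (2.25)-(2.28) for a
# single sublayer. Writing the strain as {eps} = [B1]{q1} + [B2]{q2}, where
# {q1} collects the four interface displacements and {q2} their x-derivatives,
# the 8x8 matrix [C_s] of Eq. (2.28) follows from 2-point Gauss quadrature in
# eta. Numerical data are ASSUMED for illustration.
E_s, nu_s, h_n = 1.8, 0.48, 0.05   # sublayer modulus (MPa), Poisson, thickness (mm)
lam = E_s * nu_s / ((1 + nu_s) * (1 - 2 * nu_s))   # Lame's first parameter
G = E_s / (2 * (1 + nu_s))                          # shear modulus
C = np.array([[lam + 2 * G, lam, 0.0],
              [lam, lam + 2 * G, 0.0],
              [0.0, 0.0, G]])                       # [C^sn] of Eq. (2.24)

def B_matrix(eta):
    # eps_xx = U,x        -> interpolated u-derivatives ({q2} part)
    # eps_zz = W,z        -> (w_n - w_{n-1}) / h_n     ({q1} part)
    # gam_xz = U,z + W,x  -> one contribution from each part
    Nm, Np = 0.5 * (1 - eta), 0.5 * (1 + eta)
    B1 = np.array([[0.0, 0.0, 0.0, 0.0],
                   [0.0, -1 / h_n, 0.0, 1 / h_n],
                   [-1 / h_n, 0.0, 1 / h_n, 0.0]])
    B2 = np.array([[Nm, 0.0, Np, 0.0],
                   [0.0, 0.0, 0.0, 0.0],
                   [0.0, Nm, 0.0, Np]])
    return np.hstack((B1, B2))   # 3 x 8, acting on (q1, q2)

Cs = np.zeros((8, 8))
for eta in (-1 / np.sqrt(3.0), 1 / np.sqrt(3.0)):   # 2-point Gauss, weights 1
    B = B_matrix(eta)
    Cs += (h_n / 2.0) * B.T @ C @ B                 # dz = (h_n / 2) d(eta)
```

By construction the resulting 8x8 matrix is symmetric and positive semi-definite, which is what the assembly of Eq. (2.31) relies on.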
Internal virtual work of the film As for the thin film, the strain energy is mainly generated by normal strain ϵ f xx , the other two terms ϵ f zz and γ f xz being neglected. Consequently, the internal virtual work of the film is expressed as P f int (δu) = - ∫ Ω f σ f xx δϵ f xx dΩ = - ∫ L 0 [ N f ( δu f ,x + w f ,x δw f ,x ) + M f δw f ,xx ] dx, (2.32) where      N f = E f h f ( u f ,x + 1 2 (w f ,x ) 2 ) , M f = 1 12 E f h 3 f w f ,xx . (2.33) The generalized strain vector {ε f } of the film is defined as { ε f } = ( [H] + 1 2 [A(q f )] ) { q f } , ( 2.34) in which [H] = [ 1 0 0 0 0 1 ] , [A(q f )] = [ 0 w f ,x 0 0 0 0 ] , { q f } =      u f ,x w f ,x w f ,xx      . ( 2 .35) Since [A(q f )] and { q f } are linear functions of u f and w f , the internal virtual work of the film in Eq. (2.32) is a quadratic form with respect to the displacement and the stress: P f int (δu) = - ∫ L 0 ⟨ δq f ⟩ ( T [H] + T [A(q f )] ) { s f } dx, (2.36) where the generalized stress of the film reads { s f } = { N f M f } = [D] ( [H] + 1 2 [A(q f )] ) { q f } , (2.37) in which [D] = [ E f h f 0 0 1 12 E f h 3 f ] . (2.38) Note that the discretization of unknown variables { q f } takes the same shape function as for the substrate, i.e. Lagrange functions for the longitudinal displacement and Hermite functions for the transverse displacement. Connection between the film and the substrate In the kinematics (2.8)-(2.10), there are 2(n + 2) unknown functions depending on the number of sublayers in the substrate. As the film is bonded to the substrate, the displacement should be continuous at the interface. Therefore, the connection between the film and the substrate reads { U f (x, h s ) = U s1 (x, -1), W f (x, h s ) = W s1 (x, -1). (2.39) From Eqs. (2.8)-(2.9) and (2.39), the following relations can be obtained:    u f = u 0 - h f 2 w 0,x , w f = w 0 . 
(2.40)

Consequently, the above conditions reduce the unknowns to the 2(n+1) independent functions {u_0, w_0, u_1, w_1, . . . , u_n, w_n}.

Resolution technique and bifurcation analysis

The Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] is used to solve the resulting nonlinear equations. The ANM is a path-following technique based on a succession of high order power series expansions (perturbation technique) with respect to a well chosen path parameter; it acts as an efficient continuation predictor without any corrector iteration. Besides, one obtains approximations of the solution path that are very accurate inside the radius of convergence. In this chapter, the main interest of the ANM is its ability to detect bifurcation points. First, small steps are often associated with the occurrence of a bifurcation. Then, a bifurcation indicator will be defined, which permits to detect exactly the bifurcation load and the corresponding nonlinear mode.

Path-following technique

Let us write a generalized nonlinear problem as

R(U, λ) = L(U) + Q(U, U) - λF = 0, (2.41)

where U is a mixed vector of unknowns including displacement and stress, L(·) a linear operator, Q(·, ·) a quadratic one, F the external load vector and R the residual vector. The external load parameter is denoted by the scalar λ. The principle of the ANM continuation consists in describing the solution path by computing a succession of truncated power series expansions.
From a known solution point (U_0, λ_0), the solution (U, λ) is expanded into truncated power series of a perturbation parameter a:

U(a) = U_0 + Σ_{p=1}^n a^p U_p = U_0 + aU_1 + a²U_2 + . . . + a^n U_n, (2.42)
λ(a) = λ_0 + Σ_{p=1}^n a^p λ_p = λ_0 + aλ_1 + a²λ_2 + . . . + a^n λ_n, (2.43)
a = ⟨u - u_0, u_1⟩ + (λ - λ_0) λ_1, (2.44)

where n is the truncation order of the series. Eq. (2.44) defines the path parameter a, which can be identified with an arc-length parameter. By introducing Eqs. (2.42) and (2.43) into Eqs. (2.41) and (2.44), then equating the terms at the same power of a, one obtains a set of linear problems:

Order 1:
L^0_t(U_1) = λ_1 F, (2.45)
⟨u_1, u_1⟩ + λ_1² = 1. (2.46)

Order p ≥ 2:
L^0_t(U_p) = λ_p F - Σ_{r=1}^{p-1} Q(U_r, U_{p-r}), (2.47)
⟨u_p, u_1⟩ + λ_p λ_1 = 0, (2.48)

where L^0_t(·) = L(·) + 2Q(U_0, ·) is the linear tangent operator. Note that this operator depends only on the initial solution and takes the same value at every order p, which leads to only one matrix inversion at each step. These linear problems are solved by the FE method. Once the values of U_p are calculated, the path solution at step (j + 1) can be obtained through Eq. (2.42). The maximum value of the path parameter a is automatically defined by analyzing the convergence of the power series at each step. The value a_max can be based on the difference of displacements at two successive orders, which must remain smaller than a given precision parameter δ:

Validity range: a_max = ( δ ||u_1|| / ||u_n|| )^{1/(n-1)}, (2.49)

where the notation ||·|| stands for the Euclidean norm. Unlike incremental-iterative methods, the arc-length step size a_max is adaptive, since it is determined a posteriori by the algorithm. When there is a bifurcation point on the solution path, the radius of convergence is defined by the distance to the bifurcation. Thus, the step length defined in Eq.
(2.49) becomes smaller and smaller, which looks as if the continuation process "knocks" against the bifurcation [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF]. This accumulation of small steps is a very good indicator of the presence of a singularity on the path. All the bifurcations can be easily identified in this way by the user without any special tool. It is worth mentioning that only two parameters control the algorithm. The first one is the truncation order n of the series. It has been discussed previously that the optimal truncation order should be large enough, typically between 15 and 20, but larger values (e.g. n = 50) lead to good results for large scale problems as well [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF]. The other important parameter is the chosen tolerance δ, which affects the residual. For instance, very small values of the tolerance (e.g. δ = 10^{-6}) ensure quite a high accuracy and a very robust path-following process. In this chapter, the ANM has been chosen for its ability for branch-switching and to detect bifurcation points, but not necessarily to reduce the computation time. Nevertheless, let us underline that this computation time may remain moderate even with many terms of the series. For instance in [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF], very large scale problems have been solved and 50 terms of the Taylor series have been computed for a cost smaller than that of two linear problems. Complementary results can be found in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF]. The implementation of recurrence formulae such as (2.47) is relatively simple for the Föppl-von Kármán nonlinear plate or the Navier-Stokes equations, but it can be tedious in a more general constitutive framework.
That is why some tools have been proposed to compute high order derivatives of constitutive laws [START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF], based on techniques of Automatic Differentiation [START_REF] Griewank | Evaluating derivatives: Principles and techniques of algorithmic differentiation[END_REF]. Very efficient software is available for small systems [START_REF] Karkar | Manlab: an interactive path-following and bifurcation analysis software[END_REF]. In this chapter, the simplest ANM algorithm is considered. A variant with possible correction phases [START_REF] Lahman | High-order predictor-corrector algorithms[END_REF] should lead to somewhat smaller computation times. Here, the priority is the reliability of the path-following procedure, and it is simpler to choose very small values of the accuracy parameter δ in (2.49) to ensure a secure method for bifurcation analysis. The presented algorithm belongs to the family of continuation methods and not to the local bifurcation analyses generally associated with the names of [START_REF] Liapounoff | Problème général de la stabilité du mouvement[END_REF][START_REF] Schmidt | Uber die auflösung der nichtlinearen integralgleichungen und die verzweigung ihrer lösungen[END_REF][START_REF] Koiter | On the stability of elastic equilibrium[END_REF]. It is certainly possible to numerically compute a branch emanating from a bifurcation point in the form of high order Taylor series, while the historical papers used only a few terms computed analytically. From such numerical results, see for instance [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF], it appears that the radius of convergence can be large, but finite, so that the standard ANM algorithm remains necessary for the next steps.
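To make the recurrence (2.45)-(2.48) concrete, the sketch below runs one ANM continuation step on a scalar toy problem. It illustrates the structure of the algorithm only (same tangent operator at every order, a posteriori step length from Eq. (2.49)); it is not the FE implementation of this chapter.

```python
import numpy as np

# Toy illustration of one ANM step (Eqs. (2.42)-(2.49)) on the scalar problem
#   R(u, lam) = u + u**2 - lam = 0,   i.e. L(u) = u, Q(u, v) = u*v, F = 1.
# In the 1D FE model the same recurrence holds with vectors and a tangent
# stiffness matrix in place of the scalar Lt.
def anm_step(u0, lam0, n=15, delta=1e-6):
    Lt = 1.0 + 2.0 * u0                      # tangent operator L_t^0
    u, lam = np.zeros(n + 1), np.zeros(n + 1)
    # Order 1, Eqs. (2.45)-(2.46): Lt*u1 = lam1, with u1**2 + lam1**2 = 1
    lam[1] = 1.0 / np.sqrt(1.0 + 1.0 / Lt**2)
    u[1] = lam[1] / Lt
    # Orders p >= 2, Eqs. (2.47)-(2.48): same Lt, new quadratic right-hand side
    for p in range(2, n + 1):
        Fnl = -sum(u[r] * u[p - r] for r in range(1, p))
        lam[p] = -Fnl * u[1] / (u[1] + Lt * lam[1])   # from orthogonality (2.48)
        u[p] = (lam[p] + Fnl) / Lt
    amax = (delta * abs(u[1]) / abs(u[n])) ** (1.0 / (n - 1))   # Eq. (2.49)
    a = amax ** np.arange(n + 1)
    return u0 + a[1:] @ u[1:], lam0 + a[1:] @ lam[1:]

u, lam = anm_step(0.0, 0.0)
print(u, lam, u + u**2 - lam)   # the residual stays at the series accuracy
```

A single step advances a finite arc length while keeping the residual at roughly the tolerance level, without any corrector iteration; this is the "high order predictor" behaviour described above.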
Note that a recent study [START_REF] Liang | A Koiter-Newton approach for nonlinear structural analysis[END_REF] attempts to deduce a computational method from Koiter's ideas.

Detection of bifurcation points

There are many methods to detect bifurcation points, the most important ones being briefly mentioned in the introduction. None of these methods is perfect, the main difficulty being their reliability. That is why we have chosen the bifurcation indicator method, which has proven to be very reliable in many cases of solid and fluid mechanics, see for instance [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Brezillon | A numerical algorithm coupling a bifurcating indicator and a direct method for the computation of Hopf bifurcation points in fluid mechanics[END_REF][START_REF] Girault | An algorithm for the computation of multiple Hopf bifurcation points based on Padé approximants[END_REF][START_REF] Guevel | Parametric analysis of steady bifurcations in 2D incompressible viscous flow with high order algorithm[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. At each step, one defines a scalar parameter that vanishes only when the tangent stiffness matrix is singular. This indicator is a scalar measure of the tangent stiffness, although it coincides with an eigenvalue only at the singular point.

Definition of a bifurcation indicator

Let ∆µ f be a fictitious perturbation force applied to the structure at a given deformed state (U, λ), where ∆µ is the intensity of the force f and ∆U is the associated response. By superposing the applied load and the perturbation, the fictitious perturbed equilibrium can be described by

L(U + ∆U) + Q(U + ∆U, U + ∆U) = λF + ∆µ f.
(2.50)

Considering the equilibrium state and neglecting the quadratic terms, one obtains the following auxiliary problem:

L_t(∆U) = ∆µ f, (2.51)

where L_t(·) = L(·) + 2Q(U, ·) is the tangent operator at the equilibrium point (U, λ). If ∆µ were imposed, the response ∆U would tend to infinity in the vicinity of the critical points. To avoid this problem, the following displacement-based condition is imposed:

⟨L^0_t(∆U - ∆U_0), ∆U_0⟩ = 0, (2.52)

where L^0_t(·) is the tangent operator at the starting point (U_0, λ_0) and the direction ∆U_0 is the solution of L^0_t(∆U_0) = f. Consequently, ∆µ is deduced from the linear system (2.51) and (2.52):

∆µ = ⟨∆U_0, f⟩ / ⟨L_t^{-1}(f), f⟩. (2.53)

Since the scalar function ∆µ represents a measure of the stiffness of the structure and becomes zero at singular points, it defines a bifurcation indicator. It could be computed directly from Eq. (2.53), but this would require decomposing the tangent operator at each point along the solution path. For this reason, the system (2.51)-(2.52) is more efficiently solved by the ANM.

Goal: find a_b such that L_t(U(a_b)) ∆U = 0.
Method: solve L_t(U(a)) ∆U = ∆µ f, with ⟨L^0_t(∆U - ∆U_0), ∆U_0⟩ = 0.
Output: when ∆µ(a_b) = 0, ∆U(a_b) is the bifurcation mode.

Thus, the bifurcation indicator ∆µ(a) is easily computed from the auxiliary problems (2.51) and (2.52). The function ∆µ(a) depends on the fictitious perturbation force f, but the numerical solutions of the initial problem (2.41), the bifurcation points and the bifurcation modes are quite independent of it. Exceptionally, the indicator could miss a bifurcation if the fictitious force vector f is orthogonal to the instability mode; the choice of a random perturbation force avoids this problem. In what follows, each field ∆U(a_b) is called an instability mode.
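As an elementary illustration of Eq. (2.53), consider an assumed two-degree-of-freedom system whose tangent stiffness degrades linearly with the load parameter. The example below is a toy, not the FE model: it only shows that the indicator equals 1 at the starting point and vanishes when the tangent stiffness becomes singular.

```python
import numpy as np

# Direct evaluation of the indicator of Eq. (2.53) on an ASSUMED two-degree-
# of-freedom example with tangent stiffness K_t(lam) = K0 - lam*Kg.
K0 = np.array([[2.0, 0.0], [0.0, 5.0]])
Kg = np.eye(2)                       # first singularity occurs at lam = 2
f = np.array([1.0, 1.0])             # fictitious perturbation force
dU0 = np.linalg.solve(K0, f)         # response at the starting point (lam = 0)

def indicator(lam):
    # Eq. (2.53): <dU0, f> / <K_t^{-1} f, f>
    Kt = K0 - lam * Kg
    return (dU0 @ f) / (np.linalg.solve(Kt, f) @ f)

print(indicator(0.0))      # 1 by construction at the starting point
print(indicator(1.999))    # tends to 0 as K_t becomes singular
```

The denominator blows up near the singular point, so the indicator decreases monotonically from 1 to 0 along this path, exactly the behaviour exploited to locate the bifurcation loads.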
Computation of the bifurcation indicator by the ANM

The perturbation (∆U, ∆µ) is sought in the form of the asymptotic series expansions

∆U(a) = ∆U_0 + Σ_{p=1}^n a^p ∆U_p = ∆U_0 + a∆U_1 + a²∆U_2 + . . . + a^n ∆U_n, (2.54)
∆µ(a) = ∆µ_0 + Σ_{p=1}^n a^p ∆µ_p = ∆µ_0 + a∆µ_1 + a²∆µ_2 + . . . + a^n ∆µ_n. (2.55)

By introducing these expansions into Eqs. (2.51) and (2.52) and equating the terms at the same power of a, one obtains:

Order 0:
L^0_t(∆U_0) = ∆µ_0 f, (2.56)

where the condition ∆µ_0 = 1 is prescribed a priori at each step.

Order p ≥ 1:
L^0_t(∆U_p) = ∆µ_p f - 2 Σ_{j=1}^p Q(U_j, ∆U_{p-j}), (2.57)
⟨∆U_p, f⟩ = 0, (2.58)

where the vectors U_j are determined during the computation of the equilibrium branch and L^0_t is exactly the same tangent operator as the one obtained from the equilibrium branch. Even though this procedure requires computing the series up to order p at each step, the corresponding computation is fast since the same tangent stiffness matrix is used at every order. The discretization of the problem at order p in Eqs. (2.57) and (2.58) gives

[K^0_t] {∆u_p} = ∆µ_p {f} + {∆F_p}, (2.59)
ᵀ{∆u_p} [K^0_t] {∆u_0} = 0, (2.60)

where [K^0_t] denotes the tangent stiffness matrix computed at the starting point (U_0, λ_0). The vectors {∆u_p} and {∆u_0} respectively represent the nodal displacements at order p and at order 0, associated with the fictitious perturbation {f}. The vector {∆F_p} depends only on the solutions at the previous (p - 1) orders. From Eqs. (2.59) and (2.60), one obtains

∆µ_p = -⟨∆F_p, ∆u_0⟩ / ⟨f, ∆u_0⟩. (2.61)

Once ∆µ_p is computed, substituting it into Eq. (2.59) gives {∆u_p}. In this way, all the terms of the asymptotic expansion of the bifurcation indicator can be determined.

Results and discussion

The path-following technique described in Section 2.4.1 will be used in every case, whether near bifurcation points or away from them.
Specific procedures to compute branches emanating from a bifurcation are available [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Guevel | Automatic detection and branch switching methods for steady bifurcation in fluid mechanics[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF], but for simplicity we limit ourselves to the basic continuation algorithm of Section 2.4.1. That is why we introduce a small perturbation force to trigger a continuous transition from the fundamental branch to the bifurcated one: a transverse perturbation f_z = 10^{-6} is imposed in the middle of the film. The introduction of such small perturbation forces is quite a common technique in the solution of bifurcation problems by continuation techniques [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF], even when using commercial finite element codes. This artifice could be avoided by applying a specific procedure to compute the bifurcation branch, as in [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. In this chapter, the perturbation force f_z allows us to compute the whole bifurcated branch with a single continuation algorithm. Note that these forces differ from the fictitious perturbation force of Section 2.4.2, which acts only on the bifurcation indicator. To check whether the global tangent stiffness matrix K_t is positive definite, a Crout decomposition is applied at each step during the nonlinear resolution to evaluate the stability of the solution.
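The positive-definiteness test can be sketched as follows: a Crout-type LDLᵀ factorization exposes the pivots, and by Sylvester's law of inertia the number of negative pivots equals the number of negative eigenvalues of K_t, i.e. the number of crossed critical points. The factorization below is a textbook sketch on assumed toy matrices, not the code of the actual FE implementation.

```python
import numpy as np

# Stability test sketch: LDL^T (Crout-type) factorization of a symmetric
# matrix; the solution point is stable when all pivots d_i are positive.
def negative_pivots(K):
    K = np.array(K, dtype=float)
    n = len(K)
    L, d = np.eye(n), np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - L[j, :j] ** 2 @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - L[i, :j] * L[j, :j] @ d[:j]) / d[j]
    return int(np.sum(d < 0.0))

K_stable = np.array([[4.0, 1.0], [1.0, 3.0]])      # positive definite
K_unstable = np.array([[1.0, 2.0], [2.0, 1.0]])    # one negative eigenvalue
print(negative_pivots(K_stable), negative_pivots(K_unstable))
```

Since the same factorization is reused to solve the linear problems (2.45) and (2.47), reading off the pivot signs costs essentially nothing, as stated below.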
The cost of this stability test is negligible, since the Crout decomposition also has to be performed to solve the problems (2.45) and (2.47). The sketch of the film/substrate system under compression forces is illustrated in Fig. 2.3. On the bottom surface of the substrate, the vertical displacement w_n and the tangential traction are taken to be zero. On both the left and right sides, simply supported and clamped boundary conditions will be considered, respectively. More precisely, the transverse displacement w_i is locked to zero in the case of simply supported boundary conditions, and its derivative w_{i,x} is also zero in the case of clamped boundary conditions. The material and geometric parameters of the film/substrate system are similar to those in [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] and are shown in Table 2.1. The huge Young's modulus ratio E_f/E_s determines the critical wavelength λ^c_x, which remains practically unchanged as the amplitude of the wrinkles increases [START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF]. Poisson's ratio is a dimensionless measure of the degree of compressibility. Compliant materials in the substrate, such as elastomers, are nearly incompressible, with ν_s = 0.48. A relatively thin film has been chosen so that an isotropic and homogeneous system is not parameter dependent, as established in Section 2.2. In what follows, we will investigate the effect of boundary conditions and material properties of the substrate on instability patterns in the whole buckling and post-buckling evolution.
The dimensional analysis in Section 2.2 demonstrated that the 2D response of film/substrate systems is not parameter dependent within the chosen framework. More precisely, for isotropic materials, a thick and soft substrate (sufficiently large h_s/λ_x and E_f/E_s) and a response with many waves (large L/λ_x), the only influencing parameter is Poisson's ratio of the substrate. Therefore, all the responses of such systems should be similar. Nevertheless, the literature reports a variety of nonlinear responses of these systems, including period-doubling, localized creasing, folding and ridging [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF]. Some of these responses cannot be obtained here, since this study is limited to 2D systems with small strains in the substrate. In this part, we will discuss the influence of the substrate anisotropy and two cases of elastically graded materials [START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF]. Besides, two cases of boundary conditions for the film/substrate system will be studied: simply supported and clamped, respectively. To the best of our knowledge, previous studies in the literature did not consider the effect of boundary conditions or of the anisotropy of the substrate. Thus, we will detail most of the possible bifurcation sequences in the basic case of 2D systems with a very soft substrate. Very fine meshes, with at least ten elements within one wavelength, are adopted to discretize the film/substrate system along the x direction. As for the z direction, convergence is examined through tests with different numbers of sublayers. Fig. 2.4 shows the applied load versus the transverse displacement with, respectively, 5, 10, 15, 20 or 25 sublayers in the substrate.
It can be observed that fifteen sublayers are sufficient to obtain a convergent solution that agrees well with the critical load of the full 2D model (CPE8R elements in [START_REF][END_REF]). Besides, the critical load of sinusoidal wrinkles based on classical linearized stability analysis was presented in [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF], with the Föppl-von Kármán nonlinear elastic plate assumption for the film. For a substrate of finite thickness, the critical load is expressed as F_c = (1/4) h_f Ē_f (3Ē_s/Ē_f)^{2/3}, where Ē_f = E_f/(1 - ν_f²) and Ē_s = E_s/(1 - ν_s²). By introducing the material and geometric parameters of Table 2.1, one obtains the analytical solution for periodic boundary conditions, F_c = 0.048 N/mm, which is very close to our FE results with real boundary conditions (about 0.049 N/mm in Fig. 2.4). The established 1D FE model based on the ANM gives a very fast computing speed, with a small number of steps to reach the secondary or higher bifurcations (see Fig. 2.5). Four bifurcations have been detected from the small step accumulation in the ANM framework. Besides, it is found that the transverse displacement along the z direction follows an exponential distribution, as shown in Fig. 2.6. This result is analytically justified in Appendix A. Although the small step accumulation is a good indicator of the occurrence of a bifurcation, the exact bifurcation point may be located between two neighbouring steps and thus cannot be captured directly. Therefore, bifurcation indicators are computed to detect the exact positions of the bifurcation points. By evaluating this indicator along an equilibrium branch, all the critical points (see Fig. 2.7) existing on this branch and the associated bifurcation modes (see Fig. 2.8) can be determined. Overall, four bifurcation points have been found. Fig.
2.8 shows a sequence of wrinkling patterns corresponding to the critical loads and their associated instability modes. The first instability mode is localized near the boundary and starts at λ = 0.04893. Then the pattern tends to be a uniform sinusoidal mode at the second bifurcation point when λ = 0.055681. A period-doubling mode occurs at the third bifurcation point when λ = 0.086029. Besides, a localized ridge mode is evident in the middle. At the fourth bifurcation point, when λ = 0.11595, an aperiodic mode (period-trebling and period-quadrupling) emerges. The stability of all these solutions has been checked by using Crout decomposition of the tangent stiffness matrices. Lastly, it is noted that the strain is less than 1% at the bifurcation and lower than 4% at the end of the loading process. More accurate results at the end of loading could likely be obtained in a finite strain hyperelasticity framework. The same remark holds for the following examples.

Film/substrate with clamped boundary conditions

Following the same strategy, we study the pattern formation in the case of clamped boundary conditions. Very fine meshes are adopted to discretize the film/substrate system along the x direction. Convergence of the computational results is carefully examined. In Figs. 2.9 and 2.10, two bifurcation points have also been captured by evaluating the bifurcation indicators along the equilibrium branch. The same method will also be used in the following examples, but the corresponding indicator curves will no longer be presented. The sequence of wrinkling patterns corresponding to the bifurcation loads and their associated instability modes is illustrated in Fig. 2.11. The two modes correspond to modulated oscillations, the first one with a sinusoidal envelope and the second one with a hyperbolic tangent shape.
These modes are quite common [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF] and can be predicted by the asymptotic Ginzburg-Landau equation [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. The first bifurcation mode in Fig. 2.8b has been inhibited by the clamped boundary conditions and does not appear in this case. Period-doubling was not observed in the range of the load-displacement curve in Fig. 2.9 either. Nevertheless, the observed patterns are quite similar except near the boundary: compare for instance Fig.

Functionally Graded Material (FGM) substrate with simply supported boundary conditions

In Section 2.2, it has been shown that the elastic matrix L s contributes to the potential energy, which would affect wrinkling patterns to some extent. In what follows, we will extend the 1D FE model to a substrate made of elastic graded material, in order to study the influence of a variable Young's modulus on pattern formation. Systems consisting of a stiff thin layer resting on an elastic graded substrate are often encountered both in nature and in industry. In nature, many living soft tissues, including skins, brains, and the mucosa of the esophagus and pulmonary airway, can be modeled as a soft substrate covered by a stiff thin surface layer [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF]. It is noted that the sub-surface layer (i.e. the substrate) usually has graded mechanical properties because of the spatial variation in its microstructure or composition.
Besides, many practical systems in industry have a functionally graded substrate [START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF][START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF]. For instance, the deposition process of a stiff thin film on a soft substrate may lead to a variation of the mechanical properties of the substrate along the thickness direction (functionally graded), which would affect the wrinkling of the film/substrate system [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF]. In particular, we explore typical exponential variations of Young's modulus of the substrate along the thickness direction (softening E s1 and stiffening E s2 ) (see Fig. 2.12), while Poisson's ratio remains constant as before. In fact, these two typical kinds of exponential variations (softening and stiffening) were also assumed in [START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF] for studying the effect of graded material properties on pattern formation. First, we investigate the pattern formation with simply supported boundary conditions and the softening Young's modulus E s1 . The first bifurcation occurs at λ = 0.024935, much earlier than in the homogeneous case, since the exponentially graded Young's modulus reduces the stiffness of the system (see Fig. 2.13 and Fig. 2.5). Instability modes are also captured by computing bifurcation indicators. The first mode is localized near the boundary (see Fig. 2.14). Hence, it seems typically related to the simply supported boundary conditions. The second mode is periodic with little boundary effect.
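In the sublayer-based 1D FE model, such an exponential grading can be realized by assigning each substrate sublayer the modulus evaluated at its mid-height. A minimal sketch, in which the grading constant β and the numerical values are assumptions (the thesis does not quote them):

```python
import math

def sublayer_moduli(E_top, beta, n_sub, softening=True):
    """Young's modulus assigned to each substrate sublayer for an exponential
    grading E(z) = E_top * exp(-/+ beta * z / h_s), z being the depth measured
    from the film/substrate interface (z/h_s in [0, 1]).
    Each of the n_sub sublayers receives the value at its mid-height."""
    sign = -1.0 if softening else 1.0
    return [E_top * math.exp(sign * beta * (i + 0.5) / n_sub)
            for i in range(n_sub)]

# Illustrative values (assumed): interface modulus 1.8 MPa, beta = 2,
# fifteen sublayers as in the convergence study above.
E_soft = sublayer_moduli(1.8, 2.0, 15, softening=True)    # E_s1-type grading
E_stiff = sublayer_moduli(1.8, 2.0, 15, softening=False)  # E_s2-type grading
```

Each sublayer then contributes its own elastic matrix to the potential energy, so the grading enters the model with no change to the discretization itself.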
The wavelength is larger than in the homogeneous case due to the softening of the substrate, which is consistent with Eq. (2.7). The third mode corresponds to a period-doubling.

FGM substrate with clamped boundary conditions

We study the pattern formation of the FGM substrate with clamped boundary conditions and the softening Young's modulus E s1 . Three bifurcations have been observed (see Fig. 2.15). The first bifurcation appears at λ = 0.026732, which occurs earlier than in the homogeneous case, as expected (see Fig. 2.15 and Fig. 2.9). The sequence of wrinkling patterns corresponding to the bifurcation loads and the associated instability modes is shown in Fig. 2.16. The first bifurcation mode corresponds to a sinusoidal envelope, while the second one looks like a hyperbolic tangent, as in the case of a homogeneous substrate. The period-doubling mode occurs at the third bifurcation point when λ = 0.099932.

FGM substrate with stiffening Young's modulus

We explore the wrinkle formation in the case of a stiffening Young's modulus E s2 . Up to now, at each bifurcation point, only one branch has been computed, which is proven to be stable because of the positivity of the Jacobian matrix. This stability after bifurcation is classical for a supercritical bifurcation. In the present case of a stiffening Young's modulus, we try to compute both the fundamental branch and the bifurcated one. The bifurcated branch has always been observed in the previous cases and it naturally results from the path-following algorithm. Bifurcation theory provides theoretical means to define all the branches emanating from a bifurcation point and this can lead to practical algorithms [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF].
Nevertheless, the fundamental branch can also be found without changes in the path-following algorithm of Section 2.4.1. The idea is to relax the tolerance δ just before the bifurcation. Practically, as recommended in [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF], we keep the same precision parameter δ = 10⁻⁵ up to the neighbourhood of the third bifurcation. Then we increase this parameter to δ = 10⁻². Consequently, two solution branches are clearly distinguished in Fig. 2.17. We checked that the post-bifurcation branch remains accurate despite the greater value of the tolerance δ (residual lower than 10⁻³) and that the Jacobian matrix is no longer positive along the fundamental solution, as expected theoretically. Similar bifurcation portraits could be found in all the other cases, but the practical interest of computing unstable branches is rather limited. In Figs. 2.18 and 2.19, the response shows four bifurcation points, the first two being very similar to those obtained in the previous cases, but with a shorter wavelength: the first mode is strongly influenced by the boundary conditions and the second one corresponds to a uniform amplitude in the bulk. The third and fourth bifurcation modes are slightly different from those in Fig. 2.8 (homogeneous substrate) and Fig. 2.14 (softening Young's modulus E s1 ). Indeed, period-doubling is not observed here, but only a slow modulation of the wrinkle envelopes. Hence, the period-doubling mode appears as a possible event that does not happen in every case.

Anisotropic substrate

We study the effect of an orthotropic substrate on pattern formation. Simply supported boundary conditions are applied. The elastic matrix of the substrate in Eq.
(2.24) becomes

[C sn ] =
⎡ E 1 (1 − ν 23 )/(1 − ν 23 − 2ν 12 ν 21 )    −E 1 ν 21 /(ν 23 + 2ν 12 ν 21 − 1)                            0                   ⎤
⎢ −E 2 ν 12 /(ν 23 + 2ν 12 ν 21 − 1)        E 2 (ν 12 ν 21 − 1)/[(1 + ν 23 )(ν 23 + 2ν 12 ν 21 − 1)]    0                   ⎥    (2.62)
⎣ 0                                         0                                                           E s /[2(1 + ν 12 )] ⎦

Precisely, here we choose E 1 = 10E s , E 2 = E s , ν 12 = ν s , ν 21 = ν s /10 and ν 23 = ν s . From Fig. 2.20, one can see that the anisotropic system is slightly stiffer than the isotropic one, but the post-buckling path follows a trend similar to the isotropic case. Furthermore, small step accumulations appear in almost the same loading region in both curves, which implies that the bifurcation loads are comparable. Three instability modes are detected by computing bifurcation indicators in the anisotropic case (see Fig. 2.21). The comparison between these three modes in Fig. 2.21 and the final shape in Fig. 2.22 is somewhat puzzling, since a period-doubling appears clearly in the final shape but not in the modes. Likely, this period-doubling grows continuously along the response curve.

Comments

Three types of bifurcation modes have been encountered in the four studied cases. First, a mode localized near the boundary can appear, primarily in the case of simply supported boundary conditions. Then the instability pattern tends to be periodic, except in the boundary region where the response is influenced by the boundary conditions. Last, this perfect periodicity can be broken by another bifurcation, leading to period-doubling, period-trebling or even period-quadrupling modes. Such a loss of periodicity had been previously discussed, for instance in [START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF]. Curiously, all the computed bifurcations are supercritical, so that the post-bifurcation solutions remain stable.
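As a quick consistency check on the orthotropic case above, the elastic matrix of Eq. (2.62) can be assembled with the chosen parameter ratios and tested for symmetry, which follows from the reciprocity relation E 1 ν 21 = E 2 ν 12 satisfied by these ratios. A sketch in which the values E s = 1.8 MPa and ν s = 0.48 are illustrative assumptions:

```python
def orthotropic_plane_strain_matrix(E1, E2, Es, nu12, nu21, nu23):
    """Assemble the 3x3 elastic matrix of Eq. (2.62)."""
    d = nu23 + 2.0 * nu12 * nu21 - 1.0
    return [
        [E1 * (1.0 - nu23) / (-d), -E1 * nu21 / d, 0.0],
        [-E2 * nu12 / d, E2 * (nu12 * nu21 - 1.0) / ((1.0 + nu23) * d), 0.0],
        [0.0, 0.0, Es / (2.0 * (1.0 + nu12))],
    ]

# Parameter ratios from the text; Es and nu_s values are assumptions.
Es, nu_s = 1.8, 0.48
C = orthotropic_plane_strain_matrix(10.0 * Es, Es, Es, nu_s, nu_s / 10.0, nu_s)
```

With E 1 ν 21 = E 2 ν 12 , the off-diagonal entries coincide and the matrix is symmetric with positive diagonal, as required of an elastic stiffness.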
The responses of the system as well as the third bifurcation mode are represented in Fig. 2.23 as a function of the substrate properties: isotropic homogeneous, orthotropic homogeneous, inhomogeneous softening and stiffening, respectively. The bifurcation portrait seems generic at the beginning: a boundary mode is followed by a second mode that is almost periodic except in the boundary region. The third and eventually the fourth bifurcations are no longer periodic, but several forms can be encountered: a clear period-doubling in the FGM softening case, a more confused situation for homogeneous isotropic substrates (see Fig. 2.8), or modulated modes without period-doubling in the FGM stiffening and orthotropic cases. The multiple bifurcations can be seen as markers of the evolution of instability patterns, but sometimes the pattern can change continuously without an obvious bifurcation: such an evolution has been found in the anisotropic case in Section 2.5.6, where a period-doubling mode appears in the final shape (see Fig. 2.22) but not in the observed bifurcation modes (see Fig. 2.21). This is not surprising because bifurcation is not a generic singularity.

Chapter conclusion

Wrinkling phenomena of stiff films bound to compliant substrates were investigated from the quantitative standpoint, with particular attention to the effect of boundary conditions, which was rarely studied previously. A classical model was used, associating a geometrically nonlinear beam for the film with linear elasticity for the substrate. The presented results rely heavily on robust solution techniques based on the ANM, which is able to detect secondary bifurcations and to compute bifurcation modes on a nonlinear response curve. It would probably be rather difficult to detect all the bifurcations found in this chapter by conventional numerical methods.
Six cases have been studied, characterized by the boundary conditions and by the material properties of the substrate (homogeneous, graded material and orthotropic). Up to four successive bifurcation points have been found and the bifurcation portrait seems more or less generic. In the case of a simply supported film/substrate system, the first mode is a boundary mode. Then the pattern tends to be periodic in the bulk and the response near the ends is locally influenced by the boundary conditions. When the amplitude increases, the periodicity is broken and one can observe period-doubling bifurcations or the appearance of modulated patterns. The present study is limited to moderate rotations and 2D domains. It would be an interesting challenge to extend this analysis to 3D cases [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF] without too restrictive assumptions, but the computation time could increase dramatically. In this respect, an idea is to introduce reduced-order models, for instance via the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF][START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF].
Nevertheless, 3D film/substrate models are essential because the periodicity can be broken by 3D bifurcations before the occurrence of the period-doubling mode [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF][START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF].

Chapter 3

Pattern formation modeling of 3D film/substrate systems

Introduction

Surface morphological instabilities of stiff thin layers attached to soft substrates are of growing interest in a number of academic domains, including micro/nano-fabrication and metrology [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF], flexible electronics [START_REF] Rogers | Materials and mechanics for stretchable electronics[END_REF], mechanical and physical measurement of material properties [START_REF] Howarter | Instabilities as a measurement tool for soft materials[END_REF], and biomedical engineering [START_REF] Genzer | Soft matter with hard skin: From skin wrinkles to templating and material characterization[END_REF] as well as biomechanics [START_REF] Li | Surface wrinkling of mucosa induced by volumetric growth: Theory, simulation and experiment[END_REF]. The pioneering work of Bowden et al.
[START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF] led to several theoretical and numerical works on stability, devoted to linear perturbation analysis and nonlinear buckling analysis [START_REF] Huang | Instability of a compressed elastic film on a viscous layer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Huang | Kinetic wrinkling of an elastic film on a viscoelastic substrate[END_REF][START_REF] Huang | Dynamics of wrinkle growth and coarsening in stressed thin films[END_REF][START_REF] Im | Wrinkle patterns of anisotropic crystal films on viscoelastic substrates[END_REF][START_REF] Mahadevan | Self-organized origami[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Lee | Surface instability of an elastic half space with material properties varying with depth[END_REF]. In most of these papers, the 2D or 3D spatial problem is discretized by either a spectral method or a Fast Fourier Transform (FFT) algorithm, which is fairly inexpensive but prescribes periodic boundary conditions.
In this framework, several types of wrinkling modes have been observed, including sinusoidal, checkerboard, herringbone (see Fig. 3.1) and disordered labyrinth patterns. It was recognized early by Chen and Hutchinson [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF] that such systems can also be studied by finite element methods, which are more computationally expensive but more flexible for describing complex geometries and more general boundary conditions, and allow the use of commercial computer codes. In addition, 3D finite element simulations of film/substrate instability have been reported in only a few papers [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. Furthermore, the post-buckling evolution and mode transition of surface wrinkles in 3D film/substrate systems are rarely studied, and only in the case of periodic cells [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]; this still deserves further investigation, especially through the finite element method, which can provide an overall view of, and insight into, the formation and evolution of wrinkle patterns under more general conditions. Can one obtain the variety of 3D wrinkling modes reported in the literature by using classical finite element models? Can one describe the whole evolution path of buckling and post-buckling of this system? Under what loading and boundary conditions can each type of pattern be observed, and at what values of the bifurcation loads? These questions will be addressed in this chapter.
This study aims at applying advanced numerical methods for bifurcation analysis to typical cases of film/substrate systems and focuses on the post-buckling evolution involving multiple bifurcations and symmetry-breakings, for the first time with particular attention to the effect of boundary conditions. For this purpose, a 2D finite element (FE) model was previously developed for multiperiodic bifurcation analysis of wrinkle formation [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. In this model, the film undergoing moderate deflections is described by Föppl-von Kármán nonlinear elastic theory, while the substrate is considered to be a linear elastic solid. Following the same strategy, we extend this work to 3D cases by coupling shell elements representing the film with block elements describing the substrate. Therefore, large displacements and rotations in the film can be taken into account, and the spatial distribution of wrinkling modes like sinusoidal, checkerboard and herringbone (see Fig. 3.1) can be investigated. Surface instability of stiff layers on soft materials usually involves strong geometrical nonlinearities, large rotations, large deformations, loading path dependence, multiple symmetry-breakings and other complexities, which makes the numerical resolution quite difficult. The morphological post-buckling evolution and mode shape transition beyond the critical load are extremely complicated, especially in 3D cases, and conventional numerical methods have difficulties in detecting all the bifurcation points and associated instability modes along their evolution paths. To solve the resulting nonlinear equations, continuation techniques provide efficient numerical tools to compute these nonlinear response curves [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF].
In this chapter, we adopted the Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF], which appears as a significantly efficient continuation technique without any corrector iteration. The underlying principle of the ANM is to build up the nonlinear solution branch in the form of relatively high order truncated power series. The resulting series are then introduced into the nonlinear problem, which helps to transform it into a sequence of linear problems that can be solved numerically. In this way, one gets approximations of the solution path that are very accurate inside the radius of convergence. Since few global stiffness matrix inversions are required (only one per step), the performance in terms of computing time is quite attractive. Moreover, as a result of the local polynomial approximations of the branch within each step, the algorithm is remarkably robust and fully automatic. Furthermore, unlike incremental-iterative methods, the arc-length step size in the ANM is fully adaptive, since it is determined a posteriori by the algorithm. A small radius of convergence and step accumulation appear around the bifurcation and imply its presence. Detection of bifurcation points is a real challenge. Although much progress has been made using the Newton-Raphson method, an efficient and reliable algorithm remains difficult to establish. Indeed, it would cost considerable computing time in the bisection sequence and corrector iterations because of the very small step lengths close to the bifurcation.
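The series principle of the ANM can be illustrated on a scalar model problem (a toy example, not the film/substrate equations): injecting a truncated power series into the nonlinear equation u + u² = λ turns it into a recursive sequence of linear problems, one per order.

```python
def anm_series_coefficients(n_terms):
    """Series coefficients u_k of the branch u(lam) of the toy nonlinear
    problem u + u**2 = lam, expanded about (u, lam) = (0, 0). Injecting
    u = sum_k u_k * lam**k yields one linear problem per order:
    u_1 = 1 and, for k >= 2, u_k = -sum_{i=1}^{k-1} u_i * u_{k-i}."""
    u = [0.0, 1.0]  # u_0 = 0, u_1 = 1
    for k in range(2, n_terms + 1):
        u.append(-sum(u[i] * u[k - i] for i in range(1, k)))
    return u

def evaluate_branch(lam, coeffs):
    """Evaluate the truncated power series at the load parameter lam."""
    return sum(c * lam ** k for k, c in enumerate(coeffs))

coeffs = anm_series_coefficients(30)
u_series = evaluate_branch(0.2, coeffs)
u_exact = (-1.0 + (1.0 + 4.0 * 0.2) ** 0.5) / 2.0  # exact solution branch
# Inside the radius of convergence (|lam| < 1/4 here), the truncated
# series is very accurate.
```

The limit point of this toy problem at λ = −1/4 caps the radius of convergence, which mirrors how step accumulation signals a nearby singular point in the full ANM.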
In the ANM framework, a bifurcation indicator has been proposed to detect bifurcation points [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. It is a scalar function obtained by introducing a fictitious perturbation force in the problem, and it becomes zero exactly at the bifurcation points. Indeed, this indicator measures the intensity of the system response to perturbation forces. By evaluating it along an equilibrium branch, all the critical points existing on this branch and the associated bifurcation modes can be determined. This chapter explores the occurrence and post-bifurcation evolution of sinusoidal, checkerboard and herringbone modes in greater depth. The work presented in this chapter, i.e. 3D finite element modeling of instabilities in thin films on soft substrates, has been published in the International Journal of Solids and Structures [START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF].

3D mechanical model

We consider an elastic thin film bonded to an elastic substrate, which can buckle under compression. Upon wrinkling, the film elastically buckles to relax the compressive stress and the substrate concurrently deforms to maintain perfect bonding at the interface. In the following, the elastic potential energy of the system is considered in the framework of Hookean elasticity. The film/substrate system is considered to be three-dimensional and the geometry is as shown in Fig. 3.2.
Let x and y be the in-plane coordinates, while z is the direction perpendicular to the mean plane of the film/substrate. The width and length of the system are denoted by L x and L y , respectively. The parameters h f , h s and h t represent, respectively, the thickness of the film, the thickness of the substrate and the total thickness of the system. Young's modulus and Poisson's ratio of the film are denoted by E f and ν f , while E s and ν s are the corresponding material properties of the substrate. The 3D film/substrate system is modeled in a rather classical way, the film being represented by a thin shell model to allow large rotations, while the substrate is modeled by small strain elasticity. Indeed, the considered instabilities are governed by nonlinear geometric effects in the stiff material, while these effects are much smaller in the soft material. Since the originality of this chapter lies in the numerical treatment of multiple bifurcations, we limit ourselves to this classical framework for the sake of consistency with the previous literature. The large rotation framework for the film has been chosen because of the efficiency of the associated finite element. Note that the same choice of a shell with finite rotations coupled with small strain elasticity in the substrate had been made for numerical reasons in [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF]. The application range of this model is limited by two small parameters: the aspect ratios of the film h f /L x , h f /L y and the stiffness ratio E s /E f . In the case of a larger ratio E s /E f , a finite strain model should be considered in the substrate as in [START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF], which would not be too difficult to implement in our framework [START_REF] Nezamabadi | Solving hyperelastic material problems by asymptotic numerical method[END_REF].
Nonlinear shell formulation for the film

Challenges in the numerical modeling of such film/substrate systems come from the extremely large ratio of Young's moduli (E f /E s ≈ O(10⁵)) as well as the large thickness difference (h s /h f ≈ O(10²)), which would require a very fine mesh if 3D block elements were used both for the film and for the substrate. Since finite rotations of the middle surface and small strains are considered in the thin film, nonlinear shell formulations are quite suitable and efficient for the modeling. Hereby, a three-dimensional shell formulation proposed by Büchter et al. [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF] is applied. It is based on a 7-parameter theory including a linearly varying thickness stretch as an extra variable, which allows applying a complete 3D constitutive law without condensation. It is thus distinguished from classical shell models, which are usually based on degenerated constitutive relations (e.g. Kirchhoff-Love, Reissner-Mindlin theories). The Enhanced Assumed Strain (EAS) concept proposed by Simo and Rifai [START_REF] Simo | A class of mixed assumed strain methods and method of incompatible modes[END_REF] is also incorporated to improve the element performance and to avoid locking phenomena such as Poisson thickness locking, shear locking or volume locking.
This hybrid shell formulation can describe large deformation problems with hyperelasticity and has been successfully applied to nonlinear elastic thin-walled structures such as cantilever beams, square plates, cylindrical roofs and circular deep arches [START_REF] Sansour | A theory and finite element formulation of shells at finite deformations involving thickness change: Circumventing the use of a rotation tensor[END_REF][START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. The geometry and kinematics of the shell element are illustrated in Fig. 3.3, where the position vectors are functions of the curvilinear coordinates (θ 1 , θ 2 , θ 3 ). The geometry description relies on the middle surface coordinates θ 1 and θ 2 of the shell, while θ 3 represents the coordinate in the thickness direction. The current configuration is defined by the middle surface displacement and the relative displacement between the middle and the upper surfaces. The large rotations are taken into account without any rotation matrix, since the current direction vector is obtained by adding a vector to the one of the initial configuration. In the initial undeformed configuration, the position vector x representing any point in the shell can be defined as

x(θ α , θ 3 ) = r(θ α ) + θ 3 a 3 (θ α ),   (3.1)

where r(θ α ) (α = 1, 2) denotes the projection of x onto the middle surface and θ 3 describes its perpendicular direction, with θ 3 ∈ [−h f /2, h f /2], in which h f is the reference thickness of the shell. The normal vector of the middle surface is represented by a 3 = a 1 × a 2 . Similarly, in the current deformed configuration, we define the position of the point by the vector x̄:

x̄(θ α , θ 3 ) = r̄(θ α ) + θ 3 ā 3 (θ α ),   (3.2)

where

r̄ = r + v,   ā 3 = a 3 + w.
(3.3)

Therefore, the displacement vector associated with an arbitrary material point in the shell, varying linearly along the thickness direction, reads

u(θ α , θ 3 ) = x̄ − x = v(θ α ) + θ 3 w(θ α ).   (3.4)

In total, six degrees of freedom can be distinguished in Eq. (3.4) to describe the shell kinematics: three components related to the translation of the middle surface (v 1 , v 2 , v 3 ) and three further components updating the direction vector (w 1 , w 2 , w 3 ). The Green-Lagrange strain tensor is used to describe the geometrical nonlinearity, which can be expressed in the covariant base:

γ = (1/2) (ḡ ij − g ij ) g^i ⊗ g^j   with i, j = 1, 2, 3,   (3.5)

where g^i are the contravariant base vectors, while ḡ ij = ḡ i • ḡ j and g ij = g i • g j respectively represent the components of the covariant metric tensor in the deformed configuration and in the initial one [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF]. The hybrid shell formulation is derived from a three-field variational principle based on the Hu-Washizu functional [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF][START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF]. The stationary condition can be written as

Π EAS (u, γ̃, S) = ∫ Ω { ᵗS : (γ u + γ̃) − (1/2) ᵗS : D⁻¹ : S } dΩ − λP e (u),   (3.6)

where D is the elastic stiffness tensor. The unknowns are, respectively, the displacement field u, the second Piola-Kirchhoff stress tensor S and the compatible Green-Lagrange strain γ u . The enhanced assumed strain γ̃ satisfies the condition of orthogonality with respect to the stress field. The work of the external load is denoted by P e (u), while λ is a scalar load parameter. Concerning the enhanced assumed strain γ̃, classical shell kinematics requires the transversal strain field to be zero (γ 33 = 0).
In reality, since 3D constitutive relations are concerned, this condition is hardly satisfied due to the Poisson effect, especially for bending dominated cases. This phenomenon is commonly referred to as "Poisson thickness locking". To remedy this issue, an enhanced assumed strain contribution γ̃ has been introduced in the shell formulation [START_REF] Büchter | Three-dimensional extension of non-linear shell formulation based on the enchanced assumed strain concept[END_REF], acting across the shell thickness direction and supplementing the compatible strain field γ_u. It describes the linear variation of the thickness stretch or compression, and is expressed with respect to the local curvilinear coordinate θ^3:

γ̃ = θ^3 γ̃_33 g^3 ⊗ g^3 with γ̃_33 = γ̃_33(θ^α), (3.7)

and satisfies the condition of orthogonality with respect to the stress field S:

∫_Ω t S : γ̃ dΩ = 0. (3.8)

In this way, "spurious" transversal strains induced by the Poisson effect for bending dominated kinematics are balanced by the assumed strain γ̃, which removes the "thickness locking" problem. This approach is applied in this chapter, since the associated finite element is very efficient, especially for very thin shells [START_REF] Zahrouni | Computing finite rotations of shells by an asymptotic-numerical method[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. An 8-node quadrilateral element with reduced integration is utilized for the 7-parameter shell formulation. The enhanced assumed strain γ̃ requires no inter-element continuity and does not contribute to the total number of nodal degrees of freedom. Therefore, it can be eliminated by condensation at the element level, which preserves the formal structure of a 6-parameter shell theory with 48 degrees of freedom per element in total.
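Since the enhanced assumed strain parameter is internal to each element, its elimination amounts to a standard static condensation (Schur complement) of the element arrays. The sketch below is a generic illustration with small made-up symmetric matrices, not the arrays of the actual 7-parameter shell element:

```python
# Element-level static condensation of an internal (EAS-type) parameter.
# Illustrative only: K_uu, K_ug, K_gg below are arbitrary made-up blocks.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Partitioned element system (g = internal EAS unknown, no external load on g):
#   [K_uu  K_ug] [u]   [f_u]
#   [K_gu  K_gg] [g] = [ 0 ]
K_uu = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
K_ug = [[0.5], [0.2], [0.1]]      # coupling column (K_gu = K_ug^T)
K_gg = [[1.5]]                    # EAS block, scalar here
f_u = [1.0, 0.0, 1.0]

# Condensed stiffness: K_hat = K_uu - K_ug K_gg^{-1} K_gu
K_hat = [[K_uu[i][j] - K_ug[i][0] * K_ug[j][0] / K_gg[0][0] for j in range(3)]
         for i in range(3)]
u = solve(K_hat, f_u)             # displacement DOFs only

# Recover the condensed parameter: g = -K_gg^{-1} K_gu u
g = -sum(K_ug[i][0] * u[i] for i in range(3)) / K_gg[0][0]

# Cross-check against the uncondensed 4x4 solve: identical displacements
K_full = [K_uu[i] + [K_ug[i][0]] for i in range(3)]
K_full.append([K_ug[0][0], K_ug[1][0], K_ug[2][0], K_gg[0][0]])
full = solve(K_full, f_u + [0.0])
print(max(abs(u[i] - full[i]) for i in range(3)))
```

The condensation is exact: the condensed system returns the same displacement DOFs as the full solve, which is why the formal 6-parameter structure of the element is preserved.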
Linear elasticity for the substrate

Since the displacement, rotation and strain remain relatively small in the substrate, linear isotropic elasticity with updated geometry can accurately describe the substrate; the nonlinear strain-displacement behavior has essentially no influence on the results of interest [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF]. The potential energy of the substrate can be expressed as

Π_s(u_s) = 1/2 ∫_Ω t ε : L_s : ε dΩ - λP_e(u_s), (3.9)

where L_s is the elastic matrix of the substrate. In this chapter, 8-node linear block elements with reduced integration are applied to discretize the substrate, with 24 degrees of freedom per block element.

Connection between the film and the substrate

As the film is bonded to the substrate, the displacement should be continuous at the interface. However, the shell elements for the film and the 3D block elements for the substrate cannot simply be joined directly, since they are dissimilar elements. Therefore, additional constraint equations have to be employed. Hereby, Lagrange multipliers are applied to couple the corresponding nodal displacements in compatible meshes between the film and the substrate (see Fig. 3.4). Note that 8-node linear block elements are used here only for coupling convenience; 20-node quadratic block elements would be another good candidate, and both follow the same coupling strategy. Consequently, the stationary function of the film/substrate system is given in a Lagrangian form:

L(u_f, u_s, ℓ) = Π_EAS + Π_s + ∑_{node i} ℓ_i [u_f^-(i) - u_s(i)], (3.10)

in which

u_f^-(i) = v(i) - (h_f/2) w(i), (3.11)

where the displacements of the film and the substrate are respectively denoted as u_f and u_s, while the Lagrange multipliers are represented by ℓ.
At the interface, the displacement continuity is satisfied at the same nodes and connects the bottom surface of the film (u_f^-) and the top surface of the substrate. From Eq. (3.10), three equations are obtained according to δu_f, δu_s and δℓ:

δΠ_EAS + ∑_{node i} ℓ_i δu_f^-(i) = 0,
δΠ_s - ∑_{node i} ℓ_i δu_s(i) = 0,
∑_{node i} δℓ_i u_f^-(i) - ∑_{node i} δℓ_i u_s(i) = 0. (3.12)

Figure 3.4: Sketch of coupling at the interface.

Resolution technique and bifurcation analysis

The Asymptotic Numerical Method (ANM) [START_REF] Damil | A new method to compute perturbed bifurcation: Application to the buckling of imperfect elastic structures[END_REF][START_REF] Cochelin | Asymptotic-numerical methods and Padé approximants for non-linear elastic structures[END_REF][START_REF] Cochelin | A path-following technique via an asymptotic-numerical method[END_REF][START_REF] Cochelin | Méthode asymptotique numérique[END_REF] is used to solve the resulting nonlinear equations. The ANM is a path-following technique based on a succession of high order power series expansions (perturbation technique) with respect to a well chosen path parameter. It appears as an efficient continuation predictor without any corrector iteration, and one can obtain approximations of the solution path that are very accurate inside the radius of convergence. In this chapter, the main interest of the ANM is its ability to detect bifurcation points. First, small steps are often associated with the occurrence of a bifurcation. Then, a bifurcation indicator will be defined, which allows the bifurcation load and the corresponding nonlinear mode to be detected exactly.
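Before moving to the solver, the effect of the multiplier coupling in Eqs. (3.10)-(3.12) can be made concrete on a deliberately minimal toy problem: one "film" DOF and one "substrate" DOF tied together by a single Lagrange multiplier, giving the usual saddle-point system (all stiffness and load values are hypothetical):

```python
# Toy saddle-point system produced by a single Lagrange-multiplier tie,
# mimicking the interface condition u_f^- - u_s = 0 of Eq. (3.12).
# All numbers are hypothetical illustration values.

def solve(A, b):
    """Gaussian elimination with partial pivoting (handles the zero diagonal)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

k_f, k_s, f = 2.0, 3.0, 10.0      # film stiffness, substrate stiffness, load
# Unknowns (u_f, u_s, l): equilibrium of each model plus the constraint row.
A = [[k_f, 0.0, 1.0],
     [0.0, k_s, -1.0],
     [1.0, -1.0, 0.0]]            # zero block: multipliers carry no stiffness
b = [f, 0.0, 0.0]
u_f, u_s, lam = solve(A, b)
print(u_f, u_s, lam)              # u_f = u_s = f/(k_f + k_s); lam = tie force
```

The multiplier lam is the interface force transmitted between the two models; in the film/substrate system one such multiplier appears per constrained displacement component at each interface node.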
Path-following technique

The resulting nonlinear problem (3.12) can be rewritten as

δL(u_f, u_s, ℓ) = ⟨R(U, λ), δU⟩ = 0, (3.13)

in which

R(U, λ) = L(U) + Q(U, U) - λF = 0, (3.14)

where U = (u_f, u_s, ℓ) is a mixed vector of unknowns, R the residual vector, L(•) a linear operator, Q(•, •) a quadratic one and F the external load vector. The external load parameter is denoted by the scalar λ. The principle of the ANM continuation consists in describing the solution path by computing a succession of truncated power series expansions. From a known solution point (U_0, λ_0), the solution (U, λ) is expanded into truncated power series of a perturbation parameter a:

U(a) = U_0 + ∑_{p=1}^{n} a^p U_p = U_0 + aU_1 + a²U_2 + ... + aⁿU_n, (3.15)
λ(a) = λ_0 + ∑_{p=1}^{n} a^p λ_p = λ_0 + aλ_1 + a²λ_2 + ... + aⁿλ_n, (3.16)
a = ⟨u - u_0, u_1⟩ + (λ - λ_0)λ_1, (3.17)

where n is the truncation order of the series. Eq. (3.17) defines the path parameter a, which can be identified with an arc-length parameter. By introducing Eqs. (3.15) and (3.16) into Eqs. (3.13) and (3.17), then equating the terms at the same power of a, one obtains a set of linear problems. The maximum value of the path parameter a should be automatically defined by analyzing the convergence of the power series at each step. It can be based on the requirement that the difference of displacements at two successive orders be smaller than a given precision parameter δ:

Validity range: a_max = (δ ∥u_1∥ / ∥u_n∥)^{1/(n-1)}, (3.18)

where the notation ∥•∥ stands for the Euclidean norm. Unlike incremental-iterative methods, the arc-length step size a_max is adaptive, since it is determined a posteriori by the algorithm. When there is a bifurcation point on the solution path, the radius of convergence is defined by the distance to the bifurcation. Thus, the step length defined in Eq.
(3.18) becomes smaller and smaller, which looks as if the continuation process "knocks" against the bifurcation [START_REF] Baguet | On the behaviour of the ANM continuation in the presence of bifurcations[END_REF]. This accumulation of small steps is a very good indicator of the presence of a singularity on the path. All the bifurcations can be easily identified in this way by the user, without any special tool. It is worth mentioning that only two parameters control the algorithm. The first one is the truncation order n of the series. It was previously discussed that the optimal truncation order should be large enough, typically between 15 and 20, but bigger values (e.g. n = 50) lead to good results for large scale problems as well [START_REF] Medale | A parallel computer implementation of the Asymptotic Numerical Method to study thermal convection instabilities[END_REF]. The other important parameter is the chosen tolerance δ that affects the residual. For instance, very small values of the tolerance (e.g. δ = 10⁻⁶) ensure a high accuracy and a robust path-following process.

Detection of bifurcation points

Detecting exact bifurcation points is a challenge: it usually takes much computation time, with bisection sequences and many Newton-Raphson iterations, because of the small steps required close to the bifurcation. In the framework of the ANM, a bifurcation indicator has been proposed to capture exact bifurcation points in an efficient and reliable algorithm [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF][START_REF] Jamal | Bifurcation indicators[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF].
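Stepping back to the continuation itself, the order-by-order machinery of Eqs. (3.14)-(3.18) can be illustrated on a scalar toy residual, R(u, λ) = u + u² - λ (a made-up operator chosen for illustration only, with L(u) = u, Q(u, u) = u² and F = 1), started from (u_0, λ_0) = (0, 0):

```python
# ANM series continuation on a scalar toy problem:
#   R(u, lam) = u + u**2 - lam = 0, starting point (u0, lam0) = (0, 0).
# Identifying powers of the path parameter a (Eqs. (3.15)-(3.17)) with
# tangent stiffness L_t = 1 + 2*u0 = 1 gives u1 = lam1 = 1/sqrt(2) and,
# for p >= 2:  u_p - lam_p = -S_p,   u_p*u1 + lam_p*lam1 = 0,
# where S_p = sum_{r=1}^{p-1} u_r * u_{p-r}.
from math import sqrt

n = 15                                  # truncation order of the series
u = [0.0] * (n + 1)
lam = [0.0] * (n + 1)
u[1] = lam[1] = 1.0 / sqrt(2.0)
for p in range(2, n + 1):
    S = sum(u[r] * u[p - r] for r in range(1, p))
    u[p] = -S / 2.0                     # since lam_p = -u_p here
    lam[p] = S / 2.0

# Step length from the series convergence criterion, Eq. (3.18)
delta = 1e-6
a_max = (delta * abs(u[1]) / abs(u[n])) ** (1.0 / (n - 1))

# Evaluate the truncated series at the end of the step and check it
a = a_max
ua = sum(u[p] * a ** p for p in range(1, n + 1))
la = sum(lam[p] * a ** p for p in range(1, n + 1))
residual = ua + ua ** 2 - la            # stays tiny inside the step
u_exact = (-1.0 + sqrt(1.0 + 4.0 * la)) / 2.0
print(a_max, residual, abs(ua - u_exact))
```

A whole continuation step is obtained from a single tangent-operator decomposition; the next step would restart the expansion from (u(a_max), λ(a_max)).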
Let ∆µ f be a fictitious perturbation force applied to the structure at a given deformed state (U, λ), where ∆µ is the intensity of the force f and ∆U is the associated response. By superposing the applied load and the perturbation, the fictitious perturbed equilibrium can be described by

L(U + ∆U) + Q(U + ∆U, U + ∆U) = λF + ∆µ f. (3.19)

Considering the equilibrium state and neglecting the quadratic terms, one obtains the following auxiliary problem:

L_t(∆U) = ∆µ f, (3.20)

where L_t(•) = L(•) + 2Q(U, •) is the tangent operator at the equilibrium point (U, λ). If ∆µ is imposed, the displacement tends to infinity in the vicinity of the critical points. To avoid this problem, the following displacement based condition is imposed:

⟨L_t^0(∆U - ∆U_0), ∆U_0⟩ = 0, (3.21)

where L_t^0(•) is the tangent operator at the starting point (U_0, λ_0) and the direction ∆U_0 is the solution of L_t^0(∆U_0) = f. Consequently, ∆µ is deduced from the linear system (3.20) and (3.21):

∆µ = ⟨∆U_0, f⟩ / ⟨L_t^{-1}(f), f⟩. (3.22)

Since the scalar function ∆µ represents a measure of the stiffness of the structure and becomes zero at singular points, it can serve as a bifurcation indicator. It can be directly computed from Eq. (3.22), but this requires decomposing the tangent operator at each point along the solution path. For this reason, the system (3.20) and (3.21) is more efficiently solved by the ANM.

Goal: find a_b such that L_t(U(a_b)) ∆U = 0.
Method: solve L_t(U(a)) ∆U = ∆µ f together with ⟨L_t^0(∆U - ∆U_0), ∆U_0⟩ = 0.
Output: when ∆µ(a_b) = 0, ∆U(a_b) is the bifurcation mode.

In what follows, each field ∆U(a_b) is called instability mode or wrinkling mode.
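A minimal numerical sketch of the indicator (3.22) can be given on a hypothetical linearized 2-DOF model, whose tangent stiffness K_t(λ) = K_0 - λI becomes singular at the smallest eigenvalue of K_0 (here λ_c = 1); all matrices and forces below are made-up illustration values:

```python
# Bifurcation indicator of Eq. (3.22) on a toy 2-DOF linearized model.
# K0 and f are made-up values; K_t(lam) = K0 - lam*I is singular at
# lam = 1 and lam = 3 (the eigenvalues of K0).

def solve2(A, b):
    """Direct 2x2 solve by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

K0 = [[2.0, -1.0], [-1.0, 2.0]]
f = [0.3, 0.7]                     # "random" perturbation force
dU0 = solve2(K0, f)                # response at the starting point

def indicator(lam):
    Kt = [[K0[i][j] - lam * (i == j) for j in range(2)] for i in range(2)]
    dU = solve2(Kt, f)
    return (dU0[0] * f[0] + dU0[1] * f[1]) / (dU[0] * f[0] + dU[1] * f[1])

# indicator(0) = 1 by construction; it decreases to 0 at the first
# critical load and changes sign across it, so a sign bisection finds lam_c.
lo, hi = 0.5, 1.25
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if indicator(mid) > 0.0:
        lo = mid
    else:
        hi = mid
lam_c = 0.5 * (lo + hi)
print(lam_c)                       # ~1.0, the first singular point of K_t
```

In the ANM implementation the same scalar is obtained as a by-product of the series, avoiding a tangent-operator factorization at every load level.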
Note that it is recommended to use a random perturbation force vector f [START_REF] Boutyour | Méthode asymptotique-numérique pour le calcul des bifurcations: application aux structures élastiques[END_REF][START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF]. The bifurcation indicator in Eq. (3.22) vanishes at singular points only if the fictitious force vector f is not orthogonal to the instability mode; the choice of a random perturbation force avoids this problem. It is worth mentioning that the fictitious perturbation force f influences neither the numerical solutions of the initial problem (3.13) nor the detection of bifurcation points via (3.20) and (3.21), but only the auxiliary unknown ∆U(a).

Results and discussion

Three types of wrinkling patterns, sinusoidal, checkerboard and herringbone, will be investigated under different loading and boundary conditions. On the bottom surface of the substrate, the deflection u_z and the tangential traction are taken to be zero. The material and geometric properties of the film/substrate system are similar to those in [START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Song | Buckling of a stiff thin film on a compliant substrate in large deformation[END_REF][START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF] and are shown in Table 3.1. The dimensional parameters and loading conditions for each case are presented in Table 3.2 and Fig. 3.5, respectively.
The huge ratio of Young's moduli, E_f/E_s, determines the critical wavelength λ_c, which remains practically unchanged as the amplitude of the wrinkles increases [START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. Poisson's ratio is a dimensionless measure of the degree of compressibility. Compliant materials in the substrate, such as elastomers, are nearly incompressible, with ν_s = 0.48. A relatively thin film has been chosen so that the isotropic and homogeneous system is not parameter dependent [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF]. In order to trigger a transition from the fundamental branch to the bifurcated one, small perturbation forces, f_z = 10⁻⁸, are imposed in the film. The introduction of such small perturbation forces is quite a common technique in the solution of bifurcation problems by continuation techniques [START_REF] Doedel | AUTO: A program for the automatic bifurcation analysis of autonomous systems[END_REF][START_REF] Allgower | Numerical continuation methods[END_REF], even when using commercial finite element codes. This artifice could be avoided by applying a specific procedure to compute the bifurcation branch as in [START_REF] Boutyour | Bifurcation points and bifurcated branches by an asymptotic numerical method and Padé approximants[END_REF][START_REF] Vannucci | An asymptotic-numerical method to compute bifurcating branches[END_REF]. In this chapter, the perturbation forces f_z allow us to compute the whole bifurcated branch with a single continuation algorithm. Note that these forces differ from the fictitious perturbation force in Section 3.3.2, which acts only on the bifurcation indicator. The number of elements required for a convergent solution was carefully examined.
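Table 3.1 is not reproduced in this excerpt, so the following sanity check assumes representative values taken from the cited film/substrate literature (E_f = 1.3×10⁵ MPa, ν_f = 0.3, E_s = 1.8 MPa, ν_s = 0.48, h_f = 10⁻³ mm); with these assumed values, the classical periodic-wrinkling formulas reproduce the orders of magnitude discussed in this section:

```python
# Classical sinusoidal wrinkling estimates for a stiff film on a soft
# substrate. Parameter values are ASSUMED (Table 3.1 is not reproduced);
# they are representative of the cited film/substrate literature.
from math import pi

E_f, nu_f, h_f = 1.3e5, 0.3, 1.0e-3    # film: MPa, -, mm (assumed)
E_s, nu_s = 1.8, 0.48                  # substrate: MPa, - (assumed)

Eb_f = E_f / (1.0 - nu_f ** 2)         # plane-strain moduli
Eb_s = E_s / (1.0 - nu_s ** 2)

F_c = 0.25 * h_f * Eb_f * (3.0 * Eb_s / Eb_f) ** (2.0 / 3.0)   # N/mm
lam_c = 2.0 * pi * h_f * (Eb_f / (3.0 * Eb_s)) ** (1.0 / 3.0)  # mm

print(F_c, lam_c)
```

With these assumed values, this yields F_c ≈ 0.048 N/mm and λ_c ≈ 0.17 mm, consistent with the periodic-boundary-condition estimate quoted below and with a mesh providing several elements per wrinkle wavelength.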
Critical loads can be detected by bifurcation points in the load-displacement curve. Although the accumulation of small steps is a good indicator of the occurrence of a bifurcation, the exact bifurcation points may lie between two neighbouring steps and thus cannot be captured directly. Therefore, bifurcation indicators are computed to detect the exact position of the bifurcation points. By evaluating this indicator along an equilibrium branch, all the critical points existing on this branch and the associated bifurcation modes can be determined. In what follows, we will explore in greater depth the formation and evolution of three kinds of patterns (sinusoidal, checkerboard and herringbone) in the case of a not too large wave number. In experiments, one often observes more disordered wrinkles, like a frustrated labyrinth [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Yin | Deterministic order in surface micro-topologies through sequential wrinkling[END_REF]. These more intricate patterns can be predicted by a generic finite element procedure such as the one presented in this chapter, provided that sufficient computer resources are available.

Sinusoidal patterns

First, we study the sinusoidal pattern formation and evolution via Film/Sub I. The film is uniaxially compressed along the x direction, as shown in Fig. 3.5a. The displacements v_2, v_3, w_2 and w_3 are taken to be zero on the loading sides ② and ④ (see Fig. 3.5a) that are parallel to Oy. This means that these sides are simply supported, because the rotation w_1 around Oy is not locked. The other two sides ① and ③ are set to be free. To avoid rigid body motions, the displacement v_1 in the film center is locked as well. The film is meshed with 50 × 50 shell elements to ensure at least five elements within one wavelength. The substrate is compatibly discretized by 12500 block elements with five layers.
Totally, the film/substrate system contains 100827 degrees of freedom (DOF) including the Lagrange multipliers. The critical load of sinusoidal wrinkles based on classical linearized stability analysis was presented in [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF], with the Föppl-von Kármán nonlinear elastic plate assumption for the film. For a substrate of finite thickness, the critical load is expressed as

F_c = (1/4) h_f Ē_f (3Ē_s/Ē_f)^{2/3}, where Ē_f = E_f/(1 - ν_f²) and Ē_s = E_s/(1 - ν_s²).

By introducing the material and geometric parameters in Table 3.1, one obtains the analytical solution for periodic boundary conditions, F_c = 0.048 N/mm, which is close to our 3D finite element results with real boundary conditions (about 0.052 N/mm in Fig. 3.6). The established 3D model based on the ANM offers a very fast computing speed and reaches the secondary bifurcations within few steps (see Fig. 3.6). These steps are very large, except in the very small region where the load lies between 0.052 and 0.056 N/mm. In this region, one can observe two packets of small steps corresponding to two bifurcation points. Their exact locations have been captured through evaluating the bifurcation indicators along the equilibrium branch (see Fig. 3.7). The same method will also be used in the following examples, but the corresponding curves will no longer be presented. The sequence of wrinkling modes ∆v corresponding to the bifurcation loads and their associated instability modes ∆v_3 is illustrated in Fig. 3.8. These two instability modes are similar to classical patterns obtained for instance in membrane wrinkling. Their shapes are sinusoidal, with fast oscillations in the compressive direction and with spatial modulation.
These oscillations are located in the center of the film for the first mode and near the sides ② and ④ for the second one, which corresponds to the zones where the compressive stresses are larger. When the load increases, the pattern tends to a more or less uniform sinusoidal shape (see Fig. 3.9), with small boundary effects close to the loading sides. In other words, boundary effects are important at the first appearance of wrinkles; then the amplitudes of the oscillations tend to become uniform. Similar evolutions have been obtained in related problems, where one converges to an oscillation whose envelope looks like a hyperbolic tangent, for instance for a clamped beam [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF] or a clamped membrane [START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. The tendency to a uniform oscillation is in agreement with predictions of the asymptotic Ginzburg-Landau equation [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Following the same strategy, we investigate the surface morphological instability via Film/Sub I in the case of clamped boundary conditions. More specifically, the displacement w_1 is also taken to be zero on the loading sides ② and ④ (see Fig. 3.5a) that are parallel to Oy. This means that these sides are clamped, because the rotation w_1 around Oy is locked. The other boundary conditions and loadings are the same as before, and the same mesh as in the simply supported case is used. Two bifurcation points have been found, as shown in Fig. 3.10. The two instability modes correspond to modulated oscillations (see Fig. 3.11). The first one is similar to the simply supported case, except for the vanishing rotations on the boundary. The second one takes a hyperbolic tangent envelope, except for a small localization in the middle.
Then the pattern tends to a uniform hyperbolic tangent shape when the load reaches the final step (see Fig. 3.12).

Checkerboard patterns

Checkerboard modes are explored via Film/Sub II. The square film is under equibiaxial compression in both the x and y directions (see Fig. 3.5b), while the displacements along the four edges ①, ②, ③ and ④ are locked to be zero, which means that the film is simply supported on the whole boundary. The displacements v_1 and v_2 in the film center are also set to be zero to avoid rigid body movements. The same mesh as in the sinusoidal case, with totally 100827 DOF, is used. Four bifurcations have been captured through computing bifurcation indicators (see Fig. 3.13). In any case, the main symmetries with respect to the medians have been preserved. The first wrinkling load is slightly lower than in the uniaxial loading case (λ = 0.048361 instead of 0.052812). Fig. 3.14 presents a sequence of wrinkling modes ∆v corresponding to the critical loads and their associated instability modes ∆v_3. In the first mode, the pattern is not uniform and one observes a corner effect due to the stress concentration in this area. As in the two previous cases, this first bifurcation is due to local effects that should not appear under periodic boundary conditions. The uniform checkerboard mode matures in the bulk at the second bifurcation, but this growth in the center seems to occur gradually since the first bifurcation. Boundary and corner effects grow significantly when the load reaches the third bifurcation (see Fig. 3.14e). Nevertheless, the checkerboard shape is maintained in the middle bulk. The fourth mode is very similar to the third one due to their proximate critical loads; they cannot be clearly distinguished, and the fourth one is not shown here. The wrinkling pattern in the final step is depicted in Fig. 3.15 and exhibits strong boundary and corner effects.
The growth of checkerboard patterns is not as stable as for the previously observed sinusoidal patterns, since three bifurcations occur for rather small values of the deflection (v_3/h_f ≈ 0.375) and there is a local maximum value of the deflection. Note that the checkerboard mode is

Herringbone patterns

Herringbone modes are investigated via Film/Sub III with a rectangular surface (L_x/L_y = 0.5), so as to observe the patterns more clearly, since the wavelengths λ_x and λ_y are not identical. The film is under biaxial step loading, as shown in Fig. 3.5c. More precisely, the film is compressed along the x direction in the first step, where the loading and boundary conditions are the same as in the sinusoidal case with simply supported boundary conditions in Section 3.4.1: simply supported on sides ② and ④, free on sides ① and ③. Then, the displacements v_1, v_3, w_1 and w_3 along the four sides ①, ②, ③ and ④ are locked at the beginning of the second loading step, which means that the sides ① and ③ are simply supported in this second step. Compressions on the two edges ① and ③ are then imposed along the y direction. The film is meshed with 26 × 50 shell elements, while the substrate is compatibly discretized by 6500 block elements with five layers. Totally, the film/substrate system contains 53235 DOF including the Lagrange multipliers. The uniaxial compression in the first step generates the same type of sinusoidal wrinkles as in Section 3.4.1. In Fig. 3.16a, two bifurcation points have been found. The sequence of wrinkling modes ∆v corresponding to the bifurcation loads and their associated instability modes ∆v_3 is illustrated in Fig. 3.17. The first mode is modulated in a sinusoidal way, while the second one corresponds to a quasi-uniformly distributed oscillation. During the second step of compression along the y direction, two bifurcations have been captured by computing bifurcation indicators (see Fig. 3.16b). The first mode shows aperiodic wrinkles (see Fig.
3.18a and Fig. 3.18b), where the perfect periodicity in Fig. 3.17d has been broken by the new bifurcation. Such a loss of periodicity had been previously discussed in [START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF][START_REF] Cao | Buckling and post-buckling of a stiff film resting on an elastic graded substrate[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF], where period-doubling or even period-quadrupling is observed. Here, the periodicity is broken by the appearance of 3D wrinkling patterns, and one can wonder in which cases the sinusoidal modes lose their stability through the occurrence of period-doubling or of 3D wrinkling modes. The herringbone mode (see Fig. 3.18c and Fig. 3.18d) appears around the second bifurcation, with an in-plane wave occurring along the y direction in order to satisfy the minimum energy states [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF]. Apparently, the wavelength λ_y is larger than the sinusoidal wavelength λ_x, which is consistent with the experimental results in [START_REF] Yin | Deterministic order in surface micro-topologies through sequential wrinkling[END_REF]. A symmetric phase shifting can be clearly seen in the final step (see Fig. 3.19), which shows that the new in-plane wave spreads along the y direction while oscillating in the x direction.
Nevertheless, the wave number in the x direction remains unchanged during the second step of loading.

Chapter conclusion

Pattern formation and evolution of stiff films bound to compliant substrates were investigated, accounting for boundary conditions in 3D cases, which had rarely been studied previously. A classical model was applied, associating a geometrically nonlinear shell formulation for the film and linear elasticity for the substrate. The shell elements and block elements were then coupled by introducing Lagrange multipliers. The presented results rely heavily on robust solution techniques based on the ANM, which is able to detect secondary bifurcations and to compute bifurcation modes on a nonlinear response curve. It would probably be rather difficult to detect all the bifurcations found in this chapter by conventional numerical methods. Notably, the occurrence and evolution of sinusoidal, checkerboard and herringbone modes have been observed in the post-buckling range. The boundary conditions lead to non-uniformly distributed modes, but these boundary effects hold only at the onset of the instability, and further bifurcation modes correspond to more or less uniform amplitudes of oscillations. In our simulations, the appearance of sinusoidal, checkerboard or herringbone patterns is mainly related to the loading conditions. The nonlinear behavior of moderately large wrinkles has been investigated, and it seems that 1D patterns are more stable than 2D ones. The presented nonlinear 3D model can describe moderately large displacements and rotations in the film, but the computation cost increases dramatically compared to the 2D model in [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF] (100827 DOF in the 3D model instead of 1818 DOF in the 2D model).
In this respect, an idea for simulating larger samples is to introduce reduced-order models, for example via the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF][START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF].

Introduction

The wrinkling phenomenon is one of the major concerns for the analysis, design and optimization of structures [START_REF] Rossi | Simulation of light-weight membrane structures by wrinkling model[END_REF] and material processing [2], self-organized surface morphology in biomechanics [START_REF] Efimenko | Nested self-similar wrinkling patterns in skins[END_REF], pattern formation for micro/nano-fabrication [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF], etc. To analyze such phenomena, we propose the use of macroscopic models based on envelope equations, as in the field of cellular instability problems [153,[START_REF] Cross | Pattern formation out of equilibrium[END_REF][START_REF] Hoyle | Pattern formation, an introduction to methods[END_REF].
Such macroscopic descriptions are common for Rayleigh-Bénard convection [START_REF] Newell | Finite band width, finite amplitude convection[END_REF][START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF], buckling of long structures [START_REF] Damil | Wavelength selection in the postbuckling of a long rectangular plate[END_REF][START_REF] Boucif | Experimental study of wavelength selection in the elastic buckling instability of thin plates[END_REF][START_REF] Abdelmoula | Influence of distributed and localized imperfections on the buckling of cylindrical shells[END_REF], surface wrinkling of stiff thin films resting on compliant substrates [START_REF] Bowden | Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer[END_REF][START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF][START_REF] Huang | Evolution of wrinkles in hard films on soft substrates[END_REF][START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF][START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part III: Herringbone solutions at large buckling parameter[END_REF][START_REF] Wang | Local versus global buckling of thin films on elastomeric substrates[END_REF][START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF], fiber 
microbuckling and compressive failure of composites [START_REF] Drapier | A structural approach of plastic microbuckling in long fibre composites: comparison with theoretical and experimental results[END_REF][START_REF] Kyriakides | On the compressive failure of fiber reinforced composites[END_REF][START_REF] Waas | Compressive failure of composites, part II: Experimental studies[END_REF], wrinkling of membranes [START_REF] Rossi | Simulation of light-weight membrane structures by wrinkling model[END_REF][START_REF] Wong | Wrinkled membranes, Part I: experiments[END_REF][START_REF] Rodriguez | Numerical study of dynamic relaxation with kinetic damping applied to inflatable fabric structures with extensions for 3D solid element and non-linear behavior[END_REF][START_REF] Lecieux | Experimental analysis on membrane wrinkling under biaxial load-Comparison with bifurcation analysis[END_REF][START_REF] Lecieux | Numerical wrinkling prediction of thin hyperelastic structures by direct energy minimization[END_REF] and many other instabilities arising in various scientific fields [153,[START_REF] Cross | Pattern formation out of equilibrium[END_REF]. The responses of such systems are often nearly periodic spatial oscillations. Therefore, the evolution can be described by envelope models similar to the famous Ginzburg-Landau equation [START_REF] Segel | Distant side walls cause slow amplitude modulation of cellular convection[END_REF][START_REF] Damil | Amplitude equations for cellular instabilities[END_REF][START_REF] Hunt | Cellular buckling in long structures[END_REF][START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]. 
A new approach has been recently adopted by Damil and Potier-Ferry [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] to model wrinkling phenomena. The approach is based on the Ginzburg-Landau theory [153,[START_REF] Iooss | Theory of steady Ginzburg-Landau equation in hydrodynamic stability problems[END_REF]. In the proposed theory, the envelope equation is derived from an asymptotic double scale analysis and the nearly periodic fields (reduced model) are represented by Fourier series with slowly varying coefficients. This mathematical representation yields macroscopic models in the form of generalized continua. In this case, the macroscopic field is defined by Fourier coefficients of the microscopic field. It has been shown recently that this approach is able to account for the coupling between local and global buckling in a computationally efficient manner [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF] and it remains valid beyond the bifurcation point [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Nevertheless, a clear and secure account of boundary conditions cannot be obtained, which is a drawback intrinsically linked to the use of any model reduction. 
To solve this problem, a multi-scale modeling approach has been recently proposed in order to bypass the question of boundary conditions [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF]: the full model is implemented near the boundary while the envelope model is considered elsewhere, and these two models are bridged by the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF]. This idea makes it possible to clarify the question of boundary conditions, which keeps the advantages of the two approaches: the envelope model in the bulk makes it possible to simplify the response curves and limit the total number of degrees of freedom; the fine model avoids the cumbersome problem of the boundary conditions being applied to the envelope equation. In this chapter, we revisit these coupling techniques between a reference model and a reduced model of Ginzburg-Landau type. Over the last decade, various numerical techniques have been developed to couple heterogeneous models, e.g. the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF] or the bridging domain method [START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF]. 
One can couple classical continuum and shell models [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF], particle and continuum models [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]130,[START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF][START_REF] Bauman | Adaptive multiscale modeling of polymeric materials with Arlequin coupling and Goals algorithms[END_REF][START_REF] Xiao | A bridging domain method for coupling continua with molecular dynamics[END_REF], heterogeneous meshes [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF] or more generally heterogeneous discretizations [START_REF] Ben Dhia | On the use of XFEM within the Arlequin framework for the simulation of crack propagation[END_REF][START_REF] Biscani | Variable kinematic plate elements coupled via Arlequin method[END_REF]. For instance, local stresses around the boundary have been computed by coupling 2D elasticity near the boundary and 1D beam model elsewhere [START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF][START_REF] Hu | Multi-scale nonlinear modelling of sandwich structures using the Arlequin method[END_REF]. Basically, the Arlequin method aims at connecting two spatial approximations of an unknown field, generally a fine approximation U f and a coarse approximation U r . The idea is to require that these two approximations are neighbor in a weak and discrete sense and to introduce Lagrange multipliers in the corresponding differential problems. 
At the continuous level, a bilinear form must be chosen, which can be L 2 -type, H 1 -type or energy type [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. The first and important application of the Arlequin method is the coupling between two different meshes discretizing the same continuous problem: in this case, the mediator problem should be discretized by a coarse mesh to avoid locking phenomena [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] and spurious stress peaks [START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF]. But the two connected problems are not always in the same space, as for instance when dealing with particle and continuous problems. In this case, a prolongation operator has to be introduced to convert the discrete displacement into a continuous one and next a connection between continuous fields is performed [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]: this is consistent because the continuous model can be seen as the coarsest one. A similar approach has been applied in the coupling between plate and 3D models. A prolongation operator has been introduced (i.e. from the coarse to the fine level) and the integration is done in the 3D domain but the discretization of the Lagrange multiplier corresponds to a projection on the coarsest problem: thus, in this sense, this coupling of plate/3D is also achieved at the coarse level. 
In the same spirit, for the coupling between a fine model and an envelope model that is discussed in this chapter, the connection should also be done at the coarse level, i.e. between Fourier coefficients. On the contrary, a prolongation operator from the coarse to the fine model had been introduced in the previous paper [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] and the connection had been done at this level. Therefore, one can wonder if the imperfect connection observed in [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF] could be improved by introducing a coupling at the relevant level. This chapter tries to answer this question by studying again the Swift-Hohenberg equation [START_REF] Swift | Hydrodynamic fluctuations at the convective instability[END_REF] that is a simple and illustrative example of quasi-periodic bifurcation. Very probably, the same ideas can be applied to 2D macroscopic membrane models that were recently introduced in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF]. Note that the presented new technique can be considered as nonlocal since it connects Fourier coefficients involving integrals on a period. A similar nonlocal coupling has been introduced in [START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF] in the case of an atomic-to-continuum coupling, where the atomic model is reduced by averaging over a representative volume. The question addressed in this chapter is more or less generic in applying bridging techniques to reduced models or multi-scale models. The first papers about the Arlequin method focused on the choice of a bilinear form and its discretization. 
But in asymptotic multiple scale methods [136] or in computational homogenization [START_REF] Feyel | A multilevel finite element method (FE 2 ) to describe the response of highly non-linear structures using generalized continua[END_REF], one clearly distinguishes two independent spatial domains: a macroscopic domain to account for the slow variations and a microscopic domain for the rapid variations. Therefore, the connection operators between the two levels have to be clearly defined, as well as the level at which the coupling is achieved. This subject will be discussed in this chapter. The work presented in this chapter, i.e. the new bridging technique based on a nonlocal reduction operator, is an original application of multiscale modeling; it has been published in the International Journal of Solids and Structures [START_REF] Xu | Bridging techniques in a multiscale modeling of pattern formation[END_REF].

Macroscopic modeling of instability pattern formation

The numerical test considered in this chapter is the famous Swift-Hohenberg equation [START_REF] Swift | Hydrodynamic fluctuations at the convective instability[END_REF], which corresponds to the problem of a compressed elastic beam coupled with a nonlinear foundation. It has been studied in many papers, for instance in [START_REF] Hunt | Structural localization phenomena and the dynamical phase-space analogy[END_REF][START_REF] Hunt | Cellular buckling in long structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF][START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF], because it is a very representative example in the study of cellular instabilities.
From this microscopic model, a macroscopic envelope model will be presented and studied in the rest of the chapter. Among those discussed in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], it is not the most accurate, but it is the simplest one and it is able to describe the amplitude modulation of the oscillation. Let us recall that the central point of this chapter is a bridging technique used to correct a reduced model near the boundary. This technique has to be robust and it has to play its part for several levels of reduced model.

Description of the microscopic model

We consider the example of an elastic beam subjected to a nonlinear elastic foundation (of restoring force \(cv + c_3 v^3\)) as shown in Fig. 4.1. The unknowns are the components u(x) and v(x) of the displacement vector and the normal force n(x), which are gathered in U(x) = {u(x), v(x), n(x)}. We will study the following set of differential equations:
\[
\begin{cases}
\dfrac{dn}{dx} + f = 0, & \text{(a)}\\[2mm]
\dfrac{n}{ES} = \dfrac{du}{dx} + \dfrac{1}{2}\left(\dfrac{dv}{dx}\right)^2, & \text{(b)}\\[2mm]
\dfrac{d^2}{dx^2}\left(EI\dfrac{d^2v}{dx^2}\right) - \dfrac{d}{dx}\left(n\dfrac{dv}{dx}\right) + cv + c_3 v^3 = 0. & \text{(c)}
\end{cases}
\tag{4.1}
\]
These equations will be referred to as the microscopic model; it depends on four structural parameters EI, ES, c, c_3 and on a given axial force f(x). This system is able to describe periodic patterns. For instance, in the case without horizontal force (f = 0), with constant coefficients EI, c and a prescribed uniform compression stress µ (n(x) = -µ), a relation between the critical load µ and the wave number q of periodic patterns can be deduced from the linearized version of (4.1-c):
\[
\mu(q) = EIq^2 + \frac{c}{q^2}.
\tag{4.2}
\]
The critical wave number \(q = \sqrt[4]{c/EI}\) can be defined as the minimum of the neutral stability curve µ(q).
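As a quick numerical check of Eq. (4.2), the following sketch evaluates the neutral stability curve and its minimizer; the parameter values EI = c = 1 are those used in the numerical tests later in this chapter.

```python
import math

def neutral_load(q, EI=1.0, c=1.0):
    """Neutral stability curve mu(q) = EI*q^2 + c/q^2 from Eq. (4.2)."""
    return EI * q**2 + c / q**2

def critical_wavenumber(EI=1.0, c=1.0):
    """Minimizer of mu(q): q_c = (c/EI)**(1/4)."""
    return (c / EI) ** 0.25

qc = critical_wavenumber()        # 1.0 for EI = c = 1
mu_c = neutral_load(qc)           # 2.0: the critical load
# qc is indeed a minimum of the neutral curve:
assert neutral_load(0.9 * qc) > mu_c and neutral_load(1.1 * qc) > mu_c
```

The values q_c = 1 and µ_c = 2 are consistent with the bifurcation load µ ≈ 2 reported in the numerical results of this chapter.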
Note that the solutions of the system (4.1) are stationary points of the following potential energy:
\[
\mathcal{P}(u, v) = \int_0^L \left( \frac{ES}{2}\left( u' + \frac{v'^2}{2} \right)^2 + \frac{EI}{2} v''^2 + \frac{c}{2} v^2 + \frac{c_3}{4} v^4 - fu \right) dx.
\tag{4.3}
\]

Reduction procedure by Fourier series

We will conduct a multi-scale approach based on the concept of Fourier series with slowly varying coefficients. Let us suppose that the instability wave number q is known. In this way, all the unknowns of the model U(x) = {u(x), v(x), n(x), ...} can be written in the form of Fourier series, whose coefficients vary more slowly than the harmonics:
\[
U(x) = \sum_{j=-\infty}^{+\infty} U_j(x)\, e^{jiqx},
\tag{4.4}
\]
where the Fourier coefficient U_j(x) denotes the envelope of the j-th order harmonic and is the complex conjugate of U_{-j}(x). The macroscopic unknown fields U_j(x) vary slowly over a period \([x, x + 2\pi/q]\) of the oscillation. In practice, only a finite number of Fourier coefficients will be considered. As shown in Fig. 1.6, at least two functions U_0(x) and U_1(x) are necessary to describe nearly periodic patterns: U_0(x) can be identified with the mean value while U_1(x) represents the envelope or amplitude of the spatial oscillations. The mean value U_0(x) is real valued, while the other envelopes are complex. Consequently, the envelope of the first harmonic U_1(x) can be written as U_1(x) = r(x) e^{iφ(x)}, where r(x) represents the amplitude modulation and φ(x) the phase modulation. If the phase varies linearly like φ(x) = Qx + φ_0, this type of approach is able to describe quasi-periodic responses whose wave number q + Q slightly differs from the a priori chosen q. Hence, the method makes it possible to account for a change in wave number. The main idea of macroscopic modeling is to deduce differential equations satisfied by the amplitudes U_j(x).
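The envelope representation (4.4) and its inverse, the reduction formula (4.9) given below, can be illustrated numerically. In this sketch the modulation A(x) is a hypothetical slowly varying amplitude chosen for illustration, not taken from the chapter; the first-order Fourier coefficient of the modulated oscillation recovers the modulation up to a small residual.

```python
import cmath
import math

def envelope(v, x, q=1.0, j=1, n=400):
    """j-th slowly varying Fourier coefficient of v at x (Eq. (4.9)):
    U_j(x) = q/(2*pi) * integral over [x - pi/q, x + pi/q] of
             v(x + y) * exp(-i*j*q*(x + y)) dy,
    evaluated with a composite midpoint rule on n sub-intervals."""
    h = (2.0 * math.pi / q) / n
    s = 0.0 + 0.0j
    for k in range(n):
        y = -math.pi / q + (k + 0.5) * h
        s += v(x + y) * cmath.exp(-1j * j * q * (x + y))
    return (q / (2.0 * math.pi)) * s * h

# A nearly periodic signal: carrier cos(x) (q = 1) with a slow, hypothetical
# amplitude A(x); by Eq. (4.4), v = A e^{iqx} + A e^{-iqx}, so U_1 ~ A.
A = lambda x: 0.3 * math.sin(0.02 * x)
v = lambda x: 2.0 * A(x) * math.cos(x)

x0 = 15.0 * math.pi
u1 = envelope(v, x0)   # approximately A(x0), with a small residual
```

The coefficient comes out real and close to A(x0) because the carrier phase was chosen as zero; the small error stems from the slow variation of A over one period, which is precisely what the macroscopic models neglect.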
Some calculation rules have been introduced in [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] to manage these Fourier series with slowly varying coefficients.

A simple macroscopic model with two real envelopes

The previous reduction procedure has been applied to the microscopic model (4.1) and (4.3) (see [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]). Several reduced models have been established, depending on the number of harmonics and some additional assumptions. The general methodology to obtain the macroscopic models is detailed in Appendix B, as well as a very accurate reduced model with five harmonics and another with one real and one complex envelope in Appendix C. In this chapter, we only recall the simplest possible model, which involves only three real functions: the mean values of the membrane unknowns u_0(x), n_0(x) and the first amplitude of the oscillation of the deflection v_1(x). The potential energy of this simple macroscopic model is given by (see [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF])
\[
\mathcal{P}(u_0, v_1) = \int_0^L \left( \frac{ES}{2}\left( u_0' + v_1'^2 + q^2 v_1^2 \right)^2 + EI\left( 6q^2 v_1'^2 + q^4 v_1^2 \right) + c v_1^2 + \frac{3c_3}{2} v_1^4 - f_0 u_0 \right) dx.
\tag{4.5}
\]
The differential equations (4.6) of the system follow from the stationarity of this potential energy.

Transition operators in the framework of Fourier series with variable coefficients

The next problem studied in this chapter is the coupling between the macroscopic model (4.6) and the microscopic model (4.1).
According to the Arlequin framework, a bilinear form has to be defined and this will be done in the following sections. In this part, we define and analyze transition operators between the full model and the reduced one.

Prolongation and reduction operators

Let us discuss the possible manners of connecting the fine and reduced models. Firstly, let us consider the transition from the envelopes U_j(x) to the full model (U_j(x) → U(x)). It has been previously introduced (see [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF]):
\[
P(U_j) = U(x) = \sum_{j=-\infty}^{+\infty} U_j(x)\, e^{jiqx}.
\tag{4.7}
\]
With the simplification in Section 4.2.3, the unknowns are reduced to the mean membrane displacement u_0(x) and to the first envelope of the deflection v_1(x). Consequently, Eq. (4.7) can be simplified as
\[
P(u_0, v_1) = \begin{Bmatrix} u(x) \\ v(x) \end{Bmatrix} = \begin{Bmatrix} u_0(x) \\ 2v_1(x)\cos(qx + \varphi) \end{Bmatrix}.
\tag{4.8}
\]
Conversely, according to the assumption of a slowly varying envelope over a period \([x - \pi/q, x + \pi/q]\), the macroscopic unknowns can be deduced from the microscopic ones by the classic formula of Fourier series:
\[
U_j(x) = \frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} U(x+y)\, e^{-jiq(x+y)}\, dy.
\tag{4.9}
\]
Therefore, considering the simplified theory in Section 4.2.3, the reduction operator reads
\[
R(u, v) = \begin{Bmatrix} u_0(x) \\ v_1^R(x) \\ v_1^I(x) \end{Bmatrix} = \frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} \begin{Bmatrix} u(x+y) \\ v(x+y)\cos[q(x+y)] \\ -v(x+y)\sin[q(x+y)] \end{Bmatrix} dy.
\tag{4.10}
\]

Numerical analysis of the reduction procedure

Theoretical remarks

Before presenting a bridging technique, we first examine the meaning of the reduction procedure in Eqs. (4.9) and (4.10). This is performed through the analysis of a numerical solution of the microscopic model (4.1). For a periodic function U(x), the Fourier coefficients are given by Eq. (4.9).
In what follows, we compute the real part U_j^R and the imaginary part U_j^I as follows:
\[
U_j^R(x) = \frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} U(x+y)\cos\left[jq(x+y)\right] dy,
\tag{4.11}
\]
\[
U_j^I(x) = -\frac{q}{2\pi} \int_{-\pi/q}^{\pi/q} U(x+y)\sin\left[jq(x+y)\right] dy.
\tag{4.12}
\]
From the mathematical standpoint, it is straightforward to obtain all the envelopes through Eq. (4.9). The reduction of the transversal and longitudinal displacements is conducted by considering five envelopes, j = 0, j = ±1 and j = ±2. We consider a beam with length L = 30π, ES = 1, EI = 1, c = 1 and c_3 = 1/3. We choose the instability wave number q = 1. The beam is subjected to an increasing global end shortening u(L) = -µL and the body force is f_0 = 0. The whole beam is divided into 120 cubic elements, which means that the element length is l_e = π/4. Theoretically, to implement this nonlocal reduction, we can choose any point x in Eqs. (4.11) and (4.12) as the center to carry out the integral over the period \([x - \pi/q, x + \pi/q]\), except in the boundary regions \([0, \pi/q]\) and \([L - \pi/q, L]\). For simplicity, we choose each node of the microscopic mesh as the center of these integrals. Therefore, for each reduction point, the integral domain covers eight elements over the whole period as shown in Fig. The discretization of this nonlocal reduction can be written as
\[
U_j^R(x_i) = \frac{q}{2\pi} \int_{x_i - \pi/q}^{x_i + \pi/q} U(x)\cos(jqx)\, dx \approx \frac{q}{2\pi}\,\frac{l_e}{2} \sum_{x_n \in gp} U(x_n)\cos(jqx_n),
\tag{4.13}
\]
\[
U_j^I(x_i) = -\frac{q}{2\pi} \int_{x_i - \pi/q}^{x_i + \pi/q} U(x)\sin(jqx)\, dx \approx -\frac{q}{2\pi}\,\frac{l_e}{2} \sum_{x_n \in gp} U(x_n)\sin(jqx_n),
\tag{4.14}
\]
where x_i are the nodes of the mesh and x_n ∈ gp represent the corresponding Gauss points within the integration domain \(I(x_i) = [x_i - \pi/q, x_i + \pi/q]\). Note that the above equations should be limited to functions that are exactly periodic with the period 2π/q. But in general, it is not possible to precisely predict the period of solutions of nonlinear equations, which changes with their amplitudes.
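The discrete reduction (4.13)-(4.14) can be sketched as follows, using two Gauss points per element of length l_e = π/4 (eight elements per period) and, as a test signal, a pure sine pattern of the kind obtained in the bulk; only v_1^I should come out non-negligible.

```python
import math

GAUSS_2PT = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))  # on [-1, 1], weights 1

def reduce_envelope(v, xi, j, q=1.0, le=math.pi / 4):
    """Eqs. (4.13)-(4.14): (U_j^R, U_j^I) at node xi, integrating element by
    element over the period [xi - pi/q, xi + pi/q] with 2 Gauss points each."""
    a = xi - math.pi / q
    n_el = round((2.0 * math.pi / q) / le)     # 8 elements per period here
    sr = si = 0.0
    for e in range(n_el):
        xm = a + (e + 0.5) * le                # element midpoint
        for g in GAUSS_2PT:
            xg = xm + 0.5 * le * g
            sr += v(xg) * math.cos(j * q * xg)
            si += v(xg) * math.sin(j * q * xg)
    w = (q / (2.0 * math.pi)) * (le / 2.0)
    return w * sr, -w * si

v = lambda x: 0.3166 * math.sin(x)             # bulk pattern reported for mu = 2.21
xi = 15.0 * math.pi                            # a node far from the boundaries
v0 = reduce_envelope(v, xi, j=0)[0]
v1R, v1I = reduce_envelope(v, xi, j=1)
v2R, v2I = reduce_envelope(v, xi, j=2)
# only the first imaginary envelope is significant: v1I ~ -0.3166/2
```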
Let us consider for instance a harmonic function with a wave number Q that is close to, but slightly different from, the a priori given wave number q:
\[
v(x) = e^{i(Qx+\varphi)} + e^{-i(Qx+\varphi)}, \qquad Q \approx q, \quad Q \neq q.
\tag{4.15}
\]
From Eqs. (4.13) and (4.14), the first order Fourier coefficient can be written as
\[
v_1(x) = \frac{q}{2\pi} \left( e^{i\varphi} \underbrace{\int_{-\pi/q}^{\pi/q} e^{i(Q-q)(x+y)}\, dy}_{\text{slowly variable}} + e^{-i\varphi} \underbrace{\int_{-\pi/q}^{\pi/q} e^{-i(Q+q)(x+y)}\, dy}_{\text{oscillating}} \right).
\tag{4.16}
\]
Therefore, the Fourier coefficients involve a slowly varying part and a rapidly varying one, this second part being disregarded by the macroscopic models in Section 4.2. The oscillating part is relatively small when the two wave numbers are close to each other.

Numerical tests for a simply supported beam

In order to analyze the practical accuracy of the reduction formulae (4.13) and (4.14), we have first computed the response of the microscopic model (4.1) with simply supported boundary conditions: v(0) = v''(0) = 0, v(L) = v''(L) = 0. Then the reduction terms (4.13) and (4.14) are calculated for µ = 2.21 and the orders j = 0, j = 1 and j = 2, respectively. Their spatial distribution is plotted in Fig. 4.3. It is found that only the imaginary part of the first order envelope v_1^I is not small, the ratios |v_1^R/v_1^I| and |v_0/v_1^I| being of the order of 10^-3 or 10^-4. The second order amplitudes v_2^R and v_2^I have an even lower level than v_0. In other words, the response v(x) is approximately 0.3166 sin x and the complementary contributions are relatively small. This means that the effective wave number Q for µ = 2.21 corresponds precisely to the predicted quantity q = 1. We have checked that the wave number is also Q = 1 for any µ in the interval [2, 2.21].
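The decomposition (4.16) can also be checked numerically: for a carrier with Q slightly different from q, the reduced coefficient v_1(x) rotates slowly with a phase drift ≈ (Q - q)x, plus a small ripple from the oscillating term. In this sketch the values Q = 1.05 and the sample points are arbitrary choices for illustration.

```python
import cmath
import math

def first_envelope(v, x, q=1.0, n=400):
    """First Fourier coefficient of v over [x - pi/q, x + pi/q] (Eq. (4.9)),
    computed with a composite midpoint rule."""
    h = (2.0 * math.pi / q) / n
    s = 0.0 + 0.0j
    for k in range(n):
        y = -math.pi / q + (k + 0.5) * h
        s += v(x + y) * cmath.exp(-1j * q * (x + y))
    return (q / (2.0 * math.pi)) * s * h

q, Q, phi = 1.0, 1.05, 0.0
v = lambda x: 2.0 * math.cos(Q * x + phi)      # Eq. (4.15) with Q != q

x1, x2 = 20.0, 30.0
u1, u2 = first_envelope(v, x1, q), first_envelope(v, x2, q)
drift = cmath.phase(u2) - cmath.phase(u1)      # close to (Q - q)*(x2 - x1) = 0.5
```

The modulus of the coefficient stays close to 1 (the slowly variable term), while the phase drift encodes the change of wave number; the deviation from 0.5 is the small oscillating contribution disregarded by the macroscopic models.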
Numerical tests for a clamped beam

With clamped boundary conditions, v(0) = v'(0) = 0, v(L) = v'(L) = 0, it is known that the effective wave number Q is not exactly the one predicted by the linear theory.

[Figure: envelopes v_1^R, v_1^I, v_2^R, v_2^I; the reduction is performed over the domain [π, 29π]; instability pattern for µ = 2.21.]

[Figure: the field reconstructed from v_0, v_1^R, v_1^I, v_2^R and v_2^I is compared with the exact solution v(x).]

Bridging technique and discretization

In this section, the microscopic model (4.1) and (4.3) is implemented in a small region close to the boundary, which allows for the introduction of "exact" boundary conditions into the system. The simplest envelope model (4.5) and (4.6) will be applied in the bulk. These two types of models can be bridged by the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF]. According to the Arlequin framework, these two mechanical fields are matched in a weak sense inside the gluing zone and the potential energy is distributed between the two models.

Arlequin method in the context of prolongation or reduction coupling

The domain of the whole mechanical system is partitioned into two overlapping subzones: Ω_f (microscopic fine model domain) and Ω_r (macroscopic reduced model domain). The resulting superposition zone S = Ω_f ∩ Ω_r contains the gluing zone S_g (S_g ⊆ S) (see Fig. 4.11). Here the two zones S and S_g cannot coincide because of the nonlocal character of the reduction operator.
Energy distribution

Setting u_f = {u(x), v(x), x ∈ Ω_f} and u_r = {u_0(x), v_1(x), x ∈ Ω_r}, the energy contributions of the two models to the potential energy of the whole system defined in Eqs. (4.3) and (4.5) are as follows:
\[
\begin{cases}
\mathcal{P}_f(u_f) = \displaystyle\int_{\Omega_f} \left[ \alpha_f \mathcal{W}(u_f) - \beta_f f u \right] d\Omega,\\[2mm]
\mathcal{P}_r(u_r) = \displaystyle\int_{\Omega_r} \left[ \alpha_r \mathcal{W}(u_r) - \beta_r f_0 u_0 \right] d\Omega,
\end{cases}
\tag{4.17}
\]
where
\[
\begin{cases}
\mathcal{W}(u_f) = \dfrac{ES}{2}\left( u' + \dfrac{v'^2}{2} \right)^2 + \dfrac{EI}{2} v''^2 + \dfrac{c}{2} v^2 + \dfrac{c_3}{4} v^4,\\[2mm]
\mathcal{W}(u_r) = \dfrac{ES}{2}\left( u_0' + v_1'^2 + q^2 v_1^2 \right)^2 + EI\left( 6q^2 v_1'^2 + q^4 v_1^2 \right) + c v_1^2 + \dfrac{3c_3}{2} v_1^4.
\end{cases}
\tag{4.18}
\]
In order to have a consistent modeling of the energy in the overlapping domain, the energy associated to each domain is balanced by weight functions, represented by α_i for the internal work and β_i for the external work. These weight functions are assumed to be positive, piecewise continuous in Ω_i, and to satisfy the following equations:
\[
\begin{cases}
\alpha_f = \beta_f = 1, & \text{in } \Omega_f \setminus S,\\
\alpha_r = \beta_r = 1, & \text{in } \Omega_r \setminus S,\\
\alpha_f + \alpha_r = \beta_f + \beta_r = 1, & \text{in } S.
\end{cases}
\tag{4.19}
\]
More details on the selection of these functions can be found in [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. In this chapter, we choose piecewise linear continuous weight functions in the overlapping region S.

Coupling alternatives

The coupling technique implies a connection between the microscopic model and the envelope model. According to the Arlequin framework, a coupling based on the coarse model is generally preferred to avoid locking phenomena. This requires defining a nonlocal reduction operator u_f → R(u_f) which involves the Fourier transform. Conversely, the other way is to perform the inverse connection by using a local prolongation operator u_r → P(u_r), which reproduces from u_r a compatible field to be coupled with u_f (see Section 4.3.1).
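A minimal sketch of piecewise linear weight functions satisfying Eq. (4.19), assuming the fine domain lies to the left of the overlap S = [s0, s1] and the reduced domain to its right (the interval bounds are illustrative):

```python
def arlequin_weights(x, s0, s1):
    """Piecewise linear Arlequin weights (Eq. (4.19)): alpha_f = 1 on the fine
    side of the overlap [s0, s1], alpha_r = 1 on the reduced side, and a
    linear ramp with alpha_f + alpha_r = 1 inside the overlap."""
    if x <= s0:
        t = 0.0
    elif x >= s1:
        t = 1.0
    else:
        t = (x - s0) / (s1 - s0)
    return 1.0 - t, t          # (alpha_f, alpha_r)

# partition of unity everywhere, e.g. on an overlap [2.0, 5.0]:
af, ar = arlequin_weights(3.5, 2.0, 5.0)   # midpoint: 0.5 each
```

The same function can serve for the β_i weights, since (4.19) imposes identical conditions on them.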
Therefore, the coupling is conducted by requiring that one of the two following conditions be satisfied in a mean sense:
\[
R(u_f) - u_r = 0, \quad \forall x \in S_g;
\tag{4.20}
\]
\[
u_f - P(u_r) = 0, \quad \forall x \in S_g.
\tag{4.21}
\]
The literature on the Arlequin method recommends the first way (4.20) [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. For simplicity, the prolongation method (4.21) was studied in [START_REF] Hu | A bridging technique to analyze the influence of boundary conditions on instability patterns[END_REF]. Here, we will test the reduction approach (4.20), which should lead to a better coupling.

Prolongation coupling approach

The prolongation operator was defined in Eq. (4.8). The reduced model is based on strong simplifications and especially on the assumption of a constant arbitrary phase. As in [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF], we choose φ = -π/2 in what follows, which leads to the following form of the prolongation operator:
\[
P(u_r) = \begin{bmatrix} 1 & 0 \\ 0 & 2\sin(qx) \end{bmatrix} \begin{Bmatrix} u_0 \\ v_1 \end{Bmatrix}, \quad \forall x \in S_g.
\tag{4.22}
\]
By introducing Lagrange multipliers λ = {λ_u(x), λ_v(x), x ∈ S_g} as a fictitious gluing force, the coupling equation (4.21) can be rewritten in a weak form as
\[
C\left(\lambda,\, u_f - P(u_r)\right) = 0, \quad \forall \lambda \in M,
\tag{4.23}
\]
where M is the mediator space. Eq. (4.23) can be considered as a constraint in an optimization problem. The corresponding stationary function is given in a Lagrangian form as
\[
\mathcal{L}(u_f, u_r, \lambda) = \mathcal{P}_f(u_f) + \mathcal{P}_r(u_r) + C\left(\lambda,\, u_f - P(u_r)\right).
\tag{4.24}
\]
From Eq.
(4.24), three equations are obtained according to δu_f, δu_r and δλ:
\[
\begin{cases}
\mathcal{P}_f(\delta u_f) + C(\lambda, \delta u_f) = 0, & \forall \delta u_f \in \text{K.A.},\\
\mathcal{P}_r(\delta u_r) - C(\lambda, P(\delta u_r)) = 0, & \forall \delta u_r \in \text{K.A.},\\
C(\delta\lambda, u_f) - C(\delta\lambda, P(u_r)) = 0, & \forall \delta\lambda \in M,
\end{cases}
\tag{4.25}
\]
where K.A. stands for kinematically admissible. Finally, the coupling operator C is defined as follows:
\[
C(\lambda, u) = \int_{S_g} \left( \lambda \cdot u + \ell^2\, \varepsilon(\lambda) : \varepsilon(u) \right) d\Omega.
\tag{4.26}
\]
It is an H^1-type coupling operator. When ℓ = 0, it becomes an L^2-type coupling operator. The choice of the length ℓ has been discussed in the literature [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Guidault | On the L 2 coupling and the H 1 couplings for an overlapping domain decomposition method using Lagrange multipliers[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Hu | Multi-scale modeling of sandwich structure using the Arlequin method, Part I: linear modelling[END_REF] and the difficulties of the L^2 coupling have been pointed out. This point will be re-discussed in Section 4.5.3.

Reduction-based coupling approach

In Section 4.3.2, it was shown that the longitudinal displacement can be adequately described by a single envelope u_0, since it is almost linear. Moreover, the two longitudinal displacements coincide (see Eqs. (4.22) and (4.28) below), so that their bridging procedures (4.23) and (4.27) are identical. In the transversal direction, the reduction operator is the first order Fourier coefficient, which has a nonlocal character (see Eq. (4.10)). For simplicity, C(λ, u) will be the L^2-type scalar product. The reduction operator has been defined in Eq. (4.10). With φ = -π/2, the weak form of the coupling formula (4.20) reads
\[
C\left(\lambda,\, R(u_f) - u_r\right) = 0, \quad \forall \lambda \in M,
\tag{4.27}
\]
where
\[
R(u_f) = \begin{Bmatrix} R_u(x) \\ R_v(x) \end{Bmatrix} = \begin{Bmatrix} u(x) \\ \dfrac{q}{2\pi} \displaystyle\int_{-\pi/q}^{\pi/q} v(x+y)\sin\left[q(x+y)\right] dy \end{Bmatrix}.
\]
(4.28)
The corresponding stationary function is also in a Lagrangian form:
\[
\mathcal{L}(u_f, u_r, \lambda) = \mathcal{P}_f(u_f) + \mathcal{P}_r(u_r) + C\left(\lambda,\, R(u_f) - u_r\right).
\tag{4.29}
\]
From Eq. (4.29), one can obtain three equations according to δu_f, δu_r and δλ:
\[
\begin{cases}
\mathcal{P}_f(\delta u_f) + C(\lambda, R(\delta u_f)) = 0, & \forall \delta u_f \in \text{K.A.},\\
\mathcal{P}_r(\delta u_r) - C(\lambda, \delta u_r) = 0, & \forall \delta u_r \in \text{K.A.},\\
C(\delta\lambda, R(u_f)) - C(\delta\lambda, u_r) = 0, & \forall \delta\lambda \in M.
\end{cases}
\tag{4.30}
\]

Comments

In the works of Ben Dhia and Rateau [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Ben Dhia | Global local approaches: the Arlequin framework[END_REF][START_REF] Ben Dhia | Further insights by theoretical investigations of the multiscale Arlequin method[END_REF], it was established that it is better to discretize the coupling equation in a coarse manner (choice of the discrete mediator space M). In this chapter, we will further discuss whether the coupling equation has to be defined at a fine level (4.21) and (4.23) or at a coarse level (4.20) and (4.27).

Discretization

The chosen discretization of the microscopic model (4.1) is very classical, with a linear interpolation for the axial displacement and a cubic Hermite interpolation for the deflection. For the macroscopic model, C^0 elements can be chosen since the reduced energy (4.5) involves only first derivatives. As in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], we use 3-node quadratic elements. The discretization of the coupling operators is presented in more detail below.

Discretization of the prolongation coupling

The finite element method is applied to solve Eq. (4.25).
The discretization of the unknowns is as follows:
\[
u_f = \begin{Bmatrix} u \\ v \end{Bmatrix}_e = \begin{bmatrix} N_u^f \\ N_v^f \end{bmatrix} \left\{ Q_f \right\}_e,
\tag{4.31}
\]
\[
u_r = \begin{Bmatrix} u_0 \\ v_1 \end{Bmatrix}_e = \begin{bmatrix} N_u^r \\ N_v^r \end{bmatrix} \left\{ Q_r \right\}_e,
\tag{4.32}
\]
\[
\lambda = \begin{Bmatrix} \lambda_u \\ \lambda_v \end{Bmatrix}_e = \begin{bmatrix} N_u^r \\ N_v^r \end{bmatrix} \left\{ Q_\lambda \right\}_e,
\tag{4.33}
\]
where {Q_f}_e, {Q_r}_e and {Q_λ}_e are the elementary nodal unknowns of u_f, u_r and λ, respectively. The shape functions N_u^f and N_v^f are, respectively, described by Lagrange and Hermite interpolating polynomials. To avoid locking the microscopic behavior onto the macroscopic behavior in the coupling zone, the discretization of λ should be conducted as for u_r (see more details in [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF]). Finally, one can obtain the global discrete system in the generic form of a mixed problem:
\[
\begin{cases}
\left[ R_f(Q_f) \right] + \left[ C_f \right]^t \left\{ Q_\lambda \right\} = 0,\\
\left[ R_r(Q_r) \right] - \left[ C_r \right]^t \left\{ Q_\lambda \right\} = 0,\\
\left[ C_f \right] \left\{ Q_f \right\} - \left[ C_r \right] \left\{ Q_r \right\} = 0,
\end{cases}
\tag{4.34}
\]
where the residuals R_f(Q_f) and R_r(Q_r) are detailed in Appendix D. The coupling matrices C_f and C_r are assembled from the elementary matrices that have the following forms:
\[
C_f^e = \int_{\Omega_e} \left( \begin{bmatrix} N_u^r \\ N_v^r \end{bmatrix} \begin{bmatrix} N_u^f \\ N_v^f \end{bmatrix}^t + \ell^2 \begin{bmatrix} N_u^{r\prime} \\ N_v^{r\prime} \end{bmatrix} \begin{bmatrix} N_u^{f\prime} \\ N_v^{f\prime} \end{bmatrix}^t \right) d\Omega,
\tag{4.35}
\]
\[
C_r^e = \int_{\Omega_e} \left( \begin{bmatrix} N_u^r \\ N_v^r \end{bmatrix} \begin{bmatrix} N_u^r \\ 2N_v^r\sin(qx) \end{bmatrix}^t + \ell^2 \begin{bmatrix} N_u^{r\prime} \\ N_v^{r\prime} \end{bmatrix} \begin{bmatrix} N_u^{r\prime} \\ 2N_v^{r\prime}\sin(qx) + 2qN_v^r\cos(qx) \end{bmatrix}^t \right) d\Omega,
\tag{4.36}
\]
in which Ω_e represents the elementary integration domain. The resulting nonlinear system (4.34) is solved using the classic Newton-Raphson method.

Discretization of the reduction-based coupling

The discretization of the unknowns u_f, u_r and λ is the same as in Section 4.4.2. In addition, the global discrete system (4.34) has the same form as in the prolongation coupling, but with a completely different coupling matrix C_f due to the nonlocal character of the coupling operator. Now let us look at the discretization of the bilinear form C(λ, R(u_f)), which follows from two integrations.
The bilinear form C (•, •) is an integral in the macroscopic domain split into macroscopic elements E, each of them being associated with their Gauss points (GP (E)) in S g . This first integral is quite classical with finite element method. The reduction operator R(•) is an integral in the interval I(x i ) = [ x i -π q , x i + π q ] in the microscopic domain that is split into small elements e, each of them being associated with its Gauss points (gp(e)). Classically, these Gauss points are defined in a reference interval [-1, 1]. The bilinear form is discretized in quite a classical way: C (λ, R(u f )) = ∑ E L E 2 ∑ x i ∈GP (E) ( ⟨Q λ u ⟩ E {N r u (x i )}⟨N f u ⟩ { Q f u } e + ⟨Q λ v ⟩ E {N r v (x i )}R v (x i ) ) , (4.37) where L E is the length of the macroscopic elements. Note that two Gauss points on each macroscopic element can fully meet the accuracy requirements in this case (see Fig. 4.12). The reduction operator R v (x i ) in Eq. (4.28) is detailed from the following classical integration formula: R v (x i ) = q 2π ∫ x i + π q x i -π q v(x) sin(qx)dx ≈ q 2π ∑ e l(e) 2 ∑ x j ∈gp(e) v(x j ) sin(qx j ) = q 2π ∑ e l(e) 2 ∑ x j ∈gp(e) sin(qx j )⟨N f v (x j )⟩ { Q f v } e , ( 4.38) where the length of the interval l(e) is the intersection of the microscopic element with the integral region I(x i ) = [ x i -π q , x i + π q ] . It does not necessarily coincide with the interval of interpolation (see Fig. 4.12). The coupling matrix C r can be assembled from the elementary matrices as C e r = ∫ Ωe ([ N r u N r v ] [ N r u N r v ] t ) dΩ. (4.39) The nonlinear system is solved using the classic Newton-Raphson method. bifurcation, which can be explained via the Ginzburg-Landau equation [153,[START_REF] Damil | Wavelength selection in the postbuckling of a long rectangular plate[END_REF]. In what follows, we distinguish the bulk behavior and the boundary behavior, respectively in the regions [3π, 15π] and [0, 3π]. 
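The reduction operator of (4.38) extracts the slowly varying first Fourier coefficient of the deflection on a sliding window of one period. A small Python sketch (function name and quadrature choice are ours; the thesis version sums the same integral element by element) evaluates it with a Gauss rule and recovers the expected amplitude a/2 for a pure mode v(x) = a sin(qx):

```python
import numpy as np

def reduction_Rv(v, x_i, q, ngauss=64):
    # R_v(x_i) = q/(2*pi) * integral of v(x)*sin(q*x) over [x_i - pi/q, x_i + pi/q]
    xg, wg = np.polynomial.legendre.leggauss(ngauss)
    a, b = x_i - np.pi / q, x_i + np.pi / q
    x = 0.5 * (b - a) * xg + 0.5 * (a + b)           # map [-1,1] -> [a,b]
    return q / (2 * np.pi) * 0.5 * (b - a) * np.sum(wg * v(x) * np.sin(q * x))

q = 3.0
v = lambda x: 1.7 * np.sin(q * x)   # wrinkling mode of amplitude 1.7
# For v = a*sin(qx), the window integral of a*sin^2 is a*pi/q, so R_v = a/2.
print(np.isclose(reduction_Rv(v, x_i=5.0, q=q), 1.7 / 2))
```

This halving is consistent with the convention v(x) ≈ 2 v₁(x) sin(qx) used for the reduced kinematics.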
The post-buckling amplitude in the bulk is correctly captured by all the models, especially the macroscopic model (see Fig. 4.15). Indeed, the boundary conditions have little influence on the macroscopical amplitude in the bulk. In the same way, all the models correctly predict the bifurcation point µ ≈ 2. Near the boundary, the macroscopic model and prolongation coupling lead to divergent results. Only the reduction-based coupling is able to reproduce the prediction of the reference model (see Fig. 4.16). The details of the response near the boundary are illustrated in Fig. 4.19 for µ = 2.21. One can observe the coincidence between the reduction-based coupling and the reference model. As for the macroscopic model and prolongation coupling, the post-buckling instability pattern is qualitatively similar to the reference model but with a significant phase shift. It is well-known that the coupling matrices have to be defined on the coarse level to avoid locking phenomena. In [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF], this coarse character is related to the discretization of the Lagrange multipliers λ. In the present case, the coupling matrices C e f and C e r depend on the discretized mediator space, but there is also another alternative: to connect the Fourier coefficients (reduction-based coupling) or the functions of the fine model (prolongation coupling). Our results clearly establish that the coupling procedure has to be done in the coarse space, i.e. in the space of Fourier coefficients. Indeed, considering the values of the functions v f (x) and v r (x) in the gluing zone for the prolongation coupling (see Fig. 4.17), one can observe that the two functions coincide (v f (x) = v r (x) in S g ) and this locking phenomenon causes undesirable behavior in the boundary region (see Fig. 4.19). On the contrary, as for the reduction-based coupling (see Fig. 
4.18), v f (x) and v r (x) are not identical in the gluing zone, which leads to an accurate prediction in the boundary region (see Fig. 4.19). About convergence The definition of the numerical model depends on the three meshes (micro, macro and bridging), the reduced model and the location of gluing zone. The choice of meshes follows the same rules as with finite element technique and it is not re-discussed here. The definition of the reduced model induces some limits to the accuracy that can be expected in the macroscopic domain Ω r . With the choice made in this chapter, one can get a good prediction of the amplitude of wrinkling patterns in this zone, but not their phase. This limitation clearly appears in Figs. 4.17 and 4.18 and cannot be improved within this reduced model. Thus, we focus on the influence of the gluing zone or, equivalently, on the size of the microscopic domain Ω f . In Figs. 4.20 and 4.21, the Arlequin solution is compared with the reference solution in the cases of gluing zones in [3π, 4π] and [8π, 9π], respectively. In the first case with a small Ω f , the Arlequin solution is valid in almost all this interval Ω f = [0, 3π]. This is already a good result that was not found with the prolongation coupling, compared with Fig. 4.19. If one extends the microscopic domain Ω f up to [0, 8π], the Arlequin solution becomes accurate both in microscopic and gluing zones, which is the best accuracy to be expected with this reduced model. H 1 versus L 2 coupling The choice of the coupling bilinear form is a basic question within Arlequin method. The problems involved by L 2 coupling are known. 
It has been established that the Lagrange multiplier converges to a distribution and not to a function, for 1D elasticity with different meshes [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] and for atomistic-continuum coupling [START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF]. We revisit this discussion for the present coupling between the nonlinear beam and the envelope model, in the case of the prolongation coupling. The same clamped beam as in Section 4.5.1 is considered. In Fig. 4.22, we have plotted the transversal displacement in the zone of the microscopic model on the left. It should be compared with Fig. 4.19, where the prolongation and reduction-based couplings were evaluated. The difference is rather weak, of a few percent, while a significant difference was observed between the prolongation and reduction-based couplings. Nevertheless, this does not mean that H 1 and L 2 couplings are equivalent. In Fig. 4.23, one evaluates the difference between the two displacements v f and v r in the cases of H 1 and L 2 couplings, and one observes that this difference is much smaller with H 1 coupling. In Fig. 4.24, the spatial evolution of the normal force n(x) is depicted, as well as the macroscopic stress n 0 (x), which is the mean value of n(x). In this case, these two quantities should be constant because of Eqs. (4.1-a) and (4.6-a). One observes rather strong oscillations in the coupling zone that are much more severe in the L 2 case (about 4.2%) than in the H 1 case (about 1.6%). Nevertheless, these oscillations have little influence outside the gluing zone. Note that the small oscillations in the microscopic zone on the left are due to membrane locking in the finite element approximation. Finally, we present in Fig. 4.25 the variations of the Lagrange multipliers in the two coupling cases.
Localized forces are observed near the end of the gluing zone, which was expected by comparison with previous results; see for instance [START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF][START_REF] Bauman | On the application of the Arlequin method to the coupling of particle and continuum models[END_REF][START_REF] Chamoin | Ghost forces and spurious effects in atomic-to-continuum coupling methods by the Arlequin approach[END_REF]. In the latter paper, the origin of these so-called "ghost forces" was carefully analyzed, and some corrections were proposed through an appropriate choice of weights and especially by introducing interaction forces between the coarse and fine models. Hence, there are differences between H 1 and L 2 couplings in the present case of bridging between a macroscopic and a microscopic model, which is very sensitive in the gluing zone. However, the coupling can be achieved at the micro or macro level. In the studied case, the best way is to couple in the macroscopic domain, and this point is at least as important as the choice of the bilinear form, as clearly shown in Fig. 4.23 (relative difference |v f -v r |/max|v f | in the gluing zone S g = [5π, 6π], for the H 1 and L 2 couplings).

Chapter conclusion

In this chapter, we have discussed how to connect a fine and a coarse model within the Arlequin framework. A typical cellular instability problem has been accurately analyzed, in which the coarse model is defined by envelope equations of Ginzburg-Landau type. An Arlequin problem involves a coupling operator and two models to be connected. The efficiency of the numerical technique depends on all three, but the coupling model has to be sufficiently robust and compatible with various choices of reduced models. In the case of the envelope equations discussed here, the reduction operator leads to spurious oscillations in the Fourier coefficients, which have been smoothed by the coupling operator.
The presented reduction-based coupling has permitted us to accurately describe the response of the system near the boundary, even with a rather coarse reduced model. The Arlequin method has been applied in a multi-scale framework, which has required to accurately define transition operators between the two levels, i.e. a prolongation operator from the coarse to the fine level and a reduction operator in the opposite sense. These two operators play a crucial role in the coupling technique, as well as the mediator bilinear form and its discretization. In the studied case, it is clearly better to use the reduction operator for a coupling at the coarse level rather than the prolongation operator for a coupling at the fine level. It will be interesting to discuss this question with other multi-scale models like those obtained in computational homogenization. It will be also interesting to apply a similar bridging technique for the coupling between full shell models and 2D envelope equations as introduced in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF]. Clearly, the bilinear form should be of H 1 -type as in the nonlocal coupling introduced in [START_REF] Prudhomme | Analysis of an averaging operator for atomic-to-continuum coupling methods by the Arlequin approach[END_REF] and the coupling has to be performed in the space of Fourier coefficients as established in the present chapter. film, while the computational cost would be rather high for simulating large samples with big wave number, which requires numerous finite elements. 
In this respect, one idea for simulating larger samples is to introduce reduced-order models, for example via the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. In this chapter, the macroscopic modeling methodology mentioned in Section 1.3 is applied to film/substrate multi-scale modeling, both in 2D and 3D. A generalized macroscopic modeling framework is deduced first, and then specialized to the 2D and 3D cases, with simplifications and assumptions introduced for the sake of efficiency. Precisely, the 2D macroscopic film/substrate model is based on the classical model established in Chapter 2, with all the mechanical fields represented, from the macroscopic viewpoint, by Fourier coefficients. As for the 3D modeling of film/substrate, a nonlinear macroscopic membrane-wrinkling model that accounts for both membrane and bending energy is deduced first. Then a linear macroscopic elastic model is derived. Finally, following the same strategy as in Section 3.2.3, these two models are coupled at the interface through Lagrange multipliers.

General macroscopic modeling framework

We conduct a multi-scale approach based on the concept of Fourier series with slowly varying coefficients. Let us suppose that the instability wave number q is known.
In this way, all the unknowns of model U (x) = {u i (x), w i (x)} can be written in the form of Fourier series, whose coefficients vary more slowly than the harmonics: U (x) = +∞ ∑ j=-∞ U j (x)e jiqx , ( 5.1) where the Fourier coefficient U j (x) denotes the envelope for the j th order harmonic, which is conjugated with U -j (x). The macroscopic unknown fields U j (x) slowly vary over a period [ x, x + 2π q ] of the oscillation. In practice, only a finite number of Fourier coefficients will be considered. As shown in Fig. 1.6, at least two functions U 0 (x) and U 1 (x) are necessary to describe nearly periodic patterns: U 0 (x) can be identified with the mean value while U 1 (x) represents the envelope or amplitude of the spatial oscillations. The mean value U 0 (x) is real valued, while the other envelopes are complex. Consequently, the envelope of the first harmonic U 1 (x) can be written as U 1 (x) = r(x)e iφ(x) , where r(x) represents the amplitude modulation and φ(x) is the phase modulation. If the phase varies linearly like φ(x) = Qx + φ 0 , this type of approach is able to describe quasi-periodic responses whose wave number q + Q slightly differs from the a priori chosen q. Hence, the method makes it possible to account for a change in wave number. The main idea of macroscopic modeling is to deduce differential equations satisfied by the amplitude U j (x). In what follows, we will develop a generalized macroscopic modeling framework for film/substrate system. Some calculation rules have been introduced in [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF] to manage these Fourier series with slowly varying coefficients. Derivative operators can be calculated exactly, according to the rules presented in Appendix B. 
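The ability of a phase-modulated envelope to change the effective wave number can be checked directly. In the short Python sketch below (numerical values are arbitrary), an envelope U₁(x) = r e^{i(Qx+φ₀)} combined with the carrier e^{iqx} reproduces an oscillation of wave number q + Q, as stated above:

```python
import numpy as np

q, Q, r, phi0 = 4.0, 0.3, 0.8, 0.2
x = np.linspace(0.0, 20.0, 2001)
U1 = r * np.exp(1j * (Q * x + phi0))          # slowly rotating envelope
# Reconstruct the real field from the first harmonic and its conjugate:
u = U1 * np.exp(1j * q * x) + np.conj(U1) * np.exp(-1j * q * x)
# The result is 2r*cos((q+Q)x + phi0): the wave number has shifted by Q.
print(np.allclose(u.real, 2 * r * np.cos((q + Q) * x + phi0)))
print(np.allclose(u.imag, 0.0))
```

This is why the a priori choice of q is not critical, provided Q remains small compared with q.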
Let us apply the above methodology in general nonlinear elasticity problem. With the notations in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF], neglecting volume forces, the principle of virtual work can be expressed as ∫ Ω ⟨δγ⟩ {s} dΩ = λ ∫ ∂Ω ⟨δu⟩ {f } dΩ, (5.2) where {δu} is the virtual displacement vector and {f } is the external loading vector at the boundary with the incremental loading parameter λ. In the framework of linear constitutive laws, the second Piola-Kichhoff stress {s} and the Green-Lagrange strain vector {γ} are linked linearly, while the strain is related to the displacement gradient {θ} by a quadratic relationship:    {s} = [D] {γ}, {γ} = [H] {θ} + 1 2 [A(θ)] {θ}, ( 5.3) where the matrices [D], [H] and [A(θ)] are defined in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF]. Note that the matrix [A(θ)] satisfies the following symmetry property: [A(θ)] {φ} = [A(φ)] {θ}. ( 5.4) We seek nearly periodic responses that vary rapidly in one direction. This characteristic direction and the period are described by a wave vector q ∈ R 3 that is assumed as a given parameter. In practice, this vector comes from a linear stability analysis. Hence, the vector {Λ(x)}, which includes displacement vector, its gradient, strain and stress tensors, is sought in the form of Fourier series, whose coefficients {Λ j (x)} vary slowly. For simplicity, we keep harmonics up to level 2 but it may achieve to high level for complex wave oscillation: Λ(x) = +2 ∑ j=-2 Λ j (x)e jiqx . 
(5.5) After applying it to the constitutive law (5.3), one can obtain a macroscopic constitutive law for harmonic 0 (real value) and two constitutive laws for harmonics 1 and 2 (complex values), all these equations being coupled:          {γ 0 } = [D] -1 {s 0 } = [H] {θ 0 } + 1 2 [A(θ 0 )] {θ 0 } + [A(θ -1 )] {θ 1 } + [A(θ -2 )] {θ 2 }, {γ 1 } = [D] -1 {s 1 } = [H] {θ 1 } + [A(θ 0 )] {θ 1 } + [A(θ -1 )] {θ 2 }, {γ 2 } = [D] -1 {s 2 } = [H] {θ 2 } + [A(θ 0 )] {θ 2 } + 1 2 [A(θ 1 )] {θ 1 }. (5.6) Therefore, a generalized continuum model has been defined, which is a sort of superposition of several continua. Like each field in the model, the displacement is replaced by a generalized displacement that includes five Fourier coefficients for j ∈ [-2, 2]. To facilitate the understanding and implementation, the macroscopic constitutive law (5.6) can be unified in the same generic form as the starting law (5.3):    {S} = [D gen ] {Γ}, {Γ} = [H gen ] {Θ} + 1 2 [A gen (Θ)] {Θ}, ( 5.7) where the generalized stress {S}, the generalized strain {Γ} and the generalized displacement gradient {Θ} also include five Fourier coefficients for j ∈ [-2, 2]. The matrices that represent linear relationships are diagonal since the couplings between harmonics appear only for nonlinear terms, which leads to [D gen ] =         D 0 0 0 0 0 D/2 0 0 0 0 0 D/2 0 0 0 0 0 D/2 0 0 0 0 0 D/2         , ( 5.8 ) [H gen ] =         H 0 0 0 0 0 2H 0 0 0 0 0 2H 0 0 0 0 0 2H 0 0 0 0 0 2H         . ( 5.9) The nonlinear aspects are taken into account by a single matrix: [A gen (Θ)] = 2         A(θ 0 )/2 A(θ R 1 ) A(θ I 1 ) A(θ R 2 ) A(θ I 2 ) 0 A(θ 0 ) 0 A(θ R 1 ) A(θ I 1 ) 0 0 A(θ 0 ) -A(θ I 1 ) A(θ R 1 ) 0 A(θ R 1 )/2 -A(θ I 1 )/2 A(θ 0 ) 0 0 A(θ I 1 )/2 A(θ R 1 )/2 0 A(θ 0 )         . ( 5.10) The size of this matrix is 30 × 45 in 3D cases and 15 × 20 in 2D cases. 
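The coupling between harmonics in (5.6) stems only from the quadratic part of the strain. A scalar analogue, γ = θ + θ²/2 with constant envelopes θ₀, θ₁, θ₂, can be verified numerically with an FFT (this check is ours and is not part of the thesis):

```python
import numpy as np

q, n = 1.0, 4096
x = np.linspace(0.0, 2 * np.pi / q, n, endpoint=False)
t0, t1, t2 = 0.4, 0.3 - 0.1j, 0.05 + 0.2j
# Real field theta(x) built from harmonics 0, +/-1, +/-2:
theta = t0 + t1 * np.exp(1j * q * x) + np.conj(t1) * np.exp(-1j * q * x) \
           + t2 * np.exp(2j * q * x) + np.conj(t2) * np.exp(-2j * q * x)
gamma = theta + 0.5 * theta**2
coeffs = np.fft.fft(gamma) / n        # coeffs[j] = Fourier coefficient gamma_j
# Scalar version of the macroscopic constitutive law (5.6):
g0 = t0 + 0.5 * t0**2 + abs(t1)**2 + abs(t2)**2
g1 = t1 + t0 * t1 + np.conj(t1) * t2
g2 = t2 + t0 * t2 + 0.5 * t1**2
print(np.allclose([coeffs[0], coeffs[1], coeffs[2]], [g0, g1, g2]))
```

The mean strain g0 picks up the always-positive contributions |θ₁|² and |θ₂|², which is the scalar counterpart of the A(θ₋₁)θ₁ and A(θ₋₂)θ₂ terms.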
Note that the j th component of the displacement gradient is not the gradient of the j th component of the displacement. The rule defining the Fourier components of a gradient vector reads {∇u} j = {∇(u j )} + ji[Q]{u j }, (5.11) where [Q] =    {q} 0 0 0 {q} 0 0 0 {q}    . (5.12) In the same way, we define the principle of virtual work for the extended macroscopic continuum. The previously defined technique of slowly variable Fourier coefficients can be applied to the balance equations, whose weak form is then the extended principle of virtual work. Besides, this weak form can be deduced directly from the principle of virtual work of the initial problem (5.2) by using the Parseval identity (B.3) (see Appendix B). The deduced principle of virtual work involves the Fourier coefficients of stress and strain: ∫ Ω +2 ∑ j=-2 ⟨ δγ -j ⟩ {s j } dΩ = λ ∫ ∂Ω +2 ∑ j=-2 ⟨δu j ⟩ {f j } dΩ, (5.13) where the left-hand side involves the Fourier coefficients of stress and strain, i.e. the macroscopic stress and strain. One can see that the extended principle of virtual work takes the same form as the initial model (5.2), and the potential energy of the generalized model reads P gen (U ) = 1 2 ∫ Ω ⟨Γ(U )⟩ [D gen ] {Γ(U )} dΩ -λ ∫ ∂Ω ⟨U ⟩ {F } dΩ. (5.15) Since the generalized macroscopic model has the same form as the initial microscopic model, its solution can be approximated by the same shape functions. In the microscopic model, the displacement and its gradient are related to the nodal variables via two interpolation matrices [N ] and [G]: {u(x)} e = [N ] {v} e , {∇u(x)} e = [G] {v} e . (5.16) After applying this discretization principle to the generalized displacement and the generalized displacement gradient, one can obtain interpolation formulae similar to (5.16). The generalized displacement is expressed with respect to the generalized nodal displacement via a block diagonal matrix: {U (x)} e = [N gen ] {V gen } e , (5.17) where [N gen ] =    [N ] [0] [0] [0] [N ] [0] [0] [0] [N ]    .
(5.18) The other matrix interpolates the displacement gradient; it is a little more involved due to the derivative rule (5.11). The coupling between the micro and macro scales appears at this level via the wave number matrix [Q]. First, let us separate the complex variables into real and imaginary parts in (5.11), considering that the matrix [Q] is real: {θ} j = {θ R j } + i{θ I j } = {∇(u j )} + ji[Q]{u j } = {∇(u R j )} -j[Q]{u I j } + i ( {∇(u I j )} + j[Q]{u R j } ) . (5.19) The relation between the generalized displacement and the generalized displacement gradient vector {Θ} reads {Θ(x)} e =                {θ 0 } {θ R 1 } {θ I 1 } {θ R 2 } {θ I 2 }                = [G gen ] {V gen } e , (5.20) where [G gen ] =         [G] [0] [0] [0] [0] [0] [G] -[Q][N ] [0] [0] [0] [Q][N ] [G] [0] [0] [0] [0] [0] [G] -2[Q][N ] [0] [0] [0] 2[Q][N ] [G]         . (5.21) Consequently, the discretization of the generalized strain tensor in Eq. (5.7) can be written as {Γ} = ( [H gen ] + 1 2 [A gen (Θ)] ) [G gen ] {V gen } e . (5.22) The above general macroscopic model can be directly applied to the 2D or 3D discretization of film/substrate systems. However, given the intrinsic properties of the thin film discussed in the previous chapters, a more efficient way is to introduce kinematic simplifications for the film, or to incorporate beam/shell/plate elements that are well suited to thin-walled structure modeling. This leads to the separation of the film/substrate energy into two parts, the film part and the substrate part: Π = Π f + Π s . (5.23) Thus, in what follows, we will combine the general macroscopic modeling framework with a nonlinear beam formulation in the 2D case and with the nonlinear Föppl-von Kármán plate theory in the 3D case, while the substrate is considered to be a linear elastic foundation.
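The real/imaginary split of the derivative rule (5.11), which fills the off-diagonal blocks of the gradient interpolation with the wave number matrix, can be checked on a sample envelope. The Python sketch below (our own check, with an arbitrary envelope) verifies that θ₁ = u₁' + i q u₁ indeed splits into θ₁ᴿ = (u₁ᴿ)' − q u₁ᴵ and θ₁ᴵ = (u₁ᴵ)' + q u₁ᴿ:

```python
import numpy as np

q = 2.5
x = np.linspace(0.0, 5.0, 5001)
uR, uI = np.exp(-0.2 * x), np.sin(0.4 * x)        # a slowly varying envelope
duR, duI = np.gradient(uR, x), np.gradient(uI, x)
# Complex derivative rule for the first harmonic: theta_1 = u_1' + i*q*u_1
theta = (duR + 1j * duI) + 1j * q * (uR + 1j * uI)
print(np.allclose(theta.real, duR - q * uI))      # theta_1^R = (u_1^R)' - q*u_1^I
print(np.allclose(theta.imag, duI + q * uR))      # theta_1^I = (u_1^I)' + q*u_1^R
```

These two identities are exactly the sign pattern of the [Q][N] blocks in the gradient interpolation matrix.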
A 2D macroscopic film/substrate model The general macroscopic modeling framework established in the last section can be used directly for 2D film/substrate modeling, with the discretization of domain using 2D finite elements. However, some simplifications can be introduced as conducted in Chapter 2 to reduce the computational cost. One straight way is to develop a specific 2D Fourier-related film/substrate model based on the microscopic film/substrate model (2.8)-(2.14) established in Chapter 2, where the microscopic model has shown its validated effectiveness for post-buckling analyses. Thus, considering the identities presented in the last section, the kinematics in Eqs. (2.8)-(2.10) can be transformed to macroscopic displacement fields as follows: Film          U f j = u f j - ( z - h f 2 -h s ) ( d dx + ijq ) W f j , h s ≤ z ≤ h t W f j = w f j . (5.24) 1 st sublayer          U s1 j = 1 -η 2 (u 0 ) j + 1 + η 2 (u 1 ) j , -1 ≤ η ≤ 1, (h s -h 1 ) ≤ z ≤ h s W s1 j = 1 -η 2 (w 0 ) j + 1 + η 2 (w 1 ) j . (5.25) n th sublayer          U sn j = 1 -η 2 (u n-1 ) j + 1 + η 2 (u n ) j , -1 ≤ η ≤ 1, 0 ≤ z ≤ h n W sn j = 1 -η 2 (w n-1 ) j + 1 + η 2 (w n ) j . 
(5.26) In the same way, the constitutive and geometric equations in (2.11) and (2.12) can also be converted to macroscopic fields:          (σ f xx ) j = E f (ϵ f xx ) j , (σ sn xx ) j = (λ s + 2G s ) (ϵ sn xx ) j + λ s (ϵ sn zz ) j , (σ sn zz ) j = (λ s + 2G s ) (ϵ sn zz ) j + λ s (ϵ sn xx ) j , (σ sn xz ) j = G s (γ sn xz ) j , (5.27)                      (ϵ f xx ) j = ( d dx + ijq ) U f j + 1 2 ∞ ∑ j 1 =-∞ ( d dx + ij 1 q ) ( d dx + i (j -j 1 ) q ) W f j 1 W f j-j 1 , (ϵ sn xx ) j = ( d dx + ijq ) U sn j , (ϵ sn zz ) j = W sn j,z , (γ sn xz ) j = U sn j,z + ( d dx + ijq ) W sn j , ( 5.28) Consequently, the internal virtual work (2.14) can be rewritten in the macroscopic form: P int (δu) = - ∫ Ω f +∞ ∑ j=-∞ (σ f xx ) j δ(ϵ f xx ) j dΩ - ∑ sn ∫ Ω sn +∞ ∑ j=-∞ [(σ sn xx ) j δ(ϵ sn xx ) j + (σ sn zz ) j δ(ϵ sn zz ) j + (σ sn xz ) j δ(γ sn xz ) j ] dΩ. (5.29) Therefore, the microscopic model (2.8)-(2.14) has been transformed into its equivalent macroscopic form (5.24)-(5.29), with the unknowns {u 0 , w 0 , u 1 , w 1 , . . . , u n , w n } of the microscopic level being converted to the Fourier coefficients {(u 0 ) j , (w 0 ) j , (u 1 ) j , (w 1 ) j , . . . , (u n ) j , (w n ) j } of the macroscopic scale. Since wrinkles usually appear on the top surface of the film/substrate system, which means local buckling plays a major role, three envelopes U 0 (x), U -1 (x) and U 1 (x), respectively representing the mean field and amplitudes of fluctuation, are sufficient to describe such surface morphological instability (see Fig. 1.6). Hence, only three terms (j = -1, 0, 1) of Fourier coefficients will be considered in the transverse displacements. Besides, as relatively small oscillations appear in the longitudinal displacement field, only the zero order term (j = 0) is reasonable to be taken into account. 
Similar approximations have been conducted and validated in previous works [START_REF] Liu | A new Fourierrelated double scale analysis for instability phenomena in sandwich structures[END_REF][START_REF] Xu | Bridging techniques in a multiscale modeling of pattern formation[END_REF]. Internal virtual work of the substrate First, let us define the unknown variables in each sublayer ⟨q sn ⟩ = ⟨u n-1 w n-1 u n w n ⟩ , (5.30) ⟨ q sn ,x ⟩ = ⟨u n-1,x w n-1,x u n,x w n,x ⟩ . (5.31) According to the kinematics (2.10), the displacement field reads { U sn W sn } = [N z ] {q sn } , ( 5.32) where [N z ] =    1 -η 2 0 1 + η 2 0 0 1 -η 2 0 1 + η 2    . (5.33) The strain vector {ε sn } and stress vector {S sn } can be respectively expressed as {ε sn } =      ϵ sn xx ϵ sn zz γ sn xz      = [B 1 ] {q sn } + [B 2 ] { q sn ,x } , ( 5.34 ) {S sn } = [C sn ] {ε sn } , ( 5.35) in which [B 1 ] =       1 -η 2 0 1 + η 2 0 0 - 1 h n 0 1 h n - 1 h n 0 1 h n 0       , ( 5.36) [B 2 ] =     0 0 0 0 0 0 0 0 0 1 -η 2 0 1 + η 2     , (5.37) [C sn ] =    λ s + 2G s λ s 0 λ s λ s + 2G s 0 0 0 G s    . ( 5.38) The internal virtual work of the substrate can be represented as the sum of all the sublayers: P s int (δu) = - ∫ L 0 ∫ hs 0 ⟨δε s ⟩ {S s } dzdx = - ∫ L 0     ∑ sn ⟨δq sn ⟩ ∫ hn 0 T [B 1 ] {S sn } dz Φ + ∑ sn ⟨ δq sn ,x ⟩ ∫ hn 0 T [B 2 ] {S sn } dz Ψ     dx. (5.39) Through considering Eqs. (5.34) and (5.35), one can obtain .41) One can also combine the above two equations in the following form: Φ = ∫ hn 0 T [B 1 ] [C sn ] [B 1 ] dz {q sn } + ∫ hn 0 T [B 1 ] [C sn ] [B 2 ] dz { q sn ,x } , ( 5.40) Ψ = ∫ hn 0 T [B 2 ] [C sn ] [B 1 ] dz {q sn } + ∫ hn 0 T [B 2 ] [C sn ] [B 2 ] dz { q sn ,x } . ( 5 { Φ Ψ } = [C s ] { q sn q sn ,x } . 
(5.42) The macroscopic form of internal virtual work of the substrate involving three envelopes can be written as P s int (δu) = - ∑ sn ∫ L 0 ( ⟨ δ (q sn ) 0 , δ ( q sn ,x ) 0 ⟩ [C s ] { (q sn ) 0 ( q sn ,x ) 0 } +2 ⟨ δ (q sn ) 1 , δ ( q sn ,x ) 1 ⟩ [C s ] { (q sn ) 1 ( q sn ,x ) 1 }) dx. (5.43) Now we consider the discretization of substrate along the x direction. The unknown vectors can be given as {q sn } = [N s ] {v s } , (5.44) { q sn ,x } = [ N s ,x ] {v s } , ( 5.45) where {v s } is the elementary unknown vector of the substrate and [N s ] is the shape function. Note that the longitudinal displacement u is discretized by linear Lagrange functions, while the transverse displacement w is discretized by Hermite functions. Consequently, the internal virtual work of the substrate can be written as P s int (δu) = - ∑ e ⟨δv s ⟩ ∫ le 0 ( T [N s ]Φ + T [N s ,x ]Ψ ) dx = - ∑ e ⟨δv s ⟩ ∫ le 0 ( [ T N s , T N s ,x ] [C s ] [ N s N s ,x ]) dx{v s }, (5.46) where l e is the length of 1D element. Internal virtual work of the film As for the thin film, the strain energy is mainly generated by normal strain ϵ f xx , the other two terms ϵ f zz and γ f xz being neglected. By considering three envelopes, the macroscopic form of the internal virtual work for the film is expressed as P f int (δu) = - ∫ Ω f σ f xx δϵ f xx dΩ = -δ ( 1 2 ∫ Ω f E f [ (ϵ f xx ) 2 0 + 2 (ϵ f xx ) 1 2 ] dΩ ) , ( 5.47) in which ( ϵ f xx ) 0 = u f 0,x - ( z - h f 2 -h s ) w f 0,xx + 1 2 (w f 0,x ) 2 + (w f 1,x ) 2 + q 2 (w f 1 ) 2 , (5.48) ( ϵ f xx ) 1 = [ - ( z - h f 2 -h s ) ( w f 1,xx -q 2 w f 1 ) + w f 0,x w f 1,x ] + i [ -2 ( z - h f 2 -h s ) qw f 1,x + qw f 0,x w f 1 ] , (5.49) where ( ϵ f xx ) 0 and ( ϵ f xx ) 1 are respectively the zero order and the first order of the strain ϵ f xx . Consequently, Eq. 
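The through-thickness integration leading to the sublayer matrix [C s] of (5.42) can be sketched as follows, in Python with a Gauss rule in the thickness coordinate η ∈ [−1, 1] (function name and numerical values are ours). The assembled matrix must be symmetric and positive semi-definite, which is checked at the end:

```python
import numpy as np

def sublayer_matrix(h_n, lam, G, ngauss=3):
    # Elasticity matrix (5.38) of the substrate sublayer:
    C = np.array([[lam + 2 * G, lam, 0.0],
                  [lam, lam + 2 * G, 0.0],
                  [0.0, 0.0, G]])
    zg, wg = np.polynomial.legendre.leggauss(ngauss)
    Cs = np.zeros((8, 8))
    for eta, w in zip(zg, wg):
        # [B1], [B2] of (5.36)-(5.37), acting on q_sn and q_sn,x respectively:
        B1 = np.array([[(1 - eta) / 2, 0, (1 + eta) / 2, 0],
                       [0, -1 / h_n, 0, 1 / h_n],
                       [-1 / h_n, 0, 1 / h_n, 0]])
        B2 = np.array([[0, 0, 0, 0],
                       [0, 0, 0, 0],
                       [0, (1 - eta) / 2, 0, (1 + eta) / 2]])
        B = np.hstack([B1, B2])       # acts on the stacked vector [q_sn ; q_sn,x]
        Cs += w * (h_n / 2) * B.T @ C @ B
    return Cs

Cs = sublayer_matrix(h_n=0.5, lam=1.2, G=0.8)
print(np.allclose(Cs, Cs.T))                      # symmetric
print(np.all(np.linalg.eigvalsh(Cs) >= -1e-9))    # positive semi-definite
```

The semi-definiteness (rather than definiteness) reflects the rigid-body and constant-stretch modes of a single sublayer.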
(5.47) can be rewritten in the following expanded form: P f int (δu) = - ∫ L 0 [ S f a ( δu f 0,x + w f 0,x δw f 0,x + 2w f 1,x δw f 1,x + 2q 2 w f 1 δw f 1 ) + S f b δ ( w f 0,x w f 1,x ) + S f c δw f 0,xx + S f d δ ( w f 0,x w f 1 ) + S f e δw f 1,x + S f f δ ( w f 1,xx -q 2 w f 1 )] dx = - ∫ L 0 ⟨ δε f ⟩ { S f } dx, (5.50) where                              S f a = E f h f ( u f 0,x + 1 2 (w f 0,x ) 2 + (w f 1,x ) 2 + q 2 (w f 1 ) 2 ) , S f b = 2Eh f w f 0,x w f 1,x , S f c = 1 12 E f h 3 f w f 0,xx , S f d = 2E f h f q 2 w f 0,x w f 1 , S f e = 2 3 E f h 3 f q 2 w f 1,x , S f f = 1 6 E f h 3 f ( w f 1,xx -q 2 w f 1 ) . (5.51) The generalized strain vector {ε f } of the film is defined as { ε f } = ( [H] + 1 2 [A(q f )] ) { q f } , ( 5.52) in which [H] =           1 0 0 1 0 1 0 1           , (5.53) [A(q f )] =           0 w f 0,x 0 2q 2 w f 1 2w f 1,x 0 0 w f 1,x 0 0 w f 0,x 0 0 0 0 0 0 0 0 w f 1 0 w f 0,x 0 0 0 0 0 0 0 0 0 0 0 0 0 0           , ( 5.54) ⟨ q f ⟩ = ⟨ u f 0,x w f 0,x w f 0,xx w f 1 w f 1,x ( w f 1,xx -q 2 w f 1 )⟩ . ( 5 .55) Since [A(q f )] and { q f } are linear functions of u f and w f , the internal virtual work of the film (5.50) is in a quadratic form with respect to the displacement and the generalized stress: P f int (δu) = - ∫ L 0 ⟨ δq f ⟩ ( T [H] + T [A(q f )] ) { S f } dx, (5.56) where the generalized stress of the film reads { S f } = [D] ( [H] + 1 2 [A(q f )] ) { q f } , ( 5.57) in which [D] = diag ( E f h f 2E f h f 1 12 E f h 3 f 2q 2 E f h f 2 3 q 2 E f h 3 f 1 6 E f h 3 f ) . (5.58) Note that the discretization of unknown variables { q f } takes the same shape function as for the substrate, i.e. Lagrange functions for the longitudinal displacement and Hermite functions for the transverse displacement. 
Nonlinear macroscopic membrane-wrinkling model for the film The technique of slowly variable Fourier coefficients will be applied to deduce the macroscopic membrane-wrinkling model based on the well-known Föppl-von Kármán equations for elastic isotropic plates that is considered as the reference model in this section:            D∆ 2 w -div (N • ∇w) = 0, N = L m • γ, γ = 1 2 ( ∇u + t ∇u ) + 1 2 (∇w ⊗ ∇w) , divN = 0, (5.62) where u = (u, v) ∈ R 2 represents the in-plane displacement, while w is the deflection. The membrane stress and strain are denoted by N and γ, respectively. With the vectorial notations N = t {N X , N Y , N XY } and γ = t {γ X , γ Y , γ XY }, the membrane elastic matrix can be written as L m = Eh 1 -ν 2     1 ν 0 ν 1 0 0 0 1 -ν 2     . ( 5.63) The potential energy of the film, Π f , can be divided into a membrane part Π mem and a bending part Π ben :              Π f (u, w) = Π mem (u, w) + Π ben (w), Π mem (u, w) = 1 2 ∫ ∫ Ω t γ • L m • γdΩ = Eh 2(1 -ν 2 ) ∫ ∫ Ω ( γ 2 X + γ 2 Y + 2(1 -ν)γ 2 XY + 2νγ X γ Y ) dΩ, Π ben (w) = D 2 ∫ ∫ Ω ( (△w) 2 -2(1 -ν) ( ∂ 2 w ∂X 2 ∂ 2 w ∂Y 2 - ( ∂ 2 w ∂X∂Y ) 2 )) dΩ. (5.64) Macroscopic modeling of membrane energy We adapt the technique of Fourier series with slowly variable coefficients in 2D framework [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. For simplicity, we suppose that the instability wave number Q is known and the considered wrinkles spread only in the O x direction. The unknown field U(X, Y ) = {u(X, Y ), w(X, Y ), N(X, Y ), γ(X, Y )}, whose components are in-plane displacement, transverse displacement, membrane stress and strain, is written in the following form: U(X, Y ) = +∞ ∑ j=-∞ U j (X, Y )e jiQX , ( 5.65) where the macroscopic unknown fields U j (X, Y ) vary slowly on the period [ X, X + 2π Q ] of oscillations. 
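The membrane elastic matrix L m of (5.63) can be checked against the expanded energy density of (5.64). In the Python sketch below (numerical values are arbitrary), we assume the engineering-shear convention, i.e. the third component of the strain vector is 2γ_XY, so that the (1−ν)/2 entry of L m reproduces the factor 2(1−ν) of (5.64):

```python
import numpy as np

E, h, nu = 2.0e3, 0.01, 0.3
# Membrane elastic matrix (5.63):
Lm = E * h / (1 - nu**2) * np.array([[1.0, nu, 0.0],
                                     [nu, 1.0, 0.0],
                                     [0.0, 0.0, (1 - nu) / 2]])
gX, gY, gXY = 0.002, -0.001, 0.0015
gamma = np.array([gX, gY, 2 * gXY])   # engineering shear: third component 2*gXY
q1 = 0.5 * gamma @ Lm @ gamma         # quadratic form (1/2) gamma^T Lm gamma
# Expanded membrane energy density of (5.64):
q2 = E * h / (2 * (1 - nu**2)) * (gX**2 + gY**2
                                  + 2 * (1 - nu) * gXY**2 + 2 * nu * gX * gY)
print(np.isclose(q1, q2))
```

With the tensorial component γ_XY in the third slot instead, the two expressions would differ by a factor 4 on the shear term, so the convention matters when implementing (5.63).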
It is not necessary to choose an infinite number of Fourier coefficients, so the unknown fields U(X, Y ) are expressed in terms of two harmonics: the mean field U 0 (X, Y ) and the first order harmonics U 1 (X, Y )e iQX and U 1 (X, Y )e -iQX . The second harmonic should be taken into account to recover the results of the asymptotic Ginzburg-Landau double-scale approach [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. Nevertheless, the second harmonic does not contribute to the membrane energy in the present case, since the rapid one-dimensional oscillations e iQX are not extensional so that N 2 = 0, w 2 = 0. Hence the second harmonic does not influence the simplest macroscopic models. A unique direction O x for wave propagation is chosen in the whole domain. This assumption is a bit restrictive so that the current model can only describe sinusoidal pattern, which should be improved in the future. In principle, the mean field U 0 (X, Y ) is real and the envelope U 1 (X, Y ) is complex-valued. However, spatial evolutions of patterns can be reasonably accounted for with only two real coefficients in practice, even if a complex envelope can improve the treatment of boundary conditions [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF]. After extending derivation rules in two-dimensional framework, the first Fourier coefficient of the gradient and the zero order coefficient of strain (i.e. 
mean value on a period) can be respectively expressed as

{(∇w)₁} = { ∂w₁/∂X + iQw₁ ; ∂w₁/∂Y },   (5.66)

{γ₀} = { γ_X0 ; γ_Y0 ; 2γ_XY0 } = {γ^FK} + {γ^wr},   (5.67)

in which

{γ^FK} = { ∂u₀/∂X + ½(∂w₀/∂X)² ; ∂v₀/∂Y + ½(∂w₀/∂Y)² ; ∂u₀/∂Y + ∂v₀/∂X + (∂w₀/∂X)(∂w₀/∂Y) },   (5.68)

{γ^wr} = { |∂w₁/∂X + iQw₁|² ; |∂w₁/∂Y|² ; (∂w₁/∂X + iQw₁)(∂w̄₁/∂Y) + (∂w̄₁/∂X − iQw̄₁)(∂w₁/∂Y) },   (5.69)

where the strain is divided into a classical part γ^FK that takes the same form as the initial Föppl-von Kármán model (5.62), and a wrinkling part γ^wr that depends only on the envelope of deflection w₁. The strain-displacement law (5.67) can be simplified. First, the displacement field is reduced to a membrane mean displacement and to a bending wrinkling, i.e. u₁ = 0, w₀ = 0, which only considers the influence of wrinkling on a flat membrane state. Second, the deflection envelope w₁(X, Y) is assumed to be real, which disregards the phase modulation of the wrinkling pattern. Therefore, the envelope of the displacement has only three components u₀ = (u₀, v₀) and w₁ that will be rewritten for simplicity as (u, v, w) = (u₀, v₀, w₁). Consequently, the simplified version of the strain field becomes

{γ} = {ε(u)} + {γ^wr(w)},  {ε(u)} = { ∂u/∂X ; ∂v/∂Y ; ∂u/∂Y + ∂v/∂X },  {γ^wr(w)} = { (∂w/∂X)² + Q²w² ; (∂w/∂Y)² ; 2(∂w/∂X)(∂w/∂Y) }.   (5.70)

The simplified membrane strain (5.70) is quite similar to that of the initial Föppl-von Kármán model. It is split, first in a linear part ε(u) that is the symmetric part of the mean displacement gradient corresponding to the pure membrane linear strain, second in a nonlinear part γ^wr more or less equivalent to wrinkling strain. The main difference with the initial Föppl-von Kármán strain (5.62) is the extension Q²w² in the wave direction of wrinkles. This wrinkling strain is a stretching and is always positive.
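The wrinkling strain can be checked against a direct period average of the Föppl-von Kármán strain: for w(X) = 2w₁cos(QX) with a constant real envelope w₁, the mean of ½(∂w/∂X)² over one wavelength is Q²w₁², the always-positive extension mentioned above. A quick numerical sketch (the values of Q and w₁ are arbitrary):

```python
import numpy as np

Q, w1 = 25.0, 0.3
N = 4096
X = np.arange(N)*(2*np.pi/Q)/N        # one period, endpoint excluded
w = 2*w1*np.cos(Q*X)                  # fast wrinkle with constant envelope
wX = -2*w1*Q*np.sin(Q*X)              # exact X-derivative of w

mean_strain = np.mean(0.5*wX**2)      # period average of the FvK term 1/2 w,X^2
envelope_formula = Q**2*w1**2         # wrinkling strain (here w1,X = 0)
print(np.isclose(mean_strain, envelope_formula))  # True
```

The agreement shows how the macroscopic strain stores, through Q²w², the membrane stretching hidden in the fast oscillation.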
In the case of a compressive membrane strain, this wrinkling term leads to a decrease of the true strain. By only considering the zero order harmonic, the reduced membrane energy becomes

Π_mem(u, w) = Eh/(2(1 − ν²)) ∫∫_Ω ( γ²_X0 + γ²_Y0 + 2(1 − ν)γ²_XY0 + 2ν γ_X0 γ_Y0 ) dΩ,   (5.73)

where the strain components are those of (5.70).

Macroscopic modeling of bending energy

Through simplifying the energy by keeping only the zero order term, it provides a formulation easier to be managed for the numerical discretization. The computation of the energy is based on the fact that only the zero order harmonic φ₀ of a function φ has a non-zero mean value:

∫∫_Ω φ dΩ = ∫∫_Ω φ₀ dΩ.   (5.74)

This identity is applied to the bending energy in the framework u₁ = (u₁, v₁) = (0, 0), w₀ = 0, w₁ ∈ R:

φ = (∆w)² − 2(1 − ν) ( (∂²w/∂X²)(∂²w/∂Y²) − (∂²w/∂X∂Y)² ) = φ_A − 2(1 − ν)φ_B.   (5.75)

The first term of the bending energy reads

φ_A0 = [(∆w)²]₀ = Σ_{j=−∞}^{+∞} (∆w)_j (∆w)_{−j} = 2(∆w)₁(∆w)_{−1} = 2|(∆w)₁|² = 2| ∆w₁ − Q²w₁ + 2iQ ∂w₁/∂X |².   (5.76)

Due to the assumption of a real envelope w = w₁, the first term of the bending energy can be expressed as

φ_A0 = 2( ∆w − Q²w )² + 8Q²( ∂w/∂X )².   (5.77)

In the same way, the second term of the bending energy φ_B0 reads

φ_B0 = 2( ∂²w/∂X² − Q²w ) ∂²w/∂Y² − 2( ∂²w/∂X∂Y )² − 2Q²( ∂w/∂Y )².   (5.78)

As in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], the derivatives of order three and order four in the above differential equations can be neglected, since these high order derivatives may lead to spurious oscillations, and the derivatives of order two are sufficient to recover the Ginzburg-Landau asymptotic approach. Consequently, this leads to the simplified macroscopic bending energy:

Π_ben(w) = D/2 ∫∫_Ω { Q⁴w² − 2Q²w∆w + 4Q²( ∂w/∂X )² + 2(1 − ν²)Q²[ w ∂²w/∂Y² + ( ∂w/∂Y )² ] } dΩ.
(5.79)

Full membrane-wrinkling model

The macroscopic model is deduced from the minimum of total energy that is the sum of membrane energy (5.73) and bending energy (5.79), which associates the zero order harmonic for the membrane quantities and a real-valued first order harmonic for the deflection. In this way, the total energy is stationary at equilibrium:

δΠ_mem + δΠ_ben = 0.   (5.80)

The boundary terms in the variation are eliminated by the condition that any virtual displacement is zero at the boundary. After straightforward calculations, one obtains the partial differential equations of the macroscopic problem:

−6DQ² ∂²w/∂X² − 2DQ² ∂²w/∂Y² + ( DQ⁴ + N_X Q² ) w − div( N • ∇w ) = 0,
N = L_m : [ ε(u) + γ^wr(w) ],
div N = 0.   (5.83)

The above model (5.83) couples nonlinear membrane equations with a bifurcation equation satisfied by the envelope of the wrinkling pattern. More discussion on the model can be found in [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF]. Since this model is a second order partial differential system, any classical C⁰ finite element is acceptable for its discretization. In the application, 8-node quadratic quadrilateral elements (2D-Q8) with three degrees of freedom (u, v, w) per node will be used. Details on the discretization will be omitted here, since it is quite straightforward.

For the substrate, the zero and first order harmonics of strain and stress follow the same elastic law, all the equations being coupled:

{ε₀} = [L_s]⁻¹{σ₀} = [H]{θ₀},
{ε₁} = [L_s]⁻¹{σ₁} = [H]{θ₁},   (5.87)

where the matrix [H] is detailed in [START_REF] Cochelin | Méthode asymptotique numérique[END_REF]. Finally, the macroscopic behavior can be formulated in a general form: (5.90) Discretization of the 3D macroscopic model (5.85) takes the same shape functions as the 3D classical model (3.9), since they are quite similar, where the details are omitted.
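The structure of the envelope equation in (5.83) can be probed in one dimension: with a uniform compressive stress N_X = −λ, no Y-dependence and the substrate coupling omitted, the linearized equation reads −(6DQ² − λ)w″ + (DQ⁴ − λQ²)w = 0 on [0, L] with w = 0 at both ends, and the fundamental mode w = sin(πX/L) bifurcates at λ_c = (6DQ²k² + DQ⁴)/(k² + Q²), k = π/L. A finite-difference generalized eigenvalue computation reproduces this value (all parameter values below are arbitrary; this is only an illustrative sketch of the linearized envelope operator, not the full coupled model):

```python
import numpy as np

D, Q, L, n = 1.0, 6.0, 1.0, 200        # bending stiffness, wavenumber, length, interior nodes
h = L/(n + 1)
T = (np.diag(2*np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))/h**2  # -d^2/dX^2 with w = 0 at both ends

# -(6DQ^2 - lam) w'' + (DQ^4 - lam Q^2) w = 0  ->  A w = lam B w
A = 6*D*Q**2*T + D*Q**4*np.eye(n)
B = T + Q**2*np.eye(n)

lam_fd = np.linalg.eigvals(np.linalg.solve(B, A)).real.min()
k = np.pi/L
lam_exact = (6*D*Q**2*k**2 + D*Q**4)/(k**2 + Q**2)
print(np.isclose(lam_fd, lam_exact, rtol=1e-3))  # True
```

This confirms that (5.83) behaves as a bifurcation equation for the envelope w, with boundary conditions now acting directly on the wrinkle amplitude.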
Thus, 8-node solid elements with reduced integration can be applied as well, but with totally 72 (3 × 3 × 8) degrees of freedom on each element. Connection between the film and the substrate As in Section 3.2.3, displacement continuity is satisfied at the interface. However, the macroscopic membrane-wrinkling elements for the film and 3D macroscopic block elements for the substrate cannot be simply joined directly since they belong to dissimilar elements. Therefore, additional incorporating constraint equations have to be employed. Hereby, Lagrange multipliers are applied to couple the corresponding node displacements of the same order harmonics in compatible meshes between the film and the substrate (see Fig. 3.4). Note that using 8-node linear block element here is only for coupling convenience, 20-node quadratic block element would be another good candidate, while both of them follow the same coupling strategy. Consequently, the stationary function of film/substrate system is given in a Lagrangian form: L (u f , u s , ℓ) = Π f + Π s + ∑ node i ℓ i [u f (i) -u s (i)] , (5.91) where the displacements of the film and the substrate are respectively denoted as u f and u s , while the Lagrange multipliers are represented by ℓ. At the interface, the displacement continuity is satisfied at the same nodes and connects the middle surface of the film and the top surface of the substrate. From Eq. (5.91), three equations are obtained according to δu f , δu s and δℓ:              δΠ f + ∑ node i ℓ i δu f (i) = 0, δΠ s - ∑ node i ℓ i δu s (i) = 0, ∑ node i δℓ i u f (i) - ∑ node i δℓ i u s (i) = 0. 
(5.92)

Resolution technique and bifurcation analysis

The same generic resolution techniques and bifurcation schemes established in Sections 2.4 and 3.3 will be adapted to both the 2D and 3D macroscopic film/substrate models, which includes using the ANM as a continuation technique to solve nonlinear differential equations and calculating bifurcation indicators to predict secondary bifurcations as well as the associated wrinkling modes on the post-buckling path. Since this general framework is also suitable for the current multi-scale problem and the procedure remains unchanged, the details will be omitted here.

Chapter conclusion

In this chapter, we revisit the film/substrate system from a multi-scale standpoint with respect to Fourier series. Firstly, a generic macroscopic modeling scheme is established, which is suitable for both 2D and 3D problems but not so efficient for very thin films in terms of computational cost. Thus, secondly, a simplified 2D macroscopic film/substrate model with three envelopes is derived based on the established microscopic model presented in Chapter 2. Development of the computer codes has been completed. It is expected to predict the primary sinusoidal wrinkles and the ensuing aperiodic modes observed in Chapter 2 with far fewer elements, which can save an enormous amount of computation time. Validation of the macroscopic model by comparison with the 2D classical film/substrate model established in Chapter 2 should be included as part of the short-term perspective. Lastly, following the same modeling scheme proposed in Chapter 3, a 3D nonlinear macroscopic film/substrate model has been developed. Precisely, this 3D model couples a macroscopic membrane-wrinkling model based on the well-known Föppl-von Kármán nonlinear plate theory and a linear macroscopic elasticity. Through introducing Lagrange multipliers at the interface, displacement continuity is satisfied in a weak form.
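The role of the multipliers in (5.91)-(5.92) can be sketched on the smallest possible example: two springs whose end displacements are tied by the constraint u_f = u_s through a multiplier ℓ, leading to the usual saddle-point system [[K, Cᵀ],[C, 0]]. (The stiffness and load values below are arbitrary test data.)

```python
import numpy as np

k_f, k_s, f = 3.0, 1.0, 2.0          # "film" stiffness, "substrate" stiffness, load
K = np.diag([k_f, k_s])              # uncoupled stiffness of the two dofs
C = np.array([[1.0, -1.0]])          # constraint u_f - u_s = 0
F = np.array([f, 0.0])               # load applied on the film dof

# saddle-point system from the stationarity of L(u, l) = 1/2 u^T K u - F^T u + l C u
S = np.block([[K, C.T], [C, np.zeros((1, 1))]])
sol = np.linalg.solve(S, np.append(F, 0.0))
u_f, u_s, ell = sol
print(np.isclose(u_f, u_s), round(u_f, 3))  # True 0.5
```

The third row enforces the continuity exactly, while the multiplier ℓ is the interface force exchanged between the two sub-models, which is how the film/substrate interface works node by node in (5.92).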
Self-developed computer codes for both macroscopic models and coupling procedures have been accomplished. It is expected to predict at least sinusoidal patterns observed in Chapter 3 with much fewer elements, which is competitive from computational standpoint. In the following, validation of the macroscopic model by comparison with 3D classical film/substrate model established in Chapter 3, will be included as part of short-term perspective. Conclusion and perspectives In this thesis, we proposed a whole framework to study surface wrinkling of thin films bound to soft substrates in a numerical way: from 2D to 3D modeling, from classical to multi-scale perspective. Both 2D and 3D models incorporated Asymptotic Numerical Method (ANM) as a robust path-following technique and bifurcation indicators well adapted to the ANM, so as to predict a sequence of multiple bifurcations and the associated instability modes on their post-buckling evolution path as the load is increased. The tracing of post-bifurcation evolution is an important numerical problem and it is definitely non-trivial. The ANM gives interactive access to semi-analytical equilibrium branches, which offers considerable advantage of reliability compared with classical iterative algorithms. Probably, it would be rather difficult to detect all the bifurcations found in this thesis by using conventional numerical methods. To our best knowledge, it appears to be the first work that addresses the post-bifurcation instability problems of film/substrate from the quantitative standpoint, through applying these advanced numerical approaches (path-following techniques, bifurcation indicators, bridging techniques, multi-scale approaches, Lagrange multipliers, etc.), which is viewed as a valuable contribution to the field of film/substrate post-buckling analyses. 
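The series machinery behind the ANM path-following praised above can be illustrated on a scalar problem with quadratic nonlinearity, R(u, λ) = u + u² − λ = 0: displacement and load are expanded in powers of a path parameter a, each order reuses the same tangent operator, and the step length follows from the requested accuracy δ. This toy sketch mirrors the series logic only (it is not the thesis code; the truncation order n and tolerance δ are chosen arbitrarily):

```python
import numpy as np

def anm_step(u0, lam0, n=15, delta=1e-8):
    """One ANM step for R(u, lam) = u + u**2 - lam = 0, starting from a point
    (u0, lam0) on the branch. Series: u = u0 + sum a**p U[p], lam likewise."""
    Kt = 1.0 + 2.0*u0                      # tangent operator dR/du at (u0, lam0)
    s = np.sqrt(1.0 + Kt**2)
    U = np.zeros(n + 1); Lam = np.zeros(n + 1)
    U[1], Lam[1] = 1.0/s, Kt/s             # order 1, normalized: U1^2 + Lam1^2 = 1
    for p in range(2, n + 1):
        Sp = sum(U[r]*U[p - r] for r in range(1, p))   # quadratic cross terms
        U[p] = -Sp/(Kt + U[1]/Lam[1])      # order p: Kt U[p] - Lam[p] + Sp = 0
        Lam[p] = -U[p]*U[1]/Lam[1]         # orthogonality: U[p] U1 + Lam[p] Lam1 = 0
    a_max = (delta*abs(U[1])/abs(U[n]))**(1.0/(n - 1))  # step length from accuracy
    a = np.power(a_max, np.arange(n + 1))
    return u0 + np.dot(a[1:], U[1:]), lam0 + np.dot(a[1:], Lam[1:])

u, lam = 0.0, 0.0
for _ in range(5):                         # follow the branch over five ANM steps
    u, lam = anm_step(u, lam)
print(abs(u + u**2 - lam) < 1e-6)          # residual stays small along the branch: True
```

Each step delivers a whole semi-analytical piece of branch at the cost of one tangent factorization, which is the reliability advantage over step-by-step iterative correctors mentioned above.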
In 2D cases of film/substrate system, a classical finite element model was used associating geometrically nonlinear beam for the film and linear elasticity for the substrate. The effect of boundary conditions and material properties of the substrate (homogeneous, graded material and orthotropic) on the bifurcation portrait is carefully studied, which was rarely explored in the literature. The evolution of wrinkling patterns and postbifurcation modes including period-doubling has been detected beyond the onset of the primary sinusoidal wrinkling mode on the post-buckling evolution path (see Fig. 1). In 3D cases, spatial pattern formation of film/substrate was investigated based on a nonlinear 3D finite element model associating geometrically nonlinear shell formulation for the film and linear elasticity for the substrate. Typical post-bifurcation patterns include sinusoidal, checkerboard and herringbone shapes, with possible spatial modulations, boundary effects and localizations. The post-buckling behavior often leads to intricate response curves with several secondary bifurcations that were rarely studied and only in the case of periodic cells [START_REF] Cai | Periodic patterns and energy states of buckled films on compliant substrates[END_REF]. In conventional finite element analysis, the post-buckling simulation may suffer from the convergence issue if the film is much stiffer than the substrate. Nevertheless, the proposed finite element procedure in this thesis allows accurately describing these bifurcation portraits by taking into account the effect of boundary conditions, without any convergent problem. The occurrence and evolution of sinusoidal, checkerboard and herringbone patterns were highlighted in Chapter 3. 
This work has been viewed as a valuable contribution to film/substrate buckling problems, where the model incorporating the ANM as a path-following technique to detect multiple bifurcations in the post-buckling analysis is interesting and meaningful. A very original nonlocal coupling strategy is developed in this thesis, which is able to bridge classical models and multi-scale models concurrently, where the strengths of each model are fully exploited while their shortcomings are accordingly overcome. More precisely, we considered a partitioned-domain multi-scale model for a 1D nonlinear elastic wrinkling model. The coarse model, obtained from a suitable Fourier-reduction technique [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], is inaccurate near the boundary of the domain. Therefore, near the boundary, the full model is employed, which is then coupled to the coarse model in the remainder of the domain using the Arlequin method [START_REF] Ben Dhia | Multiscale mechanical problems: the Arlequin method[END_REF][START_REF] Ben Dhia | The Arlequin method as a flexible engineering design tool[END_REF] as a bridging-domain technique. Numerical results explicitly illustrate the effectiveness of this methodology. Besides, discussion on the transition between a fine and a coarse model was provided in a general way. The present method can also be seen as a guide for coupling techniques involving other reduced-order models. This work has been considered as an original, logical and relevant application of multi-scale modeling with good motivations, explanations and interesting numerical results. The proposed bridging techniques are not merely limited to 1D case, but can also be flexibly extended to 2D or 3D cases. 
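The energy partition at the heart of the Arlequin coupling can be sketched directly: over the gluing zone the two models' weight functions must form a partition of unity, α_f + α_c = 1, with a constant, linear or cubic transition profile (cf. Fig. 1.8 in the list of figures). A minimal sketch with a linear ramp (the zone geometry and the residual weight are arbitrary choices):

```python
import numpy as np

def weights(x, x0, x1, alpha_min=0.01):
    """Arlequin weight of the fine model: 1 before the gluing zone [x0, x1],
    a linear ramp down to alpha_min inside it, alpha_min after."""
    a = np.clip((x1 - x)/(x1 - x0), 0.0, 1.0)        # 1 -> 0 linear ramp
    alpha_f = alpha_min + (1.0 - alpha_min)*a
    return alpha_f, 1.0 - alpha_f                    # partition of unity

x = np.linspace(0.0, 10.0, 101)
af, ac = weights(x, 4.0, 6.0)
print(np.allclose(af + ac, 1.0), np.isclose(af[0], 1.0), round(ac[-1], 2))  # True True 0.99
```

Keeping a small residual weight for each model everywhere (here 0.01 instead of 0) avoids a singular stiffness contribution at the edges of the gluing zone, one of the practical choices discussed in the Arlequin literature.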
Thus, one direct potential perspective of application is to use the proposed bridging techniques to study 2D or 3D film/substrate buckling problems, which can be viewed as one of our future works. A macroscopic modeling framework was provided based on the technique of slowly variable Fourier coefficients [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], which is in 3D situation and can be extended to any film/substrate modeling application. In particular, a 2D and a 3D macroscopic film/substrate model were derived from the classical models established in Chapter 2 and Chapter 3, respectively. Development of the computer codes for both 2D and 3D envelope models has been completed, while validation of the macroscopic models by comparison with classical film/substrate models should be included in the short-term perspective. Some parts of the works presented in this thesis have been published in International Journal of Solids and Structures [START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF][START_REF] Xu | Bridging techniques in a multiscale modeling of pattern formation[END_REF] or in International Journal of Non-Linear Mechanics [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF], while the ideas and schemes generated from the present thesis are expected to lead to more works in the future. As for the short term perspective, validation of both 2D and 3D multi-scale models should be done by comparing with the results obtained in the established classical models. 
It can predict the patterns found in classical models with much fewer elements so as to significantly reduce the computational cost. As shown in Fig. 2, preliminary results demonstrate the evolution of sinusoidal wrinkles by using only 2 elements along the wave spreading direction, while 50 elements are required in this direction for the 3D classical model established in Chapter 3. However, due to the intrinsic limitation of any reduced-order model that clear and real boundary conditions are difficult to be taken into account, bridging techniques are really needed, which provides a flexible and efficient way to overcome this drawback. The considerable advantage of the proposed nonlocal reduction-based coupling strategy has been explicitly demonstrated in Chapter 4. Application range of the above mentioned models is constrained by a large stiffness ratio E f /E s in the range O(10 4 ), which means the substrate is much softer than the film. In this range, critical strains are relatively small and thus the linear elastic framework is sufficient [START_REF] Chen | Herringbone buckling patterns of compressed thin films on compliant substrates[END_REF]. Some recent studies consider much softer films made of polymeric materials [START_REF] Brau | Multiple-length-scale elastic instability mimics parametric resonance of nonlinear oscillators[END_REF][START_REF] Cao | Wrinkling phenomena in neo-Hookean film/substrate bilayers[END_REF][START_REF] Yin | Deterministic order in surface micro-topologies through sequential wrinkling[END_REF], typically with a stiffness ratio E f /E s in the range O [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF], where the critical strain is relatively large and the small strain framework is no more appropriate. 
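In that large-strain regime a hyperelastic description becomes necessary. As a pointer to what such a law involves, the sketch below evaluates a common compressible neo-Hookean energy Ψ = μ/2(I₁ − 3) − μ ln J + Λ/2 (ln J)² and its second Piola-Kirchhoff stress S = μ(I − C⁻¹) + Λ ln J C⁻¹; this particular form and the parameter values are assumptions for illustration, not necessarily the form implemented in the thesis appendixes:

```python
import numpy as np

mu, Lam = 1.0, 2.0                        # Lame-type parameters (arbitrary values)

def neo_hookean(F):
    """Energy and 2nd Piola-Kirchhoff stress of a compressible neo-Hookean solid."""
    C = F.T @ F                           # right Cauchy-Green tensor
    J = np.linalg.det(F)                  # volume change
    I1 = np.trace(C)                      # first invariant of C
    psi = 0.5*mu*(I1 - 3.0) - mu*np.log(J) + 0.5*Lam*np.log(J)**2
    Cinv = np.linalg.inv(C)
    S = mu*(np.eye(3) - Cinv) + Lam*np.log(J)*Cinv
    return psi, S

psi0, S0 = neo_hookean(np.eye(3))         # undeformed state: zero energy, zero stress
print(np.isclose(psi0, 0.0), np.allclose(S0, 0.0))  # True True
```

Stress-free energy-free behavior at F = I is the basic sanity check any admissible strain-energy function must pass before being inserted in the substrate model.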
Therefore, finite strain models such as neo-Hookean hyperelasticity should be considered at least for the substrate [START_REF] Hutchinson | The role of nonlinear substrate elasticity in the wrinkling of thin films[END_REF]. By considering the large deformation of the substrate, some extremely localized instability modes like folding [START_REF] Sun | Folding wrinkles of a thin stiff layer on a soft substrate[END_REF], creasing [START_REF] Cao | From wrinkles to creases in elastomers: the instability and imperfection-sensitivity of wrinkling[END_REF] or ridging [START_REF] Zang | Localized ridge wrinkling of stiff films on compliant substrates[END_REF] that require the nonlinearity of the substrate can be explored. It is not complicated to introduce nonlinear material laws in our framework (see Appendixes E and F) and development of computer codes has been accomplished. Preliminary results illustrate a localized surface valley in the film center (see Fig. 3), namely folding, during the post-buckling evolution of the film/substrate with a stiffness ratio E f /E s in the range O [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part II: A global scenario for the formation of herringbone pattern[END_REF], where neo-Hookean law is employed in the substrate within the ANM framework. Further investigations and discussions in this direction should be included in the perspectives. Appendix D The residuals of the microscopic and macroscopic model Through variation of potential energy in Eq. (4.17), one obtains P f (δu f ) = ∫ Ω f ⟨δu δu ′ δv δv ′ δv ′′ ⟩         α f                0 ES(u ′ + v ′2 /2) cv + c 3 v 3 ESv ′ (u ′ + v ′2 /2) EIv ′′                -β f                f 0 0 0 0                        dΩ. (D.1) With Eq. 
(4.31), we define

{ δu ; δu′ ; δv ; δv′ ; δv″ }ᵉ = [G^f] {δQ^f},  [G^f] = [ N^f_u ; N^f_{u′} ; N^f_v ; N^f_{v′} ; N^f_{v″} ].   (D.2)

Therefore, the elementary residual [R^f(Q^f)]ᵉ can be written as

[R^f(Q^f)]ᵉ = ∫_Ωe ᵀ[G^f] ( α_f { 0 ; ES(u′ + v′²/2) ; cv + c₃v³ ; ESv′(u′ + v′²/2) ; EIv″ } − β_f { f ; 0 ; 0 ; 0 ; 0 } ) dΩ.   (D.3)

Appendix E
Finite strain hyperelasticity

Generally, the mechanical response of hyperelastic materials is described by a strain energy potential function, Ψ, fitted to the particular material. This function is a continuous scalar-valued function and is given in terms of the deformation gradient tensor, F = ∇u + I (u being the displacement field and I being the second-order identity tensor), or some strain tensors, Ψ = Ψ(F). In this paper, we limit the strain energy function to isotropic behavior throughout the deformation history. Isotropic hyperelastic materials can be expressed as a function of the strain invariants of the right symmetric Cauchy-Green tensor, C = ᵀF • F. Therefore, the strain energy potential can be formulated as Ψ = Ψ(I₁(C), I₂(C), I₃(C)).

Résumé

Wrinkling of thin films bound to softer substrates has been widely observed in nature. These phenomena have attracted considerable interest over the last decade. The post-buckling evolution of morphological instabilities often involves strong geometrical nonlinearities, large rotations, large displacements, large deformations, loading-path dependence and multiple symmetry-breakings. Because of these notorious difficulties, most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact solutions can be obtained analytically.

This thesis proposes a general framework to study the buckling problem of film/substrate systems in a numerical way: from 2D to 3D modeling, from a classical to a multi-scale point of view. The main objective is to apply advanced numerical methods for multiple-bifurcation analyses to various film/substrate models, focusing in particular on the post-buckling evolution and the surface mode transition. The models incorporate the Asymptotic Numerical Method (ANM) as a robust path-following technique, together with bifurcation indicators well adapted to the ANM, to detect a sequence of multiple bifurcations and the associated instability modes on the post-buckling evolution path. The ANM gives interactive access to semi-analytical equilibrium branches, which offers a considerable reliability advantage compared with classical iterative algorithms. In addition, an original nonlocal coupling strategy is developed to couple classical models and multi-scale models concurrently, where the strengths of each model are fully exploited and their shortcomings overcome. A discussion on the transition between the different scales is provided in a general way, which can also be seen as a guide for coupling techniques involving other reduced-order models. Finally, a general macroscopic modeling framework is developed and two specific Fourier-type models are derived from well-established classical models, which make it possible to predict the formation of instability patterns with far fewer elements and thus to reduce the computational cost significantly.

Keywords: Wrinkling; Post-buckling; Bifurcation; Thin film; Multi-scale; Path-following technique; Coupling method; Arlequin method; Finite element method.
List of Figures

1.1 Wrinkles in nature: (a) hornbeam leaf wrinkles, (b) finger wrinkles, (c) wrinkles in landform. [pictures from internet]
1.2 Schematics of mode shapes: (a) sinusoidal mode, (b) checkerboard mode, (c) herringbone mode.
1.3 Schematics of mode shapes: (a) hexagonal mode, (b) triangular mode.
1.4 Schematic of three types of morphological instability: (a) wrinkling, (b) folding, and (c) creasing [116].
1.5 Descriptive scheme of the ANM.
1.6 At least two macroscopic fields (the mean field and the amplitude of the fluctuation) are necessary to describe a nearly periodic response.
1.7 Arlequin method in a general mechanical problem.
1.8 Different weight functions for energy distribution.
1.9 Mediator space is based on the fine mesh. Arlequin parameters: α_f = 0.99, β_f = 0.5.
1.10 Mediator space is based on the coarse mesh. Arlequin parameters: α_f = 0.99, β_f = 0.5.
2.2 Geometry of the film/substrate system.
2.3 Sketch of the film/substrate system under a compression test.
2.4 Convergence test of film/substrate system with simply supported boundary conditions. The substrate is respectively divided into 5, 10, 15, 20 or 25 sublayers. ANM parameters: n = 15, δ = 10⁻⁵, 45 steps. Each point corresponds to one ANM step.
2.5 Bifurcation curve of film/substrate system with simply supported boundary conditions. Four bifurcations are observed. ANM parameters: n = 15, δ = 10⁻⁵, 45 steps. Each point corresponds to one ANM step.
2.6 Transverse displacement along the z direction. Results follow an exponential distribution.
2.7 Bifurcation indicators as a function of load parameter for film/substrate system with simply supported boundary conditions. (a) The 1st bifurcation point when λ = 0.04893. (b) The 2nd bifurcation point when λ = 0.05568. (c) The 3rd bifurcation point when λ = 0.08603. (d) The 4th bifurcation point when λ = 0.116. The condition ∆µ₀ = 1 is prescribed a priori at each ANM step.
2.8 The left column shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators in Fig. 2.7. The right column presents the associated instability modes. Simply supported boundary conditions are imposed. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode. (h) The 4th mode.
2.9 Bifurcation curve of film/substrate system with clamped boundary conditions. Two bifurcations are observed. ANM parameters: n = 15, δ = 10⁻³, 25 steps. Each point corresponds to one ANM step.
2.10 Bifurcation indicators as a function of load parameter for film/substrate system with clamped boundary conditions. (a) The 1st bifurcation point when λ = 0.05035. (b) The 2nd bifurcation point when λ = 0.05794. The condition ∆µ₀ = 1 is prescribed a priori at each ANM step.
2.11 The left column shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators in Fig. 2.10. The right column presents the associated instability modes. Clamped boundary conditions are imposed. (b) The 1st mode. (d) The 2nd mode.
2.12 Exponential variations of Young's modulus of the substrate along the thickness.
2.13 Bifurcation curve of FGM substrate with simply supported boundary conditions and softening Young's modulus E_s1. Three bifurcations are observed. ANM parameters: n = 15, δ = 10⁻⁵, 37 steps. Each point corresponds to one ANM step.
2.14 The left side shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators. The right side presents the associated instability modes. Simply supported boundary conditions and softening Young's modulus E_s1 are applied in the FGM case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode.
2.15 Bifurcation curve of FGM substrate with clamped boundary conditions and softening Young's modulus E_s1. Three bifurcations are observed. ANM parameters: n = 15, δ = 10⁻⁵, 45 steps. Each point corresponds to one ANM step.
2.16 The left side shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators. The right side presents the associated instability modes. Clamped boundary conditions and softening Young's modulus E_s1 are applied in the FGM case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode.
2.17 Bifurcation curve of FGM substrate with stiffening Young's modulus E_s2 and simply supported boundary conditions. Bifurcated branch and fundamental branch are distinguished. Each point corresponds to one ANM step. ANM parameters: (1) n = 15, δ = 10⁻⁵, 50 steps for the bifurcated branch; (2) n = 15, δ = 10⁻⁵, δ = 10⁻² after 22 steps for the fundamental branch.
2.18 The left side shows a sequence of wrinkling patterns corresponding to the bifurcated branch. The right side presents the associated instability modes. Stiffening Young's modulus E_s2 and simply supported boundary conditions are applied in the FGM case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode. (h) The 4th mode.
2.19 The left side shows the wrinkling pattern corresponding to the fundamental branch. The right side presents the associated 4th instability mode.
2.20 Comparison of bifurcation curves between anisotropic substrate and isotropic one. Each point corresponds to one ANM step. ANM parameters: (1) n = 15, δ = 10⁻⁵, 29 steps for the anisotropic case; (2) n = 15, δ = 10⁻⁵, 45 steps for the isotropic case.
2.21 The left column shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes. Simply supported boundary conditions are imposed in the anisotropic case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode.
2.22 Anisotropic substrate with simply supported boundary conditions under compression: (a) wrinkling pattern in the final step, (b) final shape of the film.
2.23 Comparison of bifurcation curves among four different cases with simply supported boundary conditions: (a) isotropic substrate, (b) anisotropic substrate, (c) FGM substrate with softening Young's modulus, (d) FGM substrate with stiffening Young's modulus.
3.1 Schematics of wrinkling patterns: (a) sinusoidal mode, (b) checkerboard mode, (c) herringbone mode (a periodic array of zigzag wrinkles).
3.2 Geometry of film/substrate system.
3.3 Geometry and kinematics of shell.
3.4 Sketch of coupling at the interface.
3.5 Loading conditions: (a) Film/Sub I under uniaxial compression, (b) Film/Sub II under equi-biaxial compression, (c) Film/Sub III under biaxial step loading.
3.6 Bifurcation curve of Film/Sub I with simply supported boundary conditions under uniaxial compression. Two bifurcations are observed. ANM parameters: n = 15, δ = 10⁻⁴, 26 steps. Each point corresponds to one ANM step.
3.7 Bifurcation indicators as a function of load parameter of Film/Sub I with simply supported boundary conditions: (a) the first bifurcation point when λ = 0.05281, (b) the 2nd bifurcation point when λ = 0.05516. The indicators vanish at bifurcation points.
3.8 Film/Sub I with simply supported boundary conditions under uniaxial compression. The left column shows a sequence of wrinkling modes ∆v corresponding to its critical load determined by bifurcation indicators in Fig. 3.7. The right column presents the associated instability modes ∆v₃ at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
3.9 Film/Sub I with simply supported boundary conditions under uniaxial compression: (a) sinusoidal pattern v in the final step, (b) the final shape v₃.
3.10 Bifurcation curve of Film/Sub I with clamped boundary conditions under uniaxial compression. Two bifurcations are observed. ANM parameters: n = 15, δ = 10⁻⁴, 28 steps. Each point corresponds to one ANM step.
3.11 Film/Sub I with clamped boundary conditions under uniaxial compression. The left column shows a sequence of wrinkling modes ∆v corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes ∆v₃ at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
3.12 Film/Sub I with clamped boundary conditions under uniaxial compression: (a) sinusoidal pattern v in the final step, (b) the final shape v₃.
3.13 Bifurcation curve of Film/Sub II under equi-biaxial compression. Four bifurcations are observed. ANM parameters: n = 15, δ = 10⁻⁴, 80 steps. Each point corresponds to one ANM step.
3.14 Film/Sub II under equi-biaxial compression. The left column shows a sequence of wrinkling modes ∆v corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes ∆v₃ at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode, (f) the 3rd mode.
3.15 Film/Sub II under equi-biaxial compression: (a) checkerboard pattern v in the final step, (b) the final shape v₃.
3.16 Bifurcation curve of Film/Sub III: (a) the first step of compression along the x direction, ANM parameters: n = 15, δ = 10⁻⁴, 20 steps, (b) the second step of compression along the y direction, ANM parameters: n = 15, δ = 10⁻⁴, 31 steps. Each point corresponds to one ANM step.
3.17 Film/Sub III under the first step of compression along the x direction. The left column shows a sequence of wrinkling modes ∆v corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes ∆v₃ at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
3.18 Film/Sub III under the second step of compression along the y direction. The left column shows a sequence of wrinkling modes ∆v corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes ∆v₃ at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
3.19 Film/Sub III under the second step of compression along the y direction: (a) herringbone pattern v in the final step, (b) top view of (a). . . . . . . . 4.1 Sketch of an elastic beam on a nonlinear elastic foundation. . . . . . . . . . 4.2 Schematic of the reduction from the microscopic model. . . . . . . . . . . . 4.3 Buckling of a simply supported beam under uniform compression: one real envelope (v 0 ) and four complex envelopes 4. 6 6 Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by all the reduced Fourier coefficients(v 0 , v R 1 , v I 1 , v R2 and v I 2 ) is compared with the exact solution v(x). . . . . . . 4.7 Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by two reduced Fourier coefficients v I 1 and v R 1 is compared with the exact solution v(x). . . . . . . . . . . . . . 4.8 Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by all the reduced Fourier coefficients (u 0 , u R 1 , u I 1 , u R 2 and u I 2 ) is compared with the exact solution u(x). . . . . . 4.9 Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by only u 0 is compared with the exact solution u(x). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4.10 Buckling of a clamped beam under uniform compression: derivatives of the curve in Fig. 4.8. The instability pattern for µ = 2.21. . . . . . . . . . . . 4.11 Definition of domains in the Arlequin framework. . . . . . . . . . . . . . . 4.12 How the reduced Fourier coefficient R v (x i ) depends on the global microscopic transversal displacement Q f v . . . . . . . . . . . . . . . . . . . . . . . 4.13 Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.01. Only the left half part of the beam is demonstrated. . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . 4.14 Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.21. Only the left half part of the beam is demonstrated. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.15 Buckling of a clamped beam under uniform compression: maximal deflection in [3π, 15π] vs. applied shortening µ. The prolongation coupling and reduction-based coupling are depicted together. . . . . . . . . . . . . . . . 4.16 Buckling of a clamped beam under uniform compression: maximal deflection in [0, 3π] vs. applied shortening µ. The prolongation coupling and reduction-based coupling are depicted together. . . . . . . . . . . . . . . . 4.17 The prolongation coupling approach is implemented in the bridging technique. Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.01 and µ = 2.21. Only the left half part of the beam is demonstrated. . . . . . . . . . . . . . . . . . . xiv 4.18 The reduction-based coupling approach is implemented in the bridging technique. Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.01 and µ = 2.21. Only the left half part of the beam is demonstrated. . . . . . . . . . . . . . 4.19 The left boundary region: spatial distribution of the instability patterns for µ = 2.21. The prolongation coupling and reduction-based coupling are depicted together. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.20 Spatial distribution of the instability patterns with the coupling zone in [3π, 4π]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.21 Spatial distribution of the instability patterns with the coupling zone in [8π, 9π]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.22 Deflection according to H 1 and L 2 couplings in the interval [0, 5π] for µ = 2.21. . . . . . . . . . . . . . . . . . . 
4.23 Effect of H^1 and L^2 couplings in the gluing zone. The relative difference of displacement is plotted.
4.24 Effect of H^1 and L^2 couplings on normal forces.
4.25 Lagrange multiplier within H^1 and L^2 couplings.
1 Bifurcation curve and the associated wrinkling modes with respect to the incremental compression. Period-doubling mode appears at the third bifurcation point.
2 3D macroscopic film/substrate model under uniaxial compression: (a) the 1st wrinkling mode, (b) the 2nd wrinkling mode. Only two elements are used along the wave direction.
3 Thin films on hyperelastic substrates under equi-biaxial compression. The left column shows a sequence of wrinkling patterns, while the right column presents the associated instability shapes at the line X = 0.5L_x. Localized folding mode and checkerboard mode appear in the bulk.

1.1 Surface morphological instabilities of film/substrate systems
1.1.1 Literature review
1.1.2 Challenges and discussion
1.2 Asymptotic Numerical Method for nonlinear resolution
1.2.1 Perturbation technique
1.2.2 Path parameter
1.2.3 Continuation approach
1.2.4 Bifurcation indicator
1.3 Multi-scale modeling for instability pattern formation
1.4 Arlequin method for model coupling
1.4.1 Energy distribution
1.4.2 Coupling choices
1.5 Chapter conclusion

1.1 Surface morphological instabilities of film/substrate systems

1.1.1 Literature review

Figure 1.1: Wrinkles in nature: (a) hornbeam leaf wrinkles, (b) finger wrinkles, (c) wrinkles in landform.
[pictures from internet]
Figure 1.2: Schematics of mode shapes: (a) sinusoidal mode, (b) checkerboard mode, (c) herringbone mode.
Figure 1.3: Schematics of mode shapes: (a) hexagonal mode, (b) triangular mode.
Figure 1.4: Schematic of three types of morphological instability: (a) wrinkling, (b) folding, and (c) creasing [116].
Figure 1.5: Descriptive scheme of the ANM.
Figure 1.6: At least two macroscopic fields (the mean field and the amplitude of the fluctuation) are necessary to describe a nearly periodic response.
Figure 1.7: Arlequin method in a general mechanical problem.
Figure 1.8: Different weight functions for energy distribution.
Figure 1.9: Mediator space is based on the fine mesh. Arlequin parameters: α_f = 0.99, β_f = 0.5.
Figure 1.10: Mediator space is based on the coarse mesh. Arlequin parameters: α_f = 0.99, β_f = 0.5.
Figure 2.2: Geometry of the film/substrate system.
Figure 2.3: Sketch of the film/substrate system under a compression test.
Figure 2.4: Convergence test of film/substrate system with simply supported boundary conditions. The substrate is respectively divided into 5, 10, 15, 20 or 25 sublayers. ANM parameters: n = 15, δ = 10^-5, 45 steps. Each point corresponds to one ANM step.
Figure 2.5: Bifurcation curve of film/substrate system with simply supported boundary conditions. Four bifurcations are observed. ANM parameters: n = 15, δ = 10^-5, 45 steps. Each point corresponds to one ANM step.
Figure 2.8: The left column shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators in Fig. 2.7. The right column presents the associated instability modes. Simply supported boundary conditions are imposed. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode. (h) The 4th mode.
Figure 2.9: Bifurcation curve of film/substrate system with clamped boundary conditions. Two bifurcations are observed. ANM parameters: n = 15, δ = 10^-3, 25 steps. Each point corresponds to one ANM step.
Figure 2.10: Bifurcation indicators as a function of load parameter for film/substrate system with clamped boundary conditions. (a) The 1st bifurcation point when λ = 0.05035. (b) The 2nd bifurcation point when λ = 0.05794. The condition Δµ_0 = 1 is prescribed a priori at each ANM step.
Figure 2.11: The left column shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators in Fig. 2.10. The right column presents the associated instability modes. Clamped boundary conditions are imposed. (b) The 1st mode. (d) The 2nd mode.
Figure 2.12: Exponential variations of Young's modulus of the substrate along the thickness.
Figure 2.13: Bifurcation curve of FGM substrate with simply supported boundary conditions and softening Young's modulus E_s1. Three bifurcations are observed. ANM parameters: n = 15, δ = 10^-5, 37 steps. Each point corresponds to one ANM step.
Figure 2.14: The left side shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators. The right side presents the associated instability modes. Simply supported boundary conditions and softening Young's modulus E_s1 are applied in the FGM case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode.
Figure 2.15: Bifurcation curve of FGM substrate with clamped boundary conditions and softening Young's modulus E_s1. Three bifurcations are observed. ANM parameters: n = 15, δ = 10^-5, 45 steps. Each point corresponds to one ANM step.
Figure 2.16: The left side shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators. The right side presents the associated instability modes. Clamped boundary conditions and softening Young's modulus E_s1 are applied in the FGM case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode.
Figure 2.17: Bifurcation curve of FGM substrate with stiffening Young's modulus E_s2 and simply supported boundary conditions. Bifurcated branch and fundamental branch are distinguished. Each point corresponds to one ANM step. ANM parameters: (1) n = 15, δ = 10^-5, 50 steps for the bifurcated branch; (2) n = 15, δ = 10^-5, δ = 10^-2 after 22 steps for the fundamental branch.
Figure 2.18: The left side shows a sequence of wrinkling patterns corresponding to the bifurcated branch. The right side presents the associated instability modes. Stiffening Young's modulus E_s2 and simply supported boundary conditions are applied in the FGM case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode. (h) The 4th mode.
Figure 2.19: The left side shows the wrinkling pattern corresponding to the fundamental branch. The right side presents the associated 4th instability mode.
Figure 2.20: Comparison of bifurcation curves between anisotropic substrate and isotropic one. Each point corresponds to one ANM step. ANM parameters: (1) n = 15, δ = 10^-5, 29 steps for the anisotropic case; (2) n = 15, δ = 10^-5, 45 steps for the isotropic case.
Figure 2.21: The left column shows a sequence of wrinkling patterns corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes. Simply supported boundary conditions are imposed in the anisotropic case. (b) The 1st mode. (d) The 2nd mode. (f) The 3rd mode.
Figure 2.22: Anisotropic substrate with simply supported boundary conditions under compression: (a) wrinkling pattern in the final step, (b) final shape of the film.
Figure 2.23: Comparison of bifurcation curves among four different cases with simply supported boundary conditions: (a) isotropic substrate, (b) anisotropic substrate, (c) FGM substrate with softening Young's modulus, (d) FGM substrate with stiffening Young's modulus.
Figure 3.1: Schematics of wrinkling patterns: (a) sinusoidal mode, (b) checkerboard mode, (c) herringbone mode (a periodic array of zigzag wrinkles).
Figure 3.2: Geometry of film/substrate system.
Figure 3.3: Geometry and kinematics of shell.
Figure 3.5: Loading conditions: (a) Film/Sub I under uniaxial compression, (b) Film/Sub II under equi-biaxial compression, (c) Film/Sub III under biaxial step loading.
Figure 3.6: Bifurcation curve of Film/Sub I with simply supported boundary conditions under uniaxial compression. Two bifurcations are observed. ANM parameters: n = 15, δ = 10^-4, 26 steps. Each point corresponds to one ANM step.
Figure 3.7: Bifurcation indicators as a function of load parameter of Film/Sub I with simply supported boundary conditions: (a) the first bifurcation point when λ = 0.05281, (b) the 2nd bifurcation point when λ = 0.05516. The indicators vanish at bifurcation points.
Figure 3.8: Film/Sub I with simply supported boundary conditions under uniaxial compression. The left column shows a sequence of wrinkling modes Δv corresponding to its critical load determined by bifurcation indicators in Fig. 3.7. The right column presents the associated instability modes Δv_3 at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
Figure 3.9: Film/Sub I with simply supported boundary conditions under uniaxial compression: (a) sinusoidal pattern v in the final step, (b) the final shape v_3.
Figure 3.10: Bifurcation curve of Film/Sub I with clamped boundary conditions under uniaxial compression. Two bifurcations are observed. ANM parameters: n = 15, δ = 10^-4, 28 steps. Each point corresponds to one ANM step.
Figure 3.11: Film/Sub I with clamped boundary conditions under uniaxial compression. The left column shows a sequence of wrinkling modes Δv corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes Δv_3 at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
Figure 3.12: Film/Sub I with clamped boundary conditions under uniaxial compression: (a) sinusoidal pattern v in the final step, (b) the final shape v_3.
Figure 3.13: Bifurcation curve of Film/Sub II under equi-biaxial compression. Four bifurcations are observed. ANM parameters: n = 15, δ = 10^-4, 80 steps. Each point corresponds to one ANM step.
Figure 3.14: Film/Sub II under equi-biaxial compression. The left column shows a sequence of wrinkling modes Δv corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes Δv_3 at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode, (f) the 3rd mode.
Figure 3.15: Film/Sub II under equi-biaxial compression: (a) checkerboard pattern v in the final step, (b) the final shape v_3.
Figure 3.16: Bifurcation curve of Film/Sub III: (a) the first step of compression along the x direction, ANM parameters: n = 15, δ = 10^-4, 20 steps, (b) the second step of compression along the y direction, ANM parameters: n = 15, δ = 10^-4, 31 steps. Each point corresponds to one ANM step.
Figure 3.17: Film/Sub III under the first step of compression along the x direction. The left column shows a sequence of wrinkling modes Δv corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes Δv_3 at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
Figure 3.18: Film/Sub III under the second step of compression along the y direction. The left column shows a sequence of wrinkling modes Δv corresponding to its critical load determined by bifurcation indicators. The right column presents the associated instability modes Δv_3 at the line Y = 0.5L_y: (b) the 1st mode, (d) the 2nd mode.
Figure 3.19: Film/Sub III under the second step of compression along the y direction: (a) herringbone pattern v in the final step, (b) top view of (a).
Figure 4.1: Sketch of an elastic beam on a nonlinear elastic foundation.
Figure 4.2: Schematic of the reduction from the microscopic model.
Figure 4.3: Buckling of a simply supported beam under uniform compression: one real envelope (v_0) and four complex envelopes (v_1^R, v_1^I, v_2^R, v_2^I). The reduction is performed over the domain [π, 29π]. The instability pattern for µ = 2.21.
Figure 4.4: Buckling of a clamped beam under uniform compression: one real envelope (v_0) and four complex envelopes (v_1^R, v_1^I, v_2^R, v_2^I). The reduction is performed over the domain [π, 29π]. The instability pattern for µ = 2.21.
Figure 4.5: Buckling of a clamped beam under uniform compression: one real envelope (u_0) and four complex envelopes (u_1^R, u_1^I, u_2^R, u_2^I). The reduction is performed over the domain [π, 29π]. The instability pattern for µ = 2.21.
Figure 4.6: Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by all the reduced Fourier coefficients (v_0, v_1^R, v_1^I, v_2^R and v_2^I) is compared with the exact solution v(x).
Figure 4.7: Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by two reduced Fourier coefficients v_1^I and v_1^R is compared with the exact solution v(x).
Figure 4.8: Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by all the reduced Fourier coefficients (u_0, u_1^R, u_1^I, u_2^R and u_2^I) is compared with the exact solution u(x).
Figure 4.9: Buckling of a clamped beam under uniform compression. The instability pattern for µ = 2.21. Reconstruction by only u_0 is compared with the exact solution u(x).
Figure 4.10: Buckling of a clamped beam under uniform compression: derivatives of the curve in Fig. 4.8. The instability pattern for µ = 2.21.
Figure 4.11: Definition of domains in the Arlequin framework.
Figure 4.12: How the reduced Fourier coefficient R_v(x_i) depends on the global microscopic transversal displacement Q_v^f.
Figure 4.13: Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.01. Only the left half part of the beam is demonstrated.
Figure 4.14: Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.21. Only the left half part of the beam is demonstrated.
Figure 4.16: Buckling of a clamped beam under uniform compression: maximal deflection in [0, 3π] vs. applied shortening µ. The prolongation coupling and reduction-based coupling are depicted together.
Figure 4.17: The prolongation coupling approach is implemented in the bridging technique. Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.01 and µ = 2.21. Only the left half part of the beam is demonstrated.
Figure 4.18: The reduction-based coupling approach is implemented in the bridging technique. Buckling of a clamped beam under uniform compression: spatial distribution of the instability patterns for µ = 2.01 and µ = 2.21. Only the left half part of the beam is demonstrated.
Figure 4.19: The left boundary region: spatial distribution of the instability patterns for µ = 2.21. The prolongation coupling and reduction-based coupling are depicted together.
Figure 4.20: Spatial distribution of the instability patterns with the coupling zone in [3π, 4π].
Figure 4.21: Spatial distribution of the instability patterns with the coupling zone in [8π, 9π].
Figure 4.22: Deflection according to H^1 and L^2 couplings in the interval [0, 5π] for µ = 2.21.
Figure 4.23: Effect of H^1 and L^2 couplings in the gluing zone. The relative difference of displacement is plotted.
Figure 4.24: Effect of H^1 and L^2 couplings on normal forces.
Figure 4.25: Lagrange multiplier within H^1 and L^2 couplings.

It can be established that Eqs. (5.7) and (5.14) are derived from the stationarity of the following macroscopic potential energy, with the generalized strain {γ} = {γ_0} + {γ_wr}, where {γ_0} = {ε(u)}:

∫_Ω N : δγ_wr dΩ + δΠ_ben = 0, (5.82)

{σ} = [L_s^gen]{ε}, {ε} = [H^gen]{θ}.

Figure 1: Bifurcation curve and the associated wrinkling modes with respect to the incremental compression. Period-doubling mode appears at the third bifurcation point.
Figure 2: 3D macroscopic film/substrate model under uniaxial compression: (a) the 1st wrinkling mode, (b) the 2nd wrinkling mode. Only two elements are used along the wave direction.
Figure 3: Thin films on hyperelastic substrates under equi-biaxial compression. The left column shows a sequence of wrinkling patterns, while the right column presents the associated instability shapes at the line X = 0.5L_x. Localized folding mode and checkerboard mode appear in the bulk.

Ψ(C) = Ψ(I_1, I_2, I_3), I_3 = det(C) = [det(F)]^2 = J^2. (E.4)

One can obtain the derivatives of the invariants with respect to C as follows: ∂I_3/∂C = I_3 C^-1. (E.7)

Table 2.1: Material and geometric parameters of the film/substrate system.

E_f (MPa)   E_s (MPa)   ν_f   ν_s    L (mm)   h_t (mm)   h_f (mm)
1.3 × 10^5  1.8         0.3   0.48   1.5      0.1        10^-3

2.5.1 Film/substrate with simply supported boundary conditions

Table 3.1: Common characteristics of material and geometric properties.

E_f (MPa)   E_s (MPa)   ν_f   ν_s    h_f (mm)   h_s (mm)
1.3 × 10^5  1.8         0.3   0.48   10^-3      0.1

Table 3.2: Different parameters of film/substrate systems.

              L_x (mm)   L_y (mm)   Loading
Film/Sub I    1.5        1.5        Uniaxial
Film/Sub II   1          1          Equi-biaxial
Film/Sub III  0.75       1.5        Biaxial step

Figure 4.15: Buckling of a clamped beam under uniform compression: maximal deflection in [3π, 15π] vs. applied shortening µ. The prolongation coupling and reduction-based coupling are depicted together.
Acknowledgments
Appendix B General methodology to obtain the macroscopic models
Appendix C A macroscopic model with one real and one complex envelope
Appendix D The residuals of the microscopic and macroscopic model
Appendix E Finite strain hyperelasticity
Appendix F Automatic differentiation with the ANM
Bibliography

δP = 0, which leads to Eqs. (4.6). (b) The macroscopic model couples the 1D membrane Eqs. (4.6-a) and (4.6-b) with a second order differential equation (4.6-c) for the envelope v_1(x), which is a sort of Ginzburg-Landau equation. In this chapter, we only consider the simplest case of a real v_1(x), which has the drawback of fixing the phase in the bulk. A complete study of the model with a complex v_1(x) can be found in [Mhada, "About macroscopic models of instability pattern formation"], where better accuracy in the boundary layer has been established. One may wonder why the fourth order derivatives of v_1 have been dropped. It was theoretically and numerically established in [Damil, "Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients"], §5.1, that these high order derivatives can lead to spurious oscillations and lack of convergence for fine meshes. In fact, this is consistent with the basic assumption of a slowly varying envelope, d/dx ≪ q. On the contrary, one can consider removing all the derivatives in Eq.
(4.6-c), but the discussion in previous papers showed that the resulting model is very poor and not consistent with the asymptotic Ginzburg-Landau approach.

Effective range of macroscopic models

The assumption of slowly varying amplitudes is the main restriction for the application of envelope models. In other words, the length scales of the oscillations and of the envelope must be clearly distinguished. This limitation holds for the present Fourier approach as well as for the asymptotic one [Damil, "Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients"]. A known example is the localization of buckling patterns arising from long domains and softening nonlinearity (c_3 < 0). For instance, Fig. 8 in [Hunt, "Structural localization phenomena and the dynamical phase-space analogy"] demonstrates that the assumption is only valid very close to the bifurcation point. In other words, such localized behavior cannot be represented by a slowly varying amplitude. In the same way, any envelope model is not able to perfectly account for boundary effects. That is the reason why we propose a double scale analysis, where the macroscopic model is considered in the bulk while the fine model is used near the boundary [Mhada, "About macroscopic models of instability pattern formation"].

To verify this, we have computed the solution of the nonlinear problem (4.1) by the classic Newton-Raphson method, together with the corresponding macroscopic quantities defined by the reduction formulae (4.13) and (4.14). From Fig. 4.4, we can observe that the first envelope |v_1| is bigger than the other three ones, but the ratio is larger than in the simply supported case and is approximately 10^-1. According to Eq. (4.16), this shows that the effective wave number is not exactly the expected value q = 1. Moreover, the envelopes obtained by Eqs.
(4.13) and (4.14) are not precisely slowly variable, and a rapidly varying part is observed, as indicated in the oscillating part of Eq. (4.16). The longitudinal displacements u_0, u_1^R, u_1^I, u_2^R and u_2^I are illustrated in Fig. 4.5. As expected, the zero order term u_0 is larger than the others and is almost linear, but there are also oscillating parts u_1^R and u_1^I. Next, we have rebuilt the microscopic deflection v(x) = P(v_j) from five envelopes (j = 0, j = ±1 and j = ±2) by Eq. (4.4), and we find that this reconstruction describes the exact value well, as shown in Fig. 4.6. Furthermore, it can be noticed that the two envelopes v_1^I and v_1^R alone suffice to recover the microscopic model, as shown in Fig. 4.7, which justifies the assumptions made in Section 4.2.3. The same reconstruction is demonstrated in Figs. 4.8 and 4.9, and this leads to the expected nearly linear response u(x). However, u(x) is not perfectly linear and its derivative u′(x) fluctuates around -2.22, as shown in Fig. 4.10. This can be explained through the constitutive equation (4.1-b). Since there is no horizontal force (f = 0), the normal force is constant, n(x) = -µ. The transversal displacement v(x) being in a wave form, the deformation u′(x) is in a wave form as well.

Comments

By considering the exact solution of the full problem, we have established that the macroscopic quantities defined by the reduction formulae (4.13) and (4.14) are not exactly slowly variable. They also account for an oscillating part that is small only if the effective wave number Q is very close to the a priori chosen wave number q. In the next sections, we will introduce a coupling between the two scales that is based on the reduction operator (4.10). A bridging technique based on the formulae (4.13) and (4.14) will be presented and evaluated. To smooth these oscillations, an L^2-type coupling operator will be used.
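The reduction and reconstruction steps discussed above can be sketched numerically. The following is a minimal illustration and not the thesis implementation: the test signal (a tanh-modulated cosine), the grid, and the one-period averaging window are all assumptions made for the example. It extracts slowly varying Fourier envelopes v_j(x) by averaging v(s)e^{-ijqs} over one period, then rebuilds the signal from the j = 0, ±1 harmonics.

```python
import numpy as np

# Sketch of the reduction idea: v(x) ≈ v_0(x) + v_1(x) e^{iqx} + conj(v_1(x)) e^{-iqx},
# where each envelope v_j(x) is a sliding one-period average of v(s) e^{-i j q s}.
q = 1.0
x = np.linspace(0.0, 30 * np.pi, 6001)          # uniform grid, 200 samples per period
v = np.tanh(x / (3 * np.pi)) * np.tanh((30 * np.pi - x) / (3 * np.pi)) * np.cos(q * x)

def envelope(v, x, j, q):
    """Reduced Fourier coefficient v_j(x): average of v e^{-ijqx} over one period 2*pi/q."""
    w = int(round(2 * np.pi / q / (x[1] - x[0])))   # samples per period
    kernel = np.ones(w) / w
    return np.convolve(v * np.exp(-1j * j * q * x), kernel, mode="same")

v0 = envelope(v, x, 0, q).real                      # mean field (here close to zero)
v1 = envelope(v, x, 1, q)                           # first harmonic, |v_1| ~ half the amplitude
v_rebuilt = v0 + 2.0 * (v1 * np.exp(1j * q * x)).real
bulk = slice(600, -600)                             # discard the averaging-window edges
err = float(np.max(np.abs(v_rebuilt - v)[bulk]))
```

The reconstruction error stays small only because the tanh envelope varies slowly compared with the period 2π/q; a rapidly varying modulation would violate the slowly-variable-coefficient assumption, exactly as discussed above.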
Numerical evaluation and assessment

Prolongation versus reduction-based coupling

Two coupling approaches are compared and evaluated. The numerical assessment is based on the results obtained on a beam resting on a nonlinear elastic foundation. The clamped beam has the same parameters as reported in Section 4.3.2. The models are defined as follows:

1. Microscopic model: 120 microscopic elements for the whole beam [0, 30π].
2. Macroscopic model: 15 macroscopic elements for the whole beam [0, 30π] with a wave number q = 1.

The boundary conditions for the microscopic model are those of a classical clamping. The same boundary conditions have been applied for the bridging techniques (prolongation coupling and reduction-based coupling) at x = 0. As for the macroscopic model, it is known that at a clamped end the envelope satisfies nearly v_1(0) = v_1(L) = 0. To be clear, only the left half part of the beam [0, 15π] will be presented in the following figures, but the computations have been performed on the whole domain [0, 30π]. The nonlinear problems have been solved by the Newton-Raphson procedure. Small step lengths have to be chosen in the region of the bifurcation µ ≈ 2. To ensure a transition from the fundamental branch to the bifurcated one, a transverse perturbation force g_pert is applied. The values of the perturbation force are g_pert = 2 × 10^-3 for the microscopic model, g_pert = 3 × 10^-5 for the macroscopic model and g_pert = 2 × 10^-3 for the bridging models. The numerical response of the structure is a spatial oscillation with a modulation near the end (see Fig. 4.13 and Fig. 4.14). Close to the bifurcation point µ = 2.01, the envelope is nearly sinusoidal, and it looks like a hyperbolic tangent away from the bifurcation.

Connection between the film and the substrate

As the film is bonded to the substrate, the displacement should be continuous at the interface.
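The branch-switching procedure described above (Newton-Raphson with a small transverse perturbation force near µ ≈ 2) can be sketched with a self-contained finite-difference stand-in — not the thesis finite-element code. The nondimensional coefficients of the foundation, the discretization, and the sinusoidal shape of g_pert are assumptions; the model v'''' + µv'' + v + v³ = g_pert has its linear critical load at µ = 2 for the wave number q = 1, matching the bifurcation region discussed above.

```python
import numpy as np

# Finite-difference stand-in for a beam on a nonlinear elastic foundation (assumed
# nondimensional form): v'''' + mu*v'' + v + v^3 = g_pert on [0, 30*pi], with v = 0
# on the two nodes at each end as a discrete clamping. Critical load: mu = 2.
L, n = 30.0 * np.pi, 600
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
xi = x[2:-2]                       # interior nodes carrying the unknowns
m = xi.size

def band(stencil, k0):
    """Banded matrix from a finite-difference stencil starting at diagonal k0."""
    return sum(s * np.eye(m, k=k0 + i) for i, s in enumerate(stencil))

D4 = band([1.0, -4.0, 6.0, -4.0, 1.0], -2) / h**4
D2 = band([1.0, -2.0, 1.0], -1) / h**2
I = np.eye(m)
g_pert = 2e-3 * np.sin(xi)         # small transverse perturbation (shape assumed)

def newton(mu, v, tol=1e-10, itmax=50):
    """Newton-Raphson for the discretized equilibrium at a fixed load mu."""
    for _ in range(itmax):
        r = D4 @ v + mu * (D2 @ v) + v + v**3 - g_pert
        if np.linalg.norm(r) < tol:
            break
        J = D4 + mu * D2 + I + 3.0 * np.diag(v**2)
        v = v - np.linalg.solve(J, r)
    return v

v = np.zeros(m)
amp = {}
for mu in (1.8, 1.9, 1.95, 2.0, 2.05, 2.1, 2.15, 2.21):   # continuation in the load
    v = newton(mu, v)
    amp[mu] = float(np.max(np.abs(v)))
```

Below µ = 2 the deflection stays of the order of the perturbation; beyond it the perturbation steers Newton-Raphson onto the bifurcated branch, whose amplitude follows a Landau-type growth in sqrt(µ - 2), which is the qualitative behavior of the bifurcation diagrams in Figs. 4.15 and 4.16.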
Therefore, the connection between the film and the substrate reads (5.59). From Eqs. (2.8)-(2.9) and (5.59), the following relations can be obtained: (5.60). Consequently, the corresponding macroscopic relation reads (5.61).

A 3D macroscopic film/substrate model

The general macroscopic modeling framework established in Section 5.2 can be used directly for 3D film/substrate modeling, by discretizing the domain with 3D finite elements. Nevertheless, this is cumbersome and especially inefficient for very thin films, since the huge thickness ratio between the film and the substrate requires extremely refined meshes to keep an acceptable element aspect ratio. More flexible modeling schemes should therefore be developed for 3D cases. One simple way is to develop a 3D macroscopic film/substrate model based on the same modeling strategy introduced in Chapter 3, where the 3D classical model has shown its competitive efficiency for the post-bifurcation response of morphological instability. Thus, we consider the same film/substrate system as in Section 3.2, with the same geometric and material properties and the same Hookean elasticity framework (see Fig. 3.2). However, the 3D macroscopic model will be totally different from the previous one. Precisely, this 3D macroscopic model couples a nonlinear macroscopic membrane-wrinkling model [START_REF] Damil | New nonlinear multi-scale models for wrinkled membranes[END_REF][START_REF] Damil | Membrane wrinkling revisited from a multiscale point of view[END_REF] representing the film with a linear macroscopic elasticity description of the substrate, the two models being coupled by introducing Lagrange multipliers. In what follows, this modeling scheme will lead to the integrated macroscopic film/substrate model.
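The Lagrange-multiplier coupling mentioned above can be illustrated on a deliberately minimal analogue (this is not the thesis's actual film/substrate formulation): two independently discretized 1D elastic bars, a stiff "film" and a soft "substrate", tied at their interface by a single multiplier, and solved as one saddle-point (KKT) system. All stiffness values and loads are arbitrary illustrative numbers.

```python
import numpy as np

def bar_stiffness(n_el, EA, length):
    """Assemble the stiffness matrix of a 1D bar with n_el linear elements."""
    k = EA * n_el / length
    K = np.zeros((n_el + 1, n_el + 1))
    for e in range(n_el):
        K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

# "Film": stiff short bar; "substrate": soft bar (arbitrary values).
Kf = bar_stiffness(4, EA=100.0, length=1.0)
Ks = bar_stiffness(8, EA=1.0, length=1.0)
nf, ns = Kf.shape[0], Ks.shape[0]
n = nf + ns

# Block system: film DOFs, substrate DOFs, then one Lagrange multiplier
# enforcing u_film(interface) - u_substrate(interface) = 0.
K = np.zeros((n + 1, n + 1))
K[:nf, :nf] = Kf
K[nf:n, nf:n] = Ks
C = np.zeros(n)
C[nf - 1] = 1.0      # last film node (interface)
C[nf] = -1.0         # first substrate node (interface)
K[n, :n] = C
K[:n, n] = C

f = np.zeros(n + 1)
f[nf - 1] = 1.0      # unit load applied at the interface, on the film side

# Pin the far end of the substrate to remove the rigid-body mode.
fixed = n - 1
K[fixed, :] = 0.0
K[:, fixed] = 0.0
K[fixed, fixed] = 1.0
f[fixed] = 0.0

sol = np.linalg.solve(K, f)
u_film, u_sub, lam = sol[:nf], sol[nf:n], sol[n]
gap = u_film[-1] - u_sub[0]   # constraint residual at the interface
```

The multiplier λ has a direct mechanical meaning: it is the force transmitted through the interface, and the constraint gap vanishes to machine precision even though the two meshes remain independent.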
Linear macroscopic elasticity for the substrate

As explained in Section 3.2.2, linear isotropic elasticity theory is sufficient to represent the substrate, but the potential energy of the substrate (3.9) should be reformulated in the macroscopic framework. We apply the technique of slowly variable Fourier coefficients in the 3D case [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]. To be consistent with the macroscopic membrane-wrinkling model, we also assume that the instability wave number vector {q} is known and that the wave spreads along the Ox direction. In other words, the vector {q} is parallel to the unit vector {1, 0, 0}, i.e. {q} = q{1, 0, 0}. The unknown field U(X, Y, Z) = {u(X, Y, Z), σ(X, Y, Z), ε(X, Y, Z)}, whose components are displacement, stress and strain, is written in the following form:

U(X, Y, Z) = Σ_j U_j(X, Y, Z) e^{ijqX},

where the macroscopic unknown fields U_j(X, Y, Z) vary slowly on the period of the oscillations. As discussed before, it is not necessary to choose an infinite number of Fourier coefficients, and thus the unknown field U(X, Y, Z) is expressed in terms of two harmonics: the mean field U_0(X, Y, Z) and the first order harmonics U_1(X, Y, Z)e^{iqX} and Ū_1(X, Y, Z)e^{-iqX}. Hence, the macroscopic potential energy of the substrate can be expressed as (5.85), where L_s is the elastic matrix of the substrate. It can be seen that the macroscopic potential energy (5.85) takes the same form as the classical one (3.9). Note that each unknown of the classical model is replaced by a vector three times larger, since it takes into account the zero order harmonic and the first order harmonic, the latter represented by a complex vector or by two real vectors. These generalized vectors, i.e.
the generalized displacement {u_s}, the generalized stress {σ}, the generalized strain {ε} and the generalized displacement gradient {θ}, are defined in (5.86). For linear elasticity, one obtains one constitutive law for the zero order harmonic (real number) and two constitutive laws for the first order harmonic (complex number).

Appendix A  Justification for the exponential distribution of the transverse displacement along the z direction

Let us recall the Lamé-Navier equation (A.1), where λ and µ are Lamé's first parameter and the shear modulus, respectively. In the two-dimensional case, Eq. (A.1) can be written as (A.2). As for sinusoidal wrinkles [START_REF] Huang | Nonlinear analyses of wrinkles in a film bonded to a compliant substrate[END_REF], the displacement field can be assumed in the form (A.3), where f_1 and f_2 are unknown functions and k is the wave number. By introducing Eq. (A.3) into Eq. (A.2), one obtains two ordinary differential equations (A.4). These two coupled equations can be solved once boundary conditions are introduced, but the boundary conditions are not clear at the interface and at the bottom [START_REF] Audoly | Buckling of a stiff film bound to a compliant substrate-Part I: Formulation, linear stability of cylindrical patterns, secondary bifurcations[END_REF]. Nevertheless, the system can be solved up to unknown constants, in the form (A.5), where c_1, c_2, c_3 and c_4 are four unknown constants depending on the boundary conditions. Obviously, f_1 is a function of exponential type. Therefore, the transverse displacement w should follow an exponential distribution along the thickness direction, which can be observed in Fig. 2.6. The most remarkable feature of the solution (A.5) is the length 1/k characterizing the exponential decay, which is of the order of the wrinkling wavelength. This proves the property α = λ_x/λ_z = O(1) stated in Section 2.2.
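The display equations of Appendix A were lost in extraction; their standard forms can be restated as follows. The cos/sin phases of the ansatz are assumptions, but the overall structure is the classical plane-strain result.

```latex
% Lamé-Navier equation (static, no body force) -- Eq. (A.1):
(\lambda + \mu)\,\nabla(\nabla\cdot\mathbf{u}) + \mu\,\nabla^{2}\mathbf{u} = \mathbf{0}

% Assumed sinusoidal ansatz -- Eq. (A.3), phases assumed:
u(x,z) = f_{2}(z)\,\sin(kx), \qquad w(x,z) = f_{1}(z)\,\cos(kx)

% General solution for the transverse profile -- Eq. (A.5):
f_{1}(z) = \left(c_{1} + c_{2}\,z\right) e^{kz} + \left(c_{3} + c_{4}\,z\right) e^{-kz}
```

The e^{±kz} structure of f_1 is what gives the decay length 1/k invoked in the text.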
Appendix B  General methodology to obtain the macroscopic models

Within the Fourier approach, the differential equations satisfied by the amplitudes U_j = (u_j, v_j, n_j) are deduced from the microscopic model simply by identifying the Fourier coefficients in each equation. The amplitudes U_j(x) are assumed to be constant over a period (see [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]). However, the derivative operators are calculated exactly, according to the rule

d/dx ( Σ_j a_j(x) e^{ijqx} ) = Σ_j ( da_j/dx + ijq a_j ) e^{ijqx},

where a(x) is a Fourier series with slowly varying coefficients as in Eq. (4.4). The other rules follow from the Parseval identity and from the assumption of slowly varying coefficients:

(ab)_j = Σ_l a_{j-l} b_l,

where b(x) is also a Fourier series with slowly varying coefficients. The membrane constitutive law (4.1-b) then leads to a corresponding macroscopic constitutive law. Hence, in the case of two real envelopes, U_0 = (u_0, v_0, n_0) ∈ R and U_1 = (u_1, v_1, n_1) ∈ R, the macroscopic constitutive law for the membrane stress n_0 becomes

n_0 = ES [ u_0′ + (1/2)(v_0′)² + (v_1′)² + q² v_1² ].   (B.6)

Remark that the two last terms are always positive and correspond to an increase of the tensile strain or to a decrease of the compressive strain. Therefore, this macroscopic law is able to account for the membrane stress decrease due to local wrinkling, particularly via the last term of Eq. (B.6). The procedure to deduce a finite number of envelope equations is straightforward in the case of a simple nonlinear system such as Eq.
(4.1) (see [START_REF] Damil | A generalized continuum approach to describe instability pattern formation by a multiple scale analysis[END_REF][START_REF] Damil | A generalized continuum approach to predict local buckling patterns of thin structures[END_REF][START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF]). Theoretically, the number of terms in the Fourier series can be very large (see Eq. (B.5)), but in practice it is convenient to limit the number of harmonics. For instance, in [START_REF] Damil | Influence of local wrinkling on membrane behaviour: A new approach by the technique of slowly variable Fourier coefficients[END_REF], a macroscopic model involving five harmonics has been presented, corresponding to the wave numbers 0, ±q, ±2q. The corresponding potential energy takes the form given in Eqs. (B.7)-(B.8). The model (B.7) includes five envelopes, but it may be unnecessarily intricate. In this context, further simplifications are introduced in the potential energy (B.7).

Appendix C  A macroscopic model with one real and one complex envelope

Additional simplifications can be introduced in the potential energy (B.7). First, we consider only three harmonics 0, ±q, with one real envelope U_0(x) = (u_0, v_0, n_0) and one complex envelope U_1(x) = (u_1, v_1, n_1), or equivalently three real envelopes U_0(x), U_1^R(x), U_1^I(x). The second restriction concerns the body axial force, which is assumed not to fluctuate on the basic cell: f(x) = f_0(x). This implies that the normal force does not fluctuate (n_1 = 0), which makes it possible to drop the unknown u_1(x). In this case, the potential energy depends on the mean field (u_0(x), v_0(x)) and on the envelope of the deflection v_1(x), as in Eq. (C.1). Since v_1 is a complex-valued function, the model (C.1) makes it possible to predict slow phase modulations. It is more accurate than the macroscopic model used in this thesis.
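The zero-order Fourier identification behind the macroscopic membrane law of Appendix B can be checked numerically. For v(x) = v_0(x) + 2v_1(x)cos(qx) with slowly varying envelopes, the period average of the microscopic term (1/2)v′² should match (1/2)(v_0′)² + (v_1′)² + q²v_1², the form given in the cited Damil/Potier-Ferry papers (taken here as the reference, with arbitrary envelope values):

```python
import numpy as np

# Check the zero-order Fourier identification: for
#   v(x) = v0(x) + 2*v1(x)*cos(q*x),  envelopes slowly varying,
# the average of (1/2)*v'^2 over one fast period should equal
#   (1/2)*v0'^2 + (v1')^2 + q^2*v1^2.
q, eps = 1.0, 0.02                     # fast wave number, slow modulation rate
v0p = 0.3                              # constant mean slope v0'
v1 = lambda s: 0.2 + 0.05*np.sin(eps*s)
v1p = lambda s: 0.05*eps*np.cos(eps*s)

x0, N = 40.0, 4096                     # window center, samples per period
s = x0 - np.pi/q + (2*np.pi/q)*np.arange(N)/N
v_prime = v0p + 2*v1p(s)*np.cos(q*s) - 2*q*v1(s)*np.sin(q*s)

lhs = np.mean(0.5*v_prime**2)                       # microscopic period average
rhs = 0.5*v0p**2 + v1p(x0)**2 + q**2*v1(x0)**2      # macroscopic formula
err = abs(lhs - rhs)
```

The residual is of the order of the slow-variation scale eps, which is precisely the "slowly varying coefficients" assumption: the identification is exact only in the limit of constant envelopes over a period.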
In [START_REF] Mhada | About macroscopic models of instability pattern formation[END_REF], it was proven that accounting for a complex v_1 improves the solution near the boundary.

In the same way, one can also obtain [R_r(Q_r)]^e. Thus, the second Piola-Kirchhoff stress tensor can be expressed as

S = 2 [ (∂W/∂I_1 + I_1 ∂W/∂I_2) I - (∂W/∂I_2) C + I_3 (∂W/∂I_3) C⁻¹ ].   (E.8)

Several forms of the strain energy function have been proposed in the literature to represent isotropic hyperelastic material behavior. The most popular one is the Ogden model for rubber-like materials. The compressible Ogden constitutive law [127] is described as a function of the eigenvalues λ_i (i = 1, 2, 3) of the right Cauchy-Green tensor C, in a form where N is a material constant and µ_i, α_i and β_i are temperature-dependent material parameters. The Mooney-Rivlin and neo-Hookean models can be obtained as particular cases of the Ogden model (see [START_REF] Holzapfel | Nonlinear solid mechanics: A continuum approach for engineering[END_REF]). Setting N = 2, α_1 = 2, α_2 = -2, the strain energy potential of Mooney-Rivlin for incompressible materials reads

W = c_1 (I_1 - 3) + c_2 (I_2 - 3),

where the material constants are c_1 = µ_1/2 and c_2 = -µ_2/2. The compressible form of the Mooney-Rivlin model reads

W = c_1 (I_1 - 3) + c_2 (I_2 - 3) + c (J - 1)² - d ln J,   (E.11)

where c and d are temperature-dependent material constants. By prescribing that the reference configuration is stress-free, one obtains d = 2(c_1 + 2c_2). One can then deduce the second Piola-Kirchhoff stress tensor from Eqs. (E.8) and (E.11):

S = 2c_1 I + 2c_2 (I_1 I - C) + [2cJ(J - 1) - d] C⁻¹.

The neo-Hookean model can be deduced from the strain energy of Mooney-Rivlin (E.11) by taking c_2 = 0. Another version of the neo-Hookean model, which extends the incompressible form to the compressible range, is

W = (µ_0/2)(I_1 - 3) - µ_0 ln J + (λ_0/2)(ln J)²,

where λ_0 and µ_0 are Lamé's material constants. The second Piola-Kirchhoff stress tensor then reads S = (λ_0 ln J - µ_0) C⁻¹ + µ_0 I.
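The last stress expression can be checked numerically by differentiating the compressible neo-Hookean energy W = (µ_0/2)(I_1 - 3) - µ_0 ln J + (λ_0/2)(ln J)², the standard energy consistent with this stress; the material constants and the deformation gradient below are arbitrary test values:

```python
import numpy as np

# Verify S = (lam0*lnJ - mu0)*C^{-1} + mu0*I against S = 2*dW/dC for the
# compressible neo-Hookean energy
#   W(C) = mu0/2*(tr C - 3) - mu0*lnJ + lam0/2*(lnJ)^2,  lnJ = (1/2)*ln(det C).
lam0, mu0 = 1.2, 0.8   # arbitrary Lamé constants

def W(C):
    lnJ = 0.5*np.log(np.linalg.det(C))
    return 0.5*mu0*(np.trace(C) - 3.0) - mu0*lnJ + 0.5*lam0*lnJ**2

def S_closed_form(C):
    lnJ = 0.5*np.log(np.linalg.det(C))
    return (lam0*lnJ - mu0)*np.linalg.inv(C) + mu0*np.eye(3)

def S_finite_diff(C, h=1e-6):
    # S_ij = 2*dW/dC_ij by central differences on the matrix entries
    S = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Cp, Cm = C.copy(), C.copy()
            Cp[i, j] += h
            Cm[i, j] -= h
            S[i, j] = (W(Cp) - W(Cm))/h
    return S

F = np.eye(3) + np.array([[0.10, 0.02, 0.00],
                          [0.01, -0.05, 0.03],
                          [0.00, 0.04, 0.08]])
C = F.T @ F   # right Cauchy-Green tensor
```

The two evaluations agree to finite-difference accuracy, and the closed-form stress is symmetric, as a second Piola-Kirchhoff stress must be.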
(E.14)

Appendix F  Automatic differentiation with the ANM

The implementation of recurrence formulae such as (2.47) is relatively simple for the Föppl-von Kármán nonlinear plate or the Navier-Stokes equations [START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF][START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF], but it can be tedious in a more general constitutive framework. For example, there are several forms of strain energy potentials describing the hyperelastic behavior of rubber-like materials: Ogden, Mooney-Rivlin, neo-Hookean, Gent, Arruda-Boyce, etc. Each constitutive law should be set in the adapted ANM form. This requires different adaptations or even regularization techniques, which can be cumbersome and not always straightforward. That is why tools based on Automatic Differentiation (AD) techniques [START_REF] Griewank | Evaluating derivatives: Principles and techniques of algorithmic differentiation[END_REF] have been proposed to compute high order derivatives of constitutive laws [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations. Part I: The Diamant toolbox[END_REF][START_REF] Charpentier | Automatic differentiation of the asymptotic numerical method: the Diamant approach[END_REF][111][START_REF] Lejeune | Automatic solver for non-linear partial differential equations with implicit local laws: Application to unilateral contact[END_REF], which allows one to introduce various potential energy functions of hyperelastic laws in a rather simple and automatic way. This work applies the Matlab AD toolbox presented in [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations.
Part I: The Diamant toolbox[END_REF] to compute the Taylor series involved in the ANM algorithm; see [START_REF] Cochelin | Méthode asymptotique numérique[END_REF][START_REF] Xu | Multiple bifurcations in wrinkling analysis of thin films on compliant substrates[END_REF][START_REF] Xu | 3D finite element modeling for instabilities in thin films on soft substrates[END_REF] and the references cited therein. AD was first introduced in the ANM algorithm by Charpentier and Potier-Ferry [START_REF] Charpentier | Automatic differentiation of the asymptotic numerical method: the Diamant approach[END_REF] to make the algorithm more generic and easier to implement. The theoretical developments, based on the generic Faà di Bruno formula for the higher order differentiation of compound functions, have been presented in [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations. Part I: The Diamant toolbox[END_REF]. The main result reported in [START_REF] Koutsawa | A generic approach for the solution of nonlinear residual equations. Part I: The Diamant toolbox[END_REF] is that, from a proper initialization of the p-th Taylor coefficients of the independent unknown variables (U_p, λ_p), the series computation can be organized as follows:

Series computation
  Construction of L_t^0(U_0) using AD on R
  Decomposition of L_t^0
  Order 1: initialization of the intermediate variables
  ...
End of the series computation

Abstract

Surface wrinkles of stiff thin layers attached on soft materials have been widely observed in nature, and these phenomena have raised considerable interest over the last decade. The post-buckling evolution of surface morphological instability often involves strong effects of geometrical nonlinearity, large rotation, large displacement, large deformation, loading path dependence and multiple symmetry-breakings.
Due to its notorious difficulty, most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact analytical solutions can be obtained. This thesis proposes a whole framework to study the film/substrate buckling problem in a numerical way: from 2D to 3D modeling, from a classical to a multi-scale perspective. The main aim is to apply advanced numerical methods for multiple-bifurcation analyses to various film/substrate models, focusing especially on post-buckling evolution and surface mode transition. The models incorporate the Asymptotic Numerical Method (ANM) as a robust path-following technique, together with bifurcation indicators well adapted to the ANM, to detect a sequence of multiple bifurcations and the associated instability modes along the post-buckling evolution path. The ANM gives interactive access to semi-analytical equilibrium branches, which offers a considerable advantage in reliability compared with classical iterative algorithms. Besides, an original nonlocal coupling strategy is developed to bridge classical models and multi-scale models concurrently, so that the strengths of each model are fully exploited while their shortcomings are accordingly overcome. The transition between different scales is discussed in a general way, which can also serve as a guide for coupling techniques involving other reduced-order models. Lastly, a general macroscopic modeling framework is developed and two specific Fourier-related models are derived from well-established classical models, which can predict pattern formation with far fewer elements and thus significantly reduce the computational cost.

Keywords: Wrinkling; Post-buckling; Bifurcation; Thin film; Multi-scale; Path-following technique; Bridging technique; Arlequin method; Finite element method.
01751772
2015
https://hal.univ-lorraine.fr/tel-01751772/file/DDOC_T_2015_0086_ARLAZAROV.pdf
Reduction of fuel consumption and CO2 emissions is one of the key concerns of worldwide car makers today. The use of high-strength, high-formability steels is one of the potential solutions to lighten a car. The so-called "Medium Mn" steels, containing from 4 to 12 wt.% Mn, are good candidates for such applications. They exhibit an ultra-fine microstructure with a significant amount of retained austenite. This retained austenite transforms during mechanical loading (TRIP effect), which provides a very attractive combination of strength and ductility. Such an ultra-fine microstructure can be obtained during the intercritical annealing of a fully martensitic Medium Mn steel. In that case, the formation of austenite happens through the so-called "Austenite Reverted Transformation" (ART) mechanism. Consequently, the understanding and modeling of the phenomena taking place during ART-annealing is of prime interest. In this PhD work, the evolution of both the microstructure and the tensile properties was studied as a function of holding time in the intercritical domain. First, an "alloy design" procedure was performed to select the chemical composition of the steel and the thermal treatment adapted to this grade. It was based on both computational and experimental approaches. Then, the cold-rolled 0.098C-4.7Mn (wt.%) steel thus produced was subjected to ART-annealing at 650°C with various holding times (from 3 min to 30 h). Two types of characterization were applied to the treated samples: analysis of the microstructure evolution and evaluation of the mechanical behavior. The microstructure evolution was studied using a double experimental and modeling approach. The final microstructure contains phases of different natures (ferrite (annealed martensite), retained austenite and fresh martensite) and of different morphologies (lath-like and polygonal). Particular attention was paid to the kinetics of austenite formation in connection with cementite dissolution, and to the morphology of the phases.
A mechanism was proposed to describe the formation of such a microstructure. Furthermore, the importance of taking the size distribution into account in the overall transformation kinetics was evidenced through the comparison between experimental and simulated austenite growth. The critical factors controlling the thermal stability of austenite, including both chemical and size effects, were determined and discussed, based on the analysis of the time-evolution of retained austenite. Finally, an adapted formulation of the Ms temperature law, applicable to medium Mn steels with an ultra-fine microstructure, was proposed. The tensile properties of the steel were measured as a function of holding time, and the relation between microstructure and mechanical behavior was analyzed. An advanced analysis of the individual behavior of the three major constituents (ferrite (annealed martensite), retained austenite and fresh martensite) was performed. An important influence of the Mn content on the strength and strain hardening of fresh martensite was revealed. Therefore, a specific model was proposed to describe the true-stress versus true-strain curves of fresh martensite. On the other hand, the mechanical behavior of retained austenite and ferrite was described with adapted approaches already existing in the literature. Finally, a complete model for predicting the true-stress versus true-strain curves of medium Mn steels was proposed, based on the ISO-W mixture model.

Acknowledgments

Secondly, I am very grateful to my family. Of course to my parents, especially my mother, who put all their energy, time and love into raising me, giving me a good education and making me think in all circumstances. I wish to express my love to my own family, my wife and daughter, and many thanks for their patience and comprehension, but also for their love. I would like to express my gratitude to my supervisors, Alain HAZOTTE, Mohamed GOUNÉ and Olivier BOUAZIZ. They helped me and gave me much advice throughout this work.
All our numerous discussions contributed to the development of this work and allowed a good understanding of the phenomena at play. In particular, I would like to thank them for their useful comments and corrections during the writing of the manuscript. I am extremely thankful to Patrick BARGES for his help with the TEM characterizations, to Gerard PETITGAND for the EPMA analysis and to Frederic KEGEL for his support with various experiments. I also gratefully acknowledge the NanoSIMS characterization performed by Nathalie Valle and her detailed explanations. I would like to express my appreciation to Didier HUIN, Jean-Philippe MASSE, David BARBIER, Jean-Christophe HELL and Sebastien ALLAIN for fruitful discussions on different topics. I would like to thank Sabine Fogel from the Documentation department for her valuable assistance in the literature research. I would like to express my gratitude to all the staff of the Automotive Products centre of the ArcelorMittal Maizières Research and Development campus and, in particular, to the engineers and technicians of the Metallurgy Prospects and Manufacturing (MPM) team. Several technicians helped me a lot with various experimental techniques. Thanks also to my engineer colleagues, who took on an additional workload because of my part-time presence. I wish to thank my managers, Thierry IUNG and Michel BABBIT, for accepting this PhD work and for their support during these years. I would also like to acknowledge Thierry's help in the final review of the manuscript. Of course, many thanks to the ArcelorMittal company for the financial support of this work. I am very thankful to all the members of the jury, Alexis DESCHAMPS, Philippe MAUGIS, Pascal JACQUES, Astrid PERLADE and Gregory LUDKOVSKY, for the time they spent evaluating this work. I am also grateful to them for their interesting questions and pertinent remarks. Finally, this work is dedicated to my little daughter Anna. She helped me a lot with the typing of the manuscript.
Окружающий нас мир столь многогранен и сложен, что познавая его, мы всё больше и больше осознаём, что процесс познания бесконечен, как и сам мир.

Le monde autour de nous est tellement varié et complexe qu'en l'étudiant, le processus de découverte et d'apprentissage nous apparaît de plus en plus sans fin, comme le monde lui-même.

The world around us is so multifaceted and complex that when perceiving it, we realize more and more that the processes of discovery and learning are infinite, as the world itself.

Sergey OLADYSHKIN

Résumé

La perspective d'une hausse durable du prix de l'énergie fossile et les exigences réglementaires accrues vis-à-vis des émissions de CO2 nécessitent de développer des véhicules plus légers. L'utilisation d'aciers à Très Haute Résistance (THR) est une voie possible pour répondre à ces exigences, car ceux-ci permettent une réduction significative d'épaisseur sans affecter la rigidité des pièces. Le développement d'un acier qui combine à la fois une très haute résistance et une bonne formabilité constitue à l'heure actuelle un thème central chez les sidérurgistes. Une des solutions envisagées est de développer une nouvelle nuance d'aciers THR dits « Medium Manganèse », dont la teneur en Mn est située entre 4 et 12 %. Les premiers résultats publiés montrent un intérêt évident au développement de telles nuances. En effet, pour des teneurs en carbone relativement faibles, il est possible de stabiliser une fraction élevée d'austénite résiduelle à température ambiante grâce à la taille ultra-fine de la microstructure et à l'enrichissement en Mn. Cette austénite résiduelle se transforme en martensite sous la charge mécanique (effet TRIP), ce qui procure une combinaison très attractive entre la résistance et la ductilité. Une des voies pour obtenir ce type de microstructure est d'effectuer un recuit inter-critique d'un acier complètement martensitique (issu d'une austénitisation suivie d'une trempe).
Lors d'un tel recuit, la formation de l'austénite obéit à un mécanisme spécifique qui porte le nom d'ART -Austenite Reverted Transformation (transformation inverse de l'austénite). La compréhension et la modélisation des phénomènes en jeu pendant le recuit ART ont un grand intérêt scientifique. Ainsi, l'objectif de ce travail de thèse était d'étudier et de modéliser les évolutions microstructurales en lien avec les propriétés mécaniques lors d'un recuit ART. Dans un premier temps, une étude de type « alloy design » a été menée pour déterminer la composition chimique de l'acier et le traitement thermique adapté à cette nuance. Une double approche, numérique et expérimentale, a été utilisée. Ensuite, des recuits inter-critiques (à 650°C) avec des temps de maintien variables (entre 3min et 30h) ont été réalisés sur l'acier laminé à froid et contenant 0.098%C et 4.7%Mn (mas.). Après chaque traitement, deux types de caractérisations ont été menés : analyse microstructurale et évaluation des propriétés mécaniques. L'évolution de la microstructure lors du recuit a été étudiée en se basant sur deux types d'approches : expérimentale à l'échelle des phases (MEB, MET,..) et en utilisant la modélisation thermodynamique. Il a été déterminé que la microstructure finale se compose de phases de nature (ferrite, austénite résiduelle et martensite de trempe) et morphologie (en forme d'aiguille et polygonale) différentes. Une attention particulière a été accordée aux cinétiques de dissolution des carbures et de formation de l'austénite. Une vision complète de ces processus a été construite. De plus, un effet important de la taille de grains ultra fine sur les cinétiques globales a été démontré en comparant les résultats des calculs numériques avec les données expérimentales. En outre, le mécanisme de stabilisation de l'austénite résiduelle à la température ambiante a été étudié et discuté. 
Les deux contributions à la stabilité de l'austénite résiduelle, composition chimique et taille des grains, ont été analysées sur la base de l'évolution temporelle de l'austénite résiduelle. Enfin, une formulation adaptée pour le calcul de la température Ms des aciers « Medium Manganèse » avec une microstructure ultra-fine a été proposée. Des essais de traction ont été réalisés afin d'évaluer le comportement mécanique de l'acier après différents recuits ART. Les liens entre la réponse mécanique de l'acier et sa microstructure ont été établis. Une analyse plus détaillée du comportement de chaque constituant de la microstructure (ferrite, austénite résiduelle et martensite de trempe) a été effectuée. Elle a révélé que le Mn a un effet très important sur la résistance et sur l'écrouissage de la martensite de trempe. Cette observation a conduit à proposer un modèle spécifique pour décrire la courbe contrainte vraie - déformation vraie de la martensite de trempe. En revanche, les comportements de la ferrite et de l'austénite résiduelle ont été modélisés en s'appuyant sur des approches existant déjà dans la littérature. A l'issue de cette thèse, un modèle complet est disponible pour calculer les courbes contrainte vraie - déformation vraie d'un acier « Medium Mn » après un recuit ART. Il s'appuie sur l'approche ISO-W pour décrire ce matériau multiphasé évolutif.

INTRODUCTION

Background of this study

For more than 20 years, worldwide car makers have been making steady progress in the field of material selection in order to decrease the weight of vehicles. The goal of car weight reduction is to minimize fuel consumption and CO2 emissions while, at the same time, preserving or even improving passenger safety. Vehicle lightening, especially for passenger cars, is also stipulated by European Union (EU) legislation: mandatory emission reduction targets for new cars were approved in 2009 [EC].
These days, the average CO2 emissions of the car fleet in the EU are closely monitored and reported. The evolution of CO2 emissions and the targets for 2015 and 2020 are presented in Figure 1. The 2015 and 2020 targets are 130 grams of CO2 per kilometre (g/km) and 95 g/km, respectively. Compared with the 2007 fleet average of 158.7 g/km, these targets represent reductions of 18% and 40%, respectively [MON '12]. To achieve these quite ambitious targets, car makers are constrained to use the most innovative solutions in terms of materials and design. The car body and many other parts of a vehicle are made of different steels. Hence, there are two strategies for weight reduction: a) use continuously improved steel solutions; b) use alternative materials: aluminum, magnesium, plastics and others. Until now, steel has remained a very attractive and functional material, as it combines a wide range of mechanical properties with a low production cost. That is why research and development in the field of steels remains important and has its place in the near future. For many years, steel researchers worked on the conventional Advanced High Strength Steel (AHSS) grades: Dual Phase (DP), TRansformation Induced Plasticity (TRIP), Complex Phase (CP) and Martensitic (M) steels. These steels are classified as the 1st Generation AHSS. Then, the TWinning Induced Plasticity (TWIP) steels were discovered and investigated as the 2nd Generation AHSS. Production of these steels is hampered by their high level of alloying elements, which entails an important cost increase and many problems along the process route. Finally, these days researchers are trying to develop the 3rd Generation AHSS. The objective of this development is to propose steels with an intermediate strength-ductility balance (better than the 1st Generation AHSS, but probably lower than the 2nd Generation AHSS) and with reasonable costs and production issues. The position of this target in terms of the tensile strength/total elongation balance, as well as the already developed 1st and 2nd Generations of AHSS, are illustrated in Figure 2.
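The emission-reduction percentages quoted above follow directly from the fleet averages:

```python
# Reduction of the 2015 and 2020 CO2 targets relative to the 2007 EU fleet
# average of 158.7 g/km quoted in the text.
baseline = 158.7
targets = {2015: 130.0, 2020: 95.0}
reductions = {y: 100.0 * (baseline - t) / baseline for y, t in targets.items()}
# about 18% for 2015 and about 40% for 2020, as stated in the text
```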
Positions of this target in terms of tensile strength/total elongation balance as well as the already developed 1 st and 2 nd Generations of AHSS are illustrated in Figure 2. '06]. There are different metallurgical concepts that can offer a steel product with the satisfactory strength-ductility balance: carbide free bainite (CFB), annealed martensite matrix (AMM), quenching and partitioning (Q&P) and, finally, so-called "Medium Mn" steel (MMS). This research is focused on the last concept. Medium Mn TRIP steel is a promising solution to get high strength steels with good formability. Recently, a lot of studies on MMS concept were done and published. The results of some of these studies [MIL '72], [FUR'89], [HUA'94], [FUR'94], [MER'07], [SUH'09], [SHI '10], [COO '10], [GIB '11], [JUN'11], [CAO '11], [ARL'11] are shown in Figure 3. This graph presents the Ultimate Tensile Stress (UTS) as a function of Total Elongation (TE) and the targeted domain of the 3 rd Generation AHSS. It can be seen that MMS has a great potential since some combinations already satisfy the target of third generation high strength steels. Aim of this study The mechanical response of MMS is very attractive. However the studies of these steels started not so long time ago. Hence, there are a certain amount of unanswered questions about microstructure formation, mechanical behavior and the link between these two properties. Therefore, it was proposed to investigate the following topics: 1. Understand the mechanisms of microstructure formation during intercritical annealing of MMS: -C and Mn distribution between phases; -Mechanisms of austenite formation and stabilization ; 2. Explain the relation between microstructure and mechanical properties obtained after thermal treatments: -Influence of Mn and ultra-fine size on work hardening; -Austenite strain induced transformation -TRIP effect; 3. 
Develop a model for the prediction of the mechanical behavior of the steel, considered as a multi-phase evolutive material.
The more global purpose of this research is to acquire all the necessary knowledge of this concept in order to build the tools (models) for the further optimization of steel compositions and thermal treatments. This objective is an essential step for the future product development of this type of steels.

Context of this study

This PhD work was done in a specific context. Before starting the PhD, the author of this work had been working for 4 years, and continued to work during the PhD, in the ArcelorMittal Maizieres Research centre, in the Automotive department and more precisely in the Metallurgy Prospects and Manufacturing (MPM) team. Therefore, this PhD work was accomplished in particular conditions, with roughly the following time partitioning: 50% of the author's time was allocated to his engineering work in the ArcelorMittal Maizieres Research centre and the other 50% was dedicated to the PhD study. This work was done within the collaboration between the ArcelorMittal Maizieres Research centre and the University of Lorraine, in particular the LEM3 laboratory. It was supervised by three persons:
Director - Alain HAZOTTE (professor in LEM3 laboratory, University of Lorraine);
Co-director - Mohamed GOUNÉ (former research engineer in ArcelorMittal Maizieres Research centre, now professor in ICMCB laboratory, University of Bordeaux);
Co-supervisor - Olivier BOUAZIZ (former research engineer in ArcelorMittal Maizieres Research centre, now professor in LEM3 laboratory, University of Lorraine).
Finally, it should be highlighted that the major part of the experimental work was done in the ArcelorMittal Maizieres Research centre.

Presentation of the manuscript

The manuscript is divided into the following four parts. Chapter 1 is devoted to the literature review. The analysis of the literature relative to microstructure formation and mechanical behavior of MMS steels is presented.
In the part about the microstructure, the following major topics are treated: recrystallization, standard and reverse austenite formation and, at last, austenite thermal stability. Concerning the mechanical behavior part, the following subjects linked with this study are reviewed: global analysis of the mechanical properties of MMS in relation with the available microstructure characterizations (retained austenite fraction), mechanical behavior of ultra-fine ferrite, fresh martensite and retained austenite (including the TRIP effect) and, finally, modeling of the mechanical behavior of multiphase steels. Chapter 2 presents the different experimental and numerical techniques used in this work. All the tools used for the elaboration of the steel, for microstructure observation and analysis at different scales, for thermodynamic calculations and for final properties measurements are described. In the second part of this chapter, the selection of the chemical composition of the steel and of the subsequent thermal treatment is given. To define the temperature of the thermal treatment, a combinatory approach is used. It is based on both thermodynamic simulations and a particular experimental heat treatment. In Chapter 3, the observations and analysis of the microstructures obtained after different thermal treatments are presented and described. Morphology, size and chemical composition of the phases are analyzed. Microstructure evolution with time and the different steps of austenite formation are discussed. The results of thermodynamic simulations and their comparison with experimental values are also considered in this chapter. According to these simulations, the particularities of austenite transformation in MMS are discussed. This microstructure analysis makes it possible to describe the microstructure evolution and in particular to explain unambiguously the stability of austenite at room temperature. The effect of austenite size on its stability is introduced in a particular manner, through the variation of the M s temperature.
At last, the mechanisms of the austenite size influence are discussed. Chapter 4 is dedicated to the mechanical properties of MMS. The tensile behavior of intercritically annealed medium Mn steels with different phases (two-phase and three-phase microstructures) is presented and analyzed. The tensile behavior of each phase constituent (as-quenched martensite, ferrite with medium Mn content and austenite with medium C and Mn content) is described. The influence of Mn on the work hardening of the different phases and on the mechanical stability of austenite is discussed. Finally, a global mechanical model capable of predicting the whole tensile curve of the intercritically annealed medium Mn steels with different phases is proposed. This model is based on the analysis of the mechanical behavior of each constituent and on the detailed experimental description of the microstructures. The performance of the model is also discussed.

CHAPTER 1: LITERATURE REVIEW

General Description

More than 200 years ago (in 1774), manganese was first isolated in the metallic state, and from the end of the nineteenth century manganese has been widely applied as an alloying element for steel elaboration. One of the most remarkable and important discoveries in the field of steels was made by Sir Robert Abbott Hadfield. He searched for a hard and at the same time tough steel and developed a steel containing around 12% Mn and having an outstanding combination of hardness and toughness. His invention was called in his honor Hadfield steel and since that time has been produced in significant quantities [DES'76], [MAN'56]. Gradually, manganese became a common alloying element for steel production. Its influence on different properties (hardenability, grain size, phase transformations, mechanical behavior, etc.) was widely studied. However, the manganese content was often limited to ~2% because of its important effect on hardenability [DES'76].
In the late 60's and early 70's of the twentieth century, Medium Mn steels (up to 6% Mn) were first proposed by Grange and Hribal [GRA'68] as an air-hardenable material. A fully martensitic structure was obtained for a 0.1 wt.% C - 6 wt.% Mn steel with a cooling rate of 1.7°C/min. But this structure had poor toughness properties. Thus, a tempering treatment was applied and resulted in a good balance between strength and ductility. Miller and Grange [MIL'70], [MIL'72] continued to work on this material and found that this good mechanical behavior can be explained by the rather high fraction of austenite retained after tempering and by the ultra-fine size of the microstructure features. In the beginning of the 90's, Furukawa et al. [FUR'89], [HUA'94], [FUR'94] also found a good balance of strength and ductility after annealing martensitic hot rolled steels with 5 wt.% Mn in their intercritical domain. Once again, it was shown that the ultra-fine microstructure and retained austenite play an important role in these steels. These studies also claimed that there are an optimum temperature and time to get the maximum of retained austenite. Nowadays, due to the increasing demand for the development of "3rd generation high strength steels", the interest in MMS has reached its maximum and the number of research and development studies is growing exponentially. The final properties of a steel product depend on its chemical composition and final structure: grain size, phase fractions, precipitates and dislocation content. Furthermore, the final microstructure is in most cases directly linked to all the steps of elaboration (thermomechanical treatments). Generally, one of the most important steps that conditions the final properties is annealing. The heat treatment necessary for the production of MMS is a conventional annealing: heating to the soaking temperature, holding and final cooling. Thus, most of the phenomena happening during annealing are already well described.
However, there are some particularities in these steels. Due to the high manganese content, which increases drastically the steel hardenability, in the majority of cases the initial (before annealing) microstructure will be fully martensitic or a mixture of bainite and martensite. Also, thanks to the high manganese content, which decreases grain boundary mobility, a very fine size of the microstructural features is expected. These two particularities will impact the phase transformations during annealing and will result in different mechanical properties. The information available in the literature and necessary for the understanding of phase transformations, microstructure formation and mechanical properties of MMS will be given in this chapter.

Mechanisms of phase transformations

Generally, in the simple Fe-C-Mn system, the following phenomena can happen during heating: recrystallization, carbide dissolution and austenite nucleation and growth. The microstructure at the end of heating depends strongly on the initial state of the material: initial microstructure and/or deformation level. In the case of an important deformation level, without any dependence on the initial microstructure, the steel will progressively evolve through recrystallization, carbide dissolution and austenite nucleation and growth. When the initial structure is not deformed (case of double annealing, for example), the recrystallization step after deformation is suppressed, and carbide dissolution and austenite nucleation and growth will depend on the initial microstructure of the steel. The case of an intermediate deformation level combines all the difficulties: recrystallization and the dependence of carbide dissolution and of austenite nucleation and growth on the initial microstructure of the steel. In the work of Arlazarov et al. [ARL'12] it was shown that after intercritical annealing of MMS a complex ultra-fine mixture of ferrite, retained austenite and/or martensite can be obtained.
Examples of such microstructures are presented in Figure 4. The complexity in the phase constituents and their morphology results from the interference of different phenomena happening during annealing. The increase of the Mn content is known to decrease the temperature range of the intercritical domain. Thus, to obtain a certain fraction of austenite, the annealing temperature should be lower. On the other hand, at lower annealing temperature, recrystallization and cementite dissolution have more sluggish kinetics. Therefore, the increase of Mn creates an overlap between recrystallization, cementite dissolution and austenite transformation. In addition, the development and interaction of these phenomena depend on the initial (before annealing) microstructure. At last, austenite stabilization or its transformation to martensite during the final cooling step will be significantly influenced by the processes occurring during heating and holding. Taking into account the complexity of microstructure formation in the annealed MMS, it is necessary to introduce the descriptions of each phenomenon. Therefore, the following topics will be presented further in this subchapter: recrystallization, austenite formation during intercritical annealing and austenite stabilization during annealing.

Recrystallization

Deformation of steel, like rolling, forging or other, introduces in the material a certain number of defects (in particular dislocations), which will depend on the strain level. Deformed metal has a certain amount of stored energy; hence, during reheating and soaking, it tends to suppress the defects and to recover its structure and properties. Depending on the heating rate, reheating temperature and quantity of stored energy, the following processes can happen: rearrangement and annihilation of dislocations, formation of new dislocation-free grains and their growth.
Hence, the recrystallization process can be divided into 3 steps:
o recovery - rearrangement and annihilation of dislocations, formation of a sub-structure;
o recrystallization - nucleation of new dislocation-free grains;
o grain growth - the larger grains grow consuming the smaller ones in order to reach the lower energy configuration of grain boundaries [HUM'04].
A schematic representation of the different states of recrystallization is presented in Figure 5. As it was mentioned previously, in the case of a not-deformed initial microstructure there is no recrystallization in the majority of the cases. However, when the initial structure is fully martensitic, the dislocation density is high enough to provoke martensite recrystallization (nucleation of new ferrite grains). Some studies about martensite recovery and recrystallization were already done in the past. First of all, in the late 60's and the beginning of the 70's, G. R. Speich [SPE'69], [SPE'72] investigated the effect of tempering in low carbon martensite. Early stages of carbon segregation and carbide precipitation were observed, and the temperature ranges of martensite recovery and recrystallization were established. Recovered martensite was characterized by the lath structure with a certain dislocation density, issued from the as-quenched lath martensite, while after recrystallization strain-free equiaxed grains were observed. Then, in the beginning of the 70's, R. Caron and G. Krauss [CAR'72] made an exhaustive study of the microstructure evolution during tempering of a 0.2 wt.% C steel. They found that coarsening of the fine lath martensite structure with the elongated packet-lath morphology is the major microstructure transformation, and only at late stages of tempering an equiaxed structure gradually appears. Figure 6 shows their microstructure observations with optical microscope (OM) and electron micrograph extraction replica of the sample tempered at 700°C for 12h.
Figure 6 - Extraction replica TEM electron micrographs of lath martensite tempered at 500°C for 5h (at left) and at 700°C for 12h (at right) [CAR'72].
In spite of the equiaxed form of the grains, the increase in high angle boundaries expected from the formation of strain-free grains was not observed. Hence, R. Caron and G. Krauss proposed the following scenario of as-quenched martensite tempering:
• first, recovery takes place and significantly decreases the low angle boundaries content. At this stage the morphology of the tempered martensite is lath-like, coming from the initial lath morphology of the as-quenched martensite. At the same time, carbon segregation and carbide precipitation occur at very early stages of tempering;
• then, recovery continues by polygonization or low angle boundary formation, because recrystallization is delayed due to the pinning of grain boundaries by carbide particles. A process of carbide spheroidization, or Ostwald ripening, can occur as well;
• finally, the aligned coarsened lath morphology, left from the earlier stages, transforms into equiaxed ferritic grains through the grain growth mechanism [CAR'72].
These results were supported by the work of S. Takaki et al. [TAK'92] on a commercial 0.2%C steel. No proofs of the as-quenched martensite recrystallization were found. However, recrystallization of the deformed lath martensite was observed, and the influence of the deformation level on the recovery and recrystallization was studied. More recent research works of T. Tsuchiyama et al. [TSU'01], [NAT'05] and [TSU'10] showed that recrystallization of ultra-low carbon steel at high tempering temperatures and long times is possible through a specific bulge nucleation and growth (BNG) mechanism (Figure 7). Bulging of packet boundaries and prior austenite grain boundaries results in recrystallized grain nuclei that grow afterwards by consuming the martensitic structure with high dislocation density.
ArcelorMittal internal experience also shows the possibility of as-quenched martensite recrystallization in a 0.4C-0.7Mn steel [TAU'13]. Martensite, obtained after austenitization at 900°C for 5min followed by water quench, was then tempered at 690°C for 60h and resulted in a microstructure consisting of strain-free equiaxed ferrite grains and cementite (Figure 8).

Austenite formation during intercritical annealing

Austenite formation in steels and alloys has been broadly studied for more than 100 years. One of the first works on austenite was done by Arnold and McWilliams in 1905 [ARN'05]. They found that austenite forms during heating through nucleation and growth mechanisms. The same conclusion was drawn by Roberts and Mehl [ROB'43], but they also carried out an exhaustive analysis of the austenite formation kinetics starting from the ferrite-cementite microstructure. The development of dual-phase and TRIP steel grades from the early 80's pushed the researchers to perform numerous works on austenite formation. A detailed study of austenite formation in steels with various carbon contents and 1.5 wt.% Mn was done by Garcia and DeArdo [GAR'81]. Different initial microstructures were used: spheroidized cementite in a recrystallized ferrite matrix; spheroidized cementite in a cold rolled ferrite matrix; and ferrite plus pearlite. It was found that austenite nucleates at cementite particles located on the ferrite grain boundaries in the case of spheroidized cementite, and on either pearlite colony boundaries or boundaries separating colonies and ferrite grains in the case of pearlite. The kinetics of austenite formation at 725°C was also studied. It appears that austenite forms slightly more rapidly from cold rolled ferrite than from recrystallized ferrite or ferrite-pearlite structures (Figure 9). However, the final amount of formed austenite is very similar for the different cases.
Figure 9 - Comparison of austenite formation kinetics at 725°C from various starting microstructures [GAR'81].
Another interesting thing observed by Garcia and DeArdo [GAR'81] was the effect of Mn segregation (banding) on the pearlite microstructure: pearlite in the zones with higher Mn content had a finer interlamellar spacing in comparison with the pearlite outside of these Mn bands. A similar but very important study was done by Speich et al. [SPE'81]. A series of 1.5 pct manganese steels containing different carbon amounts and with a ferrite-pearlite starting microstructure was investigated in the range of 740 to 900°C. According to the obtained experimental results, the kinetics of austenite formation was separated into three steps:
1) prompt nucleation of austenite at the ferrite-pearlite interface and very rapid growth of austenite into pearlite until the full dissolution of cementite (Figure 10(1));
2) after dissolution of cementite, further growth of austenite into ferrite occurs at a slower rate, controlled by carbon diffusion in austenite at high temperatures (850 to 900°C) and by manganese diffusion in ferrite at low intercritical temperatures (740 to 780°C) (Figure 10(2a) and (2b));
3) very sluggish final equilibration of the manganese contents of the austenite and ferrite phases, controlled by manganese diffusion in austenite (Figure 10(3)).
With the help of diffusion models and experimental results, an austenite formation diagram, which graphically illustrates the amount of produced austenite and the processes controlling the kinetics at different times and temperatures, was constructed for one of the steels (Figure 11).
Figure 11 - Diagram for austenite formation and growth in 0.12C-1.5Mn (wt.%) steel [SPE'81].
On the other hand, Pussegoda et al. [PUS'84] studied a 0.06C, 2.83Mn, 0.33Si (wt.%) steel and demonstrated significant partitioning of Mn between ferrite and austenite during intercritical annealing.
It was shown that Mn can diffuse to the center of austenite grains within a reasonable time at 695°C, of the order of hours rather than centuries (Figure 12). This result was partly attributed to the obtained very fine-grained microstructure, with a ferrite grain size of ~5 µm and an austenite grain size of ~1 µm. The diffusion rate of Mn in austenite at 695°C was also estimated at ~10⁻¹⁴ cm²/s, which is considerably higher than that obtained by extrapolation of diffusion measurements from higher temperatures. The effect of cold deformation on the austenite formation kinetics in a 0.11C-1.58Mn-0.4Si (wt.%) steel was studied by El-Sesy et al. [ELS'90]. Figure 13 shows the austenite fraction evolution with time at 735°C for the samples with different initial states: hot-rolled, 25% and 50% cold deformed states. It appears that the kinetics of austenite formation (at least until the complete cementite dissolution) increases with the enhancement of cold rolling from 0% to 50%. Two possible reasons for the cold-deformation influence were proposed:
• A higher deformation level decreases the size of recrystallized ferrite grains, hence increasing the number of austenite nucleation sites at the intersections of ferrite grains with pearlite. A higher density of ferrite grain boundaries also promotes easier diffusion of substitutional alloying elements (for example, Mn) during the second step of austenite growth;
• An increase of cold rolling accelerates austenite formation due to the higher driving force or lower activation energy [ELS'90].
Figure 12 - Positions of the a/a' boundaries are indicated [PUS'84].
Figure 13 - Effect of cold deformation on the austenite formation kinetics in 0.11C-1.58Mn-0.4Si steel at 735°C. FGS is the ferrite grain size and tp is the time of complete pearlite dissolution [ELS'90].
Finally, the manganese influence on austenite formation was also investigated in the past [BAI'69], [WEL'48].
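The "hours rather than centuries" argument can be illustrated with a simple order-of-magnitude estimate based on the characteristic diffusion relation t ≈ x²/D. The measured diffusivity is the value quoted above from [PUS'84]; the extrapolated diffusivity of 10⁻¹⁸ cm²/s is an assumed value, chosen here only to illustrate the contrast:

```python
# Characteristic time for Mn to diffuse over half of a ~1 um austenite grain,
# using t ~ x^2 / D.
def diffusion_time_s(distance_cm, diffusivity_cm2_s):
    """Characteristic diffusion time (s) for a given distance and diffusivity."""
    return distance_cm ** 2 / diffusivity_cm2_s

half_grain_cm = 0.5e-4    # half of a ~1 um grain, expressed in cm
D_measured = 1e-14        # cm^2/s, estimate of [PUS'84] at 695 C
D_extrapolated = 1e-18    # cm^2/s, illustrative assumed extrapolated value

hours = diffusion_time_s(half_grain_cm, D_measured) / 3600.0
years = diffusion_time_s(half_grain_cm, D_extrapolated) / (3600.0 * 24 * 365)
print(f"D = 1e-14 cm2/s -> t ~ {hours:.0f} h")
print(f"D = 1e-18 cm2/s -> t ~ {years:.0f} years")
```

With the measured diffusivity the estimate gives tens of hours, while the assumed extrapolated value pushes it towards a century, in line with the statement above.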
The Fe-C equilibrium diagram and the effect of manganese on the form and position of the austenitic region are shown in Figure 14. It can be noted that Mn extends the austenitic domain to lower temperatures and diminishes the intercritical domain (austenite + ferrite). At the same time, Mn lowers the eutectoid point "E" in both temperature and carbon content. Manganese also enhances the stability of austenite and, hence, enlarges the region of metastable austenite. The delay of austenite decomposition into ferrite plus pearlite at 525°C and into bainite at 310°C is clearly illustrated on the isothermal transformation diagrams in Figure 15 [BAI'69]. All these Mn effects are quite important for the appropriate choice of the annealing temperature. The pioneering study in this field was done by Nehrenberg in the late 40's on different C-Mn commercial steels [NEH'50]. He investigated the dependence of austenite formation on the initial microstructure: pearlite, spheroidite (spheroidal carbides and ferrite), martensite, tempered martensite and bainite. It was found that from pearlite and spheroidite austenite grows more or less freely, and an equiaxed ferrite-austenite structure is obtained. On the other hand, martensite, tempered martensite or bainite result in an acicular shape of the final austenite and ferrite, and a lamellar microstructure consisting of these alternate acicular phases is obtained. Plichta and Aaronson [PLI'74] looked at the effectiveness of different alloying elements for the production of the acicular morphology. It was found that Mn, Ni and Cu stimulate the formation of a fine, complex, acicular network of boundaries in which the influence of the initial martensite structure is clearly reflected. The proposed explanation was that the rates of austenite nucleation are sufficiently high to restrict significantly the migration of martensite needle boundaries. Interesting discoveries were also made by the authors of [MAT' when they studied Fe-C-Ni steels.
The final microstructure after the reverse transformation consisted of two types of austenite grains: globular and acicular. Then, they examined the effect of heating rate on the formed microstructures and found that the number of globular grains decreases with the decrease of the heating rate. Figure 16 shows two examples of microstructure and the evolution of the globular austenite grains quantity as a function of heating rate. It was also observed that cementite is more enriched in Mn at lower heating rates. Hence, it was supposed that the decrease in the number of globular austenite grains is due to the growth inhibited by the Mn segregation. A similar type of observations was also made by Kinoshita and Ueda [KIN'74] for a Fe-0.21C-1Cr (wt.%) steel and by Law and Edmonds [LAW'80] for a Fe-0.2C-1V (wt.%) steel. Another researcher who contributed a lot to the development of ART annealing was Gareth Thomas [KOO'76], [KOO'77], [KIM'81], [NAK'85]. He was one of the persons who clearly showed the interest of ART annealing for industrial use: grain refinement, lath morphology and improvement of the mechanical behavior. He also clearly demonstrated the difference between ART, intercritical annealing and formation of ferrite from high temperature austenite. That is why some researchers call the fibrous microstructure obtained with ART "Thomas fibers". Figure 17 shows the scheme of the thermal cycles which were compared and the corresponding resulting microstructures [KIM'81]. Using transmission electron microscopy (TEM), it was also observed that the ferrite regions have some subgrain structure after ART and intercritical annealing, while the step quenched coarse ferrite was free of such subgrains. Different studies [TOK'82], [MAT'84], [YI'85], [CAI'85] also showed that the prior-to-annealing martensite structure conditions austenite nucleation and, hence, the morphology of the final microstructure.
These investigations point out the importance of the following parameters for the control of austenite shape and distribution:
• prior austenite grain size;
• presence of non-dissolved carbides;
• presence of defects (deformation state);
• recrystallization state of martensite.
In particular, Yi et al. [YI'85] observed an unambiguous difference in the morphology of austenite obtained from two different martensitic states. A succinct representation of their results is given in Figure 18. It appears that very rapid austenitization prevents the full dissolution of carbides in austenite and limits the austenite grain growth; hence, in the following intercritical annealing, new austenite nucleates mostly on the prior austenite grain boundaries and its shape is more globular. On the contrary, austenite nucleates at lath and packet boundaries and has a lath-like morphology in the case of an initial martensite with fully dissolved carbides and bigger grains. Cai et al. [CAI'85] also observed sluggish austenite kinetics and an extensive partitioning of Mn between annealed martensite (ferrite) and austenite. This indicates that the driving force for the austenite growth is rather small and that the kinetics is controlled by manganese diffusion. Two very recent studies of the ART mechanisms, [WEI'13] and [NAK'13], investigated the formation and growth of austenite from as-quenched martensite and came to almost the same conclusions. Wei et al. [WEI'13] studied a 0.1C-3Mn-1.5Si (wt.%) steel and found that during intercritical annealing of the as-quenched martensitic structure, austenite will grow from the thin austenite films between laths retained upon quenching, but austenite will also nucleate at lath boundaries and packet boundaries of martensite and within laths. As found by Speich et al. [SPE'81], the growth of austenite can be divided into three stages:
1.
initial non-partitioned growth of austenite, controlled by rapid carbon diffusion in ferrite, which is gradually replaced by carbon diffusion in austenite;
2. intermediate slow growth, controlled by diffusion of Mn and/or Si in ferrite;
3. very slow growth, controlled by diffusion of Mn and/or Si in austenite for the final equilibration, which accompanies the shrinkage of austenite [WEI'13].
These three stages are clearly depicted in Figure 19.
Internal ArcelorMittal results [MOU'03] confirmed the observations of the fine fibrous microstructure obtained after intercritical annealing of martensite and its interest for the improvement of the mechanical behavior.

Austenite stabilization during annealing

It is known that during cooling from the austenitic or intercritical region, austenite will undergo a transformation according to the cooling rate: ferrite, pearlite, bainite or martensite. But in the case of a high stability of austenite, these transformations can be avoided and austenite will be retained at room temperature. Concerning medium Mn steels, the ferrite, pearlite and bainite transformations are strongly delayed due to the high Mn content. Hence, in the majority of cases the only transformation that takes place during cooling is the martensitic one. Numerous studies were done on the observation and comprehension of the martensitic transformation and many reviews were edited. The reviews or books used in this work as the basis for the understanding of martensite and its transformation are the following: [KUR'60], [KUR'77], [OLS'92], [KRA'99]. These authors have made a considerable contribution to the building of knowledge about martensite. The martensite transformation is characterized by two temperatures: M s - the temperature at which martensite starts to form, and M f - the temperature at which the martensite transformation is finished. Accordingly, during cooling, austenite does not transform at temperatures higher than M s .
Therefore, it is very important to know the M s temperature, since it can be used as an indicator of the austenite thermal stability. A lot of studies were done about the influence of different parameters on the M s temperature. It is commonly agreed that the M s temperature depends on chemical composition, prior austenite grain size, hydrostatic pressure and applied stress. It is also known that the effects of chemical composition and of a fine prior austenite grain size (less than 10 µm) are of major importance. A number of different relations linking chemical composition and the M s temperature exist in the literature. Probably the most common relations are those proposed by Steven and Haynes [STE'56] and Andrews [AND'65]. However, the more recent work of Van Bohemen [BOH'12] proposed a better correlation of the M s temperature with the carbon content, using an exponential carbon dependence. Also, a clear review of the past investigations and an extension of the M s relation to higher and broader alloying element contents was done by Barbier [BAR'14]. Van Bohemen's and Barbier's formulas are presented hereafter. Both give very satisfactory results, but Barbier's formula has a wider application field. Hence, in this work it will be taken as a reference.

M s = 565 - 600·(1 - exp(-0.96·w C )) - 31·w Mn - 13·w Si - 10·w Cr - 8·w Ni - 12·w Mo  [BOH'12]  (1)

M s = 545 - 601.2·(1 - exp(-0.868·w C )) - 34.4·w Mn - 13.7·w Si - 9.2·w Cr - 17.3·w Ni - 15.4·w Mo + 10.8·w V + 4.7·w Co - 1.4·w Al - 16.3·w Cu - 361·w Nb - 2.44·w Ti - 3448·w B  [BAR'14]  (2)

where w i is the alloying element content in austenite expressed in weight percent. Studies about the effect of the prior austenite grain size on the austenite stability and the M s temperature are more rare. This is probably related to experimental difficulties, as the observation of prior austenite grains is quite complex. Therefore, the first studies were done in more theoretical ways. One of the most common works is that of Fisher et al. [FIS'49], where a nucleation and growth model to predict the kinetics of martensite transformation was proposed.
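Equation (1) is straightforward to evaluate numerically; a minimal Python sketch is given below. The composition used in the example is purely illustrative and is not one of the steels of this work:

```python
import math

def ms_van_bohemen(wC, wMn=0.0, wSi=0.0, wCr=0.0, wNi=0.0, wMo=0.0):
    """M_s in Celsius after [BOH'12]; w_i are contents in austenite, wt.%."""
    return (565.0 - 600.0 * (1.0 - math.exp(-0.96 * wC))
            - 31.0 * wMn - 13.0 * wSi - 10.0 * wCr
            - 8.0 * wNi - 12.0 * wMo)

# Illustrative enriched intercritical austenite, e.g. ~0.3 wt.% C and 7 wt.% Mn:
print(f"Ms = {ms_van_bohemen(0.3, wMn=7.0):.0f} C")
```

The exponential carbon term reproduces the stronger effect of the first carbon additions, while each substitutional element lowers M s linearly.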
The effect of the prior austenite grain size was introduced in the model by the authors, based on geometrical constraints: the volume fraction of forming martensite is proportional to the volume of the prior austenite grain. Modern studies [WAN'01], [DRI'04], [JIM'07-1], [JIM', [YAN'09] confirmed the fact that a small austenite grain size decreases considerably the M s temperature. The two last works also proposed their ways to take into account the influence of the austenite grain size. Jimenez-Melero et al. [JIM' suggested the following equation:

M s ' = M s - B·V γ ^(-1/3)  (3)

where M s is taken from the Andrews equation [AND'65], V γ is the volume of the austenite grain, and B is a parameter that can be obtained from thermodynamic calculations. For spherical grains, the factor B was fitted to the value of 475 µm·K. Yang et al. [YAN'09] had the same global idea, but a different formula was given in their study of Fe-C-Ni-Mn alloys:

M s = M s 0 - (1/b)·ln[ (1/(a·V γ ))·(exp(-ln(1 - f)/m) - 1) + 1 ]  (4)

where V γ is the volume of the austenite grain, f is the martensite fraction (for the M s calculation f = 0.01), m is the plate aspect ratio (m = 0.05 from [CHR'79]) and a = 1 mm and b are fitting parameters [YAN'09]. As can be seen, there are some key limitations in both solutions:
• the Jimenez-Melero et al. model gives a too high impact of very small grains (less than 2 µm);
• the Yang et al. equation, on the contrary, predicts a too low impact of fine grains (less than 2 µm).
Such restrictions leave the possibility for future investigations and new solutions. Another important topic that was also largely studied is the kinetics of martensite transformation. In Fe-C-Mn steels, the austenite to martensite transformation is athermal. This means that the kinetics of martensite transformation only depends on the decrease of temperature; thus, the driving force for the transformation increases with higher supercooling below the M s temperature.
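The first limitation listed above can be made concrete by evaluating equation (3) with the fitted value B = 475 µm·K for a few spherical grain diameters; the diameters chosen here are assumed, for illustration only:

```python
import math

def ms_shift_jimenez(volume_um3, B=475.0):
    """Decrease of M_s (K) for an austenite grain of given volume (um^3),
    following equation (3): delta = B * V**(-1/3)."""
    return B * volume_um3 ** (-1.0 / 3.0)

for d_um in (5.0, 2.0, 1.0):            # illustrative sphere diameters
    volume = math.pi / 6.0 * d_um ** 3  # sphere volume in um^3
    print(f"d = {d_um} um -> delta Ms = {ms_shift_jimenez(volume):.0f} K")
```

For diameters below ~2 µm the predicted shift reaches several hundred kelvin, which illustrates why this model is considered to overestimate the effect of very small grains.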
The most widespread model to predict the kinetics of athermal martensite transformation is the Koistinen and Marburger empirical equation [KOI'59]:

f RA = exp(−α·(M s − T q )) (5)

where f RA is the volume fraction of retained austenite, T q is the temperature reached during quenching and α = 0.011 is a fitting parameter. Recent works, in particular that of Van Bohemen et al. [BOH'09], proposed some modifications to this equation. In [BOH'09] it was found that the curvature of the martensite transformation kinetics curve depends on the chemical composition of austenite. The curvature is controlled by the parameter α; hence the following empirical dependence of α on the chemical composition was proposed:

α = 0.0224 − 0.0107·w C − 0.0007·w Mn − 0.00005·w Ni − 0.00012·w Cr − 0.0001·w Mo (K −1 ) (6)

A further modification kept the same exponential form but introduced an exponent β, with both α and β expressed as functions of the chemical composition:

f RA = exp(−α·(M s − T q ) β ) (7)

From the previously mentioned works, it appears that two factors have the major influence on the M s temperature, and thus on austenite stability:
1. chemical enrichment of austenite with alloying elements (C, Mn, Si and others);
2. austenite grain size, which can give an important increase of austenite stability when its value is very small.
Considering these factors, one can imagine an intercritical annealing that allows considerable partitioning of C and Mn (the major austenite stabilizers) and at the same time a low austenite grain size. Hence, using such an intercritical annealing, an important fraction of austenite can be retained at room temperature. One of the first works that showed experimental confirmation of important Mn partitioning in medium Mn steels was the study of Kim et al. [KIM'05]. It was shown that high amounts of retained austenite can be stabilized after intercritical annealing and that the stability of retained austenite was due to both C and Mn partitioning (Figure 21). Recent studies by different authors [COO'10], [WAN'11], [LUO'11], [JUN'11], [LUO'13] confirmed the partitioning of Mn during intercritical annealing and the high Mn enrichment of austenite.
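Equation (5) quantifies directly why this enrichment matters: the lower M s falls towards room temperature, the larger the untransformed fraction after quenching. A minimal sketch (using α = 0.011 as above):

```python
import math

def f_retained_austenite(Ms, Tq, alpha=0.011):
    """Koistinen-Marburger, eq. (5): austenite fraction left after quenching
    to Tq from a fully austenitic state with start temperature Ms (deg C)."""
    if Tq >= Ms:
        return 1.0  # no athermal martensite forms above M_s
    return math.exp(-alpha * (Ms - Tq))

# Ms barely above room temperature -> most austenite retained:
print(f_retained_austenite(Ms=60.0, Tq=20.0))   # ~0.64
# High Ms (lean austenite) -> almost fully martensitic:
print(f_retained_austenite(Ms=400.0, Tq=20.0))  # ~0.015
```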
Examples of TEM observations and EDX profiles across austenite grains, taken from the work of [COO'10], are presented in Figure 22. As can be seen from Figure 22, the obtained austenite grain size is ultra fine (less than 1 µm). This means that an important contribution of the grain size effect to the stability of austenite is expected. Already in the early works on medium Mn steels by Huang et al. [HUA'94] the ultra fine grain size was reported and its influence on austenite stability was anticipated. More recent works [WAN'11], [LUO'11], [JUN'11] confirmed the ultra fine grain size of austenite with quantitative results. An example of such quantification, taken from the work of Luo et al. [LUO'11], is shown in Figure 23. Also, based on their previous work, Lee et al. [LEE'13-1] proposed a way to consider the influence of austenite grain size on the optimal selection of the annealing temperature to obtain a maximum fraction of retained austenite.

Figure 23 - At left, microstructure observation using TEM; at right, apparent thickness of austenite and ferrite laths, Mn content and austenite volume fraction, as measured by TEM, STEM-EDX and XRD respectively [LUO'11].

Mechanical Properties of Medium Mn steels

In general, MMS are steels with an enhanced TRIP effect due to their high volume fraction of retained austenite. The matrix of these steels is ultra fine ferrite (in certain cases the mean size is less than 1 µm). The hard phase in such a matrix is represented by retained austenite (RA) and, if its stability is not sufficient, fresh martensite. Recently, many studies describing the mechanical behavior of MMS were performed and published. In order to analyze the existing data and to have a clear vision of the trends in mechanical behavior of MMS, a database was built at the beginning of this work and updated during its whole duration. The entire database can be found in Annex 1.1.
In the following, only some extracted data, considered the most relevant for the standard intercritically annealed MMS (no Q&P), will be presented.

Overview of the mechanical properties

First of all, Figure 24 presents the classical Ultimate Tensile Strength versus Total Elongation (UTS-TE) chart as well as the more specific Yield Strength versus Uniform Elongation (YS-Uel) chart. As can be seen, and as already stated, these two graphs show the clear potential of MMS to fulfill the requirements of 3rd generation AHSS. Looking at the UTS-TE graph, UTS values ranging from 1000 to 1200 MPa can be obtained with TE ranging from 12 to 37%. This means that a variety of strength-ductility combinations can be created using MMS. Even though there are some points with higher UTS (up to 1400 MPa), the general trend is that ductility slightly decreases as UTS increases. For YS-Uel the tendency is less clear, but at the same time the quantity of available data is also lower. Nevertheless, in certain cases very attractive combinations of YS and Uel can be obtained (~1000 MPa with more than 20% of Uel). Using the available data set, it was decided to investigate relationships between some mechanical characteristics and microstructure parameters. Generally, ductility is correlated with the RA fraction. Thus, Figure 25 presents two dependencies: TE and Uel as functions of RA fraction. It can be seen that the correlations are highly scattered. There is, of course, a trend of increasing elongation with increasing austenite fraction, but other parameters, like austenite stability for example, probably also have an important influence. In any case, it is clear that tensile elongation cannot be attributed to the RA fraction only. The relation between maximum true stress and RA fraction is plotted in Figure 26: no correlation between these two parameters can be observed.
It was then supposed that the maximum true stress should depend not directly on the RA fraction, but rather on the global fraction of hard constituents: fresh martensite and RA. Unfortunately, in the majority of the articles only the fraction of RA was given. Hence, further investigation of the links between microstructure and mechanical properties was not possible. At last, it is known that the mechanical behavior of a multi-phase material depends on the behavior of each phase under mechanical loading. Taking into account that the studied steels can contain 3 different phases (ultra-fine ferrite, retained austenite and fresh martensite), the following text provides the available knowledge about the mechanical behavior of each phase. In addition, information concerning the TRIP effect (induced martensite transformation) is given. Finally, the possibilities of global modeling of multi-phase steels are discussed.

Mechanical behavior of ultra fine ferrite

Classically, the yield and flow stresses in polycrystalline materials can be described using the following Hall-Petch equation:

σ = σ 0 + k/√D (8)

where σ is the yield or flow stress, D is the mean grain size and σ 0 and k are constants. This equation shows a direct relation between the stress and the inverse square root of the mean grain size. The physical origin of this relationship is that grain boundaries are obstacles for dislocation motion. Further, the works on dislocation movement and storage of Bergström [BER'70], Mecking and Kocks [MEC'81], and Estrin [EST'96] allowed a more sophisticated description of the ferrite behavior law and of its dependence on the mean grain size.
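As a minimal illustration of equation (8) (the constants below are illustrative, not fitted to any steel in the text):

```python
def hall_petch(sigma0, k, D_um):
    """Eq. (8): flow stress (MPa) vs mean grain size D (um);
    k in MPa.um^0.5. sigma0 and k are illustrative constants."""
    return sigma0 + k / D_um ** 0.5

# Refining the grain size raises the predicted stress:
for D in (10.0, 2.0, 1.0):
    print(D, hall_petch(sigma0=70.0, k=600.0, D_um=D))
```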
For example, in the work of Allain and Bouaziz [ALL'08] it was proposed to use the following expression for the evolution of the ferrite flow stress (σ F ) as a function of its deformation (ε F ):

σ F = σ 0 + α·M·μ·√( b·(1 − exp(−M·f·ε F )) / (D·f) ) (9)

where α is a constant of the order of unity (α = 0.4), M is the mean Taylor factor (M = 3), μ is the shear modulus of the ferrite (μ = 80 GPa), b is the Burgers vector (b ≈ 2.5×10⁻¹⁰ m), D is the ferrite mean grain size, σ 0 is the lattice friction stress, which depends on the chemical composition (elements in solid solution) and on the temperature, and f is a fitting parameter related to the intensity of dynamic recovery processes, adjusted on the experimental results. Both laws (8) and (9) were derived and fitted from data with average grain sizes mostly above 2 µm. In this range (more than 2 µm) these laws perform quite well and the predictions are in good agreement with the experimental data. However, when the grain size is ultra low (below 2 or even 1 µm) such approaches appear to be inappropriate. Several works pointed this out and proposed new considerations and/or theories [CHO'89], [EMB'94], [BOU'09], [BOU'10]. However, for the moment there is no clear vision in the literature about the strengthening mechanism of steels with ultra-fine (or nanoscale) grain size. The difficulty also lies in producing such fine scale structures and measuring their mechanical properties. Only a few recent works proposed valuable data [OHM'04], [TSU'08]. An example of stress-strain curves obtained on a given steel with different ferrite grain sizes is shown in Figure 27. Looking at these results, it can be seen that for grain sizes lower than 1 µm the work hardening is close to zero. As well, the strength increase with decreasing grain size appears more pronounced for the ultra-fine structures.
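The grain-size-dependent flow law of equation (9) can be sketched numerically; σ 0 and f below are illustrative values, not the fits of [ALL'08]:

```python
import math

def sigma_ferrite(eps, D, sigma0=100e6, f=3.0,
                  alpha=0.4, M=3.0, mu=80e9, b=2.5e-10):
    """Ferrite flow stress (Pa) from eq. (9); D is the mean grain size in m.
    sigma0 (friction stress) and f (dynamic recovery) are illustrative."""
    # dislocation density from storage (1/bD per glide event) vs recovery (-f.rho)
    rho = (1.0 - math.exp(-M * f * eps)) / (b * D * f)
    return sigma0 + alpha * M * mu * b * math.sqrt(rho)

# At a given strain, finer grains give a higher flow stress:
print(sigma_ferrite(0.05, D=5e-6), sigma_ferrite(0.05, D=1e-6))
```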
But even in these works the obtained ferrite grains co-existed with cementite particles, which can also influence the mechanical behavior; their presence should thus be considered carefully. The important dependence of YS, UTS and Uel on the grain size for ferritic steels with ultra-fine grains was also shown in the more recent work of Bouaziz [BOU'09] (Figure 28). The variation of YS, UTS and Uel as a function of grain size appears to be almost linear in the domain of grain sizes between 0.1 and 1 µm. This clearly shows that for ultra-fine ferrite grains the classical Hall-Petch relation between the stress and the inverse square root of the mean grain size is not effective. Therefore, for ultra-fine grains it seems more appropriate to take the grain size influence on the stress into account in the form 1/D.

Figure 27 - Nominal stress-nominal strain curves of the ferrite-cementite steels obtained by tensile tests with strain rates of 3.3×10⁻⁴ s⁻¹ (a), 100 s⁻¹ (b), and 10³ s⁻¹ (c) at 296 K [TSU'08].

So far, only the behavior of ultra-fine ferrite with polygonal morphology was discussed. Concerning lath-like ultra fine ferrite structures, the literature is rather poor. Naturally, a lath structure of ferrite can be found in tempered martensite and pearlite. In the first case (tempered martensite) the resulting ferrite, depending on the tempering temperature, contains a certain density of dislocations, while some carbides are generally present as well. Thus, understanding the mechanical behavior of such a mixture is rather complex. In addition, data about the stress-strain behavior of tempered martensite are really rare, especially for high tempering temperatures: in most works, only hardness and/or the 0.2% offset yield stress are considered. In the case of pearlite, stress-strain data are more abundant and the ferrite contains a much lower number of defects.
However, the problem is how to dissociate the separate contributions of ultra-fine lath ferrite and lamellar cementite. It is commonly admitted that the mechanical properties of pearlitic steel strongly depend on the interlamellar spacing. Based on experimental results, Bouaziz and Buessler [BOU'02] proposed a semi-empirical behavior law for pearlite. It was assumed that only the ferrite in between the cementite lamellae deforms plastically. Thus, the yield stress was considered to depend on solid solution hardening and on the interlamellar spacing, with a form inspired from Orowan's theory [ORO'54]. Finally, plastic strain hardening was supposed to be isotropic and to follow a Voce-type law. Consequently, the general form of the pearlite law was as follows:

σ = σ 0 + M·μ·b/λ + K·g·(1 − exp(−2·ε p /g)) (10)

where λ is the interlamellar spacing, and K and g are fitting parameters adjusted on the experimental tensile curves of fully pearlitic steels with different interlamellar spacings.

Mechanical behavior of fresh martensite

Martensite is one of the hardest phases in steel and also one of the most complex, due to its ultra-fine, acicular and multi-variant structure. Martensite is a metastable phase obtained in steels during cooling from austenite to room temperature at a sufficient rate. As already stated, the martensitic transformation is athermal and displacive, which implies several things. First of all, there is no long-range diffusion of atoms during the transformation; as a result, the produced martensite inherits the composition of the prior austenite. Secondly, during the transformation the face centered cubic (FCC) lattice is changed to a body centered tetragonal (BCT) one, which involves both a volume change and a large shear. Finally, the transformed fraction of martensite depends only on the temperature below M s and is independent of time.
Of course, such a complicated transformation produces a very complex structure. According to the study of Krauss and Marder [KRA'71], low-M s twinned martensite is called "plate martensite", whereas high-M s needle-like martensite is called "lath martensite". In this work only lath martensite is considered. An example of the complicated lath martensite structure, taken from Morito et al. [MOR'05], is given in Figure 29. It is evident that such a complex nano-scale structure results in a specific mechanical behavior. An example of true stress-true strain curves of martensitic steels with different carbon contents is given in Figure 30. As can be seen, very high levels of strength can be obtained, but in counterpart the ductility is rather low. Ductility can, however, be improved with tempering treatments, which will not be discussed here. Different sources of strengthening are discussed in the literature: grain and substructure (packets, laths), solid solution, precipitation, internal stresses and others. But the most generally acknowledged correlation between a mechanical and a metallurgical parameter is the evolution of martensite hardness with its carbon content. Such dependence is very well demonstrated in the review of Krauss [KRA'99], where hardness measurements from a number of authors were plotted as a function of martensite carbon content for a variety of carbon concentrations (Figure 31). As well, there is an important quantity of data about the influence of different alloying elements on the martensite hardness [GRA'77], [YUR'87]. Such an effect of alloying elements is naturally included in the prediction of the yield stress. For example, the effect of Mn on the martensite yield strength was shown in the review of Krauss [KRA'99] by combining the data from the studies of Speich et al. [SPE'68] and Norstrom [NOR'76] (Figure 32). However, very few studies discuss the influence of Mn and other substitutional elements on the strength of martensite.
Figure 32 - Martensite yield strength for Fe-C alloys [SPE'68] and for Fe-C-Mn alloys [NOR'76].

In addition, there is a lack of data about the influence of substitutional elements on the strain hardening of martensitic steels. Only one study concerning the effect of Cr on the work hardening of martensite was found [VAS'74]. Thus, it appears that the Mn effect on the strength and strain hardening of martensitic carbon steels is not clear and needs more study. The number of considerations in the literature about the strain-hardening mechanisms of martensitic steels is quite low, probably related to their poor uniform elongation (i.e. necking strain). Principally, the mechanical behavior of martensite is either described with a phenomenological polynomial law [CHO'09] or reduced to an elastic or elastic-perfectly plastic law [DEL'07]. However, recently a Continuum Composite Approach (CCA) was proposed to predict the complete tensile curves of as-quenched martensitic steels [ALL'12]. The general idea of this approach is to consider martensite as a composite of elastic-perfectly plastic phases in interaction. All the phases have the same Young modulus, and the probability density f(σ) of finding a phase with a given yield strength defines the so-called "stress spectrum". Consequently, the behavior of any martensitic steel can be described using its distinctive stress spectrum. An example of such a spectrum f(σ) and of its associated cumulated function F(σ) is given in Figure 33. The principal equation of the model relates the macroscopic stress Σ to a given macroscopic strain E in the following way:

Σ = ∫ σmin σL σ·f(σ)·dσ + σ L ·∫ σL +∞ f(σ)·dσ (11)

where σ min is the threshold stress at which the softest phases of the composite plasticize and σ L (E) is the highest yield stress among the plasticized phases for a given macroscopic strain E.
The first integral is the contribution of the already plasticized zones of the composite and the second one is the contribution of the zones that remain elastic under the σ L loading. In the majority of cases, these integrals cannot be solved explicitly. However, it is possible to calculate the derivative as a function of the macroscopic strain (i.e. the macroscopic strain-hardening rate):

dΣ/dE = Y·(1 − F(σ L )) / (1 + (Y/β)·F(σ L )) (12)

where Y is the Young modulus and β is a constant parameter that describes the interactions between the zones of the composite after any plasticization:

σ = Σ − β·(ε − E) (13)

where σ and ε are respectively the local stress and the local strain in each element of the composite. This model was fitted on a mixed database: the mechanical behavior of certain martensitic steels was taken from the literature, while others were determined experimentally. The adjustment of the mechanical spectra was done using a KJMA-type law with three fitting parameters: σ min , σ 0 and n; the last two control the shape and the width of the stress spectrum. Comparison between the results from the model and the experimental data is shown in Figure 34. As can be seen, not only a very good agreement between the model and the experimental true stress-true strain curves was obtained, but also a very good prediction of strain hardening was achieved (Figure 34(b)). As well, the simulation of the elastic-plastic transition was very accurate. Finally, such a sophisticated model permits describing the tensile behavior of martensite in a univocal way and explaining the low microplasticity yield stress, the very high strain hardening rate and the large Bauschinger effect. However, as in the previous studies, only the effect of C content on the strength and strain hardening was taken into account (Figure 34(c)). The present description of martensite structure and mechanical behavior is clearly not exhaustive.
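The composite construction of equation (11) can be sketched numerically. The snippet below is a simplification, not the fitted model of [ALL'12]: it assumes the iso-strain limit (every zone sees the macroscopic strain E, so σ L = Y·E and the β interaction is ignored) and a KJMA-type spectrum with illustrative σ min , σ 0 and n.

```python
import math

def spectrum_cdf(sigma, sigma_min=500e6, sigma_0=1500e6, n=2.0):
    """KJMA-type cumulated spectrum F(sigma): fraction of zones whose
    yield stress is below sigma (illustrative parameters)."""
    if sigma <= sigma_min:
        return 0.0
    return 1.0 - math.exp(-((sigma - sigma_min) / sigma_0) ** n)

def macro_stress(E, Y=210e9, d_sigma=1e6):
    """Eq. (11) in the iso-strain limit sigma_L = Y*E: plasticized zones
    carry their own yield stress, still-elastic zones carry sigma_L."""
    sigma_L = Y * E
    total, sigma = 0.0, 0.0
    while sigma < sigma_L:  # numerical integral of sigma.f(sigma).dsigma
        mass = spectrum_cdf(sigma + d_sigma) - spectrum_cdf(sigma)
        total += (sigma + 0.5 * d_sigma) * mass
        sigma += d_sigma
    return total + sigma_L * (1.0 - spectrum_cdf(sigma_L))

# Hardening stays positive while part of the composite is still elastic:
print(macro_stress(0.005), macro_stress(0.010), macro_stress(0.020))
```

Even this crude version reproduces the qualitative CCA behavior: a purely elastic response below σ min , then a smooth elastic-plastic transition with a progressively decreasing, but always positive, strain-hardening rate.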
Only a very rapid overview of the main results was presented here. More information about martensite is available in the books of Kurdjumov [KUR'60], [KUR'77] and Olson and Owen [OLS'92], but also in the review of Krauss [KRA'99] and in the more recent PhD work of Badinier [BAD'13].

Mechanical behavior of retained austenite

Austenite present at room temperature, also called retained austenite (RA), is a metastable phase. Usually, RA is obtained at room temperature thanks to chemical enrichment with C and Mn and, in some cases, thanks to the ultra fine size of the austenite. The stacking fault energy (SFE) of C-Mn steels is low. Consequently, multiple simultaneous and/or sequential deformation mechanisms are possible:
 dislocation slip;
 mechanical twinning;
 transformation to α'- or ε-martensite.
Once again, the chemical composition (mostly C and Mn), the deformation temperature and, to a lesser extent, the austenite grain size have a major influence on the deformation mechanisms that govern the austenite behavior. According to Schumann's phase stability diagram (left part of Figure 35) [SCH'72], only transformation to α'-martensite occurs up to 10 wt.% of Mn. At the same time, from the right part of Figure 35, showing the modified diagram with the experimentally determined domain of mechanical twinning (shaded area), it can also be stated that no twinning occurs for compositions with Mn content lower than 10 wt.% and C lower than 1.2 wt.%. As this work mostly concerns steels in this composition range, only dislocation slip and transformation to α'-martensite will be considered further as possible deformation mechanisms. It is very difficult, or even impossible, to obtain experimentally the mechanical behavior (with the dislocation slip mechanism) of austenite with low C and Mn levels: firstly, because it is complicated to produce a fully austenitic steel with low C-Mn levels at room temperature and, secondly, because of the occurrence of the induced martensite transformation.
However, inspired by the works concerning ferrite and highly alloyed austenitic steels, at least two behavior laws for austenite were proposed. Hereafter these two models are briefly described and compared. The first one, proposed by Perlade et al. [PER'03], assumes that austenite follows the classical relation for polycrystals between the flow stress (σ A ) and the total dislocation density (ρ):

σ A = σ 0A + α·M·μ·b·√ρ (14)

where α = 0.4 is a constant, M = 3 is the average Taylor factor, μ = 72 GPa is the shear modulus, b = 2.5×10⁻¹⁰ m is the Burgers vector and σ 0A is the friction stress of austenite. According to [PIC'78], this friction stress can be expressed as:

σ 0A = 68 + 354·w C + 20·w Si + 3.7·w Cr (15)

where w C , w Si and w Cr are taken in weight percents. In this approach it was also considered that the martensite induced during deformation, which is much harder, results in a strong hardening of the retained austenite islands. In fact, the induced martensite laths reduce the mean free path of dislocations, thus increasing the dislocation density and the strengthening of austenite. Finally, taking into account the works of Mecking and Kocks [MEC'81] concerning dislocation accumulation and annihilation due to dynamic recovery, it was proposed to describe the evolution of the dislocation density with strain for dislocation glide with the following equation:

dρ/dε = M·( 1/(b·Λ) − f·ρ ) (16)

where Λ is the dislocation mean free path and f is a constant. In this equation, the dislocation storage rate is described by the first term (1/bΛ) and dynamic recovery by the second one (−fρ). Another behavior law was proposed by Bouaziz et al. [BOU'11]. The modeling approach was quite similar, but it was extended to TWIP steels, thus permitting an experimental validation. A semi-phenomenological model was proposed to predict the stress-strain behavior of C-Mn TWIP steels as a function of Mn and C content.
The flow stress was expressed as:

σ A = σ 0A + σ 1A + σ 2A (17)

where σ 0A is the yield stress of austenite, which increases with C content and decreases with Mn content in the following manner:

σ 0A = 228 + 187·w C − 2·w Mn (18)

with w C and w Mn in weight percents. The flow stress of austenite without any TWIP effect (σ 1A ) was taken in the form of the well known Voce law:

σ 1A = (K/f)·(1 − exp(−f·ε A )) (19)

where K and f are material-related constants and ε A is the strain of the austenite. Finally, σ 2A , which represents the contribution of the dynamic composite effect related to the development of backstresses (i.e. TWIP effects), was expressed as:

σ 2A = m·ε p (20)

where m and p are dimensionless material-related parameters. Using this modeling approach, Bouaziz et al. obtained rather accurate predictions of the true stress-true strain curves of different C-Mn TWIP steels. A comparison between the model and the experimental true stress-true strain curves is presented in Figure 36. The negative effect of Mn on the yield stress is obvious from these data. For example, Figure 36(b) presents the true stress-true strain curves of two steels with almost the same C level but different Mn contents: the steel with the higher Mn content has the lower yield strength. Comparison between the models proposed by Perlade and Bouaziz leads to the following remarks. The Bouaziz approach appears to be better for the prediction of the yield stress of austenite in medium and high-Mn steels. For example, for the 1.2C-12Mn steel the Bouaziz model gives YS = 428 MPa, which is close to the experimental value (~430 MPa). Perlade's model is not so far off (YS = 493 MPa), but for higher Mn contents this difference increases significantly. Moreover, the Bouaziz model combines the contributions of dislocation slip and twinning, which is a clear advantage. However, for our work this is of minor interest because, as stated previously, the studied compositions are out of the twinning domain.
Finally, the evident advantage of Perlade's model is the fact that it takes into account the strengthening of austenite due to the induced martensite transformation.

Figure 36 - Comparison between experimental and model predicted true stress-true strain curves [BOU'11].

Mechanical stability of retained austenite and induced transformation

As said beforehand, retained austenite is a metastable phase. Consequently, during mechanical loading it can transform to α'- or ε-martensite. Such transformation is very beneficial for the global mechanical behavior of the steel: indeed, it considerably increases the strain hardening rate and delays necking. This phenomenon is called the TRansformation Induced Plasticity (TRIP) effect. A lot of work was done on the characterization, understanding and modeling of austenite mechanical stability and of the TRIP effect in low Mn steels and alloys [ZAC'67], [GER'70], [BHA'72], [ZAC'74], [OLS'75], [OLS'78], [TAM'82], [RAG'92], [SUG'92], [JAC'01], [JAC'07]. Nowadays, using new machines for fine in-situ characterization like neutron diffraction or high energy synchrotron X-ray diffraction, the studies of the TRIP effect continue [TOM'04], [MUR'08], [JUN'10], [BLO'14]. At the same time, investigations of the huge TRIP effect in MMS, its understanding and modeling are also multiplying [SHI'10], [CAO'11-1], [WAN'13], [CAI'13], [SUH'13], [GIB'11], [COO'13], [RYU'13]. Of course this topic is of prime interest for us, as it has a direct link with our work. Hereafter, a short summary of the observations found in the literature is given. It is divided in two parts: characterization of austenite stability and TRIP effect in MMS, and modeling of the induced austenite transformation.

Austenite stability and TRIP effect in medium Mn steels

One of the first experimental results concerning the retained austenite evolution under mechanical loading of MMS was published by Shi et al. [SHI'10].
The evolution of microstructure and mechanical properties as a function of holding time at the intercritical temperature was studied for hot forged steels with 0.2 wt.% C and 4.7 wt.% Mn. Figure 37 shows the evolution of the retained austenite fraction as a function of strain for 4 different holding times at 650°C, and as a function of deformation temperature for a 6h holding time. An optimum holding time (in this case 6h) was found for which the best TRIP effect, meaning the optimum stability of retained austenite, is achieved. And, as expected, the austenite transformation was accelerated at lower deformation temperatures. Microstructure observations of specimens annealed for 6h and then deformed were published later by Wang et al. [WAN'13]. It was concluded that retained austenite transformed in the majority of cases to α'-martensite through strain induced transformation, owing to the high density of microtwins and stacking faults in the austenite grains. The presence of ε-martensite was also detected, but its fraction was so low that it was considered negligible. Suh et al. [SUH'13] studied steels with different C and Mn levels and 2 wt.% Al. These steels were cold rolled and annealed at different intercritical temperatures, and the evolution of the retained austenite fraction during tensile tests was measured. The evolution of the normalized fraction versus engineering strain is presented in Figure 38. It can be seen that the rate of strain-induced transformation increases with increasing annealing temperature. In this work, it was also found that there is an optimum annealing temperature in the intercritical domain that provides the optimum stability of retained austenite and consequently the best TRIP effect: in particular, this temperature was 720°C for the 0.11C-4.5Mn-0.45Si-2.2Al steel, while it was 700°C for the 0.055C-5.6Mn-0.49Si-2.1Al steel. Gibbs et al. [GIB'11], [COO'13] performed a less statistical, but more rigorous study of the deformation behavior of retained austenite during tensile loading.
Cold rolled 0.1C-7.1Mn-0.13Si (wt.%) steel was intercritically annealed at 575, 600, 625 and 650°C for 168h. Then in-situ neutron diffraction experiments were carried out during tensile tests. This permitted plotting the evolution of retained austenite as a function of strain (Figure 39). The observed tendency is the same as in the case of Suh et al. [SUH'13]: the induced transformation rate increases with increasing holding temperature. However, further analysis of the samples annealed at 600°C and 650°C [COO'13] showed two things:
• in the sample with the high austenite stability (600°C), the latter was deformed by the glide of partial dislocations trailing stacking faults. Strain induced transformation of the austenite took place only after the yield point elongation;
• on the other hand, the austenite islands of the sample annealed at 650°C contained some stacking faults and thin ε-martensite laths, which promoted the stress induced transformation of austenite at lower stress levels.

Models for austenite induced transformation

Generally, for the modeling of the induced austenite transformation a modified Kolmogorov-Johnson-Mehl-Avrami approach [KOL'37], [JOH'39], [AVR'39] is used. One of the most commonly applied formulations of such approach is the equation proposed by Olson and Cohen [OLS'75]:

F M ind = 1 − exp( −β·[1 − exp(−α·ε)] n ) (21)

where n is a constant parameter, α depends on the stacking fault energy and defines the rate of shear band formation, and β is related to the probability that a shear band intersection forms a martensite nucleus. This probability depends on the temperature through its link with the chemical driving force. Another equation, proposed even earlier by Guimaraes [GUI'72] for a 0.06C-31Ni (wt.%) steel, was closer to the standard Kolmogorov equation:

F M ind = 1 − exp(−k·ε p z ) (22)

where k and z are constant fitting parameters and ε p is the true plastic strain.
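Equations (21) and (22) are easy to compare numerically; the parameter values below are illustrative, not fits to any steel discussed in the text:

```python
import math

def olson_cohen(eps, alpha=3.0, beta=2.0, n=2.0):
    """Eq. (21): strain-induced martensite fraction, Olson-Cohen form
    (illustrative alpha, beta, n)."""
    return 1.0 - math.exp(-beta * (1.0 - math.exp(-alpha * eps)) ** n)

def guimaraes(eps_p, k=30.0, z=2.5):
    """Eq. (22): Kolmogorov-type fit (illustrative k, z)."""
    return 1.0 - math.exp(-k * eps_p ** z)

# Both laws are sigmoidal in strain and saturate below 1:
for e in (0.05, 0.15, 0.30):
    print(e, round(olson_cohen(e), 3), round(guimaraes(e), 3))
```

In practice the fitting parameters encapsulate the austenite stability: a less stable austenite (higher annealing temperature in the studies above) corresponds to a faster-rising transformation curve.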
Most of the induced transformation models use one of these two equations with adapted fitting parameters. However, some works [ANG'54], [LUD'69], [GER'70] rather proposed a direct power law of the following type:

F M ind = A·ε B ·F RA (23)

where A and B are fitting parameters and F RA is the volume fraction of retained austenite. Finally, Perlade et al. [PER'03] proposed a model for the induced martensite transformation based on Raghavan's physical approach for the isothermal martensite transformation in Fe-Ni alloys [RAG'92]. A brief description of Raghavan's approach is as follows. Usually, the rate of a first-order transformation at a given temperature depends on the nucleation and growth rates of the new phase. In the case of the martensite transformation, the growth of plates or laths is stopped by the prior austenite grain boundaries and/or by the neighboring units. Hence, in comparison with classical nucleation-and-growth processes (mutual impingement), there are additional obstacles for the growth. The global martensite transformation rate (dF M /dt) is then controlled by the nucleation rate per unit volume of parent phase at any instant (dN V /dt) and by the transformed volume fraction per nucleation event (dF M /dN V ):

dF M /dt = (dF M /dN V )·(dN V /dt) (24)

Raghavan also observed that nucleation accelerates very rapidly even at the earliest stages, indicating a strong autocatalytic effect. Therefore, he suggested that the number of new nucleation sites generated by autocatalysis is proportional to the volume fraction of formed martensite:

n f = n i + p·F M − N V (25)

where n f is the number of sites at martensite fraction F M , n i the initial number of sites, p is the so-called "autocatalytic factor" and N V is the number of martensite plates up to F M .
Based on the work of Pati and Cohen [PAT'69], Raghavan used the following equation for the nucleation rate:

dN_V/dt = [n_i + (p - 1/V̄) · F_M] · ν · exp(-ΔW_a/kT)   (26)

where V̄ = F_M/N_V is the mean plate volume at martensite fraction F_M, ν is a vibration frequency term and ΔW_a is the activation energy for nucleation. In the model of Perlade et al. [PER'03], it was considered that at temperatures above M_s the austenite to martensite transformation can take place when the applied stress level is high enough: the "stress assisted transformation" (Figure 40(a)). Such a transformation can be modeled by incorporating the thermodynamic effect of the applied stress in the theory developed for the transformation upon cooling (Figure 40(b)). When the stress exceeds the yield stress of austenite, the martensite nucleation becomes strain-induced on potent sites created by the plastic strain. This domain is characterized by the M_s^σ temperature. The effect of plastic strain on the nucleation rate was then introduced in the activation energy (ΔW) through the driving force (ΔG) in the following manner:

ΔW_a = A + B · ΔG   (27)

where A and B are two positive constants and ΔG is taken as a sum of chemical and mechanical contributions:

ΔG = ΔG_(σ=0) + (∂ΔG/∂σ) · σ_γ   (28)

Using such a physically based approach is advantageous as it directly takes into account the effects of chemical composition, austenite size and plastic strain on the induced martensite transformation. As can be seen in Figure 41, good predictions in comparison with the experimental volume fractions of induced martensite were obtained. However, in the work of Moulin et al. [MOU'02] it was shown that the modeled induced martensite transformation is very sensitive to the grain size of retained austenite (Figure 42). This means that, for the majority of cases, the retained austenite size should be fitted in order to obtain a good description of the experimental data.
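The nucleation-rate bookkeeping of equations (24)-(26) can be integrated numerically. Below is a minimal explicit-Euler sketch under stated simplifications that are not part of the original model: a constant mean plate volume, a lumped activation term ΔW_a/kT, and a (1 - F_M) factor added as a crude impingement correction. All parameter values are illustrative, not taken from [RAG'92] or [PER'03]:

```python
import math

def raghavan_kinetics(n_i, p, v_plate, nu, dW_over_kT, dt, t_end):
    """Explicit-Euler sketch of equations (24)-(26).

    Simplifications (assumptions, not from the original papers):
    constant mean plate volume v_plate, lumped activation term
    dW_over_kT, and a (1 - F) impingement factor in the growth step.
    """
    rate_factor = nu * math.exp(-dW_over_kT)
    N, F, t = 0.0, 0.0, 0.0
    history = [(t, F)]
    while t < t_end and F < 0.999:
        n_f = max(n_i + p * F - N, 0.0)   # available sites, eq. (25)
        dN = n_f * rate_factor * dt       # nucleation events, eq. (26)
        dF = v_plate * dN * (1.0 - F)     # eq. (24) + impingement factor
        N, F, t = N + dN, F + dF, t + dt
        history.append((t, F))
    return history

hist = raghavan_kinetics(n_i=1e3, p=1e5, v_plate=1e-4, nu=1e2,
                         dW_over_kT=5.0, dt=1e-3, t_end=50.0)
print(f"F_M = {hist[-1][1]:.3f} reached at t = {hist[-1][0]:.2f}")
```

The autocatalytic term p·F_M makes the transformed fraction grow quasi-exponentially at first, reproducing the rapid acceleration Raghavan observed, before impingement saturates the curve.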
Modeling of the multiphase steel mechanical behavior

For multiphase structures, the stress and strain levels of the global material depend on the stress and strain values of each phase. The behavior laws that consider the behavior of each constituent are so-called "mixture rules". One of the first works that proposed the additivity of the stress and strain tensors for a multicomponent system was the article of Hill [HIL'63]:

σ = f1 · σ1 + f2 · σ2   (29)

ε = f1 · ε1 + f2 · ε2   (30)

where f1 and f2 are the phase volume fractions (f1 + f2 = 1); σi and εi are the stress and the strain of each component. This approach means that there is stress and strain partitioning between the multiple constituents of the system. These two equations were very frequently used separately (with the iso-strain or iso-stress hypothesis) or together [FIS'77], [KAR'75], [TAM'73]. Utilization of both equations is more adapted as it results in a less pronounced stress partitioning than the iso-strain model. Another way to improve the predictions of the iso-strain model was proposed by Gladman et al. [GLA'72]. The authors suggested to use a power law for the volume fraction of constituents in equation (29):

σ = f1^n · σ1 + (1 - f1^n) · σ2   (31)

Nevertheless, this approach is still less precise than the general modeling with both equations (29) and (30), especially in the case of a multicomponent system with more than 2 constituents. Even though the combined mixture law for stress and strain gives rather good results, its disadvantage is the need of a fitting parameter in order to solve the system of two equations. Actually, equations (29) and (30) do not describe the transfer of stress and strain between phases.
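For illustration, the linear rule of equation (29) under the iso-strain hypothesis can be compared with the Gladman power-law weighting of equation (31). The Hollomon flow laws and all numerical values below are assumptions chosen only to make the comparison concrete:

```python
def hollomon(K, n, eps):
    """Hollomon flow law sigma = K * eps^n (assumed constituent behavior)."""
    return K * eps ** n

def iso_strain(f1, s1, s2):
    """Linear mixture rule, equation (29), with the iso-strain hypothesis."""
    return f1 * s1 + (1.0 - f1) * s2

def gladman(f1, n, s1, s2):
    """Power-law mixture of Gladman et al., equation (31)."""
    return f1 ** n * s1 + (1.0 - f1 ** n) * s2

eps = 0.10
soft = hollomon(800.0, 0.25, eps)   # soft constituent (ferrite-like), MPa
hard = hollomon(2000.0, 0.10, eps)  # hard constituent (martensite-like), MPa
print(f"iso-strain: {iso_strain(0.7, soft, hard):.0f} MPa")
print(f"Gladman n=1.5: {gladman(0.7, 1.5, soft, hard):.0f} MPa")
```

With n > 1, f1^n < f1, so equation (31) shifts weight toward the harder phase and predicts a higher composite stress than the plain iso-strain rule when phase 1 is the soft one.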
Commonly, to take this stress and strain transfer into account, a parameter β, which describes the slope of the line AB in Figure 43, is determined in the following way [FIS'77], [GOE'85]:

β = (σ2 - σ1) / (ε1 - ε2)   (32)

where β is a fitting parameter that is fixed in the range between 0 and ∞, depending on the studied case.

Figure 43 - Schematic representation of the three different conditions of the mixture law (iso-stress, iso-strain, and intermediate one) with the lines EF, GH, and AB, respectively, and the true stress-true strain curves of a soft phase matrix (m), hard phase α'-martensite, and the composite [GOE'85].

In order to avoid this arbitrary fitting parameter, Bouaziz and Buessler [BOU'02] proposed another approach. For a disordered microstructure in whatever material state, the mechanical work increment was suggested to be equal in each constituent. As well, the global strain increment was considered to be the sum of the strain increments in each constituent multiplied by their volume fractions. In terms of expressions, this means a system of two equations:

σ1 · dε1 = σ2 · dε2   (33)

dε = f1 · dε1 + f2 · dε2   (34)

where σi and εi are the stress and the strain of each constituent and fi their respective volume fractions. This approach was called Iso-W and was used further for the modeling of different multiphase materials. It was successfully applied to ferrite-pearlite [BOU'02] and low-Mn (standard) TRIP [PER'03] steels. Furthermore, it was recently utilized for the modeling of the mechanical behavior of a medium Mn steel containing 0.08 wt.% C, 6.15 wt.% Mn, 1.5 wt.% Si, 2.0 wt.% Al, 0.08 wt.% V and with a bimodal grain size distribution.

Yield Point Elongation

In addition, one particularity of the mechanical behavior of MMS should be stated. There are many examples where a yield point elongation (YPE) is observed. The yield point elongation phenomenon is a localized, heterogeneous transition from elastic to plastic deformation.
An example of a stress-strain curve presenting YPE is shown in Figure 44. Generally, YPE occurs in low carbon steels due to the pinning of dislocations by interstitial atoms (typically carbon and nitrogen). In order to liberate the dislocations and make them available for further motion, an additional energy (stress) is required. This stress corresponds to the Upper Yield Stress (in this work abbreviated as YS_H). After YS_H is reached, the dislocations are free and the stress needed for their motion becomes abruptly lower. This leads to a lower macroscopic stress of the specimen: the Lower Yield Stress (in this work abbreviated as YS_L). The plastic deformation of the material is then localized and heterogeneous. At this moment, a Lüders band between plastically deformed and undeformed material appears and moves with a constant cross-head velocity. During the propagation of the band from one end of the specimen to the other, the nominal stress-strain curve is flat or fluctuates around a constant stress value due to the interaction of moving dislocations with the interstitial atoms. Once the band has gone through the entire specimen, the deformation becomes homogeneous and a positive strain hardening is observed. There are two major factors affecting Lüders band formation: the microstructure (grain size, ageing state and crystallographic structure) and the macroscopic geometry of the sample.

Figure 44 - Stress-strain diagram showing Yield Point Elongation (YPE) and Upper (UYS) and Lower (LYS) Yield Strengths [AST'09].

This stress localization and Lüders band propagation in MMS were studied in detail by De Cooman et al. [COO'13] and Gibbs et al. [GIB'14]. The outputs of both works were similar. It was found that the yielding behavior of a 7 wt.% Mn steel was controlled by the intercritical annealing temperature, which in turn has a great influence on the final microstructure and the stability of retained austenite.
Two possible scenarios were detected and described.
1) Yielding of the duplex ferrite-austenite microstructure, obtained after low temperature intercritical annealing, proceeded through a localized plastic deformation of ferrite (Lüders band nucleation and propagation at a constant stress). In the range of yield point elongation, the retained austenite deformed by the glide of partial dislocations trailing stacking faults and only about 6% of the whole austenite fraction was transformed into martensite. The major part of the strain-induced retained austenite transformation took place after the yield point elongation.
2) On the other hand, the complex microstructure consisting of ferrite, α'-martensite, ε-martensite, and a low stability retained austenite, obtained after high temperature intercritical annealing, yielded in another manner. In fact, the stress-induced retained austenite transformation was quite rapid, thus providing a high work hardening rate and avoiding localized deformation.
These results once more underline the complexity of the mechanical behavior of MMS and the importance of their microstructure control, in particular the stability of retained austenite.

In the first part of this chapter, a brief description of the machines, techniques and methods used in this work will be given. The second part will present the results of a preliminary study aiming at the selection of the steel composition and of an adapted heat treatment. This part can be called "alloy design".

Machines, techniques and methods

In this section are presented the tools and methods used for the elaboration of the samples and their characterization. In the majority of cases, the following strategy was adopted for the experiments and analysis:
1. heat treatments;
2. tensile tests;
3. microstructure observation and quantification;
4. fine characterization;
5. model simulations.
Therefore, the presentation of the different tools in this section will follow this plan.
Heat treatments

Different heat treatments, aiming at the production of samples for different analyses, were performed with one dilatometer and two furnaces:
1) a Bähr DIL 805 dilatometer was used for the study of cementite precipitation during heating and for the characterization of phase transformations;
2) an AET Gradient Batch Annealing furnace, which produces a gradient of temperature on one sheet sample, was used for the rapid evaluation of mechanical properties as a function of holding temperature;
3) a NABERTHERM furnace, which allows simple homogeneous heat treatments, was used as the major tool for the elaboration of big size samples (tensile tests and microstructure analysis).
The main characteristics of these three tools are presented hereafter.

Dilatometer Bähr DIL 805

Dilatometry is most often used to characterize the phase transformations (transition points and kinetics) in steels. In this work, a Bähr DIL 805 dilatometer was used to follow the cementite precipitation during heating and to study the phase transformations (austenite, ferrite and martensite) during annealing. A picture of the Bähr DIL 805 dilatometer and a schematic representation of the experimental cell are presented in Figure 45. The dilatometer follows the length variations of the sample occurring during the imposed heat treatment. The sample is heated and maintained at temperature by a high-frequency induction coil. Temperature control is done by one or several thermocouples; usually, Pt-Pt/Rh 10% (type S) thermocouples are used. The sample is maintained by quartz rods, one of which is mobile. Hence, when length variations occur, this rod moves and the linear displacement is captured by an LVDT (Linear Variable Differential Transformer) sensor.

Figure 45 - Global view of the Bähr DIL 805 dilatometer (left) and schematic representation of the experimental cell (right).

To avoid oxidation during treatment, vacuum is made in the experimental chamber, then a small amount of helium (He) is injected.
The cooling rate can be controlled and high cooling rates can be obtained using He gas injection. Three types of samples were used:
• Ø4 mm × 10 mm cylindrical rods - hot rolled steel;
• 4 mm × 4 mm × 10 mm parallelepipeds - hot rolled steel;
• 1.2 mm × 4 mm × 10 mm parallelepipeds - cold rolled steel.

AET Gradient Batch Annealing furnace

The AET batch annealing (BA) furnace is presented in Figure 46. It consists of 4 zones that can be controlled independently in terms of heating and holding. This allows producing a gradient of temperature on one sheet sample; therefore, this furnace is named Gradient Batch Annealing (GBA) furnace. The precise control of the temperatures in the different zones is ensured by 12 thermocouples located at different axial and transversal positions. A linear temperature gradient between 400°C and 800°C can be obtained on a sample with a length of 700 mm. It can be assumed that each 15 mm segment of the annealed sheet has a constant global mean temperature. Thus, for each temperature segment it is possible to prepare two so-called mini tensile samples, which will be described later in the text. Usually, the heating rate is quite low and it takes hours (between 10 and 40 h) to reach the target temperature, which is comparable with the industrial batch annealing process. Fast cooling is not possible: only natural cooling or controlled slow cooling can be produced. Annealing is performed under vacuum in the entire furnace chamber in order to avoid oxidation and decarburization during treatment.

NABERTHERM furnace

The NABERTHERM furnace is shown in Figure 47. This is a 700 × 500 × 250 mm insulated chamber, whose temperature is regulated by resistance heating. The furnace is first heated and stabilized at the target temperature; the maximum holding temperature is 1280°C. Then, Argon or Nitrogen gas is introduced in the chamber in order to protect the sample from possible oxidation and decarburization.
Next, the sample is put in the deep part of the chamber, which is characterized by a more homogeneous temperature, and held for the desired time. The heating rate depends on the sample thickness and geometry, and it cannot be controlled or varied in-situ. However, for the same sample thickness and geometry, the heating rate will be the same or quite similar. Finally, after holding for a certain time, the sample is cooled down in three possible ways: water quench, oil quench or air cooling. As said previously, in the deep part of the chamber the sample has a completely homogeneous temperature (less than 5°C difference between different points of the sample).

Figure 47 - NABERTHERM furnace.

Tensile tests

After thermal treatments in the gradient batch annealing furnace, small tensile specimens were prepared with a gauge length of 20 mm and a section of 5 × 1 mm² (Figure 48). These specimens were cut along the transverse direction of the steel sheet. For each temperature, two tensile tests were done at room temperature with a constant strain rate of 0.13 s⁻¹. In the case of heat treatments with the Nabertherm furnace, two specimens with a gauge length of 50 mm and a section of 12.5 × 1 mm² (ASTM E8 geometry, Figure 49) were machined and tensile tests were performed at room temperature with a constant strain rate of 0.008 s⁻¹. These specimens were cut along the longitudinal direction of the steel sheet. Tensile tests were realized on a Zwick 1474 machine with a macroXtens SE50 extensometer (Figure 50). This machine has a capacity of 50 kN. The cross-head rate can be varied between 0.0005 and 600 mm/min with a precision of 0.002 % of the used value. The acquisition frequency of the system is 500 Hz. The electronic device for the force measurement corresponds to type I, according to the ISO 7500/1 standard:
• rank 0 in the range from 200 N to 50000 N;
• rank 1 in the range lower than 200 N.
The displacement was measured using a macroXtens SE50 extensometer, which is a high resolution extensometer (Figure 51). According to the EN ISO 9513 standard, this extensometer has a precision rank of 0.5. The maximal error in the measurement of the cross-head displacement between two points in the range between 20 and 200 µm is ±1 µm.

Quantification of retained austenite

Two methods were used to quantify the volume fraction of retained austenite: X-Ray Diffraction (XRD) and saturation magnetization measurements (Sigmametry). The results from both techniques were compared and discussed, and the most pertinent ones were retained for the global microstructure analysis in this work. Both techniques are briefly presented hereafter and the comparison of their results is discussed.

X-Ray Diffraction (XRD)

X-ray diffraction is a powerful tool for the analysis of crystalline phase structures. This technique is widely used for the characterization of the different phases in steels. In particular, the austenite fraction and its carbon content can be evaluated using X-ray diffraction patterns. Diffraction occurs in crystals because the wavelength λ of X-rays (a few angstroms) is of the same order of magnitude as the spacing between the crystal planes. In this work, steel samples of 15 × 15 mm² were mechanically ground to their quarter thickness and polished down to 1 µm to obtain a mirror surface. Then, the X-ray diffraction measurements were done using a Siemens D5000 diffractometer with a cobalt tube, under 30 kV and 30 mA (Co Kα radiation with λ = 1.8 Å). The scans were done in the θ-2θ configuration. In order to avoid texture effects, the angle variations were the following: 2θ from 55° to 129° with a 0.026° step, ψ from 0° to 60° with a step of 5° and φ from 0° to 360° in continuous rotation. The Siemens D5000 diffractometer and a scheme of the goniometric configuration are shown in Figure 52.
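As a quick plausibility check of the chosen 2θ scan range, Bragg's law λ = 2d·sinθ with d = a/√(h² + k² + l²) gives the expected peak positions for cubic phases. The lattice parameters below are typical handbook values for ferrite and austenite, not values measured on the samples of this work:

```python
import math

WAVELENGTH = 1.79  # Co K-alpha in angstroms (the text quotes ~1.8 A)

def two_theta(a, hkl, lam=WAVELENGTH):
    """Expected 2-theta (degrees) of an hkl reflection of a cubic lattice
    with parameter a (angstroms), from Bragg's law lam = 2 d sin(theta)."""
    h, k, l = hkl
    d = a / math.sqrt(h * h + k * k + l * l)
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

# Assumed typical lattice parameters (not measured on these samples):
A_FERRITE, A_AUSTENITE = 2.866, 3.59  # angstroms

for name, a, hkl in [("alpha(200)", A_FERRITE, (2, 0, 0)),
                     ("alpha(211)", A_FERRITE, (2, 1, 1)),
                     ("gamma(200)", A_AUSTENITE, (2, 0, 0)),
                     ("gamma(311)", A_AUSTENITE, (3, 1, 1))]:
    print(f"{name}: 2theta = {two_theta(a, hkl):.1f} deg")
```

With these assumed parameters, all the listed ferrite and austenite reflections fall inside the 55°-129° window scanned here.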
The Averbach-Cohen [AVE'48] method was used to calculate the volume fraction of retained austenite, through the following equations:

f_γ / f_α = (I_γ^hkl · R_α^hkl) / (I_α^hkl · R_γ^hkl)   (35)

f_α + f_γ = 1   (36)

where f_γ and f_α are the volume fractions of austenite and ferrite, I_α^hkl and I_γ^hkl are the average integrated intensities of the (220)α, (211)α, (200)α and (200)γ, (220)γ, (311)γ diffraction peaks, and R_α^hkl and R_γ^hkl are constant parameters related to the α and γ phases and to the studied hkl planes. The R_α^hkl and R_γ^hkl parameters were determined internally using a reference sample with a known retained austenite volume fraction. The average integrated intensities of the (220)α, (211)α, (200)α and (200)γ, (220)γ, (311)γ diffraction peaks are measured on the obtained XRD spectra. Then, 9 ratios between the average integrated intensities of the 3 peaks of austenite and the 3 peaks of ferrite are calculated. Next, using the ratio R_γ^hkl / R_α^hkl obtained on the reference sample, 9 values of the austenite fraction are determined. Finally, the average over these 9 values is considered to be the volume fraction of retained austenite in the sample.

Saturation magnetization measurements

There are two major disadvantages of XRD measurements:
• the depth of analysis is only about 10-20 microns, just below the prepared sample surface;
• the sample preparation is time consuming and mechanical grinding can affect the stability of the retained austenite.
Therefore, the amount of retained austenite was also evaluated using magnetic saturation measurements (Sigmametry). The advantages of this technique are that the measurement is done in the whole volume of the sample and that the sample preparation is quite simple. A small, carefully cut sample of about 5 mm width and 5.5 to 6 mm length (the ratio of length to width has to be superior to 1) is used. This sample is put in a device generating a magnetic field sufficient for complete magnetization and the level of saturation magnetization is measured.
Most of the phases in classical steels are ferromagnetic at room temperature, except austenite and epsilon martensite, which are paramagnetic. Consequently, using the magnetic saturation method it is possible to determine the retained austenite fraction. The saturation magnetization of a multiphase sample depends on the saturation magnetization of each phase (σ_s^i):

σ_s = Σ_i σ_s^i · (m_i / m)   (37)

where m is the global mass of the sample and m_i is the mass of each phase, thus m = Σ_i m_i. Thus, when the sample contains a mass m_a of austenite and a mass m_f of ferrite, the equation becomes:

σ_s = σ_s^a · (m_a / m) + σ_s^f · (m_f / m)   (38)

As stated previously, austenite is a paramagnetic phase, hence its saturation magnetization can be considered as negligible (approximately 0.3 to 0.7 µTm³/kg). Therefore, the saturation magnetization of the sample takes the following form:

σ_s = σ_s^f · (m_f / m)   (39)

Finally, to obtain the fraction of retained austenite it is necessary to perform two measurements:
1) the saturation magnetization of the studied sample containing retained austenite (σ_s);
2) the saturation magnetization of a specimen without retained austenite (σ_s^f, reference sample).
Then the fraction of retained austenite is calculated:

f_RA = (σ_s^f - σ_s) / σ_s^f   (40)

Usually, the specimen without retained austenite is obtained by applying a heat treatment on the initial sample for austenite destabilization. Standard TRIP steels are annealed at 500°C for 1h in order to get the reference samples. In this work, due to the high Mn content (low temperature domain of austenite existence), the destabilization of austenite was not so trivial.
Therefore, three different samples were chosen as references:
• one after austenitization at 750°C for 30min and quenching - expected to have a fully martensitic structure;
• one after annealing of an initial martensite structure (850°C - 1min WQ) at 500°C for 1h;
• one after annealing of an initial martensite structure (750°C - 30min WQ) at 500°C for 30h.
The microstructures after these 3 different treatments were observed using FEG SEM (Figure 54). All the microstructures were considered to be free of retained austenite. For each thermal treatment, at least 2 samples were prepared for saturation magnetization measurements. In the following, only the average values of the saturation magnetization or of the retained austenite fraction will be presented. Table 1 presents the mean values of σ_s^f obtained for the different reference samples. As can be seen, these values are quite close. This means that any reference sample can be used for the evaluation of the retained austenite fraction. However, in order to obtain some vision of the possible dispersion of the calculated retained austenite fraction, all possible combinations between σ_s and σ_s^f were used, then the average value was taken. The mean standard deviation of the retained austenite fraction estimation was calculated to be 1.4 % and the mean confidence interval was about 5 %. These rather low values are evidence of the robustness of the saturation magnetization method.

Comparison between XRD and Sigmametry results

The values of the retained austenite fractions measured on different samples using the XRD and Sigmametry techniques are plotted in Figure 55 as a function of the holding time at 650°C. It can be seen that the shape of the RA fraction evolution with holding time is almost the same for both techniques. However, there is a significant difference between the two curves: the values obtained with XRD are much lower than those from Sigmametry, except the first point at 3min, which is rather close.
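The combination-averaging procedure described above (equation (40) evaluated for all sample/reference pairs, then averaged) can be sketched as follows; the saturation-magnetization values are invented for illustration, not the measured ones:

```python
from itertools import product
from statistics import mean, stdev

def ra_fraction(sigma_s, sigma_s_ref):
    """Retained-austenite fraction from equation (40)."""
    return (sigma_s_ref - sigma_s) / sigma_s_ref

# Illustrative saturation-magnetization values (hypothetical, not measured):
samples = [180.0, 182.5]            # sigma_s of the studied sample, 2 repeats
references = [210.0, 211.0, 209.5]  # sigma_s^f of 3 RA-free reference samples

# All sample/reference combinations, as described in the text:
fractions = [ra_fraction(s, r) for s, r in product(samples, references)]
print(f"mean f_RA = {mean(fractions):.3f}, std = {stdev(fractions):.4f}")
```

Averaging over every combination gives both the mean fraction and a dispersion estimate, which is how the 1.4 % mean standard deviation quoted above was obtained.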
For standard TRIP steels, it is known that the mechanical polishing preceding XRD analysis can affect the results of the measurement. Therefore, based on the work of Zaefferer et al. [ZAE'08], it was decided to perform the XRD analysis using different preparation methods on two samples with different stabilities of retained austenite (30min WQ and 30h WQ). For comparison, two samples (TRIP 800 and Q&P steels) from other studies were also included in the test procedure. All the obtained results are presented in Table 2. The results in Table 2 clearly show that the effect of mechanical polishing is very important in the case of the MMS studied in this work. This effect is significantly less pronounced in the case of the TRIP 800 and Q&P steels. It can also be observed that, even though electrochemical polishing suppresses the major part of the hardened layer left by mechanical polishing, the values obtained by XRD are still slightly below those from the Sigmametry analysis. Such an important effect of mechanical polishing is probably due to the lower mechanical stability of the RA. Further in this work, it will be shown that a part of the RA stability is controlled by the size of the austenite and not only by the carbon content, as in the case of TRIP and Q&P steels. Hence, it is supposed that the mechanical stability of RA stabilized by the size effect is lower than that coming from carbon enrichment. Nevertheless, more studies are still needed to confirm this hypothesis. It is interesting to highlight that, very recently, Matsuoka et al. [MAT'13] reported that grain refinement of austenite is ineffective for the suppression of the deformation-induced martensite transformation. In fact, in the case of the deformation-induced martensite transformation, the multivariant transformation is no longer necessary and a single-variant transformation is favored. Thus, the mechanical stability of austenite is claimed to be independent of the austenite size.
Such an approach is in good agreement with the results obtained in this work. Finally, based on the obtained results, the Sigmametry technique was considered to be more adapted for this study and almost all the RA fractions presented in this work were obtained by this method.

Samples preparation for observations

Small samples of about 20 mm length and 5 mm width were cut in the longitudinal direction from the heads of the tensile specimens. Then, they were mounted in a conductive resin under heat and high pressure. The mounting cycle was shown to have no or very limited influence on the microstructure. Next, the samples were mechanically ground and polished to obtain a mirror surface. To reveal the microstructure, different etchants were used:
• Dino or Marshall etching - for the global microstructure observations in the optical microscope (OM) and/or scanning electron microscope (SEM);
• light Metabisulfite + Dino etchings - for the observations of the microstructure in the SEM, and the following quantifications of the fresh martensite + retained austenite fractions;
• Picral etching - for the observations of carbides in the SEM.
First, characterization using optical microscopy (OM) was systematically performed. This type of observation provides initial information about the microstructure and permits a qualitative analysis of the phase components. As well, a macroscopic vision of the steel structure is obtained, which is necessary for the analysis of the global homogeneity of the sample and of the possible decarburization of the edges. Microstructure observations were done using a Zeiss Axiovert 200 MAT microscope (Figure 56). In order to perform a precise quantification of the fresh martensite + retained austenite fractions, the Back Scattered Electrons (BSE) imaging mode was utilized. BSE images allow a higher contrast between phases while smoothing variations inside the martensite islands, which makes the image analysis easier (Figure 58).
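In its simplest form, a surface-fraction estimate from such BSE images reduces to counting pixels above a grey-level threshold. The routine below is a hypothetical stand-in operating on toy data, not the internally developed Aphelion routines actually used:

```python
def phase_fraction(image, threshold):
    """Surface fraction of pixels at or above a grey-level threshold.

    Minimal sketch of a threshold-based fraction estimate; 'image' is
    a 2D list of grey levels (0-255). Toy data, for illustration only.
    """
    above = total = 0
    for row in image:
        for pixel in row:
            total += 1
            if pixel >= threshold:
                above += 1
    return above / total

# Toy 4x4 "BSE image": bright pixels stand for martensite + retained austenite.
img = [[30, 40, 200, 210],
       [35, 45, 205, 215],
       [25, 50, 60, 70],
       [220, 230, 55, 65]]
print(phase_fraction(img, threshold=180))  # 6 bright pixels out of 16
```

In the real analysis the threshold is chosen per image set, and the measured area fraction is taken as an estimate of the volume fraction, as is standard in stereology.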
Quantitative analysis - Aphelion and Image J

Microstructural quantifications were performed on the SEM images. As said beforehand, the fresh martensite + retained austenite fractions were estimated using the Aphelion® semi-automatic image analyzer software [ADC'15] with internally developed routines. BSE images of the samples etched with light Metabisulfite + Dino etchings were acquired at a magnification of 3000. The fractions were determined using a simple threshold method (Figure 58). A preliminary comparison with the standard point counting showed that this method was more effective with a small number of available images. For each sample, 10 images were analyzed, which represents a surface of about 400 000 µm². The confidence range on the mean fractions was estimated to be about 10 % of the resulting value. The austenite size was evaluated using the Image J® software [SCH'12], [COL'07], [GIR'04] and a manual selection procedure. In that case, standard Secondary Electrons images obtained by SEM FEG at a higher magnification (×5000 and ×10000) were preferred. As will be explained in Chapter 3, two morphologies of austenite were observed. Thus, to estimate the size of the austenite, two types of measurements were considered:
1) width of the laths: the distance between the two interfaces of a given lath in the normal direction;
2) equivalent diameter of the polygons: the distance between the two interfaces of a polygon in a random direction.
An example of the performed measurements is shown in Figure 59. 100 measurements were performed for each type of morphology: laths and polygons.

Fine characterization tools

Transmission Electron Microscope (TEM)

High magnification observations of the microstructure, as well as Mn content measurements in the different phases, were performed using a JEOL 2100F TEM (Figure 60) with a Brüker Energy Dispersive X-ray Spectrometer (EDX). Two kinds of samples were used for the TEM observations: replicas and thin foils.
Replicas were prepared using the following procedure:
• standard mechanical polishing down to 1 µm;
• etching first with 2 % Nital, then Picral;
• deposition of a cellulose acetate (Biodène) film;
• 20min drying, then peeling-off (detachment) of the film from the sample;
• carbon vapor deposition (~30-50 nm) on the film;
• cutting of the film into squares of 4 mm² and placing them on a copper grid;
• dissolution of the cellulose acetate film in a mixture of solvents.
In order to prepare a thin disk-shaped foil, the steel sheet was ground mechanically to about 80 µm thickness, then twin-jet electropolished in a solution of 5% perchloric and 95% acetic acids at about 15°C. Convergent Beam Electron Diffraction (CBED) in STEM mode (Scanning TEM) was used to distinguish between the phases present in the microstructure. The obtained Kikuchi patterns were indexed using the "Euclid's Phantasies V1.1" software developed in the LEM3 laboratory (University of Lorraine) [FUN'03]. This methodology is very similar to the Electron Back Scattered Diffraction (EBSD) mapping used in SEM. However, the accuracy of the orientation determination on the patterns generated by TEM can be better than 0.1°, thus making this tool opportune for ultra-fine scale studies. In certain cases, when indexing was not possible or doubtful, the standard Selected Area Diffraction (SAD) in TEM mode was used.

Electron probe microanalyzer (EPMA)

Manganese distribution and segregations were characterized using a CAMECA SX100 electron probe microanalyzer (EPMA). An example of the CAMECA SX100 EPMA is presented in Figure 61. The function of the EPMA can be explained in the following manner: the generated electron beam strikes the analyzed sample and the interaction between the primary electrons and the sample atoms provokes the emission of characteristic X-rays.
These emitted X-rays are analyzed in a Wavelength-Dispersive Spectrometer (WDS), owing to single-crystal monochromators which diffract a precise wavelength onto a detector where the photons are counted. Then, using a reference sample, the elemental concentrations can be determined. The qualitative and quantitative distribution (mapping) of chemical elements can be obtained by scanning a small area of the sample [BEN'87]. Generally, with an electron beam of about 1 μm diameter, the analyzed volume of material at each measuring point is approximately 1 μm³. In this work, the following test conditions were used for acquiring the manganese maps:
- Accelerating voltage - 15 kV;
- Current - 2 μA;
- Step size - 0.5 μm;
- Time step - 0.1 s/pixel.
Mn quantitative maps were built by combining the intensity in the distribution map and the intensity given by the quantitative analysis.

NanoSIMS analyzer

Recently, in the studies of Valle et al. [VAL'06] and Drillet et al. [DRI'12], it was shown that the NanoSIMS (SIMS for Secondary Ion Mass Spectrometry) technique is a powerful tool for the characterization of the carbon distribution in steels. For this reason, a Cameca NanoSIMS 50 (Figure 62) was used in this work to confirm the C level of retained austenite and martensite (prior austenite) in certain samples. The specimen preparation for NanoSIMS analysis is rather simple. Moreover, the advantages of the Cameca NanoSIMS 50 are its high lateral resolution (about 50 nm) and its high sensitivity (the detection limit for carbon is 0.0063 wt.%). Secondary ion mass spectrometry is based on the analysis, with a mass spectrometer, of secondary ions induced by an initial ion bombardment. In fact, the surface of a solid sample is sputtered by primary heavy ions of a few keV energy. At contact with the surface of the target sample, this primary beam generates a sequence of atom collisions, followed by the ejection of atoms and atom clusters.
A fraction of the emitted particles is spontaneously ionized - these are the "secondary ions". Such secondary ion emission supplies information about the chemical composition of the emitting area. The ions of interest are isolated using the mass analyzer. Finally, the ion detection system records the magnitude of the secondary ion signal and presents these data in the form of quantitative maps for a chosen element. A schematic representation of the SIMS technique is shown in Figure 63. The major difference between the static SIMS and dynamic SIMS (NanoSIMS) modes is the depth resolution. The first analyses only the surface of the sample, mostly limited to the topmost monolayer. The second has a depth resolution ranging from sub-nm to tens of nm, therefore giving the possibility to investigate the bulk composition and the depth distribution of trace elements. In this work the NanoSIMS 50 machine was used to obtain 12C ion maps of the studied samples in order to estimate the carbon content of austenite. The samples were mounted in an aluminum ring using Wood's alloy and placed in a sample holder (Figure 64). Areas of 10 × 10 μm² were scanned using a focused Cs⁺ primary ion beam (<1 pA) and the SIMS intensities were measured. Before the measurements, the surface of the samples was pre-sputtered in order to eliminate any carbon contamination.

2.1.7 Thermo-Calc and DICTRA software

Thermodynamic simulations were performed using the Thermo-Calc® and DICTRA®™ software for a deeper study of phase transformations. The Thermo-Calc® software was used for two major objectives:
1. to visualize the effect of Mn on phase transformations using pseudo-binary diagrams;
2. to obtain the expected phase fractions and compositions under ortho-equilibrium conditions; this information was used for:
a. selection of the annealing temperature;
b. comparison with the experimental results.
DICTRA®™ software coupled with Thermo-Calc® was used for thermo-kinetic simulations of the phase transformations during annealing. The aims of these simulations were to assist the analysis of the experimental data and to propose a way to predict the final microstructure. Starting from the 1980s [HIL '80], [SUN '85], researchers have continuously developed the "CALPHAD" (CALculation of PHAse Diagrams) approach in order to build complete thermodynamic databases for different alloy systems. These databases allow different kinds of calculations of thermodynamic properties as functions of temperature, composition, pressure, etc., construction of phase diagrams and evaluation of other thermodynamic factors and parameters. To use these databases, different software packages were created. For instance, the Thermo-Calc company developed two of them: Thermo-Calc® (TCC™ and TCW™) for thermochemical equilibrium calculations based on the minimization of the Gibbs energy, and DICTRA®™ for kinetic simulations based on diffusion theory [AND '02]. Both programs were considered suitable for this study and were used with the TCFE5 and MOB2 databases [THE '15]. Thermo-Calc simulations were performed in Chapter 2.2 (so-called "Alloy design") in order to evaluate the effect of Mn on the phase transformations in the steel and to predict the different phase fractions at equilibrium conditions. DICTRA was used to simulate the dissolution of cementite and the formation of austenite under local equilibrium conditions. The initial conditions and the results of the simulations are reported in Chapter 3.

Selection of composition and treatment

Selection of composition and elaboration/characterization of obtained steel

The precursor works of Matlock [MAT '06] show that a certain level of retained austenite (from 20% to 30%) is necessary to achieve the mechanical properties, especially the balance between strength and ductility, required to fulfill the requirements of 3rd generation AHSS steels.
It is well known that an increase in the concentration of both carbon and manganese, which are strong austenite stabilizers, results in an improved stability of austenite at room temperature. Furthermore, a reduction in the austenite grain size is well known to increase the austenite stability by suppressing the martensite transformation. These two aspects, austenite composition and size, were considered for the development of steels with a large volume fraction of retained austenite. In particular, such a high fraction of retained austenite can be achieved in ultrafine-grained steels with 5-7 wt.% Mn content and a relatively low carbon content. This type of microstructure provides an excellent combination of strength and ductility thanks to the enhanced TRIP effect. The composition used in this work (see Table 3) follows the aforesaid philosophy. It is worth noting that the carbon and manganese contents were limited to 0.1 wt.% and 5 wt.%, respectively, in order to prevent problems linked to the welding properties and to the resistance of spot welds. At this point, it is also important to mention that, theoretically, considering only the chemical contribution to the M s temperature, the selected composition does not allow stabilizing 30% of retained austenite at room temperature. As will be discussed and demonstrated later in this thesis, the size effect plays a key role in the austenite stability. Vacuum induction melting was used to prepare the steel. The chemical composition of the obtained steel is shown in Table 3. The levels of C and Mn were slightly lower than the targeted ones. The ingot was then reheated to 1200°C and hot rolled with a finishing temperature around 930°C. Coiling was simulated by a slow cooling in the furnace from 625°C. The microstructure of the hot rolled steel consists of bainite and martensite (Figure 66) and its microhardness was evaluated to be around 340 HV.
Afterwards, a 70% cold-rolling reduction was applied to reach the final cold rolled thickness of about 1.2 mm.

Table 3 - Chemical composition of the studied steel (10⁻³ wt.%).

Thermal treatment selection

As discussed in Chapter 1.1, there are globally two ways to anneal a MMS: direct annealing of the cold rolled metal, or double annealing, i.e. austenitization followed by quench and then a second annealing with the corresponding ART phenomena. As shown previously [KIM '81], one of the main advantages of double annealing, when the initial microstructure before the second annealing is martensite and/or bainite, is the very fine resulting microstructure and the particular morphology of its constituents: lath-like and/or fibers. From a mechanical point of view, this provides a better balance between strength and ductility and better damage properties [SUG '02], [SUG '00], [HEL '11], [KRE '11]. From a phase transformation point of view, a finer grain size is supposed to enhance both the kinetics of austenite formation and the stability of austenite. Finally, it should be stated that double annealing with the first annealing in the fully austenitic domain makes it possible to reduce the so-called "heritage effects". Indeed, direct annealing is very dependent on the prior steps of steel processing such as coiling and the cold rolling rate. Normally, in the case of double annealing, due to the full austenitization at the first step, these "heritage effects" have a lower impact. For all the above reasons, it was decided to investigate this type of thermal treatment. However, the choice of the annealing temperature is of major importance since it determines both the fractions of the microstructure constituents and the stability of austenite via its composition and size. In order to better assess the annealing temperature, it was decided to proceed in two steps:
- thermodynamic calculations using the Thermo-Calc software.
This step is necessary to determine the temperature and composition ranges for the existence of the different phases;
- combinatory experiments using gradient batch annealing in order to determine the effect of temperature on the mechanical properties.
Batch annealing was selected as the type of second (intercritical) treatment based on the works of Furukawa et al. [FUR '89], [FUR '94] and Huang et al. [HUA '94], which showed that the optimal retained austenite fraction and mechanical properties were obtained after 3h annealing.

Thermodynamic calculations

Four pseudo-binary diagrams with 1.7 wt.%, 2.7 wt.%, 3.7 wt.% and 4.7 wt.% Mn content were calculated (Figure 67) to visualize the global effect of Mn on the possible phase transformations. The analysis of these pseudo-binary diagrams brings the following outcomes:
 increasing Mn expands the austenite domain to lower temperatures;
 the two-phase domain α+γ is shifted to lower temperatures and slightly expanded at low C concentrations;
 the three-phase region (α+γ+θ) is also expanded, meaning a higher stability of cementite;
 the slope of the solvus between the intercritical and austenitic domains (red line) decreases, leading to a rapid change of the austenite volume fraction with temperature variation.
Then, some ortho-equilibrium simulations were done to obtain the temperature evolution of the phase fractions and their respective compositions for the studied steel (Fe-0.098 wt.% C-4.7 wt.% Mn). A part of the results is shown in Figure 68. Figure 68(a) shows that the complete dissolution of cementite happens at 610°C (blue line) and a fully austenitic structure is obtained at 740°C (red line). This means that the intercritical domain exists in a rather narrow temperature range: from 610 to 740°C. In Figure 68(b) it can be seen that the evolution of the C content in austenite takes the form of a peak, with the highest amount achieved at the temperature at which cementite dissolves completely.
At the same time, the Mn content in austenite continuously decreases down to the value of the steel composition (4.7 wt.%) in the fully austenitic structure. Such an evolution of the chemical composition of austenite suggests that there is a similar peak-type evolution of the retained austenite fraction if only the chemical stability of retained austenite (no size effect) is taken into account. This means that there is an optimum annealing temperature to get the highest amount of retained austenite, as observed in previous studies [FUR '89]. In order to assess this optimum temperature and to obtain the evolution of the retained austenite fraction at room temperature as a function of the annealing temperature, the following calculations were performed. The M s temperature was evaluated using the Andrews relation [AND '65]. In the case of simple C-Mn steels without other alloying elements, it takes the following form:

M_s (°C) = 539 - 423·w_C^A - 30.4·w_Mn^A    (41)

where w_C^A and w_Mn^A are the C and Mn contents in austenite taken in weight percent. Then, the Koistinen and Marburger empirical equation [KOI '59] was taken for the estimation of the retained austenite fraction at room temperature (20°C):

f_RA = f_A · exp[-α·(M_s - T_q)]    (42)

where f_RA is the volume fraction of retained austenite, f_A is the volume fraction of austenite before the martensite transformation (austenite fraction at the end of holding, before quenching), T_q is the temperature reached during quenching (in this study equal to 20°C - room temperature) and α = 0.011 is a fitting parameter. Finally, using the data from Figure 68 (austenite fraction and its composition), the calculations were done and the results are presented in Figure 69. It can be seen that the peak of retained austenite corresponds exactly to the peak of C content in austenite, which in its turn is related to the temperature of cementite dissolution. From a thermodynamic point of view, the optimum temperature is calculated around 610°C.
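A minimal numerical sketch of Eqs. (41)-(42) is given below: it computes M_s and the room-temperature retained austenite fraction from an assumed austenite composition and fraction. The example values (35% austenite, 0.28 wt.% C, 9 wt.% Mn) are illustrative placeholders, not the thesis data from Figure 68:

```python
import math

def ms_andrews(w_c, w_mn):
    """Andrews relation (41) for plain C-Mn austenite; compositions in wt.%."""
    return 539.0 - 423.0 * w_c - 30.4 * w_mn

def retained_austenite(f_a, w_c, w_mn, t_q=20.0, alpha=0.011):
    """Koistinen-Marburger equation (42): austenite retained after quenching to t_q."""
    ms = ms_andrews(w_c, w_mn)
    if ms <= t_q:
        return f_a  # Ms below the quench temperature: all austenite is retained
    return f_a * math.exp(-alpha * (ms - t_q))

# Illustrative intercritical austenite: 35% fraction, 0.28 wt.% C, 9 wt.% Mn
ms = ms_andrews(0.28, 9.0)
f_ra = retained_austenite(0.35, 0.28, 9.0)
print(f"Ms = {ms:.0f} °C, retained austenite fraction = {f_ra:.3f}")
```

Sweeping such a calculation over the Thermo-Calc output (austenite fraction and composition versus annealing temperature) reproduces the peak-type curve of Figure 69, since a higher C content lowers M_s and thus increases the retained fraction.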
However, it should be noted that kinetic effects, such as cementite dissolution and austenite growth, are not taken into account in this type of calculation. As seen in the pseudo-binary diagrams (Figure 67), the stability of the formed cementite increases with increasing Mn, and thus cementite dissolution in MMS can be sluggish. Knowing that the carbon peak is directly related to the cementite dissolution, one can expect that the peak of the retained austenite fraction will also be controlled by both cementite dissolution and austenite growth. Therefore, the temperature of 610°C determined by the thermodynamic calculations can clearly be considered as a lower bound. In a more pragmatic way, it is possible to approach the optimum annealing temperature by using a combinatory experiment.

Combinatory experiments

As explained previously, it was decided to perform a specific thermal treatment (batch annealing) in a furnace in which it is possible to control the temperature gradient. The principle of the gradient batch annealing furnace was explained in the previous section. The furnace was programmed to obtain a linear gradient of the soaking temperature between 600 and 700°C (range defined from the aforementioned thermodynamic calculations). The scheme of the annealing cycle is presented in the left part of Figure 70. Based on the literature and preliminary dilatometry trials, it can be reasonably supposed that the slow cooling does not significantly affect the microstructure. Indeed, it was found that ferrite formation does not occur at very low cooling rates and, even during holding in the domain with a high driving force for ferrite formation, the transformation was very sluggish. This is probably due to the high Mn content of austenite before cooling. A steel sheet 230 mm long and 150 mm wide was heat treated. The obtained gradient of temperature as a function of the position in the sheet is shown in the right part of Figure 70.
For each temperature, two "mini" tensile samples with a gauge length of 20 mm and a section of 5 mm × 1.2 mm were prepared. Then, tensile tests were performed. The evolution of the mechanical properties as a function of the holding temperature is presented in Figure 71. A table of all the results and all the tensile curves can be found in Annex 2. These results clearly show that there is an optimum domain of temperatures where a good balance between strength and ductility can be achieved. For the studied composition, it appears that the optimum range of temperatures is between 640 and 660°C. As anticipated in the previous part of the work, these temperatures are higher than those predicted by the ortho-equilibrium calculations, due to the kinetic effects. XRD analysis was performed on all samples to study the evolution of the retained austenite fraction. The results are presented in Figure 72. Taking into account only the evolution of the retained austenite fraction, one could conclude that the optimal temperature is ~690°C, but this would be an erroneous outcome. In fact, the optimum ductility (elongation) was obtained in the temperature range 640-660°C, as already stated. This is due to one more parameter that introduces another level of complexity into the choice of the optimal treatment: the mechanical stability of retained austenite. As can be seen in Figure 72, different retained austenite fractions can provide the same level of strength-ductility balance and, conversely, the same retained austenite fraction can result in different strength-ductility balances. This indicates directly that the strength-ductility balance is related to both parameters: the fraction and the stability of retained austenite. At this stage of the work, it was not possible to gain a better understanding of the mechanical stability of retained austenite.
Based on the obtained results, it was decided to select a soaking temperature of 650°C and to investigate the impact of the soaking time on the evolution of the microstructure and the mechanical properties. The decisions taken in part 2.2 can be summarized in the following manner:
1. selected composition: 0.1 wt.% C and 5 wt.% Mn;
2. selected heat treatment: double annealing with a first austenitization above Ae3 followed by intercritical batch annealing at 650°C.
As already stated, the final mechanical properties of steels are closely related to the microstructure parameters: nature, composition, volume fraction, size and morphology of the microstructure constituents. In turn, the microstructure depends on the applied thermomechanical treatments. In this chapter, the microstructure evolutions resulting from the double annealing treatment will be presented and discussed. According to the considerations described in Chapter 2.2, the heat treatment schematically presented in Figure 73 was chosen for the further investigations. It consists of two thermal cycles. First, a complete austenitization at 750°C for 30 minutes followed by a water quench. Second, an intercritical annealing at 650°C with different holding times (3min, 10min, 30min, 1h, 2h, 3h, 7h, 10h, 20h and 30h) followed by a water quench at the end. All double annealing treatments were performed in a Nabertherm furnace under Ar atmosphere to avoid any decarburization. The mean heating rate in the furnace was about 5°C/s. However, at the end of the heating section the heating rate was much lower; finally, the holding temperature ±5°C was reached after ~200s.

Characterisation of the microstructure after austenitization

The microstructure after the first annealing cycle (austenitization followed by quench) was characterized using different techniques: optical microscopy (OM), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray diffraction (XRD).
Figure 74 presents the observations (OM, SEM, TEM) of the obtained microstructure. It can be seen that the resulting microstructure is composed of lath martensite. By applying image analysis to the OM images after Dino etching, the prior austenite grain size was estimated to be around 4 µm. Furthermore, optical observation after Klemm etching revealed the presence of a small quantity of retained austenite in the martensite matrix, as highlighted in Figure 75(b). A finer characterization of retained austenite performed with TEM (Figure 75(d)) confirms this observation. The evaluation of the retained austenite volume fraction from the XRD spectrum (Figure 75(c)) was not possible due to its very low value. Thus, it was considered that there was less than 3% of retained austenite. A mean Mn content of 9 wt.% in the retained austenite islands was measured by EDX. This value was considered relatively high. As a consequence, the microstructure at the beginning of the second cycle consists of a fully martensitic structure in which small quantities of retained austenite are present. The presence of retained austenite after quenching can be due to the existence of Mn microsegregations. Therefore, the Mn distribution was analyzed using EPMA and the results are presented in Figure 76. As expected, microsegregation of Mn can be observed. Nevertheless, the mean segregation level is only about 5.5 wt.%, which is a rather low value in comparison with the Mn content measured in the retained austenite islands by TEM-EDX (~9 wt.%). Locally, the Mn content can reach values as high as 15 wt.% (red zones on the Mn map), which is more consistent with the level of Mn necessary for austenite stabilization. Finally, the obtained mean segregation level corresponds to a partition coefficient of about 1.2.
Microstructure evolution during annealing at 650°C

Before going into the core of this section (microstructure characterization), a short recall of the previously obtained results is given hereafter. First of all, the initial microstructure before intercritical annealing is fully martensitic, without any prior deformation. Next, based on the thermodynamic calculations (Chapter 2.2), the expected stable phases are ferrite + austenite at 650°C and a ferrite + cementite mixture at low temperature. The austenite formation can thus result from different phase transformations, including the formation of cementite at lower temperature and its dissolution at higher temperature. It is worth noting that cementite particles would act as preferential nucleation sites for austenite. Therefore, the objective of this section is to characterise and to analyse the microstructural evolution during annealing at 650°C and its role in the stabilization of retained austenite at room temperature.

Microsegregation evolution

Generally, cold rolled high strength steels are subject to microsegregations (especially of Mn) resulting from the former processing conditions. These microsegregations originate from the solidification process and introduce dispersions in the microstructure, which can have a significant impact on the phase transformations, the microstructure evolution and both the mechanical and damage properties. Therefore, it is important to characterize the state of dispersions and microsegregations at the different steps of the heat treatment. It was shown beforehand that the initial (before intercritical annealing) microstructure contains Mn microsegregations. Hence, it was of interest to follow the evolution of the Mn microsegregation during intercritical annealing. For that purpose, four samples (3min, 1h, 10h and 30h) were subjected to EPMA analysis. The obtained quantitative images of the Mn distribution are shown in Figure 77.
The complete data from the EPMA analysis are presented in Annex 3.1. An important redistribution of Mn occurs during such a long intercritical annealing. These analyses suggest that there is a progressive homogenization of Mn in the segregated bands during annealing, as observed in [LAV '49], [KRE '11]. In that case, the Mn diffusion length is expected to be of the order of the initial mean distance between bands, i.e. on the order of 10 µm. The long-range diffusion of Mn implies that Mn has to diffuse in a two-phase matrix of ferrite and austenite. This means that the effective diffusivity of Mn may be different from that in pure ferrite and in pure austenite. This phenomenon has already been discussed in a previous work, in which it was shown that such long-range Mn homogenization was controlled by an apparent diffusivity similar to the one in ferrite. Both the small segregated band spacing and the ultra-fine size of the microstructure constituents are supposed to enhance this phenomenon in the particular case of MMS. It is worth noting that another type of Mn partitioning may occur at a smaller scale. It is mainly linked to the interactions between Mn and the migrating α/γ interface during austenite growth. This partitioning is driven by short-range diffusion and depends on the temperature and microstructural parameters. This topic will be discussed more deeply later using more precise data from TEM hypermaps.

Evolution of cementite precipitation state

Carbides were first analyzed during the heating process using interrupted cycles (heating to the target temperature followed by He quench). Three different temperatures were considered: 550°C, 600°C and 650°C. The observations were done on TEM replicas. The Mn content of the carbides was measured by EDX. All the obtained results can be found in Annex 3.2. Figure 78 highlights only the most significant ones. The mean Mn content of the carbides at 650°C was determined from the EDX measurements. Its value was estimated to be about 10 wt.%.
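Both the long-range Mn homogenization discussed above and the sluggish cementite dissolution are diffusion-controlled, so a back-of-the-envelope diffusion length L ≈ √(D·t) gives a feel for the relevant length scales. The sketch below uses an Arrhenius expression for Mn diffusivity in ferrite; the pre-exponential factor and activation energy are illustrative assumptions for the sake of the example, not measured or thesis values:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def diffusion_length_um(d0_m2s, q_jmol, temp_c, time_s):
    """Characteristic diffusion length L = sqrt(D*t), returned in µm."""
    d = d0_m2s * math.exp(-q_jmol / (R * (temp_c + 273.15)))  # Arrhenius diffusivity
    return math.sqrt(d * time_s) * 1e6

# Assumed Arrhenius parameters for Mn lattice diffusion in ferrite
D0 = 1.5e-4    # m^2/s (assumed pre-exponential factor)
Q = 233_000.0  # J/mol (assumed activation energy)

for hours in (1, 10, 30):
    L = diffusion_length_um(D0, Q, 650.0, hours * 3600.0)
    print(f"{hours:>2} h at 650 °C -> L ~ {L:.2f} µm")
```

With these assumed parameters, the lattice-diffusion length remains on the micron scale even after tens of hours at 650°C, which illustrates why fine-scale features (small band spacing, ultra-fine constituents) are invoked above to assist the observed long-range homogenization.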
Then, the time evolution of the carbides during holding at 650°C (3min, 1h, 2h and 3h) was analyzed using TEM thin foils. The main results are presented in Figure 79. In a general manner, the carbide precipitation state depends on the time at 650°C. For a short holding time (3min), a high volume fraction was observed. In terms of Mn content, there were two populations of carbides: one with a high Mn content, around 30 wt.%, and another with a lower one, about 15 wt.%. Also, two types of morphology were detected: rod-like and polygonal (more or less rectangular). After 1h holding time, the volume fraction of carbides decreases, but there is still a certain amount of carbides with bimodal Mn content and shape. After 2h holding time, the sample can be considered as almost carbide-free: only a few carbides with a high Mn amount (~25 wt.%) remain in some areas. Finally, the carbides are completely dissolved after 3h holding. It is also important to note that it was not possible to establish any link between the carbide morphology and the Mn content from the analyzed data. Using Convergent Beam Electron Diffraction (CBED) and Selected Area Electron Diffraction (SAD), it was determined that the crystallographic structure of the observed carbides corresponds to cementite. This is coherent with the conclusions of Luo et al. [LUO '11], [LUO '13]. These results show the presence of cementite at 650°C for holding times as long as 2 hours. This contrasts with the equilibrium thermodynamic calculations carried out in Chapter 2.2. Indeed, the temperature of complete cementite dissolution was determined to be around 610°C. This can be attributed to kinetic effects and will be clarified further by DICTRA simulations.

Time-evolution of austenite and ferrite

The samples obtained after intercritical annealing at 650°C were first characterized by OM (Figure 80). Dino etching [ARL '13] was used to reveal the microstructure. Basically, a refined microstructure is evidenced by OM even after 30h holding.
Thus, using OM it was only possible to get a macroscopic vision of the microstructure. For a more detailed analysis, all the microstructures were characterized using FEG SEM (Figure 81 and Figure 82). The explanation and details of the observed microstructures are given in Figure 82. The latter shows the microstructure after 10h holding at 650°C, consisting of at least three phases: ferrite (F), retained austenite (RA) and fresh martensite (M). The detailed analysis of the microstructures leads to the following conclusions:
1) austenite (retained austenite and fresh martensite) and ferrite exhibit two different morphologies: lath-like and polygonal ones;
2) at holding times longer than 3h, a higher fraction of fresh martensite was observed, and most of the martensite was polygonal;
3) it can be supposed that the polygonal ferrite grains result from a recrystallization process starting from the initial martensite structure. The first polygonal ferrite grains were observed already after 30min holding. Recrystallization of martensite is a complex subject and needs a dedicated study, which is not in the scope of this work.
The rate of the austenite transformation is very rapid at the beginning and then becomes more and more sluggish. The equilibrium fraction is achieved after 7h holding. The observed austenite evolution is apparently consistent with the work of Speich et al. [SPE '81], in which it was shown that there are 3 stages of austenite growth: two first rapid ones, controlled by C and Mn diffusion in ferrite, and a third sluggish stage controlled by Mn homogenization in austenite. The analysis of the Mn distribution between austenite and ferrite as a function of time was done using local TEM-EDX measurements and TEM-EDX hypermaps. More complete information from the overall TEM-EDX analysis can be found in Annex 3.3. Figure 84 presents the hypermaps for the samples with 1h, 2h, 3h and 10h holding time.
On each hypermap, the Mn-rich zones, considered to be RA or FM, and several Mn-depleted zones (F) were selected and quantified. An example of such a selection for the samples after 2h and 3h holding at 650°C is shown in Figure 85. For each sample, at least 10 austenite and 5 ferrite zones of the hypermap were analyzed. The average values of the Mn content in both austenite and ferrite of the different samples are reported in Table 4. As calculated with the Thermo-Calc software, the equilibrium Mn contents of ferrite and austenite at 650°C are 2.3 and 9.1 wt.%, respectively. From the data in Table 4, it can be seen that the Mn content of ferrite decreases with holding time and achieves the equilibrium value after 10h. In a general manner, the Mn content in austenite decreases with time from 10 wt.% to 8.5 wt.%. The high Mn value at the beginning of the transformation may be explained by the fact that austenite nucleates preferentially in Mn-rich regions such as carbides. After longer times, the measured Mn content in austenite is close to the equilibrium one (8.5 wt.% versus 9.1 wt.%). As a consequence, and surprisingly, the system reaches equilibrium after a relatively short time of 10h; the equilibrium being defined from the calculated volume fractions of the phases and from their respective compositions. It was also decided to measure the carbon content of austenite for the samples with 1h and 30h holding time at 650°C using the NanoSIMS technique. Areas of 10 × 10 μm² were scanned using a focused Cs⁺ primary ion beam (<1 pA) in the NanoSIMS 50 machine, and the SIMS intensities were measured and plotted as maps (in particular, 12C ion maps are necessary for the estimation of the carbon content).
Image analysis was then performed on the obtained 12C ion maps and the evaluation of the carbon content was done according to the method described in Chapter 2, which is mainly based on the comparison of the intensities between the analyzed sample and a reference sample with a known carbon content. The carbon content was estimated for each selected zone in the 12C ion maps in order to evaluate the dispersion of the C content. Two 12C ion maps and the associated distributions of measurements for the 1h and 30h samples are shown in Figure 86 and Figure 87. It is observed that the measured carbon contents show a relatively low scatter. The mean carbon contents measured in the samples after 1h and 30h holding are close to 0.36 wt.% and 0.27 wt.%, respectively. These values correspond very well to the C content calculated from the mass balance that neglects carbides (Figure 86 and Figure 87) and thus suggest that a major part of the carbides is completely dissolved after a one hour treatment at 650°C.

Time-evolution of retained austenite and martensite

The volume fraction of retained austenite was evaluated by saturation magnetization measurements and that of fresh martensite was deduced from image analysis. The evolutions of the retained austenite and martensite fractions with holding time at 650°C are shown in Figure 88. The time-evolution of both retained austenite and fresh martensite can be divided into three successive steps:
- at the beginning of the transformation, i.e. for times shorter than 1h, all the prior austenite was stabilized at room temperature;
- a decrease of RA and a concomitant increase of FM is observed for times between 1h and 10h;
- a final stage where both RA and FM seem no longer to evolve.
In addition, the evolution of the RA fraction with holding time has a peak-type form, which was already reported in the literature [HUA '94], [ARL '12-1] without any clear explanation.
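Two of the mass balances underlying this section can be sketched together: the Mn lever rule, which links the bulk Mn content and the equilibrium phase compositions quoted for Table 4 to the equilibrium austenite fraction, and the carbon balance neglecting carbides, used above to cross-check the NanoSIMS means. The austenite fractions fed to the carbon balance below are hypothetical placeholders chosen for illustration (the measured fractions are given only graphically in Figure 88):

```python
def lever_rule(w_bulk, w_ferrite, w_austenite):
    """Solute mass balance between two phases -> equilibrium austenite fraction."""
    return (w_bulk - w_ferrite) / (w_austenite - w_ferrite)

def carbon_in_austenite(w_c_steel, f_austenite, w_c_ferrite=0.0):
    """Carbon balance neglecting carbides (ferrite carbon taken ~ 0):
    w_C_steel = f_A * w_C_A + (1 - f_A) * w_C_F  ->  solved for w_C_A."""
    return (w_c_steel - (1.0 - f_austenite) * w_c_ferrite) / f_austenite

# Mn lever rule with the 650 °C equilibrium contents quoted in the text
f_eq = lever_rule(4.7, 2.3, 9.1)
print(f"Equilibrium austenite fraction from the Mn balance ~ {f_eq:.2f}")

# Carbon balance with hypothetical austenite fractions
for label, f_a in (("1h", 0.27), ("30h", 0.36)):
    w_c = carbon_in_austenite(0.098, f_a)
    print(f"{label}: carbon in austenite ~ {w_c:.2f} wt.%")
```

Note that feeding the 0.098 wt.% bulk carbon into the balance with these illustrative fractions returns austenite carbon contents close to the NanoSIMS means (0.36 and 0.27 wt.%), which is the kind of consistency check the text relies on.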
Later in this chapter, it will be shown that this temporal evolution of retained austenite is a fair indicator of the mechanism of austenite stability. TEM analyses of selected samples (3min, 1h, 2h, 3h, 10h and 30h) were also performed in order to study the different phases more finely and to determine the mean Mn content of each phase. Figure 89, Figure 90 and Figure 91 present the TEM images with the indexed diffraction patterns in STEM mode (CBED) and the measured Mn content of certain phases for all the samples. The analysis of the sample after 3min of holding (Figure 89(a) and (b)) shows that the ferrite (indexed as BCC structure) contains a high density of dislocations and that some features also present a substructure. The dislocation density decreases with increasing holding time. However, even after 30h of holding, certain grains still contain some dislocations. The presence of a substructure and the decrease of the dislocation density with time are certainly indicators of martensite recovery phenomena, as already seen in the past by Speich [SPE '69], [SPE '72]. Surprisingly, it appears that recovery is delayed to longer times. The high Mn content is supposed to be the reason for such sluggish recovery kinetics. In fact, the Mn effect is thought to be twofold. First, the increase of Mn results in a lower intercritical temperature range, so that the recovery of martensite happens with a lower driving force. Second, Mn atoms can interact with dislocations or grain boundaries by a solute drag effect [GOR '62], [PET '00], [REI '06]. It can be seen in Figure 89(b) that the ferrite of the sample with 3min holding time has a Mn content of about 4 wt.% and that the size of the formed austenite is ultrafine. The sample with 1h holding (Figure 89(c) and (d)) presents coarser austenite and a lower Mn content in both ferrite and retained austenite. Retained austenite was further detected in all the samples. Its mean Mn content was estimated to be between 9 and 11 wt.%.
In Figure 90(b) (2h sample), certain austenite grains present some features with a lath morphology inside the grain. Using CBED, it was possible to distinguish some austenitic twins (FCC(Twins)) and alpha prime martensite (BCC(M)). Figure 90(b) also reveals some dislocations in the ferrite grain (outlined by the blue line) adjacent to the austenite grains. It is thought that this is due to the accommodation of transformation stresses.

Geometrical and topological aspects

The analysis of both the size and morphology of phases is a key point to better understand the microstructure formation. By using SEM images at relatively high magnification (×10000, example in Figure 59), it was possible to perform manual measurements of the size of austenite by considering martensite and retained austenite (FM+RA) in the final microstructure. For each holding time, 200 austenite features were measured by image analysis from SEM images. Both the lath-like and polygonal morphologies were distinguished and their mean size was measured. Table 5 summarizes the size measurements and includes the volume fraction of austenite at a given time for holding at 650°C. It is clear that the low statistics and manual operations lead to uncertainties regarding the size measurements. However, the values obtained for lath features are in rather good agreement with those found by Luo et al. [LUO'11]. In an earlier study ['80], it was observed that austenite nucleates at lath and packet boundaries and grows preferentially along the laths, consequently producing a structure with a lath-like morphology of ferrite and austenite. However, the complete vision of the sequence of carbide precipitation and austenite nucleation and growth was not completely obvious.
Based on the different observations, it was proposed to consider the following mechanism of austenite nucleation and growth:
• precipitation of carbides along the lath and packet boundaries and at the triple junctions of prior austenite grains;
• nucleation of austenite on the previously precipitated carbides (along the lath and packet boundaries and at the triple junctions);
• preferential growth of the austenite nucleated at lath and packet boundaries along these units.
A comparison of the austenite formation mechanisms from the initial fresh martensite structure (not deformed) and from the deformed structure of MMS is schematically described in Figure 92. Figure 92(A) shows the nucleation and formation of austenite from an initial non-deformed martensite. In this case, as stated previously, austenite nucleates and grows both on the lath and packet boundaries and at the triple junctions of prior austenite grains. Consequently, the final morphology of the austenite is twofold: lath-like and polygonal features. In the case of the deformed martensite structure (Figure 92(B)), the scenario is different. Cementite precipitation is concomitant with recovery and recrystallization of ferrite. Therefore, cementite precipitates on the prior austenite grain boundaries and on the lath or packet boundaries, but also on the boundaries of new grains and/or recovery cells. Recrystallization before the austenite transformation is partial, as observed in an earlier study [ARL'12-2]. Under these conditions, austenite nucleates preferentially on the cementite located on the recrystallized ferrite boundaries. At the same time, there are also two other possible sites for austenite nucleation: prior austenite grain boundaries and lath or packet boundaries. Finally, further austenite growth and ferrite recrystallization result in a polygonal microstructure of ferrite and austenite.
A comparison of the experimentally obtained microstructures from the initial martensite structure (not deformed) and from the deformed structure of MMS is shown in Figure 93. It should be stated that both microstructures have a similar fraction of austenite (martensite + retained austenite). The obtained morphologies correspond well to those described by the two mechanisms in Figure 92. However, it should be noted that the scheme proposed in Figure 92 cannot be generalized. Cementite precipitation, recrystallization and austenite formation depend on several parameters like the heating rate and path, the initial structure and the steel composition. Therefore, depending on these parameters, the morphology of the final microstructure can be different.

Overall view on the obtained experimental data

The microstructure evolution during ART annealing at 650°C was characterized using EPMA, SEM, TEM and NanoSIMS techniques. EPMA results revealed a long-range Mn homogenization controlled by a Mn diffusivity close to that in ferrite. The small interspacing of the microsegregation bands and the ultra-fine microstructure enhance this phenomenon. SEM characterization showed that the microstructures contained at least three phases: ferrite, retained austenite and fresh martensite. A double morphology (lath-like and polygonal) of the analyzed features was revealed and a complete mechanism of such microstructure formation was proposed. From the image analysis of SEM pictures, it was found that the kinetics of austenite transformation was very rapid in the beginning and then became smoother. The size evolution of austenite was also evaluated. The retained austenite fraction was determined by the saturation magnetization method. It was found that an important amount of RA was stabilized at room temperature after final cooling. The evolution of the RA fraction with holding time has a particular peak-type form.
Finer TEM observations showed that carbide dissolution was sluggish: a small amount of non-dissolved carbides was still present even after 2h of holding and contained a high amount of Mn. The different phases were distinguished using CBED in STEM mode. An important dislocation density in ferrite was observed. The Mn content of the different phases was characterized using TEM-EDX measurements. Finally, the C content of two samples, 1h and 30h, was determined by the NanoSIMS technique. The obtained values were close to those calculated from the mass balance neglecting the presence of cementite. All the obtained quantitative data are put together in the following table.

Discussion of main results: experimental/modelling approach

As pointed out in the literature review, the retained austenite fraction and its stability have a significant influence on the final mechanical behavior of MMS. Under mechanical loading, retained austenite transforms into martensite, thus providing additional ductility and strain hardening known as the TRIP effect. The rate of strain-induced martensite transformation is linked to the fraction and stability of retained austenite. Both are directly linked with the mechanism of austenite formation during holding. In this part, we propose to discuss more deeply the kinetics of austenite growth, the stability of austenite at room temperature and their interactions. A double experimental/modelling approach is proposed here.

Mechanisms of austenite formation

Effects of the representative volume

It is proposed to study the mechanisms of austenite growth using the DICTRA software, which is mainly based on a mean field approach. In this case, the choice of the equivalent representative volume of the microstructure is critical, all the more so when the studied microstructure is complex. The linear geometry was chosen for the main reason that the experimental observations revealed that the lath-like morphology is dominant, especially at the beginning of holding.
The configuration for the DICTRA calculations is given in Figure 94. Three phases were considered: θ-cementite, α-ferrite and γ-austenite, the latter being initially set as an inactive phase at the α/θ interface. Consequently, austenite appears when the driving force for its nucleation becomes higher (in absolute value) than a threshold value calculated by DICTRA under local equilibrium conditions. The simulation cycle was set as follows: starting at 400°C, heating to 650°C at a ~1°C/s heating rate, then holding at 650°C. The characteristic length L_α (see Figure 94) was taken equal to the half-distance between the martensite laths. This length was approximately estimated to be 150 nm. The cementite region size (L_θ) was then calculated from the classical mass balance considering that the volume fraction of cementite is 1.45%, the value that corresponds to the equilibrium fraction at 400°C. The obtained cementite thickness was 2 nm. The Mn content of cementite was considered to be 10 wt.%, as determined in section 3.2.2 using EDX in TEM. Both the Mn and C contents of ferrite were then calculated according to the mass balance: 4.53 wt.% and 2×10⁻⁶ wt.%, respectively. The results in terms of kinetics of cementite dissolution and of austenite formation are presented in Figure 95. In a general manner, the kinetics of austenite is relatively well predicted, even if the calculated kinetics is abrupt between 30min and 3h. However, the calculated dissolution rate of cementite is rapid (complete dissolution after ~800s). This seems contradictory with the observed experimental results: carbides are still present after 2h of holding. From a theoretical point of view, there are many reasons that can explain such a result. First, microstructure dispersions such as the size distribution and the heterogeneity of compositions are not taken into account.
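The mass balance used to set up the 1D cell can be sketched as follows. This is a simplified check: volume fractions are treated as length fractions in the linear cell, phase densities are neglected, and the nominal Mn content of about 4.6 wt.%, which is not stated explicitly above, is an assumption.

```python
# Linear-cell mass balance for the DICTRA configuration (simplified sketch:
# volume fractions treated as length fractions, densities neglected, and the
# nominal Mn content of ~4.6 wt.% is an assumption of this sketch).
f_theta = 0.0145            # equilibrium cementite fraction at 400°C
L_alpha = 150.0             # half inter-lath distance, nm

# cementite thickness from L_theta / (L_theta + L_alpha) = f_theta
L_theta = f_theta * L_alpha / (1.0 - f_theta)    # ~2.2 nm, rounded to 2 nm

# Mn partitioning: cementite holds 10 wt.% Mn (TEM-EDX), the rest in ferrite
w_mn_nominal = 4.6                               # assumed nominal Mn, wt.%
w_mn_theta = 10.0                                # Mn in cementite, wt.%
w_mn_alpha = (w_mn_nominal - f_theta * w_mn_theta) / (1.0 - f_theta)
```

With these inputs, the balance reproduces the values used in the simulations: a cementite thickness of about 2 nm and a ferrite Mn content close to 4.5 wt.%.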
This can explain why some carbides persist after rather long holding times. Second, the mean field approach used in this work imposes the choice of a characteristic length (L_α) supposed to be representative of the microstructure state. This distance, very difficult to determine because the microstructure is quite complex, predetermines the size of cementite and can strongly influence the calculated kinetics. For example, the corresponding size of cementite in the initial state is 4 nm, which is far from the measured one and would partially explain why the kinetics of cementite dissolution is faster than the observed one. In order to better understand the influence of the calculation parameters and to improve the predictions, some complementary calculations were carried out with the following considerations:
- the cementite region size was varied from 1 to 10 nm (this implies a variation of the ferrite region size from 75 to 750 nm);
- the Mn content of cementite was varied from 10 to 30 wt.%.
The effects of the cementite size and of its Mn content on the kinetics of cementite dissolution and austenite formation are shown in Figure 96 and Figure 97. As expected, it was found that both kinetics strongly depend on the considered cementite size (and thus on the ferrite region size) and on the Mn content in cementite. It is known [GOU'12-1] that an increase of the Mn content in cementite decreases the driving force for cementite dissolution and influences the kinetics of austenite growth. This effect is clearly illustrated in Figure 97. However, the effect of the Mn content appears less pronounced. Furthermore, it is shown that it is not possible to describe simultaneously the experimental time for complete dissolution of cementite and the experimental kinetics of austenite growth using a given pair of cementite size and Mn content.
It is worth noting that the time for the complete dissolution of cementite is not necessarily a relevant parameter in the absence of a measured kinetics of cementite dissolution. Indeed, the measurements of the C content in austenite (Figure 86 and Figure 87) are in very good agreement with the C content calculated from the mass balance that neglects the presence of carbides. This is clear evidence that a major part of the carbides is completely dissolved after one hour of treatment at 650°C. The dispersions existing in the microstructure, particularly those concerning both the size and the Mn content, may explain that some cementite particles persist even after longer holding times at 650°C. In the following analysis and in view of the above, we will consider the first parameter set (L_α=150nm) as sufficiently relevant to describe the austenite growth.

Austenite growth controlled by Mn diffusion

The mechanism and the kinetics of austenite transformation and growth are strongly linked to the evolution of C and Mn at the transformation interfaces α/γ and γ/θ. In the ternary Fe-C-Mn system, the situation is more complicated than in the binary Fe-C one. Indeed, both interstitial and substitutional diffusion occur during transformation and their diffusivities differ substantially. Thus, the composition of C and Mn at the α/γ interface that defines the tie-line for austenite growth cannot be determined by the tie-line passing through the bulk alloy composition, contrary to the Fe-C system. Furthermore, the austenite formation can be successively controlled by C diffusion in austenite, Mn diffusion in ferrite and, finally, Mn diffusion in austenite [SPE'81]. This variety of possible growth modes for austenite has a strong influence on the kinetics of austenite formation. The analysis of the time-evolution of both C and Mn profiles through the system at 650°C is given in Figure 98. It provides some clarifications about the mechanism of austenite growth.
The time-evolution of both C and Mn at the γ/θ and α/γ interfaces defines the tie-lines for cementite dissolution and austenite growth. In a first step, the focus is put only on the cementite dissolution process. It is worth noting that during this stage austenite grows into ferrite. From a theoretical point of view, the tie-line for transformation depends on both the kinetic and thermodynamic properties of the studied system. However, in this specific case, it is shown that the contents of both C and Mn at the γ/θ and α/γ interfaces do not significantly evolve with time. The C and Mn contents are close to 0.45 wt.% and 8 wt.%, respectively, at the α/γ interface (austenite side) and close to 0.5 wt.% and 11.5 wt.%, respectively, at the γ/θ interface (austenite side). As a consequence, the tie-lines for transformation, which are represented in Figure 99, can be considered as time-independent during the cementite dissolution process. The determination of these tie-lines is critical because they govern the austenite growth during a substantial part of the austenite formation. The austenite growth is influenced by the dissolution of cementite into austenite and by the growth of austenite into ferrite (Figure 98). Obviously, these two steps are interdependent and their kinetics depends mainly on the differences of the chemical potentials of both C and Mn between the γ/θ and α/γ interfaces on the austenite side (the small red and blue arrows in Figure 99). It is interesting to note that the values of the carbon content at the γ/θ and α/γ interfaces are quite close (0.45 wt.% versus 0.5 wt.%). This is the reason why the carbon profile in austenite is almost flat (Figure 98). In order to go further, the C activity through the system at the beginning of transformation at 650°C (t=0s) was determined (Figure 100). The C activity is uniform through the system.
As a consequence and surprisingly, the austenite growth is not controlled by C diffusion in austenite, as suggested by many authors for other steels with lower Mn content [SPE'81]. This can be explained by two concomitant effects: the high nominal Mn content and the relatively low transformation temperature. Indeed, all this leads to a high C activity at the α/γ interface (austenite side) and to a strong decrease of the C chemical potential gradient through the system.

Figure 98 - DICTRA calculations of the time-evolution of both C (a) and Mn (b) profiles at 650°C during cementite dissolution. The time t=0s corresponds to the beginning of transformation at 650°C.

Figure 101 presents the evolution of the Mn activity profiles. It can be seen that the Mn activity profile in austenite is relatively flat. It can thus be concluded that the driving force for cementite dissolution is weak, even though the Mn content in cementite is relatively low. This explains why the kinetics of cementite dissolution is relatively slow and why the effect of the Mn content in cementite on the kinetics of cementite dissolution is not significant (Figure 97). Such behavior was already observed in other Fe-C-Mn steels [GOU'12-1]. On the other hand, a gradient of Mn activity is established in ferrite and the austenite growth is thus controlled by Mn diffusion in ferrite. It is interesting to note that a substantial part of austenite is formed before the complete dissolution of cementite. In a second step, when cementite is completely dissolved, an accumulation of Mn at the α/γ interface (austenite side) is highlighted (Figure 102). This is a direct consequence of Mn partitioning from ferrite to austenite. In that case, it is worth noting that the tie-line for austenite growth depends on time, as shown in Figure 103. It is interesting to note that the tie-lines for transformation are located below the equilibrium tie-line for times shorter than approximately 1800s, and above it for longer times.
As a consequence, a decrease of the Mn content at the α/γ interface on the austenite side is expected from a certain time, together with a shrinkage of austenite, as clearly evidenced in Figure 102 and Figure 103. This retraction, also visible in Figure 95(b), is relatively slight in this case because the geometrical locus of the tie-lines is relatively close to the equilibrium tie-line (see the position of the tie-line at 7200s and the equilibrium tie-line in Figure 103). Surprisingly, this retraction is observed for relatively short holding times (after 2 hours). This can be partially attributed to the kinetic effect resulting from the ultra-fine structure that reduces the Mn diffusion length. As a partial conclusion, the major part of the austenite growth is controlled by Mn diffusion in ferrite and, for longer times, by Mn diffusion in austenite. Although the transformation is not controlled by carbon diffusion in austenite, the kinetics of austenite is relatively fast and the shrinkage of austenite occurs, surprisingly, at relatively short times. These effects result from both thermodynamic and kinetic processes. For instance, it is clearly shown that the high nominal Mn content and the relatively low transformation temperature lead to a strong decrease of the C chemical potential gradient through the system. Furthermore, the ultra-fine microstructure seems to play a key role in the kinetics of austenite growth: the kinetics of austenite would have been more sluggish (as it is controlled by Mn diffusion) in the case of a coarse microstructure.

Figure 99 - Tie-lines for cementite dissolution and austenite growth.

Effect of the characteristic length L_α size distribution

Further, by combining both effects of the cementite size and Mn content, it was possible to compare different austenite kinetics simulated with DICTRA with the experimental values (Figure 104). This figure shows that an increase of both the cementite size and its Mn content makes the austenite formation curve smoother.
However, fixed values of the cementite size and of its Mn content cannot completely describe the experimental data. Once again, it was supposed that the heterogeneous distribution of size and/or Mn content in the microstructure can explain such an evolution. In the works of Gouné et al. [GOU'10], [GOU'12-2], it was shown that the dispersions existing in the microstructure play an important role in the kinetics of phase transformations and that taking such heterogeneities into account can improve the predictions of classical approaches. Based on this, it was decided to perform calculations that account for the effect of the size distribution. For that purpose, it was assumed that the distribution of the characteristic length L_α can be described by the Log-normal distribution:

F(L_α) = 1/(L_α·σ·√(2π)) · exp[−(ln L_α − µ)²/(2σ²)]    (43)

where µ and σ are, respectively, the mean value and the standard deviation of the logarithm distribution of the characteristic length L_α. It was supposed that there were no interactions between the features of different size classes. Next, DICTRA simulations were performed for each size class. The austenite fraction obtained from each simulation was then weighted according to the distribution fractions (Figure 105(b)) using the following equation:

f_A(t) = Σ_i f_{L_α}^i · f_A^i(t)    (44)

where f_A(t) is the average volume fraction of austenite at time t, f_A^i(t) is the volume fraction of austenite calculated using DICTRA for a selected size class i, and f_{L_α}^i is the fraction of that size class. The resulting austenite kinetics is presented in Figure 106. As can be seen, a very good prediction of the austenite kinetics was achieved. The reason for such a good result is the correct selection of the L_α distribution. In fact, a high fraction, almost 60%, of small cells (L_α less than 200nm) provides a rapid austenite formation during the first 1000s of holding.
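The size-class weighting of equations (43) and (44) can be sketched as follows. The µ and σ values and the per-class kinetics are illustrative placeholders (the real per-class fractions come from DICTRA runs), not the fitted quantities of this work:

```python
import math

def lognormal_pdf(L, mu, sigma):
    """Equation (43): Log-normal density of the characteristic length L_alpha."""
    return math.exp(-(math.log(L) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        L * sigma * math.sqrt(2.0 * math.pi))

# discrete size classes (nm) and their normalized weights f_L^i
mu, sigma = math.log(200.0), 0.6          # assumed distribution parameters
classes = [50.0, 100.0, 200.0, 400.0, 800.0]
weights = [lognormal_pdf(L, mu, sigma) for L in classes]
total = sum(weights)
weights = [w / total for w in weights]

def f_austenite_class(L, t):
    """Placeholder for the DICTRA result of one size class (NOT the real
    model): smaller cells are assumed to transform faster (~L^2 scaling)."""
    f_eq = 0.36
    return f_eq * (1.0 - math.exp(-t / (100.0 * (L / 100.0) ** 2)))

# equation (44): average austenite fraction over the size classes
t = 3600.0                                # holding time, s
f_avg = sum(w * f_austenite_class(L, t) for w, L in zip(weights, classes))
```

The small classes saturate quickly while the coarse classes lag behind, so the weighted average rises fast at first and then flattens, which is the qualitative behavior described above.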
The austenite fraction generated by the small cells during this period represents roughly 80% of the global fraction. At the same time, with increasing holding time, the influence of the coarser cells increases, thus decreasing the austenite formation rate and making the curve smoother. An interesting fact is that the fractions calculated for the chosen size classes of L_α with the Log-normal distribution are very similar to the measured fractions of the austenite size classes for the sample with 3min holding at 650°C (Figure 107). Actually, there is a direct link between L_α and the austenite size: a given cell size will result in a certain fraction of austenite with a certain size. However, recalculating L_α from the measured austenite size would require a huge number of DICTRA simulations and a complex fitting with the experimentally measured austenite fractions.

Figure 107 - Comparison of the size class fractions calculated from the L_α Log-normal distribution (orange) and the measured distribution of austenite sizes for the sample with 3min holding at 650°C (blue).

On the other hand, this similarity between the measured and calculated size classes is strong evidence of the correct form of the selected L_α distribution. Indeed, the selected Log-normal L_α distribution should result in a Log-normal distribution of the austenite sizes, which is the case observed in this study.

Factors controlling austenite stabilization at room temperature

In the previous section, it was shown that a relatively high volume fraction of retained austenite can be stabilized at room temperature, although its carbon content is much lower than in classical TRIP steels. The origin of such behavior is controversial, as evidenced by some recent lively discussions in [LEE'11], [LUO'12], [LEE'12]. In any event, most agree that the enrichment in alloying elements such as C and Mn is an important factor responsible for the room-temperature stability of retained austenite in medium Mn steels.
However, the effect of grain size on the stability of retained austenite is much more debated. In this section, it will be shown that the experimentally obtained evolution of retained austenite is a very valuable dataset for the study of the factors governing the stability of austenite in MMS. The retained austenite fraction at room temperature can be determined using the Koistinen and Marburger empirical equation [KOI'59] presented earlier in the literature review and in Chapter 2:

f_RA = f_A · exp[−α·(M_s − T_q)]    (45)

where f_RA is the volume fraction of retained austenite, f_A is the volume fraction of austenite before the martensite transformation (austenite fraction at the end of holding, before quenching), T_q is the quenching temperature (T_q=20°C in the present work) and α=0.011 is a fitting parameter. It is assumed that the effect of the chemical composition and other variables is visible in the change of M_s. It can also be seen that the retained austenite fraction (f_RA) depends on the initial austenite fraction at the end of holding (f_A). For the sake of simplicity, it is suggested to describe the austenite fraction during isothermal holding using the Kolmogorov-Johnson-Mehl-Avrami (KJMA) approach [KOL'37], [JOH'39], [AVR'39]. Obviously, a similar result can be obtained using DICTRA as soon as the kinetics of austenite formation is well described. In that case, f_A can be expressed in the following manner:

f_A = f_A^eq · {1 − exp[−k·t^n]}    (46)

where n is the so-called "Avrami coefficient", k is a parameter that depends on the nucleation and isotropic growth rates, and f_A^eq is the volume fraction of austenite under ortho-equilibrium. This value, obtained with ThermoCalc at 650°C, is 0.36. A very good agreement with the experimental data was obtained for the values k = 0.074 and n = 0.38 (Figure 108).
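The two relations above can be combined directly. The short sketch below evaluates them with the fitted values quoted in the text (k = 0.074, n = 0.38 with t in seconds, f_A^eq = 0.36, α = 0.011):

```python
import math

ALPHA = 0.011        # Koistinen-Marburger parameter
K, N = 0.074, 0.38   # fitted KJMA parameters (t in seconds)
F_EQ = 0.36          # ortho-equilibrium austenite fraction at 650°C

def f_austenite(t):
    """Equation (46): austenite fraction during isothermal holding."""
    return F_EQ * (1.0 - math.exp(-K * t ** N))

def f_retained(t, ms, tq=20.0):
    """Equation (45): retained austenite after quench; full retention
    when Ms <= Tq."""
    if ms <= tq:
        return f_austenite(t)
    return f_austenite(t) * math.exp(-ALPHA * (ms - tq))

f_1h = f_retained(3600.0, ms=20.0)   # at the peak Ms ~ Tq, so f_RA = f_A ~ 0.29
```

Raising M_s above T_q in this sketch immediately reduces f_RA, which is the competition between austenite amount and austenite stability discussed below.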
Then, the time evolution of the retained austenite fraction as a function of holding time and M_s temperature is obtained by combining equations (45) and (46):

f_RA = f_A^eq · {1 − exp[−k·t^n]} · exp[−α·(M_s − T_q)]    (47)

Here, it is highlighted that the time evolution of the retained austenite fraction is governed by the competition between the initial volume fraction of austenite before quenching and its stability. The first derivative of equation (47) makes it possible to evaluate the magnitude of the retained austenite variation and the inflection points. The following equation is thus obtained:

df_RA/dt = f_A^eq · exp[−α·(M_s − T_q)] · { k·n·t^(n−1)·exp[−k·t^n] − α·(dM_s/dt)·[1 − exp(−k·t^n)] }    (48)

This relation shows clearly that the evolution of retained austenite depends on the temporal variation of the M_s temperature, which is linked to the austenite stability. For example, when M_s is constant, the derivative in equation (48) is always positive. Therefore, the retained austenite fraction is expected to be strictly increasing over the considered time-interval. In this work, it was shown that the retained austenite evolution exhibits a clearly marked maximum (Figure 88). As stated previously, three different stages are observed for the RA evolution: increase, decrease and, finally, stagnation. First, the initial austenite volume fraction is fully stabilized at room temperature, thus f_RA = f_A, and it continuously increases until 1h holding time. In that case, it can easily be demonstrated from equations (45) and (47) that this condition is fulfilled for M_s temperatures lower than or equal to T_q. Second, the time-evolution curve of the RA fraction goes through a maximum value and decreases. The peak, which corresponds to the highest volume fraction of retained austenite, is thus defined by M_s = T_q.
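The sign argument for a constant M_s can be checked numerically. The sketch below evaluates equation (47) with a time-independent M_s (the value 50°C is illustrative) and verifies that the retained fraction is strictly increasing:

```python
import math

ALPHA, K, N, F_EQ = 0.011, 0.074, 0.38, 0.36
MS, TQ = 50.0, 20.0                      # constant, illustrative Ms > Tq

def f_ra(t):
    """Equation (47) with dMs/dt = 0: only the positive KJMA term remains
    in the bracket of equation (48), so f_RA must grow monotonically."""
    return F_EQ * (1.0 - math.exp(-K * t ** N)) * math.exp(-ALPHA * (MS - TQ))

times = [10.0 * 1.5 ** i for i in range(15)]
values = [f_ra(t) for t in times]
monotone = all(b > a for a, b in zip(values, values[1:]))
```

A peak in f_RA(t) therefore requires dM_s/dt > 0, i.e. austenite that is progressively losing stability, which is exactly the situation analyzed next.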
As a consequence, the austenite state (chemical composition and size) at the peak defines the critical point beyond which austenite is unstable, because M_s > T_q. From the experimental results, it appears that the peak point corresponds to carbon and manganese contents in austenite of 0.33 wt.% and 9.3 wt.%, respectively. The decrease of f_RA can mainly be explained by the concomitant decrease of the austenite carbon content and the increase of its size during growth. The final stagnation stage is observed simply because of the absence, or very slight, evolution of austenite: the equilibrium fraction and composition are achieved and the effect of the size variation becomes minor. From these considerations, it is obvious that the shape of the f_RA curve and the position of the peak are fair indicators of the factors controlling the austenite stability. In order to go further, it is important to clarify a key point, which is the chemical contribution to the M_s temperature. According to the Andrews empirical relation [AND'65] and in the case of ternary Fe-C-Mn steels, the chemical contribution to M_s can be expressed as:

M_s^0 (°C) = 539 − 423·w_C^A − 30.4·w_Mn^A    (49)

where w_C^A and w_Mn^A are the C and Mn contents in austenite in weight percent. It is important to mention that Andrews collected a huge database of M_s temperatures obtained from fully austenitic structures with quite large grain sizes. Thus, the effect of the austenite grain size on M_s can reasonably be considered as negligible in the resulting database. With the measured Mn content in austenite at the peak (9.3 wt.%), this relation predicts that austenite is stable at T_q=20°C only when its carbon concentration w_C^A exceeds 0.56 wt.%. In this study, it was shown clearly that austenite with a carbon content lower than 0.33 wt.% is expected to transform into martensite (see the carbon content at the peak). This strongly suggests that the empirical relation given in equation (49) has a limited validity for the MMS.
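The 0.56 wt.% threshold quoted above follows directly from solving equation (49) for M_s^0 = T_q at 9.3 wt.% Mn:

```python
def ms_andrews(w_c, w_mn):
    """Equation (49): chemical contribution to Ms (°C), compositions in wt.%."""
    return 539.0 - 423.0 * w_c - 30.4 * w_mn

# carbon content giving Ms0 = 20°C, i.e. the room-temperature stability limit
w_mn = 9.3
w_c_crit = (539.0 - 30.4 * w_mn - 20.0) / 423.0   # ~0.56 wt.%

# the peak composition (0.33 wt.% C, 9.3 wt.% Mn) gives Ms0 well above 20°C
ms_peak = ms_andrews(0.33, w_mn)                  # ~117°C
```

Since austenite with only 0.33 wt.% C is fully retained at the peak despite a predicted M_s^0 of about 117°C, the chemical contribution alone clearly cannot account for the observed stability.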
In addition, it appears that parameters other than the chemical composition may play a significant role in the stability of the austenite. As already mentioned in the literature review, it can reasonably be supposed that the small austenite size affects its stability by lowering the M_s temperature. In other words, an extra driving force that depends on the austenite grain size would be added to the change of Gibbs free energy for the austenite to martensite transformation. It was then decided to propose a specific formulation for the influence of the austenite size. Actually, it is proposed to introduce an additional term that depends on the volume of austenite as follows:

M_s = M_s^0 − K·V_A^(−q)    (50)

where M_s^0 is determined from equation (49) and represents the chemical contribution, V_A is the volume of the austenite features taken as an equivalent cube volume (D_A³ in µm³), and K and q are fitting parameters. As discussed beforehand, at the peak point M_s = T_q, therefore the parameters K and q must satisfy the following first-order condition:

K = (M_s^(0,peak) − T_q) · V_(A,peak)^q    (51)

In this study, the peak occurs at 1h with carbon and manganese contents in austenite of 0.33 wt.% and 9.3 wt.%, respectively, and a mean austenite size of about 240 nm. Thus, M_s^0 was calculated, using equation (49), to be about 117°C. The fitting parameters K and q are interdependent and to each value of q corresponds a unique value of K (Figure 109). Nevertheless, many pairs of K and q satisfy condition (51), but only a single set of K and q parameters should best fit the time-evolution curve of retained austenite. Using the experimental data of this work, the best agreement was obtained with q=0.039 and K=83.3 (Figure 110). Finally, the new M_s law that accounts for the size effect in MMS with an ultra-fine microstructure takes the following expression:

M_s^new (°C) = 539 − 423·w_C^A − 30.4·w_Mn^A − 83.3·V_A^(−0.039)    (52)

where V_A is given in µm³.
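Equation (52) can be checked against the peak condition: with the peak composition (0.33 wt.% C, 9.3 wt.% Mn) and a mean size of 240 nm, the predicted M_s should fall close to T_q = 20°C:

```python
K_SIZE, Q = 83.3, 0.039     # fitted size-term parameters of equation (52)

def ms_new(w_c, w_mn, d_um):
    """Equation (52): Ms (°C) with the austenite volume V_A = d^3 in µm^3."""
    v_a = d_um ** 3
    return 539.0 - 423.0 * w_c - 30.4 * w_mn - K_SIZE * v_a ** (-Q)

ms_peak = ms_new(0.33, 9.3, 0.24)   # close to Tq = 20°C, as required
```

The size term contributes close to 100°C for a 240 nm feature, i.e. it is of the same order as the chemical contribution itself at this composition.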
All the values considered for the calculation of the RA fraction and the obtained results are summarized in Table 7. The prediction of the RA fraction evolution using the new M_s formula is in very good agreement with the experimental results. As seen experimentally, fresh martensite starts to form during cooling for holding times longer than 2h. This means that for longer holding times, the stability of austenite is decreased. The proposed approach also correctly predicts the stabilization of the RA fraction at very long holding times. Figure 110 shows the comparison between the evolutions of the RA fraction calculated using only the chemical contribution to M_s (Andrews M_s) and using both the chemical and size contributions (new M_s with q=0.039 and K=83.3). These results confirm unambiguously the size effect on the austenite stability. It can also be observed that the influence of the size effect is quite important. Actually, it brings two things: (a) a delay of the maximum peak to a longer holding time; (b) a global increase of the RA fraction (maximum value and values at longer annealing times). Moreover, the particular peak-like form of the time-evolution of the RA fraction is the result of the concomitant change of the C content and size of austenite. Finally, using the new M_s law, it is possible to estimate the critical volume or size of austenite, for a given chemical composition of austenite (in our case C and Mn), necessary for its full stabilization at room temperature:

V_A^critical = [K / (M_s^0 − 20)]^(1/q)    (53)

For example, austenite with 0.33 wt.% C and 9.3 wt.% Mn will be fully stabilized at room temperature when its size is lower than 280nm. Such a stabilization due to the ultra-fine size apparently contradicts the classical nucleation model, in which more grain boundary area should provide more chances for the nucleation of martensite embryos [GHO'94].
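Inverting equation (53) for the peak composition reproduces the ~280 nm critical size quoted above:

```python
K_SIZE, Q = 83.3, 0.039     # fitted parameters of the new Ms law

# chemical Ms for the peak composition (0.33 wt.% C, 9.3 wt.% Mn), ~117°C
ms0 = 539.0 - 423.0 * 0.33 - 30.4 * 9.3

v_crit = (K_SIZE / (ms0 - 20.0)) ** (1.0 / Q)    # critical volume, µm^3
d_crit = v_crit ** (1.0 / 3.0)                   # equivalent cube edge, µm
```

Because q is small, the critical size is very sensitive to the chemical M_s^0: a modest change in C or Mn content shifts the critical size substantially.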
This dichotomy was discussed in [FUR'89], and it can reasonably be suspected that a stabilizing mechanism becomes dominant at smaller grain sizes. The results of this work are probably not sufficient to determine the mechanism of austenite stabilization by the size effect. However, two possible mechanisms are discussed hereafter.

As already seen previously, Yang et al. [YAN'09] suggested that the dependence of M_s on the austenite grain size during the austenite to martensite transformation can be explained quantitatively on the basis of Fisher's original model [FIS'49]. The latter, based on a purely geometrical analysis, claims that the number of plates per unit volume needed to obtain a detectable fraction of martensite increases as the austenite grain volume decreases, because the volume transformed per plate is reduced. The adaptation made by Yang et al. [YAN'09] provides the following expression for M_s:

M_s = M_s^0 - (1/b)·ln[ (1/(a·V_γ))·(exp(-ln(1 - f)/m) - 1) + 1 ]    (54)

where V_γ is the volume of the austenite grain, f is the martensite fraction (f = 0.01 for the M_s calculation), m is the plate aspect ratio (m = 0.05 from [CHR'79]), and a = 1 mm^-3, b = 0.2689 and M_s^0 = 363.5°C are three fitting parameters.

Using this M_s law, the critical size of austenite with 0.33 wt.% C and 9.3 wt.% Mn for its full stabilization at room temperature should be lower than ~100 nm. This value is almost 3 times lower than that determined previously with the M_s law proposed in this study. One possible reason is that the fitting parameters a and b were determined from grain sizes much larger than those measured in this work (see [YAN'09]). However, by resetting the fitting parameters a, b and m and using Andrews' M_s^0, it is possible to obtain equivalent results for the retained austenite time evolution.
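A numerical sketch of equation 54 (my implementation; following the text's comparison, Andrews' chemical M_s^0 is used for this composition instead of the fitted 363.5°C) recovers the quoted ~100 nm critical size by bisection on the grain size:

```python
import math

# Geometrical ("Fisher") Ms law of Yang et al. (equation 54).
# v_gamma in mm^3; a in mm^-3.
def ms_fisher(ms0, v_gamma, a=1.0, b=0.2689, m=0.05, f=0.01):
    plates = math.exp(-math.log(1.0 - f) / m) - 1.0
    return ms0 - (1.0 / b) * math.log(plates / (a * v_gamma) + 1.0)

ms0 = 539.0 - 423.0 * 0.33 - 30.4 * 9.3  # chemical Ms of the peak austenite
# Critical size for full stabilization at 20 C, by bisection on grain size.
lo, hi = 1e-6, 1e-2                      # grain size bounds in mm
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if ms_fisher(ms0, mid ** 3) > 20.0:  # Ms still above room temperature
        hi = mid
    else:
        lo = mid
d_crit_nm = 0.5 * (lo + hi) * 1e6        # ~100 nm, as quoted in the text
```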
It should be noted that the fitting parameter b and the plate aspect ratio m are not independent: their product is equal to the constant parameter α in the Koistinen-Marburger equation (α being equal to 0.011 here). The values of the parameters a and b (with m = 0.05) obtained by fitting are 4 mm^-3 and 0.22, respectively. The obtained results are compared with the experimental data in Figure 111. It can be seen that the adapted "geometrical model" fits well the measured retained austenite evolution.

At the same time, Takaki et al. [TAK'04] recently suggested another theory, based on experimental results on an Fe-Cr-Ni steel. It was stated that austenite can be significantly stabilized when the martensitic transformation mode changes from multi-variant to single-variant. Therefore, it can reasonably be suspected that austenite refinement contributes to a change in the transformation mode. The mechanism of grain refinement-induced austenite stabilization can be discussed in terms of the increase of the elastic strain energy associated with the austenite to martensite transformation via a single variant. It was shown that the increase of elastic strain energy due to the development of one single variant of martensite in one single grain of austenite is estimated by the following relation:

ΔE_V = 1276.1·(x/D)^2 + 562.6·(x/D)    (55)

where x is the thickness of the martensite plate and D is the austenite grain size. The plot of the elastic strain energy ΔE_V as a function of grain size shows the difficulty for a martensite lath to nucleate in austenite whose size is smaller than 1 µm [TAK'04]. The quantity ΔE_V can be seen as an excess driving force for martensite nucleation. As a result, M_s is expected to depend on grain size in the same manner as ΔE_V does [GUI'07]. This relation is far from that observed in this work (see equation 52) and would suggest that such a mechanism is not involved.
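The strain-energy argument can be illustrated with a short sketch (my implementation of equation 55 as reconstructed above; the plate thickness x = 0.05 µm is an illustrative value, not taken from the text). It also checks the b·m = α consistency quoted above:

```python
# Elastic strain energy penalty for a single martensite variant
# (equation 55 as reconstructed above); x and d in the same length unit.
def delta_e(x, d):
    r = x / d
    return 1276.1 * r ** 2 + 562.6 * r

# For a fixed plate thickness (x = 0.05 um, illustrative), the penalty
# grows steeply once the grain size drops below ~1 um.
penalties = {d: delta_e(0.05, d) for d in (10.0, 1.0, 0.5, 0.1)}

# Consistency check quoted in the text: the product b * m equals the
# Koistinen-Marburger constant alpha.
assert abs(0.22 * 0.05 - 0.011) < 1e-12
```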
As stated previously, the results of this work do not allow selecting or defining the mechanism of austenite stabilization by the size effect, even though the geometrical model shows good agreement. It can be imagined that both geometrical and elastic energy constraints operate together and prevent the austenite transformation to martensite. Therefore, more studies are necessary to discriminate the mechanism of austenite stabilization by the size effect.

As a conclusion, austenite stabilization during ART annealing of MMS was provided by two mechanisms: C and Mn enrichment, and the ultra-fine size of austenite (less than 0.5 µm). The Mn content and the ultra-fine size of austenite play an important role in its stabilization; however, prime importance for the austenite stability was attributed to its C content. Both the C content and the ultra-fine austenite size are the key factors for the peak-like form of the RA evolution curve. Moreover, the time-evolution of retained austenite is shown to be a fair indicator of the critical factors governing austenite stability in ultra-fine grained MMS. In particular, the peak of the curve defines the critical point delimiting stable and unstable austenite. The corresponding critical size was estimated around 280 nm.

This chapter presents the experimental characterization of the mechanical behavior of the studied MMS, the analysis and understanding of the obtained results and, finally, a modeling approach based on the microstructure observations discussed in the previous chapter. The division of the chapter into sections is given just above.

Mechanical properties of annealed samples

The mechanical behavior of the medium Mn steel (0.1C-4.7Mn wt.%) after intercritical annealing was obtained using standard tensile tests. The evolution of the mechanical behavior with the holding time at 650°C was mainly studied. Engineering and true stress-strain curves are presented in Figure 112.
Engineering and true stress-strain curves were each separated into two graphs: one corresponding to annealing times up to 3 h and another corresponding to longer holding times. This was done for an easier analysis of the curves. Figure 112 shows that the holding time has an important impact on the mechanical behavior. Indeed, a variety of curves was obtained, with different strength-elongation combinations and work hardening rates. Mechanical characteristics such as yield strength (YS), yield point elongation (YPE), ultimate tensile strength (UTS), and uniform (Uel) and total (TE) elongations are summarized in Table 8. Their evolution with holding time was analyzed and is presented in Figure 113.

The obtained mechanical properties are quite attractive: a good combination of strength and ductility was produced over a rather large range of holding times. A higher YS (by more than 100 MPa) with the same or slightly better elongation was obtained in comparison with the mechanical properties of a classical TRIP800. There is a clear optimum in terms of strength-elongation balance, observed for 2 h holding at 650°C. It can be seen that increasing the holding time up to 2 h resulted in a small increase of UTS (+60 MPa), but at the same time YS decreased by about 150 MPa and the elongation was improved (+10% of Uel). A further increase of the holding time induced a decrease of ductility (elongation), a slight increase of UTS and a pronounced decrease of YS.

Such an evolution of the mechanical properties correlates well with the microstructure evolution observed in chapter 3. During the first holding hours the austenite fraction increases and the carbides are continuously dissolved. C and Mn partitioning into the austenite and its ultra-fine size during this stage are the reasons for the improved austenite stability. The retained austenite fraction of the 2 h sample is close to the maximum observed for the 1 h sample. A higher RA fraction results in a more pronounced TRIP effect, which finally gives a better elongation.
A longer holding time decreases the stability of the RA; hence fresh martensite is observed in the microstructure, which contributes to the decrease of elongation. The slight increase of UTS up to 3 h of holding is correlated with the increase of the global austenite fraction, meaning a higher fraction of martensite or of strain-induced martensite during mechanical loading. The observed continuous decrease of YS is thought to be related to the continuous recovery and recrystallization of the initial martensite structure, which leads to a lower resistance of the matrix phase (ferrite) due to a lower density of defects. The low YS can also be attributed to the high RA fraction, given the low YS of the obtained austenite, as will be shown further in chapter 4.4.

The strain hardening rate of the samples was also studied. The evolution of the strain hardening rate with strain and stress is presented in Figure 114. All samples exhibit three domains of work hardening (WH) rate evolution: 1) a rapid decrease of WH; 2) a re-enhancement and stabilization of WH; 3) a final drop of WH. These three stages are clearly shown in Figure 115. Such a three-step behavior was reported to be related to the strain-induced transformation of austenite into martensite (TRIP effect) in MMS [SHI'10]. All the samples had a retained austenite fraction higher than 20%, thus taking benefit of the TRIP effect. However, the rate of strain-induced transformation was not the same and, hence, the strain hardening behavior was also singular. The re-enhancement of WH was moderate for holding times lower than 2 h, especially for the 3 min holding, but at the same time the final drop of WH appears more as a stagnation or a very slight decrease (particularly true for the 1 h and 2 h samples). Taking into account that the RA fraction was rather high and that the stability of the RA was increasing, one can suppose that such a behavior results from a continuous strain-induced transformation of austenite with small increments.
It can even be imagined that at the end of the tensile test some RA stays untransformed. On the other hand, for the samples with longer annealing times (>2 h) the re-enhancement of WH was more pronounced, but the following decrease of WH was also amplified, especially for the 30 h sample. It is thought that such a behavior was obtained due to the lower stability of the RA; thus the rate of strain-induced transformation was more rapid and the RA was consumed quickly. These aspects of RA stability will be further discussed in chapter 4.6 in relation with the obtained experimental data.

Some simple correlations between mechanical characteristics and fractions of constituents were analyzed. Relations between YS_L, maximum true stress, Uel and the fractions of RA and FM+RA were supposed to be the most interesting, and thus they are presented in Figure 116. It can be seen that the influence of the RA fraction on YS_L and on the maximum true stress is very limited or inexistent, but at the same time it has an impact on Uel, although it cannot explain the complete evolution. On the other hand, the FM+RA fraction shows no correlation with Uel, but seems to be an important parameter for the strength of these steels.

However, the relation between YS_L and the FM+RA fraction appears strange in this case. FM+RA represents the hard constituents in the steel; thus one could expect an increase of YS with the increase of the FM+RA fraction, but the observed tendency was the opposite. In the particular case when YPE exists, a decrease of YS with an increase of the hard phase fraction is possible when the YPE is suppressed; but in this work YPE was found for all the samples. It was then supposed that this correlation has an indirect link through the holding time at 650°C, which represents the annealing (tempering) of the initial martensite. A longer holding time results in a higher FM+RA fraction, but the degree of tempering of the initial martensite is also higher, consequently decreasing YS.
Finally, it can be concluded that, although there are some correlations between mechanical properties and microstructure, they are quite complex and their understanding is incomplete. Hence, to explain and model the global tensile behavior of the studied steel, a more advanced comprehension of the behavior of each constituent and of their interactions should be acquired. The following subsections present the approach used for the modeling of the mechanical behavior and the experimental trials performed to obtain the behavior laws of each constituent, when possible. Finally, at the end of the chapter the complete model is presented and discussed.

Description of the Iso-W approach

Based on the microstructure analysis, and in order to simplify the problem, it was decided to consider a microstructure consisting of three phases: ferrite, retained austenite and fresh martensite. The influence of the carbides, precipitated in the ferrite and then dissolved during annealing, was neglected. This is a strong assumption which needs to be confirmed by a complementary detailed study; this point is discussed further in the section concerning the mechanical behavior of ferrite.

To predict the mechanical behavior law of the three-phase mixture without any supplementary fitting parameter, it was decided to use the so-called "Iso-W" approach proposed by Bouaziz and Buessler [BOU'99]. This approach was already presented in chapter 1. It supposes that, in a multiphase microstructure, the increment of mechanical work is equal in each constituent during the whole mechanical loading. In terms of equations this means that:

σ_F·dε_F = σ_A·dε_A = σ_FM·dε_FM    (56)

where σ_F, σ_A, σ_FM and ε_F, ε_A, ε_FM are the stress and strain of ferrite, retained austenite and fresh martensite, respectively.
In order to calculate the increment of strain for each phase, the following mixture law was used:

dε = f_F·dε_F + f_A·dε_A + f_FM·dε_FM    (57)

where ε is the macroscopic strain of the material and f_F, f_A and f_FM are, respectively, the fractions of ferrite, retained austenite and fresh martensite.

As can be seen from equations (56) and (57), the prediction of the global mechanical behavior of such a mixture of phases requires at least the two following inputs: 1) the mechanical behavior of each constituent: annealed and fresh martensite with medium Mn content, and retained austenite with medium C and Mn contents; 2) a description of the initial microstructure: at least the fractions of all constituents. It is also necessary to take into account the dynamic evolution of the phase fractions under loading, e.g. the induced transformation of retained austenite into martensite (TRIP effect), which introduces an additional strain hardening. The considerations and investigations performed for each phase are described hereafter.

Mechanical behavior of as-quenched martensite and its simulation

4.3.1 Mechanical behavior of as-quenched medium Mn martensite

Generally, as discussed in the literature review, it is considered that the mechanical behavior of as-quenched martensite depends mostly on its C content. Thus, it was decided to obtain experimental stress-strain curves of as-quenched martensite for the studied steel and to model them with the approach proposed by Allain et al. [ALL'12] (already described in the literature review). Two samples after a first complete austenitization at 750°C for 30 minutes followed by water quenching, as presented in chapter 3, were used. Their microstructure was already investigated in chapter 3.1. Figure 117 presents the main results of this investigation. The microstructure analysis after Dino etching confirmed the absence of ferrite, and the X-ray spectra did not reveal any austenite.
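The Iso-W scheme of equations 56-57 can be integrated incrementally. The sketch below is my own discretization, with arbitrary power-law constituent behaviors standing in for the real phase laws derived later in this chapter: at each step the leading phase is advanced by a small strain increment, the strain increments of the other phases follow from the equal-work condition, and the macroscopic stress is recovered as the work increment divided by the macroscopic strain increment:

```python
def iso_w_curve(phases, d_eps=1e-4, n_steps=3000):
    """Iso-work mixture of phases given as (fraction, stress_fn) pairs.

    stress_fn maps the phase's own strain to its flow stress.
    Returns a list of macroscopic (strain, stress) points.
    """
    eps_phase = [0.0] * len(phases)
    eps_macro, curve = 0.0, []
    for _ in range(n_steps):
        # Equal work increment w in every phase: sigma_i * d_eps_i = w.
        w = phases[0][1](eps_phase[0]) * d_eps       # lead with phase 0
        d_eps_i = [w / fn(e) for (_, fn), e in zip(phases, eps_phase)]
        d_macro = sum(f * d for (f, _), d in zip(phases, d_eps_i))
        eps_phase = [e + d for e, d in zip(eps_phase, d_eps_i)]
        eps_macro += d_macro
        curve.append((eps_macro, w / d_macro))       # sigma = w / d_eps
    return curve

# Illustrative constituent laws (NOT the fitted laws of this chapter):
soft = lambda e: 300.0 + 900.0 * e ** 0.4    # ferrite-like
hard = lambda e: 1200.0 + 2000.0 * e ** 0.2  # martensite-like
curve = iso_w_curve([(0.7, soft), (0.3, hard)])
```

By construction the mixture stress always lies between the current flow stresses of the soft and hard phases, and the soft phase accumulates strain faster, as expected from equation 56.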
Hence it was concluded that the structure consists mainly of martensite, with only a negligible amount of retained austenite observed by TEM. Tensile tests were then performed, and the obtained tensile curves and mechanical properties are presented in Figure 118 and Table 9, respectively. Figure 118 and Table 9 also illustrate the reproducibility of the tensile curves between the two samples. It can be stated that the observed strength level and strain hardening rate are quite high for a quenched 0.1C steel, whose typical UTS value is around 1200 MPa. Looking at the obtained data, we were forced to conclude that the mechanical behavior of the obtained martensite depends not only on its C content but also on its Mn content. Moreover, it was not possible to predict the stress-strain curve of such a martensite (medium Mn steel) using the CCA model proposed by Allain et al. [ALL'12]. Hence, it was decided to investigate a little further the effect of Mn on the strength and strain hardening of martensite.

4.3.2 Influence of Mn content on the strength and strain hardening of martensite

A database of selected C-Mn martensitic steels from previously published studies and from some new experimental trials was built. The chemical compositions of these steels are shown in Table 10, where "-" means less than 0.01 wt.% for Si and Cr and less than 0.001 wt.% for Ti. Figure 119 presents the results of the experimental tensile tests performed in the present study along with the results collected from previous works [PUS'09], [ZHU'10], [ALL'12]. The true stress evolution as a function of true strain is shown in Figure 119(a); Figure 119(b) shows the related strain hardening rate evolution as a function of true stress (the so-called Kocks-Mecking plot [KOC'03], [MEC'81]).
For all the studied martensitic steels, some general highlights can be stated:
• the conventional yield stress seems to be a function of the martensite carbon and manganese contents;
• a high work-hardening rate is observed, and it increases up to the necking strain in accordance with the carbon and manganese contents.

From Figure 119(a) it can be seen that the true stress-true strain curve of 0.1C-5Mn is almost the same as that of 0.15C, and that the true stress-true strain curve of 0.15C-5Mn lies between the curves of 0.22C and 0.3C. The same conclusion can be deduced from Figure 119(b), which shows the strain hardening evolution. The solid solution hardening of Mn cannot explain this difference in the behavior of martensitic steels with high Mn content: modifying the solid solution hardening only shifts the curves to higher stress levels, whereas in the case of medium Mn steels a clear change in the strain hardening of martensite is found. According to F.B. Pickering and T. Gladman [PIC'63], the solid solution hardening of Mn can be evaluated using the relation S_Mn·w_Mn (wt.%), where S_Mn = 32. This means that each percent of Mn increases the strength by 32 MPa, so for 4.6 wt.% of Mn the strength should increase by about 150 MPa. Figure 119(c) shows the experimental curves of 0.15C and 0.15C-5Mn, together with the 0.15C curve shifted up by 150 MPa. This figure proves that such a shift is not sufficient to match the 0.15C-5Mn curve; about 125 MPa are still missing. Moreover, the strain hardening rate, depicted in Figure 119(b), is not the same. Thus, it is considered that a simple additive solid solution hardening cannot explain the behavior of martensite with a medium content of Mn. It can also be observed from Figure 119(a) and (b) that the influence of Mn on the stress and on the strain hardening is rather limited in the case of a small carbon content (0.01C-3Mn).
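The solid-solution argument above is simple arithmetic and can be made explicit; note also that a rigid vertical shift leaves dσ/dε unchanged, which is why it cannot reproduce the different hardening rate. The ~275 MPa total gap below is inferred from the figures quoted in the text (150 MPa shift plus ~125 MPa still missing):

```python
S_MN = 32.0          # MPa per wt.% Mn, after Pickering and Gladman
shift = S_MN * 4.6   # ~147 MPa expected from solid solution alone

# The observed gap between the 0.15C-5Mn and 0.15C curves is ~275 MPa,
# leaving ~125 MPa unexplained by the additive shift.
residual = 275.0 - shift
```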
All these facts suggest that the Mn content influences the martensite strength and strain hardening in synergy with the carbon content of martensite. Based on the CCA approach [ALL'12], a simplified behavior law for martensitic steels was proposed. In order to describe the stress-strain curve of martensite as a large elasto-plastic transition, the strain hardening is expressed as the product of the Young modulus (Y) by the fraction of elastic zones (1 - F):

dσ/dε = (1 - F)·Y    (58)

where σ and ε are respectively the macroscopic stress and strain of the material. The plasticized zone fraction F is chosen as a logistic law:

F = 1 - exp[ -((σ - σ_min)/σ_0)^p ]    (59)

where σ_min is the minimum stress necessary to start plasticizing, and p and σ_0 are the parameters that control the shape of the F(σ) curve (Figure 120(a)). In the first stage of the tensile test, the macroscopic stress is lower than the elastic threshold (σ < σ_min), hence F = 0 and the material exhibits a completely elastic behavior. When the applied stress σ becomes higher than σ_min, F starts to increase, meaning that the fraction of plasticized zones increases.

Model adjustment with the experimental and literature data showed that σ_min and p can be taken as constants for all the considered steels, and the following values were found to be optimal: σ_min = 450 MPa and p = 2.5. Thus, only one variable parameter, σ_0, was used to obtain the best fit between model and experiments. It was found that both C and Mn have an important influence on σ_0. A linear dependence between σ_0 and C_eq was established in the form of the following equation:

σ_0 = 130 + 1997·C_eq    (60)

where C_eq is the parameter that considers the concomitant influence of C and Mn.
It was proposed to take this synergy of C and Mn into account in the following way:

C_eq = w_C·(1 + w_Mn/K_Mn)    (61)

where w_C and w_Mn represent the C and Mn contents (wt.%) of martensite, and K_Mn is the coefficient of the Mn influence. From the collected experimental data (Table 10 and Figure 119) it was possible to find the optimum value K_Mn = 3.5. The σ_0 evolution with C_eq is shown in Figure 120(b).

The final results of the model are presented and compared to the experimental data in Figure 121. The predicted stress-strain curves were separated into two graphs (Figure 121(a) and (b)) to give a better view of the medium Mn steel curves. As can be seen, this simple model accurately predicts the whole stress-strain curves of different martensitic steels with varied C and Mn contents. On the other hand, it can be noticed that the simulated curves are not perfect and that some mismatches remain. However, the maximum difference between the model and the experimental curves in terms of stress is less than 60 MPa, which represents less than 5% of the maximum flow stress. Figure 121(c) and (d) present the evolution of the strain hardening rate as a function of stress and as a function of strain, respectively. Taking into account that modeling a derivative is more complex than modeling the function itself, the proposed model gives very satisfactory results for the strain hardening rate evolution. These figures also show that the model correctly captures the synergetic influence of C and Mn on the strain hardening rate evolution.

Even though the results are satisfactory, it is evident that the proposed model is a simplified version of the CCA published previously [ALL'12]; hence the global description of the stress-strain curves is less precise, especially the elasto-plastic transition. This can be clearly seen on the stress-strain curves of the 0.22C, 0.15C and, in particular, 0.01C-3Mn steels.
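Equations 58-61 form an ordinary differential equation in σ(ε) that is easy to integrate explicitly. The sketch below is my implementation; the Young modulus value (210 GPa) is an assumption, not stated in the text:

```python
import math

def martensite_curve(w_c, w_mn, y=210000.0, s_min=450.0, p=2.5,
                     k_mn=3.5, d_eps=1e-5, eps_max=0.04):
    # Equations 60-61: one composition-dependent parameter sigma_0.
    c_eq = w_c * (1.0 + w_mn / k_mn)
    s0 = 130.0 + 1997.0 * c_eq
    sigma, curve = 0.0, []
    for i in range(int(eps_max / d_eps)):
        # Equation 59: fraction of plasticized zones.
        f = 0.0 if sigma <= s_min else \
            1.0 - math.exp(-(((sigma - s_min) / s0) ** p))
        sigma += (1.0 - f) * y * d_eps   # equation 58, explicit Euler step
        curve.append((i * d_eps, sigma))
    return curve

low = martensite_curve(0.15, 0.0)   # plain 0.15C martensite
high = martensite_curve(0.15, 5.0)  # 0.15C-5Mn: larger C_eq, higher stress
```

The larger C_eq of the Mn-alloyed grade raises σ_0 and therefore both the flow stress level and the sustained hardening, reproducing the synergy discussed above.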
The discrepancy of the model is also related to the fact that only one fitting parameter was considered. This was done deliberately, in order to simplify the understanding of the observed phenomenon of C-Mn synergy. The model response can easily be improved by relaxing the constraints on σ_min and making it variable. Nevertheless, more data and studies are needed to obtain a good correlation between σ_min and some metallurgical or microstructural parameters. Future work is also necessary to understand the physical mechanism of this C-Mn synergy and its relation with the microstructure. The outputs of these further investigations will probably be very helpful for further model improvement. The work on the effect of Mn on the strength and strain hardening of C-Mn martensite was published in ISIJ International [ARL'13].

Mechanical behavior of austenite with medium C and Mn contents

From the microstructure investigations in chapter 3, it was concluded that the retained austenite contains about 10 wt.% Mn and a maximum of 0.4 wt.% C. Thus, it was decided to elaborate such a composition in order to produce a fully austenitic steel and to evaluate its mechanical behavior. However, some technical problems occurred during melting, and the obtained steel contained a slightly higher C content (0.5 wt.%) as well as an important quantity of boron. Nevertheless, the work was continued with this steel, as it was considered that the increased C content would not drastically change the nature of the austenite or the mechanism of its deformation, because this composition lies in the same domain as the targeted one according to Schumann's diagram (Figure 35). The obtained 2 kg ingot had the composition given in Table 11. The ingot was then reheated to 1200°C and hot rolled with a finishing temperature around 900°C. Coiling was simulated by a slow furnace cooling from 550°C. The microhardness of the hot rolled band was evaluated to be around 560 HV.
Table 11 - Chemical composition of the steel elaborated for the investigation of austenite mechanical behavior (10⁻³ wt.%)

The hot rolled microstructure was analyzed after Nital etching using optical microscopy and SEM. The obtained microstructures are shown in Figure 122. From these observations it appears that the brown or dark areas on the optical image are pearlite islands, which are clearly seen in the SEM images. The orange areas on the optical image and the dark grey areas on the SEM images are supposed to be austenite. The needle-like features are supposed to be epsilon martensite, and the dense areas around the pearlite islands are presumed to be alpha prime martensite. X-ray diffraction revealed the presence of three phases: austenite, ε-martensite and α'-martensite. However, a deeper microstructure analysis of these phases (e.g. by TEM) was not done; thus their morphological identification on the optical and SEM images remains hypothetical.

Cold rolling of this steel was not possible due to the high hardness and brittleness of the hot rolled sheet; thus further trials aiming to produce a fully austenitic structure were continued on the hot rolled metal. The dilatometer trial presented in Figure 123(a) was performed: heating at ~5°C/s to 1000°C, holding for 10 s, followed by cooling at ~5°C/s. The obtained dilation curve is shown in Figure 123(b). Based on the equilibrium thermodynamic data, at 1000°C the structure is fully austenitic, and according to the dilation curve (Figure 123(b)) there is no phase transformation during slow cooling. Therefore, it was supposed that if the sample is brought into the austenitic domain and then quenched, it will be fully or mostly austenitic, with perhaps some martensite (α' and/or ε). Hence, a new annealing cycle was applied to a hot rolled sample: it was heated at ~6°C/s to 800°C, soaked for 5 min and then water quenched. The hardness of the annealed sample was evaluated to be around 270 HV.
At the same time, a low fraction of α'-martensite (less than 7%) was determined from the X-ray and magnetic measurements. It was therefore supposed that the microstructure was mostly austenitic. Finally, tensile tests were performed on the hot rolled sample and on the sample annealed at 800°C for 5 min. The obtained stress-strain curves and mechanical properties are presented in Figure 124 and Table 12, respectively. It can be observed that the obtained mechanical properties are rather similar, meaning that the effect of the thermal treatment on the properties of this steel is rather limited. In both cases the YS_0.2 is about 340 MPa, the UTS close to 600 MPa, and a somewhat poor elongation (less than 5%) is observed.

It is known that the YS is controlled by the softest phase. Hence, taking into account that in both microstructures the soft phase is austenite, it is assumed that the obtained YS corresponds to the YS of austenite. On the other hand, the very poor ductility is thought to be due to the very hard, initially present or transformation-induced α'-martensite that contains high levels of C and Mn. Mn segregation can also be a reason for the low elongation values; however, this topic was not studied.

Finally, considering the observed low YS and the relatively high C and Mn contents (0.4 wt.% C and at least 9 wt.% Mn) of the retained austenite in the final microstructure, it appeared appropriate to use the law proposed by Bouaziz et al. [BOU'11] for the description of the retained austenite mechanical behavior:

σ_A(ε_A) = σ_0A + (K/f)·(1 - exp(-f·ε_A))    (62)

where σ_A and ε_A are respectively the stress and strain of the retained austenite, K = 2900 and f = 4 are fitting parameters determined by Bouaziz et al. [BOU'11], and σ_0A is the yield stress of austenite, which increases with the C content and decreases with the Mn content in the following manner:

σ_0A = 228 + 187·w_C - 2·w_Mn    (63)

where w_C and w_Mn are, respectively, the C and Mn contents of the austenite in weight percent.
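A minimal sketch of equations 62-63, evaluated for the retained austenite composition estimated in chapter 3 (0.4 wt.% C, ~10 wt.% Mn); the function names are mine and the composition values are taken from the text:

```python
import math

def austenite_stress(eps, w_c, w_mn, k=2900.0, f=4.0):
    # Equation 63: yield stress from C and Mn contents (wt.%).
    s0 = 228.0 + 187.0 * w_c - 2.0 * w_mn
    # Equation 62: Voce-type saturating hardening law.
    return s0 + (k / f) * (1.0 - math.exp(-f * eps))

ys = austenite_stress(0.0, 0.4, 10.0)   # yield stress, ~283 MPa
sat = austenite_stress(1.0, 0.4, 10.0)  # approaches s0 + k/f at large strain
```

The initial hardening slope is K = 2900 MPa and the flow stress saturates toward σ_0A + K/f, consistent with the moderate yield stress and strong early hardening discussed for this austenite.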
A comparison between the stress-strain curve given by this model and the experimental one is shown in Figure 125. One can state that the simulation differs from the experimental points: clearly, the strain hardening rate of the experimental curve is higher than that of the model. Of course, by changing the fitting parameters the difference can be decreased, but a deeper analysis of the global data (tensile curves and microstructure characterization) brought us to the following conclusion. In fact, the initially present or rapidly transformation-induced α'-martensite, which contains high levels of C and Mn, would contribute significantly to the strain hardening of the experimental material. Hence, the experimentally obtained curve for the sample annealed at 800°C for 5 min is not representative of the behavior of a fully austenitic structure, but of an austenite-martensite structure. Taking these considerations into account, it was judged that the proposed model is appropriate for the description of the mechanical behavior of austenite with medium C and Mn contents.

Mechanical behavior of ferrite with medium Mn content

As presented previously in chapters 1 and 3, ferrite (annealed martensite) is the phase obtained from the high temperature annealing of fresh martensite. One can say that this is simply ferrite, and from the crystallographic point of view this is true, but the morphology and mechanical behavior of this ferrite (annealed martensite) are different. This was clearly shown in the works of Sugimoto [SUG'02]. In chapter 3 of this work it was also demonstrated that such a ferrite contains an important quantity of dislocations, which should have a non-negligible impact on the mechanical behavior. Consequently, in this work it is considered that this ferrite (annealed martensite) has properties similar to those of an ultra-fine lath-like ferrite with some dislocations and/or restoration cells. The question was then: how to produce a steel with a 100% lath-like ferrite structure?
This question was quite complex and no trivial answer was found: no way to produce a 100% lath-like ferrite structure was identified. However, it was possible to produce a microstructure with lath-like ferrite (more than 98%) and some carbides. It was therefore decided to study such a structure and to look for some approximations. The studied 0.1C-4.7Mn (wt.%) steel was heated at ~6°C/s to 500°C, held for different times (3 min, 1 h and 30 h) and finally water quenched. Such a low temperature was chosen intentionally in order to avoid any austenite formation. However, even at such a low temperature, some austenite nuclei formed after very long annealing, as revealed by the microstructure analysis presented below.

The microstructure was checked using SEM after metabisulfite etching; the images are presented in Figure 126. The revealed microstructure clearly shows the ferrite in grey with some carbides (white spots). Prior austenite boundaries can also be easily observed. In the sample with 30 h holding time some very small austenite nuclei were observed (black spots in the lower left corner of Figure 126(b)), but their fraction was quite limited and they can thus be neglected.

The low temperature annealed samples were then submitted to tensile tests. The resulting stress-strain curves and mechanical properties are presented in Figure 127 and Table 13, respectively. Data for one sample with a fully fresh martensite structure are also given for comparison. It can be seen that the obtained tensile curves are quite particular: the yield strength is rather high (more than 750 MPa) and almost no strain hardening is observed (flat curves). Such a mechanical behavior corresponds to the revealed microstructure: ultra-fine lath-like recovered ferrite and some fine carbides.
In the work of Bouaziz [BOU'09] it was already shown that during mechanical loading ultra-fine ferrite is characterized by very high strength (for example ~1000 MPa for a 0.3 µm size) and an absence of strain hardening. Based on this work it was concluded that the obtained tensile curves and microstructures are in good agreement. In addition, it was noticed that with increasing holding temperature or time the size of the annealed martensite increases, thus decreasing the strength (3min ~1000 MPa; 30h ~800 MPa). Considering the fact that there was no strain hardening of such a ferrite and that its morphology was lath-like, it was proposed to model its behavior with an elastic perfectly plastic law where the stress level depends only on solid solution hardening and on the mean free path. For that purpose, the law proposed by Bouaziz and Buessler [BOU'02] and inspired from the Orowan [ORO'54] theory was used:

σ_F(ε_F) = σ_0F + M·μ·b/λ (64)

where M is the Taylor factor (taken equal to 3), μ is the shear modulus (80000 MPa), b is the magnitude of the Burgers vector (0.25 nm), λ is the mean free path (lath size) and σ_0F is the internal friction stress that takes into account solid solution hardening in the following manner:

σ_0F = 60 + 32·w_Mn + 750·w_P (65)

where w_Mn and w_P are the Mn and P contents of ferrite, in weight percent. The parameter λ (mean free path) is a fitting parameter and can represent the influence of both the lath size of the ferrite and the carbide interspacing. Therefore, the influence of carbides can be partially included in this fitting parameter λ.

Retained austenite strain induced transformation (TRIP effect)

As discussed in chapter 1, under mechanical loading retained austenite transforms to martensite. Such transformation considerably increases the strain hardening rate of the material, thus delaying necking and increasing elongation.
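As an illustrative sketch (not part of the original thesis), equations 64 and 65 can be evaluated numerically. The constants M, μ and b are those given in the text, while the Mn and P contents and the mean free path λ used below are arbitrary example inputs:

```python
M = 3.0          # Taylor factor (value from text)
MU = 80000.0     # shear modulus, MPa (value from text)
B = 0.25e-9      # Burgers vector magnitude, m (value from text)

def sigma_0F(w_mn, w_p):
    """Internal friction stress (eq. 65); w_mn, w_p in wt.%; result in MPa."""
    return 60.0 + 32.0 * w_mn + 750.0 * w_p

def sigma_ferrite(w_mn, w_p, lam):
    """Elastic perfectly plastic flow stress of ferrite (eq. 64).
    lam is the mean free path (lath size) in meters."""
    return sigma_0F(w_mn, w_p) + M * MU * B / lam

# Example inputs (hypothetical): 2.3 wt.% Mn, 0.01 wt.% P, 0.3 um lath size
print(sigma_ferrite(2.3, 0.01, 0.3e-6))  # ~341 MPa
```

Since the law is perfectly plastic, this single stress level describes the whole plastic part of the ferrite curve; only λ is fitted.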
Hence, it is very important to know the factors that control this transformation and to properly predict the induced martensite fraction. With this aim, the evolution of the RA fraction with increasing deformation was measured via interrupted tensile tests for two different treatments. These two extreme cases are 1h and 30h holding at 650°C. Magnetic saturation measurements were performed in the deformed zone to estimate the retained austenite fraction. The increase of induced martensite and the decrease of the RA fraction with strain are presented in Figure 128. Looking only at Figure 128(a), one could say that the induced martensite transformation is the same, but this is not really true. In fact, Figure 128(b) makes an important difference evident. In the beginning (up to ~0.08 strain) the slope of the RA decrease seems to be the same in both cases, but afterward a different behavior is observed. The slope of the 1h sample is slightly diminished, so less austenite is transformed, whereas in the 30h sample the transformation continues at the same rate. Finally, ~8% of RA was still available at the end of the tensile test in the 1h sample, whereas in the 30h sample all the austenite was consumed. This means that in the 1h sample a portion of the RA was stable enough to avoid transformation even at rather high strain levels. These observations are also in good agreement with the strain hardening rate curves presented in Figure 114. The strain hardening rate increase of the 1h sample was less pronounced, but at the same time a certain level was maintained up to rather high strains, which corresponds to a high stability of the RA and a continuous induced transformation. On the other hand, for the 30h sample the increase of strain hardening is important, but its drop-off is also very quick. Such behavior corresponds to a more rapid induced transformation and signifies a lower stability of the RA.
Different models describing the TRIP effect were presented in chapter 1. The model proposed by Perlade et al. [PER'03] was even tested, but it was rejected (in its actual form) due to its high sensitivity to the austenite size evolution. For simplicity, while keeping good control of the transformation rate, the approach proposed by Olson and Cohen [OLS'75] was selected. This approach is based on the simple Kolmogorov-Johnson-Mehl-Avrami [KOL'37], [JOH'39], [AVR'39] law. Nevertheless, the controlling and fitting parameters may differ from author to author. In our case, a special form of this phenomenological exponential law was proposed to simulate the evolution of the RA fraction with increasing strain:

f_ind^M = f_ini^RA · [1 − exp(−(ε_A/ε_0)^n)] (66)

where f_ini^RA and f_ind^M are the fractions of initial retained austenite and induced martensite, respectively, ε_A is the strain of the retained austenite and ε_0 and n are the fitting parameters. Using the experimental data from Figure 128, ε_0 and n were obtained for the two samples with different holding times at 650°C: 1h (ε_0=0.12; n=1) and 30h (ε_0=0.07; n=2). Comparison of the simulated and experimental evolution of the martensite induced transformation is shown in Figure 129. As can be seen, the proposed model accounts well for the differences in RA stability discussed above. As a perspective for this model, the parameters that govern the induced transformation rate (ε_0 and n) could be related to physical parameters controlling the stability of retained austenite (C and Mn composition, grain size and/or others). Before starting the description of the obtained results with the global model, some additional considerations should be discussed. In fact, during strain induced transformation, the fraction of RA decreases and new induced martensite appears.
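For illustration (assumed code, not from the thesis), the KJMA-type law of equation 66 can be evaluated with the fitted parameters reported above; the initial RA fraction used here is an arbitrary example value:

```python
import math

def f_induced(f_ra_ini, eps_a, eps0, n):
    """Induced martensite fraction vs. austenite strain (eq. 66)."""
    return f_ra_ini * (1.0 - math.exp(-(eps_a / eps0) ** n))

# Fitted parameters from the interrupted tensile tests (text above):
params = {"1h": (0.12, 1), "30h": (0.07, 2)}

f_ini = 0.30  # example initial RA fraction (hypothetical)
for label, (eps0, n) in params.items():
    f_m = f_induced(f_ini, 0.15, eps0, n)
    print(label, "induced:", round(f_m, 3), "remaining RA:", round(f_ini - f_m, 3))
```

At a strain of 0.15 the 30h parameter set transforms nearly all of the initial RA, while the 1h set leaves a substantial fraction untransformed, reproducing the trend seen in Figure 128(b).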
Hence, the effect of such transformation on strain hardening, as well as the difference between the flow stresses of the two phases (transformed austenite and induced martensite), should be taken into account. As proposed by Embury and Bouaziz [EMB'10], the strain hardening law for such a multiphase mixture can be written in the following manner:

dσ/dε = f_F·dσ_F/dε + f_A·dσ_A/dε + f_FM·dσ_FM/dε + (df_ind^M/dε)·(σ_M − σ_A) (67)

where f_F, f_A, f_FM are the fractions of ferrite, retained austenite and fresh martensite, respectively, σ_M is the flow stress of induced martensite and σ and ε are respectively the macroscopic stress and strain. The (df_ind^M/dε)·(σ_M − σ_A) term in equation 67 explicitly expresses the contribution of strain induced transformation to the work hardening through the difference between the flow stresses of retained austenite and induced martensite. As the observed C and Mn contents of retained austenite were rather high (~0.3 wt.% C and at least 9 wt.% Mn), the flow stress of induced martensite will also be high; hence it was decided to take a constant value of σ_M = 2500 MPa. This value of 2500 MPa represents roughly the maximum flow stress that can be measured for a fully martensitic steel with high C content (~0.5 wt.%). Krauss [KRA'99] showed, for example, such a value for a 0.5C (wt.%) steel tempered at 150°C. For such a high C content martensite, brittle fracture is expected without tempering. For higher C values, a high amount of retained austenite is present after quench; hence it seems impossible to evaluate properly the flow stress of such martensite.

Global Iso-W model

As the behavior of each constituent was determined and simulated, it was then possible to predict the stress-strain curves of samples with different holding times at 650°C and various multiphase microstructures. All the parameters used in the model which are necessary for the description of each phase behavior are presented in Annex 4.1.
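A minimal numerical sketch of equation 67 follows (assumed helper code; the phase-level hardening rates dσ_i/dε would come from the individual constitutive laws described above, and all numeric inputs in the example are hypothetical except σ_M = 2500 MPa, which is the value taken in the text):

```python
def strain_hardening_rate(f_f, f_a, f_fm,
                          dsf, dsa, dsfm,
                          dfm_ind, sigma_m, sigma_a):
    """Macroscopic hardening rate dsigma/deps of the mixture (eq. 67).
    f_*: fractions of ferrite, retained austenite, fresh martensite;
    ds*: hardening rates dsigma_i/deps of the three constituents (MPa);
    dfm_ind: rate of strain-induced martensite formation df/deps;
    sigma_m, sigma_a: flow stresses of induced martensite and austenite (MPa)."""
    return (f_f * dsf + f_a * dsa + f_fm * dsfm
            + dfm_ind * (sigma_m - sigma_a))

# Example with hypothetical fractions and hardening rates:
print(strain_hardening_rate(0.64, 0.30, 0.06, 1000, 2000, 1500,
                            0.5, 2500.0, 1200.0))
```

The last term vanishes when no transformation occurs (dfm_ind = 0), so the expression then reduces to the plain Iso-W mixture rule.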
As determined experimentally in chapter 3, martensite appears only for samples with holding times longer than 2h. The values for the C content of retained austenite and martensite were taken from the calculations performed in section 3.3.2 (Table 7). The C_eq calculated for martensite was close to 1 wt.%; however, as discussed above, the mechanical behavior of such martensite cannot be determined. Thus, when C_eq exceeded 0.5, it was limited to the value of 0.5, which gives σ_0FM = 1129 MPa. True stress-true strain curves (only plastic strain is considered) simulated with the model and obtained experimentally are compared in Figure 130. This figure is divided into four pairs for easy judgment of the model performance (avoiding intermixture between curves): (a) 3min and 1h samples; (b) 10min and 3h samples; (c) 30min and 10h samples; (d) 2h and 30h samples. Curves of the 10h and 20h samples were not represented as their behavior is very close to that of the 7h and 30h samples, respectively. As can be seen, the model provides quite reasonable results: an accurate prediction of the whole stress-strain curves is obtained for the samples with different holding times at 650°C. This also means that a good description of the two-phase and three-phase microstructures was achieved, as the samples with holding times lower than 2h contained only ferrite and retained austenite (the carbides were of course neglected), while the samples with longer holding times contained fresh martensite in addition. However, it can be seen that the beginning of certain curves (1h and 2h samples) was not well predicted. This is thought to be linked to the pronounced YPE, which was not accounted for at all in the model. The description of the Lüders plateau is rather complex and needs a specific study, which was not in the scope of the present work. The strain hardening of the different samples is correctly described and the variation of the different mechanical parameters was well predicted.
Table 14 shows the comparison between the experimental and model mechanical properties. Differences between experimental and model values of UTS and Uel (ΔUTS and ΔUel) were rather limited and comparable with the experimental errors. On the other hand, the prediction of YS appeared to be more difficult. In the case of samples with a high volume fraction of RA (30min, 1h and 2h holding), ΔYS is close to 100 MPa, which is rather high. Such a big error is attributed to three factors: 1. the presence of YPE; 2. difficulties in assessing the individual values of YS for each constituent; 3. the lack of information about the austenite mechanical stability at low strain levels. However, a more detailed study is necessary to establish the precise reasons for such deviations. Although the absolute values of YS were not well predicted, the evolution of YS with the holding time was well described. Globally, the performance of this model, with all the taken assumptions and only 3 fitting parameters (λ, ε_0 and n), was considered to be very satisfactory. A good description of the mechanical properties and of the tensile behavior evolution with the holding time at 650°C is achieved. A sensitivity analysis of the model was also performed; its detailed description and results are presented in Annex 4.2. Figure 131 shows the effect of the most sensitive parameters on the true stress-true strain curve. Globally, the model properly accounts for the different influences of the input parameters and shows a certain sensitivity to all of them. Consequently, for good performance of the model, a detailed microstructural analysis of such complex microstructures is needed. As highlighted in Figure 131, the most sensitive parameters are the fractions of the constituents (especially fresh martensite) and the size of the ferrite. On the contrary, the impact of the retained austenite stability, represented by ε_0 and n, appears to be of second order.
CONCLUSIONS

The so-called "Medium Mn" steel (MMS) is a promising solution to obtain high strength steels with good formability. The attractive combination of strength and ductility is the result of a pronounced TRIP effect in these steels. The latter is obtained thanks to the high amount of retained austenite in the final microstructure at room temperature. There are several reasons for such a high retained austenite fraction, for example Mn partitioning and the ultra-fine microstructure. This type of ultra-fine microstructure can be obtained using a particular heat treatment, named ART-annealing, that consists of two subsequent heat cycles: a first austenitization followed by rapid cooling to obtain a fully martensitic structure, then an intercritical annealing of this martensitic Medium Mn steel. During the second treatment, the formation of austenite occurs according to the so-called "Austenite Reverted Transformation" (ART) mechanism. The main goal of this PhD was to better understand the mechanism of microstructure formation and the link between microstructure parameters and the mechanical properties of Medium Mn steels. First of all, the chemical composition of the steel studied during this PhD work was Fe-0.1C-4.7Mn wt.%. It was chosen from a large analysis of literature data assisted by thermodynamic calculations. The temperature for the second intercritical annealing was determined according to thermodynamic simulations and, at the same time, by combinatory experimental heat treatments. It was found that there is an optimum domain of temperatures (between 640 and 660°C) where a good balance between strength and ductility can be achieved. Based on the obtained results, the selected heat treatment was a double annealing with a first austenitization at 750°C for 30min followed by an intercritical batch annealing at 650°C. Using this type of treatment, the time-evolution of the microstructure and mechanical properties during the intercritical annealing at 650°C was studied.
Microstructure evolution during intercritical annealing

The initial state before intercritical annealing (after austenitization at 750°C for 30min followed by water quench) was characterized using different experimental techniques: optical (OM), scanning electron (SEM) and transmission electron microscopy (TEM), X-ray diffraction (XRD) and electron probe microanalysis (EPMA). The majority of the microstructure was found to be lath martensite. Next, the microstructure evolution during intercritical annealing at 650°C was characterized using the aforementioned techniques as well as saturation magnetization measurements and NanoSIMS analysis. First, the evolution of Mn microsegregations was investigated. It was found that a significant redistribution of Mn happens during long intercritical annealing. Such long-range diffusion of Mn supposes that Mn diffuses in both ferrite and austenite, meaning that the effective diffusivity of Mn may differ from that in ferrite and in austenite. It was found that homogenization was controlled by a diffusivity close to that in ferrite. At the same time, both the small segregated band spacing and the ultra-fine size of the microstructure constituents were supposed to enhance this phenomenon. On the other hand, another type of Mn diffusion was observed at a smaller scale. This Mn partitioning is mainly linked to the interactions between Mn and the migrating α/γ interface during austenite growth. The second step was the characterization of the cementite precipitation state evolution during heating and holding at 650°C. In this work, the evolution of carbides during the heating process was not very significant, probably due to the relatively high heating rate of ~5°C/s. The most important information from this analysis was the mean Mn content of the carbides at the beginning of the holding (650°C-0s); its value was estimated to be about 10 wt.%. Then, the time-evolution of the carbides during holding was analyzed on four samples: 3min, 1h, 2h and 3h.
At the beginning of holding (3min), carbides with two types of morphology (rod and polygonal) and Mn content (15 and 30 wt.%) were observed. Samples after 2h holding still presented a small amount of carbides with a high Mn level (~25 wt.%). Finally, the sample after 3h holding was completely carbide free. The existence of cementite after 2h holding at 650°C contrasts with equilibrium thermodynamic calculations predicting complete cementite dissolution at around 610°C. This indirectly highlights the role of kinetic effects. The third stage was the analysis of the austenite and ferrite time-evolution. SEM characterizations showed that the final microstructures contained at least three phases: ferrite, retained austenite and fresh martensite. A double morphology (lath-like and polygonal) was revealed for both austenite (retained austenite and fresh martensite) and ferrite. At longer holding times (more than 3h), a clear presence of fresh martensite was observed, and most of this martensite was polygonal. From image analysis of the SEM pictures, it was also found that the kinetics of austenite transformation was very rapid in the beginning and then became smoother. The Mn distribution between austenite and ferrite as a function of time was evaluated using local TEM-EDX measurements and TEM-EDX hypermaps. Unexpectedly, the system reaches equilibrium after a relatively short time of 10h. The experimental volume fractions of the phases and the respective compositions were in agreement with those calculated with the Thermo-Calc software: 36% of austenite with 9.1 wt.% Mn and 64% of ferrite with 2.3 wt.% Mn. At last, the carbon content of austenite for the samples with 1h and 30h holding at 650°C was measured by the NanoSIMS technique. The C content of austenite was estimated by comparing the obtained SIMS intensities between the analyzed samples and a reference one using image analysis.
This methodology was employed for the first time in this work and was considered to be well adapted to this type of study. The measured mean carbon contents of the 1h and 30h samples were close to 0.36 wt.% and 0.27 wt.%, respectively. These values correspond very well to the C content calculated from the mass balance neglecting the presence of cementite and thus confirm that a major part of the carbides is dissolved after a one hour treatment at 650°C. The fourth step was the study of the retained austenite and martensite time-evolution. Here it is important to note that a preliminary comparison between two experimental techniques (XRD and Sigmametry) for retained austenite fraction measurements was performed. This was done because of the observed discrepancies in the RA fraction values obtained with the two mentioned techniques. It was found that the RA fraction is underestimated by XRD measurements due to the preceding mechanical polishing. Such an important effect of mechanical polishing is particular to the MMS and probably results from the lower mechanical stability of the RA. As in standard TRIP steels, the RA stability is assumed to be controlled by the austenite composition (C and Mn in this work) and its size. The effect of the austenite size on its thermal stability is quite important in the case of MMS. However, according to the very recent work of Matsuoka et al. [MAT'13], the size effect on the mechanical stability of RA is quite low or even absent. Based on these results, the Sigmametry technique was considered to be more adapted for this study. Then, the time-evolution of the volume fraction of retained austenite was evaluated. A particular peak-type form of the RA time-evolution curve was observed.
It can be divided into three stages:
- full stabilization of the growing austenite, resulting in the increase of the RA fraction up to its maximum value;
- decrease of the RA fraction and appearance of martensite due to the decrease of austenite stability (lower C and Mn contents and bigger size);
- a final stage where the RA seems to no longer evolve, probably because a state close to equilibrium is reached.

In the end, TEM analysis of several samples with different holding times was performed. Depending on the holding time at 650°C, the four possible phases, cementite, ferrite, retained austenite and fresh martensite, were distinguished using Convergent Beam Electron Diffraction (CBED) in STEM mode (Scanning TEM). The double morphology (lath-like and polygonal) of the observed features was confirmed. The next point was the austenite size measurements and the discussion of the observed morphology. The mean austenite size was found to vary from 0.16 µm to 0.45 µm over the studied range of times. These values correspond well to those found in the literature. Based on the overall microstructure characterizations, a complete vision of the sequence of carbide precipitation and austenite nucleation and growth, in connection with the microstructure topology, was proposed. The next part of the work consisted in the analysis of the main results using a coupled experimental/modelling approach. The kinetics of phase transformation in the ultra-fine lath-like microstructure, including both cementite dissolution and austenite formation, was studied with DICTRA simulations. It was found that in this specific case of a Medium Mn steel with ultra-fine microstructure, the major part of the austenite growth is controlled by Mn diffusion in ferrite, and by Mn diffusion in austenite at longer times. It was also clearly shown that standard DICTRA simulations based on a mean field approach disclose limitations for the prediction of the observed kinetics of cementite dissolution and austenite formation.
However, the prediction of the austenite fraction evolution can be improved by taking the dispersions into account. These dispersions come from the prior processing conditions and play an important role in the kinetics of phase transformations. In this work, DICTRA simulations that account for the effect of the size distribution were performed. A very good prediction of the austenite kinetics was achieved using this methodology. The reason for such a good result is the correct selection of the size distribution, which is coherent with the experimental observations. Finally, the important role of the initial ultra-fine size for the overall kinetics was highlighted. In the last section of this part, the critical factors controlling the thermal austenite stability, including both chemical and size effects, were determined and discussed, based on the analysis of the retained austenite time-evolution. In an original manner, it was shown that the time-evolution of retained austenite is a fair indicator of the critical factors governing austenite stability in MMS with ultra-fine microstructure. In particular, the peak of the curve defines the critical point, in terms of composition and size, delimiting stable and unstable austenite. The effect of the austenite size on its stability was clearly demonstrated and a critical size of about 280 nm was estimated. Finally, an adapted formulation of the M_s temperature law applicable to medium Mn steels with ultra-fine microstructure was proposed.

Characterization and modeling of mechanical properties

The tensile properties of the steel were measured as a function of the holding time at 650°C and the relation between microstructure and mechanical behavior was analyzed. A continuous increase of the ultimate tensile strength (UTS) and decrease of the yield strength (YS) was observed with increasing holding time.
At the same time, the uniform (Uel) and total (TE) elongations have a hill-like time-evolution with maximum values corresponding to the range of times between 30min and 2h. The increase of UTS is clearly related to the increase of the global austenite fraction (retained austenite plus fresh martensite). The observed permanent decrease of YS was supposed to be related to the continuous recovery and recrystallization of the initial martensite structure, which leads to a lower resistance of the matrix phase (ferrite) due to a lower density of defects. Finally, it was seen that the time-evolution of ductility (Uel and TE) is somehow linked to the RA fraction, but this relation is not straightforward, since it requires taking into account the mechanical stability of the RA. Recovery and recrystallization of the initial martensite structure can also affect the ductility. The analysis of the work hardening rate (WH) curves supports the idea of a significant influence of RA on both ductility and strength. Indeed, all the samples exhibited a transformation induced plasticity (TRIP) effect; however, the rate of strain induced transformation was not the same and, hence, the strain hardening behavior was also singular. An advanced analysis of the individual behavior of the three major constituents (ferrite (annealed martensite), fresh martensite and retained austenite) was done in order to better understand the behavior of such multiphase medium Mn steels.

Mechanical behavior of ferrite with medium Mn content

From the microstructure characterizations, it was shown that the ferrite (annealed martensite) is the matrix phase of the steel. This ferrite was obtained during the annealing of the initial martensite structure and thus has a double specificity. Firstly, the majority of the ferrite has a lath-like morphology. Secondly, it has an important dislocation density inherited from the recovery process.
No simple way to produce such a type of microstructure (100% of lath-like dislocated ferrite) was found. Hence, it was assumed that a microstructure with lath-like ferrite (more than 98%) and some carbides has a similar mechanical behavior. This type of microstructure was obtained by annealing the initial martensite at 500°C; three different holding times were considered. The obtained tensile curves were particular, but in accordance with the literature data: the yield strength was rather high (more than 750 MPa), but almost no strain hardening was observed (flat curves). Taking into account the fact that there was no strain hardening of such a ferrite and that its morphology was lath-like, it was proposed to model its behavior with an elastic perfectly plastic law, where the stress level depends only on solid solution hardening and on the mean size of the laths.

Mechanical behavior of as-quenched medium Mn martensite

Experimental stress-strain curves of the as-quenched martensite of the studied steel were obtained. It was observed that the strength level and the strain hardening rate are higher than the levels classically reported in the literature for a 0.1C (wt.%) steel. Also, it was not possible to predict the stress-strain curve of such martensite using the Continuous Composite Approach (CCA) model [ALL'12]. Therefore, it was supposed that there is an effect of Mn on the strength and strain hardening of martensite. To investigate this effect, a database of 8 C-Mn martensitic steels from previously published studies and from some new experimental trials was built. The stress-strain curves and the strain hardening rate versus true stress were analyzed. An important influence of the Mn content on the strength and strain hardening of fresh martensite was confirmed. Moreover, it was found that the Mn content influences the martensite strength and strain hardening in obvious synergy with the carbon content of the martensite.
In addition, a more pragmatic behavior law, based on the CCA approach, was proposed for modeling the stress-strain curve of medium Mn martensite. The revealed synergy between carbon and manganese was taken into account in a specific way and the coefficient of the manganese influence was adjusted. The results of the adjusted model, with only one fitting parameter, showed very satisfactory agreement with the experimental data. The influence of the Mn content on the mechanical behavior of as-quenched martensite and its synergy with the C content of martensite was clearly evidenced in this work.

Mechanical behavior of austenite with medium C and Mn content

The mechanical behavior of austenite was assessed on a steel with 0.5 wt.% C and 10 wt.% Mn. This composition was chosen based on the experimental characterization of austenite. The investigations were performed on the hot rolled steel (~4 mm thick) because cold rolling was not possible. A specific annealing treatment was determined in order to avoid α'- and/or ε-martensite formation and to produce an almost fully austenitic steel. The final microstructure was mostly austenitic and contained less than 7% of α'-martensite. The obtained tensile curves were rather particular: high strain hardening, low YS and low ductility were observed. It was considered that the low YS is intrinsic to the austenite matrix, which is in accordance with the literature data. On the other hand, the poor ductility and high strain hardening were supposed to be the result of transformation induced α'-martensite with high levels of C and Mn. On the grounds of these results, it was proposed to describe the mechanical behavior of retained austenite using the empirical law proposed by Bouaziz et al. [BOU'11].

Retained austenite strain induced transformation (TRIP effect)

Retained austenite transforms to martensite under mechanical loading.
Interrupted tensile tests were performed for two treatments (1h and 30h holding at 650°C) in order to study the evolution of the RA fraction with increasing deformation. It was found that in the 1h sample a portion of the RA was stable enough to avoid transformation even at rather high strain levels, and at the end of the tensile test ~8% of RA was still available. On the contrary, a lower stability of the RA was observed in the 30h sample and all the austenite was consumed. These observations are also supported by the strain hardening rate curves of the studied samples. Based on the Kolmogorov-Johnson-Mehl-Avrami approach, a special form of a phenomenological exponential law was proposed for modeling the evolution of the RA fraction with increasing strain. Two fitting parameters, which allow good control of the induced transformation rate, were necessary. In future studies, these parameters could be related to physical factors controlling the stability of retained austenite.

Iso-W model to predict the mechanical behavior of the studied medium Mn steel

At last, based on the Iso-W mixture model, a complete model for predicting the true stress versus true strain curves of medium Mn steels was proposed. The effect of the strain induced RA transformation on the strain hardening, as well as the difference between the flow stresses of the two phases (transformed and induced), was taken into account in the way proposed by Embury and Bouaziz [EMB'10]. The performance of this model, with all the taken assumptions and only 3 fitting parameters, was considered to be very satisfactory. A good description of the mechanical properties, stress-strain curves and strain hardening rate evolutions with the holding time at 650°C was achieved. It was found that the most sensitive parameters are the fractions of the constituents (especially fresh martensite) and the size of the ferrite. In contrast, the influence of the retained austenite stability appears to be of second order.
As a general conclusion, it can be highlighted that this work contributes to the understanding of the microstructure formation and of the mechanical behavior of ultra-fine medium Mn steels. The microstructure analysis revealed undoubtedly the influence of the austenite size on its thermal stability and the absolute necessity to take this into account for M_s temperature predictions. On the other hand, the studies of the mechanical behavior demonstrated the important influence of the Mn content on the tensile behavior of as-quenched martensite. Moreover, a complete model for the prediction of the mechanical behavior of complex multiphase medium Mn steels was built using the inputs from the microstructure investigations. Clearly, this model is a helpful tool for further developments of medium Mn steels.

PERSPECTIVES

Although this work proposed answers to several important questions, there is still a certain number of topics for further investigation and improvement. Furthermore, this work raised some new questions; hence there are even more open questions now than at the beginning of this study. Possible subjects for further studies are summarized hereafter.

Microstructure formation and evolution

In this work, it was shown that ferrite recrystallization from an initial, non-deformed martensite structure can happen rather rapidly. The first ultra-fine grains of ferrite were detected already after 30min of annealing at 650°C. In the literature, there are certain works about ferrite recrystallization from non-deformed martensite; however, the observed times are much longer. From a scientific point of view, ferrite recrystallization from non-deformed martensite can be a very interesting subject for further investigations. As stated before, a long-range diffusion of Mn happens during long intercritical annealing. This suggests that the effective diffusivity of Mn may differ from that in ferrite and in austenite.
Therefore, it can be of interest to develop a method for calculating the diffusivity of Mn in a two-phase or multiphase structure.

Characterization of the cementite precipitation state and the subsequent DICTRA simulations highlighted the importance of cementite dissolution for the austenite transformation and stabilization. Moreover, the stability of austenite and its fraction are both very significant parameters for the mechanical behavior of the steel. In particular, different thermal paths can produce different carbide and austenite evolutions, and thus result in various mechanical properties. Therefore, the control of the cementite precipitation state and of its influence on the austenite transformation and stabilization can be a very valuable topic for future studies.

In this study, the austenite size was shown to have an important influence on its thermal stability. However, the mechanism of austenite stabilization by the size effect is still a subject of discussion. Thus, it can be interesting to continue the work on this topic. Moreover, some results pointed out a low or even non-existent effect of size on the mechanical stability of austenite. This issue requires deeper investigation.

Finally, it was clearly demonstrated that standard DICTRA simulations based on a mean field approach show limitations for the prediction of the observed kinetics of cementite dissolution and austenite formation. Nevertheless, the prediction can be improved by taking the dispersions into account, for example in the form of a size distribution. It seems important to work on the characterization and modeling of the different dispersions (size, morphology, topology, etc.) and then to introduce them into the kinetics simulations. This aspect is of particular importance for the 3rd generation AHSS grades due to their higher sensitivity to heritage effects.

Mechanical properties and behavior

It was discussed that producing a structure of 100% lath-like dislocated ferrite is a very complex issue.
It is thought to be challenging to produce such a type of microstructure and to measure its mechanical response.

A significant influence of the Mn content on the tensile behavior of as-quenched martensite was revealed. It was also emphasized that there is a synergy between C and Mn. However, understanding the physical mechanism of this Mn influence and of its synergy with C requires further, deeper studies. It is also of interest to investigate whether there is any effect on the microstructure of as-quenched martensite and its link with the mechanical behavior.

Finally, improving the characterization and modeling of the austenite induced transformation (TRIP effect) is considered to be very relevant. At least two ways to improve the modeling of the TRIP effect are possible. One is to modify the already existing physical model proposed by Perlade et al. [PER'03] in order to decrease its exaggerated sensitivity to the austenite grain size. Another possibility is to introduce some physical factors controlling the retained austenite stability into the empirical law used in this work.

Austenitization at 750°C 30min WQ

Figure 136 presents the Mn map analysis performed using Aphelion software: quantitative Mn map and Mn profiles along the lines 1 and 2. An optical observation of the microstructure is also included in this figure.

ART annealing at 650°C for 3min

The microstructure obtained after 3min ART annealing at 650°C is shown in Figure 138. Figure 139 presents the Mn map analysis performed using Aphelion software: quantitative Mn map and Mn profiles along the lines 1 and 2.

ART annealing at 650°C for 1h

The microstructure obtained after 1h ART annealing at 650°C is shown in Figure 140. Figure 141 presents the Mn map analysis performed using Aphelion software: quantitative Mn map and Mn profiles along the lines 1 and 2.

ART annealing at 650°C for 10h

The microstructure obtained after 10h ART annealing at 650°C is shown in Figure 142.
Figure 143 presents the Mn map analysis performed using Aphelion software: quantitative Mn map and Mn profiles along the lines 1, 2 and 3.

ART annealing at 650°C for 30h

The microstructure obtained after 30h ART annealing at 650°C is shown in Figure 144. Figure 145 presents the Mn map analysis performed using Aphelion software: quantitative Mn map and Mn profile along the line 1.

As a global conclusion of the model sensitivity analysis, it can be stated that the model accounts well for the different effects of the input parameters and shows a certain sensitivity to all of them. This proves the necessity of a detailed microstructural analysis of such complex microstructures. It can also be highlighted that the most sensitive parameters are the fractions of the constituents (especially fresh martensite) and the size of ferrite. In contrast, the stability of retained austenite has a lighter impact.

Figure 1 - Evolution of average CO2 emissions from the car fleet in the EU.
Figure 2 - Schematic representation of the tensile strength/total elongation balance for different already developed and future steels [MAT'06].
Figure 3 - Published results of mechanical properties of Medium Mn steels - ultimate tensile strength (UTS) as a function of total elongation (TE); the target of third generation high strength steels is marked with a red ellipse.

1.1 General Description
1.2 Mechanisms of phase transformations
1.2.1 Recrystallization
1.2.2 Austenite formation during intercritical annealing
1.2.3 Particular case of Austenite Reverse Transformation (ART)
1.2.4 Austenite stabilization during annealing
1.3 Mechanical Properties of Medium Mn steels
1.3.1 Overview of the mechanical properties
1.3.2 Mechanical behavior of ultra fine ferrite
1.3.3 Mechanical behavior of fresh martensite
1.3.4 Mechanical behavior of retained austenite
1.3.5 Mechanical stability of retained austenite and induced transformation
1.3.5.1 Austenite stability and TRIP effect in MMS
1.3.5.2 Models for austenite induced transformation
1.3.6 Modeling of the multiphase steel mechanical behavior
1.3.7 Yield Point Elongation
References

Figure 4 - Microstructures of the medium Mn steel observed in the SEM-FEG after annealing at 670°C for 6h (a), revealed with Marshall reagent, and 7h (b), revealed with OPU polishing and picral etching. In both cases the uniform light grey colour corresponds to ferrite and the rough dark grey areas are martensite and/or retained austenite [ARL'12].
Figure 5 - Schematic description of the different steps of recrystallization: (a) Deformed state, (b) Recovered, (c) Partially recrystallized, (d) Fully recrystallized and (e) Grain growth [HUM'04].
Figure 7 - Left part - Optical micrographs and traces of grain boundaries obtained by the in-situ observation of the as-quenched specimen (upper pictures) and tempered at 1023 K for 3.6 ks (lower ones). A bulging packet boundary is shown by the hatched line. Right part - Transmission electron micrographs and traces showing nucleation (tempered at 973 K for 2.1 ks) (a) and growth (tempered at 973 K for 2.7 ks) (b) of recrystallized grains in the ULC steel [TSU'01].
Figure 8 - SEM image of the ferrite-Fe3C carbides microstructure, obtained after as-quenched martensite tempering at 690°C for 60h [TAU'13].
Figure 10 - Schematic representation of austenite formation and growth during intercritical annealing of ferrite-pearlite steels: 1 - dissolution of pearlite, 2a - austenite growth with carbon diffusion in austenite, 2b - austenite growth with manganese diffusion in ferrite, 3 - final equilibration with manganese diffusion in austenite [SPE'81].
Figure 12 - Mn distribution in ferrite and martensite: (a) as-rolled (before heat treatment), and after 695°C intercritical annealing for (b) 0.83 h and (c) 30 h followed by brine quenching. Positions of the α/α' boundaries are indicated [PUS'84].
Figure 14 - Fe-C equilibrium diagram and the effect of manganese on the form and position of the austenitic region (taken from [DES'76]).
Figure 15 - Isothermal transformation diagram - effect of manganese on the austenite decomposition of a 0.55 wt.% C steel at two different temperatures (taken from [DES'76]).
Figure 16 - At left - microstructure of specimens quenched from 770 and 780°C: γG - globular austenite and γA + α - acicular austenite + ferrite. At right - number of globular austenite grains as a function of heating rate [MAT'74-1], [MAT'74-2].
Figure 17 - Scheme of the applied heat treatments and corresponding final microstructures after: (a) Intermediate Quenching (ART); (b) Intercritical Annealing and (c) Step Quenching.
Figure 18 - a) Schematic illustrations of the starting microstructures: MT-1 and MT-2. b) Austenite particles (martensite at room temperature) revealed with LePera's etchant: MT-1 sample annealed at 840°C for 2 min and water-quenched, and MT-2 sample annealed at 780°C for 10 sec and water-quenched.
Figure 19 - a) Comparison of measured and calculated volume fractions of austenite (open symbols are measured by point counting and solid ones are from dilatometry). b), c), d) - Isothermal sections of a ternary Fe-C-Mn phase diagram illustrating different steps of austenite growth during annealing (open circles indicate the bulk alloy composition; solid circles represent the mean composition of the ferrite matrix changing with time) [WEI'13].

[…]×10^-3, b = 0.2689 and M s 0 = 363.5°C are the three fitting parameters. Figure 20 presents the comparison of both models ([JIM'07-2] and [YAN'09]) in terms of the decrease of the M s temperature:

Figure 20 - Decrease of the M s temperature as a function of austenite grain size according to both studies: [JIM'07-2] and [YAN'09].
Figure 21 - Two upper figures: effect of holding temperature on the amount of retained austenite and on its carbon content after 6h annealing at different temperatures. Lower figure: EDS-TEM analysis of retained austenite in an 8 wt.% Mn steel after 6h annealing at 625°C [KIM'05].
Figure 22 - TEM images (at left) and EDX profiles (at right) of the samples after intercritical annealing at different temperatures: (a) - 640, (b) - 660, (c) - 680, (d) - 700°C.
Figure 24 - UTS-TE and YS-Uel (when the data were available) charts using the most relevant data found in the literature and patents.
Figure 25 - Influence of the retained austenite fraction on: (a) - TE; (b) - Uel; plotted with the available literature data.
Figure 26 - Relation between ultimate true stress and retained austenite fraction; plotted with the available literature data.
Figure 28 - Variation of the YS, UTS (at left) and Uel (at right) as a function of the grain size for pure iron and IF steels [BOU'09].
Figure 29 - TEM image of lath martensite in the 0.2C-1.5Mn-0.15V (wt.%) steel formed from fine-grained austenite with a 2.3 µm mean grain size. The plain and dotted white lines show prior austenite grain and packet boundaries, respectively [MOR'05].
Figure 30 - Normalized true stress - true strain tensile curves of martensitic steels with different carbon contents [ALL'12].
Figure 31 - Hardness of martensitic microstructures in steels as a function of their carbon content [KRA'99].
Figure 32 - 0.2% offset yield stress as a function of carbon content for Fe-C [SPE'68] and for Fe-C-Mn alloys [NOR'76].
Figure 33 - Typical stress spectrum f(σ) with the associated cumulated function F(σ) and the definition of σmin [ALL'12].
Figure 34 - (a) Comparison between the results of the model and the experimental tensile curves of the studied steels; (b) Comparison between the results of the model and the experimental evolution of the slopes of the tensile curves as a function of true stress; (c) Evolution of the adjusted σ0 parameters with the C content of the steels; (d) Adjusted stress spectrums for the different martensitic steels [ALL'12].
Figure 35 - At left, Schumann's Fe-Mn-C phase stability diagram at 298K; at right, the modified Schumann's Fe-Mn-C phase stability diagram after tensile tests at 298K [SCO'05].
Figure 37 - Retained austenite volume fraction versus true tensile strain: (a) samples with 5min, 30min, 6h and 144h holding time at 650°C and deformed at 25°C; (b) 6h annealed samples deformed at different temperatures: 100°C, 50°C, 25°C, -40°C, -80°C [CAO'11-1].
Figure 38 - Normalized austenite fraction as a function of engineering strain for (a) 0.11C-4.5Mn-0.45Si-2.2Al and (b) 0.055C-5.6Mn-0.49Si-2.1Al steels. Symbols are measured values and lines are calculated ones.
Figure 39 - At left: evolution of the retained austenite fraction as a function of strain; at right: evolution of the transformed austenite fraction as a function of true strain (points) and fitted Olson-Cohen model (lines) [GIB'11].
Figure 40 - (a) Evolution of the martensite nucleation stress with temperature, (b) Gibbs free energy curves versus temperature and effect of an applied stress σ [PER'03].
Figure 41 - Volume fraction of strain induced martensite as a function of the macroscopic strain for the different TRIP steels. Symbols are the obtained experimental results and curves are the predictions of the model [PER'03].
Figure 42 - Influence of the retained austenite size on the martensite induced transformation according to Perlade's model [MOU'02].
Figure 46 - AET Gradient Batch Annealing (GBA) furnace.
Figure 48 - Scheme of the small (so-called "mini") specimens used for tensile tests after gradient batch annealing.
Figure 49 - Scheme of the so-called ISO 12.5x50 specimens used for tensile tests after annealing in the Nabertherm furnace.
Figure 50 - Zwick 1474 machine.
Figure 51 - MacroXtens SE50 extensometer: at left - global view; at right - zoom on the part which is in contact with the tensile specimen.
Figure 52 - Siemens D5000 diffractometer (at left) and scheme of the goniometric circle with the tube, detector and sample positions and the rotation angles 2θ, ψ and φ.
Figure 54 - Microstructure observations (FEG SEM images) of the different reference samples for saturation magnetization measurements: (a) martensitic structure obtained by austenitization at 750°C for 30min and quenching; (b) sample after double annealing: 850°C - 1min WQ followed by 500°C for 1h WQ; (c) sample after double annealing: 750°C - 30min WQ followed by 500°C for 30h WQ.
Figure 55 - Retained austenite (RA) fraction as a function of holding time at 650°C. Comparison of two techniques: XRD (blue triangles) and Sigmametry (green squares).
Figure 56 - Zeiss Axiovert 200 MAT optical microscope.
Figure 57 - Field Emission Gun Scanning Electron Microscope (FEG SEM) JEOL 7001F.
Figure 58 - Example of a BSE image (at left) and of the binary image obtained after threshold (at right). 1h WQ sample after Metabisulfite + Dino etchings.
Figure 59 - Example of the performed measurements of lath width and equivalent diameter of polygon. 2h WQ sample after Metabisulfite + Dino etchings.
Figure 60 - View on the JEOL 2100F TEM.
Figure 61 - CAMECA SX100 Electron probe microanalyzer.
Figure 62 - Photo of the Cameca NanoSIMS 50.
Figure 64 - Scheme of a sample mounted in an Al ring (at left) and the nanoSIMS sample holder (at right) (modified from [PUS'09]).
Figure 65 - 12C ion map of the reference sample and the selection of martensite zones for the averaging of SIMS intensities.
Figure 66 - Observed microstructures after hot rolling with coiling at 625°C (at left - Nital etching) and after cold rolling (at right - Dino etching).
Figure 67 - Pseudo-binary diagrams calculated with Thermo-Calc software for the steels with four different Mn contents: (a) 1.7 wt.%; (b) 2.7 wt.%; (c) 3.7 wt.%; (d) 4.7 wt.%.
Figure 68 - Evolution of phase fractions and of austenite C and Mn contents with temperature according to the Thermo-Calc simulations: steel with 0.098 wt.% C and 4.7 wt.% Mn.
Figure 69 - Evolution of the retained austenite fraction and of its C content as a function of annealing temperature (using the data from Thermo-Calc simulations).
Figure 70 - Scheme of the gradient batch annealing cycle (in the left part of the figure) and the obtained gradient of temperature depending on the position in the sheet (in the right part).
Figure 71 - Evolution of ultimate tensile strength (UTS) and total elongation (TE) as a function of holding temperature.
Figure 72 - Evolution of the retained austenite fraction and of the balance between strength and ductility expressed by UTS*TE as a function of annealing temperature.

[…] of the microstructure after austenitization
3.2 Microstructure evolution during annealing at 650°C
3.2.1 Microsegregation evolution
3.2.2 Evolution of cementite precipitation state
3.2.3 Time-evolution of austenite and ferrite
3.2.4 Time-evolution of retained austenite and martensite
3.2.5 Geometrical and topological aspects
3.2.6 Overall view on the obtained experimental data
3.3 Discussion of main results: experimental/modelling approach
3.3.1 Mechanisms of austenite formation
3.3.1.1 Effects of the representative volume
3.3.1.2 Austenite growth controlled by Mn diffusion
3.3.2 Factors controlling austenite stabilization at room temperature
References

Figure 73 - Schematic representation of the performed double annealing cycles.
Figure 74 - Characterization of the martensite present in the microstructure after the first annealing cycle: a) OM image after Dino etching [ARL'13]; b) SEM image after Metabisulfite etching; c) TEM image obtained on a thin foil.
Figure 76 - EPMA quantitative analysis of the Mn distribution in the microstructure after the first annealing: a) quantitative Mn map (rolling direction is parallel to the abscissa); b) Mn profiles along the lines 1 and 2.
Figure 77 - Quantitative Mn maps obtained with EPMA for samples after annealing at 650°C for 3min, 1h, 10h and 30h.

Figure 77 (a) and (b) show that the Mn microsegregation is still evident after short time annealing (3min or 1h). For the samples with longer annealing times (10h and 30h) the situation is different (Figure 77 (c) and (d)): an important redistribution of Mn happens during such a long intercritical annealing. These analyses suggest that there is a progressive homogenization of Mn in the segregated bands during annealing, as observed in [LAV'49], [KRE'11]. In that case, the Mn diffusion length is expected to be of the order of the initial mean distance between bands, i.e. of the order of 10 µm. The long-range diffusion of Mn supposes that Mn should diffuse in a two-phase matrix of ferrite and austenite. This means that the effective diffusivity of Mn may be different from that in pure ferrite and in pure austenite. This phenomenon was already discussed in the work of […]. It was shown that such long-range Mn homogenization […]

Figure 78 - TEM images on replicas of the samples obtained with different reheating temperatures: 550, 600 and 650°C. The Mn content of the carbides measured with EDX is shown directly on the images.
Figure 79 - TEM images obtained on thin foils of samples with 3min, 1h, 2h and 3h holding time. Carbides observation was particularly targeted. Mn EDX measurements are directly presented on the images.
Figure 80 - Optical images of the microstructures obtained with different holding times at 650°C. Dino etching [ARL'13] was used to reveal the microstructure.
Figure 81 - SEM images of the microstructures obtained with different holding times at 650°C. Dino etching was used to reveal the microstructure.
Figure 84 - EDX Mn hypermaps obtained in TEM for the 1h, 2h, 3h and 10h samples.
Figure 85 - Examples of selected areas for Mn analysis: at left - 2h sample, at right - 3h sample.
Figure 86 - NanoSIMS results for the sample with 1h holding time: (a) - 12C ion map with selected zones for the intensity analysis; (b) - estimated C content of each selected area in map (a) (red square points) and calculated using mass balance (blue line).
Figure 87 - NanoSIMS results for the sample with 30h holding time: (a) - 12C ion map with selected zones for the intensity analysis; (b) - estimated C content of each selected area in map (a) (blue square points) and calculated using mass balance (orange line).
Figure 88 - RA and FM fractions as a function of holding time at 650°C.
Figure 89 - TEM images of samples after 3min and 1h of holding at 650°C. The diffraction patterns in STEM mode show the different phases with arrows (BCC - ferrite and FCC - retained austenite). The Mn content of the different phases determined with EDX is put directly on the images.
Figure 90 (c) and (d) (3h sample) show the coexistence of ferrite (BCC) and retained austenite (FCC), as well as their double morphology (lath-like and polygonal). A high dislocation density in certain ferrite grains can also be observed. Finally, the presence of lath martensite (BCC(M)) was clearly revealed in Figure 91 (10h and 30h samples).

Figure 90 - TEM images of samples after 2h and 3h of holding at 650°C. The diffraction patterns in STEM mode show the different phases with arrows (BCC - ferrite, BCC(M) - fresh martensite and FCC - retained austenite). The Mn content of the different phases determined with EDX is put directly on the images.
Figure 92 - Scheme of the austenite nucleation and growth in: A - non-deformed martensite structure and B - deformed structure.
Figure 94 - Schematic representation of the linear configuration for the DICTRA simulations.
Figure 95 - Kinetics of cementite dissolution (a) and of austenite formation (b) at 650°C calculated by DICTRA. The initial state corresponds to a cementite size of 2nm and to a 10 wt.% Mn content in cementite. The experimental measurements of the austenite fraction are shown with orange squares.
Figure 96 - Cementite and austenite fractions as a function of holding time obtained using DICTRA simulations with different sizes of the cementite region. The experimental measurements of the austenite fraction are shown with orange squares.
Figure 99 - Cross section of the studied Fe-C-Mn system at 650°C. The tie-lines for cementite dissolution and austenite growth are represented by blue and red lines, respectively. The contents of both C and Mn at the γ/θ and α/γ interfaces […]
Figure 100 - DICTRA calculation of the carbon activity profile through the system at the beginning of transformation (t=0s) at 650°C.
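The progressive homogenization of Mn between segregated bands discussed in the body text (diffusion length of the order of the ~10 µm band spacing) can be illustrated with a minimal explicit finite-difference solution of Fick's second law. The diffusivity, the band spacing and the sinusoidal initial profile below are illustrative assumptions, not measured or DICTRA inputs.

```python
import math

# Illustrative 1-D Fick's second law solver (explicit FTCS scheme) for the
# smoothing of a band-like Mn profile. D, L and the amplitude are hypothetical.
D = 1e-17          # m^2/s, illustrative effective Mn diffusivity at 650°C
L = 10e-6          # m, band spacing (order of the observed ~10 um)
n = 100            # grid points over one band period
dx = L / n
dt = 0.4 * dx * dx / D   # respects the FTCS stability limit dt <= dx^2/(2D)

# Initial profile: 4.7 wt.% mean Mn with a +/-1 wt.% sinusoidal modulation
c = [4.7 + 1.0 * math.sin(2.0 * math.pi * i / n) for i in range(n)]

t, t_end = 0.0, 36_000.0   # simulate 10 h of annealing
while t < t_end:
    # Periodic boundaries: the band pattern repeats along the thickness
    c = [c[i] + D * dt / dx**2 * (c[(i + 1) % n] - 2.0 * c[i] + c[(i - 1) % n])
         for i in range(n)]
    t += dt
amplitude = (max(c) - min(c)) / 2.0   # remaining segregation amplitude
```

For a sinusoidal profile the amplitude decays as exp(-D(2π/L)²t), so with these illustrative numbers only a modest fraction of the banding disappears after 10 h, consistent with the slow, long-time homogenization reported above.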
Figure 101 - DICTRA calculations of the Mn activity profile through the system at 650°C before cementite dissolution.
Figure 102 - Time-evolution of both C (a) and Mn (b) profiles during austenite growth at 650°C after cementite dissolution, simulated with DICTRA.
Figure 103 - Time-evolution of the tie-line for austenite growth. The equilibrium tie-line passing through the bulk composition (big red point) is given by the red dotted line.
Figure 104 - Austenite kinetics simulated with DICTRA using different sizes and Mn contents of cementite, compared to the experimentally measured values (orange squares).
Figure 105 - (a) Calculated L α Log-normal distribution; (b) Fractions for the chosen classes of size L α calculated from the Log-normal distribution.
Figure 106 - Austenite kinetics calculated using DICTRA simulations with a Log-normal distribution of L α, compared with the experimentally measured values (orange squares).
Figure 108 - Experimentally measured and calculated (KJMA model) time evolution of the austenite fraction.
Figure 109 - Interdependence of the fitting parameters K and q.
Figure 110 - Evolution of A and RA fractions with the holding time at 650°C: A and RA are the experimental fractions of initial (before quench; blue open triangles) and retained austenite (orange squares); RA-calculated (Andrews M s) - RA fraction calculated using equations 45 and 49 (black crosses); RA-calculated (q=0.039, K=83.3) - RA fraction calculated using equations 45 and 52 (pink filled lozenges).
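The discretization of a log-normal L α distribution into a few size classes, used above to feed the DICTRA simulations (Figure 105), can be sketched as follows. The median, the log-space standard deviation and the class boundaries are illustrative values, not the measured distribution.

```python
import math

# Hedged sketch: discretize a log-normal distribution of the characteristic
# length L_alpha into size classes. All numerical values are hypothetical.

def lognormal_cdf(x, median, shape):
    """CDF of a log-normal law with given median and log-space std dev."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (shape * math.sqrt(2.0))))

median, shape = 0.5, 0.6                  # micrometres / dimensionless (hypothetical)
edges = [0.1, 0.3, 0.5, 0.8, 1.2, 2.0]    # class boundaries in micrometres
classes = []
for lo, hi in zip(edges[:-1], edges[1:]):
    # Fraction of the population falling inside each class
    weight = lognormal_cdf(hi, median, shape) - lognormal_cdf(lo, median, shape)
    classes.append(((lo + hi) / 2.0, weight))

# Renormalise so the retained classes sum to 1 (the truncated tails are dropped)
total = sum(w for _, w in classes)
classes = [(c, w / total) for c, w in classes]
```

Each (class centre, fraction) pair can then be run as a separate one-dimensional simulation cell, and the global kinetics obtained as the fraction-weighted sum of the individual responses.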
Figure 111 - Comparison of the RA fraction calculated using the "geometrical model" (RA-calculated (adapted Yang's M s)) - equations 45 and 54 (blue filled triangles) - with the experimental RA fractions (RA - Experimental; orange squares), as well as the RA fraction calculated using equations 45 and 52 (RA-calculated (q=0.039, K=83.3); pink filled lozenges).
Figure 112 - Engineering (a, b) and true (c, d) stress-strain curves of the samples annealed at 650°C with different holding times.
Figure 113 - Evolution of the mechanical properties with the holding time for the samples annealed at 650°C.
Figure 114 - Strain hardening rate as a function of strain (a) and stress (b).
Figure 115 - Examples of the 3-stage work hardening rate evolution for the samples annealed at 650°C for 2 and 30 hours. Curves of true stress-true strain (a, b) and of strain hardening rate as a function of true strain (c, d).
Figure 116 - Tendencies between YS L, Maximum True Stress, Uel and the fractions of RA and FM+RA.
Figure 117 - Structure of the medium Mn steel quenched after holding at 750°C for 30min: (a) observation with optical microscope after Dino etching and (b) corresponding X-ray spectrum.
Figure 118 - Engineering and true stress-strain curves of the studied medium Mn martensitic samples.

Figure 119 presents the results of the experimental tensile tests performed in the present study along with the results collected from the previous works [PUS'09], [ZHU'10], [ALL'12]. The true stress evolution as a function of true strain is shown in Figure 119 (a), while Figure 119 (b) shows the related strain hardening rate evolution as a function of true stress (the so-called Kocks-Mecking plot [KOC'03], [MEC'81]).
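The conversions behind such a Kocks-Mecking plot (engineering to true stress and strain, then the strain hardening rate θ = dσ/dε plotted against σ) can be sketched as below. The tensile curve used here is synthetic (a Hollomon law with exponent 0.15), only meant to demonstrate the conversions; it is not measured data.

```python
import numpy as np

# Synthetic "measured" engineering curve built from a Hollomon true-stress law
n = 0.15
e_eng = np.linspace(1e-3, 0.20, 400)        # engineering strain
eps_true = np.log(1.0 + e_eng)              # true strain
sig_true = 1000.0 * eps_true**n             # true stress, MPa
s_eng = sig_true / (1.0 + e_eng)            # corresponding engineering stress

# From (e_eng, s_eng), as a tensile machine would record them, recover the
# true values (valid up to necking) and the strain hardening rate
sig = s_eng * (1.0 + e_eng)
eps = np.log(1.0 + e_eng)
theta = np.gradient(sig, eps)               # theta = d(sigma)/d(epsilon)

# Considere criterion: uniform elongation ends where theta drops to sigma
i_neck = int(np.argmax(theta <= sig))
eps_u = eps[i_neck]                         # ~ n for a Hollomon law
```

Plotting theta against sig gives the Kocks-Mecking representation; for this synthetic Hollomon law the intersection with the theta = sigma line recovers the well-known result that necking starts at a true strain equal to the hardening exponent.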
For all studied martensitic steels some general highlights can be stated:
• the conventional yield stress seems to be a function of the martensite carbon and manganese contents;
• a high work-hardening rate is observed, and it increases up to the necking strain in accordance with the carbon and manganese contents.

Figure 119 - (a) Experimental true stress-true strain curves of all studied martensitic steels; (b) Strain hardening rate as a function of true stress for the corresponding tensile tests presented in (a); (c) Comparison of the true stress-true strain curves of the 0.15C and 0.15C-5Mn steels, together with the curve of 0.15C shifted up by 150MPa, which corresponds to the solid solution hardening of Mn.

In Figure 120 (b), the experimentally adjusted points were compared to the ones predicted with equation 60.

Figure 120 - (a) Evolution of the modeled F(σ) with the true stress; (b) Comparison of the σ0 values: predicted with equation 60 (model) and experimentally adjusted ones.
Figure 121 - Comparison of the model and experimental results: (a) and (b) show the stress-strain curves: (a) - steels with varied C content; (b) - steels with higher Mn content and two steels with standard (for AHSS) Mn content for comparison; (c) - evolution of the strain hardening rate as a function of stress; (d) - evolution of the strain hardening rate as a function of strain. "E" means experimental data and "M" means data from the model.

Figure 121 (c) and (d) present the evolution of the strain hardening rate as a function of stress and as a function of strain, respectively. Taking into account that modeling a derivative is more complex than modeling the function itself, the proposed model gives very satisfactory results for the strain hardening rate evolution. These figures also show that the model correctly considers the synergetic influence of C and Mn on the strain hardening rate evolution.
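The continuous composite picture behind the F(σ) spectrum mentioned above treats martensite as a set of volume elements, each yielding at its own local stress drawn from a spectrum f(σ) with cumulated function F(σ). Under the simplifying assumption of iso-strain, elastic-perfectly-plastic elements, the hardening rate reduces to θ = E·(1 − F(σ)). The sketch below integrates this relation; the log-normal spectrum, σmin and all numerical values are illustrative placeholders, not the calibrated parameters or the exact formulation of [ALL'12].

```python
import math

E = 210e3                                     # Young modulus, MPa
sigma_min, med, shape = 800.0, 1600.0, 0.25   # hypothetical spectrum parameters

def F(sigma):
    """Cumulated fraction of elements already flowing at stress sigma
    (log-normal spectrum truncated below sigma_min; illustrative choice)."""
    if sigma <= sigma_min:
        return 0.0
    return 0.5 * (1.0 + math.erf(math.log(sigma / med) / (shape * math.sqrt(2.0))))

# Integrate d(sigma)/d(eps) = E * (1 - F(sigma)) by explicit Euler
eps, sigma, d_eps = 0.0, 0.0, 1e-5
curve = [(eps, sigma)]
while eps < 0.05:
    sigma += E * (1.0 - F(sigma)) * d_eps
    eps += d_eps
    curve.append((eps, sigma))
```

The resulting curve starts with the elastic slope E and its hardening rate falls off as F(σ) approaches 1, i.e. as the hardest elements of the spectrum are progressively put into plastic flow — the qualitative behavior shown in Figures 119-121.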
Figure 122 - Optical and SEM images of the hot rolled 0.5C-10Mn (wt.%) steel after Nital etching.
Figure 123 - Dilatometer trial performed on the 0.5C-10Mn (wt.%) steel: (a) - temperature versus time curve; (b) - dilation curve.
Figure 124 - Engineering and true stress-strain curves of the hot rolled and annealed at 800°C-5min samples.
Figure 125 - True stress-true strain curves of the sample annealed at 800°C for 5min (experimental points - open red squares) and of the model austenite structure (blue curve): (a) - complete curves; (b) - zoom on the low strain domain.
Figure 126 - Microstructure of the studied MMS steel annealed at 500°C for (a) - 1h, (b) - 30h. The grey color is ferrite and the white spots are carbides.
Figure 127 - Engineering and true stress-strain curves of the samples annealed at 500°C for different times and of one fresh martensite sample for comparison.
Figure 128 - Evolution of the induced martensite (a) and RA (b) fractions with the increase of strain.
Figure 129 - Induced martensite fraction (f M ind) as a function of strain. Proposed model compared to the experimental results for two samples annealed at 650°C for: (a) - 1h; (b) - 30h.
Figure 130 - True stress-true strain curves resulting from the proposed model compared to the experimental ones: (a) - 3min and 1h samples; (b) - 10min and 3h samples; (c) - 30min and 7h samples; (d) - 2h and 30h samples. "E" means experimental data and "M" means data from the model.
Figure 131 - Influence of the most sensitive parameters ((a) and (b) fractions of fresh martensite and retained austenite, respectively; (c) size of ferrite) on the true stress-true strain curves according to the model.

As a final conclusion, it should be stated that the model demonstrates very good performance and that an accurate prediction of the experimental tensile curves is achieved.
However, to obtain such good results the model requires a very fine and advanced description of the microstructure constituents and of their chemical composition. Therefore, additional studies to characterize the different microstructure constituents, in particular the lath-like recovered ferrite, could further improve the performance of the model. A better understanding of the induced austenite transformation can also contribute to the development of the model.

Figure 133 - Engineering (a, b) and true (c, d) stress-strain curves of the samples after gradient batch annealing.
Figure 134 - (a) Evolution of the retained austenite fraction as a function of holding temperature (GBA trial); (b) evolution of Uel as a function of RA fraction.
Figure 135 - Quantitative Mn maps obtained with EPMA for the sample after austenitization at 750°C for 30 min and for the samples after ART annealing at 650°C for 3 min, 1 h, 10 h and 30 h.
Figure 136 - EPMA quantitative analysis of the Mn distribution in the microstructure after the first annealing (750°C, 30 min, WQ): (a) optical micrograph after Dino etching; (b) quantitative Mn map (rolling direction parallel to the abscissa); (c) and (d) Mn profiles along the lines 1 and 2, respectively.
Figure 137 - (a), (c) and (e) show another representation of the quantitative Mn maps using the ImageJ software; (b), (d) and (f) plot the Mn profiles along the lines marked in (a), (c) and (e).
Figure 138 - SEM images of the microstructures obtained after 3 min ART annealing at 650°C: (a) ×3000; (b) ×5000.
Figure 139 - EPMA quantitative analysis of the Mn distribution in the microstructure obtained after 3 min ART annealing at 650°C: (a) quantitative Mn map (rolling direction parallel to the abscissa); (b) and (c) Mn profiles along the lines 1 and 2, respectively.
Figure 140 - SEM images of the microstructures obtained after 1 h ART annealing at 650°C: (a) ×3000; (b) ×5000.
Figure 142 - SEM images of the microstructures obtained after 10 h ART annealing at 650°C: (a) ×3000; (b) ×5000.
Figure 143 - EPMA quantitative analysis of the Mn distribution in the microstructure obtained after 10 h ART annealing at 650°C: (a) quantitative Mn map (rolling direction parallel to the abscissa); (b) and (c) Mn profiles along the lines 1, 2 and 3, respectively.
Figure 144 - SEM images of the microstructures obtained after 30 h ART annealing at 650°C: (a) ×3000; (b) ×5000.
Figure 145 - EPMA quantitative analysis of the Mn distribution in the microstructure after 30 h ART annealing at 650°C: (a) quantitative Mn map; (b) Mn profile along the line 1.
Figure 154 - Mn profiles of the 1h sample (1st hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 155 - Bright field TEM image (at left) and EDX hypermap of the corresponding zone (at right) of the 1h sample (2nd hypermap).
Figure 156 - Mn profiles of the 1h sample (2nd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 157 - Mn profiles of the 1h sample (2nd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 158 - Bright field TEM image (at left) and EDX hypermap of the corresponding zone (at right) of the 1h sample (3rd hypermap).
Figure 159 - Mn profiles of the 1h sample (3rd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 160 - Mn profiles of the 1h sample (3rd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 162 - Mn profiles of the 2h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 163 - Mn profiles of the 2h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 164 - Mn profiles of the 2h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 167 - Bright field TEM image (at left) and EDX hypermap of the corresponding zone (at right) of the 3h sample (2nd hypermap).
Figure 168 - Mn profiles of the 3h sample (2nd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 169 - Mn profiles of the 3h sample (2nd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 170 - Bright field TEM image (at left) and EDX hypermap of the corresponding zone (at right) of the 3h sample (3rd hypermap).
Figure 171 - Mn profiles of the 3h sample (3rd hypermap): at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 172 - Mn profiles of the 3h sample (3rd hypermap): at left, the selected zone; at right, the corresponding Mn evolution as a function of distance.
Figure 174 - Mn profiles of the 10h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 175 - Mn profiles of the 10h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 177 - Mn profiles of the 30h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 178 - Mn profiles of the 30h sample: at left, selected zones; at right, the corresponding Mn evolutions as a function of distance.
Figure 179 - Mn profiles of the 30h sample: at left, the selected zone; at right, the corresponding Mn evolution as a function of distance.
Figure 181 - Influence of ferrite size (λ) on the mechanical behavior according to the model: (a) true stress-true strain curves; (b) evolution of the mechanical characteristics (YS, UTS and Uel); (c) stability of retained austenite.
Figure 183 - Influence of retained austenite stability (ε0) on the mechanical behavior according to the model: (a) true stress-true strain curves; (b) evolution of the mechanical characteristics (YS, UTS and Uel); (c) stability of retained austenite.
Figure 184 - Influence of retained austenite stability (n) on the mechanical behavior according to the model: (a) true stress-true strain curves; (b) evolution of the mechanical characteristics (YS, UTS and Uel); (c) stability of retained austenite.

The martensite fraction formed on quenching follows the Koistinen-Marburger law, f_α' = 1 - exp[-α_m(M_s - T_q)]. The dependence of the rate parameter α_m on composition was established, based on numerous dilatometer data:

α_m = 0.0224 - 0.0107·w_C - 0.0007·w_Mn - 0.00012·w_Cr - 0.00005·w_Ni - 0.0001·w_Mo (6)

Similar modifications were proposed by Lee et al., but as their work was done on different steel compositions (medium Mn steels), other coefficients were used for the alloying elements. Lee et al. also introduced a supplementary fitting parameter (β) to improve the curvature prediction:

f_α' = 1 - exp[-α_m(M_s - T_q)^β], with α_m = 0.0076 - 0.0182·w_C - 0.00014·w_Mn and β = 1.4609 - 0.4483·w_C - 0.0545·w_Mn

Table 1 - Mean values of σ_s measured for different reference samples.
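A numerical sketch of these transformation laws follows. Two caveats: the grouping of Lee et al.'s modified expression (β as the exponent on the undercooling) is reconstructed from a garbled source and should be checked against the original paper; and the single rate parameter α ≈ 0.011 used in the checks is chosen because it reproduces the F_M^ind values of Table 7 below (0.19 at M_s = 39.3°C and 0.44 at M_s = 72.5°C for a quench to roughly 20°C).

```python
import math

def alpha_m(wC, wMn, wCr=0.0, wNi=0.0, wMo=0.0):
    """Composition dependence of the Koistinen-Marburger rate parameter,
    equation (6) of the text (element contents in wt.%)."""
    return (0.0224 - 0.0107 * wC - 0.0007 * wMn
            - 0.00012 * wCr - 0.00005 * wNi - 0.0001 * wMo)

def km_fraction(Ms, Tq, a):
    """Classical Koistinen-Marburger martensite fraction after quenching
    from above Ms down to Tq (both in degrees C); zero if Tq >= Ms."""
    if Tq >= Ms:
        return 0.0
    return 1.0 - math.exp(-a * (Ms - Tq))

def km_fraction_lee(Ms, Tq, wC, wMn):
    """Lee et al. variant with the curvature exponent beta. Coefficients
    as quoted in the text; the exact grouping (beta on the undercooling)
    is an assumption made here."""
    a = 0.0076 - 0.0182 * wC - 0.00014 * wMn
    beta = 1.4609 - 0.4483 * wC - 0.0545 * wMn
    if Tq >= Ms:
        return 0.0
    return 1.0 - math.exp(-a * (Ms - Tq) ** beta)
```

For a 0.31C-8.9Mn austenite, equation (6) gives α_m ≈ 0.0129, close to the α ≈ 0.011 implied by the Table 7 rows.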
Table 2 - Retained austenite (RA) fraction measured using the XRD technique with different polishing methods and using sigmametry. Results were obtained on two samples from this study (30min WQ and 30h WQ) and on two samples from other studies (TRIP 800 and Q&P steels).

Technique | Preparation | 30min WQ | 30h WQ | TRIP800 | Q&P
XRD | Mechanical polishing (down to 1 µm) | 17.7 | 11.3 | 13.4 | 20.8
XRD | Chemical polishing | 17.7 | 12.4 | - | -
XRD | Electrochemical polishing | 24.2 | 19.0 | 14.5 | 22.1
Sigmametry | Standard | 25.9 | 20.9 | 15.2 | 20.7

Table 4 - Mn content in austenite and ferrite zones of different samples determined from direct EDX measurements and hypermaps.

Reference | Time (s) | Mn_F (wt.%) | Mn_A (wt.%)
3min WQ | 180 | 4.3 | 10.0
1h WQ | 3600 | 4.1 | 9.3
2h WQ | 7200 | 3.5 | 8.9
3h WQ | 10800 | 2.8 | 8.9
10h WQ | 36000 | 2.3 | 8.7
30h WQ | 108000 | 2.3 | 8.5

Table 5 - Fraction and size measurements of FM+RA features obtained from image analysis of FEG-SEM images.

Reference | Time (s) | Fraction FM+RA (%) | Size, polygonal (µm) | Size, lath (µm) | Global mean (µm)
3min WQ | 180 | 15.4 | 0.24 | 0.075 | 0.16
10min WQ | 600 | 20.6 | 0.23 | 0.079 | 0.16
30min WQ | 1800 | 21.6 | 0.37 | 0.110 | 0.24
1h WQ | 3600 | 27.3 | 0.37 | 0.110 | 0.24
2h WQ | 7200 | 31.5 | 0.38 | 0.140 | 0.26
3h WQ | 10800 | 32.5 | 0.41 | 0.140 | 0.27
7h WQ | 25200 | 36.1 | 0.45 | 0.140 | 0.30
10h WQ | 36000 | 34.9 | 0.46 | 0.170 | 0.31
20h WQ | 72000 | 36.0 | 0.57 | 0.210 | 0.39
30h WQ | 108000 | 37.5 | 0.68 | 0.210 | 0.45

Based on the microstructural investigations, it was of interest to discuss the observed double morphology (lath-like and polygonal) of ferrite and austenite. The origin of the lath-like morphology of ferrite and austenite was already presented in chapter 1.1.1.3 and is undoubtedly attributed to the initial non-deformed lath martensite structure. In previous works [NEH'50], [KOO'76], [KIM'81], [LAW

Table 6 - The values of the different parameters measured experimentally or calculated from the experimental values.
Employed tools Image Analysis of SEM images Saturation magnetization TEM-EDX hypermaps NanoSIMS Calculated from mass balance Reference Time, s F A (%) D A (µm) F RA (%) Mn A (wt.%) C A (wt.%) C A (wt.%) 3min WQ 180 15.4 0.16 14.6 10.0 - 0.67 10min WQ 600 20.6 0.16 21.4 - - 0.46 30min WQ 1800 21.6 0.24 25.9 - - 0.38 1h WQ 3600 27.3 0.24 30.1 9.3 0.36 0.33 2h WQ 7200 31.5 0.26 26.2 8.9 - 0.31 3h WQ 10800 32.5 0.27 25.7 8.9 - 0.30 7h WQ 25200 36.1 0.30 22.6 - - 0.27 10h WQ 36000 34.9 0.31 22.5 8.7 - 0.28 20h WQ 72000 36 0.39 21.1 - - 0.27 30h WQ 108000 37.5 0.45 20.3 8.7 0.27 0.26 Table 7 - 7 The values of all the parameters necessary for the estimation of RA fraction at room temperature and the results of these calculations: M s new (M s temperature taking into account the influence of chemical composition and size of austenite), F M ind (induced martensite fraction) and RA (retained austenite fraction). Ref. Time (s) F a (%) D a (µm) V a (µm 3 ) C a (wt.%) Mn a (wt.%) M s new F M ind RA (%) 3min 180 14.6 0.16 0.0041 0.67 10.0 -152.8 0.00 14.6 10min 600 21.4 0.16 0.0041 0.46 9.9 -59.2 0.00 21.4 30min 1800 25.9 0.24 0.0138 0.38 9.7 -14.6 0.00 25.9 1h 3600 30.1 0.24 0.0138 0.33 9.3 20.0 0.00 30.1 2h 7200 31.5 0.26 0.0176 0.31 8.9 39.3 0.19 25.5 3h 10800 32.5 0.27 0.0197 0.30 8.9 43.8 0.23 25.0 7h 25200 36.1 0.30 0.0270 0.27 8.7 63.8 0.38 22.3 10h 36000 34.9 0.31 0.0298 0.28 8.7 60.2 0.36 22.4 20h 72000 36 0.39 0.0593 0.27 8.7 66.4 0.40 21.6 30h 108000 37.5 0.45 0.0911 0.26 8.7 72.5 0.44 21.0 Table 8 - 8 Measured mechanical properties of samples after different annealing treatments, with their corresponding incertitude in the brackets: high and low yield strength (YS H and YS L ± 15MPa), yield point elongation (YPE ± 0.2%), ultimate tensile strength (UTS ± 10MPa), uniform (Uel ± 0.5%) and total (TE ± 1%) elongations, respectively. Ref. 
Ref. | YS_H (MPa) | YS_L (MPa) | YPE (%) | UTS (MPa) | Uel (%) | TE (%)
3min WQ | 712 | 688 | 2.3 | 766 | 13.5 | 22.8
10min WQ | 648 | 641 | 1.6 | 776 | 20.6 | 29.1
30min WQ | 628 | 626 | 1.4 | 791 | 23.7 | 31.6
1h WQ | 589 | 589 | 2.1 | 826 | 23.2 | 26.3
2h WQ | 566 | 566 | 0.3 | 827 | 24.9 | 29.6
3h WQ | 495 | 487 | 1.9 | 909 | 19.2 | 23.6
7h WQ | 482 | 482 | 1.9 | 882 | 21.9 | 25.8
10h WQ | 455 | 446 | 1.7 | 897 | 21.5 | 25.7
20h WQ | 385 | 378 | 1.4 | 905 | 19.1 | 22.2
30h WQ | 368 | 359 | 1.2 | 871 | 14.2 | 15.2

Table 9 - Measured mechanical properties of martensitic samples with the corresponding uncertainty in brackets: yield strength (YS_0.2 ± 15 MPa), ultimate tensile strength (UTS ± 10 MPa), uniform (Uel ± 0.5%) and total (TE ± 1%) elongations, respectively.

Ref. | YS_0.2 (MPa) | UTS (MPa) | Uel (%) | TE (%)
2260B-M1 | 1049 | 1387 | 3.0 | 5.2
2260B-M2 | 1067 | 1391 | 3 | 5

Table 10 - Chemical compositions (wt.%) of some martensitic steels tested in the present study and in previously published works with the corresponding references. For each steel the first column gives the reference (steel) that will be used further in the text.

Steel | C | Mn | Si | Cr | Ti | Source
0.3C | 0.29 | 1.20 | 0.25 | 0.17 | 0.04 | This study
0.36C | 0.36 | 1.22 | 0.23 | 0.10 | 0.04 | This study
0.1C-5Mn | 0.09 | 4.6 | - | - | - | This study
0.15C-5Mn | 0.14 | 4.6 | - | - | - | This study
0.01C-3Mn | 0.01 | 2.92 | 0.01 | 0.11 | 0.04 | Zhu et al. [ZHU'10]
0.09C | 0.09 | 1.90 | 0.15 | 0.10 | - | Allain et al. [ALL'12]
0.15C | 0.15 | 1.90 | 0.22 | 0.20 | - | Pushkareva [PUS'09]
0.22C | 0.22 | 1.18 | 0.27 | 0.21 | - | Allain et al. [ALL'12]

Chemical composition of the studied steel (10⁻³ wt.%):

Ref. | C | Mn | Si | P | Al | B | S | Cr | Cu | N
A193 | 495 | 9940 | 51 | 11 | 26 | 1.4 | 2 | 4 | 6.8 | 1.1

Table 12 - Measured mechanical properties of hot rolled and annealed (800°C, 5 min) samples with the corresponding uncertainty in brackets: yield strength (YS_0.2 ± 15 MPa), ultimate tensile strength (UTS ± 10 MPa), uniform (Uel ± 0.5%) and total (TE ± 1%) elongations, respectively.

Ref. | YS_0.2 (MPa) | UTS (MPa) | Uel (%) | TE (%)
Hot Rolled | 339 | 572 | 4.6 | 4.8
800°C-5min | 337 | 603 | 3.8 | 3.9

Table 13 - Measured mechanical properties of samples annealed at 500°C for different times and of one fresh martensite sample.
The corresponding uncertainty is presented in brackets: high and low yield strength (YS_H and YS_L ± 15 MPa), yield point elongation (YPE ± 0.2%), yield strength (YS_0.2 ± 15 MPa), ultimate tensile strength (UTS ± 10 MPa), uniform (Uel ± 0.5%) and total (TE ± 1%) elongations, respectively.

Ref. | YS_H (MPa) | YS_L (MPa) | YPE (%) | YS_0.2 (MPa) | UTS (MPa) | Uel (%) | TE (%)
Fresh M | - | - | - | 1049 | 1387 | 3.0 | 5.2
3min WQ | 1000 | 990 | 4.9 | - | 1001 | 6.1 | 9.1
1h WQ | 898 | 878 | 6 | - | 892 | 6.8 | 10
30h WQ | 806 | 765 | 4.3 | - | 795 | 7.7 | 12

Table 14 - Comparison between experimental (E) and model (M) mechanical properties of samples after different heat treatments: yield strength (YS), ultimate tensile strength (UTS) and uniform elongation (Uel), respectively.

Ref. | YS E | YS M | UTS E | UTS M | Uel E | Uel M | ΔYS | ΔUTS | ΔUel
3min WQ | 688 | 697 | 766 | 776 | 13.5 | 12.8 | 9.2 | 10.2 | 0.7
10min WQ | 641 | 612 | 776 | 770 | 20.6 | 18.3 | 29.1 | 6.1 | 2.3
30min WQ | 626 | 557 | 791 | 783 | 23.7 | 22.0 | 69.1 | 8.4 | 1.7
1h WQ | 589 | 491 | 826 | 811 | 23.2 | 23.3 | 98.1 | 15.4 | 0.1
2h WQ | 566 | 483 | 827 | 842 | 24.9 | 25.0 | 82.7 | 14.5 | 0.1
3h WQ | 487 | 481 | 909 | 915 | 19.2 | 19.3 | 6.2 | 6.3 | 0.1
7h WQ | 482 | 430 | 882 | 893 | 21.9 | 21.1 | 52.0 | 11.0 | 0.8
10h WQ | 446 | 426 | 897 | 891 | 21.5 | 19.8 | 19.6 | 6.3 | 1.7
30h WQ | 359 | 375 | 871 | 866 | 14.2 | 14.4 | 16.1 | 4.5 | 0.2

Table 21 - Results of the model simulations with different sizes of ferrite.

λ (µm) | YS (MPa) | UTS (MPa) | Uel (%)
0.05 | 1118 | 1454 | 10.9
0.1 | 687 | 1020 | 15.1
0.2 | 467 | 783 | 19.5
0.3 | 393 | 696 | 21.9
0.5 | 332 | 621 | 24.4
0.7 | 306 | 586 | 25.7
1 | 286 | 559 | 26.8
Variation (over 0.95 µm) | 831.5 | 895.0 | 15.9
Var / 0.1 µm | 87.5 | 94.2 | 1.7

Table 24 - Results of the model simulations with different n values.

n | YS (MPa) | UTS (MPa) | Uel (%)
0.5 | 368 | 643 | 20.4
1 | 355 | 656 | 28.3
1.5 | 353 | 678 | 31.8
2 | 353 | 697 | 32.2
2.5 | 353 | 712 | 31.4
3 | 353 | 723 | 30.3
4 | 353 | 738 | 28.3
Variation (over 3.5) | 15.5 | 94.3 | 8.0
Var / 0.5 | 2.2 | 13.5 | 1.1

Notation: F_A - austenite fraction; D_A - austenite size; F_RA - retained austenite fraction; Mn_A and C_A - Mn and C contents in austenite, respectively.
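Two bookkeeping relations behind Tables 6 and 7 above can be written down directly. The retained austenite column of Table 7 is RA = F_a·(1 - F_M^ind), and the "mass balance" carbon column of Table 6 is consistent with C_A ≈ C0/F_A, i.e. assuming essentially all carbon partitions to austenite. The nominal carbon C0 ≈ 0.098 wt.% used below is inferred here from the near-constant product F_A·C_A; it is not stated explicitly in this excerpt.

```python
def retained_austenite(F_a, f_induced):
    """Retained austenite (%) after quenching: austenite present at the
    annealing temperature (F_a, in %) times the part that does NOT
    transform to fresh martensite on cooling (f_induced is the
    transformed fraction of the austenite)."""
    return F_a * (1.0 - f_induced)

def austenite_carbon(C_nominal, F_a_frac):
    """Mass-balance estimate of the austenite carbon content (wt.%),
    assuming negligible carbon solubility in ferrite so that nearly all
    carbon ends up in the austenite fraction F_a_frac (as a fraction)."""
    return C_nominal / F_a_frac

# (ref, F_a %, F_M_ind, RA %) rows copied from Table 7 above.
table7 = [("2h", 31.5, 0.19, 25.5), ("3h", 32.5, 0.23, 25.0),
          ("20h", 36.0, 0.40, 21.6), ("30h", 37.5, 0.44, 21.0)]
```

The relation reproduces the tabulated RA values to within rounding, and the mass balance recovers the Table 6 carbon contents for most annealing times.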
ACKNOWLEDGEMENTS

First of all, I would like to thank God for His support during my entire life. All the achievements in my life are the result of His favor to me.

The time-evolution of the austenite fraction at 650°C, which corresponds to retained austenite plus fresh martensite (FM+RA) after quenching, was evaluated by image analysis (Aphelion software) from SEM pictures. Figure 83 presents the obtained results. The corresponding kinetics is also compared with the equilibrium fraction of austenite calculated using Thermo-Calc.

Figure 83 - Evolution of the austenite fraction with holding time at 650°C. In the final structure after quench, austenite is represented by retained austenite and fresh martensite (FM+RA). The equilibrium fraction of austenite at 650°C calculated using Thermo-Calc is given by the dashed line.

Annex 1: Data base of mechanical properties of MMS

The following table presents the data set collected from the literature and used for the analysis of the mechanical properties of Medium Mn Steels. It contains the following information (when available):
• composition (major alloying elements);
• process parameters (hot rolling, cold rolling, annealing);
• mechanical properties;
• microstructure description (fractions, sizes and contents of C and Mn).

The retained austenite fraction was measured by XRD for each holding temperature. The results are presented in Table 16 and in Figure 134.

Table 16 - Measured mechanical properties of samples after gradient batch annealing (different holding temperatures), with their corresponding uncertainty in brackets: high and low yield strength (YS_H and YS_L ± 15 MPa), yield point elongation (YPE ± 0.2%), ultimate tensile strength (UTS ± 10 MPa), uniform (Uel ± 0.5%) and total (TE ± 1%) elongations, respectively. The last column also gives the measured fraction of retained austenite (RA).

Annex 3.2: Characterization of carbides by TEM

TEM observations of carbides formed during heating were done using replicas.
The heating was interrupted at 550, 600 and 650°C using He quench. For each sample several observations were performed, and the obtained results are presented in the following sub-sections. The mean composition of cementite estimated from 17 measurements was 6.8 wt.% Mn. However, three levels of Mn content in cementite were observed: the first around 5 wt.%, the second around 8 wt.% and the third about 13 wt.%.

650°C sample: 6.9 wt.% Mn.

Annex 4.1: Global model parameters

All the parameters necessary for each phase behavior description are given in the

Annex 4.2: Sensitivity analysis of the mechanical model

The variation of the following input parameters of the model was studied:
a) fresh martensite fraction;
b) size of ferrite;
c) retained austenite fraction;
d) mechanical stability of retained austenite (both ε0 and n).

In order to avoid the interference of different effects, several parameters were fixed at certain values depending on the performed calculations.

Influence of fresh martensite fraction

The following parameters were kept at the subsequent constant values:
• λ = 0.3 µm
• Mn_F = 3 wt.%
• f_RA = 20 %
• ε0 = 0.14
• n = 1

C_FM and Mn_FM were recalculated according to the global fraction of FM+RA, but C_eq was always above 0.5 and thus had almost no influence on the stress-strain curves. Finally, the changing parameters were the f_FM (0-30%) and f_F (50-80%) fractions. The results of the analysis are presented in Figure 180 and Table 20. As can be seen, fresh martensite has an important influence on the mechanical behavior of such a multiphase mixture. UTS shows the biggest variation with the FM fraction, whereas the changes of YS and Uel are less pronounced. It can also be observed that, due to the strain partitioning (iso-W assumption), the induced transformation of RA is slightly modified as well (Figure 180 c).
Influence of ferrite size

The following parameters were kept at the subsequent constant values:
• f_FM = 10 %
• C_FM = 0.31 wt.%
• Mn_FM = 8.3 wt.%
• Mn_F = 3 wt.%
• f_RA = 20 %
• ε0 = 0.14
• n = 1

The size of ferrite (λ) was varied from 0.05 to 1 µm. The results of the analysis are presented in Figure 181 and Table 21. The impact of λ appears to be very important for all mechanical characteristics (YS, UTS and Uel). Due to the strain partitioning, the ferrite size also has an important effect on the strain-induced transformation (Figure 181 c).

Influence of retained austenite fraction

The following parameters were kept at the subsequent constant values:
• f_F = 70 %
• λ = 0.3 µm
• Mn_F = 3 wt.%
• C_FM = 0.31 wt.%
• Mn_FM = 8.3 wt.%
• ε0 = 0.14
• n = 1

The changing parameters were the f_RA (0-30%) and f_FM (0-30%) fractions. The results of the analysis are shown in Figure 182 and Table 22. As can be expected, the increase of the RA fraction decreases YS and UTS and improves Uel. The effect on Uel is very significant, whereas on YS and UTS it is less prominent.

Influence of retained austenite stability

The following parameters were kept at the subsequent constant values:
• f_AM = 70 %
• λ = 0.3 µm
• Mn_F = 3 wt.%
• C_FM = 0.31 wt.%
• Mn_FM = 8.3 wt.%
• f_RA = 30 %

The changing parameters were ε0 and n. In the case of the ε0 variation (0.03-0.21), n was fixed at the value of 1; Figure 183 and Table 23 present the obtained results. In contrast, when n was varied, ε0 = 0.14 was kept constant; the results obtained with this simulation are shown in Figure 184 and Table 24. It can be seen that both parameters ε0 and n have a low effect on YS. At the same time, the impact of the austenite stability on UTS and Uel is quite significant, especially with the variation of ε0. High values of ε0 increase the stability of RA, thus decreasing the induced martensite fraction and subsequently UTS. This effect is also accompanied by an improved Uel.
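The stability parameters ε0 and n varied above enter a strain-induced transformation law. The exact equation of the model is not quoted in this excerpt; a commonly used sigmoidal exhaustion law of the same form, f_M^ind(ε) = f_RA·[1 - exp(-(ε/ε0)^n)], reproduces the stated trends (a larger ε0 makes the retained austenite more stable and lowers the induced martensite fraction at a given strain):

```python
import math

def induced_martensite(eps, f_RA, eps0, n):
    """Strain-induced martensite formed from retained austenite at true
    strain eps. Illustrative sigmoidal exhaustion law (an assumption;
    the exact equation of the thesis model is not quoted here).
    Larger eps0 -> more stable austenite -> slower transformation;
    n controls the curvature of the transformation kinetics."""
    if eps <= 0.0:
        return 0.0
    return f_RA * (1.0 - math.exp(-((eps / eps0) ** n)))
```

The induced fraction starts at zero, saturates at the available retained austenite f_RA, and decreases at fixed strain when ε0 grows, matching the sensitivity trends reported for Figures 183 and 184.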
Finally, it can be concluded that the model is more sensitive to ε0 than to n.
01751800
en
[ "chim.othe" ]
2024/03/05 22:32:07
2015
https://hal.univ-lorraine.fr/tel-01751800/file/DDOC_T_2015_0105_POLTORAK.pdf
Frequently used abbreviations (in alphabetic order):
BET - Brunauer-Emmett-Teller isotherm
BTPPA+ - bis(triphenylphosphoranylidene) cation
CE - counter electrode
CTA+ - cetyltrimethylammonium cation
CV - cyclic voltammetry
D_i - diffusion coefficient
D_i' - apparent diffusion coefficient
DCE - dichloroethane
DecFc - decamethylferrocene
DMFc - 1,1'-dimethylferrocene
EtOH - ethanol
G - Gibbs energy of transfer
ITIES - interface between two immiscible electrolyte solutions
LOD - limit of detection
NMR - nuclear magnetic resonance
4OBSA- - 4-octylbenzenesulfonic anion
PAMAM - poly(amidoamine)
PH+ - trimethylbenzhydrylammonium cation
RE - reference electrode
S - spacing factor, i.e. pore center-to-center distance
SAXS

…years I could unceasingly count on their priceless scientific advice and help. I will miss our very fruitful discussions, which resulted in a series of interesting discoveries. The time they invested in me resulted in my tremendous personal development - I am in their debt. Last but not least, I appreciate the financial support from Région Lorraine and from the Ecole Doctorale SESAMES (ED 412, Université de Lorraine) for my PhD grant.

This work combines electrochemistry at the interface between two immiscible electrolyte solutions (ITIES) with the sol-gel process of silica, leading to an interfacial modification with mesoporous silica using a soft template. In the first part of this work the macroscopic liquid-liquid interface was employed to separate the aqueous solution of the hydrolyzed silica precursor (tetraethoxysilane, TEOS) from the cationic surfactant (cetyltrimethylammonium, CTA+) dissolved in dichloroethane. The silica material deposition was controlled by the electrochemical CTA+ transfer from the organic to the aqueous phase. The template transferred to the aqueous phase catalyzed the condensation reaction and self-assembly, resulting in silica deposition at the interface. A variety of initial synthetic
conditions were studied with cyclic voltammetry: the influence of [CTA+]_org and [TEOS]_aq, the polarity of the organic phase, the pH of the aqueous phase, the deposition time scale, etc. [CTA+]_org was found to be the limiting factor of the deposition reaction. Characterization of the silica material was also performed in order to study its chemical functionalities (XPS and infrared spectroscopy were used) and its morphology, confirming the mesostructure (SAXS and TEM were employed in this regard). Silica deposition at the miniaturized ITIES (membranes supporting arrays of micrometer-diameter pores were used in this regard) was the second part of this work. The silica interfacial synthesis performed in situ resulted in stable deposits growing on the aqueous side of the interface. The mechanical stability of the supported silica deposits allowed further processing: the silica material was cured. Based on imaging techniques (e.g. SEM), it was found that the deposits form hemispheres at longer experimental time scales. The interfacial reaction was also followed with in situ confocal Raman spectroscopy. The molecular characteristics of the interface changed dramatically once the CTA+ species were transferred to the aqueous phase. An array of microITIES modified with silica was also assessed by ion transfer voltammetry of five interfacially active species differing in size, charge and nature. The transfer of each ion was affected by the presence of mesoporous silica at the ITIES. Finally, a local pH change at the liquid-liquid interface was induced by ion transfer and UV photolysis of trimethylbenzhydrylammonium initially dissolved in the organic phase. The local pH change was confirmed by local pH measurement (an iridium oxide modified Pt microdisc electrode was used in this regard). Interfacial deposition triggered by the pH decrease was shown to be feasible once the TEOS precursor was dissolved in the organic phase whereas CTA+Br- was dissolved in the aqueous phase.
Interfacial modification with mesoporous silica materials was shown to possess promising properties for improving selectivity at the ITIES. This particular analytical parameter can be further improved by silica functionalization, which could be a continuation of this work.
Chapter I. Bibliographical introduction

The first chapter gives an overview of the issues, serving as an introduction to the subsequent work.
It is divided into three main parts: (i) general information concerning the electrified liquid-liquid interface, including its structure, the charge transfer reactions, its electrochemical behavior and its miniaturization; (ii) the sol-gel process of silica and examples of its application in electrochemistry; and (iii) the approaches that have been used for liquid-liquid interface modification with metals, phospholipids, polymers, carbon materials and, finally, silica materials.

Electrified interface between two immiscible electrolyte solutions

At the very beginning it is important to clarify the terminology developed in electrochemistry for liquid-liquid interface based systems. The terms electrified, polarized or non-polarized liquid-liquid interface are used interchangeably with "interface between two immiscible electrolyte solutions" (ITIES) throughout the literature on the subject, and hence they are also used in the present work. From an electroanalytical point of view, the ITIES has many attractive properties: (i) first of all, under proper conditions it can be polarized; (ii) it is self-healing, in the same manner as mercury electrodes; (iii) it is free from defects down to the molecular level; (iv) the lack of preferential nucleation sites offers a unique way to study deposition processes; (v) the electrochemical theories developed for solid electrodes are applicable at the ITIES; (vi) what sets the ITIES apart is that detection is not restricted to reduction or oxidation reactions but can also arise from ion transfer reactions; and (vii) as an electrochemical sensor the ITIES has good sensitivity and reasonable limits of detection.
In the following subchapters the most significant aspects, from this work's point of view, are discussed, divided into: (i) the interfacial structure; (ii) the different types of interfacial charge transfer reactions; (iii) the polarized and non-polarized interfaces; (iv) the electrochemical instability phenomena occurring upon adsorption of surface-active molecules; and (v) miniaturization at the ITIES. A more comprehensive set of information dealing with the electrochemical aspects of the ITIES is available in a number of reviews (Samec, "Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)"; Samec, "Charge-Transfer Processes at the Interface between Hydrophobic Ionic Liquid and Water"; Dryfe, "The Electrified Liquid-Liquid Interface"; Girault, "Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions"; Peljo, "Electrochemistry at the Liquid/liquid Interface"; Samec, "Dynamic Electrochemistry at the Interface between Two Immiscible Electrolytes"; Senda, "Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions"; Girault, "Charge Transfer across Liquid-Liquid Interfaces").

1.1.1. Liquid-liquid interface structure

The structure of the liquid-liquid interface was first proposed in 1939, when Verwey and Niessen, building on Gouy-Chapman theory, described it as two back-to-back electric double layers of opposite charge separated by a continuous geometric boundary (Figure 1.1 a) (Verwey and Niessen, "The Electrical Double Layer at the Interface of Two Liquids"). The first experimental report dealing with the interfacial structure was given by Gavach et al. almost 40 years later.
By measuring the interfacial tension versus the concentration of different tetraalkylammonium ions, they proved the presence of specific adsorption, which was explained in terms of ion pair formation at the liquid – liquid interface. [START_REF] Gavach | The Double Layer and Ion Adsorption at the Interface between Two Non Miscible Solutions: Part I: Interfacial Tension Measurements for the Water-Nitrobenzene Tetraalkylammonium Bromide Systems[END_REF] One year later the same group proposed what we know today as the 'modified Verwey-Niessen' model (Figure 1.1 b). The experimental approach consisted in controlling the interfacial Galvani potential difference between a sodium bromide aqueous solution and a tetraalkylammonium tetraphenylborate organic solution by addition of tetraalkylammonium bromide to the aqueous phase, followed by interfacial tension measurements giving the electrocapillary curve. This work brought two main characteristics to the interfacial model: the first treated the interface as a 'compact layer' of oriented dipole molecules; the second assumed a very small potential drop across this layer. [START_REF] Gros | The Double Layer and Ion Adsorption at the Interface between Two Non-Miscible Solution; Part II: Electrocapillary Behaviour of Some Water-Nitrobenzene Systems[END_REF] Furthermore, the presence of a mixed solvent layer at the interface was confirmed via a surface excess study of water at the interface between organic solvents of different polarity. [START_REF] Girault | Thermodynamic Surface Excess of Water and Ionic Solvation at the Interface between Immiscible Liquids[END_REF][START_REF] Girault | Thermodynamics of a Polarised Interface between Two Immiscible Electrolyte Solutions[END_REF] The results indicated that the surface excess of water at the liquid – liquid interface was not enough to form a monolayer, which in turn suggested that ions penetrate the interfacial region (Figure 1.1 c).
An important aspect of the ITIES structure is its thickness. The existence of capillary waves at the liquid – liquid interface designates its boundaries. A quantitative parameter, the mean-square interfacial displacement, allows the estimation of the upper and lower limits of the interface. 15 Results based on molecular dynamics calculations for typical interfacial tension values between H2O and DCE indicated that the size of the interface is on the order of 10 Å. [START_REF] Benjamin | Theoretical Study of the Water/1,2-Dichloroethane Interface: Structure, Dynamics, and Conformational Equilibria at the Liquid-liquid Interface[END_REF][START_REF] Benjamin | Chemical Reactions and Solvation at Liquid Interfaces: A Microscopic Perspective[END_REF] The complex nature and the unique character of the liquid – liquid interface narrow the study of its structure down to a few spectroscopic techniques: scattering of X-rays and neutrons [START_REF] Schlossman | Liquid-Liquid Interfaces: Studied by X-Ray and Neutron Scattering[END_REF] and non-linear optical methods (sum-frequency vibrational spectroscopy [START_REF] Wang | Generalized Interface Polarity Scale Based on Second Harmonic Spectroscopy[END_REF] and second harmonic generation [START_REF] Higgins | Second Harmonic Generation Studies of Adsorption at a Liquid -Liquid Electrochemical Interface[END_REF] ).

Charge transfer reactions at the ITIES

At the ITIES, each of the immiscible phases can be characterized by its own inner Galvani potential. At open circuit potential, when no charge transfer is observed, the species in one phase, say the aqueous one, are too hydrophilic to be transferred to the organic phase, while the species in the organic phase are too hydrophobic to transfer to the aqueous phase.
The system equilibrium can be disrupted by the introduction of an external solute to one of the phases (partition of the species, driven by the interfacial Galvani potential difference, leads to an interfacial ion transfer reaction until equilibrium is established) or by external interfacial polarization. The quantity of energy that has to be delivered to the system in order to transfer one of the components to the neighboring phase is given by the standard transfer Gibbs energy, whereas the partition of the species between the two immiscible phases is conditioned by the interfacial Galvani potential difference. In principle, charge transfer reactions across the ITIES can be divided into three main groups: (i) simple ion transfer; (ii) assisted/facilitated ion transfer; and (iii) electron transfer between a redox couple O1/R1 in one phase and a redox couple O2/R2 in the other. The following description covers the thermodynamics of these three charge transfer types, enriched with practical examples. The examples and characteristics of electrochemically induced adsorption reactions at the liquid – liquid interface are also briefly discussed.
Simple ion transfer reaction

If an ion (i) is transferred from the aqueous to the organic phase via a simple ion transfer reaction, then the standard transfer Gibbs energy (ΔG_{(i)}^{0,aq→org}) is defined as the difference between the standard Gibbs energy of solvation (μ^{0,org}) and hydration (μ^{0,aq}):

ΔG_{(i)}^{0,aq→org} = μ^{0,org} − μ^{0,aq}    (1.1)

The standard transfer Gibbs energy (ΔG_{(i)}^{0,aq→org}) can be converted into the standard transfer potential (Δ_{org}^{aq}Φ_{(i)}^{0}) by introduction of the z_{(i)}F factor, according to equation 1.2:

Δ_{org}^{aq}Φ_{(i)}^{0} = ΔG_{(i)}^{0,aq→org} / (z_{(i)}F)    (1.2)

For a charged species (i), the ion transfer equilibrium condition at constant temperature and pressure is given by the following equality:

μ̃_{(i)}^{aq} = μ̃_{(i)}^{org}    (1.3)

where μ̃_{(i)}^{x} is the electrochemical potential of the charged species (i), expressed in the form of equation 1.4:

μ̃_{(i)}^{x} = μ_{(i)}^{0,x} + RT ln a_{(i)}^{x} + z_{(i)}FΦ^{x}    (1.4)

where μ_{(i)}^{0} is the standard chemical potential, Φ^{x} is the inner Galvani potential, a_{(i)}^{x} is the activity of the species and x corresponds to the aqueous (aq) or the organic (org) phase.
The equality from equation 1.3 can be developed by substitution with equation 1.4:

μ_{(i)}^{0,aq} + RT ln a_{(i)}^{aq} + z_{(i)}FΦ^{aq} = μ_{(i)}^{0,org} + RT ln a_{(i)}^{org} + z_{(i)}FΦ^{org}    (1.5)

Separation of the Galvani potential difference, Δ_{org}^{aq}Φ = Φ^{aq} − Φ^{org}, on one side and the rest of the components on the other side of the equality yields the Nernst equation for ion transfer at the ITIES: [START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF][START_REF] Sabela | Standard Gibbs Energies of Transfer of Univalent Ions From Water To 1,2-Dichloromethane[END_REF]

Δ_{org}^{aq}Φ = Δ_{org}^{aq}Φ_{(i)}^{0} + (RT / z_{(i)}F) ln( a_{(i)}^{org} / a_{(i)}^{aq} )    (1.6)

where Δ_{org}^{aq}Φ_{(i)}^{0} is the standard transfer potential, in other words the standard Gibbs energy expressed on the voltage scale as shown with equation 1.2, a_{(i)}^{x} is the activity of the ion (i) in the aqueous or organic phase, R is the gas constant (8.31 J/(mol·K)) and T is the temperature. Equation 1.6 can be expressed with concentrations instead of activities as:

Δ_{org}^{aq}Φ = Δ_{org}^{aq}Φ′_{(i)}^{0} + (RT / z_{(i)}F) ln( c_{(i)}^{org} / c_{(i)}^{aq} )    (1.7)

where Δ_{org}^{aq}Φ′_{(i)}^{0} corresponds to the formal transfer potential. [START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF]4 The simple ion transfer reaction is considered the easiest to study, and the above considerations provide quantitative parameters (the standard transfer potential or the standard Gibbs energy of transfer) that can be measured with electrochemical techniques. Figure 1.2 includes the Δ_{org}^{aq}Φ_{(i)}^{0} values for different cationic and anionic species measured at the water – dichloroethane interface.

Assisted/facilitated ion transfer

This type of interfacial transfer involves a host-guest interaction between an ion (i) in one phase and a ligand (L) dissolved in the second, immiscible phase. The Nernst-like equation for the simple ion transfer reaction can easily be adapted to describe facilitated ion transfer.
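As a numerical illustration of equations 1.2 and 1.6 (a minimal sketch, not part of the original text; function names and the numerical values are arbitrary):

```python
import math

R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol

def transfer_potential_from_gibbs(dG, z):
    """Eq. 1.2: standard transfer potential (V) from the standard
    transfer Gibbs energy dG (J/mol) of an ion of charge z."""
    return dG / (z * F)

def galvani_potential(dphi0, z, a_org, a_aq, T=298.15):
    """Eq. 1.6: Galvani potential difference (V) for an ion of charge z,
    standard transfer potential dphi0 (V) and activities a_org, a_aq."""
    return dphi0 + (R * T / (z * F)) * math.log(a_org / a_aq)

# A monocation with dphi0 = 0.20 V and a tenfold activity excess in the
# organic phase: the potential shifts by ~59 mV at 25 degrees C.
print(round(galvani_potential(0.20, 1, a_org=1e-3, a_aq=1e-4), 3))  # 0.259
```

The same function describes anions when a negative z is passed, in which case the sign of the logarithmic term is inverted.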
To do so, some assumptions have to be made:
- the ion is initially present in the aqueous phase, whereas a ligand able to complex the transferring ion is present in the organic phase;
- complexation takes place in the organic phase;
- the concentration of the ion in the aqueous phase is in excess over its concentration in the organic phase;
- at open circuit potential, complex formation in the organic phase prevails, hence the ion concentration in the organic phase can be neglected; and
- the aqueous phase concentration of the ligand is neglected.

To facilitate the consideration we can assume that the complexation is of 1:1 stoichiometry, which can be written as:

L^{org} + i^{aq} ⇌ L-(i)^{org}    (1.8)

The complexation in the organic phase is characterized by the association constant:

K_a = a_{L-(i)}^{org} / ( a_{L}^{org} a_{(i)}^{org} )    (1.9)

The Δ_{org}^{aq}Φ for the facilitated ion transfer can then be written in the form of equation 1.10:

Δ_{org}^{aq}Φ = Δ_{org}^{aq}Φ_{(i)}^{0} + (RT / z_{(i)}F) ln( a_{L-(i)}^{org} / ( K_a a_{L}^{org} a_{(i)}^{aq} ) )    (1.10)

The first report concerning an assisted ion transfer reaction at the electrified liquid – liquid interface came from Czechoslovakia, [START_REF] Koryta | Electrochemical Polarization Phenomena at the Interface of Two Immiscible Electrolyte Solutions[END_REF] where Koryta studied potassium transfer from the aqueous phase to a nitrobenzene solution containing the dibenzo-18-crown-6 ionophore. Beyond doubt, the complexing agent dissolved in the organic phase lowered the Gibbs energy of transfer and allowed the ions to transfer at lower interfacial potential difference values. This novelty opened the way to study species whose detection was limited by the rather narrow potential window. The complex nature of the facilitated ion transfer reaction involves several mechanisms, divided with respect to the complexation/dissociation location into: ACT (aqueous complexation followed by transfer), TOC (transfer followed by complexation), TIC (transfer by interfacial complexation) and TID (transfer by interfacial dissociation).
[START_REF] Shao | Assisted Ion Transfer at Micro-ITIES Supported at the Tip of Micropipettes[END_REF] All four mechanisms are illustrated in Figure 1.3. The model facilitated ion transfer reaction at the ITIES remains the historical potassium transfer facilitated by dibenzo-18-crown-6. [START_REF] Koryta | Electrochemical Polarization Phenomena at the Interface of Two Immiscible Electrolyte Solutions[END_REF] To date, the facilitated transfer of ionic compounds has allowed improvement of the still poor selectivity of the ITIES, guided by the specific interaction between the host (ionophore, ligand) and the guest (target ion). The assisted transfer of heavy metals, whose detection is of the highest importance, has also attracted a lot of experimental attention. A series of five cyclic thioether ligands was found to provide suitable ionophores for the assisted transfer of cadmium, lead, copper and zinc. [START_REF] Lagger | Electrochemical Extraction of Heavy Metal Ions Assisted by Cyclic Thioether Ligands[END_REF] A synthetic molecule, ETH 1062, incorporated into a gelled polyvinylchloride-2-nitrophenylethyl ether polymer membrane was used for cadmium detection from aqueous solution. [START_REF] Lee | Amperometric Tape Ion Sensors for cadmium(II) Ion Analysis[END_REF] Ion transfer voltammetry of silver ions dissolved in the aqueous phase indicated that upon addition of 1,5-cyclooctadiene to the organic phase (1,6-dichlorohexane) their transfer shifted from the potential range where it was partially masked by background ion transfer to less positive potentials, where a clear peak-like response was formed.
[START_REF] Katano | Electrochemical Study of the Assisted Transfer of Silver Ion by 1 , 5-Cyclooctadiene at the 1 , 6-Dichlorohexane | Water Interface[END_REF] A work devoted to the assisted transfer of divalent copper cations by 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline ligands showed that the half-wave potential of the free ion transfer was shifted by around 400 mV to less positive potential values in the presence of the ligand in the organic phase. Among other examples, much attention was given to calix[4]arene synthetic ionophores, first applied by Zhan et al. [START_REF] Zhan | Electrochemical Recognition of Alkali Metal Ions at the Micro-Water 1,2-Dichloroethane Interface Using a calix[END_REF] towards alkali metal detection. Subsequent works from other groups, dealing with analogous ionophore derivatives, have shown that in the presence of alkali and alkaline-earth metals, the selective detection at the ITIES can be slightly improved [START_REF] Kaykal | Synthesis and Electrochemical Properties of a Novel calix[4]arene Derivative for Facilitated Transfer of Alkali Metal Ions across water/1,2-Dichloroethane Micro-Interface[END_REF] or directed towards potassium (once 5,11,17,23-tetra-tert-butyl-25,27-bis(2'-aminomethylpyridine)-26,28-dihydroxy calix[4]arene was used) [START_REF] Durmaz | Voltammetric Characterization of Selective Potassium Ion Transfer across Micro-water/1,2-Dichloroethane Interface Facilitated by a Novel calix[4]arene Derivative[END_REF] or calcium cations (for 5,11,17,23-tetra-tert-butyl-25,27-diethoxycarbonylmethoxy-26,28-dimethoxy calix[4]arene). [START_REF] Bingol | Facilitated Transfer of Alkali and Alkaline-Earth Metal Ions by a Calix[4]arene Derivative Across Water/1,2-Dichloroethane Microinterface: Amperometric Detection of Ca(2+)[END_REF] O'Dwyer and Cunnane have studied the facilitated transfer of silver cations with O,O''-bis[2-(methylthio)ethyl]-tert-butyl calix[4]arene.
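Shifts of this magnitude can be rationalized with a commonly used approximation, which is not stated explicitly in the works cited above: for a 1:1 complex with the ligand in excess, the half-wave transfer potential shifts by roughly -(RT/z_(i)F)·ln(K_a·c_L). A hedged sketch with illustrative numbers only:

```python
import math

R, F = 8.314, 96485.0  # gas constant, J/(mol K); Faraday constant, C/mol

def half_wave_shift(K_a, c_L, z=1, T=298.15):
    """Approximate shift (V) of the half-wave transfer potential for
    facilitated ion transfer (1:1 complex, ligand in excess):
    shift = -(RT / zF) * ln(K_a * c_L),
    with K_a the association constant (1/M) and c_L the ligand
    concentration (M)."""
    return -(R * T / (z * F)) * math.log(K_a * c_L)

# Illustrative values: K_a = 1e9 1/M with 10 mM ligand gives a shift of
# about -0.41 V, i.e. the same order as the ~400 mV shift quoted above.
print(round(half_wave_shift(1e9, 0.01), 2))  # -0.41
```

The stronger the complexation (larger K_a·c_L), the further the wave moves to less positive potentials, consistent with the trend reported for the examples above.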
[START_REF] O' Dwyer | Selective Transfer of Ag+ at the water|1,2-Dichloroethane Interface Facilitated by Complex Formation with a Calixarene Derivative[END_REF] Since host-guest interactions of calix[4]arene ionophores are not limited to metal cations, it has been shown that modification with a urea group allows the detection of anions (phosphate, chloride and sulphate, with selectivity towards phosphate anions). [START_REF] Kivlehan | Study of Electrochemical Phosphate Sensing Systems: Spectrometric, Potentiometric and Voltammetric Evaluation[END_REF] An example of facilitated proton transfer was also given: Reymond et al. have shown that in the presence of piroxicam derivatives in the organic phase, H+ undergoes a transfer by interfacial complexation/dissociation (TIC/TID) reaction. 33

Electron transfer across ITIES

Two properly selected redox couples, O1/R1 and O2/R2, dissolved in the aqueous and the organic phase respectively, may under potentiostatic conditions undergo an electron transfer reaction across the ITIES: [START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF]

O_{1}^{z,aq} + R_{2}^{z,org} ⇌ O_{2}^{z,org} + R_{1}^{z,aq}    (1.12)

For reaction 1.12 at constant temperature and pressure the equilibrium of the electrochemical potentials is given as:

μ̃_{O1}^{aq} + μ̃_{R2}^{org} = μ̃_{O2}^{org} + μ̃_{R1}^{aq}    (1.13)

Substitution of eq. 1.4 into eq.
1.13 and proper transformation yields the Nernst-like equation for the electron transfer reaction across the ITIES:

Δ_{org}^{aq}Φ = Δ_{org}^{aq}Φ_{el}^{0} + (RT/F) ln( (a_{O2}^{org} a_{R1}^{aq}) / (a_{O1}^{aq} a_{R2}^{org}) )    (1.14)

where Δ_{org}^{aq}Φ_{el}^{0} is the standard Galvani potential difference for the electron transfer from the aqueous to the organic phase, related to the Gibbs energy of the electron transfer reaction via equation 1.15:

Δ_{org}^{aq}Φ_{el}^{0} = Δ_{org}^{aq}G_{el}^{0} / F    (1.15)

The electron transfer reaction at the ITIES depends on the relative redox potentials of the two redox couples separated between the two immiscible phases, and hence can occur spontaneously (if the interfacial potential difference is high enough to trigger the redox reaction) or can be controlled potentiostatically. Studies of the electron transfer reaction have been performed with a variety of electrochemical techniques, for instance: SECM, [START_REF] Cai | Electron Transfer Kinetics at Polarized Nanoscopic Liquid/liquid Interfaces[END_REF][START_REF] Barker | Scanning Electrochemical Microscopy: Beyond the Solid/liquid Interface[END_REF] cyclic voltammetry, [START_REF] Geblewicz | Electron Transfer between Immiscible Solutions. The Hexacyanoferrate-Lutetium Biphthalocyanine System[END_REF][START_REF] Samec | Charge Transfer between Two Immiscible Electrolyte Solutions. Part IV. Electron Trnasfer between hexacyanoferrate(III) in Water and Ferrocene in Nitrobenzene Investigated by Cyclic Votammetry with Four-Electrode System[END_REF] ac impedance [START_REF] Cheng | Impedance Study of Rate Constants for Two-Phase Electron-Transfer Reactions[END_REF] etc. The process of an interfacial electron transfer reaction was first observed by Samec et al.
for the system composed of the hexacyanoferrate redox couple in the aqueous phase and ferrocene in the nitrobenzene phase: [START_REF] Samec | Detection of an Electron Transfer across the Interface between Two Immiscible Electrolyte Solutions by Cyclic Voltammetry with Four-Electrode System[END_REF]

Fe(CN)_{6(aq)}^{3-} + Fc_{(org)} ⇌ Fe(CN)_{6(aq)}^{4-} + Fc_{(org)}^{+}    (1.16)

Since that time, this particular type of interfacial charge transfer reaction has accompanied work focused on electrocatalysis, [START_REF] Rodgers | Particle Deposition and Catalysis at the Interface between Two Immiscible Electrolyte Solutions (ITIES): A Mini-Review[END_REF] photoinduced interfacial reactions [START_REF] Eugster | Photoinduced Electron Transfer at Liquid | Liquid Interfaces : Dynamics of the Heterogeneous Photoreduction of Quinones by Self-Assembled Porphyrin Ion Pairs[END_REF][START_REF] Fermìn | Organisation and Reactivity of Nanoparticles at Molecular Interfaces. Part II. ‡ Dye Sensitisation of TiO2 Nanoparticles Assembled at the Water/1,2-Dichloroethane Interface[END_REF] and interfacial modification with metals, polymers and metal/polymer deposits. The latter is discussed in more detail in subsections 3.1 and 3.3.

Electrochemically induced interfacial adsorption

At the ITIES, interfacial adsorption has been reported for the two following classes of species: (i) amphiphilic ions and (ii) large and multicharged species, e.g. dendrimers. The study of amphiphilic molecule adsorption, e.g. phospholipids, under applied potential conditions is mostly performed by electrocapillary curve measurements.
[START_REF] Zhang | Potential-Dependent Adsorption and Transfer of Poly(diallyldialkylammonium) Ions at the Nitrobenzene|water Interface[END_REF][START_REF] Kitazumi | Potential-Dependent Adsorption of Decylsulfate and Decylammonium prior to the Onset of Electrochemical Instability at the 1,2-Dichloroethane|water Interface[END_REF][START_REF] Samec | Dynamics of Phospholipid Monolayers on Polarised Liquid-Liquid Interfaces[END_REF] The adsorption of the phosphatidylcholine phospholipid from the organic phase and its complexation with different ionic species (K+, H+, Fe2+, Fe3+, IrCl6^{2-}, IrCl6^{3-}) from the aqueous solution was followed by cyclic voltammetry (which gave a characteristic triangular signal in the presence of adsorption) and by contact angle measurements at different interfacial potential values (the contact angle of the studied droplet tended to increase with polarization towards more positive potentials in the presence of phospholipid adsorption). [START_REF] Uyanik | Voltammetric and Visual Evidence of Adsorption Reactions at the Liquid-liquid Interfaces Supported on a Metallic Electrode[END_REF] Recently, dendrimers, repetitively branched molecules whose generation grows with the number of molecular branches, have attracted a lot of scientific attention as drug carriers, [START_REF] Patri | Dendritic Polymer Macromolecular Carriers for Drug Delivery[END_REF] molecular gates, [START_REF] Perez | Selectively Permeable Dendrimers as Molecular Gates[END_REF] soft templates [START_REF] Hedden | Templating of Inorganic Nanoparticles by PAMAM/PEG Dendrimer -Star Polymers[END_REF] etc., whereas electrochemistry at the ITIES was shown to be a good electroanalytical tool for their determination.
[START_REF] Berduque | Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces[END_REF][START_REF] Calderon | Electrochemical Study of a Dendritic Family at the water/1,2-Dichloroethane Interface[END_REF] It has been shown that dendrimers of growing size and charge exhibit complex behavior at the electrified liquid – liquid interface. Molecular dynamics simulations employed to study the adsorption of a model third-generation dendrimer at the liquid – liquid interface have shown that molecules possessing an amphiphilic structure have higher stability at the interface. [START_REF] Cheung | How Stable Are Amphiphilic Dendrimers at the Liquidliquid Interface?[END_REF] Interfacial adsorption, rather than interfacial transfer, was reported for the Poly-L-Lysine dendritic family (from generation 2 to generation 5) [START_REF] Herzog | Electroanalytical Behavior of Poly-L-Lysine Dendrigrafts at the Interface between Two Immiscible Electrolyte Solutions[END_REF] and for higher generations of poly(amidoamine) and poly(propyleneimine) dendrimers. [START_REF] Berduque | Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces.pdf[END_REF] Interfacial adsorption was also voltammetrically observed for biomolecules such as insulin, [START_REF] Scanlon | Voltammetric Behaviour of Biological Macromolecules at Arrays of Aqueous|organogel Micro-Interfaces[END_REF] hen egg-white lysozyme [START_REF] Scanlon | Voltammetric Behaviour of Biological Macromolecules at Arrays of Aqueous|organogel Micro-Interfaces[END_REF] and hemoglobin. [START_REF] Herzog | Electrochemical Behaviour of Haemoglobin at the Liquid/liquid Interface[END_REF] In addition to the adsorption process, the last biomolecule was shown to facilitate the transfer of the anionic part of the organic phase supporting electrolyte, and hence to decrease its Gibbs energy of transfer.
Such species do not exhibit the characteristics of reversible transfer: (i) the peak-to-peak separation exceeds 0.059 V/n; (ii) the forward to reverse peak current ratio ≠ 1; (iii) there is a deviation from linearity of the current – concentration (calibration) curves; and (iv) the reverse peak current is terminated with an abrupt current drop rather than a diffusion limited tail. For surface active molecule adsorption, (v) a so-called 'electrochemical instability', manifested by unrepeatable current spikes, was reported, and is discussed in subsection 1.1.4. [START_REF] Kitazumi | Electrochemical Instability in Liquid-Liquid Two-Phase Systems[END_REF] Additionally, for hemoglobin, (vi) a thin layer film was visible at the ITIES after repetitive cycling. 56

Potential window and limiting current

A polarizable electrode is ideal when, in the absence of Faradaic processes, no current flows through it over the whole potential range. The current – potential characteristic of a polarizable electrode in real conditions is presented in the corresponding figure. Similar behavior can be distinguished at the ITIES. In that case, the solid conductor is replaced with an immiscible electrolyte phase and the polarization becomes a purely ionic process. If we assume that the aqueous phase contains a highly hydrophilic salt (A+B-) of ideally zero solubility in the organic phase, and the organic phase contains a highly lipophilic salt (C+D-) of ideally zero solubility in the aqueous phase, then the ideally polarized interface can be described as an interface impermeable to charged particle transfer over the whole potential range. In other words, it means that the ions should possess an infinite Gibbs energy of transfer, which of course is far from the truth. In experimental conditions, a polarizable interface can be constructed between a highly hydrophilic salt dissolved in the aqueous phase (NaCl, LiCl etc.) and a highly lipophilic salt (tetrabutylammonium tetrakis(4-chlorophenyl)borate (TBA+TPBCl-)) dissolved in the organic phase.
One or both phases can also be replaced with, respectively, a hydrophilic (1-butyl-3-H-imidazolium nitrate) [START_REF] Cousens | Electrochemistry of the Ionic Liquid|oil Interface: A New Water-Free Interface between Two Immiscible Electrolyte Solutions[END_REF] or a hydrophobic (1-octyl-3-methylimidazolium bis(perfluoroalkylsulfonyl)imide) 59 ionic liquid. A non-polarizable interface occurs when both immiscible phases contain at least one common ion, which can freely cross the interface. In the ideal case, the current originating from the common ion transfer in either direction should not induce any change in the interfacial potential difference. According to the Nernst-like equation for the ion transfer reaction (eq. 1.6), the potential distribution depends on the ionic species separated between the two immiscible phases. At a non-polarized interface, a binary electrolyte (1:1, 2:2 etc.) A+B-, or electrolytes containing a common ion, A+B-/A+C-, are distributed between the two phases, and hence the interfacial potential difference does not depend on their concentration. [START_REF] Markin | Electrocapillary Phenomena at Polarizable and Reversible Interfaces between Two Immiscible Liquids: The Generalized Electrocapillary Equation in Hansen's Representation[END_REF][START_REF] Markin | Potentials at the Interface between Two Immiscible Electrolyte Solutions[END_REF] Systems employing a non-polarized interface are used to study the kinetics of electron transfer reactions.

Electrochemical instability

The electrochemical instability model predicted the potential dependence of the interfacial capacitance, the curvature of the electrocapillary curve and the location of the potential of zero charge with respect to Δ_{org}^{aq}Φ^{0}. [START_REF] Kakiuchi | Potential-Dependent Adsorption and Partitioning of Ionic Components at a Liquid|liquid Interface[END_REF][START_REF] Kakiuchi | Electrochemical Instability of the Liquid|liquid Interface in the Presence of Ionic Surfactant Adsorption[END_REF] The original concept of the electrochemical instability model did not assume the presence of specific adsorption of the ionic species, which is, however, the case in real systems.
Kakiuchi and Kitazumi improved the previous model, based on Gouy's double layer theory, by introduction of an inner layer localized between the two diffuse double layers. The new model describes the potential dependence of the interfacial capacitance, the excess charge of the ionic species in the aqueous phase and the shape of the electrocapillary curves in the presence of specific adsorption. [START_REF] Kitazumi | A Model of the Electrochemical Instability at the Liquid|liquid Interface Based on the Potential-Dependent Adsorption and Gouy's Double Layer Theory[END_REF][START_REF] Kitazumi | Electrochemical Instability in Liquid-Liquid Two-Phase Systems[END_REF] On cyclic voltammetry curves, in the presence of surface active ions, the electrochemical instability manifests itself as irregular current spikes in the vicinity of the half-wave potential of the transferring species. These characteristics were confirmed for anionic surfactants (alkanesulfonate and alkyl sulfate salts), [START_REF] Kakiuchi | Cyclic Voltammetry of the Transfer of Anionic Surfactant across the Liquid-liquid Interface Manifests Electrochemical Instability[END_REF][START_REF] Kakiuchi | Regular Irregularity in the Transfer of Anionic Surfactant across the Liquid/liquid Interface[END_REF][START_REF] Kitazumi | Potential-Dependent Adsorption of Decylsulfate and Decylammonium prior to the Onset of Electrochemical Instability at the 1,2-Dichloroethane|water Interface[END_REF] a cationic surfactant (decylamine) [START_REF] Kitazumi | Potential-Dependent Adsorption of Decylsulfate and Decylammonium prior to the Onset of Electrochemical Instability at the 1,2-Dichloroethane|water Interface[END_REF][START_REF] Kasahara | Electrochemical Instability in the Transfer of Cationic Surfactant across the 1,2-Dichloroethane/water Interface[END_REF] and the facilitated transfer of alkaline-earth metals by the complexing agent polyoxyethylene(40) isooctylphenyl ether (Triton X-405).
[START_REF] Kakiuchi | Electrochemical Instability in Facilitated Transfer of Alkaline-Earth Metal Ions across the Nitrobenzene|water Interface[END_REF] The abundance of the current irregularities strongly depends on the experimental time scale and intensifies at lower scan rates. A low concentration of dodecanesulfonate (0.2 mM) [START_REF] Kakiuchi | Regular Irregularity in the Transfer of Anionic Surfactant across the Liquid/liquid Interface[END_REF] does not induce anomalies on cyclic voltammograms due to weak adsorption (fluctuations appear once the concentration reaches 0.5 mM). This confirms that the electrochemical instability is triggered once the surface coverage reaches a critical value.

Miniaturization of the ITIES

In electroanalytical chemistry, miniaturization possesses two advantages over the macroscopic system. First of all, it improves the sensitivity as a result of the increased mass transfer to a solid – liquid or liquid – liquid interface arising from the radial diffusion zone geometry. [START_REF] Scanlon | Enhanced Electroanalytical Sensitivity via Interface Miniaturisation: Ion Transfer Voltammetry at an Array of Nanometre Liquid-Liquid Interfaces[END_REF] The second characteristic lies in the interfacial surface area, which decreases as the system becomes smaller; this in turn lowers the capacitive current and improves the limits of detection. [START_REF] Collins | Ion-Transfer Voltammetric Determination of the Beta-Blocker Propranolol in a Physiological Matrix at Silicon Membrane-Based Liquid|liquid Microinterface Arrays[END_REF] In the case of the liquid – liquid interface, miniaturization additionally improves its mechanical stability, whereas developments in the field of lithographic techniques allow the design and fabrication of well-defined supports. At the ITIES, miniaturization was first performed by Taylor and Girault, who supported the liquid – liquid interface in a pulled glass tube with an inner tip diameter of 25 µm.
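At such micrometric tips, the steady-state limiting current for the radial-diffusion-controlled transfer direction is commonly estimated with the inlaid microdisc expression I = 4zFDcr, which also appears in Table 1.2. A minimal sketch; the diffusion coefficient and concentration below are illustrative assumptions, not data from the cited works:

```python
F = 96485.0  # Faraday constant, C/mol

def limiting_current(z, D, c, r, n_pores=1):
    """Steady-state limiting current (A) at a disc-shaped micro-ITIES:
    I = 4 z F D c r per pore (D in m^2/s, c in mol/m^3, r in m),
    multiplied by n_pores for an array of well-separated pores."""
    return 4 * z * F * D * c * r * n_pores

# Illustrative values: z = 1, D = 1e-9 m^2/s, c = 1 mM (1 mol/m^3) and
# r = 12.5 um (a 25 um inner tip diameter) give a few nanoamperes.
print(limiting_current(1, 1e-9, 1.0, 12.5e-6))  # ~4.8e-9 A
```

Currents of this order are the reason why the lower capacitive background of miniaturized interfaces matters: the Faradaic signal itself is only nanoamperes.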
[START_REF] Taylor | Ion Transfer Reaction across a Liquid -Liquid Interface Supported on a Micropipette Tip[END_REF] Repeatable single pore microITIES could also be prepared by using a metal wire of fixed diameter as a template, with a glass tube melted around it. Removal of the wire by etching releases the pore, which can subsequently be used to support the ITIES. [START_REF] Stockmann | Hydrophobic Alkylphosphonium Ionic Liquid for Electrochemistry at Ultramicroelectrodes and Micro Liquid|liquid Interfaces[END_REF] With the development of new technologies, the dimensions of the single ITIES could be further decreased down to the nanometer level, especially when the LASER pulling approach was employed. 81 The geometrical and voltammetric properties of the three kinds of ITIES are compiled in Table 1.2. It is worth noting that the miniaturized liquid – liquid interface can possess an asymmetrical diffusion zone profile on the two sides of the interface, as shown e.g. for the microITIES scheme in Table 1.2. In that case, the mass transfer inside the pore is dominated by linear diffusion and hence the charge transfer reaction is a diffusion limited process. At the pore ingress, mass transfer is enhanced by a hemispherical diffusion zone, which makes the charge transfer a non-diffusion-limited process. The application of arrays of geometrically regular nano- [START_REF] Scanlon | Ion-Transfer Electrochemistry at Arrays of Nanointerfaces between Immiscible Electrolyte Solutions Confined within Silicon Nitride Nanopore Membranes[END_REF] and microITIES, [START_REF] Zazpe | Ion-Transfer Voltammetry at Silicon Membrane-Based Arrays of Micro-Liquid-Liquid Interfaces[END_REF] as in the case of solid state electrochemistry, has given a better electroanalytical response, since under proper geometrical conditions the ensemble can be treated as the sum of the individual pores. Organosilicon compounds are those in which there is a covalent bond between a silicon and a carbon atom.
Once two silicon atoms from two different organosilicon compounds are connected by an oxygen atom, a siloxane compound is formed. Polymers with a skeletal structure formed from siloxane units are called silicones. Siloxides are compounds with the general formula R3SiOM, where R is an organic group and M is a metal cation.

Table 1.2. Geometrical and voltammetric characteristics of macro-, micro- and nanoITIES:
- macroITIES (r >> δ; r in mm, cm): org → aq – linear diffusion; aq → org – linear diffusion; signal current I_{org→aq} = 268600 n^{3/2} A D^{1/2} C v^{1/2} and I_{aq→org} = 268600 n^{3/2} A D^{1/2} C v^{1/2};
- microITIES (r < δ; r in µm): org → aq – linear diffusion; aq → org – radial diffusion; signal current I_{org→aq} = 268600 n^{3/2} A D^{1/2} C v^{1/2} and I_{aq→org} = 4NnFDcr;
- nanoITIES (r < δ; r in nm): org → aq – linear diffusion; aq → org – radial diffusion; signal current I_{org→aq} = 4f(Θ)nFDcr. 81

Different characteristics of the regular arrays of microITIES are also tabulated. Silicon alkoxides or alkoxysilanes are compounds of silicon and alcohols with the general formula Si(OR)4, where R is an organic substituent (for example tetramethoxysilane, tetraethoxysilane, tetrapropoxysilane etc.). [START_REF] Iler | The Chemistry of Silica: Solubility, Polymerization[END_REF] Selected silicon-containing compounds are illustrated in the corresponding figure.

The Sol – Gel process of silica

The Sol – Gel processing of silica and silicates is a two-stage process which involves (i) hydrolysis, with the final aim of reaching a sol (a dispersion of small particles in a liquid medium), and (ii) condensation, which results in the gel phase (a relatively rigid, polymerized, non-crystalline network possessing pores of different sizes). Apart from the hydrolysis and condensation steps, one has to take into account other reactions that might occur, for instance silica dissolution at higher pH values. Moreover, the gel phase usually needs further post-treatment to cure the not fully cross-linked silica matrix in order to obtain a solid material.
All these steps have been well studied and described, [START_REF] Lev | Sol-Gel Electrochemistry: Silica and Silicates[END_REF] hence only a brief description is given here. Hydrolysis The rate of hydrolysis of alkoxysilane species as a function of pH is shown in Figure 1.9. Condensation The second step of the Sol-Gel process is the condensation reaction - see the red curve in Figure 1.9. Dissolution The sol-gel process is said to be inhibited once the dissolution rate exceeds the condensation rate. The dissolution rate versus pH is represented by the black line in Figure 1.9. Silicate or silica dissolution takes place under high-pH conditions - using strong bases - or in the presence of hydrofluoric acid. The mechanism of dissolution is similar in both cases and involves nucleophilic attack of OH⁻ or F⁻ on the positively charged silicon atom with subsequent cleavage of the Si-O bond. 88,87 Curing The silica structure in the gel state is not yet rigid. The matrix is flexible and not fully cross-linked, and the reactions of hydrolysis, condensation and dissolution can still affect the final structure of the silica network. Condensation is completed during the drying process, which involves the following steps: (i) reduction of the gel volume; (ii) successive emptying of the liquid content from the pores, driven by capillary pressure; and (iii) diffusion of the residual solvent to the surface and its evaporation. The drying process is accompanied by the formation of cracks, due to the high surface tension at the interface between empty and liquid-filled pores, which creates a large pressure difference. Crack formation can be partly overcome by preparing thin materials, by using surface-active species that decrease the surface tension, by drying under freeze-drying conditions or by doping the silica with other, more 'elastic' materials.
88 Templates - towards surface engineering The idea behind template technology is straightforward: a geometrically well-defined 'mold' is used to grow the material of interest, usually following a bottom-up approach. In electrochemistry, electrode surface modification has many advantages: an increase in the electroactive surface area (once a conductive material is grown), enhancement of mass transport via diffusion, or improvement of the catalytic efficiency owing to the increased number of nucleation sites (especially for near-atomic roughness). Deposition of electrically insulating materials, for instance porous silica, is of highest interest from the analytical point of view, especially when the deposit exhibits some degree of selectivity towards the analyte in the presence of a contaminant. Depending on the pore dimensions, three main classes of porosity can be distinguished: (i) microporous materials, with pore widths of less than 2 nm; (ii) mesoporous materials, with pore widths between 2 and 50 nm; and (iii) macroporous materials, with pore widths greater than 50 nm. 89 Walcarius has divided the templates into three main groups: hard templates, colloidal crystal assemblies and soft templates. [START_REF] Walcarius | Template-Directed Porous Electrodes in Electroanalysis[END_REF] 'Hard template' is the term reserved for a solid porous substrate that is modified within its pores; the deposit is then released by removal of the surrounding pore walls (see the corresponding figure). Silica or silicate materials can play a dual role, since they have been used as templates (in the form of colloidal assemblies, for instance) [START_REF] Velev | Colloidal Crystals as Templates for Porous Materials[END_REF] and are themselves easily templated, as shown in the following subsection. 1.2.4.
A soft template for a Sol-Gel process of mesoporous silica thin films Evaporation Induced Self-Assembly (EISA) is a method which allows the formation of a silica film with controlled mesostructure and pore size by varying chemical parameters and processing conditions. In general, in such a method the sol solution (containing the template and silica precursor species) is contacted with the solid support and the volatile components (e.g. H2O, EtOH and HCl) of the reaction medium are left to evaporate. Reducing the volume of the sol solution results in condensation of the silica precursor around the template matrix (which tends to form liquid-crystal phases once its critical micelle concentration is reached). Once the solvent is evaporated, the silica film is formed. [START_REF] Grosso | Fundamentals of Mesostructuring Through Evaporation-Induced Self-Assembly[END_REF] A variety of processing methods have been developed for the EISA process; examples include: dip-coating - the substrate is immersed in the sol solution and subsequently pulled out at a known rate; [START_REF] Lu | Continuous Formation of Supported Cubic and Hexagonal Mesoporous Films by Sol -Gel Dip-Coating[END_REF] spin-coating - centrifugal force is used to spread the sol solution over the support; [START_REF] Etienne | Preconcentration Electroanalysis at Surfactant-Templated Thiol-Functionalized Silica Thin Films[END_REF] casting - the sol is simply poured onto the support and left to evaporate; [START_REF] Kong | Gel-Casting without de-Airing Process Using Silica Sol as a Binder[END_REF] or spraying - transferring the sol solution onto the material in the form of an aerosol. [START_REF] Olding | Ceramic Sol-gel Composite Coatings for Electrical Insulation[END_REF] Under proper conditions, all the above methods allow the formation of ordered films with pores oriented parallel to the support plane.
Depositing mesoporous silica at solid electrodes in a manner which does not exclude electrical contact between the surrounding medium and the conductive substrate is challenging. Deposition of mesoporous silica films with high symmetry and controlled pore orientation - preferentially perpendicular to the substrate plane - is achievable by Electrochemically Assisted Self-Assembly (EASA). [START_REF] Walcarius | Electrochemically Assisted Self-Assembly of Mesoporous Silica Thin Films[END_REF][START_REF] Goux | Oriented Mesoporous Silica Films Obtained by Electro-Assisted Self-Assembly (EASA)[END_REF] In this approach, the condensation of the silica precursor is catalyzed by OH⁻ electrogeneration at a sufficiently low cathodic potential. Under these conditions, the cationic surfactant (cetyltrimethylammonium cations) present in the reaction medium forms a hexagonally packed liquid-crystal phase growing perpendicular to the electrode surface. The condensation of silica and the self-assembly of the soft template occur simultaneously. The resulting thin film, after thermal curing, shows a highly ordered silica network with the pores oriented normal to the underlying support. Functionalized mesoporous silica films prepared by Sol-Gel processing The introduction of chemical functionalities possessing different physico-chemical properties allows the chemical and physical properties of the material to be altered. This is especially important in analytical chemistry, since it improves the selectivity once the system is designed to favor the detection of the analyte and, in parallel, to inhibit the detection of a contaminant (e.g. based on charge, hydrophilic/hydrophobic or host-guest interactions). The sensor becomes even more versatile when the surface nanoarchitecture is additionally adjusted. Such attributes are readily feasible for highly ordered mesoporous silica films with pores oriented normal to the surface plane, prepared by the EASA method.
[START_REF] Etienne | Oriented Mesoporous Organosilica Films on Electrode: A New Class of Nanomaterials for Sensing[END_REF] Functionalized mesoporous silica prepared by the Sol-Gel process has two possible synthetic routes: (i) direct co-condensation with organosilanes and (ii) co-condensation and deposition followed by a chemical reaction. The first method involves the hydrolysis of alkoxysilanes together with organosilanes bearing the functional group of interest; electrochemical deposition then leads to the formation of a functionalized silica film. This approach has allowed the introduction of simple functionalities such as methyl - up to 40% mmol, 100 amine - up to 10% mmol [START_REF] Etienne | Oriented Mesoporous Organosilica Films on Electrode: A New Class of Nanomaterials for Sensing[END_REF] or thiol - up to 10% mmol 101 groups, without loss of mesostructural order. The second method is composed of two steps. Initially, the organosilanes bearing a reactive organic group are co-condensed with the alkoxysilanes and electrodeposited at the solid electrode surface. The second step involves the reaction between the organic functionalities of the silica framework and a properly selected reagent. A pioneering example was developed by Vilá et al., who electrogenerated azide-functionalized, oriented and ordered mesoporous silica films (for up to 40% mmol of azide-bearing silanes in the initial sol solution) that were further modified by alkyne-azide 'click' reactions with ethynylferrocene or ethynylpyridine. 102 Liquid-liquid interface modification Liquid-liquid interfaces can be modified ex situ (with the material prepared prior to its interfacial deposition) or in situ (when an interfacial reaction results in deposit formation). Electrochemistry at the ITIES can be both the driving force for the interfacial modification and a tool for the evaluation of the deposits. One line of work applied the modified liquid-liquid interface to photovoltaic applications.
106 The liquid-liquid interface was modified with Au NPs or mirror-like Au films. Photo-excited meso-tetra(4-carboxyphenyl)porphyrin was located in the aqueous phase, whereas the ferrocene species were dissolved in the dichloroethane. Irradiation of the modified interface at 442 nm - under total internal reflection conditions - led to an increase in photocurrent (resulting from electron transfer from ferrocene in the organic phase to the excited porphyrin in the aqueous phase - see Figure 1.11 for details), with significantly greater efficiency for the mirror-like deposit as compared with the Au NPs. 106 Other reductants for the Au precursor were also used. The interfacial reduction of AuCl₄⁻, upon its transfer from the organic phase containing the tri-(p-tolyl)amine reductant to the aqueous phase, could be followed through metallic Au formation, as confirmed with microfocus X-ray absorption near-edge structure spectroscopy. [START_REF] Samec | Dynamic Electrochemistry at the Interface between Two Immiscible Electrolytes[END_REF] No spontaneous reaction was found to take place when AuCl₄⁻ was initially dissolved in the DCE together with TPA. Higher reactivity was found for the electron-transfer-induced reduction of AuCl₂⁻, and hence it was used for Au NP deposition in further analysis. The effects of time and potential were examined, and the particle size distribution depended on the conditions applied: NPs as small as 3 nm were obtained at lower potentials and shorter deposition times, while the NP size increased up to 50 nm at higher potentials and longer deposition times. 110 One example from the group of Opallo concerns a three-phase junction system modified with Au. 111 The cell consisted of an ITO electrode crossing the liquid-liquid interface formed between the organic phase, containing a gold precursor salt - tetraoctylammonium tetrachloroaurate - and an aqueous solution of a hydrophilic salt - KPF6 - see Figure 1.12.
The electrochemical reduction, which takes place only at the three-phase junction, leads to gold deposition:

AuCl₄⁻(org) + 3e⁻ → Au(three-phase junction) + 4Cl⁻(org)   (1.18)

which is coupled with an ion transfer reaction - the relatively hydrophobic PF₆⁻ ions transfer to the organic phase while Cl⁻ is expelled to the aqueous phase:

Cl⁻(org) + PF₆⁻(aq) → Cl⁻(aq) + PF₆⁻(org)   (1.19)

The NP deposition was performed by chronoamperometry. The size distribution (from 110 to 190 nm) and the shapes (mostly rounded particles) were found to be unaffected after the initial growth. Another study employing liquid-liquid interface modification with Au NPs found application as liquid mirrors. 112 The optical and electrochemical properties of the gold mirror-like films prepared from Au NPs were found to depend on the surface coverage, the NP size and the polarization of the incident irradiation as well as its wavelength. 113 It was found, for instance, that films prepared from 60 nm Au NPs at a surface coverage parameter of 1.1 (indicating the formation of one monolayer) exhibited the maximum reflectivity (for S-polarized light under green laser irradiation). Moreover, the films were found to be conductive starting from a surface coverage of 0.8 (as confirmed with the SECM). Ag deposition at the electrified ITIES Interfacial electrodeposition of Ag films at the liquid-liquid interface in the so-called three-phase junction system was studied in a series of papers by Efrima. 114,115,116 Deposition was performed by reduction of aqueous Ag⁺ in a conventional three-electrode set-up, with the silver working electrode 'touching' the liquid-liquid interface. In this set of works, the authors presented a detailed study of the effects of the organic solvent, the concentration of the silver precursor, the silver reduction potential, the presence of surfactants, etc. on the morphology of the formed films.
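For the electrodeposition reactions discussed here - the three-electron Au reduction of eq. (1.18), or a one-electron Ag⁺ reduction - Faraday's law links the charge passed during chronoamperometry to the amount of metal formed. The Python sketch below illustrates this conversion; the 1 mC charge is an arbitrary illustrative value, not a figure from the cited studies, and 100% current efficiency is assumed.

```python
F = 96485.0  # Faraday constant, C/mol

def metal_deposited(charge_C, n_electrons, molar_mass_g_mol):
    """Moles and mass of metal deposited for a given faradaic charge,
    assuming 100% current efficiency (Faraday's law)."""
    mol = charge_C / (n_electrons * F)
    return mol, mol * molar_mass_g_mol

# Illustrative example: 1 mC of deposition charge
mol_au, m_au = metal_deposited(1e-3, 3, 196.97)  # AuCl4- + 3e- -> Au, eq. (1.18)
mol_ag, m_ag = metal_deposited(1e-3, 1, 107.87)  # Ag+ + e- -> Ag
print(f"Au: {mol_au:.3e} mol ({m_au*1e9:.0f} ng)")
print(f"Ag: {mol_ag:.3e} mol ({m_ag*1e9:.0f} ng)")
```

Note that, for the same charge, three times less Au than Ag is formed on a molar basis, because three electrons are consumed per gold atom in eq. (1.18).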
For instance, once the aqueous phase was contacted with low-surface-tension solvents, the deposits were thinner, shinier and showed a higher degree of ramification. On the contrary, the silver films electrogenerated at the interface between water and organic solvents of higher surface tension were black and required much longer deposition times. 116 Guo et al. studied the formation of Ag nanoparticles at miniaturized (nano- and micro-) ITIES. 117 Interfacial polarization allowed electron transfer from the organic phase containing BuFc to reduce Ag⁺ from the aqueous phase, with subsequent formation of Ag NPs. Single-particle deposition was shown to take place at microITIES smaller than 0.5 µm. In another work, an Ag disc ultramicroelectrode was first oxidized just above the liquid-liquid interface to give Ag⁺ ions, which were thereafter reduced to metallic Ag by decamethylferrocene dissolved in the DCE. The heterogeneous electron transfer reaction can be written as:

Ag⁺(aq) + DecFc(org) → Ag(interface) + DecFc⁺(org)   (1.20)

It was found that the interfacial potential difference established with the common-ion method had a weak effect on the nucleation and growth processes. Ag deposition at the ITIES driven by free energy and ion transfer was reported by Schutz and Hasse. 119 The aqueous phase was a solution of AgNO3, whereas the organic phase (nitrobenzene or n-octanol) contained DecFc or ferrocene as the reductant. The spontaneous transfer of NO₃⁻ from the aqueous to the organic phase triggered the reduction of Ag⁺ (present in the aqueous phase) by the reductant from the organic phase. The reaction was driven by the rule of charge neutrality of the system.
For DecFc the reaction can be schematically written as:

DecFc(org) + Ag⁺(aq) + NO₃⁻(aq) → DecFc⁺(org) + Ag⁰(interface) + NO₃⁻(org)   (1.21)

In a subsequent work, it was shown that dust pollutants present in the air - adsorbing at the interface before the two phases were contacted - provided preferential sites for the nucleation process and led to deviations from the initially obtained whisker-like fiber morphologies. 120 An interesting approach was introduced for Ag NP electrodeposition at the liquid-liquid interface separating a bulk aqueous phase from a thin-layer organic phase. Pd and Pt deposition at the electrified ITIES Deposition of metallic Pd and Pt NPs at the ITIES was usually performed by potentiostatic interfacial reduction of a metal-chloro-complex present in the aqueous phase via a heterogeneous electron transfer reaction with the organic phase containing a reductant - for instance 1,1'-dimethylferrocene 122 or butylferrocene. 123 Electrochemically induced liquid-liquid interface modification with both metals attracted a lot of attention concerning mechanistic studies of the nucleation process. Johans et al. have proven, on the basis of a theoretical model, that Pd NP nucleation is free from preferential nucleation sites. 124 The size of the Pd particles was also found to affect their surface activity, and based on quantitative thermodynamic considerations it was shown that only particles exceeding a critical radius can stay at the interface and grow further. When the interfacial tension was lowered by the adsorption of phospholipid molecules, the following changes were noticed: (i) the nucleation kinetics was significantly slowed (the growth kinetics was found to be unaffected); (ii) the critical radius - and consequently the size of particles with the same surface activity - had to increase; and (iii) more energy was required to trigger the electron transfer reaction.
125 In order to eliminate the NP agglomeration found in all previous works, the ITIES was miniaturized using alumina membranes with mean pore diameters of 50 nm 126 and 100 nm. 127 Interestingly, the growth of NPs was observed only in some of the pores, which was explained by an autocatalytic effect following interfacial nucleation. An alternative explanation of this behavior was also proposed. 136,137,138 An interface modified with a phospholipid monolayer - an artificial half-part of a biological membrane - can be studied through simple ion transfer processes. For instance, adsorption of phosphatidylcholines at the water-1,2-dichloroethane interface had almost no effect on the transfer of tetraethylammonium cations. 139 As discussed in the mentioned work, the phospholipids at the ITIES form 'island-like clusters' which partially cover the interface, and the electrochemical signal is due to the transfer of electroactive species through cluster-free domains. In order to control the compactness of the adsorbed monolayer, surface pressure control - with the Langmuir trough technique - was introduced as an additional degree of freedom. 140,141 Even though the monolayer quality could be controlled by lateral compression, the large planar area gave rise to a potential distribution and became unstable due to the dissolution of the adsorbed phospholipids in the organic phase. To overcome such difficulties, the Langmuir trough used to control the surface pressure of the adsorbed phospholipid monolayer served as the aqueous half-part of the electrochemical cell. The second, organic phase was a specially designed PTFE cell containing gelled o-nitrophenyloctylether (o-NPOE) - poly(vinyl chloride) (PVC), which was immersed into the monolayer, resulting in a gel-liquid interface (see the corresponding figure).
2(3Mon)•⁺(org) → 6Mon(org) + 2H⁺   (1.24)

Elongation of the oligomers lowers their oxidation potential, which results in the formation of an oligomeric cation radical:

Ce⁴⁺(aq) + 6Mon(org) → Ce³⁺(aq) + 6Mon•⁺(org)   (1.25)

which precipitates at the liquid-liquid interface:

2(6Mon•⁺(org)) → 12Mon(precipitate) + 2H⁺   (1.26)

The roughness of the polymer film was found to be of the order of tens of nanometers, as measured with the AFM. 154 Direct electro-polymerization at the liquid-liquid interface was also reported for polythiophene. The oxidation potential of the monomer was found to be > +1.85 V and, in the case of the studied system (the aqueous phase was the 0.1 M Ce⁴⁺/0.01 M Ce³⁺ redox couple with 0.2 M H2SO4 and 0.1 M Li2SO4, whereas the organic phase was a solution of 1 mM TPAsTPBF and 1 mM 2,2':5',2''-terthiophene), it was overlaid with a background current limiting the potential window. The presence of the growing polymer layer additionally inhibited the transfer of the organic-phase electrolyte ions (TPAs⁺ on the negative side and TPBF⁻ on the positive side of the potential window), which was attributed to the formation of a physical obstacle. 155 In another approach, the gold precursor crosses the interface to the aqueous phase, where it undergoes homogeneous electron transfer with the monomer species:

AuCl₄⁻(aq) + 3H-Mon(aq) → Au(interface) + 3Mon•⁺(aq) + 3H⁺ + 4Cl⁻   (1.27)

Evolution of the radical cation in the aqueous phase triggers the polymerization reaction:

Mon•⁺(aq) + xMon(aq) → Dimer(aq) → Oligomer(aq) → Polymer(interface)   (1.28)

The formation of a compact layer at the ITIES was confirmed by its blocking effect on ions crossing the interface upon subsequent voltammetric cycling (attributed to the growth of the polymer thickness). The growth of the film and the size distribution of the embedded Au NPs were studied under different experimental conditions, i.e. the type of monomer, the gold precursor to monomer concentration ratio, 157 the pH of the aqueous phase 157 and the interfacial Galvani potential difference. 158 The results are depicted in Table 1.5.
Table 1.5. Conditions of interfacial electro-polymerization and the resulting size of the embedded Au NPs.

Method | Conditions | pH | Au NP size | Ref.
CV | [AuCl₄⁻] = 0.2 mM, [Tyramine] = 1 mM | 10 | ~15 nm | [157]
CV | [AuCl₄⁻] = 1 mM, [Tyramine] = 0.5 mM | 10 | ~15 nm | [157]
CV | [AuCl₄⁻] = 1 mM, [Tyramine] = 1 mM | 10 | ~15 nm | [157]
CV | [AuCl₄⁻] = 0.2 mM, [Tyramine] = 1 mM | 12 | ~5 nm | [157]
CV | [AuCl₄⁻] = 1 mM, [Tyramine] = 0.5 mM | 12 | ~5 nm | [157]
CV | [AuCl₄⁻] = 1 mM, [Tyramine] = 1 mM | 12 | ~5 nm | [157]
IDPM, Δ(org→aq)Φ = 0.123 V | [AuCl₄⁻] = 0.2 mM, [Tyramine] = 1 mM | - | ~7 nm (univocal shapes) | [158]
IDPM, Δ(org→aq)Φ = 0.440 V | [AuCl₄⁻] = 0.2 mM, [Tyramine] = 1 mM | - | ~60 nm (presence of nanorods) | [158]
IDPM, Δ(org→aq)Φ = 0.123 V | [AuCl₄⁻] = 1 mM, [Resorcinol] = | | |

Upon irradiation with the light beam (454 nm), the photocurrent associated with the electron transfer reaction between the dye entrapped in the polypeptide multilayer and the electron acceptor in the organic phase was about 8 times larger than that of a planar glassy-carbon disc electrode modified in the same manner. The resulting photocurrent increase was associated with the high specific surface area of the carbon foam electrode. As mentioned by the authors, the carbon material, owing to its high absorption, could be replaced with other - transparent or reflective - electrodes to further increase the photovoltaic efficiency. A recently communicated report from Dryfe and coworkers described the catalytic effect of an ITIES modified with few-layer graphene or single-walled carbon nanotubes on the interfacial electron transfer reaction (between 1,1'-dimethylferrocene in the organic phase and ferricyanide in the aqueous phase). 163 The interfacial electron transfer reaction can be schematically written as:

DMFc(org) + Fe(CN)₆³⁻(aq) ⇌ DMFc⁺(org) + Fe(CN)₆⁴⁻(aq)   (1.29)

The electron transfer reaction in the presence of a few-layer graphene film at the ITIES gave rise to a higher faradaic current, which was attributed to the increase in the effective interfacial area. A similar catalytic effect was found for the carbon nanotube (CNT)-modified ITIES.
The increase in the electron transfer current in that case was attributed to the increase in the active surface area and/or a doping/charging effect of the CNTs. It was also shown that Pd metal nanoparticles can easily be prepared on the carbon-modified ITIES by spontaneous or potential-induced electron transfer. The size of the particles, depending on the support, oscillated around 10-20 nm for CNTs and 20-40 nm for graphene flakes. The electrified liquid-liquid interface modified with carbon-based materials was also developed for catalytic water splitting - the hydrogen evolution reaction (2H⁺ + 2e⁻ → H₂↑). The two-electron transfer reaction between protons from the aqueous phase and the electron donor from the organic phase can be controlled with the interfacial Galvani potential difference. A recent article by Ge et al. described the effect of an ITIES modified with graphene or mesoporous carbon doped with MoS2 nanoparticles on the catalysis of interfacial proton reduction. 164 The authors showed that modification of the liquid-liquid interface - separating tetrakis(pentafluorophenyl)borate anions in the aqueous phase (used as a proton pump and potential-determining ion) from the electron donor, DecMFc, in the organic phase - with graphene-supported MoS2 leads to a 40-fold increase in the hydrogen evolution rate, and over a 170-fold increase when mesoporous carbon was used as the conductive support. Similarly, a 1000-fold increase in the reaction rate was observed when the ITIES was modified with Mo2C-doped multiwalled carbon nanotubes under similar conditions. 165 The choice of a carbon material as a scaffold for nanoparticle deposition is not surprising, as (i) it allows a large dispersion of the nanoparticles that act as the catalytic sites of the proton reduction reaction; (ii) it possesses a high specific surface area; and (iii) the use of carbon provides the interfacial region with extra electrons.
Silica modified liquid-liquid interface The following section is devoted to examples of silica materials synthesized or deposited at, or in the close vicinity of, the liquid-liquid interface. Silica, thanks to its attractive physicochemical properties (it is biocompatible, chemically inert and insulating, undergoes easy functionalization and allows the formation of multi-scale porous materials), has found applications in a range of scientific directions: drug delivery systems, 166 multi-scale porous membranes, 167 supports for the immobilization of enzymes 168,169 and proteins, 170 high-specific-surface-area sorbents or gas sensors 172 - these being only ''the tip of the iceberg'' among other examples. Much attention has been paid during the last few decades to air-liquid 173 and liquid-solid [START_REF] Goux | Oriented Mesoporous Silica Films Obtained by Electro-Assisted Self-Assembly (EASA)[END_REF]174 interfaces modified with mesoporous silica. Insight into the field has yielded a range of methods allowing morphological control and the design of well-defined silica materials. To date, modification of the interface separating two immiscible liquids with silica materials is restricted to emulsions, microemulsions and a few dozen examples emerging from the neat liquid-liquid interface, among which only a few concern ITIES modification. The following subsections are devoted to the latter, covering the three-phase junction systems, the non-polarized planar liquid-liquid interfaces and, finally, the ITIES. Three phase junction systems Once the liquid-liquid interface is contacted with a solid electrode, a three-phase junction system is formed. In principle, in such systems the electrochemical reaction at the solid support is followed by ion transfer across the neighboring liquid-liquid interface. A three-phase junction with one of the phases being a solid electrode modified with a silicate material was studied in the group of Opallo.
175 The first report dealt with ex situ modification of a Au electrode with silicate films, on the surface of which a small droplet of a hydrophobic redox liquid (t-butylferrocene) was placed and thereafter covered with the aqueous phase. The silica modification was performed in order to minimize the transfer of the relatively hydrophilic t-butylferrocenium cation (generated upon electrooxidation) to the aqueous phase. The best effect was observed when the electrode was premodified with mercaptotrimethoxysilane. An electrochemically assisted Sol-Gel process employing the three-phase junction was also reported 176 and is shown schematically in the corresponding figure; the electrode reaction generating protons at the junction is:

SO₃²⁻ + H₂O → SO₄²⁻ + 2H⁺ + 2e⁻   (1.30)

The n-octyltriethoxysilane, once hydrolyzed, transferred across the liquid-liquid interface and condensed at the ITO electrode. The resulting stripe width ranged from 10 µm to 70 µm, depending on the method used (cyclic voltammetry or chronoamperometry) and the electrodeposition time. The stripe thickness was largest (~100 nm) on the aqueous side of the interface. Neat, non-polarized liquid-liquid interface in situ modification with silica materials First reports dealing with silica synthesis at the planar liquid-liquid interface involved the acid-prepared mesostructures method. 177 This approach consists of the synthesis of silica materials at the interface between an aqueous solution of surfactant (the structure-directing agent), with the pH adjusted to be considerably below the isoelectric point of the silica species, and an organic solvent - typically hexane, decane or methylene chloride - containing the silica precursor. Self-assembly of the template and the inorganic species at the oil/water interface follows the S⁺-X⁻-I⁺ mechanism (where S⁺ is the cationic surfactant, X⁻ is the halide anion and I⁺ is the positively charged inorganic species), controlled by hydrogen-bonding interactions. The silica precursor is hydrolyzed at the oil/water interface.
The hydrolyzed species, being more hydrophilic than hydrophobic, transfer to the aqueous phase and undergo self-assembly with the abundantly present cationic surfactant. As the reaction time passes, the film grows towards the aqueous side of the interface. The morphology of the film differs depending on the side of the interface: in the organic environment, the silica film consists of ball-shaped beads with diameters ranging up to 100 µm, whereas the shapes on the aqueous side are much smaller and more differentiated. An analogous experimental approach was employed at the oil/water interface to grow mesoporous silica fibers. 178,179 The formation of the desired fiber shapes requires control of a variety of factors: the pH of the aqueous phase, the template and precursor concentrations, the nature of the organic phase and the use of a co-solvent (changing the kinetics of the hydrolyzed silica species crossing the interface). The resulting silica fibers - with lengths up to 5 cm, diameters from 1 µm to 15 µm and high mesoporous quality (hexagonal arrangement, narrow pore size distribution, pores oriented parallel to the fiber axis and a high specific surface area of 1200 m²/g) - were thus obtained. Regev et al. studied silica film formation at the heptane/water interface under acidic and basic conditions. 180 The aqueous phase contained CTACl or CTAB in the acidic and basic media, respectively. The counter phase was always a mixture of TEOS and heptane. Different morphologies on the two sides of the silica film were found: for the acidic medium, the roughness of the synthesized film on the aqueous side was of the order of 50 nm, whereas on the organic side it was 3 nm (as measured with AFM).
For silica films prepared from the basic medium, it was found that both aging and curing times have an effect on the film mesostructure: (i) aging times <6 h result in poor order and an undefined crystalline phase; (ii) aging times >6 h generate a cubic structure and lead to a better crystalline structure; and (iii) long curing times (>15 h) lead to structure collapse. Fabrication of ordered silica-based films at the liquid-liquid interface can also be achieved by the deposition of silica spheres. An example of such an approach was given by He et al. 181 Two systems were examined: (i) silica nanoparticles (together with CTA⁺) were dispersed in the aqueous phase and contacted with hexane, 182 and (ii) silica nanoparticles dispersed in the aqueous phase were separated from hexane containing octadecylamine (ODA). 183 Adsorption of CTA⁺ at the SiO2 nanoparticle surface tunes their properties and makes them partially hydrophobic, which facilitates their interfacial deposition. A rearrangement of the SiO2/CTA⁺ layer, immediately after deposition, was also observed and attributed to interactions among the alkyl chains and/or with the organic phase. The second system (ii) is different. In the presence of ODA in the organic phase, three distinctive stages were proposed for increasing surfactant concentration. First, at low ODA concentrations the interface coverage was low, and hence the electrostatic (and probably hydrogen-bonding) interactions were not sufficient to induce SiO2 adsorption. The partition of ODA between water and hexane, and its adsorption at the SiO2 surface, became visible close to the saturation of the oil-water interface with the surface-active molecules. Silica nanoparticles 'modified' with ODA species started to adsorb at the liquid-liquid interface and changed its viscoelastic properties (higher interfacial tension values were observed as compared with the system in the absence of silica nanoparticles).
Finally, at high ODA concentrations the interface is fully occupied by the SiO2-ODA spheres, which probably (no experimental evidence was given) self-assemble, forming a densely packed layer at the liquid-liquid interface. An approach dealing with silica film generation at the liquid-liquid interface with anisotropic properties (also referred to as Janus properties) was first reported by Kulkarini et al. 184 The interface was formed between an aqueous ammonia solution and heptane containing methyltrimethoxysilane. The basic pH of the aqueous phase led to hydrolysis of the silica precursor and its interfacial condensation. Films were directly collected from the interface or supported on a porous material (suspended in the organic phase so that its bottom edge barely touched the aqueous phase) and cured in an oven. For the highest methyltrimethoxysilane/heptane molar ratio (0.4), the superhydrophobic properties of the silica film on the organic side of the interface (contact angle ~152°) were attributed to the high methyl group surface coverage and to its morphology (a thick film with roughness on the organic side). The aqueous side of the film was hydrophilic, as confirmed by contact angles of ~65°. NH4OH was found to have an effect on the film growth and wetting properties for concentrations below 0.1 M. The addition of cationic (CTAB) or anionic (SDS) surfactants at concentrations above the CMC to the aqueous phase did not appear to change the wetting properties of the organic side of the silica film; only CTAB slightly decreased the hydrophilicity of the aqueous side of the film (the contact angle increased from 65° in the absence to 75° in the presence of CTAB). The presence of charged surfactants (especially CTAB) could have additional mesoporous-structure-directing properties; however, no experimental effort was made in this direction. Further examples of silica films of the Janus type generated at a static liquid-liquid interface were studied by Biswas and Rao.
185 Films were grown at the interface between toluene (containing the silica precursor) and water (at acidic pH, facilitating precursor hydrolysis). Three silica precursors were used: tetraethoxysilane, hexadecyltrimethoxysilane and perfluorooctyltrimethoxysilane. On the micrometer scale, the organic side of the silica film always had a rougher structure than its aqueous counterpart. Contact angle measurements showed that the hydrophilicity of the aqueous side of the silica films stayed unaffected (~70°) for all three precursors. The contact angle on the organic side of the silica films varied depending on the hydrophobic character of the precursor. Results indicated slight hydrophobicity for the tetraethoxysilane-based organic side (contact angle = 92°) as compared with the very hydrophobic hexadecyltrimethoxysilane- and perfluorooctyltrimethoxysilane-based organic sides of the films (contact angles = 140° and 146° respectively). 185 A compilation of the different synthetic approaches leading to neat liquid -liquid interface modification with silica materials can be found in Table 1.6.

1.3.5.3. Modification of the electrified Interface between Two Immiscible Electrolyte Solutions with silica materials

The examples of electrified liquid -liquid interface modification with silica materials rely on either ex situ or in situ approaches.

1.3.5.3.1. Ex situ modification

In recent years, Dryfe et al. published a series of papers focused on the ITIES modified ex situ with silicalite materials. In the pioneering work, 186 for the first time, they modified the ITIES with an inorganic material -namely a non-polar zeolite (the zeolite membrane is schematically depicted in Figure 1.20 A). The ex situ modification employs material initially grown on a mercury surface, which after proper treatment was fixed to a glass tube using silicone rubber. By this interfacial modification the authors were able to slightly increase the potential window as compared with the unmodified system.
The size of the pinholes within the zeolite network allowed size-selective ion transfer across the interface. Two model ions of different size were employed: TMA+ and TEA+. Cyclic voltammetry showed that the TMA+ (hydrodynamic radius 0.52 nm) transfer remained unaffected in the presence of the zeolite membrane, whereas the transfer of the latter ion (TEA+, 0.62 nm in hydrodynamic radius) was suppressed. It has to be mentioned that the TMA+ backward transfer (from the organic to the aqueous phase) was accompanied by adsorption within the zeolite network, as indicated by the current drop and partial loss of reversibility. In a second approach 187 it was shown that a zeolite-framework-modified ITIES can significantly increase the potential window. The key requirement is the use of background limiting cations and anions with sizes exceeding the zeolite framework pores. This was obtained by employing tetrabutylammonium tetraphenylborate (TBA+TPB-) as the organic electrolyte. The standard transfer potentials of the organic electrolyte ions were lower for the cation and higher for the anion as compared with the standard transfer potentials of the aqueous salt ions, and hence the potential window -at the unmodified ITIES -was limited by the transfer of the supporting organic electrolyte ions. The diameter of TPB- (0.84 nm) is comparable to the pore diameter of silicalite (0.6 nm). The size of the pores in the zeolite network prevented the interfacial transfer of the organic electrolyte anion and, as a result, elongated the potential window, which was finally limited by the transfer of the smaller inorganic aqueous electrolyte ions. The silicalite membranes were immersed for ten minutes in the aqueous phase prior to each electrochemical measurement in order to saturate the framework with the aqueous solution. The increase in the potential window from the more positive potential side is in agreement with the size and charge of the organic anion (TPBCl-), which is unable to cross the pores.
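The size-exclusion argument used above reduces to a comparison of the quoted ion sizes with the quoted silicalite pore size. A minimal sketch of this sieving predicate in Python, taking the numbers quoted in the text at face value (it ignores shape, charge and adsorption effects, and treats the quoted "hydrodynamic radius" values as the relevant dimension):

```python
# Toy size-exclusion predicate for the zeolite-modified ITIES.
# All numerical values are those quoted in the text; the hard
# cutoff is a deliberate simplification.

PORE_SIZE_NM = 0.6  # silicalite pore diameter quoted in the text

ION_SIZE_NM = {     # 'hydrodynamic radius' values quoted in the text
    "TMA+": 0.52,
    "TEA+": 0.62,
}

def can_transfer(ion: str, pore_size_nm: float = PORE_SIZE_NM) -> bool:
    """Return True if the ion is expected to cross the zeolite-modified
    interface, i.e. if its quoted size is below the quoted pore size."""
    return ION_SIZE_NM[ion] < pore_size_nm

for ion in ION_SIZE_NM:
    print(ion, "transfers" if can_transfer(ion) else "is blocked")
```

With these numbers the predicate reproduces the voltammetric observation: TMA+ transfers while TEA+ is blocked.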
Finally, with the help of inductively coupled plasma optical emission spectrometry -used to determine the concentration of TEA+ in the zeolite-Y-modified ITIES -and cyclic voltammetry, the authors estimated the apparent diffusion coefficient of TEA+ within the membrane. The parameter was calculated from the Randles–Ševčík equation and ranged from 1.9 • 10-8 to 3.8 • 10-8 cm2/s. Ion transfer voltammetry at the ITIES modified with zeolite Y can also be used to change the chemical properties of the silicalite material. Examples include proton exchange with the sodium from the membrane framework immersed in an acidic solution 189. Another example, developed in the group of Chen, 190 emerges from ex situ modification of a macroporous polyethylene terephthalate (PET) membrane with randomly distributed pores of 500 nm in diameter and 5 µm in height (see Figure 1.20 B for schematics). Silica deposition inside the pores was performed with the aspiration-induced infiltration method 191 from an acidic solution containing CTAB and TEOS as template and precursor respectively. The as-prepared membrane was used to support the liquid -liquid interface (an ITIES separating 0.1 M KCl aqueous solution and 0.02 M BTPPA+TPB- in DCE). The authors claimed that the silica inside the PET macropores formed channels directed along the pores. The average pore diameter was estimated to be 3 nm (based on the N2 adsorption/desorption method), which is suitable for macromolecule sieving. Three species were employed to study the permeability of the silica-modified macropores by ion transfer voltammetry: TEA+, K+ (with a crown ether present in the organic phase) and a biomolecule, Cytochrome c. Notably, potassium did not encounter any resistance to mass transfer in the presence of the silica membrane. TEA+ also gave rise to a signal, which was partially hindered by TPB- transfer (application of species giving a wider potential window could improve the interpretation of the voltammetric data).
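The apparent diffusion coefficient mentioned above follows from the Randles–Ševčík relation for linear diffusion, i_p = 2.69×10⁵ n^(3/2) A C (D v)^(1/2) at 25 °C. A minimal sketch of such an estimate; every numerical input below is a hypothetical placeholder, not a value from the cited study:

```python
# Hypothetical sketch: apparent diffusion coefficient from the
# Randles-Sevcik equation, i_p = 2.69e5 * n**1.5 * A * C * sqrt(D*v)
# (i_p in A, A in cm^2, C in mol/cm^3, v in V/s, D in cm^2/s, 25 C).

def randles_sevcik_D(i_p, n, area_cm2, conc_mol_cm3, scan_rate_V_s):
    """Apparent diffusion coefficient (cm^2/s) from a voltammetric
    peak current; all inputs here are assumed, not from the thesis."""
    return (i_p / (2.69e5 * n**1.5 * area_cm2 * conc_mol_cm3)) ** 2 / scan_rate_V_s

# Example with placeholder numbers:
D = randles_sevcik_D(i_p=1.0e-6,        # peak current, A
                     n=1,               # ion charge
                     area_cm2=0.01,     # interfacial area
                     conc_mol_cm3=1e-6, # 1 mM
                     scan_rate_V_s=0.01)
print(f"apparent D = {D:.2e} cm^2/s")
```

The same rearrangement (solve for D from the measured peak current) is what yields the 10⁻⁸ cm²/s range quoted above once the in-membrane TEA+ concentration from ICP-OES is inserted.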
No signal was detected for Cytochrome c, whose size apparently exceeded the entrance dimensions of the silica channels and which was hence retained in the aqueous phase. Similar conditions were used for folic acid detection. 192 The analytical study allowed the characterization of folic acid transfer across the silica-modified PET membrane with and without CTAB species inside the silica channels. When the liquid -liquid interface was constituted between 10 mM NaCl (aq) and 20 mM BTPPA+TPBCl- (org) in dichlorohexane, the interfacial transfer of folic acid was found to be more pronounced in the presence of CTAB within the silica network (a phenomenon attributed to the anion exchange process between bromides from CTAB and the folic acid). The presence of alkyl chains inside the silica channels changed the polarity of the interior of the initially hydrophilic pores, which shifted the position of the interface from the pore entrance to its interior. To overcome this problem, low molecular weight polyvinylchloride was used in order to gelify the organic phase. Such an approach first stabilized the position of the interface and second allowed the use of stripping techniques, which further improved the limit of detection of folic acid (from 100 µM for cyclic voltammetry at the CTAB-doped silica-modified PET membrane supporting a water -dichlorohexane interface down to 80 nM for differential pulse stripping voltammetry at the CTAB-doped silica-modified PET membrane supporting a water -organogel interface).

1.3.5.3.2. In situ modification

In situ, electrochemically controlled silica film formation at the ITIES was reported for the first time in 2003 by Mareček and Jänchenová. 193 The method involved a Sol -Gel process with the template and the precursor separated between the organic and the aqueous phase respectively.
During silica film formation -by cyclic voltammetry -the trimethyloctadecylammonium cation (TODA+) (used in parallel as a template and as the cationic part of the organic electrolyte) was transferred to the aqueous solution of water glass containing Na2O and SiO2. The interfacial transfer was followed by the template -precursor self-assembly reaction and finally silica film deposition at the liquid -liquid interface according to the following reaction: 4TODA+(o) +

Chapter II. Experimental part

The aim of the following part is to supply the reader with all the experimental, technical and instrumental details concerning this work. At the very beginning the full list of chemicals that have been used in this study is given. The next section describes the electrochemical set-ups, including: (i) the cell used to study the reactions taking place at the macroscopic ITIES, (ii) the cell and the membrane used for miniaturization and finally (iii) the system employed to probe local pH changes induced by ion transfer and UV irradiation. The compositions of the cells during electrochemical measurements, as well as the information concerning preparation of the aqueous and the organic phase prior to interfacial silica deposition, are included in a separate section. Information concerning instrumentation follows. The chapter is completed with a set of step-by-step protocols dealing with: (i) the organic synthesis of chemicals prepared for the purpose of this work, (ii) the preparation of the organic counter electrode as well as (iii) the micro capillaries used to support the liquid -liquid interface.

Chemicals

Table 2.1 gives the full list of the chemicals that have been used in this work. For each chemical its name, abbreviation, source and function are given. The chemicals synthesized in this study were prepared according to the protocols available in sections: 1 -2.5.1; 2 -2.5.3; 3 -2.5.2; 4 -Appendix II; 5 -2.5.7 and 6 -2.5.4.
Electrochemical set-ups

Different set-ups were used in this work and their descriptions are divided in agreement with the successive parts of the thesis, i.e. macroscopic ITIES modification with the silica material, microscopic ITIES modification with the silica material and finally local pH changes at the ITIES induced by ion transfer and UV irradiation.

(i) Electrochemical cells supporting macroITIES

The custom-made electrochemical cells used to study the electrochemical silica deposition at the macroscopic ITIES are presented in Figure 2.1. The RE org and RE aq correspond to the organic and the aqueous reference electrodes respectively, whereas the CE org and CE aq correspond to the organic and the aqueous counter electrodes respectively. The numbers stand for: 1 -the aqueous phase; 2 -the organic phase; 3 -the supporting aqueous phase; 4 -the liquid -liquid interface between the higher density phase and the supporting aqueous phase and 5 -the ITIES. The interfacial surface area was 1.13 cm2 for cell A and 2.83 cm2 for cell B.

(ii) Electrochemical set-up supporting microITIES

The electrochemical cell used for the electrochemical silica interfacial deposition study and the electroanalytical evaluation of the silica deposits is shown in Figure 2.2 A. It consisted of a glass vessel and a glass tube, to the end of which a silicon wafer with the array of microITIES was attached (the silicon wafer was fixed to the glass tube using silicon acetate sealant from Rubson® (resistant to DCE)). The glass vessel was filled with the aqueous phase whereas the glass tube was filled with the organic phase. The silver wire acting as both organic reference and organic counter electrode was placed directly in the organic phase. Prior to contacting the two phases, the organic phase was placed in the glass tube and left for one minute in order to impregnate the array of pores with the organic solution.
Once the silver wire was placed in the organic phase, the upper hole of the glass tube was sealed.

The silicon membrane supporting the array of microITIES (see Figure 2.2 B) was fabricated from a silicon wafer being a 4x4 mm square. The pores were patterned by UV-photolithography and pierced by a combination of wet and DRIE etches as described elsewhere. [START_REF] Zazpe | Ion-Transfer Voltammetry at Silicon Membrane-Based Arrays of Micro-Liquid-Liquid Interfaces[END_REF] The fabrication process led to hydrophobic pore walls. The pores were therefore filled with the organic phase, presenting an array of inlaid microITIES. The characteristics of the silicon wafers used in this work are shown in Table 2.2. The pore height was constant and equal to 100 µm for all wafers.

(iii) Electrochemical set-up used to study local pH change

For the electrochemical study of PH+ ion transfer, the cell with the macroscopic ITIES was used. The pH microelectrode was positioned in such a way that the distance between the surface of the wafer and the electrode tip was 1 µm. The UV irradiation source was directed toward the interface using a flexible optical fiber.

Composition of electrochemical cells. The aqueous and the organic phase preparation

The electrochemical set-up used for silica material electrodeposition at the macroscopic ITIES can be written as cells 1 and 2. The various TEOS and CTA+ concentrations used in the above-mentioned cells 1 and 2 were in the ranges 50 mM ≤ x ≤ 300 mM and 1.5 mM ≤ y ≤ 14 mM, respectively. In order to study local pH changes at the liquid -liquid interface induced by ion transfer and UV irradiation, a photoactive compound (PH+) had to be synthesized (see the protocols in section 2.5).

(i) Composition of the aqueous and the organic phase during silica deposition induced by CTA+ ion transfer

Aqueous phase

During the interfacial silica electrodeposition the silica precursor -TEOS -was initially dissolved in the aqueous part of the liquid -liquid cell.
Initially the TEOS was added to the 5 mM NaCl solution and the pH was adjusted to 3 in order to facilitate the hydrolysis reaction:

Si(OCH2CH3)4 + xH2O --(pH=3)--> Si(OCH2CH3)4-x(OH)x + xCH3CH2OH (2.1)

This initially biphasic mixture was vigorously stirred for 1 h until a clear solution was obtained. Next the pH was increased to around 9 in order to facilitate the condensation process:

Si(OCH2CH3)4-x(OH)x + Si(OCH2CH3)4-x(OH)x --(pH=9)--> (CH3CH2O)4-x(OH)x-1Si-O-Si(OH)x-1(OCH2CH3)4-x + H2O (2.2)

The precursor solution prepared in such a manner was then directly transferred to the liquid -liquid electrochemical cell and constituted the higher density phase.

Organic phase

The organic phase used during the interfacial electrochemical silica deposition was a 1,2-dichloroethane solution of 10 mM BTPPA+TPBCl- (prepared according to the protocol in section 2.6.1) and x mM CTA+TPBCl- (prepared according to the protocol in section 2.6.2). The organic phase was the lower density phase of the liquid -liquid cell.

(ii) Composition of the aqueous and the organic phase during silica deposition induced by local pH change

The composition of the two immiscible phases during silica material interfacial deposition triggered by local pH change was inverted as compared with the previous experiments. This means that the precursor was initially dissolved in the organic phase whereas the template species were in the aqueous phase. The organic phase composition, a 20% TEOS solution in DCE, was unchanged among all experiments. The aqueous phase was an x mM CTA+Br- solution (for x = 0.70; 1.38 and 2.02 mM) in 5 mM NaCl.

Nitrogen adsorption-desorption -isotherms were obtained at -196 °C over a wide relative pressure range from 0.01 to 0.995. The volumetric adsorption analyzer was a TRISTAR 3000 from Micromeritics. The samples were degassed under vacuum for several hours at room temperature before nitrogen adsorption measurements.
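The isotherms obtained in this way are typically reduced to a specific surface area via the BET linearisation, x/(v(1-x)) = 1/(vm·c) + [(c-1)/(vm·c)]·x with x = p/p0, fitted over the low-pressure branch. A minimal, self-contained sketch; the isotherm data below are synthetic (generated from the BET model itself), and the monolayer capacity, C constant and sample mass are hypothetical numbers, not measured values:

```python
# Sketch of the BET linearisation:
#   x / (v (1 - x)) = 1/(vm*c) + (c - 1)/(vm*c) * x,   x = p/p0
# A straight-line fit over x ~ 0.05-0.30 yields the monolayer
# capacity vm, from which the specific surface area follows as
#   SSA = vm * N_A * sigma / (22414 * m)
# (vm in cm^3 STP, sigma = 0.162 nm^2 for N2, m = sample mass in g).
import numpy as np

def bet_isotherm(x, vm, c):
    """Synthetic BET isotherm, used here only to generate test data."""
    return vm * c * x / ((1 - x) * (1 + (c - 1) * x))

def bet_fit(x, v):
    """Return (vm, c) from a linear fit of the BET transform."""
    y = x / (v * (1 - x))
    slope, intercept = np.polyfit(x, y, 1)
    vm = 1.0 / (slope + intercept)
    c = slope / intercept + 1.0
    return vm, c

x = np.linspace(0.05, 0.30, 20)
v = bet_isotherm(x, vm=50.0, c=100.0)   # hypothetical numbers
vm, c = bet_fit(x, v)

# Specific surface area for a hypothetical 0.1 g sample:
N_A, sigma_nm2, mass_g = 6.022e23, 0.162, 0.1
ssa_m2_per_g = vm * N_A * sigma_nm2 * 1e-18 / (22414.0 * mass_g)
print(f"vm = {vm:.1f} cm3, C = {c:.0f}, SSA = {ssa_m2_per_g:.0f} m2/g")
```

Since the synthetic data obey the BET model exactly, the fit recovers the input vm and C, which serves as a sanity check of the linearisation.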
The specific surface area was calculated with the BET (Brunauer, Emmett, Teller) equation.

Instrumentation

X-ray Photoelectron Spectroscopy -analysis was performed with the KRATOS Axis Ultra X-ray spectrometer from Kratos Analytical. The apparatus was equipped with a monochromated AlKα X-ray source (hν = 1486.6 eV) operated at 150 W.

Fourier transform Infra-red Spectroscopy -IR spectra were recorded in transmission mode using KBr pellets as a support for the sample. The Nicolet 8700 apparatus was used for this purpose.

Transmission Electron Microscope -the TEM micrographs were recorded with a CM20 apparatus (Philips, Netherlands) at an acceleration voltage of 200 kV.

Scanning Electron Microscope -the SEM micrographs were obtained using a Philips XL30 (with an acceleration voltage equal to 1 kV), without any sample metallization.

Confocal Raman spectroscopy -a Horiba Jobin Yvon T64000 spectrometer equipped with a nitrogen-cooled CCD detector and a green laser light (wavelength 532 or 514 nm) was used to collect Raman spectra. The irradiance was measured in air at 100 kW cm-2 and was estimated to have the same value in the aqueous phase. No laser heating in the aqueous phase was observed.

UV irradiation -the irradiation source was an HXP 120-UV lamp from Kubler Codix. The apparatus was equipped with a flexible optical fiber. The capillary was set at 4500 V. The end plate offset was -500 V.

Local pH measurement

Protocols

An inseparable part of this work was organic synthesis, miniaturization and electrode preparation. All step-by-step protocols describing these aspects are arranged together in the following sections.

2.5.1. Preparation procedure of BTPPA+TPBCl-

Bis(triphenylphosphoranylidene)ammonium tetrakis(4-chlorophenylborate) (BTPPA+TPBCl-) -the organic electrolyte salt -was prepared according to the following metathesis reaction:

BTPPA+Cl- + K+TPBCl- → BTPPA+TPBCl- + KCl

Preparation involves a few stages:
1.
1.157 g of BTPPA+Cl- and 1.000 g of K+TPBCl- were dissolved in 10 ml and 20 ml of a 1:2 mixture of H2O:MeOH respectively. The solution of BTPPA+Cl- was added drop-wise to a vigorously stirred solution of K+TPBCl-, resulting in a white precipitate.
2. The reaction was continuously stirred at 4°C for 48 h. After this time, the suspension was filtered under vacuum for about 30 min. The reaction product was transferred to a dry beaker and placed in a desiccator overnight. The beaker was covered with aluminum foil to prevent light-induced decomposition of the product.

Preparation procedure of CTA+TPBCl-

1. The solution of CTA+Br- was added drop-wise to a vigorously stirred solution of K+TPBCl-. The product of the reaction -a white precipitate -was then left under stirring in the fridge for 48 h.
2. After this time the suspension was filtered under vacuum and then dried overnight in a desiccator.
3. Dry CTA+TPBCl- was dissolved in acetone and filtered under gravity using a paper filter. The beaker containing CTA+TPBCl- was covered with Parafilm® and placed under the fume hood. Small holes were made in the Parafilm® to allow acetone evaporation.
4. Recrystallized CTA+TPBCl- is a very dense and viscous liquid residue at the bottom of the beaker. In the last stage, the product was dissolved in dichloroethane and transferred to a volumetric flask in order to calculate its concentration (the beaker was weighed with and without the CTA+TPBCl-).
5. The solution was stored at 4°C.

Preparation procedure of TBA+TPBCl-

Tetrabutylammonium tetrakis(4-chlorophenylborate) (TBA+TPBCl-) -the salt of the interfacially active model cation TBA+, soluble in the organic phase -was prepared according to the following metathesis reaction:

TBA+Cl- + K+TPBCl- → TBA+TPBCl- + KCl

Preparation involves a few stages:
1. Equimolar amounts of TBA+Cl- (0.500 g) and K+TPBCl- (0.893 g) were dissolved in 10 ml and 20 ml of a 1:2 mixture of H2O:MeOH respectively. The solution of TBA+Cl- was added drop-wise to a vigorously stirred solution of K+TPBCl-, resulting in a white precipitate.
2.
The reaction was continuously stirred in the fridge for 48 h. After this time the solvent was filtered off under vacuum for about 30 min. The reaction product was transferred to a dry beaker and placed in a desiccator overnight.
3. The next step was to dissolve the TBA+TPBCl- in acetone and then filter it under gravity using a paper filter. The vessel with the filtrate was covered with Parafilm®, in which small holes were made to allow the acetone evaporation.
4. Recrystallized TBA+TPBCl- (small, powder-like crystals) was transferred to a vessel and stored at 4°C.

Preparation procedure of PH+TPBCl-

Trimethylbenzhydrylammonium tetrakis(4-chlorophenylborate) (PH+TPBCl-) -the organic salt of the photoactive compound -was prepared according to the following metathesis reaction:

PH+I- + K+TPBCl- → PH+TPBCl- + KI

Preparation involves a few stages:
1. Equimolar amounts of PH+I- (0.500 g) and K+TPBCl- (0.740 g) were dissolved in 10 ml and 20 ml of a 1:2 mixture of H2O:MeOH respectively. The solution of PH+I- was added drop-wise to a vigorously stirred solution of K+TPBCl-, resulting in a white precipitate.
2. The reaction was continuously stirred in the fridge for 48 h. After this time the solvent was filtered under vacuum for about 30 min.
3. In order to purify the product from KI, it was rinsed three times with distilled water (iodide detection was performed with HPLC in order to follow the effectiveness of KI removal -see Figure 2.5). Each time 10 ml of water was shaken together with the PH+TPBCl- and then it was centrifuged (15 minutes at 5000 rpm).
4. The KI-free reaction product was transferred to a dry beaker and placed in a desiccator overnight.
5. The next step was to dissolve the PH+TPBCl- in acetone and then filter it under gravity using a paper filter. The vessel with the filtrate was covered with Parafilm®, in which small holes were made to allow the acetone evaporation.
6. Recrystallized PH+TPBCl-, being a viscous liquid, was dissolved in dichloroethane and transferred to a volumetric flask.
The mass of the PH+TPBCl- was obtained by subtracting the mass of the empty beaker from the mass of the beaker containing the photoactive compound. The solution was stored in the fridge.

Protocol of organic counter electrode preparation

A borosilicate glass capillary, as an inert material, was used to protect the interior of the electrode from both solutions (the organic and the aqueous phase). Preparation of the organic counter electrode requires:
- a platinum mesh and a platinum wire attached to each other by spot welding (Figure 2

The step-by-step electrode preparation protocol can be described as follows:
1. First, the platinum mesh has to be welded around the platinum wire. Next, as it is shown on
4. The ~15 cm copper-tinned (for electric contact) wire was placed inside the glass capillary until it touched the conductive resin. To speed up the curing of the resin, the capillary was put in the oven at 130°C for a few hours.
5. Once the resin was cured, the electrode was taken out of the oven and its interior was filled with the silicone sealant (once again using a syringe with the thickest needle possible). Finally, elastic insulation was placed on top of the electrode in order to secure the contact wire. The ready electrode is depicted on Figure 2.9.

Figure 2.9. Platinum mesh based organic phase counter electrode.

Single pore microITIES: protocol of preparation

A great part of this thesis concerns miniaturization. A reproducible procedure for the preparation of the single pore capillaries supporting the miniaturized liquid -liquid interface is described here. Preparation of a single pore microITIES requires:
- borosilicate glass capillaries with filament (outer diameter 1.

The single pore microITIES preparation can be divided into a few steps:
1. Initially, the borosilicate glass capillary has been closed in the middle.
To do so, one end of the capillary was sealed with Parafilm® and then the capillary was placed vertically with the open end directed upwards, so that its half-length was located in the middle of a ring made from several scrolls of nickel/chrome 80/20 wire (Figure

Protocol of preparation of trimethylbenzhydrylammonium iodide

The protocol was adapted from a previous study. [START_REF] Samec | Charge-Transfer Processes at the Interface between Hydrophobic Ionic Liquid and Water[END_REF] The amine group of aminodiphenylmethane hydrochloride is a nucleophilic center, which is methylated with methyl iodide in order to form the quaternary ammonium salt. The overall methylation reaction can be schematically written as:

(reaction scheme: aminodiphenylmethane + CH3I, Na2CO3, MeOH → trimethylbenzhydrylammonium iodide)

The protocol of preparation can be divided into a few stages:
1. First, the solvent -100 ml of methanol -was placed into a round-bottom two-neck flask and was deoxygenated for 30 minutes by passing nitrogen through the solution (the top neck was closed with a cap, whereas the side neck was covered with a turn-over flange stopper through which two needles were placed, the first as a nitrogen inlet and the second as a nitrogen outlet).
2. The following reagents: 1 g of aminodiphenylmethane hydrochloride, 3.85 g of anhydrous sodium carbonate and 2.26 ml of iodomethane were added to the flask containing methanol. The reaction was carried out under an inert gas atmosphere. The N2 inlet was supplied from the top part of the condenser. A gas bubbler filled with silicone oil was employed to monitor the N2 flow.
4. After 24 hours, once the mixture had cooled down to room temperature, 30 ml of 1 M sodium thiosulfate was added and the mixture was stirred for 15 minutes. The insoluble part of the reaction mixture was gravity filtered with a paper filter. 10 grams of potassium iodide was added and the reaction mixture was stirred for another 15 minutes.
5. The further purification step was completed in a separatory funnel.
The reaction mixture was shaken with 50 ml of H2O and 50 ml of dichloromethane. The organic phase was collected. The extraction was repeated two more times, with 50 ml of dichloromethane each time.
6. The dichloromethane was removed under reduced pressure with the rotary evaporator shown on Figure 2.16.
7. The obtained product was recrystallized from acetone.

Chapter III. Templated Sol -Gel process of silica at the electrified liquid -liquid interface

Under proper conditions the liquid -liquid interface can be polarized and used as a well-understood electrochemical device (the equations developed for the electrochemistry of solid electrodes are applicable at the ITIES). From an electroanalytical point of view the electrified liquid -liquid interface brings several advantages over solid electrodes: (i) it is self-healing, which eliminates the problems concerning electrode surface polishing; (ii) the interface is deprived of defects down to the molecular level; (iii) the detection is not restricted to reduction and oxidation reactions but can arise from a simple interfacial ion transfer reaction; (iv) it has good sensitivity and finally (v) reasonable limits of detection, down to the nM level for miniaturized systems. The ITIES can be used for the determination of a number of ionic analytes ranging from inorganic molecules [START_REF] Bingol | Facilitated Transfer of Alkali and Alkaline-Earth Metal Ions by a Calix[4]arene Derivative Across Water/1,2-Dichloroethane Microinterface: Amperometric Detection of Ca(2+)[END_REF]196 to organic compounds, including biochemically important species. 197 Chiral detection was also reported at the ITIES. 198,199 In spite of these qualities, detection at the ITIES still suffers from poor selectivity, which until today has been restricted to the use of ionophores (discussed in section 1.1.2.2) or ex situ zeolite modifiers (discussed in section 1.3.5.3.1).
For solid electrodes it has been shown many times that surface modification can significantly improve selectivity. Recently the Sol -Gel process of silica has attracted a lot of attention for the modification of conductive electrodes since, under proper conditions, in the presence of structure-directing agents called templates, the formation of highly ordered structures oriented normal to the electrode substrate -even on electrodes with complex geometry -is easy to control. [START_REF] Dryfe | The Electrified Liquid-Liquid Interface[END_REF]4 The optimal conditions require the hydrolysis of TEOS -the silica precursor species -at an acidic pH equal to 3 in the presence of cationic CTA+ surfactants -used as template -and supporting electrolyte, for a CTA+/TEOS molar ratio equal to 0.32. The condensation of silica, the formation of cylindrical CTA+ micelles and the generation of the silica scaffold were triggered simultaneously by the application of a sufficiently low potential, at which the electrochemical reduction of H+ and H2O led to OH- evolution and subsequently a local pH increase. The first reports dealing with in situ modification of the electrified liquid -liquid interface with silica materials emerge from the group of Mareček (discussed in section 1.3.5.3.2).

This study aims to investigate the deposition of mesoporous silica material at the macroscopic liquid -liquid interface. By coupling the electrochemistry at the ITIES with the Sol -Gel process of silica (in the presence of structure-directing agents) the liquid -liquid interface can be easily modified. To do so, the precursor species were hydrolyzed in the aqueous phase and separated from the organic template solution by the liquid -liquid interface. The silica material formation was controlled by the electrochemical transfer of positively charged template species from the organic to the aqueous phase. A number of experimental parameters, e.g. the initial concentrations of template and precursor, the polarity of the organic phase or the experimental time scale, were investigated.
Electrochemical information extracted during interfacial silica deposition allowed the proposal of a possible deposit formation mechanism. A detailed description of the electrogenerated silica material was performed with a number of qualitative, structural and morphological studies. The ultimate goal of this work is to construct well-characterized molecular sieves that could be used to improve the selectivity of electroanalytical sensors employing the liquid -liquid interface.

Results and discussion

3.1.1. Electrochemical study

After electrodeposition the ITIES was modified with the silica film being a gel phase -a not fully condensed silica matrix. In order to obtain solid silica it has to be cured. To do so, the silica deposits were collected (using a microscope cover slip immersed in the cell prior to electrolysis) and stored overnight in the oven at 130°C. Silica deposits were prepared under different initial synthetic conditions, which include: the effect of [TEOS]aq and [CTA+]org, the polarity of the organic phase and the method used for electrolysis. Samples prepared in such a way could then be used for further characterization. IR spectroscopy and XPS were used to study the chemical functions of the silica deposits. BET analysis allowed evaluation of the silica porosity after calcination (removal of organic residues from inside the pores induced by thermal oxidation). Finally, SAXS and TEM were employed as tools to study the structural properties of the silica.

Spectroscopic analysis

A characteristic IR spectrum recorded for a silica deposit electrogenerated at the ITIES using a surfactant template is shown on Figure 3.7. The peaks attributed to the vibrations of the silica network are localized at 1080 cm-1 -an intense band arising from the Si-O-Si asymmetric stretching mode, 790 cm-1 -arising from the symmetric stretching of Si-O-Si and 480 cm-1 -attributed to Si-O bond rocking.
The small peak observed at around 1200 cm-1 has been described in the literature as an overlap of asymmetric stretching modes of the Si-O-Si bond. [START_REF] Senda | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions[END_REF] The broad band from about 3100 cm-1 up to 3750 cm-1 corresponds to both the terminal -O-H stretching mode present inside the silica walls and adsorbed water molecules. An additional contribution of -O-H can also be found as a peak around 1630 cm-1, which arises from its bending vibration mode. 9,10 Finally, IR spectroscopy was also used to track remaining surfactant species. The CTA+ molecules were tracked by two characteristic peaks situated at 2925 cm-1 -attributed to asymmetric CH2 stretching vibrations -and at 2850 cm-1 -arising from CH2 symmetric stretching vibrations. Also the peak at around 1480 cm-1 could inform about the presence of alkyl chains, as it corresponds to the C-H bending vibration, although it is not clearly pronounced on the recorded spectrum. [START_REF] Senda | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions[END_REF] The presence of template molecules in the silica framework was in agreement with the voltammetric results, which indicated that the interfacial transfer of CTA+ was irreversible and that its reverse transfer was accompanied by an adsorption process. The absorption of aromatic rings from the cationic part of the organic electrolyte was observed as a peak with very weak intensity at around 3060 cm-1 (see inset of Figure 3.7).
[START_REF] Gavach | The Double Layer and Ion Adsorption at the Interface between Two Non Miscible Solutions: Part I: Interfacial Tension Measurements for the Water-Nitrobenzene Tetraalkylammonium Bromide Systems[END_REF] The presence of the organic electrolyte cation could be due to residuals from the organic phase evaporation, most probably collected together with the silica deposit from the liquid-liquid interface. However, as it will be shown in one of the following chapters, this was not the case, since organic electrolyte ions were also incorporated during silica deposit formation. The Cl 2s signal is most probably due to remaining aqueous supporting electrolyte residuals, although some charge balancing of terminal silanolate groups inside the pores by Na + cannot be excluded. Another source of Cl - anions is TPBCl - , the anionic part of the organic electrolyte, which apparently was also present in the sample. The BTPPA + traces could be followed with the P 2p signal. The explanation for the presence of the two hydrophobic ions in the sample is not straightforward, since some residuals could have been collected together with the organic phase during silica deposit collection. The formation of SiO 2 can be confirmed on the basis of these spectroscopic results.

BET analysis

The results of nitrogen adsorption-desorption isotherms for the silica deposit electrogenerated at the liquid-liquid interface are displayed on Figure 3.9. The curve marked with the squares corresponds to the silica deposit that was cured in the oven at 130°C overnight. The second plot, marked with the circles, was recorded after calcination at 450°C for 30 min. Clearly, after calcination the pores previously filled with the organic species are liberated, since the specific surface area increased from 194 m 2 /g up to 700 m 2 /g.
The curve recorded after calcination took the shape of a type IV isotherm [START_REF] Gros | The Double Layer and Ion Adsorption at the Interface between Two Non-Miscible Solution; Part II: Electrocapillary Behaviour of Some Water-Nitrobenzene Systems[END_REF] with two capillary condensation steps: the first at 𝑝/𝑝 0 = ~ 0.4 and the second at 𝑝/𝑝 0 = ~ 0.7 (the latter was also present for the sample before calcination). The first step is characteristic of mesoporous materials with relatively small pores; the pore dimensions were calculated to be around 7 nm. The step at 𝑝/𝑝 0 = ~ 0.7 indicated that the synthesized silica exhibits a second, larger degree of porosity, with dimensions around 1 µm.

Morphological characterization

SAXS analysis was employed to study the structural parameters of the silica deposits. The broad peak in the SAXS patterns suggested the presence of a 'worm-like' structure (especially visible for the blue curve on Figure 3.10 A). [START_REF] Girault | Thermodynamic Surface Excess of Water and Ionic Solvation at the Interface between Immiscible Liquids[END_REF] This level of order confirms the template (CTA + ) transfer from the organic to the aqueous phase, where it self-assembles with hydrolyzed TEOS species and results in the silica-modified ITIES. Figure 3.10 A shows the structural properties of the silica deposits prepared for different [CTA + ] org (5 mM - black curve, 10 mM - red curve and 14 mM - blue curve) while [TEOS] aq was kept constant (300 mM). The peak intensity increased with increasing [CTA + ] org and the pore center-to-center distance (calculated as 2π/q, where q is the scattering vector value at the peak center) decreased from 5.7 nm ([CTA + ] org = 5 mM) to 4.8 nm ([CTA + ] org = 10 mM) and down to 4.5 nm ([CTA + ] org = 14 mM). It was already shown that [CTA + ] org influenced the charge transfer during the voltammetric silica deposition and hence was the limiting factor during silica formation.
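The center-to-center distances quoted above follow directly from the position of the SAXS peak via d = 2π/q. A minimal Python sketch of this conversion is given below; the q values used are illustrative, back-calculated from the reported distances, not raw experimental data:

```python
import math

def centre_to_centre_nm(q_nm_inv):
    """Pore center-to-center distance d = 2*pi/q, with q in nm^-1."""
    return 2 * math.pi / q_nm_inv

# Illustrative peak positions corresponding to the reported 5.7, 4.8 and 4.5 nm
for q in (1.102, 1.309, 1.396):
    print(round(centre_to_centre_nm(q), 1), "nm")
```

As expected, a shift of the peak towards larger q (higher [CTA + ] org ) translates into a smaller repeat distance.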
[CTA + ] org was directly related to the CTA + concentration in the aqueous diffusion layer once CTA + was transferred from the organic to the aqueous phase. The critical micelle concentration (CMC) in water for CTA + Cl - is 1.4 mM. [START_REF] Girault | Thermodynamics of a Polarised Interface between Two Immiscible Electrolyte Solutions[END_REF] The concentration of CTA + in the aqueous diffusion layer is more likely to be above the CMC for the higher [CTA + ] org and hence results in a better structuration of the silica deposit. When [CTA + ] org was kept constant (14 mM) and [TEOS] aq was decreased from 300 mM down to 50 mM, a small increase in the pore center-to-center distance (from 4.5 nm to 4.9 nm) was observed. It was concluded that the impact of [TEOS] aq on the silica deposit formation is not as important as that of [CTA + ] org . The nature and the polarity of the organic phase were found to have a great influence on the structure of the deposits, as shown on Figure 3.10 B. The addition of alcohol to the organic phase led to the formation of a still broad, however clear, response, with a pore center-to-center distance of 3.8 nm (alcohols are known to affect the structure of silica prepared by the Sol-Gel process in the presence of CTA + as structure driving agents). 15 The discrepancy between the pore center-to-center distance calculated from the SAXS pattern and from the BET analysis (pore diameter was found to be 7 nm) can be attributed to the broad pore size distribution of the sample. 208 The pore center-to-center distances calculated from the SAXS patterns also show only a very small discrepancy between deposition methods (4.6 nm for chronoamperometry and 4.8 nm for cyclic voltammetry).

Conclusion

The electrochemical modification of the macroscopic liquid-liquid interface with silica material structured by a cationic surfactant, acting as the structure driving agent, was studied.
It has been shown that silica deposits can be formed only in the presence of both the silica precursor in the aqueous phase and the positively charged surfactant template in the organic phase. This indicates that the CTA + ion transfer was facilitated by the negative charge of polynuclear TEOS species hydrolyzed in the aqueous phase. The shape of the voltammetric response suggested an interfacial adsorption process of CTA + , as an abrupt drop in current was always observed on the reverse peak. Moreover, it was concluded that the formation of silica at the ITIES is limited by [CTA + ] org , as the charge under the reverse peak increased linearly with its concentration (varying [TEOS] aq did not affect the charge transfer). Silica deposits were also found to stabilize the liquid-liquid interface, as electrochemical instability was not observed after deposition. Electrochemical deposition was followed by silica collection, curing and finally characterization. With spectroscopic methods, the formation of the Si-O-Si bond was confirmed. Additionally, the presence of template species could be followed by IR spectroscopy, as the two vibrational modes of symmetric and asymmetric stretching of CH 2 bonds were found. The organic electrolyte was also found to be present in the silica deposits. Although most probably some organic phase solution was collected together with the deposit, its participation in silica formation cannot be excluded. Thermal treatment at 450°C for 30 min was enough to calcine the organic residues from inside the silica pores. The BET isotherm confirmed the presence of a mesostructure. The pore arrangement was found to be of 'worm-like' structure, as concluded from the broad peak present in the SAXS patterns and from TEM micrographs. The pore center-to-center distance, depending on the experimental conditions applied during silica deposit synthesis, ranged from 3.8 nm up to 6.1 nm.
Silica generated at the liquid-liquid interface requires further treatment in order to support and cure the material for application in electroanalysis. This is not feasible at the macroscopic ITIES; as a consequence, the silica has to be supported in a way that allows its synthesis, processing and finally reuse in the liquid-liquid configuration. To do so, the liquid-liquid interface was miniaturized and modified with the silica material using the conditions elaborated at the macroscopic ITIES, as shown in the following chapter IV. Results described in this chapter are also available in Electrochemistry Communications, 2013, 37, 76 - 79.

Chapter IV. Silica electrodeposition using cationic surfactant as a template at miniaturized ITIES

In the following chapter, the liquid-liquid interface was miniaturized through silicon chips supporting an array of pores. Electrodeposition of silica material at the microITIES was performed with the protocol developed at the macroITIES (described in chapter III). The application of membranes supporting arrays of microITIES allowed the mechanical stabilization of the silica deposits. The electrochemical deposition process, the morphological study of the silica deposits, their spectroscopic characterization, the in situ spectroscopic study of silica formation and finally the electroanalytical evaluation of the silica deposits were investigated in a series of experiments that are described in the following subchapters.

Electrochemical and morphological study of silica deposits at the array of microITIES

The supported microITIES allowed an easy and straightforward silica deposition. The silicon chips (membranes supporting an array of micropores prepared by lithography) additionally served as the support for the silica deposits, which after electrogeneration kept their mechanical stability, so that further characterization and reuse in electroanalysis were possible.
In the following subchapters, parameters such as the concentration of template ions in the organic phase and of precursor species in the aqueous phase, the time scale of the experiment and the influence of the geometry of the array of microITIES used to support the liquid-liquid interface on the silica deposit formation were evaluated based on cyclic voltammetry (CV) results. A morphological study performed with SEM and profilometry based on shear force measurement allowed the morphological characterization of the silica deposits. The effect of calcination was studied with Raman spectroscopy and ion transfer voltammetry.

Surfactant-template assisted Sol-Gel process of silica at the microITIES

A typical CV recorded during silica deposit formation at the microITIES (membrane design number 3, Table 2.2), with the silica precursor in the aqueous phase and the template in the organic phase respectively, was analyzed. During silica electrogeneration the interface was polarized from +600 mV down to -100 mV on the forward scan (the polarization direction was chosen in agreement with the positive charge of the cationic template species, which, in order to trigger the silica condensation reaction, had to be transferred from the organic to the aqueous phase, as illustrated by scheme 1). The information extracted from the shape of the reverse peak confirms the participation of the CTA + species in the silica material formation on the aqueous side of the liquid-liquid interface and indicates that the back transfer is not a diffusion-limited process. After one cycle, the array of microITIES is completely covered with the silica material, as illustrated by scheme 3.

Factors affecting silica deposition at the array of microITIES

In the following part, the influence of parameters such as: (i) the template and precursor initial concentrations, (ii) the time scale of the experiments - i.e.
scan rate influence - and (iii) the different geometries of the array of micropores supporting the liquid-liquid interface on the silica material deposition is discussed.

Influence of [CTA + ] org and [TEOS] aq

Silica deposits are formed only when CTA + is transferred from the organic to the aqueous phase containing the silica precursor - hydrolyzed TEOS. It is thus not surprising that the composition of both contacting phases is likely to affect the deposition process. The experimental conditions were chosen in such a way that each pore was independent of the others. The membrane design number 3 (see Table 2.2), whose 100 µm center-to-center distance was enough to avoid the overlap of radial diffusion layers, and a moderate scan rate (5 mV/s) were employed for this purpose. The results of varying the initial concentrations indicate that [CTA + ] org is the limiting factor for the silica deposit formation reaction and that [TEOS] aq has no influence on the reaction rate (contrary to mesoporous silica films at solid electrodes derived from the surfactant-assisted Sol-Gel process of silica). 97,98

Influence of the pore center-to-center distance

For microelectrode arrays or microITIES arrays, the shape of the current-potential curves and the limiting currents are highly affected by the center-to-center distance (noted as the spacing factor, S) between two microelectrodes or microITIES. A change in the spacing factor allows the control of the planar and radial diffusion contributions affecting the mass transfer of the ions.
Davies and Compton [START_REF] Davies | The Cyclic and Linear Sweep Voltammetry of Regular and Random Arrays of Microdisc Electrodes: Theory[END_REF] have divided microdisc arrays into four main groups based on their geometrical size with respect to the size of the diffusion zones: (i) δ < r, δ < S; (ii) δ > r, δ < S/2; (iii) δ > r, δ > S/2 and (iv) δ > r, δ >> S, where r is the pore radius, δ is the diffusion zone radius and S is the pore center-to-center distance (see Figure 4.5; schemes (i) to (iv) represent the different groups of microITIES arrays divided with respect to the size of the diffusion zones). The terminology proposed by Compton and Davies can be applied to describe the results obtained during silica deposit formation at arrays of microITIES supported by different membranes (designs number 1, 2, 3 and 4 were used - see Table 2.2) with a constant pore radius (r = 5 µm) but various spacing factors (20 µm ≤ S ≤ 200 µm). On the reverse scan, a higher energy was required for the back transfer to the organic phase, most probably due to entrapped CTA + species. The intensity and the shape of the CV curves were found to vary, with the negative peak evolving from a slight peak to a wave and then to a bigger peak with increasing potential scan rate. These observations can be rationalized by analyzing the diffusion layer profile for each case. A prerequisite is to know the diffusion coefficient of the transferring species (i.e., CTA + ). Apparently, this value is not available, but one can reasonably estimate it from the literature data available for the closely related octadecyltrimethylammonium chloride (ODTM) compound (only 2 more CH 2 moieties in the alkyl chain with respect to CTA + ). The diffusion coefficient in the aqueous phase for ODTM was found to be around 0.66×10 -6 cm 2 s -1 as determined by pulse gradient spin-echo NMR.
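The four Davies-Compton groups can be expressed as a simple classification rule. The sketch below is only illustrative: the function name and the threshold used to decide when δ >> S (arbitrarily taken here as δ > 3S) are my assumptions, not part of the cited theory. It uses r = 5 µm and S = 100 µm, as for the arrays described above, with a range of illustrative δ values:

```python
def diffusion_category(r_um, delta_um, s_um, heavy_overlap_factor=3.0):
    """Classify a microITIES array following the Davies-Compton groups.

    r_um: pore radius, delta_um: diffusion zone radius,
    s_um: pore center-to-center distance (all in the same unit, e.g. um).
    """
    if delta_um < r_um and delta_um < s_um:
        return "i"    # thin diffusion layers, planar diffusion at each pore
    if delta_um < s_um / 2:
        return "ii"   # independent radial diffusion zones
    if delta_um < heavy_overlap_factor * s_um:
        return "iii"  # partially overlapping diffusion zones
    return "iv"       # heavily overlapped zones, planar diffusion over the array

# r = 5 um pores, S = 100 um spacing, illustrative diffusion zone radii
for delta in (3, 40, 90, 630):
    print(delta, "um ->", diffusion_category(5, delta, 100))
```

The boundary between groups (iii) and (iv) is gradual in practice, which is why the cut-off factor has to be chosen arbitrarily.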
211 Taking into account the difference in alkyl chain length between ODTM and CTA + - two carbon atoms - and the change in medium viscosity - from 1.002 cP (at 20°C) 212 for the aqueous phase to 0.85 cP (at 20°C) 212 for DCE - one would expect 𝐷 𝐶𝑇𝐴 + to be slightly higher than 𝐷 𝑂𝐷𝑇𝑀 . Consequently, 𝐷 𝐶𝑇𝐴 + is expected to be around a reasonable value of 1×10 -6 cm 2 s -1 .

Influence of the scan rate

From such a 𝐷 𝐶𝑇𝐴 + value, one can estimate the diffusion layer thickness at various scan rates on the basis of the following equation:

δ = √2𝐷 𝐶𝑇𝐴 + 𝑡 (3.1)

The estimated thicknesses of the diffusion layers of CTA + for each scan rate are thus roughly: 630 µm for 0.1 mV/s, 200 µm for 1 mV/s, 90 µm for 5 mV/s, and 63 µm for 10 mV/s. The diffusion layer on the organic side of the liquid-liquid interface can be considered as:

δ = δ 𝑙 + δ 𝑛𝑙 (3.2)

where δ 𝑙 is the linear diffusion layer inside the pores of the membrane and δ 𝑛𝑙 corresponds to the diffusion zone at the pore ingress on the organic side. For the highest scan rate - 10 mV/s - the diffusion layer thickness, δ l , is less than the membrane thickness - d = 100 µm - (see scheme (i) on Figure 4.5) and hence the transfer inside the pore was limited only by linear diffusion (resulting in a peak-like response). In this case, not enough CTA + species were transferred from the organic to the aqueous phase and no silica deposit formation was observed (verified with the profilometry based on shear force measurements). Consequently, all CTA + species transferred to the aqueous phase on the forward scan were transferred back to the organic medium on scan reversal, as supported by forward and reverse charge transfers of similar magnitude. Longer experimental times would cause an increase in δ nl exceeding half of the pore center-to-center distance (S/2) on the basis of the estimated δ value (i.e., 200 µm in this case), leading to the overlap of radial diffusion zones, which should give a peak-like response.
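Equation 3.1 can be checked numerically. In the sketch below, the effective electrolysis time for each scan rate is an assumption: it is taken as the time needed to sweep a ~200 mV window (roughly the potential region over which the CTA + transfer proceeds), which reproduces the thicknesses quoted above:

```python
import math

D_CTA = 1e-6  # cm^2/s, estimated diffusion coefficient of CTA+

def diffusion_layer_um(t_s, d_cm2_s=D_CTA):
    """delta = sqrt(2*D*t) from Eq. 3.1, returned in micrometers."""
    return math.sqrt(2 * d_cm2_s * t_s) * 1e4  # cm -> um

for v_mv_s in (0.1, 1.0, 5.0, 10.0):
    t = 200.0 / v_mv_s  # s; assumed ~200 mV effective transfer window
    print(f"{v_mv_s} mV/s -> delta = {diffusion_layer_um(t):.0f} um")
```

The computed values (632, 200, 89 and 63 µm) round to the 630, 200, 90 and 63 µm figures given in the text.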
The fact that this is not happening here and that a sigmoidal response was obtained is attributed to the restricted transfer of CTA + across the interfacially synthesized silica material (apparent diffusion coefficient values for the overall process have to be lower than those in solution at an unmodified liquid-liquid interface). Actually, much longer experimental times were necessary (for instance, that corresponding to a potential scan rate as low as 0.1 mV/s) to reach conditions of total diffusion overlap (see scheme (iv) on Figure 4.5).

Morphological study

The morphology of the silica deposits generated at arrays of microITIES under various conditions was analyzed by SEM and profilometry (based on shear force measurement). The micrographs and profilometry images concern deposits generated using the same composition of the aqueous and organic phases. The first observation is that the silica deposits are always formed on the aqueous side of the liquid-liquid interface. A second point is the good quality of the deposits, which have been formed uniformly on all pores; their crack-free appearance after surfactant removal (images were recorded after calcination at 400°C) suggests a good mechanical stability. When using the same support (microITIES number 2) and keeping the scan rate constant (5 mV/s), but increasing the number of cycles from a half scan (first column) to 3 scans (second column), one can notice an increase in both the deposit height (from 1.7 µm to 3.4 µm) and the deposit diameter (from 14 µm to 15.7 µm). Also, there was no significant difference in the deposit diameter between samples prepared from a half and a whole potential cycle, only an increase in the deposit height from 1.7 µm up to 2.7 µm, consistent with the lengthened duration of application of the more negative potentials. All these results confirmed that the electrochemically-induced 𝐶𝑇𝐴 𝑜𝑟𝑔→𝑎𝑞 + transfer was indeed at the origin of the surfactant-templated silica self-assembly process.
However, the deposit height growth was not directly proportional to the number of CV cycles, most probably due to an increased resistance to the 𝐶𝑇𝐴 𝑜𝑟𝑔→𝑎𝑞 + transfer in the presence of much thicker deposits. A more effective way to get massive silica material deposition at the microITIES was to slow down the potential scan rate (which also requires the use of a silicon membrane with larger pore spacing to avoid the overlap of the radial diffusion profiles). This is shown in part 3 of Figure 4.7, which compares: (1) deposits prepared by one linear scan voltammetry (half a CV scan) at 5 mV/s using microITIES number 2; (2) deposits prepared by three successive CV scans at 5 mV/s using microITIES number 2; and (3) deposits prepared by one CV scan at 0.1 mV/s using microITIES number 3. The polarization direction was always from anodic to cathodic potentials. Another way to control the generation of silica deposits was the use of constant-potential amperometry (i.e., chronoamperometry) instead of linear sweep or cyclic voltammetry. Too short experiment times (5-10 s) did not lead to any silica material formation. Then, the amount of deposited material was found, as expected, to increase with the potentiostatic step duration (for example, a 50 s deposition time led to deposits 3.7 µm in height and 14.9 µm in diameter), while extending the electrosynthesis up to 200 s enabled the formation of massive deposits, 8.4 µm in height and 18.6 µm in diameter. As for deposits prepared by cyclic voltammetry, their shape was also found to change from rather flat to hemispherical when increasing the charge passed across the interface, contributing thereby to larger amounts of silica material deposited at the microITIES.
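To put the height and diameter figures above on a common footing, the deposited amount can be compared through an approximate volume. The sketch below treats each deposit as a spherical cap, which is only a rough geometric assumption (the short-time deposits are rather flat):

```python
import math

def cap_volume_um3(height_um, diameter_um):
    """Spherical cap volume: V = (pi*h/6) * (3*a^2 + h^2), a = base radius."""
    a = diameter_um / 2.0
    return math.pi * height_um / 6.0 * (3 * a ** 2 + height_um ** 2)

v_50s = cap_volume_um3(3.7, 14.9)   # 50 s chronoamperometric deposit
v_200s = cap_volume_um3(8.4, 18.6)  # 200 s deposit
print(f"{v_50s:.0f} um^3 vs {v_200s:.0f} um^3 (x{v_200s / v_50s:.1f})")
```

Under this assumption, the 4-fold increase of the deposition time (50 s to 200 s) gives a roughly 4-fold increase of the deposit volume, consistent with a deposition rate limited by the CTA + flux.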
Finally, the variation in shape of the silica deposits resembles the evolution of the diffusion layer profiles (on the aqueous side of the interface): short deposition times correspond to mainly linear diffusion limitations, yet with some radial contribution at the macropore edges, while longer experiments are governed by radial diffusion control. Another, yet minor, point to mention from the morphological analysis is the presence of cubic crystals on some silica deposits (see, e.g., image 3a on Figure 4.7), which are sodium chloride crystals arising from residual background electrolyte back transfer from the organic to the aqueous phase. This phenomenon is similar to that reported by Silver et al. 214 on the basis of protein crystallization experiments at liquid-liquid interfaces. This means that the electroassisted generation of silica material could also be suitable for the entrapment of proteins in the silica matrix (sol-gel silica is indeed known to enable the encapsulation of proteins) 215 as an effective way to immobilize biomolecules at liquid-liquid interfaces. Finally, a very thin additional silica layer covering the whole silicon membrane (including the silica deposits) can also be evidenced from the SEM pictures (see micrographs 1a and 2a in Figure 4.7). This arises most probably from some evaporation-induced condensation of TEOS and silica deposition, even though each electrosynthesis experiment was followed by careful rinsing with distilled water.

Spectroscopic and electrochemical characterization of deposits

Confocal Raman spectroscopy was first used to evidence the formation of a silica network at the microITIES.
It was performed after thermal treatment of the deposits (24 h storage at 130°C followed by 30 min at 400°C) to be able to detect the characteristic vibrational modes of silica without being disturbed by the large signals arising from the organic matter expected to be present in the same spectral region (see Figure 4.11). In addition to the narrow and intense band at 520 cm -1 due to the vibration mode characteristic of the silicon wafer support, one can notice a broad band from 375 cm -1 to 500 cm -1 attributed to the Si-O-Si vibrational mode, and a single band at 710 cm -1 suggesting the presence of terminal Si-OH groups. This strongly supports the formation of silica. Raman spectroscopy can also be used to characterize the organic species that have been incorporated into the silica material during the formation of the deposits, as well as to check the effectiveness of their removal upon calcination. This is shown on Figure 4.12 B, where the broad and intense signal located in the 2760 - 3020 cm -1 region, corresponding to the -CH 2 - stretching mode of the long alkyl chain of CTA + species, 216 confirms the presence of the surfactant template in the material. This signal disappears almost completely after heat treatment, indicating the successful calcination of the organic template. From Figure 4.12 B, one can also notice a broad signal of weak intensity at 3060 cm -1 , which can be ascribed to the C-H stretching mode of aromatic rings, which are present in the organic electrolyte (BTPPA + TPBCl - ), suggesting that some of these ions have been co-encapsulated in the silica material in addition to the surfactant template. This is best shown on Figure 4.12 A, where the typical signature of BTPPA + TPBCl - is seen via the bands located in the 600 to 1800 cm -1 region 217 (for instance, the peak at around 1000 cm -1 is due to the vibration of the aromatic rings).
203 After calcination, all these bands disappeared, which indicates the complete removal of this organic electrolyte from the deposits. One can also notice from Figure 4.12 A two broad peaks of weak intensity in the region from 1250 to 1750 cm -1 , which appeared upon calcination and actually represent the D and G bands of traces of amorphous carbon arising from the thermal decomposition of the organic molecules. 218 These results support that the mesopore channels have been liberated from most of their organic content, even if the quantitative analysis of the porosity by the classical gas adsorption method was not possible due to the too low amount of available material (too small deposits). In order to better evaluate the permeability of the µITIES modified with silica deposits before and after calcination, a model interfacially active cation was employed. The transfer of 𝑇𝑀𝐴 𝑎𝑞→𝑜𝑟𝑔 + through the microITIES modified with the silica deposits was clearly observed after removal of the surfactant template (Figure 4.13 B - red curve), confirming the good permeability of the mesoporous silica deposits, whereas negligible transfer (a possible masking effect of the capacitive current cannot be excluded) occurred before calcination (Figure 4.13 B - black curve). The corresponding CV curve was characterized by a sigmoidal wave when transferring from the aqueous to the organic phase and a peak-like response on scan reversal (due to the restricted amount of TMA + available for back transfer). The absence of significant 𝑇𝑀𝐴 𝑎𝑞→𝑜𝑟𝑔 + transfer prior to template removal also confirms the good quality of the deposits and the fact that all - or the greatest part of - the macropores of the silicon membrane have been successfully modified with the silica deposits.

Conclusion

The miniaturized liquid-liquid interface supported by an array of micropores can be easily modified with mesoporous silica deposits through the surfactant-assisted Sol-Gel process of silica. The experimental time scale - i.e.
scan rate - was shown to have a great effect on the voltammetric characteristics recorded during deposition, as well as on the shape of the silica deposits, which tend to grow towards the bulk of the aqueous phase, taking the shape of the diffusion layer profile of CTA + on the aqueous side of the liquid-liquid interface. The spacing factor between two neighboring pores in the different microITIES geometrical arrays was also shown to affect the deposition process. The rate-limiting role of CTA + found at the macroscopic ITIES was also confirmed with the miniaturized system. The mesoporous character of the silica was confirmed by TEM. The effect of calcination - removal of the organic species blocking the interior of the pores - was followed by confocal Raman spectroscopy and ion transfer voltammetry of TMA + . The system optimized in this section can be used for further evaluation in electroanalysis, as shown in section 4.4 of the following chapter. The results of this section can also be found in Langmuir, 2014, 30, 11453 - 11463.

In situ confocal Raman spectroscopy study of interfacial silica deposition at microITIES

In general, electrochemistry at the ITIES allows the estimation of thermodynamic, kinetic and charge transfer parameters, but precise quantitative information about the interfacial region has to be supported by other methods, especially when very precise molecular information about the system is required. Spectroscopic methods (both linear and non-linear techniques) can be coupled to electrochemistry at the ITIES. 220 At the liquid-liquid interface, the biggest difficulty is the separation of the signal originating from the bulk from the signal originating from the interfacial region. This can be overcome by fulfilling the condition of total internal reflection.
220 The first studies of interfacial processes by spectroelectrochemistry were made using linear techniques, which include UV-visible volt- and chronoabsorptiometry, 221,222,223 volt- and chronofluorimetry 224,225 and reflectance spectroscopy. 226 In contrast, non-linear techniques (sum frequency spectroscopy and second harmonic generation) have provided information about the molecular structure of the liquid-liquid interface, as they are surface specific. [START_REF] Wang | Generalized Interface Polarity Scale Based on Second Harmonic Spectroscopy[END_REF] A confocal Raman microspectrometer can also be used to focus the analysis at the interface and provide useful interfacial information. A Raman spectroscope with a spatial resolution of 0.5-1 µm has been used to investigate electron transfer reactions at solid microelectrode arrays. 227,228 Raman spectroscopy has also been used for the characterization of reactions occurring at the neat liquid-liquid interface. 217,229,230,231,232,233 Interfacial reactions 229 and metallic nanoparticle self-assembly 230,231,232,217 at the liquid-liquid interface were also reported to be monitored by Raman spectroscopy. However, the investigation of reactions at the liquid-liquid interface controlled by electrochemical means has scarcely been studied. Indeed, it was only recently that surface enhanced Raman scattering was used to study the potential-dependent agglomeration of silver nanoparticles at the water - 1,2-dichloroethane (DCE) interface. 217 More recently, the interfacial transfer of the ferroin ion and the heterogeneous electron transfer between dimethylferrocene (from the organic phase) and hexacyanoferrate (II/III) anions (from the aqueous phase) were followed by confocal Raman spectroscopy.
234 The electrodeposition of gold nanoparticles formed at a three-phase junction has also been followed in this way.

Raman spectroscopy analysis of the liquid-liquid interface at open circuit potential

Additionally, the chloromethyl groups of DCE give two overlapping peaks at around 3034 cm -1 (weak intensity), assigned to the antisymmetric CH 2 stretching mode, and 2987 cm -1 (very strong intensity), assigned to the corresponding symmetric stretching mode. 236 When the organic electrolyte was added to the organic phase, three additional peaks were observed, attributed to BTPPA + and TPBCl - . The intensity of the peak at 1000 cm -1 , attributed to BTPPA + , remained constant upon addition of 10 mM CTA + TPBCl - to the organic phase and then dropped for 53 mM CTA + TPBCl - in the organic phase, suggesting that the attribution of this vibrational mode to BTPPA + was correct. The addition of CTA + TPBCl - to the organic electrolyte solution also gave rise to another peak at 349 cm -1 , which could be attributed to low-frequency deformation modes of the CTA + alkyl chains. 203 These experiments demonstrated that the organic electrolyte and surfactant ions remained in the organic phase at open-circuit potential.

Ion transfer followed by Raman spectroscopy

Previous works have demonstrated that the application of a negative potential difference can cause a displacement of the interface with ingress into the aqueous phase. 213,237 The influence of the polarization potential on the position of the interface was then investigated by recording Raman spectra under various negative interfacial potential differences. The interfacial potential difference was held for times sufficiently long (typically 4 minutes) to collect a full Raman spectrum from 200 to 3200 cm -1 . Spectra were normalized to the silicon peak at 520 cm -1 (arising from photons scattered by the silicon membrane). The interfacial potential difference was varied from -200 mV down to -800 mV.
At these potentials, BTPPA + ions were transferred from the organic phase to the aqueous phase, as demonstrated by the blank CV shown on Figure 4.17 D.

Figure 4.17: A - full Raman spectra and B - Raman spectra in the region from 975 cm -1 to 1100 cm -1 recorded at the microITIES between a 5 mM NaCl aqueous phase solution and a 10 mM BTPPA + TPBCl - organic phase solution under different negative polarizations. For A and B, the dashed line corresponds to the Raman spectrum recorded at open circuit potential. The dotted line in A was recorded under a negative polarization potential of -300 mV, whereas the solid line was recorded at -800 mV. The spectra in B were recorded from +200 mV down to -800 mV. C - peak intensity (after normalization to 520 cm -1 ) as a function of applied potential; empty squares correspond to the peak at 1002 cm -1 while filled circles correspond to the peak at 1078 cm -1 (in that case, error bars are too small to notice); the open circuit potential was 200 mV. Raman peaks were assigned to BTPPA + , DCE and TPBCl - as marked. D - blank voltammogram recorded prior to spectra collection at a scan rate of 5 mV/s.

The bands (at 2847, 2877, 2961 and 3005 cm -1 ) assigned to the vibrational modes of DCE decreased in intensity, which would confirm that the organic phase did not ingress into the aqueous phase (Figure 4.17 A). This is supported by the increase of the band intensity at 1002 and 1029 cm -1 related to BTPPA + ions (Figure 4.17 C - open squares). Indeed, more and more BTPPA + ions were transferred as the interfacial potential difference became more and more negative (i.e., from 200 mV down to -800 mV). Furthermore, at such negative interfacial potential differences, the transfer of anions from the organic side of the interface (TPBCl - ) was not expected. The band intensity at 1078 cm -1 dropped as the interfacial potential difference varied (Figure 4.17 C - filled black circles).
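The normalization to the 520 cm -1 silicon band used for these spectra is a simple post-processing step. A minimal sketch is given below; the function name and the ±15 cm -1 search window are my assumptions, as the exact procedure is not specified here:

```python
def normalize_to_silicon(shift_cm, intensity, center=520.0, half_width=15.0):
    """Scale a Raman spectrum so the Si band near 520 cm-1 has unit intensity."""
    # Reference intensity: highest point within the window around 520 cm-1
    reference = max(i for s, i in zip(shift_cm, intensity)
                    if abs(s - center) <= half_width)
    return [i / reference for i in intensity]

# Tiny synthetic spectrum: the 520 cm-1 point becomes 1 after normalization
print(normalize_to_silicon([500, 510, 520, 530], [1.0, 2.0, 4.0, 2.0]))
```

Normalizing to an internal reference band of the support makes spectra recorded at different potentials directly comparable, since it cancels fluctuations of laser power and collection efficiency.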
These experiments demonstrated that the variations in the Raman spectra were due to ion transfer caused by the application of a negative interfacial potential difference; a displacement of the liquid-liquid interface was excluded.

Interfacial silica deposition followed by Raman spectroscopy

The electrochemically assisted assembly of surfactant-templated silica at the microITIES was followed by Raman spectroscopy. A Raman spectrum was first recorded at open circuit potential (ocp = +200 mV) and it showed all the characteristic vibration bands reported on Figure 4.18. After this control experiment, the interfacial potential difference was linearly swept (at 5 mV/s) from +600 mV down to -100 mV, and the potential was held at -100 mV while a second Raman spectrum was recorded. Next, the potential was swept back from -100 mV to +600 mV, where a third Raman spectrum was recorded while holding the potential at +600 mV; the same operation was repeated once more to obtain the 4th and 5th Raman spectra, which are shown on Figure 4.18. The region from 2800 to 3100 cm-1 showed the most dramatic variations upon voltammetric cycling. This region corresponds to the C-H stretching modes, which originated from either DCE, BTPPA+, TPBCl- or CTA+ ions. The presence of CTA+ after the first scan was confirmed by the vibrational contribution of its long alkyl chains, in particular the CH2 symmetric stretching at 2851 cm-1, the CH3 symmetric stretching at 2874 cm-1 and the CH3 asymmetric stretching mode at 2935 cm-1. Additionally, BTPPA+ and TPBCl- ions gave rise to two overlapping bands from 3025 to 3080 cm-1. The band centered at 3044 cm-1 originated most probably from BTPPA+ and arose from the five aryl C-H bonds in the aromatic rings, whereas the latter at 3064 cm-1 can be assigned to monosubstituted aromatic rings, as in the case of the chlorophenyl substituent in TPBCl-. The other peaks associated with BTPPA+ at 1002 and 1025 cm-1 and with TPBCl- at 1078 and 1115 cm-1 also increased.
If the growth of the characteristic peaks assigned to CTA+ vibrational modes was expected, the increase of those related to the ions of the organic phase electrolyte was more surprising, although possible, since silica is known to be an attractive adsorbent allowing co-adsorption of CTA+ and species containing aromatic rings. 238 Indeed, the spectra presented on Figure 4.18 showed that these ions were barely visible at open circuit potential. Furthermore, the peak intensities of the vibrational modes of BTPPA+ did not grow before a potential difference of -300 mV was reached, whereas TPBCl- did not transfer at all at such a negative interfacial potential difference (see Figure 4.17 C). A possible explanation for such an increase in the intensity of the bands characteristic of these two compounds is that they can be trapped in the silica-surfactant matrix, which is being formed by self-assembly between condensing TEOS and CTA+ transferred to the aqueous phase, via favorable electrostatic interactions (i.e., between the TPBCl- anions and CTA+ cations, and between BTPPA+ cations and the negatively charged silica surface). Such a hypothesis was notably supported by previous observations made for CTA+-based mesoporous silica materials, for which the final composition implied the presence of ionic species in addition to the surfactant and silica. 239,240 Unfortunately, no direct in situ evidence of the interfacial silica material formation was found, since no band for the Si-O-Si vibrational mode was observed during the acquisition time used in the in situ experiments. This might be due to the fact that longer acquisition times were required to observe the Si-O-Si vibrational mode in the 450 to 500 cm-1 region 185 for silica deposits; if existing here, this signal was too weak to be visible with respect to the other ones or to the noise of the Raman spectra.
Another point is that the condensation kinetics of sol-gel-derived silica are usually rather slow (i.e., a low condensation degree for the material generated in the synthesis medium, as pointed out by in situ NMR experiments), 241,242 and subsequent heat treatment is required to achieve a high degree of cross-linking of the silica network. Actually, the observation of the silica vibration band for silica deposits electrogenerated at the microITIES was possible, but only after calcination at 400 °C for 30 minutes (see Figure 4.11). Another indirect evidence of silica formation can be seen by following the evolution of the Si-Si band at 520 cm-1 (see Figure 4.18 A). Indeed, mesoporous materials (e.g. mesoporous silica) can serve as a waveguide for light. 243 The amount of silica material generated at the microITIES grew with the number of scans, and hence disturbed the system by guiding laser photons through the silica deposits to the silicon wafer supporting the microITIES. The waveguide phenomenon (due to elastic light scattering by the silica formed) was observed as an increase in the Si-Si band intensity. The same phenomenon could be responsible for the disappearance of all bands originating from DCE once silica is being formed at the liquid-liquid interface. The Raman shifts of the particular molecular contributions used to describe all recorded spectra from this work are summarized in Table 4.1.

Conclusions

Confocal Raman spectroscopy, with a spatial resolution of around 1 µm, allowed the study of molecular changes at the miniaturized ITIES induced by electrochemically driven ion transfer and silica deposition reactions. The transfer of the cationic part of the organic phase supporting electrolyte (BTPPA+), induced by negative polarization, could be followed by confocal Raman spectroscopy after a precise assignment of each typical band (based on data available in the literature and a series of reference measurements).
The high spectral resolution of usual Raman spectrometers and the sharpness of Raman peaks were of great help in this regard. Raman spectroscopy also offers the opportunity to monitor vibrational modes in the aqueous phase, which is more difficult in infrared absorption spectroscopy (though not impossible using dedicated techniques like attenuated total reflection (ATR) sampling). The confocal microscope also enabled a fine localization of the liquid-liquid interface, which was mandatory in this kind of study. Besides, compared to non-linear optical techniques like second harmonic or sum-frequency generation, the laser irradiance can be kept low enough to avoid any disturbance of the fragile interface during the formation of the silica deposits. No direct evidence was found for Si-O-Si formation during the in situ silica formation study with Raman spectroscopy (it was shown above that such a band can be found for cured deposits). Silica formation was confirmed indirectly, as the interfacial molecular characteristics changed dramatically upon interfacial polarization in the presence of template molecules in the organic phase and silica precursor in the aqueous phase. Additionally, it was found that the organic supporting electrolyte is also involved in the silica formation mechanism, as the signals arising from its vibrational modes were also present in the recorded Raman spectra. The experimental set-up developed in this work could also be used to follow the in situ functionalization of silica deposits via the co-condensation of silanes and organosilanes. The results from this part of the work can also be found in Phys. Chem. Chem. Phys., 2014, 16, 26955-26962.

Electrochemical evaluation of microITIES modified with silica deposits

In the following section, the silica deposits electrogenerated at the liquid-liquid interface supported by membrane design number 3 (30 pores, each having 5 µm in radius; for more details refer to Table 2) are characterized by ion transfer voltammetry.
Blank experiment before and after modification

The potential window was determined by the transfer of the supporting electrolyte ions dissolved in each phase, which resulted in a current rise on both sides of the CV. There was no significant impact of the silica deposits on the potential window width. At the negative end, the potential window was limited by the transfer of Cl-, which crosses the interface at a higher potential than BTPPA+ ions. Indeed, previous studies have shown that the Galvani half-wave potential for Cl- is Δφ1/2 = -530 mV, [START_REF] Sabela | Standard Gibbs Energies of Transfer of Univalent Ions From Water To 1,2-Dichloromethane[END_REF]245 whereas the Galvani potential for BTPPA+ is Δφ1/2 = -700 mV. 245 The peak observed at Δφ = -450 mV was then attributed to the back-transfer of Cl- from the organic to the aqueous phase, whose diffusion was confined inside the microITIES pores and hence was linear. At the positive end of the potential window, the current was limited by the transfer of the anions of the organic electrolyte salt. Indeed, the absence of a peak on the reverse scan suggested a radial diffusion in the aqueous phase, which would correspond to the back-transfer of TPBCl- from the aqueous to the organic phase. Unfortunately, the standard Galvani potential for the transfer of TPBCl- has never been determined 245 and one can only assume that it is lower than the one for Li+, which is Δφ1/2 = +580 mV. 246

Single charge ion transfer before and after modification

The electrochemical behavior in the presence of silica deposits of three tetraalkylammonium cations of different sizes (TMA+, TEA+ and TBA+; see Figure 4.20 A) and of the negatively charged 4OBSA- (see Figure 4.20 B) was first studied. The electrochemical behavior of each analyte was investigated in the absence (black curves) and in the presence (red curves) of silica deposits at the microITIES.
All ions were initially present in the aqueous phase at a concentration of 56.8 µM, and the forward polarization was selected in agreement with the charge of the transferring species (towards positive potentials for the tetraalkylammonium cations and towards negative potentials for 4OBSA-). The cyclic voltammograms of all four species showed a sigmoidal forward signal (in agreement with a hemispherical diffusion zone on the aqueous side of the interface) and a peak-like reverse response (indicative of a linear diffusion limitation inside the microITIES pore channels filled with the organic phase). The voltammograms are shown on the Galvani potential scale, based on the transfer peak of Cl-. The presence of silica deposits has an impact on both the ion transfer potential and the current intensity. For all four ions studied here, the presence of silica deposits at the microITIES made the transfer more difficult, as the ion transfer potentials were shifted negatively by 22 mV for TMA+, 26 mV for TEA+, 46 mV for TBA+ and by 50 mV for 4OBSA-. Based on the potential shift, the difference in the Gibbs energy of transfer, ΔΔG, could be estimated:

ΔΔG = ΔG(before modification) - ΔG(after modification)   (3.3)

The difference for 4OBSA- was higher than for any of the cations tested, even larger than for TBA+, whose hydrodynamic radius is almost twice as big (r_h(4OBSA-) = 0.30 nm and r_h(TBA+) = 0.48 nm). 247 In spite of similar hydrodynamic radii, the behaviors of TEA+ (r_h = 0.28 nm 247) and 4OBSA- were quite different, with a shift of 2.5 kJ mol-1 for TEA+ and 4.8 kJ mol-1 for 4OBSA-. A similar trend was observed for the ion transfer current, with a drop caused by the interface modification of 59% for 4OBSA- and 48% for TEA+. This behavior can be attributed to electrostatic repulsion between the negative net charge of the silica surface (arising from deprotonated terminal OH groups located on the edges of the silica walls, as the point of zero charge is 2) 248 and the anionic 4OBSA-.
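The estimate in eq. (3.3) follows from the standard relation between the half-wave potential shift and the Gibbs energy of transfer, ΔΔG = |z| F Δ(Δφ1/2). A minimal sketch reproducing the values quoted above (2.5 kJ/mol for TEA+ and 4.8 kJ/mol for 4OBSA-) from the measured shifts; the function name is an illustrative choice:

```python
F = 96485.0  # Faraday constant, C/mol

def gibbs_shift_kj(z, shift_mV):
    """Shift of the Gibbs energy of transfer estimated from the shift of
    the half-wave transfer potential: ddG = |z| * F * dE, returned in kJ/mol."""
    return abs(z) * F * (shift_mV / 1000.0) / 1000.0

# Half-wave potential shifts measured after silica modification (from the text)
shifts_mV = {"TMA+": 22, "TEA+": 26, "TBA+": 46, "4OBSA-": 50}
ddG = {ion: gibbs_shift_kj(1, dE) for ion, dE in shifts_mV.items()}
# ddG["TEA+"] -> ~2.5 kJ/mol and ddG["4OBSA-"] -> ~4.8 kJ/mol
```

The same conversion applied to TMA+ and TBA+ gives about 2.1 and 4.4 kJ/mol respectively, consistent with the size trend discussed above.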
Multicharged ion transfer before and after modification

Dendrimers are the second family of species studied in this work. Large multi-charged species give a complex electrochemical response at the ITIES, which varies with the dendrimer generation. They can undergo either interfacial adsorption [START_REF] Herzog | Electroanalytical Behavior of Poly-L-Lysine Dendrigrafts at the Interface between Two Immiscible Electrolyte Solutions[END_REF] or ion transfer [START_REF] Berduque | Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces[END_REF][START_REF] Herzog | Electroanalytical Behavior of Poly-L-Lysine Dendrigrafts at the Interface between Two Immiscible Electrolyte Solutions[END_REF]. Furthermore, PAMAM dendrimers at the electrified liquid-liquid interface were employed as encapsulating agents for smaller porphyrin molecules 249 or molecular organic dyes. 250 The complex behavior of dendrimer-guest molecular associations, studied with cyclic voltammetry coupled with spectroscopic methods, indicated that the ion transfer reaction is accompanied by an interfacial adsorption process, well pronounced by the rapid current drop on the reverse peak. 249 Prior to interfacial modification, both dendrimers (G0 and G1) underwent a simple ion transfer reaction, which is also the case at the macroscopic ITIES.
[START_REF] Berduque | Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces[END_REF] Modification of the microITIES with silica led to the following results: (i) a lower forward current for both dendrimers, slightly more pronounced for PAMAM G1 (a 51% loss of forward current compared with a 29% loss for PAMAM G0); (ii) a shift of the forward and reverse transfer potentials towards more negative potentials, in contrast to the series of tetraalkylammonium cations; and (iii) an unaffected back-transfer peak shape for PAMAM G0 after modification, i.e. a diffusion-limited peak was still observed, suggesting that the current is due to an interfacial transfer reaction. The behavior of PAMAM G1 is more complex, as the peak shape was changed. These phenomena (i, ii and iii) may have two origins. First, on the forward scan, ion transfer can be affected by electrostatic interactions between the positively multi-charged dendritic molecules and the negative net charge of the silica (terminal OH groups inside the mesopores), which led to charge screening and resulted in a lower current. Second, the peak on the reverse scan (especially visible for PAMAM G1) was probably affected by an adsorption process since, in addition to the negative peak potential shift, the current drop was more rapid and the separation between the forward and reverse signals increased from 64 mV to 114 mV (factors that are a fingerprint of macromolecules exhibiting adsorption behavior at the ITIES) [START_REF] Herzog | Electroanalytical Behavior of Poly-L-Lysine Dendrigrafts at the Interface between Two Immiscible Electrolyte Solutions[END_REF][249][250][251].
In the case of PAMAM G1 adsorption, the facilitated transfer of the organic electrolyte counter-ion cannot be excluded and may contribute an additional portion of faradaic current on the forward scan, as was shown for proteins [START_REF] Scanlon | Voltammetric Behaviour of Biological Macromolecules at Arrays of Aqueous|organogel Micro-Interfaces[END_REF][START_REF] Herzog | Electrochemical Behaviour of Haemoglobin at the Liquid/liquid Interface[END_REF] and synthetic dendrimers [START_REF] Herzog | Electroanalytical Behavior of Poly-L-Lysine Dendrigrafts at the Interface between Two Immiscible Electrolyte Solutions[END_REF] studied at the ITIES.

Figure 4.24: ... 4OBSA-; 5 - PAMAM G0 and 6 - PAMAM G1.

As already noted, the modification caused a decrease of the faradaic current for each of the studied ions. On the forward scan, the sigmoidal wave current increased linearly for the three tetraalkylammonium cations and 4OBSA- over the whole concentration range studied. This was observed both in the presence and in the absence of silica deposits. The behavior was different for multi-charged species: the forward transfer current reached a plateau at the higher dendrimer concentrations (see calibration curves on Figure 4.24, 5c and 6c). The analytical parameters of each ion (sensitivity, limit of detection and apparent diffusion coefficient within the silica deposits) in the presence and in the absence of silica at the microITIES were extracted from these calibration curves (Table 4.2). The limiting current at an array of microITIES is given by:

i_lim = 4 n r z_i D F C_i   (3.5)

where n is the number of microITIES in the array (n = 30), r is the microITIES radius (r = 5 µm), z_i is the charge of the transferred species, D is the diffusion coefficient, F is the Faraday constant (96485 C mol-1) and C_i is the concentration. Theoretical calibration curves in the absence of silica deposits were fitted using eq. (3.5). There is a good correlation between the sensitivity calculated from the slope of the theoretical fitting using Eq.
(3.5) and the sensitivity measured experimentally before modification, S0, for TMA+ and TEA+, and to a lesser extent for TBA+, PAMAM G0 and G1. The impact of the interface modification on the sensitivity, S, is shown for the different ions studied as a function of the product of their charge, z_i, and their diffusion coefficient, D_i (Figure 4.25). For singly charged species, the impact of the interface modification on the sensitivity ratio S/S0 (S0 is the sensitivity before modification) was lower for the smaller ion TMA+ than for the larger TBA+. A similar trend was observed for the dendrimers, where the impact on the sensitivity ratio was greater for PAMAM G1 than for PAMAM G0; the sensitivity of the larger molecule (PAMAM G1) was more affected than that of the smaller one (PAMAM G0). Nevertheless, the impact on S/S0 is higher for both of these multiply-charged molecules than for singly charged ions, which can be explained by stronger interactions between the negatively charged silica walls and the multiple charges of the PAMAM dendrimers.

Electroanalytical properties of microITIES modified with silica deposits

The limit of detection (LOD) was calculated from the linear fit using the equation:

LOD = 3.3 SD / S   (3.6)

where SD is the standard deviation of the intercept and S is the sensitivity. The LODs for the ions studied before microITIES modification are in the µM range, with the exception of TBA+, which is in the sub-µM range. The modification with mesoporous silica did not significantly impact the LODs, since their values remained in the same range (see Table 4.2). According to eq. (3.5), the difference in sensitivity before and after modification can be explained by an impeded diffusion of the charged species through the mesoporous silica, leading to apparent diffusion coefficients, D_i', that are lower than the diffusion coefficients of the species in the bulk solution. From the linear fit of the calibration curves for the modified microITIES using eq.
(3.5), the D_i' for all studied species was extracted, assuming that the microITIES radius remained unchanged in the presence of silica deposits (the deposits are flat at the bottom and preferentially filled with the aqueous phase). The D_i' calculated for all ions are shown in Table 4.2: a smaller drop for TMA+ and a drop of about 50% for TEA+, TBA+ and 4OBSA- were observed in the presence of silica deposits. These estimations correlate with previous studies. The diffusion coefficient of TEA+ within a zeolite-Y-modified ITIES (with an aperture diameter of 7.4 Å) was two orders of magnitude lower than in the aqueous phase. 188 TEM images showed that the mesopore dimensions of the silica material from this work were significantly larger than those of zeolite Y, and hence the effect on D_i' is considerably smaller than the effect on the diffusion coefficient through zeolite Y.

Notes to Table 4.2: r_h is the hydrodynamic radius, D_i is the aqueous diffusion coefficient, D_i' is the apparent diffusion coefficient determined experimentally, z_i is the charge, S_theor corresponds to the theoretical sensitivity based on eq. (3.5), S0 corresponds to the sensitivity at a bare microITIES and S to the sensitivity at a modified microITIES; BM stands for before modification and AM for after modification. *Calculated from the Stokes-Einstein relationship, r_h = k_B T / (6 π η D), where k_B is the Boltzmann constant (1.3807 × 10-23 m2 kg s-2 K-1), T is the temperature (293 K), η is the aqueous phase viscosity (0.89 cP) and D is the diffusion coefficient (cm2 s-1). **Calculated from the calibration curve fit based on the equation describing the limiting current at an array of microITIES (eq. (3.5)).

Conclusion

Arrays of microITIES were modified with silica deposits based on an electrochemically driven, surfactant (CTA+)-assisted sol-gel process with TEOS as the silica precursor. The electrodeposition and morphological study is described in section 4.2 of this chapter.
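For reference, the three relations used in this characterization (the array limiting current of eq. (3.5), the LOD of eq. (3.6) and the Stokes-Einstein relationship) can be sketched numerically. The calibration data below are hypothetical, and the diffusion coefficients are assumed, literature-typical values (the one passed to the Stokes-Einstein function approximates aqueous TEA+), not values taken from Table 4.2.

```python
import math

F = 96485.0       # Faraday constant, C/mol
K_B = 1.3807e-23  # Boltzmann constant, m^2 kg s^-2 K^-1
T = 293.0         # temperature, K
ETA = 0.89e-3     # aqueous phase viscosity, Pa s (0.89 cP)

def theoretical_sensitivity(z, D, n=30, r=5e-6):
    """Slope of eq. (3.5), i_lim = 4*n*r*z*D*F*C, for an array of n
    microITIES of radius r (SI units): S = i_lim / C = 4*n*r*z*D*F."""
    return 4 * n * r * z * D * F

def lod_from_calibration(x, y):
    """Limit of detection from a linear calibration, eq. (3.6):
    LOD = 3.3*SD/S, where S is the slope and SD is the standard
    deviation of the intercept of the least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    s_res = math.sqrt(sum((yi - slope * xi - intercept) ** 2
                          for xi, yi in zip(x, y)) / (n - 2))
    sd_intercept = s_res * math.sqrt(sum(xi * xi for xi in x) / (n * sxx))
    return 3.3 * sd_intercept / slope

def hydrodynamic_radius(D):
    """Stokes-Einstein relationship: r_h = k_B*T / (6*pi*eta*D)."""
    return K_B * T / (6 * math.pi * ETA * D)

S = theoretical_sensitivity(z=1, D=1e-9)      # A per (mol/m^3), assumed D
r_h = hydrodynamic_radius(0.87e-9)            # ~0.28 nm, close to TEA+
conc = [10.0, 20.0, 30.0, 40.0, 50.0]         # hypothetical, in uM
curr = [0.61, 1.18, 1.82, 2.39, 3.02]         # hypothetical, in nA
detection_limit = lod_from_calibration(conc, curr)  # in uM
```

With these assumed inputs, the fitted LOD lands in the µM range, in line with the orders of magnitude reported in Table 4.2.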
Silica deposits were characterized by ion transfer voltammetry of six analytes of different size, nature and charge: TBA+, TEA+, TMA+, 4OBSA- and PAMAM G0 and G1. In the presence of silica deposits at the ITIES, the transfer currents observed for all species were lowered and the transfer peak potentials were shifted, depending on the nature of the studied analyte. An increase in the size of the tetraalkylammonium cations led to a greater current drop and potential shift (from the slightly affected TMA+ to the clearly changed TBA+). In the case of 4OBSA-, both the current drop and the E1/2 shift were attributed to combined size and charge effects. The electrochemical behavior of the multi-positively-charged dendrimers in the presence of silica deposits was attributed to electrostatic interactions with the negatively charged silica network. Furthermore, the microITIES modification affects the electroanalytical parameters (sensitivity, LOD and apparent diffusion coefficient) differently depending on the size of the transferred species, suggesting the future possibility of selective ion transfer using microITIES modified with functionalized mesoporous silica. The results of this section were published in Electrochimica Acta, doi:10.1016/j.electacta.2015.01.129.

Chapter V. Local pH change at the ITIES induced by ion transfer and UV photolysis

The anodic oxidation of water allows the generation of H+, which can act as a catalyst in the sol-gel process of silica, leading to uniform and compact film formation at the electrode surface. 254 Silica film formation from hydrolyzed alkoxysilane species can also be conducted at high pH values, easily achieved by cathodic water reduction. 255 Such an approach can be extended to the electro-assisted deposition of TiO2, 256 SiO2-TiO2 257 or SiO2-conducting polymer 258 electrode modifiers.
The electro-assisted sol-gel process of silica (at cathodic potentials facilitating OH- evolution), conducted in the presence of structure-driving agents, has led to the formation of well-ordered mesoporous films oriented normal to the electrode surface. [START_REF] Walcarius | Electrochemically Assisted Self-Assembly of Mesoporous Silica Thin Films[END_REF]239 Such properties are of paramount interest in electroanalysis, since a well-defined mesostructure can act as a molecular sieve, which, in the case of silica, can additionally be modified with a number of functionalities. 102 The approach in which H+ or OH- generated at the ITIES indirectly triggers the modification of the liquid-liquid interface has never been studied before. Indeed, during interfacial modification with a polymer 153 or polymer-Au NPs, 156 the generation of protons as a reaction side product was reported but not evidenced experimentally. Interfacially active and photoactive species (known in polymer science as photobase 259 or photoacid 260 generators), whose photolysis may lead to a change in pH, could be applied to trigger pH-sensitive reactions such as the sol-gel process of silica. In this work, the trimethylbenzhydrylammonium (PH+) cation was first synthesized 195 and then used as an interfacially active and photoactive ion whose photolysis led to a local pH decrease. First, the PH+ electrochemical behavior was studied at macro- and microITIES. The photolysis of PH+ was performed under UV irradiation and was followed by electrochemistry, HPLC and mass spectrometry. Interestingly, the transfer and subsequent photolysis of PH+ was not the only source of the pH change: a side reaction at the counter electrode was found to occur, as the reduction of water at negative interfacial polarization potentials led to an increase in pH.
A local pH measurement had to be performed in order to separate the pH increase occurring at the aqueous counter electrode from the pH change occurring at the ITIES. To do so, a Pt microdisc electrode modified with iridium oxide (as the pH electrode) 261,262 was used to probe the pH above the microITIES. Finally, the local pH change was used to trigger the interfacial silica deposition in a liquid-liquid interface configuration where the precursor (TEOS) was initially dissolved in DCE whereas the template (CTA+Br-) was dissolved in the aqueous phase. The methylation of the primary amine with methyl iodide led, under proper conditions, to the quaternary amine. The detailed preparation protocol can be found in section 2.5.7 of chapter II. Proton NMR spectra were recorded for both ADPM and PH+I-. The standard Galvani transfer potential for iodide is -340 mV [START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF][START_REF] Sabela | Standard Gibbs Energies of Transfer of Univalent Ions From Water To 1,2-Dichloromethane[END_REF]245 and hence I- would be expected to cross the interface at the more negative side of the potential window. Unfortunately, showing the potential scale as the standard Galvani potential difference is not possible, since no internal reference was present in the system. Moreover, for the signal with E1/2 at around +550 mV, it was also difficult to study the effect of the capillary geometry on the shape of the signal itself, since it is partially masked by the background electrolyte transfer, and hence the charge of the transferring species remains unknown.
The presence of undesirable species transferring on the positive side of the available potential window required further processing; hence PH+ was precipitated from the post-synthesis mixture with TPBCl- (please refer to section 2.5.4 in chapter II for the protocol of PH+TPBCl- preparation):

PH+I- + K+TPBCl- → PH+TPBCl-↓ + KI   (5.1)

After filtration, the white precipitate was rinsed four times with 50 mL of distilled water (the flask containing PH+TPBCl- and 50 mL of H2O was placed in the ultrasound bath for 10 min each time). After each sonication the sample was centrifuged. Next, the aqueous phase was collected from above the solid PH+TPBCl- and analyzed by ion chromatography. The asymmetry of the voltammetric signal (peak-like on the forward scan, due to linear diffusion inside the pore, and wave-like on the reverse scan, due to radial diffusion at the pore ingress) confirms the occurrence of the PH+ interfacial transfer.

Electrochemical characterization of PH+ transfer at macroITIES

The diffusion coefficient was estimated based on the Randles-Sevcik equation:

I_p = (2.69 × 10^5) z^(3/2) A D^(1/2) C v^(1/2)   (5.2)

where z is the charge, A is the interfacial surface area (2.83 cm2), C is the concentration (50 × 10-9 mol/cm3), D is the diffusion coefficient and v is the scan rate. The obtained value of 6.91 × 10-6 cm2/s is of the same order of magnitude as for interfacially active species having a similar hydrodynamic radius. [START_REF] Samec | Electrochemistry at the Interface between Two Immiscible Electrolyte Solutions (IUPAC Technical Report)[END_REF] The behavior of PH+ at the polarized ITIES can be classified as 'common' for known mono-charged quaternary ammonium cations. Its interfacial transfer can be easily controlled with electrochemical methods such as cyclic voltammetry, which in parallel can be used as an analytical detection method. The abbreviations stand for: UV - ultraviolet, Nu - nucleophile and R - the benzhydryl group. The first mechanism (photo SN1) is expected to give a tertiary amine (:N-(CH3)3) and a carbocation that reacts with the nucleophile and gives R-Nu products.
Photo SN2 leads to a tertiary amine and R-Nu through the formation of an intermediate. In the single electron transfer (SET) mechanism, an electron from the nucleophile is excited by the UV irradiation and can transfer to the aromatic system of the quaternary ammonium cation. The resulting radical then dissociates into the benzhydryl radical and the tertiary amine. When the photolysis of PH+ cations carrying I- or TPBCl- as the counter-ion was studied in water or EtOH, the photo SN1 mechanism was considered to take place (polar protic solvents favor the SN1 mechanism). In the case of the photolysis reaction taking place in DCE, the SET mechanism was very probable. 195 The PH+ photolysis products were studied by HPLC. A series of peaks up to 3 min and an intense peak at 3.14 min were observed and attributed to the products of photodecomposition. A very weak peak was also recorded at around 12.95 min and originates from the remaining PH+ ions. The photolysis of PH+ could also be followed by cyclic voltammetry. The photodecomposition was confirmed and followed with three different techniques: cyclic voltammetry, indicating the disappearance of the interfacially active species in the organic phase upon UV irradiation; chromatography, which allowed the separation of the photolysis products; and finally mass spectrometry, aiming to study the nature of the aqueous phase photolysis products.

Local pH change induced by electrochemical transfer and photodecomposition of PH+ species

To study the effect of PH+ photodecomposition on the aqueous phase pH, the PH+ species were transferred by electrochemistry to the aqueous phase with subsequent UV irradiation. The first pH measurements were performed with a standard pH meter. A pH increase was observed, however not due to photoelectrochemistry but as a result of the side reaction at the aqueous counter electrode, i.e. water reduction (2H2O + 2e- → H2 + 2OH-).
The effect of negative polarization on the aqueous phase pH was further studied. The masking effect caused by the side reaction at the aqueous counter electrode highlighted the need to measure the pH locally. The exact nature of the species marked with ??? (org) was not studied, and it can only be assumed that they arise from reactions of the nitrogen radical which transfer to the organic phase. The standard transfer potential of H+, Δ_org^aq Φ°(H+), at the DCE-water interface is 549 mV 1 and hence, for the applied conditions (the interfacial potential was held well below this value), the back-transfer of the generated protons to the organic phase was not expected. The pH changes observed above the liquid-liquid interface are significant enough to induce the hydrolysis and condensation reactions of TEOS, and hence such a system was employed in this regard, as shown in the following section.

Silica deposition induced by local pH decrease

Silica deposition at the electrified liquid-liquid interface can be triggered by the electrochemical transfer of a cationic surfactant (template) from the organic phase to the aqueous phase containing the already hydrolyzed silica precursor. The template acts both as a catalyst for the silica condensation reaction and as a structure-driving agent. The neat liquid-liquid interface can also be modified with a silica deposit in the 'reversed system', where the precursor is dissolved in the organic phase contacted with an acidic or basic aqueous phase. 185,176 In order to induce porosity in the interfacial silica deposit, a template (usually a cationic surfactant) can be added to the aqueous phase. 177 A similar approach was used in this work. The only difference was the pH of the aqueous phase, which, instead of being fixed (adjusted before the liquid-liquid interface was formed), was controlled by the electrochemical transfer of PH+ (org→aq) with simultaneous UV irradiation.
CVs were recorded at the macroscopic ITIES in the presence of 1 mM CTA+Br- in the aqueous phase and TEOS in the organic phase. In order to trigger silica deposition, the local pH of the aqueous phase above the liquid-liquid interface was decreased by PH+ transfer from the organic phase (the potential of the ITIES was held at +150 mV for 60 minutes) with simultaneous UV irradiation. The photolysis of PH+ indirectly led to proton generation, which catalyzes the hydrolysis reaction of the silica precursor from the organic phase. The potential was held for 60 min with simultaneous irradiation in the presence and in the absence of CTA+ in the aqueous phase. Subsequently, the cell was left for 12 hours. After this time, the silica deposits were collected and cured at 130 °C for 16 hours. The morphological analysis was performed by TEM imaging. 'Worm-like' structures were observed for all three samples, even without CTA+ in the aqueous phase (see TEM image A on Figure 5.15). This finding suggests that a spontaneous formation of mesopores could take place here, as was already shown to occur for aluminosilicate materials prepared via the sol-gel process. 268

Conclusion

In this work, an interfacially active quaternary ammonium cation (trimethylbenzhydrylammonium, PH+) sensitive to UV irradiation was synthesized and characterized electrochemically. The photodecomposition of PH+ in protic solvents was studied and the formation of benzhydrol in the aqueous phase, as one of the products of photolysis, was confirmed. PH+ was then employed to locally affect the pH of the aqueous phase, since its transfer can be controlled by electrochemical means whereas irradiation leads to its decomposition, followed by reactions which affect the aqueous proton concentration. Due to the water reduction taking place at the aqueous counter electrode, the pH change of the aqueous phase could not be followed with a conventional pH measurement.
Iridium oxide modified Pt microdisc electrodes were used to study the change of the pH at the local scale, and the results showed that PH+ transferred and photodecomposed in the aqueous phase increases the proton concentration. This experimental set-up was then used to modify the liquid - liquid interface with silica material. The results included in this chapter are planned to be published in the Electrochimica Acta journal. General conclusions English version The main focus of this thesis was the modification of the electrified liquid - liquid interface with silica material to form molecular sieves. In general, the silica deposition was performed via the Sol - Gel process in the presence of a soft template. Deposition was controlled by electrochemistry at the ITIES. Two categories of ITIES were employed for this purpose: (i) a macroscopic ITIES created in a conventional four electrode electrochemical cell and (ii) microITIES in the form of an array of pores or a single microscopic pore capillary. The macroscopic ITIES was first employed to study the silica deposition mechanism. A cationic surfactant - CTA+ - was used as a template and a catalyst for the silica formation, initially dissolved in the organic phase (10 mM BTPPA+TPBCl- solution in dichloroethane). The silica precursor - TEOS - was hydrolyzed in the aqueous phase (at pH = 3) and was thus separated from the template species. The transfer of CTA+ was controlled with interfacial polarization and was only observed in the presence of hydrolyzed silica in the aqueous phase - this type of reaction is known as facilitated ion transfer. The silica formation was triggered once CTA+ cations were transferred to the aqueous phase - the formation of spherical micelles catalyzes the condensation reaction and acts as a template which structures the silica deposits. General conclusions were made regarding the macroscopic ITIES modification: 1.
The CTA+ transfer reaction was irreversible, as a current drop characteristic of adsorption (associated with the backward CTA+ (aq → org) transfer) was found on the cyclic voltammograms; 2. A silica deposit is already formed at the liquid - liquid interface after one voltammetric cycle run at 5 mV/s; 3. The charge back-transferred to the organic phase increases for the first few cycles and then becomes constant, which indicates that the interfacial region was 'saturated' with the negative charge of condensing silica facilitating the CTA+ (aq → org) transfer; 4. The limiting factor of silica deposit formation is [CTA+]org and its concentration in the diffusion layer on the aqueous side of the liquid - liquid interface; 5. [TEOS]aq did not affect the CTA+ transfer in the studied concentration range (from 50 mM up to 300 mM); 6. The aqueous phase pH, governing the formation of polynuclear silanol species, was found to be optimal in the range from 9 to 10. Moreover, the macroscopic ITIES served for the generation of silica deposits for further characterization. In order to cure the silica deposits collected from the liquid - liquid interface, they were stored overnight in an oven at 130 °C. A set of characterization techniques was employed, suggesting the following: 1. The silica formation was confirmed. Further spectroscopic (infra-red) investigation indicated the presence of CTA+ within the silica network (as expected) and traces of organic electrolyte ions; 2. XPS indicated that TEOS was not totally hydrolyzed, since the C-O bond was detected. Furthermore, based on the XPS results it was assumed that some charge balancing between negatively charged OH- groups (present inside the silica pores) and Na+ can occur; 3. The mesostructure of the silica deposit was confirmed, and the pore center-to-center distance, depending on the polarity of the organic phase and the [CTA+]org concentration, was in the range between 3.7 and 7 nm; 4.
The pores within the silica deposits are of 'worm like' shape, as evidenced by the broad peak on the SAXS patterns and directly on the TEM micrographs. Once the silica deposit electrogeneration was optimized at the macroscopic ITIES, the miniaturization of the liquid - liquid interface was performed. The membrane used to support the ITIES was a silicon wafer with an array of microscopic pores (with radii ranging from 5 µm up to 10 µm) arranged in a honeycomb pattern. Miniaturization improved the electroanalytical parameters of the system (a lower limit of detection due to lower capacitive current, and better sensitivity arising from higher mass transfer) and additionally allowed the formation of mechanically stable silica deposits, which were further characterized and evaluated. The study concerning silica electrodeposition at the array of microITIES can be summarized as follows: 1. The current - potential characteristics recorded during silica formation, affected by the asymmetric diffusion profiles on the two sides of the liquid - liquid interface, did not correspond to a simple ion transfer reaction; 2. The shape of the forward peak (arising from the CTA+ (org → aq) transfer) depended on the scan rate (the diffusion layer thickness on the organic side of the liquid - liquid interface) and on the membrane design used (the pore center-to-center distance). Generally speaking, the process was diffusion limited: the transfer was governed by linear diffusion inside the pores of the silicon wafer (for scan rates > 10 mV/s) and by overlaid diffusion layers at the ingress to the pores from the organic phase (for scan rates < 0.1 mV/s). Overlap of the diffusion profiles on the organic side of the liquid - liquid interface was also observed for the membranes supporting pores with a low spacing factor; 3. Silica deposits always grow towards the bulk of the aqueous phase.
They are formed at the ingress to the pore on the aqueous side of the interface, are flat on the bottom and are filled with silica inside (which excludes any possible interface movement during deposition); 4. The shape of the silica deposits corresponds to the hemispherical diffusion layer of CTA+ in the aqueous phase: for short experimental times they were flat at the top and rounded on the sides, whereas longer experimental times led to the formation of hemispherical 'cups'; 5. The blocking effect of the organic species present inside the pores of the silica deposits was confirmed by ion transfer voltammetry; 6. Calcination allowed the removal of the organic species from inside the mesopores. The empty mesostructure was permeable for the analytes. 5. The in situ study of silica deposition did not evidence Si-O-Si bond formation. Nevertheless, the silica presence was confirmed after the electrogenerated material was cured thermally. A long acquisition time was employed for this purpose. Ion transfer voltammetry of five analytes differing in charge, size and nature was finally employed to electroanalytically evaluate the array of microITIES modified with silica deposits (modified under the optimal conditions elaborated during the previous study). The following observations were made: 1. The ion transfer was affected in the presence of silica deposits for all five analytes studied; 2. The change of the Gibbs energy of transfer before and after the modification, for three tetraalkylammonium cations of different size, was observed to be greater for the largest, TBA+, as compared with the only slightly affected TMA+. Positively and singly charged analytes of greater size required more energy to transfer across the liquid - liquid interface modified with the silica deposits; 3. The transfer of the negatively charged 4OBSA- was also studied, and it was found that the interfacial modification increased the Gibbs energy of transfer in the same manner as for the larger TBA+ cation.
This behavior was attributed to the electrostatic repulsion between the negatively charged anion and the OH groups located inside the silica pore walls; 4. The electrochemical behavior of PAMAM dendrimers at the modified liquid - liquid interface was found to be different from that of the tetraalkylammonium cations. Firstly, the E1/2 for both dendrimers was shifted towards less positive potentials, which means that the presence of silica deposits decreased the amount of energy required to trigger the transfer. Secondly, fingerprints of adsorption for the bigger PAMAM molecule - generation 1 - were found in the presence of silica deposits. Both phenomena were attributed to electrostatic interactions between the positive charge of the dendrimers and the negative net charge of the silica framework. These observations clearly indicate that the presence of silica deposits at the ITIES affects the electrochemical behavior of different analytes in different ways, suggesting that the ultimate goal of this work - sieving properties - is entirely within reach. Further evaluation of the liquid - liquid interface modified with silica material could be performed with larger molecules, for instance larger PAMAM dendrimers or biomolecules. Assessment of the silica deposit through its permeability coefficient (which can be extracted from information collected with SECM) would be of the highest interest. Some effort was made in this direction (see section 7.1); however, further study is needed. The silica electrogeneration was performed on the aqueous side of the liquid - liquid interface, and its morphology was governed by the hemispherical diffusion of CTA+. Interesting properties could be obtained if the interior of the pore, where transfer is limited only by linear diffusion, could be modified in the same manner. The confined geometry inside the miniaturized pores can affect the morphology of the silica deposits, and consequently their sieving properties, in a different way.
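The diffusion-limited growth discussed above can be put in rough numbers: the transient diffusion layer thickness, δ ≈ √(2Dt), quickly exceeds the micrometre-scale pore radius, which is why hemispherical profiles dominate outside the pores. A minimal estimate is sketched below; the diffusion coefficient is an assumed, typical small-ion value, not one measured in this work:

```python
import math

def diffusion_layer_thickness(D_cm2_s, t_s):
    """Transient diffusion layer thickness, delta ~ sqrt(2*D*t), in cm."""
    return math.sqrt(2.0 * D_cm2_s * t_s)

# Illustrative value: D = 5e-6 cm^2/s (typical for a small ion in water).
D = 5e-6
for t in (1.0, 10.0, 100.0):
    delta_um = diffusion_layer_thickness(D, t) * 1e4  # cm -> µm
    print(f"t = {t:>5.0f} s  ->  delta ≈ {delta_um:.0f} µm")
```

Even after one second, δ (~30 µm here) already exceeds the 5-10 µm pore radii of the silicon membrane, consistent with the hemispherical 'cup' morphologies observed at long deposition times.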
The selectivity of the ITIES modified with silica deposits can also be improved by chemical functionalization. The Sol - Gel process allows the permanent introduction of organic groups into the silica framework via co-condensation of organosilanes with alkoxysilanes. Some preliminary results concerning functionalized silica interfacial deposition can be found in section 7.2. The first approach to the modification of the liquid - liquid interface with silica material was based on the controlled electrochemical transfer of template molecules from the organic phase to the silica-precursor-containing aqueous phase. A local change of the pH at the liquid - liquid interface is the second way of triggering silica deposition (the hydrolysis and condensation reactions are pH sensitive). A photosensitive and interfacially active cation - trimethylbenzhydrylammonium (PH+) - was employed to change the pH above the liquid - liquid interface on its aqueous side. The conclusions for this part of the thesis are: 1. The electrochemical transfer of PH+ (org → aq) with simultaneous UV irradiation led to the formation of a reactive carbocation (which reacts with water and forms benzhydrol) and releases a proton; 2. The overall pH of the aqueous phase was also affected by the water reduction taking place at the aqueous counter electrode; 3. The local pH measurements (performed above the liquid - liquid interface with the iridium oxide modified Pt microdisc electrodes) showed that the interfacial pH decreases once PH+ species are transferred and photodecomposed in the aqueous phase; 4. The local decrease in pH was shown to catalyze the silica hydrolysis followed by condensation (in this configuration the silica precursor - TEOS - was dissolved in the organic phase, whereas the template - CTA+ - was present in the aqueous phase) and resulted in silica deposit formation; 5.
Miniaturization was found to mechanically stabilize the liquid - liquid interface in the presence of CTA+ species (dissolved in the aqueous phase), which had destabilized the macroscopic ITIES; 6. The mesostructure formation was poor, and worm-like structures were present even when silica was deposited in the absence of CTA+ in the aqueous phase. Further study is needed to improve the mesostructural properties of the synthesized material. The factor which affects the structuration of the silica is the CTA+ concentration. Conducting an electrochemical study at the macroscopic ITIES in the presence of high [CTA+]aq was impossible, and hence miniaturization had to be employed (the microITIES was mechanically stable up to [CTA+]aq = 2.02 mM; higher concentrations were not investigated). The interfacial silica deposition can also be triggered by a local pH increase. In order to control this process electrochemically, the interfacially active species (initially present in the organic phase) has to be functionalized with a basic center, for instance a nitrogen atom with a lone electron pair. The electrochemically controlled transfer of the base to the aqueous phase will increase the pH, which can trigger the silica deposition (see section 7.3). Version Française The main objective of this thesis concerns the modification of the liquid - liquid interface by electrochemical means. A silica material was thus used with the aim of forming a molecular sieve. 5. The blocking effect of the organic species present inside the pores of the silica deposits was confirmed by ion transfer voltammetry; 6. Calcination allowed the oxidation of the organic species inside the mesopores. The empty mesostructure was permeable for the analytes. The dimensions of the miniaturized liquid - liquid interface required the application of a local characterization technique.
For this purpose, a confocal Raman spectroscopy method was used, which allowed the study of the different molecular contributions. Two phenomena were thus followed: (i) the interfacial ion transfer reaction and (ii) the electrochemically controlled formation of silica deposits at the interface. In general, the following information was extracted from this work: 1. Negative polarization affected the molecular composition of the liquid - liquid interface, since the Raman signals corresponding to BTPPA+ increased and those of TPBCl- decreased; 2. No movement of the interface was detected during its polarization: all the Raman bands attributed to DCE remained unchanged regardless of the applied potential; 3. The molecular composition of the liquid - liquid interface during silica formation changed abruptly after the first half of the voltammetric cycle; 4. Strong signals originating from CTA+, BTPPA+ and TPBCl- were found during the electrodeposition. The signal intensity tended to increase with the number of voltammetric cycles; 5. The in situ studies of silica deposition did not prove the formation of Si-O-Si bonds. Nevertheless, the presence of silica was confirmed after thermal curing of the material. Ion transfer voltammetry of five ions differing in charge, size and nature was finally used to evaluate the silica deposits electroanalytically (the silicon membrane was modified under the optimal conditions elaborated during the previous study). The following observations were made: 1. The ion transfer was affected in the presence of the silica deposits for all five analytes studied; 2. The variation of the Gibbs energy of transfer for three cations differing in
The mesostructure formation was poor, and worm-like structures were present even when the silica was deposited in the absence of CTA+ in the aqueous phase. Further study is needed to improve the mesostructural properties of the synthesized material. The factor which influences the structuration of the silica is the CTA+ concentration. An electrochemical study performed at the macroscopic liquid - liquid interface in the presence of Preliminary results concerning a few of the above mentioned ideas are given in the following subsections. Silica deposits - SECM characterization SECM allows the study of the electrochemical behavior of the system at the local scale. In the SECM configuration, the electrochemical interaction between the tip and the studied interface is investigated. Two working modes can be distinguished in SECM: (i) the generation/collection mode, where one side of the electrochemical cell, say the tip, electrochemically detects the probe which is generated at the second side, say the support; and (ii) the feedback mode, which can be negative (a drop in current once the tip approaches an insulating surface) or positive (an increase in current in the close vicinity of a conducting surface). In the negative feedback mode, the probe from the bulk of the liquid medium gives a steady-state current which decreases above the insulating surface due to the hindered diffusion profile at the tip. Positive feedback occurs when the species responsible for the charge transfer, and for the resulting current at the tip in the bulk, are regenerated at the conductive support (in this case, tip approach results in a current increase). SECM can be employed to probe the properties of porous membranes, 247,269 and hence it could be employed to study the microITIES modified with silica deposits.
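In quantitative terms, the bulk steady-state tip current and its change on approach are the quantities from which local information (such as a permeability coefficient) is extracted. A minimal sketch is given below, using the inlaid-microdisc limiting current and one commonly quoted empirical fit for diffusion-controlled positive feedback; the negative-feedback case over an insulator (the relevant one for a blocking silica deposit) is described by analogous tabulated fits in the SECM literature. The numerical inputs are illustrative assumptions:

```python
import math

F = 96485.0  # Faraday constant, C/mol

def i_ss_microdisc(n, D, C, r):
    """Steady-state limiting current at an inlaid microdisc: I_ss = 4 n F D C r.
    Units: D in cm^2/s, C in mol/cm^3, r in cm; returns amperes."""
    return 4.0 * n * F * D * C * r

def i_T_positive_feedback(L):
    """Normalized tip current over a conductor vs. normalized distance L = d/r
    (empirical fit for diffusion-controlled positive feedback, large RG):
    I_T = 0.68 + 0.78377/L + 0.3315*exp(-1.0672/L)."""
    return 0.68 + 0.78377 / L + 0.3315 * math.exp(-1.0672 / L)

# Illustrative bulk current: 1 mM probe (1e-6 mol/cm^3), D = 1e-5 cm^2/s,
# 5 µm radius tip -> I_ss ≈ 1.93 nA
print(i_ss_microdisc(1, 1e-5, 1e-6, 5e-4))
```

Fitting a measured approach curve between the pure positive- and negative-feedback limits is one route to the membrane permeability mentioned in section 7.1.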
The set-up which could be used for such an experiment is described below. Silica deposits functionalization Different functionalities can be introduced into the silica framework on the basis of co-condensation between alkoxysilane and organosilane species. The electrodeposition of such functionalized silica materials can be easily performed at the liquid - liquid interface. A preliminary investigation has been done and the results are promising. The co-condensation was performed with two organosilanes: (3-mercaptopropyl)trimethoxysilane and an azide-bearing silane. Electrochemically controlled interfacial transfer was used as the method for silica deposit generation: the cationic surfactant, initially dissolved in the organic phase, was transferred on forward polarization (from more positive to more negative potential) to the aqueous phase, where it catalyzed, and self-assembled with, the condensing silica precursor species. Interfacial silica deposition was performed for different concentrations of organosilanes in the initial sol solution (from 5% up to 25%). The resulting silica deposits were collected from the liquid - liquid interface and spectroscopic characterization (data not shown) was performed. The thiol bond evolution was followed with Raman spectroscopy, whereas the presence of the azide group was confirmed with infra-red spectroscopy. The morphological characterization was conducted with SAXS. As a continuation, other functionalities can be introduced into the silica framework (co-condensation or post-grafting can be employed). In a second step, the miniaturized ITIES could be modified with the functionalized silica deposits, which could finally be evaluated with a range of interfacially active analytes. Interfacially active base The local pH increase of the aqueous phase near the liquid - liquid interface could be achieved with an ion transfer reaction, provided that the ion initially present in the organic phase is functionalized with a base.
Such a compound could be a quaternary ammonium cation substituted, for instance, with an alkyl chain terminated with a nitrogen atom bearing a lone electron pair (i.e. an (n-aminoalkyl)trimethylammonium cation). Surprisingly, such compounds are nearly inaccessible and very expensive, despite the fact that their synthesis seems to be easy. The amination reaction of an alkyl-halide-containing quaternary ammonium cation with ethylenediamine is one of the ways to obtain the product of interest: In the future, the electroanalytical evaluation of the synthesized molecule can be performed at the electrified liquid - liquid interface. The change in pH has to be evidenced experimentally. The system employing an interfacially active base can be used to trigger interfacial silica deposition (which can be further characterized in order to evaluate its structure). 2. Each pulled capillary was observed with an optical microscope in order to verify the effectiveness of the applied pulling parameters. 3. Nanopipettes were then filled with chlorotrimethylsilane so that a small volume of the solution (~1 µL) was placed inside the capillary in the close vicinity of the bigger entrance. A vertical position of the nanopipette, with the tip oriented upwards, was maintained all the time. 4. The nanopipette with the solution inside was left for 20 min (in vertical position!) under the fume hood. 5. After this time the residual solution was removed from inside the capillary, and it was stored overnight in the oven at 130 °C. 6. The nanopipette characterization was performed with ion-transfer voltammetry. Prior to the electrochemical measurement, nanopipettes were filled with a 10 mM solution of BTPPA+TPBCl- in DCE. Frequently, during pipette filling, an air space separated the very end of the filled tip from the rest of the organic solution filling the capillary.
In order to remove this empty space from inside the tip, a glass fiber (pulled from a Pasteur pipette above a Bunsen burner) could be used, with a diameter smaller than the inner diameter of the tip filled with the air bubble. The limiting current recorded at micro- and nanopipettes can be calculated with equation (I.1): I_ss = 3.35 π n F D C r (I.1) where I_ss is the limiting current, C is the concentration of the analyte in mol/cm3, D is the diffusion coefficient in cm2/s, F is the Faraday constant (96485 A·s/mol), n is the net charge of the analyte and r is the pipette tip radius. I would also like to acknowledge other people who have contributed to this work. I will especially highlight Manuel Dossot and Jérôme Grausem, who helped me a lot with the Raman spectroscopy, Cédric Carteret for his help with infra-red spectroscopy, Mathieu Etienne for his help with SECM, Neus Vila for advice concerning organic synthesis, Christelle Despas for help with ion chromatography and Marc Hebrant for his engagement during the HPLC analysis. I am also indebted to the people who were involved in other aspects of my work. Particularly, I would like to acknowledge Marie-José Stébé, Andreea Pasc and Melanie Emo for SAXS analysis and precious discussions, Aurélien Renard for XPS analysis, Jaafar Ghanbaja and Sylvie Migot for TEM imaging and Lise Salsi for SEM imaging. A special acknowledgement goes to my office mates and good friends: Doktorka Veronika Urbanova, Ievgien Mazurenko and Daniel Alonso Gamero Quijano. Next in line are my lab mates. With Ivan Vakulko and Khaoula Hamdi we started our theses at the same time. Wissam Ghach introduced me to the laboratory organization. Lin Zhang started her thesis at the end of 2013. Mohana Afsharian first did her master training in 2013 and then, in 2014, joined the ELAN team as a PhD student. During my third year Maciek Mierzwa, Tauqir Nasir, Cheryl Maria Karman and Stephane Pinck became a part of our team.
Martha Collins and Maizatul Najwa Binti Jajuli, both working with the liquid - liquid interface, did a master internship in our team in 2015. Together we were a dozen or so nationalities working under one roof. It was a great pleasure to be a part of such a multicultural team. One period of my PhD was devoted to the DocSciLor 2015 organization. For the great fun and new experience I am glad to thank Fernanda Bianca Haffner, Hugo Gattuso, Ileana-Alexandra Pavel, Ivan and Maciek. I also want to acknowledge Claire Genois for taking care of the laboratory organization, which has made our lives easier. I cannot omit the workshop team: Jean-Paul Moulin, also known as Monsieur Moustache, Gérard Paquot and Patrick Bombardier. I am very grateful for their technical support. My acknowledgements also go to Marie Tercier, Christelle Charbaut and Jacqueline Druon for their help with all administration issues, and of course to all LCPME members. Figure 1.1. Different models for the ITIES structure. Black solid lines correspond to the potential distribution across the polarized liquid - liquid interface. Figure 1.2. Standard ion-transfer potentials for different anionic and cationic species across the dichloroethane - water interface. The abbreviations stand for: Ph4P+ - tetraphenylphosphonium; Ph4As+ - tetraphenylarsonium; Ph4B- - tetraphenylborate; and Alkyl4N+ - tetraalkylammonium compounds. Figure prepared based on the work of Samec et al. (IUPAC Technical Report) and Sabela et al. Figure 1.3. Four mechanisms of possible assisted ion transfer reaction.
Designations: L - ligand, i - ionic species, ACT - aqueous complexation followed by transfer, TOC - transfer followed by complexation, TIC - transfer by interfacial complexation and TID - transfer by interfacial dissociation. Figure 1.4 (b). Deviations from stability are induced by reactions occurring at the electrode surface. Figure 1.4. Current - potential characteristics for a) polarizable and b) non-polarizable electrodes. Figure 1.5. Voltammogram recorded only in the presence of the aqueous (LiCl) and the organic (BTPPA+TPBCl-) supporting electrolytes. Regions A and C correspond to electrolyte ion transfer currents, whereas region B is the potential window, within which the interface is impermeable for all ionic species present in both phases. Figure 1.5 illustrates a cyclic voltammogram recorded at the polarized interface between an aqueous solution of LiCl and an organic solution of bis(triphenylphosphoranylidene)ammonium tetrakis(4-chlorophenylborate) (BTPPA+TPBCl-). The region between the two vertical dashed lines (marked as B) corresponds to the polarizable part of the interface - the potential window. In this particular part of a cyclic voltammogram, the change in inner potentials between the two immiscible phases does not induce a noticeable change in the chemical composition of the aqueous and organic media. In other words, the interface is impermeable for charge transfer, and the resulting current is only due to the charging of the double layer capacitance on both sides of the liquid - liquid interface. The potential window is limited by supporting electrolyte ion transfer (parts A and C). Once the standard transfer potential of the less hydrophobic (in the case of ions transferring from the aqueous to the organic phase) or the less hydrophilic (for ions crossing the interface from the organic side) ion is reached, these ions start to cross the interface.
The available potential window in voltammetric experiments is thus set by the supporting electrolytes. At the ITIES, the resulting current is associated with the direction of ion transfer. Positive current is recorded once cations move from the aqueous to the organic phase or anions from the organic to the aqueous phase, whereas negative current is recorded when cations transfer from the organic to the aqueous phase and anions from the aqueous to the organic phase. With this in mind, the association of ion transfer with the positive and the negative ends of the voltammogram from Figure 1.5 becomes simple. The limiting current on the lower potential side (part A on Figure 1.5) originates from chloride transfer (the positive peak corresponds to Cl- (org → aq), whereas the negative current is due to Cl- (aq → org)). The positive end of the potential window is limited by TPBCl- transfer (TPBCl- (aq → org) results in the negative current peak and TPBCl- (org → aq) can be observed as a positive current increase). Figure 1.6. Direction of ion transfer associated with the current response. - tail-finished peak (aq → org), tail-finished peak (org → aq); - wave-like signal (aq → org), tail-finished peak (org → aq); - tail-finished peak (aq → org), wave-like signal (org → aq). n is the charge, F the Faraday constant, A the surface area, D the diffusion coefficient, C the concentration, v the scan rate, N the number of pores (in the case of an array), r the ITIES radius and f(Θ) is a function of the tip inner angle. Figure 1.7. Examples of silicon containing compounds. Figure 1.8. Hydrolysis (1) and condensation (2) reactions of tetraalkoxysilane. Figure 1.9 (blue curve). The reaction rate is slowest at near-neutral pH. An increasing concentration of protons or hydroxide species in the aqueous medium leads to an increase in the hydrolysis rate. The reaction mechanism can be referred to as a nucleophilic substitution of SN2 type, namely a nucleophilic attack on the positively charged silicon atom, which takes place synchronously with the cleavage of the Si-OR bond. The hydrolysis reaction is shown on Figure 1.8 (1). The rate of the hydrolysis reaction can be affected by the reaction medium - polar and protic solvents, through the formation of hydrogen bonds, may increase the efficiency of Si-OR bond cleavage - whereas an increasing hydrophobic character of the R substituent shows the opposite effect (Lev et al., Sol-Gel Electrochemistry: Silica and Silicates). Figure 1.9. Schematic change in the relative rates of hydrolysis, condensation and dissolution of an alkoxysilane and its products. Figure adapted from ref. 88. See Figure 1.9 for the kinetics versus pH dependence. The reaction takes place most efficiently in two pH ranges: the first around neutral pH, where the nucleophilic attack on the positively charged silicon atom is performed by deprotonated silanol moieties, and the second at acidic pH values (pH < 2), where deprotonated silanol moieties are even more strongly attracted by the positively charged silicon atom due to the formation of protonated species - SiOH2+. The condensation reaction leads to silicate polymerization and finally to the formation of a gel (according to reaction (2) from Figure 1.8). The gelation process depends on a variety of factors: (i) temperature - polymerization can be thermally activated and the gelation time can be significantly reduced at high temperatures; (ii) pH - gelation was found to be slowest at intermediate values and fastest at low and high values; (iii) solvent - a decreasing content of water in the reaction medium leads to an increase in the gelation time, as does the use of volatile additives (Lev et al., Sol-Gel Electrochemistry: Silica and Silicates). Figure 1.10. Illustrative schemes for the hard, colloidal assembly and soft template routes. (See Figure 1.10 A.) The template route employing colloidal particles was classified as an intermediate option, bearing the rigidity of the hard templates and the self-assembly properties of soft templates (see Figure 1.10 B). The last group - shown on Figure 1.10 C - belongs to 'soft' matter, in this case liquid crystals formed by amphiphilic species (able to form a variety of spatial arrangements: micelles, vesicles, cubic, hexagonal or lamellar liquid crystal structures), which is the most versatile method, as the template extraction can be performed under mild conditions without affecting the deposit properties. Figure 1.11. Schemes representing the successive steps of photocurrent evolution at the Au (NPs or mirror-like film) modified ITIES. ZnTPPC is meso-tetra(4-carboxyphenyl)porphyrin. Figure prepared based on ref. 106. Figure 1.12. Three phase junction set-up used for Au NPs deposition. TOA+ is the tetraoctylammonium cation. In another study, Li et al. used SECM in anodic generation mode for the generation of Ag NPs (see Figure 1.13). 118 Figure 1.13. Scheme representing Ag NPs deposition by SECM anodic dissolution of a silver UME. Adapted from ref. 118. (Figure 1.14 B). The authors noticed that, regardless of the deposition technique employed, the ClO4- (aq → org) transfer Figure 1.14. Thin layer organic phase liquid - liquid interface approach for the Ag NPs electrodeposition. A corresponds to the open circuit potential electrodeposition and B corresponds to the potential controlled electrodeposition. Adapted from ref. 121. (Figure 1.15). 142 The effect of a 1,2-dioctadecanoyl-sn-glycero-3-phosphocholine (DSPC) monolayer on the adsorption and the kinetics of charge transfer for TEA+, propranolol, metoprolol and tacrine has been studied by cyclic voltammetry and AC voltammetry. 142 Comparison of the calculated values of admittance and apparent capacitance in the presence and absence of ion transfer through the DSPC monolayer allowed the following conclusions: (i) all studied ions tend to interact with the phospholipid membrane; (ii) the rate constants of TEA+, propranolol and metoprolol decrease with increasing phospholipid deposition surface pressure, and no change was observed for tacrine; and (iii) the calculated apparent capacitance values in the presence and absence of ion transfer indicated that the charge transfer reactions of tacrine, and partially of metoprolol, are coupled with an adsorption process. In subsequent work, electrochemical impedance spectroscopy was used to evaluate the interaction between four structurally similar therapeutics (aminacrine, tacrine, velnacrine and proflavine) and phospholipid monolayers of different composition adsorbed at the ITIES. The results indicated that the preferred adsorption site in the organic phase for velnacrine and proflavine is the polar head group region, whereas tacrine and aminacrine prefer the hydrocarbon tail domains. 143 Figure 1.15. Simplified electrochemical cell allowing the compact phospholipid monolayer formation at the electrified organic gel - aqueous phase interface. Adapted from ref. 142. Figure 1.16. Structures of the monomers used for the electropolymerization at the electrified liquid - liquid interface: A - 1-methylpyrrole, B - 1-phenylpyrrole, C - 4-(pyrrol-1-yl)phenyloxyacetic acid, D - 2,2':5',2''-terthiophene, E - tyramine and F - resorcinol. Some effort has been made with regard to planar liquid - liquid interface modification with carbon based materials, with examples emerging from synthesis 159, functionalization 160 or catalysis 161, and only a few examples concern the use of carbon and/or carbon based materials at the polarized liquid - liquid interface. Carbon materials were also used to form 'semi modified' ITIES, and such examples are also given here. Figure 1.17. Scheme of a 3D-ITIES composed of reticulated vitreous carbon modified with a 4-ABA/polypeptide multilayer impregnated with ferri/ferrocyanide ions and photosensitizer (ZnTPPS4-), with the organic phase being a solution of an electron acceptor (TCNQ) and 5 mM organic electrolyte. The abbreviations stand for: 4-ABA is 4-aminobenzoic acid, TCNQ is 7,7',8,8'-tetracyanoquinodimethane and ZnTPPS4- is zinc meso-tetrakis(p-sulfonatophenyl)porphyrin. Figure adapted from ref. 162. Figure 1.18. Scheme of the ITIES modification with a (doped) graphene layer. The abbreviations stand for: CVD GR - chemical vapor deposited graphene; DMFc - 1,1'-dimethylferrocene; DecMFc - decamethylferrocene. The numbers on the arrows correspond to different experimental synthetic approaches: (1) CVD GR deposition; (2) one step CVD GR/metal nanoparticle deposition; (3) two step CVD GR/metal nanoparticle deposition and (4) two step CVD GR/metal nanoparticle deposition under potentiostatic control. Figure prepared on the basis of ref. 160. (Figure 1.19.) The ITO electrode crossing the interface between an aqueous solution of sulphite ions and nitrobenzene containing n-octyltriethoxysilane was modified with silica positioned - almost completely - on the aqueous side of the liquid - liquid junction. The hydrolysis and condensation reactions were catalyzed by protons generated at the ITO electrode according to the redox reaction: Figure 1.19.
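For reference, the current expressions that the symbol legend in this chapter (n, F, A, D, C, v, N, r, f(Θ)) accompanies take the following standard forms, consistent with equation (I.1) quoted earlier for pipettes; the general pipette expression with f(Θ) is given in hedged form, since the exact geometric factor depends on the tip inner angle:

```latex
% Linear diffusion (macroITIES, peak-shaped response, Randles--Sevcik form):
I_p = 0.4463\, n F A C \sqrt{\frac{n F v D}{R T}}

% Array of N inlaid microdiscs (steady-state, wave-shaped response):
I_{ss} = 4\, n F D C r N

% Micropipette (steady-state; more generally I_{ss} = n F D C r\, f(\Theta)):
I_{ss} = 3.35\,\pi\, n F D C r
```

The peak-shaped versus wave-shaped responses listed under Figure 1.6 follow directly from which of these diffusion regimes (linear or hemispherical) governs each side of the interface.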
The reaction mechanism can be referred to as the nucleophilic substitution of S N 2 type, namely nucleophilic attack on the positively charged silicon atom, which takes place synchronously with the cleavage of the Si-OR bond. The hydrolysis reaction is shown on Figure1.8 (1). The rate of the hydrolysis reaction can be affected by the reaction medium -polar and protic solvents through formation of hydrogen bond may increase the efficiency of Si-OR bond cleavage -whereas the increasing hydrophobic character of R substituent shows the opposite effect.[START_REF] Lev | Sol-Gel Electrochemistry: Silica and Silicates[END_REF] Figure 1 . 9 . 19 Figure 1.9. Schematic change in the relative rate for hydrolysis, condensation and dissolution of alkoxysilane and its products.Figure was adopted from ref 88 Figure 1 . 9 19 Figure 1.9 for the kinetics versus pH dependence. The reaction takes place most efficiently in two pH ranges. The first, around the neutral pH conditions where the nucleophilic attack on the positively charged silicon atom is performed by deprotonated silanol moieties and the second range, at the acidic pH values (pH<2), where deprotonated silanol moieties are even more attracted by positively charged silicon atom due to formation of protonated species -𝑆𝑖𝑂𝐻 2 + . Condensation reaction leads to silicates polymerization and finally the formation of a gel (according to reaction (2) from Figure 1.8). The gelation process is dependent from a variety of factors: (i) temperature -polymerization can be thermally activated and can be significantly reduced at high temperatures; (ii) pH -gelation was found to be slowest at intermediate values and fastest at low and high values; (iii) solvent -decreasing content ofwater in reaction media leads to increase in the gelation time, as well as use of volatile additives.[START_REF] Lev | Sol-Gel Electrochemistry: Silica and Silicates[END_REF] Figure 1 . 10 . 110 Figure 1.10. 
Illustrative schemes for hard, colloidal assembly and soft template route. Figure 1 . 1 10 A). Template route employing colloidal particles was classified as an intermediate option bearing the rigidity of the hard templates and selfassembly properties of soft templates (see Figure 1.10 B). The last group -shown on Figure 1.10 C -belongs to 'soft' mater, in this case liquid crystals formed by amphiphilic species (able to form variety of spatial arrangements: micelles, vesicles, cubic, hexagonal or lamellar liquid crystal structures), which is the most versatile method as the template extraction can be performed under mild condition without affecting deposit properties. Figure 1 . 11 . 111 Figure 1.11. Schemes representing following steps of photocurrent evolution at the Au (NPs or mirror-like film) modified ITIES. ZnTPPC is the meso-tetra(4-carboxyphenyl)porphyrin. Figure was prepared based on ref 106 . Figure 1 . 12 . 112 Figure 1.12. Three phase junction set-up used for Au NPs deposition. TOA + is the tetraoctylammonium cation. another study, Li et al. used SECM in anodic generation mode for the generation of Ag NPs (see Figure 1.13). 118 Figure 1 . 13 . 113 Figure 1.13. The scheme representing Ag NPs deposition by SECM anodic dissolution of silver UME. Adapted from ref. 118 (Figure 1 . 1 Figure 1.14 B). The authors noticed that regardless of the deposition technique employed, the 𝐶𝑙𝑂 4 (𝑎𝑞→𝑜𝑟𝑔) - Figure 1 . 14 . 114 Figure 1.14. Thin layer organic phase liquid -liquid interface approach for the Ag NPs electrodeposition approach. A -Corresponds to the open circuit potential electrodeposition and Bcorresponds to the potential controlled electrodeposition. Adapted from ref. 121 Figure 1 . 
The effect of a 1,2-dioctadecanoyl-sn-glycero-3-phosphocholine (DSPC) monolayer on the adsorption and the kinetics of charge transfer of TEA⁺, propranolol, metoprolol and tacrine has been studied by cyclic voltammetry and AC voltammetry (Figure 1.15).142 Comparison of the calculated values of admittance and apparent capacitance in the presence and absence of ion transfer through the DSPC monolayer allowed concluding what follows: (i) all studied ions tend to interact with the phospholipid membrane; (ii) the rate constants of TEA⁺, propranolol and metoprolol decrease with increasing phospholipid deposition surface pressure, whereas no change was observed for tacrine; and (iii) the calculated apparent capacitance values in the presence and the absence of ion transfer indicated that the charge transfer reactions of tacrine and, partially, of metoprolol are coupled with an adsorption process. In subsequent work, electrochemical impedance spectroscopy was used to evaluate the interaction between four structurally similar therapeutics (aminacrine, tacrine, velnacrine and proflavine) and phospholipid monolayers of different composition adsorbed at the ITIES. The results indicated that the preferable adsorption site in the organic phase for velnacrine and proflavine is the polar head group region, whereas tacrine and aminacrine prefer the hydrocarbon tail domains.143

Figure 1.15. Simplified electrochemical cell allowing the compact phospholipid monolayer formation at the electrified organic gel - aqueous phase interface. Adapted from ref. 142

Figure 1.16. Structures of the monomers used for the electropolymerization at the electrified liquid - liquid interface: A - 1-methylpyrrole, B - 1-phenylpyrrole, C - 4-(pyrrol-1-yl)phenyloxyacetic acid, D - 2,2':5',2''-terthiophene, E - tyramine and F - resorcinol.
Some effort has been made with regard to planar liquid - liquid interface modification with carbon-based materials, with examples emerging from synthesis,159 functionalization160 or catalysis,161 etc.; only a few examples concern the use of carbon and/or carbon-based materials at the polarized liquid - liquid interface. Carbon materials were also used to form 'semi-modified' ITIES, and such examples are also given here.

Figure 1.17. Scheme of 3D-ITIES composed from reticulated vitreous carbon modified with a 4-ABA/polypeptide multilayer impregnated with ferri/ferrocyanide ions and photosensitizer (ZnTPPS⁴⁻), the organic phase being the solution of an electron acceptor (TCNQ) and 5 mM organic electrolyte. Abbreviations stand for: 4-ABA is 4-aminobenzoic acid, TCNQ is 7,7',8,8'-tetracyanoquinodimethane and ZnTPPS⁴⁻ is the zinc meso-tetrakis(p-sulfonatophenyl)porphyrin. Figure is adapted from ref. 162

Figure 1.18. Scheme of the ITIES modification with a (doped)-graphene layer. The abbreviations stand for: CVD GR - chemical vapor deposited graphene; DMFc - 1,1'-dimethylferrocene, DecMFc - decamethylferrocene. The numbers on the arrows correspond to different experimental synthetic approaches: (1) CVD GR deposition; (2) one step CVD GR/metal nanoparticles deposition; (3) two step CVD GR/metal nanoparticles deposition and (4) two step CVD GR/metal nanoparticles deposition under potentiostatic control. Figure prepared on the basis of ref. 160

The ITO electrode crossing the interface between an aqueous solution of sulphite ions and nitrobenzene containing n-octyltriethoxysilane was modified with silica positioned - almost completely - on the aqueous side of the liquid - liquid junction (Figure 1.19). The hydrolysis and condensation reactions were catalyzed by protons generated at the ITO electrode according to the redox reaction:

SO₃²⁻ + H₂O → SO₄²⁻ + 2H⁺ + 2e⁻

Figure 1.19.
Set-up showing silica stripe formation at the three phase junction. The deposition place is indicated with a red arrow. Protons were produced at the ITO on the aqueous side of the liquid - liquid interface with a conventional three electrode set-up. OTEOS is n-octyltriethoxysilane. Scheme was adapted from ref. 176

photo-initiator. The colloidal particles self-assembled at the liquid - liquid interface were bonded into a stable film by polymerization induced by UV irradiation. Self-assembly of the SiO₂ spheres led to a well-ordered, mechanically stable film of one monolayer thickness for times starting from 30 min and silica sphere concentrations up to 0.013%. Whitby et al. studied the effect of SiO₂ nanoparticle/surfactant composites on the stabilization of the liquid - liquid interface. Macroscopic systems reveal a new set of information helping to understand whether the interior of the pores is filled with the aqueous solution. The hydrophobic character of silicates could enhance the organic phase penetration into the zeolite framework, and hence the exact position of the interface remained unclear. To answer these doubts, Dryfe et al. studied the facilitated ion transfer of alkali metals with 18-crown-6 ether and, based on the obtained electrochemical results, they suggested that it is rather the aqueous phase which fills the silicalite pores. An ex situ zeolite-modified ITIES size-selective membrane can be complete with charge selectivity.188 The example given by Dryfe et al. concerns sodium zeolite-Y pressed with 10 tons of pressure and healed with a tetraethoxysilane (TEOS) solution. The healing process was applied in order to eliminate inter-grain pathways between mechanically pressed zeolite crystals, which may constitute a route for analyte transfer. The resulting disks, 0.75 mm in thickness and 20 mm in diameter, were used to support the liquid - liquid interface.
Cyclic voltammetry results in the presence of the zeolite-Y membrane have shown size-selective exclusion of tetrabutylammonium cations, whereas tetraethylammonium cations undergo reversible transfer. When BF₄⁻ and ClO₄⁻ were studied as the transferring ions (with ion diameters smaller than the diameter of the pore entrance), no voltammetric response was observed. This result univocally indicated charge-selective exclusion of the negatively charged ions. The supporting electrolytes used in this study were LiCl and BTPPA⁺TPBCl⁻ in the aqueous and organic phase respectively. Since the zeolite-Y membrane exhibits both size and charge sieving effects, the potential limits were extended on both the negative and the positive potential sides. The broader potential window on the less positive potential side arose from the size exclusion of BTPPA⁺ transfer and the charge barrier for Cl⁻ transfer across the modified ITIES.

Figure 1.20. Schematic and simplified schemes for the ex situ modified ITIES with silica materials. A - corresponds to the macroscopic ITIES modified with a zeolite membrane used in a size-selective voltammetric study188 and B - is the polyethylene terephthalate (PET) membrane with randomly distributed pores modified with the aspiration-induced infiltration method.190

The cells were custom made from glass tubes with inner diameters of 12 mm (Figure 2.1 A) and 19 mm (Figure 2.1 B). The reference electrodes placed in the Luggin capillaries of each phase were Ag/AgCl wires, whereas the counter electrodes were made from platinum mesh. The Ag/AgCl electrodes were prepared by oxidation of a silver wire in a saturated solution of FeCl₃. The aqueous counter electrode was a platinum mesh spot-welded to a platinum wire. The organic counter electrode was a platinum mesh spot-welded to a platinum wire, which was isolated from both phases by a glass tube (for the protocol of preparation of the organic counter electrode please refer to section 2.5.5).
During the measurements, the organic phase Luggin capillary was filled with the supporting aqueous solution of 10 mM BTPPA⁺Cl⁻ and 10 mM LiCl. The cell with the smaller interfacial radius was used to study ion transfer reactions and the interfacial silica deposition mechanism. The cell with the bigger interface diameter was equipped with a removable upper Luggin capillary and hence was used for the electrogeneration of large amounts of silica (several mg per synthesis). In the second case, during the interfacial deposition, the volume of the organic phase was diminished by placing inert glass boiling stones at the bottom of the cell.

Figure 2.1. Four electrode electrochemical cells supporting the macroscopic liquid - liquid interface: A - with fixed Luggin capillaries and B - with a removable upper Luggin capillary. RE org and RE aq correspond to the organic and the aqueous reference electrodes respectively, whereas CE org and CE aq correspond to the organic and the aqueous counter electrodes respectively. The numbers stand for: 1 - the aqueous phase; 2 - the organic phase; 3 - the supporting aqueous phase; 4 - the liquid - liquid interface between the higher density phase and the supporting aqueous phase and 5 - the ITIES. The interfacial surface area was 1.13 cm² for cell A and 2.83 cm² for cell B.

The cell used for microITIES modification is shown on Figure 2.2 A. The cell consists of a simple glass vessel covered with a lid with three holes. The side places were occupied by a platinum mesh counter electrode and an Ag/AgCl reference electrode (prepared by Ag oxidation in an oversaturated solution of FeCl₃). The center hole was occupied by a glass tube, the bottom of which was covered with ParaFilm®.

Figure 2.2. The electrochemical cells used for A - microITIES modification and electroanalytical study and C - in situ Raman analysis of electrochemical silica deposition. B - is the silicon membrane used to support the array of microITIES.
Designations stand for: 1 - aqueous phase; 2 - organic phase; 3 - microITIES (r is the pore diameter, S is the pore center-to-center distance and d is the membrane thickness - 100 µm); 4 - the objective used to focus the laser beam. RE and CE stand for reference electrode and counter electrode respectively; aq stands for aqueous, org for organic.

The set-up used to couple electrochemical silica deposition with confocal Raman spectroscopy is shown on Figure 2.2 C. In this configuration the glass tube filled with the organic phase was placed at the bottom of a custom made PTFE vessel filled with the aqueous phase. The top side of the glass tube was finished with the silicon wafer bearing the array of microITIES. The lower part of the glass tube was closed with a silicone stopper in order to avoid organic phase leakage. During the measurements the laser spot was focused at the liquid - liquid interface using an objective adapted to work in liquid media. The electrodes were the same as for the cell shown on Figure 2.2 A. Both the macroscopic ITIES (see Figure 2.1 A) and the microscopic ITIES (see Figure 2.2 A) were used. In some experiments a single pore microITIES (see section 2.5.7 for the protocol of preparation) was used instead of the silicon wafer membrane.

Figure 2.3. Set-up used during local pH measurements above the liquid - liquid interface supported within an array of micrometer pores. The designations are as follows: 1 - PTFE cell, 2 - aqueous phase, 3 - the glass tube filled with the organic phase and the aqueous reference supporting electrolyte (4), 5 - the silicon wafer supporting the array of microITIES (∅ is the pore diameter, equal to 50 µm, and d is the membrane thickness - 100 µm), 6 - the source of UV irradiation, 7 - the double junction Ag/AgCl reference electrode, 8 - the iridium oxide modified platinum electrode, 9 - the shear-force positioning system and 10 - the CCD camera used for visualization.
Electrochemistry at the liquid - liquid interface was controlled with CE aq - the aqueous counter electrode (platinum mesh), RE aq - the aqueous reference electrode (Ag/AgCl) and RE org/CE org - an Ag/AgCl wire used as the organic reference and counter electrode in one.

Cell 1: (aq) Ag | AgCl | 5 mM Na⁺Cl⁻, x mM TEOS || 10 mM BTPPA⁺TPBCl⁻, y mM CTA⁺TPBCl⁻ in DCE | 10 mM BTPPA⁺Cl⁻, 10 mM Li⁺Cl⁻ | AgCl | Ag

The electrochemical set-up used to study microITIES modification with silica deposits is shown in the cell 2 configuration. MicroITIES modified with silica deposits were evaluated electroanalytically using ion transfer voltammetry (the analytes were species differing in size, charge and nature) in the cell 3 configuration.

Cell 2: (aq) Ag | AgCl | 5 mM Na⁺Cl⁻, x mM TEOS || 10 mM BTPPA⁺TPBCl⁻, y mM CTA⁺TPBCl⁻ in DCE | Ag (org)

Cell 3: (aq) Ag | AgCl | 10 mM Li⁺Cl⁻, z µM interfacial active ion || 10 mM BTPPA⁺TPBCl⁻ in DCE | Ag (org)

Electrochemical measurements - All electrochemical measurements from chapters III and IV were performed with a PGSTAT 302N (Metrohm, Switzerland) in the four electrode configuration. The potentiostat was controlled by NOVA software. In chapter V the liquid - liquid interface was polarized using a PalmSens EmStat 3+ potentiostat with the PalmSens Differential Electrometer Amplifier allowing the application of the four electrode configuration. The PalmSens potentiostat was controlled by PSTrace software.

Small Angle X-ray Scattering - SAXS measurements were performed with the SAXSee mc² apparatus from Anton Paar.

Profilometry based on shear-force measurement - shear-force detection was done with two piezoelectric plates attached to a pulled borosilicate glass capillary and connected to a lock-in amplifier (7280 DSP lock-in amplifier).

Potentiometric pH measurements - A high impedance potentiometer (Keithley 6430) was used in all potentiometric pH measurements. A double junction Ag/AgCl was used as the reference electrode. The pH probe was a Pt microelectrode modified with iridium oxide.
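The potentiometric pH read-out of this probe relies on the linear potential - pH response of the iridium oxide film. A generic form of the calibration relation (a standard expression; the actual intercept and slope are not given in the text and must be taken from the two-point calibration) is:

```latex
E_{\mathrm{probe}} = E^{0'} - k \cdot \mathrm{pH}
```

where E⁰′ and k are fixed by calibration in two buffers. For hydrated iridium oxide films, slopes k between the Nernstian value of 59 mV per pH unit and about 90 mV per pH unit have been reported in the literature, which is why a calibration before and after each measurement is required.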
The pH probe calibration was conducted prior to and after the measurements in buffered solutions at pH 7 and 4.

Ion chromatography - the analysis was performed with an 882 Compact IC Plus from Metrohm equipped with an IC conductivity detector.

High Pressure Liquid Chromatography - chromatograms were recorded using a Waters 501 HPLC pump and a Waters 486 Tunable Absorbance detector. The column was an RP 18C 100-4.6 mm from Chromolith® performance.

¹H NMR - all spectra were recorded with a Bruker 200 MHz spectrometer at 298 K.

UV-Vis spectroscopy - all UV-Vis spectra were recorded in quartz cuvettes with a Cary 60 UV-Vis spectrometer from Agilent Technologies.

Mass spectrometry - a MicroTOFq spectrometer from Bruker was used. The ESI (electrospray ionization) source was in positive mode. The scan was performed between 50 and 1200 m/z.

3. The next step was to dissolve the BTPPA⁺TPBCl⁻ in acetone and then filter under gravity using a paper filter. The vessel with the filtrate was covered with Parafilm® in which small holes were made to allow evaporation of the acetone.

4. Depending on the volume of acetone used, evaporation takes up to a few days. The resulting BTPPA⁺TPBCl⁻ gives rectangular crystals, as shown on Figure 2.4. Afterwards, the crystals were rinsed with a 1:1 acetone:H₂O mixture, filtered under vacuum and stored overnight in a desiccator.

5. The resulting BTPPA⁺TPBCl⁻ was stored in a refrigerator in a vessel covered with aluminum foil to prevent light exposure.

Figure 2.4. BTPPA⁺TPBCl⁻ crystals after acetone evaporation.

Figure 2.5. The ion chromatography detection of iodides after four successive rinsings of the reaction product (PH⁺TPBCl⁻) with distilled water.

Figure 2.6. Components used for the organic counter electrode preparation: A - platinum mesh fixed to a platinum wire, B - borosilicate capillaries, C - dual-component conductive epoxy and D - copper-tinned contact wire.
1. As shown on Figure 2.7 B, the platinum wire was placed inside the borosilicate capillary. The second side of the glass capillary was connected to the hose (Figure 2.7 A), which led to the pump.

2. The glass around the platinum wire was gently melted above the Bunsen burner. Heating was slowly started from the mesh side and the capillary was progressively moved inside the flame. Attention had to be paid, since the capillary can easily collapse above the platinum wire, closing the capillary. If so, step two has to be repeated.

Figure 2.7. Trapping the platinum wire inside the glass capillary was performed with the pump (C) under the Bunsen burner (D). One side of the capillary was attached to the pump hose (A), whereas the platinum mesh and wire occupied the second side (B).

3. Once the platinum mesh and wire are securely fixed to the capillary (Figure 2.8 B), the conductive resin has to be placed just above the wire (Figure 2.8 C). A syringe with a long needle can be used for this purpose: first take up some resin inside the needle and then press it out into the capillary.

Figure 2.8. Successive steps of electrode preparation. A - platinum mesh and wire inserted into the capillary, B - glass melted around the platinum wire and C - conductive resin placed above the wire.

- borosilicate glass capillaries (56 mm and inner diameter 0.75 mm) from SUTTER INSTRUMENT®;
- gold wire of fixed diameter (in the case of this work the diameter was 25 or 50 µm);
- 10 ml of aqua regia solution (a mixture of concentrated hydrochloric acid and nitric acid in a 3 to 1 v/v ratio respectively);
- pump, vertical capillary puller and external power supply, shown in Figure 2.10 as A, B and C respectively.

Figure 2.10. Set-up used to prepare the single pore microITIES capillaries. A - the pump, B - the vertical capillary puller and C - the external power supply.

Figure 2.11. The vertical capillary puller was used to close the capillary in the middle and to melt the gold wire into the glass.
a) shows the capillary placed in the nickel/chrome wire ring before current passage and b) after the current was passed through the wire.

1. The capillary was placed in the nickel/chrome wire ring of the puller (Figure 2.11 a). Using the external power supply, a current was passed through the wire, ipso facto increasing the temperature in the vicinity of the capillary (Figure 2.11 b), which in turn decreased the glass viscosity. Next, the hose from the pump was attached to the top of the capillary and, under vacuum, the walls in the heating region were collapsed, closing the capillary at mid-length (Figure 2.12 b).

2. The next step was to place a short piece of gold wire (3-5 mm, ∅ = 25 or 50 µm) just above the collapsed part of the capillary. The gold wire was melted into the glass in the same manner as the capillary was closed. The capillary was then turned upside-down and a second piece of gold wire was melted into the glass, as shown on Figure 2.12 c.

Figure 2.12. a) Borosilicate glass capillary before closing at mid-length, b) with collapsed walls closing the capillary in the middle, c) the 50 µm gold wires melted into the glass on both sides of the capillary mid-length and d) the capillary split into two pieces, each containing a gold wire.

Figure 2.13. a) The polishing set-up used in this protocol, b) the capillary with the excess of glass above the gold wire and c) the capillary after polishing with four different sand papers (gold wire diameter was 50 µm).

4. The gold was removed from the glass by placing the capillary overnight into a beaker containing aqua regia. The gold etching has to be carried out under the fume hood due to the toxic nitrogen dioxide evolved during the reaction:

Au + 3HNO₃ + 4HCl → HAuCl₄ + 3NO₂↑ + 3H₂O

Figure 2.14. Optical microscope images recorded after gold removal. Images a) and b) correspond to the pore with 25 µm diameter, whereas c) and d) to the pore with 50 µm diameter; a) and c) are top views, b) and d) are side views of the single pore capillary.

Figure 2.15.
Reflux set-up used for the synthesis of trimethylbenzhydrylammonium iodide.

Figure 2.16. Distillation under vacuum set-up. A - the pump, B - the rotary evaporator and C - the heating bath filled with distilled water.

The CV recorded in the absence of CTA⁺ (org) and TEOS (aq) is shown on Figure 3.1 with a dashed line. No ion transfer occurred in the available potential window, which is limited by the background electrolyte transfer (Cl⁻ aq→org at the less positive and TPBCl⁻ org→aq at the more positive potential side). The addition of the cationic surfactant salt CTA⁺TPBCl⁻ to the organic phase (see Figure 3.1, red solid line) resulted in irreproducible current spikes, also known as electrochemical instability - a phenomenon known to occur at the electrified liquid - liquid interface in the presence of surface active species, which is discussed in more detail in section 1.1.4.

Figure 3.1. Cyclic voltammograms recorded in Cell 1. The black solid line was recorded in the presence of TEOS (x = 50 mM) and in the presence of CTA⁺ (y = 1.5 mM); the black dashed line was recorded in the presence of the organic and aqueous supporting electrolytes only (x and y = 0 mM); the red solid line corresponds to the voltammogram recorded in the absence of TEOS (x = 0 mM) in the aqueous phase and in the presence of CTA⁺ in the organic phase (y = 14 mM). Scan rate = 5 mV/s.

Figure 3.2. Schematic representation of the silica deposit formation mechanism. The forward scan is indicated with the red arrow. [CTA⁺]org = 1.5 mM; [TEOS]aq = 50 mM; scan rate = 5 mV/s.

The charge associated with the transfer of CTA⁺ was found to be dependent on the aqueous phase pH, as shown on Figure 3.3. Each point of the curve is the average of the five last (from fifteen consecutive voltammetric cycles) positive peak charges.
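The 'peak charge' plotted in Figure 3.3 corresponds to the integral of the voltammetric current over the peak. For a cyclic voltammogram recorded at scan rate v, this is (a standard relation, made explicit here for clarity):

```latex
Q = \int_{t_1}^{t_2} I \,\mathrm{d}t = \frac{1}{v}\int_{E_1}^{E_2} I \,\mathrm{d}E
```

where t₁ (E₁) and t₂ (E₂) bracket the positive peak; the values reported in Figure 3.3 are averages of this quantity over the last five of fifteen consecutive cycles.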
The charge under the reverse peak was observed to reach maximum values at pH = 9-10, at which the existence of polynuclear species - Si₄O₆(OH)₆²⁻ and Si₄O₈(OH)₄⁴⁻ - is predominant in the aqueous phase. At pH = 11 and pH < 9 mononuclear species are dominant and hence, as a consequence, the net negative charge is not sufficient to facilitate the CTA⁺ org→aq transfer.

Figure 3.3. The charge of the forward cyclic voltammetry scan recorded in the presence of [CTA⁺]org = 3 mM and [TEOS]aq = 300 mM as a function of pH. The error bars are the standard deviation of the last five values of fifteen consecutive runs.

Figure 3.4. A - Cyclic voltammograms recorded for the 1st, 2nd, 3rd, 5th, 10th and 15th cycles in the presence of [TEOS]aq = 50 mM and [CTA⁺]org = 1.5 mM. B - corresponds to the reverse peak charge as a function of the number of cycles. Scan rate = 5 mV/s. pH of the aqueous phase was 9.5.

The influence of repetitive cycling on the current - potential characteristics and on the charge above the reverse peaks is shown on Figure 3.4 A and B respectively (pH of the aqueous phase was 9.5).

Figure 3.5. Cyclic voltammograms showing the influence of interfacial silica deposit formation on the electrochemical instability. A - cyclic voltammograms for the 1st scan; B - cyclic voltammograms for the 15th scan. The red line corresponds to [CTA⁺]org = 3 mM, the black line to [CTA⁺]org = 14 mM. [TEOS]aq = 50 mM. Scan rate = 5 mV/s.

Figure 3.6. Influence of [CTA⁺]org - A and [TEOS]aq - B on the reverse peak charge for: A - 1.5 mM, 5 mM and 14 mM [TEOS]aq; and B - 50 mM, 200 mM and 300 mM [CTA⁺]org. The points and the error bars (standard deviations) were calculated from the last five cycles of the fifteen repetitive voltammetric runs.

Figure 3.7.
Infra-red spectrum of the silica deposit prepared with the following template and silica precursor concentrations: [CTA⁺]org = 14 mM and [TEOS]aq = 300 mM. The most significant wavenumbers are indicated with the arrows.

X-Ray Photoelectron Spectroscopy (XPS) measurements were performed in order to prove the formation of silica as well as to study the contribution of other interactions between the atoms. A typical XPS spectrum for the silica deposit synthesized at the ITIES (for [CTA⁺]org = 10 mM and [TEOS]aq = 300 mM) is shown on Figure 3.8, parts B (presence of the Si-O bond) and D (presence of the Si-O-Si bond). Nitrogen deriving from the surfactant species was also present. The C-O bond from Figure 3.8 C suggests that the TEOS molecules were not fully hydrolyzed and some Si-O-C may still remain in the sol solution.

Figure 3.8. XPS spectra of the silica deposit prepared for: [CTA⁺]org = 10 mM and [TEOS]aq = 300 mM. A - spectrum in the full range; B - spectrum in the region of O 1s; C - spectrum in the region of C 1s and D - spectrum in the region of Si 2p. The tables are correlated with the spectra.

Figure 3.9. Nitrogen adsorption-desorption of the silica deposit prepared using [CTA⁺]org = 14 mM and [TEOS]aq = 300 mM before (squares) and after (circles) calcination (heat treatment was performed at 450 °C for 30 min).

(Figure 3.10 B) in the organic phase resulted in a very weak response and a pore center-to-center distance around 6.1 nm. Interestingly, the addition of 20% of ethanol (blue curve, Figure 3.10 B)

Figure 3.10. Variation of the SAXS patterns recorded for silica deposits electrogenerated at the ITIES.
A - corresponds to silica deposits synthesized under different [CTA⁺]org (black - 5 mM, red - 10 mM and blue - 14 mM) and constant [TEOS]aq = 300 mM; B - corresponds to different organic phase polarities (black - 30% decane / 70% DCE, red - 100% DCE and blue - 20% ethanol / 80% DCE) with [CTA⁺]org = 10 mM and [TEOS]aq = 300 mM. All samples were prepared by cyclic voltammetry.

Figure 3.11. TEM images of silica deposits prepared by both cyclic voltammetry and chronoamperometry. The initial composition of the two phases is indicated in the headline of each column.

A typical cyclic voltammogram recorded during silica deposit formation is shown on Figure 4.1 with a black curve. The deposition was performed with [CTA⁺]org = 14 mM and [TEOS]aq = 50 mM in cell 2 at 5 mV/s. The red curve in Figure 4.1 corresponds to the blank CV recorded in cell 2 in the absence of template and precursor. Based on Figure 4.1, it was assumed that at the beginning of the polarization the liquid - liquid interface was uniformly covered with the CTA⁺ monolayer and the current measured was only due to interfacial double layer charging. The CTA⁺ org→aq transfer started at around +100 mV. The local concentration of CTA⁺ in the aqueous diffusion profile zone should exceed its CMC (reported to be 1.4 mM for CTAB)206 in order to form spherical and positively charged micelles. The presence of positive charge at the edge of the micellar spheres catalyzes the condensation of the TEOS precursor around the template species, resulting in silica material formation (see scheme 2 on Figure 4.1). The sigmoidal shape of the negative peak suggests that the CTA⁺ org→aq transfer was not limited by linear diffusion inside the pores of the silicon wafer, which should be the case here (the shape of the negative response will be discussed in the following section). The polarization was reversed at -100 mV. Going towards more negative values would force the transfer first of Cl⁻ aq→org followed by BTPPA⁺ org→aq.
During the reverse scan the CTA⁺ was expected to transfer back towards the organic phase. The formation of the characteristic positive peak at around +260 mV, terminated by an abrupt drop in current, suggested an adsorption process of CTA⁺ within the silica network (the drop in current indicates that there are no more charges to transfer).

Figure 4.1. Typical cyclic voltammogram recorded during the formation of a silica deposit. The microITIES membrane was design number 3. The CV was recorded in cell 1 (black line) for [TEOS]aq = 50 mM and [CTA⁺]org = 14 mM. A blank cyclic voltammogram was recorded in cell 1 (red line), without TEOS and CTA⁺. The schemes on the right illustrate the different stages of the silica deposition: 1. formation of the monolayer at the beginning of polarization; 2. transfer of CTA⁺ org→aq followed by micelle formation and silica condensation; 3. partial backward transfer of CTA⁺ to the organic phase and silica deposition.

This final stage corresponds to scheme 3 on Figure 4.1. CVs recorded during interfacial silica material formation for [TEOS]aq = 50 mM and [CTA⁺]org = x mM, where 1.5 mM ≤ x ≤ 14 mM, can be found on Figure 4.2 A. When [CTA⁺]org was kept constant (14 mM) and [TEOS]aq was increased from 50 mM up to 300 mM, the CVs shown on Figure 4.2 B were recorded. To study the influence of [CTA⁺]org and [TEOS]aq, the deposition conditions were varied. When [TEOS]aq was kept constant and [CTA⁺]org was increased from 1.5 mM up to 14 mM, a linear increase of the negative peak current - arising from the CTA⁺ org→aq transfer - was observed (as shown in the insert of Figure 4.2 A). This reaction is the rate-determining step. The characteristics of the positive peaks are different: the Faradaic current associated with the CTA⁺ back transfer grew up to [CTA⁺]org = 5 mM and leveled off for higher concentrations (see Figure 4.2 A).
Such behavior can be explained in the following manner: the formation of silica deposits assisted by the surfactant template at low [CTA⁺]org involved most of the CTA⁺ species transferred to the aqueous phase, and hence a low current was expected during the backward transfer. A higher [CTA⁺]org resulted in the formation of a much thicker silica deposit, which acts as a physical barrier for the CTA⁺ returning to the aqueous phase. When [CTA⁺]org was kept constant and [TEOS]aq was increased from 50 mM up to 300 mM, no significant changes in the current response and the shape of the CVs were observed. The only difference was a shift of the CTA⁺ org→aq transfer potential towards more positive values (100 mV) for higher [TEOS]aq. This was not surprising, since the concentration of the negatively charged hydrolyzed TEOS species in the aqueous phase was increased by a factor of 6, which promotes the transfer of CTA⁺ org→aq at higher potential values.

Figure 4.2. A - Cyclic voltammograms recorded during silica deposit formation for [TEOS]aq = 50 mM and [CTA⁺]org = 3 mM (solid line), 5 mM (dashed line), 10 mM (dotted line) and 14 mM (dash-dot line). The linear dependence of the negative peak current versus CTA⁺ concentration for constant [TEOS]aq can be found as an insert. B - Cyclic voltammograms recorded for [CTA⁺]org = 14 mM and [TEOS]aq = 50 mM (black curve) and 300 mM (red curve). The microITIES membrane design was number 3. The scan rate in all cases was 5 mV/s.

Depending on the geometry of the array, different diffusion regimes apply at an array of microITIES (see Figure 4.3 for schemes with the corresponding designations for the array of microITIES).
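As a rough guide to which regime applies (a standard order-of-magnitude estimate for microelectrode arrays, not stated explicitly in the text), the diffusion layer thickness δ developed over the experimental time scale t can be compared with the pore center-to-center distance S defined in Figure 4.3:

```latex
\delta \approx \sqrt{2Dt}, \qquad t \approx \frac{\Delta E}{v}
```

where D is the diffusion coefficient of the transferring ion, ΔE the scanned potential range and v the scan rate. Independent radial diffusion (steady-state response) is expected when 2δ is much smaller than the inter-pore gap S - 2r, whereas overlap of the neighboring diffusion zones (2δ approaching or exceeding that gap) restores a peak-shaped, linear-diffusion-like response.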
For each group, they have proposed the voltammetric current response characteristics for a charge transfer reaction: (i) linear diffusion leading to a clear peak, (ii) radial diffusion leading to a steady state wave, (iii) slight overlap of the diffusion profiles leading to a slight to clear peak and (iv) total overlap of the diffusion profiles giving a clear peak; of these, only group (ii) does not follow a scan rate dependency.[START_REF] Davies | The Cyclic and Linear Sweep Voltammetry of Regular and Random Arrays of Microdisc Electrodes: Theory[END_REF]

Figure 4.3. Schematic representation of microITIES arrays with different diffusion zones. Scheme A shows the geometrical parameters of the membrane supporting the microITIES: r - the pore radius, S - the pore center-to-center distance (in other words, the spacing factor) and δ - the diffusion layer thickness.

The schemes of Figure 4.3 show diffusion layer profiles on the aqueous side of the liquid - liquid interface, but they can also be applied, by analogy, to the organic side of the liquid - liquid interface. On Figure 4.4 the CV curves have been presented as current density versus potential in order to take into account the fact that the number of pores within the array supporting the microITIES, and consequently their total surface area, varies for each design.

Figure 4.4. Variations of the current density as a function of the applied potential recorded during silica deposit formation at membranes supporting arrays of microITIES with different spacing factors.

Figure 4.5 A illustrates the influence of the scan rate on the current - potential curves recorded during interfacial silica deposition at the membrane design number 3. Both the current

Figure 4.6. The charge for the forward (empty squares) and reverse (filled circles) processes recorded during silica deposit formation versus scan rate.
solutions ([TEOS]aq = 50 mM and [CTA+]org = 14 mM), but different supporting silicon membranes and distinct deposition conditions (potential scan rates and number of cycles, as well as the deposition method). The data are presented in Figure 4.7 in the form of side views of a single pore (row a), of pore arrays (row b) and of profilometry mapping (row c). Column 3 corresponds to the modification of membrane design number 3 by one scan at 0.1 mV/s, leading to much thicker deposits (13.1 µm in height and 33.1 µm in diameter). Since these deposits were massive compared with those of the other experiments, they were likely to be more affected by calcination and can suffer some losses (see, e.g., one deposit missing at the bottom left of part 3b in Figure 4.7).

Figure 4.7. SEM micrographs and 3D profilometry mapping based on shear-force measurements obtained for various microITIES membranes modified with silica deposits. The rows correspond to three different points of view: (a) side view of a single pore recorded by SEM, (b) side view of an array of modified interfaces recorded by SEM and (c) modified-interface mapping made by profilometry. The columns divide the images depending on the initial synthesis conditions: (1) deposits prepared by one linear scan voltammetry (half CV scan) at 5 mV/s using microITIES number 2; (2) deposits prepared by three successive CV scans at 5 mV/s using microITIES number 2; and (3) deposits prepared by one CV scan at 0.1 mV/s using microITIES number 3. The polarization direction was always from anodic to cathodic potentials.

This trend is summarized in Figure 4.8, where the deposit height, h, is plotted as a function of the difference between the deposit and pore radii, rd - rp. At the beginning of the deposition process h ≈ rd - rp, corresponding to a rather flat morphology, while subsequent growth led to a more hemispherical shape, with h values likely to rise up to 2×(rd - rp).
Finally, the more massive deposits were characterized by preferential lateral growth (h < 2×(rd - rp)). On the other hand, the displacement of the liquid-liquid micro-interfaces under charge transfer,213 which could affect the internal geometry of the deposits, seems to be minimal here. SEM images of detached membranes (see Figure 4.9 A and B) indeed suggest that the silica deposits are filled inside and rather flat at the bottom.

Figure 4.8. A - Deposit height versus the difference between deposit and pore radii for materials prepared either by cyclic voltammetry or by chronoamperometry. The schemes on the right (SEM micrographs and corresponding drawings) correspond to the two 'extreme' cases indicated with the

Figure 4.9. A - SEM image of a silica deposit turned upside down. The inset shows a zoom of the deposit, with the dashed white line indicating the imprint of the pore supporting the liquid-liquid interface. B - SEM micrograph of a pore from which the silica deposit has been removed. In both cases the silica deposition was performed with one cyclic voltammetry cycle for [CTA+]org = 14 mM and [TEOS]aq = 50 mM. The membrane design and the scan rate are indicated on the micrographs.

The 'worm-like' shape of the pores in the silica deposits generated at the macroscopic liquid-liquid interface (confirmed by TEM and SAXS) was also found in the silica deposits at the microITIES, as shown in the TEM micrographs of Figure 4.10.

Figure 4.10. TEM micrographs of the silica deposits electrogenerated for [CTA+]org = 14 mM and [TEOS]aq = 50 mM. The scan rate was 5 mV/s. Deposition was performed at membrane design number 4 with one cycle.

Figure 4.11. Raman spectra of the silicon membrane (black line) and of a silica deposit after calcination (red line).
The enlargement of the spectra in the 300 - 800 cm-1 region corresponds to the vibration frequencies of silica bonds. Spectra were normalized to the Si-Si peak.

Figure 4.12. Two different vibrational ranges of the Raman spectra recorded for a silica deposit before (black curve) and after (red curve) calcination.

Figure 4.13. A - blank cyclic voltammograms and B - voltammetric transfer of [TMA+]aq = 70.9 µM before (red line) and after (black line) calcination. The microITIES membrane design number was 3. Silica deposits were electrogenerated by CV (2 cycles at 5 mV s-1, [CTA+] = 14 mM and [TEOS] = 300 mM).

The liquid-liquid phase junction was also investigated by surface-enhanced Raman spectroscopy.111 In the present work, an experimental set-up was developed to couple electrochemical measurements at the ITIES with confocal Raman microscopy. The association of electrochemistry and Raman techniques at microscopic liquid-liquid interfaces allows the collection of complementary data on the mechanism involved in the electrochemically assisted generation of surfactant-templated silica material at such interfaces. The incident laser was focused on both macroscopic and microscopic interfaces to allow the recording of Raman spectra at open circuit potential and upon application of interfacial potential differences. Finally, the changes in the interfacial molecular composition of the microITIES were studied during the formation of the silica deposit. Figure 4.14 shows the molecular composition of the two immiscible phases during silica electrogeneration.

Figure 4.14. Composition of the aqueous and the organic phase during electrochemical silica deposition at an array of microITIES.
Figure 4.15 shows the different Raman spectra recorded at the interface formed between a 5 mM NaCl aqueous solution and DCE in the absence (Figure 4.15 a) and in the presence of 10 mM BTPPA+TPBCl- (Figure 4.15 b), of 10 mM BTPPA+TPBCl- and 14 mM CTA+TPBCl- (Figure 4.15 c) and of 53 mM CTA+TPBCl- (Figure 4.15 d) in the organic phase. Prior to spectrum collection, the laser was focused on the macroscopic liquid-liquid interface. Spectrum (a) from Figure 4.15 was recorded at the liquid-liquid interface with pure DCE as the organic phase. The series of peaks obtained corresponds to the Raman spectrum of pure DCE recorded in solution in the macroscopic cell (see spectra A in Figure 4.16). The peaks at 653 and 673 cm-1 can be assigned to the C-Cl stretching modes of the gauche conformer, whereas the peak at 753 cm-1 arises from the C-Cl Ag stretching mode of the trans conformer.235

Figure 4.15. Raman spectra recorded at open-circuit potential at the macroscopic liquid-liquid interface constituted between a 5 mM NaCl aqueous solution and: a) DCE, b) 10 mM BTPPA+TPBCl- in DCE, c) 10 mM BTPPA+TPBCl- and 14 mM CTA+TPBCl- in DCE and d) 53 mM CTA+TPBCl- in DCE. Raman bands marked with () were assigned to BTPPA+, with () to TPBCl-, with () to DCE and with (△) to CTA+. All spectra were normalized to the band at 2987 cm-1.

Additional bands appeared (Figure 4.15) at 725, 1000 and 1078 cm-1. The peak at 725 cm-1 was probably due to the aromatic C-Cl vibration and arose from the presence of the anion of the organic electrolyte. The origin of the peak at 1000 cm-1 was ascribed to the presence of the aromatic rings of BTPPA+ (also found elsewhere)217 and can therefore be treated as a trace of the organic electrolyte. The peak at 1078 cm-1 can be assigned to the vibration of the aryl-Cl bond present only in TPBCl-.

Figure 4.16.
Raman spectra recorded for a) DCE, b) BTPPA+TPBCl-, c) BTPPA+Cl- and d) K+TPBCl-.

Raman spectra were also recorded under polarization (see Figure 4.17 A for the full spectra and B for the spectral region of interest) to ensure that the variations of the Raman peak intensities are due to ion transfer rather than to a displacement of the interface. Prior to spectrum collection, the laser spot was focused at the microITIES under open circuit potential (see the dashed curve in Figure 4.17 A and B for the recorded spectrum). Then, the interfacial potential difference was applied. Three types of behaviour were observed, corresponding to the different molecules present in the solution. As the interface is polarized at more negative potentials, all bands (653, 673 and 753 cm-1) attributed to DCE remained unchanged.

Figure 4.17. A - full Raman spectra and B - Raman spectra in the region from 975 cm-1 to 1100 cm-1 recorded at the microITIES between a 5 mM NaCl aqueous phase and a 10 mM BTPPA+TPBCl- organic phase under different negative polarizations. For A and B the dashed line corresponds to the Raman spectrum recorded at open circuit potential. The dotted line in A was recorded under a negative polarization potential of -300 mV, whereas the solid line was recorded at -800 mV. The spectra in B were recorded from +200 mV down to -800 mV. C - shows the peak intensity (after normalization to 520 cm-1) as a function of the applied potential - empty squares correspond to the peak at 1002 cm-1 while filled circles are attributed to the peak at 1078 cm-1 (in that case the error bars are too small to notice); the open circuit potential was 200 mV. Raman peaks marked with () were assigned to BTPPA+, with () to DCE and with () to TPBCl-. D - is the blank voltammogram recorded prior to spectra collection, with a scan rate of 5 mV/s.

The spectra recorded during in situ silica formation are shown in Figure 4.18 A for three distinct spectral regions (Figure 4.18 B shows the potential values at which the Raman spectra were recorded).
After the first potential scan, the molecular composition of the liquid-liquid interface had changed and the spectrum obtained became much more complex than at OCP. The arrows in Figure 4.18 indicate the evolution of particular peaks upon repetitive scans, with some of the intensities increasing with the number of scans while others drop down.

Figure 4.18. A - Raman spectra recorded during in situ silica material formation at the microITIES in three different spectral regions. The arrows on the graphs indicate the direction of peak evolution, whereas the colors correspond to the order of spectra collection (1st - black, 2nd - red, 3rd - green, 4th - blue and 5th - grey). The Raman spectrum collection was performed alternately with ion-transfer linear sweep voltammetry, as shown schematically in part B of the graph. Raman bands marked with (□) were assigned to BTPPA+, with () to TPBCl-, with (☆) to DCE and with (△) to CTA+.

The permeability properties of the silica deposits were then evaluated with ion transfer voltammetry. Interfacially active ions of different charge, size and nature were employed for this purpose: (i) three tetraalkylammonium cations of different size, (ii) a singly charged anion and (iii) two generations - G0 and G1 - of PAMAM dendrimers. The silica deposition was performed with one voltammetric cycle at 5 mV/s for [CTA+]org = 14 mM and [TEOS]aq = 50 mM (see Figure 4.4 C). The silica-modified silicon membranes were then calcined at 450 °C for 30 min. Figure 4.19 shows the blank CVs recorded before and after modification of the microITIES array with the silica deposits. The potential region was scanned from more negative to more positive potentials on the forward scan, as indicated in Figure 4.19 with the dotted arrow. The potential window was determined by the transfer of the supporting electrolyte ions.

Figure 4.19.
CVs recorded only in the presence of the supporting electrolytes (10 mM LiCl in the aqueous phase and 10 mM BTPPA+TPBCl- in the organic phase) before (black line) and after (red line) modification with silica deposits. The insert is the CV recorded in the absence of the aqueous supporting electrolyte before (black line) and after (blue line) modification. The scan rate was 10 mV/s. The dotted arrow indicates the direction of polarization on the forward scan.

Figure 4.20. CVs of four different ions crossing the interface in the absence (black curves) and in the presence (red curves) of silica deposits at the array of microITIES. A - shows the transfer of three cations of different size (starting from the left): TBA+, TEA+ and TMA+. B - is the transfer of the negatively charged 4OBSA-. Arrows indicate the direction of polarization on the forward scan. The concentration of each ion was 56.8 µM and the scan rate was 10 mV/s.

Figure 4.22 shows the electrochemical behavior of PAMAM dendrimers of generation 0 (G0) (Figure 4.22 A) and generation 1 (G1) (Figure 4.22 B) before (black lines) and after (red lines) modification. The voltammograms are shown on the Galvani potential scale, based on the internal reference - the transfer peak of Cl- (although this peak is not present on the graphs shown, the correction was made with regard to the blank CV recorded prior to each experiment). Both the PAMAM dendrimers (28 µM) and the model ion TEA+ (42 µM) were initially present in the aqueous phase to facilitate the comparison. The CVs from Figure 4.22 A and B before modification had the same characteristics, independent of the generation of the PAMAM dendrimer studied. A first sigmoidal wave rose at +44 mV, corresponding to the non-diffusion-limited transfer of TEA+ (aq→org). The second sigmoidal wave originated from the PAMAM (aq→org) transfer, which was partially masked by the organic electrolyte anion transfer, TPBCl- (org→aq).
On the reverse scan, the back transfer of the PAMAM dendrimers resulted in a peak response, followed by the diffusion-limited back transfer of TEA+ (org→aq).

Figure 4.22. Cyclic voltammograms illustrating the transfer of A - PAMAM generation G0 and B - PAMAM generation G1 dendrimers in the absence (black line) and in the presence (red line) of silica deposits. The concentrations are [TEA+]aq = 42 µM and [PAMAM G0 or G1]aq = 28 µM. The scan rate was 10 mV s-1. Black solid arrows indicate the direction of polarization during the forward scan. The inserts are the CVs after background subtraction.

Figure 4.23 and Figure 4.24 have three columns: a) CVs recorded before modification; b) CVs recorded after modification and c) the corresponding calibration curves. The rows in Figure 4.23 can be ascribed to 1 - TMA+, 2 - TEA+ and 3 - TBA+, whereas those in Figure 4.24 to 4 - 4OBSA-, 5 - PAMAM G0 and 6 - PAMAM G1. The theoretical calibration curves (see the solid lines in Figure 4.23 and Figure 4.24, column c) were calculated with the equation expressing the limiting current in the presence of a hemispherical diffusion zone, eq. (3.5), with diffusion coefficients taken from other studies (the exception was 4OBSA-, for which the diffusion coefficient was extracted from a linear fit of the experimental calibration curve; red dashed line in Figure 4.24, row 4, column c). Berduque et al. have shown that the electrochemical behavior of PAMAM dendrimers G0 and G1 is characteristic of multiply charged species [Berduque et al., "Electrochemistry of Non-Redox-Active Poly(propylenimine) and Poly(amidoamine) Dendrimers at Liquid-Liquid Interfaces"]. However, they demonstrated that the charge transferred is lower than the theoretical charge.

Figure 4.23. Data correspond to TMA+ (row 1), TEA+ (row 2) and TBA+ (row 3). Cyclic voltammograms for different concentrations of the interfacially active ion recorded before modification are shown in column a, whereas their voltammetric behavior after modification is depicted in column b.
Column c represents the calibration curves before modification (black squares), after modification (empty circles) and the theoretical values calculated using eq. (3.5) (solid line). Error bars, where present, are smaller than the size of the points. Arrows indicate the direction of polarization during the forward scan. The scan rate was 10 mV/s.

Figure 4.24. Continuation of Figure 4.23. Data correspond to 4OBSA- (row 4), PAMAM G0 (row 5) and PAMAM G1 (row 6). The remaining description is identical to that of Figure 4.23.

Figure 4.25. Sensitivity ratio, S/S0, as a function of zD for 4OBSA-, TBA+, TEA+, TMA+, PAMAM G0 and PAMAM G1.

Scheme 5.1. Methylation reaction of aminodiphenylmethane.

The NMR spectra of ADPM and PH+I- are shown in Figure 5.1 as spectra a and b, respectively. Three characteristic resonances (Figure 5.1 a) were attributed to the ADPM protons. The resonance at around 9.2 ppm, marked with A, corresponds to the protons of the amine group. The multiple peaks from 7.3 to 7.6 ppm, marked as B, are due to the phenyl rings.

Figure 5.1. Proton NMR spectra for a) ADPM and b) PH+I-. The studied molecules were dissolved in DMSO. Spectra were recorded on a 300 MHz spectrometer.

Figure 5.3. Raman spectra in the region from 2700 to 3200 cm-1. Bands correspond to:  - symmetric CH3,  - antisymmetric CH3 and ▲ - aromatic C-H vibrational modes, respectively. The black line corresponds to PH+I-, whereas the red line corresponds to ADPM.

The electrochemical behavior of PH+I- (see Figure 5.4 a) was studied at the single-pore glass capillary supporting the microITIES - for the protocol of preparation refer to section 2.5.6 in chapter II. The pore diameter was 25 µm and the interior of the capillary was silanized prior to use.

Figure 5.4. Cyclic voltammograms recorded at the single-pore microITIES. The dashed line represents the blank CV. The solid line corresponds to 0.5 mg/ml of the post-synthesis mixture containing PH+I- dissolved in the organic phase.
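The theoretical calibration values from eq. (3.5) mentioned above can be sketched numerically. The snippet below assumes the common steady-state expression for an array of independent inlaid micro-interfaces, I = N·4·z·F·D·c·r; note that this prefactor (4) is an assumption here — a strictly hemispherical diffusion zone would give 2π instead — so the constant should be checked against the exact form of eq. (3.5). All numerical values are illustrative.

```python
# Sketch of a theoretical calibration point for an array of micro-interfaces.
# Assumes the inlaid-microdisc limiting-current form I = 4*z*F*D*c*r per pore;
# the hemispherical prefactor would be 2*pi instead of 4. Values illustrative.
F = 96485.0  # Faraday constant, C/mol

def limiting_current(z, D, c, r, n_pores=1):
    """Steady-state limiting current (A) for n_pores independent micro-interfaces.

    z : ion charge; D : diffusion coefficient (m^2/s);
    c : bulk concentration (mol/m^3); r : pore radius (m).
    """
    return n_pores * 4 * z * F * D * c * r

# Example: z = 1, D ~ 1e-9 m^2/s, c = 50 uM (= 0.05 mol/m^3),
# 8 pores of 25 um radius
i = limiting_current(1, 1e-9, 0.05, 25e-6, n_pores=8)
print(f"{i * 1e9:.2f} nA")
```

Evaluating this expression over a range of concentrations reproduces a straight calibration line through the origin, which is the "theoretical" solid line plotted in column c; a reduced experimental slope after modification then directly reflects the permeability loss caused by the silica deposits.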
The arrows indicate the direction of polarization on the forward scan. The scan rate was 5 mV/s. Schemes b) and c) correspond to radial diffusion from the aqueous side of the interface and linear diffusion from the organic side of the interface, respectively.

Figure 5.5. Ion chromatography used for iodide detection. a) Chromatograms for known concentrations of aqueous solutions of potassium iodide (the insert corresponds to a calibration curve), b) chromatograms for the water used for iodide extraction in four subsequent purification steps after the PH+TPBCl- metathesis reaction. The eluent used was an aqueous solution of 3.2 mM Na2CO3 and 1 mM NaHCO3. In all cases the background was subtracted for better data presentation.

Figure 5.5 (b) shows the chromatograms for the four samples after subsequent rinsings in the region of the iodide retention-time peak. After the first rinsing the iodide concentration was equal to 3.07 mg/mL and dropped to 0.74 mg/mL after the second rinsing. No iodides were detected after the third and fourth rinsings. Based on this result, three rinsing steps were adopted for iodide removal after the metathesis reaction.

Figure 5.6. Current density versus potential recorded for 1 mM PH+TPBCl- initially present in the organic phase at an array of microITIES (8 pores, each 50 µm in diameter). The insert is the current density versus potential for PH+I- before the metathesis reaction (recorded at a single-pore microITIES with a diameter of 25 µm). The schemes represent the radial diffusion from the aqueous side of the interface and the linear diffusion inside the pore filled with the organic phase. Solid arrows indicate the direction of polarization during the forward scan. The scan rate was 10 mV/s.

Figure 5.7 A shows the CVs recorded in the cell with the macroscopic liquid-liquid interface.
The curve marked with the dashed line represents a blank voltammogram recorded only in the presence of the supporting electrolytes (5 mM NaCl in the aqueous phase and 10 mM BTPPA+TPBCl- in the organic phase). The curve marked with the solid line was recorded in cell 5 for x = 330 µM. The polarization direction, based on the charge of PH+, was from a more positive to a less positive potential during the forward scan. For the concentration studied, PH+ starts to transfer from the organic to the aqueous phase at around +350 mV, reaching a maximal current (-67.5 µA) for the peak at +220 mV. On the reverse scan, the back-transfer peak center was found at +350 mV with a height of 64.7 µA. The half-wave transfer potential (E1/2) for PH+ is +300 mV. CVs for different concentrations of PH+ in the organic phase (from 50 µM to 832 µM) are overlaid and shown in Figure 5.7 B. The linear increase of the current on the forward scan as a function of the concentration is shown as an inset of Figure 5.7 B. The increasing peak separation at higher analyte concentrations is common at the liquid-liquid interface and might arise from the system resistivity as well as from partial interfacial adsorption of PH+.

Figure 5.7. A - Cyclic voltammogram in the absence (dotted line) and presence (solid line) of 330 µM PH+TPBCl- in the organic phase. B - Cyclic voltammograms for different concentrations of PH+TPBCl- in the organic phase: 123.4 µM; 470 µM; 1.01 mM; 2.02 mM and 2.82 mM. The insert shows the forward peak current as a function of concentration.

Figure 5.8. A - Cyclic voltammograms recorded for the interfacial transfer of PH+ (concentration 50 µM) initially present in the organic phase at different scan rates (from 5 to 25 mV/s, every 5 mV/s). B - corresponds to the positive and negative peak currents versus the square root of the scan rate. R2 is the coefficient of determination of the linear fitting. The dashed arrow shows the direction of polarization during a forward scan.
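The linearity of the peak current with the square root of the scan rate (Figure 5.8 B), diagnostic of a diffusion-controlled transfer, can be checked with a plain least-squares fit. The data points below are illustrative, not the measured values.

```python
import math

def linear_fit(x, y):
    """Least-squares slope, intercept and R^2 for i_p versus sqrt(v)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
    return slope, intercept, r2

# Illustrative data: scan rates (mV/s) and peak currents (uA), with i_p ~ sqrt(v)
v = [5, 10, 15, 20, 25]
ip = [2.2, 3.1, 3.8, 4.4, 4.9]
sqrt_v = [math.sqrt(vi) for vi in v]
slope, intercept, r2 = linear_fit(sqrt_v, ip)
print(f"R^2 = {r2:.4f}")  # R^2 close to 1 indicates diffusion control
```

An R^2 close to unity, as obtained in Figure 5.8 B, supports a diffusion-limited transfer of PH+ across the interface rather than an adsorption-controlled process (for which i_p would scale linearly with v).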
Figure 5.9 A is the reference chromatogram recorded for EtOH. The chromatogram in Figure 5.9 B was recorded for PH+I- dissolved in EtOH without any irradiation. The peak at 0.57 min corresponds to the solvent (EtOH), whereas the peak at 12.28 min was attributed to the PH+ species. In the second approach, UV irradiation was coupled with electrochemistry at the liquid-liquid interface. Figure 5.9 C corresponds to the chromatogram recorded for the aqueous phase collected from above the organic phase after 50 min of chronoamperometric (E = +150 mV) PH+ (org→aq) transfer with simultaneous UV irradiation. Photolysis of PH+ in the aqueous solvent led to the formation of the benzhydryl carbocation. The stability of the carbocation in the nucleophilic (aqueous) solvent should be relatively weak, and hence the formation of the alcohol is expected (less probable adducts are also possible, for instance benzhydryl chloride). The peak at around 3.14 min was assigned to the resulting benzhydrol (see Figure 5.9 C) and was confirmed by the injection of pure benzhydrol into the chromatographic column. A trace of undecomposed PH+ can also be seen as a very weak peak.

Figure 5.9. Chromatograms recorded during the photodecomposition study of PH+ cations: A - pure EtOH; B - 100 µM PH+I- in EtOH; C - the aqueous phase collected from the liquid-liquid cell after 50 min of chronoamperometric transfer (E = 0.15 V) of PH+ from the organic to the aqueous phase with simultaneous UV irradiation and D - the aqueous phase collected from the liquid-liquid cell under the conditions applied in part A, in the absence of PH+TPBCl- in the organic phase. The mobile-phase flow rate was 3 ml/min. The detector wavelength was 220 nm.

In the experiment of Figure 5.10, the liquid-liquid cell was kept at open circuit potential during irradiation and CVs were recorded before and after 40 min of irradiation.
The current decrease of the forward and reverse peaks induced by continuous UV irradiation suggests the degradation of the interfacially active PH+ species in the organic phase. The forward and reverse peak currents decreased from if = 129.6 µA and ir = 131.4 µA before irradiation down to if = 69.3 µA and ir = 44.5 µA after 40 min of UV irradiation. Interestingly, the available potential window was narrowed, as a large background current was recorded at the positive end of the potential scale. This increase can be attributed to proton transfer. The origin of the increased proton concentration is discussed in the following section.

Figure 5.10. The cyclic voltammogram recorded for 1.01 mM PH+TPBCl- initially present in the organic phase is marked with the red line. The black curve was recorded after 40 minutes of UV irradiation. Solid arrows show the current decrease caused by irradiation. The direction of polarization is indicated with the dashed arrow. The scan rate was 10 mV/s.

Mass spectrometry was employed in order to study the content of the aqueous phase after the PH+ (org→aq) transfer.

For each point in Figure 5.12, the ITIES was held at the corresponding potential for 150 seconds (red circles) and 420 seconds (black squares). Apparently the pH of the aqueous phase stays unaffected up to 0 V. The pH jump due to OH- electrogeneration at the aqueous counter electrode was observed at around -100 mV and reaches a plateau for potentials < -200 mV. The pH change around the counter electrode was additionally detected with a pH indicator - phenolphthalein - added to the aqueous phase during voltammetric cycling in the presence of PH+TPBCl- in the organic phase. The fuchsia cloud - indicating the pH increase - around the aqueous counter electrode starts to be visible once +100 mV was reached on the forward scan. Polarization of the interface towards less positive potentials caused the fuchsia cloud to grow.
On reverse polarization the fuchsia cloud was seen up to +300 mV, until it blurred away and disappeared. No change in color was detected in the vicinity of the liquid-liquid interface, neither in the absence nor in the presence of UV irradiation coupled with the PH+ (org→aq) transfer.

Figure 5.12. Changes of the bulk pH of the aqueous phase induced by the side reaction at the platinum-mesh counter electrode. The pH for each point was measured after 150 seconds (red circles) and 420 seconds (black squares) of chronoamperometric polarization at a given potential. The pH at OCP before polarization was 5.73 (red circles) and 5.83 (black squares). The photos indicate the change in color of phenolphthalein around the platinum electrode.

The liquid-liquid interface was supported with the array of pores, each 50 µm in diameter (see Figure 2.3 in section 2.2), and hence local pH measurements at the micrometer scale had to be performed. Several possible pH probes could be employed in this regard. Examples include: (i) microelectrodes modified with a neutral-carrier-based ion-selective liquid membrane,263 (ii) the two-dimensional semiconductor pH probe,264 (iii) antimony-antimony oxide electrodes265 or (iv) electrodes modified with iridium oxide.261,262 The last example is especially worthy of note. Iridium oxide modified electrodes exhibit Nernstian behavior, are stable over long time periods, have a pH operating range from 2 to 12 and are cheap compared with iridium microwires. One of the methods of preparation is electrodeposition from an alkaline iridium(III) oxide solution. Such an approach was employed in this work to modify Pt microdisc electrodes. Modification with iridium oxide was performed with 10 subsequent voltammetric scans (see Figure 5.13 a) at a scan rate of 50 mV/s between 0 V and +1300 mV versus a silver wire reference electrode.

Figure 5.13.
a) Cyclic voltammograms recorded during the electrodeposition of iridium oxide at Pt microelectrodes, b) the SEM image of the corresponding modified electrode.

On the forward scan, the electrode was polarized towards an anodic potential. Three pairs of signals corresponding to different oxidation states of iridium can be distinguished: (i) E1/2 of 220 mV due to the Ir(II) ⇌ Ir(III) redox couple, (ii) E1/2 at around 0.63 V originating from the Ir(III) ⇌ Ir(IV) redox couple and (iii) the signal arising at the positive extreme of the potential window.

Figure 5.14. Local pH measurements as a function of the experimental time. Black circles correspond to the pH measurements 1 µm above the ITIES in the presence of 1 mM PH+TPBCl- in the organic phase during the PH+ (org→aq) transfer and simultaneous UV irradiation (with the exception of the first point, recorded 1 µm above the ITIES in the absence of irradiation). Red squares were recorded under identical conditions in the absence of PH+TPBCl- in the organic phase. Green symbols correspond to the pH measurements in the bulk aqueous phase performed before and after the local pH measurements. The insert corresponds to the calibration curves recorded before (black points) and after (red points) the experiment with PH+TPBCl- in the organic phase.

To check for possible photodecomposition products of the organic electrolyte, a control experiment in the absence of PH+TPBCl- in the organic phase was performed (see the red squares in Figure 5.14). No pH change was observed over the studied experimental time once the pH probe was placed 1 µm above the liquid-liquid interface, whose potential was held at +150 mV with simultaneous UV irradiation.

Figure 5.15 shows the CVs recorded at an array of microITIES (a) and at a macroITIES (b) separating 8 mM BTPPA+TPBCl- in 20% TEOS in DCE and a 5 mM NaCl aqueous solution, in the absence and in the presence of CTA+Br- in the aqueous phase.
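The local pH values described above are obtained by converting the potential measured at the IrOx probe through a calibration line. A minimal sketch of that conversion is given below; IrOx films typically show a (super-)Nernstian response of roughly -59 to -80 mV/pH, and the buffer pH values and potentials used here are hypothetical, standing in for the calibration curves of Figure 5.14.

```python
def calibrate(ph1, e1, ph2, e2):
    """Two-point calibration of an IrOx pH probe.

    Returns (slope in mV/pH, intercept in mV) of the straight line E = slope*pH + intercept.
    """
    slope = (e2 - e1) / (ph2 - ph1)
    return slope, e1 - slope * ph1

def potential_to_ph(e, slope, intercept):
    """Convert a measured probe potential (mV) into a local pH value."""
    return (e - intercept) / slope

# Hypothetical calibration: buffers at pH 4 and pH 10,
# super-Nernstian response of about -70 mV/pH
slope, intercept = calibrate(4.0, 380.0, 10.0, -40.0)
print(potential_to_ph(240.0, slope, intercept))  # prints 6.0
```

Recording the calibration both before and after the experiment, as done in Figure 5.14, guards against drift of the slope or intercept during the long local-pH measurement series.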
Figure 5.15. a) Cyclic voltammograms recorded at the array of microITIES (8 pores, 50 µm in diameter each). (A) corresponds to the blank system (8 mM BTPPA+TPBCl- in 20% TEOS in DCE and 5 mM NaCl), whose aqueous phase was further enriched with (B) 0.70 mM, (C) 1.38 mM and (D) 2.02 mM CTA+Br-. b) Cyclic voltammograms recorded in the macroscopic cell supporting the ITIES. (E) is the blank (8 mM BTPPA+TPBCl- in 20% TEOS in DCE and 5 mM NaCl), which can also be found in the insert, whereas (F) additionally contained 1 mM CTA+Br- in the aqueous phase.

The addition of CTA+Br- at the macroscopic interface (Figure 5.15 (b)) led to system destabilization and only resistance was recorded. No silica deposit formation was observed, since under such conditions the hydrolysis of TEOS at the liquid-liquid interface is very slow. The CVs recorded at the microITIES (see Figure 5.15 (a)) showed that, compared with the macroscopic system, the interface is mechanically stable up to 2.02 mM CTA+Br- (the highest concentration studied) in the aqueous phase. The only difference observed was the shift of the whole voltammogram towards more positive potential values and the evolution of the peak limiting the potential window on the negative side of the potential scale - probably arising from the CTA+ (aq→org) transfer.

Figure 5.16. Cyclic voltammograms recorded at the macroscopic liquid-liquid interface. The red curve corresponds to the blank solution: 8 mM BTPPA+TPBCl- in 20% TEOS in DCE and 5 mM NaCl. The remaining black curves were recorded at the ITIES whose organic phase additionally contained 500 µM PH+TPBCl-, whereas the aqueous phase was enriched with 0 mM (A), 0.5 mM (B) and 2 mM (C) CTA+Br-. The TEM micrographs correspond to the silica deposits whose deposition was conducted under the conditions indicated by the corresponding CVs.

Figure 5.16 shows the voltammograms recorded prior to the local pH change and silica deposition.
As already shown in Figure 5.15 (b), the presence of cationic surfactants in the aqueous phase destabilizes the macroscopic liquid-liquid interface. In the presence of 0.5 mM CTA+Br- in the aqueous phase (curve B in Figure 5.16), a shift of the voltammogram and some current fluctuations were observed, whereas only a resistive current was recorded for 2 mM CTA+Br- (curve C in Figure 5.16). To trigger the silica deposition, the interface was held at E = +150 mV for 60

The in situ insight into the polarized ITIES by confocal Raman spectroscopy (the dimensions of the miniaturized liquid-liquid interface required the application of a local characterization technique) allowed the study of different molecular contributions. Two phenomena were followed: (i) the interfacial ion transfer reaction and (ii) the electrochemically controlled interfacial formation of silica deposits. In general, the information extracted during this work reveals the following:
1. The negative polarization affected the molecular composition of the liquid-liquid interface, as the Raman signals corresponding to BTPPA+ and TPBCl- increased and decreased, respectively;
2. No interface movement was detected, since the set of Raman bands attributed to DCE remained unchanged independently of the potential applied;
3. The molecular composition of the liquid-liquid interface during silica formation changed dramatically after the first half of the voltammetric cycle;
4. Strong signals from CTA+, BTPPA+ and TPBCl- were found during the electrodeposition, the intensity of these signals tending to grow with the number of voltammetric cycles.

In general, the silica deposition was performed by the sol-gel process in the presence of molecules known as 'templates'. The deposition was controlled by electrochemistry at the liquid-liquid interface.
To this end, two categories of ITIES (interface between two immiscible electrolyte solutions) were used: (i) a macroscopic liquid - liquid interface created in a four-electrode electrochemical cell and (ii) a liquid - liquid interface of microscopic dimensions, in the form of a single interface or of an array of micro-interfaces.

First, the macroscopic liquid - liquid interface was used to study the mechanism of silica deposition. For this, a cationic surfactant - CTA+ - initially dissolved in a 10 mM solution of BTPPA+TPBCl- in dichloroethane, was used both as a template and as a catalyst during silica formation. A silica precursor - TEOS - was then hydrolyzed in the aqueous phase (pH = 3) composed of 5 mM NaCl. After hydrolysis, the pH of the aqueous phase was raised to 9 in order to promote the formation of polynuclear silanol species. The transfer of CTA+ from the organic phase to the aqueous phase was controlled by interfacial polarization and observed in the presence of the hydrolyzed silica species in the aqueous phase. This type of reaction is known as facilitated ion transfer. Silica formation was triggered once the CTA+ cations had been transferred into the aqueous phase. There, the formation of spherical micelles catalyzes the condensation reaction and acts as the template structuring the silica deposits. The general conclusions concerning the modification of the macroscopic liquid - liquid interface are the following:
1. A characteristic drop of the current in cyclic voltammetry during the back transfer (CTA+ aq→org) indicates that the transfer was irreversible and that CTA+ was trapped in the silica material;
2. The silica material is formed at the liquid - liquid interface after only one voltammetric cycle performed at 5 mV/s;
3.
An increase of the charge associated with the back transfer (CTA+ aq→org) was observed for the first cycles, while the charge calculated for the following cycles became constant. These observations may indicate that the interfacial zone becomes saturated with polynuclear silanol species;
4. The formation of the silica deposits was limited by [CTA+]org and by its concentration in the diffusion layer on the aqueous side of the liquid - liquid interface;
5. The range of TEOS concentrations studied, i.e. from 50 mM to 300 mM, did not affect the CTA+ transfer;
6. An optimal pH for the formation of polynuclear silanol species was found between 9 and 10.

In addition, the macroscopic liquid - liquid interface made it possible to generate silica deposits for their subsequent characterization. The silica deposits collected from the liquid - liquid interface were stored overnight in an oven at 130°C. A set of characterization techniques was employed, suggesting that:
1. The formation of the silica bonds (Si-O-Si) was confirmed. Spectroscopic (infrared) studies indicated the presence of CTA+ and of traces of organic electrolyte ions in the silica material;
2. XPS analyses indicated that TEOS was not totally hydrolyzed, since a C-O bond was detected. Moreover, based on these XPS results, it is assumed that charge balancing may occur between the negatively charged OH- functions (present inside the silica pores) and positive Na+ ions;
3. The mesostructure of the silica deposits was confirmed, and the pore center-to-center distance, depending on the polarity of the organic phase and on the CTA+ concentration in this organic phase, [CTA+]org, was between 3.7 and 7 nm;
4.
SAXS analyses and SEM imaging confirmed the presence of structures known in the literature as 'worm-like'.

Once the electrogeneration of the silica deposits had been optimized at the macroscopic liquid - liquid interface, miniaturization was carried out. The microscopic ITIES was supported by a silicon wafer bearing an array of pores (radius in the range between 5 and 10 µm) in a hexagonal arrangement, prepared by lithography. Miniaturization improved some of the electroanalytical parameters. On the one hand, the limit of detection was lowered thanks to the lower capacitive current. On the other hand, a higher mass transport improved the sensitivity of the system. Silica deposition at the miniaturized liquid - liquid interface was studied and the following conclusions were drawn:
1. The current - potential curves recorded during silica formation did not correspond to a simple ion transfer reaction;
2. The shape of the forward polarization peak (resulting from the CTA+ org→aq transfer) depended on the scan rate (thickness of the diffusion layer on the organic side of the liquid - liquid interface) and on the silicon membrane used (the center-to-center distance between two pores). In general, the CTA+ org→aq transfer reaction was diffusion-limited, once the transfer was governed by linear diffusion inside the pores of the silicon membrane (at scan rates > 10 mV/s) and when the diffusion layers overlapped at the pore entrances on the organic phase side (at scan rates < 0.1 mV/s). An overlap of the diffusion profiles on the organic phase side was also observed for the silicon membrane with a low pore spacing factor;
3. The silica deposits always grow towards the aqueous phase. They are formed at the pore entrances on the aqueous phase side.
The deposits are flat at the bottom and filled with silica inside (which excludes any possible movement of the interface during the deposition process);
4. The shape of the silica deposits corresponds to the hemispherical diffusion layer of the CTA+ transferring towards the aqueous phase: for short experiments the deposits were flat on top and rounded on the sides, and hemispherical for long experiments;

… photosensitive and interfacially active, was synthesized and then used to modify the pH of the aqueous side of the liquid - liquid interface. The conclusions of this part of the thesis are as follows: … a high [CTA+]aq was impossible. It was therefore necessary to use the miniaturized systems (the miniaturized liquid - liquid interface was mechanically stable up to [CTA+]aq = 2.02 mM; higher concentrations were not studied). Silica deposition at the interface can also be triggered by a local pH increase. Interfacially active species (initially present in the organic phase) functionalized with a basic center, for example a nitrogen atom with a lone electron pair, can be employed to control the hydrolysis and condensation reactions electrochemically (for more details, see section 7.3, preliminary results part).

Certain scientific questions and improvements concerning this work remain challenging. (i) Further effort could be directed towards the evaluation of the silica deposit permeability, and scanning electrochemical microscopy (SECM) is of the highest interest in this regard. (ii) The selectivity of the liquid - liquid interface modified with the silica deposits can be further improved by chemical functionalization, for instance by the introduction of organosilane species into the sol.
In this way, parameters such as the charge, the hydrophobicity or some specific ligand - host interaction could be tuned depending on the functional group used. (iii) Mesoporous and well-ordered silica materials can also be obtained by the evaporation induced self-assembly technique. The resulting material, deposited on the membrane supporting nano- or microITIES, could be employed as a molecular sieve. (iv) Triggering the condensation of silica by a local pH change induced by an ion transfer reaction from the organic phase could be used to control the silica deposit formation. The design of interfacially active ions functionalized with a chemical moiety affecting the concentration of protons in the aqueous medium is a requirement. (v) Interfacial silica gel electrogeneration can form a scaffold for molecules or particles undergoing interfacial adsorption. This phenomenon might be employed for the development of new silica templating methods or for the encapsulation of species adsorbed at the interface.

Some preliminary results

Figure 6.1. A - The set-up designed to study the local properties of the silica deposits. The designations stand for: 1 - the PTFE cell; 2 - the aqueous phase, being a solution of TBA+Cl-; 3 - the DCE solution of TBA+TPBCl-; and 4 - the SECM tip filled with a DCE solution of 10 mM BTPPA+TPBCl-. The array of microITIES was modified with silica deposits. B - Schematic representation of the positive feedback recorded at the ITIES.

Figure 6.2. SECM approach curves recorded A - above the silicon wafer; B - above pure DCE; C - above the 10 mM TBA+TPBCl- solution in DCE and D - above the silica deposit modified µITIES supporting 10 mM TBA+TPBCl- in DCE.

Above the 10 mM TBA+TPBCl- solution in DCE, a positive feedback was recorded at the interface (see Figure 6.2 C). An intermediate response was observed in the presence of the silica deposits, as shown in Figure 6.2 D.
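The qualitative reading of the approach curves above (amplified current close to an ion-regenerating interface, attenuated current close to an inert wafer) can be illustrated with the standard SECM feedback formalism. The sketch below is not from the thesis: it uses a commonly quoted analytical approximation for the normalized tip current under pure positive feedback at a disc tip; the numerical coefficients are the ones usually tabulated in the SECM literature and should be verified against the original sources before any quantitative use.

```python
import math

def positive_feedback(L):
    """Normalized tip current I_T/I_T,inf under diffusion-controlled
    (positive) feedback, as a function of L = d/a, where d is the
    tip-substrate distance and a the tip radius.
    Widely used analytical approximation (coefficients quoted from
    the SECM literature -- verify before quantitative use)."""
    return 0.68 + 0.78377 / L + 0.3315 * math.exp(-1.0672 / L)

# Far from the interface the normalized current tends to ~1 (bulk
# steady-state value); close to a regenerating interface it grows.
for L in (0.5, 1.0, 2.0, 10.0):
    print(f"L = {L:4.1f}  I_T/I_inf = {positive_feedback(L):.2f}")
```

An "intermediate" response such as curve D of Figure 6.2 would fall between this positive-feedback limit and the hindered-diffusion (negative feedback) limit, which is the usual qualitative signature of a partially permeable layer.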
Information such as the permeability coefficient or the kinetics of the ion transfer across the silica deposits still has to be extracted from the SECM results.

Silica deposits were also functionalized, using (3-mercaptopropyl)trimethoxysilane (MPTMS) and (3-azidopropyl)trimethoxysilane (AzPTMS). Cyclic voltammetry (see Figure 6.3) was recorded during the deposit formation.

Figure 6.3. The CVs recorded during silica deposit formation at the macroscopic ITIES in the presence of A - 5% MPTMS and B - 15% AzPTMS in the initial sol solution. Experimental conditions: [CTA+]org = 14 mM; [x% organosilane + (100 - x)% TEOS]aq = 300 mM; the scan rate was 1 mV/s. The inserts are TEM images of the electrogenerated silica material.

Figure 6.4. Variation of the SAXS patterns for A - thiol- and B - AzPTMS-functionalized silica deposits. The deposits were prepared under the following initial conditions: [CTA+]org = 14 mM; [x% organosilane + (100 - x)% TEOS]aq = 300 mM; the scan rate was 1 mV/s. The black line corresponds to pure TEOS, red to 5%, blue to 15% and gray to 25% of organosilane in the initial sol solution.

A similar tendency was observed for both functionalities in Figure 6.4: (i) the broadness of the peak in the SAXS pattern increased with the increasing concentration of the organosilanes in the initial sol solution (indicating an increase in the average pore center-to-center distance) and (ii) the peak intensity tended to drop for the samples containing a higher amount of organic groups, suggesting a deterioration of the mesoporous properties.

Scheme 6.1. Amination reaction of the (2-bromoethyl)trimethylammonium cation with ethylenediamine. The reaction is not selective and may lead to the formation of di-, tri- and quaternary substituted amines, hence ethylenediamine has to be used in high excess.

The next step is the precipitation of the product from the reaction mixture with TPBCl- anions in order to form a molecule soluble in DCE (Scheme 6.2).

Figure 6.5. 1H NMR spectra of the product of the amination reaction.

Figure II.2.
FT-IR spectra recorded for AzPTMOS (a few drops were put on a glass support, dried in an oven at 130°C for 4 hours, then scraped out, ground with KBr and pressed to form a pellet). A - the peak corresponding to the N3 vibrational mode; B - the CH2 and CH3 vibrational modes.

Table 1.1. Comparison of the different organic solvents, organic phase electrolytes and aqueous phase electrolytes in terms of the available potential window widths.

Organic phase solvent | [Organic phase electrolyte] | [Aqueous phase electrolyte] | Potential window width
Valeronitrile | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 100 mV
Caprylonitrile | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 200 mV
2-octanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 250 mV
2-decanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 350 mV
3-nonanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 350 mV
o-NPOE | 1 mM TPAs+TPB- | 10 mM LiCl | 350 mV
o-NPOE | 20 mM TPA+TPB- | 10 mM LiCl | 350 mV
5-nonanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 400 mV
5-nonanone | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 400 mV
1,2-DCE | 100 mM TOctA+TPBCl- | 50 mM Li2SO4 | 600 mV
o-NPOE | 5 mM TBA+TPBCl- | 10 mM LiCl | 620 mV
1,2-DCE | 10 mM BTPPA+TPBCl- | 100 mM LiCl | 650 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM NaCl | 700 mV
1,4-DCB | 100 mM TOctA+TPBCl- | 50 mM Li2SO4 | 700 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM NaCl | 700 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM LiF | 800 mV
1,2-DCE | 1 mM BTPPA+TFPB- | 5 mM MgSO4 | 825 mV
1,6-DCH | 100 mM TOctA+TPBCl- | 50 mM Li2SO4 | 850 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 5 mM NaCl | 920 mV
1,2-DCE | 10 mM BTPPA+TFPB- | 10 mM MgSO4 | 970 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 5 mM LiF | 970 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 5 mM MgSO4 | 1000 mV
(1:1 v:v) 1,2-DCE:CH | 1 mM BTPPA+TFPB- | 2000 mM MgSO4 | 1000 mV
1,2-DCE | TBA+TPFPB- | 0.5 mM HCl | 1050 mV
(1:1 v:v) 1,2-DCE:CH | 10 mM BTPPA+TFPB- | 10 mM MgSO4 | 1120 mV

The abbreviations stand for: 1,2-DCE - 1,2-dichloroethane; 1,4-DCB - 1,4-dichlorobutane; 1,6-DCH - 1,6-dichlorohexane; CH - cyclohexane; o-NPOE - 2-nitrophenyl octyl ether; BTPPA+ - bis(triphenylphosphoranylidene)ammonium cation; TPBCl- - tetrakis(4-chlorophenyl)borate anion; TFPB- - tetrakis[3,5-bis(trifluoromethyl)phenyl]borate anion; TBA+ - tetrabutylammonium cation; TPA+ - tetrapentylammonium cation; TPB- - tetraphenylborate anion; TPAs+ - tetraphenylarsonium cation; TPFPB- - tetrakis(pentafluorophenyl)borate anion; TOctA+ - tetraoctylammonium cation.

1.1.4. Electrochemical instability at the electrified liquid - liquid interface in the presence of ionic surfactants

Electrochemical instability, a term proposed and explained by Kakiuchi,70,71 describes the instability of the electrified liquid - liquid junction in the presence of ionic molecules undergoing both partitioning and adsorption processes. The potential dependent adsorption of surface active ions was shown to reach a maximum value at an interfacial potential difference, ∆(aq,org)Ф, around the standard ion transfer potential difference of the surface active ion i, ∆(aq,org)Ф0i. A consequence of the maximal adsorption is a drop in the interfacial tension that leads to thermodynamic instability. Further deliberation concerning the electrocapillary curves in the presence of surface active ion adsorption led to the conclusion that, for some ∆(aq,org)Ф values, the double layer capacitance becomes negative. Since a negative value of the capacitance has no physicochemical meaning, real systems compensate the energy loss by emulsification or escape from the instability by Marangoni type movements (transport of charged species along the liquid - liquid interface induced by surface tension gradients). The thermodynamically forbidden region can be referred to as an instability window. Its width depends on the interfacial tension change upon surfactant adsorption (dependent on the surfactant concentration and on the Gibbs free energy of adsorption, ∆G0ads,i). Table 1.2.
Comparison between the three different dimensions of ITIES. The original table compares macro-, micro- and nanoITIES in terms of schemes and conditions, where r is the interfacial radius and δ is the diffusion layer thickness.

Table 1.3. Voltammetric characteristics for the different arrays of microITIES [Davies, The Cyclic and Linear Sweep Voltammetry of Regular and Random Arrays of Microdisc Electrodes: Theory].

1.2. Sol - Gel Process of Silica employing Template Technology

The aspects described in the following subsections are: (i) the chemistry and properties of silicon containing compounds, (ii) the Sol - Gel process of silica and (iii) the template methods used for electrode structuring - each is described with regard to the content of the present work rather than as a comprehensive overview covering a set of very broad subjects. The first part covers the most relevant information, including the nomenclature and the chemical and physical properties of silicon and silicon containing compounds. Next, the Sol - Gel processing of silica materials is discussed chronologically - from the raw material to the formation of the final product, silica. The third part describes the available template methods developed for engineering electrode surfaces. Then, examples of the templated Sol - Gel processing of mesoporous silica emerging from the modification of solid state supports are given. Finally, the functionalization possibilities of mesoporous silica materials are briefly described.

1.2.1. Nomenclature and physicochemical properties of silicon and silicon containing compounds

Silicon (Si) is the second most abundant atom in the Earth's crust after oxygen (around 28% by mass). Its physicochemical properties are shown in Table 1.4. Si occurs in a variety of silicate minerals (quartz, tridymite or cristobalite) and in synthetic chemicals. In order to avoid any confusion, it is important to give one commonly accepted terminology used to name silicon containing compounds.
The term 'silica' will be used interchangeably with its IUPAC name, silicon dioxide (SiO2). SiH4 is called silane. Anionic species containing a silicon atom, as for instance SiO4 4- or [SiF6]2-, are called silicates. The term 'silanol' is used when at least one OH group is attached to a silicon atom. Silicic acid is the silanol with the general formula [SiOx(OH)4-2x]n.

Table 1.4. Physicochemical properties of the silicon atom.

Atomic number: 14 | Group: 14 | Period: 3 | Block: p
Electron configuration: [Ne] 3s2 3p2
Atomic weight and isotopes: 28.085; 28Si - 92.223%, 29Si - 4.685%, 30Si - 3.092%
Melting point: 1410°C,86 1414°C85
Boiling point: 2355°C,86 3265°C85
Density: 2.33 g/cm3 (at 25°C)85,86
Atomic radius (non-bonded): 2.10 Å85 | Covalent radius: 1.14 Å86
Electronegativity: 1.90
Most common oxidation states: -4, +4
Bond enthalpies:85 H-Si 318 kJ mol-1; Si-Si 222 kJ mol-1; O-Si 452 kJ mol-1; C-Si 301 kJ mol-1

1.3.1. Metals at the electrified liquid - liquid interface

The following description of the examples emerging from the modification of the electrified liquid - liquid interface is divided into: (i) interfacial metal deposition, (ii) phospholipids at the ITIES, (iii) interface modification with organic polymers, (iv) carbon based materials (with some examples emerging from semi-modified ITIES, e.g. thin layer ITIES supported with carbon based electrodes or three phase junction set-ups) and finally (v) silica materials. The last group gives an overview of the recent developments in the field of liquid - liquid interface modification with silica materials. It covers the reports dealing with the electrified ITIES as well as with the modification of the neat liquid - liquid interface.

Metal deposition at the liquid - liquid interface involves a metal precursor dissolved in one phase and an electron donor dissolved in the other.
The mechanism of deposition follows a homogeneous or a heterogeneous electron transfer reaction and results in the interfacial formation of metallic films or metal NPs. Historically, the first report dealing with interfacial metal electrogeneration was communicated by Guainazzi et al., who reported the formation of a metallic Cu film at the ITIES between an aqueous solution of CuSO4 and a dichloroethane solution of TBA+V(CO)6- once a current was passed through the system.103 From that time on, the electrified liquid - liquid interface has attracted significant scientific interest towards metal deposition. In the following section, a few examples emerging from Au, Ag, Pd and Pt interfacial electrodeposition are given.

1.3.1.1. Au deposition at the ITIES

Au NP and Au film deposition at the electrified liquid - liquid interface is a very recent topic which is gaining ground every year. The first report dealing with the potential controlled deposition of preformed Au NPs at the liquid - liquid interface was by Su et al., who showed that mercaptosuccinic acid stabilized Au NPs suspended in the aqueous phase can undergo a reversible interfacial adsorption under a controlled Galvani potential difference.104 The reversible assembly of Au NPs, taking place at the negative end of the voltammetric potential window, could not be followed by current characteristics and hence was studied by electrocapillary curves constructed at different Au NP concentrations. The adsorption of negatively charged citrate-coated Au NPs was also observed at negative potentials by capacitance measurements.105 Cheng and Schiffrin were the first to report in situ Au particle formation at the electrified ITIES. The deposition reaction was triggered by the electron transfer between Fe(CN)6 4- in the aqueous phase and AuCl4- in the DCE.107 Interestingly, Gründer et al.
have not observed any Au0 formation under similar conditions unless some preferential nucleation sites were introduced into the system (Pd NPs adsorbed at the ITIES).108 Schaming et al. extended Au0 …

Trojánek et al. studied the initial nucleation rates of the Pt NPs and, with the wide range of values obtained (from nucleation rates approaching zero up to 207 • 10-5 cm-2 s-1), concluded that the presence or lack of nucleus formation is dictated by probability.122 Pd and Pt have long been known as extremely versatile catalysts. This feature also proved feasible at the liquid - liquid interface. For instance, pre-formed Pd NPs activated electrochemically by a heterogeneous electron transfer reaction from the organic phase containing decamethylferrocene were used to catalyze the dehalogenation of an organic substrate dissolved in the aqueous phase.128 The hydrogen evolution reaction was also catalyzed by Pd and Pt NPs (with the former having a slightly better efficiency) electrogenerated in situ at the ITIES by metal precursor reduction with decamethylferrocene - an electron donor which also served as a reductant for the aqueous phase protons.129 Trojánek et al. showed the catalytic effect of a Pt NP modified ITIES on the oxygen reduction reaction, obtaining a rate constant one order of magnitude greater than at the unmodified liquid - liquid interface.130

1.3.2. Phospholipids at the electrified liquid - liquid interface

Phospholipids are a class of lipids forming part of all biological cell membranes. Phospholipids, with their phosphate polar head group and fatty acid tails, are amphiphilic, and biological membranes owe their unique double layer structure to this particular property. The nature of the functional groups attached to the phosphate group results in five main types of compounds: phosphatidylcholines (PC), phosphatidylserines (PS), phosphatidylethanolamines (PE), phosphatidic acids (PA) and phosphatidylinositides (PI). Within each family, the number and saturation of the carbons in the alkyl chains may differ, with the exception of the phosphatidylinositides, where additionally the inositol group can be substituted with one, two or three phosphate groups.131 Modification with phospholipids was successfully applied to the solid/liquid,132,133 air/liquid134 and liquid - liquid interfaces.135 Model phospholipid monolayers are well-defined and controllable systems which represent one half of the biological membrane. The phospholipid modified liquid - liquid interface can also provide information about pH equilibria, the adsorption - desorption of the lipids at and from the liquid - liquid interface, and the association - dissociation interactions of phospholipids with charged species from both sides of the interface, as discussed in a series of papers from Mareček. The phospholipid monolayer modified ITIES was employed for the study of other bioimportant molecules. The interaction of dextran sulfate/Ru nanoparticles,144 dextran sulfate (DS)145,146 and/or gramicidin (gA)146 with monolayers composed of different phospholipids affected the rate constant of the transfer of different ions across the monolayer. The presence of DS within the phospholipid monolayer decreased the rate constant for TEA+ and metoprolol, in contrast to the gA modified monolayer, which slightly enhanced it. For the monolayers coupled with the DS/Ru nanoparticles it was demonstrated that the rate constant was reduced by a factor of 2/3 for aminacrine and 1/2 for tacrine with respect to the bare phospholipid monolayer. For a broader picture of the ITIES modified with phospholipid monolayers, it is convenient to consult the review publications devoted to this particular topic.135,147,148

1.3.3. Organic polymers at the polarized liquid - liquid interface

It was mainly the group of Cunnane which introduced and developed the concept of electropolymerization of organic polymers at the liquid - liquid interface. The first report describes the electron transfer reaction between the Fe3+/Fe2+ redox couple in the aqueous phase and the 1-methylpyrrole (Figure 1.16 A) or 1-phenylpyrrole (Figure 1.16 B) monomer (Mon) in the organic phase:149

Fe3+(aq) + Mon0(org) → Fe2+(aq) + Mon•+(org)    (1.22)

Radical cation electrogeneration was followed by oligomer formation in the organic phase without any interface modification. It was also shown that N-phenylpyrrole present in the organic phase can facilitate a semi-reversible Ag+ transfer from the aqueous phase, which results in the formation of polymer coated silver nanoparticles in the organic phase (no direct evidence was given apart from electron absorption measurements).150 Mareček et al. were the first who modified the ITIES with a free-standing polymer layer.151 The interface modification with pyrrole containing polymers was triggered by the potentiostatic electron transfer between an interfacially adsorbed monomer (Figure 1.16 C - 4-(pyrrol-1-yl)phenyloxyacetic acid) and Ce4+ dissolved in the aqueous phase. Interfacial measurements allowed the study of the polymerization reaction.

The liquid - liquid interface electro-modification with polythiophene was the subject of works by Cunnane et al. Interfacial polymerization was controlled by the heterogeneous electron transfer between Ce4+/Ce3+ in the aqueous phase and the 2,2':5',2''-terthiophene monomer (Figure 1.16 D) in the organic phase, driven with the use of an external power source152 or by employing potential determining ions.153 It was found that interfacial modification with a polythiophene film requires monomer concentrations > 1 mM (lower concentrations led to oligomer formation in the organic phase only). The mechanism of interfacial deposition153 starts with the heterogeneous electron transfer, with ∆(aq,org)Ф1/2 = 24 mV, and the formation of the radical cation of the monomer - Mon•+:

Ce4+(aq) + Mon0(org) → Ce3+(aq) + Mon•+(org)    (1.23)

Formation of Mon•+ at a sufficiently high potential results in the formation of sexithiophene (6M) according to the reaction: …

Interestingly, ion transfer voltammetry of TMA+ and PF6- with simultaneous polymer deposition showed that an increasing number of voltammetric cycles led to the formation of a compact layer, which acted as a physical barrier; hence the charge transfer was first reduced and consequently significantly suppressed. A potential shift of the current peaks was observed for the facilitated transfer of K+ by DB18C6 in the presence of the polymer film at the liquid - liquid interface, whereas no additional resistance to mass transfer was encountered when the facilitated transfer of H+ by DB18C6 was probed.

Table 1.5. Characteristics of the polymer - Au NPs composites generated at the electrified liquid - liquid interface.

Synthesis method | [AuCl4-]org | [Monomer]aq | pH aq | Au NPs size distribution | Ref.
CV | [AuCl4-] = 0.2 mM | [Tyramine] = 1 mM | 2 | Very little film generated | [157]

Table 1.6. Compilation of works dealing with silica material deposition at the neat liquid - liquid interface.

Organic phase solvent | Silica source | pH of aq. phase | Template (if used) | Morphology and characteristics
Decane | TEOS | Acidic | CTAB | Planar films. Hexagonal order of pores oriented perpendicularly to the film
Hexane | TEOS, TPOS, TBOS | Acidic | CTAB | Silica fibers with mesopores ordered hexagonally and oriented parallel to the fiber axis
(shape of the fibers depends on the precursor used)
Heptane | TEOS | Acidic | CTACl | Planar films with a rougher aqueous side and a smoother organic side180
Heptane | TEOS | Basic | CTAB | Planar films of MCM-41 and MCM-48 type, hexagonal and cubic order of pores
Heptane | MTMS | Basic | CTAB, SDS | Planar films possessing Janus properties. Rough organic side and smooth aqueous side. Methyl groups oriented towards the organic side184
Toluene | TEOS, HDTMOS, PfOTMOS | Acidic | - | Planar films possessing Janus properties. Hydrophobicity of the organic side depends on the precursor used and increases in the order: TEOS > HDTMOS > PfOTMOS
Dichloromethane | Monodisperse SiO2 spheres modified with 3APTMOS | Neutral | - | Monolayer films built from monodispersed silica spheres (~300 nm in diameter)

Abbreviations: TEOS - tetraethoxysilane, TPOS - tetrapropoxysilane, TBOS - tetrabutoxysilane, MTMS - methyltrimethoxysilane, HDTMOS - hexadecyltrimethoxysilane, PfOTMOS - pentafluorooctyltrimethoxysilane, 3APTMOS - 3-(acryloyloxy)propyltrimethoxysilane.

Table 2.1. Full list of chemicals used in this work.

Aqueous and organic electrolytes:
Name | Additional information | Abbreviation | CAS registry number | Molar mass / g mol-1 | Source | Function
Bis(triphenylphosphoranylidene)ammonium chloride | 97% | BTPPA+Cl- | 21050-13-5 | 574.03 | Aldrich | For organic electrolyte preparation
Potassium tetrakis(4-chlorophenyl)borate | ≥98% | K+TPBCl- | 14680-77-4 | 496.11 | Fluka | For organic electrolyte preparation

Table 2.2. Characteristics of the silicon wafers used to support the arrays of microITIES.

Design number | Pore radius / µm | Spacing / µm | Number of pores | Interfacial surface area / cm2
1 | 5 | 20 | 2900 | 2.28×10-3
2 | 5 | 50 | 460 | 3.61×10-4
3 | 5 | 100 | 110 | 8.64×10-5
4 | 5 | 200 | 30 | 2.36×10-5
5 | 10 | 100 | 120 | 3.77×10-4

… mM Na+Cl- || 10 mM BTPPA+TPBCl- + x mM PH+TPBCl- in DCE | 10 mM BTPPA+TPBCl- + 10 mM Li+Cl- … (section 2.5.7) and assessed electrochemically.
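The interfacial surface areas listed in Table 2.2 follow directly from the pore geometry: for an array of N circular pores of radius r, A = Nπr². A quick numerical cross-check (illustrative code, not part of the thesis) reproduces the tabulated values within rounding:

```python
import math

# (design number, pore radius / um, number of pores,
#  tabulated interfacial area / cm^2) -- values from Table 2.2
designs = [
    (1, 5, 2900, 2.28e-3),
    (2, 5, 460, 3.61e-4),
    (3, 5, 110, 8.64e-5),
    (4, 5, 30, 2.36e-5),
    (5, 10, 120, 3.77e-4),
]

for n, r_um, pores, a_tab in designs:
    r_cm = r_um * 1e-4                   # 1 um = 1e-4 cm
    a_calc = pores * math.pi * r_cm**2   # total area of the pore array
    print(f"design {n}: A = {a_calc:.2e} cm^2 (tabulated {a_tab:.2e})")
    assert abs(a_calc - a_tab) / a_tab < 0.01  # agreement within rounding
```

All five designs agree with Nπr² to better than 1%, confirming that the tabulated areas are the geometric pore areas of the lithographically patterned wafers.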
Cells 4 and 5 were used in this regard:

Cell 4: (aq) Ag | AgCl | 5 mM Na+Cl- || 10 mM BTPPA+TPBCl-, x mM PH+I- in DCE | 10 mM BTPPA+TPBCl-, 10 mM Li+Cl- | AgCl | Ag (org)

Cell 5: (aq) Ag | AgCl | 5 - | AgCl | Ag (org)

Cell 6: (aq) Ag | AgCl | 5 mM Na+Cl-, x mM CTA+Cl- || 8 mM BTPPA+TPBCl-, 1 mM PH+TPBCl-, 20% TEOS in DCE | 10 mM BTPPA+TPBCl-, 10 mM Li+Cl- | AgCl | Ag (org)

Silica deposition triggered by local pH changes was performed in the cell 6 configuration, with [CTA+Cl-]aq = x mM, where 0.70 mM <= x <= 2.02 mM [193],[START_REF] Peljo | Electrochemistry at the Liquid/liquid Interface[END_REF] (discussed in more detail in subchapter 1.3.5.3.2). The study, however fundamental, requires further investigation; hence an effort was made towards: (i) the morphological evaluation of the electrogenerated deposits, which probably exhibited some degree of mesostructure; (ii) additional experimental work supporting the silica deposit formation mechanism; and (iii) experimental examples of possible applications of mesoporous materials at the liquid-liquid interface. Consequently, the in situ modification of the liquid-liquid interface with silica materials is still an unexplored and poorly studied field.

Table 4.1. Summary of the bands observed in the different Raman spectra [203].

Molecule of interest | Raman shift frequency [203] / cm-1 | Band assignment [203] | Description [203]
 | 1465 | Scissoring (bending) mode |
 | 1305-1295 | -(CH2)n- in-phase twisting mode | Medium to strong intensity in Raman
BTPPA+ | 3100-3000 | Aromatic ring C-H stretching region |
BTPPA+ | 1600-1000 | Five aryl C-H bonds (aromatic) |
BTPPA+ | 1500-1141 | P=N |
BTPPA+ | 1130-1090 | P-(Ph)3 | Weak in Raman
BTPPA+ | 1010-990 | Aromatic rings | Very strong in Raman
BTPPA+ | 1000-700 | C-H bending |
TPBCl- | 3070-3030 | C-H of substituted benzenes | One strong band
TPBCl- | 1620-1585 | The two quadrant stretch |
TPBCl- | 1590-1565 | Mono- and disubstituted benzenes | Mono- and disubstituted benzene components give rise to two bands; when benzene is substituted with Cl, the bands occur at the lower limit
TPBCl- | 1083 | Aryl-Cl in chlorobenzene | For a phenyl group in the para position this band is seen in the 1130-1190 cm-1 region; from medium to strong in Raman
TPBCl- | 420-390 | Mono- and para-substituted benzenes | Very weak Raman band
DCE | 3030 [244] | -CH2- antisymmetric stretching |
DCE | 2987 [244] | -CH3 symmetric stretching |
DCE | 730-710 [244] | Cl-(CH2)n-Cl stretching mode of the trans conformer |
DCE | 657 [244] | CH3-CH2-Cl stretching mode of the gauche conformer |
CTA+ | 3000-2840 | -CH2- and -CH3 stretching | Strong and characteristic bands
CTA+ | 2962 ± 10 | -CH3 antisymmetric stretching | Strong intensity and characteristic frequency
CTA+ | 2926 | -CH2- antisymmetric stretching | Strong in Raman
CTA+ | 2872 ± 10 | -CH3 symmetric stretching | Strong intensity and characteristic frequency
CTA+ | 2853 | -CH2- symmetric stretching | Often overlapped with the CH3 bands in the antisymmetric region
CTA+ | 1470-1440 | -CH3 antisymmetric bending | Medium intensity
CTA+ | 1470-1340 | CH2 and CH3 bending | Not necessarily visible in Raman spectra

... 40% drop in D'i ...

Table 4.2. Structural and electroanalytical data for the six species, differing in size, charge and nature, employed in this work.

Ion | r_h / nm | D_i x10-6 / cm2 s-1 | D'_i x10-6 / cm2 s-1 | z_i | S_theor / nA mM-1 | Sensitivity S_0 / nA mM-1 (R2) | Sensitivity S / nA mM-1 (R2) | LOD / µM**, BM | LOD / µM**, AM
TMA+ | 0.22 [247] | 13.8 [252] | 8.5 | +1 | 81.6 | 87.4 (>0.999) | 49.0 (0.999) | 1.91 | 0.82
TEA+ | 0.26 [247] | 10.0 ± 0.4 [252] | 5.0 | +1 | 56.7 | 57.4 (>0.999) | 28.8 (0.998) | 0.47 | 3.82
TBA+ | 0.48 [247] | 6.4 ± 0.3 | 3.1 | +1 | 36.6 | 45.2 (>0.999) | 17.4 (0.997) | 0.07 | 5.32
4OBSA- | 0.30* | 8.2** | 3.8 | -1 | n/a | -46.4 (0.998) | -22.1 (0.992) | 3.88 | 2.78
PAMAM G0 | 0.70 [253] | 3.4* | 2.6 | 2 [50] | 87.6 | 94.2 (0.989) | 74.9 (0.986) | 2.41 | 3.82
PAMAM G1 | 0.55 [53] | 4.5 [53] | 0.9 | 5 [50] | 129 | 91.6 (0.961) | 58.6 (0.977) | 4.90 | 3.70

ACKNOWLEDGEMENTS

At the very beginning I am glad to express my deepest gratitude to two big personalities, my supervisors, Alain Walcarius and Grégoire Herzog. Being a member of the ELAN team was a great pleasure, and it was possible because they trusted in me. For the last three
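For an array of independent inlaid micro-interfaces, the theoretical steady-state sensitivity listed in Table 4.2 can be estimated as S = 4·|z|·F·D·r·N per unit concentration. A sketch of that estimate, assuming the 30-pore, 5 µm radius array (design 4 of Table 2.2); the assumption is mine, but the computed values land within a few percent of the S_theor column of Table 4.2:

```python
F = 96485        # Faraday constant / C mol^-1
N_PORES = 30     # design 4 of Table 2.2 (assumed pairing with Table 4.2)
R_CM = 5e-4      # pore radius: 5 µm expressed in cm

def sensitivity_nA_per_mM(z, d_cm2_s):
    """Steady-state sensitivity S = 4|z|FDrN for an array of micro-interfaces,
    expressed in nA per mM (1 mM = 1e-6 mol cm^-3)."""
    s_A_per_mol_cm3 = 4 * abs(z) * F * d_cm2_s * R_CM * N_PORES
    return s_A_per_mol_cm3 * 1e-6 * 1e9  # A per (mol/cm^3) -> nA per mM

for name, z, d in [("TMA+", +1, 13.8e-6),
                   ("TEA+", +1, 10.0e-6),
                   ("TBA+", +1, 6.4e-6)]:
    print(f"{name}: {sensitivity_nA_per_mM(z, d):.1f} nA/mM")
```

The small residual differences with respect to the tabulated S_theor values are consistent with rounding of the diffusion coefficients.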
Decreasing the potential scan rate to 5 and then 1 mV/s led to diffusion layer thicknesses increasing from roughly equal to the membrane thickness (ca. 90 µm at 5 mV/s) to much larger than it (ca. 200 µm at 1 mV/s), producing sigmoidal-shaped current responses on the forward scan (with an additional, very slight residual peak at 5 mV/s). Such waves arise from the extension of the diffusion layer profiles, hemispherical diffusion zones forming at the pore entrances on the organic side of the interface once the condition S/2 > δ_nl is fulfilled (see schemes (ii) in Figure 4.5). In addition, local changes in the interfacial properties caused by the growing silica deposit also play a role in the evolution of the CV curves, since some additional resistance to mass transport through the interface can be expected in the presence of a silica deposit. This is notably the case at a 1 mV/s scan rate (see scheme (iii) in Figure 4.5).

Here, ΔG before modification is the Gibbs energy of transfer in the absence of silica deposits and ΔG after modification is the Gibbs energy of transfer in the presence of silica deposits. ΔG is given by the following equation:

ΔG = -zFΔφ    (3.4)

In Figure 4.21, the variation of the Gibbs energy of ion transfer is plotted as a function of the ion hydrodynamic radius. For the cations of the tetraalkylammonium series, the shift in the Gibbs energy of transfer becomes larger as the hydrodynamic radius grows. This is to be expected, as the diffusion of the larger cations through the mesopores of the silica deposits is more impeded than that of the smaller cations. The Gibbs energy of transfer of the anion 4OBSA- was also higher after modification than before, indicating that its transfer had become even more difficult.

The disappearance of the primary amine group of ADPM after the methylation reaction was also confirmed by UV-Vis spectroscopy.

Appendix I.
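Equation 3.4 relates a shift of the transfer potential to the change in the Gibbs energy of ion transfer. A minimal numerical illustration (the 50 mV shift used below is a hypothetical value, not one measured in this work):

```python
F = 96485.0  # Faraday constant / C mol^-1

def delta_g_transfer(z, delta_phi_V):
    """Gibbs energy of ion transfer (J/mol) from eq. 3.4: dG = -z * F * dphi."""
    return -z * F * delta_phi_V

# Hypothetical example: a monovalent cation whose transfer potential
# shifts by +50 mV after modification of the interface
shift = delta_g_transfer(+1, 0.050) - delta_g_transfer(+1, 0.0)
print(f"Change in Gibbs energy of transfer: {shift / 1000:.2f} kJ/mol")  # → -4.82 kJ/mol
```

A positive potential shift for a cation thus maps onto a Gibbs energy change of a few kJ/mol, the order of magnitude of the shifts discussed around Figure 4.21.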
Nanopipette preparation and silanization

All pipettes were pulled with a P-2000 micropipette puller from Sutter Instrument Company (Figure 1). The protocol allows the preparation of nanopipettes with diameters ranging from 150 to 600 nm. Silanization of the inner walls prevents the ingress of the aqueous phase inside the nanopipette.

- Heat: 810-870 (the energy supplied to the glass, which controls the nanopore diameter; higher energy yields a longer tip and a smaller pore; to obtain visible changes, this factor should be changed by at least 10);
- Filament: 3 (regulates the scanning pattern of the laser spot and controls the heat distribution within the scanning length);
- Velocity: 45 (specifies the velocity at which the glass carriage must be moving before the hard pull);
- Delay: 150 (controls the hard pull);
- Pull: 90 (the force applied when pulling).

Appendix II. Protocol of preparation of 3-azidopropyltrimethoxysilane

The protocol was developed in the LCPME laboratory. [START_REF] Schlossman | Liquid-Liquid Interfaces: Studied by X-Ray and Neutron Scattering[END_REF] The overall reaction can be written as:

The preparation protocol can be divided into a few stages:
1. 100 ml of acetonitrile was placed in a two-neck round-bottom flask. The top neck was closed with a stopper and the side neck with a turn-over flange stopper. N2 was passed through the solvent for 30 min; to do so, two needles were inserted through the turn-over flange stopper, the first being the N2 inlet immersed in the solution and the second serving as the gas outlet.
2. After 30 minutes, 2.16 g of sodium azide (NaN3), 1.29 g of tetrabutylammonium bromide (TBA+Br-) and 4 g of 3-chloropropyltrimethoxysilane were added to the vigorously stirred solvent in the round-bottom flask.
3. The reaction mixture was stirred at 80 °C (under reflux) for 48 h in the set-up depicted on
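The quantities in step 2 correspond to a chloride-to-azide substitution run with an excess of NaN3 and TBAB as a phase-transfer catalyst. A quick sketch checking the mole ratios; the molar masses are standard literature values, not taken from this work:

```python
M_NAN3 = 65.01    # g/mol, sodium azide
M_TBAB = 322.37   # g/mol, tetrabutylammonium bromide
M_CPTMS = 198.72  # g/mol, 3-chloropropyltrimethoxysilane

n_nan3 = 2.16 / M_NAN3    # mol of azide
n_tbab = 1.29 / M_TBAB    # mol of phase-transfer catalyst
n_cptms = 4.00 / M_CPTMS  # mol of chlorosilane substrate

print(f"NaN3  : {n_nan3 * 1000:.1f} mmol")
print(f"TBAB  : {n_tbab * 1000:.1f} mmol ({100 * n_tbab / n_cptms:.0f} mol% vs silane)")
print(f"silane: {n_cptms * 1000:.1f} mmol (NaN3/silane = {n_nan3 / n_cptms:.2f})")
```

The azide is thus used in roughly 1.65-fold excess over the chlorosilane, with about 20 mol% of TBAB.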
halid: 01752219
lang: en
domain: ["chim.othe"]
timestamp: 2024/03/05 22:32:07
year: 2015
url: https://hal.univ-lorraine.fr/tel-01752219/file/DDOC_T_2015_0336_JUAN.pdf
THÈSE - Pierre-Alexandre Juan
Jury: Stéphane Berbenni, Teresa María Pérez Prado, Edgar Rauch, Dr. Delannay, Dr. Tóth
Micromechanical and statistical studies of twinning in hexagonal metals: application to magnesium and zirconium
(French title: Études micromécaniques et statistiques du maclage dans les métaux hexagonaux : application au magnésium et zirconium)
Keywords: twinning, magnesium, zirconium, twin-twin junctions, micromechanics, EBSD
Typeset with the thesul class.

Acknowledgements. I would first like to express my most sincere gratitude to my thesis advisors, Dr. Berbenni and Dr. Capolungo, for all their advice and support. I consider myself fortunate to have had the opportunity to work under their supervision during all these years. I would also like to thank

Contents

1.1 (a) Comparison of tension and compression stress-strain curves of rolled AZ31B Mg alloy [START_REF] Kurukuri | Rate sensitivity and tension-compression asymmetry in az31b magnesium alloy sheet[END_REF] and (b) comparison of mechanical responses of clock-rolled high-purity Zr for various loading directions and temperatures [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression are denoted by "tens" and "comp", respectively. Abbreviations "TT" and "IP" stand for "through-thickness" and "in-plane", respectively.

1.2 EBSD scan example of a clock-rolled Zr specimen loaded in compression along the through-thickness direction. These scans were processed using the software presented in Chapter 3.

... The two inclusions are embedded in an infinite elastic medium, with elastic modulus C0, containing an overall uniform plastic strain, Ep. The second-order tensor Ed represents the imposed macroscopic strain.

... Representation of the local coordinate system (e'1, e'2, e'3) associated with {10-12} tensile twinning. The reference coordinate system (e1, e2, e3) associated with the crystal structure and the crystallographic coordinate system (a1, a2, a3)

... Solid lines and dashed lines refer to the present model and to Nemat-Nasser and Hori's double-inclusion scheme for homogeneous elasticity, respectively. The ellipsoid aspect ratio for the parent, R_parent, is set to 3.

2.7 Schematic representation of the elasto-plastic problem with twinning corresponding respectively to the uncoupled [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] (a) and coupled (present) (b) formulations. The dashed line signifies that inclusions V_t and V_g-t with tangent moduli L_t and L_g-t are embedded in an equivalent homogeneous medium with a tangent modulus L_eff.

2.8 Computational flowchart of the DI-EPSC scheme.

... between primary electrons and atoms at or near the sample surface [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF].
3.5 Iron Kikuchi patterns [START_REF] Boersch | About bands in electron diffraction[END_REF][START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF]

3.6 Image of a Kikuchi pattern obtained by using a phosphor screen and a TV camera, illustrating the method developed by Venables [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] to locate the pattern center. The elliptical black shapes correspond to the projected shadows of the three spheres placed at the surface of the sample [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF]

List of figures

3.16 [START_REF] Mordike | Magnesium. properties -applications -potential[END_REF] Example of neighboring relationships encountered in EBSD data. On the left, when measurement points form a square grid, the pixel represented by the black disc has 4 neighbors represented by the white circles. On the right, when measurement points form a hexagonal grid, each measurement has 6 neighbors.

3.17 Graph grouping measurement points of consistent orientation in connected parts. The colored circles correspond to EBSD measurement points, with the Euler angles mapped on the RGB cube, and white lines represent edges whose thicknesses are proportional to the weight, w. Consequently, twins appear clearly as areas delineated by a black border where the edge weight becomes negligible.

3.18 Graph grouping measurement points of consistent orientation in connected parts with added twinning mode. Green and red edges linking border points, displayed in brown, indicate tensile and compressive twinning relations, respectively.
3.19 Three sample cases of twinning: on the left, a single twin T in the middle of its parent P; in the middle, a twin going across its parent and separating it into two consistent parts P1 and P2; on the right, a grain appearing as two consistent parts next to each other.

3.20 Automatic output for a Zr EBSD map. The sample was cut from a high-purity clock-rolled Zr plate and loaded in compression along one of the in-plane directions up to 5% strain [START_REF] Juan | A statistical analysis of the influence of microstructure and twin-twin junctions on nucleation and growth of twins in zr[END_REF]. Yellow borders mark the grain joints, brown borders the twin joints. Green edges represent the tensile 1 relation, magenta tensile 2, red compressive

... ) but is identified as irrelevant to the twinning process.

3.23 Example of secondary and ternary twinning observed in an EBSD map of a high-purity clock-rolled Zr sample loaded in compression along the through-thickness direction up to 3% strain. This is shown using three different visualization modes (see Appendix A): raw mode (left), twinning editor mode (middle) and twinning statistics mode (right). The parent grain is surrounded in yellow, first-order twins appear in cyan, secondary twins in blue and ternary or higher-order twins in red.

4.3 Macroscopic stress-strain curves of specimens loaded in compression along the rolling direction followed by a second compression along the transverse direction.

4.4 XRD {0001}, {2-1-10} and {10-10} pole figures of specimens before compression (a) and after compression along the transverse direction up to 4% strain (b), along the rolling direction up to 1.8% strain (c), and along the rolling direction up to 1.8% strain followed by the transverse direction up to 1.3% strain (d).
... order tensors Ep, εp1 and εp2 denote the macroscopic plastic strain imposed to the medium and the plastic strains induced by primary and secondary twinning, respectively. The infinite matrix and the primary and secondary tensile twins are represented by volumes V - V_A, V_A - V_B and V_B, respectively. The second-order tensors εp_a and εp_b correspond to eigenstrains, modeling the twinning shears induced by primary and secondary twinning, prescribed in inclusions V_A - V_B and V_B, respectively. The homogeneous elastic tensor is denoted by the fourth-order tensor C.

Résumé

As a preamble to this résumé, I would like to stress that the work reported in this manuscript is the fruit of various collaborations that took place at different moments of my doctorate, both in France and in the United States. Concerning the micromechanical models introduced in Chapter II, I am, with the help of my thesis advisors, Dr. Berbenni and Dr. Capolungo, at the origin of their entire development and programming. Dr. Tomé kindly shared with us the latest version of the EPSC code, developed at Los Alamos National Laboratory, from which we implemented the new localization relations corresponding to the DI-EPSC scheme. Their implementation required a deep adaptation of the initial code. The discussions with Dr. Tomé and Dr. Barnett that followed the development of this new double-inclusion micromechanical approach enriched the study. Chapter III is mainly dedicated to the description of a new software tool for the visualization and analysis of EBSD maps of hexagonal materials, capable of recognizing any type of twin and of extracting a myriad of microstructural data, recorded in an SQL database. The development of the tool is to be credited to Dr.
Pradalier, professor of Computer Science at Georgia Tech Lorraine, who originated the choice of graph theory for the construction of the tool. For my part, I took charge of verifying the results obtained with it and of proposing the adaptations and improvements necessary to make the software more ergonomic and more intelligible to members of the mechanics of materials community. I also wrote a Fortran code that generated all the results of the statistical study presented in Chapter IV, carried out from the Zr EBSD maps that Dr. McCabe kindly shared with us. The double study concerning low-Schmid-factor twins and double extension twins in the AZ31 magnesium alloy was conducted with the help of Dr. Shi (post-doctoral researcher at LEM3 from 2013 to 2015 within the framework of the ANR Magtwin project). My participation consisted in performing the various monotonic and sequential compression tests allowing the observation of these twins with their particular characteristics, and in developing the theoretical part of the double-inclusion micromechanical model in heterogeneous anisotropic elasticity with eigenstrains. The work related in this manuscript thus benefited from the interaction with all these researchers, whom I warmly thank for having allowed me to broaden my spectrum of knowledge in this way.

Polycrystalline materials with a hexagonal close-packed structure have, since the 1960s, been the object of much interest. The enthusiasm that metals such as magnesium, zirconium, rhenium, titanium, etc., have aroused and still arouse is explained by the scale and extreme variety of their applications. Within this thesis, particular attention is paid to magnesium and zirconium.
The properties of the latter, namely an excellent resistance to corrosion, a high transparency to slow neutrons, the retention of its properties at high temperature, as well as a good ductility, make zirconium, and more particularly its alloyed forms, a metal widely used as fuel cladding and piping material in generation III and IV nuclear reactors.

Figure 1 [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression appear in the legend as "tens" and "comp", respectively. The abbreviations "TT" and "IP" stand for "through-thickness compression" and "in-plane compression".

Magnesium and its various alloys have, for their part, strongly interested the automotive industry which, in its concern to lighten cars, is always looking for new, lighter materials with high specific Young's moduli. Thus, for decades, cast magnesium parts have been used as structural elements. On the other hand, their forming, like that of the other hexagonal metals, implies deforming the material irreversibly. Yet plastic deformation, whose key characteristics are the yield stress, hardening and ductility, is strongly anisotropic for magnesium, zirconium and their respective alloys. Figure 1 shows the stress-strain curves obtained experimentally from specimens of rolled AZ31 Mg alloy and rolled Zr, loaded in tension and compression along different directions as well as at different temperatures.
It clearly appears that temperature and loading direction both have a very important effect on the mechanical response of the material, making its prediction much more complex. The difficulties encountered during the sheet forming of hexagonal materials appreciably limit their potential applications. Strain accommodation and stress relaxation within hexagonal materials result from the activation, simultaneous or not, of slip and twinning modes. Slip is characterized by the motion of dislocations and by the interactions between those belonging to the different slip systems and modes. Twinning, by contrast, is characterized by the nucleation, within the grains, of sub-volumes, called twins, whose lattice reorientation can be described by a mirror symmetry with respect to an invariant plane, classically called the twinning plane. Understanding the plastic deformation of hexagonal metals is therefore particularly arduous in view of the very great variety of deformation modes.

Mode | Twinning plane | Shear direction | δ (°)
T1 | {10-12} | <-1011> | 85.2
T2 | {11-21} | <-1-126> | 34.9
C1 | {11-22} | <11-2-3> | 64.2
C2 | {10-11} | <10-1-2> | 57.1

In general, each active deformation mode impacts not only the mechanical response of the material but also the evolution of its texture. The consequences of such a phenomenon can be observed on the macroscopic stress-strain curves presented in Figure 1. Thus, the curves corresponding to the compression of AZ31 Mg (Figure 1a) along the transverse direction and along the normal to the rolling direction are similar to many stress-strain curves observed with other metals. However, the sigmoidal curve obtained in the case of a compression along the rolling direction is much more singular. The same observations apply to zirconium (Figure 1b).
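In an EBSD post-processing tool of the kind described in this thesis, the δ angles listed above (interpreted here as theoretical parent-twin misorientation angles) can serve to label boundary misorientations by twin mode. A minimal sketch; the 2° tolerance is an assumption, and a real implementation would also check the misorientation axis after reduction by the hexagonal symmetry operators:

```python
# Theoretical parent-twin misorientation angles (degrees) for the four twin modes
TWIN_MODES = {"T1": 85.2, "T2": 34.9, "C1": 64.2, "C2": 57.1}
TOLERANCE = 2.0  # degrees (assumed)

def classify_twin(misorientation_deg):
    """Return the twin mode whose theoretical angle is closest to the measured
    misorientation, or None if no mode lies within the tolerance."""
    mode, delta = min(TWIN_MODES.items(),
                      key=lambda kv: abs(kv[1] - misorientation_deg))
    return mode if abs(delta - misorientation_deg) <= TOLERANCE else None

print(classify_twin(84.5))  # close to the {10-12} tensile twin angle
print(classify_twin(45.0))  # no twin mode within tolerance
```

Boundaries left unclassified by such a rule correspond to ordinary grain boundaries rather than twin joints.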
Figure 1b moreover reveals the very pronounced influence of temperature on the hardening of hexagonal metals by presenting side by side the macroscopic stress-strain curves obtained from the compression of the specimen along one of the in-plane directions at room and cryogenic temperatures. Indeed, while the curve representing the compression at room temperature is "typical" of a slip-dominated deformation, the one obtained from the compression performed at cryogenic temperature is clearly sigmoidal. Such variations can only be explained by the activation of, and the competition between, the different slip and twinning modes. Consequently, the understanding and accurate prediction of the mechanical behavior of hexagonal metals require the study and consideration of three distinct types of interaction: slip/slip (1), slip/twin (2) and twin/twin (3) [START_REF] Christian | Deformation twinning[END_REF] interactions. The very generic term "twin/twin interaction" encompasses phenomena such as secondary twinning, but also twin-twin junctions, presented in Figure 2. In parallel with these three fundamental problems, which are undoubtedly closely linked to the internal stress state within the parent and twin phases, it is of prime importance to increase our knowledge of the nucleation and growth mechanisms associated with twinning. These various issues motivated this thesis work, whose methodology and key results are summarized in the paragraphs below. The study of the development of internal stresses during the twinning process was first carried out using a new micromechanical approach based on a double-inclusion topology and on the Tanaka-Mori theorem.
This approach was declined into two models, the first of which consisted of an elasto-static Tanaka-Mori scheme for heterogeneous elastic media with plastic incompatibilities. This first model, applied first to pure Mg and then to the AZ31B Mg alloy, was initially developed to study the evolution of internal stresses in the parent and twin phases during primary and secondary twinning. Although limited to the case of heterogeneous anisotropic elasticity, the results suggest that the internal stress levels averaged within the twins are sufficient to induce plasticity in the latter. Moreover, because of the strong orientation difference existing between the parent and the twin, heterogeneous elasticity appears to be what impacts the internal stresses the most, both in their values and in their signs. The study also reveals, in the case of primary twinning, the strong dependence of the stress state of the twin on the shape of the parent phase. Building on these results, a second model was developed, called the double-inclusion elasto-plastic self-consistent (DI-EPSC) model, consisting of an extension of the previously introduced elasto-static scheme to elasto-plasticity as well as to polycrystalline media. As in the first model, the original Tanaka-Mori result is used to derive the new concentration relations linking the mean strain fields of the twins and of the twinned grains to the macroscopic strain field. All grains, twinned or not, are considered as an integral part of the equivalent homogeneous medium, whose effective mechanical properties are computed using an implicit, nonlinear, iterative self-consistent procedure, the DI-EPSC model.
Unlike the existing elasto-plastic models, which assimilate the parent and twin phases to independent ellipsoidal inclusions, the new concentration relations account for the direct coupling existing between them. The comparison of the results obtained with the new DI-EPSC model with those of the classical EPSC model, as well as with experimental data, leads to three significant results. The first is the conclusion that an exact reproduction of the latent effects induced by twinning makes it possible to predict the influence of plasticity on the hardening and on the hardening rate occurring in the twins. The second lies in the observation that the new concentration relations generate more scattered shear strain distributions. The third is the highlighting of the influence of the initial stress state of the twin, at its nucleation, on the mechanical response of the material. Furthermore, it appeared that most of the numerical instabilities encountered in the course of this study came from the choice of the hardening matrix. Indeed, although the latter was positive semi-definite, the self-consistent scheme would have proved much more stable had it been strictly positive definite. However, such a hypothesis runs counter to a recent study which showed that certain eigenvalues associated with the hardening matrix of pure Mg

The software even makes it possible to identify any type of twin, provided the user supplies the value of the c/a ratio as well as the theoretical misorientations corresponding to the different potentially active twinning systems. For the analysis of other crystallographic structures, the user will, in addition, have to modify the lattice parameters as well as the symmetry quaternions.
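Twin identification of this kind reduces the misorientation between neighboring EBSD orientations to a minimal rotation angle over the crystal symmetry operators. A bare-bones sketch of the underlying quaternion algebra; only the identity operator is listed here, whereas a real hexagonal implementation would loop over the full set of rotational symmetry quaternions mentioned above:

```python
import math

def q_mult(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(a):
    w, x, y, z = a
    return (w, -x, -y, -z)

def axis_angle_quat(axis, angle_deg):
    ux, uy, uz = axis
    n = math.sqrt(ux*ux + uy*uy + uz*uz)
    half = math.radians(angle_deg) / 2
    s = math.sin(half)
    return (math.cos(half), s*ux/n, s*uy/n, s*uz/n)

SYMMETRY = [(1.0, 0.0, 0.0, 0.0)]  # identity only; extend with hexagonal operators

def misorientation_deg(q1, q2):
    """Minimal misorientation angle between two orientations, in degrees."""
    dq = q_mult(q_conj(q1), q2)
    best = 180.0
    for s in SYMMETRY:
        w = q_mult(s, dq)[0]
        best = min(best, 2 * math.degrees(math.acos(min(1.0, abs(w)))))
    return best

q_a = axis_angle_quat((0, 0, 1), 0.0)
q_b = axis_angle_quat((1, 0, 0), 85.2)  # rotation by the T1 twin angle
print(f"{misorientation_deg(q_a, q_b):.1f} deg")
```

Comparing the reduced angle (and, in a complete implementation, the rotation axis) against the theoretical twin misorientations is what lets the tool tag a boundary as a given twin type.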
The first two statistical studies were performed on EBSD maps of rolled AZ31 Mg with the aim of explaining the activation of low-Schmid-factor {10-12} extension twins and the nucleation of {10-12}-{10-12} double extension twins. The first study revealed that {10-12} extension twins with a low Schmid factor represent only 6.8% of all observed twins. Relying solely on deterministic constitutive laws, polycrystalline models such as EPSC or EVPSC (an elasto-visco-plastic model) cannot account for the activation of such twins. Because of their rare occurrence, their influence on the mechanical response of the material remains, a fortiori, very limited. The second study showed, for its part, that {10-12}-{10-12} double extension twins generally obey the Schmid law. It also revealed that considering the internal energy variations induced by the appearance of such twins, through a double-inclusion micromechanical model, even a simplified one, makes it possible to predict accurately which variants are likely to be activated. The study also established that such twins remain extremely rare and have a negligible effect on the mechanical properties of the material. In parallel with these two studies, a third one was conducted, this time on pure Zr, in order to discuss the respective influences of (i) twin-twin junctions between first-generation twins, (ii) grain size and (iii) crystallographic orientation on twin nucleation and growth. The specimens were bathed in liquid nitrogen and loaded in compression along one of the in-plane directions as well as along the through-thickness direction in order to favor, respectively, the formation of T1 extension and C1 compression twins. The abbreviations T1, T2 and C1 refer to twins of the {10-12}, {11-21} and {11-22} types.
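Both AZ31 studies rank twin variants by their Schmid factor, m = cos φ · cos λ, computed from the angle φ between the loading axis and the twin-plane normal and the angle λ between the loading axis and the shear direction. A generic sketch of that computation, not tied to the specific crystallography of AZ31:

```python
import math

def schmid_factor(load_dir, plane_normal, shear_dir):
    """m = cos(phi) * cos(lambda) for uniaxial loading along load_dir."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    l, n, s = unit(load_dir), unit(plane_normal), unit(shear_dir)
    return dot(l, n) * dot(l, s)

# Plane normal and shear direction both at 45 degrees from the loading axis:
# the maximal Schmid factor, m = 0.5
m = schmid_factor((0, 0, 1), (0, 1, 1), (0, -1, 1))
print(f"m = {m:.2f}")  # → m = 0.50
```

Variants with m close to 0.5 are those favored by the Schmid law; the statistical studies above are concerned precisely with the minority of observed twins for which m is low.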
This study is the first to establish the statistical relevance of twin-twin junctions. Six types of twin-twin junctions, namely T1-T1, T1-T2, T1-C1, T2-T2, T2-C1 and C1-C1, were observed.

Figure 2: (a) T1-T1 type 1; (b) T1-T2 type 1; (c) T1-C1 type 1; (d) T2-T2 type 1; (e) T2-C1 type 1; (f) C1-C1 type 1.

Twin-twin junctions occurring between twins of the same mode, and more particularly between those belonging to the two most active modes, are very frequent and therefore cannot be neglected. Depending on the loading path considered, they may represent more than half of all the observed junctions. The comparison of the thicknesses of twins formed in grains containing a single twin with those of twins belonging to grains containing several twins reveals that twin-twin junctions impede twin growth. Moreover, only twins belonging to the predominant twinning mode appear to be strongly affected by the crystallographic orientation of the grain as well as by the loading direction. These differences can probably be explained by the presence of very localized stress levels allowing the nucleation of any type of twin. In agreement with previous studies, it also appeared that the twin nucleation probability and the average number of twins per twinned grain increase with grain size. In terms of perspectives, the logical continuation of this thesis consists of (1) a more complete integration of the post-processing capabilities of the EBSD software and (2) the development and implementation in the DI-EPSC model of stochastic twin nucleation models that would take into consideration the statistical data of the recent studies.
Moreover, by multiplying the EBSD measurements over more Mg and Zr samples, loaded monotonically and cyclically along various directions, at different temperatures and loading rates, it would become possible for micromechanical modeling to take into consideration a precious quantity of statistical data.

Chapter 1. Introduction and state-of-the-art

Polycrystalline materials with hexagonal close-packed crystal structure (hereafter h.c.p.) have been the subject of worldwide interest dating back to the late 1960s. Such interest is motivated by the broad range of applications of h.c.p. metals such as magnesium, zirconium, titanium, zinc, cadmium, beryllium, rhenium, cobalt, etc. Focus is placed here on the cases of pure zirconium and magnesium and some of their alloys (with emphasis on the AZ31B Mg alloy). Zirconium has an ideal thermal neutron scattering cross-section, good ductility and resistance to corrosion, and is therefore extensively used, in an alloyed form (e.g. Zircalloy), as cladding and piping material in generation 3 and 4 nuclear reactors [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF][START_REF] Kreuer | Proton conductivity : Materials and applications[END_REF][START_REF] Maglic | Calorimetric and transport properties of zircalloy-2, zircalloy-4 and inconel-625[END_REF][START_REF] Kreher | Residual stresses in polycrystals influenced by grain shape and texture[END_REF][START_REF] Carrillo | Nuclear characterization of zinalco and zircalloy-4[END_REF]. Magnesium and its alloys, on the other hand, have great potential for lightweighting applications [START_REF] Mordike | Magnesium. properties -applications -potential[END_REF], particularly for the automotive industry, as they exhibit a high specific Young's modulus. As a result, cast magnesium alloys have, over the past decade, been increasingly used as structural components. The use of sheets of h.c.p.
metals for part production necessarily relies on forming operations in which the metal is deformed irreversibly. Plastic deformation, with key characteristics such as strength, hardening and ductility, is highly anisotropic in pure Mg, in pure Zr and in most of their alloys. Figure 1.1 depicts experimentally measured stress-strain curves of rolled AZ31B Mg and clock-rolled high-purity Zr loaded in tension and compression along different directions and, in the case of Zr, at different temperatures. It is clearly shown here that a change in temperature or in loading direction can have a significant effect on the material's yield stress, strain-hardening rate and ductility. As a result, sheet forming of h.c.p. materials remains a rather delicate task that largely limits their use in sheet form.

Figure 1.1 - Experimental stress-strain curves of rolled AZ31B Mg and clock-rolled high-purity Zr [START_REF] Kaschner | Mechanical response of zirconium-ii. experimental and finite element analysis of bent beams[END_REF]. Tension and compression are denoted by "tens" and "comp", respectively. Abbreviations "TT" and "IP" stand for "through-thickness" and "in-plane", respectively.

Motivation and Objectives

In h.c.p. metals [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF][START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF][START_REF] Crocker | The crystallography of deformation twinning in alpha-uranium[END_REF][START_REF] Taylor | Plastic strain in metals[END_REF], strain accommodation and stress relaxation may result from the activation of slip, twinning or both. Slip is characterized by the motion and interaction of dislocations belonging to different slip systems and possibly different slip modes. Twinning, on the other hand, is a sensibly different irreversible deformation mode characterized by the nucleation of sub-volumes within grains, i.e. twin domains, exhibiting a mirror-symmetry reorientation of the lattice with respect to a specific plane, the twinning plane.
Understanding plastic deformation in h.c.p. metals is particularly complex as a result of the large variety of potentially active deformation modes. For example, three active slip modes and two twinning modes can be found in pure magnesium, while four twinning modes and three slip modes have been experimentally observed in pure zirconium. In addition, plasticity can also be activated within twin domains by means of secondary slip or double twinning. In general, the relative contribution and activity of each deformation mode affects both the mechanical response and the texture evolution of the material. The consequence of this can be appreciated by observing the macroscopic stress-strain curves displayed in Figure 1.1. For example, the macroscopic stress-strain curves of a rolled AZ31B Mg alloy loaded along the transverse and normal directions shown in Figure 1.1a are similar to those of many other metals. However, the sigmoidal macroscopic stress-strain curve corresponding to compression along the rolling direction is significantly different. The same observation applies to Zr (Figure 1.1b). Figure 1.1b also shows that temperature may strongly influence the hardening response of h.c.p. metals. It is observed that when the material is compressed along one of the in-plane directions at room and liquid nitrogen temperatures, the macroscopic stress-strain curve obtained is in one case typical of slip-dominated deformation while in the other case it is clearly sigmoidal. Such variations in the mechanical response can only result from the activation of, and competition between, many diverse deformation mechanisms.

Figure 1.2 - EBSD scan example of a clock-rolled Zr specimen loaded in compression along the through-thickness direction. These scans were processed using the software presented in Chapter 3.

Consequently, accurately understanding and predicting the behavior of h.c.p.
metals requires that we study and take into account three distinct interaction types: (1) slip/slip interactions, (2) slip/twin interactions, and (3) twin/twin interactions. The term twin-twin interaction is very general and includes phenomena such as secondary or double twinning and twin-twin junctions, examples of which are displayed in Figure 1.2. In parallel to these three key fundamental problems, and acknowledging that slip/twin and twin/twin interactions will necessarily be largely driven by the internal stress state within twin domains (for example, the activation of secondary slip clearly depends on the internal stress within the twin), one must also gain an understanding of the nucleation and growth mechanisms associated with twinning [START_REF] Capolungo | Nucleation and stability of twins in hcp metals[END_REF][START_REF] Wang | 1)over-bar0 1 2) Twinning nucleation mechanisms in hexagonal-close-packed crystals[END_REF][START_REF] Wang | Nucleation of a twin in hexagonal close-packed crystals[END_REF]. These topics are briefly discussed in what follows so as to motivate the work presented in the upcoming chapters.

Twinning: key experimental results

Twin domains form via the nucleation and glide of twinning partial dislocations on the twinning plane [START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF][START_REF] Yoo | Nonbasal deformation modes of HCP metals and alloys : Role of dislocation source and mobility[END_REF].
The shear induced by the propagation of these dislocations reorients the lattice in such a way that the parent and twin lattices are symmetric with respect to either the twinning plane or the plane normal to both the twinning direction and the twinning plane [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF][START_REF] Hall | Twinning[END_REF][START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF][START_REF] Bevis | Twinning shears in lattices[END_REF][START_REF] Bevis | Twinning modes in lattices[END_REF]. Following nucleation, twin thickening occurs by the subsequent nucleation and propagation of twinning dislocations along planes parallel and adjacent to the faulted plane. Because the motion and nucleation of twinning partial dislocations are sign-sensitive, most constitutive models for h.c.p. materials [START_REF] Salem | Strain hardening due to deformation twinning in titanium : Constitutive relations and crystal-plasticity modeling[END_REF][START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF][START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF][START_REF] Abdolvand | Incorporation of twinning into a crystal plasticity finite element model : Evolution of lattice strains and texture in zircaloy-2[END_REF] rely on the positiveness and magnitude of the resolved shear stress (RSS) on the twinning plane in the shear direction to determine when a twin nucleates and grows.
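The sign-sensitive projection these models rely on can be sketched in a few lines. The sketch below is a minimal illustration, not any specific model from the literature: the plane normal and shear direction are generic unit vectors rather than a particular twin system.

```python
import numpy as np

def resolved_shear_stress(sigma, n, l):
    """Project the Cauchy stress tensor sigma onto the plane with unit
    normal n along the unit shear direction l: tau = l . sigma . n."""
    return float(l @ sigma @ n)

# Uniaxial tension of magnitude 100 (arbitrary units) along z.
sigma = np.zeros((3, 3))
sigma[2, 2] = 100.0

# Plane normal and shear direction both at 45 degrees to the load axis.
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
l = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)

tau = resolved_shear_stress(sigma, n, l)       # Schmid factor 0.5 -> tau = 50
# For twinning, only a positive tau (shear in the twinning sense) counts;
# reversing the load flips the sign and suppresses this twin variant.
tau_rev = resolved_shear_stress(-sigma, n, l)  # -50
print(tau, tau_rev)
```

Unlike for slip, where both signs of `tau` can be accommodated by dislocation glide in either direction, twinning models keep only the positive projection.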
However, recent experimental studies [START_REF] Beyerlein | Effect of microstructure on the nucleation of deformation twins in polycrystalline high-purity magnesium : A multi-scale modeling study[END_REF][START_REF] Khosravani | Twinning in magnesium alloy {AZ31B} under different strain paths at moderately elevated temperatures[END_REF] based on EBSD measurements clearly show that twins nucleate at grain boundaries, crack tips, ledges and other interface defects where stresses are highly localized. Twin nucleation can be considered a random event, as it strongly depends on the presence of defects within the material. These studies suggest that the stress levels required for the nucleation and for the growth of twins are different. Consequently, knowledge of the internal stresses in the parent phases is absolutely necessary for predicting twinning activity. Using a three-dimensional X-ray diffraction technique, Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF] performed in-situ measurements of the evolution of the full average strain and stress tensors within both the twin and parent phases as the twin nucleated and grew. The authors revealed that the two nucleated twin variants were those with the highest average resolved shear stresses projected on the twinning plane along the twinning direction. Because the experimental study was limited to a single grain, the results about twin nucleation cannot be considered statistically representative, but they still prove that internal stresses and twinning incidence are related. They also showed that the stress magnitudes in the parent and twin phases are drastically different, whereas one could intuitively expect that, at least at the onset of twinning, both the twin and the parent phase have the same stress state.
The initial stress state within the twin domain is particularly relevant because it influences the hardening rate and, hence, the activation of secondary slip in the twin phase. In addition, at large strains, the twinned volume becomes significant compared to the total volume, and the stresses in the twin phases directly impact the macroscopic stress levels of the material. Until now, polycrystalline models based on deterministic twin-nucleation approaches and homogenization techniques have been capable of accounting for inter-granular interactions [37,38,[START_REF] Budiansky | On the elastic moduli of some heterogeneous materials[END_REF][START_REF] Hill | A self-consistent mechanics of composite materials[END_REF][START_REF] Hutchinson | Elastic-plastic behaviour of polycrystalline metals and composites[END_REF][START_REF] Hashin | A variational approach to the theory of the elastic behaviour of multiphase materials[END_REF][START_REF] Mori | Average stress in matrix and average elastic energy of materials with misfitting inclusions[END_REF][START_REF] Berveiller | An extension of the self-consistent scheme to plastically-flowing polycrystals[END_REF][START_REF] Berveiller | The problem of two plastic and heterogeneous inclusions in an anisotropic medium[END_REF][START_REF] Lipinski | Elastoplasticity of micro-inhomogeneous metals at large strains[END_REF][START_REF] Molinari | A self consistent approach of the large deformation polycrystal viscoplasticity[END_REF][START_REF] Sabar | A new class of micro-macro models for elastic-viscoplastic heterogeneous materials[END_REF][START_REF] Wang | A finite strain elastic-viscoplastic selfconsistent model for polycrystalline materials[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], but incapable of quantifying the influence of parent-twin interactions without fitting model parameters or adding an artificial stress-correction term [START_REF] Clausen |
Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF].

Slip/slip interactions: towards a comprehensive understanding

While this is not the main subject of the present thesis, slip-system interactions are briefly discussed here for the sake of completeness. Taylor [START_REF] Taylor | Plastic strain in metals[END_REF] elegantly proved that any polycrystalline material is able to undergo homogeneous deformation without producing cracks if five independent slip systems can be activated. The potentially active slip systems in h.c.p. metals are basal, prismatic and first- or second-order pyramidal [START_REF] Groves | Independent slip systems in crystals[END_REF][START_REF] Yoo | Slip, twinning and fracture in hexagonal close-packed metals[END_REF]. More recent works [START_REF] Kocks | Importance of twinning for ductility of CPH polycrystals[END_REF][START_REF] Hutchinson | Creep and plasticity of hexagonal polycrystals as related to single-crystal slip[END_REF] showed that five independent slip systems are not always necessary. For example, Hutchinson [START_REF] Hutchinson | Creep and plasticity of hexagonal polycrystals as related to single-crystal slip[END_REF] observed that inelastic deformation can result from the activation of only four linearly independent slip systems, without pyramidal slip. Regardless, during slip-dominated plasticity, all slip systems necessarily interact. From the constitutive modeling standpoint, all such interactions are usually grouped into a self- and latent-hardening matrix, which is expected to capture the collective behavior of dislocations as they interact with one another. Ideally, one should thus accurately render each unit process associated with each dislocation interaction event, which can potentially lead to junction formation, unzipping, repulsion and crossed states. Discrete dislocation dynamics (DDD) can largely provide answers to those questions.
In DDD, dislocation lines are discretized and dislocation motion is predicted by solving an overdamped equation of motion for each dislocation segment (or node, depending on the numerical strategy chosen). Junction formation can then be predicted by enforcing conservation of the Burgers vector as well as maximum dissipation. Recently, latent-hardening coefficients resulting from slip/slip interactions were computed from DDD by Bertin et al. [START_REF] Bertin | On the strength of dislocation interactions and their effect on latent hardening in pure magnesium[END_REF] for pure magnesium. The authors showed that basal/pyramidal slip interactions are particularly strong. This can be of prime importance, as it suggests that such interactions could be detrimental to the material's ductility. Note, however, that in the work cited above some key fundamental aspects remain to be treated, as the nature and specifics of <c+a> dislocations on the second-order pyramidal plane are subject to debate [START_REF] Agnew | Connections between the basal {I1} "growth" fault and <c+a> dislocations[END_REF].

Slip/twin interactions: recent progress

Slip/twin interactions occur at the twin interface as a slip dislocation intersects and possibly dissociates onto the twinning interface. The nature of the reaction between slip and twinning dislocations depends on the geometry of the reaction at stake (i.e. incoming dislocation Burgers vector, character, etc.). In his review of deformation modes in h.c.p. metals [START_REF] Yoo | Slip, twinning and fracture in hexagonal close-packed metals[END_REF], Yoo described in great detail the specific crystallographic reactions that can result from slip/twin interactions. While of great interest, the study was necessarily limited to geometrical and elementary energetic considerations. In particular, transition states describing dislocation core effects were disregarded.
Provided that accurate atomistic pair potentials can be found or developed, these limitations can be circumvented by means of atomistic simulations such as those conducted in a series of pioneering studies by Serra and Bacon [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF][START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF][START_REF] Serra | Dislocations in interfaces in the h.c.p. metals. defects formed by absorption of crystal dislocations[END_REF]. The authors revealed the complexity of the unit processes involved. For example, it was shown that a screw "basal" dislocation can easily cross a {10-12} twin boundary, while an edge basal dislocation will dissociate on the twin boundary and can generate surface disconnections. Such simulations are ideal for understanding and describing the different mechanisms but are limited to a few dislocations. Later on, the reactions described above were rationalized in a constitutive model, which suggested that slip-assisted twin growth resulting from the continuous generation of steps by slip/twin interactions was an unlikely candidate for explaining the rapid growth rate of tensile twins. In parallel, Fan et al. [START_REF] Fan | The role of twinning deformation on the hardening response of polycrystalline magnesium from discrete dislocation dynamics simulations[END_REF] used three-dimensional DDD to study the interactions between slip dislocations and {10-12} tension twin boundaries. It was found, as expected, that twins have a stronger influence on hardening (in the sense of limiting slip) than grain boundaries. However, their results cannot be directly implemented in constitutive laws. This shortcoming serves as a motivation for developing novel micromechanical models allowing researchers to capture, albeit in a coarse fashion, these effects.
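The overdamped equation of motion underlying the DDD simulations discussed above can be illustrated with a deliberately simple toy configuration. The sketch below is not taken from any of the cited codes: it integrates the glide of two parallel same-sign edge dislocations on a common slip plane, with placeholder material constants, using the isotropic 1/r interaction force and an explicit Euler update of B v = F.

```python
import numpy as np

# Placeholder isotropic constants (arbitrary units), not fitted to any metal.
mu, nu, b, B = 1.0, 0.3, 1.0, 1.0  # shear modulus, Poisson ratio, Burgers, drag
A = mu * b * b / (2.0 * np.pi * (1.0 - nu))  # edge-edge interaction prefactor
tau_app = 0.05                               # applied resolved shear stress

def glide_force(x, i):
    """Glide force on dislocation i: Peach-Koehler term from the applied
    stress plus the 1/r interaction with the other same-sign edge."""
    j = 1 - i
    return tau_app * b + A / (x[i] - x[j])

# Overdamped equation of motion: B * v = F  ->  explicit Euler time stepping.
x = np.array([0.0, 2.0])  # initial positions along the slip plane
dt = 1e-3
for _ in range(5000):
    v = np.array([glide_force(x, 0), glide_force(x, 1)]) / B
    x = x + v * dt

separation = x[1] - x[0]
print(separation)  # same-sign edges repel: separation grows beyond 2
```

Production DDD codes do the same bookkeeping per discretized segment with anisotropic stress fields, junction rules and topological events, which is what makes extracting hardening coefficients from them non-trivial.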
Twin/twin interactions: motivating theoretical and experimental results

Twin/twin interactions and secondary twinning are also suspected to have a significant effect on microstructure evolution and on the development of internal stresses during plasticity. Regarding twin-twin interactions, the crystallography associated with the different twinning partial dislocation reactions that can occur was discussed in general terms in the work by Cahn [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF] on depleted uranium. In the specific case of h.c.p. materials, the literature is quite scarce. One must acknowledge recent work [START_REF] Yu | Twin-twin interactions in magnesium[END_REF][START_REF] Kadiri | The effect of twin-twin interactions on the nucleation and propagation of twinning in magnesium[END_REF][START_REF] Yu | Co-zone {10 -12} twin interaction in magnesium single crystal[END_REF] based on EBSD and TEM measurements, which investigated the nature of {10-12}-{10-12} twin-twin junctions and their influence on twin nucleation and propagation rates. It was suggested that twin-twin junctions could hinder twin growth while favoring nucleation. However, these studies are limited to one type of twin-twin junction and rely on the observation of only a few twinned grains, such that they are not necessarily statistically representative. The remaining question is that of the need to account for these interactions in constitutive models. Micromechanical multi-inclusion models exist [START_REF] Hori | Double-inclusion model and overall moduli of multi-phase composites[END_REF][START_REF] Nemat-Nasser | Micromechanics : overall properties of heterogeneous materials[END_REF] but have never been applied or adapted to treat this type of problem.
Secondary twinning, also called double twinning, is another interesting problem, as several experimental studies [START_REF] Barnett | Non-schmid behaviour during secondary twinning in a polycrystalline magnesium alloy[END_REF][START_REF] Beyerlein | Double twinning mechanisms in magnesium alloys via dissociation of lattice dislocations[END_REF] have proposed connections between the activation of such processes and fracture. An experimental study performed by Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF] on Mg EBSD scans revealed that the activated double-twin variants were those that either required the least strain accommodation within the parent domain or obeyed Schmid's law. However, the latter study neither discussed the statistical relevance of secondary twinning nor quantified its influence on the mechanical response.

Crystallography of h.c.p. metals

In the Miller-Bravais indexing system, the hexagonal crystal structure shown in Figure 1.3 is defined with respect to four axes or vectors denoted a1, a2, a3 and c. Vectors a1, a2 and a3 lie in the lower basal plane. The angles formed by the vector pairs (a1,a2), (a2,a3) and (a3,a1) are all equal to 2π/3 radians. The c-axis is perpendicular to the basal planes and, hence, to vectors a1, a2 and a3. The magnitude is the same for all vectors ai with i = {1, 2, 3} but differs from that of vector c. The norm of the vectors ai corresponds to the distance between two neighboring atoms lying in the same atomic layer. The magnitude of the vector c represents the distance between two atomic layers of the same type; in other words, it represents the distance separating the lower part from the upper part of a hexagonal cell. Atom positions, directions and planes can be expressed from vectors a1, a2, a3 and c. However, it is also possible to use only vectors a1, a2 and c, since a3 = -(a1 + a2).
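The redundancy a3 = -(a1 + a2) means that a four-index direction [u v t w] maps to the Cartesian vector (u - t) a1 + (v - t) a2 + w c. A minimal sketch of this bookkeeping is given below (the basis construction and the ideal axial ratio √(8/3) ≈ 1.633, which makes nearest neighbors equidistant, are standard crystallographic facts, not results of this thesis):

```python
import numpy as np

def hex_basis(a=1.0, gamma=1.633):
    """Cartesian basis vectors a1, a2, a3 and c of a hexagonal cell."""
    a1 = a * np.array([1.0, 0.0, 0.0])
    a2 = a * np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
    a3 = -(a1 + a2)                    # the redundant third in-plane axis
    c = np.array([0.0, 0.0, gamma * a])
    return a1, a2, a3, c

def direction(uvtw, a=1.0, gamma=1.633):
    """Cartesian vector of a Miller-Bravais direction [u v t w]."""
    u, v, t, w = uvtw
    assert u + v + t == 0, "valid Miller-Bravais indices satisfy u + v + t = 0"
    a1, a2, a3, c = hex_basis(a, gamma)
    return u * a1 + v * a2 + t * a3 + w * c

# [2 -1 -1 0] is parallel to a1, and [0 0 0 1] is the c-axis.
d1 = direction([2, -1, -1, 0])
d2 = direction([0, 0, 0, 1])

# Equidistance of the atom at fractional position (2/3, 1/3, 1/2) from the
# in-plane neighbors singles out the ideal axial ratio gamma = sqrt(8/3):
a1, a2, _, c = hex_basis(1.0, np.sqrt(8.0 / 3.0))
r = (2.0 / 3.0) * a1 + (1.0 / 3.0) * a2 + 0.5 * c
print(d1, d2, np.linalg.norm(r))  # the last value equals a = 1
```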
The hexagonal structure shown in Figure 1.3 contains 3 unit cells, each consisting of an atom arrangement made of two tetrahedra facing upwards. For example, representing atoms as hard spheres, the first of the 3 unit cells shown in Figure 1.3 includes atoms located at coordinates (1,0,0), (0,0,0), (1/2,1/2,0), (2/3,1/3,1/2), (1,0,1), (0,0,1) and (1/2,1/2,1). A unit cell is thus composed of 7 atoms. Each atom lying in the layers A, i.e. the blue spheres, is shared with 2 other unit cells, while the atom belonging to layer B is not shared with any other unit cell. As a result, an elementary hexagonal unit cell effectively contains 2 atoms. If all atoms contained in a unit cell are equidistant, then the axial ratio γ = c/a is equal to √(8/3) ≈ 1.633. This value is never reached by pure materials at room temperature and pressure, but metals such as magnesium and cobalt exhibit axial ratios that are very close, e.g. 1.623 for both of them [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]. At room pressure and temperature conditions, the axial ratios of pure h.c.p. metals lie between 1.56 (beryllium) and 1.89 (cadmium) [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]. Hexagonal intermetallic phases with a γ ratio equal to √(8/3) have also been produced [START_REF] Dorn | High-Strength Materials[END_REF]. Note that the value of γ is a function of temperature and pressure. Figure 1.4 displays important planes and directions in h.c.p. metals; they correspond to the planes and directions of the slip and twinning systems mentioned further on.

Crystallography of twinning in h.c.p. metals

Hall [START_REF] Hall | Twinning[END_REF] and Cahn [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF] were the first to present a detailed analysis of deformation twinning crystallography in h.c.p. materials.
Shortly after, Kiho [START_REF] Kiho | The crystallographic aspect of the mechanical twinning in metals[END_REF][START_REF] Kiho | The crystallographic aspect of the mechanical twinning in Ti and alpha-U[END_REF] and Jaswon and Dove [START_REF] Jaswon | Twinning properties of lattice planes[END_REF][START_REF] Jaswon | The prediction of twinning modes in metal crystals[END_REF][START_REF] Jaswon | The crystallography of deformation twinning[END_REF] developed theories aimed at predicting the activation of twinning modes. Their theories rely on the assumption that the activated twinning systems are those that minimize both the twinning shear magnitude and the shuffles. In 1965, Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] reviewed and generalized the works previously mentioned. They provided the most complete analysis of shuffling processes and a very rigorous treatment of the orientation relationships and of the division of atomic displacements into shear and shuffles. To date, their theory, referred to in the following as the Bilby-Crocker theory or the classical crystallographic theory of deformation twinning, remains the reference. Bevis and Crocker [START_REF] Bevis | Twinning shears in lattices[END_REF][START_REF] Bevis | Twinning modes in lattices[END_REF] extended the theory to the case of non-classical twins. The objective of the present section is to present the main aspects of deformation twinning crystallography and to introduce the principal twinning modes in h.c.p. metals. A deformation twin consists of a region of a grain that underwent a homogeneous shape deformation in such a way that the twinned domain has exactly the same crystalline structure as the parent domain but a different orientation. Consequently, deformation twinning consists of a homogeneous shape deformation and does not induce any volume variation.
Because the parent and twin phases remain in contact, the deformation undergone by the twinned region must be an invariant plane shear strain. A twinning mode can be completely characterized by two planes and two directions: the invariant and unrotated plane, denoted K1; the second invariant but rotated plane of the simple shear, K2; the twinning shear direction, η1; and the direction, η2, resulting from the intersection of the plane of shear P, perpendicular to both K1 and K2, with K2 (Figure 1.5). The plane K1 is called the composition or twinning plane. Although four elements characterize a twinning mode, knowing either K1 and η2 or K2 and η1 is sufficient to define it completely. The conjugate of a given twinning mode is also described by planes K'1, K'2 and directions η'1, η'2, such that K'1 = K2, K'2 = K1, η'1 = η2, η'2 = η1, and the magnitude of the twinning shear, s, is the same. K2 and η2 are then called the reciprocal twinning plane and the reciprocal twinning direction, respectively. The four orientation relations of the classical crystallographic theory of twinning are:
-1) reflection in K1,
-2) rotation of π about η1,
-3) reflection in the plane normal to η1,
-4) rotation of π about the normal to K1.
Because hexagonal structures are centro-symmetric, the lattices obtained with relations (1) and (4) are identical. The same observation can be made for relations (2) and (3). It is then natural to classify twins into three types. Type 1 and type 2 twins correspond to twins whose lattices can be obtained from relations (1)-(4) and (2)-(3), respectively. The third type of twins includes all twins whose lattices can be reproduced by using any of the four classical orientation relations.
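Relation (4) gives a quick numerical check of a well-known result: rotating the parent c-axis by π about the {10-12} plane normal reproduces the classic ~86.3° c-axis misorientation of {10-12} twins in Mg. The sketch below assumes γ = 1.624 (a commonly used value for Mg) and builds the plane normal from the hexagonal reciprocal basis with a = 1.

```python
import numpy as np

gamma = 1.624  # assumed axial ratio of Mg

# Normal to the (10-12) plane in Cartesian coordinates (a = 1):
# n ~ h*a1* + k*a2* + l*c* using the hexagonal reciprocal basis.
n = np.array([1.0, 1.0 / np.sqrt(3.0), 2.0 / gamma])
m = n / np.linalg.norm(n)

def rotate_pi(v, axis):
    """Rotate vector v by an angle of pi about a unit axis: v' = 2(v.u)u - v."""
    return 2.0 * np.dot(v, axis) * axis - v

c_parent = np.array([0.0, 0.0, 1.0])
c_twin = rotate_pi(c_parent, m)  # relation (4): rotation of pi about the K1 normal

angle = np.degrees(np.arccos(np.clip(np.dot(c_parent, c_twin), -1.0, 1.0)))
print(angle)  # ~86.3 degrees
```

The same construction with relation (2), a rotation of π about η1, yields an equivalent lattice since the structure is centro-symmetric.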
Note that relations (2) and (4) are particularly convenient for expressing the Rodrigues vectors, and hence the quaternions, associated with twinning of types 2 and 1, respectively. Another way to differentiate twins of types 1 and 2 consists in looking at the rationality of the indices of K1, K2, η1 and η2. Twins of type 1 are those for which the indices of K1 and η2 are rational, while twins of type 2 are those for which K2 and η1 are rational. In the case of compound twins, all K1, K2, η1 and η2 indices are rational. Geometrical considerations explaining the rationality or non-rationality of the Miller indices of planes K1, K2 and directions η1, η2 are detailed in Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] and recalled by Christian and Mahajan [START_REF] Christian | Deformation twinning[END_REF]. Other twinning orientation relations are theoretically possible and have been investigated by Bevis and Crocker [START_REF] Bevis | Twinning shears in lattices[END_REF][START_REF] Bevis | Twinning modes in lattices[END_REF]. Referred to as non-classical twins, they will not be studied in the present document. Denoting by u1, u2 and u3 the basis vectors associated with the hexagonal cell and using the Einstein convention and the notations of Christian et al. [START_REF] Christian | Deformation twinning[END_REF], unit vectors parallel to the η1 direction, the η2 direction and the normal to the twinning plane can be written as l = l_i u_i, g = g_i u_i and m = m_i u_i, respectively. As a result, if K1 and η2 are known, the shear direction η1 is obtained from:

s l = 2(m - (g.m)^(-1) g) (1.1)

The magnitude of the twinning shear is given by:

s^2 = 4((g.m)^(-2) - 1) (1.2)

Consequently, the deformation gradient tensor associated with deformation twinning is expressed as

F = I + s(l ⊗ m) (1.3)

with I the identity tensor.
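Equations (1.1)-(1.3) can be exercised numerically. For the {10-12} mode with η2 = [10-11], they should reproduce the textbook shear magnitude s = |γ² - 3|/(γ√3); the sketch below assumes γ = 1.624 for Mg and a = 1.

```python
import numpy as np

gamma = 1.624  # assumed axial ratio of Mg

def unit(v):
    return v / np.linalg.norm(v)

# g: unit vector along eta2 = [10-11]; the lattice vector is 2*a1 + a2 + c.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
c = np.array([0.0, 0.0, gamma])
g = unit(2.0 * a1 + a2 + c)

# m: unit normal to the twinning plane K1 = (10-12).
m = unit(np.array([1.0, 1.0 / np.sqrt(3.0), 2.0 / gamma]))

# Eq. (1.1): s*l = 2(m - (g.m)^-1 g); eq. (1.2): s^2 = 4((g.m)^-2 - 1).
sl = 2.0 * (m - g / np.dot(g, m))
s = np.sqrt(4.0 * (np.dot(g, m) ** -2 - 1.0))
l = sl / s  # unit vector along eta1; it lies in K1, so l.m = 0

# Eq. (1.3): deformation gradient of the twinning shear.
F = np.eye(3) + s * np.outer(l, m)

s_textbook = abs(gamma**2 - 3.0) / (gamma * np.sqrt(3.0))
print(s, s_textbook)     # both ~0.129
print(np.linalg.det(F))  # 1: simple shear preserves volume
```

The unit determinant of F is the numerical counterpart of the statement above that twinning induces no volume variation.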
The case of {10-12} twins is particularly interesting because the twinning shear s vanishes when γ = √3, and the shear direction reverses as γ passes through this value. As a result, {10-12} twins can be either tensile or compressive, depending on the magnitude of the axial ratio. However, the simple application of the previously described shear is not sufficient to reorient the crystal lattice in such a way that atoms belonging to the twinned region face their image in the parent phase with respect to the mirror symmetry plane. Additional atomic displacements are then necessary to produce the twin structure from the sheared parent structure. These atomic displacements, which relate the twin lattice sites to the parent lattice sites, are called lattice shuffles. Consider a primitive lattice vector w parallel to the direction η2. The magnitude of its projection along the normal to the twinning plane is equal to qd, where q is an integer denoting the number of lattice planes K1 of spacing d crossed by w. Atom displacements are repeated in each successive group of q planes. Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] showed that, in the case of type 1 twinning, lattice points lying in the planes p = q/2 and p = q are sheared directly to their final positions. For type 2 twinning, an analogous parameter, q*, has to be introduced: it corresponds to the number of K2 planes crossed by a primitive lattice vector parallel to the twinning shear direction η1. It has been proven that no shuffles are required when q* = 1 or q* = 2. In short, any crystalline structure containing more than one atom per primitive unit cell will undergo lattice shuffling during twinning deformation when q ≥ 4. Examples of possible shuffling mechanisms are presented in Figure 1.6. Moreover, Table 1.1 lists most of the twinning systems observed in h.c.p. materials.
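The parameter q can be computed directly from its definition: project the shortest lattice vector parallel to η2 onto the K1 normal and divide by the interplanar spacing d. For {10-12}, with η2 = [10-11], this recovers q = 4, the value usually quoted for this mode (a minimal sketch, assuming γ = 1.624 and a = 1):

```python
import numpy as np

gamma = 1.624  # assumed axial ratio of Mg

a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
c = np.array([0.0, 0.0, gamma])

# Shortest lattice vector parallel to eta2 = [10-11], i.e. 2*a1 + a2 + c.
w = 2.0 * a1 + a2 + c

# K1 = (10-12): reciprocal-space normal; d = 1/|n| is the K1 plane spacing.
n = np.array([1.0, 1.0 / np.sqrt(3.0), 2.0 / gamma])
d = 1.0 / np.linalg.norm(n)

# q = number of K1 planes of spacing d crossed by w.
q = np.dot(w, n / np.linalg.norm(n)) / d
print(round(q))  # 4 for the {10-12} mode
```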
Note that, except for a very rare twinning system observed in Mg by Reed-Hill [START_REF] Reed-Hill | A study of the (1011) and (1013) twinning modes in magnesium[END_REF], all twinning modes listed in Table 1.1 are compound. Very early on, Kiho [START_REF] Kiho | The crystallographic aspect of the mechanical twinning in metals[END_REF][START_REF] Kiho | The crystallographic aspect of the mechanical twinning in Ti and alpha-U[END_REF] and Jaswon et al. [START_REF] Jaswon | Twinning properties of lattice planes[END_REF][START_REF] Jaswon | The prediction of twinning modes in metal crystals[END_REF][START_REF] Jaswon | The crystallography of deformation twinning[END_REF] developed models aimed at predicting the activation of twinning systems. These models are based on the simple assumption that the activated twin systems are those inducing the least amount of shear and the smallest lattice shuffles. Bilby and Crocker [START_REF] Bilby | Theory of crystallography of deformation twinning[END_REF] generalized them and suggested that a newly formed twin has the following properties:
-1) a small twinning shear magnitude,
-2) its formation requires simple lattice shuffles, i.e. shuffles with a small value of q,
-3) the lattice shuffles induced by its nucleation have a small magnitude,
-4) if large shuffles are necessary, they should be parallel to the twinning shear direction η1.
In general, criteria (1) and (2) are sufficient to predict the predominant twinning modes. Criteria (3) and (4) are particularly useful for choosing between a twin mode and its conjugate. However, they cannot be used to predict the nucleation and growth of twins at the grain scale. For example, they are not capable of predicting whether a given strain will be accommodated by a large twin or by several small twins of the same mode. More recently, El Kadiri et al.
[START_REF] Kadiri | The candidacy of shuffle and shear during compound twinning in hexagonal close-packed structures[END_REF] derived the analytical expressions of all possible shuffles and shears for any compound twin in h.c.p. metals. Their purpose was to propose a generalized crystallographic framework capable of determining the twinning dislocations that can form for each twinning system. Their theory recovered the expressions of all twinning dislocations previously identified from the admissible interfacial defect theory developed by Serra et al. [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF][START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF][START_REF] Serra | Computer-simulation of the structure and mobility of twinning dislocations in hcp metals[END_REF] and discussed hereafter. Their calculations enabled the identification of all planes subject to shear exclusively and of those subject to both shear and shuffling for any non-elementary twinning dislocation. Regarding the {11-22} and {10-12} twinning modes, the authors demonstrated that the smallest deviation from the stacking sequence is exactly repeatable with an order equal to 7 and 2, respectively. They also revealed that a twinning disconnection with a step height equal to multiple interplanar spacings does not necessarily require shuffles within intermediate planes to operate in the twinning direction. Since some of the terms used to describe El Kadiri's results have not been defined yet, this discussion can be seen as a transition, or an early introduction, to the next section.

Nucleation and growth of twins

Twinning dislocations and twin interfaces

A disconnection in a coherent rational twin boundary has a stress field similar to that associated with a dislocation.
This explains why the term twinning dislocation was used when the concept was first discussed by Vladimirskij [77] and by Frank and Van der Merwe [START_REF] Frank | One-dimensional dislocations .1. Static theory[END_REF] in the late 1940's. The equivalent Burgers vector of a disconnection, also called step, of height h can be expressed as follows:

$$\mathbf{b}_t = h\,s\,\frac{\boldsymbol{\eta}_1}{\|\boldsymbol{\eta}_1\|} \qquad (1.4)$$

where s denotes the twinning shear magnitude and η1 the twinning shear direction. In cases where h is equal to the spacing d of the lattice planes parallel to K1, the twinning dislocation is called the elementary twinning dislocation. Moreover, since the elastic energy is proportional to the square of the Burgers vector magnitude, steps with heights equal to multiples of d tend to dissociate spontaneously into elementary twinning dislocations. However, when parent and twin lattices do not coincide, an elementary twinning dislocation might be energetically unfavorable. As shown by Thompson and Millard [79], lattice shuffles also imply that the interface structure repeats every q lattice planes parallel to K1 if q is odd, and every q/2 planes if q is even. As a result, the Burgers vectors b_t,odd and b_t,even, corresponding to twinning dislocations for which q is odd and even, respectively, can be expressed as:

$$\mathbf{b}_{t,odd} = q\,d\,s\,\frac{\boldsymbol{\eta}_1}{\|\boldsymbol{\eta}_1\|} \qquad (1.5)$$

$$\mathbf{b}_{t,even} = \frac{1}{2}\,q\,d\,s\,\frac{\boldsymbol{\eta}_1}{\|\boldsymbol{\eta}_1\|} \qquad (1.6)$$

These twinning dislocations are referred to as zonal twinning dislocations. Regarding their nature, twinning dislocations can be of edge, screw or mixed type, and they have most of the properties of ordinary lattice dislocations. As explained by Christian and Mahajan [START_REF] Christian | Deformation twinning[END_REF], they can glide along the interface plane when a shear stress is applied. A twin can be represented as a series of twinning dislocation loops whose diameters increase as the central plane of the twin is approached.
Expansion of existing dislocation loops increases the twin diameter, while the formation of new dislocation loops increases the twin thickness. The motion of zonal dislocations along defect-free planes parallel to K1 is responsible for twin growth or shrinkage. The dislocation is then said to be glissile. In reality, short- and long-range interactions with point, line and surface defects slow down or even block the displacement of the twinning dislocation. Such interactions can be considered as friction stresses. The lattice resistance is a kind of Peierls-Nabarro force whose magnitude strongly depends on the type of atomic bonding and hence on the structure of the dislocation core. Twinning dislocation cores were first assumed to be similar to lattice dislocation cores, i.e. quite narrow. However, experiments and simulations revealed that the size of twinning dislocation cores varies considerably with the material. For example, twinning dislocation cores observed in zirconia are very narrow and the corresponding Peierls stress is very high, while for many metals dislocation cores may extend over several atomic planes and face a relatively small Peierls stress. In addition, the dissociation of zonal dislocations into elementary twinning dislocations lowers the elastic energy but increases the surface energy. Elementary dislocations, products of the dissociation, have parallel Burgers vectors and, because of their mutual repulsion, tend to separate. Note that the dissociation of zonal dislocations into elementary dislocations is very similar to the dissociation of lattice dislocations into partial dislocations [START_REF] Mendelson | Fundamental Aspects of Dislocation Theory[END_REF].
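As a small numerical sketch (my own illustration, not part of the thesis), the zonal Burgers vector magnitudes implied by the generic relation b_t = s·h of Eqs. (1.4)-(1.6) can be evaluated for the common modes, using the classical h.c.p. shear magnitudes and K1 interplanar spacings; γ is taken as the nominal value for Mg.

```python
import math

def zonal_burgers(gamma=1.624, a=1.0):
    """Magnitudes b_t = s*h of zonal/elementary twinning dislocations,
    using classical h.c.p. shear magnitudes s and K1 interplanar
    spacings d; the step heights are h = 2d for {10-12}, h = 3d for
    {11-22} and h = d for {11-21}. gamma = c/a (default: Mg)."""
    g2 = gamma**2
    b = {}
    # {10-12} mode: two-layer zonal dislocation
    s = abs(3.0 - g2) / (math.sqrt(3.0) * gamma)
    d = math.sqrt(3.0) * a * gamma / (2.0 * math.sqrt(3.0 + g2))
    b["{10-12}"] = s * 2.0 * d          # equals |3 - g2| a / sqrt(3 + g2)
    # {11-22} mode: three-layer zonal dislocation
    s = 2.0 * abs(g2 - 2.0) / (3.0 * gamma)
    d = a * gamma / (2.0 * math.sqrt(1.0 + g2))
    b["{11-22}"] = s * 3.0 * d          # equals |g2 - 2| a / sqrt(1 + g2)
    # {11-21} mode: elementary dislocation
    s = 1.0 / gamma
    d = a * gamma / math.sqrt(1.0 + 4.0 * g2)
    b["{11-21}"] = s * d                # equals a / sqrt(1 + 4 g2)
    return b

print({k: round(v, 4) for k, v in zonal_burgers().items()})
```

The very small {10-12} value (about 0.15 a for Mg) illustrates why these interfacial defects carry far less elastic energy than full lattice dislocations.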
Thompson and Millard [79] established that the only stable twinning dislocation for the {10-12} twinning mode is the zonal dislocation of double step height, with the following Burgers vector:

$$\mathbf{b}_t = \frac{3-\gamma^2}{3+\gamma^2}\,\boldsymbol{\eta}_1 \qquad (1.7)$$

Its magnitude is then equal to:

$$b_t = \frac{3-\gamma^2}{\sqrt{3+\gamma^2}}\,a \qquad (1.8)$$

Atomistic simulations performed by Serra et al. [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF] with two-body potentials revealed that the width of these zonal dislocations is sensitive to the potential used. Regarding the {11-22} twinning mode, the zonal dislocation corresponding to the twinning features presented in Table 1.1 has a step height equal to three interplanar spacings of the K1 planes and the following Burgers vector:

$$\mathbf{b}_t = \frac{\gamma^2-2}{3(\gamma^2+1)}\,\boldsymbol{\eta}_1 \qquad (1.9)$$

whose magnitude is:

$$b_t = \frac{\gamma^2-2}{\sqrt{\gamma^2+1}}\,a \qquad (1.10)$$

Due to the high value of q, i.e. q = 6, many lattice shuffles are expected to occur. Serra et al. [START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF] considered three different shuffle models and observed that the energy and the width of the twinning dislocation were not really affected by the type of shuffle but were sensitive to the atomic potential used. No lattice shuffle occurs with {11-21} twinning. As a result, the twinning dislocation associated with this twinning mode is an elementary dislocation whose Burgers vector and Burgers vector magnitude are, respectively:

$$\mathbf{b}_t = \frac{1}{3(4\gamma^2+1)}\,\boldsymbol{\eta}_1 \qquad (1.11)$$

and

$$b_t = \frac{1}{\sqrt{1+4\gamma^2}}\,a \qquad (1.12)$$

The most frequently observed twinning dislocations in {10-11} twin interfaces in Mg and Ti are such that their step height is equal to 4d, with d the interplanar distance between K1 lattice planes.
Their corresponding Burgers vector and Burgers vector magnitude have the following expressions:

$$\mathbf{b}_t = \frac{4\gamma^2-9}{4\gamma^2+3}\,\boldsymbol{\eta}_1 \qquad (1.13)$$

and

$$b_t = \frac{\sqrt{2}\,(4\gamma^2-9)}{\sqrt{3}\,\sqrt{4\gamma^2+3}}\,a \qquad (1.14)$$

In order to minimize the interfacial energy, all the twinning dislocations mentioned above may dissociate into dislocations whose Burgers vectors are smaller in magnitude. The energetic stability of twin interfaces is a very pertinent discrimination criterion for assessing the likelihood of twinning modes. Using different two-body potentials, Serra and Bacon [START_REF] Serra | Computer-simulation of twin boundaries in the HCP metals[END_REF][START_REF] Serra | The crystallography and core structure of twinning dislocations in HCP metals[END_REF] thoroughly investigated the main twin interface structures in pure h.c.p. materials, i.e. the {10-12}, {11-21}, {10-11} and {11-22} twins. As shown in Table 1.1, all these twin modes have been observed experimentally in h.c.p. materials. The simulations revealed that classical twinning dislocations having both a relatively small b and a relatively small h have smaller line energies in {10-12} and {11-21} interfaces than in {11-22} interfaces. The only stable equilibrium configuration found for the {10-12} twin interface is such that parent and twin lattices are mirror images, and the interface plane results from the coalescence of two adjacent atomic planes into a corrugated {10-12} plane. Moreover, Xu et al. [START_REF] Xu | On the importance of prismatic/basal interfaces in the growth of twins in hexagonal close packed crystals[END_REF] also showed that prismatic/basal interfaces, which exhibit a low interface energy, play an important role in the growth of {10-12} twins. The relaxed structure of the {10-11} interface is very similar to the one computed for {10-12} twins, since the stable interface consists of a {10-11} plane generated by the coalescence of two separate atomic planes.
Regarding the {11-22} twin interface, atomistic simulations reveal that the interface, as well as all lattice planes parallel to K1, is perfectly flat.

Mechanisms involved in nucleation and growth of twins

Twin formation can be decomposed into three steps. The first step is nucleation, consisting of the formation of a small twin nucleus. The second step, named propagation, corresponds to the phase during which dislocation loops expand very quickly in all directions contained within the twinning plane. At the end of this second step, the newly-formed twin is flat and wide. The last step in the development of a twin, referred to as the growth step, consists of the thickening of the twin. The mechanisms involved in both nucleation and growth are detailed in the following. In theory, twins may nucleate homogeneously or heterogeneously. Homogeneous nucleation consists in the formation of a small twin in a defect-free region under the influence of an applied stress. It implies that there exists a critical resolved shear stress (CRSS), also called the theoretical strength of the material, such that a twin forms when the resolved shear stress on the twinning plane in the twinning direction reaches or exceeds this critical value.
Theoretical works by Orowan [START_REF] Koehler | Dislocations in Metals[END_REF], Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF], and Lee and Yoo [START_REF] Lee | Elastic strain-energy of deformation twinning in tetragonal crystals[END_REF][START_REF] Yoo | Deformation twinning in hcp metals and alloys[END_REF] show that twins may nucleate homogeneously only if the applied resolved shear stress on the twinning plane is very high and if both the surface and strain energies are very small. As a consequence, homogeneous nucleation seems very unlikely. Experimental results obtained by Bell and Cahn [START_REF] Bell | The nucleation problem in deformation twinning[END_REF][START_REF] Bell | The dynamics of twinning and the interrelation of slip and twinning in zinc crystals[END_REF] as well as by Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] are in agreement with this conclusion. Bell and Cahn [START_REF] Bell | The nucleation problem in deformation twinning[END_REF][START_REF] Bell | The dynamics of twinning and the interrelation of slip and twinning in zinc crystals[END_REF] observed that twins appeared at much higher stress levels in "almost" defect-free h.c.p. single crystals than they did in less perfect crystals.
Carrying out in situ measurements on specimens in a scanning electron microscope, Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] found that the stresses required to initiate twinning were an order of magnitude higher than those usually measured on "regular" macroscopic specimens. Therefore, Bell and Cahn [START_REF] Bell | The nucleation problem in deformation twinning[END_REF][START_REF] Bell | The dynamics of twinning and the interrelation of slip and twinning in zinc crystals[END_REF] and Price [START_REF] Price | Pyramidal glide and the formation and climb of dislocation loops in nearly perfect zinc crystals[END_REF][START_REF] Price | Nucleation and growth of twins in dislocation-free zinc crystals[END_REF][START_REF] Price | Non-basal glide in dislocation-free cadmium crystals[END_REF] all agreed with the conclusion that twinning is initiated by some defect configuration. As opposed to homogeneous nucleation, heterogeneous nucleation consists of defect-assisted twin formation. Heterogeneous nucleation is usually modeled via the dissociation of some dislocation into a single- or multi-layered stacking fault [START_REF] Christian | Dislocations in Solids[END_REF]. The resulting stacking fault, bounded by partial dislocations belonging to the parent crystal, is then the defect responsible for twin nucleation. There are other ways for twins to form, namely the pole and cross-slip source mechanisms, which enable a single twinning dislocation to move through successive K1 planes.
Derived from the general theory developed by Bilby [START_REF] Bilby | On the mutual transformation of lattices[END_REF], Cottrell and Bilby [START_REF] Cottrell | A mechanism for the growth of deformation twins in crystals[END_REF] introduced the concept of a pole mechanism in b.c.c. materials, and analogous mechanisms were subsequently proposed for h.c.p. materials. The pole mechanism was described by Bilby and Christian [START_REF] Bilby | The Mechanism of Phase Transformations in Metals[END_REF] as follows. Consider a lattice dislocation in a parent crystal with a Burgers vector b_a. After crossing the twin interface, the same dislocation has a Burgers vector b_b. The two Burgers vectors are assumed to be related by the twinning shear such that $\mathbf{b}_b = \mathbf{S}\,\mathbf{b}_a$. As a result, the glide of the dislocation leaves a step in the interface of height equal to the projection of the Burgers vector b_a along the normal to the twinning plane, i.e. $h = \mathbf{b}_a\cdot\mathbf{m}$, with m a unit vector normal to K1. This step is a twinning dislocation whose Burgers vector is $\mathbf{b}_t = \mathbf{b}_b - \mathbf{b}_a$. Consequently, each point crossed by the initial lattice dislocation along the twin interface is the junction of three dislocation lines. Bilby called the junction point a "generating node". The twinning dislocation associated with such a configuration is said to be a pole dislocation. Other, more elaborate illustrations of pole mechanisms, involving the dissociation of a pole dislocation into partial and sessile dislocations, have been detailed and explained by authors such as Cottrell and Bilby [START_REF] Cottrell | A mechanism for the growth of deformation twins in crystals[END_REF], Venables [START_REF] Venables | Dislocation pole models for twinning[END_REF] and Hirth and Lothe [START_REF] Hirth | Theory of Crystal Dislocations[END_REF]. Similar to nucleation, both homogeneous and heterogeneous growth mechanisms have been and are still being investigated in the literature.
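The pole-mechanism bookkeeping above can be sketched numerically. In the toy example below (my own illustration, with arbitrary orthonormal η1 and m and an arbitrary shear magnitude), the twinning shear is represented as a simple shear S = I + s η1⊗m, so that the step Burgers vector b_t = b_b − b_a = s (b_a·m) η1 and the step height h = b_a·m follow directly.

```python
import numpy as np

# Toy illustration of the pole-mechanism relations (values are arbitrary):
# a twinning shear S maps a lattice Burgers vector b_a of the parent onto
# b_b = S b_a in the twin; the difference b_t = b_b - b_a is the Burgers
# vector of the interfacial step left behind.
s = 0.129                            # twinning shear magnitude (e.g. {10-12} in Mg)
eta1 = np.array([1.0, 0.0, 0.0])     # unit vector along the twinning direction
m = np.array([0.0, 1.0, 0.0])        # unit normal to the twinning plane K1

S = np.eye(3) + s * np.outer(eta1, m)   # simple-shear transformation

b_a = np.array([0.3, 0.5, 0.1])      # some lattice Burgers vector (arbitrary)
b_b = S @ b_a                        # Burgers vector after crossing the interface
b_t = b_b - b_a                      # Burgers vector of the twinning step

h = b_a @ m                          # step height = projection of b_a on m
# For a simple shear, b_t = s * (b_a . m) * eta1 exactly:
assert np.allclose(b_t, s * h * eta1)
print("step height h =", h, " b_t =", b_t)
```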
However, in contrast to homogeneous nucleation, which is very unlikely, homogeneous growth is possible. Homogeneous growth corresponds to the repeated homogeneous nucleation of twinning dislocations on K1 lattice planes to form new twin layers. Twin thickening may also occur by random accumulation of nucleated faults, by heterogeneous nucleation of steps at defect loci, or by the pole or cross-slip mechanisms.

Twinning in constitutive and polycrystalline modeling

Polycrystalline models accounting for twinning rely either on the use of the finite element method (FEM), as recently done in work by Izadbakhsh et al. [96, 97], or on the use of Green operator techniques and Eshelbian micromechanics [START_REF] Shiekhelsouk | Modelling the behaviour of polycrystalline austenitic steel with twinning-induced plasticity effect[END_REF]. The latter can be applied in the form of a mean-field approach (e.g. self-consistent methods) or of a full-field approach via the Fast Fourier Transform (FFT) method originally proposed by Moulinec and Suquet [START_REF] Moulinec | A numerical method for computing the overall response of nonlinear composites with complex microstructure[END_REF][START_REF] Moulinec | Intraphase strain heterogeneity in nonlinear composites : a computational approach[END_REF]. In both cases of full-field methods based on the FEM and the FFT, current models accounting for twin activity do not effectively reorient the crystal within the twin domain, such that second-generation twinning and secondary slip are necessarily predicted with lesser accuracy.
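To make the FFT route concrete, the sketch below (my own minimal illustration, not from the thesis) implements the Moulinec-Suquet "basic scheme" for a scalar periodic conductivity problem, a transparent analog of the elastic case: the local field is iteratively corrected in Fourier space through a Green operator of a homogeneous reference medium.

```python
import numpy as np

def fft_basic_scheme(c, E=(1.0, 0.0), c0=None, n_iter=500):
    """Minimal Moulinec-Suquet 'basic scheme' for periodic scalar
    conductivity: find a curl-free field e with mean E such that
    div(c * e) = 0, via fixed-point iterations in Fourier space."""
    n = c.shape[0]
    if c0 is None:
        c0 = 0.5 * (c.min() + c.max())    # homogeneous reference medium
    k = np.fft.fftfreq(n) * n             # integer discrete frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                        # avoid division by zero at xi = 0
    e = np.zeros((2, n, n))
    e[0] += E[0]
    e[1] += E[1]
    for _ in range(n_iter):
        j = c * e                         # local flux
        jx, jy = np.fft.fft2(j[0]), np.fft.fft2(j[1])
        # Green operator: Gamma0(xi) j = xi (xi . j) / (c0 |xi|^2)
        kj = (kx * jx + ky * jy) / (c0 * k2)
        ex = np.fft.fft2(e[0]) - kx * kj
        ey = np.fft.fft2(e[1]) - ky * kj
        ex[0, 0] = E[0] * n * n           # enforce the prescribed mean
        ey[0, 0] = E[1] * n * n
        e = np.array([np.fft.ifft2(ex).real, np.fft.ifft2(ey).real])
    return e, c * e

# Two-phase laminate loaded across the layers: the exact effective
# conductivity is the harmonic mean of the phase values (here 1.6).
n = 32
c = np.ones((n, n)); c[n // 2:, :] = 4.0
e, j = fft_basic_scheme(c)
print("effective conductivity ~", j[0].mean())
```

For this laminate the scheme converges to the harmonic mean, and equilibrium is satisfied without meshing the microstructure, which is the appeal of FFT-based full-field solvers.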
On the contrary, mean-field self-consistent polycrystalline models, in which domain reorientation is straightforward, have been used in a large body of work to predict the effect of twinning on strain hardening and microstructure evolution [START_REF] Tomé | A model for texture development dominated by deformation twinning : Application to zirconium alloys[END_REF][START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF][START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF]. A simple way to deal with the crystallographic reorientation induced by twinning consists in (1) initially representing the polycrystal with a finite set of orientations with given volume fractions and (2) reorienting an entire crystal when the effective twin fraction becomes larger than a critical value. This type of approach was originally proposed in early work by Van Houtte [START_REF] Van Houtte | Simulation of the rolling and shear texture of brass by the taylor theory adapted for mechanical twinning[END_REF], in which a Monte Carlo type method was employed to allow for twin reorientation. In the same spirit, Tomé et al. [START_REF] Tomé | A model for texture development dominated by deformation twinning : Application to zirconium alloys[END_REF] developed the "Predominant Twin Reorientation" (PTR) scheme. This scheme, in which an entire grain is reoriented at once and in which solely the most active twin system is accounted for in terms of crystallographic reorientation, necessarily leads to imprecision in the prediction of texture development.
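The PTR bookkeeping can be made concrete with a schematic (a hypothetical minimal implementation of the reorientation rule only, not the actual PTR code): each grain accumulates twin volume fractions per system, and the whole grain is reoriented to the variant of its most active system once the accumulated fraction exceeds a threshold.

```python
def ptr_step(grains, threshold=0.25):
    """One 'Predominant Twin Reorientation'-style bookkeeping pass
    (schematic). Each grain is a dict with an 'orientation' label and
    per-system accumulated twin volume fractions 'f'. When the largest
    accumulated fraction exceeds the threshold, the entire grain is
    reoriented at once to the corresponding twin variant."""
    for g in grains:
        system, f_max = max(g["f"].items(), key=lambda kv: kv[1])
        if f_max >= threshold and not g["reoriented"]:
            g["orientation"] = ("twin", system, g["orientation"])
            g["reoriented"] = True

grains = [{"orientation": ("parent", i), "reoriented": False,
           "f": {s: 0.0 for s in range(6)}} for i in range(3)]

# Deterministic fake accumulation: grain 0 twins heavily on system 2,
# grain 1 spreads activity below threshold, grain 2 stays untwinned.
increments = {0: {2: 0.06}, 1: {0: 0.03, 1: 0.03}, 2: {}}
for step in range(5):
    for i, g in enumerate(grains):
        for s, df in increments[i].items():
            g["f"][s] += df
    ptr_step(grains)

print([g["orientation"][0] for g in grains])   # only grain 0 is reoriented
```

Because only the most active system drives the reorientation and the whole grain flips at once, the scheme deliberately trades per-variant fidelity for simplicity, which is the imprecision mentioned above.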
To overcome these limitations, an alternative method, referred to as the "Volume Fraction Transfer" (VFT) scheme, was proposed [START_REF] Tomé | A model for texture development dominated by deformation twinning : Application to zirconium alloys[END_REF][START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF]. The polycrystal is represented as a finite set of fixed orientations, weighted by volume fractions. As deformation proceeds, the weights of the orientations evolve to reproduce the nucleation and growth of twins. The VFT scheme provides an accurate description of texture development in the case of twinning, but does not allow for a direct coupling or connection between the parent and the twin domains. More recently, Proust et al. [START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF][START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF] developed the "Composite Grain" (CG) model from the PTR scheme. In the CG model, when a critical twin volume fraction is reached, new grains are created with an orientation corresponding to that of the twin domain. The shape of the newly formed domains is fixed by a parameter such that the multi-lamellar aspect of twinning (yielding relatively flat ellipsoidal twins) is respected. From the point of view of micromechanics, the newly formed twinned domains are either treated as new grains to be embedded in the homogeneous reference medium, or artificially coupled to the parent phase by imposing traction continuity across the interface.
Although the coupled CG approach may appear to be a more realistic description of the geometry and traction continuity conditions associated with twinning, it is to be noted that enforcing traction continuity on the mean fields within the parent and twin domains is unlikely to be appropriate when the twin fraction is not small. When applied to the case of pure polycrystalline magnesium in an elasto-plastic self-consistent scheme, it is found that the CG model cannot accurately reproduce the evolution of internal strains concomitant with twinning activity. Typically, these approaches suffer from two limitations. First, they do not account explicitly for the direct mechanical interaction between parent and twin or between twin and neighboring grains. Second, they do not consider the stresses induced by the shear transformation inside the twin domain. These limitations essentially reduce the accuracy of the predicted stress state in the twin domain, and its correspondence with full-field methods, both at the onset of twinning and once plasticity has occurred. Most models use the average stress in the parent and resolve it on the twin plane so as to quantify the driving force associated with twin growth. More recent approaches considered both the stress states in the parent and twin domains [START_REF] Wang | A constitutive model of twinning and detwinning for hexagonal close packed polycrystals[END_REF][START_REF] Wang | A crystal plasticity model for hexagonal close packed (hcp) crystals including twinning and de-twinning mechanisms[END_REF]. However, at a fine continuum mechanics scale, the stress of interest is the one acting at the interface between the twin and the parent.
Experiments [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF][START_REF] Balogh | Spatially resolved in situ strain measurements from an interior twinned grain in bulk polycrystalline {AZ31} alloy[END_REF] reveal that these resolved stresses can be very different. High-energy X-ray diffraction in situ measurements by Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF] revealed that substantial backstresses develop within twin domains during the activation of {10-12} tensile twins. As such, the accuracy of predictions of secondary slip and second-generation twin activities in polycrystals is limited. Second-generation twinning is typically observed in the magnesium alloy AM30, where tensile twins of the {10-12} type develop within compressive twins of the {10-11} type [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF][START_REF] Barnett | Non-schmid behaviour during secondary twinning in a polycrystalline magnesium alloy[END_REF]. In recent EBSD measurements performed by Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF], it was shown that, out of the six possible second-generation twin variants, two are observed far more frequently. To date, no clear explanation of the phenomenon exists, but strain accommodation in twin domains inside both the primary twin and parent phases is likely to play a significant role [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF].
To remedy the first limitation, related to the direct coupling between the parent and twin domains, an alternative mean-field approach, still based on a 1-site self-consistent model for elasto-plastic polycrystals (EPSC) but including twins as new "child grains" embedded in the HEM (Homogeneous Equivalent Medium), was proposed [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF]. Continuity conditions were enforced across the parent/twin interface. The traction continuity constraint is appropriate at the onset of twinning at the twin/parent interface. However, as it is enforced on the mean stresses within the parent and twin phases, it is unlikely to be accurate once the twin has reached a significant volume. For the initial state of the twin domain, Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] proposed to impose an initial twin volume fraction, and hence an initial plastic shear for the twin system, once twinning is activated. Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] also showed that better agreement with experimental data could be obtained by introducing a back-stress term at the constitutive level, in an ad hoc fashion, within the twin phase. The motivation there was essentially to reduce the stress within the twin domain at the onset of twinning so as to match the elastic strains measured by neutron diffraction. An alternative route has also been proposed in which, rather than first computing the plastic shear in the parent phase on the twinning plane and then reorienting a twin domain, one first creates a twin domain and then imposes an eigenstrain in this domain. In the work of Lebensohn et al.
[START_REF] Lebensohn | A study of the stress state associated with twin nucleation and propagation in anisotropic materials[END_REF], this method was used in a purely elastic two-phase model. Although limited to purely elastic accommodation, such models lead to much reduced, or even null, stress states in the twin domain. The eigenstrain-based approach [START_REF] Lebensohn | An elasto-viscoplastic formulation based on fast fourier transforms for the prediction of micromechanical fields in polycrystalline materials[END_REF] was also used in full-field elasto-plastic studies based on finite element and fast Fourier transform methods. Rendering such complex physical phenomena in view of performing virtual material characterization is complicated by the local nature of nucleation events. The vast majority of polycrystalline models use deterministic criteria to predict nucleation events. However, the scale at which nucleation occurs is typically lower than the resolution scale of FFT- or FEM-based full-field simulations, such that connections with the atomistic structure of grain boundaries, in terms of degrees of freedom and defect content, are missing. To overcome such limitations, probabilistic twinning models accounting for the statistical nature of twin nucleation have been developed [START_REF] Niezgoda | Stochastic modeling of twin nucleation in polycrystals : An application in hexagonal close-packed metals[END_REF][START_REF] Beyerlein | Effect of microstructure on the nucleation of deformation twins in polycrystalline high-purity magnesium : A multi-scale modeling study[END_REF]. Clearly, these approaches rely on the gathering of rigorous statistical data from experimental studies. In Capolungo et al. [START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF] and Beyerlein et al.
[START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF], an automated twin detection tool capable of detecting the presence and geometry of twins from Electron Backscatter Diffraction (EBSD) measurements was used to that end. These works clearly delineate a path towards both directly embedding experimental data into constitutive models and generating datasets for model validation. Yet, these studies were limited to microstructures with relatively well-organized twin structures and considered only one twin mode at a time. Indeed, automatically extracting microstructural data and twinning statistics, such as grain size, grain orientation, number of twins per grain or modes and variants of twins, is particularly complex because of the diversity of twinning modes, the multiplicity of certain twins and the complex morphologies one can introduce following complex or arbitrary loading [START_REF] Mccabe | Quantitative analysis of deformation twinning in zirconium[END_REF][START_REF] Marshall | Automatic twin statistics from electron backscattered diffraction data[END_REF].

Scope of the thesis

Focused on twinning in h.c.p. metals, the present PhD thesis is dedicated to the study of internal stress development and to the investigation and quantification of the relative contributions of parent/twin and twin/twin interactions to the mechanical behavior and the microstructure evolution during deformation twinning. Particular attention is drawn to magnesium and zirconium. The thesis is organized in the following manner: Chapter 2 introduces a new micromechanical approach based on a double-inclusion topology and the use of the Tanaka-Mori theorem. A first elasto-static model in heterogeneous elastic media with eigenstrains is developed and applied to the case of first- and second-generation twinning in a single twinned Mg grain. A second model, referred to as the double-inclusion elasto-plastic self-consistent scheme (DI-EPSC), is then derived.
The DI-EPSC scheme is an extension of the first model to the case of elasto-plastic media and polycrystalline materials. Applied to an initially extruded Mg AZ31 alloy, its predictions will be compared to those obtained by Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF]. Chapter 3 aims at introducing a new EBSD analysis software tool developed for automated extraction of twinning statistics and based on graph theory and quaternion algebra. Prior to presenting this new automated twin recognition tool, the chapter will briefly describe scanning electron microscopes, give a historical perspective of the EBSD technique and review the basic concepts of electron diffraction and diffraction pattern analysis. Chapter 4 is dedicated to the identification of statistically representative data associated with the nucleation and growth of twins, based on three studies carried out on Mg AZ31 alloy and pure Zr EBSD scans. The first two studies, performed on the Mg AZ31 alloy, focus on the determination and explanation of activation criteria for low Schmid factor {10-12} tensile twins and for successive {10-12}-{10-12} double extension twins. The last statistical analysis, performed on Zr, discusses the statistical relevance of twin-twin junctions and their influence on the nucleation and growth of twins. Finally, Chapter 5 summarizes the main results of the presented work and presents possible further developments and studies.

Chapter 2

Study of the influence of parent-twin interactions on the mechanical behavior during twin growth

The present chapter focuses on the influence of parent/twin interactions on the mechanical behavior of polycrystalline Mg. A numerically efficient mean-field Eshelbian-based micromechanical model is proposed to address the problem.
The general idea is based on the use of the Tanaka-Mori scheme [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], which is first extended to the case of heterogeneous elasticity [START_REF] Juan | Prediction of internal stresses during growth of first-and second-generation twins in mg and mg alloys[END_REF] and then further extended to the case of elasto-plasticity [START_REF] Juan | A double inclusion homogenization scheme for polycrystals with hierarchal topologies : application to twinning in mg alloys[END_REF]. Prior to that, a few key foundations of Eshelbian micromechanics are recalled [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF]. The following convention is used throughout the rest of the chapter. Fourth-order tensors will be denoted with capital Latin characters, second-order tensors with Greek characters and vectors with lower-case Latin characters. The Einstein summation convention is used for the sake of brevity. Finally, when contracted notations are used (e.g. ":" denotes a doubly contracted product), non-scalar variables will be noted in bold. The symbol "," denotes a spatial derivative.

The inclusion problem

Field equations and thermodynamics

In pioneering work, Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF] analytically determined the local strain and stress tensors in an inhomogeneous inclusion containing an eigenstrain and embedded in an infinite elastic homogeneous medium (Figure 2.1). The temperature of the medium is assumed to be constant, i.e. there is no thermal strain. Dynamic effects and body forces are neglected.
Denoting r the position vector in the medium V, the equilibrium condition without body force and acceleration is expressed as the vanishing of the divergence of the Cauchy stress tensor σ : ∇.σ(r) = 0 (2.1) The compatibility equation on the total distortion, β, is given by : β(r) = ∇u(r) (2.2) where u is the displacement vector. In the small deformation approximation, the total strain and rotation tensors, respectively denoted with ǫ and ω, are related to the total distortion as follows : β(r) = ǫ(r) + ω(r) (2.3) with, ǫ(r) = 1 2 [∇u(r) + ∇ t u(r)] (2.4) ω(r) = 1 2 [∇u(r) -∇ t u(r)] (2.5) As previously mentioned, a spatially varying eigenstrain, denoted with superscript *, is imposed in the medium. The eigenstrain is a non-elastic stress-free strain - as per Eshelby - that can physically represent a phase transformation, a pre-strain, a thermal strain, and so on [START_REF] Mura | Micromechanics of defects in solids[END_REF]. In the small perturbation hypothesis, the total strain is written as the sum of the elastic strain and of the eigenstrain. ǫ(r) = ǫ el (r) + ǫ * (r) (2.6) In a linear elastic homogeneous medium, the constitutive relation is simply given by : σ(r) = C 0 : [ǫ(r) -ǫ * (r)] (2.7) where C 0 is the homogeneous reference elastic modulus tensor. The traction and displacement boundary conditions, on ∂V σ and ∂V u respectively, are the following : u d = (E + Ω) r (2.8) t d = σ.n (2.9) where E and Ω represent the macroscopic strain and rotation tensors imposed on the surface ∂V u , respectively, and n is the vector normal to the traction surface, ∂V σ . Using the minor symmetry of the elastic modulus of h.c.p.
materials and accounting for strain compatibility, the constitutive equation can be written as follows : σ(r) = C 0 : [∇u(r) -ǫ * (r)] (2.10) After introducing the constitutive law into the balance equation, one obtains the so-called Navier type equation for the homogeneous problem considered here : C 0 : ∇.∇u(r) + f * (r) = 0 (2.11) where f * , representing the virtual body forces due to the incompatibility, ǫ * , is given by : f * (r) = -C 0 : ∇.ǫ * (r) (2.12) The static elastic Green's function G ∞ ij (r -r ′ ) corresponds to the displacement at point r in direction i due to the application of a unit body force applied at r ′ in the j direction. Consequently, the solution for the present problem is the convolution of the static Green function, G ∞ , with the virtual body force vector, f * : u(r) = ∞ -∞ G ∞ (r -r ′ ).f * (r ′ )dV r ′ (2.13) The static elastic Green function satisfies the following equation : C 0 ijkl G ∞ km,lj (r -r ′ ) + δ im δ(r -r ′ ) = 0 (2.14) where δ im is the Kronecker symbol and δ(r -r ′ ) the three-dimensional Dirac delta function. Eq. 2.14 corresponds to the Navier equation after multiplying its terms by the Green function, performing two integrations by parts and simplifying the resulting equation by consideration of the boundary conditions [START_REF] Mura | Micromechanics of defects in solids[END_REF]. The Helmholtz free energy density, denoted Φ and corresponding to the portion of the internal energy available for doing work in an isothermal process, is expressed as an integral of the volume density of elastic energy, W el , over the volume of the medium, V [START_REF] Berbenni | Intra-granular plastic slip heterogeneities : Discrete vs. mean field approaches[END_REF][START_REF] Collard | Role of discrete intra-granular slip bands on the strain-hardening of polycrystals[END_REF].
Φ(E, ǫ * ) = 1 2V V σ(r) : ǫ el (r)dV (2.15) After development and use of the Gauss theorem, the surface terms appear in the expression of Φ : Φ(E, ǫ * ) = 1 2V ∂V σ(r)u(r).ndS - 1 2V V σ(r) : ǫ * (r)dV (2.16) By considering the boundary conditions (Eqs. 2.8-2.9), Φ becomes : Φ(E, ǫ * ) = 1 2V ∂Vσ t d .udS + 1 2V ∂Vu σ(r)u d .ndS - 1 2V V σ(r) : ǫ * (r)dV (2.17) If only perturbation fields due to microstructural inhomogeneities are studied, the internal part of the Helmholtz free energy density reduces to : Φ int = - 1 2V V σ(r) : ǫ * (r)dV (2.18)
Eshelby's solution
The exact solution to this boundary-value problem is given by the Lippmann-Schwinger-Dyson type integral equations [118,[START_REF] Berveiller | The problem of two plastic and heterogeneous inclusions in an anisotropic medium[END_REF] recalled here : ǫ(r) = E d + ∞ -∞ Γ ∞,s (r -r ′ ) : C 0 : ǫ * (r ′ )dV r ′ (2.19) where Γ ∞,s corresponds to the symmetric modified Green tensor Γ ∞,s ijkl (r -r ′ ) = - 1 2 [G ∞ ik,jl (r -r ′ ) + G ∞ jk,il (r -r ′ )] (2.20) Outside the inclusion, with volume V Ω , the eigenstrain tensor is null. Considering a uniform eigenstrain tensor in V Ω , the integral equation becomes : ǫ(r) = E d + V Ω Γ ∞,s (r -r ′ )dV r ′ : C 0 : ǫ * (2.21) To simplify notations, uniform vectors or tensors are replaced by their values such that one writes for example ǫ * (r) = ǫ * when r ∈ V Ω . Finally, the exact solution of this inclusion problem is given by : ǫ(r) = E d + P V Ω (r) : C 0 : ǫ * (2.22) where the fourth-order tensor P V Ω (r) denotes the so-called polarized Hill's tensor.
It is expressed as the integral over the inclusion volume of the symmetric modified Green tensor : P V Ω (r) = V Ω Γ ∞,s (r -r ′ )dV r ′ (2.23) Similarly, the final expression of the rotation tensor ω is : ω(r) = Ω d + V Ω Γ ∞,a (r -r ′ )dV r ′ : C 0 : ǫ * (2.24) where Γ ∞,a corresponds to the anti-symmetric modified Green tensor whose expression is : Γ ∞,a ijkl (r -r ′ ) = - 1 2 [G ∞ ik,jl (r -r ′ ) -G ∞ jk,il (r -r ′ )] (2.25) Eshelby's tensor, S 0 , is defined as the double dot product between Hill's polarized tensor and the fourth-order elastic stiffness tensor : S 0 (r) = P V Ω (r) : C 0 (2.26) The fourth-order tensor P V Ω (r) is uniform inside the inclusion Ω. As a result, S 0 , ǫ and σ are also uniform when r ∈ V Ω . The final expressions of the strain and stress tensors, inside and outside the inclusion, result from the combined use of intermediary results and newly-introduced notations : -when r ∈ V Ω , ǫ(r) = E d + S 0 : ǫ * (2.27) σ(r) = C 0 : E d + C 0 : [S 0 -I] : ǫ * (2.28) -when r / ∈ V Ω , ǫ(r) = E d + S 0 (r) : ǫ * (2.29) σ(r) = C 0 : E d + C 0 : S 0 (r) : ǫ * (2.30)
2.2 A generalized Tanaka-Mori scheme in heterogeneous elastic media with plastic incompatibilities.
In this paragraph, a Lippmann-Schwinger type equation is derived for microstructures with heterogeneous elasticity (due to twin reorientation) and plastic incompatibilities (eigenstrains due to the shearing in different types of twin domains). This is done by generalizing the original work of Tanaka and Mori [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], initially developed in the case of homogeneous elasticity. This scheme aims at predicting the development of internal stresses within twin and parent domains during the growth of first and second-generation twins. The proposed method considers a static configuration for an elastic medium with eigenstrains.
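Before moving to the double-inclusion case, the interior solution of Eq. 2.27 can be sketched numerically for the textbook case of a spherical inclusion embedded in an isotropic medium, for which Eshelby's tensor has a well-known closed form (see e.g. Mura). The Poisson ratio and the eigenstrain below are arbitrary illustrative values, not taken from the present chapter.

```python
import numpy as np

# Closed-form Eshelby tensor S0 for a spherical inclusion in an isotropic
# medium (sketch; nu = 0.3 is an arbitrary choice):
#   S0_ijkl = a * d_ij d_kl + b * (d_ik d_jl + d_il d_jk)
nu = 0.3
d = np.eye(3)
a = (5.0 * nu - 1.0) / (15.0 * (1.0 - nu))
b = (4.0 - 5.0 * nu) / (15.0 * (1.0 - nu))
S0 = a * np.einsum('ij,kl->ijkl', d, d) \
   + b * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d))

# A purely deviatoric shear eigenstrain (illustrative value)
eps_star = np.zeros((3, 3))
eps_star[1, 2] = eps_star[2, 1] = 0.5 * 0.131

E_d = np.zeros((3, 3))                                  # no applied macroscopic strain
eps_in = E_d + np.einsum('ijkl,kl->ij', S0, eps_star)   # Eq. 2.27, uniform in the inclusion
```

For a trace-free shear eigenstrain, the interior strain reduces to 2b times the eigenstrain, which illustrates how the surrounding medium only partially accommodates the stress-free transformation strain.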
The proposed method allows computation of the values and evolutions of internal stresses in a double inclusion of ellipsoidal shape (mimicking the geometry of a twin domain contained in either a parent phase or a primary twin), accounting for inclusion shape, relative shape and volume fraction effects.
Elasto-static Tanaka-Mori scheme
Generalized Tanaka-Mori scheme
This sub-paragraph presents both the twinning topology considered and the key steps in the derivation of a Generalized Tanaka-Mori scheme. Consider two ellipsoidal inclusions V b and V a such that V b ⊂ V a , with elastic moduli C b in V b and C a in the sub-domain V a -V b (Figure 2.2).
Figure 2.2 : Two ellipsoidal inclusions V b and V a (with V b ⊂ V a ) with prescribed eigenstrains ǫ * b in V b and ǫ * a in sub-region V a -V b and distinct elastic moduli C b in V b and C a (in sub-region V a -V b ).
The two inclusions are embedded in an infinite elastic medium, with elastic modulus C 0 , containing an overall uniform plastic strain, E p . The second-order tensor E d represents the imposed macroscopic strain. Such a geometry can yield a geometrical representation of both the first generation twin, i.e. volume V b is null and volume V a represents the first generation twin, and the second-generation twin, with V a and V b representing the first and second generation twins, respectively. In order to represent the shear deformation induced by twinning, two uniform eigenstrains, denoted with ǫ * b and ǫ * a , are respectively introduced in inclusion V b and in the sub-region V a -V b . Another uniform plastic strain, denoted by the second-order tensor E p , is introduced in the sub-domain V -V a in order to model the macroscopic plastic strain undergone by the specimen during mechanical testing (Figure 2.2).
Following the same steps as for the inclusion problem, the Navier type equation of this heterogeneous multi-inclusion problem is : C 0 : ∇.∇u(r) + f * (r) = 0 (2.31) where the virtual body forces f * , which result from both heterogeneous elasticity and incompatibilities (eigenstrains), are expressed as follows : f * (r) = ∇.(δC(r) : ǫ(r) -C(r) : ǫ * (r)) (2.32) The new Lippmann-Schwinger-Dyson type integral equations are then : ǫ(r) = E d - V Γ ∞,s (r -r ′ ) : δC(r ′ ) : ǫ(r ′ ) -C(r ′ ) : ǫ * (r ′ ) dV r ′ (2.33) ω(r) = Ω d - V Γ ∞,a (r -r ′ ) : δC(r ′ ) : ǫ(r ′ ) -C(r ′ ) : ǫ * (r ′ ) dV r ′ (2.34) θ b (r) and θ a (r) denote the characteristic functions associated to V b and V a , respectively. These functions are equal to one and zero inside and outside their corresponding volumes, respectively. The heterogeneous elastic properties and eigenstrains in the infinite body, V, can be expressed as spatially fluctuating fourth and second-order tensors, C(r) and ǫ * (r), respectively : C(r) = C 0 + δC(r) = C 0 + (C a -C 0 ) θ a (r) + (C b -C a ) θ b (r) (2.35) ǫ * (r) = E p + (ǫ * a -E p ) θ a (r) + (ǫ * b -ǫ * a ) θ b (r) (2.36) In order to simplify the subsequent calculations, rewrite the expressions of C(r) and ǫ * (r) in the following manner : C(r) = C 0 (1 -θ a (r)) + C a (θ a (r) -θ b (r)) + C b θ b (r) (2.37) ǫ * (r) = E p (1 -θ a (r)) + ǫ * a (θ a (r) -θ b (r)) + ǫ * b θ b (r) (2.38) Replacing the spatially varying elastic modulus and eigenstrain tensors by their expressions into the integral equation, one obtains the following expression of the strain field anywhere in the volume : ǫ(r) = E d - V -Va Γ ∞,s (r -r ′ ) : C 0 : E p dV r ′ - Va-V b Γ ∞,s (r -r ′ ) : C a -C 0 : ǫ(r ′ ) -C a : ǫ * a dV r ′ - V b Γ ∞,s (r -r ′ ) : C b -C 0 : ǫ(r ′ ) -C b : ǫ * b dV r ′ (2.39) Since lim V →∞ V Γ ∞,s (r -r ′ )dV r ′ = 0 and after rearrangement of the terms in the integrand, the strain field expression becomes : ǫ(r) = E d - Va Γ ∞,s (r -r ′ ) : C a -C
0 : ǫ(r ′ ) + C 0 : E p -C a : ǫ * a dV r ′ - V b Γ ∞,s (r -r ′ ) : C b -C a : ǫ(r ′ ) + C a : ǫ * a -C b : ǫ * b dV r ′ (2.40) Regardless of the respective shapes of both volumes, the strain field ǫ(r) given by Eq. 2.40 is not uniform. Exact solutions of Eq. 2.40 can be obtained via the use of the FFT method [START_REF] Moulinec | A numerical method for computing the overall response of nonlinear composites with complex microstructure[END_REF][START_REF] Moulinec | Intraphase strain heterogeneity in nonlinear composites : a computational approach[END_REF] or via high-order polynomial expansions of ǫ(r) [START_REF] Shodja | Elastic fields in double inhomogeneity by the equivalent inclusion method[END_REF]. To avoid the numerical difficulty associated with solving exactly the integral equation, and to proceed with realistic analytical derivations, the strains under the integrals are assumed to be equal to their averages over these volumes, i.e. ǭi = 1 V i V i ǫ(r)dV r (with i = {a, b}). ǫ(r) = E d - Va Γ ∞,s (r -r ′ ) : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a dV r ′ - V b Γ ∞,s (r -r ′ ) : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b dV r ′ (2.41) All uniform terms can then be extracted from the integrals. ǫ(r) = E d - Va Γ ∞,s (r -r ′ )dV r ′ : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a - V b Γ ∞,s (r -r ′ )dV r ′ : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b (2.42) As with other mean-field models [START_REF] Proust | Modeling texture, twinning and hardening evolution during deformation of hexagonal materials[END_REF][START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], strain fields are assumed to be uniform inside the inclusions and equal to their average values over these ellipsoidal volumes.
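The averaging approximation introduced above can be illustrated with a quick numerical sketch: a spatially varying strain field is replaced by its volume average over an ellipsoidal domain, estimated here by Monte Carlo sampling over the unit sphere. The linear strain field used below is an arbitrary illustrative choice, not one appearing in the chapter.

```python
import numpy as np

# Sample points uniformly in the unit ball (stand-in for an ellipsoidal domain)
rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(200000, 3))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0]

def eps_field(x):
    """Hypothetical linear strain field: a 23-shear varying with x0."""
    e = np.zeros((len(x), 3, 3))
    e[:, 1, 2] = e[:, 2, 1] = 0.01 + 0.05 * x[:, 0]
    return e

# Volume average, the discrete analogue of (1/V_i) * integral of eps over V_i
eps_bar = eps_field(pts).mean(axis=0)
```

By symmetry of the domain, the linearly varying part averages out and only the uniform shear of 0.01 survives, which is exactly the kind of information the mean-field approximation retains.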
The average strain in the inclusion V b is derived as follows : ǭb = E d - 1 V b V b Va Γ ∞,s (r -r ′ )dV r ′ dV r : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a - 1 V b V b V b Γ ∞,s (r -r ′ )dV r ′ dV r : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b (2.43) where V b Γ ∞,s (r -r ′ )dV r ′ and Va Γ ∞,s (r -r ′ )dV r ′ are uniform because V b and V a are ellipsoidal inclusions and r ∈ V b ⊂ V a (following Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF]). ǭb = E d - Va Γ ∞,s (r -r ′ )dV r ′ : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a - V b Γ ∞,s (r -r ′ )dV r ′ : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b (2.44) The average strain tensor in inclusion V a is derived following the same procedure. ǭa = E d - 1 V a Va Va Γ ∞,s (r -r ′ )dV r ′ dV r : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a - 1 V a Va V b Γ ∞,s (r -r ′ )dV r ′ dV r : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b (2.45) Since Va Γ ∞,s (r -r ′ )dV r is independent of r ′ , the order of integration in the second term of the previous equation can be changed according to the Tanaka-Mori theorem [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF].
ǭa = E d - 1 V a Va Va Γ ∞,s (r -r ′ )dV r ′ dV r : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a - 1 V a V b Va Γ ∞,s (r -r ′ )dV r dV r ′ : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b (2.46) Considering Eshelby's result [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF], Va Γ ∞,s (r -r ′ )dV r is uniform, because V b ⊂ V a , so that : ǭa = E d - Va Γ ∞,s (r -r ′ )dV r ′ : C a -C 0 : ǭa + C 0 : E p -C a : ǫ * a - V b V a Va Γ ∞,s (r -r ′ )dV r : C b -C a : ǭb + C a : ǫ * a -C b : ǫ * b (2.47) Similar to the inclusion problem, the following tensors P V i are defined from the volume integral of the symmetrized modified Green function : P V i (r) = V i Γ ∞,s (r -r ′ )dV r ′ (2.48) with i = {a, b}. Tensors S j (V i ), with j = {0, a, b} and i = {a, b}, are written as the double contracted product of the P V i tensors with the stiffness tensors, C j : S j V i = P V i : C j (2.49) As the inclusion shapes considered here are all ellipsoidal, both P V i and S j V i are uniform when r ∈ V i . Clearly, the different tensors S j V i are to be considered as Eshelby type tensors. Consequently, the approximated average strain in the inclusion V b is given by : ǭb = E d -S 0 (V a ) : E p -S a (V a ) -S 0 (V a ) : ǭa -S b (V b ) -S a (V b ) : ǭb + [S a (V a ) -S a (V b )] : ǫ * a + S b (V b ) : ǫ * b (2.50) And the approximated average strain in the inclusion V a is given by : ǭa = E d -S 0 (V a ) : E p -S a (V a ) -S 0 (V a ) : ǭa - V b V a S b (V a ) -S a (V a ) : ǭb + V a -V b V a S a (V a ) : ǫ * a + V b V a S b (V a ) : ǫ * b (2.51) Evaluation of the average strain in sub-region V a -V b is of interest here. It can be obtained from Eqs. 2.50 and 2.51 through the following relationship : ǭVa-V b = V a V a -V b ǭa - V b V a -V b ǭb (2.52) Equations 2.50 and 2.51 constitute an extension of the Tanaka-Mori observation to the case of a double-inclusion problem with heterogeneous elastic properties and eigenstrains.
Finally, the expression of the two unknown averaged strains within each volume is obtained by solving the following system of equations : ǭa = E d + ∆S a-0 Va : ǭa + V b V a ∆S b-a Va : ǭb + R a (2.53) ǭb = E d + ∆S a-0 Va : ǭa + ∆S b-a V b : ǭb + R b (2.54) with, ∆S a-0 Va = S a (V a ) -S 0 (V a ) (2.55) ∆S b-a Va = S b (V a ) -S a (V a ) (2.56) ∆S b-a V b = S b (V b ) -S a (V b ) (2.57) R a = S 0 (V a ) : E p + V a -V b V a S a (V a ) : ǫ * a + V b V a S b (V a ) : ǫ * b (2.58) R b = S 0 (V a ) : E p + [S a (V a ) -S a (V b )] : ǫ * a + S b (V b ) : ǫ * b (2.59) Solutions of Eqs. 2.53 and 2.54 are given by : ǭb = I + ∆S b-a V b - V b V a ∆S a-0 Va : I + ∆S a-0 Va -1 : ∆S b-a Va -1 : E d + R b (2.60) ǭa = I + ∆S a-0 Va -1 : E d - V b V a ∆S b-a Va : ǭb + R a (2.61) Interestingly, the expressions of ∆S a-0 Va , ∆S b-a Va , ∆S b-a V b , R a and R b defined in Eqs. 2.55-2.59 show that the generalization of the Tanaka-Mori method introduces a coupling between the averaged strain fields in each volume. Moreover, with the five S j (V i ) tensors introduced in Eq. 2.49, relative shape and volume fraction effects between both inclusions can be predicted. Note that solution of Eq. 2.50 is not trivial, as it requires inverting a general fourth-order tensor. Tensors S j (V i ) are obtained by use of a Gauss-Lobatto integration of the Green tensors in Fourier space. This method is similar to that used in the viscoplastic self-consistent (VPSC) [START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF] and elasto-plastic self-consistent (EPSC) [START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF] schemes.
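In practice, the coupled system of Eqs. 2.53-2.54 can be assembled and solved directly in a 6-dimensional vector notation rather than via explicit fourth-order tensor inversions. The sketch below does exactly that, following the equations as written; the Delta-S matrices and R vectors are random placeholders standing in for the Eshelby-type tensors S j (V i ), which in the actual scheme come from Gauss-Lobatto integration of the Green tensors in Fourier space.

```python
import numpy as np

rng = np.random.default_rng(0)
I6 = np.eye(6)
# Placeholder coupling tensors (Voigt-like 6x6 form), NOT physical values
dS_a0 = 0.1 * rng.standard_normal((6, 6))      # Delta S^{a-0}_{Va}
dS_ba_Va = 0.1 * rng.standard_normal((6, 6))   # Delta S^{b-a}_{Va}
dS_ba_Vb = 0.1 * rng.standard_normal((6, 6))   # Delta S^{b-a}_{Vb}
f = 0.05                                       # volume ratio V_b / V_a
E_d = np.array([0.001, 0.0, -0.001, 0.0, 0.0, 0.0])
R_a = np.zeros(6)
R_b = np.zeros(6)

# Move the unknowns to the left-hand side of Eqs. 2.53-2.54:
#   (I - dS_a0) e_a -       f * dS_ba_Va e_b = E_d + R_a
#   -dS_a0      e_a + (I - dS_ba_Vb)     e_b = E_d + R_b
A = np.block([[I6 - dS_a0, -f * dS_ba_Va],
              [-dS_a0, I6 - dS_ba_Vb]])
rhs = np.concatenate([E_d + R_a, E_d + R_b])
e_a, e_b = np.split(np.linalg.solve(A, rhs), 2)
```

Solving the 12x12 block system at once avoids nesting the two matrix inversions that appear in the closed-form solution of Eqs. 2.60-2.61, and makes the parent/twin coupling explicit in the off-diagonal blocks.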
Using Hooke's law, average stresses in V b and V a -V b are given by : σb = C b : ǭb -ǫ * b (2.62) σVa-V b = C a : ǭVa-V b -ǫ * a (2.63) The average stresses in V a are given by : σa = V b V a σb + V a -V b V a σVa-V b (2.64) As defined in the previous paragraph, the stored elastic energy per unit of volume is given by Φ = 1 2V V σ(r) : ǫ el (r)dV r (2.65) Applying now Hill's result for heterogeneous elasto-plastic media [START_REF] Hill | Elastic properties of reinforced solids : some theoretical principles[END_REF][START_REF] Mandel | Cours de Mécanique des Milieux Continus[END_REF] to a heterogeneous elastic matrix with plastic incompatibilities enables the derivation of a new closed form of the stored elastic energy density as follows : Φ = 1 2 Σ : E d + Φ int = 1 2 Σ : E d - 1 2V V σ(r) : ǫ * (r)dV r (2.66) where Σ and Φ int denote the macroscopic stress tensor and the internal part of the Helmholtz free energy density resulting from plastic incompatibilities, respectively. Both the macroscopic stress and plastic strain tensors are given by the two subsequent integral equations : Σ = 1 V V σ(r)dV r (2.67) E p = 1 V V B t (r) : ǫ * (r)dV r (2.68) with B, a fourth-order concentration tensor linking the virtual local stress fields that would have existed if the medium remained purely elastic to the macroscopic stress [START_REF] Hill | Elastic properties of reinforced solids : some theoretical principles[END_REF][START_REF] Mandel | Cours de Mécanique des Milieux Continus[END_REF]. In the present case, the macroscopic strain tensor, E d , can be computed from the homogeneous medium elastic constants and the macroscopic stress applied in the following manner E d = C 0 -1 : Σ + E p (2.69) Given the spatial expression of plastic incompatibilities (Eq.
2.36) and the mean-field approximation, the internal free energy density, Φ int , can be rewritten as : Φ int = - 1 2 V -V a V σV -Va : E p + V a -V b V σVa-V b : ǫ * a + V b V σb : ǫ * b (2.70) Because the medium is infinite, σV -Va is assumed to be equal to Σ. Consequently, Φ int = - 1 2 V -V a V Σ : E p + V a -V b V σVa-V b : ǫ * a + V b V σb : ǫ * b (2.71)
Relationship with the classical Eshelby results and the Nemat-Nasser and Hori solutions
Considering the particular case of an elastically homogeneous medium such that C b = C a = C 0 , Eq. 2.40 reduces to : ǫ(r) = E d - Va Γ ∞,s (r -r ′ )dV r ′ : C 0 : E p -C a : ǫ * a - V b Γ ∞,s (r -r ′ )dV r ′ : C 0 : ǫ * a -ǫ * b (2.72) Then, the average strains in V a and V b also reduce to : ǭa = E d + S 0 (V a ) : E p + V a -V b V a S 0 (V a ) : ǫ * a + V b V a S 0 (V a ) : ǫ * b (2.73) ǭb = E d + S 0 (V a ) : E p + S 0 (V a ) -S 0 (V b ) : ǫ * a + S 0 (V b ) : ǫ * b (2.74) where S 0 (V i ) = P V i : C 0 are the elastic tensors associated with C 0 and V i . As expected, Eqs. 2.73 and 2.74 correspond to the first extension of the Tanaka-Mori scheme observed by Hori and Nemat-Nasser [START_REF] Hori | Double-inclusion model and overall moduli of multi-phase composites[END_REF] and Nemat-Nasser and Hori [START_REF] Nemat-Nasser | Micromechanics : overall properties of heterogeneous materials[END_REF]. Note that in the case of homogeneous elasticity, ∆S a-0 Va , ∆S b-a Va and ∆S b-a V b are null, so that Eqs. 2.61 and 2.60 become respectively Eqs. 2.73 and 2.74.
Similarly, if one considers the eigenstrain in V a -V b to be null, the average strain in V b reduces to Eshelby's solution to the inclusion problem : ǭb = E d + S 0 (V b ) : ǫ * b (2.75)
Application to first generation tensile twinning in magnesium
For the present application, a slightly simplified version of the elasto-static Tanaka-Mori scheme described above is considered, since the matrix does not contain an overall plastic strain incompatibility, E p . The local elastic stiffness tensors, associated to each inclusion/twin domain, are related to that of medium "0" by simple rotation operations. Note that, for the sake of consistency, all calculations need to be performed in a reference coordinate system, chosen arbitrarily as that associated to the grain. As a result, the elastic moduli of the inclusions, expressed in the reference coordinate system, are given by : C c ijkl = C 0 mnpq R c im R c jn R c kp R c lq (2.76) where c = {a, b} and R c is the rotation matrix representing the misorientation of the inclusion "c" (Figure 2.2). Note that these cumulative rotations may transform transversely isotropic tensors into anisotropic tensors. The elastic constants of Magnesium expressed in the crystal reference frame are extracted from [START_REF] Kocks | Texture and anisotropy[END_REF] and given, in GPa, by : C 0 =
Twinning on the {10-12} planes is common to all h.c.p. materials. Because experimental measures of the development of internal strains within both a parent and a {10-12} twin phase are available for magnesium alloy AZ31 [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF], the generalized Tanaka-Mori method presented previously is applied to this problem.
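The rotation of Eq. 2.76 can be sketched with a tensor contraction. The transversely isotropic constants below are hypothetical round numbers (not the values of the reference cited above, which are not reproduced here), and the 86.3 degree rotation about an a-axis is only an illustrative choice of misorientation.

```python
import numpy as np

def voigt_to_tensor(Cv):
    """Expand a 6x6 Voigt stiffness matrix into the full 3x3x3x3 tensor."""
    idx = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
    C = np.zeros((3, 3, 3, 3))
    for I, (i, j) in enumerate(idx):
        for J, (k, l) in enumerate(idx):
            C[i, j, k, l] = C[j, i, k, l] = C[i, j, l, k] = C[j, i, l, k] = Cv[I, J]
    return C

def rotation_matrix(axis, angle):
    """Rodrigues formula: rotation by `angle` about the unit vector `axis`."""
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

# Hypothetical transversely isotropic constants (GPa), symmetry axis along e3
C11, C12, C13, C33, C44 = 60.0, 26.0, 22.0, 62.0, 16.0
C66 = 0.5 * (C11 - C12)
Cv = np.array([[C11, C12, C13, 0, 0, 0],
               [C12, C11, C13, 0, 0, 0],
               [C13, C13, C33, 0, 0, 0],
               [0, 0, 0, C44, 0, 0],
               [0, 0, 0, 0, C44, 0],
               [0, 0, 0, 0, 0, C66]])
C0 = voigt_to_tensor(Cv)

# Eq. 2.76: rotate the reference stiffness into the frame of a twin domain
R = rotation_matrix([1.0, 0.0, 0.0], np.deg2rad(86.3))
Cc = np.einsum('mnpq,im,jn,kp,lq->ijkl', C0, R, R, R, R)
```

A useful sanity check of the contraction is that a rotation about the symmetry axis e3 must leave a transversely isotropic tensor unchanged, whereas the rotation above does not, which is exactly the loss of transverse isotropy mentioned in the text.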
Therefore, the misorientation between V a -V b , representing the original parent crystal, and the "unbounded" body, V, is set to zero and the eigenstrain in the sub-region V a -V b , ǫ * a , is equal to the null tensor. In V b , a non-zero eigenstrain (only the shear components along the axes e ′ 2 and e ′ 3 are non null) is prescribed in order to restore the twinning shear, whose magnitude is equal to 0.131 according to [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]. Clearly, the use of a homogeneous eigenstrain within the twin domain is an approximation of the strain state within the twin domain, since all the twinning shear strain is effectively concentrated at the twin interface. Note that a similar approximation was made in Lebensohn et al. [START_REF] Lebensohn | A self-consistent anisotropic approach for the simulation of plastic deformation and texture development of polycrystals : Application to zirconium alloys[END_REF] to study twin nucleation in anisotropic materials. The local frame is associated to the twin domain : axis e ′ 1 is perpendicular to the twinning shear direction, η 1 , and lies in the undistorted plane, K 1 ; axis e ′ 2 is parallel to the twinning shear direction and also lies in the undistorted plane K 1 ; the third axis, e ′ 3 , along which thickening occurs, is the cross product of the first axis with the second one.
Figure 2.3 : Local frame (e ′ 1 , e ′ 2 , e ′ 3 ) associated with the {10-12} tensile twinning. The reference coordinate system (e 1 , e 2 , e 3 ) associated to the crystal structure and the crystallographic coordinate system (a 1 , a 2 , a 3 , c) are also shown.
The effect of both twin and grain shapes on the average internal stresses within each phase is studied first. In order to facilitate the understanding of relative shape effects in each inclusion (i.e. the parent grain and the twin phase), axes 1 and 2 of each ellipsoid have the same length.
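The twin eigenstrain described above can be written down explicitly: a simple shear of magnitude 0.131 expressed in the local twin frame, where e ′ 2 is taken along the shear direction and e ′ 3 along the thickening direction (the choice of which unit vectors represent the frame is, of course, illustrative).

```python
import numpy as np

# Simple-shear eigenstrain of the twin domain, in the local twin frame:
#   eps* = (s / 2) * (m (x) n + n (x) m)
# with s the twinning shear magnitude, m the shear direction (e2') and
# n the thickening direction / plane normal (e3').
s = 0.131
m = np.array([0.0, 1.0, 0.0])   # twinning shear direction, e2'
n = np.array([0.0, 0.0, 1.0])   # thickening direction, e3'
eps_star = 0.5 * s * (np.outer(m, n) + np.outer(n, m))
# Only the 23/32 shear components are non-zero, as stated in the text.
```

Written this way, the eigenstrain is automatically symmetric and trace-free, consistent with a volume-preserving twinning shear.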
R denotes the aspect ratio of the ellipsoid length along axis 1 divided by that along axis 3, which corresponds to the axis of thickening. Therefore, large values of R denote flat ellipsoidal domains while R equal to 1 describes a perfectly spherical domain. In order to isolate the effects of relative volume from the shape effects, the twin volume fractions are arbitrarily fixed here to 2.5 and 5 percent and R is varied in both the twin and parent domains. Figures 2.4a, 2.4b, 2.4c and 2.4d present the evolutions of the resolved shear stress on the twin system in the twin ((a) and (c)) (i.e. V b ) and in the parent ((b) and (d)) (i.e. V a -V b ) domains, respectively, as a function of the ratio R twin . Simulations are repeated for several initial grain shapes described with R parent . Figure 2.4 suggests several interesting shape effects. First, by comparing the values of the resolved shear stresses (RSS) in the parent and twin domains, it is found that for some grain and twin shapes the approach can predict the stress reversal, i.e. RSS of opposite signs in the twin and parent domains. This finding is in qualitative agreement with that experimentally measured in Aydiner et al. [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]. 3DXRD measurements showed that the difference in the RSS of the parent and twin domains depends on the twin volume fraction. For small twin volume fractions a stress reversal is observed. However, when the twin has reached a "critical" size the stress reversal is no longer observed [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]. Interestingly, it is found by comparison of Figures 2.4a and 2.4b that the relative shape effect between the parent and twin phases has a similar effect to that of the twin fraction discussed above.
Namely, for a given parent shape, described by R parent , an increase in R twin (e.g. flattening of the twin phase) leads to an increase in the resolved shear stress in the parent phase. Such an increase can affect the sign of the RSS in the parent domain. As the RSS in the twin domain remains negative, the relative shape effect shown here reveals that the occurrence of a stress reversal depends on the relative shape of the twin and parent phases. Second, as predicted by the elementary solution to the inclusion problem proposed by Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF] and shown in Figure 2.4a, it is found that the parent grain shape has no effect on the stress state within the twin phase. Furthermore, it is found that, at fixed relative volume fractions, the magnitude of the RSS in the twin domain decreases with an increase in R twin . In other words, internal stresses in the twin are the lowest in magnitude for a flat ellipsoidal twin and the highest for a spherical one. This is consistent with the experimentally observed twin shapes. Third, it is found that both the twin and grain shapes affect the stress state within the parent domain, but in opposite directions. Namely, while increasing R twin leads to an increase in the stress state within the parent grain, the same effect is produced by a decrease in R parent . Note that the stress state within the twin phase is far more affected by relative shape effects than that in the parent domain (Figure 2.4b). Note also that the dependence is affected by the relative twin volume fraction. Finally, comparing Figures 2.4a and 2.4c, corresponding to the two different twin fractions, shows that the twin volume fraction only affects the magnitude of the stress states within each phase but not the trends associated with changes in the shape of the twin and parent domains.
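The quantity plotted in Figure 2.4 is the projection of an average stress tensor onto the twin system. This projection is a standard Schmid-type contraction, sketched below with an arbitrary illustrative stress state (the numbers are not results from the chapter).

```python
import numpy as np

# Resolved shear stress on a twin system: RSS = sigma : sym(m (x) n),
# with m the twinning shear direction and n the twin-plane normal,
# here both expressed in the local twin frame.
m = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
schmid = 0.5 * (np.outer(m, n) + np.outer(n, m))

sigma = np.array([[10.0, 0.0, 0.0],
                  [0.0, -5.0, 8.0],
                  [0.0, 8.0, 2.0]])   # MPa, illustrative average stress

rss = np.tensordot(sigma, schmid)    # double contraction sigma : schmid
```

Applying this contraction to the average stresses of the twin and parent domains is what allows the sign comparison (stress reversal) discussed above.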
From these results, it is concluded that, in the range of twin fractions studied here, the orders of magnitude of the RSS in the parent and twin domains differ by a factor of ≈ 100, while experimental measures do not exhibit such a difference. It is to be expected that an elasto-plastic approach, as opposed to the purely elastic accommodation method proposed here, will limit the stress state within the twin domains, as the magnitude of the elastic incompatibility could then be accommodated by plastic deformation modes. However, this is out of the scope of the present work and will be the objective of a further study. As experimental measures of back-stresses within the twin and parent domains showed that their magnitude evolves during growth [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF], it is desired here to artificially and qualitatively reproduce the growth of a twin in a parent grain by computing the evolution of the RSS in both phases for a fixed grain shape, with R parent = 3, and a twin of increasing thickness. Initially, the twin is a flat ellipsoid with axes 1 and 2 such that the twin is spread over the entire surface available on the twin plane. Note here that, as opposed to the previous case, the simultaneous effects of both the shape and relative volume fraction are evaluated. As shown in Figure 2.5, where symbols denote the experimentally measured RSS in the twin domains (reported from [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]), substantial changes in the back-stresses are predicted during growth of the twin domain. However, since a purely elastic accommodation is used here, the magnitude of the stress reversal predicted is much larger than those measured at the plastic strain levels reached in Aydiner et al.
[START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF]. On the contrary, as depicted in Figure 2.5, experimental measures indicate that the magnitude of the back-stresses varies little with twin growth. Therefore, it is suggested from comparison with experimental measures [START_REF] Aydiner | Evolution of stress in individual grains and twins in a magnesium alloy aggregate[END_REF] that the elastic incompatibilities (evolving with twin shape and relative volume fraction effects) necessarily lead to an increase in plastic accommodation, via slip, during twin growth. This is likely to have a prominent role in the generation of the typically observed multi-lamellar twins, as the sequential nucleation of new twin lamellae allows for accommodation of elastic incompatibilities during twin growth. Figure 2.5 exhibits the evolution, with twin volume fraction and at a given parent geometry (R parent = 3), of the internal stresses in the twin and parent domains. As revealed by 3DXRD experiments and shown in Figure 2.4, the RSS in the parent phase is of opposite sign, i.e. positive, with respect to that in the twin phase when the twin volume fraction is small. In Figure 2.5, the change of sign occurs at a twin volume fraction equal to 6% and R twin equal to 2.3. For a twin volume fraction equal to 5%, and R twin and R parent respectively equal to 2.3 and 3, Figure 2.4d shows that the RSS in the parent domain is negative, whereas, as shown in Figure 2.5, the RSS in the parent domain reaches zero when the twin volume fraction reaches 6%. This observation highlights that the sign of the RSS in the parent domain depends on both the twin shape and the twin volume fraction.
The specificity of the present extension of the Tanaka-Mori scheme is the consideration of heterogeneous elasticity, the effect of which is to be discussed here.
As discussed in the previous section, Nemat-Nasser and Hori [START_REF] Nemat-Nasser | Micromechanics : overall properties of heterogeneous materials[END_REF][START_REF] Hori | Double-inclusion model and overall moduli of multi-phase composites[END_REF] dealt with problems in which two eigenstrains are placed in two overlapping inclusions. However, this scheme is limited to homogeneous elasticity. Hence, comparison of the two schemes allows for the investigation of the sole effect of elastic heterogeneity on the stress states in both parent and twin domains. Let us recall that, in the case of first generation twins, the eigenstrain in volume V a -V b is null. Consequently, the mean stresses in the twin, obtained from the Hori and Nemat-Nasser schemes, correspond to Eshelby's solution. In Figure 2.6, it is shown that elastic heterogeneity changes the trend of stress evolution considerably. This effect is especially appreciable for the internal resolved shear stress of the twin on the twinning plane. Indeed, one observes in Figure 2.6b that the solution using homogeneous elasticity does not capture the experimentally observed decrease of the RSS in the twin (Figure 2.5). However, Figure 2.6a shows that for small twin volume fractions, the present model and the Nemat-Nasser scheme display the same trend for the mean internal stresses in the twin whereas, for larger twin volume fractions, the influence of heterogeneous elasticity becomes important and the two models evolve in opposite directions.
2.3 The Double Inclusion Elasto-Plastic Self-Consistent (DI-EPSC) scheme
Encouraged by the promising results obtained with the approach introduced in the previous paragraph, the double inclusion elasto-plastic self-consistent scheme has then been developed.
It consists of an adaptation of the EPSC model [START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF][START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] that considers the direct mechanical interaction between parent and twin phases during the development of intra-granular twins. With reference to the experimental and modeling results of Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], the role of parent/twin interactions is examined, especially regarding the predictions of internal strains and stresses within twin domains. The present section is organized as follows : the first part introduces the new double inclusion elasto-plastic self-consistent (DI-EPSC) scheme ; concentration relations for non-twinned grains, twin and parent domains are detailed and the single crystal plasticity model is described. The second part is dedicated to quantifying the impact of the topological coupling between the twin and parent phases on model predictions, via the observation of latent effects induced by twinning and the comparison of results for an extruded and a randomly textured AZ31 alloy. Finally, the third part shows the influence of secondary slip on the material response and focuses on how to model the twin stress states at the onset of twinning, studying two limit initial configurations where twins are either assumed to have the same stress state as the parent domain or to be fully relaxed.
DI-EPSC model
The idea, mathematically derived in the following, is to distinguish grains that do not contain twins from grains that contain twin domains. In the self-consistent approach the polycrystal is represented as an ensemble of inclusions (i.e.
grains containing or not containing twins) and the average strains and stresses within each domain are obtained by solving a specific inclusion problem, yielding a concentration rule that relates the average local stress or strain fields to their macroscopic equivalents. Here, in the case of grains not containing twins, the inclusion is assumed to be embedded in a homogeneous equivalent medium (HEM) with properties and mechanical response corresponding to those of the polycrystal. Such concentration laws are similar to that initially derived by Hill [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF] and based on the work of Eshelby [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF]. In the case of grains containing twins, new concentration relations are derived based on a double inclusion topology, called DI-EPSC in the following. Figure 2.7a presents the current uncoupled approach [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] while Figure 2.7b presents the new coupled one.
General equations
The fourth-order tensors L eff and L i denote the linearized tangent moduli of the HEM and of a given phase 'i', respectively. The latter may correspond to a non-twinned grain 'c', a twinned grain 'g', a twin domain 't' or a parent domain 'g-t' associated with 'g'. The constitutive response of the HEM is thus given by : Σ = L eff : Ė (2.77) where Σ and Ė are respectively the macroscopic stress and strain rate tensors associated with the HEM. In a similar fashion, the constitutive equation of crystal 'i' is expressed as follows : σi = L i : ǫi (2.78) where σi and ǫi denote the stress rate and strain rate tensors of crystal 'i', respectively. Concentration tensors then provide a link between the local and global mean fields.
In the present formulation these are formally expressed as follows : ǫi = A i : Ė (2.79) σi = B i : Σ (2.80) Here A i and B i denote the strain and stress concentration tensors. These differ depending on whether or not the considered grain contains a twin domain. From homogeneous boundary conditions and averaging conditions, the overall mechanical response of the material is obtained by enforcing the macro-homogeneity conditions : Σ =< σi > (2.81) Ė =< ǫi > (2.82) Combining Eqs. 2.81, 2.82 and 2.77, the overall tangent modulus is obtained self-consistently as follows : L eff =< L i : A i >:< A i > -1 (2.83)
Integral equation
Considering spatial fluctuations, denoted δL, of the linearized tangent moduli with respect to a reference homogeneous medium with modulus L 0 , the tangent modulus L decomposes as follows : L(r) = L 0 + δL(r) (2.84) At any point r within the volume, the local constitutive equation is given by : σ(r) = L(r) : ǫ(r) (2.85) Upon introducing Eq. 2.84 into Eq. 2.85 and enforcing both static mechanical equilibrium and compatibility of the total local strain, one obtains a Navier-type equation for the heterogeneous medium : ∇.(L 0 : ∇ s u(r)) + f * (r) = 0 (2.86) where f * represents the body forces resulting from the heterogeneity within the medium and ∇ s u denotes the symmetrized displacement gradient : f * (r) = ∇.(δL(r) : ǫ(r)) (2.87) A solution to this boundary value problem is given by Lippmann-Schwinger-Dyson-type integral equations.
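The averaging relations above (Eqs. 2.81-2.83) translate directly into matrix algebra once fourth-order tensors are stored in a 6x6 convention. The following schematic sketch (not the thesis code; phase data are purely illustrative) shows Eq. 2.83 together with a single-phase sanity check:

```python
import numpy as np

def overall_tangent(f_list, L_list, A_list):
    """L_eff = <L_i : A_i> : <A_i>^-1 (Eq. 2.83); tensors stored as 6x6
    matrices, <.> is the volume-fraction-weighted average over phases."""
    LA = sum(f * (L @ A) for f, L, A in zip(f_list, L_list, A_list))
    Am = sum(f * A for f, A in zip(f_list, A_list))
    return LA @ np.linalg.inv(Am)

# Sanity check: identical phases with identity concentration tensors
# (so that <A_i> = I, consistent with Eq. 2.82) return the phase modulus.
L = np.diag([100.0, 100.0, 100.0, 40.0, 40.0, 40.0])
I6 = np.eye(6)
L_eff = overall_tangent([0.3, 0.7], [L, L], [I6, I6])   # equals L
```

In the actual scheme the concentration tensors A i are not identities; they come from the inclusion problems derived below.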
Following Lipinski and Berveiller [START_REF] Lipinski | Elastoplasticity of micro-inhomogeneous metals at large strains[END_REF], the strain field at any material point of position vector r is given by : ǫ(r) = Ė - ∫ V Γ ∞,s (r -r ′ ) : δL(r ′ ) : ǫ(r ′ )dV r ′ (2.88) where Γ ∞,s (r -r ′ ) denotes the symmetrized modified Green tensor, given by : Γ ∞,s ijkl (r -r ′ ) = - 1/2 (G ∞,s ik,jl (r -r ′ ) + G ∞,s jk,il (r -r ′ )) (2.89)
Double inclusion geometry for twinned grains and mean field approximation
In the present case the geometry of the problem is as follows : the twinned grain, embedded in the HEM, occupies a volume V g that contains one single twin domain of volume V t (such that V t ⊂ V g ). Therefore the parent volume is given by V g -V t . The tangent moduli of the overall inclusion (i.e. twin and parent phases combined), the twin domain and the parent domain are denoted L g , L t and L g-t , respectively. Similarly, their volume fractions are denoted f g , f t and f g-t . Let us introduce the characteristic functions θ g (r) and θ t (r) associated with V g and V t , respectively. These functions are equal to the identity inside their corresponding volumes and to the null tensor outside. Consequently, the spatially varying modulus fluctuation δL(r) can be expressed as : δL(r) = (L g-t -L 0 )θ g (r) + (L t -L g-t )θ t (r) (2.90) Note here that solely the stiffness tensors L t and L g-t are accessible via the constitutive law. Introducing Eq. 2.90 into Eq. 2.88, one obtains an integral expression of the local strain increments : ǫ(r) = Ė - ∫ Vg Γ ∞,s (r -r ′ ) : (L g-t -L 0 ) : ǫ(r ′ )dV r ′ - ∫ Vt Γ ∞,s (r -r ′ ) : (L t -L g-t ) : ǫ(r ′ )dV r ′ (2.91) Such an expression can also be solved exactly via the use of Fast Fourier Transforms or, as proposed here, can be approximated using the average strains in the different domains as, e.g., in Berveiller et al.
The volume average strain increment, denoted by ǫi , within a volume V i (with i=g, g-t, t) is defined as : ǫi = 1 V i V i ǫ(r)dV r (2.92) Substituting the local strain increment by its expression (Eq. 2.91), the previously defined volume average strain increment becomes : ǫi = Ė - 1 V i V i Vg Γ ∞,s (r -r ′ ) : (L g-t -L 0 ) : ǫ(r ′ )dV r ′ dV r - 1 V i V i Vt Γ ∞,s (r -r ′ ) : (L t -L g-t ) : ǫ(r ′ )dV r ′ dV r (2.93) In the following, this last equation is applied to 'i'='g' or 't'. Because V t and V g are ellipsoidal and V t ⊂ V g , the order of integration in the computation of the averaged strain increment fields can be inverted according to the Tanaka-Mori theorem [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF]. In the present double inclusion topology, the strain increment ǫ(r ′ ) present in Eq. 2.91 interferes in subdomains 't' and 'g-t' and is not uniform neither in V g nor V t even in the case of ellipsoidal shape inclusions. However, an intuitive approximation used here considers the mean fields ǫg and ǫt defined by Eq. 2.92 . One can introduce tensors P V i as the integral of the symmetrized modified Green function over volume V i : P V i (r) = V i Γ ∞,s (r -r ′ )dV r ′ (2.94) Similarly, let us introduce tensors S j V i as the double dot product of the P V i tensors with the incremental stiffnesses L j , with j=g, g-t, t, such that : S j V i = P V i : L j (2.95) Self-consistent solutions for twinned grains In the case of ellipsoidal inclusions, tensors P V i and S j V i are uniform when r ∈ V i . Note the similarity between the expression of the S j V i tensors and the Eshelby's tensor. Applying the self-consistent condition, i.e. 
L 0 = L eff , the strain increment solutions for the twin domains 't' and for the twinned grains 'g' -that include both the twin and the parent phases -are obtained as follows : ǫt = Ė -∆S g-t Vg : ǫg -∆S t V t : ǫt (2.96) ǫg = Ė -∆S g-t Vg : ǫg - (f t /f g ) ∆S t Vg : ǫt (2.97) with ∆S g-t Vg = S g-t Vg -S eff Vg (2.98) ∆S t V t = S t V t -S g-t V t (2.99) ∆S t Vg = S t Vg -S g-t Vg (2.100) After some algebraic manipulations on the system of equations 2.96 and 2.97, the concentration tensors in the twinned grains and in the twin domains, denoted A g and A t respectively, can be written as : A g = [I + ∆S g-t Vg - (f t /f g ) ∆S t Vg : (I + ∆S t V t ) -1 : ∆S g-t Vg ] -1 : [I - (f t /f g ) ∆S t Vg : (I + ∆S t V t ) -1 ] (2.101) A t = (I + ∆S t V t ) -1 : (I -∆S g-t Vg : A g ) (2.102) The average strain rate in the parent subdomains V g -V t is estimated here using a simple averaging procedure : ǫg-t = 1/(f g -f t ) [f g A g -f t A t ] : Ė (2.103) As a result, the associated concentration tensor, denoted A g-t , is given by : A g-t = 1/(f g -f t ) [f g A g -f t A t ] (2.104)
Self-consistent solutions for non-twinned grains
In the case of non-twinned grains, the tensors A c and B c are determined via Hill's classic self-consistent interaction law [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF] : σc -Σ = -L eff : ((S c ) -1 -I) : ( ǫc -Ė) = -L * ,c : ( ǫc -Ė) (2.105) Here S c denotes Eshelby's tensor, which depends on the grain shape and on the overall instantaneous elasto-plastic stiffness, I denotes the fourth-order identity tensor and L * ,c is Hill's constraint tensor [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF]. After simplification, the expression of the strain concentration tensor is given by the following equation : A c = (L c + L * ,c ) -1 : (L eff + L * ,c ) (2.106) It is noteworthy that setting f t = 0 in Eq. 2.101 allows us to retrieve Eq. 2.106.
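Given numerical values of the ∆S tensors, Eqs. 2.101-2.104 can be assembled in a few lines of 6x6 matrix algebra. The sketch below uses illustrative (non-physical) interaction tensors and checks the volume-average consistency f g A g = f t A t + (f g - f t ) A g-t , which holds by construction of Eq. 2.104:

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.eye(6)

# Illustrative (non-physical) DS tensors, small enough for the inverses
# below to exist; in the model they come from Eqs. 2.94-2.100.
dS_gt_Vg = 0.1 * rng.standard_normal((6, 6))
dS_t_Vt = 0.1 * rng.standard_normal((6, 6))
dS_t_Vg = 0.1 * rng.standard_normal((6, 6))

def concentration_tensors(f_g, f_t):
    """Eqs. 2.101-2.104 in 6x6 matrix form."""
    K = np.linalg.inv(I + dS_t_Vt)
    r = f_t / f_g
    # Eq. 2.101: solve [..] A_g = [..] instead of forming the inverse.
    A_g = np.linalg.solve(I + dS_gt_Vg - r * dS_t_Vg @ K @ dS_gt_Vg,
                          I - r * dS_t_Vg @ K)
    A_t = K @ (I - dS_gt_Vg @ A_g)                  # Eq. 2.102
    A_gt = (f_g * A_g - f_t * A_t) / (f_g - f_t)    # Eq. 2.104
    return A_g, A_t, A_gt

A_g, A_t, A_gt = concentration_tensors(f_g=1.0, f_t=0.05)
```

Setting f t = 0 in this sketch reduces A g to (I + ∆S g-t Vg ) -1 , mirroring the remark about recovering Eq. 2.106.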
Single crystal constitutive model
The crystal plasticity constitutive model adopted here is based on Schmid's law for slip activity and on an extended Voce hardening law, which are briefly summarized in the following. Given a deformation system "s" (i.e. either slip or twin) of a phase "i", which may correspond to a non-twinned grain "c", a parent domain "g-t" or a twin domain "t", and denoting m s the Schmid tensor of system "s", the first consistency condition simply states that plasticity could occur if the resolved shear stress (RSS) on system "s" is equal to a critical resolved shear stress (CRSS) denoted τ s , as follows : m s : σ i = τ s (2.107) The second condition states the necessity for the system "s" to remain on the yield surface during a deformation increment : m s : σi = τ s (2.108) In addition, for any system "s", the plastic shear strain rate is necessarily positive : γs > 0 (2.109) In the case of slip, a negative shear strain rate corresponds to a positive shear strain rate in the opposite direction. Because twinning is considered as pseudo-slip, twin nucleation is controlled by the three conditions presented previously (Eqs. 2.107, 2.108 and 2.109). Moreover, a twin forms with exactly the same hardening parameters and variables (e.g. CRSS) as those of its parent domain. Then, from purely kinematic considerations, the twin volume fraction evolves as follows : ḟ t = γt /s (2.110) where ḟ t denotes the twin volume fraction increment, γt the shear strain increment on the twinning system of the parent phase and s the characteristic twinning shear (s = 0.13). Note that the newly introduced topology does not allow for twin multiplicity. The direct coupling between the parent and twin phases strongly decreases the damping and stabilizing effect of the HEM. The new topology makes the twin and parent internal states and stiffnesses dependent on each other. Therefore, fulfilling the consistency conditions for both domains in the same iteration becomes more difficult. Following the tangent linearization of the material constitutive equations [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF][START_REF] Hutchinson | Elastic-plastic behaviour of polycrystalline metals and composites[END_REF][START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF], the relation between the shear strain increment and the strain rate in phase "i" is given by : γs = f s : ǫi (2.111) where f s = ∑ s ′ (X -1 ) ss ′ m s ′ : C i (2.112) C i is the elastic stiffness tensor of phase "i" and (X -1 ) ss ′ the inverse of a square matrix whose dimension equals the number of active systems in phase "i". It is expressed as : X ss ′ = m s : C i : m s ′ + V s (Γ)h ss ′ (2.113) The extended Voce hardening law is of the form : τ s = τ s 0 + (τ s 1 + θ s 1 Γ)(1 -exp(-θ s 0 Γ/τ s 1 )) (2.114) and the hardening rate is given by : τ s = ∑ s ′ V s (Γ)h ss ′ γs ′ (2.115) where V s (Γ) describes the hardening of slip system "s" with accumulated plastic strain Γ and h ss ′ describes the latent interactions between the different deformation systems : ∆τ s /∆Γ = V s (Γ) = θ s 1 + (θ s 0 -θ s 1 + θ s 0 θ s 1 Γ/τ s 1 )exp(-θ s 0 Γ/τ s 1 ) (2.116) τ s 0 , τ s 1 , θ s 0 and θ s 1 are hardening parameters presented in the following section. Finally, the instantaneous single crystal stiffness is given by the following formula : L i = C i : (I - ∑ s m s ⊗ ∑ s ′ (X -1 ) ss ′ m s ′ : C i ) (2.117) where the operator ⊗ denotes the tensor dyadic product.
Influence of initial twin stress state
From a modeling standpoint, the twinning transformation -inception and propagation of the twin -occurs out of equilibrium and at a very fast rate, leading to acoustic emission. It can thus be seen as an instantaneous process. From the physics standpoint, the twin/parent interaction strongly affects the internal states within the parent and twin phases.
Twin growth would tend to shear the parent domain in the twinning direction. To model such a complex mechanism, two distinct assumptions can be considered. In the first case, in order to respect both quasi-static stress equilibrium in twinned grains before and after twin occurrence and overall stress equilibrium conditions, the assumption consists of imposing the parent stress state as the initial twin stress state. Therefore, the Cauchy stress tensors of the parent and twin phases can be written, at the inception of the twin, as follows : σ t = σ g-t (2.118) This assumption implies that the stress in the parent is totally transmitted to the newly formed twin without accommodation. In the following, it will be referred to as the "unrelaxed initial twin stress state" estimate. However, another assumption considers that twins initially behave like cracks (i.e. total stress relaxation). In order to do so, we chose an elementary case in which the twin is taken as stress free at inception. Mathematically, this corresponds to a state in which the Cauchy stress tensor of the twin phase is initially equal to the null tensor : σ t = 0 (2.119) This assumption will be referred to as the "relaxed initial twin stress state" estimate. At the onset of twinning, and only at that time step, such an approach violates global equilibrium but minimizes the local energy. At the time step following nucleation, stress equilibrium is naturally restored via use of the self-consistent scheme. In both cases, the twin is assumed to behave elastically at nucleation. Therefore, Hooke's law is used to relate the initial elastic twin strain tensor to the twin domain stress tensor, which, under the assumption considered, is equal to either the parent domain stress tensor or the null tensor. The initial total strain is then expressed as the sum of the elastic twin strain and the plastic strain of the parent domain. The results of both approximations will be analyzed and discussed further in this chapter.
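The two estimates (Eqs. 2.118 and 2.119) can be made concrete with Hooke's law. The following sketch assumes an isotropic stiffness in 6x6 Voigt form and illustrative stress and plastic strain values; the actual model uses the anisotropic single crystal stiffness:

```python
import numpy as np

# Isotropic stiffness in 6x6 Voigt form (engineering shear strains);
# E and nu are assumed Mg-like values, not the model's anisotropic C.
E, nu = 45.0e3, 0.35                      # MPa
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
C = np.zeros((6, 6))
C[:3, :3] = lam
C[np.arange(3), np.arange(3)] += 2 * mu
C[np.arange(3, 6), np.arange(3, 6)] = mu
S = np.linalg.inv(C)                      # compliance

# Illustrative parent state at twin inception (Voigt vectors).
sig_parent = np.array([-80.0, -10.0, -5.0, 4.0, 0.0, 0.0])
eps_p_parent = np.array([-2e-3, 1e-3, 1e-3, 0.0, 0.0, 0.0])

# "Unrelaxed" estimate (Eq. 2.118): the twin inherits the parent stress.
sig_twin_unrelaxed = sig_parent.copy()
eps_twin_unrelaxed = S @ sig_twin_unrelaxed + eps_p_parent

# "Relaxed" estimate (Eq. 2.119): the twin is stress free at inception,
# so its initial total strain reduces to the parent plastic strain.
sig_twin_relaxed = np.zeros(6)
eps_twin_relaxed = S @ sig_twin_relaxed + eps_p_parent
```

In both branches the initial total strain is the sum of the elastic twin strain (from Hooke's law) and the plastic strain of the parent domain, as stated above.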
Computational flowchart
The computational flowchart shows how and when the tangent moduli and concentration tensors for non-twinned grains, twin and parent domains are numerically derived (see Figure 2.8). In Figure 2.8, subscripts "n" and "n-1" indicate the "self-consistency conditions" loop iteration considered. At each time step, a macroscopic strain increment is imposed. In order to return the macroscopic stress increment and to calculate the linearized tangent modulus of the HEM, the DI-EPSC code has to compute the local tangent modulus and the stress and strain tensors in all grains. Consequently, it has to deal with twinned and non-twinned grains. Consider first the case of a non-twinned grain. At the beginning of a given self-consistent loop iteration, denoted by "n" in the flowchart, all deformation systems fulfilling the first consistency condition (Eq. 2.107) are flagged as potentially active ; the Cauchy stress tensors used in Eq. 2.107 have been computed at the previous self-consistent loop iteration, denoted by "n-1". From these potentially active systems, the instantaneous grain tangent modulus is estimated using Eq. 2.117. The latter is expressed as a function of the elastic stiffness tensor and the active slip systems. Mathematically, the individual contribution of a slip system "s" is represented by the second-order tensor f s (Eq. 2.112), whose formula depends on the inverse of the reduced square matrix X (Eq. 2.113), and hence on the Voce hardening rate V (Eq. 2.116). The work of Hill [START_REF] Hill | Generalized constitutive relations for incremental deformation of metal crystals by multislip[END_REF] shows that, to obtain a unique stress-rate corresponding to a given strain-rate, it is sufficient that the matrix V s h ss ′ be positive semi-definite and the elastic stiffness tensor be positive definite. All latent and self-hardening coefficients h ss ′ are equal to 1.
Therefore, the hardening matrix V s h ss ′ is not always symmetric, but all its eigenvalues are positive or zero. However, it would have been physically incorrect to ignore the slip anisotropy.
Figure 2.8 - Computational flowchart of the DI-EPSC scheme
In addition, we observed numerically that the major symmetry of the local tangent moduli is almost respected. Differences between symmetric non-diagonal components of tangent moduli greater than 1 MPa never exceeded 15.6%. Because the elastic stiffness tensor components are orders of magnitude larger than any hardening rate V , the symmetry of the elastic stiffness tensor dominates the tangent modulus. Then, internal strains and stresses are computed using Hill's concentration relation (Eq. 2.106). The second consistency condition (Eq. 2.108), associated with a positive shear strain increment condition (Eq. 2.109), allows us to verify whether the flagged systems are truly active. The tangent modulus and the local strain and stress increments are computed and checked again until the fulfillment of these two conditions. The procedure used randomly eliminates one of the potentially active systems and re-iterates the calculation, until either all the systems considered give positive shear or all have been eliminated, in which case the considered phase is assumed to be elastic.
The latter is the case when simulating unloading using EPSC : in some grains the stress first 'slides back' along the yield surface, unloading plastic systems in succession, and eventually detaches from the yield surface. In other grains it detaches at the start of unloading and goes from a plastic to an elastic state in one incremental step. Note here that, if the RSS exceeds the CRSS, the stress tensor is proportionally scaled to put it back on the yield surface. However, this situation occurs very rarely. For non-twinned grains, there is no difference compared to the EPSC algorithm developed in [START_REF] Turner | A study of residual stresses in zircaloy-2 with rod texture[END_REF]. Consider now the case of a twinned grain. The algorithm deals with the parent and twin domains separately. The choice is made here to compute first the tangent moduli (Eq. 2.104) and stresses of the parent phases and then those of the twin phases (Eq. 2.102). As a result, in the case of parent domains, one must use the tangent moduli of the twin phases computed at the previous iteration. Such is not necessary when dealing with twin domains. It is found that this approach improves convergence. Once the computations on all grains have been performed, the overall tangent modulus, from which the self-consistency condition checks convergence, is calculated using Eq. 2.83. The major symmetry of the overall tangent modulus is enforced numerically. If the self-consistency condition is fulfilled, the DI-EPSC code returns the macroscopic stress increment (Eq. 2.77).
Application to AZ31 alloy
Material parameters and initial textures
The AZ31 alloy studied here is composed of 3 wt% Al, 1 wt% Zn, with restrictions on the transition impurities Fe, Ni and Cu. As shown in Figure 2 [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF].
The hardening parameters used in the present simulations are the same as those used in the work by Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] under the FIF assumption, which consists of assigning a finite volume fraction to the twins at nucleation. All values and parameters are presented in Table 2.1. In order to quantify the impact of the double inclusion topology, we did not re-fit the hardening model parameters, and we removed the "backstress" correction added by Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF] at twin inception. Moreover, all latent hardening constants were assumed to be equal to 1.
Figure 2.9 - Initial textures of (a) the extruded alloy (with the extrusion axis at the center of the pole figures) and (b) the randomly textured material
Latent effects induced by twinning
The DI-EPSC scheme introduces a more realistic topology for twinning, where twins are directly embedded in parent domains. Note that "latent" is meant here in the sense of capturing the effect of plasticity in the parent phase on hardening and hardening rates in the twin phase. All simulations presented in this paragraph consider the case of the extruded alloy. Only the unrelaxed initial twin stress state estimate is used in this section. Figure 2.10a shows the macroscopic response of the material, loaded along its extrusion axis, as obtained from mechanical testing and from the EPSC and DI-EPSC schemes. The experimental stress-strain curve is characteristic of twinning-dominated compression, with a plateau in the early stages of the deformation and then a progressive stress increase as twins grow and slip occurs in the twins [START_REF] Muránsky | On the correlation between deformation twinning and l 'uders-like deformation in an extruded mg alloy : in situ neutron diffraction and epsc4 modeling[END_REF].
As the parameters are not fitted to this set of data, differences between experimental measures and macroscopic predictions are to be expected. In spite of the different topologies, the macroscopic stress-strain curves predicted by the DI-EPSC scheme and the EPSC scheme are nearly identical. However, Figure 2.10b, which describes the evolution of the total twin volume fraction in the polycrystal, reveals that the total twin volume fraction predicted by DI-EPSC is lower than that predicted by EPSC. Predicting a similar overall mechanical response but different twin volume fractions necessarily implies significant differences in the calculation of internal stresses and in the selection of active slip systems within the twin and parent domains. Figure 2.11 presents the slip and twinning system activities in both the parent and twin domains as predicted by the EPSC and DI-EPSC schemes. In both cases twinning occurs in the early stages of compression : at 1% strain, 80% of the total number of twins have been created. Before twins appear, basal slip is the prevalent active slip system in the parent phase due to its low CRSS. Once twinning is activated, the activity of basal slip within the parent phase decreases strongly, while prismatic slip followed by pyramidal slip are activated, consistent with the predictions of Capolungo et al. [START_REF] Capolungo | Slip-assisted twin growth in hexagonal close-packed metals[END_REF]. Regardless of the topology used (i.e. EPSC vs DI-EPSC), similar responses in the parent phases are to be expected. Plasticity in the twins and the morphology of the twin domains do not significantly affect the average stress state within the parent phase. Within the twin domains, though, the two approaches yield drastically different predictions of the slip system activity. At strains lower than 1%, basal slip is the predominant deformation mode in the twins.
This is to be expected, as the twin orientation combined with the imposed stress state favors basal slip activation. However, as twins grow, the current model predicts that basal slip activity drops in favor of pyramidal slip. In spite of a CRSS nearly twice that of prismatic slip and eight times that of basal slip, pyramidal slip is extensively activated within the parent and twin phases. Adapting the micromechanical scheme to the twinning topology directly enforces the twin/parent interaction and reveals another latent hardening effect, one that is not generally described by single crystal plasticity models. Indeed, comparing the resolved shear stresses (RSS) projected on the twinning plane along the twinning direction within the twin and parent phases of an arbitrarily chosen twinned grain, one observes that the new DI-EPSC scheme predicts both a higher strain hardening rate and a higher RSS in the twin than the EPSC scheme (Figure 2.12). With the choice of hardening parameters associated with tensile twinning, hardening cannot occur on the twin system. Although not shown here, the effect of grain morphology on slip system activities was studied. From the pioneering work of Tanaka and Mori [START_REF] Tanaka | Note on volume integrals of the elastic field around an ellipsoidal inclusion[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF], it is known that, for homothetic inclusions, grain and twin shapes have a negligible contribution. This can be seen in Eqs. 2.92-2.93, in which the Eshelby-type tensors introduce the relative shape effects. When considering different parent and twin morphologies, via the Eshelby-type tensors, it is found that the twin shape does not affect the stress state within the parent domains. However, ellipsoidal twin shapes tend to marginally increase the stress level within the twin domains but do not affect the selected slip systems.
At the onset of twinning, twin volume fractions are too small for twin morphologies to influence the internal stress and strain states. In the early stages of twin growth, volume fraction effects have a significant effect on local stress levels. Interestingly, the direct coupling between the parent and twin phases leads to far more scattered accumulated plastic strain distributions. Figure 2.13 shows both the total shear strain in each single twin and parent domain and the averaged total shear strain in the twin and parent domains as obtained from the DI-EPSC and EPSC schemes. It appears that the standard deviation of the distribution of total accumulated shear strain in the twin phases obtained from the new coupled approach is four times higher than that obtained from the previous uncoupled approach. This phenomenon is more visible in Figure 2.14, where the total accumulated shear strain distributions derived from the DI-EPSC scheme are less evenly distributed than those predicted by the EPSC scheme. The bin size was optimized using the following formula : w = 3.49 X n^(-1/3), where w denotes the bin width, X the standard deviation associated with the total shear strains and n the number of twins. Note here that points corresponding to grains for which numerical stability cannot be guaranteed shall be disregarded. Nonetheless, the predictions seem more representative, as they directly result from the concentration relations (Eqs. 2.93-2.95). However, as shown by the black symbols, the averaged total shear strains in the twin domains seem to be insensitive to the new coupling, while the averaged total shear strain in the parent domains is slightly decreased. That is precisely why, in spite of a higher strain hardening in the twins and a smaller twinned volume, the DI-EPSC scheme predicts a macroscopic stress-strain curve nearly identical to that obtained from the EPSC scheme without the backstress correction.
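The bin-width rule quoted above (Scott's rule, w = 3.49 X n^(-1/3)) is straightforward to apply; a short sketch with mock shear strain data (illustrative values, not model output):

```python
import numpy as np

def scott_bin_width(x):
    """w = 3.49 * std(x) * n**(-1/3), Scott's rule for histogram bin width,
    as used for the total shear strain distributions discussed above."""
    x = np.asarray(x, float)
    return 3.49 * x.std(ddof=1) * len(x) ** (-1.0 / 3.0)

# Mock twin shear strain sample.
rng = np.random.default_rng(1)
strains = rng.normal(0.02, 0.005, size=1000)
w = scott_bin_width(strains)
n_bins = int(np.ceil((strains.max() - strains.min()) / w))
```

NumPy's `numpy.histogram_bin_edges(x, bins='scott')` implements the same rule, which gives a quick cross-check.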
Influence of initial texture

To test the coupling between the parent and twin phases in a more general way, i.e. with a broader spectrum of active slip and twinning systems, the case of an initially randomly textured AZ31 alloy is investigated. In this paragraph, we only consider simulations regarding the unrelaxed initial twin stress state estimate. Figure 2.14 shows that total shear strain distributions in twin phases derived from the DI-EPSC scheme are centered around a few peaks while they are more evenly distributed when derived from the EPSC scheme. The heavy centre of the DI-EPSC distributions is interpreted as resulting from the orientation of parent domains, which was initially favorable to an easy activation of twinning and secondary slip occurrence. As deformation occurs and plasticity develops, parent grain configurations change, and therefore plastic strain accommodation in the twin domains is affected as a direct result of the new concentration laws. Although not shown here, the diversity of grain orientations limits the number of parent grains favorably oriented for twinning and, hence, lowers the total twin volume fraction in the case of an initially non-textured material. In addition, Figure 2.13 reveals that initial texture has a marginal effect on the averaged total accumulated plastic strain in twin and parent domains because secondary slip and plastic strain accommodation are controlled by criteria which are independent of the initial texture. However, Figure 2.14 shows that plastic strain accommodation in both the parent and twin phases depends on the parent-twin interaction. Moreover, even with an initial random texture, pyramidal slip remains the predominant active slip system in twins, but it is significantly less present compared to the first case with the extruded alloy (Figure 2.15).
Influence of initial twin stress state

Section 2.3.1.7 introduces the modeling challenges induced by twin inception and presents the two approximations that are used throughout the paper. The present section focuses on analyzing and quantifying the influence of the initial twin stress state on the mechanical response of an extruded AZ31 alloy. The evolution of the RSS projected on the twinning plane along the twinning direction in the cases of initially relaxed and unrelaxed twins is shown in Figure 2.15. It reveals that, with initially relaxed twins, the strain hardening rate in the twin predicted by the DI-EPSC scheme is higher in the first 3% of deformation and then stabilizes at a value close to the one observed with unrelaxed twins. However, the new coupled approach predicts both a higher strain hardening rate and a higher strain hardening in the twins regardless of the initial stress state of the twins. In addition, comparison with the predictions obtained from the EPSC model with initially relaxed and unrelaxed twins shows that the strain hardening rate in the twin domains is not solely controlled by the twin-parent interaction in the case of the DI-EPSC model or by the twin-HEM interaction in the case of the EPSC model. The strain hardening rate and, hence, hardening are strongly dependent on the considered initial twin stress state. In parallel, the total twin volume fraction tends to increase with relaxed twins (Figure 2.10b). As expected and shown in Figures 2.10a and 2.16, initially relaxed twins lower both global and local stress levels, which become closer to experimental ones. In addition, the averaged pyramidal slip activity in each twin is significantly lowered as compared to the stress equilibrated case (Figure 2.17). Note that imposing a null Cauchy stress tensor in the twin domain at twin inception is a lower bound case.
Another way to account for the stress accommodation induced by twinning consists of using an eigenstrain, representative of that due to the twinning shear, within the twin domain at the onset of twinning. However, considering both twinning as pseudo-slip in the parent phase and twinning shear as an eigenstrain within the twin phase appears mechanically redundant. Therefore, an alternative approach would impose the eigenstrain within the twin domain and consider only slip as a deformation mechanism in the parent domain.

Conclusion

A new double-inclusion micromechanical approach, generalizing the original Tanaka-Mori scheme, is introduced to study the evolution of internal stresses in both parents and twins during twin growth in h.c.p. materials. A first elasto-static scheme in heterogeneous elastic media with plastic incompatibilities was derived. The model was shown to reduce to the Nemat-Nasser and Hori scheme as well as to the elementary inclusion problem of Eshelby in particular situations (e.g. homogeneous elasticity) and was first applied to the case of pure Mg to reproduce the average internal resolved shear stresses in the parent and the twinning phases. While this first study is limited to anisotropic elasticity with eigenstrains representing the twinning shears, it is suggested that the magnitude of these backstresses is sufficient to induce plastic deformation within twin domains. Moreover, a detailed analysis of the model shows that the predominant effect on the magnitude and the direction of backstresses is due to heterogeneous elasticity because of large induced misorientations between the parent and the twin domains. It is also found here that the stress state within twin domains is largely affected by the shape of the parent phase. Clearly, all results shown here are limited to static configurations and neglect internal variable evolutions.
Nonetheless, it is suggested that application of the generalized Tanaka-Mori scheme to mean-field self-consistent methods shall yield more accurate predictions of the internal state within twin domains for real polycrystalline hexagonal metals like magnesium and associated alloys. A second study then investigated the evolution of internal stresses, strains and plasticity within twin and parent domains of Mg alloys via a new double-inclusion-based elasto-plastic self-consistent scheme (DI-EPSC) that uses the original Tanaka-Mori result to derive new concentration relations including average strains in twins and twinned grains (double inclusions). Twinned and non-twinned grains are then embedded in an HEM whose effective behavior is determined in an implicit nonlinear iterative self-consistent procedure (the "DI-EPSC" model). Contrary to the existing EPSC scheme, which only considers single ellipsoidal inclusions, the new localization relations account for a direct coupling between parent and twin phases. Using the same hardening parameters as in [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], comparison between the EPSC scheme, the DI-EPSC scheme and experimental data leads to three main results with respect to twinning and associated plasticity mechanisms. First, it appears that introducing a new topology for twinning, which captures latent effects induced by twinning in the parent phases, makes it possible to predict the influence of plasticity on hardening and hardening rates in the twin phases. Second, because twins are now directly embedded in the parent phases, the new concentration relations lead to more scattered shear strain distributions in the twin phases. Twin stress states are strongly controlled by the interaction with their associated parent domains. Third, the study clearly shows the importance of appropriately considering the initial twin stress state at twin inception.
The Electron Backscatter Diffraction (EBSD) technique is based on the collection of backscattered electrons and the indexing of Kikuchi diffraction patterns. EBSD scans provide a profusion of information regarding the texture, the presence of grain boundaries, grain morphologies, and the nature and crystallographic orientation of the different phases present in the material. The EBSD technique is particularly interesting because it enables the extraction of statistical information about the microstructure and the automatic computation of metrics that can, for example, be used to guide models. The present chapter introduces a new automated data collection numerical tool [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF] that uses EBSD scans to generate statistical connections between twin features, microstructure and loading path. The first, second and third parts of this chapter describe scanning electron microscopes, the EBSD technique and the basic concepts of electron diffraction and EBSD pattern analysis, respectively. The final section describes the newly developed graph theory-based automated twin recognition technique for EBSD analysis.

Brief description of Scanning Electron Microscopes

In a scanning electron microscope, a beam of high energy electrons, emitted by either a thermionic gun or a field emission gun (FEG), hits the sample. A system of electromagnetic lenses, also referred to as condensers, and coils enables the user to focus the beam and scan the whole surface of the sample. All incident electrons have quasi-parallel trajectories. Although it depends on the chosen voltage (between 0.1 kV and 30 kV), the diameter of the incident beam does not exceed a few nanometers. When incident electrons penetrate into the sample, they interact both elastically (i.e. with no energy loss) and inelastically with atoms and electrons present at or near the surface.
These interactions result in the emission of secondary electrons (SE) (i.e. electrons produced by inelastic interactions of high energy electrons with valence electrons, causing the ejection of the electrons from their atomic orbitals), backscattered electrons (BSE) (i.e. electrons produced by elastic interaction of high energy beam electrons with atom nuclei), characteristic X-rays and photons. Elastic scattering consists of the deviation of the electron trajectory by the nucleus of an atom without loss of energy. Because of the mass difference between the electron and the nucleus, energy transfers are negligible; the energy loss induced by elastic scattering is smaller than 1 eV, but the deviation angle can be large. As a result, backscattered electrons are assumed to have the same energy as beam electrons. It has also been shown that the number of backscattered electrons increases with the atomic number, Z. Depending on the input voltage and the Z number, the penetration depth will vary from a few nanometers to 20-30 nanometers. In contrast, inelastic scattering induces a progressive loss of energy due to transfers between high energy electrons and valence electrons belonging to the different atomic orbitals of specimen atoms. These high energy electrons can either be incident beam electrons, also called primary electrons, or backscattered electrons. The excitation and the ionization of the sample atoms result in the emission of secondary electrons with low deviation angle and low energy (0-50 eV), X-rays, Auger electrons and photons. Secondary electrons can also be produced after backscattered electrons strike pole pieces or other solid objects located in the vicinity of the sample. They are emitted isotropically from the superficial layers of the specimen, i.e. layers whose depth is a few nanometers in the case of metals or 20-30 nm in the case of non-conducting materials.
The secondary emission yield, defined as the ratio of the number of secondary electrons to the number of primary electrons, increases when the energy, i.e. the voltage, of the beam electrons decreases. Lower primary energies induce slower incident electrons. Except for light atoms (Z < 20), the secondary emission yield does not vary with atomic mass. However, the secondary emission yield has to be corrected, since a significant part of the detected secondary electrons consists of backscattered electrons. In addition, the ionization of atomic orbitals close to the nucleus triggers the emission of characteristic X-rays and Auger electrons, as shown in Figure 3.3. The mechanisms involved in the emission of X-rays and Auger electrons, which consist of electronic transitions between the ionized atomic orbital and outer orbitals, lead the atom back toward its state of equilibrium. Energy levels associated with the different types of electrons mentioned in this paragraph are graphically represented in Figure 3.4. Therefore, any standard SEM includes a secondary electron detector. However, it is common to equip scanning electron microscopes with backscattered electron detectors and X-ray spectrometers, as shown in Figure 3.2. Two types of electron guns exist, i.e. thermal electron guns and field emission guns. The first use either a heated tungsten wire or a lanthanum hexaboride crystal. Field emission guns can either be of the cold-cathode type, using tungsten single crystal emitters, or of the thermally assisted Schottky type (Figure 3.1), using emitters made of zirconium oxide. While tungsten thermal guns emit electrons with voltages between 10 kV and 30 kV, field emission guns are capable of producing incident electron beams whose voltages are between about 0.1 kV and 30 kV. The choice of the electron gun type has to be based on the nature of the materials studied.
The resolution of a microscope can be defined as its ability to distinguish and separate very close "objects". Its resolution is altered by monochromatic and chromatic aberrations resulting from the geometry and the variation of the refractive index of lenses, respectively. In addition to aberrations, the resolution of an optical microscope is limited by the diffraction of light. Therefore, the image of a point through a lens is not another point but a diffracted disk, called the Airy disk. The consequence of this phenomenon is that disks corresponding to the images of two distinct points may overlap. Abbe's theory states that the resolution limit of a microscope is proportional to the incident wavelength and inversely proportional to the refractive index of the medium. Consequently, decreasing the wavelength improves the resolution limit and, hence, allows users to observe finer details. Using X-rays would then be ideal, but they cannot be focused. However, it is possible to generate electron beams whose wavelengths are of the same order of magnitude as X-rays. In both transmission and scanning electron microscopes, electrons are focused by several electromagnetic lenses.

Historical perspectives of the Electron Backscatter Diffraction Technique

The origins of the EBSD technique date back to 1928, when Shoji Nishikawa and Seishi Kikuchi pointed a beam of 50 keV electrons on a cleavage face of calcite, inclined at 6° to the vertical [START_REF] Kikuchi | Diffraction of cathode rays by mica[END_REF][START_REF] Nishikawa | The diffraction of cathode rays by calcite[END_REF]. Backscattered electrons were collected on photographic plates positioned perpendicular to the primary beam at 6.4 cm behind and in front of the sample. Recorded patterns displayed black and white lines, as revealed by Figure 3.5. The existence of line pairs is due to multiple scattering and selective reflection.
Further developments included electron channeling patterns, studied by Joy [START_REF] Joy | Electron channeling patterns in the SEM[END_REF] at Oxford University, Kossel diffraction, by Biggin and Dingley [START_REF] Dingley | A general method for locating the x-ray source point in kossel diffraction[END_REF] at Bristol University, and electron backscatter patterns (EBSP), by Venables and Harland [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] at the University of Sussex. Note that EBS patterns are nothing else than Kikuchi patterns. Venables and Harland [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] obtained them with a TV camera and phosphor screens. Moreover, Venables [START_REF] Venables | Crystallographic orientation determination in the SEM using electron backscattering patterns and channel plates[END_REF] found an easy way to locate the pattern center, defined as the point of the phosphor screen at the shortest distance from the impact area where the electron beam hits the sample. To do so, Venables placed three spheres on the sample surface, whose projections appear elliptical on the pattern (Figure 3.6). The intersection point of the three major axes corresponds to the pattern center. In 1984, Dingley [START_REF] Dingley | Diffraction from sub-micron areas using electron backscattering in a scanning electron microscope[END_REF] developed and implemented an indexing algorithm capable of locating the pattern center numerically. In 1987, the first indexing software, based on Dingley's code, was released by Link Analytical, now Oxford Instruments. Dingley's model is still used by current EBSD systems.
Five years later, Krieger-Lassen, Conradsen and Juul-Jensen [START_REF] Krieger Lassen | Image processing procedures for analysis of electron back-scattering patterns[END_REF] used the Hough transform, originally developed by Hough [START_REF] Hough | Method and means for recognizing complex patterns[END_REF] in 1962 to track high energy particles, to automatically detect and identify Kikuchi bands. The use of the Hough transform allows the system to transform parallel bands into collections of points. In 1993, Brent Adams [START_REF] Adams | Orientation imaging : the emergence of a new microscopy[END_REF], from Yale University, introduced the term "Orientation Imaging Microscopy" to describe the procedure that generates an orientation map. The technique consists in representing pixels of similar orientation with a unique color.

Figure 3.5 - Boersch (1937) iron Kikuchi patterns [START_REF] Boersch | About bands in electron diffraction[END_REF][START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF]

Adams and Dingley founded TexSEM Laboratory, alias TSL, to release their EBSD analysis system and chose Thermo Noran to distribute it. In 1999, EDAX purchased TSL, depriving Thermo Noran of their EBSD analysis system. Thermo Noran turned to Robert Schwarzer [START_REF] Krieger Lassen | Automated crystal lattice orientation mapping using a computer-controlled sem[END_REF], from TU Clausthal, who developed his own system for orientation and texture measurements, and to Joe Michael and Richard Goehner [START_REF] Michael | Advances in backscattered electron kikuchi patterns for crystallographic phase identification[END_REF], from Sandia National Laboratories. Joe Michael and Richard Goehner were amongst the first to use EBSD for phase identification.
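The band-detection idea can be illustrated with a minimal line Hough transform: each foreground pixel votes for all (θ, ρ) pairs of lines passing through it, so collinear pixels accumulate into a single peak in the accumulator. The sketch below uses a synthetic set of pixels, not a real pattern; production EBSD systems use optimized, filtered variants of this transform.

```python
import math

def hough_lines(points, width, height, n_theta=180, n_rho=200):
    """Accumulate votes in (theta, rho) space for a set of foreground
    pixels; a straight line maps to a single peak in the accumulator."""
    diag = math.hypot(width, height)
    acc = [[0] * n_rho for _ in range(n_theta)]
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            # Map rho from [-diag, diag] to a bin index.
            r = int((rho + diag) / (2 * diag) * (n_rho - 1))
            acc[t][r] += 1
    return acc

# Toy example: 50 pixels lying on the horizontal line y = 10.
pts = [(x, 10) for x in range(50)]
acc = hough_lines(pts, 50, 50)
best = max((acc[t][r], t, r) for t in range(180) for r in range(200))
```

For the horizontal line, the peak collects all 50 votes at θ = 90° (t = 90), illustrating how a band becomes a single point in Hough space.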
There also existed another software package, very popular amongst geologists, released by HKL Technologies and based on the work of Schmidt [START_REF] Schmidt | Computer aided determination of crystal orientation from electron channeling patterns in the sem[END_REF]. This software was widely used by scientists willing to study minerals because it included very efficient low-symmetry indexing algorithms. In April 2005, Oxford Instruments purchased HKL Technologies. Currently, both EDAX and Oxford Instruments continue to develop and sell their own EBSD analysis software.

Basic concepts of electron diffraction and diffraction pattern analysis

Regarding the technique itself, at each measurement point the electron beam hits the sample surface, tilted at about 70° from the horizontal and polished beforehand. Primary accelerated electrons are either transmitted or reflected by the specimen atoms. The EBSD detector (Figures 3.2 and 3.8) collects backscattered electrons with low energy losses. Modern EBSD detectors are usually made of a phosphor screen, a compact lens to channel the electrons, and low and high resolution CCD camera chips for fast and slow measurements, respectively. The impact of electrons on the phosphor screen produces light that is converted into an electric signal by the CCD camera chips. The different Kikuchi bands and the pattern center are then identified by using an optimized Hough transform and Dingley's method, respectively. From the pattern center and the Kikuchi bands, EBSP software is capable of identifying the crystallographic structure and the orientation of the region struck by the beam. This step is called indexing and is repeated for each Kikuchi pattern recorded by the CCD chips, i.e. for each measurement point. Note that samples are usually scanned following a square or hexagonal grid.

Figure 3.6 - Image of a Kikuchi pattern obtained by using a phosphor screen and a TV camera, illustrating the method developed by Venables [START_REF] Venables | Electron back-scattering patterns -a new technique for obtaining crystallographic information in the scanning electron microscope[END_REF] to locate the pattern center. Elliptical black shapes correspond to the projected shadows of the three spheres placed at the surface of the sample [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF].

Electron diffraction techniques rely on the principle of wave-particle duality asserted by Louis de Broglie in 1925. According to this principle, a beam of accelerated electrons is a wave whose wavelength is inversely proportional to the particle velocity. Wavelength and velocity are related by the following expression

λ = h/p = h/√(2meV) (3.1)

where h denotes Planck's constant, i.e. 6.626 × 10⁻³⁴ J·s, p the momentum of the electrons, V the potential used to accelerate the beam, m the mass of an electron, i.e. 9.10938215 × 10⁻³¹ kg, and e the elementary electric charge of an electron, i.e. 1.6021765 × 10⁻¹⁹ coulombs. This equation is the fundamental relationship of wave mechanics. In 1927, Davisson and Germer [START_REF] Davisson | Diffraction of electrons by a crystal of nickel[END_REF] verified de Broglie's relationship by observing diffracted electrons after a beam of 54 eV struck a single crystal of nickel. Bragg's condition for constructive interference is written as

2 d_hkl sin θ = nλ (3.2)

with d_hkl the distance between two successive lattice planes whose family plane Miller indices are (h, k, l), and n an integer denoting the order of reflection. As shown in Figure 3.9, the incident electron beam is diffracted at an angle 2θ. Thus θ is also the angle between the incident beam and the lattice planes. While incident primary electrons have a narrow range of energies and directions, the inelastic scattering undergone by backscattered electrons broadens the spectrum of energies.
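Equations 3.1 and 3.2 are easy to evaluate numerically. The sketch below uses illustrative values (20 kV is a typical EBSD accelerating voltage, and the lattice spacing is an arbitrary example): the non-relativistic electron wavelength comes out at a few picometers, and the first-order Bragg angle on the order of 1°.

```python
import math

H = 6.626e-34         # Planck constant, J.s
M_E = 9.10938215e-31  # electron mass, kg
Q_E = 1.6021765e-19   # elementary charge, C

def de_broglie_wavelength(voltage):
    """Non-relativistic electron wavelength, lambda = h / sqrt(2 m e V)."""
    return H / math.sqrt(2.0 * M_E * Q_E * voltage)

def bragg_angle(d_hkl, lam, n=1):
    """Bragg angle theta (radians) from 2 d_hkl sin(theta) = n lambda."""
    return math.asin(n * lam / (2.0 * d_hkl))

lam = de_broglie_wavelength(20e3)       # 20 kV beam -> lambda of a few pm
theta = bragg_angle(2.45e-10, lam)      # d = 0.245 nm, an illustrative spacing
```

At higher voltages a relativistic correction to Eq. 3.1 becomes noticeable; it is omitted here for simplicity.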
Momentum changes caused by both elastic and inelastic scattering cause electrons satisfying the Bragg condition to scatter in all directions. Therefore, for a given plane {hkl}, electrons diffract in such a way that they form two cones located on each side of the plane. These cones are named Kossel cones and are shown in Figure 3.11. The projection of the two cones and the {hkl} plane on the viewing screen consists of three lines. However, since more electrons are scattered forward than sideways or backward, the projected line corresponding to the cone formed by the electrons that scattered forward appears brighter than the one corresponding to the second cone. The "bright" and "dark" lines are referred to as the excess and deficit lines (Figure 3.10). The spacing of the pair of Kikuchi lines, i.e. the pair formed by the excess and deficit lines, is the same as the spacing of the diffraction spots from the same plane. However, the position of Kikuchi lines is strongly dependent on the orientation of the specimen. Although not proved here, it can also be shown that Kikuchi lines associated with planes {hkl} and {-h-k-l} are parallel. Because each diffraction band represents a lattice plane, the intersections of these bands correspond to zone axes. If more than two diffraction bands intersect at a single spot, the latter is called a pole. The width of the diffraction bands appearing on EBS patterns, denoted by w, is a function of both the electron wavelength, λ, and the inter-planar spacing, d_hkl, and can be expressed as

w = 2 sin⁻¹(λ/d_hkl) (3.3)

As shown in Figure 3.12, the width can also be determined experimentally from two position vectors, r and r′, extending from the pattern center (PC) and intersecting the sides of the band at right angles (in the figure, the abbreviation BSD stands for backscattered detector [START_REF] Zhou | Scanning Microscopy for Nanotechnology[END_REF]):

w = tan⁻¹(r′/z) − tan⁻¹(r/z) (3.4)

The angle between two bands can be computed directly from the phosphor screen (Figure 3.13). With O the source point and P, Q and R, S points lying on the traces of the two bands, the unit normals of the corresponding lattice planes are

n₁ = (OP × OQ) / ||OP × OQ|| (3.5)

n₂ = (OR × OS) / ||OR × OS|| (3.6)

As a result, the inter-planar angle, γ, is equal to the arccosine of the scalar product of the first and second unit plane normal vectors previously derived:

γ = cos⁻¹(n₁ · n₂) (3.7)

The main output of any EBSD analysis software is the texture file providing the lattice orientation associated with each pixel of the micrograph. A method for computing the lattice orientation was proposed by Wright et al. [START_REF] Wright | Automatic-analysis of electron backscatter diffraction patterns[END_REF] and relies on the construction of a new set of vectors, (e^t_1, e^t_2, e^t_3), expressed successively in the crystal and sample frames. Defined relative to the sample, their expressions are

e^{t,s}_1 = n₁ (3.8)
e^{t,s}_2 = (n₁ × n₂) / ||n₁ × n₂|| (3.9)
e^{t,s}_3 = e^{t,s}_1 × e^{t,s}_2 (3.10)

In the crystal frame, their expressions become

e^{t,c}_1 = (hkl)₁ / ||(hkl)₁|| (3.11)
e^{t,c}_2 = ((hkl)₁ × (hkl)₂) / ||(hkl)₁ × (hkl)₂|| (3.12)
e^{t,c}_3 = e^{t,c}_1 × e^{t,c}_2 (3.13)

where h_i, k_i and l_i with i = {1, 2} are the Miller indices of plane i. The direction cosines, g_ij, specify the lattice orientation and are written as

g_ij = g^c_ik g^s_kj (3.14)

with

g^c_ik = e^c_i · e^{t,c}_k (3.15)
g^s_kj = e^{t,s}_k · e^s_j (3.16)

The matrix g_ij represents the rotation required to bring the coordinate frames of the sample and of the crystal lattice into coincidence. However, the computation of the direction cosines requires knowledge of the Miller indices corresponding to the different Kikuchi lines. Prior to presenting one of the possible indexing procedures, it is important and useful to introduce the reciprocal lattice. The reciprocal lattice is a fictitious lattice consisting of the Fourier transform of the direct lattice. Denote (u, v, w) and (u*, v*, w*) the two sets of basis vectors associated with the frames of the direct and reciprocal lattices, respectively.
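Equations 3.5-3.10 amount to cross-product constructions; the sketch below builds the orthonormal triad from two hypothetical band normals (unit vectors chosen arbitrarily for illustration) and evaluates the inter-planar angle of Eq. 3.7.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(sum(c * c for c in a))

def unit(a):
    n = norm(a)
    return tuple(c / n for c in a)

def frame_from_normals(n1, n2):
    """Triad e1 = n1, e2 = (n1 x n2)/|n1 x n2|, e3 = e1 x e2
    (Eqs. 3.8-3.10 of Wright's orientation computation)."""
    e1 = unit(n1)
    e2 = unit(cross(e1, unit(n2)))
    e3 = cross(e1, e2)
    return e1, e2, e3

# Hypothetical normals of two detected bands (sample frame).
n1 = (1.0, 0.0, 0.0)
n2 = (1.0, 1.0, 0.0)
e1, e2, e3 = frame_from_normals(n1, n2)
gamma = math.acos(sum(a * b for a, b in zip(unit(n1), unit(n2))))  # Eq. 3.7
```

For these two normals the inter-planar angle is 45°, and the resulting triad is orthonormal by construction.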
Therefore, any direction vector d can be expressed by using Miller indices as follows

d = hu + kv + lw (3.17)

Similarly, any direction vector passing through two nodes of the reciprocal lattice can be written as

d* = h′u* + k′v* + l′w* (3.18)

such that

||d*|| = 1/d_{h′k′l′} (3.19)

Vectors u, v and w are related to vectors u*, v*, w* such that

u*·v = u*·w = v*·u = v*·w = w*·u = w*·v = 0 (3.20)

and

u·u* = v·v* = w·w* = 1 (3.21)

Consequently, the metric and orientational relations existing between the direct and reciprocal lattices are listed below:
- dimensions in the reciprocal lattice are equal to their inverse in the direct lattice;
- the direction vector [hkl]* is perpendicular to the plane (hkl);
- the direction vector [mnp] is perpendicular to the plane (mnp)*;
- d_{hkl} = 1/||n*_{hkl}||;
- d*_{mnp} = 1/||n_{mnp}||.

The use of the reciprocal lattice simplifies many calculations such as, for example, the computation of the angle Φ formed by two reticular planes represented by their respective Miller indices (h₁k₁l₁) and (h₂k₂l₂):

Φ = cos⁻¹[(n*_{h₁k₁l₁} · n*_{h₂k₂l₂}) / (||n*_{h₁k₁l₁}|| ||n*_{h₂k₂l₂}||)] (3.22)

The reciprocal lattice also enables an easy computation of the zone axis of two intersecting planes, given by the following relation

n_{mnp} = n*_{h₁k₁l₁} × n*_{h₂k₂l₂} (3.23)

Moreover, each atom located within the interaction volume scatters incident electrons. The intensity of diffracted electrons in a given direction results from the sum of destructive and non-destructive interferences, which are functions of the number of atoms, their nature and their locations. This dependence on the arrangement of the different atoms composing the unit cell is expressed via the structure factor, usually denoted by F(hkl):

F(hkl) = Σ_{j=1}^{n} f_j exp[2πi(h x_j + k y_j + l z_j)] (3.24)

where f_j is the atomic scattering factor of the j-th atom, of coordinates (x_j, y_j, z_j).
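The duality relations (3.20)-(3.21) determine the reciprocal basis as u* = (v × w)/V, v* = (w × u)/V, w* = (u × v)/V, with V = u·(v × w) the cell volume; this is a standard crystallographic construction, checked numerically below on an arbitrary non-orthogonal basis.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reciprocal_basis(u, v, w):
    """Reciprocal vectors satisfying u.u* = 1, u*.v = u*.w = 0, etc.
    (Eqs. 3.20-3.21)."""
    vol = dot(u, cross(v, w))  # cell volume
    u_s = tuple(c / vol for c in cross(v, w))
    v_s = tuple(c / vol for c in cross(w, u))
    w_s = tuple(c / vol for c in cross(u, v))
    return u_s, v_s, w_s

# Hypothetical non-orthogonal direct basis (arbitrary units).
u, v, w = (3.0, 0.0, 0.0), (1.0, 2.0, 0.0), (0.0, 0.0, 4.0)
u_s, v_s, w_s = reciprocal_basis(u, v, w)
```

The assertions below verify the duality relations directly; the same routine also gives the plane normals n*_{hkl} = h u* + k v* + l w* used in Eqs. 3.22-3.23.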
Therefore, the formula implies that a node of the reciprocal space does not exist if its associated factor F is null. The diffracted intensity I(hkl) is proportional to the squared modulus of the structure factor and can be written as

I(hkl) = κ |F(hkl)|² (3.25)

with κ a real constant. Regarding the indexing of diffraction bands and spots, different methods exist, depending on whether the material structure is known to the experimentalist or not. Assume for example a hexagonal crystal lattice. The procedure described hereafter is named the triplet indexing technique, or more simply the triplet method, because three vectors or three bands are used to obtain a unique orientation solution for both diffraction spots and bands. The first step consists of measuring the distances D_hkl between two diffraction spots symmetric with respect to the pattern center O. Figure 3.14 shows that the distance D_{h₁k₁l₁} corresponds to the distance between points P₁(h₁, k₁, l₁) and P′₁(-h₁, -k₁, -l₁). Distances are then sorted in increasing order. The second step is the computation of inter-reticular distances from the previously measured distances, obtained by using the following formula

d_hkl = 2Zλ / D_hkl (3.26)

Since the material is known, inter-reticular distances can be compared to values listed in standard look-up tables with corresponding Miller indices, such as ASTM tables. In the third step, we ensure that points have been indexed consistently, i.e. that the Miller indices of three diffraction spots P₁(h₁, k₁, l₁), P₂(h₂, k₂, l₂) and P₃(h₃, k₃, l₃) can be related to each other such that h₁ = h₂ + h₃, k₁ = k₂ + k₃ and l₁ = l₂ + l₃. Denoting v*₁ = OP₁, v*₂ = OP₂ and v*₃ = OP₃, the cosines of the angles between these vectors are

g*_ij = (v*_i · v*_j) / (||v*_i|| ||v*_j||) (3.27)

The zone axis, displayed with Miller indices uvw in Figure 3.14, can be derived from the cross product of two of the three vectors v*_i, i = 1, 2, 3. The method used to deal with diffraction bands is exactly the same.
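The consistency check of the third step and the spacing formula (3.26) can be sketched as follows, with hypothetical indices and distances; Z is taken here to be the specimen-to-screen distance (camera length), an assumption since the symbol is not defined in the text.

```python
def triplet_consistent(p1, p2, p3):
    """Check that Miller indices of three diffraction spots satisfy
    h1 = h2 + h3, k1 = k2 + k3, l1 = l2 + l3."""
    return all(a == b + c for a, b, c in zip(p1, p2, p3))

def interplanar_spacing(D, Z, lam):
    """Eq. 3.26: d_hkl = 2 Z lambda / D_hkl (Z assumed to be the
    specimen-to-screen distance)."""
    return 2.0 * Z * lam / D

# Hypothetical indexed spots: (1,1,0) = (1,0,0) + (0,1,0).
ok = triplet_consistent((1, 1, 0), (1, 0, 0), (0, 1, 0))
bad = triplet_consistent((1, 1, 1), (1, 0, 0), (0, 1, 0))
```

In practice the check is run over several candidate triplets, since experimental uncertainty and rogue spots can make a single triplet unreliable.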
One computes the angle between two bands. Knowing the structural data of the studied phase, angle values are then compared to those of look-up tables of inter-planar spacings and corresponding Miller indices. Theoretically, using only one triplet of bands is enough to obtain a unique orientation solution. However, because of experimental uncertainties and the presence of rogue bands, it is better to use multiple triplets.

A graph theory based automated twin recognition technique for Electron Back-Scatter Diffraction analysis

This section introduces a new EBSD data analysis and visualization software capable of automatically identifying twins and of extracting statistical information pertaining to the presence and geometrical features of twins in relationship with the microstructure. Twin recognition is performed via the use of several graph and group structures. Twin statistics and microstructural data are then classified and saved in a relational database. Software results are all accessible and can be easily corrected, if necessary, via the graphical user interface. Initially developed to identify twins in magnesium and zirconium, the numerical tool's architecture is such that only a minimum of changes is required to analyze other materials, h.c.p. or not. The first part of the section is dedicated to presenting and describing the method. The choice and the evaluation of features of the graphical user interface, as well as the construction of a relational database storing both microstructural information and twinning statistics, are discussed in the second part of the section.

Euler angles, quaternion rotation representations and their application to EBSD data

Euler angles and quaternion orientation and rotation representations

-Euler angles

An EBSD map can be seen as an image, e.g.
a square or hexagonal array of measurement points, where each measurement point (or pixel) gives the crystal local orientation as a set of Euler angles following the Z-X-Z convention, denoted by (φ1, Φ, φ2). The crystal orientation can be obtained by applying the following rotation matrix to the basic crystal structure:

R(φ1, Φ, φ2) =
\begin{pmatrix} \cos\varphi_1 & -\sin\varphi_1 & 0 \\ \sin\varphi_1 & \cos\varphi_1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\Phi & -\sin\Phi \\ 0 & \sin\Phi & \cos\Phi \end{pmatrix}
\begin{pmatrix} \cos\varphi_2 & -\sin\varphi_2 & 0 \\ \sin\varphi_2 & \cos\varphi_2 & 0 \\ 0 & 0 & 1 \end{pmatrix} (3.28)

From a transformation perspective, the matrix R, also more explicitly denoted by R^w_c, corresponds to the transformation from the local crystal frame to the world frame. Conversely, the transformation from the world to the crystal frame will be denoted by R^c_w. The rotation matrix R^w_c transforms the vector v_c, initially expressed in the local crystal frame, into v_w, expressed in the world frame, as follows: v_w = R^w_c v_c. The matrix R is a representation of an element belonging to the algebraic group of 3D rotations, also called the Special Orthogonal group of dimension 3, SO(3). This space is a group in the algebraic sense, meaning that it supports a multiplication operator and defines an inverse, i.e. the transpose of a rotation matrix, and a neutral element, i.e. the identity matrix.

-Rodrigues' formalism and quaternions

Other useful and well known representations of 3D rotations are the Rodrigues vector and unit quaternions. The Rodrigues vector is a vector whose length is proportional to the amplitude of a given rotation and whose direction is the axis around which the rotation is applied. Quaternions are a compact representation of a rotation of angle θ around an axis v with four values (w, x, y, z), where w = cos(θ/2) and (x, y, z) = v · sin(θ/2). By analogy with complex numbers, w is called the real part of the quaternion and v the imaginary part. When working with rotations, unit quaternions are preferred, i.e. w² + x² + y² + z² = 1.
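Equation (3.28) can be sketched in plain Python as follows; the function names are illustrative, not those of the software:

```python
import math

def rot_z(a):
    """Elementary rotation of angle a (radians) about the z axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    """Elementary rotation of angle a (radians) about the x axis."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    """Product of two 3x3 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zxz_to_matrix(phi1, Phi, phi2):
    """Eq. (3.28): R = Rz(phi1) Rx(Phi) Rz(phi2), the Z-X-Z convention."""
    return matmul(rot_z(phi1), matmul(rot_x(Phi), rot_z(phi2)))
```

For (φ1, Φ, φ2) = (0, 0, 0) the result is the identity, and R(π/2, 0, 0) maps the crystal x axis onto the world y axis.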
The advantage of quaternions lies in the existence of a multiplication operator allowing the preservation of the group structure while keeping the representation compact. In computer graphics, quaternions are frequently used because they allow an easy implementation of interpolated rotations between two rotations [START_REF] Dam | Quaternions, interpolation and animation[END_REF]. Note that well-known formulas exist for conversion between rotation matrices and quaternions. In formal terms coming from differential geometry, extracting the Rodrigues representation from a quaternion or a rotation matrix is referred to as using the logarithmic map of the differential manifold [START_REF] Do Carmo | Riemannian geometry[END_REF]. Recovering the quaternion from the Rodrigues representation is the exponential map. Therefore, the relationship between a given Rodrigues vector, r, and its equivalent quaternion, q, can be written as follows:

r = log q and q = exp r (3.29)

As a result, the amplitude of the rotation q, denoted by θ, can be expressed as the norm of log q: θ = ‖log q‖.

-Metrics

The SO(3) group is not a vector space. As a consequence, the usual norm of the Euclidean space R³, i.e. √(x² + y² + z²), does not apply. A more appropriate norm consists in the amplitude of the rotation, θ. Therefore, the norm of a rotation represented by its quaternion q is defined as

‖q‖_so = ‖log q‖ (3.30)

Although the norm of a rotation is not directly used for EBSD map analysis, the resulting distance leads to a unified definition of disorientation. Therefore, the distance between two rotations represented by q1 and q2 is denoted by:

d(q1, q2) = ‖log(q1⁻¹ · q2)‖ = ‖q1⁻¹ · q2‖_so (3.31)

Application to EBSD data

An EBSD map can be seen as a set of orientations and coordinates since three Euler angles are associated with each single pixel.
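The norm and distance of Eqs. (3.30) and (3.31) translate directly into code; a minimal sketch in the (w, x, y, z) convention, assuming unit quaternions (helper names are mine):

```python
import math

def q_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_inv(q):
    """Inverse of a unit quaternion, i.e. its conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_norm_so(q):
    """Rotation amplitude ||q||_so = ||log q|| (Eq. 3.30), in radians.
    Using |w| keeps the angle in [0, pi]."""
    w = max(-1.0, min(1.0, q[0]))
    return 2.0 * math.acos(abs(w))

def q_dist(q1, q2):
    """Distance between two rotations, d(q1, q2) = ||q1^-1 * q2||_so (Eq. 3.31)."""
    return q_norm_so(q_mul(q_inv(q1), q2))
```

For example, the distance between the identity and a 90 degree rotation about z is π/2.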
In addition, the geometrical transformation induced by twinning is described via either the reflection of the lattice with respect to a specific plane or the π radian rotation of the lattice with respect to a given axis. As a result, quaternions and Rodrigues' formalism are well suited to EBSD data processing, since they enable a relatively easy computation of disorientations, and the recognition and classification of the different phases or domains of the map.

-Disorientation

Disorientation is defined as the orientation difference between two entities. These entities can be grains, parent and/or twin phases or individual measurements. Consider now the case of two measurement points, represented by their quaternions q1 = q^{c1}_w and q2 = q^{c2}_w. Both quaternions q1 and q2 correspond to a rotation from the sample to the local crystal frames. The amplitude of the disorientation between these two measurements, denoted by δ(q1, q2), can be expressed as d(q1, q2) = ‖q1⁻¹ · q2‖_so. However, the actual disorientation consists in the rotation that needs to be applied to q1 to transform it into q2:

δ(q1, q2) = q2 · q1⁻¹ = q^{c2}_w · (q^{c1}_w)⁻¹ = q^{c2}_w · q^w_{c1} = q^{c2}_{c1} (3.32)

The advantage of such a notation lies in the fact that δ includes both the amplitude and the axis of the rotation. The latter property will be particularly useful to recognize and identify the mode and system of twins. For the sake of consistency, and taking advantage of quaternion properties, what follows systematically identifies the smallest positive rotation transforming q1 into q2 when computing the disorientation δ(q1, q2). For example, if the real part of δ(q1, q2) is negative, it implies that |θ/2| > π/2 radians. In this case, δ(q1, q2) is then replaced by −δ(q1, q2), which corresponds to a rotation around the same axis but with an angle equal to θ − 2π radians. Note that the disorientation angle is now smaller than π radians in absolute value.
In addition, the disorientation can be made positive since a quaternion representing a rotation of an angle θ around a vector v is equal to the quaternion corresponding to a rotation of an angle −θ around −v. These two choices lead to an unambiguous representation of quaternions that is helpful when comparing rotations to identify twins. However, the disorientation measure, δ, ignores crystal symmetries. The hexagonal crystallographic structure is invariant by rotations of kπ/3, k ∈ ℕ, around the c-axis and by π radians around any vector lying in the basal plane. Quaternions associated with the symmetries around vectors lying in the basal plane and around the c-axis are denoted by q_x(k) and q_z(k), respectively, and are expressed as follows:

q_x(k) = exp(kπ x) (3.33)
q_z(k) = exp(kπ/3 z) (3.34)

The set of possible disorientations between q1 and q2, denoted by ∆(q1, q2), is then defined as:

∆(q1, q2) = {q_z(j) · q_x(i) · δ(q1, q2)}_{i=0...1, j=0...5} (3.35)

In the case of h.c.p. materials, ∆(q1, q2) contains 12 elements. The definition of the disorientation quaternion, Diso(q1, q2), and of its norm, ‖Diso(q1, q2)‖_so, results from the definition of ∆(q1, q2). Their expressions are, respectively:

Diso(q1, q2) = arg min_{q ∈ ∆(q1,q2)} ‖q‖_so (3.36)
‖Diso(q1, q2)‖_so = min_{q ∈ ∆(q1,q2)} ‖q‖_so (3.37)

The symmetry around any vector lying in the basal plane implies that a disorientation with an angle θ greater than π/2 radians is equivalent to a disorientation with an angle equal to θ − π, smaller in magnitude. If θ − π is negative, the negative sign is removed by considering the rotation of −(θ − π) around the opposite rotation vector. Symmetries also imply that the norm of the disorientation quaternion, ‖Diso(q1, q2)‖_so, is always in the range of 0 to π/2 radians.
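Equations (3.33) to (3.37) can be sketched as follows. The block is self-contained, so elementary quaternion helpers are redefined; names and the enumeration order of the 12 operators are illustrative:

```python
import math

def q_mul(a, b):
    """Hamilton product, (w, x, y, z) convention."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def angle(q):
    """Rotation amplitude ||q||_so in [0, pi]."""
    return 2.0 * math.acos(min(1.0, abs(q[0])))

def hcp_symmetries():
    """The 12 rotational symmetry operators q_z(j) * q_x(i) of Eq. (3.35)."""
    syms = []
    for i in range(2):       # q_x(i) = exp(i * pi * x)
        qx = (math.cos(i * math.pi / 2.0), math.sin(i * math.pi / 2.0), 0.0, 0.0)
        for j in range(6):   # q_z(j) = exp(j * pi/3 * z)
            qz = (math.cos(j * math.pi / 6.0), 0.0, 0.0, math.sin(j * math.pi / 6.0))
            syms.append(q_mul(qz, qx))
    return syms

def diso_angle(q1, q2):
    """||Diso(q1, q2)||_so of Eq. (3.37); per the text, within [0, pi/2] for h.c.p."""
    delta = q_mul(q2, q_conj(q1))
    return min(angle(q_mul(s, delta)) for s in hcp_symmetries())
```

For instance, a raw disorientation of 65 degrees about the c-axis reduces to a 5 degree disorientation once the six-fold symmetry is applied.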
-Classification of twinning relationships

As previously mentioned, a twin system can be completely defined by the indices of either the twinning plane, K1, and η2, or the second invariant plane, K2, and the twinning shear direction, η1. Planes K1 and K2 and vectors η1 and η2 are all invariant. Also, as per Chapter 1, the lattice reorientation induced by twinning of the first and second kinds can be described by a rotation of 180° about the normal to K1 and about the twinning shear direction, respectively. Table 3.1 lists some of the twinning mode crystallographic properties such as twinning plane, disorientation angle, etc. The parameter γ denotes the c/a ratio, equal to 1.59 and 1.62 in Zr and Mg, respectively. Figure 3.15 shows a graphical representation of one of the six possible twin systems that can be activated for each of the four above-mentioned twinning modes.

Table 3.1 -Twinning modes in Zr

Twinning mode | Twinning plane K1 | Twinning direction η1 | Disorientation angle δ (°)
T1 | {10-12} | <-1011> | 85.2
T2 | {11-21} | <-1-126> | 34.9
C1 | {11-22} | <11-2-3> | 64.2
C2 | {10-11} | <10-1-2> | 57.1

(Overbars of the Miller-Bravais indices are written here as minus signs.)

All twin modes considered correspond to compound twins [START_REF] Cahn | Plastic deformation of alpha-uranium ; twinning and slip[END_REF]. As a result, the lattice reorientation that they induce can be described either by a rotation of π radians around the shear direction, η1, or by a rotation of π radians around the normal to the twinning plane. In the case of h.c.p.
structures, the expressions of the disorientation quaternions representative of twinning relationships can be indexed by k, with k ∈ [1, 24], and written as follows:

q(k) = exp[π · η(k)] (3.38)

with

η(k) = ( cos[α(k)] cos[β], sin[α(k)] cos[β], sin[β] )ᵀ (3.39)

where the angles α(k) and β define the orientation of the unit rotation axis η(k). Identifying the mode and system of a twin consists in finding the closest element to the disorientation q^{c2}_{c1}, existing between the parent and twin phases, in T, defined as the set of possible twinning relationships within a given threshold d_max:

τ = arg min_{t ∈ T} ‖δ(q^{c2}_{c1}, t)‖_so such that ‖δ(q^{c2}_{c1}, t)‖_so < d_max (3.40)

The set T contains 24 and 12 elements in the case of Zr and Mg, respectively. Note how easy it is for the software, and hence for the user, to switch from the analysis of a Zr EBSD map to a Mg EBSD scan. The only two differences lie in a change of the value of the c/a ratio and in a readjustment of the list of theoretical disorientation quaternions corresponding to the different potentially active twinning modes.

Identification of grains, parent and twin phases

In EBSD scans, non-twinned grains, parent and twin phases are contiguous areas with a consistent orientation. Therefore, detecting grains and twins consists of identifying and grouping contiguous and consistent areas. These operations will be performed following an approach similar to the "super-pixels" [START_REF] Fulkerson | Class segmentation and object localization with superpixel neighborhoods[END_REF][START_REF] Achanta | SLIC Superpixels compared to state-of-the-art of Superpixel Methods[END_REF] technique based on graph theory and commonly used in image analysis.
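As an illustration of Eq. (3.40) reduced to the disorientation angle alone (a deliberate simplification: the actual software compares full axis-angle quaternions against T; the angle values come from Table 3.1):

```python
# Theoretical disorientation angles (degrees) of the four Zr twinning modes.
TWIN_ANGLES = {"T1": 85.2, "T2": 34.9, "C1": 64.2, "C2": 57.1}

def classify_twin(angle_deg, table=TWIN_ANGLES, d_max=5.0):
    """Return the twinning mode whose theoretical disorientation angle is
    closest to the measured one, or None if the mismatch exceeds d_max degrees.
    Angle-only sketch of tau = arg min over t in T of ||delta(q, t)||_so."""
    mode, ref = min(table.items(), key=lambda kv: abs(kv[1] - angle_deg))
    return mode if abs(ref - angle_deg) < d_max else None
```

A measured parent-twin disorientation of 84 degrees is classified as T1, while 45 degrees falls outside the 5 degree threshold of every mode and is rejected.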
The whole process of twin recognition and parent phase identification relies on one tool of graph theory, used five times at different levels: the extraction of connected parts. Mathematically, a graph G is a pair of sets (V, E) containing vertices and edges, respectively. Two vertices are said to be connected when an edge links them. Vertices are usually designated by integers, so that the edge (i, j) connects vertex i to vertex j. A path between two vertices k and l corresponds to a sequence of edges and vertices reaching k from l, and reciprocally. A path is, by definition, non-directional. In addition, a subset W ⊂ V is a connected part of G if, for all pairs of vertices (i, j), i ∈ W, j ∈ W, there exists a path in G between i and j. Extracting connected components is a well-known problem in graph theory, for which efficient algorithms already exist [START_REF] Bondy | Graph theory with applications[END_REF]. The complexity of the extraction of connected parts increases linearly with the number of vertices belonging to the graph.

Segmentation of the EBSD map into connected parts of consistent orientation

The first step in the EBSD analysis consists of grouping all measurement points of similar orientation into fragments. To this end, a first graph is built as follows: every measurement point is considered as a vertex, and an edge between two neighboring pixels is created if the disorientation between them is smaller than a given threshold, e.g. 5 degrees. The type of measurement grid, i.e. square or hexagonal (Figure 3.16), affects the construction of the graph but does not have a significant influence on the extraction of connected parts. In addition, assuming that edge disorientations follow a normal distribution, the thickness with which edges appear on screen is characterized by a weight, w, whose value depends on the disorientation between the two measurements:

w(q1, q2) = e^{−diso(q1,q2)²/L²} (3.41)

where L is a threshold value.
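The extraction of connected parts used throughout the process can be sketched with a breadth-first search over the pixel graph (function and variable names are illustrative):

```python
from collections import deque

def connected_parts(n_pixels, edges):
    """Label the connected components of the pixel graph by breadth-first search.

    edges holds pairs (i, j) of neighboring pixels whose disorientation is
    below the segmentation threshold (e.g. 5 degrees). Returns one integer
    label per pixel; pixels sharing a label belong to the same fragment."""
    adj = [[] for _ in range(n_pixels)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    label = [-1] * n_pixels
    current = 0
    for seed in range(n_pixels):
        if label[seed] != -1:
            continue
        queue = deque([seed])
        label[seed] = current
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if label[v] == -1:
                    label[v] = current
                    queue.append(v)
        current += 1
    return label
```

The cost is linear in the number of vertices and edges, in line with the complexity stated above.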
As a result, the smaller the disorientation between two pixels, the more strongly their mutual edge is displayed. The edge color indicates the nature of the disorientation. For example, an edge between two points of similar orientation appears in white. But as shown in Figure 3.18, disorientations corresponding to tensile 1 and compressive 1 twinning relationships are displayed in green and red, respectively. Moreover, some measurement points located in a few small areas exhibit a very low resolution, i.e. the software cannot determine their orientation. A flood-filling algorithm [START_REF] Burtsev | An efficient flood-filling algorithm[END_REF] then interpolates these missing measurement points and associates them with their closest connected parts.

-Grouping of connected parts into grains

The second step in the analysis of EBSD data is to group connected fragments of consistent orientation into grains. In a material free of twins or precipitates, this step is trivial, since every fragment corresponds to a grain. However, when twinning occurs, different configurations have to be considered. Figure 3.19 depicts the three most typical twinning configurations observed in Zr scans. Therefore, a second graph, referred to as the twinning graph, is generated at the level of connected fragments to group them into grains. Vertices are now connected fragments, and edges link two vertices in contact if the disorientation between them can be classified as a twinning relation. A connected component in this graph is a set of fragments all linked by known twinning relations. Hence, this set of fragments is very likely to be part of the same grain. The construction of this graph relies on the measure of disorientation between two components of consistent orientation, i.e. connected parts. Three hypotheses have been considered to compute the disorientation between components:

-Measuring the disorientation along the connected part boundary.
This hypothesis was discarded on the argument that the boundary is the hardest part to measure with the EBSD process and, as such, is the least reliable.

-Using the disorientation of the measurement at the barycenter of the connected part. This is simple enough, but can be incorrect if the barycenter happens to be a bad measurement spot, or even a point outside of the connected part if the latter is non-convex.

-Computing the average orientation across the connected part. This is computationally more expensive and sensitive to continuous changes of orientation across the grain, but can be implemented in the most generic way.

The third hypothesis is the one chosen in the present work. However, because SO(3) is not a Euclidean space, the closed-form expression of the average is incorrect. Consequently, a specific algorithm, similar to standard quaternion averaging methods, is used to determine the average orientation of connected EBSD measurements. Consider a set of n EBSD measurement points represented as quaternions q_i, i = 1..n. Assume that the initial average, m_0, is equal to q_1, i.e. m_0 = q_1. The average orientation of a connected part is estimated iteratively by computing the following two equations:

e_k = (1/n) Σ_{i=1}^{n} log Diso(m_k, q_i) (3.42)
m_{k+1} = m_k · exp(e_k) (3.43)

The iteration stops when ‖e_k‖ drops below a given threshold (5·10⁻⁴ in the present case). At this step, m_k corresponds to the best estimate of the connected part orientation. The construction of this second graph allows the extraction of a significant amount of properties such as twin modes, twin systems, twin boundary lengths, the list of neighbors for each grain, the list of pixels belonging to grain borders, etc.

-Identification of parent phases

In twinned grains, parent phases are composed of one or several connected parts, as shown in Figure 3.19. Parent phases are then considered as sets of connected fragments of consistent orientation.
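Equations (3.42) and (3.43) correspond to an intrinsic (Karcher-type) mean on SO(3). A self-contained sketch follows, ignoring crystal symmetries for brevity (the actual algorithm uses Diso rather than the raw disorientation):

```python
import math

def q_mul(a, b):
    """Hamilton product, (w, x, y, z) convention."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def q_log(q):
    """Imaginary part u of log q, with exp(u) = q (u has norm theta/2)."""
    w, x, y, z = q
    n = math.sqrt(x * x + y * y + z * z)
    if n < 1e-12:
        return (0.0, 0.0, 0.0)
    half = math.atan2(n, w)
    return (half * x / n, half * y / n, half * z / n)

def q_exp(u):
    """Inverse of q_log: unit quaternion exp(u)."""
    n = math.sqrt(sum(c * c for c in u))
    if n < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(n) / n
    return (math.cos(n), s * u[0], s * u[1], s * u[2])

def mean_orientation(qs, tol=5e-4, max_iter=100):
    """Iterative average orientation of Eqs. (3.42)-(3.43), seeded with qs[0]."""
    m = qs[0]
    for _ in range(max_iter):
        e = [0.0, 0.0, 0.0]
        for q in qs:
            d = q_log(q_mul(q_conj(m), q))
            e = [a + b / len(qs) for a, b in zip(e, d)]
        m = q_mul(m, q_exp(tuple(e)))
        if math.sqrt(sum(c * c for c in e)) < tol:
            break
    return m
```

Averaging the identity with a 20 degree rotation about z converges, as expected, to the 10 degree rotation halfway between them.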
To build such sets of connected fragments, the software generates a third graph over the EBSD map. Vertices are again connected fragments, and edges only link two connected fragments if they belong to the same grain and if their mutual disorientation is low. By construction, a connected component of this graph is a set of connected parts embedded in the same grain with consistent orientation. By default, the parent phase is identified as the set of connected fragments occupying the largest part of the grain. In addition to editing incorrect links, the user also has the possibility to correct the software in case a twin occupies more than half of the total grain area. Figures 3.20 and 3.21 show two parts of EBSD maps before and after edition of incorrect twinning relationships, respectively. Note that, as shown in Figure 3.22, the software is still capable of recovering complex grain structures.

-Detection of higher order twins

A higher order twin is defined as a twin embedded in another. For example, a secondary and a tertiary twin correspond to twins that nucleated in a primary and a secondary twin, respectively. Depending on the material, the loading path and the loading history, secondary twinning may occur. For example, it has been observed by Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF] in Mg and appears on scans of Zr samples loaded along the through-thickness direction [START_REF] Kaschner | Role of twinning in the hardening response of zirconium during temperature reloads[END_REF]. Tertiary twinning is more unlikely and statistically irrelevant. However, the software is still capable of identifying tertiary and higher order twins if necessary. The identification of these twins relies on the graph of connected fragments used to build the grains. In this graph, a twin of order n has an edge (i.e.
an identifiable twinning relation) with a (n−1)th-order twin, or with the parent phase if n − 1 = 0, but no identifiable relation with (n−k)th-order twins, with k > 1. An example of such a situation is seen in Figure 3.23, where the orange secondary twin (marked with a blue dot in its center) is embedded in the gray twin (marked by a yellow dot). Although the orange twin shares a border with the parent phase, its disorientation with respect to the parent domain does not correspond to one of the previously detailed twinning relations. The present definition of twinning order leads to a simple recursive implementation of higher-order twin detection. The initialization step consists of considering that all parent fragments previously identified are of twinning order 0. To find nth-order twins, all nodes, i.e. fragments, with a twinning order strictly lower than n are removed, as well as all edges with one extremity corresponding to one of these nodes. The connected components of the resulting graph are twins of order n. If such a connected component contains several fragments, these fragments can be separated into groups of consistent orientation, and the "parent" phase, i.e. the nth-order twin itself, is identified as the largest one. All fragments not identified as the "parent" fragment correspond to twins of order greater than n. The result of this process applied to the case of secondary and tertiary twinning is shown in Figure 3.23, where the grey first-order twin (highlighted in cyan) has four secondary twins (in blue), one of which has a tertiary twin (in red). The indirect benefit of detecting and tagging higher order twins lies in the fact that, because of their decreasing likelihood, they help the user to check the software results more rapidly.

-The particular case of "twin strips"

In a non-negligible number of cases, small twins appear as a strip of connected twins at the outcome of the previous algorithm.
This phenomenon occurs either when a very thin twin is separated into small objects because of the low resolution of a few EBSD measurement points, or when a twin is divided into two parts by another twin. However, it is statistically relevant to be able to count these connected twins, also called "twin strips", as single twins. Two connected components are considered to belong to the same twin or twin strip if they meet the following five conditions:

-They are in the same grain.
-They have the same orientation, or the disorientation between the average orientations of both components is small. Typically, the same threshold is used as the one used to build connected components.
-The twins' ellipse main orientations (see sec. 3.4.2) are similar, for instance, less than 5 degrees apart.
-The sum of the twin half-lengths is within 20% of the distance between their centroids.
-The vector linking their centroids diverges by less than a few degrees from the twins' ellipse main orientation.

From these conditions, a fifth graph is generated in all grains. Vertices are connected fragments again, and edges link pairs of connected components fulfilling the previous five conditions. Consequently, the connected components with more than one vertex are twin strips. Figure 3.24 gives an example of the type of reconstruction obtained with this approach.

Examples of automatically extracted metrics and statistics

The following section illustrates the capabilities of the approach discussed above in terms of exploitable metrics.

-Area and Perimeter

The area of connected parts, i.e. grains, twins, parent phases, is obtained by multiplying the number of measurement points by the area corresponding to a single pixel. The area associated with a measurement point depends on the step size and the grid type (i.e., hexagonal or square).
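The five merging conditions translate into a predicate like the following sketch; field names, unit conventions and tolerance values are illustrative assumptions:

```python
import math

def same_strip(t1, t2, diso_deg, angle_tol_deg=5.0, axis_tol_deg=5.0,
               length_tol=0.2, link_tol_deg=5.0):
    """Decide whether two twin fragments belong to the same twin strip.

    Each fragment is a dict with hypothetical keys: 'grain' (grain id),
    'axis_deg' (ellipse main-axis orientation), 'half_length' and 'centroid';
    diso_deg is the disorientation between their average orientations."""
    if t1["grain"] != t2["grain"]:                      # condition 1: same grain
        return False
    if diso_deg > angle_tol_deg:                        # condition 2: orientation
        return False
    da = abs(t1["axis_deg"] - t2["axis_deg"]) % 180.0   # condition 3: main axes
    if min(da, 180.0 - da) > axis_tol_deg:
        return False
    dx = t2["centroid"][0] - t1["centroid"][0]
    dy = t2["centroid"][1] - t1["centroid"][1]
    d = math.hypot(dx, dy)
    # condition 4: half-lengths sum close to the centroid distance
    if abs(t1["half_length"] + t2["half_length"] - d) > length_tol * d:
        return False
    # condition 5: centroid-linking vector aligned with the main axis
    link = math.degrees(math.atan2(dy, dx)) % 180.0
    dl = abs(link - t1["axis_deg"] % 180.0)
    return min(dl, 180.0 - dl) <= link_tol_deg
```

Two collinear fragments of half-lengths 4 and 6 whose centroids are 10 units apart along their common axis are merged; rotating one main axis by 90 degrees breaks the strip.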
Similarly, grain boundary length is estimated by multiplying the number of measurement points located along the boundary by the inter-measurement length, associated, here, with the hexagonal grid.

-Grain Boundary Properties

Because the disorientations between every pair of measurement points or pair of connected fragments are identified, classified and saved, many statistics, such as, for example, grain boundary length and number of neighbors, are easily accessible and stored in the output database.

-Convexity

Visual observation of EBSD maps suggests that every grain or twin seems to be more or less convex. The degree of convexity of grains and twins can be quantified as follows. First, the convex hull of the object of interest, either a grain or a twin, is obtained by computing the convex hull of all its points. The convex hull is a convex polygon that encloses a set of 2D points. Its construction can be performed in O(n log n). Computing the area of such a polygon is a well-defined geometric process. The degree of convexity can then be defined as the ratio of the area of the object to the area of its corresponding convex hull. The ratio is expected to be lower than 1, and the farther away from 1 it is, the less convex the object is. This measure of convexity was implemented to refine the grain detection: grains with a low convexity are identified, edges in the connected fragment graph are tentatively broken, and the convexity of the resulting grains is recomputed. If the overall convexity is improved, the edge is removed. Tests showed that this method is generally successful. However, the current automatic grain extraction method combined with the graphical user interface is efficient enough that this refinement step is not required in practice.

[Figure caption] High-purity clock-rolled Zr sample loaded in compression along the through-thickness direction up to 3% strain.
This is shown using three different visualization modes (see appendix A): raw mode (left), twinning editor mode (middle) and twinning statistics mode (right). The parent grain is surrounded in yellow, first order twins appear in cyan, secondary twins in blue and tertiary or higher order twins in red.

-Twin shape and ellipsicity

The length and the thickness of twins are computed by estimating the 2D covariance of their constituent EBSD measurement points. The eigenvectors of the covariance matrix indicate the main directions of the twin. The apparent twin length is estimated to be equal to four times the square root of the largest eigenvalue of its covariance matrix, and the apparent twin thickness is assumed to be equal to four times the square root of the smallest eigenvalue. The orientation of the twin main axis is then given by the orientation of the largest eigenvector of its covariance matrix, computed with the C function atan2. Both the true twin length and thickness can be computed in post-processing by multiplying them by the cosine of the angle between the twin plane and the normal to the sample [START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF][START_REF] Marshall | Automatic twin statistics from electron backscattered diffraction data[END_REF]. Comparing the ellipse area with the actual twin area computed from the number of measurement points contained in the twin allows estimation of the ellipsicity of a twin, defined as its departure from an idealized ellipse. Figure 3.25 depicts these ellipses and illustrates how the ellipsicity criterion highlights what the software identifies as "merged" twins. These "merged" twins are very likely co-zonal Tensile 1 twin variants. Considering that co-zonal Tensile 1 variants have an ideal misorientation of 9.6 degrees, and given the 5 degree tolerance used for twin variant recognition, the software is, in rare cases, not capable of distinguishing such twins.
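The covariance-based twin metrics can be sketched as follows, using the closed-form eigendecomposition of a 2x2 covariance matrix (function name is illustrative):

```python
import math

def twin_ellipse(points):
    """Apparent length, thickness and main-axis orientation (radians) of a twin.

    Length and thickness are four times the square roots of the largest and
    smallest eigenvalues of the 2D covariance of the measurement points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, max(tr / 2.0 - disc, 0.0)
    # The eigenvector of the largest eigenvalue gives the twin main axis.
    if abs(sxy) > 1e-12:
        theta = math.atan2(l1 - sxx, sxy)
    else:
        theta = 0.0 if sxx >= syy else math.pi / 2.0
    return 4.0 * math.sqrt(l1), 4.0 * math.sqrt(l2), theta
```

For a degenerate "twin" made of points spread along the x axis, the main-axis orientation is zero and the apparent thickness vanishes.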
However, because of the level of precision of EBSD measurements, reducing the software tolerance for twin variant recognition does not improve the results.

Graphical User Interface and Data availability

Graphical User Interface

To provide quick and direct access to a large choice of metrics and statistics displayed on an EBSD map, the graphical user interface is aimed at allowing the user to check and correct, if necessary, the software results. Three types of situation require the intervention of the user. First, because of the random distribution of disorientations between neighboring grains, it is statistically likely that, for a few grains per map, the disorientation between adjacent grains matches a twinning relation. The user then has the choice between either using the convexity option to refine the analysis performed by the software or simply manually deactivating the edge linking the parent to the mistaken twin. Second, in highly strained sample scans, orientation gradients can be significant, and the disorientation between two connected parts may be above the threshold required to be flagged as a twinning relation. In such a case, the user can manually activate this twinning relation. Third, the parent is by default the largest connected component of the grain. However, twin phases may occupy the largest part of the grain and thus appear as the parent phase. Once again, this can be corrected manually by the user. This feature will be particularly useful when dealing with highly strained Mg samples. The result of such manual edits can be seen by comparing Figures 3.20 and 3.21.

Data availability

The analysis of an EBSD map generates a wealth of quantitative data about grains, twins and parent phases. However, different studies will not use the same EBSD data for the same purposes. This is why it is of primary importance to export data in a way that preserves all relations and does not make assumptions regarding what should or should not be stored.
To do so, the present software exports the data structure extracted from the EBSD map analysis to a SQL database with the structure described in Figure 3.26. In practice, this is performed by using the SQLite library, which implements a server-less database stored inside a single file. The advantage of such a relational database lies in the fact that it keeps all the information in a single file and allows the user to create aggregated statistics with simple SQL requests, as shown hereafter with requests 1 and 3, given in Appendix A. For example, Request 1 generates a table containing features about twins, such as twinning modes (i.e. "twinning"), twin systems (i.e. "variant"), twin area (i.e. "area"), twin thickness (i.e. "thickness"), quaternions corresponding to the average twin orientation (i.e. "qx", "qy", "qz", "qw"), etc. Request 3 was used to extract information about twin-twin junctions such as the modes and systems of intersecting twins. This request relies on the view created by Request 1 to generate the table containing twin characteristics. Matlab, C++ or Fortran codes can then process the tables generated by the SQL requests in order to extract statistics of interest. For example, the statistics about the influence of microstructure and twin-twin junctions on nucleation and growth of twins presented in the next chapter were computed from four tables only. Moreover, for the sake of keeping a record of experimental conditions, the database also stores the constants and parameters used to construct each particular EBSD analysis.
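A minimal sketch of this storage pattern with Python's built-in sqlite3 module; the schema below is illustrative (the column names follow the text above, but the actual layout is that of Figure 3.26):

```python
import sqlite3

# In-memory database standing in for the single-file SQLite export.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE twin (
    id INTEGER PRIMARY KEY, grain_id INTEGER,
    twinning TEXT, variant INTEGER, area REAL, thickness REAL)""")

# Hypothetical twin records: (id, grain_id, mode, variant, area, thickness).
rows = [(1, 10, "T1", 2, 120.0, 3.1),
        (2, 10, "T1", 5, 80.0, 2.4),
        (3, 11, "C2", 1, 40.0, 1.2)]
conn.executemany("INSERT INTO twin VALUES (?, ?, ?, ?, ?, ?)", rows)

# Aggregated statistics via a simple SQL request, e.g. twinned area per mode.
stats = dict(conn.execute(
    "SELECT twinning, SUM(area) FROM twin GROUP BY twinning").fetchall())
```

Downstream Matlab, C++ or Fortran codes would consume such aggregated tables in exactly the same way.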
The subsequent use of graph and group structures allows grain identification, twin recognition and statistics extraction. The newly introduced software is distinguished from pre-existing commercial or academic codes by combining visualization with automated analysis of the EBSD map. The built-in graphical user interface enables immediate and direct access to microstructural and twinning data, such as the orientation and size of twins and grains and the mode and system of twins, but also allows the user to correct or complete, if necessary, the analysis performed by the software. In addition, all raw and processed data are saved in a relational database. Consequently, all experimental parameters, microstructural data and twinning statistics are easily accessible via SQL requests. The database also enables the systematic quantification of the influence of a very large number of parameters. The construction of such a database makes a significant difference compared to other pre-existing analysis tools. Moreover, although the software was initially developed to perform statistical analyses on Mg and Zr scans, it is not limited to these two h.c.p. metals. Its algorithm is capable of identifying any twin occurring in h.c.p. materials on the condition that the user properly defines the c/a ratio and the theoretical disorientation quaternions corresponding to all potentially active twinning systems. For the analysis of other crystallographic structures, the user has to adapt the cell characteristics and modify the symmetry quaternions. For example, the authors are currently using the software for the identification of martensite, whose crystallographic structure is tetragonal, in TRIP steels.
Chapter 4
Identification of statistically representative data associated with the nucleation and growth of twins
The objective of the present chapter is to provide new statistically representative data to establish relationships between the presence and size of twins and loading conditions, texture, etc. As shown in Chapter 2, from the micromechanical standpoint, one can, at the cost of relatively lengthy mathematical derivations, solve Eshelby type problems for relatively complex topologies. Similarly, constitutive models can be extended to reproduce the stochasticity associated with the nucleation of twins, double twins, etc. However, prior to resorting to such developments, it is necessary to assess the statistical relevance of the phenomena to be modeled so as to distinguish between first and second order phenomena. In the present chapter, such distinctions are made possible by data obtained with the automated EBSD statistical analysis method presented in Chapter 3. The following three phenomena will be studied : (1) nucleation and growth of "unlikely twins", (2) double extension twinning and (3) twin-twin junctions. Here, "unlikely twins" refers to twin variants one would not expect to find in a given grain owing to its relatively poor orientation with respect to the loading conditions. In both cases (1) and (2), current mean field models do not accurately reproduce these processes. The question at stake is that of the necessity of addressing this shortcoming. To this end, a study will be performed on an initially hot rolled AZ31 magnesium alloy [START_REF] Shi | On the selection of extension twin variants with low schmid factors in a deformed mg alloy[END_REF][START_REF] Shi | Double extension twinning in a magnesium alloy : combined statistical and micromechanical analyses[END_REF].
Regarding (3), the literature on twin/twin interactions has so far remained quite limited [START_REF] Christian | Deformation twinning[END_REF] ; the objective is simply to assess whether twin-twin junctions do affect the selection of variants and, more importantly, whether they have an effect on twin growth at the macroscale. This second study [START_REF] Juan | A statistical analysis of the influence of microstructure and twin-twin junctions on nucleation and growth of twins in zr[END_REF] will be performed on high purity Zr, as this material readily allows for the nucleation of four twin modes. The present chapter is organized such that its first part is dedicated to the formation of "unlikely twins", with a first sub-section about low Schmid factor tensile twins and a second one about successive double extension twins, which can also be considered as another type of "unlikely twins". The two studies of "unlikely twins" include experimental, statistical and modeling results. The last part of the chapter is focused on the influence of microstructure and twin-twin junctions on the nucleation and growth of twins in Zr.
Preliminary notations and considerations
Because EBSD scans do not provide access to local stresses before unloading and sectioning, for classification purposes the geometric Schmid factors (SF) are computed from the inner product of the symmetric Schmid tensor and the normalized macroscopic stress tensor, such that $\|\Sigma\|^2 = \sum_{i=1}^{3}\sum_{j=1}^{3} \Sigma_{ij}^2 = 1$. In addition, similar statistics, using the macroscopic stress for the computation of distributions, can be produced with either full-field or mean field models. The symmetric Schmid tensor is defined as the symmetric part of the dyadic product between the Burgers vector and the normal vector to the deformation plane. For each twinned grain, the six possible twin variants of each twinning mode are classified in order of decreasing SF. Low Schmid factor twins are here divided into two categories.
The first type consists of twins with a negative Schmid factor. A twin is said to be a low Schmid factor twin of the second type when its Schmid factor is positive but lower than or equal to 0.3, and when the ratio of its Schmid factor to the highest twin variant Schmid factor possible in the considered twinned grain is lower than or equal to 0.6. This ratio will now be referred to as the Schmid factor ratio.
4.2 Nucleation of "unlikely twins" : low Schmid factor twins and double twinning in AZ31 Mg alloy
Experimental set-up and testing conditions
The material used is initially hot-rolled AZ31 Mg, with the composition shown in Table 4.1 and an initial grain size of 11.4 µm. The thick sheet was annealed at 400 °C for 2 hours after rolling. The initial pole figures of the material were measured using XRD. They are shown in Figure 4.4, where all three {0001}, {2 11 0} and {10 10} pole figures were recorded prior to and after loading. For the sake of brevity, the abbreviations RD, TD and ND will stand for the rolling, transverse and normal directions, respectively, in the remainder of this chapter. As expected, prior to compression the basal poles are centered around ND while the {10 10} and {2 11 0} poles are axi-symmetrically distributed about ND and perpendicular to it. Using either a wheel saw or a diamond wire saw, cubic samples with a 10 mm edge length were cut from the as-received plate. The diamond wire saw is preferred here, as it yields minimal changes to the microstructure during the cutting process (i.e. in particular, it minimizes the number of twins induced by the cutting process). Two different testing conditions were defined to introduce low Schmid factor twins and secondary extension twins. In the first case, uniaxial compression was performed along the rolling direction up to 2.7% engineering strain. These tests were performed at room temperature with a strain rate of 10^-3 s^-1 (Figure 4.1).
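For illustration, the geometric Schmid factor and the low Schmid factor classification defined above can be sketched in a few lines of Python. This is a numpy-based sketch ; the function names are the author's own and are not part of the software.

```python
import numpy as np

def schmid_factor(b, n, sigma):
    """Geometric SF: inner product of the symmetric Schmid tensor with
    the macroscopic stress normalized so that its Frobenius norm is 1."""
    b = np.asarray(b, float)
    b = b / np.linalg.norm(b)          # unit shear (Burgers) direction
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)          # unit deformation-plane normal
    m = 0.5 * (np.outer(b, n) + np.outer(n, b))  # symmetric Schmid tensor
    s = np.asarray(sigma, float)
    return float(np.tensordot(m, s / np.linalg.norm(s)))

def classify_twin(sf, sf_max):
    """Low-SF categories as defined in the text: type 1 has SF < 0;
    type 2 has 0 <= SF <= 0.3 and a Schmid factor ratio <= 0.6."""
    if sf < 0:
        return "low_SF_type_1"
    if sf <= 0.3 and sf_max > 0 and sf / sf_max <= 0.6:
        return "low_SF_type_2"
    return "regular"
```

With a uniaxial stress and a slip system at 45° to the load, the first function recovers the classical maximum value of 0.5.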
Clearly, a compressive load perpendicular to the basal poles is not expected to generate extension twins. The second battery of tests aims to introduce, in a sequential fashion, {10 12}-{10 12} double extension twins. To this end, samples were subjected to more complex loading conditions. Three scenarios were considered : three cubes were compressed along the rolling direction up to 1.8% strain ; three cubes were compressed along the transverse direction up to 1.8% strain ; and three cubes were compressed first along the rolling direction up to 1.8% strain and then along the transverse direction up to 1.3% strain. Here too, the imposed strain rate was set to 10^-3 s^-1 (Figure 4.1). Table B.2 lists all test cases considered. It is expected here that the first loading path initiates twinning and that the following compression enables the nucleation of secondary tensile twins. All tests were performed on the Instron 5985 floor testing machine shown in Figure 4.1. It has a load capacity of 250 kN and a 1430 mm vertical test space. Tests were controlled with the software Bluehill III, and two LVDT capacitive sensors were used to measure displacement to an accuracy of 0.4 µm. In Figure 4.1b, the two cylindrical devices correspond to the LVDT sensors, placed on each side of the cubic specimen and protected by three screws. In addition, teflon tape was used to minimize the effect of surface friction. As detailed in Appendix B, the machine stiffness was systematically taken into account. Following each compression, the samples were sectioned perpendicularly to the loading direction for microstructure analysis within the bulk. Sectioned faces were ground using SiC papers with grits from 2400 to 4000, and were then electrolytically polished in an electrolyte of 62.5% phosphoric acid and 37.5% ethanol, at 3 V for 30 seconds and then at 1.5 V for 2 minutes, at -15 °C. A JEOL 6500F FEG SEM equipped with Channel 5 Analysis was used for the EBSD measurements.
Samples were scanned using a square grid with a step size of 0.1 µm and 0.3 µm for the first and second statistical analyses, respectively.
Mechanical behavior and microstructure evolutions
Figure 4.3 shows the macroscopic stress-strain curves corresponding to specimens loaded in compression along RD, TD and ND, and in compression along RD followed by a second compression along TD. For compressions along RD and TD, the yield stress was approximately equal to 70 MPa. The inflection observed in the plastic region of the curves is typical of the activation of {10 12} tensile twins. As revealed by Proust et al. [START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF], the two deformation modes active in the matrix are basal slip and tensile twinning, with twinning increasing its contribution until about 3% strain while the basal slip activity decreases. At 5% strain, the total twinned volume is expected to occupy about 70% of the material volume. This result is consistent with the initial pole figures, which show that the initial texture of the material is not optimal for twinning activation when compressed along ND. Twin boundaries act as obstacles that prevent the glide of dislocations. Recent discrete dislocation dynamics simulations performed by Fan et al. [START_REF] Fan | The role of twinning deformation on the hardening response of polycrystalline magnesium from discrete dislocation dynamics simulations[END_REF] revealed that twin boundaries induce a stronger hardening than grain boundaries. After uni-axial compression along TD and RD, the basal poles move from the normal direction to the transverse and rolling directions, respectively. This texture change is due to the activation of tensile twinning, which induces a reorientation of the crystal lattice by 86.6°. As a result, a first compression along RD followed by a second compression along TD is expected to produce successive {10 12}-{10 12} double extension twins.
Similarly, Figure 4.5 displays EBSD micrographs prior to compression, after compression along RD, and after compression along RD followed by a second compression along TD.
Low Schmid factor {10 12} tensile twins
In this first study [START_REF] Shi | On the selection of extension twin variants with low schmid factors in a deformed mg alloy[END_REF], a new type of selection criterion for low Schmid factor tensile twin variants, based on strain compatibility considerations, is proposed. A distinction is also made between tensile twins of groups 1 and 2, defined as twins intersecting the grain boundary and twins constituting a pair of cross-boundary twins, respectively. The grain-by-grain analysis of 844 grains containing 2046 twins revealed that the Schmid factors of all twins range from -0.09 to a maximum value of 0.5. Twins with a negative Schmid factor represent 0.6% of all twins. Twins with a Schmid factor lower than 0.3 represent 23.4% of the total twin population. According to the definitions presented in the preliminary section, 127 twins can be deemed low SF twins, i.e. 26.6% of the twins with a Schmid factor lower than 0.3 and 6.2% of all twins. As a result, low Schmid factor twins represent 6.8% of the total twin population. In addition, all twinned grains contain between 1 and 4 different twin variants. Twinned grains containing 1, 2, and 3 or more variants represent 62.6%, 30.3% and 7.1% of all twinned grains, respectively. The analysis of twin Schmid factors revealed that the proportion of low Schmid factor twins per twinned grain increases with the number of activated twin variants. This was expected considering the second criterion defining a low positive Schmid factor. As twinning participates significantly in plastic deformation, the relative contribution of twinning shears to the macroscopic strain tensor, ε, is here estimated from the normal components of the stress-free distortion tensor of twins, E, expressed in the sample reference frame.
The three coordinate axes of the sample reference frame are aligned with RD, TD and ND. The components of the macroscopic strain and of the distortion tensors along RD, TD and ND are denoted by ε_RD, ε_TD, ε_ND and e_RD, e_TD, e_ND, respectively. Since twinning does not involve any volume change, the trace of the twin distortion tensor is null. This implies that there exist six independent sign combinations for e_RD, e_TD and e_ND. These six combinations correspond to a new definition of twin variants. They are detailed, as well as their occurrence frequencies, in Table 4.3. Table 4.3 reveals that low Schmid factor twins form more frequently in grains favoring the nucleation of twins producing an extension along the rolling direction, which is opposite to the strain induced by the compressive loading. The twinning shear direction and the twinning plane normal can be written in a set of coordinate axes associated with the twin, i.e. the reference axes are chosen parallel to the shear direction [START_REF] Partridge | The crystallography and deformation modes of hexagonal close-packed metals[END_REF]. The distortion tensor, E^tw, corresponding to the deformation induced by twinning is then written as follows :

$$E^{tw} = \begin{pmatrix} 0 & 0 & s \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{with} \quad s = 0.129$$

The distortion tensor associated with a given tensile twin variant t and expressed in the reference frame of a potentially active deformation system i is denoted by E^{t,i}. As mentioned previously, two groups of low Schmid factor twins are here considered. Group 1 includes twins growing from the grain boundary. Group 2 contains twins forming a "cross-boundary" twin pair. The deformation systems i therefore do not correspond to systems potentially active in the twinned grain, but to systems potentially active in the neighboring grain with which the observed low Schmid factor twin is in contact. The index i refers to the basal slip, pyramidal slip, prismatic slip, tensile twinning and compressive twinning systems.
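For illustration, the change of frame applied to the distortion tensor E^tw is a similarity transformation ; in the sketch below, the convention that R maps the twin frame to the target frame is an assumption, and the function names are illustrative.

```python
import numpy as np

def twin_distortion(s=0.129):
    """Twinning distortion in the twin frame: shear s along x on the
    plane of normal z (the form of E^tw given in the text)."""
    E = np.zeros((3, 3))
    E[0, 2] = s
    return E

def to_frame(E, R):
    """Express a distortion tensor in another frame; R is the rotation
    matrix from the twin frame to the target frame (assumed convention)."""
    return R @ E @ R.T
```

The trace is preserved by the transformation, so the volume-conservation property of twinning holds in any frame.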
Then, in order to compare the characteristics of low Schmid factor twins with those of the other tensile twin variants in terms of strain accommodation, the distortion tensor was computed for each tensile twin variant t and for each of the 24 slip and twinning systems that could be activated in the neighboring grain. This implied that, for each low Schmid factor twin, 144 different distortion tensors were calculated. The amount of strain to be accommodated by a given system i is assumed to be equal to the component e^i_xz of the distortion tensor E^i, expressed in the reference frame of the system i. Therefore, the larger this component, the higher the ability of the system i to accommodate the twinning shear. It can also be interpreted as a favorable factor for twin growth, and it can be regarded as a means to approximate geometrical accommodation. Data processing revealed that group 1 low Schmid factor twins require the most accommodation through basal slip, with the lowest CRSS, and the least accommodation through pyramidal slip, with the highest CRSS. It is also found that group 2 low Schmid factor twins require the least accommodation through pyramidal slip or contraction twinning, with high CRSSs, but the most accommodation through prismatic slip and tensile twinning. Note that the CRSSs associated with prismatic slip and tensile twinning are both higher than the CRSS associated with basal slip and lower than the CRSSs associated with pyramidal slip and compressive twinning.
Successive {10 12}-{10 12} double extension twins
For the study of double extension twins, the total scanned area represents 0.82 mm². In total, 4481 grains, 11 052 primary tensile twins and 585 double extension twins were observed ; none of the analyzed grains was in contact with the map border. Further details are provided in Table 4.4. Note that double extension twins are 19 times less frequent than primary twins.
In terms of twinned area, the difference is even more pronounced ; the total area occupied by secondary twins is 66 times smaller than the total area occupied by primary twins. EBSD scans performed on samples loaded in uniaxial compression along RD and TD did not contain any double extension twin. As a result, double extension twins only appear during the second compression along TD. Considering the symmetries of hexagonal crystals, 6 {10 12} tensile twin variants, and hence 36 {10 12}-{10 12} double extension twin variants, may be activated in Mg. However, based on the misorientation angle existing between the primary and secondary twins, these 36 variants can be grouped into 4 distinct sets, detailed in Table 4.5. The minimum angle associated with Group I twin variants is 0° because the twinning plane of the secondary twin coincides with that of the primary twin. Of the 585 double extension twins experimentally observed, 383 were clearly identified, i.e. the difference between the theoretical misorientation angle of the best-matching variant and the experimentally measured misorientation angle is smaller than 3°. Consequently, the study was limited to these 383 double extension twins. In addition, only 4.2% of the secondary twins can be qualified as "low Schmid factor" twins, as defined in the previous section. The present study clearly shows that the activation of secondary tensile twins depends on both grain and primary twin orientations as well as on the loading direction. However, the Schmid factor analysis is not able to explain why 76.0% of the secondary twins belong to Group III and 24.0% to Group IV. The Schmid factors corresponding to variants of Groups III and IV are always too close, i.e. their difference is lower than 0.05 in magnitude, to be meaningful and to be used as a selection criterion.
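The identification rule used above can be written as a short helper. The theoretical angles of Table 4.5 are not reproduced here (only the 0° of Group I is quoted in the text), so they are passed in by the user ; the function name and the uniqueness requirement are illustrative assumptions.

```python
def identify_group(measured_angle, theoretical_angles, tol=3.0):
    """Return the variant group whose theoretical misorientation angle
    (degrees) lies within `tol` of the measurement, or None when the
    match is absent or ambiguous (more than one candidate)."""
    matches = [g for g, a in theoretical_angles.items()
               if abs(measured_angle - a) < tol]
    return matches[0] if len(matches) == 1 else None
```

Twins returning None are discarded, which is how the study is restricted to the clearly identified population.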
Elasto-static micromechanical analysis
In order to explain such a phenomenon, a simplified version of the elasto-static Tanaka-Mori scheme (Figure 4.7), described in the first section of Chapter 2, was developed. The simplification consists in assuming that the medium is homogeneous, isotropic and elastic. In addition to the simplifications induced by the assumption of homogeneous elasticity, considering an isotropic medium implies that the Eshelby type tensors, S(V_A) and S(V_B), can be expressed analytically [START_REF] Eshelby | The determination of the elastic field of an ellipsoidal inclusion and related problems[END_REF][START_REF] Mura | Micromechanics of defects in solids[END_REF]. The purpose of the present analysis is the study of the variations of the internal free energy induced by the formation of a tensile secondary twin.
(Figure 4.7 : E^p, ε^p_1 and ε^p_2 denote the macroscopic plastic strain imposed to the medium and the plastic strains induced by primary and secondary twinning, respectively. The infinite matrix and the primary and secondary tensile twins are represented by the volumes V - V_A, V_A - V_B and V_B, respectively. The second-order tensors ε^p_a and ε^p_b correspond to the eigenstrains, modeling the twinning shears induced by primary and secondary twinning, prescribed in the inclusions V_A - V_B and V_B, respectively. The homogeneous elastic tensor is denoted by the fourth-order tensor C.)
Twins are represented by oblate inclusions embedded in an infinite homogeneous elastic medium. A uniform macroscopic plastic strain, E^p, is also introduced in the matrix in order to represent the deformation undergone by the specimen. Double extension twin variants inducing the smallest change of elastic energy are assumed to nucleate preferentially. The change of elastic energy induced by the formation of the secondary twin is defined as the difference of the free energies before and after secondary twinning, i.e. ΔΦ = Φ_II - Φ_I.
Its evolution is shown in Figure 4.8, and it is expressed as follows :

$$\Delta\Phi = -\frac{V_B}{2V}\Big[\Sigma : \epsilon^p_2 + C : \big([S(V_A) - I] : [\epsilon^p_1 : \epsilon^p_2 + \epsilon^p_2 : \epsilon^p_1 - E^p : \epsilon^p_2] + [S(V_B) - I] : \epsilon^p_2 : \epsilon^p_1\big)\Big] \qquad (4.1)$$

where the second-order tensors ε^p_1 and ε^p_2 correspond to the plastic shear strains, assumed to be uniform, induced by primary and secondary twinning, respectively. They differ from ε^p_a and ε^p_b, defined as the uniform plastic strains present in the inclusions V_A - V_B and V_B, respectively ; this explains why ε^p_a = ε^p_1 and ε^p_b = ε^p_1 + ε^p_2. Figures 4.8a and 4.8b show the evolution of the change of elastic energy normalized by the twin volume fraction V_B/V with respect to the applied stress and to the secondary twin volume fraction, respectively. In the first case, the secondary twin volume fraction remains fixed and equal to 0.03 (Figure 4.8a) ; in the second case, the macroscopic stress, Σ_TD, is set equal to 100 MPa (Figure 4.8b). Figure 4.8a reveals that the change of free energy density is minimal for Group III double extension twin variants and maximal for Group II double twin variants. Figure 4.8b shows that the free energy density change associated with Group II double twin variants is the highest and increases rapidly with the twin volume fraction. It also shows that, when the secondary twin volume fraction is greater than 0.03, the PV4-SV5 variant, belonging to Group III, exhibits the lowest normalized free energy variation and is then the most preferred energetically. Consequently, the variations of the internal free energy density explain why Group III double twin variants are the most frequently observed and Group II double twin variants are never observed. In addition, the comparison of the predictions of the simplified double inclusion model with those of the classical Eshelby scheme revealed that the classical single inclusion model derived by Eshelby is not capable of reproducing the trends obtained with the double inclusion model.
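To make the structure of equation (4.1) concrete, the sketch below evaluates the normalized energy change with numpy. Two simplifications are introduced here and are not those of the thesis : the stiffness is built from (E, ν) and the Eshelby tensors are those of spherical inclusions, whereas the study uses oblate ones ; the contraction order is also one possible reading of equation (4.1).

```python
import numpy as np

I2 = np.eye(3)

def iso_stiffness(E_mod, nu):
    """Isotropic stiffness C_ijkl (Lamé form)."""
    lam = E_mod * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E_mod / (2 * (1 + nu))
    return (lam * np.einsum('ij,kl->ijkl', I2, I2)
            + mu * (np.einsum('ik,jl->ijkl', I2, I2)
                    + np.einsum('il,jk->ijkl', I2, I2)))

def eshelby_sphere(nu):
    """Eshelby tensor of a spherical inclusion in an isotropic medium
    (a simplification; the thesis uses oblate inclusions)."""
    a = (5 * nu - 1) / (15 * (1 - nu))
    b = (4 - 5 * nu) / (15 * (1 - nu))
    return (a * np.einsum('ij,kl->ijkl', I2, I2)
            + b * (np.einsum('ik,jl->ijkl', I2, I2)
                   + np.einsum('il,jk->ijkl', I2, I2)))

def _quad(a, T, b):
    # scalar a : T : b for second-order a, b and fourth-order T
    return float(np.einsum('ij,ijkl,kl->', a, T, b))

def delta_phi(fB, Sigma, C, SA, SB, e1, e2, Ep):
    """Normalized free-energy change of Eq. (4.1); fB = V_B/V.
    The contraction order below is one reading of the printed formula."""
    I4 = 0.5 * (np.einsum('ik,jl->ijkl', I2, I2)
                + np.einsum('il,jk->ijkl', I2, I2))
    CSA = np.einsum('ijkl,klmn->ijmn', C, SA - I4)
    CSB = np.einsum('ijkl,klmn->ijmn', C, SB - I4)
    inner = (float(np.einsum('ij,ij->', Sigma, e2))
             + _quad(e1, CSA, e2) + _quad(e2, CSA, e1)
             - _quad(Ep, CSA, e2) + _quad(e2, CSB, e1))
    return -fB / 2.0 * inner
```

With such a sketch, the candidate secondary twin variants can be ranked by delta_phi, the smallest value indicating the energetically preferred double twin variant.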
Enforcing the topological coupling between the primary and secondary twins then appears to be essential for accurate predictions of secondary twin activation.
4.3 Probing for the latent effect of twin-twin junctions : application to the case of high purity Zr
Experimental set-up and testing conditions
The material used comes from a high-purity (<100 ppm) crystal bar of Zr which was arc-melted, cast and clock-rolled at room temperature. Cuboidal samples were machined from the rolled plate and annealed at 823 K for 1 hour. In the as-annealed state, the grains are free of twins, equiaxed, and have an average diameter of 17 µm. Specimens display a strong axisymmetric texture in which the basal poles are aligned within approximately 30 degrees of the through-thickness direction (Figure 4.9). Samples were deformed in an equilibrium liquid nitrogen bath at 76 K in order to facilitate twin nucleation and were loaded in compression along one of the in-plane directions to 5% strain (IP05) and along the through-thickness direction to 3% strain (TT03). Figure 4.10 shows the macroscopic stress-strain curves of the cubes compressed along the through-thickness (TT) and in-plane (IP) directions. Experimental data was collected from 10 and 4 scans (240 µm x 120 µm) at different locations on the same cross-sectional area of the TT03 and IP05 samples, respectively (Figure 4.11). The section plane for the TT03 analysis contains both the TT direction and an IP direction, and the section plane for the IP05 analysis contains the TT direction and the IP compression direction. Statistical data was obtained using the automated EBSD technique developed by Pradalier et al. [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF]. The total analyzed area for the TT03 and IP05 specimens is 205 736 µm² and 73 122 µm², respectively. Twins represent 9.1% and 5.7% of the total scanned area in the TT03 and IP05 samples, respectively.
Incomplete grains bounded by scan edges are not considered in the statistical analyses. Computing misorientations between measurement points and relying on graph theory analysis, the twin recognition EBSD software [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF] is able to identify the four twin modes present in Zr (Table 4.6). As highlighted by recent studies [START_REF] Kaschner | Role of twinning in the hardening response of zirconium during temperature reloads[END_REF][START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF], {11 22} compressive (C 1 ) twins and {10 12} tensile (T 1 ) twins are the most commonly observed twins in the TT03 and IP05 scans, with 74.4% and 81.7%, respectively (Table 4.7). Table 4.7 also reveals that the second most active twinning modes are {10 12} (T 1 ) and {11 21} (T 2 ) in the TT03 and IP05 samples, respectively. In both cases, the second most active twin modes represent about 17% of the total number of twins. However, no {10 11} (C 2 ) twin was observed in the 14 scans. As a result of the annealing treatment, the grains are equiaxed. Grain areas are directly calculated from the number of measurement points of the same orientation ; with a step size equal to 0.2 µm, the grain area is computed by multiplying the number of measurement points that the grain contains by the area associated with a pixel, i.e. 0.1 µm². The grain diameter is estimated assuming a spherical grain. The software developed by Pradalier et al. [START_REF] Pradalier | A graph theory based automated twin recognition technique for Electron Backscatter Diffraction analysis[END_REF] fits an ellipse to each twin. The measured twin thickness is defined as the minor axis of the ellipse.
The true twin thickness is then estimated by multiplying the measured twin thickness by the cosine of the angle formed by the twin plane, K 1, and the normal to the sample surface [START_REF] Marshall | Automatic twin statistics from electron backscattered diffraction data[END_REF][START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF]. When quantities are plotted with respect to the Schmid factor, the bin size was rounded down to 0.05 in order to obtain an exact integer number of subdomains between -0.5 and 0.5.
Twin-twin junctions statistics
This section is dedicated to the description of twin-twin junctions between first generation twins occurring in Zr. As mentioned in the previous section, 4 different twin modes are reported for Zr, which allows for 10 different junction modes. However, since only 3 twinning modes have been observed, 6 different twin-twin junction modes may occur. These are listed in Table 4.8. Depending on the twinning modes involved, each twin-twin junction mode contains 3 or 4 types. The distinction between the different twin-twin junction types is based on the value of the minimum angle formed by the twin zone axes. The twin zone axis is here used to define the direction that is perpendicular to both the K 1 plane normal and the twinning shear direction, η 1. For example, 3 different types of T 1 -T 1 , T 2 -T 2 and C 1 -C 1 junctions exist : the first one corresponds to junctions between two twins sharing the same zone axis ; the second and third types correspond to junctions between twins for which the minimum angle formed by the two zone axes is equal to 2π/3 and π/3 radians, respectively. In the case of T 2 -C 1 twin-twin junctions, 4 types of junctions are considered : 2 types corresponding to junctions between twins sharing the same zone axis and 2 other types for junctions between twins with non-parallel zone axes. In the case of T 1 -T 2 and T 1 -C 1 junctions, 3 different types can be distinguished.
None of them corresponds to junctions between twins with parallel zone axes. The minimum angles formed by the twin zone axes are here equal to π/6, π/2 and π/3 rad. The 19 junction modes and types observed in the TT03 and IP05 scans are graphically represented in Figure 4.13. The total number of twin-twin junctions observed in the TT03 and IP05 scans is 833 and 96, respectively. Table 4.8 lists all possible twin-twin junction modes in Zr and, more relevant to this work, their observed occurrence frequencies. Frequencies are here defined as the ratio of the population of a given species to the overall population. Tables 4.7 and 4.8 show that, in the case of specimens loaded along the through-thickness direction, for which C 1 twins are the most frequently observed, i.e. 74.5% of all twins, C 1 twins interact mostly with other C 1 twins. Furthermore, T 2 twins tend to interact with twins of different modes regardless of the predominant mode, since T 2 -T 1 , T 2 -C 1 and T 2 -T 2 twin-twin junctions represent 5.9%, 4.5% and 1.4% of all junctions appearing in the TT03 maps and 52.1%, 0% and 6.2% of all junctions observed in the specimens loaded along the in-plane direction (IP05), respectively. In the TT03 specimens, T 1 twins represent 16.5% of all twins (Table 4.7) but are only involved in 8.8% of all twin-twin junctions ; they also represent 31.4% of all twins embedded in single twinned grains (Table 4.9). Even when parent grains are suitably oriented for T 1 twin nucleation, as in the case of compression along the IP direction, T 1 -T 1 twin-twin junctions only represent 41.7% of all twin-twin junctions, while T 2 twins, whose population is 4.7 times smaller, are involved in 58.3% of all twin-twin junctions. For both the TT and IP specimens, the ratio of the number of twins belonging to the most active twinning mode to the number of twins belonging to the second most active twinning mode is similar, i.e. 4.5 and 4.6, respectively.
However, no statistical trend appears regarding the 3 types of twin-twin junctions that may occur between the first and second most active twinning modes. Figure 4.14c shows that, in the case of through-thickness compression, 80% of C 1 -C 1 twin-twin junctions correspond to the third type of junction. Figure 4.14 also reveals that junctions between T 1 -T 1 twins with parallel zone axes, studied by Yu et al. [START_REF] Yu | Twin-twin interactions in magnesium[END_REF][START_REF] Yu | Co-zone {10 -12} twin interaction in magnesium single crystal[END_REF] in Mg single crystals, do not correspond to the predominant type of twin-twin junctions here. These results are purely qualitative, but they can be used as guidelines.
(Figure 4.13 : panels (a) to (s) illustrate the 19 twin-twin junction types, from T 1 -T 1 type 1 to C 1 -C 1 type 3.)
Influence of twin-twin junctions and grain-scale microstructural characteristics on twin nucleation and twin growth
The twinning process can be decomposed into three steps, starting with twin nucleation. This generally occurs at grain boundaries or at local defects where internal stresses are highly concentrated. The second step corresponds to transverse propagation across the grain. Like a crack, the newly formed twin propagates very quickly until reaching another grain boundary or defect. The third and last step is twin growth, consisting of twin thickening [START_REF] Kumar | Numerical study of the stress state of a deformation twin in magnesium[END_REF]. This section is dedicated to the study of statistics related to twin nucleation and twin growth.
However, prior to presenting any result, and as a complement to Table 4.7, Figures 4.15 and 4.18 present the distribution of twinning mode frequencies with respect to grain size and Schmid factor for the specimens loaded along the TT and IP directions. Twinning mode frequencies are defined as the number of T 1 , T 2 and C 1 twins contained in grains belonging to a given subdomain divided by the total population of twins. Notice that the larger number of twins associated with smaller grains should not be interpreted as a "reverse" Hall-Petch effect. Rather, it is a consequence of having a large number of small grains, as shown in Figure 4.12. Histograms displaying average twin thicknesses and average twin numbers per twinned grain are presented further in this section.
Twin nucleation
Figure 4.16 shows the evolution of the fraction of twinned grains containing T 1 , T 2 and C 1 twins as a function of grain area. Similar to previous statistical studies performed on Mg and Zr [START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF][START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF], the present work also finds that twin nucleation can be correlated to grain size. However, Figure 4.16 not only establishes that the overall probability of twinning incidence increases with grain area but also differentiates the 3 cases corresponding to the different twinning modes observed in the samples loaded in compression along the TT and IP directions. The influence of grain size on the nucleation probability appears to be the strongest for C 1 and T 1 twins in the TT03 and IP05 scans, respectively, since 100% of grains larger than 1060 µm² contain at least one twin of the predominant twinning mode while less than 50% of grains smaller than 136 µm² are twinned. In both TT03 and IP05 specimens, the effect of grain size on T 2 twin incidence is significant.
The fraction of grains containing at least one T2 twin increases rapidly and linearly with grain area, even if, in the case of samples compressed along the TT direction, T2 does not correspond to the second most active twinning mode. Note also that about 10% of grains smaller than 136 µm² and 30% of grains larger than 664 µm² contain at least one T1 twin in TT03 scans. Figures 4.17a and 4.17b clearly reveal that the average number of twins belonging to the predominant twinning mode increases with grain size. The phenomenon is more pronounced for C1 twins in samples loaded in compression along the TT direction. However, the influence of grain size remains significant in the case of IP05 specimens: the average number of T1 twins contained in grains whose areas lie in the ranges [4 µm², 136 µm²] and [534 µm², 664 µm²] is equal to 1.5 and 4.9, respectively. This trend seems to change for grains larger than 928 µm². Figures 4.17c and 4.17d show that small grains can contain a very large number of twins and that the number of twins per twinned grain can vary significantly from one grain to another. However, EBSD scans are images of 2D sections. As a result, it is possible that small grains are actually bigger than they seem to be, which introduces a bias in grain size effect statistics.

Nucleation of twins belonging to the predominant twinning mode is strongly controlled by grain orientation and macroscopic stress direction. As indicated by a classic Schmid factor analysis (Figure 4.18), 91% and 76% of C1 and T1 twins, respectively, have a Schmid factor greater than 0.25. Figure 4.19 also shows that 21% and 42% of C1 and T1 twins observed in specimens loaded along the TT and IP directions, respectively, correspond to the 1st Schmid factor variant, denoted by v1. Regarding the activation of twins belonging to the second most active twinning mode, the dependence on grain orientation and loading direction is less obvious.
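The Schmid factor analysis behind Figures 4.18 and 4.19 can be sketched generically: rotate the loading axis into the crystal frame with the grain orientation quaternion, then project it onto the twin plane normal and shear direction. This is only an illustrative sketch — the sample-to-crystal quaternion convention is an assumption, and the actual normals and shear directions of the twin variants depend on the c/a ratio:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def schmid_factor(q, n_c, m_c, load_dir):
    """Schmid factor of a twin system with unit plane normal n_c and unit
    shear direction m_c (both in the crystal frame) under uniaxial loading
    along load_dir (sample frame); q maps sample vectors to the crystal frame."""
    l_c = quat_rotate(q, np.asarray(load_dir, dtype=float))
    l_c /= np.linalg.norm(l_c)
    return float(np.dot(n_c, l_c) * np.dot(m_c, l_c))

# Sanity check with the identity orientation: a system whose normal is along z
# and shear direction along x reaches the maximum SF of 0.5 under 45-degree loading.
q_id = (1.0, 0.0, 0.0, 0.0)
n = np.array([0.0, 0.0, 1.0])
m = np.array([1.0, 0.0, 0.0])
sf_45 = schmid_factor(q_id, n, m, [1.0, 0.0, 1.0])
```

Evaluating this for the six variants of a twinning mode and sorting the values gives the v1…v6 ranking used in the statistics above.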
The phenomenon is particularly striking in the case of T1 twins in TT03 specimens, since 55% of T1 twins exhibit a negative Schmid factor (Table 4.10) and 26% of them correspond to either the 4th, 5th or 6th variant. For T2 twins observed in IP05 scans, 38% have a Schmid factor lower than 0.25, and 20% correspond to either the 4th, 5th or 6th variant. Figure 4.18 and Table 4.10 also reveal that the proportion of T2 twins with a negative Schmid factor remains low (i.e. 11% and 8% in the case of TT and IP compression, respectively) irrespective of the loading direction. The activation of twins with a negative SF is a consequence of using the macroscopic stress to define the SF; in practice, this result points at large local deviations from the macroscopic stress in the grains involved.

Twin growth

Twin growth is considered to be the last step of the twinning process, consisting of twin lamella thickening. The influence of grain orientation, grain size and twin-twin junctions is investigated via histograms presenting twin thickness distributions with respect to grain area and Schmid factor. Following the same approach as the one used for twin nucleation, Figure 4.20 shows statistics of true twin thickness as a function of grain size. Figures 4.20a and 4.20b are histograms displaying average true twin thicknesses sorted by twinning mode. In the case of TT compression, the average T1 twin thickness is always close to 0.5 µm. The average C1 twin thickness appears to first increase until grain size reaches 928 µm² and then to decrease; the values corresponding to the first and last bins are 0.64 and 0.71 µm, respectively. In the case of IP compression, the average T1 twin thickness oscillates around 0.75 µm. As a result, it is not possible to identify a correlation between twin thickness, twinning mode and twinned grain area.
However, Figure 4.20 also reveals that the average thickness of twins belonging to the most active mode is always greater than the average thickness of twins belonging to the second most active mode. Figures 4.20c and 4.20d present the corresponding twin thickness distributions and do not reveal a clear grain size effect on twin growth either. (In Figure 4.19, variant frequencies consist of the ratio of the number of twins of a given SF variant and of a given twinning mode to the total population of twins belonging to the considered twinning mode.) Numerical support for such grain-scale effects is provided by Kumar et al. [START_REF] Kumar | Numerical study of the stress state of a deformation twin in magnesium[END_REF], based on shear accommodation and stress considerations. The large thickness values observed in small grains also suggest that using 2D variables to describe spatial phenomena introduces a bias in grain size effect statistics; the same comment was made in the paragraph dealing with twin nucleation. Disregarding negative Schmid factor twins (Figure 4.18), the influence of crystallographic orientation on the growth of twins belonging to the predominant mode is clearly shown in Figure 4.21, which presents the distribution of twin thickness sorted by twinning mode with respect to Schmid factor. The average true twin thickness of C1 and T1 twins increases with increasing Schmid factor values in TT03 and IP05 specimens, respectively. This indicates that the macroscopic stress is the major driving force for twin growth in the case of the most active twinning modes. However, similar to observations made about twin nucleation, the influence of grain orientation and macroscopic stress direction is reduced for twins belonging to the second and third most active twinning modes. As a result, mechanisms involved in the growth of twins belonging to the predominant twinning mode are likely to be different from those responsible for the growth of other twins. Beyerlein et al. [START_REF] Beyerlein | Statistical analyses of deformation twinning in magnesium[END_REF] and Capolungo et al.
[START_REF] Capolungo | Nucleation and growth of twins in zr : A statistical study[END_REF] argue that if backstresses induced by neighboring grains in reaction to the localized twin shear are independent of orientation, then twins with a higher SF, and hence a higher resolved shear stress, have an advantage in overcoming this shear reaction. Finally, to highlight the influence of twin-twin junctions on twin thickening in a statistically meaningful manner, the comparison of twin thicknesses between single twinned (also referred to as mono-twinned grains in Figures 4.22 and 4.23) and multi-twinned grains is performed for C1 twins in TT03 specimens. A multi-twinned grain is here defined as a twinned grain containing several twins. In the TT03 scans, 102 and 1369 twins were observed in single twinned and multi-twinned grains, respectively (see Tables 4.7 and 4.9). Figures 4.22b and 4.23b show the distribution of twins embedded in single twinned and multi-twinned grains as a function of grain area and Schmid factor, respectively; they are aimed at indicating the statistical relevance of data about twins contained in single twinned grains. Figure 4.23a presents the distribution of average thicknesses of twins embedded in both single twinned and multi-twinned grains as a function of Schmid factor values. It clearly shows that the average thickness of twins embedded in single twinned grains is greater than or equal to the average thickness of twins contained in multi-twinned grains irrespective of grain orientation. This phenomenon appears most clearly for high and mid-high Schmid factor values, i.e. SF > 0.25. As previously mentioned, Figure 4.23b shows that bars associated with negative Schmid factors apply to only a few single twinned grains. Moreover, Figure 4.22a shows that, similar to multi-twinned grains, the thickness of twins contained in single twinned grains does not depend on grain area.
However, the latter is generally greater than the average twin thickness of twins in multi-twinned grains. Figure 4.22b also shows that almost all single twinned grain areas are smaller than 664 µm². Such a result was expected given the influence of grain size on twin nucleation.

Conclusion

The statistical analyses performed on Mg AZ31 alloy EBSD scans were aimed at determining activation criteria for two distinct types of "unlikely twins": low Schmid factor twins, defined as twins with either a negative Schmid factor or a Schmid factor smaller than 0.3 together with a ratio of the Schmid factor to the highest twin variant Schmid factor possible in the considered twinned grain lower than or equal to 0.6, and double extension twins. As implied by their designation, the nucleation of both low Schmid factor and double extension twins can be considered a rare event, since low Schmid factor and double extension twins represent 6.7% and 5.7% of all twins, respectively. Note also that double extension twins occupy less than 0.5% of the total twinned area. Relying on the value of the distortion induced by a twin that has to be accommodated by the other potential deformation modes, it was found that group 1 low Schmid factor twins, i.e. twins in contact with a grain boundary, require the most accommodation through basal slip, which has the lowest CRSS, and the least accommodation through pyramidal slip, which has the highest CRSS. Group 2 low Schmid factor twins, i.e. twins forming a cross-boundary twin pair, require the least accommodation through pyramidal slip or contraction twinning, both with high CRSSs, but the most accommodation through prismatic slip and tensile twinning, whose CRSSs are higher than that of basal slip and lower than those of pyramidal slip and compressive twinning. The second study highlights that, contrary to primary twins, secondary tensile twins obey Schmid's law.
It also showed that the use of a micromechanical double-inclusion model is an accurate way to predict the activation of the right twin variant. The fact that the activation of double extension twin variants obeys Schmid's law also implies that purely deterministic approaches, as used in classical polycrystalline models, should be able to predict the nucleation of such twins. The statistical study performed on a large set of EBSD scans of high purity Zr discusses the influence of twin-twin junctions between first generation twins, grain size and crystallographic orientation on the nucleation and growth of twins. Samples were loaded in compression at liquid nitrogen temperature along the through-thickness direction and one of the in-plane directions in order to favor C1 and T1 twins, respectively. This study is the first to establish the statistical relevance of twin-twin junctions by collecting and processing data about all twinning modes and all twin-twin junctions in Zr. Six different types of twin-twin junctions, i.e. T1-T1, T1-T2, T1-C1, T2-T2, T2-C1 and C1-C1, are observed. Twin-twin junctions occurring between twins belonging to different modes, and more particularly between twins belonging to the first and second most active twinning modes, appear very frequently and cannot be neglected. Depending on the loading configuration, they may represent more than half of all twin-twin junctions. The comparison between the average thicknesses of twins embedded in single twinned and multi-twinned grains reveals that twin-twin junctions hinder twin growth. In addition, only the nucleation and growth of twins belonging to the predominant twinning mode seem to be strongly sensitive to grain orientation and loading direction. These differences can probably be explained by the presence of localized high stress levels allowing the nucleation of any twinning mode.
In agreement with previous studies, it is also found that the probability of twin nucleation and the average number of twins per twinned grain increase with grain size.

Chapitre 5

Conclusion

Three types of interactions, i.e. slip/slip, slip/twin and twin/twin interactions, are key to the mechanical response and strain hardening of h.c.p. metals. The work presented here focused on the twinning process in general and, more specifically, on understanding the development of internal stresses within the twin domain from inception to final shape and on quantifying the statistical relevance of twin/twin interactions (i.e. double/sequential twinning and twin intersections). To this end, novel micromechanical models were introduced; specimens were experimentally characterized by means of mechanical testing, XRD and EBSD techniques; and these results were analyzed in light of a new freeware developed over the course of this work to extract quantified links between twins, initial microstructure and loading directions. The key findings of each of these initiatives are presented in what follows, and guidance for future developments is proposed. To study internal stress development during twinning, a new micromechanical approach based on a double inclusion topology and the use of the Tanaka-Mori theorem was first adopted, in the form of an elasto-static Tanaka-Mori scheme in heterogeneous elastic media with plastic incompatibilities. This first model was introduced to study the evolution of internal stresses in both parent and twins during first and second-generation twinning in h.c.p. materials. The model was first applied to the case of pure Mg to reproduce the average internal resolved shear stresses in the parent and twin phases.
While the study is limited to anisotropic heterogeneous elasticity with eigenstrains representing the twinning shears, it suggests that the magnitude of the back-stresses is sufficient to induce plastic deformation within the twin domains. Moreover, the predominant effect on the magnitude and direction of the back-stresses appears to be due to heterogeneous elasticity, because of the large misorientations induced between the parent and twin domains. It is also found that the stress state within twin domains is largely affected by the shape of the parent phase. Using the same notation as Martin et al. [START_REF] Martin | Variant selection during secondary twinning in mg-3%al[END_REF], who refer to as "secondary" any {10 12} tensile twin variant embedded in a {10 11} compressive twin, application of the model shows that only three (i.e. A, B, D) of the six tensile twin variants have a positive resolved shear stress on the twin plane. Two of those correspond to the most experimentally observed variants. In addition, variants A and D are found to exhibit the largest elastic energy decrease during secondary twin growth. Interestingly, it is also found that variant A can grow to larger volume fractions than variant D. Clearly, all results shown here are limited to static configurations and neglect internal variable evolution. Nonetheless, these first results suggest that applying the generalized Tanaka-Mori scheme within mean-field self-consistent methods will yield more accurate predictions of the internal state within twin domains for real polycrystalline hexagonal metals like magnesium and its alloys. Consequently, a second model, called the double inclusion elasto-plastic self-consistent (DI-EPSC) scheme and consisting of an extension of the elasto-static Tanaka-Mori scheme to elasto-plasticity and polycrystalline media, was proposed.
Similar to the previous model, the original Tanaka-Mori result is used to derive new concentration relations including average strains in twins and twinned grains. Twinned and non-twinned grains are then embedded in an HEM, with the effective behavior determined through an implicit nonlinear iterative self-consistent procedure called the "DI-EPSC" model. Contrary to the existing EPSC scheme, which only considers single ellipsoidal inclusions, the new strain concentration relations account for a direct coupling between parent and twin phases. Using the same Voce law coefficients and the same hardening parameters as in Clausen et al. [START_REF] Clausen | Reorientation and stress relaxation due to twinning : Modeling and experimental characterization for mg[END_REF], comparison between the EPSC scheme, the DI-EPSC scheme and experimental data leads to three main results with respect to twinning and the associated plasticity mechanisms. First, it appears that introducing this new topology for twinning captures the latent effects induced by twinning in the parent phases and allows predicting the influence of plasticity on hardening and hardening rates in the twin phases. Second, because twins are now directly embedded in the parent phases, the new concentration relations lead to more scattered shear strain distributions in the twin phases; twin stress states are strongly controlled by the interaction with their associated parent domains. Third, the study clearly shows the importance of appropriately considering the initial twin stress state at twin inception. During this study, it was found that most numerical instabilities can be traced back to the choice of the hardening matrix [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF][START_REF] Hutchinson | Elastic-plastic behaviour of polycrystalline metals and composites[END_REF].
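A simple numerical diagnostic for this stability issue is to inspect the eigenvalue spectrum of the (symmetrized) hardening matrix; a minimal sketch, with made-up 2x2 self/latent hardening matrices for illustration:

```python
import numpy as np

def definiteness(H, tol=1e-12):
    """Classify a hardening matrix from the eigenvalue spectrum of its
    symmetric part."""
    Hs = 0.5 * (H + H.T)              # symmetrize before the spectral test
    lam = np.linalg.eigvalsh(Hs)
    if lam.min() > tol:
        return "positive definite"
    if lam.min() > -tol:
        return "positive semi-definite"
    return "indefinite"

# Made-up examples: the first matrix has eigenvalues 1.5 and 2.5,
# the second has eigenvalues 0 and 2
H_pd = np.array([[2.0, 0.5], [0.5, 2.0]])
H_psd = np.array([[1.0, 1.0], [1.0, 1.0]])
```

Such a check, run before the self-consistent iterations, flags hardening laws likely to destabilize the EPSC algorithm.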
Although this matrix was chosen to be positive semi-definite, the EPSC scheme would have been more stable if all eigenvalues of the hardening matrix were strictly positive, i.e. if the hardening matrix was positive definite. However, the hardening matrix associated with Mg single crystals computed by Bertin et al. [START_REF] Bertin | On the strength of dislocation interactions and their effect on latent hardening in pure magnesium[END_REF] from dislocation dynamics is definitely not positive semi-definite, and even less positive definite. As a result, the easiest way to overcome this issue, inherent in the EPSC algorithm, is either to use hardening laws with strictly positive definite hardening matrices or to use visco-plastic or elasto-visco-plastic self-consistent schemes. In order to process EBSD scans and extract twinning statistics automatically, a new EBSD analysis software based on graph theory and quaternion algebra was developed. Quaternions allow an easy computation of disorientations between pixels and areas of consistent orientation. The subsequent use of graph and group structures allows grain identification, twin recognition and statistics extraction. The newly introduced software is distinguished from pre-existing commercial software packages and academic codes by combining visualization with automated analysis of the EBSD map. The built-in graphical user interface enables immediate and direct access to microstructural and twinning data such as the orientation and size of twins and grains and the mode and system of twins; it also allows the user to correct or complete, if necessary, the analysis performed by the software. In addition, all raw and processed data are saved in a relational database. Consequently, all experimental parameters, microstructural data and twinning statistics are easily accessible via SQL requests. The database also enables us to quantify systematically the influence of a very large number of parameters.
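The quaternion-based disorientation computation can be sketched as follows. For brevity, only the six rotations about the c-axis are used as symmetry operators here — the full hexagonal proper rotation group has 12 elements — so this is an illustrative subset, not the software's actual implementation:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def axis_angle(axis, deg):
    """Unit quaternion for a rotation of `deg` degrees about `axis`."""
    h = math.radians(deg) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)

# 6-fold rotations about the c-axis (subset of the 12 hexagonal rotations)
SYM_C = [axis_angle((0.0, 0.0, 1.0), 60.0 * k) for k in range(6)]

def disorientation_deg(q1, q2, sym=SYM_C):
    """Smallest rotation angle mapping orientation q1 onto q2, modulo symmetry."""
    dq = qmul(qconj(q1), q2)
    best = 180.0
    for s in sym:
        w = abs(qmul(s, dq)[0])
        best = min(best, 2.0 * math.degrees(math.acos(min(1.0, w))))
    return best
```

For example, two orientations differing by a 70° rotation about the c-axis reduce, under the 60° symmetry, to a 10° disorientation.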
The construction of such a database makes a significant difference compared to other pre-existing analysis tools. Moreover, although the tool was initially developed to perform statistical analyses on Mg and Zr scans, the software is not limited to these two h.c.p. metals. Its algorithm is capable of identifying any twin occurring in h.c.p. materials on condition that the user writes in the code the value of the c/a ratio and the theoretical disorientation quaternions corresponding to all potentially active twinning systems. For the analysis of other crystallographic structures, the user has to adapt the cell characteristics, add or remove quaternions corresponding to the different twinning orientation relations and modify the symmetry quaternions. The first two statistical studies were performed on rolled AZ31 Mg alloy EBSD scans in order to explain the activation of low Schmid factor {10 12} tensile twins and successive {10 12}-{10 12} double extension twins. The first study revealed that low Schmid factor {10 12} tensile twins represent only 6.8% of all twins. Being based on purely deterministic constitutive laws, polycrystalline schemes such as EPSC or EVPSC will not be able to predict the formation of low Schmid factor twins; however, the consequences for predicted mechanical responses are expected to be small because of their low statistical relevance. The second study revealed that {10 12}-{10 12} double extension twins obey Schmid's law in general. It also showed that considering internal energy changes computed from a micromechanical double-inclusion model, even a simplified one, results in very accurate predictions of double extension twin variant activation. The study also pointed out that such double twins are extremely rare and have a negligible effect on the mechanical properties of the material.
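As an illustration of such theoretical disorientation quaternions, the parent/twin disorientation of a {10 12} tensile twin is a rotation about a <11 20> axis whose angle follows directly from the c/a ratio (about 86.3° for Mg and 85.2° for Zr). A sketch, where the Cartesian direction chosen for the <11 20> axis is a convention that must match the crystal frame used in the code:

```python
import math

def t1_disorientation(c_over_a):
    """Theoretical parent/twin disorientation angle (degrees) of a {10-12}
    tensile twin, i.e. a rotation about a <11-20> axis, as a function of c/a."""
    return 2.0 * math.degrees(math.atan(c_over_a / math.sqrt(3.0)))

def t1_quaternion(c_over_a, axis=(1.0, 0.0, 0.0)):
    """Unit quaternion for that disorientation; taking the <11-20> axis along
    x is a frame convention, not a universal choice."""
    h = math.radians(t1_disorientation(c_over_a)) / 2.0
    s = math.sin(h)
    return (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)

angle_mg = t1_disorientation(1.624)   # Mg: ~86.3 degrees
angle_zr = t1_disorientation(1.593)   # Zr: ~85.2 degrees
```

Comparing measured disorientations against such reference quaternions, within a tolerance, is how twinning relations between fragments are recognized.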
Another statistical study, performed on Zr EBSD scans, was carried out in order to discuss the influence of twin-twin junctions between first generation twins, grain size and crystallographic orientation on the nucleation and growth of twins. Samples were loaded in compression at liquid nitrogen temperature along the through-thickness direction and one of the in-plane directions in order to favor C1 and T1 twins, respectively. The abbreviations T1, T2 and C1 stand for {10 12}, {11 21} and {11 22} twins, respectively. This study is the first to establish the statistical relevance of twin-twin junctions by collecting and processing data about all twinning modes and all twin-twin junctions in Zr. Six different types of twin-twin junctions, i.e. T1-T1, T1-T2, T1-C1, T2-T2, T2-C1 and C1-C1, are observed. Twin-twin junctions occurring between twins belonging to different modes, and more particularly between twins belonging to the first and second most active twinning modes, appear very frequently and cannot be neglected. Depending on the loading configuration, they may represent more than half of all twin-twin junctions. The comparison between the average thicknesses of twins embedded in single twinned and multi-twinned grains reveals that twin-twin junctions hinder twin growth. In addition, only the nucleation and growth of twins belonging to the predominant twinning mode seem to be strongly sensitive to grain orientation and loading direction. These differences can probably be explained by the presence of localized high stress levels allowing the nucleation of any twinning mode. In agreement with previous studies, it is also found that the probability of twin nucleation and the average number of twins per twinned grain increase with grain size.
The logical continuation of this work consists of 1) a more user-friendly integration of post-processing capabilities in the EBSD software and 2) the development and implementation of stochastic models taking all the new statistically meaningful data into account in the DI-EPSC scheme. For example, the software could become more user-friendly if SQL requests were directly implemented in it and if it displayed and output tables with information about twins, twin junctions, twinned grains, etc. One can also imagine an interface allowing users to choose the microstructural features they desire to obtain. Regarding the development of stochastic models, Beyerlein et al. [START_REF] Beyerlein | Effect of microstructure on the nucleation of deformation twins in polycrystalline high-purity magnesium : A multi-scale modeling study[END_REF] and then Niezgoda et al. [START_REF] Niezgoda | Stochastic modeling of twin nucleation in polycrystals : An application in hexagonal close-packed metals[END_REF] proposed models dealing with the nucleation of {10 12} tensile twins in Mg. These models could be extended so as to be capable of predicting the nucleation of any type of twinning mode in h.c.p. metals; the results of the statistical study on Zr presented in Chapter 4 are a starting point. In addition, performing EBSD measurements on more Mg and Zr samples loaded either monotonically or cyclically along different directions, at different temperatures and strain rates, would represent an invaluable additional source of data for the modeling community.

Annexe A

Graphical User Interface of the EBSD map analysis software and SQL Requests for automated twin statistics extraction

A.0.1 Graphical User Interface

To assist the EBSD map analysis, 9 distinct visualization modes and 8 color mappings of the EBSD data are provided by the software.
The color mappings correspond to two different color mappings of the rotation space; three stereographic projections; and, displayed in grey levels, the image quality, confidence index and fitness value reported by the acquisition software. The 9 visualization modes are the following:

1. Raw mode: displays the raw EBSD measurement points, as shown in Figure 3.23, left.

2. Twinning editor: displays twinning relations between fragments and allows the user to enable or disable them. Fragments identified as parent phases and as first, second, and third or higher generation twins are marked with yellow, light blue, dark blue and red discs, respectively. Discs indicating high order twins, i.e. second, third and higher order, appear larger in order to be more visible. Since such twins are likely to result from incorrectly enabled relations, these twinning relations have to be inspected first (Figure 3.23, center).

3. Grain neighbors: displays grains with their neighbors. Here, the user is able to mark a fragment as a parent or twin phase.

4. Clusters: displays phases grouped by twinning order. The colors used to indicate twinning modes are the same as those used in the twinning editor visualization mode, but this mode emphasizes twinning order. For example, it was previously mentioned that first generation twins are marked with a light blue disc in the twinning editor mode; here, first generation twins are not only marked by a light blue disc but their boundaries also appear in light blue (Figure 3.23, right).

5. … these two connected parts, such as the disorientation existing between the twin and the parent phases, the twinning mode corresponding to the disorientation, etc.

6. Convex hulls: displays the convex hull of detected grains. The polygons are drawn in green if their area is close to the enclosed grain area, and in red otherwise.

7. Ellipses: displays a fitted ellipse around every detected twin.
Twins whose shape does not fit an ellipse very well are drawn in red because they are likely to be the result of the merging of two (or more) twins (Figure 3.24, left). This mode gives the user access to grain and twin properties such as orientation and size.

8. Connected twins: displays detected twin-twin junctions. The user can also mark undetected twin-twin junctions manually.

9. Twin joints: displays identified twinning relations between measurement points located along twin boundaries. Even though measurement points along twin boundaries are not very reliable, this mode may be useful to visualize how strong the disorientation appears along the boundary.

In addition to these visualization modes, options are available to highlight grain or twin boundaries, exclude grains in contact with the map edge, replace the twins in a twin strip by their union (Figure 3.24, right), display connected part ids, and zoom in or out and pan. When zooming in to a level where individual measurements can be separated, the local disorientation is also displayed, as shown in Figure 3.17.

A.0.2 SQL Requests

All SQL requests used for statistics extraction are listed in the following.

Request 1:

    select g.id, g.area, g.qx, g.qy, g.qz, g.qw
    from grains as g
    where not g.map_edge
    order by g.id;

Request 2:

    drop view Twins;
    create view Twins as
    select C1.id as P, C2.id, c2.grain, S2.area, C2.size, S2.length, S2.thickness,
           c2.x, c2.y, s2.qx, s2.qy, s2.qz, s2.qw, e.twinning, e.variant
    from ConnectedEdges as E
    inner join Fragments as c1 on c1.id = e.i
    inner join Fragments as c2 on c2.id = e.j
    inner join FragmentStatistics as s1 on c1.id = s1.id
    inner join FragmentStatistics as s2 on c2.id = s2.id
    inner join Grains as G on G.id = C2.grain
        and C1.is_parent and E.twinning > 1 and C1.grain = C2.grain
        and not G.map_edge and c1.twinstrip <= 0 and c2.twinstrip <= 0
        and c2.twinning_order = 1
    order by C2.id;

Request 3:

    select C1.grain, C1.id, C2.id, T1.twinning, T1.variant, T2.twinning, T2.variant

    select MAP_ID, g.id, g.area, g.border_length, avg(t.area), sum(t.area),
           count(t.id), avg(t.thickness)
    from grains as g, Twins as t
    where g.id = t.grain
    group by g.id
    order by g.id;

Annexe B

Stress-strain curve correction method and mechanical testing parameters

Measuring the evolution of the distance between the current and initial positions of the upper compression plate is an accurate way to estimate the deformation of the sample. However, during a compression test, the distance measured by the LVDT sensors evolves not only because of the deformation of the sample but also because of the deformation of the machine. Initially, the stiffness of the machine was measured experimentally by compressing square tungsten samples, and the correction parameters were then directly derived from the measured machine stiffness. Unfortunately, the tungsten samples cracked. As a result, it was decided to correct the measured strain in such a way that the Young's modulus during the loading phase is equal to 45 GPa. To calculate the corrective parameters, one MATLAB code was written to extract the stress and strain values and output the stress-strain curves, and another was written to calculate the corrective parameters and the Young's modulus for both the loading and unloading regimes. The reason why many Young's moduli for unloading regimes are missing in Table B.1 is that measurement points corresponding to unloading phases were not recorded initially; the procedure was then adapted in order to save them. This is also why the elastic regime of the loading phase was used to determine the correction parameters.
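As a complement, the determination of the correction parameters from two elastic measurement points can be sketched in a few lines (shown here in Python rather than the MATLAB used for the actual processing; the anchoring of b so that the corrected curve passes through σ/E_th at the first point is an assumption, since the explicit formulas are not given in the text):

```python
def correction_parameters(p1, p2, E_th):
    """a, b such that eps_c = eps_m - (a*sigma_m + b) has slope 1/E_th
    through the elastic points p1 = (eps1, sigma1) and p2 = (eps2, sigma2)."""
    e1, s1 = p1
    e2, s2 = p2
    a = (e2 - e1) / (s2 - s1) - 1.0 / E_th
    b = e1 - a * s1 - s1 / E_th   # anchor: corrected curve hits s1/E_th at P1
    return a, b

def corrected_strain(eps_m, sigma_m, a, b):
    return eps_m - (a * sigma_m + b)

# Synthetic elastic data with an apparent modulus of 30 GPa instead of 45 GPa
E_th = 45e9
p1 = (30e6 / 30e9, 30e6)   # (strain, stress) at about 30 MPa
p2 = (60e6 / 30e9, 60e6)   # at about 60 MPa
a, b = correction_parameters(p1, p2, E_th)
ec1 = corrected_strain(p1[0], p1[1], a, b)
ec2 = corrected_strain(p2[0], p2[1], a, b)
E_corrected = (p2[1] - p1[1]) / (ec2 - ec1)
```

After correction, the slope of the loading curve between the two chosen points equals the target modulus by construction.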
The following equations describe the method used to correct the measured strain. First, the measured strain ε_m is corrected by a term ε_corr that is linearly proportional to the force applied and, as a result, to the engineering stress σ_m:

    ε_c = ε_m − ε_corr    (B.1)

where ε_c denotes the measured strain after correction and ε_corr the correction term, expressed as follows:

    ε_corr = a σ_m + b    (B.2)

The correction parameters a and b are determined from the desired theoretical Young's modulus value, i.e. E_th = 45 GPa, and two measurement points, P1(ε_m1, σ_m1) and P2(ε_m2, σ_m2), picked such that both σ_m1 and σ_m2 belong to the elastic regime of the loading. For all corrections, points P1 and P2 were chosen such that σ_m1 and σ_m2 were equal to about 30 MPa and 60 MPa, respectively.

Sommaire

Résumé

Chapitre 1 Introduction and state-of-the-art
1.1 Motivation and Objectives
1.2 Crystallography of h.c.p. metals
1.3 Crystallography of twinning in h.c.p. metals
1.4 Nucleation and growth of twins
1.4.1 Twinning dislocations and twin interfaces
1.4.2 Mechanisms involved in nucleation and growth of twins
1.5 Twinning in constitutive and polycrystalline modeling
1.6 Scope of the thesis

Chapitre 2 Study of the influence of parent-twin interactions on the mechanical behavior during twin growth
2.1 The inclusion problem
2.1.1 Field equations and thermodynamics
2.1.2 Eshelby's solution
  2.2 A generalized Tanaka-Mori scheme in heterogeneous elastic media with plastic incompatibilities
    2.2.1 Elasto-static Tanaka-Mori scheme
    2.2.2 Application to first generation tensile twinning in magnesium
  2.3 The Double Inclusion Elasto-Plastic Self-Consistent (DI-EPSC) scheme
    2.3.1 DI-EPSC model
    2.3.2 Computational flowchart
    2.3.3 Application to AZ31 alloy
  2.4 Conclusion

Chapitre 3  Electron backscattered diffraction technique and automated twinning statistics extraction
  3.1 Brief description of Scanning Electron Microscopes
  3.2 Historical perspectives of the Electron Backscatter Diffraction Technique
  3.3 Basic concepts of electron diffraction and diffraction pattern analysis
  3.4 A graph theory based automated twin recognition technique for Electron Back-Scatter Diffraction analysis
    3.4.1 Euler angles, quaternion rotation representations and their application to EBSD data
    3.4.2 Identification of grains, parent and twin phases
    3.4.3 Graphical User Interface and Data availability
  3.5 Conclusion

Chapitre 4  Identification of statistically representative data associated to nucleation and growth of twins
  4.1 Preliminary notations and considerations
  4.2 Nucleation of "unlikely twins" : low Schmid factor twins and double twinning in AZ31 Mg alloy
    4.2.1 Experimental set-up and testing conditions
    4.2.2 Mechanical behavior and microstructure evolutions
    4.2.3 Low Schmid factor {10 12} tensile twins
    4.2.4 Successive {10 12}-{10 12} double extension twins
  4.3 Probing for the latent effect of twin-twin junctions : application to the case of high purity Zr
    4.3.1 Experimental set-up and testing conditions
    4.3.2 Twin-twin junctions statistics
  4.4 Conclusion

Chapitre 5  Conclusion

Annexes
Annexe A  Graphical User Interface of the EBSD map analysis software and SQL Requests for automated twin statistics extraction
  A.0.1 Graphical User Interface
  A.0.2 SQL Requests
Annexe B  Stress-strain curve correction method and mechanical testing parameters
Bibliographie

Table des figures

2  Example of an EBSD map obtained from a clock-rolled pure Zr sample loaded in compression along its through-thickness direction. This map was processed with the software that will be described at greater length in Chapter 3.
3  Schematic representation of the heterogeneous elastic problem with eigenstrains due to primary and secondary twinning.
4  Schematic representation of the elasto-plastic problem as modeled in the classical elasto-plastic self-consistent scheme (abbreviated EPSC) (a) and in the new double-inclusion elasto-plastic self-consistent scheme developed in this thesis and called DI-EPSC (b).
5  Averaged plastic activities of the main slip and twinning deformation systems within the twin and parent phases obtained from (a) EPSC and (b) DI-EPSC.
6  (a) Comparison of the macroscopic stress-strain curves predicted by EPSC and DI-EPSC with the experimental curve and (b) evolution of the total twin volume fraction in the polycrystal.
7  Graphical representation of examples of twin-twin junctions observed in the Zr EBSD maps.

1.3  Schematic representation of a hexagonal structure containing three primitive unit cells. Atoms are represented by blue hard spheres.
1.4  Schematic representation of the main crystallographic planes and directions in h.c.p. structures.
1.5  Schematic representation of the twinning plane, K1, and its conjugate, K2, the twinning shear direction, η1, and its conjugate, η2, and the plane of shear, P [3].
1.6  Schematic representation of possible lattice shuffles for double lattice structures when q = 4. (a) Parent structure ; (b) sheared parent ; (c) type 1 twin ; (d) type 2 twin ; (e) possible type 1 shuffle ; (f) possible type 2 shuffle ; (g) alternative type 1 shuffle ; (h) alternative type 2 shuffle [3].
2.1  Schematic representation of the original inclusion elastic problem containing one ellipsoidal inclusion V_Ω with prescribed eigenstrain ǫ*. The dashed lines signify that the inclusion is embedded in an infinite elastic medium. The inclusion and the matrix have the same elastic modulus C_0.
2.2  Schematic representation of the heterogeneous elastic problem containing two ellipsoidal inclusions V_b and V_a (with V_1 ⊂ V_2) with prescribed eigenstrains ǫ*_b in V_b and ǫ*_a in sub-region V_a − V_b and distinct elastic moduli C_b in V_b and C_a (in sub-region V_a − V_b).
2.3  Representation of the local coordinate system (e′_1, e′_2, e′_3) associated with {10 12} tensile twinning. The reference coordinate system (e_1, e_2, e_3) associated with the crystal structure and the crystallographic coordinate system (a_1, a_2, a_3, c) are also shown.
2.4  (a and c) Mean internal stresses projected on the twinning plane in the twinning shear direction in both the twin phase and the parent phase as functions of R_twin ; (b and d) influence of R_parent on the mean internal stresses of the parent projected on the twinning plane. The twin volume fraction in the parent is 0.25 for (a) and (b) and 0.05 for (c) and (d).
2.5  Evolution of the mean internal stresses in both twin and parent phases projected on the twinning plane as a function of twin volume fraction. Lines refer to the model predictions while symbols denote measured data. The ellipsoid aspect ratio for the parent, R_parent, is set to 3.
2.6  Resolved shear stresses (projected on the twinning plane and twin shear direction) of both the parent (a) and the twin (b) phases as functions of twin volume fraction.
2.9  Initial textures of (a) the extruded alloy (with the extrusion axis at the center of the pole figures) and (b) the randomly textured material.
2.10  (a) Comparison of macroscopic stress-strain curves from EPSC and DI-EPSC with experimental diffraction data and (b) evolution of the total twin volume fraction in the polycrystal.
2.11  Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC.
2.12  RSS projected on the twinning plane in the twinning direction within the parent and twin phases. (a) compares the DI-EPSC and EPSC results ; (b) compares internal stresses for different geometrical configurations for initially unrelaxed twins. Axis length ratios for ellipsoidal shapes are a1/a2 = 1 and a1/a3 = 3.
2.13  Spread of total shear strains within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an extruded alloy and from (c) EPSC and (d) DI-EPSC for an initially randomly textured alloy. Each cross represents the total shear strain for one single grain.
2.14  Total shear strain distributions in parent and twin domains at 4.5% deformation obtained from (a) EPSC and (b) DI-EPSC in the case of the extruded AZ31 alloy and from (c) EPSC and (d) DI-EPSC in the case of the initially randomly textured AZ31 alloy.
2.15  Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an initially randomly textured material.
2.16  RSS projected on the twinning plane in the twinning direction within the parent and twin phases, obtained from the EPSC and DI-EPSC models for both initially relaxed and unrelaxed twins.
2.17  Averaged system activities within the parent and twin phases from DI-EPSC when twins are assumed to be initially totally relaxed.
3.1  Philips XL 30 F Orientation Imaging Microscopy System at Los Alamos National Laboratory, MST-6 (LANL web site).
3.2  Chamber of a Scanning Electron Microscope.
3.3  Schematic representation of signals resulting from the interaction between primary electrons and atoms at or near the sample surface ; R denotes the depth of the interaction volume [5].
3.4  Schematic representation of energy levels of electrons resulting from the interaction between primary electrons and atoms at or near the sample surface [5].
3.7  Example of EBSD orientation map showing the undeformed microstructure of a high purity clock-rolled Zr specimen. Colors designate crystal orientations as indicated by the unit triangle in the bottom right-hand corner. The software used to generate this map is OIM Analysis [8].
3.8  Schematic representation of the chamber of a SEM equipped with an EBSD detector. The abbreviation BSD stands for backscattered detector [5].
3.9  Schematic representation of the Bragg's condition.
3.10  Examples of Kikuchi patterns obtained from a h.c.p. material (Oxford Instruments).
3.11  Intersection of Kossel cones with the viewing screen [9].
3.12  Diagram for calculation of bandwidth angle [10].
3.13  Simplified and schematic representation of diffraction setup [10].
3.14  Schematic representation of the triplet method for diffraction spot indexing.
3.15  Schematic representation of twinning modes observed in Zr and Mg. Twins are represented via their twinning planes, K1.
3.20  Automatic output for a Zr EBSD map. The sample was cut from a high-purity clock-rolled Zr plate and loaded in compression along one of the in-plane directions up to 5% strain [11]. Yellow borders mark the grain joints, brown borders the twin joints. Green edges represent tensile 1 relation, magenta tensile 2, red compressive 1 and blue compressive 2.
3.21  Same map as Figure 3.20, but with manual edition of 4 incorrect links. The disabled links are displayed as thin edges.
3.22  Zoom on the map of Figure 3.20 to illustrate complex grain structures recovered by our software. The dashed line is a disorientation relation that matches a known relation (compressive 1) but is identified as irrelevant to the twinning process.
3.24  Zoomed-in EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up to 10% strain. Left : detected components with their ellipses and twin-strip links in magenta ; right : reconstructed complete twin.
3.25  EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up to 10% strain. The top part shows the twinning relations identified. The right caption displays ellipses fitted to twins. Red ellipses correspond to low ellipsicity (below 70%). Low ellipsicity twins correspond here to merged orthogonal twins.
3.26  Structure of the database used to store the EBSD analysis results. Boxes are database tables, edges with numbers indicate relations and the n-arity of these relations.
4.1  (a) Instron tensile-compression machine ; (b) compression test at room temperature.
4.2  Macroscopic stress-strain curves of specimens monotonically loaded in compression along RD (a), along TD (b) and along ND (c) at different temperatures and strain rates.
4.5  (a) EBSD ND inverse pole figure micrograph of the specimen before compression along RD ; (b) part of an EBSD orientation micrograph of a specimen compressed along the rolling direction up to 2.7% strain ; (c) EBSD orientation micrograph of a specimen successively loaded in compression along the rolling direction up to 1.8% strain and the transverse direction up to 1.3% strain. Black and yellow arrows indicate the presence of double extension twins.
4.6  Scatter plots displaying the Schmid factor and Schmid factor ratio values of 291 Group 3 secondary twins (a) and 92 Group 4 secondary twins (b).
4.7  Schematic representation of the simplified elasto-static Tanaka-Mori scheme.
4.8  Evolution of the change of elastic energy normalized by the twin volume fraction with respect to (a) the applied stress, Σ_TD, and (b) the secondary twin volume fraction, f_VB = V_B/V_A.
4.9  Initial basal (0001) and prismatic (10-10) pole figures of the clock-rolled high-purity zirconium studied in this work. The 3-axis is the through-thickness direction of the plate.
4.10  Macroscopic stress-strain curves of high purity Zr samples loaded in compression along through-thickness (TT) and in-plane (IP) directions at 76K and 300K.
4.11  Examples of EBSD scans for specimens loaded in compression along the TT (a) and along one of the IP (b) directions.
4.12  Effective grain diameter (a,b) and grain area (c,d) distributions for TT03 (a,c) and IP05 (b,d) samples.
4.13  Graphical representation of twin-twin junction modes and types observed in TT03 and IP05 Zr EBSD scans.
4.13  Schematic representation of twin-twin junction modes and types observed in TT03 and IP05 Zr EBSD scans.
4.14  Modes and types of twin-twin junctions observed in EBSD scans of samples loaded along the TT direction ((a),(c),(e)) and the in-plane directions ((b),(d),(f)).
4.15  Distribution of frequencies of T1, T2 and C1 twins with respect to grain size in samples loaded along the TT (a) and the IP (b) directions.
4.16  Distribution of the fraction of twinned grains containing T1, T2 and C1 twins plotted with respect to twinned grain area for samples loaded along the TT (a) and the IP (b) directions.
4.17  Distribution of the number of T1, T2 and C1 twins per twinned grain for TT03 (a) and IP05 (b) samples and scattergraphs displaying the number of T1, T2 and C1 twins embedded in parent phases with respect to twinned grain area for TT03 (c) and IP05 (d) samples. Each cross represents one single twin, but because twin numbers are integers, many crosses overlap.
4.18  Distribution of SF values corresponding to twins activated in samples loaded along the TT direction (a) and the IP direction (b).
4.19  Distribution of variant frequencies of T1 (a), T2 (c) and C1 (e) twins in TT03 scans and of T1 (b) and T2 (d) in IP05 scans, respectively, with respect to their Schmid factor. Variant frequencies consist of the ratio of the number of twins of a given SF variant and of a given twinning mode to the total population of twins belonging to the considered twinning mode.
4.20  Distribution of average twin thicknesses as a function of grain size in samples loaded along the TT (a) and the IP (b) directions and scattergraphs displaying twin thickness values with respect to grain size in samples loaded along the TT (c) and the IP (d) directions. Each cross represents one twin.
4.21  Distribution of average twin thicknesses as a function of SF values in samples loaded along the TT (a) and the IP (b) directions.
4.22  Distribution of the twin thickness (a) and the frequency (b) of C1 twins with respect to grain area in samples loaded along the TT direction.
4.23  Distribution of the twin thickness (a) and the frequency (b) of C1 twins with respect to SF values in samples loaded along the TT direction.
A.1  Visualization modes for a single grain.

Figure 1 - (a) Comparison of the tension and compression stress-strain curves of rolled AZ31B Mg [1] and (b) comparison of the mechanical response of clock-rolled pure Zr for different temperatures and loading paths [2]. Tension and compression appear in the legend as "tens" and "comp", respectively. The abbreviations "TT" and "IP" stand for "through-thickness compression" and "in-plane compression".
Figure 2 - Example of an EBSD map obtained from a clock-rolled pure Zr sample loaded in compression along its through-thickness direction. This map was processed with the software that will be described at greater length in Chapter 3.
Figure 3 - Schematic representation of the heterogeneous elastic problem with eigenstrains due to primary and secondary twinning.
Figure 4 - Schematic representation of the elasto-plastic problem as modeled in the classical elasto-plastic self-consistent scheme (abbreviated EPSC) (a) and in the new double-inclusion elasto-plastic self-consistent scheme developed in this thesis and called DI-EPSC (b).
Figure 5 - Averaged plastic activities of the main slip and twinning deformation systems within the twin and parent phases obtained from (a) EPSC and (b) DI-EPSC.
Figure 6 - (a) Comparison of the macroscopic stress-strain curves predicted by EPSC and DI-EPSC with the experimental curve and (b) evolution of the total twin volume fraction in the polycrystal.
Figure 7 - Graphical representation of examples of twin-twin junctions observed in the Zr EBSD maps.

Sommaire
  1.1 Motivation and Objectives
  1.2 Crystallography of h.c.p. metals
  1.3 Crystallography of twinning in h.c.p. metals
  1.4 Nucleation and growth of twins
    1.4.1 Twinning dislocations and twin interfaces
    1.4.2 Mechanisms involved in nucleation and growth of twins
  1.5 Twinning in constitutive and polycrystalline modeling
  1.6 Scope of the thesis

Figure 1.1 - (a) Comparison of tension and compression stress-strain curves of rolled AZ31B Mg alloy [1] and (b) comparison of mechanical responses of clock-rolled high-purity Zr for various loading directions and temperatures [2]. Tension and compression are denoted by "tens" and "comp", respectively. Abbreviations "TT" and "IP" stand for "through-thickness" and "in-plane", respectively.
Figure 1.3 - Schematic representation of a hexagonal structure containing three primitive unit cells.
Atoms are represented by blue hard spheres.
Figure 1.4 - Schematic representation of the main crystallographic planes and directions in h.c.p. structures.
Figure 1.5 - Schematic representation of the twinning plane, K1, and its conjugate, K2, the twinning shear direction, η1, and its conjugate, η2, and the plane of shear, P [3].
Figure 1.6 - Schematic representation of possible lattice shuffles for double lattice structures when q = 4. (a) Parent structure ; (b) sheared parent ; (c) type 1 twin ; (d) type 2 twin ; (e) possible type 1 shuffle ; (f) possible type 2 shuffle ; (g) alternative type 1 shuffle ; (h) alternative type 2 shuffle [3].
Figure 2.1 - Schematic representation of the original inclusion elastic problem containing one ellipsoidal inclusion V_Ω with prescribed eigenstrain ǫ*. The dashed lines signify that the inclusion is embedded in an infinite elastic medium. The inclusion and the matrix have the same elastic modulus C_0.
(Figure 2.2). Both inclusions are embedded in an infinite matrix with reference elastic stiffness C_0 and subjected to traction and displacement boundary conditions.
Figure 2.2 - Schematic representation of the heterogeneous elastic problem containing two ellipsoidal inclusions V_b and V_a (with V_1 ⊂ V_2) with prescribed eigenstrains ǫ*_b in V_b and ǫ*_a in sub-region V_a − V_b and distinct elastic moduli C_b in V_b and C_a (in sub-region V_a − V_b). The two inclusions are embedded in an infinite elastic medium, with elastic modulus C_0, containing an overall uniform plastic strain, E_p. The second-order tensor E_d represents the imposed macroscopic strain.
Figure 2.3 - Representation of the local coordinate system (e′_1, e′_2, e′_3) associated with the {10 12} tensile twinning.
The reference coordinate system (e_1, e_2, e_3) associated with the crystal structure and the crystallographic coordinate system (a_1, a_2, a_3, c) are also shown.
Figure 2.4 - (a and c) Mean internal stresses projected on the twinning plane in the twinning shear direction in both the twin phase and the parent phase as functions of R_twin ; (b and d) influence of R_parent on the mean internal stresses of the parent projected on the twinning plane. The twin volume fraction in the parent is 0.25 for (a) and (b) and 0.05 for (c) and (d).
Figure 2.5 - Evolution of the mean internal stresses in both twin and parent phases projected on the twinning plane as a function of twin volume fraction. Lines refer to the model predictions while symbols denote measured data (Clausen et al. ; Figures 2.4 and 2.5). The ellipsoid aspect ratio for the parent, R_parent, is set to 3.
Figure 2.6 - Resolved shear stresses (projected on the twinning plane and twin shear direction) of both the parent (a) and the twin (b) phases as functions of twin volume fraction. Solid lines and dashed lines refer to the present model and to the Nemat-Nasser and Hori's double inclusion scheme for homogeneous elasticity, respectively. The ellipsoid aspect ratio for the parent, R_parent, is set to 3.
Figure 2.7 - Schematic representation of the elasto-plastic problem with twinning corresponding respectively to the uncoupled [4] (a) and coupled (present) (b) formulations. The dashed line signifies that inclusions V_t and V_g-t with tangent moduli L_t and L_g-t are embedded in an equivalent homogeneous medium with a tangent modulus L_eff.

As shown in Figure 2.9, both initially extruded and randomly textured materials are considered.
Elastic constants, expressed in the crystal reference frame with Voigt notation, are given, in GPa, by C11 = 59.75, C22 = 59.74, C33 = 61.7, C12 = 23.24, C13 = 21.7, C44 = 16.39 and C66 = 18.25. In Mg alloys, basal slip {0001}⟨2110⟩, prism slip {1010}⟨1210⟩, second-order pyramidal slip {2112}⟨2113⟩ and tensile twinning {1012}⟨1011⟩ are potential active systems. As shown previously, the present work uses an extended Voce law to describe hardening evolution.

Figure 2.10 - (a) Comparison of macroscopic stress-strain curves from EPSC and DI-EPSC with experimental diffraction data and (b) evolution of the total twin volume fraction in the polycrystal.
Figure 2.11 - Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC.
Figure 2.12 - RSS projected on the twinning plane in the twinning direction within the parent and twin phases. (a) compares the DI-EPSC and EPSC results ; (b) compares internal stresses for different geometrical configurations for initially unrelaxed twins. Axis length ratios for ellipsoidal shapes are a1/a2 = 1 and a1/a3 = 3.
Figure 2.13 - Spread of total shear strains within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an extruded alloy and from (c) EPSC and (d) DI-EPSC for an initially randomly textured alloy. Each cross represents the total shear strain for one single grain.
Figure 2.14 - Total shear strain distributions in parent and twin domains at 4.5% deformation obtained from (a) EPSC and (b) DI-EPSC in the case of the extruded AZ31 alloy and from (c) EPSC and (d) DI-EPSC in the case of the initially randomly textured AZ31 alloy.
Figure 2.15 - Averaged system activities within the parent and twin phases from (a) EPSC and (b) DI-EPSC for an initially randomly textured material.
Figure 2.16 - RSS projected on the twinning plane in the twinning direction within the parent and twin phases, obtained from the EPSC and DI-EPSC models for both initially relaxed and unrelaxed twins.
Figure 2.17 - Averaged system activities within the parent and twin phases from DI-EPSC when twins are assumed to be initially totally relaxed.

Sommaire
  3.1 Brief description of Scanning Electron Microscopes
  3.2 Historical perspectives of the Electron Backscatter Diffraction Technique
  3.3 Basic concepts of electron diffraction and diffraction pattern analysis
  3.4 A graph theory based automated twin recognition technique for Electron Back-Scatter Diffraction analysis
    3.4.1 Euler angles, quaternion rotation representations and their application to EBSD data

Figure 3.1 - Philips XL 30 F Orientation Imaging Microscopy System at Los Alamos National Laboratory, MST-6 (LANL web site).
Figure 3.2 - Chamber of a Scanning Electron Microscope.
Figure 3.3 - Schematic representation of signals resulting from the interaction between primary electrons and atoms at or near the sample surface ; R denotes the depth of the interaction volume [5].
Figure 3.4 - Schematic representation of energy levels of electrons resulting from the interaction between primary electrons and atoms at or near the sample surface [5].
Figure 3.7 - Example of EBSD orientation map showing the undeformed microstructure of a high purity clock-rolled Zr specimen. Colors designate crystal orientations as indicated by the unit triangle in the bottom right-hand corner. The software used to generate this map is OIM Analysis [8].
Figure 3.8 - Schematic representation of the chamber of a SEM equipped with an EBSD detector. The abbreviation BSD stands for backscattered detector [5].
Figure 3.9 - Schematic representation of the Bragg's condition.
Figure 3.10 - Examples of Kikuchi patterns obtained from a h.c.p. material (Oxford Instruments).
Figure 3.11 - Intersection of Kossel cones with the viewing screen [9].
Figure 3.12 - Diagram for calculation of bandwidth angle [10].
Figure 3.13 - Simplified and schematic representation of diffraction setup [10].
Figure 3.14 - Schematic representation of the triplet method for diffraction spot indexing.

With v*_2 = OP_2 and v*_3 = OP_3, the cosine directions g*_ij associated with vectors v*_i and v*_j are expressed as

Figure 3.15 - Schematic representation of twinning modes observed in Zr and Mg. Twins are represented via their twinning planes, K1.

α(k) = (k−1)π/3 − 5π/6 and β = arctan(γ/√3) for Tensile 1,
α(k) = (k−3)π/3 + π/6 and β = arctan(2γ) for Tensile 2,
α(k) = kπ/3 and β = π + arctan(γ) for Compressive 1,
α(k) = (k−1)π/3 and β = π + arctan(2γ/√3) for Compressive 2.

Figure 3.16 - Example of neighboring relationships encountered in EBSD data. On the left, when measurement points form a square grid, the pixel represented by the black disc has 4 neighbors represented by the white circles. On the right, when measurement points form an hexagonal grid, each measurement has 6 neighbors.
Figure 3.17 - Graph grouping measurement points of consistent orientation in connected parts. The colored circles correspond to EBSD measurement points, with the Euler angles mapped on the RGB cube, and white lines represent edges, whose thicknesses are proportional to the weight, w. Consequently, twins appear clearly as areas delineated by a black border where the edge weight becomes negligible.
Figure 3.18 - Graph grouping measurement points of consistent orientation in connected parts with added twinning mode. Green and red edges linking border points, displayed in brown, indicate tensile and compressive twinning relations, respectively.
Figure 3.20 - Automatic output for a Zr EBSD map.
The sample was cut from a high-purity clock-rolled Zr plate and loaded in compression along one of the in-plane directions up to 5% strain [11]. Yellow borders mark the grain joints, brown borders the twin joints. Green edges represent tensile 1 relation, magenta tensile 2, red compressive 1 and blue compressive 2.
Figure 3.21 - Same map as Figure 3.20, but with manual edition of 4 incorrect links. The disabled links are displayed as thin edges.
Figure 3.22 - Zoom on the map of Figure 3.20 to illustrate complex grain structures recovered by our software. The dashed line is a disorientation relation that matches a known relation (compressive 1) but is identified as irrelevant to the twinning process.
Figure 3.23 - Example of secondary and ternary twinning observed in an EBSD map of a high-purity clock-rolled Zr sample loaded in compression along the through-thickness direction up to 3% strain. This is shown using three different visualization modes (see appendix A) : raw mode (left), twinning editor mode (middle) and twinning statistics mode (right). The parent grain is surrounded in yellow, first order twins appear in cyan, secondary twins in blue and ternary or higher order twins in red.
Figure 3.24 - Zoomed-in EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up to 10% strain. Left : detected components with their ellipses and twin-strip links in magenta ; right : reconstructed complete twin.
Figure 3.25 - EBSD map of a high-purity clock-rolled Zr sample loaded in compression along one of the in-plane directions up to 10% strain. The top part shows the twinning relations identified. The right caption displays ellipses fitted to twins. Red ellipses correspond to low ellipsicity (below 70%). Low ellipsicity twins correspond here to merged orthogonal twins.
Figure 3.26 - Structure of the database used to store the EBSD analysis results. Boxes are database tables, edges with numbers indicate relations and the n-arity of these relations.

Contents (Chapter 4)
4.1 Preliminary notations and considerations
4.2 Nucleation of "unlikely twins": low Schmid factor twins and double twinning in AZ31 Mg alloy
4.2.1 Experimental set-up and testing conditions
4.2.2 Mechanical behavior and microstructure evolutions
4.2.3 Low Schmid factor {10-12} tensile twins
4.2.4 Successive {10-12}-{10-12} double extension twins
4.3 Probing for the latent effect of twin-twin junctions: application to the case of high purity Zr
4.3.1 Experimental set-up and testing conditions
4.3.2 Twin-twin junctions statistics
4.4 Conclusion

Figure 4.1 - (a) Instron tensile-compression machine; (b) compression test at room temperature.

Figures 4.2 and 4.3 show the macroscopic stress-strain curves corresponding to specimens loaded in compression along RD, TD, ND and in compression along RD, followed by a second compression along TD. For compressions along RD and TD, the yield stress was approximately equal to 70 MPa. The inflection observed in the plastic region of the curves is typical of the activation of {10-12} tensile twins.
As revealed by Proust et al. [START_REF] Proust | Modeling the effect of twinning and detwinning during strain-path changes of magnesium alloy {AZ31}[END_REF], the two deformation modes active in the matrix are basal slip and tensile twinning, with twinning increasing its contribution until about 3% strain while the basal slip activity decreases. At 5% strain, the total twinned volume is expected to occupy about 70% of the material volume. Figure 4.3 reveals that after a first compression along RD the yield stress associated with compression along TD is no longer 70 MPa but approximately 115 MPa. Such a change can be explained by the presence of twin boundaries formed during the first compression, which act as barriers.

Figure 4.2 - Macroscopic stress-strain curves of specimens monotonically loaded in compression along RD (a), along TD (b) and along ND (c) at different temperatures and strain rates (ND3: dε/dt = 0.001/s, T = 348 K; ND5: dε/dt = 0.1/s, T = 298 K; ND6: dε/dt = 0.1/s, T = 298 K).
Figure 4.3 - Macroscopic stress-strain curves of specimens loaded in compression along the rolling direction followed by a second compression along the transverse direction.
Figure 4.4 - XRD {0001}, {2-1-10} and {10-10} pole figures of specimens before compression (a) and after compression along the transverse direction up to 4% strain (b), the rolling direction up to 1.8% strain (c), and along the rolling direction up to 1.8% strain and then along the transverse direction up to 1.3% strain (d).
Figure 4.5 - (a) EBSD ND inverse pole figure micrograph of the specimen before compression along RD; (b) part of an EBSD orientation micrograph of a specimen compressed along the rolling direction up to 2.7% strain; (c) EBSD orientation micrograph of a specimen successively loaded in compression along the rolling direction up to 1.8% strain and the transverse direction up to 1.3% strain. Black and yellow arrows indicate the presence of double extension twins.

Table 4.5 - Groups of possible double extension twins

Group   Axis-minimum angle pair   Number of variants
I       0°                        6
II      <1-210>, 7.4°             6
III     <014 141>, 60°            12
IV      <17-80>, 60.4°            12

All identified secondary twins have a positive Schmid factor, and 78.1% of them have a Schmid factor greater than 0.3. Moreover, 95.8% of these secondary twins have a Schmid factor ratio, introduced in the previous sub-section, greater than 0.6 (Figure 4.6). As a result, activated tensile secondary twins have a relatively high Schmid factor compared to the Schmid factor of the other 5 potentially active tensile secondary twins.

Figure 4.6 - Scatter plots displaying the Schmid factor and Schmid factor ratio values of 291 Group 3 secondary twins (a) and 92 Group 4 secondary twins (b).
Figure 4.7 - Schematic representation of the simplified elasto-static Tanaka-Mori scheme.
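The Schmid factors quoted above follow the standard definition m = cos(φ)·cos(λ), i.e. the product of the direction cosines between the load axis and, respectively, the twin plane normal and the twin shear direction. The sketch below is a generic illustration; the example vectors are arbitrary and not taken from the study.

```python
import math

def schmid_factor(load, normal, shear_dir):
    """m = cos(load, plane normal) * cos(load, shear direction),
    computed from arbitrary (non-unit) input vectors."""
    def unit(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    l, n, d = unit(load), unit(normal), unit(shear_dir)
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return dot(l, n) * dot(l, d)

# Load at 45 deg to both the plane normal and the shear direction
# gives the maximum Schmid factor, m = 0.5.
m = schmid_factor([0, 0, 1], [0, 1, 1], [0, -1, 1])
```

The Schmid factor ratio used in the text then compares the value of the activated variant to the largest value among the six variants of the mode.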
Second-order tensors E^p, ε^p1 and ε^p2 denote the macroscopic plastic strain imposed to the medium and the plastic strains induced by primary and secondary twinning, respectively. The infinite matrix and the primary and secondary tensile twins are represented by volumes V - V_A, V_A - V_B and V_B, respectively. Second-order tensors ε^p_a and ε^p_b correspond to eigenstrains, modeling twinning shears induced by primary and secondary twinning, prescribed in inclusions V_A - V_B and V_B, respectively. The homogeneous elastic tensor is denoted by the fourth-order tensor C.

Figure 4.8 - Evolution of the change of elastic energy normalized by the twin volume fraction with respect to (a) the applied stress, Σ_TD, and (b) the secondary twin volume fraction, f_VB = V_B/V_A.
Figure 4.9 - Initial basal (0001) and prismatic (10-10) pole figures of the clock-rolled high-purity zirconium studied in this work. The 3-axis is the through-thickness direction of the plate.
Figure 4.10 - Macroscopic stress-strain curves of high purity Zr samples loaded in compression along through-thickness (TT) and in-plane (IP) directions at 76 K and 300 K.
Figure 4.12 - Effective grain diameter (a,b) and grain area (c,d) distributions for TT03 (a,c) and IP05 (b,d) samples.

Figure 4.14 shows twin-twin junction types for each mode whose frequency is greater than 4% in TT03 (Figures 4.14a, 4.14c, 4.14e) and IP05 (Figures 4.14b, 4.14d, 4.14f) scans. Notations are all detailed in the Appendix. Therefore, Figure 4.14c shows that, in the case of through-thickness compression, 80% of C1-C1 twin-twin junctions correspond to the third type of junctions.
Figure 4.14 also reveals that junctions between T1-T1 twins with parallel zone axes, studied by Yu et al. [START_REF] Yu | Twin-twin interactions in magnesium[END_REF][START_REF] Yu | Co-zone {10 -12} twin interaction in magnesium single crystal[END_REF] in Mg single crystals, do not correspond to the predominant type of twin-twin junctions here.
These results are purely qualitative, but they can be used as guidelines.

Figure 4.13 - Graphical representation of twin-twin junction modes and types observed in TT03 and IP05 Zr EBSD scans.

Figures 4.12, 4.15 and 4.18 also indicate that certain bars are only representative of a few twins or a few grains. But, since most of the statistics presented in this section rely on average values, the authors decided not to plot bars corresponding to averages performed over less than 3 twins and 3 twinned grains in histograms.

Figure 4.15 - Distribution of frequencies of T1, T2 and C1 twins with respect to grain size in samples loaded along the TT (a) and the IP (b) directions.

Figure 4.17 shows the distributions of the number of T1, T2 and C1 twins per twinned grain with respect to grain size. While Figures 4.17a and 4.17b are histograms displaying averaged values, i.e. averaged numbers of twins, Figures 4.17c and 4.17d are scattergraphs displaying all values. Figures 4.17a and 4.17b clearly reveal that the average number of twins belonging to the predominant twinning mode increases with grain size. The phenomenon is more pronounced for C1 twins in samples loaded in compression along the TT direction. However, the influence of grain size remains significant in the case of IP05 specimens: the average number of T1 twins contained by grains whose areas are in the range [4 µm², 136 µm²] and [534 µm², 664 µm²] is equal to 1.5 and 4.9, respectively. This trend seems to change for grains larger than 928 µm². However, Figure 4.15b indicates that beyond 796 µm², averages are performed over fewer than 5 grains.
Figure 4.16 - Distribution of the fraction of twinned grains containing T1, T2 and C1 twins plotted with respect to twinned grain area for samples loaded along the TT (a) and the IP (b) directions.

The interest of Figures 4.17c and 4.17d lies in the observation that small grains can contain a very large number of twins. It also shows that the number of twins per twinned grain can vary significantly from one grain to another. However, EBSD scans are images of 2D sections. As a result, it is possible that small grains are actually bigger than they seem to be. This introduces a bias in grain size effect statistics.

Figure 4.17 - Distribution of the number of T1, T2 and C1 twins per twinned grain for TT03 (a) and IP05 (b) samples and scattergraphs displaying the number of T1, T2 and C1 twins embedded in parent phases with respect to twinned grain area for TT03 (c) and IP05 (d) samples. Each cross represents one single twin. But because twin numbers are integers, many crosses overlap.
Figure 4.18 - Distribution of SF values corresponding to twins activated in samples loaded along the TT direction (a) and the IP direction (b).

Figures 4.20c and 4.20d consist of scattergraphs that display all true thicknesses of twins observed in samples loaded along the TT and IP directions. The spread is significant and does not follow any pattern. Fluctuations may be associated with neighbor effects on twin growth.

Figure 4.19 - Distribution of variant frequencies of T1 (a), T2 (c) and C1 (e) twins in TT03 scans and of T1 (b) and T2 (d) in IP05 scans, respectively, with respect to their Schmid factor. Variant frequencies consist of the ratio of the number of twins of a given SF variant and of a given twinning mode to the total population of twins belonging to the considered twinning mode.
Figure 4.20 - Distribution of average twin thicknesses as a function of grain size in samples loaded along the TT (a) and the IP (b) directions and scattergraphs displaying twin thickness values with respect to grain size in samples loaded along the TT (c) and the IP (d) directions. Each cross represents one twin.
Figure 4.21 - Distribution of average twin thicknesses as a function of SF values in samples loaded along the TT (a) and the IP (b) directions.
Figure 4.22 - Distribution of the twin thickness (a) and the frequency (b) of C1 twins with respect to grain area in samples loaded along the TT direction.
Figure 4.23 - Distribution of the twin thickness (a) and the frequency (b) of C1 twins with respect to SF values in samples loaded along the TT direction.

Figure A.1 - Visualization modes for a single grain. Figure A.1 summarizes all available modes.

Appendix B. Stress-strain curve correction method and mechanical testing parameters

Consequently, parameters a and b can be expressed as follows:

b = ε_i - (σ_i/E_th)(1 + a·E_th)    (B.4)

with i = {1, 2}.
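Assuming Eq. (B.4) reads b = ε_i - (σ_i/E_th)(1 + a·E_th) for two reference points (ε_i, σ_i), i = 1, 2, on the theoretical elastic line of modulus E_th, the pair (a, b) follows from a 2x2 linear solve. The sketch below is a hypothetical illustration of that compliance-style fit; the variable names, units and sample numbers are invented, not taken from Appendix B.

```python
def compliance_correction(p1, p2, E_th):
    """Solve for (a, b) such that eps = sigma/E_th + a*sigma + b,
    i.e. b = eps_i - (sigma_i/E_th)*(1 + a*E_th), holds at both
    reference points p_i = (eps_i, sigma_i)."""
    (e1, s1), (e2, s2) = p1, p2
    # Subtracting the two equations eliminates b and isolates a.
    a = (e1 - e2) / (s1 - s2) - 1.0 / E_th
    b = e1 - s1 / E_th - a * s1
    return a, b

# Points generated with E_th = 1e5 (illustrative units), a = 1e-5, b = 2e-4:
a, b = compliance_correction((0.00220, 100.0), (0.00420, 200.0), 100000.0)
```

Once (a, b) are known, each measured stress-strain pair can be corrected before computing the statistics reported in Tables B.1 and B.2.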
Table 1 - Twinning modes in Zr (columns: twinning mode, twinning plane K1, twinning direction η1, disorientation angle).

Table 1.1 - List of observed twinning modes in h.c.p. metals

K1        K2        η1         η2        s                 q   Observed in
{10-12}   {-1012}   <10-1-1>   <-1011>   (γ²-3)/(√3γ)      4   Mg, Ti, Co, Zr, Zn, Be
{10-11}   {10-13}   <10-1-2>   <30-32>   (4γ²-9)/(4√3γ)    8   Mg, Ti
{11-21}   {0001}    <-1-126>   <11-20>   1/γ               2   Re, Ti, Zr, Co
{10-13}   {10-11}   <30-3-2>   <10-12>   (4γ²-9)/(4√3γ)    8   Mg
{11-22}   {11-24}   <11-2-3>   <22-43>   2(γ²-2)/(3γ)      2   Re, Ti, Zr, Co

Lattice sites are restored to correct positions, i.e. twin positions, if q is even. Regarding twinning of type 2, another parameter, q*, is introduced.

Table 2.1 - CRSS and hardening parameters used in the Voce hardening rule (initial twin fraction: 0.03)

Deformation system   τ0 (MPa)   τ1 (MPa)   θ0 (MPa)   θ1 (MPa)
Basal                12         20         240        0
Prism                60         20         240        0
Pyramidal            100        117        2000       0
Tensile twin         60         0          0          0

Table 4.1 - Chemical composition limits of AZ31B Mg alloy in wt%

Al        Zn        Mn        Si         Cu
2.5-3.5   0.7-1.3   0.2 min   0.05 max   0.05 max

Ca         Fe          Ni          Others     Mg
0.04 max   0.005 max   0.005 max   0.30 max   balance

Table 4.2 - Description of loading conditions

Sample label   Loading direction   Strain rate (/s)   Temperature (°C)
RD1            RD                  0.001              25
RD2            RD                  0.001              25
RD3            RD                  0.001              25
TD2            TD                  0.001              25
ND2            ND                  0.001              75
ND3            ND                  0.001              75
ND5            ND                  0.1                25
ND6            ND                  0.1                25
RD1TD1         RD then TD          0.001              25
RD2TD2         RD then TD          0.001              25
RD3TD3         RD then TD          0.001              25

Table 4.3 - Classification of twins with respect to the sign of the normal components of the twin distortion tensor. The number of twins corresponding to each variant type and their occurrence frequency with respect to the total twin population are also indicated.

Variant   e_RD   e_TD   e_ND   Observed twins (Number, %)   Observed low SF twins (Number, %)
1         -      +      +      835,  40.8                   16, 1.9
2         -      -      +      920,  45.0                   74, 8.0
3         -      +      -      152,  7.4                    8,  5.3
4         +      -      -      13,   0.6                    12, 92.3
5         +      -      +      42,   2.1                    21, 51.0
6         +      +      -      84,   4.1                    9,  10.7
Table 4.4 - Main characteristics of grains, primary and secondary {10-12} tensile twins observed on EBSD maps of AZ31 Mg alloy specimens loaded in compression along RD and then along TD

Phase type        Number   Area (x10³ µm²)   Area fraction (%)   Average diameter (µm)
Grains            4 481    823               100                 16.34
Primary twins     11 052   230               27.94               5.24
Secondary twins   585      3.47              0.42                1.37

Table 4.6 - Twinning modes in Zr. {10-11} (C2) twins were not observed

Abbreviation   Twinning plane, K1   Twinning direction, η1   Misorientation (deg)
T1             {10-12}              <-1011>                  85.2
T2             {11-21}              <-1-126>                 34.9
C1             {11-22}              <11-2-3>                 64.2
C2             {10-11}              <10-1-2>                 57.1

Table 4.8 - Twin-twin junction frequencies for samples loaded in compression along the TT and IP directions

Type number   Junction type   TT03 number   TT03 freq. (%)   IP05 number   IP05 freq. (%)
1             T1-T1           19            2.3              40            41.7
2             T1-T2           49            5.9              50            52.1
3             T1-C1           5             0.6              0             0.0
4             T2-T2           12            1.4              6             6.2
5             T2-C1           37            4.5              0             0.0
6             C1-C1           709           85.5             0             0.0

Table 4.9 [START_REF] Fultz | Transmission Electron Microscopy and Diffractometry of Materials[END_REF] - Frequencies of twins contained in single twinned grains for TT03 and IP05 samples

Twin category               TT03 number   TT03 freq. (%)   IP05 number   IP05 freq. (%)
All single twinned grains   176           -                73            -
T1                          55            31.4             62            84.9
T2                          18            10.3             8             11.0
C1                          102           58.3             3             4.1

Table 4.10 - Twins with negative SF and their relative frequencies and twinned areas in TT03 and IP05 samples

Twin mode   TT03 number   Rel. freq. (%)   Rel. twinned area (%)   IP05 number   Rel. freq. (%)   Rel. twinned area (%)
All         211           10.7             3.9                     22            4.3              4.9
T1          180           55.2             43.7                    14            3.3              0.6
T2          20            11.2             8.4                     7             7.7              19.5
C1          8             0.5              0.2                     1             33.3             68.2

-- Request 3:
from FragmentEdges as E
inner join Fragment as C1 on C1.id = E.i
inner join Fragment as C2 on C2.id = E.j
inner join Twins as T1 on C1.id = T1.id
inner join Twins as T2 on C2.id = T2.id
where E.i > E.j
  and not C1.is_parent and not C2.is_parent
  and C1.is_valid and C2.is_valid
  and C1.grain = C2.grain
  and C1.twinstrip <= 0 and C2.twinstrip <= 0
order by C1.grain;

-- Request 4:
select t.grain, d.i, d.j, d.dist, d.xi, d.yi, d.xj, d.yj
from twins as t, IngrainDistances as d
where (t.id = d.i or t.id = d.j)
group by d.dist * d.xi * d.xj
order by t.grain;

-- Request 5:

Table B.1 lists the measured Young's moduli and correction parameters for compression tests whose characteristics are described in Table B.2. Abbreviations RD, TD and ND stand for rolling direction, transverse direction and normal direction, respectively.

Table B.1 - Young's moduli measured for both the loading and unloading regimes before correction, and correction parameters

Sample label   E loading (GPa)   E unloading (GPa)   Corr. param. a (MPa⁻¹)   Corr. param. b
RD1            -                 -                   7.8511e-05               5.2016e-04
RD2            -                 -                   3.0803e-05               7.7801e-04
RD3            -                 -                   5.2416e-05               6.6649e-04
TD2            15.6              -                   4.5699e-05               7.8436e-04
ND2            21.4              73.2                2.4145e-05               -6.34e-04
ND3            25.9              69.7                1.6109e-05               -7.043e-04
ND5            18.1              15.0                3.3402e-05               1.02751e-03
ND6            19.6              14.9                2.9008e-05               1.03421e-03
RD1TD1         19.7              -                   1.9934e-05               1.39418e-03
               -                 14.1                4.88218e-05              1.20736e-03
RD2TD2         17.9              -                   3.4792e-05               7.7543e-03
               -                 16.3                3.9626e-05               9.511e-04
RD3TD3         17.4              -                   3.669e-05                7.9274e-03
               -                 13.7                5.11435e-05              1.0106e-03

Table B.2 - Description of loading conditions (identical to Table 4.2).

In order to process EBSD maps and automatically extract twinning statistics, a new EBSD analysis software, relying on graph theory, group structures and quaternions, was developed. Quaternions make it straightforward to compute disorientations between pixels and between groups of pixels of the same orientation. The use of graph theory and group structures makes it possible to identify grains, recognize twin phases and extract statistics.

The software differs from existing commercial versions by combining visualization and automated analysis of the micrograph. The integrated graphical interface gives direct and immediate access to microstructure and twinning data; it also allows the user to correct or complete, if necessary, the analysis performed by the software. All data, both raw and processed, are saved in a relational database. It is therefore possible to access all experimental parameters, microstructural data and twinning statistics through simple SQL queries. The database also makes it possible to systematically quantify the influence of a very large number of parameters. The construction and integration of such a database within the software itself are, in addition to the interactive graphical interface, features that other current analysis tools lack.

Although initially developed to analyze Zr and Mg micrographs, the capabilities of the software are not limited to these two hexagonal metals.

While we can only detect fully formed twins with EBSD, their presence implies previous nucleation of such variant. To study twin nucleation, all grains (i.e. twinned and untwinned) that are not on the edge of the map are considered. Concerning twin growth, statistics are based on twinned grains solely. Figure 4.12 shows grain diameter and area distributions of specimens loaded along the TT and IP directions.
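The quaternion-based disorientation computation mentioned in the summary above can be sketched as follows: the misorientation carrying orientation q1 onto q2 is q1⁻¹·q2 (for unit quaternions the inverse is the conjugate), and its rotation angle is 2·arccos(|w|). This minimal version ignores the crystal symmetry operators; the real software must minimize the angle over the hexagonal symmetry group.

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def misorientation_deg(q1, q2):
    """Rotation angle (degrees) between two unit-quaternion orientations."""
    w = quat_mul(conj(q1), q2)[0]
    return 2.0 * math.degrees(math.acos(min(1.0, abs(w))))

def axis_angle_quat(axis, deg):
    """Unit quaternion for a rotation of `deg` degrees about `axis`."""
    h = math.radians(deg) / 2.0
    s = math.sin(h) / math.sqrt(sum(a * a for a in axis))
    return (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)

qa = axis_angle_quat((0, 0, 1), 10.0)
qb = axis_angle_quat((0, 0, 1), 95.2)
angle = misorientation_deg(qa, qb)   # 85.2 deg, the T1 {10-12} twin value
```

Grouping pixels whose pairwise disorientation falls below a tolerance yields the grains, and checking border disorientations against the characteristic angles of Table 4.6 identifies the twinning relations.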
Because the notion of "grain" is questionable for cases with very small numbers of measurement points, grains smaller than 4 µm² (i.e. 23 measurement points) are disregarded. In the following, data are represented as histograms. Histograms are consistent statistical tools capable of estimating density functions, but they do not directly address the issues of bias and variance. However, it is still possible to minimize the error introduced by the histogram representation. In the present article, bin sizes are estimated from Scott's formula: w = 3.49 σ n^(-1/3) [START_REF] Scott | On optimal and data-based histograms[END_REF], where σ is an estimate of the standard deviation and n the number of elements considered, i.e. the total number of grains. The term n^(-1/3) results from the minimization of the integrated mean squared error function. The main advantage of this expression lies in its insensitivity to the nature of the estimated density function (Gaussian, log normal, etc.). Mean and standard deviation have been computed for area, diameter and SF distributions in TT03 and IP05 specimens. As a result, optimal bin widths for area, diameter and SF in TT03 samples are 63 µm², 2.5 µm and 0.06, respectively, and 112 µm², 4.1 µm and 0.06, respectively, in the case of IP05 maps. However, to be able to compare results obtained from both TT03 and IP05 maps, the same bin sizes have to be applied. In addition, we enforced the constraints that all distributions expressed with respect to diameter and area have the same number of subdomains and that every subdomain contains at least one element. Consequently, the diameter and area bin sizes used in the next histograms are 5.47 µm and 132 µm², respectively. To avoid empty columns, one grain larger than 1456 µm² observed in a TT03 scan was disregarded.
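Scott's choice above can be reproduced directly; the sketch below applies w = 3.49 σ n^(-1/3) to a small sample and derives the resulting number of bins. The data values are illustrative, not the thesis measurements.

```python
import math

def scott_bin_width(values):
    """Scott's rule: w = 3.49 * sigma * n**(-1/3), with sigma the
    standard deviation of the sample (computed here with 1/n)."""
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return 3.49 * sigma * n ** (-1.0 / 3.0)

data = [4.0, 6.0, 8.0, 10.0]   # e.g. grain diameters in micrometers
w = scott_bin_width(data)
n_bins = math.ceil((max(data) - min(data)) / w)
```

Enforcing a common bin size across TT03 and IP05 maps, as done in the text, then simply amounts to taking the larger of the two widths before binning both datasets.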
Abstract

The main objective of this thesis is to investigate and quantify the influence of parent-twin and twin-twin interactions on the mechanical response of hexagonal close-packed metals. To study parent-twin interactions, a mean-field continuum mechanics approach has been developed based on a new twinning topology in which twins are embedded in twinned grains. A first model generalizing the Tanaka-Mori scheme to heterogeneous elastic media is applied to first and second generation twinning in magnesium. In the case of first generation twinning, the model is capable of reproducing the trends in the development of backstresses within the twin domain as observed experimentally. Applying the methodology to the case of second-generation twinning allows the identification, in exact agreement with experimental observations, of the most likely second-generation twin variants to grow in a primary twin domain. Because the elastic behavior assumption causes internal stress level magnitudes to be excessively high, the first model is extended to the case of elasto-plasticity. Using a self-consistent approximation, the model, referred to as the double inclusion elasto-plastic self-consistent (DI-EPSC) scheme, is applied to Mg alloy polycrystals. The comparison of results obtained from the DI-EPSC and EPSC schemes reveals that deformation system activities and plastic strain distributions within twins drastically depend on the interaction with parent domains. The influence of twin-twin interactions on nucleation and growth of twins is being statistically studied from zirconium and magnesium electron back-scattered diffraction scans. A new twin recognition software relying on graph theory analysis has been developed to extract all microstructural and crystallographic data. It is capable of identifying all twinning modes and all twin-twin interaction types occurring in hexagonal close-packed materials.
The first results obtained from high purity Zr electron back-scattered diffraction maps reveal that twin-twin interactions hinder subsequent twin nucleation. They also show that mechanisms involved in twin growth may differ significantly for each twinning mode. A second study performed on AZ31 Mg presents statistics about low Schmid factor {10-12} tensile twins and about {10-12}-{10-12} sequential double twins, coupled with a simplified version of the Tanaka-Mori scheme generalized to heterogeneous elasticity with plastic incompatibilities.
[Source: HAL hal-01695563, 2018. https://univ-rennes.hal.science/hal-01695563/file/Hierlinger%20et%20al_A%20Panchromatic%2C%20Near%20Infrared%20Ir%28III%29%20Emitter%20Bearing%20a%20Tripodal%20C%5EN%5EC%20ligand.pdf]
Claus Hierlinger
Heather V. Flint
David B. Cordes
Alexandra M. Z. Slawin
Elizabeth A. Gibson (email: [email protected])
Denis Jacquemin (email: [email protected])
Véronique Guerchais (email: [email protected])
Eli Zysman-Colman (email: [email protected])

A Panchromatic, Near Infrared Ir(III) Emitter Bearing a Tripodal C^N^C Ligand as a Dye for Dye-Sensitized Solar Cells

The synthesis of a new complex of the form [Ir(C^N^C)(N^N)Cl] [where C^N^C = 2-(bis(4-(tert-butyl)phenyl)methyl)pyridinato (dtBubnpy, L1) and N^N is diethyl [2,2'-bipyridine]-4,4'-dicarboxylate (deeb)] is reported. The crystal structure reveals an unusual tripodal tridentate C^N^C ligand forming three six-membered rings around the iridium center. The photophysical and electrochemical properties suggest the use of this complex as a dye in dye-sensitized solar cells. Time-Dependent Density Functional Theory (TD-DFT) calculations have been used to reveal the nature of the excited states.

Introduction

Dye-sensitized solar cells (DSSCs) represent a promising solar cell technology. The majority of champion DSSCs, those showing power conversion efficiencies (PCE) greater than 10%, are based on ruthenium(II) complexes. Iridium(III) complexes, dominant as emitters in electroluminescent devices, [START_REF] Henwood | A Comprehensive Review of Luminescent Iridium Complexes Used in Light-Emitting Electrochemical Cells (LEECs)[END_REF][START_REF] Longhi | Iridium(III) Complexes for OLED Application[END_REF] have to date fared poorly as dyes in DSSCs.
[START_REF] Mayo | Cyclometalated iridium(iii)-sensitized titanium dioxide solar cells[END_REF][START_REF] Yuan | Impact of Ligand Modification on Hydrogen Photogeneration and Light-Harvesting Applications Using Cyclometalated Iridium Complexes[END_REF][START_REF] Dragonetti | Simple novel cyclometallated iridium complexes for potential application in dye-sensitized solar cells[END_REF][START_REF] Ning | Novel iridium complex with carboxyl pyridyl ligand for dye-sensitized solar cells: High fluorescence intensity, high electron injection efficiency?[END_REF][START_REF] Baranoff | Cyclometallated iridium complexes for conversion of light into electricity and electricity into light[END_REF][START_REF] Baranoff | Cyclometallated Iridium Complexes as Sensitizers for Dye-Sensitized Solar Cells[END_REF][START_REF] Sinopoli | New cyclometalated iridium(III) dye chromophore complexes for n-type dye-sensitised solar cells[END_REF][START_REF] Sinopoli | Hybrid Cyclometalated Iridium Coumarin Complex as a Sensitiser of Both n-and p-Type DSSCs[END_REF][START_REF] Wang | Iridium (III) complexes with 5,5-dimethyl-3-(pyridin-2-yl)cyclohex-2-enone ligands as sensitizer for dye-sensitized solar cells[END_REF][START_REF] Shinpuku | Synthesis and characterization of novel cyclometalated iridium(III) complexes for nanocrystalline TiO2-based dye-sensitized solar cells[END_REF][START_REF] Gennari | Long-Lived Charge Separated State in NiO-Based p-Type Dye-Sensitized Solar Cells with Simple Cyclometalated Iridium Complexes[END_REF] This is mainly because most iridium(III) complexes are not panchromatic, having absorption spectra that tail off by 550 nm. This induces low short circuit currents in the DSSC and as a consequence poor PCE; typically less than 4%. Indeed, there are very few examples of iridium(III) complexes with significant absorption bands going up to the red or NIR parts of the visible spectrum. 
[START_REF] Henwood | Unprecedented Strong Panchromic Absorption from Proton-Switchable Iridium(III) Azoimidazolate Complexes[END_REF][START_REF] Hasan | Panchromic Cationic Iridium(III) Complexes[END_REF][START_REF] Tamayo | Cationic Bis-cyclometalated Iridium(III) Diimine Complexes and Their Use in Efficient Blue, Green, and Red Electroluminescent Devices[END_REF][START_REF] Zhao | Series of New Cationic Iridium(III) Complexes with Tunable Emission Wavelength and Excited State Properties:  Structures, Theoretical Calculations, and Photophysical and Electrochemical Properties[END_REF][START_REF] Medina-Castillo | Engineering of efficient phosphorescent iridium cationic complex for developing oxygen-sensitive polymeric and nanostructured films[END_REF][START_REF] Aubert | Linear and Nonlinear Optical Properties of Cationic Bipyridyl Iridium(III) Complexes: Tunable and Photoswitchable?[END_REF][START_REF] Kammer | 1,12-Diazaperylene and 2,11-dialkylated-1,12-diazaperylene iridium(iii) complexes [Ir(C^N)2(N^N)]PF6: new supramolecular assemblies[END_REF][START_REF] Shinpuku | Synthesis and characterization of novel cyclometalated iridium(III) complexes for nanocrystalline TiO2-based dye-sensitized solar cells[END_REF] We recently reported the development of tripodal C^N^C ligands, 2-benzhydrylpyridine and its derivatives, which can coordinate to iridium, forming three six-membered chelate rings through a double C-H bond activation. [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] When combined with a bidentate diimine ligand such as 4,4'-ditertbutyl-2,2'-bipyridine (dtBubpy), a family of orange-to-red emitting neutral [Ir(C^N^C)(dtBubpy)Cl] complexes was formed with absorption bands tailing off at 600 nm. 
Herein, we report an analogous complex showing panchromatic absorption, employing the electron-poor ancillary ligand diethyl [2,2'-bipyridine]-4,4'-dicarboxylate (deeb), and study its use as a DSSC dye.

Results and Discussion

Synthesis

Scheme 1: Scheme for the one-pot synthesis of complex 1.

Compound L1 [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] and deeb [START_REF] He | A new bipyridyl cobalt complex for reductive dechlorination of pesticides[END_REF] were prepared by literature methods. Complex 1 was obtained as a black solid in 52% yield using a two-step, one-pot protocol wherein a mixture of L1 and IrCl3·nH2O in 2-ethoxyethanol/H2O (3:1) was heated at reflux for 19 h, followed by the addition of deeb and a further reaction time of 6 h (Scheme 1). Complex 1 was characterized by 1H and 13C NMR spectroscopy, HR-ESI mass spectrometry, elemental analysis and melting point determination [see Figures S1-6 in the Supporting Information (SI) for the NMR and HR-ESI mass spectra].

Crystal Structures

Single crystals of sufficient quality of 1 were grown from CH2Cl2/Et2O at -18°C, and the structure of 1 was determined by single-crystal X-ray diffraction (Figure 1, Table S1). Complex 1, [Ir(L1)(deeb)Cl], lies in a mirror plane, with the pyridyl ring of L1, the iridium(III) and the chloride all lying directly in the plane. The tridentate L1 shows a tripodal chelation motif analogous to that seen previously. [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] The remaining coordination sphere of 1 consists of the deeb N^N ligand and a chloride anion.
The arrangement of ligands is unusual, as the chloride coordinates trans to the pyridine of L1 and not trans to a cyclometalated carbon, as generally observed in tridentate complexes. [START_REF] Koga | Synthesis, Structures, and Unique Luminescent Properties of Tridentate C∧C∧N Cyclometalated Complexes of Iridium[END_REF][START_REF] Obara | Highly Phosphorescent Iridium Complexes Containing Both Tridentate Bis(benzimidazolyl)-benzene or -pyridine and Bidentate Phenylpyridine:  Synthesis, Photophysical Properties, and Theoretical Study of Ir-Bis(benzimidazolyl)benzene Complex[END_REF][START_REF] Brulatti | Luminescent Iridium(III) Complexes with N∧C∧N-Coordinated Terdentate Ligands: Dual Tuning of the Emission Energy and Application to Organic Light-Emitting Devices[END_REF][START_REF] Ashizawa | Syntheses and photophysical properties of optical-active bluephosphorescent iridium complexes bearing asymmetric tridentate ligands[END_REF][START_REF] Daniels | When two are better than one: bright phosphorescence from non-stereogenic dinuclear iridium(III) complexes[END_REF]

Electrochemistry

The electrochemical properties of 1 were evaluated by cyclic voltammetry (CV) and differential pulse voltammetry (DPV) in deaerated CH2Cl2 solution at 298 K at a scan rate of 100 mV s^-1, using Fc/Fc+ as the internal reference and referenced with respect to NHE (0.70 V vs. NHE). [START_REF] Cardona | Electrochemical Considerations for Determining Absolute Frontier Orbital Energy Levels of Conjugated Polymers for Solar Cell Applications[END_REF] The electrochemical data are summarized in Table 1 and the voltammograms are shown in Figure 2. Complex 1 exhibits a quasi-reversible single-electron oxidation wave at 1.21 V (ΔEp = 88 mV), which is assigned to the Ir(III)/Ir(IV) redox couple, with contributions from the two phenyl rings of L1 and the chloro ligand. Compared to [Ir(L1)(dtBubpy)Cl], R1 (E1/2 ox. = 1.04 V vs. NHE), [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] the oxidation potential of 1 is significantly anodically shifted, by 170 mV, reflecting the electron-withdrawing capacity of the ethyl ester groups of the N^N ligand, which modifies the electron density on iridium. However, the oxidation potential of 1 is less positive than that of [Ir(ppy)2(deeb)]PF6, R2 (E1/2 ox. = 1.57 V in deaerated MeCN vs. NHE, where ppy is 2-phenylpyridinato). [START_REF] Chirdon | Tracking of Tuning Effects in Bis-Cyclometalated Iridium Complexes: A Combined Time Resolved Infrared Spectroscopy, Electrochemical, and Computational Study[END_REF] Upon scanning to negative potential, 1 shows a single quasi-reversible reduction wave at -0.94 V (ΔEp = 99 mV), which is monoelectronic as inferred from the DPV.
The electron-withdrawing effect of the ethyl ester groups of the N^N ligand results in a large anodic shift of 610 mV in the reduction wave of 1 compared to R1 (E1/2 red. = -1.58 V vs. NHE). [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] Complex R2 showed two reversible reduction waves in MeCN. The first reduction, located at -0.76 V, is assigned to the reduction of the deeb ligand, while the second, at -1.30 V, is due to the reduction of the phenylpyridinato. [START_REF] Chirdon | Tracking of Tuning Effects in Bis-Cyclometalated Iridium Complexes: A Combined Time Resolved Infrared Spectroscopy, Electrochemical, and Computational Study[END_REF] Thus, the reduction of the deeb ligand in 1 is shifted to more negative potentials compared to the same reduction in R2. DFT calculations on the previously reported R1 indicated that both the HOMO and HOMO-1 are close in energy and involve the iridium and chlorine atoms and the two phenyl rings of L1. [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] As can be seen in Figure 4, the same electron density distribution is found in 1. DFT calculations also show that the three lowest unoccupied orbitals are exclusively localized on the deeb ligand in 1 (Figure 4), while the LUMO+1 is primarily on the pyridyl of L1 in R1, illustrating the stronger accepting character of deeb. The ΔE_redox for 1 (2.18 V) is markedly smaller than that of R2 (ΔE_redox = 2.33 V). [START_REF] Chirdon | Tracking of Tuning Effects in Bis-Cyclometalated Iridium Complexes: A Combined Time Resolved Infrared Spectroscopy, Electrochemical, and Computational Study[END_REF]

Figure 3. Structures of reference complexes R1 and R2.
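The footnote convention of Table 1 (E_HOMO/LUMO = -[E_ox/red vs Fc/Fc+ + 4.8] eV, with Fc/Fc+ = 0.70 V vs. NHE in CH2Cl2) can be applied directly to the half-wave potentials of 1 (1.21 V oxidation, -0.94 V reduction, vs. NHE). A minimal sketch; the function names are illustrative, not from the paper:

```python
# Converting measured redox potentials of 1 into frontier-orbital energies,
# following the Table 1 footnote convention (Cardona): E = -[E vs Fc/Fc+ + 4.8] eV.

FC_VS_NHE = 0.70  # Fc/Fc+ in CH2Cl2, V vs. NHE (value used in the paper)

def to_fc_scale(e_vs_nhe):
    """Re-reference a potential from NHE to the Fc/Fc+ couple."""
    return e_vs_nhe - FC_VS_NHE

def frontier_energy_ev(e_vs_nhe):
    """E_HOMO (from E_ox) or E_LUMO (from E_red) in eV."""
    return -(to_fc_scale(e_vs_nhe) + 4.8)

e_ox, e_red = 1.21, -0.94  # half-wave potentials of 1, V vs. NHE
print(f"E_HOMO = {frontier_energy_ev(e_ox):.2f} eV")   # -5.31 eV
print(f"E_LUMO = {frontier_energy_ev(e_red):.2f} eV")  # -3.16 eV
```

Note that the CV-derived gap (E_ox - E_red = 2.15 V) differs slightly from the quoted ΔE_redox of 2.18 V; such small differences can arise depending on whether CV or DPV peak values are used.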
Photophysical properties

The photophysical data for 1 recorded in CH2Cl2 at 298 K are shown in Figure 5 and summarized in Table 2. The absorption profile of 1 differs significantly from that of R1. Complex 1 shows intense high-energy absorption bands (ε on the order of 3.5 × 10^4 M^-1 cm^-1) below 250 nm, which are ascribed to 1π-π* ligand-centered (1LC) transitions localized on the deeb ligand. A moderately intense band (ε on the order of 1.5 × 10^4 M^-1 cm^-1) at 319 nm is assigned to a ligand-centered (LC) transition on the deeb with a small CT character (see below). Weaker bands (ε on the order of 5-6 × 10^3 and 2 × 10^3 M^-1 cm^-1) in the region of 380-440 nm and tailing to 500-600 nm are attributed to a mixture of spin-allowed (1MLCT/1LLCT) and spin-forbidden (3MLCT/3LLCT) transitions involving the deeb ligand. The assignments for complex 1 were confirmed by TD-DFT calculations (see the ESI for technical details). The two lowest singlet states, computed at 623 and 611 nm, present relatively small intensities (oscillator strengths, f, of 0.010 and 0.056, respectively) and mainly correspond to HOMO-1 to LUMO and HOMO to LUMO transitions. As can be seen in Figure 4, this clearly corresponds to a mixed CT process from the metal and the phenyl rings of the C^N^C ligand towards the deeb. The following significant vertical absorptions are predicted by TD-DFT at 496 nm (f = 0.071), 456 nm (f = 0.027) and 443 nm (f = 0.084); these bands can be ascribed to HOMO-2 to LUMO, HOMO to LUMO+1 and HOMO-1 to LUMO+1 transitions, respectively, and therefore all involve strong CT character towards the deeb moiety. The more intense, resolved band observed at 319 nm (see Table 2) is computed at 315 nm by TD-DFT (f = 0.162) and corresponds to a more LC excitation from a low-lying orbital centered on the deeb (and partly on the chlorine atom) towards the LUMO, centered on the deeb as well.
Upon photoexcitation at 420 nm, 1 exhibits a broad, featureless emission profile, indicative of an emission with mixed CT character, with a maximum at λem = 731 nm, significantly red-shifted (99 nm, 2194 cm^-1) compared to R1 (λem = 630 nm). [START_REF] Hierlinger | An Unprecedented Family of Luminescent Iridium(III) Complexes Bearing a Six-Membered Chelated Tridentate C^N^C Ligand[END_REF] The red-shifted luminescence is due to the presence of the π-accepting deeb. The emission of 1 is likewise red-shifted (51 nm, ca. 1026 cm^-1) compared to that of R2 (λem = 680 nm). [START_REF] Chirdon | Tracking of Tuning Effects in Bis-Cyclometalated Iridium Complexes: A Combined Time Resolved Infrared Spectroscopy, Electrochemical, and Computational Study[END_REF] The DFT calculations return an emission of the T1 state at 762 nm, close to the experimental value, confirming emission from the lowest triplet excited state. The topology of this state, in terms of localization of the excess α electrons, is displayed in Figure 6. As can be seen, the spin density is mostly localized on the Ir and Cl atoms and on the ancillary ligand, the tridentate ligand playing only a minor role in this state. This localization is consistent with the observed red-shift in emission compared to R1 and R2. The measured photoluminescence quantum yield (ΦPL) of 1 is 0.5%, lower than those of R1 (6%) and R2 (5%). This finding is a logical consequence of the energy gap law, which states that the non-radiative decay rate increases with decreasing emission energy. [START_REF] Bixon | Energy Gap Law for Nonradiative and Radiative Charge Transfer in Isolated and in Solvated Supermolecules[END_REF][START_REF] Caspar | Application of the energy gap law to nonradiative, excited-state decay[END_REF] Among near-infrared emissive cationic Ir(III) emitters with λem beyond 700 nm bearing diimines as ancillary ligands, most examples exhibit ΦPL values less than 4%.
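The red-shifts quoted above in cm^-1 follow from the standard wavelength-to-wavenumber conversion, ν̃ = 10^7 / λ(nm). A quick check, with illustrative helper names:

```python
# Expressing emission red-shifts as energy differences in wavenumbers.

def wavenumber_cm1(nm):
    """Wavenumber (cm^-1) of a wavelength given in nm."""
    return 1e7 / nm

def redshift_cm1(nm_ref, nm_shifted):
    """Energy difference between two emission maxima, in cm^-1."""
    return wavenumber_cm1(nm_ref) - wavenumber_cm1(nm_shifted)

print(round(redshift_cm1(630, 731)))  # 1 vs R1: ~2193 cm^-1 (quoted as 2194)
print(round(redshift_cm1(680, 731)))  # 1 vs R2: ~1026 cm^-1
```

The 630 → 731 nm shift reproduces the quoted ≈2194 cm^-1 within rounding of the emission maxima; the analogous 680 → 731 nm shift relative to R2 comes out near 1026 cm^-1.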
[START_REF] Pal | Simple design to achieve red-to-near-infrared emissive cationic Ir(iii) emitters and their use in light emitting electrochemical cells[END_REF][START_REF] Tao | Efficient Near-Infrared-Emitting Cationic Iridium Complexes as Dopants for OLEDs with Small Efficiency Roll-off[END_REF][START_REF] Hasan | Tuning the Emission of Cationic Iridium (III) Complexes Towards the Red Through Methoxy Substitution of the Cyclometalating Ligand[END_REF][START_REF] Wang | Near-infrared-emitting heteroleptic cationic iridium complexes derived from 2,3-diphenylbenzo[g]quinoxaline as in vitro theranostic photodynamic therapy agents[END_REF][START_REF] Xin | Efficient near-infrared-emitting cationic iridium complexes based on highly conjugated cyclometalated benzo[g]phthalazine derivatives[END_REF] However, NIR-emitting neutral Ir(III) complexes of the form [Ir(C^N)2(O^O)] (where O^O is a substituted β-diketonate ancillary ligand) employing highly conjugated C^N ligands have reached ΦPL of up to 16%. [START_REF] Cao | Near-Infrared Polymer Light-Emitting Diodes with High Efficiency and Low Efficiency Roll-off by Using Solution-Processed Iridium(III) Phosphors[END_REF][START_REF] Kesarkar | Near-IR Emitting Iridium(III) Complexes with Heteroaromatic β -Diketonate Ancillary Ligands for Efficient Solution-Processed OLEDs: Structure-Property Correlations[END_REF][START_REF] Tao | High-efficiency nearinfrared organic light-emitting devices based on an iridium complex with negligible efficiency roll-off[END_REF] Complex 1 exhibits a multiexponential emission decay, a reflection of the large non-radiative decay rate constant.

Dye-sensitized solar cells (DSSCs)

Sandwich-type solar cells were assembled using 1-sensitised nanocrystalline TiO2 as the working electrodes, platinized conducting glass as the counter electrode and iodide/triiodide in acetonitrile as the electrolyte.
The photovoltaic performance of solar cells based on 1 and N719, as benchmark sensitizer, is summarized in Table 3. Figure 7 shows the current-voltage characteristics of the dyes under AM 1.5 simulated sunlight (100 mW cm^-2) and in the dark. Jsc is the short-circuit current density at the V = 0 intercept, Voc is the open-circuit voltage at the J = 0 intercept, FF is the device fill factor and η is the power conversion efficiency. The photovoltaic efficiency (η = 0.49%) obtained with 1 is low but comparable with results for iridium sensitizers reported elsewhere. [START_REF] Sinopoli | Hybrid Cyclometalated Iridium Coumarin Complex as a Sensitiser of Both n-and p-Type DSSCs[END_REF][START_REF] Sinopoli | New cyclometalated iridium(III) dye chromophore complexes for n-type dye-sensitised solar cells[END_REF][START_REF] Baranoff | Cyclometallated Iridium Complexes as Sensitizers for Dye-Sensitized Solar Cells[END_REF][START_REF] Baranoff | Cyclometallated iridium complexes for conversion of light into electricity and electricity into light[END_REF][START_REF] Dragonetti | Simple novel cyclometallated iridium complexes for potential application in dye-sensitized solar cells[END_REF] Both charge injection from the excited dye into TiO2 and regeneration by the electrolyte are thermodynamically favourable. The fill factor for the N719 device (0.61) was slightly lower than that typically obtained in optimized devices (0.70-0.75), and the shape of the current-voltage curve is consistent with a high series resistance. Procedures used in optimized N719 devices, such as mixed solvents or additives such as chenodeoxycholic acid in the electrolyte or dye bath, would likely improve the performance of the N719 devices; however, the conditions chosen were optimal for compound 1. We therefore attribute the low efficiency of 1 compared to the benchmark Ru dye to its weak absorption in the visible region, compared to ruthenium-based photosensitizers such as N719.
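The table entries can be cross-checked against the standard PCE definition, η = Jsc·Voc·FF/Pin, and the role of the absorption coefficient can be illustrated with a Beer-Lambert-type light-harvesting efficiency, LHE = 1 - 10^(-ε·Γ·1000). The dye surface coverage Γ used below is an assumed, typical value for a sensitized TiO2 film, not a measured quantity from this work:

```python
# Cross-check of Table 3 and an illustration of why low molar absorptivity
# limits light harvesting in the visible region.

def pce_percent(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    """Power conversion efficiency (%) under AM1.5 (Pin = 100 mW cm^-2)."""
    return 100.0 * jsc_ma_cm2 * voc_v * ff / pin_mw_cm2

print(f"1:    {pce_percent(0.995, 0.67, 0.74):.2f} %")  # ~0.49 %
print(f"N719: {pce_percent(8.84, 0.81, 0.61):.2f} %")   # ~4.4 %

def lhe(epsilon_m1cm1, gamma_mol_cm2=1e-7):
    """Light-harvesting efficiency: LHE = 1 - 10^(-eps * Gamma * 1000).
    Gamma is an assumed typical dye loading, not a measured one."""
    return 1.0 - 10.0 ** (-epsilon_m1cm1 * gamma_mol_cm2 * 1000.0)

print(f"LHE at eps ~ 2 000 M^-1 cm^-1:  {lhe(2000):.2f}")   # ~0.37
print(f"LHE at eps ~ 10 000 M^-1 cm^-1: {lhe(10000):.2f}")  # ~0.90
```

At equal dye loading, a dye with ε ~ 2 000 M^-1 cm^-1 absorbs well under half the light captured by one with ε > 10 000 M^-1 cm^-1, consistent with the low Jsc measured for 1.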
The absorption spectrum of the TiO2 electrode after immersion in the dye solution is provided in Figure S13 and the spectral response of the DSSC is given in Figure S14. The low incident photon-to-current conversion efficiency (IPCE < 2%) is consistent with the poor light-harvesting at λ > 500 nm. While these dyes absorb broadly across the visible spectrum, the low ε (ε ~ 2 000 M^-1 cm^-1) compared to ruthenium dyes (ε > 10 000 M^-1 cm^-1) is a limitation to their solar cell performance.

Conclusions

In conclusion, a new panchromatically absorbing, NIR-luminescent iridium(III) complex bearing a tripodal tris(six-membered) chelate ligand has been obtained and comprehensively characterized, including by single-crystal X-ray diffraction. The absorption spectrum tails off at 700 nm, much further than most neutral iridium complexes, while the emission is significantly shifted into the NIR, with a maximum at 731 nm. DSSCs using 1 as the dye achieved only a modest efficiency of 0.49%, comparable to other Ir(III) dyes. This was attributed to the modest absorption coefficient, which leads to weak light harvesting in the visible region and a low short-circuit current.

Appendix A. Supplementary data

CCDC 1583853 contains the supplementary crystallographic data for 1. These data can be obtained free of charge via http://www.ccdc.cam.ac.uk/conts/retrieving.html, or from the Cambridge Crystallographic Data Centre, 12 Union Road, Cambridge CB2 1EZ, UK; fax: (+44) 1223-336-033; or e-mail: [email protected]. NMR and MS spectra for 1, supplementary crystallographic data, supplementary electrochemical and photophysical data, description of the DFT/TD-DFT protocol, experimental details for the DSSC assembly and testing, and plots of the absorption spectra of 1-sensitized TiO2 and the IPCE spectrum of the DSSC.
Figure 1. Solid-state structure of complex 1; thermal ellipsoids are drawn at the 50% probability level. Hydrogen atoms and solvent molecules are omitted for clarity (color code: C = grey, N = purple, O = red, Cl = green and Ir = blue).

Figure 4. Frontier molecular orbitals of 1 computed through DFT (M06 functional, see the SI for details) and represented using a contour threshold of 0.03 au.
Iridium(III) complexes often do not show absorption onsets lower in energy than 550 nm; [START_REF] Zhao | Cationic Iridium(III) Complexes with Tunable Emission Color as Phosphorescent Dyes for Live Cell Imaging[END_REF][START_REF] Ertl | Highly Stable Red-Light-Emitting Electrochemical Cells[END_REF][START_REF] Pal | Simple design to achieve red-to-near-infrared emissive cationic Ir(iii) emitters and their use in light emitting electrochemical cells[END_REF] though, there are known examples of neutral Ir(III) complexes showing absorption bands beyond 550 nm. [START_REF] Kesarkar | Near-IR Emitting Iridium(III) Complexes with Heteroaromatic β -Diketonate Ancillary Ligands for Efficient Solution-Processed OLEDs: Structure-Property Correlations[END_REF][START_REF] Brulatti | Luminescent Iridium(III) Complexes with N∧C∧N-Coordinated Terdentate Ligands: Dual Tuning of the Emission Energy and Application to Organic Light-Emitting Devices[END_REF][START_REF] Xin | Efficient near-infrared-emitting cationic iridium complexes based on highly conjugated cyclometalated benzo[g]phthalazine derivatives[END_REF][START_REF] Zhang | Near-Infrared-Emitting Iridium(III) Complexes as Phosphorescent Dyes for Live Cell Imaging[END_REF]

Figure 5. The absorptivity (solid line) and photoluminescence spectra (dotted line) of 1 in CH2Cl2 at 298 K (c = 10^-5 M).

Figure 6. DFT-computed spin density difference plots for the lowest triplet state of 1. Both side and top views are shown, drawn with a contour threshold of 3 × 10^-3 au.

Figure 7. Current-voltage curves for DSSCs constructed using 1 (orange) and N719 (red) in the dark (dashed line) and under simulated sunlight (solid line, AM1.5, 100 mW cm^-2).
Table 1: Selected electrochemical properties of complex 1 in degassed CH2Cl2 at a scan rate of 100 mV s^-1 with Fc/Fc+ as internal reference, and referenced with respect to NHE (Fc/Fc+ = 0.70 V in CH2Cl2). [36,34,37]
[Table body not recoverable from the extracted text.]
b ΔE_redox is the difference (V) between the first oxidation and first reduction potentials; c E_HOMO/LUMO = -[E_ox/red vs Fc/Fc+ + 4.8] eV. [START_REF] Cardona | Electrochemical Considerations for Determining Absolute Frontier Orbital Energy Levels of Conjugated Polymers for Solar Cell Applications[END_REF]

Table 2. Photophysical properties of complex 1.
λabs / nm (ε / M^-1 cm^-1) a: 237 (34819), 319 (14647), 384 (5105), 434 (5607), 504 (2176), 597 (1925)
λem b / nm: 731
ΦPL b,c / %: 0.5
τe d / ns: 36 (73%), 78 (19%), 392 (8%)
a Recorded in aerated CH2Cl2 at 298 K; b recorded at 298 K in deaerated CH2Cl2 solution (λexc = 420 nm); c [Ru(bpy)3](PF6)2 in MeCN as the reference (ΦPL = 1.8% in aerated MeCN at 298 K)[52]; d λexc = 378 nm.

Table 3. Photovoltaic performance of 1. a
Dye    Jsc / mA cm^-2   Voc / V   FF     η / %
1      0.995            0.67      0.74   0.49
N719   8.84             0.81      0.61   4.4
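The triexponential decay parameters of Table 2 can be condensed into an amplitude-weighted mean lifetime, treating the quoted percentages as amplitude weights (an assumption), from which rough radiative and non-radiative rate estimates follow; for a multiexponential decay these are only order-of-magnitude figures:

```python
# Amplitude-weighted mean lifetime and approximate decay rates for complex 1,
# from the Table 2 fit components.

components = [(0.73, 36.0), (0.19, 78.0), (0.08, 392.0)]  # (weight, tau / ns)
tau_avg_ns = sum(a * t for a, t in components)
print(f"tau_avg ~ {tau_avg_ns:.0f} ns")  # ~72 ns

phi_pl = 0.005  # Phi_PL = 0.5 %
k_r = phi_pl / (tau_avg_ns * 1e-9)           # radiative rate, s^-1 (~7e4)
k_nr = (1.0 - phi_pl) / (tau_avg_ns * 1e-9)  # non-radiative rate, s^-1 (~1.4e7)
print(f"k_r ~ {k_r:.1e} s^-1, k_nr ~ {k_nr:.1e} s^-1")
```

The non-radiative rate exceeds the radiative one by roughly two orders of magnitude, consistent with the energy gap law argument made in the text.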
A panchromatically absorbing, NIR-emitting neutral iridium complex bearing a tripodal 2-(bis(4-(tert-butyl)phenyl)methyl)pyridinato (dtBubnpy) ligand, a bidentate diethyl [2,2'-bipyridine]-4,4'-dicarboxylate (deeb) ligand and a chloro ligand is reported. This complex was used as a dye in a dye-sensitized solar cell.
Source (2018): https://univ-rennes.hal.science/hal-01717722/file/Vassallo%20et%20al_How%20do%20walkers%20behave%20when%20crossing%20the%20way%20of%20a%20mobile%20robot%20that%20replicates.pdf
Christian Vassallo (email: [email protected]), Anne-Hélène Olivier (email: [email protected]), Philippe Souères (email: [email protected]), Armel Crétual (email: [email protected]), Olivier Stasse (email: [email protected]), Julien Pettré (email: [email protected])

How do walkers behave when crossing the way of a mobile robot that replicates human interaction rules?

Keywords: Human-robot interaction, Locomotion, Collision avoidance, Gait adaptation, Mobile robot

 A mobile robot was programmed to reproduce the interaction rules of human walkers
 We observed the behavior of human walkers crossing the way of this reactive robot
 Contrary to what occurs with a passive robot, the crossing order is preserved
 Humans behave with the reactive robot as when crossing another human walker
 Making robots move in a human-like way eases their interaction with human walkers

Wordcount is 2997 words.

Introduction

In everyday life, we walk by constantly adapting our motion to our environment. In past work, the relation between the walker and the environment was modeled as a coupled dynamical system. The trajectories result from a set of forces emitted by goals (attractors) and obstacles (repellers) [START_REF] Warren | Behavioral dynamics of visually guided locomotion, Coordination: neural, behavioral and social dynamics[END_REF].
Collision avoidance between pedestrians has also received a lot of attention either using front-on [START_REF] Dicks | Perceptual-motor behaviour during a simulated pedestrian crossing[END_REF] or side-on approach trajectories [START_REF] Huber | Adjustments of speed and path when avoiding collisions with another pedestrian[END_REF][START_REF] Knorr | Influence of Person-and Situation-Specific Characteristics on Collision Avoidance Behavior in Human Locomotion[END_REF][START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF][START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. Olivier et al. showed that walkers adapt their trajectory only if a future risk of collision exists [START_REF] Olivier | Minimal predicted distance: A common metric for collision avoidance during pairwise interactions between walkers[END_REF]. This adaptation depends on the order of arrival of pedestrians that defines their order of passage. The first walker that arrives maintains or increases his/her advance by slightly accelerating and changing his/her direction to move away from the other participant. The second one slows down and moves in the opposite direction to reduce the risks of a collision. Huber et al. focused on how trajectories are adapted using speed and heading modifications depending on the crossing angle [START_REF] Huber | Adjustments of speed and path when avoiding collisions with another pedestrian[END_REF]. Future crossing order (who is about to give way or pass first) is quickly and accurately perceived and preserved until the end of the interaction [START_REF] Knorr | Influence of Person-and Situation-Specific Characteristics on Collision Avoidance Behavior in Human Locomotion[END_REF][START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. 
This shows that walkers take efficiency into account, since an inversion of the crossing order would result in suboptimal adaptations of higher amplitude. In addition, it was shown that the participant giving way contributes more to solving the collision avoidance [START_REF] Olivier | Collision avoidance between two walkers: role-dependent strategies[END_REF]. Finally, behavior is influenced by the number of pedestrians to interact with and the potential to have social interactions with them [START_REF] Dicks | Perceptual-motor behaviour during a simulated pedestrian crossing[END_REF]. Because humans and robots will have to share the same environment in the near future [START_REF] Goodrich | Human-robot interaction: a survey[END_REF][START_REF] Kruse | Human-aware robot navigation: a survey[END_REF], recent studies focused on tasks involving walkers and a moving robot. Vassallo et al. [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF] performed an experiment in which participants had to avoid collision with a passive wheeled robot (moving straight at constant speed) crossing their path perpendicularly. In contrast to a human-human interaction, several inversions of the crossing order were observed, even though this behavior was not optimal. Such a behavior was observed when the walker arrived ahead of the robot with a predictable future crossing distance between 0 and 0.6 m but, despite this advance, finally gave way. This result was linked to the notions of perceived danger and safety, and to the lack of experience of interacting with such a robot. Because of its design, the main limitation of the Vassallo et al. study [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF] was its inability to conclude whether the modification of the walker behavior was due to the lack of adaptability of the moving obstacle or solely to its artificial nature.
Nonetheless, it was shown in [START_REF] Satake | How to Approach Humans? Strategies for Social Robots to Initiate Interaction[END_REF] that the robot trajectory can be read and understood by humans in a task where a robot moves towards a human to initiate a conversation, based on an approach linked to public and social distances. Furthermore, in a face-to-face task with a moving robot, humans behave similarly whether or not they are told what the robot trajectory will be [START_REF] Carton | Measuring the effectiveness of readability for mobile robot locomotion[END_REF], showing their ability to actually read the robot motion. Given these results, the question addressed in this paper is: "How would humans behave if they have to cross the trajectory of a robot programmed to replicate the observed human-human avoidance strategy?" Would humans understand that the robot adapts its trajectory and then adapt their own strategy accordingly, or would they give way to the robot as observed in [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF]?

Materials and methods

Participants

Ten volunteers participated in the experiment (2 women and 8 men). They were 28.8 (±9.5) years old and 1.77 m (±0.12) tall. They had no known pathology that could affect their locomotion. All of them had normal or corrected sight and hearing. All participants were naïve to the studied situation. Participants gave written and informed consent before their inclusion in the study. The experiment conformed to the Declaration of Helsinki, with formal approval of the ethics evaluation committee of INSERM (IRB00003888, Opinion number 13-124), Paris, France (IORG0003254, FWA00005831).

Apparatus

The experiment took place in a 40 m x 25 m gymnasium. The room was separated into two areas by 2 m high occluding walls forming a gate in the middle (Figure 1).
Four specific positions were defined: the participant starting position PSP, the participant target PT, and two robot starting positions RSP1 and RSP2, to generate situations where the robot approached from the right or from the left of the participants. Two virtual guidelines ra and rb, parallel to the line (RSP1, RSP2) and respectively located at a distance of 0.5 m and 1.0 m from the gate, were used as references for guiding the robot to pass behind or ahead of the participant during the avoidance phase. A specific zone between PSP and the gate was named the Motion Estimation Zone (MEZ), far enough from PSP to let the participants reach their comfort velocity before they entered the MEZ. The intersection point between the robot's path and the initial path of the participant was named the Hypothetical Crossing Point (HCP), as this is the point where the participant and robot would cross if neither modified their trajectory.

------------------------------ Insert figure 1 here ------------------------------

Task

Participants were asked to walk at their preferred speed from PSP to PT, passing through the gate. They were told that a robot could be moving beyond the gate and could obstruct them, meaning that the robot could adapt its trajectory according to theirs. One experimental trial corresponded to one travel from PSP to PT. We defined tsee, the time at which the participant passed through the gate and saw the robot moving, and tcross, the time of closest approach, when the human-robot distance was minimal (i.e., the "distance of closest approach"). The crossing configuration and the risk of future collision were estimated using the Signed Minimal Predicted Distance, noted smpd, which gives, at each time step, the future distance of closest approach if both the robot and the participant keep a constant speed and direction [START_REF] Vassallo | How do walkers avoid a mobile robot crossing their way?[END_REF].
A variation of smpd means that the participant and/or the robot are performing an adaptation. The sign of this function depends on who, between the participant and the robot, is going to pass first: positive if it is the participant and negative otherwise. A change of smpd sign means a switch of the future crossing order.

Recorded data

3D kinematic data were recorded using a 16 infrared camera Vicon-MX motion capture system (120 Hz). Reconstruction was performed with Vicon-Blade and computations with Matlab (Mathworks®). The global position of participants was estimated as the centroid of the reflective markers set on a helmet they were wearing. The stepping oscillations were filtered out by applying a Butterworth low-pass filter (2nd order, dual pass, 0.5 Hz cut-off frequency).

Robot Behavior

We used a RobuLAB10 robot from Robosoft (dimensions: 0.45 x 0.40 x 1.42 m, weight 25 kg, maximum speed ~3 m.s^-1). The robot reference point was the center of its base. The robot control sequence was the following (cf. Figure 1):
1) The robot was at rest at RSP1 or RSP2.
2) The participant crossed the MEZ; its arrival time at HCP was estimated.
3) The theoretical speed at which the robot should move to reach HCP at the same time as the participant was estimated. This speed was then further increased (resp. decreased) for the robot to arrive early (resp. late) at HCP, in order to match the expected smpd. This choice was made such that smpd values at tsee were randomly distributed in [-0.9 m; 0.9 m].
4) When the robot had to avoid the human, 2 m before reaching HCP, the robot adapted its trajectory by inserting a new way-point, in order to pass behind the walker along the line ra or ahead of the walker along the line rb, depending on the sign of smpd at tsee.
5) When the avoidance phase was over, the robot was controlled to reach its final position.

Experimental plan

Each participant performed 30 trials.
The robot starting position (50% from RSP1, 50% from RSP2) was randomized among the trials. To introduce variability, in 2 trials the robot did not move. The participants were informed neither about the initial position of the robot nor about the possibility that the robot would not move on every trial. Only the 28 trials with potential adaptations were analyzed.

Analysis

The analysis focused on the time interval during which adaptation was performed. To this end, smpd was normalized in time by resampling the function at 100 intervals between tsee (time 0%) and tcross (time 100%). The quantity of adaptation was defined as the absolute value of the difference between smpd(tsee) (i.e., the initial conditions of the interaction) and smpd(tcross) (i.e., the actual signed minimum distance between the participant and the robot). Statistics were performed using Statistica (Statsoft®). All effects were reported at p<0.05. Normality was assessed using a Kolmogorov-Smirnov test. Depending on normality, values are expressed as median (M) or mean ±SD. Wilcoxon signed-rank tests were used to determine differences between the values of smpd at tsee and tcross. The influence of the crossing order evolution on the smpd values was assessed using a Kruskal-Wallis test with post hoc Mann-Whitney tests, for which a Bonferroni correction was applied: all such effects are reported at a 0.016 level of significance (0.05/3). Finally, we used a Mann-Whitney test to compare the crossing distance depending on the final crossing order.

Results

We considered 279 trials (one was removed because the robot failed to start). Figure 2 depicts the evolution of smpd for all trials.

[Insert Figure 2 here]

The sign of smpd at tcross showed that participants passed first in 53% of cases, and gave way in the other 47%. Combining this information with the data at tsee, we could evaluate whether an inversion of the crossing order occurred.
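The time normalization and the quantity-of-adaptation measure described under Analysis can be sketched as follows (linear interpolation; 100 intervals means 101 evenly spaced samples):

```python
def resample(t, v, n=101):
    """Linearly resample the series v(t) at n evenly spaced times over [t[0], t[-1]]."""
    out, t0, t1 = [], t[0], t[-1]
    for i in range(n):
        ti = t0 + (t1 - t0) * i / (n - 1)
        j = 0
        while j < len(t) - 2 and t[j + 1] < ti:   # locate the enclosing segment
            j += 1
        w = (ti - t[j]) / (t[j + 1] - t[j])
        out.append(v[j] * (1.0 - w) + v[j + 1] * w)
    return out

def quantity_of_adaptation(smpd_normalized):
    """|smpd(tcross) - smpd(tsee)| on the time-normalized series."""
    return abs(smpd_normalized[-1] - smpd_normalized[0])
```

For instance, a trial moving from smpd(tsee) = 0.29 m to smpd(tcross) = −0.71 m (the PosNeg medians reported below) has a quantity of adaptation of 1.0 m.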
The trials were divided into 4 categories, depending on the signs of smpd at tsee and tcross (Pos for positive and Neg for negative): PosPos, PosNeg, NegPos, NegNeg. For example, the PosNeg category contained the trials for which smpd(tsee)>0 and smpd(tcross)<0. The categories were distributed among the trials in the following way: PosPos=144 trials (52%), NegNeg=110 trials (39%), PosNeg=22 trials (8%), NegPos=3 trials (1%). All participants had both PosPos and NegNeg trials, and 9 out of 10 participants had at least one PosNeg trial. In the remainder of the paper, the NegPos category will not be further considered, as it contained only three trials defined as outliers. Examples of trajectories for each of the 3 remaining categories are depicted in Figure 3. Note that in 91% of cases the crossing order was preserved; a switch was only observed in the PosNeg trials, where participants were likely to pass first but adapted their trajectory to finally give way to the robot.

[Insert Figure 3 here]

Figures 4a and 4b show, respectively, the average evolution of smpd and its time derivative for each category. Based on the sign of the smpd time derivative, we can separate the reaction period, during which participants perform adaptations (smpd varies), from the regulation period that follows the collision avoidance (the derivative vanishes, and its sign may even change), as defined in [Knorr et al.] and [Vassallo et al.]. The relative duration of the reaction phase for PosPos (55%) and NegNeg (57%) trials was almost the same, while participants took longer to adapt when they decided to give way to the robot in PosNeg (69%) trials.

[Insert Figure 4 here]

Figure 5 compares smpd(tsee) and smpd(tcross) for the 3 categories.
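The four-way labelling used above depends only on the signs of smpd at the two event times; a minimal sketch:

```python
from collections import Counter

def category(smpd_tsee, smpd_tcross):
    """Label a trial by the signs of smpd at tsee and tcross (Pos/Neg)."""
    sign = lambda x: "Pos" if x > 0 else "Neg"
    return sign(smpd_tsee) + sign(smpd_tcross)

def crossing_order_preserved(smpd_tsee, smpd_tcross):
    """True when the sign, i.e. the future crossing order, did not change."""
    return (smpd_tsee > 0) == (smpd_tcross > 0)

def distribution(trials):
    """Count categories over (smpd_tsee, smpd_tcross) pairs."""
    return Counter(category(a, b) for a, b in trials)
```

Applied to the 279 recorded pairs, this tabulation yields the PosPos/NegNeg/PosNeg/NegPos counts reported above.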
For each category, the human-robot distance increased from tsee to tcross, so that the risk of collision was reduced. Statistical analysis showed a significant difference of smpd between tsee and tcross for PosPos trials (Msmpd(tsee)=0.71 m, Msmpd(tcross)=1.08 m, Z=9.17, p<0.0001, r=0.76), for NegNeg trials (Msmpd(tsee)=−0.46 m, Msmpd(tcross)=−1.14 m, Z=8.98, p<0.0001, r=0.85) and for PosNeg trials (Msmpd(tsee)=0.29 m, Msmpd(tcross)=−0.71 m, Z=4.11, p<0.0001, r=0.88).

[Insert Figure 5 here]

Finally, the distance of closest approach was influenced by the category (H(2,276)=29.3, p<0.0005). Post-hoc tests showed that the median distance between the robot and the participants did not significantly differ between PosPos (M=1.08 m) and NegNeg (M=1.14 m) trials. However, when an inversion of the crossing order occurred (PosNeg trials), this distance was the smallest (M=0.71 m).

Discussion

In the current study, the results indicated that when a human crosses the trajectory of a mobile robot programmed to replicate the observed human avoidance strategy, the main characteristics of collision avoidance are comparable to those of human-human interactions. First, the crossing order is preserved from tsee to tcross in a majority of trials, as observed in human-human interactions [Knorr et al.; Olivier et al., Minimal predicted distance]. However, in 8% of trials, the participants gave way to the robot while they were in a position to pass first. Such a behavior was observed when smpd(tsee) was around 0.39 m. Above this threshold, participants preferred to preserve their role rather than give way to the robot.
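As a sanity check on the statistics reported above, the effect sizes r quoted alongside the Wilcoxon Z statistics are consistent with the usual conversion r = Z/√N (our assumption — the formula is not stated in the text), using the category sizes given earlier (144, 110 and 22 trials):

```python
import math

def effect_size_r(z, n):
    """Effect size for a Wilcoxon signed-rank test: r = Z / sqrt(N)."""
    return z / math.sqrt(n)

# Reported (Z, N, r): PosPos (9.17, 144, 0.76), NegNeg (8.98, 110, 0.85),
# PosNeg (4.11, 22, 0.88) — each recomputed value agrees to within ~0.01.
```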
This result is confirmed by the repartition into the PosPos and PosNeg categories of the trajectories starting from the smpd interval [0.39 m, 0.74 m], where 94% of trials belong to the PosPos group. This result is in contrast with the one previously observed with a passive robot [Vassallo et al.], where participants consistently preferred to give way to the robot when the risk of collision was below 0.81 m, even though this choice was not optimal. Note that, whether or not an inversion of the crossing order occurred, the trajectories were adapted in order to increase the crossing distance between the human and the robot and thus reduce the risk of collision. The results show that humans solve the collision avoidance with anticipation, as previously demonstrated during human-human interaction [Olivier et al., Minimal predicted distance]. Indeed, Figure 4 shows a plateau in smpd values before tcross, meaning that the avoidance maneuvers are over before the end of the task. As discussed in the review of Higuchi [Higuchi], the anticipatory nature of adaptive locomotor strategies ensures safe navigation during the task. When the participant decides to preserve the crossing order, the task is solved earlier than when a switch of roles occurs, which requires more motion adaptation. The human-human avoidance strategy takes advantage of the configurations of both agents to limit their adaptations [Olivier et al., Minimal predicted distance; Olivier et al., Collision avoidance between two walkers].
Assuming that both participants have similar locomotion capabilities, a role is assigned to each of them depending on their order of passage, as recalled in the introduction. This high-level strategy is not tied to anthropomorphic walking; it is simply expressed in terms of the trajectory of a representative point (e.g., the waist position and heading) in the horizontal plane of motion. As such, the method can be easily transferred to a wheeled robot. The fact that the robot automatically initiates its avoidance motion by replicating the human strategy allows the human to easily fall back on the process usually applied. In this way, the human easily understands the role he/she should play, and no conflicting situation occurred in any trial. For this reason, our overall results are comparable to previous findings reported in the case of human-human interaction. The control of our robot follows a model of shared avoidance strategy based on human behavior [Vassallo et al.]. One conflicting situation might theoretically occur when both agents arrive with a zero smpd (i.e., exactly at the same time) and take the same role. Such a conflicting situation between human walkers was not reported in [Vassallo et al.] and never occurred in our human-robot experiment. When the human and the robot approached the crossing point quite simultaneously, the smpd was checked twice: once at the beginning, based on the measure of the human velocity in the MEZ, and once at tsee. Based on this accurate measurement of the smpd, which is never exactly equal to zero, the robot adopts a role that helps the walker adapt his behavior. For this reason, we never observed any conflicting situation in which the walker would have tried to force the way (NegPos) after the robot had initiated the avoidance.
However, the opposite situation (PosNeg), in which the human prefers to give way to the robot even though he arrived ahead, was sometimes observed. This cautious behavior does not constitute a conflicting situation that could block both agents. The behavioral similarities observed between human-human and human-robot interactions are in accordance with the study of Carton et al. [Carton et al.], in which a walker avoids a robot that reproduces an average human trajectory to avoid a face-to-face collision. They showed that giving a human-like behavior to a moving robot gives rise to readable motions that convey intentions. This readability allows humans to minimize their planning effort and to avoid the collision earlier and more smoothly. In accordance with previous studies [Carton et al.; Dragan et al.; Lichtenthäler et al.], our results show that controlling robots so as to make them behave in a human-like way is a key point to ease human-robot cohabitation.

Conclusion

Our study suggests that when human walkers cross the trajectory of a mobile robot that obeys the observed human-human avoidance rules, they behave closely to the way they do when they cross the trajectory of another human walker. This result shows that, for the ease of human-robot collaboration, machines should move by respecting human interaction rules.
In future work, as previously investigated in human-human interactions [Dicks et al.; Gallup et al.; Higuchi; Passos et al.], it would be interesting to better understand the visual anticipation processes as well as the nature of the visual information underlying such a collaboration. This can be done by using an eye-tracking system to couple the adaptations made by the human walkers with their gaze activity. Also, it would be interesting to evaluate whether the use of a humanoid robot, whose morphology is closer to that of a human than a wheeled robot's, modifies the human behavior. Another direction of research would be to extend this work to the case of multiple walkers interacting with each other at the same time. Would it be possible, if some participants were replaced by robots that behave like humans, to observe the same human adaptation? Finally, the nature of human expectations and presuppositions, which can be linked to the notion of socially-aware navigation (see [Rios-Martinez et al.] for a review), should have a strong influence on the walker behavior. Indeed, in a less controlled context, participants would certainly behave differently than in the framework of a scientific experiment, where the robot is expected to behave safely. An interesting complement to this study would then be to conduct a similar experiment in a real-life environment to evaluate the impact of the context on the walker behavior.
Figure 1: Experimental apparatus and task. The robot moves from RSP1 to RSP2 (or vice versa), following the lateral path ra or rb to pass respectively behind or ahead of the participant.

Figure 2: Evolution of smpd normalized in time during the interaction [tsee, tcross] for all the 279 trials. Gray curves represent trials where the initial crossing order was preserved, while black curves represent trials where the initial crossing order was changed.

Figure 3: Three examples of participant (P) and robot (R) trajectories during the interaction phase, for the PosPos (top), PosNeg (middle) and NegNeg (bottom) categories. The part of the trajectory between tsee (circle mark) and tcross (square mark) is represented in bold line. Corresponding positions along time are linked by dotted lines.

Figure 4: (a) Mean evolution (±1 SD) of smpd for each category of trial. (b) Time derivative of the mean smpd. The three vertical segments correspond, for each curve (PosPos, PosNeg or NegNeg), to the time at which the time derivative of the smpd vanishes, i.e., they separate the reaction phase (on the left) from the regulation phase (on the right).

Figure 5: smpd values for the PosPos, PosNeg and NegNeg categories at tsee and tcross. A significant difference in values means that adaptations were made to the trajectory by the participant (***p < 0.001).

Acknowledgements

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 611909 (KoroiBot) and from the French National Research Agency, projects Entracte (#ANR-13-CORD-0002) and Percolation (#ANR-13-JS02-0008).

Conflict of interest

The authors declare no conflict of interest.
[Source: HAL 01579651, 2017 — https://univ-rennes.hal.science/hal-01579651/file/Triplet%20state%20CPL%20active%20helicene-dithiolene_accepted.pdf]
Triplet state CPL active helicene-dithiolene platinum bipyridine complexes

Thomas Biet,a Thomas Cauchy,a Qinchao Sun,b Jie Ding,b Andreas Hauser,*b (email: [email protected]) Patric Oulevey,b Thomas Bürgi,b Denis Jacquemin,c Nicolas Vanthuyne,d Jeanne Crassous,e and Narcis Avarvari*a

Keywords: UV-vis, ECD, photophysical measurements

Chiral metal dithiolene complexes represent a family of chiral precursors which can give rise to molecular materials with properties resulting from the interplay of chirality with conductivity, magnetism, and photophysics. We describe herein the first examples of chiral metal diimine dithiolene complexes, obtained by the use of a platinum(II) centre coordinated by 2,2'-bipyridine and helicene-dithiolene ligands. A straightforward synthesis of racemic and enantiopure complexes allows the preparation of luminescent Pt(bipy) [4]- and [6]helicene compounds, for which the solid-state structure was determined as well. TD-DFT calculations support the assignment of the low energy bands observed in the UV-vis absorption spectra as mixed metal-ligand-to-ligand charge transfer transitions and confirm that the emission band results from the T1 excited state. Interestingly, the enantiopure [6]helicene complexes show CPL activity at room temperature in acetonitrile solutions, with anisotropy factors of 3×10⁻⁴.

Chiral metal dithiolene complexes represent an emerging family of molecular materials where chirality is expected to modulate properties such as conductivity, magnetism, luminescence, etc.1 For example, differences in conductivity between diastereomeric pairs of anionic Ni(II) bis(dithiolene) complexes with chiral viologene-type cations have been noticed,2 while the first chiral single-component conductors based on neutral Au(III) bis(dithiolene) complexes have been recently described.3 Besides, stable anionic, neutral or cationic species can be easily accessed in metal bis(dithiolene) complexes thanks to the "non-innocent" character of the dithiolene ligands, so that redox modulation of the chiroptical properties can be observed.4 Although square planar platinum diimine dithiolene complexes have been investigated over more than twenty years, especially for their emission properties in solution,5-8 and more recently for photocatalytic water splitting,9 no chiral derivative of this heteroleptic family has yet been reported. For example, in Pt(II) 2,2'-bipyridine (bpy) arene-dithiolene complexes such as Pt(bpy)(tdt) (tdt = toluenedithiolate), showing room temperature luminescence in solution arising from a MMLL'CT (mixed metal-ligand-to-ligand charge transfer) transition,5,6 chirality could in principle be introduced either on the diimine or the benzodithiolene fragment in order to influence the photophysical properties. One of the interests of such complexes relies on the possible observation of circularly polarized luminescence (CPL), which is the differential spontaneous emission of left- and right-handed circularly polarized light.10 While chiral lanthanide complexes are generally the most intense CPL emitters,11,12 several examples of transition metal complexes have been reported as well.13-15 Among them, those containing helicene-based ligands are particularly interesting,16,17 as helicenes18,19 and heterohelicenes20 are well known non-planar conjugated molecules possessing strong chiroptical properties.
Note that CPL-active cationic dioxa, azaoxa and diaza [6]helicenes have been recently reported.21 This work presents our investigations on helical dithiolene platinum 2,2'-bipyridine complexes using the hitherto unknown helicene-dithiolate (heldt) ligands, by analogy with the achiral toluene-dithiolate (tdt) previously mentioned. We describe herein the synthesis and the structural characterization of Pt(bpy)([n]hel-dt) (n = 4, 6) complexes, together with their chiroptical and photophysical properties supported by DFT calculations. The racemic 2,3-dithiolate-[4] and [6]helicene ligands, generated in situ from the protected precursors (rac)-1a and (rac)-1b respectively, which have recently been used by some of us for the synthesis of TTF-helicenes,22 were reacted with Pt(bpy)Cl2 to generate the corresponding complexes. Thus, (rac)-Pt(bpy)([4]hel-dt) 2a and (rac)-Pt(bpy)([6]hel-dt) 2b were isolated as dark purple crystalline solids after column chromatography (Scheme 1 and ESI). As the racemization barrier of [4]helicenes is generally very low,23 the enantiopure forms were prepared only for the [6]helicene dithiolene complexes 2b, starting from the precursors (M)-1b and (P)-1b separated by chiral HPLC (Fig. S1-S3, ESI). The racemic complexes 2a and 2b crystallize in the centrosymmetric space groups C2/c and P-1, respectively, with both the (M) and (P) enantiomers in the unit cell (Table S1, ESI). Worth noting are the helical curvatures (hc), defined by the dihedral angle between the terminal rings of the helicene skeleton, amounting to 25.40° and 53.76° for (rac)-2a (Table S2, Fig. S4-S5, ESI) and (rac)-2b (Table S3, Fig. S6-S7, ESI), which are typical values for [4]- and [6]helicenes, and the square planar coordination geometry of the platinum centres. DFT calculations in acetonitrile yield dihedral angles of 28.06° and 41.68° for 2a and 2b, suggesting that the impact of crystal packing is stronger for the latter compound.
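The helical curvature used above — the interplanar (dihedral) angle between the terminal rings — can be computed from atomic coordinates. The sketch below takes three atoms per terminal ring to define each plane; for real, slightly non-planar rings a least-squares plane fit would be used instead:

```python
import math

def plane_normal(p1, p2, p3):
    """Normal of the plane through three 3D points (cross product of two edges)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def helical_curvature(ring1, ring2):
    """Angle (degrees, in [0, 90]) between the planes of the two terminal rings."""
    n1, n2 = plane_normal(*ring1), plane_normal(*ring2)
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(a * a for a in n2))
    # abs() makes the result independent of the arbitrary normal orientation
    return math.degrees(math.acos(min(1.0, abs(dot) / norm)))
```

Fed with the crystallographic coordinates of the two terminal rings of (rac)-2b, such a computation would return the ~54° curvature quoted above.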
The enantiomerically pure complex (M)-2b was also analysed by single crystal X-ray diffraction, thus confirming that it was obtained from the (−)-1b precursor. (M)-2b crystallized in the orthorhombic system, non-centrosymmetric space group P212121, with four independent molecules in the unit cell (Fig. 1).

Fig. 1 The four independent molecules of the complex in the solid state structure of (M)-2b.

The four molecules, named Pt1A-Pt1D, slightly differ in their helical curvature values, ranging from 58.58° (Pt1C) to 62.36° (Pt1D), while the Pt-S (2.24-2.25 Å) and Pt-N (2.05-2.07 Å) bond lengths are in the normal range for such complexes (Table S4, ESI).24 The packing of the molecules is very likely governed by the π-π stacking interactions occurring along the c direction (Fig. 1 and Fig. S8-S9, ESI). The enantiomeric (P)- and (M)-2b complexes represent the first chiral members of the platinum diimine dithiolene family. As outlined above, Pt(diimine)(dithiolate) complexes are emissive in fluid or frozen solutions.5,6,25 We therefore first measured the photophysical properties of the racemic 2a and 2b. In the low energy region the complex (rac)-2a has an absorption band from 450 to 650 nm, with a maximum around 550 nm (18180 cm⁻¹) and an absorption coefficient ε ≈ 6700 M⁻¹ cm⁻¹ (Fig. 2, top). For (rac)-2b the maximum of the corresponding band is around 562 nm and the absorption coefficient is ε ≈ 3640 M⁻¹ cm⁻¹ (Fig. 2, bottom). This low energy absorption band is typical for Pt(diimine)(dithiolate) complexes and has been assigned to a MMLL'CT transition, as the HOMO has metal/dithiolene character while the LUMO is developed over the unsaturated diimine ligand.6 The small redshift of the CT transition from 2a to 2b is caused by the slight change in the dithiolate ligand, with a more extended rigid π backbone in the latter, which makes the HOMO energy slightly higher (by +0.03 eV) in 2b than in 2a.
Moreover, the absorption coefficient is smaller for 2b, likely because of the more distorted structure. TD-DFT reproduces the experimental trends with a vertical absorption at 556 nm (f = 0.21) for 2a and 562 nm (f = 0.19) for 2b. Both complexes are emissive in fluid solutions of CH2Cl2 when irradiated into the CT bands, showing low energy emission bands at 720 nm (2a) and 715 nm (2b) (Fig. 2 and Fig. S10, ESI, for (rac)-2a in CH3CN). The more distorted structure of 2b might also be at the origin of the lower emission quantum yield for 2b (0.15%) than for 2a (0.19%) (Table S5, ESI). The perfect agreement between the absorption and excitation spectra is proof that the luminescence indeed originates from the two compounds, despite the low emission quantum yield. For 2b a luminescence lifetime of 124 ns was measured in deoxygenated solution with pulsed excitation at 458 nm (decay curve shown in Fig. S11, ESI). With the quantum efficiency of 0.15%, this corresponds to a radiative lifetime of around 100 μs, indicating that the emission originates from the T1 state, as is generally the case for Pt(II) complexes.6

Fig. 2 Absorption, emission and excitation spectra of (rac)-2a (top) and (rac)-2b (bottom) in CH2Cl2. Absorption spectra were measured at concentrations of 2.2×10⁻⁵ M for (rac)-2a and 8×10⁻⁵ M for (rac)-2b. Emission spectra were measured using the same solutions degassed by nitrogen bubbling for 20 min, with excitation at 525 nm for (rac)-2a and at 560 nm for (rac)-2b. Excitation spectra were measured at emission wavelengths of 720 nm for (rac)-2a and 715 nm for (rac)-2b.

To characterize the charge transfer and emission properties, DFT calculations were performed on 2a and 2b (Fig. S12-S19, Tables S6-S8, ESI). For both compounds, optimized as the (M) and (P) enantiomers respectively, the fully relaxed molecular geometries obtained by DFT are in line with those obtained by X-ray diffraction.
In Fig. 3, we represent the electron density difference (EDD) plots corresponding to the transition to the lowest singlet excited state and the spin density of the lowest emissive triplet state. The EDD representation clearly shows that there is a strong CT from the dithiolene (donor, mostly in blue in Fig. 3) to the diimine (acceptor, mostly in red). The computed CT distance attains 4.0 Å in both 2a and 2b, which is a rather large value. Interestingly, one notices that the metal centre presents both positive and negative density contributions, indicating that it also partially plays the role of an accepting unit. The implication of the metal in the EDD plots is also consistent with a possible intersystem crossing to the T1 state. The spin density of the lowest triplet state is unsurprisingly localized in exactly the same regions as the corresponding S1 state. The emission from the lowest triplet state was estimated in both the vertical and adiabatic approximations. In the former, DFT yields an emission at 802 nm and 815 nm for 2a and 2b, respectively, whereas in the latter, which takes into account the difference of vibrational energies, we obtained 759 nm and 760 nm for the two compounds. These latter values are in good correspondence with the experimental data, with an error smaller than 0.1 eV, confirming that the observed emission is indeed coming from T1. The fact that the vertical values indicate (incorrectly) a small redshift when going from 2a to 2b, whereas the adiabatic energies show essentially no shift, hints that the T1 geometrical relaxation is smaller in the latter compound than in the former. The photophysical properties of both enantiomers of 2b were investigated in acetonitrile solutions. Absorption spectra are shown in the top panel of Fig. 4, together with the emission spectrum of (M)-2b. Compared to the spectra in CH2Cl2, the maximum of the absorption is shifted to 530 nm, that is, by about 30 nm to lower wavelength.
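Two of the numerical claims above are simple to check: the radiative lifetime follows from the standard relation τrad = τobs/Φ (with Φ = kr/(kr + knr)), and the computed-versus-measured emission gap can be expressed in eV via E ≈ 1239.84/λ(nm). The 124 ns lifetime and 0.15% yield give τrad ≈ 83 μs, consistent with the quoted "around 100 μs" order of magnitude, and the adiabatic T1 wavelengths sit within about 0.1 eV of the experimental emission maxima:

```python
def radiative_lifetime(tau_obs_s, phi):
    """tau_rad = tau_obs / phi, with phi = k_r / (k_r + k_nr)."""
    return tau_obs_s / phi

def photon_energy_ev(wavelength_nm):
    """E(eV) = hc / lambda, with hc = 1239.84 eV*nm."""
    return 1239.84 / wavelength_nm

tau_rad = radiative_lifetime(124e-9, 0.0015)            # ~8.3e-5 s, order of 100 us
gap_2a = photon_energy_ev(720) - photon_energy_ev(759)  # experiment vs adiabatic T1, 2a
gap_2b = photon_energy_ev(715) - photon_energy_ev(760)  # experiment vs adiabatic T1, 2b
```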
The emission maximum in acetonitrile appears at 720 nm, that is, shifted to higher wavelength by 5 nm. The CD spectra of (P)- and (M)-2b in acetonitrile, mirror images of each other, are shown in the bottom panel of Fig. 4. For the CD spectra, TD-DFT indeed yields a weakly positive contribution to the rotatory strength for the transition to the lowest excited state of (P)-2b and a weakly negative contribution to that strength for the lowest excited state of (M)-2b, which is consistent with the experimental findings. CPL, representing the differential emission between left and right circularly polarized light and characterized by the anisotropy factor gem = 2(IL − IR)/(IL + IR) at the maximum of the emission band, was measured for solutions of (M)- and (P)-2b in acetonitrile at room temperature (Fig. 4, bottom). As hypothesized through the introduction of the helicene backbone in the dithiolene ligand, the enantiomers of 2b show CPL activity with an anisotropy factor of ±3×10⁻⁴, which is a typical value for organic, organometallic and coordination complexes in solution, with the exception of the lanthanides.11,12 It should be mentioned, however, that this CPL anisotropy is obtained for a compound with a luminescence quantum efficiency of only 0.15%, and that it represents the first CPL-active metal dithiolene complex. In summary, the first helical Pt(diimine)(dithiolene) complexes have been prepared through the introduction of [4]- and [6]helicene backbones into the structure of the dithiolene ligand. The solid state structures of the racemic complexes show the presence of both the (P) and (M) enantiomers, with helical curvatures typical of [4]- and [6]helicenes and packings controlled by π-π interactions. Enantiopure [6]helicene complexes were prepared from the corresponding enantiopure precursors separated by chiral HPLC.
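A minimal sketch of the anisotropy factor defined earlier in this section, computed from the left- and right-circularly polarized emission intensities:

```python
def g_em(i_left, i_right):
    """CPL anisotropy factor: g_em = 2 (I_L - I_R) / (I_L + I_R)."""
    return 2.0 * (i_left - i_right) / (i_left + i_right)

# An intensity imbalance of 3 parts in 10^4 between the two circular
# polarizations reproduces the magnitude reported above.
g = g_em(1.00015, 0.99985)   # ~ +3.0e-4; the opposite enantiomer gives ~ -3.0e-4
```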
The complexes are emissive in fluid solutions at room temperature when excited in the MMLL'CT band, the triplet state being responsible for the observed emission band centered at 715-720 nm. DFT calculations support the absorption, emission and CD properties of the enantiopure compounds. The conformationally stable [6]helicene enantiopure complexes show CPL activity. These results underline the interest of helical dithiolene ligands as a means to access chirality-related combined properties in the derived complexes, and open the way towards the preparation of other related compounds by the use of chiral bipyridines in combination with diverse helicene-dithiolenes in order to tune their photophysical properties.

Scheme 1 Synthesis of the Pt(bpy)(hel-dt) complexes 2a-b.

Fig. 3 Top: density difference plots between the lowest S1 and the S0 state as determined by TD-DFT on the optimal ground-state geometry. The blue (red) regions indicate regions of loss (gain) of density upon transition. Bottom: spin-density difference plots between the T1 and S0 states, considering the T1 state in its optimal geometry. (M)-2a and (P)-2b are displayed on the left- and right-hand side, respectively. See the SI for computational details.

Fig. 4 Absorption spectra of (P)-2b and (M)-2b and emission spectrum of (M)-2b in acetonitrile (2.5×10⁻⁴ M) at T = 298 K, λex = 532 nm (top); CD and CPL spectra of (P)-2b and (M)-2b (bottom).

Acknowledgements

This work was supported in France by the CNRS (GDR 3712 Chirafun), the University of Angers and the French Ministry of Education and Research (grant to T.B.). The investigation was supported in part by the University of Geneva and by the Swiss National Science Foundation (grant No 200020_152780). Magali Allain (University of Angers) is warmly thanked for help with the solid state structures.
[Source: HAL 01611001, 2015 — https://hal.science/hal-01611001/file/liu_19801.pdf]
Y. Liu, V. Vidal, S. Le Roux, F. Blas, F. Ansart, P. Lours

Influence of isothermal and cyclic oxidation on the apparent interfacial toughness in thermal barrier coating systems

In thermal barrier coatings (TBCs), the toughness relative to the interface lying either between the bond coat (BC) and the Thermally Grown Oxide (TGO) or between the TGO and the yttria-stabilized zirconia topcoat (TC) is a critical parameter regarding TBC durability. In this paper, the influence of aging conditions on the apparent interfacial toughness in Electron Beam-Physical Vapor Deposition (EB-PVD) TBCs is investigated using a specifically dedicated approach based on Interfacial Vickers Indentation (IVI), coupled with Scanning Electron Microscopy (SEM) observations, to create interfacial cracks and measure the extent of crack propagation, respectively.

Introduction

Thermal barrier coatings (TBCs) are typically used in key industrial components operating at elevated temperature under severe conditions, such as gas turbines or aero-engines, to effectively protect and isolate the superalloy metal parts, for instance turbine blades, against high temperature gases. Even though TBCs allow a drastic improvement of component performance and efficiency [Padture et al.; Goswami et al.], thermal strains and stresses resulting from transient thermal gradients developed during in-service exposure limit the durability of the multi-material system. TBCs exhibit a complex structure and morphology consisting of three successive layers, deposited or formed on the superalloy substrate (Fig. 1), i.e.
(i) the bond coat, standing as a mechanical bond between the substrate and the topcoat; (ii) the thermally grown oxide (TGO), an Al2O3 scale that forms initially by pre-oxidation of the alumina-forming bond coat, then slowly grows upon thermal exposure to protect the substrate from further high-temperature oxidation and corrosion; (iii) the ceramic topcoat (TC), made of yttria-stabilized zirconia (YSZ), the so-called thermal barrier coating itself, whose role is mainly to insulate the superalloy substrate from high temperatures. Electron Beam Physical Vapor Deposition (EB-PVD) and Air Plasma Spray (APS) are the two major coating processes implemented industrially for depositing YSZ. They generate different morphologies and microstructures and, consequently, different thermal and mechanical properties. The columnar structure, typical of EB-PVD deposition, shows an optimal thermo-mechanical accommodation of cyclic stress, resulting in high lateral strength. However, elongated (high aspect ratio) inter-columnar spaces, roughly normal to the TBC, assist thermal flux conduction and penetration through the top coat, which detrimentally increases the thermal conductivity of the system, which can reach 1.6 W/m·K. APS TBCs are characterized by a lamellar structure, intrinsically much more efficient in terms of thermal insulation (conductivity as low as 0.8 W/m·K) but less resistant to in-plane cyclic mechanical loading. Regardless of the coating process, TBCs can suffer in-service damage as a consequence of the synergetic effect of mechanical stress, high temperature and the thermally activated growth of interfacial alumina. Failure can either occur cohesively within the top coat for APS TBCs or adhesively at interfaces between successive layers in EB-PVD TBCs. Degradation of such systems usually occurs through spallation of the topcoat resulting from severe delamination either at the BC/TGO or the TGO/TC interface.
The resistance to spallation is intimately related to the capacity of interfaces of the complex TBC system to sustain crack initiation and propagation, which can be evaluated by measuring the interfacial toughness. Several methods have been proposed to achieve interfacial fracture toughness measurement for various substrate/coating systems, including "four point bending test" [START_REF] Thery | Adhesion energy of a YPSZ EB-PVD layer in two thermal barrier coating systems[END_REF], "barb test" [START_REF] Guo | Effet of interface roughness and coating thickness on interfacial shear mechanical properties of EB-PVD Yttria-Partially Stabilized Zirconia thermal barrier coating systems[END_REF], "buckling test" [START_REF] Faulhaber | Buckling delamination in compressed multilayers on curved substrates with accompanying ridge cracks[END_REF], "micro-bending test" [START_REF] Eberl | In Situ Measurement of the toughness of the interface between a thermal barrier coating and a Ni alloy[END_REF] and various indentation techniques [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF][START_REF] Lesage | Effect of thermal treatments on adhesive properties of a NiCr thermal sprayed coating[END_REF][START_REF] Choulier | Contribution à l'étude de l'adhérence de revêtements projetés a la torche a plasma[END_REF][START_REF] Vasinonta | Measurement of interfacial toughness in thermal barrier coating systems by indentation[END_REF][START_REF] Mao | Evaluation of microhardness, fracture toughness and residual stress in a thermal barrier coating system: a modified Vickers indentation technique[END_REF]. This paper proposes to implement the Vickers hardness technique to estimate the interfacial toughness in EB-PVD TBCs as well as its evolution upon various isothermal and cyclic aging conditions. 
As a matter of fact, aging may provoke microstructural changes and residual stress development prone to enhance crack initiation and propagation. A tentative correlation between the conditions of aging, the induced microstructural changes and the concomitant evolution of toughness, necessary to understand and predict the durability of TBC systems, is detailed.
Materials and testing conditions
TBC systems processed by EB-PVD (150 μm thick) are provided by SNECMA-SAFRAN. Topcoats and bond coats are industrial standards, respectively made of yttria-stabilized zirconia (namely ZrO2-8 wt.% Y2O3) and β-(Ni,Pt)Al. Substrates are AM1 single-crystal Ni-base superalloy disks, with a diameter of 25 mm and a thickness of 2 mm. All specimens are initially pre-oxidized to promote the growth of a thin protective Al2O3 scale. Samples are cut, polished and subsequently aged using various oxidation conditions prior to interfacial indentation. In addition to the as-deposited condition, two series of results are analyzed separately. The first series is relative to isothermal oxidation, following 100 h exposure at 1050 °C, 1100 °C and 1150 °C, respectively. As the exposure time is kept constant, the influence of the oxidation temperature can be specifically analyzed. The second series, performed at a given temperature (1100 °C), is dedicated to comparing isothermal and cyclic oxidation behavior. Here again, the hot time at 1100 °C (i.e., 100 h) is the same for both tests. Fig. 1 shows the typical cross-sectional microstructure of an initial as-deposited EB-PVD TBC. Note that, after aging, a slight additional grinding is often required to thoroughly prepare the surface for interfacial indentation.
Interfacial indentation test
Various types of interfacial or surface indentation tests exist.
They are performed either on the top surface of specimens, normal to the coating [START_REF] Davis | The fracture energy of interfaces: an elastic indentation technique[END_REF], or on a cross-section, either within the substrate close to the interface [START_REF] Colombon | Contraintes Résiduelles et Nouvelles Technologies[END_REF] or at the interface between the substrate and the coating [START_REF] Choulier | Contribution à l'étude de l'adhérence de revêtements projetés a la torche a plasma[END_REF]. The latter, further developed in [START_REF] Chicot | Apparent interface toughness of substrate and coating couples from indentation tests[END_REF], employs a pyramidal Vickers indenter and can be applied to a large range of coating thicknesses (greater than ∼100 μm). It is typically used for investigating the adhesion of TBC systems [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF][START_REF] Lesage | Effect of thermal treatments on adhesive properties of a NiCr thermal sprayed coating[END_REF][START_REF] Chicot | Apparent interface toughness of substrate and coating couples from indentation tests[END_REF][START_REF] Wu | Microstructure parameters affecting interfacial adhesion of thermal barrier coatings by the EB-PVD method[END_REF]. The principle of interfacial indentation is to accurately align one diagonal of the Vickers pyramid with the interface between the substrate and the coating while loading the system, so as to generate local delamination of the coating. When the applied indentation force is high enough, an induced crack with a roughly semi-circular shape propagates instantaneously. For a given aging condition, each indentation force P greater than a critical force P_c, which must be estimated, generates a crack with radius a and an indent imprint with radius b.
P_c and, correlatively, the critical crack length a_c cannot be determined by straightforward measurements; graphically, they correspond to the coordinates of the intercept between the apparent hardness line Ln(b)-Ln(P), showing the evolution of the imprint size versus the indentation force (master curve), and the Ln(a)-Ln(P) line, giving the evolution of the crack size versus the indentation force (Fig. 2). The apparent interfacial toughness (K_ca) is calculated as a function of the critical values according to the following relationship:

K_ca = 0.015 · (P_c / a_c^{3/2}) · [ (E/H)_B^{1/2} / (1 + (H_B/H_T)^{1/2}) + (E/H)_T^{1/2} / (1 + (H_T/H_B)^{1/2}) ]   (1)

where the subscripts B and T stand, respectively, for the bond coat and the top coat. In standard TBC systems, the thickness of the TGO is generally low, typically ranging from 0.7 μm (after initial pre-oxidation) to 7 μm (after long-term exposure at high temperature), and is in any case much lower than the imprint of the indent resulting from the force range used for coating delamination purposes (Fig. 3). As a consequence, the influence of the TGO in terms of mechanistic issues is deliberately neglected [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF]. However, it will be shown later that the thickness of the TGO has an influence on the location of crack initiation and, subsequently, on the propagation path.
Determination of Young's modulus and hardness
Young's modulus E and hardness H strongly depend on the chemical composition and the process-induced microstructure of materials. In multi-materials such as TBCs, these mechanical characteristics change as the composition changes throughout the entire thickness of the multi-layered system.
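The graphical intercept construction of Fig. 2 amounts to fitting two straight lines in log-log space and solving for their intersection, where the crack size meets the imprint size. A minimal numerical sketch of this step (with illustrative synthetic data, not the paper's measurements):

```python
import numpy as np

def critical_point(P_imprint, b, P_crack, a):
    """Fit Ln(b) and Ln(a) versus Ln(P) and return (P_c, a_c) at their intercept.

    P_imprint, b: loads and imprint radii (master curve of apparent hardness).
    P_crack, a:   loads and crack radii (indents that produced a crack).
    """
    # Linear fits in log-log space: ln(y) = slope * ln(P) + const
    s_b, c_b = np.polyfit(np.log(P_imprint), np.log(b), 1)
    s_a, c_a = np.polyfit(np.log(P_crack), np.log(a), 1)
    # Intersection of the two regression lines gives the critical load P_c
    lnPc = (c_b - c_a) / (s_a - s_b)
    Pc = np.exp(lnPc)
    ac = np.exp(s_a * lnPc + c_a)  # critical crack length at P_c
    return Pc, ac

# Illustrative synthetic data (arbitrary units): slope ~0.5 for the master
# curve, as in the paper, and a steeper crack-extension line.
P1 = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
b = 10.0 * P1**0.5                       # apparent-hardness (master) line
P2 = np.array([2.0, 4.0, 8.0, 16.0])
a = 5.0 * P2**1.2                        # crack-size line
Pc, ac = critical_point(P1, b, P2, a)
```

At the intercept the crack radius equals the imprint radius, which is exactly the graphical criterion sketched in Fig. 2.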
While the nature of the single-crystal substrate is essentially not affected by the overall deposition process, the morphology and microstructure of the top coat and, to a lesser extent, of the bond coat are strongly related to processing, which in turn affects the mechanical properties. Strictly speaking, the mechanical response of the system to interfacial loading should depend on the elastic and plastic properties of all materials involved, including those of the thermally grown oxide. However, a measurement of E and H of the growing oxide is not possible by means of standard micro- and nano-indentation. The model detailed in [START_REF] Chicot | Apparent interface toughness of substrate and coating couples from indentation tests[END_REF] requires the knowledge of these characteristic parameters for the substrate and the coating. Accordingly, the TGO is assumed to play the role of a (three-dimensional) interface, thickening as temperature exposure increases and promoting, when loaded, spallation along the (two-dimensional) interface it shares with either the topcoat or the bond coat. Young's modulus of the top coat E_T and the hardness of both the bond coat H_B and the topcoat H_T are measured using the nano-indentation technique, implementing a Berkovich indenter. Details of the method can be found in [START_REF] Jang | Hardness and Young's modulus of nanoporous EB-PVD YSZ coatings by nanoindentation[END_REF]. Basically, considering the indentation force applied, hardness is evaluated by a simple and direct measurement of the indent imprint dimensions. Young's modulus is calculated by analyzing the purely elastic recovery of the plot relating the applied force to the in-depth displacement. For statistical reasons, hardness and Young's modulus have been measured at 10 different locations within the bond coat and the top coat, respectively.
Average values, experimentally determined, are given below:
• E_B = 133 GPa
• E_T = 70 GPa
• H_B = 5.15 GPa
• H_T = 4.14 GPa
Results and discussion
Typical examples of interfacial indentation results are given in Fig. 3. The indent imprint alone (Fig. 3a) or the indent imprint plus the induced crack (Fig. 3b) are shown for cases where the critical force to provoke crack formation is not reached or exceeded, respectively. Fig. 4 gathers all data collected from experiments on as-deposited and isothermally oxidized TBCs. The linear relationship between Ln(P) and Ln(b), plotting the so-called master curve of apparent hardness with a slope close to 0.5, is in good agreement with the general standard formula relating the Vickers hardness (HV) of bulk materials to the ratio between the applied load P and the square of the indent diagonal length b^2. For a given oxidation temperature, the variation of the length a of the indentation-induced crack versus the applied load P also fits a single regression line on a log-log scale, which can serve (as indicated in Section 3) to evaluate the critical load P_c necessary to initiate interfacial detachment. Note that for aged specimens, the critical force P_c (corresponding to the abscissa of the intercept between the master curve and the Ln(a) vs Ln(P) plot) decreases as the oxidation temperature increases, thus indicating a thermally activated degradation of the interface. As a comparison, the as-deposited TBC can sustain a much higher load before suffering interfacial debonding. Quantitatively, the critical force is, respectively, 0.3 N for 100 h oxidation at 1150 °C, 0.8 N for 100 h oxidation at 1100 °C, 2 N for 100 h oxidation at 1050 °C, and about one order of magnitude higher (up to 16 N) for the as-deposited, non-oxidized TBC. Using Eq. (1) and the values of E and H reported in Section 4, the apparent interfacial toughness is calculated and detailed in Fig. 5 for all investigated cases.
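With the measured moduli and hardnesses above, Eq. (1) reduces to a one-line computation, since E and H enter only as dimensionless ratios. The sketch below uses the reported P_c = 0.8 N (100 h at 1100 °C) but a purely hypothetical critical crack radius a_c, which is not given in the text, so the numerical result is illustrative only:

```python
import numpy as np

def apparent_toughness_MPa(Pc_N, ac_m, E_B, H_B, E_T, H_T):
    """Eq. (1): apparent interfacial toughness K_ca in MPa*m^0.5.

    E and H appear only as ratios, so any consistent unit (here GPa) works;
    Pc is in N and ac in m, giving K_ca in Pa*m^0.5, converted to MPa*m^0.5.
    """
    term_B = np.sqrt(E_B / H_B) / (1.0 + np.sqrt(H_B / H_T))
    term_T = np.sqrt(E_T / H_T) / (1.0 + np.sqrt(H_T / H_B))
    return 0.015 * Pc_N / ac_m**1.5 * (term_B + term_T) / 1e6

# Measured averages reported in the paper (GPa)
E_B, H_B = 133.0, 5.15
E_T, H_T = 70.0, 4.14
# P_c = 0.8 N (100 h at 1100 C, from the text); a_c = 8 um is an assumption
K_ca = apparent_toughness_MPa(0.8, 8e-6, E_B, H_B, E_T, H_T)
```

With these assumed inputs the formula returns a value of a few MPa·m^0.5, the same order as the toughnesses discussed in the paper; the exact figure depends entirely on the hypothetical a_c.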
Of course, correlatively to the thermally activated decrease of the critical force discussed above, the toughness also decreases as the oxidation temperature increases. Besides, it was shown elsewhere [START_REF] Sniezewski | Thermal barrier coatings adherence and spallation: interfacial indentation resistance and cyclic oxidation behaviour under thermal gradient[END_REF] that for a given oxidation temperature, the interfacial degradation was similarly time-dependent. This unambiguously shows that the propensity of the coating to detach from the substrate results from complex solid-state diffusion processes that impair the mechanical integrity of the interface. Beyond the mechanistic approach, the fractographic analysis gives interesting information on the mechanisms of crack initiation and further propagation, which can both vary depending on the aging conditions. Indeed, the enhancement of thermal activation as the oxidation temperature is raised results in the growth of a thicker alumina scale at the interface between the bond coat and the top coat. This observation is consistent with results reported by Mumm et al. [START_REF] Mumm | On the role of imperfections in the failure of a thermal barrier coating made by electron beam deposition[END_REF], using a wedge imprint to generate delamination. It was shown that when the oxide thickness is lower than 2.9 μm, delamination extends predominantly within the TGO and TBC, whereas for thicknesses higher than 2.9 μm, degradation occurs along the interface between the bond coat and the TGO. Fig. 6 proposes a comprehensive map of cracking, which delimits, within a graph plotting oxide thickness versus oxidation temperature, the two domains of initiation and propagation, either at the BC/TGO or the TGO/TC interface. These two domains consistently extend on either side of the critical thickness value proposed by Mumm et al. as the threshold for crack propagation at the TGO/bond coat interface.
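The thickness criterion cited from Mumm et al. can be stated as a simple decision rule. Applied to the TGO thicknesses quoted later in the figure captions for each aging condition, it reproduces the two domains of Fig. 6 (a sketch, not the authors' code):

```python
def delamination_interface(tgo_thickness_um, threshold_um=2.9):
    """Predict the cracking interface from the TGO thickness, following the
    criterion cited from Mumm et al.: below ~2.9 um, cracks run along the
    TGO/top coat interface; above it, along the bond coat/TGO interface."""
    return "TGO/top coat" if tgo_thickness_um < threshold_um else "bond coat/TGO"

# TGO thicknesses (um) quoted in the paper for each aging condition
conditions = {"as-deposited": 0.5, "1050 C": 1.8, "1100 C": 3.4, "1150 C": 5.3}
predicted = {k: delamination_interface(t) for k, t in conditions.items()}
```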
Note that the diagram includes oxide thicknesses measured for as-deposited, isothermally oxidized and cyclically oxidized (100 "1 h" cycles at 1100 °C) specimens. The oxide thickness was accurately estimated using image analysis on cross-sectional SEM micrographs showing the TGO layer, by dividing the total area of the layer (expressed in μm^2) by the developed length of its median axis (expressed in μm). It was evaluated on thirty contiguous micrographs representing an equivalent length of 4.8 mm [START_REF] Le Roux | Quantitative assessment of the interfacial roughness in multi-layered materials using image analysis: application to oxidation in ceramic-based materials[END_REF]. The mechanism of crack initiation and propagation is strongly related to the stress level and distribution within the multi-layer system. Apart from oxide growth stress (due to the volume change as the oxide grows), the main cause of mechanical strain and stress is the thermal expansion mismatch between layers of different thermal expansion coefficients during cooling and thermal cycling. It is particularly critical upon cyclic oxidation, as cumulative heating plus cooling is prone to dramatically enhance degradation. Under isothermal exposure, oxide growth stresses may be considered the predominant contribution. According to Baleix et al. [START_REF] Baleix | Oxidation and oxide spallation of heat resistant cast steels for superplastic forming dies[END_REF], spallation at the metal/oxide interface in Cr2O3-forming heat-resistant cast steels occurs through so-called oxide buckling for scale thicknesses equal to or higher than 4 μm. In the frame of the strain-energy model for spallation, it is shown in this case that the interfacial fracture energy for buckling decreases from 5 J·m^-2 to 2 J·m^-2 as the oxide thickens from 7 μm to 9 μm. This suggests a progressive degradation of the interface as the oxidation mechanisms are enhanced through extended exposure time.
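The area-over-median-length thickness measure described above can be sketched on a binary segmentation of the TGO. For simplicity, this illustration approximates the developed median-axis length by the horizontal extent of the layer, which is only valid for a nearly flat layer (an assumption, not the paper's exact procedure):

```python
import numpy as np

def mean_layer_thickness(mask, px_um):
    """Mean thickness of a roughly horizontal layer in a binary mask.

    Approximates the paper's (layer area) / (median-axis length) measure by
    (layer area) / (horizontal extent), valid only for a flat layer.
    """
    area_um2 = mask.sum() * px_um**2                 # total layer area
    cols_with_layer = (mask.sum(axis=0) > 0).sum()   # columns containing layer
    length_um = cols_with_layer * px_um              # projected length
    return area_um2 / length_um

# Synthetic mask: 1000 columns, a band 34 px thick, 0.1 um/px -> 3.4 um
mask = np.zeros((200, 1000), dtype=bool)
mask[80:114, :] = True
t = mean_layer_thickness(mask, px_um=0.1)
```

For a real, undulated TGO the median-axis (skeleton) length should be used instead of the projected length, otherwise the thickness is overestimated wherever the layer folds.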
Formally, the buckling of the oxide prior to detachment at the onset of spallation is very comparable to the generation of cracks at the interface between the bond coat and the TGO promoted by indentation in TBC systems. Indeed, depending on the oxide thickness and the strength of the metal/oxide interface, two different routes for spallation are generally reported. In the case of a strong interface with thin oxides, i.e., a high toughness or high fracture energy, compressive shear cracking develops in the oxide. Consequently, detachment of oxide particles occurs by wedge cracking. In the case of a weak interface with thicker oxides, i.e., low toughness or fracture energy, the oxide may detach from the metal in the form of buckles. Evans et al. [START_REF] Evans | The influence of oxides on the performance of advanced gas turbines[END_REF] showed elsewhere that the energy stored within the TGO increases as the TGO thickens and contributes only to delamination at the TGO/bond coat interface. As a consequence, the fracture energy or toughness of the TGO/bond coat interface may decrease as oxidation proceeds and the oxide thickens. Beyond a given oxide thickness threshold, namely around 3 μm for 150 μm thick EB-PVD TBCs, the toughness of the TGO/bond coat interface becomes lower than that, essentially unchanged, of the TGO/topcoat interface, leading to a change in delamination location. In addition, as indicated in Fig. 2, the slopes of the various lines plotted for the various aging conditions in order to determine the critical force to initiate interfacial cracking are different. This clearly indicates that, whatever the critical force, extending a crack requires more or less mechanical energy depending on the configuration, i.e., the specific morphology of the interface.
To address this, a straightforward image analysis methodology detailed in [START_REF] Le Roux | Quantitative assessment of the interfacial roughness in multi-layered materials using image analysis: application to oxidation in ceramic-based materials[END_REF] is applied to SEM cross-sectional micrographs to determine the roughness of the internal interfaces, between the bond coat and the TGO, and between the TGO and the top coat (Fig. 7). According to this approach, a rumpling or folding index is defined as the ratio between the developed length of the interface and its horizontal projected length, measured on 30 contiguous micrographs corresponding to an equivalent length of 4.8 mm. This index is a relevant indicator of the tortuosity of the interfaces. Fig. 7 gives values for both the TGO/top coat interface (upper profile) and the bond coat/TGO interface (lower profile), as well as normalized values (obtained by dividing the values by the reference one, for the as-deposited TBC). Both isothermal aging (100 h at 1150 °C, 1100 °C and 1050 °C) and cyclic aging (100 cycles of 1 h at 1100 °C) are investigated. Cross-sectional SEM micrographs illustrating the evolution of the thickness and morphology of the TGO for the various isothermal oxidation temperatures are also shown in Fig. 7. Note that the interfacial corrugation of the as-deposited TBC is significant, both for the upper and lower profiles, which accounts directly for the roughness of the initial substrate. However, the upper profile is slightly smoother than the lower profile, suggesting a leveling effect of the initial oxidation intrinsic to the EB-PVD deposition process. In all cases, aging results in an enhancement of the upper profile tortuosity: the higher the oxidation temperature, the more pronounced the associated folding effect. The evolution of the lower profile is more complex to analyze. Indeed, aging at 1050 °C leads to a significant decrease (about 10%) with respect to the initial value of the as-deposited TBC.
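The folding index defined above (developed length of the interface divided by its projected length) is straightforward to compute from a digitized profile. A minimal sketch with synthetic profiles, not the paper's data:

```python
import numpy as np

def folding_index(x_um, y_um):
    """Rumpling (folding) index: developed profile length over its horizontal
    projection, as defined in the text. A perfectly flat interface gives 1."""
    developed = np.hypot(np.diff(x_um), np.diff(y_um)).sum()
    projected = x_um[-1] - x_um[0]
    return developed / projected

x = np.linspace(0.0, 100.0, 2001)          # 100 um of digitized interface
flat = folding_index(x, np.zeros_like(x))  # flat profile -> index of 1
wavy = folding_index(x, 2.0 * np.sin(2 * np.pi * x / 10.0))  # undulated profile
```

A sinusoidal undulation of 2 μm amplitude and 10 μm wavelength already raises the index by roughly a third, illustrating why the index discriminates well between smooth and corrugated interfaces.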
The interface between the oxide and the bond coat becomes smoother as oxidation progresses, indicating a total absence of interfacial folding. This observation is not consistent with results presented in [START_REF] Tolpygo | On the rumpling mechanism in nickel-aluminide coatings part II: characterization of surface undulations and bond coat swelling[END_REF], which report the occurrence of rumpling even under isothermal oxidation. For aging at 1100 °C and 1150 °C, Linf/L, though remaining lower than the reference value, is higher than at 1050 °C. Note that for cyclic oxidation at 1100 °C, the tortuosity of the bond coat/TGO interface is slightly greater than that of the reference, as-deposited TBC and of the TBC aged at the same temperature over the same hot time upon isothermal oxidation (Fig. 7). Globally speaking, the normalized folding index, indicating the propensity of the multi-material system to rumpling, remains lower than 1 for the interface between the bond coat and the TGO. This clearly indicates that oxidation over short-term exposure, either isothermal or cyclic, does not provoke any corrugation of this interface. In contrast, the interface between the TGO and the top coat tends to undulate as it is exposed to high temperature, under either isothermal or cyclic conditions. It is, however, unusual to evaluate rumpling by considering the interface between the TGO and the top coat; it is much more common to monitor the evolution of the bond coat/TGO interface. It can be assumed that (i) rumpling is negligible, or at least little pronounced, under isothermal oxidation, and (ii) thermal aging under 100 cycles, though prone to generate more degradation than isothermal exposure, is not sufficient to provoke significant rumpling.
Though equivalent in terms of hot time, cyclic oxidation (1 h cycles) does not further degrade either the interfacial toughness or the tortuosity of the interface, which, taking into account the assumedly severe, highly constraining effect of the cooling phases of the cycles, is probably due to the low number of cycles imposed on the TBC. Folding effects and the subsequent crack propagation routes can be related to the evolution of oxide thickness. Indeed, for thin oxides, typically with a thickness lower than 2.9 μm, as for the as-deposited and 1050 °C oxidized TBCs, indentation-induced cracks initiate and further propagate at the TGO/top coat (outer) interface, which exhibits in both cases an apparent toughness higher than 3 MPa·m^0.5. This suggests a high adherence of the TGO, in good agreement with previous results commonly reported in the literature. Between the reference, as-deposited TBC and the TBC oxidized at 1050 °C, a large difference in the folding index is however noted. It is almost 20% higher in the second case, as a consequence of a significant roughening of the interface upon oxidation. Though the location of crack initiation and propagation is the same for the two conditions, the evolution of the indentation force versus the size of the crack produced is highly specific to each case, as it is directly related to the tortuosity of the interface. Indeed, the increase in force is much less pronounced for the smoothest interface, corresponding to the case of the as-deposited, non-aged TBC, and reciprocally. For oxides thicker than 2.9 μm, in the case of TBCs oxidized at 1100 °C and 1150 °C, indentation-induced cracks propagate at the bond coat/TGO (inner) interface. For these two cases, the apparent toughness of the involved interface is lower than 2.6 MPa·m^0.5. While thickening, the alumina scale progressively loses adhesion to the bond coat, which transfers the location of crack initiation and propagation in accordance with commonly accepted models for spallation.
The folding index of this inner interface is similar in both cases and very close to that of the outer interface of the as-deposited TBC. This results in a similar tortuosity of the interfaces (either inner or outer) where cracks form and extend, and accounts for the similar evolution of the experimentally established "required force" versus "size of crack produced" plots. While thickening, the thermally grown oxide develops non-uniformly, as clearly shown in the micrographs of Fig. 7. Preferential growth of oxide can occur in zones, typically within the inter-columnar spaces of the EB-PVD TBC, where the oxidation kinetics is faster as more room is available for the oxide to develop. The interface profile generated by this inhomogeneous growth shows local excrescences, clearly visible in Fig. 7c. The occurrence of such protrusions, whose formation is thermally activated, can have various consequences, with opposite effects, on the mechanical strength of the interface. Indeed, an increase in interface tortuosity may contribute to a loss of adhesion as the result of local mechanical stresses and stress concentrations responsible for enhanced crack initiation. Once initiated, and to further degrade the system, cracks have to propagate. However, it is assumed that the presence of local excrescences acting as mechanical pegs can limit the propagation, thus preventing early spallation.
Conclusion
The adhesion and counterpart spallation of EB-PVD TBC systems is investigated using various approaches, including isothermal and cyclic oxidation at various temperatures and interfacial indentation of both as-deposited and oxidized systems. The latter characterization is dedicated to evaluating the apparent toughness shown by the interface between the inner bond coat (β-NiPtAl) and the outer top coat (yttria-stabilized zirconia) before and after thermal aging or cycling.
For the as-deposited TBC, which is only pre-oxidized for a short term to promote the formation of a dense, slowly growing alumina scale acting as a diffusion barrier, the interfacial TGO is rather thin (less than 0.5 μm). Upon aging, the TGO layer grows according to roughly parabolic kinetics. In all cases, two distinct interfaces, formed between the bond coat and the TGO (inner interface) and between the TGO and the top coat (outer interface), respectively, must be considered. Driven by the growth of the TGO layer, both interfaces undergo morphological and roughness changes as the TGO thickens. The tortuosity of the interfaces, observed by SEM in cross-sections, is quantified by a folding or rumpling index estimated using image analysis. It is shown that both the oxide thickness and the folding index of the inner and outer interfaces have a strong impact on the localization of indentation-induced crack initiation, the propagation path of a crack once initiated, and the ease or difficulty with which a crack propagates. The apparent toughness deduced from interfacial indentation decreases as the 100 h isothermal oxidation temperature increases from 1050 °C to 1150 °C, indicating a progressive, thermally activated propensity for the degradation of TBC systems, as obviously expected. The apparent interfacial toughness, controlled by the interfacial roughness, the TGO thickness and, mostly, by the temperature and duration of isothermal or cyclic oxidation, is a key parameter to address the mechanics and mechanisms of crack initiation and propagation prior to the detrimental spallation of TBC systems. It is of course not the sole parameter entering into the implementation of possible models to predict TBC lifetime.
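The roughly parabolic growth kinetics mentioned above corresponds to h(t) = sqrt(h0^2 + kp·t). As an illustration only, a rate constant can be back-calculated from the thicknesses quoted in the figure captions for 1100 °C (0.5 μm after pre-oxidation, 3.4 μm after 100 h); the interpolated intermediate value below is a sketch, not a reported measurement:

```python
import numpy as np

def tgo_thickness_um(t_h, h0_um, kp_um2_per_h):
    """Parabolic oxide growth law: h(t) = sqrt(h0^2 + kp * t)."""
    return np.sqrt(h0_um**2 + kp_um2_per_h * t_h)

# Illustrative rate constant back-calculated from the paper's quoted values
# at 1100 C: 0.5 um after pre-oxidation -> 3.4 um after 100 h of hot time.
h0, h_100h = 0.5, 3.4                      # um
kp = (h_100h**2 - h0**2) / 100.0           # um^2 per hour
h_50h = tgo_thickness_um(50.0, h0, kp)     # interpolated thickness at 50 h
```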
Further improving the understanding of TBC behavior under severe oxidation exposure would require considering the fine variations of microstructural details and the evolution of the stored elastic strain energy within each individual layer (Ni-base single crystal, β-NiPtAl bond coat, Al2O3 TGO and yttria-stabilized zirconia) and from one constitutive layer to another, as well as the substrate geometry, to get closer to real in-service conditions.
Fig. 1. SEM micrographs in cross-section of an EB-PVD TBC specimen.
Fig. 2. Schematic representation of the intercept graphical method to determine the critical load required to generate an interfacial crack [Ln(a) and Ln(b) are, respectively, plotted versus Ln(P)].
Fig. 3. SEM micrographs of indented samples oxidized 100 h at 1150 °C: (a) indentation load 0.981 N (corresponding to 100 g); (b) indentation load 2.943 N (corresponding to 300 g).
Fig. 4. Determination of the critical loads to initiate interface cracking by Vickers indentation, where a and b are plotted versus P using a logarithmic scale.
Fig. 5. Variation of the interfacial toughness as a function of aging temperature. For as-deposited TBCs and TBCs aged at 1050 °C (i.e., for thin Al2O3 oxide scales, respectively 0.5 μm and 1.8 μm), cracks propagate preferentially along the interface between the TGO and the top coat. Conversely, for TBCs aged at 1100 °C and 1150 °C, with thick Al2O3 oxide scales (3.4 μm and 5.3 μm, respectively), cracks propagate predominantly along the TGO/bond coat interface.
Fig. 6. Map of cracking as a function of oxidation conditions, based on an oxide thickness criterion. The two symbol types correspond to TGO thickness for isothermal (100 h) and cyclic (1100 °C, 100 cycles of 1 h) oxidation, respectively.
Fig. 7.
Cross-sectional SEM micrographs of specimens (a) as-deposited, (b) aged 100 h at 1050 °C, (c) aged 100 h at 1100 °C, (d) aged 100 h at 1150 °C; (e) variation of the interfacial folding index as a function of aging conditions. Note that cracks locate preferentially at the TGO/top coat interface for the as-deposited specimen and the specimens aged at 1050 °C (relevant parameter Lsup/L), and at the bond coat/TGO interface for the specimens aged at 1100 °C and 1150 °C (relevant parameter Linf/L).
01753129
en
[ "sdv.bbm" ]
2024/03/05 22:32:10
2017
https://theses.hal.science/tel-01753129/file/SEEFELDT_ALEXANDRA_CAROLIN_2017.pdf
Bärbel Seefeldt, Trude Kurz, Rolf Kurz, Frieder Kurz, A. Carolin Seefeldt, Michael Graf, Natacha Pérébaskine, Fabian Nguyen, Stefan Arenz, Mario Mardirossian, Marco Scocchi, Daniel N. Wilson email: [email protected], C. Axel Innis
Structure of the mammalian antimicrobial peptide Bac7(1-16) bound within the exit tunnel of a bacterial ribosome
"The way to get good ideas is to get lots of ideas and throw the bad ones away."
During the last three years, I was often asked why I decided to come to Bordeaux, and my answer was always the same: I wanted to work on ribosomes and structural biology, and I chose the project, not the city. By now, I know that this city and its location are just incredible, and I had a great opportunity to get to know the South-West of France. The last three years have been a wonderful journey, and I had the honor to work with so many inspiring people; here I would like to take the opportunity to acknowledge everyone. Firstly, I would like to express my sincere gratitude to my thesis supervisor Axel Innis for giving me this great opportunity to come to Bordeaux and to work on these wonderful projects. It has been my pleasure and honor to be your first PhD student. Your passion for science was inspiring. Your constant guidance and our knowledge exchanges helped me throughout my research and writing, and inspired me to pursue an academic career. Next, I am especially grateful to Britta Seip. I will never forget your welcoming smile when I first arrived in Bordeaux. From the start, you were there to discuss with me and give me constant input throughout the entire time of my thesis and during the writing process. You were there when I needed a push to finish an experiment or to postpone it to the next day. I would like to thank you for being an inspiring colleague and, more importantly, for becoming a wonderful and close friend. It was a lot of fun to discover France with you.
I owe a special thanks to Kishore Inampudi for establishing the flexizyme reaction and tRNAi Met purification, for cloning tRNA Pro into pUC19 and for cloning the proline-tRNA synthetase. Thanks to your work, I had a huge advantage when I started my thesis. A special thank-you goes to Natacha Pérébaskine for ribosome and tRNA purification and for teaching me how to crystallize ribosomes. Our synchrotron trips were always special, and we often came back to Bordeaux with more than one exciting story to tell. I had a great time working with you. I would like to thank Alba Herrero del Valle for her support and positive energy. We had a lot to talk and laugh about. You always found a new way to improve my figures and slides and make them look better 😊. I had a great time working with you and I am happy to call you my friend. I want to thank Justine Charon for helping me write the French sections and for bringing a great spirit into the lab. I want to thank Guénaël Sacheau for his help with sequencing reactions, for correcting the French sections of my thesis and for providing a good mood, including "OOOOOh yeah", tons of Despacito and "Je peux pas, j'ai aqua cloning". A special thanks to Mélanie Gillard, Elodie Leroy and Aitor Manteca. I know you just arrived, but it has been a fun time working with you and spending time with you during lunch and coffee breaks. A big thanks to Elodie Leroy for her help with the summary in French. I want to acknowledge Gilles Guichard and his group for a fruitful collaboration over the last years that directly led to many positive results. I want to thank in particular Caterina Lombardo and Christophe André for the synthesis of the flexizyme compounds and for constant discussions on how to increase the yield of the peptidylation and to overcome problems with proline. Furthermore, I would like to thank Céline Douat and Stéphanie Antunes for their work during the Onc112 project. It was a great pleasure to work with you.
I would like to thank Fabian Nguyen, Michael Graf and Stefan Arenz for their work during the PrAMP project. A special thanks to Marco Scocchi and Mario Mardirossian for working with us on the further characterization of PrAMPs. I also owe thanks to Alexander (Shura) Mankin and his team for their helpful input. I especially want to thank Nora Vazquez-Laslop; it was always inspiring to discuss with you during conferences. A special thanks to Tanja Florin and James Marks III for your ideas to improve my toeprinting experiments. My sincere thanks to Thomas Steitz and his group for providing plasmids, protocols and advice for tRNA purification. I want to thank Derek McCusker and his team for providing material and Emmanuelle Schmitt for providing the pBSTNAV plasmids for tRNA purification. A special thanks to Brice Kaufmann and Stéphane Massip for their support during crystal freezing and fishing; it was always a very exciting moment after weeks of sample preparation. For crystal screening and data collection, I want to acknowledge Pierre Legrand and Leonard Chavas from the PX1 beamline and William Shepard and Martin Savko from the PX2a beamline at the Soleil synchrotron for their advice and support during data processing. I learned a lot from all of you. I started with cryo-EM in January 2017 without experience in sample preparation, data collection or data processing. I want to thank Armel Bézault for training me in sample preparation and data collection. A special thanks to Rémi Fronzes and his team for sharing the cluster for data processing and for constant discussions throughout all the steps. In particular, I want to thank Chiara Rapisarda for long and extensive discussions about structural biology and life and for giving me tons of positive energy. I want to thank Valérie Gabélica and Frédéric Rosu for the mass spectrometry analysis of tRNA Pro and for their support during data interpretation.
My longest and most constant lunch group member, Lionel Beaurepaire, more than once had an idea that helped me get further with my experiments. Sometimes a lunch break can solve most of the problems with experiments. More importantly, thank you for listening and for teaching me about France. Nearly every morning (except when she was on holidays) I was greeted with a warm smile; thank you, Mariline Yot, for being there and for being my best French teacher. I want to acknowledge Myriam Mederic for her positive energy every time she came to our lab. A special thanks to Patricia Martin and Kati Ba-Pierozzi for helping me with all the administration, refunds and mission orders. Special thanks to Gerald Canet and Eric Roubin for solving all the computer problems I was facing during the last years. The printer was always happy to see you, too. I want to thank the whole unit Inserm U1212, CNRS UMR5320 under the direction of Jean-Louis Mergny. It was a pleasure to discuss and exchange knowledge across different areas of expertise. In particular, I want to thank Fabien Darfeuille, Cameron Mackereth and Denis Dupuy for answering a lot of questions. In addition, I want to thank the team of Martin Teichmann, in particular Stéphanie Durrieu, Camila Perrot and Wiebke Bretting, for their help improving protocols. I would like to acknowledge all members of the JJC organization committees of 2015 and 2016; it was a great experience to work together and organize two exciting conferences. I want to thank in particular Diane Bécart for a wonderful friendship, 1000000 PhD student dinners, and long discussions about life and science. I loved playing music with you and just being around you. Thank you for convincing me to start dancing Bachata; it was a great experience each time, even if I am not the most talented dancer. I want to thank Birgit Habenstein and Joséphine Abi-Ghanam for their advice, wonderful discussions and constant support.
Furthermore, I want to thank Laura Mauran for hours of surfing and wonderful times at the Sunday market in Pessac. A special thanks to Sonia Cuidad, Martí Ninot, Eduard Puig and Thomas Perry for their constant support and encouragement and for increasing my knowledge about penguins. It was always lots of fun to do things together. A big thank-you goes to my family, who always encouraged and supported me in finding my own way and in asserting myself against any resistance. I want to thank my mother, Bärbel Seefeldt, for her impressive perseverance and for always lending me an open ear, even when I insisted on having my own way. I want to thank my grandmother, Trude Kurz, for always finding the right words, whether in German or in French. I want to thank my grandfather, Rolf Kurz, for sharing his enthusiasm for the natural sciences with me and for teaching me a great deal. In addition, I want to thank my uncle, Frieder Kurz, for always having a piece of advice for me. Without you, this work would not have been possible. I also want to thank my longest-standing friend, Anna Neubauer, for accompanying me through all the highs and lows so far and for simply being happy whenever I call. Studying chemistry and biochemistry made it possible to meet Thomas Leissing, Michaela Wipper, Katharina Essig, Stefanie Herrmann, Kerstin Lippl and Janina Andreß. I want to acknowledge them for their constant support throughout my studies, but more importantly for their friendship. In these three years, you showed me that no matter where we live, we will find a way to stay in touch. Five years ago, I went to Riverside, California, and was lucky to meet Tamara Mielke. I want to thank you for being my best friend ever since. I cannot thank you enough for listening through all the ups and downs of a thesis and for going back with me to California.
You are the person with whom I hiked on a volcano within a volcano, found our inner sunflower, and sat in a kayak surrounded by sea otters, sea lions and humpback whales. I also want to thank Raissa and Jarren Kay for their constant support and Nicola Cellini for reminding me of my personal reasons to keep going with science. I would like to thank all the members of the croq'notes choir for being the best French class and wonderful company, singing with great joy every Tuesday evening and travelling through Italy together. In particular, I would like to thank Anne-Marie Garcia for being one of the best choir directors I have worked with and for motivating us to take part twice in the Tutti project.

Conventions

mRNA, DNA oligos and Open Reading Frames (ORFs) are listed from the 5' to the 3' end. For standard amino acids, the three-letter or one-letter code is used. Amino acid positions within the growing peptide chain are described using the amino acid attached to the P-site tRNA as position 0. The numbering decreases towards the N-terminus, and amino acids that have not yet been incorporated are labeled with increasing numbers towards the stop codon. This is illustrated in the following figure: Arrest peptide sequences are listed using the one-letter code, starting from the N-terminus. The amino acid located in the A-site is reported in brackets. Ribosomal residues are listed using the Escherichia coli (E. coli) numbering. For toeprinting experiments, sequencing lanes are located on the left, with the bases in the following order: CUAG. The following lanes are the individual reactions and will be referred to by number, with the lowest number next to the sequencing lanes. Structures of arrested ribosomes are referred to as arrest peptide-70S structures.
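The position-numbering convention above can be sketched as a small helper (a hypothetical illustration, not part of the thesis; the function name and the toy peptide are assumptions):

```python
def label_positions(peptide, p_site_index):
    """Number residues relative to the amino acid attached to the P-site tRNA.

    The P-site residue is position 0; numbering decreases towards the
    N-terminus and increases towards residues not yet incorporated.
    """
    return {i - p_site_index: aa for i, aa in enumerate(peptide)}

# Toy chain 'MKFW' with 'F' on the P-site tRNA and 'W' about to enter the A-site
labels = label_positions("MKFW", 2)
# -> {-2: 'M', -1: 'K', 0: 'F', 1: 'W'}
```

The same mapping applies to any arrest peptide: the residue in the A-site always carries position +1.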
Gene expression and the bacterial ribosome

The central dogma of molecular biology was elucidated by Francis Crick in 1958 [START_REF] Crick | On protein synthesis[END_REF] and summarizes the general steps from a gene to a functional protein. The genetic material is encoded as deoxyribonucleic acid (DNA) and is transcribed into messenger ribonucleic acid (mRNA) by RNA polymerases (RNAP). Subsequently, the mRNA is translated into the corresponding amino acid (aa) sequence by the ribosome [START_REF] Crick | Central Dogma of Molecular Biology[END_REF]. Over the years, discoveries such as non-coding RNAs [START_REF] Blattner | The Complete Genome Sequence of Escherichia coli K-12[END_REF] or post-transcriptional and post-translational modifications revealed the complexity of this process [START_REF] Mohanty | The majority of Escherichia coli mRNAs undergo posttranscriptional modification in exponentially growing cells[END_REF]. Ribosomes were first observed in electron microscopy (EM) micrographs of mammalian cell cross-sections and were described as dense particles or granules [START_REF] Palade | A small particulate component of the cytoplasm[END_REF]. Further investigations showed that ribosomes are abundant in all kingdoms of life and are the key players in protein biosynthesis, as reviewed by Schmeing and Ramakrishnan, 2009. In prokaryotes, ribosomes are 2.5-megadalton (MDa) ribonucleoprotein complexes, termed 70S (Svedberg) ribosomes, which dissociate into two subunits at low concentrations of divalent magnesium cations (Mg 2+) [START_REF] Chao | Dissociation of macromolecular ribonucleoprotein of yeast[END_REF]. In Escherichia coli, the large (50S) subunit consists of 34 proteins, the 23S ribosomal RNA (rRNA) and the 5S rRNA. The length of the 23S rRNA is approx. 2900 nucleotides (nt), while the length of the 5S rRNA is approx. 120 nt. The small (30S) subunit consists of an approx.
1500 nt long 16S rRNA and 21 proteins [START_REF] Ban | A new system for naming ribosomal proteins[END_REF][START_REF] Kurland | Molecular characterization of ribonucleic acid from Escherichia coli ribosomes[END_REF]Schuwirth et al., 2005;[START_REF] Wilson | Ribosomal Proteins in the Spotlight[END_REF], (Figure 2). Among those 55 proteins, 34 are conserved in all kingdoms of life and 44 are conserved among bacteria (a study that included 995 bacterial genomes) [START_REF] Yutin | Phylogenomics of Prokaryotic Ribosomal Proteins[END_REF]. During translation, the mRNA sequence is translated into the corresponding amino acid sequence utilizing amino acid-specific transfer RNAs (tRNAs) as adaptors [START_REF] Crick | On protein synthesis[END_REF][START_REF] Hoagland | A soluble ribonucleic acid intermediate in protein synthesis[END_REF]. The ribosome has three distinct binding positions for tRNAs: the aminoacyl site (A-site) is the binding site for the aminoacylated tRNA, the peptidyl site (P-site) is the position in which the tRNA is connected to the nascent chain, and the exit site (E-site) is the position from which the deacylated tRNA is released [START_REF] Agrawal | Direct Visualization of A-, P-, and E-Site Transfer RNAs in the Escherichia coli Ribosome[END_REF]. The decoding process is carried out by the small subunit [START_REF] Ogle | Insights into the decoding mechanism from recent ribosome structures[END_REF][START_REF] Wimberly | Structure of the 30S ribosomal subunit[END_REF], whereas peptide bond formation is catalyzed in the large subunit within the peptidyl transferase center (PTC) [START_REF] Maden | Ribosome-catalysed peptidyl transfer: the polyphenylalanine system[END_REF][START_REF] Monro | Catalysis of peptide bond formation by 50S ribosomal subunits from Escherichia coli[END_REF][START_REF] Traut | The puromycin reaction and its relation to protein synthesis[END_REF]. A structure of the 70S E.
coli ribosome is shown in the following figure: The peptidyl transferase center (PTC) is mainly composed of rRNA, and so the ribosome was termed a ribozyme, meaning that catalysis is performed by RNA [START_REF] Ban | The complete atomic structure of the large ribosomal subunit at 2.4 Å resolution[END_REF]Nissen et al., 2000;[START_REF] Noller | Peptidyl transferase: protein, ribonucleoprotein, or RNA?[END_REF]. From the PTC, the nascent peptide chain passes through the ribosomal exit tunnel to reach the cytoplasm. This exit tunnel, spanning the 50S subunit, was first described in initial cryo-electron microscopy (cryo-EM) structures [START_REF] Frank | A model of protein synthesis based on cryoelectron microscopy of the E. coli ribosome[END_REF]. The ribosomal exit tunnel is about 100 Å long [START_REF] Nissen | The structural basis of ribosome activity in peptide bond synthesis[END_REF], which is equivalent to a growing peptide chain of 30-35 aa, as revealed by a proteolytic protection assay [START_REF] Malkin | Partial resistance of nascent polypeptide chains to proteolytic digestion due to ribosomal shielding[END_REF]. The ribosomal tunnel features different regions, subdivided into the upper tunnel, the lower tunnel and the vestibule (Figure 3). It is mainly formed of 23S rRNA, and at its narrowest point (the constriction site) the ribosomal proteins L4 and L22 reach into the cavity [START_REF] Gabashvili | The polypeptide tunnel system in the ribosome and its gating in erythromycin resistance mutants of L4 and L22[END_REF]. The average diameter of the tunnel is 15 Å, with a minimum of 10 Å at the constriction site [START_REF] Nissen | The structural basis of ribosome activity in peptide bond synthesis[END_REF][START_REF] Voss | The Geometry of the Ribosomal Polypeptide Exit Tunnel[END_REF].
The diameter of the tunnel is sufficient to allow co-translational folding of α-helices in the upper and lower tunnel [START_REF] Bhushan | α-Helical nascent polypeptide chains visualized within distinct regions of the ribosomal exit tunnel[END_REF][START_REF] Lu | Folding zones inside the ribosomal exit tunnel[END_REF][START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF][START_REF] Voss | The Geometry of the Ribosomal Polypeptide Exit Tunnel[END_REF] and even the formation of tertiary structure in the vestibule [START_REF] Nilsson | Cotranslational protein folding inside the ribosome exit tunnel[END_REF]. Figure 3 shows a schematic view of the tunnel [START_REF] Seip | How Widespread is Metabolite Sensing by Ribosome-Arresting Nascent Peptides?[END_REF]: To ensure the production of all encoded proteins in equal amounts, it was originally hypothesized that the tunnel wall has "Teflon-like" properties, meaning no interactions between the nascent chain and the tunnel [START_REF] Nissen | The structural basis of ribosome activity in peptide bond synthesis[END_REF]. However, studies have shown that certain nascent peptides can interact with the tunnel and induce nascent chain-mediated translational arrest of the translating ribosome, as reviewed by [START_REF] Ito | Arrest peptides: cis-acting modulators of translation[END_REF][START_REF] Seip | How Widespread is Metabolite Sensing by Ribosome-Arresting Nascent Peptides?[END_REF][START_REF] Wilson | Translation regulation via nascent polypeptide-mediated ribosome stalling[END_REF] (Figure 3). Although the catalytic activity of the ribosome is mainly performed by rRNA, additional ribosomal proteins and protein factors are needed to increase the processivity of the translational cycle, as reviewed by Schmeing and Ramakrishnan, 2009.
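The tunnel dimensions quoted above can be cross-checked with a back-of-the-envelope calculation; the per-residue rise for an extended chain (~3.0-3.5 Å/aa) is an assumed textbook value, not a figure from the thesis:

```python
# Consistency check: a ~100 Å exit tunnel should hold roughly 30-35
# residues of an extended nascent chain (rise per residue assumed).
tunnel_length = 100.0  # Å, from the structural data cited above

for rise in (3.0, 3.5):  # Å per residue, assumed extended-chain geometry
    residues = tunnel_length / rise
    print(f"rise {rise} Å/aa -> ~{residues:.0f} residues shielded")
```

The result (~29-33 residues) is consistent with the 30-35 aa protected from proteolysis in the assay cited above.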
Many classes of antibiotics target the bacterial ribosome, in particular the PTC and the ribosomal tunnel. As antibiotic resistance is becoming a greater threat, it is important to understand the molecular regulation of the bacterial ribosome as well as the mode of action of antibiotics for further drug development, as reviewed by Wilson (2009, 2014).

The prokaryotic translation cycle

The bacterial translation cycle can be divided into initiation, elongation and termination phases, the latter being divided into the release of the produced protein and the recycling of the ribosomal subunits, as reviewed by Schmeing and Ramakrishnan, 2009. Figure 4 illustrates the translation cycle, including all essential protein factors:

Initiation

The rate-limiting step of the translation cycle is the initiation step. During this step, as reviewed by [START_REF] Laursen | Initiation of Protein Synthesis in Bacteria[END_REF], both ribosomal subunits and the initiator tRNAi Met need to be recruited to the start codon and assembled. During initiation, three initiation factors (IF1, 2, 3) ensure the correct assembly of the ribosome on the mRNA (Figure 4). At the 5' end of the open reading frame (ORF), the mRNA contains a ribosomal binding site (RBS), termed the Shine-Dalgarno (SD) sequence in bacteria, which lies six to seven nucleotides (nt) upstream of the start codon [START_REF] Ringquist | Translation initiation in Escherichia coli: sequences within the ribosome-binding site[END_REF][START_REF] Shine | The 3′-terminal sequence of Escherichia coli 16S ribosomal RNA: complementarity to nonsense triplets and ribosome binding sites[END_REF]. The consensus SD sequence is AGGAGG.
The 3' end of the 16S rRNA in the small ribosomal subunit contains an anti-SD sequence that forms base pairs with the SD sequence, resulting in the correct positioning of the small subunit [START_REF] Shine | The 3′-terminal sequence of Escherichia coli 16S ribosomal RNA: complementarity to nonsense triplets and ribosome binding sites[END_REF]. In E. coli, 82.6% of genes start with an "AUG", encoding Met, as a start codon and are initiated by tRNAi Met . Alternative start codons in E. coli are GUG (14.7%, Val), UUG (3.0%, Leu) and two possible rare ones [START_REF] Blattner | The Complete Genome Sequence of Escherichia coli K-12[END_REF]. In order to prevent premature subunit joining, IF3 binds to the E-site of the small subunit [START_REF] Carter | Crystal structure of an initiation factor bound to the 30S ribosomal subunit[END_REF][START_REF] Dallas | Interaction of translation initiation factor 3 with the 30S ribosomal subunit[END_REF][START_REF] Karimi | Novel roles for classical factors at the interface between translation termination and initiation[END_REF][START_REF] Shine | The 3′-terminal sequence of Escherichia coli 16S ribosomal RNA: complementarity to nonsense triplets and ribosome binding sites[END_REF]. Binding of IF1 to the A-site of the small subunit prevents the binding of elongator tRNAs as well as premature subunit joining (Antoun et al., 2006a;Antoun et al., 2006b). The initiator tRNA has a different sequence from the elongator tRNA Met , resulting in higher rigidity and affinity for the P-site. Furthermore, the methionine attached to the tRNAi Met is formylated. The fMet-tRNAi Met is delivered to the initiation complex by the translational GTPase (trGTPase) IF2 in its GTP-bound state [START_REF] Antoun | The roles of initiation factor 2 and guanosine triphosphate in initiation of protein synthesis[END_REF].
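The SD/anti-SD complementarity described above can be illustrated with a short sketch that derives the consensus SD from the anti-SD and locates it upstream of the start codon; the toy mRNA sequence and the helper names are assumptions for illustration only:

```python
# Watson-Crick RNA base complements
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Return the reverse complement of an RNA string (5'->3')."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

ANTI_SD = "CCUCCU"                          # 3' end of 16S rRNA, written 5'->3'
SD_CONSENSUS = reverse_complement(ANTI_SD)  # -> "AGGAGG"

# Toy mRNA: SD motif ... spacer ... AUG start codon
mrna = "AAGGAGGUUAACAAUGGCU"
start = mrna.find("AUG")                       # position of the start codon
sd = mrna.rfind(SD_CONSENSUS, 0, start)        # last SD motif upstream of it
spacer = start - (sd + len(SD_CONSENSUS))      # nt between SD and start codon
# spacer == 6, within the six-to-seven nt range quoted above
```

This only checks perfect Watson-Crick pairing; real SD motifs tolerate mismatches and wobble pairs.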
Once the fMet-tRNAi Met is positioned properly, GTP hydrolysis is triggered and IF1 and IF3 are released from the 30S subunit, followed by the recruitment of the 50S subunit and the release of IF2 [START_REF] Simonetti | Involvement of protein IF2 N domain in ribosomal subunit joining revealed from architecture and function of the full-length initiation factor[END_REF].

Elongation

The following step, elongation, is the most repeated one in the translational cycle, as the ribosome has to go through it for the incorporation of each amino acid. During elongation, the mRNA is decoded with the aid of tRNAs and the 30S subunit. Base triplets (codons) each encode a particular amino acid; this correspondence is termed the genetic code [START_REF] Crick | General nature of the genetic code for proteins[END_REF]. Amino acids are attached to their cognate tRNAs through the activity of specific aminoacyl-tRNA synthetases in an ATP-dependent process. Aminoacylation is a process with high enzymatic selectivity. In addition to their synthetase activity, these enzymes also have an editing domain for the specific removal of wrongly attached amino acids, which serves a proof-reading function [START_REF] Ibba | Quality Control Mechanisms During Translation[END_REF]. Specificity is ensured by, for example, a double-sieve mechanism [START_REF] Ibba | Aminoacyl-tRNA synthesis: divergent routes to a common goal[END_REF][START_REF] Nureki | Enzyme structure with two catalytic sites for double-sieve selection of substrate[END_REF][START_REF] Schmidt | Mutational isolation of a sieve for editing in a transfer RNA synthetase[END_REF] or the presence of cations [START_REF] Dock-Bregeon | Transfer RNA-Mediated Editing in Threonyl-tRNA Synthetase: The Class II Solution to the Double Discrimination Problem[END_REF], ensuring discrimination between the chemical properties of similar amino acids.
Aminoacylated tRNAs are recognized and bound by Elongation Factor (EF)-Tu in its GTP-bound state [START_REF] Nissen | Crystal structure of the ternary complex of Phe-tRNAPhe, EF-Tu, and a GTP analog[END_REF]. The genetic code contains 64 possible codons for only 20 amino acids plus the stop signals recognized by two release factors. The consequence is that multiple codons encode the same amino acid. To reduce the number of tRNAs within the cell, only the first two bases of the codon form Watson-Crick base pairs with the mRNA, while the third base can form wobble base pairs [START_REF] Ebel | Factors determining the specificity of the tRNA aminoacylation reaction: Non-absolute specificity of tRNA-aminoacyl-tRNA synthetase recognition and particular importance of the maximal velocity[END_REF][START_REF] Speyer | Synthetic polynucleotides and the amino acid code[END_REF]. The aminoacylated tRNA in complex with EF-Tu is then recruited to the ribosome, allowing the tRNA to form codon-anticodon interactions with the mRNA. During the decoding process, the small subunit detects correct base pairing by flipping out the bases A1493 and A1492 of helix 44, resulting in conformational changes within the L7/12 stalk and the sarcin-ricin loop, triggering GTP hydrolysis within EF-Tu and its release from the ribosome [START_REF] Diaconu | Structural basis for the function of the ribosomal L7/12 stalk in factor binding and GTPase activation[END_REF][START_REF] Ogle | Insights into the decoding mechanism from recent ribosome structures[END_REF]Schmeing et al., 2009). For peptide bond formation to occur, the amino acid attached to the 3' CCA end of the A-site tRNA needs to be well positioned in the A-crevice within the PTC.
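The codon degeneracy and third-position wobble described above can be illustrated with a toy subset of the genetic code; the table and the example ORF below are assumptions for illustration, not the full standard code:

```python
# Toy subset of the genetic code (illustrative only, not the full table)
CODON_TABLE = {
    "UUU": "F", "UUC": "F",                        # Phe: wobble base varies
    "CUU": "L", "CUC": "L", "CUA": "L", "CUG": "L",  # Leu: four synonymous codons
    "AUG": "M",                                     # Met / start
    "UAA": "*", "UAG": "*", "UGA": "*",             # stop codons
}

def translate(orf):
    """Translate an ORF codon by codon until a stop codon is reached."""
    peptide = []
    for i in range(0, len(orf) - 2, 3):
        aa = CODON_TABLE[orf[i:i + 3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

# 'UUU' and 'UUC' both decode to Phe -- only the wobble position differs
assert translate("AUGUUUCUGUAA") == translate("AUGUUCCUGUAA") == "MFL"
```

A real decoder would also handle frame selection and codons missing from the table; here a lookup failure simply raises `KeyError`.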
During A-site tRNA accommodation, the tRNA has to undergo large conformational changes, rendering this the rate-limiting step of peptide bond formation [START_REF] Pape | Complete kinetic mechanism of elongation factor Tu-dependent binding of aminoacyl-tRNA to the A site of the E.coli ribosome[END_REF][START_REF] Valle | Cryo-EM reveals an active role for aminoacyl-tRNA in the accommodation process[END_REF]. Processes studied in this thesis are known to target and influence peptide bond formation, and so the following paragraphs discuss this process in detail. During A-site tRNA accommodation, residues of the 23S rRNA undergo large conformational changes, switching from the uninduced to the induced state (Schmeing et al., 2005c). In the uninduced state, the ester bond between the P-site tRNA and the peptide needs to be protected from premature hydrolysis; it is sequestered by the 23S rRNA bases U2585, A2451 and C2063 (Schmeing et al., 2005c). A-site tRNA accommodation induces a rearrangement of 23S rRNA residues due to base pairing of C75 of the A-site tRNA with G2553, resulting in movement of U2585 and reorientation of the ribose of A76 of the P-site tRNA (induced fit) (Schmeing et al., 2005c). The induced fit brings the carbonyl ester carbon of the P-site tRNA into close proximity to the α-amino (NH2) group of the incoming amino acid (Schmeing et al., 2005c). As a consequence, a peptide bond can be formed by the nucleophilic attack of the α-amino (NH2) group of the A-site amino acid on the carbonyl ester carbon of the peptidyl-tRNA. The nucleophilic attack leads to a tetrahedral oxyanion transition state that breaks down, leaving the peptide chain attached to the A-site tRNA. The ribosome acts as an entropic trap that changes binding affinities for the different binding sites, thus enhancing the reaction by a factor of 10^7 [START_REF] Sievers | The ribosome as an entropy trap[END_REF].
To enhance the nucleophilicity of the α-NH2 group of the A-site amino acid, it needs to be deprotonated. Early X-ray crystallography structures showed that no ribosomal protein is located within 15 Å of the PTC [START_REF] Ban | The complete atomic structure of the large ribosomal subunit at 2.4 Å resolution[END_REF][START_REF] Nissen | The structural basis of ribosome activity in peptide bond synthesis[END_REF]. Consequently, the ribosome is deemed to act as a ribozyme, as first postulated in 1993 [START_REF] Noller | Peptidyl transferase: protein, ribonucleoprotein, or RNA?[END_REF]. The possible reaction mechanisms that can be catalyzed by RNA are acid-base catalysis, metal catalysis and a substrate-induced mechanism, as reviewed by [START_REF] Leung | The mechanism of peptidyl transfer catalysis by the ribosome[END_REF]. The acid-base mechanism was based on a crystal structure of the Haloarcula marismortui 50S subunit in complex with the Yarus inhibitor, which mimics the transition state of peptide bond formation. It was proposed because of the close proximity of the N3 of A2451 to the Yarus inhibitor [START_REF] Nissen | The structural basis of ribosome activity in peptide bond synthesis[END_REF]. A2451 is one of the five universally conserved bases located in the PTC [START_REF] Thompson | Analysis of mutations at residues A2451 and G2447 of 23S rRNA in the peptidyltransferase active site of the 50S ribosomal subunit[END_REF][START_REF] References Youngman | The active site of the ribosome is composed of two layers of conserved nucleotides with distinct roles in peptide bond formation and peptide release[END_REF]. It was also hypothesized that the base of G2447 lowers the pKa of A2451 through hydrogen bonding, increasing the basicity of the N3 atom, which would facilitate the deprotonation of the α-amino group [START_REF] Nissen | The structural basis of ribosome activity in peptide bond synthesis[END_REF].
Mutation of these two bases led to a growth defect, but the mutant ribosomes were still capable of peptide bond formation [START_REF] Beringer | The G2447A mutation does not affect ionization of a ribosomal group taking part in peptide bond formation[END_REF][START_REF] Beringer | Essential Mechanisms in the Catalysis of Peptide Bond Formation on the Ribosome[END_REF][START_REF] Polacek | Ribosomal peptidyl transferase can withstand mutations at the putative catalytic nucleotide[END_REF][START_REF] Thompson | Analysis of mutations at residues A2451 and G2447 of 23S rRNA in the peptidyltransferase active site of the 50S ribosomal subunit[END_REF]. The pH independence of peptide bond formation was demonstrated by replacing α-amino acids with α-hydroxy acids, allowing the reaction to be performed over different pH ranges. The result was that the reaction rate of the ribosome is independent of the pH of the solution [START_REF] References Bieling | Peptide bond formation does not involve acid-base catalysis by ribosomal residues[END_REF]. This makes an acid-base-catalyzed mechanism unlikely [START_REF] References Bieling | Peptide bond formation does not involve acid-base catalysis by ribosomal residues[END_REF] and is consistent with results from molecular dynamics studies [START_REF] Trobro | Mechanism of peptide bond synthesis on the ribosome[END_REF]. X-ray crystallography structures containing substrate and transition-state mimics showed that no Mg 2+ or other cations could be localized within the PTC and that, instead, well-ordered water molecules could be detected (Schmeing et al., 2005b). This makes a metal-catalyzed mechanism also unlikely, leaving the substrate-induced mechanism as the remaining candidate that can be catalyzed by RNA. In this case, tRNAs deliver the amino acids to the A-site and are attached to the growing peptide chain in the P-site.
Biochemical and molecular dynamics studies have shown that the 2'OH group of A76 is necessary for peptide bond formation when the tRNA is located in the P-site [START_REF] Dorner | Mononucleotide derivatives as ribosomal P-site substrates reveal an important contribution of the 2′-OH to activity[END_REF][START_REF] Dorner | Molecular aspects of the ribosomal peptidyl transferase[END_REF][START_REF] Trobro | Mechanism of peptide bond synthesis on the ribosome[END_REF][START_REF] References Weinger | Substrate-assisted catalysis of peptide bond formation by the ribosome[END_REF]. The 2'OH group of A76 could be part of a six-membered proton shuttle between the nascent chain and the α-NH2 group of the A-site amino acid, ensuring its deprotonation and the stabilization of the oxyanion transition state [START_REF] Dorner | Molecular aspects of the ribosomal peptidyl transferase[END_REF]. The well-ordered water molecule in close proximity to the 2'OH group of A76 of the P-site tRNA could form an eight-membered proton shuttle with a second water molecule that potentially stabilizes the oxyanion transition state (Schmeing et al., 2005b).
Molecular dynamics showed that the eight-membered proton shuttle is preferred over the six-membered proton shuttle [START_REF] Wallin | The transition state for peptide bond formation reveals the ribosome as a water trap[END_REF]. Loss of the 2'OH group of A2451 reduces peptide bond formation about 1000-fold, as shown biochemically [START_REF] Erlacher | Chemical engineering of the peptidyl transferase center reveals an important role of the 2′-hydroxyl group of A2451[END_REF][START_REF] Erlacher | Efficient ribosomal peptidyl transfer critically relies on the presence of the ribose 2 '-OH at A2451 of 23S rRNA[END_REF][START_REF] Lang | The role of 23S ribosomal RNA residue A2451 in peptide bond synthesis revealed by atomic mutagenesis[END_REF] and in molecular dynamics studies [START_REF] Trobro | Mechanism of peptide bond synthesis on the ribosome[END_REF]; this group was not part of the proton-shuttle models. A recent study involving high-resolution X-ray structures of the PRE- and POST-catalysis states of the 70S T. thermophilus ribosome with full-length tRNAs revealed three trapped water molecules within the PTC. This led to the postulation of the proton-wire model, which, in contrast to previous models, takes into account the involvement of the 2'OH of A2451 (Polikanov et al., 2014). After the peptide bond is formed, the nascent peptide is attached to the A-site tRNA and the P-site tRNA is deacylated. As a consequence, the binding affinities of the tRNAs for their current positions are decreased.
This results in movement of the tRNAs between two PRE-translocation states: the classical PRE-state (A/A, P/P) and the hybrid state (A/P, P/E), in which the CCA-ends of the tRNAs move into the adjacent tRNA binding sites, accompanied by a rotation of the 50S subunit relative to the 30S subunit by about 6° [START_REF] Frank | A ratchet-like inter-subunit reorganization of the ribosome during translocation[END_REF][START_REF] Munro | Identification of two distinct hybrid state intermediates on the ribosome[END_REF]. As the last step of the elongation cycle, the ribosome needs to be translocated exactly one codon further towards the 3' end of the ORF. This step is catalyzed by EF-G, which recognizes the hybrid state of the ribosome in its GTP-bound form [START_REF] Agrawal | Visualization of elongation factor G on the Escherichia coli 70S ribosome: the mechanism of translocation[END_REF]. Time-resolved cryo-EM as well as high-resolution structures of EF-G bound to the bacterial ribosome and trapped in PRE-, intermediate and POST-translocation states gave great insights into the movements of the protein factor, tRNAs, mRNA and the ribosome. Due to GTP hydrolysis triggered by the L7/12 stalk and the sarcin-ricin loop, EF-G undergoes a large conformational change [START_REF] Chen | Structure of EF-G-ribosome complex in a pretranslocation state[END_REF][START_REF] Fischer | Ribosome dynamics and tRNA movement by time-resolved electron cryomicroscopy[END_REF][START_REF] Gao | The Structure of the Ribosome with Elongation Factor G Trapped in the Posttranslocational State[END_REF][START_REF] Lin | Conformational changes of elongation factor G on the ribosome during tRNA translocation[END_REF][START_REF] Pulk | Control of Ribosomal Subunit Rotation by Elongation Factor G[END_REF][START_REF] Tourigny | Elongation Factor G Bound to the Ribosome in an Intermediate State of Translocation[END_REF][START_REF] Zhou | Crystal Structures of EF-G-Ribosome Complexes Trapped in Intermediate States of
Translocation[END_REF]. EF-G was reported to act as a Brownian ratchet directing the movement of the ribosome along the mRNA, as reviewed in detail by Rodnina and Wintermeyer, 2011, while a recent X-ray crystallographic study suggests that the conformational change of EF-G applies a power stroke resulting in translocation of the bacterial ribosome [START_REF] Lin | Conformational changes of elongation factor G on the ribosome during tRNA translocation[END_REF]. A recent high-resolution single-molecule study led to the observation that translocation is driven by EF-G acting both as a Brownian ratchet and through a power stroke [START_REF] Chen | Elongation factor G initiates translocation through a power stroke[END_REF]. The elongation cycle is very fast, as the bacterial ribosome incorporates 10-20 amino acids (aa) per second [START_REF] Sørensen | Codon usage determines translation rate in Escherichia coli[END_REF].

Peptide release and recycling

Termination is the last step of the translation cycle. During termination, the stop codon has to be recognized, the peptide has to be released and the ribosome has to be disassembled into its subunits, as reviewed by [START_REF] Korostelev | Structural aspects of translation termination on the ribosome[END_REF]. The three stop codons (UGA, UAA, UAG) [START_REF] Brenner | UGA: a third nonsense triplet in the genetic code[END_REF][START_REF] Brenner | Genetic code: the 'nonsense' triplets for chain termination and their suppression[END_REF] are recognized by class I peptide termination factors. The UAA stop codon can be recognized by both release factors (RF1 and RF2) found in E. coli.
Due to a slight difference in the primary structure of the release factors, RF1 recognizes the UAG and RF2 the UGA stop codon [START_REF] Klaholz | Structure of the Escherichia coli ribosomal termination complex with release factor 2[END_REF][START_REF] Petry | The termination of translation[END_REF][START_REF] Weixlbaumer | Insights into translational termination from the structure of RF2 bound to the ribosome[END_REF]. The peptide is released when the conserved GGQ motif of RF1/2 reaches into the PTC and positions a water molecule, changing the PTC activity to that of an esterase [START_REF] Jin | Structure of the 70S ribosome bound to release factor 2 and a substrate analog provides insights into catalysis of peptide release[END_REF][START_REF] Korostelev | Crystal structure of a translation termination complex formed with release factor RF2[END_REF][START_REF] Weixlbaumer | Insights into translational termination from the structure of RF2 bound to the ribosome[END_REF]. Subsequently, the trGTPase RF3, a class II peptide termination factor, releases RF1 or RF2 from the ribosome upon GTP hydrolysis [START_REF] Freistroffer | Release factor RF3 in E. coli accelerates the dissociation of release factors RF1 and RF2 from the ribosome in a GTP-dependent manner[END_REF][START_REF] Goldstein | Peptide Chain Termination: Effect of Protein S on Ribosomal Binding of Release Factors[END_REF][START_REF] Jin | Crystal structure of the hybrid state of ribosome in complex with the guanosine triphosphatase release factor 3[END_REF]. The GTP hydrolysis is triggered by the L7/12 stalk and the sarcin-ricin loop [START_REF] Diaconu | Structural basis for the function of the ribosomal L7/12 stalk in factor binding and GTPase activation[END_REF][START_REF] Zavialov | A posttermination ribosomal complex is the guanine nucleotide exchange factor for peptide release factor RF3[END_REF].
As the last step, the deacylated P-site tRNA is released and the 70S ribosome is disassembled into its individual subunits. To this end, the ribosome recycling factor (RRF) and EF-G work together under GTP hydrolysis [START_REF] Barat | Progression of the ribosome recycling factor through the ribosome dissociates the two ribosomal subunits[END_REF][START_REF] Pavlov | Fast recycling of Escherichia coli ribosomes requires both ribosome recycling factor (RRF) and release factor RF3[END_REF][START_REF] Weixlbaumer | Crystal structure of the ribosome recycling factor bound to the ribosome[END_REF]. To prevent reassociation of the large subunit with the small subunit, IF3 binds to its previously described position and the translation cycle can be initiated again (Antoun et al., 2006b). In summary, peptide bond formation is independent of energy consumption and is catalyzed by rRNA, whereas subunit assembly, tRNA aminoacylation and delivery, translocation, the release of release factors 1 or 2 and subunit dissociation require NTP hydrolysis and protein factors (Schmeing and Ramakrishnan, 2009).

The prokaryotic ribosome as a target for antibiotics

Antibiotics target macromolecular complexes involved in cell wall synthesis, genome maintenance as well as protein synthesis, as reviewed by [START_REF] Kohanski | How antibiotics kill bacteria: from targets to networks[END_REF]. More than 50% of known antibiotics target the bacterial ribosome, affecting all steps of the translational cycle. Known antibiotic binding sites include the decoding center (e.g. aminoglycosides), the sarcin-ricin loop (e.g. thiopeptides), the factor binding site (e.g. fusidic acid), the peptidyl transferase center (e.g. phenicols) and the ribosomal exit tunnel (e.g. macrolides), as reviewed in detail (Wilson, 2009, 2014) [START_REF] Wilson | On the specificity of antibiotics targeting the large ribosomal subunit[END_REF].
The large increase in antibiotic resistance, a consequence of excessive antibiotic use over the last decades, represents a global problem. Studies have shown that resistance against antibiotics arises rapidly after their introduction into medical use. The macrolide erythromycin was introduced into clinical use in 1952 and the first resistant bacterial strain was isolated in 1955, as reviewed by [START_REF] Lewis | Platforms for antibiotic discovery[END_REF]. Strategies of antibiotic resistance can be classified into the following categories, as reviewed in detail by [START_REF] Roberts | Update on macrolide-lincosamide-streptogramin, ketolide, and oxazolidinone resistance genes[END_REF] (Wilson, 2014): disruption of the binding site by mutation or modification, enzymatic modification/degradation of the drug, or active export of the drug. The following chapter will focus on antibiotics that specifically target the PTC and on macrolides targeting the ribosomal exit tunnel. In addition, these drugs can act as ligands during nascent chain-mediated translational arrest (chapter 1.3.2).

Antibiotics targeting the PTC

The PTC is a major target for antibiotics. Classes that specifically target the PTC are Phenicols (Chloramphenicol, CAM), Oxazolidinones (Linezolid, LIN), Lincosamides (Clindamycin, CLN), Sparsomycin (SPAR), Puromycins, Pleuromutilins (Tiamulin, TIA), Hygromycin A, A201A and Blasticidin S (BlaS), as reviewed (Arenz and Wilson, 2016; [START_REF] Wilson | On the specificity of antibiotics targeting the large ribosomal subunit[END_REF]). Most of these classes bind to the A-site within the PTC. Puromycin is an antibiotic that reacts covalently with the nascent chain and results in premature termination, as reviewed [START_REF] Darken | Puromycin inhibition of protein synthesis[END_REF]. Puromycin is active against eukaryotic as well as prokaryotic ribosomes, meaning it cannot be used therapeutically [START_REF] Darken | Puromycin inhibition of protein synthesis[END_REF].
Nevertheless, puromycin and its derivatives are used in several biochemical, kinetic and structural studies to investigate peptide bond formation and nascent chain-mediated translational arrest [START_REF] Muto | Genetically encoded but nonpolypeptide prolyl-tRNA functions in the A site for SecM-mediated ribosomal stall[END_REF][START_REF] Schmeing | A pre-translocational intermediate in protein synthesis observed in crystals of enzymatically active 50S subunits[END_REF][START_REF] Sievers | The ribosome as an entropy trap[END_REF][START_REF] Traut | The puromycin reaction and its relation to protein synthesis[END_REF]. Other antibiotics, like chloramphenicol and clindamycin, bind to or in close proximity to the A-site crevice, the binding pocket for the A-site aminoacyl moiety, thus preventing the accommodation of the aminoacyl-tRNA, as shown in X-ray crystal structures (Bulkley et al., 2010; Dunkle et al., 2010; [START_REF] Wilson | The oxazolidinone antibiotics perturb the ribosomal peptidyl-transferase center and effect tRNA positioning[END_REF]). The antibiotic Hygromycin A binds to the CCA binding pocket of the A-site, resulting in the inhibition of A-site tRNA accommodation [START_REF] Polikanov | Distinct tRNA accommodation intermediates observed on the ribosome with the antibiotics hygromycin A and A201A[END_REF]. Sparsomycin is an antibiotic that, like puromycin, is potent against eukaryotic as well as prokaryotic ribosomes. It was reported that Sparsomycin induces precise translocation of the ribosome in the absence of EF-G [START_REF] Fredrick | Catalysis of Ribosomal Translocation by Sparsomycin[END_REF]. In addition, Sparsomycin stabilizes the growing peptide chain and prevents further A-site accommodation. This particular property is used in structural biology to force the binding of short peptidyl-RNA fragments, which mimic the peptidyl chain and the 3' CCA end of tRNAs, to the P-site (Melnikov et al., 2016a; Schmeing et al., 2005a).
Pleuromutilins like Tiamulin bind to the A-site as well as the P-site, blocking elongation of the growing peptide by steric hindrance [START_REF] Gürel | U2504 Determines the Species Specificity of the A-site Cleft Antibiotics. The Structures of Tiamulin, Homoharringtonine and Bruceantin Bound to the Ribosome[END_REF]. Blasticidin S binds to the P-site and deforms the 3' CCA-end of the P-site tRNA. This protects the peptide chain from elongation and release [START_REF] Svidritskiy | Blasticidin S inhibits translation by trapping deformed tRNA on the ribosome[END_REF]. All in all, antibiotics that target the PTC inhibit peptide bond formation by hydrolyzing the peptide chain, stabilizing the peptide chain, preventing A-site tRNA accommodation or sterically hindering prolongation of the growing peptide chain [START_REF] Wilson | On the specificity of antibiotics targeting the large ribosomal subunit[END_REF].

Macrolides and ketolides binding to the ribosomal tunnel

Macrolides and ketolides form a valuable class of broad-spectrum antibiotics, with the first representative, pikromycin, isolated from a Streptomyces strain in 1950 [START_REF] Brockmann | Pikromycin, ein neues Antibiotikum aus Actinomyceten[END_REF]. Erythromycin (ERY), a well-studied and widely used macrolide introduced to medical use in 1952 [START_REF] Mcguire | Ilotycin, a new antibiotic[END_REF], was originally isolated from Streptomyces erythraea (as reviewed by [START_REF] Kannan | Macrolide antibiotics in the ribosome exit tunnel: species-specific binding and action[END_REF]). Macrolides are active against a broad range of Gram-positive and a limited range of Gram-negative bacteria [START_REF] Nakayama | Macrolides in clinical practice. Macrolide antibiotics[END_REF].
They have a high binding specificity for the bacterial ribosome over the eukaryotic one, making macrolides a valuable group of antibiotics [START_REF] Böttger | Structural basis for selectivity and toxicity of ribosomal antibiotics[END_REF][START_REF] Taubman | Sensitivity and resistance to erythromycin in Bacillus subtilis 168: The ribosomal binding of erythromycin and chloramphenicol[END_REF]. The chemical structures of erythromycin (macrolide) and telithromycin (ketolide) are shown in Figure 5. The chemical structure of erythromycin consists of a 14-membered lactone ring, a C3 cladinose and a C5 desosamine sugar (Figure 5). The lactone ring can be 12-, 14-, 15- or 16-membered, and macrolides are classified accordingly. To overcome resistance and to increase bioavailability, rounds of drug development led to 2nd and 3rd generation macrolide/ketolide antibiotics (reviewed by [START_REF] George | The Macrolide Antibiotic Renaissance[END_REF]). Ketolides are 3rd generation antibiotics and have a keto group instead of the cladinose sugar. Telithromycin, a member of this group, is shown in Figure 5 [START_REF] Katz | Translation and protein synthesis: macrolides[END_REF][START_REF] Zhong | The emerging new generation of antibiotic: ketolides[END_REF]. This class of antibiotics binds to the ribosomal exit tunnel [START_REF] Moazed | Chloramphenicol, erythromycin, carbomycin and vernamycin B protect overlapping sites in the peptidyl transferase region of 23S ribosomal RNA[END_REF]; its binding site is illustrated in Figure 6. Extensive biochemical studies identified the upper ribosomal exit tunnel as the binding site for erythromycin, still allowing the accommodation of the A-site tRNA [START_REF] Celma | Substrate and antibiotic binding sites at the peptidyl transferase centre of E.
coli ribosomes: Binding of UACCA-Leu to 50 S subunits[END_REF][START_REF] Moazed | Chloramphenicol, erythromycin, carbomycin and vernamycin B protect overlapping sites in the peptidyl transferase region of 23S ribosomal RNA[END_REF]. Several X-ray crystallography studies confirmed the biochemically identified binding site, but different orientations of the lactone ring were observed [START_REF] Berisio | Structural insight into the antibiotic action of telithromycin against resistant mutants[END_REF][START_REF] Schlunzen | Structural basis for the interaction of antibiotics with the peptidyl transferase centre in eubacteria[END_REF][START_REF] Tu | Structures of MLS B K antibiotics bound to mutated large ribosomal subunits provide a structural explanation for resistance[END_REF]. More recent studies have shown that different macrolides and telithromycin bound to the 70S E. coli (Dunkle et al., 2010) and T. thermophilus (Bulkley et al., 2010) ribosomes adopt a similar conformation of the lactone ring, independent of drug and species (Figure 6). Erythromycin forms stable interactions with the ribosomal exit tunnel via base A2058, while the lactone ring stacks against the hydrophobic surface formed by A2059 and U2611 (Bulkley et al., 2010; Dunkle et al., 2010; Figure 6). This binding site within the ribosomal tunnel originally suggested that macrolides work like a plug, inhibiting protein synthesis in the early elongation phase and leading to peptidyl-tRNA drop-off [START_REF] Menninger | Mechanism of inhibition of protein synthesis by macrolide and lincosamide antibiotics[END_REF][START_REF] Menninger | Erythromycin, carbomycin, and spiramycin inhibit protein synthesis by stimulating the dissociation of peptidyl-tRNA from ribosomes[END_REF][START_REF] Tenson | The mechanism of action of macrolides, lincosamides and streptogramin B reveals the nascent peptide exit path in the ribosome[END_REF].
Further investigations identified nascent peptide sequences that lead to the arrest of the ribosome in the presence of macrolides [START_REF] Gryczan | Conformational alteration of mRNA structure and the posttranscriptional regulation of erythromycin-induced drug resistance[END_REF][START_REF] Horinouchi | Posttranscriptional modification of mRNA conformation: mechanism that regulates erythromycin-induced resistance[END_REF] (described in more detail in chapter 2). More recent studies have shown that cells exposed to erythromycin or telithromycin can still produce a subset of proteins. This by-passing of the drug is possible due to a specific amino acid composition at the N-terminus of these proteins (ILNNIR or IGQQVR) [START_REF] Kannan | Selective protein synthesis by ribosomes with a drug-obstructed exit tunnel[END_REF]. Another effect is drug-induced frameshifting, which can be observed in the presence of ketolides like telithromycin [START_REF] Gupta | Regulation of gene expression by macrolide-induced ribosomal frameshifting[END_REF].

Proline-rich antimicrobial peptides (PrAMPs) as possible new therapeutics

All organisms produce substances to protect themselves against competitors for nutrients and against pathogens. Peptides or glycopeptides like vancomycin represent a large proportion of these defense substances. They can be produced by the ribosome or by large multiprotein complexes called non-ribosomal peptide synthetases, which are capable of incorporating non-proteinogenic amino acids into peptides or macrocycles [START_REF] Hur | Explorations of catalytic domains in nonribosomal peptide synthetase enzymology[END_REF].
Antimicrobial peptides (AMPs) can be classified by their structure, such as α-helices, β-sheets or macrocycles, by their mode of action, such as lytic or non-lytic, or by their amino acid composition, such as hydrophobic, amphiphilic or rich in specific amino acids, as reviewed in detail by [START_REF] Epand | Diversity of antimicrobial peptides and their mechanisms of action[END_REF][START_REF] Otvos | Antibacterial peptides and proteins with multiple cellular targets[END_REF]. One class of AMPs are the proline-rich antimicrobial peptides (PrAMPs), which are produced as part of the innate immune system of higher eukaryotes (Otvos). Representative members have been isolated and characterized from the milkweed bug (Oncopeltus fasciatus, (Schneider and Dorn, 2001)), honey bees (Apis mellifera, [START_REF] Casteels | Isolation and characterization of abaecin, a major antibacterial response peptide in the honeybee (Apis mellifera)[END_REF]), the green shield bug (Palomena prasina, (Chernysh et al., 1996)), the firebug (Pyrrhocoris apterus, [START_REF] Cociancich | Novel inducible antibacterial peptides from a hemipteran insect, the sap-sucking bug Pyrrhocoris apterus[END_REF]) and even domestic mammals like cows (Bos taurus, [START_REF] Scocchi | Molecular cloning of Bac7, a proline-and arginine-rich antimicrobial peptide from bovine neutrophils[END_REF]). PrAMPs are incapable of entering eukaryotic cells (Hansen et al., 2012) and therefore represent a promising class of antimicrobials. Additionally, these peptides can be used to treat infections affecting the brain as they can cross the blood-brain barrier [START_REF] Stalmans | Blood-brain barrier transport of short proline-rich antimicrobial peptides[END_REF].
A representative list of sequences is shown in Figure 7. The main sequence characteristic of PrAMPs is a high proline content, in particular proline-arginine repeats (Figure 7). This results in a restricted backbone geometry that does not allow the formation of α-helices; in addition, their positive charge prevents the PrAMPs from passing through eukaryotic cell membranes (Hansen et al., 2012). PrAMPs are potent growth inhibitors of Gram-negative bacteria with an intracellular target. Peptide uptake is coupled to the SbmA transporter, which is present in the cell membrane of Gram-negative bacteria, e.g. E. coli or Klebsiella pneumoniae [START_REF] Ostorhazi | In vivo activity of optimized apidaecin and oncocin peptides against a multiresistant, KPC-producing Klebsiella pneumoniae strain[END_REF]. The exact function of the SbmA transporter remains unknown (Mattiuzzo et al., 2007). The first presumed intracellular target of the insect-derived PrAMPs drosocin, pyrrhocoricin and apidaecin, identified by co-immunoprecipitation, was the heat shock protein and chaperone DnaK (Otvos et al., 2000). Crystal structures of the PrAMPs oncocin and Bac7 in complex with DnaK were solved and used as the basis for further drug development [START_REF] Zahn | Structural studies on the forward and reverse binding modes of peptides to the chaperone DnaK[END_REF][START_REF] Zahn | Structural identification of DnaK binding sites within bovine and sheep bactenecin Bac7[END_REF]. Optimization of the peptide sequences included the incorporation of non-proteinogenic amino acids such as L-ornithine and D-amino acids, and chemical modification of the C-terminus and N-terminus, as shown in Figure 7, to increase the stability of the peptides against serum proteases. It should be noted that all-D peptides are inactive [START_REF] Knappe | Oncocin (VDKPPYLPRPRPPRRIYNR-NH2): a novel antibacterial peptide optimized against gram-negative human pathogens[END_REF].
A recent study revealed that dnaK knockout cells are still sensitive to the synthetic oncocin derivatives Onc112 and Onc72, as well as to apidaecin 1b, Api88 and Api137, suggesting that some PrAMPs have a different target (Krizsan et al., 2014). To identify the target by cross-linking, Tyr7 of Api88 was replaced by p-benzoylphenylalanine, a photoactive derivative. In addition, Api88 was biotinylated for the purification of the cross-linking products. As a direct result, Api88 was found to cross-link to the ribosomal protein L10 (Krizsan et al., 2014). Follow-up in vitro protein synthesis assays revealed that these peptides inhibit protein biosynthesis and show a lower dissociation constant (Kd) for the bacterial 70S ribosome than for DnaK (Krizsan et al., 2014). A different study, working with the mammalian-derived PrAMP Bac7(1-35), a shortened form of the normally 60 aa long Bac7 peptide, showed by pulse-labeling experiments that the peptide is transported into the cell and blocks the production of proteins. Additionally, Bac7(1-35) co-sedimented with 70S ribosomes (Mardirossian et al., 2014). However, the molecular mechanism of action remains unknown.

Nascent chain-mediated translational arrest

During protein synthesis, the translating ribosome can stall along the mRNA for various reasons, including rare codons, amino acid starvation, antibiotic binding, mRNA secondary structure or the amino acid sequence of the nascent peptide. A stalling event that is mediated by the translated amino acid sequence is termed nascent chain-mediated translational arrest. This arrest is induced by particular interactions of the nascent chain with the tunnel wall, leading to a rearrangement of ribosomal bases, and can result in a specific path or fold of the nascent chain within the tunnel. This process can be aided by the binding of a small ligand, antibiotic or metabolite.
Thus, the nascent chain acts as a sensor for the presence or absence of this particular ligand within the cell, and in this way gene expression can be modulated in prokaryotes as well as eukaryotes (reviewed in detail in [START_REF] Chiba | Recruitment of a species-specific translational arrest module to monitor different cellular processes[END_REF][START_REF] Ramu | Programmed drug-dependent ribosome stalling[END_REF][START_REF] Seip | How Widespread is Metabolite Sensing by Ribosome-Arresting Nascent Peptides?[END_REF][START_REF] Wilson | Translation regulation via nascent polypeptide-mediated ribosome stalling[END_REF]). This section will focus on nascent chain-mediated translational arrest in bacteria. After the identification of the first arrest peptide, ermCL, in 1980 [START_REF] Gryczan | Conformational alteration of mRNA structure and the posttranscriptional regulation of erythromycin-induced drug resistance[END_REF][START_REF] Horinouchi | Posttranscriptional modification of mRNA conformation: mechanism that regulates erythromycin-induced resistance[END_REF], different methods were developed to study nascent chain-mediated translational arrest in vitro as well as in vivo. In vitro methods include toeprinting, which uses in vitro translation systems and primer extension to determine, at nucleotide resolution, the position of the ribosome along the mRNA under stalling and non-stalling conditions (detailed description in chapter 3.8) (Hartz et al., 1988; Orelle et al., 2013a). Arrested ribosome complexes are less sensitive to puromycin treatment and can be analyzed by acidic gel electrophoresis [START_REF] Muto | Peptidyl-prolyl-tRNA at the ribosomal P-site reacts poorly with puromycin[END_REF].
The nature of the P-site tRNA can be identified using northern blotting [START_REF] Chiba | Multisite ribosomal stalling: a unique mode of regulatory nascent chain action revealed for MifM[END_REF][START_REF] Muto | Genetically encoded but nonpolypeptide prolyl-tRNA functions in the A site for SecM-mediated ribosomal stall[END_REF]. Fluorescence resonance energy transfer (FRET) studies can give great insights into conformational changes of the ribosome during arrest complex formation [START_REF] Johansson | Sequence-Dependent Elongation Dynamics on Macrolide-Bound Ribosomes[END_REF][START_REF] Tsai | The Dynamics of SecM-Induced Translational Stalling[END_REF]. In vivo methods include the expression of a reporter gene, in many studies lacZ, which encodes β-galactosidase. The arrest can be investigated by fusing the arrest peptide directly to the reporter gene, meaning that LacZ is not produced if the arrest occurs [START_REF] Martínez | Interactions of the TnaC nascent peptide with rRNA in the exit tunnel enable the ribosome to respond to free tryptophan[END_REF][START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]. Another approach exploits the mechanism of translational regulation by replacing the downstream gene with the reporter gene. In contrast to the fusion protein approach, LacZ is then produced only if the arrest occurs [START_REF] Gupta | Nascent peptide assists the ribosome in recognizing chemically distinct small molecules[END_REF]. Ribosome profiling is a high-throughput method to analyze translation events on a genome-wide scale in vivo [START_REF] Ingolia | Genome-wide analysis in vivo of translation with nucleotide resolution using ribosome profiling[END_REF] and has proven to be a valuable tool to study nascent chain-mediated translational arrest. Total mRNA bound to ribosomes is isolated from cells and digested by nucleases.
The remaining mRNA fragments are purified, reverse transcribed, analyzed by next-generation sequencing (NGS) and mapped to the genome [START_REF] Ingolia | Genome-wide analysis in vivo of translation with nucleotide resolution using ribosome profiling[END_REF]. Another method to gain a deeper understanding of the molecular mechanism of nascent chain-mediated translational arrest is solving the structure of arrested ribosomes using cryo-EM (Arenz et al., 2016; Arenz et al., 2014a; [START_REF] Arenz | Molecular basis for erythromycin-dependent ribosome stalling during translation of the ErmBL leader peptide[END_REF][START_REF] Bhushan | SecM-stalled ribosomes adopt an altered geometry at the peptidyl transferase center[END_REF]; Bischoff et al., 2014; [START_REF] Seidelt | Structural insight into nascent polypeptide chain-mediated translational stalling[END_REF][START_REF] Sohmen | Structure of the Bacillus subtilis 70S ribosome reveals the basis for species-specific stalling[END_REF][START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF][START_REF] Zhang | Mechanisms of ribosome stalling by SecM at multiple elongation steps[END_REF]). In prokaryotes, one mRNA often encodes multiple genes and is thus termed a polycistronic mRNA. ORFs regulated by nascent chain-mediated arrest peptides are organized as follows: the arrest peptide, termed the leader peptide, is encoded at the 5' end of the mRNA, followed by the ORFs of the regulated genes. The expression of the downstream genes can be regulated through transcriptional or translational mechanisms [START_REF] Gong | Analysis of tryptophanase operon expression in vitro accumulation of TnaC-peptidyl-tRNA in a release factor 2-depleted S-30 extract prevents Rho factor action, simulating induction[END_REF][START_REF] Vazquez-Laslop | Molecular mechanism of drug-dependent ribosome stalling[END_REF].
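Computationally, the stall-detection step that follows the read mapping in ribosome profiling described above amounts to finding codon positions where footprint density piles up far above the ORF average. A minimal illustrative sketch in Python (function name, threshold and input format are hypothetical and simplified, not taken from any published profiling pipeline):

```python
def pause_sites(footprint_starts, orf_len, threshold=5.0):
    """Flag codon positions whose footprint density exceeds `threshold`
    times the ORF mean -- candidate ribosome pause/stall sites.

    footprint_starts: 0-based nucleotide offsets of footprint 5' ends
    within the ORF (assumed already offset-corrected to the P-site).
    orf_len: ORF length in nucleotides.
    """
    n_codons = orf_len // 3
    counts = [0] * n_codons
    for start in footprint_starts:
        codon = start // 3
        if 0 <= codon < n_codons:
            counts[codon] += 1
    mean = sum(counts) / n_codons if n_codons else 0.0
    # A codon is a candidate pause site if its count greatly exceeds the mean.
    return [i for i, c in enumerate(counts) if mean > 0 and c > threshold * mean]
```

For instance, a uniform background of one footprint per codon over a 60-nt ORF plus 50 extra reads starting at nucleotide 30 flags codon 10 as a pause site; real analyses additionally handle read-length-dependent offsets and library-size normalization.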
Ligand-independent arrest peptides

The following table summarizes known sequences that lead to a ribosomal arrest, their biological relevance and how they are released.

Table 2: Peptide sequences that lead to a nascent chain-mediated translational arrest in bacteria. The amino acid attached to the P-site tRNA is underlined. 1 Citation refers to the discovery of the listed arrest sequence. 2 Citation describes the residues necessary for the arrest.

Name 1 | Sequence 2 | Biological significance, release mechanism | Species
SecM [START_REF] Nakatogawa | Secretion monitor, SecM, undergoes self-translation arrest in the cytosol[END_REF] | FXXXXWIXXXXGIRAG(P) [START_REF] Nakatogawa | The ribosomal exit tunnel functions as a discriminating gate[END_REF] | Regulation of SecA expression | E. coli and others
MifM [START_REF] Chiba | A ribosome-nascent chain sensor of membrane protein biogenesis in Bacillus subtilis[END_REF] | RIXXWIXXXXXMNXXXXDEED(AGS) [START_REF] Chiba | A ribosome-nascent chain sensor of membrane protein biogenesis in Bacillus subtilis[END_REF] | Regulation of membrane integration of YidC2 | Bacillus subtilis
VemP [START_REF] Ishii | Nascent chain-monitored remodeling of the Sec machinery for salinity adaptation of marine bacteria[END_REF] | HRIXGWKETNAMYVALNXS(Q) [START_REF] Ishii | Nascent chain-monitored remodeling of the Sec machinery for salinity adaptation of marine bacteria[END_REF] | Modulation of the protein composition of the protein translocation machinery in low salt, SecDF1 | Vibrio alginolyticus

The characterized long ligand-independent arrest peptides regulate the expression of membrane or membrane-associated proteins in bacteria. The arrest is released by applying a mechanical force to the nascent chain, resulting in the translocation of the arrest peptide through the membrane or its insertion into the lipid bilayer.
SecM

The secretion monitor (SecM), formerly known as gene X, is encoded upstream of the ATP-dependent preprotein translocase SecA in bacteria and regulates the expression of SecA via a translational mechanism [START_REF] Mcnicholas | Dual regulation of Escherichia coli secA translation by distinct upstream elements[END_REF][START_REF] Oliver | Regulation of Escherichia coli secA by Cellular Protein Secretion Proficiency Requires an Intact Gene X Signal Sequence and an Active Translocon[END_REF]. SecM has an N-terminal signal sequence for membrane translocation [START_REF] Nakatogawa | Secretion monitor, SecM, undergoes self-translation arrest in the cytosol[END_REF], which recruits the translating ribosome to the membrane, and an independent C-terminal arrest sequence. In the absence of SecA, ribosomes translating the SecM ORF become arrested. This leads to a rearrangement of the secondary structure of the translated mRNA, freeing the SD sequence of secA [START_REF] Nakatogawa | Secretion monitor, SecM, undergoes self-translation arrest in the cytosol[END_REF]. Consequently, SecA is expressed directly at the membrane [START_REF] Nakatogawa | SecM facilitates translocase function of SecA by localizing its biosynthesis[END_REF]. SecA regulates its own expression via a negative feedback loop by applying a pulling force on the SecM peptide, resulting in the release of the arrested ribosome [START_REF] Butkus | Translocon "pulling" of nascent SecM controls the duration of its translational pause and secretion-responsive secA regulation[END_REF]. SecM is then translocated into the periplasm and rapidly degraded by proteases [START_REF] Nakatogawa | Secretion monitor, SecM, undergoes self-translation arrest in the cytosol[END_REF]. This allows refolding of the secondary structure of the mRNA and sequestering of the SD of secA.
The residues crucial for nascent chain-mediated translational arrest by SecM (Table 2) were identified by alanine mutation screens and by deletion of parts of the peptide [START_REF] Nakatogawa | The ribosomal exit tunnel functions as a discriminating gate[END_REF]. Cryo-electron microscopy showed that the SecM arrest peptide forms contacts with the ribosomal tunnel wall, leading to a rearrangement of 23S rRNA bases such as A2062 that has an allosteric effect on the PTC. The A76 ribose of the P-site tRNA is shifted by 2 Å, making peptide bond formation unlikely [START_REF] Bhushan | SecM-stalled ribosomes adopt an altered geometry at the peptidyl transferase center[END_REF]. The structure also shows the bound A-site Pro-tRNA Pro, which was identified as crucial for SecM-mediated arrest. As discussed earlier, the N-alkyl character of proline slows peptide bond formation, indicating that this property is central to SecM arrest [START_REF] Bhushan | SecM-stalled ribosomes adopt an altered geometry at the peptidyl transferase center[END_REF][START_REF] Muto | Genetically encoded but nonpolypeptide prolyl-tRNA functions in the A site for SecM-mediated ribosomal stall[END_REF]. Recent higher-resolution structures (3.3-3.7 Å) revealed that the specific contacts of the peptide with the tunnel wall additionally lead to a rearrangement of the ribosomal bases within the PTC, mimicking an uninduced state [START_REF] Zhang | Mechanisms of ribosome stalling by SecM at multiple elongation steps[END_REF]. A large fraction of particles had the peptide chain attached to the A-site tRNA Pro, resulting in a different set of interactions that molecular dynamics (MD) simulations predicted to be more favorable than the contacts observed in previous studies [START_REF] Gumbart | Mechanisms of SecM-mediated stalling in the ribosome[END_REF][START_REF] Zhang | Mechanisms of ribosome stalling by SecM at multiple elongation steps[END_REF].
Due to the peptide contacts and the rearrangements of the PTC, base pairing between the D-loop and the tRNA Pro is prevented; the ribosome therefore cannot adopt the hybrid conformation, and translocation is inhibited [START_REF] Zhang | Mechanisms of ribosome stalling by SecM at multiple elongation steps[END_REF]. These results are consistent with a FRET study investigating the kinetics of SecM-mediated arrest [START_REF] Tsai | The Dynamics of SecM-Induced Translational Stalling[END_REF].

MifM

MifM is an arrest peptide that specifically arrests the B. subtilis ribosome [START_REF] Chiba | Recruitment of a species-specific translational arrest module to monitor different cellular processes[END_REF] and regulates the expression of the membrane insertase YidC2 through a translational mechanism [START_REF] Chiba | A ribosome-nascent chain sensor of membrane protein biogenesis in Bacillus subtilis[END_REF]. B. subtilis has two homologs of YidC, SpoIIIJ (YidC1) and YidC2. Studies have shown that MifM-arrested ribosomes are released by the insertion of MifM into the membrane by SpoIIIJ [START_REF] Chiba | MifM Monitors Total YidC Activities of Bacillus subtilis, Including That of YidC2, the Target of Regulation[END_REF]. In the absence of SpoIIIJ, MifM-translating ribosomes arrest, which results in the expression of YidC2. YidC2 itself can release MifM-arrested ribosomes as well, thus repressing its own expression in a negative feedback loop [START_REF] Chiba | MifM Monitors Total YidC Activities of Bacillus subtilis, Including That of YidC2, the Target of Regulation[END_REF]. MifM has multiple stalling sites consisting of consecutive acidic amino acids [START_REF] Chiba | Multisite ribosomal stalling: a unique mode of regulatory nascent chain action revealed for MifM[END_REF]. The cryo-EM structure of the MifM-arrested 70S B. subtilis ribosome (3.5-3.9 Å resolution) showed that MifM forms interactions with L22, L4 and bases of the 23S rRNA. It had been shown that MifM arrests specifically the B. subtilis ribosome and not the E. coli ribosome [START_REF] Chiba | Recruitment of a species-specific translational arrest module to monitor different cellular processes[END_REF]. This can be explained by the species-specific sequence of L22. Biochemical studies have shown that residue M80 in particular is crucial for MifM arrest: mutation of this amino acid to a negatively charged residue leads to loss of the translational arrest. M80 does not form a direct contact with the arrest peptide, but its location indicates that it stabilizes the orientation of rRNA bases and of the L22 loop [START_REF] Sohmen | Structure of the Bacillus subtilis 70S ribosome reveals the basis for species-specific stalling[END_REF]. The cryo-EM structure contains no A-site tRNA. The interactions of the arrest peptide with the tunnel wall lead to an allosteric rearrangement of the PTC that inhibits A-site tRNA accommodation, with A2602 and U2506 pointing into the A-site binding pocket [START_REF] Sohmen | Structure of the Bacillus subtilis 70S ribosome reveals the basis for species-specific stalling[END_REF].

1.4.1.3 VemP

The marine Gram-negative bacterium V. alginolyticus has two paralogs of the SecDF protein. The leader peptide VemP is encoded upstream of SecDF2 and regulates its expression through a translational mechanism [START_REF] Ishii | Nascent chain-monitored remodeling of the Sec machinery for salinity adaptation of marine bacteria[END_REF]. VemP-mediated arrest is released by SecDF1, which applies a mechanical force to the nascent chain while translocating the peptide into the periplasm. SecDF1 is active at high Na+ concentration. If the Na+ concentration drops, SecDF1 becomes inactive and VemP-arrested ribosomes can no longer be released.
This leads to a rearrangement of the secondary structure of the mRNA, freeing the SD sequence of the ORF encoding SecDF2 so that it can be translated [START_REF] Ishii | Nascent chain-monitored remodeling of the Sec machinery for salinity adaptation of marine bacteria[END_REF]. Recently, the structure of VemP-arrested E. coli ribosomes was solved at a resolution of 2.9 Å by cryo-EM [START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF]. The VemP peptide forms two helices, one in the upper and one in the lower tunnel. The two helices are connected by a linking region that forms two consecutive turns through the constriction site [START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF]. Residues of the α-helix within the upper tunnel form interactions with the tunnel wall, leading to an allosteric rearrangement within the PTC that stabilizes the uninduced state of the ribosome. Consequently, accommodation of the A-site tRNA is prevented by the orientation of U2506 [START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF].

Ligand-dependent arrest peptides

In contrast to ligand-independent arrest peptides, ligand-dependent arrest peptides act as sensors for the presence of ligands, which can be metabolites or antibiotics that bind to the ribosomal exit tunnel [START_REF] Seip | How Widespread is Metabolite Sensing by Ribosome-Arresting Nascent Peptides?[END_REF]. Here, nascent chain-mediated translational arrest is induced by a specific amino acid sequence together with the presence of a ligand, such as an antibiotic or a metabolite; the ribosome-nascent chain complex (RNC) acts as the sensor for the ligand.
The following table includes ligand-dependent arrest peptides, their ligands, their biological significance and their point of inhibition during the translational cycle:

Name 1 | Sequence 2 | Biological significance, ligand | Inhibition point
Chloramphenicol resistance leader [START_REF] Dorman | Posttranscriptional regulation of the inducible nonenzymatic chloramphenicol resistance determinant of IncP plasmid R26[END_REF] | MSTSKNAD(K) [START_REF] Gu | Anti-peptidyl transferase leader peptides of attenuation-regulated chloramphenicol-resistance genes[END_REF][START_REF] Gu | Peptidyl transferase inhibition by the nascent leader peptide of an inducible cat gene[END_REF] | Regulation of chloramphenicol resistance genes, chloramphenicol | Peptidyl transfer
ErmA leader [START_REF] Murphy | Nucleotide sequence of ermA, a macrolide-lincosamide-streptogramin B determinant in Staphylococcus aureus[END_REF] | MXXXIAVV(E,D,K,R,H) [START_REF] Ramu | Nascent peptide in the ribosome exit tunnel affects functional properties of the A-site of the peptidyl transferase center[END_REF] | Regulation of macrolide resistance genes that methylate A2058, macrolides | Peptidyl transfer
ErmB leader [START_REF] Horinouchi | A complex attenuator regulates inducible resistance to macrolides, lincosamides, and streptogramin type B antibiotics in Streptococcus sanguis[END_REF] | MXXXXXVD(K) [START_REF] Arenz | Molecular basis for erythromycin-dependent ribosome stalling during translation of the ErmBL leader peptide[END_REF] | (as for ErmA) | Peptidyl transfer
ErmC leader [START_REF] Gryczan | Conformational alteration of mRNA structure and the posttranscriptional regulation of erythromycin-induced drug resistance[END_REF][START_REF] Horinouchi | Posttranscriptional modification of mRNA conformation: mechanism that regulates erythromycin-induced resistance[END_REF] | MXXXXIFVI(S) (Arenz et al., 2014a) | (as for ErmA) | A-site tRNA binding
ErmD leader 3 [START_REF] Hue | Regulation of the macrolide-lincosamide-streptogramin B resistance gene ermD[END_REF] | MTHSMRL(R), minimal motif MRL(R) [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF] | (as for ErmA) | Peptidyl transfer

1.4.2.1 TnaC acts as a sensor for tryptophan

Metabolic pathways are tightly regulated. In E. coli, the tryptophan degradation pathway is regulated by the arrest peptide TnaC, which is encoded upstream of tryptophanase (TnaA) and the tryptophan-specific permease (TnaB) [START_REF] Konan | Regulation of the Escherichia coli tna operon: nascent leader peptide control at the tnaC stop codon[END_REF]. TnaC regulates the expression of TnaA and TnaB at the transcriptional level. In the absence of tryptophan, TnaC is translated and the peptide is released from the ribosome. This allows the mRNA to adopt a specific secondary structure, and transcription is prematurely terminated through Rho factor binding. In the presence of free tryptophan, TnaC arrests when the ribosome reaches the stop codon, preventing peptide release by RF2 in E. coli [START_REF] Gong | The mechanism of tryptophan induction of tryptophanase operon expression: tryptophan inhibits release factor-mediated cleavage of TnaC-peptidyl-tRNAPro[END_REF]. In Proteus vulgaris, the arrest occurs during elongation rather than during termination [START_REF] Cruz-Vera | Tryptophan inhibits Proteus vulgaris TnaC leader peptide elongation, activating tna operon expression[END_REF]. The arrest results in a different mRNA fold in which the Rho-binding site is sequestered, so the full-length mRNA is transcribed. Insights into the underlying molecular arrest mechanism were obtained by cryo-EM; the structures were solved using the E. coli TnaC sequence and E. coli ribosomes. A first cryo-EM study showed that the TnaC peptide forms specific contacts with the tunnel wall [START_REF] Seidelt | Structural insight into nascent polypeptide chainmediated translational stalling[END_REF].
A more recent cryo-EM structure identified two Trp molecules bound within hydrophobic pockets formed by the TnaC peptide and located 15-20 Å away from the PTC (Bischoff et al., 2014), in agreement with a recent biochemical study [START_REF] Martínez | Interactions of the TnaC nascent peptide with rRNA in the exit tunnel enable the ribosome to respond to free tryptophan[END_REF]. The peptide forms specific contacts with L22, L4 and the 23S rRNA. The PTC is inactivated through distinct positions of U2585 and A2602 that prevent release factor binding. U2586 forms stacking interactions with one of the bound Trp ligands, possibly restricting the movement of U2585. Furthermore, the orientation of the penultimate amino acid clashes with the GGQ motif of the release factor (Bischoff et al., 2014).

Erm leader peptides sense the presence of macrolide antibiotics

The expression of antibiotic resistance genes can be regulated by short leader peptides. Macrolide and ketolide resistance genes encoding a methyltransferase (erythromycin resistance methylase, Erm) are regulated by upstream-encoded leader peptides [START_REF] Ramu | Programmed drug-dependent ribosome stalling[END_REF]. Different leader peptide sequences were identified, which differ in their arrest motifs and can be classified accordingly. Expression of the downstream erm gene can be regulated through a translational or a transcriptional mechanism. The resistance gene product specifically methylates A2058, resulting in the disruption of the erythromycin binding site [START_REF] Katz | Expression of the macrolide-lincosamidestreptogramin-B-resistance methylase gene, ermE, from Streptomyces erythraeus in Escherichia coli results in N 6-monomethylation and N 6, N 6-dimethylation of ribosomal RNA[END_REF][START_REF] Roberts | Nomenclature for macrolide and macrolide-lincosamide-streptogramin B resistance determinants[END_REF].
The first erm leader gene to be characterized was ermCL [START_REF] Gryczan | Conformational alteration of mRNA structure and the posttranscriptional regulation of erythromycin-induced drug resistance[END_REF][START_REF] Horinouchi | Posttranscriptional modification of mRNA conformation: mechanism that regulates erythromycin-induced resistance[END_REF]. The ErmCL peptide itself has been studied biochemically and by cryo-EM (Arenz et al., 2014a). The 3.9 Å structure of the arrested 70S E. coli ribosome reveals stable contacts of the peptide with erythromycin and the ribosomal tunnel wall, resulting in a unique orientation of A2062. This in turn leads to a reorientation of bases within the PTC: U2585 adopts a unique orientation, U2586 forms direct contacts with the growing peptide chain, and A2602 points into the A-site binding pocket, preventing A-site tRNA accommodation (Arenz et al., 2014a). Another erm leader peptide studied by cryo-EM is ErmBL. The path of the ErmBL peptide through the tunnel differs significantly from that of ErmCL: ErmBL does not form direct contacts with the drug, but only with the tunnel wall [START_REF] Arenz | Molecular basis for erythromycin-dependent ribosome stalling during translation of the ErmBL leader peptide[END_REF]. ErmBL adopts a unique path in the ribosomal tunnel because Asp10 is rotated, resulting in a displacement of A76 of the P-site tRNA. This conformation produces a suboptimal geometry for the nucleophilic attack by the A-site amino acid (Arenz et al., 2016). The A-site tRNA Lys is bound in the structure in a conformation different from the fully accommodated state, and U2585 is trapped between the uninduced and induced states. Molecular dynamics simulations have shown that the allosteric rearrangement of the PTC caused by erythromycin traps the lysine side chain, whereas small hydrophobic side chains could overcome this effect so that the ribosome would not arrest (Arenz et al., 2016).
In bacteria, long arrest peptides are involved in the expression of membrane proteins, metabolic enzymes, and antibiotic resistance genes. The arrest peptides form interactions with the ribosomal tunnel wall and, where applicable, with a ligand. This leads to an allosteric rearrangement of the PTC, resulting in the inhibition of A-site tRNA or RF accommodation through stabilization of the uninduced state, formation of secondary structure within the PTC, or reorientation of the 3' end of the P-site tRNA.

Short ligand-dependent ribosomal arrest peptides in bacteria

To gain a broader understanding of the abundance of macrolide-dependent arrest sequences, ribosome profiling was performed on E. coli cells that were briefly exposed to erythromycin or telithromycin [START_REF] Kannan | The general mode of translation inhibition by macrolide antibiotics[END_REF]. Another independent study used Staphylococcus aureus exposed to a sublethal concentration of the macrolide azithromycin [START_REF] Davis | Sequence selectivity of macrolide-induced translational attenuation[END_REF]. Both ribosome profiling datasets identified within proteins the motif +X(+) (+ = positively charged residue, X = any amino acid), which arrests the ribosome once the X residue is bound to the P-site tRNA. This led to the conclusion that the nature of the amino acids within the PTC determines the arrest of the peptide, and suggested that macrolide binding leads to the formation of a restrictive PTC [START_REF] Davis | Sequence selectivity of macrolide-induced translational attenuation[END_REF][START_REF] Kannan | The general mode of translation inhibition by macrolide antibiotics[END_REF]. The arrest peptide ErmDL contains this particular motif, namely MRL(R), and arrests accordingly in the presence of erythromycin. Sequential shortening of the ErmDL sequence revealed the minimal stalling motif MRL(R) [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF].
The peptide is too short to reach the binding site of the drug, leading to the assumption that the drug and the peptide together induce an allosteric rearrangement of the ribosomal residues within the PTC. Residues that were previously identified as crucial for the arrest of ErmAL and ErmCL, such as A2062 or C2610, do not seem to have a direct impact on the stalling efficiency [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF][START_REF] Vázquez-Laslop | Role of antibiotic ligand in nascent peptide-dependent ribosome stalling[END_REF](Vázquez-Laslop et al., 2010). In contrast, base U2585, which adopts distinct conformations during translation elongation to prevent premature peptide hydrolysis and to allow A-site tRNA accommodation (Schmeing et al., 2005a;[START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF], seems to adopt a specific conformation as a consequence of erythromycin binding, as suggested by molecular dynamics (MD) simulations and ribosomal residue protection assays [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF]. Further biochemical studies revealed that the fMRL(R) motif represents only one possible stalling sequence and can be generalized to the motif fM+X(+), matching the motif identified within nascent peptide chains by ribosome profiling [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF].
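Motifs of this form are straightforward to search for computationally. A minimal sketch, assuming "+" denotes Lys or Arg (helper names are my own):

```python
POSITIVE = {"K", "R"}

def internal_px_plus_sites(seq: str) -> list[int]:
    """0-based positions i where seq[i] is '+', seq[i+1] is any residue (X),
    and seq[i+2] is '+': the +X(+) arrest context found by ribosome
    profiling (arrest occurs once X is bound to the P-site tRNA)."""
    return [i for i in range(len(seq) - 2)
            if seq[i] in POSITIVE and seq[i + 2] in POSITIVE]

def starts_with_fmxplus(seq: str) -> bool:
    """True if the sequence begins with the fM+X(+) motif, e.g. MRL(R)."""
    return (len(seq) >= 4 and seq[0] == "M"
            and seq[1] in POSITIVE and seq[3] in POSITIVE)

print(starts_with_fmxplus("MRLR"))       # ErmDL minimal motif → True
print(internal_px_plus_sites("AKGKAA"))  # K-G-K starting at position 1 → [1]
```

Note that such a scan enumerates candidate contexts only; the actual arrest additionally requires the bound macrolide.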
Using unnatural amino acids attached to CCA-end mimics, it was shown that the positive charge of the A-site amino acid is crucial for the arrest, while the length of the side chain has only a minor effect [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF], as shown in Figure 8. In summary, fM+X(+) is an arrest motif that stalls the ribosome in the presence of macrolide antibiotics (Figure 8). Binding of the drug leads to rearrangements within the PTC and to ribosomal arrest through what appears to be an allosteric mechanism.

Polyproline-induced arrest is relieved by the protein factor EF-P

In contrast to SecM, VemP and MifM, polyproline-mediated arrest is induced by the amino acid composition within a short distance of the PTC [START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF](Starosta et al., 2014a;[START_REF] Woolstenhulme | High-precision analysis of translational pausing by ribosome profiling in bacteria lacking EFP[END_REF] and appears to be caused mainly by the chemistry and stereochemistry of proline [START_REF] Muto | Peptidyl-prolyl-tRNA at the ribosomal P-site reacts poorly with puromycin[END_REF]. Among the canonical amino acids, proline is the only N-alkyl amino acid; it is more basic, and its imino group is more likely to be protonated at physiological pH than the amino groups of the other canonical amino acids. In addition, proline is the only canonical amino acid that readily forms cis as well as trans peptide bonds, introducing possible kinks into secondary structure elements [START_REF] Muto | Peptidyl-prolyl-tRNA at the ribosomal P-site reacts poorly with puromycin[END_REF][START_REF] Pavlov | Slow peptide bond formation by proline and other N-alkylamino acids in translation[END_REF].
While fast kinetic studies have shown that proline is incorporated at a slower peptide bond formation rate than other canonical amino acids [START_REF] Pavlov | Slow peptide bond formation by proline and other N-alkylamino acids in translation[END_REF], recent in vivo and in vitro studies have shown that translation of proteins containing three consecutive prolines is severely reduced in the absence of a specific protein factor [START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF][START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF][START_REF] Woolstenhulme | Nascent peptides that block protein synthesis in bacteria[END_REF]. Three consecutive prolines are encoded in the primary structure of approximately 100 proteins in E. coli, among which are important housekeeping factors such as valyl-tRNA synthetase (ValS) (Starosta et al., 2014b). These proteins can still be produced owing to the protein factor EF-P, which specifically recognizes ribosomes arrested at polyproline motifs and catalyzes their rescue [START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF][START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]. Similar results were observed for the eukaryotic homolog, initiation factor 5A (eIF5A) [START_REF] Gutierrez | eIF5A promotes translation of polyproline motifs[END_REF]. Further proteomic studies using stable isotope labeling of amino acids in cell culture (SILAC) on EF-P deletion strains (Δefp) revealed a larger number of downregulated genes than the roughly 100 encoding three consecutive prolines [START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF].
Further analysis using in vivo reporter systems and in vitro translation assays revealed the full complexity of polyproline-mediated arrest. The strength of the arrest is modulated by the nature of the amino acid in the -2, -1 or +1 position [START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF](Starosta et al., 2014a) (Table 2); strong arrest sequences are shown in Figure 9.

Figure 9: Strong arrest sequences [START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF][START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF](Starosta et al., 2014a;[START_REF] Woolstenhulme | High-precision analysis of translational pausing by ribosome profiling in bacteria lacking EFP[END_REF].

These sequences were validated by recent ribosome profiling experiments comparing data obtained from wild-type cells with data from Δefp cells [START_REF] Woolstenhulme | High-precision analysis of translational pausing by ribosome profiling in bacteria lacking EFP[END_REF]. It has been shown that EF-P and its homologs need to be post-translationally modified to gain activity. The modification is species-specific and ranges from glycosylation to β-lysylation [START_REF] Bailly | Predicting the pathway involved in post-translational modification of elongation factor P in a subset of bacterial species[END_REF][START_REF] Bullwinkle | R)-β-lysine-modified elongation factor P functions in translation elongation[END_REF][START_REF] Lassak | Arginine-rhamnosylation as new strategy to activate translation elongation factor P[END_REF][START_REF] Park | Post-translational modification by β-lysylation is required for activity of Escherichia coli elongation factor P (EF-P)[END_REF].
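Polyproline runs and their XPPX contexts can be enumerated directly from a protein sequence. A minimal sketch (function names and the run-length threshold are my own choices):

```python
import re

def polyproline_sites(seq: str, min_run: int = 3) -> list[tuple[int, str]]:
    """Find runs of >= min_run consecutive prolines (PPP by default),
    reporting the 0-based start position and the run itself."""
    return [(m.start(), m.group())
            for m in re.finditer(rf"P{{{min_run},}}", seq)]

def xppx_contexts(seq: str) -> list[str]:
    """Return the -1/PP/+1 context (XPPX) around each diproline, since
    the flanking residues modulate arrest strength."""
    return [seq[m.start() - 1 : m.start() + 3]
            for m in re.finditer(r"(?<=.)PP(?=.)", seq)]

print(polyproline_sites("MAPPPGLD"))  # → [(2, 'PPP')]
print(xppx_contexts("MAPPG"))         # → ['APPG']
```

Scanning a whole proteome this way would recover candidate EF-P-dependent proteins, such as the roughly 100 E. coli proteins with PPP motifs mentioned above; how strongly each context stalls still has to come from the reporter and profiling data.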
Bacterial cells lacking EF-P or its modifying enzymes show defects in fitness, motility, membrane integrity and virulence [START_REF] Navarre | PoxA, yjeK, and elongation factor P coordinately modulate virulence and drug resistance in Salmonella enterica[END_REF]. To gain insights into the function of EF-P and its eukaryotic homolog, several structures were determined by cryo-EM and X-ray crystallography. The first structural information on the function of EF-P was obtained by co-crystallization of 70S T. thermophilus ribosomes, unmodified EF-P from T. thermophilus and tRNAi Met from E. coli, resulting in a structure at 3.5 Å resolution [START_REF] Blaha | Formation of the first peptide bond: the structure of EF-P bound to the 70S ribosome[END_REF]. The structure was solved under the assumption that EF-P promotes and enhances the formation of the first peptide bond [START_REF] An | Identification and quantitation of elongation factor EF-P in Escherichia coli cell-free extracts[END_REF][START_REF] Glick | Peptide Bond Formation Stimulated by Protein Synthesis Factor EF-P Depends on the Aminoacyl Moiety of the Acceptor[END_REF][START_REF] Glick | Identification of a soluble protein that stimulates peptide bond synthesis[END_REF]. The factor binds next to the P-site tRNA, stabilizing the orientation of its 3'CCA end [START_REF] Blaha | Formation of the first peptide bond: the structure of EF-P bound to the 70S ribosome[END_REF]. More recently, the Beckmann group published a 3.9 Å cryo-EM structure of the 80S yeast (Saccharomyces cerevisiae) ribosome, pulled down from cells via a His-tagged Ski complex to study non-stop decay (NSD). During cryo-EM data processing, a large number of particles were found to contain two tRNAs and eIF5A, the eukaryotic homolog of EF-P. Like EF-P, eIF5A needs to be modified to be active.
The eukaryotic modification is the unusual amino acid hypusine, which is formed by transferring a 4-aminobutyl moiety onto the corresponding Lys residue, followed by hydroxylation [START_REF] Dever | The hypusine-containing translation factor eIF5A[END_REF][START_REF] Gutierrez | eIF5A promotes translation of polyproline motifs[END_REF]. The structure showed that the hypusine modification of eIF5A forms a direct contact with the CCA end of the P-site tRNA, aiding the correct orientation of the peptide chain for peptide bond formation [START_REF] Schmidt | Structure of the hypusinylated eukaryotic translation factor eIF-5A bound to the ribosome[END_REF]. Recently, a study from the Yusupov group utilized tRNA mimics (short peptides attached to the RNA fragment ACCA, which mimics the 3'CCA end of the tRNA) in complex with the 80S S. cerevisiae ribosome, giving a greater understanding of the conformation of proline within the PTC (Melnikov et al., 2016a). The structures, solved by X-ray crystallography, have resolutions ranging from 3.1 to 3.5 Å. To obtain greater insight into the conformation of a diproline peptide, the tRNA mimic was stabilized with the antibiotic sparsomycin. The backbone of the dipeptide is bent, with the N-terminus pointing towards the tunnel wall, while the N-terminus of the control peptide (Phe-Leu) points into the exit tunnel (Melnikov et al., 2016a). In addition, proline does not bind to the A-site crevice in the same orientation as other amino acids, leading to an incorrect orientation for peptide bond formation (Melnikov et al., 2016a). Further studies, including eIF5A in complex with the yeast ribosome, showed that eIF5A binds at the same position even when the P-site tRNA mimic is absent, with the hypusine modification pointing towards the PTC. In addition, binding of the factor results in a conformational change of the ribosome (Melnikov et al., 2016b).
All in all, EF-P is a non-canonical translation factor that presents a possible target for antibiotic research. Although many structures of EF-P or its eukaryotic homolog in complex with the ribosome have been published, there is no high-resolution structure containing the native substrate Pro-tRNA Pro, and the orientation of a prokaryotic modification remains unknown.

1.5 Flexizyme as a tool to study translation

Identification and sequence optimization

One major aim of this thesis is to study short nascent-chain arrest peptides by structural characterization. To do so, the arrest peptide needs to be attached to the 3'CCA end of tRNAs; to obtain peptidylated tRNA, the flexizyme methodology was used. The flexizyme methodology, a method to transfer activated amino acids and derivatives specifically onto the 3'CCA end of a tRNA, was developed to overcome limitations of genetic code reprogramming in vitro [START_REF] References Goto | Flexizymes for genetic code reprogramming[END_REF]. These limitations included the high substrate specificity of aminoacyl-tRNA synthetases, which meant that only a limited number of amino acids could be used to produce novel potential therapeutic peptides with the bacterial ribosome [START_REF] Hartman | An expanded set of amino acid analogs for the ribosomal translation of unnatural peptides[END_REF][START_REF] Hartman | Enzymatic aminoacylation of tRNA with unnatural amino acids[END_REF]. RNA is involved in many reactions, such as splicing and peptide bond formation; these activities are indicators of an "RNA world" that existed before protein enzymes evolved. It has been hypothesized that processes like the aminoacylation of tRNAs were originally catalyzed by ribozymes (as reviewed by [START_REF] Hager | Ribozymes: aiming at RNA replication and protein synthesis[END_REF]).
First attempts to identify RNA sequences that can aminoacylate and recognize tRNAs resulted in the identification of sequences that transfer the amino acid onto their own 5' end or internally onto 2'OH groups [START_REF] Lohse | Ribozyme-catalysed amino-acid transfer reactions[END_REF](Suga et al., 1998a;Suga et al., 1998b). Further experiments and optimization steps resulted in the identification of an RNA sequence that uses activated biotin-L-glutamine adenylate and biotin-L-phenylalanine adenylate to act in cis (self-aminoacylation) or in trans (aminoacylation of a different RNA) [START_REF] Lee | Ribozyme-catalyzed tRNA aminoacylation[END_REF]. The natural leaving group, adenylate, was replaced by a cyanomethyl ester (CME) to ensure that it is the chemical properties of the amino acid that form the interactions with the ribozyme [START_REF] Lee | Ribozyme-catalyzed tRNA aminoacylation[END_REF]. For incorporation of an amino acid into the nascent chain by the ribosome, the amino acid is specifically attached to the 3'CCA end of its tRNA by the aminoacyl-tRNA synthetase (ARS). To site-direct the aminoacylation reaction catalyzed by ARS-like ribozymes, a tRNA precursor molecule was used (Saito et al., 2001a). In cells, tRNAs are produced as long precursors that are processed post-transcriptionally; in particular, the 5' end is processed by the ribozyme RNase P [START_REF] Meinnel | Maturation of pre-tRNAfMet by Escherichia coli RNase P is specified by a guanosine of the 5′-flanking sequence[END_REF]. The 5' leader sequence might once have had aminoacyl-tRNA synthetase capability, aminoacylating its own 3'CCA end, an activity that would have been lost after the evolution of protein-based aminoacyl-tRNA synthetases. Several rounds of in vitro selection were therefore performed on a pre-tRNA with a randomized 5' region.
The selected 5' region of the pre-tRNA is capable of recognizing L-phenylalanine adenylate or L-phenylalanine cyanomethyl ester (CME) and can then transfer the amino acid specifically to its own 3' CCA end, acting in cis (Saito et al., 2001a). Furthermore, the identified 5' region folds independently of the tRNA region, and after RNase P cleavage the site-directed aminoacylation activity is preserved (acting in trans) (Saito et al., 2001b). Moreover, the identified ribozyme can also specifically aminoacylate a microhelix mimicking the CCA end of a tRNA, meaning that the reaction is independent of the folding of the target molecule (Saito et al., 2001b). The reaction is enhanced by Mg 2+ cations, which preserve the folding of the ribozyme [START_REF] Saito | Outersphere and innersphere coordinated metal ions in an aminoacyl-tRNA synthetase ribozyme[END_REF]. Further sequence optimization led to the identification of a flexizyme (Fx) that forms three base pairs with the target tRNA (Fx3). The three base pairs are formed with the RCCA 3' end of tRNAs (base pairing underlined). The reaction is faster when base 73 (R) is an A, U or G; a C at this position results in a longer reaction duration. Fx3 binds specifically enough for efficient aminoacylation without disrupting the acceptor stem of the tRNA, resulting in a flexizyme that can recognize all tRNAs [START_REF] Murakami | A versatile tRNA aminoacylation catalyst based on RNA[END_REF][START_REF] Ramaswamy | Designer ribozymes: programming the tRNA specificity into flexizyme[END_REF]. To understand the underlying reaction mechanism, the structure of Fx3 fused to a microhelix mimicking the 3' acceptor stem of a tRNA, in complex with the protein U1A, was solved at 2.8 Å resolution. The Fx3 flexizyme forms four helices: three adopt compact right-handed (A-form) helices and one is irregular. The irregular helix is necessary for substrate binding.
Base pairing between the CCA end and the flexizyme gives rise to additional stacking interactions as well as a defined network of hydrogen bonds, which makes the 3'OH accessible for substrate binding [START_REF] Xiao | Structural basis of specific tRNA aminoacylation by a small in vitro selected ribozyme[END_REF]. To increase the substrate variety and the reaction yields, the Fx3 sequence was used as a template for further rounds of in vitro selection using a randomized sequence in the substrate binding site. The identified sequences are listed in Figure 10. The in vitro selected flexizymes have high sequence similarity; however, mutations within the substrate recognition site allow the recognition of three different benzyl-based leaving groups (Figure 10). Figure 11 illustrates the different flexizymes with their corresponding activation groups and the chemical properties of the amino acid derivatives [START_REF] References Goto | Flexizymes for genetic code reprogramming[END_REF]. As shown in Figure 11, enhanced Fx (eFx) is more active than Fx3 and recognizes two different leaving groups: aromatic amino acids are activated by CME, the same leaving group as for Fx3, while bulky amino acids can be activated by 4-chlorobenzyl thioester (CBT). The dinitro flexizyme (dFx) recognizes the leaving group 3,5-dinitrobenzyl ester (DBE) and transfers various amino acids and activated peptides onto tRNAs (Murakami et al., 2006a). dFx and eFx already made a wider variety of activated peptides accessible for further experiments; however, hydrophobic amino acids activated with CBT or DBE are insoluble in water, resulting in poor reactivity of the activated peptide.
To overcome this issue with very hydrophobic amino acids, such as those with long alkyl side chains, they can be activated with an amino-derived benzyl thioester (ABT) as the leaving group [START_REF] Niwa | A flexizyme that selectively charges amino acids activated by a water-friendly leaving group[END_REF].

Case studies

Flexizyme-mediated charging gives access to a great variety of substrates that can be incorporated into a growing nascent peptide. To do so, in vitro translation was performed in a reconstituted in vitro translation system [START_REF] Shimizu | Cell-free translation reconstituted with purified components[END_REF][START_REF] Shimizu | Protein synthesis by pure translation systems[END_REF]. Reactions were performed using a flexizyme-aminoacylated tRNA in an in vitro translation system lacking that tRNA, the natural amino acid, and the corresponding aminoacyl-tRNA synthetase. This system is called flexible in vitro translation (FIT) [START_REF] References Goto | Flexizymes for genetic code reprogramming[END_REF] and can be used to initiate translation with any L- or D-amino acid [START_REF] Goto | Initiating translation with D-amino acids[END_REF]Murakami et al., 2006a), as well as with short peptides and peptide-like structures [START_REF] Goto | Translation Initiator tRNA charged with exotic peptides[END_REF]. Unnatural amino acids can also be incorporated into the growing peptide chain by charging the molecule onto an elongator tRNA [START_REF] Murakami | Flexizyme as a versatile tRNA acylation catalyst and the application for translation[END_REF]. To ensure that all native amino acids can be included in the reaction, a tRNA-SNN library can be produced by individually in vitro transcribing each tRNA with an SNN anticodon. In some cases, one amino acid, like alanine or proline, can be delivered to the ribosome by multiple tRNAs.
One of these tRNAs can be aminoacylated with the native amino acid using the corresponding aminoacyl-tRNA synthetase, and the others with non-proteinogenic amino acids or derivatives using flexizyme [START_REF] Iwane | Expanding the amino acid repertoire of ribosomal polypeptide synthesis via the artificial division of codon boxes[END_REF]; all charged tRNAs can then be mixed and incorporated while the ribosome translates an NNS library. A recent study [START_REF] Katoh | Essential structural elements in tRNA Pro for EF-P-mediated alleviation of translation stalling[END_REF] used the flexizyme approach to identify the nucleotides of tRNA Pro that are crucial for the recognition of EF-P and the release of polyproline-mediated translational arrest. Different mutants of tRNA Pro, tRNA Ser, tRNA Val and tRNAi Met were in vitro transcribed and prolinylated using flexizyme. The reactivity of EF-P was quantified by measuring the incorporation of a downstream radioactively labeled Asp, and kinetic parameters were obtained by measuring Pro incorporation rates in the presence or absence of EF-P. The major result of this study was that the D-loop structure of tRNA Pro and tRNAi Met is crucial for EF-P recognition and thus needs to be included in further structural studies to understand the detailed mechanism of EF-P [START_REF] Katoh | Essential structural elements in tRNA Pro for EF-P-mediated alleviation of translation stalling[END_REF]. In summary, extensive studies have shown that flexizymes can be used to transfer amino acids or small peptides specifically onto the 3' CCA end of any tRNA and to incorporate them into the growing peptide chain during initiation and elongation.

Aims

The bacterial ribosome is one of the major targets for antibiotics (Wilson, 2009, 2014; [START_REF] Wilson | On the specificity of antibiotics targeting the large ribosomal subunit[END_REF]).
The rise of multiple bacterial resistance genes against clinically used antibiotics is an increasing threat, and there is a need for the development of new therapeutic strategies. Certain peptides can inhibit bacterial translation, either as free peptides produced as part of defense mechanisms against other organisms, such as PrAMPs, or during their own translation (nascent chain-mediated translational arrest). PrAMPs are antimicrobial peptides which are produced by insects and mammals as part of their innate immune system [START_REF] Otvos | Peptides antimicrobiens riches en proline (PrAMPs) La première partie de cette thèse a consisté à la compréhension du mécanisme sous-jacent de l'inhibition de la traduction bactérienne par des peptides antimicrobiens riches en proline[END_REF]. Though recent studies have shown that PrAMPs inhibit protein biosynthesis by binding to the bacterial ribosome (Krizsan et al., 2014;Mardirossian et al., 2014), the molecular mechanism remains unknown. In order to obtain detailed insights into the molecular mechanism, the structures of different PrAMPs in complex with the 70S T. thermophilus ribosome are determined here using X-ray crystallography. During nascent chain-mediated translational arrest, the bacterial ribosome is inhibited by the peptide it is translating. Structural information on long nascent-chain arrest peptides such as SecM [START_REF] Zhang | Mechanisms of ribosome stalling by SecM at multiple elongation steps[END_REF] and ErmCL (Arenz et al., 2014a) has been obtained recently using cryo-EM. The underlying mechanisms by which short arrest peptides, including polyproline motifs as well as fM+X(+) in the presence of erythromycin, arrest the ribosome remain unknown. To investigate this further, the aim of this work was to study these arrest complexes, using the flexizyme methodology for complex formation, for structural and biochemical characterization.
Obtaining high-resolution or near-atomic resolution structures will give insights into the molecular conformation of the nascent chain as well as into its allosteric effect on the bacterial ribosome. The knowledge obtained from this will enable a greater understanding of how peptide bond formation can be inhibited specifically and could lead to the development of novel, highly specific antibiotics targeting this process.

Materials

Chemicals

Kits

Methods

General methods

Microbiological handling

During work with bacteria, all steps were performed under sterile conditions. When growing E. coli in liquid media, Lysogeny Broth (LB, [START_REF] Bertani | Studies on Lysogenesis I.: The Mode of Phage Liberation by Lysogenic Escherichia coli[END_REF]) or Terrific Broth (TB, [START_REF] Tartof | Improved media for growing plasmid and cosmid clones[END_REF]) was used. The media were prepared as listed in the following table:

The following E. coli strains were used (strain; genotype; experiment; citation):

DH5α; plasmid amplification; Invitrogen

HB101; F - Lambda - araC14 leuB6(Am) DE(gpt-proA)62 lacY1 glnX44(AS) galK2(Oc) recA13 rpsL20(strR) xylA5 mtl-1 thiE1 hsdS20(rB -, mB -); tRNA overexpression; [START_REF] Hanahan | Techniques for transformation of E. coli[END_REF]

BL21 gold pLysS; B F - ompT gal dcm lon hsdSB(rB - mB -) λ(DE3 [lacI lacUV5-T7p07 ind1 sam7 nin5]) [malB + ]K-12(λ S ) pLysS[T7p20 orip15A](Cm R ); protein overexpression; [START_REF] Studier | Use of bacteriophage T7 RNA polymerase to direct selective high-level expression of cloned genes[END_REF] (pLysS: [START_REF] Moffatt | T7 lysozyme inhibits transcription by T7 RNA polymerase[END_REF])

BL21 AI; B F - ompT gal dcm lon hsdSB(rB - mB -) [malB + ]K-12(λ S ) araB::T7RNAP-tetA; tRNA overexpression; Invitrogen

KC6; A19 ΔspeA ΔtnaA ΔsdaA ΔsdaB ΔgshA ΔtonA ΔendA met + ; ribosome isolation; [START_REF] Calhoun | Total amino acid stabilization during cell-free protein synthesis reactions[END_REF]

Ribosomes were purified by Natacha Pérébaskine from Thermus thermophilus HB8 cells (wild type, [START_REF] Oshima | Description of Thermus thermophilus (Yoshida and Oshima) comb. nov., a nonsporulating thermophilic bacterium from a Japanese thermal spa[END_REF]). The cells were ordered from the Bioexpression and Fermentation Facility of the University of Georgia (USA). For extraction of total T. thermophilus tRNA and genomic DNA, cells were grown at 70°C under optimal aerobic conditions in 689 medium, whose composition is shown in the following table: Cells were plated on 689 solid medium supplemented with 1% Gelzan and incubated for 24 h at 70°C. Single colonies had an orange color.

Extraction of genomic DNA from E. coli and T. thermophilus

Genomic DNA was extracted from 5 mL of an E. coli or T. thermophilus overnight culture. The cells were harvested for 10 min at 4000 g in a 5424R Eppendorf centrifuge (Eppendorf, Hamburg, Germany). The pellet was resuspended in 500 µL TES buffer (50 mM Tris•HCl pH 7.4, 1 mM EDTA, 1% SDS), and the cells were lysed by adding 500 µL phenol (pH 8.0):chloroform:isoamyl alcohol (25:24:1). Phenol denatures proteins, and pH 8.0 facilitates the degradation of RNA. The suspension was carefully inverted until a homogeneous solution had formed, followed by spinning at 13,000 revolutions per minute (rpm; 18,000 relative centrifugal force, rcf) in a tabletop centrifuge (Thermofisher, Waltham, MA, USA) at 4°C for 20 min.
This resulted in separation into three phases: the aqueous phase containing the DNA, the interphase containing denatured proteins, and the organic phase containing lipids and other hydrophobic compounds. The aqueous phase, which contains the extracted DNA, was washed twice with chloroform:isoamyl alcohol (24:1). The DNA was precipitated by adding final concentrations of 300 mM Na(CH3CO2) pH 4.8 and 50% isopropanol. The solution was incubated for 1 h at -20°C to enhance precipitation. Subsequently, the DNA was pelleted by spinning at full speed for 30 min at 4°C in a tabletop centrifuge. The pellet was washed with 70% ethanol (EtOH) to remove ions. The DNA was resuspended in 200 µL TE buffer (10 mM Tris•HCl pH 8.0, 1 mM EDTA). To enhance dissolution, the sample was incubated for 1 h at 65°C and overnight at 4°C. The extracted DNA was used for polymerase chain reaction (PCR, Chapter 3.3).

Analytical procedures

Standard gel electrophoresis

The purity and concentration of nucleic acids and proteins were analyzed by gel electrophoresis. Here, biomolecules are exposed to an electric field and separated according to their charge. DNA fragments shorter than 500 bp were analyzed using TBE (Tris-Borate-EDTA)-PAGE (polyacrylamide gel electrophoresis). The gels were cast using the Bio-Rad gel casting system with the following components, with 0.04% (w/v) APS and 0.01% (v/v) TEMED added to start polymerization. After the gels had polymerized, 3 µL of PCR sample was mixed with 0.5 µL DNA loading dye and loaded into the wells. Additionally, 1 µL low molecular weight marker (NEB, Ipswich, MA, USA) was loaded to determine the size and concentration of the DNA fragments. The gels were run in 1x TBE buffer (90 mM Tris, 90 mM boric acid, 2 mM EDTA) at 200 V for 1 h at room temperature. The gels were afterwards stained with SYBR Gold (Invitrogen, 1:1000 diluted). RNA samples were analyzed using TBU (Tris, boric acid, urea)-PAGEs.
Urea is a chaotropic agent that disrupts intramolecular hydrogen bonds and van der Waals (VdW) forces, resulting in the unfolding of secondary structures of the nucleic acids. Prior to loading, the RNA samples were mixed with RNA loading dye (formamide, 250 µM EDTA, 0.25% (w/v) bromophenol blue (BPB), 0.25% xylene cyanol). EDTA chelates Mg 2+ ions that could be present in the sample and would otherwise lead to degradation of the RNA in the following denaturation step (95°C, 10 min). Formamide is added to enhance the denaturation of the RNA. The TBU-PAGEs were cast using the following mixture: Gels were run at room temperature in 1x TBE buffer; large gels were run for up to 24 h at 250 V. Gels were afterwards stained with SYBR Gold (Invitrogen, Carlsbad, CA, USA) and detected with the GelDoc+ (Biorad, Hercules, CA, USA). TBU-PAGEs have a basic pH, which is not suitable for analyzing peptidylated tRNAs, since the ester bond linking peptide and tRNA is pH sensitive. Samples containing peptidylated tRNAs were therefore analyzed on acidic denaturing PAGEs, prepared as follows: These gels were likewise stained with SYBR Gold (Invitrogen, Carlsbad, CA, USA) and detected with the GelDoc+ (Biorad, Hercules, CA, USA).

Protein gels

Protein purifications were analyzed using SDS-PAGEs. SDS binds to the hydrophobic backbone of proteins and gives the molecule a negative charge that corresponds to its molecular weight. This allows the separation of proteins in a PAGE according to their size. Protein samples were combined with SDS loading dye (250 mM Tris•HCl pH 6.8, 1% SDS, 30% glycerol, 1 mM DTT, 0.1% BPB), which resulted in the denaturation of the proteins by SDS. Gels were prepared using the following mixture:

Northern blot

Northern blotting is a molecular biology method that allows the detection of specific RNAs using a labeled DNA probe that is reverse complementary to the RNA of interest.
During this process, the RNA is separated by gel electrophoresis and subsequently transferred onto a membrane [START_REF] Alwine | Method for detection of specific RNAs in agarose gels by transfer to diazobenzyloxymethyl-paper and hybridization with DNA probes[END_REF][START_REF] Thomas | Hybridization of denatured RNA and small DNA fragments transferred to nitrocellulose[END_REF]. Northern blotting was used to analyze the overexpression of specific tRNAs as well as to monitor the elution fractions during tRNA purification. During the process, RNAs are separated and denatured by gel electrophoresis, then transferred to and fixed on a membrane. A fluorescently labeled DNA probe, reverse complementary to the anticodon loop of the tRNA of interest and with an optimal annealing temperature of 65°C, was used to specifically detect the tRNA of interest. The anticodon loop is specific for each tRNA and is its least structured part. The optimal amount for detection varies from 1 pmol to 100 pmol of tRNA. First, the RNA was separated on a 9% TBU-PAGE run for 45 min at room temperature. For the blotting, the membrane Hybond-N+ (GE Healthcare, Chicago, IL, USA) and the TBU-PAGE were assembled and prepared as shown in the following figure: After assembly of the blot according to the order shown in Figure 12, it was placed in a Biorad Trans-Blot® cell (Biorad, Hercules, CA, USA) and the transfer of the RNA onto the membrane was performed overnight at 10 V in 1x TBE buffer at 4°C with constant stirring of the buffer. After the transfer, the membrane was moved onto a Whatman paper and baked for 2 h at 80°C to fix the RNA onto the membrane.
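The probe design above calls for an oligo with an optimal annealing temperature of 65°C. As a rough first check of a candidate probe — a sketch only, since the actual probe sequences are not listed here — the Wallace rule (2°C per A/T, 4°C per G/C; a reasonable approximation only for short oligos of roughly 14-20 nt) can estimate a melting temperature:

```python
def wallace_tm(oligo: str) -> int:
    """Estimate the melting temperature (degrees C) of a short DNA oligo
    with the Wallace rule: Tm = 2*(A+T) + 4*(G+C)."""
    oligo = oligo.upper()
    at = oligo.count("A") + oligo.count("T")
    gc = oligo.count("G") + oligo.count("C")
    return 2 * at + 4 * gc

# Illustrative placeholder sequence, not an actual probe from this work:
print(wallace_tm("ACGTACGTACGT"))  # 2*6 + 4*6 = 36
```

For probes near the 20-nt range, a nearest-neighbor model (as used by most primer-design tools) would give a more reliable estimate than this rule of thumb.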
In the next step, the membrane was incubated for at least 2 h at 28°C in Church buffer (250 mM sodium phosphate buffer pH 7.2, 1 mM EDTA, 7% (w/v) SDS, 0.5% (w/v) bovine serum albumin (BSA), 4 µg/mL DNA from salmon testes) [START_REF] Kim | A sensitive non-radioactive northern blot method to detect small RNAs[END_REF]. This blocking step prevents unspecific binding of the labeled oligonucleotide and reduces background signal. 1 nmol of the fluorescent probe was denatured for 2 min at 95°C and then added to the blocking solution. The blot was incubated overnight at 28°C on a tube rotator. Before detection, the blot was washed twice with 300 mM NaCl, 30 mM Na3citrate (2x SSC buffer) and 0.2% (w/v) SDS, and once with 150 mM NaCl, 15 mM Na3citrate (1x SSC buffer) and 0.1% SDS. The blot was detected using the blue laser of the Pharos detection system from Bio-Rad.

Macromolecule concentration determination

Macromolecule concentrations were determined using a Nanodrop (Thermofisher). The Nanodrop was blanked with the buffer in which the macromolecules were dissolved. For the measurement, 1-2 µL of the sample was pipetted onto the probe. Nucleic acid concentrations were determined by measuring the absorbance at 260 nm, and the purity was monitored by the A260:A280 ratio; a ratio of 1.8 indicates pure DNA, while a ratio of 2.0 indicates a pure RNA sample. Protein concentrations were determined at 280 nm. Depending on the amino acid composition, the absorbance can vary. To account for the amino acid composition, the extinction coefficient ε was calculated using the bioinformatic tool ProtParam [START_REF] Gasteiger | ExPASy: the proteomics server for in-depth protein knowledge and analysis[END_REF].
The concentration can then be calculated with the following equation:

c (µg/µL) = A280 / ε (1)

The following table lists all proteins purified in this thesis, including their molecular weight and extinction coefficient:

Table 17: List of proteins purified in this thesis; the corresponding extinction coefficients and molecular weights were determined using the bioinformatic platform ProtParam. If a complex consists of two subunits, the values are listed independently. Columns: protein, number of amino acids, molecular weight [kDa], extinction coefficient [M-1 cm-1].

Native mass spectrometry

Native mass spectrometry (MS) is used to study biomolecules in the gas phase while preserving the folding the biomolecule had prior to ionization. The method has been used to study DNA folding, protein-protein interactions and ribosomes, as reviewed by [START_REF] Leney | Native mass spectrometry: what is in the name?[END_REF]. Here, the technique was used to validate the purity, the molecular weight and the presence of modifications of tRNA Pro. For this, 1 µmol (final concentration: 10 µM) of tRNA was washed eight times through 10 kDa Amicon concentrators (Merck Millipore, Billerica, MA, USA) with a buffer containing 100 mM NH4(CH3CO2). Next, 50 µL of the sample was injected into an Agilent 6560-DTIMS-Q-TOF mass spectrometer and the spectrum was recorded. The machine was set to negative mode, owing to the charge of the RNA, and the source temperature was set to 200°C with 600 V for desolvation, a so-called "soft ionization" condition. The soft ionization condition prevents fragmentation of the tRNA.

Cloning

DNA templates were generated by polymerase chain reaction (PCR), which allows exponential amplification of DNA fragments [START_REF] Mullis | Process for amplifying, detecting, and/or-cloning nucleic acid sequences (Google Patents)[END_REF]. The reaction uses DNA polymerases (DNAP) that were isolated from thermophiles and remain active after incubation at high temperatures.
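Note that dividing an absorbance reading by the molar extinction coefficient ε (in M⁻¹ cm⁻¹, as listed in Table 17) gives a molar concentration for a 1 cm path length (Beer-Lambert law, A = ε·c·l); obtaining the mass concentration in µg/µL additionally requires the molecular weight from Table 17. A minimal sketch of that conversion, with purely illustrative numbers rather than values from the table:

```python
def protein_conc_ug_per_ul(a280: float, eps_m1cm1: float, mw_kda: float,
                           path_cm: float = 1.0) -> float:
    """Beer-Lambert: A = eps * c * l  =>  molar conc. c = A / (eps * l).
    Mass conc. in g/L (numerically equal to ug/uL) = molar conc. * MW (g/mol)."""
    molar = a280 / (eps_m1cm1 * path_cm)      # mol/L
    return molar * mw_kda * 1000.0            # g/L == mg/mL == ug/uL

# Illustrative: A280 = 0.5, eps = 50 000 M^-1 cm^-1, MW = 50 kDa
print(protein_conc_ug_per_ul(0.5, 50_000, 50.0))  # 0.5 ug/uL
```

The same relation, with ε at 260 nm, applies to the nucleic acid measurements described above.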
Other components of the reaction are two short DNA oligomers (primers) that are reverse complementary to the 3' ends of the sense and antisense strands of the gene of interest. The DNAP extends the DNA oligonucleotides from the 5' to the 3' end by incorporating dNTPs complementary to the template strand. The reaction cycle can be divided into a DNA denaturation, a primer annealing and a primer extension phase [START_REF] Saiki | Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase[END_REF]. All primers are listed in the supplemental material according to their downstream application.

Amplification of a gene of interest from genomic DNA

To amplify a gene of interest, for example an aminoacyl-tRNA synthetase, DNA megaprimers were ordered from Eurogentec (Lüttich, Belgium) or Eurofins (Ebersberg, Germany). A megaprimer is a DNA oligomer overlapping 30 nt (nucleotides) with the multiple cloning site (MCS) of, in this case, the pBAT4 plasmid and 18 nt with the gene of interest [START_REF] Miyazaki | 17 MEGAWHOP Cloning: A Method of Creating Random Mutagenesis Libraries via Megaprimer PCR of Whole Plasmids[END_REF]. For cloning translation factors, histidine6 tags (H6-tags) were introduced at the N- or C-terminus as reported in [START_REF] Shimizu | Cell-free translation reconstituted with purified components[END_REF][START_REF] Shimizu | Protein synthesis by pure translation systems[END_REF]. The amplification was performed by PCR using Phusion DNA polymerase (PhuDNAP). The pipetting scheme is shown in the following table: For gene operons such as glySQ that are longer than 750 bp, the temperature during primer extension was reduced to 68°C and the reaction time was increased to 3 min. The PCR products were analyzed on 1.5% TAE-agarose gels (Chapter 3.8.1) by loading 3 µL per reaction together with the 2-log DNA ladder (NEB, Ipswich, MA, USA).
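The megaprimer layout described above — a 30-nt vector overlap joined to an 18-nt gene-specific part, with the reverse primer written as the reverse complement of the sense-strand design — can be sketched as follows. The helper names and the sequences in the example are invented placeholders, not primers from this work:

```python
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence (A<->T, G<->C)."""
    return seq.translate(COMPLEMENT)[::-1]

def megaprimer(vector_overlap: str, gene_part: str, reverse: bool = False) -> str:
    """Concatenate the vector-overlap region (30 nt in the scheme above) with
    the gene-specific region (18 nt); for the reverse primer, return the
    reverse complement of the sense-strand design."""
    primer = vector_overlap + gene_part
    return reverse_complement(primer) if reverse else primer

# Toy example (real overlaps would be 30 nt + 18 nt):
print(megaprimer("AAA", "TTG"))                # AAATTG
print(megaprimer("AAA", "TTG", reverse=True))  # CAATTT
```

In practice one would also verify the annealing temperatures of both regions, e.g. with the Tm estimate shown earlier in this chapter.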
Depending on its purity, the product was purified with a PCR purification kit (Qiagen, Hilden, Germany) or a gel extraction kit (Qiagen, Hilden, Germany) following the manufacturer's instructions.

Amplification of a gene of interest by oligo assembly

DNA templates for cell-free protein synthesis and tRNA genes were amplified by oligo assembly. To do so, the sequence was divided into several reverse and forward primers overlapping with each other so as to yield a full-length synthetic gene. To obtain tRNA genes for cloning into pBSTNAV or pUC19 plasmids, the primers overlapped 30 nt with the vector for insertion. For cell-free protein synthesis and in vitro transcription, the construct encoded a T7 promoter in the 5' region in order to initiate transcription. The product was assembled from six to eight primers; the outer primers were added in ten-fold excess over the other primers to amplify the whole fragment. The pipetting scheme is shown in the following table: The following PCR program was used to amplify the genes: The PCR products were analyzed on 9% TBE-PAGE by loading 3 µL per reaction. Depending on the purity of the product, it was purified with a PCR purification kit (Qiagen, Hilden, Germany) or a gel extraction kit (Qiagen, Hilden, Germany) following the manufacturer's instructions.

Vector restriction and ligation

For cloning, vectors were amplified in E. coli DH5α cells and purified using the minipreparation kit from Qiagen (Hilden, Germany). 1 µg of the isolated vector was digested with 1x CutSmart buffer and 1 µL of each restriction enzyme (NEB), as listed in the following table: For optimal vector digestion, the high-fidelity (HF) version of the enzymes was used when available; these enhanced enzymes have reduced star activity and higher efficiency. The reaction product was analyzed on a 1% TAE-agarose gel (Chapter 3.8) and purified via a PCR purification kit (Qiagen, Hilden, Germany).
The previously obtained insert was inserted by PCR using MEGAWHOP [START_REF] Miyazaki | Creating random mutagenesis libraries by megaprimer PCR of whole plasmid (MEGAWHOP). Directed evolution library creation: methods and protocols[END_REF][START_REF] Miyazaki | 17 MEGAWHOP Cloning: A Method of Creating Random Mutagenesis Libraries via Megaprimer PCR of Whole Plasmids[END_REF] or by sequence- and ligation-independent cloning (SLIC) [START_REF] Li | SLIC: a method for sequence-and ligation-independent cloning[END_REF]. For MEGAWHOP cloning, the insert overlaps on both sides by 18 nt with the digested vector. The following PCR mixture was used to ligate the insert into the restricted plasmid: The following PCR program was used, and the 100 µL master mix was split into two reactions of 50 µL each; for one reaction the annealing temperature was set to 50°C and for the other to 60°C: The two reactions were pooled and the PCR product was purified with a PCR purification kit (Qiagen, Hilden, Germany). The eluate was used directly for chemical transformation into E. coli DH5α without further analysis. For cloning tRNA genes into the plasmids pBSTNAV2OK and pBSTNAV3S, this approach did not work, probably due to the high AT content in the overlapping region between the insert and the plasmid. In order to clone the tRNA genes into these plasmids, the sequence- and ligation-independent cloning (SLIC) strategy was used, which relies on the 3'→5' exonuclease activity of T4 DNAP in the absence of dNTPs. 100 ng of the insert were mixed with 60 ng of digested vector and 0.5 µL T4 DNAP (NEB) and incubated for 2 min at room temperature (RT), followed by 5 min incubation on ice. During the reaction, the enzyme creates long sticky ends on the digested vector and on the insert; when the temperature is reduced in the next step, the insert and plasmid anneal [START_REF] Li | SLIC: a method for sequence-and ligation-independent cloning[END_REF].
The product was used directly for transformation into competent E. coli DH5α cells.

Mutagenesis of Thermus thermophilus Elongation Factor P

To date, the post-translational modification of T. thermophilus EF-P remains unknown. To study the orientation of an EF-P modification within the 70S T. thermophilus ribosome, T. thermophilus EF-P was mutated systematically following the MEGAWHOP protocol [START_REF] Miyazaki | Creating random mutagenesis libraries by megaprimer PCR of whole plasmid (MEGAWHOP). Directed evolution library creation: methods and protocols[END_REF] using megaprimers that are reverse complementary to each other. The forward primer anneals 24 nt downstream of the mutated region and the reverse primer overlaps 24 nt upstream of the mutated region. The reaction contained the following mixture: The template plasmid was isolated from E. coli DH5α cells and was thus methylated. Since PCR products are not methylated, the template DNA can be degraded specifically using the restriction enzyme DpnI, which recognizes and cuts specifically methylated and hemimethylated DNA double strands. To do so, the PCR reaction was mixed with CutSmart buffer (final concentration 1x) and 10 units of DpnI and incubated for 2 h at 37°C. The DNA fragments were purified using the PCR purification kit (Qiagen, Hilden, Germany) and eluted in 30 µL DNase-free water. 15 µL of this DNA was subsequently used for transformation into competent E. coli XL1-blue cells.

Preparation of chemically competent cells

For cloning, protein and tRNA expression, the corresponding plasmids were transformed into E. coli cells. To enhance uptake of the plasmid DNA, the cells were treated with MgCl2, which permeabilizes the cell wall. For the preparation of chemically competent cells, a liquid LB culture (containing the required antibiotic; see Table 10) was inoculated with the respective cells and grown overnight at 37°C with shaking at 200 rpm.
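The mutagenic primer geometry described above (forward primer: mutated region plus 24 nt downstream; reverse primer: reverse complement of 24 nt upstream plus the mutated region) can be sketched as below. This is a hypothetical helper, not the design tool actually used; it assumes the mutation replaces a codon at a known 0-based position in the template:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def mutagenic_primers(template: str, pos: int, new_codon: str, flank: int = 24):
    """Forward primer: new codon + `flank` nt downstream of the mutated codon.
    Reverse primer: reverse complement of `flank` nt upstream + new codon,
    so the two primers overlap over the mutated region."""
    fwd = new_codon + template[pos + 3:pos + 3 + flank]
    rev = reverse_complement(template[pos - flank:pos] + new_codon)
    return fwd, rev

# Toy template with a codon at position 30, short flanks for readability:
t = "A" * 30 + "TAG" + "C" * 30
print(mutagenic_primers(t, 30, "GCG", flank=4))  # ('GCGCCCC', 'CGCTTTT')
```

For a real design, the 24-nt flanks would additionally be checked for annealing temperature and secondary structure.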
This overnight culture was subsequently used to inoculate 200 mL LB media to a starting OD600 between 0.05 and 0.1, which was grown at 37°C until the culture reached an OD600 of 0.4-0.6. After pelleting the cells (4000 g, 10 min, 4°C, Eppendorf 5804R), the pellets were resuspended in 100 mL of 100 mM MgCl2 solution and incubated for 30 min on ice. Subsequently, the cells were pelleted again and resuspended in 10 mL of 100 mM CaCl2 and 15% glycerol. The cells were aliquoted into 100 µL portions, flash-frozen in liquid nitrogen (N2) and stored at -80°C until further use.

Plasmid transformation into chemically competent cells

For the chemical transformation, the cells were thawed on ice. After adding the plasmid solution (15 µL of PCR or SLIC products, or 0.5 µL of plasmid), the cells were incubated on ice for 30 min. After a heat shock at 42°C for 1 min, the cells were placed back on ice immediately and incubated for another 1.5 min. To allow recovery of the cells, 500 µL of LB media was added and the cells were incubated at 37°C for at least one hour. After the incubation, the cells were plated on LB agar plates containing the plasmid-specific antibiotic for cell selection. The agar plates were incubated overnight at 37°C.

Colony identification

After cloning, clones containing the insert were identified by colony PCR. In a first step, a master mix was prepared using the sequencing primers as amplification primers, but lacking the Phusion DNA polymerase. The sequencing primers are designed to be plasmid specific and should not bind to the genomic DNA of the cells. The master mix was aliquoted into PCR tubes. As a control, one PCR reaction contained empty purified plasmid. The master mix is listed in the following table: Single colonies were picked using sterile pipet tips and transferred into the reaction tubes. In order to keep the cells, they were also streaked out onto a fresh LB agar plate containing the required antibiotic.
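The inoculation volume needed to hit a target starting OD600 follows from the standard dilution relation C1·V1 = C2·V2, assuming OD600 scales linearly with cell density in this range. A minimal sketch (the overnight-culture OD of 4.0 in the example is an assumption for illustration, not a value from this protocol):

```python
def inoculation_volume_ml(od_preculture: float, od_target: float,
                          main_volume_ml: float) -> float:
    """Volume of preculture to add so the main culture starts at od_target:
    C1*V1 = C2*V2  =>  V1 = C2*V2 / C1 (OD assumed proportional to density)."""
    return od_target * main_volume_ml / od_preculture

# e.g. overnight culture at OD600 = 4.0, 200 mL main culture, target OD600 = 0.05:
print(inoculation_volume_ml(4.0, 0.05, 200.0))  # 2.5 mL
```

The same relation covers the other dilution steps in this chapter, such as adding an equal volume of 100% glycerol to reach 50% for long-term storage.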
The agar plates were incubated at 37°C overnight to allow growth of the colonies. The PCR was performed with the following program: The PCR cycler was paused after the cell lysis step and continued after the addition of 0.25 µL Phu DNAP to each reaction. As the size of the insert varied between 100 bp and 2000 bp, the reactions were analyzed, depending on the size of the insert, either on a 1.5% TAE-agarose gel or on a 9% TBE-PAGE (Chapter 3.2). Colonies corresponding to reactions giving rise to a band of the size of the insert were used for further steps.

Plasmid extraction and sequencing

Colonies potentially containing the insert were transferred into 5 mL LB containing the respective antibiotic and grown overnight at 37°C with shaking at 200 rpm. The cells were harvested by centrifugation (4000 g, 10 min) and the plasmids purified using the plasmid purification kit following the manufacturer's instructions. The plasmid DNA was eluted in 30 µL water and the concentration was determined using a NanoDrop (2000c, Thermofisher, Waltham, MA, USA). Plasmids were sequenced by SANGER sequencing (GATC Biotech, Konstanz, Germany). The resulting sequences were confirmed by comparison with the planned sequences using the Basic Local Alignment Search Tool (BLAST, [START_REF] Altschul | Basic local alignment search tool[END_REF]) and Multalign (Toulouse, France).

Protein expression and purification

All proteins purified in this thesis were overexpressed in E. coli BL21 Gold. The genes encoding the proteins were cloned under the regulation of a T7 RNAP promoter. In E. coli BL21 Gold, the T7 RNAP gene is under the control of the LacI repressor and can be induced by the addition of the lactose derivative isopropyl β-D-1-thiogalactopyranoside (IPTG). The process of protein purification can be divided into a capture step, an intermediate step and a polishing step, as reported in [START_REF] Williams | Overview of Conventional Chromatography[END_REF].
To purify the proteins, they were tagged with an H6-tag. The six consecutive H residues form a stable complex with immobilized Co 2+ ions, allowing separation of the protein of interest from the cellular proteins (capture step) [START_REF] Petty | Metal-Chelate Affinity Chromatography[END_REF][START_REF] Williams | Overview of Conventional Chromatography[END_REF]. Since the purity of the protein samples after the Co 2+ -NTA column was already high, the buffer was exchanged by size-exclusion chromatography directly as the polishing step. In this step, the sample components are separated by their overall size [START_REF] Hagel | Gel-Filtration Chromatography[END_REF][START_REF] Williams | Overview of Conventional Chromatography[END_REF].

3. Methods

Expression vectors

Expression and purification of aminoacyl-tRNA synthetases, T7 RNAP and EF-G
Aminoacyl-tRNA synthetases were cloned into pBAT4 plasmids and H6-tagged as reported in [START_REF] Shimizu | Cell-free translation reconstituted with purified components[END_REF][START_REF] Shimizu | Protein synthesis by pure translation systems[END_REF]. After confirming the sequences of the vectors by SANGER sequencing, the plasmids were transformed into competent E. coli BL21 Gold cells and selected on LB plates containing ampicillin and chloramphenicol (Amp-CAM). For expression experiments, cells were first grown in an overnight culture (preculture). The preculture was used to inoculate the main culture to an OD600 of 0.05, which was grown at 37 °C (shaking at 180 rpm) until it reached an OD600 of 0.6. Next, expression was induced by the addition of 1 mM IPTG (end concentration). After induction, the cells were incubated at 30 °C and harvested after 4 h by centrifugation at 4,000 rcf for 15 min at 4 °C. The harvested cells were flash-frozen in liquid nitrogen and stored at -80 °C until protein purification.
The cells were resuspended in 25 mL of protein lysis buffer (50 mM HEPES•KOH pH 7.6, 10 mM MgCl2, 1 M NH4Cl). To lyse the cells, they were sonicated at 40% amplitude in cycles of 45 s sonication and 45 s recovery on ice, for a total of 7 min. The cell debris was removed by centrifugation in a JA 25.50 rotor for 45 min at 4 °C at 40,000 g in the Avanti J26XP Beckmann centrifuge. The lysate was then mixed with cobalt-agarose (Sigma Aldrich, St Louis, MO, USA; 1 mL per 2 L LB expression culture) and incubated for 1 h at 4 °C to allow binding of the His-tagged proteins to the immobilized Co 2+ ions. After binding of the protein, the beads were transferred onto a gravity column and washed with 60 mL protein lysis buffer to remove loosely bound proteins from the cobalt-agarose beads. The proteins were eluted by adding 5 mL protein elution buffer (protein lysis buffer with 250 mM imidazole). The eluate was concentrated using Amicon centrifugation concentrators (Merck Millipore, Billerica, MA, USA). The molecular weight cut-off of the concentrators was chosen depending on the size of the purified protein. Subsequently, the sample was loaded onto a gel filtration column to exchange the buffer to protein storage buffer (20 mM HEPES•KOH pH 7.6, 10 mM MgCl2, 50 mM KCl, 50 mM NH4Cl). Depending on the molecular weight of the protein, the gel filtration was performed using a Superdex 75 (16/600) or Superdex 100 column on the NGC medium-pressure liquid chromatography system (Biorad, Hercules, CA, USA), collecting 1 mL elution fractions. Elution fractions were analyzed by SDS-PAGE. Fractions containing the protein of interest were pooled and concentrated using centrifugal filters to a concentration of 20 mg/mL. For long-term storage, 100% glycerol was added to a final concentration of 50% glycerol, and the samples were flash-frozen in liquid nitrogen and stored at -80 °C.

Expression and purification of Elongation factor-P
Elongation factor-P (EF-P) from E.
coli was overexpressed as published [START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]. To ensure that the expressed protein was fully modified, the plasmid encoding H6-tagged EF-P was co-expressed with the pRSF plasmid encoding the untagged modification enzymes YjeA and YjeK. The expression of all three proteins is under the control of a T7 promoter [START_REF] Doerfel | Entropic contribution of elongation factor P to proline positioning at the catalytic center of the ribosome[END_REF]. T. thermophilus EF-P was expressed as published in [START_REF] Blaha | Formation of the first peptide bond: the structure of EF-P bound to the 70S ribosome[END_REF]. Plasmids encoding mutated versions of T. thermophilus EF-P were also co-expressed with the modification enzymes for E. coli EF-P. The proteins were overexpressed in E. coli BL21 Gold cells, and the expression was induced by addition of 1 mM IPTG (end concentration) at an OD600 of 0.6. After induction, the cells were grown for 4 h at 30 °C. The cells were harvested by centrifugation at 4,000 g for 15 min at 4 °C, flash-frozen in liquid nitrogen and stored at -80 °C until further use. For EF-P purification, the cells were thawed, resuspended in protein lysis buffer and lysed as described in the previous chapter 3.4.2. For the purification of E. coli EF-P, the cell lysate was incubated directly with Co 2+ -NTA beads. The cell lysate containing T. thermophilus EF-P or one of the mutants was first treated with a heat shock to remove E. coli proteins, exploiting the thermal stability of the protein. For the heat shock, the lysate was transferred into an Erlenmeyer flask that was placed in a water bath at 60 °C. The cell lysate was incubated for 20 min, followed by removal of cell debris and denatured E. coli proteins by centrifugation in a JA 25.50 (Beckmann) rotor for 45 min at 40,000 g at 4 °C.
The following steps were performed as described in chapter 3.4.2. Since these proteins were mainly used for crystallography experiments, they were flash-frozen and stored at -80 °C without the addition of glycerol.

RNA handling
All reactions containing RNA were performed in DEPC-treated water or molecular biology grade water (Merck, Billerica, MA, USA) to avoid degradation of the RNA by RNases. Glassware used for these reactions was baked for several hours to degrade potential RNase contaminations.

In vitro transcription and purification of in vitro transcribed RNA
Unmodified tRNAs, flexizymes and mRNA templates for toeprinting were transcribed in vitro using the T7 RNAP mutant P266L [START_REF] Guillerez | A mutation in T7 RNA polymerase that facilitates promoter clearance[END_REF]. This mutant of T7 RNAP synthesizes more full-length product than the wt T7 RNAP. The DNA templates, encoding the genes under the T7 promoter, were synthesized by PCR as described in chapter 3.2. In vitro transcription was performed using 5-10 μg/mL of DNA template, 7.5 mM of each NTP and 10 μg/mL of T7 RNAP in Ribomax buffer (80 mM Tris•HCl pH 7.6, 2 mM spermidine, 24 mM MgCl2), and was incubated for 2 h at 37 °C. The reaction was stopped by phenol-chloroform extraction (chapter 3.5.2), and nucleic acids and nucleotides were precipitated with 1 M NH4(CH3CO2) and 70% ethanol (end concentrations). Unincorporated nucleotides were removed by washing the RNA several times with RNase-free water over Amicon centrifugation filters (Merck Millipore, Billerica, MA, USA) with a molecular weight cut-off smaller than the produced RNA. The success of the washing was monitored by measuring the absorption of the flow-through on the NanoDrop. After recovery of the RNA from the centrifugal filters, the concentration was determined using the NanoDrop 2000c (Thermofisher, chapter 3.2) and the purity was assessed by gel electrophoresis (chapter 3.2).
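Assembling a reaction such as the transcription mix above from stock solutions is repeated C1·V1 = C2·V2 arithmetic. A minimal sketch; the function name and the stock concentrations in the example are assumptions for illustration, not values from the protocol:

```python
def mix_volumes_ul(total_ul, components):
    """Pipetting volumes (µL) for a reaction mix.

    components maps a name to (stock_concentration, final_concentration);
    units only need to match within each entry (C1*V1 = C2*V2).
    Water is added to reach the total volume.
    """
    volumes = {name: total_ul * final / stock
               for name, (stock, final) in components.items()}
    water = total_ul - sum(volumes.values())
    if water < 0:
        raise ValueError("stock solutions are too dilute for this mix")
    volumes["water"] = water
    return volumes


# Illustrative 100 µL transcription mix (stock concentrations are assumed):
volumes = mix_volumes_ul(100.0, {
    "NTPs (each)":  (75.0, 7.5),     # mM
    "T7 RNAP":      (1000.0, 10.0),  # µg/mL
    "DNA template": (100.0, 10.0),   # µg/mL
})
```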
Phenol-chloroform extraction
Purification of RNA from enzymatic reactions was performed by phenol-chloroform extraction using acidic phenol (pH 4.3, Na3-citrate solution, Sigma Aldrich, St Louis, MO, USA). The low pH prevents hydrolysis of the RNA. The reactions were extracted by adding one volume of phenol:chloroform:isoamyl alcohol (25:24:1) and mixing vigorously by vortexing. The organic phase was separated from the aqueous phase by centrifugation; the denatured proteins then locate at the interphase between the organic and the aqueous phase. The aqueous phase was re-extracted three times with chloroform:isoamyl alcohol (24:1). All organic phases were back-washed with reaction or stabilization buffer or molecular biology grade water to ensure full recovery of the RNA sample. To remove residual organic solvents, the RNA was precipitated by NH4Ac/ethanol precipitation, as described in chapter 3.5.3.

Precipitation of RNA
3.4.3.1 Alcohol precipitation of RNA
An important step in RNA purification is the removal of salts and buffer exchange, as well as concentrating the sample. Nucleic acids can be precipitated specifically in the presence of monovalent cations such as Na + and NH4 + . The ions bind to the phosphate-sugar backbone of the nucleic acid and neutralize its negative charges. After the addition of 2.5 volumes of EtOH or one volume of isopropanol, the RNA precipitates at low temperatures due to its decreased solubility. For precipitation, the samples were incubated for 1 h at -20 °C and then spun down for 30 min at 18,000 rcf in a cooled tabletop centrifuge. The pellet was washed with 70% EtOH to remove remaining salt and spun down again. The pellet was dried at room temperature for 10-15 minutes and finally resuspended in the desired buffer. The salt for RNA precipitation was chosen depending on the downstream application and the properties of the sample. In most cases, NH4(CH3CO2) was used due to its volatility.
For aminoacylated and peptidylated tRNA samples, Na(CH3CO2) pH 4.8 was used to protect the product from uncontrolled hydrolysis. Chloride as a counterion was avoided, since it is inhibitory in downstream reactions such as in vitro transcription or the initiation of in vitro translation.

3.5.3.2 TCA precipitation of RNA
tRNA samples from analytical HPLC runs were often at too low a concentration for the subsequent Northern Blot analysis. To increase the concentration, the RNA was precipitated using a 100% (w/v) trichloroacetic acid (TCA) solution to obtain a final TCA concentration of 10%. The samples were incubated for 1 h on ice to ensure complete precipitation. After the incubation, the RNA was pelleted and washed with 100% acetone to remove residual TCA. After the washing step, the pellet was briefly dried and resuspended in RNA loading dye. If TCA was still left in the sample, indicated by a color change of the RNA loading dye, small amounts of 1 M Trizma base were added to ensure a basic pH for optimal separation on TBU-PAGE.

tRNA expression and purification
A specific tRNA can either be produced by in vitro transcription or by overexpression of the corresponding gene in vivo and subsequent purification from cells. tRNAs function as adaptors between the mRNA and the amino acid sequence [START_REF] Crick | On protein synthesis[END_REF] and are modified post-transcriptionally. These modifications are important for decoding capacity, for decreasing codon sensitivity and for maintaining the open reading frame. The advantage of purifying tRNAs from cells is that these tRNAs are modified. In contrast to protein purification, the length of the tRNA is crucial for the modification, meaning the tRNA cannot be tagged. In order to purify a specific tRNA species from a crude tRNA preparation, the tRNA can be aminoacylated using the corresponding aminoacyl-tRNA synthetase.
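The "final 10% TCA from a 100% (w/v) stock" step above follows from final = stock · V_stock / (V_sample + V_stock). A minimal sketch of that relation (the function name and example volumes are illustrative, not from the protocol):

```python
def stock_volume(sample_vol, stock_pct, final_pct):
    """Volume of a stock solution (stock_pct) to add to sample_vol so the
    additive reaches final_pct of the combined volume:
    final_pct = stock_pct * V_stock / (V_sample + V_stock).
    """
    if not 0 < final_pct < stock_pct:
        raise ValueError("final_pct must lie between 0 and stock_pct")
    return sample_vol * final_pct / (stock_pct - final_pct)


# 10% TCA (final) from a 100% (w/v) stock: one ninth of the sample volume,
# e.g. 50 µL of stock for an assumed 450 µL sample:
v_tca = stock_volume(450.0, 100.0, 10.0)
# The same relation gives ~2.33 sample volumes of ethanol for a 70% end
# concentration, consistent with the 2.5-volume rule of thumb used above.
v_etoh = stock_volume(1.0, 100.0, 70.0)
```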
This reaction is tRNA-specific, since the enzyme recognizes specific features of the tRNA and the chemical properties of the amino acid [START_REF] Ibba | Aminoacyl-tRNA synthesis: divergent routes to a common goal[END_REF]. The aminoacylation of a tRNA changes its binding properties on a reverse phase (RP) high-performance liquid chromatography (HPLC) column [START_REF] Cayama | New chromatographic and biochemical strategies for quick preparative isolation of tRNA[END_REF], enabling separation from the deaminoacylated tRNAs in the mixture. The column matrix is formed by hydrophobic residues that bind biomolecules via hydrophobic residues exposed on their surface. The sample is eluted by gradually increasing concentrations of an organic solvent, allowing separation of the different compounds according to their hydrophobicity [START_REF] Aguilar | Reversed-Phase High-Performance Liquid Chromatography[END_REF]. The addition of the amino acid changes the hydrophobic character of the tRNA and leads to a different elution profile. By cycling deaminoacylation and aminoacylation, the tRNA of interest can be enriched [START_REF] Cayama | New chromatographic and biochemical strategies for quick preparative isolation of tRNA[END_REF]. The constructs used are listed in the following table.

Test expression of tRNA Pro overexpression
To identify the optimal expression conditions for tRNA Pro, small-scale expression tests were performed in 25 mL LB cultures. To test the expression efficiency, the plasmids pBSTNAV2OK and pBSTNAV3S, as well as the tRNA Pro-containing plasmids, were transformed into E. coli HB101 and E. coli DH5α cells. The preculture was grown for 8 h, shaking at 180 rpm, and was used to inoculate the main culture (25 mL LB) to a starting OD600 of 0.1. The cells were harvested after 16 h of growth at 37 °C under optimal aerobic conditions, and the OD600 was measured using the Nanodrop 2000c (Thermofisher).
10 mL of cell culture were harvested and flash-frozen in liquid N2. To test the expression capacity of the pUC19-tRNA Pro constructs, the plasmid was transformed into E. coli BL21AI cells. The preculture was grown overnight (37 °C, shaking at 180 rpm) and main cultures were inoculated to an OD600 of 0.1. When the cells reached an OD600 of 0.6, the expression of the tRNA was induced by the addition of L-arabinose (1%, 0.1% or 0.01%, end concentration). An uninduced sample served as control. 10 mL time points were taken after 4 h and after 19 h of overexpression. The OD600 was measured and the cells were harvested by centrifugation, flash-frozen and stored at -80 °C until further extraction. Independently of the expression strategy, cells were lysed by phenol/chloroform extraction: in the first step, the cells were resuspended in 500 µL tRNA lysis buffer (50 mM Tris•HCl pH 7.2, 50 mM Mg(CH3CO2)2), mixed with an equal volume of phenol (pH 5.2):chloroform:isoamyl alcohol (25:24:1) solution and centrifuged for 15 min at 18,000 rcf to separate the organic from the aqueous phase. The aqueous phase was washed twice with 500 µL chloroform:isoamyl alcohol (24:1). Subsequently, small RNA species were precipitated specifically with 1 M NH4(CH3CO2) and 70% EtOH (end concentrations). After pelleting, the samples were resuspended in RNA loading dye (chapter 3.8), with the volume calculated according to the following relation:

V(loading dye) = OD600 · 16.4 µL (2)

The expression levels were analyzed by Northern Blot (chapter 3.2). To purify the tRNA, the cells were thawed and resuspended in tRNA lysis buffer (25 mL per 10 g of cells). The cells were lysed and the proteins were denatured by adding acidic phenol (pH 4.2, citric acid, Sigma Aldrich, St Louis, MO, USA) to the suspension (25 mL per 10 g of cells). The suspension was incubated on a tube rotator for 1 h at 4 °C to ensure complete lysis of the cells.
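Equation (2) normalizes the loading-dye volume to the cell density, so that equal culture equivalents are loaded per lane. A one-line sketch (the function name is illustrative):

```python
def loading_dye_volume_ul(od600):
    """Resuspension volume of RNA loading dye from equation (2):
    V = OD600 * 16.4 µL, so each lane carries the same culture equivalent.
    """
    return od600 * 16.4


# A sample harvested at an assumed OD600 = 0.5 is resuspended in ~8.2 µL:
v = loading_dye_volume_ul(0.5)
```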
Subsequently, the lysate was centrifuged for 30 min in an Eppendorf 5804R centrifuge (Hamburg, Germany). The aqueous phase was extracted once with chloroform:isoamyl alcohol (24:1, Sigma). The organic phases were washed once with fresh tRNA lysis buffer. The DNA was removed by precipitation with 300 mM Na(CH3CO2) pH 4.8 and 20% isopropanol for 1 h at 4 °C.

Large-scale expression of tRNA Pro and total tRNA extraction
After centrifugation for 1 h at 5,170 g (JLA 8.100, Avanti J26XP, Beckmann, Brea, CA, USA) at 4 °C, the pellets, containing the genomic DNA of the cells, were discarded, and the concentration of isopropanol in the supernatant was increased to 60% to precipitate the RNA. The solution was incubated overnight at 4 °C to ensure complete precipitation. The RNA was pelleted by centrifugation and washed with 70% ethanol to remove remaining salts. The pellet was briefly air-dried by placing the centrifugation bottle under the hood for 20 min. This extracted tRNA sample contains aminoacylated tRNAs, deaminoacylated tRNAs and other RNA species. To deaminoacylate all tRNAs and obtain a homogeneous preparation, the pellet was resuspended in 200 mM Tris•acetate pH 9.0 at 37 °C for 1 h. In the last step of the total tRNA extraction, small RNA species were precipitated specifically by adding 1 M NH4(CH3CO2) and 70% ethanol. The pellet was dissolved in RNase-free water and the concentration was determined on the NanoDrop (see chapter 3.2).

Chromatography of tRNA Pro
The total tRNA extraction (chapter 3.6.2) still contains longer rRNA species that bind more tightly to the reverse phase (RP) column, as they present a larger hydrophobic surface, and might prevent binding of the tRNA to the column material. Therefore, these and other long RNA species were removed specifically by loading the RNA onto a Q-sepharose (GE Healthcare, Chicago, IL, USA) column, an anion exchange column.
Negatively charged biomolecules bind to the positively charged matrix and can be eluted specifically by increasing concentrations of Cl - ions in the elution buffer. Due to their lower charge, the shorter tRNAs elute before the rRNA species. The purification protocol was adapted from the published protocol [START_REF] Mechulam | Protection-based assays to measure aminoacyl-tRNA binding to translation initiation factors[END_REF]. The binding buffer contained 20 mM HEPES•NaOH pH 7.5, 100 µM EDTA, 6 mM MgCl2, 200 mM NaCl. The sample was eluted by increasing the NaCl concentration to 1 M over a four-column-volume gradient. The tRNA Pro-containing fractions were identified by Northern Blot analysis (chapter 3.8) and pooled for further purification. In the next purification step, the tRNA Pro was prolinylated specifically using the E. coli prolyl-tRNA synthetase ProS (purification: chapter 3.4.1, reaction: chapter 3.6.5). This modification changes the binding behavior of tRNA Pro on the C4-RP-HPLC column (Grace, Columbia, MD, USA). The total tRNA was bound to the column using RP buffer A (20 mM NH4(CH3CO2) pH 5.5, 10 mM Mg(CH3CO2)2, 400 mM NaCl). The different tRNA species were separated by increasing the amount of methanol (MeOH) in the buffer. RP buffer B contained 20 mM NH4(CH3CO2) pH 5.5, 10 mM Mg(CH3CO2)2, 400 mM NaCl and 15% MeOH. The tRNA was eluted using a gradient from 0-65% RP buffer B over 12 column volumes. The tRNA Pro-containing fractions were identified by Northern Blotting and subsequently deprolinylated using 200 mM Tris•acetate pH 8.0 for 1 h at 37 °C. The sample was rerun using the same program, since the deaminoacylation of the tRNA Pro changed its elution behavior. Fractions containing tRNA Pro were pooled and precipitated with 1 M NH4Ac and 70% EtOH. The purified tRNA Pro was washed several times with H2O(DEPC) over a 10 kDa centrifugal filter to remove all sodium cations that could potentially interfere with downstream reactions.
As a final step, the sample was lyophilized and resuspended in a small volume of RNase-free water. The concentration was determined and the tRNA was stored at -80 °C. The protocol for tRNAi Met purification was optimized by Dr. K. Kishore Inampudi following the same principle. tRNAi Met preparations for this thesis were done by Dr. K. Kishore Inampudi, Dr. Axel Innis, Natacha Pérébaskine and myself. tRNA Phe was purified by Natacha Pérébaskine.

CCA end modification of tRNA
In the cell, amino acids are linked via an ester bond to the 3'OH of the tRNA. This linkage is pH-sensitive and hydrolyzes during crystallization. To prevent deaminoacylation, this ester bond was converted to an amide bond; to this end, the 3'OH group had to be exchanged for a 3'NH2 group. To do so, the purified tRNAs were modified enzymatically by removal and re-addition of the 3'CCA end using phosphodiesterase I and the CCA-adding enzyme. The subsequent aminoacylation reaction was performed using the corresponding aminoacyl-tRNA synthetase. This strategy was used previously to gain greater insights into peptide bond formation (Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). The modification of the 3' end was performed as described previously with minor modifications ([START_REF] Fraser | Synthesis and Aminoacylation of 3′-Amino-3′-deoxy Transfer RNA and Its Activity in Ribosomal Protein Synthesis[END_REF]; Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). For most tRNAs, the CCA end was removed using phosphodiesterase I (Western Diamondback rattlesnake, Sigma). In tRNAi Met, five bases are not base-paired, which is one more than in the other modified tRNAs.
Consequently, phosphodiesterase I would remove an additional nucleotide and thus cannot be used for the removal of the CCA end. To overcome this problem, the CCA-adding enzyme was used, which, in the presence of pyrophosphate, specifically removes the CCA end. For this reaction, 10 µM tRNA was mixed with 0.1 mg/mL CCA-adding enzyme and 250 µM NaPPi in the same buffer as for the phosphodiesterase I treatment and incubated for 1 h at 37 °C. The reaction was stopped by phenol-chloroform extraction and the tRNA was precipitated with 1 M NH4(CH3CO2)/70% EtOH. In the last step, the free nucleotides were removed by washing the RNA over a 10 kDa Amicon centrifugal concentrator (Merck Millipore). The CCA end was added back by reversing the enzymatic activity of the CCA-adding enzyme in the presence of CTP and 3'-amino-adenosine triphosphate (3'NH2-ATP, Biolog Inc, Hayward, CA, USA). As the name indicates, 3'NH2-ATP carries an amino group instead of the 3'OH group. The reaction to incorporate 3'NH2-ATP into the CCA end of the tRNA was performed using 10 µg tRNA(-CCA) together with 30 µM CTP, 40 µM 3'NH2-ATP and 2 µM CCA-adding enzyme in 50 mM Tris•HCl pH 7.6, 12 mM MgCl2 and 30 mM KCl ([START_REF] Fraser | Synthesis and Aminoacylation of 3′-Amino-3′-deoxy Transfer RNA and Its Activity in Ribosomal Protein Synthesis[END_REF]; Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). The reaction was incubated for 2 h at 37 °C and stopped by phenol-chloroform extraction. Depending on the downstream application, the product was either directly purified by C4-HPLC or first aminoacylated and then purified by C4-HPLC. All products of the reactions were analyzed on 15% denaturing TBU-PAGE to monitor the process.
Aminoacylation with aminoacyl-tRNA synthetases
For solving structures, tRNA purification and biochemical assays, it was necessary to aminoacylate tRNAs using their aminoacyl-tRNA synthetases. The reactions were performed in 100 mM HEPES•KOH pH 7.5, 20 mM MgCl2, 30 mM KCl, 4.15 mM ATP and 0.42 mM amino acid, using 0.1 mg/mL aminoacyl-tRNA synthetase. The reaction was incubated for 20 to 30 min at 37 °C and stopped by phenol-chloroform extraction.

Peptidylation of tRNAi Met with the flexizyme technique
As described in chapter 1.5, flexizyme can be used to specifically peptidylate and aminoacylate tRNA at the CCA end, forming an ester bond [START_REF] Goto | Flexizymes for genetic code reprogramming[END_REF][START_REF] Ramaswamy | Designer ribozymes: programming the tRNA specificity into flexizyme[END_REF]. For the reaction, 75 µM flexizyme and 25 µM tRNAi Met or microhelix (a short tRNA mimic) were mixed in 50 mM HEPES•KOH buffer (pH 7.5) or 50 mM Bicine•KOH buffer (pH 8.5) and incubated for 2 min at 95 °C, followed by incubation at room temperature for 5 min to allow the flexizyme to fold. In the next step, the MgCl2 concentration in the reaction was increased to 50-600 mM (sample-specific) and the reaction was incubated for another 5 min at room temperature, followed by an incubation for 3 min on ice [START_REF] Goto | Flexizymes for genetic code reprogramming[END_REF]. To start the reaction,

Ribosome purification
70S T. thermophilus ribosomes were purified from the strain T. thermophilus HB8 for X-ray crystallography by Natacha Pérébaskine as published (Selmer et al., 2006). E. coli ribosomes were isolated for toeprinting and structure determination by cryo-EM from E. coli KC6 cells. The cells were grown to an OD600 of 0.6 to ensure that the cells were in exponential growth phase. Using an ice-cooled water bath, the cells were cooled down quickly and then harvested by centrifugation (4,500 rpm, JLA 9.100 Beckmann rotor, 4 °C).
The cells were frozen in liquid nitrogen and stored at -80 °C until ribosome purification. The ribosomes were isolated following the published protocol [START_REF] Blaha | Preparation of functional ribosomal complexes and effect of buffer conditions on tRNA positions observed by cryo-electron microscopy[END_REF] with minor modifications. E. coli KC6 cells were thawed and washed once with ribosome resuspension buffer (50 mM HEPES•KOH pH 7.5, 10 mM Mg(CH3CO2)2, 100 mM NH4Cl and 4 mM β-mercaptoethanol (BME)). For cell lysis, 2 mL of ribosome resuspension buffer per 1 g of cells were added and the cells were lysed using an EmulsiFlex-C5 (Avestin, Ottawa, ON, Canada). The cell debris was removed by centrifuging the lysate twice at 4 °C at 48,000 g for 1 h (JA 25.50 rotor, Beckmann). The cleared cell lysate was overlaid onto a sucrose cushion (30% (w/v) sucrose in ribosome resuspension buffer) and spun by ultracentrifugation at 40,000 rpm in a 50.2 Ti rotor (Beckmann) for 20 h at 4 °C to pellet the ribosomes. The sucrose was removed by washing the pellet for 5 min with ribosome dissociation buffer (50 mM HEPES•KOH pH 7.5, 1 mM Mg(CH3CO2)2, 100 mM NH4Cl), followed by resuspending the pellet in ribosome dissociation buffer by continuous shaking on ice in the cold room. The low Mg 2+ concentration in the buffer leads to dissociation of the two subunits [START_REF] Chao | Dissociation of macromolecular ribonucleoprotein of yeast[END_REF]. The absorption was determined at 260 nm using the Nanodrop (Thermofisher). To separate the subunits, gradients from 10 to 30% sucrose were prepared and overlaid with 200 A260 units of ribosomes. The gradients were centrifuged using a swinging bucket rotor (SW28) at 19,000 rpm for 17 h (Optima L80XP, Beckmann, Brea, CA, USA). The gradients were then fractionated using a peristaltic pump combined with the detector and fraction collector of the NGC medium-pressure liquid chromatography system (Bio-Rad). The fraction volume was set to 1 mL.
Fractions for the 50S and 30S subunits were pooled separately and pelleted by ultracentrifugation in a 50.2 Ti rotor (Beckmann) at 40,000 rpm for 24 h. The colorless pellets were resuspended in ribosome re-association buffer (50 mM HEPES•KOH pH 7.5, 10 mM Mg(CH3CO2)2, 30 mM KCl). The concentration of the purified subunits was determined and the subunits were mixed in an equal ratio of absorption units. This leads to an excess of 30S over 50S subunits and thus allows proper separation of re-associated 70S ribosomes from the 30S subunits in the sucrose-gradient analysis of these samples. The mixture was diluted to a concentration of 50 A260 units per mL and incubated for 1 h at 40 °C to allow 70S ribosome formation. 100 A260 units were loaded per gradient (10-30% sucrose in re-association buffer) and centrifuged for 18 h at 18,000 rpm using an SW28 (Beckmann) rotor. Gradients were collected as before, pooled and pelleted by ultracentrifugation in a 50.2 Ti rotor. The pellets were resuspended in a minimal volume of re-association buffer and incubated for another 15 min at 40 °C to ensure re-association [START_REF] Blaha | Preparation of functional ribosomal complexes and effect of buffer conditions on tRNA positions observed by cryo-electron microscopy[END_REF]. The absorption was determined using the Nanodrop 2000c and converted into concentration according to the following equation:

c (mg/mL) = A260 · 0.06 (3)

The ribosome solution was aliquoted, flash-frozen and stored at -80 °C until further use.

Toeprinting
Toeprinting is an in vitro method to identify the position of an arrested ribosome along an mRNA with nucleotide resolution. Originally, the reaction was performed using an S30 cell extract (Hartz et al., 1988), but recent developments led to a protocol using a reconstituted in vitro translation system (Orelle et al., 2013b).
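Equation (3) can be combined with a molecular weight to express the ribosome concentration in molar terms. A sketch of the unit bookkeeping; the 70S molecular weight of ~2.5 MDa is an approximation introduced here, not a value from the protocol:

```python
def ribosome_concentration(a260, mw_da=2.5e6):
    """Convert an A260 reading into ribosome concentration.

    mg/mL follows equation (3): c = A260 * 0.06. The conversion to µM
    assumes a 70S molecular weight of ~2.5 MDa (an approximation).
    """
    mg_per_ml = a260 * 0.06                # equation (3); mg/mL equals g/L
    micromolar = mg_per_ml / mw_da * 1e6   # (g/L) / (g/mol) -> mol/L -> µM
    return mg_per_ml, micromolar


# 100 A260 units/mL correspond to 6 mg/mL, i.e. roughly 2.4 µM 70S:
mg_ml, um = ribosome_concentration(100.0)
```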
This system contains all components necessary for in vitro transcription from a T7 promoter as well as for canonical translation, and offers well-defined reaction conditions. During the toeprinting reaction, a 5'-fluorescently labeled oligonucleotide complementary to the 3' end of the template mRNA is extended by a reverse transcriptase until the enzyme reaches the arrested ribosome. The synthesized cDNA is then purified and analyzed on a sequencing gel (Orelle et al., 2013b; Starosta et al., 2014a).

In vitro translation
Depending on the experiment, different variants of the PURExpress kit (NEB, Ipswich, MA, USA) were used. The nascent chain-mediated translational arrest of E. coli ribosomes was tested using the full kit or the ΔRF1,2,3 kit. To analyze the possibility of using 70S T. thermophilus ribosomes or peptidylated tRNAs during the translation reaction, the ΔtRNA, Δribosomes, ΔRF1,2,3 kit was used. The following table lists the PURExpress systems used in this thesis as well as the corresponding experiments. In the next step, the reverse transcriptase was added to the reaction, which was incubated for a further 20 min at 37 °C. The mRNA was then degraded by addition of 0.5 M NaOH and incubation at 37 °C for 15 minutes. The reaction was neutralized with 0.5 M HCl and diluted with resuspension buffer (300 mM Na(CH3CO2) pH 5.5, 5 mM EDTA, 0.5% SDS) (Orelle et al., 2013b; Starosta et al., 2014a). Proteins and nucleotides were removed using a nucleotide removal kit (Qiagen, Hilden, Germany). Per reaction, 100 µL of PNI buffer were added and the sample was loaded onto the spin columns provided with the kit. Subsequently, the column was washed once with 750 µL PE buffer. The cDNA was eluted with 100 µL molecular biology grade water (Merck) and dried by vacuum centrifugation. The dried cDNA was resuspended in 6 µL toeprinting loading dye.
Sanger sequencing for the toeprinting experiment
To determine the exact arrest position of the ribosome on the mRNA, a sequencing reaction was performed according to the method of Sanger [START_REF] Sanger | A rapid method for determining sequences in DNA by primed synthesis with DNA polymerase[END_REF][START_REF] Sanger | DNA sequencing with chain-terminating inhibitors[END_REF]. In these reactions, dGTP was replaced by deaza-GTP, which enhances the capability of the Hemo KlenTaq DNAP to read through secondary structure elements. The addition of ddNTPs leads to interrupted reactions and thus, over several cycles of PCR, to the generation of DNA fragments of all lengths. The stock solutions used are listed in the following table.

Analysis and interpretation
Sequencing reactions were evaporated by vacuum centrifugation and resuspended in 6 µL toeprinting loading dye. Samples were heated for 5 min at 95 °C to enhance solvation and to denature the cDNA. The samples were loaded onto a denaturing sequencing PAGE prepared according to the following protocol: M urea, 0.075% (w/v) APS, 0.075% (v/v) TEMED. The sequencing PAGE was run at 40 W for 2.5 h at room temperature. The bands were detected using a fluorescent scanner (Typhoon, GE Healthcare, Chicago, IL, USA) using the green laser, detection at +3 mm and default settings. The ribosome covers 16-18 nt between the P site and the position where the reverse transcriptase stops. Thus, the band in the sequencing gel is 16-18 nt shorter than the corresponding fragment in the sequencing reaction. This difference in length needs to be taken into account in the analysis and interpretation of toeprinting gels.

X-ray crystallography and structure determination
Crystallography is a biophysical technique that is commonly used to obtain the structures of proteins and nucleic acids. During crystallization, the biomolecular complexes adopt well-ordered, regular orientations.
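The 16-18 nt offset described above turns each band length into a small window of candidate P-site positions. A minimal sketch of that bookkeeping only; the function name is illustrative, and the exact coordinate convention depends on the primer and mRNA numbering used:

```python
def psite_window(band_length_nt, min_offset=16, max_offset=18):
    """Candidate P-site positions for a toeprint band of given length.

    The reverse transcriptase stops 16-18 nt downstream of the P site,
    so a band of length L places the P site between L - 18 and L - 16
    (measured from the same reference point as the band length).
    """
    return band_length_nt - max_offset, band_length_nt - min_offset


# A 100 nt toeprint band places the P site 82-84 nt from the reference:
lo, hi = psite_window(100)
```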
The obtained crystals are exposed to X-rays from a local source or a synchrotron. The way in which the X-rays interact with the crystal depends on the structure of the macromolecule, and by analyzing the diffraction pattern, the three-dimensional structure can be solved. The first high-resolution structures of ribosomal subunits and of full 70S T. thermophilus ribosome complexes gave great insights into the molecular mechanism of translation. These crystals can diffract to high resolution, yielding information about, for example, ribosomal modifications (Polikanov et al., 2015a) or mechanistic insights into peptide bond formation (Polikanov et al., 2014). This degree of resolution is necessary to understand the underlying mechanism of action of arrest peptides and proline-rich antimicrobial peptides. Drawbacks of this method include that variations in sample composition can lead to a decrease in resolution, or the sample may not crystallize at all. Another disadvantage is that the obtained structure derives from molecules that are forced into a specific orientation by crystal packing and partial dehydration of the biomolecular complex, which might result in artifacts within the structure. However, high-resolution data can be obtained from well-diffracting crystals, giving great insights into the underlying molecular mechanism of the biomolecular complex studied. In addition, data sets from multiple experiments can be obtained in relatively short amounts of time, and the data processing requires less computational power compared to cryo-EM.

Complex formation and crystallography
Ribosome complexes were formed in the presence of a short, synthetic RNA to allow tRNA binding and selection. The RNAs (Eurogentec) were composed of a Shine-Dalgarno sequence followed by a short ORF, and are listed in the following table:

Ribosome complexes were first incubated with the mRNA and the antibiotic to allow the mRNA to bind to the small subunit and the antibiotic to its binding site.
Subsequently, 20 µM of initiator and elongator tRNAs were added and the complex was incubated for a further 15 min at 37°C. Complexes without tRNAs were co-crystallized with 50 µM of the hibernation factor YfiA to increase the resolution by preventing ratcheting (Polikanov et al., 2015a). As a final step, the complex was kept at room temperature for at least 15 min before setting up the crystallization drops (Polikanov et al., 2014; Seefeldt et al., 2015; Selmer et al., 2006). Ribosomes were crystallized by vapor diffusion in sitting-drop crystallization plates. In this setup, the concentration of the ribosomes within the crystallization drop increases slowly and steadily over several days in a closed environment. The crystallization process is divided into nucleation and crystal growth. Nucleation occurs when the concentration of the biomolecular complex reaches the unstable state in a supersaturated solution. With the formation of small crystal nuclei, the concentration of the biomolecular complex within the drop decreases and reaches the metastable state, in which the crystals grow. The challenge is to find the right balance to obtain a reasonable number of well-sized crystals that diffract to high resolution.

Data collection and processing
Data were collected at the Soleil Synchrotron (Saclay, France) at the beamlines Proxima 1 and Proxima 2A, and at the European Synchrotron Radiation Facility (ESRF, Grenoble, France) at the beamlines ID23 and ID29. The data were collected at 100 K and at a wavelength around 0.98 Å. Slight differences in wavelength, due to the set-up of the different beamlines, were taken into account during processing. During collection, the crystal was rotated by 0.1° per image and exposed for 0.1 s. Data were obtained using helical data collection mode. Alternatively, individual wedges of 30° were collected and merged at the end into one data set. For each data set, at least 180° were collected.
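For bookkeeping, the collection parameters above translate directly into frame counts and accumulated exposure; a small sketch (pure arithmetic, no crystallographic software involved):

```python
# Frames and total exposure implied by the collection strategy above:
# 0.1 degrees of rotation and 0.1 s of exposure per image, with at
# least 180 degrees collected per data set.

def collection_stats(total_deg=180.0, deg_per_image=0.1, exposure_s=0.1):
    n_images = round(total_deg / deg_per_image)
    return n_images, n_images * exposure_s

n, t = collection_stats()
print(n, t)  # 1800 images, i.e. 180 s of accumulated exposure
```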
Methods
The data were processed using xdsme, a collection of Python scripts that facilitates the use of the X-ray Detector Software (XDS) (Legrand, 2017). XDS is a data processing program that handles data with weak reflections and high mosaicity (Kabsch, 2010). The processing is divided into localization of the spots, indexing, integration, and merging and scaling. Spots were integrated over all collected frames to re-create the 3D profile of each spot that passed through the Ewald sphere. During merging and scaling, multiple data sets can be combined into one and converted into the *.mtz format, which is necessary for downstream applications. During this step, 5% of all spots were selected for Rfree calculation. These spots were excluded from the refinement; Rfree is a factor used to prevent over-refinement. To avoid artifacts during refinement, the same set of spots was used for Rfree calculations for each processed data set. As molecular replacement (MR) was used for phasing, heavy-atom soaking was not necessary, and Friedel's law was set to true, meaning that each spot has a corresponding spot related by point symmetry. The final resolution was determined using a signal-to-noise ratio (I/σ) of 1.0, a significant CC1/2 value of > 15 and high completeness (> 95%) for all resolution shells.

Phasing, refinement and model building
During the detection of X-ray diffraction, the intensity of the wave hitting the detector is recorded, while the phase of the wave is lost. The phases are necessary for structure determination, as they are required for the Fourier transformation used to calculate the electron density from the integrated spots. Different techniques have been developed to determine the phases experimentally: crystals can be soaked with heavy atoms (MAD, MIRAS), or existing models can be used as a template for structure determination (molecular replacement, MR).
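The per-shell cutoffs quoted above (I/σ of 1.0, CC1/2 > 15, completeness > 95%) amount to walking the resolution shells from low to high resolution and keeping the last shell that still passes all criteria; a minimal sketch with made-up shell statistics:

```python
# Sketch: determine the high-resolution limit from per-shell statistics,
# applying the cutoffs used in this work (I/sigma >= 1.0, CC1/2 > 15,
# completeness > 95%). The shell values below are invented for
# illustration only.

def resolution_limit(shells, i_sigma=1.0, cc_half=15.0, completeness=95.0):
    """shells: list of dicts ordered from low to high resolution;
    returns d_min (Angstrom) of the last shell passing all cutoffs."""
    d_min = None
    for s in shells:
        if (s["i_sigma"] >= i_sigma and s["cc_half"] > cc_half
                and s["completeness"] > completeness):
            d_min = s["d_min"]
        else:
            break
    return d_min

shells = [
    {"d_min": 4.0, "i_sigma": 12.0, "cc_half": 99.0, "completeness": 99.5},
    {"d_min": 3.5, "i_sigma": 5.0,  "cc_half": 90.0, "completeness": 99.0},
    {"d_min": 3.1, "i_sigma": 1.2,  "cc_half": 40.0, "completeness": 97.0},
    {"d_min": 2.9, "i_sigma": 0.7,  "cc_half": 10.0, "completeness": 96.0},
]
print(resolution_limit(shells))  # 3.1
```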
In our case, a known ribosome structure was available (PDB code 4Y4O; Polikanov et al., 2015a) that, when stripped of flexible bases, protein factors and tRNAs to prevent model bias, was sufficient to phase the data and obtain a complete electron density of our complex. PrAMPs, tRNAs or protein factors were subsequently built into such a model, and the rRNA geometry of the input model was improved using the ERRASER-Phenix pipeline and its deposited structure factors (Chou et al., 2016). The initial electron density map was obtained by refining the model obtained from ERRASER against the processed data set using the PHENIX refine pipeline (Adams et al., 2010). The refinement was divided into rigid-body fitting, positional refinement and B-factor refinement. Rigid-body fitting moves the whole complex, or domains of a biomolecular complex, into the density. The different domains are highlighted in the following figure: in the case of the 70S T. thermophilus ribosome, the small subunit was divided into the head, body, spur and helix h44, and the large subunit was divided into the body, the L1 stalk and the C-terminus of ribosomal protein L9, which is extended in the crystal structure due to crystal contacts. Positional refinement was performed applying multiple restraints, including base-pairing restraints and non-crystallographic symmetry (NCS) owing to the presence of two ribosomes per asymmetric unit. The base-pairing restraints were generated using the online interface of the RNA Center in Santa Cruz, CA (Laurberg et al., 2008). The last step of each macrocycle was B-factor refinement; B-factors describe the possible movement of each atom. Prior to the first refinement, the B-factor of all atoms was set to the Wilson B-factor as a starting point. Ribosomal modifications were restrained using cif files created by phenix.elbow, describing the chemical properties of modified residues, e.g. tRNA or rRNA modifications.
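Refinement progress is monitored against the 5% free set excluded during scaling. The bookkeeping behind such a free set, i.e. flagging a fixed, reproducible subset of reflections, can be sketched as follows (a toy illustration; real reflection files carry a dedicated R-free flag column):

```python
import random

# Toy sketch of free-set flagging: a fixed fraction (5% here, as in the
# processing described above) of reflections is set aside for Rfree and
# never used in refinement. A fixed seed makes the selection
# reproducible, so the same free set can be reused across data sets to
# keep Rfree unbiased.

def flag_free_set(n_reflections, fraction=0.05, seed=7):
    rng = random.Random(seed)              # fixed seed -> same set every time
    n_free = int(n_reflections * fraction)
    return set(rng.sample(range(n_reflections), n_free))

free = flag_free_set(100_000)
print(len(free))  # 5000 free reflections
```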
Between refinement cycles, the density and the corresponding model were visualized in the program COOT (Emsley and Cowtan, 2004). Using a minimally biased difference map (Fo-Fc), missing parts of the model were built manually into the density displayed as a positive difference map. Additional, wrongly modeled parts were surrounded by negative density and were removed. The model was refined against the processed data set over several cycles, keeping the difference between the R and Rfree factors below 5% and making sure that their values decreased with each refinement cycle. The MolProbity server was used to validate the geometry of the structure (Chen et al., 2010).

Cryo-electron microscopy (cryo-EM)
Cryo-EM is a method used to study the structure of large biomolecules and has proven to be a valuable tool for studying the underlying mechanism of translation, as reviewed in Razi et al. (2017). Early studies provided information on the overall shape of the ribosome, including the ribosomal tunnel and the tRNA binding sites, as well as its large conformational changes during the translational cycle, at a resolution of 25 Å (Agrawal et al., 1999; Frank et al., 1995). The recent development of direct electron detectors that allow movie collection mode, together with the constant improvement of processing software, made it possible to obtain near-atomic resolution structures from cryo-EM data.
This "resolution revolution", as it has been called, led to structures solved by cryo-EM at resolutions comparable to X-ray crystallography and NMR (Kühlbrandt, 2014). Limitations of this technique include the long data collection time per data set and the computational power needed for processing thousands of movies, compared to X-ray crystallography. Advantages include the low amount of sample required and the visualization of the particles in solution, which allows different conformations to be seen in the same sample. The structure is solved by shining electrons through the specimen: the transmitted electrons produce 2D projections of the particles, which lie in different orientations in the ice of the sample. To prepare cryo-grids, the protein is quickly frozen in its optimal buffer at a concentration between 5 nM and 1 µM, depending on the size of the specimen. The images are processed to reconstruct the single particles, followed by the interpretation of the electron density obtained.

Sample preparation
The desired complex was formed using the same concentrations as for the toeprinting assay (chapter 3.7), and the dilution buffer was supplemented with Arg-tRNA Arg to ensure that the dilution had no impact on the occupancy of the A-site tRNA. Accordingly, 2.5 µM 70S E. coli ribosomes were incubated with 5 µM synthetic RNA (Eurogentec, Lüttich, Belgium) and 25 µM erythromycin. As erythromycin was dissolved in EtOH, the antibiotic was dried into the reaction tube prior to complex formation. 10 µM fMKF-tRNAi Met was added together with 20 µM Arg-tRNA Arg. The complex was then incubated for 15 min at 37°C. As a final step, the complex was diluted to a final concentration of 240-480 nM of ribosomes in cryo-EM dilution buffer. The sample was then placed on a support surface, the grid.
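The final dilution step above is plain C1·V1 = C2·V2 arithmetic; a small sketch with hypothetical volumes (only the concentrations are taken from the protocol):

```python
# Dilution arithmetic (C1*V1 = C2*V2) behind the final step above:
# how much cryo-EM dilution buffer brings the complex from its
# formation concentration down to the grid-freezing range.
# The volumes used here are hypothetical example values.

def buffer_to_add(c_stock_nM, v_stock_uL, c_target_nM):
    v_final_uL = c_stock_nM * v_stock_uL / c_target_nM
    return v_final_uL - v_stock_uL

# e.g. 5 uL of complex at 2500 nM ribosomes diluted to 250 nM,
# within the 240-480 nM target range:
print(buffer_to_add(2500, 5.0, 250))  # 45.0 uL of buffer
```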
For this project, lacey holey carbon grids were used, as the thickness of the vitreous ice obtained with them was optimal for high-resolution information (Cho et al., 2013). The grids are carbon-coated, and their hydrophobicity does not allow a homogeneous spreading of the drop over the grid. The hydrophobicity can be controlled by treating the grids with a low-energy plasma (Passmore and Russo, 2016). The grids were glow-discharged for 1 min with an ELMO machine (Cordouan, Pessac, France) at a pressure of 2 × 10⁻² mbar and a current of 2 mA. 3.5 µL of the mixture was transferred to a lacey holey carbon-coated grid, blotted for 5 s and frozen with a Vitrobot™ (FEI, Hillsboro, CA, USA). The Vitrobot™ is an automatic freezing machine that allows fast plunging of the grids into liquid ethane. The flash-freezing results in vitreous, glass-like ice and prevents the formation of ice crystals. The Vitrobot™ chamber was set to 4°C and 100% humidity. The humidity ensures that water does not evaporate from the sample and change the ribosome or salt concentration within the sample. Optimal ice thickness was obtained using a blotting time of 5 s (Passmore and Russo, 2016). The grid was then clipped for insertion into an autoloader and stored in liquid nitrogen until data collection.

Data collection
Data were collected on a Talos™ Arctica transmission electron microscope (FEI) equipped with a Falcon III direct electron detector (FEI) at the IECB (Pessac, France). The Talos™ Arctica runs at 200 kV and has a spherical aberration of 2.7 mm. The magnification was set to 120,000×, resulting in a pixel size of 1.24 Å. The grids were screened to determine the optimal distribution of the particles and thickness of the ice for data collection.
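The pixel size reported above fixes the theoretical resolution ceiling of any reconstruction from these images; a one-line sketch of the Nyquist limit, including the effect of the 3× downscaling applied later during processing:

```python
# The detector pixel size bounds the attainable resolution: the
# physical Nyquist limit is twice the pixel size, and binning the
# images coarsens it proportionally.

def nyquist_resolution(pixel_size_A, binning=1):
    return 2.0 * pixel_size_A * binning

print(round(nyquist_resolution(1.24), 2))     # 2.48 A at full pixel size
print(round(nyquist_resolution(1.24, 3), 2))  # 7.44 A after 3x binning
```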
The best squares were identified on the atlas of the whole grid collected by the EPU software (FEI-Thermo). Due to the irregularity of the holes on lacey carbon grids, the software was set up to collect at regular intervals of 2 µm between spots. Regions of the squares with cracks or thick ice were avoided. A total of 5000 movies were collected, each consisting of 20 frames. To obtain information over the whole resolution range, a defocus range between 0.5 µm and 2.5 µm was used.

Single-particle reconstruction using the RELION GUI
During data collection, the electrons interact with the ribosome, and those that pass through the sample form the recorded images. Image processing consists of aligning the movie frames with MotionCor2 (Zheng et al., 2017) and determining the defocus values and astigmatism of the aligned images using CTFFIND4 (Rohou and Grigorieff, 2015). The electron beam induces movement of the particles within the grid, which leads to blurred images and thus a low-resolution reconstruction. Thanks to the development of direct detectors, this movement can be tracked over the different frames. To correct for this motion, MotionCor2 was used to align each pixel of the 20 frames collected per movie. The aligned micrographs were converted into the corresponding power spectra for defocus estimation. Auto-picking values were optimized for each data set. The picking threshold used varied from 0.4 to 0.8, and the maximum standard deviation used was 1.1-1.5 to avoid picking on the carbon. As lacey grids were used for the analysis, some of the particles stuck to the carbon, resulting in loss of information. The minimum inner particle distance was set to 120 Å to avoid picking particles on the carbon.
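Picking distances are specified in Ångström while the micrographs are measured in pixels; converting the 120 Å minimum inter-particle distance with the 1.24 Å pixel size of this data set:

```python
# Angstrom-to-pixel conversion for the picking parameters above,
# using the 1.24 A/px pixel size of this data set.

def angstrom_to_px(distance_A, pixel_size_A):
    return distance_A / pixel_size_A

print(round(angstrom_to_px(120, 1.24)))  # ~97 px on the micrograph
```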
The picked particles were extracted and downscaled by a factor of 3 to speed up processing. The picked particles were classified into 100 different 2D classes over 25 iteration cycles. The alignment and classification were performed using an algorithm based on maximum likelihood (Scheres, 2010). To further improve the quality of the picked particles, 2D classes that appeared to be 70S ribosomes were selected and reclassified into 100 2D classes as a second cleaning step. The particles were then re-extracted at their original box size of 320 px and used as input for 3D auto-refinement with a low-resolution map of a bacterial ribosome as reference. The refinement was performed using gold-standard FSC calculations. This means that the data set was split in half and all reconstruction steps were performed independently. By comparing the two reconstructions, the Fourier shell correlation (FSC) is determined and, with it, the resolution. As recommended, the resolution cutoff was set to an FSC value of 0.143. The 3D auto-refinement was performed without solvent masking to prevent overfitting. However, including the disordered solvent increases the noise, resulting in a lower resolution; removing solvent regions with the help of a mask directly increases the FSC values and with them the resolution of the reconstruction. The resulting map was used as input for masked 3D classification, sorting all particles by the occupancy of tRNAs in the A-, P- and E-sites (Scheres, 2016). For this, a mask around the tRNAs and a mask for the ribosome without tRNAs were created, and, using particle subtraction, the particles were classified according to the occupancy of tRNA in the A-, P- or E-site.
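The gold-standard resolution estimate described above boils down to reading the half-map FSC curve at the 0.143 cutoff; a minimal sketch over an invented curve:

```python
# Sketch: gold-standard resolution from a half-map FSC curve. The
# reported resolution is the d-spacing of the last shell whose FSC is
# still >= 0.143. The curve below is invented for illustration.

def resolution_at_fsc(curve, cutoff=0.143):
    """curve: list of (d_spacing_A, fsc), ordered low -> high resolution."""
    resolution = None
    for d, fsc in curve:
        if fsc >= cutoff:
            resolution = d
        else:
            break
    return resolution

curve = [(8.0, 0.99), (6.0, 0.95), (4.5, 0.80),
         (3.5, 0.40), (3.0, 0.15), (2.8, 0.10)]
print(resolution_at_fsc(curve))  # 3.0
```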
The movement of individual particles was accounted for through movie refinement. Another important step to improve the final quality of the map is particle polishing. Over the course of data collection, in our case 20 frames per spot, the particles accumulate radiation damage and lose high-resolution information; still, the low-resolution information of later frames can contribute to the reconstruction. Each frame is assigned a calculated B-factor and, depending on its value, the frame is weighted in the final reconstruction (Scheres, 2016). The resulting "shiny" particles were used for another round of 3D refinement with a mask around the 50S subunit, as the arrest peptide is located in the ribosomal exit tunnel, followed by post-processing and the determination of the local resolution.

Model building
For model building and refinement, the resulting *.mrc file was renamed to *.ccp4. An initial model, a 2.8 Å resolution structure of a 70S E. coli ribosome (PDB code 4U27; Noeske et al., 2014), was rigid-body fitted into the density using the program Situs (Wriggers, 2012). In an initial fit, the whole structure was fitted into the density with the resolution set to 15 Å. The structure was then divided into the same domains as used for PHENIX refine, and the domains were fitted into the density over multiple cycles while stepwise increasing the resolution. This task was performed using the program "collage" within the Situs package (Wriggers, 2012). Flexible bases were rotated into the density, and the peptide and the drug were modeled using the program COOT (Emsley and Cowtan, 2004). The P-site tRNA (PDB code 4VY5) was rigid-body fitted into the density using COOT.
The backbone geometry of the rRNAs was improved using the program ERRASER (Chou et al., 2016). The resulting coordinates were refined against the map using PHENIX real-space refinement (Afonine et al., 2013). To increase the number of restraints, the input model was used as a reference model. The connection between the peptide and the tRNA was modeled as a data link, and a corresponding cif file was provided. Base-pairing restraints were applied as geometry restraints; they were generated using the online interface of the RNA Center in Santa Cruz, CA (Laurberg et al., 2008).

Bioinformatic databases and programs
The graphics programs PyMOL (DeLano, 2010) and Chimera (Pettersen et al., 2004) were used to generate figures for structural interpretation. Sequencing data were analyzed using the Basic Local Alignment Search Tool (BLAST; Altschul et al., 1990) and MultAlin (Corpet, 1988). The DNA sequence for tRNA cloning was taken from the tRNA database (Leipzig, Germany; Jühling et al., 2009). To calculate the theoretical mass spectrum, the sequence of proK, including its modifications, was taken from the MODOMICS database (Dunin-Horkawicz et al., 2006). The theoretical electrospray series were then generated with the Mongo Oligo Mass Calculator (Rozenski). The ExPASy (Expert Protein Analysis System) online tool ProtParam was used to calculate molecular weights, extinction coefficients and isoelectric points.
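What ProtParam computes for molecular weight and the 280 nm extinction coefficient can be reproduced from standard residue values; a simplified sketch (average residue masses plus one water, and the Pace coefficients ε280 = 5500·nTrp + 1490·nTyr, with cysteines assumed reduced):

```python
# Simplified ProtParam-style calculations: molecular weight from
# average residue masses plus one water, and the molar extinction
# coefficient at 280 nm from Trp/Tyr counts (Pace coefficients;
# cystines, which would add 125 M^-1 cm^-1 each, are ignored here).

AVG_RESIDUE_MASS = {  # Da, water of condensation already removed
    'A': 71.08, 'R': 156.19, 'N': 114.10, 'D': 115.09, 'C': 103.14,
    'E': 129.12, 'Q': 128.13, 'G': 57.05, 'H': 137.14, 'I': 113.16,
    'L': 113.16, 'K': 128.17, 'M': 131.19, 'F': 147.18, 'P': 97.12,
    'S': 87.08, 'T': 101.10, 'W': 186.21, 'Y': 163.18, 'V': 99.13,
}
WATER = 18.02  # Da, added back once per chain

def protein_mw(seq):
    return sum(AVG_RESIDUE_MASS[aa] for aa in seq) + WATER

def extinction_280(seq):
    return 5500 * seq.count('W') + 1490 * seq.count('Y')

pep = "WYG"  # toy tripeptide for illustration
print(round(protein_mw(pep), 2), extinction_280(pep))  # 424.46 6990
```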
Nucleic acid sequences encoding proteins were translated into the corresponding amino acid sequences using the Translate tool (Gasteiger et al., 2003).

(… et al., 2016; Roy et al., 2015; Seefeldt et al., 2016; Seefeldt et al., 2015)

Results
Proline-rich antimicrobial peptides inhibit the bacterial ribosome
A network of potential hydrogen bonds and stacking interactions of the peptide with ribosomal residues stabilizes the peptides within the ribosomal exit tunnel, resulting in well-structured peptides. In particular, residues Y6 and L7 of Onc112, which were previously identified to be crucial for antimicrobial activity (Knappe et al., 2011), adopt a specific conformation whereby they point into the A-site crevice. In Bac7(1-16) (green, Figure 15), the residue corresponding to Y6 is an arginine adopting a similar orientation. In Met I (raspberry, Figure 15), L7 is replaced by an arginine residue that points in a similar orientation and allows further interactions with ribosomal residues. The orientation of these residues is stabilized by U2506, which forms potential hydrogen bonds with the peptide backbone. Additional interactions of PrAMPs with the ribosomal tunnel include the binding site of the 3′ CCA end of the A-site tRNA and the upper tunnel region. As the binding site overlaps with that of the incoming aminoacyl-tRNA, we hypothesized that PrAMPs inhibit the transition from initiation towards elongation. This hypothesis was investigated biochemically by our collaborators, the group of Prof. Daniel N. Wilson (Gene Center Munich, now University of Hamburg), using toeprinting, disome assays and cell-free protein synthesis assays to confirm the proposed mechanism.
Our studies showed that PrAMPs specifically inhibit the transition from initiation towards elongation and destabilize the post-initiation complex. For the C-termini of Onc112, Met I and Pyr, no density was detected, suggesting that this region is disordered in the tunnel and potentially not necessary for the inhibition of protein biosynthesis. To investigate this further, Onc112-ΔC7/9 variants, lacking the C-terminal seven or nine amino acids, were used for bacterial growth inhibition experiments and cell-free protein synthesis assays. The results showed that PrAMPs need a certain length for uptake by the SbmA transporter and, to a certain extent, for ribosome binding, and that this length is greater than that required to inhibit protein biosynthesis. The resulting mechanism is summarized in Figure 16: comparing the structure of the Bac7(1-16)-70S T. thermophilus ribosome complex with the mammalian ribosome showed that the peptide could potentially bind to the mammalian exit tunnel. Using rabbit cell extract, we were able to show that Bac7(1-16) indeed inhibits eukaryotic protein biosynthesis (Seefeldt et al., 2016). However, Bac7 has been shown not to be toxic for eukaryotic cells; it might be taken up by endocytosis (Tomasinsig et al., 2006) and is stored after its production in granules (Zanetti et al., 1990). This prevents contact of Bac7 with the eukaryotic ribosome, and it is believed that insect-derived PrAMPs might act in a similar way (Seefeldt et al., 2016). PrAMPs are taken up by Gram-negative bacteria via the SbmA transporter. The work was published back to back with results from the group of Prof. Thomas A. Steitz, which corroborated our findings.

Antimicrobial peptides form a diverse group of molecules that are produced as part of the innate immune response of all multicellular organisms 1 . Among these, PrAMPs have garnered considerable attention as a possible means of countering the rapid increase in bacterial resistance to classical antibiotics 2,3 .
Unlike many peptides that kill bacteria by disrupting their cell membrane, PrAMPs are transported into the cytoplasm by specialized transporters, such as SbmA in Gram-negative bacteria 4,5 , where they inhibit specific intracellular targets. Given that such transport mechanisms are absent in mammalian cells, and only limited interactions with intracellular eukaryotic proteins have been detected, PrAMPs are generally considered to be nontoxic 6 and therefore an attractive alternative to existing antimicrobials. Interestingly, some PrAMPs can cross the blood-brain barrier to selectively target brain cells, thus further highlighting their potential for the treatment of cerebral infections or for brain-specific drug delivery 7 . Initial efforts to locate bacterial targets for PrAMPs led to the identification of the heat-shock protein DnaK as the prime candidate for inhibition 8 . Short proline-rich peptides (of 18-20 amino acids (aa)) such as oncocin, drosocin, pyrrhocoricin or apidaecin were previously shown to bind to this bacterial chaperone in a stereospecific manner, thus leading to the development of improved PrAMP derivatives with increased affinity for DnaK 9-12 . However, subsequent studies into the antimicrobial properties of PrAMPs 13 have suggested that these peptides are likely to use additional modes of action to inhibit growth. For example, a C-terminally truncated version of the apidaecin 1b peptide results in a loss of antimicrobial activity but no observable decrease in DnaK binding or cellular uptake 13 . Similarly, oncocin (Onc72 and Onc112) and apidaecin (Api88 and Api137) derivatives were found to inhibit the growth of a dnaK-deletion strain as efficiently as that of the dnaK-containing parental strain 14 . Further investigation revealed that these PrAMPs have an additional target within the bacterial cell, namely the ribosome 14 .
Although such PrAMPs have been shown to bind to the ribosome and inhibit translation 14 , the mechanism by which they inhibit translation has so far not been determined. Here, we set out to address this issue by obtaining a 3.1-Å-resolution X-ray crystallographic structure of the Thermus thermophilus 70S ribosome (Tth70S) in complex with a peptidyl (P)-site-bound deacylated tRNA i Met and Onc112, a representative of the oncocin family of PrAMPs produced by the milkweed bug (Oncopeltus fasciatus) 15 . The structure reveals that the N-terminal residues 1-12 of Onc112 bind to the upper region of the ribosomal exit tunnel, overlapping the binding site for the CCA end of an aminoacyl (A)-site tRNA at the peptidyl transferase center. Consistent with this, we showed biochemically that Onc112 allows translation to initiate but destabilizes the initiation complex and thus prevents subsequent entry of affected ribosomes into the translation-elongation phase. Moreover, we demonstrated that although the C-terminal portion of Onc112 is dispensable for ribosome binding, it is essential for antimicrobial activity. We believe that these findings will provide an excellent basis for the design of improved antibacterial compounds, either peptidic or peptidomimetic, that inhibit translation by targeting the ribosomal exit tunnel. 1 Institut Européen de Chimie et Biologie, Université de Bordeaux, Pessac, France. 2 INSERM U869, Bordeaux, France. 3 Gene Center, Department of Biochemistry, University of Munich, Munich, Germany. 4 Université de Bordeaux, CNRS, Institut Polytechnique de Bordeaux, UMR 5248, Institut de Chimie et Biologie des Membranes et des Nano-objets (CBMN), Pessac, France. 5 Center for Integrated Protein Science Munich (CiPSM), University of Munich, Munich, Germany. 6 These authors contributed equally to this work. Correspondence should be addressed to C.A.I. ([email protected]) or D.N.W. ([email protected]).
RESULTS
Onc112 binds in a reverse orientation within the exit tunnel
We obtained the structure herein referred to as Tth70S-Onc112 by soaking the 19-aa Onc112 peptide (VDKPPYLPRPRPPRrIYNr-NH2, in which r denotes D-arginine) into crystals of Tth70S ribosomes in complex with a P-site-bound deacylated tRNA i Met and a short mRNA (Table 1). Using a minimally biased F o -F c map calculated after refinement of a model comprising Tth70S ribosomes, tRNA i Met and mRNA but lacking Onc112, we could see clear density that could be attributed to the N-terminal two-thirds of the Onc112 peptide (Fig. 1). Interestingly, the peptide is bound inside the tunnel with a reversed orientation relative to the growing polypeptide chain during protein synthesis, i.e., with its N terminus located near the peptidyl transferase center and its C terminus extending into the exit tunnel toward the constriction formed by ribosomal proteins L4 and L22. Despite the reversed orientation, the location of the Onc112 peptide overlaps to varying extents with the path of nascent polypeptide chains that have been visualized within the ribosomal tunnel 16-18 (Supplementary Fig. 1). The conformation of Onc112 bound to the ribosome is extended, in a manner similar to but distinct from that observed previously for oncocin in complex with DnaK 10 (Supplementary Fig. 2). Our CD studies suggest that, in solution, the Onc112 peptide adopts an essentially random conformation, with short stretches of poly(Pro)II helix, specifically, 6% α-helix, 54% random coil, 30% PPII and 6% β-sheet (Supplementary Fig. 3).
Interaction between Onc112 and 23S rRNA of the exit tunnel
Comparison of the Tth70S-Onc112 structure with that of a Tth70S ribosome featuring tRNA i Met bound to the P site 19 reveals that several nucleotides of the 23S rRNA undergo a conformational change upon binding of Onc112 to the ribosome (Fig. 2a).
U2506 shifts to occupy a position similar to that observed upon binding of aminoacyl-tRNA to the A site of the ribosome 20,21 . In the presence of Onc112, U2585, which is very flexible in many crystal structures, adopts a defined position similar to that modeled in the structure of a vacant Escherichia coli 70S ribosome 22 . In addition, A2062 shifts to provide space for Onc112, adopting a similar conformation to that observed previously in the presence of the ErmBL nascent chain 23 . Thus, binding of Onc112 to the ribosome is accompanied by an induced fit involving several 23S rRNA nucleotides that are generally known for their dynamic behavior within the peptidyl transferase center and ribosomal tunnel. Electron density for the Onc112 peptide was strongest for residues Val1-Pro8 and became weaker after Pro10, thus making it difficult to model the peptide beyond Pro12 (Fig. 1). We observed three sets of interactions between the N-terminal 10 aa of Onc112 and nucleotides of the 23S rRNA (Fig. 2b). The first set involves aa 1-3 of Onc112 and encompasses eight potential hydrogen-bond interactions (Fig. 2b,c). Val1 of Onc112 can form three hydrogen bonds with nucleotides of the 23S rRNA: two via its α-amine to the N3 atom of C2573 and the O3 atom of C2507, and one via its carbonyl oxygen to the N4 atom of C2573. Three additional hydrogen bonds are possible between the side chain carboxylic acid of Asp2 and the N1 and N2 atoms of G2553 or the 2′-OH of C2507. The positively charged side chain of Lys3 extends into a negatively charged cavity, displacing a hydrated magnesium ion that is present at this site in other Tth70S ribosome structures 20 , and it interacts with the backbone phosphates of A2453 (Fig. 2c) and U2493 (not shown). Substitution of Val1, Asp2 and especially Lys3 by alanine in Onc72 leads to a loss of antimicrobial activity 10 , whereas, as expected, a D2E mutant of Onc112 retained both in vitro and in vivo activity (Supplementary Fig. 4).
The K3A substitution in Onc72 reduced its ribosome binding affinity by a factor of 4.3 and lowered the half-maximal inhibitory concentration (IC50) for in vitro translation more than 18-fold 14. The second set of interactions involves the side chains of Tyr6 and Leu7 of Onc112 (Fig. 2b,d). The aromatic side chain of Tyr6 establishes a π-stacking interaction with C2452 of the 23S rRNA (Fig. 2d). In addition, the side chain hydroxyl of Tyr6 hydrogen-bonds with an undetermined ion that is coordinated by the backbone phosphate of U2506 and the O2 atoms of C2452 and U2504. The hydrophobic cavity occupied by the Tyr6 side chain also accommodates the side chain of Leu7 of Onc112, which packs against the phenol moiety of Tyr6, whereas the backbone of Leu7 forms two hydrogen bonds with U2506 (Fig. 2b,d). The compact hydrophobic core formed by Tyr6 and Leu7 is likely to be key in anchoring the Onc112 peptide to the tunnel, because mutagenesis experiments have shown that alanine substitution of either residue in Onc72 reduces the ribosome binding affinity by a factor of 7 and results in a complete loss of inhibitory activity on translation in vitro 14. In contrast, mutation of Leu7 in Onc112 to cyclohexylalanine, which would preserve the hydrophobic environment, resulted in retention of inhibitory activity on translation in vitro but unexpectedly led to a loss of antimicrobial activity (Supplementary Fig. 4). Additional interactions with the ribosome encompass the PRPRP motif of Onc112 (Fig. 2b) and include a π-stacking interaction between the guanidino group of Arg9 of Onc112 and the base of C2610 (Fig. 2e). Although substitution of Arg11 with alanine in Onc72 also reduces the ribosome binding affinity and inhibitory properties of the peptide 14, we observed very little density for the side chain of this residue, thus suggesting that it could be important for the overall electrostatic properties of the peptide rather than for a defined interaction with the ribosome (Fig.
1). The high conservation of the 23S rRNA nucleotides that comprise the ribosome-binding site of Onc112 is consistent with the broad spectrum of antimicrobial activity displayed by this and related PrAMPs against a range of Gram-negative bacteria 10,24. Onc112 allows translation to initiate but blocks elongation Comparison of the Tth70S-Onc112 structure with that of the Tth70S ribosome in the preattack state of peptide-bond formation 20 indicated that the binding of Onc112 to the ribosome would prevent accommodation of the CCA end of an incoming aminoacyl-tRNA via steric occlusion of the ribosomal A site at the peptidyl transferase center (Fig. 3a). Indeed, Asp2 of Onc112 directly interacts with G2553, a residue located within helix H92 of the 23S rRNA, termed the A loop, that normally stabilizes the A-site tRNA at the peptidyl transferase center via Watson-Crick base-pairing with nucleotide C75 of its CCA end. In order to determine the step of translation that Onc112 inhibits, we performed cell-free protein synthesis and monitored the location of the ribosomes on the mRNA by using toe-printing assays 25,26 (Fig. 3b and Supplementary Data Set 1). In the absence of Onc112 or antibiotic, ribosomes were able to initiate at the AUG start codon and translate through the open reading frame, but they became trapped on the downstream isoleucine codon because isoleucine was omitted from the translation mix. In the presence of the antibiotics clindamycin or thiostrepton, ribosomes accumulated at the start codon and could not translate down to the isoleucine codon, because these antibiotics prevent delivery and/or accommodation of the first aminoacyl-tRNA directly following the initiation codon 27.
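As a sketch of how toe-print bands map to ribosome positions: the reverse-transcriptase stop falls a fixed distance downstream of the P-site codon. The ~16-nt offset used below is a commonly cited convention for 70S toe-printing, not a value stated in the text, so treat it as an assumption:

```python
# Sketch of toe-print band interpretation. Assumption: the reverse
# transcriptase stops ~16 nt downstream of the first nucleotide of the
# P-site codon (a commonly used offset, not given in the text above).
TOEPRINT_OFFSET = 16

def p_site_codon_start(band_pos_on_mrna, offset=TOEPRINT_OFFSET):
    """Map a toe-print stop position (0-based index on the mRNA) to the
    0-based position of the first nucleotide of the P-site codon."""
    return band_pos_on_mrna - offset

# If the AUG start codon begins at position 30 of the mRNA, an initiation
# toe-print band (ribosome accumulated at the start codon) is expected
# near position 46:
print(p_site_codon_start(46))  # → 30
```

With this mapping, a band shift of three nucleotides corresponds to the ribosome advancing by exactly one codon, which is how stalling at the start codon versus the downstream isoleucine codon is distinguished on the gel.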
We observed similar results when performing the toe-printing assay with increasing concentrations of the Onc112 peptide, namely a loss of the band corresponding to ribosomes stalled at the isoleucine codon and an increase in the band corresponding to ribosomes accumulating at the start codon. These findings indicate that Onc112 allows subunit joining and formation of the 70S initiation complex but prevents accommodation of the first aminoacyl-tRNA at the A site, as suggested by the steric overlap between Onc112 and an A-site tRNA (Fig. 3a). This contrasts with a bona fide translation-initiation inhibitor, such as edeine, which interferes with the stable binding of fMet-tRNAiMet to the 30S subunit and thus prevents 70S initiation-complex formation 28, in agreement with the lack of a toe-print band at the start codon in the presence of edeine (Fig. 3b). Onc112 destabilizes the translation-initiation complex We noticed that the toe-print bands at the start codon in the presence of Onc112 were consistently weaker than those observed in the presence of clindamycin or thiostrepton (Fig. 3b and data not shown), thus suggesting that Onc112 may also perturb the placement of fMet-tRNAiMet at the P site. In the Tth70S-Onc112 structure, the P-site tRNA is uncharged, and its terminal A76 residue undergoes a conformational change that positions it ~3.4 Å away from the Onc112 peptide. In vivo, however, we would expect fMet-tRNAiMet to bind to the peptidyl transferase center in the same manner as in the Tth70S ribosome preattack complexes 20, such that the formyl group of the fMet moiety would sterically clash with residues Tyr6 and Leu7 of the Onc112 peptide (Fig. 3a). Consequently, to investigate the stability of the translation-initiation complex formed in the presence of Onc112, we used sucrose gradients to monitor disome formation upon translating a dicistronic ErmBL mRNA in vitro (Fig. 3c-g).
As a positive control, we performed translation in the presence of the macrolide antibiotic erythromycin, which acts synergistically with the ErmBL polypeptide chain within the ribosomal tunnel to stall translation at a specific site on the mRNA 29. Because the mRNA was dicistronic, two stalled ribosomes on a single mRNA led to the formation of disomes that could be visualized on a sucrose gradient (Fig. 3d), as shown previously 16,23. We observed negligible disome formation in the absence of erythromycin because the ribosomes rapidly translated the short ORF and were released from the mRNA (Fig. 3e). As expected, thiostrepton, which allows 70S initiation-complex formation but prevents elongation (Fig. 3b), also led to efficient disome formation (Fig. 3f). In contrast, the presence of Onc112, even at concentrations as high as 100 μM, resulted in only a small increase in disomes (Fig. 3g). This leads us to suggest that the 70S initiation complexes formed in the presence of Onc112 are unstable, presumably because the Onc112 peptide encroaches onto fMet-tRNAiMet, thus causing it to dissociate from the ribosome under the nonequilibrium conditions (centrifugation and sucrose density) used in the disome assay. Onc112 C terminus is needed for uptake, not ribosome binding The lack of density for the C terminus of Onc112 (residues 13-19) hinted that this region is dispensable for ribosome binding, yet its high degree of conservation suggested that it may nevertheless be necessary for antimicrobial activity. In order to assess the role of the C-terminal region of Onc112, we prepared truncated versions of this peptide, Onc112ΔC7 and Onc112ΔC9, which lacked the last 7 and 9 aa, respectively. We then determined the half-minimal inhibitory concentration (MIC50) for the growth of E. coli strain BL21(DE3) in the presence of full-length Onc112 and compared it with those of the truncated Onc112ΔC7 and Onc112ΔC9 derivatives (Fig. 4a).
As expected, the full-length Onc112 displayed good activity, inhibiting growth with an MIC50 of 10 μM, a value similar to that reported previously 14. In contrast, truncation of 7 or 9 aa from the C terminus of Onc112 led to a complete loss of inhibitory activity, even at concentrations above 100 μM (Fig. 4a). To ascertain whether the truncated Onc112 peptides could still bind to the ribosome and inhibit translation, we monitored in vitro translation of firefly luciferase (Fluc) by measuring luminescence after 60 min of translation in the presence of increasing concentrations of either full-length Onc112 or the truncated Onc112ΔC7 and Onc112ΔC9 derivatives (Fig. 4b). As expected, the full-length peptide displayed excellent activity, inhibiting translation of Fluc with an IC50 of 0.8 μM (Fig. 4b), a value similar to that reported when the same system was used for well-characterized translation inhibitors such as chloramphenicol 30. In contrast to their lack of antimicrobial activity (Fig. 4a), both truncated Onc112 peptides displayed some inhibitory activity in the in vitro translation system (Fig. 4b), albeit with a reduced efficiency relative to that of full-length Onc112. Specifically, the Onc112ΔC7 peptide consisting of residues 1-12 of Onc112 had an IC50 of 5 μM, which was only six times greater than that of full-length Onc112, a result consistent with our structure-based prediction that these residues comprise the major ribosome binding determinants. In contrast, the Onc112ΔC9 peptide consisting of aa 1-10 of Onc112 had an IC50 of 80 μM, which was 16 times greater than that of Onc112ΔC7 and two orders of magnitude greater than that of full-length Onc112. These results illustrate the contribution of Arg11 to efficient ribosome binding and translation inhibition, as reported previously 14.
[Figure 5 legend: (a) Model for the mechanism of action of Onc112: (1) Onc112 binds within the exit tunnel of the ribosome with a reverse orientation relative to a nascent polypeptide chain. (2) Onc112 allows formation of a translation-initiation complex but prevents accommodation of the aminoacyl-tRNA (aa-tRNA) at the A site of the peptidyl transferase center. (3) The initiation complex is destabilized, thus leading to dissociation of the fMet-tRNAiMet from the P site. Although full-length Onc112 can efficiently penetrate the bacterial cell membrane by using the SbmA transporter (4), C-terminal truncation of Onc112 reduces its antimicrobial activity (5), presumably owing to impaired uptake. (b) Relative binding position of Onc112 (orange) on the ribosome, compared with those of the well-characterized translation inhibitors chloramphenicol (purple) 32,33, clindamycin (green) 33, tiamulin (yellow) 34 and erythromycin (blue) 32,33, as well as an A site-bound Phe-tRNAPhe (ref. 20).]
Although the inner-membrane protein SbmA has been shown to be responsible for the uptake of the eukaryotic PrAMPs Bac7 and PR39 (refs. 4,5), the only insect PrAMP tested so far was apidaecin Ib 4. In order to assess the role of SbmA in the uptake of Onc112, we compared the growth of an E. coli strain lacking the sbmA gene (ΔsbmA) with the parental strain BW25113 in the presence of increasing concentrations of Onc112 (Fig. 4c). As expected, the full-length Onc112 displayed excellent activity against the susceptible SbmA-containing parental strain, inhibiting growth with an MIC50 of 8 μM (Fig. 4c), a value similar to that observed with the BL21(DE3) strain (Fig. 4a). In contrast, the ΔsbmA strain displayed increased resistance to Onc112, such that even with 100 μM Onc112, growth was reduced by only 20% (Fig. 4c). These findings indicate that SbmA is indeed necessary for the uptake of Onc112 into Gram-negative bacteria, such as E. coli, and provide further support for the proposition that the SbmA transporter is involved in the mechanism of action of the entire group of the PrAMPs 4.
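The MIC50 and IC50 values quoted in these assays can be estimated from a few dose-response points by interpolating the 50% crossing on a log-concentration scale; here is a minimal sketch with hypothetical data (dedicated software would normally fit a full Hill equation instead):

```python
# Minimal sketch (hypothetical data): estimate an IC50/MIC50 by log-linear
# interpolation between the two dose points that bracket 50% activity, as
# measured in the Fluc translation and growth-inhibition assays above.
import math

def ic50(concs, activities):
    """Return the concentration at which relative activity crosses 50%,
    interpolating on log10(conc). `concs` must be ascending; `activities`
    are percentages (100 = no inhibition)."""
    for (c1, a1), (c2, a2) in zip(zip(concs, activities),
                                  zip(concs[1:], activities[1:])):
        if a1 >= 50.0 >= a2:  # bracketing pair found
            f = (a1 - 50.0) / (a1 - a2)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% crossing not bracketed by the data")

# Hypothetical dose-response for a peptide (concentrations in uM):
concs = [0.1, 0.3, 1.0, 3.0, 10.0]
activities = [95.0, 80.0, 45.0, 15.0, 5.0]
print(round(ic50(concs, activities), 2))  # crosses 50% between 0.3 and 1.0 uM
```

Interpolating on the log scale matters because dose-response curves are roughly sigmoidal in log(concentration); linear interpolation on raw concentrations would bias the estimate toward the higher dose.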
DISCUSSION From our structural and biochemical data, we propose a model to explain the mechanism by which PrAMPs such as oncocin inhibit translation (Fig. 5a). We have shown that the binding of Onc112 to the ribosomal exit tunnel allows formation of the 70S initiation complex but prevents accommodation of the aminoacyl-tRNA into the A site (Fig. 5a, steps 1 and 2). Additionally, we propose that Onc112 destabilizes the postinitiation complex by inducing dissociation of fMet-tRNA i Met from the P site (Fig. 5a, step 3). Finally, our data also suggest that positively charged residues distributed along the entire length of the Onc112 sequence are necessary for ensuring the efficient SbmA-mediated uptake of Onc112 into the cell, whereas residues from the N-terminal moiety of Onc112 are responsible for targeting this peptide to the ribosome (Fig. 5a, steps 4 and 5). We believe that this mechanism of action is likely to be the same for other PrAMPs, such as drosocin, pyrrhocoricin and apidaecin, which share many of the residues of Onc112 that are important for its ribosome binding and antimicrobial function. The binding site for Onc112 within the ribosomal exit tunnel overlaps with the binding sites for a majority of the antibiotics that target the large subunit of the ribosome (Fig. 5b), such as the chloramphenicols, pleuromutilins (for example, tiamulin) and lincosamides (for example, clindamycin), which inhibit peptide-bond formation by preventing the correct positioning of the tRNA substrates, as well as the macrolides (for example, erythromycin), which abort translation by interfering with the movement of the nascent polypeptide chain through the ribosomal exit tunnel 27 . Given the substantial spatial overlap that exists between the binding sites for these antibiotics and the regions of the tunnel that interact with Onc112 (Fig. 
5b) and presumably with several other PrAMPs, it appears likely that such antimicrobial peptides represent a vast, untapped resource for the development of new therapeutics. Several strategies have been pursued to design improved or entirely new antimicrobials that target the exit tunnel of the ribosome 31. One approach consists of modifying existing antibiotics to create semisynthetic compounds that possess enhanced antimicrobial properties, including better affinity for mutated or modified ribosomes, the ability to evade drug modification or degradation pathways, increased solubility, improved uptake and reduced efflux. Other strategies involve designing chimeric antibiotics from drugs with adjacent binding sites (for example, macrolide-chloramphenicol or linezolid-sparsomycin) or developing entirely new scaffolds, as exemplified by the oxazolidinone linezolid. The ability to produce new scaffolds based on peptides, such as Onc112, that display potent activity against a diverse range of Gram-negative bacteria represents an exciting avenue for the development of future antimicrobials. METHODS Peptide characterization was performed by NMR, with spectra recorded on an instrument equipped with a 19F probe with gradient capabilities (Supplementary Fig. 5). ESI-MS analyses were carried out on a Thermo Exactive from the Mass Spectrometry Laboratory at the European Institute of Chemistry and Biology (UMS 3033-IECB), Pessac, France (Supplementary Fig. 5). Peptide synthesis. All peptides were synthesized on Rink Amide PS resin (0.79 mmol/g) with a five-fold excess of reagents for the coupling step (0.2 M N-Fmoc-amino acid solution in DMF, with 0.5 M DIC in DMF and 1.0 M Oxyma in DMF). Coupling of N-Fmoc-protected l- and d-arginine-OH was performed twice at 25 °C without microwaves for 1,500 s. Other amino acid couplings were performed first at 90 °C, 170 W, 115 s and then at 90 °C, 30 W, 110 s. Fmoc removal was performed with a solution of 20% piperidine in DMF, first at 75 °C with 155 W for 15 s and then at 90 °C, 35 W, 50 s.
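The resin loading and five-fold excess quoted above imply simple reagent arithmetic for each coupling; here is a sketch (the numbers come from the protocol, but the function and reagent labels are mine):

```python
# Sketch of the coupling-reagent arithmetic implied by the protocol above:
# volume of each stock needed for one coupling at a given molar excess over
# the resin substitution. (Function/variable names are illustrative.)
def coupling_volumes_ml(resin_g, loading_mmol_per_g, excess, stocks_molar):
    """stocks_molar maps reagent name -> stock concentration in mol/L
    (numerically equal to mmol/mL); returns volumes in mL."""
    mmol_needed = resin_g * loading_mmol_per_g * excess
    return {name: mmol_needed / conc for name, conc in stocks_molar.items()}

# 0.1 g Rink Amide PS resin at 0.79 mmol/g, 5-fold excess, stocks as listed:
vols = coupling_volumes_ml(0.1, 0.79, 5,
                           {"Fmoc-AA": 0.2, "DIC": 0.5, "Oxyma": 1.0})
# Roughly 2 mL amino-acid stock, 0.8 mL DIC and 0.4 mL Oxyma per coupling
print(vols)
```

The same helper scales directly with resin mass, which is convenient when the synthesis is repeated at a different batch size.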
After completion of the synthesis, the peptide resin was washed three times with DCM. Cleavage was performed by treatment with 5 mL of a freshly prepared TFA/TIS/H2O solution for 240 min at room temperature. The resin was then filtered off, and the TFA solution was concentrated under reduced pressure. The crude products were precipitated as TFA salts in the presence of Et2O and purified with the appropriate gradient (10-30% of B in 20 min) by semipreparative RP-HPLC. The compounds were freeze-dried, and TFA was exchanged with HCl by two repetitive freeze-drying cycles in 0.1 N HCl solution 35. The list of peptides prepared for this study, with details concerning their synthesis, is as follows: Onc112, H-Val-Asp-Lys-Pro-Pro-Tyr-Leu-Pro-Arg-Pro-Arg-Pro-Pro-Arg-(d-Arg)-Ile-Tyr-Asn-(d-Arg)-NH2 (2,389.85 Da). CD spectroscopy. CD spectra of peptides were recorded on a J-815 Jasco spectropolarimeter (Jasco France). Data are expressed in terms of total molar ellipticity in deg·cm²·dmol⁻¹. CD spectra for the Onc112 peptide were acquired at four different concentrations in phosphate buffer (pH 7.6, 10 mM) between 180 and 280 nm with a rectangular quartz cell with a path length of 1 mm (Hellma 110-QS, 1 mm), averaging over two runs. The secondary-structure proportions were estimated from the CD spectra with the deconvolution program CDFriend (S. Buchoux (Unité de Génie Enzymatique et Cellulaire, UMR 6022 CNRS-Université de Picardie Jules Verne) and E. Dufourc (Université de Bordeaux, CNRS, Institut Polytechnique de Bordeaux, UMR 5248 Institut de Chimie et Biologie des Membranes et des Nano-objets (CBMN); available upon request), unpublished). This program uses standard curves obtained for each canonical structure (α-helix, β-sheet, polyproline type II helix and random coil) with LiKj peptides (alternating hydrophobic leucine and hydrophilic/charged lysine residues) of known length, secondary structure and CD spectra.
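As a toy illustration of this basis-spectrum decomposition (the "spectra" below are synthetic three-point vectors, and the actual CDFriend program searches with simulated annealing rather than the brute-force grid used here):

```python
# Toy sketch (synthetic data): decompose a spectrum into fractions of
# canonical basis spectra by brute-force search over fraction grids,
# minimizing the normalized r.m.s. deviation -- the same criterion the
# CDFriend deconvolution described in the text minimizes.
import itertools, math

def nrmsd(exp, calc):
    num = sum((e - c) ** 2 for e, c in zip(exp, calc))
    den = sum(e ** 2 for e in exp)
    return math.sqrt(num / den)

def deconvolve(exp, basis, step=0.05):
    """Return fractions (one per basis spectrum, summing to 1) minimizing
    the NRMSD between the mixture and the experimental spectrum."""
    n = int(round(1 / step))
    best = None
    for parts in itertools.product(range(n + 1), repeat=len(basis) - 1):
        if sum(parts) > n:
            continue
        fracs = [p * step for p in parts] + [1 - sum(parts) * step]
        calc = [sum(f * b[i] for f, b in zip(fracs, basis))
                for i in range(len(exp))]
        d = nrmsd(exp, calc)
        if best is None or d < best[0]:
            best = (d, fracs)
    return best[1]

# Hypothetical basis "spectra" (3 wavelengths) for random coil and PPII:
coil = [-3.0, 1.0, 0.5]
ppii = [-6.0, 4.0, -1.0]
mix = [0.6 * c + 0.4 * p for c, p in zip(coil, ppii)]  # known 60/40 mixture
print(deconvolve(mix, [coil, ppii]))  # recovers ~[0.6, 0.4]
```

With four basis structures and a fine grid this exhaustive search becomes expensive, which is precisely why a stochastic optimizer such as simulated annealing is used in practice.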
The program implements a simulated annealing algorithm to find the best combination of α-helix, β-sheet, PPII helix and random coil that exhibits the lowest normalized r.m.s. deviation with respect to the experimental spectrum 36-38. The algorithm yielded the following assessment for the Onc112 peptide: 54% random coil, 30% PPII helix, 6% α-helix and 6% β-sheet content. Purification of T. thermophilus 70S ribosomes. Tth70S ribosomes were purified as described previously 39 and resuspended in buffer containing 5 mM HEPES-KOH, pH 7.5, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2 to yield a final concentration of 26-32 mg/mL. For storage, Tth70S ribosomes were flash frozen in liquid nitrogen and kept at -80 °C. Preparation of mRNA and tRNAiMet. Synthetic mRNA with the sequence 5′-GGC AAG GAG GUA AAA AUG CGU UUU CGU-3′ was obtained from Eurogentec. This mRNA contains a Shine-Dalgarno sequence and an AUG start codon followed by several additional codons. E. coli tRNAiMet was overexpressed in E. coli HB101 cells and purified as described previously 40. Complex formation. A ternary complex containing Tth70S ribosomes, mRNA and deacylated tRNAiMet was formed by mixing 5 μM Tth70S ribosomes with 10 μM mRNA and incubating at 55 °C for 10 min. Next, 20 μM tRNAiMet was added, and the mixture was incubated at 37 °C for 10 min. Before the complexes were used for crystallization, the sample was incubated at room temperature for at least 15 min. All complexes were centrifuged briefly before use for crystallization. The final buffer conditions were 5 mM HEPES-KOH, pH 7.6, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2. Crystallization. Published conditions were used as a starting point for screening crystallization conditions by vapor diffusion in sitting-drop trays at 20 °C (refs. 20,39).
Crystallization drops consisted of 3 μL of ternary complex and 3-4 μL of reservoir solution containing 100 mM Tris-HCl, pH 7.6, 2.9% (v/v) PEG 20000, 7-10% (v/v) MPD and 175 mM arginine. Crystals appeared within 2-3 d and grew to ~1,000 × 100 × 100 μm within 7-8 d. For cryoprotection, the concentration of MPD was increased in a stepwise manner to yield a final concentration of 40% (v/v). The ionic composition during cryoprotection was 100 mM Tris-HCl, pH 7.6, 2.9% (v/v) PEG 20000, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2. Tth70S-Onc112 complexes were obtained by soaking 10-20 μM Onc112, dissolved in the final cryoprotection solution, into the crystals overnight at 20 °C. Crystals were then flash frozen in a nitrogen cryostream at 80 K for subsequent data collection. Data collection and processing. Diffraction data were collected at beamline ID29 of the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. A complete data set was obtained by merging 0.1° oscillation data collected at 100 K with a wavelength of 0.97625 Å from multiple regions of the same crystal. Initial data processing, including integration and scaling, was performed with XDS 41. All of the data collected could be indexed in the P2₁2₁2₁ space group, with unit-cell dimensions around 210 Å × 450 Å × 625 Å and an asymmetric unit containing two copies of the Tth70S ribosome. Model building and refinement. Initial phases were obtained by molecular replacement performed with Phaser 42. The search model was obtained from a high-resolution structure of the Tth70S ribosome (PDB 4Y4O). Restrained crystallographic refinement was carried out with Phenix 43 and consisted of a single cycle of rigid-body refinement followed by multiple cycles of positional and individual B-factor refinement. Rigid bodies comprised four domains from the small 30S subunit (head, body, spur and helix h44) and three domains from the large 50S subunit (body, L1 stalk and the C terminus of ribosomal protein L9).
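The positional and B-factor cycles mentioned here minimize the disagreement between observed and calculated structure-factor amplitudes, commonly summarized by the crystallographic R-factor; a toy sketch with synthetic amplitudes (real refinement is done in Phenix, as stated in the text):

```python
# Toy sketch: the crystallographic R-factor tracked during refinement,
# R = sum(| |Fo| - |Fc| |) / sum(|Fo|), computed over synthetic amplitudes.
def r_factor(f_obs, f_calc):
    """f_obs/f_calc: matched lists of structure-factor amplitudes."""
    return sum(abs(fo - fc) for fo, fc in zip(f_obs, f_calc)) / sum(f_obs)

f_obs = [100.0, 80.0, 60.0, 40.0]
f_calc = [90.0, 85.0, 58.0, 44.0]
print(round(r_factor(f_obs, f_calc), 3))  # → 0.075
```

In practice the same formula evaluated on a reflection subset excluded from refinement gives R-free, the cross-validation statistic used to guard against overfitting.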
Noncrystallographic symmetry restraints between the two copies of the Tth70S ribosome in the asymmetric unit were also applied during refinement. After confirming that a single tRNA was bound to the P site and that additional density corresponding to the Onc112 peptide was visible inside the exit tunnel in a minimally biased Fo-Fc map, a model for Onc112 was built with Rapper 44 and Coot 45. The models for the tRNA and mRNA were obtained from a high-resolution structure of the Tth70S ribosome preattack complex (PDB 1VY4). Further refinement and model validation were carried out in Phenix and on the MolProbity server 46, respectively. In the final model, 0.65% of protein residues were classified as Ramachandran outliers, and 94.38% had favorable backbone conformations. In vitro translation assay. The inhibition of firefly luciferase (Fluc) synthesis by Onc112 was assessed with an E. coli lysate-based coupled transcription-translation assay (RTS100, 5Prime), as described previously for other translational inhibitors 30. Briefly, 6-μL reactions, with or without Onc112/antibiotic, were mixed according to the manufacturer's description and incubated for 1 h at 30 °C with shaking (1,000 r.p.m.). 1 μL of each reaction was stopped with 7 μL kanamycin (50 μg/μL) and then diluted with 40 μL of luciferase assay substrate (Promega) into a white 96-well chimney flat-bottom microtiter plate (Greiner). The luminescence was then measured with a Tecan Infinite M1000 plate reader. Relative values were determined by defining the luminescence value of the sample without inhibitor as 100%. Growth inhibition assays. Determination of the minimal inhibitory concentration (MIC) of Onc112 was performed as described previously for other antibiotics 30. Specifically, an overnight culture of E.
coli strain BL21(DE3) (Invitrogen), BW25113 or the Keio deletion strain BW25113ΔsbmA (plate 61, well 10E) 47 was diluted 1:100 to an OD600 of ~0.02, and 200 μL of the diluted cells was then transferred into individual wells of a 96-well plate (Sarstedt). Either 10 μL of Onc112, Onc112 derivative peptide or water was added to each well. Plates were then incubated overnight in a thermomixer (Eppendorf) at 37 °C/350 r.p.m. The OD600 was measured in a Tecan Infinite M1000 plate reader, and the relative growth was calculated by defining the growth of samples without antibiotic as 100%. Toe-printing assay. The position of the ribosome on the mRNA was monitored with a toe-printing assay based on an in vitro coupled transcription-translation system with the PURExpress in vitro protein synthesis kit (NEB) 26. Briefly, each translation reaction consisted of 1 μL solution A, 0.5 μL isoleucine + tryptophan amino acid mixture, 0.5 μL tRNA mixture, 1.5 μL solution B, 1 μL (0.5 pmol) hns40aa template (5′-ATTAATACGACTCACTATAGGGATATAAGGAGGAAAACATATGAGCGAAGCACTTAAAATTCTGAACAACATCCGTACTCTTCGTGCGCAGGCAAGAGAATGTACACTTGAAACGCTGGAAGAAATGCTGGAAAAATTAGAAGTTGTTGTTAACGAACGTTGGATTTTGTAAGTGATAGAATTCTATCGTTAATAAGCAAAATTCATTATAACC-3′, with start codon ATG, catch isoleucine codon ATT and stop codon TAA in bold, the hns40aa ORF underlined and primer-binding sites in italics) and 0.5 μL additional agents (nuclease-free water, Onc112 or antibiotics). Translation was performed in the absence of isoleucine at 37 °C for 15 min at 500 r.p.m. in 1.5-mL reaction tubes. Ile-tRNA aminoacylation was further prevented by the use of the Ile-tRNA synthetase inhibitor mupirocin. After translation, 2 pmol Alexa647-labeled NV-1 toe-print primer (5′-GGTTATAATGAATTTTGCTTATTAAC-3′) was added to each reaction and incubated at 37 °C without shaking for 5 min. Reverse transcription was performed with 0.5 μL of AMV RT (NEB), 0.1 μL dNTP mix (10 mM) and 0.4 μL Pure System Buffer and incubated at 37 °C for 20 min.
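The catch-codon design of this template (ribosomes stall at the first in-frame Ile codon when Ile-tRNA charging is blocked by mupirocin) can be sketched as a simple codon scan; the short test sequence below is a synthetic stand-in, not the actual hns40aa template:

```python
# Sketch: locate the "catch" codon in a toe-printing template. Translation
# stalls at the first in-frame Ile codon (ATT/ATC/ATA) when Ile-tRNA
# charging is blocked, as in the mupirocin-containing reactions above.
ILE_CODONS = {"ATT", "ATC", "ATA"}
STOP_CODONS = {"TAA", "TAG", "TGA"}

def first_catch_codon(template):
    """Return (codon_index, codon) of the first in-frame Ile codon after
    the first ATG, scanning codon-by-codon up to the first stop codon;
    codon_index 0 is the ATG itself. Returns None if there is no catch."""
    start = template.find("ATG")
    for i in range(start, len(template) - 2, 3):
        codon = template[i:i + 3]
        if codon in STOP_CODONS:
            break
        if codon in ILE_CODONS and i != start:
            return (i - start) // 3, codon
    return None

# Toy ORF: ATG (Met) - AGC (Ser) - GAA (Glu) - ATT (Ile) - TAA (stop)
print(first_catch_codon("GGAGGAAAACATGAGCGAAATTTAA"))  # → (3, 'ATT')
```

A real analysis would also resolve which ATG is the genuine start (for example, by requiring an upstream Shine-Dalgarno-like sequence), but the in-frame scan is the core of the catch-codon logic.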
Reverse transcription was quenched and RNA degraded by the addition of 1 μL of 10 M NaOH and incubation for at least 15 min at 37 °C, followed by neutralization with 0.82 μL of 12 M HCl. 20 μL toe-print resuspension buffer and 200 μL PN1 buffer were added to each reaction before treatment with a QIAquick Nucleotide Removal Kit (Qiagen). The Alexa647-labeled DNA was then eluted from the QIAquick columns with 80 μL of nuclease-free water. A vacuum concentrator was used to evaporate the solvent, and the Alexa647-labeled DNA was then dissolved in 3.5 μL of formamide dye. The samples were heated to 95 °C for 5 min before being applied onto a 6% polyacrylamide (19:1) sequencing gel containing 7 M urea. Gel electrophoresis was performed at 40 W and 2,000 V for 2 h. The GE Typhoon FLA9500 imaging system was subsequently used to scan the polyacrylamide gel. Disome formation assay. The disome formation assay was performed as described previously 16,23. Briefly, in vitro translation of the 2xermBL construct was performed with the Rapid Translation System RTS 100 E. coli HY Kit (Roche). Translations were carried out for 1 h at 30 °C and then analyzed on 10-55% sucrose-density gradients (in a buffer containing 50 mM HEPES-KOH, pH 7.4, 100 mM KOAc, 25 mM Mg(OAc)2 and 6 mM β-mercaptoethanol) by centrifugation at 154,693g (SW-40 Ti, Beckman Coulter) for 2.5 h at 4 °C.
Nucleic Acids Research, 2015
INTRODUCTION Antimicrobial peptides (AMPs) represent a large and diverse group of molecules that form part of the innate immune response of a variety of invertebrate, plant and animal species (1). While many AMPs kill bacteria by disrupting the bacterial cell membrane, there is growing evidence that some AMPs have intracellular targets (1). Members of one such class of non-membranolytic peptides are referred to as proline-rich AMPs (PrAMPs) and are present in the hemolymph of several species of insects and crustaceans, as well as in the neutrophils of many mammals (2). PrAMPs exhibit potent antimicrobial activity against a broad range of bacteria, especially Gram-negative bacteria, and are therefore considered potential lead candidates for the development of therapeutic antimicrobial agents (3). Well-characterized insect PrAMPs include the apidaecins produced by bees (Apis mellifera) and wasps (Vespidae), pyrrhocoricin from firebugs (Pyrrhocoris apterus), drosocins from fruit flies (Drosophila), metalnikowins from the green shield bug (Palomena prasina) and oncocins from the milkweed bug (Oncopeltus fasciatus) (2,4,5). PrAMPs are synthesized as inactive precursors, which undergo proteolysis to release the active peptide. In contrast to the active insect peptides, which are generally <21 amino acids in length, the active mammalian mature forms tend to be much longer; for example, porcine PR-39 is 39 residues long, whereas bovine bactenecin-7 (Bac7), which is also found in sheep and goats, is 60 residues long (2). Nevertheless, C-terminally truncated versions of the mammalian PrAMPs retain antimicrobial activity (6-9) and exhibit high sequence similarity with the insect PrAMPs. Indeed, the Bac7(1-16) and Bac7(1-35) derivatives, corresponding to the first 16 and 35 residues of Bac7, respectively, display similar, if not improved, antimicrobial activities compared to the full-length processed Bac7 peptide (6,10,11). For instance, Bac7 reduces mortality from Salmonella typhimurium in a mouse model of infection (12) as well as in a rat model of septic shock (13). The insect-derived PrAMPs apidaecin and oncocin, as well as the mammalian Bac7, penetrate the bacterial cell membrane mainly via the SbmA transporter present in many Gram-negative bacteria (10,14). Early studies identified interactions between both insect and mammalian PrAMPs and DnaK, suggesting that this molecular chaperone was the common intracellular target (2,15).
However, subsequent studies questioned the relevance of this interaction by demonstrating that these PrAMPs also display an equally potent antimicrobial activity against bacterial strains lacking the dnaK gene (16-18). Instead, apidaecin, oncocin and Bac7 were shown to bind to the ribosome and inhibit translation (17,19). Subsequent crystal structures of the oncocin derivative Onc112 in complex with the bacterial 70S ribosome revealed that this peptide binds with a reverse orientation in the ribosomal tunnel and blocks binding of the aminoacyl-tRNA to the A-site (20,21). However, there are no crystal structures to date of a mammalian PrAMP in complex with the ribosome. Here we present 2.8-2.9 Å resolution X-ray structures of the Thermus thermophilus 70S (Tth70S) ribosome in complex with either the mammalian Bac7 derivative Bac7(1-16) or the insect-derived PrAMPs metalnikowin I or pyrrhocoricin. The structures reveal that Bac7(1-16), metalnikowin I and pyrrhocoricin bind within the ribosomal tunnel with a reverse orientation compared to a nascent polypeptide chain, as observed previously for oncocin (20,21). In contrast to the insect PrAMPs oncocin, metalnikowin I and pyrrhocoricin, the mammalian Bac7(1-16) utilizes multiple arginine side chains to establish stacking interactions with exposed nucleotide bases of the rRNA, and we show that its unique N-terminal RIRR motif is critical for inhibiting translation. Like oncocin, metalnikowin I and pyrrhocoricin, the binding site of Bac7 overlaps with that of the A-tRNA, consistent with our biochemical studies indicating that Bac7(1-16) allows 70S initiation complex formation, but prevents subsequent rounds of translation elongation.
Furthermore, we demonstrate that Bac7 displays activity in a mammalian in vitro translation system, providing a possible explanation for why Bac7 is produced as a pre-pro-peptide that is targeted to large granules and phagosomes, thus avoiding direct contact between the active peptide and the mammalian ribosome. MATERIALS AND METHODS Peptide synthesis and purification The Bac7 N-terminal fragments Bac7(1-16; RRIRPRPPRLPRPRPR), Bac7(1-35; RRIRPRPPRLPRPRPRPLPFPRPGPRPIPRPLPFP) and Bac7(5-35; PRPPRLPRPRPRPLPFPRPGPRPIPRPLPFP) were synthesized on solid phase and purified by reversed-phase HPLC as described previously (22). Their concentrations were determined as reported previously (4). All peptides, with a purity of at least 95%, were stored in milliQ water at -80 °C until use. The Onc112 peptide was obtained from an earlier study (21). Metalnikowin I (VDKPDYRPRPRPPNM) and pyrrhocoricin (VDKGSYLPRPTPPRPIYNRN) were synthesized to 97.5 and 98.1% purity, respectively, by NovoPro Bioscience (China). Purification of T. thermophilus 70S ribosomes Tth70S ribosomes were purified as described earlier (23) and resuspended in buffer containing 5 mM HEPES-KOH, pH 7.5, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2 to yield a final concentration of ∼30 mg/ml. Tth70S ribosomes were flash frozen in liquid nitrogen and kept at -80 °C for storage. Preparation of mRNA, tRNAiMet and YfiA Synthetic mRNA containing a Shine-Dalgarno sequence and an AUG start codon followed by a phenylalanine codon (5′-GGC AAG GAG GUA AAA AUG UUC UAA-3′) was purchased from Eurogentec. Escherichia coli tRNAiMet was overexpressed in E. coli HB101 cells and purified as described previously (24). YfiA was overexpressed in BL21 Star cells and purified as described previously (25).
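The proline/arginine bias that defines these peptides as "proline-rich" is easy to tabulate from the sequences listed above; a small sketch using the Bac7(1-16) sequence given in the text:

```python
# Sketch: proline/arginine content of a PrAMP fragment, the property that
# defines "proline-rich" AMPs. Sequence taken from the peptide list above.
def composition(seq, residues="PR"):
    """Return the fraction of each residue of interest in `seq`."""
    return {r: seq.count(r) / len(seq) for r in residues}

bac7_1_16 = "RRIRPRPPRLPRPRPR"  # Bac7(1-16), as listed in the methods
print(composition(bac7_1_16))  # → {'P': 0.375, 'R': 0.5}
```

Only two of the sixteen residues (Ile3 and Leu10) are neither Pro nor Arg, which is consistent with the arginine-mediated stacking interactions highlighted for Bac7(1-16) in the introduction.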
Complex formation

A quaternary complex containing Tth70S ribosomes, mRNA, deacylated tRNAiMet and Bac7(1-16) peptide was prepared by mixing 5 μM Tth70S ribosomes with 10 μM mRNA and 50 μM Bac7(1-16), and incubating at 55°C for 10 min. After addition of 20 μM tRNAiMet, the mixture was incubated at 37°C for 10 min. The sample was then incubated at room temperature for at least 15 min and centrifuged briefly prior to use. Ternary complexes containing 50 μM metalnikowin I or pyrrhocoricin, 5 μM Tth70S ribosomes and 50 μM YfiA were formed by incubation for 30 min at room temperature. The final buffer conditions were 5 mM HEPES-KOH, pH 7.6, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2.

Crystallization

Published conditions were used as a starting point for screening crystallization conditions by vapour diffusion in sitting-drop trays at 20°C (23,26). Crystallization drops consisted of 3 μl of quaternary or ternary complexes and 3-4 μl of reservoir solution containing 100 mM Tris-HCl, pH 7.6, 2.9% (v/v) PEG 20,000, 7-10% (v/v) 2-methyl-2,4-pentanediol (MPD) and 175 mM arginine. Crystals appeared within 2-3 days and grew to ∼1000 × 100 × 100 μm within 7-8 days. For cryoprotection, the concentration of MPD was increased in a stepwise manner to yield a final concentration of 40% (v/v). The ionic composition during cryoprotection was 100 mM Tris-HCl, pH 7.6, 2.9% (v/v) PEG 20,000, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2. Crystals were flash frozen in a nitrogen cryostream at 80 K for subsequent data collection.

Nucleic Acids Research, 2015

Data collection and processing

Diffraction data for Bac7(1-16) were collected at PROXIMA-2A, a beamline at the SOLEIL synchrotron (Saclay, France) equipped with an ADSC Q315 detector.
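The mixing steps above all reduce to simple C1V1 = C2V2 dilution arithmetic. The sketch below is purely illustrative: the stock concentrations and drop volume in the example are hypothetical assumptions, not values stated in the Methods.

```python
def stock_volume_ul(c_final_uM, c_stock_uM, v_final_ul):
    """Volume of stock (ul) that gives c_final_uM in v_final_ul (C1V1 = C2V2)."""
    if c_final_uM > c_stock_uM:
        raise ValueError("stock is not concentrated enough")
    return c_final_uM * v_final_ul / c_stock_uM

# Hypothetical example: to reach the 50 uM Bac7(1-16) used in complex
# formation within an assumed 20 ul mix, from an assumed 500 uM stock:
bac7_ul = stock_volume_ul(50, 500, 20)  # 2.0 ul of peptide stock
```

The same helper applies to the ribosome, mRNA and tRNA components once their stock concentrations are known.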
A complete dataset was obtained by merging 0.25° oscillation data collected at 100 K with a wavelength of 0.98011 Å from multiple regions of the same crystal. Diffraction data for metalnikowin I and pyrrhocoricin were collected at PROXIMA-1, a beamline at the SOLEIL synchrotron equipped with a DECTRIS PILATUS 6M detector. Complete datasets were obtained by merging 0.1° oscillation data collected at 100 K with a wavelength of 0.97857 Å from multiple regions of the crystal. Initial data processing, including integration and scaling, was performed with X-ray Detector Software (XDS) (27). The data could be indexed in the P212121 space group, with unit-cell dimensions approximating 210 × 450 × 625 Å and an asymmetric unit containing two copies of the Tth70S ribosome.

Model building and refinement

Initial phases were obtained by molecular replacement performed with Phaser (28). The search model was obtained from a high-resolution structure of the Tth70S ribosome (PDB ID: 4Y4O) (29), where the RNA backbone had been further improved with the ERRASER-Phenix pipeline (30), using the deposited structure factors. Restrained crystallographic refinement was carried out with Phenix (31) and consisted of a single cycle of rigid-body refinement followed by multiple cycles of positional and individual B-factor refinement. Rigid bodies comprised four domains from the small 30S subunit (head, body, spur and helix h44) and three domains from the large 50S subunit (body, L1 stalk and the C terminus of ribosomal protein L9). Non-crystallographic symmetry restraints between the two copies of the Tth70S ribosome in the asymmetric unit were also applied during refinement. After confirming that a single tRNA was bound to the P site or that YfiA was present at the decoding center, and that additional density corresponding to the PrAMPs was visible within the exit tunnel in a minimally biased Fo-Fc map, models of the corresponding PrAMPs were built in Coot (32).
The models for the tRNA and mRNA were obtained from a high-resolution structure of the Tth70S ribosome pre-attack complex (PDB ID: 1VY4). The model for YfiA was obtained from a high-resolution Tth70S ribosome structure (PDB ID: 4Y4O). Further refinement and model validation were carried out in Phenix (31) and on the MolProbity server (33), respectively. In the final models, 0.56-0.95% of protein residues were classified as Ramachandran outliers, and 92.4-94.3% had favourable backbone conformations (Supplementary Table S1). Coordinates and structure factors have been deposited in the Protein Data Bank under accession codes 5F8K (Bac7(1-16)), 5FDU (metalnikowin I) and 5FDV (pyrrhocoricin).

In vitro translation assays

Escherichia coli lysate-based transcription-translation coupled assays (RTS100, 5Prime) were performed as described previously for other translational inhibitors (34). Briefly, 6 μl reactions, with or without PrAMP, were mixed according to the manufacturer's description and incubated for 1 h at 30°C with shaking (750 rpm). A total of 0.5 μl of each reaction was stopped with 7.5 μl kanamycin (50 μg/μl). The effect of Bac7 on eukaryotic translation was determined using the Rabbit Reticulocyte Lysate System (Promega). A total of 6 μl reactions, with or without Bac7, were mixed according to the manufacturer's description and incubated for 1 h at 30°C with shaking (300 rpm). A total of 5 μl of each reaction was stopped in 5 μl kanamycin (50 μg/μl). All samples were diluted with 40 μl of Luciferase assay substrate (Promega) into a white 96-well chimney flat bottom microtiter plate (Greiner). The luminescence was then measured using a Tecan Infinite M1000 plate reader. Relative values were determined by defining the luminescence value of the sample without inhibitor as 100%.
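The normalization described in the last step is a one-line calculation; a generic sketch of it (not code from the study) follows, defining the uninhibited luminescence as 100%:

```python
def relative_activity_pct(lum_values, lum_no_inhibitor):
    """Express raw luminescence readings as % of the no-inhibitor control,
    which is defined as 100% translation activity."""
    return [100.0 * v / lum_no_inhibitor for v in lum_values]

# Invented readings: 8000 counts for the control and 2000 counts for an
# inhibited reaction correspond to 100% and 25% residual activity.
example = relative_activity_pct([8000, 2000], 8000)
```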
Toe-printing assay

The position of the ribosome on the mRNA was monitored with a toe-printing assay (35) based on an in vitro-coupled transcription-translation system with the PURExpress in vitro protein synthesis kit (NEB), as described previously (21,36). Briefly, each translation reaction consisted of 1 μl solution A, 0.5 μl amino acid mixture lacking isoleucine, 0.5 μl tRNA mixture, 1.5 μl solution B, 0.5 μl (0.5 pmol) hns37aa template (5′-ATTAATACGACTCACTATAGGGATATAAGGAGGAAAACATatgAGCGAAGCACTTAAAattCTGAACAACCTGCGTACTCTTCGTGCGCAGGCAAGACCGCCGCCGCTTGAAACGCTGGAAGAAATGCTGGAAAAATTAGAAGTTGTTGTTtaaGTGATAGAATTCTATCGTTAATAAGCAAAATTCATTATAAC-3′, with start codon ATG, catch isoleucine codon ATT and stop codon TAA in bold, the hns37aa ORF underlined and the toe-print primer binding site in italics) and 0.5 μl additional agents (nuclease-free water; water-dissolved Bac7(1-35), Bac7(1-16) or Bac7(5-35) at 1, 10 or 100 μM final concentration; or antibiotics at 100 μM thiostrepton, 50 μM edeine or 50 μM clindamycin final concentration). Translation was performed in the absence of isoleucine at 37°C for 15 min at 500 rpm in 1.5 ml reaction tubes. After translation, 2 pmol Alexa647-labelled NV-1 toe-print primer (5′-GGTTATAATGAATTTTGCTTATTAAC-3′) was added to each reaction. Reverse transcription was performed with 0.5 μl of AMV RT (NEB), 0.1 μl dNTP mix (10 mM) and 0.4 μl Pure System Buffer and incubated at 37°C for 20 min. Reverse transcription was quenched and RNA degraded by addition of 1 μl 10 M NaOH and incubation for at least 15 min at 37°C, followed by neutralization with 0.82 μl of 12 M HCl. 20 μl toe-print resuspension buffer and 200 μl PN1 buffer were added to each reaction before treatment with a QIAquick Nucleotide Removal Kit (Qiagen). The Alexa647-labelled DNA was then eluted from the QIAquick columns with 80 μl of nuclease-free water.
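As a sanity check on the toe-printing readout, the expected reverse-transcriptase stop sites can be computed from the template: ribosomes with a given codon in the P-site typically yield a toe-print roughly 15-16 nt downstream of the first nucleotide of that codon. The sketch below uses the beginning of the hns37aa ORF given above; the +16 offset is the common convention in the toe-printing literature, not a value stated in this paper.

```python
# First codons of the hns37aa ORF from the template above.
HNS37AA_ORF = "ATGAGCGAAGCACTTAAAATTCTG"

def ile_stall_codons(orf):
    """0-based nucleotide positions of in-frame ATT (Ile) codons, where
    ribosomes stall when isoleucine is withheld from the reaction."""
    return [i for i in range(0, len(orf) - 2, 3) if orf[i:i + 3] == "ATT"]

def toeprint_stop(codon_start, offset=16):
    """Expected RT stop position, assuming the canonical ~+16 nt offset
    from the first nucleotide of the P-site codon (an assumption here)."""
    return codon_start + offset
```

For this ORF the catch isoleucine codon sits at nucleotide 18, so initiation and elongation-stalled toe-prints are separated by 18 nt on the gel.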
A vacuum concentrator was used to evaporate the solvent, and the Alexa647-labelled DNA was then dissolved in 3.5 μl of formamide dye. The samples were heated to 95°C for 5 min before being applied onto a 6% polyacrylamide (19:1) sequencing gel containing 7 M urea. Gel electrophoresis was performed at 40 W and 2000 V for 2 h. The GE Typhoon FLA9500 imaging system was subsequently used to scan the polyacrylamide gel.

Filter binding assay

Filter binding assays were performed as described previously (34,37). Briefly, 3 pmol of 70S ribosomes purified from the BL21 E. coli strain were exposed to 30 pmol of radiolabelled [14C]-erythromycin (Perkin Elmer; 110 dpm/pmol) in the presence of 1× filter binding buffer (10 mM HEPES/KOH [pH 7.4], 30 mM MgCl2, 150 mM NH4Cl and 6 mM β-mercaptoethanol) for 15 min at 37°C. Our controls indicated that approximately 65% of the 70S ribosomes (2 pmol) contained [14C]-erythromycin prior to the addition of the different PrAMPs. The PrAMPs were diluted in nuclease-free water to concentrations of 1 mM, 100 μM and 10 μM. 2 μl of each PrAMP stock dilution (Onc112, Bac7(1-35), Bac7(1-16) and Bac7(5-35)) were transferred to the respective tube, resulting in final concentrations of 100, 10 and 1 μM. Reactions were incubated for an additional 25 min at 37°C. Afterwards, the 20 μl samples were passed through a HA-type nitrocellulose filter from Millipore (0.45 μm pore size) and the filter was subsequently washed three times with 1 ml 1× filter binding buffer. Scintillation counting was performed in the presence of Rotiszint® eco plus scintillant. All reactions were performed in duplicate and results were analysed using GraphPad Prism 5. Error bars represent the standard deviation from the mean.

Disome formation assay

The disome formation assay was performed as described previously (38,39). Briefly, in vitro translation of the 2XermBL construct was performed using the Rapid Translation System RTS 100 E.
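Converting scintillation counts into pmol of retained erythromycin, and from there into a competition percentage, is simple arithmetic given the 110 dpm/pmol specific activity quoted above. The sketch assumes 100% counting efficiency, which is an idealization (real efficiencies for 14C are lower).

```python
SPECIFIC_ACTIVITY = 110.0  # dpm per pmol of [14C]-erythromycin (from the assay)

def pmol_bound(dpm):
    """pmol of [14C]-erythromycin retained on the filter, assuming
    counts are already corrected to disintegrations per minute."""
    return dpm / SPECIFIC_ACTIVITY

def percent_competed(dpm_with_peptide, dpm_control):
    """Fraction of erythromycin binding abolished by a PrAMP, relative
    to the no-peptide control (e.g. the ~2 pmol bound in the controls)."""
    return 100.0 * (1.0 - dpm_with_peptide / dpm_control)
```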
coli HY Kit (Roche). Translations were carried out for 1 h at 30°C and then analysed on 10-55% sucrose density gradients (in a buffer containing 50 mM HEPES-KOH, pH 7.4, 100 mM KOAc, 25 mM Mg(OAc)2, 6 mM β-mercaptoethanol) by centrifugation at 154,693 × g (SW-40 Ti, Beckman Coulter) for 2.5 h at 4°C.

RESULTS

The N-terminus of Bac7 adopts a compact conformation

We obtained a structure referred to here as Tth70S-Bac7 from co-crystals of Tth70S ribosomes in complex with deacylated tRNAiMet, a short mRNA and Bac7(1-16) (Supplementary Table S1). In addition, we obtained two additional structures, Tth70S-MetI and Tth70S-Pyr, from co-crystals of Tth70S ribosomes in complex with YfiA and either metalnikowin I or pyrrhocoricin, respectively (Supplementary Table S1). The quality of the electron density in the minimally biased Fo-Fc difference maps calculated after refinement of a model comprising Tth70S ribosomes and tRNAiMet/mRNA or YfiA made it possible to build a model for the entire Bac7(1-16; RRIRPRPPRLPRPRPR), the first 10 (of 15; VDKPDYRPRPRPPNM) residues of metalnikowin I (MetI) and the first 16 (of 20; VDKGSYLPRPTPPRPIYNRN) residues of pyrrhocoricin (Pyr), as well as to position several neighbouring solvent molecules (Supplementary Figure S1). Like the insect-derived Onc112 peptide (20,21), MetI, Pyr and Bac7(1-16) all bind to the ribosomal exit tunnel in a reverse orientation relative to the nascent polypeptide chain and make extensive interactions with three distinct regions of the large 50S ribosomal subunit: the A-tRNA binding pocket, the A-site crevice and the upper section of the nascent polypeptide exit tunnel (Figure 1A, B and Supplementary Figure S1).
A nearly identical, extended backbone conformation is seen for residues 7-13 of Bac7(1-16) and residues 4-10 of Onc112, MetI and Pyr, with Arg9 of Bac7(1-16) substituting for Tyr6 of Onc112, MetI and Pyr (Figure 1B). The structural similarity, however, does not extend to the N-terminus of Bac7(1-16), where the first six residues adopt a structure that deviates substantially from that of the shorter N-terminus of the insect-derived PrAMPs. Indeed, arginine residues within this region are arranged such that the side chain of Arg6 is sandwiched between the side chains of Arg2 and Arg4 to form a compact, positively charged structure (Figure 1A and B). The binding site of Bac7(1-16) suggests that the additional C-terminal residues of Bac7(1-35) and of the full-length Bac7 (60 residues) would occupy the entire length of the ribosomal tunnel. Consistently, a photocrosslinkable derivative of Bac7 has been crosslinked to two ribosomal proteins of ∼16 and 25 kDa (19), which we suggest to be L22 and L4, respectively, based on their size and close proximity to the Bac7(1-16) binding site (Supplementary Figure S2). Compared to Onc112 and MetI, additional density for the C-terminal PRPR motif (residues 13-16) of Pyr is observed extending deeper into the tunnel (Figure 1 and Supplementary Figure S1). With the exception of Arg14, for which no density is observed, the PRPR motif is quite well ordered despite not forming any obvious direct interactions with the ribosome.
Bac7 makes extensive interactions with the 50S ribosomal subunit

As with Onc112 (20,21), binding of Bac7(1-16) to the ribosome is accompanied by an induced fit involving 23S rRNA residues A2062, U2506 and U2585 (Supplementary Figure S3A; E. coli numbering is used throughout this work for the 23S rRNA), such that the base of this last nucleotide occupies a position that would normally clash with the formylmethionyl moiety of fMet-tRNAiMet bound to the P-site of an initiation complex (Supplementary Figure S3B). Three modes of interaction are observed between Bac7(1-16) and the large 50S ribosomal subunit (Figure 2A-E). First, the N-terminal region of Bac7(1-16) forms multiple hydrogen bonds and salt bridges with the A-tRNA binding pocket of the ribosome (Figure 2A and B). In particular, the compact structure formed by Arg2, Arg4 and Arg6 provides a positively charged N-terminal anchor that displaces two magnesium ions from a deep groove lined by 23S rRNA residues C2452, A2453 and G2454 on one side, and residues U2493 and G2494 on the other (Figure 2B). This groove differs from the standard A-form RNA major groove in that it occurs between two unpaired, antiparallel strands of the 23S rRNA. Consequently, the compact arginine structure at the N-terminus of Bac7(1-16) is ideally sized and shaped to fit into this groove, and the resulting interaction is likely to be specific in spite of its simple electrostatic nature.
Further contacts in this region are likely to increase the specificity of Bac7(1-16) for the ribosome, such as the two hydrogen bonds between the side chain of Arg1 and 23S rRNA residues U2555 and C2556, and four hydrogen bonds between the backbone of Bac7(1-16) residues Arg2-Arg4 and 23S rRNA residues U2492, U2493 and C2573 (Figure 2A). Second, the unusually high arginine (50%) and proline (37.5%) content of Bac7(1-16) restricts the types of contacts that this peptide can establish with the ribosome. This results in π-stacking interactions between the side chains of Arg2, Arg9, Arg12, Arg14 and Arg16 and exposed bases of 23S rRNA residues C2573, C2452/U2504, C2610, C2586 and A2062, respectively. Additional rigidity within the peptide is provided through the packing of Arg1 against Ile3 and Arg9 against Leu10, and through the compact arginine stack described above (Figure 2C). Third, numerous possible hydrogen bonds can be established between the backbone of Bac7(1-16) and the ribosome (Figure 2A, D and E), including many indirect interactions via ordered solvent molecules (Figure 2D and E). Many of the water-mediated contacts suggested for Tth70S-Bac7 are likely to occur with oncocin, even though the lower resolution of the earlier Tth70S-Onc112 structures precluded the modelling of any water molecules (20,21). In addition, interactions such as those between 23S rRNA residue U2506 and the backbone of Bac7(1-16) residues Arg9 and Leu10 were also proposed to occur between the Onc112 peptide and the ribosome (20,21).
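The arginine and proline fractions quoted above can be verified directly from the Bac7(1-16) sequence given earlier; a minimal sketch:

```python
BAC7_1_16 = "RRIRPRPPRLPRPRPR"  # Bac7(1-16) in one-letter code

def residue_fraction(seq, aa):
    """Fraction of residues in a one-letter peptide sequence equal to aa."""
    return seq.count(aa) / len(seq)

# 8 of 16 residues are arginine (50%) and 6 of 16 are proline (37.5%),
# matching the composition stated in the text.
```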
Bac7 and Onc112 compete with erythromycin for ribosome binding

The C-terminal residues 12-16 of Bac7(1-16) overlap with the binding site of the macrolide antibiotic erythromycin on the bacterial ribosome (40,41), in particular with the region occupied by the cladinose sugar and part of the lactone ring (Figure 3A). Consistently, we could demonstrate that Bac7(1-16) and Bac7(1-35) efficiently compete with the binding of radiolabelled erythromycin to the 70S ribosome (Figure 3B). Similarly, Onc112 also efficiently competed with erythromycin (Figure 3B), as expected based on the similarity in binding mode with the ribosome for these regions of Onc112 and Bac7 (Figure 1B). In contrast, Bac7(5-35) was a poor competitor of erythromycin (Figure 3B), indicating that the highly cationic N-terminus of Bac7 and its interaction with the A-tRNA binding pocket (Figure 2B) are important for high-affinity binding of Bac7 to the ribosome. Indeed, Bac7 derivatives lacking the first four N-terminal residues (RRIR), Bac7(5-35) and Bac7(5-23), exhibit dramatically increased minimal inhibitory concentrations (MICs) against Gram-negative strains, such as E. coli, as well as Salmonella typhimurium (6). We note, however, that the internalization of Bac7(5-35) into bacteria is also reduced, indicating that the N-terminal RRIR motif additionally plays an important role in cell penetration (11).

Bac7 allows initiation, but prevents translation elongation

Consistent with the erythromycin binding assays and in agreement with previous results (Figure 4A) (19), we observed that Bac7(1-35) inhibits the production of luciferase with an IC50 of 1 μM in an E. coli in vitro translation system, similar to MetI and Pyr (Supplementary Figure S1), as well as to that observed previously for Onc112 (20,21).
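IC50 values like those reported here can be estimated from a normalized dose-response series without specialized software. The sketch below interpolates the 50% crossing in log-concentration space; this is a simplification of the full four-parameter logistic fit performed by packages such as GraphPad Prism, and the example data are invented for illustration.

```python
import math

def ic50_uM(concs_uM, residual_pct):
    """Estimate the IC50 by log-linear interpolation between the two
    doses bracketing 50% residual translation activity."""
    pts = sorted(zip(concs_uM, residual_pct))
    for (c1, a1), (c2, a2) in zip(pts, pts[1:]):
        if (a1 - 50.0) * (a2 - 50.0) <= 0:  # 50% lies between these doses
            f = (a1 - 50.0) / (a1 - a2)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% residual activity not bracketed by the data")

# Invented series crossing 50% at 1 uM:
# ic50_uM([0.1, 1.0, 10.0], [90.0, 50.0, 10.0]) -> 1.0
```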
Bac7(1-16) was an equally potent inhibitor as Bac7(1-35), consistent with the similar MICs observed for these two derivatives (6,10,11). In contrast, Bac7(5-35) inhibited in vitro translation with an IC50 of 10 μM, i.e. 10-fold higher than observed for Bac7(1-16) or Bac7(1-35), indicating that the reduced affinity for the ribosome, together with reduced cellular uptake (11), results in the higher MIC of the Bac7(5-35) derivative (6,42). Next, we investigated the mechanism of inhibition by Bac7 using two in vitro translation assays. First, we compared the effect of Bac7(1-35) and Bac7(5-35) on the stabilization of disomes formed upon the stalling of ribosomes on a dicistronic mRNA (in this case 2XErmBL mRNA), as measured by sucrose gradient centrifugation (21,38,39). In the absence of inhibitor, the majority of ribosomes are present as 70S monosomes (control in Figure 4B), whereas the presence of erythromycin leads to translational arrest of the ribosomes on both cistrons of the 2XErmBL mRNA, thereby generating the expected disome peaks (Ery in Figure 4B). Consistent with the in vitro translation assays (Figure 4A), translation inhibition and thus disome formation was observed in the presence of 10 μM Bac7(1-35), whereas even 100 μM of Bac7(5-35) did not produce significant disomes (Figure 4B). These findings suggest that Bac7(1-35), but not Bac7(5-35), stabilizes an arrested ribosome complex, as observed previously for Onc112 (21). Second, to monitor the exact site of translation inhibition of the Bac7 derivatives, we employed a toe-printing assay, which uses reverse transcription from the 3′ end of an mRNA to determine the exact location of the ribosomes that are translating it (35).
In the absence of inhibitor, ribosomes initiated at the AUG start codon of the mRNA, translated through the open reading frame and ultimately became stalled on an isoleucine codon (Figure 4C) due to the omission of isoleucine from the translation mix. In the presence of thiostrepton or clindamycin, ribosomes accumulated at the AUG codon (Figure 4C), since these antibiotics prevent delivery and/or accommodation of aminoacyl-tRNA at the A-site directly following initiation (43). Similar results were observed when using the Bac7(1-35) and Bac7(1-16) derivatives, such that complete inhibition of translation elongation was observed at a peptide concentration of 10 μM (Figure 4C). These findings suggest that, like Onc112 (21), Bac7 allows subunit joining and fMet-tRNAiMet binding, but prevents accommodation of the first aminoacyl-tRNA at the A-site, as suggested by the overlap in the binding site of Bac7(1-16) and the CCA-end of an A-tRNA (Figure 4D). Curiously, the toeprint for ribosomes stalled during initiation became weaker at 100 μM of Bac7(1-16) and Bac7(1-35) and the signal for the full-length mRNA became stronger, similar to the effect observed when the antibiotic edeine was used (Figure 4C). Edeine prevents 70S initiation complex formation by destabilizing fMet-tRNAiMet binding to the 30S subunit (43).

[Figure 4 legend, fragment: ... Bac7(1-35), Bac7(1-16) or Bac7(5-35), or 100 μM thiostrepton (Ths), 50 μM edeine (Ede) or 50 μM clindamycin (Cli). Sequencing lanes for C, U, A and G and the sequence surrounding the toe-print bands (arrowed) when ribosomes accumulate at the AUG start codon (green, initiation complex) or the isoleucine codon (blue, stalled elongation complex) are included for reference. (D) Structural comparison of Phe-tRNAPhe (slate) in the A-site and fMet-tRNAiMet in the P-site (blue) (26) with the binding site of Bac7(1-16) (green).]
Thus, Bac7 may have a similar effect when high cytosolic concentrations are achieved through active uptake into the cell, possibly due to the presence of non-specific interactions with the ribosome. In contrast to Bac7(1-16) and Bac7(1-35), Bac7(5-35) only stabilized the initiation complex at a much higher concentration (100 μM) (Figure 4C). This is consistent with a reduced affinity of Bac7(5-35) for the ribosome and reinforces the critical role played by the first four residues of Bac7 in its inhibitory activity (Figure 1A) (6,42).

Bac7 inhibits eukaryotic translation in vitro

Bac7 is internalized by mammalian cells (42,44), yet no toxicity has been observed, even at concentrations well above those effective against microbes (12,13,42), raising the question as to whether Bac7 binds to eukaryotic cytosolic ribosomes. A comparison of the binding site of Bac7(1-16) on the bacterial 70S ribosome with the equivalent region of a mammalian 80S ribosome reveals that the rRNA nucleotide sequence is highly conserved. Structurally, the conformation of three 25S rRNA nucleotides, C4519 (C2573), U4452 (U2506) and A3908 (A2602), would be expected to preclude Bac7(1-16) from binding to the mammalian ribosome (Figure 5A). Nevertheless, these nucleotides are highly mobile and adopt different conformations depending on the functional state of the ribosome (26,39,45,46), suggesting that conformational rearrangements of these nucleotides could allow Bac7(1-16) binding. Indeed, we observed that increasing concentrations of Bac7 inhibited in vitro translation using a rabbit reticulocyte system (Figure 5B). Bac7 exhibited an IC50 of 2.5 μM, only 2.5-fold higher than that observed in the E. coli in vitro translation system (Figure 5B). The excellent inhibitory activity of Bac7 on mammalian ribosomes, combined with its lack of toxicity on mammalian cells (42), would be consistent with a mechanism of internalization via an endocytotic process (42) to ensure that Bac7 minimizes contact with the mammalian cytosolic ribosomes.

[Figure 5 legend, fragment: ... (47) on the basis of the 23S and 25S rRNA chains in the corresponding structures, with inset illustrating three rRNA nucleotides whose conformation differs in the 80S (grey) and Tth70S-Bac7 (yellow) structures. (B) Effect of increasing concentrations of Bac7 on the luminescence resulting from the in vitro translation of firefly luciferase (Fluc) using an Escherichia coli lysate-based system (red) or rabbit reticulocyte-based system (black). The error bars represent the standard deviation from the mean for triplicate experiments and the fluorescence is normalized relative to that measured in the absence of peptide, which was assigned as 100%. (C) Model for the targeting of proBac7 to large granules and its processing by elastase to yield active Bac7 peptide. The latter is transported through the bacterial inner membrane by the SbmA transporter and binds within the tunnel of bacterial ribosomes to inhibit translation.]

DISCUSSION

Our finding that Bac7 is active against eukaryotic translation, together with the current literature, allows us to present a model that explains how and why the mammalian cell prevents the active Bac7 peptide from being present in the cytoplasm (Figure 5C). Bac7 is produced by immature myeloid cells as a pre-pro-Bac7 precursor that is targeted to large granules, where it is stored as pro-Bac7 in differentiated neutrophils (Zanetti et al.).
The inactive pro-Bac7 is cleaved by elastase, a serine protease that is present in azurophil granules, either upon (A) fusion with the phagosome, or (B) exocytosis and release into the extracellular matrix (Figure 5C) (Zanetti et al.; Scocchi et al.). The resulting activated Bac7 peptide can then enter the bacterial cell through the SbmA transporter (10), where it subsequently binds to the ribosome to inhibit translation (Figure 5C) (19). Our structure of the Tth70S-Bac7 complex reveals specifically how Bac7 interacts with the bacterial ribosome (Figures 1 and 2) and inhibits translation by allowing initiation but preventing translation elongation (Figure 3). Although the overall mechanism of action of Bac7 is similar to that of insect-derived AMPs like oncocin (20,21), the high arginine content of Bac7 leads to a distinct mode of binding to the ribosome, namely through electrostatic and stacking interactions with the backbone and bases of 23S rRNA nucleotides, respectively (Figure 2C). It will be interesting to see whether such interactions are the basis for the translational arrest that has been observed when the ribosome translates a nascent polypeptide chain bearing positively charged arginine residues (Dimitrova et al.; Lu et al.).

[Figure legend, fragment: ... (B) Clash between the formyl-methionyl moiety of a P-site bound fMet-tRNAiMet (blue) and 23S rRNA residue U2585 in its Bac7-bound conformation (yellow). Bac7(1-16) is shown as a green Cα-trace in both panels.]
Results

A novel strategy for the structural characterization of arrested ribosomal complexes featuring short nascent peptides

The second part of this thesis focused on understanding the underlying mechanism of short peptides that mediate nascent chain-dependent translational arrest. Previously, the M+X(+) motif was reported to arrest the ribosome in the presence of erythromycin (Sothiselvam et al.), while poly-proline motifs arrest the ribosome without any additional ligands (Doerfel et al.; Ude et al.). So far, however, no structural information had been obtained for these short motifs. Long nascent chain-mediated translational arrest peptides, like SecM (Bhushan et al.; Zhang et al.), VemP (Su et al.), MifM (Sohmen et al.), TnaC (Bischoff et al., 2014; Seidelt et al.), ErmBL (Arenz et al., 2016; Arenz et al.) and ErmCL (Arenz et al., 2014a), have previously been studied by cryo-EM.
The nascent chain-mediated translationally arrested ribosome complexes have been obtained using different strategies, depending on the length of the arrest peptide. Arrest peptides including SecM, VemP, MifM and TnaC are long peptides whose N-terminus exits the ribosomal tunnel, and they arrest the ribosome independently of their N-terminal sequence. The N-terminus was fused to a His8 (H8) tag, which allowed purification after in vitro translation (Sohmen et al.; Su et al.; Zhang et al.) or purification of the arrest complex from cells (Bischoff et al., 2014). ErmBL and ErmCL are shorter arrest peptides in which the nascent chain does not reach the exit of the ribosomal tunnel, so they cannot be purified using the H8 tag. To still allow solving the structure, the disome approach was developed (Arenz et al., 2014a; Arenz et al.). This approach is based on a bicistronic DNA template, encoding two ORFs of the arrest peptide, that is added to an in vitro translation system. In the absence of the ligand, ribosomes translate normally, resulting in one ribosome per mRNA on average. In the presence of erythromycin, the ribosomes are arrested, so that two ribosomes, termed a disome, are bound to the same mRNA molecule. The disomes were separated from the monosomes through sucrose gradients.
Subsequently, RNase H treatment led to the separation of the disomes, and the resulting monosomes were used for structure determination (Arenz et al., 2014a; Arenz et al.). The original goal of this thesis was to determine the structure of short arrest peptides using X-ray crystallography. For this approach, a high homogeneity of the arrest complex is necessary to obtain high-resolution information. The approaches used to obtain the ribosomal nascent-chain complexes (RNCs) contain long mRNA fragments, which limits the diffraction of the crystals (data unpublished, obtained by Axel Innis while he was in the group of Prof. Thomas Steitz). To overcome the limitations of diffraction and to ensure the stoichiometry of the complex, the short arrest peptides can be transferred to the 3'CCA end of a purified tRNA using the flexizyme methodology (Goto et al.), a method that was developed for genetic code reprogramming (chapter 1.5). By transferring the arrest peptide directly onto the 3'CCA end of the initiator tRNA, which has a high affinity for the P-site, the length of the mRNA can be reduced significantly. The following figure illustrates how complexes can be formed to study short arrest peptides: As 70S T. thermophilus ribosomes are known to crystallize well and diffract to high resolution, these ribosomes were used for crystallization experiments. For cryo-EM studies, 70S ribosomes from E. coli were purified by disassembling them into individual subunits and re-associating them (sucrose gradients, Figure 66 in the supplements; Blaha et al.).
This strategy ensures the removal of ribosome-associated factors and tRNAs and further increases the homogeneity of the complex. A short mRNA encoding a Shine-Dalgarno sequence followed by a short open reading frame (Figure 17) is necessary for peptidyl-tRNA binding to the ribosome. The arrest peptides were synthesized and activated chemically by Dr. Caterina Lombardo and Dr. Christophe André from the group of Dr. Gilles Guichard (IECB, Pessac, France) and then transferred via the flexizyme reaction onto the 3'CCA end of the initiator tRNA. The peptidyl-tRNA thus obtained was purified using RP-HPLC. We hypothesized that the high P-site binding affinity of the initiator tRNA is sufficient to force the arrest peptide into the ribosomal exit tunnel. The arrest complex is formed in the presence of additional compounds such as aminoacyl-tRNA or an antibiotic. The stalled complex can then be studied structurally and biochemically, e.g. by toeprinting, footprinting, or puromycin stability assays. The peptide is normally linked via an ester bond to the 3'CCA end of the tRNA, and this bond is pH-sensitive. However, the ester-linked peptidyl-tRNA can still be used for complex formation to study the RNC by cryo-EM, as the sample is frozen directly after complex formation. The recent development of direct detectors and the constant improvement of processing software have made it possible to obtain near-atomic resolution structures by cryo-EM [START_REF] Kühlbrandt | The resolution revolution[END_REF]. Near-atomic resolution is sufficient to study short arrest peptides and was used to obtain the structure of the MKF-70S E. coli ribosome (chapter 4.4). For X-ray crystallography, crystals were grown for several days at a pH of 7.5 (chapter 3.9), so the ester bond might hydrolyze during this process.
In order to ensure the stoichiometry of the complex, the ester linkage can be replaced by an amide linkage (a non-hydrolysable bond) that is pH-insensitive and was used in previous studies of the molecular mechanism of peptide bond formation (Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). Such a tRNA is obtained by exchanging the 3'OH of the tRNA for a 3'NH2 group (chapter 3.5.4). Attempts to transfer the activated peptides onto 3'NH2-tRNAi Met using the flexizyme methodology have so far been unsuccessful. Another strategy to introduce the amide bond between the arrest peptide and the tRNA is shown in the following figure: The non-hydrolysable bond can be introduced using one round of translocation in the presence of the dipeptidyl-tRNAi Met and a non-hydrolysable aminoacylated elongator tRNA. The aminoacylation reaction can be performed using the corresponding aminoacyl-tRNA synthetase, as previously reported (Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). The arrest complex is formed by the addition of EF-G and GTP. Another advantage of the second strategy is that the P-site tRNA within the arrest complex is not tRNAi Met, making the complex more native. The following chapters describe the results obtained to study nascent chain-mediated translational arrest along fM+F(+) in the presence of erythromycin and along consecutive prolines, and present a novel approach to form arrested complexes using the flexizyme methodology.

Results

Peptidylation of tRNAi Met using flexizyme

The flexizymes eFx, dFx, and aFx were generated by in vitro transcription as described before (Murakami et al., 2006a). Reaction conditions were tested using a CCA stem-loop, the so-called microhelix.
The reaction conditions, including Mg2+ concentration and pH, were screened. A higher pH might increase the yield of peptidylated tRNA but also increases the risk of tRNA degradation and depeptidylation; the reaction conditions therefore needed to be balanced.

Screening reactivity and time courses

Peptides to study fM+X(+) and polyproline-mediated arrest were designed applying different rules. Bacterial translation is initiated with initiator tRNA carrying a formylated methionine. Peptides that carried a methionine at the N-terminus were thus formylated in order to mimic the natural substrate. In contrast, any other N-terminal amino acid was acetylated to mimic the growing peptide chain and increase the stability of the peptidyl-tRNA. The flexizyme reaction is mainly limited by the solubility of the peptide in aqueous solution [START_REF] Niwa | A flexizyme that selectively charges amino acids activated by a water-friendly leaving group[END_REF]. The incorporation of positively charged amino acids such as lysine and arginine increases the solubility of the peptides in water. Several leaving groups in combination with their corresponding flexizymes were tested at different Mg2+ concentrations and pH values ranging from 6.5 to 8.5. The highest peptidylation capacity was reached with 75 µM eFx and 5 mM AcRP-CBT, AcRA-CBT, AcRD-CBT, and fMK-CBT in a buffer containing 50 mM Bicine-KOH pH 8.5, 50 mM MgCl2, 20% DMSO, and 25 µM tRNAi Met. Scale-up reactions were incubated for eight days on ice. The following figure shows the acidic denaturing PAGE monitoring the daily progress of the reaction: Figure 19 shows that all four peptides can be transferred to a certain degree to the 3'CCA end of tRNAi Met, but their individual reactivity varies depending on the nature of the peptide. The most reactive peptide is AcRA-CBT.
The AcRA-tRNAi Met band shows a similar intensity to the band for the deaminoacylated tRNA, corresponding to a reaction efficiency of approximately 50%. The product is formed within the first day, as indicated by the band corresponding to the peptidyl-tRNA (Figure 19). In contrast, the reactions involving AcRP-CBT, AcRD-CBT, and fMK-CBT required a week to reach a comparable intensity of the band corresponding to the peptidylated tRNA (Figure 19). The reaction was stopped when the intensity of the band corresponding to the product no longer increased, which might indicate saturation of the reaction. In addition, further incubation might result in hydrolysis of the peptide and the tRNA. This long reaction time stands in contrast to the reactivity of the fMKF-CME peptide, which reacts within 2 h with a peptidylation efficiency of 100%, as shown by Dr. K. Kishore Inampudi, a postdoctoral fellow in the Innis group (PAGE shown in supplemental Figure 67). Another strategy to follow the progress of the flexizyme reaction is to use a short microhelix, as the flexizyme reaction is independent of the body of the tRNA. The time course for the reaction of the tripeptide AcRAP-CME with the microhelix and eFx is shown in Figure 20: The first product is visible after three days and increases until the sixth day, with a reaction efficiency around 10% (Figure 20). For large-scale reactions, the microhelix was replaced by initiator tRNA and incubated for a week on ice before purification, as shown in the corresponding PAGE (Figure 22).

Purification of peptidylated tRNAi Met

For the following studies, peptidylated tRNAi Met was separated from deaminoacylated tRNAi Met and eFx by RP-HPLC using a C4 column.
The deaminoacylated tRNAi Met has a high affinity for the P-site of the ribosome [START_REF] Blaha | Formation of the first peptide bond: the structure of EF-P bound to the 70S ribosome[END_REF], so the peptidylated tRNA needs to be separated from the unreacted one, so that the resulting complex contains little or no deacylated tRNAi Met, which would reduce the occupancy of the peptidyl moiety. In addition, eFx might have a negative effect on crystallization and should thus be removed. Reactions were stopped by precipitating the RNA using 300 mM Na(CH3CO2) pH 5.5/70% ethanol. For purification, the sample was recovered in RP buffer (20 mM NH4(CH3CO2), 10 mM Mg(CH3CO2)2, 400 mM NaCl) and loaded onto a C4 RP-HPLC column (see chapter 3.5). The sample was separated using increasing concentrations of methanol (RP buffer B, 40% (v/v) methanol). The gradient was 0-50% RP buffer B over 14 column volumes, followed by a steep step from 50-100% buffer B in one column volume. The purification was performed at pH 5.5 in order to prevent peptide hydrolysis. The chromatograms of this purification are shown in Figure 21, along with the corresponding acidic PAGE analysis: The third peak corresponds to eFx (approx. 7.5 CV). The last peaks correspond to the peptidylated tRNA, as the peptide increases the hydrophobicity and results in tighter binding to the column material. AcRP-tRNAi Met (Figure 21A) and AcRA-tRNAi Met (Figure 21C) elute over a small number of fractions, while the peak corresponding to fMK-tRNAi Met (Figure 21E) elutes over several fractions. An explanation could be that the lysine side chain might adopt different orientations. In contrast, the peptidyl-tRNA peak is absent in the chromatogram for AcRD-tRNAi Met (Figure 21G, H). In this particular case, the peptide is less hydrophobic than the others and co-elutes with eFx. The carboxyl group of the aspartate side chain has a pKa of 3.64 and is likely to be deprotonated at a pH of 5.5.
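The protonation state invoked here can be checked with the Henderson-Hasselbalch equation. The sketch below (a simple calculation assuming ideal behaviour and the quoted pKa of 3.64; not part of the original analysis) estimates the fraction of deprotonated aspartate side chains at the running pH of 5.5:

```python
def fraction_deprotonated(pka: float, ph: float) -> float:
    """Henderson-Hasselbalch: fraction of an acid HA present as A-."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Aspartate side chain (pKa 3.64) at the RP-HPLC running pH of 5.5
frac = fraction_deprotonated(3.64, 5.5)
print(f"deprotonated at pH 5.5: {frac:.1%}")  # ~98.6%, i.e. almost fully charged
```

The side chain is thus essentially fully negatively charged under the purification conditions, consistent with the earlier elution of AcRD-tRNA.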
Consequently, the AcRD-tRNA elutes earlier (Figure 21G, H). Decreasing the slope of the gradient and reducing the pH of the running buffer resulted in similar elution profiles. Subsequent experiments were performed using AcRP-tRNAi Met (approx. 80% reaction efficiency), AcRA-tRNAi Met (approx. 100% reaction efficiency) and fMK-tRNAi Met (approx. 40% reaction efficiency). The reactivity deduced from the HPLC profiles is in all cases higher than estimated from the time courses (Figure 19). Reasons include partial hydrolysis of the peptide from the tRNA during sample analysis: the buffer concentration of the loading dye may be too low to acidify the reaction buffer (50 mM Bicine-KOH pH 8.5), and partial peptidyl hydrolysis might also occur during heating of the samples prior to loading the gel. The tripeptide AcRAP-CBT was reacted with initiator tRNA in the presence of eFx for one week on ice. Subsequently, the product was purified via C4 RP-HPLC using a modified gradient from 0-80% RP buffer B. The resulting chromatogram is shown in Figure 22: Figure 22 shows that AcRAP-tRNAi Met can be separated from unreacted initiator tRNA and eFx. Due to its hydrophobic properties, the peptidylated tRNA elutes last and is almost completely separated from the eFx peak. The peptidylation efficiency was estimated to be approximately 80%. Fractions containing the peptidylated tRNAi Met were combined. The reverse-phase buffer contains 400 mM NaCl, which would interfere with downstream applications, e.g. crystallography and in vitro protein synthesis. The buffer was therefore exchanged over concentrators into 5 mM NH4(CH3CO2) buffer. The samples were lyophilized, followed by resuspension in 20 µL 5 mM NH4(CH3CO2). After concentration, the samples were analyzed by acidic denaturing PAGE (Figure 23): all peptidyl-tRNAi Met species (AcRP-tRNAi Met, AcRA-tRNAi Met, AcRAP-tRNAi Met and fMK-tRNAi Met) were still peptidylated after buffer exchange (Figure 23).
Samples still contained minor eFx contamination, which was not expected to interfere with downstream applications.

Investigations of fM+X(+) nascent chain-mediated translational arrest in the presence of erythromycin

Complexes to study fM+X(+) in the presence of erythromycin

Two ribosome profiling studies revealed that ribosomes arrest on +X(+) motifs in the presence of macrolides or ketolides [START_REF] Davis | Sequence selectivity of macrolide-induced translational attenuation[END_REF][START_REF] Kannan | The general mode of translation inhibition by macrolide antibiotics[END_REF]. This motif is also present in the ErmD leader peptide, and successive shortening of the arrest peptide led to the identification of the minimal arrest motif MRLR, which could be generalized to fM+X(+) [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF][START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF]. The underlying mechanism remains unknown. The following figure shows the complex for studying this type of arrest peptide: The sequence variants of this study are fMRF(R) and fMKF(R). The phenylalanine in position 0 is the activated amino acid. Phenylalanine-CME was used to optimize the flexizyme sequences and shows a high reactivity [START_REF] References Goto | Flexizymes for genetic code reprogramming[END_REF]. Additionally, the aromatic side chain is bulky and would make side chain assignment easier in the event that a medium-resolution structure was obtained. The lysine in the penultimate position was used as the positively charged residue. Many experiments and toeprints were additionally performed using the MRF(R) sequence, but fMR-CBT showed a low peptidylation rate and fMRF-CME was never synthesized; structural data were therefore obtained with the fMKF(R) sequence.
The methionine was placed at the -2 position, as bacterial translation is initiated using fMet-tRNAi Met. The peptide was attached to tRNAi Met due to the latter's high affinity for the P-site. I chose Arg-tRNA Arg as the incoming aminoacyl-tRNA because of arginine's bulky guanidinium group. This particular tRNA was in vitro transcribed and aminoacylated by Dr. Axel Innis while he was in the group of Prof. Thomas Steitz. For this project, fMKF-tRNAi Met was generated and purified by Dr. K. Kishore Inampudi, who used it for X-ray crystallography. I performed the biochemical characterization by toeprinting as well as structural studies, including X-ray crystallography and cryo-EM.

Toeprinting to validate that fM+F(+) arrests the ribosome in the presence of erythromycin

By mutating one position at a time, the MRL(R) motif was generalized to fM+X(+) [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF], also considering the published ribosome profiling data that led to the identification of the +X(+) motif within long nascent chains [START_REF] Kannan | The general mode of translation inhibition by macrolide antibiotics[END_REF]. In contrast to the MRF(R) sequence, arrest induced by the sequences MKFR, MRFK and MKFK, which differ from the MRLR motif at two or three positions, had not yet been shown. In order to use these sequences for structural studies, I therefore first had to show that they can arrest translation in the presence of erythromycin. Toeprinting reactions (chapter 3.9) were performed using DNA templates encoding MRFRI*, MKFRI*, MRFKI* and MKFKI* in the presence or absence of 25 µM erythromycin. A PURExpress system (NEB) lacking release factors was used, preventing the release of ribosomes that reached the stop codon, which was expected to yield a toeprint corresponding to ribosomes positioned with their P-site on the preceding isoleucine codon.
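The expected toeprint position follows from a fixed geometric offset: reverse transcription stops 16-17 nt downstream of the first nucleotide of the P-site codon. A small sketch (using hypothetical 0-based mRNA coordinates, purely illustrative) shows how a band position is derived from the P-site codon index:

```python
def toeprint_positions(p_site_codon_index: int, start_codon_offset: int = 0):
    """Return the expected toeprint positions (0-based mRNA coordinates) for a
    ribosome whose P site sits on the given codon (0 = start codon).

    The reverse transcriptase stops 16-17 nt downstream of the first
    nucleotide of the P-site codon.
    """
    first_nt = start_codon_offset + 3 * p_site_codon_index
    return (first_nt + 16, first_nt + 17)

# Ribosome arrested with its P site on the third codon (index 2) of an ORF
# starting at mRNA position 0 (hypothetical coordinates):
print(toeprint_positions(2))  # (22, 23)
```

Combined with the sequencing lanes run alongside, this offset lets the arrest codon be read off directly from the gel.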
The following figure illustrates the nascent chain-mediated translational arrest induced by these motifs in the presence of erythromycin: Toeprinting reactions were performed using a PURExpress system (NEB) omitting the release factors. Non-arrested ribosomes were not released (red arrow). The arrest site is indicated by the dark blue arrow. The position of the ribosome can be determined from the sequencing lanes and the fact that the ribosome covers 16-17 downstream nucleotides, counting from the first nucleotide of the P-site codon. The toeprint (Figure 25) shows that the ribosome is arrested on the phenylalanine codon (blue arrow) when translating MRFRI, MKFRI, MRFKI, and MKFKI in the presence of erythromycin. Consequently, all four sequences can be used for structural studies. However, sequences encoding at least one arginine codon show a stronger arrest band than the sequence encoding two lysine residues. Overall, all sequences tested arrest E. coli ribosomes. Since complexes for crystallographic studies were formed with 70S T. thermophilus ribosomes, the arrest also had to be validated with these ribosomes. Earlier studies showed that reconstituted in vitro translation systems can be used to study bacterial ribosomes from different species, e.g. B. subtilis or T. thermophilus, in the presence of E. coli translation factors [START_REF] Chiba | Recruitment of a species-specific translational arrest module to monitor different cellular processes[END_REF][START_REF] Thompson | Testing the conservation of the translational machinery over evolution in diverse environments: assaying Thermus thermophilus ribosomes and initiation factors in a coupled transcription-translation system from Escherichia coli[END_REF]. In the case of 70S T. thermophilus ribosomes, the reaction can be enhanced using T.
thermophilus initiation factors [START_REF] Thompson | Testing the conservation of the translational machinery over evolution in diverse environments: assaying Thermus thermophilus ribosomes and initiation factors in a coupled transcription-translation system from Escherichia coli[END_REF]. Reaction conditions were optimized using the well-characterized arrest peptide ErmDL. The toeprinting reactions were performed starting from a DNA template encoding the wild-type ErmDL sequence (MTHSMRLRSE*) with a final concentration of 2.5 µM 70S ribosomes. The resulting sequencing gel is shown in the following figure: Reactions were performed using 2.5 µM ribosomes provided by NEB, or extracted from E. coli KC6 or T. thermophilus HB8 cells (chapter 3.6). The reaction was enhanced using 2 µM each of IF1, IF2 and IF3 from T. thermophilus (purified by Dr. Axel Innis in the Steitz laboratory). The band corresponding to the start codon is indicated by the yellow arrow, the arrest site by the dark blue arrow and the stop codon by the red arrow. Figure 26 indicates that in all reactions the ribosomes were arrested at the leucine codon in the presence of erythromycin (dark blue arrow). This indicates that ribosomes purified from T. thermophilus HB8 cells can be used to study nascent chain-mediated translational arrest in the custom-made PURExpress system by performing the reaction at 37°C, as suggested by published experiments [START_REF] Thompson | Analysis of mutations at residues A2451 and G2447 of 23S rRNA in the peptidyltransferase active site of the 50S ribosomal subunit[END_REF]. To prevent the post-initiation complex from stalling at the start codon, protein synthesis with 70S T. thermophilus ribosomes can be enhanced at 37°C through the addition of T.
thermophilus initiation factors [START_REF] Thompson | Testing the conservation of the translational machinery over evolution in diverse environments: assaying Thermus thermophilus ribosomes and initiation factors in a coupled transcription-translation system from Escherichia coli[END_REF]. The last two lanes show the results of the reaction containing 2 µM of each of the T. thermophilus initiation factors. The presence of the T. thermophilus initiation factors led to a prominent decrease in 70S T. thermophilus ribosomes arrested at the start codon (Figure 26). Subsequent experiments containing 70S T. thermophilus ribosomes were performed at 37°C, using a ribosome concentration of 2.5 µM and 2 µM per initiation factor. The T. thermophilus ribosomes seem to have difficulties translating MR at 37°C in the presence of erythromycin. The presence of erythromycin restricts the PTC, and the T. thermophilus ribosomes might not be able to overcome this at low temperatures; the experiment should therefore be repeated using translation factors from T. thermophilus, allowing the reaction to be performed at higher temperatures. For crystallographic studies, this issue might be overcome by forming the complex at a higher temperature or using a flexizyme-charged peptide.

Toeprinting: fMKF-tRNAi Met arrests the ribosome in the presence of erythromycin

fMKF-tRNAi Met was peptidylated and purified by Dr. K. Kishore Inampudi using the flexizyme eFx and CME-activated fMKF. To investigate the stalling activity of the peptide, I performed a toeprinting assay in the presence or absence of erythromycin using a PURExpress (NEB) system without tRNAs, amino acids, and ribosomes. First attempts to study the incorporation of fMKF-tRNAi Met by toeprinting were performed using 10 µM fMKF-tRNAi Met and the total tRNA solution provided by the kit, resulting in ribosomes arrested at the initiation codon (data not shown).
The tRNA solution provided by the kit contains total tRNA extracted from cells and is used at a final concentration of 5 µM [START_REF] Shimizu | Cell-free translation reconstituted with purified components[END_REF][START_REF] Shimizu | Protein synthesis by pure translation systems[END_REF]. Additionally, the encoded arginine codons CGG (0.5% in E. coli K12) and AGG (0.2% in E. coli K12) are both rare codons. The large excess of the initiator tRNA over the rarely represented tRNA Arg probably resulted in a strong signal for the initiation complex but not for the translocated one. Presumably, the amount of the corresponding tRNA was insufficient to form a translocated complex detectable by toeprinting. To overcome this problem, the experimental set-up was changed to use purified compounds. The initiation complex was preformed by incubating 2.5 µM ribosomes with 5 µM mRNA template (coding sequence: MRPPPI, AUG AGG CCG CCG CCG AUC UAA), 10 µM peptidylated tRNAi Met, 25 µM erythromycin, 20 µM Arg-tRNA Arg and the factor mix, containing all protein factors of the canonical translation cycle, for 5 min at room temperature. Subsequently, solution A (minus amino acids and tRNA) provided by the kit, containing nucleotides, was added, allowing complexes that had not been arrested to translocate (further information is provided in chapter 3.9). The following figure shows the results of these reactions: Figure 28 shows the toeprint investigating whether the arrest complex can be formed using peptidyl-tRNA obtained by flexizyme. The reaction was performed in two steps: preformation of the initiation complex in the presence of peptidyl-tRNAi Met and Arg-tRNA Arg, and translocation by adding GTP to the reaction. During the formation of nascent chain-mediated translational arrest on the fM+X(+) motif in the presence of erythromycin, the first peptide bond between the methionine and the arginine or lysine needs to be formed.
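To put rough numbers on the excess argued above: assuming, purely for illustration, that the abundance of the tRNA Arg isoacceptors reading CGG/AGG in the total tRNA pool tracks the quoted codon frequencies (0.5% and 0.2%; a loose assumption, since tRNA abundance and codon usage correlate only approximately), the imbalance between fMKF-tRNAi Met and the decoding tRNA can be estimated:

```python
total_trna_uM = 5.0          # total tRNA in the reaction (kit concentration)
peptidyl_trna_uM = 10.0      # fMKF-tRNAiMet added

# Illustrative assumption: isoacceptor abundance ~ codon usage in E. coli K12
rare_fraction = 0.005 + 0.002        # CGG (0.5%) + AGG (0.2%)
rare_trna_uM = total_trna_uM * rare_fraction

excess = peptidyl_trna_uM / rare_trna_uM
print(f"~{rare_trna_uM:.3f} uM decoding tRNA, ~{excess:.0f}-fold excess of initiator")
```

Even under this crude estimate, the initiator species outnumbers the decoding tRNA by well over two orders of magnitude, consistent with the dominant initiation-complex signal.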
Lanes 2 and 3 show a toeprint corresponding to the translocated complex. In this experiment, tRNAi Met was replaced by peptidylated fMKF-tRNAi Met; if the ribosomes are arrested due to the presence of the fMKF peptide in the tunnel, they cannot translocate and remain on the start codon. The toeprint observed in lane 4, corresponding to the reaction without erythromycin, shows that ribosomes translocated to the second codon, while in the presence of erythromycin the ribosomes remain on the start codon (lane 5). Consequently, the ribosomes in the presence of fMKF-tRNAi Met, Arg-tRNA Arg and erythromycin have undergone drug-dependent translational arrest. However, a less abundant band corresponding to ribosomes that have reached the arginine codon indicates that arrest is not complete. These conditions were used to form the arrest complex for studying fMKF(R) in the presence of erythromycin for further characterization by structural biology methods.

Structural studies of an MKFR-ribosome complex arrested in the presence of erythromycin

Co-crystallization experiments of fMKF-tRNAi Met with 70S T. thermophilus ribosomes in the presence of erythromycin (strategy 1, Figure 17) were performed by Dr. K. Kishore Inampudi. Although he was able to obtain crystals and diffraction data, no density for the peptide could be detected, even though clear density for initiator tRNA was visible in a minimally-biased Fo-Fc map obtained from a refinement in which tRNAs were not included in the model. A likely explanation is that the peptide hydrolyzed during crystallization, as the conditions include an incubation of approximately seven days at 20°C at a pH of 7.5. Similar observations had been made previously by Dr. Axel Innis in the Steitz group. To address this issue, I performed crystallization experiments starting from the dipeptidyl-tRNA fMK-tRNAi Met and Phe-NH-tRNA Phe (strategy 2, Figure 18).
The pH-sensitive ester bond between the amino acid and the tRNA was replaced by an amide bond. To introduce this bond, the 3'CCA end is removed enzymatically by the CCA-adding enzyme in the presence of an excess of pyrophosphate and synthesized again by the same enzyme in the presence of 3'NH2-ATP, leading to the incorporation of a terminal 3'NH2-adenosine (further details in chapter 3.5). NH2-tRNA Phe was aminoacylated with L-phenylalanine using phenylalanyl-tRNA synthetase, followed by purification by RP-HPLC over a C4 column (chapter 3.5). A similar strategy involving fM-NH-tRNAi Met and F-NH-tRNA Phe had been used previously to study pre-catalysis and post-catalysis states of the 70S T. thermophilus ribosome to obtain novel insights into the mechanism of peptide bond formation (Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). This complex was formed to test whether peptide bond formation can occur between the flexizyme-charged peptide and the A-site amino acid, which was attached to the A-site tRNA via a non-hydrolysable bond. After collecting diffraction data from crystals prepared in this manner, I could see no density that would indicate the presence of aminoacyl-tRNA in the A-site. In contrast, density for deaminoacylated initiator tRNA was detected. An explanation could be that the peptidyl A-site tRNA has a reduced binding affinity for the A-site and potentially dissociates during crystallization. However, to form the arrested ribosome complex, the ribosomes have to undergo one round of translocation. Consequently, the complexes were incubated with small amounts of T. thermophilus EF-G (1 µM) and 1 mM GTP. This strategy resulted in observations similar to the data obtained from samples formed without EF-G.
An explanation could be that under crystallographic conditions the deaminoacylated tRNA still has a higher binding affinity for the P-site than it does for the E-site. Another explanation could be that the mRNA is too short for proper function of EF-G. Complexes for cryo-EM were formed using fMKF-tRNAi Met, 70S E. coli ribosomes, erythromycin, mRNA, and Arg-tRNA Arg. Peptide hydrolysis was not an issue, as the complex is frozen directly after complex formation. Complexes were formed using similar concentrations of mRNA, ribosomes, fMKF-tRNAi Met, Arg-tRNA Arg and erythromycin as used for the toeprinting reactions. Subsequently, the sample was diluted to 240-480 nM ribosomes, supplementing the dilution buffer with Arg-tRNA Arg (chapter 3.10). The obtained reconstruction is described in the following chapters.

Single particle reconstruction using RELION

Cryo-EM data were collected on the Talos Arctica TEM at the IECB (Bordeaux, France), as described in detail in chapter 3.10. The collected micrographs were processed using RELION (Scheres, 2012b; [START_REF] Scheres | Chapter Six -Processing of Structurally Heterogeneous Cryo-EM Data in RELION[END_REF]). In the first steps, beam-induced movement of the particles was corrected using MotionCor2 [START_REF] Zheng | Anisotropic correction of beam-induced motion for improved single-particle electron cryo-microscopy[END_REF][START_REF] Zheng | MotionCor2: anisotropic correction of beam-induced motion for improved cryo-electron microscopy[END_REF], followed by CTF correction using CTFFIND4 [START_REF] Rohou | CTFFIND4: Fast and accurate defocus estimation from electron micrographs[END_REF]. During CTF correction, the exact defocus value, as well as the resolution limit of the micrograph, were determined (further information in chapter 3.10). A micrograph and its corresponding CTF are shown in the following figure: The ribosome distribution is even over the micrograph and the ribosomes adopt different orientations (Figure 29).
The different orientations are necessary for single particle reconstruction. The concentration was sufficient to obtain a monolayer of ribosomes. Along the carbon filaments, particles were located too close to each other and were excluded during particle picking to avoid bias. For the 3D reconstruction, the defocus value and astigmatism have to be determined. These values are taken into account using the contrast transfer function (CTF). To do so, the program CTFFIND4 [START_REF] Rohou | CTFFIND4: Fast and accurate defocus estimation from electron micrographs[END_REF] was used. Figure 29 illustrates the output for one micrograph. The theoretical CTF correlates well with the power spectrum calculated from the micrograph. In this particular case, the astigmatism is very low. However, one dataset used for the MKF-70S reconstruction was highly astigmatic. The astigmatism angle was taken into account during CTF correction, so this dataset was also usable for 3D reconstruction. 2000 particles were manually picked and classified into 10 different 2D classes in order to obtain templates for auto-picking. Auto-picking parameters were optimized for each dataset (chapter 3.10). Subsequently, the auto-picked particles were classified twice into 100 2D classes (further information in chapter 3.10). The classification is used to separate ribosomal particles from artifacts or ribosomes bound to the lacey carbon. The data were collected using lacey carbon grids, a support with an irregular carbon arrangement. Due to this irregular arrangement, some micrographs contained carbon with bound ribosomes, which were mostly removed during 2D classification.
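The theoretical CTF fitted by CTFFIND4 can be sketched from the standard phase-contrast formula. The parameters below (200 kV electron wavelength, defocus, spherical aberration, amplitude contrast) are illustrative placeholders, not the actual imaging parameters of these datasets:

```python
import math

def ctf(k: float, defocus_A: float, wavelength_A: float,
        cs_A: float, amp_contrast: float) -> float:
    """Phase contrast transfer function at spatial frequency k (1/Angstrom).

    chi = pi*lambda*dz*k^2 - (pi/2)*Cs*lambda^3*k^4   (underfocus positive)
    """
    chi = (math.pi * wavelength_A * defocus_A * k ** 2
           - 0.5 * math.pi * cs_A * wavelength_A ** 3 * k ** 4)
    return -(math.sqrt(1.0 - amp_contrast ** 2) * math.sin(chi)
             + amp_contrast * math.cos(chi))

# Illustrative parameters: 200 kV (lambda ~ 0.0251 A), 1.5 um underfocus,
# Cs = 2.7 mm, 10% amplitude contrast
params = dict(defocus_A=15000.0, wavelength_A=0.0251,
              cs_A=2.7e7, amp_contrast=0.1)
for k in (0.0, 0.02, 0.05, 0.1):
    print(f"k = {k:5.3f} 1/A  CTF = {ctf(k, **params):+.3f}")
```

Fitting this oscillating function (including an astigmatism-dependent defocus per direction) to the power spectrum is what yields the per-micrograph defocus and astigmatism values used during reconstruction.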
Figure 30 shows the most abundant 2D classes after the second round of 2D classification. Classes lacking clear structural features contain only background noise and appear as blob-like or ghost-like particles [START_REF] Henderson | Avoiding the pitfalls of single particle cryo-electron microscopy: Einstein from noise[END_REF][START_REF] Scheres | Semi-automated selection of cryo-EM particles in RELION-1.3[END_REF][START_REF] Scheres | Chapter Six -Processing of Structurally Heterogeneous Cryo-EM Data in RELION[END_REF]; these were removed from further reconstruction. The remaining 2D classes contained different orientations of the bacterial ribosome, showing considerable molecular detail, including stalks and subunits. Subsequently, the selected classes were used for 3D auto-refinement to obtain a first initial map. This map is reconstructed from particles with different tRNA occupancies. By masked 3D classification with background subtraction, the particles can be sorted accordingly. Masked 3D classification is a method to select particles containing information for a specific region by masking out a specific part of the density and classifying the particles within that mask [START_REF] Scheres | Chapter Six -Processing of Structurally Heterogeneous Cryo-EM Data in RELION[END_REF]. For the MKF-70S ribosome structure, the particles were sorted according to their A-, P- or E-site tRNA occupancy. The resulting distribution is shown in the following figure: Figure 31 shows the electron microscopy density after the initial 3D auto-refinement. The map was reconstructed using 90 708 particles. The particles are a mixture containing, e.g., one tRNA in the P or E site, or two or three tRNAs. These different occupancies directly influence the quality of the map. Using a mask around the tRNA binding sites (blue mask) and subtracting the background, the particles were sorted into 10 different classes.
After evaluating the resulting tRNA maps in Chimera, six of these classes were identified as noise (11.17%). The most abundant class contained a P- and an E-site tRNA (61.66%), corresponding to the arrest complex. These 63 054 particles were used for further reconstruction. The number of particles is sufficient to obtain high-resolution information, as a recent study showed that 30 000 particles are sufficient for a near-atomic resolution reconstruction of ribosomal particles [START_REF] Bai | Ribosome structures to nearatomic resolution from thirty thousand cryo-EM particles[END_REF]. The second most abundant class (13.48%) had signal for A-, P- and E-site tRNAs. These particles might correspond to non-arrested ribosomes with the peptide attached to the A-site tRNA; as no EF-G was added to the sample, the complex could not translocate. The abundance of this complex corresponds to that observed in the toeprinting experiment which confirmed the arrest (Figure 28). The toeprint was performed in the presence of EF-G, and in the sample corresponding to the arrested complex a faint band for the translocated complex could be observed. To analyze whether the APE-classified particles correspond to the non-arrested complex, the cryo-EM map for those particles could be reconstructed (discussed in chapter 5.2). Other classes contain an E-site tRNA (class E, 7.21%) or a P-site tRNA with a partially bound E-site tRNA (class P/(E), 6.48%). The distribution shows that the arrested complex has a P-site tRNA and an E-site tRNA bound, even though the complex was formed and diluted in the presence of high concentrations of Arg-tRNA Arg . This indicates that the arrest peptide might inhibit the accommodation of this particular tRNA. The high abundance of the PE complex, obtained without any purification of the arrested complex, indicates that the strategy of forming arrested complexes using the flexizyme methodology to peptidylate tRNA presents a useful tool for structural biology.
Nascent chain-mediated translational arrest complexes have previously been isolated from cells or purified from in vitro translation reactions (chapter 4.4.1). The recent structure of TnaC was reconstructed from a sample extracted from cells, and the final reconstruction contained only 29% of all selected particles (Bischoff et al., 2014). Even though the MKF-70S complex was formed from purified components, the toeprinting experiment indicates that the peptidyl-tRNAi Met can initiate translation in the absence of erythromycin. After masked 3D classification, the particles were processed using movie refinement and particle polishing to obtain so-called "shiny" particles. The shiny particles were put through another round of 3D auto-refinement using a mask on the 50S subunit, as the arrest takes place in the ribosomal exit tunnel, followed by postprocessing as described in [START_REF] Scheres | Chapter Six -Processing of Structurally Heterogeneous Cryo-EM Data in RELION[END_REF], resulting in the final map. The local resolution distribution and FSC curve (further explanation in chapter 3.10) are shown in the following figure: The local resolution distribution (Figure 32A) shows that the body of the large subunit has, on average, a higher resolution than the small subunit, which was expected as the particles were aligned on the 50S subunit. The small subunit shows lower resolution due to its mobility relative to the 50S subunit. Overfitting was prevented by using the gold-standard approach, meaning the data were split into two halves that were refined independently [START_REF] Henderson | Outcome of the first electron microscopy validation task force meeting[END_REF][START_REF] Scheres | Prevention of overfitting in cryo-EM structure determination[END_REF]. The Fourier shell correlation (FSC) between the two reconstructions yields a resolution estimate of the reconstruction.
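The gold-standard FSC calculation described above can be sketched as follows. This is a minimal implementation of the standard shell-correlation formula, not the RELION postprocessing code (which additionally applies masking and phase randomization to the half-maps):

```python
import numpy as np

def fsc(map1, map2, voxel_size=1.24, n_shells=50):
    """Fourier shell correlation between two independently refined half-maps."""
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    n = map1.shape[0]
    freqs = np.fft.fftfreq(n, d=voxel_size)
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    r = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    edges = np.linspace(0, freqs.max(), n_shells + 1)
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (r >= lo) & (r < hi)
        num = np.real(np.sum(f1[shell] * np.conj(f2[shell])))
        den = np.sqrt(np.sum(np.abs(f1[shell]) ** 2) * np.sum(np.abs(f2[shell]) ** 2))
        curve.append(num / den if den > 0 else 0.0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.array(curve)

def resolution_at(centers, curve, threshold=0.143):
    """Resolution (Å) where the FSC first drops below the threshold."""
    below = np.where(curve < threshold)[0]
    return 1.0 / centers[below[0]] if below.size else 1.0 / centers[-1]
```

For identical inputs the curve is 1.0 at all frequencies; for independent half-maps it decays with frequency, and the 0.143 crossing gives the reported resolution.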
The 3.9 Å resolution was determined at an FSC threshold of 0.143, as implemented in the RELION refinement (Figure 32; Scheres, 2016; Scheres and Chen, 2012).

Model building, refinement, and validation

The initial model of the 70S E. coli ribosome (PDB code: 4U27, [START_REF] Noeske | Synergy of Streptogramin Antibiotics Occurs Independently of Their Effects on Translation[END_REF]) was rigid-body fitted into the density using the Situs package [START_REF] Wriggers | Conventions and workflows for using Situs[END_REF] (detailed steps are discussed in chapter 3.10). Afterwards, flexible bases of the 23S rRNA were moved manually into the density using COOT. These bases are A2062, U2506, U2585 and A2602 and are known to adopt specific positions during nascent chain-mediated translational arrest (Arenz et al., 2014a;[START_REF] Arenz | Molecular basis for erythromycin-dependent ribosome stalling during translation of the ErmBL leader peptide[END_REF]Bischoff et al., 2014;[START_REF] Gumbart | Mechanisms of SecM-mediated stalling in the ribosome[END_REF][START_REF] Sohmen | Structure of the Bacillus subtilis 70S ribosome reveals the basis for species-specific stalling[END_REF][START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF]. The model was refined against the electron microscopy density using PHENIX real-space refinement, applying base-pairing restraints and using the high-resolution input model as a reference model. Figure 33 shows different parts of the ribosome, including rRNA and ribosomal proteins, with their corresponding density. The quality of the density can be further evaluated by interpreting the structural features, e.g. secondary structure elements or side chains, that can be observed in the density (Figure 33). A part of the 5S rRNA is shown in Figure 33A.
The density does not resolve the separation between stacked bases, but the phosphate-sugar backbone can be clearly positioned. On the other hand, bases from the two antiparallel strands that are engaged in Watson-Crick base pairing can be clearly distinguished from each other. As further examples, the densities for an α-helix of the ribosomal protein L22 and a β-sheet of the ribosomal protein L32 are shown (Figure 33B,C). The density shows the shape of the secondary structure and density for the side chains, even for lysine and arginine. The density of the P-site tRNA (Figure 33D) is well defined. The density corresponding to the fMKF peptide is shown in Figure 33E. Density for the phenylalanine and lysine side chains can be clearly observed. In contrast, the methionine side chain is disordered, indicating that it might not be necessary for the arrest. The density of erythromycin clearly indicates the positions of the sugars and the upper part of the lactone ring (Figure 33F). In contrast to X-ray crystallography structures, which yield density for the whole drug (Bulkley et al., 2010; Dunkle et al., 2010), the lower part of the lactone shows no density, as observed in previous cryo-EM structures containing erythromycin (Arenz et al., 2014a;[START_REF] Arenz | Molecular basis for erythromycin-dependent ribosome stalling during translation of the ErmBL leader peptide[END_REF]. This hints that the lactone ring might be more flexible within cryo-EM structures than within crystal structures. In summary, the electron microscopy density was reconstructed to an average resolution of 3.9 Å. The resolution of the 50S subunit is higher, as a 50S mask was used during the last steps of the reconstruction. The density shows clear features corresponding to this resolution, such as well-defined secondary structures for nucleic acids and proteins, including side chains. The refinement and model statistics are listed in the following table:

Data collection
    Particles                       63 054
    Pixel size (Å)                  1.24
    Defocus range (µm)              0.5-2.5
    Voltage (kV)                    200
    Electron dose (e-/Å²)           120 (6.5 per frame)
Model composition
    Protein residues                5617
    RNA bases                       4633
Refinement
    Resolution (Å, 0.143 FSC)       3.9
    Map sharpening B factor (Å²)
Score
    Clash score, all atoms          3.37

The refinement improved the Ramachandran outliers (input: 8.34%) as well as the rotamer outliers (input: 21.79%). The refinement is still an ongoing process and needs further improvements, including applying more restraints and reprocessing the cryo-EM data to possibly increase the resolution and the quality of the density.

Structure interpretation

The complex was formed in the presence of Arg-tRNA Arg and a short synthetic mRNA encoding the amino acid sequence MR. After complex formation, the sample was diluted from 2.5 µM to 240-480 nM. To enhance A-site tRNA binding, the dilution buffer included 20 µM Arg-tRNA Arg . Figure 34 shows the reconstructed density with the P- and E-site tRNAs highlighted: The electron microscopy map contained density for the P-site and E-site tRNAs but no density for the A-site tRNA (Figure 34). The absence of the A-site tRNA can be explained by overlaying the resulting model with a model containing an A-site tRNA (Figure 35): As Figure 35 illustrates, the penultimate lysine side chain (Lys -1) points into the A-site crevice. The overlay with the pre-attack structure of the 70S T. thermophilus ribosome in complex with Phe-NH-tRNA Phe in the A-site and fMet-NH-tRNAi Met in the P-site (PDB code: 1VY4, Polikanov et al., 2014), which illustrates the location of the A-site tRNA, indicates that the Lys -1 side chain would still allow the binding of an incoming phenylalanine residue. Toeprinting experiments in which the A-site codon was mutated to encode each of the canonical amino acids showed that arrest in the presence of erythromycin occurs only if the incoming amino acid is a K, R or W [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF].
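A back-of-the-envelope Coulomb estimate illustrates why two positively charged side chains in close proximity are energetically unfavorable. The numbers here are illustrative assumptions (unit point charges, a 5 Å separation, an effective dielectric constant of 10), not values measured from the structure:

```python
# e^2/(4*pi*eps0), expressed in kcal*Å/(mol*e^2)
COULOMB_KCAL = 332.06

def repulsion_kcal_per_mol(q1, q2, distance_angstrom, dielectric=10.0):
    """Coulomb interaction energy between two point charges in a uniform dielectric."""
    return COULOMB_KCAL * q1 * q2 / (dielectric * distance_angstrom)

# Two +1 side chains (e.g. Lys -1 and an incoming Arg) assumed ~5 Å apart
e_rep = repulsion_kcal_per_mol(+1, +1, 5.0)
rt_310 = 1.987e-3 * 310  # thermal energy at 37 °C in kcal/mol
```

Even with these crude assumptions the repulsion exceeds thermal energy by roughly an order of magnitude, consistent with the idea that a positively charged incoming residue is disfavored next to the crevice-occupying lysine.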
Also, replacing the A-site tRNA with an A-site tRNA mimic carrying a positive charge, e.g. a CCA mimic attached to an ornithine, leads to nascent chain-mediated translational arrest [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF]. Setting this in the context of the structure, an arginine residue in the A-site (raspberry, Figure 35) would bring two residues with long, positively charged side chains close to each other. This would likely result in electrostatic repulsion and would explain the absence of the A-site tRNA in the density, even though the complex had been formed and diluted into a buffer containing an excess of Arg-tRNA Arg . Thus, ribosome inhibition appears to result from the inability of the incoming aminoacyl-tRNA to fully accommodate into the A-site. A similar inhibition mechanism was observed for the proline-rich antimicrobial peptides (chapter 4.1). While this explains which step of the translation cycle is blocked, it does not explain how the Lys -1 side chain is made to occupy the A-site crevice. One possible explanation is that it is stabilized in this position by interactions with the ribosomal tunnel wall, but due to the limited resolution, hydrogen bonds or salt bridges can at best be suggested by proximity. Another, non-exclusive explanation is that the confined environment within the drug-obstructed tunnel makes it difficult for the Lys -1 side chain to move out of the A-site crevice. To explore this possibility, I will first describe the conformations of various unpaired 23S rRNA residues within the tunnel that are generally known to be involved in nascent chain-mediated translational arrest and that have been implicated in the inactivation of the PTC.
These residues are A2062, A2602, U2585 and U2506 (Arenz et al., 2014a;[START_REF] Arenz | Molecular basis for erythromycin-dependent ribosome stalling during translation of the ErmBL leader peptide[END_REF]Bischoff et al., 2014;[START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF][START_REF] Vazquez-Laslop | Molecular mechanism of drugdependent ribosome stalling[END_REF]. Molecular dynamics studies of erythromycin-bound ribosomes showed that A2602 adopts a specific position, rotated 110° from its position in the apo-ribosome [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF]. It has been reported from the ErmCL-70S and TnaC-70S structures that the base of A2602 adopts a specific conformation that prevents A-site tRNA accommodation or release factor binding, respectively, thereby excluding these steps in the arrested ribosomes (Arenz et al., 2014a;Bischoff et al., 2014). In the MKF-70S structure, the density for this particular base is very weak, which might indicate either that higher-resolution information is necessary for this position, or that A2602 is disordered and has no impact on the absence of the A-site tRNA. In contrast, the bases U2506, U2585 and A2062 adopt defined conformations in the MKF-70S structure, which differ from their canonical orientations. U2506 is a universally conserved base, and point mutations show a reduction in peptide bond formation but not in peptide release [START_REF] Youngman | The active site of the ribosome is composed of two layers of conserved nucleotides with distinct roles in peptide bond formation and peptide release[END_REF].
U2506 rotates during A-site tRNA accommodation to rearrange the PTC during the induced fit (Schmeing et al., 2005a). In arrested ribosomal complexes, U2506 was reported in the ErmCL-70S and VemP-70S ribosome structures to form possible stacking interactions with the peptide chain or to adopt a conformation that would prevent A-site tRNA accommodation (Arenz et al., 2014a;[START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF]. Figure 36 illustrates that U2506 adopts a different conformation in the MKF-70S structure compared to its position during the different stages of the elongation cycle and during other nascent chain-mediated translational arrest events: In the MKF-70S structure, the base of U2506 is 5.3 Å away from the penultimate lysine side chain, making it unlikely to form stacking interactions as was reported for the ErmCL structure, in which the base of U2506 is 3.5 Å away from the ErmCL arrest peptide (Figure 36B; Arenz et al., 2014a). U2506 adopts two conformations within the VemP structure that would clash with the accommodation of the A-site tRNA (Figure 36C; [START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF]). In the MKF-70S structure, it appears that U2506 is pushed out of its position and adopts a position different from that in other arrest-peptide structures (Figure 36). In summary, the base of U2506 adopts a defined orientation that is distinct from those reported so far, but it is too far from the nascent peptide to form direct contacts. Consequently, its positioning might be due to the rearrangement of the PTC. The 23S rRNA residue U2585 is universally conserved and is characterized by a highly mobile base, which adopts different positions during peptide bond formation (Schmeing et al., 2005a).
During nascent chain-mediated translational arrest, U2585 can adopt conformations that differ from its positions during the canonical translation cycle, as illustrated in Figure 37: U2585 adopts different conformations during the translation cycle. In the pre-induced state, it protects the ester bond of the peptidyl-tRNA from hydrolysis. During A-site tRNA accommodation, the base of U2585 moves away from the ester bond of the P-site tRNA towards the 2'OH group of A76 of the P-site tRNA, and peptide bond formation can occur (Schmeing et al., 2005a). In the MKF-70S structure, the position of U2585 differs from the induced and uninduced orientations (Figure 37A) and from the different orientations that it adopts during arrest along ErmBL, ErmCL or TnaC (Figure 37B). In contrast, U2585 adopts a conformation similar to that observed in VemP- or SecM-arrested ribosomes (Figure 37C). In the VemP structure, U2585 adopts this position due to steric hindrance by the helical arrest peptide. Interestingly, the fMKF peptide would not clash with the conformations of U2585 in the pre-accommodation and pre-catalysis states, so the base must be forced into this position by allosteric rearrangements of the PTC and by stacking interactions of the base of U2585 against the arrest peptide. Molecular dynamics studies of erythromycin-bound ribosomes suggest that U2585 adopts a specific conformation with a distance of 8.3 Å to the cladinose and 9.6 Å to the desosamine sugar. In the MKF-70S structure, U2585 is 8.2 Å away from the cladinose sugar and 10-12 Å away from the desosamine sugar. Its conformation in the MKF-70S structure thus appears similar to the conformation calculated from the molecular dynamics studies, which restricts the activity of the PTC [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF].
Another universally conserved base is the 23S rRNA residue A2062, which is located in the tunnel across from the erythromycin binding site [START_REF] Vazquez-Laslop | Molecular mechanism of drugdependent ribosome stalling[END_REF]. This base is known to adopt different conformations depending on the translational state of the ribosome. Ribosomes carrying the point mutations A2062C or A2062U do not arrest when translating the ErmCL peptide in the presence of erythromycin [START_REF] Vazquez-Laslop | Molecular mechanism of drugdependent ribosome stalling[END_REF]. It has been hypothesized that the base of A2062 acts as a sensor for arrest peptides: when the ribosome arrests, the base of A2062 re-orientates and might interfere with the base pairing of G2061 with A2451, thereby inducing a conformational change within the PTC [START_REF] Vazquez-Laslop | Molecular mechanism of drugdependent ribosome stalling[END_REF]. However, this cannot be generalized to all arrest peptides. More recent studies have shown that ErmBL, ErmDL, TnaC and MRL(R) are independent of the chemical properties of the base of A2062 ([START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF]Vázquez-Laslop et al., 2010). The orientation of the base of A2062 in the MKF-70S structure in comparison to its position in other structural studies of the bacterial ribosome is shown in Figure 38. The comparison of structures of erythromycin-bound 70S ribosomes of E. coli and T. thermophilus (Figure 38B) shows two distinct orientations of A2062: in the 70S E. coli ribosome (PDB code: 4V7U), A2062 points towards the cytoplasm, while it points towards the PTC in the 70S T. thermophilus ribosome. The comparison of various arrest complexes (Figure 38C) reveals that the base of A2062 adopts different conformations according to the nature of the arrest peptide.
In the ErmBL-arrested ribosome (PDB code: 5JTE), the base points towards the cytoplasm, while it points towards the PTC in the ErmCL- (PDB code: 3J7Z) and TnaC-arrested (PDB code: 4UY8) ribosomes. In SecM-arrested ribosomes (PDB code: 3JBU, Figure 38D), the base points in a similar direction as in MKF-70S, while in VemP-arrested ribosomes (PDB code: 4NWY) the base points towards the peptidyl transferase center. In the MKF-70S structure, the density for A2062 was clearly defined and the base was modeled pointing towards the PTC (Figure 38). This orientation is different from its orientations during the elongation cycle: during pre-accommodation, A2062 points into the middle of the ribosomal tunnel, while it is oriented towards the cytoplasm during peptide bond formation and translocation ([START_REF] Lin | Conformational changes of elongation factor G on the ribosome during tRNA translocation[END_REF]Polikanov et al., 2014;Schmeing et al., 2005a) (for simplicity, Figure 38A only shows the post-catalysis orientation; all steps are shown in Figure 64). The erythromycin binding site is located across from A2062. Crystal structures of erythromycin-bound bacterial ribosomes show two distinct positions of A2062. In the 70S E. coli ribosome structure, A2062 points towards the cytoplasm, while it points towards the PTC in the 70S T. thermophilus structure (Figure 38B; Bulkley et al., 2010), although rotated by 90° compared to the MKF-70S structure. This indicates that erythromycin allows two distinct orientations of the base of A2062. The base of A2062 was identified to be important in the drug-dependent arrest of ErmCL, and it has been shown that it forms direct contacts with the arrest peptide (Arenz et al., 2014a;[START_REF] Vazquez-Laslop | Molecular mechanism of drugdependent ribosome stalling[END_REF]. However, in the ErmCL structure the base of A2062 points towards the PTC and adopts an orientation similar to that of the erythromycin-bound 70S T.
thermophilus structure. Biochemical studies have shown that for a number of arrest sequences the chemical properties of the base of A2062 are crucial. These arrest sequences include ErmCL, ErmAL, and SecM, while ErmBL, TnaC and ErmDL appear to be independent of the chemical properties of the base of A2062 (Vázquez-Laslop et al., 2010). In the ErmBL structure, A2062 points towards the cytoplasm. In contrast, A2062 in the most recent TnaC structure adopts a conformation similar to that observed in the MKF-70S structure (Figure 38C). In the most recent SecM structure, A2062 adopts a slightly different orientation compared to the MKF-70S structure but points towards the PTC, while in the VemP structure the base points into the ribosomal exit tunnel (Figure 38D, [START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF][START_REF] Zhang | Mechanisms of ribosome stalling by SecM at multiple elongation steps[END_REF]). Like TnaC, MRLR-mediated arrest in the presence of erythromycin has been shown to be independent of the nature of A2062 [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF]. However, molecular dynamics studies of SecM- and TnaC-arrested ribosomes show a reduction of the flexibility of A2062 [START_REF] Gumbart | Mechanisms of SecM-mediated stalling in the ribosome[END_REF]. In contrast to ErmCL, the base of A2062 is 4.5 Å away from the backbone of the MKF peptide and consequently cannot form any hydrogen bonds. In light of the MKF-70S structure, however, it is possible that even though the nature of residue 2062 is not important for the arrest process, the mere presence of a base within the tunnel at this location is sufficient to cause arrest.
Indeed, the base of A2062 could prevent the peptide backbone rotation necessary to allow Lys -1 out of the A-site crevice, as illustrated in Figure 39. In the absence of erythromycin, A2062 points towards the cytoplasm. This allows the rotation of the peptide backbone of the MKF peptide and the reorientation of the lysine side chain. Consequently, the following positively charged A-site tRNA can bind and peptide bond formation can occur. In contrast, the presence of erythromycin limits the flexibility of A2062, and in the presence of the peptide this base points up towards the PTC. This directly affects the accommodation of the positively charged aminoacyl-tRNA, and the ribosome undergoes peptide-mediated translational arrest. In summary, the base of A2062 rotates more easily towards the cytoplasm in the absence of erythromycin, thus allowing Lys -1 to leave the A-site crevice. Consequently, the following aminoacyl-tRNA carrying a positive charge can bind and peptide bond formation can occur (Figure 39). In contrast, the presence of erythromycin could favor a conformation of A2062 in which the base points towards the PTC. The ribosome arrests when A2062 points towards the PTC, inhibiting the reorientation of the arrest peptide and forcing Lys -1 to point into the A-site crevice. Consequently, the following A-site tRNA carrying a K, R or W side chain cannot bind to the PTC due to steric and electrostatic hindrance. The ribosome is arrested due to the prevention of A-site tRNA accommodation (Figure 39).

Polyproline-mediated arrest

Complexes to study arrest along consecutive proline motifs

Proline is an N-alkylamino acid and its incorporation into the growing peptide chain is slower than that of other canonical amino acids [START_REF] Muto | Peptidyl-prolyl-tRNA at the ribosomal P-site reacts poorly with puromycin[END_REF][START_REF] Pavlov | Slow peptide bond formation by proline and other N-alkylamino acids in translation[END_REF].
The ribosome arrests if the mRNA encodes three consecutive prolines. This arrest is released by the translation factor EF-P [START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF][START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]. Further studies showed that the amino acid composition at the -2, -1 and +1 positions directly influences the strength of the arrest ([START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF]Starosta et al., 2014a). The complexes that I sought to assemble in order to study this arrest using structural biology are summarized in the following figure: The -2 position was chosen to be an arginine in order to improve the solubility of the flexizyme substrate. Position -1 can be an alanine or a proline, position 0 is a proline, and the incoming amino acid is a proline or its unreactive derivative tetrahydrofuroic-2-acid. All complexes are to be formed in the presence or absence of EF-P. The experimental design for this project includes the use of AcRP-tRNAi Met , AcRA-tRNAi Met , and AcRAP-tRNAi Met . All of these peptides can be used to induce strong polyproline-mediated arrest ([START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF]Starosta et al., 2014a). The incoming amino acid will be proline or tetrahydrofuroic-2-acid (THFA). THFA is a derivative of proline in which the pyrrolidine ring is replaced by a tetrahydrofuran ring, so it cannot form a covalent bond with the nascent chain. This will inhibit the release of the polyproline-mediated arrest by EF-P, trapping the complex in a pre-release state. In doing so, the impact of the modification of EF-P can be studied in detail. For crystallization experiments, 70S T.
thermophilus ribosomes were used. As no data has so far been published on E. coli EF-P binding to the 70S T. thermophilus ribosome, experiments were performed using EF-P from both species. A recent study showed that the D-loop of tRNA Pro is crucial for EF-P activity [START_REF] Katoh | Essential structural elements in tRNA Pro for EF-P-mediated alleviation of translation stalling[END_REF]. Therefore, E. coli tRNA Pro had to be purified from cells and prolinylated in vitro in order to be used for amino acid delivery. The Pro-tRNA Pro can be used for one round of translocation, replacing the initiator tRNA with tRNA Pro , so that the "native" complex can be studied. The complex can be translocated using purified, recombinantly overexpressed E. coli EF-G [START_REF] Mikolajka | Differential Effects of Thiopeptide and Orthosomycin Antibiotics on Translational GTPases[END_REF].

tRNA Pro expression and purification

Recent studies showed that the D-loop of tRNA Pro is essential for the function of EF-P [START_REF] Katoh | Essential structural elements in tRNA Pro for EF-P-mediated alleviation of translation stalling[END_REF]. In this study, the authors used flexizymes to prolinylate different in vitro transcribed tRNAs with varying lengths of the D-loop and varying base-pairing strengths depending on the nucleotide composition of the D-stem. To investigate the molecular mechanism of arrest along consecutive prolines structurally, tRNA Pro was isolated from cells to ensure the presence of the natural nucleotide modifications. E. coli cells have three different genes for tRNA Pro , named proK, proL, and proM. As proK is transcribed as a monocistronic transcript [START_REF] Mohanty | Endonucleolytic cleavages by RNase E generate the mature 3′ termini of the three proline tRNAs in Escherichia coli[END_REF], this tRNA Pro gene was selected for cloning. In addition, the CGG anticodon of proK results in a tRNA Pro that is incapable of wobble base pairing.
Consequently, proK is the only tRNA Pro that specifically recognizes the CCG codon. To obtain large amounts of modified proK, two expression strategies were used by cloning the gene into two different types of vectors, as indicated in Figure 41 ([START_REF] Meinnel | Maturation of pre-tRNAfMet by Escherichia coli RNase P is specified by a guanosine of the 5′-flanking sequence[END_REF][START_REF] Meinnel | Fast purification of a functional elongator tRNAmet expressed from a synthetic gene in vivo[END_REF]). ProK was cloned into the plasmid pUC19 by Dr. K. Kishore Inampudi. As tRNAs are transcribed as longer precursors, the gene was cloned with 8 nt-long flanking regions at the 5' and 3' ends, similar to the native precursor, in order to ensure recognition by the processing enzymes RNase E and RNase P [START_REF] Mohanty | Endonucleolytic cleavages by RNase E generate the mature 3′ termini of the three proline tRNAs in Escherichia coli[END_REF]. Transcription was put under the control of a T7 promoter and a T7 terminator. The advantage of using the T7 promoter is that transcription can be finely regulated in response to different concentrations of L-arabinose, ranging from 0.01% (w/v) up to 1% (w/v), by using E. coli BL21AI cells, in which the gene for the T7 polymerase is under the control of the araBAD promoter. Consequently, the tRNA modification machinery should not be saturated, ensuring that tRNA Pro is homogeneously modified. As a second overexpression strategy, proK was cloned into pBSTNAV vectors. Two different variants of the vector were used: pBSTNAV2OK [START_REF] Meinnel | Fast purification of a functional elongator tRNAmet expressed from a synthetic gene in vivo[END_REF] and pBSTNAV3S [START_REF] Meinnel | Maturation of pre-tRNAfMet by Escherichia coli RNase P is specified by a guanosine of the 5′-flanking sequence[END_REF]. Both vectors contain flanking regions optimized for RNase P processing.
In pBSTNAV3S, the flanking region was optimized for 5' processing in cases where the first nucleotide of the tRNA is not base-paired, as is the case for tRNAi Met [START_REF] Meinnel | Maturation of pre-tRNAfMet by Escherichia coli RNase P is specified by a guanosine of the 5′-flanking sequence[END_REF][START_REF] Meinnel | Fast purification of a functional elongator tRNAmet expressed from a synthetic gene in vivo[END_REF]. The gene is under the control of the lipoprotein promoter, resulting in constitutive overexpression.

Overexpression tests of tRNA Pro

To identify the optimal overexpression strategy, pUC19-tRNA Pro was transformed into E. coli BL21AI cells, while pBSTNAV2OK and pBSTNAV3S with and without the proK gene were transformed into the recA-deficient E. coli strains DH5α and HB101. E. coli BL21AI cells transformed with pUC19-tRNA Pro were induced at an OD600 of 0.6 with 0.01%, 0.1% or 1% L-arabinose. Expression levels for each condition were detected by Northern blotting as described in chapter 3.9. The amount of total RNA loaded was normalized based on the optical density of the cell culture. The following figure shows the Northern blot analysis of tRNA Pro expressed using the pBSTNAV or pUC19-proK vectors: The amount of tRNA Pro produced is higher using the pBSTNAV vectors than with the pUC19 vector. In the case of the BL21AI cells, the intensity of the bands correlates with the concentration of L-arabinose used for induction. As shown in Figure 42, all constructs overexpressed proK. The yields obtained with the pUC19 constructs increase with increasing concentrations of L-arabinose, but overexpression of proK using pBSTNAV vectors resulted in higher overall yields. Furthermore, the overnight culture using E. coli BL21AI reached only a final OD600 of 2.6, whereas the overnight culture of E. coli HB101 reached a final OD600 of 8. The bands detected for the L-arabinose-induced expression run higher than the bands detected from constitutive expression.
An explanation could be that the lanes corresponding to pBSTNAV overexpression were heavily overloaded, which altered the overall running behavior; it is therefore not an indication of a modification of the tRNA. In addition, slight differences within the PAGE and during blotting might have influenced the running behavior. The higher intensities of the bands on the Northern blot, as well as the larger amounts of cells obtained after overnight expression, might correspond to higher yields of tRNA Pro for purification. Consequently, pBSTNAV constructs were used for further experiments. As the expression has to be performed in recA- strains, pBSTNAV constructs were transformed into E. coli DH5α and HB101. The resulting Northern blot is shown in the following figure. As illustrated in Figure 43 on the left, the probe against proK is specific for proK, as shown by its inability to detect proM despite the minor sequence differences within the anticodon loops of the two tRNA species. The right part of the Northern blot (Figure 43) shows that RNA extracted from cells containing the proK plasmids displays a higher expression level of tRNA Pro CGG than RNA extracted from cells containing the empty plasmid. In addition, the overexpressed tRNA Pro appears to be of the correct length, as it migrates at the same level as the in vitro transcribed tRNA. Furthermore, there is no visible difference in the intensity of the overexpression band between E. coli DH5α and HB101. As the overnight culture of HB101 reached an OD600 of 8 while the DH5α culture reached only an OD600 of 2.8, the E. coli HB101 cells were used for large-scale expression, as the higher cell mass corresponds to a larger amount of total tRNA. Genomic DNA, plasmid DNA and long rRNAs were removed using increasing concentrations of isopropanol. Subsequently, the total tRNA sample was deaminoacylated by incubating the extract at 37°C for 2 h at pH 9.0, as the ester bond between the tRNA and the amino acid is sensitive to high pH.
Using Q-sepharose and RP-C4 HPLC chromatography, tRNA Pro was specifically enriched based on its chemical properties (chapter 3.5). Figure 44 shows the different purification steps with the chromatograms and the corresponding Northern blot analysis. For simplicity, the denaturing PAGE is only shown for the Q-sepharose column samples: Figure 44 summarizes the chromatographic steps of the tRNA Pro CGG purification protocol. The first step is the removal of higher molecular weight RNA species. Q-sepharose is an anion exchanger that binds negatively charged molecules such as RNAs. With increasing concentrations of Cl- anions, RNA species elute in order of increasing negative charge. Consequently, longer RNA species with a greater overall negative charge elute later than the shorter tRNAs. Figure 44A shows the main elution peak with two maxima. The analysis of the fractions indicates that higher molecular weight species begin to co-elute within the second peak (Figure 44B). For further purification, fractions corresponding to an elution volume of 500-700 mL were pooled and concentrated by ethanol precipitation. By using proline-tRNA synthetase (ProS) and L-proline, only tRNA Pro becomes prolinylated, thus altering its chromatographic behavior during reversed-phase chromatography. The sample was loaded onto an RP-C4 HPLC column and eluted with an increasing concentration of methanol in the buffer to disrupt hydrophobic interactions. As prolinylation increases the hydrophobicity of tRNA Pro , the aminoacylated species elutes at approximately 5-5.5 CVs (Figure 44C, D, highlighted in yellow), while the deaminoacylated tRNAs elute 0.5 CV later. Consequently, tRNA Pro is specifically enriched during this purification step. The Northern blot of the prolinylated-tRNA sample (Figure 44D) indicates an additional, strong signal around 7 CV.
These fractions were used for a second prolinylation step and re-loaded on the C4 column to exclude the possibility that they had been separated from the main prolinyl-tRNA Pro peak due to amino acid hydrolysis. As the peak re-eluted at the same column volume as before (data not shown), it is possible that the sample contains unprocessed precursor tRNA Pro or another, longer tRNA that can be detected with the DNA probe used. From 20 g of E. coli HB101 cells, 1 mg of tRNA Pro (proK) was purified following this protocol. To illustrate the enrichment in the purity of tRNA Pro during the various purification steps, samples were analyzed by denaturing PAGE (100 ng per lane) and Northern blotting (1 µg per lane), as shown in Figure 45: Figure 45 shows a summary of all purification steps by denaturing PAGE and Northern blotting. For denaturing PAGE, 100 ng RNA were loaded, and 1 µg RNA was loaded on the Northern blot in order to observe the enrichment of tRNA Pro CGG over the various purification steps. Denaturing PAGE (Figure 45A) confirms that over the different purification steps the intensity of the band corresponding to tRNA Pro (proK) increases, while the intensities corresponding to higher molecular weight RNAs decrease but still remain to a certain extent. Indeed, the tRNA Pro sample shows a double band even at the end of the purification procedure, which could correspond to a contamination (approx. 20%) by another tRNA. The Northern blot (Figure 45B) shows that the relative concentration of tRNA Pro increases, as indicated by the progressively greater intensity of the corresponding band over each purification step. tRNA Pro characterization After purification, different methods were used to assess the purity, extent of modification and activity of the purified tRNA Pro . First, the purity and modification state of tRNA Pro were analyzed using native mass spectrometry.
To do so, a 10 µM tRNA Pro solution was prepared in such a way as to completely remove Na+ ions, which interfere with subsequent data interpretation. This was achieved by performing repeated washes with 150 mM NH4(CH3CO2). MODOMICS is a database listing tRNA modifications and the corresponding modification pathways from all kingdoms of life [START_REF] Bujnicki | Modomics: a database of RNA modification pathways[END_REF][START_REF] Cantara | The RNA modification database, RNAMDB: 2011 update[END_REF][START_REF] Dunin-Horkawicz | MODOMICS: a database of RNA modification pathways[END_REF]. According to the MODOMICS database, the tRNA Pro encoded by proK carries five different modifications, each present once: dihydrouridine (D, +2 Da), 1-methylguanosine (K, +14 Da), 7-methylguanosine (7, +14 Da), ribothymidine (T, +14 Da) and pseudouridine (P, +0 Da). All modifications except pseudouridine result in a difference in mass compared to the unmodified sequence. The theoretical electrospray series for the unmodified and modified tRNA Pro were calculated using the bioinformatics tool Mongo Oligo calculator [START_REF] Rozenski | The RNA modification database: 1999 update[END_REF] and compared to the detected electrospray series. This comparison revealed that the sample mainly contained modified tRNA Pro (proK) with minor contaminations. Additionally, Figure 46B shows that the main peak corresponds to a cation adduct of the tRNA Pro even though the sample was rigorously washed. The two possible cations are Mg2+ (24.3 Da) and Na+ (23.0 Da), both of which were used during tRNA Pro purification. The high charge states detected for the tRNA Pro make a distinction between them impossible. The activity of the purified tRNA Pro was verified by toeprinting, using an RNA template that encodes the open reading frame AUG CCG CCG AUC UAA (MPPI*). The reaction was performed in a custom-made PURExpress system devoid of tRNAs, amino acids, release factors and ribosomes (chapter 3.7).
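The comparison between the theoretical and detected electrospray series can be sketched in a few lines of code: the nominal mass shifts of the modifications are summed onto the mass of the unmodified transcript, and the expected m/z values for negative-mode charge states are derived. This is only an illustration of the calculation performed with the Mongo Oligo calculator; the unmodified mass used below is a placeholder, not the actual proK value, and the shifts are nominal.

```python
PROTON = 1.007276  # mass of a proton in Da

# Nominal mass shifts of the proK modifications (pseudouridine is an
# isomerization and therefore mass-silent)
MOD_SHIFTS = {
    "dihydrouridine (D)": 2.0,
    "1-methylguanosine (K)": 14.0,
    "7-methylguanosine (7)": 14.0,
    "ribothymidine (T)": 14.0,
    "pseudouridine (P)": 0.0,
}

def modified_mass(unmodified_mass):
    """Neutral mass (Da) of the fully modified tRNA."""
    return unmodified_mass + sum(MOD_SHIFTS.values())

def esi_series(neutral_mass, charges):
    """Expected m/z values for negative-mode ESI: m/z = (M - z*H) / z."""
    return {z: (neutral_mass - z * PROTON) / z for z in charges}

m_unmod = 24500.0               # placeholder mass of the unmodified tRNA (Da)
m_mod = modified_mass(m_unmod)  # +44 Da from the modifications
for z, mz in sorted(esi_series(m_mod, range(20, 25)).items()):
    print(f"[M - {z}H] {z}-  m/z = {mz:.2f}")
```

Comparing such a series against the observed peaks shows directly whether the +44 Da shift expected from the modifications is present; a remaining offset of roughly 23-24 Da per adduct would point to the Na+/Mg2+ adducts discussed above.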
This minimal system allowed the introduction of the previously purified components. Several control reactions (Figure 47) were performed: lane 1 shows the reaction containing total tRNA and all amino acids; as release factors were omitted from the reaction, ribosomes stall with the isoleucine codon in the P-site and the stop codon in the A-site (band indicated with the green arrow). Toeprints corresponding to ribosomes arrested at the start codon were loaded in lanes 2 and 4, indicated by the dark blue arrow. Lane 2 contains total tRNA with 100 µM L-methionine as the sole amino acid added to the reaction, while in lane 4 total tRNA was replaced by 5 µM tRNAi Met . Lane 3 contains all tRNAs but only the amino acids L-methionine and L-proline, resulting in ribosomes arrested at the third (Pro) codon (blue arrow). The same pattern is visible in the last lane, which contains the two amino acids (100 µM each), 5 µM tRNAi Met and 5 µM purified tRNA Pro . This indicates that the purified tRNA is recognized by the proline-tRNA synthetase, decodes the CCG codon and is incorporated into the growing polypeptide chain. The additional band indicated with the cyan arrow corresponds to ribosomes arrested at the stop codon, suggesting that the previously detected impurity might correspond to tRNA Ile . Thus, the purified tRNA Pro is active in an in vitro translation system. To summarize, the purified sample contains a high proportion of active tRNA Pro with minor contaminations. The latter should be excluded during the correct decoding of the CCG codon by the ribosome and are therefore unlikely to pose a problem for structural studies.
Purification and activity of Elongation Factor P (EF-P) Ribosomal arrest along consecutive proline motifs is released by the protein factor EF-P [START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF][START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]. Various studies have shown that EF-P needs to be modified post-translationally to be active. Knockout cells lacking the EF-P-modifying enzymes show a similar phenotype to efp knockout cells, underlining the importance of the modification [START_REF] Bullwinkle | R)-β-lysine-modified elongation factor P functions in translation elongation[END_REF][START_REF] Navarre | PoxA, yjeK, and elongation factor P coordinately modulate virulence and drug resistance in Salmonella enterica[END_REF][START_REF] Park | Post-translational modification by β-lysylation is required for activity of Escherichia coli elongation factor P (EF-P)[END_REF]. In E. coli, K34 of EF-P is β-lysinylated by the enzymes YjeK (EpmB) and YjeA (EpmA). YjeK converts α-lysine into β-lysine, which is then transferred to K34 by YjeA [START_REF] Bailly | Predicting the pathway involved in post-translational modification of elongation factor P in a subset of bacterial species[END_REF][START_REF] Park | Post-translational modification by β-lysylation is required for activity of Escherichia coli elongation factor P (EF-P)[END_REF]. In a third step, this modified residue is hydroxylated by YfcM (EpmC), which is not essential to obtain fully functional EF-P [START_REF] Peil | Lys34 of translation elongation factor EF-P is hydroxylated by YfcM[END_REF]. In order to purify active, His6-tagged E. coli EF-P, this protein was co-overexpressed with untagged YjeA and YjeK in E. coli BL21gold cells as described previously [START_REF] Peil | Lys34 of translation elongation factor EF-P is hydroxylated by YfcM[END_REF]. The activity of the purified E.
coli EF-P was tested by toeprinting and the results are shown in Figure 48: The toeprinting experiment was performed using three different DNA templates to determine whether polyproline-mediated arrest is enhanced when the nascent peptide chain has a certain length. The long DNA template encoded MMHHHHHHRPPPI (1) and the shorter arresting sequence encoded MRPPPI (2). Both sequences contain one arginine followed by three consecutive prolines. This particular motif has been identified to cause strong polyproline-mediated arrest in the absence of EF-P (Starosta et al., 2014a). For both sequences, a toeprint corresponding to the arrest site can be detected in the absence of EF-P (Figure 48, dark blue arrow). Interestingly, the signal corresponding to the polyproline arrest is separated into a triple band, with strong bands corresponding to a spacing of 16 nt and 18 nt from the first nucleotide of the P-site codon. In the presence of purified EF-P, the arrest is released, as indicated by the toeprint corresponding to ribosomes with the stop codon in the A-site (Figure 48, red arrow). Consequently, the purified EF-P is active. These bands are also observable in the sample without EF-P, indicating that the ribosomes can overcome the arrest to a certain extent without EF-P. The third sequence, MPPI, is a non-arresting sequence and yields a toeprint that corresponds to ribosomes arrested at the stop codon (Figure 48). Comparison of EF-P from T. thermophilus and E. coli Crystallization experiments were performed using 70S T. thermophilus ribosomes, which are known to give rise to crystals diffracting to high resolution. So far, no data have been published on whether E. coli EF-P rescues polyproline-arrested 70S T. thermophilus ribosomes and vice versa. To gain an understanding of the release mechanism of EF-P, experiments were performed using EF-P from both species. As mentioned before, EF-P is post-translationally modified and these modifications are crucial for its activity.
Depending on the species, the crucial residue can be a lysine residue (e.g. E. coli EF-P) or an arginine residue (e.g. Pseudomonas aeruginosa and T. thermophilus EF-P). Among prokaryotes, the chemical nature of the modification can differ, from β-lysinylation (E. coli) to rhamnosylation (P. aeruginosa) [START_REF] Lassak | Arginine-rhamnosylation as new strategy to activate translation elongation factor P[END_REF][START_REF] Park | Post-translational modification by β-lysylation is required for activity of Escherichia coli elongation factor P (EF-P)[END_REF]. So far, the existence and nature of the EF-P modification in T. thermophilus remain unknown. The amino acid sequences of E. coli and T. thermophilus EF-P are not conserved, as shown in the following figure: T. thermophilus EF-P was overexpressed in and purified from E. coli BL21gold cells as published (chapter 3.3, (Blaha et al., 2009)). As T. thermophilus EF-P was overexpressed in the absence of modification enzymes, and no modification of R32 was reported within the structure of the 70S T. thermophilus ribosome in complex with T. thermophilus EF-P and tRNA [START_REF] Blaha | Formation of the first peptide bond: the structure of EF-P bound to the 70S ribosome[END_REF], it is safe to assume that the purified T. thermophilus EF-P is unmodified. The cross-species activity of EF-P from T. thermophilus and E. coli was assessed by toeprinting reactions supplemented with 1, 10 or 100 µM EF-P from either species. The resulting gel is shown in the following figure: Figure 50 shows that unmodified T. thermophilus EF-P does not release 70S E. coli ribosomes arrested on polyproline motifs. In contrast, E. coli EF-P releases the arrested ribosomes even at a concentration as low as 1 µM (Figure 50). It can be hypothesized that the arrest is not released by T. thermophilus EF-P due to the lack of modification, the inability of T. thermophilus EF-P to bind to the E. coli ribosome, or both. The binding affinity of E. coli EF-P and T.
thermophilus EF-P for the 70S T. thermophilus ribosome was assessed by crystallography. To do so, complexes containing 70S T. thermophilus ribosomes, one of the EF-P homologs and the non-hydrolysable substrates fMet-NH-tRNAi Met and Phe-NH-tRNA Phe (chapter 3.5) were formed and set up for crystallization. The sequence of the D-loop of tRNA Pro , which is crucial for EF-P function, is identical to that of the D-loop. All proteins were purified to high purity. The proteins have a molecular weight of 20.22 kDa for T. thermophilus EF-P and its mutants, while the purified E. coli EF-P has a molecular weight of 20.5 kDa. Figure 53 shows the corresponding denaturing protein gel after gel filtration. The mutants show the same molecular weight as T. thermophilus EF-P (Figure 53). The purified proteins show high purity and no degradation after gel filtration, as indicated by the absence of additional bands (Figure 53). The activity of the purified mutants was studied by toeprinting using a PURExpress system lacking release factors. Figure 54 shows the results of this experimental setup: E. coli EF-P releases the polyproline-mediated arrest, while T. thermophilus EF-P and the mutants do not (Figure 54). Reasons for this might be that the mutants are not recognized by the modification enzymes or that the mutation of the loop is not sufficient to enable the release of polyproline-arrested 70S E. coli ribosomes by T. thermophilus EF-P. All EF-P mutants were used to set up crystallization drops containing 50 µM of the protein. Although crystals could be obtained, they did not diffract to a high enough resolution (>8 Å) and it was therefore not possible to draw conclusions about the binding capacity or modification state of the mutants. Consequently, subsequent experiments were performed using 70S E. coli ribosomes and E. coli EF-P.
Toeprint to study polyproline-mediated arrest and its release by EF-P One of the main aims of this project was to obtain the structure of polyproline-arrested ribosomes and to gain a solid understanding of how the arrest is released by EF-P. As the D-loop of tRNA Pro is crucial for recognition by EF-P, the reaction was initiated from a dipeptidylated initiator tRNA, so that the complex can be formed using strategy 2. For the polyproline project, the dipeptides AcRP-CBT and AcRA-CBT, as well as the tripeptide AcRAP-CBT, were used to obtain the corresponding peptidylated initiator tRNAs using the flexizyme technique (chapter 4.2). The activity of these peptidyl-tRNAi Met species was analyzed by toeprinting using an mRNA template that encodes the open reading frame AUG CCG CCG AUC UAA (MPPI*). The mRNA template encodes two proline codons in order to induce stalling only in the presence of dipeptidylated or tripeptidylated initiator tRNA, on the second or first proline codon, respectively. The reaction was performed using a custom-made PURExpress system (NEB, Ipswich, MA, USA). 10 µM peptidyl-tRNAi Met , 20 µM tRNA Pro and 100 µM L-proline were incubated with 5 µM mRNA, factor mix and 2.5 µM ribosomes in order to pre-form the complex (5 min at RT). Subsequently, solution A (minus aa, tRNA) provided by the kit, which contains nucleotides, was added, allowing complexes that had not been arrested to be translocated (further information is provided in chapter 3.9). Certain reactions were supplemented with E. coli EF-P in order to allow the release of the arrested complex. The reaction was analyzed on a 7.5% sequencing PAGE and the result is shown in Figure 55: The start codon control was performed using 10 µM initiator tRNA and 100 µM L-methionine. The mRNA template used encodes MPPI, and ribosomes will not arrest along two consecutive prolines.
Consequently, in the presence of tRNAi Met , 100 µM L-methionine, 20 µM tRNA Pro and 100 µM L-proline, the ribosomes translate through both proline codons; most ribosomes then arrest on the third codon (Figure 55, cyan arrow) due to the absence of tRNA Ile . AcRP-tRNAi Met induces a weak arrest complex, indicated by a weak double band. The arrest is released by EF-P, as indicated by a laddering of the double band. AcRA-tRNAi Met forms a stronger stalling complex (middle blue arrow). In most cases the ribosome protects 16 to 17 nt, counting from the first nt of the P-site codon (Orelle et al., 2013b). The toeprint generated after initiation with the dipeptidyl-tRNAi Met has a spacing of 17 to 18 nt. This is consistent with the toeprinting results obtained when translating the whole arrest sequence (Figure 48, Figure 50, Figure 54 and Figure 56), where the band corresponding to the polyproline-mediated arrest was split into a triple band corresponding to 16-18 nt. An explanation could be that the ribosome adopts a different ratcheting state while being arrested on polyproline motifs. The arrest is released in the presence of E. coli EF-P, as indicated by the resolution of the arrest band into laddering. AcRAP-tRNAi Met already carries the arrest sequence, so the arrest site in the absence of EF-P is the start codon (dark blue arrow, Figure 55). Addition of EF-P decreases the intensity of the start codon band, indicating partial release of the arrested ribosomes. Overall, the polyproline-mediated arrest can be characterized in toeprint assays using peptidylated initiator tRNAs. The best stalling efficiency, followed by release by EF-P, was observed when initiating the reaction with AcRA-tRNAi Met . A reason could be that the C-terminal proline of the flexizyme-charged AcRP peptide has a restricted backbone geometry, resulting in reduced reactivity compared to AcRA-tRNAi Met .
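The band spacings discussed above follow a simple bookkeeping rule: reverse transcription stops 16-17 nt downstream of the first nucleotide of the P-site codon (Orelle et al., 2013b), with polyproline-arrested complexes spreading over 16-18 nt. A minimal sketch of this calculation (the 1-based mRNA coordinate of the start codon is an illustrative assumption):

```python
def toeprint_positions(p_site_codon, start_codon_pos=1, protected=(16, 17)):
    """Expected reverse-transcriptase stop positions (1-based mRNA
    coordinates) for a ribosome whose P-site holds codon `p_site_codon`
    (codon 1 = start codon). The ribosome protects `protected` nt,
    counted from the first nt of the P-site codon."""
    first_nt = start_codon_pos + 3 * (p_site_codon - 1)
    return [first_nt + n - 1 for n in protected]

# Ribosome arrested with the second Pro codon of AUG CCG CCG AUC UAA in the P-site
print(toeprint_positions(3))                          # [22, 23]
# Polyproline arrest observed as a triple band (16-18 nt spacing)
print(toeprint_positions(3, protected=(16, 17, 18)))  # [22, 23, 24]
```

Comparing such predicted positions with the observed bands makes it easy to decide whether a doublet or triplet reflects ribosomes arrested on adjacent codons or, as hypothesized above, a different conformational state of the same arrested complex.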
In summary, the complex can be formed using the PURExpress system from NEB, but the structure remains unknown. As crystals containing high concentrations of E. coli EF-P did not diffract, while the cryo-EM map of the MKF-70S structure showed clear density for the side chains, cryo-EM is a promising approach to obtain a similar result for the polyproline project. The strategies to obtain the complexes are discussed in further detail in the discussion (chapter 5.3). EF-P does not release other well-characterized nascent chain-mediated translation arrest peptides A recent in vivo study using fluorescently labeled E. coli EF-P has shown that EF-P binding events are more frequent than the occurrence of polyproline-mediated translational arrest [START_REF] Mohapatra | Spatial Distribution and Ribosome-Binding Dynamics of EF-P in Live Escherichia coli[END_REF], raising the question of whether EF-P has additional functions. Furthermore, an in vitro study investigating the effect of miscoding of the P-site tRNA showed that miscoding events lead to a reduction in the translocation rate and that the translocation rate can be enhanced in the presence of EF-P [START_REF] Alejo | Miscoding-induced stalling of substrate translocation on the bacterial ribosome[END_REF]. Both studies illustrate that EF-P might have multiple functions and can bind to ribosomes with empty E-sites [START_REF] Alejo | Miscoding-induced stalling of substrate translocation on the bacterial ribosome[END_REF][START_REF] Mohapatra | Spatial Distribution and Ribosome-Binding Dynamics of EF-P in Live Escherichia coli[END_REF], which appears to occur more frequently. In E. coli, EF-P is part of the ribosome rescue system that recognizes and releases stalled ribosomes. Ribosomal stalling can be induced by various factors, including the absence of stop codons, mRNA secondary structures, or amino acid starvation, to name only a few examples.
The ribosome rescue system is formed by transfer-messenger RNA (tmRNA) and proteins such as RelA and LepA, as reviewed by [START_REF] Giudice | The task force that rescues stalled ribosomes in bacteria[END_REF]. The question of whether EF-P might be involved in the release of ribosomes stalled by non-polyproline nascent chain-mediated arrest peptides was addressed by performing toeprinting on the well-characterized arrest peptides SecM, TnaC, ErmBL, ErmCL, and ErmDL. The results are shown in Figure 56: Figure 56 shows toeprints for several arrest peptides (SecM, TnaC, ErmBL, ErmCL, and ErmDL) in the presence of 2 µM E. coli EF-P. In contrast to ligand-dependent arrest sequences, the release mechanism for SecM has been documented: SecM is released by mechanical force applied by the pre-protein translocase SecA [START_REF] Butkus | Translocon "pulling" of nascent SecM controls the duration of its translational pause and secretion-responsive secA regulation[END_REF]. Consequently, SecM stalling should not be released by EF-P, otherwise the regulation mechanism would not work. The toeprint experiment (Figure 56B) shows that EF-P does not release SecM-arrested ribosomes in vitro, and this serves as a negative control. In the presence of tryptophan, TnaC arrests the ribosome when proline codon 24 is located in the P-site and the stop codon is located in the A-site, and release of the peptide is inhibited. To be able to distinguish arrest conditions from non-arrest conditions, the reaction was supplemented with release factors provided by the kit. On the other hand, the Erm leader peptides arrest ribosomes on internal codons in response to erythromycin; therefore, release factors could be omitted from the reaction in this case. Ribosomes arrested in the presence of ligands are indicated by dark blue arrows in Figure 56.
In all cases, reactions containing EF-P still show a strong signal for the arrested complexes (Figure 56C-E), while ribosomes arrested on polyproline stretches are released (Figure 56A). This means that the arrest peptides SecM, TnaC, ErmBL, ErmCL, and ErmDL are not released by EF-P in vitro. Discussion and perspectives Proline-rich antimicrobial peptides and implications for further drug development The combination of structural and biochemical studies led to a deepened understanding of the mechanism of action of the proline-rich antimicrobial peptides Bac7(1-16), Onc112, Met I and Pyr. The peptides bind to the ribosomal exit tunnel in a reverse orientation relative to the nascent peptide chain. Their binding site extends into the PTC, the A-site crevice and the CCA binding pocket. This blocks the accommodation of the A-site tRNA and thus also inhibits the transition from initiation to elongation ([START_REF] Gagnon | Structures of proline-rich peptides bound to the ribosome reveal a common mechanism of protein synthesis inhibition[END_REF]; Roy et al., 2015; Seefeldt et al., 2016; Seefeldt et al., 2015). A large range of well-characterized antibiotics that specifically target the PTC and the ribosomal exit tunnel bind to regions similar to those bound by PrAMPs, as illustrated in the following figure: As the binding site of Onc112 overlaps with the binding sites of several antibiotics, including chloramphenicol and erythromycin (Bulkley et al., 2010; Dunkle et al., 2010), it may be possible to modify PrAMPs in order to increase their binding affinity through additional hydrogen bonds and stacking interactions. In addition, the elongated fold of the PrAMPs covers a long binding surface and could slow down the emergence of resistance. This strategy is listed as an advantage of synergistic antibiotics, as reviewed by [START_REF] Kohanski | How antibiotics kill bacteria: from targets to networks[END_REF].
Synergistic antibiotics are pairs of antibiotics acting in concert, thereby enhancing their inhibitory impact. Well-characterized molecules that target the PTC and the upper ribosomal tunnel are streptogramins A and B. These antibiotics are bacteriostatic when used individually, but bactericidal when used together [START_REF] Vazquez | Studies on the Mode of Action of the Streptogramin Antibiotics[END_REF]. The enlarged binding surface covered by the two molecules might decrease the emergence of resistance [START_REF] Noeske | Synergy of Streptogramin Antibiotics Occurs Independently of Their Effects on Translation[END_REF]. Isolated E. coli cell lines carrying rRNA mutations known to confer resistance to antibiotics (e.g. macrolides) were used to determine the minimal inhibitory concentration (MIC) of Bac7 and Onc112 [START_REF] Gagnon | Structures of proline-rich peptides bound to the ribosome reveal a common mechanism of protein synthesis inhibition[END_REF]. The cells carried the point mutations A2503C and A2059C or the corresponding double mutation. While these cells were still sensitive to the same concentration of Bac7 (0.75), higher concentrations of Onc112 were necessary to achieve a similar effect [START_REF] Gagnon | Structures of proline-rich peptides bound to the ribosome reveal a common mechanism of protein synthesis inhibition[END_REF]. An explanation could be that the N-terminus of Bac7 anchors the peptide in its binding pocket, as it adopts a specific conformation due to stacking interactions with the tunnel bases [START_REF] Gagnon | Structures of proline-rich peptides bound to the ribosome reveal a common mechanism of protein synthesis inhibition[END_REF]. In order to increase the binding affinity of the PrAMPs, introducing positively charged aromatic moieties within the N-terminus might enhance stacking interactions. Recently published structures of the antimicrobial peptides Api137 and Klebsazolicin in complex with the 70S E. coli or T.
thermophilus ribosomes revealed that these peptides target the ribosomal exit tunnel ([START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]; Metelev et al., 2017). These peptides inhibit early elongation (Metelev et al., 2017) and termination [START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]. The following figure compares their structures to that of Bac7(1-16) ([START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]; Metelev et al., 2017; [START_REF] Scocchi | Molecular cloning of Bac7, a proline-and arginine-rich antimicrobial peptide from bovine neutrophils[END_REF]). * indicates post-translationally modified residues. Bac7(1-16, green) binds to the A-site tRNA binding pocket (1), the A-site crevice (2) and the upper tunnel. Api137 (B, purple, PDB code: 5O2R) binds to the A-site crevice and reaches further into the ribosomal tunnel [START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]. The post-translational modifications of KLB (C, PDB code: 5W4K) restrict the backbone, resulting in a "curled-up" conformation binding from the PTC into the upper tunnel (Metelev et al., 2017). The ribosome is taken from (Seefeldt et al., 2016). The initially presumed target of PrAMPs such as oncocin or apidaecin was DnaK. Several rounds of in vitro drug development led to the derivatives Onc112, Onc72, Api88 and Api137 ([START_REF] Knappe | Oncocin (VDKPPYLPRPRPPRRIYNR-NH2): a novel antibacterial peptide optimized against gram-negative human pathogens[END_REF]; Otvos et al., 2000; discussed in further detail in chapter 1.3).
However, cells lacking DnaK remained sensitive to these peptides, and Krizsan et al. were able to crosslink Api88 to bacterial ribosomes (Krizsan et al., 2014), indicating that the ribosome was the likely target of PrAMPs. These experiments were the starting point of this project and resulted in the structure determination of Onc112 bound to the bacterial ribosome. Recent biochemical and structural studies showed that Api137 traps the ribosome at the stop codon and prevents the release of the peptide [START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]. In contrast to Bac7(1-16) (Figure 58B), Api137 binds in the same orientation as the nascent peptide chain, with the N-terminus pointing towards the cytoplasm. Its binding site covers the PTC and reaches further into the upper tunnel than that of Bac7(1-16) (Florin et al., 2017). Although Api137 and Bac7(1-16) are both classified as PrAMPs, their binding sites, orientations and mechanisms of action differ extensively (Figure 58B). The second peptide is Klebsazolicin (KLB) (Metelev et al., 2017), an antimicrobial peptide of the microcin class that is produced ribosomally by K. pneumoniae as a defense mechanism. In contrast to PrAMPs, these bacteriocins are characterized by a high cysteine content. The cysteines are post-translationally modified, resulting in a cyclic backbone with a "curled-up" conformation (Metelev et al., 2017). Biochemical studies confirmed that KLB binding inhibits the translational cycle at the early elongation phase, consistent with the structure of this peptide in complex with the 70S T. thermophilus ribosome (Metelev et al., 2017). All six peptides, Bac7(1-16), Onc112, Pyr, Met I, KLB and Api137, display a large variety of sequences and come from very different organisms, ranging from bacteria to higher vertebrates.
All of these peptides target the PTC and reach into the upper ribosomal exit tunnel and/or into the A-site tRNA binding pocket ([START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]; [START_REF] Gagnon | Structures of proline-rich peptides bound to the ribosome reveal a common mechanism of protein synthesis inhibition[END_REF]; Metelev et al., 2017; Roy et al., 2015; Seefeldt et al., 2016; Seefeldt et al., 2015). As recently reviewed in detail for macrolide antibiotics, a major strategy in antibiotic development is to increase bioavailability, through greater drug stability and improved cellular uptake, while enhancing a drug's binding affinity for its target through chemical modifications [START_REF] George | The Macrolide Antibiotic Renaissance[END_REF]. For macrolide antibiotics, such modifications include the removal of the cladinose sugar or the attachment of novel side chains to the drug backbone [START_REF] George | The Macrolide Antibiotic Renaissance[END_REF]. Similarly, the length of the antimicrobial peptides and their sequence variability open up a new avenue for drug development. Future objectives will have to include improving the bioavailability and binding affinity of these molecules. This could be achieved by modifying the chemistry of their constituent building blocks, including the use of unusual amino acids and modifications of the backbone, such as the incorporation of β-, γ- and D-amino acids, as well as urea backbones or peptoids [START_REF] Pasco | Foldamers in Medical Chemistry[END_REF]. Bac7, Onc112, Met I, Pyr, KLB and Api137 are active against Gram-negative bacteria owing to the presence of the ATP-dependent peptide transporter SbmA (Mattiuzzo et al., 2007; Metelev et al., 2017; [START_REF] Runti | Functional Characterization of SbmA, a Bacterial Inner Membrane Transporter Required for Importing the Antimicrobial Peptide Bac7(1-35)[END_REF]).
Although SbmA has been reported to transport peptides, toxins, peptoids and peptide nucleic acids (PNAs), its physiological role remains unknown, mostly due to the absence of a detectable phenotype for deletion mutants under the conditions tested so far [START_REF] Corbalan | Functional and Structural Study of the Dimeric Inner Membrane Protein SbmA[END_REF]. Screening of Api137-resistant cells identified mutations within the SbmA transporter, highlighting the main limitation of using these peptides as potential drugs [START_REF] Florin | An antimicrobial peptide that inhibits translation by trapping release factors on the ribosome[END_REF]. To overcome this limitation, the uptake of PrAMPs could be coupled to an essential and species-specific transporter system, e.g. the iron-uptake pathway. In the structures I obtained, the C-termini of Bac7, Onc112 and Pyr could not be modeled due to the absence of density, indicating that they are disordered within the crystal ([START_REF] Gagnon | Structures of proline-rich peptides bound to the ribosome reveal a common mechanism of protein synthesis inhibition[END_REF]; Roy et al., 2015; Seefeldt et al., 2016; Seefeldt et al., 2015). C-terminal truncations of Onc112 led to loss of function in vivo, indicating that the C-terminus might be a crucial recognition site for cellular uptake (Seefeldt et al., 2015). Since the C-terminus is not important for binding to the ribosome, it can be modified in order to avoid the dependency on the SbmA transporter for uptake. As shown recently, introducing a fluorescent label into the PrAMP arasin 1 does not alter the inhibitory properties of the peptide compared to the unlabeled form [START_REF] Paulsen | Inner membrane proteins YgdD and SbmA are required for the complete susceptibility of Escherichia coli to the proline-rich antimicrobial peptide arasin 1 (1-25)[END_REF].
Instead of a fluorescent label, a molecule that is recognized specifically by an essential transporter could be introduced into the PrAMP, a strategy termed "Trojan horse", as reviewed in detail in [START_REF] Zgurskaya | Permeability barrier of Gram-negative cell envelopes and approaches to bypass it[END_REF]. An essential transporter system that has been exploited previously is the bacterial iron-uptake machinery. While Fe3+ is crucial for bacterial survival, its sources are often limited during infection. To complex Fe3+ specifically in the environment and transport the cation into the cell, bacteria evolved siderophores [START_REF] Braun | Recent insights into iron import by bacteria[END_REF]. Siderophores represent a large class of molecules with different characteristics and molecular weights ranging from 0.3 to 1 kDa. Previous attempts to attach siderophores to antibiotics such as chloramphenicol often resulted in loss of function [START_REF] Page | Siderophore conjugates[END_REF]. In contrast, some Streptomyces species have been shown to synthesize siderophore-attached macrolides. Furthermore, some microcins are linked post-translationally to siderophores ([START_REF] Vassiliadis | Isolation and Characterization of Two Members of the Siderophore-Microcin Family, Microcins M and H47[END_REF]; [START_REF] Vassiliadis | Insight into Siderophore-Carrying Peptide Biosynthesis: Enterobactin Is a Precursor for Microcin E492 Posttranslational Modification[END_REF]). The various siderophores and their corresponding transporters are often species-specific, resulting in narrow-spectrum antibiotics. This also opens up the possibility of creating PrAMPs capable of acting against Gram-positive bacteria, which normally lack the SbmA transporter. This makes the concept particularly interesting for further rounds of drug development.
In summary, some antimicrobial peptides have been shown to bind to the exit tunnel of bacterial ribosomes and to inhibit the initiation, early elongation or termination steps of translation. Structures of bacterial ribosomes in complex with these peptides have given us detailed insights into the mechanism of action of PrAMPs, making them a solid foundation for further drug development.

The flexizyme methodology to study nascent chain-mediated translational arrest

The second project of this thesis aimed to study translational arrest mediated by short nascent chains using structural biology methods. Forming the arrest complex by in vitro translation requires a long mRNA fragment that can interfere with the crystallization process (personal communication from Axel Innis). In order to avoid this, the arrest peptide was attached directly to the 3' CCA end of the initiator tRNA using the flexizyme methodology [START_REF] Goto | Flexizymes for genetic code reprogramming[END_REF]. After the activity of the peptidylated tRNA had been confirmed by toeprinting experiments, the arrest complexes were formed for X-ray crystallography. Because the ester bond linking the peptide to the tRNA was hydrolyzed during crystallization, a second strategy was developed involving the generation of 3'-NH2-tRNA using previously established protocols (Polikanov et al., 2014; [START_REF] Voorhees | Insights into substrate stabilization from snapshots of the peptidyl transferase center of the intact 70S ribosome[END_REF]). Attempts to peptidylate a 3'-NH2-tRNA using a flexizyme have not been successful to date. An alternative strategy to attach the peptide through a non-hydrolysable amide linkage was to add the last amino acid through an additional round of elongation, as discussed in detail in chapter 4.2.6. However, the resulting crystals revealed no density for the peptide or the A-site tRNA.
One reason could be that the binding affinity of the deacylated tRNAi Met for the P-site is too high for it to be replaced at crystallographic concentrations. The recent improvement in the resolution of structures obtained by cryo-EM has made it possible to study short arrest peptides with this method. Here, the probability of hydrolysis of the ester-linked arrest peptide was reduced because the complex was frozen directly after its formation. During this thesis, two case studies were used to establish a workflow for arrested-ribosome complex formation: (i) fM+X(+)-mediated arrest in the presence of erythromycin, and (ii) translational arrest along consecutive prolines. Future experiments should focus on the chemical properties of the individual amino acids of the arrest peptides, in order to enhance or abolish the arresting properties of the peptide, as well as on the involvement of the peptide backbone in contact formation with ribosomal residues. The main project-specific results are put back into context and perspectives are discussed in the following sections.

fMKF(R) arrests the 70S E. coli ribosome in the presence of erythromycin

fM+X(+) represents a set of short nascent peptides that arrest the bacterial ribosome in the presence of erythromycin ([START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF]; [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF]). To study the underlying molecular mechanism, the sequence fMKF(R) was selected, as discussed in detail in chapter 4.3. The fMKF(R) sequence had not previously been shown to arrest the 70S E. coli ribosome; here, this was demonstrated by toeprinting of the whole ORF and by initiating translation with an fMKF peptidyl moiety attached to the initiator tRNA in the presence of the drug.
The structure of the complex was obtained by cryo-EM at an overall resolution of 3.9 Å. The density of the nascent peptide provided information on the orientation of two of the three peptide side chains. The advantage of the flexizyme methodology is that it allows peptides composed of non-canonical amino acids to be attached to the 3' CCA end of tRNAs. A previous study investigated the chemical properties of the A-site amino acid necessary for the arrest using tRNA mimics [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF]. Based on the structure obtained here and on that study, several modifications can be made to probe the chemical properties of the nascent peptide required for nascent chain-mediated translational arrest. The methionine side chain was not modeled due to the absence of density. This suggests that the methionine side chain could be disordered and raises the question of whether its presence is required for arrest. Bacterial translation is initiated in most cases with fMet-tRNAi Met, as reviewed by [START_REF] Laursen | Initiation of Protein Synthesis in Bacteria[END_REF] (exceptions are listed in chapter 1.1.2.1). Consequently, all toeprinting experiments were performed using the modifications listed in Figure 59. A first modification of the fMKF peptide could be the removal of the formyl group, to investigate whether its presence is important for the arrest. Another peptide could be synthesized in which the methionine is replaced by an acetyl moiety. These modifications can only be introduced using the flexizyme methodology (Figure 59A). Strikingly, no density could be detected for Arg-tRNA Arg in the A-site, even when the dilution buffer was supplemented with this tRNA. As seen in the structure of the fMKF-70S complex, the A-site crevice is blocked by the Lys (-1) side chain.
Consequently, the positive charge of the lysine side chain would be in close proximity to the positive charge of the incoming arginine side chain, resulting in electrostatic repulsion. It has been shown previously that the positive charge of the A-site amino acid side chain is crucial for arrest, whereas its length is not [START_REF] Sothiselvam | Binding of macrolide antibiotics leads to ribosomal selection against specific substrates based on their charge and size[END_REF]. The same cannot be said for the -1 residue at present. The flexizyme methodology allows us to study the impact of the length and charge of the -1 residue side chain, using the modifications listed in Figure 59B. The need for a particular orientation of the lysine side chain could be verified by replacing this residue with D-lysine. This would force the side chain to adopt the mirrored orientation and thus might prevent arrest in the presence of erythromycin. The toeprinting experiment demonstrating the activity of fMKF-tRNAi Met (chapter 4.4.3, Figure 28) indicates that, even in the presence of erythromycin, a certain percentage of ribosomes still translocates to the second codon. Consequently, the arrest strength can still be enhanced. To do so, changing the -1 side chain to a positively charged aromatic moiety or to an arginine might be enough to enhance the arrest through potential stacking interactions with the rRNA. In addition, the methionine could be replaced by groups that tend to form stacking interactions, such as aromatic moieties, in order to obtain a stronger arrest. The arrest strength could be assessed using toeprinting and puromycin sensitivity. Other methionine replacements could include linker side chains to attach the arrest peptide covalently to the macrolide.
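The charge-based reasoning above can be made explicit with a simple tally. The following sketch is purely illustrative (it is not part of the thesis experiments): it sums formal side-chain charges at neutral pH for hypothetical fM-X-F tripeptide variants in which the -1 residue is varied as proposed. Only canonical charged side chains are counted, and the formylated N-terminus is treated as neutral.

```python
# Formal side-chain charges at neutral pH; His treated as neutral, all
# non-listed residues contribute zero.
SIDE_CHAIN_CHARGE = {"K": +1, "R": +1, "D": -1, "E": -1}

def net_side_chain_charge(seq: str) -> int:
    """Sum of formal side-chain charges over all residues in `seq`."""
    return sum(SIDE_CHAIN_CHARGE.get(aa, 0) for aa in seq)

# Hypothetical -1 residue variants of the fMKF peptide (labels are mine).
variants = {"MKF": "wild-type, -1 Lys", "MRF": "-1 Arg", "MAF": "uncharged control"}
for seq, label in variants.items():
    print(f"{seq} ({label}): net side-chain charge {net_side_chain_charge(seq):+d}")
```

A screen over candidate -1 residues would keep the variants with a positive tally, consistent with the observation that the positive charge, rather than the side-chain length, matters for arrest.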
Previous attempts to peptidylate macrolides involved the attachment of long peptide chains [START_REF] Washington | Macrolide-Peptide Conjugates as Probes of the Path of Travel of the Nascent Peptides through the Ribosome[END_REF]. Another study used chloramphenicol attached to short tri-peptides from erm leader peptides, including MRL. These peptide-chloramphenicol fusions had an impact on in vitro translation comparable to that of chloramphenicol itself [START_REF] Mamos | On the use of the antibiotic chloramphenicol to target polypeptide chain mimics to the ribosomal exit tunnel[END_REF]. Chemically linking arrest peptides, without the tRNA, to erythromycin might overcome macrolide resistance, as the peptide would extend into the PTC, thus covering a larger binding site in the way synergistic antibiotics do [START_REF] Noeske | Synergy of Streptogramin Antibiotics Occurs Independently of Their Effects on Translation[END_REF]. The current resolution of the fMKF-70S structure is 3.9 Å, which is too low to draw conclusions about hydrogen-bonding interactions. To investigate hydrogen bonding between the backbone and ribosomal residues, the peptide backbone could be modified by N-alkylation or by replacing each peptide bond with an ester bond. In doing so, the importance of each peptide bond as an H-bond donor could be investigated. Other backbone modifications, such as the introduction of urea building blocks or β- and γ-peptides [START_REF] Pasco | Foldamers in Medical Chemistry[END_REF], might influence the folding and orientation of the peptide within the tunnel.
The orientation of the lysine side chain is stabilized by an allosteric rearrangement of the 23S rRNA, in particular of bases U2506, U2585 and A2062, which have been reported to be crucial for the activity of certain arrest peptides ([START_REF] Koch | Critical 23S rRNA interactions for macrolide-dependent ribosome stalling on the ErmCL nascent peptide chain[END_REF]; [START_REF] Vázquez-Laslop | Role of antibiotic ligand in nascent peptide-dependent ribosome stalling[END_REF]). The orientation of these bases was compared with that in several previously obtained structures, including complexes captured at various stages of the translation cycle or undergoing ligand-dependent or -independent translational arrest. Within the fMKF-70S ribosome structure, U2506 adopts a conformation that is different from that observed in other structures. Furthermore, U2585 adopts a position between pre-accommodation and pre-catalysis that is similar to the conformation observed in the VemP-arrested ribosome [START_REF] Su | The force-sensing peptide VemP employs extreme compaction and secondary structure formation to induce ribosomal stalling[END_REF] and that is stabilized by potential stacking interactions with the peptide. Strikingly, the base of A2062 adopts a specific orientation, pointing towards the PTC, that appears to be important for the arrest. It has been reported that arrest peptides can be classified into two categories, depending on whether or not their activity depends on the chemical properties of A2062 [START_REF] Vázquez-Laslop | Role of antibiotic ligand in nascent peptide-dependent ribosome stalling[END_REF]. Ribosomes carrying the A2062C or A2062U point mutations still arrest along the fMRL(R) motif in the presence of erythromycin [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF].
The fM+X(+) motif was classified as A2062-independent, and the conformation of the base of A2062 in the fMKF-70S structure is similar to that observed in the TnaC-70S structure, which is also independent of the chemical properties of A2062 (Bischoff et al., 2014; [START_REF] Sothiselvam | Macrolide antibiotics allosterically predispose the ribosome for translation arrest[END_REF]; [START_REF] Vázquez-Laslop | Role of antibiotic ligand in nascent peptide-dependent ribosome stalling[END_REF]). Although the nature of the base itself appears not to be important for the arrest process, it is possible that the mere presence of an unpaired base at this position within the tunnel is sufficient to restrict the rotation of the -1 lysine side chain in the presence of erythromycin, thus forcing it to remain within the A-site crevice. Interestingly, the base of A2062 adopts different orientations in the crystal structures of erythromycin in complex with the 70S ribosomes from E. coli and T. thermophilus, respectively (Bulkley et al., 2010; Dunkle et al., 2010). An explanation for this discrepancy could be that the binding of erythromycin favors the orientation in which the base points towards the PTC but does not entirely preclude the alternative conformation. While the fMKF(R) sequence is being translated in the presence of erythromycin, the base of A2062 might frequently point towards the PTC, thus forcing the -1 side chain into the A-site crevice and causing arrest. In contrast, if the base points towards the cytoplasm, peptide bond formation can occur and the ribosome can translocate. This would explain the appearance of a faint band in the toeprinting experiment discussed in chapter 4.4.3 and would make the formation of a stalled complex a stochastic event linked to the particular conformation adopted by A2062. Furthermore, the masked 3D classification revealed that 13.48% of the particles, corresponding to approx.
12 500 particles, contained A-, P- and E-site tRNAs (Figure 31), potentially representing those particles that did not arrest and in which the peptide might be connected to the A-site tRNA. Since approximately 30 000 particles are necessary for a high-resolution reconstruction of the ribosome [START_REF] Bai | Ribosome structures to near-atomic resolution from thirty thousand cryo-EM particles[END_REF], including micrographs containing lower-resolution data and collecting new data might yield enough particles to resolve the structure of the potentially non-arrested ribosomes. Another way to assess the role of the bases discussed above is molecular dynamics simulations. In a recently published study of the ErmBL arrest peptide, molecular dynamics simulations gave greater insight into the underlying mechanism and the mobility of the peptide itself (Arenz et al., 2016). Additionally, biochemical studies to further shed light on the importance of ribosomal bases in the arrest process could include atomic mutagenesis of the ribosome, a method that makes it possible to introduce unnatural bases into, or exclude bases from, a circularly permuted 23S rRNA. In a recent study using atomic mutagenesis, ribosomes lacking the base of A2062 were generated to study the impact of A2062 on nascent chain-mediated translational arrest along ErmCL in the presence of erythromycin [START_REF] Koch | Critical 23S rRNA interactions for macrolide-dependent ribosome stalling on the ErmCL nascent peptide chain[END_REF]. Using such ribosomes would give greater insight into the requirement for the base of A2062, since these ribosomes should not arrest when translating fM+X(+) motifs, in this case fMKF(R), in the presence of erythromycin. In summary, the flexizyme methodology will make it possible to perform further detailed studies to elucidate the arrest mechanism of fMRL(R)-type motifs.
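The particle numbers quoted above imply how much more data would be needed to reconstruct the minority class on its own. A back-of-the-envelope sketch (class fraction and counts taken from the text; the total dataset size is an inference from them):

```python
# If 13.48 % of the particles corresponds to ~12 500 A/P/E-tRNA-containing
# particles, estimate the total dataset size and the extra data needed to
# bring that class up to the ~30 000 particles typically required for a
# high-resolution ribosome reconstruction (Bai et al., 2013).

fraction = 0.1348          # share of particles in the A/P/E-tRNA class
class_particles = 12_500   # particles currently in that class
target = 30_000            # particles needed for high resolution

total_particles = class_particles / fraction  # implied size of the whole dataset
missing = target - class_particles            # additional particles needed in the class
growth = target / class_particles             # required growth of the dataset

print(f"whole dataset: ~{total_particles:,.0f} particles")
print(f"missing in class: {missing:,} particles (~{growth:.1f}x more data)")
```

That is, the full dataset already holds roughly 93 000 particles, and at a constant class fraction the data collection would have to grow about 2.4-fold to make the non-arrested class resolvable on its own.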
The advantage of the method is the flexibility to investigate the mechanism using non-natural sequences, such as removal of the N-terminal methionine or variation of the -1 side chain. Furthermore, the cryo-EM structure solved here revealed how binding of the macrolide to the ribosomal exit tunnel changes the conformation of several bases of the 23S rRNA within the PTC. Additionally, this structure is the first arrest complex in which the nascent peptide does not bypass the ligand. Further experiments will be necessary to confirm the hypothesized mechanism and to exploit the knowledge obtained for future drug development.

Polyproline-mediated arrest

Proline is the only N-alkyl amino acid among the canonical amino acids. Kinetic studies have shown that its chemical properties result in low reactivity during peptide bond formation [START_REF] Pavlov | Slow peptide bond formation by proline and other N-alkylamino acids in translation[END_REF]. The translation of consecutive prolines leads to ribosomal arrest that can be released by elongation factor EF-P ([START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF]; [START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]). To investigate polyproline-mediated ribosomal arrest structurally, the flexizyme methodology was used to initiate translation with different constructs. Translation reactions initiated with AcRP-tRNAi Met and AcRA-tRNAi Met form an arrested complex after one round of translocation that can be partially released in the presence of EF-P (chapter 4.5.9, Figure 56).
So far, the underlying mechanism is still unknown, and further insights could be obtained by solving the structure of a polyproline-arrested ribosome complex. The toeprinting reaction was performed using a reconstituted in vitro translation system ([START_REF] Shimizu | Cell-free translation reconstituted with purified components[END_REF]; [START_REF] Shimizu | Protein synthesis by pure translation systems[END_REF]). Figure 60 lists all of its components:

Figure 60: Components of the in vitro translation system. The toeprint illustrating the formation of the polyproline-mediated arrest complex, as well as its release by EF-P, was performed in a custom-made PURExpress system (NEB, Ipswich, MA, USA) containing all of the factors needed for in vitro transcription and translation. To form the arrested complex for structural biology, the reaction mixture was reduced to a minimal set of factors (listed in bold), allowing only one round of elongation. EF-P is not provided in the custom-made PURExpress system and was added separately.

The toeprint illustrating the formation of the arrested complex was performed in a custom-made PURExpress system (NEB, Ipswich, MA, USA) containing all the components listed in Figure 60, but so far the structure of the polyproline-arrested ribosome in the presence or absence of EF-P has not been solved. In order to form a complex suitable for studying polyproline-arrested 70S E. coli ribosomes by cryo-EM, the reaction mixture has to be reduced to a minimal set of components (listed in bold in Figure 60), while unnecessary factors (e.g. initiation factors or T7 RNAP) can be omitted from the mixture. Optimal conditions for complex formation can be identified by toeprinting (chapter 3.8). Subsequently, these conditions can be used for the scaled-up reaction for structural studies. Figure 61 illustrates possible complexes for studying the PRE- and POST-release states of EF-P bound to polyproline-arrested ribosomes:

Figure 61: Complexes to study the molecular mechanism of EF-P. The POST-release complex can be studied using Pro-tRNA Pro, as polyproline-mediated translational arrest is released in the presence of EF-P. The PRE-release complex can be studied by replacing proline with its unreactive analog tetrahydro-2-furoic acid (THFA). The complex can be formed specifically using the flexizyme methodology to transfer THFA onto e.g. tRNA Phe.

This strategy will allow polyproline-mediated translational arrest to be studied in the presence of all compounds present in the cell during arrest. A complex formed with an mRNA encoding MPP allows the binding of AcR(A/P)-tRNAi Met followed by two Pro-tRNA Pro and the formation of the polyproline-arrested complex. This complex can be released in the presence of EF-P ([START_REF] Doerfel | EF-P is essential for rapid synthesis of proteins containing consecutive proline residues[END_REF]; [START_REF] Peil | Distinct XPPX sequence motifs induce ribosome stalling, which is rescued by the translation elongation factor EF-P[END_REF]; [START_REF] Ude | Translation elongation factor EF-P alleviates ribosome stalling at polyproline stretches[END_REF]). Such a complex might give insights into the POST-release state. In order to stabilize the PRE-release state and understand how EF-P releases the arrest, the A-site tRNA can be charged with tetrahydro-2-furoic acid (THFA), an unreactive derivative of proline (Figure 61). THFA can be activated with CBT or DBE and transferred onto the 3' CCA end of tRNA Phe with the help of the eFx or dFx flexizymes, respectively. Using an mRNA encoding the sequence MPF, it will be possible to direct the binding of THFA-tRNA Phe to the A-site, potentially resulting in high homogeneity of the complex.
tRNA Phe can be used to study the complex, as EF-P seems not to form direct contacts with the A-site tRNA [START_REF] Blaha | Formation of the first peptide bond: the structure of EF-P bound to the 70S ribosome[END_REF] and the nature of the A-site tRNA does not have an impact on EF-P function [START_REF] Katoh | Essential structural elements in tRNA Pro for EF-P-mediated alleviation of translation stalling[END_REF]. This strategy will allow us to obtain insights into the PRE-release state of polyproline-mediated arrest by EF-P. Furthermore, the flexizyme methodology will allow us to study the effect of chemically modified proline residues. Among all canonical amino acids, proline is the only one that frequently adopts both cis and trans peptide bond conformations within proteins. Binding of EF-P might favor a specific conformation of the ring pucker, and the conformation of the pucker directly influences the isomeric state of the peptide bond, as illustrated in Figure 62:

Figure 62: Pyrrolidine ring puckering of proline. The Cγ atom indicated by the black arrow preferentially adopts its exo conformation in a trans peptide bond and its endo conformation in a cis peptide bond ([START_REF] Milner-White | Pyrrolidine ring puckering in cis and trans-proline residues in proteins and polypeptides: Different puckers are favoured in certain situations[END_REF]; [START_REF] Vitagliano | Preferred proline puckerings in cis and trans peptide groups: implications for collagen stability[END_REF]).

The systematic analysis of different protein structures, including collagen, led to the proposal that hydroxylation of the Cγ atom stabilizes the trans or cis peptide bond by stabilizing the exo or endo conformation of the pucker, respectively (Figure 62; Milner-White et al., 1992; [START_REF] Vitagliano | Preferred proline puckerings in cis and trans peptide groups: implications for collagen stability[END_REF]).
By introducing modifications of the Cγ atom such as hydroxylation, halogenation or the addition of large aliphatic chains [START_REF] Pandey | Proline Editing: A General and Practical Approach to the Synthesis of Functionally and Structurally Diverse Peptides. Analysis of Steric versus Stereoelectronic Effects of 4-Substituted Prolines on Conformation within Peptides[END_REF], the impact of EF-P on cis and trans peptide bonds could be investigated in greater detail. This would not only give greater insight into the function of EF-P but might also make it possible to improve the production of peptides composed of cyclic N-alkyl amino acids (CNAs) by the bacterial ribosome. In a recently published study, the flexizyme methodology was employed to produce CNA peptides in bacteria [START_REF] Kawakami | Extensive Reprogramming of the Genetic Code for Genetically Encoded Synthesis of Highly N-Alkylated Polycyclic Peptidomimetics[END_REF]. CNAs are good drug candidates since they display higher membrane permeability than other peptides. In bacteria, EF-P is not essential but is crucial for virulence and stress adaptation ([START_REF] Navarre | PoxA, yjeK, and elongation factor P coordinately modulate virulence and drug resistance in Salmonella enterica[END_REF]; [START_REF] Zou | Elongation factor P mediates a novel post-transcriptional regulatory pathway critical for bacterial virulence[END_REF]). Co-crystallization and toeprinting experiments indicate that EF-P does not release arrest across the species tested, meaning that E. coli EF-P does not release polyproline-arrested T. thermophilus ribosomes and vice versa. Additionally, introducing the E. coli loop sequence into T. thermophilus EF-P neither resulted in a crystal structure nor relieved polyproline-mediated arrest in toeprinting experiments.
Studies have shown that EF-P from various species is modified differently ([START_REF] Bullwinkle | (R)-β-lysine-modified elongation factor P functions in translation elongation[END_REF]; [START_REF] Lassak | Arginine-rhamnosylation as new strategy to activate translation elongation factor P[END_REF]; [START_REF] Park | Post-translational modification by β-lysylation is required for activity of Escherichia coli elongation factor P (EF-P)[END_REF]; [START_REF] Rajkovic | Translation Control of Swarming Proficiency in Bacillus subtilis by 5-Amino-pentanolylated Elongation Factor P[END_REF]). Figure 63 illustrates the variety of known modifications across prokaryotes:

Figure 63: Known post-translational modifications of EF-P across prokaryotes. In E. coli, K34 needs to be β-lysinylated for EF-P activity and is additionally hydroxylated in a second step ([START_REF] Park | Post-translational modification by β-lysylation is required for activity of Escherichia coli elongation factor P (EF-P)[END_REF]; [START_REF] Peil | Lys34 of translation elongation factor EF-P is hydroxylated by YfcM[END_REF]). In P. aeruginosa and N. meningitidis, R32 is rhamnosylated. Recently, the modification in B. subtilis was identified as 5-amino-pentanol. So far, a potential modification of R32 in T. thermophilus remains unknown.

The post-translational modification of the crucial residue of EF-P varies among prokaryotes (Figure 63), but in each case additional hydrogen-bond donors and acceptors are introduced, possibly to enhance interactions with the tRNA. So far, the potential modification of T. thermophilus EF-P remains unknown. One of the remaining questions is whether a modification of T. thermophilus EF-P is necessary for the function of this molecule. This could be investigated by growing T. thermophilus at different temperatures and subsequently isolating EF-P by immunoprecipitation to analyze the protein by mass spectrometry, as performed previously for EF-P from E.
coli ([START_REF] Peil | Lys34 of translation elongation factor EF-P is hydroxylated by YfcM[END_REF]; [START_REF] Rajkovic | Translation Control of Swarming Proficiency in Bacillus subtilis by 5-Amino-pentanolylated Elongation Factor P[END_REF]). T. thermophilus grows at an optimal temperature of 75°C [START_REF] Oshima | Description of Thermus thermophilus (Yoshida and Oshima) comb. nov., a nonsporulating thermophilic bacterium from a Japanese thermal spa[END_REF]. One explanation could be that 70S T. thermophilus ribosomes are not stalled by polyproline motifs, or that they overcome the arrest more easily at these high temperatures owing to increased molecular motion. A second explanation involves the unusually high concentrations of polyamines in the cytoplasm, which have been shown to be essential for cell survival. In addition, thermophiles produce species-specific polyamines such as thermine and tetrakis(3-aminopropyl)ammonium [START_REF] Oshima | Unique polyamines produced by an extreme thermophile, Thermus thermophilus[END_REF]. Furthermore, the addition of polyamines to in vitro translation reactions is crucial for the initiation of translation and also enhances elongation (Uzawa et al., 1993a; Uzawa et al., 1993b). The combination of different polyamines showed a synergistic enhancement of protein production at higher temperatures. Thus, T. thermophilus EF-P could act as a scaffold that orients polyamines correctly and in doing so releases the arrest along consecutive polyproline motifs. This hypothesis could be tested by toeprinting and co-crystallization experiments using T. thermophilus ribosomes in the presence of different polyamines and T. thermophilus EF-P.
Species-specific modifications of EF-P, which can range from rhamnosylation to β-lysinylation, demand a large set of different modification enzymes, as the chemical properties of the substrate molecules are diverse (Lassak et al.; Park et al.). Understanding the underlying molecular mechanisms of the different EF-P modifications might result in a better understanding of the species specificities of bacterial ribosomes, which in turn might give rise to species-specific, narrow-spectrum antibiotics.

Conclusion
My structural studies of the inhibition of bacterial translation by nascent-chain and free peptides resulted in a greater understanding of how these peptides interact with and affect the ribosomal exit tunnel. Antimicrobial peptides that target the ribosomal exit tunnel are produced as a natural defense mechanism of eukaryotes. The structures obtained can be used for further drug development, involving a large number of potential modifications to enhance the peptides' uptake by bacteria and their binding affinity for the ribosome. Further steps will also have to include the identification of new sequences and their characterization. In addition, short arrest peptides can be attached directly to the 3' CCA end of tRNAs using the flexizyme technology for further structural or biochemical characterization. This makes it possible to study the impact of non-canonical amino acids and non-natural backbones on peptide bond formation and on nascent chain-mediated translational arrest. An increased understanding of the determinants of nascent chain-mediated translational arrest will ultimately result in a better understanding of the bacterial ribosome and will allow the development of new scaffolds for novel antibiotics.
Further experiments using the flexizyme methodology and the knowledge obtained during this work will provide information about the reactivity and incorporation of non-natural amino acids, for example D-amino acids, using the bacterial ribosome as a production machinery.

These peptides can indeed attach to bacterial ribosomes and inhibit protein synthesis in vitro (Krizsan et al., 2014). Other studies have shown that the mammalian PrAMP bactenecin-7 (Bac7, Bos taurus) also binds to bacterial ribosomes and inhibits translation in vitro (Mardirossian et al., 2014). The underlying molecular mechanism of action of the PrAMPs was still unknown when I solved the crystal structures of the Thermus thermophilus 70S ribosome in complex with the PrAMPs Onc112 and Bac7 (Gagnon et al.; Roy et al., 2015; Seefeldt et al., 2016; Seefeldt et al., 2015). From the binding site, which overlaps with that of the incoming aminoacyl-tRNA, we can conclude that these peptides block the transition from initiation to elongation (Seefeldt et al., 2016; Seefeldt et al., 2015). This work was published in parallel with work from the group of Professor Thomas A. Steitz, which allowed our results to be confirmed (Gagnon et al.; Roy et al., 2015).

Nascent chain-induced translational arrest
The aim of the second part of this thesis was to gain an understanding of the molecular mechanism of short arrest peptides.
Nascent chain-mediated translational arrest occurs when the nascent chain is able to interact with the wall of the ribosomal tunnel, leading to a rearrangement within the peptidyl transferase center (Ito and Chiba; Seip and Innis; Wilson et al.). The reasons for arrest can be simply the amino acid composition of the nascent peptide, combined with the presence of a ligand or the incorporation of non-natural amino acids (Ito and Chiba; Seip and Innis; Wilson et al.). Long arrest peptides such as the TnaC (Bischoff et al., 2014; Seidelt et al.) and Erm peptides (Arenz et al., 2016; Arenz et al., 2014a; Arenz et al., 2014b) have been characterized structurally. Erm methyltransferases modify the ribosome to confer macrolide resistance; ribosomes modified in this way are less efficient than unmodified ribosomes, and consequently the expression of the resistance gene must be tightly regulated. Upstream of the ermD gene, a short leader peptide, ErmDL, is encoded to regulate the expression of the methyltransferase (Hue and Bechhofer). Different leader peptides involved in the regulation of erythromycin resistance genes have been identified and characterized.
Systematic shortening of the ErmDL peptide sequence led to the identification of the minimal stalling sequence MRL(R) (Sothiselvam et al.). Systematic mutagenesis of these sequences (Sothiselvam et al.), coupled with ribosome profiling data obtained by treating E. coli and Staphylococcus aureus with erythromycin, telithromycin and azithromycin, showed that this motif can be generalized to +X(+) (Davis et al.; Kannan et al.). Several studies have shown that the rate of peptide bond formation is reduced when the ribosome incorporates proline, compared with the rate of incorporation of the other amino acids (Muto and Ito; Pavlov et al.). Chemically, proline is an N-alkyl amino acid (imino acid), which results in a higher basicity compared with the other canonical amino acids and a higher probability of carrying a protonated α-imino group at physiological pH (Pavlov et al.). Moreover, proline is the only canonical amino acid that can form both trans and cis peptide bonds, leading to possible kinks within α-helices.
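To make the +X(+) generalization concrete, a simple regular-expression scan can flag candidate macrolide-arrest motifs in a protein sequence. This is a deliberately simplified sketch: "+" is read here as Arg or Lys, and real arrest propensity depends on additional context (drug, species, flanking sequence).

```python
import re

# '+X(+)' motif: a positively charged residue (R/K), any residue, then
# another positively charged residue; the trailing '+' corresponds to the
# residue being incorporated when the ribosome stalls. A lookahead is used
# so that overlapping motifs are also reported.
PLUS_X_PLUS = re.compile(r"(?=([RK].[RK]))")

def find_arrest_motifs(seq):
    """Return (position, motif) pairs for candidate +X(+) motifs (0-based)."""
    return [(m.start(), m.group(1)) for m in PLUS_X_PLUS.finditer(seq)]

print(find_arrest_motifs("MRLR"))  # the ErmDL-derived minimal stalling sequence
```

Running this on the minimal sequence MRL(R) reports the RLR window, illustrating how the three-residue pattern sits at the stall site.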
Recent in vitro and in vivo studies have shown that three consecutive prolines lead to ribosome arrest (Doerfel et al.; Ude et al.). Many genes important for bacterial fitness contain polyproline sequences, for example the gene encoding valyl-tRNA synthetase (valS) (Starosta et al., 2014b). Arrested bacterial ribosomes can be released during translation by elongation factor P (EF-P) (Doerfel et al.; Ude et al.). The same observation has been made for the eukaryotic homologue, initiation factor 5A (eIF5A) (Gutierrez et al.). A further proteomic study using stable isotope labelling by amino acids in cell culture (SILAC) on Δefp cells showed that the set of EF-P-dependent genes is not limited to PPP sequences alone (Peil et al.; Starosta et al., 2014a). Further analyses using in vivo reporter systems and in vitro translation assays identified the diversity of sequences that cause arrest in the absence of EF-P. To be fully active, EF-P must be post-translationally modified (Bullwinkle et al.; Navarre et al.).
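The polyproline criterion can likewise be made explicit by scanning a sequence for runs of consecutive prolines. The sketch below flags runs of three or more; as noted above, the SILAC data show that the true set of EF-P-dependent (XPPX-type) motifs is broader than this simple rule.

```python
def proline_runs(seq, min_len=3):
    """Return (start, length) for runs of >= min_len consecutive prolines (0-based)."""
    runs, i = [], 0
    while i < len(seq):
        if seq[i] == "P":
            j = i
            while j < len(seq) and seq[j] == "P":
                j += 1                      # extend the run of prolines
            if j - i >= min_len:
                runs.append((i, j - i))     # record start and run length
            i = j
        else:
            i += 1
    return runs

print(proline_runs("MAPPPGQPPPPK"))  # [(2, 3), (7, 4)]
```

In practice such a scan would be applied to whole proteomes (e.g. to find candidates like valS); here it only serves to make the "three consecutive prolines" rule operational.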
The modification can vary with the bacterial species, from a sugar to a lysine (Bullwinkle et al.; Lassak et al.). Bacterial cells lacking efp or its modification enzymes display the same phenotype and are less fit and less virulent (Navarre et al.). This makes EF-P an attractive target for antibiotics. However, the underlying mechanism of polyproline-mediated translational arrest, as well as its relief by EF-P, has so far remained unknown. To gain further insight into the underlying mechanism, EF-P was purified as published previously and its activity was assessed by toeprinting experiments. Using X-ray crystallography, it could be shown that E. coli EF-P does not bind to T. thermophilus 70S ribosomes under crystallographic conditions. It has previously been shown that E. coli EF-P recognizes the D-loop of tRNA Pro (Katoh et al.). To obtain a thorough understanding of the underlying mechanism, part of this work addressed this interaction.

Conclusion
The knowledge obtained has provided a better understanding of how peptide bond formation can be inhibited in a specific manner, and could lead to the development of new, highly specific antibiotics targeting the bacterial ribosome.

Following DNA template generation, the construct was transcribed in vitro at large scale (10 mL reaction) (Goto et al.).
Canonical base orientations

Purity of flexizyme compounds
The following analyses were performed, written and kindly provided by Dr. Christophe André.

Formyl-Met-Arg-DBE•HCl (CA
The general procedure for the synthesis of 3,5-dinitrobenzyl esters was applied to Formyl-Met-Arg(Pbf)-OH (150 mg, 0.26 mmol) to give the title activated ester peptide as a white solid lyophilisate (38 mg, 28%).
The general procedure for the synthesis of 4-chlorobenzyl thioesters was applied to Formyl-Met-Arg(Pbf)-OH (150 mg, 0.26 mmol) to give the title activated thioester peptide as a white solid lyophilisate (39 mg, 31%).
The general procedure for the synthesis of 4-chlorobenzyl thioesters was applied to Acetyl-Arg(Pbf)-Pro-OH (150 mg, 0.26 mmol) to give the title activated thioester peptide as a white solid lyophilisate (19 mg, 16%).
The general procedure for the synthesis of 4-chlorobenzyl thioesters was applied to Acetyl-Arg(Pbf)-Asp(tBu)-OH (100 mg, 0.16 mmol) to give the title activated thioester peptide as a white solid lyophilisate (28 mg, 38%).
The general procedure for the synthesis of 4-chlorobenzyl thioesters was applied to Acetyl-Arg(Pbf)-Ala-Pro-OH (200 mg, 0.31 mmol) to give the title activated thioester peptide as a white solid lyophilisate (31 mg, 19%).
The general procedure for the synthesis of cyanomethyl esters was applied to Formyl-Met-Lys(Boc)-Phe-OH (150 mg, 0.27 mmol) to give the title activated ester peptide as a white solid lyophilisate (37 mg, 28%).
The general procedure for the synthesis of cyanomethyl esters was applied to Acetyl-Lys(Boc)-Phe-OH (150 mg, 0.34 mmol) to give the title activated ester peptide as a white solid lyophilisate (36 mg, 28%).
The general procedure for the synthesis of cyanomethyl esters was applied to Formyl-Met-Orn(Boc)-Phe-OH (150 mg, 0.28 mmol) to give the title activated ester peptide as a white solid lyophilisate (47 mg, 35%).
The general procedure for the synthesis of cyanomethyl esters was applied to Boc-Met-Lys(Boc)-Phe-OH (150 mg, 0.24 mmol) to give the title activated ester peptide as a white solid lyophilisate (16 mg, 14%).
The general procedure for the synthesis of 4-chlorobenzyl thioesters was applied to Formyl-Met-Orn(Boc)-Phe-OH (150 mg, 0.28 mmol) to give the title activated thioester peptide as a white solid lyophilisate (44 mg, 27%).
The general procedure for the synthesis of 4-chlorobenzyl thioesters was applied to Boc-Met-Lys(Boc)-Phe-OH (150 mg, 0.24 mmol) to give the title activated thioester peptide as a white solid lyophilisate (46 mg, 34%).

Inhibition of the bacterial ribosome by nascent and antimicrobial peptides
The bacterial (70S) ribosome catalyzes peptide bond formation and represents a major target for antibiotics. The synthesized peptide passes through the exit tunnel of the 50S subunit of the ribosome before being released into the cytoplasm. Specific peptides can inhibit translation by acting in cis (nascent peptides) or in trans (proline-rich antimicrobial peptides, PrAMPs) on this tunnel. PrAMPs have been shown to inhibit protein synthesis by binding to the 70S ribosome. During my thesis, I solved the crystal structures of four PrAMPs in complex with the 70S ribosome, revealing that all of these peptides cover the peptidyl transferase center (PTC) and bind within the tunnel in a reverse orientation relative to the nascent peptide. I was also able to conclude that PrAMPs inhibit translation by blocking the transition from the initiation phase to elongation. Nascent peptide-induced translational arrest occurs when a nascent peptide interacts with the tunnel, leading to inactivation of the PTC. Arrest can be solely due to the peptide sequence or may require a co-inducer, such as an antibiotic.
The mechanisms of action of short arrest peptides (polyproline or M+X(+) motifs) remain unknown. In order to study these peptides biochemically and structurally, I formed stalled ribosomal complexes using an arrest peptidyl-tRNA prepared with the help of a ribozyme called flexizyme. I was thus able to obtain a cryo-EM structure of a 70S ribosome stalled by an M+X(+) motif in the presence of erythromycin and to formulate a model explaining the allosteric inactivation of the PTC.
Keywords: antibiotics, ribosome, structural biology, arrest peptides

Inhibition of the bacterial ribosome by nascent chain and free peptides
The bacterial (70S) ribosome catalyzes peptide bond formation and represents a major target for antibiotics. The synthesized peptide passes through the exit tunnel of the large ribosomal subunit before it is released into the cytoplasm. Specific peptides can inhibit translation by acting in cis (nascent peptides) or in trans (proline-rich antimicrobial peptides; PrAMPs) through interactions with the tunnel. PrAMPs have been reported to inhibit protein biosynthesis and to bind to the 70S ribosome. During my thesis, I solved the crystal structures of four different PrAMPs in complex with the bacterial ribosome, revealing that all of the peptides cover the peptidyl transferase center (PTC) and bind in a reverse orientation within the exit tunnel relative to a nascent chain. From this, I concluded that PrAMP binding inhibits the transition from initiation towards elongation. Nascent chain-mediated translational arrest occurs when a nascent peptide interacts with the exit tunnel, leading to the rearrangement and inactivation of the PTC. Arrest can be solely due to the peptide's sequence or may require a small-molecule co-inducer, such as a drug. The underlying mechanisms of action for short arrest peptides (polyproline or M+X(+) motifs) remain unknown.
In order to study these short arrest peptides biochemically and structurally, I adopted a strategy to form arrested ribosomal complexes through the direct addition of the arrest peptidyl moiety to tRNAi Met with the help of a small ribozyme known as flexizyme. I was able to solve the cryo-EM structure of a ribosome arrested by an M+X(+) motif in the presence of erythromycin and to propose a model for the allosteric inactivation of the PTC.
Keywords: antibiotics, ribosomes, nascent chain-mediated translational arrest, structural biology

List of Figures
Figure 1: tRNA positions and correct labeling of the nascent chain, as used in this work.
Figure 2: Structure of the 70S ribosome from E. coli.
Figure 3: Schematic view of the ribosomal exit tunnel.
Figure 4: Overview of the bacterial translation cycle as reviewed by Schmeing and Ramakrishnan, 2009.
Figure 5: Chemical structures of erythromycin (macrolide) and telithromycin (ketolide).
Figure 6: Binding site of erythromycin within the bacterial ribosome.
Figure 7: Sequence alignments of PrAMPs isolated from different species (insect and mammalian) or derived from drug development.
Figure 8: fM+X(+) arrests the ribosome in the presence of the antibiotic erythromycin.
Figure 9: Summary of the sequence variants that can induce polyproline-mediated ribosomal arrest.
Figure 10: Sequence alignments of the different in vitro selected flexizymes Fx3, eFx, dFx, and aFx.
Figure 11: Different flexizymes (eFx, dFx, and aFx) were selected in vitro to recognize different leaving groups to increase reactivity and complexity in the substrate.
Figure 12: Assembly of the Northern blot; the RNA was separated by TBU-PAGE prior to blotting.
Figure 13: Summary of tRNA Pro extraction and purification.
Figure 14: Ribosomal domains for rigid-body fitting.
Figure 15: PrAMPs bind to the ribosomal exit tunnel.
Figure 16: Mechanism of action of proline-rich antimicrobial peptides.
Figure 17: General workflow to study short peptides that mediate nascent chain-dependent translational arrest using the flexizyme methodology.
Figure 18: Workflow to obtain the arrest complex using one round of translocation.
Figure 19: Time course of flexizyme reactions for fMK-CBT (A), AcRP-CBT (B), AcRA-CBT (C) and AcRD-CBT (D) onto tRNAi Met.
Figure 20: Time course following the peptidylation reaction of the AcRAP-CBT peptide with microhelix over six days.
Figure 21: RP-HPLC purification of dipeptidylated tRNAi Met after flexizyme reaction for 8 days on ice.
Figure 22: Purification of AcRAP-tRNAi Met after flexizyme reaction for 7 days on ice by C4-RP-HPLC.
Figure 23: After buffer exchange and lyophilization, the peptidylated tRNAs did not hydrolyze.
Figure 24: Complex to study fM+X(+)-mediated arrest in the presence of erythromycin.
Figure 25: MRFR, MKFR, MRFK and MKFK arrest the E. coli ribosome in the presence of erythromycin.
Figure 26: In the presence of erythromycin, ErmDL arrests 70S T. thermophilus ribosomes at the same position as the 70S E. coli ribosomes.
Figure 27: Toeprint of MRFRI in the presence or absence of erythromycin/telithromycin using 70S T. thermophilus ribosomes.
Figure 28: fMKF-tRNAi Met arrests E. coli 70S ribosomes in vitro in the presence of erythromycin and Arg-tRNA Arg.
Figure 29: Grids prepared with a ribosome concentration of 480 nM showed a good distribution of particles on micrographs.
Figure 30: Representative 2D classes of the sample sorted in order of decreasing abundance.
Figure 31: Masked 3D classification sorting according to the occupancy of A-, P- and E-site tRNA.
Figure 32: Local resolution distribution and FSC curve.
Figure 33: Examples of density display around different parts of the model.
Figure 34: The reconstructed density contains density for P-site (green) and E-site tRNA (dark blue) but no density for an A-site tRNA.
Figure 35: The fMKF peptide does not form direct contact with the drug, while the penultimate lysine side chain points towards the A-site crevice.
Figure 36: In the MKF-70S E.
coli structure, U2506, a universally conserved residue within the PTC, adopts a novel conformation different from those seen during the translational cycle and arrest.
Figure 37: The base of U2585, which moves during tRNA accommodation, adopts a specific conformation similar to the one in intrinsically arrested ribosomes.
Figure 38: 23S rRNA residue A2062 is a universally conserved base within the ribosomal exit tunnel that acts as a sensor for arrest peptides (Vázquez-Laslop et al., 2010) and points towards the PTC in the MKF-70S structure (green).
Figure 39: Proposed mechanism of MKF-mediated translational arrest in the presence of erythromycin.
Figure 40: Complexes to study nascent chain-mediated translational arrest along consecutive proline motifs.
Figure 41: Schematic view of tRNA Pro overexpression strategies.
Figure 42: Comparison of the different overexpression strategies using pBSTNAV vectors and the pUC19-proK vector.
Figure 43: Similar amounts of tRNA Pro were expressed per cell in E. coli DH5α and HB101 using the pBSTNAV vectors.
Figure 44: Chromatographic steps of tRNA Pro purification.
Figure 45: tRNA Pro purification steps analyzed by denaturing PAGE and Northern blotting.
Figure 46: Mass spectrum from native mass spectrometry of tRNA Pro.
Figure 47: Activity test of purified tRNA Pro using toeprinting.
Figure 48: Screening of different toeprinting templates to study polyproline-mediated arrest and its release using purified E. coli EF-P.
Figure 49: Sequence alignment of T. thermophilus EF-P (Tth_wt) and E. coli EF-P (Eco_wt).
Figure 50: Unmodified T. thermophilus EF-P does not release E. coli ribosomes arrested on polyproline motifs.
Figure 51: Positive density for minimally biased difference maps (Fo-Fc) of T. thermophilus and E. coli EF-P.
Figure 52: Amino acid sequence alignment of T. thermophilus EF-P and the two mutants.
Figure 53: SDS-PAGE showing purified EF-P variants.
Figure 54: The mutants generated do not release E. coli ribosomes arrested at consecutive prolines.
Figure 55: Polyproline-mediated arrest and its release by E. coli EF-P were studied using flexizyme-peptidylated tRNAs.
Figure 56: E. coli EF-P releases polyproline-mediated arrested ribosomes but does not release ribosomes stalled while translating other well-characterized arrest peptides.
Figure 57: Overlay of the binding sites of Onc112 and clinically used antibiotics.
Figure 58: Comparison of the binding sites of Bac7(1-16), Api137 and klebsazolicin (KLB).
Figure 59: Peptide modifications to gain a greater understanding of the chemical properties needed for nascent chain-mediated translational arrest.
Figure 60: Composition of in vitro translation systems for arrested complex formation for structural biology.
Figure 61: Complexes to study the molecular mechanism of EF-P.
Figure 62: The position of the Cγ-atom influences the isomeric form of the peptide bond.
Figure 63: The variety of modifications essential for EF-P activity in different species.
Figure 64: Additional conformations of U2506 (A), U2585 (B) and A2062 (C) during pre-accommodation, catalysis and translocation.
Figure 65: Base modification found in proK.
Figure 66: Sucrose gradients of the 70S E. coli ribosome purification.
Figure 67: fMKF-CME flexizyme reaction performed by Dr. K. Kishore Inampudi.
Figure 68: ProS purification steps analyzed by SDS-PAGE.
Figure 69: PheST purification steps analyzed by SDS-PAGE.
Figure 70: E. coli and T. thermophilus EF-P purification steps analyzed by SDS-PAGE.
Figure 71: Purification of the T. thermophilus mutants by SDS-PAGE.
Figure 72: Sequencing results for proK-pBSTNAV2OK.
Figure 73: Sequencing results for tRNA Pro in the pBSTNAV3S vector.
Figure 74: Sequencing result of the Tth EF-P HQ mutant.
Figure 75: Sequencing result from mutagenesis to generate the Tth EF-P mutant including the E. coli loop.

Table 1: Summary of abbreviations used in this work.
Table 2: Peptide sequences that lead to a nascent chain-mediated translational arrest in bacteria.
Table 3: Summary of well-characterized ligand-dependent arrest peptides in bacteria.
Table 4: List of chemicals
Table 5: List of antibiotics
Table 6: Enzymes used for cloning, toeprinting and RNA purification
Table 7: List of kits
Table 8: List of used columns for RNA and protein purification
Table 9: Media composition of LB and TB
Table 10: List of used E. coli strains with specifics on genotypes and usage
Table 11: 689 media composition
Table 12: Composition of a native nucleic acid (TBE)-PAGE
Table 13: Composition of a denaturing nucleic acid (TBU)-PAGE
Table 14: Composition of acidic denaturing PAGE
Table 15: Composition of SDS-PAGE
Table 16: List of used probes
Table 17: List of proteins purified in this thesis; their corresponding extinction coefficients and molecular weights were determined using the bioinformatic platform ProtParam.
Table 18: Pipetting scheme for gene amplification from genomic DNA
Table 19: PCR program for gene extraction
Table 20: Pipetting scheme for primer annealing PCR
Table 21: PCR program for primer annealing
Table 22: List of vectors with their corresponding restriction enzymes and downstream application
Table 23: Pipetting scheme for insertion of gene into pUC19 or pBAT4
Table 24: PCR program for MEGAWHOP
Table 25: Pipetting scheme for PCR based mutagenesis
Table 26: PCR program for PCR based mutagenesis
Table 27: Pipetting scheme for Colony PCR
Table 28: Reaction scheme for Colony PCR
Table 29: List of expression vectors used for protein purification
Table 30: List of tRNA constructs for purification and expression
Table 31: List of used PURExpress systems
Table 32: Reaction mixture for toeprinting control reactions
Table 33: List of 3x ddNTP stock solutions
Table 34: Mastermix for sequencing reaction
Table 35: Program for sequencing reaction
Table 36: Gel mix for sequencing PAGE
Table 37: List of synthetic mRNAs for structural biology studies
Table 38: Refinement and model statistics of the single-particle reconstruction
Table 39: Primer list for aminoacyl-tRNA synthetase cloning
Table 40: DNA primers for mutagenesis for T. thermophilus EF-P
Table 41: DNA primers for tRNA cloning
Table 42: DNA primers for toeprinting templates

Figure 1: tRNA positions and correct labeling of the nascent chain, as used in this work.

Figure 2: Structure of 70S ribosome from E. coli. The bacterial 70S ribosome is a ribonucleoprotein complex formed of a large (50S) and a small (30S) subunit. The 50S subunit consists of 23S rRNA, 5S rRNA (light gray) and 31 proteins (yellow). The 30S subunit consists of a 16S rRNA (gray) and 20 proteins (blue).
(PDB code: 4U27; Noeske et al., 2014).

Figure 3: Schematic view of the ribosomal exit tunnel. The tunnel spans the whole large subunit, connecting the PTC to the cytoplasm. It can be divided into the upper tunnel, the lower tunnel, and the vestibule. Figure is taken from Seip and Innis, 2016.

Figure 4: Overview of the bacterial translation cycle as reviewed by Schmeing and Ramakrishnan, 2009. The prokaryotic translation cycle can be divided into three phases: initiation, elongation, and termination (release and recycling). The 30S subunit is shown in blue and the 50S subunit is shown in yellow. All canonical translation factors are listed, illustrating their function in the translation cycle.

Figure 5: Chemical structures of erythromycin (macrolide) and telithromycin (ketolide). Erythromycin and telithromycin consist of a 14-membered lactone ring. ERY also has a C5 desosamine and a C3 cladinose sugar. In the case of telithromycin, the C3 cladinose sugar is replaced by a keto group and an N-aryl-alkyl acetamide.

Figure 6: Binding site of erythromycin within the bacterial ribosome. Overlay of the 70S E. coli ribosome (turquoise; PDB code: 4V7U) and the 70S T. thermophilus ribosome (blue; PDB code: 4V7X) in complex with erythromycin illustrates the similar conformations of the drug bound to the ribosomal tunnel. The bases of the 23S rRNA that form the binding site are shown in gray. For simplification, only the bases from the T. thermophilus 70S ribosome structure are shown. A2059 and U2611 form a hydrophobic surface resulting in stacking interactions with the lactone ring (Bulkley et al., 2010; Dunkle et al., 2010).

Figure 7: Sequence alignments of PrAMPs isolated from different species (insects and mammals) or derived from drug development. Proline and arginine are highlighted in bold to visualize their high content. The light-yellow box indicates the two most important residues for antimicrobial activity.
O stands for L-ornithine, D-amino acids are indicated with a letter in lower case, and gu stands for N,N,N',N'-tetramethylguanidine.

Figure 8: fM+X(+) arrests the ribosome in the presence of the antibiotic erythromycin. The peptide barely touches the binding site of the drug.

Figure 9: Summary of the sequence variants that can induce polyproline-mediated ribosomal arrest. The proline and its corresponding tRNA are highlighted in green. The amino acid in the A-site (PP(X)), in the -1 position (XP(P)) and in the -2 position (XPP(P)) directly influences the strength of the arrest. The figure summarizes strong polyproline arrest motifs identified by Doerfel et al.; Peil et al.; Starosta et al., 2014a; and Woolstenhulme et al.

Figure 10: Sequence alignments of the different in vitro selected flexizymes Fx3, eFx, dFx, and aFx. The tRNA hybridization and substrate recognition sites are marked with gray and black boxes, respectively. Minimal changes in the sequences lead to changes in the recognition of different leaving groups.

Figure 11: Different flexizymes (eFx, dFx, and aFx) were selected in vitro to recognize different leaving groups to increase reactivity and complexity in the substrate. The leaving group is marked in blue. The different leaving groups allow different chemical properties of the activated peptide or amino acid derivative (figure after Goto et al., 2011).

3.2.1.1 Nucleic acid gels

DNA samples were analyzed either by agarose TAE gels or by native polyacrylamide gel electrophoresis (PAGE).
DNA fragments with a minimal length of 500 bp were analyzed on 1.5-3% (w/v) agarose gels. Agarose was melted in 70 mL TAE buffer (40 mM Tris, 50 mM acetate, 1 mM EDTA) and poured into the gel stands. After solidification of the agarose, the gel was overlaid with 1x TAE buffer and the DNA samples were loaded by mixing 3 µL PCR product with 0.5 µL 6x DNA loading dye (30% (v/v) glycerol, 0.25% (w/v) bromophenol blue (BPB), 0.25% (w/v) xylene blue). To estimate the size and concentration of the DNA fragments, a DNA size standard (2-log DNA ladder, NEB, Ipswich, MA, USA) was loaded. The gel was run at 170 V for 60 to 80 min. To detect the DNA, the gels were stained afterward with a 2x GelRed solution and imaged with the gel documentation system GelDoc+ (Biorad, Hercules, CA, USA).

Figure 12: Assembly of the Northern blot. The RNA was separated on the TBU-PAGE prior to blotting. The RNA is transferred onto the membrane by applying an electric field. Due to the negative charge of the RNA, the sample migrates towards the positive pole and onto the membrane.

For large-scale expression of tRNA Pro, eight main cultures of 1 L LB (100 μg/mL ampicillin) were inoculated with an overnight preculture of E. coli HB101 pBSTNAV3S-tRNA Pro. The cultures were grown overnight (approximately 16 h) at 37°C under aerobic conditions. The cells were harvested by centrifugation in a JLA 9.100 rotor at 4,000 rcf for 20 min at 4°C (Avanti J26XP centrifuge, Beckmann, Brea, CA, USA). The cell pellets were washed with tRNA lysis buffer (20 mM Tris•HCl pH 7.4, 20 mM Mg(CH3CO2)2) and pelleted by a second centrifugation step using the same conditions. Per 1 L LB culture, the cell pellet weighed approximately 5 g. For storage, cells were flash-frozen in liquid nitrogen and stored at -80°C. The following figure highlights the different steps during tRNA Pro extraction and purification:

Figure 13: Summary of tRNA Pro extraction and purification.
tRNA Pro was overexpressed in E. coli HB101 cells and cells were lysed by phenol/chloroform extraction. DNA and long RNA species were removed by selective isopropanol precipitation. The total tRNA sample was deaminoacylated by incubation at pH 8.0. Subsequently, the 5S rRNA was removed with a Q-sepharose column. tRNA Pro was purified further by changing its chemical properties using aminoacylation and deaminoacylation and C4 RP-HPLC.

Louis, MO, USA), a 3'-5' exonuclease that is incapable of degrading double-stranded RNA. For this reaction, 10 µM tRNA were mixed with 0.1 mg/mL of the phosphodiesterase I in 50 mM Tris•HCl pH 7.6, 10 mM MgCl2, 1 mM DTT. The reaction was incubated for one hour at RT and stopped by phenol-chloroform extraction (chapter 3.52). 5 mM activated peptide solubilized in 100% dimethyl sulfoxide (DMSO) were added, resulting in a final concentration of 20% DMSO (v/v). The reactions were incubated at 4°C on ice. The reaction time and activity depended on the nature of the activated amino acid and varied from 2 h up to several days. For screening and optimization of the reaction conditions, the reaction was set up using a microhelix, and 1 µL time points were taken directly into 20 µL acidic RNA loading dye over the course of eight days. All samples were analyzed at the end on a 20% acidic denaturing PAGE (see chapter 3.2). Once the reaction conditions were optimized, the reaction size was scaled up to a total volume of 200 µL. The reactions were stopped by the addition of 300 mM Na(CH3CO2) pH 4.8 and 70% ethanol. The aminoacylated tRNA was recovered in 500 µL reverse phase buffer A (20 mM NH4(CH3CO2) pH 5.5, 10 mM Mg(CH3CO2)2, 400 mM NaCl). The peptidylated tRNA was separated by running the sample over an analytical C4 reverse phase column (Grace) with a 14 CV gradient into 50-80% reverse phase buffer B (20 mM NH4(CH3CO2) pH 5.5, 10 mM Mg(CH3CO2)2, 400 mM NaCl, 40% MeOH). The slope of the gradient depended on the hydrophobicity of the peptide.
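As a quick sanity check of the mixing step above: since the activated peptide stock is in 100% DMSO and the reaction ends up at 20% (v/v) DMSO, the stock is diluted five-fold on addition. The short sketch below (plain Python; the function names and the 25 mM example stock are my own illustration, not part of the protocol) makes that arithmetic explicit.

```python
def dilution_factor(c_stock, c_final):
    """Fold-dilution implied by going from a stock to a final concentration."""
    return c_stock / c_final

def volume_to_add(v_total, factor):
    """Volume of stock to pipette into a reaction of total volume v_total."""
    return v_total / factor

# The DMSO content drops from 100% (v/v) in the stock to 20% (v/v) final,
# i.e. the peptide stock is diluted 1:5 in the reaction mix.
factor = dilution_factor(100.0, 20.0)    # 5.0

# For the 200 uL scaled-up reaction this corresponds to 40 uL of stock.
v_stock = volume_to_add(200.0, factor)   # 40.0 uL

# Any component at concentration c in the stock ends up at c/5, so a
# hypothetical 25 mM peptide stock would give 5 mM in the reaction.
peptide_final_mM = 25.0 / factor

print(factor, v_stock, peptide_final_mM)
```

The same two-line calculation applies to any of the reaction components that are added from concentrated stocks.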
Elution fractions were followed by monitoring the UV-Vis profile and analyzed by 9% acidic denaturing PAGE. Fractions containing the peptidylated tRNA were pooled and washed multiple times in centrifugal filters with 20 mM NH4(CH3CO2) pH 5.5 to completely remove Na+ cations. The peptidylated tRNA was flash-frozen and stored at -80°C.

The reservoir solution consisted of 100 mM Tris•HCl pH 7.6, 2.9% (v/v) polyethylene glycol (PEG) 20,000, 7-10% (v/v) 2-methyl-2,4-pentanediol (MPD) and 175 mM arginine. PEG 20,000 and MPD act as precipitants to reach the supersaturated state of the solution. While the concentration of PEG was kept constant, the MPD concentration was optimized depending on the ribosome preparation and the complex composition. Crystallization drops consisted of 3 µL of pre-formed complex and 3-4 µL reservoir solution. The drops thus contained a lower concentration of the reservoir components. During the crystallization process, the concentration differences are equilibrated by water diffusion from the drop towards the reservoir, slowly increasing the ribosome concentration within the drop and resulting in crystallization. Crystals appeared after 2 to 3 days of incubation at 20°C, and after approximately a week crystals grew to their full size of ∼1000 × 100 × 100 μm. Subsequently, crystals were stabilized to prevent ice formation and drying by replacing water with MPD. The concentration of MPD was increased stepwise: 25%, 30%, 35% and 40% (v/v) MPD in 100 mM Tris•HCl pH 7.6, 2.9% (v/v) PEG 20,000, 50 mM KCl, 10 mM NH4Cl and 10 mM Mg(CH3COO)2. The reservoir solution was completely removed during this step. For the first three steps, 5 µL of the cryo-protectant solution was added and incubated for 15 min at room temperature. For the last step, 15 µL of the drop solution was removed and replaced by 40 µL of the cryo-protectant solution containing 40% MPD. The crystals were stabilized for a minimum of 1 h.
Crystals were fished into nylon loops and flash-frozen using a cryostream set to a temperature of 100 K. Data processing comprised spot localization, indexing, integration, scaling and merging. Ribosome crystals from T. thermophilus give a large number of weak spots due to their large unit cells, which was taken into account during spot localization. During this step, spots were identified and separated from noise. The specific position of each spot arises from the orientation of the molecular complexes within the crystal and is crucial for indexing. 70S T. thermophilus ribosomes crystallize in most cases in the point group P212121 with cell dimensions of 210 × 450 × 625 Å and the corresponding angles of α=β=γ=90°.

Figure 14: Ribosomal domains for rigid-body fitting. The large subunit is divided into the 50S body (yellow) and the L1 stalk (orange). The small subunit is divided into the 30S head (slate), 30S body (dark blue) and the 30S spur (marine) domains.

HEPES•KOH pH 7.5, 20 mM Mg(CH3CO2)2, 100 mM K(CH3CO2)) in the presence of 25 µM erythromycin and 20 µM Arg-tRNA Arg.

The scattered electrons are detected by the camera at the end of the microscope. The resulting micrographs are a collection of 2D projections of each molecule in solution. The particles are boxed and classified by the software according to their orientation in ice. To obtain the electron microscopy density, the 2D projections are used to reconstruct the 3D object. To obtain a 3D model, the data were processed using the REgularized LIkelihood OptimizatioN (RELION 2.1) interface following the recommended protocol with slight changes (Fernandez-Leiro and Scheres; Scheres, 2012a, b).

Each micrograph was Fourier transformed. The background was subtracted using a low-pass filter and the obtained Thon rings were compared to theoretical values to obtain the defocus and astigmatism values of each image for further processing. Further processing requires the selection and classification of particles.
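The defocus estimation described above rests on the power spectrum of each micrograph, in which the Thon rings appear as concentric oscillations that are fitted against theoretical CTF curves. The numpy sketch below is my own illustration of that first step only (it is not the CTFFIND4/RELION implementation): it computes a centred power spectrum by Fourier transformation and collapses it into a rotationally averaged profile of the kind used for CTF fitting.

```python
import numpy as np

def power_spectrum(micrograph):
    """2D power spectrum of a micrograph; Thon rings show up as
    concentric oscillations around the (centred) zero frequency."""
    ft = np.fft.fftshift(np.fft.fft2(micrograph))
    return np.abs(ft) ** 2

def radial_average(ps):
    """Rotationally averaged 1D profile of a power spectrum, i.e. the
    curve that would be compared against a theoretical CTF model."""
    ny, nx = ps.shape
    y, x = np.indices(ps.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=ps.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)  # guard against empty radius bins

# Toy example: a random 64 x 64 pixel "micrograph". A real micrograph
# would additionally be patch-averaged and background-subtracted.
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
profile = radial_average(power_spectrum(img))
print(profile.shape)
```

On real data, the low-frequency background under this profile is what the low-pass filtering step removes before the ring positions are matched to the theoretical CTF for a given defocus and astigmatism.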
To generate references for auto-picking, 1,000 to 2,000 ribosome particles were selected manually from micrographs of representative defocus values and classified into 2D classes. The contrast of the images increases proportionally with increasing defocus values; thus, for the auto-picking program to efficiently recognize particles, it is necessary to cover the whole defocus range when manually selecting the images. The box size was set to 340 Å, which is larger than the diameter of the ribosome.

Due to the increased prevalence of multidrug-resistant bacteria, the development of new therapeutic agents has become a public health priority. In this respect, proline-rich antimicrobial peptides (PrAMPs) are potent bacteriostatic agents that are effective against Gram-negative pathogens and have been the focus of numerous studies to improve their properties as potential drugs. As the name indicates, PrAMPs show a high content of prolines and at least one Pro-Arg-Pro motif, which leads to a disordered conformation of the peptide in solution (Otvos). Recent studies have shown that PrAMPs, like Bac7 and Onc112, can inhibit protein biosynthesis in vivo and in vitro. In addition, these peptides can be cross-linked to, or co-purified with, the bacterial ribosome (Krizsan et al.; Mardirossian et al., 2014). The underlying molecular mechanism of action of PrAMPs was still unknown when I began my doctoral studies. To investigate this further, I solved the crystal structures of T. thermophilus 70S ribosomes in complex with the PrAMPs Onc112, Bac7(1-16), pyrrhocoricin (Pyr) and metalnikowin I (Met I).
The resolution ranges from 2.8 Å to 3.2 Å. Compared to the growing nascent peptide chain, PrAMPs bind in a reverse orientation to the ribosomal exit tunnel (Gagnon et al.).

Figure 15: PrAMPs bind to the ribosomal exit tunnel. Onc112 (orange), Bac7 (green), Met I (raspberry) and Pyr bind to the ribosomal tunnel in a reverse orientation compared to the nascent chain. The binding site covers the A-site tRNA binding site, the A-site crevice and the upper section of the exit tunnel.

Figure 16: Mechanism of action of proline-rich antimicrobial peptides. PrAMPs are taken up specifically by Gram-negative bacteria through the SbmA transporter. C-terminal truncations lead to up-take inhibition. Inside the cytoplasm, they bind to the ribosomal exit tunnel in the reversed orientation compared to the growing peptide chain. This results in the inhibition of the transition from initiation towards elongation and destabilizes the initiation complex. Figure is taken from (Seefeldt et al., 2015).

Received 26 February; accepted 22 April; published online 18 May 2015; doi:10.1038/nsmb.3034

The proline-rich antimicrobial peptide Onc112 inhibits translation by blocking and destabilizing the initiation complex

The increasing prevalence of multidrug-resistant pathogenic bacteria is making current antibiotics obsolete. Proline-rich antimicrobial peptides (PrAMPs) display potent activity against Gram-negative bacteria and thus represent an avenue for antibiotic development. PrAMPs from the oncocin family interact with the ribosome to inhibit translation, but their mode of action has remained unclear. Here we have determined a structure of the Onc112 peptide in complex with the Thermus thermophilus 70S ribosome at a resolution of 3.1 Å by X-ray crystallography. The Onc112 peptide binds within the ribosomal exit tunnel and extends toward the peptidyl transferase center, where it overlaps with the binding site for an aminoacyl-tRNA.
We show biochemically that the binding of Onc112 blocks and destabilizes the initiation complex, thus preventing entry into the elongation phase. Our findings provide a basis for the future development of this class of potent antimicrobial agents.

Figure 1 Onc112-binding site within the exit tunnel of the ribosome. Transverse section of the exit tunnel of the Tth70S ribosome showing the binding site for the Onc112 peptide (orange). Minimally biased Fo - Fc difference map contoured at +3.0σ (blue) is observable for the first 12 amino acids of Onc112 (VDKPPYLPRPRPPRrIYNr-NH2; residues 1-12 are bold and underlined). Initiator tRNAi Met bound at the P site is shown in green. Inset shows the view chosen to display the Onc112 peptide relative to the complete 70S ribosome.

Figure 3 Onc112 blocks and destabilizes the initiation complex. (a) Structural comparison of Phe-tRNA Phe (blue) in the A site and fMet-tRNAi Met in the P site (green) (ref. 20) with the binding site of Onc112 (orange). (b) Toe-printing assay performed in the absence (-) or presence of increasing concentrations (1 µM, 10 µM and 100 µM) of Onc112, 50 µM clindamycin (Cli), 50 µM edeine (Ede) or 100 µM thiostrepton (Ths). Sequencing lanes for C, U, A and G and the sequence surrounding the toe-print bands (arrows) when ribosomes accumulate at the AUG start codon (green, initiation complex) or the isoleucine codon (blue, stalled elongation complex) are included for reference, as illustrated graphically. The uncropped gel image for the toe-printing assay is in Supplementary Data Set 1. (c-g) Schematic (c) showing the dicistronic ErmBL mRNA that was used to monitor disome formation with sucrose gradients in the presence (d) or absence (e) of 20 µM erythromycin (Ery) or the presence of 20 µM thiostrepton (f) or 100 µM Onc112 (g). In c, SD denotes the Shine-Dalgarno sequence. A, absorbance.
Figure 4 Characterization of Onc112, its C-terminally truncated derivatives and its membrane transporter in Gram-negative bacteria. (a,b) Effect of Onc112 (red) and C-terminally truncated Onc112 derivatives Onc112ΔC7 (green) and Onc112ΔC9 (purple) on overnight growth of E. coli strain BL21(DE3) (a) and the luminescence resulting from the in vitro translation of Fluc (b). (c) Effect of Onc112 on overnight growth of E. coli strain BW25113 (blue) or BW25113 ΔsbmA (orange). In a and c, error bars represent mean ± s.d. for triplicate experiments, whereas the experiment in b was performed in duplicate, with the plotted points representing the mean value. The growth or luminescence measured in the absence of peptide was assigned as 100% in all cases.

Figure 5 Mechanism of action and overlap of Onc112 with antibiotics that target the large subunit of the ribosome. (a) Model for the mechanism of action of Onc112. (1) Onc112 binds within the exit tunnel of the ribosome with a reverse orientation relative to a nascent polypeptide chain. (2) Onc112 allows formation of a translation-initiation complex but prevents accommodation of the aminoacyl-tRNA (aa-tRNA) at the A site of the peptidyl transferase center. (3) The initiation complex is destabilized, thus leading to dissociation of the fMet-tRNAi Met from the P site. Although full-length Onc112 can efficiently penetrate the bacterial cell membrane by using the SbmA transporter (4), C-terminal truncation of Onc112 reduces its antimicrobial activity (5), presumably owing to impaired uptake. (b) Relative binding position of Onc112 (orange) on the ribosome, compared with those of well-characterized translation inhibitors chloramphenicol (purple) (refs. 32,33), clindamycin (green) (ref. 33), tiamulin (yellow) (ref. 34) and erythromycin (blue) (refs. 32,33), as well as an A site-bound Phe-tRNAPhe (ref. 20).

Onc112 D2E. H-Val-Glu-Lys-Pro-Pro-Tyr-Leu-Pro-Arg-Pro-Arg-Pro-Pro-Arg-(d-Arg)-Ile-Tyr-Asn-(d-Arg)-NH2 (2,403.88 g mol−1).
Synthesis of Onc112 D2E (0.05-mmol scale): 11.6 mg (10% yield); RP-HPLC tR 5.75 min (gradient 10-50% of B in 10 min); ESI-HRMS (m/z): found 1,316.70 [M + 2H]2+, 840.14 [M + 3H]3+ and 601.86 [M + 4H]4+.

Onc112 L7Cha. H-Val-Asp-Lys-Pro-Pro-Tyr-Cha-Pro-Arg-Pro-Arg-Pro-Pro-Arg-(d-Arg)-Ile-Tyr-Asn-(d-Arg)-NH2 (2,429.92 g mol−1). Synthesis of Onc112 L7Cha (0.05-mmol scale): 6.9 mg (6% yield); RP-HPLC tR 5.28 min (gradient 10-50% of B in 10 min); ESI-HRMS (m/z): found 1,252.18 [M + 2H]2+, 822.80 [M + 3H]3+ and 608.36 [M + 4H]4+.

Figure preparation. Figures showing electron density and atomic models were generated with PyMOL (http://www.pymol.org/).

Figure 1. Binding site of Bac7(1-16) on the ribosome and comparison with Onc112. (A) Overview and close-up view of a cross-section of the Tth70S ribosomal exit tunnel showing the Bac7(1-16) peptide (RRIRPRPPRLPRPRPR) in green and highlighting the three regions of interaction with the ribosome: the A-tRNA binding pocket (light pink), the A-site crevice (light green) and the upper section of the exit tunnel (light blue). (B) Structural comparison of Bac7(1-16) (green) with Onc112 (orange) (20,21), Met1(1-10) (burgundy) and Pyr(1-16) (cyan), highlighting the distinct structure of the Bac7 N-terminus (N-term) and the Pyr C-terminus (C-term).

Figure 2. Interactions between Bac7(1-16) and the ribosome. (A) Bac7(1-16) (green) makes extensive contacts with the A-site tRNA binding region of the ribosome, in particular (B) electrostatic interactions between its N-terminal arginine stack and a deep groove lined by phosphate groups from the 23S rRNA. (C) π-stacking interactions between arginine side chains (green) of Bac7(1-16) and 23S rRNA bases contribute to much of the binding and are reinforced through further packing against aliphatic side chains (blue).
(D and E) Water-mediated contacts between the peptide and the ribosome are also proposed to occur further down the exit tunnel, in addition to direct hydrogen-bonding interactions between the two.

Figure 3. Competition between Bac7 derivatives and erythromycin. (A) Superimposition of the binding site of erythromycin (blue) (40,41) with residues 11-16 of Bac7(1-16) (green). (B) A filter binding assay was used to monitor competition between radiolabelled [14C]-erythromycin and increasing concentrations (1-100 µM) of Bac7(1-35) (red), Bac7(1-16) (green), Bac7(5-35) (blue), Onc112 (grey) and cold (non-radioactive) erythromycin (ery, black).

Figure 4. Mechanism of action of Bac7 on the ribosome. (A) Effects of increasing concentrations of Bac7 derivatives Bac7(1-16) (green), Bac7(1-35) (red) and Bac7(5-35) (blue) on the luminescence resulting from the in vitro translation of firefly luciferase (Fluc) using an Escherichia coli lysate-based system. The error bars represent the standard deviation from the mean for triplicate experiments and the luminescence is normalized relative to that measured in the absence of peptide, which was assigned as 100%. (B) Sucrose gradient profiles to monitor disome formation from in vitro translation of the 2XErmBL mRNA in the absence (control) or presence of 20 µM erythromycin (Ery), 10 µM Bac7(1-35) (red) or 100 µM Bac7(5-35) (blue). (C) Toe-printing assay performed in the absence (-) or presence of increasing concentrations (1, 10, 100 µM) of Bac7(1-35), Bac7(1-16) or Bac7(5-35), or 100 µM thiostrepton (Ths), 50 µM edeine (Ede) or 50 µM clindamycin (Cli). Sequencing lanes for C, U, A and G and the sequence surrounding the toe-print bands (arrowed) when ribosomes accumulate at the AUG start codon (green, initiation complex) or the isoleucine codon (blue, stalled elongation complex) are included for reference.
(D) Structural comparison of Phe-tRNA Phe (slate) in the A-site and fMet-tRNAi Met in the P-site (blue) (26) with the binding site of Bac7(1-16) (green).

Figure 5. Specificity of Bac7 for bacterial and eukaryotic ribosomes. (A) Superimposition of Bac7(1-16) (green) onto a mammalian 80S ribosome (PDB ID: 3J7O) (47) on the basis of the 23S and 25S rRNA chains in the corresponding structures, with inset illustrating three rRNA nucleotides whose conformation differs in the 80S (grey) and Tth70S-Bac7 (yellow) structures. (B) Effect of increasing concentrations of Bac7 on the luminescence resulting from the in vitro translation of firefly luciferase (Fluc) using an Escherichia coli lysate-based system (red) or rabbit reticulocyte-based system (black). The error bars represent the standard deviation from the mean for triplicate experiments and the luminescence is normalized relative to that measured in the absence of peptide, which was assigned as 100%. (C) Model for the targeting of proBac7 to large granules and its processing by elastase to yield active Bac7 peptide. The latter is transported through the bacterial inner membrane by the SbmA transporter and binds within the tunnel of bacterial ribosomes to inhibit translation.

Figure S1. Minimally biased electron density for (A) the Bac7(1-16) peptide (green) and surrounding solvent molecules, as well as the (B) Pyr(1-16) (cyan) and (C) MetI(1-10) (burgundy) peptides. The peptides are shown in the same orientation as in Figure 1A and solvent molecules are displayed as spheres (red). Continuous density for the entire peptide and clear density for the solvent molecules are observed in a minimally biased Fo - Fc difference map contoured at +2.0σ (blue mesh). (D) Superimposition of the Bac7(1-16), Onc112(1-12) (orange), Pyr(1-16) and MetI(1-10) peptides.
(E) Effects of increasing concentrations of Bac7(1-16) (red), Metalnikowin I (green) and Pyrrhocoricin (green) on the luminescence resulting from the in vitro translation of firefly luciferase (Fluc) using an E. coli lysate-based system. The error bars represent the standard deviation from the mean for triplicate experiments and the luminescence is normalized relative to that measured in the absence of peptide, which was assigned as 100%.

Figure S2. Relative position of the ribosome-bound Bac7(1-16) peptide (green) to the ribosomal proteins L4 (red) and L22 (blue) that reach into the lumen of the ribosomal tunnel. The proposed path for the full-length Bac7 peptide is shown as a dotted green line.

Figure S3. (A) Conformational changes in 23S rRNA nucleotides A2062, U2506 and U2585 that take place upon binding of Bac7(1-16) to the ribosome. Nucleotides from the Tth70S-Bac7 structure are shown in yellow, while nucleotides in the Bac7-free or "uninduced" conformation are in blue (1). (B) Clash between the formyl-methionyl moiety of a P-site bound fMet-tRNAi Met (blue) and 23S rRNA residue U2585 in its Bac7-bound conformation (yellow). Bac7(1-16) is shown as a green Cα-trace in both panels.

Figure 17: General workflow to study short peptides that mediate nascent chain-dependent translational arrest using the flexizyme methodology. Arrest peptides can be transferred onto the 3' end of purified initiator tRNA. The peptidylated tRNA can be used to form the arrest complex with purified ribosomes.

Figure 18: Workflow to obtain the arrest complex using one round of translocation. The pre-arrest complex is formed using peptidyl-tRNA, 70S ribosomes, and elongator tRNA. The arrest complex is obtained after one round of translocation in the presence of EF-G and GTP.
Figure 19: Time course of flexizyme reactions for fMK-CBT (A), AcRP-CBT (B), AcRA-CBT (C) and AcRD-CBT (D) onto tRNAi Met. Time points were taken daily and analyzed by 12% acidic denaturing PAGE stained with SYBRgold. The blue arrow indicates the peptidylated tRNA, the red one the deaminoacylated tRNAi Met and the yellow one the band corresponding to eFx. The reaction was performed on ice.

Figure 20: Time course following the peptidylation reaction of the AcRAP-CBT peptide with microhelix over six days. The analysis was performed by 20% acidic denaturing PAGE stained with SYBRgold. The microhelix is smaller than eFx. A band corresponding to the peptidylated product appears after 3 days of reaction on ice.

Figure 21: RP-HPLC purification of dipeptidylated tRNAi Met after a flexizyme reaction for 8 days on ice. Elution fractions giving a strong UV signal were analyzed by 12% acidic denaturing PAGE. A, C, E and G show the chromatograms of the purification, monitoring the elution profile by measuring the absorption at λ = 260 nm (blue).

Figure 22: Purification of AcRAP-tRNAi Met after a flexizyme reaction for 7 days on ice by C4-RP-HPLC (A). The elution profile was followed using the absorption at λ = 260 nm (blue) and λ = 280 nm (red). Fractions were analyzed by acidic denaturing PAGE (B) stained with SYBRgold.

Figure 23: After buffer exchange and lyophilization, the peptidylated tRNAs did not hydrolyze. Lane c shows deaminoacylated initiator tRNA. AcRP-tRNAi Met (1), AcRA-tRNAi Met (2), AcRAP-tRNAi Met (3) and fMKF-tRNAi Met (4) are peptidylated. The analysis was performed by 12% denaturing acidic PAGE and stained with SYBRgold.

Figure 24: Complex to study fM+X(+)-mediated arrest in the presence of erythromycin. The MKFR sequence was chosen to study this particular arrest.

Figure 25: MRFR, MKFR, MRFK and MKFK arrest the E.
coli ribosome in the presence of erythromycin.

Figure 26: In the presence of erythromycin, ErmDL arrests 70S T. thermophilus ribosomes at the same position as 70S E. coli ribosomes. The reaction was performed using a PURExpress system with separated ribosomes and release factors to study the arrest peptide ErmDL in the presence or absence of erythromycin.

Figure 28: fMKF-tRNAi Met arrests E. coli 70S ribosomes in vitro in the presence of erythromycin and Arg-tRNA Arg. The custom-made PURExpress system had separated amino acids, tRNAs and ribosomes. All reactions were substituted with E. coli ribosomes provided by NEB. The start codon (blue arrow) control reaction (lane 1, first lane after the sequencing reactions) contained initiator tRNAi Met and L-methionine. As a translocation control, two reactions were performed in the presence of tRNAi Met, tRNA Arg and their corresponding amino acids in the absence (lane 2) or presence (lane 3) of erythromycin (red arrow). fMKF-tRNAi Met can translocate in the presence of Arg-tRNA Arg (lane 4). Upon addition of erythromycin most ribosomes arrest at the start codon (lane 5).

Figure 29: Grids prepared with a ribosome concentration of 480 nM showed a good distribution of particles on micrographs. (A) shows a representative micrograph of the MKF complex at a magnification of 120 k and a defocus value of 2 µm. (B) shows the corresponding power spectrum after CTF correction. The resolution of this particular micrograph was estimated to be 4.2 Å by CTFFIND4 (Rohou and Grigorieff, 2015).

Figure 30: Representative 2D classes of the sample sorted in order of decreasing abundance. The ribosomal particles adopted different orientations. The small and large subunits can be distinguished. The blue box indicates an artificial class that corresponds to an "Einstein-from-noise" class.
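The defocus and resolution values quoted in Figure 29 come from fitting the oscillating contrast transfer function to the micrograph power spectrum. As a minimal sketch of the 1D CTF model that programs like CTFFIND4 fit, the following assumes typical single-particle imaging parameters (300 kV, Cs = 2.7 mm, 10% amplitude contrast) that are not stated in the text:

```python
import math

def electron_wavelength(voltage):
    """Relativistic electron wavelength in Angstrom for an
    acceleration voltage given in volts."""
    return 12.2639 / math.sqrt(voltage + 0.97845e-6 * voltage ** 2)

def ctf_1d(k, defocus, voltage=300e3, cs=2.7e7, amp_contrast=0.1):
    """Phase-contrast transfer function at spatial frequency k (1/A).
    defocus and cs are given in Angstrom (underfocus positive)."""
    lam = electron_wavelength(voltage)
    gamma = (math.pi * lam * defocus * k ** 2
             - 0.5 * math.pi * cs * lam ** 3 * k ** 4)
    return -(math.sqrt(1 - amp_contrast ** 2) * math.sin(gamma)
             + amp_contrast * math.cos(gamma))

# 2 um defocus as in Figure 29 corresponds to 2e4 Angstrom; at k = 0
# only the amplitude-contrast term survives, so CTF(0) = -amp_contrast.
print(round(ctf_1d(0.0, 2e4), 3))   # -0.1
```

The zero crossings of this function produce the Thon rings visible in the power spectrum of panel (B), and the fit quality at high frequency is what yields the 4.2 Å estimate mentioned in the legend.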
Figure 30 shows a representative part of the 2D classification results. The blue box indicates an artificial or "Einstein from noise" class. This class arises from wrongly picked particles.

Figure 31: Masked 3D classification sorting according to the occupancy of the A-, P- and E-site tRNAs. The masked 3D classification identified ten different classes. Six of these ten classes did not contain information for tRNA and were labeled as unclassified. The largest class contained P- and E-site tRNA and was used for further reconstruction. The other classes had the following occupied sites: APE, E, P with partial E, and unclassified. The initial density before classification shows fewer details than the one obtained after masked 3D classification, 3D refinement, movie processing, particle polishing and postprocessing. The tRNA masks are shown in blue in the initial map.

Figure 32: Local resolution distribution and FSC curve. (A) The local resolution distribution ranges from 3.9 to 10 Å. The large subunit has a higher resolution than the small subunit, as the particles were aligned to the large subunit in the last step of processing. (B) shows the plot of the reciprocal of the resolution against the corresponding FSC values after gold-standard refinement. The resolution cutoff was set to FSC = 0.143.

Figure 33: Examples of density display around different parts of the model: (A) 5S rRNA shows no base separation but a clear distinction between backbone and base. The α-helix of ribosomal protein L22 (B) and the β-sheet (C) structure of ribosomal protein L32 show clear density for side chains, even flexible ones like lysine. (D) shows density for the P-site tRNA, including a well-defined 3' CCA end. The density corresponding to the arrest peptide includes density for the side chains of phenylalanine and lysine, while the methionine side chain appears to be disordered (E).
The density corresponding to erythromycin shows clear density for the two sugar moieties and the upper part of the lactone ring (F).

Figure 34: The reconstructed density contains density for the P-site (green) and E-site tRNA (dark blue) but no density for an A-site tRNA. The final density was reconstructed from 63,054 particles.

Figure 35: The fMKF peptide does not form direct contacts with the drug, while the penultimate lysine side chain points towards the A-site crevice. The MKF peptide is shown in orange and is attached to the P-site tRNA (dark gray) via an ester bond. Erythromycin is shown in blue. The structure was overlaid with a pre-attack structure of the 70S T. thermophilus ribosome in complex with Phe-NH-tRNA Phe in the A-site and fMet-NH-tRNAi Met in the P-site (PDB code: 1VY4, Polikanov et al., 2014) to show where the A-site tRNA (light gray) would normally be located. The phenylalanine would come into close proximity to the lysine side chain but would not clash. Replacing the phenylalanine with an arginine residue (raspberry) would bring two long positively charged side chains close to each other and would likely result in electrostatic repulsion. The schematic view of the ribosome is taken from (Seip and Innis, 2016).

Figure 36: In the MKF-70S E. coli structure, U2506 is a universally conserved residue within the PTC that adopts a novel conformation, different from those seen during the translational cycle and arrest. (A) Position of U2506 during different stages of the elongation cycle (pre-accommodation: PDB code 1VQ6; pre-catalysis state: PDB code 1VY5). (B) Ligand-dependent arrest (ErmBL structure, PDB code 5JTE; TnaC structure, PDB code 4UY8; ErmCL structure, PDB code 3J7Z). (C) During ligand-independent arrest (SecM-arrested ribosomes, PDB code 3JBU; VemP structure, PDB code 4NWY) U2506 adopts a conformation different from its position in the MKF-70S structure.
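The gold-standard resolution quoted in Figure 32 is read off where the Fourier Shell Correlation between two independently refined half-maps drops below 0.143. As a minimal numpy sketch of how such a curve is computed (one shell per integer frequency is an illustrative binning choice, not necessarily RELION's exact scheme):

```python
import numpy as np

def fsc_curve(half1, half2):
    """Fourier Shell Correlation between two cubic half-maps,
    one shell per integer spatial frequency up to Nyquist."""
    f1 = np.fft.fftshift(np.fft.fftn(half1))
    f2 = np.fft.fftshift(np.fft.fftn(half2))
    n = half1.shape[0]
    grid = np.indices(half1.shape) - n // 2
    radius = np.sqrt((grid ** 2).sum(axis=0)).round().astype(int)
    curve = []
    for shell in range(1, n // 2):
        m = radius == shell
        num = np.vdot(f2[m], f1[m])          # sum of F1 * conj(F2)
        den = np.sqrt((np.abs(f1[m]) ** 2).sum()
                      * (np.abs(f2[m]) ** 2).sum())
        curve.append(num.real / den)
    return np.array(curve)

rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 32))
# identical half-maps correlate perfectly in every shell
print(np.allclose(fsc_curve(vol, vol), 1.0))   # True
```

The reported resolution is then the reciprocal of the highest spatial frequency at which the curve from the two independent half-maps still exceeds 0.143.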
Figure 37: The base of U2585, which moves during tRNA accommodation, adopts a specific conformation similar to the one seen in intrinsically arrested ribosomes. (A) During canonical translation, U2585 adopts a different conformation due to the presence of tRNAs. The pre-accommodation state (PDB code: 1VQ9) and the post-catalysis state (PDB code: 1VY5) are illustrated. (B) During ligand-dependent arrest, U2585 can differ from the canonical orientation. In the ErmBL (PDB code: 5JTE) and the TnaC structure (PDB code: 4UY8), U2585 points in a similar direction but is still shifted by a few degrees. In contrast, U2585 adopts a specific orientation in the ErmCL structure (PDB code: 3J7Z), preventing A-site tRNA accommodation. (C) In SecM-arrested ribosomes (PDB code: 3JBU) and the VemP structure (PDB code: 4NWY), U2585 adopts a similar conformation as in the MKF structure.

Figure 38: 23S rRNA residue A2062 is a universally conserved base within the ribosomal exit tunnel that acts as a sensor for arrest peptides (Vázquez-Laslop et al., 2010) and points towards the PTC in the MKF-70S structure (green). During pre-accommodation (PDB code: 1VQ9) the base of A2062 points towards the lactone ring of the drug, while it points towards the cytoplasm during pre-catalysis, post-catalysis and translocation. For simplicity, only the orientation during post-catalysis is shown (PDB code: 1VY5).

Figure 39: Proposed mechanism of MKF-mediated translational arrest in the presence of erythromycin. In the absence of erythromycin, A2062 points towards the cytoplasm. This allows the rotation of the peptide backbone of the MKF peptide and the reorientation of the lysine side chain. Consequently, the incoming positively charged A-site tRNA can bind and peptide bond formation can occur.
In contrast, the presence of erythromycin limits the flexibility of A2062, and in the presence of the peptide this particular base points up towards the PTC. This directly affects the accommodation of the positively charged aminoacyl-tRNA and the ribosome undergoes peptide-mediated translational arrest.

Figure 40: Complexes to study nascent chain-mediated translational arrest along consecutive proline motifs. The -2 position was chosen to be an arginine in order to improve the solubility of the flexizyme substrate. Position -1 can be an alanine or a proline, position 0 is a proline and the incoming amino acid is a proline or its unreactive derivative, tetrahydrofuroic-2-acid. All complexes are to be formed in the presence or absence of EF-P.

Figure 41: Schematic view of tRNA Pro overexpression strategies using (A) an inducible T7 promoter and proK flanking regions (cloned by KK Inampudi) or (B) constitutive overexpression under the control of the lipoprotein promoter with optimized flanking regions in pBSTNAV vectors (Meinnel et al.).

Figure 42: Comparison of the different overexpression strategies using pBSTNAV vectors and the pUC19-proK vector. The amount of tRNA produced is higher using the pBSTNAV vectors compared to expression in the pUC19 vector. In the case of the BL21AI cells, the intensity of the bands correlates with the concentration of L-arabinose used for induction.

Figure 43: Similar amounts of tRNA Pro were expressed per cell in E. coli DH5α and HB101 using the pBSTNAV vectors. Overexpression was tested by Northern blotting using a proK-specific probe.
As a control, proK and proM were transcribed in vitro and migrated through the gel in a similar fashion as the extracted tRNA Pro, confirming good yields of tRNA Pro and the absence of additional bands corresponding to lower molecular weight RNA species.

4.5.4 tRNA Pro purification

Large-scale expression of tRNA Pro CGG yielded 5 g of cells per 1 L of LB expression culture. Total nucleic acid extraction was performed by acidic phenol-chloroform extraction (chapter 3.5).

Figure 44: Chromatographic steps of tRNA Pro purification. (A) shows the chromatogram (λ = 254 nm, blue) of the total RNA extract loaded onto a Q-sepharose column and eluted with increasing concentrations of Cl-.

Figure 45: tRNA Pro purification steps analyzed by denaturing PAGE and Northern blotting. (A) Denaturing PAGE (100 ng RNA loaded per lane) and (B) Northern blot (1 µg RNA loaded per lane) using a probe against the tRNA Pro anticodon stem. L corresponds to the Ambion RNA ladder. (1) corresponds to the total small RNA extract, (2) to the pooled fractions from the Q-sepharose column, (3) to the pooled fractions from the C4-RP HPLC run after prolinylation and (4) to the pooled fractions from the C4-RP HPLC run after deaminoacylation. The amount of tRNA Pro increases while the number of impurities decreases.

The sample was buffer-exchanged through centrifugal concentrators and analyzed by ESI-TOF-MS (electrospray ionization-time of flight mass spectrometry). The resulting mass spectrogram is shown in the following figure:

Figure 46: Mass spectrogram from native mass spectrometry of tRNA Pro. M stands for the mass of tRNA Pro minus n protons and n corresponds to the charge state of the peak. (A) Overall spectrum analyzing the distribution of the different charge states (-11: m/z = 2267.541, -12: m/z = 2078.495, -13: m/z = 1918.534, -14: m/z = 1781.423, -15: m/z = 1662.595, -16: m/z = 1558.617, -17: m/z = 1466.877, -18: m/z = 1385.328, -19: m/z = 1312.362). (B) Zoomed-in view of the -15 charge state.
The main peak corresponds to tRNA Pro with an additional Na+ ion bound (m/z = 1782.994), while peaks observed at higher m/z values correspond to contaminations that were also observed on the denaturing RNA PAGE.

The activity test reactions contained tRNA Pro and initiator tRNA (200 pmol each per reaction). Additionally, the reaction was substituted with 1 nmol of L-Met and L-Pro. The resulting sequencing gel is shown in the following figure:

Figure 47: Activity test of purified tRNA Pro using toeprinting (7.5% sequencing TBU-PAGE). The dark blue arrow indicates ribosomes arrested at the start codon, the cyan arrow indicates ribosomes arrested at the third codon and the green arrow indicates unreleased ribosomes. The sample containing purified tRNA Pro shows a clear toeprint for ribosomes stalled on the third codon, indicating active tRNA Pro.

Figure 48: Screening of different toeprinting templates to study polyproline-mediated arrest and its release using purified E. coli EF-P. (1) The DNA template codes for MMHHHHHHRPPPI, (2) MRPPPI and (3) MPPI. (1) and (2) show polyproline-mediated arrest (dark blue arrow), which is released by a final concentration of 2 µM E. coli EF-P, resulting in an increase of ribosomes arrested at the isoleucine codon and the stop codon located in the A-site due to the lack of release factors. (3) does not encode an arrest motif, resulting in a toeprint corresponding to non-released ribosomes.

Figure 49: Sequence alignment of T. thermophilus EF-P (Tth_wt) and E. coli EF-P (Eco_wt). The sequence conservation is low, as shown in the consensus line. The important residue is highlighted with a yellow box and the loop region is indicated by the gray box. The alignment was performed using MultAlin.

Figure 50: Unmodified T. thermophilus EF-P does not release E. coli ribosomes arrested on polyproline motifs. Reactions were performed in the absence of EF-P and in the presence of 1, 10 or 100 µM E. coli or T. thermophilus EF-P.
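The charge-state series listed in the Figure 46 legend can be checked for self-consistency: in negative-ion ESI each peak obeys m/z = (M - z·mH)/z, so the neutral mass is M = z·(m/z + mH). A short sketch using the values from the legend (the proton mass 1.00728 Da is standard; the deconvolved mass computed below is derived here, not stated in the text):

```python
# observed (charge, m/z) pairs from the Figure 46 legend
peaks = [(11, 2267.541), (12, 2078.495), (13, 1918.534),
         (14, 1781.423), (15, 1662.595), (16, 1558.617),
         (17, 1466.877), (18, 1385.328), (19, 1312.362)]

PROTON = 1.00728  # Da

def neutral_mass(z, mz):
    """Neutral mass from a [M - zH]^z- peak in negative-ion ESI."""
    return z * (mz + PROTON)

masses = [neutral_mass(z, mz) for z, mz in peaks]
mean_mass = sum(masses) / len(masses)
print(round(mean_mass, 1))                              # ~24954.0 Da
print(max(abs(m - mean_mass) for m in masses) < 0.2)    # True: series is self-consistent
```

All nine charge states deconvolve to the same neutral mass to within well under 0.2 Da, confirming that they belong to a single species; a peak offset by roughly +22 Da (Na+ replacing a proton) then identifies the sodium adduct mentioned above.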
The band indicating an arrested complex is shown by the dark blue arrow and the ribosomes that have reached the isoleucine codon are indicated by the red arrow.

Figure 52: Amino acid sequence alignment of T. thermophilus EF-P and the two mutants. Tth_wt shows the T. thermophilus EF-P wild-type sequence, while in Mutant_HQ the loop region is replaced by the E. coli amino acid sequence except for H27 and Q28, as these two residues seem to form direct contacts with the T. thermophilus 70S ribosome. To generate Mutant_loop, the whole loop region was replaced with the corresponding amino acid sequence from E. coli EF-P. The potentially modified residue is highlighted in yellow and the mutated loop region is highlighted by the gray box.

Figure 53: SDS-PAGE showing purified EF-P variants. Mutant_loop (L), E. coli EF-P (E), Mutant_HQ (HQ) and T. thermophilus EF-P were analyzed by denaturing protein PAGE after gel filtration.

Figure 54: The mutants generated do not release E. coli ribosomes arrested at consecutive prolines. The arrest is released in the presence of E. coli EF-P (1) but not in the presence of Mutant_loop (2), Mutant_HQ (3) or T. thermophilus WT EF-P. The blue arrow indicates the arrest site, while the red arrow corresponds to the isoleucine codon located in the P-site and the stop codon located in the A-site.

Figure 55: Polyproline-mediated arrest and its release by E. coli EF-P were studied using flexizyme-peptidylated tRNAs. The reaction was performed in the custom-made PURExpress. The mRNA template encoded the peptide sequence MPPI. The reaction was performed using 10 µM peptidyl-tRNAi Met and 20 µM tRNA Pro. Reactions were substituted with 10 and 20 µM E. coli EF-P, resulting in partial release from the arrest site. The dark blue arrow indicates the start codon, equivalent to the arrest complex in the presence of AcRAP-tRNAi Met.
The blue arrow indicates the first proline codon, which is equivalent to the arrest site induced by the dipeptidylated tRNAs, including AcRP-tRNAi Met and AcRA-tRNAi Met, and the cyan arrow indicates the second proline codon, equivalent to the release of the arrest.

Figure 56: E. coli EF-P releases polyproline-mediated arrested ribosomes but does not release ribosomes stalled while translating other well-characterized arrest peptides. E. coli EF-P was used to study its activity on (A) polyproline-mediated arrest, (B) the ligand-independent arrest peptide SecM, and the ligand-dependent arrest sequences (C) TnaC (reaction supplied with release factors), (D) ErmBL, (E) ErmCL and (F) ErmDL. The toeprint corresponding to the arrested ribosome is indicated by a dark blue arrow and the non-released complexes are indicated by the red arrow.

Figure 57: Overlay of the binding sites of Onc112 and clinically used antibiotics. The binding site of Onc112 (orange) overlaps with the binding sites of well-characterized antibiotics. These antibiotics are known to inhibit the transition from initiation to elongation and early elongation. The A-site tRNA is indicated in light blue. The figure is taken from (Seefeldt et al., 2015).

Figure 58: Comparison of the binding sites of Bac7(1-16), Api137 and Klebsazolicin (KLB). (A) The peptides bind to the ribosomal exit tunnel even though they have different sequences and folds (Florin et al., 2017; Metelev et al., 2017; Scocchi et al., 1994). * indicates post-translationally modified residues. Bac7(1-16, green) binds to the A-site tRNA binding pocket (1), the A-site crevice (2) and the upper tunnel.
Api137 (B, purple, PDB code: 5O2R) binds to the A-site crevice and reaches further into the ribosomal tunnel (Florin et al., 2017). The post-translational modifications of KLB (C, PDB code: 5W4K) restrict the backbone conformation, resulting in a "curled-up" conformation binding from the PTC into the upper tunnel (Metelev et al., 2017). The ribosome is taken from (Seefeldt et al., 2016).

Figure 59: Peptide modifications to gain a greater understanding of the chemical properties needed for nascent chain-mediated translational arrest. The possible peptide modifications are highlighted in blue. (A) The flexizyme methodology allows translation to be initiated without the N-terminal formyl group or the N-terminal methionine. (B) To analyze the impact of chain length and of the positive charge of the -1 side chain on the arrest, peptides can be synthesized with a shortened side chain at this position or with the amino group replaced by a hydroxyl group.

Figure 60: Composition of in vitro translation systems for arrested complex formation for structural biology.

Figure 62: The position of the Cγ atom influences the isomeric form of the peptide bond.

Figure 63: The variety of modifications essential for EF-P activity in different species. In E. coli, residue K34

that PrAMPs inhibit the transition between initiation and elongation. To address this hypothesis, biochemical studies were carried out by our collaborators from the group of Prof. Daniel N. Wilson (Gene Center, Munich; now University of Hamburg) via toeprinting experiments, disome assays and cell-free protein synthesis assays, thereby confirming the proposed mechanism.
Our studies showed that PrAMPs specifically inhibit the transition from initiation to elongation and destabilize the post-initiation complex, which was studied by cryo-EM, while the arrest mechanisms of short peptides, including polyproline motifs as well as the fM+X(+) motif that arrests the ribosome in the presence of erythromycin, were also investigated. To better understand the underlying molecular mechanism of short arrest peptides, high-resolution structures are needed, but it is difficult to obtain a homogeneous sample following in vitro translation. Such structures can be obtained by X-ray crystallography, using 70S ribosomes from T. thermophilus in our case, or by cryo-electron microscopy using 70S ribosomes from Escherichia coli. To obtain highly diffracting crystals, the complex must contain only short mRNA fragments. Thus, complexes cannot be formed using an in vitro translation reaction because the coding mRNA overlaps with a crystal contact and thereby limits diffraction (Axel Innis, personal communication). One solution to ensure stoichiometric binding of the peptidyl-tRNA to the ribosome and to increase the homogeneity of the complex could be to load a chemically synthesized peptide directly onto the tRNA. To obtain the peptidylated tRNA, the flexizyme approach was employed, which relies on the catalytic power of a small ribozyme to form an ester bond between the chemically activated peptide and the CCA end of the tRNA (Goto et al., 2011). In the next step, the peptidylated tRNA was purified by HPLC. The peptides were synthesized by Dr. Caterina Lombardo and Dr. Christophe André. Using this method, fragments of arrest peptides were specifically loaded onto the initiator tRNAi Met.
The peptidylated tRNAs were characterized using biochemical and structural biology methods. The method we present can be applied to a large number of short arrest peptides, including peptides containing unnatural amino acids, with the aim of gaining a better understanding of the specificity of the ribosome.

7.3.1 M+X(+) arrests the bacterial ribosome in the presence of erythromycin

Ribosome arrest can be used to regulate gene expression in cells. To do so, the arrest peptide acts as a sensor detecting the presence of a drug, such as erythromycin, or of a metabolite, such as tryptophan. The erythromycin resistance gene (ermD) encodes a methyltransferase that methylates base A2058 of the 23S rRNA within the ribosomal tunnel, thereby preventing drug binding. The arrest peptide is so short that it barely reaches the drug binding site. During this thesis, the MKF(R) sequence was chosen for further studies. Dr. K. Kishore Inampudi (a former postdoc in the laboratory) specifically loaded the MKF (methionine-lysine-phenylalanine) peptide onto the initiator tRNAi Met using the flexizyme methodology (Goto et al., 2011). I assessed the activity of this peptidylated tRNA by toeprinting experiments in the presence and absence of erythromycin, confirming the arrest of the complex formed by the nascent chain. No structure could be obtained using X-ray crystallography because the peptide was hydrolyzed during the crystallization process. I therefore determined the structure of MKF-arrested E. coli 70S ribosomes by cryo-EM, forming the complex in the presence of purified 70S ribosomes, MKF-tRNAi Met, Arg-tRNA Arg and erythromycin.
Based on the structure obtained, the allosteric rearrangement of 23S rRNA residues and the absence of the A-site tRNA, the following mechanism could be proposed: in the absence of erythromycin, the base of A2062 rotates more easily towards the cytoplasm, thus allowing Lys -1 to leave the A-site crevice. Consequently, the next incoming aminoacyl-tRNA carrying a positive charge can bind, allowing peptide bond formation. In contrast, the presence of erythromycin could favor a conformation of A2062 in which the base points towards the PTC. The ribosome stalls when the base of 23S rRNA residue A2062 points towards the PTC, inhibiting the reorientation of the arrest peptide and forcing Lys -1 to point into the A-site crevice. Consequently, the next A-site tRNA carrying a K, R or W side chain cannot bind to the PTC because of steric and electrostatic hindrance. The ribosome is arrested due to the prevention of A-site tRNA accommodation.

7.3.2 Polyproline-mediated arrest is relieved by elongation factor P

Part of this project consisted of purifying tRNA Pro from cell extracts. Its purity was determined to be approximately 80% by mass spectrometry and its activity was demonstrated by toeprinting. To study arrest mediated by the polyproline motif, AcRA-tRNAi Met, AcRP-tRNAi Met and AcRAP-tRNAi Met were obtained using the flexizyme methodology. I assessed the activity of the peptidyl-tRNAs by toeprinting in the presence and absence of E. coli EF-P, showing the formation and release of the arrest complex. Future experiments will aim to find optimal complex-formation conditions with the goal of solving the structure by cryo-EM.

The DNA templates for the flexizymes were generated as published previously.

Figure 64: Additional conformations of U2506 (A), U2585 (B) and A2062 (C) during pre-accommodation, catalysis and translocation.
The conformations of the different bases in the MKF-70S structure are shown in green. Bases from the structure of the pre-accommodation state (PDB code: 1VQ6) are shown in density. Bases from the pre-catalysis (PDB code: 1VY4) and post-catalysis states (PDB code: 1VY5) are shown in marine and slate, respectively. Bases from the structures of the pre-translocation (PDB code: 4WPO) and post-translocation (PDB code: 4WQY) states are shown in light blue and purple, respectively. In all cases, the bases adopt different orientations than in the MKF-70S structure.

Figure 70: E. coli and T. thermophilus EF-P purification steps analyzed by SDS-PAGE. L stands for lysis, W for the last washing step, CE stands for the elution fraction from the Co2+-NTA column, GF stands for the pooled fractions from gel filtration and Conc corresponds to the sample taken after concentrating to 20 mg/mL, before flash freezing.

Figure 71: Purification of the T. thermophilus mutants analyzed by SDS-PAGE. s/n stands for supernatant, P stands for pellet, CE stands for the elution fraction from the Co2+-NTA column. The following lanes correspond to the elution fractions of the corresponding gel filtration.

HPLC: tR = 5.57 min; MS (ESI+): (m/z) = 514.1335 [M+H]+

Acetyl-Arg-Pro-DBE•HCl (CA 1-194)
The general procedure for the synthesis of 3,5-dinitrobenzyl esters was used, starting from Acetyl-Arg(Pbf)-Pro-OH (150 mg, 0.27 mmol), to give the title activated ester peptide as a white solid lyophilisate (13 mg, 10%).
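The yields reported in these synthesis records follow from the product mass, the starting amount and the peptide molecular weight. As a quick sanity check, the sketch below assumes the free-peptide mass is taken from the observed [M+H]+ (i.e., the HCl counter-ion is excluded; using the monoisotopic value in place of the average molecular weight introduces only a small error):

```python
def percent_yield(product_mg, mh_plus, start_mmol):
    """Percent yield from the product mass (mg), the observed [M+H]+
    value (Da) and the amount of starting material (mmol)."""
    neutral = mh_plus - 1.00728           # free peptide mass in g/mol
    return 100 * (product_mg / neutral) / start_mmol

# Acetyl-Arg-Pro-DBE: 13 mg from 0.27 mmol, [M+H]+ = 494.1591
print(round(percent_yield(13, 494.1591, 0.27)))   # 10
# Formyl-Met-Lys-CBT: 54 mg from 0.49 mmol, [M+H]+ = 446.0484
print(round(percent_yield(54, 446.0484, 0.49)))   # 25
```

Both values reproduce the percentages quoted in the corresponding records, which suggests the yields were calculated on the free-peptide basis rather than on the hydrochloride salt.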
HPLC: tR = 5.21 min; MS (ESI+): (m/z) = 494.1591 [M+H]+

-Arg-CBT•HCl (CA 2-(8)16)
HPLC: tR = 6.25 min; MS (ESI+): (m/z) = 474.1005 [M+H]+

Acetyl-Arg-Pro-CBT•HCl (CA 2-(12)20)
HPLC: tR = 5.84 min; MS (ESI+): (m/z) = 454.1295 [M+H]+

Acetyl-Arg-Asp-CBT•HCl (CA 2-144)

-Ala-CBT•HCl (CA
The general procedure for the synthesis of 4-chlorobenzyl thioesters was used, starting from Acetyl-Arg(Pbf)-Ala-OH (200 mg, 0.31 mmol), to give the title activated thioester peptide as a white solid lyophilisate (44 mg, 33%).
HPLC: tR = 5.20 min; MS (ESI+): (m/z) = 428.0727 [M+H]+

Formyl-Met-Lys-CBT•HCl (CA 3-60)
The general procedure for the synthesis of 4-chlorobenzyl thioesters was used, starting from Formyl-Met-Lys(Boc)-OH (200 mg, 0.49 mmol), to give the title activated thioester peptide as a white solid lyophilisate (54 mg, 25%).
HPLC: tR = 5.77 min; MS (ESI+): (m/z) = 446.0484 [M+H]+

-Ala-Pro-CBT•HCl (CA 3-102)
HPLC: tR = 5.47 min; MS (ESI+): (m/z) = 252.1155 [M+H]+

Acetyl-Arg-Asp-Pro-CBT•HCl (CA 3-104)
The general procedure for the synthesis of 4-chlorobenzyl thioesters was used, starting from Acetyl-Arg(Pbf)-Asp(tBu)-Pro-OH (200 mg, 0.27 mmol), to give the title activated thioester peptide as a white solid lyophilisate (27 mg, 17%).
HPLC: tR = 5.06 min; MS (ESI+): (m/z) = 569.0996 [M+H]+

Formyl-Met-Lys-Phe-CME•TFA (CA 4-104)

-Phe-CME•TFA
HPLC: tR = 3.65 min; MS (ESI+): (m/z) = 375.2104 [M+H]+

Formyl-Met-Orn-Phe-CME•TFA (CA 4-114)

-Lys-Phe-CME•TFA
HPLC: tR = 3.79 min; MS (ESI+): (m/z) = 464.1888 [M+H]+

Formyl-Met-Lys-Phe-CBT•HCl (CA 4-142)
The general procedure for the synthesis of 4-chlorobenzyl thioesters was used, starting from Formyl-Met-Lys(Boc)-Phe-OH (150 mg, 0.27 mmol), to give the title activated thioester peptide as a white solid lyophilisate (62 mg, 39%).
HPLC: tR = 6.49 min; MS (ESI+): (m/z) = 593.1595 [M+H]+

Formyl-Met-Orn-Phe-CBT•HCl (CA 4-144)

-Lys-Phe-CBT•HCl (CA
HPLC: tR = 5.81 min; MS (ESI+): (m/z) = 565.1642 [M+H]+

Acetyl-Lys-Phe-CBT•HCl (CA 4-148)
The general procedure for the synthesis of 4-chlorobenzyl thioesters was used, starting from Acetyl-Lys(Boc)-Phe-OH (120 mg, 0.28 mmol), to give the title activated thioester peptide as a white solid lyophilisate (56 mg, 43%).
HPLC: tR = 6.15 min; MS (ESI+): (m/z) = 476.1411 [M+H]+

Table 3: Summary of well-characterized ligand-dependent arrest peptides in bacteria. The amino acid located in the P-site is underlined. 1 Citation describing the discovery of the listed arrest sequence. 2 Citation identifying important residues. + No alanine screen is published for ErmDL.

Name 1 | Sequence 2 | Biological significance, ligand | Inhibition point
TnaC (Konan and Yanofsky, 1997) | WXXXDXXIXXXXP(*) (Gong and Yanofsky, 2002; Cruz-Vera et al., 2005) | Regulation of the transcription of the tnaBC operon; tryptophan | Termination
Cat86 leader (Brückner and Matzura, 1985) / CmlA leader | MVKTD(K) (Alexieva et al., 1988) | Regulation of chloramphenicol resistance (drug modification enzymes); chloramphenicol | Peptidyl transfer

Table 4: List of chemicals

Name | Elemental formula | Company | Experiment
(+/-)-2-Methyl-2,4-pentanediol (MPD) | C6H14O2 | Hampton Research | Crystallography
(R)-γ-(3-chlorobenzyl)-L-proline hydrochloride | C12H15Cl2NO2 | Sigma Aldrich, St Louis, MO, USA | Amino acid derivative
1,4-Dithiothreitol (DTT) | C4H10O2S2 | Euromedex, Souffelweyersheim, France | In vitro transcription/translation
2-(N-morpholino)ethanesulfonic acid (MES) | C6H13NO4S | Sigma Aldrich, St Louis, MO, USA | Buffer
2-Mercaptoethanol (BME) | C2H6OS | Sigma Aldrich, St Louis, MO, USA | Ribosome purification
4-(2-Hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) | C8H18N2O4S | Sigma Aldrich, St Louis, MO, USA | Buffer for protein purification, crystallography
Acetic acid | C2H4O2 | Sigma Aldrich, St Louis, MO, USA | pH adjustment
Acetone | C3H6O | Sigma
Aldrich, St TCA precipitation Louis, MO, USA Acetonitril C2H5N Sigma Aldrich, St HPLC Louis, MO, USA Acrylamide/bis- Euromedex, PAGE (Protein) acrylamide 30% Souffelweyersheim, solution 19:1 France Acrylamide/bis- Dutscher, Brumath, PAGE (DNA, RNA) acrylamide 40% France solution 19:1 Adenosine 5'- Na2C10H14N5O13P3 Sigma Aldrich, St In vitro transcription/ triphosphate Louis, MO, USA translation, amino disodium salt acylation reaction hydrate Agar Sigma Aldrich, St Bacterial selection Louis, MO, USA Agarose type D-5 Euromedex, DNA electrophoresis Souffelweyersheim, France Table 5 : List of antibiotics 5 Antibiotic Provider Experiment Stock Solvent concentration Ampicillin Euromedex, Cell culture 100 mg/mL Water Souffelweyersheim, France Chloramphenicol Sigma Aldrich, St Cell culture 30 mg/mL Ethanol Louis, MO, USA Erythromycin Sigma Aldrich, St In vitro translation, 2 mM Ethanol Louis, MO, USA structure biology Kanamycin Euromedex, Cell culture 50 mg/mL Water Souffelweyersheim, France Puromycin Sigma Aldrich, St In vitro translation 10 mM Water Louis, MO, USA Streptomycin Sigma Aldrich, St Cell culture 20 mg/mL Water Louis, MO, USA Telithromycin Glenthan In vitro translation DMSO Tetracycline Sigma Aldrich, St Cell culture 20 mg/mL Water 0.1% Louis, MO, USA EtOH Thiostrepton Merck In vitro translation DMSO 2.3 Enzymes Table 6 : Enzymes used for cloning, toeprinting and RNA purification 6 Enzyme Company Experiment Stock solution AMV RTase Promega, Toeprinting 10,000 u /mL Medison, WI, USA BSA Euromedex, Northern Blot Souffelweyershei m, France CCA adding enzyme Purified in the lab CCA end modification DpnI New England Mutagenesis 20,000 u/mL Biolabs (NEB), Ipswich, MA, USA EcoRI HF NEB, Ipswich, MA, Plasmid digestion 20,000 u/mL USA EF-G Purified in the lab In vitro translation EF-P Purified in the lab In vitro translation Table 7 : List of Kits 7 Kits Company Experiment PCR purification kit Qiagen, Hilden, Germany DNA purification Nucleotide removal Qiagen, Hilden, 
Germany Toeprinting, linker ligation kit Minipreparation kit Qiagen, Hilden, Germany extraction of Plasmids from E coli cells Gel extraction kit Qiagen, Hilden, Germany DNA purification from agarose gels Midipreparation kit Macherey & Nagel, Düren, Extraction of Plasmids from E. coli Germany cells 2.5 Equipment 2.5.1 Columns Table 8 : List of used columns for RNA and protein purification 8 Column (material) Company Experiment C4-RP-column (analytical) Grace, Columbia, MD, USA RNA purification C4-RP-column (preparative) Grace, Columbia, MD, USA RNA purification C16-RP-column (analytical) Phenomedex, Torrance, CA, USA RNA purification C16-RP-column (semi-preparative) Phenomedex, Torrance, CA, USA RNA purification C16-RP-column (preparative) Phenomedex, Torrance, CA, USA RNA purification Q-Sepharose (90 mL) GEHeathcare, Chicago, IL, USA RNA purification Superdex 200 16/600 GEHeathcare, Chicago, IL, USA Protein purification Superdex 75 16/600 GEHeathcare, Chicago, IL, USA Protein purification HIS-Select ® Cobalt Affinity Gel Sigma Aldrich, St Louis, MO, USA Protein purification HIS-Select ® Nickel Affinity Gel Sigma Aldrich, St Louis, MO, USA Protein purification Table 9 : Media composition of LB and TB 9 Name Composition of 1 L LB media 5 g LB powder 5 g NaCl TB media 5 g TB powder 100 mL glycerol Plates for bacterial growth contained 1.5% (w/v) Bacto-Agar. All solutions for liquid and solid media were autoclaved. Antibiotics were prepared as 1000x stock solutions and added to plates and liquid media in 1x end concentration. E. coli cells were grown at 37°C under optimal aerobe conditions in baffled shaking flasks. Cells used in this thesis are listed in the following table: Table 10 : List of used E. coli strains with specifics on genotypes and usage E. 
coli strain Genotype Experiment Citation 10 DH5α fhuA2 lac(del)U169 phoA glnV44 Plasmid (Hanahan, 1983) Φ80' lacZ(del)M15 gyrA96 recA1 amplification, relA1 endA1 thi-1 hsdR17 tRNA overexpression XL-1 blue recA1 endA1 gyrA96 thi-1 hsdR17 supE44 relA1 lac Phage: F´ proAB lacIqZ∆M15 Tn10 Tet r Table 11 : 689 media composition 11 Name Composition (listed amounts for 1 L) 689 media 8 g Tryptone 4 g Yeast Extract 2 g NaCl 1x Castenholz Salts 20 mL Nitch Trance elements 10x CARSTENHOLTZ salts 1 g Nitrilotriacetic acid 1g 16 mL 0.03% (w/v) FeCl3 0.6 g CaSO4 0.08 g NaCl 1 g MgSO4 1.03 g KNO3 6.89 g NaNO3 1.11 g Na2HPO4 Nitch trace elements 0.5 mL H2SO4 (conc.) 2.2 g MnSO4 0.5 g ZnSO4 •7 H2O 0.5 g boric acid 16 mg CuSO4•5H2O 25 mg Na2MoO4 46 mg CoCl2•6H2O Table 12 : Composition of a native nucleic acid (TBE)-PAGE 12 Components (stock concentration) Final concentration 10x TBE buffer (900 mM Tris, 900 mM Boric acid, 1x TBE buffer (90 mM Tris, 90 mM Boric 20 mM EDTA) acid, 2 mM EDTA) 40% (w/v) acrylamide (19:1) 6-12% (w/v) acrylamide (19:1) 10%(w/v) APS 100% (v/v) TEMED Table 13 : 13 Composition of a denaturing nucleic (TBU)-PAGE Components (stock concentration) Final concentration 10x TBE buffer (900 mM Tris, 900 mM Boric acid, 20 mM EDTA) 40% (w/v) acrylamide (19:1) Urea 10%(w/v) APS 100% (v/v) TEMED 2x TBE buffer (180 mM Tris, 180 mM Boric acid, 4 mM EDTA) 6-15% (w/v) acrylamide (19:1) 8 M Urea 0.02% (w/v) APS 0.01% (v/v) TEMED To determine the size of the RNA fragments are RNA size standard (NEB, Ipswich, MA, USA or Thermo Fischer, Waltham, MA, USA) was loaded onto the gel. The gels were run for 1 h at 200V Table 14 : Composition of acidic denaturing PAGE 14 protect the ester bond during the heat denaturation step an acidic RNA loading dye (50 mM Na(CH3CO2) pH 4.8, formamide, 250 mM EDTA, 0.25% BPB, 0.25% Xylene blue) was used. The gels were run for 3 h at 160V at 4°C in 100 mM Na(CH3CO2). 
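The stock and final concentrations in the PAGE recipes above follow the usual C1·V1 = C2·V2 dilution rule; a minimal sketch of the volume calculation (the 10 mL total gel volume is an assumed example, not a value taken from the protocol):

```python
def dilution_volume(stock_conc, final_conc, total_volume):
    """Volume of stock needed so that stock_conc * V = final_conc * total_volume."""
    if final_conc > stock_conc:
        raise ValueError("final concentration exceeds stock concentration")
    return final_conc * total_volume / stock_conc

# Example: 40% acrylamide stock diluted to a 12% gel in 10 mL total volume
acrylamide_ml = dilution_volume(40.0, 12.0, 10.0)  # 3.0 mL of stock
# Example: 10% APS stock to 0.02% final in the same 10 mL gel
aps_ml = dilution_volume(10.0, 0.02, 10.0)  # 0.02 mL (20 µL) of stock
```

The same function covers every stock/final pair in Tables 12-14; the remaining volume is made up with water or buffer.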
For staining, the gels were washed with 1x TBE buffer for 10 min and afterward stained with SYBR Gold.

Stock solutions | Final concentration
3 M Na(CH3CO2) pH 4.8 | 100 mM Na(CH3CO2)
40% (w/v) acrylamide (19:1) | 9-20% acrylamide (19:1)
Urea | 8 M urea
10% APS | 0.02% (w/v) APS
100% TEMED | 0.01% TEMED

Furthermore, to

Table 15: Composition of SDS-PAGE

Stock solutions | Final concentrations
Stacking gel |

Besides the samples, a pre-stained protein marker (NEB, Ipswich, MA, USA) was loaded to determine the molecular size of the proteins. Gels were run for 1 h at 200 V at room temperature in 1x SDS running buffer (25 mM Tris, 192 mM glycine, 0.1% SDS), stained with Instant Blue protein stain (expedeon, San Diego, CA, USA) and detected with the GelDoc+ (Biorad, Hercules, CA, USA).

Table 16: List of used probes

Name | Sequence

Protein | Length [aa] | MW [kDa] | ε [mM-1 cm-1]
Tth EF-P | 184 | 20.22 | 15.93
E. coli EF-P | 188 | 20.50 | 25.44
Tth EF-P HQ | 184 | 20.22 | 14.44
Tth EF-P loop | 184 | 20.20 | 14.44
E. coli ProS | 572 | 63.67 | 54.32
E. coli EF-G | 750 | 82.88 | 61.31
E. coli PheST | S: 327, T: 795 | S: 36.83, T: 87.38 | 87.17
E. coli GlySQ | S: 689, Q: 303 | S: 76.81, Q: 34.77 | 118.19
E. coli AsnS | 466 | 52.57 | 62.59
E. coli AspS | 590 | 65.91 | 43.11
E. coli TrpS | 334 | 37.44 | 32.11
T7-RNAP | 883 | 98.86 | 141.01

Table 18: Pipetting scheme for gene amplification from genomic DNA

Component [stock concentration] | Final concentration
HF-Buffer (NEB, Ipswich, MA, USA) [5x] | 1x
dNTPs [10 mM each] | 200 µM each
Forward primer [10 µM] | 200 nM
Reverse primer [10 µM] | 200 nM
Genomic DNA | 2 µM
PhuDNAP [2,000 u/mL] | 0.02 u/µL
Nuclease-free water | to final volume

The corresponding PCR program is shown in the following table:

Table 19: PCR program for gene extraction

Temperature | Time | Function | Cycles
98°C | 30 s | Enzyme activation | 1x
98°C | 30 s | Denaturation of the vector and PCR fragment | 30x
50-60°C | 30 s | Primer annealing |
72°C | 2 min | Primer extension |
4°C | ∞ | Incubation until further usage | 1x

Table 20: Pipetting scheme for primer annealing PCR

Component [stock concentration] | Final concentration
HF-Buffer [5x] | 1x
dNTPs [10 mM each] | 200 µM each
Forward amplifying primer [10 µM] | 20 nM
Forward primer 1 [1 µM] | 2 nM
Reverse primer 1 [1 µM] | 2 nM
Forward primer 2 [1 µM] | 2 nM
Reverse primer 2 [1 µM] | 2 nM
Reverse amplifying primer [10 µM] | 20 nM
PhuDNAP [2,000 u/mL] | 0.2 u
Nuclease-free water | to a final volume

Table 21: PCR program for primer annealing

Temperature | Time | Function | Cycles
98°C | 30 s | Enzyme activation | 1x
98°C | 15 s | Denaturation of the vector and PCR fragment | 20x
45-60°C | 15 s | Primer annealing |
72°C | 15-30 s | Primer extension |
4°C | ∞ | Incubation until further usage | 1x

Table 22: List of vectors with their corresponding restriction enzymes and downstream application

Vector name | Antibiotic resistance | Restriction enzymes | Experiment
pBAT4 | Amp | NdeI, NcoI | Protein overexpression
pUC19 | Amp | EcoRI, HindIII |
pBSTNAV2OK | Amp | EcoRI, PstI | tRNA overexpression
pBSTNAV3S | Amp | SacI, PstI |

Table 23: Pipetting scheme for insertion of gene into pUC19 or pBAT4

Component [stock concentration] | Final concentration
Q5-Reaction Buffer [5x] | 1x
Q5 High GC Enhancer [5x] | 1x
dNTPs [10 mM each] | 200 µM each
Insert | 60 nM
Digested vector | 0.6 nM
Q5-DNAP [2 u/µL] | 0.2 u
Nuclease-free water |

Table 24: PCR program for MEGAWHOP

Temperature | Time | Function | Cycles
98°C | 30 s | Enzyme activation | 1x
98°C | 30 s | Denaturation of the vector and PCR fragment | 25x
50-60°C | 30 s | Primer annealing |
72°C | 2 min | Primer extension |
4°C | Hold | Incubation until further usage | 1x

Table 25: Pipetting scheme for PCR based mutagenesis

Component [stock concentration] | Final concentration
Q5-Reaction Buffer NEB [5x] | 1x
Q5 High GC Enhancer [5x] | 1x
dNTPs [10 mM each] | 200 µM each
Forward sequencing primer [10 µM] | 200 nM
Reverse sequencing primer [10 µM] | 200 nM
Vector with wild type sequence | 0.6 nM
Q5-DNAP [2 u/µL] | 0.2 u
Nuclease-free water | to final volume

The following PCR program was used:

Table 26: PCR program for PCR based mutagenesis

Temperature | Time | Function | Cycles
98°C | 30 s | Enzyme activation | 1x
98°C | 30 s | Denaturation of the vector and PCR fragment | 25x
50-60°C | 30 s | Primer annealing |
68°C | 3 min | Primer extension |
4°C | ∞ | Incubation until further usage | 1x

Table 27: Pipetting scheme for colony PCR

Component [stock concentration] | Final concentration
HF-Buffer NEB [5x] | 1x
dNTPs [10 mM each] | 200 µM each
Forward sequencing primer [10 µM] | 200 nM
Reverse sequencing primer [10 µM] | 200 nM
PhuDNAP [2,000 u/mL] | 0.2 u
Nuclease-free water |

Table 28: Reaction scheme for colony PCR

Temperature | Time | Function | Cycles
98°C | 10 min | Cell lysis | 1x
98°C | 30 s | Denaturation of the vector and PCR fragment | 25x
52°C | 30 s | Primer annealing |
72°C | 10 s to 2 min | Primer extension |
4°C | Hold | Incubation until further usage | 1x

Table 29: List of expression vectors used for protein purification

Protein (organism) | Vector | Citation
EF-G (E. coli) | pET46-Ek/LIC | (Mikolajka et al., 2011)
EF-P (E. coli) | pET14b | (Peil et al., 2012)
EF-P (T. thermophilus) + mutants | pET14b | (Blaha et al., 2009)
YjeA/K (E. coli) | pRSFDuet | (Ude et al., 2013)
ProS (E. coli) | pBAT4 | Cloned by Dr. KK Inampudi
TrpS (E. coli) | pBAT4 | This work
GlySQ (E. coli) | pBAT4 | This work
AsnS (E. coli) | pBAT4 | This work
AspS (E. coli) | pBAT4 | This work
PheST (E. coli) | pQE3 | Provided by the Steitz lab
T7RNAP (P226L) | pAR1219 | (Guillerez et al., 2005)

Table 30: List of tRNA constructs for purification and expression

Gene | Vector | Induction | Citation
tRNAi Met AUG | pBS | constitutive | (Schmitt et al., 1999)
tRNA Phe UUU | pBR322 | constitutive | (Jünemann et al., 1996)
tRNA Phe UUU | pUC19 | L-arabinose | Dr. KK Inampudi
tRNA Pro CCG | pBSTNAV2OK | constitutive | This thesis
tRNA Pro CCG | pBSTNAV3S | constitutive | This thesis
tRNA Trp UGG | pUC19 | L-arabinose | This thesis
tRNA Gly GGG | pUC19 | L-arabinose | This thesis
tRNA Asp GAC | pUC19 | L-arabinose | This thesis
tRNA Asn AAC | pUC19 | |

Table 31: List of used PURExpress systems from NEB, Ipswich, MA, USA with the corresponding experiments

Name | Separated factors | Experiments
PURExpress system, no release factors | ΔRF1, 2, 3 | Characterization of arrest peptides; EF-P activity
Custom made PURExpress system | Δribosomes, ΔRF1, 2, 3, ΔtRNA, Δaa | fMet-control; tRNA activity tests; activity test of peptidylated tRNA; testing the secondary structure of mRNA

The reactions were assembled as recommended for a total volume of 5 µL per reaction. Additionally, 2 pmol Yakima Yellow-labeled DNA oligonucleotide, which is reverse complementary to the 3' end of the mRNA, 1 pmol DNA template or 5 pmol mRNA template, and 25 µM antibiotic (Orelle et al., 2013b; Starosta et al., 2014a) were used in the reaction.
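When many of these 5 µL reactions are run in parallel, the per-reaction volumes are commonly scaled into a master mix with a small pipetting excess; a minimal sketch (the component names, volumes and 10% excess are illustrative assumptions, not the exact recipe above):

```python
def master_mix(per_reaction_ul, n_reactions, excess=0.1):
    """Scale per-reaction volumes (in µL) into master-mix volumes,
    adding a fractional pipetting excess (default 10%)."""
    factor = n_reactions * (1.0 + excess)
    return {component: round(vol * factor, 2)
            for component, vol in per_reaction_ul.items()}

# Illustrative 5 µL reaction split into three pipetting steps
per_rxn = {"solution A": 2.0, "solution B": 1.5, "template/oligo/water": 1.5}
mix = master_mix(per_rxn, n_reactions=8)
```

For eight reactions this gives, e.g., 17.6 µL of "solution A" (2.0 µL × 8 × 1.1).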
The compositions of the control reactions are listed in the following table:

Table 32: Reaction mixtures for toeprinting control reactions

Reaction | PURExpress system | Reaction mix | Toeprint corresponds to step in translational cycle
fMet-control | Δribosomes, ΔRF1, 2, 3, ΔtRNA, Δaa | 1 µL factor A; 0.8 µL factor mix; 0.6 µL ribosomes; 150 µM L-methionine; 100 µM tRNAi Met | Start codon
Onc112 control | ΔRF1, 2, 3 | 2 µL Solution A; 1.5 µL Solution B; 100 µM Onc112 |
tRNA activity | Δribosomes, ΔRF1, 2, 3, ΔtRNA, Δaa | Same mix as fMet-control; 150 µM amino acid; 100 µM purified tRNA | Elongation
Non-released ribosomes | ΔRF1, 2, 3 | 2 µL Solution A; 1.5 µL Solution B | Termination
mRNA secondary structure | Δribosomes, ΔRF1, 2, 3, ΔtRNA, Δaa | 1 µL factor A; 0.8 µL factor mix; 0.5 µL total tRNA; 0.5 µL amino acids | Secondary structure of mRNA

All components were combined, and the reaction was incubated for 15 min at 37°C to allow transcription and translation. During this time, potential arrest complexes can be formed.
To detect arrest complexes, the 5'-labeled oligonucleotide was extended by reverse transcription. 1 µL reverse transcriptase mix (100 µM dNTPs, four volumes of PURE system buffer (9 mM Mg(CH3CO2)2, 5 mM K-phosphate pH 7.3, 95 mM K-glutamate, 5 mM NH4Cl, 0.5 mM CaCl2, 1 mM spermidine, 8 mM putrescine, 1 mM DTT) and five volumes of AMV RTase) was added.

Table 33: List of 3x ddNTP stock solutions

Stock solution | Composition
ddGTP | 20 µM of each dNTP, 30 µM ddGTP
ddATP | 20 µM of each dNTP, 350 µM ddATP
ddTTP | 20 µM of each dNTP, 600 µM ddTTP
ddCTP | 20 µM of each dNTP, 200 µM ddCTP

The ddNTP mixtures were mixed with the sequencing reaction master mix, which is listed in the following table:

Table 34: Master mix for the sequencing reaction

Component | Concentration
1x fmol sequencing buffer | 50 mM Tris•HCl pH 9.0, 2 mM MgCl2
Yakima Yellow NV primer (2 µM) | 0.3 µM primer
HemoKlen Taq DNAP (NEB, Ipswich, MA, USA) | 1 µL per 17 µL master mix
DNA template | 0.06 µM DNA
Water | to final volume

The following PCR program was used:

Table 35: Program for the sequencing reaction

Temperature | Time | Function | Cycles
95°C | 30 s | Heat activation of enzyme | 1x
95°C | 30 s | Denaturation of PCR fragments | 30x
42°C | 30 s | Primer annealing |
70°C | 1 min | Primer extension |
72°C | 5 min | Final extension | 1x
4°C | ∞ | Incubation until further usage | 1x

Table 36: Gel mix for sequencing PAGE

Components (stock concentration) | Final concentration
10x TBE buffer (900 mM Tris, 900 mM boric acid, 20 mM EDTA) | 2x TBE buffer (180 mM Tris, 180 mM boric acid, 4 mM EDTA)
40% (w/v) acrylamide (19:1) | 7.5% (w/v) acrylamide (19:1)
Urea | 8 M
10% (w/v) APS |
100% (v/v) TEMED |

Table 37: List of synthetic mRNAs for structural biology studies

Name | Sequence | Project
mRNA_MRFR | GGCAAGGAGGUAAAA AUG CGG UUU CGG UAA | M(+)F(+)
mRNA_MRFR_AGG | GGCAAGGAGGUAAAA AUG AGG UUU AGG UAA | M(+)F(+)
mRNA_MFR_AGG | GGCAAGGAGGUAAAA AUG UUU AGG UAA | M(+)F(+)
mRNA_MPP | GGCAAGGAGGUAAAA AUG CCG CCG | Poly-prolines
mRNA_PP | GGCAAGGAGGUAAAA CCG CCG | Poly-prolines
mRNA_MF | GGCAAGGAGGUAAAA AUG UUC UAA | PrAMPs/Poly-prolines

To form the complex of interest, 5 µM 70S T. thermophilus ribosomes were incubated with 10 µM mRNA and 25 µM of the potential antibiotic for 10 min at 55°C in T. thermophilus ribosome buffer (5 mM HEPES•KOH pH 7.5, 30 mM KCl, 10 mM MgCl2, 50 mM NH4Cl). This step allows

Table 1: Data collection and refinement statistics

Tth70S-Onc112 (a)
(a) Structure determined from a single crystal.

Table S1: X-ray data processing and crystallographic refinement statistics

 | Bac7(1-16) | MetI | Pyr
PDB code | 5F8K | 5FDU | 5FDV
Space group | P212121 | P212121 | P212121
Unit cell dimension a | 209.8 Å | 209.7 Å | 209.9 Å
Unit cell dimension b | 450.3 Å | 448.1 Å | 450.1 Å
Unit cell dimension c | 622.2 Å | 623.4 Å | 622.9 Å
α | 90.0° | 90.0° | 90.0°
β | 90.0° | 90.0° | 90.0°
γ | 90.0° | 90.0° | 90.0°
Data processing: | | |
Resolution | 50 Å - 2.8 Å | 50 Å - 2.9 Å | 50 Å - 2.8 Å
RMerge | 51.3% (233.9%) | 17.0% (181.0%) | 17.8% (229.7%)
I/σI | 5.71 (0.95) | 11.61 (1.10) | 15.99 (1.29)
CC1/2 | 95.7 (16.1) | 99.7 (34.9) | 99.9 (41.1)
Completeness | 99.6% (97.6%) | 99.6% (99.5%) | 100% (100%)
Redundancy | 8.3 (8.1) | 6.9 (6.7) | 13.8 (13.4)
Refinement: | | |
Rwork/Rfree | 24.8% / 29.2% | 18.3% / 23.4% | 18.9% / 24.0%
Bond deviations | 0.018 Å | 0.030 Å | 0.029 Å
Angle deviations | 1.083° | 1.976° | 1.942°
Figure of merit | 0.80 | 0.84 | 0.83
Ramachandran outliers | 0.56% | 0.95% | 0.87%
Favorable backbone | 94.3% | 92.4% | 93.3%

Table 38: Refinement and model statistics, single molecular reconstruction.

Bac7(1-16), Pyrrhocoricin (Pyr) and Metalnikowin I (Met I). The resolution lies between 2.8 Å and 3.2 Å. In contrast to a growing nascent peptide chain, the PrAMPs bind in a reverse orientation within the ribosomal exit tunnel.
A network of potential hydrogen bonds, combined with stacking interactions of the peptide with ribosomal residues, stabilizes the peptides in the exit tunnel and confers on them an adequate structure.

Table 41: DNA primers for tRNA cloning

Name | Sequence | Plasmid | Restriction enzymes
tRNAPro_CGG_NAV2-FWshort | AACTTTGTGTAATACCGG | pBSTNAV2OK | EcoRI, PstI
tRNAPro_CGG_NAV2-FW1 | CTTTGTGTAATACCGGTGATTGGCGCAGCCTGGTAGCG | pBSTNAV2OK | EcoRI, PstI
tRNAPro_CGG_NAV2-Rev2 | ACCTCCGACCCCTTCGTCCCGAACGAAGTGCGCTACCAGGCTG | pBSTNAV2OK | EcoRI, PstI
tRNAPro_CGG_NAV2-FW2 | GGGGTCGGAGGTTCGAATCCTCTATCACCGACCA | pBSTNAV2OK | EcoRI, PstI
tRNAPro_CGG_NAV2-Rev2 | CTTTCGCTAAGATCTGCAGTGGTCGGTGATAGAGG | pBSTNAV2OK | EcoRI, PstI
tRNAPro_CGG_NAV2-Revshort | CTTTCGCTAAGATCTGC | pBSTNAV2OK | EcoRI, PstI
tRNAPro_CGG_NAV3-FW1 | GTGTAATACTTGTAACGCCGGTGATTGGCGCAGCCTGGTAGCG | pBSTNAV3OK | SacII, PstI
tRNAPro_CGG_NAV3-FW1_short | GTGTAATACTTGTAACGCC | pBSTNAV3OK | SacII, PstI
pUC19 T7 promotor RV | TATAGTGAGTCGTATTAgATTCACTGGCCGTCG | pUC19 | EcoRI, HindIII
pUC19 T7 terminator FW | CTAGCATAACCCCTTGGGGCCTCTAAACGGGTCTTGAGGGGTTTTTTGAAGCTTGGCGTAATCATGG | pUC19 | EcoRI, HindIII
WtRNA FV T7prom | CGACTCACTATAGccctaattAGGGGCGTAGTTCAATTG | pUC19 | EcoRI, HindIII
WtRNA RV beg | CGGTTTTGGAGACCGGTGCTCTACCAATTGAACTACGC | pUC19 | EcoRI, HindIII
WtRNA FV end | CGGTCTCCAAAACCGGGTGTTGGGAGTTCGAGTCTCTCCGCCCC | pUC19 | EcoRI, HindIII
WtRNA RV T7term | CAAGGGGTTATGCTAGggatgatttcTGGCAGGGGCGGAGAGAC | pUC19 | EcoRI, HindIII

Table 42: DNA primers for toeprinting templates

Name | DNA sequence | Amino acid sequence

Acknowledgments

Firstly, I would like to thank all members of the jury, Prof Ijsbrand Kramer, Dr. Emmanuelle Schmitt, Prof Hiroaki Suga, Dr. Rémi Fronzes and Dr. Yaser Hashem, for taking the time to read and evaluate my thesis.
My research was funded by a pre-doctoral fellowship from INSERM and the Région Aquitaine.

Next, I want to thank Daniel Wilson and his group for constant input, knowledge exchange and for sending the plasmids expressing EF-P and EF-G from E. coli. Additionally, I want to thank Fabian

ACKNOWLEDGMENTS

We thank the staff at the European Synchrotron Radiation Facility (beamline ID-29) for help during data collection and B. Kauffmann and S. Massip at the Institut Européen de Chimie et Biologie for help with crystal freezing and screening. We also thank C. Mackereth for discussions and advice. This research was supported by grants from the Agence Nationale pour la Recherche (ANR-14-CE09-0001 to C.A.I., G.G. and D.N.W.), Région Aquitaine (2012-13-01-009 to C.A.I.), the Fondation pour la Recherche Médicale (AJE201133 to C.A.I.), the European Union (PCIG14-GA-2013-631479 to C.A.I.), the CNRS (C.D.) and the Deutsche Forschungsgemeinschaft (FOR1805, WI3285/4-1 and GRK1721 to D.N.W.). Predoctoral fellowships from the Direction Générale de l'Armement and Région Aquitaine (S. Antunes) and INSERM and Région Aquitaine (A.C.S.) are gratefully acknowledged.

ACCESSION NUMBERS

PDB IDs: 5F8K, 5FDU, 5FDV.

ACKNOWLEDGEMENT

We thank the staff at the SOLEIL synchrotron (beamlines PROXIMA-2A and PROXIMA-1) for help during data collection and B. Kauffmann and S. Massip at the Institut Européen de Chimie et Biologie for help with crystal freezing and screening.

FUNDING

Agence Nationale pour la Recherche [ANR-14-CE09-0001 to C.A.I., D.N.W.]; Conseil Régional d'Aquitaine [2012-13-01-009 to C.A.I.]; European Union [PCIG14-GA-2013-631479 to C.A.I.]; Deutsche Forschungsgemeinschaft [FOR1805, WI3285/4-1, GRK1721 to D.N.W.]; Fondo Ricerca di Ateneo [FRA2014 to M.S.]; Institut National de la Santé et de la Recherche Médicale Pre-doctoral Fellowship (to A.C.S.); Conseil Régional d'Aquitaine Pre-doctoral Fellowship (to A.C.S.).
Funding for open access charge: Institut National de la Santé et de la Recherche Médicale [PCIG14-GA-2013-631479 to C.A.I.].

COMPETING FINANCIAL INTERESTS

The authors declare no competing financial interests. Reprints and permissions information is available online at http://www.nature.com/reprints/index.html.

Supplementary Figure 1: Overlap of Onc112 with nascent polypeptide chains in the ribosome exit tunnel.

Inhibitory activity of Onc112 peptide derivatives. (a-b) Effect of Onc112 (red) and the Onc112 derivatives Onc112-L7Cha (blue) and Onc112-D2E (olive) on (a) the overnight growth of E. coli strain BL21(DE3) and (b) the luminescence resulting from the in vitro translation of firefly luciferase (Fluc). In (a), the error bars represent the standard deviation (s.d.) from the mean for a triplicate experiment (n=3). In (b), the experiment was performed in duplicate (n=2). The growth or luminescence measured in the absence of peptide was assigned as 100%.

Supplementary Figure 5: Validation of Onc112 and derivatives. The toeprint assay was performed using the DNA template encoding MRFRI* (Figure 27). The resulting sequencing PAGE revealed that the 70S T. thermophilus ribosomes arrest on the second codon (blue arrow) and cannot proceed to the third codon to form the arrest complex (dark blue arrow) in the presence of erythromycin. The T. thermophilus ribosomes are active, as in the absence of erythromycin some ribosomes are arrested at the isoleucine codon. As release factors are omitted, these complexes are not released (Figure 27, red arrow). This indicates that

Data obtained from crystals containing 100 µM T. thermophilus EF-P resulted in a well-defined positive density. Into this density, the initial model of T. thermophilus EF-P in complex with the 70S T. thermophilus ribosome (PDB code: 4V6A; Blaha et al., 2009) was used for rigid-body fitting.
The same model was used to display the positive minimal-bias difference map for the other conditions (Figure 51). Complexes co-crystallized in the presence of 20 µM of E. coli or T. thermophilus EF-P show nearly no density for the factor, which could clearly be seen when higher concentrations of T. thermophilus EF-P were used (Figure 51). In summary, no data could be obtained for E. coli EF-P bound to the 70S T. thermophilus ribosome, and polyproline-arrested E. coli ribosomes were not released in the presence of unmodified T. thermophilus EF-P.

Studies of T. thermophilus EF-P mutants containing the flexible loop region from E. coli EF-P

One main question of this project was to study the impact of the modification of EF-P on the release of polyproline-mediated arrest. As crystals grown in the presence of 70S T. thermophilus ribosomes and high concentrations of E. coli EF-P did not diffract to high resolution, and as the modification of T. thermophilus EF-P remains unknown, a new strategy was devised to overcome this issue, whereby T. thermophilus EF-P was systematically mutated to introduce the amino acid sequence from E. coli EF-P that is crucial for the binding of the modification enzymes, without disrupting the binding of T. thermophilus EF-P to the 70S T. thermophilus ribosome. To do so, the structure of E. coli EF-P in complex with its modification enzymes (Yanagisawa et al., PDB code: 3A5Z) and the crystal structure containing the 70S T. thermophilus ribosome and T. thermophilus EF-P (Blaha et al., 2009) were compared.

The ribosomes were dissociated by reducing the Mg2+ ion concentration and the subunits were purified by sucrose gradients (A).
Subsequently, the ribosomes were re-assembled and the 70S peak was taken (Blaha et al.).

MKF-CME flexizyme reaction

Sequencing results

Sequencing results were analyzed using the alignment tool Multalin.
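Multalin itself performs the multiple sequence alignment; as a toy illustration of the underlying per-position comparison between an expected construct sequence and a sequencing read, an ungapped identity check might look like this (the example sequences are arbitrary, not from the cloned constructs):

```python
def percent_identity(expected, observed):
    """Ungapped per-position identity (%) between two equal-length sequences."""
    if len(expected) != len(observed):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(expected, observed))
    return 100.0 * matches / len(expected)

# 5 of 6 positions match -> ~83.3% identity
identity = percent_identity("ATGCGG", "ATGAGG")
```

Real verification of course requires a gapped alignment (as Multalin produces) before positions can be compared.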
Keywords: Polymeric nanoparticles, MART1, poly (benzyl glutamate), melanoma, biopolymers, liposomes, β-lapachone, wound healing Nanopartículas poliméricas, MART1, poli (benzil glutamato), melanoma, biopolímeros, lipossomas, β-lapachona, cicatrização Development and characterization of targeted MART-1-nanoparticles for melanoma treatment and β-lapachone-loaded liposomal in hydrogel Titre : Développement et caractérisation de nanoparticules conjuguées à MART1 pour le traitement du mélanome et de liposomes contenant de la bêta-lapachone et incorporés dans un hydrogel destiné à la cicatrisation des plaies. Mots clés : Nanoparticules polyméres, MART1, poly(glutamate de benzyle), mélanome, biopolymères, liposomes, β-lapachone, cicatrisation. Résumé : L'objectif principal de cette travail a été le développement, la caractérisation et l'évaluation in vitro et in vivo de différents nanovecteurs avec, d'une part, des nanoparticules spécifiques pour le traitement de la mélanome et des liposomes de β-lapachoneincorporés dans des hydrogels de biopolymère pour la cicatrisation de blessures topiques. La première partie de cette thèse présente une revue de la littérature et les récentes avancées dans le domaine du ciblage des cellules circulantes mésenchymateuses issues des mélanomes. Les principaux biomarqueurs de ces cellules ont été passes en revue afin de définir les caractéristiques qu'il est souhaitable de conférer aux nanovecteurs. La partie expérimentale du travail a consisté à preparer des nanoparticules (non sphériques) par nanoprécipitation de copolymères dérivés du poly(γ-benzyl-Lglutamate). Ces nanoparticules, taille comprise entre 20 et 100 nm et porteuses d'une charge négative (-3 à -30 mV). Elles ont ensuite été combinées avec l'anticorps MART-1 spécifique de la membrane des cellules de mélanome, en mettant en oeuvre des liaisons de type biotinestreptavidine. La combinaison de l'anticorps en surface des nanoparticules a été évaluée par western blot. 
L'affinité des immunenanoparticules pour des cellules modèles du mélanome (lignée B16-GFP) et pour des cellules endothéliales de la veine ombilicale humaine (HUVECs) a ensuite été évaluée in vitro par cytométrie de flux. Par ailleurs, étant destinées à une injection intra-veineuse, il était important d'évaluer le degré d'activation du système du complément induit par les nanoparticules. La technique d'immunoélectrophorèse 2D mise en oeuvre a permis de conclure à une activation limitée, favorable à une augmentation du temps de circulation sanguine après injection IV. Par ailleurs, les nanoparticules présentaient une faible cytotoxicité (test au MTT) vis à vis des cellules de mélanome ou des cellules endothéliales. En terme de capture cellulaire, les immuno-nanoparticules fonctionnalisées par MART1, anticorps spécifique pour la reconnaissance de l'antigène surexprimé dans des cellules de mélanome, a été augmentée de 40 à 50% par rapport au contrôle. La deuxième partie de cette thèse a été consacrée au développement, à la caractérisation et l'évaluation in vivo de l'activité cicatrisante de la β-lapachone encapsulée dans des liposomes multilamellaires, eux-mêmes incorporés dans un hydrogel d'un biopolymère produit par la bactérie Zoogloea sp. Ces formulations originales (β-lap-Lipo/ZBP/HEC) présentaient un pH et un comportement rhéologique approprié pour l'application topique, ainsi que la capacité de ralentir la liberation de la βlapachone à partir de l'hydrogel. Une étude hystopathologique détaillée de l'activité cicatrisante a été conduite dans un modèle in vivo et a permis de montrer que l' hydrogel de biopolymère était capable de stimuler la reparation tissulaire, d'augmenter la cellularité locale, de favoriser les fibroblastes, les cellules inflamatoires, les vaisseaux sanguins et la production de fibrilles de collagène pendant la phase proliférative de la cicatrisation. 
De plus, la formulation β-lap-Lipo/ZBP/HEC a favorisé l'angiogenèse locale et a permis de diminuer l'inflammation de la blessure, démontrant le potentiel de cette formulation originale de β-lap-Lipo/ZBP/HEC dans la thérapie des lesions cutanées. Pour conclure, les nanovecteurs développés constituent des outils intéressants en vue d'intercepter les cellules circulantes du mélanome, tandis que les formulations liposomales associant des biopolymers originaux présentent un potential intéressant dans la cicatrisation des blessures. The main aim of this work was the development, characterization and in vitro and in vivo evaluation of different nanocarriers with specific nanoparticles for the treatment of melanoma and β-lapachone liposomes incorporated in biopolymer hydrogels for the healing of topical wounds. The first part of this thesis presents a review of the literature and recent advances in the field of targeting mesenchymal circulating cells derived from melanomas. The main biomarkers of these cells have been reviewed to define the suitable characteristics of this nanocarriers. The experimental part of the work consisted of development of nanoparticles (nonspherical) by nanoprecipitation of copolymers derived from poly (γ-benzyl-Lglutamate). These nanoparticles, size between 20 and 100 nm and carrying a negative charge (-3 to -30 mV) were then combined with the MART-1 antibody, specific for the melanoma cell membrane, by biotin-streptavidin binding. The binding of the antibody on the surface of nanoparticles was evaluated by Western blot. The affinity of immuno-nanoparticles for melanoma cells (B16-GFP line) and for endothelial cells of human umbilical vein (HUVECs) was then evaluated in vitro by flow cytometry and, being intended for intravenous injection, it was important to evaluate the degree of activation of the complement system induced by the nanoparticles. 
The 2D immunoelectrophoresis technique used made it possible to conclude that complement activation was limited, which is favourable for increasing the blood circulation time of the nanoparticles after intravenous injection. The nanoparticles exhibited low cytotoxicity (MTT assay) against melanoma cells and endothelial cells. In terms of cellular uptake, the uptake of immuno-nanoparticles functionalized with MART1, a specific antibody recognizing the antigen overexpressed in melanoma cells, was increased by 40 to 50% compared to the control. The second part of this thesis was dedicated to the development, characterization and in vivo evaluation of the wound healing activity of β-lapachone encapsulated in multilamellar liposomes and incorporated in a hydrogel of a biopolymer produced by the bacterium Zoogloea sp. These original formulations (β-lap-Lipo/ZBP/HEC) had a pH and rheological behavior suitable for topical application, as well as the ability to slow the release of β-lapachone from the hydrogel. A detailed histopathological study of the wound healing activity was conducted in an in vivo model and showed that the biopolymer hydrogel was able to stimulate tissue repair, increasing the local cellularity, fibroblasts, inflammatory cells, blood vessels and the production of collagen fibrils during the proliferative phase of healing. In addition, the β-lap-Lipo/ZBP/HEC formulation promoted local angiogenesis and reduced inflammation of the wound, demonstrating the potential of this original β-lap-Lipo/ZBP/HEC formulation in the therapy of cutaneous lesions. To conclude, the developed nanocarriers are interesting approaches for intercepting circulating melanoma cells, while liposomal formulations combined with original biopolymers have interesting potential for wound healing applications.
The main objective of this work was the development, characterization and in vitro and in vivo evaluation of different nanocarriers: specific nanoparticles for the treatment of melanoma, and β-lapachone-loaded liposomes incorporated in biopolymer hydrogels for the healing of topical wounds. The first part of this thesis presents a review of the literature and recent advances in the field of drug targeting, through the use of nanocarriers, toward mesenchymal circulating cells derived from melanomas. The main biomarkers present on these cells were described with the aim of defining suitable characteristics for the development of nanocarriers. The experimental part of the work consisted of the development of (non-spherical) nanoparticles by nanoprecipitation of copolymers derived from poly(γ-benzyl-L-glutamate). These nanoparticles, with sizes between 20 and 100 nm and a negative charge (-3 to -30 mV), were then combined with the MART-1 antibody, specific for the melanoma cell membrane, through biotin-streptavidin binding. The binding of the antibody to the surface of the nanoparticles was evaluated by Western blot. The affinity of the immuno-nanoparticles for melanoma cells (B16-GFP line) and for human umbilical vein endothelial cells (HUVECs) was then evaluated in vitro by flow cytometry and, since they are intended for intravenous injection, it was important to evaluate the degree of activation of the complement system induced by the nanoparticles. The 2D immunoelectrophoresis technique made it possible to conclude that activation was limited and favorable to increasing the blood circulation time of the nanoparticles after intravenous injection. The nanoparticles showed low cytotoxicity (MTT assay) against melanoma and endothelial cells.
In terms of cellular uptake, that of the immuno-nanoparticles functionalized with MART-1, a specific antibody for the recognition of the antigen overexpressed in melanoma cells, was increased by 40 to 50% compared with the control. The second part of this thesis was dedicated to the development, characterization and in vivo evaluation of the wound healing activity of β-lapachone encapsulated in multilamellar liposomes and incorporated in a hydrogel of a biopolymer produced by the bacterium Zoogloea sp. These original formulations (β-lap-Lipo/ZBP/HEC) presented a pH and a rheological behavior suitable for topical application, as well as the ability to slow the release of β-lapachone from the hydrogel. A detailed histopathological study of the wound healing activity was conducted in an in vivo model and showed that the biopolymer hydrogel was able to stimulate tissue repair, increasing local cellularity, fibroblasts, inflammatory cells, blood vessels and the production of collagen fibrils during the proliferative phase. In addition, the β-lap-Lipo/ZBP/HEC formulation promoted local angiogenesis and reduced wound inflammation, demonstrating the potential of this original β-lap-Lipo/ZBP/HEC formulation in the therapy of cutaneous lesions. To conclude, the developed nanocarriers proved to be interesting approaches for intercepting circulating melanoma cells, while the liposomal formulations combined with the biopolymer have an interesting potential for wound healing applications.
ACKNOWLEDGMENTS
There are many things to be grateful for at this moment, and the conclusion of my PhD will certainly be a big landmark in my life. When I was a child I had the desire to be a scientist, to discover new things and to travel the world exchanging experiences and knowledge. Nevertheless, reality was not as simple as it seemed to be.
During my graduation, master's degree and PhD, many difficult tasks have come my way, impelling me to be strong, resilient and hardworking. On the other hand, great moments of joy, reciprocity and passion also accompanied me on this journey. Naturally, there is still a long path to go towards the construction of scientific knowledge. I would like to thank the wonderful people who crossed my path, who have helped me to achieve my goals and who always encouraged me to be the best version of myself. First, I will be eternally grateful to my family, my parents and brothers, for supporting me at all moments. We are a very close-knit family, full of love and compassion. A special thanks to my father, who is my mentor and main supporter. What would life be without friends as well? So, my special thanks go to my childhood friends, who know me better than I know myself: Elisa, Duda, Tatiane, Rivanda, Pinelli, Renata Lima, Camila Farias and Marcela. Thank you also to my college friends: Amanda, Larissa Morgana, Giovanna, Dea, Rebecca, Vinicius and Fernanda. Still talking about great friends, but now the new ones, I really must thank my colleagues from the Maison du Brésil for all the friendship and support during my stay in Paris: Taty, Bárbara, Priscila, Renata, Nayara, Rodrigo, Igor, Júlia, Gustavo, Fabiano etc. Thank you very much for being part of one of the great moments of my life. I would like to greatly thank my supervisors and co-supervisors, Nereide Stela Santos Magalhães, Gilles Ponchel and Christine Vauthier, for providing amazing opportunities for knowledge and for supporting me in all my needs and requirements. You are for me the biggest examples of competence, intelligence and resilience. I would like to express my huge gratitude to my teammates from Brazil and France, who made this journey less laborious and more joyful.
Any Taylor (my Mexican sister) and Mariane Lira (my roommate), I have no words to thank you girls for all your support and friendship; you were a key part of my development as a person and as a professional. Any, thank you for teaching me and helping me to construct this thesis. Isabella Macário, Rafaela Ferraz, Marília Evelynn, Daniel Charles, Jean Baptist Coty, Francisco Junior Xavier, thank you for all the priceless contributions to this work and for trusting and believing in me. Finally, but not least, thanks to all my SLC teammates (Milena, Marcela, Dani, Vanderval, Clara, Victor etc.) and Institut Galien/Paris-Sud teammates (André, Henrique, Cassiana, Gabriela, Andreza etc.) and professors (Juliette Verganaud, Kawtar Bauchemal, Valerie Nicolas etc.) who helped me enormously with all their knowledge and friendship.
"If you are distressed by anything external, the pain is not due to the thing itself but to your own estimate of it; and this you have the power to revoke at any moment." Marcus Aurelius Antoninus
GENERAL INTRODUCTION
Nanotechnology has been widely applied in the diagnosis and treatment of several diseases (ANGELI et al.). Its applications span the biomedical field, including vaccination, diagnostics and drug delivery (BOULAIZ et al.; CHOWDHURY et al.; REED et al.). In the pharmaceutical field, nanocarriers have been used to improve the efficacy of therapeutic and/or diagnostic agents. In this context, one of the cancer types with the greatest metastatic potential and chemotherapeutic resistance is melanoma.
This tumor arises from the malignant transformation of melanocytes of neural crest origin and is considered the deadliest skin cancer, with a 5-year survival rate for distant melanoma metastasis of less than 20% (ALBINO et al., 1992; CHEN et al., 2013). Owing to the clinical relevance of this skin cancer, melanoma treatment has been widely explored for nanotechnology applications, including the use of polymeric nanoparticles (ANTÔNIO et al., 2017; JAIN; THANKI; JAIN, 2013; LI et al.). The first section of this thesis was dedicated to nanotechnological approaches for metastatic melanoma treatment. The first chapter consists of a literature review about the role of CSCs and CTCs in the development of metastatic melanoma, the current biomarkers of these distinct populations of melanoma cells, and the advances in polymer-based nanoparticles for metastatic melanoma treatment. In this perspective, the influence of the architectural properties of polymeric nanoparticles on the effectiveness of passive and active melanoma targeting is widely discussed. Thus, the goal of this review was to present and discuss the up-to-date status of melanoma biomarkers and to evaluate the advances in polymeric nanoparticle strategies in order to develop effective drug delivery systems for the treatment of metastatic melanoma. Thereafter, the second chapter comprises the development and in vitro evaluation of polymeric nanoparticles for targeting melanoma cells. In this study, the melanoma antigen recognized by T-cells (MART-1) was selected as a target, due to its overexpression in melanoma cell lines (TAZZARI et al., 2015). Immuno-nanoparticles, based on poly(γ-benzyl-L-glutamate) (PBLG) and coupled with an antibody against MART-1, were developed, characterized and tested in vitro on melanoma and endothelial lineages.
The use of nanotechnology for wound healing treatment also offers a number of advantages compared to conventional cutaneous therapies, such as occlusive dressings (JACKSON; KOPECKI; COWIN, 2013). These advantages comprise the protection of active principles, enhanced drug penetration, promotion of localized drug effects and reduced unwanted systemic absorption (CARNEIRO et al., 2010). In this context, nanoemulsions and liposomes are the most utilized nanocarriers for cutaneous applications (WU; GUY, 2009). Liposomes are spherical vesicles formed by one or more concentric phospholipid bilayers with an aqueous core. The main advantage of this nanocarrier is the ability to encapsulate lipophilic, amphiphilic and hydrophilic substances, due to its biphasic character (TORCHILIN, 2005). Lipophilic substances, such as β-lapachone, a naphthoquinone that presents important biological properties including wound healing activity, can be encapsulated into liposomes with high drug loads (CAVALCANTI et al., 2015; FU et al.; KUNG et al., 2008). Nevertheless, the application of liposomal dispersions directly to the skin is limited, especially because of their low viscosity (MOURTAS et al., 2008). In this way, the use of thickening agents, including hydrogels, can improve the rheological properties of liposomal formulations for topical drug delivery, and these agents can also exhibit biological properties of their own (CIOBANU et al., 2014; MOURTAS et al., 2007). Thus, the second section of this thesis aimed to develop a liposomal hydrogel containing β-lapachone for wound healing applications. The liposomal hydrogel formulation consisted of β-lapachone-loaded multilamellar liposomes incorporated in a polymeric blend containing a bacterial cellulose hydrogel produced by Zoogloea sp. Both bacterial cellulose and β-lapachone are expected to have wound healing properties.
This study evaluated the in vitro kinetics and rheological properties of the liposomal hydrogels containing β-lapachone, as well as their in vivo wound healing activity. In general, this thesis aimed to contribute to the development of polymeric and lipidic nanocarriers with different biological applications and administration routes, namely the systemic treatment of melanoma and topical action in wound healing. The first section of this thesis was dedicated to site-specific polymeric nanoparticles for melanoma treatment. In the first chapter, an overview of recent literature about polymeric nanoparticles for the therapeutic targeting of melanoma cells (m-CSCs and CMCs) with high metastatic potential was conducted. Based on the literature review, the key role of these cells in the pathophysiology of metastasis is clear. The reported biomarkers expressed in m-CSCs and CMCs can be used as potential targets of site-specific theranostic nanoparticles for melanoma. In addition, the optimal design of polymeric nanoparticles for passive and active tumor targeting was also discussed, taking into account the size, shape and surface properties of the nanoparticles. In the second chapter, surface-modified polymeric nanoparticles (stealth, fluorescently labeled and site-specific) based on PBLG derivatives were prepared and characterized. The nanoparticles showed small sizes, homogeneous populations, slightly elongated shapes and a negative surface potential. Besides, the PBLG-based nanoparticles and immuno-nanoparticles were not recognized by the complement system and were not cytotoxic to normal endothelial or melanoma cells. In addition, the developed immuno-nanoparticles carrying the MART-1 antibody (PBLG-PEG-Bt-MART-1) showed a specific cellular uptake by B16-GFP cells, which overexpress the MART-1 receptor.
However, further studies should be performed in order to optimize these immuno-nanoparticles and to enhance their specific recognition by melanoma cells, for example by increasing the antibody ratio conjugated at the surface of the nanoparticles, as well as by incorporating drugs into these particles to increase their cytotoxicity against melanoma cells. In summary, these studies highlighted the potential of PBLG-based nanoparticles to be used as systemic drug delivery systems for melanoma targeting. As discussed in the literature review, other promising antibodies against CMC and/or m-CSC biomarkers can be assayed for melanoma therapeutic and diagnostic purposes, and the PBLG-based nanoparticles can be used as versatile and modifiable nanoplatforms for this aim. In the second section of this thesis, a topical polymeric hydrogel containing β-lap-loaded multilamellar liposomes (β-lap-Lipo/ZBP-HEC) was developed for in vivo wound healing application. The liposomal hydrogels showed suitable sizes, pH and rheological characteristics for topical applications. In addition, the release kinetic profile of β-lap from the liposomal hydrogels at the wound site followed the Higuchi model and was 1.9-fold slower than that of β-lap released from the plain hydrogel. The ZBP/HEC hydrogel enhanced the in vivo wound healing activity, increasing the density of specific cells involved in wound repair. In addition, β-lap-Lipo/ZBP-HEC increased the formation of new vessels at the wound site around 2-fold and decreased the inflammatory process during the proliferative phase of skin repair. These results suggest that β-lap-Lipo and the ZBP/HEC hydrogel had a synergistic effect. These promising findings also point to future research using the ZBP/HEC hydrogel as a bioactive vehicle for the incorporation of other drugs with topical actions.
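The Higuchi relationship mentioned above (cumulative release proportional to the square root of time) can be sketched numerically. The rate constants below are hypothetical, chosen only to mirror the reported 1.9-fold slowdown; they are not the experimental values from this work.

```python
import math

def higuchi_release(k_h, t):
    # Higuchi model: cumulative amount released per unit area, Q(t) = k_H * sqrt(t)
    return k_h * math.sqrt(t)

# Hypothetical Higuchi rate constants (arbitrary units / h^0.5)
k_hydrogel = 1.9       # beta-lap released from the plain hydrogel
k_lipo_hydrogel = 1.0  # beta-lap released from the liposomal hydrogel

# Under this model the ratio of cumulative releases is constant in time and
# equals the ratio of the rate constants, here the 1.9-fold factor reported above.
for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    ratio = higuchi_release(k_hydrogel, t) / higuchi_release(k_lipo_hydrogel, t)
    assert abs(ratio - 1.9) < 1e-12
```

This illustrates why a single fold-change figure characterizes the whole release profile once both systems obey square-root-of-time kinetics.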
Taking into account the overall results, the present work contributed to the development of promising polymeric and lipidic nanocarriers with different biological applications and administration routes: immuno-nanoparticles carrying an antibody targeted to the MART-1 receptor for the systemic treatment of melanoma, and β-lapachone encapsulated in multilamellar liposomes and incorporated in a biopolymer hydrogel for topical application in wound healing.
Nanotechnology makes it possible to design new therapeutic tools intended to solve the various physicochemical or biological problems that arise when administering active molecules. The pathologies concerned are highly varied; over the last decade, however, it is certainly in oncology that the efforts have been greatest (BRYS et al., 2016; SUTRADHAR et al.). In this field, the main strategy developed has consisted in using a very wide variety of nanoscale systems designed to carry antitumor molecules toward cancer cells more specifically than the active molecules can on their own, in order to increase the therapeutic index of these molecules. This strategy cannot, however, overcome every obstacle. In particular, cellular heterogeneity and plasticity within tumors lead to the dissemination of metastases in distant organs that nanomedicines designed to reach primary tumors cannot reach efficiently.
More generally, the treatment of metastases is considered a major challenge for the improvement of cancer treatments (BROOKS et al., 2015). More precisely, two types of cancer cells are directly involved in tumor heterogeneity and the development of metastases: cancer stem cells (CSCs) and circulating tumor cells (CTCs). CSCs represent a minority of the cells in a tumor, but they have the particularity of being pluripotent and of possessing a strong metastatic capacity (BROOKS; BURNESS; WICHA, 2015; CSERMELY et al., 2014). Alongside these cells, CTCs are tumor cells propagated through the blood and/or lymphatic circulation (XU; ZHONG, 2010) that are able to implant in an organ distant from the primary tumor. High levels of CSCs and CTCs have been associated with tumor progression, chemoresistance and metastasis (LA PORTA; ZAPPERI, 2013; PORES et al., 2016), which has recently led to CTCs being considered interesting biomarkers. Although their detection represents a real challenge owing to their rarity, a growing number of techniques is currently being developed.
In this context, we focused on melanoma, which has a strong metastatic potential and is frequently resistant to chemotherapeutic treatments. Melanoma arises from the malignant transformation of cutaneous melanocytes. It is a particularly deadly type of cancer, with a 5-year survival rate for melanoma metastases of less than 20% (ALBINO et al., 1992; CHEN et al., 2013). Owing to the clinical relevance of this skin cancer, melanoma treatment has been widely studied for nanotechnology applications, including the use of polymeric nanoparticles (ANTÔNIO et al., 2017; JAIN; THANKI; JAIN, 2013; LI et al.).
The first part of this thesis was devoted to nanotechnological approaches for the treatment of metastatic melanoma. The first chapter consists of a literature review. For the sake of simplicity, it was divided into two parts. In the first, the role and importance of melanoma cancer stem cells (m-CSCs) and circulating melanoma cells (CMCs) in the development of metastases were described and clarified. In particular, the main known biomarkers of m-CSCs and CMCs were surveyed and presented from the standpoint of the clinical importance of these cells and of their use as biomarkers for the diagnosis, prognosis and treatment of melanoma. This analysis also examined the expression pattern of m-CSC and CMC biomarkers, their biological functions and the resistance mechanisms of these cells. The second part of this review was devoted to describing and discussing current nanotechnological approaches aimed at targeting melanoma cells, with emphasis on studies of polymeric nanoparticles. The passive and active targeting strategies developed were presented, and the characteristics of the nanoparticles that influence the targeting efficiency for m-CSCs and CMCs were discussed. The objective of this review was thus to present and discuss the current state of knowledge on melanoma biomarkers and to evaluate the advantages and drawbacks of the polymeric nanoparticles proposed to date. It made it possible to identify what the main desirable characteristics of effective nanomedicines for the treatment of metastatic melanoma should be.
The second chapter was devoted to the rational development and characterization of polymeric nanoparticles formed by the self-assembly of poly(γ-benzyl-L-glutamate) (PBLG) and functionalized for the targeting of melanoma cells, especially circulating tumor cells. This experimental chapter presents the development and characterization of PBLG immuno-nanoparticles obtained by grafting onto their surface an antibody directed against the melanoma antigen MART-1 (melanoma antigen recognized by T cells). The MART-1 antigen was selected as a target because of its overexpression in melanoma cells (TAZZARI et al., 2015) and because of the existence of cell lines allowing the establishment of in vitro models and of in vivo models in animals.
The third chapter describes the development and characterization of a liposome-hydrogel system containing an active molecule, β-lapachone (β-lap-Lipo/ZBP-HEC), whose wound healing activity was established in vivo in an animal model. This naphthoquinone has a lipophilic character that favors its encapsulation in liposomes with high drug loads (CAVALCANTI et al., 2015; FU et al.; KUNG et al., 2008). However, the application of liposomal dispersions directly to the skin is not optimal because of their low viscosity (MOURTAS et al., 2008). For this reason, dispersing the liposomes within hydrogels makes it possible to reach rheological characteristics favorable to their topical administration. At the same time, a judicious choice of the polysaccharides used as thickening agents makes it possible to associate other biological properties with the preparation (CIOBANU et al., 2014; MOURTAS et al., 2007). The hydrogel used here consists of a blend of biopolymers, notably including a bacterial cellulose obtained from sugarcane molasses. These biopolymers have already demonstrated their interest in several biomedical applications described in the literature. This work nevertheless brings a new approach to the biomedical use of this biopolymer produced by Zoogloea sp., since it was used here not only as a hydrogel able to modulate the release of bioactive compounds, but also for its intrinsic wound healing action. This study evaluated the in vitro kinetics and the rheological properties of the liposomal hydrogels containing β-lapachone, as well as their in vivo wound healing activity; both bacterial cellulose and β-lapachone were expected to have wound healing properties. Hydrogels containing β-lapachone-loaded liposomes (β-lap-Lipo/ZBP-HEC) were prepared from liposomes of appropriate size. After dispersion, the pH and the rheological characteristics of the preparations were adjusted to make them suitable for topical application.
Incorporating liposomes into a hydrogel system makes it possible to control the release of the active principle, the system functioning as a drug reservoir. The release kinetics of β-lapachone from the hydrogels were therefore established in vivo. Analysis of the kinetics showed that the release obeyed the Higuchi model and that, in the case of the β-lap-Lipo/ZBP-HEC systems, the release was slowed by a factor of about 2 compared with the simple hydrogel containing β-lapachone. Wound healing studies were conducted in vivo on Wistar rats. Histological analysis of the tissues showed that the animals that received only the ZBP/HEC hydrogel presented an increased density of the specific cells involved in wound healing. In contrast, the animals in the groups treated with the β-lap-Lipo/ZBP-HEC system (hydrogels containing the liposomes) presented a 2-fold increase in the formation of new vessels in the healing zone and, simultaneously, a decreased inflammatory process during the proliferative phase of skin repair. Taken together, these results suggest that the β-lapachone liposomes (β-lap-Lipo), on the one hand, and the ZBP/HEC hydrogel, on the other, had a synergistic effect. These promising results also open the way to future research using the ZBP/HEC hydrogel as a bioactive carrier with which other molecules with topical action could be associated, so as to obtain interesting synergistic effects.
In conclusion, this doctoral work demonstrated, in two different pathologies, the interest of designing formulations at the nanometric scale, notably PBLG immuno-nanoparticles functionalized with antibodies directed against MART-1 for the interception of circulating tumor cells (CTCs) of melanoma in the blood and/or the systemic treatment of melanoma, and multilamellar liposomes encapsulating β-lapachone and dispersed in a biopolymer hydrogel intended to promote the healing of cutaneous wounds.
LIST OF FIGURES
Figure 1. Schematic illustration of tumor site derived circulating melanoma cells (CMC)
Figure 2. Schematic illustration of passive targeting (a) and active targeting (b) for
Figure 2. FT-IR spectra of PBLG-Bz polymerization film recorded at 0 min (a), 58 min
Figure 3. FT-IR spectra of PBLG derivatives: PBLG-Bz (I), PBLG-PEG (II), PBLG-
Figure 4. 1H NMR spectra of PBLG derivatives: PBLG-Bz (I), PBLG-Rhod (II), PBLG-
Figure 5. MALDI-TOF MS spectra of PBLG-Rhod (a) and PBLG-Bz (b).
Figure 6. TEM photographs of nanoparticles obtained from the following PBLG
Figure 7. Cellular uptake of PBLG-Rhod (100%) and PBLG/PBLG-Rhod (10%)
Figure 8. Western blot analysis of the remaining free antibodies found in supernatant of
Figure 9. Cellular uptake of PBLG-PEG/PBLG-Rhod, PBLG-PEG-Bt/PBLG-Rhod and
Figure 10. Electrophoregram peaks of complement activation for different PBLG
Figure 11. Evaluation of the MART-1 expression on B16-GFP cells and HUVECs cells
Figure 12. B16-GFP cells viability percentage after treatment with PBLG-derived
Figure 13. HUVECs cells viability percentage after treatment with PBLG-derived
Figure 14. Cellular uptake of PBLG-PEG/PBLG-Rhod, PBLG-PEG-Bt/PBLG-Rhod and
Figure 3. The histograms of cellular densities and collagen fibers on day 3, 7 and 14 post-
Figure 4. Representative histopathological images of skin in the dermis layer:
LIST OF TABLES
Table 1. Current biomarkers detected in m-CSC and CMC
Table 2. In vivo and in vitro studies with drugs-loaded polymeric nanoparticles for passive and active tumor targeting in advanced melanoma treatment.
CHAPTER II
Table 1. Characteristics of PBLG derivatives.
Table 2. Morphological analysis and zeta potential of nanoparticles.
HAL id: hal-01753323 (2016), https://hal.science/hal-01753323/file/SIND.pdf
Leonardo Pejsachowicz (email: [email protected])
Stochastic Independence under Knightian Uncertainty
Keywords: Bewley preferences, stochastic independence, product equivalents. JEL classification: D81.
Introduction
The distinction introduced by Knight (1921) between risky events, to which a probability can be assigned, and uncertain ones, whose likelihoods are not precisely determined, cannot be captured by the standard subjective expected utility (SEU) model. This paradigm in fact posits a unique probability distribution over the states of the world, the agent's prior, that is used to assign weights to each contingency when evaluating a given course of action. Of the many models that have been proposed to accommodate Knightian uncertainty, the one pioneered in Bewley (1986) has recently proved very useful, both in economic applications of uncertainty and as a foundational tool for the systematic analysis of non-SEU preferences. Bewley (1986) allows the agent to hold a set of priors. Choices are then performed using the unanimity rule: an action will be preferred to an alternative one only if its expected utility under each prior of the agent is higher. Since the rank of two options might be reversed when considering different priors, under this paradigm the agent's preferences will typically be incomplete. The introduction of this type of incompleteness raises a slew of interesting modeling questions, but the one we will be concentrating on in this paper regards the characterization of stochastic independence (from now on, s-independence).
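The unanimity rule just described can be stated formally, anticipating the notation introduced in the Preliminaries (acts f, g mapping states ω ∈ Ω to utility values, and P the agent's set of priors):

```latex
f \succsim g
\quad \Longleftrightarrow \quad
\sum_{\omega \in \Omega} p(\omega)\, f(\omega) \;\geq\; \sum_{\omega \in \Omega} p(\omega)\, g(\omega)
\quad \text{for every } p \in P .
```

When the inequality holds for some priors in P and is reversed for others, the two acts remain unranked, which is the source of the incompleteness discussed here.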
Specifically, suppose a Bewley decision maker must choose between bets that depend on two different experiments, for example the tosses of two coins. When can we say, based on the observation of his choices, that he considers the two tosses independent? An answer to this question is clearly of great interest, both for applications of the Bewley model to game theory, since independence of beliefs is a central tenet of Nash equilibrium, and as a benchmark for the development of a theory of updating, which is essential in applications to dynamic environments. In the SEU model, s-independence is captured by the intuitive idea that the preferences of an agent over bets that depend only on one of the tosses should not change when he receives information about the other (see Blume, Brandenburger, and Dekel, 1991), a property we dub conditional invariance. As we show in an example in Section 2.2, such a requirement is unable to eliminate all of the forms of correlation between experiments that the multiplicity of priors in the Bewley model introduces. As a consequence, under conditional invariance an agent's preferences are no longer uniquely determined by his marginal beliefs, which we argue is essential for a useful definition of s-independence. To overcome this problem we introduce the idea of the product equivalent of an act, which is close in spirit to that of the certainty equivalent of a risky lottery, although adapted to the product structure of the state space. We show that if a Bewley decision maker treats product equivalents as if they were certainty equivalents, his set of beliefs must coincide with the closed convex hull of the pairwise products of its marginals over each experiment.
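A minimal numerical illustration of this point (a sketch in the same spirit as, though not identical to, the example of Section 2.2): two priors over the joint toss of two coins can share identical uniform marginals while encoding opposite correlation, so marginal beliefs alone cannot pin down the joint beliefs.

```python
# Two priors over the joint tosses of two coins, states (x, y) with x, y in {"H", "T"}.
# p_corr puts all mass on matching outcomes, p_anti on mismatching ones.
p_corr = {("H", "H"): 0.5, ("T", "T"): 0.5, ("H", "T"): 0.0, ("T", "H"): 0.0}
p_anti = {("H", "T"): 0.5, ("T", "H"): 0.5, ("H", "H"): 0.0, ("T", "T"): 0.0}

def marginal_x(p):
    m = {}
    for (x, _), w in p.items():
        m[x] = m.get(x, 0.0) + w
    return m

def marginal_y(p):
    m = {}
    for (_, y), w in p.items():
        m[y] = m.get(y, 0.0) + w
    return m

# Both priors share the same uniform marginals on each coin...
assert marginal_x(p_corr) == marginal_x(p_anti) == {"H": 0.5, "T": 0.5}
assert marginal_y(p_corr) == marginal_y(p_anti) == {"H": 0.5, "T": 0.5}
# ...yet neither joint has a product structure: p(H, H) differs from
# p_X(H) * p_Y(H) = 0.25 under both priors, in opposite directions.
assert p_corr[("H", "H")] == 0.5 and p_anti[("H", "H")] == 0.0
```

A set of priors containing both p_corr and p_anti would thus exhibit correlation that no restriction phrased purely in terms of marginals can rule out.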
An important consequence of our result is that it provides a characterization of s-independence for MaxMin preferences that coincides with the definition proposed by Gilboa and Schmeidler. Such a definition has been applied in the characterization of independent beliefs for Nash equilibria under ambiguity, for example in Lo (1996) and more recently in Riedel and Sass (2014). Nevertheless, as Lo (2009) complains, the behavioral implications of Gilboa and Schmeidler's definition are still poorly understood. It is our hope that the present work will provide a first step towards a clarification of these implications. Blume, Brandenburger, and Dekel (1991) are the first to provide a decision-theoretic axiom for s-independence in the SEU model, based on the insight of conditional invariance, of which we use a stronger version in Section 2.3. Bewley provides an early definition of s-independence for his model in the original 1986 paper, though he gives no behavioral characterization. His definition is weaker than the model we obtain. The remaining related literature is mostly concerned with other non-SEU models. Gilboa and Schmeidler, as we already said, define a concept of independent product of relations for MaxMin preferences and characterize it, though their characterization uses the representation directly instead of the primitive preference. Klibanoff (2001) gives a definition of an independent randomization device, which he uses to evaluate different types of uncertainty-averse preferences in the Savage setting.
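For reference, the MaxMin criterion referred to repeatedly in this discussion evaluates each act by its worst-case expected utility over the set of priors P:

```latex
V(f) \;=\; \min_{p \in P} \; \sum_{\omega \in \Omega} p(\omega)\, f(\omega).
```

In contrast with the Bewley unanimity rule, this criterion ranks every pair of acts, so the resulting preference is complete.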
[START_REF] Bade | Stochastic Independence with Maxmin Expected Utilities[END_REF] explores various possible forms of s-independence for events under the MaxMin model of Gilboa and Schmeidler, providing successively stronger definitions. Bade (2011) contains a characterization of s-independence for general uncertainty averse preferences that is particularly useful for the way in which the paper introduces uncertainty in games. [START_REF] Ghirardato | On Independence for Non-Additive Measures with a Fubini Theorem[END_REF] studies products of capacities, and proposes a restriction on admissible products based on the Fubini theorem. We share with this paper the intuition of using the iterated integral property to characterize product structures outside of the standard model. Finally, [START_REF] Epstein | Symmetry of Evidence without Evidence of Symmetry[END_REF], who study alternative versions of the De Finetti theorem for MaxMin preferences, provide an axiom, dubbed orthogonal independence, which achieves a weaker form of separation in beliefs. The rest of the paper is organized as follows: Section 2 introduces the model, provides a motivating example and discusses the limits of conditional invariance as a characterization of s-independence. In Section 3 we define product equivalents and give our main characterization result, together with some further corollaries, one of which is the aforementioned characterization of MaxMin s-independence. Section 4 concludes with a brief discussion of the quality of our main assumption.

Preliminaries

We consider a finite state space Ω with a product structure Ω = X × Y, endowed with an algebra of events Σ which, for simplicity of exposition, we assume throughout the paper to coincide with 2 Ω . Notice that the collections Σ X = {A × Y | A ⊆ X} and Σ Y = {X × B | B ⊆ Y } are proper sub-algebras of Σ under the convention ∅ × Y = X × ∅ = ∅. States, elements of Ω, are denoted ω or alternatively through their components (x, y).
Elements of X and Y represent the outcomes of two separate experiments. Sets of the form {x} × Y, which we call X-states, will be indicated, with abuse of notation, x, and a similar convention applies to Y-states. A prior p is an element of Δ(Ω), the unit simplex in R Ω . For a subset S of Ω we let p(S) = Σ ω∈S p(ω). Let p X ∈ Δ(X) and p Y ∈ Δ(Y) be the marginals of p over X and Y respectively. The notation p X × p Y indicates the product prior in Δ(Ω) uniquely identified by p X × p Y (x, y) = p X (x)p Y (y). Prior p has a product structure if p = p X × p Y . For a set P ⊆ Δ(Ω), we let P X and P Y stand for the sets of marginals of elements of P over each experiment. The set of pairwise products of P X and P Y , namely {p X × p Y | (p X , p Y ) ∈ P X × P Y }, is denoted P X ⊗ P Y . We work with a streamlined version of the Anscombe-Aumann model. An act is a map from Ω to K ⊆ R, a non-trivial interval of the real line. The set of all acts is F = K Ω with generic elements f and g. The interpretation of f is that it represents an action that delivers (utility) value f(ω) if state ω is realized. For every λ ∈ (0, 1) and f, g ∈ F, the mixture λf + (1 -λ)g is the state-wise convex combination of the acts. 3 For any finite set S, let 1 S be the indicator function of S. The constant act k1 Ω that delivers k ∈ K in every state of the world is denoted, with abuse of notation, k. We say that f is an X-act if f(x, y) = f(x, y') for all y, y' ∈ Y, namely if f is constant across the realizations of Y-states. The set of X-acts is F X with generic elements f X , g X . The unique value that act f X ∈ F X assumes over {x} × Y is indicated f X (x). We can then see f X as the sum Σ x∈X f X (x)1 x . Similar considerations and notation apply to Y-acts.

Bewley preferences

Our primitive is a reflexive and transitive binary relation ≽ (a preference) on F.
Through most of the paper we assume that there exists a non-empty, convex and closed set P ⊆ Δ(Ω) such that

f ≽ g if and only if Σ ω∈Ω f(ω)p(ω) ≥ Σ ω∈Ω g(ω)p(ω) for all p ∈ P.   (2.1)

We say then that ≽ has a (non-trivial) Bewley representation or that it is a Bewley preference. 4 An axiomatic characterization of (2.1) can be found in [START_REF] Gilboa | Objective and Subjective Rationality in a Multi-Prior Model[END_REF]. 5 The set P above is uniquely determined by ≽. Because P is the only free parameter in the model, we say that P represents ≽ when it satisfies (2.1). We note that any two subsets of Δ(Ω) induce the same Bewley preference via (2.1) as long as their closed convex hulls, which we denote co(P) for a generic P, coincide. The SEU model corresponds to P = {p}, in which case ≽ is complete. Hence Bewley preferences are a generalization of SEU in which completeness is relaxed. At the same time, any extension of a Bewley preference to a complete relation over acts that is SEU corresponds to some prior p in its representing set of priors P.

3 The classical Anscombe-Aumann environment posits an abstract consequence space C and defines acts as maps from Ω to the set Δ s (C) of simple lotteries over C. One then goes on to show that, under standard assumptions (in particular Risk Independence, Monotonicity and Archimedean Continuity), there exists a Von Neumann-Morgenstern utility U : Δ s (C) → R such that two acts f and g are indifferent whenever U(f(ω)) = U(g(ω)) for all ω. Thus one can think of our approach as one that considers acts already in their "utility space" representation.

4 In the model of [START_REF] Bewley | Knightian Uncertainty Theory: Part I[END_REF], preferences satisfy a version of equation (2.1) in which ≽ and ≥ are replaced by their strict counterparts ≻ and >. The two models are close but distinct, and correspond to the weak and strong versions of the Pareto ranking in which different priors take the role of different agents.
We will say that ≽ is a SEU preference whenever it has a Bewley representation with a singleton set {p} of priors.

A motivating example

In this section we illustrate the need for a novel characterization of s-independence with a simple example. Consider an agent who is betting on the results of the tosses of two different coins. All he knows about these is that they have been coined by two separate machines, each of which produces either a coin that comes up heads α% of the times, or one that comes up heads β% of the times. The two machines have no connection to each other, and no information on the mechanism that sets the probability of heads in either machine is given, so that no unique probabilistic prior can be formed. Given this description, it seems agreeable that a Bewley decision maker, facing acts on the state space {H 1 , T 1 } × {H 2 , T 2 } (where H i corresponds to the i-th coin coming up heads), would consider the set of priors P 1 = {p α × p α , p β × p α , p α × p β , p β × p β } where p α = (α, 1 -α) ∈ Δ({H 1 , T 1 }) and p β = (β, 1 -β) ∈ Δ({H 2 , T 2 }). Does P 1 reflect an intuitive notion of independence between the tosses? One way to answer this question is to ask our agent to compare a particular type of acts which we will call, for lack of a better term, conditional bets. Namely, assume we ask the agent to decide between bets f 1 and g 1 , where

f 1 :     H 2   T 2        g 1 :     H 2   T 2
H 1        1     0         H 1        0     1
T 1        k     k         T 1        k     k

Both f 1 and g 1 pay the same amount k if the first coin turns up tails, and provide opposing bets on the second toss if the first turns up heads.
Whichever ranking the agent provides, it stands to reason that, if he treats the tosses as independent, he should rank in the same way the acts f 2 and g 2 in which the opposing bets on the second coin are provided conditional on the first coming up tails, namely:

f 2 :     H 2   T 2        g 2 :     H 2   T 2
H 1        k     k         H 1        k     k
T 1        1     0         T 1        0     1

This form of invariance of the preferences over one experiment to information on the result of the other can be shown to characterize, for the SEU model, an agent whose unique prior p has a product structure. We notice that P 1 satisfies this invariance with regard to the pairs f 1 , g 1 and f 2 , g 2 . In fact if f 1 ≽ g 1 then we must have α 2 ≥ α(1 -α) and βα ≥ β(1 -α) ⇔ α ≥ (1 -α) for p α × p α and p β × p α , and also αβ ≥ α(1 -β) and β 2 ≥ β(1 -β) ⇔ β ≥ (1 -β) for p α × p β and p β × p β . It is immediate to check that the same two conditions ensure f 2 ≽ g 2 . This is in line with our intuition that the situation described above, and its related set of priors P 1 , reflect a natural notion of independence between tosses. But now imagine the agent comes to learn that the two machines have a common switch. This switch is the one that decides whether the coins produced will be of the α or β variety, thus whenever the first machine produces a coin of a certain kind so does the other. Here the natural set of priors is P 2 = {p α × p α , p β × p β }. The preferences induced by this set also satisfy the invariance we discussed between the pairs f 1 , g 1 and f 2 , g 2 . In fact f 1 ≽ g 1 if and only if α ≥ (1 -α) and β ≥ (1 -β), which also implies f 2 ≽ g 2 . Nevertheless we would be hard pressed to argue that this situation reflects the same degree of independence as the first. The priors in P 2 in fact contain information about a certain kind of correlation between the tosses. This correlation, which is novel, regards the mechanism that determines the probabilistic model assigned to each coin.
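The claims in this example are easy to probe numerically. The sketch below is purely illustrative, with made-up values α = 0.9, β = 0.1 and k = 0.5: it verifies, prior by prior, that the pairs f 1 , g 1 and f 2 , g 2 are ranked the same way under every element of P 1 and P 2 (so both sets validate the invariance), while a bet on the two coins being of the same variety separates the two induced preferences.

```python
from itertools import product

alpha, beta = 0.9, 0.1                  # made-up machine parameters
k = 0.5                                 # payoff on the unconditioned branch
p_a, p_b = (alpha, 1 - alpha), (beta, 1 - beta)

def prod_prior(p1, p2):
    """Product prior over {H1,T1} x {H2,T2}; state order HH, HT, TH, TT."""
    return [p1[i] * p2[j] for i in range(2) for j in range(2)]

P1 = [prod_prior(x, y) for x, y in product([p_a, p_b], repeat=2)]  # free pairing
P2 = [prod_prior(p_a, p_a), prod_prior(p_b, p_b)]                  # common switch

def ev(act, p):
    return sum(a * q for a, q in zip(act, p))

f1, g1 = [1, 0, k, k], [0, 1, k, k]     # conditional bets given H1
f2, g2 = [k, k, 1, 0], [k, k, 0, 1]     # conditional bets given T1

# prior by prior, the two pairs are ranked identically, which implies the
# invariance between (f1, g1) and (f2, g2) for both P1 and P2
for p in P1 + P2:
    assert (ev(f1, p) >= ev(g1, p)) == (ev(f2, p) >= ev(g2, p))

# yet the induced Bewley preferences differ: betting on "both coins of the
# same variety" unanimously beats the constant 0.5 under P2 but not under P1
same_variety = [1, 0, 0, 1]
assert all(ev(same_variety, p) >= 0.5 for p in P2)
assert not all(ev(same_variety, p) >= 0.5 for p in P1)
```

The last two assertions exhibit exactly the correlation in the model-selection mechanism: P 1 and P 2 have the same marginals {p α , p β } on each coin, yet they rank this act against a constant differently.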
As we will see in the next section, the conditional invariance requirement we loosely described is unable, even in its strongest form, to eliminate this sort of correlation. One can understand this failure as stemming from the lack of aggregation that is characteristic of the Bewley model. Correlations in the mechanism that selects models for each toss are reflected in the shape of the whole set of priors. On the other side, when a Bewley decision maker compares two acts, he uses priors one by one, hence only the structure of each single distribution comes into play.

Conditional Invariance

Here we formalize and extend the discussion of the previous section. Before we do so, we will need an additional definition:

Definition 1. An event S ⊆ Ω is ≽-non-null if k > l implies that l ≽ k1 S + l1 Ω\S does not hold, for any k, l ∈ K.

The definition corresponds to that of a Savage non-null set. For a Bewley preference represented by P, a set S is ≽-non-null if and only if p(S) > 0 for some p ∈ P. Now consider the following axiom:

Conditional Invariance. For all acts h ∈ F, f X , g X ∈ F X and any pair of ≽-non-null events R, S ∈ Σ Y ,

f X 1 R + h1 Ω\R ≽ g X 1 R + h1 Ω\R  =⇒  f X 1 S + h1 Ω\S ≽ g X 1 S + h1 Ω\S   (2.2)

and the same holds when we switch the roles of X and Y in the above statement. This is a stronger version of the Stochastic Independence Axiom (Axiom 6) of [START_REF] Blume | Lexicographic Probabilities and Choice under Uncertainty[END_REF]. 6 If we let X = Y = {H, T} with Y representing the first coin toss and X the second, setting R = {H, T} × {H}, S = {H, T} × {T} and choosing f X , g X and h equal to

f X :     H   T        g X :     H   T        h:     H   T
H         1   0        H         0   1        H      k   k
T         1   0        T         0   1        T      k   k

we obtain from (2.2) the implication f 1 ≽ g 1 =⇒ f 2 ≽ g 2 . Hence acts of the form f X 1 R + h1 Ω\R are the general version of the conditional bets we discussed in Section 2.2 and carry the same interpretation.
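For a finite state space the implication in (2.2) can be checked mechanically. The sketch below is illustrative only: with arbitrarily chosen product marginals and a coarse grid of act values, it exhaustively verifies the axiom for conditional bets on the two Y-events of a 2 x 2 state space (a small tolerance guards against floating-point ties).

```python
from itertools import product

X, Y = (0, 1), (0, 1)
PX = [(0.7, 0.3), (0.2, 0.8)]            # made-up marginal priors over X
PY = [(0.6, 0.4), (0.5, 0.5)]            # made-up marginal priors over Y
P = [(px, py) for px in PX for py in PY] # product priors p_X x p_Y

def ev(act, prior):
    px, py = prior
    return sum(act[x, y] * px[x] * py[y] for x in X for y in Y)

def splice(fx, h, B):
    """The act f_X 1_R + h 1_{Omega \\ R} with R = X x B."""
    return {(x, y): (fx[x] if y in B else h[x, y]) for x in X for y in Y}

R, S = {0}, {1}                          # the two non-null Y-events
vals, TOL = (0.0, 0.5, 1.0), 1e-9
for fx in product(vals, repeat=2):
    for gx in product(vals, repeat=2):
        for hv in product(vals, repeat=4):
            h = dict(zip(product(X, Y), hv))
            # premise of (2.2): unanimous ranking of the bets conditioned on R ...
            if all(ev(splice(fx, h, R), p) - ev(splice(gx, h, R), p) >= TOL
                   for p in P):
                # ... forces the same ranking of the bets conditioned on S
                assert all(ev(splice(fx, h, S), p) - ev(splice(gx, h, S), p)
                           >= -TOL for p in P)
```

No assertion fires, in line with Proposition 1 below: any set of product priors satisfies Conditional Invariance.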
The next proposition highlights the limits of Conditional Invariance as a behavioral characterization of s-independence:

Proposition 1. For any P ⊆ Δ(X) ⊗ Δ(Y), the Bewley preference induced by co(P) satisfies Conditional Invariance.

Proof: See the Appendix.

Thus the type of correlations embodied by a set such as P 2 from the previous section are in general not excluded by Conditional Invariance. More than that, the degree of non-uniqueness that the assumption allows for in the formation of priors is problematic for any definition of s-independence. To see this, consider for a moment an agent with SEU preferences ≽. Suppose we elicit his information on each separate experiment by "asking him questions", i.e. proposing him comparisons of acts that depend either only on X or only on Y. His answers correspond to the restrictions ≽ X and ≽ Y of ≽ to F X and F Y respectively. By the SEU representation theorem, we know that these are uniquely determined by the marginals p X and p Y of his prior. Now if his prior p has a product structure (which in this case, as we show below, is true if and only if ≽ satisfies Conditional Invariance), the inverse is also true. Namely, because in this case p = p X × p Y , we can uniquely determine his preferences over F, and hence his information about the whole set of possible results in X × Y, using ≽ X and ≽ Y . Thus once we have learned about each experiment in isolation we know all that can be known about the pair, a key aspect of the description of s-independence in a single prior environment. Turning our attention to the Bewley case, we find that the information on each experiment is now subsumed into the sets P X and P Y of marginals, which uniquely determine ≽ X and ≽ Y . But now suppose we take two sets of priors P and Q inside Δ(X) ⊗ Δ(Y) whose marginals coincide with P X and P Y and such that co(P) ≠ co(Q) (P 1 and P 2 from Section 2.2 are one such pair, for P X = P Y = {p α , p β }).
Proposition 1 ensures that the preferences induced by P and Q satisfy Conditional Invariance, and by our assumption their restrictions to F X and F Y coincide. But the two preferences will differ, by the uniqueness part of the Bewley representation theorem, hence the condition that characterizes s-independence under SEU does not allow us to uniquely determine the agent's global preferences from ≽ X and ≽ Y . The information we are lacking is precisely the one on correlations in the mechanism that matches priors on one experiment to priors on the other. To recover the desired degree of uniqueness, we propose in the next section a stronger requirement on preferences.

Stochastic Independence via product equivalents

In this section we propose a new concept, that of the product equivalent of an act, and use it to give a characterization of s-independence for Bewley preferences. In order to illustrate the logic behind product equivalents we first introduce them in the simpler SEU environment.

Product equivalents under SEU

Throughout this section we will consider a Bewley decision maker whose representing set of priors P is a singleton. Thus his preferences are complete and he is a subjective expected utility maximizer. In this case it can be shown 7 that every act f ∈ F has a certainty equivalent, namely that there exists some constant act k such that f ∼ k. We denote such an act ce(f). Now notice that any act f ∈ F can be seen as a collection of bets on Y delivered conditional on the outcomes in X. Namely, we can find a collection {f x Y } x∈X of Y-acts, uniquely identified by f x Y (y) = f(x, y), such that f = Σ x∈X f x Y 1 x . This particular way of seeing an act suggests the following definition, which is partly inspired by that of the certainty equivalent:

Definition 2. An act f X ∈ F X is the X-product equivalent of f ∈ F, denoted pe X (f), if for all x ∈ X, f X (x) = ce(f x Y ).

Notice that pe X (f) need not be indifferent to f.
8 This is intuitive, since evaluating pe X (f) requires a different thought process than the one used for f, in which first the value of each conditional bet the act induces on Y is determined in isolation, and then these are aggregated using the information the agent has over X. Nevertheless we would think it is precisely when X and Y are independent, and hence information about the aggregate value of f is completely embedded in the agent's preferences over each individual experiment, that the two approaches will lead to the same result, and consequently f ∼ pe X (f). The next theorem vindicates this view:

Theorem 1. Let ≽ be a SEU preference over F represented by {p}. Then the following are equivalent:

1) There are distributions p X ∈ Δ(X) and p Y ∈ Δ(Y) such that p = p X × p Y .
2) ≽ satisfies Conditional Invariance.
3) f ∼ pe X (f) for all f ∈ F.

Proof: See the Appendix.

1) ⇔ 2) is well known and easily proved using the uniqueness properties of the SEU representation. Since the definition of product equivalent is novel, the equivalence of 1) and 3) is a new result, although it is an elementary application of separation arguments. We can better understand this part of the result in light of Fubini's celebrated theorem. The latter gives conditions under which the integral of a function through a product measure can be obtained as an iterated integral. Now notice that when ≽ is SEU it must be that

pe X (f)(x) = Σ y∈Y f(x, y)p Y (y)   (3.1)

and hence the value of pe X (f) is nothing but Σ x∈X (Σ y∈Y f(x, y)p Y (y)) p X (x). Thus 1) ⇒ 3) is equivalent to (a very simple version of) the Fubini theorem, while 3) ⇒ 1) provides an inverse of that result.

Uniform product equivalents

Theorem 1 suggests an alternative route for the characterization of s-independence under Bewley preferences, one that goes through the extension of the definition of a product equivalent to the multiple-priors case.
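Before pursuing that route, note that the identity behind 1) ⇒ 3) of Theorem 1, namely that (3.1) turns the evaluation of f into an iterated sum, is easy to verify numerically. The sketch below is illustrative: it uses a randomly generated act and a made-up product prior on a 3 x 4 state space.

```python
import random

random.seed(1)
X, Y = range(3), range(4)
f = {(x, y): random.random() for x in X for y in Y}      # an arbitrary act

def random_simplex(n):
    """A made-up element of the unit simplex."""
    w = [random.random() for _ in range(n)]
    return [v / sum(w) for v in w]

pX, pY = random_simplex(len(X)), random_simplex(len(Y))  # factors of p_X x p_Y

# equation (3.1): pe_X(f)(x) = sum_y f(x, y) p_Y(y)
pe = {x: sum(f[x, y] * pY[y] for y in Y) for x in X}

# Fubini: evaluating f under p_X x p_Y equals evaluating pe_X(f) under p_X
lhs = sum(f[x, y] * pX[x] * pY[y] for x in X for y in Y)
rhs = sum(pe[x] * pX[x] for x in X)
assert abs(lhs - rhs) < 1e-12
```

Under a non-product prior the analogous iterated evaluation through the marginals would in general differ from the direct one, which is the content of 3) ⇒ 1).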
The first obstacle we find along this way is that in general even certainty equivalents of acts need not exist for a Bewley decision maker. 9 In order to sidestep this issue, Ghirardato et al. (2004) consider a set of constant acts, which for each f ∈ F we will denote Ce(f), that behave as certainty equivalents do for complete preferences, namely:

Ce(f) = {k ∈ K | c ≽ f implies c ≽ k, and f ≽ d implies k ≽ d, for all c, d ∈ K}.   (3.2)

They also provide a characterization result which illustrates the parallel between Ce(f) and ce(f) under Bewley and SEU preferences. We would then hope that substituting Ce(f x Y ) for ce(f x Y ) in the definition of product equivalent would lead to a generalization that retains the intuition behind pe X (f). Nevertheless here we stumble on a second issue. Both the parallel with Fubini's theorem and equation (3.1) suggest that for a given f, the relevant collection of X-acts in this case should be of the form

{f X ∈ F X | ∃ p Y ∈ P Y such that f X (x) = Σ y∈Y f(x, y)p Y (y) for all x ∈ X},   (3.3)

which is the set of X-acts obtained by evaluating, for each prior model p ∈ P, the conditional bets on Y induced by f via its marginal p Y , reflecting the information in prior p about the outcomes of Y in isolation.

9 An easy geometric intuition of this fact is the following. If we look at acts in F as elements of R Ω , we can see that the indifference curves induced by an SEU preference with prior p correspond to the restriction to K Ω of the hyperplanes that are perpendicular to p. On the other side, the indifference curve through f of a Bewley decision maker with a set of priors P is given by the intersection of a set of hyperplanes, one for each p ∈ P. Obvious dimensionality considerations suggest then that in general the only act indifferent to f is f itself. For example if |Ω| = 2 and ≽ is incomplete, there are at least two non-collinear priors in P, hence indifference curves are points and no two acts f ≠ g are indifferent.
But since Ce(f x Y ) is in general an interval, an X-act f X such that f X (x) ∈ Ce(f x Y ) for all x ∈ X need not be of the form (3.3). In fact for each x ∈ X, f X (x) might correspond to an evaluation of f x Y performed using a different p Y ∈ P Y . Thus we turn to a more indirect approach. First, notice that from any act f X and α ∈ Δ(X), we can obtain the "reduction" 11 of f X via α by taking the mixture Σ x∈X α x f X (x). Looking back, once again, at the SEU setting, we can see that pe X (f) can be alternatively identified as the unique X-act f X such that, for all elements α of Δ(X), we have Σ x∈X α x f X (x) = ce(Σ x∈X α x f x Y ). In fact if f X = pe X (f) the equality is always true, since

Σ x∈X α x pe X (f)(x) = Σ x∈X α x Σ y∈Y f(x, y)p Y (y) = Σ x∈X Σ y∈Y f(x, y)α x p Y (y) = Σ y∈Y p Y (y) Σ x∈X α x f(x, y) = ce(Σ x∈X α x f x Y ).

The inverse is immediately obtained taking the α's that correspond to degenerate distributions over X. Notice that the equalities above hold exactly because each pe X (f)(x) is found using the same marginal p Y , which also coincides with the marginal used to evaluate every ce(f x Y ). This motivates the following definition:

Definition 3. An act f X ∈ F X is an X-uniform product equivalent of f ∈ F if

Σ x∈X α x f X (x) ∈ Ce(Σ x∈X α x f x Y )   (3.4)

for all α ∈ Δ(X). The set of all such acts for given f is denoted Upe X (f).

Armed with this, we are ready to give the main result of the paper:

Theorem 2. Let ≽ be a Bewley preference over F represented by P. Then the following are equivalent:

1) P = co(P X ⊗ P Y ).
2) For all f ∈ F and f X ∈ Upe X (f),

c ≽ f ⇒ c ≽ f X and f ≽ d ⇒ f X ≽ d for all c, d ∈ K,

while at the same time, for all c, d ∈ K,

c ≽ f X for all f X ∈ Upe X (f) ⇒ c ≽ f and f X ≽ d for all f X ∈ Upe X (f) ⇒ f ≽ d.

Proof: See the Appendix.

The property in item 2) is the requirement that X-uniform product equivalents behave as elements of Ce(f), the generalized version of certainty equivalents of Ghirardato et al. (2004).
As can be seen in item 1), it provides a characterization of s-independence that recovers the desired uniqueness, in the sense that ≽ is uniquely determined by ≽ X and ≽ Y . This is done by building the largest set of product priors consistent with P X and P Y , the set P X ⊗ P Y in which all possible matches of models consistent with ≽ X and ≽ Y are considered. Obviously, by Proposition 1, when either of the conditions holds, ≽ satisfies Conditional Invariance.

Remark: Given the theorem, we would expect that the set Upe X (f) could be shown to coincide with (3.3). In fact this is not true, and Upe X (f) is in general larger. The intuition is the following. The sets of the form (3.3) are clearly convex, and hence, as is well known, they can be identified by the intersection of the half-spaces that contain them. This is what we are de facto doing when we define uniform product equivalents using condition (3.4), with the α's playing the role of normals to the hyperplanes defining such half-spaces. Nevertheless we can do this only up to a point, because the α's have to be positive and normalized, a restriction that binds when trying to separate sets of acts. Hence we are left with fewer hyperplanes than those needed to "cut out" the right set, and with a larger Upe X (f). This does not affect the result though, because the additional acts cannot be distinguished from those in (3.3) as long as we evaluate X-acts using a distribution in Δ(X).

Independent acts and independent events

A series of modeling questions concerning s-independence do not lend themselves immediately to representation through a state space with a product structure. Here we propose an approach that leverages Theorem 2 to answer two such questions: when are two acts f, g ∈ F independent according to a Bewley preference? When does such a preference consider two events A, B ⊆ Ω independent?
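For finite sets of marginal extreme points, the product set appearing in item 1) of Theorem 2 is straightforward to construct. The sketch below (with made-up marginals) builds P X ⊗ P Y and checks that the marginals of the resulting set return exactly P X and P Y, illustrating how the set, and hence the induced preference, is pinned down by the marginal information alone:

```python
nX, nY = 2, 2
PX = [(0.7, 0.3), (0.2, 0.8)]            # made-up extreme marginals over X
PY = [(0.6, 0.4), (0.1, 0.9)]            # made-up extreme marginals over Y

def outer(px, py):
    """The product prior p_X x p_Y over X x Y."""
    return {(i, j): px[i] * py[j] for i in range(nX) for j in range(nY)}

# P_X (x) P_Y: all pairwise products, the extreme points of co(P_X (x) P_Y)
PXY = [outer(px, py) for px in PX for py in PY]
assert len(PXY) == len(PX) * len(PY)

def margX(p):
    return tuple(sum(p[i, j] for j in range(nY)) for i in range(nX))

def margY(p):
    return tuple(sum(p[i, j] for i in range(nX)) for j in range(nY))

# the marginals of the product set recover exactly P_X and P_Y
r = lambda t: tuple(round(v, 12) for v in t)
assert {r(margX(p)) for p in PXY} == set(PX)
assert {r(margY(p)) for p in PXY} == set(PY)
```

Any other set with the same marginals, such as the "common switch" sets of Section 2.2, is a proper subset of this one once convex hulls are taken.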
We will start as before from a finite state space Ω, though we note that as long as we restrict our attention to simple acts (which assume a finite number of values) the whole discussion can be extended to the case |Ω| = ∞. Now assume we are given a Bewley preference ≽ over K Ω . An act f induces a finite partition Π f over Ω given by Π f = {f -1 (k) | k ∈ f[Ω]}. For any two acts f, g ∈ F, let Π f ⊗ Π g = {A ∩ B | A ∈ Π f and B ∈ Π g }, which is once again a partition of Ω. Let Σ f be the algebra generated by Π f and Σ f×g the one generated by Π f ⊗ Π g . Finally, for a set P of priors over Δ(Ω), let P f be the set of restrictions of elements of P to Σ f , namely:

P f = {p' : Σ f → [0, 1] | ∃ p ∈ P such that p'(A) = p(A) for all A ∈ Σ f }

and define similarly P f×g for the restrictions of P to Σ f×g . We are now ready to give the first definition of this section.

Definition 4. Say that acts f and g are independent according to ≽ if the set P representing ≽ satisfies co(P f×g ) = co(P f ⊗ P g ).

Notice that when P is a singleton {p} this reduces to the usual condition that p(A ∩ B) = p(A) × p(B) for all sets A and B in the algebras generated by f and g respectively. A characterization of independent acts is immediately deduced from Theorem 2. Let ≽ f×g be the restriction of ≽ to Σ f×g -measurable acts, let F f be the set of acts that are Σ f -measurable, and say that h f ∈ F f is an f-uniform product equivalent of the Σ f×g -measurable act h, h f ∈ Upe f (h), if

Σ A∈Π f α A h f (A) ∈ Ce(Σ A∈Π f α A h A Πg )

for all α ∈ Δ(Π f ), where h A Πg is the Σ g -measurable act identified by h A Πg (B) = h(A ∩ B) for each B ∈ Π g . Then acts f and g are independent according to ≽ if and only if, for every Σ f×g -measurable act h and h f ∈ Upe f (h),

c ≽ f×g h ⇒ c ≽ f×g h f and h ≽ f×g d ⇒ h f ≽ f×g d for all c, d ∈ K,

while at the same time, for all c, d ∈ K,

c ≽ f×g h f for all h f ∈ Upe f (h) ⇒ c ≽ f×g h and h f ≽ f×g d for all h f ∈ Upe f (h) ⇒ h ≽ f×g d.

One can now also easily derive a definition of independent events.
Fix two elements {k 1 , k 2 } of K such that k 1 > k 2 , and let f A = k 1 1 A + k 2 1 Ω\A . We will then say that A and B are independent events according to ≽ if f A and f B are independent acts according to ≽. It follows that two events are independent under this definition if and only if p(A ∩ B) = p(A)p(B) for all p in the set P representing ≽.

A characterization of s-independence for MaxMin preferences

In their pioneering work, Gilboa and Schmeidler (1989) provide a characterization of independent products of relations for MaxMin preferences that is strictly connected to the representation in Theorem 2. Recall that a preference ≽ over F is MaxMin if there is a closed convex set of priors P ⊆ Δ(Ω) such that ≽ is represented by the concave functional V(f) = min p∈P Σ ω∈Ω f(ω)p(ω). Gilboa and Schmeidler (1989) define a notion of independent product of preferences which is equivalent to a MaxMin relation ≽ X×Y on F represented by

V(f) = min p∈co(P X ⊗ P Y ) Σ ω∈Ω f(ω)p(ω)

where P X and P Y are the priors representing two original MaxMin preferences ≽ X and ≽ Y over K X and K Y respectively. The link with our representation is clear, as is the fact that the definition of Gilboa and Schmeidler also satisfies the requirement of being completely identified by its marginal preferences. In fact more can be said on the relation between the two models. Ghirardato et al. (2004) introduce the concept of unambiguous preference. This is a sub-relation ≽* of a complete preference ≽ over acts that is identified as follows:

f ≽* g ⇔ λf + (1 -λ)h ≽ λg + (1 -λ)h for all λ ∈ (0, 1) and h ∈ F.

For a large class of preferences, which includes MaxMin, ≽* can be shown to be a Bewley relation. 12 Moreover, the sets of priors representing a MaxMin preference and its unambiguous sub-relation ≽* coincide. Thus we can state the following corollary of Theorem 2:

Corollary 4.
For any MaxMin preference ≽ over F and its unambiguous sub-relation ≽*, letting Upe* X (f) stand for the X-uniform product equivalents of f under ≽*, the following are equivalent:

1) There are nonempty, closed and convex sets P X ⊆ Δ(X) and P Y ⊆ Δ(Y) such that ≽ is represented by the functional V : F → R given by

V(f) = min p∈co(P X ⊗ P Y ) Σ ω∈Ω f(ω)p(ω)

2) For all f ∈ F and f X ∈ Upe* X (f),

c ≽* f ⇒ c ≽* f X and f ≽* d ⇒ f X ≽* d for all c, d ∈ K,

while at the same time, for all c, d ∈ K,

c ≽* f X for all f X ∈ Upe* X (f) ⇒ c ≽* f and f X ≽* d for all f X ∈ Upe* X (f) ⇒ f ≽* d.

This provides a characterization of Gilboa-Schmeidler independence based on the model primitives (≽ and the derived relation ≽*) instead of on elements of the representation, as the one given in the 1989 paper (although see on this our comments in the next section).

Remark: One might be tempted to try to extend this result to other classes of preferences under ambiguity, given that Cerreia-Vioglio et al. ensure that all MBA preferences have an unambiguous sub-relation with a Bewley representation. Nevertheless, as the following two examples illustrate, this might lead to issues with existence and uniqueness. It is our opinion that Corollary 4 is a natural extension of Theorem 2 precisely because there is a deep link, at a mathematical and interpretational level, between the Bewley and MaxMin models, as illustrated for example in [START_REF] Gilboa | Objective and Subjective Rationality in a Multi-Prior Model[END_REF]. Definitions and characterizations of s-independence for alternative models should be built based on their individual structure and interpretation.

Example 1. Existence: Consider Multiplier preferences, which are represented by the functional

U(f) = min p∈Δ(Ω) Σ ω∈Ω f(ω)p(ω) + θ r(p||q)

where q is a reference distribution, r(p||q) the relative entropy of p w.r.t. q, and θ a non-negative real number.
Ghirardato and Siniscalchi (2010) show that the set of priors representing the unambiguous part ≽* of a multiplier preference must coincide, when Ω is finite, with Δ(Ω). Hence if Ω = X × Y and both X and Y contain at least two elements, no multiplier preference can satisfy condition 2) of Corollary 4, since in this case Δ(X) ⊗ Δ(Y) is strictly included in Δ(X × Y). Thus no multiplier preference over K Ω can reflect s-independence according to our definition.

Example 2. Uniqueness: A Choquet Expected Utility preference can be represented, assuming for simplicity K ⊆ R + , by the functional V(f) = ∫ v({ω | f(ω) ≥ t}) dt, where v : 2 Ω → [0, 1] is a capacity, i.e. a normalized, monotone function over sets. As is well known, the Choquet integral can be alternatively expressed as V(f) = Σ ω∈Ω f(ω)p f (ω), where p f is an additive probability distribution derived from the capacity by assigning to ω the probability

p f (ω) = v({ω' | f(ω') ≥ f(ω)}) - v({ω' | f(ω') > f(ω)}).

There are as many such probabilities as there are orders ≥ f over the state space, induced by the rule ω ≥ f ω' if and only if f(ω) ≥ f(ω'). [START_REF] Ghirardato | Differentiating Ambiguity and Ambiguity Attitude[END_REF] show that the priors P representing the unambiguous part of a CEU preference coincide with co{p f | f ∈ F}. Hence if we let X = {x 1 , x 2 } and Y = {y 1 , y 2 }, and consider two CEU preferences ≽ X and ≽ Y on K X and K Y respectively, we can conclude that the number of extreme points of P X and P Y is at most two each, as there are only two orders over a set of two elements. Thus P X ⊗ P Y has at most 2 × 2 = 4 extreme points. That means that if we wish to define the independent product of ≽ X and ≽ Y as the CEU preference over K X×Y whose unambiguous preference is represented by P X ⊗ P Y , we will be unable to do so uniquely, as we need to assign 4 distributions to 4! = 24 different orders over X × Y.
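The counting argument of Example 2 can be made concrete. The sketch below is illustrative: it uses a made-up capacity on a two-point state space, derives the distribution p f associated with each order of the states, and contrasts the 2 × 2 = 4 available products with the 4! = 24 orders to be filled over X × Y.

```python
from itertools import permutations

# a made-up capacity on a two-point state space {0, 1}
v = {frozenset(): 0.0, frozenset({0}): 0.3,
     frozenset({1}): 0.4, frozenset({0, 1}): 1.0}

def p_f(order):
    """The additive distribution p_f induced by ranking order[0] above
    order[1]: p_f(w) = v({w' : f(w') >= f(w)}) - v({w' : f(w') > f(w)})."""
    upper, out = frozenset(), {}
    for w in order:                      # walk the states from best to worst
        out[w] = v[upper | {w}] - v[upper]
        upper = upper | {w}
    return out

extreme = {tuple(sorted(p_f(o).items())) for o in permutations((0, 1))}
assert len(extreme) == 2                 # one extreme prior per order

# over X x Y with |X| = |Y| = 2: only 2 * 2 = 4 pairwise products are
# available, while 4! = 24 orders over the product state space each need
# a distribution assigned to them
assert len(list(permutations(range(4)))) == 24
assert 2 * 2 < 24
```

With the capacity above the two extreme priors are (0.3, 0.7) and (0.6, 0.4), one per order, so the mismatch between 4 products and 24 orders is exactly the non-uniqueness described in the text.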
Dropping the requirement that the product be CEU would only worsen the issue, as the number of MBA preferences consistent with priors P X ⊗ P Y is extremely large. Hence our definition of s-independence is unable to uniquely identify the independent product of two CEU preferences.

Some considerations on our results

We conclude with a brief comment concerning falsifiability. Decision theorists in general like, with good reason, to keep what we will call "continuity" and "behavioral" axioms separated. The distinction between the two is sometimes vague, but it can be made precise using finite falsifiability as a litmus test. With this we mean that, starting from the primitive of our model, we should always be able to obtain a violation of a behavioral assumption in a finite number of steps. In this sense the classic Independence axiom is behavioral, since it is negated in two steps, by finding three acts f, g, h and a weight λ ∈ (0, 1) such that f ≽ g but λg + (1 -λ)h ≻ λf + (1 -λ)h. On the other hand the Archimedean axiom, which asks that for any three acts f, g, h the sets {λ ∈ [0, 1] | λf + (1 -λ)h ≽ g} and {λ ∈ [0, 1] | g ≽ λf + (1 -λ)h} be closed, is typically not. In fact to violate it we must be able to check that, for example, λ*f + (1 -λ*)h ≽ g fails while λ n f + (1 -λ n )h ≽ g for all terms {λ n } n∈N of a sequence converging to λ*, which involves verifying an infinite number of positive statements. When considering a novel representation result, we usually prefer new assumptions to be of the first kind rather than the second, since this allows for a direct test of the validity of the model. At the same time, characterizations based on continuity-type assumptions do bring a contribution, as they still allow us to identify the position of a model in the space of possible representations. The Conditional Invariance assumption falls on the behavioral side, as can be easily checked.
The requirement we proposed as a characterization of s-independence for Bewley preferences, unfortunately, does not. To see this, notice that, for example, a possible violation of the axiom takes place if we can find a c ∈ K such that f ≻ c but c ≽ f_X for some f_X ∈ Upe(f). But showing this requires us to make sure that f_X is an X-uniform product equivalent of f, a process which implies checking that for all α ∈ Δ(X), an infinite set, equation (3.4) is satisfied. For this reason we stop short of stating that we provide a full behavioral characterization of the model P = co(P_X ⊗ P_Y), and we believe that additional work is still needed to obtain it. This is the focus of ongoing research, which we hope to report in future work. A Proofs Proof of Proposition 1: We prove the proposition only for X-acts conditioned on Y-events, since the argument for the inverse situation is symmetric. Assume f_X 1_R + h 1_{Ω\R} ≽ g_X 1_R + h 1_{Ω\R} for some h ∈ F, f_X, g_X ∈ F_X and R ∈ Σ_Y. This means that for all p ∈ P

Σ_{ω∈R} f_X(ω) p(ω) + Σ_{ω∈Ω\R} h(ω) p(ω) ≥ Σ_{ω∈R} g_X(ω) p(ω) + Σ_{ω∈Ω\R} h(ω) p(ω). (A.1)

Subtract the common term on each side, for each prior, to get Σ_{ω∈R} f_X(ω) p(ω) ≥ Σ_{ω∈R} g_X(ω) p(ω). Because g_X and f_X are constant over X and there is some B ⊆ Y such that R = X × B, we have

Σ_{ω∈R} f_X(ω) p(ω) = Σ_{x∈X} f_X(x) Σ_{y∈B} p(x, y) (A.2)

for all p ∈ P, and similarly for g_X. Since for each p ∈ P there are p_X ∈ Δ(X) and p_Y ∈ Δ(Y) such that p = p_X × p_Y, we can rewrite the r.h.s. of (A.2) as p_Y(B) Σ_{x∈X} f_X(x) p_X(x). Hence Σ_{x∈X} f_X(x) p_X(x) ≥ Σ_{x∈X} g_X(x) p_X(x) for all p_X ∈ P_X, where the common factor p_Y(B) can be canceled on both sides (for those priors in P for which p_Y(B) = 0 the inequalities are trivially true). Now, since S = X × B′ for some B′ ⊆ Y, we can multiply, for each p_X ∈ P_X, the inequality above by p_Y(B′), for all p_Y ∈ Δ(Y) such that p_X × p_Y ∈ P, to obtain

Σ_{ω∈S} f_X(ω) p(ω) = Σ_{x∈X} f_X(x) p_X(x) p_Y(B′) ≥ Σ_{x∈X} g_X(x) p_X(x) p_Y(B′) = Σ_{ω∈S} g_X(ω) p(ω)

for all p ∈ P.
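The factorization p = p_X × p_Y that drives this proof can be checked numerically for a joint distribution on a finite product space. A small illustrative sketch (names and tolerance are ours):

```python
from itertools import product

def factorizes(p, X, Y, tol=1e-9):
    """True iff the joint distribution p on X x Y equals the product of its
    marginals, i.e. p(x, y) = p_X(x) * p_Y(y) for all (x, y)."""
    p_x = {x: sum(p[(x, y)] for y in Y) for x in X}   # X-marginal
    p_y = {y: sum(p[(x, y)] for x in X) for y in Y}   # Y-marginal
    return all(abs(p[(x, y)] - p_x[x] * p_y[y]) <= tol for x, y in product(X, Y))
```

For instance, a joint law with p(0,0)=p(1,1)=0.5 has uniform marginals but is not the product of them, so factorizes returns False for it.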
Adding Σ_{ω∈Ω\S} h(ω) p(ω) to both sides of the inequality delivers the desired implication. Proof of Theorem 1: 1) ⇒ 2) is a direct consequence of Proposition 1. The argument for 2) ⇒ 1) is well known, but we provide it here for completeness. To see that 2) ⇒ 1), consider first the case in which X × {y} is ≽-non-null for only one y ∈ Y (at least one such y must exist, since otherwise p(Ω) = 0). Then it is clear that p_Y = δ_y and p = p_X × δ_y, where δ_y is the degenerate distribution at y inside Δ(Y), namely the element of Δ(Y) that is 1 at y and zero everywhere else. Now assume that at least two Y-states y, y′ ∈ Y are ≽-non-null. Denote also by F′_X the set of maps from X to K, with generic elements f′_X and g′_X. Each X-act f_X in F_X has a corresponding projection f′_X in this set, identified by f′_X(x) = f_X(x). For every ≽-non-null Y-state let ≽_y be the preference over F′_X identified by

f′_X ≽_y g′_X ⇔ f_X 1_y + h 1_{Ω\y} ≽ g_X 1_y + h 1_{Ω\y}. (A.3)

By the usual arguments this preference is independent from h. Moreover, Conditional Invariance requires it to be also independent from y. Because it is still going to be SEU over F′_X, there is a unique distribution p_X ∈ Δ(X) representing each ≽_y. On the other side, we can see that for each ≽-non-null y, the second comparison in (A.3) will be satisfied if and only if Σ_{x∈X} f_X(x) p(x, y) ≥ Σ_{x∈X} g_X(x) p(x, y), that is, if and only if Σ_{x∈X} f_X(x) p(x|y) ≥ Σ_{x∈X} g_X(x) p(x|y), where p(x|y) = p(x, y)/p_Y(y). The latter inequality is clearly an alternative SEU representation of ≽_y, hence by the uniqueness of the SEU representation p(·|y) = p_X for all ≽-non-null y ∈ Y. But then p = p_X × p_Y. 1) ⇒ 3) is an immediate consequence of the characterization of pe_X(f) in (3.1), and elementary distributive properties of the sum of real numbers. To show that 3) ⇒ 1), assume by way of contradiction that 3) holds but p ≠ p_X × p_Y. Then by an elementary application of the hyperplane separation theorem there is, w.l.o.g., a vector r in R^Ω such that Σ_{ω∈Ω} r_ω p(ω) > Σ_{ω∈Ω} r_ω (p_X × p_Y)(ω).
Because distributions are positive and normalized to 1, we can multiply both sides of the inequality by a constant and add a constant vector to both without affecting the inequality. Hence we can assume that r corresponds to some f_r in F. But clearly

Σ_{(x,y)∈X×Y} f_r(x, y) p_X(x) p_Y(y) = Σ_{x∈X} ( Σ_{y∈Y} f_r(x, y) p_Y(y) ) p_X(x)

is the value of pe_X(f_r) under the prior p, hence this implies that f_r ≻ pe_X(f_r), contradicting 3).

Proposition 2 (from Proposition 18 in Ghirardato et al. (2004)). For every f ∈ F,

k ∈ Ce(f) ⇔ min_{p∈P} Σ_{ω∈Ω} f(ω) p(ω) ≤ k ≤ max_{p∈P} Σ_{ω∈Ω} f(ω) p(ω).

Proof of Theorem 2: 1) ⇒ 2). For any c ∈ K, f ≽ c if and only if

min_{p∈P} Σ_{ω∈Ω} f(ω) p(ω) ≥ c ⇔ min_{p_Y∈P_Y, p_X∈P_X} Σ_{y∈Y} Σ_{x∈X} f(x, y) p_X(x) p_Y(y) ≥ c,

by the product structure of P. Let p*_X and p*_Y be the distributions that achieve the above minimum. For any f_X ∈ Upe_X(f) we must have, by taking the p_X reduction of f_X for any p_X ∈ P_X,

Σ_{x∈X} f_X(x) p_X(x) ≥ min_{p_Y∈P_Y} Σ_{y∈Y} Σ_{x∈X} f(x, y) p_X(x) p_Y(y) ≥ Σ_{y∈Y} Σ_{x∈X} f(x, y) p*_X(x) p*_Y(y) ≥ c.

Hence f_X ≽ c. On the other side, Upe_X(f) contains the set

{f_X ∈ F_X | ∃ p_Y ∈ P_Y such that f_X(x) = Σ_{y∈Y} f(x, y) p_Y(y) for all x ∈ X},

since for any such f_X and any α ∈ Δ(X) we have

max_{p_Y∈P_Y} Σ_{y∈Y} Σ_{x∈X} f(x, y) α_x p_Y(y) ≥ Σ_{x∈X} f_X(x) α_x ≥ min_{p_Y∈P_Y} Σ_{y∈Y} Σ_{x∈X} f(x, y) α_x p_Y(y).

Hence f_X ≽ c for all f_X ∈ Upe_X(f) implies that min_{p_X∈P_X} Σ_{x∈X} ( Σ_{y∈Y} f(x, y) p_Y(y) ) p_X(x) ≥ c for all p_Y ∈ P_Y, and thus f ≽ c.

2) ⇒ 1). Suppose first that there is a p ∈ P such that p ∉ co(P_X ⊗ P_Y). Then there must be, by the usual hyperplane separating argument, an f* ∈ F and a c ∈ K such that

Σ_{ω∈Ω} f*(ω) p(ω) > c ≥ Σ_{ω∈Ω} f*(ω) (p_X × p_Y)(ω) for all p_X × p_Y ∈ P_X ⊗ P_Y.

Now the last inequality implies that c ≽ f_X for all f_X ∈ Upe_X(f*), since otherwise we would have one member f̃_X of this set for which

max_{p_X∈P_X} Σ_{x∈X} f̃_X(x) p_X(x) > max_{p_Y∈P_Y, p_X∈P_X} Σ_{y∈Y} Σ_{x∈X} f*(x, y) p_X(x) p_Y(y),

in direct contradiction to Σ_{x∈X} f̃_X(x) p_X(x) ∈ Ce(Σ_{x∈X} f*(x, ·) p_X(x)) for all p_X ∈ P_X ⊆ Δ(X). But then by assumption c ≽ f*, which implies c ≥ Σ_{ω∈Ω} f*(ω) p(ω) for all p ∈ P, a contradiction. This proves that P ⊆ co(P_X ⊗ P_Y). For the remaining inclusion, assume there are p_X ∈ P_X and p_Y ∈ P_Y such that p_X × p_Y ∉ P. Then we can find an f* and a c such that

Σ_{ω∈Ω} f*(ω) (p_X × p_Y)(ω) > c ≥ Σ_{ω∈Ω} f*(ω) p(ω) (A.4)

for all p ∈ P. Thus c ≽ f*. On the other side, we can find an f_X ∈ Upe_X(f*) such that f_X(x) = Σ_{y∈Y} f*(x, y) p_Y(y). By assumption c ≽ f_X, hence c ≥ Σ_{x∈X} Σ_{y∈Y} f*(x, y) p_Y(y) p_X(x) for all p_X ∈ P_X, so that the strict inequality in (A.4) cannot hold. This shows that P_X ⊗ P_Y ⊆ P and hence, since P is closed and convex, co(P_X ⊗ P_Y) ⊆ P. An immediate corollary of Theorem 2 is then: Corollary 3. Let ≽ be a Bewley preference over F. Then f and g are independent if and only if for all h ∈ F f ×g and h

Notes: (1) See for example Rigotti and Shannon (2005), Ghirardato and Katz (2006) and Lopomo et al. (2011) for applications in finance, voting, and principal-agent models respectively. (2) In this regard see Ghirardato et al. (2004), Gilboa et al. (2010) and Cerreia-Vioglio et al. (2011). (3) While Gilboa et al. (2010) work in the classical Anscombe-Aumann setting, the correspondence to our environment is immediate and can be found in their Appendix B. (4) Ok, Ortoleva and Riella (2012) give a different axiomatization for a model that corresponds to (2.1) when K is compact. (5) The difference lies in the fact that in Blume et al. (1991, "Lexicographic Probabilities and Choice under Uncertainty") the conditioning events are only of the form X × {y}. While the two formulations are equivalent for SEU preferences, it can be shown that for a Bewley decision maker our version is strictly stronger. (6) Using the Archimedean Continuity, Monotonicity and Completeness properties of the SEU model.
Notes (continued): (7) Thus our definition is different from the most intuitive generalization of the certainty equivalent, which asks for any X-act that is indifferent to f, and which one might dub the X-equivalent of the act. (8) We have adapted Prop. 18 in Ghirardato et al. (2004) to our environment and notation. (9) Ok, Ortoleva and Riella (2012, "Incomplete Preferences Under Uncertainty: Indecisiveness in Beliefs versus Tastes") use this type of reduction in the formulation of an axiom that characterizes two dual representations: Bewley's and the alternative single-prior multi-expected-utility model. (10) Cerreia-Vioglio et al. (2011, "Rational Preferences under Ambiguity") show that this is true for all Monotone Bernoullian Archimedean preferences, a class that includes Variational Preferences, Smooth Ambiguity preferences and many others. Acknowledgments: I would like to thank Eric Danan and participants at the THEMA seminar at U. Cergy and the Ecole Polytechnique internal seminar for useful comments and discussions. I acknowledge support by a public grant overseen by the French National Research Agency (ANR) as part of the Investissements d'Avenir program (Idex Grant Agreement No. ANR-11-IDEX-0003-02 / Labex ECODEC No. ANR-11-LABEX-0047). Needless to say, all mistakes are my own.
halid: 01665015 | lang: en | domain: info.info-ao | timestamp: 2024/03/05 22:32:10 | year: 2017 | url: https://theses.hal.science/tel-01665015v2/file/BARROIS_Benjamin.pdf
Acknowledgments: Olivier Sentieys, Karthick Parashar, Daniel Menard, Cédric Killian, Audrey Gabriel, Karim, Nicolas R. and Joel, and all those who contributed to my work. My first thanks go to my thesis advisor Olivier Sentieys, who believed in me from the Master's degree on and gave me the opportunity to carry out this thesis. It is with the support of his experience that I was able to produce the various works presented in this document and to become part of the rich communities of the fields these works belong to. arithmetic provides much higher performance and important energy savings when applied to real-life applications, thanks to much smaller error and data width. The last contribution is a comparative study between the fixed-point and floating-point paradigms. For this, a new version of ApxPerf was developed, which adds an extra layer of High-Level Synthesis (HLS), performed by Mentor Graphics Catapult C or Xilinx Vivado HLS. ApxPerf v2 uses a unique C++ source for both hardware performance and accuracy estimations of approximate operators. The framework comes with template-based synthesizable C++ libraries for approximate integer operators (apx_fixed) and for custom floating-point operators (ct_float). The second version of the framework can also evaluate complex applications, which are now synthesizable using HLS. After a comparative evaluation of our custom floating-point library against other existing libraries, the fixed-point and floating-point paradigms are first compared in terms of stand-alone performance and accuracy. Then, they are compared in the context of K-means clustering and FFT applications, where the interest of small-width floating-point is highlighted. The thesis concludes on the strong interest of reducing the bit-width of arithmetic operations, but also on the important issues brought by approximation.
First, the many published approximate integer operators, despite promising important energy savings at low error cost, seem not to keep their promises when considered at the application level. Indeed, we showed that fixed-point arithmetic with smaller bit-widths should be preferred to inexact operators. Finally, we emphasize the interest of small-width floating-point for approximate computing. Small floating-point is demonstrated to be very interesting for low-energy systems, compensating its overhead with its high dynamic range, its high flexibility and its ease of use for developers and system designers. Résumé en Français Over the last decades, significant improvements have been made in terms of computing performance and energy reduction, following Moore's law. However, the physical limits related to the shrinking of silicon-based transistors are about to be reached, and overcoming this problem is today one of the major challenges of research and industry. One of the ways to improve energy efficiency is the use of different number representations, and the use of reduced sizes for these representations. The standard representation of real numbers is today double-precision floating-point. However, it is now acknowledged that a large number of applications could be executed using lower-precision representations, with a minimal impact on the quality of their outputs. This paradigm, recently named approximate computing, appears as a promising approach and has become one of the major research areas aiming at improving computing speed and energy consumption for embedded and high-performance computing systems.
Approximate computing relies on the tolerance of many systems and applications to a loss of quality or optimality in the produced result. By relaxing the requirements of extreme result accuracy or of determinism, approximate computing techniques allow a considerably increased energy efficiency. In this thesis, the performance/error trade-off obtained by relaxing the accuracy of computations in the basic arithmetic operators is addressed. After a study and a constructive criticism of the existing ways of performing approximate basic arithmetic operations, methods and tools for the evaluation of the error cost and of the performance impact of the use of different arithmetic paradigms are presented. First, after a quick description of the classical floating-point and fixed-point arithmetics, a study of the literature on approximate operators is presented. The main techniques for building approximate adders and multipliers are highlighted by this study, as well as the problem of the highly variable nature and amplitude of the errors induced in computations using them. Second, a modular fixed-point error estimation technique relying on the power spectral density is presented. This technique takes the spectral nature of the quantization noise filtered through the system into account, leading to a better estimation accuracy compared to modular methods which ignore this spectral nature, and to a lower complexity than the classical propagation of the error mean and variance through the whole system. Then, the problem of the analytical estimation of the error produced by approximate operators is raised.
The great behavioral and structural variety of approximate operators makes the existing techniques much more complex, which results in a high cost in terms of memory or computing power, or, on the contrary, in a poor estimation quality. With the proposed technique of positional bitwise error-rate propagation, a good compromise is found between the complexity of the estimation and its accuracy. Then, a technique using pseudo-simulation based on approximate arithmetic operators to reproduce the effects of VOS is presented. This technique makes it possible to use high-level simulations to estimate VOS-related errors instead of extremely long and memory-hungry transistor-level SPICE simulations. Introduction Handling the End of Moore's Law In the history of computers, huge improvements in calculation accuracy have been made in two ways. First, the accuracy of computations was gradually improved with the increase of the bit width allocated to number representation. This was made possible thanks to technology evolution and miniaturization, allowing a dramatic growth of the number of basic logic elements embedded in circuits. Second, accurate number representations such as floating-point were proposed. Floating-point representation was first used in 1914 in an electro-mechanical version of Charles Babbage's computing machine made by Leonardo Torres y Quevedo, the Analytical Engine, followed by Konrad Zuse's first programmable mechanical computer embedding 24-bit floating-point representation. Today, silicon-based general-purpose processors mostly embed 64-bit floating-point computation units, which are nowadays the standard in terms of high-accuracy computing. At the same time, important improvements of computational performance have been achieved. Miniaturization, besides allowing larger bit-widths, also brought considerable benefits in terms of power and speed, and thus energy.
The CDC 6600 supercomputer created in 1964, with its 64K 60-bit words of memory, had a power consumption of approximately 150 kW for a computing capacity of approximately 500 KFLOPS, which makes 3.3 floating-point operations per Watt. In comparison, the world's best supercomputer of 2017, Sunway TaihuLight in China, with its 1.31 PB of memory, announces 6.05 GFLOPS/W [5]. Therefore, in roughly fifty years, supercomputer energy efficiency has improved by a factor of about 2E9, following the needs of industry. These impressive progresses were achieved according to Moore's law, who forecast in 1965 [Moore, "Cramming more components onto integrated circuits"] an exponential growth of computing circuit complexity, depicted in Figure 1 in terms of transistor count, performance, frequency, power and number of processing cores. From the day it was stated, this law and its numerous variations have outstandingly described the evolution of computing, but also the needs of the global market, so much so that computing is now inherent to nearly all possible domains, from weather forecasting to secured money transactions, including targeted advertising and self-driving cars. However, many specialists agree that Moore's law will end in the very near future [Williams, "What's next? The end of Moore's law"]. First of all, the gradual decrease of the size of silicon transistors is coming to an end. With a Van der Waals radius of 210 pm, today's 10-nm transistors are less than 50 atoms wide, leading to new issues at the quantum-physics scale, as well as increased current leakage. Then, the 20th century saw a steady increase of the clock frequency of synchronous circuits, which represent the overwhelming majority of circuits, driving important gains in performance. However, the beginning of the third millennium has seen a stagnation of this frequency, forcing performance to be found in parallelism instead, single-core processors disappearing in favor of multi-/many-core processors.
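The efficiency figures quoted above can be verified with a two-line computation (numbers taken directly from the text):

```python
# FLOPS per watt for the CDC 6600 (500 KFLOPS at ~150 kW) vs. Sunway TaihuLight.
cdc6600_flops_per_w = 500e3 / 150e3           # ~3.3 FLOPS/W
sunway_flops_per_w = 6.05e9                   # 6.05 GFLOPS/W = 6.05e9 FLOPS/W
improvement = sunway_flops_per_w / cdc6600_flops_per_w
print(improvement)                            # ~1.8e9, i.e. roughly 2E9
```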
With technology pushed to its limits and no cutting-edge physical-layer technology appearing to arrive soon, new ways have to be found for computing to keep up with the needs. Moreover, with the strong interest of industry in energy-critical embedded systems, energy efficiency is sought more than ever. Specialists are anticipating a future technological revolution brought by the Internet of Things (IoT), with a fast growth of the number of interconnected autonomous embedded systems [Xu et al.; Perera et al.; Zanella et al.]. As said above, a first performance improvement can be found in computing parallelism, thanks to multi-core/multi-thread superscalar or VLIW processors [Sohi et al.], or GPU computing. However, not all applications can be well parallelized because of data dependencies, leading to moderate speed-ups in spite of an important area and energy overhead. A second modern way to improve performance is the use of hardware accelerators. These accelerators come in mainstream processors, with hardware video encoders/decoders such as for the H.264 or H.265 codecs, but also in Field-Programmable Gate Arrays (FPGAs). FPGAs consist of a grid of programmable logic gates, allowing reconfigurable hardware implementation. In addition to general-purpose Look-Up Tables (LUTs), most FPGAs embed Digital Signal Processing blocks (DSPs) to accelerate signal processing computations. Hybrids embedding processors connected to an FPGA, embodied by the Xilinx Zynq family, are today a big stake for high-performance embedded systems. Nevertheless, hardware and software co-design still represents an important cost in terms of development and testing.
Approximate Computing or Playing with Accuracy for Energy Efficiency This thesis focuses on an alternative way to improve the performance/energy ratio, which is relaxing computation accuracy to improve performance, in terms of energy/area for low-power systems, but also in terms of speed for High Performance Computing (HPC). Indeed, many applications can tolerate some imprecision, for various reasons. For instance, having a high accuracy in signal processing applications can be useless if the input signals are noisy, since the least significant bits in the computation will only be applied to the noisy part of the signal. Also, some applications such as classification do not always have a golden output and can tolerate a set of satisfying results. Therefore, it is useless to perform many extra loop iterations once an output can be considered good enough. Various methods applied at several levels make it possible to relax accuracy to get performance or energy benefits. At the physical layer, voltage and frequency can be scaled beyond the circuit tolerance threshold with potentially important energy or speed benefits, but with effects on the quality of the output that may be destructive and hard to manage [Kuroda et al.; Le Sueur et al.]. At the algorithmic level, several simplifications can be performed, such as loop perforation or mathematical function approximations. For instance, trigonometric functions can be approximated using COordinate Rotation DIgital Computing (CORDIC) [Volder] with a limited number of iterations, and complex functions can be approximated by intervals using simple polynomials stored in small tables.
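The CORDIC idea just mentioned can be sketched in a few lines. The following is a generic textbook circular-rotation CORDIC (not the thesis's implementation), showing how the iteration count trades accuracy for work:

```python
import math

def cordic_cos_sin(theta, n_iter):
    """Approximate (cos(theta), sin(theta)) with n_iter CORDIC micro-rotations.
    Valid for theta in [-pi/2, pi/2]; fewer iterations -> cheaper, less accurate."""
    x, y, z = 1.0, 0.0, theta
    gain = 1.0
    for i in range(n_iter):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)           # remaining angle to rotate
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return x / gain, y / gain
```

With 6 iterations the error on cos(0.5) is on the order of 1e-2, while 20 iterations bring it below 1e-5, illustrating the iteration-count knob exploited by approximate computing.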
The choice and management of arithmetic paradigms also allow important energy savings while relaxing accuracy. In this thesis, three main paradigms are explored:
• customizing floating-point operators,
• reducing bit-width and complexity using fixed-point arithmetic, which can be associated with quantization theory,
• and approximate integer operators, which perform arithmetic operations using inaccurate functions.
Floating-point arithmetic is often associated with high-precision computing, with important time and energy overheads compared to fixed-point. Indeed, floating-point is today the most used representation of real numbers because of its high dynamic range and high precision at any amplitude scale. However, because of operations more complex than in integer arithmetic and because of the complexity of the many particular cases handled by the IEEE 754 standard ["IEEE standard for floating-point arithmetic"], fixed-point is nearly always selected when low energy per operation is aimed at. In this thesis, we consider simplified small-bit-width floating-point arithmetic implementations leading to better energy efficiency. Fixed-point arithmetic is the most classical paradigm when it comes to low-energy computing. In this thesis, it is used as a reference for comparisons with the other paradigms. A model for fixed-point error estimation leveraging Power Spectral Density (PSD) is also proposed. Finally, approximate arithmetic operators using modified addition and multiplication functions are considered. Many implementations of these operators were published this past decade, but they have never been the object of a complete comparative study. After presenting the literature about floating-point and fixed-point arithmetic, a study of state-of-the-art approximate operators is proposed. Then, models for error propagation for fixed-point and approximate operators are described and evaluated.
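As background for the fixed-point error models discussed here, uniform quantization and the classic Δ²/12 noise-power model can be sketched as follows (the number of fractional bits and the input distribution are illustrative choices, not the thesis's setup):

```python
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

random.seed(0)
frac_bits = 8
samples = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
# Empirical quantization-noise power vs. the uniform-noise model delta^2 / 12
measured = sum((x - quantize(x, frac_bits)) ** 2 for x in samples) / len(samples)
model = (2.0 ** -frac_bits) ** 2 / 12.0
```

On such a run the measured noise power matches the Δ²/12 model within a few percent, which is the starting point of modular quantization-noise propagation.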
Finally, comparative studies are presented between approximate operators and fixed-point on one side, and between fixed-point and floating-point on the other side, leveraging classical signal processing applications. More details on the organization of this document are given in the next section. Thesis Organization Chapter 1 Approximate computing in general is presented, followed by a deeper study of the different existing computing arithmetics. After a presentation of the classical floating-point and fixed-point arithmetic paradigms, state-of-the-art approximate integer adders and multipliers are presented to give an overview of the many existing techniques to lower energy by introducing inaccuracy in computations. Chapter 2 A novel technique to estimate the impact of quantization across large fixed-point systems is presented, leveraging the noise Power Spectral Density (PSD). The benefits of the method compared to others are then demonstrated on signal processing applications. Chapter 3 A novel technique to estimate the error of approximate integer operators propagated across a system is presented in this chapter. This technique, based on the Bitwise Error Rate (BWER), first uses training by simulation to build a model, which is then used for fast propagation. This analytical technique is fast and requires less memory than other similar existing techniques. Then, a model for the reproduction of Voltage Over-Scaling (VOS) errors in exact integer arithmetic operators using pseudo-simulation on approximate operators is presented. Chapter 4 A comparative study of fixed-point and approximate arithmetic is presented in Chapter 4. Both paradigms are first compared in their stand-alone version, and then on several signal processing applications using relevant metrics. The study is performed using the first version of our open-source operator characterization framework ApxPerf and our approximate operator library apx_fixed.
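To give intuition for the positional bit-level error metrics evoked in the Chapter 3 summary above, here is a toy model of ours (not the thesis's actual BWER technique): if output bit i of an operator flips independently with probability bwer[i], and flips are symmetric, the expected error power is a simple weighted sum.

```python
def error_power_from_bwer(bwer):
    """Toy model: expected squared error when output bit i flips independently
    with probability bwer[i], assuming each flip is +/- 2^i with equal chance,
    so cross terms vanish and E[e^2] = sum_i bwer[i] * 4^i."""
    return sum(p * 4.0 ** i for i, p in enumerate(bwer))
```

For example, a certain flip of bit 2 alone contributes an error power of 4² = 16, showing why errors on high-significance bits dominate the error power of approximate operators.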
Chapter 5 Using a second version of our framework ApxPerf embedding a High-Level Synthesis (HLS) frontend, and our custom floating-point library ct_float, a comparative study of fixed-point and small-width custom floating-point arithmetic is performed. First, the hardware performance and accuracy of both operators are compared in their stand-alone version. Then, this comparison is carried out on K-means clustering and Fast Fourier Transform (FFT) applications. Chapter 1 Trading Accuracy for Performance in Computing Systems In this chapter, various methods to trade accuracy for performance are first listed in Section 1.1. Then, the study is centered on the different existing representations of numbers and the architectures of the arithmetic operators using them. In Section 1.2, floating-point arithmetic is developed. Then, Section 1.3 presents fixed-point arithmetic. Finally, a study of state-of-the-art approximate architectures of integer adders and multipliers is presented in Section 1.4. Various Methods to Trade Accuracy for Performance In this section, the main methods to trade accuracy for performance are presented. First, VOS is discussed. Then, existing algorithm-level transformations are presented. Finally, approximate arithmetic is introduced, to be further developed in Sections 1.2, 1.3, and 1.4 as the central element of this thesis. Voltage Overscaling The power consumption of a transistor in a synchronous circuit is linear with the frequency and proportional to the square of the applied voltage. For a given computational load, decreasing the frequency also increases the computing time, so the energy remains the same. Thus, it is important to mostly exploit the voltage to save as much energy as possible. Nevertheless, decreasing the voltage implies more instability in the transitions of the transistor, and this is why, in a large majority of systems, the voltage is set above a certain threshold which ensures the stability of the system.
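The quadratic voltage dependence just described follows the standard first-order CMOS dynamic-power model P ≈ α·C·V²·f; a quick sketch of the resulting savings (all numeric values are illustrative, not measurements from the thesis):

```python
def dynamic_power(c_eff, vdd, freq, activity=1.0):
    """First-order CMOS dynamic power: P = activity * C_eff * Vdd^2 * f."""
    return activity * c_eff * vdd ** 2 * freq

nominal = dynamic_power(1e-9, 1.0, 1e9)      # 1 nF switched at 1 GHz, 1.0 V
scaled = dynamic_power(1e-9, 0.8, 1e9)       # same circuit scaled to 0.8 V
saving = 1.0 - scaled / nominal              # 36% power saving for -20% Vdd
```

This quadratic payoff is what makes voltage (rather than frequency) scaling, and in particular VOS, so attractive despite its error risk.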
Lowering the voltage under this threshold can cause the output of the transistor to be stuck at 0 or 1, compromising the integrity of the realized function. One of the main issues of VOS is process variability. Indeed, two instances A and B of the same silicon-based chip are not able to handle the exact same voltage before breakdown, a given transistor in A being possibly weaker than the same one in B, mostly because of Random Dopant Fluctuations (RDF), which are a major issue in nanometer-scale Very Large Scale Integration (VLSI). However, with the important possible energy gains brought by VOS, its mastering is an important stake which is widely explored [Kuroda et al.; Le Sueur et al.]. Low-leakage technologies like Fully Depleted Silicon On Insulator (FDSOI) greatly reduce the impact of RDF on near-threshold computing variability. Despite technology improvements, near-threshold and sub-threshold computing needs error-correcting circuits, which come with an area, energy and delay overhead that must remain below the savings. In [Ernst et al.], a method called Razor is proposed to monitor the circuit error rate at low cost in order to tune its voltage and obtain an acceptable failure rate. Therefore, the main challenge with VOS is its uncertainty, the absence of a general rule which would make all instances of an electronic chip equal with respect to voltage scaling; this makes manufacturers generally turn their backs on VOS, preferring to keep a comfortable margin above the threshold.
In the next Subsections 1.1.2 and 1.1.3, accuracy is traded for performance in a reproducible way, with results which are independent from the hardware and totally dependent on the programmer's/designer's will, and thus more likely to be used at industrial scale in the future. Algorithmic Approximations A more secure way than VOS to save energy is algorithmic approximation. Indeed, modifying algorithm implementations so that they deliver their results in fewer cycles, or use costly functions less intensively, potentially leads to important savings. First, the approximable parts of the code or algorithm must be identified, i.e. the parts where the gains can be maximized despite a moderate impact on the output quality. Various methods for the identification of these approximable parts are proposed in [Roy et al.; Esmaeilzadeh et al.; Grigorian et al.]. These methods are mostly perturbation-based, meaning errors are inserted in the code and the output is simulated to evaluate the impact of the inserted error. As all simulation-based methods, they may not scale to large algorithms and only consider a limited number of perturbation types. Once the approximable parts are identified, by an automated method or manually, different techniques can be applied depending on the kind of algorithm part to approximate (computation loops, mathematical functions, etc.). One of the main techniques for reducing the cost of algorithm computations is loop perforation. Indeed, most signal processing algorithms consist of quite simple functions repeated a high number of times, e.g. for Monte Carlo simulations, search-space enumeration or iterative refinement.
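Loop perforation as just described can be sketched on a Monte Carlo example; the perforation rate below is an arbitrary illustration:

```python
import random

def monte_carlo_pi(n_iter, skip_every=1):
    """Estimate pi by dart throwing; with skip_every > 1, only 1 out of
    skip_every iterations runs (loop perforation), trading estimate quality
    for fewer computations."""
    hits = done = 0
    for i in range(n_iter):
        if i % skip_every:      # perforated iterations are simply skipped
            continue
        x, y = random.random(), random.random()
        hits += (x * x + y * y) <= 1.0
        done += 1
    return 4.0 * hits / done

random.seed(1)
full = monte_carlo_pi(100_000)               # all iterations
perforated = monte_carlo_pi(100_000, 4)      # ~4x fewer computations
```

Both estimates remain close to π, the perforated one with a somewhat larger variance, which is exactly the good-enough-result trade-off loop perforation exploits.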
In these three cases, a subset of loop iterations can simply be skipped while still returning good enough results [Sidiroglou-Douskos et al.]. Using complex mathematical functions such as exponentials, logarithms, square roots or trigonometric functions is very costly in area, time and energy compared to basic arithmetic operations. Indeed, accurate implementations may require large tables and long addition- and multiplication-based iterative refinement. Therefore, in applications using these mathematical functions intensively, trading accuracy for performance can be a source of important savings. A first classical way to approximate functions is polynomial approximation, using tabulated polynomials representing the function over different ranges. In the context of approximate computing, iterative-refinement-based mathematical approximations are also very interesting since they allow the loop perforation discussed previously. Reducing the number of iterations can be applied to CORDIC algorithms, very common for trigonometric function approximations [Volder]. Several efficient approximate mathematical functions were proposed for specialized hardware such as FPGAs [Pandey et al.] or Single Instruction Multiple Data (SIMD) hardware [Cristiano et al.]. Approximate Basic Arithmetic The level of approximation we focus on in this thesis is the approximation of the basic arithmetic operations, which are addition, subtraction and multiplication.
These operations are the base of most functions and algorithms in classical computing, and also the most energy-costly compared to other basic CPU functions such as register shifts or binary logical operators, meaning that a gain in performance or energy on these operations automatically induces an important benefit for the whole application. They are also statistically intensively used in general-purpose CPU computing: in ARM processors, ADD and SUB are the most used instructions after LOAD and STORE. The approximation of these basic operators is explored along two different angles in this thesis. On the one hand, approximate representations of numbers are discussed, more precisely the approximate representation of real numbers in computer arithmetic, using floating-point and fixed-point arithmetic. On the other hand, approximate computing using modified integer addition, subtraction and multiplication functions, as used in fixed-point arithmetic, is explored and compared to existing methods. Approximations using floating-point arithmetic are discussed in Section 1.2, approximations using fixed-point arithmetic in Section 1.3 and approximate integer operators are presented in Section 1.4. Relaxing Accuracy Using Floating-Point Arithmetic Floating-Point (FlP) representation is today the main representation for real numbers in computing, thanks to a potentially high dynamic range, entirely managed by the hardware. However, this ease of use comes with relatively important area, delay and energy penalties. FlP representation is presented in Section 1.2.1. Then, FlP addition/subtraction and multiplication are described in Section 1.2.2. Ways to relax accuracy for performance in FlP arithmetic are then discussed in Section 1.2.3. Floating-Point Representation for Real Numbers In computer arithmetic, the representation of real numbers is a major stake.
Indeed, most powerful algorithms are based on continuous mathematics, and their accuracy and stability are directly related to the accuracy of the number representation they use. However, in classical computing, infinite accuracy is not possible since all representations are contained in a finite bit width. To address this issue, it is important to have a number representation that is as accurate for very small numbers as for very large numbers. Indeed, large and small numbers are dual, since multiplying (resp. dividing) a number by another large number is equivalent to dividing (resp. multiplying) by a small number. Giving the same relative accuracy to numbers whatever their amplitude can only be achieved by giving the same impact to their most significant digit. In decimal representation, this is achieved with scientific notation, representing the significant value of the number in the range [1, 10[, weighted by 10 raised to a certain power. FlP representation in radix 2 is the counterpart of scientific notation for binary computing. The point in the representation of the number is "floating", so the representative value of the number (or mantissa) represents a number in [1, 2[, multiplied by a power of 2. Given an M-bit mantissa, a signed integer exponent of value e (often represented in biased representation) on E bits, and a sign bit s, any real number between limits defined by M and E can be represented, with a relative step depending on M, by: (−1)^s × m_{M−1}.m_{M−2}m_{M−3}···m_1m_0 × 2^e. With this representation, any number under this format can be represented using M + E + 1 bits, as shown in Figure 1.1. A particularity of binary FlP with a mantissa represented in [1, 2[ is that its Most Significant Bit (MSB) can only be 1. Knowing that, the MSB can be left implicit, freeing space for one more Least Significant Bit (LSB) instead.
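The sign/exponent/mantissa decomposition above, including the implicit leading 1, can be made concrete with a small decoder. This is an illustrative sketch (the function name is ours): it handles only normal numbers, ignoring zero, subnormals and infinities on purpose.

```python
import struct

def decode_float(bits, M, E, bias=None):
    """Decode a (1+E+M)-bit pattern as (-1)^s * 1.m * 2^(e - bias).

    M: mantissa width (implicit leading 1 not stored), E: exponent width.
    Normal numbers only: no zero/subnormal/infinity handling, for clarity.
    """
    if bias is None:
        bias = (1 << (E - 1)) - 1            # IEEE-style exponent bias
    mant_field = bits & ((1 << M) - 1)
    exp_field = (bits >> M) & ((1 << E) - 1)
    sign = (bits >> (M + E)) & 1
    mantissa = 1.0 + mant_field / (1 << M)   # implicit MSB restored: 1.m
    return (-1) ** sign * mantissa * 2.0 ** (exp_field - bias)

# Cross-check against IEEE 754 binary32 (M=23, E=8): pi's single-precision
# bit pattern decodes to exactly the value the hardware would produce.
ieee = struct.unpack('>f', (0x40490FDB).to_bytes(4, 'big'))[0]
assert decode_float(0x40490FDB, 23, 8) == ieee
```

The same decoder works for any custom (M, E) pair, which is exactly the degree of freedom exploited later when mantissa and exponent widths are reduced.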
Nevertheless, automatically keeping the floating point at the right position along computations requires an important hardware overhead, as discussed in Section 1.2.2. Managing subnormal numbers (numbers between 0 and the smallest positive representable normal value), the values 0 and infinity also represents an overhead. Despite this additional cost, FlP representation is today established as the standard for real number representation. Indeed, besides its high accuracy and high dynamic range, it has the huge advantage of leaving the whole management of the representation to the hardware instead of to the software designer, significantly diminishing development and testing time. This domination is sustained by the IEEE 754 standard, last revised in 2008 [START_REF]Ieee standard for floating-point arithmetic[END_REF], which sets the conventions for the possible floating-point number representations, subnormal number management and the different cases to be handled, ensuring a high portability of programs. However, such a strict normalization implies: • an important overhead for raising flags for the many special cases, and an even more important overhead for the management of these special cases (hardware or software overhead), • and a low flexibility in the width of the mantissa and exponent, which have to respect the rules of Table 1.1 for 32, 64 and 128-bit implementations. As a first conclusion, the constraints imposed on FlP representation by IEEE 754 normalization imply a high cost in terms of hardware resources, which strongly counterbalances its accuracy benefits. However, as discussed in Section 1.2.3, taking liberties with FlP can significantly increase the accuracy/cost ratio. As discussed later in Section 1.3, integer addition/subtraction is the simplest arithmetic operator. However, in FlP arithmetic, it suffers from a high control overhead. Indeed, several steps are needed to perform the FlP addition: • First, the difference of the exponents is computed.
• If the difference of the exponents is superior to the mantissa width, the biggest number is directly issued (this is the far path of the operator - one of the numbers is too small to impact the addition). • Else, if the difference of the exponents is inferior to the mantissa width, one of the input mantissas must be shifted so that bits of the same significance face each other. This is the close path, as opposed to the far path. • The addition of the mantissas is performed. • Then, rounding is performed on the mantissa, depending on the dropped bits and the rounding mode selected. • Special cases are then handled (zero, infinity, subnormal results), as well as the output sign. • Then, the mantissa is shifted so it represents a value in [1, 2[, and the exponent is modified according to the number of shifts. The FlP addition principle is illustrated in Figure 1.2, taken from [START_REF] Muller | Handbook of floating-point arithmetic[END_REF]. More control can be needed, depending on the implementation of the FlP adder and the specificities of the FlP representation. For instance, management of the implicit 1 implies adding 1s to the mantissas before addition, and an important overhead can be dedicated to exception handling. For cost comparison, Table 1.2 shows the performance of 32-bit and 64-bit FlP addition, using the ac_float type from Mentor Graphics, and of 32-bit and 64-bit integer addition using the ac_int type, generated using the HLS and power estimation process of the second version of the Apx-Perf framework described in Section 4.1, targeting 28nm FDSOI with a 200 MHz clock and using 10,000 uniform input samples. FlP addition power was estimated activating the close path 50% of the time. These results clearly show the overhead of FlP addition. For the 32-bit version, FlP addition is 3.5× larger, 2.3× slower and costs 27× more energy than integer addition. For the 64-bit version, FlP addition is 3.9× larger, 1.9× slower and costs 30× more energy.
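The far-path/close-path structure enumerated above can be captured in a few lines of functional model. This sketch is ours (names and simplifications included): it adds two positive FlP values given as (exponent, mantissa) pairs with the implicit 1 made explicit, truncates dropped bits, and ignores signs, zero and exponent overflow.

```python
def flp_add(a, b, M=8):
    """Close/far-path sketch of FlP addition on positive inputs.

    Numbers are (exponent, mantissa) pairs; the mantissa is an integer in
    [2^M, 2^(M+1)) standing for 1.m. Dropped bits are truncated.
    """
    (ea, ma), (eb, mb) = a, b
    if ea < eb:                       # make 'a' the larger-exponent input
        (ea, ma), (eb, mb) = (eb, mb), (ea, ma)
    d = ea - eb                       # exponent difference
    if d > M + 1:                     # far path: 'b' cannot affect the sum
        return (ea, ma)
    m = ma + (mb >> d)                # close path: align, then add
    if m >= 1 << (M + 1):             # renormalize mantissa into [1, 2[
        m >>= 1
        ea += 1
    return (ea, m)

def flp_value(x, M=8):
    e, m = x
    return m / (1 << M) * 2.0 ** e
```

Even this toy version exhibits the control overhead described above: the alignment shift, the path selection and the renormalization all surround a single integer addition.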
The overhead of FlP addition compared to integer addition seems to be roughly linear with the size of the operator, and the number representation highly impacts the performance. However, it is shown in Chapter 5 that this high difference shrinks when the impact of accuracy is taken into account. FlP multiplication is less complicated than addition, as only a low control overhead is necessary to perform the operation. Input mantissas are multiplied using a classical integer multiplier (see Section 1.3), while exponents are simply added. At worst, a final +1 on the exponent can be needed, depending on the result of the mantissa multiplication. The basic architecture of a FlP multiplier is described in Figure 1.3, taken from [START_REF] Muller | Handbook of floating-point arithmetic[END_REF]. Figure 1.3 -Basic floating-point multiplication [START_REF] Muller | Handbook of floating-point arithmetic[END_REF] Obviously, all classical hardware overheads needed by FlP representation are necessary (rounding logic, normalization, management of particular cases). Table 1.3 shows the difference between 32-bit and 64-bit floating-point multiplication using Mentor Graphics ac_float and 32-bit and 64-bit fixed-width integer multiplication using the ac_int data type, with the same experimental setup as discussed before for the addition. A first observation on the area shows that the integer multiplication is 48% larger than the FlP version for the 32-bit version, and 37% larger for the 64-bit version. This difference is due to the smaller size of the integer multiplier in the FlP multiplication, since it is limited to the size of the mantissa (24 bits for the 32-bit version, 53 bits for the 64-bit version). Despite the management of the exponent, the overhead is not large enough to produce a larger operator.
However, if the area overhead is not very large, 32-bit FlP multiplication energy is 11× higher than the integer multiplication energy, while the 64-bit version is 37× more energy-costly. It is interesting to note that the difference of energy consumption between addition and multiplication is much more important for integer operators than for FlP. For the 32-bit version for instance, integer multiplication costs 7× more energy than integer addition, while this factor is only 1.4× for the 32-bit FlP multiplier compared to the 32-bit FlP adder. Therefore, using multiplication in FlP computing is relatively less penalizing than integer multiplication, typically used in Fixed-Point (FxP) arithmetic. Potential for Relaxing Accuracy in Floating-Point Arithmetic There are several opportunities to relax accuracy in floating-point arithmetic to increase performance. The main one is simply to use word-lengths as small as possible for the mantissa and the exponent. With the mantissa normalized in [1, 2[, reducing the word-length corresponds to pruning the LSBs, which comes with no overhead. Eventually, rounding can be performed at higher cost. For the exponent, the transformation is more complicated if it is represented with a bias. Indeed, if e is the exponent width, an implicit bias of 2^(e−1) − 1 applies to the exponent in classical exponent representation. Therefore, reducing the exponent to a width e′ means that a new bias must be applied. The original exponent must be added 2^(e′−1) − 2^(e−1) (< 0) before pruning MSBs, implying hardware overhead at conversion. The original exponent must represent a value in [−2^(e′−1) + 1, 2^(e′−1)] to avoid overflow. In practice, it is better to keep a constant exponent width to avoid useless overhead and conversion overflows, which would have a huge impact on the quality of the computations even if they are scarce. A second way to improve computation at inferior cost is to play with the implicit bias of the exponent.
Indeed, increasing the exponent width increases the dynamic range towards infinity, but also the accuracy towards zero. Thus, if the absolute maximum values to be represented are known, the bias can be chosen so it is just large enough to represent these values. This way, the exponent gives more accuracy to very small values, increasing accuracy. However, using a custom bias means that the arithmetic operators (addition and multiplication) must consider this bias in the computation of the resulting exponent, and the optimal bias along computation may diverge to −∞. To avoid this, if the original 2^(e−1) − 1 exponent bias is kept, an exponent bias can be simulated by biasing the exponents of the inputs of each or some computations using shifting. For the addition, biasing both inputs by adding 2^(e_in) to the exponent implies that the output will also be represented biased by 2^(e_in). For the multiplication, the output will be biased by 2^(e_in+1). Keeping an implicit track of the bias along computations allows knowing the bias of any algorithm output, and eventually performing a final rescaling of the outputs. Finally, accuracy can be relaxed in the integer operators composing FlP operators, i.e. the integer adder adding mantissas in the FlP addition close path, and the integer multiplier in the FlP multiplication. Indeed, they can be replaced by the approximate adders and multipliers described in Section 1.4 to improve performance by relaxing accuracy. However, as most of the performance cost lies in control hardware rather than in the integer arithmetic part, the impact on accuracy would be strong for a very small performance benefit. The same approximation could be applied to exponent management, but the impact of approximate arithmetic on accuracy would be huge and is strongly discouraged. More state-of-the-art work on FlP arithmetic is developed in Section 5.1, more particularly on HLS using FlP custom computing cores.
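The mantissa word-length reduction discussed above (pruning LSBs, with a relative error bounded by 2^-M) can be illustrated with a small sketch, ours and assuming non-negative inputs for simplicity:

```python
import math

def round_mantissa(x, M):
    """Keep only M fractional mantissa bits of non-negative x (LSB
    truncation). The relative error is bounded by 2^-M."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2^e with m in [0.5, 1)
    scaled = int(m * (1 << (M + 1)))      # M+1 significant bits kept
    return scaled / (1 << (M + 1)) * 2.0 ** e
```

Since truncation only drops bits, the result never exceeds the input, mirroring the zero-overhead LSB pruning described in the text.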
Relaxing Accuracy Using Fixed-Point Arithmetic Aside from FlP, a classical representation for real numbers is Fixed-Point (FxP) representation. This Section presents generalities about FxP representation in Section 1.3.1, then presents the classical models for quantization noise in Section 1.3.2. Finally, hardware implementations of addition and multiplication are respectively discussed in Sections 1.3.3 and 1.3.4. Fixed-Point Representation for Real Numbers In fixed-point representation, an integer number represents a real number multiplied by a factor depending on the implicit position of the point in the representation. A real number x is represented by the FxP number x_FxP represented on n bits with d bits of fractional part by the following equation: x_FxP = |x × 2^d|_r × 2^(−d), where |•|_r is a rounding operator, which can be implemented using several functions such as the ones of Section 1.3.2. The representation of a 12-bit two's complement signed FxP number with a 4-bit integer part is depicted in Figure 1.4, with bit weights ranging from −2^3 down to 2^(−8), split into integer and fractional parts. A two's complement n-bit binary number x_bin = {x_i}, i ∈ [0, n−1], represents the integer number x_int in the following way: x_int = x_{n−1} × (−2^(n−1)) + Σ_{i=0..n−2} x_i × 2^i. Therefore, the two's complement n-bit FxP number represented by x_FxP with a d-bit fractional part is worth: x_FxP = x_int × 2^(−d) = x_{n−1} × (−2^(n−d−1)) + Σ_{i=0..n−2} x_i × 2^(i−d). Quantization and Rounding Representing a real number in FxP is equivalent to transforming a number represented on an infinity of bits into a finite word-length. This reduction is generally referred to as quantization of a continuous-amplitude signal. Using a FxP representation with a d-bit fractional part implies that the step between two representable values is equal to q = 2^(−d), referred to as the quantization step.
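The x_FxP = |x × 2^d|_r × 2^(-d) mapping above translates directly into a conversion pair. This is an illustrative sketch (names and the saturation choice are ours): it rounds to nearest and saturates instead of wrapping on overflow.

```python
def to_fxp(x, n, d):
    """Quantize real x to an n-bit two's-complement FxP integer with a
    d-bit fractional part: round to nearest, saturate on overflow."""
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    v = int(round(x * (1 << d)))     # |x * 2^d| rounded to an integer
    return max(lo, min(hi, v))

def from_fxp(v, d):
    """Real value represented by integer v with a d-bit fractional part."""
    return v * 2.0 ** -d
```

A round trip through (n, d) = (12, 8) keeps the value within q/2 = 2^-9 of the original, which is the quantization step behaviour described above.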
The process of quantization results in a quantization error defined by: e = x_q − x, (1.1) where x is the original number to be quantized and x_q the resulting quantized number. Quantization of a continuous-amplitude signal is performed differently depending on the rounding mode chosen. Several classical rounding modes are possible: 1. Rounding towards −∞ (RD). The infinity of dropped bits is not taken into consideration. This results in a negative quantization error e and is equivalent to truncation. 2. Rounding towards +∞ (RU). Again, the infinity of dropped bits is not taken into consideration, and the nearest superior representable value is selected. This is equivalent to adding q to the result of the truncation process. 3. Rounding towards 0 (RZ). If x is negative, the number is rounded towards +∞; else, it is rounded towards −∞. 4. Rounding to Nearest (RN). The nearest representable value is selected: if the MSB of the dropped bits is 0, then x_q is obtained by rounding towards −∞ (truncation); else x_q is obtained by rounding towards +∞. The special value where the MSB of the dropped part is 1 and all the following bits are 0 can lead either to rounding up or down, depending on the implementation. Choosing one case or the other does not change anything in the error distribution in the case of a continuous-amplitude signal; this is no longer true in the discrete case, as discussed below. The quantization errors produced by rounding towards ±∞ and to nearest discussed above are depicted in Figure 1.5. The RZ method has a quantization error distribution that varies with the sign of the value to be rounded.
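The four rounding modes can be modelled side by side; this sketch is ours and maps each mode onto the corresponding Python floor/ceil/trunc primitive (RN with ties rounded up, one of the two implementation choices mentioned above):

```python
import math

def quantize(x, d, mode):
    """Quantize x with step q = 2^-d under the four classical modes."""
    s = x * (1 << d)
    if mode == "RD":                  # towards -inf: truncation of bits
        v = math.floor(s)
    elif mode == "RU":                # towards +inf
        v = math.ceil(s)
    elif mode == "RZ":                # towards zero
        v = math.trunc(s)
    else:                             # "RN": to nearest, ties rounded up
        v = math.floor(s + 0.5)
    return v * 2.0 ** -d
```

Checking e = x_q - x on a few values confirms the error ranges given below: e in [-q, 0] for RD, [0, q] for RU, and [-q/2, q/2] for RN.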
As described in [START_REF] Widrow | A study of rough amplitude quantization by means of nyquist sampling theory[END_REF] and [START_REF] Widrow | Statistical analysis of amplitude-quantized sampled-data systems[END_REF], the additional error due to the quantization of a continuous signal is uniformly distributed between its limits ([−q, 0] for truncation, [0, q] for rounding towards +∞ and [−q/2, q/2] for rounding to nearest) and statistically independent of the quantized signal. Therefore, the mean and variance of the error are perfectly known in these cases and are indexed in Table 1.4. Thanks to its independence from the signal, the quantization error can be seen as an additive uniformly distributed white noise q, as depicted in Figure 1.6. This representation of quantization error is the base of the FxP representation error analysis discussed in the next Chapter. Figure 1.6 -Representation of FxP quantization error as an additive noise The previous paragraphs describe the properties of the quantization of a continuous signal. However, in FxP arithmetic, it is necessary to reduce the bit width along computations to avoid a substantial growth of the necessary resources. Indeed, as discussed in Section 1.3.4, an integer multiplication needs to produce an output whose width is equal to the sum of its input widths to get a perfectly accurate computation. However, the LSBs of the result are often not significant enough to be kept, and so a reduction of data width must be performed to save area, energy and time. Therefore, it is often necessary to reduce a FxP number with a d_1-bit fractional part to a smaller d_2-bit fractional part, dropping d_b = d_1 − d_2 bits. The error distribution is still uniform for the RD, RU and RN rounding methods, but has a different bias. Moreover, for the RN method, this bias is not 0, and depends on the direction of rounding chosen when the MSB of the dropped part is 1 and all the other dropped bits are 0. This can lead to divergences when accumulating a large number of computations.
To overcome this possible deviation, Convergent Rounding to Nearest (CRN) was proposed in [START_REF] Lapsley | DSP Processor Fundamentals: Architectures and Features[END_REF]. When the special case cited above is met, the rounding is alternately performed towards +∞ and towards −∞. This way, the quantization error distribution gets centered to zero. As the highest error occurs for this in-between case, distributing this error using alternately the RD and RU paradigms balances the error, lowering by half the highest negative error and moving its impact to a new spike, removing the bias as shown on Figure 1.7b. However, this compensation slightly increases the variance of the quantization error. The values of the mean µ_e and variance σ²_e of the RD, RU, RN and CRN rounding methods, in both the continuous and discrete cases, are summarized in the corresponding table; in particular, RN has µ_e = 0 and σ²_e = q²/12 in the continuous case, and µ_e = (q/2) × 2^(−d_b) and σ²_e = (q²/12)(1 − 2^(−2d_b)) in the discrete case, while CRN has µ_e = 0 in both cases, at the price of a slightly larger discrete-case variance. When a FxP number is reduced, the dropped bits are summarized by two quantities: • the round bit, which is the value of the bit of index d_1 − d_2 − 1 of x_1, • and the sticky bit, which is a logical OR applied to the bits {0, ..., d_1 − d_2 − 2} of x_1. The extraction of the round and sticky bits is illustrated in Figure 1.8. The horizontal stripes in x_1 correspond to the round bit, and the tilted stripes to the bits involved in the computation of the sticky bit. Here, both are worth 1, and the rounding logic outputs 1, which can correspond to RU, RN, or CRN. The possible functions performed by the different rounding functions which can be implemented in the rounding logic block of Figure 1.8 are listed in Table 1.5. It is important to notice that for the RD method, the values of the round and sticky bits have no influence on the rounding direction. For the RN method, if the default rounding direction when the round/sticky bits are 1/0 is towards +∞, then the value of the sticky bit does not influence the rounded result; if the default is towards −∞, the sticky bit has to be considered.
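The round/sticky extraction just described can be sketched in a few lines; this illustrative model (names are ours) drops the b = d_1 - d_2 LSBs of an integer and also shows the RN reduction that only needs the round bit (ties towards +∞):

```python
def round_sticky(x1, b):
    """Round and sticky bits when the b LSBs of integer x1 are dropped."""
    round_bit = (x1 >> (b - 1)) & 1                 # bit of index b-1
    sticky = int(x1 & ((1 << (b - 1)) - 1) != 0)    # OR of bits 0 .. b-2
    return round_bit, sticky

def reduce_rn(x1, b):
    """Drop b bits with round-to-nearest, ties towards +inf: the sticky
    bit is unused, only the round bit is added back."""
    return (x1 >> b) + ((x1 >> (b - 1)) & 1)
```

The second function is precisely the simplification mentioned in the text: with ties defaulting upwards, the OR tree computing the sticky bit can be removed entirely.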
Therefore, some hardware simplifications can be performed for the RD method, and for the RN method when ties default towards +∞ (where the sticky bit is unused), by just dropping the unused bits. The rounding directions are the following: round/sticky = 0/0: RD −, RU −, RN −, CRN −; round/sticky = 0/1: RD −, RU +, RN −, CRN −; round/sticky = 1/0: RD −, RU +, RN always − or always +, CRN alternately − and +; round/sticky = 1/1: RD −, RU +, RN +, CRN +. Table 1.5 -Rounding direction depending on the value of round and sticky bits Addition and Subtraction in Fixed-Point Representation The addition/subtraction in FxP representation is much simpler than that of FlP described in Section 1.2.2. Indeed, FxP arithmetic is entirely based on integer arithmetic. Adding two FxP numbers can be performed in 3 steps: 1. Aligning the points of the two numbers, shifting one of them (software style) or driving their bits to the right input in the integer adder (hardware design style). 2. Adding the inputs using an integer adder. 3. Quantizing the output using the methods of Section 1.3.2. In this section, we will consider the addition (respectively subtraction) of two signed FxP numbers x and y with total bit widths of resp. n_x and n_y, fractional part widths of d_x and d_y and integer part widths of m_x = n_x − d_x and m_y = n_y − d_y. In the rest of this chapter, an n_x-bit FxP number x with an m_x-bit integer part will be noted x(n_x, m_x). To avoid overflows or underflows, the output z(n_z, m_z) of the addition/subtraction of x and y must respect the following equation: m_z = max(m_x, m_y) + 1. (1.2) Moreover, an accurate addition/subtraction must also respect: d_z = max(d_x, d_y). (1.3) The final process for the FxP addition of x(6, 2) and y(8, 3) returning z(9, 4), then quantized to z_q(6, 4), is depicted in Figure 1.9. For the subtraction x − y, the classical way to operate is to compute y′ = −y before performing the addition x + y′. In two's complement representation, this is equivalent to performing y′ = ȳ + 1, where ȳ is the binary inverse of y.
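The align-then-add steps (and the negation-based subtraction) can be modelled functionally; this sketch is ours, operating on Python integers standing for two's-complement FxP values:

```python
def fxp_add(x, dx, y, dy):
    """Align-and-add two FxP integers x (dx fractional bits) and
    y (dy fractional bits).

    Returns (z, dz): the exact sum with dz = max(dx, dy) fractional
    bits; the caller quantizes afterwards if a narrower result is needed.
    Subtraction is obtained by passing the negated second operand.
    """
    dz = max(dx, dy)
    z = (x << (dz - dx)) + (y << (dz - dy))   # align points, then add
    return z, dz
```

For instance, 1.25 (x = 5, dx = 2) plus 2.5 (y = 5, dy = 1) aligns to dz = 2 and yields z = 15, i.e. 3.75; passing y = -5 instead performs the subtraction.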
The inversion is fast and requires little circuitry, and adding 1 can be performed during the addition step of Figure 1.9 (after shifting) to avoid performing one more addition for the negation. As a first conclusion about FxP addition/subtraction, the cost of the operation mostly depends on two parameters: the cost of shifting the input(s), and the efficiency of the integer addition. From this point, we will focus on the integer addition, which represents the majority of this cost. The integer addition can be built from the composition of 1-bit additions, each taking three inputs (the input bits x_i, y_i and the input carry c_i) and returning two outputs (the output sum bit s_i and the output carry c_{i+1}). This function is realized by the Full Adder (FA) function of Figure 1.10, whose truth table is the following (x_i y_i c_i → c_{i+1} s_i): 000 → 00, 001 → 01, 010 → 01, 011 → 10, 100 → 01, 101 → 10, 110 → 10, 111 → 11. The simplest addition structure is the Ripple Carry Adder (RCA), built by the direct composition of FAs. Each FA of rank i takes the input bits and the input carry of rank i. It returns the output bit of rank i and the output carry of rank i + 1, which is connected to the following full adder, resulting in the structure of Figure 1.11. This is theoretically the smallest possible area for an addition, with a complexity of O(n). However, this small area is counterbalanced by a high delay, also in O(n), while the theoretical optimum is O(log n) [START_REF] Koren | Computer Arithmetic Algorithms[END_REF]. Therefore, the RCA is only implemented when the area budget is critical. Figure 1.9 -Fixed-point addition process of x(6, 2) and y(8, 3) returning z(9, 4) quantized to z_q(6, 4) Figure 1.10 -One-bit addition function -Full adder (or compressor 3:2) A classical improvement derived from the RCA is the Carry-Select Adder (CSLA).
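Before moving to faster structures, the FA and RCA just described can be modelled functionally (an illustrative sketch of ours, on LSB-first bit lists):

```python
def full_adder(x, y, c):
    """One-bit full adder: returns (sum, carry-out) per the truth table."""
    s = x ^ y ^ c
    c_out = (x & y) | (c & (x ^ y))
    return s, c_out

def ripple_carry_add(a_bits, b_bits):
    """RCA on equal-length LSB-first bit lists: the carry ripples through
    one full adder per bit position (O(n) area, O(n) delay)."""
    c, out = 0, []
    for x, y in zip(a_bits, b_bits):
        s, c = full_adder(x, y, c)
        out.append(s)
    return out, c
```

The sequential loop mirrors the hardware critical path: each full adder must wait for the carry of the previous one, which is exactly the O(n) delay discussed above.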
The CSLA is composed of elements containing two parallel RCA structures, one taking 0 as input carry and the other taking 1. When the actual value of the input carry is known, the correct result is selected. This pre-computation (or prediction) of the possible output values increases speed, which can reach at best a complexity of O(√n) when the variable widths of the basic elements are optimal, while more than doubling the area compared to a classical RCA. An 8-bit version of the CSLA is depicted in Figure 1.12, with basic blocks of sizes 2-3-3 from LSB to MSB. The resulting critical path is therefore 3 FAs and three 2-to-1 multiplexers, from input carry to output carry, instead of 8 FAs for the RCA. It is important to note that the CSLA structure can be applied to any addition structure, such as the ones described below, and not only the RCA, which can lead to better speed performance. As already stated, the longest path in the adder starts from the input carry (or the input LSBs) and ends at the output carry (or output MSB). Therefore, propagating the carry across the operator as fast as possible is a major stake as long as high speed is required. It can be done by duplicating hardware as in the CSLA, but it can also be achieved by prioritizing the carry propagation. In FA design, the output is computed together with the output carry. In Carry-Lookahead Adder (CLA) design, carry propagation is performed by an independent circuit, so the carries at the MSB positions do not need to wait for all outputs of inferior rank to be computed. For this, two particular values need to be calculated at each bit position: the generate and propagate bits g_i and p_i, defined by: p_i = x_i ⊕ y_i, g_i = x_i ∧ y_i. (1.4) Using these values, obtained with very small circuitry, the carry of rank i is extracted from the carry of the previous rank by the following relation: c_i = g_{i−1} ∨ (c_{i−1} ∧ p_{i−1}).
(1.5) Then, by recurrence, any carry signal can be deduced knowing any carry signal of inferior rank and all propagate and generate bits of intermediate rank. The addition output bit z_i is then simply deduced by the following relation: z_i = p_i ⊕ c_i. (1.6) For instance, knowing c_i, the four following carries are defined by the equations: c_{i+1} = g_i ∨ (c_i ∧ p_i), c_{i+2} = g_{i+1} ∨ (g_i ∧ p_{i+1}) ∨ (c_i ∧ p_i ∧ p_{i+1}), c_{i+3} = g_{i+2} ∨ (g_{i+1} ∧ p_{i+2}) ∨ (g_i ∧ p_{i+1} ∧ p_{i+2}) ∨ (c_i ∧ p_i ∧ p_{i+1} ∧ p_{i+2}), c_{i+4} = g_{i+3} ∨ (g_{i+2} ∧ p_{i+3}) ∨ (g_{i+1} ∧ p_{i+2} ∧ p_{i+3}) ∨ (g_i ∧ p_{i+1} ∧ p_{i+2} ∧ p_{i+3}) ∨ (c_i ∧ p_i ∧ p_{i+1} ∧ p_{i+2} ∧ p_{i+3}). (1.7) The direct translation of these equations into hardware leads to faster carry generation compared to the RCA, but also leads to an important area overhead. However, parallel prefix adders, which are based on the CLA paradigm, show better area performance. The main idea is to propagate the g and p bits to get equivalent couples (p′, g′) which make the computation of any c_i in Equation 1.5 independent of c_{i−1}, so we finally get: c_i = g′_{i−1}, z_i = p′_i ⊕ c_i = p′_i ⊕ g′_{i−1}, (1.8) and so all outputs can be computed in parallel. This equivalent representation is obtained thanks to a series of 4:2 compressors, extracting an equivalent couple (p′_i, g′_i) from couples (p_i, g_i) and (p_j, g_j) where j < i, performing: p′_i = p_j ∧ p_i, g′_i = g_i ∨ (g_j ∧ p_i). (1.9) The details and mathematical proof of this method are available in [START_REF] Parhami | Computer Arithmetic[END_REF].
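The propagate/generate mechanism of Equations 1.4-1.6 can be checked with a small functional model. This sketch is ours: it computes carries sequentially in software (hardware derives them in parallel), but each carry comes only from p, g and the previous carry, never from a rippled sum.

```python
def cla_add(a, b, n):
    """Carry-lookahead-style addition of n-bit unsigned integers a, b.

    Propagate/generate words are computed at once; each carry is then
    derived from them and the previous carry (c_{i+1} = g_i or (c_i and
    p_i)), and the sum bits are z_i = p_i xor c_i.
    """
    p = a ^ b                      # p_i = x_i xor y_i
    g = a & b                      # g_i = x_i and y_i
    c = 0                          # carry word: bit i holds c_i, c_0 = 0
    for i in range(n):
        ci = (c >> i) & 1
        gi = (g >> i) & 1
        pi = (p >> i) & 1
        c |= (gi | (ci & pi)) << (i + 1)
    s = (p ^ c) & ((1 << n) - 1)   # z_i = p_i xor c_i
    return s, (c >> n) & 1         # n-bit sum and carry-out
```

Running it on, e.g., 15 + 1 over 4 bits shows the carry chain collapsing to a wrap with carry-out 1, exactly as the recurrence predicts.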
The use of this (p, g) 4:2 compressor has led to the creation of several parallel prefix adders, such as the Brent-Kung Adder (BKA) and the Kogge-Stone Adder (KSA) [START_REF] Kogge | A parallel algorithm for the efficient solution of a general class of recurrence equations[END_REF]. As a conclusion about integer adders used for FxP addition, several addition structures exist. This section only presents the main principles; many other instances based on these principles do exist, many being described in [START_REF] Parhami | Computer Arithmetic[END_REF]. What is important to observe is that integer adders have a minimum area complexity of O(n) and a minimum time complexity of O(log n). However, both complexities cannot be achieved by the same structure. Reaching the minimum time complexity implies parallelism and so a larger area, whereas the smallest area implies a longer critical path and so a higher delay. In the next section, the carry-save addition method is presented in the context of summand grid reduction in multiplication, which is why it was not handled in this section. Multiplication in Fixed-Point Representation As for addition, FxP multiplication is performed by integer multiplication. Unlike addition, no alignment of the inputs is necessary. Thus, for the accurate multiplication of n-bit inputs, a 2n-bit result is returned. Therefore, compared to addition where only 1 more bit is necessary, multiplication is a potential source of high resource needs downstream, which definitely justifies the necessity of quantizing numbers along computations, as presented in Section 1.3.2. Integer multiplication can be split into two phases, generation of the summand grid and summand grid addition, leading to the scheme shown in Figure 1.15.
Compared to higher-base multiplication, binary multiplication is much simpler. Figure 1.15 -General integer multiplication principle applied on 6-bit inputs Figure 1.16 -General visualization of a 6-bit multiplication summand grid Indeed, only two values are possible for all summands, 0 or the value of the multiplicand, which can lead to major simplifications. Indeed, the generation of each line of the summand grid of an n-bit multiplier can be performed by n 2-to-1 multiplexers selecting whether the input bits are x_i or 0, controlled by the value of the bit y_j corresponding to the current line. Therefore, the most expensive part of the multiplier in terms of resources is the carry-save reduction of the summand grid down to a final addition. The summand grid can be visualized as an n/2-stage triangle, as shown in Figure 1.16. The reduction of the tree is achieved by several stages of FAs and Half Adders (HA), until only two lines are left. A HA can be seen as a simplified FA (see Section 1.3.3) with only two inputs instead of three. A HA is built with only two logic gates instead of five for the FA. FAs perform a 3:2 compression illustrated by Figure 1.17, and HAs a 2:2 transformation as in Figure 1.18. Figure 1.17 -Full adder compression -requires 5 gates Thus, the complexity of the multiplier in terms of speed and area depends on how the summand grid reduction is organized. Different classical methods to build reduction trees exist in the literature, the most famous ones being the Wallace tree [START_REF] Wallace | A suggestion for a fast multiplier[END_REF] and the Dadda tree [START_REF] Dadda | Some schemes for parallel multipliers[END_REF]. The Wallace tree reduces the partial product bits as early as possible, whereas the Dadda tree reduces them as late as possible. This leads to two different kinds of architectures, the Wallace tree being the fastest, whereas the Dadda tree implementation is smaller. Figure 1.19 shows the difference between Wallace and Dadda trees in a 5-bit multiplier context.
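The summand-grid flow (row generation, 3:2 compression, final addition) can be modelled word-wise; this illustrative sketch (ours, unsigned operands, no particular Wallace/Dadda scheduling) replaces three rows by two at each step using the full-adder identity a + b + c = (a xor b xor c) + 2(majority):

```python
def csa_multiply(x, y, n):
    """Unsigned n-bit multiply via the summand grid: one shifted row per
    set bit of y, carry-save 3:2 compression until two rows remain, then
    a single final carry-propagating addition."""
    rows = [x << j for j in range(n) if (y >> j) & 1]
    while len(rows) > 2:
        a, b, c = rows[:3]
        s = a ^ b ^ c                                # FA sum bits
        carry = ((a & b) | (b & c) | (a & c)) << 1   # FA carries, shifted
        rows = [s, carry] + rows[3:]
    return sum(rows)                                 # final addition
```

Each loop iteration is carry-free (pure bitwise logic), which is the whole point of carry-save reduction: the only carry propagation happens once, in the final addition.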
The Wallace tree requires 9 FAs and 3 HAs (51 gates) before the final 8-bit addition, whereas the Dadda tree needs 8 FAs and 4 HAs (48 gates). Computing multiplications based on partial product reduction, such as with Wallace or Dadda trees, is a good compromise between speed and area. Indeed, only a few reduction stages are necessary before the final addition (2 stages for 8-bit multiplication, 6 stages for 16-bit and 8 stages for 32-bit), which represents an acceptable overhead. Tree multipliers have a delay complexity in O(log n). A particular sort of tree multiplier is the array multiplier. It is made with a one-sided Carry-Save Adder (CSA) reduction tree (less efficient than distributed trees), which makes it slower (O(n)) and theoretically larger than the previously discussed multipliers, and its final computation is performed by a RCA, which is the slowest possible adder as discussed in Section 1.3.3. However, it is very interesting in VLSI design thanks to its regularity, which implies small wires ensuring a compact layout. This regularity also implies fine-grained pipelining possibilities. Figure 1.20 is a 6-bit signed array multiplier. The signed version is obtained using the modified Baugh-Wooley two's-complement technique, which consists in inverting the MSBs of all partial products except the last one, which has all its bits inverted except the MSB. In Figure 1.20, AFA (resp. AHA) corresponds to a FA (resp. HA) whose inputs x_i and y_i are combined by an AND cell, and NFA corresponds to a FA whose inputs x_i and y_i are combined by a NAND cell. The previously discussed multipliers are fast but their area is important. E.g., the array multiplier area complexity is O(n²), whereas it is possible to reach an O(n) complexity with a sequential multiplier, in which the partial product selected by the multiplexer is accumulated in the MSB half of another register, shifting one bit right at each new addition.
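The shift-and-add principle of the sequential multiplier can be sketched as follows (a behavioural model, with the Python integer `acc` standing in for the 2n-bit shifting register):

```python
# Sequential (shift-and-add) multiplication sketch: at each of the n steps,
# the multiplexer selects x or 0 depending on the current LSB of y, the
# selection is added into the MSB half of the accumulator, and the
# accumulator shifts one bit right.

def sequential_multiply(x, y, n):
    acc = 0                       # models the 2n-bit accumulator register
    for _ in range(n):
        if y & 1:                 # 2-to-1 multiplexer: add x or 0
            acc += x << n         # addition into the MSB half
        acc >>= 1                 # shift right by one position
        y >>= 1
    return acc
```

This makes the O(n log n) delay of the sequential structure concrete: one n-bit addition per multiplier bit, reusing a single adder.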
Therefore, the whole computation is only performed by an adder, which can be one of the several presented in Section 1.3.3, chosen for delay or area performance. In this section, several ways to manage the computation time or area of the summand grid were presented. However, improvements can also be made upstream in order to reduce the initial size of the summand grid. The most classical and efficient way is to apply modified Booth encoding on y. In radix-4 Booth encoding, y is encoded so that only ⌊n/2⌋ + 1 lines of the summand grid are generated instead of n. However, this implies circuitry overhead for the encoding. Indeed, the n-bit y operand needs to be transformed into ⌊n/2⌋ + 1 actions to perform on x for partial product generation. These actions consist in a multiplication of x by an element of the set {−2, −1, 0, 1, 2}, which translates into at most one left shift during the partial product generation, as well as a possible negation. The decision of the corresponding action is driven by the rules of Table 1.7. Higher-radix encodings such as radix-8 Booth encoding do exist, but increasing the radix leads to much higher encoding complexity and a high cost due to a larger number of shifts or dense wiring of the input bits x_i, which tend to cancel the benefits of the reduction of the number of partial products. However, radix-4 or radix-8 Booth encoding techniques tend to be faster than classical radix-2 multiplication, especially for large bit widths, which imply a large summand grid. In this section, several multiplication techniques and optimizations were presented. Their efficiency strongly depends on the techniques used to generate and reduce the summand grid. Generally, for an n-bit multiplication, tree multiplication is O(log n) fast, with or without Booth encoding, whereas the array multiplier is O(n) fast and the sequential multiplier O(n log n) fast.
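The radix-4 Booth recoding rules can be sketched as follows (a behavioural model for unsigned operands, with the MSB side zero-padded so that n/2 + 1 digits are produced for an even n):

```python
# Radix-4 Booth recoding sketch: y is scanned by overlapping 3-bit groups
# (y_{2j+1}, y_{2j}, y_{2j-1}, with y_{-1} = 0), each selecting a partial
# product in {-2x, -x, 0, +x, +2x}, i.e. at most one shift plus a possible
# negation.

BOOTH_DIGIT = {  # (y_{2j+1}, y_{2j}, y_{2j-1}) -> digit of weight 4^j
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 2,
    (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0,
}

def booth_digits(y, n):
    bits = [(y >> i) & 1 for i in range(n)] + [0, 0]  # zero-pad the MSB side
    prev = 0
    digits = []
    for j in range(0, n + 1, 2):
        digits.append(BOOTH_DIGIT[(bits[j + 1], bits[j], prev)])
        prev = bits[j + 1]
    return digits

def booth_multiply(x, y, n):
    # each partial product is a shifted/negated copy of x
    return sum(d * (x << 2 * j) for j, d in enumerate(booth_digits(y, n)))
```

E.g. y = 15 is recoded as 16 − 1, so only two non-zero partial products (+x at weight 16 and −x at weight 1) are generated instead of four.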
However, the sequential multiplier has an area in O(n) and the array multiplier in O(n²), whereas Wallace or Dadda tree multipliers have an intermediate area. The array multiplier, despite being the largest and not the fastest, has compact layout possibilities and fine-grained pipelining possibilities. The following section presents approximate operators, which try to overcome the limitations of addition and multiplication complexity, often taking the previously presented operators as a work basis.

Table 1.7 - Radix-4 Booth recoding rules

| y_i | y_{i−1} | y_{i−2} | Y_{⌊i/2⌋} | Operation |
|-----|---------|---------|-----------|-----------|
| 0   | 0       | 0       | 0         | +0        |
| 0   | 0       | 1       | 1         | +x        |
| 0   | 1       | 0       | 1         | +x        |
| 0   | 1       | 1       | 2         | +2x       |
| 1   | 0       | 0       | −2        | −2x       |
| 1   | 0       | 1       | −1        | −x        |
| 1   | 1       | 0       | −1        | −x        |
| 1   | 1       | 1       | 0         | +0        |

Relaxing Accuracy Using Approximate Operators

In Section 1.3, hardware integer addition and multiplication were presented in the context of FxP arithmetic. These operators are accurate, meaning they always return a mathematically correct value. However, many applications do not need calculations to be perfectly accurate, since a degraded output can be tolerated. This is why, over the past decades, many researchers have tried to break the performance limitations of accurate arithmetic by proposing an important number of approximate operators. Section 1.4.1 presents a collection of previously published approximate adders and Section 1.4.2 presents some multipliers.

Approximate Integer Addition

If adders are the most basic arithmetic operators, they nevertheless are intensively used in all applications. Moreover, they are also directly used in some multiplier architectures and in many implementations of more complex functions, such as exponentials, logarithms, functions using the CORDIC algorithm, etc. Improving their latency, area or power consumption is therefore a big stake for the whole arithmetic operator design field. For an n-bit adder, the optimal delay complexity is O(log n) and the optimal area complexity is O(n) [START_REF] Koren | Computer Arithmetic Algorithms[END_REF]. It is thus impossible to find perfectly accurate adders getting below these complexities.
For breaking these barriers, approximate adders were created. This section presents a non-exhaustive list of them. As stated in Section 1.3.3, the critical path of an n-bit addition is located between the input LSBs (considering the adder has no input carry) and the z_n output, which can be considered as the adder's output carry. Therefore, the best source of improvement in addition is the ability to break the path of this critical carry chain. Indeed, most of the time during an addition, the whole carry chain is unused, and yet it limits the frequency of the adder and spends energy in glitches. It is shown in [START_REF] Schilling | The longest run of heads[END_REF] that, for a series of n coin tosses, the number of sequences whose longest run of heads does not exceed x is given by the series A_n(x), defined by:

A_n(x) = 2^n if n ≤ x, and A_n(x) = Σ_{0≤j≤x} A_{n−1−j}(x) otherwise. (1.12)

Using this series, the longest carry chain with respectively a probability of 99% and 99.99% is given by Table 1.8. E.g., for a 256-bit adder, the longest propagation of a carry will be 20 bits, with only a 0.01% chance for it to be longer. It can also be noticed that the probability for the longest carry chain not to exceed a given x has a fast growth, as illustrated by Figure 1.23, which shows the probability for the longest carry chain of a 64-bit adder to be inferior or equal to x, as a function of x. This probability explicitly shows that cautiously breaking carry chains only causes few chances for the result to be false. However, breaking a long carry chain is likely to cause a strongly erroneous output, since an error with the weight of the MSBs of the carry chain can be made. A balance must therefore be found between the occurrence of errors and their amplitude. Given an adder of width n and a maximum considered carry chain of x bits, the probability of having a correct result is

P(n, x) = (1 − 1/2^{x+2})^{n−x−1}, (1.13)

which also has a fast growth with x.
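Equations 1.12 and 1.13 are easy to evaluate numerically; the sketch below is a direct, illustrative transcription of the two formulas:

```python
# A(n, x): number of n-bit sequences whose longest run of 1s does not
# exceed x (Equation 1.12). P(n, x): probability that an n-bit adder whose
# carry chains are truncated to x bits returns an exact result
# (Equation 1.13).

from functools import lru_cache

@lru_cache(maxsize=None)
def A(n, x):
    if n <= x:
        return 2 ** n
    return sum(A(n - 1 - j, x) for j in range(x + 1))

def longest_run_prob(n, x):
    # probability that the longest run of 1s in n fair coin tosses is <= x
    return A(n, x) / 2 ** n

def P(n, x):
    return (1 - 1 / 2 ** (x + 2)) ** (n - x - 1)
```

For instance, `longest_run_prob(256, 20)` is above 0.999, consistent with the 256-bit figure quoted above.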
Sample Adder, Almost Correct Adder and Variable Latency Speculative Adder

In [START_REF] Lu | Speeding up processing with approximation circuits[END_REF], S.-L. Lu proposes the Sample Adder, which will be denoted as Lu's Parallel Adder (LPA) from this point. Based on the previous remarks about the potential of breaking the carry chains, an LPA of width n is parameterized by k, the maximum width of the transmitted carry chain. Based on the structure of a parallel prefix adder, the LPA is made of k rows of k-bit carry-chain computing blocks (or fewer bits for boundary blocks) applied on propagate and generate circuits. The output carry of each block is transmitted to the corresponding rank, and the final sum is computed along with the corresponding sum bit. The example of a 16-bit LPA with k = 4 is represented in Figure 1.24. First, a conversion of (x_i, y_i) to (p, g) is performed, and the carry values are computed in parallel. Finally, the input bits x_i, y_i and the computed carries c_i are added to get the output sum bit at each position. The structure of the LPA makes its delay constant for a given k, independently from n. The author claims the adder to be faster and smaller than KSA and the Han-Carlson Adder (HCA), with a constant delay complexity in O(k) and an area complexity in O(n).

A functionally similar adder denoted as Almost Correct Adder (ACA) is proposed in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]. Indeed, the same principle of limiting the transmitted carry chain to a same k for each position is used. However, the implementation is different. To understand it, the kill bit must be added to the generate-propagate scheme, giving Equations 1.14. The kill signal is set when both inputs are 0; when this situation occurs, a hypothetical input carry cannot be transmitted.

p_i = x_i ⊕ y_i, g_i = x_i · y_i, k_i = (¬x_i) · (¬y_i). (1.14)

Figure 1.24 - 16-bit Sample Adder (LPA) with k = 4 - The red-striped square converts (x_i, y_i) to (p, g) (Equation 1.4) and the green-striped square converts the sum bit x_i ⊕ y_i and c_i to the output z_i.

Considering k_i, we get:

c_i = 0 if k_i = 1; c_i = 1 if g_i = 1; c_i = c_{i−1} otherwise (p_i = 1); and s_i = x_i ⊕ y_i ⊕ c_{i−1}. (1.15)

Using Equations 1.15, a matrix recursion can be found to express c_i as a function of any of its carry predecessors:

(c_i, 1)^T = [[p_i, g_i], [0, 1]] (c_{i−1}, 1)^T = M_i (c_{i−1}, 1)^T, and by recursion, (c_i, 1)^T = M_i M_{i−1} ··· M_{i−k+2} M_{i−k+1} (c_{i−k}, 1)^T. (1.16)

Therefore, knowing the adder output carry c_{n+1} implies propagating, generating or killing carries from the first carry c_0, performing n − 1 simple binary matrix products (the performed operations are only logical OR and AND). ACA proposes to reduce this chain by limiting this series of matrix products to a given number, taking into account the low probability of the existence of a long carry chain. For instance, a 32-bit operator with a maximum considered carry chain of 8 taken into account for each output bit calculation will produce incorrect results only for cases where the longest carry chain should be greater than 8, which occurs in only 2.4% of the cases. To summarize, an n-bit ACA with an x-bit restricted carry chain will have to consider n − x + 1 carry chains of size x instead of a single (n − 1)-bit carry chain. However, successive x-bit carry chains have x − 1 bits overlapping with their direct neighbour carry chains. As a consequence, an organization for a fast calculation of the carry-chain matrices M_{i:i−x+1} is possible, as shown in Figure 1.25, where it is applied to a 16-bit ACA with 6-bit carry chains. M_{i:i−x+1} is the matrix product ∏_{j=0}^{x−1} M_{i−j}. By construction, an n-bit ACA with a k-bit considered carry chain has an area complexity O(n log k) and a time complexity O(log k).
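The matrix recursion of Equation 1.16 can be checked numerically with the sketch below (Boolean 2×2 matrices with AND as product and OR as sum; an illustrative model, not the ACA hardware layout):

```python
# Carry computation through chained Boolean matrix products: each position
# contributes M_i = [[p_i, g_i], [0, 1]], and the product M_{n-1} ... M_0
# applied to the vector (c_in, 1) yields the output carry of an n-bit chain.

def bool_matmul(A, B):
    # 2x2 matrix product over ({0, 1}, OR, AND)
    return [[(A[i][0] & B[0][j]) | (A[i][1] & B[1][j]) for j in range(2)]
            for i in range(2)]

def M(x, y, i):
    p = ((x >> i) ^ (y >> i)) & 1     # propagate
    g = (x >> i) & (y >> i) & 1       # generate
    return [[p, g], [0, 1]]           # kill is the case p = g = 0

def carry_out(x, y, n, c_in=0):
    prod = [[1, 0], [0, 1]]           # identity
    for i in range(n - 1, -1, -1):    # prod = M_{n-1} ... M_0
        prod = bool_matmul(prod, M(x, y, i))
    return (prod[0][0] & c_in) | prod[0][1]
```

Truncating the product to the last k matrices at each rank gives exactly the (n, k) ACA approximation of the carries.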
In [START_REF] Schilling | The longest run of heads[END_REF], it is shown that the expectation of the longest chain of ones in an n-bit sequence is log n − 2/3, which in our case makes k proportional to log n for equal performance [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]. Therefore, the final complexity of an n-bit ACA is O(n log log n) in space, which is near-linear even for relatively high values of n, and O(log log n) in time. Hence, the theoretical space complexity limit for an accurate adder is nearly reached, whereas the time complexity is exponentially beaten, so ACA can be considered as a fast approximate adder. As previously mentioned, LPA and ACA produce the same function with different hardware layouts. Figure 1.26 sums up the effect of approximation on the output of an 8-bit LPA or ACA with k = 2. Each colored rectangle covers the inputs considered in the computation of the corresponding output bit(s). Most approximate adders can have their function fully described by this type of figure, except for particular ones such as ETAI, described in Section 1.4.1.2, or configurable adders such as AC2A, mentioned in Section 1.4.1.3. Figure 1.27 shows the error maps for an 8-bit ACA with k = 2 and k = 4 in log scale. The error map is the amplitude of the error given all possible combinations of inputs x (input 1) and y (input 2). The white zones correspond to the inputs leading to no error, i.e. the inputs implying a carry chain inferior to k. As expected, k = 4 leads to scarcer errors than k = 2. The error map is here represented using FxP-represented inputs with a 1-bit integer part. The theoretical highest possible error is therefore equal to 2 − q = 2 − 2^{1−n}. For LPA and ACA, it is interesting to see that the error map seems to be fractal, which shows the structural difference between the nature of their error and the uniform nature of the quantization noise issued by FxP.
As pointed out, the ACA adder proposes high benefits in terms of delay with a small area sacrifice compared to classical accurate adders. Moreover, it is possible to choose the deepness of the approximation by selecting the length of the maximal considered carry chain for each output bit. As reducing this length is a source of error, an architecture going against the first interest of approximate computing is proposed in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF], the Variable Latency Speculative Adder (VLSA). This adder is totally accurate, but based on ACA. The method consists in the following steps:

1. calculate the sum using an (n, k) ACA, choosing k such that a relatively low number of errors occurs,
2. detect if an error has occurred, i.e. if a carry chain is longer than k, and
3. if an error occurred, correct the sum in order to obtain an exact result.

This method, which provides a correct adder with a latency varying from one time unit to two, is only interesting if the sum of the ACA and the error recovery system does not exceed the cost of an equivalent state-of-the-art adder. An error recovery system which reuses a maximum of the ACA structure to detect and correct the error is proposed in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]. The error detection mechanism has to consider all chains of length k + 1 instead of k for the ACA. This leads to an O(log n) time complexity, which is higher than the (n, k) ACA time complexity, but still more efficient than a traditional adder by two thirds according to the author. The correction system is an n/k-bit Carry Look-Ahead (CLA) block, which returns the carries that were missed by the ACA because of a too long carry chain. This mechanism has about the same time-efficiency as the corresponding ACA, so the critical path will be the error detection mechanism.
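The three VLSA steps above can be modeled behaviourally as follows (a functional sketch: `speculative_add` plays the role of the (n, k) ACA, the detector looks for a generate followed by k propagates, i.e. a chain of length k + 1, and recovery falls back to an exact addition):

```python
def speculative_add(x, y, n, k):
    # (n, k)-truncated addition: the carry into each position is rebuilt
    # from the k positions below it only (functional model of ACA/LPA)
    z = 0
    for i in range(n):
        c = 0
        for j in range(max(0, i - k), i):
            g = (x >> j) & (y >> j) & 1
            p = ((x >> j) ^ (y >> j)) & 1
            c = g | (p & c)
        z |= (((x >> i) ^ (y >> i) ^ c) & 1) << i
    return z

def needs_correction(x, y, n, k):
    # the speculative result is wrong iff a generated carry must propagate
    # through more than k consecutive positions (a chain of length k + 1)
    for j in range(0, n - k - 1):
        if ((x >> j) & (y >> j) & 1 and
                all(((x >> i) ^ (y >> i)) & 1 for i in range(j + 1, j + k + 1))):
            return True
    return False

def vlsa_add(x, y, n, k):
    z = speculative_add(x, y, n, k)    # 1. fast speculative sum
    if needs_correction(x, y, n, k):   # 2. error detection
        z = (x + y) % (2 ** n)         # 3. exact recovery (extra cycle)
    return z
```

In hardware the recovery reuses the ACA structure with a CLA block; here the exact fallback simply stands in for that correction path.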
The schematic of the resulting architecture is given in Figure 1.28. Tests comparing ACA, ACA with error detection, ACA with error detection and recovery, and a traditional fast adder provided by DesignWare are provided in [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]. The ACA adders are optimally sized for an accuracy of 99.99%, following the values of Table 1.8. Results are shown in Figure 1.29. For ACA with no error detection and recovery, we can see a clear benefit in delay compared to the traditional adder. Both are near-linear, but the proportionality coefficient is much smaller for ACA. In terms of area, ACA is about 25% smaller than the traditional adder. The ACA with error recovery is nearly as fast as a traditional fast adder, though the error correction only occurs for 0.01% of the computations. The average delay of the corrected ACA is 0.9999 multiplied by the error detection delay, which is about 2/3 of the traditional adder according to the first graph of Figure 1.29. In terms of area, the ACA and its error recovery are together about 1.5× larger than an optimal exact adder, but their complexity is linear. To conclude about ACA:

• ACA takes advantage of the scarcity of deep carry propagation.
• It makes scarce errors depending on its design, but with a potentially high amplitude, especially when a carry chain is incompletely considered on a bit of high significance.
• It has a near-linear delay even for high values of n, with a very low proportionality coefficient compared to a fast adder.
• It covers a 25% smaller area compared to this same fast adder.

The error recovery scheme of [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF] can be applied to any other approximate operator, and is a very interesting accurate structure for systems which can allow a variable number of cycles for an operation.
In this context, VLSA can be seen as a 2-cycle addition which can nearly always be bypassed in 1 cycle.

Error-Tolerant Adders

Between 2009 and 2010, Zhu proposed four approximate adders:
• the Error-Tolerant Adder type I (ETAI) [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF],
• the Error-Tolerant Adder type II (ETAII) [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF],
• the Error-Tolerant Adder type II Modified (ETAIIM) [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF], and
• the Error-Tolerant Adder type IV (ETAIV) [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF].

In [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF], the first Error-Tolerant Adder (ETA) is presented, referred to as ETAI in the subsequent iterations [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF][START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF]. Its principle is simple: the most significant part (MSB side) of the adder is an accurate adder, and the least significant part (LSB side) is approximated with Algorithm 1. The inputs x_i and y_i of the approximate part are read from their MSB. When both are equal to 1, the calculations are stopped and all bits from rank i down to the LSB are set to 1. This mechanism is depicted in Figure 1.30. ETAI is made for fast approximation. Its goal is to round up the result as fast as possible when a generate signal is met in the approximate part, i.e. when both input bits are 1. In this way, the carry which should have been generated towards the MSB is compensated at best by maximizing the lower-weight bits that have not been treated yet, that is to say all the bits from the carry generation down to the LSB.
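This split between an accurate MSB part and the Algorithm 1 approximation can be modeled behaviourally as follows (a functional sketch of an ETAI(n, m); as in the original design, no carry crosses from the approximate to the accurate part):

```python
# Functional sketch of ETAI(n, m): the n most significant bits are added
# exactly, and the m least significant bits are ORed from the MSB down
# until two 1s meet, at which point all remaining bits are filled with 1s.

def etai_add(x, y, n, m):
    lo_mask = (1 << m) - 1
    x_lo, y_lo = x & lo_mask, y & lo_mask
    x_hi, y_hi = x >> m, y >> m

    z_lo = 0
    for i in range(m - 1, -1, -1):           # scan approximate part from MSB
        xi, yi = (x_lo >> i) & 1, (y_lo >> i) & 1
        if xi & yi:                          # two 1s met: fill with 1s
            z_lo |= (1 << (i + 1)) - 1
            break
        z_lo |= (xi | yi) << i               # otherwise carry-free OR
    z_hi = (x_hi + y_hi) % (1 << n)          # exact MSB part, no input carry
    return (z_hi << m) | z_lo
```

When no position of the approximate part holds two 1s, no carry exists there and the OR gives the exact sum bits; otherwise the lost carry is compensated by the run of 1s.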
The propagation of these 1s to the right is performed by a control block, designed to propagate the information as fast as possible when the two-1s case occurs, using a control signal.

Algorithm 1 - ETAI approximate-part addition (i initialized to the MSB index of the approximate part):
while i ≥ 0 and not two_ones do
    if x_i · y_i = 1 then
        two_ones ← True
        z_i ← 1
    else
        z_i ← x_i ∨ y_i
    end if
    i ← i − 1
end while
while i ≥ 0 do
    z_i ← 1
    i ← i − 1
end while

Figure 1.30 - Example of an ETAI addition: once two 1s are met in the approximate part, all lower-weight output bits are set to 1

The control block is composed of two types of sub-blocks:
• Control Signal Generating Cells of type I (CSGCI), which generate the control signal of rank i from the control signal of rank i + 1 and the local condition x_i · y_i, and
• Control Signal Generating Cells of type II (CSGCII), which are similar to CSGCI, but take in addition as input another control signal of rank i + k, where k is fixed at design time.

| Block  | Output value CTL_i |
|--------|--------------------|
| CSGCI  | CTL_{i+1} ∨ (x_i · y_i) |
| CSGCII | CTL_{i+1} ∨ CTL_{i+k} ∨ (x_i · y_i) |

Table 1.9 - Logical equations of CSGC type I and II

CSGCII allows the control signal to be short-circuited. This way, for an (n+m)-bit ETAI(n, m) with an n-bit accurate part and an m-bit approximate part, the critical 1s propagation is k + 1 < m, where k is the spacing between two CSGCII in the control block. The architecture of the control block is graphically described in Figure 1.31, for an ETAI with a 20-bit approximate part. CSGCI and CSGCII are simple logic blocks easily described by their binary logic in Table 1.9. Once the control bits are generated and propagated for the whole approximate part, calculations are performed by a carry-free addition block, composed of a logic function called Modified XOR (MXOR) and presented at transistor-level logic in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. Studying the truth table of this new logic block reveals that it actually is a 3-input OR gate. Hence, each output z_i of the approximate part is given by z_i = MXOR(x_i, y_i, CTL_i) = x_i ∨ y_i ∨ CTL_i. The accuracy of ETAI is studied by introducing the Minimum Acceptable Accuracy (MAA) and the Acceptance Probability (AP).
MAA is a fixed value defining the desired minimum accuracy of the result compared to an exact computation, expressed as a percentage. AP is the probability for a given MAA to be reached by the operator. Simulation results are given in Figure 1.32. The first graph is obtained for several 16-bit ETAIs with 10^4 simulated sets of inputs; the second one has unknown simulation size parameters, but all ETAIs are designed with an approximate part representing 75% of the adder width. For a 99% MAA, the ETAI with the shortest approximate part gives quite a high AP (about 99%), but when the approximate part length grows, AP dramatically drops as MAA increases. The second graph shows that for small operators, a difference of 1% on the MAA provokes a high drop in AP. For longer operators, the difference in AP for different MAAs gets tinier. Errors performed by an ETAI(n, m) can be high in amplitude (nearly 2^{m−1}). ETAI is an adder which often makes errors, usually with a low amplitude but sometimes with a relatively high one. Moreover, it has very bad performance for the addition of low-amplitude numbers.

Figure 1.32 - ETAI accuracy simulation results [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]: AP as a function of MAA and of adder size

Simulation results for the delay and power performance of ETAI compared to the most classical accurate adders are given in Table 1.12 [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF], for a 0.18µm CMOS process at a 100 MHz frequency. In this table, CSK refers to the Carry-Skip Adder (see [START_REF] Parhami | Computer Arithmetic[END_REF]). The details about the size of these adders are not given.
100 sets of inputs were used for simulation, which is rather low to average over all the different possible behaviours of ETAI. These results give the advantage to ETAI over classical correct adders on this metric, with Power-Delay Product (PDP) savings of up to more than 80% compared to the carry-select implementation. Once again, the results must be moderated, since the experimental conditions are not very clear (including the size of the adders, the size of the approximate part, and the value of the CSGCII spacing parameter k). Another fast approximate adder, ETAII, is presented in [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF]. The structure of this adder is based on the same idea as the previously presented LPA and ACA, i.e. shortening carry propagation paths. Indeed, the carry propagation chain is cut into smaller sub-chains of equal size. But contrary to ACA, all output bits do not rely on the same propagated carry chain size. In the structure of ETAII given in Figure 1.33, it can be noticed that each sub-block of size X is calculated using an exact adder, but taking into account only the carries generated inside it and the propagated output carry of its predecessor carry generator sub-block. For carry generation, CLA blocks are implemented, and for the sum generators, classical RCAs are used to minimize area. Designing the ETAII well is entirely a matter of finding the proper size for the carry propagation chains. The author studied the AP of a 32-bit ETAII given different carry generator block sizes m. The results are available in Table 1.10. Simulations were led using 10^4 sets of inputs, which is quite low again for the adder width, so the results must be taken with caution. For a 32-bit adder, a high AP can be reached for a high MAA with quite a low carry chain length. E.g., more than 97% of the tested inputs lead to an accuracy superior to 99% compared to the accurate value.
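The ETAII block-carry scheme and a Monte-Carlo estimation of AP can be sketched as below (a behavioural model: the carry into each block comes only from the sum inside its predecessor block; treating an exact result of 0 as always acceptable is a simplifying assumption of this sketch):

```python
# ETAII-style addition sketch: each X-bit sum block receives only the carry
# generated inside its immediate predecessor X-bit block (input carry 0),
# so only carry chains crossing more than one block boundary are lost.

import random

def etaii_add(x, y, n, X):
    mask = (1 << X) - 1
    z = 0
    for b in range(0, n, X):
        cin = 0
        if b > 0:  # carry generator of the predecessor block alone
            cin = (((x >> (b - X)) & mask) + ((y >> (b - X)) & mask)) >> X
        z |= ((((x >> b) & mask) + ((y >> b) & mask) + cin) & mask) << b
    return z

def acceptance_probability(adder, n, maa, trials=10000, seed=0):
    # AP: fraction of random input pairs whose accuracy (1 - |error|/exact)
    # reaches the required MAA
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        exact = (x + y) % (1 << n)
        approx = adder(x, y)
        accuracy = 1.0 if exact == 0 else 1 - abs(exact - approx) / exact
        hits += accuracy >= maa
    return hits / trials
```

For instance, `acceptance_probability(lambda a, b: etaii_add(a, b, 32, 4), 32, 0.99)` should land in the high-AP regime reported in Table 1.10.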
Contrary to ETAI, ETAII shows no accuracy disparity between low-amplitude and high-amplitude input sets: simulation in [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF] showed that the AP for a given MAA is similar for ETAII whatever the range of the inputs is, at least for n/m = 4.

Table 1.10 - AP as a function of MAA and carry propagation block size for 32-bit ETAII

A comparison of accuracy between an ETAI (with unknown parameters) and a 32-bit ETAII with 4-bit carry propagation blocks is given by the author. For instance, for a 99% MAA and inputs in the integer range ⟦0, 2^8⟧, ETAII presents a 97% AP against only 52% for ETAI. In order to improve ETAII accuracy, a modified version is proposed in [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF], the ETAIIM. Indeed, ETAII has a periodic structure, with the same substructure for high-significance and low-significance output bits, which makes it give as much importance to the LSB part as to the MSB part, which is generally unwise. In order to give more importance to the MSBs, ETAIIM takes the same structure as ETAII, but with a longer carry propagation chain for the most significant bits. Figure 1.34 represents a 32-bit ETAIIM, with 4-bit carry propagation for all LSBs and a 12-bit MSB carry propagation chain. Such a structure induces a longer critical path, corresponding to this longer carry chain and the corresponding sum generator. In this way, the previously described ETAIIM has a 99.9% AP for a 99% MAA, against 97.0% AP for the same MAA for the corresponding ETAII. Hardware simulation results are presented in Table 1.12 [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF], comparing classical correct adders to ETAII and ETAIIM. The only difference between ETAII and ETAIIM is the delay, which is 64% higher for ETAIIM, but both operate with the same power.
ETAIV is presented in [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF]. Its principle is the same as for ETAII and ETAIIM: shortening the carry chain. However, ETAIV presents longer carry chains than ETAII and ETAIIM for an identical delay, in exchange for a higher energy cost. In the ETAIV structure:
• the LSB carry generator is strictly identical to the one met in ETAII and ETAIIM, and
• the MSB carry generator is composed of two parallel carry generator blocks: one taking the value 0 (GND) as input carry, the other taking 1 (VDD).
In this way, the two exhaustive possibilities for the concerned partial carry propagation are calculated. The right one is chosen thanks to a 2-input multiplexer controlled by the LSB carry generator output carry. The partial block diagram of ETAIV is depicted in Figure 1.37. The author performed simulations on 10^4 sets of inputs, and accuracy results are given in Table 1.11, where X represents the number of bits of each sum generator (and thus of each carry generator). ETAIV provides better results in terms of AP for a given MAA than ETAII (considering the same block size X). The error maps for a 16-bit ETAIV with blocks of width X = 2 and X = 3 respectively are depicted in Figure 1.36. For X = 2, triangular patterns are clearly visible, corresponding to areas where the error is lower. Generally, the error is quite high since many carries are not transmitted. For X = 3, the errors seem more homogeneous and of lower amplitude on average.

Table 1.12 - Simulation results for Error-Tolerant Adders [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF]

Hardware simulation results for ETAIV are given in Table 1.12. Because of its longer critical paths, ETAIV needs slightly more power than ETAII and ETAIIM and has a 21% greater delay. The authors also show by simulation that the design has a good accuracy even for low-amplitude inputs.
It is a good alternative to ETAIIM, if a little amount of area can be traded for delay reduction.

Figure 1.37 - Partial block diagram of ETAIV: pairs of sum/carry generator blocks, the duplicated carry generators (input carries GND and VDD) being selected by a multiplexer (MUX2) driven by the output carry of the preceding block

With ETAI, ETAII and their modified versions ETAIIM and ETAIV, four subsequent approximate adders are proposed. ETAI is original by its reversed carry-propagation approximation, but it is very inaccurate and has very poor performance for low-amplitude inputs. However, it is a very low-energy adder thanks to its low delay. ETAII, ETAIIM and ETAIV have a very different nature from ETAI, but they are very close to one another in their principle. The most accurate is ETAIV, followed by ETAIIM and ETAII. However, the fastest is ETAII, then ETAIV and ETAIIM. ETAII is the most energy-efficient, ETAIV coming second. The four operators have a linear area complexity, but ETAI is the smallest, followed by ETAII and ETAIIM tied, not far from ETAIV. However, the authors compare these operators to some classical adders, but not to state-of-the-art fast adders such as KSA [START_REF] Kogge | A parallel algorithm for the efficient solution of a general class of recurrence equations[END_REF] or the Ladner-Fischer Adder (LFA) [START_REF] Ladner | Parallel prefix computation[END_REF].

Accuracy-Configurable Approximate Adder

In [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], Kahng proposes an Accuracy-Configurable Adder (AC2A). This operator is able to perform additions at different levels of accuracy on a unique implementation, using a series of error correction systems which can be activated or deactivated thanks to power gating techniques.
The approximate part of an n-bit AC2A is composed of n/k − 1 overlapping k-bit exact adders, of which only the MSB part is kept for the final result, as illustrated by Figure 1.38. Hence, errors are generated only when a carry is not propagated to the input of one of these sub-adders. From a functional point of view, AC2A resembles ETAIV, except that the sub-blocks overlap by a half instead of 2/3, and that the accuracy of AC2A is tunable at run time. The approximate computation part has a delay complexity of O(log 2k + 1) and an area complexity of O((n − 2k)(log 2k + 1)/2). This means the delay complexity beats the optimal delay of an accurate adder, whereas the area complexity is slightly above that of the optimal accurate adder. Estimations of minimal delay, area, dynamic power and pass rate (1 − error rate) as a function of the value of k for the approximate computation part of a 16-bit AC2A, compared to a conventional CLA, are given in Table 1.13. For k ≤ 6, the proposed computation part is more efficient than the classical CLA in terms of dynamic power, but its area is larger when k ≥ 2. It can thus be assumed that its static power is superior to the one of the CLA. When k decreases, the minimal delay of the proposed operator decreases, but the error rate increases because of the induced larger number of unconsidered propagated carries. Therefore, a trade-off must be found between delay and pass rate. To characterize the AC2A approximate part more completely, two metrics are used in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF]:

ACC_amp = 1 − |R_c − R_e| / R_c, (1.17)

where R_c is the correct result and R_e the erroneous one. ACC_amp measures the relative amplitude of the error, 1.0 representing a perfect accuracy, the value decreasing as the error grows. ACC_inf represents the proportion of correct bits, 1.0 representing a fully correct output bit sequence.
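The two metrics ACC_amp and ACC_inf defined above can be transcribed directly (an illustrative helper; taking B_e as the number of erroneous output bits and B_w as the output width follows the textual definition of the bit-level metric):

```python
# ACC_amp: relative amplitude accuracy between the correct result R_c and
# the erroneous one R_e. ACC_inf: proportion of correct bits over the
# width-bit output.

def acc_amp(r_correct, r_erroneous):
    return 1 - abs(r_correct - r_erroneous) / r_correct

def acc_inf(r_correct, r_erroneous, width):
    # erroneous bits are the set positions of the XOR of both results
    wrong = bin((r_correct ^ r_erroneous) & ((1 << width) - 1)).count("1")
    return 1 - wrong / width
```

The two metrics capture different failure modes: a single flipped MSB barely moves ACC_inf but ruins ACC_amp, and vice versa for many flipped LSBs.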
ACC_inf = 1 − B_e / B_w, (1.18)

where B_e is the number of erroneous output bits and B_w the output bit-width. Considering these metrics, comparisons of the 16-bit AC2A approximate part with k = 4, ETAI [START_REF] Zhu | Design of low-power highspeed truncation-error-tolerant adder and its application in digital signal processing[END_REF] and ETAIIM [START_REF] Zhu | An enhanced low-power high-speed adder for errortolerant application[END_REF] described in Section 1.4.1.2, LPA [START_REF] Lu | Speeding up processing with approximation circuits[END_REF] described in Section 1.4.1.1 and an accurate CLA are given in Table 1.14. In this table, EDC stands for error detection and correction system. The best delay is obtained for AC2A and ETAI, but ETAI is 38% less area-costly than AC2A. However, ETAI pass rate and ACC_inf are very low. In terms of area, AC2A is beaten by ETAIIM, which is also more accurate considering the ACC_amp and ACC_inf metrics and the pass rate, but with a delay overhead of 30%. LPA is nearly as fast as AC2A and also more accurate, but as it is based on a parallel-prefix structure, its area is superior by 47%. The required area overhead for error detection and correction is 75% for LPA, whereas it is only 28% for AC2A and 15% for ETAIIM. The error detection and correction method is discussed below. A first conclusion from these results is that AC2A is a fast adder with a good balance between accuracy and area. Figure 1.39 [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF] gives comparative results for the AC2A adder, using the metrics ACC_amp and ACC_inf, varying voltage from 0.6 V to 1.0 V. In these graphs, AC2A is referred to as ACA adder (not to be confused with Almost-Correct Adder [START_REF] Verma | Variable latency speculative addition: A new paradigm for arithmetic circuit design[END_REF]), and Lu's adder refers to LPA [START_REF] Lu | Speeding up processing with approximation circuits[END_REF], both presented in Section 1.4.1.1. AC2A shows an interesting resistance to VOS.
Indeed, for the ACC_amp metric, only ETAI achieves a better resistance, but it has extremely bad results with the ACC_inf metric, whereas AC2A beats every other tested operator, closely followed by ETAIIM. What can be concluded is that AC2A has a shorter critical path than the tested adders, and ensures a good accuracy in terms of error amplitude as well as a low Bit Error Rate (BER). As mentioned before, AC2A calculation errors occur when at least one of the sub-adders should have taken an input carry. Knowing this, detecting an error can be performed with very little overhead. Correction can then be performed by transmitting the lost carry or carries to the concerned sub-adders. Just as for VLSA (see Section 1.4.1.1), the entire error correction could be performed with additional cycles, but with only a small area overhead since most of the design is re-used for correction. However, unlike VLSA, AC2A proposes a configurable accuracy. Indeed, the periodical structure of the error-correction system allows several levels of correction, so that an erroneous result can be partially corrected. For this, a pipelined correction design is proposed, following the principle shown in Table 1.15. An example of the previously described structure is developed in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], taking a 32-bit AC2A composed of 4 8-bit sub-adders. With such a structure, four modes can be applied:

• mode-1: no power gating, the whole pipeline is active, and the produced result is exact,
• mode-2: only stage 4 is power-gated, only the most-significant-bits sub-adder is not corrected,
• mode-3: stages 3 and 4 are power-gated, only one sub-adder is corrected, which is the second one from the LSB,
• mode-4: stages 2, 3 and 4 are power-gated, so only the approximate calculation is performed with no error correction.
Comparisons of the accuracy reached by these modes using the ACC_amp and ACC_inf metrics, as well as power and power reduction compared to a conventional pipelined adder, are given in Table 1.16. The conventional pipelined adder refers to an exact adder where the calculation is regularly performed by pipelining it over its sub-adders. Therefore, the 32-bit conventional pipelined adder used as a reference also has four pipeline stages, each stage performing one-fourth of the total calculation, contrary to the proposed pipeline where the whole approximate calculation is performed in the first pipeline stage. In mode-1, the conventional pipelined adder is more energy-efficient than AC2A, but when using approximate modes 2 to 4, energy savings rise from 12.4 to 51.6%, with a relatively small loss of accuracy. Accuracy results of the 32-bit pipelined AC2A on SPEC 2006 benchmarks [START_REF]Standard performance evaluation corporation (spec) cpu2006[END_REF], detailed in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], present generally good accuracy for every mode, even using mode-4. In this mode, ACC_amp is superior to 0.99 for every test, and ACC_inf above 0.95. In order to show the advantage of configurability in terms of power, the same benches are run with a dynamic configuration of the accuracy. Even if it is not specified in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF], it can be assumed that the operating modes were chosen thanks to a succession of simulations and configuration optimizations, and so there is no auto-control system.
Results in [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF] show that the proposed 32-bit AC2A leads to an average of 30.0% power savings with the ACC_amp objective (44.5% at best) and 35.8% average power savings with the ACC_inf objective (47.1% at best) on SPEC 2006 benchmarks. As a conclusion, AC2A proposes a pipelined accuracy-configurable adder with good accuracy performance and with potentially important energy savings thanks to partial power gating applied on the error-correction system. However, accuracy configuration must be done offline, and so the programming effort is increased. In terms of power consumption, AC2A is quite close to ETAI, which is intermediately energy-efficient (see Section 1.4.1.2), but with a much better accuracy in terms of information accuracy, meaning the number of correct bits produced. However, AC2A has two main drawbacks:

• error correction is gradually performed from the least significant sub-adders, and
• changing the error-correction efficiency at run time implies a modification of the number of cycles for an addition, and variable-latency instructions are generally difficult to handle in an instruction pipeline.

Discarding these drawbacks, AC2A coupled with a good configuration can lead to important energy savings with a relatively low loss of accuracy or information.

Gracefully-Degrading Adder

This section presents a quality-configurable approximate adder denoted as Gracefully-Degrading Adder (GDA) [START_REF] Ye | On reconfiguration-oriented approximate adder design and its application[END_REF]. In comparison to AC2A [START_REF] Kahng | Accuracy-configurable adder for approximate arithmetic designs[END_REF] presented in Section 1.4.1.3, GDA is meant to be an approximate adder with a better accuracy, reached with less effort.
Indeed, as shown in Table 1.15, each AC2A additive correction cycle leads to a better correction, the first correction cycle, correcting the LSBs, bringing much less accuracy improvement than the last one, correcting the MSBs. Both GDA and AC2A can reach the same maximum quality with the same effort, but the optimized operator is able to reach a very good quality with a dramatically reduced amount of effort when compared to the original one. The proposed GDA is based on a structure very similar to ETAIV [START_REF] Zhu | Enhanced low-power high-speed adder for error-tolerant application[END_REF] described in Section 1.4.1.2. Indeed, the adder is divided into smaller chained adders whose input can be switched either to the upstream sub-adder output carry or to the output of a carry-in prediction block, thanks to a multiplexer. For an n-bit GDA divided into four n/4-bit sub-adders, with X = (X_3, X_2, X_1, X_0) and Y = (Y_3, Y_2, Y_1, Y_0) the n/4-bit subsets of inputs, and Z = (Z_3, Z_2, Z_1, Z_0) the n/4-bit subsets of outputs, the structure of the corresponding GDA is given in Figure 1.40.

[Figure 1.40 - Structure of the proposed n-bit GDA composed of 4 n/4-bit sub-adders]

The configuration of GDA consists in setting up the multiplexers. If the upstream sub-adder output carry is chosen, then a bigger sub-adder is virtually used, causing a longer delay in return for a higher accuracy. In order to obtain a faster operator, it is then better to choose carry-in prediction blocks as sub-adder inputs. To be prevented from an important loss of accuracy, these blocks need to predict the input carry efficiently and with the shortest possible delay, imperatively strictly inferior to that of an adder unit. The proposed carry-in prediction is based on a hierarchical scheme. Indeed, in order to have a fast and accurate prediction, many LSBs have to be considered, more precisely more than the length of the proposed GDA adder units.
To achieve that, the prediction computation needs to be parallelized. As for ACA (see Section 1.4.1.1), prediction is based on the propagate and generate signals p_i and g_i, which define a recursive formula for the calculation of the carry value c_{i+1}:

p_i = a_i ⊕ b_i, g_i = a_i · b_i, c_{i+1} = g_i ∨ (p_i · c_i). (1.19)

By developing the previous equations and by assuming the longest carry chain cannot exceed t, the expression of the carry signal c_i is then

c_i = g_{i−1} ∨ (p_{i−1} · g_{i−2}) ∨ ··· ∨ (∏_{j=i−t+1}^{i−1} p_j) · g_{i−t} ∨ (∏_{j=i−t}^{i−1} p_j) · c_{i−t}. (1.20)

Using Equation 1.12 in Section 1.4.1, we can prove that for a 32-bit adder, the probability for the longest carry chain to exceed 8 is 2.43%. Therefore, assuming that taking the 8 preceding bits into account for carry prediction is acceptable, the expression of the carry signal becomes

c_i = c′_i ∨ (∏_{j=i−4}^{i−1} p_j) · c′_{i−4},
with c′_i = g_{i−1} ∨ (p_{i−1} · g_{i−2}) ∨ ··· ∨ (∏_{j=i−3}^{i−1} p_j) · g_{i−4},
and c′_{i−4} = g_{i−5} ∨ (p_{i−5} · g_{i−6}) ∨ ··· ∨ (∏_{j=i−7}^{i−5} p_j) · g_{i−8}. (1.21)

Moreover, c′_{i−4} will propagate to c_i only if

∏_{j=i−4}^{i−1} p_j = 1. (1.22)

These equations lead to the hierarchical prediction scheme of Figure 1.41, where two 4-bit groups are watched in parallel before considering the condition shown by Equation 1.22 using AND gates.

[Figure 1.41 - GDA hierarchical prediction scheme]

Based on this hierarchical prediction scheme, another level of configurability can be introduced. Indeed, following the scheme proposed by Figure 1.42, the number of preceding bits considered in the prediction can be set by a series of multiplexers, with a step determined by the depth of the carry-in prediction blocks. For instance, the number of carries which can be considered is either 4, 8, ..., or 4 × k, where k is the number of implemented unitary carry-in prediction blocks.
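The effect of limiting the look-back window can be checked numerically. The sketch below is our own model, not the GDA circuit: it compares the exact carry into a bit position with the carry recomputed from the t preceding bit pairs only, with t = 8 matching the assumption used for the 2.43% figure:

```python
import random

def exact_carry(a, b, i):
    # true carry into bit i of a + b
    mask = (1 << i) - 1
    return ((a & mask) + (b & mask)) >> i

def predicted_carry(a, b, i, t=8):
    # carry into bit i rebuilt from the t preceding bits only:
    # any carry generated below bit i - t is ignored (the chain is cut)
    lo = max(0, i - t)
    mask = ((1 << i) - 1) ^ ((1 << lo) - 1)
    return ((a & mask) + (b & mask)) >> i

random.seed(1)
trials = 50000
misses = sum(
    exact_carry(a, b, 16) != predicted_carry(a, b, 16, t=8)
    for a, b in ((random.getrandbits(32), random.getrandbits(32))
                 for _ in range(trials)))
print(misses / trials)  # well below 1%: an 8-bit look-back rarely mispredicts
```

A misprediction requires both a carry generated below the window and a run of propagate bits across the whole window, which is why the rate falls off so quickly with t.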
This structure gives a high number of possibilities for the granularity of the reconfigurability and for the parallelism of the carry-in prediction. To conclude about its structure, GDA proposes two levels of configuration:

• the possibility to select the use of carry-in prediction or exact adding between each sub-adder, and
• the possibility to select the depth of carry-in prediction when this mode is active.

By structure, GDA offers a fine-grained control of the level of error that can be tolerated at its output, and potentially offers a good prediction of the carry despite an important area penalty for control. In [START_REF] Ye | On reconfiguration-oriented approximate adder design and its application[END_REF], the authors give very complete comparative simulation results of the 32-bit GDA compared to exact adders, RCA and CLA, and other approximate adders, Variable Latency Carry Select Adder (VLCSA-1) [START_REF] Du | High performance reliable variable latency carry select addition[END_REF] and LPA [START_REF] Lu | Speeding up processing with approximation circuits[END_REF] described in Section 1.4.1.1, leveraging worst-case error, error rate and average error. Results are given in Table 1.17. All simulations are based on one million randomly-generated inputs, so they have to be taken with caution, especially the worst-case error. All adders are static, meaning AC2A and GDA are not generated using their reconfigurable versions: the configuration is set before implementation. M_A corresponds to the mode AC2A is operating in, as described in Table 1.16. For the test and for each mode, AC2A was implemented in a non-pipelined version in order to get fair area and delay measurements. M_B and M_C are the parameters for GDA. M_B indexes the number of bits of each sub-adder (respectively 4, 8, 12 or 16) and M_C the number of prediction bits (respectively 4, 8, 12 or 16).
From this point and for the rest of this section, AC2A_i will denote AC2A in mode M_A = i, and GDA_(i,j) will denote GDA in mode (M_B, M_C) = (i, j). When comparing all exact adders, RCA, CLA, AC2A_4 and GDA_(4,4), the two last exact adders achieve a better delay than the classical others, but with an important area and power overhead. For instance, GDA_(4,4) is 42.3% faster than RCA, but with 44.5% area overhead and twice the operating power. Compared to AC2A_4, GDA_(4,4) occupies 66.45% more area and consumes 7.97% more power, but is 5.93% faster. It shows that, in their exact versions, neither AC2A_4 nor GDA_(4,4) is efficient in anything but delay. The accuracy-configurable implementations of AC2A and GDA are compared in Table 1.18, showing the area for each reconfigurable operator, as well as the power and delay corresponding to all their operating modes. Delay* corresponds to the delay when applying voltage scaling to each operator and each mode in order to have the same power consumption as the highest one, occurring for GDA_(4,4). In their reconfigurable versions, it can be observed that GDA costs 19.43% less area than AC2A. In accurate mode, GDA is only 0.90% slower than AC2A because of the higher number of multiplexers in the critical path. Results from Table 1.17 and Table 1.18 highlight the benefits of GDA compared to AC2A for an equivalent power (Delay* rows in Table 1.18) and for each of the three metrics considered. Figure 1.43 shows the benefits of GDA, selecting the optimal accuracy configuration for each of these metrics. In conclusion, the slight delay overhead in the GDA structure allows significant reductions of the error, contrary to AC2A. However, it has to be kept in mind that each of the curves of Figure 1.43 does not represent the entirety of GDA but only the optimal points.
This means for instance that the ideal configuration for optimizing the worst-case error is not the same as for the error rate or the average error. Therefore, there is no optimal configuration for GDA considering all metrics at the same time, and so GDA accuracy has to be configured knowing the priority metric to take into consideration. In Figure 1.43:

• AC2A worst-case error is compared to GDA_(1,1), GDA_(2,1), GDA_(3,1), GDA_(4,1) and GDA_(4,4).
• AC2A error rate is compared to GDA_(1,1), GDA_(1,2), GDA_(1,3), GDA_(1,4) and GDA_(4,4).
• AC2A average error is compared to GDA_(1,1), GDA_(2,2), GDA_(3,3) and GDA_(4,4).

Addition Using Approximate Full-Adder Logic

All the approximate adders previously described leverage modifications of the addition function, always cutting off carry propagation in different ways, except for ETAI (see Section 1.4.1.2). In this section, the adders are built considering modifications of the Full Adder (FA) function. As developed in Section 1.3.3, the FA function is the basic cell most used in binary addition since it produces a 1-bit addition. Several possible modifications of the FA cell are proposed in [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF], and more particularly of the Mirror Adder (MA) circuit implementing FA logic (see Figure 1.44a). Indeed, being able to remove transistors in an FA cell while modifying its binary logic function as little as possible can potentially induce high benefits in terms of speed, area, and energy savings, because of the omnipresence of this cell in numerous adder designs. First, transistors are removed one by one from the conventional MA to find a configuration where all sets of inputs x, y and c_in give in as many cases as possible the good set of outputs z and c_out. The best result according to the author was obtained removing 8 transistors, which leads to the resulting circuit of Figure 1.44b.
Then, the final Approximate Mirror Adders (AMAs) are designed following a series of observations about the FA truth table:

• The Approximate Mirror Adder type 1 (AMA1) is based on the observation that z equals the complement of c_out for 6 cases out of 8. Therefore, the z calculation part is suppressed and z is driven from the complemented c_out node through a buffer, in order to limit capacitance for latency and/or power-efficiency purposes. The AMA1 transistor view is shown in Figure 1.44c. A 44.5% area gain is observed for AMA1 compared to the traditional MA.
• The Approximate Mirror Adder type 2 (AMA2) is based on the observation that for 6 cases out of 8, c_out = x is verified (or c_out = y by symmetry). Therefore, an inverter is used on x to calculate c_out from the simplified MA of Figure 1.44b, producing the transistor view of Figure 1.44d. A 41.2% area gain is observed for AMA2 compared to the traditional MA, which is nearly as good as AMA1 thanks to its shorter critical path.
• The Approximate Mirror Adder type 3 (AMA3) is obtained from AMA2. In AMA2, there are 3 errors for z in its truth table. The idea of AMA3 is to reduce the dependency between c_in and z. A good way to do it is to force z = y. Structuring the adder this way also ensures c_out to be correct when z is correct. AMA3 is 66.7% smaller than the traditional MA, which is much better than AMA1 and AMA2, but it comes at the cost of a much higher approximation.

Truth tables of AMA1, AMA2 and AMA3 are given in Table 1.19. They show that AMA1 globally generates fewer errors than AMA2, and AMA2 fewer than AMA3. Obviously, the case in which each of these errors occurs is very important for the global accuracy.
For instance, it is potentially more costly in terms of error to accidentally generate a carry on c_out than not to propagate a carry that should have been propagated, or than to give a wrong result on z, since the error is possibly propagated to a higher-significance output bit. Because of the potentially high-amplitude errors that can be generated by chaining AMAs, AMA cells cannot be used exclusively in an approximate adder design if an acceptable minimal output accuracy is to be reached. That is why the author only presents designs where the LSB part alone is made of AMAs. In [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF], the MSB part is composed of conventional exact FA cell instances forming an RCA. The adders produced this way are denoted as IMPrecise Adder for low-power Approximate CompuTing (IMPACT). However, in general, it is possible to use any of the accurate adder structures presented in Section 1.3.3. The designs of the AMAs being fixed, the only way to modify accuracy is to modify the number of approximated LSBs. In [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF], image-processing tests are performed with different approximated LSB lengths and compared to the FxP truncation method in terms of Peak Signal-to-Noise Ratio (PSNR), power and area savings with respect to the exact method. Results for the Discrete Cosine Transform (DCT) and Inverse DCT experiment of [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF] are given in Figure 1.45. It can be noticed that the operating voltages for all AMAs and truncated adders are very similar, so the results are quite fair. Results of Figure 1.45 show that AMA3 is the most accurate in terms of PSNR, followed by AMA2, AMA1 and finally truncation.
Truncation being the worst is not surprising since it virtually sets all LSBs to 0, but the order of accuracy of the three AMAs would be expected to be the opposite, since AMA1 corresponds to the slightest approximation and AMA3 to the largest one. However, no other accuracy metric is used, so the nature of the performed error is unknown, though we could suspect the AMAs to produce a salt-and-pepper-like noise because of the nature of their design. Moreover, using 20-bit adders on an 8-bit image raises questions about the legitimacy of the experiment. In terms of power, truncation is obviously more efficient than AMA-based adders, but IMPACT shows a good power efficiency, especially for a high number of approximated LSBs. The most power-efficient is AMA3, followed by AMA1 or AMA2 depending on the situation. For the area results, truncation does not appear. A good estimation of truncation area savings can be obtained considering that, for x truncated LSBs on a 20-bit adder, the area benefit is (x/20) × 100 percent. Therefore, we can assume area savings for 7, 8 and 9 truncated LSBs are respectively 35%, 40% and 45%, which is about 25% higher than AMA3, partially explaining the energy-savings results. However, results show that area savings are twice as high for AMA3 as for AMA2 and AMA1, which are quite similar. Further results implementing MPEG video compression are given in [START_REF] Gupta | Impact: Imprecise adders for low-power approximate computing[END_REF] and tend to confirm the results obtained in the DCT/IDCT experiment.

[Table 1.19 - Truth tables of the accurate FA (inputs x, y, c_in; accurate outputs z, c_out) and of the approximate outputs (z_1, c_out,1) of AMA1, (z_2, c_out,2) of AMA2 and (z_3, c_out,3) of AMA3]

As a conclusion, IMPACT adders are power-efficient approximate adders, but their accuracy compared to truncation still needs to be tested in legitimate experimental settings, which is done in Chapter 4.
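The IMPACT idea of approximating only the LSB cells can be modeled functionally. The sketch below uses only the AMA1 behavior described above (carry kept exact, sum tied to the complement of c_out); the 8-bit width, the split point and all names are our own choices, and AMA2 or AMA3 models would plug in the same way:

```python
def fa_exact(x, y, c):
    return x ^ y ^ c, (x & y) | (c & (x | y))

def fa_ama1(x, y, c):
    # AMA1 model: c_out computed exactly, z replaced by NOT c_out
    # (wrong only for input patterns 000 and 111)
    cout = (x & y) | (c & (x | y))
    return 1 - cout, cout

def impact_add(a, b, n=8, n_approx=3):
    # ripple-carry adder with AMA1 cells on the n_approx LSBs, exact FAs above
    s, c = 0, 0
    for i in range(n):
        cell = fa_ama1 if i < n_approx else fa_exact
        z, c = cell((a >> i) & 1, (b >> i) & 1, c)
        s |= z << i
    return s | (c << n)

print(impact_add(100, 27, n_approx=0))  # 127, fully exact
print(impact_add(1, 2, n_approx=3))     # 7: exact is 3, AMA1 sets z=1 on input 000
```

Sweeping `n_approx` reproduces the accuracy/effort knob discussed above: the more LSB cells are approximated, the larger the possible error amplitude.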
Approximate Adders by Probabilistic Pruning of Existing Designs

In the previous sections, all proposed adders were original designs. However, the existence of a deep literature about integer arithmetic raises the following question: is it possible to take any adder circuit and automatically transform it into an approximate one? This question is addressed in [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF]. The idea is to iteratively prune little parts of a design in order to trade accuracy for area, power, and potentially delay. Of course, pruned elements must not be chosen randomly, and so strategies are leveraged. Two parameters are considered for design pruning: activity and significance. Indeed, power consumption is proportional to activity, therefore it is more interesting for a high-activity gate to be removed first. For arithmetic operators, each output is twice as significant as its direct less significant neighbor, and so an error occurring at its position has twice as much impact on the output error as an error on this neighbor. Assuming this, every transistor, gate, or group of gates can be assigned a cost function, which is the value of its activity multiplied by its significance. This way, they can be sorted in increasing order to generate a priority list for pruning. The cost functions can also be modified considering all operator outputs to have the same weight. However, we will only consider weighted outputs, since this is the most common situation in signal processing. For automated design, an error target must be given as a parameter. In [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF], two error parameters are allowed in the framework:

• Error Rate = Number of erroneous computations / Total number of computations.
• Relative Error Magnitude = (1/ν) ∑_{k=1}^{ν} |z_k − z′_k| / z_k, where ν is the number of possible sets of inputs, z_k the accurate output for a given input set k, and z′_k the output for the same input set considering the pruned design.

Once the error nature is chosen and the target defined, the probabilistic pruning optimization process follows the flowchart given in Figure 1.46. In [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF], the method is applied to parallel-prefix adders, which is convenient as they are defined by a grid of base cells (see Section 1.3.3), and thus probabilistic pruning is easy to apply, considering each of these parallel-prefix base cells to be the unitary block for the method. Figure 1.47 shows the results for a pruned 16-bit Kogge-Stone adder with a −20 dB relative-error accuracy goal. For this adder, activity increases with the considered level of the parallel-prefix graph, while the significance logically increases from LSB to MSB on each level. Thus, cells are pruned from the lower-right corner to the upper-left corner. It can be seen that to reach the −20 dB relative error, 10 cells were pruned out of a total of 49, which represents more than 20% area savings. The probabilistic pruning method is applied to several classical adders such as RCA, CSA, KSA, HCA and LFA in [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF]. All error estimations were obtained by functional simulation, whose number of sets of inputs is unknown. An analytical method was used to compute the activity estimation, assuming each input has a 0.5 activity. Globally, the best benefits in the Energy-Delay-Area (EDA) product are 2×-7.5× compared to the original adder, with a relative error of respectively 10% and 10⁻⁶ %, these best results being obtained for KSA and HCA.
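Both error targets above are easy to evaluate by simulation. In this sketch (our code; the "pruned" operator is a toy 8-bit adder whose LSB cell is removed, not one of the paper's pruned prefix adders), the two metrics are measured exhaustively:

```python
def error_metrics(exact_fn, pruned_fn, inputs):
    # Error rate: fraction of input sets producing a wrong output.
    # Relative error magnitude: (1/nu) * sum over k of |z_k - z'_k| / z_k.
    wrong = rel_sum = 0
    for args in inputs:
        z, zp = exact_fn(*args), pruned_fn(*args)
        if z != zp:
            wrong += 1
        if z:  # skip z = 0 to avoid dividing by zero
            rel_sum += abs(z - zp) / z
    return wrong / len(inputs), rel_sum / len(inputs)

exact = lambda a, b: (a + b) & 0xFF   # 8-bit modular adder
pruned = lambda a, b: (a + b) & 0xFE  # LSB cell pruned: bit 0 forced to 0
pairs = [(a, b) for a in range(256) for b in range(256)]
rate, magnitude = error_metrics(exact, pruned, pairs)
print(rate)       # 0.5: exactly half of the sums are odd
print(magnitude)  # small mean relative error despite the high error rate
```

The toy adder already illustrates why the two targets differ: the error rate is high (one sum out of two is wrong), while the relative error magnitude stays tiny because only the LSB is affected.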
Results for these adders are given in Figure 1.48. The most important gains are obtained on delay, thanks to shorter critical paths. Despite a lack of detailed results, we can assume, by interpolating the points of the result curves, that quite an important benefit can be observed even when trading a small amount of accuracy. As an example, for a 10⁻² relative error, a factor of more than 3× is reached on the relative EDA. The main issue with this method is the use of the mean relative error as a target, which does not prevent the operator from making high-amplitude errors. As an example, the Weighted-Pruned Kogge-Stone Adder (WPKSA) presented in Figure 1.47 can make an error on its MSB for a certain number of input sets, causing an extremely high error, which cannot be tolerated depending on the considered application. The list of approximate adders presented in Section 1.4.1 is far from exhaustive; many others exist in the literature. However, the ones presented here are representative of the general trend of approximate adders. Most are derived from classical adders, with different ways to cut carry chains and accelerate carry propagation. Some designs come with error detection and correction circuits, leading to another kind of approximate adders, the configurable adders, which can take different accuracy targets at run time. Finally, an interesting automated method for approximate circuit generation stands out from the crowd [START_REF] Lingamneni | Energy parsimonious circuit design through probabilistic pruning[END_REF], applied to adders in the original paper but applicable to any signal-processing circuit. The next section presents a subset of the literature on approximate multipliers.

Approximate Integer Multiplication

In this section, a subset of the literature dealing with approximate multipliers is developed.
As seen previously in Section 1.3.4, the most important part of the multiplication structure is the reduction of the summand grid, performed with adders (generally CSAs). Therefore, a high quantity of different multipliers can be derived using approximate adders such as the ones presented in the previous section. In this document, we will try to highlight more original multipliers leveraging more creative ideas from which they can benefit.

Approximate Array Multipliers

In this section, three versions of approximate array multipliers are presented. As a reminder, accurate array multipliers are presented in Section 1.3.4. A 6-bit two's-complement signed array multiplier is depicted in Figure 1.20. Accurate array multipliers are neither the fastest nor the smallest multipliers. However, their periodic structure has a compact hardware layout, thanks to short wiring, and allows for efficient pipelining. This advantage makes the array multiplier one of the most used in embedded Systems on Chip (SoCs). The three Approximate Array Multipliers (AAMs) of this section are Fixed-Width Multipliers (FWMs), meaning that if their inputs are of width n, their output is also of width n, instead of the 2n it should be to be perfectly accurate. Only the n MSBs are kept. In practice, most multipliers are FWMs, as data width is generally constant in processing units. The first AAM will be denoted by Approximate Array Multiplier I (AAMI) and was proposed in [START_REF] Kidambi | Area-efficient multipliers for digital signal processing applications[END_REF]. As for the two others described below, the idea is to prune the least significant cells, i.e. the ones responsible for the computation of the non-kept output LSBs. The resulting two's-complement signed operator is given in Figure 1.49. More than half of the base cells were pruned, and the diagonal AFAs were changed into AHAs since they have one less input in the AAMI version.
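The error behavior of this cell pruning can be modeled functionally. The sketch below is purely illustrative and unsigned (whereas AAMI is signed): it discards every partial-product bit of weight below 2^n, as the cell pruning does, and measures the average error (bias) in output LSBs, which grows roughly linearly with n:

```python
def pruned_mul(a, b, n):
    # keep only partial-product bits a_i*b_j with i + j >= n, then keep
    # the n MSBs (the pruned cells can no longer contribute any carry)
    kept = sum(((a >> i) & 1) * ((b >> j) & 1) << (i + j)
               for i in range(n) for j in range(n) if i + j >= n)
    return kept >> n

def bias(n):
    # average of (truncated exact product - pruned product), in LSBs,
    # over all unsigned n-bit input pairs
    total = sum((a * b >> n) - pruned_mul(a, b, n)
                for a in range(1 << n) for b in range(1 << n))
    return total / (1 << (2 * n))

for n in (4, 5, 6):
    print(n, round(bias(n), 3))  # the bias grows roughly linearly with n
```

The pruned region contains on the order of n²/2 partial-product bits, whose expected contribution at weight 2^n scales linearly with n, which is what the exhaustive measurement shows.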
The mathematical calculation of the error bias is given in [START_REF] Kidambi | Area-efficient multipliers for digital signal processing applications[END_REF]. The author also showed that the bias and the variance of the error are linear with the size of the AAMI. The basic pruning proposed by AAMI was then improved in [START_REF] Jou | Design of low-error fixed-width multiplier for dsp applications[END_REF] with the Approximate Array Multiplier II (AAMII) presented below, and in [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF] with the Approximate Array Multiplier III (AAMIII). AAMII adds to AAMI a correction circuit on the diagonal to reduce the bias of the error. This correction circuit is made of very few gates, since it is composed of n very simple cells on the diagonal: the first one is an AND gate, followed by n − 2 AAO cells. Table 1.20 [START_REF] Jou | Design of low-error fixed-width multiplier for dsp applications[END_REF] gives accuracy results comparing AAMI and AAMII using the maximal absolute error AE_max, and the AP for a given MAA, this metric being explained in Section 1.4.1.2. These results show that AAMII has a much lower maximal error than AAMI thanks to the bias-reduction circuit, and its maximal error increases more slowly when the size of the multiplier increases. The AP is also much higher for AAMII, and the difference between both multiplier designs seems to increase with the MAA. As an example, for a 16-bit multiplication, 77.9% of the outputs will be less than 0.01% away from the expected result, against only 35.2% for AAMI. To conclude about AAMII, this second version achieves much better performance in terms of maximal error as well as in terms of acceptance probability, with a very small area and delay overhead. AAMII also has an area which is nearly half that of the classical non-fixed-width array multiplier, but slightly larger than AAMI.
A third improvement of the signed version of the AAM, referred to as AAMIII, is proposed in [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF]. As in AAMII, the idea is to lower the bias induced by the operator truncation, but more effectively. To reach an optimal efficiency, a method is presented to build a fixed-width array multiplier with reduced maximum error, average error and error variance, with no overhead for the correction circuit compared to AAMII. The minimization of the bias is exclusively performed by feeding the array multiplier's diagonal with AND and NAND gates (instead of OR gates in AAMII) and adjusting using a constant input on the last FA line. In order to find the most effective configuration, an exhaustive search of the bias is first performed. For a given set of inputs (x_i, y_j), i, j ∈ ⟦0, n−1⟧, the best error correction term towards the AAMI structure, C_t, can be written as [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF]

  C_t = Σ_{i=1}^{n−2} ⟨a_{n−i−1} b_i⟩_{q_{n−i−1}} + ⌊K⌋        (1.23)

with

  ⟨T⟩_{q_k} = T if q_k = 0, T̄ if q_k = 1        (1.24)

where the value of K depends on the input values. The first part of the correction term given by Equation 1.23 can be achieved by inserting simple AND and NAND gates on the AAMI diagonal. When these gates are set, the only way to affect the multiplication result is to set the input bit on the last FA line to 0 or 1. When this value is set to a given value p ∈ {0, 1}, two correction term cases occur depending on the inputs, given by

  C_t = Σ_{i=1}^{n−2} ⟨a_{n−i−1} b_i⟩_{q_{n−i−1}} + ⌊p⌋,  if θ < n
  C_t = Σ_{i=1}^{n−2} ⟨a_{n−i−1} b_i⟩_{q_{n−i−1}} + ⌊p̄⌋,  if θ = n        (1.25)

where

  θ = Σ_{i=0}^{n−1} ⟨a_{n−i−1} b_i⟩_{q_{n−i−1}}        (1.26)

Therefore, the values of q_k, k ∈ ⟦0, n−1⟧, need to be fixed so that the compensation offset defined by the last-line input is as near as possible to p ∈ {0, 1} in the first case of Equation 1.25, and to p̄ in the second case.
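The ⟨·⟩_q selection used in Equations 1.23–1.26 (keep a bit or complement it depending on a configuration bit q_k) reduces to a XOR for 1-bit values. The hypothetical helpers below (ours, not part of [Van]) make this explicit:

```python
def sel(t, q):
    """<T>_q from Equation 1.24: T when q = 0, its complement when q = 1.
    For 1-bit values this is simply T XOR q."""
    return t ^ q

def theta(a_bits, b_bits, q_bits):
    """theta from Equation 1.26: sum over the diagonal of the selected
    partial-product bits a_{n-i-1} * b_i, under configuration q."""
    n = len(a_bits)
    return sum(sel(a_bits[n - i - 1] & b_bits[i], q_bits[n - i - 1])
               for i in range(n))
```

With the optimal AAMIII configuration q_0 = q_{n−1} = 1 and the inner q_k = 0, an all-zero input already yields θ = 2, the contribution of the two complemented end cells.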
To determine this case, an exhaustive search is performed, testing all combinations of (q_0, ..., q_{n−1}) for all input combinations in order to determine the best values of K_{θ<n} and K_{θ=n} minimizing the bias for each combination. Figure 1.52 [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF] shows the example of an exhaustive search of K_{θ<n} and K_{θ=n} for a 6-bit multiplier. Once this exhaustive search is performed, the case nearest to K_{θ<n} ∈ {0, 1} and K_{θ=n} = K̄_{θ<n} is determined. This case defines which combination of q_k is chosen, and what the optimal value of the last-line input bit is. In Figure 1.52, the vertical red line shows the index of the optimal values of K_{θ<n} and K_{θ=n}. After exhaustive experimentation, the author analytically shows that for any n-bit multiplier the optimal configuration is always:
• q_0 = q_{n−1} = 1 and ∀k ∈ ⟦1, n−2⟧, q_k = 0, and
• last-line input = 1 for n high enough (case-by-case simulation is needed for low values).
Taking that into account, the author gives the structure of a signed 8-bit AAMIII, as shown in Figure 1.53. FA, AFA, NFA and A cells are the previously described cells. The ND-ND cell is composed of two NAND gates, arranged as in Figure 1.51c. Applying the method described above, AAMIII of widths 4 to 12 are generated and compared with the two previous versions in [START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF]. The accuracy results in terms of maximum error E_M, average error µ_e and error variance σ²_e are given in Table 1.21. In this table, only the signed versions of the three AAMs were studied. There are slight benefits with AAMIII in terms of maximal error compared to AAMII, but the relative gains decrease when the width of the operator increases.
However, the benefits in terms of mean and variance of the error, and therefore in terms of error power, are very important and increase with the width of the operator. E.g., for a 12-bit operator, the power of error is reduced by 52.0% comparing AAMIII to AAMII, which is a huge gain considering there is no area overhead, as shown in Table 1.22. For larger operators, all AAM area ratios decrease towards 0.5, and so the area overheads of AAMII and AAMIII towards AAMI become negligible. The error maps of 4-bit, 8-bit and 16-bit AAMIII are visible in Figure 1.54. On these error maps, four squared areas are clearly visible. The lower-left square, corresponding to low-amplitude inputs, returns low-amplitude error. The three other squares show globally much higher error, with patterns that differ with the operator's width. We can then conclude about AAMs that:
• AAMI has brought an interesting way to reduce by a factor of 2 the area of an n-bit fixed-width parallel multiplier, removing the least significant part of the operator.
• AAMII has proposed an improvement of AAMI by adding simple cells on the operator diagonal.
• AAMIII has proposed an optimized modification of the diagonal cells in order to set the approximate operator bias to its minimum, achieving great gains in terms of accuracy with no area overhead.
As a consequence, the AAMIII design method is a very efficient way to build an approximate n-bit fixed-width array multiplier with a nearly-50% area reduction.

Error-Tolerant Multiplier

The Error-Tolerant Multiplier (ETM) [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF] is inspired by the principle of ETAI [START_REF] Zhu | Design of low-power highspeed truncation-error-tolerant adder and its application in digital signal processing[END_REF] studied in Section 1.4.1.2.
Indeed, it is composed of two parts:
• the MSB part, which is a conventional accurate multiplier, and
• the LSB part, which is designed for very fast approximation.
For the accurate MSB part, as an n × n multiplier has a O(n²) area complexity, dividing the number of digits of the accurate part by a factor k ensures a k² area saving. Therefore, the benefit of reducing the accurate multiplication part is much bigger than for adders. For the approximate LSB part, an approximation resembling ETAI, described by Algorithm 1 page 44, is used: the inputs of the approximate part are read from MSB to LSB, performing a logical OR until two 1s are met on the same position i. When this happens, all outputs from i down to 0 are set to 1. An illustration of the process is shown in Figure 1.55. ETM also embeds a system detecting whether the MSB part of the inputs has at least one bit worth 1. If there is none (meaning it is a low-amplitude multiplication), then the accurate multiplier is used for the multiplication of the LSB part. This way, the calculation is 100% accurate with no overhead on the design area, except for multiplexing the inputs. Therefore, contrary to ETAI, ETM performs well for low-amplitude inputs. The system view of ETM can be seen in Figure 1.56. For a good use of this system, the accurate and approximate parts of the multiplier need to be of equal width. The LSB approximation part is achieved the same way as in ETAI thanks to a control block (see Figure 1.31). The only difference is that all sub-blocks are CSGCI blocks (see Section 1.4.1.2), meaning the control signal is not propagated as fast as for ETAI. This choice is probably due to the fact that the approximation part is faster than the accurate multiplication part, so there is no need to reduce the delay of the control block, which would imply an area and power overhead. The accurate multiplication is performed by an array multiplier of size n/2.
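The LSB approximation just described translates directly into a few lines of Python. This is a behavioral sketch of the ETM approximate part, not the control-block hardware:

```python
def etm_lsb_approx(a, b, m):
    """ETM-style approximation of the m-bit LSB part: scan both inputs
    from MSB to LSB, emitting a OR b per position, until two 1s are met
    at the same position i; then force bits i..0 of the output to 1."""
    out = 0
    for i in range(m - 1, -1, -1):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        if bit_a & bit_b:
            out |= (1 << (i + 1)) - 1   # saturate positions i down to 0
            break
        out |= (bit_a | bit_b) << i
    return out
```

For instance, with m = 4, inputs 0b1010 and 0b0110 produce 0b1111: positions 3 and 2 contribute their OR, then the two 1s at position 1 saturate the remaining bits.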
The evaluation of accuracy is performed using AP for a given MAA in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. This metric is presented in Section 1.4.1.2. Figure 1.57 presents the results for a MAA range from 90% to 99%, using ETMs of input width from 4 to 20 bits. We can suppose these ETMs have the same length n/2 for their accurate and approximate parts. Results were obtained by simulation of 65,000 sets of inputs for the 20-bit multiplier, and 6,500 for the others. Such a choice can be discussed, the number of points being objectively too low to ensure all approximation cases are met. On the graph, what can first be noticed is that small-width ETMs have a very low accuracy, with less than 20% AP for a 95% MAA for the 8-bit ETM for instance. For larger operators such as 16-bit or 20-bit, AP seems stable above 90% MAA, but the 16-bit AP curve dramatically decreases after 97%. Results for lower MAA can be found in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. To conclude about accuracy, the ETM model seems quite well adapted to large operators, small ones often generating high errors relative to their computing range. For a 99% MAA, the 20-bit ETM seems to be the minimal operator for a 90% AP. Of course, the interest of large widths in approximate computing is questionable; answers are given in Chapter 4. (Figure: a worked example of a 12-bit ETM binary multiplication, contrasted with a 6-bit parallel multiplier, is not reproduced here.) Design simulations were performed on a 0.18 µm CMOS process with a 10 MHz frequency. The comparison was made between a conventional 12-bit array multiplier and a 12-bit ETM with a 6-bit accurate part and a 6-bit approximate part. Power and delay were reported for five sets of inputs; the PDP of the five corresponding operations is given in Figure 1.58.
The detailed results are given in [START_REF] Kyaw | Low-power high-speed multiplier for errortolerant application[END_REF]. These results show that energy consumption strongly depends on the inputs. On the five tested sets of inputs, the energy consumption of ETM is 75% to 90% lower than that of the conventional 12-bit multiplier, though we could regret the very low number of performed tests. Looking at detailed test results, we can see there is nearly no improvement in calculation delay, whereas the power improvement is quite important. In terms of area, the 12-bit ETM covers 491 µm² whereas the parallel multiplier covers 1028 µm², more than twice as much. Since a 12-bit array multiplier is roughly four times as large as its 6-bit version, this means that the approximation part has an area overhead nearly equivalent to the exact part area. To conclude about this operator, ETM proposes an n-bit approximate multiplication using an n/2-bit accurate multiplier, which can be used for the MSB part or the LSB part of the computation depending on the input range. This allows for high energy and area savings, with quite good accuracy results for large-width operators. Once again, using a unique scalar metric does not give details about the nature of the error, and this operator, by nature, can produce high-amplitude errors on a small number of worst-case input sets. It can be noticed that ETM has an area comparable to that of an AAM version II or III (see Section 1.4.2.1), but ETM is an n × n → 2n multiplier whereas AAMs are only n × n → n. However, this does not at all ensure that ETM is more accurate than the best of them (AAMIII), and more tests would need to be performed to determine its accuracy.

Approximate Multipliers using Modified Booth Encoding

As discussed in Section 1.3.4, modified Booth encoding allows the number of partial products of the summand grid to be significantly reduced.
The usual Radix-4 encoding divides the number of partial products by two for an even input width. As the critical part of a multiplier is the carry-save reduction of the partial products, using modified Booth encoding has good potential for area, delay and power saving. Therefore, in the context of approximate multipliers in low-power applications, using this technique as a starting point is a potentially efficient way to save energy. In [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF], the authors present a fixed-width Booth multiplier with a simple error-correction system. In a FxP context, fixed-width multipliers are obtained by truncating the LSB half of the multiplication output. A classical approximation for fixed-width multipliers consists in removing the LSB half of the partial products and compensating the induced bias by removing an adapted constant. In [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF], error compensation is performed by keeping a few recombined cells of the most significant column of the LSB part. Therefore, the usual constant output bias is replaced by an input-dependent bias. To determine how error-compensation cells should be recombined, the authors studied the number of carries generated at rank n−1 as a function of the number of ones in the summand grid at rank n−1. The authors statistically determined that the best choice to minimize the error is to make the number of generated carries equal to the number of ones. Therefore, every rank-(n−1) cell in the summand grid is kept without any recombination. The resulting summand grid is given by Figure 1.59.
From this point, the column of rank n−1 in the summand grid will be referred to as LP_major (Low Part, major position) and all the lower-significance elements as LP_minor (Least significant Part, minor position). The most significant part, corresponding to the n bits which are always kept for the fixed-width multiplier's output calculation, will be referred to as MP (Most significant Part, major position). The proposed 8-bit multiplier has 46% fewer gates than the accurate equivalent. In [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF], accuracy results are given in terms of Signal-to-Noise Ratio (SNR). For a 16-bit multiplier, the SNR is 76.64 dB. More detailed results about this multiplier are respectively given in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] and [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF], reported in Tables 1.28 and 1.30, and it is denoted as Fixed-width modified-Booth-encoded Multiplier version I (FBMI). Another fixed-width Booth multiplier with input-dependent error correction, denoted here as Fixed-width modified-Booth-encoded Multiplier version II (FBMII), is proposed in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF]. In FBMII, only the Booth encoder control outputs are considered for error correction. The idea is to perform a statistical approximation of the carries generated by the truncated part as a function of the coded input, in order to design a well-adapted error compensation system. The control bits associated with each Radix-4 modified-Booth-encoded symbol are recalled in Table 1.23.

Table 1.23 – Equivalence between Radix-4 modified-Booth-encoded symbol Y_i and control bits in partial product generation

  Y_i | X_sel | 2X_sel | NEG
  ----+-------+--------+----
   −2 |   0   |   1    |  1
   −1 |   1   |   0    |  1
    0 |   0   |   0    |  0
    1 |   1   |   0    |  0
    2 |   0   |   1    |  0

For each Booth-encoded symbol Y_i, a flag Y′_i is defined as

  Y′_i = 1 if Y_i ≠ 0, and 0 otherwise.
(1.27)

Therefore, for a given chain of symbols {Y_i}_{i∈LP}, the corresponding chain of bits {Y′_i}_{i∈LP} can be obtained with a very small area and delay overhead by performing a logical OR:

  Y′_i = X_sel,i | 2X_sel,i.        (1.28)

Then, statistics on the chain of bits {Y′_i}_{i∈LP} must be computed in order to find the best way to estimate the carries generated in the LP group. For this, S_LP, the sum of all weighted partial products of the LP group, is introduced and defined as

  S_LP = Σ_{0 ≤ 2i+j < n−1} p_{i,j} 2^{2+2i+j−n}.        (1.29)

Indeed, a good estimation of S_LP for a given input allows a good correcting bias to be injected into the MP part. For this, the exact carries of LP_major are propagated and the carries of LP_minor are estimated. S_{LP_minor} is defined the same way as S_LP but restricted to LP_minor. First, a method leveraging exhaustive simulation is proposed in [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF]. The idea is to list all occurrences of all possibilities for {Y′_i}_{i∈LP_minor} and to calculate the rounded statistical mean of S_{LP_minor} for each of these occurrences. E.g., for n = 10, there are 108 ways to obtain {Y′_i}_{i∈LP_minor} = (1, 0, …). The resulting approximate carries can be expressed as

  a_carry_0 = Y′_3 | Y′_2 | Y′_1 | Y′_0
  a_carry_1 = Y′_3 Y′_2 (Y′_1 | Y′_0) | Y′_1 Y′_0 (Y′_3 | Y′_2)        (1.30)

This circuit can be implemented using 8 basic logic gates. Then, the value of the carry to propagate to LP_major is obtained by rounding. As each Y′_i can only be 0 or 1, the maximal value for the propagated carry is ⌈2^{−1}(n/2 − 1)⌉, and so the number of binary approximated carries N_a_carry is always ⌊n/4⌋. A methodology for the error correction system can then be derived:
1. For an n-bit FBMII, the number of approximated carries is N_a_carry = ⌊n/4⌋, denoted as a_carry_0, a_carry_1, ..., a_carry_{N_a_carry−1}.
2. ∀i ∈ ⟦0, N_a_carry − 1⟧, a_carry_i = 1 ⟺ (Σ_{j=0}^{n/2−2} Y′_j) ≥ (2i + 1).
3.
The approximate carry circuit is designed by defining the compensation carry logic, e.g. using a Karnaugh map.

Using this method, accuracy results are given in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] in terms of error mean and variance for n = 10 and n = 12. ACGPII simplifies the generation of the approximate carries into the following procedure:
1. The Y′_i bits are partitioned into groups of three (the last group possibly being smaller).
2. Each group's bits are summed using a FA (or a HA, or a wire for the last group).
3. At each FA output, the carry signal c is an approximate carry, and the sum signal s is summed with the ones from the other groups.
4. The process is repeated until only one sum signal is left.
5. Finally, 1 is added to the last adder.
With ACGPII, the original ACGPI is slightly modified by a new higher-level approximation, but the sum of the generated carries remains the same, as shown for n = 8 in Table 1.26. The design for n = 32 is given in Figure 1.62. With only 7 FAs and 1 HA, ACGPII represents a very low overhead in terms of area. Moreover, with only three levels, it adds only a small delay to the critical path. A comparison of delays and areas using ACGPI and ACGPII for different operator sizes is provided in Table 1.27. For n ≤ 10, ACGPI is better in both domains than ACGPII, in addition to being more accurate by construction in the estimation of the carries. When the operator size grows, ACGPII gets much more efficient in delay and area. Indeed, for a 32-bit FBMII, ACGPII is 78% faster than ACGPI for a 56% area benefit. Delay and area comparisons of FBMI and FBMII using ACGPII were performed in [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF]. They show that their performance is nearly the same, with a slight advantage for FBMII/ACGPII on delay for all sizes and also on area from n = 14.
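The flag computation of Equation 1.28 and the gate-level carry expressions of Equation 1.30 can be cross-checked in a few lines of Python. The control-bit dictionary below is hand-written from the standard Radix-4 Booth encoding (cf. Table 1.23), and the checks confirm that the two expressions implement exactly the thresholds Σ Y′ ≥ 1 and Σ Y′ ≥ 3 of the methodology; this is our own sketch, not the ACGP circuits themselves.

```python
from itertools import product

# Radix-4 Booth symbol -> (X_sel, 2X_sel, NEG) control bits (Table 1.23)
CONTROL = {-2: (0, 1, 1), -1: (1, 0, 1), 0: (0, 0, 0),
           1: (1, 0, 0), 2: (0, 1, 0)}

def y_prime(symbol):
    """Equation 1.28: Y'_i = X_sel,i | 2X_sel,i (1 iff the symbol is nonzero)."""
    x_sel, two_x_sel, _neg = CONTROL[symbol]
    return x_sel | two_x_sel

def a_carry_0(y0, y1, y2, y3):
    """First approximate carry of Equation 1.30: at least one flag set."""
    return y0 | y1 | y2 | y3

def a_carry_1(y0, y1, y2, y3):
    """Second approximate carry of Equation 1.30: at least three flags set."""
    return (y3 & y2 & (y1 | y0)) | (y1 & y0 & (y3 | y2))
```

Over all 16 flag combinations, the two expressions agree with the threshold characterization, which is what makes them implementable with only 8 basic gates.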
Using Synopsys tools, FBMI and FBMII show very similar power consumption, at least for n = 10 and n = 12, with a bit more than 60% of the power consumption of an ideal fixed-width Booth multiplier, meaning a complete Booth multiplier with output truncation or rounding. Accuracy comparisons between rounding, truncation, FBMI and FBMII using ACGPII are given for n = 16 and n = 20 in Table 1.28. They show that the proposed FBMII/ACGPII significantly beats both truncation and FBMI in terms of accuracy. To conclude about FBMII, this fixed-width Booth multiplier proposes much better performance than FBMI with no area, power or delay overhead, thanks to the improved error compensation systems ACGPI and ACGPII. The first one consists in a statistical analysis of the carries generated in the truncated part as a function of the Booth-coded value of the input, whereas the second one is an approximate version of the first, allowing a dramatic reduction of the theoretical correction circuit size and delay.

Table 1.26 – Approximate carries of ACGPI and ACGPII for n = 8

  Y′_2 Y′_1 Y′_0 | ACGPI a_carry_0 a_carry_1 | ACGPII a_carry_0 a_carry_1
   0    0    0   |         0         0       |          0         0
   0    0    1   |         1         0       |          0         1
   0    1    0   |         1         0       |          0         1
   0    1    1   |         1         0       |          1         0
   1    0    0   |         1         0       |          0         1
   1    0    1   |         1         0       |          1         0
   1    1    0   |         1         0       |          1         0
   1    1    1   |         1         1       |          1         1

In [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF], Juang proposes a fixed-width Booth multiplier with a very low cost error correction system, based on the estimation of the LP_minor bits of the summand grid as a function of the LP_major bit values. For more convenience, the proposed multiplier will be denoted as Fixed-width modified-Booth-encoded Multiplier version III (FBMIII). This section presents the methodology for an 8-bit FBMIII as well as accuracy and area comparisons for different sizes of FBMIII and the previously described multipliers. To design an 8-bit FBMIII, the LP_minor levels need to be discriminated as shown in Figure 1.63.
The four level-weighted symbol strings of LP_minor are denoted as w, x, y and z, and can be expressed as

  w = Σ_{i=0}^{6} 2^{i−7} pp_{0,i},  x = Σ_{i=0}^{4} 2^{i−5} pp_{1,i},  y = Σ_{i=0}^{2} 2^{i−3} pp_{2,i},  z = 2^{−1} pp_{3,0}.        (1.32)

As a reminder, each symbol pp_{i,j} is in the set {−1, 0, 1}. The idea of the FBMIII error correction is to find the relation between the LP_major bits and these symbolic strings. For this, the best values of q_1, q_2 and q_3 must be found in order to get as close as possible to

  w + x ≈ q_1 × (pp_{0,7} + pp_{1,5}),  y ≈ q_2 × pp_{2,3},  z ≈ q_3 × pp_{3,1}.        (1.33)

Their best representative is their mathematical mean. The mean of q_2 can be decomposed as

  E[q_2] = Σ_{k=−1}^{1} q_{2,k} × P(pp_{2,3} = k),        (1.34)

where q_{2,k} is the optimally correcting value of q_2 for a given k. P(pp_{2,3} = k) can easily be computed for any value of k, assuming that each multiplier binary input is equiprobable. Table 1.29 gives the probabilities for all pp_{i,j} of LP to be worth k. Assuming these data, the calculation of E[y | pp_{2,3} = 1] is

  E[y | pp_{2,3} = 1] = E[2^{−1} pp_{2,2} + 2^{−2} pp_{2,1} + 2^{−3} pp_{2,0}]
                      = 2^{−1} E[pp_{2,2}] + 2^{−2} E[pp_{2,1}] + 2^{−3} E[pp_{2,0}]
                      = 2^{−1} × (1/2) + 2^{−2} × (1/2) + 2^{−3} × (1/2) = 0.4375.        (1.35)

Therefore, as y = q_{2,k} × k, the best value for q_{2,1} is 0.4375. The same computation and reasoning for k = −1 and k = 0 respectively give q_{2,−1} = −0.4375 and q_{2,0} ∈ ℝ. As q_{2,0} can take any value, 1 is chosen in an arbitrary manner. By injecting these three q_{2,k} into Equation 1.34, the mean of q_2 can be computed. Therefore, q_2 = 1 is chosen as the approximated coefficient for y. The same process is applied for q_1 and q_3, and finally we get

  q_1 = 0,  q_2 = 1,  q_3 = 1.        (1.37)

The ideal compensation value, taking both LP_major and LP_minor into account, is

  C_V^ideal = ⌊2^{−1}(pp_{0,7} + pp_{1,5} + pp_{2,3} + pp_{3,1}) + 2^{−1}(w + x + y + z)⌋.
(1.38)

Approximating w, x, y and z using q_1, q_2 and q_3 leads to

  C_V^app = ⌊2^{−1}(pp_{0,7} + pp_{1,5} + pp_{2,3} + pp_{3,1}) + 2^{−1}(q_1 × (pp_{0,7} + pp_{1,5}) + q_2 × pp_{2,3} + q_3 × pp_{3,1})⌋
          = ⌊2^{−1}(pp_{0,7} + pp_{1,5})⌋ + pp_{2,3} + pp_{3,1}.        (1.39)

In spite of taking into consideration both LP_major and LP_minor, the computation of the compensation value C_V^app is not more complex than the compensation value of FBMI, which only took LP_major into account, and so better compensation performance can be expected. Indeed, for C_V^app to be applied, it is enough to add pp_{2,3}, pp_{3,1} and the rounded half-sum of pp_{0,7} and pp_{1,5} to the kept part of the summand grid. Accuracy is evaluated through the absolute error mean and variance over all input pairs,

  µ_|e| = 1/2^{2n} Σ_{X=−2^{n−1}}^{2^{n−1}−1} Σ_{Y=−2^{n−1}}^{2^{n−1}−1} |X × Y − Z_op|,
  σ²_e = 1/2^{2n} Σ_{X=−2^{n−1}}^{2^{n−1}−1} Σ_{Y=−2^{n−1}}^{2^{n−1}−1} |X × Y − Z_op|²,        (1.40)

where Z_op is the result of the operation X × Y performed by the operator op. Accuracy results are given in Table 1.30. In terms of absolute error mean, it can be noticed that FBMIII achieves slightly better performance than FBMI, but has an error variance which is half as large. In terms of error power, an 8-bit FBMIII is 14.7% more efficient than its FBMI equivalent, and 23.9% for their 12-bit versions. FBMIII accuracy benefits over FBMI seem to increase with the size of the operator. However, FBMIII is slightly beaten by truncation in terms of accuracy. This is still good performance taking into account that FBMIII saves 44% area compared to an 8-bit truncated-output fixed-width Booth multiplier, and 49% compared to a 12-bit one. More results about area are detailed in [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF], as well as more tests comparing FBMI and FBMIII on image processing benchmarks, confirming FBMIII to be more accurate on several metrics such as root mean square error, SNR and PSNR.
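The simplification performed in Equation 1.39 (with q_1 = 0 and q_2 = q_3 = 1, the contribution of pp_{2,3} and pp_{3,1} becomes an integer and leaves the floor) can be checked exhaustively over the symbol alphabet {−1, 0, 1}. This quick check is ours:

```python
import math
from itertools import product

def cv_app_full(pp07, pp15, pp23, pp31, q1=0, q2=1, q3=1):
    """First line of Equation 1.39, before simplification."""
    return math.floor(0.5 * (pp07 + pp15 + pp23 + pp31)
                      + 0.5 * (q1 * (pp07 + pp15) + q2 * pp23 + q3 * pp31))

def cv_app_simplified(pp07, pp15, pp23, pp31):
    """Second line of Equation 1.39: floor(0.5*(pp07+pp15)) + pp23 + pp31."""
    return math.floor(0.5 * (pp07 + pp15)) + pp23 + pp31
```

All intermediate values are exact multiples of 0.5, so the floating-point floor introduces no rounding issue in this check.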
Therefore, FBMIII proposes better global performance than FBMI on many metrics, with a smaller error correction system. In this section, three fixed-width Booth operators with error correction systems were presented:
• FBMI [START_REF] Jou | Fixed-width multiplier for dsp application[END_REF] proposes a low-cost error-compensation system, considering only LP_major.
• FBMII [START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] proposes a higher-cost error-compensation system, only based on the value of the multiplier input, but taking it entirely into account (LP_major + LP_minor). This higher cost allows it to strongly beat FBMI in terms of accuracy, and even the basic fixed-width Booth multiplier obtained by truncation of the output. Compared to its summand grid, the FBMII error compensation system is still small, and so it represents a very interesting fixed-width Booth multiplier.
• FBMIII [START_REF] Juang | Low-error carry-free fixed-width multipliers with low-cost compensation circuits[END_REF] has the lowest-cost error-compensation system, even smaller than FBMI's LP_major consideration. Moreover, it has better performance than FBMI, but does not beat the output truncation method as FBMII does.
To conclude, FBMII seems to be the most efficient fixed-width Booth operator in terms of accuracy, though FBMIII proposes a lower-cost error compensation. In terms of delay, the compensation system overhead for the three presented operators is nearly negligible compared to the cost of the carry-save partial product reduction. Therefore, for the best accuracy, FBMII should be given high priority, and FBMIII should be chosen only if area is the critical resource.

Dynamic Range Unbiased Multiplier

In [START_REF] Hashemi | Drum: A dynamic range unbiased multiplier for approximate applications[END_REF], a novel approximate multiplier referred to as Dynamic Range Unbiased Multiplier (DRUM) is inspired by floating-point multiplication.
As a reminder, the FlP multiplication process is described in Section 1.2.2. The idea of DRUM with n-bit inputs is to use an accurate multiplier of size k < n, shifting the n-bit inputs in such a way that the multiplier input MSBs are fed with the most significant one of each input. This way, no effort is wasted in the multiplication uselessly processing "high-significance zero × zero". To reduce the approximation, the inputs are unbiased before multiplication. As only a subset of k bits is extracted from the inputs, all the less significant bits are virtually set to 0, causing an error which is always in the same direction. To prevent this, the LSB of the k bits extracted for multiplication is set to 1. The unbiasing process applied on each input is depicted in Figure 1.65. Once the inputs are extracted and unbiased, the accurate k-bit multiplication is performed. Finally, the result is shifted so that the output of the k-bit multiplication is expressed with its legitimate significance. The corresponding structure of DRUM is depicted in Figure 1.66 [START_REF] Hashemi | Drum: A dynamic range unbiased multiplier for approximate applications[END_REF]. First, leading-zero detection is performed on each input to get the position of the first one for the k-bit multiplication. Then, after input unbiasing, the k-bit multiplication is performed. Finally, the result is shifted using the sum of the leading-zero detection values of the inputs. Designing DRUM is therefore about finding a good compromise for k. Decreasing the value of k by 1 diminishes the size of the effective multiplier, but increases the size of the multiplexers of Figure 1.66, and the leading-zero detector needs to be able to count one more potential leading zero. Also, the final maximal shifting is increased by 1. Generally, decreasing k decreases area but may increase delay. To keep the benefit of using leading-zero detection, the inputs must be unsigned.
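The whole unsigned DRUM datapath (leading-one detection, k-bit extraction, LSB unbiasing, k-bit multiplication, final shift) fits in a short behavioral model. This Python sketch is ours, not the RTL of [Hashemi]:

```python
def drum(a, b, k):
    """Unsigned DRUM behavioral model with a k-bit internal multiplier."""
    def extract(x):
        # Keep the k bits starting at the leading one; short inputs pass through.
        if x < (1 << k):
            return x, 0
        shift = x.bit_length() - k
        return (x >> shift) | 1, shift     # unbiasing: force extracted LSB to 1
    ra, sa = extract(a)
    rb, sb = extract(b)
    return (ra * rb) << (sa + sb)          # k-bit product, shifted back
```

When both inputs fit in k bits the result is exact; otherwise each operand is represented to within a factor (r+1)/r with r ≥ 2^{k−1}, which for k = 6 bounds the relative error of the product by about 6.3%, in line with the maximal relative error reported for the 16-bit, k = 6 configuration.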
Therefore, a signed version of DRUM must unsign the inputs using two's-complement transformation before being applied, and manage the sign of the output depending on the signs of the inputs, which can be achieved with a small overhead. Using Synopsys Design Compiler and Mentor Graphics ModelSim with 65-nm standard cell libraries, area and power for 16-bit DRUM as a function of k are depicted in Figure 1.67 [START_REF] Hashemi | Drum: A dynamic range unbiased multiplier for approximate applications[END_REF]. These simulations show that substantial reductions in area and power are reached. For 16-bit DRUM with k = 3, more than 80% of area and more than 90% of power are saved with respect to a 16-bit accurate multiplier, the structure of the reference multiplier being unknown. For k = 8, nearly 50% area and power are saved. The intermediate k values show near-linear savings. These savings need to be put in relation with the errors being performed. In [START_REF] Hashemi | Drum: A dynamic range unbiased multiplier for approximate applications[END_REF], four error metrics are explored, all relative to the accurate result. The relative error RE is defined by

  RE = (Z − Ẑ)/Z,        (1.41)

where Z is the exact result of the 16-bit multiplication X × Y and Ẑ the result of the same multiplication using DRUM. The four explored error metrics are Max_RE the maximal absolute relative error, MA_RE the mean absolute relative error, µ_RE the mean relative error (or relative error bias), and σ_RE the standard deviation of the relative error; S represents the set of all possible inputs and N_S the size of this set. It is important to notice that all these metrics are relative to the accurate output, as this is the family of metrics which is the most reliable when speaking about FlP-like error. Indeed, as the multiplier is "sliding" on the inputs, the error performed is always relative to the amplitude of the inputs. Table 1.31 gives the values of these metrics for the same 16-bit versions of DRUM as the ones of Figure 1.67.
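These four relative-error statistics are easy to reproduce for any operator. The sketch below is ours and uses a toy LSB-truncating multiplier, since the DRUM netlist itself is not modeled here; as an assumption, σ_RE is taken as the root mean square of RE, which matches the standard deviation when the bias is negligible.

```python
def relative_error_stats(pairs, approx_mult):
    """Max_RE, MA_RE, mu_RE and sigma_RE over a set of input pairs."""
    res = [(a * b - approx_mult(a, b)) / (a * b) for a, b in pairs]
    n = len(res)
    max_re = max(abs(r) for r in res)
    ma_re = sum(abs(r) for r in res) / n
    mu_re = sum(res) / n
    sigma_re = (sum(r * r for r in res) / n) ** 0.5
    return max_re, ma_re, mu_re, sigma_re

def trunc8(a, b):
    """Toy approximate multiplier: zero the 8 output LSBs."""
    return ((a * b) >> 8) << 8
```

The inequalities max ≥ RMS ≥ mean absolute ≥ |bias| always hold, which gives a quick sanity check of any measured error report.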
  Max_RE = max_{i∈S} |RE_i|,  MA_RE = (1/N_S) Σ_{i∈S} |RE_i|,  µ_RE = (1/N_S) Σ_{i∈S} RE_i,  σ_RE = √((1/N_S) Σ_{i∈S} RE_i²).        (1.42)

As expected, the error is larger when k is smaller. For k = 3, the maximum relative error reaches 56%, which is high but much smaller than for most approximate operators, whose error can propagate to their MSBs in case of a broken carry chain. Thanks to the unbiasing method, the error bias is very low and decreases when k increases. The general amplitude of the error, represented by MA_RE and σ_RE, is generally low. As a conclusion, the DRUM operator shows interesting area and power benefits, leveraging a FlP multiplication style applied to fixed-significance data. As an example, for a 16-bit DRUM with k = 6, more than 60% of area and more than 70% of power are saved, while the error stays very tight. Indeed, the error bias is nearly zero, while the mean absolute relative error is only 1.47%, with a maximum at 6.31%. The DRUM approximate operator is a rare example of an approximate operator with important savings, producing often-erroneous but low-amplitude results.

In this chapter, besides the classical floating-point and fixed-point paradigms, a subset of approximate operators was described, all chosen for being different from one another and basing their designs on different techniques of integer addition and multiplication. The list drawn up in this chapter is far from exhaustive but tries to cover the main stakes of approximate operators. The study of the existing literature also leads to a more bitter observation: most presented operators do not come with enough results about their impact on real-life applications. Some of them only come with stand-alone results using convenient metrics hiding high error spikes, and others are tested on applications too simple to draw definitive conclusions. In this document, we intend to provide methods, tools and conclusions about the general advantages and drawbacks of these operators, which are all different from one another.
It is also important to note that the approximation techniques presented in this chapter are not exhaustive, though they are the main ones and lay the foundation for the remainder of this thesis. A more complete survey of existing approximation techniques at many levels can be found in [Mittal]. The high number of existing techniques, enabled at different levels from algorithm design to the physical layer, will in the future need to be unified into a single general technique allowing to take advantage of the best of each through cross-level design.

Chapter 2
Leveraging Power Spectral Density in Fixed-Point System Refinement

The first contribution of this thesis, after the literature study and the comparison of approximate operators in the previous chapter, is a novel method for system-level optimization in FxP arithmetic. This work led to a paper at the DATE'16 conference [Barrois et al.].

Motivation for Using Fixed-Point Arithmetic in Low-Power Computing

Signal processing applications popularly use fixed-point data types for implementation. The choice of fixed-point data types is usually driven by cost constraints such as power, area and timing. The objective of fixed-point refinement during the design process is to make sure that the chosen data types are precise enough to achieve the expected quality of computation while minimizing the cost constraint. A lower quality of computation is acceptable either because error-correction mechanisms are explicitly defined as part of the system, or because user perception defines the lower bound on the quality of the output, or both.
For instance, video CODECs such as H.264, popularly used for wireless transmission, allow a certain amount of errors on the channel, which can be corrected by error-resilience mechanisms [Alajel] or tolerated because the human eye is insensitive to some errors [Wu]. All these layers of error resilience mean that using approximations for the concerned computations could bring significant gains in area, time and/or energy. A classical way to approximate a computation process is to use fixed-point arithmetic. Indeed, the representation of fractional numbers by integers ensures faster and more energy-efficient arithmetic computations, as discussed in the previous chapter, and the design of the operators requires significantly less area. The most important drawback of using an approximated arithmetic is the need for managing the induced computation errors. The errors with fixed-point data types are classified into two types, arising from finite precision on one hand and finite dynamic range on the other. Although the impact of errors due to violation of the finite dynamic range is more pronounced, these errors can be mitigated by techniques such as range analysis using affine arithmetic, interval arithmetic or more complex statistical techniques such as [Özer]. In spite of allowing for a good dynamic range, the lack of precision causes errors that are perceived as bad quality of computation. In the case of wireless applications, this can be measured as a Bit-Error Rate (BER), in image and signal processing as a Signal-to-Quantization-Noise Ratio (SQNR), and, in general, as a quantization noise power.
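As an illustration of the fixed-point principle, the following minimal sketch represents fractional values as integers scaled by 2^d, where d is the number of fractional bits. The helper names are illustrative and are not taken from any fixed-point library.

```python
# Minimal fixed-point (Q-format) sketch: a real value is stored as an
# integer after scaling by 2**d.  Names are illustrative only.

def to_fxp(value, d):
    return round(value * (1 << d))     # quantize by rounding

def to_real(fxp, d):
    return fxp / (1 << d)

d = 8
a, b = to_fxp(0.7071, d), to_fxp(0.3333, d)
prod = (a * b) >> d                    # FxP multiply: rescale the integer product
print(to_real(prod, d))                # close to 0.7071 * 0.3333
```

The right-shift after the integer product realigns the result to d fractional bits; the small residual deviation from the double-precision product is exactly the precision error discussed above.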
Measuring the impact of finite precision on the output quality of computation is discussed in this chapter. Commercial tools for performing fixed-point accuracy evaluation include C++ fixed-point libraries (e.g. ac_fixed from Mentor Graphics used with Catapult-C, or sc_fixed from SystemC) or the Matlab fixed-point design toolbox. These tools are primarily based on facilitating FxP simulation with user-defined word-lengths, using software FxP constructs and libraries. Although very useful, evaluation by simulation can be very time-consuming. The time required for FxP evaluation grows in proportion to the number of FxP variables and to the input sample size. Using the analytical approach for accuracy evaluation, the noise power is obtained by evaluating a closed-form expression as a function of the number of bits assigned to the various signals in the system. This approach requires a one-time effort to arrive at the closed-form expression for a given system. These analytical techniques can be handy but are generally limited in applicability to linear and some types of non-linear systems (referred to as smooth operations). The analytical technique evaluates the first two moments of the quantization noise sources and propagates them through the Signal Flow Graph (SFG) from all noise sources to the system output. On relatively small systems, the evaluation of the path functions can be accomplished manually. As the system complexity grows, automation support is required. And possibly, for very large systems, the automation could also prove painstakingly slow.
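The simulation-based evaluation described here can be sketched in a few lines: quantize a double-precision reference signal to d fractional bits and measure the quantization noise power (MSE), which for rounding should approach the classical q²/12 model of a uniform noise. The signal, seed and sample count below are arbitrary illustrative choices.

```python
# Simulation-based accuracy evaluation sketch: double precision serves
# as the "infinite precision" reference, and the quantization noise
# power (MSE) is measured on the FxP version of the signal.
import random
random.seed(0)

def quantize(v, d):                       # round to d fractional bits
    return round(v * (1 << d)) / (1 << d)

d = 8
xs = [random.random() for _ in range(100000)]
mse = sum((x - quantize(x, d)) ** 2 for x in xs) / len(xs)
q = 2.0 ** -d
print(mse, q * q / 12)                    # measured vs. q^2/12 model
```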
Therefore, several divide-and-conquer approaches have been proposed [Parashar][Novo] to overcome the apparent complexity of large systems, which respectively suffer from loss of information or from having to enumerate all paths in the graph. With the method described in this chapter, we provide an alternative analytical accuracy evaluation approach, for use with hierarchical techniques, to be applied on LTI systems. This technique captures the information associated with the frequency spread of the quantization noise power by sampling its PSD. We show how such information can be used for breaking the complexity of evaluating the quantization noise at the output of large signal processing systems. The contributions brought by our work are as follows:
• quantifying the accuracy of the proposed technique based on PSD propagation and
• demonstrating its high scalability at system level, resulting from linear time complexity.
The rest of this chapter is organized as follows. Section 2.2 reviews analytical methods for algorithm-level analysis of the accuracy loss due to finite arithmetic effects in systems using fixed-point arithmetic. In Section 2.3, the proposed estimation method based on PSD is introduced and developed for general systems. Finally, in Section 2.4, two representative signal-processing benchmarks are chosen to showcase the efficiency of the proposed method.

Related work on accuracy analysis

The loss in accuracy due to the finite precision imposed by the fixed-point numbering format has been evaluated using several metrics. The most common among them are the error bounds and the Mean Square Error (MSE).
While the first metric is used to determine the worst-case impact, the MSE is an average-case metric, very useful in tuning the average performance of the system under consideration in terms of its energy and timing. Although finite-precision accuracy should be compared against infinite-precision (or arbitrary-precision) numbers, it is impossible to do so while simulating using a computer. So, the IEEE double-precision floating-point format, whose dynamic range and precision are several orders of magnitude higher than typical fixed-point word-lengths, is considered as the reference for all comparison purposes and may be referred to as infinite precision in what follows. In the literature, the MSE is the mean square value of the differences between the computations of the fixed-point system and those of the reference system implementation, extracted as shown in Figure 2.1, and is also referred to as quantization noise power. This is a scalar quantity and it changes as a function of the FxP word-lengths. Evaluation of the quantization noise power at the output of a fixed-point system is performed either by simulation-based techniques or by analytical techniques. Simulation-based techniques are universal and can be made use of as long as there are enough computational resources. By nature, simulation-based techniques take longer and are subject to input stimulus bias. Analytical techniques, on the other hand, provide a closed-form expression for calculating the quantization noise power as a function of the FxP word-lengths. However, they are limited due to their dependence upon the following properties [Widrow and Kollár]:
1. Quantization noise and the signal are uncorrelated.
2. Quantization noise at its source is spectrally white.
3. A small perturbation at the input of an operation generates a linearly proportional perturbation at the output of the operation.
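These three properties can be checked empirically on a toy example. The sketch below measures the correlation of the rounding noise with a correlated input signal (property 1) and its lag-1 autocorrelation (property 2); for property 3, it compares the exact output perturbation of a multiplier z = x·y with the first-order model built from the sensitivities ∂z/∂x = y and ∂z/∂y = x. All signals, word-lengths and sample counts are arbitrary illustrative choices, not taken from the thesis experiments.

```python
# Empirical check of the three properties above (illustrative sketch).
import math, random
random.seed(1)

d = 10
q = 2.0 ** -d

def quantize(v):
    return round(v * (1 << d)) / (1 << d)

# correlated test signal: slow sine plus a small uniform jitter
xs = [math.sin(2 * math.pi * n / 1000.0) + 0.1 * random.random()
      for n in range(100000)]
e = [quantize(v) - v for v in xs]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = math.sqrt(sum((u - ma) ** 2 for u in a))
    db = math.sqrt(sum((v - mb) ** 2 for v in b))
    return num / (da * db)

rho_sig = corr(xs, e)          # property 1: noise vs. signal
rho_lag = corr(e[:-1], e[1:])  # property 2: whiteness (lag-1 autocorr.)

# property 3 on z = x*y: exact perturbation vs. first-order model
sim, lin = [], []
for _ in range(50000):
    x, y = 1.0 + random.random(), 1.0 + random.random()
    bx = random.uniform(-q / 2, q / 2)
    by = random.uniform(-q / 2, q / 2)
    sim.append((x + bx) * (y + by) - x * y)  # exact output perturbation
    lin.append(y * bx + x * by)              # linearized model
p_sim = sum(v * v for v in sim) / len(sim)
p_lin = sum(v * v for v in lin) / len(lin)
print(rho_sig, rho_lag, p_sim / p_lin)
```

Both correlation coefficients stay close to zero, and the power of the linearized perturbation matches the exact one very closely, the only discrepancy being the second-order term b_x·b_y.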
The first two properties pertain to the quantization noise source: under the conditions defined by the Pseudo-Quantization Noise (PQN) model, the statistics of the noise and of the signal are uncorrelated and, even though the signal itself may be correlated in time, the noise signal is uncorrelated in time [Widrow and Kollár]. The independent and uniform nature of this noise was already discussed in Section 1.3.2. The representation of the quantization error as an additive, uniformly distributed white noise is depicted in Figure 1.6. The third property relates to the application of "perturbation theory" [Constantinides]. It is possible to propagate quantization noise through an operation as long as the function defined by the operation can be linearized. Consider a binary operator whose inputs are x and y and whose output is z. If the input signals are perturbed by b_x and b_y to obtain x̃ and ỹ respectively, the output is perturbed by the quantity b_z to obtain z̃. In other words, as long as the fixed-point operator is smooth, the impact of small perturbations at the input translates into a perturbation at the output of the operator without any change in its macroscopic behavior. In the realm of perturbation theory, the output noise b_z is a linear combination of the two input noises b_x and b_y such that

b_z = ν₁ b_x + ν₂ b_y    (2.1)

where ν₁, ν₂ are obtained from a first-order Taylor approximation [Constantinides] of the continuous and differentiable function f:

z = f(x, y)    (2.2)
z̃ = f(x̃, ỹ) ≈ f(x, y) + ∂f/∂x (x, y)·(x̃ − x) + ∂f/∂y (x, y)·(ỹ − y).

Therefore, the expressions of the terms ν₁ and ν₂ are given as

ν₁ = ∂f/∂x (x, y),    ν₂ = ∂f/∂y (x, y).    (2.3)

Following the third property of quantization noise enumerated above, a further assumption for Equation 2.1 to hold true is that the noise terms b_x and b_y are uncorrelated with one another. It has to be noted here that the terms ν₁ and ν₂ can be time-varying. This method is not limited to binary operations. In fact, it can be applied at the functional level, with any number of inputs and outputs, and to all operators on a given data path in order to propagate the quantization noise from all error sources to the output. When the above conditions hold true, the output quantization noise power of the system is obtained by linear propagation of all quantization noise sources [Menard] as

E[b_y²] = Σ_{i=1}^{Ne} K_i σ_i² + Σ_{i=1}^{Ne} Σ_{j=1}^{Ne} L_ij µ_i µ_j    (2.4)

where E[•] is the expectation function and b_y is the error signal associated with the corresponding system output signal y. The system under consideration consists of N_e fixed-point operations, the i-th operation generating a quantization noise b_i with mean µ_i and standard deviation σ_i. Figure 2.2a illustrates this noise propagation. The terms K_i and L_ij are constants that depend on the path function h_i from the i-th source to the output y, and are calculated as

K_i = Σ_{k=−∞}^{∞} E[h_i²(k)],    (2.5)
L_ij = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} E[h_i(k) h_j(l)].    (2.6)

Hierarchical techniques for the evaluation of the quantization noise power have been proposed [Weijers][Parashar] to overcome the scalability concerns associated with fixed-point systems.
In this approach, the system components are evaluated one at a time and then combined by superposition at the output (Figure 2.2b, blind propagation of µ_i, σ_i²). If a simulation-based technique is used for the evaluation of the quantization noise power at the output, the hierarchical evaluation process helps parallelize the simulation of each of the components. When employing an analytical technique such as the one of Eq. 2.4, the number of paths required to be evaluated is reduced dramatically. This reduction is very interesting from the design automation perspective. The paths are broken around the system component boundaries and each component can be evaluated separately, thereby reducing the burden of semantic analysis. However, it has to be borne in mind that the application of the technique of Equation 2.4 requires that the quantization noise satisfies the three properties enumerated above and also that the noise signals are always uncorrelated, which is often false and can cause severe errors. The method in this chapter addresses this problem and suggests a technique that exploits the information hidden in the PSD of the quantization noise [Widrow and Kollár][Jackson].

PSD-based accuracy evaluation

It is clear from the state of the art that there exist two types of limitations to the existing accuracy evaluation techniques. While the analytical technique greatly reduces the simulation time, its preprocessing time can grow exponentially, requiring the use of hierarchical techniques such as [Parashar]. However, these techniques introduce an inaccuracy problem by approximating error quantities with just their mean and variance.
This is especially true when large systems are broken down into smaller sub-systems for analysis. For illustration, consider the system shown in Figure 2.2b. The system S consists of several sub-systems marked as Op_1...5. The noise generated at the output of each sub-system, correspondingly marked as b_1...5, is propagated (blue arrows) through several parts of the system for the calculation of the moments of the error at the system output. Suppose there are memory elements in Op_1 and Op_2; propagating the noises b_1 and b_2 (say) through Op_3 by just using the first two moments of the quantization noise (as described in the previous section) can lead to errors in the estimates at the output of Op_3, which can further be amplified by Op_5 all the way to the output O. Similarly, the path through Op_4 also influences the error of the estimate through Op_5, leading to very large error margins for O. In order to analytically arrive at the moments of the system output, additional information pertaining to the quantization noise at the points of convergence of two or more noise paths is required. We refer to the methods that do not consider PSD information (such as [Weijers]) as PSD-agnostic methods. In this section, we propose a technique which efficiently makes use of the PSD of the quantization noise for evaluating the error at the output of a system and which is scalable both in terms of accuracy and system size.

PSD of a quantization noise

A large signal processing system can be divided into a number of sub-systems, each characterized by its transfer function. The transfer function defines the magnitude and phase relationship of the path for input signals of different frequencies. Since our interest is only in the noise power, we ignore the phase spectrum and consider only the magnitude spectrum, or PSD.
With the knowledge of the PSD distribution of the input and of the system PSD profile, it is possible to calculate the PSD of the output. The PSD S_xx(F) of a signal x at any normalized frequency F is defined as the Fourier transform F{•} of the autocorrelation function of x as

S_xx(F) = F{x(n) · x*(n + m)},    (2.7)
S_xx(F) = F{x} · F{x}* = |F{x}|².    (2.8)

With the knowledge of the PSD of x, the MSE and the mean of x are obtained by summing up the power in each frequency component as

E[x²] = ∫_{−1}^{1} S_xx(F) dF = µ² + σ²,    S_xx(0) = µ².    (2.9)

The PSD of the quantization noise generated by a fixed-point data type with d fractional bits is (as discussed in Section 2.2) white, except for F = 0, which depends on the mean. By discretizing the PSD into N_PSD regular bins including the DC component, the PSD of a generated quantization noise b_x is given by:

S_bx(F) = σ²/N_PSD  if F ≠ 0,
S_bx(F) = µ²        if F = 0,    (2.10)

where the mean µ and variance σ² for both truncation and rounding modes with d bits are as given in [Widrow and Kollár].

PSD propagation across a fixed-point LTI system

In the method developed in this chapter, we focus on linear time-invariant (LTI) systems, which constitute the major part of signal processing systems. An LTI system can be represented by a signal flow graph (SFG) composed of boxes corresponding to sub-systems defined by their impulse response and delimited by additive quantization noise sources such as the one described in Section 2.2. The proposed PSD evaluation method then consists of three steps:
1. Detect cycles in the SFG and break them to obtain an equivalent acyclic SFG that can be used for noise propagation, using classical SFG transformations [Ogata]. An example of SFG breaking is given in Figure 2.3.
Given the original cyclic SFG of Figure 2.3a, the loop generated by the H_3 loopback is flattened as shown in Figure 2.3b.
2. The discrete PSD of each signal processing block and of the additive noise associated with the input signal is calculated on N_PSD points.
3. The noise PSD parameters are propagated from the inputs to the outputs, using Equations 2.11 and 2.14.

Let x be the input of a system of impulse response h. Then the output y is obtained by the convolution (∗) of x and h as y = x ∗ h. In the Fourier transform domain, it can be written as Y = X · H, where Y = F{y}. Following this, the output PSD S_yy(F) is obtained as [Jackson]

S_yy(F) = S_xx(F) · ‖H(F)‖²,    (2.11)

where ‖H(F)‖ is the magnitude response of the system h. In any signal processing system, the quantization noise sources from various inputs converge at either an adder or a multiplier. Considering the LTI subset, multiplications are nothing but multiplications by constants and hence correspond to linear scaling factors for the noise powers. In the case of adders, if the sum of two quantities x and y is obtained as z = x + y, then S_zz(F) is given by

S_zz(F) = S_xx(F) + S_yy(F) + S_xy(F) + S_yx(F),    (2.12)

where S_xy(F) is obtained from the cross-correlation spectrum of x and y as

S_xy(F) = F{x(n) · y*(n + m)}.

The complexity of propagating the PSD parameters through the system essentially depends on the number of discrete points N_PSD. The total time for the evaluation of the PSD parameters can be split into two parts: first, τ_pp, corresponding to the preprocessing stage, which involves evaluating the N_PSD-point Fourier transform of the transfer functions of the sub-systems with complexity O{N log(N)}; second, the actual time required for evaluation, τ_eval, which is O{N} from Equations 2.11 and 2.14. τ_eval is required for evaluating the accuracy for various inputs and can be repeatedly performed without any preprocessing, say N_eval times. Since the time spent on preprocessing is a one-time effort, the actual evaluation time is dominated by τ_eval, which is linear in N_PSD.

Experimental Results of the Proposed PSD Propagation Method

In this section, the proposed method is evaluated using a three-step approach. First, we show experimentally that the estimates obtained by the proposed PSD technique are close to simulation. Then, we present the impact of the number N_PSD of samples used to capture the PSD information on the accuracy as well as on the execution time of the proposed approach. Finally, we also discuss the improved accuracy in estimation and compare it with the results obtained by a PSD-agnostic method. All experiments are performed using Matlab R2014b. The MSE deviation E_d is chosen as the metric for comparison in all these experiments. It is calculated as

E_d = (E[err_sim²] − E[err_est²]) / E[err_sim²],

where E[err_sim²] is the output error power obtained by simulation and E[err_est²] is obtained by the proposed analytical estimation. For this metric, an accuracy equivalent to less than one bit corresponds to the range E_d ∈ (−75%, 300%), which can be trivially proven considering the error power relative to two successive word-lengths. Beyond these limits, the estimation is unmistakably not suitable for the fixed-point refinement process, as the finally selected word-length would not meet the maximum error requirements. In the following sections, we first present the experiments and provide a discussion of the results obtained.

Experimental Setup

Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) Filters

The first experiment consists in evaluating the PSD of single Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filter blocks as described in Section 2.3.
The quantized input signal is propagated through the chosen filter and the output quantization noise power is measured by simulation and by the proposed PSD method. The error in the estimates of the noise power E_d is obtained on a total of 147 FIR and 147 IIR filters, obtained by attributing different functionalities (band-pass, low-pass and high-pass) and various numbers of taps (and thus of memory elements), from 16 to 128 taps for FIR filters and from 2 to 10 taps for IIR filters. Simulation is run on 10⁶ inputs and the PSD estimation is performed on 1024 samples.

Frequency Domain Filtering

The system described in Figure 2.4 is a frequency-domain band-pass filter. It consists of a 16-tap low-pass FIR filter H_hp followed by a frequency-domain filter, composed of a 16-point FFT block, a multiplication by the 16 coefficients of a high-pass FIR filter H_lp, and an inverse FFT. The frequency-domain filter applies the filter using the popular overlap-save method. Simulations are carried out on a set of 10⁷ input samples. For the third experiment, the Daubechies 9/7 Discrete Wavelet Transform (DWT), images are taken from standard databases and from the Brodatz texture images [Brodatz], generally used for evaluating JPEG2000 compression algorithms. Two levels of sub-band decomposition are performed on the sample images using the hierarchical signal flow graph. For the encoder, the first level of filtering and downsampling is applied on the rows and the second one on the columns. The second-level coding is applied on the low-pass components (x_ll). Symmetrically, the decoder first performs upsampling and filtering on the columns, followed by a second upsampling and filtering on the rows. For this experiment, the fractional word-lengths d of all variables are set to the same value and are varied across 8 to 32 bits in steps of 4. In the case of FIR filters, E_d is contained within an absolute value of 0.37% in comparison with simulation.
In the case of IIR filters, the E_d bounds are higher because of their recursive nature and the high filter orders tested. FIR and IIR filters result in an average absolute E_d of 0.11% and 9.44% respectively, showing a generally very accurate estimation. For both, the accuracy is in any case largely below the one-bit equivalent. Moreover, the classical flat estimation [Menard] applied to the same filters gives the exact same results in terms of E_d, showing their strict equivalence on an elementary filtering block.

Daubechies 9/7 Discrete Wavelet Transform

Figure 2.6 presents the results for the two other experiments when the number of fractional bits is varied between 8 and 32 bits, with a maximum deviation in error of only about 10%. The maximum error in the estimates is by far too small to have an impact on the final optimization.

Influence of the Number of PSD Samples

The proposed PSD estimation method achieves very good accuracy with a large number of PSD samples. However, as discussed in Section 2.3.2, a larger number of samples N_PSD increases the evaluation time. Therefore, it is of interest to find out how this choice affects the estimation accuracy. To observe this, in both examples chosen in this section, the fixed-point error is obtained both by simulation and by the proposed PSD method with different values of N_PSD in powers of 2, ranging from 16 to 1024. In this example, the fractional bit-width d is uniformly set to 32 for all signals. The output error power deviation E_d for this experiment is plotted in Figure 2.7 versus N_PSD. As expected, increasing the number of PSD samples leads to an improvement of E_d. For N_PSD = 16, E_d is slightly below −8% for the frequency filtering system, and slightly above 1% for the DWT system. Then, both curves converge to a value within ±1%.
The accuracy obtained is better than the sub-one-bit objective. The accuracy of the estimates obtained using the proposed method is a function of the system complexity.

Comparison with PSD-Agnostic Methods

The deviation of the error estimates between the proposed and the PSD-agnostic method is presented in Table 2.2. The max error is obtained with N_PSD = 16 and the min error with N_PSD = 1024. In all cases, it can be observed that the PSD-agnostic method is much more erroneous than even the maximum error obtained using the proposed technique. It has to be noted that, for the DWT example, the PSD-agnostic method renders an error of 610%. The PSD-agnostic method is 4.5× worse in its estimate for frequency filtering, and 554× worse for the DWT. For the best case, these values rise to 3.5·10³× and 6.7·10⁴× respectively.

Frequency Repartition of the Output Error

Another interesting feature inherent to the proposed estimation method is that it gives the frequency repartition of the errors, which is relevant for the refinement of fixed-point signal processing systems and which is not estimated by conventional methods. Indeed, the classical flat method is not able to give any clue about the frequency repartition of the error, which is capital information in signal processing. For image compression, for instance, accuracy is more likely to be relaxed in low frequencies than in high ones, human vision being less sensitive to slight variations of colors than to tiny details. The proposed PSD estimation is able to give the frequency repartition of this error in a very precise and fast way.

(Figure: frequency repartition of the output error, with the data fractional parts set to 12 bits. Black-to-white values represent log-normalized low-to-high errors. The center of the image represents low frequencies, while the borders represent the high ones.)
These visual representations show that the proposed method achieves a very good estimation of the frequency repartition of the output error, taking only a few milliseconds with the PSD method, whereas the simulation on 72 grayscale images took several hours of computation using Matlab. Such fast and accurate information can be used for refining the system word-lengths to reach a better output quality, basing the refinement not only on the output error intensity but also on which frequency repartition is best for the application. Using the PSD method, different versions of the application can be evaluated in terms of error frequency repartition, to allow for code transformations leading to less impact in the relevant frequency bands. The frequency repartition information can then be modified by allowing the introduction of errors in one or several given parts of the system to be identified.

Conclusions about the PSD Estimation Method

This chapter described the characterization and propagation of quantization noises in a fixed-point signal processing system using its power spectral density. The method is applied at block level, which dramatically reduces the complexity of fixed-point system evaluation when compared to the classical flat estimation method. It therefore leads to a significant speed-up of accuracy evaluation, from 3 to 5 orders of magnitude when compared to Monte Carlo simulation in the tested examples. The results demonstrate that the proposed estimation method leveraging spectral information achieves less-than-one-bit accuracy with a large margin. They also show that complexity-equivalent PSD-agnostic techniques evaluate the accuracy with large errors. The proposed PSD technique also allows the observation of useful frequential properties of the output error that could not be obtained with conventional scalar methods. This work was published at the DATE'16 conference [Barrois et al.].
Chapter 3
Fast Approximate Arithmetic Operator Error Modeling

In this chapter, techniques based on the propagation of the Bit-Error Rate (BER) are presented. First, the bitwise-error rate propagation method is proposed and applied to approximate operators. This method allows for a fast analytical propagation of approximate operator errors, with a low memory cost. The model is trained by simulation and converges fast. Then, attempts to use approximate operators for the simulation of Voltage Over-Scaling (VOS) effects are discussed.

The Problem of Analytical Methods for Approximate Arithmetic

We discussed in Chapter 2 the importance of optimizing a computing system so that the operators with the lowest cost meeting the accuracy requirements are used, and no time, area and/or energy is uselessly spent. For FxP errors, modeling the error as an additive uniform white noise allows an efficient propagation of the mean and variance of the noise [Widrow 1956][Widrow 1961]. However, the nature of approximate operator errors is very different from FxP noise. To account for this specificity, Figure 3.1 shows the error maps of several different 8-bit approximate operators previously mentioned. All error maps take an 8-bit exact adder as a reference. The uniform nature of the 4-bit-reduction FxP noise shows as a very regular striped pattern on error map 3.1a, whereas all the others are very different. The ACA error map 3.1b shows a fractal behavior, with nested error triangles. The AAM error map 3.1c has four areas with very different error amplitudes and patterns. Finally, the DRUM error map 3.1d, with its floating-point-like behavior, has an error pattern which is transformed depending on the amplitude of the inputs.
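As an illustration of how such error maps are obtained, the sketch below computes the exhaustive error map of a hypothetical 8-bit adder that truncates the 4 LSBs of both inputs, against an exact adder. Its 16-periodic stripes are the kind of regular FxP pattern mentioned for map 3.1a; this toy operator is not one of the operators of Figure 3.1.

```python
# Exhaustive error map of a toy 4-LSB-truncating 8-bit adder,
# taking the exact 8-bit adder as a reference (illustrative only).
MASK = ~0xF                                # drop the 4 LSBs of each input

emap = [[(x + y) - ((x & MASK) + (y & MASK)) for y in range(256)]
        for x in range(256)]

max_err = max(max(row) for row in emap)
print(max_err)                             # worst case: two all-ones tails
```

The map value at (x, y) is simply (x mod 16) + (y mod 16), which is what produces the regular striped pattern with a period of 16 in both directions.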
All these differences between Approximate (Apx) operators and FxP, and between the different Apx operators themselves, make it hard to find purely analytical models to estimate their impact on the output error of an application. The only efficient way to estimate their impact is therefore simulation. Hybrids between analytical models and simulation, referred to as pseudo-simulation, have been developed, but they are inefficient because they are often heavier and less accurate than simulation. In [Huang], an analytical propagation of the error Probability Density Function (PDF) of approximate operators is proposed. It leverages a modified interval arithmetic, representing and propagating the PDFs of the signal, of the signal error and of the operator error by sets of intervals. This method allows fast simulation compared to Monte Carlo simulation. However, the method has a major limitation. Indeed, the model for the propagation of the error PDF, which is specific to each operator, potentially costs a lot of memory. Given two input error PDFs with k intervals each, k² resulting values must be kept in memory. However, to be accurate, k must be large enough to be representative of the real error PDF. For an n-bit value, a perfect accuracy for the representation of a corresponding error PDF requires k = 2^n. Therefore, for a 32-bit operator, perfect accuracy would require 2^{2×32} = 2^{64} values to be stored, that is, about 10^19. Of course, a much lower value must be chosen for k, which implies important approximations in the PDFs, and also in the propagation model. Therefore, this model is likely to either diverge quite fast along the propagations, or be memory-hungry. In the next section, a lighter model to propagate the Bitwise-Error Rate (BWER) caused by approximate operators is proposed.

Bitwise-Error Rate Propagation Method

This section presents the Bitwise-Error Rate (BWER) propagation method.
First, the main principle of the BWER propagation method is described. Then, the data structure used for propagation and the training algorithm are discussed. Finally, the propagation algorithm is described.

Main Principle of the BWER Propagation Method

The BWER propagation method is an analytical method which consists in estimating the output BWER of a system composed of approximate integer operators. BWER is defined as the BER associated to each bit position of a binary word. Given an n-bit binary word x = {x_i}, i in [0, n-1], the BWER of x is the vector BWER(x) = {p_i}, i in [0, n-1], composed of n real numbers in the range [0, 1], p_i being the probability for x_i to be erroneous. Given an approximate operator Op whose inputs are x and y of width n and whose output is z of width m, the BWER propagation method aims at determining analytically the output BWER vector, knowing both input BWER vectors, as depicted in Figure 3.2. Then, considering a network of approximate operators, the BWER vectors can be propagated operator by operator from the inputs, considered as accurate, to the outputs. To be time-efficient, the propagation must be simulation-free, and so the model must be completely analytic.

Storage Optimization and Training of the BWER Propagation Data Structure

To propagate BWER across an operator, a BWER transfer function must be built. For this, the impact of an error at any bit position of both inputs on any bit of the output must be determined. Let e_{x,i} be the event "the i-th bit of x is erroneous", and e_{x,i} with an overbar its opposite. Considering n-bit inputs x and y, let the vector

E_{x,y,4^n-1} = {e_{y,n-1}, e_{x,n-1}, . . . , e_{y,0}, e_{x,0}}    (3.1)

be the event "x and y have all their bits erroneous". In this notation, err_id is the integer value represented by the binary word describing E_{x,y,err_id}, where each event e_{.,i} is represented by 1 and each opposite event by 0, read left to right from MSB to LSB.
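Assuming this encoding, the mapping from per-bit fault flags to err_id can be sketched as follows (the helper name is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Interleave per-bit fault flags of x and y into the EEV index err_id:
// bit 2i of err_id holds e_{x,i}, bit 2i+1 holds e_{y,i}.
uint64_t eev_index(uint32_t fx, uint32_t fy, int n) {
    uint64_t id = 0;
    for (int i = 0; i < n; ++i) {
        id |= (uint64_t)((fx >> i) & 1u) << (2 * i);
        id |= (uint64_t)((fy >> i) & 1u) << (2 * i + 1);
    }
    return id;
}
```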
E.g., if x and y are 2-bit inputs, E_{x,y,3} represents the event vector {not e_{y,1}, not e_{x,1}, e_{y,0}, e_{x,0}}, and E_{x,y,6} represents {not e_{y,1}, e_{x,1}, e_{y,0}, not e_{x,0}}. For inputs of width n, there are 4^n possible event vectors, E_{x,y,0} being the vector for which all input bits are correct, and E_{x,y,4^n-1} being the one for which all input bits are erroneous, which is Equation 3.1. From this point, these vectors will be referred to as Error Event Vectors (EEVs). Therefore, to know the impact of an error on any input bit of the n-bit inputs on any bit of the m-bit output, the following set of probabilities must be determined and stored:

P(e_{z,j} | E_{x,y,i}), i in [0, 4^n - 1], j in [0, m-1]    (3.2)

This set has a size of m x 4^n real numbers. The cost of storing this data in memory with single-precision floating-point representation is given by Table 3.1. It is clear that storing such an amount of data is not scalable, even for small bit-widths such as 16 bits. Moreover, the time that would be needed to train such a volume of data would also be huge. Therefore, reductions of this data structure are necessary. First, for arithmetic operators, the output bit of significance j only depends on the input bits of significance i <= j. This already allows for an important reduction of the required storage. Indeed, the number of values to store is now (m - n + 1/3).4^n - 1/3 instead of m x 4^n, as the set of conditional probabilities to be stored of Equation 3.2 becomes:

P(e_{z,j} | E_{x,y,i}), i in [0, 4^min(j,n) - 1], j in [0, m-1]    (3.3)

As an example, for a 16-bit addition, 22 GB are necessary instead of the previous 292 GB. In spite of a 92% memory reduction, this is still too high to be decently implemented. The previous reduction implies no approximation. However, it is possible to reduce dramatically the size of the data structure by allowing a small amount of inaccuracy. For this, the hypothesis is made that any output bit of significance j only depends at most on the input bits of significance i in [j - k + 1, j], where k is arbitrary.
With this method, the set of conditional probabilities of Equation 3.3 becomes:

P(e_{z,j} | E_{x,y,i}), i in [0, 4^min(j,k) - 1], j in [0, m-1]    (3.4)

This approximation is legitimated by two facts:
1. As already stated, the probability for a carry chain to be long is very small [START_REF] Schilling | The longest run of heads[END_REF]. Therefore, a low-significance input bit only has an impact on a much higher-significance output bit in a very small minority of cases.
2. A vast majority of approximate arithmetic operators are based on cutting carry propagation beyond a certain limit l. Therefore, choosing k > l induces no approximation at all.

Choosing an arbitrary k reduces the storage cost to (m - k + 1/3).4^k - 1/3 real numbers, as shown in Table 3.2, turning the storage of the data structure to a decent value. Moreover, knowing the parameters of each considered operator, k can be minimized so that there is no approximation in the model. E.g., for ACA_16(5), k = 6 can be chosen with no compromise, requiring only 186 kB of storage. Once k is selected, the model needs to be trained offline. Monte Carlo simulation with fault injection is used for this, using the functional C++ model of the operator from the approximate operator library of ApxPerf 2.0 [2]. The extraction process of an observation of the EEV E_z corresponding to an observation of an input EEV E_{x,y} is depicted in Figure 3.3. Inputs x and y are randomly picked using Monte Carlo simulation, and the accurate operation Op_Acc is performed to obtain the corresponding exact output z = {z_j}, j in [0, m-1]. In parallel, random faults are injected in x and y by performing an XOR with fault injection vectors. These fault injection vectors produce the 2n-bit observation F_{x,y} of an EEV E_{x,y}. The approximate operation Op_Apx is then fed with the generated faulty inputs x^ and y^, returning z^.
Finally, the m-bit bitwise-error observation vector F_z = {f_{z,j}}, j in [0, m-1], of an EEV E_z = {e_{z,j}}, j in [0, m-1], is extracted from z^ and z by an m-bit XOR.

Figure 3.3 - Extraction of Binary Error Event Vectors for BWER Model Training

The conditional probability of Equation 3.4 can then be estimated by the ratio between the number of observations of the event e_{z,j} when the corresponding EEV E_{x,y,i} is simultaneously observed, referred to as (f_{z,j} = 1 | F_{x,y,i}), and the total number of observations of E_{x,y,i}:

P(e_{z,j} | E_{x,y,i}) ~= Sum_l (f_{z,j} = 1 | F_{x,y,i})_l / Sum_l (F_{x,y,i})_l    (3.5)

Figure 3.4 illustrates one training cycle for k = 2. First, looking at the blue part, after the approximate operation, the output LSB is not faulty; therefore f_{z,0} = 0. As x^_0 was not faulty and y^_0 was faulty, the corresponding observation of the input EEV is F_{x,y,2}. Thus, following Equation 3.5, the estimation of P(e_{z,0} | E_{x,y,2}) is modified by increasing the denominator by 1. Then, looking at the yellow part of Figure 3.4, the output observation is f_{z,1} = 1, meaning the output is erroneous. As k = 2, input ranks 1 and 0 are observed together. The corresponding observation (0, 1, 1, 0) is F_{x,y,6}. P(e_{z,1} | E_{x,y,6}) is modified by increasing the numerator by 1 (faulty output) and increasing the denominator by 1. The operation is repeated at significance positions 2 (green part) and 3 (red part), each time observing 2k = 4 input error vector bits. Following this method, the conditional probability data structure has m elements updated at each new training cycle. Finally, after a sufficient number of training cycles (discussed in Section 3.3.1), the model is trained and ready for propagation, which is discussed in the following section.

BWER Propagation Algorithm

The propagation of BWER is performed the following way. Given two input BWER vectors B_x = {b_{x,i}}, i in [0, n-1], and B_y = {b_{y,i}}, i in [0, n-1], the equivalent conglomerate vector is

B_{x,y} = {b_{y,n-1}, b_{x,n-1}, b_{y,n-2}, b_{x,n-2}, . . . , b_{y,1}, b_{x,1}, b_{y,0}, b_{x,0}} .    (3.6)
The output BWER B_z = {b_{z,j}}, j in [0, m-1], is then determined by

b_{z,j} = Sum_{i=0}^{4^min(j,k) - 1} alpha_{i,j} . P(e_{z,j} | E_{x,y,i}) ,    (3.7)

where alpha_{i,j} is the probability of the event vector E_{x,y,i} to be true at significance j knowing B_{x,y}, referred to as P(E_{x,y,i} | B_{x,y}). It is computed as the product, over the 2.min(j,k) observed positions, of the corresponding entry of B_{x,y} for each erroneous bit of E_{x,y,i} and of its complement for each correct bit. For instance, for an EEV in which two of the four observed bits are erroneous, with error probabilities 0.21 and 0.05, while the two others are correct, with error probabilities 0.14 and 0.17:

alpha = 0.21 x (1 - 0.14) x (1 - 0.17) x 0.05 = 7.49E-3

This operation then has to be iterated and summed over the 15 other values of i to determine b_{z,4}, and again for all j in [0, m-1].

Results of the BWER Method on Approximate Adders and Multipliers

This section presents results about the BWER propagation method. These results were produced a few days before the redaction of this part of the document and are consequently not yet published. First, the convergence speed of the trained data structure is studied in Section 3.3.1. Then, results concerning stand-alone approximate adders and multipliers and tree structures of adders are given, considering inputs with a maximal activity.

BWER Training Convergence Speed

As developed previously in Section 3.2.2, one of the main interests of the BWER propagation method is the reasonable memory cost of the trained structure. However, the method is only suitable if the training time is not too important. In this section, we evaluate the convergence speed of BWER training. To evaluate it, three approximate adders and two approximate multipliers were used. The adders are ACA, ETAIV and IMPACT, described in Section 1.4.1. The multipliers are AAMIII, denoted as AAM in the rest of this section, and DRUM, described in Section 1.4.2. They were all tested with input/output widths between 8 and 16 using multiple configurations, with values of k in the BWER method in {2, 4, 6, 8}, leading to 374 different experiment parameters. In this experiment, the reference for the final BWER trained structure is the set of values obtained after 10^8 random value draws.
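The alpha product and the sum of Equation 3.7 can be sketched as follows. This is a sketch only: the LSB-first ordering of the window probabilities and the function names are assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Probability alpha_i of the error event vector of index i over the observed
// window: product over the window positions of the per-position error
// probability p[l] (bit l of i set) or its complement (bit l of i clear).
double alpha(const std::vector<double>& p, unsigned i) {
    double a = 1.0;
    for (std::size_t l = 0; l < p.size(); ++l)
        a *= ((i >> l) & 1u) ? p[l] : 1.0 - p[l];
    return a;
}

// Equation 3.7 for one output position: sum over all EEVs of alpha_i
// weighted by the trained conditional probability P(e_z | E_i).
double bwer_out(const std::vector<double>& p, const std::vector<double>& cond) {
    double b = 0.0;
    for (unsigned i = 0; i < cond.size(); ++i)
        b += alpha(p, i) * cond[i];
    return b;
}
```

For window probabilities {0.05, 0.17, 0.14, 0.21} (LSB first), the EEV index 9 yields 0.05 x (1 - 0.17) x (1 - 0.14) x 0.21, the same product as in the worked example above, and the 16 alpha values sum to 1 as expected for a probability distribution.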
The experiment was performed using 374 cores of IRISA's computing grid IGRIDA, each instance of the experiment being computed on a single core for correct time analysis. The IGRIDA computing grid embeds 1700 cores leveraging Intel Xeon CPUs. All implementations are done in C++, using the approximate library apx_fixed described in the next chapter. A first observation of the curves shows that the convergence speed of the training clearly depends on k: the smaller k, the faster the convergence. For k = 2, the mean distance of the estimation from the reference gets under 10^-2 between 20,000 and 50,000 training samples, which represents only 0.5 to 1.2 seconds of training. For k = 8, getting as near to the reference as 10^-1 takes about 1,000,000 training samples, which represents a training time of 22 seconds. Indeed, when k is small, the training structure is very small, and the elements of the structure are likely to be activated by a random input with a high probability. When k is large, however, each element of the structure has very few chances to be activated by the drawn number. This is why the number of random inputs necessary to obtain a sufficient number of activations of all elements of the data structure, and thus estimate the BWER probabilities, grows very fast with the value of k. From now on, all BWER estimation structures used are trained on 10^8 input samples.

Evaluation of the Accuracy of BWER Propagation Method

As mentioned in Section 3.2.3, the trained structure of the BWER propagation method is built to work for inputs with maximum activity at each bit position. In this section, all presented results are produced in these conditions, meaning alpha-bar_j in Equation 3.9 is always worth 0.5. Two experiments are presented in this section. First, results on single stand-alone operators are given, using the same approximate operators as the ones described in the previous section. Finally, results on trees of operations using the same approximate operator instance are given.
Figure 3.6 shows an example of this structure, made of three stages. This tree structure is considered to represent typical data-flow graphs used in signal processing applications and is used to see how the model propagates in such a structure. Even if the model can be accurate for one operator alone, the interest of analytical models lies in their use for application-level error estimation, which is not considered in the models currently published in the literature for approximate operators. To evaluate the efficiency of the BWER method, the mean distance

D_B = (1/n) . Sum_{i=0}^{n-1} |B^(i) - B(i)| ,    (3.8)

is used, where B(i), i in [0, n-1], is the reference output BWER of a given operator of output width n and B^(i), i in [0, n-1], is the output estimated by the model. The first experiment, without considering the activity of the inputs, is the verification of the model on stand-alone operators. For this experiment and all the other experiments to come in this section, all simulations for the reference are run on 10^7 input samples. For the estimation time, the analytical propagation is averaged on at least 5 seconds of repeated experiments. An example of estimated and simulated output BWER vectors is shown in Figure 3.7 for a 12-bit approximate adder ACA with carry chains limited to x = 2, and a parameter k = 4 for the BWER propagation method. In this specific case, the estimation is very near to the reference simulation. There are no errors occurring on the first 3 bits, which is normal for an ACA with x = 2. Then, the error probability increases with the bit significance, as expected. Other cases may show different results, depending on the value of k. E.g., if k < x in ACA, then at least one LSB causing an error at a higher significance is not taken into account. In this case, the estimation is not accurate.
Therefore, it is important to find the optimal value of k to minimize the memory needed and the training time while keeping the best possible accuracy in the estimation. Figure 3.8a shows the evolution of D_B with k for 16-bit ACA as a function of x. As expected, when k < x, the estimation is bad. Then, the quality of the estimation improves when k approaches x. However, when k gets larger than x, the estimation gets worse again. This is due to the number of training samples, which is the same for all k in our experiment. As shown in Section 3.3.1, the training process converges more slowly when k is larger. Therefore, when k > x, as there is no improvement in the accuracy of the model compared to k = x, the slower convergence is a source of inaccuracy. Thus, the optimal value of k in this case is k = x, which is the best balance between accuracy and required training samples. Figure 3.8b shows the evolution of D_B with k for 16-bit IMPACT as a function of the number of MSBs computed with an exact adder, N_acc. For IMPACT, the results are different than for ACA. Indeed, in IMPACT, every output bit depends on every input bit of lower significance. Therefore, for all k < n, there is a lack of information in the model. It is interesting to see that for IMPACT, when the number of accurate MSB computations gets larger, the accuracy of the estimation gets worse, until a certain value of k for which the lack of samples used for training the model compensates for the increase of the information taken into account. This tendency is quite opposed to what could be expected and needs to be investigated in the future for better comprehension. AAM and DRUM multiplier results are respectively given in Figures 3.8c and 3.8d. A first observation of the error shows that globally, the accuracy of the estimation is worse than for adders. Indeed, there is a main difference between adders and multipliers.
In adders, the output bit of significance i is strongly dependent on the input bits of significance i, and then less and less dependent on the input bits of significance i-1, i-2, i-3, . . . , 0. Therefore, when k is high enough, the absence of information on the inputs of rank i-k-1, . . . , 0 represents a small accuracy penalty. However, for an n-bit multiplier with n even, the output bit of significance n-1 is as dependent on the input bits of significance n-1 as on the input bits of significance 0. When k < n, the input bit of significance 0 is not reached by the estimation, leading to potentially bad accuracy. In Figure 3.8c for AAM results, as AAM has no parameter, each curve represents the bit-width of the multiplier. The smaller the multiplier, the better the estimation. For all AAM widths, increasing k always leads to better accuracy, as expected, except for 8-bit AAM with k = 8, which is worse than k = 6. Indeed, as only the most significant half of the multiplication is taken into account in AAM, for each bit j at the output no partial product involving input significance 0 is operating at weight j. Therefore, when switching from k = 6 to k = 8, only one more significant bit instead of two is taken into account, which is not enough to counterbalance the fact that the training was less efficient for k = 8, thus leading to a lower accuracy. The same phenomenon can be observed for DRUM in Figure 3.8d when the floating multiplier is only 2-bit or 4-bit large. Figure 3.9 shows the results when the tree configuration is used. For ACA in Figure 3.9a, for x = 6, the estimation becomes less accurate with the number of stages, which is what could be expected. However, for the other configurations, the opposite is observed, i.e., more stages lead to better estimation accuracy. This is actually due to the high BWER of the approximate adder configuration, which leads to a maximal BWER at nearly all positions after a high number of stages.
As the BWER propagation model also estimates a maximal BWER, not because of a high accuracy but because of a saturation effect, the estimation becomes very good. However, this is only a side effect of the bad performance induced by using several layers of ACA.

Estimation and Simulation Time

As the BWER propagation method is analytical, it is made for fast accuracy evaluation. In this section, the execution time of the method is evaluated. Table 3.4 gives the time spent by the BWER propagation method to evaluate ACA, IMPACT, AAM and DRUM in their 8-, 12- and 16-bit versions. For each bit-width, the training is obtained taking the average over several different parameters of all approximate adders and multipliers, except for AAM which takes no parameter. All simulations are run on 10^7 points. All estimations are repeated during at least 5 seconds and the total time is divided by the number of repetitions. All computations were run on the IGRIDA computing grid, composed of different models of Intel Xeon processors. As the computing grid is heterogeneous, the evaluation may vary depending on which processor is used, and this must be considered in the analysis of the results of Table 3.4. The BWER propagation time roughly oscillates between 500 microseconds and 18 ms for addition, and between 800 microseconds and 42 ms for multiplication for stand-alone operators. In a larger system, the propagation time has to be multiplied by the number of operators. In comparison, 10^7 simulations take between 26 s and 129 s. This makes a huge difference when the operation has to be performed on complete systems and repeated many times, which is the case in an incremental system optimization process where many configurations of approximate operators must be evaluated.

Conclusion and Perspectives

In the previous sections, the BWER propagation model principle, its convergence speed and some results on operators or trees of operators were presented.
The existing literature and the results show how difficult it is to build strong general error propagation models for approximate operators. First, their many different natures make the accuracy of the models very dependent on each of the structures, and a good estimation accuracy on ACA for instance could give bad results on an adder like IMPACT (and vice versa). Then, as their errors often contain scarce and very-high-amplitude peaks, it is very hard to evaluate them with analytical models smoothing these phenomena. Another limitation of the BWER propagation method is that it is only suitable if all inputs are uniformly distributed on their whole dynamic, which means they have a maximal activity. Indeed, BWER does not carry any information about the activity at any position. If this hypothesis is not true, then the results must be weighted by information about activity. Let alpha-bar_j be the probability for the output bit z_j of an operator to be worth 1. Then, the output BWER in these conditions can be approximated by

b_{z,j} = 2.alpha-bar_j . Sum_{i=0}^{4^min(j,k) - 1} alpha_{i,j} . P(e_{z,j} | E_{x,y,i}) .    (3.9)

Indeed, if the input MSBs are mostly worth 0 instead of equally 0 or 1, the output BWER on the MSBs will be proportionally lower. Thus, knowing the probability of the input bits to be 1 at each position, these probabilities need to be propagated analytically across the operators along with the BWER propagation. In practice, the simplest way is to use the analytical propagation of these probabilities across exact operators (adders and multipliers). The propagation of the probability of the output bits to be worth 1 across an adder can be trivially calculated by composing their propagation across a full adder. Their propagation across a multiplier is calculated on the composition of additions in the partial product reduction. The output probability of a bit to be worth 1 analytically obtained this way is then weighted by the initially computed BWER at the output of the operator.
Indeed, as approximate operators are erroneous by nature, they can generate bit flips that can totally modify the value of activity when compared to an exact operator. If alpha'_j is the probability for the output of an exact operator to have its j-th bit worth 1, then alpha-bar_j (see Equation 3.9) is approximated as

alpha-bar_j = alpha'_j x (1 - b_{z,j}) + b_{z,j} x (1 - alpha'_j) .    (3.10)

The question of using BWER for propagation is also contestable. Indeed, few significant signal processing metrics can be deduced from it. However, the objective of this section is to point out the interest of basing the analytical error estimation on models trained using simulated values of approximate adders, as it is the only way to catch their many different natures. Also, the interest of finding ways to reduce the storage cost of the models while sacrificing a minimum of estimation accuracy has been highlighted by the use of k in the training and the propagation method. In the future, methods derived from BWER training and propagation could be developed, leveraging other metrics for the propagation and more efficient storage compression methods. Indeed, as discussed in Section 3.3.2, important data are likely to be lost, especially in multipliers, when choosing bad parameters for the model. After many unsuccessful attempts and a deep study of the literature, a more general conclusion about modeling approximate operator error is that there seems to be no better method than Monte Carlo simulation. Indeed, simple models always suffer from high imprecision, which does not allow them to be used in a real system design process, while complex models giving reasonably good results always come with a high storage or computational cost approaching the cost of Monte Carlo simulation. Moreover, simulating approximate operators can be done with a potentially good computational efficiency. Indeed, most of them are essentially a composition of small exact adders or multipliers.
When computing using a CPU, it is therefore possible to use the integer arithmetic units to accelerate the computations using high-level code instead of heavier gate-level descriptions. There are also good opportunities to use HLS on the approximate operator code to simulate them on FPGA, accelerating computation using DSPs. In the next section, pseudo-simulation leveraging approximate operators is used for the reproduction of VOS effects.

In Section 1.1.1, functional approximation leveraging VOS was discussed. In this section, a method to reproduce the effects of VOS on arithmetic operators using models based on approximate operators is presented. As for the BWER method developed above, it is based on model training applied to the BER at each significance position of an operator. Voltage scaling has been used as a prominent technique to improve energy efficiency in digital systems, as scaling down the supply voltage results in a quadratic reduction of the energy consumption of the system. Reducing the supply voltage induces timing errors in the system that are corrected through additional error detection and correction circuits. A class of circuit-level approximation is achieved by applying dynamic voltage and frequency scaling aggressively to an accurate operator. Due to the dynamic control of voltage and frequency, timing errors due to scaling can be controlled flexibly in terms of trade-off between accuracy and energy. This method is referred to as Voltage Over-Scaling (VOS). It has the potential to unlock opportunities of higher energy efficiency by operating the transistors near or below the threshold. VOS-based approximate operators can be used when error-resilient applications are considered. Despite its high efficiency in terms of energy savings, sub-threshold VOS has important drawbacks. First, process variability makes its effects hard to predict in a general way, since two instances of a same chip are likely to behave differently.
Second, besides on-chip measurements or transistor-level simulation, there is no suitable method able to give a prediction of these effects. On-chip measurements are costly in terms of human resources, and observing the effects of VOS at wire level is impossible: adding hardware at that level would modify the nature of what is intended to be observed. Transistor-level simulation (such as SPICE), on the other hand, is accurate. However, the computational resources and simulation time necessary for large system observation are prohibitive. In this section, we intend to reproduce the effects of VOS at arithmetic operator scale, leveraging approximate operators. This allows for much faster and low-resource simulation, while keeping satisfying accuracy compared to the reality. For this, we propose a new modeling technique that is scalable for large-size operators and compliant with different arithmetic configurations. The proposed model is accurate and allows for fast simulations at the algorithm level by imitating the faulty operator with statistical parameters. We also characterize the basic arithmetic operators using different operating triads (combinations of supply voltage, body-biasing scheme and clock frequency) to generate models for approximate operators. Error-resilient applications can be mapped with the generated approximate operator models to achieve a better trade-off between energy efficiency and error margin. In our experiments using 28nm FDSOI technology, we achieve a maximum energy efficiency of 89% for basic operators like 8-bit and 16-bit adders, at the cost of 20% Bit Error Rate (ratio of faulty bits over total bits), by operating them in the near-threshold regime.

Characterization of Arithmetic Operators

In this section, the characterization of arithmetic operators is discussed for voltage over-scaling based approximation. Characterization of arithmetic operators helps to understand the behaviour of the operators with respect to varying operating triads.
Adders and multipliers are the most common arithmetic operators used in datapaths. In this work, different adder configurations are explored in the context of the near-threshold regime. An operating triad is defined as (V_dd, V_bb, T_clk), where V_dd is the supply voltage, V_bb is the body-biasing voltage, and T_clk is the clock period. In ideal conditions, the arithmetic operator functions without any errors. Also, EDA tools introduce additional timing margin in the datapaths during Static Timing Analysis (STA) due to clock path pessimism. This additional timing prevents timing errors due to variability effects. Due to the limited availability of design libraries for near/sub-threshold computing, it is necessary to use SPICE simulation to understand the behaviour of arithmetic operators in different voltage regimes. By tweaking the operating triads, timing errors e are invoked in the operator and can be represented as

e = f(V_dd, V_bb, T_clk)    (3.11)

Characterization of arithmetic operators helps to understand the point of generation and propagation of timing errors. Among the three parameters in the triad, scaling V_dd causes timing errors due to the dependence of the operator's propagation delay t_p on V_dd, such as

t_p = V_dd . C_load / (k . (V_dd - V_t)^2)    (3.12)

The body-biasing potential V_bb is used to vary the threshold voltage V_t, thereby increasing the performance (decreasing t_p) or reducing the leakage of the circuit. Due to the dependence of t_p on V_t, V_bb is used alone or in tandem with V_dd to control the timing errors. Scaling down V_dd improves the energy efficiency of the operator due to the quadratic dependence of the total energy E_total = V_dd^2 . C_load on V_dd. A mere increase in T_clk does not reduce the energy consumption, though it reduces the total power consumption of the circuit:

P_total = alpha . V_dd^2 . (1 / T_clk) . C_load    (3.13)

Therefore, T_clk is scaled along with V_dd and V_bb to achieve high energy efficiency.
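Equations 3.12 and 3.13 can be evaluated numerically as follows. This is only a sketch: the constant k of Equation 3.12 and all numerical values used below are illustrative, not characterized values.

```cpp
#include <cassert>
#include <cmath>

// Equation 3.12: propagation delay t_p = Vdd * C_load / (k * (Vdd - Vt)^2).
double t_p(double vdd, double vt, double c_load, double k) {
    return vdd * c_load / (k * (vdd - vt) * (vdd - vt));
}

// Equation 3.13: dynamic power P_total = activity * Vdd^2 * C_load / T_clk.
double p_total(double act, double vdd, double c_load, double t_clk) {
    return act * vdd * vdd * c_load / t_clk;
}
```

These expressions reproduce the two trends discussed above: the delay grows sharply as V_dd approaches V_t (hence timing errors under VOS at a fixed T_clk), while the power scales quadratically with V_dd.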
Characterization of Adders

The adder is an integral part of any digital system. In this section, two adder configurations, the Ripple Carry Adder (RCA) and the Brent-Kung Adder (BKA), are characterized for circuit-level approximation. The ripple carry adder is a sequence of full adders performing serial prefix-based addition. The RCA takes n stages to compute an n-bit addition. In the worst case, the carry propagates through all the full adders, making the RCA the adder configuration with the longest carry chain. The longest carry chain corresponds to the critical path of the adder, based on which the frequency of operation is determined. In contrast, the Brent-Kung adder is a parallel prefix adder. The BKA takes 2 log2(n) - 1 stages to compute an n-bit addition. In the BKA, carry generation and propagation are segmented into smaller paths and executed in parallel. The behaviour of an arithmetic operator in the near/sub-threshold region is different from the super-threshold region. In the case of an RCA, when the supply voltage is scaled down, the expected behaviour is the failure of the critical paths from the longest to the shortest with respect to the reduction of the supply voltage. Fig. 3.11 shows the effect of voltage over-scaling on an 8-bit RCA. When the supply voltage is reduced from 1V to 0.8V, the MSBs start to fail. As the voltage is further reduced to 0.7V and 0.6V, more BER is recorded on the middle-order bits rather than on the most significant bits. For V_dd = 0.5V, all the middle-order bits reach a BER of 50% and above. A similar behaviour is observed for the 8-bit BKA, shown in Fig. 3.12, for V_dd values of 0.6V and 0.5V. This behaviour imposes limitations on modelling approximate arithmetic operators in the near/sub-threshold region using standard models. The behaviour of arithmetic operators during voltage over-scaling in the near/sub-threshold region can be characterized by SPICE simulations. But SPICE simulators take a long time (4 days with an 8-core CPU) to simulate the exhaustive set of input patterns needed to characterize arithmetic operators.
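The input-dependent longest carry-propagation chain, which is central both to the critical-path discussion above and to the model of the next section, can be computed as follows. This is a sketch: the convention of counting a chain as a generate position followed by consecutive propagate positions is an assumption, and the function name is illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <algorithm>

// Longest carry-propagation chain in x + y: a chain starts at a generate
// position (x_i AND y_i) and extends through consecutive propagate
// positions (x_j XOR y_j); its length is the distance travelled by the carry.
int max_carry_chain(uint32_t x, uint32_t y, int n) {
    int best = 0, cur = 0;
    for (int i = 0; i < n; ++i) {
        int g = (int)((x >> i) & (y >> i) & 1u);
        int p = (int)(((x >> i) ^ (y >> i)) & 1u);
        if (g) cur = 1;            // a new carry is generated here
        else if (p && cur) ++cur;  // the carry keeps propagating
        else cur = 0;              // the carry is killed
        best = std::max(best, cur);
    }
    return best;
}
```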
Modelling of VOS Arithmetic Operators

As stated previously, there is a need to develop models that can simulate the behavior of faulty arithmetic operators at the functional level. In this section, we propose a new modelling technique that is scalable for large-size operators and compliant with different arithmetic configurations. The proposed model is accurate and allows for fast simulations at the algorithm level by imitating the faulty operator with statistical parameters. As VOS provokes failures on the longest combinatorial datapaths in priority, there is clearly a link between the impact of the carry propagation path on a given addition and the error issued from this addition. Figure 3.13 illustrates the needed relationship between the hardware operator controlled by operating triads and the statistical model controlled by statistical parameters P_i. As the knowledge of the inputs gives the necessary information about the longest carry propagation chain, the values of the inputs are used to generate the statistical parameters that control the equivalent model. These statistical parameters are obtained through an off-line optimization process. Given a pair of inputs (in_1, in_2), the goal is to find C_max minimizing the distance between the output of the hardware operator and the output of the equivalent modified adder. This distance can be defined by the accuracy metrics listed above. Hence, C_max is given by:

C_max(in_1, in_2) = Argmin_{C in [0,N]} ||x(in_1, in_2), x^(in_1, in_2)||

where ||x, y|| is the chosen distance metric applied to x and y. As the search space for characterizing C_max for all sets of inputs is potentially very high, C_max is characterized only in terms of its probability of appearing as a function of the theoretical maximal carry chain of the inputs, denoted as P(C_max = k | C_th_max = l). This way, the mapping space of 2^2N possibilities is reduced to (N + 1)^2 / 2. Table 3.5 gives the template of the probability values needed by the equivalent modified adder to produce an output.
Table 3.5 - Carry propagation probability table of the modified 4-bit adder

                C_max^th
C_max      0    1        2        3        4
  0        1    P(0|1)   P(0|2)   P(0|3)   P(0|4)
  1        0    P(1|1)   P(1|2)   P(1|3)   P(1|4)
  2        0    0        P(2|2)   P(2|3)   P(2|4)
  3        0    0        0        P(3|3)   P(3|4)
  4        0    0        0        0        P(4|4)

The optimization algorithm used to construct the modified adder is shown in Algorithm 2. For each input pair (in_1, in_2) in the vector of training inputs, the output x of the hardware adder configuration is computed. Based on the particular input pair (in_1, in_2), the maximum carry chain C_max^th corresponding to this pair is determined. The output x̂ of the modified adder, with the three input parameters (in_1, in_2, C), is computed. The distance between the hardware adder output x and the modified adder output x̂ is calculated, based on the above-defined accuracy metrics, for the successive iterations of C. The flow continues over the entire set of training inputs.

Algorithm 2 Optimization Algorithm
  P(0 : N_bit_adder | 0 : N_bit_adder) ← 0
  for (in_1, in_2) ∈ training_inputs do
    x ← add_hardware(in_1, in_2)
    C_max^th ← max_carry_chain(in_1, in_2)
    max_dist ← +∞
    C_max_temp ← 0
    for C from C_max^th down to 0 do
      x̂ ← add_modified(in_1, in_2, C)
      dist ← ‖x, x̂‖
      if dist <= max_dist then
        max_dist ← dist
        C_max_temp ← C
      end if
    end for
    P(C_max_temp | C_max^th) ← P(C_max_temp | C_max^th) + 1
  end for
  P(: | :) ← P(: | :) / size(training_inputs)

Once the offline optimization process is performed, the equivalent modified adder can be used to generate the outputs corresponding to any couple of inputs in_1 and in_2. To imitate the exact operator subjected to VOS triads, the equivalent adder is used in the following way:

1. Extract the theoretical maximal carry chain C_max^th which would be produced by the exact addition of in_1 and in_2.

2. Pick a random number and use it to choose, in the column representing C_max^th, a row of the probability table according to the listed probabilities; assign the corresponding value to C_max.
3. Compute the sum of in_1 and in_2 with a maximal carry chain limited to C_max.

For the experiments, the equivalent modified adder used is the ACA, presented in Section 1.4.1.1. As a reminder, for an n-bit ACA parameterized by k, each output sum bit is calculated considering only the k previous input carries. This approximated operator is chosen because its effects represent quite well the effects of VOS, with errors occurring on the critical path, i.e. the carry chain. Therefore, the control parameter used in the optimization of the model is the value of k. Figure 3.15 shows the estimation error of the models of the different adders, based on the above-defined accuracy metrics. SPICE simulations are carried out for 43 operating triads with 20K input patterns. Input patterns are chosen in such a way that all input bits have an equal probability of propagating a carry in the chain. Figure 3.15a plots the SNR of the 8- and 16-bit RCA and BKA adders. The MSE distance metric shows the highest mean SNR, followed by the weighted Hamming distance and Hamming distance metrics. Since MSE and weighted Hamming distance take the significance of bits into account, their resulting mean SNRs are higher than for the Hamming distance metric. Figure 3.15b shows the plot of the normalized Hamming distance of all four adders. In this plot, the MSE and Hamming distance metrics are almost equal, with a slight advantage for the non-weighted Hamming distance, which is expected since this metric gives all bit positions the same impact. Both 8-bit adders have the same behavior in terms of the distance between the output of the hardware adder and the modified adder. On the other hand, the 16-bit RCA is better in terms of SNR compared to its BKA counterpart. These results demonstrate the accuracy of the proposed approach to model the behavior of operators subjected to VOS in terms of approximation. This method was presented in [3], along with error versus energy SPICE simulations under VOS.
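The three steps above can be sketched in Python with the ACA as equivalent modified adder (k previous carries per sum bit, as defined above); the probability table is assumed to come from the offline optimization, and the 4-bit table given at the end is purely illustrative:

```python
import random

def aca_add(a, b, n, k):
    """n-bit Almost-Correct Adder: the carry into each sum bit is
    recomputed from the k previous bit positions only (zero carry
    assumed at the start of each window)."""
    result = 0
    for i in range(n):
        carry = 0
        for j in range(max(0, i - k), i):
            aj, bj = (a >> j) & 1, (b >> j) & 1
            carry = (aj & bj) | (carry & (aj ^ bj))
        result |= (((a >> i) ^ (b >> i) ^ carry) & 1) << i
    return result

def max_carry_chain(x, y, n):
    """Longest generate-then-propagate carry chain of x + y (step 1)."""
    longest = 0
    for i in range(n):
        if (x >> i) & (y >> i) & 1:
            length, j = 1, i + 1
            while j < n and ((x ^ y) >> j) & 1:
                length, j = length + 1, j + 1
            longest = max(longest, length)
    return longest

def model_output(in1, in2, n, table, rng=random):
    """Imitate the VOS-faulty adder: sample C_max from the trained
    probability table (step 2), then add with a limited chain (step 3)."""
    c_th = max_carry_chain(in1, in2, n)
    r, acc, c_max = rng.random(), 0.0, 0
    for k, p in table[c_th].items():
        acc += p
        if r < acc:
            c_max = k
            break
    return aca_add(in1, in2, n, c_max)

# purely illustrative 4-bit table: at this hypothetical operating triad,
# length-4 carry chains complete only half of the time
toy_table = {0: {0: 1.0}, 1: {1: 1.0}, 2: {2: 1.0}, 3: {3: 1.0},
             4: {4: 0.5, 3: 0.5}}
```

With a degenerate table where P(C_max = l | C_max^th = l) = 1 for every l, the model reproduces the exact adder, since an ACA whose window covers the longest actual carry chain is error-free.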
In these results, important energy savings are obtained using VOS, with no or limited errors on the output. Figure 3.16 shows the results obtained for different voltage triads on a 16-bit RCA (the horizontal axis of the figure enumerates the triads, e.g. (0.53, 0.6, ±2)). The link between the intensity of the error, represented here as BER, and the energy savings is clearly established, as well as the many possible tuning knobs. This emphasizes the need for models such as the one presented here, so that tuning a faulty circuit does not take days to weeks. However, the method presented in this section has the main drawback of depending on simulation results for its training. As these simulations are extremely long, only 20,000 simulated points at best could be used to train this model, which is very low and does not ensure the robustness of the generated outputs. Moreover, it is still hard to test the accuracy of this method on more complex systems, since transistor-level SPICE simulations have prohibitive computational time and memory cost.

Conclusion

As a conclusion for this chapter, approximate computing error modelling is a very complex task. Today, there is still no truly suitable method for the analytical propagation of errors at the application level, as exists for FxP arithmetic.
Indeed, existing techniques always come with a major drawback: the accuracy of the estimation always has to be traded for memory or computational cost. There are two main reasons for this:

1. The output error strongly depends on the inputs. A variation of 1 bit in an input can easily switch from a perfectly accurate result to an error with the amplitude of the MSB.

2. As stated and developed in Chapter 1, the countless approximate operators of different natures make general rules hard to find.

In the BWER propagation method proposed in Section 3.2, a solution implying reasonable memory cost with very fast error estimation after model training was proposed. However, as it is limited to BWER, only a limited number of metrics can be extracted, and the lack of accuracy of the estimation makes it not scalable to large systems. What was finally observed, over the many attempts made during the development of this thesis and the reading of the literature about integer approximate operators, is that when accuracy of error estimation is sought, Monte Carlo simulation is often the best solution. First, it is easier to extract any metric from the results, and especially metrics relative to an application. Moreover, the particularity of approximate operators is that, to be energy efficient, they have to remain quite simple and can often be represented by series of additions. Therefore, their functions can often be efficiently coded using the integer functions supported by all CPUs, so they execute very fast and Monte Carlo simulations remain efficient. In this chapter, we also showed that approximate operator functions can be useful to represent complicated physical phenomena such as VOS faults, combining them to estimate these faults using Monte Carlo simulation.
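To illustrate the point above, an approximate operator coded with plain integer functions can be pushed through hundreds of thousands of Monte Carlo trials in seconds; the truncation-based adder used here is only a stand-in example:

```python
import random

def add_trunc(a, b, t):
    """Approximate adder discarding the t LSBs of each operand,
    expressible with the integer operations of any CPU."""
    return ((a >> t) + (b >> t)) << t

def monte_carlo_mse(op, width, samples, seed=0):
    """Empirical MSE of `op` against exact addition over random inputs."""
    rng = random.Random(seed)
    acc = 0
    for _ in range(samples):
        a, b = rng.getrandbits(width), rng.getrandbits(width)
        err = op(a, b) - (a + b)
        acc += err * err
    return acc / samples

mse = monte_carlo_mse(lambda a, b: add_trunc(a, b, 4), 16, 100_000)
```

Any other metric (BER, error rate, error PDF) can be accumulated in the same loop, which is what makes simulation-based characterization so flexible compared to analytical propagation.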
In the next chapter, the FxP and approximate operator paradigms are compared in terms of raw error and hardware performance, as well as in terms of error and performance in real-life signal processing applications. For this, only Monte Carlo simulation is used, so that the study does not depend on the accuracy variations of models.

Chapter 4

Approximate Operators Versus Careful Data Sizing

In Chapter 1, the fixed-point (FxP) and approximate (Apx) operator paradigms were presented. In Section 1.3, FxP arithmetic is presented, as well as quantization error. Classical techniques for FxP refinement, as well as the PSD propagation method, are presented in Chapter 2. The error management of Apx operators and its issues are discussed in Chapter 3. In this chapter, both arithmetic paradigms are compared. On the one hand, FxP arithmetic relies on the use of accurate integer operators, the inaccuracy being induced by the quantization process, mostly at arithmetic operator outputs. On the other hand, approximate operators rely on inaccurate designs inducing errors by nature. Therefore, this chapter compares data sizing to functional approximation. For this, an open-source hardware performance (area, delay, power, energy) and accuracy characterization framework was developed during the thesis related to this document. This framework allows fast, accurate and user-friendly simulation-based area, delay and power estimation of approximate arithmetic operators in a general meaning: FxP, floating-point (FlP) or Apx operators such as presented in Chapter 1. It comes with built-in approximate adder and multiplier libraries, real-life application benchmarks, and very complete scripts for result processing. The framework is presented in Section 4.1. The raw comparison between the FxP and Apx paradigms is presented in Section 4.2. Finally, the performance of both is compared on signal processing applications in Section 4.3.
APXPERF: Hardware Performance and Accuracy Characterization Framework for Approximate Computing

This section presents ApxPerf, the open-source hardware performance and accuracy characterization framework developed during the thesis. Two versions were developed. The first one, based on Bash and Matlab scripts and taking VHDL for hardware performance evaluation and C++ for error evaluation, is presented in Section 4.1.1. The second one, written in Python3, adds an HLS layer so that only a unique C++ source is needed for both functional and hardware simulation.

APXPERF: First Version

The first version of ApxPerf, presented in [Barrois et al., Leveraging power spectral density for scalable system-level accuracy evaluation], is a framework dedicated to approximate arithmetic operator characterization. It is composed of a hardware characterization part and an accuracy characterization part. The hardware characterization part is based on VHDL specifications of the operators. Given an approximate operator's VHDL code, the code is parsed to identify the clock and reset signals, as well as the two inputs and the output. The parameters of the approximate operator, which must be resolved at compile time, are set up using generic variables. The VHDL source is then compiled, along with linked hardware libraries provided by the user, using Synopsys Design Compiler, and a gate-level design is produced. Area and delay are extracted from PrimeTime reports. Then, a VHDL benchmark is generated and automatically interfaced with the operator to be characterized. The benchmark is then run using Modelsim from Mentor Graphics. A VCD file containing all the transitions at every gate interface, with a default 1 ps granularity, is generated. Finally, the VCD and SDF files and the technology libraries are passed to Synopsys PrimeTime, which produces a very accurate time-based estimation of the dynamic and static power of the circuit.
In parallel, a C source of the same operator is given to a C++ framework. The function parameters (composed of the operator inputs and output and the parameters of the operator) are parsed and a test-bench is generated. Then, the simulation is run. An important number of metrics are returned by the simulation:
• Mean Square Error (MSE),
• Mean Absolute Error (MAE),
• Mean Absolute Relative Error,
• Error Rate,
• Bit Error Rate (BER),
• Bitwise Error Rate (BWER),
• Acceptance Probability (AP) given a Minimum Acceptable Accuracy (MAA),
• Minimum and Maximum Error,
• Mean of Error (or error bias),
• Power Spectral Density (PSD) of error, and
• Probability Density Function (PDF) of error.
Hardware and accuracy simulations are run taking uniformly distributed inputs over the whole range of the operator to be tested. Accuracy simulations are optimized for parallel execution with OpenMP, for a high speed-up when used on a multicore machine. The overall scheme of the first version of ApxPerf is depicted in Figure 4.1, whose blocks cover the HDL flow (operator HDL description, RTL synthesis, gate-level simulation), the C simulation, and the final verification and data fusion steps. The framework comes with built-in C and VHDL versions of the following approximate operators:
• Almost-Correct Adder (ACA, see Section 1.4.1.1),
• Error-Tolerant Adder version IV (ETAIV, see Section 1.4.1.2),
• IMPrecise Adder for low-power Approximate CompuTing (IMPACT, see Section 1.4.1.5),
• Approximate Array Multiplier III (AAMIII, see Section 1.4.2.1), and
• FxP adders and multipliers with various sizes.
The ApxPerf framework also embeds several signal processing applications, only for the accuracy evaluation part: the Fast Fourier Transform (FFT), the K-means clustering algorithm, JPEG encoding and the motion compensation filter of the High Efficiency Video Codec (HEVC). These applications are further described in Section 4.3. The first version of ApxPerf was used for all the results in this chapter.
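Several of the metrics listed above can be computed from paired exact/approximate output samples; a minimal Python sketch of the simpler ones (MSE, MAE, error rate, BER, bias, min/max error), assuming unsigned w-bit outputs:

```python
def error_metrics(exact, approx, w):
    """Compute some of the listed metrics from paired output samples
    (exact[i] and approx[i] are the outputs for the same input vector)."""
    n = len(exact)
    errs = [a - e for e, a in zip(exact, approx)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    error_rate = sum(e != 0 for e in errs) / n           # erroneous outputs
    bit_flips = sum(bin((e ^ a) & ((1 << w) - 1)).count("1")
                    for e, a in zip(exact, approx))
    ber = bit_flips / (n * w)                            # erroneous bits
    return {"MSE": mse, "MAE": mae, "ER": error_rate, "BER": ber,
            "bias": sum(errs) / n, "min": min(errs), "max": max(errs)}
```

The PSD and PDF of the error, as well as BWER, require keeping the full error sequence (or per-bit counters) rather than scalar accumulators, but follow the same accumulation pattern.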
However, the second version, described in the next section, brings many improvements, mostly thanks to HLS and the usage of C++ templates for the parameterization of operators and simulations. The post-processing of the generated results and the generation of figures are performed using the Numpy and Matplotlib packages of Python. An execution trace of the ApxPerf v2 characterization of a 32-bit ct_float (see Chapter 5) is given in Listing 4.1. Another important improvement in the framework is the integration of hardware performance characterization in embedded signal processing applications. At the time of writing this thesis, only K-means clustering and the FFT were adapted from ApxPerf v1. Finally, the code of the approximate operators was replaced by a template-based C++ synthesizable library called apx_fixed, based on Mentor Graphics AC Datatypes v3.7 under the Apache 2.0 license. This library, included in ApxPerf v2, features:
• synthesizable operator overloading:
  - unary operators: unary −, !, ++, −−,
  - relational operators: <, >, <=, >=, ==, !=,
  - binary operators: +, +=, −, −=, *, *= (replaced by the approximate operators selected in the template values of the apx_fixed instance declaration; see the example below), <<, <<=, >>, >>=, and
  - the assignment operator from/to another instance of apx_fixed;
• non-synthesizable operator overloading:
  - the assignment operator from/to C++ native datatypes (float, double),
  - the output operator << for easy display and writing to files.
apx_fixed variables are represented by a fixed-point value whose width, integer part width, rounding and wrapping modes (inherited from ac_fixed) are parameterized in the template, as well as the name and parameters of the approximate operators to be used in additions and multiplications. A use case of the apx_fixed library is given in Listing 4.2. In this example, the result of the operation out = x × y + z is computed.
x and z are 8-bit numbers with a 2-bit integer part in FxP representation, denoted as (8, 2). y has a (10, 5) representation and the output out is represented on (7, 2). In line 14, the apx_fixed variables are initialized by casting double-precision floating-point values. The nearest representable value is set for each variable. E.g., x is set from 0.13236. As it only has a 6-bit fractional part, its value is 0.125 (00.001000 in binary representation). In line 18, the operation is performed. Thanks to operator overloading, the first operation x × y is performed using the approximate multiplier given in the template of the type declaration of x (the first operand), which here is the classical fixed-point multiplication. As x and y are (8, 2) and (10, 5), the implicit result is stored in a number with representation (18, 7), according to the rules discussed in Section 1.3.4. Then the implicit result is added to z. The sum of the (18, 7) and (8, 2) representations returns an implicit (19, 8), according to Section 1.3.3. The addition is performed using ACA(19, 3), based on the template values and the size of the inputs. Finally, the result is cast to the (7, 2) number out. For this, the bits from position 6 to 12 of the result are extracted and put in out after truncation (because of the directive AC_TRN in the type apx3_t of out). This computation is fully synthesizable. During hardware optimization, the paths leading to the generation of bits 13 to the MSB 18 would be pruned, as well as the bits from the LSB to bit position 5, because truncation does not consider them.
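The quantization and bit-width propagation described above can be checked with a small emulation; this Python sketch mimics the (w, i) fixed-point rules of the example (round-to-nearest on assignment, truncation on the final cast) independently of the apx_fixed implementation, and the values of y and z are illustrative since the full listing is not reproduced here:

```python
import math

def to_fixed(value, w, i):
    """Quantize to a (w, i) fixed-point value, rounding to nearest.
    Widths/overflow are not enforced here; the example values stay in range."""
    step = 2.0 ** -(w - i)
    return round(value / step) * step

def truncate(value, w, i):
    """Truncate to a (w, i) representation, dropping lower bits (AC_TRN-like)."""
    step = 2.0 ** -(w - i)
    return math.floor(value / step) * step

x = to_fixed(0.13236, 8, 2)        # -> 0.125, i.e. 00.001000 in binary
z = to_fixed(0.25, 8, 2)           # illustrative operand value
y = to_fixed(1.40625, 10, 5)       # illustrative operand value
product = x * y                    # implicit (18, 7): exact in this emulation
out = truncate(product + z, 7, 2)  # final cast: only bits 6..12 are kept
```

As in the listing, only the final cast loses information here; an approximate adder in place of the exact `+` would additionally corrupt the intermediate (19, 8) value.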
As a matter of fact, changing the overflow and rounding modes respectively to saturation and rounding-to-nearest in the template declarations would increase both accuracy and hardware cost. Finally, line 25 gives an example of the overloading of the output operator << in apx_fixed. The code output given below reflects the successive casts and approximations. The syntax developed in the apx_fixed library allows for fast and easy development and testing of approximate arithmetic kernels. The software flexibility of C++ and the efficiency of HLS tools allow complex circuits to be produced and simulated with benchmarks, whose generation is easy to fully automatize thanks to the overloading of type casting. A custom synthesizable floating-point library embedded in ApxPerf v2, called ct_float, is described in the last chapter of this thesis.

Raw Comparison of Fixed-Point and Approximate Operators

In this section, the results of the raw comparison between FxP and Apx operators are presented. They were all obtained using the first version of ApxPerf, described in Section 4.1.1, and were published in the DATE'17 conference [2]. A first comparison of FxP and Apx operators consists in measuring the difference in their accuracy with regard to a performance metric (energy, area, delay). For this, the approximate adders ACA, ETAIV and IMPACT and the approximate multipliers AAMIII and FBMIII (respectively denoted as AAM and FBM in this section), described in Chapter 1, were compiled and tested with a number of bits varying from 2 to 32 and all possible combinations of parameters. FxP operators (i.e. classical integer adders and multipliers) were tested in the same way, with all possible combinations of input and output sizes (inputs from 2 to 32, outputs from 2 to 32 for the adders and 2 to 64 for the multipliers). All power results are given for a clock frequency of 100 MHz.
With ApxPerf v1, Design Compiler (2013.03) was used for RTL synthesis with a 28nm FDSOI technology library, Modelsim (10.3c) for gate-level simulation and PrimeTime (2013.12) for power analysis. Simulation and power analysis are performed on 10^5 random input samples. The extraction of error metrics based on the C description was computed on more than 10^7 random inputs. Results for the adders are presented in Figure 4.3 and provide MSE versus power, area, delay, and PDP. For the sake of clarity, results are only presented here for 16-bit input operators. The adder with 16-bit output is considered as the correct adder and used as the reference. Truncated and rounded FxP adder outputs vary from 15 to 2 bits (from left to right). For approximate adders, results are given by varying: the number of approximated LSBs (M) and the types of FA for IMPACT, the maximal size of the carry chain (P) for ACA, and the block size (X) for ETAIV. What can first be noticed is that, in terms of power consumption and design area, FxP operators are better than Apx operators for the same MSE, except at very low accuracy. However, in terms of delay, most approximate operators are faster, but they cannot reach the same level of accuracy when more than 8 bits are kept for the fixed-point output. In terms of energy, the PDP of FxP adders is quite close to that of the approximate ones when fewer than 8 output bits are kept. However, ACA and IMPACT are able to spend less energy without sacrificing much accuracy. In some applications, all output bits have the same weight on the error. Therefore, results for the BER metric are presented in Figure 4.4 for the same adders as previously. Approximate operators achieve very good BER performance when compared to fixed-point operators. Considering power and area, truncated and IMPACT adders perform similarly for any fixed BER. However, for delay and energy per addition, most approximate operators perform significantly better than truncated or rounded FxP operators.
When not considering bit significance in the operands, FxP operators are penalized by the suppression of part of their output bits, implicitly forcing them to zero. Results for the multipliers are presented in Table 4.1. The 16-to-32-bit integer multiplier is considered as the correct multiplier for accuracy reference. As the AAM and FBM multipliers are fixed-width operators (16-bit inputs and output), comparison results are provided only for the truncated FxP multiplier with 16-bit output (MUL_t(16, 16)). The fixed-width MUL_t truncated multiplier reaches the best accuracy and consumes the least power. Although MUL_t is slower than FBM, its energy per multiplication (PDP) is 44% better than both approximate operators. FBM is 37% faster than MUL_t and AAM is 17% smaller. However, FBM is extremely inaccurate in terms of MSE, its results being 7 orders of magnitude more erroneous than fixed-point. Both AAM and FBM are worse than MUL_t by about 19% for the BER metric. As a conclusion for this operator-level performance analysis of various approximation schemes, fixed-point operators perform better when considering the MSE metric, representative of signal processing applications, while approximate adders show good BER performance. However, the importance of the output bit-width was not taken into account in these results. Indeed, when the bit-width is reduced, as in truncated or rounded operators, the amount of data to transfer to load and store the operator inputs and output is consequently reduced. This shortening in bit-width has a major impact on energy consumption and must be considered for real-life applications.
Thus, although inexact and truncated-or-rounded operators seem to reveal the same gross performance, selecting the latter decreases the energy cost by avoiding the transfer and memory storage of useless erroneous bits.

Comparison of Fixed-Point and Approximate Operators on Signal Processing Applications

In this section, the effect of fixed-point and approximate adders and multipliers is evaluated on different real-life applications, leveraging relevant and adapted metrics. The considered applications include the Fast Fourier Transform (FFT), JPEG image encoding, motion compensation filtering in the context of High-Efficiency Video Coding (HEVC) decoding, and K-means clustering.

Fast Fourier Transform

As a classical computation kernel used in many signal processing and communication applications, the FFT is relevant for this study. ApxPerf v1 provides an instrumented, tunable FFT kernel. This section presents results on a 32-sample radix-2 FFT computed on 16-bit input/output data. In a first experiment, only the adders are considered. The total energy to compute the FFT is estimated by:

PDP_FFT = Σ_{i=1}^{N_add} PDP_add,i + Σ_{i=1}^{N_mul} PDP_mul,i    (4.1)

where N_add and N_mul are the total numbers of additions and multiplications, respectively.

(4.2)

The exact multipliers used alongside the modified adders are optimally sized according to the adder bit-width, so they are not a source of error. For any accuracy constraint, FxP adders (truncation or rounding) notably dominate approximate adders. This supremacy can be explained by two factors: the relative energy cost of the multipliers with regard to the adders, and the smaller operand size needed by the multiplier when reducing the accuracy of the additions. This figure also shows the great potential of energy reduction when playing with the accuracy of the fixed-point operators.
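Equation (4.1) is a simple energy accumulation over operator invocations; as an illustration, the sketch below evaluates it for a radix-2 FFT, under the usual butterfly decomposition, and with per-operation PDP values that are purely illustrative placeholders (not the measured 28nm figures):

```python
import math

def fft_pdp(n_samples, pdp_add, pdp_mul):
    """Evaluate Eq. (4.1) for an n-point radix-2 FFT, assuming each of
    the n/2 * log2(n) butterflies costs one complex multiplication
    (4 real mul + 2 real add) and two complex additions (4 real add)."""
    butterflies = (n_samples // 2) * int(math.log2(n_samples))
    n_mul = 4 * butterflies
    n_add = 6 * butterflies
    return n_add * pdp_add + n_mul * pdp_mul

# illustrative PDP values in pJ per operation (not measured data)
total = fft_pdp(32, pdp_add=0.05, pdp_mul=0.4)
```

In the actual framework, each PDP_add,i and PDP_mul,i depends on the operator variant and bit-width used at that point of the dataflow, which is what makes reduced-width fixed-point adders also shrink the downstream multipliers.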
A first conclusion here is that reducing the FxP adder size yields a smaller entropy of the data processed, transported and stored than keeping the same bit-width along the computations while the data contain errors because the computations rely on approximate operators. The same experiment is performed using the 16-bit AAM and FBM multipliers and a 16-bit truncated FxP multiplier, while keeping 16-bit exact adders. Table 4.2 shows that the AAM and FxP multipliers differ only by 6 dB of accuracy. However, AAM consumes 78% more energy than the reduced-precision fixed-point equivalent. The results on the FFT support the conclusion of Section 4.2. Providing results with both approximate adders and multipliers in the same simulation would not lead to a different conclusion.

JPEG Encoding

The second application is a JPEG encoder, representative of the image processing domain. The main algorithm of this encoder is the Discrete Cosine Transform (DCT), whose operators are approximated. The accuracy is evaluated with the MSSIM metric [Wang et al., Image quality assessment: from error visibility to structural similarity], which is representative of the human perception of image degradation. This metric results in a score between [0, 1], 1 representing a perfect quality. To obtain Figure 4.6, the DCT energy consumption is compared for all the presented approximate adders, as well as for the fixed-point versions. The algorithm is applied with an encoding effort of 90% on the image Lena. As observed for the FFT, the fixed-point versions of the algorithm are much more energy efficient than the versions with approximate operators, mostly thanks to the bits dropped during the calculation. It is important to notice that the nature of the error generated by approximate operators (a few high-amplitude errors in general) is very problematic in the context of image processing. Figure 4.7d is generated using 16-bit IMPACT with 8-bit exact addition and 8-bit approximate addition using approximation number 3 (see Section 1.4.1.5).
The best visual result clearly occurs for the fixed-point addition. The result of the IMPACT adder is also quite good, thanks to its 8-bit accurate adder on the MSB part, but, as shown above, its larger output implies an important energy overhead. Moreover, some visual artefacts are present in sharp details. For ETAIV, the image is quite degraded, with horizontal and vertical lines giving a noisy appearance to the image. For ACA, strong artefacts are visible everywhere.

Motion Compensation Filter for HEVC Decoder

HEVC is the new generation of video compression standard. The efficient prediction of a block from the others requires fractional-position Motion Compensation (MC), carried out by interpolation filters. These MC filters are modified using fixed-point and approximate operators to test their accuracy and energy efficiency. The previously described MSSIM metric is used to determine the output accuracy of the filter on a classical signal processing image.

K-means Clustering

The last experiment presented in this section is K-means clustering. Given a bidimensional point cloud, this algorithm classifies the points by finding centroids and assigning each point to the cluster defined by the nearest centroid. At the core of K-means clustering is distance computation. More details about K-means clustering are given in Section 5.3.1. For the experiment, 5 sets of 5·10^3 points of data were generated around 10 random points with a Gaussian distribution. The accuracy metric is the success rate, from 0 to 1, representing the proportion of points classified in the correct cluster. Table 4.5 presents the success rate and the energy spent in distance computation when replacing the exact adders by fixed-point and approximate versions.
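The distance computation kernel at the core of this experiment can be sketched as follows; the truncation parameter emulating reduced-precision fixed-point operands is an illustrative stand-in for the operators actually characterized in the experiment:

```python
def nearest_centroid(point, centroids, drop_bits=0):
    """Assign a 2-D integer point to its nearest centroid using squared
    Euclidean distance; drop_bits LSBs of each operand are truncated to
    emulate a reduced-precision fixed-point datapath."""
    def q(v):
        return (v >> drop_bits) << drop_bits   # truncation
    best, best_d = 0, None
    for idx, (cx, cy) in enumerate(centroids):
        dx, dy = q(point[0]) - q(cx), q(point[1]) - q(cy)
        d = dx * dx + dy * dy
        if best_d is None or d < best_d:
            best, best_d = idx, d
    return best
```

Dropping too many bits can flip an assignment to the wrong centroid, which is exactly what the success rate of Table 4.5 captures at application level.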
For the truncated fixed-point version, the energy spent in multiplication is inherently inferior to the approximate versions, thanks to the reduction of bit-width, leading to a total energy reduction by more than half for a success rate of 99%, and even by nearly a factor of 10 for a success rate of about 86%. In spite of the theoretical competitiveness of approximate operators, their advantages are likely to be lost at application level. Indeed, unlike fixed-point operators, their accuracy reduction is obtained by simplifying the operator structure but not by reducing the operator output bit-width. This reduces the energy of the considered operator, but does not have a positive impact on the other operators, as is the case for fixed-point.

Considerations About Arithmetic Operator-Level Approximate Computing

In this chapter, two types of hardware approximation were compared: fixed-point arithmetic, relying on truncated and rounded accurate operators with careful data sizing, and approximate operators. A direct comparison using the ApxPerf framework showed that both techniques are competitive, depending on the observed error and performance metrics. However, stepping back and observing some state-of-the-art applications showed that using approximate operators often leads to an important overhead, since the designed architecture manipulates larger data containing, on average, more erroneous useless information bits. Approximate operator outputs show a high error entropy compared to quantized exact outputs, for which all the error is virtually contained in the dropped bits; hence, the error generated by approximate operators may have a huge impact on the downstream operations using their output. Mathematically, it is always preferable to propagate many low-significance errors (symbolized by dropped bits in the fixed-point paradigm) than scarcer high-significance errors.
Indeed, in most approximate operators proposed in the literature, the probability for high-significance errors to occur is never negligible, leading to errors with amplitudes as high as the represented dynamic range. More generally, it has been shown that comparing the raw performance of operators does not necessarily reflect their performance in a given application context. Hence, a major stake for hardware approximation is now to consider it at a higher level, to take more parameters into consideration, leveraging relevant error metrics. An effort must also be made in the research for more efficient approximate multipliers, since they are responsible for the majority of the power consumption in computation-intensive applications. However, considering approximate operators in embedded processors, to replace or enhance their integer arithmetic unit, might still be a good option, since processor data size is fixed and cannot be application specific. After a conclusion in favor of fixed-point compared to functional approximate arithmetic operators, the studies led in the next chapter drop approximate operators to compare the fixed-point and custom floating-point paradigms.

If the rounding modes of both inputs are different (discouraged), then the first one parsed in the C++ code is selected. The ct_float representation and arithmetic operators were created to remain simple for energy efficiency, yet minimally secured, thanks to the combination of several choices. First, the ct_float mantissa is represented in [1, 2[ with an implicit 1. This allows a 1-bit accuracy benefit in general. However, subnormal numbers are not handled, which implies that a certain range of numbers is not representable around 0. The exponent is represented in a biased representation. The bias is not customizable for the moment and is set at the center of the exponent range, as in the IEEE 754 representation.
Using a biased representation instead of two's complement results in simpler exponent value comparisons, which are omnipresent in arithmetic operators. An important choice is that no flag bits are returned. In general, such flags are returned to indicate zero or subnormal cases for further management. However, these flags imply more bits to transfer in hardware (generally two), and the pipeline may be broken during the management of the corresponding special cases, leading to extra energy consumption. In return for not raising a zero-case flag, the number zero is directly managed by the arithmetic operators. For this, the value 0 must be representable. To represent 0, the nearest representable negative value to 0 is used. One representable value is sacrificed, but this does not imply any change in comparison operators, for instance. A slight overhead is then necessary in the arithmetic operators to detect the 0 value at the input. For the addition/subtraction, if one of the inputs equals 0, the second is selected. No extra delay is implied, since a simple short path may exist in parallel to the original adder. At the addition/subtraction output, the special value representing zero must be issued when the computed output mantissa is an all-zero vector, leading to a slight control overhead. It is also ensured that only the addition of two strictly opposite values can issue 0. For the multiplication, 0 detection at the input returns the 0 value, which only implies a very small overhead. It is ensured that only a multiplication by 0 can return 0. As subnormals are not representable by ct_float, the output is instead saturated to the smallest representable absolute value with the same sign. Towards infinity, the operators do not under/overflow: saturation to the highest representable absolute value of the same sign is returned. The combination of these choices implies a slight overhead in the operators which is not spent in external hardware management.
This local management implies less data stored or transferred over long distances, which represents global energy savings. A use case of ct_float is given by Listing 5.1. In this example, an FIR filter is applied to random data. First, on line 9, an array of ct_float is initialized with the impulse response of the filter. The coefficients are parsed to 16-bit ct_float with e = 7 and m = 9. Then, input and output signals x and y are declared on line 13. Random inputs are generated in double representation, parsed to ct_float and stored in array x. The random generation presented in this example is not synthesizable. Then, the synthesizable FIR filter is applied. Additions and multiplications use the overloaded operators developed in the ct_float library. Finally, the resulting filtered signal is displayed through overloading of the display operator <<. ApxPerf also comes with a random number generation library. This simplifies the generation of integer and floating-point values, using uniform or normal distributions with user-selected parameters. Several functions are available to guarantee that the generated random values are in the range of the representable values of the considered integer, fixed-point or floating-point operator. For custom floating-point operators such as ct_float, it is also possible to generate couples of inputs which guarantee the activation of the close path of the addition (see Section 1.2.2) with a given probability. It is important to have this path activated for a certain percentage of input couples to perform fair Monte Carlo time-based dynamic power estimation, as it is the most energy-costly computing path of the addition.
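The close-path constraint can be sketched as follows. In the classical two-path floating-point adder, the close path handles effective subtractions whose exponents differ by at most 1; the helper below (hypothetical, not the actual ApxPerf API) draws double-precision input pairs meeting that condition with a chosen probability.

```cpp
#include <cmath>
#include <cstdint>
#include <cassert>

struct Pair { double a, b; };

// Tiny deterministic LCG returning values in [0, 1); enough for a sketch.
inline double rnd(uint64_t& s) {
    s = s * 6364136223846793005ULL + 1442695040888963407ULL;
    return double(s >> 11) / 9007199254740992.0;        // divide by 2^53
}

// Draw (a, b) such that, with probability p, a and b have opposite signs and
// exponents differing by at most 1 (effective subtraction), i.e. the
// adder's close path is taken.
Pair draw_pair(double p, uint64_t& seed) {
    double a = std::ldexp(1.0 + rnd(seed), int(rnd(seed) * 20.0) - 10);
    if (rnd(seed) < p) {
        int d = (rnd(seed) < 0.5) ? 0 : 1;              // exponent gap 0 or 1
        double b = -std::ldexp(1.0 + rnd(seed), std::ilogb(a) - d);
        return {a, b};
    }
    double b = std::ldexp(1.0 + rnd(seed), int(rnd(seed) * 20.0) - 10);
    return {a, b};
}
```

Calling draw_pair with p = 0.25 reproduces the 25% close-path activation ratio mentioned later in this chapter.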
However, when performing a purely Monte Carlo simulation using inputs uniformly distributed over the whole range of representable values of a floating-point operator, the close path only has a very low probability of being activated. Therefore, it is important to consider this feature in the generation of random inputs. In the next section, the stand-alone performance of ct_float facing other custom floating-point libraries is evaluated, using ApxPerf and its random number generation library.

Performance of CT_FLOAT Compared to Other Custom Floating-Point Libraries

Reconfigurable architectures like FPGAs are used in more and more domains. The most recent and impacting example is the FPGA chip found in Apple's iPhone 7 and suspected to be used for artificial intelligence [Tilley]. These conditions illustrate the interest of customizable floating-point architectures. Indeed, combining the ease of use of the floating-point representation with low-energy small data widths makes these architectures very promising for the future of reconfigurable architectures. Recent years have seen the creation of several customizable floating-point libraries. Mentor Graphics, in its synthesizable C++ libraries AC Datatypes [Mentor Graphics], proposes the custom floating-point class ac_float. Based on the fixed-point library ac_fixed, ac_float allows for light floating-point computation, thanks to simple operators. The mantissa in the representation is not normalized and has no implicit 1. This allows for easy management of subnormals, but induces a potential loss of accuracy in computations. The mantissa is represented in signed two's complement, so the sign information is contained in the mantissa instead of using an extra sign bit.
However, there is no benefit to this choice, since two's complement represents a loss of 1 bit of precision compared to an unsigned representation. The choice of a two's complement representation of the mantissa also makes the comparison operator more complex. Moreover, many cases are not handled, such as zero or infinity. ac_float also supports a custom exponent bias, but managing the exponent bias comes with an overhead. ct_float was presented in Section 5. Other alternatives such as VFLOAT [Fang et al.] or OptiFEX [Mahzoon et al.] do exist, but are not taken into account in the study led in this chapter. VFLOAT proposes IEEE 754-2008 compliant customizable computing cores for existing FPGAs. OptiFEX generates floating-point computing cores targeting FPGAs, like FloPoCo. Table 5.1 recapitulates the different known properties of the ac_float, ct_float and FloPoCo floating-point representations. In this table, the number of additional bits in the representation takes as reference a representation with an implicit 1 in the mantissa and one sign bit. For an equal general accuracy, ac_float needs one more bit on the mantissa than ct_float and FloPoCo. However, with its 2-bit exception field, FloPoCo has the representation requiring the largest width, but also the highest computing reliability. The hardware performance comparison process for ac_float, ct_float and FloPoCo is as follows. All operators are characterized for an Application-Specific Integrated Circuit (ASIC) target in 28 nm FDSOI @ 1.0 V, 25°C. All designs are generated with a clock of 200 MHz.
As ac_float and ct_float are compatible with ApxPerf v2, this framework is used to perform the hardware performance characterization process. For time-based power analysis, the random inputs generated for adder/subtracter characterization ensure an activation of the close path for at least 50% of the computations. For FloPoCo, VHDL files of the operators and test benches are generated using a Stratix IV target, disabling all possible hardware acceleration which could allocate the DSP blocks used in FPGAs. Then, the design is compiled using Design Compiler, characterized using FloPoCo's generated benchmark in ModelSim, and power is estimated using PrimeTime. However, to our knowledge, the benchmark generated by FloPoCo does not ensure any proportion of activation of the close path, so the time-based estimated dynamic power could be underestimated. Moreover, FloPoCo's benchmark top design does not consider the input and output data registers, whereas the ApxPerf-generated benchmark does. This roughly represents an underestimation of about 5 to 10% of the total power for FloPoCo operators, which has to be kept in mind for the analysis of results.

Using Equation 5.2 implies that both static energy and dynamic energy are considered. Moreover, increasing the clock period of a same circuit should give the exact same energy per operation, since increasing T_clk will proportionally decrease the average dynamic power P_d given by the tool (since it is integrated over a proportionally longer time), while modifying neither the static power nor the operator critical path. Therefore, the proposed energy estimation metric avoids both issues of the classical metrics described previously. Also, taking the number of cycles of pipelined operators into consideration, as in Equation 5.2, provides a fair comparison between operators having a different number of cycles.
Indeed, let us consider two operators op_1 and op_2, where op_2 is the same circuit as op_1 but pipelined in 2 cycles (instead of 1 for op_1). Flip-flops excluded, both circuits are the same. The energy overhead brought by the flip-flops in op_2 is to some extent compensated by the smaller fan-outs in the circuit. This means that the dynamic powers of op_1 and op_2 are very close. However, if the pipeline is efficiently chosen, the critical path of op_2 should be half that of op_1. Under this hypothesis, the energy per operation of op_1 and op_2 should be the same. This is translated by Equation 5.2, where the division of T_cp is compensated by the number of cycles N_c. As a conclusion, with Equation 5.2, we have a measure of the energy per operation which can be considered as robust to:
• modification of the slack due to different clock periods, and
• pipelining of the operator.
With this metric, different operators, operating in slightly different conditions of frequency and pipelining, can be legitimately compared. From this point on, the total energy spent before stabilization metric is used each time energy per operation is mentioned. For the custom floating-point comparative study, half-precision and single-precision floating-point were tested. Half-precision corresponds to an exponent represented on 5 bits and a mantissa on 11 bits. Single-precision has an 8-bit exponent and a 24-bit mantissa. Both addition/subtraction and multiplication were tested for each of these precisions. The results of the comparative studies for 16-bit and 32-bit addition/subtraction and multiplication are given in Tables 5.2, 5.4, 5.3 and 5.5. The last two lines of the tables give the performance of ct_float relative to ac_float (resp. FloPoCo) (e.g., ct_float area is 2.15% higher than ac_float). At first sight, the three custom floating-point libraries give results in the same order of magnitude.
For 16-bit addition/subtraction, ct_float is 15% more energy-costly than both ac_float and FloPoCo, despite being as large as ac_float and 12% smaller than FloPoCo. The fastest 16-bit adder/subtracter is ac_float, followed by ct_float, which is 19% slower but 27% faster than FloPoCo. All performances are slightly in favor of ac_float for 16-bit addition/subtraction. For 16-bit multiplication, ac_float is beaten by both ct_float and FloPoCo. FloPoCo's multiplier is the smallest and has the lowest energy consumption. However, ct_float is 25% faster but consumes 32% more energy. It must be kept in mind, though, that there are registers at the inputs and outputs of ct_float and ac_float which are not present for FloPoCo, so the real gap should be narrower. 32-bit addition/subtraction shows very similar energy for ac_float, ct_float and FloPoCo. Indeed, ct_float is 9% worse than ac_float and 4% better than FloPoCo. Again, FloPoCo is the slowest operator, ct_float being 27% faster. The energy of 32-bit multiplication is strongly in favor of ct_float, which consumes more than 45% less energy than both ac_float and FloPoCo. ct_float is 13% smaller than ac_float and 49% smaller than FloPoCo. However, ac_float is 5% faster. In conclusion, the ac_float, ct_float and FloPoCo additions/subtractions and multiplications are quite competitive with one another. Though they all have different features (implicit 1 or not, special-case management, etc.), they are all quite close in terms of performance. Nevertheless, FloPoCo generally produces the largest and slowest operators, but not always with the highest energy consumption. This can be explained by the fact that the ac_float and ct_float operators are generated by a different software than FloPoCo, and therefore the underlying integer arithmetic operator architectures may not be the same.
Also, the values of the inputs for power estimation are generated differently for ac_float and ct_float on one side, and FloPoCo on the other side, thus activating the far and close paths differently during simulations. However, the main conclusion of this study is that the proposed custom floating-point library ct_float competes with the other existing libraries and gives comparable performance results. In the following section, ct_float is used as a reference for the comparison with fixed-point arithmetic, first in stand-alone versions, and then leveraging classical signal processing applications.

Stand-Alone Comparison of Fixed-Point and Custom Floating-Point

Because of the different nature of floating-point and fixed-point errors, this section only compares them in terms of area, delay, and energy. Indeed, floating-point error magnitude strongly depends on the amplitude of the represented data. Low-amplitude data have low error magnitude, while high-amplitude data have much higher error magnitude. Floating-point error is only homogeneous when considering relative error. Oppositely, fixed-point has a very homogeneous error magnitude, uniformly distributed between well-known bounds. Therefore, its relative error depends on the amplitude of the represented data: it is low for high-amplitude data and high for low-amplitude data. This duality makes these two paradigms impossible to compare atomically using the same error metric. The only meaningful error comparison which can be performed is to use the two representations in the same application, which is done in Section 5.3 on FFT and K-means clustering. The study in this section and in the rest of this chapter is performed using ApxPerf v2 as mentioned before, and with the ct_float library for custom floating-point. A 100 MHz clock is set for designing and estimating performance. All the other parameters and targets are the same as in the previous section.
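The duality described above can be illustrated with two toy quantizers (illustrative helpers, independent of the actual operator implementations): truncation to F fractional bits bounds the absolute error by 2^-F whatever the amplitude, while keeping M mantissa bits bounds the relative error by 2^-M whatever the amplitude.

```cpp
#include <cmath>
#include <cassert>

// Absolute error after fixed-point truncation to F fractional bits.
double fxp_abs_err(double x, int F) {
    double q = std::floor(x * std::ldexp(1.0, F)) * std::ldexp(1.0, -F);
    return std::fabs(x - q);                 // < 2^-F for any in-range x
}

// Relative error after keeping M fractional mantissa bits (truncation toward 0).
double flp_rel_err(double x, int M) {
    int e = std::ilogb(x);
    double q = std::floor(std::ldexp(x, M - e)) * std::ldexp(1.0, e - M);
    return std::fabs(x - q) / std::fabs(x);  // < 2^-M for any x > 0
}
```

For example, with F = 8 the fixed-point absolute error stays below 2^-8 for both 0.001 and 100.3, but the corresponding relative error on 0.001 is catastrophic; with M = 10 the floating-point relative error stays below 2^-10 at both amplitudes.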
Energy per operation is estimated using detailed power results given by PrimeTime at gate level, with the metric described in the previous section by Equation 5.2. In this section, 8-, 10-, 12-, 14- and 16-bit fixed-width operators are compared. For each of these bit-widths, several versions of the floating-point operators are estimated with different exponent widths. 25×10³ uniform couples of input samples are used for each operator characterization. The random generation embedded in ApxPerf v2 ensures that 25% of the floating-point adder inputs activate the close path of the operator, which has the highest energy by nature. Adders and multipliers are all tested in their fixed-width version, meaning their numbers of input and output bits are the same. The output is obtained by truncation of the result. Figure 5.2 (resp. Figure 5.3) shows the area, delay and energy of adders (resp. multipliers) for different bit-widths, relative to the corresponding fixed-point operator. FlP_N(k) represents an N-bit floating-point with a k-bit exponent. As discussed above, the floating-point adder has an important overhead compared to the fixed-point adder. For any configuration, results show that area and delay are around 3× higher for floating-point. As a consequence, the higher complexity of the floating-point adder leads to 5× to 12× more energy per operation. Results for the multipliers are very different. Indeed, floating-point multipliers are 2-3× smaller than fixed-point ones. Indeed, the control part of the floating-point multiplier is much less complicated than for the adder. Moreover, as multiplication is applied only on the mantissa, it is always applied on a smaller number of bits for floating-point than for fixed-point. Timing is also slightly better for floating-point, but not as much as area, since an important number of operand shifts may be needed during computations.
These shifts have an important impact on the energy per operation, especially for large mantissas. This brings floating-point to suffer an overhead of 2× to 10× on the energy per operation. For a good interpretation of these results, it must be kept in mind that, in a fixed-point application, data shifting is often needed at many points of the application. The cost of shifting this data does not appear in the preliminary results presented in this section. For floating-point, however, data shifting is directly contained in the operator hardware, which is reflected in the results. Thus, the important advantage of fixed-point shown by Figures 5.2 and 5.3 must be tempered by the important impact of shifts when applied in applications.

Application-Based Comparison of Fixed-Point and Custom Floating-Point

In this section, floating-point and fixed-point operators are compared in the context of their use in applications. Indeed, as stated above, they have very different error natures and thus their errors cannot legitimately be compared in the previous stand-alone comparison. First, both paradigms are compared using the K-means clustering algorithm in Section 5.3.1; these results were published in [4]. Then, FxP and FlP operators are compared on the FFT algorithm in Section 5.3.2. These results stem from the work of Romain Mercier during an undergraduate internship at IRISA.
Algorithm 3 K-Means Clustering (1 Dimension)
Require: k ∈ ℕ, data
  err ← +∞
  cpt ← 0
  c ← init_centroids(data)
  do                                        ▷ Main loop
    old_err ← err
    err ← 0
    c_tmp[0 : k−1] ← 0
    for d ∈ {0 : N_data − 1} do
      min_distance ← +∞
      for i ∈ {0 : k − 1} do                ▷ Data labelling
        distance ← distance_comp(data[d], c[i])
        if distance < min_distance then
          min_distance ← distance
          labels[d] ← i
        end if
      end for
      c_tmp[labels[d]] ← c_tmp[labels[d]] + data[d]
      counts[labels[d]] ← counts[labels[d]] + 1
      err ← err + min_distance
    end for
    for i ∈ {0 : k − 1} do                  ▷ Centroids position update
      if counts[i] ≠ 0 then
        c[i] ← c_tmp[i] / counts[i]
      else
        c[i] ← c_tmp[i]
      end if
    end for
    cpt ← cpt + 1
  while (|err − old_err| > acc_target) ∧ (cpt < max_iter)

The experimental setup is divided into two parts: accuracy and performance estimation. Accuracy estimation is performed on 20 data sets composed of 15×10³ bidimensional data samples. These data samples are all generated in a square delimited by the four points (±√2, ±√2), using Gaussian distributions with random covariance matrices around 15 random mean points. Several accuracy targets are used to set the stopping condition: 10⁻², 10⁻³, 10⁻⁴. The reference for accuracy estimation is IEEE 754 double-precision floating-point. Figure 5.4 is an example of a typical golden output for the experiment. The error metrics for the accuracy estimation are:
• the Mean Square Error of the resulting cluster Centroids (CMSE), and
• the classification Error Rate (ER) in percent, defined as the proportion of points not being tagged with the right cluster identifier.
The lower the CMSE, the better the estimated position of the centroids compared to the golden output. Energy estimation is performed using the first of these 20 data sets, limited to 20×10³ iterations of distance computation for time and memory purposes. As the data sets were generated around 15 points, the number of clusters searched for is also set to 15.
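Algorithm 3 translates almost directly into code; the following 1-D sketch uses plain double arithmetic for illustration (the studied versions substitute fixed-point or ct_float types for double in the distance computation).

```cpp
#include <vector>
#include <cmath>
#include <cassert>
#include <cstddef>

// 1-D K-means following the structure of Algorithm 3; returns final centroids.
std::vector<double> kmeans1d(const std::vector<double>& data,
                             std::vector<double> c,       // initial centroids
                             double acc_target, int max_iter) {
    const std::size_t k = c.size();
    std::vector<int> labels(data.size(), 0);
    double err = INFINITY, old_err;
    int cpt = 0;
    do {                                                  // main loop
        old_err = err;
        err = 0.0;
        std::vector<double> c_tmp(k, 0.0);
        std::vector<int> counts(k, 0);
        for (std::size_t d = 0; d < data.size(); ++d) {   // data labelling
            double min_distance = INFINITY;
            for (std::size_t i = 0; i < k; ++i) {
                double diff = data[d] - c[i];
                double distance = diff * diff;
                if (distance < min_distance) {
                    min_distance = distance;
                    labels[d] = int(i);
                }
            }
            c_tmp[labels[d]] += data[d];
            counts[labels[d]] += 1;
            err += min_distance;
        }
        for (std::size_t i = 0; i < k; ++i)               // centroids update
            c[i] = counts[i] != 0 ? c_tmp[i] / counts[i] : c_tmp[i];
        ++cpt;
    } while (std::fabs(err - old_err) > acc_target && cpt < max_iter);
    return c;
}
```

On a toy data set with two well-separated groups, the loop converges to the group means in a few iterations.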
Performance and accuracy of the K-means clustering experiment, from input data generation to result processing and graph generation, are fully available in the open-source ApxPerf v2 framework, which is used for the whole study.

Experimental Results on K-Means Clustering

Section 5.2 showed that fixed-point additions and multiplications consume less energy than floating-point ones at the same bit-width. However, these results do not yet consider the impact of the arithmetic on accuracy. This section details the impact of accuracy on the bidimensional K-means clustering algorithm. A first qualitative study on the K-means clustering showed that, to get correct results (no artifacts), floating-point data must have a minimal exponent width of 5 bits in the distance computation (smaller exponents are too inaccurate in low distance computations) and fixed-point data a minimal number of 3 bits for the integer part. Thus, all the following results use these two parameters. Area, latency and energy of the distance computation of Equation 5.5 are provided. The total energy of the application is defined as

E_K-means = E_dc × (N_it + N_cycles − 1) × N_data,    (5.6)

where E_dc is the energy per distance computation calculated from the data extracted with ApxPerf, N_it the average number of iterations necessary to reach the K-means stopping condition, N_cycles the number of stages in the pipeline of the distance computation core (automatically determined by HLS), and N_data the number of processed data per iteration. Results for 8-bit and 16-bit FlP and FxP arithmetic operators are detailed in Table 5.6, with the stopping condition set to 10⁻⁴. For the 8-bit version of the algorithm, several interesting results can be highlighted. First, the custom floating-point version is twice as large as the fixed-point version, and floating-point distance computation consumes 2.44× more energy than fixed-point.
However, the floating-point version of K-means converges in 8.35 iterations on average, against 14.9 iterations for fixed-point. This makes the floating-point version of the whole K-means comparable in energy to the fixed-point version. The competitiveness of FlP over FxP on small bit-widths and the higher efficiency of FxP on larger bit-widths are confirmed by Figure 5.6, depicting energy vs. classification error rate. Indeed, for the different accuracy targets (10⁻², 10⁻³, 10⁻⁴), only 8-bit floating-point provides higher accuracy for a comparable energy cost, while 10- to 16-bit fixed-point versions reach an accuracy equivalent to floating-point with much lower energy. The stopping condition does not seem to have a major impact on the relative performance.

Comparative Results on Fast Fourier Transform Application

In the previous section, a comparative study between fixed-point and custom floating-point was performed on K-means. We showed that, contrary to what could be expected, floating-point is very competitive at small bit-widths, besides being easier to manage due to its high flexibility. In this section, a similar study is performed on the Fast Fourier Transform (FFT). The error study, generation and analysis of results were performed by undergraduate intern Romain Mercier. The hardware performance estimation part was obtained using ApxPerf v2. The original study also included approximate integer operators, which will not be discussed here since a study on approximate operators in FFT has already been done in Section 4.3.1. The implementation of the studied FFT is the Radix-2 Decimation-In-Time (DIT) FFT, which is the most common form of the Cooley-Tukey algorithm [Cooley and Tukey]. For the hardware estimation, only the computing core of the FFT is considered, i.e. the computation of:

X_k = E_k + e^(−2πik/N) · O_k
X_(k+N/2) = E_k − e^(−2πik/N) · O_k.    (5.7)

This leads to the hardware implementation of 6 additions/subtractions and 4 multiplications.
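The butterfly of Equation 5.7, expanded into real arithmetic, makes this operation count explicit; the following is a plain double sketch of the computing core, not the HLS code used for characterization.

```cpp
#include <cassert>

struct Cplx { double re, im; };

// Radix-2 DIT butterfly: X_k = E + w*O and X_{k+N/2} = E - w*O, with twiddle
// w = e^(-2*pi*i*k/N) = (wr, wi). The complex product costs 4 real
// multiplications and 2 additions/subtractions; the +/- stage costs 4 more,
// matching the 6 add/sub and 4 mult mentioned above.
void butterfly(Cplx E, Cplx O, double wr, double wi, Cplx& Xk, Cplx& XkN2) {
    double tr = wr * O.re - wi * O.im;   // 2 mult, 1 sub
    double ti = wr * O.im + wi * O.re;   // 2 mult, 1 add
    Xk   = {E.re + tr, E.im + ti};       // 2 add
    XkN2 = {E.re - tr, E.im - ti};       // 2 sub
}
```

For k = 0 (w = 1) the butterfly reduces to a sum and a difference of the two inputs, as expected.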
For each version of the FFT, all constants and variables are represented with the same parameters (same bit-width, same integer part width for FxP, same exponent width for FlP). The absence of over/underflow for the FxP version is ensured. For the FlP version, the split between the exponent and mantissa widths is chosen to give the smallest error, after an exhaustive search. For hardware performance estimation, only the FFT-16 was characterized. The error metric used for the study is the Mean Square Error (MSE) at the output, compared to a double-precision floating-point FFT. Energy per operation related to error for the FFT-16 is depicted in Figure 5.7. On the x-axis, the energy of the computing core of the FFT, derived from Equation 5.2, is given in pJ. On the y-axis, the output MSE is reported. In this experiment, the advantage is clearly in favor of fixed-point. Indeed, for any identical bit-width, fixed-point outperforms floating-point in terms of energy and accuracy. As already shown in Section 5.2, floating-point operations, additions in particular, are much more expensive than fixed-point ones, in return for an increased accuracy on a larger dynamic. However, FFT output quality does not depend on accuracy over a dynamic as large as for K-means clustering, for instance. This makes floating-point even less accurate than fixed-point at equal bit-width, because of a smaller significant part (the mantissa for floating-point, versus all bits for fixed-point). Indeed, in the experiment, the exponent takes 7 bits of the total width, which are not assigned to more accuracy on the significant part. Another interesting point is the data points presenting an energy peak, which occur for 12-, 18- and 28-bit floating-point and 22-bit fixed-point. These peaks are most probably due to differences of implementation in the HLS process. E.g., larger adder or multiplier structures may have been selected by the tool to meet delay constraints, leading to an energy overhead.
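The MSE metric used for this evaluation is straightforward; a minimal helper matching the description above (comparing an approximate output to the double-precision reference):

```cpp
#include <cmath>
#include <cstddef>
#include <cassert>

// Mean square error of an approximate output against a reference; for complex
// FFT outputs, real and imaginary parts are simply flattened into the arrays.
double mse(const double* ref, const double* approx, std::size_t n) {
    double acc = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double e = ref[i] - approx[i];
        acc += e * e;
    }
    return acc / double(n);
}
```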
Conclusion and Discussion About the Use of Fixed-Point and Floating-Point Arithmetic for Energy-Efficient Computing

A raw comparison of floating-point and fixed-point arithmetic operators gives an advantage in area, delay and energy efficiency to fixed-point. However, the comparison on a real application like the K-means clustering algorithm reveals interesting features of custom floating-point arithmetic. Indeed, for K-means, contrary to what would have been expected, floating-point arithmetic tends to show better results in terms of energy/accuracy trade-off for very small bit-widths (8 bits in our case). However, increasing this bit-width still leads to an important area, delay and energy overhead for floating-point. The most interesting results occur for the 8-bit floating-point representation. With only 3 bits of mantissa, which corresponds to only 3-bit integer adders and multipliers, the results are better than with 8-bit fixed-point integer operators. This is obviously due to the adaptive dynamic offered by floating-point arithmetic at operation level, whereas fixed-point has a fixed dynamic which is disadvantageous for low-amplitude data and distance calculation. However, non-iterative algorithms should be tested to know if small floating-point keeps its advantage. Floating-point representation showed its limitations on the FFT experiment. Indeed, the significant bits of the mantissa sacrificed to the exponent represent a penalty in an application whose output quality does not depend on high accuracy over an enhanced dynamic as much as K-means clustering does. However, the gap between fixed-point and floating-point in this context could probably be narrowed with different architectural choices, such as exponent bias and subnormal number support. Nevertheless, these features would come with inevitable area, delay and energy overheads. From a hardware-design point of view, custom floating-point is costly compared to fixed-point arithmetic.
Fixed-point benefits from free data shifting between two operators, as the outputs of one operator only need to be connected to the inputs of the following one in the datapath. However, from a software-design point of view, shifts between fixed-point computing units must be effectively performed, which leads to a non-negligible delay and energy overhead. Oppositely, floating-point computing units do not suffer from this overhead, since data shifting is implemented in the operators and managed by the hardware at runtime. Thanks to this feature, floating-point exhibits another important advantage, which is its ease of use, since software development is faster and safer. Hence, with the aim of producing general-purpose low-energy processors, small bit-width floating-point arithmetic can provide major advantages compared to the classical integer operators embedded in microcontrollers, with a better compromise between ease of programming, energy efficiency and computing accuracy.

Conclusion

To face the predicted end of Moore's Law, this thesis proposes to look into approximate architectures. Indeed, it has been shown that most applications can be computed with relaxed accuracy without affecting their output quality, or with a tolerable degradation. Several levels of architectural approximations are possible, as listed in Chapter 1. In this thesis, the opportunities to save energy using approximate arithmetic are highlighted. They concern floating-point, fixed-point, and approximate integer adders and multipliers. In this document, four main contributions were proposed. First, to our knowledge, there was no existing critical and comparative study of approximate arithmetic operators covering an equivalent number of references, so the first chapter of this document can constitute a base. Two techniques for the estimation of the output error of approximate systems were proposed, making up the second main contribution.
Concerning fixed-point systems, a fast and scalable method leveraging the spectral shape of the error was described in Chapter 2. This technique has the same accuracy as statistical propagation techniques, but with a much lower complexity in the analytical model construction. The development and results of this new approach were presented in [Barrois et al.]. In the first part of Chapter 3, a technique based on bitwise error rate propagation was proposed for approximate operators. Compared to others, this technique is a good compromise between memory cost and accuracy. However, the very different natures of existing approximate operators make it very hard to find generally good models. After exploring a high number of solutions, the conclusion is that no model approaches the accuracy of Monte Carlo without equaling or exceeding its complexity. Contrary to the fixed-point paradigm, it seems that approximate arithmetic operators cannot safely be used without preliminary simulations. A third contribution was to use the error behavior of approximate operators to estimate the effect of VOS on accurate operators. Indeed, simulating VOS requires transistor-level simulation, which is extremely long and memory-costly. Using combinations of approximate operators trained on real data allows for a fast estimation of the effects of VOS in systems too large to be simulated. This contribution was published in [3]. Finally, two comparative studies of existing approximate paradigms, supported by the creation and usage of our open-source framework ApxPerf, constitute the fourth contribution. First, fixed-point and approximate operators are compared. In their stand-alone versions, both are quite competitive. As most approximate operators are based on shortening the carry chains, they are generally fast and they generate scarce but high-amplitude errors, depending on their parameters.
In contrast, fixed-point always generates errors because of the quantization at the output, but this error is always small and well characterized. Therefore, both have quite similar error on average, whereas approximate operators tend to be faster thanks to a shorter critical path. However, our study showed that when comparing both paradigms in real signal processing applications, fixed-point is much better, for two reasons. Firstly, its error entropy is minimal, since the error only lies in the dropped bits, which are the LSBs. Therefore, the propagation of this error across the system leads to a contained amplification. On the contrary, the high-entropy error of approximate operators, which may occur at any bit significance, leads to drastic amplification effects, and the application output may be strongly degraded. Secondly, dropping bits at the output of fixed-point operators, instead of keeping the same bit-width throughout the computations as approximate operators do, has an effect which had never been pointed out until now: downstream operators can be smaller and less memory is needed. These two reasons make quantization by far superior to operator-level approximation. The only reason why approximate operators could be considered is for constant-bit-width processing, as in CPUs, and when computation must necessarily be faster than what fixed-point can offer. This study was published in [2]. The second study pits fixed-point arithmetic against floating-point arithmetic in the context of low-energy computing. Generally, floating-point is associated with high-accuracy computing, but at an important hardware cost. In our study, we consider small-bit-width floating-point through our custom library ct_float. After a comparison between different existing custom floating-point libraries showing that ct_float is competitive, we use this library combined with ApxPerf to evaluate its cost against fixed-point.
In their stand-alone versions, floating-point addition/subtraction is, as expected, much more costly than fixed-point. For multiplication, the gap is tight, as floating-point uses a smaller integer multiplier and as the control overhead in the floating-point multiplier is reasonably small. Two applications were used to evaluate the real overhead of floating-point. In K-means clustering, small floating-point is surprisingly competitive with fixed-point. Indeed, K-means requires accurate computations for both small and large distances, which favors floating-point thanks to its high accuracy over a larger dynamic range than fixed-point. In the context of the FFT, whose quality is less impacted by inaccuracy on small-amplitude signals, fixed-point strongly outperforms floating-point. Indeed, the large dynamic range of floating-point does not bring enough improvement in this case to compensate for its overhead. As a conclusion, the competitiveness of small-bit-width floating-point in terms of energy-error ratio strongly depends on the application. Nevertheless, despite a generally higher energy consumption for floating-point compared to fixed-point, its ease of use makes it very interesting for fast development. Having a CPU embedding small-bit-width floating-point processing units could be very interesting for general-purpose low-power computing. The study on the K-means clustering application was published in [4]. The work constituting this thesis comes with the following conclusions:
• The error degradation of approximate operators across a system is difficult to estimate efficiently without resorting to Monte Carlo simulation. However, approximate operators can be used to reproduce the effects of physical phenomena such as VOS.
• If stand-alone approximate arithmetic operators are generally competitive, they show important limitations when used in real-life signal processing applications compared to classical fixed-point.
• The energy cost of floating-point is able to compete with fixed-point when used in applications that require accuracy at different amplitude scales. In this context, using floating-point for low-energy computing has to be considered.
This thesis also opens up new perspectives. Firstly, it has been shown that, in general, using approximate operators leads to important errors. Therefore, there is a strong need for new approximate operators with better error performance. Some of them are on a good track. With the DRUM approximate multiplier, for instance, no high-amplitude errors can occur, as it is based on the floating-point paradigm applied to a fixed representation. Mixing the fixed-point and floating-point paradigms is a good idea for speed, though it imposes the storage of many useless zeros in the LSBs. Other operators, like the GDA approximate adder, propose runtime-configurable approximation, which is of interest in the context of energy-autonomous embedded devices that may need to run at different levels of power consumption depending on the remaining stored energy and the amount of energy being harvested. Finally, the concept of exact adders based on error-corrected approximate operators, proposed by VLSA, deserves to be further investigated. Secondly, models for the error propagation of approximate operators need to be developed, though we concluded in this study that their various natures are an impediment to the good performance of general models. In our view, new models should consider the nature of the approximation performed in each operator. The general model would therefore be an aggregation of several models, and the one that adapts best to a considered approximate operator would be selected when needed. Finally, the floating-point paradigm for low-energy computing has a high potential for new research. First, the best compromise between control overhead and accuracy should be investigated in this context, for instance the management of subnormals.
The question is whether the accuracy benefit from supporting these low-amplitude numbers is important enough to compensate for the hardware overhead. The customization of the exponent bias also has to be investigated. With a constant bias, adding one more bit to the exponent increases the dynamic range as much towards infinity as towards zero. However, it is not interesting to have a dynamic range which is too large towards infinity, since these resources could be allocated to even better accuracy around zero. Nevertheless, introducing a fully parameterizable bias would bring an important memory and control overhead. A solution might be to have several possible predefined biases that could be used to operate at different amplitude scales with a moderate overhead. The interest of handling particular cases such as under/overflows also needs to be investigated. Once all these investigations are complete, the structure of a low-energy, small-bit-width processing unit should be proposed, with innovative features. For instance, if exponent management is faster, it could be possible to use a single exponent management unit for two mantissa management units to save area. All these opportunities should be considered for future research and the proposition of new energy-efficient architectures, which are a major stake in overcoming the predicted end of Moore's Law.
. . . . . . . . . . . . . . . . . . . . . . . . 1.3 Basic floating-point multiplication . . . . . . . . . . . . . . . . . . . . . . . . 1.4 12-bit fixed-point number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Distribution of continuous signal quantization error . . . . . . . . . . . . . . . (a) Quantization error distribution for rounding towards ≠OE . . . . . . . . . (b) Quantization error distribution for rounding towards +OE . . . . . . . . . (c) Quantization error distribution for rounding to nearest . . . . . . . . . . 1.6 Representation of FxP quantization error as an additive noise . . . . . . . . . . 1.7 Conventional rounding vs convergent rounding . . . . . . . . . . . . . . . . . (a) Conventional rounding . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Convergent rounding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 Quantization and rounding of a fixed-point number . . . . . . . . . . . . . . . 1.9 Fixed-point addition process . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.10 One-bit addition function -Full adder . . . . . . . . . . . . . . . . . . . . . . 1.11 8-bit Ripple Carry Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.12 8-bit Carry-Select Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.13 16-bit Brent-Kung Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.14 16-bit Kogge-Stone Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.15 General integer multiplication principle . . . . . . . . . . . . . . . . . . . . . 1.16 General visualization of multiplication summand grid . . . . . . . . . . . . . . 1.17 Full adder compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.18 Half adder transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.19 Wallace and Dadda trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Wallace Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . (b) Dadda tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.20 6-bit signed array multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.21 Structures of AHA and AFA . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Structure of AHA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Structure of AFA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.22 Sequential multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1.23 Probability for the longest carry chain of a 64-bit adder to be inferior to x as a function of x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 1.24 16-bit Sample Adder (LPA) with k = 4 . . . . . . . . . . . . . . . . . . . . . 39 1.25 Distribution of calculations for carry propagation matrix products [34] . . . . . 40 1.26 Consideration of carries in LPA and ACA output computation . . . . . . . . . 40 1.27 Error maps of 8-bit LPA and ACA adders for different values of k . . . . . . . 41 (a) n = 8, k = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 (b) n = 8, k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 1.28 Hardware implementation of VLSA [34] . . . . . . . . . . . . . . . . . . . . . 42 1.29 Delay and area results for ACA with different bitwidth [34] . . . . . . . . . . . 43 1.30 Principle of ETAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 1.31 ETAI control block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 1.32 ETAI accuracy simulation results [35] . . . . . . . . . . . . . . . . . . . . . . 46 1.33 Hardware implementation of ETAII . . . . . . . . . . . . . . . . . . . . . . . 47 1.34 Hardware implementation of ETAIIM . . . . . . . . . . . . . . . . . . . . . . 48 1.35 Consideration of carries in ETAIV output computation . . . . . . . . . . . . . 48 1.36 Error maps of 16-bit ETAIV for different values of X . . . . . . . . . . . . . . 
49(a) n = 16, X = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 (b) n = 16, X = 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 1.37 Hardware implementation of ETAIV . . . . . . . . . . . . . . . . . . . . . . . 49 1.38 Consideration of carries in AC2A output computation . . . . . . . . . . . . . . 51 1.39 Accuracy vs power for AC2A and other approximate adders under VOS . . . . 53 1.40 Structure of proposed n-bit GDA composed of 4 n/4-bit sub-adders . . . . . . 56 1.41 GDA hierarchical prediction scheme . . . . . . . . . . . . . . . . . . . . . . . 57 1.42 GDA reconfigurable prediction scheme . . . . . . . . . . . . . . . . . . . . . 57 1.43 Error vs delay for an identical power consumption for GDA and AC2A . . . . . 61 (a) Worst-case error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 (b) Error rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 (c) Average error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 1.44 Original and approximate MA transistor view . . . . . . . . . . . . . . . . . . 63 (a) Original MA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 (b) Simplified MA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 (c) AMA1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 (d) AMA2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 1.45 DCT/IDCT test results for IMPACT . . . . . . . . . . . . . . . . . . . . . . . 64 1.46 Flowchart for probabilistic pruning optimization process [45] . . . . . . . . . . 66 1.47 16-bit Weighted-Pruned Kogge Stone Adder (WPKSA) . . . . . . . . . . . . . 66 1.48 Probabilistic pruning results for Kogge-Stone and Han-Carlson adders [45] . . 67 1.49 Two's-complement signed 6-bit AAMI structure . . . . . . . . . . . . . . . . . 68 1.50 Two's-complement signed 6-bit AAMII structure . . . . . . . . . . . . . . . . 69 1.51 AAO, AA and ND-ND cells . . . . 
. . . . . . . . . . . . . . . . . . . . . . . 69 (a) AAO cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 (b) AA cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 (c) ND-ND cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 LIST OF FIGURES 1.52 Exhaustive search of K ◊<n and K ◊=n for a 6-bit multiplier . . . . . . . . . . . 1.53 8-bit signed AAMIII structure . . . . . . . . . . . . . . . . . . . . . . . . . . 1.54 Error maps of 4-bit, 8-bit and 16-bit AAMIII . . . . . . . . . . . . . . . . . . (a) n = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) n = 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (c) n = 16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.55 Example of ETM multiplication process [35] . . . . . . . . . . . . . . . . . . 1.56 Structure of a 12-bit ETM [35] . . . . . . . . . . . . . . . . . . . . . . . . . . 1.57 Accuracy evaluation by simulation for ETM [35] . . . . . . . . . . . . . . . . 1.58 PDP for 12-bits ETM and array multiplier [35] . . . . . . . . . . . . . . . . . 1.59 Summand grid for an 8-bit fixed-width Booth multiplier with LP major -based error correction [49] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.60 Partial product generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.61 Karnaugh map representation of approximate carry for n = 10 . . . . . . . . . (a) a_carry 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) a_carry 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.62 Approximate carry generation circuit using ACGPII for n = 32 . . . . . . . . . 1.63 LP minor level discrimination for the design of FBMIII . . . . . . . . . . . . . 1.64 8-bit FBMIII schematized structure . . . . . . . . . . . . . . . . . . . . . . . . 1.65 DRUM input unbiasing process . . . . . . . . . . . . . . . . . . . . . 
. . . . . 1.66 Structure of DRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.67 Area and power benefits of 16-bit DRUM [52] . . . . . . . . . . . . . . . . . . 1.68 Error maps of 16-bit DRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . (a) n = 8, k = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) n = 8, k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (c) n = 8, k = 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Extraction of the mean square error of a fixed-point system . . . . . . . . . . . 2.2 Comparison of noise parameters propagation using traditional flat, PSD agnostic and proposed PSD methods . . . . . . . . . . . . . . . . . . . . . . . . . . (a) Traditional flat method: propagation of µ i , ‡ 2 i from each noise to output . (b) PSD agnostic: blind propagation of µ i , ‡ 2 i . Proposed PSD method: propagation of µ i , ‡ 2 i , and P SD i . . . . . . . . . . . . . . . . . . . . . . . . 2.3 SFG cycle breaking process example . . . . . . . . . . . . . . . . . . . . . . . (a) Cyclic SFG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Equivalent acyclic SFG . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Band-pass frequency filtering scheme . . . . . . . . . . . . . . . . . . . . . . 2.5 1-level DWT coder and decoder . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 E d versus fractional bit-width d . . . . . . . . . . . . . . . . . . . . . . . . . 2.7 E d versus number of PSD samples N PSD . . . . . . . . . . . . . . . . . . . . 2.8 Execution time in seconds and speed up for frequency filtering and DWT systems versus the number of PSD samples . . . . . . . . . . . . . . . . . . . . . (a) Frequency filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (b) Daubechies 9/7 DWT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
2.9 Output frequency repartition of the fixed-point error after DWT encoding and decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 (a) Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 (b) PSD estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 3.1 Error maps of 8-bit FxP quantization process and approximate operators . . . . 108 (a) 8-to-4-bit quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 (b) 8-bit ACA, k = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 (c) 8-bit AAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 (d) 8-bit DRUM, k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 3.2 Propagation of BWER across an operator . . . . . . . . . . . . . . . . . . . . 109 3.3 Extraction of Binary Error Event Vectors for BWER Model Training . .. . . . 112 3.4 Example of Conditional Probabilities Training for n = 4, m = 4 and k = 2 . . 113 3.5 Convergence of BWER Training in Function of k . . . . . . . . . . . . . . . . 115 3.6 Tree Operation Structure with Three Stages . . . . . . . . . . . . . . . . . . . 116 3.7 BWER Estimation and Simulation Results for Stand-Alone ACA with x = 2 and k = 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 3.8 Evolution of estimation error D B with k for different configurations of 16-bit approximate stand-alone adders and multipliers . . . . . . . . . . . . . . . . . 118 (a) ACA -Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 (b) IMPACT -Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 (c) AAM -Multiplier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 (d) DRUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 3.9 Evolution of estimation error D B with k for different configurations of 16-bit approximate adders with different number of stages . . . . 
    (a) ACA – Adder
    (b) IMPACT – Adder
3.10 Proposed Design Flow for Arithmetic Operator Characterization
3.11 Distribution of BER in output bits of 8-bit RCA under voltage scaling
3.12 Distribution of BER in output bits of 8-bit BKA under voltage scaling
3.13 Equivalence Between Faulty Hardware Operator and Equivalent Functionally Faulty Operator
3.14 Design flow of modelling of VOS operators
3.15 Estimation Error of the Model for Different Adders and Distance Metrics
    (a) Signal to Noise Ratio
    (b) Normalized Hamming Distance
3.16 BER and Energy for Different VOS Triads Applied to 16-bit RCA
4.1 First version of ApxPerf
4.2 Second version of ApxPerf
4.3 Direct comparison of 16-bit-input fixed-point and approximate adders regarding MSE
    (a) MSE vs power
    (b) MSE vs delay

List of Tables

1.1 IEEE 754 normalized floating-point representation
1.2 Cost of FlP addition vs integer addition
1.3 Cost of FlP multiplication vs integer multiplication
1.4 Mean and variance of quantization error
1.5 Rounding direction vs round/sticky bit
1.6 Full adder truth table
1.7 Radix-4 modified Booth encoding
1.8 Bounds on the longest run of 1's with high probability
1.9 Logical equations of CSGC type I and II
1.10 AP as a function of MAA and carry propagation block size for 32-bit ETAII
1.11 AP as a function of MAA and X for 32-bit ETAIV
1.12 Simulation results for Error-Tolerant Adders [37]
1.13 Estimated parameters of the approximate computation part of a 16-bit AC2A relatively to a conventional 16-bit CLA
1.14 Comparison of 16-bit AC2A approximate part with other adders
1.15 Error correction cycles in AC2A
1.16 Accuracy and power consumption of 4-stage pipelined 32-bit AC2A as a function of the active mode
1.17 Comparison between 32-bit GDAs and exact and approximate static adders
1.18 Accuracy-configurable implementation of AC2A and GDA
1.19 Truth tables of accurate and approximate MA cells
1.20 Accuracy comparison of AAMI and AAMII [47]
1.21 Accuracy comparison of signed AAM versions I, II and III [48]
1.22 Area ratio comparing to original parallel adder for AAM versions I, II and III [48]
1.23 Equivalence between Radix-4 modified-Booth-encoded symbol Y_i and control bits in partial product generation
1.24 Representation of approximate carry values
1.25 Accuracy comparison for FBMI and FBMII using ACGPI
1.26 Approximate carry signals generated by ACGPI and ACGPII for n = 8
1.27 Comparison of delay and area of ACGPI and ACGPII [50]
1.28 Accuracy comparison for FBMI and FBMII using ACGPII
1.29 Value of P(pp_{i,j} = k) in LP
1.30 Accuracy comparison for FBMI and FBMIII

1.4.3 Final Discussion on Approximate Operators in Literature

2 Leveraging Power Spectral Density in Fixed-Point System Refinement
2.1 Motivation for Using Fixed-Point Arithmetic in Low-Power Computing
2.2 Related work on accuracy analysis
2.3 PSD-based accuracy evaluation
    2.3.1 PSD of a quantization noise
    2.3.2 PSD propagation across a fixed-point Linear and Time-Invariant (LTI) system
2.4 Experimental Results of Proposed Power Spectral Density (PSD) Propagation Method
    2.4.1 Experimental Setup
        2.4.1.1 Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) Filters
        2.4.1.2 Frequency Domain Filtering
        2.4.1.3 Daubechies 9/7 Discrete Wavelet Transform
    2.4.2 Validation of the Approach for LTI Systems
    2.4.3 Influence of the Number of PSD Samples
    2.4.4 Comparison with PSD-Agnostic Methods
    2.4.5 Frequency Repartition of Output Error
2.5 Conclusions about PSD Estimation Method
3 Fast Approximate Arithmetic Operator Error Modeling
3.1 The Problem of Analytical Methods for Approximate Arithmetic
3.2 Bitwise-Error Rate (BWER) Propagation Method
    3.2.1 Main Principle of Bitwise-Error Rate (BWER) Propagation Method
    3.2.2 Storage Optimization and Training of the BWER Propagation Data Structure
    3.2.3 BWER Propagation Algorithm
3.3 Results of the BWER Method on Approximate Adders and Multipliers
    3.3.1 BWER Training Convergence Speed
    3.3.2 Evaluation of the Accuracy of BWER Propagation Method
    3.3.3 Estimation and Simulation Time
    3.3.4 Conclusion and Perspectives
3.4 Modeling the Effects of Voltage Over-Scaling (VOS) in Arithmetic Operators
    3.4.1 Characterization of Arithmetic Operators
    3.4.2 Modelling of VOS Arithmetic Operators
    3.4.3 Conclusion

4 Approximate Operators Versus Careful Data Sizing
4.1 ApxPerf: Hardware Performance and Accuracy Characterization Framework for Approximate Computing

Precision             Mantissa width   Exponent width   Max decimal exponent   Exponent bias
Single precision      24               8                38.23                  127
Double precision      53               11               307.95                 1023
Quadruple precision   113              15               4931.77                16383

Table 1.1 – IEEE 754 normalized floating-point representation

1.2.2 Floating-Point Addition/Subtraction and Multiplication
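The exponent-bias and maximum-decimal-exponent columns of Table 1.1 follow directly from the exponent width w: the bias of a normalized IEEE 754 format is 2^(w−1) − 1, and the largest decimal exponent is log10(2^bias). A minimal sketch reproducing the table values (illustrative code, not part of the thesis; function names are ours):

```cpp
#include <cmath>

// Exponent bias of an IEEE 754 format with w exponent bits: 2^(w-1) - 1.
inline long exponent_bias(int w) {
    return (1L << (w - 1)) - 1;
}

// Largest decimal exponent reachable: log10(2^e_max) = e_max * log10(2),
// with e_max equal to the bias for normalized IEEE 754 formats.
inline double max_decimal_exponent(int w) {
    return static_cast<double>(exponent_bias(w)) * std::log10(2.0);
}
```

For w = 8, 11 and 15 this yields biases 127, 1023 and 16383 and decimal exponents 38.23, 307.95 and 4931.77, matching Table 1.1.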
Figure 1.2 – Dual-path floating-point adder [56]

                  Area (µm²)   Total power (mW)   Critical path (ns)   Power-Delay Product (fJ)
32-bit ac_float   653          4.39E−4            2.42                 1.06E−3
64-bit ac_float   1453         1.12E−3            4.02                 4.50E−3
32-bit ac_int     189          3.66E−5            1.06                 3.88E−5
64-bit ac_int     373          7.14E−5            2.10                 1.50E−4

Table 1.2 – Cost of FlP addition vs integer addition

Table 1.3 – Cost of floating-point multiplication vs integer multiplication

Table 1.4 – Mean and variance of quantization error depending on the rounding method and type of signal

The implementation of the rounding methods discussed above depends on two parameters. Indeed, when quantizing x1 with a d1-bit fractional part to x2 with a d2-bit fractional part, where d2 < d1, only the following information is needed, as summarized in Table 1.5 (rounding direction vs round/sticky bit).

Table 1.6 – Full adder truth table

Figure 1.11 – 8-bit Ripple Carry Adder (RCA)

The 16-bit versions are given by Figures 1.13 and 1.14. The parallel adders are the fastest adders with a delay complexity of O(log n), but with a superior area (2n − log2(n) − 2 compressors for BKA and n·log2(n) − n + 1 compressors for KSA, but with lower fan-out).

Figure 1.13 – 16-bit BKA. Red-striped square converts (x_i, y_i) to (p, g) (Equation 1.4) and blue-striped square converts (p, g) to z_i (Equation 1.8). Circles represent (p, g) compressors.

Figure 1.14 – 16-bit KSA. Red-striped square converts (x_i, y_i) to (p, g) (Equation 1.4) and blue-striped square converts (p, g) to z_i (Equation 1.8). Circles represent (p, g) compressors.

Given the multiplication of two FxP numbers x(m_x, d_x) and y(m_y, d_y), the result z(m_z, d_z) must respect for its integer part:

    m_z = m_x + m_y    (1.10)

to avoid under/overflows. Moreover, if there must be no loss of accuracy, the fractional part must also respect:

    d_z = d_x + d_y.    (1.11)

Table 1.7 – Radix-4 modified Booth encoding – Y_j is a radix-4 number which is not represented in the encoded.
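The quantization step described above can be sketched in code: truncation of a fixed-point value from d1 to d2 fractional bits, and rounding using only the round bit (MSB of the dropped bits) and the sticky bit (OR of the remaining dropped bits). This is an illustrative sketch assuming round-to-nearest-even as the rounding direction — the thesis' Table 1.5 covers the general rule and the function names here are ours:

```cpp
#include <cstdint>

// Value x holds d1 fractional bits; drop to d2 fractional bits (d2 < d1).
// Arithmetic right shift truncates toward minus infinity.
inline int64_t quantize_truncate(int64_t x, int d1, int d2) {
    return x >> (d1 - d2);
}

// Round-to-nearest-even: only the round bit and the sticky bit of the
// dropped part are needed to pick the rounding direction.
inline int64_t quantize_round_even(int64_t x, int d1, int d2) {
    const int shift = d1 - d2;
    int64_t trunc = x >> shift;
    const bool round_bit  = (x >> (shift - 1)) & 1;                       // MSB of dropped bits
    const bool sticky_bit = (x & ((INT64_C(1) << (shift - 1)) - 1)) != 0; // OR of the rest
    if (round_bit && (sticky_bit || (trunc & 1)))
        trunc += 1; // above the halfway point, or exact tie with odd LSB
    return trunc;
}
```

For example, quantizing 1.5 (x = 24 with d1 = 4) to integer precision truncates to 1 but rounds to 2 (tie broken toward the even value).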
Only the corresponding operation is performed during partial product generation.

Each carry generator block is divided into two parts.

Table 1.11 – AP as a function of MAA and x for 32-bit ETAIV

…               …       0.65    0.75    0.83    0.89
area            0.87    1.05    1.12    1.15    1.12
dynamic power   0.44    0.68    0.84    0.95    1.00
pass rate       0.554   0.829   0.942   0.982   0.995

Table 1.13 – Estimated parameters of the approximate computation part of a 16-bit AC2A relatively to a conventional 16-bit CLA

The approximate part has an area complexity of O((n − 2k)(log2 k + 1)) and a dynamic power complexity of O(…).

    ACC_amp = 1 − |R_e − R_c| / R_c    (1.17)
    ACC_inf = 1 − B_e / B_w    (1.18)

where R_c and R_e are the respective values of the correct and approximate results, B_e the number of erroneous bits in the approximate results, and B_w the output bit-width.

Table 1.14 – Comparison of 16-bit AC2A approximate part with k = 4 with CLA and other approximate adders

During the first cycle, the approximate calculation is performed. Then, each following cycle is dedicated to the successive correction of each sub-adder, from the second one counting from the LSB. By power-gating each partial error correction system, different levels of accuracy can then be targeted. By design, calculation and correction cycles have the same theoretical maximum delay.

Figure 1.39 – Accuracy (ACC_amp and ACC_inf) vs total power for AC2A and other approximate adders (ACA adder, Lu's adder, ETAI, ETAIIM, CLA) under voltage scaling (1.0 V to 0.6 V)
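The two accuracy metrics implied by the definitions above can be sketched as follows. This assumes the usual forms ACC_amp = 1 − |R_e − R_c|/R_c and ACC_inf = 1 − B_e/B_w, which match the symbols listed after Equation (1.18); the exact equations are garbled in the source and the function names are ours:

```cpp
#include <cstdint>
#include <cstdlib>

// Amplitude accuracy: relative closeness of the approximate result Re
// to the correct result Rc (assumed definition; requires Rc != 0).
inline double acc_amp(int64_t Rc, int64_t Re) {
    return 1.0 - static_cast<double>(std::llabs(Re - Rc))
               / static_cast<double>(std::llabs(Rc));
}

// Information accuracy: fraction of correct output bits, where Be is the
// number of erroneous bits among the Bw output bits.
inline double acc_inf(int64_t Rc, int64_t Re, int Bw) {
    const uint64_t diff = static_cast<uint64_t>(Rc ^ Re);
    int Be = 0;
    for (int i = 0; i < Bw; ++i)
        Be += static_cast<int>((diff >> i) & 1);
    return 1.0 - static_cast<double>(Be) / static_cast<double>(Bw);
}
```

For instance, an approximate result of 90 against a correct result of 100 gives ACC_amp = 0.9, while a single flipped bit out of 4 gives ACC_inf = 0.75.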
Table 1.15 – Error correction cycles in a 4-block AC2A – Checkmarks mean the output of block S_i is accurate after j cycles, crosses that it is inaccurate.

The final configurations ensure that 0.99 ≤ ACC_amp ≤ 1.00 for the first series of tests and 0.95 ≤ ACC_inf ≤ 1.00 for the second.

Configuration   ACC_amp (max)   ACC_inf (max)   Total power (mW)   Power reduction
mode-1          1.000           1.000           5.962              −11.5%
mode-2          0.998           0.960           4.683              12.4%
mode-3          0.991           0.925           3.691              31.0%
mode-4          0.983           0.900           2.588              51.6%

Table 1.16 – Accuracy and power consumption of 4-stage pipelined 32-bit AC2A as a function of the active mode and comparison with conventional pipelined adder

Table 1.17 – Comparison between 32-bit GDAs and exact and approximate static adders [Ye]
1 [START_REF] Cristiano | Fast exponential computation on simd architectures[END_REF] -Area ratio comparing to original parallel adder for AAM versions I, II and III[START_REF] Van | Design of the lower error fixed-width multiplier and its application[END_REF] 536 0.527 0.522 0.518 AAMII 0.608 0.569 0.550 0.5540 0.533 AAMIII 0.608 0.569 0.550 0.5540 0.533 1,8 𝑝𝑝 1,7 𝑝𝑝 1,6 𝑝𝑝 1,5 𝑝𝑝 1,4 𝑝𝑝 1,3 𝑝𝑝 1,2 𝑝𝑝 1,1 𝑝𝑝 1,0 𝑝𝑝 2,8 𝑝𝑝 2,7 𝑝𝑝 2,6 𝑝𝑝 2,5 𝑝𝑝 2,4 𝑝𝑝 2,3 𝑝𝑝 2,2 𝑝𝑝 2,1 𝑝𝑝 2,0 𝑝𝑝 3,8 𝑝𝑝 3,7 𝑝𝑝 3,6 𝑝𝑝 3,5 𝑝𝑝 3,4 𝑝𝑝 3,3 𝑝𝑝 3,2 𝑝𝑝 3,1 𝑝𝑝 3,0 𝑝 𝐵,15 𝑝 𝐵,14 𝑝 𝐵,13 𝑝 𝐵,12 𝑝 𝐵,11 𝑝 𝐵,10 𝑝 𝐵,9 Summand Grid Truncated Part 𝑝 𝐵,8 Error Correction 𝑀𝑃 𝐿𝑃 𝑚𝑎𝑗𝑜𝑟 𝐿𝑃 𝑚𝑖𝑛𝑜𝑟 Figure 1.59 -Summand grid for an 8-bit fixed-width Booth multiplier with LP major -based error correction [49] 4 modified Booth-encoding is determined by radix-4 symbols Y 𝑏 𝑗 𝑋 𝑠𝑒𝑙,𝑖 𝑝𝑝 𝑖,𝑗 𝑏 𝑗+1 2𝑋 𝑠𝑒𝑙,𝑖 𝑁𝐸𝐺 𝑖 i . For the generation of a given partial product pp i,j , the value of input x j and bits X sel , 2X sel and NEG at determined by Y i from Table 1.23 are used, and the resulting generation circuit is depicted in Figure 1.60. To get 1-bit statistics on the encoded multiplier instead of the 3 bits representing a symbol Y i , Y Õ i is 1, 1. Amongst these 108 sequences, 52 verify {E [S is rounding operation. Therefore, the best compensation of the truncated carries generated by LP minor is 2. As {E [S LP minor ]} and 56 verify {E [S LP minor ]} r = 1 LP minor ]} r always is between 0 and 2, two carries must be transmitted to LP major , following rules of Table 1.24. For n = 10, the Karnaugh map representations of a_carry 0 and a_carry 1 as a function of {Y Õ i } from this map and is given by ioeLP minor are given by Figure 1.61. The logic for the carry generation can be determined r = 2, where {•} r Table 1 . 1 24 -Representation of approximate carry values LP minor ] = 2 ≠1 n/2≠2 ÿ Y Õ i . 
(1.31) i=0 The relation between the value of the chain of bits {Y Õ i } ioeLP minor and the value E [S LP minor ] is given by E [S Table 1 1 Multiplier Error metric n = 10 n = 12 Rounded 4.87E≠4 1.22E≠4 Truncated FBMI µ e 9.66E≠4 2.43E≠4 1.22E≠3 3.21E≠4 FBMII 6.28E≠4 1.63E≠4 Rounded 8.01E≠8 4.51E≠9 Truncated FBMI ‡ 2 e 3.17E≠7 1.99E≠8 8.22E≠6 5.21E≠7 FBMII 1.94E≠7 1.33E≠8 . [START_REF] Widrow | Statistical analysis of amplitude-quantized sampled-data systems[END_REF] . Rounded and Truncated respectively refer to a rounding and truncation of the output of the original accurate multiplier. FBMII proposes a better accuracy than its predecessor FBMI, but also beats truncation despite its nearly-50% area savings. With the proposed method, for bigger multiplier size, the overhead of this approximate carry generation procedure, denoted as Approximate Carry Generation Procedure version I (ACGPI), becomes too impacting in terms of area and delay, and is not suitable. To face that, Table 1 . 1 25 -Accuracy comparison for FBMI and FBMII using ACGPI the author proposes another approximate carry generation procedure denoted as Approximate Carry Generation Procedure version II (ACGPII): 1. {Y Õ i } ioeLP minor is divided in groups of 3, the last group being of size 1, 2 or 3 depending of the set size. Table 1 . 1 26 -Approximate carry signals generated by ACGPI and ACGPII for n = 8 𝑌 14 ′ 𝑌 13 ′ 𝑌 12 ′ 𝑌 11 ′ 𝑌 10 ′ 𝑌 9 ′ 𝑌 8 ′ 𝑌 7 ′ 𝑌 6 ′ 𝑌 5 ′ 𝑌 4 ′ 𝑌 3 ′ 𝑌 2 ′ 𝑌 1 ′ 𝑌 0 ′ FA FA FA FA FA 𝑎_𝑐𝑎𝑟𝑟𝑦 4 𝑎_𝑐𝑎𝑟𝑟𝑦 3 𝑎_𝑐𝑎𝑟𝑟𝑦 2 𝑎_𝑐𝑎𝑟𝑟𝑦 1 𝑎_𝑐𝑎𝑟𝑟𝑦 0 FA HA 𝑎_𝑐𝑎𝑟𝑟𝑦 6 𝑎_𝑐𝑎𝑟𝑟𝑦 5 FA 1 𝑎_𝑐𝑎𝑟𝑟𝑦 7 Figure 1.62 -Approximate carry generation circuit using ACGPII for n = 32 n Delay (ns) ACGPI ACGPII Area (# of NAND gates) ACGPI ACGPII 10 4.48 6.24 10 11 12 7.21 6.06 22 15 14 7.94 6.21 31 20 16 10.25 7.56 43 23 18 10.78 7.56 55 27 32 18.23 10.20 189 60 Table 1 . 
1 [START_REF] Koren | Computer Arithmetic Algorithms[END_REF] -Comparison of delay and area of ACGPI and ACGPII[START_REF] Cho | Design of low-error fixed-width modified booth multiplier[END_REF] Multiplier Error metric n = 16 n = 20 Rounded 7.51E≠6 4.63E≠7 Truncated FBMI µ e 1.47E≠5 1.92E≠5 9.02E≠7 1.15E≠6 FBMII 1.07E≠5 6.35E≠7 Rounded 1.96E≠11 8.08E≠14 Truncated FBMI ‡ 2 e 7.96E≠11 3.21E≠13 1.92E≠10 8.24E≠13 FBMII 6.38E≠11 2.44E≠13 Table 1 . 1 [START_REF] Parhami | Computer Arithmetic[END_REF] -Accuracy comparison for FBMI and FBMII using ACGPII error mean and variance. As a matter of fact, FBMII/ACGPII has a mean square error which is 68.1% inferior to FBMI for n = 16 and 69.8% for n = 20. 1,8 𝑝𝑝 1,7 𝑝𝑝 1,6 𝑝𝑝 1,5 𝑝𝑝 1,4 𝑝𝑝 1,3 𝑝𝑝 1,2 𝑝𝑝 1,1 𝑝𝑝 1,0 𝑝𝑝 2,8 𝑝𝑝 2,7 𝑝𝑝 2,6 𝑝𝑝 2,5 𝑝𝑝 2,4 𝑝𝑝 2,3 𝑝𝑝 2,2 𝑝𝑝 2,1 𝑝𝑝 2,0 𝑝𝑝 3,8 𝑝𝑝 3,7 𝑝𝑝 3,6 𝑝𝑝 3,5 𝑝𝑝 3,4 𝑝𝑝 3,3 𝑝𝑝 3,2 𝑝𝑝 3,1 𝑝𝑝 3,0 𝑝 𝐵,15 𝑝 𝐵,14 𝑝 𝐵,13 𝑝 𝐵,12 𝑝 𝐵,11 𝑝 𝐵,10 𝑝 𝐵,9 Summand Grid Truncated Part 𝑤 𝑥 𝑦 𝑧 𝑝 𝐵,8 𝑀𝑃 𝐿𝑃 𝑚𝑎𝑗𝑜𝑟 𝐿𝑃 𝑚𝑖𝑛𝑜𝑟 Figure 1.63 -LP minor level discrimination for the design of FBMIII of q 1 1.34 as well as probability values of Table 1.29, then E [q 2 ] is given by E [q 2 ] = 0.4375 ◊ (3/16) + 0.4375 ◊ (3/16) + 1 ◊ (5/8) = 0.7890625 (1.36) Table 1 . 1 The global FB-MIII can be mapped as in Figure1.64, where RFA blocks are Redundant Full-Adder blocks and C block is the partial correction block for the appliance of the two partial compensations 3,1 does not need any ex- & can be performed with two AND gates. & . pp k,i..j refers to the partial symbolic string from pp k,i to pp k,j and pp B,i..j refers to the Booth-encoded multiplier output from rank i to rank j. The author gives comparisons of accuracy between the truncated output version of the fixed-width Booth multiplier and FBMI presented above, in terms of absolute error mean and 30 -Accuracy comparison for FBMI and FBMIII error variance. These metrics for a given n-bit operator denoted as op are defined as: Table 1 . 
1 Metric k 3 4 5 6 7 8 Max RE (%) 56.25 26.56 12.86 6.31 3.1 1.54 MA RE (%) 11.90 5.89 2.94 1.47 0.73 0.37 µ RE (%) 2.08 0.53 -0.14 -0.04 0.01 0.01 ‡ RE (%) 14.75 7.26 3.61 1.80 0.90 0.45 .42) where Max RE is the maximum relative error, MA RE the mean absolute relative error, µ RE the Figure 1.67 -Area and power benefits of 16-bit DRUM relatively to 16-bit accurate multiplier [52] -DRUMk refers to 16-bit DRUM using k-bit multiplier. 31 -Error results for 16-bit DRUM for k between 3 and 8 -Error metrics are defined by Equations 1.42. signal to achieve very accurate estimates. 𝑆 𝑏 1 𝜇 1 , 𝜎 1 2 𝑏 3 𝜇 3 , 𝜎 3 2 𝑏 5 𝜇 5 , 𝜎 5 2 𝐼 3 𝐼 1 𝐼 2 𝑂𝑝 1 𝑂𝑝 2 + + 𝑏 2 𝜇 2 , 𝜎 2 2 𝑂𝑝 3 𝑂𝑝 4 + + 𝑏 4 𝜇 4 , 𝜎 4 2 𝑂𝑝 5 + 𝑂 𝜇 𝑂 , 𝜎 𝑂 2 (a) Traditional flat method: propagation of µi, ‡ 2 i from each noise to output 𝐼 1 𝐼 2 𝐼 3 𝑂𝑝 1 𝑂𝑝 2 𝑆 + 𝑏 1 + 𝑏 2 𝑃𝑆𝐷 2 𝜇 1 , 𝜎 1 2 𝑃𝑆𝐷 1 𝜇 2 , 𝜎 2 2 𝑂𝑝 3 𝑂𝑝 4 + 𝑏 3 + 𝑏 4 𝑂𝑝 5 𝑃𝑆𝐷 4 𝜇 3 , 𝜎 3 2 𝑃𝑆𝐷 3 𝜇 4 , 𝜎 4 2 + 𝑏 5 2 𝑃𝑆𝐷 5 𝜇 5 , 𝜎 5 𝑂 𝜇 𝑂 , 𝜎 𝑂 2 𝑃𝑆𝐷 𝑂 (b) PSD agnostic: blind propagation of µi, ‡ 2 i . Proposed PSD method: propagation of µi, ‡ 2 i , and P SDi Table 2 . 2 and N PSD is set to 1024. 𝐿𝑃 𝑐 2 ↓ 𝑥 𝑙𝑙 𝐿𝑃 𝑐 2 ↓ 𝑥 𝑖𝑛 𝐻𝑃 𝑐 2 ↓ 𝑥 𝑙ℎ 𝐿𝑃 𝑐 2 ↓ 𝑥 ℎ𝑙 𝐻𝑃 𝑐 2 ↓ 𝐻𝑃 𝑐 2 ↓ 𝑥 ℎℎ 𝑦 𝑙𝑙 2 ↑ 𝐿𝑃 𝑑 + 2 ↑ 𝐿𝑃 𝑑 𝑦 𝑙ℎ 2 ↑ 𝐻𝑃 𝑑 + 𝑦 𝑜𝑢𝑡 𝑦 ℎ𝑙 2 ↑ 𝐿𝑃 𝑑 𝑦 ℎℎ 2 ↑ 𝐻𝑃 𝑑 + 2 ↑ 𝐻𝑃 𝑑 Figure 2.5 -1-level DWT coder and decoder 1 -Relative error power estimation statistics E d Table 2 . 2 about one millisecond in case of both experiments. With more PSD samples, the time taken by frequency filtering example grows slower than Daubechies DWT example owing to its small size. A speed-up factor of 3 ≠ 5 orders of magnitude compared to simulation is obtained in both cases even for the highest value of N PSD . Proposed Proposed PSD PSD method PSD method agnostic (max accuracy) (min accuracy) method Freq. Filt. 
DWT 9/7 ≠8.40% 1.10% ≠0.87% 0.90% 29.5% 610% 2.7 -E d versus number of PSD samples N PSD 2 -Comparison of E d between PSD agnostic method and proposed PSD method Time spent on this estimation is usually another critical resource. Figure 2.8 gives the time of output error estimation using the proposed PSD method versus N PSD . With N PSD = 16 the proposed method requires 1, Section 1.4, many different approximate operators do exist. Most adders rely on different ways to break the carry chain, like LPA, ACA, ETA version II to IV, and most multipliers by pruning the partial products to simplify the summand grid reduction, such as AAM version I to III and Fixed-width modified-Booth-encoded Multiplier (FBM) version I to III. Only a few examples such as ETAI or DRUM use strongly different techniques. Amongst these operators, some are configurable at run time like AC2A and GDA. Table 3 . 3 1 -Storage Cost of BWER Propagation Full Data Structure .1 for different operations. It Operation n m Storage 8-bit addition 8 9 2.4 MB 8-bit multiplication 8 16 4.2 MB 16-bit addition 16 17 292 GB 16-bit multiplication 16 32 550 GB Table 3 . 3 Add 292 GB 22 GB 31 MB 2.4 MB 186 kB 14 kB 980 B 16-Mul 550 GB 280 GB 93 MB 6.3 MB 431 kB 29 kB 1.9 kB 2 -Storage Cost of BWER Propagation Data Structure for 16-bit Addition and Multiplication Depending on Reduction Method to 2.4 MB for instance with k = 8, limiting the consideration of input LSBs to a horizon of 8, Operator Original 1 st Reduction k = 10 k = 8 k = 6 k = 4 k = 2 2 nd Reduction 16- .2 gives the corresponding memory cost as a function of k for 16-bit addition and multiplication. For 16-bit addition, the data structure size is reduced from 292 GB Table 3 . 3 3 -Partial Input BWER for the Example b z,i ), -i,4 must be known for i oe J0, 15K. - 6,4 is calculated from Table 3.3 the following way: -6,4 = P 4 (E = P (e y,4 , e x,y,6 | B x,4 , e x,y ) y,3 , e x,3 Table 3 . 
3 Chapter 3 4 -BWER Propagation and Simulation Time of Stand-Alone Approximate Operators -Simulation is run on 10 7 input samples considered. Table 4 . 4 1 -Direct comparison of 16-bit-input and output fixed-point and approximate multipliers 0.05 0.5 0.045 Power (mW) 0.025 0.03 0.035 0.04 (16,15) Fxp add. -trunc. Fxp add. -round. ACA ETAIV IMPACT (16,2) Delay (ns) 0.4 0.2 0.3 0.1 (16,15) Fxp add. -trunc. Fxp add. -round. ACA ETAIV IMPACT (16,2) -100 0.02 -80 -60 -40 -20 0 -100 0 -80 -60 -40 -20 0 MSE (dB) MSE (dB) (a) MSE vs power (b) MSE vs delay 0.025 220 200 0.02 (16,15) 180 (16,15) PDP (pJ) 0.01 0.015 Area 140 160 Fxp add. -trunc. (16,2) 120 Fxp add. -trunc. (16,2) Fxp add. -round. Fxp add. -round. 0.005 ACA ACA ETAIV 100 ETAIV IMPACT IMPACT 0 80 -100 -80 -60 -40 -20 0 -100 -80 -60 -40 -20 0 MSE (dB) MSE (dB) (c) MSE vs PDP (d) MSE vs area Figure 4.3 -Direct comparison of 16-bit-input fixed-point and approximate adders regarding MSE Table 4 . 4 2 -Accuracy and Energy Consumption of FFT-32 Using 16-bit Fixed-Width Multipliers imate version of the encoder, DCT operations are computed using fixed-point or approximate operators. The quality metric to compare the exact and the approximate versions of the JPEG encoder is the Mean Structural Similarity (MSSIM) 4.5 -Power Consumption of FFT-32 Versus Output PSNR Using 16-bit Approximate Adders MUL t (16, 16) AAM (16) FBM (16) PSNR (dB) PDP (pJ) 53.88 55.43 59.66 92.49 ≠18.14 93.26 Table 4 . 4 3 gives the energy spent by the MC filter replacing all its additions by adders producing an MSSIM of approximately 0.99. In their 16-bit version, ACA and ETAIV can only reach respectively Figure 4.7 -Lena Encoded with DCT Instrumented With Different 16-bit Approximate Operators0.96 and 0.98. In any case, and as discussed above, the multiplier overhead provokes an energy consumption which is 4.6 times superior for the approximate version than for the truncated FxP version. 
For multiplier replacement, Table 4.3 shows that both 16-bit AAM and FBM produce an accuracy similar to the fixed-width truncated FxP multiplier. Moreover, replacing multipliers by FBM in the MC filter does not lead to an important energy overhead, which makes it competitive considering that its delay is 37% inferior to MUL_t(16, 16) according to Table 4.1. However, AAM suffers from an important energy overhead.

Table 4.5 shows K-means clustering classification success rate and the energy spent in distance computation using 16-bit input FxP and approximate operators.

                    Success Rate   Adder Energy (pJ)   Min. Mult. Energy (pJ)   Total Energy (pJ)
ADD_t(16, 11)       99.14%         1.55E−2             9.36E−2                  2.03E−1
ACA(16, 12)         99.10%         1.54E−2             2.49E−1                  5.13E−1
ETAIV(16, 4)        99.43%         1.30E−2             2.49E−1                  5.11E−1
IMPACT(16, 6, 3)    99.67%         1.00E−2             2.49E−1                  5.08E−1
ADD_t(16, 8)        86.00%         1.27E−2             2.40E−2                  6.06E−2
ACA(16, 8)          86.06%         9.85E−3             2.49E−1                  5.08E−1
ETAIV(16, 2)        63.25%         7.00E−3             2.49E−1                  5.05E−1
IMPACT(16, 10, 1)   87.29%         1.26E−2             2.49E−1                  5.11E−1

Table 4.5 – Accuracy and Energy of Distance Computation for K-means Clustering Using 16-bit Input Adders for Different Success Rates

AAM achieves accuracy similar to the fixed-width truncated accurate multiplier, with a 99% classification success rate. However, it presents an energy overhead of 75%. FBM achieves very poor performance for K-means, with only 10% success, which is equivalent to pruning 12 output bits of a FxP multiplier.
                Success Rate   Multiplier Energy (pJ)   Min. Adder Energy (pJ)   Total Energy (pJ)
MUL_t(16, 16)   99.84%         2.49E−1                  1.83E−2                  5.15E−1
AAM(16)         99.43%         4.42E−1                  1.83E−2                  9.02E−1
FBM(16)         10.27%         2.54E−1                  1.83E−2                  5.27E−1
MUL_t(16, 4)    10.87%         2.04E−1                  1.24E−3                  4.09E−1

Table 4.6 – Accuracy and Energy of Distance Computation for K-means Clustering Using 16-bit Input Multipliers

ct_float, embedded in ApxPerf, offers a balance between computational safety and simplicity. Inspired by ac_float, it is written in C++ for HLS. Two versions of ct_float exist: one based on the Mentor Graphics ac_int data type, made for Mentor Graphics Catapult C, and the other based on the ap_int data type from Xilinx, used in Vivado for FPGA targets. FloPoCo (for Floating-Point Cores, but not only) is a generator of arithmetic cores [De Dinechin]. Also based on C++, it has its own synthesis engine and directly returns VHDL. More than simple arithmetic operators, it is able to generate optimized floating-point computing cores performing complex arithmetic expressions. In this section, we are only interested in FloPoCo's custom floating-point addition and multiplication. The main difference of FloPoCo's floating-point representation is the extra 2-bit exception field transported in the data. Like ct_float, FloPoCo does not handle subnormals. Unlike ac_float, neither ct_float nor FloPoCo supports a custom exponent bias.
           Area (µm²)   Critical path (ns)   Total power (mW)   Energy per operation (pJ)
AC_FLOAT   312          1.44                 1.84E−1            9.07E−1
CT_FLOAT   318          1.72                 2.13E−1            1.05
FLOPOCO    361          2.36                 1.84E−1            9.06E−1
CT_FLOAT/AC_FLOAT   +2.15%   +19.4%   +15.4%   +15.7%
CT_FLOAT/FLOPOCO    −11.8%   −27.0%   +15.7%   +15.8%

Table 5.2 – Comparative Results for 16-bit Custom Floating-Point Addition/Subtraction with F_clk = 200 MHz

           Area (µm²)   Critical path (ns)   Total power (mW)   Energy per operation (pJ)
AC_FLOAT   488          1.18                 2.15E−1            1.05
CT_FLOAT   389          1.13                 1.76E−1            8.59E−1
FLOPOCO    361          1.52                 1.34E−1            6.50E−1
CT_FLOAT/AC_FLOAT   −20.4%   −4.24%   −18.2%   −18.2%
CT_FLOAT/FLOPOCO    +7.68%   −25.6%   +31.7%   +32.1%

Table 5.3 – Comparative Results for 16-bit Custom Floating-Point Multiplication with F_clk = 200 MHz
45-49, 69, 70, 76, 134, 197 Apx Approximate. 109, 133, 140 ASIC Application Specific Integrated Circuit. 156 BER Bit Error Rate. 52, 94, 107, 109, 122, 129, 130, 134, 142, 143, 194, 195 BKA Brent-Kung Adder. 30, 31, 128, 129 BWER Bitwise-Error Rate. 6, 109-117, 119-122, 130, 134, 194, 198 CLA Carry-Lookahead Adder. 29, 30, 46, 51, 52, 58, 197 An operator is considered as fixed-width when its output has the same width as its inputs. In the considered multiplication case, half of the output LSBs is truncated. Remerciements Acronyms Publications Search for custom error estimation test bench "err_testbench.cpp"... Not found. Automated generation of missing test bench(es)... -Detection of input and output types... Done. Input type: ct_float<8,24,CT_RD> Output type: ct_float<8,24,CT_RD> -Instrumentation of hardware performance estimation test bench input generation... Done. -Compilation of hardware performance estimation test bench input generation... Done. -Generation of hardware performance estimation test bench inputs... Done. -Instrumentation of hardware performance estimation test bench... Done. -Instrumentation of error estimation test bench... Done. Compilation of error estimation test bench... Done. Execution of error estimation test bench... Done. Copy of error estimation results to results folder... Done. Instrumentation of Catapult C script... Done. Execution of Catapult C script... Done. Instrumentation of Design Compiler script... Done. Detection of a previous compilation of technology libraries... Done. Execution of Design Compiler script... Done. Instrumentation of SystemC Verify makefile... Done. Compilation of SystemC Verify flow... Done. Preparation of technology libraries for Modelsim... Done. Instrumentation of Modelsim script for VCD generation... Done. Execution of SystemC Verify flow... Done. Instrumentation of PrimeTime script... Done. Execution of PrimeTime script... Done. Save of compiled technology libraries for next executions... Done. 
Copy of gate-level VHDL, experiment parameters, reports and logs to results directory... Done.

Chapter 3 - Hardware Operator

process that minimizes the difference between the outputs of the operator and its equivalent statistical model, according to a certain metric. In this work, we used three accuracy metrics to calibrate the efficiency of the proposed statistical model:

- Mean Square Error (MSE): the average of the squared deviations between the output of the statistical model x̂ and the reference x:

  MSE = (1/N) Σ_{i=1}^{N} (x̂_i − x_i)²    (3.14)

- Hamming distance: the number of bit positions with a bit flip between the output of the statistical model x̂ and the reference x:

  d_H(x̂, x) = Σ_i (x̂_i ⊕ x_i)

- Weighted Hamming distance: the Hamming distance with a weight for every bit position depending on its significance:

  d_WH(x̂, x) = Σ_i 2^i (x̂_i ⊕ x_i)

Proof of Concept: Modelling of Adders

In the rest of the section, we develop a proof of concept by applying VOS on different adder configurations. All the adder configurations are subjected to VOS and characterized using the flow described in Fig. 3.10. Fig. 3.14 shows the design flow for modelling VOS operators. As shown in Fig. 3.13, a rudimentary model of the hardware operators is created with the input vectors and the statistical parameters. For the given input vectors, the output of both the model and the hardware operator is compared based on the defined set of accuracy metrics. The comparator shown in Fig. 3.14 generates the signal-to-noise ratio (SNR) and Hamming distance to determine the quality of the model based on the accuracy metrics. SNR and Hamming distance are fed back to the optimization algorithm to further fine-tune the model to represent the VOS operator. In the case of the adder, only one parameter P_i is used for the statistical model, defined as C_max, the length of the maximum carry chain to be propagated. Hence, given the operating

ApxPerf - Second Version

The second version of ApxPerf so far brings an extra layer of high-level synthesis.
In this version, whose framework is described in Figure 4.2, only one source is needed for both hardware and accuracy estimation, written in C++. (Figure 4.2 shows the three stages of the flow: HLS + RTL synthesis, simulation + verification, and gate synthesis.) The HLS and the simulations are performed by Mentor Graphics Catapult C. During HLS, the Register Transfer Level (RTL) representation of the input source is generated. Then, a second compilation pass is ensured by Design Compiler to get a gate-level representation. The gate-level representation is then passed again to Catapult C for ModelSim simulation and verification using the integrated SystemC Verify framework. Thanks to this framework, the same C++ test bench as for accuracy estimation is used for both hardware verification and generation of the VCD files for PrimeTime power estimation. This way, the statistical distribution of the generated test bench, which can be uniform or random with tunable parameters, is ensured to be the same for hardware performance and accuracy characterizations. The accuracy estimation part returns the same error metrics as the first version of ApxPerf described in the previous section. The main novelty is the possibility to add any error metric to the error estimation as a plugin, with no need to modify the framework kernel. Another main evolution is the replacement of Bash and MATLAB scripts by Python. This second version is consequently more portable. Moreover, except for the hardware characterization part requiring Mentor Graphics and Synopsys tools, the whole error estimation part, from simulation to results management, is not linked to any third-party software. The management of gen-

Chapter 5 - Fixed-Point Versus Custom Floating-Point Representation in Low-Energy Computing

In Chapter 4, we compared classical fixed-point arithmetic with operator-level approximate computing.
The general conclusion was the superiority of fixed-point arithmetic, thanks to a lower error entropy making the error more robust to deterioration in propagation. In this chapter, fixed-point arithmetic is compared to custom floating-point arithmetic. As a reminder, fixed-point arithmetic is presented in Section 1.3 and floating-point arithmetic in Section 1.2. To perform this comparison, the study was conducted using the second version of ApxPerf, described in Section 4.1.2. This version embeds a synthesizable custom floating-point library called ct_float, presented in Section 5.1 and developed in the context of this thesis. In this section, ct_float is first compared to other custom floating-point libraries to show its efficiency. Then, stand-alone fixed-point and floating-point paradigms are compared in Section 5.2 to appreciate their differences in terms of accuracy and hardware performance. Finally, in Section 5.3, both representations are compared on signal processing applications, K-means clustering and FFT, leveraging relevant metrics.

CT_FLOAT: a Custom Synthesizable Floating-Point Library

The second version of the ApxPerf framework was presented in Section 4.1.2. It allows for fast and user-friendly hardware characterization of approximate operators written in C++ thanks to HLS, leveraging Catapult C, Design Compiler, ModelSim and PrimeTime, and error characterization thanks to C++ benchmarks. As mentioned in Section 4.1.2, ApxPerf v2 comes with built-in approximate operator libraries such as apx_fixed, containing approximate integer adders and multipliers in fixed-point representation. This section presents ct_float, the main operator library of ApxPerf v2. Two versions are provided:

- a first version based on the Mentor Graphics ac_int datatype, made for Catapult HLS but also stand-alone error estimation, and
- a second version based on the Xilinx Vivado HLS integer library ap_int, made for Xilinx FPGA targets using Vivado.
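To make the (e, m) parameterization concrete before detailing the library, here is a small Python sketch, not the ct_float API itself (which is synthesizable C++), of rounding a value onto a custom floating-point grid with an e-bit exponent and an m-bit mantissa that counts the implicit leading 1. Overflow handling, subnormals and the four rounding modes are deliberately left out of this illustration.

```python
import math

def quantize_custom_float(x: float, e: int, m: int) -> float:
    """Round x to the nearest value representable with a sign bit, an e-bit
    biased exponent, and an m-bit mantissa (implicit leading 1 counted in m).
    Illustrative only: no overflow/underflow handling, no subnormals."""
    if x == 0.0:
        return 0.0
    frac, exp = math.frexp(x)                 # x = frac * 2**exp, |frac| in [0.5, 1)
    frac = round(frac * (1 << m)) / (1 << m)  # keep m significant bits
    bias = (1 << (e - 1)) - 1                 # usual IEEE-style exponent bias
    assert -bias + 1 <= exp <= bias + 1, "value outside exponent range"
    return math.ldexp(frac, exp)

# With an 8-bit mantissa, pi collapses onto the nearest grid point of its binade:
print(quantize_custom_float(math.pi, 8, 8))   # 3.140625
```

The sketch shows why the pair (e, m) fully determines both the dynamic range (through the biased exponent) and the precision (through the mantissa width), which is exactly what the two template parameters of ct_float expose.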
Both versions are provided in the same source code and activated through C++ pre-compiler directives. The implementation of ct_float, as for apx_fixed, features:

- Synthesizable operator overloading: unary operators: unary -, !, ++, --; relational operators: <, >, <=, >=, ==, !=; binary operators: +, +=, -, -=, *, *=, <<, <<=, >>, >>=; and assignment operator from/to another instance of ct_float.
- Non-synthesizable operator overloading: assignment operator from/to C++ native datatypes (float, double), output operator << for easy display and writing to files.

Other built-in functions allow easy manipulation of floating-point values, such as test functions to get information about the extreme representable values for a given floating-point representation, to know if a given value is representable, etc. The declaration of an instance of ct_float requires three template parameters:

1. the exponent width e,
2. the mantissa width m, and
3. the rounding mode used in arithmetic operators and changes of representation.

Currently, four rounding modes are available, given by Table 1.5 in Chapter 1. The value of the mantissa width m includes the implicit 1 (see below). The representation also includes a sign bit. Therefore, the total number of bits in memory is equal to e + m. As mentioned above, two synthesizable operators are available: addition and multiplication. Unlike apx_fixed, the output representation of these operators is not automatically determined to prevent under/overflows. Indeed, if the inputs are on (e

To estimate the energy spent per operation, we also introduce a fair metric, which is the total energy spent before stabilization. Indeed, in the literature, energy per operation is often estimated either:

- using the total energy per clock cycle, or
- using the Power-Delay Product (PDP), which is the average dynamic power multiplied by the delay of the operation.
In the first case, a fair comparison between two different operators is strongly dependent on their difference of slack. Indeed, let us imagine two operators op1 and op2 which have the same size and the same static power. If op1 stabilizes twice as fast as op2 with the same dynamic power, then we would naturally tend to say that E(op1) = 1/2 × E(op2). However, with the total energy per clock cycle metric, if the slack is high, then op1 and op2 will seem to have very close energies per operation, which is indeed false. With the PDP metric, the static power is not considered. Therefore, if op1 and op2 have very different static powers, which is true if they do not have quite equivalent areas, then the energy per operation will be too much in favor of the larger operator. To be perfectly fair, the energy per operation must consider the whole energy (static and dynamic) spent before stabilization, as depicted in Figure 5.1. Considering the average static power P_s, the average dynamic power P_d, the critical path delay T_cp, the clock period T_clk and the number of latency cycles N_c, the total energy spent before stabilization E_op is

E_op = (P_s + P_d) × ((N_c − 1) × T_clk + T_cp)    (5.2)

Comparison on K-Means Clustering Application

This section describes the K-means clustering algorithm and gives the comparative results for FxP and FlP. First, the principle of the K-means method is described. Then, the specific algorithm used in this case study is detailed.

K-Means Clustering Principle, Algorithm and Experimental Setup

K-means clustering is a well-known method for vector quantization, which is mainly used in data mining, e.g. in image classification or voice identification. It consists in organizing a multidimensional space into a given number of clusters, each being totally defined by its centroid. A given vector in the space belongs to the cluster whose centroid it is nearest to.
The clustering is optimal when the sum of the distances of all points to the centroids of the clusters they belong to is minimal, which corresponds to finding the set of clusters

arg min_S Σ_{i=1}^{k} Σ_{x ∈ S_i} ‖x − µ_i‖²    (5.3)

where µ_i is the centroid of cluster S_i. Finding the optimal centroid positions of a vector set is mathematically NP-hard. However, iterative algorithms such as Lloyd's algorithm allow us to find good approximations of the optimal centroids by an expectation-maximization process, with linear complexity (linear in the number of clusters, the number of data points, the number of dimensions, and the number of iterations). The iterative Lloyd's algorithm (Lloyd, "Least squares quantization in PCM") is used in our case study. It is applied to bidimensional sets of vectors in order to have easier display and interpretation of the results. From now on, we will only refer to the bidimensional version of the algorithm. Figure 5.4 shows results of K-means on a random set of input vectors, obtained using double-precision floating-point computation with a very restrictive stopping condition. Results obtained this way are considered as the reference golden output in the rest of the paper. The algorithm consists of three main steps:

1. Initialization of the centroids.
2. Data labelling.
3. Centroid position update.

Steps 2 and 3 are iterated until a stopping condition is met. In our case, the main stopping condition is when the difference of the sums of all distances from data points to their cluster's centroid between two iterations is less than a given threshold. A second stopping condition is the maximum number of iterations, required to avoid the algorithm getting stuck when the arithmetic approximations performed are too high to converge. The detailed algorithm for one dimension is given by Algorithm 3. Input data are represented by the vector data of size N_data, output centroids by the vector c of size k.
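A minimal one-dimensional Lloyd's iteration in the spirit of the three steps above can be sketched in Python (this is an illustration, not the thesis implementation; the names data, c, acc_target and max_iter follow the parameters used in the text):

```python
def kmeans_1d(data, c, acc_target, max_iter=150):
    """Minimal 1-D Lloyd's iteration: label points, update centroids, stop when
    the total distance changes by less than acc_target between two iterations."""
    prev_total = float("inf")
    for _ in range(max_iter):
        # Step 2: label each point with the index of its nearest centroid
        labels = [min(range(len(c)), key=lambda i: (x - c[i]) ** 2) for x in data]
        total = sum((x - c[l]) ** 2 for x, l in zip(data, labels))
        # Step 3: move each centroid to the mean of its cluster
        for i in range(len(c)):
            members = [x for x, l in zip(data, labels) if l == i]
            if members:
                c[i] = sum(members) / len(members)
        if abs(prev_total - total) < acc_target:   # main stopping condition
            break
        prev_total = total
    return c
```

On two well-separated groups of points, the centroids converge to the group means in a few iterations, e.g. `kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.2, 9.8], [0.0, 5.0], 1e-6)` returns centroids close to 1.0 and 10.0.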
The accuracy target for the stopping condition is defined by acc_target and the maximum allowed number of iterations by max_iter. In our study, we use several values for acc_target, and max_iter is set to 150, which is never reached in practice. The impact of fixed-point and floating-point arithmetic on performance and accuracy is evaluated considering the distance computation function distance_comp, defined by:

d ← (x − y) × (x − y)    (5.4)

The computation is written this way instead of using the square function in order to let the HLS determine the intermediate types, thanks to the C++ native type overloading implemented in ct_float and ac_fixed, which are used for the floating-point and fixed-point implementations, respectively. All the other parts of the computations are implemented using double-precision floating-point, and their contribution to the performance cost is not evaluated. Using a wholly approximate K-means application would require these operations to be approximated the same way as the distance computation. However, as the distance computation is the most complex part of the algorithm and as it is the deepest operation in the inner loops, its impact on accuracy and performance is the most critical. In the 2D case, the distance computation becomes

d ← (x_1 − y_1) × (x_1 − y_1) + (x_2 − y_2) × (x_2 − y_2)    (5.5)

which is equivalent to 1 addition, 2 subtractions, and 2 multiplications. However, as the distance computation is cumulative on each dimension, the hardware implementation relies only on 1 addition (accumulation), 1 subtraction, and 1 multiplication.

Abstract

The physical limits being reached in silicon-based computing, new ways have to be found to overcome the predicted end of Moore's law. Many applications can tolerate approximations in their computations at several levels without degrading the quality of their output, or degrading it in an acceptable way. This thesis focuses on approximate arithmetic architectures to seize this opportunity. Firstly, a critical study of state-of-the-art approximate adders and multipliers is presented.
Then, a model for fixed-point error propagation leveraging power spectral density is proposed, followed by a model for bitwise-error rate propagation of approximate operators. Approximate operators are then used for the reproduction of voltage over-scaling effects in exact arithmetic operators. Leveraging our open-source framework ApxPerf and its synthesizable template-based C++ libraries apx_fixed for approximate operators, and ct_float for low-power floating-point arithmetic, two consecutive studies are proposed leveraging complex signal processing applications. Firstly, approximate operators are compared to fixed-point arithmetic, and the superiority of fixed-point is highlighted. Secondly, fixed-point is compared to small-width floating-point in equivalent conditions. Depending on the applicative conditions, floating-point shows an unexpected competitiveness compared to fixed-point. The results and discussions of this thesis give a fresh look at approximate arithmetic and suggest new directions for the future of energy-efficient architectures.
01753394
en
[ "spi.meca.mefl" ]
2024/03/05 22:32:10
2017
https://pastel.hal.science/tel-01753394/file/2017PSLEM029_archivage.pdf
Keywords: Heat pump, coefficient of performance, rotary, scroll, heating capacity, CFD, numerical model, heat transfer, compressor, heat losses, uncertainty, refrigerant, thermodynamic cycle
INTRODUCTION

The development of heat pumps (HPs), which offer high energy efficiency, is essential to reduce energy consumption in buildings and to address energy and environmental challenges at the national or European scale. However, the real performance of HPs is difficult to evaluate on site. A method adapted by Tran (In situ measurement methods of air to air heat pump performance) makes it possible to measure their performance in the field, but its accuracy remains strongly dependent on the evaluation of compressor heat losses.
The objective of this thesis is to improve this measurement method by identifying and validating a simplified approach for the on-site evaluation of compressor heat losses. The performance measurement method is detailed in Chapter 1. Two numerical models of compressor thermal behavior (scroll and rotary) are presented in Chapter 2. After their experimental validation (Chapter 3), the models are used in Chapter 4 to determine the approach to adopt in order to evaluate heat losses accurately on site. This approach is then integrated into the real-performance measurement method and validated in the laboratory on an air-to-water heat pump prototype in Chapter 5.

Research objective

The three key targets for 2020 of the European Union (EU) climate action are to reduce greenhouse gas (GHG) emissions by 20% compared with emission levels in 1990, to ensure that 20% of total energy consumption comes from renewable energy, and to increase energy efficiency by 20% (European Commission, 2017). It has been shown that a major part of GHG emissions originates from heating and domestic hot water (DHW) consumption. According to the French Environment and Energy Management Agency (2014), heating and DHW production was responsible for 25% of total CO2 emissions and 40% of final energy consumption in France. For these reasons, efficient and environmentally friendly energy solutions for heating and DHW production are in demand. One of these solutions consists of using residential heat pumps (HPs). Due to the high theoretical efficiency of residential HPs, their development is essential when attempting to reduce heating energy consumption in dwellings.
However, current methods to evaluate HP efficiencies have become problematic, since efficiencies are measured and established in controlled laboratory conditions (Ertesvåg, Uncertainties in heat pump coefficient of performance (COP) and exergy efficiency based on standardized testing). Performance values obtained in such conditions may differ from the ones obtained on-field due to several factors, such as installation quality, design of the heating system, and climatic conditions. Therefore, real-time on-field performance measurements provide more reliable data. Nevertheless, accurately measuring the on-field heating capacity and coefficient of performance (COP) of HPs is difficult, particularly of air-to-air HPs, since measuring air enthalpies, and specifically the air mass flow rate, on-field is challenging (McWilliams, Review of airflow measurement techniques). A performance assessment method, presented and validated in the work of Tran (In situ measurement methods of air to air heat pump performance), demonstrates that air-to-air HP efficiencies can be measured using mass/energy balances of the refrigeration components. The measurements necessary for this method, located on the refrigerant side, require only non-intrusive supplementary sensors, surface temperature sensors, in addition to already installed pressure sensors. The method is, therefore, well adapted for in-situ performance evaluation. The performance assessment method is based on the compressor energy balance, where compressor heat losses towards the ambient air must be estimated. Tran stated that the sensitivity index of heat losses in the relative uncertainty of the heating capacity predicted by the method is 40%. Thus, compressor heat losses must be evaluated more accurately, as they influence significantly the overall accuracy of the method.
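To illustrate why the heat-loss term matters, the following Python sketch propagates an error on the estimated compressor heat losses through a simplified compressor energy balance, ṁ = (Ẇ_comp − Q̇_loss)/(h_dis − h_suc), and then into the heating capacity. All the numbers and the simplified balance form are illustrative assumptions, not measurements or equations from the thesis.

```python
def mass_flow(w_comp, q_loss, h_suc, h_dis):
    """Indirect refrigerant mass flow rate (kg/s) from a simplified
    compressor energy balance (illustrative form)."""
    return (w_comp - q_loss) / (h_dis - h_suc)

def heating_capacity(m_dot, h_cond_in, h_cond_out):
    """Condenser heating capacity (kW) from refrigerant-side enthalpies."""
    return m_dot * (h_cond_in - h_cond_out)

# Illustrative operating point (powers in kW, enthalpies in kJ/kg)
w_comp, h_suc, h_dis = 2.5, 420.0, 470.0
h_cond_in, h_cond_out = 470.0, 250.0

for q_loss in (0.20, 0.25, 0.30):          # +/-20% error on a 0.25 kW loss
    m_dot = mass_flow(w_comp, q_loss, h_suc, h_dis)
    print(q_loss, round(heating_capacity(m_dot, h_cond_in, h_cond_out), 3))
```

With these toy numbers, a ±20% error on the loss estimate moves the predicted heating capacity between about 9.68 and 10.12 kW; the sensitivity grows with the share of the losses in the compressor power, which is why a more accurate on-field loss evaluation is sought.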
The method consists of measuring the refrigerant mass flow rate indirectly, using the compressor energy balance. The heating capacity is then obtained using this value and by measuring condenser inlet and outlet enthalpies. Finally, the COP value of the HP can then be calculated directly as the ratio of the heating capacity to the measured HP electrical power input. An advantage of this on-field performance measurement is to ensure optimal operation of HP systems by promptly detecting significant performance degradation. This diagnosis reflects positively on energy and cost savings. Additionally, the refrigerant mass flow rate also provides in itself valuable information in terms of fault detection and diagnostics (FDD). Therefore, integrating an on-field FDD technology is important to minimize performance degradation, maintenance costs, such as field inspections and component replacements, and machine downtime. To sum up, the method provides means for the optimization of HP operation and maintenance quality. The aim of this thesis is to improve the method presented by Tran by investigating the simplest way to measure compressor heat losses precisely on-field, and then to experimentally validate this fully operational and reliable in-situ assessment method of HP performances. The newly proposed method should be compatible with various refrigeration cycles, such as air-to-air HPs, with and without vapor injection, and easily applicable in already-installed machines. The research objectives stated above are achieved in three steps:

- First, a more accurate method to evaluate compressor heat losses on-field will be established, in order to minimize the influence of compressor heat losses on the uncertainty of the performance assessment method. For this purpose, a numerical model that simulates compressor thermal behavior is developed. The model couples integral and differential formulations.
Exterior thermal profiles of scroll and rotary compressor shells are validated with experimental data obtained for both compressor types in the Mitsubishi Heavy Industries (MHI) laboratory.

- The simulation model is then used to observe the thermal behavior of compressor shells and to determine the minimum non-intrusive instrumentation, i.e. the minimal number of surface temperature sensors and their locations on the rotary and scroll compressor shells, necessary to estimate compressor heat losses on-field. The correlations required to evaluate the convective heat transfer coefficients used to calculate compressor heat losses on-field are also selected. The methodology to measure scroll and rotary compressor heat losses on-field is thus established and integrated in the performance assessment method.

- The improved method is then experimentally validated at EDF R&D Lab Renardières, France, using an air-to-water HP prototype in several operating conditions.

Thesis overview

Chapter 1 first presents a brief overview of the thermodynamic cycle that constitutes a HP system: compression, condensation, expansion, and evaporation. In this chapter, state-of-the-art performance measurement methods are presented and their limitations are highlighted.
Two distinct numerical models have been developed for both compressor types due to differences in the internal component layouts, specifically the location of the electrical motor. The model is a hybrid model where a computational fluid dynamics (CFD) analysis is supported and optimized by integral formulations programmed in MATLAB. Simulations are done in steady state conditions. The limitations of the numerical models and suggestions for future development conclude Chapter 2. Consequently, Chapter 3 deals with the experimental validation of the developed model using measurement results obtained for both compressor types, scroll and rotary. The experimental test bench for both compressor types is presented. Introduced simplifications, applicability of the model to different internal component layouts and dimensions, and suggestions for future development are discussed. The chapter concludes that the accuracy of the model is sufficient for it to be used to determine the temperature sensor location for measuring on-field compressor heat losses. Chapter 4 presents the required instrumentation, in terms of temperature sensor locations for scroll and rotary compressors, and the selected heat transfer correlations that can be employed when the shell temperature and the temperature of air surrounding the compressor are known. Thus, Chapter 4 establishes the improved on-field heat loss evaluation method for scroll and rotary compressors. The performance assessment method with improved heat loss evaluation method is experimentally validated in Chapter 5 with a specific HP test bench. The air-to-water HP prototype built specifically for the HP test bench along with the experimental test set up is presented. The deviations between the heating capacities obtained from the performance assessment method and reference values are presented and discussed. Finally, conclusions are drawn on the reliability and applicability of the method in onfield conditions. 
The possibilities to further exploit the numerical model, in order to characterize scroll and rotary compressors of different sizes and to adapt the heat loss evaluation methods accordingly, are explored. It is also proposed to validate the performance assessment method coupled with the chosen FDD method on the experimental test bench presented in Chapter 5. The method incorporates the evaluation of compressor heat losses. Tran determined that the uncertainty in the evaluation of these losses contributes 40% of the heating capacity estimation error. This illustrates the importance of developing a more mature and more reliable approach to evaluate heat losses on site, in order to reduce the uncertainty of the performance measurement method. The approach originally used in this method relied on the approximation that the compressor shell temperature is homogeneous and equal to the discharge refrigerant temperature. However, this is not always the case, in particular for scroll compressors, where the compression chamber is at the top.

CHAPTER 1 BACKGROUND AND LITERATURE REVIEW

The performance measurement method computes the mass flow rate. Knowing the flow rate in real time is useful for the fault detection and diagnosis model presented in the work of Li & Braun (2007). In the near future, the most common faults in HPs could be tested in order to experimentally validate the fault detection and diagnosis method coupled with the performance measurement method.
Overview of heat pumps

Following widespread concern about greenhouse gas emissions, climate change, and increasing energy demand for heating and air-conditioning, the development of residential HPs has been a driving force for diminishing the heating energy consumption of dwellings, due to their high theoretical efficiency. Currently, millions of HPs are installed worldwide. Heat pumps constitute heating units or systems that extract heat from outdoor air, which is then used to heat a building. The vapor-compression refrigeration cycle is the basis of the heat transfer cycle, where a chemical substance, the refrigerant, alternately changes phase from liquid to gas and gas to liquid, as depicted in Figure 1.1. The heat transfer cycle consists of four distinct steps:

1. Compression (1-2): Refrigerant gas entering at low pressure (LP) and temperature is compressed to a higher pressure (HP), thus elevating the temperature, by consuming electrical power input. In some cases, injection of refrigerant fluid at an intermediate pressure is installed (see Subsection 1.2.1).
2. Condensation (2-3): Refrigerant gas at high temperature and pressure releases thermal energy to a medium, such as water or air, that serves as a heat sink, undergoing a phase change from gaseous to liquid state.
3. Expansion (3-4): High-pressure and medium-temperature refrigerant liquid is submitted to an expansion by flowing through an orifice in the expansion valve, thus reducing the liquid pressure and temperature.
4. Evaporation (4-1): Refrigerant at low temperature and pressure absorbs heat from a medium, such as air or water, that serves as a heat source, undergoing a phase change. Finally, the low-temperature and low-pressure refrigerant gas flows to the compressor and the cycle is repeated.
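In steady state the four steps close an energy loop: the heat released at the condenser equals the heat absorbed at the evaporator plus the compression work. A toy numerical check in Python, with purely illustrative enthalpies (not thesis data):

```python
# Hypothetical specific enthalpies (kJ/kg) at the four cycle states,
# chosen for illustration only
h1 = 420.0   # compressor suction (low-pressure gas)
h2 = 470.0   # compressor discharge (high-pressure gas)
h3 = 250.0   # condenser outlet (high-pressure liquid)
h4 = h3      # expansion through the valve is isenthalpic

m_dot = 0.05                     # refrigerant mass flow rate (kg/s)
w_comp = m_dot * (h2 - h1)       # compression power (kW)
q_cond = m_dot * (h2 - h3)       # heat released at the condenser (kW)
q_evap = m_dot * (h1 - h4)       # heat absorbed at the evaporator (kW)

# First-law closure of the loop: condenser heat = evaporator heat + work
assert abs(q_cond - (q_evap + w_comp)) < 1e-9
print(q_cond, q_evap, w_comp)    # 11.0 8.5 2.5
```

The closure is what makes heat pumps attractive: the building receives both the heat taken from outdoor air and the electrical work, so the delivered heat exceeds the electricity consumed.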
Figure 1.1 Schematic (a) and P-h diagram (b) of a HP refrigeration cycle

Reversible heat pumps can reverse the direction of the refrigerant flow between the two heat exchangers, condenser and evaporator, using a reversing valve, as illustrated in Figure 1.2. In such a case, the heat pump operates in cooling mode, extracting heat from a building and rejecting it to the outdoor air. Reversible heat pump systems consist of four primary components:

- compressor;
- condenser;
- expansion valve; and
- evaporator.

Compressor - The refrigerant entering in a gaseous phase is compressed to a higher pressure. In the process, the refrigerant temperature is also increased. Heat pump compressors require electrical power input. Lubricant oil is used to prevent damage to internal compressor components. In hermetic compressors, the motor and compression chamber are confined in a steel shell. In such compressor types, oil is present in a sump (reservoir) at the bottom of the compressor, and covers the majority of internal components. Droplets of oil are mixed with the refrigerant, and part of the oil migrates into the refrigeration cycle. For this reason, the working fluid of a HP cycle is a mix of refrigerant and lubricant oil. Scroll and rotary compressors are the two most commonly employed compressor types in residential air-to-air HPs (Tran, Méthodes de mesure in situ des performances annuelles des pompes à chaleur air/air résidentielles). Both are positive-displacement compressors. Scroll compressors have one scroll, or spiral, orbiting in a path that is defined by a matching fixed scroll, creating gas pockets between the scrolls. The orbiting scroll is attached to the crankshaft and the fixed one is attached to the compressor body. The gas is drawn in from the outer side portion of the scroll, creating a gas pocket that travels in between the scrolls.
The gas then moves towards the center of the scrolls (discharge), simultaneously decreasing the pocket size and increasing the pressure and temperature. The working principle of scroll compressors is illustrated in Figure 1.3.

Figure 1.3 Working principle of a scroll compressor (Hitachi Industrial Equipment Systems, 2017)

In rotary compressors, the rotating shaft sets in rotating motion a roller, named the rotor, inside a cavity. The circular rotor rotates inside a circular cavity eccentrically, since the centers of the circular cavity and rotor are offset. This compresses the refrigerant gas to a desired pressure, as the volume of the gas decreases. The working principle of a rotary compressor is depicted in Figure 1.4.

Condenser - The heat exchanger condenses refrigerant fluid from gaseous to liquid state and delivers the extracted heat from the heat source (outdoor air) to the heat sink (indoor air). Typically, a condenser consists of copper coils with aluminum fins. A fan is used to pull the ambient air through the finned coils, creating indoor air circulation across the exchanger. At the same time, the refrigerant circulates inside the finned coils.

Expansion valve - The aim of the expansion valve is to decrease the refrigerant pressure. Thermostatic (TXV) or electronic expansion valves (EEV) are typically used to adjust the device opening to ensure a certain pressure drop. Generally, the fluid is in a two-phase state when exiting the expansion valve.

Evaporator - Similar to a condenser, an evaporator typically consists of copper coils with aluminum fins. The refrigerant in a two-phase state in the evaporator is converted to a gaseous state by extracting thermal energy from a heat source (outdoor air). Generally, the refrigerant exiting the evaporator must be in a completely gaseous state, since liquid droplets can damage the compressor.
As in the case of condensers, the air crossing the exchanger coils can also be regulated with a fan.

Figure 1.4 Working principle of a rotary compressor [START_REF] Lee | Development of capacity modulation compressor based on a two stage rotary compressor. 2. Performance experiments and P-V analysis[END_REF]

The First Law of Thermodynamics postulates the conservation of energy, stating that energy cannot be created or destroyed, but can be transformed. Assuming that work and heat are the only forms of energy exchanged between the system and its surroundings, the First Law reads:

$\dot{m}\,\Delta h = \dot{Q} + \dot{W}$  (1.1)

The equation above is an energy balance for open systems, formulated as an enthalpy balance. The coefficient of performance (COP_HP) is the most common indicator of heat pump efficiency. It is a dimensionless value defined as the ratio of the net heating capacity, $\dot{Q}_{cond}$, to the electrical power input, $\dot{W}_{HP}$, under designated operating conditions:

$COP_{HP} = \dot{Q}_{cond} / \dot{W}_{HP}$  (1.2)

The condensation heat, $\dot{Q}_{cond}$, is determined from the condenser energy balance in steady state, as follows:

$\dot{Q}_{cond} = \dot{m}(h_{cond,in} - h_{cond,out})$  (1.3)

where $\dot{m}$ is the mass flow rate of the working fluid, $h$ is the working fluid enthalpy, and in and out denote the condenser inlet and outlet sides, respectively.
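Eqs (1.2)-(1.3) translate directly into code. The operating point used below (mass flow rate, enthalpies, electrical power) is illustrative, not taken from the thesis test bench:

```python
def heating_cop(m_dot, h_cond_in, h_cond_out, w_hp):
    """COP_HP = Q_cond / W_HP, with Q_cond from the condenser energy
    balance Q_cond = m_dot * (h_in - h_out), Eqs (1.2)-(1.3).
    m_dot in kg/s, enthalpies in kJ/kg, w_hp in kW."""
    q_cond = m_dot * (h_cond_in - h_cond_out)  # heating capacity [kW]
    return q_cond / w_hp

# Hypothetical operating point: 0.03 kg/s, 460 -> 250 kJ/kg, 1.5 kW input.
print(heating_cop(0.03, 460.0, 250.0, 1.5))  # about 4.2
```

Every quantity on the right-hand side except $\dot{m}$ is directly measurable; the rest of Section 1.2 is about estimating $\dot{m}$ without intrusive sensors.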
As mentioned previously, the working fluid of the HP refrigeration cycle is a refrigerant and oil mixture, and the enthalpy change at the condenser can be calculated as follows:

$\Delta h_{cond,in \to out} = (1 - C_g)(h_{r,cond,in} - h_{r,cond,out}) + C_g\,\Delta h_o^{T_{cond,out}-T_{cond,in}}$  (1.4)

where $C_g$ is the mass fraction of oil with respect to the working fluid flow, in and out are the inlet and outlet sides of the condenser, respectively, and $\Delta h_o^{T_{cond,out}-T_{cond,in}}$ is the specific enthalpy change of the oil, calculated as follows:

$\Delta h_o^{T_{cond,out}-T_{cond,in}} = c_{p,o}\,(T_{cond,in} - T_{cond,out})$  (1.5)

where $c_{p,o}(T)$ is the specific heat capacity of the oil calculated as in [START_REF] Conde | Estimation of thermophysical properties of lubricating oils and their solutions with refrigerants: An appraisal of existing methods[END_REF] and [START_REF] Liley | Chemical Engineering Handbook[END_REF]:

$\rho_o(T) = \rho_o(T_0) - 0.6(T - T_0)$  (1.6)

$c_{p,o}(T) = \dfrac{1684 + 3.4T}{\sqrt{s}}$  (1.7)

where $T$ is the temperature in °C, $\rho_o(T_0)$ is the oil density at $T_0$ = 38 °C, typically provided by the manufacturer, and $s$ is the ratio of the oil density to the water density at 15.56 °C. Finally, the condensation heat is obtained as follows:

$\dot{Q}_{cond} = \dot{m}[(1 - C_g)(h_{r,cond,in} - h_{r,cond,out}) + C_g\,c_{p,o}\,(T_{cond,in} - T_{cond,out})]$  (1.8)

Performances can be easily estimated on-field in water-to-water and air-to-water HPs by measuring the thermal energy supplied to the water circuit, which is calculated from the water inlet and outlet temperatures and mass flow rate measurements. It is particularly challenging to accurately measure the performances of air-to-air heat pumps (HPs) on-field, since obtaining accurate air enthalpy and flow rate measurements is problematic [START_REF] Mcwilliams | Review of airflow measurement techniques[END_REF]. An anemometer can be used to measure the air flow directly in order to estimate the performances of air-to-air HPs on-field. However, the implementation of such a measurement installation on-field is difficult.
For instance, it can interfere with the HP operation, due to the affected air flow rate resulting from additional pressure losses. [START_REF] Ichikawa | Study on running performance of a split-type air conditioning system installed on a university campus in suburban Tokyo[END_REF] presented another method, where the heating energy is obtained from the air velocity field, temperatures, and humidity ratios. This method requires data from the manufacturer, which makes it non-generic and unsuitable for measurements over long periods of time. Both of the mentioned methods can be classified as external methods, since the measurements are based on air properties. Internal methods, on the other hand, require refrigerant fluid measurements. The refrigerant enthalpy is computed from temperature and pressure measurements. The mass flow rate can be obtained using a flow meter or component energy/mass balances, as in the work of [START_REF] Teodorese | Testing of a Room Air Conditioner-High Class RAC Test Results-Medium Class RAC Test Results[END_REF] and [START_REF] Fahlén | Methods for commissioning and performance checking of heat pumps and refrigerant equipment[END_REF]. The main drawback of these methods is the requirement of using an intrusive flow meter and/or pressure sensors. The importance of using non-intrusive sensors to obtain accurate performance data must be stressed, since intrusive sensors cause weak perturbations in the refrigerant flow in functioning machines, affecting the performance of the refrigeration cycles. Furthermore, the application of intrusive measurements is expensive, difficult in already-installed machines, and can create potential leak sources. Based on the work of [START_REF] Fahlén | Methods for commissioning and performance checking of heat pumps and refrigerant equipment[END_REF], an internal method is presented in the work of Tran, et al. (2012).
The method constitutes a promising on-field performance assessment method that is based on refrigerant fluid measurements and component energy/mass balances. The method does not require intrusive flow and pressure meters and is, therefore, perfectly suitable for on-field measurements. In fact, only non-intrusive surface temperature sensors are employed to estimate the required pressures, refrigerant mass flow rate, heating capacities, and COP values. The method can be applied to different types of heat pump systems, including air-to-air and more complex refrigeration cycles. The method is described in more detail in Section 1.2.

1.2 Performance assessment method using indirect measurement of refrigerant mass flow rate

As stated previously, the performance assessment method determines indirectly the refrigerant mass flow rate using only non-intrusive sensors. It covers basic and complex cycles, such as vapor injection cycles. The aim of the method is to determine the performances of HPs in terms of heating capacity and COP.

1.2.1 Compressor energy balance

Basic cycles

In the basic refrigeration cycle, shown in the P-h diagram (a) and system schematic with all required measurements (b) in Figure 1.5, the method utilizes solely a steady-state compressor energy balance:

$\dot{W}_{comp} = \dot{Q}_{amb} + \dot{m}[(1 - C_g)(h_{r,out} - h_{r,in}) + C_g\,c_{p,o}\,(T_{out} - T_{in})]$  (1.9)

where $\dot{Q}_{amb}$ is the compressor heat loss towards the ambient air, and in and out denote the compressor suction and discharge sides, respectively. Rearranging gives the indirectly measured mass flow rate:

$\dot{m} = \dfrac{\dot{W}_{comp} - \dot{Q}_{amb}}{(1 - C_g)(h_{r,out} - h_{r,in}) + C_g\,c_{p,o}\,(T_{out} - T_{in})}$  (1.10)

In order to determine the refrigerant enthalpy at a certain point, the fluid temperature and pressure must be known. Fluid temperatures are obtained directly from pipe surface measurements. High and low pressures are determined indirectly via non-intrusive saturation temperature measurements on the condenser and evaporator heat exchangers. The evaporation pressure is determined by placing a surface temperature sensor at the inlet of the evaporator (point 4 in Figure 1.5), where the fluid is definitely in a two-phase state.
Similarly, for the condensation pressure, the saturation temperature is measured in the center of the condenser (point 2' in Figure 1.5), after desuperheating and before subcooling of the fluid. If the fluid is zeotropic, the temperature glide during phase change introduces an error in the calculated pressure. Yet, the influence of the error is limited; for example, in the case of refrigerant R407C, which is a strongly zeotropic fluid with a temperature glide of approximately 6 K, the error is 0.2 bar at most [START_REF] Tran | Méthodes de mesure in situ des performances annuelles des pompes à chaleur air/air résidentielles[END_REF].

Figure 1.5 Refrigeration cycle P-h diagram (a) and schematic with required measurements (b) of basic cycles

Extension to cycles with injection

The efficiency of basic refrigeration cycles has been shown to degrade when air-source HPs operate at very low and high ambient temperatures, due to increased compressor heat losses and reduced mass flow rate, respectively. Vapor injection cycles have been proposed by several HP manufacturers as alternatives to basic cycles, in order to increase both performance and heating capacity at these extreme conditions [START_REF] Heo | Comparison of the heating performance of air-source heat pumps using various types of refrigerant injection[END_REF]. [START_REF] Singer | On-field measurement method of vapor injection heat pump system[END_REF] showed that, theoretically, the performance assessment method can be extended to injection cycles. This can be achieved by determining steady-state equation systems to resolve the mass flow rates. These equations are based on the component mass/energy balances. Injection HP cycles can be classified in two main categories: cycles that include a flash tank (FT) and cycles that include an internal heat exchanger (IHX).
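Before detailing the two injection variants, the basic-cycle mass flow rate estimate, Eq. (1.10), can be sketched in code. The oil-corrected enthalpy difference follows the same structure as Eq. (1.4); all numerical values below are illustrative:

```python
def indirect_mass_flow(w_comp, q_amb, h_r_suc, h_r_dis, t_suc, t_dis, c_g, cp_oil):
    """Working-fluid mass flow rate [kg/s] from the steady-state compressor
    energy balance, Eq. (1.10), with an oil-corrected enthalpy difference as
    in Eq. (1.4). Powers in W, enthalpies in J/kg, temperatures in degC,
    cp_oil in J/(kg.K)."""
    dh = (1 - c_g) * (h_r_dis - h_r_suc) + c_g * cp_oil * (t_dis - t_suc)
    return (w_comp - q_amb) / dh

# Illustrative point: 1.5 kW electrical input, 150 W shell losses, 2% oil.
m_dot = indirect_mass_flow(1500.0, 150.0, 420e3, 460e3, 10.0, 80.0, 0.02, 1900.0)
print(round(m_dot, 4))  # about 0.0323 kg/s
```

The sketch makes visible why the heat loss term matters: an error in q_amb propagates directly into the numerator and, hence, into the mass flow rate and all derived performance figures.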
Various combinations and arrangements of these components and expansion valves can form novel injection refrigerant cycles. However, the main principle of injection cycles is that the refrigerant flow is separated after exiting the condenser and the bypass flow passes through the mentioned components in order to be injected into the multistage compressor unit [START_REF] Singer | On-field measurement method of vapor injection heat pump system[END_REF].

Flash Tank HP (FT) - In such cycles, the working fluid exiting the condenser in a liquid state passes through an expansion valve. After this, the flash tank separates the fluid in vapor state from the fluid in liquid state. The former is injected into the compressor, serving as a cooling agent, and the latter continues further expansion before entering the evaporator in order to complete the cycle. Such a maneuver allows lower compressor discharge temperatures, thus improving the COP and heating capacity. The P-h diagram and flow schematic are depicted in Figure 1.6. A steady-state equation system is used to calculate the mass flow rates. The system of equations consists of the compressor mass balance, Eq. (1.11), the FT energy balance, Eq. (1.12), and the compressor energy balance, Eq. (1.13):

$\dot{m}_1 + \dot{m}_9 = \dot{m}_4$  (1.11)

$\dot{m}_4 h_6 = \dot{m}_9 h_9 + \dot{m}_1 h_7$  (1.12)

$\dot{W}_{comp} = \dot{Q}_{amb} + \dot{m}_4 h_4 - \dot{m}_9 h_9 - \dot{m}_1 h_1$  (1.13)

where $h$ is the specific enthalpy of the working fluid (refrigerant and oil), calculated as follows:

$h = (1 - C_g)h_r + C_g h_o$  (1.14)

From the three-equation system the mass flow rates are resolved, and $\dot{m}_4$ is used to calculate the condensation heat (heating mode):

$\dot{Q}_{cond} = \dot{m}_4(h_4 - h_5)$  (1.15)

Internal Heat Exchanger HP (IHX) [START_REF] Singer | On-field measurement method of vapor injection heat pump system[END_REF] - In such cycles, the working fluid is divided in two streams after exiting the condenser and the lesser stream is expanded to a two-phase state. The injected fluid is in a vaporized state due to a heat exchange that occurs between the subcooled and two-phase working fluids.
The P-h diagram and flow schematic are illustrated in Figure 1.7. One of the main advantages of the IHX system is that the injection of the cooled refrigerant vapor allows the working fluid to be compressed to a desired pressure while consuming less electrical energy than a compressor in a simple cycle would under the same operating conditions. The steady-state three-equation system consists of the compressor mass balance, Eq. (1.16), the IHX energy balance, Eq. (1.17), and the compressor energy balance, Eq. (1.18):

$\dot{m}_1 + \dot{m}_9 = \dot{m}_4$  (1.16)

$\dot{m}_1(h_5 - h_6) = \dot{m}_9(h_9 - h_8)$  (1.17)

$\dot{W}_{comp} = \dot{Q}_{amb} + \dot{m}_4 h_4 - \dot{m}_9 h_9 - \dot{m}_1 h_1$  (1.18)

Similar to the case of the FT vapor injection system, the thermal capacity is calculated from Eq. (1.15) utilizing the total mass flow rate, $\dot{m}_4$, passing through the condenser.

Figure 1.7 Refrigeration cycle P-h diagram (a) and schematic with required measurements (b) of IHX injection cycles

The proposed method is incapable of determining the refrigerant enthalpy when the fluid is in a two-phase state, due to the unknown vapor quality. The fluid is certainly in vapor phase at the compressor discharge (point 4, Figure 1.7). However, it can be diphasic at the condenser discharge (point 5, Figure 1.7), the compressor inlet (point 1, Figure 1.7), and the injection port (point 9, Figure 1.7). In such cases, a simplification is introduced: the method supposes that the fluid is in a saturated state and, thus, the enthalpy determination is possible. During real-time measurements, if the uncertainty of the method is greater than the superheat and subcooling at the compressor inlet (point 1, Figure 1.7) and condenser outlet (point 5, Figure 1.7), respectively, then the refrigerant is assumed to be in a two-phase state and the enthalpy is determined assuming that the fluid is in a saturated state.
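Both three-equation systems (Eqs (1.11)-(1.13) for the FT cycle, Eqs (1.16)-(1.18) for the IHX cycle) reduce to the same two-step solve: the tank/exchanger balance fixes the injection ratio $\dot{m}_9/\dot{m}_1$, and the compressor energy balance then fixes the scale. A sketch with hypothetical enthalpies (in J/kg, keyed by the point numbers of Figures 1.6/1.7):

```python
def injection_mass_flows(h, w_comp, q_amb, cycle):
    """Solve for (m1, m9, m4) [kg/s] in a vapor-injection cycle.
    h: dict of specific working-fluid enthalpies [J/kg] keyed by point number;
    w_comp, q_amb in W; cycle: 'FT' or 'IHX'."""
    if cycle == "FT":
        r = (h[6] - h[7]) / (h[9] - h[6])   # m9/m1 from Eqs (1.11)-(1.12)
    else:
        r = (h[5] - h[6]) / (h[9] - h[8])   # m9/m1 from Eq (1.17)
    # Compressor balance: w_comp - q_amb = m4*h4 - m9*h9 - m1*h1, m4 = (1+r)*m1
    m1 = (w_comp - q_amb) / ((1 + r) * h[4] - r * h[9] - h[1])
    m9 = r * m1
    return m1, m9, m1 + m9

h_ft = {1: 430e3, 4: 470e3, 6: 280e3, 7: 240e3, 9: 420e3}
m1, m9, m4 = injection_mass_flows(h_ft, 1500.0, 150.0, "FT")
# Residual of the FT energy balance, Eq. (1.12), should be ~0:
print(abs(m4 * h_ft[6] - m9 * h_ft[9] - m1 * h_ft[7]))
```

The algebraic reduction avoids a generic linear solver on-field; the same function covers both cycle variants because they share the compressor balance.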
However, it must be noted that two-phase phenomena at the compressor inlet and condenser outlet are either rare or brief in duration. Hence, the effect of this simplification on the seasonal performance of an appropriately designed functional HP system can be considered negligible. A two-phase condition at the compressor injection, on the other hand, is non-negligible, as it may result from system control.

Influence of compressor heat losses

The method was experimentally validated in a basic cycle air-to-air HP system in [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] in various operating conditions. The refrigerant fluid used in the experimental setup was R410A, the compressor type was rotary, and the method was compared to an intrusive reference method, developed and validated in [START_REF] Tran | Refrigerant-based measurement method of heat pump seasonal performances[END_REF], where a Coriolis mass flow meter and intrusive pressure sensors were used. The deviation in the refrigerant pressure determined from saturation temperature measurements is 2.7% in comparison with a direct low pressure measurement. The deviation between the indirectly measured mass flow rate, Eq. (1.10), and the values measured using a mass flow meter was below 4%. The method, extended to injection cycles, was also validated in an IHX HP system using experimentally obtained data in several operating conditions by [START_REF] Goossens | Experimental validation of on-field measurement method for a heat pump system with internal heat exchanger[END_REF]. A scroll compressor and R407C refrigerant were used during the tests. Since there is no reliable performance measurement method available for comparison in air-to-air HPs, the method was validated in an air-to-water HP operating in heating mode in laboratory conditions, where the water enthalpy method is used as a reference method.
The maximum deviation between the COP values obtained from the performance assessment method extended to IHX injection cycles, described in Subsection 1.2.1 (IHX HPs), and the reference COP values obtained from water side measurements was below 8%. [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] showed that the influences of indirect pressure measurements are insignificant when compared to compressor heat losses and oil mass fraction, even if the fluid is strongly zeotropic. On the other hand, compressor heat losses influence significantly the overall accuracy of the method. The presented method integrates a simplified model of compressor heat transfer towards the ambient air. The compressor heat loss expression is composed of two parts, heat transfer by convection and radiation, as follows:

$\dot{Q}_{amb} = h_c A(T_{shell} - T_{amb}) + \varepsilon \sigma A(T_{shell}^4 - T_{amb}^4)$  (1.19)

where $h_c$ is the convective heat transfer coefficient, $A$ is the shell surface area, $\varepsilon$ is the shell emissivity, and $\sigma$ is the Stefan-Boltzmann constant. After performing an uncertainty analysis, it was determined that the compressor heat loss terms are responsible for a relatively large portion of the relative uncertainty of the heating capacity: the sensitivity index of compressor heat losses was 40%. Consequently, a more comprehensive and reliable method for evaluating compressor heat losses on-field must be established to decrease the uncertainty of the performance assessment method.

1.3 Fault detection and diagnostic using mass flow rate measurements

The performance assessment method determines indirectly the refrigerant mass flow rate. This information not only supplies valuable real-time performance data but also provides means to integrate diagnostic features. In other words, the performance assessment method can be coupled with an appropriate fault detection and diagnostic (FDD) method. A more elaborate analysis of FDD methods is presented in Appendix A.
In addition, the appendix presents in more detail a promising FDD method, described in the work of [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF], that can be coupled with the performance assessment method.

Main faults in the heat pump unit

On-field efficiency degradation can result from faults occurring in the heating/cooling system. These faults are classified in two categories: those occurring in the thermodynamic cycle of the HP unit, the refrigeration cycle, and those occurring on a system level, such as installation and control faults [START_REF] Madani | The common and costly faults in heat pump systems[END_REF]. Refrigeration cycle faults are the most difficult and expensive to diagnose, according to [START_REF] Li | Decoupling features for diagnosis of reversing and check valve faults in heat pumps[END_REF]. These faults complicate the maintenance of the machine and promote unnecessary costs. According to case studies done by [START_REF] Downey | What can 13,000 air conditioners tell us?[END_REF] and [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF], more than 50% of on-field packaged air conditioning systems were improperly charged due to improper commissioning or refrigerant leakage. Faults that are common in air conditioners (ACs) are common in HPs as well, since ACs are practically HPs operating in cooling-only mode. [START_REF] Cowan | Review of Recent Commerical Rooftop Unit Field Studies in the Pacific Northwest and California[END_REF] estimated that 5% to 11% of energy costs can be reduced with a proper refrigerant charge.
It is, therefore, essential to detect and distinguish different faults during the installation and, specifically, operating phases, in order to ensure a proper maintenance process of the machine and, thus, its high performance level, durability, and potential to reduce greenhouse gas emissions during the whole life cycle. A number of researchers, such as [START_REF] Madani | A comprehensive study on the importnt faults in heat pump system during the warranty period[END_REF], have conducted extensive literature reviews on the most frequent and costliest faults. In addition, an abundant amount of information regarding these faults is available on HP troubleshooting forums. Faults presented in this section occur in the HP refrigeration cycle only, not the whole heating/cooling system. Table 1.1 groups the most frequent individual faults in broader categories (grouped faults) and presents the consequences (operating issues) of each fault group on the machine performance. The list is not exhaustive; only the most typical issues are presented. Six fault categories were identified as the most common ones: control/electronics, exchanger fouling, leaking valves, refrigerant issues, superheat issues, and defrosting issues. It must be pointed out that some faults may be interconnected. For example, a defrosting issue can be a result of a poor defrost activation setting, or be a consequence of an electrical issue and, thus, be classified as a control/electronics fault. Sensor issues, specifically temperature sensor issues, tend to occur frequently in both air-to-air and air-to-water HPs, provoking a number of operating faults. Outdoor temperature sensors are damaged more easily, since they are exposed to outside air, where moisture can enter and freeze inside the sensor. Also, during the defrosting period there is a rapid temperature change, which can lead to thermal material stresses and, thus, damage the temperature sensor.
Another reason for faulty temperature sensors is bad wiring and loose contacts. Faulty pressure switches occur commonly in air-to-water HPs. Pressure switches may falsely signal that the pressure is too low due to dirt or dust accumulation, or due to high vibrations at the compressor discharge line. Typically, exchanger fouling is a result of dirt or frost accumulation on the indoor/outdoor coil or filter, which in turn provokes cooling/heating issues. As the exchanger fouls, the air free flow area and/or the coil surface for effective heat exchange is decreased, which increases the pressure drop of the air passing through the exchanger and thereby decreases the air flow. Hence, the heat exchange between the air and the refrigerant deteriorates. In addition, frost or dirt can impact the heat transfer coefficient directly by providing an insulating resistance. In heating mode, the coil temperature of the outdoor exchanger (evaporator) must be cooler than the ambient temperature for heat to be transferred from the ambient air to the coil. If the ambient temperature is below or close to 0 °C, then the coil temperature will most certainly be below 0 °C. Since water freezes at 0 °C at ambient pressure, water droplets in the air will start to freeze on the heat exchanger coil, causing frost formation. Typically, activating the defrosting cycle removes the frost formed on the evaporator. There are various defrosting triggering methods, such as setting fixed intervals for defrost on/off cycles, or activating the defrosting cycle after measuring the temperature difference between the ambient and coil temperatures using a thermostat. Faults associated with the activation of the defrosting cycle can trigger heating/cooling issues, since they lead to frost accumulation on the outdoor coil. Compressor valve/reversing valve leakage can contribute to cooling/heating issues as well.
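A temperature-difference trigger of the kind mentioned above can be sketched as follows; the threshold and minimum-runtime values are illustrative placeholders, not manufacturer settings:

```python
def should_defrost(t_amb, t_coil, runtime_s, dt_threshold=8.0, min_runtime_s=1800):
    """Trigger a defrost cycle when the outdoor coil is much colder than
    ambient (a symptom of insulating frost build-up) and the compressor has
    run long enough since the last defrost. Temperatures in degC."""
    return runtime_s >= min_runtime_s and (t_amb - t_coil) >= dt_threshold

print(should_defrost(2.0, -8.0, runtime_s=3600))  # True: 10 K difference
print(should_defrost(2.0, -3.0, runtime_s=3600))  # False: only 5 K
```

A mistuned threshold or a faulty coil temperature sensor in such logic is exactly how the defrosting faults described above arise: the cycle never triggers and frost keeps accumulating.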
Compressor valve leakage creates backflows of high pressure refrigerant into the low pressure side of the heat pump system. This in turn provokes losses in the volumetric efficiency and refrigerant mass flow rate. Refrigerant undercharge/overcharge can originate from an incorrect initial refrigerant charge, and refrigerant undercharge can also originate from leaking valves. These faults, as well as impurities in the refrigerant, such as the presence of a non-condensable gas, can deteriorate the thermal capacity of the unit. Superheating issues also contribute to the deterioration of the unit thermal capacity and originate from a faulty, blocked, or poorly adjusted expansion valve.

Fault detection and diagnostic method

Current fault detection and diagnostic methods are typically based upon pressure and temperature measurements, which are then compared to the values provided by manufacturers in fault-free conditions, referred to as reference values. Any deviation of the measured value from the reference value indicates a fault. Identifying and distinguishing faults with such a method is challenging. In addition, the method requires a physical intervention, and the fault has already had time to impact the performance of the machine. Ideally, the purpose of FDD methods is to track in real-time the evolution of fault indicating parameters to detect specific faults before they have an impact on the COP. Fault detection and diagnosis methods range from detailed physical models to simple polynomial black-box models. Different models show advantages and disadvantages in terms of applicability of on-field implementation, cost, accuracy, and data required. Ideally, FDD methods utilize low cost sensors, such as temperature sensors, and preferably sensors that are already integrated in the machine, or as few supplementary sensors as possible, in order to keep the hardware costs at a minimum.
An FDD method must guarantee a compromise between hardware cost and diagnosis quality in terms of accuracy, applicability to a wide range of different HP types and sizes, and low computational effort for on-field calculations. As mentioned above, real-time measurements of the mass flow rate via component energy/mass balances, calculated using surface-mounted temperature and indirect pressure measurements, provide valuable on-field information concerning not only performances, but also machine diagnostics. This information, coupled with some complementary measurements, constitutes a practical and promising FDD tool, termed the decoupling-feature diagnostic method, as presented in the work of [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF]. [START_REF] Li | Decoupling features for diagnosis of reversing and check valve faults in heat pumps[END_REF] extended the method to HPs by including the detection of reversing valve and check valve leakage. In the method of [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF], each of the fault indicating physical parameters, i.e. decoupling features, is uniquely influenced by individual faults. Decoupling features are also insensitive to variations in ambient conditions. Any deviation of these feature values from their reference values indicates a fault in the system. Reference values are evaluated in fault-free conditions, for example, from compressor and fan maps, typically provided by equipment manufacturers. Decoupling features are determined using physical and virtual sensors. The goal of employing virtual sensors is to limit the use of potentially intrusive system measurements. The compressor energy balance, used for the indirect evaluation of the refrigerant mass flow rate, Eq. (1.9), is an example of a virtual sensor.
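The deviation test at the core of the decoupling-feature approach can be sketched as a threshold comparison; the feature names, reference values, and thresholds below are illustrative, not taken from Li's paper:

```python
def flag_faults(features, references, thresholds):
    """Flag every decoupling feature whose measured value deviates from its
    fault-free reference by more than its threshold. Since each feature is
    uniquely influenced by one fault, a flag points to that fault."""
    return [name for name, value in features.items()
            if abs(value - references[name]) > thresholds[name]]

measured  = {"subcooling_K": 1.5, "superheat_K": 6.0, "airflow_m3h": 950.0}
reference = {"subcooling_K": 5.0, "superheat_K": 6.5, "airflow_m3h": 1000.0}
threshold = {"subcooling_K": 2.0, "superheat_K": 2.0, "airflow_m3h": 150.0}
print(flag_faults(measured, reference, threshold))  # ['subcooling_K']
```

In practice the thresholds must absorb the uncertainty of the virtual sensors supplying the feature values, otherwise measurement noise would be reported as faults.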
Another example of a virtual sensor is energy balances with refrigerant and air-side measurements used to estimate the exchanger air flow rate. Saturation pressures are estimated with the aid of property relations and surface temperature sensors mounted at locations where the refrigerant is in a saturated state. Hence, virtual sensors are used to evaluate condensation and evaporation pressures. The most common faults occurring in residential HPs, such as charge faults and heat exchanger fouling, can be detected with the embarked FDD method solely with the aid of non-intrusive low-cost sensors, particularly temperature sensors. The following list is an example of faults that can be analyzed with the decoupling feature-based diagnostic technique:
- refrigerant overcharge,
- refrigerant undercharge,
- evaporator fouling,
- condenser fouling.

To sum up, the benefits of the method include on-field applicability, low calculation effort, low implementation cost, and its capacity to adapt to numerous vapor compression cycles, since the fault detection is based on the physical phenomena of thermodynamic cycles. This particular FDD method, coupled with the performance assessment method, could be experimentally validated using air-to-air or air-to-water HPs. A more detailed presentation of the method and of how it can be used to analyze the most common faults in the HP unit (refrigerant overcharge/undercharge, exchanger fouling) can be found in Appendix A, which can also serve as groundwork for the experimental validation of the FDD method.

Conclusions

Currently, manufacturers supply HP performance data obtained in standardized and controlled laboratory conditions. This data might not be representative of the performance data obtained on-field, due to installation quality, system design, climatic conditions, and faults occurring in the system.
For this reason, estimating the performances of residential HPs, in terms of heating capacity and COP values, on-field and in real-time is of great importance when optimizing energy consumption. Determining the performances of air-to-air HPs is particularly problematic, given that accurately measuring the enthalpy and, specifically, the mass flow rate of air is challenging. [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] adapted a method that is based on refrigerant fluid measurements and component energy/mass balances to estimate the performances of such HPs on-field in real-time. Non-intrusive sensors, such as surface temperature sensors, are used to estimate the pressure and refrigerant mass flow rate in different types of heat pump systems, including air-to-air. The method was then extended to cover more complex cycles, such as IHX and FT injection cycles. The method integrates the evaluation of compressor heat losses, i.e. the heat transfer from the compressor shell towards the ambient air. [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] determined that the sensitivity index of compressor heat losses in the relative uncertainty of the heating capacity calculated by the method is 40%. This highlights the importance of developing a more comprehensive and reliable method for evaluating compressor heat losses on-field in order to reduce the uncertainty of the performance assessment method. The compressor heat loss model already integrated in the method assumes that the compressor shell temperature is uniform and equal to the refrigerant temperature at discharge. This is specifically not the case in scroll compressors, where the compression chamber is typically at the top of the compressor.
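The simplified shell-to-ambient heat loss model discussed above combines a convective and a radiative term, with a single uniform shell temperature. A sketch, assuming the standard grey-body radiation form and illustrative coefficient values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W/(m2.K4)]

def shell_heat_loss(t_shell, t_amb, area, h_c, emissivity):
    """Shell-to-ambient heat loss [W]: convection plus grey-body radiation.
    Temperatures in kelvin, area in m2, h_c in W/(m2.K)."""
    convection = h_c * area * (t_shell - t_amb)
    radiation = emissivity * SIGMA * area * (t_shell**4 - t_amb**4)
    return convection + radiation

# Illustrative values: 80 degC shell, 20 degC ambient, 0.3 m2 shell area.
q_amb = shell_heat_loss(353.15, 293.15, 0.3, 8.0, 0.9)
print(round(q_amb))  # roughly 270 W
```

With these illustrative numbers the radiative term is of the same order as the convective one, which is why both the shell temperature estimate and the convective correlation choice (Chapters 2 and 3) matter for the accuracy of the loss evaluation.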
In order to determine a more comprehensive heat loss evaluation method, the thermal behavior of rotary and scroll compressor shells in various operating conditions must be investigated, and an appropriate estimation method of $T_{shell}$ must be selected. Numerical models developed to serve this purpose are presented in Chapter 2. Furthermore, a literature review on the available convective heat transfer correlations must be performed. The correlations best suited to the physical nature of the heat transfer and the geometry are presented in Chapter 3. The performance assessment method determines the refrigerant mass flow rate, $\dot{m}$, in real-time. This information can be integrated in a physical FDD model presented in the work of [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF]. The performance assessment method can be used in this context as the virtual sensor to measure the refrigerant mass flow rate along with the evaporation and condensation pressures, information that is necessary in the particular FDD method. One suggestion for future development, elaborated in the Conclusions and perspectives section of this thesis, is experimentally testing the most common faults occurring in HPs in order to determine the validity of the FDD method.

CHAPTER 2 COMPRESSOR HEAT TRANSFER MODEL

In order to improve the on-field evaluation of compressor heat losses, the thermal behavior of the shell of two compressor types (scroll and rotary) is investigated. The exterior thermal profile of the compressor depends on the heat distribution inside it. For this reason, the interior of the compressors must be modeled. Hybrid modeling, which combines integral and differential formulations, was chosen. This type of modeling is more flexible in terms of applicability to different compressor types and is less costly in computation time than differential modeling.
Moreover, the hybrid approach also provides more accurate results than an integral approach. Two hybrid models were developed, for the scroll and rotary compressor types. The calculation procedure in the models is divided into three fundamental steps: thermodynamic cycle analysis, detailed thermophysical flow analysis, and compressor thermal network analysis. The results obtained with these models are presented in Chapter 3. These results are then used in Chapter 4 to determine which zones contribute the most to heat losses and which zone of the compressor shell is the most representative for determining the mean temperature of the shell, or of a shell zone when several zones are considered. State-of-the-art of compressor heat transfer The performance assessment method described in the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] integrates a simplified model of compressor heat losses, Eq. (1.19), which influences significantly the overall accuracy of the method. As mentioned earlier, the sensitivity index of this term in the heating capacity uncertainty predicted by the internal refrigerant method is 40 %. Thus, a more precise on-field evaluation of compressor heat losses is required in order to improve the performance assessment method. Examining the thermal profiles of rotary and scroll exterior shells in various operating conditions provides valuable information: the number and location of surface temperature sensors, as well as the calculation methodology used to measure heat losses on-field, can be identified. The exterior thermal profile reflects the thermal behavior of the interior components. Thus, the overall heat transfer within the compressor domain must be investigated. The computational domain must include the internal components that are critical from the heat transfer point of view.
For this purpose, a numerical model, described in Section 2.2, was developed. The model predicts thermal profiles of the exterior shell of scroll and rotary compressors in various operating conditions. Modeling heat transfer within the compressor domain is a challenging task. The complexity of compressor geometry hinders simplifications and gives rise to various complex flow and heat transfer phenomena. However, modeling the thermal behavior of compressors tends to be of interest, since interior temperature and pressure distributions at various operating conditions provide valuable information when considering compressor performances, component and material durability, and heat transfer towards the ambient air. Experimental and numerical approaches are typically used for this purpose. A short literature review in the following subsections highlights the state-of-the-art of compressor heat transfer modeling. The purpose of this literature review is to help determine the most adequate numerical modeling technique required to establish the thermal behavior of scroll and rotary compressor exterior shells. Correlations based on experimental measurements Experimental techniques, where thermocouples are placed inside and/or outside the compressor, are one of the most straightforward and conventional ways to investigate the thermal behavior of compressors [START_REF] Ribas | Thermal analysis of reciprocating compressors -critical review[END_REF]. This approach is intrusive, and the location of the sensors has to be considered very carefully for the flow dynamics and heat transfer inside the compressor to remain unaffected. In addition to thermocouples, heat flux sensors and infrared cameras can be used to investigate the thermal behavior of compressors.
Experimental data is used to identify the most significant heat transfer zones and mechanisms between component surfaces and the surrounding fluid by locating hot and cold spots within the compressor [START_REF] Jang | Experimental investigation on convective heat transfer mechanism in a scroll compressor[END_REF]. Correlations for compressor heat losses and component thermal conductances are derived from the acquired experimental data. Derived heat transfer correlations are typically functions of variables that can be measured on-field, for instance, the condensation temperature. Correlations can also be used in some numerical models [START_REF] Diniz | A lumped-parameter thermal model for scroll compressors including the solution for the temperature distribution along the scroll wraps[END_REF]. Some correlations are developed solely with mathematical data analysis methods and have no physical basis. Others are a combination of mathematical and physical analyses. However, the accuracy and overall validity of the derived correlations is limited or unknown outside the calibration range, since they are based on data obtained from a limited number of operating conditions. Furthermore, such models cannot be adjusted by changing a coefficient or parameter to account for, e.g., compressor layout modifications or a change of refrigerant type [START_REF] Kim | Thermal performance analysis of small hermetic refrigeration and air-conditioning compressors[END_REF]. More accurate experimental models based on a wider range of operating conditions can guarantee the relevancy and sufficient accuracy of the model in various operating modes (different refrigerants, compressor types, rotation speeds, condensation temperatures, etc.). However, such models typically involve too many variables that are hard to measure on-field, such as the mechanical characteristics of the compressor, i.e.
volumetric and isentropic efficiencies, etc., required to account for modifications in operating modes [START_REF] Duprez | Modeling of scroll compressorimprovements[END_REF]. Numerical models Numerical models consist of three different approaches: integral, differential, and hybrid. The choice of modeling approach depends on the desired level of complexity, versatility, and application of the model. Integral models Integral models consist of building a system of steady or unsteady state energy balance equations (integral formulations) for each control surface. They are often referred to as lumped conductance or thermal network (TNW) models. Control surfaces are usually limited by the geometrical boundaries of the compressor solid components for convenience reasons. With the appropriate energy balances and heat transfer coefficients the compressor components are thermally interconnected, thus forming a network. Padhy (1992), [START_REF] Sim | A study on heat transfer and performance analysis of hermetic reciprocating compressors for refrigerators[END_REF] and [START_REF] Ooi | Heat transfer study of a hermetic refrigeration compressor[END_REF] developed compressor heat transfer models by deriving heat transfer coefficients using correlations already available in the literature. In this case, the model does not rely on experimental data and can be applied to a wide range of operating conditions. Nevertheless, the accuracy of the model suffers, since there is only a limited number of heat transfer correlations available in the literature and some correlations are poorly adapted to the complex flow patterns and geometries of internal components. Also, a good understanding of the main heat transfer mechanism in every single element is necessary to employ appropriate heat transfer correlations.
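The thermal network idea can be illustrated with a minimal sketch (in Python for illustration; the thesis implements its network in MATLAB): node temperatures follow from steady-state energy balances between nodes linked by conductances. The network, conductances, and heat source below are illustrative toy values, not compressor data.

```python
def solve_thermal_network(G, Q, T_fixed, n_iter=500):
    """Steady-state node temperatures of a lumped thermal network.

    G: dict {(i, j): conductance in W/K} for symmetric links,
    Q: dict {node: heat source in W} for free nodes,
    T_fixed: dict {node: temperature} for boundary nodes (e.g. ambient).
    Energy balance per free node i: sum_j G_ij*(T_j - T_i) + Q_i = 0,
    solved here with simple Gauss-Seidel sweeps.
    """
    adj = {}
    for (i, j), g in G.items():
        adj.setdefault(i, []).append((j, g))
        adj.setdefault(j, []).append((i, g))
    T = dict(T_fixed)
    for i in Q:
        T.setdefault(i, 0.0)
    for _ in range(n_iter):
        for i in Q:  # update only free (non-boundary) nodes
            num = Q[i] + sum(g * T[j] for j, g in adj[i])
            den = sum(g for _, g in adj[i])
            T[i] = num / den
    return T

# Toy network: motor node heated by 100 W, linked to the shell
# (5 W/K); shell linked to ambient air at 25 degC (2 W/K).
T = solve_thermal_network(
    G={("motor", "shell"): 5.0, ("shell", "amb"): 2.0},
    Q={"motor": 100.0, "shell": 0.0},
    T_fixed={"amb": 25.0},
)
```

In this toy case all 100 W must leave through the shell-to-ambient link, so the shell settles at 75 °C and the motor at 95 °C; in a real integral model the quality of the assigned conductances dictates the accuracy, which is precisely the weakness discussed above.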
Another approach is to use experimental data to calibrate thermal conductances, as in the model developed by [START_REF] Todesca | Thermal analysis in reciprocating hermetic compressors[END_REF]. Yet, if the operating conditions deviate significantly from the reference conditions used to calibrate the model, or the compressor internal component layout is modified, the experimentally derived thermal conductances need to be corrected, which makes the model less flexible. The main advantage of the integral method is its relative simplicity and its flexibility to adapt to different compressor types and internal component layouts. The main disadvantage of this modeling approach is its poor accuracy, originating from the need to assign heat transfer coefficients, the difficulty of modeling multidimensional heat conduction between solid components, and the potential necessity for extensive iterations. Differential models Differential models solve the governing partial differential flow equations for each control volume or element. Flow equations consist of mass, momentum, and energy equations, and typically require the aid of a computational fluid dynamics (CFD) program for their resolution. Differential models can predict flow and thermal profiles in a compressor design.
In the works of [START_REF] Chikurde | Thermal mapping of hermetically sealed compressors using computational fluid dynamics[END_REF], [START_REF] Birari | Use of CFD in design and development of R404a reciprocating compressor[END_REF], [START_REF] Pereira | A numerical study of convective heat transfer in the compression chambers of scroll compressors[END_REF], and [START_REF] Raja | A numerical model for thermal mapping in a hermetically sealed reciprocating compressor[END_REF] the steady-state thermal behavior of hermetic reciprocating and scroll compressors is simulated, where different compressor component zones are assigned as heat sources/sinks resulting from motor losses (electrical losses), frictional losses (mechanical losses), and heat losses due to a non-isentropic compression process (thermodynamic losses), using a commercially available CFD code (ANSYS Fluent). The models are validated with experimental results. The main advantage of such models is that the use of heat transfer coefficients or experimentally calibrated thermal conductances can be avoided. For this reason, practically any internal component layout can be modeled. Such models provide a very accurate and detailed thermal analysis for a variety of operating conditions. The disadvantages of such models are that the developed model is usually compressor-design specific, and the computational processing time is quite high and requires powerful computers. In addition, developing differential models for entire compressor domains is time-consuming, and some complex physical phenomena inside the compressor, such as the flow of lubricating oil, complicate significantly the modeling strategies.
Hybrid models Hybrid models resolve integral and differential formulations in a coupled manner: thermal network models are used to optimize simplified CFD models [START_REF] Diniz | A lumped-parameter thermal model for scroll compressors including the solution for the temperature distribution along the scroll wraps[END_REF]. Thus, hybrid models are a compromise between differential and integral models. Hybrid methods assign heat transfer coefficients, as in the case of integral models, and/or other necessary information as boundary conditions to represent the effects of flow dynamics on the heat transfer between solid and fluid interfaces. This eliminates the need to create complex, computationally expensive, and detailed differential models. For instance, modeling in detail the rotations and other movements of interior components and their effects on the flow and heat transfer can be avoided in the CFD geometry, mesh, and setup. In this case the differential formulations resolve the evolution of the flow inside the compressor and its effects on the component wall temperatures with relatively low computational costs and limited complexity. The hybrid modeling approach is typically customized, i.e. different variations of the modeling types and optimization techniques are possible, as long as both types of formulations, integral and differential, are integrated in the model.
Hybrid models presented in the works of [START_REF] Almbauer | 3-Dimensional Simulation for Obtaining the Heat Transfer Correlations of a Thermal Network Calculation for a Hermetic Reciprocating Compressor[END_REF], [START_REF] Sanvezzo | A Heat Transfer Model Combining Differential and Integral Formulations for Thermal Analysis of Reciprocating Compressors[END_REF], and [START_REF] Ribas | Thermal analysis of reciprocating compressors[END_REF] have a common approach: they model part of the domain, typically the solid components and conduction within them, using differential equations, and the fluid domain using integral formulations. The main advantage of hybrid models is that they tend to be more accurate than integral models while remaining relatively simplified and more flexible in terms of compressor design and applicability in comparison with differential models [START_REF] Ribas | Thermal analysis of reciprocating compressors -critical review[END_REF]. The model is partly resolved using differential equations and can, thus, resolve heat conduction in solid components. A disadvantage is that some hybrid models require experimental data to determine heat transfer coefficients. In this case, the accuracy of the model is restricted to a specific range of operating conditions, as in the hybrid model of [START_REF] Diniz | A lumped-parameter thermal model for scroll compressors including the solution for the temperature distribution along the scroll wraps[END_REF]. Heat transfer from the compressor shell towards the ambient air was not a primary interest of the hybrid models found in the literature. The focus of these works was to provide information on the distribution of temperature inside the compressor as well as to investigate gas superheat, i.e. heating of the gas as it passes by the hot motor when flowing from the inlet port to the compression chamber.
This observation concerns not only the aforementioned hybrid models but also the previously presented differential and integral models. The variation of the compressor exterior thermal profile and heat losses in various operating conditions was addressed in none of the presented studies. Numerical model of compressor heat transfer Thermal profiles of the compressor exterior shell must be investigated in various operating conditions in order to establish an improved on-field compressor heat loss evaluation method. The developed numerical model must cover two hermetic compressor technologies: rotary and scroll. The goal of the model is to avoid referring to a great deal of measurements that are costly, time-consuming, and require technical expertise. The developed model must possess sufficient accuracy with relatively low computational costs. For the reasons stated here, a hybrid model was selected as the most adequate approach. Hybrid models do not rely on complex and detailed CFD models, making them more robust in terms of applicability to different compressor types and component layouts, yet they are generally more accurate than integral models. Modeling approach The developed hybrid model comprises a 3D CFD model that solves the partial differential flow equations in order to perform a more elaborate flow and heat transfer analysis, supported by a code written in MATLAB, representing the thermodynamic compression cycle and a thermal network of the compressor domain in 2D. Integral formulations representing various heat transfer configurations of the internal component surfaces and the surrounding fluid in MATLAB are used to verify and validate the calculation results obtained from the CFD code. This helps maintain the differential part with low computational costs and a limited complexity level, and ensures compressor design flexibility. Thus, the two modeling approaches, integral and differential, are coupled.
The calculation procedure can be divided into three fundamental steps: thermodynamic cycle analysis, detailed thermophysical flow analysis, and compressor thermal network analysis, as depicted in Figure 2.3. The comparison of the 𝑄 ̇𝑐𝑜𝑚𝑝 values, i.e. the heat released by the internal components, calculated in the first and third steps of the model is used as the convergence criterion. In rotary compressors, the fluid enters from the inlet port (point 1, Figure 2.4 (b)) located at the bottom of the compressor and passes directly into the compression chamber. After undergoing the compression process, the gas passes through the discharge port (point 2, Figure 2.4 (b)) and through the narrow gap between the rotor and stator, potentially heating the solid components. Next, the gas fills the discharge plenum and, finally, exits the compressor from the outlet port (point 3, Figure 2.4 (b)). The model principle and calculation procedure are analogous for the scroll compressor. Thermodynamic cycle analysis The first part of the model analyzes the thermodynamic cycle in order to evaluate the boundary conditions, such as temperatures, inlet velocity, and heat sources, used in the integral and differential parts of the model, steps two and three, respectively. In this part the heat released by the compression process, 𝑄 ̇𝑐𝑜𝑚𝑝 , is calculated. This variable is used in the third step of the model to determine whether the model has converged. The net heat released by the internal components is: 𝑄 ̇𝑐𝑜𝑚𝑝 = 𝑄 ̇𝑠𝑢𝑐 -𝑄 ̇𝑛𝑜𝑛-𝑖𝑠 (2.1) where 𝑄 ̇𝑠𝑢𝑐 is the heat released by the motor, assumed to be equal to the superheat of the refrigerant gas passing from the compressor inlet (point 1, Figure 2.4 (a)) to the compression chamber (point 2, Figure 2.4 (b)), and 𝑄 ̇𝑛𝑜𝑛-𝑖𝑠 is the heat absorbed by the components from the fluid due to a non-isentropic compression process (thermodynamic losses).
Gas superheat is assumed to be equal to the electrical losses dissipated as heat from the motor, located at the bottom of the hermetic compressor (Figure 2.4 (a)), as depicted below: 𝑄 ̇𝑠𝑢𝑐 = (1 -𝜂 𝑚𝑜𝑡 )𝑊 ̇𝑒 (2.2) where 𝜂 𝑚𝑜𝑡 is the motor efficiency and 𝑊 ̇𝑒 is the electrical power input. In order to calculate 𝑄 ̇𝑛𝑜𝑛-𝑖𝑠 , the isentropic compression power must be known: 𝑊 ̇𝑖𝑠 = 𝑚̇(ℎ 𝑑𝑖𝑠,𝑖𝑠 -ℎ 𝑠𝑢𝑐 ) (2.3) where 𝑚̇ is the mass flow rate of the fluid, ℎ 𝑠𝑢𝑐 is the refrigerant enthalpy at the suction chamber cavity, and ℎ 𝑑𝑖𝑠,𝑖𝑠 is the discharge enthalpy of an isentropic compression determined from: ℎ 𝑑𝑖𝑠,𝑖𝑠 = ℎ(𝑠(𝑇 𝑠𝑢𝑐 , 𝑃 𝑠𝑢𝑐 ), 𝑃 𝑑𝑖𝑠 ) (2.4) where 𝑃 𝑑𝑖𝑠 is the discharge pressure and 𝑠(𝑇 𝑠𝑢𝑐 , 𝑃 𝑠𝑢𝑐 ) is the fluid entropy at the suction chamber cavity. Consequently, 𝑄 ̇𝑛𝑜𝑛-𝑖𝑠 is obtained from: 𝑄 ̇𝑛𝑜𝑛-𝑖𝑠 = 𝜂 𝑚𝑜𝑡 𝑊 ̇𝑒 -𝑊 ̇𝑖𝑠 (2.5) The refrigerant enthalpy at the suction cavity of the compression chamber, ℎ 𝑠𝑢𝑐 , is calculated from the suction heat and inlet conditions as follows: ℎ 𝑠𝑢𝑐 = ℎ 𝑖𝑛 + 𝑄 ̇𝑠𝑢𝑐 /𝑚̇ (2.6) Detailed thermophysical flow analysis The second step of the model is executed in ANSYS Fluent, a CFD code based on the finite control-volume technique to numerically solve a set of governing relations of fluid in motion. The output of this part of the model consists of the refrigerant thermal and velocity profiles, as well as the solid zone temperatures. This information is subsequently transmitted to the final step of the model in order to perform a TNW analysis that verifies the validity of the CFD simulation results. The laws of fluid mechanics are governed by the conservation equations of mass, momentum, and energy. These equations, otherwise called flow equations, constitute a system of partial differential equations (PDEs) in space or space-time, where second order is the highest space derivative.
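As a minimal numerical sketch of Eqs. (2.1)-(2.6) (written in Python for illustration; the thesis implements this step in MATLAB), with the enthalpies `h_in` and `h_dis_is` passed in as illustrative numbers instead of being evaluated from a refrigerant property library:

```python
def thermo_cycle_step(W_e, eta_mot, m_dot, h_in, h_dis_is):
    """Heat released by internal components, Eqs. (2.1)-(2.6).

    W_e: electrical power input, W; eta_mot: motor efficiency;
    m_dot: refrigerant mass flow rate, kg/s; enthalpies in J/kg.
    h_dis_is would normally be evaluated from a property library
    at (s(T_suc, P_suc), P_dis); here it is a given number.
    """
    Q_suc = (1.0 - eta_mot) * W_e       # Eq. (2.2): motor losses
    h_suc = h_in + Q_suc / m_dot        # Eq. (2.6): suction enthalpy
    W_is = m_dot * (h_dis_is - h_suc)   # Eq. (2.3): isentropic power
    Q_non_is = eta_mot * W_e - W_is     # Eq. (2.5)
    Q_comp = Q_suc - Q_non_is           # Eq. (2.1)
    return {"Q_suc": Q_suc, "h_suc": h_suc, "W_is": W_is,
            "Q_non_is": Q_non_is, "Q_comp": Q_comp}

# Illustrative operating point: 2 kW input, 90 % motor efficiency,
# 0.05 kg/s, h_in = 420 kJ/kg, isentropic discharge 458 kJ/kg
res = thermo_cycle_step(2000.0, 0.90, 0.05, 420e3, 458e3)
```

With these numbers, 𝑄 ̇𝑠𝑢𝑐 = 200 W, 𝑊 ̇𝑖𝑠 = 1700 W, 𝑄 ̇𝑛𝑜𝑛-𝑖𝑠 = 100 W, so 𝑄 ̇𝑐𝑜𝑚𝑝 = 100 W is the value carried to the convergence check in step three.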
A set of equations, called the Navier-Stokes (N-S) equations, is an example of flow equations with a specific degree of approximation, yet providing the most general flow description. These equations represent how the velocity, pressure, temperature, and density are related in a moving fluid and can be applied to linear Newtonian viscous fluids. The unsteady N-S equations consist of the time-dependent conservation of mass, Eq. (2.7), momentum, Eq. (2.8), and energy, Eq. (2.9): 𝜕𝜌 𝜕𝑡 + ∇ ⃗ ⃗ • (𝜌𝑣 ) = 0 (2.7) 𝜕𝜌𝑣 𝜕𝑡 + ∇ ⃗ ⃗ • (𝜌𝑣 ⊗ 𝑣 + 𝑃𝐼 ̿ -𝜏̿ ) = 𝜌𝑓 𝑒 ⃗⃗⃗ (2.8) 𝜕𝜌𝐸 𝜕𝑡 + ∇ ⃗ ⃗ • (𝜌𝑣 𝐻 -𝑘∇ ⃗ ⃗ 𝑇 -𝜏̿ • 𝑣 ) = 𝑊 𝑓 + 𝑞 𝐻 (2.9) where 𝜌 is the fluid density, 𝑣 (𝑢, 𝑣, 𝑤) is the velocity vector with u, v, w components, 𝜏̿ is the stress tensor, 𝑃 is the fluid pressure, 𝐼 ̿ is the unit tensor, 𝑓 𝑒 ⃗⃗⃗ is the external force vector, 𝑘 is the thermal conductivity, 𝐸 is the total energy, 𝐻 is the stagnation enthalpy, 𝑊 𝑓 is the work of external forces, and 𝑞 𝐻 is the external heat source term [START_REF] Hirsch | Numerical Computation of Internal & External Flows[END_REF]. The Navier-Stokes equations are extensions of the Euler equations, which are used to describe inviscid flows. The relations presented above, Eq. (2.7)-(2.9), are fairly general and need to satisfy only a few restrictive assumptions: the fluid forms a mathematical continuum; the particles are in thermodynamic equilibrium; the only external force is gravity; heat conduction follows Fourier's law; and there are no internal heat sources. The equations are extremely complex since they are non-linear, coupled (i.e. they must be solved simultaneously), three-dimensional, and time-dependent. For this reason approximation techniques, such as the finite volume method (FVM), are used to solve these equations. In the FVM, the flow equations are represented in integral form in order to calculate the gross fluxes of mass, momentum, and energy passing through a finite region, a control volume, of the flow.
Other methods are the finite difference method (FDM) and the finite element method (FEM). More information about the conservation equations, their derivation, and their resolution can be obtained from fluid dynamics textbooks. The finite difference method consists of solving discretized PDEs by Taylor expansions. This method is applicable in practice only to structured grids. The finite element method, on the other hand, is most widely used in the world of structural mechanics. In the finite element method, the governing equations are transformed into weak formulations with some mathematical modifications using a weighting function. The finite volume method is the technique most widely applied in CFD due to its generality, conceptual simplicity, and its ease of implementation on arbitrary structured and unstructured meshes. This method associates a local finite volume, or control volume, to each mesh point constituting the computational domain, once the grid has been generated. The conservation equations in integral form are applied to each local volume. Thus, in the FVM the discretized space is formed by a set of small cells, where each cell is associated to one mesh point. Each cell center contains an unknown scalar quantity, making the FVM a cell-centered approach, unlike the FDM and FEM. The FVM discretization technique can be illustrated by considering a general form of a conservation equation for a scalar quantity 𝑈 written in integral form for an arbitrary control volume Ω 𝑗 associated to mesh point 𝑗: 𝜕 𝜕𝑡 ∫ 𝑈 𝑑Ω Ω 𝑗 + ∮ 𝐹 ⃑ • 𝑑𝑆 ⃑ 𝑆 𝑗 = ∫ 𝑄 𝑑Ω Ω 𝑗 (2.10) where 𝐹 ⃑ is the flux vector, 𝑆 ⃑ is the surface vector, and 𝑄 is the source term. The equation is then replaced by its discrete form: 𝜕 𝜕𝑡 (𝑈 𝑗 Ω 𝑗 ) + ∑ 𝐹 ⃑ • ∆𝑆 ⃑ 𝑓𝑎𝑐𝑒𝑠 = 𝑄 𝑗 Ω 𝑗 (2.11) In the discrete form, the volume integral is expressed as the averaged values over the cell, and the surface integral is replaced by a sum over all the bounding faces of the considered volume Ω 𝑗 .
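The finite volume discretization above can be made concrete with a minimal 1D steady heat conduction example: with no sources, the discrete flux balance of each cell reduces to its temperature being a conductance-weighted average of its neighbours, solved here with simple iterative sweeps. This is an illustrative sketch only, not Fluent's implementation.

```python
def fvm_1d_conduction(n_cells, T_left, T_right, n_sweeps=2000):
    """Steady 1D heat conduction by the finite volume method.

    For each cell the flux balance (1D, no sources) reads
    a_W*(T_W - T_P) + a_E*(T_E - T_P) = 0, where boundary faces lie
    half a cell width away, doubling their conductance. Uniform grid,
    unit thermal conductance; solved with Gauss-Seidel sweeps.
    """
    T = [0.0] * n_cells
    for _ in range(n_sweeps):
        for i in range(n_cells):
            # conductance and temperature of west/east neighbour
            aW, TW = (2.0, T_left) if i == 0 else (1.0, T[i - 1])
            aE, TE = (2.0, T_right) if i == n_cells - 1 else (1.0, T[i + 1])
            T[i] = (aW * TW + aE * TE) / (aW + aE)
    return T

# 5 cells between walls at 100 and 0 converge to the linear profile
# at the cell centres: 90, 70, 50, 30, 10
T = fvm_1d_conduction(5, 100.0, 0.0)
```

The same cell-centered balance, with convective and source terms added, is what the solver linearizes and sweeps over in the full 3D domain.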
In steady-state and in the absence of source terms, the finite volume formulation reduces to a balance equation of all the fluxes entering and leaving the control volume: ∑ 𝐹 ⃑ • ∆𝑆 ⃑ 𝑓𝑎𝑐𝑒𝑠 = 0 (2.12) The algebraic conservation equations are linearized in each cell, forming a system of equations with a sparse coefficient matrix that can be solved numerically. Fluent solves the linear system using a point-implicit (Gauss-Seidel) linear equation solver in conjunction with an algebraic multigrid method. More information on the linearization and the linear system solution method can be obtained from [START_REF] Hirsch | Numerical Computation of Internal & External Flows[END_REF]. Geometry The domain geometry of the CFD model is decomposed into solid and fluid zones. The subdomains are the oil continuum (at the bottom of the compressor cylinder), discrete refrigerant fluid zones, the surrounding air continuum, and solid zones bounded by the simplified geometrical boundaries of the components. Figure 2.6 shows the scroll computational domain drawn with the aid of the ANSYS preprocessing tool, DesignModeler. The computational domain consists of the compressor shell, crankshaft, fluid gap between rotor and stator, suction and discharge ports, compression chamber, and the cylinder inlet and outlet cavities through which the refrigerant enters and exits the domain. The computational domain in CFD is coherent with the TNW programmed in MATLAB (step 3, the integral part of the model). The crankcase and rotor are modeled as stationary, unlike in the TNW, where the parts in question are rotating. Since both compressor types, scroll and rotary, are hermetic, every part of the compressor assembly inside the shell is in contact with refrigerant or oil. However, to simplify the CFD model, oil is present only at the bottom of the compressor.
Dimension parameterization was enabled in the geometry, ensuring that the dimensions of the components can easily be changed when necessary. Mesh The scroll domain mesh consists of 3.19 × 10⁶ tetrahedral elements. The mesh type is unstructured. An unstructured mesh is defined by irregular connectivity, typically of triangles in 2D and tetrahedral elements in 3D. A structured mesh typically comprises quadrilateral elements in 2D and hexahedral elements in 3D, and the node connectivity follows a fixed pattern. Given the complexity of the geometry, it was nearly impossible to generate a structured mesh. Structured meshes are believed to generate more accurate solutions; however, the accuracy of the mesh depends on the flow problem in question. The adaptivity of an unstructured mesh to more complex flows and geometries may generate more accurate solutions. However, the central processing unit (CPU) time to attain convergence is longer in problems involving unstructured grids: calculating the residuals requires data from the neighboring nodes, which in structured grids can be found by adding/subtracting 1 from the cell indices of the 𝑖, 𝑗 space in 2D and the 𝑖, 𝑗, 𝑘 space in 3D, which requires less storage and, thus, allows faster execution. This is not the case for unstructured mesh types, since explicit storage of the connectivity is required. The meshed domain can be seen in Figure 2.8. The mesh was generated with the ANSYS Workbench meshing tool, Mechanical. A denser mesh is imposed in narrow passages and regions with more significant curvature, such as small cylindrical cavities (compressor inlet and outlet, suction and discharge). The average orthogonal quality of the mesh is 0.84. Orthogonal quality is calculated from the relationship between the vector that is normal to the edge of a face and the vector from the center of the face to the center of the edge. Its values range from 0 to 1, where 0 corresponds to low quality. The maximum aspect ratio of the mesh is 17.6 and the average aspect ratio 1.85.
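The indexing difference between structured and unstructured grids described above can be sketched as follows (illustrative only): on a structured grid, face neighbours follow from index arithmetic and nothing needs to be stored, while on an unstructured grid the connectivity must be built and stored explicitly.

```python
# Structured grid: neighbours follow from (i, j) +/- 1 index
# arithmetic, so no connectivity table is stored.
def structured_neighbours(i, j, ni, nj):
    """Indices of the face neighbours of cell (i, j) on an ni x nj grid."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cand if 0 <= a < ni and 0 <= b < nj]

# Unstructured grid: connectivity is irregular and must be stored
# explicitly, e.g. as an adjacency table built from the element list.
def unstructured_neighbours(cells):
    """cells: list of node index tuples; two cells are neighbours if
    they share an edge (two nodes). Returns {cell: [neighbours]}."""
    adj = {k: [] for k in range(len(cells))}
    for a in range(len(cells)):
        for b in range(a + 1, len(cells)):
            if len(set(cells[a]) & set(cells[b])) >= 2:
                adj[a].append(b)
                adj[b].append(a)
    return adj

# Two triangles sharing the edge (1, 2)
adj = unstructured_neighbours([(0, 1, 2), (1, 2, 3)])
```

The explicit adjacency table is exactly the extra storage (and the extra memory traffic per residual evaluation) that makes unstructured solves slower, as noted above.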
Aspect ratio is the ratio of the longest edge length to the shortest edge length. Ideally, the aspect ratio is 1, which is the case for an equilateral face or cell. One of the most important mesh quality measures is skewness, which determines how close to the ideal, i.e. equilateral, shape a face or cell is. Cells with a skewness above the level of 0.98 were kept to a minimum: three elements have a skewness above 0.98. The average skewness in the scroll mesh was 0.23. Skewness, orthogonal quality, and maximum aspect ratio are the three main parameters used to evaluate the mesh quality. Mesh quality influences the convergence and the accuracy of the results. Based on the investigation of the mentioned quality parameters, it can be concluded that the mesh is of good quality. The solver type is segregated and pressure-based. Gravitational acceleration is enabled in the domain and set in the negative x-direction to 9.8 m/s². The standard k-epsilon model was chosen as the turbulence model for its robustness, economy, and reasonable accuracy. The model adds two additional PDEs to the system of equations: the turbulent kinetic energy equation, 𝑘, and the equation for the dissipation rate of turbulent kinetic energy, 𝜀. Velocity inlet (point 1, Figure 2.4 (a)) and outflow (point 4, Figure 2.4 (a)) were set as the inlet and outlet boundary conditions, respectively. The inlet velocity was calculated in MATLAB using the inlet mass flow rate: 𝑣 𝑖𝑛 = 𝑚̇/(𝐴 𝑖𝑛𝑙𝑒𝑡 𝜌) (2.13) where 𝐴 𝑖𝑛𝑙𝑒𝑡 is the cross-sectional area of the inlet tube and 𝜌 is the density of the refrigerant at the inlet condition. The path of the refrigerant was modeled to be identical to the one portrayed in Figure 2.4. The outflow boundary condition is used when the details of the flow velocity and pressure are unknown prior to the solution of the flow problem. In other words, no conditions at the outlet are predefined; instead, Fluent extrapolates the required information from the interior.
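The aspect ratio and skewness measures discussed above can be computed for a single 2D face as below. The aspect ratio follows the definition given in the text (longest over shortest edge); for skewness the common normalized equiangular definition is used, which may differ in detail from the formula implemented in the meshing tool.

```python
import math

def triangle_quality(p0, p1, p2):
    """Aspect ratio (longest/shortest edge) and normalized
    equiangular skewness of a triangle: 0 is the ideal equilateral
    shape, values near 1 indicate a degenerate cell."""
    pts = [p0, p1, p2]
    edges = []
    for k in range(3):
        (xa, ya), (xb, yb) = pts[k], pts[(k + 1) % 3]
        edges.append(math.hypot(xb - xa, yb - ya))
    a, b, c = edges
    # interior angles from the law of cosines (angle opposite each edge)
    angles = [
        math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c))),
        math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c))),
        math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b))),
    ]
    aspect = max(edges) / min(edges)
    skew = max((max(angles) - 60.0) / 120.0, (60.0 - min(angles)) / 60.0)
    return aspect, skew

# Equilateral triangle: aspect ratio 1, skewness 0
ar, sk = triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
```

For a right isosceles triangle the same function returns an aspect ratio of √2 and a skewness of 0.25, illustrating how distortion moves both measures away from their ideal values.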
The operating pressure of the domain is set at atmospheric pressure, 𝑃 𝑎𝑡𝑚 = 101325 𝑃𝑎, and the operating temperature is the ambient air temperature. Refrigerant, air, and engine oil are the materials used in the fluid domains, and steel is the material set in the solid domains (internal components and compressor shell). The Boussinesq approximation was applied for the air density. Refrigerant fluid properties (thermal conductivity, specific heat, dynamic viscosity, and density) are assumed to be constant. Thus, the model assumes that the refrigerant is at constant pressure and density. Refrigerant properties are taken at the inlet conditions. Engine oil is defined as both a "solid" and a fluid material. A thin oil layer considered to be in a solid state is modeled between the liquid oil (oil sump at the bottom, Figure 2.4) and the refrigerant interface, in order to model heat transfer between the oil sump and the refrigerant at the bottom, as shown in Figure 2.10. Oil in the solid state is assumed to have the same properties as oil in the liquid state. This approximation is justified by the high viscosity of the oil. The velocity of the oil sump was assigned as zero in the x, y, and z directions. The oil sump and crankshaft top temperatures are set as constant and calculated from a multivariate correlation for the oil temperature, as listed below: 𝑇 𝑜𝑖𝑙 = 0.28𝑇 𝑜𝑢𝑡 + 0.32𝑇 𝑖𝑛 + 9.8 (2.14) where 𝑇 𝑖𝑛 and 𝑇 𝑜𝑢𝑡 are the refrigerant temperatures at the compressor inlet and outlet. The correlation was derived from experimental data over a range of operating conditions, where the difference between condensation and evaporation temperatures varied from 25 to 60 °C. Each operating point was tested at three compressor speeds: 30, 60, and 90 rps. Ambient temperatures were set to 10 and 25 °C. Volumetric and surface heat fluxes were added to the computational domain. A surface heat flux equal to 𝑄 ̇𝑠𝑢𝑐 calculated from Eq. (2.2) was assigned as a boundary condition on the rotor top plate.
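A direct transcription of the oil temperature correlation, Eq. (2.14), can be written as follows (in Python for illustration); the operating point in the example is illustrative and should lie within the calibration range described above:

```python
def oil_temperature(T_in, T_out):
    """Oil sump temperature, degC, from the multivariate correlation
    of Eq. (2.14): T_oil = 0.28*T_out + 0.32*T_in + 9.8.

    T_in, T_out: refrigerant temperatures at the compressor inlet
    and outlet, degC. Valid only inside the experimental calibration
    range of the correlation.
    """
    return 0.28 * T_out + 0.32 * T_in + 9.8

# Illustrative point: 15 degC at the inlet, 80 degC at the outlet
T_oil = oil_temperature(15.0, 80.0)  # 0.28*80 + 0.32*15 + 9.8 = 37.0
```

Both inputs are temperatures that can be measured non-intrusively, which is what makes the correlation usable as a boundary condition in the model.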
The CFD model performs a thermal analysis exclusively, and, therefore, no compression is assumed to take place in this part of the model (second step). To represent the increase in temperature from 𝑇 𝑠𝑢𝑐 to 𝑇 𝑑𝑖𝑠 , the temperatures of the refrigerant at suction and discharge, respectively, due to compression and non-isentropic heat release, a heat flux, 𝑄 ̇𝑐𝑜𝑚𝑝-𝑐ℎ𝑎𝑚 , is introduced in the compression chamber, as depicted in Figure 2.11. The suction temperature was calculated from the suction enthalpy in Eq. (2.6). Besides the refrigerant and oil fluid domains, there is a domain of air that surrounds the compressor. The walls of this domain are set as a constant, isothermal wall boundary condition to represent the ambient temperature surrounding the compressor, 𝑇 𝑎𝑚𝑏 . Heat is transferred from the compressor shell to the ambient air by natural convection and radiation. As mentioned earlier, the Boussinesq approximation was configured for the air density. The Boussinesq approximation was chosen due to its ability to reduce the nonlinearity of the model and the number of iterations required for convergence, and thus lower the computational costs. The approximation implies that the density variations are small and that density has no effect on the flow field. In other words, the air density is assumed to give rise only to buoyancy forces; the density variation is important only in the buoyancy term of the N-S equations. Otherwise in the equations, the density is assumed to be constant. The emissivity of the solid components was set as 1 and the P1 radiation model was selected. This radiation model accounts for absorption, scattering, and emissivity in the domain. The solution was initialized from the inlet boundary condition. In the solution methods menu, the pressure-velocity coupling method was set to SIMPLE, an algorithm that is appropriate for steady-state calculations.
In the spatial discretization, the gradient was set to least-squares cell-based and the pressure interpolation scheme to standard. Momentum, energy, turbulent kinetic energy, and turbulent dissipation rate were discretized with a first-order upwind scheme. The convergence criteria were the scaled RMS residuals, heat flux and mass flow rate imbalances (< 1 %), and the physical plausibility of the results.

Compressor thermal network analysis

The final step of the model involves a network analysis of energy transfer equations between solid-fluid interfaces by convection. The goal of the TNW analysis is to verify that the heat, Q̇_comp, calculated from Eq. (2.1) equals the net heat dissipated from the compressor components:

Q̇_comp = Q̇_rotor + Q̇_stator + Q̇_crank + Q̇_comp-cham (2.15)

The total heat flux of each component is the sum of the heat fluxes of all the individual surfaces that constitute the component. For instance, the heat transferred from the rotor to the surroundings is the sum of the heat fluxes between the fluid and the rotor surfaces in contact with the fluid. If the heat released due to an imperfect compression process, Q̇_comp, obtained from Eq. (2.1) is equal to the heat released by the compressor components obtained from Eq. (2.15), the model has converged. If not, more boundary conditions may be assigned in the CFD model to guide the differential step more effectively, until convergence in step 3 is achieved.

In order to obtain the net energy released by the components, the temperatures of the solid components and of the surrounding fluid, as well as the refrigerant velocities in proximity to the solid components, must be known. These values are obtained from the CFD model. Consequently, using this information, the convective heat transfer coefficients are evaluated. Nusselt number correlations for different flow regimes and geometries were taken from the literature.
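A minimal sketch of the energy-balance check behind Eq. (2.15), with the 10 % tolerance used as the convergence criterion in step 3 (function and argument names are ours):

```python
def tnw_energy_balance_ok(q_comp, q_rotor, q_stator, q_crank, q_comp_cham, tol=0.10):
    """Step-3 check of Eq. (2.15): the heat from the thermodynamic cycle
    analysis (step 1) must match the net heat dissipated by the components
    within the stated tolerance. All heat rates in W."""
    q_components = q_rotor + q_stator + q_crank + q_comp_cham
    return abs(q_components - q_comp) <= tol * abs(q_comp)
```

If the check fails, the model loops back to the CFD step with additional boundary conditions, as described above.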
Correlations for internal and external flows over horizontal and vertical, rotating and static plates and cylinders were considered. The relation between the Nusselt number and the convective heat transfer coefficient is:

Nu_x = h_x x / k (2.16)

where x is the characteristic length, which depends on the geometry of the object and the flow characteristics and can be a diameter, a radius, or a length; k is the thermal conductivity of the fluid; and h_x is the convective heat transfer coefficient [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]. Fluid thermal properties and convective heat transfer coefficients were evaluated at the mean film temperature obtained from the following equation [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]:

T_film = (T_s + T_r) / 2 (2.17)

where T_s is the component surface temperature and T_r is the refrigerant gas temperature. Heat transfer from the rotor is the heat flux from the top disk of the rotor. Similarly, the total heat transfer from the stator is equal to the heat flux from the top disk of the stator. Heat transfer from the crankshaft equals the heat released from the top lateral side of the shaft (above the motor assembly). Only the top surfaces of these components are considered in the thermal analysis, since this is where heat exchange between the suction gas and these components is assumed to occur. Heat exchange also occurs in the narrow rotor-stator and stator-shell gaps. However, according to compressor manufacturers, these gaps are extremely narrow and only a very small portion of the fluid (oil and refrigerant) is assumed to pass through them. For this reason, it can be assumed that the fluid passes directly from the inlet port to the compression chamber, and that only a small portion of fluid remains at the bottom of the compressor, in the cavity between the shell bottom and the crankshaft and motor assembly.
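Eqs. (2.16) and (2.17) translate directly into two small helpers (an illustrative Python sketch; names are ours):

```python
def film_temperature(t_surface, t_refrigerant):
    """Mean film temperature, Eq. (2.17): T_film = (T_s + T_r) / 2."""
    return 0.5 * (t_surface + t_refrigerant)

def htc_from_nusselt(nu, k_fluid, char_length):
    """Convective heat transfer coefficient from Eq. (2.16): h = Nu * k / x,
    where x is the characteristic length (diameter, radius, or length)."""
    return nu * k_fluid / char_length
```

In the thermal network, fluid properties entering the Nusselt correlations below are evaluated at this film temperature.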
This fluid can be assumed to be stagnant and inert, i.e. there is no thermal interaction between the fluid and the solid components. For this reason, the stator-rotor and stator-shell gaps are not modeled from a thermal point of view in this part of the model (step 3). Similarly, heat transfer from the bottom of the motor assembly and the crankcase is assumed to be zero. If the stator-rotor and stator-shell gaps are significant in size, heat exchange from these parts cannot be neglected in the analysis. Appendix C presents the correlation found in the literature that is appropriate for quantifying the heat exchange between two concentric cylinders (rotor-stator gap), where one cylinder represents the rotor (the hotter, rotating cylinder) and the other one represents the stator (the cooler, static concentric cylinder), as well as a correlation found in the literature for two static concentric cylinders (stator-shell gap). Heat transfer from the compression chamber is expressed in the following equation:

Q̇_comp-cham = Q̇_cha-cyl + Q̇_cha-top + Q̇_cha-bot (2.18)

where Q̇_cha-cyl is the heat absorbed by the lateral walls inside the compression chamber, Q̇_cha-top is the heat absorbed by the top circular plate inside the compression chamber, and Q̇_cha-bot is the heat absorbed by the bottom circular plate of the compression chamber, as illustrated in Figure 2.12.

Figure 2.12. Heat fluxes released from the fluid to the compression chamber

As mentioned earlier, oil can be found in many places inside the compressor. In practice, oil forms a thin film on the majority of the interior surfaces. At the bottom of the compressor, where the fluid is assumed to be stagnant, refrigerant is mixed with droplets of oil, and beneath that is the oil sump (Figure 2.4).
Since most of the flow over the interior compressor components was assumed to be forced (average flow velocity above 2 m/s), the dimensionless Reynolds number is required to determine whether the flow is in a turbulent or laminar regime:

Re_x = v x / ν (2.19)

where ν is the kinematic viscosity and v is the flow velocity [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]. A Nusselt number correlation for forced convection in turbulent external flow over a flat round plate is presented in the following equation [START_REF] Padet | Convection thermique et massique[END_REF]:

Nu̅_D = 0.035 Re_D^0.8 Pr^(1/3) (2.20)

where Pr is the Prandtl number evaluated at the mean film temperature and D is the surface diameter. Such correlations were used for the top and bottom surfaces of the rotor, stator, and compression chamber. [START_REF] Churchill | A correlating equation for forced convection from gases and liquids to a circular cylinder in crossflow[END_REF] suggested Nusselt number correlations for forced external convection across vertical cylindrical surfaces, such as the rotor, stator, crankcase, and compression chamber cylinder walls, presented in the following set of equations:

Nu_D = 0.3 + [0.62 Re_D^(1/2) Pr^(1/3) / (1 + (0.4/Pr)^(2/3))^(1/4)] [1 + (Re_D/282000)^(5/8)]^(4/5) (4·10^5 < Re_D < 5·10^6) (2.21)

or

Nu_D = 0.3 + [0.62 Re_D^(1/2) Pr^(1/3) / (1 + (0.4/Pr)^(2/3))^(1/4)] [1 + (Re_D/282000)^(1/2)] (2·10^4 < Re_D < 4·10^5) (2.22)

or

Nu_D = 0.3 + 0.62 Re_D^(1/2) Pr^(1/3) / (1 + (0.4/Pr)^(2/3))^(1/4) (Re_D < 10^4) (2.23)

[START_REF] Gnielinski | New equations for heat and mass transfer in turbulent pipe and channel flow[END_REF] recommended the following Nusselt number correlation for a static vertical cylinder with internal flow:

Nu_D = (f/8) (Re_D - 1000) Pr / [1 + 12.7 (f/8)^(1/2) (Pr^(2/3) - 1)] (Pr ≥ 0.5, 3·10^3 ≤ Re_D ≤ 5·10^6) (2.24)
where f is the friction factor for a smooth wall, obtained from the formula suggested by [START_REF] Petukhov | Heat transfer and friction in turbulent pipe flow with variable physical properties[END_REF]:

f = (0.790 ln Re_D - 1.64)^(-2) (2.25)

This correlation is used to estimate heat transfer inside the compression chamber. As seen in Figure 2.13 (top), the fluid enters the compression chamber through four entrances and exits from the discharge located on the top of the chamber. However, no heat transfer correlation is available directly from the literature for this type of flow configuration; hence, Eq. (2.24) was used. The hydraulic diameter, D_h, which is typically the pipe diameter in such a correlation (D_h,2 in Figure 2.13), was assumed to be equal to the height of the compression chamber inner wall (D_h,1 in Figure 2.13). The heat transfer area was considered to be the total inner area of the compression chamber: top, bottom, and lateral surfaces.

Figure 2.13. Flow inside the compression chamber (above) is approximated as the flow inside a pipe (below)

For a rotating disk heated from the bottom or cooled from the top, i.e. the rotor top surface, the following Nusselt number correlation can be used in laminar flow [START_REF] Incropera | Fundamentals of heat and mass transfer[END_REF]:

Nu_D = 0.33 Re_Ω^(1/2) (10^3 ≤ Re_Ω ≤ 2·10^5) (2.26)

where Re_Ω is the Reynolds number for a rotating surface, calculated using the following formula:

Re_Ω = Ω D² / ν (2.27)

where ν is the kinematic viscosity, Ω is the angular velocity, and D is the diameter of the body (characteristic length).
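As a sketch, the internal-flow estimate of Eqs. (2.24)-(2.25) used for the compression chamber can be written as follows (illustrative Python; the validity check and names are ours):

```python
import math

def gnielinski_nusselt(re_d, pr):
    """Gnielinski correlation, Eqs. (2.24)-(2.25), used here to estimate
    internal convection in the compression chamber.

    Validity: Pr >= 0.5 and 3e3 <= Re_D <= 5e6. The friction factor follows
    Petukhov's smooth-wall formula, f = (0.790 ln Re_D - 1.64)^-2.
    """
    if not (pr >= 0.5 and 3e3 <= re_d <= 5e6):
        raise ValueError("outside the validity range of Eq. (2.24)")
    f = (0.790 * math.log(re_d) - 1.64) ** -2  # Eq. (2.25)
    return ((f / 8.0) * (re_d - 1000.0) * pr /
            (1.0 + 12.7 * math.sqrt(f / 8.0) * (pr ** (2.0 / 3.0) - 1.0)))
```

With the hydraulic-diameter assumption of the text, Re_D would be built from D_h,1 and the mean chamber velocity.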
[START_REF] Kreith | Convection heat transfer and flow phenomena of rotating spheres[END_REF] recommended a Nusselt number correlation for a rotating cylinder, used to calculate the heat transfer from the top lateral part of the crankshaft:

Nu_D = 0.133 Re_Ω^(2/3) Pr^(1/3) (Re_Ω < 4.3·10^5) (2.28)

where the cylinder diameter is the characteristic length, D. This correlation takes into account only the rotational flow (vortex) and, hence, the angular velocity is the characteristic velocity used to calculate the Reynolds number. The fluid that flows past the crankshaft also has an axial velocity, since it flows towards the compression chamber. However, in order to keep the thermal analysis simple, it is assumed that the fluid surrounding the crankshaft has only an angular velocity and that no axial flow takes place. If the deviation between the values obtained from Eq. (2.1) (step one) and Eq. (2.15) (step three) is within 10 %, the model is considered to be converged. If not, the second step of the model must be guided more efficiently; for instance, assigning more boundary conditions might be necessary.

Rotary

Thermodynamic cycle analysis

The heat absorbed by the rotary compression chamber (components) is assumed to be equal to the heat released by the fluid due to non-isentropic compression (thermodynamic losses), as depicted in the following equation:

Q̇_comp = Q̇_non-is = Ẇ_e - Ẇ_is (2.29)

where Ẇ_is is calculated as described in Subsection 2.2.2.1, Eq. (2.3). The heat released by the fluid is calculated in the third step of the model, the thermal network analysis, where Q̇_comp is obtained as the heat transfer from the fluid to the walls of the compression chamber. As in the case of the scroll model, the two values are compared to verify the validity of the CFD model.
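The rotating-surface correlations above, Eqs. (2.26)-(2.28) together with the rotational Reynolds number of Eq. (2.27), can be sketched as follows (an illustrative Python fragment; names are ours and validity-range enforcement is left to the caller, with the ranges noted in the docstrings):

```python
import math

def re_rotational(omega, diameter, nu_kin):
    """Rotational Reynolds number, Eq. (2.27): Re_Omega = Omega * D**2 / nu."""
    return omega * diameter ** 2 / nu_kin

def nu_rotating_disk(re_omega):
    """Laminar rotating-disk correlation, Eq. (2.26).

    Valid for 1e3 <= Re_Omega <= 2e5 (e.g. the rotor top surface)."""
    return 0.33 * math.sqrt(re_omega)

def nu_rotating_cylinder(re_omega, pr):
    """Kreith rotating-cylinder correlation, Eq. (2.28), Re_Omega < 4.3e5.

    Used for the top lateral part of the crankshaft; only the rotational
    (vortex) flow is considered, axial flow is neglected."""
    return 0.133 * re_omega ** (2.0 / 3.0) * pr ** (1.0 / 3.0)
```

Combined with h = Nu k / D from Eq. (2.16), these give the convective coefficients for the rotating components in the thermal network.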
The main difference between this step in the rotary and scroll compressor models is that there is no gas superheat in rotary compressors: the gas passing by the motor assembly is hotter than the motor surfaces and releases thermal energy instead, since the compression chamber is located before the motor. Therefore, calculating the motor heat is only required in the second step (CFD part) of the model, in order to assign it as a surface heat flux boundary condition.

Detailed thermophysical flow analysis

Generally, this step of the model is quite similar to the one described in Subsection 2.2.2.2 for scroll compressors in terms of geometry, mesh, and setup configurations. The main difference is the component layout.

Geometry

As in the scroll compressor model, the domain geometry of the CFD model is decomposed into solid and fluid zones: the subdomains are the oil continuum (the bottom of the compressor cylinder), discrete refrigerant fluid zones, the surrounding air continuum, and solid zones bounded by simplified geometrical boundaries of the components. Figure 2.14 portrays the computational domain in DesignModeler. The computational domain consists of the compressor shell, crankshaft, fluid gap between rotor and stator, suction and discharge ports, compression chamber, and the cylinder inlet and outlet cavities through which the refrigerant enters and exits the domain. The computational domain in CFD is coherent with the TNW programmed in MATLAB (step 3, the integral part of the model). The crankcase and rotor are considered to be stationary, unlike in the TNW, where the parts in question are considered to be rotating. Dimension parameterization was enabled in the geometry, ensuring that the dimensions of the components can easily be changed when necessary. A supplementary layer on the compressor shell can be seen, highlighted in green in Figure 2.15.
Modifying the material of this part, for instance to wool fiber, can represent an insulation layer. In that case the wall boundary types are set to double-sided walls (wall and wall-shadow). However, if no insulation layer is desired, this part of the shell can be set to steel, in which case the wall boundaries must be changed from wall to interior. The thickness of the insulation layer can be adjusted in the parameterization settings.

Mesh

The domain mesh consists of 2.3·10^6 tetrahedra, hexahedra, wedges, and pyramids. The rotary domain mesh is a hybrid mesh: it consists of portions of structured grid (stator and oil sump) and unstructured grid (rest of the domain). The meshed domain is presented in Figure 2.16. As in the scroll compressor domain, a denser mesh is imposed in narrow passages and regions with more significant curvature, such as the small cylindrical cavities (compressor inlet and outlet, suction and discharge). The average orthogonal quality of the mesh is 0.86. The maximum aspect ratio is 21.0 and the average is 1.86. The maximum skewness is 0.98, for one element. The average skewness in the scroll mesh was 0.23. The mesh of the rotary domain is of good quality.

The emissivity of the internal components was set to 1 and the P1 model was chosen as the radiation model, as in the scroll model setup. The oil sump and top-of-crankshaft temperatures are set as constant and calculated from a correlation for the oil temperature:

T_oil = -0.0235 T_out² + 4.045 T_out - 96.699 (2.30)

Unlike in the case of scroll compressors, a parametric analysis showed that the oil temperature depends primarily on the condensation temperature. The correlation was derived from experimental data over a range of operating conditions identical to the ones considered for the scroll compressor. Temperature measurements from the experimental tests showed that the exterior wall temperature is hotter at the motor level.
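Like the scroll fit of Eq. (2.14), the rotary correlation of Eq. (2.30) is a one-line function (an illustrative sketch; temperatures assumed in °C and the name is ours):

```python
def oil_temperature_rotary(t_out):
    """Oil-sump temperature correlation for the rotary compressor, Eq. (2.30).

    t_out: refrigerant temperature at compressor outlet (°C). In contrast
    with the scroll fit, the rotary oil temperature depends on the outlet
    (condensation-side) temperature only.
    """
    return -0.0235 * t_out ** 2 + 4.045 * t_out - 96.699
```

As with Eq. (2.14), the quadratic fit is only meaningful within the tested range of operating conditions.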
The CFD model does not include the stator-shell gap, since imposing a mesh in such a confined space is challenging, considering that the aim was to keep the number of mesh elements and, thus, the calculation time to a minimum. In order to represent the working fluid passing through this narrow passage, a constant temperature, T_dis, was set as a boundary condition on the wall between the shell and the stator. Volumetric and surface heat fluxes were added to the computational domain. As in the scroll model, the second step of the rotary model performs a thermal analysis exclusively, and the heat flux required to represent the increase in temperature from T_suc to T_dis is calculated as depicted in Figure 2.11. Motor losses, calculated from Eq. (2.2), were assigned as a surface heat flux boundary condition.

Compressor thermal network analysis

As mentioned above in Subsection 2.2.2.3, the heat released by the fluid to the solid components due to thermodynamic losses during the compression process is equal to the heat absorbed by the solid parts of the compression chamber, as listed below (identical to Eq. (2.18)):

Q̇_comp = Q̇_comp-cham = Q̇_cha-cyl + Q̇_cha-top + Q̇_cha-bot (2.31)

The flow of refrigerant within the compression chamber was approximated as the flow of fluid in a pipe, as depicted in Figure 2.13. If the deviation between the values obtained from Eq. (2.29) (step one) and Eq. (2.31) (step three) is within 10 %, the model is considered to be converged. If not, the second step of the model must be guided more efficiently; for instance, assigning more boundary conditions might be necessary.

Conclusions

Tran et al. (2013) established that compressor heat losses contribute significantly to the relative uncertainty of the performance assessment method. Therefore, in order to improve the performance assessment method, the on-field estimation of compressor heat losses must be more accurate.
One of the main sources of uncertainty in the current method used to estimate compressor heat losses, presented in Eq. (1.19), is the assumption that the compressor shell is isothermal and at the refrigerant discharge temperature. For this reason, the first step in improving the compressor heat loss model was to investigate the thermal behavior of the rotary and scroll compressor shells in various operating conditions. In order to establish the temperature distribution of the compressor shell, the internal compressor components must be modeled, i.e. the computational domain of the model is the entire compressor. However, thermal analysis of compressors is a challenging task due to a complex interior geometry that hinders simplifications. A literature review led us to choose a hybrid modeling approach, which combines integral and differential formulations. It is more flexible in terms of applicability to various compressor types and less computationally expensive than differential models, and it yields more accurate results than integral models. Two hybrid models were developed, for scroll and for rotary compressors. The calculation procedure of the models is divided into three fundamental steps: a thermodynamic cycle analysis, a detailed thermophysical flow analysis executed with the aid of a CFD program, and a compressor thermal network analysis. The calculation procedure of the numerical models is the following:

1. The first step of the model analyzes the thermodynamic compression cycle in order to evaluate the initial and boundary conditions used in the integral and differential parts of the model, steps two and three, respectively.

2. The second step of the model uses a CFD code to numerically solve a set of governing differential equations. The outcomes of this part of the model are transmitted to the final step of the model in order to perform a TNW analysis that verifies the CFD simulations.

3.
The third step of the model involves a network of energy transfer equations between the solid-fluid interfaces by convection. The goal is to verify that the value of Q̇_comp calculated in the first step from Eqs. (2.1) and (2.29), for the scroll and rotary compressor, respectively, equals the net heat dissipated/absorbed by the components calculated from Eqs. (2.15) and (2.31), for the scroll and rotary compressor, respectively. If the difference between the two values is less than 10 %, the model is considered converged, and the external and internal thermal profiles (if necessary) can be extracted from the second part (CFD) of the model.

The results obtained from the models, presented in Chapter 3, are used to determine which zones contribute most to the heat losses and which temperature sensor location best represents the average temperature of the shell or of a shell zone, presented in Chapter 4.

CHAPTER 3 EXPERIMENTAL VALIDATION OF THE NUMERICAL MODEL

The shell temperatures of the scroll and rotary compressors were compared to the temperatures obtained on a test bench, for five operating points. We observe good agreement between the values from the numerical model and the experimental values, for both compressor types. The numerical model showed that the rotary compressor shell is homogeneous for all operating points. However, the scroll compressor exhibited a sharp temperature rise at the level of the compression chamber, indicating that most of the heat losses originate from the high-pressure part of the compressor. The temperature values of the internal components were extracted from the numerical model for the scroll compressor. We observe good agreement between these values and the experimental values. This information can be useful for assessing the sensitivity of the components to thermal damage.
The simplifications introduced in the numerical model, for example those related to oil lubrication and to the component geometry, influence the accuracy of the numerical models. Nevertheless, these simplifications were necessary in order to keep the model flexible, so that it remains suited to different geometrical configurations, and to ensure a reasonable calculation time.

Experimental setup

Numerical results were compared to experimental data obtained from a scroll and rotary compressor test bench in the Mitsubishi Heavy Industries (MHI) laboratory, Air-Conditioning & Refrigeration Systems. Figure 3.1 is a schematic representation of the calorimeter chamber, where the scroll and rotary compressors were placed, one at a time. A small fan inside the calorimeter chamber was used to help homogenize the ambient temperature. Depending on the ambient air setting, the outlet ambient air was either cooled or chilled by a heat exchanger and later directed by a fan from the heat exchanger to the inlet mesh, at the top of the calorimeter chamber. The ambient temperature was measured at the air inlet. The refrigerant fluids used in the tests were R407C and R134a in the scroll and rotary compressor, respectively. Various operating conditions were tested in order to validate the numerical models. In the experimental setup, 40 and 42 thermocouples were placed along the compressor shell and on the internal components of the scroll and rotary compressors, respectively. The uncertainty of these sensors was 0.6 K. In addition, inlet and outlet pressures were measured with pressure sensors, the compressor power input with a wattmeter, and the refrigerant mass flow rate with a flow meter; the oil concentration in the refrigerant circuit was determined using an oil separator.
Thermocouples were placed on the static parts at the motor top and bottom, in close proximity to the stator, in order to establish the motor temperature. Similarly, in order to determine the temperature of the crankshaft, thermocouples were placed on the frame located in close proximity to the crankshaft. The temperature of the discharge baffle (bottom of the discharge plenum) was also measured, as well as the shell temperature at various locations and the oil temperature at the bottom of the compressor. In the experimental setup, the rotary compressor had an accumulator tank, and temperature sensors were placed on the exterior shell of this tank. Refrigerant temperatures at inlet, outlet, and discharge were measured. In the scroll compressor, the refrigerant temperature at the compression chamber suction cavity was measured as well. Measuring points 1-7 and 1-6, for the scroll and rotary compressor, respectively, represent the locations of thermocouples along the shell, from the bottom to the top, as depicted in Figure 3.2. The tested compressors had no insulation layer. Typically, in residential applications, compressors have an insulation layer. However, as mentioned earlier in Subsection 2.2, the numerical model can easily be adapted to integrate an insulation layer around the compressor shell. During the tests, all values were registered in steady operating state, about 10 minutes after startup.

Comparison of thermal profiles

The numerical models were validated using the results obtained from the experimental test bench at five operating conditions. In the validation, the experimental thermal profiles of the scroll and rotary compressors were compared to the temperature values at the corresponding locations extracted from the numerical model. For confidentiality reasons, the temperatures are expressed as dimensionless values. The operating conditions are listed in Table 3.1.
Scroll

The dimensionless RMS errors of the external profiles at the corresponding operating conditions are listed in Table 3.2.

Table 3.2 Dimensionless RMS errors of external profiles at different operating conditions in the scroll compressor

Rotary

Numerical and experimental temperature values of the external rotary shell, expressed as dimensionless values for confidentiality reasons, were compared in Figure 3.11 to Figure 3.15, at the same operating conditions as for the scroll compressor (Table 3.1). Measuring points 1-6 represent the locations of thermocouples along the shell, from the bottom to the top (Figure 3.2 (b)). The second step (CFD model) took approximately 2 hours to converge, and for the third step to converge, the value obtained from Eq. (2.29) had to fall within ±10 % of the value obtained from Eq. (2.31). Less time was required for the rotary model to converge, due to the smaller number of mesh elements and the presence of a structured grid in some subdomains. The dimensionless RMS errors of the external profiles at the corresponding operating conditions are listed in Table 3.3.

Table 3.3 Dimensionless RMS errors of external profiles at different operating conditions in the rotary compressor

Operating condition    1      2      3      4      5
RMSE (1)               0.031  0.023  0.039  0.029  0.050

Discussion

The sources of discrepancies between the experimental and numerical values, potentially due to the introduced model simplifications, and the applicability of the model to various component layouts and dimensions are discussed in this subsection. The numerical values agree with the experimental data: the root-mean-square errors range from 0.04 to 0.08 and from 0.02 to 0.05 (dimensionless values) in the scroll and rotary compressors, respectively. The deviations are smaller in the rotary domain than in the scroll domain. Note that the uncertainty of the shell contact temperature sensor is limited to approximately 0.003, an order of magnitude smaller than the RMS differences above.
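The dimensionless RMS errors reported in Tables 3.2 and 3.3 correspond to the usual root-mean-square difference over the measuring points (a sketch; the exact normalization used to make the temperatures dimensionless is not repeated here):

```python
import math

def dimensionless_rmse(t_num, t_exp):
    """Root-mean-square error between numerical and experimental shell
    temperatures at the measuring points (both sequences already
    nondimensionalized, as in Tables 3.2 and 3.3)."""
    assert len(t_num) == len(t_exp) and len(t_num) > 0
    return math.sqrt(sum((n - e) ** 2 for n, e in zip(t_num, t_exp)) / len(t_num))
```

Applied to the seven (scroll) or six (rotary) measuring points, this yields one RMSE value per operating condition.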
Simplifications

The presence of lubricating oil inside the compressor

In hermetic compressors, lubricating oil prevents potential damage of the internal components due to friction by lubricating the bearings. In addition to the oil stored in the sump at the bottom, a thin oil film (Figure 3.18) is present over many components inside the compressor, such as the crankcase, the stator, and the internal surface of the compressor shell. The oil pump is coupled with the crankshaft: as the shaft spins, the lubricating oil flows inside the pump through the crankshaft by centrifugal forces. As it reaches the other extremity of the shaft assembly, part of it is projected as a jet hitting the upper surface of the compressor shell, along which it later flows as a thin oil film. The other part of the projected oil returns to the sump by flowing over the compressor crankcase. Oil forms an interface between the refrigerant fluid and the surfaces of the solid components, which can affect the heat transfer inside the compressor. Dutra & Deschamps (2013) conducted experimental studies on the effect of the oil flow on the heat transfer of a reciprocating compressor shell for a selected operating condition. It was shown that if the oil heated by the crankshaft is hotter than the refrigerant gas in the discharge plenum, it can contribute to the heating of the discharge gas. The numerical model considers that the oil is still and present only at the bottom. The walls of the interior side of the shell and the component surfaces are, therefore, directly in contact with the refrigerant gas. In order to represent the thermal effects of the oil film on the crankshaft, its surface temperature was set to T_oil. According to Sanvezzo & Deschamps (2012), oil is mainly found on the crankcase and the interior side of the compressor shell, and stored at the bottom of the compressor.
Therefore, the simplifications made in the numerical model concerning oil are justified. However, these simplifications can be responsible for the deviations of the numerical values from the experimental data.

Figure 3.18. Thin oil film forming an interface between the refrigerant gas and a solid component

Simplified geometry

The intricate geometry of the compressor domains is simplified in the second and third steps of the model by removing components considered unnecessary from a heat transfer point of view, removing unnecessary fillets, and simplifying the geometrical shapes overall. Compressor parts consist of cylindrical and rectangular shapes. The geometry and configuration of the internal components modify the flow dynamics and heat transfer within the domain and, thus, the exterior shell profile. For this reason, the simplifications introduced in the geometry of the compressor domains were carefully considered: the aim was to find a compromise between sufficient accuracy of the results and a reduced complexity of the model. Rotating parts, such as the crank and rotor, are modeled as stationary in the second part (CFD) of the model. Consequently, the heat transfer coefficients estimated for these surfaces in the second step of the model are inferior to the real ones. However, the third step of the model (thermal network analysis) considers these parts as rotating. If the Q̇_comp value in this part deviates by more than 10 % from the value obtained in the first step of the model (thermodynamic cycle analysis), approximate heat transfer coefficients can be assigned as boundary conditions on the rotating parts in the CFD domain to improve the global convergence of the model. In the simulation results shown in Subsections 3.2.1 and 3.2.2, the Q̇_comp values obtained from Eqs. (2.1) and (2.29), in the scroll and rotary compressor, respectively, did not deviate by more than 10 % from Eqs. (2.15) and (2.31), in the scroll and rotary compressor, respectively.
Therefore, the thermal profiles were extracted directly from the CFD model once it converged. The rotary compressor used in the test bench had two inlet ports and an accumulator tank on one side. The temperature of the accumulator tank is typically much cooler than the compressor shell, as it serves to accumulate and preheat the fluid in order to ensure that the refrigerant at the compressor inlet is in the gaseous state. A one-inlet-port compressor was considered in the numerical model, as depicted in Figure 2.4 (b), which might explain some discrepancies between the numerical and experimental values for the rotary compressor.

Suction superheat in the scroll model

It is also important to consider the simplifications related to the suction superheat in scroll compressors. In the first step of the model (thermodynamic cycle analysis), the suction fluid is assumed to absorb all the heat dissipated by the motor due to electrical losses. Therefore, the temperature of the fluid at the suction of the compression chamber is obtained from the suction enthalpy calculated in Eq. (2.6). In reality, the motor heat is partly absorbed by the fluid and partly dissipated to the ambient air by convective and radiative heat transfer. The refrigerant temperature at suction is, thus, overestimated. The motor efficiencies are high and, therefore, the dissipated motor heat is low, and the elevation of the refrigerant temperature between the inlet and the suction cavity is only a few degrees, i.e. the superheat is low. For these reasons, the part of Q̇_mot that escapes to the ambient air can be considered negligible, and the overestimated T_suc is believed to differ only slightly from the real value. This is particularly the case for scroll compressors with an inlet between the motor and the bottom of the compression chamber: all of the suction gas is in contact with the top motor surfaces only, and not with the entire motor assembly, as in the case of an inlet located below the motor.
The two refrigerant inlets and their influence on the flow paths are depicted in Figure 3.19.

Model applicability to different compressor dimensions and component layouts

The main purpose of the numerical model was to investigate the thermal behavior of scroll and rotary compressors. However, the component layout of one compressor type (scroll or rotary) can vary, depending on the manufacturer, the configuration of the compressor in the refrigeration cycle, etc. For instance, the refrigerant inlet of a scroll compressor can be located between the compression chamber and the motor or at the bottom, as depicted in Figure 3.19. In this case the flow dynamics and the internal temperature distribution are altered, which reflects directly on the temperature distribution of the exterior shell. In order to extend the model to other types of scroll and rotary compressors, first, the component layout of the compressor of interest must be compared to the compressor layout in the numerical model. Specifically, the location of the inlet and outlet ports and of the compression chamber, the refrigerant flow path, and the motor assembly (size of the stator-shell and stator-rotor gaps) must be examined. Second, if the mentioned characteristics vary significantly from the ones presented in Chapter 2, the component layout must be adapted to the new one in all the steps of the model, specifically the second and third. Some deviations between component layouts may require only minor modifications. For example, if the inlet port in a scroll compressor is located at the bottom, below the motor, then the inlet port must be lowered in the second step (differential part) and a gap must be inserted in the motor assembly in order to create a path for the fluid to reach the compression chamber. Heat transfer coefficients for fluid passing between two concentric cylinders, introduced in Appendix C, must be employed in the third step of the model.
Other deviations between the layouts may require more extensive changes in the model. An example of such a case is a scroll compressor whose compression chamber is located at the bottom; this would imply major modifications, specifically in the second step. The model can be easily adapted to compressors whose dimensions differ from the ones already configured. This implies modifying the geometry parameters in the CFD part of the model and updating the mesh, and modifying the characteristic lengths and heat transfer areas in the third step of the model. To conclude, the model can be assumed to be applicable, with few modifications, to a wide range of compressors used in residential applications, since only the most essential and general compressor components were modeled, including a general component layout scheme, and the dimensions can be adjusted easily. However, if the component layout deviates significantly from the modeled one, substantial adjustments must be implemented.

Future development

The numerical model can be employed to investigate exterior thermal profiles in a great variety of operating conditions by varying the ambient, evaporation, and condensation temperatures, as well as the refrigerant fluids. This information can then be exploited to generalize and validate the instrumentation method to evaluate compressor heat losses on-field, defined in Chapter 4, over a wider range of operating conditions. Since the models are adapted to consider an insulation layer, the influences of different insulation materials and their thicknesses can be investigated as well. In most residential HP applications the compressor is enclosed in a casing and is, therefore, subjected to heat transfer by natural convection. However, in some applications compressors can be exposed to mixed/forced convection. For the moment, mixed and forced convection cannot be implemented easily in the CFD part of the model due to convergence issues.
Therefore, the possibility of adapting the model to integrate other convection regimes by assigning different air velocities in the x, y, or z direction must be investigated. As mentioned previously, the model can be adapted to different component layouts. For example, the inlet port can be moved from the top to the bottom of the compressor, as depicted in Figure 3.19. In such a case, the influence of the inlet port location on the exterior thermal profile and on the compressor heat losses in scroll compressors, calculated using the method described in Chapter 4, can be investigated. The models can also be used to investigate the effects of different operating parameters and motor efficiencies on the internal temperature distribution, as the models provide information about the temperatures of the internal components. This information can then be used to identify the hottest and, thus, most vulnerable to damage, zones in the compressor design.

Conclusions

Temperatures on the exterior shell of scroll and rotary compressors were compared to the temperatures obtained from an experimental test bench in the MHI laboratory in five operating conditions. The second step (CFD part) took approximately 4 and 2 hours to converge in the scroll and rotary compressors, respectively. In both cases the difference between the Q̇_comp values calculated in the first and third steps of the model was below 10 %, and the models were considered to be globally converged. The RMS errors of the dimensionless temperature values of the external profiles range between 0.04 and 0.08 in the scroll compressor and between 0.03 and 0.05 in the rotary compressor. Results obtained in the rotary compressor tend to be slightly more accurate. The numerical model showed that the shell of the rotary compressor is relatively uniform regardless of the operating condition.
The scroll compressor, on the other hand, exhibits a strong temperature jump at the level of the compression chamber, suggesting that a great deal of the heat losses occur at the top part of scroll compressors. There seems to be a good agreement between the temperature values of the internal components obtained from the numerical model for scroll compressors and the experimental values. This information can be of use when evaluating component vulnerability to thermal damage. Simplifications introduced in the numerical model, such as those related to lubricating oil and component geometry, influence the accuracy of the model. However, these simplifications were inevitable in order to keep the model flexible, in terms of its capacity to be applied to different geometrical layouts, and computationally inexpensive.

CHAPTER 4 IMPROVED IN SITU EVALUATION METHOD OF COMPRESSOR HEAT LOSSES FOR PERFORMANCE ASSESSMENT

In this chapter, the developed numerical models are used to determine the number and position of the temperature sensor(s) required to measure the shell temperature of the compressors. The objective is to improve the accuracy of on-field heat loss measurements. For the rotary compressor, the sensor located between the compression chamber and the bottom of the motor is the most representative of the average shell temperature. For the scroll compressor, the lower part of the compressor (low pressure) remains cold and most of the heat losses occur on the upper part (high pressure). The temperature measured by the sensor on the shell at the level of the discharge plenum must be used to estimate the average temperature of the high-pressure part of the compressor. A correlation developed here must then be used to account for the total heat losses of this type of compressor.
The developed correlation is valid when the average temperature of the low-pressure part, measured by the sensor located on the shell at mid-height of the motor, is higher than the ambient air temperature. To conclude, measuring heat losses on-field requires two temperature sensors on the shell in the case of a scroll compressor, and only one in the case of a rotary compressor. In addition, the correlations used to calculate the natural convection heat transfer coefficients for the different compressor surfaces (top, bottom, and lateral surfaces), obtained from a literature review, are also presented in this chapter.

Chapter 4 Improved in situ evaluation method of compressor heat losses for performance assessment

Introduction

This chapter presents the improved calculation method to evaluate rotary and scroll compressor heat losses. This method can be integrated on-field in the performance assessment method. The method includes Nusselt number correlations found in the literature. These correlations account for the nature of the flow and the geometry of the compressor shell. The optimal instrumentation to measure heat losses of rotary and scroll compressors on-field is determined based on the results of the numerical models (thermal profiles and heat flux distribution on the compressor shell).

Existing methods of compressor heat loss modelling

One approach to measure compressor heat losses in situ is to model heat losses as a function of the compressor power input, Ẇ_comp, as described in the work of [START_REF] Fahlén | Methods for commissioning and performance checking of heat pumps and refrigerant equipment[END_REF], depicted in the equation below:

Q̇_amb = η Ẇ_comp (4.1)

where η is the heat loss factor, assumed to be constant and known, and equal to 0.08.
However, experimental data obtained from the compressor test bench in MHI laboratories, described in Chapter 3, where the Q̇_amb values were obtained from the compressor energy balance, showed that η varies between 7 and 13 % in scroll compressors and between 5 and 47 % in rotary compressors, depending on the operating condition and rotation speed. Compressor heat losses can also be represented using a shell and ambient temperature relationship, as in the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] presented in Subsection 1.2.2, as follows:

Q̇_amb = a(T_shell - T_amb) + b(T_shell^4 - T_amb^4) (4.2)

where T_shell is the shell temperature, T_amb is the ambient temperature, and the factors a and b are obtained from the compressor geometry and the nature of heat transfer. Factors a and b represent the convective and radiative heat exchange coefficients, respectively, calculated as follows:

a = h_c A (4.3)

b = σA (4.4)

As mentioned in Subsection 1.2.2, it was assumed that T_shell is uniform and equal to the discharge temperature, T_comp,ex. This approximation is not justified, particularly in the case of scroll compressors; as shown in Figure 3.17, most of the scroll compressor shell remains at low temperature and only the top part of the shell approaches the discharge temperature value. Simulation results have also shown that in rotary compressors the discharge temperature values deviate from the area-weighted average temperatures obtained from the CFD program, presented in Table 4.1 and discussed in more detail in Subsection 4.3. The deviations between the temperature values obtained from different temperature sensor locations and the area-weighted average temperatures of the entire rotary compressor shell can be seen in Table 4.1. It is observed that the shell temperature at discharge overestimates the average shell temperature.
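A minimal sketch of the shell heat loss model of Eq. (4.2), with a = h_c·A and b = σ·A as in Eqs. (4.3) and (4.4). The exchange coefficient, area, and temperatures below are illustrative values only, not measurements from the thesis.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def shell_heat_loss(t_shell, t_amb, h_c, area):
    """Convective + radiative shell heat loss, Eq. (4.2).

    t_shell, t_amb  shell and ambient temperatures [K]
    h_c             convective heat transfer coefficient [W/(m^2 K)]
    area            heat exchange area A [m^2]
    """
    a = h_c * area          # Eq. (4.3)
    b = SIGMA * area        # Eq. (4.4)
    return a * (t_shell - t_amb) + b * (t_shell**4 - t_amb**4)

# Illustrative case: a 0.15 m^2 shell at 90 degC in 20 degC air with
# h_c = 6 W/(m^2 K); both terms are of comparable magnitude here.
q = shell_heat_loss(363.15, 293.15, 6.0, 0.15)   # W
```

Note that the radiative term requires absolute temperatures; using Celsius values in the fourth-power difference would give a meaningless result.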
Other researchers have suggested that the compressor shell temperature may depend on many factors, such as the pressure ratio, the suction pressure, and the discharge pressure/temperature. Correlations establishing a relationship between T_shell and other variables related to the operating conditions have been developed in previous works. For instance, [START_REF] Kim | Thermal performance analysis of small hermetic refrigeration and air-conditioning compressors[END_REF] proposed a linear dependence between T_shell and T_dis for small hermetic reciprocating, rotary, and scroll compressors. The model was developed based on thermodynamic principles and large data sets from compressor calorimeter and in situ tests. A linear correlation was suggested by [START_REF] Duprez | Modeling of scroll compressorimprovements[END_REF], where T_shell is a function of the evaporation and condensation temperatures in scroll compressors. [START_REF] Li | Simplified steady-state modeling for variable speed compressor[END_REF] conducted an analysis of experimental data over a wide range of operating conditions and suggested a non-linear correlation for T_shell of reciprocating, rotary, and scroll compressors, as a function of the pressure ratio and the discharge pressure. It is challenging to determine a correlation that accurately represents T_shell and takes into account the physical aspects of heat exchange and compressor geometry, while remaining relatively simple in terms of unknown parameters and possessing a good extrapolation capacity. In addition, assuming that T_shell is uniform, specifically in the case of scroll compressors with the compression chamber on top, implies significant simplifications, since strong temperature variations (Figure 3.17) take place along the shell.
Improved calculation method of compressor heat losses

In order to establish a more accurate method to evaluate compressor heat losses on-field, first, appropriate correlations for the convective heat transfer coefficient were found in the literature. Then, the results of the numerical model were used to define the appropriate instrumentation, in terms of sensor number and location, to accurately measure T_shell.

Nusselt number correlations for convective heat transfer coefficients

Heat exchange between the exterior envelope of the compressor and the environment occurs by radiation and convection. Heat exchange by conduction is considered negligible. The compressor can be exposed to forced, natural, or mixed convection. Natural convection occurs when the only force generating air movement is gravity, i.e. no external force generates air movement, unlike in forced convection. Changes in fluid temperature result in density changes, and in a gravitational field the lighter fluid is pushed upwards by buoyancy forces and replaced by a denser and cooler fluid. The resulting air movement gives rise to heat transfer by convection. Mixed convection occurs when the buoyancy forces in forced flows are non-negligible: neither the natural nor the forced convection mode dominates. Whether natural, forced, or mixed convection takes place depends on the configuration of the compressor inside the HP exterior/interior unit. In most residential applications the compressor is enclosed inside a casing and is, therefore, subjected only to natural convection. For this reason, only the case of heat transfer occurring by natural convection is investigated. Correlations available in the literature that account for the physical characteristics of heat transfer and the compressor geometry have been carefully selected.
The dimensionless Rayleigh number for natural convection is used to estimate the flow regime of the air, and is calculated as follows:

Ra_x = g β (T_shell - T_amb) x^3 / (ν α) (4.5)

where g is the gravitational acceleration, β is the thermal expansion coefficient, ν is the kinematic viscosity, and α is the thermal diffusivity. Material properties are evaluated at T_film. Air can be assumed to be an ideal gas; therefore, the thermal expansion coefficient is given by the following equation:

β = 1 / T_film (4.6)

A literature review of available correlations has yielded the following correlations for isothermal surfaces, which best fit the nature of the flow and the compressor geometry in question. [START_REF] Churchill | Correlating equations for laminar and turbulent free convection from a vertical plate[END_REF] proposed a Nusselt number correlation for vertical plates in laminar flows:

Nu̅_L = 0.68 + 0.67 Ra_L^(1/4) / [1 + (0.492/Pr)^(9/16)]^(4/9) (Ra_L ≤ 10^9) (4.7)

where Pr is the Prandtl number and L is the length of the lateral part. The correlation is adapted as follows to the turbulent flow regime:

Nu̅_L = {0.825 + 0.387 Ra_L^(1/6) / [1 + (0.492/Pr)^(9/16)]^(8/27)}^2 (10^9 ≤ Ra_L ≤ 10^12) (4.8)

This empirical correlation was used for the lateral part of the compressor cylinder. McAdams (1954) recommended an experimental correlation for heated horizontal plates facing up in the laminar flow regime:

Nu̅_D = 0.54 Ra_D^(1/4) (10^5 < Ra_D ≤ 10^7) (4.9)

and for turbulent flows as follows:

Nu̅_D = 0.14 Ra_D^(1/3) (10^7 < Ra_D < 3·10^10) (4.10)

These were used for the scroll and rotary compressor top surfaces.
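Eqs. (4.5)-(4.8) can be sketched as below for the lateral surface. The air properties used in the example (ν, α, Pr, thermal conductivity k) are illustrative film-temperature values, not values taken from the thesis.

```python
def rayleigh(t_shell, t_amb, length, nu, alpha, g=9.81):
    """Rayleigh number, Eq. (4.5); beta = 1/T_film for an ideal gas, Eq. (4.6).
    Temperatures in kelvin, length in meters, nu and alpha in m^2/s."""
    t_film = 0.5 * (t_shell + t_amb)
    beta = 1.0 / t_film
    return g * beta * (t_shell - t_amb) * length**3 / (nu * alpha)

def nusselt_vertical(ra, pr):
    """Churchill-Chu average Nusselt number for a vertical plate:
    Eq. (4.7) for Ra <= 1e9 (laminar), Eq. (4.8) otherwise (turbulent)."""
    if ra <= 1e9:
        f = (1.0 + (0.492 / pr) ** (9.0 / 16.0)) ** (4.0 / 9.0)
        return 0.68 + 0.67 * ra ** 0.25 / f
    f = (1.0 + (0.492 / pr) ** (9.0 / 16.0)) ** (8.0 / 27.0)
    return (0.825 + 0.387 * ra ** (1.0 / 6.0) / f) ** 2

# Illustrative case: a 0.4 m tall lateral shell at 90 degC in 20 degC air,
# with air properties at the film temperature: nu ~ 1.85e-5 m^2/s,
# alpha ~ 2.6e-5 m^2/s, Pr ~ 0.71, k ~ 0.028 W/(m K).
ra = rayleigh(363.15, 293.15, 0.4, 1.85e-5, 2.6e-5)   # ~3e8: laminar branch
nu_bar = nusselt_vertical(ra, 0.71)
h_c = nu_bar * 0.028 / 0.4    # convective coefficient [W/(m^2 K)]
```

The resulting h_c of a few W/(m² K) is typical of natural convection on a compressor-sized vertical surface.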
For the bottom surfaces, a correlation suggested by [START_REF] Kadambi | Free convection heat transfer from horizontal surfaces for prescribed variations in surface temperature and mass flow through the surface[END_REF] for heated horizontal plates facing down in laminar and turbulent regimes was used:

Nu̅_D = 0.82 Ra_D^(1/5) (10^5 < Ra_D < 10^10) (4.11)

The correlations recommended by [START_REF] Mcadams | Heat Transmission[END_REF], Eqs. (4.9) and (4.10), and [START_REF] Kadambi | Free convection heat transfer from horizontal surfaces for prescribed variations in surface temperature and mass flow through the surface[END_REF], Eq. (4.11), were developed for horizontal square plates. However, they were adapted to the compressor top and bottom surfaces despite these being of circular shape. The diameter of the plates was selected as the characteristic length instead of the side length assumed originally in the correlations.

Optimal on-field instrumentation for T_shell

Results obtained from the compressor test bench and the second step of the numerical model (CFD model) were used to determine the zones on the scroll and rotary compressor shells that contribute the most to compressor heat losses, by investigating their shell thermal profiles and heat flux distributions in various operating conditions. As seen from the results shown in Subsection 3.2.2, the temperature distribution of the rotary compressor shell is quite uniform. The hottest point on the shell is at the level of the middle of the motor assembly (sensor 4 in Figure 3.2 (b)). However, its deviation from the rest of the shell temperatures is small. Therefore, the rotary shell can be considered an isothermal zone. As mentioned earlier, the area-weighted averages presented in Table 4.1 show that the most representative location for measuring the average temperature of rotary compressors is sensor 3, which is located at the level of the space between the compression chamber and the motor assembly (Figure 3.2 (b)).
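A sketch of the top and bottom surface correlations, Eqs. (4.9)-(4.11), using the shell diameter as the characteristic length as described above; the Rayleigh number used in the example is an arbitrary illustrative value.

```python
def nusselt_top(ra_d):
    """McAdams (1954), heated horizontal plate facing up:
    Eq. (4.9) in the laminar range, Eq. (4.10) in the turbulent range."""
    if 1e5 < ra_d <= 1e7:
        return 0.54 * ra_d ** 0.25
    if 1e7 < ra_d < 3e10:
        return 0.14 * ra_d ** (1.0 / 3.0)
    raise ValueError("Ra_D outside correlation validity range")

def nusselt_bottom(ra_d):
    """Kadambi correlation, Eq. (4.11), heated horizontal plate facing down."""
    if 1e5 < ra_d < 1e10:
        return 0.82 * ra_d ** 0.2
    raise ValueError("Ra_D outside correlation validity range")

# Illustrative Rayleigh number based on the shell diameter D.
ra_d = 2.0e6
nu_top = nusselt_top(ra_d)      # laminar branch, 0.54 * Ra^(1/4)
nu_bot = nusselt_bottom(ra_d)   # 0.82 * Ra^(1/5)
```

The validity-range checks mirror the bounds quoted with each equation, so an out-of-range Ra_D fails loudly instead of silently extrapolating.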
However, as seen from Table 4.1, measuring the temperature from the top part of the compressor (sensor 5, at the level of the discharge plenum in Figure 3.2 (b)) gives a very close estimation of the area-weighted average as well. Investigating the thermal profiles of the scroll compressor (Subsection 3.2.1) shows that there is a significant temperature gradient along the shell (temperature jump), specifically at sensors 6 and 7, which are located above the compression chamber, as seen in Figure 3.2 (a). Examining the heat flux distribution on the compressor shell using the results obtained from the numerical model has shown that most of the heat losses occur at the high pressure (HP) part of the compressor: from the bottom of the compression chamber to the top on the lateral side, and the top horizontal surface. Table 4.2 shows the ratio of the heat transferred to the ambient from the mentioned zones, Q̇_amb,top, to the total shell heat losses, Q̇_amb,tot. Table 4.2 shows that the heat flux ratio tends to vary with speed despite the pressure ratio and ambient temperature remaining constant. Furthermore, as the compression ratio is increased, from (T_cond = 40 °C, T_evap = 0 °C) to (T_cond = 60 °C, T_evap = 0 °C), the heat flux ratio remains relatively constant at 30 rps, 76 % and 75 %, respectively, or slightly decreases at 60 and 90 rps. The oil temperature at the bottom of the compressor depends on the outlet temperature, as depicted in Eq. (2.30). As the condensation temperature increases, the outlet temperature increases and, thus, T_oil increases as well. In this case the bottom of the compressor becomes hotter as the HP part of the compressor becomes hotter. However, when the speed is increased the oil temperature is assumed to remain approximately constant, since the variations in T_out are minor.
Therefore, the bottom of the compressor remains at the same temperature while the HP part becomes hotter as the speed increases. For this reason, the heat flux ratio was assumed to depend on the compressor power input: as the speed increases at the same operating condition, the mass flow rate and power input are increased. Since the motor efficiency is considered to be high and constant, and the motor is cooled by the refrigerant in any case, an increase in power input primarily augments the heat exchange of the HP part of the compressor, thus increasing the ratio. The correlation used for the heat flux ratio is given in Eq. (4.12), where Ẇ_nom is the electrical power input in operating condition 1 (Table 4.2). This correlation is valid if the average temperature of the bottom part of the cylinder, the LP part, is higher than the ambient temperature (the reverse situation could occur in a cooling mode but was not considered here). The average shell temperature of the LP cylinder part in scroll compressors can be measured by sensor 3 (middle of the motor). If the average LP shell temperature, T_shell,LP, is higher than the ambient temperature, sensor 6 (at the level of the discharge plenum) must be used to determine the average temperature of the HP part and the obtained heat losses must be multiplied by the reciprocal of the heat flux ratio obtained from Eq. (4.12). To sum up, the required instrumentation to determine T_shell in rotary compressors is:

 temperature sensor at the level of the space between the compression chamber and the motor bottom (T_shell = T_shell,3),
 this location reflects the area-weighted average shell temperature the most,
 one sensor required.
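The heat loss evaluation logic described above (a single representative sensor for the near-isothermal rotary shell, and for the scroll the HP-part losses corrected by the flux ratio with its validity condition) can be sketched as follows. The ratio `r_hf` is taken as a given input because the coefficients of Eq. (4.12) are not reproduced here, and all numbers in the example are illustrative.

```python
def rotary_heat_loss(q_shell):
    """Rotary: the shell is near-isothermal, so the heat loss computed
    from the single sensor-3 shell temperature is used directly."""
    return q_shell

def scroll_heat_loss(q_amb_hp, t_shell_lp, t_amb, r_hf):
    """Scroll: heat losses of the HP part (evaluated from the
    discharge-plenum sensor) are divided by the heat flux ratio R_HF of
    Eq. (4.12), i.e. multiplied by its reciprocal. The correlation is
    only valid when the LP shell temperature exceeds the ambient one."""
    if t_shell_lp <= t_amb:
        raise ValueError("Eq. (4.12) not valid: T_shell,LP <= T_amb")
    return q_amb_hp / r_hf

# Illustrative: 120 W lost from the HP part with R_HF ~ 0.8 (about 80 %
# of the losses occur on the HP part) gives 150 W of total shell losses.
q_tot = scroll_heat_loss(120.0, 35.0, 20.0, 0.8)
```

The explicit validity check reproduces the condition T_shell,LP > T_amb stated for the correlation.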
Similarly, the required instrumentation to determine T_shell in scroll compressors is:

 one sensor at the level of the discharge plenum (T_shell,HP = T_shell,6) and one sensor at the level of the middle of the motor (T_shell,LP = T_shell,3),
 heat losses from the high pressure part of the scroll compressor are evaluated,
 the correlation in Eq. (4.12) must be used to account for the total heat flux area,
 the correlation is valid if T_shell,LP > T_amb,
 two sensors required in total.

where A_L,HP and A_tot,HP are the HP lateral and total HP areas, respectively, and R_HF is the ratio of the top heat flux to the total heat flux calculated from Eq. (4.12).

Heat loss expression for compressors

Conclusions

As mentioned earlier, the main simplification introduced in the heat loss model presented in [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] was that the compressor shell was considered isothermal and at the refrigerant discharge temperature, which is not the case, particularly in scroll compressors, as seen in the results of Subsection 3.2.1. In this chapter the developed numerical models were used to determine the location of the temperature sensors that best represent T_shell of scroll and rotary compressors and, thus, allow a more accurate heat loss estimation on-field. The shell of the rotary compressor is mostly isothermal. The RMS errors between the different temperature sensor locations and the area-weighted averages of the shell temperatures obtained from the numerical model in different operating conditions were used to determine the optimal temperature sensor location. The temperature sensor located between the compression chamber and the motor assembly (sensor 3, Figure 3.2 (b)) is the most representative of the average shell temperature in rotary compressors.
In scroll compressors most of the shell (the LP part) remains relatively cold, and most of the heat flux occurs on the top part of the compressor shell (the HP part). Comparing the heat flux of the HP part to the total heat flux of the shell showed that approximately 80 % of the heat losses in scroll compressors occur on the top lateral part and the top flat horizontal plate. A correlation, Eq. (4.12), representing the ratio of the HP heat losses to the total shell heat losses, was derived based on the obtained results. However, the correlation is only valid if the average shell temperature of the LP part of the compressor, measured on the shell at the level of the middle of the motor (sensor 3 in Figure 3.2 (a)), is higher than T_amb. In such a case, the shell temperature at the level of the discharge plenum (sensor 6 in Figure 3.2 (a)) must be used to determine the average temperature of the HP part. To conclude, two temperature sensors must be used in scroll compressors and one temperature sensor must be used in rotary compressors when estimating compressor heat losses on-field. In this chapter, the correlations used to calculate the heat transfer coefficients in the natural convection regime for the top, bottom, and lateral compressor surfaces, obtained from a literature review, were presented. These equations are implemented on-field to calculate the heat losses of scroll and rotary compressors, respectively.

CHAPTER 5 EXPERIMENTAL VALIDATION OF THE PERFORMANCE ASSESSMENT METHOD

The objective of the experimental test bench was to validate the performance measurement method integrating the new heat loss evaluation approach, defined in Chapter 4, for the rotary compressor. A prototype was built and tested at several operating points. The reference values of the heating capacity were calculated from the water-side measurements.
After comparison with the reference values, the RMS error of the performance measurement method integrating the improved heat loss evaluation approach is 2.36 %. The RMS error obtained for the performance measurement method integrating the former heat loss evaluation approach, described by [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF], is 7.51 %. Finally, the experimental test bench made it possible to:

 validate the new approach to evaluate the heat losses of the rotary compressor,
 establish that the optimal location to measure the ambient air temperature is at the evaporator inlet in the prototype,
 determine that 0.5 % is a more appropriate constant value for the oil mass fraction.

Introduction

In this chapter the performance assessment method that integrates the improved heat loss evaluation, described in Section 4.3, is experimentally validated for the case of an HP equipped with a rotary compressor. For this purpose, an experimental test bench was built using an air-to-water HP prototype to be tested in various operating conditions. The prototype consists of an exterior unit comprising a variable speed rotary compressor, an electronic expansion valve, an evaporator, and a fan. The interior unit consists of a condenser that releases heat to a water circuit, which heats a hot water tank. The water inlet temperature is controlled with the aid of a thermal control unit. The tests are performed at different evaporation, condensation, and ambient temperatures, as well as various compressor speeds. The heating capacities estimated with the performance assessment method, which integrates the improved heat loss evaluation method (new location for T_shell and new correlations) described in Chapter 4, are compared to the reference values calculated from the water-side measurements.
Different sensor locations to measure the ambient temperature and their influence on the discrepancies between the heating capacities were investigated. Based on this information the best location for an ambient temperature sensor in this particular case is selected. One objective is to determine whether the already-installed outdoor temperature sensor of the HP is sufficient or whether an additional sensor is required inside the compressor enclosure. In addition, the influence of the oil mass fraction value is investigated.

Heat pump prototype

An air-to-water HP prototype was built to validate the performance assessment method that integrates the improved compressor heat loss evaluation presented in Subsection 4.3, Eq. (4.13), for rotary compressors. The prototype consists of an exterior and an interior unit. The exterior unit comprises a variable speed compressor, an EEV, a fan, and an evaporator. A metallic sheet envelopes the exterior unit components. Another metallic sheet is used to separate the compressor and expansion valve from the evaporator and fan, as depicted in Figure 5.1. Figure 5.2 and Figure 5.3 portray the exterior unit from the side of the heat exchanger surface and the fan, respectively. The wiring of the temperature and pressure sensors can be seen in the figures as well. The interior unit consists of a condenser that releases heat to the water circuit.

Experimental setup

The air-to-water prototype extends over three rooms. The exterior unit (rotary compressor, expansion valve, evaporator, and fan) is set up in climatic chamber number 1 and the interior unit, which consists of a condenser connected to a water loop, is set up in another room (number 2). The water loop is fitted with a tank and a water temperature control unit. The control unit of the water inlet temperature is situated in room 3. The temperature control unit is used to cool the water inlet temperature, T_w,i.
The experimental setup is shown in the corresponding figures. The sampling rate of the measured variables is 0.1 Hz for a duration of approximately two hours at each operating condition. Data acquisition and control of temperatures, pressures, water flow rate, etc., are done in Laboratory Virtual Instrument Engineering Workbench (LabVIEW), a development environment based on a graphical programming language used to create various applications that interact with measurement signals. The mass flow rate of water is used to regulate the condensation temperature, and the ambient temperature and relative humidity level are used to regulate the evaporation temperature. The condensation temperatures were set at 40 and 60 °C, and the evaporation temperatures ranged from 0 to 2 °C. In order to determine the compressor heat losses, the air temperature surrounding the compressor must be known. In the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] the ambient temperature was taken as the exterior temperature. In order to see whether this is indeed the best location for accurately measuring the temperature of the air surrounding the compressor, various temperature sensors were installed around the compressor (Table 5.1) to test the influence of the ambient temperature measurements on the accuracy of the performance measurement method. As seen from the instrumentation schematic (Figure 5.7), eight temperature sensors were installed around the compressor; seven on the metallic walls and one sensor (Tair7) floating right above the top of the compressor, i.e. hanging by its wire. In addition, one temperature sensor is placed in the calorimeter chamber (outdoor unit) at the evaporator inlet to measure the ambient temperature (Figure 5.8).
The prototype along with its instrumentation is a good representation of on-field installations. The only exception is that the compressor does not have an insulation layer around it; in most residential applications an insulation material covers the shell.

Comparison of proposed performance assessment method with reference method

As mentioned in Chapter 4, compressor heat losses in rotary compressors are calculated from Eq. (5.1).

Reference heating capacities

The heating capacity is calculated with the performance measurement method that integrates the new methodology to evaluate rotary compressor heat losses, presented in Eq. (5.1). A reference heating capacity value, $\dot{Q}_{cond}^{ref}$, represents the heat that is released by the condenser to the water, calculated from an energy balance equation on the water side, as follows:

$\dot{Q}_{cond}^{ref} = \dot{m}_w \, c_{p,w} \, (T_{w,o} - T_{w,i})$ (5.7)

where $T_{w,i}$ and $T_{w,o}$ are the water inlet and outlet temperatures, respectively, $\dot{m}_w$ is the water mass flow rate measured by an electromagnetic flow meter, and $c_{p,w}$ is the specific heat capacity of water. The RMS error between the heating capacity calculated with the performance measurement method, which integrates the new compressor heat loss evaluation method, and the reference values is then computed. The deviations between $\dot{Q}_{cond}$ and $\dot{Q}_{cond}^{ref}$ are investigated as a function of compressor speed, compressor power input, evaporation temperature, and shell and ambient temperature difference.

Additional factors influencing the results of the method

The influence of various ambient sensor locations on the discrepancies between $\dot{Q}_{cond}$ and $\dot{Q}_{cond}^{ref}$, in terms of RMS errors, is investigated. Based on the obtained information the best location of the ambient sensor is then chosen.
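As a minimal illustration of the water-side reference of Eq. (5.7), the balance can be evaluated in a few lines; the sensor readings below are hypothetical and the specific heat of water is taken constant, which is an assumption for this sketch:

```python
# Reference heating capacity from water-side measurements, Eq. (5.7):
# Q_cond_ref = m_dot_w * cp_w * (T_w_out - T_w_in)
# All numerical values here are hypothetical, for illustration only.

CP_WATER = 4186.0  # J/(kg.K), assumed constant over the temperature range

def reference_heating_capacity(m_dot_w, t_w_in, t_w_out, cp_w=CP_WATER):
    """Heat released by the condenser to the water loop, in W."""
    return m_dot_w * cp_w * (t_w_out - t_w_in)

# Example: 0.12 kg/s of water heated from 35.0 to 43.0 degC
q_ref = reference_heating_capacity(0.12, 35.0, 43.0)
print(round(q_ref))  # heating capacity in W
```

In a real installation, m_dot_w would come from the electromagnetic flow meter and the two temperatures from the water inlet and outlet sensors.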
In hermetic compressors, lubricating oil is in contact with the refrigerant and a fraction of it migrates along with the refrigerant through the cycle. It is difficult and inconvenient to measure precisely the oil mass fraction, $C_g$, with respect to the working fluid flow in real-time operating conditions. According to [START_REF] Tran | Refrigerant-based measurement method of heat pump seasonal performances[END_REF], $C_g$ is assumed to be constant and known, and equal to 2 %. The influence of different oil concentration (OC) values is also investigated.

Comparison with other heat loss values

The results obtained when integrating the new heat loss calculation method defined in Eq. (5.1) for rotary compressors are compared to the results obtained with the method previously defined in the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF]. Reference compressor heat losses, $\dot{Q}_{amb}^{ref}$, are calculated from the condenser and water-side energy balances, Eqs. (5.8) and (5.9). The RMS error between $\dot{Q}_{amb}^{ref}$ and $\dot{Q}_{amb}$ can be used as the absolute uncertainty of $\dot{Q}_{amb}$ in the uncertainty analysis used to determine the sensitivity index of the compressor heat losses in the final uncertainties of heating capacities obtained with the performance assessment method.

Operating conditions

The energy balances used in the performance assessment method are time-independent. Therefore, they are only applicable in steady-state conditions. In order to ensure steady-state conditions during tests, the following criteria are applied:
 acquisition period of measurement data of at least two hours,
 standard deviation of the ambient temperature below 0.3 °C,
 standard deviations of compressor shell and discharge temperatures below 0.3 °C,
 standard deviation of superheat below 1 °C.
Various operating conditions are tested, as depicted in Table 5.2. Evaporation and condensation temperature, ambient temperature, and compressor speed vary between the different operating conditions.
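The steady-state criteria above amount to simple standard-deviation checks on the acquired time series; a minimal sketch (function name and the example series are hypothetical, the thresholds are those stated in the text):

```python
import statistics

# Steady-state acceptance check following the criteria listed above.
# Each argument is a time series sampled over the acquisition period.

def is_steady_state(t_amb, t_shell, t_discharge, superheat):
    """True when all stability criteria (0.3 degC, 0.3 degC, 1 degC) are met."""
    return (statistics.stdev(t_amb) < 0.3
            and statistics.stdev(t_shell) < 0.3
            and statistics.stdev(t_discharge) < 0.3
            and statistics.stdev(superheat) < 1.0)

# Hypothetical series with small fluctuations around steady values
t_amb = [7.0, 7.1, 6.9, 7.0]
t_shell = [78.2, 78.3, 78.2, 78.1]
t_dis = [82.0, 82.1, 82.0, 81.9]
sh = [5.2, 5.4, 5.1, 5.3]
print(is_steady_state(t_amb, t_shell, t_dis, sh))  # True
```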
Uncertainty values analysis

Uncertainties of the measured variables (sensor uncertainties) required in the calculations described in Section 5.4 are shown in Table 5.3. Although the measurement uncertainty of the temperature sensors (T1-T3) is 0.2 °C, they measure the pipe surface temperature rather than the temperature of the fluid inside the pipe. The uncertainty was therefore expanded to 0.8 °C [START_REF] Tran | Méthodes de mesure in situ des performances annuelles des pompes à chaleur air/air résidentielles[END_REF]. The uncertainty of the exterior air temperature is 0.8 °C, including the heterogeneity of the air temperature. The uncertainty of the wattmeter varies with the frequency: 0.2 %, 0.4 %, and 0.5 % for 30, 60, and 90 rps, respectively. The exterior temperature is also measured with a PT100 temperature sensor located at the center of the outdoor unit fan, at the air inlet side. The uncertainty introduced by the data acquisition system is included in these uncertainties. Refrigerant pressures are measured at the compressor inlet and outlet. In some cases a required variable Y (here, the heating capacity) cannot be measured directly, but is calculated as a function of one or more measured variables, i.e. Y = f(X1, X2, ...). Each of the measured variables, X1, X2, etc., has an associated random variability, referred to as the uncertainty of that variable. In order to quantify the final uncertainty of the calculated value Y, it must be determined how the uncertainties of the measured variables propagate into the value of Y. The standard method described in NIST Technical Note 1297 is used to determine the propagation of uncertainties [START_REF] Taylor | Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results[END_REF]. The method assumes that individual measurements are uncorrelated and random.
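As an illustration of this propagation rule, a minimal numerical sketch is given below, applied to the water-side heating capacity of Eq. (5.7). The partial derivatives are estimated by central finite differences, and all sensor values and uncertainties are hypothetical:

```python
import math

# Propagation of uncorrelated measurement uncertainties through Y = f(X1, X2, ...),
# with partial derivatives estimated by central finite differences.
# Applied here, for illustration, to Q = m_dot * cp * (T_out - T_in).

def propagate(f, x, sigma, h=1e-6):
    """Return (Y, sigma_Y, sensitivity indices) for uncorrelated inputs x."""
    y = f(x)
    var = 0.0
    terms = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dydx = (f(xp) - f(xm)) / (2 * h)  # central finite difference
        t = (dydx * sigma[i]) ** 2
        terms.append(t)
        var += t
    sigma_y = math.sqrt(var)
    return y, sigma_y, [t / var for t in terms]

cp_w = 4186.0  # J/(kg.K)
q = lambda x: x[0] * cp_w * (x[2] - x[1])  # x = [m_dot, T_in, T_out]
y, s, idx = propagate(q, [0.12, 35.0, 43.0], [0.002, 0.1, 0.1])
print(round(y), round(s))  # heating capacity and its absolute uncertainty, W
```

By construction the returned sensitivity indices sum to unity, which matches the property stated for Eq. (5.11) below.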
The final uncertainty of the calculated value Y is obtained as follows:

$\sigma_Y = \sqrt{\sum_i \left(\frac{\partial Y}{\partial X_i}\right)^2 \sigma_{X_i}^2}$ (5.10)

where $\sigma_{X_i}$ is the absolute uncertainty of each measured quantity. The sensitivity index of a variable with respect to the calculated variable is:

$S_i = \left(\frac{\partial Y}{\partial X_i}\right)^2 \frac{\sigma_{X_i}^2}{\sigma_Y^2}$ (5.11)

The sum of all sensitivity indices is equal to unity.

Results of on-field performance assessment method

As discussed above, the compressor integrated in the air-to-water prototype was the same as the one used in the rotary compressor test bench described in Chapter 3. During the compressor test bench experiments, an oil separator was used to separate oil from the refrigerant. The mass flow rate of the working fluid (mix of oil and refrigerant) was measured before the oil separator; after the oil was separated, the refrigerant mass flow rate was measured again. Thus, $C_g$ levels were measured during these tests. These measured values are referred to as experimental $C_g$ values. It was noticed that the oil concentration (OC) was very small, much smaller than the value taken by [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF], where $C_g$ = 2 %. It was also noticed that OC is not constant and depends primarily on the compressor speed. Since the compressor used in the prototype is the same, it was assumed that the $C_g$ values are identical at corresponding speeds. Unless otherwise mentioned, heating capacities are calculated with the experimental $C_g$ values. The RMS errors and deviations of heating capacities from the reference values are presented in Figure 5. It can be seen that when $T_{amb}$ is measured from the exterior air (Text sensor), it yields the smallest discrepancies between $\dot{Q}_{cond}$ and $\dot{Q}_{cond}^{ref}$.
Based on this information it can be concluded that the temperature of the exterior air is optimal in terms of accuracy, and also in terms of cost and ease of installation, since no additional temperature sensor is required for this measurement; an exterior air temperature sensor is already installed in heat pump units. Heating capacities were also calculated with the performance measurement method integrating the previous compressor heat loss methodology defined by [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF], at various operating conditions. The deviations of these values were compared to the ones obtained from the method integrating the improved $\dot{Q}_{amb}$ evaluation method, shown in Figure 5.12.

Figure 5.12. Deviations of heating capacities calculated using the previously defined $\dot{Q}_{amb}$ evaluation method and the more accurate method from the reference values, as a function of compressor speed

The RMS error of the heating capacities calculated with $\dot{Q}_{amb}$ from Eq. (1.19) is 7.51 %, as opposed to an RMS error of 2.36 % calculated using $\dot{Q}_{amb}$ from Eq. (5.1). The exterior air temperature measurement was used for $T_{amb}$. It was noticed that as the compressor speed increases, the deviations in the old method decrease. As the speed increases, the compressor power input increases significantly, thus weighing more in the compressor energy balance equation, Eq. (1.9), and the error introduced by inaccurate heat loss estimation becomes less significant. Observing the experimental OC values, an approximation of $C_g$ = 0.5 % ± 100 % seems more reasonable. Heating capacities calculated with experimental $C_g$ values, $C_g$ = 2 %, and $C_g$ = 0.5 % are compared to reference heating capacities in Figure 5.13. In these calculations ambient heat losses were evaluated with the new method.
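The relative RMS errors quoted throughout this section can be reproduced with a few lines; the two series below are hypothetical calculated and reference heating capacities:

```python
import math

# Relative RMS error between calculated and reference heating capacities,
# as used to compare the heat loss evaluation methods (data hypothetical).

def rms_relative_error(calculated, reference):
    """Root-mean-square of the relative deviations (c - r) / r."""
    n = len(calculated)
    return math.sqrt(sum(((c - r) / r) ** 2
                         for c, r in zip(calculated, reference)) / n)

q_calc = [4100.0, 5150.0, 6000.0]  # W, hypothetical
q_ref = [4000.0, 5000.0, 6100.0]   # W, hypothetical
print(round(100 * rms_relative_error(q_calc, q_ref), 2))  # percent
```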
The RMS error obtained with the experimental $C_g$ values, which vary with compressor speed, was the smallest; $C_g$ = 2 % and $C_g$ = 0.5 % yielded RMS errors of 6.52 % and 3.36 %, respectively. Therefore, assuming that the oil mass fraction is constant and equal to 0.5 % gives more accurate performance values than assuming that it is equal to 2 %. The optimal instrumentation to accurately measure compressor heat losses on-field is schematically illustrated in Figure 5.14. Figure 5.15 compares reference heating capacities with calculated heating capacities. Reference values include relative uncertainties obtained from the error propagation formula, Eq. (5.10). Reference compressor heat losses, $\dot{Q}_{amb}^{ref}$, were calculated using the refrigerant mass flow rate obtained from the condenser and water-side energy balances, Eqs. (5.8) and (5.9).

Figure 5.16. Compressor heat losses calculated from the new heat loss evaluation method, Eq. (5.1), the old heat loss evaluation method, Eq. (1.19), and reference values, Eq. (5.8), in tested operating conditions

The RMS error of heat losses from the improved heat loss evaluation method, Eq. (5.1), was 10.4 %, and the RMS error of the previously defined heat loss evaluation method, Eq. (1.19), was 19.9 %. Assuming that the relative uncertainty of compressor heat losses is the RMS error, i.e. 10.4 %, and taking experimental OC values, an uncertainty analysis shows that the average sensitivity index of the improved compressor heat losses is 27 %. Table 5.4 shows the sensitivity indices of each variable in the uncertainty of the calculated heating capacity.
The oil mass fraction in the uncertainty analysis was equal to 0.02 ± 0.02, the same as in the analysis of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF], which determined that the sensitivity index of compressor heat losses in the final uncertainty was 40 %. The deviations in heating capacity calculated with the performance assessment method, where compressor heat losses were obtained from the new evaluation method, Eq. (5.1), are presented in Figure 5.17 as a function of compressor speed, in Figure 5.18 as a function of compressor power input, in Figure 5.19 as a function of evaporation pressure, and in Figure 5.20 as a function of the temperature difference between the shell temperature sensor at the level between the compression chamber and the motor (sensor number 3, Figure 3.2 (b)) and the exterior air temperature. The parametric study shows no particular tendencies when the deviations are plotted as a function of compressor speed, power input, or evaporation temperature. To conclude, based on the information presented above, the criteria to obtain the most accurate results when using the performance assessment method to calculate HP performances are:
 $T_{shell}$ measured at the level of the space between the compression chamber and the bottom of the motor (sensor 3 in Figure 3.2 (b)),
 $T_{amb}$ evaluated from the exterior air temperature, for instance, measured at the evaporator inlet,
 $C_g$ = 0.5 %,
 steady-state operating conditions.
These criteria are valid for air-to-air and air-to-water HPs that comprise a rotary compressor. Steady-state conditions exclude conditions where the process variables fluctuate with respect to time, such as startup and defrosting periods.
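Putting the criteria above together, the measurement chain of the method (compressor energy balance to mass flow rate to heating capacity) can be sketched as follows. The power, heat loss, and enthalpy values are hypothetical (enthalpies would normally come from an equation-of-state library fed by the non-intrusive pressure and temperature estimates), and the oil treatment is simplified here to a constant mass fraction, which is an assumption of this sketch:

```python
# Sketch of the performance assessment chain:
# compressor energy balance -> working-fluid mass flow rate -> heating capacity.
# All numerical values are hypothetical, for illustration only.

C_G = 0.005  # assumed constant oil mass fraction (0.5 %)

def mass_flow_rate(w_comp, q_amb, h_suction, h_discharge):
    """Total working-fluid mass flow rate from the compressor energy balance."""
    return (w_comp - q_amb) / (h_discharge - h_suction)

def heating_capacity(m_dot, h_cond_in, h_cond_out, c_g=C_G):
    """Condenser-side heating capacity, counting only the refrigerant fraction
    (simplified oil treatment)."""
    return (1.0 - c_g) * m_dot * (h_cond_in - h_cond_out)

m_dot = mass_flow_rate(w_comp=1500.0, q_amb=150.0,
                       h_suction=410e3, h_discharge=455e3)
print(round(heating_capacity(m_dot, h_cond_in=450e3, h_cond_out=270e3)))  # W
```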
Conclusions

The aim of the experimental test bench was to validate the performance assessment method that integrated the improved heat loss evaluation method defined in Section 4.3 of Chapter 4 for rotary compressors. For this purpose an air-to-water HP prototype was built and tested in various operating conditions. The prototype consisted of an exterior unit, comprising a compressor, EEV, evaporator, and fan, installed in a climatic chamber. The interior unit comprised a condenser connected to a water loop coupled with a water tank; these components were installed in another room. Finally, the water inlet temperature was regulated by a thermal regulator installed in a third room. The reference heating capacity values were calculated from the water side measurements. The compressor used in the tests was a rotary compressor, the same as the one used in the test bench presented in Chapter 3. The RMS error of the improved performance assessment method relative to the reference heating capacity values is 2.36 %. This is an improvement compared to the RMS error of 7.51 % obtained from the performance assessment method that integrated the previously defined heat loss evaluation method, described by [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF]. The influence of different locations to measure the ambient temperature was investigated. It was concluded that measuring the exterior temperature (outside the exterior unit envelope), for instance at the evaporator inlet, yields the best results in terms of accuracy and practicality. This location to measure the ambient temperature offers more flexibility for adaptation to various HP systems, since it is insensitive to different configurations of internal components inside the exterior unit.
In other words, as the component layout inside the exterior unit may change, an ambient sensor located, for instance, on the metal sheet separating the compressor from the evaporator (Tair2) may measure temperatures lower or higher than the ones measured by Tair2 during the tests on the air-to-water prototype, and may no longer be an accurate representation of the air temperature surrounding the compressor. In addition, the chosen location does not require an additional sensor, since the evaporator inlet temperature is typically measured by default. It was also concluded that an oil mass fraction of 2 % contributes greatly to the deviations between the heating capacities. A constant oil mass fraction value of 0.5 %, derived from the OC measurements during the experimental test benches described in Chapter 3, yields more accurate results. The improved heat loss evaluation method still needs to be validated in scroll compressors, which requires the use of Eq. (4.14), defined in Section 4.3. Also, as mentioned previously, the energy balances employed in the performance assessment method require steady-state conditions. The error introduced by the method when the operating mode is in unsteady-state conditions should also be evaluated. The optimal instrumentation used to determine $T_{shell}$, allowing a more accurate heat loss estimation of rotary and scroll compressors on-field, is presented in Table 5.5.

Table 5.5. Optimal instrumentation to determine $T_{shell}$ on-field
 Scroll, 2 sensors:
- discharge plenum (sensor 6, Fig. 3.2 (a)): a correlation for the ratio of HP losses to the total shell losses (Eq. (4.12)) must be used to account for the total heat flux area; the correlation is valid if $T_{shell,LP} > T_{amb}$;
- middle of the motor (sensor 3, Fig. 3.2 (a)): used to measure $T_{shell,LP}$.
 Rotary, 1 sensor:
- between the compression chamber and motor bottom (sensor 3, Fig. 3.2 (b)): best represents the area-weighted average of $T_{shell}$ calculated by the numerical model.
CONCLUSIONS AND PERSPECTIVES

The objective of this thesis was to establish a more accurate approach for evaluating compressor heat losses on-field. To this end, two numerical models, for scroll and rotary compressors, were developed in order to investigate the thermal profiles and the heat flux distribution over the compressor shells at different operating points. The performance measurement method integrating the new heat loss evaluation approach was validated experimentally on an air-to-water HP equipped with a rotary compressor, at several operating points. This improved method presents an RMS error of 2.36 % with respect to the reference heating capacity values, whereas the previous method presents an error of 7.51 %. Perspectives concerning the numerical models, the performance measurement method, and the fault detection and diagnostics method are also discussed.

Conclusions

Heat pumps exhibit a high theoretical efficiency in residential applications. Their development is, therefore, important when attempting to reduce heating energy consumption in dwellings. However, HP performance values established in controlled laboratory conditions may vary from the ones obtained on-field due to several factors, such as controls, oversizing, faults, different temperature and flow set points, and climatic conditions. Real-time on-field performance evaluation provides more reliable data. Nevertheless, measuring accurately on-field heating capacity and coefficient of performance (COP) of HPs is difficult, particularly of air-to-air HPs, since measuring air enthalpies and specifically air mass flow rate on-field is challenging. A promising method for measuring heat pump performances on-field has been previously developed and published. The method is based on refrigerant fluid measurements and component energy/mass balances.
Nonintrusive sensors, such as surface temperature sensors, are used to estimate pressure and refrigerant mass flow rate in different types of heat pump systems, including air-to-air. The accuracy of the method is strongly dependent on the evaluation of heat transfer from the compressor towards the ambient air, i.e. compressor heat losses. It was stated in the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] that the sensitivity index of heat losses in the relative uncertainty of the heating capacity predicted by the method is 40 %. For this reason, the objective of this work was to develop an embedded method to evaluate more accurately compressor heat losses on-field. One of the main simplifications of the previously defined method used to estimate compressor heat losses is the assumption that the compressor shell is isothermal, with a temperature equal to the refrigerant discharge temperature. In order to investigate whether the shell is indeed isothermal and whether its temperature is equal to the refrigerant temperature at discharge, numerical models for scroll and rotary compressors were developed and experimentally validated in this work. The numerical models are based on a hybrid modeling approach, which combines integral and differential formulations. The calculation procedure in the scroll and rotary models is divided into three fundamental steps: thermodynamic compression cycle analysis, detailed thermophysical flow analysis executed with the aid of a CFD program, and a compressor network analysis. The first step of the model evaluates the initial and boundary conditions used in the integral and differential parts of the model, steps two and three, respectively. The second step of the model uses a CFD program, and its outputs are transmitted to the final step of the model.
A thermal network analysis verifies the validity of the results obtained from the CFD simulations and determines whether the model can be considered globally converged in the third step of the model. Experimental validation of the numerical models determined the RMS errors of the dimensionless temperature values of the external thermal profiles: 0.04-0.08 and 0.03-0.05 in scroll and rotary compressors, respectively. The locations of the temperature sensors that best represent $T_{shell}$ of scroll and rotary compressors, and the zones which contribute most to the heat losses, were determined with the aid of the numerical models. The simulation results showed that the shell of the rotary compressor is relatively uniform, unlike in the case of the scroll compressor, where a strong temperature jump at the level of the compression chamber suggests that an important share of the heat losses occurs in the high pressure part. In fact, approximately 80 % of the heat losses in scroll compressors occurred in the high pressure part in tested conditions. For this reason, heat losses from the high pressure part of the scroll compressor are evaluated. The required instrumentation to determine $T_{shell}$ in scroll compressors is:
 one sensor at the level of the discharge plenum and one sensor at the level of the middle of the motor,
 two sensors required in total.
Similarly, the required instrumentation to determine the compressor shell temperature in rotary compressors is:
 a temperature sensor at the level of the space between the compression chamber and the motor bottom,
 one sensor required.
In addition, the correlations used to calculate heat transfer coefficients in the natural convection regime for the top, bottom, and lateral compressor surfaces were selected for the embedded compressor heat loss evaluation. The improved compressor heat loss evaluation is then integrated in the performance assessment method.
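The correlation selection itself is detailed in Chapter 4 and is not reproduced here. As a generic illustration of the kind of natural-convection heat loss estimate involved, the sketch below uses the textbook Churchill-Chu correlation for a vertical surface, which is not necessarily the correlation selected in this work; the shell dimensions and air properties are hypothetical:

```python
import math

# Generic natural-convection heat loss estimate for the lateral surface of a
# vertical cylindrical shell, using the textbook Churchill-Chu correlation.
# Air properties evaluated at an assumed film temperature (~50 degC).
NU_AIR = 1.8e-5      # kinematic viscosity, m^2/s
ALPHA_AIR = 2.6e-5   # thermal diffusivity, m^2/s
K_AIR = 0.028        # thermal conductivity, W/(m.K)
PR_AIR = NU_AIR / ALPHA_AIR
G = 9.81

def churchill_chu(t_shell, t_amb, height):
    """Average convection coefficient h (W/m^2.K) on a vertical surface."""
    beta = 1.0 / (0.5 * (t_shell + t_amb) + 273.15)  # ideal-gas expansion coeff.
    ra = G * beta * abs(t_shell - t_amb) * height**3 / (NU_AIR * ALPHA_AIR)
    nu = (0.825 + 0.387 * ra**(1 / 6)
          / (1 + (0.492 / PR_AIR)**(9 / 16))**(8 / 27))**2
    return nu * K_AIR / height

def lateral_heat_loss(t_shell, t_amb, height, diameter):
    """Heat lost by the lateral shell area at the given temperature difference."""
    h = churchill_chu(t_shell, t_amb, height)
    area = math.pi * diameter * height
    return h * area * (t_shell - t_amb)

# Hypothetical shell: 0.35 m tall, 0.12 m diameter, 80 degC shell, 20 degC air
print(round(lateral_heat_loss(80.0, 20.0, 0.35, 0.12), 1))  # W
```

The top and bottom surfaces would use their own horizontal-plate correlations, and the contributions would then be summed over the shell.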
An air-to-water HP prototype was built and tested in various operating conditions to validate the performance assessment method that integrated the improved heat loss evaluation method for rotary compressors, as the compressor of the HP prototype was of rotary type. The reference heating capacity values were calculated from the water side measurements. Experimental validation was also used to determine the optimal location of the sensor used to measure the temperature of the air surrounding the compressor, $T_{amb}$. The exterior air temperature, measured for instance at the evaporator inlet, was chosen as the most appropriate representation of $T_{amb}$. The improved method yielded an RMS error of 2.36 % from the reference values, whereas the performance assessment method integrating the heat loss evaluation method presented in the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] yielded 7.51 %.

Perspectives

Suggestions for future work can be divided into three groups: one related to the numerical models, another related to the performance assessment method, and the third related to fault detection and diagnostics.

Numerical model

The model can be exploited to observe the behavior of exterior thermal profiles in a wider range of operating conditions. A matrix consisting of different combinations of ambient, condensation, and evaporation temperatures and compressor speeds can be used for this purpose. Also, the influence of different insulation materials and their thicknesses can be investigated, as well as the influence of component dimensions and layout. The model can then be used to characterize scroll and rotary compressors of different sizes and to adapt the heat loss evaluation methods, described in Chapter 4, accordingly.

Performance assessment method

The improved heat loss evaluation method needs to be validated also in scroll compressors.
The validation of the method in scroll compressors is particularly interesting since the scroll compressor shell is not isothermal; the proposition to measure the heat losses only from the high pressure part of the compressor must be tested. The performance assessment method is applicable only in steady-state conditions, since the energy balances that constitute the method are time-independent. It is interesting to test the validity of, and the error introduced by, the method in unsteady operating conditions, such as startup and defrosting periods, where the measured variables fluctuate with time.

Fault detection and diagnostics

The method determines indirectly the refrigerant mass flow rate using the compressor energy balance. This information can be utilized for fault detection and diagnostics purposes. The method on its own provides means to optimize HP performance by providing real-time performance data in terms of COP and heating capacity. However, coupling the method with a fault detection and diagnostics method is of interest, since it can minimize performance degradation, maintenance costs, and machine down-time. A fault detection and diagnostics method described by [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF] in Chapter 1 can be coupled with the performance assessment method. The most common residential HP faults, such as exchanger fouling and refrigerant overcharge and undercharge, and their detection methodology, could be tested and validated with the air-to-water prototype described in Section 5.2. The capability of the performance assessment method to perform on-field diagnostics can then be validated.
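The decoupling-feature comparison described above reduces, in its simplest form, to a threshold test: a fault is flagged only when the deviation from the fault-free baseline exceeds the combined measurement uncertainty. A minimal sketch, with hypothetical feature values, uncertainties, and a hypothetical coverage factor k:

```python
import math

# Sketch of a decoupling-feature fault test: a fault is flagged only when the
# deviation from the fault-free baseline exceeds k times the combined standard
# uncertainty (all values and the threshold policy are hypothetical).

def fault_detected(feature_value, baseline_value,
                   sigma_feature, sigma_baseline, k=2.0):
    """Flag a fault when |deviation| > k * combined standard uncertainty."""
    combined = math.hypot(sigma_feature, sigma_baseline)
    return abs(feature_value - baseline_value) > k * combined

# Example feature: evaporator air mass flow rate (kg/s) inferred from the
# indirectly evaluated refrigerant mass flow rate
print(fault_detected(0.42, 0.50, 0.015, 0.010))  # True
```

This makes explicit why a lower uncertainty of the performance assessment method enables more sensitive fault detection: the detection threshold k * combined shrinks with the propagated uncertainty.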
The fault detection and diagnostics method, presented in the work of [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF], compares the deviations between fault-free and faulty decoupling feature values (e.g. air mass flow rate at the evaporator side), which are calculated using the indirect evaluation of the refrigerant mass flow rate provided by the performance assessment method. Naturally, the uncertainty of the mass flow rate will influence the uncertainty of the calculated decoupling feature value. Therefore, if the uncertainty is significant, some faults may be falsely detected or ignored. However, since the improved compressor heat loss evaluation decreases noticeably the uncertainty of the performance assessment method, a more efficient fault detection could be enabled.

Résumé

Actuellement, la plupart des fabricants de pompes à chaleur (PAC) fournissent les valeurs de coefficients de performances (COP) obtenus en laboratoire en conditions contrôlées et standardisées. Une méthode prometteuse, appelée méthode de mesure des performances dans la suite, d'évaluation des performances de PAC in situ, basée sur le bilan énergétique du compresseur, a été présentée par [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF]. Cette méthode détermine le débit de fluide frigorigène et est compatible avec différents types de PAC, notamment air-air, et des cycles frigorifiques plus complexes. [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] ont déterminé que l'incertitude sur l'évaluation de pertes thermiques du compresseur contribue à hauteur de 40 % sur l'erreur d'estimation de la puissance thermique. L'objectif de cette thèse est d'établir une méthode simplifiée quant à l'instrumentation pour mesurer les pertes thermiques in situ.
Pour cela, deux modèles numériques détaillés sont développés afin d'examiner la distribution de température sur l'enveloppe de deux types de compresseurs, scroll et rotary. Les mesures expérimentales fournies par un fabricant de compresseurs, Mitsubishi Heavy Industries (MHI), sont utilisées pour calibrer et valider les modèles numériques. Ces derniers permettent de définir deux protocoles de mesures différents pour les deux compresseurs. Ensuite, le protocole établi pour le compresseur rotary est intégré dans la méthode de mesure des performances. Les puissances thermiques calculées sont comparées avec des valeurs de référence, obtenues à partir d'un prototype en banc d'essai à EDF Lab Les Renardières.

Mots Clés

Pompes à chaleur, coefficient de performances, rotary, scroll, puissance calorifique, CFD, modèle numérique, transfert de chaleur, compresseur, pertes thermiques, incertitude, in-situ, fluide frigorigène, cycle thermodynamique

Abstract

Currently, most heat pump (HP) manufacturers provide coefficient of performance (COP) values obtained in laboratories under standardized controlled operating conditions. These COP values are not necessarily representative of those obtained on-field. A promising method, referred to as the performance assessment method, that measures heat pump performances in-situ based on the compressor energy balance, was presented by [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF]. The method determines the refrigerant mass flow rate and has the capability of measuring performances of various HP types, such as air-to-air, as well as more complex refrigeration cycles. The method abstains from intrusive measurements, and is, therefore, perfectly suitable for in-situ measurements.
As shown in the work of [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF], compressor heat losses account for 40% in the final uncertainty of performance values obtained with the performance assessment method. The objective of this thesis is to establish a rather simplified measurement method, in terms of instrumentation, that is used to determine compressor heat losses insitu. For this purpose two detailed numerical models for assessing the temperature fields of the scroll and rotary compressor shells were developed. Experimental measurements obtained with the help of compressor manufacturer, Mitsubishi Heavy Industries (MHI), are used to validate and calibrate the numerical models. The developed numerical models allow to define two different measurement protocols for both compressors. Established compressor heat loss protocol for rotary compressor is then integrated in the performance assessment method and the obtained heating capacities are compared with reference measurements in an experimental test bench in EDF Lab Les Renardières. Figure 1 . 1 11 Figure 1.1 Schematic (a) and P-h diagram (b) of a HP refrigeration cycle ................ Figure 1.2 Reversible heat pump in heating (a) and cooling (b) mode[START_REF] Costic | Pompes à chaleur en habitat individuel[END_REF]) .................................................................................. Figure 1.3 Working principle of a scroll compressor (Hitachi Industrial Equipment Systems, 2017) .................................................................. Figure 1.4 Working principle of a rotary compressor[START_REF] Lee | Development of capacity modulation compressor based on a two stage rotary compressor. 2. Performance experiments and P-V analysis[END_REF] ..... Figure 1.5 Refrigeration cycle P-h diagram (a) and schematic with required measurements (b) of basic cycles ........................................................ Figure 1.6. 
Refrigeration cycle P-h diagram (a) and schematic with required measurements (b) of FT injection cycle
Figure 1.7. Refrigeration cycle P-h diagram (a) and schematic with required measurements (b) of IHX injection cycles
Figure 2.1. Example of heat flux sensors used to measure compressor heat fluxes in the work of Dutra & Deschamps (2013)
Figure 2.2. Illustration of a hybrid model approach when modeling heat transfer of stator
Figure 2.3. Schematic of the three calculation steps of the model
Figure 2.4. Computational domains of scroll (a) and rotary (b) compressors
Figure 2.5. Thermodynamic refrigeration cycle of a heat pump with associated power distributions inside the compressor
Figure 2.6. Computational domain of scroll compressor in ANSYS Workbench pre-processing tool, Design Modeler
Figure 2.7. Structured (on the left) and unstructured (on the right) grids
Figure 2.8. Scroll compressor mesh in ANSYS Workbench pre-processing tool, Mechanical
Figure 2.9. Equilateral and highly skewed triangle
Figure 2.10. Thin layer of oil in solid phase between refrigerant and oil sump in liquid state
Figure 2.11. Required heat to reach Tdis (temperature of the compressed fluid at discharge)
Figure 2.12. Heat fluxes released from the fluid to compression chamber
Figure 2.13. Flow inside the compression chamber (above) is approximated as the flow inside a pipe (below)
Figure 2.14. Computational domain of rotary compressor in ANSYS Workbench pre-processing tool, Design Modeler
Figure 2.15. Insulation layer highlighted in green around the exterior shell of the compressor
Figure 2.16. Rotary compressor mesh in ANSYS Workbench pre-processing tool, Mechanical
Figure 3.1. Calorimeter chamber of the scroll and rotary compressor test bench used for experimental validation
Figure 3.2. Location of thermocouples on compressor shells of scroll (a) and rotary (b) compressors
Figure 3.3. Experimental and numerical shell temperature distributions in scroll compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 1)
Figure 3.4. Experimental and numerical shell temperature distributions in scroll compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 60 rps (operating condition 2)
Figure 3.5. Experimental and numerical shell temperature distributions in scroll compressor at Tcond = 60 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 3)
Figure 3.6. Experimental and numerical shell temperature distributions in scroll compressor at Tcond = 40 °C, Tevap = 15 °C, and Tamb = 25 °C, 30 rps (operating condition 4)
Figure 3.7. Experimental and numerical shell temperature distributions in scroll compressor at Tcond = 40 °C, Tevap = 15 °C, and Tamb = 25 °C, 90 rps (operating condition 5)
Figure 3.8. Experimental and numerical component temperatures in scroll compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 1)
Figure 3.9. Experimental and numerical component temperatures in scroll compressor at Tcond = 40 °C, Tevap = 15 °C, and Tamb = 25 °C, 30 rps (operating condition 4)
Figure 3.10. Experimental and numerical component temperatures in scroll compressor at Tcond = 40 °C, Tevap = 15 °C, and Tamb = 25 °C, 90 rps (operating condition 5)
Figure 3.11. Experimental and numerical shell temperature distributions in rotary compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 1)
Figure 3.12. Experimental and numerical shell temperature distributions in rotary compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 60 rps (operating condition 2)
Figure 3.13. Experimental and numerical shell temperature distributions in rotary compressor at Tcond = 60 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 3)
Figure 3.14. Experimental and numerical shell temperature distributions in rotary compressor at Tcond = 40 °C, Tevap = 15 °C, and Tamb = 25 °C, 30 rps (operating condition 4)
Figure 3.15. Experimental and numerical shell temperature distribution in rotary compressor at Tcond = 40 °C, Tevap = 15 °C, and Tamb = 25 °C, 90 rps (operating condition 5)
Figure 3.16. Temperature contours of rotary compressor shell in absolute temperature (K)
Figure 3.17. Temperature contours of scroll compressor shell in absolute temperature (K)
Figure 3.18. Thin oil film forming an interface between the refrigerant gas and a solid component
Figure 3.19. Different inlet locations in scroll compressor and their effect on the refrigerant flow paths
Figure 4.1. Location of temperature sensors required to evaluate heat losses of rotary (on the left) and scroll (on the right) compressors
Figure 5.1. Exterior unit with a compressor and EEV separated by a metallic sheet from the evaporator and fan
Figure 5.2. Exterior unit from the side of the evaporator
Figure 5.3. Exterior unit from the side of the fan
Figure 5.4. Exterior unit located in the climatic chamber 2 (outdoor chamber)
Figure 5.5. Indoor unit with condenser coupled to a water loop (on the left) and control unit of water inlet temperature located in room number 3 (on the right)
Figure 5.6. Air-to-water HP prototype and required instrumentation installed in three climatic chambers
Figure 5.7. Locations of sensors used to measure the temperature of air surrounding the compressor
Figure 5.8. Thermometer (PT100) used to measure the evaporator inlet air, i.e. Text
Figure 5.9. Refrigeration cycle P-h diagram (a) and a schematic with required measurements (b) for the experimental validation of the performance assessment method
Figure 5.10. Root mean square errors of the estimated heating capacities when Qamb is obtained using different air temperature sensors
Figure 5.11. Deviations of heating capacities from reference values when Qamb is obtained using different air temperature sensors
Figure 5.12. Deviations of heating capacities calculated using the previously defined Qamb evaluation method and the more accurate method from the reference values as a function of compressor speeds
Figure 5.13.
Comparison of heating capacities deviations from the reference values with different Cg values

Figure 1.2. Reversible heat pump in heating (a) and cooling (b) mode [START_REF] Costic | Pompes à chaleur en habitat individuel[END_REF]
Figure 1.6. Refrigeration cycle P-h diagram (a) and schematic with required measurements (b) of FT injection cycle

Heat flux sensors used in the work of Dutra & Deschamps (2013) can be seen in Figure 2.1. More information on the advantages and disadvantages of the listed experimental techniques can be found in Appendix B.

Figure 2.1. Example of heat flux sensors used to measure compressor heat fluxes in the work of [START_REF] Dutra | Experimental Characterization of Heat Transfer in the Components of a Small Hermetic Reciprocating Compressor[END_REF]

Figure 2.2 is a graphical illustration of an example of a hybrid model. The motor assembly enclosed in the compressor shell, together with the surrounding fluid, is portrayed in the figure. Conduction within a volume is modeled with differential formulations, and heat transfer between solid and fluid components is modeled with integral formulations. Information is exchanged between the integral and differential approaches until convergence is achieved. In Figure 2.2, Δx and Δy are the spatial mesh sizes in the x and y directions.

Figure 2.2. Illustration of a hybrid model approach when modeling heat transfer of stator

Most hybrid models found in the literature consider reciprocating compressors. [START_REF] Diniz | A lumped-parameter thermal model for scroll compressors including the solution for the temperature distribution along the scroll wraps[END_REF] developed a hybrid thermal model for scroll compressors. However, the model integrates a detailed modeling of the scroll wraps to obtain the temperature distribution along this component. Thus, the model cannot be adapted to rotary compressors.
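The hybrid-model coupling described above (a differential conduction solution exchanging information with an integral, Newton-cooling formulation until convergence) can be illustrated with a minimal one-dimensional Python sketch. The geometry, material properties, and tolerance below are assumed values for illustration, not those of the compressor model.

```python
def hybrid_wall(t_hot, t_fluid, k, thickness, h, n=11, tol=1e-9, max_iter=10_000):
    """Steady 1D wall: fixed temperature on one face, convection to a fluid on the other.

    The interior nodes are updated with a finite-difference (differential) sweep,
    while the surface node balances conduction against an integral convective flux;
    the two steps are iterated until the exchanged heat flux stops changing.
    """
    dx = thickness / (n - 1)
    temps = [t_fluid] * n
    temps[0] = t_hot
    q_old = 0.0
    q_new = 0.0
    for _ in range(max_iter):
        # differential step: interior nodes from the steady heat equation
        for i in range(1, n - 1):
            temps[i] = 0.5 * (temps[i - 1] + temps[i + 1])
        # integral step: surface node balances conduction and Newton cooling
        temps[-1] = (k / dx * temps[-2] + h * t_fluid) / (k / dx + h)
        q_new = h * (temps[-1] - t_fluid)  # flux handed back to the fluid side, W/m^2
        if abs(q_new - q_old) < tol:
            break
        q_old = q_new
    return temps[-1], q_new

# Assumed example: 10 mm steel-like wall (k = 50 W/m/K), 400 K hot face,
# 300 K fluid with h = 20 W/m^2/K.
surface_t, flux = hybrid_wall(t_hot=400.0, t_fluid=300.0, k=50.0, thickness=0.01, h=20.0)
```

For this simple case the converged flux matches the series-resistance result, (400 − 300) / (L/k + 1/h), which gives a quick check on the coupled iteration.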
Also, the thermal profile of the compressor shell, as well as heat transfer from the …

Figure 2.3. Schematic of the three calculation steps of the model

Figure 2.4 (a) and (b) are representations of the computational domains of scroll and rotary compressors, respectively. The model is simplified by removing unnecessary fillets and parts in order to facilitate the meshing in the pre-processing stage and convergence in the CFD solver. The compressor is surrounded by ambient air with enclosure walls maintained at a constant temperature. Depending on the configuration, the compressor …

Figure 2.4. Computational domains of scroll (a) and rotary (b) compressors

Figure 2.5 illustrates the refrigeration cycle of a heat pump and the associated compressor power distributions. This step is coded in MATLAB. Refrigerant gas properties, such as enthalpies, are obtained from the libraries of REFPROP (NIST, 2008).

Figure 2.5. Thermodynamic refrigeration cycle of a heat pump with associated power distributions inside the compressor

… such as Viscous Fluid Flow by F. M. White and Numerical Computation of Internal and External Flows by C. Hirsch.

Figure 2.6. Computational domain of scroll compressor in ANSYS Workbench pre-processing tool, Design Modeler
Figure 2.7. Structured (on the left) and unstructured (on the right) grids
Figure 2.8. Scroll compressor mesh in ANSYS Workbench pre-processing tool, Mechanical

Figure 2.9 illustrates a highly skewed versus an equilateral element. Elements that have a skewness close to 1 are defined as degenerate elements and significantly influence the quality of the mesh. Degenerate cells are characterized by nearly coplanar nodes. A skewness value close to 0 indicates an equilateral cell and is the best achievable skewness value. Cells beyond skewness …

Figure 2.9. Equilateral and highly skewed triangle

Figure 2.10.
Thin layer of oil in solid phase between refrigerant and oil sump in liquid state

Figure 2.11. Required heat to reach Tdis (temperature of the compressed fluid at discharge)

Simplifications made in the model that concern oil are discussed in more detail in Subsection 3.3.1.

1: Q̇cha-cyl  2: Q̇cha-top  3: Q̇cha-bot

Figure 2.14. Computational domain of rotary compressor in ANSYS Workbench pre-processing tool, Design Modeler
Figure 2.15. Insulation layer highlighted in green around the exterior shell of the compressor
Figure 2.16. Rotary compressor mesh in ANSYS Workbench pre-processing tool, Mechanical
Figure 3.1. Calorimeter chamber of the scroll and rotary compressor test bench used for experimental validation

… the two compressor types. Various operating conditions were achieved by varying specific parameters: compressor speed, ambient temperature, and inlet and outlet pressures.

Figure 3.2. Location of thermocouples on compressor shells of scroll (a) and rotary (b) compressors
Figure 3.3. Experimental and numerical shell temperature distributions in scroll compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 1)
Figure 3.8. Experimental and numerical component temperatures in scroll compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 1)
Figure 3.11. Experimental and numerical shell temperature distributions in rotary compressor at Tcond = 40 °C, Tevap = 0 °C, and Tamb = 10 °C, 30 rps (operating condition 1)

… in Subsection 2.2.3 show that the shell temperature of the rotary compressor is relatively uniform regardless of the operating condition.
On the other hand, the thermal profiles of scroll compressors, found in Subsection 3.2.1, show a significant temperature jump between the low and high pressure parts of the compressor. More specifically, a strong temperature increase is seen at sensors 5-7. Figure 3.16 and Figure 3.17 are graphical illustrations of the thermal behavior of scroll and rotary compressors obtained in Fluent in the same operating conditions at the same compressor speed. The figures depict that temperature fluctuations of the rotary compressor are relatively small in comparison with the scroll compressor, where most of the shell remains at low temperature. The operating conditions cannot be disclosed for confidentiality reasons.

Figure 3.16. Temperature contours of rotary compressor shell in absolute temperature (K)
Figure 3.19. Different inlet locations in scroll compressor and their effect on the refrigerant flow paths

Figure 4.1 represents schematically the location and number of temperature sensors required for evaluating heat losses of rotary and scroll compressors on-field.

Figure 4.1. Location of temperature sensors required to evaluate heat losses of rotary (on the left) and scroll (on the right) compressors
Figure 5.1. Exterior unit with a compressor and EEV separated by a metallic sheet from the evaporator and fan
Figure 5.4. Exterior unit located in the climatic chamber 2 (outdoor chamber)
Figure 5.6. Air-to-water HP prototype and required instrumentation installed in three climatic chambers
Figure 5.7. Locations of sensors used to measure the temperature of air surrounding the compressor
Figure 5.9. Refrigeration cycle P-h diagram (a) and a schematic with required measurements (b) for the experimental validation of the performance assessment method
Figure 5.10.
Root mean square errors of the estimated heating capacities when Q̇amb is obtained using different air temperature sensors

Figure 5.12. Deviations of heating capacities calculated using the previously defined Q̇amb evaluation method and the more accurate method from the reference values as a function of compressor speeds
Figure 5.13. Comparison of heating capacities deviations from the reference values with different Cg values
Figure 5.14. Required instrumentation to measure rotary compressor heat losses on-field
Figure 5.17. Deviations in heating capacities from reference values as a function of compressor speed

TABLE OF CONTENTS

INTRODUCTION
CHAPTER 1 BACKGROUND AND LITERATURE REVIEW
1.1 Overview of heat pumps
1.2 Performance assessment method using indirect measurement of refrigerant mass flow rate
1.2.1 Compressor energy balance

Figure 5.15. Reference and calculated heating capacities at different operating conditions
Figure 5.18. Deviations in heating capacities from reference values as a function of compressor power input
Figure 5.19. Deviations in heating capacities from reference values as a function of evaporation temperature
Figure 5.20. Deviations in heating capacities from reference values as a function of temperature difference between compressor shell and ambient temperatures
Figure A.1. Flow chart diagram of condenser fouling detection algorithm
Figure A.2. Flow chart diagram of evaporator fouling detection algorithm

LIST OF TABLES

Table 1.1 Most common faults occurring in the HP unit
Table 3.1 Operating conditions used to obtain experimental results for numerical model validation
Table 3.2 Dimensionless RMS errors of external profiles in different operating conditions in scroll compressor
Table 3.3 Dimensionless RMS errors of external profiles in different operating conditions in rotary compressor
Table 4.1 The RMS errors of temperature values measured from different temperature sensors from the area-weighted average temperatures of the rotary compressor shell
Table 4.2 Shell heat flux ratio of scroll compressor extracted from Fluent in different operating conditions
Table 5.1 Location of ambient temperature sensors
Table 5.2 Operating conditions tested in the experimental test bench
Table 5.3 Sensors used in measurements and their respective uncertainties
Table 5.4 Sensitivity index of each variable in the final uncertainty of the calculated heating power in operating condition 6
Table 5.5 Optimal instrumentation used to determine Tshell in scroll and rotary compressors
Table A.1
Fault decoupling features, required temperature measurements, and virtual measurements
Table B.2 Measurement techniques used to determine compressor thermal behavior

To optimize the energy consumption of heat pumps, the on-field, real-time evaluation of the heating capacity and of the coefficient of performance is important. Nevertheless, determining HP performance remains problematic, notably for air-to-air HPs, because the enthalpy and the mass flow rate of air are difficult to estimate. [START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] adapted a method based on measurements of refrigerant properties and on energy/mass balances in order to obtain HP performance in real time on-field. This method uses only non-intrusive sensors. It is compatible with different HP types, including air-to-air HPs, and is adapted to more complex cycles (injection cycles).

\dot{Q}_{amb} = h_c A \left(T_{comp,out} - T_{amb}\right) + \sigma A \left(T_{comp,out}^4 - T_{amb}^4\right)   (1.19)

where h_c is the convective heat transfer coefficient, A is the surface area of the compressor shell exposed to ambient air, T_amb is the ambient air temperature, T_comp,out is the refrigerant temperature at the compressor exhaust side, and σ is the Stefan-Boltzmann constant. The compressor surface emissivity is assumed to be equal to unity, and the emissivity of the surrounding air can be neglected. The convective heat transfer coefficient consists of three parts: a convective heat transfer coefficient for isothermal cylinders (Morgan's correlation), and convective heat transfer coefficients for a hot surface facing up and down (McAdams' correlations).

Tran et al. (2013) assumed that the shell temperature is homogeneous and equal to the discharge temperature. Particularly in scroll compressors, where the motor is at the bottom, this can provoke a significant error.

1.3 Fault detection and diagnostic using mass flow rate measurements

Tran et al. (2013) compared the compressor heat losses, Q̇amb, obtained from physical considerations, Eq. (1.19), to the reference values, Q̇amb^ref. Reference heat loss values were obtained from the compressor thermal balance, as follows:

\dot{Q}_{amb}^{ref} = \dot{W}_{comp} - \dot{m}_{ref} \left[ (1 - C_g)\left(h_{r,comp,out} - h_{r,comp,in}\right) + C_g \, \Delta h_o \big|_{T_{comp,in} \to T_{comp,out}} \right]   (1.20)

where ṁref is the mass flow rate of the working fluid measured with a flow meter. The ratios Q̇amb / Q̇amb^ref lie between 0.7 and 1.2. Based on this information, Tran et al. (2013) derived a relative uncertainty value of the estimated heat losses, Eq. …
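As a minimal illustration of the heat loss estimate of Eq. (1.19), the following Python sketch evaluates the convective and radiative terms. The shell area and temperatures in the example are assumed values for illustration only; the 6.67 W K⁻¹ m⁻² average h_c is the value reported for Tran et al. (2013) later in the text.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def heat_loss(h_c, area, t_surf, t_amb):
    """Eq. (1.19): convective plus radiative loss, shell emissivity taken as unity.

    Temperatures must be given in kelvin because of the radiative term.
    """
    convective = h_c * area * (t_surf - t_amb)
    radiative = SIGMA * area * (t_surf ** 4 - t_amb ** 4)
    return convective + radiative

# Assumed example: average h_c of 6.67 W K^-1 m^-2, a 0.2 m^2 shell at
# 80 degC in 25 degC ambient air.
q_amb = heat_loss(6.67, 0.2, 80.0 + 273.15, 25.0 + 273.15)
```

For these assumed numbers the convective term dominates slightly, but the radiative term is of the same order, which is why the text does not neglect it.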
The RMS errors of temperature values measured from different temperature sensors from the area-weighted average temperatures of the rotary compressor shell Operating conditions 𝑇 𝑐𝑜𝑛𝑑 (°C) 𝑇 𝑒𝑣𝑎𝑝 (°C) 𝑇 𝑎𝑚𝑏 (°C) speed (rps) 40 0 10 60 40 0 10 90 60 0 10 30 60 0 10 60 40 15 25 30 RMS errors (°C) 𝑇 𝑜𝑢𝑡 Sensor 3 Sensor 4 Sensor 5 Sensor 6 4.84 1.88 5.26 2.30 2.70 Table 4 4 .2. Shell heat flux ratio of scroll compressor extracted from Fluent in different operating conditions Table where 𝑁𝑢 ̅̅̅̅ 𝐿 is the Nusselt number for the lateral side, Eq. (4.7) or (4.8), and 𝑁𝑢 ̅̅̅̅ 𝐷,1 is the Nusselt number for top surface, Eq. (4.9) or (4.10), 𝑁𝑢 ̅̅̅̅ 𝐷,2 is the Nusselt number for bottom surface, Eq. (4.11), 𝐴 𝐿 is the lateral side area, 𝐴 𝐷 is the top and bottom plate areas. To conclude, compressor heat losses in rotary compressors are calculated from the following equation, already presented in Subsection 1.2.2: 𝑄 ̇𝑎𝑚𝑏 = ( 𝑁𝑢 ̅̅̅̅ 𝐿 𝑘 𝐿 𝐴 𝐿 + 𝑁𝑢 ̅̅̅̅ 𝐷,1 𝑘 𝐷 𝐴 𝐷 + 𝑁𝑢 ̅̅̅̅ 𝐷,2 𝑘 𝐷 𝐴 𝐷 ) (𝑇 𝑠ℎ𝑒𝑙𝑙,3 -𝑇 𝑎𝑚𝑏 ) + 𝜎𝐴 𝑡𝑜𝑡 (𝑇 𝑠ℎ𝑒𝑙𝑙,3 4 -𝑇 𝑎𝑚𝑏 4 ) (4.13) Similarly, heat losses in scroll compressors are calculated as follows: 𝑄 ̇𝑎𝑚𝑏 = 1 𝑅 𝐻𝐹 [( 𝑁𝑢 ̅̅̅̅ 𝐿 𝑘 𝐿 𝐴 𝐿,𝐻𝑃 + 𝑁𝑢 ̅̅̅̅ 𝐷,1 𝑘 𝐷 𝐴 𝐷 ) (𝑇 𝑠ℎ𝑒𝑙𝑙,6 -𝑇 𝑎𝑚𝑏 ) + 𝜎𝐴 𝑡𝑜𝑡,𝐻𝑃 (𝑇 𝑠ℎ𝑒𝑙𝑙,6 4 -𝑇 𝑎𝑚𝑏 4 )] (4.14) Table 5 . 1 . 51 Location of ambient temperature sensors Sensor Location Tair1 Bottom of the compressor metal enclosure Tair2 Metal sheet between compressor and evaporator Tair3 Side of the compressor metal en-closure Tair4 Side of the compressor metal en-closure Tair5 Top of the compressor metal en-closure Tair6 Side of the compressor metal en-closure Tair7 Hanging above the compressor Tair8 Side (at the bottom) of the com-pressor metal enclosure Text Center of the exterior unit at the air inlet side Tran et al. (2013), Eq. 
(1.19) in various operating conditions.[START_REF] Tran | In situ measurement methods of air to air heat pump performance[END_REF] estimated that 𝑇 𝑠ℎ𝑒𝑙𝑙 was considered to be equal to the refrigerant temperature at discharge and the average ℎ 𝑐 was 6.67 W K -1 m -2 . The latter value was calculated considering different ambient and refrigerant discharge temperatures to obtain a range of ℎ 𝑐 values, from which an average was then taken. Nusselt number correlations suggested for isothermal cylinders (Morgan's correlation) and for hot surface facing up and down (McAdams' correlations) were used to calculate ℎ 𝑐 . Reference compressor heat loss values, 𝑄 ̇𝑎𝑚𝑏 𝑟𝑒𝑓 , were calculated from compressor en- ergy balance: 𝑄 ̇𝑎𝑚𝑏 𝑟𝑒𝑓 = 𝑊 ̇𝑐𝑜𝑚𝑝 -𝑚̇𝑟 𝑒𝑓 [(1 -𝐶 𝑔 )(ℎ 𝑟,𝑐𝑜𝑚𝑝,𝑜𝑢𝑡 -ℎ 𝑟,𝑐𝑜𝑚𝑝,𝑖𝑛 ) + 𝐶 𝑔 ∆ℎ 𝑜 𝑇 𝑐𝑜𝑚𝑝,𝑜𝑢𝑡 -𝑇 𝑐𝑜𝑚𝑝,𝑖𝑛 ] (5.8) where 𝑚̇𝑟 𝑒𝑓 is the refrigerant mass flow rate measured during the experiments from the condenser and water-side energy balance, as follows: 𝑚̇𝑟 𝑒𝑓 = 𝑟𝑒𝑓 𝑄 ̇𝑐𝑜𝑛𝑑 (1 -𝐶 𝑔 )(ℎ 𝑟,𝑐𝑜𝑛𝑑,𝑖 -ℎ 𝑟,𝑐𝑜𝑛𝑑,𝑜 ) + 𝐶 𝑔 ∆ℎ 𝑜 𝑇 𝑐𝑜𝑛𝑑,𝑜 -𝑇 𝑐𝑜𝑛𝑑,𝑖 Table 5 . 2 . 52 Operating conditions tested in the experimental test bench Operating condition 𝑇 𝑎𝑚𝑏 (°C) 𝑇 𝑒𝑣𝑎𝑝 (°C) 𝑇 𝑐𝑜𝑛𝑑 (°C) speed (rps) 1 5.8 1.0 40.0 30 2 9.0 0.5 40.0 60 3 12.7 0.0 41.0 90 4 3.7 1.0 60.0 30 5 7.7 2.0 60.0 60 6 12.4 1.0 60.0 90 Table 5 .3. 5 Sensors used in measurements and their respective uncertainties Variable Uncertainty 𝑊 ̇𝑐𝑜𝑚𝑝 0.2-0.5 % 𝑇 1 0.8 °C 𝑇 2 0.8 °C 𝑇 3 0.8 °C 𝑇 𝑠ℎ𝑒𝑙𝑙 0.2 °C 𝑇 𝑒𝑥𝑡 0.8 °C 𝑃 𝑖𝑛 0.25 % 𝑃 𝑜𝑢𝑡 0.25 % 𝑇 𝑤,𝑖 0.2 °C 𝑇 𝑤,𝑜 0.2 °C 𝑉 ̇𝑤 0.5 % Table 5 . 5 5. 
Optimal instrumentation used to determine Tshell in scroll and rotary compressors Compressor Number Location (internal component level) ' 2' EXTENSION OF THE PERFORMANCE ASSESS-MENT METHOD TO A FDD METHOD Extension of the performance assessment method to a FDD method 131 A.1 Fault detection and diagnostic methods There is a wide range of fault detection and diagnosis methods; from detailed physical models to simple polynomial black-box models. Each method shows advantages and disadvantages in terms of applicability of on-field implementation, cost, accuracy, and data required. Ideally, FDD methods utilize low cost sensors, such as temperature sensors, and preferably sensors that are already integrated in the machine or as little supplementary sensors as possible in order to keep the hardware costs at minimum. However, the diagnosis provided by the method must be reliable and of high quality. The FDD method must be applicable to a wide range of HP types and sizes, and must require low computational effort for on-field calculations. Black-box models Black-box models rely on statistical methods to establish a relationship used to determine whether a measured parameter indicates a fault and, thus, do not account for physical characteristics and phenomena of the system. For example, clustering algorithms based on fuzzy clustering or neural networks, designed to associate measured parameter values to corresponding faults, are used [START_REF] Saleh | Artificial neural network models for depicting mass flow rate R22, R407C and R410A through electronic expansion valve[END_REF]. Black-box model can also be based on multivariate analysis technology, such as principle component analysis (PCA), which is used to transform a number of related variables to a smaller set of uncorrelated variables. This can then be used in parameter residual generation, where the discrepancies between the expected value and the measured value are used to detect and quantify a fault. 
An example of such an FDD model for air-source heat pump water chillers/heaters is described and validated by [START_REF] Chen | A fault detection technique for air-source heat pump water chiller/heater[END_REF]. Despite a great variety of black-box modeling approaches, some more optimized than others, they all share one important disadvantage: the need for a large database in order to learn machine operation. The database must cover a full range of fault-free operating conditions, including summer and winter seasons. The learning period required to build the database must either be executed in laboratory conditions, where different operating conditions are simulated, or on already-installed machines, which is time consuming and cost-prohibitive. Furthermore, the algorithm uses this machine-specific database to extract the required data concerning machine operation, and it is poorly adaptable to another machine, as it needs to go through a learning period again.

Fault-tree analysis

Fault-tree analysis is a deductive failure analysis that relies on Boolean algebra to determine all the possible event combinations that can potentially lead to undesirable consequences in the machine [START_REF] Li | Fault detection and diagnosis for building cooling system with a tree-structured learnign method[END_REF]. The method thus establishes the causes of a specific fault, as well as their probability, and can be generalized to various HP types. It is based on practical experience and expert opinion, reported during interventions. Therefore, the reliability and precision of the method depend on the quantity and quality of the failure data available.
Physical models

Physical models, such as those described in [START_REF] Wichman | Fault detection and diagnostics for commercial coolers and freezers[END_REF]; [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF]; [START_REF] Li | Decoupling features for diagnosis of reversing and check valve faults in heat pumps[END_REF] for commercial coolers and freezers, are based on studying the physical phenomena occurring in the system. Such models typically require real-time measurements of values that are characteristic of specific faults. These values are then compared to nominal or fault-free values, typically provided by equipment manufacturers. Any discrepancy between the values indicates a fault. Intrusive flow meters cannot be used in already-installed machines. For this reason, [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF] developed a decoupling-based diagnostic technique that relies on real-time indirect flow measurements, such as the one obtained from the performance measurement method. The method identifies diagnostic features that are decoupled from, i.e. insensitive to, other faults and operating conditions. The quantities necessary to determine decoupling features rely on low-cost, non-intrusive measurements and physical models of components. In addition, the method can be extended to different vapor compression cycles, including HPs. Furthermore, it does not require extensive datasets for training. Low-cost implementation, robustness of the model, and the capability to handle multiple simultaneous faults make the method well suited for on-field applications in already-installed machines. This particular FDD method and the possibility of adapting it to air-to-air heat pumps are described in the following subsection.
A.2 Promising fault detection and diagnostic method

Real-time measurements performed by the performance assessment method can be coupled with some complementary measurements to constitute a practical and promising FDD tool, termed the decoupling-feature diagnostic method, as presented in the work of [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF]. The most common faults occurring in residential HPs can be detected and distinguished with the proposed FDD method solely with the aid of non-intrusive low-cost sensors, particularly temperature sensors. The following list is an example of faults that can be analyzed with the decoupling-feature-based diagnostic technique:

- refrigerant overcharge;
- refrigerant undercharge;
- evaporator fouling and
- condenser fouling.

These faults are the most common in residential HP applications. Furthermore, these faults are important and difficult to diagnose, and they impact the system's thermal capacity, efficiency and equipment life [START_REF] Breuker | Evaluating the performance of a fault detection and diagsnotic system for vapor compression equipment[END_REF]. The primary impact of exchanger fouling is a decrease in air flow. The decoupling feature for such a fault is, therefore, the volumetric air flow, determined with virtual sensors that employ energy balances on the air and refrigerant sides. This decoupling feature is strongly influenced by the level of fouling and by fan problems. In systems that incorporate fixed-speed fans it is also independent of other faults occurring in the unit, and it can additionally be used to diagnose exchanger fan problems. Conductance can also characterize exchanger fouling. However, it is less precise, since it is not representative of air-side fouling alone: it may also depend on refrigerant flow parameters and characteristics.
Air flow rate is, therefore, more representative of exchanger fouling, since it is decoupled from other faults.

Condenser fouling

Condenser fouling can be detected by employing a virtual air flow rate sensor obtained from a steady-state condenser energy balance, including virtual pressure measurements from saturation temperatures and other temperature measurements, as depicted in the following equation:

$$\dot{V}_{ca} = \frac{\dot{m}_{ref}\,(h_{cond,in} - h_{cond,out})\,\upsilon_{ca}}{c_{p,air}\,(T_{ca,out} - T_{ca,in})}$$

where $\dot{V}_{ca}$ is the condenser air volume flow rate, $\upsilon_{ca}$ is the condenser air specific volume, $c_{p,air}$ is the air specific heat, $T_{ca,in}$ and $T_{ca,out}$ are the condenser air inlet and outlet temperatures, and $h_{cond,in}$ and $h_{cond,out}$ are the refrigerant enthalpies at the condenser inlet and outlet, respectively, estimated from the condenser inlet and outlet pressures and temperatures. The method requires the refrigerant mass flow rate, which can be obtained from the compressor energy balance depicted in Eq. (1.10). Thus, this diagnostic feature is coupled with the performance assessment method. The obtained air flow rate is compared to a nominal air flow rate value estimated from the condenser fan map. Changes in the physical properties of air due to the pressure drop across the fouled condenser coil can be considered negligible. Therefore, the specific heat capacity of air evaluated at constant pressure is suitable to represent the heat absorbed by the traversing air. Air absolute humidity is assumed to remain constant across the condenser.
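The virtual air-flow sensor described above can be sketched in a few lines. This is an illustrative Python rendering assuming the energy balance takes the form $\dot V_{ca} = \dot m_{ref}(h_{cond,in} - h_{cond,out})\,\upsilon_{ca} / \big(c_{p,air}(T_{ca,out} - T_{ca,in})\big)$; the fouling threshold and all numbers are hypothetical, not values from the text.

```python
def condenser_air_flow(m_ref, h_cond_in, h_cond_out, v_spec, cp_air,
                       t_ca_in, t_ca_out):
    """Virtual volumetric air-flow sensor: heat rejected by the refrigerant
    equals the sensible heat absorbed by the traversing air."""
    q_cond = m_ref * (h_cond_in - h_cond_out)               # W
    return q_cond * v_spec / (cp_air * (t_ca_out - t_ca_in))  # m^3/s


def fouling_flag(v_measured, v_nominal, threshold=0.8):
    """Raise a condenser-fouling flag when the virtual air flow drops below a
    fraction of the fan-map nominal value (the 0.8 threshold is arbitrary)."""
    return v_measured < threshold * v_nominal


v_air = condenser_air_flow(m_ref=0.03, h_cond_in=440e3, h_cond_out=260e3,
                           v_spec=0.84, cp_air=1006.0,
                           t_ca_in=20.0, t_ca_out=32.0)
```

In a field implementation the enthalpies would come from the virtual pressure measurements and the mass flow rate from the compressor energy balance, as explained above.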
Figure A.1. Flow chart diagram of condenser fouling detection algorithm

Evaporator fouling

Similar to condenser fouling, the principal impact of evaporator fouling is a decrease in air flow rate, which can be measured with a virtual sensor, as follows:

$$\dot{V}_{ea} = \frac{\dot{m}_{ref}\,(h_{evap,out} - h_{evap,in})\,\upsilon_{ea}}{h_{ea,in} - h_{ea,out}}$$

where $\dot{V}_{ea}$ is the evaporator air volume flow rate, $\upsilon_{ea}$ is the evaporator air specific volume, $h_{evap,in}$ and $h_{evap,out}$ are the refrigerant enthalpies at the evaporator inlet and outlet, respectively, and $h_{ea,in}$ and $h_{ea,out}$ are the air enthalpies at the evaporator inlet and outlet, respectively. The relative humidity of the evaporator inlet and outlet air must be known to obtain the air enthalpies. However, in order to reduce on-field instrumentation and simplify the measurements, the heat absorbed by the air can be estimated with the aid of the specific heat capacity at constant pressure, as in the case of the condenser:

$$\dot{V}_{ea} = \frac{\dot{m}_{ref}\,(h_{evap,out} - h_{evap,in})\,\upsilon_{ea}}{c_{p,air}\,(T_{ea,in} - T_{ea,out})}$$

where $T_{ea,in}$ and $T_{ea,out}$ are the evaporator inlet and outlet air temperatures. The air flow rate measurement is compared to a nominal flow rate value estimated from the evaporator fan map.

Refrigerant charge

Refrigerant undercharge decreases the subcooling at the condenser exit, and when the unit is significantly undercharged the refrigerant may be in a two-phase state in the liquid line. Refrigerant undercharge also provokes an increase in superheat. Thermostatic and electronic expansion valves will try to compensate for and regulate this increase in superheat by lowering the evaporation pressure. In the case of a severe overcharge, the liquid line becomes filled and refrigerant starts flowing back to the condenser, thus deteriorating the thermal capacity due to the diminished exchange area (the condenser is filled with more liquid). In such cases, the subcooling increases, as does the condensation pressure, in order to compensate for the losses in thermal capacity.
Refrigerant overcharge or undercharge is a type of fault that has an impact on the whole system, and not just on one single component, since it modifies multiple parameters in the system. For this reason, measuring the mass flow rate with a virtual sensor is not possible. The decoupling feature for refrigerant charge is calculated from the following equation:

$$F_{charge} = (T_{sc} - T_{sc,rated}) - \frac{k_{sh}}{k_{sc}}\,(T_{sh} - T_{sh,rated}) \qquad (A.4)$$

where $T_{sc}$ and $T_{sh}$ are the measured subcooling and superheat, and $k_{sh}/k_{sc}$ is a constant characteristic of a specific system, representing the slope of a straight line relating the superheat and subcooling in two operating conditions, as follows:

$$\frac{k_{sh}}{k_{sc}} = \frac{T_{sc,rated} - T_{sc,0}}{T_{sh,rated} - T_{sh,0}}$$

where $T_{sh,rated}$ and $T_{sc,rated}$ are the superheat and subcooling at rated conditions, respectively, and $T_{sh,0}$ and $T_{sc,0}$ are the superheat and subcooling, respectively, at another fault-free operating condition; the change in operating condition must ensure a change in the superheat and subcooling, e.g. a change in condenser or evaporator inlet temperature, humidity, flow rate, or any combination of these variables. If the feature is negative, there is an undercharge in the system; a positive value indicates an overcharge. According to [START_REF] Li | Decoupling features and virtual sensors for diagnosis of faults in vapor compression air conditioners[END_REF], this feature is relatively independent of operating conditions and almost uniquely dependent on refrigerant charge. In addition, it is applicable to expansion organs with a fixed area and with a variable expansion area, such as a TXV. [START_REF] Li | Decoupling features for diagnosis of reversing and check valve faults in heat pumps[END_REF] determined that for fixed orifice (FXO) expansion valves and TXVs the ratio is estimated to be 1/2.5. On the other hand, the coefficient for electronic expansion valves (EEVs) is equal to zero, since the superheat of a system is kept constant with such devices. The difference in superheat and subcooling indicates a fault related to the refrigerant charge in the system.
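The sign convention of the charge feature is easy to verify numerically. The sketch below assumes the feature takes the form $(T_{sc}-T_{sc,rated}) - (k_{sh}/k_{sc})(T_{sh}-T_{sh,rated})$, which reproduces the negative-for-undercharge, positive-for-overcharge convention stated in the text but should be checked against Li and Braun's original papers; all temperature values are invented.

```python
def charge_slope(t_sc_rated, t_sh_rated, t_sc_0, t_sh_0):
    """k_sh/k_sc from two fault-free operating conditions (assumed form)."""
    return (t_sc_rated - t_sc_0) / (t_sh_rated - t_sh_0)


def charge_feature(t_sc, t_sh, t_sc_rated, t_sh_rated, slope):
    """Negative value -> undercharge, positive value -> overcharge."""
    return (t_sc - t_sc_rated) - slope * (t_sh - t_sh_rated)


slope = 1.0 / 2.5   # value reported for FXO and TXV systems

# undercharge scenario: subcooling drops, superheat rises (hypothetical data)
under = charge_feature(t_sc=2.0, t_sh=12.0, t_sc_rated=5.0, t_sh_rated=8.0,
                       slope=slope)
# overcharge scenario: subcooling rises, superheat drops
over = charge_feature(t_sc=9.0, t_sh=5.0, t_sc_rated=5.0, t_sh_rated=8.0,
                      slope=slope)
```

For an EEV system the slope would be set to zero, since the superheat is held constant, and the feature reduces to the subcooling deviation alone.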
In the work of [START_REF] Li | Decoupling features for diagnosis of reversing and check valve faults in heat pumps[END_REF], Eq. (A.4) was modified in order to quantify the deviation from the nominal charge level. Furthermore, [START_REF] Kim | Extension of a virtual refrigerant charge sensor[END_REF] added another term to the equation, which estimates the charge deviation by incorporating vapor quality, in order to better adapt the equation to variable-speed compressors and more extreme operating conditions, such as severe overcharge or very low ambient temperatures. The decoupling feature is valid when the charge deviation ranges from 60 to 140%. Outside this range the precision suffers, as it also does when the compressor speed is kept relatively low in order to achieve very low subcooling and superheat. [START_REF] Kim | Extension of a virtual refrigerant charge sensor[END_REF] suggested a modification of the equation and extended the validity of the model to charge deviations of 50 to 150%. However, the modifications and adjustments mentioned above complicate the on-field implementation of the method, because they require additional quantities that are difficult to measure on-field, such as vapor quality, or additional data from the manufacturer, making the method system specific.

A.3 Conclusions

A promising method based on the decoupling-feature diagnostic method is presented. The decoupling features required to measure each individual fault, along with the necessary temperature and virtual sensors, are presented in Table A.1.

Table A.1.
Fault decoupling features, required temperature measurements, and virtual measurements

The benefits of the method include on-field applicability, low calculation effort, low implementation cost, and its capacity to adapt to numerous vapor compression cycles, since fault detection is based on the physical phenomena of thermodynamic cycles. This is mainly achieved with virtual sensors. The primary inconvenience of the method is the fact that it relies on reference values provided by the manufacturer or obtained by testing the HP in a laboratory. On the other hand, methods that do not require manufacturer data demand extensive and costly learning periods, thus making them less versatile and less convenient for on-field use on already-installed machines. The manufacturer's data required for the decoupling-feature method are typically readily provided with the equipment.

EXPERIMENTAL TECHNIQUES FOR COMPRESSOR THERMAL ANALYSIS

The thermal behavior inside the compressor is yet to be fully understood. Thorough modeling of the compressor thermal profile requires extensive resources due to the many complex phenomena occurring inside the compressor, mainly as a result of the presence and distribution of oil and the flow dynamics of the refrigerant gas. The task becomes even more difficult, and specifically costly, if the thermal profile is required for a wide range of operating conditions and compressor types. In order to acquire the necessary experimental data to determine correlations that describe the thermal behavior of a compressor, several techniques can be employed. It must be noted that the most straightforward and conventional way to perform experimental measurements is to use thermocouples. Other techniques to determine compressor thermal behavior use heat flux sensors and infrared cameras. Table B.2 is a compilation of the different techniques along with their methods of employment, advantages and disadvantages.
Such measurement approaches were found to be used by other researchers. It must be stressed that determining the refrigerant compressor thermal profile through experimental techniques is challenging, since hermetic compressors are typically tightly assembled (compact) machines with a large number of geometrically complex components besides the essential ones, such as various coils, fillets and pins, and with complicated flow dynamics inside. However, one of the main difficulties encountered when performing such an analysis is the fact that some components that are important from the heat transfer point of view are in motion. For instance, the rotor, which primarily consists of an assembly of magnets, rotates, as does the crankshaft. Also, the gap between the rotor and stator is small; hence measuring the rotor and stator temperatures separately seems to be complicated, if not impossible. Therefore, a great deal of the studies encountered seemed to lack experimental information on many interior components, such as the rotor-stator assembly and the crankshaft. It can be speculated that this is mainly due to the difficulty of measuring the temperatures of rotating surfaces. In some studies the temperature of the stator-rotor assembly was determined as a whole, referring to the "motor temperature", however, with no explanation of how and which part of the motor assembly was actually measured. Measurements are typically obtained from such parts of the compressor as the inlet, the outlet and the exterior shell temperature distribution, and rarely from interior locations, such as the refrigerant temperature at the suction cavity of the compression chamber.

C.1 Nusselt number correlation for the stator-rotor gap

The turbulent Taylor-Couette flow (flow between two rotating cylinders) with an axial throughflow has to be considered in order to calculate the heat transfer coefficient of the stator-rotor gap.
A correlation was found that represents such a flow between two concentric cylinders with the inner one in rotating motion and the outer one stationary [START_REF] Bouafia | Analyse expérimentale des transferts de chaleur en espace annulaire étroit et rainuré avec cylindre intérieur tournant[END_REF]. The heat transfer correlation takes into account the fact that the inner cylinder, in this application the rotor, is heated. The inner surface of the stator is assumed to be perfectly smooth. The correlation gives the average Nusselt number as a function of the effective Reynolds number, $Re_{eff}$, obtained as a weighted combination of the axial and tangential Reynolds numbers (Bouafia et al., 1997), where the weighting coefficient $\alpha$ is set to 0.5 (optimal value) for a rotor-stator assembly, $Re_a$ is the axial Reynolds number based on the axial velocity $v_a$, and $Re_t$ is the tangential Reynolds number:

$$Re_t = \frac{\Omega\,R_{ext}\,x}{\nu} \qquad (C.9)$$

where $R_{ext}$ is the exterior radius of the rotor. The characteristic length, $x$, is in this case the hydraulic diameter, calculated from:

$$x = \frac{4A}{p} \qquad (C.10)$$

where $A$ is the cross-sectional area of the flow and $p$ is the wetted perimeter of the cross-section.

Additional Nusselt number correlations

C.2 Nusselt number correlation for the stator-shell gap

Heat transfer between the shell and stator can be approximated by a correlation used for heat transfer between two concentric cylinders [START_REF] Mills | Basic Heat and Mass Transfer[END_REF], expressed in terms of the Rayleigh number $Ra_x$, for which the gap width is the characteristic length. It is assumed that convection in such regions is natural, since the fluid passes through the gap with a low velocity. In this case both cylinders are stationary.
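Equations (C.9) and (C.10) are simple enough to check numerically; for an annular gap the hydraulic diameter $4A/p$ reduces to twice the gap width. The Python sketch below uses invented rotor and stator dimensions for illustration only.

```python
import math


def hydraulic_diameter(area, wetted_perimeter):
    """x = 4A / p, Eq. (C.10)."""
    return 4.0 * area / wetted_perimeter


def tangential_reynolds(omega, r_ext, x, nu):
    """Re_t = Omega * R_ext * x / nu, Eq. (C.9)."""
    return omega * r_ext * x / nu


# annular stator-rotor gap (illustrative dimensions, in meters)
r_rotor, r_stator = 0.025, 0.0255
area = math.pi * (r_stator**2 - r_rotor**2)       # flow cross-section
perim = 2.0 * math.pi * (r_stator + r_rotor)      # wetted perimeter
x = hydraulic_diameter(area, perim)               # equals 2*(r_stator - r_rotor)
re_t = tangential_reynolds(omega=2.0 * math.pi * 50.0,  # 50 rps shaft speed
                           r_ext=r_rotor, x=x, nu=1.2e-5)
```

The closed-form check $x = 2(R_{stator} - R_{rotor})$ follows directly from the annulus geometry and is a convenient sanity test for any implementation of Eq. (C.10).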
Nohemi Alvarez-Jarquin, Antonio Loría (email: [email protected]), José Luis Avila (email: [email protected])

Consensus under switching spanning-tree topologies and persistently exciting interconnections

We study the consensus problem for networks with changing communication topology and with time-dependent communication links. That is, the network changes in two dimensions: "geographical" and "temporal". We establish that consensus is reached provided that there always exists a spanning tree for a minimal dwell-time and the interconnection gains are persistently exciting. Our main result covers the particular case, studied in the literature, of one fixed topology with time-varying interconnections, but also that of changing topologies with reliable interconnections during a dwell-time. Another originality of our work lies in the method of proof, based on stability theory of time-varying and switched systems. Simulations on an academic example are provided to illustrate our theoretical results.

I. INTRODUCTION

In spite of the considerable bulk of literature on consensus analysis, this problem has been little studied for systems with both changing topologies and time-varying interconnections; some recent works include [START_REF] Chowdhury | Persistence based convergence rate analysis of consensus protocols for dynamic graph networks[END_REF], [START_REF] Kumar | Consensus analysis of systems with time-varying interactions : An event-triggered approach[END_REF], [START_REF] Chowdhury | On the estimation of algebraic connectivity in graphs with persistently exciting interconnections[END_REF], [START_REF] Maghenem | Lyapunov functions for persistently-excited cascaded time-varying systems: application in consensus analysis[END_REF].
This problem, however, is of great interest for researchers from several disciplines due to the multiple applications related to networked multiagent systems: satellite formation flying [START_REF] Carpenter | Decentralized control of satellite formations[END_REF], [START_REF] Sarlette | Cooperative attitude synchronization in satellite swarms: a consensus approach[END_REF], coupled oscillators, formation tracking control for mobile robots [START_REF] Dasdemir | A simple formation-tracking controller of mobile robots based on a "spanning-tree" communication[END_REF], air traffic control [START_REF] Tomlin | Conflict resolution for air traffic management: A study in multiagent hybrid systems communication[END_REF], just to mention a few. These applications justify the design of appropriate consensus protocols to drive all dynamic agents to a common value. The consensus problem consists in establishing conditions under which the differences between any two motions among a group of dynamic systems converge to zero asymptotically. In [START_REF] Ren | Consensus seeking in multi-agent systems under dynamically changing interaction topologies[END_REF] the authors consider multiple agents in the presence of limited and unreliable information exchange with changing communication topologies; the analysis relies on graph theory, and consensus may be established if the union of the directed interaction graphs has a spanning tree. In [START_REF] Olfati-Saber | Consensus problems in networks of agents with switching topology and time-delays[END_REF] directed networks with switching topology are treated as a hybrid system. A common Lyapunov function allows showing convergence of an agreement protocol for this system.
The authors of [START_REF] Gong | Average consensus in networks of dynamic agents with switching topologies and multiple time-varying delays[END_REF] study the consensus problem in undirected networks of dynamic agents with fixed and switching topologies. Using Lyapunov theory, it is shown that all the nodes in the network achieve consensus asymptotically for appropriate communication delays if the network topology graph is connected. In [START_REF] Xiao | State consensus for multi-agent systems with switching topologies and time-variant delays[END_REF] the authors address the consensus problem for discrete-time multiagent systems with changing communication topologies and bounded time-varying communication delays. In this paper, we consider the consensus problem for networks of dynamic systems interconnected in a directed graph through time-varying links. In contrast to the related literature, see for instance [START_REF] Kim | Consensus of output-coupled linear multi-agent systems under fast switching network: Averaging approach[END_REF], [START_REF] You | Consensus condition for linear multi-agent systems over randomly switching topologies[END_REF], we assume that the network's graph is time-varying with respect to two time-scales. Firstly, we assume that the interconnection topology changes; that is, agent A may communicate with B over one period of time and with C over another (dwell-)time interval. Secondly, in clear contrast with the literature, we assume that during a dwell-time in which the topology is fixed, the communication links are not reliable; that is, they vary with time and, in particular, they may have random failures. The necessary and sufficient condition is that each interconnection gain is, separately, persistently exciting. Persistent excitation covers, in particular, random signals of positive mean (offset). This is also in contrast to conditions based on excitation of an "averaged" graph Laplacian, cf.
[START_REF] Kim | Consensus of output-coupled linear multi-agent systems under fast switching network: Averaging approach[END_REF]. Thus, the problem we analyze covers both the case of fixed topology with time-varying interconnections and that of switching topologies with reliable interconnections. In the following section we present our main results. For clarity of exposition, we first present an auxiliary result on consensus under a fixed spanning-tree topology with time-varying reliable interconnections. Then, we show that the switched-topology problem may be approached using stability theory for switched linear time-varying systems. In Section III we present an example of three agents whose interconnection topology changes among six possible configurations. Concluding remarks are provided in Section IV.

II. MAIN RESULTS

A. Problem statement

Consider N dynamic agents

$$\Psi_\lambda:\quad \dot{x}_\lambda = u_\lambda, \qquad \lambda = 1, 2, \dots, N, \qquad (1)$$

where $u_\lambda$ represents an interconnection protocol. The most common continuous consensus protocol under an all-to-all communication assumption is given by, see [START_REF] Olfati-Saber | Consensus problems in networks of agents with switching topology and time-delays[END_REF], [START_REF] Xiao | State consensus for multi-agent systems with switching topologies and time-variant delays[END_REF],

$$u_\lambda(t, x) = -\sum_{\kappa=1}^{N} a_{\lambda\kappa}(t)\,\big(x_\lambda(t) - x_\kappa(t)\big), \qquad (2)$$

where $a_{\lambda\kappa}$ is the $(\lambda, \kappa)$ entry of the adjacency matrix and $x_\lambda$ is the information state of the $\lambda$-th agent. The system (1), (2) reaches consensus if, for every initial condition, all the states reach a common value as $t$ tends to infinity. The consensus problem has been thoroughly studied both for the case of constant and of time-varying interconnections, mostly under the assumption of an all-to-all communication topology. It is also well known, from graph theory, that the existence of a spanning tree is necessary and sufficient to reach consensus.
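Protocol (2) can be tried out with a short numerical experiment. The sketch below, with arbitrary constant gains and step size, integrates $\dot x = u$ for three agents under an all-to-all symmetric adjacency matrix; in that case the average of the states is preserved and every state converges to it.

```python
import numpy as np


def consensus_step(x, a, dt):
    """One explicit Euler step of u_lambda = -sum_kappa a[l,k](x_l - x_k), Eq. (2)."""
    dx = -(a * (x[:, None] - x[None, :])).sum(axis=1)
    return x + dt * dx


x = np.array([1.0, 4.0, -2.0])   # initial states, mean = 1
a = 0.5 * np.ones((3, 3))        # all-to-all, constant symmetric gains
for _ in range(2000):            # 20 s with dt = 0.01
    x = consensus_step(x, a, dt=0.01)
```

With a symmetric adjacency matrix the coupling terms cancel pairwise when summed over all agents, so the mean is an invariant of the dynamics and the consensus value equals the initial average.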
In the case that the interconnections are time-varying, a similar result was established in [START_REF] Moreau | Stability of continuous-time distributed consensus algorithms[END_REF] based on the assumption that, roughly speaking, there exists an average spanning tree. In this paper, we analyze consensus under time-varying topologies; as opposed to the more traditional graph-theory-based analysis [START_REF] Ren | Distributed consensus in multivehicle cooperative control[END_REF], we adopt a stability-theory approach.

B. The network model

With little loss of generality, let us consider the following consensus protocol

$$u_\lambda = \begin{cases} -a_{\lambda\lambda+1}(t)\,\big(x_\lambda(t) - x_{\lambda+1}(t)\big), & \forall\, \lambda \in [1, N-1],\\[2pt] 0, & \lambda = N, \end{cases} \qquad (3)$$

where $a_{\lambda\lambda+1} \geq 0$ and it is strictly positive whenever information flows from the $(\lambda+1)$th node to the $\lambda$th node. This protocol leads to a spanning-tree configuration topology; the closed-loop equations are

$$\begin{aligned} \dot{x}_1 &= -a_{12}(t)\,(x_1 - x_2),\\ &\;\;\vdots\\ \dot{x}_\lambda &= -a_{\lambda,\lambda+1}(t)\,(x_\lambda - x_{\lambda+1}),\\ &\;\;\vdots\\ \dot{x}_{N-1} &= -a_{N-1\,N}(t)\,(x_{N-1} - x_N),\\ \dot{x}_N &= 0. \end{aligned} \qquad (4)$$

In a leader-follower configuration, the $N$th node may be considered as a "swarm master" with its own dynamics. For simplicity, here we consider it to be static. It is clear that there are many other possible spanning-tree configurations; the one shown above is the conventional one. Actually, there exists a total of $N!$ spanning-tree configurations; for instance, for a group of three agents there exist six possible spanning-tree configuration topologies, which determine six different sequences. Thus, to determine the $N!$ possible spanning-tree communication topologies among $N$ agents, we introduce the following notation. For each $k \leq N$ we define a function $\pi_k$ which takes integer values in $\{1, \dots, N\}$.
We also introduce the sequence of agents $\{\Psi_{\pi_k}\}_{k=1}^{N}$ with the following properties: 1) every agent $\Psi_\lambda$ is in the sequence; 2) no repetition of agents in the sequence is allowed; 3) the root agent is labeled $\Psi_{\pi_N}$ and it communicates with the agent $\Psi_{\pi_{N-1}}$; the latter is the parent of $\Psi_{\pi_{N-2}}$, and so on down to the leaf agent $\Psi_{\pi_1}$. That is, the information flows, with interconnection gain $a_{\pi_k\pi_{k+1}}(t) \geq 0$, from the agent $\Psi_{\pi_{k+1}}$ to the agent $\Psi_{\pi_k}$. The subindex $k$ represents the position of the agent $\Psi_{\pi_k}$ in the sequence. Note that any sequence $\{\Psi_{\pi_1}, \Psi_{\pi_2}, \dots, \Psi_{\pi_{N-1}}, \Psi_{\pi_N}\}$ of the agents may be represented as a spanning-tree topology, which is depicted in Figure 2. Thus, in general, each possible fixed topology labeled $i \in \{1, \dots, N!\}$ is generated by a protocol of the form (3), e.g. $\{\Psi_3, \Psi_2, \Psi_1\}$, $\{\Psi_2, \Psi_1, \Psi_3\}$, etc., see Figure 1, which we write as

$$u^i_{\pi_k} = \begin{cases} -a^i_{\pi_k\pi_{k+1}}(t)\,\big(x_{\pi_k} - x_{\pi_{k+1}}\big), & k \in \{1, \dots, N-1\},\\[2pt] 0, & k = N, \end{cases} \qquad (5)$$

[Figure 1: the six spanning-tree topologies for three agents, $i = 1, \dots, 6$; e.g. for $i = 1$, $\Psi_1 \leftarrow \Psi_2 \leftarrow \Psi_3$ with gains $a_{12}(t)$, $a_{23}(t)$. Figure 2: the generic spanning tree $\Psi_{\pi_1} \leftarrow \Psi_{\pi_2} \leftarrow \cdots \leftarrow \Psi_{\pi_N}$ with gains $a_{\pi_k\pi_{k+1}}(t)$.]

where $k$ denotes the position of the agent $\Psi_\lambda$ in the sequence $\{\Psi_{\pi_k}\}_{k=1}^{N}$ and $\pi_k$ represents which agent $\Psi_\lambda$ is in position $k$, that is, $\pi_k = \lambda$. Under (5), the system (1) takes the form

$$\dot{x}_{di} = -L_i(t)\,x_{di}, \qquad i \in \{1, \dots, N!\}, \qquad (6)$$

where to each topology $i \leq N!$ corresponds a state vector $x_{di} = [x_{\pi_1}, x_{\pi_2}, \dots, x_{\pi_N}]^\top$, which contains the states of all interconnected agents in a distinct order, depending on the topology. For instance, referring to Figure 1, for $i = 1$ we have $x_{d1} = [x_1, x_2, x_3]^\top$, while $x_{d4} = [x_1, x_3, x_2]^\top$ for $i = 4$. Accordingly, to each topology we associate a distinct Laplacian matrix $L_i(t)$, which is given by

$$L_i(t) := \begin{bmatrix} a^i_{\pi_1\pi_2}(t) & -a^i_{\pi_1\pi_2}(t) & 0 & \cdots & 0\\ 0 & a^i_{\pi_2\pi_3}(t) & -a^i_{\pi_2\pi_3}(t) & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & a^i_{\pi_{N-1}\pi_N}(t) & -a^i_{\pi_{N-1}\pi_N}(t)\\ 0 & \cdots & \cdots & 0 & 0 \end{bmatrix}. \qquad (7)$$

Since any of the $N!$ configurations is a spanning tree, which is a necessary and sufficient condition for consensus, all configurations may be considered equivalent, in some sense, to the first topology, i.e., with $i = 1$. As a convention, for the purpose of analysis we denote the state of the latter by $x = [x_1, x_2, \dots, x_N]^\top$ and refer to it as an ordered topology. See Figure 3. It is clear (at least intuitively) that consensus of all systems (6) is equivalent to that of $\dot{x} = -L_1(t)x$, where

[Figure 3: the ordered topology $\Psi_1 \leftarrow \Psi_2 \leftarrow \cdots \leftarrow \Psi_{N-1} \leftarrow \Psi_N$ with gains $a_{12}(t), \dots, a_{N-1N}(t)$.]

$$L_1(t) := \begin{bmatrix} a_{12}(t) & -a_{12}(t) & 0 & \cdots & 0\\ 0 & a_{23}(t) & -a_{23}(t) & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & a_{N-1N}(t) & -a_{N-1N}(t)\\ 0 & \cdots & \cdots & 0 & 0 \end{bmatrix}. \qquad (8)$$

More precisely, the linear transformation from a "disordered" vector $x_{di}$ to the ordered vector $x$ is defined via a permutation matrix $P_i$, that is,

$$x_{di} = P_i\,x \qquad (9)$$

with $P_i \in \mathbb{R}^{N\times N}$ defined as

$$P_i = \begin{bmatrix} E_{\pi_1}\\ E_{\pi_2}\\ \vdots\\ E_{\pi_N} \end{bmatrix}, \qquad i \in \{1, \dots, N!\}, \qquad (10)$$

and the rows $E_{\pi_k} = [0, 0, \dots, 1, \dots, 0]$, with the 1 in the $\pi_k$th position. The permutation matrix $P_i$ is a nonsingular matrix with $P_i^{-1} = P_i^\top$ [START_REF] Horn | Matrix Analysis[END_REF]. For instance, relative to Figure 1 we have $x_{d2} = [x_2, x_1, x_3]^\top$ and

$$P_2 = \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}.$$

In order to study the consensus problem for (6) for any $i$, it is both sufficient and necessary to study that of any configuration topology. Moreover, we may do so by studying the error dynamics corresponding to the differences between any pair of states.
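The permutation-matrix bookkeeping in (9)-(10) is easy to verify numerically. The sketch below builds $P_2$ from the example above and checks both the reordering of the state vector and the standard inverse property of permutation matrices, $P^{-1} = P^\top$.

```python
import numpy as np

# rows E_{pi_k}: a single 1 in the pi_k-th position
P2 = np.array([[0.0, 1.0, 0.0],    # E_2 : first entry of x_d2 is x_2
               [1.0, 0.0, 0.0],    # E_1 : second entry is x_1
               [0.0, 0.0, 1.0]])   # E_3 : third entry is x_3

x = np.array([10.0, 20.0, 30.0])   # ordered state [x1, x2, x3]
x_d2 = P2 @ x                      # disordered state [x2, x1, x3]
```

The same construction extends to any of the $N!$ topologies: stack the rows $E_{\pi_1}, \dots, E_{\pi_N}$ in the order given by the sequence.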
In view of the previous discussion, without loss of generality, we focus on the study of the ordered topology depicted in Figure 3. Consensus may be established using an argument on stability of cascaded systems. To see this, let z 1 denote the vector of ordered errors corresponding to this first topology that is, z 1λ := x λ -x λ+1 , ∀ λ ∈ {1, . . . , N -1}. Then, the systems in [START_REF] Horn | Matrix Analysis[END_REF] . . . ż1N-1 = -a N -1 N (t)z 1N -1 (11) is (globally) uniformly asymptotically stable. In a fixed topology we have a λ,λ+1 (t) > 0 for all t ≥ 0 that is, the λth node in the sequence always receives information from its parent labeled λ + 1, albeit with varying intensity. The origin of the decoupled bottom equation, which corresponds to the dynamics of the root node, is uniformly exponentially stable if a N -1 N (t) > 0 for all t. Each of the subsystems in [START_REF] Olfati-Saber | Consensus problems in networks of agents with switching topology and time-delays[END_REF] from the bottom to the top is input to state stable. Uniform exponential stability of the origin {z = 0} follows provided that a λλ+1 is bounded. In compact form, the consensus dynamics becomes ż1 = A 1 (t)z 1 , z 1 = [z 11 • • • z 1 N -1 ] (12) where the matrix A 1 (t) ∈ R N -1×N -1 is defined as A 1 (t) =         -a 12 (t) a 23 (t) 0 • • • 0 0 -a 23 (t) a 34 (t) 0 . . . . . . . . . . . . . . . . . . 0 0 0 -a N-2N-1 (t) a N-1N (t) 0 0 0 • • • -a N-1N (t)         . ( 13 ) Lemma 1: Let Φ(t; t • ) = A 1 (t)Φ(t; t • ), Φ(t • ; t • ) = I N -1 , ∀t ≥ t • > 0. (14) Assume that, for every i ∈ {1, ..., N -1}, a ii+1 is a bounded persistently exciting signal that is, there exist T i and µ i > 0 such that t+Ti t a ii+1 (s)ds ≥ µ i ∀ t ≥ 0. (15) Then, there exist ᾱ > 0, α > 0 such that ||Φ(t; t • )|| ≤ ᾱe -α(t-t•) , ∀ t ≥ t • ≥ 0. 
(16)

Proof: Note that the solution of the differential equation (14) is given by \Phi(t; t_\circ) = [\phi_{ij}(t; t_\circ)]. Since A_1(t) is upper triangular, so is \Phi(t; t_\circ): \phi_{ij}(t; t_\circ) = 0 for i > j, while the diagonal entries are

\phi_{ii}(t; t_\circ) = \exp\Big(-\int_{t_\circ}^{t} a_{ii+1}(s)\, ds\Big),   (17)

which, in view of the persistency-of-excitation condition (15), satisfy |\phi_{ii}(t; t_\circ)| \leq \bar{k}_i e^{-k_i (t - t_\circ)} for some \bar{k}_i, k_i > 0. The off-diagonal entries follow by variation of constants; for each j = i+1 such that i < N-1,

\phi_{ij}(t; t_\circ) = \int_{t_\circ}^{t} \phi_{ii}(t; s)\, a_{i+1i+2}(s)\, \phi_{jj}(s; t_\circ)\, ds,   (18)

which depends on \phi_{ii} and \phi_{i+1i+1}, bounded by \bar{k}_i e^{-k_i(t-s)} and \bar{k}_{i+1} e^{-k_{i+1}(s - t_\circ)}, respectively. Consequently,

|\phi_{ij}(t; t_\circ)| \leq \frac{\bar{k}_i \bar{k}_j\, |a_{i+1i+2}|_\infty}{|k_i - k_j|}\, e^{-\min\{k_i, k_j\}(t - t_\circ)},

where, by assumption, |a_{i+1i+2}|_\infty is bounded. Proceeding similarly for the remaining super-diagonals, all elements of \Phi(t; t_\circ) are bounded in norm by a decaying exponential. \blacksquare

D. Time-varying topology

In this section we study the more general case in which not only are the interconnection gains time-varying, as in the previous section and in [START_REF] Ren | Distributed consensus in multivehicle cooperative control[END_REF], but the topology may change randomly, as long as there always is a spanning tree which lasts for at least a dwell time. For the purpose of analysis we aim at identifying, for each possible topology, a linear time-varying system of the form (12) with a stable origin, and at establishing stability of the resulting switched system. To that end, let i determine one among the N! topologies, schematically represented by a graph as shown in Figure 2. Let x_\lambda denote the state of system \Psi_\lambda; then, for the ith topology, we define the error

z_i = [z_{i1} \cdots z_{iN-1}]^\top,   (19)

z_{ik} = x_{\pi_k} - x_{\pi_{k+1}}, \qquad k \in \{1, \dots, N-1\},   (20)

where k denotes the graphical position of the agent \Psi_\lambda in the sequence \{\Psi_{\pi_k}\}_{k=1}^{N} and \pi_k indicates which agent \Psi_\lambda occupies position k, that is, \pi_k = \lambda.

Example 1: Consider two possible topologies among those shown in Figure 1, represented in more detail in Figure 4 (for i = 1) and Figure 5 (for i = 4).
Then, in the first case (i = 1), we have

z_{11} = x_{\pi_1} - x_{\pi_2} = x_1 - x_2, \qquad z_{12} = x_{\pi_2} - x_{\pi_3} = x_2 - x_3,

whereas in the second case, when i = 4,

z_{41} = x_{\pi_1} - x_{\pi_2} = x_1 - x_3, \qquad z_{42} = x_{\pi_2} - x_{\pi_3} = x_3 - x_2.

That is, for each topology i the dynamics of the interconnected agents is governed by the equation

\dot{z}_i = A_i(t) z_i   (21)

where

A_i(t) := \begin{bmatrix}
-a^i_{\pi_1\pi_2}(t) & a^i_{\pi_2\pi_3}(t) & 0                              & \cdots                      & 0 \\
\vdots               & \ddots              & \ddots                         &                             & \vdots \\
0                    & \cdots              & 0 \quad -a^i_{\pi_{N-2}\pi_{N-1}}(t) & a^i_{\pi_{N-1}\pi_N}(t) \\
0                    & \cdots              & 0                              & 0                           & -a^i_{\pi_{N-1}\pi_N}(t)
\end{bmatrix}.   (22)

According to Lemma 1, the origin \{z_i = 0\} is uniformly globally exponentially stable provided that each a^i_{\pi_k\pi_{k+1}} is bounded and persistently exciting. It is clear that consensus follows if the origin \{z_i = 0\} of any of the systems (21) (with i fixed for all t) is uniformly exponentially stable. Actually, there exist \alpha_i > 0 and \bar{\alpha}_i > 0 such that

|z_i(t)| \leq \bar{\alpha}_i e^{-\alpha_i t}, \qquad \forall\, t \geq 0.   (23)

Observing that all the systems (21) are equivalent up to a linear transformation, our main result establishes consensus under topology changes, provided that there exists a minimal dwell time. Indeed, the coordinates z_i are related to z_1 by the transformation

z_i = W_i z_1,   (24)

where W_i := T P_i T^{-1}, P_i is defined in (10), and T \in \mathbb{R}^{(N-1) \times N} is given by

T = \begin{bmatrix}
1      & -1     & 0      & \cdots & 0      & 0 \\
0      & 1      & -1     & \cdots & 0      & 0 \\
\vdots &        & \ddots & \ddots &        & \vdots \\
0      & 0      & \cdots & 1      & -1     & 0 \\
0      & 0      & 0      & \cdots & 1      & -1
\end{bmatrix}   (25)

and T^{-1} \in \mathbb{R}^{N \times (N-1)} denotes a right inverse of T. Note that the matrix W_i \in \mathbb{R}^{(N-1) \times (N-1)} is invertible for each i \leq N!, since each of its rows is a linear combination of two different rows of T^{-1}, which contains N-1 linearly independent rows. Actually, using (24) in (21) we obtain

\dot{z}_1 = \bar{A}_i(t) z_1,   (26)

where

\bar{A}_i(t) := W_i^{-1} A_i(t) W_i.   (27)

We conclude that

|z_1(t)| \leq \bar{\alpha}_i e^{-\alpha_i t}, \qquad \forall\, t \geq 0,   (28)

where, with a slight abuse of notation, \bar{\alpha}_i now stands for \|W_i^{-1}\| times the constant in (23). Based on this fact we may now state the following result for the switched error systems which model the network of systems with switching topology.
Lemma 2: Consider the switched system

\dot{z}_1 = \bar{A}_{\sigma(t)}(t) z_1   (29)

with \sigma : \mathbb{R}_{\geq 0} \to \{1, \dots, N!\} and, for each i \in \{1, \dots, N!\}, \bar{A}_i defined in (27). Let the dwell time satisfy

\tau_d > \frac{\ln \prod_{i=1}^{N!} \bar{\alpha}_i}{\sum_{i=1}^{N!} \alpha_i}.   (30)

Then, the equilibrium \{z_1 = 0\} of (29) is uniformly globally exponentially stable for any switching sequence \{t_p\} such that t_{p+1} - t_p > \tau_d for every switching time t_p.

Proof: Let t_p be an arbitrary switching instant. For all t \geq t_p such that \sigma(t) = i we have

\|z_1(t)\| \leq \bar{\alpha}_i e^{-\alpha_i (t - t_p)} \|z_1(t_p)\|, \qquad \forall\, t_p \leq t < t_{p+1}.   (31)

Since by hypothesis t_p + \tau_d \in [t_p, t_{p+1}), from (31) we have

\|z_1(t_p + \tau_d)\| \leq \bar{\alpha}_i e^{-\alpha_i \tau_d} \|z_1(t_p)\|.   (32)

Using the continuity of both the norm function and the state z_1(t), we have

\|z_1(t_{p+1})\| \leq \|z_1(t_p + \tau_d)\|   (33)

and therefore

\|z_1(t_{p+1})\| \leq \bar{\alpha}_i e^{-\alpha_i \tau_d} \|z_1(t_p)\|.   (34)

Note that, to guarantee asymptotic stability of (29), it is sufficient that for every pair of switching times t_p and t_q

\|z_1(t_q)\| - \|z_1(t_p)\| < 0   (35)

whenever p < q and \sigma(t_p) = \sigma(t_q). Now consider the sequence of switching times t_p, t_{p+1}, \dots, t_{p+N!-1}, t_{p+N!} satisfying \sigma(t_p) \neq \sigma(t_{p+1}) \neq \dots \neq \sigma(t_{p+N!-1}) and \sigma(t_p) = \sigma(t_{p+N!}), which corresponds to a switching signal under which all the N! topologies are visited. From (34) it follows that

\|z_1(t_{p+N!})\| \leq \Big(\prod_{i=1}^{N!} \bar{\alpha}_i\Big) e^{-\big(\sum_{i=1}^{N!} \alpha_i\big)\tau_d} \|z_1(t_p)\|.   (36)

To ensure that

\|z_1(t_{p+N!})\| - \|z_1(t_p)\| < 0   (37)

it is sufficient that

\Big(\prod_{i=1}^{N!} \bar{\alpha}_i\Big) e^{-\big(\sum_{i=1}^{N!} \alpha_i\big)\tau_d} - 1 < 0.   (38)

Therefore, since the norm is a non-negative function, we require

\Big(\prod_{i=1}^{N!} \bar{\alpha}_i\Big) e^{-\big(\sum_{i=1}^{N!} \alpha_i\big)\tau_d} < 1,   (39)

which holds by (30), and the proof follows. \blacksquare

Finally, in view of Lemma 2 we can make the following statement.

Theorem 1: Let \{t_p\}, p \in \mathbb{Z}_{\geq 0}, denote a sequence of switching instants and let \sigma : \mathbb{R}_{\geq 0} \to \{1, \dots, N!\} be a piecewise constant function satisfying \sigma(t) \equiv i for all t \in [t_p, t_{p+1}), with t_{p+1} - t_p \geq \tau_d and \tau_d satisfying (30). Consider the system (1) in closed loop with

u^{\sigma(t)}_{\pi_k} = \begin{cases} -a^{\sigma(t)}_{\pi_k\pi_{k+1}}(t)\,\big(x_{\pi_k} - x_{\pi_{k+1}}\big), & k \in \{1, \dots, N-1\}, \\ 0, & k = N. \end{cases}   (40)

Let the interconnection gains a^i_{\lambda\kappa}, for all i \in \{1, \dots
, N!\} and all \lambda, \kappa \in \{1, \dots, N\}, be persistently exciting. Then, the system reaches consensus with uniform exponential convergence.

III. EXAMPLE

For illustration, we consider a network of three agents, hence with six possible topologies, as shown in Figure 1. The information exchange among agents in each topology is ensured via channels with persistently exciting communication intensity; the corresponding parameters are shown in Table I. The graphs of the interconnection gains are shown in Figures 6 and 7. By applying Lemma 1, we can compute \bar{\alpha}_i and \alpha_i for each topology i; see Table II. Substituting the values of \bar{\alpha}_i and \alpha_i into (30), we find that the dwell time must satisfy \tau_d > 7.92.

We performed numerical simulations using Simulink (Matlab). In a first test, the initial conditions are set to x_1(0) = -2, x_2(0) = 1.5 and x_3(0) = -0.5; the switching signal \sigma(t) is illustrated in Figure 8. The systems' trajectories, converging to a consensus equilibrium, are shown in Figure 9. It is worth mentioning, however, that the dwell-time condition (30) only provides a sufficient stability condition. In Figure 10 we show the graph of a switching signal which does not respect the dwell-time condition and, yet, all the states converge to a common value; see Figure 11.

IV. CONCLUSIONS

We provided a convergence analysis of a consensus problem for a network of integrators with directed information flow under time-varying topology. Our analysis relies on stability theory for time-varying and switched systems. We established a minimal dwell-time condition on the switching signal.

Fig. 1. Example of 3 agents where, by changing their positions, we obtain six possible topologies.

Fig. 2. A spanning-tree topology with time-dependent communication links between \Psi_{\pi_k} and \Psi_{\pi_{k+1}}.

Fig. 3.
A spanning-tree topology with time-dependent communication links between \Psi_{\pi_k} and \Psi_{\pi_{k+1}}.

Fig. 4. A topology with 3 agents where \pi_1 = 1, \pi_2 = 2 and \pi_3 = 3.

Fig. 5. The second topology with 3 agents where \pi_1 = 1, \pi_2 = 3 and \pi_3 = 2.

Fig. 6. Persistently exciting interconnection gains for the topologies \{\Psi_1, \Psi_2, \Psi_3\}, \{\Psi_2, \Psi_1, \Psi_3\} and \{\Psi_3, \Psi_1, \Psi_2\}.

Fig. 7. Persistently exciting interconnection gains for the topologies \{\Psi_1, \Psi_3, \Psi_2\}, \{\Psi_2, \Psi_3, \Psi_1\} and \{\Psi_3, \Psi_2, \Psi_1\}.

Fig. 8. The switching signal \sigma(t).

Fig. 9. Trajectories of x_1, x_2 and x_3.

Fig. 10. The switching signal \sigma(t), which does not satisfy the dwell-time condition.

Fig. 11. Trajectories of x_1, x_2 and x_3.

Table I. Parameters of the interconnection gains (period T and excitation level \mu of each gain).

  i = 1:  a_12(t): T = 0.25, \mu = 0.5;   a_23(t): T = 0.2, \mu = 1
  i = 2:  a_21(t): T = 2.0,  \mu = 1.6;   a_13(t): T = 0.8, \mu = 0.2
  i = 3:  a_31(t): T = 0.3,  \mu = 0.1;   a_12(t): T = 0.7, \mu = 0.6
  i = 4:  a_13(t): T = 2,    \mu = 1;     a_32(t): T = 4,   \mu = 0.4
  i = 5:  a_23(t): T = 0.4,  \mu = 0.1;   a_31(t): T = 0.5, \mu = 0.4
  i = 6:  a_32(t): T = 0.5,  \mu = 0.3;   a_21(t): T = 4.2, \mu = 1.8

Table II. Parameters corresponding to the exponential bounds.

  i:              1      2      3      4      5      6
  \bar{\alpha}_i: 6.51   10.26  6.11   12.85  4.71   3.98
  \alpha_i:       0.2    0.3    0.4    0.1    0.35   0.1
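The quoted bound \tau_d > 7.92 follows directly from (30) with the constants of Table II. The sketch below verifies this, then runs a rough re-creation of the first test with an Euler integrator; the square-wave gain shapes built from the (T, \mu) pairs of Table I are an assumption (the paper's exact waveforms are only shown graphically in Figures 6 and 7), as is the choice to alternate only topologies 1 and 4.

```python
import math
import numpy as np

# Dwell-time bound (30) from the constants of Table II
alpha_bar = [6.51, 10.26, 6.11, 12.85, 4.71, 3.98]   # overshoots
alpha = [0.2, 0.3, 0.4, 0.1, 0.35, 0.1]              # decay rates
tau_d = math.log(math.prod(alpha_bar)) / sum(alpha)
print(round(tau_d, 2))   # approx 7.93, matching the paper's tau_d > 7.92

# ASSUMED gain waveforms: square waves of period T carrying area mu per
# period, consistent with the persistency-of-excitation pairs of Table I.
def square(T, mu):
    return lambda t: (2.0 * mu / T) if (t % T) < T / 2 else 0.0

topos = {1: ((1, 2, 3), [square(0.25, 0.5), square(0.2, 1.0)]),   # a_12, a_23
         4: ((1, 3, 2), [square(2.0, 1.0), square(4.0, 0.4)])}    # a_13, a_32

x = np.array([-2.0, 1.5, -0.5])     # initial conditions of the first test
dt, dwell, t = 1e-3, 8.0, 0.0       # dwell time above the computed bound
for p in range(20):                  # alternate the two chosen topologies
    chain, gains = topos[1] if p % 2 == 0 else topos[4]
    for _ in range(int(dwell / dt)):
        u = np.zeros(3)
        for k in range(2):           # control law (40); the root gets u = 0
            i_, j_ = chain[k] - 1, chain[k + 1] - 1
            u[i_] = -gains[k](t) * (x[i_] - x[j_])
        x = x + dt * u
        t += dt
print(x.max() - x.min())             # small spread: consensus reached
```

As in Figure 9, the three states converge to a common value; the final spread is far below the plotting resolution of the figure.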
Source (HAL, 2014): https://hal.science/hal-01753426/file/Murillo_19804.pdf
Architectural optimization of an epoxy-based hybrid sol-gel coating for the corrosion protection of a cast Elektron21 magnesium alloy

N. V. Murillo-Gutiérrez, F. Ansart, J.-P. Bonino, S. R. Kunst, C. F. Malfatti

Abstract

An epoxy-based hybrid sol-gel coating prepared in various architectural configurations has been studied for the corrosion protection of a cast Elektron21 magnesium alloy. A single layer of this coating presents defects, consisting of macro-pores and protuberances, which open access for corrosive species to reach the metallic substrate. These defects are suspected to result from the high reactivity of the substrate, as well as from its irregular topography, disrupted by the microstructure of the magnesium alloy itself. Hence, a sol-gel coating in a bilayer architecture is proposed, in which the first layer "inerts" the surface of the magnesium substrate and the second layer covers the defects of the first while also thickening the coating. The morphological characteristics of the sol-gel coatings were analyzed by scanning electron microscopy (SEM), and their corrosion behavior was evaluated by open circuit potential (OCP) monitoring and electrochemical impedance spectroscopy (EIS) in chloride media. It is shown that both the architectural arrangement and the individual thicknesses of the first and second layers have an important influence on the anticorrosion performance of the protective system, just as much as its global thickness.

Introduction

The lower density compared with that of aluminum (around 30% less) makes magnesium and its alloys interesting materials for the automotive and aeronautics industries, since this structural metal could allow the production of specific parts and the construction of lighter vehicles. This strategy is related to the reduction of fuel consumption of transport vehicles and the optimization of resources.
Nonetheless, these alloys are highly susceptible to corrosion when exposed to aggressive environments, resulting in the loss of their mechanical characteristics. In recent years, many magnesium alloys have been developed in order to increase both their mechanical and corrosion resistance [START_REF] Ballerini | About some corrosion mechanisms of AZ91D magnesium alloy[END_REF][START_REF] Winston Revie | Corrosion and Corrosion Control-An Introduction to Corrosion Science and Engineering[END_REF][START_REF] Zhang | Corrosion behavior of Mg Y alloy in NaCl aqueous solution[END_REF][START_REF] Wang | Evaluating the improvement of corrosion residual strength by adding 1.0 wt.% yttrium into an AZ91D magnesium alloy[END_REF], but even so it is necessary to apply a protective coating on the metallic surface in order to prevent contact with corrosive species. Many of the most efficient surface treatments for magnesium alloys use chromium-based compounds containing Cr6+ species, which are highly toxic, harmful to health and the environment, and in the process of being banned by international regulations. This background motivates the search for alternative surface treatments for magnesium alloys that exclude the use of hazardous compounds. The sol-gel route is a chemical way to produce harmless inorganic materials, generally using metal alkoxides, which may also carry organic functions (acrylic, vinyl, etc.), permitting the production of hybrid organic/inorganic materials and coatings with improved flexibility.
In particular, epoxy-based materials present interesting characteristics such as mechanical resistance and good chemical and thermal stability, superior to other types of hybrid materials [START_REF] Zheng | Inorganic organic sol gel hybrid coatings for corrosion protection of metals[END_REF][START_REF] Wen | Organic/inorganic hybrid network materials by the sol-gel approach[END_REF][START_REF] Davis | Formation of silica epoxy hybrid network polymers[END_REF]. A number of hybrid coatings have been produced for the corrosion protection of different metallic substrates such as steels [START_REF] Cambon | Effect of cerium concentration on corrosion resistance and polymerization of hybrid sol gel coating on martensitic stainless steel[END_REF], aluminum [START_REF] Malfatti | The influence of cerium ion concentrations on the characteristics of hybrid films obtained on AA2024-T3 aluminium alloy[END_REF] or zinc alloys [START_REF] Meiffren | Development of new processes to protect zinc against corrosion, suitable for on-site use[END_REF]. In the case of magnesium alloys, many authors have reported the use of this kind of coating [START_REF] Zomorodian | Anti-corrosion performance of a new silane coating for corrosion protection of AZ31 magnesium alloy in hank's solution[END_REF][START_REF] Brusciotti | Hybrid epoxy silane coatings for improved corrosion protection of Mg alloy[END_REF][START_REF] Lamaka | Novel hybrid sol-gel coatings for corrosion protection of AZ31B, Electrochim[END_REF][START_REF] Zhong | A novel approach to heal the sol-gel coating system on magnesium alloy for corrosion protection[END_REF][START_REF] Guo | Experimental study of electrochemical corrosion behaviour of bilayer on AZ31B Mg alloy[END_REF][START_REF] Zhong | Effect of cerium concentration on microstructure, morphology and corrosion resistance of cerium silica hybrid coatings on magnesium alloy AZ91D[END_REF], showing a considerable degree of protection.
Commonly, the protective coatings applied in the automotive [START_REF] Gadow | Coating system for magnesium die castings in Class A surface quality[END_REF] and aeronautics [START_REF] Du | Inorganic/organic hybrid coatings for aircraft aluminum alloy substrates[END_REF] industries consist of a series of superposed layers of different chemical nature, typically comprising a chemical or anodic conversion layer, an adhesion primer and an organic paint or finish. Many surface treatments for magnesium alloys including a sol-gel coating have been studied in combination with conversion layers [START_REF] Zucchi | Influence of a silane treatment on the corrosion resistance of a WE43 Mg alloy[END_REF][START_REF] Yong | Molybdate phosphate composite conversion coating on magnesium alloy surface for corrosion protection[END_REF], anodic coatings [START_REF] Lamaka | Complex anticorrosion coating for ZK30 magnesium alloy[END_REF][START_REF] Li | Composite coatings on a Mg Li alloy prepared by combined plasma electrolytic oxidation and sol-gel techniques[END_REF] or even multilayered sol-gel coatings [START_REF] Tan | Multilayer sol-gel coatings for corrosion protection of magnesium[END_REF][START_REF] Shi | Corrosion protection of AZ91D magnesium alloy with sol-gel coating containing 2 methyl piperidine[END_REF][START_REF] Hu | Composite anticorrosion coatings for AZ91D magnesium alloy with molybdate conversion coating and silicon solgel coatings[END_REF], in order to provide a complex "architecture" to the anticorrosion protective system. The present work proposes an architectural approach to the production of an epoxy-based sol-gel coating for the corrosion protection of a cast magnesium alloy (Mg-Nd-Gd-Zr-Zn). Usually, metallic parts obtained by low-pressure casting may contain a large amount of impurities at the surface, and also present a very irregular, rough surface resulting from contact with the mold.
Hence, a chemical etching is generally applied to this kind of part in order to remove the contaminated, rough external layer before the application of protective coatings [START_REF] Nwaogu | Influence of inorganic acid pickling on the corrosion resistance of Mg alloy AZ31 sheet[END_REF], although this surface pretreatment may expose the microstructure of the metallic substrate. On the other hand, the high reactivity of the magnesium substrate in aqueous media manifests itself when the sol contains a large proportion of water, complicating the preparation of homogeneous coatings free of pores and defects. Therefore, a bilayer epoxy-based sol-gel system is presented, in which the first sol-gel layer undergoes the chemical reactions with the magnesium alloy, thereby providing an inert surface for a second sol-gel layer that contributes to the thickening of the coating. The morphological aspects of the sol-gel systems are analyzed by scanning electron microscopy, and their electrochemical properties are studied by open circuit potential (OCP) monitoring and electrochemical impedance spectroscopy in chloride media (0.05 M NaCl solution), in order to evaluate their corrosion resistance.

Experimental

Materials

Substrate preparation

The substrate consists of a cast Elektron21 (EV31A) magnesium alloy, provided by Fonderie Messier. After heat treatment (T6 code: solution heat treatment and aging), large ingots of this alloy were cut and machined to obtain smaller coupons with dimensions of 40 × 20 × 6 mm. The composition of this magnesium alloy, as furnished by the supplier, is presented in Table 1. After degreasing with acetone, the magnesium samples were polished with SiC paper up to grade #1200, then rinsed with ethanol and dried with a stream of air. The samples then underwent an acid etching with 20 g/L HNO3 for 2 min, followed by rinsing with ethanol and drying with a flux of air.
Sols and coatings preparation

The sols were produced by mixing 3-glycidyloxypropyltrimethoxysilane (GPTMS), aluminum tri-sec-butoxide (ASB), deionized water and propanol in a molar ratio of 2:1:1:10. The mixture was kept under stirring for 2 h, followed by aging for 24 h at room temperature, before application on the magnesium samples. The viscosity of the sols was around 15 mPa s. The hybrid sol-gel coatings were obtained by the dip-coating technique, by immersing the magnesium samples into the sols and withdrawing them at a controlled speed of 50, 100, 200 or 400 mm/min, followed by a heat treatment. The heat treatment includes two steps: drying at 50 °C for 2 h and curing at 110 °C for 3 h. The protective coatings were obtained by applying one or two layers of hybrid sol-gel film, with a heat treatment applied after each layer.

Characterization techniques

The viscosity of the sols was measured before deposition on the substrates with a Lamy RM-100 rheometer. The morphology of the magnesium substrate and coatings was analyzed by scanning electron microscopy (SEM) with a JEOL JSM-6510LV microscope, using an operating voltage of 20 kV. Electrochemical impedance spectroscopy (EIS) and open circuit potential (E_ocp) monitoring were performed in a 0.05 M NaCl solution at room temperature, using a NOVA frequency response analyzer and an AUTOLAB PGSTAT 30 potentiostat. The E_ocp measurements were performed during the first hour of immersion of the samples in the corrosive solution. The measurements were performed using a three-electrode cell. The working electrode exposed area was 2 cm², with the reference and auxiliary electrodes consisting of a saturated calomel electrode (SCE) and a platinum foil electrode, respectively. The EIS spectra were acquired in potentiostatic mode in the frequency range from 100 kHz to 10 mHz, with an applied voltage amplitude of 10 mV vs. E_ocp.
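In dip coating, the deposited wet-film thickness grows with withdrawal speed. As a rough orientation only (this relation and the surface tension and density values below are assumptions for illustration, not data from this work), the classical Landau-Levich scaling h ≈ 0.94 (η v)^(2/3) / (γ^(1/6) (ρ g)^(1/2)) predicts an h ∝ v^(2/3) dependence; the cured thickness is much smaller after solvent evaporation and shrinkage.

```python
# Landau-Levich wet-film estimate for dip coating (illustrative sketch only;
# surface tension and density are ASSUMED values for a propanol-based sol).
eta = 15e-3            # Pa*s, sol viscosity reported in the text (~15 mPa s)
gamma = 0.025          # N/m, assumed surface tension
rho, g = 900.0, 9.81   # kg/m^3 (assumed) and m/s^2

def wet_thickness(v_mm_per_min):
    v = v_mm_per_min / 1000.0 / 60.0   # convert mm/min to m/s
    return 0.94 * (eta * v) ** (2.0 / 3.0) / (gamma ** (1.0 / 6.0) * (rho * g) ** 0.5)

speeds = [50, 100, 200, 400]                    # mm/min, as used for Layer 2
h = [wet_thickness(v) * 1e6 for v in speeds]    # micrometres (wet film)
print([round(x, 1) for x in h])
# Doubling the withdrawal speed multiplies the wet thickness by 2^(2/3) = 1.59
```

This power-law trend is consistent with the qualitative observation, reported below, that the global coating thickness increases with the withdrawal speed applied to the second layer.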
The reproducibility of the tests was verified by analyzing three samples for each characterization technique.

Results and discussion

Magnesium substrate

The surface of the magnesium alloy samples after acid treatment is shown in Fig. 1a. The microstructure of the alloy shows the magnesium-rich grains joined by two segregated regions. These regions correspond to neodymium-rich zones (grain boundaries) and to zirconium-rich zones (located inside the magnesium grains). In Fig. 1a, the irregularity of the alloy surface may be observed, where the zirconium-rich zones appear higher than the rest of the surface. Fig. 1b shows a cross-section SEM image of the substrate in back-scattered electron (BSE) mode, where a white region (neodymium-rich grain boundary) and a protuberance may be identified. EDX analysis reveals that the protuberance contains mainly zirconium (Fig. 1c). These zirconium-rich protuberances are presumed to form because zirconium dissolves more slowly than the magnesium grains and the neodymium-rich zones during the acid etching.

Monolayer hybrid coating

Morphological characterization

The surface and the cross-section of the monolayer hybrid coating were observed by SEM (Fig. 2). The surface view of the sample (Fig. 2a) shows the irregular surface of the hybrid coating, which presents two kinds of defects: protuberances and pits. The cross-section view in BSE mode (Fig. 2b) shows the average thickness of the hybrid coating, estimated at 5.2 ± 0.3 µm. As previously shown in Fig. 1b, protuberances created by the zirconium-rich regions may be observed. At these locations the thickness of the film is reduced, disrupting the homogeneity of the coating. The second kind of defect, the pits, may be attributed to the reactivity of the alloy with the sol, which produces hydrogen evolution and therefore creates uncovered regions.
Electrochemical measurements

The electrochemical behavior of the bare magnesium alloy and of the substrate covered with the monolayer hybrid coating (M) was analyzed by immersion in a corrosive solution containing 0.05 M NaCl. The open circuit potential (E_ocp) was followed during the first hour of immersion (Fig. 3). From the beginning of the immersion, the E_ocp value of the bare magnesium substrate rises with time, a phenomenon associated with the development of a passive layer resulting from the reaction with the electrolyte. In contrast, the potential of the substrate coated with the monolayer hybrid film stays constant from the beginning of the immersion, at a higher potential than that of the bare substrate, and remains stable for 1 h. At the end of the measurement, both the bare magnesium and the magnesium coated with the monolayer hybrid film showed an equivalent potential of around -1.56 V. This behavior indicates that the monolayer hybrid film does not represent an efficient barrier between the substrate and the medium. After the E_ocp monitoring, electrochemical impedance spectroscopy (EIS) tests were performed (Fig. 4). This technique provides information about the interfaces of an electrochemical system, as is the case in a corrosion process [START_REF] Gabrielli | Identification of Electrochemical Processes by Frequency Response Analysis[END_REF]. In particular, the impedance modulus at low frequency (Bode impedance modulus diagram) is ascribed to the global resistance of the electrochemical system [START_REF] Walter | A review of impedance plot methods used for corrosion performance analysis of painted metals[END_REF]. Moreover, the phenomena shown in the Bode phase diagrams occur at characteristic frequencies that depend on the electrochemical system under study [START_REF] Gabrielli | Méthodes électrochimiques appliquées à la corrosion-techniques dynamiques[END_REF].
It may be seen in the Bode modulus diagram that the impedance modulus of the monolayer hybrid system (M) drops considerably between 1 h and 24 h of immersion in the corrosive solution, and after 168 h this value is comparable to that of the bare magnesium substrate. This decrease is due to the progressive degradation of the coating induced by the aggressive electrolyte. For the bare magnesium substrate, a small increase in the impedance modulus was observed at 168 h of immersion, compared with 24 h, which can be attributed to the development and stabilization of a partially protective passive layer at the surface of the metallic substrate [START_REF] Zucchi | Electrochemical behaviour of a magnesium alloy containing rare earth elements[END_REF][START_REF] Guo | Investigation of corrosion behaviors of Mg 6Gd 3Y 0.4Zr alloy in NaCl aqueous solutions[END_REF]. For every immersion time (1, 24 and 168 h), the Bode phase angle diagram of the bare magnesium substrate shows two time constants, located in the middle frequency range (~20 Hz) and in the low frequency range (~100 mHz). These phenomena can be associated with the capacitive response of the passive layer of the substrate and with the charge and species exchange between the substrate and the electrolyte [START_REF] Correa | Corrosion behaviour study of AZ91 magnesium alloy coated with methyltriethoxysilane doped with cerium ions[END_REF], respectively. On the other hand, the monolayer hybrid system shows a high-frequency time constant (~100 kHz) at 1 h of immersion (Fig. 4a), which may be attributed to the barrier effect afforded by the epoxy-based coating [START_REF] Lamaka | Complex anticorrosion coating for ZK30 magnesium alloy[END_REF][START_REF] Banerjee | Electrochemical impedance spectroscopic investigation of the role of alkaline pretreatment in corrosion resistance of a silane coating on magnesium alloy ZE41[END_REF].
After 24 h of immersion, the phase angle value of this time constant is considerably reduced, and it continues to decrease progressively with time. This loss of barrier properties is related to the liquid uptake of the film, which affects its capacitance [START_REF] Brusciotti | Hybrid epoxy silane coatings for improved corrosion protection of Mg alloy[END_REF][START_REF] Conde | Polymeric sol-gel coatings as protective layers of aluminium alloys[END_REF] and progressively degrades the protective properties of the coating. At the same time (24 h), a second time constant appears in the middle frequency range (~100 Hz), the same region as for the bare magnesium substrate, which may therefore be associated with the formation of corrosion products at the metal/coating interface [START_REF] Galio | Inhibitor doped sol-gel coatings for corrosion protection of MG AZ31[END_REF][START_REF] Zanotto | Protection of the AZ31 magnesium alloy with cerium modified silane coatings[END_REF]. Moreover, after 168 h of immersion, both the bare substrate and the substrate coated with the epoxy-based hybrid film present a comparable peak around 10 Hz; in the case of the monolayer hybrid coating it may be attributed to the presence of a porous mixed layer of corrosion products and sol-gel film. At this stage, the values for the hybrid coating (modulus and phase) are comparable to those of the bare magnesium substrate, meaning that the hybrid coating has lost its protective properties. The interpretation of the EIS results leads to the conclusion that the monolayer hybrid coating offers insufficient protection for the metallic substrate when exposed to the corrosive medium for long periods of time.
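The two-time-constant behavior described above for coated samples is commonly rationalized with a nested equivalent circuit. The sketch below is an illustration only; the circuit topology and all parameter values are assumptions, not the authors' fitted model. It computes the Bode modulus and phase of Rs in series with [Cc parallel to (Rpo + (Cdl parallel to Rct))], over the same 100 kHz to 10 mHz range used in the measurements.

```python
import numpy as np

# Illustrative equivalent circuit for a coated metal (NOT a fitted model):
# Rs: electrolyte resistance; Cc, Rpo: coating capacitance and pore
# resistance; Cdl, Rct: double-layer capacitance and charge-transfer
# resistance at the metal/coating interface. All values are ASSUMED.
Rs, Rpo, Rct = 50.0, 1e4, 1e6          # ohm
Cc, Cdl = 1e-8, 1e-6                   # farad

def impedance(f):
    w = 2 * np.pi * f
    Z_inner = 1.0 / (1j * w * Cdl + 1.0 / Rct)      # Cdl || Rct
    Z_branch = Rpo + Z_inner                        # pore path
    Z_coat = 1.0 / (1j * w * Cc + 1.0 / Z_branch)   # Cc || (Rpo + Z_inner)
    return Rs + Z_coat

f = np.logspace(5, -2, 400)            # 100 kHz down to 10 mHz, as in the tests
Z = impedance(f)
mod, phase = np.abs(Z), np.degrees(np.angle(Z))
print(mod[-1], Rs + Rpo + Rct)
```

At high frequency the coating capacitance dominates, producing the "barrier" time constant; at low frequency |Z| approaches Rs + Rpo + Rct, i.e. the global resistance read from the low-frequency end of the Bode modulus diagram.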
These results can be correlated with the SEM observations of the hybrid film, which show an irregular morphology including macro-defects such as pits and protuberances, representing zones sensitive to the intrusion of corrosive species through the coating to the magnesium substrate. In order to provide homogeneous and effective protection to the substrate, the architecture of the hybrid coating was modified by the addition of a second layer of the same epoxy composition. The "bilayer" architecture consists of a first layer, Layer 1 or "inertion layer", which reacts with the magnesium substrate, partially leveling and "inerting" the surface, and a second layer, Layer 2 or "thickening layer", which covers the defects present in Layer 1 and increases the global thickness of the hybrid coating. In a first step, four bilayer systems were studied, consisting of a first epoxy-based layer (Layer 1) produced with a withdrawal speed of 200 mm/min. The second layer (Layer 2) was produced with withdrawal speeds of 50, 100, 200 and 400 mm/min, in order to obtain four bilayer systems of different global thickness, noted A-D, respectively. Each layer then underwent a heat treatment, as described in Table 2.

Morphological characterization.

The SEM observations in BSE mode of the cross-sections of the bilayer systems A-D are presented in Fig. 5. The global thickness of the coating increases with the withdrawal speed applied to the second layer. Besides, some irregularities of the magnesium substrate appear covered by the hybrid film, as may be seen for the bilayer system D, where a protuberance appears under the surface of the hybrid coating. Furthermore, the individual Layers 1 and 2 are visible for most of the bilayer coatings, allowing the measurement of their average thicknesses. Note that the thickness of Layer 1 remains constant (5.2 ± 0.3 µm) for all the systems. Fig.
6 summarizes the global thicknesses of these bilayer coatings and the individual thickness of each layer. Besides, it is important to underline that the thickness of the hybrid coating can be controlled directly on the metallic magnesium substrate, or on a surface previously coated with a sol-gel layer of the same nature. Observation of the surfaces of the different hybrid bilayer coatings by SEM (Fig. 7) shows the absence of the defects and uncovered areas previously seen on the monolayer hybrid coating. This means that the second layer of hybrid film has covered this kind of defect, even for the lowest thickness of Layer 2 (bilayer system A, 2.3 ± 0.3 µm). Moreover, the irregularities formed by protuberances are less visible as the global thickness of the coating increases, leading to more homogeneous surfaces. This is especially observed for the bilayer systems with a thickness equal to or greater than ~11 µm (coatings C and D), where the homogeneity of the hybrid film is considerably improved, meaning that these defects can be covered with a sufficient thickness of hybrid film.

Electrochemical measurements.

The E_ocp records of the bare magnesium substrate and of the substrates coated with the bilayer systems A-D during immersion in the corrosive solution (0.05 M NaCl) are presented in Fig. 8. The potential of systems A and B follows a behavior similar to that of the bare magnesium substrate, which is also comparable to that of the monolayer hybrid coating previously presented in Fig. 3. In contrast, the potential of the bilayer systems C and D lies in a region of nobler potentials, a first indication of superior corrosion resistance for these two bilayer protective systems. The EIS analysis of systems A-D after 1 h of immersion in a 0.05 M NaCl solution (Fig. 9a) shows high impedance modulus values for all the systems, compared with the bare magnesium substrate.
In addition, a high-frequency time constant (ascribed to the barrier effect of the sol-gel film) can be observed, representing the effectiveness of the protective coating during short periods of contact with the corrosive solution. The impedance modulus values increase with the global thickness of the hybrid film. However, the absorption of liquid into the hybrid film and the attack of corrosive species lead to a reduction of the impedance modulus values of the system, as may be seen after 24 h of immersion (Fig. 9b), and finally after a much longer period of 168 h (Fig. 9c), where both the low-frequency impedance modulus and the high-frequency time constant obtained for the protective systems A and B are comparable with those of the bare magnesium substrate. Moreover, the appearance of an intermediate layer of corrosion products at the substrate/coating interface is characterized by a well-defined second time constant in the middle-frequency range (about 1-100 Hz). Nevertheless, here again systems C and D show high values of both impedance modulus and phase angle, even after 168 h of immersion in the corrosive medium. This effect is more remarkable in the case of the bilayer system D, indicating that the electrochemical characteristics rise with the hybrid film thickness. Besides, for the bilayer system D it is not possible to observe the middle-frequency phenomenon seen for all the other systems and associated with film degradation and corrosion products at the substrate/coating interface, which indicates the superior barrier effect of this hybrid film system compared to the others.
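The qualitative features described above (a high-frequency time constant from the coating's barrier effect, a middle-frequency time constant from corrosion products, and a low-frequency modulus bounded by the resistive elements) can be illustrated with a generic equivalent circuit for a coated metal, Rs + [Ccoat || (Rpore + (Cdl || Rct))]. All element values below are assumptions for illustration, not parameters fitted to these spectra:

```python
# Hedged sketch: Bode response of a generic coated-metal equivalent circuit
# Rs + [Ccoat || (Rpore + (Cdl || Rct))]. Element values are illustrative only.
import cmath
import math

def z_coated(f, Rs=50.0, Ccoat=1e-8, Rpore=1e6, Cdl=1e-5, Rct=1e4):
    w = 2 * math.pi * f
    z_dl = 1 / (1j * w * Cdl + 1 / Rct)           # double layer || charge transfer
    z_branch = Rpore + z_dl                        # pore resistance in series
    z_film = 1 / (1j * w * Ccoat + 1 / z_branch)   # coating capacitance in parallel
    return Rs + z_film

freqs = [10 ** (k / 2) for k in range(-4, 11)]     # 1e-2 Hz .. 1e5 Hz
modulus = [abs(z_coated(f)) for f in freqs]
phase = [math.degrees(cmath.phase(z_coated(f))) for f in freqs]
# |Z| falls with frequency; the low-frequency limit approaches Rs + Rpore + Rct,
# while the high-frequency phase is capacitive (strongly negative).
```

In this picture, degradation of the coating (lower Rpore, higher Ccoat) lowers the low-frequency modulus and makes the middle-frequency time constant visible, mirroring the trends reported for systems A and B versus C and D.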
Although systems A-D all have a second hybrid layer (Layer 2) that successfully covers the defects observed in Layer 1, the electrochemical measurements show that the bilayer systems A and B behave similarly to the monolayer hybrid coating (M), losing their protective characteristics when in contact with the corrosive electrolyte for extended periods. However, increasing the thickness of Layer 2 to ∼5 µm or more yields higher electrochemical values for the system, and thus increases the durability of the protective hybrid coating. A global thickness of ∼11 µm may therefore be considered a threshold value for achieving effective anticorrosion protection with this epoxy-based bilayer coating. The bilayer systems presented so far all have an equivalent "inertion layer" thickness of around 5 µm; consequently, the influence of a Layer 1 of different thickness, lower or higher than 5 µm, is an interesting aspect for evaluating the corrosion resistance of these bilayer architectures. The next section of this work proposes the creation of two different thicknesses of Layer 1 for a similar thickness of Layer 2, in order to obtain two different bilayer systems. Variation of the thickness of Layer 1. To study the influence of the "inertion layer" (Layer 1) thickness, two other bilayer systems were prepared, with the parameters listed in Table 3. These bilayer coatings are called D′ and A′, since they are expected to possess a global thickness equivalent to that of systems D and A. After the OCP measurements, the bilayer system D′ samples were analyzed by EIS in the corrosive solution, showing that this protective coating possesses high impedance modulus and phase angle values in the first hours of immersion (Fig. 14a). This behavior can be associated with the high barrier effect of these hybrid films (bilayer systems D and D′, which present the same global thickness, around 18 µm).
Nonetheless, both the resistance and the phase angle of the bilayer system D′, which presents the higher Layer 1 thickness (13 µm), decrease dramatically after 24 h in contact with the aggressive solution (Fig. 14b). Furthermore, the impedance modulus values of this system are similar to those of the bare magnesium substrate after 168 h of immersion, revealing a major degradation of the protective properties of this coating. It is important to underline the difference in behavior between these systems of identical global thickness, D and D′. The bilayer system D shows high impedance modulus and phase angle values at the end of the EIS test (168 h of immersion), which may be ascribed to an efficient barrier effect. On the other hand, the bilayer system D′ not only presents a rapid diminution of the impedance modulus and phase angle values from 24 h of immersion, but also a well-defined second time constant at middle frequency, which evidences the formation of corrosion products resulting from the interaction between the electrolyte and the magnesium substrate, as mentioned before. This example shows that a thicker "inertion layer" under these conditions does not necessarily lead to an increase of the corrosion resistance. Thick sol-gel coatings (>1 µm) are susceptible to cracking (Brinker, Sol-Gel Science), and an important thickness also diminishes the evaporation rate of solvents during drying (Certhoux). In that case, the creation of a Layer 1 of 13 ± 0.3 µm thickness may represent an excessive amount of solvents that could not be completely evaporated with the current heat treatment.
Imprisoned inside the hybrid film, the solvents and water would prevent the full reticulation and consolidation of the solid network of the sol-gel film, and therefore permit the passage of liquid and corrosive species onto the metallic substrate. It is important to underline that the "inertion layer" (Layer 1) of all the bilayer systems presented here undergoes a heat treatment twice: the first is applied after deposition of Layer 1, and the second is indirectly applied at the same time as that of Layer 2. This means that the solid network of Layer 1 should be better consolidated than that of Layer 2. Even though the "inertion layer" of systems D and D′ followed an identical heat treatment, its thickness appears to be a crucial factor in the corrosion protection of the magnesium substrate. As for the "thickening layer" (Layer 2), the solvents and water are able to escape even after the drying stage of the coating, since this layer remains in direct contact with the environment. Hence, even if the "thickening layer" presents an important thickness (∼13 µm), as is the case for the bilayer system D, the solvents can evaporate easily because there is no solid barrier between this layer and the environment. On the other hand, when the thickest layer is the "inertion layer" (Layer 1), as for the bilayer system D′ (∼13 µm), the "thickening layer" (Layer 2) acts as a physical barrier that partially obstructs the evaporation of solvents and water during drying of the coating. Although these two bilayer coatings possess the same global thickness, they show a remarkable difference in anticorrosion performance, as proved by the electrochemical tests. Here, the bilayer system D presents the best electrochemical values, and thus the best corrosion performance.
With a global thickness equivalent to that of system D′, the "inertion layer" of this bilayer coating is about 5 µm thick, and would therefore contain a smaller quantity of solvents that is easier to evaporate, permitting better reticulation than an "inertion layer" of ∼13 µm. In this context, the preparation of a bilayer coating with an "inertion layer" thinner than ∼5 µm, which is the case of the bilayer system A′, helps to support the previous observations. First, the Eocp record of this coating (Fig. 15) during the first hour of immersion in the corrosive solution shows no remarkable difference from the behavior of the bilayer system A, which possesses an equivalent global thickness. Moreover, both hybrid films present an Eocp potential equivalent to that of the bare magnesium substrate and of the monolayer hybrid coating (Fig. 3). The EIS spectra of the bilayer system A′ at 1 h, 24 h and 168 h (Fig. 16) show a behavior similar to that of the bilayer system A, and the impedance modulus values of these systems decrease at similar rates with immersion time. Besides, the high-frequency time constant observed for these coatings diminishes simultaneously with immersion time, probably due to the formation of corrosion products after 24 h (depicted by a middle-frequency time constant). For these two bilayer coatings of equivalent global thickness, A and A′, the change in thickness of the "inertion layer" does not considerably modify the electrochemical properties of the system. From another point of view, both systems D′ and A′ possess an identical Layer 2 (∼5 µm), as does the previously presented system C. These three bilayer systems present an equivalent thickness of Layer 2, with an increasing thickness of Layer 1 in the order: A′, C and D′. Fig. 12 summarizes the individual layer thicknesses for these systems.
It is important to underline that the electrochemical characteristics of system A′ behave in the same way as those of the monolayer hybrid coating, insufficient to ensure the protection of the metallic substrate. In contrast, when the thickness of Layer 1 is equivalent to ∼5 µm (system C), the protective performance of the coating considerably increases. However, when an "inertion layer" of significant thickness (∼13 µm) is created, as in the case of system D′, the protective properties of the coating are severely diminished. The thickness of Layer 1 therefore seems to have a strong influence on the final properties of the bilayer coating, raising the durability of the system when its thickness is equivalent to 5 µm, as observed for system C. An important thickness (13 µm) may represent an excessive amount of solvents to evaporate, so that full reticulation of the coating is not achieved, as observed for system D′. The electrochemical results presented here suggest a strong relationship between the architecture of the bilayer systems and their anticorrosion performance. The first part of the study of the bilayer systems A-D, with a Layer 1 thickness of 5.2 ± 0.3 µm, has shown that it is necessary to give Layer 2 a thickness equal to or greater than ∼5 µm in order to achieve satisfactory anticorrosion performance. Lower thicknesses for this second hybrid layer lead to performance comparable to that of a monolayer film. Finally, in the second part of the study of the bilayer systems, it was observed that increasing the thickness of the "inertion layer" does not necessarily improve the anticorrosion performance of the system, as noted for the bilayer coating D′, which presents a Layer 1 thickness equivalent to ∼13 µm.
On the other hand, an "inertion layer" of lower thickness (∼2 µm) has no significant influence on the anticorrosion performance when compared to a bilayer system with an equivalent global thickness, as shown for the systems A and A′. An important influence of the thicknesses of both the "inertion layer" and the "thickening layer" is observed, which may be attributed to parameters such as solvent accumulation and evaporation inside Layer 1, the reticulation and condensation degree of the sol-gel network, the heat treatment performed, or even the reactivity of the magnesium substrate. Conclusions The present work reports the use of different architectural arrangements of a bilayer epoxy-based sol-gel coating for the corrosion protection of a magnesium alloy. The morphological analysis of the monolayer architecture evidences the presence of macro-defects which potentially decrease the anticorrosion protection of this coating, as shown by the EIS analysis in a chloride solution. Therefore, a second or "thickening" layer of the epoxy-based coating was added in order to cover the defects of the first or "inertion" layer, and also to increase the global thickness of the coating. First, it is shown that for an "inertion layer" 5 µm thick, the "thickening" layer must present the same thickness (5 µm) in order to increase the anticorrosion performance of the system; otherwise the bilayer system shows performance resembling that of the monolayer architecture of the epoxy-based coating. The best anticorrosion performance was obtained for the bilayer system presenting the highest global thickness (18 µm), system D. Second, the thickness of the second layer was set to a constant value of ∼5 µm while the "inertion" layer was increased to ∼13 µm, resulting in lower anticorrosion performance compared to a coating with equivalent global thickness but inverted architecture.
Here, the accumulation of solvents inside the "inertion" layer would lead to a lower reticulation degree, and thus to a diminution of the protective properties of the bilayer system. The study of the epoxy-based sol-gel film presented here shows that a single layer of this coating is not enough to ensure the protection of the magnesium alloy, and that it may be covered with a second layer of the same chemical nature. However, the architecture of this coating plays an important role in the final anticorrosion performance of the system, just as much as the thickness of the protective system.
Fig. 1. SEM image of the cast magnesium alloy after acid etching with 20 g/L of HNO3: (a) surface of the substrate; (b) cross-section (BSE mode); (c) EDX analysis for zirconium.
Fig. 2. SEM images of the cast magnesium alloy coated with the monolayer hybrid sol-gel film: (a) surface; (b) cross-section (BSE mode).
Fig. 3. Open circuit potential (Eocp) registered for the bare Mg alloy, and coated with the monolayer hybrid coating (M), during the first hour of immersion in the corrosive solution (0.05 M NaCl).
Fig. 4. Bode diagrams of the EIS spectra obtained for the bare magnesium substrate, and covered by the monolayer hybrid coating (M), after 1 h (a), 24 h (b) and 168 h (c) of immersion in a 0.05 M NaCl solution.
3.3. Bilayer hybrid sol-gel coatings
3.3.1. Variation of the thickness of Layer 2
Fig. 5. SEM images of the cross-section of the bilayer hybrid coatings A-D.
Fig. 6. Average individual layer thickness of the bilayer hybrid systems A-D.
Fig. 7. SEM images of the surface of the bilayer hybrid coatings A-D.
Fig. 8. Open circuit potential (Eocp) registered for the bilayer systems A-D during the first hour of immersion in the corrosive solution (0.05 M NaCl).
Fig. 9.
Bode diagrams of the EIS spectra obtained for the bare magnesium substrate, and covered by the bilayer systems A-D, after 1 h (a), 24 h (b) and 168 h (c) of immersion in 0.05 M NaCl.
Fig. 10. SEM images in BSE mode of the cross-section of the bilayer coatings D′ and A′.
3.3.2.1. Morphological characterization. Fig. 10 shows the SEM observation (BSE mode) of the cross-section of the bilayer systems D′ and A′, permitting the measurement of the global and individual thicknesses of the bilayer coatings. These values are plotted in Fig. 11. It may be seen that the global thicknesses of the systems D and D′ are identical (around 18 µm), and the thickness obtained for each individual layer corresponds to the withdrawal speed applied during dip-coating, even though the withdrawal speeds of Layer 1 and Layer 2 of the systems D and D′ are inverted. In other words, the global thickness of systems D and D′ is equivalent, even though the thicknesses of the "inertion layer" and the "thickening layer" are different.
3.3.2.2. Electrochemical characterization. The results of the Eocp measurements for the bilayer system D′ during the first hour of immersion in the corrosive solution are presented in Fig. 13, and compared with those obtained for the bilayer system D. With an identical global thickness, these bilayer systems (D and D′) show a similar potential behavior during immersion, located in a nobler region than that of the bare magnesium alloy and of the bilayer systems A and B previously presented. The high potential values presented by the system D′ denote an efficient barrier effect that temporarily prevents the passage of the electrolyte onto the metallic substrate.
Fig. 11. Average individual layer thickness of the bilayer hybrid systems D and D′, A and A′.
Fig. 12. Average individual layer thickness of the bilayer hybrid systems A′, C and D′.
Fig. 13.
Open circuit potential (Eocp) registered for the bilayer systems D and D′ during the first hour of immersion in the corrosive solution (0.05 M NaCl).
Fig. 14. Bode diagrams of the EIS spectra obtained for the bare magnesium substrate, and covered by the bilayer systems D and D′, after 1 h (a), 24 h (b) and 168 h (c) of immersion in 0.05 M NaCl.
Fig. 15. Open circuit potential (Eocp) registered for the bilayer systems A and A′ during the first hour of immersion in the corrosive solution (0.05 M NaCl).
Fig. 16. Bode diagrams of the EIS spectra obtained for the bare magnesium substrate, and covered by the bilayer systems A and A′, after 1 h (a), 24 h (b) and 168 h (c) of immersion in 0.05 M NaCl.

Table 1
Chemical composition of the cast magnesium alloy Elektron21.
Element   Nd    Gd    Zr    Zn    Other rare earths   Mg
Wt.%      3.1   1.7   1.0   0.5   <0.4                Balance

Table 2
Parameters used for the production of the bilayer sol-gel coatings A-D.
Hybrid coating      Layer 1 withdrawal speed (mm/min)   Layer 2 withdrawal speed (mm/min)
Monolayer   M       200                                 -
Bilayer     A       200                                 50
            B       200                                 100
            C       200                                 200
            D       200                                 400

Table 3
Parameters used for the production of the bilayer sol-gel coatings D′ and A′.
Hybrid coating      Layer 1 withdrawal speed (mm/min)   Layer 2 withdrawal speed (mm/min)
Bilayer     D′      400                                 200
            A′      50                                  200

Acknowledgments
The present work was carried out as part of the CARAIBE project. The FDA and the OSEO are gratefully acknowledged for their financial support of this project. The authors are grateful to the collaborators of this project: Liebherr, Turbomeca, Eurocopter, Mecaprotec, and the Institut Carnot CIRIMAT. A special acknowledgment goes to the LAPEC of the UFRGS for its research contribution to this work.
INTRODUCTION
Soil organic matter (SOM) management is recognized as a cornerstone for successful farming in most tropical areas, with or without the application of mineral fertilizers (Merckx). Several experiments have demonstrated the direct or indirect positive effects of SOM on chemical, physical and biological properties of soil related to plant response (Sanchez; Pieri). Moreover, SOM is an essential reservoir of carbon (C), and SOM management can have significant implications for the global C balance and thus for climate change (Craswell). In many rural areas of the tropics, the environmental challenge consists of reducing deforestation, increasing organic matter storage in cultivated soils, and reducing soil erosion. Therefore, under the economic conditions prevailing in developing countries, maintaining soil fertility and meeting the environmental challenge require land-use practices that include high levels of organic inputs and soil organic C sequestration (Feller). Natural fallowing has long been the main practice to maintain soil fertility in tropical areas. However, as its effects only become significant after a period of at least 5 years, natural fallowing is no longer possible in the context of increasing population. Such is precisely the case in southern Benin, where the population density is 300 to 400 inhabitants km-2 (Azontonde).
The benefits of legume-based cover crops in Africa (in regions with annual rainfall > 800 mm) as an alternative to natural fallow, to control weeds and soil erosion and to enrich soil organic matter and N, are widely recognized (Voelkner; Raunet; Carsky). In southwestern Nigeria, higher maize (Zea mays) yields were obtained in live-mulch plots under Centrosema pubescens or Psophocarpus palustris than in conventionally tilled and no-till plots for four consecutive seasons (Akobundu). The effect of relay-cropping maize through Mucuna pruriens (var. utilis) was assessed in southern Benin from 1988 to 1999 in terms of plant productivity and soil fertility (Azontonde). The relay-cropping system (M) was compared with a traditional maize cropping system without any input (T), and with a maize cropping system with mineral fertilizers (NPK). This paper focuses on changes in soil C during the period of the experiment in relation to residue biomass C returned to the soil, runoff and soil erosion losses, and loss of C with erosion.
MATERIALS AND METHODS
Description of the Site and Treatments
The experiment was conducted from 1988 to 1999 at an experimental farm at Agonkanmey (6°24′N, 2°20′E), near Cotonou in southern Benin, in an area of low plateaus. The climate is subhumid tropical with two rainy seasons (March-July and September-November).
Mean annual rainfall is 1200 mm and mean annual temperature is 27 °C. The soils are classified as Typic Kandiustult (Soil Survey Staff 1994) or Dystric Nitisols (FAO-ISRIC-ISSS 1998), and have a sandy loam surface layer overlying a sandy clay loam layer at about 50 cm depth. Most of the land is cultivated to maize (Zea mays), beans (Vigna sp.), cassava (Manihot esculenta), or peanuts (Arachis hypogea), often associated with oil palm (Elaeis guineensis). The study was conducted on three 30 × 8 m plots on a 4% slope. These demonstration plots were not replicated, as replication is usually difficult in long-term experiments (Shang), especially when these include runoff plots. Three cropping systems were compared: T (traditional), maize without any input; NPK, maize with mineral fertilizers (200 kg ha-1 of NPK 15-15-15, and 100 kg ha-1 of urea); M, relay-cropping of maize and a legume cover crop, Mucuna pruriens var. utilis, with no fertilizer. Maize (var. DMR) was cropped during the first rainy season with shallow hoe tillage by hand (hoeing depth was about 5 cm). In the M plot, maize was sown through the mucuna mulch from the previous year. Mucuna was sown one month later, and once maize had been harvested, its growth as a relay crop continued until the end of the second (short) rainy season. During this short rainy season, the T and NPK treatments were maintained as natural fallow. Additional information on the site and soil properties has been provided by Azontonde.
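For reference, the nutrient input of the NPK treatment can be back-calculated from the stated application rates, assuming the standard fertilizer grades (NPK 15-15-15 supplies 15% each of N, P2O5 and K2O; urea is 46% N):

```python
# Nutrient input of the NPK treatment, from the rates stated in the text.
# Grades assumed: NPK 15-15-15 = 15 % N, P2O5 and K2O; urea = 46 % N.
npk_rate, urea_rate = 200.0, 100.0               # kg/ha, as given in the text

n_applied = npk_rate * 0.15 + urea_rate * 0.46   # kg N/ha
p2o5_applied = npk_rate * 0.15                   # kg P2O5/ha
k2o_applied = npk_rate * 0.15                    # kg K2O/ha
```

This works out to about 76 kg N, 30 kg P2O5 and 30 kg K2O per hectare, under the stated grade assumptions.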
Soil and Plant Sampling
Undisturbed soil profile samples were collected: (1) in March, June, August, and October 1988 and 1995, at 18 locations per plot for 0 to 10, 10 to 20, and 20 to 40 cm depths, using 0.2-dm3 soil cores; and (2) in November 1999 at three locations per plot, for 0 to 10 and 10 to 20 cm depths in two replicates, and for 20 to 30, 30 to 40, and 50 to 60 cm depths in one replicate, using 0.5-dm3 soil cores. Soil samples were also obtained with a knife at different depths along the profile walls. Soil bulk density (Db) was determined after oven-drying core samples, whereas the other samples were air-dried, sieved (2 mm) and ground (<0.2 mm) for C and N analyses. Aboveground biomass of maize and mucuna was determined every year from five replicates (1 × 1 m) at maize harvest (August) and at mucuna maximum growth (October), respectively. In 1995, following the same pattern, roots of maize and mucuna were collected for 0 to 10, 10 to 20, and 20 to 40 cm depths, and hand-sorted (Azontonde). Annual root biomass was calculated using the ratio of below- to aboveground biomass determined in 1995, and the annual aboveground biomass. Sampling of the aboveground biomass of weeds was done in November 1999 at nine locations per plot, using a 0.25 × 0.25-m frame. Litter was simultaneously and similarly sampled. Root sampling was also carried out in November 1999 on six 0.25 × 0.25 × 0.30-m monoliths per plot: monoliths were cut into three layers (corresponding to 0-10, 10-20, and 20-30 cm depths), and visible roots were hand-sorted. With respect to the vegetation cover, we assumed that roots and litter sampled in T and NPK originated from weeds, whereas those sampled in M originated from mucuna. All plant samples were dried at 70 °C, weighed for biomass measurement, and finely ground for C determination.
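The root-biomass estimate described above (applying the 1995 below- to aboveground ratio to each year's measured aboveground biomass) can be sketched as follows; the numbers are hypothetical placeholders, not the study's data:

```python
# Sketch of the annual root-biomass estimate: the 1995 below/above ratio is
# applied to each year's measured aboveground biomass. Values are hypothetical.

def annual_root_biomass(aboveground, below_1995, above_1995):
    ratio = below_1995 / above_1995          # below- to aboveground ratio (1995)
    return {year: ab * ratio for year, ab in aboveground.items()}

above = {1996: 4.0, 1997: 4.5, 1998: 5.0}    # Mg ha-1 yr-1 (hypothetical)
roots = annual_root_biomass(above, below_1995=0.9, above_1995=4.5)
```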
Carbon and Nitrogen Determination, and Other Analyses
Total C content (Ct) of soil samples collected in 1988 and 1995 was determined by the Walkley and Black method (WB), and total N content (Nt) by the Kjeldahl method. Both Ct and Nt of soil samples collected in 1999 were determined by the dry combustion method (DC) using an Elemental Analyzer (Carlo Erba NA 1500). Ct was analyzed on 60 samples using both the WB and DC methods, leading to a relationship (r = 0.971) that was used to convert WB data into DC data. All Ct data are hereafter expressed on a DC basis. The C content of plant samples was determined by dry combustion using an Elemental Analyzer (CHN LECO 600). Particle-size analysis was performed by the pipette method after removal of organic matter with H2O2 and dispersion by Na-hexametaphosphate. Soil pH in water was determined using a 1:2.5 volumetric soil:solution ratio.
Determinations of Runoff, Soil Losses, and Eroded Carbon
Each plot was surrounded by half-buried metal sheets and fitted with a collector draining runoff and sediments toward two covered tanks set up in series. When the first tank was full, additional flow moved through a multi-divisor tank into the second tank, both with a capacity of 3 m3. Runoff and soil loss data were collected from 1993 to 1997. Runoff volume (m3) was assessed on every plot after each rainfall event or sequence of events, by measuring the volume of water in each tank and multiplying it by a coefficient depending on the divisors. This runoff volume was converted to depth on the basis of the plot area. The annual runoff rate (mm mm-1) was defined as the ratio of annual runoff depth to annual rainfall, and the mean annual runoff rate as the ratio of runoff depth to rainfall over five years. The amount of dry coarse sediments (Mg) was deduced by weighing the wet coarse sediments collected in the first tank, and oven-drying aliquots.
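The volume-to-depth conversion and the runoff-rate definition above can be sketched with the plot dimensions given in the text (event volumes below are hypothetical):

```python
# Converting collected runoff volume to depth, then to an annual runoff rate,
# following the definitions in the text. Event volumes are hypothetical.
PLOT_AREA_M2 = 30 * 8                 # runoff plot dimensions given in the text

def runoff_depth_mm(volume_m3):
    # depth (m) = volume / area; x 1000 to express it in mm
    return volume_m3 / PLOT_AREA_M2 * 1000.0

event_volumes = [0.6, 1.2, 0.3]       # m3 collected per event (hypothetical)
annual_depth = sum(runoff_depth_mm(v) for v in event_volumes)   # mm
annual_rainfall_mm = 1200.0           # mean annual rainfall at the site
annual_runoff_rate = annual_depth / annual_rainfall_mm          # mm mm-1
```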
The quantity of suspended sediments (Mg) was assessed by flocculation and oven-drying of aliquots collected from each tank. Annual soil losses (Mg ha-1 yr-1) were computed as the sum of dry coarse and suspended sediments over one year, and averaged over five years to calculate mean annual soil losses. Annual eroded C (Mg C ha-1 yr-1) was calculated as the product of annual soil losses and the C content of sediments. Sediment C content was not measured, but was estimated as the product of soil Ct (at 0-10 cm depth, for the year under consideration) and an enrichment ratio (Starr et al. 2000). Soil Ct for the year under consideration was interpolated from the soil Ct measurements carried out in 1988, 1995, and 1999. The enrichment ratio, defined as the ratio of Ct in sediments to that in the soil (0-10 cm depth), was estimated from data in the literature: on light-textured Ultisols and Oxisols under maize cultivation (with mineral fertilizers) in southern and northern Ivory Coast, with 2100- and 1350-mm annual rainfall, respectively, the C enrichment ratios measured in runoff plots by Roose (1980a, 1980b) were 1.9 (7% slope) and 1.4 (3% slope), respectively. Thus, a C enrichment ratio of 1.6 was assumed for the maize plots (T and NPK).
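The eroded-C estimate described above (sediment C content taken as topsoil Ct times an enrichment ratio, then multiplied by the annual soil loss) reduces to a short calculation; the input values below are illustrative, not the study's measurements:

```python
# Eroded-C estimate as described in the text: sediment C content = topsoil Ct
# x enrichment ratio, multiplied by annual soil loss. Inputs are illustrative.

def eroded_c_mg_ha(soil_loss_mg_ha, ct_g_kg, enrichment_ratio):
    sediment_c_g_kg = ct_g_kg * enrichment_ratio   # C content of sediments
    # g C per kg sediment equals kg C per Mg sediment; divide by 1000 for Mg C
    return soil_loss_mg_ha * sediment_c_g_kg / 1000.0

# enrichment ratios assumed in the text: 1.6 for maize plots, 3 for maize-mucuna
c_maize = eroded_c_mg_ha(soil_loss_mg_ha=10.0, ct_g_kg=8.0, enrichment_ratio=1.6)
c_mucuna = eroded_c_mg_ha(soil_loss_mg_ha=1.0, ct_g_kg=15.0, enrichment_ratio=3.0)
```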
In the absence of literature data regarding cover crops, the C enrichment ratio under maize-mucuna (M) was assumed similar to those measured in runoff plots having comparable soil cover conditions: for two light-textured Oxisols under bush savannas in northern Ivory Coast, with 1200- and 1350-mm annual rainfall, respectively, the C enrichment ratios measured by Roose were 2.6 (4% slope) and 3.4 (3% slope), respectively; and for a sandy Ultisol under banana plantation in southern Ivory Coast (14% slope, 1800-mm annual rainfall), the C enrichment ratio was 3 (Roose). Averaging these data, a C enrichment ratio of 3 was assumed for the maize-mucuna (M) rotation. Dissolved C in runoff was neither measured nor taken into consideration.
Statistical Analyses
Differences in mean Ct and Ct stocks were tested by a Student unpaired t-test. Differences in mean annual runoff rates, soil losses, and eroded C were tested by a paired t-test. In both cases, no assumptions were made on normality or variance equality (Dagnélie).
RESULTS
General Properties of Soils (Table 10.1)
The clay (<2 µm) content of the soil ranged between 110 and 150 g kg-1 for the 0 to 10 cm depth in 1988, and it increased between 1988 and 1999 in T (50%) but not in the NPK and M treatments (increase <15%).
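The two tests named in the Statistical Analyses paragraph above can be sketched with stdlib-only implementations (in practice a statistics package such as scipy would be used; the sample data below are hypothetical):

```python
# Minimal stdlib sketch of the two t statistics named in the text:
# an unpaired (two-sample) Student t and a paired t. Data are hypothetical.
import math
import statistics

def unpaired_t(a, b):
    """Student t statistic for two independent samples (pooled-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def paired_t(a, b):
    """Student t statistic for paired samples (e.g. same rainfall events)."""
    d = [x - y for x, y in zip(a, b)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

t_unpaired = unpaired_t([9.1, 8.7, 9.4], [5.0, 5.4, 4.8])  # e.g. Ct, two plots
t_paired = paired_t([2.5, 5.0, 1.2], [1.0, 2.0, 0.5])      # e.g. runoff, paired
```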
The clay content also increased with depth. Moreover, clay content in 1999 was higher at 0 to 10 cm in T than at 10 to 20 cm in the NPK and M treatments. The sand (> 50 µm) content was between 600 and 800 g kg-1 down to 20 cm depth, mainly in the form of coarse sand (> 200 µm) (data not shown). Soil pH was acidic (< 6) and decreased between 1988 and 1999, especially in the T and NPK treatments (-0.5 over a decade).

Soil Carbon

Total soil carbon content Ct (g C kg-1 soil) was determined through 18- and 3-replicate sampling in March 1988 and November 1999, respectively (Table 10.1). The validity of the latter was assessed using the 18-replicate sampling done in October 1995 as a reference: following Dagnélie and Shang et al., at the 95% confidence level, irrespective of the plot and the depth, three-replicate sampling in 1995 would have led to a less than 8% relative error in Ct estimation. Thus, Ct determined in 1999 by three-replicate sampling was representative of the mean value of the plot. Similarly, the Ct stock (Mg C ha-1) estimated in November 1999 was representative of the whole plot area. Differences in Ct between plots were negligible (< 2% at 0-20 cm) in March 1988. Between March 1988 and November 1999, Ct increased considerably at 0 to 20 cm depth in M (90%, p < 0.01) but changed only slightly in T (-8%) and NPK (3%), and at 20 to 40 cm depth (changes < 20%). As a consequence, in November 1999 Ct at 0 to 20 cm depth was much greater in M than in the T (100%, p < 0.01) and NPK (80%, p < 0.05) treatments. Differences between plots were rather small below this depth, as were differences between the NPK and T treatments (< 30% in general).
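The representativeness argument above rests on the t-based relative error of a mean estimated from n replicates, rel_err = t(0.975, n-1) × CV / √n. A minimal sketch, with a hypothetical 3% coefficient of variation (the 1995 replicate data are not reproduced here, so this value is illustrative only):

```python
# t-based relative error of a replicate mean; the 3% CV below is hypothetical,
# chosen only to illustrate the order of magnitude of the < 8% claim above.
import math

# Two-sided 95% Student t critical values for the degrees of freedom used here.
T_CRIT = {2: 4.303, 17: 2.110}

def relative_error(cv, n):
    """Half-width of the 95% confidence interval of the mean, relative to the mean.

    cv : coefficient of variation of individual replicates (SD/mean)
    n  : number of replicates
    """
    return T_CRIT[n - 1] * cv / math.sqrt(n)

print(round(relative_error(0.03, 3), 3))   # 3 replicates  -> ~0.075, i.e. < 8%
print(round(relative_error(0.03, 18), 3))  # 18 replicates -> ~0.015
```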
Changes in Ct stock (Mg C ha-1) at 0 to 40 cm depth followed a similar pattern: small initial differences between plots (< 7%); between March 1988 and November 1999, slight changes in T and NPK (< 15%) but a considerable increase in M (50%, p < 0.01); and a higher final Ct stock in M than in the T (70%, p < 0.01) and NPK (45%, p < 0.05) treatments. The Ct stock at 0 to 40 cm depth finally attained 24, 29, and 41 Mg C ha-1 in T, NPK, and M, respectively. Between 1988 and 1999, mean (± standard deviation) annual changes in Ct stock were 0.1 (± 0.1), 0.2 (± 0.4), and 1.4 (± 0.4) Mg C ha-1 yr-1 in T, NPK, and M, respectively, at 0 to 20 cm depth; and -0.2 (± 0.1), 0.2 (± 0.5), and 1.3 (± 0.5) Mg C ha-1 yr-1, respectively, at 0 to 40 cm depth.

Residue Biomass

The average annual residue biomass (dry matter) returned to the soil in T, NPK, and M was 8, 13, and 19.9 Mg ha-1 yr-1, with 35, 72, and 82% as aboveground biomass, respectively (Table 10.2). Mean annual residue C added was 3.5, 6.4, and 10 Mg C ha-1 yr-1, with 39, 74, and 84% as aboveground biomass, respectively (aboveground biomass had a slightly higher C content than roots). Returned C mainly originated from weeds in T (55% as roots and 17% as aboveground biomass), which represented 44 and 92% of aboveground and belowground residue C, respectively. In contrast, returned C in NPK was mainly from maize (61% as aboveground biomass and 14% as roots). In M, maize and mucuna accounted for similar amounts of residue C, either as aboveground biomass (about 40% each) or as roots (8% each). Moreover, maize residue biomass C was of the same order of magnitude in the NPK and M treatments (ca. 5 Mg C ha-1 yr-1).

Runoff, Soil Losses, and Eroded Carbon

Annual rainfall ranged between 1000 and 1558 mm, and averaged 1200 mm between 1993 and 1997 (Table 10.3).
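The mean annual changes in Ct stock quoted above can be cross-checked by a simple endpoint calculation from the 1988 and 1999 stocks of Table 10.1. The 11-year duration and the use of endpoint values only are assumptions of this sketch (the chapter's own means also draw on the intermediate 1995 sampling, so small rounding differences are expected):

```python
# Endpoint check of mean annual Ct stock changes (Mg C ha-1 yr-1),
# assuming an 11-year span (1988-1999) and using Table 10.1 endpoints only.
YEARS = 11
stocks = {  # (Ct stock 1988, Ct stock 1999) in Mg C ha-1
    ("T", "0-20"): (13.6, 14.5), ("NPK", "0-20"): (14.6, 17.0), ("M", "0-20"): (13.8, 28.7),
    ("T", "0-40"): (25.9, 24.2), ("NPK", "0-40"): (27.0, 28.8), ("M", "0-40"): (27.7, 41.4),
}
reported = {  # values quoted in the text, Mg C ha-1 yr-1
    ("T", "0-20"): 0.1, ("NPK", "0-20"): 0.2, ("M", "0-20"): 1.4,
    ("T", "0-40"): -0.2, ("NPK", "0-40"): 0.2, ("M", "0-40"): 1.3,
}
for key, (c1988, c1999) in stocks.items():
    rate = (c1999 - c1988) / YEARS
    assert abs(rate - reported[key]) <= 0.1  # endpoint rate agrees within 0.1
    print(key, round(rate, 2))
```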
Mean annual runoff rates in the T, NPK, and M treatments were 0.28, 0.12, and 0.08 mm mm-1, and mean annual soil losses were 34.0, 9.3, and 2.9 Mg ha-1 yr-1, respectively. Using the C enrichment ratios of sediments determined under similar soil and climate conditions (Roose 1980a, b), mean eroded C was estimated at 0.3, 0.1, and 0.1 Mg C ha-1 yr-1 in the T, NPK, and M treatments, respectively. In the plots vulnerable to erosion, eroded C was thus of the same order of magnitude as the changes in Ct stock at 0 to 40 cm depth: -0.3 vs. -0.2 Mg C ha-1 yr-1 in T, and -0.1 vs. 0.2 Mg C ha-1 yr-1 in NPK. In contrast, eroded C in M was negligible compared with the change in Ct stock: -0.1 vs. 1.3 Mg C ha-1 yr-1. Moreover, mean annual runoff rate and soil losses were significantly higher in T than in NPK and higher in NPK than in M; and eroded C was higher in T than in the NPK and M (p < 0.01) treatments. Additionally, mean annual runoff rate, soil losses, and eroded C increased with increasing annual rainfall.

DISCUSSION

Changes in Soil Carbon

At the end of our experiment, the Ct stock at 0 to 40 cm depth was 24 Mg C ha-1 under unfertilized maize, 29 Mg C ha-1 under fertilized maize, and 41 Mg C ha-1 under the maize-mucuna rotation. Elsewhere in southern Benin, in similar soil conditions, Djegui reported Ct stocks at 0 to 35 cm depth of 27 Mg C ha-1 under oil palm plantation, 30 Mg C ha-1 under food crops (with fallow), and 48 Mg C ha-1 under forest. The data on Ct stock presented herein are consistent with other published data (Table 10.4).
For an Alfisol in southwestern Nigeria, rates of 0.2 Mg C ha-1 yr-1 were recorded at 0 to 10 cm depth under fertilized maize (Lal 2000), vs. 0.3 Mg C ha-1 yr-1 in NPK; in Brazilian Ultisols and Oxisols, rates of around 1 Mg C ha-1 yr-1 were measured at 0 to 20 cm depth under long-term no-till cropping systems (Bayer et al. 2001; Sá et al. 2001), vs. 1.4 Mg C ha-1 yr-1 in M; and in a Nigerian Alfisol, rates beyond 2 Mg C ha-1 yr-1 have even been measured at 0 to 20 cm depth under a 2-year Pueraria cover (Lal 1998). These data confirm that residue mulching increases Ct stock in tropical soils, especially in cropping systems including legume cover crops.

Residue Biomass

The high rate of Ct increase in M resulted first from the high residue biomass returned to the soil, which averaged 20 Mg ha-1 yr-1 (dry matter). The aboveground biomass of mucuna was 8 Mg ha-1 yr-1, within the range of published data: 6 to 7 Mg ha-1 yr-1 in 1-year mucuna fallows in Nigeria (Vanlauwe et al.) and an average of 11 Mg ha-1 yr-1 in mucuna-maize systems in Honduras (> 2000-mm annual rainfall; Triomphe 1996b). The ratio of change in Ct stock to residue C measured in these plots also agreed with data in the literature: in 12-year no-till maize-legume rotations on a sandy clay loam Ultisol in Brazil, the Ct stock increase at 0 to 17.5 cm depth represented 11 to 15% of aboveground residue C (Bayer et al. 2001), vs. 15% in M (and 5% in NPK).
In contrast, in long-term no-till cereal-legume rotations on clayey Oxisols, also in Brazil, the increase in Ct stock at 0 to 40 cm depth represented 22 to 25% of total residue C (Sá et al. 2001), vs. 12% in M (and 3% in NPK). This difference confirms the role of clay content in C sequestration through the development of stable aggregates and hence organic matter protection (Feller and Beare). In the plots that were left under natural fallow during the short rainy season, weeds represented an important proportion of the residue biomass, i.e., 77% in T and 29% in NPK. Weeds represented about 50% of the aboveground residue biomass in T, as was also the case in the nonfertilized maize plots studied in Nigeria (Kirchhof and Salako 2000). These data underline the need for systematic measurements of weed biomass when it represents a noticeable proportion of the biomass returned to the soil. In our experiment, weeds were sampled on one day only, and it is likely that this led to some uncertainties. Weed biomass was negligible in M: the proportions of aboveground residue biomass for maize, mucuna, and weeds were 49, 51, and 0%, respectively. Similarly, these proportions were 49, 42, and 9%, respectively, in the 1-year maize-mucuna plots studied in Nigeria (Kirchhof and Salako 2000). Indeed, Carsky et al. reported that weed suppression was often cited as the reason for the adoption of mucuna fallow systems in Africa.

Nitrous Oxide Emissions

Use of nitrogenous fertilizers also affects nitrous oxide (N2O) emissions, which can be roughly estimated using Equation 10.1 (Bouwman):

N-N2O emissions (kg ha-1 yr-1) = 1 + [0.0125 × N fertilizer (kg ha-1 yr-1)]   (10.1)
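The order-of-magnitude estimates built on Equation 10.1 can be sketched as follows. The 44/28 (N to N2O) and 12/44 (CO2 to C) mass conversions and the global warming potential of about 300 (IPCC 2001) are standard; this is an illustrative calculation, not necessarily the chapter's exact procedure:

```python
# Rough N2O estimate (Equation 10.1) and its CO2-C equivalent.

def n2o_n_emissions(n_input):
    """Equation 10.1: N-N2O emissions (kg ha-1 yr-1) for an N input (kg ha-1 yr-1)."""
    return 1.0 + 0.0125 * n_input

def co2_c_equivalent(n2o_n):
    """CO2-C equivalent (Mg C ha-1 yr-1) of an N-N2O flux (kg N ha-1 yr-1)."""
    n2o = n2o_n * 44.0 / 28.0            # N2O-N -> N2O mass
    co2 = n2o * 300.0                    # global warming potential of ~300
    return co2 * 12.0 / 44.0 / 1000.0    # CO2 -> C, and kg -> Mg

npk = co2_c_equivalent(n2o_n_emissions(76))    # 76 kg N ha-1 yr-1 of mineral N (NPK)
muc = co2_c_equivalent(n2o_n_emissions(250))   # ~250 kg N ha-1 yr-1 from mucuna residues (M)
print(round(npk, 2), round(muc, 2))            # ~0.25 and ~0.53 Mg C ha-1 yr-1
```

These values match the "more than 0.2" and "0.5" Mg C-CO2 ha-1 yr-1 figures discussed next.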
In NPK, N fertilizer was applied at the rate of 76 kg N ha-1 yr-1 (Azontonde et al.). Following Equation 10.1, this resulted in emissions of 2 kg N-N2O ha-1 yr-1. As the global warming potential of N2O is about 300 times that of CO2 (IPCC, 2001), these N2O emissions were equivalent to more than 0.2 Mg C-CO2 ha-1 yr-1 and thus offset the Ct increase (0.2 Mg C ha-1 yr-1). In M, mucuna residues supplied the soil with more than 250 kg N ha-1 yr-1 (Azontonde et al.). In this case, Equation 10.1 led to an overestimation of N2O emissions, as it was established from a set of experiments excluding legume cover crops, which provide N that is less directly available than mineral fertilizers. It may nevertheless give an order of magnitude: following Equation 10.1, the N supply by mucuna residues resulted in emissions of 4 kg N-N2O ha-1 yr-1, equivalent to 0.5 Mg C-CO2 ha-1 yr-1 (vs. a Ct increase of 1.3 Mg C ha-1 yr-1). Though overestimated, these data indicate that, from an environmental point of view, the Ct increase in soils under legume cover crops could be partly offset by N2O emissions.

Runoff, Soil Losses, and Eroded Carbon

As compared with T, mean annual runoff rate and soil losses were 57 and 73% lower in NPK, respectively, and 71 and 91% lower in M, respectively. Protection of the soil surface by vegetation and residues dissipates the kinetic energy of rainfall and has an important influence on the reduction of runoff and erosion (Wischmeier and Smith). Thus, groundcover by the mucuna mulch was probably the main reason for the lower runoff and soil losses in M than in the T and NPK treatments.
Similarly, but to a lesser extent, it is likely that, owing to its larger biomass, fertilized maize provided a better groundcover than unfertilized maize. Additionally, residue return determines an increase in SOM that favors aggregate stability (Feller et al.), thus preventing the detachment of easily transportable particles, and thereby reducing surface clogging, runoff, and erosion (Le Bissonnais 1996). Therefore, higher Ct also resulted in less runoff and erosion in M than in NPK, and less in NPK than in the T treatment. With respect to runoff plots from tropical areas cropped with maize (or sorghum), comparisons with published data show that the annual runoff rate was high (> 0.25 mm mm-1) in T and under humid conditions (2100-mm annual rainfall); soil losses were high (> 20 Mg ha-1 yr-1) under humid or semiarid conditions (500-mm annual rainfall) and in nonfertilized plots (Table 10.5). In contrast, the runoff rate was low (< 0.10 mm mm-1) on steep slopes with clayey soils (Kenya) and under maize-mucuna (M); soil losses were low (< 5 Mg ha-1 yr-1) in the M treatment. Thus, runoff and erosion increased with an increase in annual rainfall and/or with a decrease in soil surface cover (absence of mulch, nonfertilized plots, semiarid conditions), in accordance with usual observations (Wischmeier and Smith; Roose). Under nonfertilized maize in Kenya, low runoff rates (0.02 mm mm-1) resulted in high soil losses (29 Mg ha-1 yr-1); assuming that the clayey Alfisol in this study had a stable structure with a high infiltration rate, the steep slopes (30%) probably determined the nonselective transport of aggregates in the absence of adequate groundcover.
Mean annual C erosion was estimated at 0.3, 0.1, and 0.1 Mg C ha-1 yr-1 in the T, NPK, and M treatments, respectively. Though mean soil losses were three times higher in NPK than in M, eroded C was similar in both treatments, probably because of the higher C content of the surface soil (which supplies the sediments) and the higher C enrichment ratio of sediments in M than in NPK. Indeed, several experiments have indicated that the C enrichment ratio increases as soil losses decrease (Roose 1980a, b). Thus, the mucuna mulch was less effective in reducing the amount of C erosion than in reducing runoff and soil losses; but it was very effective in reducing the proportion of topsoil C that was eroded, which was much lower in M than in NPK. This underlines the interest of referring C erosion to topsoil C (enrichment ratio), and to temporal changes in topsoil C. These data are consistent with those reported in the literature, which show that C erosion increased significantly with the product of soil losses and soil Ct stock (r = 0.932, p < 0.01; Figure 10.1, drawn up from Table 10.5). The data reported herein show that either the soil Ct stock (T and NPK) or the soil losses (M) were rather small; thus C erosion was much smaller than in studies from Kenya (high soil Ct stocks on steep slopes) and Ivory Coast (humid conditions), where it ranged from 0.7 to 2.4 Mg C ha-1 yr-1.

CONCLUSION

For this sandy loam Ultisol, relay-cropping of maize and mucuna (M) was very effective in enhancing C sequestration: the change in Ct stock at 0 to 40 cm depth was 1.3 Mg C ha-1 yr-1 over the 12-year period of the experiment, ranking among the highest rates recorded for the eco-region.
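The correlation quoted in the discussion (r = 0.932 between C erosion and the product of soil losses and topsoil Ct stock) can be reproduced from the nine runoff-plot records of Table 10.5; the Zimbabwe Ct stock, reported as "15?", is taken as 15 here:

```python
# Reproducing Figure 10.1's correlation from the Table 10.5 records.
import math

soil_losses = [29.0, 34.0, 89.4, 8.4, 7.3, 20.6, 5.5, 9.3, 2.9]  # Mg ha-1 yr-1
ct_stock    = [80, 20, 34, 80, 13, 15, 21, 22, 35]               # Mg C ha-1 (0-30 cm); '15?' taken as 15
c_erosion   = [2.4, 0.3, 1.8, 0.7, 0.2, 0.2, 0.1, 0.1, 0.1]      # Mg C ha-1 yr-1

x = [l * s for l, s in zip(soil_losses, ct_stock)]  # product of losses and Ct stock

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

r = pearson(x, c_erosion)
print(round(r, 3))  # ~0.932
```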
This increase resulted first from the high amount of residue biomass provided by mucuna, which amounted to 10 Mg dry matter ha-1 yr-1 (83% aboveground). Mucuna residues, by supplying the soil with N, also favored the production of maize biomass, and the total mucuna plus maize residue biomass returned to the soil was about 20 Mg ha-1 yr-1. These results indicate the usefulness of mucuna for SOM management. In contrast, nonfertilized (T) and fertilized continuous maize cultivation (NPK) resulted in changes in Ct stock at 0 to 40 cm depth of -0.2 and 0.2 Mg C ha-1 yr-1, respectively. Total residue biomass was 8 and 13 Mg ha-1 yr-1, including 77 and 29% from weeds, respectively. These contributions demonstrate the need for weed biomass sampling, especially when noticeable rainfall occurs outside the cropping season. Weed biomass was negligible in M, underlining the potential of mucuna for weed control. Moreover, the thick mulch produced by mucuna decreased losses by runoff and erosion, which were 0.28, 0.12, and 0.08 mm mm-1, and 34, 9, and 3 Mg ha-1 yr-1 in the T, NPK, and M treatments, respectively. Eroded C was estimated at 0.3, 0.1, and 0.1 Mg C ha-1 yr-1 in T, NPK, and M, respectively. Thus, C erosion was of the same order of magnitude as the changes in soil Ct stock in the treatments vulnerable to erosion (T and NPK). In contrast, C erosion under maize-mucuna was negligible compared with the change in soil Ct stock. Despite their benefits for SOM management, weed suppression, and erosion control, cropping systems including a legume cover may nevertheless have an adverse impact from a global change standpoint. Indeed, rough estimates show that N2O emissions resulting from the N supplied by mucuna may partly offset soil C storage in the M treatment. In NPK, N2O fluxes consecutive to mineral N supply could even offset soil C storage completely.
In order to characterize these adverse effects and establish greenhouse gas balances precisely, there is an urgent need for accurate field measurements of N2O fluxes, especially in cropping systems including legumes.

Figure 10.1 Relationship between mean annual C erosion (Mg C ha-1 yr-1) and the product of mean annual soil losses (Mg ha-1 yr-1) by the Ct stock of bulk soil at 0-30 cm (Mg C ha-1) (data from Table 10.5).

Table 10.1 Soil Clay Content, pH in Water, Total Carbon Content Ct, C:N Ratio, and Total Carbon Stock in 1988 and 1999 (mean ± standard deviation when available)

Depth (cm)             T 1988       T 1999       NPK 1988     NPK 1999     M 1988       M 1999
Clay (g kg-1)
  0-10                 147 ± 1      216          111 ± 6      128          127 ± 6      136
  10-20                nd           339          nd           198          nd           179
pH
  0-10                 5.6 ± 0.1    5.1          5.6 ± 0.1    5.2          5.2 ± 0.1    5.0
  10-20                5.4 ± 0.2    4.7          5.4 ± 0.2    5.0          5.1 ± 0.2    5.0
Ct (g kg-1)
  0-10                 5.5 ± 0.2    5.3 ± 0.1    5.4 ± 0.1    6.7 ± 1.8    5.2 ± 0.1    11.5 ± 2.0
  10-20                4.6 ± 0.3    4.0 ± 0.7    4.8 ± 0.4    3.8 ± 1.2    4.8 ± 0.4    7.3 ± 0.9
  20-30 (a)            4.1 ± 0.2    3.5 ± 0.5    4.0 ± 0.4    3.6 ± 1.1    4.6 ± 0.3    4.4 ± 0.1
  30-40 (a)            -            3.2 ± 0.1    -            4.1 ± 0.7    -            4.2 ± 0.2
  50-60                nd           2.4 ± 0.1    nd           3.5 ± 1.8    nd           3.3 ± 0.5
C:N
  0-10                 10.2 ± 1.0   12.2 ± 0.4   10.8 ± 0.5   11.3 ± 0.1   11.5 ± 0.5   11.9 ± 0.8
  10-20                10.9 ± 1.4   10.1 ± 0.6   10.7 ± 1.8   9.9 ± 0.7    12.0 ± 1.8   11.6 ± 0.8
  20-30 (a)            11.4 ± 1.2   8.7 ± 0.5    10.6 ± 1.9   9.3 ± 1.0    12.8 ± 1.7   10.0 ± 1.2
  30-40 (a)            -            8.2 ± 0.8    -            8.8 ± 1.4    -            8.9 ± 1.3
  50-60                nd           7.0 ± 0.4    nd           8.8 ± 3.2    nd           8.1 ± 1.4
Ct stock (Mg C ha-1)
  0-10                 7.7 ± 0.7    8.4 ± 0.3    7.3 ± 0.5    10.6 ± 3.4   6.8 ± 0.3    17.4 ± 3.3
  0-20                 13.6 ± 0.9   14.5 ± 0.4   14.6 ± 1.0   17.0 ± 3.9   13.8 ± 0.8   28.7 ± 3.9
  0-40                 25.9 ± 1.5   24.2 ± 0.5   27.0 ± 1.8   28.8 ± 5.7   27.7 ± 1.7   41.4 ± 4.9
  0-60                 nd           32.0 ± 0.3   nd           39.7 ± 3.6   nd           51.7 ± 4.1
Note: nd, not determined. (a) 20-40 cm in 1988.
Table 10.2 Residue Biomass Returned to the Soil (mean ± standard deviation)

Residue biomass (dry matter, Mg ha-1 yr-1):
Origin                      T              NPK            M
Maize   Aboveground     1.44 ± 0.06    7.46 ± 0.19    8.05 ± 0.20
        Roots           0.42 ± 0.01    1.76 ± 0.05    1.67 ± 0.07
        Subtotal        1.86 ± 0.07    9.22 ± 0.19    9.72 ± 0.21
Mucuna  Aboveground     0.00           0.00           8.34 ± 0.24
        Roots           0.00           0.00           1.88 ± 0.06
        Subtotal        0.00           0.00           10.22 ± 0.25
Weeds   Aboveground     1.36 ± 0.39    1.89 ± 0.29    0.00
        Roots (a)       4.77 ± 1.81    1.89 ± 0.92    0.00
        Subtotal        6.13 ± 1.85    3.78 ± 0.96    0.00
Total   Aboveground     2.80 ± 0.40    9.35 ± 0.34    16.39 ± 0.32
        Roots           5.19 ± 1.81    3.65 ± 0.92    3.55 ± 0.09
        Total           7.99 ± 1.85    13.00 ± 0.98   19.94 ± 0.33

C content of residues (g C kg-1):
Maize   Aboveground     533 ± 26       524 ± 16       538 ± 10
        Roots           474 ± 20       508 ± 20       456 ± 21
        Subtotal        519 ± 33       521 ± 26       524 ± 23
Mucuna  Aboveground     -              -              488 ± 25
        Roots           -              -              455 ± 20
        Subtotal        -              -              482 ± 32
Weeds   Aboveground     440 ± 12       430 ± 10       -
        Roots           400            400            -
        Subtotal        409            415            -
Total   Aboveground     488 ± 29       505 ± 19       513 ± 27
        Roots           406            452            455 ± 29
        Total           435            490            503 ± 40

Residue biomass C (Mg C ha-1 yr-1):
Maize   Aboveground     0.77 ± 0.03    3.91 ± 0.12    4.33 ± 0.12
        Roots           0.20 ± 0.01    0.89 ± 0.04    0.76 ± 0.03
        Subtotal        0.97 ± 0.03    4.80 ± 0.12    5.10 ± 0.13
Mucuna  Aboveground     0.00           0.00           4.07 ± 0.12
        Roots           0.00           0.00           0.85 ± 0.03
        Subtotal        0.00           0.00           4.93 ± 0.13
Weeds   Aboveground     0.60 ± 0.18    0.82 ± 0.14    0.00
        Roots           1.91 ± 0.72    0.75 ± 0.37    0.00
        Subtotal        2.51 ± 0.74    1.57 ± 0.40    0.00
Total   Aboveground     1.37 ± 0.18    4.73 ± 0.18    8.41 ± 0.17
        Roots           2.11 ± 0.72    1.64 ± 0.37    1.61 ± 0.04
        Total           3.48 ± 0.74    6.37 ± 0.41    10.02 ± 0.18

C:N of residues:
Maize   Aboveground     118            75             84
        Roots           121            78             91
        Subtotal        118            75             86
Mucuna  Aboveground     -              -              18
        Roots           -              -              22
        Subtotal        -              -              19
Weeds   Aboveground     35             25             -
        Roots           nd             nd             -
        Subtotal        nd             nd             -
Total   Aboveground     78             65             51
        Roots           nd             nd             55
        Total           nd             nd             52

Note: nd, not determined. (a) 0-30 cm; data resulting from sampling carried out in November 1999, assuming that roots collected in T and NPK were weed roots only and had a C content of 400 g C kg-1.

Table 10.3 Annual Runoff Rates, Soil Losses, and C Erosion (columns: year; rainfall, mm yr-1; runoff rate, mm mm-1; soil losses, Mg ha-1 yr-1; C erosion, Mg C ha-1 yr-1; SD, standard deviation) [yearly data rows not recovered]

Table 10.4 Compared Values of Annual Changes in Ct Stock under Various Tropical Cropping Systems Including Reduced or No Tillage

Country and soil type        Cropping system (duration, yr)    Change in Ct stock (Mg C ha-1 yr-1)   Reference
For 0-20 cm depth:
Nigeria, Alfisol             Pueraria sp. (2)                  +2.1        Lal (1998)
Benin, Ultisol               maize-mucuna (11)                 +1.4        this chapter
Brazil, clayey Oxisol        cereals and soybean (10, 22)      +1.0        Sá et al. (2001)
Brazil, clay loam Ultisol    Cajanus cajan-maize (12)          +0.9 (a)    Bayer et al. (2001)
Nigeria, Alfisol             Stylosanthes sp. (2)              +0.4        Lal (1998)
Benin, Ultisol               fertilized maize (11)             +0.2        this chapter
Nigeria, Alfisol             Centrosema sp. (2)                +0.1        Lal (1998)
Benin, Ultisol               nonfertilized maize (11)          +0.1        this chapter
For 0-10 cm depth:
Benin, Ultisol               maize-mucuna (11)                 +1.0        this chapter
Nigeria, Alfisol             Cajanus cajan-maize (3)           +0.7        Lal (2000)
Honduras, various soils      mucuna-maize (1 to 15)            +0.5 (b)    Triomphe (1996a)
Benin, Ultisol               fertilized maize (11)             +0.3        this chapter
Nigeria, Alfisol             fertilized maize (3)              +0.2        Lal (2000)
Benin, Ultisol               nonfertilized maize (11)          +0.1        this chapter
(a) For 0-17.5 cm depth. (b) From +0.2 to +1.4 Mg C ha-1 yr-1, depending on the site.

Table 10.5 Compared Values of Annual Runoff Rate, Soil Losses, and C Erosion from Runoff Plots Cropped with Maize (or sorghum) in Tropical Areas

Country        Rainfall    Slope   Soil type                Ct stock (a)   Runoff rate   Soil losses      C erosion         Reference
               (mm yr-1)   (%)                              (Mg C ha-1)    (mm mm-1)     (Mg ha-1 yr-1)   (Mg C ha-1 yr-1)
Nonfertilized maize:
Kenya          1000        30      Clayey Alfisol           80             0.02          29.0             2.4               Gachene et al. (1997)
Benin          1200        4       Sandy loam Ultisol       20             0.28          34.0             0.3               this chapter
Fertilized maize (or sorghum):
Ivory Coast    2100        7       Sandy loam Ultisol       34             0.27          89.4             1.8               Roose (1980a)
Kenya          1000        30      Clayey Alfisol           80             0.01          8.4              0.7               Gachene et al. (1997)
Burkina Faso   800         1       Sandy Alfisol            13             0.25          7.3              0.2               Roose (1978)
Zimbabwe       500         5       Sandy Alfisol            15?            0.17          20.6             0.2               Moyo (1998)
Ivory Coast    1350        3       Sandy (gravely) Oxisol   21             0.20          5.5              0.1               Roose (1980b)
Benin          1200        4       Sandy loam Ultisol       22             0.12          9.3              0.1               this chapter
Maize-mucuna:
Benin          1200        4       Sandy loam Ultisol       35             0.08          2.9              0.1               this chapter
(a) At 0-30 cm.

ACKNOWLEDGMENTS

We thank Barthélémy Ahossi, Maurice Dakpogan, Charles de Gaulle Gbehi, Nestor Gbehi, Branco Sadoyetin, and André Zossou for fieldwork, Jean-Yves Laurent and Jean-Claude Marcourel for technical assistance, and Gerard Bourgeon for his comments on a previous version of the paper. This work was financially supported by a French PROSE grant (CNRS-ORSTOM program on soil and erosion).
hal-01753785 (2014): https://hal.science/hal-01753785/file/article.pdf
Micro-aggregates do not influence bone marrow stromal cell chondrogenesis

E. Potier, N. C. Rivron, C. A. Van Blitterswijk, K. Ito

Correspondence: Dr K. Ito ([email protected])

Keywords: bone marrow stromal cells; mesenchymal stem cells; chondrogenesis; cell-cell interactions; micro-aggregates; hydrogel

Abstract: Although bone marrow stromal cells (BMSCs) appear promising for cartilage repair, current clinical results are suboptimal and the success of BMSC-based therapies relies on a number of methodological improvements, among which is a better understanding and control of their differentiation pathways. We investigated here the role of the cellular environment (paracrine vs. juxtacrine signalling) in the chondrogenic differentiation of BMSCs. Bovine BMSCs were encapsulated in alginate beads, as dispersed cells or as small micro-aggregates, to create different paracrine and juxtacrine signalling conditions. BMSCs were then cultured for 21 days with TGFβ3 added for 0, 7 or 21 days. Chondrogenic differentiation was assessed at the gene (type II and X collagens, aggrecan, TGFβ, sp7) and matrix (biochemical assays and histology) levels. The results showed that micro-aggregates had no beneficial effects over dispersed cells: matrix production was similar, whereas chondrogenic marker gene expression was lower for the micro-aggregates under all TGFβ conditions tested. This weakened chondrogenic differentiation might be explained by a different cytoskeleton organization at day 0 in the micro-aggregates.

Introduction

Articular hyaline cartilage possesses only a limited self-repair capacity. Most tissue damage, caused either by wear and tear or by trauma, is not healed but replaced by fibrocartilage.
This tissue has inferior biochemical and biomechanical properties compared with native hyaline cartilage, altering the function of the joint and ultimately leading to severe pain (Ahmed et al.; Nesic et al.). Surgical approaches are commonly proposed to promote the healing of cartilage damage. They present, however, several limitations linked to the cell/tissue source and may lead to the formation of fibrocartilage rather than hyaline cartilage (Ahmed et al.; Khan et al., 2010). Alternative sources of cells/tissues are therefore needed to regenerate cartilage damage. One of the most promising sources is bone marrow stromal cells (BMSCs) (Gregory et al.; Khan et al., 2010; Krampera et al.; Prockop). As these cells are isolated from the bone marrow, no cartilage tissue harvesting is required and the tissue source is exempt from degenerative cartilage disease. BMSCs also possess a high proliferative rate, allowing the regeneration of large defects. Many studies have established that BMSCs can differentiate in vitro into chondrocytes (Muraglia et al., 2000; Halleux et al.; Pittenger et al.).
The patient's condition can affect BMSC proliferation and differentiation: age and osteoarthritis have been reported to reduce the chondrogenic potential of BMSCs (Murphy et al.), although other studies report that these factors do not influence BMSC chondrogenesis (Dudics et al.; Scharstuhl et al.). BMSCs have been used to repair cartilage lesions in numerous animal models (Guo et al.; Uematsu et al.) but also in humans (Wakitani et al.; Nejadnik et al.). Although the results are promising, the repair tissues were not completely composed of hyaline cartilage (Matsumoto et al.). It is, however, the general belief that, with further advancements, BMSC-based therapies will eventually be helpful in the clinic. One important direction for improvement is to better understand and control the differentiation pathway leading BMSCs to fully differentiated and functional chondrocytes.
For many years, BMSC differentiation has been induced by various cocktails of biochemical factors (Augello et al.), and more and more evidence indicates that their biomechanical environment can control their differentiation (Potier et al.). Beyond applied exogenous stimulation, direct communication of cells with their environment can also affect their behaviour. For example, cells can respond to different substrate stiffnesses by adapting, for BMSCs, their differentiation pathways (Engler et al.; Pek et al.) or, for chondrocytes, their chondrogenic phenotype (Sanz-Ramos et al.; Schuh et al.). However, so far, few studies have focused on the relationship between cell-cell communication and BMSC differentiation, even though adhesion of cells to each other may also provide important cues to control BMSCs, as shown for the osteogenic differentiation pathway (Tang et al.). The aim of this study was, therefore, to modulate the cell-cell interactions between BMSCs and evaluate the impact on BMSC in vitro chondrogenesis. In order to create different cell-cell interactions, BMSCs were seeded into a hydrogel either as dispersed cells, where interactions rely on paracrine signalling, or as micro-aggregates, where interactions rely on paracrine and juxtacrine signalling.
Micro-aggregates, rather than micromasses, were used to promote cell-cell contact locally. Indeed, it has been shown that micromass culture, used to mimic the condensation of mesenchymal cells during development (Bobick et al.), leads to a heterogeneous distribution of the cartilaginous matrix (Barry et al.; Mackay et al.; Schmitt et al.; Murdoch et al.), most likely due to mass transport limitations within the micromass. Downscaling from micromasses (200 000-250 000 cells) to micro-aggregates (50-300 cells) should overcome these mass transport issues. In fact, micro-aggregate culture has already been shown to be superior to micromass culture for BMSC chondrogenesis, with more homogeneous differentiation and matrix deposition observed (Markway et al.). Finally, the effects of cell-cell interactions on BMSC chondrogenesis could be attenuated by the presence of exogenous growth factors (e.g., TGFβ3) in the culture medium. We therefore used different patterns of TGFβ stimulation (0, 7 or 21 days) to assess the influence of different cellular environments [dispersed cells (DC) vs. micro-aggregates (MA)] on BMSC chondrogenesis.
Materials and Methods

Bovine BMSC isolation and expansion
Bovine BMSCs were isolated from three cows (8-12 months old, all skeletally immature), in accordance with local regulations. Bone marrow was aspirated from the pelvis and immediately mixed 1:1 with high-glucose (4.5 g/l) Dulbecco's modified Eagle's medium (hgDMEM; Gibco Invitrogen, Carlsbad, CA, USA) supplemented with 100 U/ml heparin (Sigma, Zwijndrecht, The Netherlands) and 3% penicillin-streptomycin (Lonza, Basel, Switzerland). Bone marrow samples were then centrifuged (300 × g, 5 min) and resuspended in growth medium: hgDMEM + 10% fetal bovine serum (FBS; Gibco Invitrogen; batch selected for BMSC growth and differentiation) + 1% penicillin-streptomycin. BMSCs were isolated by adhesion [START_REF] Friedenstein | The development of fibroblast colonies in monolayer cultures of guineapig bone marrow and spleen cells[END_REF][START_REF] Kon | Autologous bone marrow stromal cells loaded onto porous hydroxyapatite ceramic accelerate bone repair in critical size defects of sheep long bones[END_REF][START_REF] Potier | Hypoxia affects mesenchymal stromal cell osteogenic differentiation and angiogenic factor expression[END_REF]. Cells were seeded in flasks (using 7-10 ml medium:bone marrow mix per 75 cm²) and, after 4 days, the medium was changed. BMSCs were then expanded up to P1 (passage at 5000 cells/cm²) before freezing [70-80% confluence; in 90% FBS/10% dimethylsulphoxide (Sigma)]. A fresh batch of BMSCs was thawed and cultured up to P4 for each experiment (each passage at 5000 cells/cm²). Cells from each donor were cultured separately. Bovine BMSCs (n = 4) isolated and expanded following these protocols showed successful chondrogenesis using the micromass approach [START_REF] Johnstone | In vitro chondrogenesis of bone marrow-derived mesenchymal progenitor cells[END_REF] (as shown with safranin O staining).
Production of agarose chips
Custom-made PDMS stamps, with a microstructured surface consisting of 2865 rounded pins with a diameter of 200 μm and a spacing of 100 μm, were produced. The stamps were sterilized with alcohol and placed in a six-well plate, microstructured surface up. Warm ultra-pure agarose solution [Gibco Invitrogen; 3% in phosphate-buffered saline (PBS)] was poured on the stamps, centrifuged for 1 min at 2500 rpm and incubated for 30 min at 4°C. The agarose chips were then separated from the stamps, cut to size to fit in a well of a 12-well plate, covered with PBS and kept at 4°C until use [START_REF] Rivron | Tissue deformation spatially modulates VEGF signaling and angiogenesis[END_REF].

Formation of micro-aggregates and alginate seeding
At passage 5, BMSCs were used to seed: (a) alginate beads (dispersed cells; DC); or (b) agarose chips (micro-aggregates; MA) (Figure 1). For the DC condition, BMSCs were resuspended in 1.2% sodium alginate (Sigma) solution (in 0.9% NaCl; Merck, Darmstadt, Germany) at a concentration of 7 × 10⁶ cells/ml. The cell + alginate suspension was slowly forced through a 22G needle and added dropwise to a 102 mM CaCl2 (Merck) solution [START_REF] Guo | Culture and growth characteristics of chondrocytes encapsulated in alginate beads[END_REF][START_REF] Jonitz | Differentiation capacity of human chondrocytes embedded in alginate matrix[END_REF]. Beads were incubated for 10 min at 37°C to polymerize and were then rinsed three times in 0.9% NaCl and twice in hgDMEM + 1% penicillin-streptomycin. For the MA condition, BMSCs were resuspended in growth medium at 2 × 10⁵ cells/ml and 750 μl cell suspension was used per agarose chip (with PBS previously removed) to produce the micro-aggregates. Seeded chips were centrifuged for 1 min at 200 × g to force the cells to the bottom of the microwells; 3 ml growth medium was then slowly added and the cells were cultured for an additional 3 days in growth medium to allow cell aggregation.
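As a back-of-the-envelope consistency check (not part of the original protocol), the seeding parameters above (750 μl of suspension at 2 × 10⁵ cells/ml distributed over 2865 microwells) imply roughly 50 cells per micro-aggregate, at the lower end of the 50-300 cells per micro-aggregate range mentioned in the introduction. A minimal sketch, assuming every seeded cell settles into a microwell:

```python
# Expected micro-aggregate size from the seeding parameters.
# Assumption: every seeded cell settles into one of the 2865 microwells.
volume_ml = 0.750        # 750 ul of cell suspension per agarose chip
cells_per_ml = 2e5       # seeding concentration, 2 x 10^5 cells/ml
microwells = 2865        # rounded pins on the PDMS stamp

cells_per_chip = volume_ml * cells_per_ml
cells_per_microwell = cells_per_chip / microwells
print(f"{cells_per_chip:.0f} cells/chip, ~{cells_per_microwell:.0f} cells/microwell")
```

The observed day-0 average of 14 cells per cluster reported in the Results is lower than this estimate; incomplete settling or cell loss during collection could account for the difference.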
Micro-aggregates were then collected by flushing the agarose chips with growth medium, and used to seed alginate beads at a final concentration of 7 × 10⁶ cells/ml, as described for the DC condition.

Culture
Seeded beads (with either DC or MA) were cultured for 3 weeks in Ch-medium [hgDMEM + 1% penicillin-streptomycin + 0.1 μM dexamethasone (Sigma) + 1% ITS-1+ (Sigma) + 1.25 mg/ml bovine serum albumin (BSA; Sigma) + 50 μg/ml ascorbic acid 2-phosphate (Sigma) + 40 μg/ml L-proline (Sigma) + 100 μg/ml sodium pyruvate (Gibco Invitrogen)] [START_REF] Mackay | Chondrogenic differentiation of cultured human mesenchymal stem cells from marrow[END_REF]. This medium was supplemented with 10 ng/ml TGFβ3 (Peprotech, Rocky Hill, NJ, USA) [START_REF] Barry | Chondrogenic differentiation of mesenchymal stem cells from bone marrow: differentiation-dependent gene expression of matrix components[END_REF] (Ch(+) medium) for 0, 7 or 21 days (Figure 1). BMSCs were cultured under 5% CO2 and 2% O2 [START_REF] Markway | Enhanced chondrogenic differentiation of human bone marrow-derived mesenchymal stem cells in low oxygen environment micropellet cultures[END_REF]; six beads were cultured per well of a six-well plate containing 3 ml medium.

Cell viability
At days 0 and 21, beads (n = 3 beads/donor/group) were washed in PBS and incubated in 10 μM calcein AM (Sigma)/10 μM propidium iodide (Gibco Invitrogen) solution (in PBS) for 1 h at 37°C. Cells were then imaged in the centres of the beads at a depth of 200 μm, using a confocal microscope (CLSM 510 Meta, Zeiss, Sliedrecht, The Netherlands).

Cell morphology and adhesion
At days 0 and 21, beads (n = 3 beads/donor/group) were embedded in cryo-compound (Tissue-Tek® OCT™; Sakura, Alphen aan den Rijn, The Netherlands) and snap-frozen in liquid nitrogen; 50 μm-thick cryosections were cut in the middle of the beads.
The sections were then thawed, fixed for 30 min at room temperature (RT) in buffered formalin 3.7% (Merck), rinsed in PBS and incubated for 5 min at RT in Triton 1.5% in PBS. The sections were rinsed in PBS and stained with TRITC-phalloidin (Sigma; 1 μM in PBS + 1% BSA) for 2 h at RT. The sections were then rinsed in PBS, counterstained with DAPI for 15 min at RT (Sigma; 100 ng/ml in PBS), rinsed in PBS and MilliQ water, air-dried and mounted in Entellan (Merck). The stained sections were observed using a confocal microscope. Morphometric analyses to determine cluster areas and numbers of cells/cluster were conducted on these images, using Zen 2012 software (Zeiss). For each group, 25 clusters or cells were analysed. Stained clusters or cells were manually outlined and the corresponding area determined. Cell numbers/cluster were also counted manually.

Immunostaining for vinculin and pan-cadherin was conducted on 10 μm-thick cryosections. The sections were thawed, fixed for 10 min at RT in buffered formalin 3.7%, rinsed in PBS and incubated for 10 min at RT in Triton 0.5% in PBS. After blocking in 3% BSA for 1 h, the sections were incubated for 1 h at RT with monoclonal mouse anti-vinculin antibodies (Sigma), diluted 1:400, or with monoclonal mouse anti-cadherin antibodies (Abcam; Cambridge, UK), diluted 1:100, in 3% BSA. The sections were then washed three times in PBS and incubated for 1 h at 38°C with Alexa 488-conjugated goat anti-mouse antibodies (Molecular Probes; Bleiswijk, The Netherlands), diluted 1:300 in PBS. The stained sections were then rinsed three times and mounted with Mowiol. For both stainings, human cardiomyocyte progenitor cells grown on coverslips were used as a positive control. Both antibodies are known to work with bovine material.
Cartilaginous matrix formation and cell proliferation
At days 0 and 21, five beads/donor and group were pooled and digested in papain solution [150 mM NaCl (Merck), 789 μg/ml L-cysteine (Sigma), 5 mM Na2EDTA.2H2O (Sigma), 55 mM Na3citrate.2H2O (Sigma) and 125 μg/ml papain (Sigma)] at 60°C for 16 h. Digested samples were then used to determine their content of sulphated glycosaminoglycans (sGAG), as a measure of proteoglycans, and DNA. sGAG content was determined using the dimethyl methylene blue (DMMB) assay, adapted for alginate presence [START_REF] Enobakhare | Quantification of sulfated glycosaminoglycans in chondrocyte/alginate cultures, by use of 1,9dimethylmethylene blue[END_REF]. Shark cartilage chondroitin sulphate (Sigma) was used as a reference and digested with empty alginate beads (i.e. alginate concentration identical for references and experimental samples). DNA content was measured using the Hoechst dye method [START_REF] Cesarone | Improved microfluorometric DNA determination in biological material using 33258 Hoechst[END_REF], with a calf thymus DNA reference (Sigma). For the 7 days of TGFβ3-treatment group, the beads were also analysed at day 7.

At days 0 and 21, beads (n = 3 beads/donor/group) were also embedded in cryo-compound and snap-frozen in liquid nitrogen; 10 μm-thick cryosections were cut in the middle of the beads. The sections were then thawed, incubated for 5 min in 0.1 M CaCl2 at RT and fixed in buffered formalin 3.7% for 3 min at RT. The sections were then rinsed in 3% glacial acetic acid (Merck) and stained in Alcian blue solution (Sigma; 1%, pH 1.0, for alginate presence) for 30 min at 37°C. The sections were then rinsed in 0.05 M CaCl2 and counterstained with nuclear fast red solution (Sigma) for 7 min at RT. The stained sections were rinsed in 0.05 M CaCl2 before mounting in Mowiol (Merck) and were observed using a brightfield microscope (Observer Z1, Zeiss).
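The DMMB readout above is converted to sGAG mass via a chondroitin sulphate standard curve and then normalised to DNA content. A minimal sketch of that calculation (the standard concentrations and absorbances below are illustrative placeholders, not data from this study; a simple ordinary-least-squares line stands in for whatever curve fitting was actually used):

```python
# Hypothetical chondroitin sulphate standards: (ug/ml, absorbance).
standards = [(0, 0.02), (10, 0.11), (20, 0.21), (40, 0.40), (80, 0.79)]

# Ordinary least-squares fit of absorbance vs concentration.
n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(a for _, a in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * a for c, a in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def sgag_ug_per_ml(absorbance):
    """Invert the standard curve to get an sGAG concentration."""
    return (absorbance - intercept) / slope

# One digested bead pool: interpolate sGAG, then normalise to DNA.
sample_absorbance = 0.30
dna_ug_per_ml = 1.5
gag_per_dna = sgag_ug_per_ml(sample_absorbance) / dna_ug_per_ml
print(f"GAG/DNA = {gag_per_dna:.2f} ug GAG per ug DNA")
```

Digesting the standards together with empty alginate beads, as described above, keeps the curve and the samples on the same background, so the same fitted line can be applied to both.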
Gene expression
At days 0 and 21, nine beads/donor and group were pooled, snap-frozen in liquid nitrogen and stored at -80°C until RNA isolation. Frozen beads were placed in between a 316 SS 8 mm bead and a custom-made lid, placed in a 2 ml Eppendorf tube, and were disrupted for 30 s at 1500 rpm (Micro-dismembrator; Sartorius, Göttingen, Germany). RNA was then extracted using TRIzol® (Gibco Invitrogen) and purified using an RNeasy mini-kit (Qiagen, Venlo, The Netherlands). The quantity and purity of the isolated RNA were measured by spectrophotometry (ND-1000, Isogen, De Meern, The Netherlands) and integrity by gel electrophoresis. Absence of genomic DNA was validated by endpoint PCR and gel electrophoresis using primers for glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Total RNA (300 ng) was then reverse-transcribed (M-MLV; Gibco Invitrogen) and the gene expression levels of sox9, aggrecan, type II collagen, TGFβ, type X collagen and sp7 (also known as Osterix) were assessed with SYBR green qPCR (iCycler; Biorad, Hercules, CA, USA) (see Table 1 for primer list). 18S (PrimerDesign Ltd, Southampton, UK) was selected as a reference gene from three genes (RPL13A, GAPDH and 18S) as the most stable gene throughout our experimental conditions. Expression of the gene of interest is reported relative to 18S expression (2^-ΔCT method). When gene expression was not detected, the 2^-ΔCT value was set to 0 to conduct the statistical analysis. For the 7 days of TGFβ3 treatment group, beads were also analysed at day 7.

Statistical analysis
General linear regression models based on ANOVAs were used to examine the effects of seeding (DC and MA), TGFβ treatment (0, 7 and 21 days) and days of culture (days 0, 7 and 21) and their interactions on the variables DNA and GAG/DNA contents, and sox9, type II collagen, aggrecan, TGFβ, type X collagen and sp7 gene expression.
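The 2^-ΔCT quantification above amounts to rescaling each target gene's Ct by the 18S Ct from the same sample. A minimal sketch with made-up Ct values (not data from this study), including the convention of setting undetected targets to 0:

```python
# 2^-dCT relative quantification against the 18S reference gene.
def relative_expression(ct_target, ct_reference):
    """Return 2**-(Ct_target - Ct_reference); undetected targets map to 0."""
    if ct_target is None:   # no amplification detected
        return 0.0          # convention used for the statistics in this study
    return 2 ** -(ct_target - ct_reference)

ct_18s = 10.0                                # illustrative reference Ct
print(relative_expression(28.0, ct_18s))     # expressed target
print(relative_expression(None, ct_18s))     # undetected target
```

One Ct unit corresponds to a two-fold change in expression, which is consistent with the figure captions plotting these values on logarithmic axes.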
In all analyses, full factorial models were fitted to the data and then a backwards stepwise procedure was used to remove the non-significant effects. For each significant effect, a Tukey-HSD post hoc test was conducted; p < 0.05 was considered significant. All data analyses were performed in R v. 2.9.0 (R Development Core Team, 2009).

Results

Cell viability and proliferation
BMSCs showed high cell viability after seeding for all conditions (Figure 2A, E). At day 0, cells appeared as a well-dispersed cell population for DC conditions (Figure 2A) or as dense micro-aggregates for MA conditions (Figure 2E). However, some dead cells could be observed around the micro-aggregates (Figure 2E), most likely due to a higher shear stress exerted on micro-aggregates than on dispersed cells when producing the alginate beads. At day 21, cell viability remained high for all conditions (Figure 2B-D, F-H). DNA content confirmed that cells proliferated (Figure 2I). DC conditions, however, led to higher proliferation than MA conditions.

Cell morphology and adhesion
At day 0, the DC condition led to single dispersed cells (Figure 3A, I), while MA resulted in large clusters (Figure 3E, I) containing, on average, 14 cells (Figure 3J). In DC conditions, after 21 days of culture BMSCs proliferated (Figure 3J) and formed clusters (Figure 3B-D) whose size increased when TGFβ was added, although not significantly (Figure 3I). In MA conditions (Figure 3F-H), micro-aggregates grew during culture, with the biggest clusters observed for 7 days of TGFβ (Figure 3I), although cell proliferation was limited (Figure 3J). At day 0, cell-cell interactions were more developed in MA than in DC, as shown by the immunostaining of pan-cadherins (Figure 4D and A, respectively), which are glycoproteins involved in cell-cell adhesion. These improved cell-cell interactions, however, disappeared after 3 weeks of culture (Figure 4E).
TGFβ treatment had no effects on cadherin expression for either DC or MA (data not shown). Regarding vinculin, a membrane-cytoskeletal protein involved in cell-matrix adhesion, its expression was similar for DC and MA (Figure 4G and J, respectively), with a dispersed localization of vinculin through the cell surface. At day 21, vinculin condensed in focal adhesions. Distribution seemed similar for DC and MA (Figure 4H and K, respectively), and was not influenced by the different TGFβ stimulation patterns (data not shown).

Matrix production
In all conditions, proteoglycans (PGs) were deposited (Figure 5A-H), demonstrating successful chondrogenic differentiation of the BMSCs. For both DC and MA conditions, prolonging exposure to TGFβ increased the PG production, as confirmed by a quantitative assay (Figure 5I). However, no differences between DC and MA could be detected. In both conditions, PGs appeared to be concentrated within the clusters (for DC) or the micro-aggregates (for MA), filling the void spaces previously observed.

Gene expression
Levels of gene expression of chondrogenic markers (sox9, a transcription factor involved in early chondrogenesis; type II collagen and aggrecan, both main components of cartilage matrix) increased at day 21 for all conditions (Figure 6A-C). In MA conditions, however, type II collagen and aggrecan mRNA expression were inhibited compared to DC conditions at day 21 (Figure 6B, C). Seven days of TGFβ treatment led to the highest levels of expression of all chondrogenic markers for both MA and DC conditions (Figure 6A-C). At day 0, MA upregulated TGFβ gene expression compared to DC, but this high level of expression disappeared at day 21 under all TGFβ stimulation conditions (Figure 6D).

Transient TGFβ stimulation
For the 7 days of TGFβ treatment, chondrogenic marker gene expression and matrix production were also evaluated at day 7. The results showed that BMSCs had already started to differentiate at that point.
All chondrogenic markers (sox9, aggrecan, type II collagen) were already highly upregulated at the gene level (Figure 7B-D). PGs were also produced at day 7 (Figure 7A), but to a limited amount. PG content and type II collagen expression were significantly higher at day 21 than at day 7, indicating that the cells were still progressing along the chondrogenic pathway although TGFβ was withdrawn.

Hypertrophy and osteogenic differentiation
mRNA expression of both type X collagen (Figure 8A), an indicator of chondrocyte hypertrophy, and sp7 (Figure 8B), a transcription factor involved in early osteogenesis, increased at day 21 for the DC condition, while only type X collagen expression increased in the MA condition. For both conditions and genes, the highest level of expression at day 21 was for the 7 days of TGFβ treatment. While type X collagen levels of expression were similar for MA and DC conditions, sp7 mRNA expression was upregulated at day 0 for the MA condition (Figure 8B).

Discussion
In summary, these results show that BMSCs underwent (partial) chondrogenic differentiation for all conditions tested (PG production and increased chondrogenic marker expression). MA, however, did not perform better (matrix production) or even as well (gene expression) as DC under all the TGFβ stimulation patterns tested. Nonetheless, these data show that small micro-aggregates can be successfully integrated into hydrogel (alginate). Although cell death was slightly upregulated at day 0 (Figure 2E), BMSCs in MA survived up to 21 days (Figure 2) and differentiated into chondrocytes, with the deposition of a PG-rich matrix within the MA (Figure 5H) and substantial upregulation of sox9, type II collagen and aggrecan gene expression (Figure 6A-C).
The increase of TGFβ gene expression at day 0 in MA compared to DC (Figure 6D) suggests an early stimulatory effect of MA, maybe due to improved juxtacrine signalling (cell-cell contact) rather than paracrine signalling (limited distance between neighbouring cells), as cell-cell contact was improved at day 0 in the MA condition, as shown by pan-cadherin staining. This upregulation, however, was lost at day 21 and, more importantly, was not translated into enhanced chondrogenic matrix production (Figure 5) or gene expression (Figure 6), even if no exogenous TGFβ was added. One explanation for this absence of effects may be the disappearance, during the 3 weeks of culture, of the cell-cell interactions observed at day 0 (Figure 4). BMSCs in MA most likely lost contact with each other (as shown by pan-cadherin staining) due to extracellular matrix production between the cells, but formed new junctions (as shown by vinculin staining) with this matrix (Figure 5). The lack of effects of TGFβ gene expression upregulation may also be explained by a low translation efficiency or by post-transcriptional regulatory mechanisms. Several studies comparing genomic and proteomic analyses report, indeed, moderate correlation between mRNA and protein expression [START_REF] Chen | Discordant protein and mRNA expression in lung adenocarcinomas[END_REF][START_REF] Huber | Comparison of proteomic and genomic analyses of the human breast cancer cell line T47D and the anti-estrogen-resistant derivative T47D-r[END_REF][START_REF] Oberemm | Toxicogenomic analysis of N-nitrosomorpholine induced changes in rat liver: comparison of genomic and proteomic responses and anchoring to histopathological parameters[END_REF][START_REF] Tian | Integrated genomic and proteomic analyses of gene expression in mammalian cells[END_REF].
Another explanation for the absence of effects of upregulated TGFβ expression might be that BMSCs are not sensitive to the levels of TGFβ they are producing, either because these levels are too low or because BMSCs are less sensitive in MA. Cytoskeleton organization, indeed, has been shown to modulate cell sensitivity to TGFβ. Disorganization of the microfilaments in rabbit articular chondrocytes after treatment with dihydrocytochalasin B enhanced the sensitivity of the cells to TGFβ (increased PG and collagen synthesis) [START_REF] Benya | Dihydrocytochalasin B enhances transforming growth factor-β-induced re-expression of the differentiated chondrocyte phenotype without stimulation of collagen synthesis[END_REF]. In the present study, BMSCs at day 0 displayed more organized microfilaments in MA cells than in the round cells of the DC condition (Figure 3A/E). This difference in cytoskeleton organization may also explain why MA are not upregulating chondrogenic gene expression as well as DC under transient and continuous TGFβ treatment (Figure 6). Although no significant differences were observed at the matrix level (Figure 5), our data support results observed with bovine articular chondrocytes in hydrogel, where small micro-aggregates (5-18 cells) inhibit chondrocyte biosynthesis compared to dispersed cells [START_REF] Albrecht | Probing the role of multicellular organization in threedimensional microenvironments[END_REF]. Distribution of PGs, however, was quite distinct between the two conditions, with a more evenly distributed matrix for DC (Figure 5). As the cell concentration was similar at day 0 for both conditions, DC resulted in a more dispersed and homogeneous distribution of cells (Figure 2), which could account for a more even distribution of the matrix produced by the BMSCs. Still, MA might be more potent for the osteogenic differentiation of BMSCs. 
In fact, MA (not embedded into a hydrogel) have already been shown to promote osteogenic differentiation of human BMSCs compared to 2D culture (increased calcium deposition and osteogenic gene expression) [START_REF] Kabiri | 3D mesenchymal stem/stromal cell osteogenesis and autocrine signalling[END_REF]. In the present study, we observed an upregulation of sp7, a transcription factor involved in early osteogenic differentiation, in MA at day 0 compared to DC (Figure 8). This suggests a positive influence of the MA on the osteogenic differentiation pathway. The absence of factors required for osteogenic differentiation, such as FBS or β-glycerophosphate, during culture, however, probably nullifies this influence, and additional experiments need to be conducted to assess the potential of MA to promote BMSC osteogenesis. Contrary to MA, BMSCs in the DC condition proliferated during the 3 weeks of culture. This absence of significant proliferation in MA already containing several cells (Figure 3) may be explained by contact inhibition present in the MA but not in the DC. At day 21, cloned DC spontaneously formed clusters. Although these structures appeared similar to the MA, they were smaller and contained fewer cells (Figure 3). Recreating and amplifying this natural process of cloning and clustering in the MA, however, did not exert any substantial effect on BMSC differentiation, suggesting that cell-cell interactions are not required for initiating chondrogenesis. 
These results also confirm that bovine BMSCs can spontaneously differentiate toward the chondrogenic lineage without the presence of TGFβ (PG production and increased sox9, type II collagen and aggrecan gene expression; Figure 5, Figure 6, Figure 7) when cultured in hydrogel and serum-free conditions, as previously reported for bovine BMSCs in micromass culture [START_REF] Bosnakovski | Gene expression profile of bovine bone marrow mesenchymal stem cell during spontaneous chondrogenic differentiation in pellet culture system[END_REF]. Seven days of TGFβ treatment were enough to enhance the production of cartilaginous matrix, as shown previously with human BMSCs [START_REF] Buxton | Temporal exposure to chondrogenic factors modulates human mesenchymal stem cell chondrogenesis in hydrogels[END_REF], but, surprisingly, gave the highest upregulation of chondrogenic marker expression (Figure 6) for both MA and DC. However, the transient TGFβ stimulation also led to higher expression of type X collagen (a marker of chondrocyte hypertrophy). Hence, continuous stimulation with TGFβ resulted in a more stable chondrogenic phenotype; it also led to the highest matrix production (Figure 5). Conclusions on the (absence of) effects of MA on BMSC chondrogenesis, however, are only valid for the cell concentration and hydrogel tested here. Using a lower concentration may dilute paracrine signaling in the DC condition and, therefore, diminish the chondrogenic differentiation of BMSCs. [START_REF] Buxton | Temporal exposure to chondrogenic factors modulates human mesenchymal stem cell chondrogenesis in hydrogels[END_REF] have already evaluated the influence of cell concentration on the chondrogenesis of BMSCs seeded into a hydrogel. They reported a maximal PG/collagen synthesis/cell for concentrations in the range 12.5-25 million cells/ml. Lower concentrations led to lower matrix production, indicating the involvement of paracrine signaling in BMSC chondrogenesis. 
With the concentration used here (7 million cells/ml), paracrine signaling should be diluted in the DC condition, and so MA could have had a beneficial effect by locally increasing this paracrine signaling. As no positive effect was found for the MA, it seems that cell-cell contact or cytoskeleton organization has a stronger negative effect on BMSC chondrogenesis than the positive effect of paracrine signaling. Moreover, the effect of MA on BMSC chondrogenesis has only been tested here in alginate and could, therefore, be an artifact of that system. The previous observation that micro-aggregates inhibit chondrogenesis of bovine chondrocytes seeded in photo-polymerizable hydrogel [START_REF] Albrecht | Probing the role of multicellular organization in threedimensional microenvironments[END_REF] when compared to dispersed cells tends to indicate, however, that the negative effects observed here were not an artifact of alginate. Another limitation of the study may be the use of exogenous TGFβ if it is the endogenous molecular agent involved in juxtacrine signaling. In this case, adding TGFβ to the culture medium may have overpowered any increase of TGFβ expression present in MA, but not in DC, conditions. Such a beneficial effect, however, should then have been observed when the BMSCs were cultured without exogenous TGFβ, yet no differences between MA and DC were observed (Figure 5, Figure 6; 0 days TGFβ group). Nonetheless, if TGFβ had been involved in cellular signaling after BMSC differentiation, bone morphogenetic protein 2 (BMP2) could have been used to induce BMSC chondrogenic differentiation instead [START_REF] Schmitt | BMP2 initiates chondrogenic lineage development of adult human mesenchymal stem cells in highdensity culture[END_REF]. This study provides important clues about the communication of BMSCs with their environment, where cell-cell interaction seems to have a limited involvement in their (chondrogenic) differentiation.
Although DC cloned and spontaneously formed clusters, accelerating and amplifying this process with the MA did not provide beneficial effects. This suggests that influencing cellmatrix, rather than cell-cell, interactions may be a more potent tool to control BMSC differentiation, at least for the chondrogenic pathway. To conclude, this study shows that micro-aggregates, although potentially promoting cell-cell contacts and improving paracrine signaling, have no beneficial effects on bovine BMSC chondrogenesis in alginate. Bovine BMSCs (n = 3) were expanded up to P4 in hgDMEM + 10% FBS + 1% P/S (growth medium). Cells were then used to seed either alginate beads at 7 million cells/ml (dispersed cells; DC) or agarose chips cast on PDMS stamps (micro-aggregates; MA). BMSCs on agarose chips were cultured for 3 additional days in growth medium to allow the cells to form micro-aggregates. Those were then used to seed alginate beads at 7 million cells/ml. After seeding in alginate, BMSCs (DC or MA) were cultured for 3 weeks in hgDMEM + 1%P/S + 0.1 μM dexamethasone + 1% ITS-1+ + 1.25 mg/ml BSA + 50 μg/ml ascorbic acid 2-phosphate + 40 μg/ml L-proline + 100 μg/ml sodium pyruvate (Ch-medium). This medium was supplemented with 10 ng/ml TGFβ3 (Ch+ medium) for 0, 7 or 21 days. At D0 and D21, cell viability was characterized by live/dead staining; cell morphology by histology (phalloidin, antivinculin and anti-pan-cadherin staining); produced matrix by histology (Alcian blue staining); biochemical assays [glycosaminoglycan (GAG) and DNA content]; and cell phenotype was characterized by qRT-PCR (types II and X collagens, sox9, aggrecan, TGFβ and sp7). Gene expression of type X collagen (A) and sp7 (B), as determined by qRT-PCR; expression is relative to 18S reference gene (2 -ΔCT method). Values are mean+ SD (NB: logarithmic y axis, and error bars are also logarithmic); * p < 0.05 vs D0; # p < 0.05 vs 7 days of TGFβ3 treatment; @ p < 0.05 vs dispersed cells. Table 1. 
Primer sequences for target and reference genes used in RT-qPCR assays RPL13a, ribosomal protein L13a; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; SOX9, SRY (sex determining region Y)-box 9; COL2A1, collagen type II α1; ACAN, aggrecan; TGFB1, transforming growth factor-β1; COL10A1, collagen type X α1; SP7, Sp7 transcription factor. *GenBank™ accession number. **BD, primers designed with Beacon designer software (Premier Biosoft, Palo Alto, CA, USA) and ordered from Sigma. Figure 1 . 1 Figure 1. Experimental design. Figure 2 . 2 Figure 2. Cell viability and proliferation.(A-H) Bovine BMSCs seeded in alginate beads as dispersed cells (A-D) or as micro-aggregates (E-H) at days 0 (A, E) and 21 after exposure to TGFβ3 for 0 (B, F), 7 (C, G), and 21 (D, H) days. Cells were stained with calcein AM (green fluorescence) for living cells and propidium iodide (red fluorescence) for dead cells. White frames are ×2.5 digital magnification; representative of three donors/group; scale bar = 100 μm; colour images are available online. (I) DNA content/bead, as determined with Hoechst dye assay; values are mean ± SD; n = 3/group. * p < 0.05 vs D0; @ p < 0.05 vs dispersed cells. Figure 3 . 3 Figure 3. Cell morphology. (A-H) Bovine BMSCs seeded in alginate beads as dispersed cells (A-D) or micro-aggregates (E-H) at day 0 (A, E) and day 21 after exposure to TGFβ3 for 0 (B, F), 7 (C, G) and 21 (D, H) days. The beads were cryosectioned, fixed and stained with phalloidin (red fluorescence) for F-actin filaments and counterstained with DAPI (blue fluorescence) for cell nuclei; representative of three donors/group; scale bar = 20 μm; colour images available online. (I, J) Morphometric analysis: area covered by cells or clusters (I) and cell number/cluster (J) were determined by image analysis; values are mean ± SD; n = 25 clusters/cells analysed/ group. * p < 0.05 vs D0; # p < 0.05 vs 7 days of TGFβ3 treatment; $ p < 0.05 vs 21 days of TGFβ3 treatment; @ p < 0.05 vs dispersed cells. 
Figure 4 . 4 Figure 4. Cell adhesion.(A-F) Bovine BMSCs seeded in alginate beads as dispersed cells (A, B) or as micro-aggregates (D, E) at day 0 (A, D) and after 21 days of exposure to TGFβ3 (B, E). The beads were cryosectioned, fixed and stained with anti-pan-cadherin (green and counterstained with DAPI (blue fluorescence) for cell nuclei. Human cardiomyocyte progenitor cells were used as positive controls (C) and experimental samples for secondary antibody negative control (F). (G-L) Bovine BMSCs seeded in alginate beads as dispersed cells (G, H) or as micro-aggregates (J, K) at day 0 (G, I) and after 21 days of exposure to TGFβ3 (H, K). Beads were cryosectioned, fixed and stained with anti-vinculin (green fluorescence) and counterstained with DAPI (blue fluorescence) for cell nuclei. Human cardiomyocyte progenitor cells were used as positive controls (I) and experimental samples for secondary antibody negative control (L); representative of three donors/group; scale bar = 10 μm. Figure 5 . 5 Figure 5. Matrix production. (A-F) Bovine BMSCs seeded in alginate beads as dispersed cells (A-C) or as micro-aggregates (D-F) at day 21 after exposure to TGFβ3 for 0 (A, D), 7 (B, E) and 21 (C, F) days. Beads were cryosectioned, fixed and stained with Alcian blue for proteoglycans (note that light blue is alginate); representative of three donors/group; scale bar = 200 μm. (G, H) Higher magnifications of (B, E), respectively; scale bar = 50 μm; colour images available online. (I) GAG/DNA content after 21 days of culture, as determined with DMMB and Hoechst dye assays, respectively. Values are mean ± SD (NB: logarithmic y axis, and error bars are also logarithmic); n = 3/group; * p < 0.05 vs D0; $ p < 0.05 vs 21 days of TGFβ3 treatment; ND, not detected. Figure 6 . 6 Figure 6. Gene expression -chondrogenesis markers.Gene expression of sox9 (A), type II collagen (B), aggrecan (C) and TGFβ (D), as determined by qRT-PCR. 
Expression is relative to the 18S reference gene (2^-ΔCT method). Values are mean + SD (NB: logarithmic y axis, and error bars are also logarithmic); n = 3/group; * p < 0.05 vs D0; # p < 0.05 vs 7 days of TGFβ3 treatment; $ p < 0.05 vs 21 days of TGFβ3 treatment; @ p < 0.05 vs dispersed cells.

Figure 7. Transient TGFβ stimulation. (A) GAG/DNA content at days 0, 7 and 21, as determined with DMMB and Hoechst dye assays, respectively; values are mean ± SD (NB: logarithmic y axis, and error bars are also logarithmic). (B-D) Gene expression of sox9 (B), type II collagen (C) and aggrecan (D), as determined by qRT-PCR; expression is relative to the 18S reference gene (2^-ΔCT method). Values are mean + SD (NB: logarithmic y axis, and error bars are also logarithmic); n = 3/group; * p < 0.05 vs D0; # p < 0.05 vs D21; @ p < 0.05 vs dispersed cells; ND, not detected.

Figure 8. Gene expression - hypertrophy and osteogenesis markers.

Acknowledgments
The authors would like to thank R. R. Delcher for his help with statistical analysis and Marina van Doeselaar for immunostainings. The authors did not receive financial support or benefits from any commercial source related to the scientific work reported in this manuscript.

Disclosure Statement
The authors declare no competing financial interests.
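The 2^-ΔCT normalisation cited in the figure legends above can be computed directly from cycle-threshold (CT) values. A minimal sketch, with purely hypothetical CT values (not taken from this study):

```python
def relative_expression(ct_target, ct_reference):
    """2^-dCT method: expression of a target gene relative to a
    reference gene, where dCT = CT_target - CT_reference."""
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical example: a target amplifying 5 cycles later than the
# reference gene has 2^-5 ~ 0.031 relative expression.
fold = relative_expression(25.0, 20.0)
```

Lower CT means more starting template, which is why the exponent carries a minus sign.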
01753795
en
[ "shs.scipo" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01753795/file/Thomas_Pierret_Syran_Salafists_at_war_final_draft_HAL%20-%20copie.pdf
Thomas Pierret

Salafis at War in Syria: Logics of Fragmentation and Realignment

Introduction

By March 2011, Salafism was a very marginal force on Syria's religious scene, as its growing popular appeal during the previous decade had been matched by increasing state repression.1 Following the outbreak of the civil war, however, Salafi factions rapidly became the dominant force among the insurgency, to the extent that by October 2013 five of them featured among the seven most powerful rebel groups in the country, totalling several dozens of thousands of fighters: two of them were offspring of the Jihadi Islamic State in Iraq (Jabhat al-Nusra, hereafter al-Nusra, and the Islamic State in Iraq and Sham, ISIS), and the three others would soon become the founding factions of the more pragmatic Islamic Front (Ahrar al-Sham, the Army of Islam, and Suqur al-Sham), along with the Islamist, but non-Salafi, Tawhid Brigade.2

To a certain extent, the spectacular rise of Salafism in Syria was a function of this doctrine's inherent features: in particular, an exclusive definition of the Islamic allegiance of entire pre-existing local rural battalions (kata'ib) that kept their names and, initially at least, only superficially embraced the Salafi orientation of their umbrella organisation.32

Third, Ahrar al-Sham's posture towards non-Jihadi partners was far more flexible than that of al-Nusra. Although it expressed no sympathy for the Western-backed opposition in exile, it joined the Syria Revolutionaries Front,33 a short-lived rebel coalition whose politburo included Muslim Brotherhood-affiliated members of the Syrian National Council, the National Coalition's ancestor.34 The group also distanced itself from al-Nusra's aforementioned rhetorical onslaught against the National Coalition,35 and although Ahrar al-Sham never integrated FSA structures, it cooperated with the latter's provincial Military Councils, and representatives of the movement attended a high-profile meeting convened by FSA Chief of Staff Salim Idris in Ankara in June 2013.36 Even more remarkably, Ahrar al-Sham rapidly opened channels of communication with Western states,37 arguably as a means to reassure the latter about its agenda and consequently avoid being blacklisted.

Fourth, Ahrar al-Sham's political platform was consistently presented in an ambiguous manner that did not reflect ideological flexibility, but testified to the movement's feeling of embarrassment towards its own doctrinaire agenda. Jihadi ideologues who scrutinised the charter of the SIF formulated polite regrets at the fact that, although not explicitly compromising on key issues such as the Islamic state and the legal inferiority of non-Muslims, the document featured vague phrasing that left the door open to various interpretations.38 In an Al Jazeera interview, Ahrar al-Sham's leader 'Abbud appeared utterly uncomfortable, and eventually silent, about the specifics of the state his movement aimed to establish.39

Ahrar al-Sham's relative pragmatism must be understood in the light of the unique context in which it emerged, that is, the Arab Awakening. Starting with Sayyid Qutb in the 1960s, the militant Islamists' vision of politics was shaped by a highly elitist, 'vanguardist' model well illustrated by the name of the Syrian group in which the elders among Ahrar al-Sham's cadre had fought during the

39 Al Jazeera, 9 June 2013, https://www.youtube.com/watch?v=GRPb4nFU2UA.
Notes

32 Al-Hayat, 21 August 2012, http://alhayat.com/Details/427912; on the parochial model, see Staniland, Networks of Rebellion.
33 Not to be confused with the coalition of the same name established in late 2013 by FSA commander Jamal Ma'ruf.
34 Lund, Syria's Salafi Insurgents, p. 29; on the Muslim Brotherhood's occasional funding of Ahrar al-Sham, see Raphaël Lefèvre, 'The Syrian Brotherhood's armed struggle', Carnegie Middle East Center, 14 December 2012.
35 Lund, 'Aleppo and the battle'.
36 Al Jazeera, 20 June 2013, https://www.youtube.com/watch?v=simuv4yIVgU&feature=youtube_gdata_player.
37 Lund, Syria's Salafi Insurgents, p. 18.
38 Abu Basir al-Tartusi, 'A Note on the Charter of the Syrian Islamic Front' (in Arabic), Jihadology, 22 January 2013; Iyad Qunaybi, 'A Look at the Charter of the Syrian Islamic Front' (in Arabic), Jihadology, 22 January 2013.
01709246
en
[ "info.info-au" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01709246/file/ObserverDesignSwitchedSystems.pdf
Diego Mincarelli email: [email protected] Alessandro Pisano email: [email protected] Thierry Floquet email: [email protected] Elio Usai email: [email protected] Uniformly convergent sliding mode-based Keywords: Switched systems, observer design, second order sliding modes come INTRODUCTION In the last decade, the control community has devoted a great deal of attention to the study of hybrid/switched systems [START_REF] Engell | Modelling and analysis of hybrid systems[END_REF][START_REF] Goebel | Hybrid dynamical systems[END_REF][START_REF] Liberzon | Basic problems in stability and design of switched systems[END_REF]. They represent a powerful tool to describe systems that exhibit switchings between several subsystems, inherently by nature or as a result of external control actions such as in switching supervisory control [START_REF] Morse | Supervisory control of families of linear set-point controllers -part 1: Exact matching[END_REF]. Switched systems and switched multi-controller synthesis have numerous applications in the control of mechanical systems [START_REF] Narendra | Adaptation and learning using multiple models, switching, and tuning[END_REF][START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF], automotive industry [START_REF] Balluchi | Hybrid control in automotive applications: the cut-off control[END_REF], switching power converters [START_REF] Koning | Brief digital optimal reduced-order control of pulsewidth-modulated switched linear systems[END_REF], aircraft and traffic control [START_REF] Kamgarpour | Hybrid Optimal Control for Aircraft Trajectory Design with a Variable Sequence of Modes[END_REF][START_REF] Glover | A stochastic hybrid model for air traffic control simulation[END_REF], biological systems [START_REF] Vries | Hybrid system modeling and identification of cell biology systems: perspectives and challenges[END_REF][START_REF] Cinquemani | Identification of genetic 
regulatory networks: A stochastic hybrid approach[END_REF], and many other fields [START_REF] Branicky | Studies in hybrid systems: Modeling, analysis, and control[END_REF]. Remarkable theoretical results involving switched systems have been achieved concerning their stability and stabilizability [START_REF] Lin | Stability and stabilizability of switched linear systems: A survey of recent results[END_REF][START_REF] Liberzon | Switching in Systems and Control. Systems and Control: Foundations and Applications[END_REF], controllability and reachability [START_REF] Sun | Switched Linear Systems: Control and Design[END_REF][START_REF] Sun | Controllability and reachability criteria for switched linear systems[END_REF] and observability [START_REF] Vidal | Observability of linear hybrid systems[END_REF][START_REF] Xie | Necessary and sufficient conditions for controllability and observability of switched impulsive control systems[END_REF][START_REF] Babaali | Observability of switched linear systems in continuous time[END_REF][START_REF] Sun | Switched Linear Systems: Control and Design[END_REF]. The problem of observer design for linear switched systems has been thoroughly investigated by the control community and different approaches have been proposed. The assumptions about the knowledge of the discrete state play a crucial role. In the case of complete knowledge of the discrete state a Luenberger-like switched observer matching the currently active dynamics can be used and the problem is that of guaranteeing the stability of the switched error dynamics. In [START_REF] Alessandri | Switching observers fo continuous-time and discrete-time linear systems[END_REF] it is shown that the observer gain matrices can be selected by solving a set of linear matrix inequalities. In [START_REF] Bejarano | Switched observers for switched linear systems with unknown inputs[END_REF], the approach is generalized to cover linear switched systems with unknown exogenous inputs. 
In [START_REF] Tanwani | Observability implies observer design for switched linear systems[END_REF], by adopting the notion of observability over an interval, borrowed from [START_REF] Xie | Necessary and sufficient conditions for controllability and observability of switched impulsive control systems[END_REF], an observer is designed for switched systems whose subsystems are not even required to be separately observable. However, in some situations the active mode is unknown and needs to be estimated, along with the continuous state, by relying only on the continuous output measurements. Usually, in such case, the observer consists of two parts: a discrete state (or location) observer, estimating the active mode of operation, and a continuous observer that estimates the continuous state of the switched system. In [START_REF] Balluchi | Design of observers for hybrid systems[END_REF], the architecture of a hybrid observer consisting of both a discrete and continuous state identification part is presented, assuming partial knowledge of the discrete state, i.e. dealing with the case in which some discrete events causing the switchings are supposed to be measurable. When such a discrete output is not sufficient to identify the mode location, the information available from the continuous evolution of the plant is used to estimate the current mode. However, the " distinguishability" of the different modes, i.e. the property concerning the possibility to reconstruct univocally the discrete state, was not analysed. The present work intrinsically differs from [START_REF] Balluchi | Design of observers for hybrid systems[END_REF] in that we consider the case of completely unknown discrete state. In such a case the possibility to obtain an estimate of the current mode in a finite time is clearly important, not to say crucial. 
This is clear for instance from [START_REF] Pettersson | Designing switched observers for switched systems using multiple lyapunov functions and dwell-time switching[END_REF], where the authors focus on the continuous-time estimation problem and show that a bound to the estimation error can be given if the discrete mode is estimated correctly within a certain time. Additionally, for those switched systems admitting a dwell time, a guaranteed convergence of the discrete mode estimation taking place "sufficiently faster" that the dwell time is needed. In view of these considerations sliding mode-based observers seem to be a suitable tool due to the attractive feature of finite-time convergence charac-terizing the sliding motions [START_REF] Davila | Second-order sliding-mode observer for mechanical systems[END_REF][START_REF] Floquet | On sliding mode observers for systems with unknown inputs[END_REF][START_REF] Orlov | Finite time stabilization of a perturbed double integrator -part i: Continuous sliding modebased output feedback synthesis[END_REF][START_REF] Pisano | Sliding mode control: a survey with applications[END_REF]. As a matter of fact, sliding mode observers have been successfully implemented to deal with the problem of state reconstruction for switched systems. In [START_REF] Bejarano | State exact reconstruction for switched linear systems via a super-twisting algorithm[END_REF], an observer is proposed ensuring the reconstruction of the continuous and discrete state in finite time. In [START_REF] Davila | Continuous and discrete state reconstruction for nonlinear switched systems via high-order sliding-mode observers[END_REF], the authors present an observer, based on the high-order sliding mode approach, for nonlinear autonomous switched systems. 
However in the above works, though guaranteeing the finite-time convergence, the convergence time depends on the initial conditions mismatch, and, as a consequence, the estimation convergence in a certain pre-specified time can be guaranteed only upon the existence of an a-priori known admissible domain for the system initial conditions. Main contribution and structure of the paper In the present paper we propose a stack of observers whose output injection is computed by relying on the modified Super-Twisting Algorithm, introduced in [START_REF] Cruz-Zabala | Uniform robust exact differentiator[END_REF], that guarantees the so-called "uniform convergence" property, i.e. convergence is attained in finite-time and an upper bound to the transient time can be computed which does not depend on the initial conditions. We also show that, under some conditions, the discrete mode can be correctly reconstructed in finite-time after any switch independently of the observation error at the switching times. Using the continuous output of the switched system, the observer estimates the continuous state and, at the same time, produces suitable residual signals allowing the estimation of the current mode. We propose a residual "projection" methodology by means of which the discrete state can be reconstructed after a switching instant with a finite and pre-specified estimation transient time, allowing a more quick and reliable reconstruction of the discrete state. Additionally, we give structural "distinguishability" conditions in order to guarantee the possibility to reconstruct the discrete state univocally by processing the above mentioned residuals. The paper structure is as follows. Section 2 formulates the problems under analysis and outlines the structure of the proposed scheme. Section 3 illustrates the design of the continuus state observers' stack by providing the underlying Lyapunov based convergence analysis. Section 4 deal with the discrete state estimation problem. 
Two approaches are proposed, one using the "asymptotically vanishing residuals" (Subsection 4.1) and another one taking advantage of the above-mentioned residuals' "projection" methodology ("uniform-time zeroed residuals", Subsection 4.2). Section 4.3 outlines the structural conditions addressing the "distinguishability" issues. Section 5 summarizes the proposed scheme and the main results of this paper. Section 6 illustrates some simulation results and Section 7 gives some concluding remarks.

Notation

For a vector $v = [v_1, v_2, \ldots, v_n]^T \in \mathbb{R}^n$ denote

$\mathrm{sign}(v) = [\mathrm{sign}(v_1), \mathrm{sign}(v_2), \ldots, \mathrm{sign}(v_n)]^T. \quad (1)$

Given a set $D \subseteq \mathbb{R}^n$, let $\bar{v}_j(D)$ be the maximum value that the $j$-th element of $v$ can assume on $D$. Denote by $\|M\|$ the 2-norm of a matrix $M$. For a square matrix $M$, denote by $\sigma(M)$ the spectrum of $M$, i.e. the set of all eigenvalues of $M$. Finally, denote by $N^{\leftarrow}(M)$ the left null space of a matrix $M$.

PROBLEM STATEMENT

Consider the linear autonomous switched dynamics

$\dot{x}(t) = A_\sigma x(t), \qquad y(t) = C_\sigma x(t) \quad (2)$

where $x(t) \in \mathbb{R}^n$ represents the state vector and $y(t) \in \mathbb{R}^p$ represents the output vector. The so-called switching law or discrete state $\sigma(t) : [0, \infty) \to \{1, 2, \ldots, q\}$ determines the actual system dynamics among the possible $q$ "operating modes", which are represented, for system (2), by the set of matrix pairs $\{A_1, C_1\}, \{A_2, C_2\}, \ldots, \{A_q, C_q\}$. Without loss of generality, it is assumed that the output matrices $C_i$, $\forall i = 1, 2, \ldots, q$, are full row rank. The switching law is a piecewise constant function with discontinuities at the switching time instants:

$\sigma(t) = \sigma_k, \quad t_k \le t < t_{k+1}, \quad k = 0, 1, \ldots, \infty \quad (3)$

where $t_0 = 0$ and $t_k$ ($k = 1, 2, \ldots$) are the time instants at which the switches take place.

Definition 1. The dwell time is a constant $\Delta > 0$ such that the switching times fulfill the inequality $t_{k+1} - t_k \ge \Delta$ for all $k = 0, 1, \ldots$.
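As a concrete illustration of the plant model (2)-(3), the following sketch integrates a switched linear system under a piecewise-constant switching law with a dwell time. This is a hypothetical Euler-integration snippet for intuition only; the mode matrices, switching times and step size are illustrative placeholders, not taken from the paper:

```python
import numpy as np

def simulate_switched(A_modes, x0, switch_times, modes, dt=1e-3, T=2.0):
    """Euler integration of xdot = A_sigma x with piecewise-constant sigma.
    switch_times: increasing times t_k with switch_times[0] == 0;
    modes[k] is the active mode index on [t_k, t_{k+1})."""
    x, t = np.asarray(x0, dtype=float), 0.0
    traj = []
    while t < T - 1e-12:
        k = np.searchsorted(switch_times, t, side='right') - 1
        sigma = modes[k]                     # discrete state sigma(t), Eq. (3)
        x = x + dt * (A_modes[sigma] @ x)    # continuous state update, Eq. (2)
        t += dt
        traj.append((t, sigma, x.copy()))
    return traj
```

For instance, a scalar system decaying in mode 0 and frozen in mode 1 with a single switch at t = 1 respects a dwell time of 1 s.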
The objective is to design an observer able to simultaneously estimate the discrete state $\sigma(t)$ and the continuous state $x(t)$ of system (2), by relying on the availability for measurement of the output vector $y(t)$. We propose a design methodology based on a stack of sliding mode observers producing estimates of the evolution of the continuous state of the switched system. At the same time, suitable residuals are provided for identifying the actual value of the discrete state and, consequently, the specific observer of the stack that is producing an asymptotically converging estimate of the continuous state. The observer structure, depicted in Fig. 1, mainly consists of two parts: a location observer and a continuous state observer. The location observer is devoted to the identification of the discrete state, i.e. the active mode of operation of the switched system, on the basis of residual signals produced by the continuous observer. The continuous state observer receives as input the output vector $y(t)$ of the switched system and, using the location information provided by the location observer, produces an estimate of the continuous state of the system. Finally, the proposed approach requires each subsystem to be observable:

Assumption 3. The pairs $(A_i, C_i)$ are observable $\forall i = 1, 2, \ldots, q$.

3 Continuous state observer design

Let us preliminarily consider, as suggested in [START_REF] Edwards | Sliding Mode Control: Theory and applications[END_REF], a family of nonsingular coordinate transformations such that the output vector $y(t)$ is a part of the transformed state $z(t)$, i.e.

$z(t) = \begin{bmatrix} \xi(t) \\ y(t) \end{bmatrix} = T_\sigma x(t) \quad (4)$

where $\xi(t) \in \mathbb{R}^{(n-p)}$ and the transformation matrix is given by

$T_\sigma = \begin{bmatrix} (N_\sigma)^T \\ C_\sigma \end{bmatrix}, \quad (5)$

where the columns of $N_i \in \mathbb{R}^{n \times (n-p)}$ span the null space of $C_i$, $i = 1, 2, \ldots, q$. By Assumption 2, the trajectories $z(t)$ will evolve in some known compact domain $D_z$.
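The change of coordinates (4)-(5) is easy to construct numerically, since it only needs a basis of ker(C). A sketch using SciPy's `null_space` (the test matrices are illustrative, not from the paper); note that T is nonsingular because C has full row rank and the rows of N^T complete it to a basis:

```python
import numpy as np
from scipy.linalg import null_space

def output_coordinates(A, C):
    """Build T = [N^T; C] with the columns of N spanning null(C),
    in the spirit of Eqs. (4)-(5), and return T and T A T^{-1}."""
    N = null_space(C)                 # n x (n-p), columns span ker(C)
    T = np.vstack([N.T, C])           # transformation matrix (5)
    A_bar = T @ A @ np.linalg.inv(T)  # dynamics in the new coordinates (6)-(7)
    return T, A_bar
```

In the transformed state z = T x, the last p components coincide with the measured output y = C x.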
The transformation (4) is always nonsingular, and the switched system dynamics (2) in the transformed coordinates are

$\dot{z}(t) = \bar{A}_\sigma z(t) \quad (6)$

where

$\bar{A}_\sigma = T_\sigma A_\sigma (T_\sigma)^{-1} = \begin{bmatrix} \bar{A}_{\sigma 11} & \bar{A}_{\sigma 12} \\ \bar{A}_{\sigma 21} & \bar{A}_{\sigma 22} \end{bmatrix} \quad (7)$

A stack of $q$ dynamical observers, each one associated to a different mode of the switched system, is suggested as follows:

$\dot{\hat{z}}_i(t) = \bar{A}_i \hat{z}_i(t) + \bar{L}_i \nu_i(t), \quad \text{if } \hat{z}_i \in D_z, \quad i = 1, 2, \ldots, q,$
$\hat{z}_{ij}(t) = \bar{z}_j(D_z), \quad \text{if } \hat{z}_{ij} \ge \bar{z}_j(D_z), \quad j = 1, 2, \ldots, n \quad (8)$

where $\hat{z}_i = [\hat{\xi}_i, \hat{y}_i]^T$ is the state estimate provided by the $i$-th observer, $\nu_i \in \mathbb{R}^p$ represents the $i$-th observer injection input, yet to be designed, and $\bar{L}_i \in \mathbb{R}^{n \times p}$ takes the form

$\bar{L}_i = \begin{bmatrix} L_i \\ -I \end{bmatrix} \quad (9)$

where $L_i \in \mathbb{R}^{(n-p) \times p}$ are observer gain matrices to be designed and $I$ is the identity matrix of dimension $p$. In the second equation of (8), which corresponds to a saturating projection of the estimated state according to the a-priori known domain of evolution $D_z$ of the vector $z(t)$, the notation $\hat{z}_{ij}$ denotes the $j$-th element of the vector $\hat{z}_i$. By introducing the error variables

$e^i_\xi(t) = \hat{\xi}_i(t) - \xi(t), \quad e^i_y(t) = \hat{y}_i(t) - y(t), \quad i = 1, 2, \ldots, q, \quad (10)$

in view of (6) and of the first equation of (8), the following error dynamics are easily obtained:
We know that the trajectories z(t) evolve inside a compact domain D z and the "reset" relation ẑij (t) = zj (D z ) , applied when the corresponding estimate leaves the domain D z , forces them to remain in the set D z . Such a "saturation" mechanism in the observer guarantees that the estimation errors e i ξ and e i y are always bounded. Consequently the functions ϕ ij are smooth enough, that is there exist a constant Φ such that the following inequality is satisfied d dt ϕ ij (e i ξ , e i y , ξ, y) ≤ Φ, ∀i, j = 1, 2, ..., q (15) Following [START_REF] Cruz-Zabala | Uniform robust exact differentiator[END_REF], the observer injection term ν i is going to be specified within the next Theorem which establishes some properties of the proposed observer stack that will be instrumental in our next developments. Theorem 1 Consider the linear switched system (2), satisfying the Assumptions 1-3 along with the stack of observers [START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF], and the observer injection terms set according to ν i = k 1 φ 1 (e i y ) -ν 2i (16) ν2i = -k 2 φ 2 (e i y ) (17) where φ 1 (e i y ) = |e i y | 1/2 sign(e i y ) + µ|e i y | 3/2 sign(e i y ) (18) φ 2 (e i y ) = 1 2 sign(e i y ) + 2µe i y + 3 2 µ 2 |e i y | 2 sign(e i y ) (19) with µ > 0 and the tuning coefficients k 1 and k 2 selected in the set: K = (k 1 , k 2 ) ∈ R 2 |0 < k 1 < 2 √ Φ, k 2 > k 2 1 4 + 4Φ 2 k 2 1 ∪ (k 1 , k 2 ) ∈ R 2 |k 1 < 2 √ Φ, k 2 > 2Φ (20) Let L i be chosen such that Re σ Āi11 + L i Āi21 ≤ -γ, γ > 0 (21) Then, for sufficiently large Φ and γ, there exists an arbitrarily small time T * << ∆ independent of e i y (t k ) such that, for all k = 0, 1, ..., ∞ and for some α > 0, the next properties hold along the time intervals t ∈ [t k + T * , t k+1 ): e i y (t) = 0 ∀i (22) ν i (t) = ϕ iσ (e i ξ , 0, ξ, y) ∀i (23) e σ k ξ (t) ≤ αe -γ(t-t k -T * ) (24) Proof. The theorem can be proven by showing the uniform (i.e. 
independent of the initial condition) time convergence of e i y to zero for all the q observers, after each switching, and analyzing the dynamics of the error variables e i ξ once the trajectories are restricted on the surfaces S i o = { e i ξ , e i y : e i y = 0}. ( 25 ) Considering the input injection term ( 16) into ( 14) the output error dynamics are given by ėi y = ϕ iσ -k 1 φ 1 (e i y ) + ν 2i (26) By introducing the new coordinates ψ i = ν 2i + ϕ iσ (27) and considering (17) one obtains the system ėi y = -k 1 φ 1 (e i y ) + ψ i (28) ψi = -k 2 φ 2 (e i y ) + d dt ϕ iσ (29) In light of [START_REF] Engell | Modelling and analysis of hybrid systems[END_REF], system (28)-( 29) is formally equivalent to that dealt with in [START_REF] Cruz-Zabala | Uniform robust exact differentiator[END_REF], where suitable Lyapunov analysis was used to prove the uniform-time convergence to zero of e i y and ψ i , i.e. e i y = 0 and ψ i = 0 on the interval t ∈ [t k + T * , t k+1 ), where T * is an arbitrarily small transient time independent of e i y (t k ). Consequently ėi y = 0 and, from equation ( 14), condition ( 23) is satisfied too. By substituting ( 23) into ( 11) with e i y = 0, it yields the next equivalent dynamics of the error variables: ėi ξ (t) = Āi11 e i ξ (t)+( Āi11 -Āσ11 )ξ(t)+( Āi12 -Āσ12 )y(t)+L i ϕ iσ (e i ξ , 0, ξ, y) (30) where ϕ iσ (e i ξ , 0, ξ, y) = Āi21 e i ξ (t) + ( Āi21 -Āσ21 )ξ(t) + ( Āi22 -Āσ22 )y(t) (31) Finally, by defining the following matrices Ãi = ( Āi11 + L i Āi21 ) ∆A ξ iσ = ( Āi11 -Āσ11 ) + L i ( Āi21 -Āσ21 ) ∆A y iσ = ( Āi12 -Āσ12 ) + L i ( Āi22 -Āσ22 ) (32) equation ( 30) can be rewritten as ėi ξ (t) = Ãi e i ξ (t) + ∆A ξ iσ ξ(t) + ∆A y iσ y(t) It is worth noting that for the correct observer (i.e., that having the index i = σ k which matches the current mode of operation σ(t)) one has ∆A ξ iσ = ∆A y iσ = 0. 
Hence, along every time intervals t ∈ [t k + T * , t k+1 ), with k = 0, 1, ..., ∞, the error dynamics of the correct observer are given by ėσ k ξ (t) = Ãσ k e σ k ξ (t) (34) which is asymptotically stable by [START_REF] Liberzon | Switching in Systems and Control. Systems and Control: Foundations and Applications[END_REF]. Since by Assumption 3 the pairs (A i , C i ) are all observable, the pairs ( Āi11 , Āi21 ) are also observable, which motivates the tuning condition [START_REF] Liberzon | Switching in Systems and Control. Systems and Control: Foundations and Applications[END_REF]. The solution of (34) fulfills the relation [START_REF] Narendra | Adaptation and learning using multiple models, switching, and tuning[END_REF]. Remark 1 Equation (34) means that the estimation ξi provided by the "correct" observer, i.e. the observer associated to the mode σ k activated during the interval t ∈ [t k , t k+1 ), at time t k + T * starts to converge exponentially to the real continuous state ξ. By appropriate choice of L i the desired rate of convergence can be obtained. Remark 2 It is worth stressing that the time T * is independent of the initial error at each time t k and can be made as small as desired (and in particular such that T * << ∆) by tuning the parameters of the observers properly. Discrete state estimation In the previous section it was shown that there is one observer in the stack that provides the asymptotic reconstruction of the continuous state of the switched system (2). However, the index of such "correct" observer is the currently active mode, which is still unknown to the designer, hence the scheme needs to be complemented by a discrete mode observer. In the next subsections we present two methods for reconstructing the discrete state of the system by suitable processing of the observers' output injections. 
Asymptotically vanishing residuals Along the time intervals [t k + T * , t k+1 ) the observers' output injection vectors [START_REF] Morse | Supervisory control of families of linear set-point controllers -part 1: Exact matching[END_REF] satisfy the following relationship: ν i (t) =      Āσ k 21 σ k ξ (t) for i = σ k Āi21 e i ξ (t) + ( Āi i21 -Āσ k 21 )ξ(t) + ( Āi22 -Āσ k 22 )y(t) for i = σ k (35) It turns out that along the time intervals [t k +T * , t k+1 ) the norm of the injection term of the correct observer will be asymptotically vanishing in accordance with ν σ k (t) ≤ A M 21 αe -γ(t-t k -T * ) → 0, (36) where A M 21 = sup i∈{1,2,...,q} Āi21 . The asymptotic nature of the convergence to zero of the residual vector corresponding to the correct observer is due to the dynamics (34) of the error variable e σ k ξ (t), which in fact tends asymptotically to zero. Uniform-time zeroed residuals By making the injection signals insensitive to the dynamics of e i ξ (t) it is possible to obtain a uniform-time zeroed residual for the correct observer, i.e. a residual which is exactly zeroed after a finite time T * following any switch, independently of the error at the switching time. Let us make the next assumption. Assumption 4 For all i = 1, 2, ..., q, the submatrices Āi21 are not full row rank. The major consequence of Assumption 4 is that N ← Āi21 is not trivial. Let U i be a basis for N ← Āi21 (i.e. U i Āi21 = 0) and denote νi (t) = U i ν i (t) (38) Clearly, by (35), on the interval [t k + T * , t k+1 ) one has that νi (t) =      0 for i = σ k -U i A σ k 21 ξ(t) + U i ( Āi22 -Āσ k 22 )y(t) for i = σ k (39) It turns out that starting from the time t k + T * the norm of the injection term of the correct observer will be exactly zero, i.e. νσ k (t) = 0 ∀t ∈ [t k +T * , t k+1 ). In order to reconstruct univocally the discrete state, it must be guaranteed that the "wrong" residuals cannot stay identically at zero. 
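The projection behind the uniform-time zeroed residuals only needs a basis U_i of the left null space of Ā_i21 (which exists under Assumption 4), so that the projected residual (38) satisfies U_i Ā_i21 = 0. A minimal sketch with an illustrative rank-deficient matrix:

```python
import numpy as np
from scipy.linalg import null_space

def left_null_basis(A21):
    """Rows of U span the left null space of A21, i.e. U @ A21 = 0,
    as required for the projection nu_tilde = U @ nu in Eq. (38)."""
    return null_space(A21.T).T

def projected_residual(U, nu):
    # uniform-time zeroed residual nu_tilde_i(t) = U_i nu_i(t)
    return U @ nu
```

With this projection the contribution of the term Ā_i21 e^i_ξ in (35) is annihilated, which is what removes the asymptotic transient from the correct residual.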
In the following section we shall derive a structural condition on the system matrices guaranteeing that the uniform-time zeroed residuals associated to the "wrong" observers stay at the origin. Uniqueness of the reconstructed switching law The main property allowing the discrete mode reconstruction is that, ter a finite time T * following any switch, the residual corresponding to the "correct" observer converges to zero, according to (35), or it is exactly zero if the uniform-time zeroed residuals (38) are used. In order to reconstruct the discrete state univocally, all the other residuals (i.e. those having indexes corresponding to the non activated modes) must be separated from zero. In what follows the uniform-time zeroed residuals (38) are considered as they provide better performance and faster reconstruction capabilities. Nevertheless, analogue considerations can be made in the case of the asymptotically vanishing residuals (35). Definition 2 Given the switched system (2), a residual νi (t) is said to be non vanishing if x(t) = 0 almost everywhere implies νi (t) = 0 almost everywhere ∀i = σ k , that is the residuals corresponding to the "wrong" observers cannot be identically zero on a time interval unless the state of the system is identically zero in the considered interval. Next Lemma establishes an "observability-like" requirement guaranteeing that that the uniform-time zeroed residuals (39) are non vanishing. Lemma 1 The uniform-time zeroed residuals νi (t) in (39) are non vanishing if and only if the pairs ( Āj , Cij ) are observable ∀i = j, where Āj (j = 1, 2, ..., q) are the state matrices of system ( 6) and Cij = -U T i Āj21 U T i ( Āi22 -Āj22 ) . Proof. Along the time interval [t k + T * , t k+1 ), the "wrong" residuals νi (t) in (39), i.e. 
those with index i = σ k , are related to the state z(t) of ( 6) as νi (t) = Ciσ k z(t) (40) where z(t) is the solution of ż (t) = Āσ k z (t) (41) It is well known that if the pair ( Āσ k , Ciσ k ) of system (40)-( 41) is observable, then νi (t) is identically zero if and only if z(t) (and, thus, x(t)) is null. Therefore, to extend this property over all the intervals t k + T * ≤ t < t k+1 with k = 0, 1, ..., ∞, all the pairs ( Āj , Cij ) ∀i = j have to be observable. Assumption 5 The state trajectories x(t) in (2) are not zero almost everywhere and all the residuals νi (t) in (38) are non vanishing according to Definition (2). Thus if Assumption 5 is fulfilled, only the residual associated to the correct observer will be zero, while all the others will have a norm separated from zero. It is clear that if such a situation can be guaranteed, then the discrete mode can be univocally reconstructed by means of simple decision logic, as discussed in Theorem 2 of the next Section. Remark 3 If Assumption 4 is not satisfied the residual (38) cannot be considered. Nevertheless, by considering the asymptotically vanishing residuals (35), similar structural conditions guaranteeing that the "wrong" residuals cannot be identically zero can be given. Consider the following extended vector: z i e (t) =        e i ξ (t) ξ(t) y(t)        (42) From ( 6), ( 33) and (35) the following system can be considered on the interval [t k + T * , t k+1 ): żi e (t) = A iσ k z i e (t) ν i (t) = C iσ k z i e (t) (43) where A iσ k =    Ãi ∆A ξ iσ k ∆A y iσ k 0 Āσ k    C iσ k =        Āi21 Āi21 -Āσ k 21 Āi22 -Āσ k 22        T (44) The asymptotically vanishing residuals (35) are non vanishing if the pairs (A ij , C ij ) are observable ∀i = j. 
Continuous and discrete state observer

The proposed methodology of continuous and discrete state estimation is summarized in the next Theorem.

Theorem 2 Consider the linear switched system (2), fulfilling Assumptions 1-5, and the observer stack [START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF] described in the previous Theorem 1. Consider the next evaluation signal

ρ_i(t) = ∫_{t-ε}^{t} ||ν_i(τ)|| dτ    (45)

where ε is a small time delay, and the next active-mode identification logic:

σ̂(t) = argmin_{i ∈ {1,2,...,q}} ρ_i(t)    (46)

Then, the discrete state estimation will be such that

σ̂(t) = σ(t),  t_k + T* + ε ≤ t ≤ t_{k+1},  k = 0, 1, ..., ∞    (47)

and the continuous state estimation given by

x̂(t) = (T_σ̂)^{-1} [ ξ̂_σ̂(t) ; ŷ_σ̂(t) ]    (48)

will be such that

||x̂(t) - x(t)|| ≤ α e^{-γ(t - t_k - T*)}  ∀t ∈ [t_k + T*, t_{k+1})    (49)

Proof. By considering (39) which, specified for the correct observer (i = σ_k), guarantees that

ν_{σ_k}(t) = 0,  t_k + T* ≤ t ≤ t_{k+1}    (50)

along with Assumption 5, whose main consequence is that ν_i(t) cannot be identically zero over an interval when i ≠ σ(t), it follows that it is always possible to find a threshold η such that for the evaluation signals ρ_i(t) in (45) one has

ρ_i(t) > η,  t_k + T* + ε ≤ t < t_{k+1},  i ≠ σ_k    (51)
ρ_{σ_k}(t) ≤ η,  t_k + T* + ε ≤ t < t_{k+1}.    (52)

Thus the mode decision logic (46) provides the reconstruction of the discrete state after the finite time T* + ε following any switching time instant, i.e.

σ̂(t) = σ_k,  t_k + T* + ε ≤ t < t_{k+1}    (53)

The second part of Theorem 2, concerning the continuous state estimation, can be easily proven by considering the coordinate transformation (4) and Theorem 1, which imply (49).

Remark 4 We assumed that the state trajectories are not zero almost everywhere (Assumption 5). As a result the wrong residuals can occasionally cross the zero value.
This fact motivates the evaluation signal introduced in (45): considering a window of observation for the residuals, all the wrong residuals will be separated from zero, while only the correct one can stay close to zero during a time interval of nonzero length. The architecture of the observer is shown in Fig. 2.

Remark 5 It is possible to develop the same methodology if the residual (35) is considered instead of (38). The evaluation signal

ρ_i(t) = ∫_{t-ε}^{t} ||ν_i(τ)|| dτ    (54)

can be used to identify the discrete state. However, the time needed to identify the discrete mode will be longer than that of Theorem 2, since the vanishing transient of the error variable e_ξ^i must be allowed to decay for a while, starting from the finite time t_k + T*.

SIMULATION RESULTS

In this section, we discuss a simulation example to show the effectiveness of our method. Consider a switched linear system as in (2) with q = 3 modes defined by the matrices

A_1 = [ 0.1  0.6  -0.4 ; -0.5  -0.8  1 ; 0.1  0.4  -0.7 ],
A_2 = [ -0.2  0.3  -0.8 ; -0.2  -0.4  0.8 ; 1  0.6  -0.3 ],
A_3 = [ -0.8  -0.5  0.2 ; -0.5  -0.1  -0.5 ; -0.3  -0.2  0.3 ]    (55)

C_1 = [ 1 0 0 ; 0 1 0 ],  C_2 = [ 1 0 0 ; 0 0 1 ],  C_3 = [ 0 0 1 ; 0 1 0 ]    (56)

The system starts from mode 1 with the initial conditions x(0) = [-3, -1, 6]^T and evolves switching between the three modes according to the switching law shown in Fig. 3. After the coordinate transformation (4) obtained by the transformation matrices

T_1 = [ 0 0 1 ; 1 0 0 ; 0 1 0 ],  T_2 = [ 0 -1 0 ; 1 0 0 ; 0 0 1 ],  T_3 = [ -1 0 0 ; 0 0 1 ; 0 1 0 ]    (57)

the system is in the proper form to apply our estimation procedure. Since the pairs (A_i, C_i) are all observable for i = 1, 2, 3, the stack of observers (8) can be implemented.
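Two quick numerical checks of this setup in Python (illustrative code added here, not part of the original paper): first, the Kalman rank test confirming that every pair (A_i, C_i) in (55)-(56) is observable; second, a toy discrete-time version of the evaluation signal (45) and the argmin logic (46) applied to invented residual traces in which index 1 plays the role of the correct observer.

```python
import numpy as np

def obsv_rank(A, C):
    """Rank of the Kalman observability matrix [C; C A; C A^2]."""
    rows, M = [C], C
    for _ in range(A.shape[0] - 1):
        M = M @ A
        rows.append(M)
    return np.linalg.matrix_rank(np.vstack(rows))

# --- Observability of the pairs (A_i, C_i) in (55)-(56) ---
A = [np.array([[ 0.1,  0.6, -0.4], [-0.5, -0.8,  1.0], [ 0.1,  0.4, -0.7]]),
     np.array([[-0.2,  0.3, -0.8], [-0.2, -0.4,  0.8], [ 1.0,  0.6, -0.3]]),
     np.array([[-0.8, -0.5,  0.2], [-0.5, -0.1, -0.5], [-0.3, -0.2,  0.3]])]
C = [np.array([[1., 0., 0.], [0., 1., 0.]]),
     np.array([[1., 0., 0.], [0., 0., 1.]]),
     np.array([[0., 0., 1.], [0., 1., 0.]])]
ranks = [obsv_rank(Ai, Ci) for Ai, Ci in zip(A, C)]
print(ranks)  # full rank 3 for every pair: all three modes are observable

# --- Toy version of the evaluation signal (45) and decision logic (46) ---
def evaluation_signals(residuals, dt, eps):
    """rho_i(t): sliding-window integral of ||nu_i|| over [t - eps, t]."""
    n_win = int(round(eps / dt))
    norms = np.linalg.norm(residuals, axis=2)
    rho = np.zeros_like(norms)
    for k in range(norms.shape[1]):
        rho[:, k] = norms[:, max(0, k - n_win):k + 1].sum(axis=1) * dt
    return rho

dt, eps, T = 0.01, 0.1, 200
t = np.arange(T) * dt
residuals = np.stack([          # invented traces, not the paper's simulation
    0.5 + 0.1 * np.sin(5 * t),  # "wrong" observer: norm separated from zero
    np.zeros(T),                # "correct" observer: identically zero
    0.4 + 0.3 * np.cos(3 * t),  # "wrong" observer, occasionally small
])[:, :, None]
sigma_hat = np.argmin(evaluation_signals(residuals, dt, eps), axis=0)
print(sigma_hat[-1])  # index 1, the observer whose residual stays at zero
```

The sliding window plays the role of ε in (45): a wrong residual may cross zero instantaneously, but its windowed integral stays bounded away from zero, so the argmin always selects the correct observer.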
By properly tuning the parameters of the STA-based observers according to [START_REF] Liberzon | Basic problems in stability and design of switched systems[END_REF], the components of the error vector e_y are exactly zero for each observer after a time T* subsequent to any switch (Fig. 4), as proven in Theorem 1. On the contrary, the error e_ξ at time t_k + T* starts to converge exponentially to zero only for the correct observer, as shown in Fig. 5. Notice that the three signals occasionally cross zero, but only the one corresponding to the correct observer remains at zero on a time interval. The gains of the observers L_i are chosen such that the eigenvalues of the matrices Ã_i in [START_REF] Vries | Hybrid system modeling and identification of cell biology systems: perspectives and challenges[END_REF] governing the error dynamics (33) are located at -5. Since Assumption 5 is satisfied, the discrete mode can be univocally identified. To this end let us consider the asymptotically vanishing residuals (35) and the uniform-time zeroed residuals (39). The simulations confirm that both signals stay at zero only for the correct observer. Moreover, the signal corresponding to the asymptotically vanishing residual is in general "slower" than the signal corresponding to the uniform-time zeroed residual. In order to highlight the different behaviours of the two signals, we report in Fig. 6 the dynamics of the two residuals provided by the third observer when mode 3 becomes active at t = 20. The evaluation signal obtained with the uniform-time zeroed residuals allows a faster estimation of the switching law, as compared to their asymptotic counterparts. In Fig. 7 the actual and the reconstructed switching laws are depicted by using the two different evaluation signals.

CONCLUSIONS

The problem of simultaneous continuous and discrete state reconstruction has been tackled for linear autonomous switched systems.
The main ingredient of the proposed approach is an appropriate stack of high-order sliding mode observers used both as continuous state observers and as residual generators for discrete mode identification. As a novelty, a procedure has been devised to algebraically process the residuals in order to reconstruct the discrete state after a finite time that can be arbitrarily small; additionally, conditions ensuring the identifiability of the system modes are derived in terms of the original system matrices. An original "projection" procedure has been proposed, leading to the so-called uniform-time zeroed residuals, which allows a faster reconstruction of the active mode as compared to the (however feasible) case when such projection is not adopted. The same observer can be designed in the case of forced switched systems, too. However, further investigation is needed concerning the underlying conditions to univocally reconstruct the discrete state, which would be affected by the chosen input too.

ACKNOWLEDGMENTS

A. Pisano gratefully acknowledges the financial support from the Ecole Centrale de Lille under the 2013 ECL visiting professor program.

Fig. 1. Observer structure.
Fig. 2. Continuous and discrete state observer.
Fig. 3. Actual switching signal.
Fig. 4. Estimation vector error e_y corresponding to the three observers.
Fig. 6. Evaluation signal used to estimate the discrete state.
Fig. 7. Actual and reconstructed switching signals.
2018
https://amu.hal.science/hal-01753807/file/Version%20Finale%20favier.pdf
M. Favier, J.C. Bordet, M.C. Alessi, P. Nurden, A.T. Nurden, R. Favier (email: [email protected])

Heterozygous mutations of the integrin αIIbR995/β3D723 intracytoplasmic salt bridge cause macrothrombocytopenia, platelet functional defects and enlarged α-granules

Keywords: inherited macrothrombocytopenia, integrin αIIbβ3, platelet function defects, enlarged α-granules

Rare gain-of-function mutations within the ITGA2B or ITGB3 genes have been recognized to cause macrothrombocytopenia (MTP). Here we report three new families with autosomal dominant (AD) MTP, two harboring the same mutation of ITGA2B, αIIbR995W, and a third family with an ITGB3 mutation, β3D723H. The two mutated amino acids are directly involved in the salt bridge linking the intracytoplasmic part of αIIb to β3 of the integrin αIIbβ3. For all affected patients, the bleeding syndrome and MTP were mild to moderate. Platelet aggregation tended to be reduced but not absent. Electron microscopy associated with a morphometric analysis revealed large round platelets; a notable feature was the presence of abnormally large α-granules, with some giant forms showing signs of fusion. Analysis of the maturation and development of megakaryocytes revealed no defect in their early maturation, but abnormal proplatelet formation was observed, with an increased size of the tips. Interestingly, this study revealed, in addition to the classical phenotype of patients with αIIbβ3 intracytoplasmic mutations, an abnormal maturation of α-granules. It will be interesting to determine if this feature is a characteristic of mutations disturbing the αIIbR995/β3D723 salt bridge.

INTRODUCTION

Integrin αIIbβ3 is the platelet receptor for fibrinogen (Fg) and other adhesive proteins and mediates platelet aggregation, playing a key role in hemostasis and thrombosis.
It circulates on platelets in a low-affinity state becoming ligand-competent as a result of conformational changes induced by "inside-out" signaling following platelet activation [START_REF] Coller | The GPIIb/IIIa (integrin αIIbβ3) odyssey: a technology-driven saga of a receptor with twists, turns, and even a bend[END_REF]. Inherited defects of αIIbβ3 with loss of expression and/or function are causal of Glanzmann thrombasthenia (GT), an autosomal recessive bleeding disorder [START_REF] George | Glanzmann's thrombasthenia: The spectrum of clinical disease[END_REF][START_REF] Nurden | Glanzmann thrombasthenia: a review of ITGA2B and ITGB3 defects with emphasis on variants, phenotypic variability, and mouse models[END_REF]. Rare gain-of-function mutations of the ITGA2B or ITGB3 genes encoding αIIbβ3 also cause macrothrombocytopenia (MTP) with a low platelet count and platelets of increased size [START_REF] Nurden | Glanzmann thrombasthenia: a review of ITGA2B and ITGB3 defects with emphasis on variants, phenotypic variability, and mouse models[END_REF][START_REF] Nurden | Glanzmann thrombasthenia-like syndromes associated with macrothrombocytopenias and mutations in the genes encoding the αIIbβ3 integrin[END_REF]. 
Mostly heterozygous with autosomal dominant (AD) expression these include D621_E660del*, L718P, L718del and D723H mutations in β3, and G991C, G993del, R995Q or W in αIIb (Table I) [START_REF] Hardisty | A defect of platelet aggregation associated with an abnormal distribution of glycoprotein IIb-IIIa complexes within the platelet: The cause of a lifelong bleeding disorder[END_REF][START_REF] Peyruchaud | R to Q amino acid substitution in the GFFKR sequence of the cytoplasmic domain of the integrin αIIb subunit in a patient with a Glanzmann's thrombasthenia-like syndrome[END_REF][START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF][START_REF] Jayo | L718P mutation in the membrane-proximal cytoplasmic tail of β3 promotes abnormal αIIbβ3 clustering and lipid microdomain coalesce, and associates with a thrombasthenia-like phenotype[END_REF][START_REF] Kunishima | Heterozygous ITGA2B R995W mutation inducing constitutive activation of the αIIbβ3 receptor affects proplatelet formation and causes congenital macrothrombocytopenia[END_REF][START_REF] Kashiwagi | Demonstration of novel gain-of-function mutations of αIIbβ3: association with macrothrombocytopenia and Glanzmann thrombasthenia-like phenotype[END_REF][START_REF] Kobayashi | Identification of the integrin β3 L718P mutation in a pedigree with autosomal dominant thrombocytopenia with anisocytosis[END_REF][START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. 
While Asp621_Glu660del affects the extracellular cysteine-rich βA domain of β3, the others affect transmembrane or intracellular cytoplasmic domains and in particular the salt bridge linking the negatively charged D723 of β3 with the positively charged R995 of the much studied GFFKR sequence of αIIb [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF][START_REF] Kim | Interactions of platelet integrin αIIb and β3 transmembrane domains in mammalian cell membranes and their role in integrin activation[END_REF]. These mutations permit residual or even total αIIbβ3 expression but give rise to conformation changes that propagate through the integrin and which are recognized by binding of the monoclonal antibody, PAC-1 [START_REF] Shattil | Changes in the platelet membrane glycoprotein IIb.IIIa complex during platelet activation[END_REF]. The MTP appears related to cytoskeletal changes during the late stages of megakaryocyte (MK) development and altered proplatelet formation [START_REF] Bury | Outside-in signaling generated by a constitutively activated integrin αIIbβ3 impairs proplatelet formation in human megakaryocytes[END_REF][START_REF] Bury | Cytoskeletal perturbation leads to platelet dysfunction and thrombocytopenia in variant forms of Glanzmann thrombasthenia[END_REF][START_REF] Hauschner | Abnormal cytoplasmic extensions associated with active αIIbβ3 are probably the cause for macrothrombocytopenia in Glanzmann thrombasthenia-like syndrome[END_REF]). 
Yet, while most of the above variants combine MTP with a substantial loss of platelet aggregation and a GT-like phenotype, the D723H β3 substitution had no effect on platelet aggregation and was called a non-synonymous single nucleotide polymorphism (SNP) by the authors [START_REF] Peyruchaud | R to Q amino acid substitution in the GFFKR sequence of the cytoplasmic domain of the integrin αIIb subunit in a patient with a Glanzmann's thrombasthenia-like syndrome[END_REF]. This was surprising as another cytoplasmic domain mutation involving a near-neighbour Arg724Ter truncating mutation in β3, while not preventing αIIbβ3 expression gave a full GT phenotype [START_REF] Wang | Truncation of the cytoplasmic domain of beta3 in a variant form of Glanzmann thrombasthenia abrogates signaling through the integrin alphaIIbbeta3 complex[END_REF]. We recently reported a heterozygous intracytoplasmic β3 Leu718del that resulted in loss of synchronization between the cytoplasmic tails of β3 and αIIb; changes that gave moderate MTP, a reduced platelet aggregation response and, unexpectedly, enlarged α-granules [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. It is in this context that we now report our studies on a second European family with a heterozygous β3 D723H variant as well as the first two families to be described outside of Japan with a heterozygous αIIb R995W substitution. Significantly, not only both of these variants of the αIIbR995/β3D723 salt bridge give rise to moderate MTP and platelet function defects; their platelets also contained enlarged α-granules. CASE HISTORIES We now report three families (A from Reunion island; B and C from France) with inherited MTP transmitted across 2 or 3 generations suggestive of autosomal dominant (AD) inheritance. The family pedigrees are shown in Fig. 1 and the three index cases (AII.1 an adult female, BI.1 and CI.1 adult males) identified. 
Other family members known to have MTP and significant subpopulations of enlarged platelets are also highlighted. All showed moderate to mild thrombocytopenia and often a higher proportion of immature platelets when analyzed with the Sysmex XE-5000 analyzer (Sysmex, Villepinte, France) (Table I). Increased mean platelet volumes were observed for BI.1, CI.1, and C1.2 (Table I) but values are not given when the large diameter of many of the platelets meant that they were not taken into account by the machine (particularly so for members of family A with MTP). Other blood cell lineages were usually normal for affected family members and all routinely tested coagulation parameters were normal. As quantitated by the ISTH-BAT bleeding score, members of family A with MTP were the most affected (Table I). For example, AII.1 suffered from severe menorrhagia and severe post-partum bleeding requiring platelet and red blood cell transfusions after her second childbirth, although two other children were born without problems (including a cesarean section). AII.1 also experienced occasional spontaneous bruising and episodes of iron-deficient anemia of unknown cause. An affected sister also has easy bruising and childbirth was under the cover of platelet transfusion. In family B the index case BI.1 suffered epistaxis but no bleeding has been reported for other family members. No bleeding was seen for the index case (CI.1) in family C despite major surgery following a bomb explosion while working as a war photographer. His daughter (CII.1) however experiences mild bleeding with frequent hematomas. Our study was performed in accordance with the Declaration of Helsinki after written informed consent, and met with the approved protocol from INSERM (RBM-04-14).
METHODS

Platelet aggregation

Platelet aggregation was tested in citrated platelet-rich plasma (PRP) according to our standard protocols [START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF] and compared to PRP from healthy control donors without adjustment of the platelet count. The following agonists were used: 10 µM adenosine diphosphate (ADP); 1 mM arachidonic acid (AA); 1 µM U46619 (all from Sigma Aldrich, L'isle d'Abeau, Chesnes, France); 20 µM thrombin receptor activating peptide (TRAP) (Polypeptide Group, Strasbourg, France); 1 µg/mL collagen (COL) (Chronolog Corporation, Havertown, USA); 5 µM epinephrine (Sigma); and 1.5 mg/mL and 0.6 mg/mL ristocetin (Helena Biosciences Europe, Elitech, Salon-en-Provence, France). Results were expressed as percentage maximal intensity.

Flow cytometric analysis (FCM)

Glycoprotein expression on unstimulated platelets was assessed using citrated PRP according to our standard protocols [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF][START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF]. On occasion, platelet surface labelling for αIIb, β3, GPIbα and P-selectin was quantified using the PLT Gp/Receptors kit (Biocytex, Marseille, France) at room temperature before and after stimulation with 10 µM ADP and 50 µM TRAP using the Beckman Coulter Navios flow cytometer (Beckman Coulter, Villepinte, France). Platelets were identified by their light scatter characteristics and their positivity for a PC5-conjugated platelet-specific monoclonal antibody (MoAb) (CD41). An isotype antibody was used as negative control. To study platelet αIIbβ3 activation by flow cytometry, platelets were activated with either 10 µM ADP or 20 µM TRAP in the presence of FITC-conjugated PAC-1.
A fluorescence threshold was set to analyze only those platelets that had bound FITC-PAC1. In brief, an antibody mixture consisting of 40 µL of each MoAb (PAC-1 and CD41) was diluted with 280 µL of PBS. Subsequently 5 µL of PRP were mixed with 40 µL of the antibody mixture and with 5 µL of either saline or platelet activator. After incubating for 15 min at room temperature in the dark, 1 mL of isotonic PBS buffer was added and samples were analyzed. Antibody binding was expressed either as the mean fluorescence intensity or as the percentage of platelets positive for antibody.

Transmission electron microscopy (EM)

PRP from blood taken into citrate or ACDA anticoagulant was diluted and fixed in PBS, pH 7.2, containing 1.25% (v/v) glutaraldehyde for 1 h as described [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. After centrifugation and two PBS washings, they were post-fixed in 150 mM cacodylate-HCl buffer, pH 7.4, containing 1% osmium tetroxide for 30 min at 4°C. After dehydration in graded alcohol, embedding in EPON was performed by polymerization at 60°C for 72 h. Ultrathin sections 70-80 nm thick were mounted on 200-mesh copper grids, contrasted with uranyl acetate and lead citrate, and examined using a JEOL JEM1400 transmission electron microscope equipped with a Gatan Orius 600 camera and Digital Micrograph software (Lyon Bio Image, Centre d'Imagerie Quantitative de Lyon Est, France). Morphometric measurements were made using ImageJ software (National Institutes of Health, USA).

Genetic analysis and mutation screening

DNA from AII.1, BI.1 and CI.1 was subjected to targeted exome sequencing (v5-70 Mb) as part of a study of a series of families with MTP due to unknown causes organized within the Paris Trousseau Children's Hospital (Paris, France).
Single missense variants known to be pathological for MTP in the ITGA2B and ITGB3 cytoplasmic tails were highlighted and their presence in other family members with MTP was confirmed by Sanger sequencing (primers are available on request). The absence of other potentially pathological variants in genes known to be causal of MTP in the targeted exome sequencing analysis was confirmed. In silico models to investigate αIIbβ3 structural changes induced by the mutations were obtained using the PyMOL Molecular Graphics System, version 1.3, Schrödinger, LLC (www.pymol.org) and 2k9j pdb files for transmembrane and cytosolic domains, as described in our previous publications [START_REF] Nurden | Glanzmann thrombasthenia: a review of ITGA2B and ITGB3 defects with emphasis on variants, phenotypic variability, and mouse models[END_REF][START_REF] Nurden | Glanzmann thrombasthenia-like syndromes associated with macrothrombocytopenias and mutations in the genes encoding the αIIbβ3 integrin[END_REF][START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. Amino acid changes are visualized in the rotamer form showing side chain orientations incorporated from the Dunbrack Backbone library with maximum probability.

In vitro MK differentiation, ploidy analyses, quantification of proplatelets and immunofluorescence analysis

Plasma thrombopoietin (TPO) levels were measured as previously described [START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF]. Patient or control CD34+ cells were isolated using an immunomagnetic beads technique (Miltenyi Biotec, France) and grown supplemented with 10 ng/mL TPO (Kirin Brewery, Tokyo, Japan) and 25 ng/mL Stem Cell Factor (SCF; Biovitrum AB, Stockholm, Sweden). (i) Ploidy analyses.
At day 10, Hoechst 33342 dye (10 µg/mL; Sigma-Aldrich, Saint Quentin Fallavier, France) was added to the medium of cultured MKs for 2 h at 37°C. Cells were then stained with directly coupled MoAbs: anti-CD41-phycoerythrin and anti-CD42a-allophycocyanin (BD Biosciences, Le Pont de Claix, France) for 30 min at 4°C. Ploidy was measured in the CD41+CD42+ cell population by means of an Influx flow cytometer (BD, Mountain View, USA) and calculated as previously described [START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF]. (ii) Quantification of proplatelet-bearing MKs. To evaluate the percentage of MKs forming proplatelets (PPTs) in liquid medium, CD41+ cells were sorted at day 6 of culture and plated in 96-well plates at a concentration of 2000 cells per well in serum-free medium in the presence of TPO (10 ng/mL). MKs displaying PPTs were quantified between days 11 and 13 of culture by enumerating 200 cells per well using an inverted microscope (Carl Zeiss, Göttingen, Germany) at a magnification of ×200. MKs displaying PPTs were defined as cells exhibiting ≥1 cytoplasmic process with constriction areas and were analyzed in triplicate in two independent experiments for each individual. (iii) Fluorescence microscopy. Primary MKs grown in serum-free medium were allowed to spread for 1 h at 37°C on 100 µg/mL fibrinogen (Sigma Aldrich, Saint Quentin Fallavier, France) coated slides, then fixed in 4% paraformaldehyde (PFA), washed and permeabilized for 5 min with 0.2% Triton X-100 and washed with PBS prior to being incubated with rabbit anti-VWF antibody (Dako, Les Ulis, France) for 1 h, followed by incubation with Alexa 546-conjugated goat anti-rabbit immunoglobulin G (IgG) for 30 min and Phalloidin-FITC (Molecular Probes, Saint Aubin, France). Finally, slides were mounted using Vectashield with 4′,6-diamidino-2-phenylindole (Molecular Probes, Saint Aubin, France).
The PPT-forming MKs (cells expressing VWF) were examined under a Leica DMI 4000 SPE laser scanning microscope (Leica Microsystems, France) with a 63×/1.4 numeric aperture oil objective.

RESULTS

Molecular genetic analysis: We describe 3 previously unreported families, one based in Reunion Island and the others in France, with inherited MTP and mild to moderate bleeding. Targeted exome sequencing revealed heterozygous missense mutations of residues that compose the platelet αIIbR995/β3D723 intracytoplasmic salt bridge, whose loss is integral to integrin signaling. Probands AII.1 and BI.1 have the αIIbR995W variant previously identified in Japanese families with MTP [START_REF] Kunishima | Heterozygous ITGA2B R995W mutation inducing constitutive activation of the αIIbβ3 receptor affects proplatelet formation and causes congenital macrothrombocytopenia[END_REF]. In contrast, CI.1 possesses β3D723H, originally described as a nonsynonymous SNP and associated with MTP in a UK family [START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF]. Sanger sequencing confirmed the presence of both variants and showed that their expression segregated with MTP in the family members available for genetic analysis (see Fig. 1).

Platelet aggregation and flow cytometry analysis: Citrated PRP from each index case was stimulated with ADP, TRAP, AA and collagen and platelet aggregation measured using standard procedures (Fig. 2A). Results were variable; taking the curves obtained for ristocetin as a control for the low platelet count of each index case, platelets from each family with αIIbR995 and β3D723 variants retained at least a partial aggregation response, with AII.1 showing the largest loss, particularly for TRAP.
The response to epinephrine was also much reduced or absent for all samples (data not illustrated). Striking was the low response to AA for patients CI.1 and CII.2, a finding reversed on addition of the thromboxane A2 analog U46619 (data not shown). Otherwise, the platelets retained a rapid response to ADP and collagen. Flow cytometry with MoAbs recognizing determinants specific for αIIb, β3 or the αIIbβ3 complex (data not shown) gave comparable results for each index case, with surface levels for the 3 index cases ranging from 48 to 75% of those on normal platelets (Fig. 2B). Taking into account the increased platelet size, such intermediate levels would suggest that both mutations have a direct influence on αIIbβ3 expression. Enigmatically, the platelet expression of GPIb was particularly increased for the 4 tested family members (AII.1, AII.2, BI.1, BII.2) with the αIIbR995W mutation, a finding only partially explained by the increased platelet volume of these patients with MTP. Binding of PAC-1, recognizing an activation-dependent epitope on αIIbβ3, was analyzed as a probe of the activation state of the integrin. Spontaneous binding of PAC-1 was seen for the platelets of index case AII.1 with the αIIbR995W mutation, suggesting signs of activation, but was not seen for the index case of the second family with this mutation or for the index case of the family with β3D723H (Fig. 2C). Studies were extended to platelets stimulated with high doses of ADP and TRAP; increased binding was seen for AII.1, consistent with further activation of the residual surface αIIbβ3 of the platelets. However, no binding was seen for BI.1 or CI.1, suggesting that for these patients the residual αIIbβ3 was refractory to stimulation under the non-stirred conditions of this set of experiments (Fig. 2B).

Fig. 2. Selected biological platelet findings for the three index cases.
In A) light transmission aggregometry performed in citrated platelet-rich plasma (PRP) compares typical responses of platelets from the index cases (AII.1, BI.1, CI.1) to that of a typical control donor. For AII.1, aggregation with high doses of ADP, Col, TRAP and AA was reduced compared to ristocetin-induced platelet agglutination, whose intensity reflected the low platelet count of the patient. For BI.1, platelet aggregation was moderately reduced with Col and TRAP, while for CI.1 platelet aggregation was reduced essentially for TRAP and AA (it should be noted that it was restored with the thromboxane receptor agonist U46619, not shown). In B) spontaneous PAC-1 binding evaluated by flow cytometry on resting platelets was marginally increased for AII.1 but not for the other index cases. Binding increased for AII.1 after platelet activation with ADP and TRAP but remained low compared to the control. In contrast, PAC-1 binding was basal for BI.1 and CI.1 even after addition of ADP or TRAP. In C) we illustrate the levels of GPIb and αIIbβ3 receptors evaluated by flow cytometry, not only for the probands but also for other selected family members. A decreased surface expression of αIIb and β3 was found for all affected patients, values ranging between 43% (AII.2) and 70% (BII.2) of the control mean. Interestingly, levels of GPIb were increased for the patients, and particularly so for families A and B, sometimes with values beyond 150% of normal values.

Electron microscopy: Platelets from the index cases of all 3 families were examined by transmission EM and for each subject a significant subpopulation of the platelets was larger than normal (Fig. 3). Overall, the platelets showed wide size variations; many tended to be round, in contrast to the discoid shape of controls (control platelets are illustrated by us in ref [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]).
The platelets of all the index cases showed large variations in the numbers of vacuoles. Striking was a heterogeneous distribution of α-granules, with the presence of giant forms particularly evident for patient AII.1, and what appears to represent granule fusion was seen (highlighted in panel 3b). All of the morphological changes were analyzed quantitatively; statistical significance was achieved for all measurements except for those concerning α-granule numbers of patient CI.1 with the β3D723H substitution (Fig. 3). Note that the platelets of the patients with the αIIbR995W mutation tended to be larger and to show more ultrastructural changes. Interestingly, the greater the frequency of giant granules, the lower their concentration/µm². The presence of giant α-granules repeats what we have recently seen for a patient with a heterozygous β3 Leu718del and is reminiscent of the feature that we have much earlier described in Paris-Trousseau syndrome (12, 22, and 23). Targeted exome sequencing failed to show mutations in the FLI1 gene of the three index cases.

Megakaryopoiesis: Plasma TPO levels were within the normal range for each of the index cases (Table I). Analysis of MK maturation and development did not reveal any defect in early megakaryocyte maturation. Ploidy was measured in the CD41+CD42+ cell population at day 10 of culture by means of an Influx flow cytometer, with the proportion of 2N-32N MKs being within the normal range (Fig. 4). Proplatelet formation was examined on days 11 and 12 of culture using an inverted microscope and no difference in the percentage of proplatelet-bearing MKs was detected. Proplatelet morphology was analyzed at the same time using a SPE laser scanning microscope after dual fluorescent labeling of PFA-permeabilized cells with phalloidin and antibody to VWF (green) (Fig. 4).
While the mature MKs basically showed normal morphology, proplatelet numbers tended to be lower and some extensions appeared swollen and with decreased branching. Another finding was that the size of the tips and bulges occurring at intervals along the proplatelets tended to be larger than for control MKs, especially so for the two index cases with αIIbR995W (AII.1 and BI.1); an image of a giant granule can be observed in an illustrated extension of AII.1 (Fig. 4, yellow arrow).
DISCUSSION
In the resting state, the trans-membrane and intra-cytoplasmic segments of the two subunits of αIIbβ3 interact, an interaction that is key to maintaining the extracellular domain of the integrin in its bent resting state [START_REF] Kim | Interactions of platelet integrin αIIb and β3 transmembrane domains in mammalian cell membranes and their role in integrin activation[END_REF][START_REF] Litinov | Activation of individual alphaIIbbeta3 integrin molecules by disruption of transmembrane domain interactions in the absence of clustering[END_REF]. One area of contact between the cytoplasmic tails involves π interactions and aromatic cycle stacking of consecutive F residues within the highly conserved αIIb GFFKR (aa991-995) sequence with W713 of β3 (shown in Fig. 1). A second interaction principally involves a salt bridge between the positively charged αIIbR995 and the negatively charged β3D723 [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF][START_REF] Kim | Interactions of platelet integrin αIIb and β3 transmembrane domains in mammalian cell membranes and their role in integrin activation[END_REF].
Early studies including site-directed mutagenesis, truncation models and charge-reversal mutations showed that loss of this intra-molecular clasp led to integrin activation and modified function [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF]. Hence the mutations described in our patients are of high significance for integrin biology. Intriguingly, the β3D723H change has the more pronounced structural effect, resulting in (i) repulsive electrical charge forces, with the positively charged H now facing the positively charged R995, and (ii) steric encumbrance due to the larger H. The net result is a widening of the interval between R995 and H723 and a weakening of the salt bridge, changes that accompany the acquisition of a higher affinity state [START_REF] Adair | Three-dimensional model of the human platelet integrin alphaIIbbeta3 based on electron cryomicroscopy and x-ray crystallography[END_REF]. Of similar consequence but milder in nature is the replacement of αIIbR995 by the neutral W, while both mutations potentially also interfere with π interactions involving αIIbF992. A novel feature of our study is the presence of enlarged α-granules in the platelets of all 3 index cases. This is of interest because we have recently reported enlarged α-granules for a patient with MTP associated with a β3 L718del resulting in loss of synchronization between opposing amino acids of the αIIb and β3 cytoplasmic tails and a weakening of the αIIbR995/β3D723 salt bridge (Table SI) [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF].
The presence of enlarged α-granules in platelets of patients from 3 unrelated families with MTP linked to cytoplasmic tail mutations in ITGA2B or ITGB3 in our current study strongly suggests that these mutations have a role in the ultrastructural changes, with emphasis on the αIIbR995W mutation. But further studies will be required to define this role and to rule out secondary genetic variants in linkage disequilibrium with the primary mutations. Classically, enlarged α-granules are a consistent feature of the Paris-Trousseau syndrome, first seen on stained blood smears and then confirmed by EM [START_REF] Breton-Gorius | A new congenital dysmegakaryopoietic thrombocytopenia (Paris-Trousseau) associated with giant platelet α-granules and chromosome 11 deletion at 11q23[END_REF][START_REF] Favier | Paris-Trousseau syndrome: clinical, hematological, molecular data of ten cases[END_REF]. Paris-Trousseau syndrome results from genetic variants and haplodeficiency of the FLI1 transcription factor [START_REF] Favier | Progress in understanding the diagnosis and molecular genetics of macrothrombocytopenia[END_REF], variants that were absent from our families when studied by targeted exome sequencing. For our previously studied patient with the L718del, immunogold labeling and EM clearly showed the association of P-selectin, αIIbβ3 and fibrinogen with the giant α-granules, suggesting normal initial granule biosynthesis [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. Whether the giant granules are formed as part of the secretory pathway, as has been proposed for normal platelets [START_REF] Eckly | Respective contributions of single and compound granule fusion to secretion by activated platelets[END_REF], or are perhaps the consequence of premature apoptosis remains a subject for further study.
In this context it would be interesting to know if they are more abundant in aging platelets and at what stage they appear in maturing MKs and/or during platelet biogenesis. Preliminary immunofluorescence studies showed what appeared to be giant granules in the proplatelets of cultured megakaryocytes from AII.1. It is also important to know whether enlarged α-granules have been overlooked in previously published cases of cytoplasmic tail mutations affecting αIIbβ3 or are restricted to certain cases. In parallel, we also keep in mind that FLI1, together with other transcription factors, directly coregulates ITGA2B and ITGB3 [START_REF] Tijssen | Genome-wide analysis of simultaneous GATA1/2, RUNX1, FLI1, and SCL binding in megakaryocytes identifies hematopoietic regulators[END_REF]. The genotypes and phenotypes of previously published cases associating cytoplasmic tail mutations of αIIb or β3 with MTP are compared in Table S1. There is much phenotypic variability, but as in our cases all give rise to a mild to moderate thrombocytopenia and platelet size variations including giant forms. Inheritance is AD when family studies permit this conclusion, although two reports associate single-allele missense mutations of the αIIb cytoplasmic tail with a second and different mutation causing loss of expression of the second allele [START_REF] Peyruchaud | R to Q amino acid substitution in the GFFKR sequence of the cytoplasmic domain of the integrin αIIb subunit in a patient with a Glanzmann's thrombasthenia-like syndrome[END_REF][START_REF] Kashiwagi | Demonstration of novel gain-of-function mutations of αIIbβ3: association with macrothrombocytopenia and Glanzmann thrombasthenia-like phenotype[END_REF]. Such a loss may exaggerate the effect of the single-allele missense mutations in these cases. In all but one of the published cases bleeding was mild to moderate or absent, and our cases follow this pattern.
Platelet aggregation was never totally abrogated but tended to occur more slowly and with a reduced final intensity, as was largely the case for families A and B in our study. Platelets of family C (β3D723H) retained a good aggregation response, a finding in agreement with the report on the UK family with the same mutation [START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF]. For families A, B and C intermediate levels of αIIbβ3 were shown at the surface, results again consistent with many of the literature reports (Table S1). It is noteworthy that the platelet aggregation response of obligate heterozygotes for classic type I GT is normal [START_REF] George | Glanzmann's thrombasthenia: The spectrum of clinical disease[END_REF], suggesting that, despite the influence of the low platelet count, the cytoplasmic domain mutations have a direct effect on the platelet aggregation response. Interestingly, a low platelet surface αIIbβ3 expression was shown to be associated with normal internal pools of αIIbβ3 in patients with αIIbR995Q and αIIbR995W substitutions, suggesting defects in integrin recycling [START_REF] Hardisty | A defect of platelet aggregation associated with an abnormal distribution of glycoprotein IIb-IIIa complexes within the platelet: The cause of a lifelong bleeding disorder[END_REF][START_REF] Kashiwagi | Demonstration of novel gain-of-function mutations of αIIbβ3: association with macrothrombocytopenia and Glanzmann thrombasthenia-like phenotype[END_REF].
Our results for family C, with intermediate platelet surface levels of αIIbβ3, differed from the results for the UK family where αIIbβ3 expression was normal [START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF]. Expression of the adhesion receptor GPIb was increased on the platelets of our index cases, especially so for the two with the αIIbR995W variant, a finding previously observed for Japanese cases with the same mutation [START_REF] Kunishima | Heterozygous ITGA2B R995W mutation inducing constitutive activation of the αIIbβ3 receptor affects proplatelet formation and causes congenital macrothrombocytopenia[END_REF]. The reason for this is not known but could reflect altered megakaryopoiesis. A feature of cytoplasmic domain mutations causal of MTP is that long-range conformational changes extend to the functional domains of the integrin and give what is often termed a partially activated state (6-12) (Table S1). This was indeed shown by Hughes et al [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF], who expressed αIIbβ3 in CHO cells after modifying residues of the salt bridge through site-directed mutagenesis. While the changes permit binding of the activation-dependent IgM MoAb PAC-1, spontaneous binding of Fg has been observed in only one report for this class of mutation [START_REF] Kobayashi | Identification of the integrin β3 L718P mutation in a pedigree with autosomal dominant thrombocytopenia with anisocytosis[END_REF].
These results therefore differ from those for the C560R mutation in the β3 cysteine-rich β(A) extracellular domain reported for a French patient whose platelets circulated with αIIbβ3-bound Fg [START_REF] Ruiz | A point mutation in the cysteine-rich domain of glycoprotein (GP) IIIa results in the expression of a GPIIb-IIIa (alphaIIbbeta3) integrin receptor locked in a high-affinity state and a Glanzmann thrombasthenia-like phenotype[END_REF]. The conformational changes permitting spontaneous PAC-1 binding but only rarely Fg binding remain to be defined, although αIIbβ3 clustering is a potential explanation [START_REF] Jayo | L718P mutation in the membrane-proximal cytoplasmic tail of β3 promotes abnormal αIIbβ3 clustering and lipid microdomain coalesce, and associates with a thrombasthenia-like phenotype[END_REF]. The activation state of αIIbβ3 is also often greater in transfected heterologous cells than in platelets of the patients themselves, perhaps due to abnormal recycling and concentration of the mutated integrin in internal pools. Unexpectedly, variable or no PAC-1 binding was seen in our patients after stimulation with TRAP, despite these patients showing a residual aggregation response in citrated PRP. This apparent contradiction is possibly related to the non-stirred conditions of the in vitro PAC-1 binding experiments. As patients from family C showed a markedly abnormal response to AA, a role for thromboxane A2 generation in αIIbβ3 activation merits investigation.
Previous studies have examined MK maturation in culture but have largely been performed for patients with a 40 amino acid deletion (p.647-686) in the β3 extracellular β-tail domain causal of MTP [START_REF] Bury | Outside-in signaling generated by a constitutively activated integrin αIIbβ3 impairs proplatelet formation in human megakaryocytes[END_REF][START_REF] Bury | Cytoskeletal perturbation leads to platelet dysfunction and thrombocytopenia in variant forms of Glanzmann thrombasthenia[END_REF][START_REF] Hauschner | Abnormal cytoplasmic extensions associated with active αIIbβ3 are probably the cause for macrothrombocytopenia in Glanzmann thrombasthenia-like syndrome[END_REF]. Among the changes noted were (i) fewer proplatelets and (ii) tips of larger size, changes associated with abnormal MK spreading on Fg, a disordered actin distribution and cytoskeletal defects seemingly linked to a sustained "outside-in" signaling induced by the constitutively active αIIbβ3 [START_REF] Bury | Outside-in signaling generated by a constitutively activated integrin αIIbβ3 impairs proplatelet formation in human megakaryocytes[END_REF][START_REF] Bury | Cytoskeletal perturbation leads to platelet dysfunction and thrombocytopenia in variant forms of Glanzmann thrombasthenia[END_REF][START_REF] Hauschner | Abnormal cytoplasmic extensions associated with active αIIbβ3 are probably the cause for macrothrombocytopenia in Glanzmann thrombasthenia-like syndrome[END_REF]. Analysis of megakaryopoiesis for our patients did not reveal a defect in MK maturation or in ploidy, but it confirmed the above studies, and previous studies on MKs from Japanese families with αIIbR995W (9) or the UK family with the β3D723H defect, in that abnormal proplatelet formation was observed, with decreased branching and with bulges of increased size at the tips. This defect was quite similar among our patients even if the size of the tips seemed larger for the patients with αIIbR995W.
The relationship between defects in αIIbβ3 complexes and changes in α-granule size remains to be determined. VWF-labelled granules of increased size were already detected in proplatelets in AII.1 and, interestingly, for the three patients studied, the larger the platelet surface area the larger the α-granule diameter, suggesting that the defect responsible for increased platelet size also contributes to the determination of α-granule size. The cause of the apparent fusion or coalescing of granules as a mechanism of forming the giant granules, observed not only for this patient but also for the one with the β3 Leu718 deletion (12), merits further study. The fact that the causative mutations modify the salt bridge between the positively charged αIIbR995 and the negatively charged β3D723 is particularly intriguing. Our results therefore support a generalized hypothesis whereby mutations within the αIIb or β3 cytoplasmic domains somehow lead to a facilitated MK surface interaction with stromal proteins in the marrow medullary compartment that in turn promotes cytoskeletal changes that not only lead to altered proplatelet formation and platelet biogenesis but also, at least on occasion, to an altered α-granule maturation.
Footnote * Human Genome Variation Society nomenclature for the αIIb and β3 mature proteins is used in this study.
Acknowledgment: the authors thank N. Saut for her technical expertise in sequencing.
Fig. 1 legend (fragment): … for families A, B and C) and absent from the subjects AIII.1, BI.2, CII.2 A who have a normal platelet count. The structural effect of the mutations was studied using the sculpting function incorporated in the PyMol in silico modeling program (see Methods); the images in Fig. 1 show the transmembrane and cytoplasmic domain segments of αIIb (blue) and β3 (green). The interactions creating the inner membrane association clasp are highlighted for wild-type αIIbβ3 in dashed circles, with the positive αIIbR995 and negative β3D723 represented as sticks.
Both substitutions result in steric interference, especially when β3D723 is replaced by the larger H. The substitutions of αIIbR995 with the neutral W or of β3D723 with the positive H necessarily weaken or abrogate the salt bridge, potentially leading to a separation of the subunit tails. Secondary influences also extend to other membrane-proximal amino acids in π interactions, shown as sticks and transparent spheres (see Discussion).
Fig. 1. Genetic analysis and structural in silico modeling of the αIIb R995W and the β3 …
Fig. 3. Transmission electron microscopy of platelets from the index cases of each family.
Fig. 4. In vitro derived MK differentiation. MK differentiation was induced from control (Cont1, …
2018
https://hal.science/hal-01577826/file/main.pdf
Physics of muscle contraction

In this paper we report, clarify and broaden various recent efforts to complement the chemistry-centered models of force generation in (skeletal) muscles by mechanics-centered models. The physical mechanisms of interest can be grouped into two classes: passive and active. The main passive effect is the fast force recovery which does not require the detachment of myosin cross-bridges from actin filaments and can operate without a specialized supply of metabolic fuel (ATP). In mechanical terms, it can be viewed as a collective folding-unfolding phenomenon in the system of interacting bi-stable units and modeled by near equilibrium Langevin dynamics. The parallel active force generation mechanism operates at slow time scales, requires detachment and is crucially dependent on ATP hydrolysis. The underlying mechanical processes take place far from equilibrium and are represented by stochastic models with broken time reversal symmetry implying non-potentiality, correlated noise or multiple reservoirs. The modeling approaches reviewed in this paper deal with both active and passive processes and support from the mechanical perspective the biological point of view that phenomena involved in slow (active) and fast (passive) force generation are tightly intertwined. They reveal, however, that biochemical studies in solution, macroscopic physiological measurements and structural analysis do not provide by themselves all the necessary insights into the functioning of the organized contractile system. In particular, the reviewed body of work emphasizes the important role of long-range interactions and criticality in securing the targeted mechanical response in the physiological regime of isometric contractions. The importance of the purely mechanical micro-scale modeling is accentuated at the end of the paper where we address the puzzling issue of the stability of muscle response on the so-called "descending limb" of the isometric tetanus.
Introduction

In recent years considerable attention has been focused on the study of the physical behavior of cells and tissues. Outside their direct physiological functionality, these biological systems are viewed as prototypes of new artificially produced materials that can actively generate stresses, adjust their rheology and accommodate loading through remodeling and growth. The intriguing mechanical properties of these systems can be linked to hierarchical structures which bridge a broad range of scales, and to expressly nonlocal interactions which make these systems more reminiscent of structures and mechanisms than of homogeneous matter. In contrast with traditional materials, where microscopic dynamics can be enslaved through homogenization and averaging, diverse scales in cells and tissues appear to be linked by complex energy cascades. To complicate matters further, in addition to external loading, cells and tissues are driven internally by endogenous mechanisms supplying energy and maintaining non-equilibrium. The multifaceted nature of the ensuing mechanical responses makes the task of constitutive modeling of such distributed systems rather challenging [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. While general principles of active bio-mechanical response of cells and tissues still remain to be found, physical understanding of some specific sub-systems and regimes has been considerably improved in recent years.
An example of a class of distributed biological systems whose functioning has been rather thoroughly characterized on both physiological and bio-chemical levels is provided by skeletal (striated) muscles [16][17][18][START_REF] Mcmahon | Muscles, reflexes and locomotion[END_REF][START_REF] Nelson | Biological Physics Energy, Information, Life[END_REF][START_REF] Epstein | Theoretical models of skeletal muscle: biological and mathematical considerations[END_REF][START_REF] Rassier | Striated Muscles: From Molecules to Cells[END_REF][START_REF] Sugi | Muscle Contraction and Cell Motility Fundamentals and Developments[END_REF][START_REF] Morel | Molecular and Physiological Mechanisms of Muscle Contraction[END_REF]. The narrow functionality of skeletal muscles is behind their relatively simple, almost crystalline geometry which makes them a natural first choice for systematic physical modeling. The main challenge in the representation of the underlying microscopic machinery is to strike the right balance between chemistry and mechanics. In this review, we address only a very small portion of the huge literature on force generation in muscles and mostly focus on recent efforts to complement the chemistry-centered models by the mechanics-centered models. Other perspectives on muscle contraction can be found in a number of comprehensive reviews [START_REF] Close | [END_REF][26][START_REF] Burke | Motor Units: Anatomy, Physiology, and Functional Organization[END_REF][START_REF] Eisenberg | [END_REF][29][30][31][START_REF] Aidley | The physiology of excitable cells 4th ed[END_REF][START_REF] Geeves | [END_REF][34][35][36][37][38]. The physical mechanisms of interest for our study can be grouped into two classes: passive and active. The passive phenomenon is the fast force recovery which does not require the detachment of myosin cross-bridges from actin filaments and can operate without a specialized supply of ATP. 
It can be viewed as a collective folding-unfolding in the system of interacting bi-stable units and modeled by near equilibrium Langevin dynamics. The active force generation mechanism operates at much slower time scales, requires detachment from actin and is fueled by continuous ATP hydrolysis. The underlying processes take place far from equilibrium and are represented by stochastic models with broken time reversal symmetry implying non-potentiality, correlated noise, multiple reservoirs and other non-equilibrium mechanisms. The physical modeling approaches reviewed in this paper support the biochemical perspective that phenomena involved in slow (active) and fast (passive) force generation are tightly intertwined. They reveal, however, that biochemical studies of the isolated proteins in solution, macroscopic physiological measurements of muscle fiber energetics and structural studies using electron microscopy, X-ray diffraction and spectroscopic methods do not provide by themselves all the necessary insights into the functioning of the organized contractile system. The importance of the microscopic physical modeling that goes beyond chemical kinetics is accentuated by our discussion of the mechanical stability of muscle response on the descending limb of the isometric tetanus (segment of the tension-elongation curve with negative stiffness) [17-19; 39]. An important general theme of this review is the cooperative mechanical response of muscle machinery which defies thermal fluctuations. To generate substantial force, individual contractile elements must act collectively and the mechanism of synchronization has been actively debated in recent years. We show that the factor responsible for the cooperativity is the inherent non-locality of the system ensured by a network of cross-linked elastic backbones. 
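The bi-stable (folding-unfolding) units and the near-equilibrium Langevin dynamics invoked above can be illustrated with a minimal numerical sketch: a single overdamped degree of freedom in a symmetric double-well potential, integrated with the Euler-Maruyama scheme. All parameter values (potential shape, temperature, time step) are illustrative assumptions, not values taken from the reviewed models.

```python
import math
import random

def simulate_bistable_unit(steps=200_000, dt=1e-4, gamma=1.0, kT=0.5, seed=7):
    """Overdamped Langevin dynamics  gamma*dx = -V'(x)*dt + sqrt(2*gamma*kT*dt)*dW
    in the generic double-well potential V(x) = (x^2 - 1)^2, whose minima at
    x = -1 and x = +1 stand for the two conformational states of the unit."""
    rng = random.Random(seed)
    x = -1.0                     # start in one well ("unfolded" state)
    noise = math.sqrt(2.0 * kT * dt / gamma)
    well, hops = -1, 0           # current well and number of inter-well transitions
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)           # -dV/dx
        x += force * dt / gamma + noise * rng.gauss(0.0, 1.0)
        if x > 0.9 and well < 0:
            well, hops = +1, hops + 1              # folding event
        elif x < -0.9 and well > 0:
            well, hops = -1, hops + 1              # unfolding event
    return x, hops
```

With the barrier height (1 in these units) only twice kT, spontaneous hopping between the wells occurs on the simulated time window; lowering kT makes transitions exponentially rarer, which is the single-unit picture behind collective fast force recovery.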
The cooperation is amplified because of the possibility to actively tune the internal stiffness of the system towards a critical state where the correlation length diverges. The reviewed body of work clarifies the role of non-locality and criticality in securing the targeted mechanical response of muscle-type systems in various physiological regimes. It also reveals that the "unusual" features of muscle mechanics, which one can associate with the idea of allosteric regulation, are generic in biological systems [40][41][42][43], and several non-muscle examples of such behavior are discussed in the concluding section of the paper.

Background

We start by recalling a few minimally necessary anatomical and biochemical facts about muscle contraction. Skeletal muscles are composed of bundles of non-ramified parallel fibers. Each fiber is a multinucleated cell, from 100 µm to 30 cm long and 10 µm to 100 µm wide. It spans the whole length of the tissue. The cytoplasm of each muscle cell contains hundreds of 2 µm wide myofibrils immersed in a network of transverse tubules whose role is to deliver the molecules that fuel the contraction. When activated by the central nervous system the fibers apply tensile stress to the constraints. The main goal of muscle mechanics is to understand the working of the force generating mechanism which operates at the submyofibril scale. The salient feature of the skeletal muscle myofibrils is the presence of striations, a succession of dark and light bands visible under the transmission electron microscope [16]. The 2 µm regions between two Z-disks, identified as half-sarcomeres in Fig. 1, are the main contractile units. As we see in this figure, each half-sarcomere contains smaller structures called myofilaments. The thin filaments, which are 8 nm wide and 1 µm long, are composed of polymerized actin monomers. Their helix structure has a periodicity of about 38 nm, with each monomer having a 5 nm diameter.
The thick filaments contain about 300 myosin II molecules per half-sarcomere. Each myosin II is a complex protein with 2 globular heads whose tails are assembled in a helix [44]. The tails of different myosins are packed together and constitute the backbone of the thick filament, from which the heads, known as cross-bridges, project outward toward the surrounding actin filaments. The cross-bridges are organized in a 3-stranded helix with a periodicity of 43.5 nm and an axial distance between two adjacent double heads of about 14.5 nm [45].
Fig. 1. Schematic representation of a segment of myofibril showing the elementary force generating unit: the half-sarcomere. Z-disks are passive cross-linkers responsible for the crystalline structure of the muscle actin network; M-lines bundle myosin molecules into global active cross-linkers. Titin proteins connect the Z-disks inside each sarcomere.
Another important sarcomere protein, whose role in muscle contraction remains ambiguous, is titin. This gigantic molecule is anchored on the Z-disks, spans the whole sarcomere structure and passively controls overstretching; for its potentially active functions see Refs. [46][47][48][49]. A broadly accepted microscopic picture of muscle contraction was proposed by A.F. Huxley and H.E. Huxley in the 1950's; see a historical review in Ref. [50]. The development of electron microscopy and X-ray diffraction techniques at that time allowed researchers to observe the dynamics of the dark and light bands during fiber contraction [51][52][53]. The physical mechanism of force generation was first elucidated in [54], where contraction was explicitly linked to the relative sliding of the myofilaments and explained by a repeated, millisecond-long attachment-pulling interaction between the thick and thin filaments; some conceptual alternatives are discussed in Refs.
[START_REF] Pollack | Muscles and molecules-uncovering the principles of biological motion[END_REF][START_REF] Cohen | [END_REF][57]. The sliding-filament hypothesis [53; 58] assumes that during contraction actin filaments move past myosin filaments while actively interacting with them through the myosin cross-bridges. Biochemical studies in solution showed that actomyosin interaction is powered by the hydrolysis of ATP into ADP and phosphate Pi [59]. The motor part of the myosin head acts as an enzyme which, on one side, increases the hydrolysis reaction rate and, on the other side, converts the released chemical energy into useful work. Each ATP molecule provides 100 zJ (zepto = 10^-21), which is equivalent to ∼25 k_bT at room temperature, where k_b = 1.381 × 10^-23 J K^-1 is the Boltzmann constant and T is the absolute temperature in K. The whole system remains in permanent disequilibrium because the chemical potentials of the reactant (ATP) and the products of the hydrolysis reaction (ADP and Pi) are kept out of balance by a steadily operating exterior metabolic source of energy [16; 17; 60]. The stochastic interaction between individual myosin cross-bridges and the adjacent actin filaments includes, in addition to cyclic attachment of myosin heads to actin binding sites, a concurrent conformational change in the core of the myosin catalytic domain (of folding-unfolding type). A lever arm amplifies this structural transformation, producing the power stroke, which is the crucial part of the mechanism allowing the attached cross-bridges to generate macroscopic forces [16; 17].
Fig. 2. Representation of the Lymn-Taylor cycle, where each mechanical state (1 → 4) is associated with a chemical state (M-ADP-Pi, A-M-ADP-Pi, A-M-ADP and M-ATP). During one cycle, the myosin motor executes one power-stroke and splits one ATP molecule.
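The energy bookkeeping quoted above is easy to verify: with k_b = 1.381 × 10^-23 J K^-1 and room temperature taken as T ≈ 300 K (an assumed value), one unit of thermal energy is about 4.1 zJ, so the 100 zJ released per ATP molecule is indeed ∼25 k_bT:

```python
KB = 1.381e-23        # Boltzmann constant, J/K
T = 300.0             # room temperature in K (assumed)
E_ATP = 100e-21       # energy released per ATP molecule, J (100 zJ)

kbt = KB * T          # one unit of thermal energy, J
ratio = E_ATP / kbt   # ATP energy expressed in units of k_b*T

print(f"k_b*T ≈ {kbt:.2e} J, ATP energy ≈ {ratio:.0f} k_b*T")
# prints: k_b*T ≈ 4.14e-21 J, ATP energy ≈ 24 k_b*T
```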
A basic biochemical model of the myosin ATPase reaction in solution, linking together the attachment-detachment and the power stroke, is known as the Lymn-Taylor (LT) cycle [59]. It incorporates the most important chemical states, known as M-ATP, A-M-ADP-Pi, A-M-ADP and A-M, and associates them with particular mechanical configurations of the actomyosin complex, see Fig. 2. The LT cycle consists of 4 steps [17; 35; 62; 63]:
(i) 1→2 Attachment. The myosin head (M) is initially detached from actin in a pre-power-stroke configuration. ATP is in its hydrolyzed form ADP+Pi, which generates a high affinity for actin binding sites (A). The attachment takes place while the conformational mechanism is in the pre-power-stroke state.
(ii) 2→3 Power stroke. Conformational change during which the myosin head executes a rotation around the binding site, accompanied by a displacement increment of a few nm and a force generation of a few pN. During the power stroke, phosphate (Pi) is released.
(iii) 3→4 Detachment. Separation from the actin filament occurs after the power stroke is completed, while the myosin head remains in its post-power-stroke state. Detachment coincides with the release of the second hydrolysis product ADP, which considerably destabilizes the attached state. As the myosin head detaches, a fresh ATP molecule is recruited.
(iv) 4→1 Re-cocking (or repriming). ATP hydrolysis provides the energy necessary to recharge the power-stroke mechanism.
While this basic cycle has been complicated progressively to match an increasing body of experimental data [64][65][66][67][68], the minimal LT description is believed to be irreducible [69]. However, its association with microscopic structural details and its relation to specific micro-mechanical interactions remain a subject of debate [70][71][START_REF] Sugi | Evidence for the essential role of myosin head lever arm domain and myosin subfragment-2 in muscle contraction Skeletal Muscle -From Myogenesis to Clinical Relations[END_REF].
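The four steps above can be encoded as a cyclic state machine; this is only a schematic bookkeeping device (the timescales are the order-of-magnitude values quoted in the text, and no timescale is quoted for attachment):

```python
# Schematic encoding of the Lymn-Taylor cycle:
# (mechanical step, resulting chemical state, ~characteristic time).
LT_CYCLE = [
    ("1→2 attachment",                 "A-M-ADP-Pi", None),
    ("2→3 power stroke, Pi release",   "A-M-ADP",    "~1 ms (fastest step)"),
    ("3→4 detachment, ADP release",    "M-ATP",      "~100 ms (rate limiting)"),
    ("4→1 re-cocking, ATP hydrolysis", "M-ADP-Pi",   "30-100 ms"),
]

def run_cycles(n):
    """Walk the cycle n times; each full revolution splits one ATP molecule."""
    atp_split, trace = 0, []
    for _ in range(n):
        for step, chem, tau in LT_CYCLE:
            trace.append((step, chem, tau))
        atp_split += 1
    return atp_split, trace

atp, trace = run_cycles(3)
print(f"{atp} ATP molecules split over {len(trace)} elementary steps")
# prints: 3 ATP molecules split over 12 elementary steps
```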
Another complication is that the influence of mechanical loading on the transition rates, which is practically impossible to simulate in experiments on isolated proteins, remains unconstrained by the purely biochemical models. An important feature of the LT cycle, which appears to be loading independent, is the association of vastly different timescales with individual biochemical steps, see Fig. 2. For instance, the power stroke, taking place at the ∼1 ms time scale, is the fastest step. It is believed to be independent of ATP activity, which takes place at an orders-of-magnitude slower time scale, 30-100 ms [67; 73]. The rate limiting step of the whole cycle is the release of ADP, with a characteristic time of ∼100 ms, which matches the rate of tension rise in an isometric tetanus.

Mechanical response

1.2.1. Isometric force and isotonic shortening velocity. A typical experimental setup for measuring the mechanical response of a muscle fiber involves a motor and a force transducer between which the muscle fiber is mounted. The fiber is maintained in an appropriate physiological solution and is electro-stimulated. When the distance between the extremities of the fiber is kept constant (length clamp or hard device loading), the fully activated (tetanized) fiber generates an active force called the isometric tension T 0 which depends on the sarcomere length L [77; 78]. The measured "tension-elongation" curve T 0 (L), shown in Fig. 3(a), reflects the degree of filament overlap in each half sarcomere. At small sarcomere lengths (L ∼ 1.8 µm), the isometric tension level increases linearly as the detrimental overlap (frustration) diminishes. Around L = 2.1 µm, the tension reaches a plateau T max , the physiological regime, where all available myosin cross-bridges have a possibility to bind the actin filament. The descending limb corresponds to regimes where the optimal filament overlap progressively reduces (see more about this regime in Section 5).
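The three regimes just described (ascending limb, plateau, descending limb) can be summarized by a piecewise-linear sketch of T 0 (L). The break points at 1.8 and 2.1 µm come from the text; the plateau end (2.3 µm) and the zero-overlap length (3.6 µm) are illustrative assumptions:

```python
# Piecewise-linear sketch of the normalized isometric tension-length curve
# T0(L).  Only L_asc and L_plat are quoted in the text; L_desc and L_zero
# are assumed illustrative values.
def t0(L, L_asc=1.8, L_plat=2.1, L_desc=2.3, L_zero=3.6):
    """Normalized isometric tension at sarcomere length L (micrometers)."""
    if L <= L_asc:
        return 0.0
    if L < L_plat:                 # ascending limb: frustration diminishes
        return (L - L_asc) / (L_plat - L_asc)
    if L <= L_desc:                # plateau: all cross-bridges can attach
        return 1.0
    if L < L_zero:                 # descending limb: overlap reduces
        return (L_zero - L) / (L_zero - L_desc)
    return 0.0                     # no overlap left

print(t0(2.2))   # on the plateau
print(t0(3.0))   # partway down the descending limb
```

The descending limb of this curve has negative slope, the regime revisited in Section 5.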
One of the main experiments addressing the mechanical behavior of skeletal muscles under applied forces (load clamp or soft loading device) was conducted by A.V. Hill [79], who introduced the notion of the "force-velocity" relation. First the muscle fiber was stimulated under isometric conditions, producing a force T 0 . Then the control device was switched to the load clamp mode and a load step was applied to the fiber, which shortened (or elongated) in response to the new force level. After a transient [80] the system reached a steady state where the shortening velocity could be measured. A different protocol producing essentially the same result was used in Ref. [81], where a ramp shortening (or stretch) was applied to a fiber in length clamp mode and the tension was measured at a particular stage of the time response. Note that in contrast to the case of passive friction, the active force-velocity relation for tetanized muscle enters the quadrant where the dissipation is negative, see Fig. 3(b).

Fast isometric and isotonic transients. The mechanical responses characterized by the tension-elongation relation and the force-velocity relation are associated with timescales of the order of 100 ms. To shed light on the processes at the millisecond time scale, fast load clamp experiments were performed in Refs. [82-84]. Length clamp experiments were first conducted in Ref. [74], where a single fiber was mounted between a force transducer and a loudspeaker motor able to deliver length steps completed in 100 µs. More specifically, after the isometric tension was reached, a length step δL (measured in nanometers per half sarcomere, nm hs⁻¹) was applied to the fiber, with feedback from a striation follower device that allowed control of the step size per sarcomere, see Fig. 4(a). Such experimental protocols have since become standard in the field [76; 85-88]. The observed response could be decomposed into 4 phases: (0 → 1) from 0 to about 100 µs (phase 1).
The tension (respectively sarcomere length) is altered simultaneously with the length step (respectively force step) and reaches a level T 1 (respectively L 1 ) at the end of the step. The values T 1 and L 1 depend linearly on the loading (see Fig. 5, circles), and characterize the instant elastic response of the fiber. Various T 1 and L 1 measurements in different conditions allow one to link the instantaneous elasticity with different structural elements of the sarcomere, in particular to isolate the elasticity of the cross-bridges from the elasticity of passive structures such as the myofilaments [89-91]. (1 → 2) from about 100 µs to about 3 ms (phase 2). In length clamp experiments, the tension is quickly recovered up to a plateau level T 2 close to but below the original level T 0 ; see Fig. 4(a) and open squares in Fig. 5. Such quick recovery is too fast to engage the attachment-detachment processes and can be explained by the synchronized power stroke involving the attached heads [74]. For small step amplitudes δL, the tension T 2 is practically equal to the original tension T 0 , see the plateau on the T 2 vs. elongation relation in Fig. 5. In load clamp experiments, the fiber shortens or elongates towards the level L 2 , see filled squares in Fig. 5. Note that in Fig. 5 the measured L 2 points overlap with the T 2 points except that the plateau appears to be missing.

Figure 4 (caption fragment). The elastic response (phase 1), the processes associated with the passive power stroke (phase 2) and the ATP-driven approach to steady state (phases 3-4). Data are adapted from Refs. [74-76].

Figure 5. Tension-elongation relation (T vs. δL, in nm hs⁻¹) reflecting the state of the system at the end of phase 1 (circles) and phase 2 (squares) in both length clamp (open symbols) and force clamp (filled symbols). Data are taken from Refs. [80; 85; 87; 93-95].
In load clamp the value of L 2 at loads close to T 0 has been difficult to measure because of the presence of oscillations [92]. At larger steps, the tension T 2 starts to depend linearly on the length step because the power stroke capacity of the attached heads has been saturated. (2 → 3 → 4) In length clamp transients, after ∼ 3 ms the tension rises slowly from the plateau to its original value T 0 , see Fig. 4(a). This phase corresponds to the cyclic attachment and detachment of the heads, see Fig. 2, which starts with the detachment of the heads that were initially attached in isometric conditions (phase 3). In load clamp transients phase 4 is clearly identified by a shortening at a constant velocity, see Fig. 4(c), which, being plotted against the force, reproduces Hill's force-velocity relation, see Fig. 3(b). First attempts to rationalize the fast stages of these experiments [74] have led to the insight that we deal here with mechanical snap-springs performing a transition between two configurations. The role of the external loading reduces to biasing mechanically one of the two states. The idea of bistability in the structure of myosin heads has later been fully supported by crystallographic studies [96-98]. Based on the experimental results shown in Fig. 5 one may come to the conclusion that the transient responses of muscle fibers to fast loading in hard (length clamp) and soft (load clamp) devices are identical. However, a careful analysis of Fig. 5 shows that the data for the load clamp protocol are missing in the area adjacent to the state of isometric contractions (around T 0 ). Moreover, the two protocols are clearly characterized by different kinetics. Recall that the rate of fast force recovery can be interpreted as the inverse of the time scale separating the end of phase 1 and the end of phase 2.
The experimental results obtained in soft and hard devices can be compared if we present the recovery rate as a function of the final elongation of the system. In this way, one can compare kinetics in the two ensembles using the same initial and final states; see dashed lines in Fig. 5. A detailed quantitative comparison, shown in Fig. 6, reveals a considerably slower response when the system follows the soft device protocol (filled symbols). The dependence of the relaxation rate on the type of loading was first noticed in Ref. [99] and then confirmed by the direct measurements in Ref. [100]. These discrepancies will be addressed in Section 2. We complement this brief overview of the experimental results with the observation that a seemingly natural, purely passive interpretation of the power stroke is in apparent disagreement with the fact that the power stroke is an active force generating step in the Lymn-Taylor cross-bridge cycle. The challenge of resolving this paradox served as a motivation for several theoretical developments reviewed in this paper.

Modeling approaches

1.3.1. Chemomechanical models. The idea to combine mechanics and chemistry in the modeling of muscle contraction was proposed by A.F. Huxley [54]. The original model was focused exclusively on the attachment-detachment process and the events related to the slow time scale (hundreds of milliseconds). The attachment-detachment process was interpreted as an out-of-equilibrium reaction biased by a drift with a given velocity [67; 73]. The generated force was linked to the occupancy of continuously distributed chemical states, and the attempt was made to justify the observed force-velocity relations [see Fig. 3(b)] using appropriately chosen kinetic constants. This approach was brought to full generality by T.L. Hill and collaborators [101-105].
More recently, the chemomechanical modelling was extended to account for energetics, to include the power-stroke activity and to study the influence of collective effects [67; 86; 106-114]. In the general chemo-mechanical approach muscle contraction is perceived as a set of reactions among a variety of chemical states [67; 68; 86; 115; 116]. The mechanical feedback is achieved through the dependence of the kinetic constants on the total force exerted by the system on the loading device. The chemical states form a network which describes, on one side, various stages of the enzymatic reaction and, on the other side, different mechanical configurations of the system. While some of the crystallographic states have been successfully identified with particular sites of the chemical network (attached and detached [54], strongly and weakly attached [67], pre and post power stroke [74], associated with the first or second myosin head [Brunello et al., Proc. Natl. Acad. Sci.], etc.), the chemo-mechanical models remain largely phenomenological, as the functions characterizing the dependence of the rate constants on the state of the force generating springs are typically chosen to match the observations instead of being derived from a microscopic model. In other words, due to the presence of mechanical elements, the standard discrete chemical states are replaced by continuously parameterized configurational "manifolds". Even after the local conditions of detailed balance are fulfilled, this leads to a functional freedom in assigning the transition rates. This freedom originates from the lack of information about the actual energy barriers separating individual chemical states, and the resulting uncertainty was used as a tool to fit experimental data.
This has led to the development of a comprehensive phenomenological description of muscle contraction that is almost fully compatible with available measurements, see, for instance, Ref. [68] and the references therein. The use of phenomenological expressions, however, gives only limited insight into the micro-mechanical functioning of the force generating mechanism, leaves some gaps in the understanding, as in the case of ensemble dependent kinetics, and ultimately has a restricted predictive power.

Figure 7. Biochemical vs purely mechanistic description of the power stroke in skeletal muscles: (a) the Lymn-Taylor four-state cycle (LT) and (b) the Huxley-Simmons two-state cycle (HS). Adapted from Ref. [121].

1.3.2. Power-stroke models. To model fast force recovery A.F. Huxley and R.M. Simmons (HS) [74] proposed to describe it as a chemical reaction between the folded and unfolded configurations of the attached cross-bridges, with the reaction rates linked to the structure of the underlying energy landscape. Almost identical descriptions of mechanically driven conformational changes were proposed, apparently independently, in the studies of cell adhesion [117; 118] and in the context of hair cell gating [119; 120]. For all these systems the HS model can be viewed as a fundamental mean-field prototype [121]. While the scenario proposed by HS is in agreement with the fact that the power stroke is the fastest step in the Lymn-Taylor (LT) enzymatic cycle [16; 59], there remained a formal disagreement with the existing biochemical picture, see Fig. 7. Thus, HS assumed that the mechanism of the fast force recovery is passive and can be reduced to a mechanically induced conformational change.
In contrast, the LT cycle for actomyosin complexes is based on the assumption that the power stroke can be reversed only actively, through the completion of the biochemical pathway including ADP release, myosin unbinding, binding of uncleaved ATP, splitting of ATP into ADP and Pi, and then rebinding of myosin to actin [59; 68], see Fig. 2. While HS postulated that the power stroke can be reversed by mechanical means, most of the biochemical literature is based on the assumption that the power-stroke recocking cannot be accomplished without the presence of ATP. In particular, physiological fluctuations involving the power stroke are almost exclusively interpreted in the context of active behavior [122-128]. Instead, the purely mechanistic approach of HS, presuming that the power-stroke-related leg of the LT cycle can be decoupled from the rest of the biochemical pathway, was pursued in Refs. [116; 129], but did not manage to reach the mainstream.

1.3.3. Brownian ratchet models. In contrast to chemomechanical models, the early theory of Brownian motors followed a largely mechanically explicit path [130-138]. In this approach, the motion of myosin II was represented by a biased diffusion of a particle on a periodic asymmetric landscape driven by a colored noise. The white component of the noise reflects the presence of a heat reservoir while the correlated component mimics the non-equilibrium chemical environment. Later, such a purely mechanical approach was paralleled by the development of equivalent chemistry-centered discrete models of Brownian ratchets, see for instance Refs. [38; 139-142]. First direct applications of the Brownian ratchet models to muscle contraction can be found in Refs. [143-145], where the focus was on the attachment-detachment process at the expense of the phenomena at the short time scales (power stroke).
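The biased-diffusion picture behind these ratchet models can be illustrated with a toy flashing-ratchet simulation. All parameters below (sawtooth asymmetry, diffusion amplitude, switching protocol) are illustrative assumptions, and the strong-potential limit is taken: while the asymmetric sawtooth (period 1, maximum at alpha) is ON, particles relax deterministically to the nearest downhill minimum; while it is OFF, they diffuse freely:

```python
# Toy flashing ratchet in the strong-potential limit.  Each cycle consists
# of an OFF phase (free Gaussian diffusion) followed by an ON phase in which
# the particle slides downhill to the closest minimum of an asymmetric
# sawtooth potential (minima at integers, maxima at integer + alpha).
import math
import random

def simulate(n_particles=200, n_cycles=100, alpha=0.1, sigma=0.15, seed=1):
    rng = random.Random(seed)
    positions = [0.0] * n_particles
    for _ in range(n_cycles):
        for i, x in enumerate(positions):
            x += rng.gauss(0.0, sigma)      # OFF phase: free diffusion
            frac = x - math.floor(x)
            # ON phase: on (0, alpha) the force pushes left, on (alpha, 1)
            # it pushes right, so the particle snaps to a neighboring minimum
            positions[i] = math.floor(x) if frac < alpha else math.floor(x) + 1.0
    return sum(positions) / n_particles

mean_x = simulate()
print(mean_x)  # positive: net drift toward the steep side of the sawtooth
```

Even though each diffusion step is unbiased, the asymmetry of the potential rectifies the fluctuations into directed motion, which is the essence of the ratchet mechanism invoked above.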
In other words, the early models had a tendency to collapse the four-state Lymn-Taylor cycle onto a two-state cycle by absorbing the configurational changes associated with the transitions M-ATP → M-ADP-Pi and A-M-ADP-Pi → A-M-ADP into more general transitions M-ATP → A-M-ADP and A-M-ADP → M-ATP. Following Ref. [54], the complexity of the structure of the myosin head was reduced to a single degree of freedom representing the stretch of a series elastic spring. This simplification offered considerable analytical transparency and opened the way towards the study of stochastic thermodynamics and efficiency of motor systems, e.g. Refs. [140; 146; 147]. Later, considerable efforts were dedicated to the development of synthetic descriptions containing both ratchet and power stroke elements [112; 113; 143; 144; 148-150]. In particular, numerous attempts have been made to unify the attachment-detachment-centered models with the power-stroke-centered ones in a generalized chemo-mechanical framework [60; 67; 68; 86; 87; 105; 114; 116; 144; 151-154]. The ensuing models have reached a level of sophistication allowing their authors to deal with collective effects, including the analysis of traveling waves and coherent oscillations [60; 110; 114; 143; 155-159]. In particular, myosin-myosin coupling was studied in models of interacting motors [113; 152] and emergent phenomena characterized by large scale entrainment signatures were identified in Refs. [36; 110; 114; 122; 123; 148]. The importance of these discoveries is corroborated by the fact that macroscopic fluctuations in groups of myosins have also been observed experimentally. In particular, considerable coordination between individual elements was detected in close-to-stall conditions, giving rise to synchronized oscillations which could be measured even at the scale of the whole myofibril [26; 82; 92; 149; 160-162].
The synchronization also revealed itself through macro-scale spatial inhomogeneities reported near stall force conditions [163-166]. In ratchet models the cooperative behavior was explained, without direct reference to the power stroke, by the fact that the mechanical state of one motor influences the kinetics of other motors. The long-range elastic interactions were linked to the presence of filamental backbones which are known to be elastically compliant [167; 168]. The fact that similar cooperative behavior of myosin cross-bridges has also been detected experimentally at short time scales, during fast force recovery [92], suggests that at least some level of synchronization should already be within reach of the power-stroke-centered models disregarding motor activity and focusing exclusively on passive mechanical behavior. Elucidating the mechanism of such passive synchronization will be one of our main goals in Section 2.

Organization of the paper

In this review, we focus exclusively on models emphasizing the mechanical side of the force generation processes. The mechanical models affirm that in some situations the microscale stochastic dynamics of the force generating units can be adequately represented by chemical reactions. However, they also point to cases when one ends up unnecessarily constrained by the chemo-mechanical point of view. The physical theories, emphasized in this review, are in tune with the approach pioneered by Huxley and Simmons in their study of fast force recovery and with the general approach of the theory of molecular motors. The elementary contractile mechanisms are modeled by systems of stochastic differential equations describing random walks in complex energy landscapes. These landscapes serve as a representation of both the structure and the interactions in the system; in particular, they embody various local and nonlocal mechanical feedbacks.
In contrast to fully microscopic molecular dynamical reconstructions of multi-particle dynamics, the reviewed mechano-centered models operate with few collective degrees of freedom. The loading is transmitted directly by applied forces while different types of noises serve as a representation of non-mechanical external driving mechanisms that contain both equilibrium and non-equilibrium components. Due to the inherent stochasticity of such mesoscopic systems [140], the emphasis is shifted from the averaged behavior, favored by chemo-mechanical approaches, to the study of the full probability distributions. In Section 2 we show that even in the absence of metabolic fuel, long-range interactions, communicated by passive crosslinkers, can ensure a highly nontrivial cooperative behavior of interacting muscle cross-bridges. This implies ensemble dependence, metastability and criticality which all serve to warrant efficient collective stroke in the presence of thermal fluctuations. We argue that in the near critical regimes the barriers are not high enough for the Kramers approximation to be valid [169; 170] which challenges chemistry-centered approaches. Another important contribution of the physical theory is in the emphasis on fluctuations as an important source of structural information. A particularly interesting conclusion of this section is the realization that a particular number of cross-bridges in realistic half-sarcomeres may be a signature of an (evolutionary) fine tuning of the mechanical response to criticality. In Section 3 we address the effects of correlated noise on force generation in isometric conditions. We focus on the possibility of the emergence of new noise-induced energy wells and stabilization of the states that are unstable in strictly equilibrium conditions. 
The implied transition from negative to positive rigidity can be linked to time correlations in the out-of-equilibrium driving and the reviewed work shows that subtle differences in the active noise may compromise the emergence of such "non-equilibrium" free energy wells. These results suggest that ATP hydrolysis may be involved in tuning the muscle system to near-criticality which appears to be a plausible description of the physiological state of isometric contraction. In Section 4 we introduce mechanical models bringing together the attachment-detachment and the power stroke. To make a clear distinction between these models and the conventional models of Brownian ratchets we operate in a framework when the actin track is nonpolar and the bistable element is unbiased. The symmetry breaking is achieved exclusively through the coupling of the two subsystems. Quite remarkably, a simple mechanical model of this type formulated in terms of continuous Langevin dynamics can reproduce all four discrete states of the minimal LT cycle. In particular, it demonstrates that contraction can be propelled directly through a conformational change, which implies that the power stroke may serve as the leading mechanism not only at short but also at long time scales. Finally, in Section 5 we address the behavior of the contractile system on the descending limb of the isometric tetanus, a segment of the force length relation with a negative stiffness. Despite potential mechanical instability, the isometric tetanus in these regimes is usually associated with a quasi-affine deformation. The mechanics-centered approach allows one to interpret these results in terms of energy landscape whose ruggedness is responsible for the experimentally observed history dependence and hysteresis near the descending limb. 
In this approach both the ground states and the marginally stable states emerge as fine mixtures of short and long half-sarcomeres, and the negative overall slope of the tetanus is shown to coexist with a positive instantaneous stiffness. A version of the mechanical model, accounting for surrounding tissues, produces an intriguing prediction that the energetically optimal variation of the degree of nonuniformity with stretch must exhibit a devil's staircase-type behavior. The review part ends with Section 7, where we go over some non-muscle applications of the proposed mechanical models; in this Section we also formulate conclusions and discuss directions of future research.

Passive force generation

In this Section, we limit ourselves to models of passive force generation. First of all we need to identify an elementary unit whose force producing function is irreducible. The second issue concerns the structure of the interactions between such units. The goal here is to determine whether the consideration of an isolated force-producing element is meaningful in view of the presence of various feedback loops. The pertinence of this question is corroborated by the presence of hierarchies that undermine the independence of individual units. The schematic topological structure of the force generating network in skeletal muscles is shown in Fig. 8. Here we see that behind the apparent series architecture that one can expect to dominate in crystals, there is a system of intricate parallel connections accomplished by passive cross-linkers. Such elastic elements play the role of backbones linking elements at smaller scales. The emerging hierarchy is dominated by long-range interactions which make the "muscle crystal" rather different from conventional inert solids. The analysis of Fig. 8 suggests that the simplest nontrivial structural element of the network is a half-sarcomere that can be represented as a bundle of a finite number of cross-bridges.
The analysis presented below shows that such a model cannot be simplified further because, for instance, the mechanical response of individual cross-bridges is not by itself compatible with observations. The minimal model of this type was proposed by Huxley and Simmons (HS), who described myosin cross-bridges as hard spin elements connected to linear springs loaded in parallel [74]. In this Section, we show that the stochastic version of the HS model is capable of reproducing qualitatively the mechanical response of a muscle submitted to fast external loading in both length clamp (hard device) and force clamp (soft device) settings (see Fig. 6). We also address the question of whether the simplest series connection of HS elements is compatible with the idea of an affine response of the whole muscle fiber. Needless to say, the oversimplified model of HS does not address the full topological complexity of the cross-bridge organization presented in Fig. 8. Furthermore, the 3D steric effects that appear to be crucially important for the description of spontaneous oscillatory contractions [148; 162; 164; 166; 171-173], and the effects of regulatory proteins responsible for steric blocking [174-178], are completely outside the HS framework.

Hard spin model

Consider now in detail the minimal model [74; 99; 121; 179; 180] which interprets the pre- and post-power-stroke conformations of the myosin heads as discrete (chemical) states. Since these states can be viewed as two configurations of a "digital" switch, such a model belongs to the hard spin category. The potential energy of an individual spin unit can be written in the form

u_HS(x) = v_0 if x = 0, and 0 if x = -a, (2.1)

where the variable x takes two values, 0 and -a, describing the unfolded and the folded conformations, respectively. By a we denote the "reference" size of the conformational change, interpreted as the distance between the two energy wells.
With the unfolded state we associate an energy level v_0 , while the folded configuration is considered as a zero energy state, see Fig. 9(a). In addition to a spin unit with energy (2.1), we assume that each cross-bridge contains a linear spring with stiffness κ_0 in series with the bi-stable unit; see Fig. 9(b). The attached cross-bridges connect myosin and actin filaments, which play the role of elastic backbones. Their function is to provide mechanical feedback and coordinate the mechanical state of the individual cross-bridges [167; 168]. There is evidence [89; 95] that a lump description of the combined elasticity of actin and myosin filaments by a single spring is rather adequate, see also Refs. [89; 100; 181-183]. Hence we represent a generic half sarcomere as a cluster of N parallel HS elements and assume that this parallel bundle is connected in series to a linear spring of stiffness κ_b . We choose a as the characteristic length of the system, κ_0 a as the characteristic force, and κ_0 a² as the characteristic energy. The resulting dimensionless energy of the whole system (per cross-bridge) at fixed total elongation z takes the form

v(x, y; z) = (1/N) Σ_{i=1}^{N} [ (1 + x_i) v_0 + (1/2)(y - x_i)² + (λ_b/2)(z - y)² ], (2.2)

where λ_b = κ_b/(N κ_0), y represents the elongation of the cluster of parallel cross-bridges and x_i = {0, -1}, see Fig. 9(b). Here, for simplicity, we did not modify the notations as we switched to non-dimensional quantities. It is important to note that here we intentionally depart from the notations introduced in Section 1.2. For instance, the length of the half sarcomere, which was there denoted by L, is now z. Furthermore, the tension, which was previously T, will now be denoted by σ, while we keep the notation T for the ambient temperature.

Figure 9. (a), (b): the elementary bistable unit (states A and B); (c): a half-sarcomere assembled from elementary units.

Soft and hard devices. It is instructive to consider first the two limit cases, λ_b = ∞ and λ_b = 0.
Zero temperature behavior. If λ_b = ∞, the backbone is infinitely rigid and the array of cross-bridges is loaded in a hard device with y being the control parameter. Due to the permutational invariance of the energy

v(x; y) = (1/N) Σ_{i=1}^{N} [ (1 + x_i) v_0 + (1/2)(y - x_i)² ], (2.3)

each equilibrium state is fully characterized by a discrete order parameter representing the fraction of cross-bridges in the folded (post power stroke) state, p = -(1/N) Σ_{i=1}^{N} x_i . At zero temperature all equilibrium configurations with a given p correspond to local minima of the energy (2.3), see Ref. [179]. These metastable states can be viewed as simple mixtures of the two pure states, one fully folded with p = 1 and the energy (1/2)(y + 1)², and the other one fully unfolded with p = 0 and the energy (1/2)y² + v_0 . The energy of the mixture reads

v(p; y) = p (1/2)(y + 1)² + (1 - p) [ (1/2) y² + v_0 ]. (2.4)

The absence of a mixing energy is a manifestation of the fact that the two populations of cross-bridges do not interact. The energies of the metastable states parameterized by p are shown in Fig. 10 (c-e). Introducing the reference elongation y_0 = v_0 - 1/2, one can show that the global minimum of the energy corresponds either to the folded state with p = 1 or to the unfolded state with p = 0. At the transition point y = y_0 , all metastable states have the same energy, which means that the global switching can be performed at zero energy cost, see Fig. 10(d). The tension-elongation relations along the metastable branches parameterized by p can be presented as σ(p; y) = ∂v(p; y)/∂y = y + p, where σ denotes the tension (per cross-bridge). At fixed p, we obtain equidistant parallel lines, see Fig. 10 [(a) and (b)]. At the crossing (folding) point y = y_0 , the system following the global minimum exhibits a singular negative stiffness.
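The degeneracy of the hard-device metastable states at y_0 can be verified numerically; the sketch below evaluates Eq. (2.4) and the branch tensions σ(p; y) = y + p (the value v_0 = 1 is an arbitrary illustrative choice):

```python
# Zero-temperature hard-device energies, Eq. (2.4):
#   v(p; y) = p*(y+1)^2/2 + (1-p)*(y^2/2 + v0),
# and branch tensions sigma(p; y) = y + p.  At y0 = v0 - 1/2 all metastable
# branches are degenerate, so switching costs no energy.  v0 = 1 is an
# illustrative choice.
v0 = 1.0
y0 = v0 - 0.5

def v(p, y):
    return p * 0.5 * (y + 1.0) ** 2 + (1.0 - p) * (0.5 * y ** 2 + v0)

def tension(p, y):
    return y + p  # equidistant parallel tension branches

energies_at_y0 = [v(p, y0) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(energies_at_y0)  # all equal: the mixing-energy landscape is flat at y0
```

Away from y_0 the degeneracy is lifted: for y < y_0 the folded branch p = 1 is the global minimum, and for y > y_0 the unfolded branch p = 0 takes over, producing the singular negative stiffness at the crossing.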
Artificial metamaterials showing negative stiffness have recently been engineered by drawing on the Braess paradox for decentralized globally connected networks [13; 184; 185]. Biological examples of systems with non-convex energy and negative stiffness are provided by RNA and DNA hairpins and hair bundles in auditory cells [120; 186-188]. In the other limit λ_b → 0, the backbone becomes infinitely soft (z - y → ∞) and if λ_b (z - y) → σ the system behaves as if it was loaded in a soft device, where now the tension σ is the control parameter. The relevant energy can be written in the form

w(x, y; σ) = v(x; y) - σ y = (1/N) Σ_{i=1}^{N} [ (1 + x_i) v_0 + (1/2)(y - x_i)² - σ y ]. (2.5)

The order parameter p again parametrizes the branches of local minimizers of the energy (2.5), see Ref. [179]. At a given value of p, the energy of a metastable state reads

ŵ(p; σ) = -(1/2)σ² + pσ + (1/2) p(1 - p) + (1 - p)v_0 . (2.6)

In contrast to the case of a hard device [see Eq. (2.4)], here there is a nontrivial coupling term p(1 - p) describing the energy of a regular solution. The presence of this term is a signature of a mean-field interaction among individual cross-bridges. The tension-elongation relations describing the set of metastable states can now be written in the form ẑ(p; σ) = -∂ŵ(p; σ)/∂σ = σ - p. The global minimum of the energy is again attained either at p = 1 or p = 0, with a sharp transition at σ = σ_0 = v_0 , which leads to a plateau on the tension-elongation curve, see Fig. 10(b). Note that even in the continuum limit the stable "material" responses of this system in hard and soft devices differ, and this ensemble nonequivalence is a manifestation of the presence of long-range interactions. To illustrate this point further, we consider the energetic cost of mixing in the two loading devices at the conditions of the switch between pure states, see Fig. 10 [(d) and (g)].
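The mean-field coupling term in Eq. (2.6) can be probed directly: at the transition tension σ_0 = v_0 the two pure states are degenerate, but the p(1 - p)/2 term raises the energy of any mixture, in contrast with the flat hard-device landscape (again with the illustrative choice v_0 = 1):

```python
# Soft-device metastable energies, Eq. (2.6):
#   w_hat(p; sigma) = -sigma^2/2 + p*sigma + p*(1-p)/2 + (1-p)*v0.
# The regular-solution term p(1-p)/2 creates a mixing barrier at
# sigma0 = v0.  v0 = 1 is an illustrative choice.
v0 = 1.0
sigma0 = v0

def w_hat(p, sigma):
    return -0.5 * sigma ** 2 + p * sigma + 0.5 * p * (1.0 - p) + (1.0 - p) * v0

barrier = w_hat(0.5, sigma0) - w_hat(0.0, sigma0)
print(barrier)  # 1/8 in units of kappa_0 a^2: mixing is penalized
```

The maximal barrier, reached at p = 1/2, equals 1/8 in the chosen units; this is the soft-device energetic cost of mixing that is absent in the hard device.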
In the hard device [see (d)] the energy dependence on p in this state is flat, suggesting that there is no barrier, while in the soft device [see (g)] the energy is concave, which means that there is a barrier. To develop intuition about the observed inequivalence, it is instructive to take a closer look at the minimal system with N = 2, see Fig. 11. Here for simplicity we assumed that v_0 = 0, implying σ_0 = 0 and y_0 = -1/2. The two pure configurations are labeled as A (p = 0) and C (p = 1) at σ = σ_0, and as D (p = 0) and B (p = 1) at y = y_0. In a hard device, where the two elements do not interact, the transition from state D to state B at a given y = y_0 goes through the configuration B + D, which has the same energy as configurations D and B: the cross-bridges in folded and unfolded states are geometrically compatible and their mixing requires no additional energy. Instead, in a soft device, where individual elements interact, a transition from state A to state C taking place at a given σ = 0 requires passing through the transition state A + C, which has a nonzero pre-stress. Pure states in this mixture have different values of y, and therefore the energy of the mixed configuration A + C, which is stressed, is larger than the energies of the pure unstressed states A and C. Figure 11. Behavior of two cross-bridges. Thick line: global minimum in a soft device (σ_0 = 0). Dashed lines: metastable states p = 0 and p = 1. The intermediate stress-free configuration is obtained either by mixing the two geometrically compatible states B and D in a hard device, which results in a B + D structure without additional internal stress, or by mixing the two geometrically incompatible states A and C in a soft device, which results in an A + C structure with internal residual stress. Adapted from Ref. [179].
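This mixing asymmetry can be verified directly from Eq. (2.6): at σ = σ_0 the pure states are degenerate, while any mixture carries the positive mean-field coupling energy p(1 - p)/2. A minimal sketch (v_0 chosen arbitrarily):

```python
def w_hat(p, sigma, v0):
    """Metastable-state energy in a soft device, Eq. (2.6)."""
    return -0.5 * sigma**2 + p * sigma + 0.5 * p * (1 - p) + (1 - p) * v0

v0 = 1.0
sigma0 = v0
# At sigma = sigma0 the pure states p = 0 and p = 1 are degenerate...
assert abs(w_hat(0.0, sigma0, v0) - w_hat(1.0, sigma0, v0)) < 1e-12
# ...but the 50/50 mixture pays the coupling penalty p(1-p)/2 = 1/8,
# i.e. mixing costs energy in a soft device, unlike in a hard device.
assert w_hat(0.5, sigma0, v0) > w_hat(0.0, sigma0, v0)
```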
We also observe that in a soft device the transition between the pure states is cooperative, requiring an essential interaction of the individual elements, while in a hard device it takes place independently in each element. Finite temperature behavior. We now turn to finite temperature to check the robustness of the observations made in the previous section. Consider first the hard device case λ_b = ∞, studied chemo-mechanically in the seminal paper of HS [74]; see Ref. [121] for the statistical interpretation. With the variable y serving now as the control parameter, the equilibrium probability density for a micro-state x with N elements takes the form ρ(x; y, β) = Z(y, β)^-1 exp[-βNv(x; y)], where the partition function is Z(y, β) = ∑_{x ∈ {0,-1}^N} exp[-βNv(x; y)] = [Z_1(y, β)]^N. Here Z_1 represents the partition function of a single element, given by Z_1(y, β) = exp[-(β/2)(y + 1)^2] + exp[-β(y^2/2 + v_0)]. (2.7) Therefore one can write ρ(x; y, β) = ∏_{i=1}^{N} ρ_1(x_i; y, β), where we have introduced the equilibrium probability distribution for a single element, ρ_1(x; y, β) = Z_1(y, β)^-1 exp[-βv(x; y)], (2.8) with v(x; y) now denoting the energy of a single element. The lack of cooperativity in this case is clear if one considers the marginal probability density at fixed p, ρ(p; y, β) = C(N, Np) [ρ_1(-1; y, β)]^{Np} [ρ_1(0; y, β)]^{N(1-p)} = Z(y, β)^-1 exp[-βN f(p; y, β)], where f(p; y, β) = v(p; y) - (1/β)s(p) is the marginal free energy, v is given by Eq. (2.4), and s(p) = (1/N) log C(N, Np) is the ideal entropy, with C(N, Np) denoting the binomial coefficient, see Fig. 12. In the thermodynamic limit N → ∞ we obtain the explicit expression f_∞(p; y, β) = v(p; y) - (1/β)s_∞(p), where s_∞(p) = -[p log(p) + (1 - p) log(1 - p)]. The function f_∞(p) is always convex since ∂^2 f_∞(p; y, β)/∂p^2 = [βp(1 - p)]^-1 > 0, and therefore the marginal free energy always has a single minimum p*(y, β) corresponding to a microscopic mixture of de-synchronized elements, see Fig. 12(b).
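The factorization of the partition function can be illustrated numerically: since f_∞ is strictly convex, its minimizer p* coincides with the folded-state probability of a single element. A sketch (parameter values are arbitrary):

```python
import math

def Z1(y, beta, v0):
    """Single cross-bridge partition function, Eq. (2.7)."""
    return math.exp(-0.5 * beta * (y + 1) ** 2) + math.exp(-beta * (0.5 * y**2 + v0))

def p_folded(y, beta, v0):
    """Equilibrium probability of the folded state, rho_1(-1; y, beta)."""
    return math.exp(-0.5 * beta * (y + 1) ** 2) / Z1(y, beta, v0)

def f_inf(p, y, beta, v0):
    """Marginal free energy f_inf(p) = v(p; y) - s_inf(p)/beta (N -> infinity)."""
    v = p * 0.5 * (y + 1) ** 2 + (1 - p) * (0.5 * y**2 + v0)
    s = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return v - s / beta

v0, beta, y = 1.0, 2.0, 0.3
# Because f_inf is strictly convex, its unique minimum p* coincides with the
# single-element folded probability: the elements are independent.
grid = [i / 1000 for i in range(1, 1000)]
p_star = min(grid, key=lambda p: f_inf(p, y, beta, v0))
assert abs(p_star - p_folded(y, beta, v0)) < 2e-3
```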
These results show that the equilibrium (average) properties of a cluster of HS elements in a hard device can be fully recovered if we know the properties of a single element, the problem studied in [74]. In particular, the equilibrium free energy f(y, β) = f(p*; y, β), where p* is the minimum of the marginal free energy f [see Fig. 12(c)], can be written in the HS form f(y, β) = -(1/(βN)) log[Z(y, β)] = (1/2)y^2 + v_0 + (y - y_0)/2 - (1/β) ln{2 cosh[(β/2)(y - y_0)]}, (2.9) which is also an expression of the free energy of the simplest paramagnetic Ising model (see Balian, From Microphysics to Macrophysics: Methods and Applications of Statistical Physics). Its dependence on the elongation is illustrated in Fig. 13(a). We observe that for β ≤ 4 (supercritical temperatures) the free energy is convex, while for β > 4 (subcritical temperatures) it is non-convex. The emergence of an unusual "pseudo-critical" temperature β = β_c = 4 in this paramagnetic system is a result of the presence of the quadratic energy associated with the "applied field" y, see Eq. (2.9). The ensuing equilibrium tension-elongation relation (per cross-bridge) is identical to the expression obtained in Ref. [74], ⟨σ⟩(y, β) = ∂f/∂y = σ_0 + y - y_0 - (1/2) tanh[(β/2)(y - y_0)]. (2.10) As a result of the nonconvexity of the free energy, the dependence of the tension ⟨σ⟩ on y can be non-monotone, see Fig. 13(b). Indeed, the equilibrium stiffness κ(y, β) = ∂⟨σ⟩(y, β)/∂y = 1 - (β/4){sech[β(y - y_0)/2]}^2, (2.11) is negative in an interval around y = y_0 whenever β > 4. In connection with these results we observe that the difference between the quasi-static stiffness of myosin II, measured by single molecule techniques, and its instantaneous stiffness, obtained from mechanical tests on myofibrils, may be due to the fluctuational term κ_F, see Refs. [91; 191; 192]. Note also that the fluctuation-related term does not disappear in the zero temperature limit (producing a delta function type contribution to the affine response at y = y_0), which is a manifestation of a (singular) glassy behavior [193; 194]. It is interesting that while fitting their experimental data HS used exactly the critical value β = 4, corresponding to zero stiffness in the state of isometric contraction. Negative stiffness, resulting from the non-additivity of the system, prevails at subcritical temperatures; in this range a shortening of an element leads to a tension increase, which can be interpreted as a meta-material behavior [13; 99; 195]. In the soft device case λ_b = 0, the probability density associated with a microstate x is given by ρ(x, y; σ, β) = Z(σ, β)^-1 exp[-βNw(x, y; σ)], where the partition function is now Z(σ, β) = ∫ dy ∑_{x ∈ {0,-1}^N} exp{-βN[v(x; y) - σy]}. By integrating out the internal variables x_i, we obtain the marginal probability density depending on the two order parameters, y and p, ρ(p, y; σ, β) = Z(σ, β)^-1 exp[-βNg(p, y; σ, β)]. (2.12) Here we introduced the marginal free energy g(p, y; σ, β) = f(p, y, β) - σy = v(p, y) - σy - (1/β)s(p), (2.13) which is convex at large temperatures and non-convex (with two metastable wells) at low temperatures, see Fig. 14, signaling the presence of a genuine critical point. By integrating the distribution (2.12) over p we obtain the marginal distribution ρ(y; σ, β) = Z^-1 exp[-βNg(y; σ, β)], where g(y; σ, β) = f(y; β) - σy, with f being the equilibrium free energy of the system in a hard device, see Eq. (2.9). This free energy has more than one stable state as long as the equation f′(y) - σ = 0 has more than one solution. Since f′ is precisely the average tension-elongation relation in the hard device case, we find that the critical temperature is exactly β_c = 4. The same result can also be obtained directly as a condition of the positive definiteness of the Hessian of the free energy (2.13) (in the thermodynamic limit).
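The pseudo-critical temperature β_c = 4 can be checked directly from Eqs. (2.10) and (2.11); a short sketch (v_0 arbitrary):

```python
import math

def sigma_eq(y, beta, v0):
    """Equilibrium tension per cross-bridge, Eq. (2.10) (hard device)."""
    y0 = v0 - 0.5
    return v0 + (y - y0) - 0.5 * math.tanh(0.5 * beta * (y - y0))

def stiffness(y, beta, v0):
    """Equilibrium stiffness, Eq. (2.11); sech(x) = 1/cosh(x)."""
    y0 = v0 - 0.5
    return 1.0 - 0.25 * beta / math.cosh(0.5 * beta * (y - y0)) ** 2

v0 = 1.0
y0 = v0 - 0.5
# At y = y0 the stiffness equals 1 - beta/4: it vanishes at the
# pseudo-critical temperature beta_c = 4 and is negative below it.
assert abs(stiffness(y0, 4.0, v0)) < 1e-12
assert stiffness(y0, 8.0, v0) < 0 < stiffness(y0, 2.0, v0)
# Negative stiffness: near y0 the tension drops upon stretching.
assert sigma_eq(y0 + 0.01, 8.0, v0) < sigma_eq(y0, 8.0, v0)
```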
The physical origin of the predicted second order phase transition becomes clear if, instead of p, we now eliminate y and introduce the marginal free energy at fixed p. In the (more transparent) thermodynamic limit we can write g_∞(p; σ, β) = ŵ(p, σ) - (1/β) s_∞(p), (2.14) where ŵ = -(1/2)σ^2 + p(σ - σ_0) + (1/2)p(1 - p) + v_0 is the zero temperature energy of the metastable states parametrized by p, see Eq. (2.6) and Fig. 10. Since the entropy s_∞(p) is concave with a maximum at p = 1/2, the convexity of the free energy depends on the competition between the purely mechanical interaction term p(1 - p) and the entropic term s_∞(p)/β, with the latter dominating at low β. The Gibbs free energy g_∞(σ, β) and the corresponding force-elongation relations are illustrated in Fig. 15. In (a), the energies of the critical points of the free energy (2.14) are represented as functions of the loading and the temperature, with several isothermal sections of the energy landscape shown in (b). For each critical point p, the elongation ŷ = σ - p is shown in Fig. 15(c). At σ = σ_0 = v_0, the free energy g_∞ becomes symmetric with respect to p = 1/2 and therefore we have ⟨p⟩(σ_0, β) = 1/2, independently of the value of β. The structure of the second order phase transition is further illustrated in Fig. 16(a). Both the mechanical and the thermal properties of the system can be obtained from the probability density (2.12). By eliminating y and taking the thermodynamic limit N → ∞ we obtain ρ_∞(p; σ, β) = Z^-1 exp[-βNg_∞(p; σ, β)] with Z(σ, β) = ∑_p exp[-βNg_∞(p; σ, β)].
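A numerical sketch of the soft-device phase transition encoded in Eq. (2.14) (parameter values arbitrary); the brute-force minimization also recovers the classical mean-field square-root scaling of the order parameter near the critical point:

```python
import math

def g_inf(p, sigma, beta, v0):
    """Marginal free energy, Eq. (2.14), in the thermodynamic limit."""
    w = -0.5 * sigma**2 + p * (sigma - v0) + 0.5 * p * (1 - p) + v0
    return w + (p * math.log(p) + (1 - p) * math.log(1 - p)) / beta

def local_minima(sigma, beta, v0, n=2000):
    """Interior local minima of g_inf on a uniform grid in (0, 1)."""
    grid = [i / n for i in range(1, n)]
    g = [g_inf(p, sigma, beta, v0) for p in grid]
    return [grid[i] for i in range(1, n - 2)
            if g[i] < g[i - 1] and g[i] < g[i + 1]]

v0 = 1.0
sigma0 = v0
# beta < 4: a single disordered minimum at p = 1/2.
assert local_minima(sigma0, 3.0, v0) == [0.5]
# beta > 4: two symmetric ordered minima (second order transition).
mins = local_minima(sigma0, 6.0, v0)
assert len(mins) == 2 and abs(mins[0] + mins[1] - 1.0) < 1e-2

# Just below the critical temperature the order parameter follows the
# mean-field scaling |p - 1/2| ~ (sqrt(3)/4) (beta - 4)^(1/2).
beta = 4.1
n = 40000
grid = [i / n for i in range(1, n)]
p_star = min(grid, key=lambda p: g_inf(p, sigma0, beta, v0))
delta_num = abs(p_star - 0.5)
delta_th = (math.sqrt(3) / 4) * math.sqrt(beta - 4.0)
assert abs(delta_num - delta_th) / delta_th < 0.05
```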
The average mechanical behavior of the system is now controlled by the global minimizer p*(σ, β) of the marginal free energy g_∞; for instance, g(σ, β) = g_∞(p*, σ, β) and ⟨p⟩(σ, β) = p*(σ, β). The average elongation ⟨y⟩(σ, β) = σ - p*(σ, β) is illustrated in Fig. 16(c) for the case β = 5. The jump at σ = σ_0 corresponds to the switch of the global minimum from C to A, see Fig. 16[(a) and (c)]. In Fig. 16[(d)-(f)] we also illustrate the typical stochastic behavior of the order parameter p at fixed tension σ = σ_0 (ensuring that ⟨p⟩ = 1/2). Observe that in the ordered (low temperature, ferromagnetic) phase [see (f)], the thermal equilibrium is realized through the formation of a temporal microstructure, a domain structure in time, which implies intermittent jumps between ordered metastable (long living) configurations. Such transitions are systematically observed during the unzipping of biomolecules, see, for instance, Ref. [196]. In Fig. 17 we show the equilibrium susceptibility χ(σ, β) = -∂⟨p⟩(σ, β)/∂σ = Nβ⟨[p - ⟨p⟩(σ, β)]^2⟩ ≥ 0, which diverges at β = β_c and σ = σ_0. We can also compute the equilibrium stiffness κ(σ, β)^-1 = (1/N) ∂⟨y⟩(σ, β)/∂σ = β⟨[y - ⟨y⟩(σ, β)]^2⟩ ≥ 0, where ⟨y⟩(σ, β) = σ - ⟨p⟩(σ, β), and see that it is always positive in the soft device. This is another manifestation of the fact that the soft and hard device ensembles are not equivalent. At the critical point (β = 4, σ = σ_0), the marginal energy of the system has a degenerate minimum corresponding to the configuration with p = 1/2; see Fig. 15[(c), dashed line]. Near the critical point we have the asymptotics p ∼ 1/2 ± (√3/4)[β - 4]^{1/2} for σ = σ_0, and p ∼ 1/2 - sign[σ - σ_0][(3/4)|σ - σ_0|]^{1/3} for β = 4,
showing that the critical exponents take the classical mean field values (see Balian, From Microphysics to Macrophysics: Methods and Applications of Statistical Physics). Similarly we obtain ⟨y⟩ - y_0 = ±(√3/4)[β - 4]^{1/2} for σ = σ_0, and ⟨y⟩ - y_0 = sign[σ - σ_0][(3/4)|σ - σ_0|]^{1/3} for β = 4. In critical conditions, where the stiffness vanishes, the system becomes anomalously reactive; for instance, being exposed to a small positive (negative) force increment it instantaneously unfolds (folds). In Fig. 18 we summarize the mechanical behavior of the system in hard [(a) and (b)] and soft devices [(c) and (d)]. In a hard device, the system develops negative stiffness below the critical temperature while remaining de-synchronized and fluctuating at a fast time scale. Instead, in the soft device the stiffness is always non-negative. However, below the critical temperature the tension-elongation relation develops a plateau which corresponds to cooperative (macroscopic) fluctuations between two highly synchronized metastable states. In the soft device ensemble, the pseudo-critical point of the hard device ensemble becomes a real critical point with diverging susceptibility and classical mean field critical exponents. For the detailed study of the thermal properties in soft and hard devices, see Refs. [121; 180]. Mixed device. Consider now the general case when λ_b is finite. In the muscle context this parameter can be interpreted as a lump description of myofilament elasticity [89; 91; 95], in cell adhesion it can be identified either with the stiffness of the extracellular medium or with the stiffness of the intracellular stress fiber [118; 197; 198], and for protein
folding in optical tweezers, it can be viewed as the elasticity of the optical trap or of the DNA handles [186; 188; 199-204]. The presence of an additional series spring introduces a new macroscopic degree of freedom because the elongation of the bundle of parallel cross-bridges y can now differ from the total elongation of the system z, see Fig. 9. At zero temperature, the metastable states are again fully characterized by the order parameter p, representing the fraction of cross-bridges in the folded (post-power-stroke) configuration. At equilibrium, the elongation of the bundle is given by ŷ = (λ_b z - p)/(1 + λ_b), so that the energy of a metastable state is now v̂_b(p; z) = v(p; ŷ) + (λ_b/2)(z - ŷ)^2, which can be rewritten as v̂_b(p; z) = λ_b/[2(1 + λ_b)] [p(z + 1)^2 + (1 - p)z^2] + (1 - p)v_0 + p(1 - p)/[2(1 + λ_b)]. (2.15) Notice the presence of the coupling term ∼ p(1 - p), characterizing the mean field interaction between cross-bridges. One can see that this term vanishes in the limit λ_b → ∞. Again, when λ_b → 0 and z - y → ∞, while λ_b(z - y) → σ, we recover the soft device potential modulo an irrelevant constant. The global minimum of the energy (2.15) corresponds to one of the fully synchronized configurations (p = 0 or p = 1). These two configurations are separated, at the transition point z = z_0 = (1 + λ_b)v_0/λ_b - 1/2, by an energy barrier whose height now depends on the value of λ_b, see Ref. [179] for more details. At finite temperature, the marginal free energy at fixed p and y can be written in the form f_m(p, y; z, β) = f(p; y, β) + (λ_b/2)(z - y)^2, (2.16) where f is the marginal free energy for the system in a hard device (at fixed y). Averaging over y brings about the interaction among cross-bridges exactly as in the case of a soft device.
The only difference with the soft device case is that the interaction strength now depends on the new dimensionless parameter λ_b. The convexity properties of the energy (2.16) can be studied by computing the Hessian, H(p, y; z, β) = ((1 + λ_b, 1), (1, [βp(1 - p)]^-1)), (2.17) which is positive definite if β < β_c*, where the critical temperature is now β_c* = 4(1 + λ_b). The latter relation also defines the critical line λ_b = λ_c(β) = β/4 - 1, separating the disordered phase (λ_b > λ_c), where the marginal free energy has a single minimum, from the ordered phase (λ_b < λ_c), where the system can be bi-stable. As in the soft device case, elimination of the internal variable p allows one to write the partition function in a mixed device as Z = ∫ exp{-βN[f_m(y; z, β)]} dy. Here f_m denotes the marginal free energy at fixed y and z, f_m(y; z, β) = f(y; β) + (λ_b/2)(z - y)^2, (2.18) and f is the equilibrium free energy at fixed y, given by Eq. (2.9). We can now obtain the equilibrium free energy f̃_m = -(1/β) log[Z(z, β)] and compute its successive derivatives. In particular, the tension-elongation relation ⟨σ⟩(⟨y⟩) and the equilibrium stiffness κ_m can be written in the form ⟨σ⟩ = λ_b[z - ⟨y⟩], κ_m = λ_b{1 - βNλ_b[⟨y^2⟩ - ⟨y⟩^2]}. As in the soft device case, we have in the thermodynamic limit ⟨y⟩(z, β) = y*(z, β), where y* is the global minimum of the marginal free energy (2.18). We can also write κ_m = κ(y*, β)λ_b/[κ(y*, β) + λ_b], where κ is the thermal equilibrium stiffness of the system at fixed y, see Eq. (2.11). Since λ_b > 0, we find that the stiffness of the system becomes negative when κ becomes negative, which takes place at low temperatures when β > 4. Our results in the mixed device case are summarized in Fig. 19(a) where we show the phase diagram of the system in the (λ_b, β^-1) plane. The hard and soft device limits, which we have already analyzed, correspond to the points (a)-(d).
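Both mixed-device formulas can be sanity-checked numerically: Eq. (2.15) against a direct elimination of y, and the critical line β_c* = 4(1 + λ_b) against the positive definiteness of the Hessian (2.17). A sketch with arbitrary parameter values:

```python
def v_mix(p, y, v0):
    """Hard-device mixture energy, Eq. (2.4)."""
    return 0.5 * p * (y + 1) ** 2 + (1 - p) * (0.5 * y ** 2 + v0)

def vb(p, z, v0, lam_b):
    """Mixed-device metastable energy, Eq. (2.15)."""
    return (lam_b / (2 * (1 + lam_b)) * (p * (z + 1) ** 2 + (1 - p) * z ** 2)
            + (1 - p) * v0 + p * (1 - p) / (2 * (1 + lam_b)))

def vb_relaxed(p, z, v0, lam_b):
    """Same energy from eliminating y: y_hat = (lam_b z - p)/(1 + lam_b)."""
    y_hat = (lam_b * z - p) / (1 + lam_b)
    return v_mix(p, y_hat, v0) + 0.5 * lam_b * (z - y_hat) ** 2

def hessian_pd(p, beta, lam_b):
    """Positive definiteness of the Hessian (2.17), via leading minors."""
    a, d = 1.0 + lam_b, 1.0 / (beta * p * (1.0 - p))
    return a > 0 and a * d - 1.0 > 0

v0, lam_b, p, z = 1.0, 0.7, 0.3, 0.4
# Eq. (2.15) agrees with the direct minimization over y ...
assert abs(vb(p, z, v0, lam_b) - vb_relaxed(p, z, v0, lam_b)) < 1e-12
# ... and reduces to the hard-device energy (2.4) for a rigid backbone.
assert abs(vb(p, z, v0, 1e8) - v_mix(p, z, v0)) < 1e-6
# The free energy is convex for every p exactly when beta < 4 (1 + lam_b).
beta_c = 4.0 * (1.0 + lam_b)
ps = [i / 1000 for i in range(1, 1000)]
assert all(hessian_pd(q, beta_c - 0.1, lam_b) for q in ps)
assert not all(hessian_pd(q, beta_c + 0.1, lam_b) for q in ps)
```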
At finite λ_b there are three "phases": (i) In phase I, corresponding to β < 4, the marginal free energy (2.18) is convex and the equilibrium tension-elongation relation is monotone; (ii) In phase II [4 < β < 4(1 + λ_b), see (e)] the free energy is still convex but the tension-elongation relation becomes non-monotone; (iii) In phase III [β > 4(1 + λ_b)], the marginal free energy (2.18) is non-convex and the equilibrium response contains a jump, see (f) in the right panel of Fig. 19. Figure 19. In the mixed device, the system exhibits three phases, labeled I, II and III in the left panel. The right panels show the typical dependence of the energy and the force on the loading parameter z and on the average internal elongation ⟨y⟩ in the subcritical (phase II, e) and in the supercritical (phase III, f) regimes. In phase I, the response of the system is monotone; it is analogous to the behavior obtained in a hard device for β < 4, see Fig. 18(b). In phase II, the system exhibits negative stiffness but no collective switching, except for the soft device limit λ_b → 0, see Fig. 18(d). In phase III (supercritical regime), the system shows an interval of macroscopic bistability (see dotted lines) leading to abrupt transitions in the equilibrium response (solid line). Kinetics. Consider bi-stable elements described by microscopic variables x_i whose dynamics can be represented as a series of jumps between the two states. The probabilities of the direct and reverse transitions in the time interval dt can be written as P(x_i(t + dt) = -1 | x_i(t) = 0) = k_+(y, β)dt, P(x_i(t + dt) = 0 | x_i(t) = -1) = k_-(y, β)dt. (2.19) Here k_+(y, β) [resp. k_-(y, β)] is the transition rate for the jump from the unfolded state (resp. folded state) to the folded state (resp. unfolded state). The presence of the
jumps is a shortcoming of the hard spin model of Huxley and Simmons [74]; in the model with non-degenerate elastic bistable elements (soft spins) they are replaced by a continuous Langevin dynamics [99; 205], see Section 2.2.4. To compute the transition rates k_±(y, β) without knowing the energy landscape separating the two spin states, we first follow [74], who simply combined the elastic energy of the linear spring with the idea of a flat microscopic energy landscape between the wells, see Fig. 20(a,b) for the notations. Assuming further that the resulting barriers E_0 and E_1 = E_0 + v_0 are large compared to k_b T, we can use the Kramers approximation and write the transition rates in the form k_+(y, β) = k_- exp[-β(y - y_0)], k_-(y, β) = exp[-βE_1] = const, (2.20) where k_- determines the timescale of the dynamic response: τ = 1/k_- = exp[βE_1]. The latter is fully controlled by a single parameter E_1 whose value was chosen by HS to match the observations. Note that Eq. (2.20) is only valid if y > -1/2 [see Fig. 20(a)], which ensures that the energy barrier for the transition from pre- to post-power stroke is actually affected by the load. In the range y < -1/2, omitted by HS, the forward rate becomes constant, see Fig. 20(a). The fact that only one transition rate in the HS approach depends on the load makes the kinetic model non-symmetric: the overall equilibration rate between the two states, r = k_+ + k_-, monotonically decreases with stretching. For a long time this seemed to be in accordance with experiments [74; 85; 87; 206]; however, a recent reinterpretation of the experimental results in Ref. [207] suggested that the recovery rate may eventually increase with the amplitude of stretching.
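A sketch of the resulting single-element kinetics (parameter values arbitrary; the first-order relaxation equation for ⟨p⟩ integrated here is the HS mean-field kinetics discussed below):

```python
import math

def hs_rates(y, beta, v0, E1):
    """HS transition rates, Eq. (2.20); only k+ is load dependent.
    Valid for y > -1/2, which all test values below respect."""
    k_minus = math.exp(-beta * E1)
    k_plus = k_minus * math.exp(-beta * (y - (v0 - 0.5)))
    return k_plus, k_minus

def relax(y, beta, v0, E1, p0=0.0, dt=1e-3, steps=20000):
    """Euler integration of dp/dt = k+ (1 - p) - k- p for a single element."""
    k_p, k_m = hs_rates(y, beta, v0, E1)
    p = p0
    for _ in range(steps):
        p += dt * (k_p * (1.0 - p) - k_m * p)
    return p, k_p / (k_p + k_m)

beta, v0, E1 = 2.0, 1.0, 1.0
# The equilibration rate r = k+ + k- decreases monotonically with stretching,
# since only the forward rate carries the load dependence.
r = [sum(hs_rates(y, beta, v0, E1)) for y in (0.0, 0.5, 1.0)]
assert r[0] > r[1] > r[2]
# The first-order kinetics relaxes to the single-element Boltzmann weight
# of the folded state, p_eq = 1/(1 + exp[beta (y - y0)]).
y = 0.2
p_end, p_eq = relax(y, beta, v0, E1)
assert abs(p_end - p_eq) < 1e-3
assert abs(p_eq - 1.0 / (1.0 + math.exp(beta * (y - (v0 - 0.5))))) < 1e-12
```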
This finding can be made compatible with the HS framework if we assume that both energy barriers, for the power stroke and for the reverse power stroke, are load dependent, see Fig. 21 and Ref. [180] for more details. This turns out to be a built-in property of the soft spin model considered in Section 2.2. In the hard spin model with N elements, a single stochastic trajectory can be viewed as a random walk characterized by the transition probabilities P[p_{t+dt} = p_t + 1/N] = ϕ_+(p_t, t)dt, P[p_{t+dt} = p_t - 1/N] = ϕ_-(p_t, t)dt, P[p_{t+dt} = p_t] = 1 - [ϕ_+(p_t, t) + ϕ_-(p_t, t)]dt, (2.21) where the rate ϕ_+ (resp. ϕ_-) describes the probability for one of the unfolded (resp. folded) elements to fold (resp. unfold) within the time window dt. While in the case of a hard device we can simply write ϕ_+(t) = N(1 - p_t)k_+(y, β) and ϕ_-(t) = N p_t k_-, in both soft and mixed devices y becomes an internal variable whose evolution depends on p, making the corresponding dynamics non-linear. The isothermal stochastic dynamics of the system specified by the transition rates (2.20) is most naturally described in terms of the probability density ρ(p, t). It satisfies the master equation ∂ρ(p, t)/∂t = ϕ_+(1 - p + 1/N, t) ρ(p - 1/N, t) + ϕ_-(p + 1/N, t) ρ(p + 1/N, t) - [ϕ_+(1 - p, t) + ϕ_-(p, t)] ρ(p, t), (2.22) where ϕ_+ and ϕ_- are the transition rates introduced in Eq. (2.21). This equation generalizes the HS mean-field kinetic equation dealing with the evolution of the first moment ⟨p⟩(t) = ∑ p ρ(p, t), namely ∂⟨p⟩(t)/∂t = ⟨ϕ_+(1 - p, t)⟩ - ⟨ϕ_-(p, t)⟩. (2.23) In the case of a hard device, studied by HS, the linear dependence of ϕ_± on p allows one to compute the averages on the right hand side of (2.23) explicitly. The result is the first order reaction equation of HS, ∂⟨p⟩/∂t = k_+(y)(1 - ⟨p⟩) - k_-(y)⟨p⟩. (2.24) In this case the probability distribution remains binomial at all times, ρ(p, t) = C(N, Np) [⟨p(t)⟩]^{Np} [1 - ⟨p(t)⟩]^{N-Np}.
(2.25) The entire distribution is then enslaved to the dynamics of the order parameter ⟨p⟩(t) captured by the original HS model. It is then straightforward to show that in the long time limit the distribution (2.25) converges to the Boltzmann distribution (2.8). In the soft and mixed devices the cross-bridges interact and the kinetic picture is more complex. To simplify the setting, we assume that the relaxation time associated with the internal variable y is negligible compared to the other time scales. This implies that the variable y can be considered as equilibrated, meaning in turn that y = ŷ(p, σ) = σ - p in a soft device and y = ŷ(p, z) = (1 + λ_b)^-1 (λ_b z - p) in a mixed device. Below, we briefly discuss the soft device case, which already captures the effect of the mechanical coupling in the kinetics of the system. Details of this analysis can be found in Ref. [180]. To characterize the transition rates in a cluster of N > 1 elements under fixed external force, we introduce the energy w(p, p*) corresponding to a configuration where a fraction p of the elements is folded (x_i = -1) and a fraction p* is at the transition state (x_i = ℓ), see Fig. 21. The energy landscape separating two configurations p and q can be represented in terms of the "reaction coordinate" ξ = p - x(q - p), see Fig. 22. The transition rates between neighboring metastable states can be computed explicitly using our generalized HS model (see Fig. 21), τϕ_+(p; σ, β) = N(1 - p) exp[-β∆w_+(p; σ)], τϕ_-(p; σ, β) = N p exp[-β∆w_-(p; σ)], (2.26) where ∆w_± are the energy barriers separating neighboring states, ∆w_+(p; σ) = -ℓ(σ - p) - σ_0 + (1 + 3/N) ℓ^2/2, ∆w_-(p; σ) = -(ℓ + 1)(σ - p) + (1 + 3/N) ℓ^2/2 - (1 + N + 2ℓ)/(2N). In (2.26), 1/τ = α exp[-βv*], with α = const, determines the overall timescale of the response. The mechanical coupling appearing in the exponent of (2.26) makes the dynamics nonlinear.
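The mean first-passage time expression used below [Eq. (2.27)] is the standard result for one-dimensional birth-death chains. As a self-contained consistency check (with generic, arbitrarily chosen rates, not the model rates (2.26)), the summation formula can be compared with the recursive solution of the first-passage equations:

```python
def mfpt_formula(lam, mu):
    """Mean first-passage time 0 -> M for a birth-death chain, in the form
    used in Eq. (2.27): T = sum_k [pi(k) lam(k)]^-1 sum_{i<=k} pi(i),
    with pi the (unnormalized) stationary measure.
    lam[k]: rate k -> k+1; mu[k]: rate k+1 -> k."""
    M = len(lam)
    pi = [1.0]
    for k in range(1, M + 1):
        pi.append(pi[-1] * lam[k - 1] / mu[k - 1])   # detailed balance
    return sum(sum(pi[: k + 1]) / (pi[k] * lam[k]) for k in range(M))

def mfpt_linear(lam, mu):
    """Same quantity from the linear first-passage equations, using the
    recursion for h_k = tau_k - tau_{k+1}: h_0 = 1/lam_0 and
    h_k = 1/lam_k + (mu_{k-1}/lam_k) h_{k-1}."""
    M = len(lam)
    h = [1.0 / lam[0]]
    for k in range(1, M):
        h.append(1.0 / lam[k] + mu[k - 1] / lam[k] * h[k - 1])
    return sum(h)                                     # tau_0, with tau_M = 0

lam = [2.0, 1.0, 0.5, 0.8]
mu = [1.0, 1.5, 2.0, 0.7]
assert abs(mfpt_formula(lam, mu) - mfpt_linear(lam, mu)) < 1e-9
```

For the symmetric random walk lam = mu = [1.0, 1.0] both routes give the familiar value T = 3.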
To understand the peculiarities of the time dependent response of the parallel bundle of N cross-bridges brought about by the above nonlinearity, it is instructive to first examine the expression for the mean first passage time τ(p → p′) characterizing transitions between two metastable states with Np and Np′ (p < p′) folded elements. Following Ref. [208] (and omitting the dependence on σ and β), we can write τ(p → p′) = ∑_{k=Np}^{Np′} [ρ(k) ϕ_+(k)]^-1 ∑_{i=0}^{k} ρ(i), (2.27) where ρ is the marginal equilibrium distribution at fixed p and ϕ_+ is the forward rate. In the case β > β_c, for the interval of loading [σ_-, σ_+], the marginal free energy g_∞ [see (2.14)] has two minima, which we denote p = p_1 and p = p_0, with p_0 < p_1. The minima are separated by a maximum located at p = p̄. We can distinguish two processes: (i) the intra-basin relaxation, which corresponds to reaching the metastable states (p = p_0 or p = p_1) starting from the top of the energy barrier p̄, and (ii) the inter-basin relaxation, which deals with transitions between macro-states. For the intra-basin relaxation, the first passage time can be computed using Eq. (2.27), see Ref. [180]. The resulting rates φ(p̄ → p_{0,1}) ≡ 1/[τ(p̄ → p_{0,1})] are practically independent of the load and scale with 1/N, see Fig. 23(a). Regarding the transition between the two macrostates, we note that Eq. (2.27) can be simplified if N is sufficiently large. In this case, the sums in Eq. (2.27) can be transformed into integrals, τ(p_0 → p_1) = N^2 ∫_{p_0}^{p_1} [ρ_∞(u) ϕ_+(u)]^-1 [∫_0^u ρ_∞(v) dv] du, (2.28) where ρ_∞ ∼ exp[-βNg_∞] is the marginal distribution in the thermodynamic limit. The inner integral in Eq. (2.28) can be computed using the Laplace method.
Noticing that the function g_∞ has a single dominant minimum on the interval [0, u] (for u > p_0), located at p_0, we can write τ(p_0 → p_1) = [2πN/(β g″_∞(p_0))]^{1/2} ∫_{p_0}^{p_1} [ρ_∞(u) ϕ_+(u)]^-1 ρ_∞(p_0) du. In the remaining integral, the inverse density 1/ρ_∞ is sharply peaked at p = p̄, so again using the Laplace method we obtain τ(p_0 → p_1) = 2π(N/β) ϕ_+(p̄)^-1 [g″_∞(p_0) |g″_∞(p̄)|]^{-1/2} × exp{βN[g_∞(p̄) - g_∞(p_0)]}. (2.29) We see that the first passage time is of the order of exp[βN∆g_∞], see Eq. (2.29), where ∆g_∞ is the height of the energy barrier separating the two metastable states. Since this barrier is extensive, the first passage time grows exponentially with N, which freezes the collective inter-basin dynamics in the thermodynamic limit and generates metastability, see Fig. 23[(b) and (c)] and Ref. [180]. The above analysis can be generalized to the case of a mixed device by replacing the soft device marginal free energy g by its mixed device analog. The kinetic behavior of the system in the general case is illustrated in Fig. 24. The individual trajectories generated by the stochastic dynamics (2.21) are shown for N = 100. The system is subjected to a slow stretching in hard [(a) and (b)], soft [(c) and (d)] and mixed [(e) and (f)] devices. These numerical experiments mimic various loading protocols used for unzipping tests on biological macro-molecules [186; 199; 203; 209]. Observe that individual trajectories at finite N show a succession of jumps corresponding to collective folding-unfolding events. At large temperatures, see Fig. 24[(a), (c) and (e)], the transition between the folded and the unfolded state is smooth and is associated with a continuous drift of a unimodal density distribution, see the inserts in Fig. 24. In the hard device such behavior persists even at low temperatures, see (b), which correlates with the fact that the marginal free energy in this case is always convex. Below the critical temperature [(d) and (f)], the mechanical response becomes hysteretic.
The hysteresis is due to the presence of the macroscopic wells in the marginal free energy, which is also evident from the bimodal distribution of the cross-bridges shown in the inserts. A study of the influence of the loading rate on the mechanical response of the system can be found in Ref. [180]. To illustrate the fast force recovery phenomenon, consider the response of the system to an instantaneous load increment. Comparing the behavior predicted by the mean-field kinetic equation with the full stochastic dynamics, we find that the former fails to reproduce the exact two-scale dynamics at low temperatures, even though the final equilibrium states are captured correctly. The difference between the chemo-mechanical description of HS and the stochastic simulation targeting the full probability distribution is due to the fact that in the equation describing the mean-field kinetics the transition rates are computed based on the average values of the order parameter. At large temperatures, where the distribution is uni-modal, the average values faithfully describe the most probable states and therefore the mean-field kinetic theory captures the timescale of the response adequately; see Fig. 25(a). Instead, at low temperatures, when the distribution is bi-modal, the averaged values correspond to states that are poorly populated; see Fig. 25(b), where ⟨p⟩ = 1/2. The value of the order parameter that actually controls the slow kinetics describes a particular metastable configuration rather than the average state, and therefore the mean-field kinetic equation fails to reproduce the real dynamics; see Fig. 25[(b) and (c)]. Soft spin model. The hard spin model predicts that the slope of the T_1 curve, describing the instantaneous stiffness of the fiber, and the slope of the T_2 curve are equal, which differs from what is observed experimentally, see Fig. 5. The soft spin model [99; 205] was developed to overcome this problem and to provide a purely mechanical continuous description of the phenomenon of fast force recovery.
To this end, the discrete degrees of freedom were replaced by continuous variables x_i; the latter can be interpreted as the projected angles formed by the segment S1 of the myosin head with the actin filament. Most importantly, the introduction of continuous variables has eliminated the necessity of using multiple intermediate configurations for the head domain [67; 68; 86]. The simplest way to account for the bistability in the configuration of the myosin head is to associate a bi-quadratic double-well energy u_SS(x_i) with each variable x_i, see Fig. 26(a); interestingly, a comparison with the reconstructed potentials for unfolding biological macro-molecules shows that a bi-quadratic approximation may be quantitatively adequate [186]. A nondegenerate spinodal region can be easily incorporated into this model; however, in this case we lose the desirable analytical transparency. It is sufficient for our purposes to keep the other ingredients of the hard spin model intact; the original variant of the soft spin model (see Marcucci, A mechanical model of muscle contraction) corresponded to the limit κ_b/(N κ_0) → ∞. In the soft spin model the total energy of the cross-bridge can be written in the form v(x, y) = u_SS(x) + (κ_0/2)(y - x)^2, (2.30) where u_SS(x) = (1/2)κ_1 x^2 + v_0 if x > ℓ, and u_SS(x) = (1/2)κ_2 (x + a)^2 if x ≤ ℓ. (2.31) The parameter ℓ describes the point of intersection of the two parabolas in the interval [-a, 0], and therefore v_0 = (κ_2/2)(ℓ + a)^2 - (κ_1/2)ℓ^2 is the energy difference between the pre-power-stroke and the post-power-stroke configurations. It will be convenient to use normalized parameters to characterize the asymmetry of the energy wells: λ_2 = κ_2/(1 + κ_2) and λ_1 = κ_1/(1 + κ_1).
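A direct encoding of the bi-quadratic potential (2.31), checking the continuity of the two branches at ℓ and the meaning of v_0 (parameter values arbitrary):

```python
def make_u_ss(kappa1, kappa2, a, ell):
    """Bi-quadratic double-well energy, Eq. (2.31); v0 follows from the
    matching of the two parabolas at x = ell."""
    v0 = 0.5 * kappa2 * (ell + a) ** 2 - 0.5 * kappa1 * ell ** 2
    def u_ss(x):
        if x > ell:
            return 0.5 * kappa1 * x ** 2 + v0   # pre-power-stroke branch
        return 0.5 * kappa2 * (x + a) ** 2      # post-power-stroke branch
    return u_ss, v0

kappa1, kappa2, a, ell = 2.0, 1.0, 1.0, -0.3
u_ss, v0 = make_u_ss(kappa1, kappa2, a, ell)
# The two parabolic branches match continuously at x = ell ...
eps = 1e-8
assert abs(u_ss(ell + eps) - u_ss(ell - eps)) < 1e-6
# ... and v0 is the energy offset between the wells at x = 0 and x = -a.
assert abs(u_ss(0.0) - u_ss(-a) - v0) < 1e-12
```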
The dimensionless total internal energy per element of a cluster now reads v(x, y; z) = (1/N) ∑ i=1..N [ u SS (x i ) + (1/2)(y -x i ) 2 + (λ b /2)(z -y) 2 ], (2.32) where λ b = κ b /(N κ 0 ). Here z is the control parameter. In the soft device case, the energy takes the form w(x, y; σ) = (1/N) ∑ i=1..N [ u SS (x i ) + (1/2)(y -x i ) 2 -σy ], (2.33) where σ is the applied tension per cross-bridge, see Ref. [179] for the details.

Zero temperature behavior. Introducing the fraction of cross-bridges in the pre-power-stroke state, p = (1/N) ∑ i α i , where α i = 1 if x i > ℓ and 0 otherwise, we find that the global minimum of the energy again corresponds to one of the homogeneous states p = 0, 1 with a sharp transition at z = z 0 . We can also take advantage of the fact that the soft spin model deals with continuous variables x i and define a continuous reaction path connecting metastable states with different numbers of folded units. Each folding event is characterized by a micro energy barrier that can now be computed explicitly. The typical structure of the resulting energy landscape is illustrated in Fig. 27 for different values of the coupling parameter λ b , see Ref. [179] for the details. In Fig. 28 we illustrate the zero temperature behavior of the soft-spin model with a realistic set of parameters, see Tab. 1 below.

Finite temperature behavior. When z is the control parameter (mixed device), the equilibrium probability distribution for the remaining mechanical degrees of freedom can be written in the form ρ(x, y; z, β) = Z -1 (z, β) exp [-βNv (x, y; z)], where β = (κ 0 a 2 )/(k b T) and Z(z, β) = ∫ exp [-βNv (x, y; z)] dx dy. In the soft device ensemble, z becomes a variable and the equilibrium distribution takes the form ρ(x, y, z; σ, β) = Z -1 (σ, β) exp [-βNw (x, y, z; σ)], (2.34) with Z(σ, β) = ∫ exp [-βNw (x, y, z, σ)] dx dy dz. When z is fixed, the internal state of the system can again be characterized by the two mesoscopic parameters y and p. By integrating (2.34) over x and y we can define the marginal density ρ(p; z, β) = Z -1 exp [-βN f (p; z, β)].
Here f is the marginal free energy at fixed (p, z), which is illustrated in Fig. 29. As we see, the system undergoes an order-disorder phase transition which is controlled by the temperature and by the elasticity of the backbone. If the double well potential is symmetric (λ 1 = λ 2 ), this transition is of second order as in the hard spin model. Typical bifurcation diagrams for the case of slightly nonsymmetric energy wells are shown in Fig. 30. The main feature of the model without symmetry is that the second order phase transition becomes a first order phase transition: in the transition region the marginal free energy has three critical points, two corresponding to metastable states and one to an unstable state. The equilibrium response can be obtained by computing the partition function Z numerically. In the thermodynamic limit, we can employ the same methods as in the previous section and identify the equilibrium mechanical properties of the system with the global minimum of the marginal free energy f . In Fig. 31[(c)-(e)], we illustrate the equilibrium mechanical response of the system; similar phase diagrams have also been obtained for other systems with long-range interactions [211]. While the soft spin model is analytically much less transparent than the hard spin model, we can still show analytically that the system develops negative stiffness at sufficiently low temperatures. Indeed, we can write f ′′ = ⟨σ⟩ ′ = λ b [ 1 -βNλ b ⟨ (y -⟨y⟩) 2 ⟩], where f is the equilibrium free energy of the system in a mixed device. This expression is sign indefinite and, by the same reasoning as in the hard spin case, one can show that the critical line separating Phase I and Phase II is represented in Fig. 31 by a vertical line T = T c . In Phase I (T > T c ) the equilibrium free energy is convex and the resulting tension-elongation relation is monotone. In Phase II (T < T c ) the equilibrium free energy is non-convex and the tension-elongation relation exhibits an interval with negative stiffness.
In phase III the energy is nonconvex within a finite interval around z = z 0 , see the dotted line in Fig. 31(e). As a result the system has to oscillate between two metastable states to remain in the global minimum of the free energy [solid line in Fig. 31(e)]. The ensuing equilibrium tension-elongation curve is characterized by a jump located at z = z 0 . Observe that the critical line separating Phase II and Phase III in Fig. 31(b) represents the minimum number of cross-bridges necessary to obtain a cooperative behavior at a given value of the temperature. We see that for temperatures around 300 K, the critical value of N is about 100, which corresponds approximately to the number of cross-bridges involved in isometric contraction in each half-sarcomere, see Section 2.2.3. This observation suggests that muscle fibers may be tuned to work close to the critical state [99]. A definitive statement of this type, however, cannot be made at this point in view of the considerable error bars in the data presented in Table 1. In a soft device, a similar analysis can be performed in terms of the marginal Gibbs free energy g(p; σ, β). A comparison of the free energies of a symmetric system in the hard and the soft device ensembles is presented in Fig. 32, where the parameters are such that the system in the hard device is in phase III, see Fig. 31. We observe that both free energies are bi-stable in this range of parameters, however the energy barrier separating the two wells in the hard device case is about three times smaller than in the case of a soft device. Since the macroscopic energy barrier separating the two states is proportional to N, the characteristic time of a transition increases exponentially with N as in the hard spin model, see Section 2.1.3. Therefore the kinetics of the power-stroke will be exponentially slower in the soft device than in the hard device, as observed in experiment; see more about this in the next section.
Note also that the macroscopic oscillations are more coherent in a soft device than in a hard device. By differentiating the equilibrium Gibbs free energy g(σ, β) = -1/(βN) log [Z(σ, β)] with respect to σ, we obtain the tension-elongation relation, which in a soft device is always monotone since g′′ = - [ 1 + βN ⟨ (z -⟨z⟩) 2 ⟩] < 0. This shows once again that soft and hard device ensembles are non-equivalent and, in particular, that only the system in a hard device can exhibit negative susceptibility. In Fig. 33, we illustrate the behavior of the equilibrium free energies f and g in the thermodynamic limit [(a) and (c)] together with the corresponding tension-elongation relations [(b) and (d)], see Ref. [212] for the details. The tension and elongation are normalized by their values at the transition point where ⟨p⟩ = 1/2, while the value of β is taken from experiments (solid line). The bi-stability (metastability) takes place in the gray region and we see that this region is much wider in the soft device than in the hard device, which corroborates that the energy barrier is higher in a soft device.

Matching experiments. The next step is to match the model with experimental data. The difficulty of the parameter identification lies in the fact that the experimental results vary depending on the species, and here we limit our analysis to the data obtained from Rana temporaria [73; 80; 87; 89]. Typical values of the parameters of the non-dimensional model obtained from these data are listed in Table 1. The first parameter a is obtained from structural analysis of myosin II [73; 96-98]. It has been shown that its tertiary structure can be found in two conformations forming an angle of ∼70°. This corresponds to an axial displacement of the lever arm end of ∼10 nm. We therefore fix the characteristic length in our model at a = (10 ± 1) nm. The absolute temperature T is set to 277.15 K, which corresponds to 4 °C.
This is the temperature at which most experiments on frog muscles are performed [206]. Several experimental studies have aimed at measuring the stiffness of the myosin head and of the myofilaments (our backbone). One technique consists in applying rapid (100 µs) length steps to a tetanized fiber to obtain its overall stiffness κ tot , which corresponds to the elastic backbone in series with N cross-bridges: κ tot = (N κ 0 κ b )/(N κ 0 + κ b ). The stiffness associated with the double well potential (κ 1,2 ) is not included into this formula because the time of the purely elastic response is shorter than the time of the conformational change. This implies an assumption that the conformational degree of freedom is "frozen" during the purely elastic response. Such an assumption is supported by experiments reported in Ref. [213], where shortening steps were applied at different stages of the fast force recovery, that is, during the power-stroke. The results show that the overall stiffness is the same in the recovery process and in the isometric conditions. If we change the chemical environment inside the fiber by removing the cell membrane ("skinning"), it is possible to perform the length steps under different calcium concentrations. We recall that calcium ions bind to the troponin-tropomyosin complex to allow the attachment of myosin heads to actin. Therefore, by changing the calcium environment, one can change the number of attached motors (N) and thus their contribution to the total stiffness, while the contribution of the filaments remains the same [87; 89; 214]. Another solution is to apply rapid oscillations during the activation phase when force rises [100; 215]. These different techniques give κ b = (150 ± 10) pN nm⁻¹, a value which is compatible with independent X-ray measurements [76; 91; 93; 100; 167; 168; 181; 216].
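The stiffness composition used in this estimate can be sketched in a few lines. The values below are the ones quoted in the text (with κ 0 taken from the rigor measurements discussed next, and N ≈ 100 attached heads); the helper function is an illustrative assumption, not part of the original analysis:

```python
# Numeric sketch of kappa_tot = (N kappa0 kappa_b)/(N kappa0 + kappa_b):
# N parallel cross-bridges in series with the backbone.

def series(k1, k2):
    """Equivalent stiffness of two springs connected in series."""
    return k1 * k2 / (k1 + k2)

N = 100                 # attached heads per half-sarcomere in isometric tetanus
kappa0 = 2.7            # pN/nm, single cross-bridge stiffness (rigor measurements)
kappa_b = 150.0         # pN/nm, backbone (filament) stiffness
kappa_tot = series(N * kappa0, kappa_b)   # overall half-sarcomere stiffness, pN/nm
```

The series combination is dominated by the softer element, so κ tot comes out below both N κ 0 and κ b , consistent with the filaments contributing a sizable share of the measured compliance.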
To determine the stiffness of a single element, elastic measurements have been performed on fibers in rigor, where all the 294 cross-bridges of a half-sarcomere are attached, see Ref. [89]. Under the assumption that the filament elasticity is the same in rigor and in the state of tetanus, one can deduce the stiffness of a single cross-bridge. The value extracted from experiment is κ 0 = (2.7 ± 0.9) pN nm⁻¹. As we have seen, the phase diagram shown in Fig. 31(b) suggests a way to understand why N ≈ 100. Larger values of N are beneficial from the perspective of the total force developed by the system. However, reaching deep inside phase III means a highly coherent response, which gets progressively more sluggish as N increases. In this sense being around the critical point would be a compromise between a high force and a high responsiveness. It follows from the developed theory that for the normal temperature the corresponding value of N would be exactly around 100; see Ref. [217] for an attempt at a similar evolutionary justification of the size of the titin molecule. There are, of course, other functional advantages of near-criticality associated, for instance, with diverging correlation length and the possibility of fast coherent response. At the end of the second phase of the fast force recovery (see Section 1.2.2), the system reaches an equilibrium state characterized by the tension T 2 in a hard device or by the shortening L 2 in a soft device. The values of these parameters are naturally linked with the equilibrium tension ⟨σ⟩ in a hard device and the equilibrium length ⟨z⟩ in a soft device. In particular, the theory predicts that in the large deformation (shortening or stretching) regimes, the tension-elongation relation must be linear, see Fig. 33. The linear stiffness in these regimes corresponds to the series arrangement of N elastic elements, each one with stiffness equal to either κ 1 or κ 2 , and with a series spring characterized by the stiffness κ b .
Using the classical dimensional notations, (T, L) instead of the non-dimensional (σ, z), the tension-elongation relation at large shortening takes the form T 2 (L) = [κ s κ b /(κ s + κ b )] (L + a), where κ s = κ 0 κ 2 /(κ 0 + κ 2 ). In experiment, the tension T 2 drops to zero when a step L 2 ≃ -14 nm hs⁻¹ (nanometer per half-sarcomere) is applied to the initial configuration L 0 . Therefore L 0 = -a -L 2 . Since a = 11 nm, we obtain L 0 = 3.2 nm. Using a linear fit of the experimental curve shown in Fig. 5 (shortening) we finally obtain κ 2 ≃ 1 pN nm⁻¹. The value of κ 1 is more difficult to determine since there are only few papers dealing with stretching [94; 218]. Based on the few available measurements, we conclude that the stiffness in stretching is about 1.5 times larger than in shortening, which gives κ 1 ≃ 3.6 pN nm⁻¹. A recent analysis of the fast force recovery confirms this estimate [207]. The last parameter to determine is the intrinsic bias of the double well potential, v 0 , which controls the active tension in the isometric state. The tetanic tension of a single sarcomere in physiological conditions is of the order of 500 pN [80; 100]. If we adjust v 0 to ensure that the equilibrium tension matches this value, we obtain v 0 ≃ 50 zJ. This energetic bias can also be interpreted as the maximum amount of mechanical work that the cross-bridge can produce during one stroke. Since the amount of metabolic energy resulting from the hydrolysis of one ATP molecule is of the order of 100 zJ, we obtain a maximum efficiency around 50 %, which agrees with the value currently favoured in the literature [18; 219].

Kinetics. After the values of the nondimensional parameters are identified, one can simulate numerically the kinetics of fast force recovery by exposing the mechanical system to a Langevin thermostat. For simplicity, we assume that the macroscopic variables y and z are fast and are always mechanically equilibrated.
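The arithmetic behind this calibration can be checked in a few lines. All values are the ones quoted above (with the step amplitude L 2 rounded to -14 nm); this is a sanity check, not a fit:

```python
# Back-of-the-envelope check of the calibration described in the text.
kappa0, kappa2 = 2.7, 1.0                        # pN/nm
kappa_s = kappa0 * kappa2 / (kappa0 + kappa2)    # series stiffness entering T2(L)

a, L2 = 11.0, -14.0                              # nm; step at which T2 vanishes
L0 = -a - L2                                     # isometric pre-strain, a few nm

v0, e_atp = 50.0, 100.0                          # zJ; well bias and ATP free energy
efficiency = v0 / e_atp                          # maximum efficiency of one stroke
```

With the rounded L 2 the pre-strain comes out at 3 nm, close to the 3.2 nm quoted in the text, and the efficiency estimate reproduces the 50 % figure.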
Such a quasi-adiabatic approximation is not essential, but it allows us to operate with a single relaxation time-scale associated with the microscopic variables x i . Denoting by η the corresponding drag coefficient, we construct the characteristic timescale τ = η/κ, which will be adjusted to fit the overall rate of fast force recovery. The response of the internal variables x i is governed by the non-dimensional system dx i = b(x i )dt + √(2β⁻¹) dB i , where the drift is b(x, z) = -u ′ SS (x i ) + (1 + λ b )⁻¹ (λ b z + (1/N) ∑ x i ) -x i in a hard device and b(x, σ) = -u ′ SS (x i ) + σ + N⁻¹ ∑ x i -x i in a soft device. In both cases the diffusion term dB i represents a standard Wiener process. In Fig. 34, we illustrate the results of stochastic simulations imitating fast force recovery, using the same notations as in actual experiments. The system, initially in thermal equilibrium at fixed L 0 (or T 0 ), was perturbed by applying fast (∼100 µs) length (load) steps with different amplitudes. Typical ensemble-averaged trajectories are shown in Fig. 34[(a) and (b)] in the cases of hard and soft device, respectively. In a soft device (b) the system was not able to reach equilibrium within the realistic time scale when the applied load was sufficiently close to T 0 , see, for instance, the curve T = 0.9 T 0 in Fig. 34(b), where the expected equilibrium value is L 2 = -5 nm hs⁻¹. Instead, it remained trapped in a quasistationary (glassy) state because of the high energy barrier that has to be crossed in the process of the collective power-stroke. The implied kinetic trapping, which fits the pattern of two-stage dynamics exhibited by systems with long-range interactions [211; 220; 221], may explain the failure to reach equilibrium in experiments reported in Refs. [92; 161; 222].
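A minimal Euler-Maruyama sketch of the hard-device dynamics written above is given below. The parameter values (symmetric well, N = 32, β = 20, a = 1) are illustrative assumptions, not the calibrated ones of Table 1, and the explicit integration scheme is the simplest possible choice:

```python
import numpy as np

# Euler-Maruyama integration of dx_i = b(x_i) dt + sqrt(2/beta) dB_i with the
# hard-device drift given in the text (y adiabatically eliminated). Time is
# measured in units of tau; all values are illustrative.

rng = np.random.default_rng(0)

kappa1, kappa2, ell = 2.0, 2.0, -0.5   # symmetric double well (v0 = 0), a = 1
lam_b = 1.0                            # backbone coupling kappa_b/(N kappa0)
N, beta = 32, 20.0                     # number of cross-bridges, inverse temperature
dt, n_steps = 1e-3, 20000

def du_ss(x):
    """Derivative of the bi-quadratic double-well potential u_SS."""
    return np.where(x > ell, kappa1 * x, kappa2 * (x + 1.0))

def drift(x, z):
    """Hard-device drift b(x, z) from the text."""
    return -du_ss(x) + (lam_b * z + x.mean()) / (1.0 + lam_b) - x

def simulate(z, x0):
    x = x0.copy()
    amp = np.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        x += drift(x, z) * dt + amp * rng.standard_normal(N)
    return x

# imitate a large shortening step: all heads start in the pre-power-stroke well
z = -1.5
x = simulate(z, np.zeros(N))
p_post = float(np.mean(x <= ell))              # fraction that performed the stroke
y = (lam_b * z + x.mean()) / (1.0 + lam_b)     # equilibrated backbone position
sigma = lam_b * (z - y)                        # tension per cross-bridge
```

For this large shortening step the ensemble should collectively perform the power-stroke (p_post ≈ 1), with the tension settling on the post-power-stroke branch; reproducing the trapping scenario of the soft device would require replacing the drift by its soft-device counterpart.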
In the hard device case, the cooperation among the cross-bridges is much weaker and therefore the kinetics is much faster, which allows the system to reach equilibrium at the experimental time scale. A quantitative comparison of the obtained tension-elongation curves with experimental data [see Fig. 34(c)] shows that for large load steps the equilibrium tension fits the linear behavior observed in experiment, as can be expected from our calibration procedure. For near-isometric tension in a soft device the model also predicts the correct interval of kinetic trapping, see the gray region in Fig. 34(c). While the model suggests that negative stiffness should be a characteristic feature of the realistic response in a hard device for a single half-sarcomere (see Fig. 31), such behavior has not been observed in experiments on whole myofibrils. Note, however, that in the model all cross-bridges are considered to be identical and, in particular, it is assumed that they are attached with the same initial pre-strain. If there exists a considerable quenched disorder resulting from the randomness of the attachment/detachment positions, the effective force-elongation curve will be flatter [151]. Another reason for the disappearance of the negative susceptibility may be that the actual spring stiffness inside a cross-bridge is smaller due to nonlinear elasticity [223]. One can also expect the unstable half-sarcomeres to be stabilized actively through processes involving ATP, see Refs. [158; 224] and our Section 3. The softening can also be explained by the collective dynamics of many half-sarcomeres organized in series, see our Section 2.3. The comparison of the rates of fast recovery obtained in our simulations with experimental data (see Fig. 34) shows that the soft-spin model reproduces the kinetic data in both hard and soft ensembles rather well. Note, in particular, that the rate of recovery in both shortening and stretching protocols increases with load.
This is a direct consequence of the fact that the energy barriers for the forward and the reverse transitions depend on the mechanical load. Instead, in the original formulation of the HS model, and in most subsequent chemomechanical models, the reverse rate was kept constant and this effect was missing. In Ref. [207], the authors proposed to refine the HS model by introducing a load-dependent barrier also for the reverse stroke, see the results of their modeling in Fig. 34.

Interacting half-sarcomeres

So far, attention has been focused on the (passive) behavior of a single force generating unit, a half-sarcomere. We dealt with a zero-dimensional, mean-field model without spatial complexity. However, as we saw in Fig. 8(a), such elementary force generating units are arranged into a complex, spatially extended structure. Various types of cross-links in this structure can be roughly categorized as parallel or series connections. A prevalent perspective in the physiological literature is that the interaction among force generating units is so strong that the mean-field model of a single unit provides an adequate description of the whole myofibril. The underlying assumption is that the deformation associated with muscle contractions is globally affine. To challenge this hypothesis, we consider in this Section the simplest arrangement of force generating units. We assume that the whole section of a muscle myofibril between the neighboring Z-disk and M-line deforms in an affine way and treat such a transversely extended unit as a (macro) half-sarcomere. The neighboring (macro) half-sarcomeres, however, will be allowed to deform in a non-affine way. The resulting model describes a chain of (macro) half-sarcomeres arranged in series, and the question is whether the fast force recovery in such a chain takes place in an affine way [225]. Chain models of a muscle myofibril were considered in Refs.
[2; 114; 226], where the non-affinity of the deformation was established based on numerical simulations of kinetics; analytical studies of athermal chain models with bi-stable elements are also available. Here we present a simple analytical study of the equilibrium properties of a chain of half-sarcomeres which draws on Ref. [231] and allows one to understand the outcome of the numerical experiments conducted in Ref. [114].

[Fig. 35: schematic of two half-sarcomeres in series, each a parallel bundle of N cross-bridges with potential u SS and stiffness κ 0 connected to a backbone of stiffness κ b , with internal variables (y 1 , z 1 ) and (y 2 , z 2 ).]

Two half-sarcomeres. Consider first the most elementary series connection of two half-sarcomeres, each of them represented as a parallel bundle of N cross-bridges. This system can be viewed as a schematic description of a single sarcomere, see Fig. 35(b). To understand the mechanics of this system, we begin with the case where the temperature is equal to zero. The total (nondimensional) energy per cross-bridge reads v = (1/2) { (1/N) ∑ i=1..N [ u SS (x 1i ) + (1/2)(y 1 -x 1i ) 2 + (λ b /2)(z 1 -y 1 ) 2 ] + (1/N) ∑ i=1..N [ u SS (x 2i ) + (1/2)(y 2 -x 2i ) 2 + (λ b /2)(z 2 -y 2 ) 2 ] }. (2.35) In the hard device case, when we impose the average elongation z = (1/2)(z 1 + z 2 ), none of the half-sarcomeres is loaded individually in either a soft or a hard device. In the soft device case, the applied tension σ, which we normalize by the number of cross-bridges in a half-sarcomere, is the same in each half-sarcomere when the whole system is in equilibrium. The corresponding dimensionless energy per cross-bridge is w = v -σz. The equilibrium equations for the continuous variables x i are the same in hard and soft devices, and have up to three solutions: x̂ k1 (y k ) = (1 -λ 1 ) ŷ k if x ki ≥ ℓ, x̂ k2 (y k ) = (1 -λ 2 ) ŷ k -λ 2 if x ki < ℓ, and x̂ k * = ℓ, (2.36) where again λ 1,2 = κ 1,2 /(1 + κ 1,2 ) and ŷ k denotes the equilibrium elongation of the half-sarcomere with index k = 1, 2.
We denote by ξ = {ξ 1 , ξ 2 } the micro-configuration of a sarcomere, where the triplets ξ k = (p k , r k , q k ), with p k + q k + r k = 1, characterize the fractions of cross-bridges in half-sarcomere k that occupy the positions x̂ k2 , x̂ k * (spinodal state) and x̂ k1 , respectively. For a given configuration ξ k , the equilibrium value of y k is given by ŷ k (ξ k , z k ) = [λ b ẑ k + r k ℓ -p k λ 2 ] / [λ b + λ xb (ξ k )], where λ xb (ξ k ) = p k λ 2 + q k λ 1 + r k is the stiffness of each half-sarcomere. The elongation of a half-sarcomere in equilibrium is ẑ k = ŷ k + σ/λ b , where σ is a function of z and ξ in the hard device case and a parameter in the soft device case. To close the system of equations we need to add the equilibrium relation between the tension σ and the total elongation z = (1/2)(ŷ 1 + ŷ 2 ) + σ/λ b . After simplifications, we obtain σ(z, ξ) = λ(ξ) [ z + (1/2) ( (p 1 λ 2 -r 1 ℓ)/λ xb (ξ 1 ) + (p 2 λ 2 -r 2 ℓ)/λ xb (ξ 2 ) ) ], (2.37) ẑ(σ, ξ) = σ/λ(ξ) -(1/2) ( (p 1 λ 2 -r 1 ℓ)/λ xb (ξ 1 ) + (p 2 λ 2 -r 2 ℓ)/λ xb (ξ 2 ) ) (2.38) in a hard and a soft device, respectively, where λ(ξ)⁻¹ = λ b ⁻¹ + (1/2)[λ xb (ξ 1 )⁻¹ + λ xb (ξ 2 )⁻¹] is the compliance of the whole sarcomere. The stability of a configuration (ξ 1 , ξ 2 ) can be checked by computing the Hessian of the total energy, and one can show that configurations containing cross-bridges in the spinodal state are unstable, see Refs. [212; 227] for details. We illustrate the metastable configurations in Fig. 36 (hard device) and Fig. 37 (soft device). For simplicity, we used a symmetric double well potential (λ 1 = λ 2 = 0.5, ℓ = -0.5). Each metastable configuration is labeled by a number representing a micro-configuration in the form {(p 1 , q 1 ), (p 2 , q 2 )}, where p k = 0, 1/2, 1 (resp. q k = 0, 1/2, 1) denotes the fraction of cross-bridges in the post-power-stroke state (resp. pre-power-stroke) in half-sarcomere k.
The correspondence between labels and configurations is as follows: 1: {(1, 0), (1, 0)}; 2 and 2': {(1, 0), (1/2, 1/2)} and {(1/2, 1/2), (1, 0)}; 3: {(1/2, 1/2), (1/2, 1/2)}; 4 and 4': {(1, 0), (0, 1)} and {(0, 1), (1, 0)}; 5 and 5': {(1/2, 1/2), (0, 1)} and {(0, 1), (1/2, 1/2)}; 6: {(0, 1), (0, 1)}. For instance the label 2': {(1/2, 1/2), (1, 0)} corresponds to a configuration where, in the first half-sarcomere, half of the cross-bridges are in post-power-stroke and the other half are in pre-power-stroke; in the second half-sarcomere, all the cross-bridges are in post-power-stroke. In the hard device case (see Fig. 36) the system, following the global minimum path (bold line), evolves through the non-affine states 4 {(1, 0), (0, 1)} and 4' {(0, 1), (1, 0)}, where one half-sarcomere is fully in pre-power-stroke and the other one is fully in post-power-stroke. This path is marked by two transitions located at z * 1 and z * 2 , see Fig. 36(a). The inserted sketches in Fig. 36(b) show a single sarcomere in the three configurations encountered along the global minimum path. Note that along the two affine branches, where the sarcomere is in an affine state (1 and 6), the M-line (see the middle vertical dashed line) is in the middle of the structure. Instead, in the non-affine state (branch 4), the two half-sarcomeres are not equally stretched, and the M-line is not positioned in the center of the sarcomere. As a result of the (spontaneous) symmetry breaking, the M-line can be shifted in any of the two possible directions to form either configuration 4 or 4', see also Ref. [225]. In the soft device case [see Fig. 37], the system following the global minimum path never explores non-affine states. Instead both half-sarcomeres undergo a full unfolding transition at the same threshold tension σ * .
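This branch structure can be tabulated directly from Eq. (2.37). The sketch below is an illustration only, using the symmetric parameters of Figs. 36-37 and restricting to configurations with no cross-bridges at the spinodal point (r 1 = r 2 = 0); it checks, in particular, that the non-affine states 4 and 4' are degenerate:

```python
import itertools

# Branch tensions of Eq. (2.37) for two half-sarcomeres in a hard device,
# with r_k = 0 and the symmetric well of the figures (illustrative values).
lam1 = lam2 = 0.5
lam_b = 1.0

def lam_xb(p):
    """Stiffness of one half-sarcomere with post-power-stroke fraction p (q = 1 - p)."""
    return p * lam2 + (1.0 - p) * lam1

def sigma(z, p1, p2):
    """Branch tension of Eq. (2.37) with r1 = r2 = 0."""
    compliance = 1.0 / lam_b + 0.5 * (1.0 / lam_xb(p1) + 1.0 / lam_xb(p2))
    shift = 0.5 * (p1 * lam2 / lam_xb(p1) + p2 * lam2 / lam_xb(p2))
    return (z + shift) / compliance

z = -0.5
branches = {(p1, p2): sigma(z, p1, p2)
            for p1, p2 in itertools.product([0.0, 0.5, 1.0], repeat=2)}
```

At a given elongation the fully post-power-stroke branch (state 1) carries the highest tension, the fully pre-power-stroke branch (state 6) the lowest, and the two symmetry-related non-affine states 4 and 4' coincide, which is the degeneracy behind the spontaneous M-line shift discussed above.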
If the temperature is different from zero, we need to compute the partition functions Z 2 (z, β) = ∫ exp [-2βNv(z, x)] δ(z 1 + z 2 -2z) dx (2.39) and Z 2 (σ, β) = ∫ exp [-2βNw(σ, x)] dx (2.40) in a hard and a soft device, respectively, where again β = (κ 0 a 2 )/(k b T). The corresponding free energies are f 2 (z, β) = -(1/β) log[Z 2 (z)] and g 2 (σ, β) = -(1/β) log[Z 2 (σ)]. The explicit expressions of these free energies can be obtained in the thermodynamic limit N → ∞, but they are too long to be presented here, see Refs. [212; 231] for more details. We illustrate the results in Fig. 38, where we show both the energies and the tension-elongation isotherms. We see that a sarcomere exhibits different behavior in the two loading conditions. In particular, the Gibbs free energy remains concave in the soft device case for all temperatures, while the Helmholtz free energy becomes nonconvex at low temperatures in the hard device case. Nonconvexity of the Helmholtz free energy results in nonmonotone tension-elongation relations with the development of negative stiffness. It is instructive to compare the obtained non-affine tension-elongation relations with the ones computed under the assumption that each half-sarcomere is an elementary constitutive element with a prescribed tension-elongation relation. We suppose that such a relation can be extracted from the response of a half-sarcomere in either a soft or a hard device, which allows us to use expressions obtained earlier, see Fig. 33. The hard device case is presented in Fig. 39. With thick lines we show the equilibrium tension-elongation relation, while thin lines correspond to the behavior of the two phenomenologically modeled half-sarcomeres in series, each exhibiting either soft or hard device constitutive behavior.
Note that if the chosen constitutive relation corresponds to the hard device protocol [illustrated in Fig. 33(b)], we obtain several equilibrium states for a given total elongation, which is a result of the imposed constitutive constraints, see Fig. 39(a). The global minimum path predicted by the "constitutive model" shows discontinuous transitions between stable branches which resemble continuous transitions along the actual equilibrium path. If instead we use the soft device constitutive law for the description of individual half-sarcomeres [illustrated in Fig. 33(d)], the tension-elongation response becomes monotone and is therefore completely unrealistic, see Fig. 39(b). We reiterate that in both comparisons the misfit is due to the fact that in a fully equilibrated sarcomere none of the half-sarcomeres is loaded in either a soft or a hard device. It would be interesting to show that a less schematic system of this type can reproduce the non-affinities observed experimentally [235]. In Fig. 40 we present the result of a similar analysis for a sarcomere loaded in a soft device. In this case, if the "constitutive model" is based on the hard device tension-elongation relations [from Fig. 33(b)], we obtain the same (constrained) metastable states as in the previous case, see Fig. 39(a), thin lines. This means, in particular, that the response contains jumps, while the actual equilibrium response is monotone, see Fig. 40(a). Instead, if we take the soft device tension-elongation relation as a "constitutive model", we obtain the correct overall behavior, see Fig. 40(b). This is expected since in the (global) soft device case both half-sarcomeres are effectively loaded in the same soft device and the overall response is affine.

A chain of half-sarcomeres. Next, consider the behavior of a chain of M half-sarcomeres connected in series. As before, each half-sarcomere is modeled as a parallel bundle of N cross-bridges.
We first study the mechanical response of this system at zero temperature. Introduce x ki , the continuous degrees of freedom characterizing the state of the cross-bridges in half-sarcomere k; y k , the position of the backbone that connects all the cross-bridges of half-sarcomere k; and z k , the total elongation of half-sarcomere k. The total energy (per cross-bridge) of the chain takes the form v(x, y, z) = (1/(M N)) ∑ k=1..M { ∑ i=1..N [ u SS (x ki ) + (1/2)(y k -x ki ) 2 + (λ b /2)(z k -y k ) 2 ] }, (2.41) where x = {x ki }, y = {y k } and z = {z k }. In the hard device, the total elongation of the chain is prescribed: Mz = ∑ k=1..M z k , where z is the average imposed elongation (per half-sarcomere). In the soft device case, the tension σ is imposed and the energy of the system also includes the energy of the loading device: w = v -σ ∑ k=1..M z k . We again characterize the microscopic configuration of each half-sarcomere k by the triplet ξ k = (p k , q k , r k ), denoting as before the fractions of cross-bridges in each of the wells and at the spinodal point, with p k + q k + r k = 1 for all 1 ≤ k ≤ M. The vector ξ = (ξ 1 , . . . , ξ M ) then characterizes the configuration of the whole chain. In view of the complexity of the ensuing energy landscape, here we characterize only a subclass of metastable configurations describing homogeneous (affine) states of individual half-sarcomeres. More precisely, we limit our attention to configurations with r k = 0 and p k = 1, 0 for all 1 ≤ k ≤ M. In this case, a single half-sarcomere can be characterized by a spin variable m k = 1, 0. The resulting equilibrium tension-elongation relations in hard and soft devices take the form σ(z, m) = [ 1/λ b + (1/M) ∑ k=1..M 1/(m k λ 2 + (1 -m k )λ 1 ) ]⁻¹ × [ z + (1/M) ∑ k=1..M m k λ 2 /(m k λ 2 + (1 -m k )λ 1 ) ], (2.42) ẑ(σ, m) = [ 1/λ b + (1/M) ∑ k=1..M 1/(m k λ 2 + (1 -m k )λ 1 ) ] σ -(1/M) ∑ k=1..M m k λ 2 /(m k λ 2 + (1 -m k )λ 1 ), (2.43) where m = (m 1 , . . . , m M ). In Fig.
41 we show the energy and the tension-elongation relation for the system following the global minimum path in a hard device. Observe that the tension-elongation relation contains a series of discontinuous transitions as the order parameter M⁻¹ ∑ m k increases monotonically from 0 to 1; their number increases with M while their size decreases. In the limit M → ∞, the relaxed (minimum) energy is convex but not strictly convex, see the interval where the energy depends linearly on the elongation for the case M = 20 in Fig. 41(a), see also Refs. [227; 236]. The corresponding tension-elongation curves [see Fig. 41(b)] exhibit a series of transitions. In contrast to the case of a single half-sarcomere, the limiting behavior of a chain is the same in the soft and hard devices (see the thick line). The obtained analytical results are in full agreement with the numerical simulations reported in Refs. [114; 164; 235; 237]. Fig. 42 illustrates the distribution of elongations of individual half-sarcomeres in the hard device case as the system evolves along the global minimum path. One can see that when the deformation becomes non-affine the population of half-sarcomeres splits into two groups: one group is stretched at a level above average (top trace above the diagonal) and the other one at a level below average (bottom trace below the diagonal). The numbers beside the curves indicate the number of half-sarcomeres in each group. In the soft device case, the system always remains in the affine state: all half-sarcomeres change conformation at the same moment and therefore the system stays on the diagonal (the dashed lines) in Fig. 42. Assume now that the temperature is different from zero. The partition function for the chain in a soft device can be obtained as the product of individual partition functions: Z M (σ, β) = [Z s (σ, β)] M = [ √(2π/(N βλ b )) ∫ exp [-βNg(σ, x, β)] dx ] M , which reflects the fact that the half-sarcomeres in this setting are independent.
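The athermal branch formula (2.42) is easy to tabulate numerically. The sketch below (illustrative parameter values, not the calibrated ones) checks two of its properties used above: a branch depends only on the fraction of flipped units, not on their order, and homogeneous configurations reduce to a single series formula with µ i = λ i λ b /(λ i + λ b ):

```python
# Branch tensions of Eq. (2.42): M half-sarcomeres in series, each homogeneous
# with spin m_k = 1 (post-power-stroke) or 0 (pre-power-stroke).
lam1, lam2, lam_b = 0.6, 0.4, 2.0

def sigma_chain(z, m):
    """Branch tension of Eq. (2.42) for a given spin configuration m."""
    M = len(m)
    lam_k = [mk * lam2 + (1 - mk) * lam1 for mk in m]
    compliance = 1.0 / lam_b + sum(1.0 / lk for lk in lam_k) / M
    shift = sum(mk * lam2 / lk for mk, lk in zip(m, lam_k)) / M
    return (z + shift) / compliance

# permutation invariance: only the number of flipped units matters,
# so a chain of M units has M + 1 distinct branches
s_a = sigma_chain(0.2, [1, 0, 0, 0])
s_b = sigma_chain(0.2, [0, 0, 1, 0])

# homogeneous (affine) configuration reduces to the series formula with mu_2
mu2 = lam2 * lam_b / (lam2 + lam_b)
s_post = sigma_chain(0.2, [1, 1, 1, 1])
```

The M + 1 branches, traversed one flip at a time along the global minimum path, produce the sawtooth-like sequence of transitions shown in Fig. 41(b).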
In the hard device, the analysis is more involved because of the total length constraint. In this case we need to compute

Z_M(z, β) = ∫ exp[-βNM v(z, x)] δ[(1/M) Σ z_k - z] dx. (2.44)

A semi-explicit asymptotic solution can be obtained for the hard device case in the limit β → ∞ and M → ∞. Note first that the partition function depends only on the "average magnetization" m, the fraction of half-sarcomeres in the post-power-stroke conformation. As MN → ∞ we obtain asymptotically (see Refs. [212; 231] for the details)

Z_M(z, β) ≈ C ϕ(m*) exp[-βMN Ψ(m*; z, β)] / [βMN ∂²_m Ψ(m; z, β)|_{m=m*}]^{1/2}, (2.45)

where C = (2π/β)^{((N+2)M-1)/2} N^{1/2 - M} and ϕ(m) = { [m/μ_2 + (1-m)/μ_1] [m(1-m)] }^{-1/2}. Here m* is the minimum of Ψ in the interval ]0, 1[. Using the notations μ_{1,2} = (λ_{1,2} λ_b)/(λ_{1,2} + λ_b), we can write the expression for the marginal free energy at fixed m in the form

Ψ(m; z, β) = (1/2) [m/μ_2 + (1-m)/μ_1]^{-1} (z + m)² + (1-m) v_0 - (1/(2β)) [m log(1-λ_2) + (1-m) log(1-λ_1)] + (1/(βN)) [m log(m) + (1-m) log(1-m) + (m/2) log(λ_2 λ_b) + ((1-m)/2) log(λ_1 λ_b)]. (2.46)

A direct computation of the second derivative of (2.46) with respect to m shows that Ψ is always convex. In other words, our assumption that individual half-sarcomeres respond in an affine way implies that the system does not undergo a phase transition, in agreement with what is expected for a 1D system with short-range interactions. Now we can compute the Helmholtz free energy and the equilibrium tension-elongation relation for a chain in a hard device:

f_∞(z, β) = Ψ(m*; z, β), (2.47)

σ_∞(z, β) = (m*/μ_2 + (1-m*)/μ_1)^{-1} (z + m*). (2.48)

In the case of a soft device, the Gibbs free energy and the corresponding tension-elongation relation are simply the rescaled versions of the results obtained for a single half-sarcomere, see Section 2.2. In Fig. 43 we illustrate a typical equilibrium behavior of a chain in a hard device.
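Since Ψ is convex in m, the minimization in (2.47)-(2.48) can be done with a simple grid search. The Python sketch below (illustrative parameter values; a dense grid stands in for a proper one-dimensional solver) evaluates Eq. (2.46) and extracts f_∞ and σ_∞.

```python
import numpy as np

def Psi(m, z, beta, N, lam1, lam2, lam_b, v0):
    """Marginal free energy of Eq. (2.46) at fixed fraction m of switched half-sarcomeres."""
    mu1 = lam1 * lam_b / (lam1 + lam_b)
    mu2 = lam2 * lam_b / (lam2 + lam_b)
    elastic = 0.5 * (z + m) ** 2 / (m / mu2 + (1 - m) / mu1)
    enthalpic = (1 - m) * v0
    vib = -(1.0 / (2 * beta)) * (m * np.log(1 - lam2) + (1 - m) * np.log(1 - lam1))
    mix = (1.0 / (beta * N)) * (m * np.log(m) + (1 - m) * np.log(1 - m)
                                + 0.5 * m * np.log(lam2 * lam_b)
                                + 0.5 * (1 - m) * np.log(lam1 * lam_b))
    return elastic + enthalpic + vib + mix

def equilibrium(z, beta, N, lam1, lam2, lam_b, v0, grid=20001):
    """Minimize Psi over m in (0, 1): returns f_inf (2.47), sigma_inf (2.48) and m*."""
    m = np.linspace(1e-9, 1 - 1e-9, grid)
    vals = Psi(m, z, beta, N, lam1, lam2, lam_b, v0)
    mstar = m[np.argmin(vals)]
    mu1 = lam1 * lam_b / (lam1 + lam_b)
    mu2 = lam2 * lam_b / (lam2 + lam_b)
    sigma = (z + mstar) / (mstar / mu2 + (1 - mstar) / mu1)
    return vals.min(), sigma, mstar

# Illustrative parameters (not fitted to data); with v0 = 0 and z = -0.5
# the minimizer m* sits close to 1/2 and the equilibrium tension is small.
f, s, mstar = equilibrium(z=-0.5, beta=50.0, N=100,
                          lam1=0.5, lam2=0.4, lam_b=2.0, v0=0.0)
print(mstar, s)
```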
The increase of temperature enhances the convexity of the energy, as in the case of a single half-sarcomere; moreover, when the temperature decreases we no longer see the negative stiffness. Instead, when N is sufficiently large, we see a tension-elongation plateau similar to what is observed in experiments on myofibrils, see Fig. 43(b). The obtained results can be directly compared with experimental data. Consider, for instance, the response of a chain with M = 20 half-sarcomeres submitted to a rapid length step. The equilibrium model with realistic parameters predicts in this case a tension-elongation plateau close to the observed T_2 curve, see the dashed line in Fig. 44(a). Our numerical experiments, however, could not reproduce the part of this plateau in the immediate vicinity of the state of isometric contractions. This may mean that even in a chain placed in a hard device, individual half-sarcomeres end up being loaded in a mixed device and can still experience kinetic trapping. Our stochastic simulations for a chain in a soft device reproduce the whole trapping domain around the state of isometric contractions, see Fig. 44(a). The computed rate of the quick recovery for the chain is shown in Fig. 44(b). We see that the model is able to capture quantitatively the difference between the two loading protocols. However, the hard device response of the chain (see squares) is more sluggish than in the case of a single half-sarcomere. Once again, we see an interval around the state of isometric contractions where our system cannot reach its equilibrium state at the experimental time scale. Note, however, that the rate of relaxation to equilibrium increases with both stretching and shortening, saturating for large applied steps, as was observed experimentally in Ref. [207].

Active rigidity

As we have seen in Section 2.2.
Purely entropic stabilization is excluded in this case because the temperature alone is not sufficiently high to ensure positive stiffness of individual half-sarcomeres [114]. Here we discuss the possibility that the homogeneity of the myofibril configuration is due to active stabilization of individual half-sarcomeres [224]. We conjecture that metabolic resources are used to modify the mechanical susceptibility of the system and to stabilize configurations that would not have existed in the absence of ATP hydrolysis [239-241]. We present the simplest model showing that active rigidity can emerge through resonant non-thermal excitation of molecular degrees of freedom. The idea is to imitate the inverted Kapitza pendulum [242], except that in biological systems the inertial stabilization has to be replaced by its overdamped analog. The goal is to show that a macroscopic mechanical stiffness can be controlled at the microscopic scale by a time-correlated noise, which in a biological setting may serve as a mechanical representation of a nonequilibrium chemical reaction [243].

Mean field model

To justify the prototypical model with one degree of freedom, we motivate it using the modeling framework developed above. Suppose that we model a half-sarcomere by a parallel array of N cross-bridges attached to a single actin filament, following Section 2.2. We again represent attached cross bridges as bistable elements in series with linear springs, but now assume additionally that there is a nonequilibrium driving provided through stochastic rocking of the bi-stable elements. More specifically, we replace the potential u_SS(x) for individual cross-bridges by u_SS(x) - x f(t), where f(t) is a correlated noise with zero average simulating the out-of-equilibrium environment; see Ref. [244] for more details.
If such a half-sarcomere is subjected to a time-dependent deterministic force f_ext(t), the dynamics can be described by the following system of nondimensional Langevin equations

ẋ_i = -∂_{x_i} W + √(2D) ξ(t),    ν ẏ = -∂_y W, (3.1)

where ξ(t) is a white noise with the properties ⟨ξ(t)⟩ = 0 and ⟨ξ(t_1)ξ(t_2)⟩ = δ(t_2 - t_1). Here D is a temperature-like parameter, the analog of the parameter β^{-1} used in previous sections. The (backbone) variable y, coupled to N fast soft-spin type variables x_i through identical springs with stiffness κ_0, is assumed to be macroscopic, deterministic and slow due to the large value of the relative viscosity ν. We write the potential energy in the form W = Σ_{i=1}^{N} v(x_i, y, t) - f_ext y, where v(x, y, t) is the energy (2.30) with a time-dependent tilt in x, and the function f_ext(t) is assumed to be slowly varying. The goal now is to average out the fast degrees of freedom x_i and to formulate the effective dynamics in terms of the single slow variable y. Note that the equation for y can be rewritten as

(ν/N) ẏ = κ_0 [ (1/N) Σ_{i=1}^{N} x_i - y ] + f_ext/N, (3.2)

which reveals the mean-field nature of the interaction between y and x_i. If N is large, we can replace (1/N) Σ_{i=1}^{N} x_i by ⟨x⟩, using the fact that the variables x_i are identically distributed and exchangeable [245]. Denoting ν_0 = ν/N and g_ext = f_ext/(κ_0 N), and assuming that these variables remain finite in the limit N → ∞, we can rewrite the equation for y in the form ν_0 ẏ = κ_0 [(⟨x⟩ - y) + g_ext(t)]. Assume now for determinacy that the function f_ext(t) is periodic and choose its period τ_0 in such a way that Γ = ν_0/κ_0 ≫ τ_0. We then split the force κ_0(⟨x⟩ - y) acting on y into a slow component κ_0 ψ(y) = κ_0(x̄ - y) and a slow-fast component κ_0 ϕ(y, t) = κ_0(⟨x⟩ - x̄), where x̄ = lim_{t→∞} (1/t) ∫_0^t ∫_{-∞}^{∞} x ρ(x, t) dx dt is the time average of ⟨x⟩, and ρ(x, t) is the probability distribution for the variable x. We obtain Γ ẏ = ψ(y) + ϕ(y, t) + g_ext, and the next step is to average this equation over τ_0.
To this end we introduce the decomposition y(t) = z(t) + ζ(t), where z is the averaged (slow) motion and ζ is a perturbation with time scale τ_0. Expanding our dynamic equation in ζ, we obtain

Γ(ż + ζ̇) = ψ(z) + ∂_z ψ(z) ζ + ϕ(z, t) + ∂_z ϕ(z, t) ζ + g_ext. (3.3)

Since g_ext(t) ≃ τ_0^{-1} ∫_t^{t+τ_0} g_ext(u) du, we obtain at the fast time scale Γ ζ̇ = ϕ(z, t); see Ref. [246] for the general theory of this type of expansion. Integrating this equation between t_0 and t ≤ t_0 + τ_0 at fixed z, we obtain ζ(t) - ζ(t_0) = Γ^{-1} ∫_{t_0}^{t} ϕ(z(t_0), u) du, and since ϕ is τ_0-periodic with zero average, we can conclude that ζ(t) is also τ_0-periodic with zero average. If we now formally average (3.3) over the fast time scale τ_0, we obtain Γ ż = ψ(z) + r + g_ext, where r = (Γ τ_0)^{-1} ∫_0^{τ_0} ∫_0^{t} ∂_z ϕ(z, t) ϕ(z, u) du dt. Given that both ϕ(z, t) and ∂_z ϕ(z, t) are bounded, we can write |r| ≤ (τ_0/Γ) c ≪ 1, where the "constant" c depends on z but not on τ_0 and Γ. Therefore, if N ≫ 1 and ν/(κ_0 N) ≫ τ_0, the equation for the coarse-grained variable z(t) = τ_0^{-1} ∫_t^{t+τ_0} y(u) du can be written in terms of an effective potential: (ν/N) ż = -∂_z F + f_ext/N. To find the effective potential we need to compute the primitive of the averaged tension, F(z) = ∫^z σ(s) ds, where σ(y) = κ_0 [y - x̄]. The problem reduces to the study of the stochastic dynamics of a variable x(t) described by the dimensionless Langevin equation

ẋ = -∂_x w(x, y, t) + √(2D) ξ(t). (3.4)

The potential w(x, y, t) = w_p(x, t) + v_e(x, y) is the sum of two components: w_p(x, t) = u_SS(x) - x f(t), mimicking an out-of-equilibrium environment, and v_e(x, y) = (κ_0/2)(x - y)², describing the linear elastic coupling of the "probe" with a "measuring device" characterized by stiffness κ_0. We assume that the energy is supplied to the system through a time-correlated rocking force f(t), characterized by an amplitude A and a time scale τ.
To obtain analytical results, we further assume that the potential u_SS(x) is bi-quadratic, u_SS(x) = (1/2)(|x| - 1/2)². A similar framework has been used before in studies of the directional motion of molecular motors [247]. The effective potential F(z) can be viewed as a nonequilibrium analog of the free energy [248-251]. While in our case the mean-field nature of the model ensures the potentiality of the averaged tension, in a more general setting the averaged stochastic forces may lose their gradient structure, and even the effective "equations of state" relating the averaged forces to the corresponding generalized coordinates may not be well defined [252-257].

Phase diagrams

Suppose first that the non-equilibrium driving is represented by a periodic (P), square-shaped external force

f(t) = A(-1)^{n(t)} with n(t) = ⌊2t/τ⌋, (3.5)

where the brackets denote the integer part. The Fokker-Planck equation for the time-dependent probability distribution ρ(x, t) reads

∂_t ρ = ∂_x [ρ ∂_x w(x, t) + D ∂_x ρ]. (3.6)

An explicit solution of (3.6) can be found in the adiabatic limit, when the correlation time τ is much larger than the escape time for the bi-stable potential u_SS [132; 258]. The idea is that the time average of the steady-state probability can be computed as the mean of the stationary probabilities with constant driving force (either f(t) = A or f(t) = -A). The adiabatic approximation becomes exact in the special case of an equilibrium system with A = 0, when the stationary probability distribution can be written explicitly. The tension-elongation curve σ(z) can then be computed analytically, since in this case the time average x̄ = ⟨x⟩ = ∫_{-∞}^{∞} x ρ_0(x) dx. The resulting curve and the corresponding potential F(z) are shown in Fig. 45(a). At zero temperature the equilibrium system with A = 0 exhibits negative stiffness at z = 0, where the effective potential F(z) has a maximum (spinodal state).
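The equilibrium (A = 0) picture is easy to reproduce by direct quadrature of ρ_0. The Python sketch below (κ_0 = 0.6, a value used in Ref. [224]; the temperatures are illustrative) estimates the stiffness of σ(z) = κ_0(z - ⟨x⟩) at z = 0: it is negative below the critical temperature D_e and positive above it, in line with the entropic stabilization described in the text.

```python
import numpy as np

KAPPA0 = 0.6  # coupling stiffness; illustrative value taken from Ref. [224]

def avg_x(z, D, xmax=4.0, n=40001):
    """Equilibrium average of x under rho_0 ~ exp(-v(x, z)/D), on a uniform grid;
    v(x, z) = (1/2)(|x| - 1/2)^2 + (KAPPA0/2)(x - z)^2 is the bi-quadratic energy."""
    x = np.linspace(-xmax, xmax, n)
    v = 0.5 * (np.abs(x) - 0.5) ** 2 + 0.5 * KAPPA0 * (x - z) ** 2
    w = np.exp(-(v - v.min()) / D)        # subtract v.min() for numerical stability
    return (x * w).sum() / w.sum()

def stiffness0(D, h=1e-3):
    """Central-difference estimate of d(sigma)/dz at z = 0,
    with sigma(z) = KAPPA0 * (z - <x>)."""
    s = lambda z: KAPPA0 * (z - avg_x(z, D))
    return (s(h) - s(-h)) / (2 * h)

print(stiffness0(0.02))  # cold passive system: negative stiffness (spinodal state)
print(stiffness0(0.30))  # hot passive system: entropically stabilized, positive
```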
As temperature increases we observe a standard entropic stabilization of the configuration z = 0, see Fig. 45(a). (Here the stationary distribution is ρ_0(x) = Z^{-1} exp[-ṽ(x)/D], with Z = ∫_{-∞}^{∞} exp(-ṽ(x)/D) dx and ṽ(x, z) = (1/2)(|x| - 1/2)² + (κ_0/2)(x - z)².) By solving the equation ∂_z σ|_{z=0} = 0, we find an explicit expression for the critical temperature D_e = r/[8(1 + κ_0)], where r is a root of the transcendental equation 1 + √(r/π) e^{-1/r}/[1 + erf(1/√r)] = r/(2κ_0). The behavior of the roots of the equation σ(z) = -κ_0(x̄ - z) = 0 at A = 0 is shown in Fig. 46(b), which illustrates a second-order phase transition at D = D_e. In the case of a constant force f ≡ A, the stationary probability distribution is also known (see Risken, The Fokker-Planck Equation): ρ_A(x) = Z^{-1} exp[-(ṽ(x) - Ax)/D], where now Z = ∫_{-∞}^{∞} exp[-(ṽ(x) - Ax)/D] dx. [Stray figure caption: "C_A is the tri-critical point, D_e is the point of a second-order phase transition in the passive system. The 'Maxwell line' for a first-order phase transition in the active system is shown by dots. Here κ_0 = 0.6. Adapted from Ref. [224]."] In the adiabatic approximation we can write the time-averaged stationary distribution in the form ρ_Ad(x) = (1/2)[ρ_A(x) + ρ_{-A}(x)], which gives x̄ = (1/2)[⟨x⟩(A) + ⟨x⟩(-A)]. The force-elongation curves σ(z) and the corresponding potentials F(z) are shown in Fig. 45(b). We see the main effect: as the degree of non-equilibrium, characterized by A, increases, not only does the stiffness in the state z = 0, where the original double-well potential u_SS had a maximum, change from negative to positive, as in the case of entropic stabilization, but the effective potential F(z) also develops a new energy well around this point. We interpret this phenomenon as the emergence of active rigidity, because the new equilibrium state becomes possible only at a finite value of the driving parameter A, while the temperature D can be arbitrarily small.
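The adiabatic formula x̄ = (1/2)[⟨x⟩(A) + ⟨x⟩(-A)] makes this emergence of active rigidity easy to reproduce numerically. The Python sketch below (illustrative parameter values) compares the stiffness at z = 0 of the passive (A = 0) and the driven (A = 0.4) system at the same low temperature: the driving alone turns the stiffness positive.

```python
import numpy as np

KAPPA0 = 0.6  # illustrative coupling stiffness

def avg_x(z, D, f, xmax=4.0, n=40001):
    """Average of x under the tilted stationary density rho_f ~ exp(-(v(x,z) - f x)/D)."""
    x = np.linspace(-xmax, xmax, n)
    v = 0.5 * (np.abs(x) - 0.5) ** 2 + 0.5 * KAPPA0 * (x - z) ** 2 - f * x
    w = np.exp(-(v - v.min()) / D)
    return (x * w).sum() / w.sum()

def sigma(z, D, A):
    """Adiabatic tension: the time-averaged <x> is the mean of the +A and -A tilts."""
    xbar = 0.5 * (avg_x(z, D, A) + avg_x(z, D, -A))
    return KAPPA0 * (z - xbar)

# Low temperature, with and without driving: the state z = 0 acquires
# positive (active) stiffness only when A is switched on.
D, A, h = 0.02, 0.4, 1e-3
k_passive = (sigma(h, D, 0.0) - sigma(-h, D, 0.0)) / (2 * h)
k_active = (sigma(h, D, A) - sigma(-h, D, A)) / (2 * h)
print(k_passive, k_active)
```

Note that by symmetry the active force itself vanishes at z = 0, consistent with the remark in the text that the largest active rigidity is generated in a state of zero active force.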
The behavior of the roots of the equation σ(z) = -κ_0(x̄ - z) = 0 at A ≠ 0 is shown in Fig. 46(a), which now illustrates a first-order phase transition. The full steady-state regime map (dynamic phase diagram) summarizing the results obtained in the adiabatic approximation is presented in Fig. 47(a). There, the "paramagnetic" phase I describes the regimes where the effective potential F(z) is convex, the "ferromagnetic" phase II is a bi-stability domain where the potential F(z) has a double-well structure, and in the resonant "Kapitza" phase III the driving stabilizes an additional energy well of F(z) around z = 0. The latter is an effect of stochastic resonance, which is beyond the reach of the adiabatic approximation. Force-elongation relations characterizing the mechanical response of the system at different points of the (A, D) plane [see Fig. 47(b)] are shown in Fig. 48, where the upper insets illustrate the typical stochastic trajectories and the associated cycles in {⟨x(t)⟩, f(t)} coordinates. We observe that while in phase I thermal fluctuations dominate the periodic driving and undermine the two-well structure of the potential, in phase III the jumps between the two energy wells are fully synchronized with the rocking force. In phase II the system shows intermediate behavior with uncorrelated jumps between the wells. In Fig. 48(d) we illustrate the active component of the force, σ_a(z) = σ(z; A) - σ(z; 0), in phases I, II and III. A salient feature of Fig. 48(d) is that active force generation is significant only in the resonant (Kapitza) phase III. A biologically beneficial plateau (tetanus) is a manifestation of the triangular nature of a pseudo-well in the active landscape F_a(z) = ∫^z σ_a(s) ds; note also that the only slightly bigger (f, ⟨x⟩) hysteresis cycle in phase III, reflecting a moderate increase of the extracted work, results in a considerably larger active force. It is also of interest that the largest active rigidity is generated in the state z = 0, where the active force is equal to zero.
If we now estimate the non-dimensional parameters of the model using data on skeletal muscles, we obtain A = 0.5, D = 0.01, τ = 100 [224]. This means that muscle myosins in stall conditions (the physiological regime of isometric contractions) may be functioning in the resonant phase III. The model can therefore provide an explanation of the observed stability of skeletal muscles in the negative stiffness regime [99]; a similar stabilization mechanism may also be assisting the titin-based force generation at long sarcomere lengths [262]. The results presented in this Section for the case of periodic driving were shown in Ref. [224] to be qualitatively valid also for the case of dichotomous noise. However, the Ornstein-Uhlenbeck noise was unable to generate a nontrivial Kapitza phase. To conclude, the prototypical model presented in this Section shows that by controlling the degree of non-equilibrium in the system, one can stabilize apparently unstable or marginally stable mechanical configurations and in this way modify the structure of the effective energy landscape (when it can be defined). The associated pseudo-energy wells of resonant nature may be crucially involved not only in muscle contraction but also in hair cell gating.

Active force generation

In this Section we address the slow time scale phase of force recovery, which relies on attachment-detachment processes [79]. We review two types of models. In models of the first type, the active driving comes from the interaction of the myosin head with the actin filament, while the power-stroke mechanism remains passive [269]. In models of the second type, the active driving resides in the power-stroke machinery [244]. The latter model is fully compatible with the biochemical Lymn-Taylor cycle of muscle contractions.
Contractions driven by the attachment-detachment process

The physiological perspective that the power stroke is the driving force of active contraction was challenged by the discovery that the myosin catalytic domain can operate as a Brownian ratchet, which means that it can move and produce contraction without any assistance from the power-stroke mechanism [136; 137; 142]. It is then conceivable that contraction is driven directly by the attachment-detachment machinery, which can rectify the correlated noise and select a directionality following the polarity of actin filaments [60; 143]. To represent the minimal set of variables characterizing skeletal Myosin II in both attached and detached states (the position of the motor domain, the configuration of the lever domain, and the stretch state of the series elastic element) we use three continuous coordinates [269]. To be maximally transparent, we adopt the simplest representation of the attachment-detachment process provided by the rocking Brownian ratchet model [132; 247; 270; 271]. We interpret again a half-sarcomere as an HS-type parallel bundle of N cross bridges. We assume, however, that now each cross-bridge is a three-element chain containing a linear elastic spring, a bi-stable contractile element, and a molecular motor representing the ATP-regulated attachment-detachment process, see Fig. 49. The system is loaded either by a force f_ext representing a cargo, or is constrained by the prescribed displacement of the backbone. The elastic energy of the linear spring can be written as v_e(x) = (1/2) κ_0 (z - y - ℓ)², where κ_0 is the elastic modulus and ℓ is the reference length.
The energy u_SS of the bi-stable mechanism is taken to be three-parabolic:

u_SS(y - x) = (1/2) κ_1 (y - x)²  for  y - x ≥ b_1,
u_SS(y - x) = -(1/2) κ_3 (y - x - b)² + c  for  b_2 ≤ y - x < b_1,
u_SS(y - x) = (1/2) κ_2 (y - x - a)² + v_0  for  y - x < b_2, (4.1)

where κ_{1,2} are the curvatures of the energy wells representing the pre-power-stroke and post-power-stroke configurations, respectively, and a < 0 is the characteristic size of the power stroke. The bias v_0 is again chosen to ensure that the two wells have the same energy in the state of isometric contraction. The energy barrier is characterized by its position b, its height c and its curvature κ_3. The values of the parameters b_1 and b_2 are chosen to ensure the continuity of the energy function. We model the myosin catalytic domain as a Brownian ratchet of Magnasco type [132]. More specifically, we view it as a particle moving in an asymmetric periodic potential while being subjected to a correlated noise. The periodic potential is assumed to be piece-wise linear in each period:

Φ(x) = Q (x - nL)/(λ_1 L)  for  0 < x - nL < λ_1 L,
Φ(x) = Q/λ_2 - Q (x - nL)/(λ_2 L)  for  λ_1 L < x - nL < L, (4.2)

where Q is the amplitude, L is the period, and λ_1 - λ_2 is the measure of the asymmetry, with λ_1 + λ_2 = 1. The variable x marks the location of the particle in the periodic energy landscape: the head is attached if x is close to one of the minima of Φ(x) and detached if it is close to one of the maxima. The system of N cross-bridges of this type connected in parallel is modeled by the system of Langevin equations [269]

ν_x ẋ_i = -Φ′(x_i) + u′_SS(y_i - x_i) + f(t + t_i) + √(2D_x) ξ_x(t),
ν_y ẏ_i = -u′_SS(y_i - x_i) - κ_0 (y_i - z - ℓ_i) + √(2D_y) ξ_y(t),
ν_z ż = Σ_{i=1}^{N} κ_0 (y_i - z - ℓ_i) + f_ext + √(2D_z) ξ_z(t), (4.3)

with D_{x,y,z} = ν_{x,y,z} k_b T, where ν_{x,y,z} denote the relative viscosities associated with the corresponding variables, and ξ is a standard white noise.
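A minimal numerical illustration of the attachment-detachment machinery is the bare Magnasco ratchet, keeping only the periodic potential (4.2) and the square-wave rocking (3.5) while dropping the elastic and bi-stable elements of (4.3). Euler-Maruyama integration of this reduced single-coordinate sketch (all parameter values are illustrative, not taken from the original papers) already shows rectification of a zero-average drive:

```python
import numpy as np

# Illustrative parameters of a rocked Magnasco-type ratchet (single coordinate x)
L, Q, lam1, lam2 = 1.0, 1.0, 0.8, 0.2     # lam1 + lam2 = 1, asymmetric sawtooth
F, tau = 2.5, 2.0                          # square-wave rocking, cf. Eq. (3.5)
D, dt, T = 0.05, 1e-3, 200.0

def dPhi(x):
    """Slope of the piece-wise linear periodic potential, Eq. (4.2)."""
    xm = x % L
    return Q / (lam1 * L) if xm < lam1 * L else -Q / (lam2 * L)

def rocking(t):
    """f(t) = F * (-1)^{n(t)} with n(t) = floor(2t/tau)."""
    return F * (-1.0) ** int(2.0 * t / tau)

rng = np.random.default_rng(0)
x, t = 0.0, 0.0
for _ in range(int(T / dt)):
    # Euler-Maruyama step for dx/dt = -Phi'(x) + f(t) + sqrt(2D) * noise
    x += dt * (-dPhi(x) + rocking(t)) + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    t += dt
v = x / T
print(v)   # positive drift: the ratchet rectifies the zero-average rocking
```

The drift direction is set by the asymmetry of the sawtooth: with F chosen between the two slope magnitudes Q/(λ_1 L) and Q/(λ_2 L), the particle slides forward during the positive half-cycle and gets trapped in a minimum during the negative one.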
The correlated component of the noise, f(t), imitating the activity of the ATP, is assumed to be periodic and piece-wise constant, see Eq. (3.5). Since our focus here is on active force generation rather than on active oscillations, we de-synchronize the dynamics by introducing phase shifts t_i, assumed to be independent random variables uniformly distributed in the interval [0, T]; we also allow the pre-strains ℓ_i to be random and distribute them in the intervals [iL - a/2, iL + a/2]. Quenched disorder disfavors the coherent oscillations observed under some special conditions (e.g. Ref. [166]). While we leave such collective effects outside our review, several comprehensive expositions are available in the literature, see Refs. [12; 36; 38; 112; 114; 143; 157; 158; 165; 166; 171; 173; 272-282]. To illustrate the behavior of individual mechanical units we first fix the parameter z = 0 and write the total energy of a cross-bridge as a function of the two remaining mechanical variables y and x:

v(x, y) = Φ(x) + u_SS(y - x) + v_e(-y). (4.4)

The associated energy landscape is shown in Fig. 50, where the upper two local minima A and B indicate the pre-power-stroke and the post-power-stroke configurations of a motor attached in one position on the actin potential, while the two lower local minima A′ and B′ correspond to the pre-power-stroke and the post-power-stroke configurations of a motor shifted to a neighboring attached position. We associate the detached state with an unstable position around the maximum separating the minima (A, B) and (A′, B′), see Ref. [269] for more details. In Fig. 51 we show the results of numerical simulations of isotonic contractions at f_ext = 0.5 T_0, where T_0 is the stall tension. Observe that the position of the backbone can be considered stationary during the recharging of the power stroke.
In this situation, the key factor for the possibility of recharging (after the variable x has overcome the barrier in the periodic potential) is that the total energy v(x, y) has a minimum when the snap-spring is in the pre-power-stroke state. The corresponding analytical condition is (Q/v_0) > (λ_1 L)/a, which places an important constraint on the choice of parameters [269]. A direct comparison of the simulated mechanical cycle with the Lymn-Taylor cycle (see Fig. 2) shows that while the two attached configurations are represented adequately in this model, the detached configurations appear only as transients. In fact, one can see that the (slow) transition B → A′ provides a combined description of the detachment, of the power-stroke recharge, and then of another attachment. Since in the actual biochemical cycle such a transition is described by at least two distinct chemical states, the ratchet-driven model is only in partial agreement with the biochemical data.

Contractions driven by the power stroke

We now consider the possibility that acto-myosin contractions are propelled directly by a conformational change. A model where the power stroke is the only active mechanism driving muscle contraction was developed in Ref. [244]. To justify such a change of the modeling perspective, we recall that in the physiological literature active force generation is largely attributed to the power stroke, which is perceived as a part of the active rather than the passive machinery [153]. This opinion is supported by observations that both the power stroke and the reverse power stroke can be induced by ATP even in the absence of actin filaments [71], that contractions can be significantly inhibited by antibodies which impair lever arm activity [283], that the sliding velocity in mutational myosin forms depends on the lever arm length [192], and that the directionality can be reversed as a result of modifications in the lever arm domain [284; 285].
Although the simplest models of Brownian ratchets neglect the conformational change in the head domain, some phases of the attachment-detachment cycle have been interpreted as a power stroke viewed as a part of the horizontal shift of the myosin head [144; 286]. In addition, ratchet models were considered with the periodic spatial landscape supplemented by a reaction coordinate representing the conformational change [287; 288]. In all these models, however, the power stroke was viewed as a secondary element, and contractions could be generated even with the power stroke disabled. The main functionality of the power-stroke mechanism was attributed to fast force recovery, which could be activated by loading but was not directly ATP-driven [74; 99; 289]. The apparently conflicting viewpoint that the power-stroke mechanism consumes metabolic energy remains, however, the underpinning of the phenomenological chemo-mechanical models that assign active roles to both the attachment-detachment and the power-stroke mechanisms [86; 102]. These models pay great attention to structural details and, in their most comprehensive versions, faithfully reproduce the main experimental observations [68; 115; 290]. In an attempt to reach a synthetic description, several hybrid models, allowing chemical states to coexist with springs and forces, have also been proposed, see Refs. [112; 152; 291]. These models, however, still combine continuous dynamics with jump transitions, which makes the precise identification of the structural analogs of the chemical steps and of the underlying micro-mechanical interactions challenging [154].

The model. Here, following Ref. [244], we sketch a mechanistic model of muscle contractions in which the power stroke is the only active agent. To de-emphasize the ratchet mechanism discussed in the previous section, we simplify the real picture and represent actin filaments as passive, non-polar tracks.
The power-stroke mechanism is again represented by a symmetric bi-stable potential, and the ATP activity is modeled as a center-symmetric correlated force with zero average acting on the corresponding configurational variable. A schematic representation of the model for a single cross-bridge is given in Fig. 52(b), where x is the observable position of the myosin catalytic domain, y - x is the internal variable characterizing the phase configuration of the power-stroke element, and d is the distance between the myosin head and the actin filament. Through the variable d we can take into account that when the lever arm swings, the interaction of the head with the binding site weakens, see Fig. 52. To mimic the underlying steric interaction, we assume that when a myosin head executes the power stroke, it moves away from the actin filament, and therefore the control function Ψ(y - x) progressively switches off the actin potential, so that d = Ψ(y - x) (4.6) and the energy of a cross-bridge takes the form G(x, y) = Ψ(y - x) Φ(x) + u_SS(y - x). Then the overdamped stochastic dynamics can be described by the system of dimensionless Langevin equations

ẋ = -∂_x G(x, y) - f(t) + √(2D) ξ_x(t),
ẏ = -∂_y G(x, y) + f(t) + √(2D) ξ_y(t). (4.7)

Here ξ(t) is the standard white noise with ⟨ξ_i(t)⟩ = 0 and ⟨ξ_i(t) ξ_j(s)⟩ = δ_ij δ(t - s), and D is a dimensionless measure of temperature; for simplicity the viscosity coefficients are assumed to be the same for the variables x and y. The time-dependent force couple f(t) with zero average represents the correlated component of the noise. In the computational experiments, a periodic extension of the symmetric triangular potential Φ(x) with amplitude Q and period L was used, see Fig. 53(a). The symmetric potential u_SS(y - x) was taken to be bi-quadratic, with the same stiffness k in both phases and the distance between the bottoms of the wells denoted by a, see Fig. 53(b). The correlated component of the noise f(t) was described by a periodic extension of a rectangular-shaped function with amplitude A and period τ, see Fig. 53(c).
Finally, the steric control ensuring the gradual switch of the actin potential is described by the step function

Ψ(s) = (1/2)[1 - tanh(s/ε)], (4.8)

where ε is a small parameter, see Fig. 53(d). The first goal of any mechanical model of muscle contraction is to generate a systematic drift v = lim_{t→∞} ⟨x(t)⟩/t without applying a biasing force. The dependence of the average velocity v on the parameters of the model is summarized in Fig. 54. It is clear that the drift in this model is exclusively due to A ≠ 0. When A is small, the drift velocity shows a maximum at finite temperatures, which implies that the system exhibits stochastic resonance [294]. At high amplitudes of the ac driving, the motor works as a purely mechanical ratchet, and the increase of temperature only worsens the performance [136; 137; 143]. One can say that the system (4.7) describes a power-stroke-driven ratchet, because the correlated noise f(t) acts on the relative displacement y - x. It effectively "rocks" the bistable potential, and the control function Ψ(y - x) converts such "rocking" into the "flashing" of the periodic potential Φ(x). It is also clear that the symmetry breaking in this problem is imposed exclusively by the asymmetry of the coupling function Ψ(y - x). Various other types of rocked-pulsated ratchet models have been studied in Refs. [295; 296]. The idea that the source of non-equilibrium in Brownian ratchets resides in internal degrees of freedom [297; 298] originated in the theory of processive motors [299-302]. For instance, in the description of dimeric motors it is usually assumed that ATP hydrolysis induces a conformational transformation which then affects the position of the motor legs [303]. Here the same idea is used to describe a non-processive motor with a single leg that remains on track due to the presence of a thick filament.
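The conversion of "rocking" into "flashing" can be illustrated with a stripped-down deterministic (D = 0) version of (4.7). The Python sketch below (illustrative parameters; the gradients of G are taken by finite differences to avoid sign errors at the kinks) starts from the attached, pre-power-stroke state and follows one period of the rocking couple: Ψ drops to about 0 after the stroke (detachment) and returns to about 1 after recharging (reattachment).

```python
import numpy as np

# Illustrative parameters (not from the original paper's fits)
Q, L = 1.0, 1.0          # symmetric triangular actin potential Phi
k, a = 1.0, 1.0          # bi-quadratic power-stroke element u_SS
A, tau = 1.5, 10.0       # square-wave rocking force couple f(t)
eps = 0.1                # sharpness of the steric control Psi

def Phi(x):
    xm = x % L
    return 2.0 * Q * xm / L if xm < 0.5 * L else 2.0 * Q * (L - xm) / L

def uSS(s):
    return 0.5 * k * (abs(s) - 0.5 * a) ** 2

def Psi(s):
    return 0.5 * (1.0 - np.tanh(s / eps))

def G(x, y):
    s = y - x
    return Psi(s) * Phi(x) + uSS(s)     # holonomic coupling d = Psi(y - x)

def grad(x, y, h=1e-6):
    gx = (G(x + h, y) - G(x - h, y)) / (2 * h)
    gy = (G(x, y + h) - G(x, y - h)) / (2 * h)
    return gx, gy

# Deterministic (D = 0) integration over one rocking cycle, starting attached,
# pre-power-stroke: x at a minimum of Phi, y - x = -a/2.
dt = 1e-3
x, y = 0.0, -0.5 * a
psi_trace = []
for i in range(int(tau / dt)):
    f = A if (i * dt) % tau < 0.5 * tau else -A
    gx, gy = grad(x, y)
    x += dt * (-gx - f)
    y += dt * (-gy + f)
    psi_trace.append(Psi(y - x))

psi_half, psi_full = psi_trace[len(psi_trace) // 2 - 1], psi_trace[-1]
print(psi_half, psi_full)
```

During the positive half-cycle the couple stretches y - x past the switching region while x sits near a minimum of Φ, so the actin potential is turned off; the negative half-cycle recharges the power-stroke element and turns it back on.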
By placing emphasis on the active role of the conformational change in non-processive motors, the model brings closer the descriptions of porters and rowers, as originally envisaged in Ref. [304].

4.2.2. Hysteretic coupling

The analysis presented in Ref. [244] has shown that, in order to reproduce the whole Lymn-Taylor cycle, the switchings of the actin potential must take place at different values of the variable y - x depending on the direction of the conformational change. In other words, we need to replace the holonomic coupling (4.6) by the memory operator

d{x, y} = Ψ{y(t) - x(t)}, (4.9)

whose output depends on whether the system is on the "striking" or on the "recharging" branch of the trajectory, see Fig. 55. Such a memory structure can also be described by a rate-independent differential relation of the form ḋ = Q(x, y, d) ẋ + R(x, y, d) ẏ, which makes the model non-holonomic. Using (4.9) we can rewrite the energy of the system as a functional of its history y(t) and x(t):

G{x, y} = Ψ{y(t) - x(t)} Φ(x) + u_SS(y - x). (4.10)

In the Langevin setting (4.7), the history dependence may mean that the underlying microscopic stochastic process is non-Markovian (due to, say, configurational pinning [305]), or that there are additional non-thermalized degrees of freedom that are not represented explicitly, see Ref. [306]. In general, it is well known that realistic feedback implementations always involve delays [307]. To simulate our hysteretic ratchet numerically, we used two versions of the coupling function (4.8), shifted by δ, with the branches Ψ(y - x ± δ) blended sufficiently far away from the hysteresis domain, see Fig. 55. Our numerical experiments show that the performance of the model is not sensitive to the shape of the hysteresis loop and depends mostly on its width, characterized by the small parameter δ. In Fig. 56 we illustrate the "gait" of the ensuing motor.
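A minimal implementation of the memory operator (4.9) replaces the single sigmoid (4.8) by two branches Ψ(y - x ∓ δ), selected by the direction of motion of y - x. The Python sketch below is a hedged simplification: the smooth blending of the branches far from the hysteresis domain, mentioned in the text, is omitted.

```python
import numpy as np

class HystereticCoupling:
    """Rate-independent memory version of the steric control, cf. Eq. (4.9):
    the 'striking' stroke (s = y - x increasing) and the 'recharging' stroke
    (s decreasing) follow sigmoid branches Psi(s -/+ delta), producing a
    hysteresis loop of width 2*delta in the (s, d) plane."""
    def __init__(self, eps=0.1, delta=0.2):
        self.eps, self.delta = eps, delta
        self.s_prev = None
        self.branch = -1.0          # -1: recharging branch, +1: striking branch

    def psi(self, s):
        if self.s_prev is not None:
            if s > self.s_prev:
                self.branch = +1.0   # moving toward the post-power-stroke state
            elif s < self.s_prev:
                self.branch = -1.0
        self.s_prev = s
        shift = self.branch * self.delta
        return 0.5 * (1.0 - np.tanh((s - shift) / self.eps))

h = HystereticCoupling()
up = [h.psi(s) for s in np.linspace(-1.0, 1.0, 201)]     # striking stroke
down = [h.psi(s) for s in np.linspace(1.0, -1.0, 201)]   # recharging stroke
# On the way up, detachment (Psi -> 0) occurs near s = +delta, while on the
# way down reattachment occurs near s = -delta: the delayed switching breaks
# detailed balance even without the ac drive.
print(up[100], down[100])   # values at s = 0 on the two branches
```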
The center of mass advances in steps, and during each step the power-stroke mechanism gets released and then recharged again, which takes place concurrently with attachment-detachment. By coupling the attached state with either the pre- or the post-power-stroke state, we can vary the directionality of the motion. The average velocity increases with the width of the hysteresis loop, which shows that the motor can extract more energy from the coupling mechanism with longer delays. The results of the parametric study of the model are summarized in Fig. 57. The motor can move even in the absence of the correlated noise, at A = 0, because the non-holonomic coupling (4.10) breaks the detailed balance by itself. At finite A the system can use both sources of energy (hysteretic loop and ac noise) and the resulting behavior is much richer than in the non-hysteretic model, see Ref. [244] for more details.

Lymn-Taylor cycle. The mechanical "stroke" in the space of internal variables (d, y - x) can now be compared with the Lymn-Taylor acto-myosin cycle [59] shown in Fig. 2 and, in the notations of this Section, in Fig. 58(a). We recall that the chemical states constituting the Lymn-Taylor cycle have been linked to the structural configurations (obtained from crystallographic reconstructions): A (attached, pre-power-stroke → AM*ADP*Pi), B (attached, post-power-stroke → AM*ADP), C (detached, post-power-stroke → M*ATP), D (detached, pre-power-stroke → M*ADP*Pi). In the discussed model the jump events are replaced by continuous transitions and the association of chemical states with particular regimes of stochastic dynamics is not straightforward. In Fig. 58(b), we show a fragment of the averaged trajectory of a steadily advancing motor projected on the (x, y - x) plane. In Fig. 58(c) the same trajectory is shown in the (x, y - x, d) space with fast advances in the d direction intentionally schematized as jumps. By using the same letters A, B, C, D as in Fig.
58(a) we can visualize a connection between the chemical/structural states and the transient mechanical configurations of the advancing motor. Suppose, for instance, that we start at point A corresponding to the end of the negative cycle of the ac driving f(t). The system is in the attached, pre-power-stroke state and d = 1. As the sign of the force f(t) changes, the motor undergoes a power-stroke and reaches point B while remaining in the attached state. When the configurational variable y - x passes the detachment threshold, the myosin head detaches, which leads to a transition from point B to B′ on the plane d = 0. Since the positive cycle of the force f(t) continues, the motor completes the power-stroke by moving from B′ to point C. At this moment, the rocking force changes sign again, which leads to recharging of the power-stroke mechanism in the detached state, described in Fig. 58(a) as a transition from C to D. At point D, the variable y - x reaches the attachment threshold. The myosin head reattaches and the system moves to point D′ where d = 1 again. The recharging continues in the attached state as the motor evolves from D′ to a new state A, shifted by one period. One can see that the chemical states constituting the minimal enzyme cycle can be linked to the mechanical configurations traversed by this stochastic dynamical system. The detailed mechanical picture, however, is more complicated than in the prototypical Lymn-Taylor scheme. In some stages of the cycle one can use the Kramers approximation to build a description in terms of a discrete set of chemical reactions; however, the number of such reactions should be larger than in the minimal Lymn-Taylor model. In conclusion, we mention that the identification of the chemical states, known from the studies of the prototypical catalytic cycle in solution, with mechanical states, is a precondition for the bio-engineering reproduction of a wide range of cellular processes.
In this sense, the discussed schematization of the contraction phenomenon can be viewed as a step towards building engineering devices imitating actomyosin enzymatic activity.

Force-velocity relations. The next question is how fast such a motor can move against an external cargo. To answer this question we assume that the force f_ext acts on the variable y, which amounts to tilting of the potential (4.10) along the y direction: G{x, y} = Ψ{y(t) - x(t)}Φ(x) + u_SS(y - x) - y f_ext. (4.11) A stochastic system with energy (4.11) was studied numerically in Ref. [244] and in Fig. 59 we illustrate the obtained force-velocity relations. The quadrants in the (f_ext, v) plane where R = f_ext v > 0 describe dissipative behavior. In the other two quadrants, where R = f_ext v < 0, the system shows anti-dissipative behavior. Observe that at low temperatures the convexity properties of the force-velocity relations in active pushing and active pulling regimes are different. In the case of pulling, the typical force-velocity relation is reminiscent of the Hill curve describing isotonic contractions, see Ref. [79]. In the case of pushing, the force-velocity relation can be characterized as convex-concave, and such behavior has also been recorded in muscles, see Refs. [161; 308; 309]. The asymmetry is due to the dominance of different mechanisms in different regimes. For instance, in the pushing regimes, the motor activity fully depends on ac driving and at large amplitudes of the driving the system performs as a mechanical ratchet. Instead, in the pulling regimes, associated with small amplitudes of external driving, the motor advances because of the delayed feedback. Interestingly, the dissimilarity of convexity properties of the force-velocity relations in pushing and pulling regimes has also been noticed in the context of cell motility, where actomyosin contractility is one of the two main driving forces, see Ref. [310].

Descending limb

In this Section, following Ref.
[238], we briefly address one of the most intriguing issues in mesoscopic muscle mechanics: an apparently stable behavior on the "descending limb", which is a section of the force-length curve describing isometrically tetanized muscle [17-19; 39]. As we have seen in the previous Sections, the active force f generated by a muscle in a hard (isometric) device depends on the number of pulling cross-bridge heads. The latter is controlled by the filament overlap, which may be changed by the (pre-activation) passive stretch ∆ℓ. A large number of experimental studies have been devoted to the measurement of the isometric tetanus curve f(∆ℓ), see Fig. 60 and Fig. 3. Since stretch beyond a certain limit would necessarily decrease the filament overlap, the active component of f(∆ℓ) must contain a segment with a negative slope known as the "descending limb" [78; 311-315]. The negative stiffness associated with the active response is usually corrected by the positive stiffness provided by passive cross-linkers that connect actin and myosin filaments. However, for some types of muscles the total force-length relation f(∆ℓ), describing active and passive elements connected in parallel, still has a range where force decreases with elongation. It is this overall negative stiffness that will be the focus of the following discussion. If the curve f(∆ℓ) is interpreted as a description of the response of the "muscle material" shown in Fig. 61, the softening behavior associated with negative overall stiffness should lead to localization instability and the development of strain inhomogeneities [227; 316]. In terms of the observed quantities, the instability would mean that any initial imperfection would cause a single myosin filament to be pulled away from the center of the activated half-sarcomeres.
Some experiments indeed seem to be consistent with non-uniformity of the Z-line spacing, and with random displacements of the thick filaments away from the centers of the sarcomeres [225; 312; 314; 317; 318]. The nontrivial half-sarcomere length distribution can also be blamed for the observed disorder and skewing [319; 320]. The link between non-affine deformation and the negative stiffness is also consistent with the fact that the progressive increase of the range of dispersion in half-sarcomere lengths, associated with a slow rise of force during tetanus (creep phase), was observed mostly around the descending limb [321-323], even though the expected ultimate strain localization leading to failure was not recorded. A related feature of the muscle response on the descending limb is the non-uniqueness of the isometric tension, which was shown to depend on the pathway through which the elongation is reached. Experiments demonstrate that when a muscle fiber is activated at a fixed length and then suddenly stretched while active, the tension first rises and then falls without reaching the value that the muscle generates when stimulated isometrically [320; 324-330]. The difference between tetani subjected to such post-stretch and the corresponding isometric tetani reveals a positive instantaneous stiffness on the descending limb. Similar phenomena have been observed during sudden shortening of the active muscle fibers: if a muscle is allowed to shorten to the prescribed length it develops less tension than during direct tetanization at the final length. All these puzzling observations have been discussed extensively in the literature interpreting half-sarcomeres as softening elastic springs [54; 78; 331-334]. The fact of instability on the descending limb for such a spring chain was already realized by Hill [331] and various aspects of this instability were later studied in Refs. [332; 335].
It is broadly believed that a catastrophic failure in this system is warranted but is not observed because of the anomalously slow dynamics [313; 320; 336-338]. In a dynamical version of the model of a chain with softening springs, each contractile component is additionally bundled with a dashpot characterized by a realistic (Hill-Katz) force-velocity relation [226; 313; 320; 336-339]. A variety of numerical tests in such a dynamic setting demonstrated that around the descending limb the half-sarcomere configuration can become non-uniform, but at a time scale which is unrealistically long. Such an over-damped dynamic model was shown to be fully compatible with the residual force after stretch on the descending limb, and the associated deficit of tension after shortening. These simulations, however, left unanswered the question about the fundamental origin of the multi-valuedness of the muscle response around the descending limb. For instance, it is still debated whether such non-uniqueness is a property of individual half-sarcomeres or a collective property of the whole chain. It is also apparently unclear how the local (microscopic) inhomogeneity of a muscle myofibril can coexist with the commonly accepted idea of a largely homogeneous response at the macro-level. To address these questions we revisit here the one-dimensional chain model with softening springs reinforced by parallel (linear) elastic springs, see Figs. 61 and 62. A formal analysis [238], following a similar development in the theory of shape memory alloys [227], shows that this mechanical system has an exponentially large (in N) number of configurations with equilibrated forces, see an illustration for small N in Fig. 63; our goal will be to explore the consequences of the complexity of the properly defined energy landscape.

Pseudo-elastic energy.
The physical meaning of the energy associated with the parallel passive elements is clear, but the challenge is to associate an energy function with active elements. In order to generate active force, motors inside the active element receive and dissipate energy; however, this is not the energy we need to account for. As we have already seen, active elements possess their own passive mechanical machinery which is loaded endogenously by molecular motors. Therefore some energy is stored in these passive structures. For instance, we can account for the elastic energy of attached springs and also consider the energy of de-bonding. A transition from one tetanized state to another tetanized state leads to a change in the stored energy of these passive structures. Suppose that to make an elongation dℓ along the tetanus, the external force f(ℓ) must perform the work f dℓ = dW, where W(ℓ) is the energy of the passive structures that accounts not only for elastic stretching but also for inelastic effects associated with the changes in the number of attached cross-bridges. By using the fact that the isometric tetanus curve f(ℓ) has an up-down-up structure we can conclude that the effective energy function W(ℓ) must have a double-well structure. If we subtract the contribution due to parallel elasticity W_p(ℓ), we are left with the active energy W_a(ℓ), which will then have the form of a Lennard-Jones potential. Shortening below the inflection point of this potential would lead to partial "neutralization" of cross-bridges, and as a result the elastic energy of contributing pullers progressively diminishes. Instead, we can assume that when the length increases beyond the inflection point (point of optimal overlap), the system develops damage (debonding) and therefore the energy increases. After all bonds are broken, the energy of the active element does not change any more and the generated force becomes equal to zero.

Local model.
Consider now a chain of half-sarcomeres with nearest-neighbor interactions and controlled total length, see Fig. 61. Suppose that the system selects mechanical configurations where the energy invested by pullers in loading the passive sub-structures is minimized. The energy minimizing configurations will then deliver an optimal trade-off between elasticity and damage in the whole ensemble of contractile units. This assumption is in agreement with the conventional interpretation of how living cells interact with an elastic environment. For instance, it is usually assumed that active contractile machinery inside a cell rearranges itself in such a way that the generated elastic field in the environment minimizes the elastic energy [340; 341]. The analysis of the zero-temperature chain model for a myofibril whose series elements are shown in Fig. 62 confirms that the ensuing energy landscape is rugged, see Ref. [238]. The possibility of a variety of evolutionary paths in such a landscape creates a propensity for history dependence, which, in turn, can be used as an explanation of both the "permanent extra tension" and the "permanent deficit of tension" observed in areas adjacent to the descending limb. The domain of metastability on the force-length plane, see Fig. 63, is represented by a dense set of stable branches with a fixed degree of inhomogeneity. Note that in this system the negative overall slope of the force-length relation along the global minimum path can be viewed as a combination of a large number of micro-steps with positive slopes. Such "coexistence" of the negative averaged stiffness with the positive instantaneous stiffness, first discussed in Ref. [332], can be responsible for the stable performance of the muscle fiber on the descending limb. Observe, however, that the strategy of global energy minimization contradicts observations because the reported negative overall stiffness is incompatible with the implied convexification of the total energy.
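The branch structure behind this "combination of micro-steps" can be checked on a toy hard-device chain of N bistable (two-well) springs in series — a stripped-down stand-in for the half-sarcomere chain, with illustrative well positions and without damage or parallel elements. Force balance gives one force-length branch per number m of "popped" elements, each branch C(N, m)-fold degenerate under permutations:

```python
from itertools import product

def branch_force(phases, L, k=1.0, wells=(0.0, 1.0)):
    """Common force in a series chain of bistable springs of stiffness k
    at total length L: each spring sits in well wells[p] and stretches
    by f/k, so L = sum(wells[p_i]) + N*f/k."""
    N = len(phases)
    return k*(L - sum(wells[p] for p in phases))/N

N, L = 4, 2.0
configs = list(product((0, 1), repeat=N))  # 2**N equilibrium configurations
forces = sorted({round(branch_force(p, L), 12) for p in configs})
# the 2**N metastable configurations collapse onto only N + 1 distinct
# force-length branches because of permutation degeneracy
```

Moving between adjacent branches (changing m by one) as the length grows is what produces the positively sloped micro-steps that compose the descending limb.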
Moreover, the global minimization scenario predicts a considerable amount of vastly over-stretched (popped) half-sarcomeres that have not been seen in experiments. We are then left with the conclusion that along the isometric tetanus at least some of the active, non-affine configurations correspond to local rather than global minima of the stored energy. A possible representation of the experimentally observed tetanus curve as a combination of local and global minimization segments is presented by a solid thick line in Fig. 63. In view of the quasi-elastic nature of the corresponding response, it is natural to associate the ascending limb of the tetanus curve at small levels of stretch with the homogeneous (affine) branch of the global minimum path (segment AB in Fig. 63). Assume that around the point where the global minimum configuration becomes non-affine (point B in Fig. 63), the system remains close to the global minimum path. Then, the isometric tetanus curve forms a plateau separating ascending and descending limbs (segment between points B and C in Fig. 63). Such a plateau is indeed observed in experiments on myofibrils and is known to play an important physiological role ensuring robustness of the active response. We can speculate that a limited mixing of "strong" and "weak" (popped) half-sarcomeres responsible for this plateau can be confined close to the ends of a myofibril while remaining almost invisible in the bulk of the sample. To account for the descending limb, we must assume that as the length of the average half-sarcomere increases beyond the end of the plateau (point C in Fig. 63), the tetanized myofibril can no longer reach the global minimum of the stored energy. To match observations we assume that beyond point C in Fig.
63 the attainable metastable configurations are characterized by the value of the active force, which deviates from the Maxwell value and becomes progressively closer to the value generated by the homogeneous configurations as we approach the state of no overlap (point D). The numerical simulations show [238] that the corresponding non-affine configurations can be reached dynamically as a result of the instability of a homogeneous state. One may argue that such almost affine metastable configurations may also be favored due to the presence of some additional mechanical signaling, which takes the form of inter-sarcomere stiffness or next-to-nearest-neighbor (NNN) interactions. As the point D in Fig. 63 is reached, all cross-bridges are detached and beyond this point the myofibril is supported exclusively by the passive parallel elastic elements (segment DE). Since all the metastable non-affine states involved in this construction have an extended range of stability, the application of a sudden deformation will take the system away from the isometric tetanus curve BCD in Fig. 63. It is then difficult to imagine that the isometric relaxation, following such an eccentric loading, will allow the system to stabilize again exactly on the curve BCD. Such a "metastable" response would be consistent with residual force enhancement observed not only around the descending limb but also above the optimal (physiological) plateau and even around the upper end of the ascending limb. It is also consistent with the observations showing that the residual force enhancement after stretch is independent of the velocity of the stretch, that it increases with the amplitude of the stretch and that it is most pronounced along the descending limb.

Nonlocal model.
While the price of stability in this system appears to be the emergence of a limited microscopic non-uniformity in the distribution of half-sarcomere lengths, we now argue that it may still be compatible with the macroscopic (averaged) uniformity of the whole myofibril [319]. To support this statement we briefly discuss here a model of a myofibril which involves long-range mechanical signaling between half-sarcomeres via the surrounding elastic medium, see Ref. [238]. The model is illustrated in Fig. 64. It includes two parallel elastically coupled chains. One of the chains, containing double-well springs, is the same as in the local model. The other chain contains elements mimicking additional elastic interactions in the myofibril of possibly non-one-dimensional nature; it is assumed that the corresponding shear (leaf) springs are linearly elastic. The ensuing model is nonlocal and involves competing interactions: the double-well potential of the snap-springs favors sharp boundaries between the "phases", while the elastic foundation term favors strain uniformity. As a result of this competition the energy minimizing state can be expected to deliver an optimal trade-off between the uniformity at the macro-scale and the non-uniformity (non-affinity) at the micro-scale. The nonlocal extension of the chain model lacks the permutation degeneracy and generates peculiar microstructures with fine mixing of shorter half-sarcomeres located on the ascending limb of the tension-length curve and longer half-sarcomeres supported mostly by the passive structures [238]. The mixed configurations represent periodically modulated patterns that are indistinguishable from the homogeneous deformation if viewed at a coarse scale. The descending limb can again be interpreted as a union of positively sloped steps that can now be of vastly different sizes.
It is interesting that the discrete structure of the force-length curve survives in the continuum limit, which instead of smoothening makes it extremely singular. More specifically, the variation of the degree of non-uniformity with elongation along the global energy minimum path exhibits a complete devil's-staircase-type behavior, first identified in a different but conceptually related system [342], see Fig. 65 and Ref. [238] for more details. To make the nonlocal model compatible with observations, one should again abandon the global minimization strategy and associate the descending limb with metastable (rather than stable) states. In other words, one needs to apply an auxiliary construction similar to the one shown in Fig. 63 for the local model, which anticipates an outcome produced by a realistic kinetic model of tetanization.

Non-muscle applications

The prototypical nature of the main model discussed in this review (the HS model: a parallel bundle of bistable units in a passive or active setting) makes it relevant far beyond the skeletal muscle context. It provides the most elementary description of molecular devices capable of transforming, in a Brownian environment, a continuous input into a binary, all-or-none output that is crucial for the fast and efficient stroke-like behavior. The capacity of such systems to flip in a reversible fashion between several metastable conformations is essential for many processes in cellular physiology, including cell signaling, cell movement, chemotaxis, differentiation, and selective expression of genes [343; 344]. Usually, both the input and the output in such systems, known as allosteric, are assumed to be of biochemical origin. The model, dealing with mechanical response and relying on mechanical driving, complements biochemical models and presents an advanced perspective on allostery in general. The most natural example of the implied hypersensitivity concerns the transduction channels in hair cells [345].
Each hair cell contains a bundle of N ≈ 50 stereocilia. The broadly accepted model of this phenomenon [119] views the hair bundle as a set of N bistable springs arranged in parallel. It is identical to the HS model if the folded (unfolded) configurations of cross-bridges are identified with the closed (open) states of the channels. The applied loading, which tilts the potential and biases in this way the distribution of closed and open configurations, is treated in this model as in the hard device version of the HS model. Experiments, involving a mechanical solicitation of the hair bundle through an effectively rigid glass fiber, showed that the stiffness of the hair bundle is negative around the physiological functioning point of the system [120], which is fully compatible with the predictions of the HS model. A similar analogy can be drawn between the HS model and the models of collective unzipping for adhesive clusters [7; 12; 118; 341; 346]. At the micro-scale we again encounter N elements representing, for instance, integrins or cadherins, that are attached in parallel to a common, relatively rigid pad. The two conformational states, which can be described by a single spin variable, are the bound and the unbound configurations. The binding-unbinding phenomena in a mechanically biased system of the HS type are usually described by the Bell model [117], which is a soft device analog of the HS model with κ0 = ∞. In this model the breaking of an adhesive bond represents an escape from a metastable state and the corresponding rates are computed by using Kramers' theory [341; 347] as in the HS model. In particular, the rebinding rate is often assumed to be constant [263; 348], which is also the assumption of HS for the reverse transition from the post- to the pre-power-stroke state.
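The force-induced destabilization of such an adhesive cluster can be sketched with the classical mean-field Bell-model rate equation (parameter values and time units here are illustrative assumptions): n bound bonds share the load f_tot equally, detach at the force-dependent Bell rate k0·exp(f/f_b), and rebind at a constant rate, as assumed above.

```python
import math

def cluster_evolution(f_tot, N=50, k0=1.0, k_on=1.0, f_b=1.0,
                      dt=1e-3, t_end=50.0):
    """Forward-Euler integration of dn/dt = -k_off(n)*n + k_on*(N - n),
    with the Bell off-rate k_off = k0*exp((f_tot/n)/f_b)."""
    n = float(N)  # start fully bound
    for _ in range(int(t_end/dt)):
        if n <= 0.0:
            return 0.0  # cluster has collapsed
        k_off = k0*math.exp(min(f_tot/(n*f_b), 300.0))  # cap avoids overflow
        n += dt*(-k_off*n + k_on*(N - n))
    return n
```

At zero force the cluster equilibrates at n = N·k_on/(k_on + k0); above a critical load, rebinding cannot compensate the exponentially growing off-rate and the bound fraction collapses to zero — the absence of cooperative rebinding from the fully detached state discussed in the text.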
More recently, Bell's model was generalized through the inclusion of ligand tethers, bringing a finite value to κ0 and using the master equation for the probability distribution of attached units [118; 348]. The main difference between the Bell-type models and the HS model is that the detached state cannot bear force while the unfolded conformation can. As a result, while the cooperative folding-unfolding (ferromagnetic) behavior in the HS model is possible in the soft device setting [99], similar cooperative binding-unbinding in the Bell model is impossible because the rebinding of a fully detached state has zero probability. To obtain cooperativity in models of adhesive clusters, one must use a mixed device, mimicking the elastic backbone and interpolating between soft and hard driving [118; 179; 197; 341]. Muscle tissues maintain stable architecture over long periods of time. However, it is also feasible that transitory muscle-type structures can be assembled to perform particular functions. An interesting example of such assembly is provided by the SNARE proteins responsible for the fast release of neurotransmitters from neurons to synaptic clefts. The fusion of synaptic vesicles with the presynaptic plasma membrane [349; 350] is achieved by mechanical zipping of the SNARE complexes which can in this way transform from the open to the closed conformation [351]. To complete the analogy, we mention that individual SNAREs participating in the collective zipping are attached to an elastic membrane that can be mimicked by an elastic or even rigid backbone [352]. The presence of a backbone mediating long-range interactions allows the SNAREs to cooperate in fast and efficient closing of the gap between the vesicle and the membrane. The analogy with muscles is corroborated by the fact that synaptic fusion takes place at the same time scale as the fast force recovery (1 ms) [353].
Yet another class of phenomena that can be rationalized within the HS framework is the ubiquitous flip-flopping of macro-molecular hairpins subjected to mechanical loading [187; 188; 196; 199]. We recall that in a typical experiment of this type, a folded (zipped) macromolecule is attached through compliant links to micron-sized beads trapped in optical tweezers. As the distance between the laser beams is increased, the force applied to the molecule rises up to a point where the subdomains start to unfold. An individual unfolding event may correspond to the collective rupture of N molecular bonds or an unzipping of a hairpin. The corresponding drops in the force, accompanied by an abrupt increase in the total stretch, can lead to an overall negative stiffness response [186; 199; 203]. Other molecular systems exhibiting cooperative unfolding include protein β-hairpins [354] and coiled coils [209]. The backbone-dominated internal architecture in all these systems leads to the common mean-field-type mechanical feedback exploited by the parallel bundle model [355; 356]. Realistic examples of unfolding in macromolecules may involve complex "fracture" avalanches [357] that cannot be modeled by using the original HS model. However, the HS theoretical framework is general enough to accommodate hierarchical meta-structures whose stability can also be biased by mechanical loading. The importance of the topology of interconnections among the bonds and the link between the collective nature of the unfolding and the dominance of the HS-type parallel bonding have long been stressed in the studies of protein folding [358]. The broad applicability of the HS mechanical perspective on collective conformational changes is also corroborated by the fact that proteins and nucleic acids exhibit negative stiffness and behave differently in soft and hard devices [209; 359; 360].
The ensemble dependence in these systems suggests that additional structural information can be obtained if the unfolding experiments are performed in the mixed device setting. The type of loading may be affected through the variable rigidity of the "handles" [361; 362] or the use of an appropriate feedback control that can be modeled in the HS framework by a variable backbone elasticity. As we have already mentioned, collective conformational changes in distributed biological systems containing coupled bistable units can be driven not only mechanically, by applying forces or displacements, but also biochemically by, say, varying concentrations or chemical potentials of ligand molecules in the environment [363]. Such systems can become ultrasensitive to external stimulations as a result of the interaction between individual units undergoing conformational transformation, which gives rise to the phenomenon of conformational spread [344; 364]. The switch-like input-output relations are required in a variety of biological applications because they ensure both robustness in the presence of external perturbations and the ability to quickly adjust the configuration in response to selected stimuli [343; 365]. The mastery of control of biological machinery through mechanically induced conformational spread is an important step in designing efficient biomimetic nanomachines [195; 366; 367]. Since interconnected devices of this type can be arranged in complex modular metastructures endowed with potentially programmable mechanical properties, they are of particular interest for micro-engineering of energy-harvesting devices [13]. To link this behavior to the HS model, we note that the amplified dose response, characteristic of allostery, is analogous to the sigmoidal stress response of the paramagnetic HS system where an applied displacement plays the role of the controlled input of a ligand.
Usually, in allosteric protein systems, the ultrasensitive behavior is achieved as a result of nonlocal interactions favoring all-or-none types of responses; moreover, the required long-range coupling is provided by mechanical forces acting inside membranes and molecular complexes. In the HS model such coupling is modeled by the parallel arrangement of elements, which preserves the general idea of nonlocality. Despite its simplicity, the appropriately generalized HS model [99] captures the main patterns of behavior exhibited by apparently purely chemical systems, including the possibility of a critical point mentioned in Ref. [363].

Conclusions

In contrast to inert matter, mechanical systems of biological origin are characterized by structurally complex network architecture with dominant long-range interactions. This leads to highly unusual mechanical properties in both statics and dynamics. In this review we identified a particularly simple system of this type, mimicking a muscle half-sarcomere, and systematically studied its peculiar mechanics, thermodynamics and kinetics. In the study of passive force generation phenomena our starting point was the classical model of Huxley and Simmons (HS). The original prediction of the possibility of negative stiffness in this model remained largely unnoticed. For 30 years the HS model was studied exclusively in the hard device setting, which concealed the important role of cooperative effects. A simple generalization of the HS model for the mixed device reveals many new effects, in particular the ubiquitous presence of coherent fluctuations. Among other macroscopic effects exhibited by the generalized HS model are the non-equivalence of the response in soft and hard devices and the possibility of negative susceptibilities. These characteristics are in fact typical for nonlinear elastic materials in 3D at zero temperature.
Thus, the relaxed energy of a solid material must be only quasi-convex, which allows for non-monotone stress-strain relations and different responses in soft and hard devices [368]. Behind this behavior is the long-range nature of elastic interactions, which muscle tissues appear to be emulating in 1D. For a long time it was also not noticed that the original parameter fit by HS placed skeletal muscles almost exactly at the critical point. Such criticality is tightly linked to the fact that the number of cross-bridges in a single half-sarcomere is of the order of 100. This number is now shown to be crucial to ensure mechanical ultrasensitivity that is not washed out by finite temperature, and it appears quite natural that muscle machinery is evolutionarily tuned to perform close to a critical point. This assumption is corroborated by the observation that criticality is ubiquitous in biology, from the functioning of the auditory system [120] to the macroscopic control of upright standing [369; 370]. The mechanism of fine-tuning to criticality can be understood if we view the muscle fiber as a device that can actively modify its rigidity. To this end the system should be able to generate a family of stall states parameterized by the value of the mesoscopic strain. A prototypical model reviewed in this paper shows that by controlling the degree of nonequilibrium in the system, one can indeed stabilize apparently unstable or marginally stable mechanical configurations, and in this way modify the structure of the effective energy landscape (when it can be defined). The associated pseudo-energy wells of resonant nature may be crucially involved in securing the robustness of the near-critical behavior of the muscle system.
Needless to say, the mastery of tunable rigidity in artificial conditions can open interesting prospects not only in biomechanics [371] but also in engineering design incorporating negative stiffness [372] or aiming at synthetic materials involving dynamic stabilization [373; 374]. In addition to the stabilization of passive force generation, we also discussed different modalities of how a power-stroke-driven machinery can support active muscle contraction. We have shown that the use of a hysteretic design for the power-stroke motor allows one to reproduce mechanistically the complete Lymn-Taylor cycle. This opens a way towards the dynamic identification of the chemical states, known from the studies of the prototypical catalytic reaction in solution, with particular transient mechanical configurations of the actomyosin complex. At the end of this review we briefly addressed the issue of the ruggedness of the global energy landscape of a tetanized muscle myofibril. The domain of metastability on the force-length plane was shown to be represented by a dense set of elastic responses parameterized by the degree of cross-bridge connectivity to actin filaments. This observation suggests that the negative overall slope of the force-length relation may be a combination of a large number of micro-steps with positive slopes. In this review we focused almost exclusively on the results obtained in our group and mentioned only peripherally some other related work. For instance, we did not discuss a vast body of related experimental results, e.g. Refs. [116; 166; 375; 376]. Among the important theoretical work that we left outside are the results on active collective dynamics of motors [12; 377-379]. Interesting attempts at building alternative models of muscle contraction [56; 380] and at creating artificial devices imitating muscle behavior [195] were also excluded from the scope of this paper.
Other important omissions concern the intriguing mechanical behavior of smooth [381; 382] and cardiac [383-386] muscles, including the thermodynamical framework for modeling chemo-mechanical coupling in muscle contraction proposed by Caruel et al. (CMBE). Despite the significant progress in the understanding of the microscopic and mesoscopic aspects of muscle mechanics achieved in the last years, many fundamental problems remain open. Thus, the peculiar temperature dependence of the fast force recovery [207; 388] has not been systematically studied, despite some recent advances [121; 180]. A similarly important challenge is presented by the delicate asymmetry between shortening and stretching, which may require accounting for the second myosin head (Brunello et al., Proc. Natl. Acad. Sci.). Left outside most of the studies are the short-range coupling between cross-bridges due to filament extensibility [76], the inhomogeneity of the relative displacement between myosin and actin filaments, and, more generally, the possibility of non-affine displacements in the system of interacting cross-bridges. Other under-investigated issues include the mechanical role of additional conformational states [74] and the functionality of parallel elastic elements [389]. We anticipate that more efforts will also be focused on the study of contractional instabilities and actively generated internal motions [148], which should lead to the understanding of the self-tuning mechanism bringing sarcomeric systems towards criticality [99; 390; 391]. Criticality implies that fluctuations become macroscopic, which is consistent with observations at stall force conditions. The proximity to the critical point allows the system to amplify interactions, ensure strong feedback, and achieve considerable robustness in the face of random perturbations.
In particular, it is a way to quickly and robustly switch back and forth between the highly efficient synchronized stroke and the stiff behavior in the desynchronized state [99].

Figure 1.
Figure 2.
Figure 3. Isometric contraction (a) and isotonic shortening (b) experiments. (a) Isometric force T 0 as a function of the sarcomere length, linked to the amount of filament overlap. (b) Force-velocity relation obtained during isotonic shortening. Data in (b) are taken from Ref. [61].
Figure 4. Fast transients in mechanical experiments on single muscle fibers in length clamp [hard device, (a) and (b)] and in force clamp [soft device, (c) and (d)]. Typical experimental responses are shown separately on a slow timescale [(a) and (c)] and on a fast timescale [(b) and (d)]. In (a) and (c) the numbers indicate the distinctive steps of the transient responses: the elastic response (1), the processes associated with the passive power stroke (2), and the ATP-driven approach to steady state (3)-(4). Data are adapted from Refs. [74-76].
Figure 6. Drastically different kinetics in phase 2 of the fast load recovery in length-clamp (circles) and force-clamp (squares) experiments. Data are from Refs. [74; 80; 85-87; 93].
Figure 7.
Figure 8. Structure of a myofibril. (a) Anatomic organization of half-sarcomeres linked by Z-disks (A) and M-lines (B). (b) Schematic representation of the network of half-sarcomeres. (c) Topological structure of the same network emphasizing the dominance of long-range interactions.
Figure 9.
Figure 10. Behavior of the HS model with N = 10 at zero temperature. [(a) and (b)] Tension-elongation relations corresponding to the metastable states (gray) and along the global minimum path (thick lines), in hard (a) and soft (b) devices. (c)-(e) [respectively (f)-(h)] Energy levels of the metastable states corresponding to p = 0, 0.1, ..., 1, at different elongations y (respectively tensions σ).
Corresponding transitions (E→B, P→Q, ...) are shown in (a) and (b). Adapted from Ref. [179].
Figure 12. Hill-type energy landscapes in a hard device, for N = 1 (a) and N = 4 (b). (c) Equilibrium free-energy profile f (solid line), which is independent of N, together with the metastable states for N = 4 (dotted lines). Here v 0 = 1/2 and β = 10. Adapted from Ref. [121].
Figure 13. Equilibrium properties of the HS model in a hard device for different values of temperature. (a) Helmholtz free energy; (b) tension-elongation relations; (c) stiffness. In the limit β → ∞ (dot-dashed line), corresponding to zero temperature, the stiffness κ diverges at y = y 0. Adapted from Ref. [121].
Figure 14.
Figure 15. Mechanical behavior along metastable branches. (a) Free energy of the metastable states. For β > 4 (see dotted line), three energy levels coexist at the same tension. (b) Free energy at three different temperatures. (c) Tension-elongation curves.
Figure 16. Phase transition at σ = σ 0 and its effect on the stochastic dynamics. (a) Bifurcation diagram at σ = σ 0. Lines show minima of the Gibbs free energy g ∞. [(b) and (c)] Tension-elongation relations corresponding to the metastable states (gray) and in the global minimum path (black). [(d)-(f)] Collective dynamics with N = 100 in a soft device under constant force at different temperatures. Here the loading is such that ⟨p⟩ = 1/2 for all values of β. Fig. 16(a) is adapted from Ref. [99].
Figure 17.
Figure 18. Different regimes for the HS model in the two limit cases of hard [(a) and (b)] and soft [(c) and (d)] devices. In a hard device, the pseudo-critical temperature β -1 c = 1/4 separates a regime where the tension-elongation relation is monotone (a) from the region where the system develops negative stiffness (b). In a soft device, this pseudo-critical point becomes a real critical point above which (β > β c) the system becomes bistable (d).
Figure 19. Phase diagram in the mixed device. The hard and soft device cases, already presented in Fig. 18, correspond to the limits (a)-(d). In the mixed device, the system exhibits three phases, labeled I, II and III in the left panel. The right panels show the typical dependence of the energy and the force on the loading parameter z and on the average internal elongation ⟨y⟩ in the subcritical (phase II, e) and in the supercritical (phase III, f) regimes. In phase I, the response of the system is monotone; it is analogous to the behavior obtained in a hard device for β < 4, see Fig. 18(b). In phase II, the system exhibits negative stiffness but no collective switching, except in the soft device limit λ b → 0, see Fig. 18(d). In phase III (supercritical regime), the system shows an interval of macroscopic bistability (see dotted lines) leading to abrupt transitions in the equilibrium response (solid line).
Figure 20. Energy barriers in the HS model. [(a) and (b)] Two functioning regimes. The regime (b) was not considered by Huxley and Simmons. (c) Relaxation rate as a function of the total elongation y. The characteristic timescale is τ = exp[βE 1]. Adapted from Ref. [121].
Figure 21. (a) Generalization of the Huxley and Simmons model of the energy barriers based on the idea of the transition state v* corresponding to the conformation ℓ. (b) Equilibration rate between the states as a function of the loading parameter at different values of ℓ. The original HS model corresponds to the case ℓ = -1. In (b), v 0 = 1, v* = 1.2 and β = 2. Dotted lines in (b) are a schematic representation of diffusion- (versus reaction-) dominated processes. Adapted from Ref. [180].
Figure 22. Energy landscape characterizing the sequential folding process of N = 10 bistable elements in a soft device with σ = σ 0. Parameters are v 0 = 1, v* = 1.2, and ℓ = -0.5. Adapted from Ref. [180].
Figure 23.
Intra- and inter-basin relaxation rates in a soft device. (a) Relaxation towards the metastable state in the case of a reflecting barrier at p = p (intra-basin relaxation). [(b) and (c)] Transition between the two macroscopic configurations p 0 (σ) and p 1 (σ) (inter-basin relaxation). (b) Forward [k(p 0 → p 1)] and reverse [k(p 1 → p 0)] rates. (c) Equilibration rate k(p 0 ↔ p 1) = k(p 0 → p 1) + k(p 1 → p 0). Solid line, computation based on Eq. (2.27); dot-dashed line, thermodynamic limit approximation, see Eq. (2.29). The parameters are N = 200, β = 5 and ℓ = -0.5. Adapted from Ref. [180].
Figure 24. Quasi-static response to ramp loading in different points of the phase diagram. [(a) and (b)] Hard device, see Ref. [121]; [(c) and (d)] soft device, see Ref. [180]; [(e) and (f)] mixed device. In each point, stochastic trajectories obtained from Eq. (2.21) (solid lines) are superimposed on the thermal equilibrium response (dashed lines). The inserts show selected snapshots of the probability distribution solving the master equation (2.22).
Figure 25. Relaxation of the average conformation in response to fast force drops at different temperatures and initial conditions ⟨p⟩ in. Thick lines, solutions of the master equation (2.22); thin lines, solutions of the mean-field HS equation. In (b), the initial condition corresponds to thermal equilibrium: bimodal distribution and ⟨p⟩ in = 1/2. In (c), the initial condition corresponds to the unfolded metastable state: unimodal distribution and ⟨p⟩ in ≈ 0.06. Snapshots at different times show the probability density profiles.
Figure 26. Soft-spin (snap-spring) model of a parallel cluster of cross-bridges. (a) Dimensional energy landscape of a bistable cross-bridge. (b) Structure of a parallel bundle containing N cross-bridges. Adapted from Ref. [99].
Figure 27.
Energy landscape along the global minimum path for the soft-spin model in a hard device at different values of the coupling parameter λ b with N = 20. Adapted from Ref. [179]. The asymmetry in the potential is the result of choosing λ 2 ≠ λ 1. Parameters are λ 2 = 0.4, λ 1 = 0.7, ℓ = -0.3.
Figure 28. Soft-spin model at zero temperature with parameters adjusted to fit experimental data, see Tab. 1 in Section 2.2.3. (a) Tension-elongation relations in the metastable states (gray area) and along the global minimum path (solid lines). [(b) and (c)] Energy landscapes corresponding to successive transitions between the homogeneous states (A→B and C→F), respectively. [(d) and (e)] Size of the energy barriers corresponding to the individual folding (B→) and (B←) at finite N (open symbols) and in the thermodynamic limit (solid lines). Adapted from Ref. [179].
Figure 29.
Figure 30. Bifurcation diagram with non-symmetric wells. Solid (dashed) lines correspond to local minima (maxima) of the free energy. Parameters are λ 2 = 0.47, λ 1 = 0.53, ℓ = -0.5, λ b = 0.5 and β = 20. Here z is such that ⟨p⟩ = 1/2 at β = 20 and λ b = 0.5.
Figure 31.
Figure 32. Non-equilibrium energy landscapes: f (solid lines) and g (dashed lines) at z = z 0. Trajectories on the right are obtained from stochastic simulations. Minima are arbitrarily set to 0. Parameters are: β = 10, λ 1 = λ 2 = 1/2, v 0 = 0, λ b = 0.1 (symmetric system).
Figure 33. Soft-spin model in hard [(a) and (b)] and soft [(c) and (d)] devices. [(a) and (c)] Free energies; [(b) and (d)] tension-elongation relations. The solid lines correspond to the parameters listed in Tab. 1 and the gray regions indicate the corresponding domains of bistability. The tension and elongation are normalized to their values at the transition point where ⟨p⟩ = 1/2.
-1 [80; 91; 100].
Given that we know the values of κ and a, we can estimate the non-dimensional inverse temperature, β = κ 0 a 2 /(k B T) = 71 ± 26. Once κ b and κ 0 are known, the number of cross-bridges attached in the state of isometric contraction can be obtained directly from the formula κ tot = N κ 0 κ b /(N κ 0 + κ b ). Experimental data indicate that N = 106 ± 11 [80; 87; 100]. We can then deduce the value of our coupling parameter, λ b = κ b /(N κ 0 ) = 0.54 ± 0.19.
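As a sanity check, the central values of these estimates can be recombined numerically. The sketch below uses the central parameter values quoted above (κ 0 = 2.7 pN/nm, a = 10 nm, T = 277.15 K, κ b = 150 pN/nm, N = 106); the only ingredient not quoted in the text is the Boltzmann constant.

```python
# Back-of-the-envelope check of the non-dimensional parameter estimates.
# Central values only; the error bars are propagated in the text, not here.
K_B = 1.380649e-23  # Boltzmann constant, J/K

kappa0 = 2.7e-3   # cross-bridge stiffness kappa_0 = 2.7 pN/nm, converted to N/m
a = 10e-9         # power-stroke size a = 10 nm, in m
T = 277.15        # temperature, K

# Non-dimensional inverse temperature beta = kappa_0 a^2 / (k_B T)
beta = kappa0 * a**2 / (K_B * T)
print(f"beta = {beta:.0f}")          # prints 71, matching the quoted estimate

kappa_b = 150e-3  # backbone stiffness kappa_b = 150 pN/nm, in N/m
N = 106           # number of attached cross-bridges

# Coupling parameter lambda_b = kappa_b / (N kappa_0)
lambda_b = kappa_b / (N * kappa0)
print(f"lambda_b = {lambda_b:.2f}")  # prints 0.52, inside 0.54 +/- 0.19

# Total stiffness: N cross-bridges in parallel, in series with the backbone
kappa_tot = N * kappa0 * kappa_b / (N * kappa0 + kappa_b)
print(f"kappa_tot = {kappa_tot * 1e3:.0f} pN/nm")
```

With these central values the check reproduces β ≈ 71 and λ b ≈ 0.52, consistent with the quoted intervals.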
Solid lines, metastable states; dashed lines, unstable states; bold lines:, global minimum. Parameters: λ 1 = λ 2 = 0.5, u 0 = 0, ℓ = -0.5, λ b = Figure 38 . 38 Figure 38. Equilibrium response of a single sarcomere in the thermodynamic limit. [(a) and (b)] Hard device; [(c) and (d)] soft device. [(a) and (c)] Gibbs and Helmholtz free energy; [(b) and (d)] corresponding tension-elongation . Parameters are, λ 1 = 0.7, λ 2 = 0.4, ℓ = -0.3, λ b = 1. Figure 39 . 39 Figure 39. Tension-elongation relations for a in a hard device. Thick lines: equilibrium tension-elongation relations based on the computation of the partition function (2.39). Thin lines: response of two half-sarcomere in series, each one endowed with the constitutive relation illustrated in Fig. 33(b). (a) Hard device constitutive law. (b) Soft device constitutive law, see Ref. [212] for more details. Parameters are: λ 1 = 0.7, λ 2 = 0.4, ℓ = -0.3, N = 10, β = 20 and λ b = 1. Figure 40 . 40 Figure 40. Tension-elongations for a sarcomere in a soft device. Thick lines: equilibrium tension-elongation relations based on the computation of the partition function (2.40). Thin lines: response of two half-sarcomere in series, each one endowed with the constitutive relation illustrated in Fig. 33(d). (a) Hard device constitutive law. (b) Soft device constitutive law, see Ref. [212] for more details. Parameters are as in Fig. 39. Figure 41 . 41 Figure 41. Global minimum of the (hard device) energy in the zero temperature limit (β → ∞) for a sarcomere chain with different M: (a) -energies; (b) -tension-elongation relations. In (b) the solid line represents the tension-elongation relation in a soft device. Parameters are: λ 1 = 0.7, λ 2 = 4, ℓ = -0.3. = 20 Figure 42 . 2042 Figure 42. Elongation of half-sarcomeres along the global minimum path for M = 2 (a) and M = 20 (b) in a hard device. Upper branch, pre-power-stroke half-sarcomeres; lower branch, post-powerstroke half-sarcomeres. 
Numbers indicate how many half-sarcomere are in each branch at a given z. Dashed lines, Soft device response. Parameters are: λ 1 = 0.7, λ 2 = 0.4, ℓ = -0.3. Figure 43 . 43 Figure 43. Influence of the parameter N on the equilibrium response of an infinitely long chain (M → ∞) in a hard device: (a) free energy; (b) tension-elongation relation. The asymmetry of the tension curve is a consequence of the asymmetry of the double well potential. Parameters are: λ 1 = 0.7, λ 2 = 0.4, ℓ = -0.3, λ b = 1, and β = 10. 1 ,Figure 44 . 144 Figure 44. Quick recovery response of a with M = 20 half-sarcomeres. (a) Tension elongation relation obtained with M = 20 in a hard device (circles) and in a soft device (squares) compared with the same experiments as in Fig. 5 (triangles). (b) Corresponding rates in hard (circles) and soft (squares) devices compared with experimental data from Fig. 6 (triangles). Figure 45 . 45 Figure45. Tension elongation curves σ(z) in the case of periodic driving (adiabatic limit). The equilibrium system (A = 0) is shown in (a) and and out-of-equilibrium system (A 0) -in (b). The insets show the effective potential F(z). Here κ 0 = 0.6. Adapted from Ref.[224]. Figure 46 . 46 Figure 46. The parameter dependence of the roots of the equation σ(z) = 0 in the adiabatic limit: (a) fixed D = 0.04 and varying A, first order phase transition [line C A -M A in Fig. 47 (a)]; (b) fixed A = 0 and varying D, second order phase transition [line D e -C A in Fig. 47 (a)]. The dashed lines correspond to unstable branches. Here κ = 0.6. Adapted from Ref. [224]. Figure 47 . 47 Figure 47. Phase diagram in (A, D) plane showing phases I,II and III: (a) -adiabatic limit, (b) -numerical solution at τ = 100 (b).C A is the tri-critical point, D e is the point of a second order phase transion in the passive system. The "Maxwell line" for a first order phase transition in the active system is shown by dots. Here κ 0 = 0.6. Adapted from Ref.[224]. Figure 48 . 48 Figure 48. 
(a-c) Typical tension-length relations in phases I, II and III . Points α, β and γ are the same as in Fig. 47 (b); (d) shows the active component of the force. Inserts show the behavior of stochastic trajectories in each of the phases at z ≃ 0 (gray lines) superimposed on their ensemble averages (black lines); the stationary hysteretic cycles, the structure of the effective potentials F(z) and the active potential F a (z) defined as a primitive of the active force σ a (z). The parameters: κ 0 = 0.6, τ = 100. Adapted from Ref. [224]. [119], integrin binding [263], folding/unfolding of proteins subjected to periodic forces [264] and other driven biological phenomena [265-268]. Figure 49 . 49 Figure49. Schematic representation of a parallel bundle of crossbridges that can attach and detach. Each cross bridge is modeled as a series connection of a ratchet Φ, a bi-stable snap-spring u SS , and linear elastic element v e . Figure 50 . 50 Figure 50. Contour plot of the effective energy v(x, y; z 0 ) at z 0 = 0. Inserts illustrate the states of various mechanical subunits. Figure 51 . 51 Figure 51. The numerical simulation of the time histories for different mechanical units in a load clamp simulation at zero external force: (a) the behavior of the myosin catalytic domain; (b) the behavior of the power stroke element (snap-spring); (c) the behavior of the elastic element; (d) the total displacement of the backbone. Figure 52 . 52 Figure 52. (a) An illustration of the steric effect associated with the power-stroke; (b) sketch of the mechanical model. Adapted from Ref. [244]. Figure 53 . 53 Figure 53. The functions Φ, u SS , f and the coupling function Φ used in numerical experiments. Analytic expressions for (a),(b) and (c) are given by Eqs. [(4.1),(4.2) and (4.8)], respectively. Adapted from Ref. [244]. (a). The implied steric rotation-translation coupling in ratchet models has been previously discussed in Refs. 
[154; 292; 293].We write the energy of a single cross-bridge in the formĜ(x, y, d) = d Φ(x) + u SS (yx),(4.5)where Φ(x) is a non-polar periodic potential representing the binding strength of the actin filament and u SS (yx) is a symmetric double-well potential describing the power-stroke element, see Fig.49. The coupling between the state of the power-stroke element yx and the spatial position of the motor x is implemented through the variable d. In the simplest version of the model d is assumed to be a function of the state of the power-stroke element d(x, y) = Ψ(yx).(4.6) 52(b). Similarly, as the power-stroke is recharging, the myosin head moves progressively closer to the actin filament and therefore the function Ψ(yx) should be bringing the actin potential back into the bound configuration.In view of (4.6), we can eliminate the variable d and introduce the redressed potential G(x, y) = Ĝ[x, y, Ψ(yx)]. Figure 54 . 54 Figure 54. The dependence of the average velocity v on temperature D and the amplitude of the ac signal A. The pre-and post-powerstroke states are labeled in such a way that the purely mechanical ratchet would move to the left. Adapted from Ref. [244]. Figure 55 . 55 Figure 55. The hysteresis operator Ψ{y(t)x(t)} linking the degree of attachment d with the previous history of the power-stroke configuration y(t)x(t). Adapted from Ref. [244]. Figure 56 . 56 Figure 56. Stationary particle trajectories in the model with the hysteretic coupling (4.9). Parameters are: D = 0.02 and A = 1.5. Adapted from Ref. [244]. Figure 57 . 57 Figure 57. The dependence of the average velocity v on temperature D in the hysteretic model with δ = 0.5. Adapted from Ref. [244]. Figure 58 . 58 Figure 58. (a) Schematic illustration of the four-step Lymn-Taylor cycle in the notations of this Section. 
(b) A steady-state cycle in the hysteretic model projected on the (x, yx) plane; color indicates the sign of the rocking force f (t): black if f (t) > 0 and gray if f (t) < 0; (c) Representation of the same cycle in the (d, x, yx) space with identification of the four chemical states A, B, C, D constituting the Lymn-Taylor cycle shown in (a). The level sets represent the energy landscape G at d = 0 (detached state) and d = 1 (attached state). The parameters are: D = 0.02, A = 1.5, and δ = 0.75. Adapted from Ref. [244]. Figure 59 . 59 Figure 59. The force-velocity relation in the model with hysteretic coupling at different amplitudes of the ac driving A and different temperatures D. The hysteresis width is δ = 0.5. Adapted from Ref. [244]. Figure 60 . 60 Figure 60. Schematic isometric tetanus with a descending limb. Adapted from Ref. [238]. Figure 61 . 61 Figure 61. The model of a muscle myofibril. Adapted from Ref. [238]. Figure 62 . 62 Figure 62. Non-dimensional tension-elongation relations for the active element (a), for the passive elastic component (b) and for the bundle (c). Adapted from Ref. [238]. Figure 63 . 63 Figure 63. The structure of the set of metastable branches of the tension-elongation relation for N = 10. Here f is the total tension (a) and f a is the active tension (b). The thick gray line represents the anticipated tetanized response. Adapted from Ref. [238]. Figure 64 . 64 Figure 64. Schematic representation of the structure of a halfsarcomere chain surrounded by the connecting tissue. Adapted from Ref. [238]. Figure 65 . 65 Figure 65. (a) The force-length relation along the global energy minimum path in the continuum limit for the model shown in Fig. 64. (b) The force-length relation along the global energy minimum path with the contribution due to connecting tissue subtracted. (c) The active forcelength relation along the global energy with the contribution due to connecting tissue and sarcomere passive elasticity subtracted. 
Adapted from Ref. [238]. Table 1 . 1 Realistic values (with estimated error bars) for the parameters of the snap-spring model ( 1 zJ = 10 -21 J). Dimensional Non-dimensional a 10 ± 1 nm κ 0 2.7 ± 0.9 pN nm -1 N 100 ± 30 T 277.15 K β 80 ± 30 κ b 150 ± 10 κ 1 3 ± 1 pN nm -1 pN nm -1 λ b 0.56 ± 0.25 λ 1 0.5 ± 0.1 κ 2 1.05 ± 0.75 pN nm -1 λ 2 0.25 ± 0.15 v 0 50 ± 10 zJ v 0 0.15 ± 0.30 Acknowledgments We thank J.-M. Allain, L. Marcucci, I. Novak, R. Sheska and P. Recho for collaboration in the projects reviewed in this article. We are also grateful to V. Lombardi group, D. Chapelle, P. Moireau, T. Lelièvre and P. Martin for numerous inspiring discussions. The PhD work of M.C. was supported by the Monge Fellowship from Ecole Polytechnique. L. T. was supported by the French Governement under the Grant No. ANR-10-IDEX-0001-02 PSL.
https://theses.hal.science/tel-01753845/file/CHEVRIE_Jason.pdf (2017)
ORGANIZATION OF THE THESIS

... control are finally illustrated through several ex-vivo experimental scenarios using cameras or 3D ultrasound as visual feedback.
Chapter 5: We consider the problem of patient motion during the needle insertion procedure. We first present an overview of motion compensation techniques during needle insertion. Our control approach introduced in Chapter 4 is then extended, and we exploit the model update method proposed in Chapter 3 in order to handle the insertion of a needle into tissues undergoing lateral motions. We provide the experimental results obtained using our control approach to guide the insertion of a needle into a phantom made of moving soft tissues. These experiments were performed using several feedback modalities, provided by a force sensor, an electromagnetic sensor and 2D ultrasound.
Conclusion: Finally we conclude this thesis and present perspectives for possible extensions and applications.

Acknowledgments

There are many people that I want to thank for making the realization of this thesis such a great experience. I have never really felt at ease talking about my life and my feelings, so I will keep it short and hope to make it meaningful. I would first like to deeply thank Alexandre Krupa and Marie Babel for the supervision of this work. I greatly appreciated the freedom and trust they gave me to conduct my research, as well as their guidance, feedback and support at the different stages of this thesis: from peaceful periods to harder times when I was late just a few hours before submission deadlines. I am really grateful to Philippe Poignet and Nicolas Andreff for the time they took to review my manuscript within the tight schedule that was given to them.
I would also like to thank them along with Bernard Bayle and Sarthak Misra for being members of my thesis committee and for providing interesting feedback and discussions on my work to orient my future research. Additional thanks go to Sarthak for the great opportunity he gave me to spend some time in the Netherlands at the Surgical Robotics Laboratory. I would also like to thank the other members of this lab for their welcome, and especially Navid for the time we spent working together. I would like to express my gratitude to François Chaumette for introducing me to the Lagadic team months before I started this thesis, which gave me the desire to work there. Many thanks go to all the members and ex-members of this team that I had the pleasure to meet during these years at IRISA. A bit for their help with my work, but mostly for the great atmosphere during all the coffee/tea/lunch breaks and the various out-of-office activities. The list has become too long to mention everyone separately, but working on this thesis during these years would not have been such a great pleasure without all of the moments we spent together, and I hope this can continue. I would also like to thank all my friends from Granville, Cachan or Rennes, who all contributed to the success of this thesis, consciously or not. A special thank you to Rémi, for various reasons in general, but more specifically here for being among the very few people to proof-read a part of this manuscript and give some feedback. Finally, my warmest thanks go to my family for their continuous support and encouragement during all this time, before and during the thesis.

Minimally invasive clinical procedures have expanded widely over the last century. The traditional method used to treat a patient has long been to resort to open surgery, which consists in making large incisions in the body in order to observe and manipulate its internal structures.
The success rate of this kind of approach is first limited by the heavy modifications made to the patient's body, which take time to heal and can lead to complications after the operation. There is also an increased risk of infection due to the exposure of internal tissues to the outside environment. On the contrary, minimally invasive procedures only require a limited number of small incisions to access the organs. The overall well-being of the patient is thus improved thanks to the reduction of post-operative pain and the limited presence of large scars. Patient recovery time is also greatly reduced [EGH+13], along with the risks of infection (Gandaglia et al., "Effect of minimally invasive surgery on the risk for surgical site infections"), which leads to better operation success rates and reduced costs for hospitals. When open surgery was necessary, before the introduction of medical imaging, diagnosis and treatment could be part of one and the same operation: first seeing the organs, and then planning and performing the necessary procedure. X-rays were among the first means discovered to obtain an anatomical view of the inside of the body without needing to open it. Several imaging modalities have since been developed and improved, among which computed tomography (CT) (Hounsfield, "Computerized transverse axial scanning (tomography): Part 1"), magnetic resonance imaging (MRI) (Lauterbur, "Image formation by induced local interactions")
description of system[END_REF], l'imagerie par résonance magnétique (IRM) [START_REF] Lauterbur | Image formation by induced local interactions: Examples employing nuclear magnetic resonance[END_REF] et l'échographie [START_REF] Wild | Application of echo-ranging techniques to the determination of structure of biological tissues[END_REF] sont maintenant largement utilisées dans le domaine médical. Au-delà des capacités de diagnostic accrues qu'elle offre, l'imagerie médicale a joué un rôle important dans le développement de l'approche chirurgicale mini-invasive. Observer l'intérieur du corps est nécessaire au succès d'une intervention chirurgicale, afin de voir les tissus d'intérêt et la position des outils chirurgicaux. De part la nature même de la chirurgie miniinvasive, une ligne de vue directe sur l'intérieur du corps n'est pas possible et il est donc nécessaire d'utiliser d'autres moyens d'observation visuelle, tels que l'insertion d'endoscope ou des techniques d'imagerie anatomique. i RÉSUMÉ EN FRANÇAIS Chaque technique a ses propres avantages et inconvénients. Les endoscopes utilisent des caméras, ce qui offre une vue similaire à un oeil humain. Les images sont donc faciles à interpréter, cependant il n'est pas possible de voir à travers les tissus. À l'opposé, l'imagerie anatomique permet de visualiser l'intérieur des tissus, mais un entrainement spécifique des médecins est nécessaire pour l'interprétation des images obtenues. La tomodensitométrie utilise des rayons X, qui sont des radiations ionisantes, ce qui limite néanmoins le nombre d'images qui peuvent être acquises afin de ne pas exposer le patient à des doses de rayonnement trop importantes [SBA + 09]. L'équipe médicale doit également rester en dehors de la salle où se trouve le scanner pendant la durée d'acquisition. D'un autre côté l'IRM utilise des radiations non-invasives et fournit également des images de haute qualité, avec une grande résolution et un large champ de vue. 
Cependant ces deux modalités imposent de sévères contraintes, telles qu'un long temps nécessaire pour obtenir une image ou un équipement coûteux et encombrant qui limite l'accès au patient. Dans ce contexte l'échographie est une modalité de choix grâce à sa capacité à fournir une visualisation en temps réel des tissus et des outils chirurgicaux en mouvement. De plus, elle est non-invasive et ne requiert que des scanners légers et des sondes facilement manipulables. Des outils longilignes sont souvent utilisés pour les procédures mini-invasives afin d'être insérés à travers de petites incisions réalisées à la surface du patient. En particulier les aiguilles ont été utilisées depuis longtemps pour extraire ou injecter des substances directement dans le corps. Elles procurent un accès aux structures internes tout en ne laissant qu'une faible marque dans les tissus. Pour cette raison elles sont des outils de premier choix pour une invasion minimale et permettent d'atteindre de petites structures dans des régions profondes du corps. Cependant les aiguilles fines peuvent présenter un certain niveau de flexibilité, ce qui rend difficile le contrôle précis de leur trajectoire. Couplé au fait qu'une sonde échographique doit être manipulée en même temps que le geste d'insertion d'aiguille, la procédure d'insertion peut rapidement devenir une tâche ardue qui requiert un entrainement spécifique des cliniciens. En conséquence, le guidage robotisé des aiguilles est devenu un vaste sujet de recherche pour fournir un moyen de faciliter l'intervention des cliniciens et augmenter la précision générale de la procédure. La robotique médicale a pour but de manière générale de concevoir et contrôler des systèmes mécatroniques afin d'assister les cliniciens dans leurs tâches. L'objectif principal est d'améliorer la précision, la sécurité et la répétabilité des opérations tout en réduisant leur durée [Taylor].
Cela peut grandement bénéficier aux procédures d'insertion d'aiguille en particulier, pour lesquelles la précision est bien souvent cruciale pour éviter les erreurs de ciblage et la répétition inutile d'insertions. L'intégration d'un système robotique dans les blocs opératoires reste un grand défi en raison des contraintes cliniques et de l'acceptation du dispositif technique par le personnel médical. Parmi les différentes conceptions qui ont été proposées, certains systèmes présentent plus de chances de succès que d'autres. De tels systèmes doivent offrir soit une assistance au chirurgien sans modifier de manière significative le déroulement de l'opération, soit des bénéfices clairs à la fois sur la réussite de l'opération et sur les conditions opératoires du chirurgien. C'est le cas par exemple des systèmes d'amélioration des images médicales ou de suppression des tremblements, ou encore des systèmes télé-opérés. Pour les procédures d'insertion d'aiguille, cela consisterait principalement à fournir un monitoring en temps réel du déroulement de l'insertion ainsi qu'un système robotique entre le patient et la main du chirurgien servant à assister le processus d'insertion. À cet égard, un système robotique guidé par échographie est un bon choix pour fournir une imagerie intra-opératoire en temps réel et une assistance pendant l'opération.

Motivations cliniques

Les aiguilles sont largement utilisées dans une grande variété d'actes médicaux pour l'injection de substances ou le prélèvement d'échantillons de tissus ou de fluides directement à l'intérieur du corps. Alors que certaines procédures ne nécessitent pas un placement précis de la pointe de l'aiguille, comme les injections intramusculaires, le résultat des opérations sensibles dépend grandement de la capacité à atteindre une cible précise à l'intérieur du corps.
Dans la suite nous présentons quelques applications pour lesquelles un ciblage précis et systématique est crucial pour éviter des conséquences dramatiques et qui pourraient grandement bénéficier d'une assistance robotisée.

Biopsies pour le diagnostic de cancer

Le cancer est devenu une des causes majeures de mortalité dans le monde avec 8,2 millions de décès dus au cancer estimés à travers le monde en 2015 [TBS + 15]. Parmi les nombreuses variétés de cancer, le cancer de la prostate est l'un des plus diagnostiqués parmi les hommes et le cancer du sein parmi les femmes, le cancer du poumon étant aussi une cause majeure de décès pour les deux. Cependant la détection précoce des cancers peut améliorer la probabilité de succès d'un traitement et diminuer le taux de mortalité. Indépendamment du type de tumeur, la biopsie est la méthode de diagnostic traditionnellement utilisée pour confirmer la malignité de tissus suspects. Elle consiste à utiliser une aiguille pour prélever un petit échantillon de tissu à une position bien définie à des fins d'analyse. Le placement précis de l'aiguille est d'une importance capitale dans ce genre de procédure afin d'éviter une erreur de diagnostic due au prélèvement de tissus sains autour de la région suspectée. Les insertions manuelles peuvent donner des résultats variables qui dépendent du clinicien effectuant l'opération. Le guidage robotique de l'aiguille a donc le potentiel de grandement améliorer les performances des biopsies. Un retour échographique est souvent utilisé, par exemple pour le diagnostic du cancer de la prostate [Kaye]. La tomodensitométrie est également un bon choix pour le cancer du poumon et un système robotique est d'une grande aide afin de compenser les mouvements de respiration [ZTK + 13].
Les systèmes robotiques peuvent également être utilisés afin de maintenir et modifier la position des tissus pour aligner une tumeur potentielle avec l'aiguille, particulièrement dans le cas de biopsies du cancer du sein [Mallapragada].

Curiethérapie

La curiethérapie a prouvé être un moyen efficace pour traiter le cancer de la prostate [GBS + 01]. Elle consiste à placer de petits grains radioactifs dans la tumeur à détruire. Cette procédure nécessite le placement précis et uniforme d'une centaine de grains, ce qui peut prendre du temps et requiert une grande précision. Les conséquences d'un mauvais placement peuvent être dramatiques par la destruction de structures sensibles alentours, comme la vessie, le rectum, la vésicule séminale ou l'urètre. L'insertion est habituellement effectuée sous échographie trans-rectale, ce qui peut permettre d'utiliser un système robotisé pour accomplir des insertions précises et répétées sous guidage échographique [Hungr] [SSK + 12] [Kaye]. L'IRM est également couramment utilisée et fait l'objet de recherche pour une utilisation avec un système robotique [Seifabadi].

Cancer du foie

Après le cancer du poumon, le cancer du foie est la cause majeure de décès dus au cancer chez l'homme, avec environ 500000 décès chaque année [TBS + 15]. L'ablation par radiofréquence est la principale modalité thérapeutique utilisée pour effectuer une ablation de tumeur du foie [Lencioni].
Une sonde d'ablation, apparentée à une aiguille, est insérée dans le foie et génère de la chaleur pour détruire localement les tissus. Guider précisément la sonde sous guidage visuel peut éviter la destruction inutile de trop de tissus. Les biopsies du foie peuvent également être effectuées en utilisant des aiguilles de ponction percutanée [Grant]. Utiliser un guidage robotisé sous modalité échographique pourrait permettre d'éviter de multiples insertions qui augmentent les saignements hépatiques et peuvent avoir de graves conséquences.

Contributions

Dans cette thèse nous traitons du contrôle automatique d'un système robotique pour l'insertion d'une aiguille flexible dans des tissus mous sous guidage échographique. Traiter ce sujet nécessite de considérer plusieurs points. Tout d'abord l'interaction entre l'aiguille et les tissus doit être modélisée afin de pouvoir prédire l'effet du système robotique sur l'état de la procédure d'insertion. Le modèle doit être capable de représenter les différents aspects de l'insertion et être à la fois suffisamment simple pour être utilisé en temps réel. Une méthode de contrôle doit également être conçue pour permettre de diriger la pointe de l'aiguille vers sa cible tout en maintenant la sécurité de l'opération. Le ciblage précis est rendu difficile par le fait que les tissus biologiques peuvent présenter une grande variété de comportements. Guider l'aiguille introduit aussi nécessairement une certaine quantité de dommages aux tissus, de telle sorte qu'un compromis doit être choisi entre le succès du ciblage et la réduction des dommages. Les mouvements physiologiques du patient peuvent également être une source importante de mouvement de la région ciblée et doivent aussi être pris en compte pour éviter d'endommager les tissus ou l'aiguille.
Finalement la détection fiable de l'aiguille dans les images échographiques est un pré-requis pour pouvoir guider l'aiguille dans la bonne direction. Cependant cette tâche est rendue difficile par la faible qualité de la modalité échographique. Afin de relever ces défis, nous apportons plusieurs contributions dans cette thèse, qui sont :

• Deux modèles 3D de l'interaction entre une aiguille flexible à pointe biseautée et des tissus mous. Ces modèles sont conçus pour permettre un calcul en temps réel et fournir une représentation 3D de l'ensemble du corps de l'aiguille pendant son insertion dans des tissus en mouvement.

• Une méthode d'estimation des mouvements latéraux des tissus en utilisant uniquement des mesures disponibles sur le corps de l'aiguille.

• Une méthode de suivi d'aiguille flexible dans des volumes échographiques 3D qui prend en compte les artefacts inhérents à la modalité échographique.

• La conception d'une approche de contrôle pour un système robotique insérant une aiguille flexible dans des tissus mous. Cette approche a été développée de manière à être facilement adaptable à n'importe quels composants matériels, que ce soit le type d'aiguille, le système robotique utilisé pour le contrôle des mouvements de l'aiguille ou la modalité de retour utilisée pour obtenir des informations sur l'aiguille. Elle permet également de considérer des stratégies de contrôle hybrides, comme la manipulation des mouvements latéraux appliqués à la base de l'aiguille ou le guidage de la pointe de l'aiguille exploitant une géométrie asymétrique de cette pointe.

• La validation ex-vivo des méthodes proposées en utilisant diverses plateformes expérimentales et différents scénarios afin d'illustrer la flexibilité de notre approche de commande pour différents cas d'insertion d'aiguille.

Organisation de la thèse

Le contenu de chaque chapitre de cette thèse est à présent détaillé dans la suite.
Chapitre 1 : Nous présentons le contexte clinique et scientifique dans lequel s'inscrit cette thèse. Nous définissons également nos objectifs principaux et présentons les différents défis associés. Le matériel utilisé dans les différentes expériences effectuées est également présenté.

Chapitre 2 : Nous présentons une vue d'ensemble des modèles d'interaction aiguille/tissus. Un état de l'art des différentes familles de modèles est tout d'abord fourni, avec un classement des modèles selon leur complexité et leur utilisation prévue en phase pré-opératoire ou intra-opératoire. Nous proposons ensuite une première contribution sur la modélisation 3D d'une aiguille à pointe biseautée, qui consiste en deux modèles numériques pouvant être utilisés pour des applications en temps réel et offrant la possibilité de considérer le cas de tissus en mouvement. Les performances des deux modèles sont évaluées et comparées à partir de données expérimentales.

Chapitre 3 : Nous traitons le problème du suivi du corps d'une aiguille incurvée dans des volumes échographiques 3D. Les principes généraux de l'acquisition d'images échographiques sont tout d'abord décrits. Ensuite nous présentons une vue d'ensemble des algorithmes récents de détection et de suivi utilisés pour la localisation du corps de l'aiguille ou seulement de sa pointe dans des séquences d'images échographiques 2D ou 3D. Nous proposons ensuite une nouvelle contribution au suivi 3D d'une aiguille en exploitant les artefacts naturels apparaissant autour de l'aiguille dans des volumes 3D. Finalement nous proposons également une méthode de mise à jour de notre modèle d'aiguille en utilisant les mesures acquises pendant l'insertion pour prendre en compte les mouvements latéraux des tissus. Le modèle mis à jour est utilisé pour prédire la nouvelle position de l'aiguille et améliorer le suivi de l'aiguille dans le prochain volume 3D acquis.
Chapitre 4 : Nous nous concentrons sur le sujet principal de cette thèse qui est le contrôle robotisé d'une aiguille flexible insérée dans des tissus mous sous guidage visuel. Nous dressons tout d'abord un état de l'art sur le guidage d'aiguilles flexibles, depuis le contrôle bas niveau de la trajectoire de l'aiguille jusqu'à la planification de cette trajectoire. Nous présentons ensuite la contribution principale de cette thèse, qui consiste en une approche de contrôle pour le guidage d'aiguille qui a la particularité d'utiliser plusieurs stratégies de guidage et qui est indépendante du type de manipulateur robotique utilisé pour actionner l'aiguille. Les performances de cette approche de

Chapter 1

Introduction

Minimally invasive procedures have greatly expanded over the past century. The traditional way to cure a patient has long been to resort to open surgery, which consists in making a large cut in the body to observe and manipulate its internal structures. The success rate of such an approach is first limited by the heavy modifications made to the body, which take time to heal and can lead to complications after the surgery. There is also a greater risk of subsequent infections due to the large exposure of the inner body to the outside environment. On the contrary, minimally invasive procedures only require a limited number of small incisions to access the organs. Therefore, this improves the overall well-being of the patient thanks to reduced postoperative pain and scarring. The recovery time of the patient is also greatly reduced [EGH + 13] along with the risk of infections [Gandaglia], resulting in higher success rates for the operations and a cost reduction for the hospitals.
When open surgery was necessary before the introduction of medical imaging, diagnosis and treatment could be two parts of the same intervention, in order to first see the organs and then plan and perform the required surgery. X-rays were among the first tools discovered to provide an anatomical view of the inside of the body without needing to open it. Several imaging modalities have since been developed and improved for medical purposes, among which computerized tomography (CT) [Hounsfield], magnetic resonance imaging (MRI) [Lauterbur] and ultrasound (US) [Wild] are now widely used in the medical domain. Beyond the improved diagnosis capabilities that it offers, medical imaging has played an important role in the development of the minimally invasive surgery approach. Viewing the inside of the body is necessary to perform successful surgical interventions, in order to see the tissues of interest and the position of the surgical tools. Due to the nature of minimally invasive surgery, a direct view is not possible and it is thus necessary to use other means of visual observation, such as endoscope insertion or anatomical imaging techniques. Each technique has its own advantages and drawbacks. Endoscopes use cameras, which offer the same view as a human eye. The images are thus easy to interpret, however it is not possible to see through the tissues. On the other hand, anatomical imaging allows a visualization of the inside of the tissues, but specific training of the physicians is required in order to interpret the images.
CT imaging uses X-rays, which are ionizing radiations, therefore limiting the number of images that can be acquired in order not to expose the patient to too high a dose of radiation [SBA + 09]. The medical staff should also remain outside the scanner room during the acquisition. On the other hand, MRI makes use of non-ionizing radiation and also provides high quality images, with high resolution and a large field of view. However, these two modalities impose severe constraints, such as a long time to acquire an image or an expensive and bulky scanner that limits the access to the patient. In this context, ultrasonography is a modality of choice for intra-operative imaging, due to its ability to provide a real-time visualization of tissues and tools in motion. Additionally, it is non-invasive and requires only lightweight scanners and portable probes. Slender tools are often used for minimally invasive procedures in order to be inserted through narrow incisions made at the surface of the patient. In particular, needles have long been used to extract or inject substances directly inside the body. They provide access to inner structures while leaving only a very light wound in the tissues. For this reason they are tools of first choice for minimal invasiveness that can allow reaching small structures in deep regions. Thin needles can however exhibit a certain amount of flexibility, which makes accurate steering of the needle trajectory more complicated. Coupled with the handling of a US probe at the same time as the needle insertion gesture, the insertion procedure can become a challenging task that requires specific training of the clinician. Consequently, robotic needle steering has become a vast subject of research to ease the intervention of the clinician and to improve the overall accuracy of the procedure. Medical robotics in general aims at designing and controlling mechatronic systems to assist the clinicians in their tasks.
The main goal is to improve the accuracy, safety and repeatability of the operations and to reduce their duration [Taylor]. It can greatly benefit needle insertion procedures, for which accuracy is often crucial to avoid mistargeting and unnecessary repeated insertions. However, the integration of a robotic system in the operating room remains a great challenge due to clinical constraints and acceptance of the technical device by the medical staff. Among the many designs that have been proposed, some systems have better chances of being accepted. Such systems should either assist the surgeon without requiring many modifications of the clinical workflow or should procure clear benefits for both the success of the operation and the operating conditions of the surgeon. This is for example the case of imaging enhancement and tremor cancellation systems, or of tele-operated systems. For needle insertion procedures, this would mainly consist in providing real-time monitoring of the state of the insertion as well as a robotic system between the patient and the hand of the surgeon assisting the insertion process. In this context, an US-guided robotic system is a great choice to provide real-time intra-operative imaging and assistance during the operation.

1.1 Clinical motivations

Needles are widely used in a great variety of medical acts for the injection of substances or the sampling of fluids or tissues directly inside the body. While some procedures do not require an accurate placement of the needle tip, such as intramuscular injections, the results of sensitive operations highly depend on the ability to reach a precise location inside the body. In the following we present some applications for which systematic accurate targeting is crucial to avoid dramatic consequences and which could greatly benefit from robotic assistance.
Biopsy for cancer diagnosis

Cancer has become one of the major causes of death in the world, with 8.2 million cancer deaths estimated worldwide in 2015 [TBS + 15]. Among the many types of cancers, prostate cancer is the most diagnosed cancer among men and breast cancer among women, with lung cancer being a leading cause of cancer deaths for both. However, early detection of cancer can improve the chance of success of cancer treatment and diminish the mortality rates. Whatever the kind of tumor, biopsies are the traditional diagnostic method used to confirm the malignancy of suspected tissues. A biopsy consists in using a needle to get a small sample of tissues at a defined location for analysis purposes. The accurate placement of the needle is of paramount importance in this procedure to avoid misdiagnosis due to the sampling of healthy tissues surrounding the suspected lesion. Freehand insertions can give variable results depending on the clinician performing the operation. Therefore, robotic needle guidance under visual feedback has the potential to greatly improve the performance of biopsies. Ultrasound feedback is often used, for example for the diagnosis of prostate cancer [Kaye]. Computerized tomography (CT) is also a good choice for lung cancer diagnosis, and a robotic system is of great help to compensate for breathing motions [ZTK + 13]. Robotic systems can also be used to maintain and modify the position of the tissues to align a suspected tumor with the needle, especially for breast cancer biopsy [Mallapragada].

Brachytherapy

Brachytherapy has proven to be an efficient way to treat prostate cancer [GBS + 01]. It consists in placing small radioactive seeds in the tumors to be destroyed.
The procedure requires the accurate and uniform placement of about a hundred seeds, which can be time-consuming and requires great accuracy. The consequences of misplacement can be dramatic due to the destruction of surrounding sensitive tissues like the bladder, rectum, seminal vesicles or urethra. The insertion is usually performed under transrectal ultrasound, which can allow the use of robotic systems to perform accurate and repetitive insertions under ultrasound (US) guidance [Hungr] [SSK + 12] [Kaye]. Magnetic resonance imaging (MRI) is also commonly used and is the subject of research to explore its use together with a robotic system [Seifabadi].

Liver cancer

Liver cancer is the major cause of cancer deaths after lung cancer among men, with about 500000 deaths each year [TBS + 15]. Radiofrequency ablation is the primary therapeutic modality to perform liver tumor ablations [Lencioni].
Scientific context Reaching a specific region in the body without performing open surgery is a challenging task that has been a vast subject of research and developments over the past decades. Many robotic designs have been proposed to achieve this goal. In the following we present a non-exhaustive overview of these different technologies as well as various kinds of sensor modalities that have been developed and used to provide feedback on the medical procedure. We then define where we positioned the work presented in this thesis relative to this context. Robotic designs Continuum robots: These systems are snake-like robots consisting of a succession of actively controllable articulations, as can be seen in Fig. 1.1a. They offer a large control over their whole shape and can be used to perform many kinds of operations. Many varieties of designs are possible and the study of such robots is a vast field of research by itself [START_REF] Walker | Robot strings: Long, thin continuum robots[END_REF] [START_REF] Burgner-Kahrs | Continuum robots for medical applications: A survey[END_REF]. However their design and control are often complex and their diameter is usually larger than standard needles, which limit the use of such system in practice. Concentric tubes: This kind of robots, also known as active cannulas, is a special kind of continuum robots which consist of a telescopic set of flexible concentric pre-curved tubes that can slide and rotate with respect to each other [START_REF] Webster | Design and kinematic modeling of constant curvature continuum robots: A review[END_REF]. Each tube is initially maintained inside the larger tubes and the insertion of such device is performed by successively inserting each set of tubes and leaving in place the outermost tubes one after another, as seen in Fig. 1.1b. 
They offer additional steering capabilities compared to flexible needles due to the pre-curved nature of each element, while maintaining a relatively small diameter. Furthermore, once the tubes have been deployed, rotation of the different elements allows for controlled deformations of the system all along its body. Although the design can be limited to only one pre-curved stylet placed in an outer straight tube, as was proposed in [OEC + 05], some other designs are possible to enable an additional control of the curvature of each tube [Chikhaoui]. As with continuum robots, the modeling and control of such systems remain quite complex [Dupont] [BLH + 16].

Needle insertion devices: Many robotic systems have been designed for the insertion of traditional needles and particularly for asymmetric tip needles. Several kinds of asymmetries are possible, as illustrated in Fig. 1.2. These needles tend to naturally deviate from a straight trajectory, such that the rotation around their shaft plays an important role. Many needle insertion systems have been proposed, all being a variant of the same design consisting of one linear stage for the insertion and one rotary stage for needle rotation along and around its main axis [WMO06], as depicted in Fig. 1.3.

Figure 1.3: General concept of a needle insertion device (taken from [WMO06]).

Active needles: Alternatively, many designs have been proposed to replace traditional needles and provide additional control capabilities over their bending.
A needle made of multiple segments that can slide along each other was designed such that the shape of the tip can be modified during the insertion [Ko]. A 1 degree of freedom (DOF) actuated needle tip was designed such that it can act as a pre-bent tip needle with a variable angle between the shaft and the pre-bent tip [AGL + 16]. A similar tendon-actuated needle tip with 2 DOF was also used to allow the orientation of the tip without rotation of the needle around its axis [Roesthuis]. Additional considerations about tip designs can be found in [van de Berg]. These needle designs allow a high controllability of the tip trajectory; however, in addition to the increased complexity of the needle itself, they require a special system to be able to control the additional DOF from the needle base. A combination of different methods can also be made, as was done in [SMR + 15], where using a succession of a cable-driven continuum robot, concentric tubes and a beveled-tip needle increases the reachable space and the final accuracy of the targeting.

Sensor feedback

In order to be used for needle insertion assistance, a robotic system should be able to monitor the state of the insertion. Therefore, feedback modalities have to be used to provide some information on the needle and the tissues. The choice of the sensors is an important issue that has to be taken into account from the beginning of the conception of the system. Indeed, they should either be directly integrated into the system or they can pose compatibility issues in the case of external modalities.
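Before detailing the feedback modalities, the natural deviation of the asymmetric-tip needles discussed above can be summarized by the standard nonholonomic unicycle model from the needle steering literature: the tip follows a circular arc of fixed curvature while being inserted, and spinning the shaft reorients the plane of that arc. The sketch below is a minimal numerical integration of this classical model, not code from this thesis; the curvature value and speeds used are illustrative placeholders.

```python
import numpy as np

def rotation_step(w, dt):
    """Rodrigues' formula: rotation of angle ||w||*dt about axis w."""
    angle = np.linalg.norm(w) * dt
    if angle < 1e-12:
        return np.eye(3)
    k = w / np.linalg.norm(w)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def simulate_bevel_tip(kappa, v, w_spin, duration, dt=1e-3):
    """Integrate the unicycle model of a bevel-tip needle.

    The tip frame advances along its z-axis at insertion speed v while
    turning about its x-axis at rate kappa*v (constant-curvature arc);
    w_spin is the spin rate of the shaft, which reorients the bevel.
    Returns the final tip position and orientation.
    """
    R = np.eye(3)    # tip orientation in the world frame
    p = np.zeros(3)  # tip position
    for _ in range(int(round(duration / dt))):
        w = np.array([kappa * v, 0.0, w_spin])  # body-frame angular velocity
        R = R @ rotation_step(w, dt)
        p = p + R[:, 2] * v * dt
    return p, R
```

Without spin (`w_spin = 0`) the tip traces a circle of radius 1/kappa; continuously spinning the needle averages out the deflection, which is the idea behind duty-cycled steering strategies.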
In the following we provide an overview of some feedback modalities currently used or explored for needle insertion procedures.

Shape feedback: The shape of the entire needle can be reconstructed using fiber Bragg grating (FBG) sensors. This kind of sensor consists of several optical fibers integrated in the needle. The light propagates differently in these fibers depending on the curvature of the fiber at certain locations, such that the curvature of the needle can be measured and used to retrieve its shape [PED + 10]. This kind of sensor requires a special design of the needle, since the fibers need to follow the same deformations as the needle does. An electromagnetic (EM) tracker can also be used for the tracking of the position and orientation of a specific point of the needle, which is typically the tip. Such trackers provide a great accuracy on the measures, and currently available trackers are small enough to fit directly in standard needles.

Real-time imaging modalities: Feedback on the needle position is not sufficient for needle insertions, since the position of the targeted region must also be known. Using an imaging modality can provide a visual feedback on the position of both the needle and the target. Ultrasound (US) is the modality of choice for real-time imaging due to its fast acquisition rate of 2D or 3D images, good resolution and safety. Special 2.5D US transducers are also the subject of current research to enable a direct detection and display of the needle tip in a 2D US image, even when the tip is outside the imaging plane of the probe [XWF + 17]. However, these transducers are currently not commonly available. A limiting factor of US in general is the low quality of the images due to the intrinsic properties of US waves.
On the other hand, computerized tomography (CT) scan or magnetic resonance imaging (MRI) are used for manual insertions thanks to the high quality of their images and the large field of view that they offer. However, as stated previously, these imaging methods cannot be used directly for real-time image-guided robotic needle insertion, due to their long acquisition times. They can still be used for non real-time tele-operated robotic control, by alternating between insertion steps and imaging steps; however, a single needle insertion can then take more than 45 minutes. Tissue motion between two acquisitions is also an issue that requires additional real-time sensors to be compensated for, such as force sensors [MDG + 05] or optical tracking [ZTK + 13]. On the contrary, CT fluoroscopy can be used to acquire real-time images. However, in manual needle insertion this exposes the clinician to a high dose of noxious radiation. This can be avoided by wearing impractical special shielding or by using a remotely controlled insertion system [SPB + 02]. However, the patient is still exposed to the high amount of radiation necessary for real-time performance. Fast MRI acquisition has also recently been explored to perform image-guided needle insertion [PvKL + 15]. By decreasing the image size and quality, a 2D image could be acquired with a sufficient resolution every 750 ms. By comparison, the US modality can provide a full 3D volume with a similar resolution within the same acquisition time, and 2D US can acquire several tens of images per second.

Force feedback: Force sensors can be used to measure the forces applied to the needle and tissues. Force sensing can be useful to monitor the state of the insertion, for example by detecting the perforation of the different structures that the needle is going through [START_REF] Okamura | Force modeling for needle insertion into soft tissue[END_REF].
It can also be used with tele-operated robotic systems to provide a feedback to the clinician [PBB + 09] or to compensate for tissue motion [START_REF] Joinié-Maurin | Force feedback teleoperation with periodical disturbance compensation[END_REF]. Any kind of force sensor can be used with the US modality; however, compatibility issues have to be taken into account for the design of sensors compatible with CT [KPM + 14] or MRI [START_REF] Gassert | Sensors for applications in magnetic resonance environments[END_REF].

Objectives

The objective of this thesis is to focus on the robotic steering of traditional flexible needles. These needles are already widely available and used in clinical practice. Moreover, they do not require specific hardware, contrary to other special robotic designs, which require dedicated control hardware and techniques. The idea is then to provide a generic formulation of the different concepts that we introduce, such that our work can be adapted to several kinds of needle tip and rigidity. In this context, the control of the full motion of the needle base should thus be performed, such that the approach is not limited to flexible beveled-tip needles but can also be used to insert rigid symmetric tip needles. The formulation should also stay as much as possible independent of the actual robotic system used to perform the needle steering. This choice is motivated by the fact that it would ease the clinical acceptance of the method and could make it applicable to several robotic systems and medical applications. Another objective is to focus on the insertion under ultrasound (US) guidance, motivated by the fact that it is already used in current medical practice and does not require any modification of the needle to provide a real-time feedback on its whole shape. For the development and validation of our work, we try to keep in mind some clinical constraints related to the set-up and registration time, which should be as short as possible.
Several other modalities also have to be explored, such as force feedback and electromagnetic (EM) feedback, which can easily be implemented alongside traditional needles and the US modality.

Challenges

In order to fulfill our objective of performing the ultrasound-guided control of a robotic system for the insertion of a flexible needle in soft tissues, several challenges need to be addressed. We describe these different challenges in the following.

Interaction modeling: First, the control of the insertion of a flexible needle with a robotic system requires a model of the interaction between the needle and soft tissues. The effect of the inputs of the robotic insertion system on the needle position and effective length has to be modeled as well. The model should be complete enough to represent the whole body of the needle in 3D as well as the influence of the tip geometry on its trajectory. It should also be generic enough so that it can easily be adapted to several kinds of needles. Since it is used for intra-operative purposes, it should be able to represent the current state of the insertion, taking into account the effect of potential motions of the tissues on the deformation of the needle. While complex and accurate models of the needle and tissues exist, the complexity of the modeling must remain reasonable such that real-time performance can be achieved.

Needle control: The control of the trajectory of a flexible needle is a challenging task in itself. The complex interaction of the needle with the tissues at its tip and along its shaft is difficult to completely predict, especially because of the great variety of behaviors exhibited by biological tissues. Accurately reaching a target then requires taking into account and exploiting the flexibility of the needle and the motion of the tissues. The safety of the operation should also be ensured to avoid excessive damage caused by the needle to the tissues.
This is a difficult task since inserting the needle necessarily introduces a certain amount of tissue cutting, and steering the needle can only be achieved through an interaction of the needle with the tissues.

Tissue motion: Many needle insertion procedures are not performed under general anesthesia. As a consequence, physiological motions of the patient cannot always be controlled. Patient motions can have several effects on the needle insertion. First, they introduce a motion of the targeted region, which should be accounted for in order to avoid mistargeting. The trajectory of a flexible needle can also be modified by tissue motions. During manual needle insertion, clinicians can directly see the motion of the skin of the patient and feel the forces applied on the needle and tissues, such that they can easily follow the motions of the patient if needed. A robotic system should also be able to adapt to some motions of the patient while inserting the needle, to avoid threatening the safety of the operation. This point represents a great challenge due to the limited perception available to the robotic system.

Needle detection: Accurate localization of the needle in ultrasound (US) images is a necessary condition to be able to control the state of the insertion. The low quality of US images is a first obstacle to the accurate localization of the needle tip. It can greatly vary depending on the tissues being observed and the position of the needle relative to the US probe. Using 3D US has the advantage that the whole shaft of the needle can be contained in the field of view of the US probe, which is not the case with 2D US. However, even in 3D US the needle is not equally visible at all points due to specific artifacts that can come from the surrounding tissues or from the needle itself.
Even if the 3D volume acquisition is relatively fast, the position of the needle in the volume can still greatly vary due to the motion of the patient or of the probe between two acquisitions. Overall, needle localization using US feedback represents a challenging task that is still an open issue to be addressed.

Contributions

In order to address the challenges mentioned previously, we present several contributions in this thesis, which are:

• two 3D models of the interaction between a flexible needle with a bevel tip and soft tissues. The models are designed to allow real-time processing and to provide a 3D representation of the entire needle body during the insertion in moving tissues;

• a method to estimate the lateral motions of the tissues using only the measures available on the needle;

• a method for tracking a flexible needle in 3D ultrasound volumes taking into account the artifacts inherent to the ultrasound modality;

• the design of a framework for the control of a robotic system holding a flexible needle inserted in soft tissues. The framework is designed to be easily adaptable to any hardware components, whatever the needle type, the robotic system used for the control of the needle motion or the feedback modality used to provide information on the needle location. It can also provide hybrid control strategies like manipulation of the lateral motions of the needle base or tip-based steering of the needle tip;

• the ex-vivo validations of the proposed methods using various experimental platforms and scenarios in order to illustrate the flexibility of the framework in performing needle insertions.

The contributions on the topic of a hybrid control strategy used to steer a flexible needle under visual feedback were published in an article in the proceedings of the International Conference on Robotics and Automation (ICRA) [START_REF] Chevrie | Needle steering fusing direct base manipulation and tip-based control[END_REF].
The contributions on the topic of needle modeling and tissue motion estimation using visual feedback were published in an article in the proceedings of the International Conference on Intelligent Robots and Systems (IROS) [START_REF] Chevrie | Online prediction of needle shape deformation in moving soft tissues from visual feedback[END_REF].

Experimental context

Experiments presented in this thesis were primarily conducted on the robotic platform of the Lagadic team at IRISA/Inria Rennes, France. Others were also conducted at the Surgical Robotics Laboratory of the University of Twente, Enschede, the Netherlands. This offered the opportunity to test the genericity of our methods using different experimental setups. We present in this section the list of the different pieces of equipment that we used in the experiments presented all along this thesis. The general setup that we used is made up of four parts: a needle attached to a robotic manipulator, several homemade phantoms simulating soft tissues, a set of sensors providing various kinds of feedback and a workstation used to process the data and manage the communications between the different components.

Robots

Two different kinds of needle manipulation systems were used, the first one in France and the second one in the Netherlands.

• The Viper s650 and Viper s850 from Omron Adept Technologies, Inc. (Pleasanton, California, United States) are 6-axis industrial manipulators, depicted in Fig. 1.5a. The robots communicate with the workstation through a FireWire (IEEE 1394) connection. They were used to hold and actuate the needle or to hold the 3D ultrasound (US) probe. They were also used to apply motions to the phantom in order to simulate patient motions.

• The UR3 and UR5 from Universal Robots A/S (Odense, Denmark) are 6-axis table-top robots, depicted in Fig. 1.5b.
Both robots were connected to a secondary workstation and communicated through Ethernet using the Robot Operating System (ROS) (Open Source Robotics Foundation, Mountain View, USA). The UR3 was used to hold and actuate an insertion device described in the following. The UR5 is a larger version of the UR3 and was used to apply a motion to the phantom. We also used a 2-degree-of-freedom needle insertion device (NID), visible in Fig. 1.5b, designed at the Surgical Robotics Laboratory [START_REF] Shahriari | Design and evaluation of a computed tomography (CT)compatible needle insertion device using an electromagnetic tracking system and CT images[END_REF], which controls the insertion and rotation of the needle along and around its axis. A Raspberry Pi 2 B (Raspberry Pi Foundation, Caldecote, United Kingdom) along with a Gertbot motor controller board (Fen Logic Limited, Cambridge, United Kingdom) were used to control the robot through pulse-width modulation (PWM). Motor encoders were used to measure the position and rotation of the needle, making it possible to know the effective length of the needle that can bend outside the NID. The NID was connected to the end effector of the UR3 through a plastic link, as can be seen in Fig. 1.5b, allowing the control of the 3D pose of the NID with the UR3.

Visual feedback systems

We used two different modalities to provide a visual feedback on the needle and phantom position. Cameras were used for the evaluation of the performances of the control framework and ultrasound (US) probes were used to validate the framework using a clinical modality. We used in France two Point Grey FL2-03S2C cameras from FLIR Integrated Imaging Solutions Inc. (formerly Point Grey Research, Richmond, BC, Canada), which are color cameras providing 648 x 488 images with a frame rate up to 80 images per second. Each camera was coupled with a DF6HA-1B lens from Fujifilm (Tokyo, Japan), which has a 6 mm focal length with manual focus.
The cameras send the acquired images to the workstation through a FireWire (IEEE 1394) connection. This system was used only with translucent gelatin phantoms to enable the observation of the needle for validation purposes. Both cameras and a gelatin phantom can be seen in Fig. 1.6. A white monitor screen or a piece of paper was used to offer a uniform background behind the phantom, which facilitates the segmentation of the needle in the images. Two different US systems were used for the experiments. For the experiments performed in France, we used a 4DC7-3/40 convex 4D US probe (see Fig. 1.7a) from BK Ultrasound (previously Ultrasonix Medical Corporation, Canada), which is a wobbling probe with a frequency range from 3 MHz to 7 MHz, a transducer radius of 39.8 mm and a motor radius of 27.25 mm. This probe was used with a SonixTOUCH research US scanner from BK Ultrasound (see Fig. 1.7b). The station allows an access to raw data via an Ethernet connection, such as radio frequency data or pre-scan B-mode data. For the experiments performed in the Netherlands, we used a 7CF2 Convex Volume 4D/3D probe (see Fig. 1.7c) from Siemens AG (Erlangen, Germany), which is a wobbling probe with a frequency range from 2 MHz to 7 MHz, a transducer radius of 44.86 mm and a motor radius of 14.84 mm. This probe was used with an Acuson S2000 US scanner from Siemens (see Fig. 1.7d). This station does not give access to raw data, nor online access to transformed data. Pre-scan 3D US volumes can be retrieved offline using the digital imaging and communications in medicine (DICOM) format. Nevertheless, 2D images were acquired online using a USB frame grabber device from Epiphan Video (Ottawa, Ontario, Canada) connected to the video output of the station.

Phantoms

Different phantoms were used for the experiments. Porcine gelatin was used in all phantoms, either alone or together with embedded ex-vivo biological tissues. We used either porcine or bovine liver as biological tissues.
The gelatin and tissues were embedded in transparent plastic containers of different sizes. Various artificial targets were also embedded in some phantoms, in the form of raisins or play-dough spheres of different sizes, ranging from 4 mm to 8 mm.

Workstations

All software developments were made using the C++ language. We used the ViSP library [START_REF] Marchand | Visp for visual servoing: a generic software platform with a wide class of robot control skills[END_REF] as a basis for the majority of the control framework, image processing, graphical user interface and communications. The CUDA library was used for the optimization of the post-scan conversion of 3D ultrasound volumes with an Nvidia GPU. The Eigen library was used for fast matrix inversion for the needle modeling. For the experiments performed in France we used a workstation running Ubuntu 14.04 LTS 64-bit and equipped with 32 GB memory, an Intel Xeon E5-2620 v2 @ 2.10 GHz × 6 CPU and an NVIDIA Quadro K2000 GPU. For the experiments performed in the Netherlands we used a personal computer running Fedora 24 64-bit and equipped with 16 GB memory and an Intel Core i7-4600U @ 2.10 GHz × 4 CPU.

Needles

We summarize the characteristics of the different needles used in the experiments in Table 1.1. A picture of the needle used in France and a zoom on the beveled tip can be seen in Fig. 1.8.

Force sensor

For the experiments performed in the Netherlands, we used a Nano 43 force torque sensor from ATI Industrial Automation (Apex, USA), which is a six-axis sensor measuring forces and torques in all 3 Cartesian directions with a resolution of 1.95 mN for forces and 25 µN·m for torques. The sensor was placed between the UR3 robot and the needle insertion device to measure the interaction efforts exerted at the base of the needle, as depicted in Fig. 1.9a.

Electromagnetic tracker

For the experiments performed in the Netherlands, we used an Aurora v3 electromagnetic (EM) tracking system from Northern Digital Inc.
(Waterloo, Canada), which consists of a 5-degree-of-freedom EM sensor (see Fig. 1.9b) placed in the tip of the needle and an EM field generator (see Fig. 1.9c). The system is used to measure the 3D position and axis alignment of the needle tip, with a position accuracy of 0.7 mm and an orientation accuracy of 0.20°, at a maximum rate of 65 measures per second.

Thesis outline

In this chapter we presented the clinical and scientific context of this thesis. We defined our general objective as being the robotic insertion of a flexible needle in soft tissues under ultrasound (US) guidance and we described the associated challenges. A list of the equipment used in the various experiments presented in this thesis was also provided. The remainder of this manuscript is organized as follows.

Chapter 2: We present an overview of needle-tissue interaction models. A review of different families of models is first provided, with a classification of the models depending on their complexity and intended use for pre-operative or intra-operative purposes. We then propose a first contribution on the 3D modeling of a beveled-tip needle interacting with soft tissues, consisting of two numerical models that can be used for real-time applications and offer the possibility to consider the case of moving tissues. The performances of both models are evaluated and compared through experiments.

Chapter 3: We address the issue of tracking the body of a curved needle in 3D US volumes. The general principles of the acquisition process of US images and volumes are first described. Then we present an overview of recent detection and tracking algorithms used to localize the whole needle body or only the needle tip in 2D or 3D US sequences. We then propose a new contribution to 3D needle tracking that exploits the natural artifacts appearing around the needle in US volumes.
Finally, we also propose a method to update our needle model using the measures acquired during the insertion in order to take into account lateral tissue motions. The updated model is used to predict the new position of the needle and to improve needle tracking in the next acquired US volume.

Chapter 4: We focus on the core topic of this thesis, which is the robotic steering of a flexible needle in soft tissues under visual guidance. We first provide a review of current work on flexible needle steering, from the low-level control of the needle trajectory to the planning of this trajectory. We then present the main contribution of this thesis, which consists of a needle steering framework that has the particularity of including several steering strategies and is independent of the robotic manipulator used to steer the needle. The performances of the framework are illustrated through several ex-vivo experimental scenarios using cameras and 3D US probes as visual feedback.

Chapter 5: We consider the issue of patient motions during the needle insertion procedure. An overview of motion compensation techniques during needle insertion is first presented. We further extend our steering framework proposed in chapter 4 and we exploit the model update method proposed in chapter 3 in order to handle needle steering under lateral motions of the tissues. We provide experimental results obtained by using the proposed framework to perform needle insertion in a moving soft tissue phantom. These experiments were performed using several information feedback modalities, such as a force sensor, an electromagnetic tracker, as well as 2D US.

Conclusion: Finally, we provide the conclusion of this dissertation and present perspectives for possible extensions and applications.

Chapter 2

Needle insertion modeling

This chapter provides an overview of needle-tissue interaction models.
The modeling of the behavior of a needle interacting with soft tissues is useful for many aspects of needle insertion procedures. First, it can be used to predict the trajectory of the needle tip before inserting the real needle. This can be of great help to the clinicians in order to find an adequate insertion entry point that optimizes the chances of reaching a targeted region inside the body, while reducing the risks of damaging other sensitive regions. Secondly, using thinner needles allows decreasing patient pain and the risk of bleeding [START_REF] Gill | Does needle size matter?[END_REF]. However, the stiffness of a thin needle is greatly reduced and causes its shaft to bend during the insertion. This makes the interaction between the needle and tissues more complex for the clinicians to comprehend, since the position of the needle tip is not directly known from the position and orientation of the base, contrary to rigid needles. The introduction of a robotic manipulator holding the needle and controlling its trajectory can be of great help to unburden the operator of the needle manipulation task. This removes a potential source of human error and leaves the clinicians free to focus on other aspects of the procedure [START_REF] Abolhassani | Needle insertion into soft tissue: A survey[END_REF]. Needle-tissue interaction models are a necessity for the usage of such robotic systems, in order to know how they should be controlled to modify the needle trajectory in the desired way. In the following, we first provide a review of needle-tissue interaction models. We address the case of kinematic models (section 2.1), which only consider the trajectory of the tip of the needle, and the case of finite element modeling (section 2.2), which can completely model the behavior of the needle and the surrounding tissues. Then we present mechanics-based models (section 2.3) used to represent the body of the needle without modeling all the surrounding tissues.
We further expand on this topic and propose two new 3D models of a needle locally interacting with soft tissues (section 2.4). Finally, in section 2.5 we compare the trajectories of the needle tip obtained with both models to the trajectories obtained during the insertion of a real needle. The work done using both models was published in two articles presented in international conferences [CKB16a] [START_REF] Chevrie | Online prediction of needle shape deformation in moving soft tissues from visual feedback[END_REF].

Kinematic modeling

During the insertion of a needle, a force is applied to the tissues by the needle tip to cut a path in the direction of the insertion. In return, the tissues apply reaction forces to the needle tip and the direction of these forces depends on the geometry of the tip, as illustrated in Fig. 2.1. In the case of a symmetric needle tip, the lateral forces tend to cancel each other, leaving only a force aligned with the needle. The needle tip trajectory then follows a straight line when the needle is inserted. However, when the needle tip has an asymmetric shape, as for example in the case of a beveled or pre-curved tip, inserting the needle results in a lateral reaction force. The needle trajectory bends in the direction of the reaction force. The exact shape of the trajectory depends on the properties of the needle and tissues. The stiffness of the needle introduces internal forces that naturally act against the bending of the shaft. The deformations of the tissues also create forces all along the needle body, which modify its whole shape. Kinematic modeling is used under the assumption that the tissues are stationary and no lateral motion is applied to the needle base, such that the different forces are directly related to the amount of deflection observed at the tip.
The values of all these forces are ignored in this case and only the trajectory of the tip is represented from a geometric point of view. The whole needle shaft is ignored as well, and the insertion and rotation along and around the needle axis are assumed to be directly transmitted to the tip. This way the modeling is limited to the motion of the tip during the insertion or rotation of the needle. Note that this kind of representation is limited to asymmetric geometries of the tip, since a symmetric tip would only produce a straight trajectory that does not require a particular modeling. Kinematic modeling of the behavior of a needle during its insertion has mainly been addressed through the unicycle and bicycle models, described below.

Unicycle model: The 2D unicycle model consists of modeling the tip as the center of a single wheel that can translate in one direction and rotate along another normal direction, as illustrated in Fig. 2.2a. During the needle insertion, the needle tip is assumed to follow a circular trajectory. The ratio between the translation and rotation is fixed by the natural curvature K_nat of this circular trajectory and depends on the needle and tissue properties, such that

    ẋ = v_ins cos(θ)
    ẏ = v_ins sin(θ)        (2.1)
    θ̇ = K_nat v_ins

where x and y are the coordinates of the wheel center, i.e. the needle tip, θ is the orientation of the wheel and v_ins is the insertion velocity.

Bicycle model: The 2D bicycle model uses two rigidly fixed wheels at a distance L_w from each other, such that the front wheel lies on the axis of the rear wheel and is misaligned by a fixed angle φ, as illustrated in Fig. 2.2b. The point representing the needle tip lies somewhere between the two wheels, at a distance L_t from the rear wheel. In addition to the rotation and the velocity in the insertion direction observed with the unicycle model, the tip is also subject to a lateral translation velocity, directly linked to the distance L_t.
The trajectory of the tip is then described according to

    ẋ = v_ins (cos(θ) - (L_t/L_w) tan(φ) sin(θ))
    ẏ = v_ins (sin(θ) + (L_t/L_w) tan(φ) cos(θ))        (2.2)
    θ̇ = (tan(φ)/L_w) v_ins

where x and y are the coordinates of the needle tip, θ is the orientation of the rear wheel and v_ins is the insertion velocity. This model is equivalent to the unicycle model when the tip is at the center of the rear wheel, i.e. L_t = 0.

Rotation around the needle axis: The rotation around the needle axis is also taken into account in kinematic models. These models were first mainly used in the 2D case, such that the needle tip stays in a plane [RMK + 11]. The tip can then only describe a curvature toward the right or the left, such that a change of direction corresponds to a 180° rotation of a real 3D needle. The tip trajectory is thus a continuous curve made up of a succession of arcs. However, a better modeling of the needle insertion is achieved by considering the 3D case, where the rotation around the needle axis is continuous. In this case the orientation of the asymmetry fixes the direction in which the tip trajectory describes a curve during the needle insertion. This can lead to a greater variety of motions, such as helical trajectories [HAC + 09].

Discussion: Kinematic modeling is easy to implement since it needs few parameters and is not computationally expensive. However, the relationship between the model parameters and the real needle behavior is difficult to establish, since they depend on the needle geometry and tissue properties. In practice these parameters are often identified after performing some preliminary insertions in the tissues. Since this is not feasible for real surgical procedures, online estimation of the natural curvature of the tip trajectory can be performed, for example by using a method based on a Kalman filter as proposed by Moreira et al. [START_REF] Moreira | Needle steering in biological tissue using ultrasound-based online curvature estimation[END_REF].
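To make the behavior of these models concrete, the tip trajectories given by equations (2.1) and (2.2) can be integrated numerically. The following sketch (in Python; the values chosen for K_nat, L_w, L_t and φ are purely illustrative, not identified from any experiment) checks that the unicycle model traces a circle of radius 1/K_nat and that the bicycle model reduces to it when L_t = 0.

```python
import math

def integrate(step, state, v_ins, duration, dt=1e-3):
    """Euler integration of a planar tip model; state is (x, y, theta)."""
    x, y, theta = state
    t = 0.0
    while t < duration:
        dx, dy, dtheta = step(theta, v_ins)
        x += dx * dt
        y += dy * dt
        theta += dtheta * dt
        t += dt
    return x, y, theta

def unicycle(K_nat):
    """State derivative of equation (2.1)."""
    def step(theta, v_ins):
        return (v_ins * math.cos(theta),
                v_ins * math.sin(theta),
                K_nat * v_ins)
    return step

def bicycle(L_w, L_t, phi):
    """State derivative of equation (2.2)."""
    k = math.tan(phi) / L_w  # curvature of the rear wheel trajectory
    def step(theta, v_ins):
        return (v_ins * (math.cos(theta) - L_t * k * math.sin(theta)),
                v_ins * (math.sin(theta) + L_t * k * math.cos(theta)),
                k * v_ins)
    return step

# Illustrative values: curvature 0.02 mm^-1 (50 mm radius), v_ins = 1 mm/s.
K = 0.02
quarter_turn = (math.pi / 2) / K  # arc length of a quarter circle
x, y, theta = integrate(unicycle(K), (0.0, 0.0, 0.0), 1.0, quarter_turn)
# After a quarter circle the tip should be near (1/K, 1/K) = (50, 50).
```

With L_t = 0 the bicycle derivative takes the same form as the unicycle one with K_nat = tan(φ)/L_w, so both integrations produce the same trajectory, in agreement with the equivalence stated above.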
It can also be observed that the trajectories obtained with both unicycle and bicycle models are limited. For example, they remain continuous when a rotation without insertion is performed between two insertion steps. The two successive parts of the trajectory are tangent if the unicycle model is used and are not tangent if the bicycle model is used. However, both models fail to describe the trajectory of a pre-bent needle, for which a translational offset is also added when the needle is rotated. Hence, modifications have to be made to account for the fact that the tip is not aligned with the axis of the rotation [RKA + 08]. Another point is that kinematic models do not take into account the interaction between the body of the needle and the tissues. The shaft is assumed to exactly follow the trajectory of the needle tip and has no influence on this trajectory. This assumption can only hold if the needle is very flexible and the tissues are stiff, such that the forces due to the bending of the needle are small enough to cause very little motion of the tissues. The tissues must also be static, such that they do not modify the position of the needle during the insertion. These assumptions are easy to maintain during experimental research work, but harder to maintain in clinical practice due to patient physiological motions and variable tissue stiffness. An extension of the bicycle model that takes into account additional lateral translations of the needle tip is possible [FKR + 15]. This allows a better modeling of the tip motion, but it requires additional parameters that need to be estimated and can vary depending on the properties of the tissues, which limits its practical use.

Finite element modeling

Finite element modeling (FEM) is used to model the whole tissue and needle. In addition to the effect of the needle-tissue interaction on the needle shape, the resulting deformations of the tissues are also computed.
The method consists in using a finite set of elements interacting with each other, each element representing a small region of interest of the objects being modeled, as can be seen in Fig. 2.3a. This allows the modeling of the needle deformations as well as of the motion of a targeted region in the tissues, due to the needle interaction or due to external manipulation of the tissues [THA+09][PVdBA11]. This requires a description of the geometry of the tissues and the needle, as well as a certain number of physical parameters for each of them, depending on the chosen complexity of the mechanical model. In general the computational complexity of such models is high compared to other modeling methods. The time required for the computations increases with the level of detail of the model, i.e. the number of elements used to represent each object, and with the number and complexity of the phenomena taken into consideration. Modeling the exact boundary conditions and properties of real in vivo objects, such as different organs in the body, is also a challenging task. This makes FEM hard to use for real-time processing without dedicated hardware optimization and limits its use to pre-planning of needle and target trajectories [AGPH09][Hamzé, "Preoperative trajectory planning for percutaneous procedures in deformable environments"]. However, it offers a great flexibility on the level of complexity, which can be chosen independently for the different components of the model. We provide in the following a short overview of different models that can be used for the needle and the tissues.

Needle: Various levels of complexity can be chosen for the needle model. A 1D beam model is often used under various forms.
It can for example be a rigid beam [DiMaio, "Needle insertion modeling and simulation"], a flexible beam [DiMaio, "Interactive simulation of needle insertion models"] or a succession of rigid beams linked by angular springs [GDS09][Haddadi, "Development of a dynamic model for bevel-tip flexible needle insertion into soft tissues"]. The needle geometry can also be modeled entirely in 3D to accurately represent its deformations and the effect of the tip geometry [MRD+08][YTS+14].

Tissues: Tissues can also be modeled with different levels of complexity, ranging from a 2D rectangular mesh with elastostatic behavior [DiMaio, "Interactive simulation of needle insertion models"] to a 3D mesh with real organ shape [CAR+09] (see Fig. 2.3b) and dynamic nonlinear behavior [Tang, "Constraint-based soft tissue simulation for virtual surgical training"]. The complexity of the interactions between the needle and the tissues can also vary. In addition to interaction forces due to the lateral displacements of the needle, tangential forces are often added as an alternation between friction and stiction along the needle shaft [DGM+09], introducing a highly non-linear behavior. The complexity of the tissue cutting at the needle tip and along the needle shaft can also vary greatly. It usually involves a change in the topology of the model [CAK+14], which can be simple to handle if the needle is modeled as a 1D beam [Goksel, "Haptic simulator for prostate brachytherapy with simulated needle and probe interaction"] or more complex when using a 3D modeling of the needle and a non-linear fracture phenomenon in the tissues [Oldfield, "Detailed finite element modelling of deep needle insertions into a soft tissue phantom using a cohesive approach"][YTS+14].
2.3 Mechanics-based modeling

Mechanics-based models are used to model the entire shaft of the needle and its interactions with the surrounding tissues. The needle is thus often modeled as a 1D beam with a given flexibility that depends on the mechanical properties of the real needle. On the other hand, the tissues are not entirely modeled as is done with finite element modeling (FEM): only the local interaction with the needle is taken into account.

Bernoulli beam equations: A first way to model the interaction between the needle shaft and the tissues is to use a set of discrete virtual springs placed along the shaft of the needle, as was done in 2D by Glozman et al. [Glozman, "Image-guided robotic flexible needle steering"]. The needle is cut into multiple flexible beams and virtual springs are placed normal to the needle at the intersections of the beam extremities, as depicted in Fig. 2.4a. Knowing the position and orientation of the needle base and the position of the springs, the shape of the needle can be computed using the Bernoulli beam equations for small deflections. Concerning the interaction of an asymmetric needle tip with the tissues, a combination of axial and normal virtual springs can also be used to locally model the deflection of the tip [Dorileo, "Needle deflection prediction using adaptive slope model"]. Instead of using discrete springs, the Bernoulli equations can also be applied when the needle-tissue interaction is modeled using a distributed load applied along the needle shaft [KFR+15], as illustrated in Fig. 2.4b. This allows a continuous modeling of the interaction along the needle shaft, resulting in a smoother behavior compared to the successive addition of discrete springs.

Energy-based method: An energy-based variational method can also be used instead of directly solving the Bernoulli equations for the needle shape.
This method, known as the Rayleigh-Ritz method and used by Misra et al. [MRS+10], consists in computing the shape of the needle that minimizes the total energy stored in the system. It has been shown that this energy is mainly the sum of the bending energy stored in the needle, the deformation energy stored in the tissues and the work due to tissue cutting at the tip and to the insertion force at the base. This method can be combined with different models of the interaction of the needle with the tissues, as long as a deformation energy can be computed. For example, a combination of virtual springs along the needle shaft and a continuous load at the end of the needle can be used [Roesthuis, "Mechanics-based model for predicting in-plane needle deflection with multiple bends"]. Different methods are also available to define these continuous loads. They can be computed depending on the distance between the needle and the tissues, as a continuous version of the virtual springs. In this case the position of the tissues can be taken depending on a previous position of the needle shaft [MRS+10] or tip [KFR+15]. The continuous load can also be directly estimated online [Wang, "Mechanics-based modeling of needle insertion into soft tissue"]. The two methods stated above can also be used with pseudo-continuous models of the needle instead of continuous ones. In this case the needle is modeled using a succession of rigid rods linked by angular springs which model the compliance of the needle [Goksel, "Modeling and simulation of flexible needles"]. Such a model is simpler since its shape is described only by the angles between successive rods, without requiring additional parameters for the shape of the rods.

Dynamic behavior: The different models presented in this section mainly allow modeling the quasi-static behavior of a needle inserted in the tissues.
The dynamics of the insertion can also be modeled by adding a mass to the needle beams, visco-elastic properties to the elements modeling the tissues and a model of friction along the shaft [YPY+09][KRU+16]. The friction mainly occurs during the insertion of the needle; nevertheless it has been shown that a rotation lag between the needle base and the needle tip can also appear [Reed, "Modeling and control of needles with torsional friction"]. Hence a model of torsional friction and needle torsion can be added when the needle rotates around its axis [Swensen, "Torsional dynamics of steerable needles: modeling and fluoroscopic guidance"]. However, as stated in section 2.1 for kinematic models, each additional layer of modeling requires the knowledge or estimation of new parameters, in addition to an increased computational complexity. Hence, the number of phenomena that can be included depends on the intended use of the model: a high number for offline computations, thereby approaching FEM models, or a reduced number to keep real-time capabilities, like kinematic models.

2.4 Generic model of flexible needle

In this section we describe and compare two models that we propose for the 3D modeling of a flexible needle with an asymmetric tip interacting with moving soft tissues. These models were designed to provide a quasi-static representation of the whole body of the needle that can be used in a real-time needle steering control scheme. They both use a 1D beam representation for the needle and a local representation for the tissues to keep the computational cost low enough. The first model is inspired by the virtual springs approach presented in section 2.3. This approach is extended to 3D and a reaction force is added at the needle tip to take into account an asymmetric geometry of the tip.
The second model is a two-body model where the needle interacts with a second 1D beam representing the cut path generated by the needle tip in the tissues. Note that we use 3D models to account for all the phenomena occurring in practice. It would be possible to maintain the trajectory of the needle base in a 2D plane using a robotic manipulator; however, the motions of the tissues occur in all directions and cannot be controlled. Therefore the body of a flexible needle can also move in any direction, such that 3D modeling is necessary.

2.4.1 Needle-tissue interaction model with springs

We describe here the first model that we propose, which is inspired by the 2D virtual springs model used in [Glozman, "Image-guided robotic flexible needle steering"].

Interaction along the needle shaft: The interaction between the needle and the tissues is modeled locally using n 3D virtual springs placed along the needle shaft. We define each 3D spring, with index i ∈ [[1, n]], using 3 parameters: a scalar stiffness K_i, a rest position p_0,i ∈ R³ defined in the world frame {F_w} and a plane P_i that contains p_0,i (see Fig. 2.5). The rest position p_0,i of the spring with index i corresponds to the initial location of one point of the tissues when no needle is pushing on it. The plane P_i is used to define the point of the needle p_N,i ∈ R³ on which the spring with index i is acting. Each time the model needs to be recomputed, the plane P_i is reoriented such that it is normal to the needle and still passes through the rest position p_0,i. This way the springs only model the normal forces F_s,i ∈ R³ applied on the needle shaft, without tangential component. We use an elastic behavior to model the interaction between the needle and the tissues, such that the force exerted by the spring on the point p_N,i can be expressed as

\[ F_{s,i} = -K_i \, (p_{N,i} - p_{0,i}). \tag{2.3} \]
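As a small numerical illustration of the spring force (2.3), combined with the per-spring stiffness K_i = K_T l_i introduced in (2.4), the following sketch (not the thesis implementation; all values are illustrative) evaluates the force exerted on a laterally displaced needle point:

```python
def spring_force(p_N, p_0, K_T, l_i):
    """Force (2.3) of one virtual spring of stiffness K_i = K_T * l_i (2.4)."""
    K_i = K_T * l_i  # stiffness of the i-th spring
    return [-K_i * (a - b) for a, b in zip(p_N, p_0)]

# A needle point displaced 1 mm laterally from its rest position, supported by
# l_i = 5 mm of tissue with K_T = 50000 N/m^2 (illustrative values):
F = spring_force(p_N=[0.001, 0.0, 0.05], p_0=[0.0, 0.0, 0.05],
                 K_T=50000.0, l_i=0.005)
# The lateral component F[0] is -0.25 N: the spring pushes the needle back
# toward the rest position p_0.
```

In the full model the tangential component of such forces is removed by the projection onto the plane P_i, so only the lateral restoring effect illustrated here remains.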
Figure 2.5: Illustration of the mechanical 3D model of needle-tissue interaction using virtual springs and a reaction force at the tip.

The stiffness K_i of each spring is computed such that it approximates a given stiffness per unit length K_T:

\[ K_i = K_T \, l_i, \tag{2.4} \]

where l_i is the length of the needle that is supported by the spring with index i. This length l_i can vary depending on the actual distances between the points p_N,i-1, p_N,i and p_N,i+1. For simplicity we consider here that the tissue stiffness per unit length K_T is constant all along the needle. However it would also be possible to change the value of K_T depending on the depth of the spring in the tissues and therefore consider the case of inhomogeneous tissues or variable tissue geometry. In practice this parameter should be estimated beforehand or with an online estimation method to adapt to unknown stiffness changes. However, online estimation of K_T will not be considered in this work.

The needle is then modeled by a succession of n + 1 segments such that the extremities of the segments lie on the planes P_i of the virtual springs, except for the needle base, which is fixed to a needle holder, and the needle tip, which is free. Each segment is approximated in 3D using a polynomial curve c_j(l) of order r so that

\[ c_j(l) = M_j \begin{bmatrix} 1 & l & \cdots & l^r \end{bmatrix}^T, \tag{2.5} \]

where j ∈ [[1, n + 1]] is the segment index and c_j(l) ∈ R³ is the position of a point of the segment at the curvilinear coordinate l ∈ [0, L_j], with L_j the total length of the segment. The matrix M_j ∈ R^(3×(r+1)) contains the coefficients of the polynomial curve.

Interaction at the tip: The model defined so far is sufficient to take into account the interaction with the tissues along the needle shaft. However the specific interaction at the tip of the needle still needs to be added.
We represent all the normal efforts exerted at the tip of the needle by an equivalent normal force F_tip and an equivalent normal torque T_tip exerted at the extremity of the needle, just before the beginning of the bevel. In order to model a beveled tip, this force and this torque are computed using a model of the bevel with triangular loads distributed on each side of the tip, as proposed by Misra et al. [MRS+10]. Let us define α as the bevel angle, b as the length of the face of the bevel, a as the length of the bottom edge of the needle tip and β as a cut angle that indicates the local direction in which the needle tip is currently cutting the tissues, as depicted in Fig. 2.6. Note that the point O in Fig. 2.6 corresponds to the last extremity of the 3D curve used to represent the needle. The equivalent normal force F_tip and torque T_tip exerted at the point O can be expressed as

\[ F_{tip} = \left( \frac{K_T b^2}{2} \tan(\alpha - \beta) \cos\alpha - \frac{K_T a^2}{2} \tan\beta \right) y, \tag{2.6} \]

\[ T_{tip} = \left( \frac{K_T a^3}{6} \tan\beta - \frac{K_T b^3}{6} \tan(\alpha - \beta) \left( 1 - \frac{3}{2} \sin^2\alpha \right) \right) x, \tag{2.7} \]

where x and y are the axes of the tip frame {F_t} as defined in Fig. 2.6.

Tip orientation around the shaft: We assume that the orientation of the base frame {F_b} (see Fig. 2.5) is known and that the torsional bending of the needle can be neglected. The first assumption usually holds in the case of robotic needle manipulation, where the needle holder can provide a feedback on its pose. The second assumption can however be debated, since it has been shown that stiction along the needle can introduce a lag and a hysteresis between the base and tip rotations [Abayazid, "3D flexible needle steering in soft-tissue phantoms using fiber Bragg grating sensors"]. However, inserting the needle is usually sufficient to break the stiction and reset this lag [Reed, "Modeling and control of needles with torsional friction"].
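The bevel loads (2.6)-(2.7) can be evaluated numerically. The following minimal sketch (not the thesis implementation; the stiffness is illustrative and the relation a = b cos α is an assumption for a plain bevel, not a property of the Aurora needle) computes the signed magnitudes of F_tip and T_tip along the y and x axes of the tip frame:

```python
import math

def tip_loads(K_T, a, b, alpha, beta):
    """Signed magnitudes of the bevel force (2.6) and torque (2.7)."""
    F = (K_T * b**2 / 2) * math.tan(alpha - beta) * math.cos(alpha) \
        - (K_T * a**2 / 2) * math.tan(beta)
    T = (K_T * a**3 / 6) * math.tan(beta) \
        - (K_T * b**3 / 6) * math.tan(alpha - beta) * (1 - 1.5 * math.sin(alpha)**2)
    return F, T

K_T = 50000.0              # tissue stiffness per unit length (N/m^2), illustrative
alpha = math.radians(30)   # bevel angle
b = 0.002                  # bevel face length (m)
a = b * math.cos(alpha)    # bottom edge length, assumed plain-bevel geometry
F, T = tip_loads(K_T, a, b, alpha, beta=0.0)
```

With β = 0 the bottom edge is unloaded and the bevel face alone produces a positive transverse force, which is what deflects the tip during a straight insertion.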
Following the discussion above, we assume that the orientation of the tip frame {F_t} around the tip axis can be computed directly from the base orientation and the needle shape.

Computation of the needle shape: In order to maintain adequate continuity properties of the needle, second-order continuity constraints are added, namely

\[ c_j(L_j) = c_{j+1}(0), \tag{2.8} \]
\[ \left. \frac{dc_j}{dl} \right|_{l=L_j} = \left. \frac{dc_{j+1}}{dl} \right|_{l=0}, \tag{2.9} \]
\[ \left. \frac{d^2 c_j}{dl^2} \right|_{l=L_j} = \left. \frac{d^2 c_{j+1}}{dl^2} \right|_{l=0}. \tag{2.10} \]

The total normal force F_j at the extremity of the segment j can be calculated from the sum of the forces exerted by the springs located from this extremity to the needle tip, so that

\[ F_j = \Pi_j \left( F_{tip} + \sum_{k=j}^{n} F_{s,k} \right), \tag{2.11} \]

where Π_j stands for the projection onto the plane P_j. The projection is used to remove the tangential part of the force and to keep only the normal component. This normal force introduces a constant shear force all along the segment and, using the Bernoulli beam equation, we have

\[ EI \, \frac{d^3 c_j}{dl^3}(l) = -F_j, \tag{2.12} \]

with E the needle Young's modulus and I its second moment of area. Note that in the case of a radially symmetric needle section the second moment of area is defined as

\[ I = \iint_\Omega x^2 \, dx \, dy, \tag{2.13} \]

where the integral is performed over the entire section Ω of the needle. For a hollow circular needle, I can be calculated from the outer and inner diameters, d_out and d_in respectively, according to

\[ I = \frac{\pi}{64} \left( d_{out}^4 - d_{in}^4 \right). \tag{2.14} \]

Finally the moment due to the bevel force gives the following boundary condition:

\[ EI \left. \frac{d^2 c_{n+1}}{dl^2} \right|_{l=L_{n+1}} = T_{tip} \times z, \tag{2.15} \]

where T_tip is the torque exerted at the tip defined by (2.7) and z is the axis of the needle tip frame {F_t} as defined in Fig. 2.6. In practice we expect real-time performance from the model, so the complexity should be as low as possible. We use here third-order polynomials (r = 3) to represent the needle, such that each polynomial curve is represented by 12 coefficients.
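For instance, the second moment of area (2.14) of a thin hollow needle can be computed as follows (a minimal sketch with illustrative diameters; the Young's modulus is an assumed value, not a measured property of the needle used here):

```python
import math

def second_moment_hollow(d_out, d_in):
    """Second moment of area of a hollow circular section, eq. (2.14)."""
    return math.pi / 64 * (d_out**4 - d_in**4)

# Illustrative diameters for a thin biopsy needle (meters):
I = second_moment_hollow(d_out=0.7e-3, d_in=0.48e-3)
# Bending stiffness EI entering (2.12), with an assumed modulus E = 50 GPa:
EI = 50e9 * I
```

The fourth-power dependence on the diameters means that small changes in the needle gauge strongly affect the bending stiffness EI, and hence the flexibility of the modeled needle.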
Third order is the lowest order for which the mechanical equations can be solved directly. From a given needle base pose and a given set of virtual springs, the shape of the needle can then be computed. The needle model is defined by 12 × (n + 1) parameters, corresponding to the coefficients of the polynomial segments. The continuity conditions provide 9 × n equations. The fact that the segment extremities have to stay in the planes defined by the springs adds n equations, and the spring forces in the planes define 2 × n equations. The base position and orientation give 6 additional boundary equations. The tip conditions also give 6 equations due to the tip force and tip moment. The final shape of the needle is thus solved as a linear problem of 12 × (n + 1) unknowns and 12 × (n + 1) equations. In practice we used the Eigen C++ library for sparse linear problem inversion.

Insertion of the needle: During the insertion, springs are added regularly at the tip to account for the new amount of tissue supporting the end of the needle. Once the last segment of the needle reaches a threshold length L_thres, a new spring is added at the tip. The rest position of the spring is taken as the initial position of the tissue before the tip has cut through it, corresponding to point A in Fig. 2.6. The next section presents the second model that we propose, where the successive springs are replaced by a continuous line. A different method is used to solve the needle parameters, allowing a decoupling between the number of elements used to represent the needle and the tissues.

2.4.2 Needle-tissue interaction model using two bodies

In this section we model the interaction between the needle and the tissues as an elastic interaction between two one-dimensional bodies.
Needle and tissues modeling: One of the bodies represents the needle shaft and the other one represents the rest position of the path that was cut in the tissues by the needle tip during the insertion (see Fig. 2.7). Note that the needle body (depicted in red in Fig. 2.7) actually represents the current shape of the path cut in the tissues, while the tissue body (depicted in green in Fig. 2.7) represents this same cut path without taking into account the interaction with the needle, i.e. the resulting shape of the cut after the needle is removed from the tissues. Both bodies are modeled using polynomial spline curves c, such that

\[ c(l) = \sum_{i=1}^{n} c_i(l), \quad l \in [0, L], \tag{2.16} \]

\[ c_i(l) = \chi_i(l) \, M_i \begin{bmatrix} 1 & l & \cdots & l^r \end{bmatrix}^T, \tag{2.17} \]

where c(l) ∈ R³ is the position of a point at the curvilinear coordinate l, L is the total length of the curve, M_i ∈ R^(3×(r+1)) is a matrix containing the coefficients of the polynomial curve c_i and χ_i is the characteristic function of the curve, which takes the value 1 on the definition domain of the curve and 0 elsewhere. The parameters n and r represent respectively the number of curves of the spline and the polynomial order of the curves. Both can be tuned to find a trade-off between model accuracy and computation time. In the following we add the subscript or superscript N or T on the different parameters to indicate that they respectively correspond to the needle and to the tissues.

Computation of the needle shape: For simplicity we assume that the tissues have a quasi-static elastic behavior, i.e. the force exerted on each point of the needle is independent of time and proportional to the distance between this point and the rest cut path. This should be a good approximation as long as the needle remains near the rest cut path, which should be ensured in practice to avoid tissue damage. We denote by K_T the interaction stiffness per unit length corresponding to this interaction.
Given a segment of the needle between curvilinear coordinates l_1 and l_2, the force exerted on it by the tissues can thus be expressed as

\[ F(l_1, l_2) = -K_T \int_{l_1}^{l_2} \left( c^N(l) - c^T(l) \right) dl. \tag{2.18} \]

Figure 2.7: Illustration of the whole needle insertion model (left) and zoom on the tip for different tip geometries (right). Needle segments are in red and the rest position of the path cut in the tissues is in green. New segments are added to the cut path according to the location of the cutting edge of the tip.

It has been shown in previous work [MRS+09] that the quasi-totality of the energy stored in the needle-tissue system consists in the bending energy of the needle E_N and the deformation energy of the tissues E_T. We use the Rayleigh-Ritz method to compute the shape of the needle which minimizes the sum of these two terms. According to the Euler-Bernoulli beam model, the bending energy E_N of the needle can be expressed as

\[ E_N = \frac{EI}{2} \int_0^{L_N} \left\| \frac{d^2 c^N(l)}{dl^2} \right\|^2 dl, \tag{2.19} \]

where E is the Young's modulus of the needle, I is its second moment of area (see the definition in (2.13)) and L_N is its length. By tuning the parameters E and I according to the real needle, both rigid and flexible needles can be represented by this model. The energy stored in the tissues due to the needle displacement can be expressed as

\[ E_T = \frac{K_T}{2} \int_0^{L_T} \left\| c^N(L_{free} + l) - c^T(l) \right\|^2 dl, \tag{2.20} \]

where L_free is the length of the free part of the needle, i.e. from the needle base to the tissue surface, and L_T is the length of the path cut in the tissues.

We add the constraints imposed by the needle holder, which fix the needle base position p_b and direction d_b, so that

\[ c^N(0) = p_b, \tag{2.21} \]
\[ \frac{dc^N}{dl}(0) = d_b. \tag{2.22} \]

Continuity constraints up to order two are also added on the spline coefficients:

\[ c^N_i(l_i) = c^N_{i+1}(l_i), \tag{2.23} \]
\[ \left. \frac{dc^N_i}{dl} \right|_{l=l_i} = \left. \frac{dc^N_{i+1}}{dl} \right|_{l=l_i}, \tag{2.24} \]
\[ \left. \frac{d^2 c^N_i}{dl^2} \right|_{l=l_i} = \left. \frac{d^2 c^N_{i+1}}{dl^2} \right|_{l=l_i}, \tag{2.25} \]

where l_i is the curvilinear coordinate along the needle spline corresponding to the end of segment c^N_i and the beginning of segment c^N_{i+1}. In order to take into account the length of the tip, which can be long for example for pre-bent tips or beveled tips with a small bevel angle, the tip is modeled as an additional polynomial segment added to the needle spline, as can be seen in Fig. 2.7. The corresponding terms are added to the bending energy (2.19) and tissue energy (2.20), similarly to the other segments. The system is then solved as a minimization problem under constraints, expressed as

\[ \min_m \; E_N + E_T \quad \text{subject to} \quad A m = b, \tag{2.26} \]

where m is a vector stacking all the coefficients of the matrices M_i and where the matrix A and the vector b represent the constraints (2.21) to (2.25). In practice this minimization problem reduces to the inversion of a linear system, so that we also used the Eigen C++ library for sparse linear problem inversion.

Tip orientation around the shaft: Similarly to the previous model (see section 2.4.1), we assume that there is no lag between the tip rotation and the base rotation along the needle shaft. This way the orientation of the tip can be computed from the orientation of the base and the shape of the needle. A more complex modeling of the torsional compliance of the needle could however be necessary in the case of a pre-bent tip, whose shape could cause a higher torsional resistance.

Insertion of the needle: As the needle progresses in the tissues and the length of the cut path increases, we update the modeled rest cut path by adding new segments to the spline curve.
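To give an idea of how the minimization (2.26) behaves, the following minimal planar sketch (not the thesis implementation, which works in 3D with spline segments and Eigen) discretizes the lateral deflection w(l) of a clamped needle at evenly spaced stations and minimizes a discrete analogue of E_N + E_T by linear least squares. All numerical values are illustrative:

```python
import numpy as np

N, L = 41, 0.08                       # stations and needle length (m)
h = L / (N - 1)
EI = 5e-4                             # bending stiffness (N m^2), illustrative
K_T = 5e4                             # tissue stiffness per unit length (N/m^2)
w_rest = np.full(N, 1e-3)             # rest cut path shifted 1 mm laterally

# Stack every quadratic energy term as a row of a least-squares system
# ||A w - rhs||^2, whose minimizer approximates the minimizer of E_N + E_T.
rows, rhs = [], []
for k in range(1, N - 1):             # bending terms ~ sqrt(EI h) * w''(l_k)
    r = np.zeros(N)
    r[k - 1:k + 2] = np.sqrt(EI * h) / h**2 * np.array([1.0, -2.0, 1.0])
    rows.append(r)
    rhs.append(0.0)
for k in range(N):                    # tissue terms ~ sqrt(K_T h) * (w - w_rest)
    r = np.zeros(N)
    r[k] = np.sqrt(K_T * h)
    rows.append(r)
    rhs.append(np.sqrt(K_T * h) * w_rest[k])
for k in (0, 1):                      # clamped base (zero position and slope)
    r = np.zeros(N)
    r[k] = 1e6                        # constraints enforced by a large penalty
    rows.append(r)
    rhs.append(0.0)

w = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
# Near the clamped base the needle bends away from the shifted rest path,
# and far from the base it settles onto it (w -> 1 mm).
```

The boundary layer over which the needle transitions from the clamped base to the rest cut path scales with (EI / K_T)^(1/4), which illustrates how the stiffness ratio between needle and tissues governs the shape of the solution.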
Each time the model is updated, if the needle has been inserted by more than a defined threshold L_thres, a new segment is added such that its extremity corresponds to the location of the very tip of the needle, i.e. where the cut occurs in the tissues. This way the model can take into account the specific geometry of the needle tip. In the case of a symmetric tip, the cut path stays aligned with the needle axis. On the other hand it is shifted with respect to the center line of the needle shaft for an asymmetric tip, as depicted in Fig. 2.7, leading to the creation of a force that pulls the needle toward the direction of the cut.

It can be noted that external tissue deformations can be taken into account with this kind of modeling. Indeed, deformations of the tissues created by external sources, like tissue manipulation or natural physiological motions (heartbeat, breathing, etc.), induce modifications of the shape and position of the rest cut path. This, in turn, changes the shape of the needle via the interaction model. External tissue motions will be further studied in section 3.5 of the next chapter. Another advantage of this model is that the number of polynomial curves of the needle spline is fixed and is independent of the number of curves of the tissue spline. This leads to a better control over the computational complexity of the model compared to the virtual springs approach presented in section 2.4.1. With the virtual springs model, the number of parameters to compute increases as the needle is inserted deeper into the tissues, due to the progressive addition of springs and needle segments. This is an important point for the use of the model in a real-time control framework. In the next section we compare the performances of both models in terms of accuracy of the obtained tip trajectories and computation time.
2.5 Validation of the proposed models

In this section we compare the performances of the models defined previously in terms of accuracy of the representation of the needle behavior. We compare the simulated trajectories of the needle tip obtained with both models to the real trajectories of a needle inserted in soft tissues under various motions of the needle base. We first describe the experiments performed to acquire the trajectories of the base and tip of the needle and then compare these trajectories to the ones generated using both models.

Experimental conditions (setup in the Netherlands): We use the needle insertion device (NID) attached to the end effector of the UR3 robot to insert the Aurora biopsy needle in a gelatin phantom, as depicted in Fig. 2.8. The needle is 8 cm outside of the NID and its length does not vary during the insertion. The position of the needle tip is tracked and recorded using the Aurora electromagnetic (EM) tracker embedded in the tip and the field generator. The pose of the needle base, at the tip of the NID (center of frame {F_b} in Fig. 2.8), is recorded using the odometry of the UR3 robot. The phantom has a Young's modulus of 35 kPa and is kept fixed during the experiments.

Experimental scenarios: Different trajectories of the needle base are performed to test the models in all possible directions of motion. The different insertions are performed at the center of the phantom, such that they do not cross each other. We use 12 different insertion scenarios and repeat each scenario 3 times, leading to a total of 36 insertions. Each scenario is decomposed as follows. The needle is first placed perpendicular to the surface of the phantom such that the tip barely touches the surface. Then the needle is inserted 1 cm into the phantom, by translating the robot along the needle axis. Then a motion of the needle base is applied before restarting the insertion for 5 cm.
The applied motion is expressed in the frame of the needle base {F_b} and consists in either no motion, a translation of ±2 mm along the x or y axis, a rotation of ±3° around the x or y axis, or a rotation of +90°, -90° or 180° around the z axis. An example of the measured tip trajectories for each type of base motion can be seen in solid lines in Fig. 2.9 to 2.13. The tip position is expressed in the initial frame of the tip, at the surface of the phantom.

Generation of model trajectories: In order to generate the different trajectories of the needle tip using both models, we first set their parameters according to the physical properties of the needle. The needle length is set to 8 cm and the other parameters are set according to the properties of the Aurora needle given in Table 1.1. The polynomial order of the curves is set to r = 3 for both models and the length threshold defining the addition of a virtual spring or of a tissue spline segment is set to L_thres = 1 mm. The length of the needle segments for the two-body model is set to 1 cm, resulting in a total of n = 8 segments. We recall that the number of segments for the virtual springs model varies with the number of springs added during the insertion. One tip trajectory is then generated with both models for each experiment by applying the motion of the base recorded during the experiment to the base of the model. The value of the model stiffness per unit length K_T of both models is optimized separately such that the final error between the simulated tip positions and the measured tip positions is minimized. Since the insertions are performed in the same phantom and at similar locations, the same value of K_T is used for all experiments. The best fit is obtained with K_T = 49108 N.m^-2 for the two-body model and K_T = 56868 N.m^-2 for the virtual springs model. As mentioned in the previous section, in clinical practice this parameter can be difficult to estimate beforehand and would certainly need to be estimated online. It can be observed in Fig. 2.9 to 2.13 that the tip trajectories measured for similar base motions in symmetric directions are not symmetric.
This is due to a misalignment between the axis of the NID, in which the motions are performed, and the real axis of the needle. This misalignment corresponds to a rotation of 1.0° around the axis [0.5 0.86 0]^T in the base frame {F_b}. Similarly, an orientation error of 4.1° is observed between the orientation of the NID around the needle axis and the orientation of the bevel. A correction is thus applied to the needle base pose measured from the robot odometry to obtain the pose that is applied to the modeled needle base. An example of the simulated tip trajectories for each type of base motion can be seen in Fig. 2.9 to 2.13, with long-dashed lines for the virtual springs model and short-dashed lines for the two-body model.

Figure 2.12: Tip position obtained when a rotation is applied around the y axis of the base frame between two insertion steps along the z axis. Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines.

Figure 2.13: Tip position obtained when a rotation is applied around the z axis of the base frame between two insertion steps along the z axis. Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines.

Results: We can observe that both models follow the global behavior of the needle during the insertion. Let us first focus on the effect of the asymmetry of the beveled tip. We can see that during an insertion without lateral base motions, the deviation due to the bevel is well taken into account, as for example in Fig. 2.9b. We observe the same kind of constant-curvature trajectories that are usually obtained with kinematic models. However it can also be seen that during the first few millimeters of the insertion, a lateral translation of the tip occurs, which does not fit the constant-curvature trajectory appearing later.
This effect is due to the fact that the bevel is cutting laterally while the needle body is not yet embedded in the gelatin. The reaction force generated at the bevel is thus mostly compensated by the stiffness of the needle, which is low due to the length of the needle. This effect can usually be reduced in practice by using a sheath around the body of the flexible needle, such that it cannot bend outside the tissues [Webster et al., "Design considerations for robotic needle steering"]. However in a general case, this kind of effect is not taken into account by kinematic models, while it can be represented using our mechanics-based models. Let us now consider the influence of lateral base motions on the behavior of the needle. We can see that the tip trajectory is modified in the same manner for both models and follows the general trajectory of the real needle tip. Therefore both models can be used to provide a good representation of the whole 3D behavior of a flexible needle inserted in soft tissues. This is a great advantage over kinematic models, which do not consider lateral base motions at all. Concerning the accuracy of the modeling, some limitations seem to appear when the base motion tends to push the surface of the bevel against the tissues. This is for example the case for a positive translation along the y axis of the base frame (green curves in Fig. 2.10b) or a negative rotation around the x axis (blue curves in Fig. 2.11b). In these cases both models seem to amplify the effect of the base motions on the following trajectory of the tip. However this could also be due to the experimental conditions. A small play between the needle and the NID could indeed cause an attenuation of the motion transmitted to the needle base.
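For reference, the constant-curvature behavior mentioned above is the one produced by the classical unicycle-type kinematic model of a bevel-tip needle. A minimal sketch of such a simulation follows; it assumes a constant natural curvature, ignores any interaction along the shaft, and uses function and parameter names of our own (the rotation-scheduling API is purely illustrative):

```python
import numpy as np

def simulate_bevel_tip(length, curvature, axial_rotations=(), step=1e-4):
    """Constant-curvature kinematic model of a bevel-tip needle.
    The tip advances along its local z axis while bending with a fixed
    curvature in its local x-z plane; an axial rotation of the base
    reorients the bending plane.  `axial_rotations` is a list of
    (depth, angle) pairs applied when the given depth is reached."""
    R = np.eye(3)                   # tip orientation
    p = np.zeros(3)                 # tip position
    pending = sorted(axial_rotations)
    d = 0.0
    while d < length - 1e-12:
        ds = min(step, length - d)
        while pending and pending[0][0] <= d + 1e-12:
            _, ang = pending.pop(0)  # rotation around the needle axis (local z)
            c, s = np.cos(ang), np.sin(ang)
            R = R @ np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        if abs(curvature) < 1e-12:   # straight segment
            disp = np.array([0.0, 0.0, ds])
            bend = np.eye(3)
        else:                        # exact circular-arc step of length ds
            t = curvature * ds
            disp = np.array([(1.0 - np.cos(t)) / curvature, 0.0,
                             np.sin(t) / curvature])
            ct, st = np.cos(t), np.sin(t)
            bend = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
        p = p + R @ disp
        R = R @ bend
        d += ds
    return p, R

# Pure insertion of 10 cm with a 20 cm curvature radius: circular arc in x-z
tip, _ = simulate_bevel_tip(0.10, 5.0)
```

A 180° axial rotation halfway through the insertion produces the expected S-shaped trajectory with a reduced lateral deviation of the tip.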
[Figure 2.14: absolute final tip position error (mm) for each scenario — None, Tx = ±2 mm, Ty = ±2 mm, Rx = ±3°, Ry = ±3°, Rz = +90°, −90°, +180° — and their mean.]

Accuracy comparison: Let us now compare the performances of both models in terms of accuracy. The absolute final errors between the tip positions simulated by the models and the tip positions measured during the experiments are summarized in Fig. 2.14. Mean values are provided across the 3 insertions performed for each scenario. The average position error over the insertion process is provided as well in Fig. 2.15. The average is taken over the whole insertion and across the 3 insertions performed for each scenario. We can see that the two-body model provides in each scenario a better modeling accuracy on the trajectory of the needle tip. While both models tend to give similar results in the general case, the virtual springs model particularly deviates from the measures when rotations around the needle axis are involved. This is also clearly visible in Fig. 2.13a. Several reasons may be invoked to explain this result. First, it is possible that the discrete nature of the springs has a negative effect on the modeling accuracy compared to a continuous modeling of the load applied on the needle shaft. However we believe that this effect is not predominant here, since the thresholds chosen for both models (distance between successive springs in one case, length of the segment added to the cut path spline in the other) were the same and had small values compared to the curvature of the needle. The second possible reason is that the model used to compute the force and torque at the needle tip is not the best way to represent the interaction with the tissues. Indeed, the computation of the continuous loads applied on the sides of the tip does not take into account the real 3D shape of the tip, which has a circular section.
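As an aside, the two error metrics reported in Fig. 2.14 and 2.15 (final tip error and average error over the insertion) are straightforward to compute from matched trajectories; a minimal numpy sketch, with a function name of our own and assuming both trajectories are sampled at matching instants:

```python
import numpy as np

def tip_errors(simulated, measured):
    """Final and average Euclidean tip position errors between two
    trajectories given as (N, 3) arrays sampled at matching instants."""
    simulated = np.asarray(simulated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    dist = np.linalg.norm(simulated - measured, axis=1)
    return dist[-1], dist.mean()

final_err, avg_err = tip_errors([[0, 0, 0], [1, 0, 0]],
                                [[0, 0, 0], [1, 3, 4]])  # -> (5.0, 2.5)
```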
The force magnitude is also independent of the orientation of the bevel, which might not be true during the rotation of the needle around its axis, leading to a wrong orientation of the tip when the insertion restarts. Concerning the use of the model in clinical practice, we can see that the final tip positioning error obtained with the two-body model is around 2 mm in each case. This can be sufficient to reach tumors of standard size in open-loop control. However this assumes that the stiffness parameter K_T was well estimated beforehand, which might be difficult in practice and could require the use of an online estimation method.

[Figure 2.15: average tip position error (mm) over the insertion for each base-motion scenario — None, Tx = ±2 mm, Ty = ±2 mm, Rx = ±3°, Ry = ±3°, Rz = +90°, −90°, +180° — and their mean.]

Computation time comparison: Let us finally compare the time required to compute both models depending on the number of elements used to represent the tissues, i.e. the number of springs for the virtual springs model and the number of tissue spline segments for the two-body model. The computation times are acquired during a simulation of the needle insertion using the same parameters as in the previous experiments. The results are depicted in Fig. 2.16. It is clearly visible that the number of virtual springs increases the computation time, as could be expected from the fact that the number of needle parameters to compute directly depends on the number of springs. On the contrary, the number of tissue spline segments of the two-body model does not have a significant influence on the computation time, since the number of parameters of the needle spline is fixed and chosen independently, hence the size of the linear problem to solve is also fixed. This is a clear advantage of the two-body model since the computation time can then be independent of the state of the insertion and can also be tuned beforehand to obtain the desired real-time performances.
Note that we obtained a computation time around 2 ms, which is sufficient for real-time computation in our case; however, this could vary greatly and be tuned depending on the available hardware and the number of segments chosen to represent the needle. In conclusion, we will use the two-body model in the following experiments, because it can provide an accurate estimation of the 3D behavior of the needle and can be used for real-time processing due to its deterministic computation time. It is also easier to adapt to different kinds of tip geometry, and the motion of the tissue spline can be used to model external displacements of the tissues.

Conclusion

We presented a review of needle-tissue interaction models separated into three categories corresponding to different cases of use. Typically, kinematic models ignore the interaction of the needle body with the surrounding tissues and only consider the trajectory of the needle tip. Hence they are computationally inexpensive and are well adapted for real-time control of the insertion of needles with asymmetric tips. On the other side, complete models of the needle and tissues based on finite element modeling offer an accurate but complex modeling of insertion procedures. They usually require far more resources, which limits their use to applications where real-time performances are not a priority, such as needle insertion pre-planning or surgical training simulations. In-between these two categories are mechanics-based models, which use local approaches to model the full behavior of a needle being inserted in soft tissues while keeping aside the full modeling of the tissues. They provide a more complete modeling than kinematic models while maintaining good performances for real-time use. In section 2.4 we have proposed two 3D mechanics-based models that give a good representation of the 3D behavior of a needle during its insertion in soft tissues.
In particular, the two-body model that we designed offers a good accuracy for all kinds of motions applied to the base of the needle. Its complexity can also be chosen and is constant during the insertion, which allows tuning the required computation time to achieve the desired real-time performances. Therefore we will use this model as a basis for a real-time needle steering framework in chapter 4. The accuracy of the modeling of the needle tip trajectory is however dependent on the estimation of the interaction stiffness between the needle and the tissues. Real tissues are usually inhomogeneous and pre-operative estimation of the stiffness can be difficult, such that online estimation techniques would have to be studied for use in real clinical practice. In this chapter we only considered the case of stationary tissues, while our model can also handle moving tissues by modifying the position of the curve representing the tissues (rest cut path). Contrary to the motions of the needle base, which can be controlled by the robotic manipulator, the motions of the patient cannot be controlled. Therefore an information feedback is necessary to estimate these motions and update the state of the model accordingly. Visual feedback is usually used in current practice to monitor the whole needle insertion procedure and provide a way to see both the needle and a targeted region. Hence we will focus on this kind of feedback in order to design an update method for our model. In particular, 3D ultrasound (US) can provide real-time feedback on the whole shape of the needle, which can be used to ensure that our model stays consistent with the real state of the insertion procedure. A first step is to design an algorithm to extract the localization of the needle body in the US volumes, which is a great challenge in itself, due to the low quality of the data provided by the US modality. This will be the first point of focus of chapter 3.
The performances of needle tracking algorithms in terms of accuracy and computation time can be greatly improved by using a prediction of the needle location. Therefore we will also use our model for this purpose, since we have shown that it could predict the trajectory of the needle with a good accuracy.

Chapter 3

Needle localization using ultrasound

In this chapter we focus on the robust detection and tracking of a flexible needle in 3D ultrasound (US) volumes. In order to perform an accurate real-time control of a flexible needle steering robotic system, a feedback on the localization of the needle and the target is necessary. The 3D US modality is well adapted for this purpose thanks to its fast acquisition compared to other medical imaging modalities and the fact that it can provide a visualization of the entire body of the needle. However the robust tracking of a needle in US volumes is a challenging task due to the low quality of the image and the artifacts that appear around the needle. Additionally, even though intraoperative volumes can be acquired, the needle can still move between two volume acquisitions due to its manipulation by the robotic system or the motions of the tissues. A prediction of the needle motion can thus be of great help to improve the performances of needle tracking in successive volumes. We first provide in section 3.1 several points of comparison between the imaging modalities that are used in clinical practice to perform needle insertions and we motivate our choice of the 3D US modality. We then describe the principles of US imaging and the techniques used to reconstruct the final 2D images or 3D volumes in section 3.2. We present in section 3.3 a review of the current methods used to detect and track a needle in 2D or 3D US. We then propose a new needle tracking algorithm in 3D US volumes that takes into account the natural artifacts observed around the needle.
We focus on the estimation of the motions of the tissues in section 3.5 and propose a method to update the needle model that we designed in the previous chapter using different measures available on the needle. Tests and validation of the method are then provided in section 3.6. The updated model is then used to improve the performances of the needle tracking across a sequence of US volumes. The work presented in this chapter on the model update from visual feedback was published in an article presented at an international conference [Chevrie et al., "Online prediction of needle shape deformation in moving soft tissues from visual feedback"].

Introduction

Needle insertion procedures are usually performed under imaging feedback to ensure the accuracy of the targeting. Many modalities are available, among which the most used ones are magnetic resonance imaging (MRI), computerized tomography (CT) and ultrasound (US). In the following we present a comparison of these modalities on several aspects that have led us to consider the use of the US modality instead of the others. Image quality: The main advantage of MRI and CT is that they provide high contrast images of soft tissues in which the targeted lesion can clearly be distinguished from the surrounding tissues. On the other side, the quality of US images is rather poor due to the high level of noise and interference phenomena. However MRI and CT are sensitive to most metallic components, which create distortions in the images. The presence of a metallic needle in their field of view is thus a source of artifacts that are more disturbing for MRI [SCI+12] or CT [SGS+16] than US artifacts [Reusz et al., "Needle-related ultrasound artifacts and their importance in anaesthetic practice"]. This alleviates the main drawback of the US modality.
Robotic design constraints: MRI and CT have additional practical limitations compared to US imaging, since their scanners are bulky and require a dedicated room. These scanners are composed of a ring inside which the patient is placed for the acquisition, which reduces the workspace and the accessibility to the patient for the surgical intervention. This adds heavy constraints on the design of robotic systems that can be used in these scanners [MGB+04][ZBF+08], in addition to the previously mentioned incompatibility with metallic components, which can even cause security issues in the case of MRI. On the other side, US probes are small, can easily be moved by hand and the associated US stations are easily transportable to adapt to the workspace. Additionally they do not pose particular compatibility issues and can thus be used with a great variety of robotic systems. Acquisition time: The acquisition time of MRI and CT is typically long compared to US and makes them unsuitable for real-time purposes. Using non real-time imaging requires performing the insertion in many successive steps, alternating between image acquisitions and small insertions of the needle, as classically done with MRI [MvdSvdH+14] or CT [SHvK+17]. In addition to the increased duration of the intervention, patients are often asked to hold their breath during the image acquisition to avoid motion blur in the image. Therefore, discrepancies arise between the real position of the needle and the target and their position in the image because of the motions of the tissues as soon as the patient restarts breathing. For these reasons, real-time imaging is preferred and can be achieved using the US modality. High acquisition rates are usually obtained with 2D US probes, as tens of images can typically be acquired per second.
A 3D image, corresponding to an entire volume of data, can also be acquired at a fast rate using matrix array transducers or at a lower frame rate using more conventional motorized 3D US probes. In conclusion, US remains the modality of choice for real-time needle insertion procedures [Chapman et al., "Visualisation of needle position using ultrasonography"]. Hence in the rest of this chapter we will focus on the detection and tracking of a needle using the US feedback acquired by 3D probes. In the next section we present the general principles of US imaging.

Ultrasound imaging

Physics of ultrasound

Ultrasound (US) is a periodic mechanical wave with frequency higher than 20 kHz that propagates by producing local changes of the pressure and position in a medium. The principle of US imaging is to study the echoes reflected back by the medium after an initial US pulse has been sent. Wave propagation: Most imaging devices assume that soft tissues behave like water, due to the high proportion of water they contain. The speed c of US waves in liquids can be calculated according to the Newton-Laplace equation

c = √(K / ρ), (3.1)

where K is the bulk modulus of the medium and ρ its density. Although variations of the local density of the tissues introduce variations of the speed of ultrasound, it is most of the time approximated by a constant c = 1540 m.s⁻¹. When the wave encounters an interface between two media with different densities, a part of the wave is reflected back while the rest continues propagating through the second medium. The amplitudes of the reflected and transmitted waves depend on the difference of densities. This is the main phenomenon used in US imaging to visualize the variations of the tissue density. Image formation: A US transducer consists of an array of small piezoelectric elements.
Each element can vibrate according to an electric signal sent to it, creating a US wave that propagates through the medium in the form of a localized beam. Each beam defines a scan line on which the variation of the tissue density can be observed. The elements also act as receptors, creating an electric signal corresponding to the mechanical deformations applied to them by the returning echoes. These electric signals are recorded for a certain period of time after a short sinusoidal pulse was applied to an element, giving the so-called radio frequency signal. Considering an interface that is at a distance d from the wave emitter, an echo will be observed after a time

T = 2d / c, (3.2)

corresponding to the time needed by the pulse to propagate to the interface and then come back to the transducer. The position corresponding to an interface can thus directly be calculated from the delay between the moment the US pulse was sent and the moment the echo was received by the transducer, as illustrated in Fig. 3.1. The radio frequency signal can then be transformed into a suitable form for an easy visualization of the tissue density variations along each scan line. Acquisition frequency: The acquisition frequency depends on the number of piezoelectric elements n_p and the desired depth of observation d_o, i.e. the length of the scan lines. The radio frequency signals corresponding to each scan line must be recorded one after another in order to avoid mixing the echoes corresponding to adjacent scan lines. The time T_line required to acquire the signal along one scan line is given by

T_line = 2 d_o / c. (3.3)

The total time T_acq of the 2D image acquisition is then

T_acq = n_p T_line. (3.4)

A typical acquisition using a transducer with 128 elements and an acquisition depth of 10 cm would take 16.6 ms, corresponding to a frame rate of 60 images per second. This is a fast acquisition rate that can be considered real-time for most medical applications.
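The relations (3.1)-(3.4) can be checked numerically; a minimal sketch reproducing the 128-element, 10 cm example above (the water-like bulk modulus and density are illustrative values of our own, not taken from the thesis):

```python
import math

C_TISSUE = 1540.0  # conventional speed of sound in soft tissues (m/s)

def speed_of_sound(bulk_modulus, density):
    """Newton-Laplace equation (3.1): c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus / density)

def frame_rate(n_elements, depth, c=C_TISSUE):
    """2D acquisition rate from T_line = 2*d_o/c (3.3) and
    T_acq = n_p * T_line (3.4)."""
    t_line = 2.0 * depth / c          # time to acquire one scan line (s)
    return 1.0 / (n_elements * t_line)

c_water = speed_of_sound(2.2e9, 1000.0)  # ~1483 m/s for water-like values
rate = frame_rate(128, 0.10)             # ~60 images per second
```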
Image resolution: The axial resolution of the US imaging system is the minimum distance between two objects in the wave propagation direction that allows viewing them as two separate objects in the image. This directly corresponds to the wavelength of the wave propagating in the medium. Wavelength λ and frequency f are directly related to each other by the speed of the wave according to

λ = c / f. (3.5)

Therefore a higher axial resolution can be achieved through a higher frequency. Standard US systems usually use a frequency between 1 MHz and 20 MHz, corresponding to an axial resolution between 1.54 mm and 77 µm. The lateral resolution is the minimum distance between two objects perpendicular to the wave propagation direction that allows viewing them as two separate objects in the image. A threshold value for this resolution is first set by the distance between the different scan lines, which depends on the geometry of the transducer. For linear transducers, the piezoelectric elements are placed along a line, such that all scan lines are parallel and the threshold directly corresponds to the distance between the elements. For convex transducers, the elements are placed along a circular surface, such that the scan lines are in a fan-shape configuration and diverge from the transducer, as illustrated in Fig. 3.2. The threshold thus corresponds to the distance between the elements at the surface of the transducer but then grows with the depth due to the increasing distance between the scan lines. Hence, this configuration allows a larger imaging region far from the transducer, but at the expense of a resolution decreasing with the depth. However another factor that determines the real lateral resolution is the width of the US beam. This varies with depth and depends on the size of the piezoelectric elements and the wave frequency. The wave focuses into a narrow beam only for a short distance from the emitter, called the near zone or Fresnel's zone.
Then the wave tends to diverge, leading to a wide beam in the far zone or Fraunhofer's zone, as illustrated in Fig. 3.3. The width of the beam in the near zone is proportional to the size of the piezoelectric element, while the length of the zone decreases with this size. The length of the near zone can be increased by using a higher frequency. The lateral resolution is also often modified by using several adjacent elements with small delays instead of only one at a time. This generates a beam focused at a specified depth, but also causes the far zone width to increase faster after the focus depth. Similarly, an out-of-plane resolution can be defined, which determines the actual thickness of the tissue slice that is visible in the image. This resolution also varies with depth depending on the size of the piezoelectric elements and the frequency, as illustrated in Fig. 3.3. An acoustic lens is often added to the surface of the probe to focus the beams at a given depth. Observation limitations: Other factors can modify the quality of the received US wave. Attenuation: The viscosity of soft tissues is responsible for a loss of energy during the US wave propagation [Wells, "Absorption and dispersion of ultrasound in biological tissue"]. This loss greatly increases with the wave frequency, such that a trade-off has to be made between the spatial resolution and the attenuation of the signal. Speckle noise: Due to the wave characteristics of US, diffraction and scattering also occur when the wave encounters density variations in the tissues that have a small size compared to its wavelength. The US beam is then reflected in many directions instead of one defined direction. This results in textured intensity variations in the US images known as speckle noise, as can be seen in Fig. 3.1.
While it has sometimes been used for tissue tracking applications as in [Krupa et al., "Real-Time Tissue Tracking with B-Mode Ultrasound Using Speckle and Visual Servoing"][KFH09], speckle noise is generally detrimental to a good differentiation between the different structures in the tissues, such that filtering is often performed to reduce its intensity. Shadowing: At the level of an interface between two media with very different densities it can be observed that the US wave is almost entirely reflected back to the transducer. The intensity of the transmitted signal is then very low, such that the intensity of the echoes produced by the structures behind the interface is greatly reduced. This causes the appearance of a shadow in the US image that greatly limits the visibility of the structures behind a strong reflector. This can be seen on the bottom of the US image in Fig. 3.1, which is mostly dark due to the presence of reflective interfaces higher in the image. Particular artifacts can also appear around a needle due to its interaction with the US wave. These artifacts can greatly affect the appearance of the needle in 2D or 3D US images, such that they should be taken into account to accurately localize the needle. This kind of artifact will be the focus of section 3.3.1. Now that we have seen the principles of US signal acquisition, we describe in the following how the acquired data are exploited to reconstruct the final image or volume.

Reconstruction in Cartesian space

The radio frequency signal acquired by the piezoelectric elements must be converted into a form that is suitable for the visualization of the real shape of the observed structures in Cartesian space. This conversion should take into account the natural attenuation of the ultrasound (US) signal during its travel in the tissues as well as the geometric arrangement of the different scan lines.
We describe in the following the process that is used to transform the radio frequency signal into a 2D or 3D image.

Reconstruction of 2D images

We first consider the case of a reconstructed 2D image, called B-mode US image. The signal is first multiplied by a depth-dependent gain to compensate for the attenuation of the US signal during its travel through the tissues. The amplitude of the signal is then extracted using envelope detection. This removes the sinusoidal shape of the signal and only keeps the part that depends on the difference between the densities of the tissues. The signal is then sampled to enable further digital processing. The sampling frequency is chosen to respect the Nyquist sampling criterion, i.e. it should be at least twice the frequency of the US wave. This frequency is typically 20 MHz or 40 MHz for current 2D US probes. A logarithmic compression of the intensity levels is then usually applied to the samples to facilitate the visualization of high and low density variations on the same image. The samples can be stored in a table with each line corresponding to the samples of a same scan line. The resulting image is called the pre-scan image. The samples then need to be mapped to their corresponding position in space to reconstruct the real shape of the 2D slice of tissue being observed, which constitutes the post-scan image. Let N_s be the number of samples along a scan line and N_l the number of scan lines. Each sample can be attributed two indexes i ∈ [[0, N_s − 1]] and j ∈ [[0, N_l − 1]] corresponding to its placement in the pre-scan image, with j the scan line index and i the sample index on the scan line. The shape of the reconstructed image depends on the geometry of the arrangement of the piezoelectric elements. For linear probes, the piezoelectric elements are placed on a straight surface, such that the scan lines are parallel to each other.
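Before turning to the geometric mapping, the envelope detection step described above can be sketched numerically. The FFT-based analytic-signal (Hilbert transform) approach used here is an assumption on our part; the thesis does not specify the implementation:

```python
import numpy as np

def envelope(rf_signal):
    """Amplitude envelope of an RF line via the analytic signal
    (FFT-based Hilbert transform): zero the negative frequencies,
    double the positive ones, and take the magnitude."""
    n = len(rf_signal)
    spectrum = np.fft.fft(rf_signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)

# Synthetic example: a 5 MHz pulse with a Gaussian envelope, 40 MHz sampling
t = np.arange(0, 2e-6, 1.0 / 40e6)
amp = np.exp(-((t - 1e-6) ** 2) / (2 * (0.2e-6) ** 2))
rf = amp * np.sin(2 * np.pi * 5e6 * t)
env = envelope(rf)   # closely recovers amp, removing the carrier oscillation
```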
In this case the coordinates x and y of a sample in the post-scan image (see Fig. 3.4) can directly be calculated with a scaling and offset such that

x = L_s i, (3.6)
y = L_p (j − (N_l − 1)/2), (3.7)

where L_s is the physical distance between two samples on a scan line and L_p is the distance between two piezoelectric elements of the transducer. The physical distance L_s between two samples on a scan line depends on the sampling frequency f_s such that

L_s = c / f_s. (3.8)

For a convex probe with radius R, the physical position of a sample in the probe frame can be expressed in polar coordinates (r, θ) according to

r = R + L_s i, (3.9)
θ = (L_p / R) (j − (N_l − 1)/2). (3.10)

This can be converted to Cartesian coordinates using

x = r cos(θ), (3.11)
y = r sin(θ). (3.12)

In practice the post-scan image is defined in Cartesian coordinates with an arbitrary resolution, such that the mapping should actually be done the opposite way. Each pixel (u, v) in the image corresponds to a physical position (x, y) in the imaging plane of the probe such that

u = (x − x_min) / s, (3.13)
v = (y − y_min) / s, (3.14)

where (x_min, y_min) is the position of the top left corner of the image in the probe frame, as depicted in Fig. 3.4, and s is the pixel resolution of the image. The value of each pixel is computed by finding the value in the pre-scan image at position (i, j) corresponding to the physical position (x, y). For linear probes this is computed according to

i = x / L_s, (3.15)
j = (N_l − 1)/2 + y / L_p, (3.16)

while for convex probes it results in

i = (√(x² + y²) − R) / L_s, (3.17)
j = (N_l − 1)/2 + atan2(y, x) R / L_p. (3.18)

In practice this process gives non-integer values for i and j, while the pre-scan data are only acquired for integer values of i and j. Therefore an interpolation process is necessary to compute the actual value I_post(u, v) of the pixel in the post-scan image I_post from the available values in the pre-scan image I_pre.
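The convex-probe mapping (3.9)-(3.12), its inverse (3.17)-(3.18) and the interpolation step just mentioned can be sketched as follows; the probe parameters are illustrative values of our own, not those of a specific device, and the bi-linear scheme shown is one standard choice:

```python
import math

def convex_forward(i, j, R, Ls, Lp, Nl):
    """Physical position (x, y) of pre-scan sample (i, j), eq. (3.9)-(3.12)."""
    r = R + Ls * i
    theta = (Lp / R) * (j - (Nl - 1) / 2.0)
    return r * math.cos(theta), r * math.sin(theta)

def convex_inverse(x, y, R, Ls, Lp, Nl):
    """Pre-scan coordinates (i, j) of position (x, y), eq. (3.17)-(3.18)."""
    i = (math.hypot(x, y) - R) / Ls
    j = (Nl - 1) / 2.0 + math.atan2(y, x) * R / Lp
    return i, j

def bilinear(pre_scan, i, j):
    """Bi-linear interpolation of the pre-scan data at non-integer (i, j)."""
    fi, fj = math.floor(i), math.floor(j)
    a, b = i - fi, j - fj
    return ((1 - a) * (1 - b) * pre_scan[fi][fj]
            + a * (1 - b) * pre_scan[fi + 1][fj]
            + (1 - a) * b * pre_scan[fi][fj + 1]
            + a * b * pre_scan[fi + 1][fj + 1])

# Round trip with illustrative parameters: R = 4 cm, 128 scan lines
R, Ls, Lp, Nl = 0.04, 38.5e-6, 0.3e-3, 128
x, y = convex_forward(200, 40, R, Ls, Lp, Nl)
i, j = convex_inverse(x, y, R, Ls, Lp, Nl)   # recovers (200.0, 40.0)
value = bilinear([[0.0, 10.0], [20.0, 30.0]], 0.5, 0.5)   # -> 15.0
```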
Different interpolation techniques can be used:
• Nearest neighbor interpolation: the pixel value is taken as the value of the closest pre-scan sample:

I_post(u, v) = I_pre([i], [j]), (3.19)

where [.] denotes the nearest integer operator. This process is fast but leads to a pixelized aspect of the post-scan image.
• Bi-linear interpolation: the pixel value is computed using a bi-linear interpolation between the four closest neighbors:

I_post(u, v) = (1 − a)(1 − b) I_pre(⌊i⌋, ⌊j⌋) + a (1 − b) I_pre(⌊i⌋ + 1, ⌊j⌋) + (1 − a) b I_pre(⌊i⌋, ⌊j⌋ + 1) + a b I_pre(⌊i⌋ + 1, ⌊j⌋ + 1), (3.20)

with

a = i − ⌊i⌋, (3.21)
b = j − ⌊j⌋, (3.22)

and where ⌊.⌋ denotes the floor operator. This process provides a smoother resulting image while still remaining relatively fast to compute.
• Bi-cubic interpolation: the pixel value is computed using a polynomial interpolation between the 16 closest neighbors. This process provides a globally smoother resulting image and keeps a better definition of edges than bi-linear interpolation. However it involves longer computation times.

Reconstruction of 3D volumes

The 2D US modality only provides a feedback on the content of a planar zone in the tissues. In order to visualize different structures in 3D, it is necessary to move the US probe along a known trajectory and reconstruct the relative 3D position of the structures. In the case of needle segmentation, it is possible that only a section of the needle is visible, when the needle is perpendicular to the imaging plane. In this case it is difficult to know exactly which point along the needle corresponds to this visible section. Even when a line is visible in the image, it is possible that the needle is only partially visible and partially out of the imaging plane. This can lead to erroneous conclusions about the real position of the needle tip, with potentially dramatic outcomes if used as input for an automatic needle insertion algorithm.
To alleviate this issue, automatic control of the probe position can be performed to maintain the visibility of the needle in the image [CKM13] [Mathiassen et al., "Visual servoing of a medical ultrasound probe for needle insertion"]. However this is not always possible if the needle shaft does not fit into a plane due to its curvature, which is often the case for the flexible needles that we use in the following. Three-dimensional US probes have been developed to provide visual feedback on an entire 3D region of interest [Huang et al., "A review on real-time 3D ultrasound imaging technology"]. This way entire 3D structures can be visualized without moving the probe. Two main technologies are available: matrix array transducers and motorized transducers. Matrix array transducers: This technology is a 3D version of the classical 2D transducers. It uses a 2D array of piezoelectric elements placed on a surface instead of only one line. Similarly to 2D probes, the 3D matrix probes can be linear or bi-convex depending on whether the surface is planar or curved. They also provide the same fast acquisition properties, with a volume acquisition rate that is proportional to the number of elements of the array. However, due to the complexity of manufacturing, current probes only have a limited number of piezoelectric elements in each direction compared to 2D transducers, which limits the resolution and field of view that can be achieved. Motorized transducers: Also known as wobbling probes, these probes consist of a classical 2D US transducer attached to a mechanical part that applies a rotational motion to it. A series of 2D US images is acquired and positioned into a volume using the known pose of the transducer at the time of acquisition. In the following we consider the case of a sweeping motion such that the imaging plane of the transducer moves in the out-of-plane direction.
In this case, the resolution of the volume is different in all directions. It corresponds to the resolution of the transducer in the imaging plane, while the resolution in the sweeping direction depends on the frame rate of the 2D transducer and the velocity of the sweeping. The acquisition rate is also limited by the duration of the sweeping motion, such that a trade-off has to be made between the resolution in the sweeping direction and the acquisition rate. Since an indirect volume scanning is made, some motion artifacts can appear in the volume, due to the motion of the tissues or the probe during the acquisition. Similarly to the 2D case, a post-scan volume with arbitrary voxel resolution can be reconstructed from the acquired pre-scan data. Each voxel (u, v, w) in the post-scan volume corresponds to a physical position (x, y, z) in the probe frame, such that

u = (x - x_min) / s, (3.23)
v = (y - y_min) / s, (3.24)
w = (z - z_min) / s, (3.25)

where (x_min, y_min, z_min) is the position of the top left front corner of the reconstructed volume in the probe frame and s is the voxel resolution of the volume. The value of each voxel is computed by finding the value in the pre-scan data at the position (i, j, k) corresponding to the physical position (x, y, z). It should be noticed that the center of the transducer does not usually lie on the axis of rotation of the motor, such that all acquired scan lines do not cross at a common point, which increases the complexity of the geometric reconstruction of the volume. We define R_m as the radius of the circle described by the center point of the transducer surface during the sweeping. The position (x, y, z) of a point in space will be defined with respect to the center of rotation O_m of the motor, which is fixed in the probe frame, while the center of the transducer O_t can translate, as depicted in Fig. 3.5.
This leads to

x = (r cos(θ) - R + R_m) cos(φ), (3.26)
y = r sin(θ), (3.27)
z = (r cos(θ) - R + R_m) sin(φ), (3.28)

where r and θ are the polar coordinates of the sample in the transducer frame and φ is the current sweeping angle of the motor (see Fig. 3.5). The volume can be reconstructed by assuming that equiangular planar frames are acquired. However, since the scan lines are acquired one after another by the transducer, the sweeping motion introduces a small change in the direction of successive scan lines. Therefore the scan lines are not co-planar and some motion artifacts can appear in the reconstructed volume. In order to avoid these artifacts, we reconstruct the volume using the exact orientation of the scan lines, such that r, θ and φ can be computed according to [START_REF] Lee | Intensity-based visual servoing for non-rigid motion compensation of soft tissue structures due to physiological motion using 4d ultrasound[END_REF]:

r = R + L_s i, (3.29)
θ = (L_p / R) (j - (N_l - 1)/2), (3.30)
φ = ε δφ (k + j/N_l - (N_f N_l - 1)/(2 N_l)), (3.31)

where N_f is the number of frames acquired during one sweeping motion, δφ is the angular displacement of the transducer between the beginning of two frame acquisitions and ε is equal to 1 if the sweeping motion is performed in the positive z direction and -1 in the negative one. Finally the position (i, j, k) in the pre-scan data corresponding to the physical position (x, y, z) in the probe frame can be calculated according to

i = (r - R) / L_s, (3.32)
j = (N_l - 1)/2 + (R / L_p) θ, (3.33)
k = (N_f N_l - 1)/(2 N_l) - j/N_l + φ/(ε δφ), (3.34)

with

r = √((R - R_m + √(x² + z²))² + y²), (3.35)
θ = atan2(y, R - R_m + √(x² + z²)), (3.36)
φ = atan2(z, x). (3.37)

As in the 2D case, voxel interpolation is necessary. The same techniques can be used: nearest neighbor interpolation still requires only one voxel while tri-linear interpolation requires 8 voxels and tri-cubic interpolation requires 64 voxels.
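To make this mapping concrete, the sketch below implements the forward conversion (3.26)-(3.31) from pre-scan indices (i, j, k) to a physical position (x, y, z), together with the inverse conversion (3.32)-(3.37). The numerical probe parameters are arbitrary placeholders, not the values of an actual transducer, and `eps` denotes the ±1 sweep-direction factor described in the text:

```python
import math

# Arbitrary placeholder probe parameters (not those of a real transducer):
R = 30.0      # transducer curvature radius (mm)
R_m = 25.0    # radius of the circle described by the transducer center (mm)
L_s = 0.1     # axial distance between two samples along a scan line (mm)
L_p = 0.4     # lateral distance between two scan lines on the surface (mm)
N_l = 128     # number of scan lines per frame
N_f = 31      # number of frames per sweep
d_phi = math.radians(1.46)  # motor angle between two successive frames
eps = 1       # +1: sweep in the positive z direction, -1 otherwise

def prescan_to_physical(i, j, k):
    """Forward mapping (3.26)-(3.31): pre-scan indices -> probe frame."""
    r = R + L_s * i                                                    # (3.29)
    theta = (L_p / R) * (j - (N_l - 1) / 2)                            # (3.30)
    phi = eps * d_phi * (k + j / N_l - (N_f * N_l - 1) / (2 * N_l))    # (3.31)
    x = (r * math.cos(theta) - R + R_m) * math.cos(phi)                # (3.26)
    y = r * math.sin(theta)                                            # (3.27)
    z = (r * math.cos(theta) - R + R_m) * math.sin(phi)                # (3.28)
    return x, y, z

def physical_to_prescan(x, y, z):
    """Inverse mapping (3.32)-(3.37): probe frame -> pre-scan indices."""
    rho = R - R_m + math.hypot(x, z)
    r = math.hypot(rho, y)                                             # (3.35)
    theta = math.atan2(y, rho)                                         # (3.36)
    phi = math.atan2(z, x)                                             # (3.37)
    i = (r - R) / L_s                                                  # (3.32)
    j = (N_l - 1) / 2 + (R / L_p) * theta                              # (3.33)
    k = (N_f * N_l - 1) / (2 * N_l) - j / N_l + phi / (eps * d_phi)    # (3.34)
    return i, j, k
```

Applying the forward mapping and then the inverse one recovers the original pre-scan indices, which provides a simple consistency check of the geometry.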
Due to the high number of voxels and the increased dimension of the interpolation, the conversion to post-scan can be time consuming and often requires hardware optimization to parallelize the computations and achieve reasonable timings. Once the US volume has been reconstructed, the different structures present in the tissues can then be observed. In particular, the real 3D shape of a needle can be detected in Cartesian space.

Needle detection in ultrasound

Robust needle tracking using ultrasound (US) has the potential to enable robotic needle guidance. Due to the high density of a metallic needle, a strong echo is generated, such that the needle appears as a very bright line in US images. However detecting a needle in US images remains a challenging task due to the overall noisy nature of the images. In this section we present the common factors that may hinder a detection algorithm, as well as an overview of current ultrasound-based methods used for the detection and tracking of a needle in 2D or 3D US image feedback.

Ultrasound needle artifacts

We describe here several phenomena that are typically observed in ultrasound (US) images and that are specific to the presence of a needle in the field of view of the probe [START_REF] Reusz | Needlerelated ultrasound artifacts and their importance in anaesthetic practice[END_REF]. These phenomena create artifacts that can limit the performances of a needle detection algorithm. An illustration of the different artifacts is shown in Fig. 3.6 and a picture of a needle observed in 3D US can be seen in Fig. 3.7.

Reflection: The direction in which a US beam is reflected at an interface between two media with different densities depends on the angle of incidence of the wave on the interface. In some cases the beam can be reflected laterally such that the echo never returns to the transducer [START_REF] Reusz | Needlerelated ultrasound artifacts and their importance in anaesthetic practice[END_REF].
This effect reduces the visibility of the needle when the insertion direction is not perpendicular to the propagation of the US wave. This can be particularly visible with convex probes, for which the beam propagation direction is not the same at the different locations in the image, resulting in a variation of the intensity of the observed needle. This effect can be reduced by using echogenic needles with a surface coating that reflects the US beam in multiple directions. Special beam steering modes are also available on certain US probes, for which all elements of the transducer are activated with small delays to create a wave that propagates in a desired direction. This can be used to enhance the visibility of the needle when its orientation is known, as was done in [START_REF] Hatt | Enhanced needle localization in ultrasound using beam steering and learning-based segmentation[END_REF].

Reverberation/Comet tail artifact: The high difference between the density of the needle and the density of soft tissues induces a high reflection of the US wave at the interface. This occurs on both sides of the needle and in each direction, such that a part of the wave can be reverberated multiple times between the two walls of the needle. Multiple echoes are subsequently sent back to the transducer with a delay depending on the distance between the walls and the number of reflections inside the needle. Since the image is reconstructed using the assumption that the distance from the probe is proportional to the time needed by the wave to come back to the transducer (see (3.2)), the echoes created by the reflections inside the needle are displayed as if they came from an interface located deeper after the needle. Hence a comet tail artifact can be observed in a cross-sectional view of the needle, due to the appearance of a bright trailing signal following the real position of the needle.
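The depth at which these reverberation echoes appear can be worked out from the time-of-flight relation: if the first needle wall lies at depth d and the walls are separated by w, the echo reflected n times inside the needle returns after t_n = (2d + 2nw)/c and is therefore displayed at an apparent depth c t_n / 2 = d + n w, i.e. at regular intervals w below the needle. A small numerical sketch (the values of c, d and w are arbitrary illustrative choices):

```python
# Apparent depths of reverberation echoes inside a needle.
# c, d and w are arbitrary illustrative values, not measured data.
c = 1540.0   # speed of sound in soft tissue (m/s)
d = 0.05     # depth of the first needle wall (m)
w = 0.001    # distance between the two needle walls (m)

def apparent_depth(n):
    """Depth at which the echo reflected n times in the needle is displayed."""
    t = (2 * d + 2 * n * w) / c   # round-trip time of the n-th echo
    return c * t / 2              # displayed depth, as in (3.2): depth = c t / 2

# The echoes pile up at d, d + w, d + 2w, ..., forming the comet tail.
depths = [apparent_depth(n) for n in range(4)]
```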
Beam width/Side lobes artifact: Due to the width of the US beam, it is possible that the needle is hit by several beams corresponding to different scan lines. This results in a needle that apparently spreads laterally and is larger than its real diameter. Similarly, the piezoelectric elements can emit parasitic side beams in addition to the main beam. The amplitude of the wave in the side beams is usually smaller than the amplitude of the main beam, which limits the influence that they have on the final image due to the attenuation in the tissues. However, strong reflectors like a needle may reflect the quasi-totality of the side beams, creating strong echoes coming back to the transducer which are interpreted as returning from the main beam during the reconstruction of the image. This creates a further lateral spread of the apparent position of the needle, as can be seen in Fig. 3.7.

Needle detection algorithms

Many image processing techniques have been proposed over the last decade to detect a needle in 2D or 3D ultrasound (US) images. Needle detection in 2D US is challenging because of the missing third dimension. The needle can be only partially visible and it is not always possible to ensure that it is entirely in the imaging plane. In contrast, while the data acquired by a 3D US probe usually require some processing to be visualized in a comprehensible way by a human operator, they can easily be used directly by a computer process to detect the 3D shape of the needle. However this usually requires more computation due to the increased dimension of the image. Tracking algorithms have also been proposed to find the position of the needle across a sequence of images. These algorithms usually use a detection algorithm that is applied on each newly acquired image.
The result is enhanced by using a temporal filtering of the output of the detection algorithm, typically a Kalman filter, or a modification of the detection algorithm to take into account the position of the needle in the previous images. In the following we present an overview of the general techniques that are used for needle tracking in 2D or 3D. Needle detection algorithms generally follow the same order of steps performed on the image:

• a pre-filtering of the image to enhance the needle visibility and remove some noise,
• a binarization of the image to select a set of potential points belonging to the needle,
• a shape fitting step to find the final localization of the needle.

Image pre-filtering: Smoothing of the image is often performed to filter out the speckle noise that is present in the image. This process also reduces the sharpness of the edges, which can be detrimental when looking for the boundaries of the needle. Median filtering is sometimes preferred to achieve noise smoothing while keeping a good definition of the edges. In order to enhance the separation between the bright needle and the dark background, a modification of the pixel intensity levels can then be used. This can be achieved in many ways, such as histogram equalization [PZdW+14] or exponential contrast enhancement [WRS+16]. In the case where the needle is co-planar with the imaging plane and a guess of its orientation is known, a specific filter can be used to enhance the visibility of the linear structures with a given orientation. For example an edge-detector can be used in a given direction [OEC+06]. Gabor filtering is also often used in 2D [START_REF] Kaya | Needle localization using gabor filtering in 2d ultrasound images[END_REF] or in 3D [PZdW+14].

Image binarization: A threshold is applied to the pre-processed image to keep only the points that have a good probability of belonging to the needle.
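One classic way to choose such a threshold automatically is Otsu's method, which selects the intensity level maximizing the between-class variance of the two resulting classes. A minimal pure-Python sketch (the function name and the 8-bit intensity assumption are illustrative):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing the between-class variance (Otsu)."""
    # Build the intensity histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)

    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0    # weighted intensity sum of the background class
    w_bg = 0        # number of background pixels
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance for threshold t.
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal intensity distribution the returned threshold falls between the two modes; as discussed next, the method degrades when the background itself is multimodal.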
Otsu's thresholding method can be used to automatically find an optimal threshold that separates two classes of intensities [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF]. However this method can yield poor performances if the background itself has several distinct levels of intensity. This can occur in non-filtered images when low reflective structures, with a black appearance, are present within normal tissues, with a gray appearance. Therefore the value of the threshold is mostly tuned manually, either to a pre-defined value or such that a certain percentage of the total number of points is kept, which can introduce a great variability in the performances of the algorithms. In [START_REF] Neubach | Ultrasound-guided robot for flexible needle steering[END_REF] needle tracking is performed using the difference between two successive US images. The resulting image presents a bright spot at the location where the needle tip progressed in the tissues, allowing to obtain a small set of pixels corresponding to the position of the needle tip after the thresholding process. However this method can only be used if the probe and the tissues are stationary, such that the motions in the images are only due to the needle. Doppler US imaging displays the velocity of moving tissues instead of their density variations. It can therefore be used to naturally reduce the number of structures with the same intensities as the needle, by applying fast vibrations to the needle. An active vibrator attached near the base of the needle is used in [START_REF] Adebar | 3d ultrasoundguided robotic needle steering in biological tissue[END_REF] to produce these vibrations.
The rotation of the needle around its axis is used in [START_REF] Mignon | Using rotation for steerable needle detection in 3d color-doppler ultrasound images[END_REF] to create the same amount of motion all along the needle, avoiding the attenuation of the vibrations along the needle shaft that can be observed when using a vibrator at the needle base.

Needle shape fitting: Given a set of segmented needle points, a first decimation is often performed to remove obvious outliers. This typically involves morphological transformations, like a succession of erosions and dilations, to remove the groups of pixels that are too small to possibly represent a needle. Fusion with the Doppler US modality can also be used to get additional information on the needle location and remove outliers. Many methods can then be used to find the position of the needle depending on its configuration in the image. When using a 2D US probe, the needle can first be perpendicular to the imaging plane, such that only a section of the shaft is visible and can be tracked. In [WRS+16] the set of pixels corresponding to the measured needle section is used as the input of a Kalman filter to estimate the point of the set that represents the real center of the needle cross section. The comet tail artifact exhibited by the needle can also be exploited to find the needle position. In [VAP+14] and [AVP+14] needle tracking is achieved by applying the Hough transform to the pixel set to find the best line fitting the tail of the artifact. The center of the needle is then taken as the topmost extremity of the line, translated by a length corresponding to the radius of the needle. A similar process is used in [SRvdB+16], where Fourier descriptors are used to find the center of the needle in the comet tail artifact. In the case where the needle shaft is in the imaging plane of a 2D probe or in the field of view of a 3D probe, line detection algorithms are used to find the best fit of the needle.
A Hough transform is used in [OEC+06] to find the best group of points that fits a linear shape and remove all of the other outliers. A polynomial fitting is then performed with the remaining points to find the final shape of the needle. A now wide-spread method for line or polynomial fitting is the Random Sample Consensus (RANSAC) algorithm. The principle of the algorithm is to take a random sample of points and to build a polynomial fitting from this sample. The quality of the sample is assessed by the total number of points of the set that fit the obtained polynomial. The process is repeated many times and the fitting containing the maximum number of points is taken as the result of the needle detection process. This algorithm is quite robust to outliers since a polynomial fitted from a sample containing outliers is likely to fit poorly to the real inliers. Such an algorithm was used in 2D US in [START_REF] Kaya | Needle localization using gabor filtering in 2d ultrasound images[END_REF] after a Gabor filtering and Otsu's automatic thresholding. It can also easily be applied to 3D US, as done in [START_REF] Uherčík | Model fitting using ransac for surgical tool localization in 3-d ultrasound images[END_REF] after applying a simple threshold. Due to the stochastic nature of the algorithm, there is no guarantee that the final result of the algorithm contains only inliers, and inconsistent results can be obtained if the algorithm does not run for a sufficient amount of time. The algorithm can be made faster and more consistent by minimizing the number of outliers present in the initial set of points. This can for example be done by using a needle-enhancing pre-filtering like a 3D Gabor filter [PZdW+14]. For needle tracking in a US sequence, a temporal filtering can be used to filter the output of the RANSAC algorithm and to predict a region of interest in which the needle should lie.
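The RANSAC principle described above can be sketched as follows for the simplest case of a 2D line model; the minimal sample of two points, the iteration count and the inlier tolerance are arbitrary illustrative choices:

```python
import random

def ransac_line(points, n_iter=200, tol=1.0):
    """Fit a 2D line to noisy points with RANSAC.
    Returns the largest consensus set of inliers found."""
    best_inliers = []
    for _ in range(n_iter):
        # 1. Draw a minimal random sample (two points define a line).
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if (x1, y1) == (x2, y2):
            continue
        # Line through the sample: a*x + b*y + c = 0, with (a, b) unit normal.
        a, b = y2 - y1, x1 - x2
        norm = (a * a + b * b) ** 0.5
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # 2. Count the points that fit this model within the tolerance.
        inliers = [(x, y) for (x, y) in points
                   if abs(a * x + b * y + c) <= tol]
        # 3. Keep the model with the largest consensus set.
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

With enough iterations, the probability of never drawing a pure-inlier sample becomes negligible, which is why the method tolerates a large fraction of outliers.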
A Kalman filter was used for 3D needle tracking in [START_REF] Chatelain | Real-time needle detection and tracking using a visually servoed 3d ultrasound probe[END_REF] and improved with a mechanics-based model to predict the motion of the needle between two acquisitions in [START_REF] Mignon | Beveled-tip needlesteering using 3d ultrasound, mechanical-based kalman filter and curvilinear roi prediction[END_REF].

Direct approach: Some approaches use the image intensity directly to localize the needle, without relying on a prior thresholding of the image. For example, projective methods consist in calculating the integral of the image intensity along a curve that represents an underlying model of the object sought in the image. The curve with the highest value for the integral is selected as the best representation. The best known projective method is the Hough transform, which uses straight lines as the model. The generalized Radon transform, using polynomials, can be used to track a needle in a 3D US volume, as performed in [START_REF] Neshat | Real-time parametric curved needle segmentation in 3d ultrasound images[END_REF]. Due to the high number of possible configurations for the model in the image, projective methods are highly computationally expensive, especially in 3D. In [OEC+06] detection rays are first traced perpendicular to an estimation of the needle direction and an edge detector is run along each ray to keep only one pixel per ray. This way the set of possible pixels belonging to the needle has a fixed and relatively small size. The Hough transform is then used to find a line approximation fitting the maximum of points. A polynomial fitting is finally performed to find the best shape of the needle. The pixel intensities can also be used directly with template matching to provide a fast tracking of the needle tip, as is done in [START_REF] Kaya | Visual tracking of biopsy needles in 2d ultrasound images[END_REF].
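The template matching idea mentioned above can be sketched as follows, here reduced to a 1D intensity profile with a sum-of-squared-differences score; the signal and template are arbitrary toy data, not actual US intensities:

```python
def match_template(signal, template):
    """Return the offset minimizing the sum of squared differences
    between the template and a window of the signal."""
    best_offset, best_score = 0, float("inf")
    for offset in range(len(signal) - len(template) + 1):
        # Score of the window starting at this offset.
        score = sum((signal[offset + i] - template[i]) ** 2
                    for i in range(len(template)))
        if score < best_score:
            best_score, best_offset = score, offset
    return best_offset
```

In an image the same sliding-window search is performed in 2D over a small region of interest around the previous tip position, which keeps the computation fast.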
An artificial neural network is used in [START_REF] Rocha | Flexible needles detection in ultrasound images using a multi-layer perceptron network[END_REF] to directly compute, for each pixel in a region of interest, the probability that this pixel belongs to the needle. A particle filter is used in [START_REF] Chatelain | 3d ultrasoundguided robotic steering of a flexible needle via visual servoing[END_REF] to locate a needle in 3D US. Each particle consists in a 3D polynomial curve that is directly projected into the 3D US volume. The probability that a particle corresponds to the real needle in the volume is computed using the sum of the intensities of the voxels along the curve and an additional term for the tip detection. In the following section, we present the needle tracking algorithm that we use in order to take into account the different points mentioned previously. We choose a direct approach to avoid the tuning of a threshold for a segmentation step, and we use a method that only considers a limited set of points in the image in order to keep a reduced computational complexity.

Intensity-based needle tracking

In this section we present the tracking algorithms that we designed to localize the 3D position of the needle shaft using stereo cameras or 3D ultrasound (US). Both algorithms directly use the intensity values of the pixels or voxels located near the previous position of the needle to find its new best position. Their local behavior allows for fast computations, while using the intensities directly makes them independent of the quality of a prior segmentation of potential points belonging to the needle. We first present the tracking using stereo cameras and then we focus on the design of the algorithm to track a needle in 3D US volumes.

Tracking with camera feedback

We present here the algorithm that we designed to track the 3D shape of a needle embedded in a translucent gelatin phantom using two stereo cameras.
Since cameras are not clinically relevant to observe a needle embedded in real tissues, this setup will be used for the validation of other aspects of the insertion, such as the control of the needle trajectory. The experimental conditions are thus optimized such that the algorithm can provide an accurate and reliable measure of the needle position. A uniform background is provided such that the needle is clearly visible in the images. We describe in the following the different steps allowing the measure of the 3D position of the needle from the 2D images acquired by the cameras.

Figure 3.8: Illustration of the reconstruction of a 3D point from its position observed in two images acquired by two different cameras. The red dots represent the 2D position of the object seen in both images and the green dot is the estimation of the 3D position of the object.

Camera registration: The two cameras are placed orthogonally to each other to provide a 3D feedback on the position of the needle and the phantom. The intrinsic parameters of each camera are first calibrated using the ViSP library [START_REF] Marchand | Visp for visual servoing: a generic software platform with a wide class of robot control skills[END_REF] and a calibration grid made of circles. These parameters comprise the position of the optical center in the image, the ratio between the focal length and the size of a pixel, as well as two parameters to correct for the radial distortion of the image. Once these intrinsic parameters are known, a mapping can be determined between each object in the image and a corresponding line in 3D space on which the object is supposed to lie. The relative pose between the cameras (translation and rotation) is then calibrated using the same calibration grid viewed by both cameras [START_REF] Marchand | Pose estimation for augmented reality: A hands-on survey[END_REF].
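Given the two lines of sight obtained from this mapping for an object seen by both cameras, its 3D position can be estimated as the point closest to both lines, i.e. the midpoint of their common perpendicular. A self-contained sketch (the function name and the numerical values used below are illustrative):

```python
def closest_point_to_two_lines(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two 3D lines,
    each given by a point p and a unit direction d."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def add(a, b, s=1.0):
        return tuple(x + s * y for x, y in zip(a, b))

    r = add(p1, p2, -1.0)            # p1 - p2
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel lines
    t = (b * e - c * d) / denom      # parameter of the closest point on line 1
    s = (a * e - b * d) / denom      # parameter of the closest point on line 2
    q1 = add(p1, d1, t)              # closest point on line 1
    q2 = add(p2, d2, s)              # closest point on line 2
    return tuple((x + y) / 2 for x, y in zip(q1, q2))
```

When the two lines actually intersect, the midpoint coincides with the intersection; with calibration noise the two lines are slightly skew and the midpoint provides the least-squares estimate of the 3D position.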
Any object observed in both images at the same time can be mapped to two different lines in space using the intrinsic parameters. The position of the object in 3D space can then be estimated by finding the closest point to the two lines, as illustrated in Fig. 3.8. The 3D accuracy of this tracking system, calculated from the size of the pixels and the distance of the needle from the cameras, is approximately 0.25 mm around the location of the needle. During the insertion procedure it can also be necessary to express the measured needle position in another reference frame.

Needle detection in 2D images: We use a gradient-based tracking algorithm to track the needle seen in each image. The principle of the algorithm is depicted in Fig. 3.9. The needle shape is approximated in the image with a third order polynomial curve defined by four equi-spaced control points. After the tracking has been initialized and a new image has been acquired, a line is drawn for each control point such that it is normal to the previous polynomial curve and passes by the control point. The two edges of the needle shaft are found as the points corresponding to the maximum and minimum values of the gradient along the normal line. The new position of the control point is taken as the center between these two edges. A Kalman filter is used to temporally smooth the position of each control point to avoid abrupt changes that may correspond to a bubble or another object with sharp edges near the control point. A new polynomial curve is then computed from the new positions of the control points. An edge detection is finally performed along a line tangent to the extremity of the curve to find the new position of the needle tip.

3D needle reconstruction: After the tracking has been done in each image, two 2D polynomial curves are available, which correspond to the projections of the 3D needle on the images.
Several points are sampled along one of the 2D curves and then matched to their corresponding points on the 2D curve in the second image using epipolar geometry, i.e. using the intrinsic parameters of the cameras and their relative pose to deduce the possible correspondences between different pixels in both images. A 3D point is then reconstructed from each pair of matching 2D points along the needle (see Fig. 3.8). Finally the 3D needle is reconstructed by fitting a 3D polynomial curve to the set of 3D points. The new 3D needle curve is then projected back onto each image to initialize the tracking in the next images. This allows a further smoothing of the motion of the 2D curves in the images and provides a way to recover if one of the 2D trackings in the images partially fails.

Iterative tracking using ultrasound needle artifacts

We describe here the new tracking algorithm that we use to track a flexible needle in 3D ultrasound (US) volumes. We first mention the different points that drove us toward the development of the algorithm and then we present the algorithm itself.

Needle artifacts: The presence of a needle in the field of view of a US probe leads to strong echoes reflected back to the transducer, due to the high difference of density between the needle and the tissues. This results in a bright region observed in the reconstructed US volume. Therefore a majority of needle tracking algorithms are designed to find the location of the needle in the middle of a bright zone. However the bright signal observed around the needle can mostly be due to US artifacts, like reverberation artifacts or lateral resolution degradation due to side lobes and beam width, as was presented in section 3.3.1. The effects of lateral resolution degradation are usually symmetric, such that the needle can effectively be found in the center of the bright zone in the lateral direction.
However in the beam propagation direction, only the first received echo corresponds to the first wall of the needle, while subsequent echoes are due either to the second wall or to reverberations inside the needle. Therefore the real position of the center of the needle is located just after the first echo and not in the center of the bright signal. Detecting the needle in the center of the bright zone would thus result in an erroneous estimation of the real position of the needle. For this reason the algorithm that we propose in the following is optimized to take such artifacts into account.

Segmentation: Some algorithms first perform a segmentation of the US volume to isolate the voxels that are likely to belong to the needle. A second algorithm, typically a Random Sample Consensus (RANSAC) algorithm, is then used to find among those voxels the largest set that best fits a predefined geometrical model of the needle shape. Hence the performances of this second algorithm highly depend on the quality of the segmentation step, both in terms of accuracy and processing time. However the segmentation of the needle is also usually heavily dependent on a threshold that determines whether a given voxel belongs to the needle or not. A too high value of the threshold may lead to ignoring some parts of the needle that are less bright than the rest, due to shadowing from other structures or a too large angle of incidence with the US beam. On the contrary, with a low value of the threshold too many voxels may be included, belonging to bright structures, background noise or needle artifacts. In practice the best tuning of the threshold may depend on the actual content of the volume, which can change over time during a same operation. In order to avoid these issues, we directly use the intensity of the voxels without a prior segmentation step.
Computation time: In order to perform real-time control, the tracking algorithm should be able to provide an estimation of the position of the needle that is not too outdated with respect to the real position of the needle. The acquisition and reconstruction of a 3D US volume in Cartesian space already introduces a delay, such that any further delay should be reduced to the minimum. Time consuming algorithms, like projective algorithms, usually perform heavy computations on a large set of voxels. These approaches are usually optimized using parallelization to achieve good timing performances. However this requires specialized hardware, which can increase the cost of a needle tracking system. In contrast, local algorithms only consider a limited set of voxels in the vicinity of an initial guess of the needle position. This initial guess is then refined iteratively until the new position of the needle is found. The actual result of such methods depends on their initialization, however they can perform with great speed and accuracy when the initial guess is not too far from the real needle. By exploiting only the data in a small region, they also ignore most outliers present in the volume, like other linear structures that could be mistaken for a needle by a global detection algorithm. Therefore in the following we choose to use a local approach to perform needle tracking.

Iterative tracking using needle artifacts: In order to address the different points mentioned previously, we propose to detect the position of the shaft of the needle in 3D US using a local iterative algorithm that directly uses the voxel intensities and takes into account the artifacts that are specific to the needle. The algorithm is initialized around a 3D polynomial curve that represents a prediction of the needle body position in the US volume. The curve is defined by N control points equi-spaced along the curve.
Several polynomial curve candidates are then sampled all around the first one by displacing each of the control points by a given step in the directions normal to the needle. Five positions are thus generated for each control point, leading to a total of 5^N curve candidates. The best curve is selected among the candidates to maximize a cost function calculated from the voxel intensities. The algorithm is then repeated around the new selected curve, until no better curve can be found around the current best one. In the following we note c_i the polynomial curve candidates, with i ∈ [[1, 5^N]], and V(c_i(l)) the intensity of the voxel at position c_i(l), with l the curvilinear coordinate along the curve. In order to take into account the different points mentioned previously, the cost function J(c_i) associated to a curve c_i is defined as follows

J(c_i) = (1 / J_3(c_i)) ∫_0^L (J_1(l) + J_2(l)) dl, (3.38)

where L is the length of the curve c_i and J_1, J_2, J_3 are different sub-cost functions. Figure 3.10 provides an illustration of the different sub-cost functions used in the algorithm. J_1 is used to detect the first wall of the needle in the beam propagation direction and to place the curve at a distance corresponding to the radius of the needle under this edge:

J_1(l) = - ∫_{-L_d}^{0} w(s) V(c_i(l) + (s - r_N) d(l)) ds + ∫_{0}^{L_d} w(s) V(c_i(l) + (s - r_N) d(l)) ds, (3.39)

where L_d defines the amount of voxels taken to perform the integrals, w is a weighting function used to give more importance to the voxels near the center of the integration zone, d(l) ∈ R³ is the beam propagation direction at needle point c_i(l) and r_N denotes the radius of the needle expressed in voxels. We used a triangular profile for w, defined such that

w(s) = (L_d + s)/L_d² if -L_d < s < 0, (L_d - s)/L_d² if 0 ≤ s < L_d, 0 otherwise. (3.40)

J_2 is used to promote the curves that are laterally centered in the bright zone, i.e.
bright portions that spread in a direction normal to the US beam: J 2 (l) = ∫ L n -L n w(s) V (c i (l) + s n(l)) ds, (3.41) where L n defines the number of voxels taken to perform the integral, and n(l) ∈ R 3 is a unit vector normal to the needle curve and beam propagation direction at the needle point c i (l) defined such that n(l) = (d(l) × dc i /dl (l)) / ||d(l) × dc i /dl (l)||. (3.42) The parameters L d and L n can be tuned to set the number of voxels taken into account around the curve candidates. Low values can be used to decrease the computations but the algorithm becomes more sensitive to noise in the volume. On the contrary, high values increase the computation time but introduce a better filtering of the noise. A trade-off can be achieved by choosing intermediate values corresponding to the expected dimensions of the cross section of the needle. Finally J 3 is used to penalize curves with high curvatures that may result from fitting adjacent background noise J 3 = ε / (ε + (1/L) ∫ L 0 ||d 2 c i /dl 2 (s)|| ds), (3.43) where ε is a parameter used to define a curvature threshold from which the curvatures are penalized. Tip tracking: Once the curve has been laterally fitted, the location of the needle tip p t is sought in the alignment of the extremity of the best curve c best to maximize the following cost function J 4 = ∫ 0 -L t w(s) V (p t + s dc best /dl (L)) ds -∫ L t 0 w(s) V (p t + s dc best /dl (L)) ds, (3.44) where L t defines the number of voxels taken to perform the integral. The parameter L t can be tuned similarly to L d and L n to find a trade-off between computational cost and sensitivity to noise. Due to the local and iterative nature of the algorithm, its performances in terms of timing and detection accuracy depend on the quality of the initialization of the needle position. With a proper initialization, the algorithm can perform fast and fit the exact shape of the needle.
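To make the edge-placement behavior of the sub-cost J 1 concrete, the following sketch evaluates a 1D analogue of (3.39) with the triangular weight (3.40) along a single beam line. The intensity profile, depths and parameter values are illustrative assumptions, not data from the experiments.

```python
import numpy as np

def w(s, Ld):
    """Triangular weighting profile of (3.40): peaks at s = 0, support (-Ld, Ld)."""
    if -Ld < s < 0:
        return (Ld + s) / Ld**2
    if 0 <= s < Ld:
        return (Ld - s) / Ld**2
    return 0.0

def J1(p, V, Ld, rN):
    """1D analogue of (3.39): score of placing the curve at depth p along the beam.

    Penalizes intensity in the region expected to be above the needle wall and
    rewards it below, with the wall expected one needle radius rN above the curve."""
    above = sum(w(s, Ld) * V(p + (s - rN)) for s in range(-Ld + 1, 0))
    below = sum(w(s, Ld) * V(p + (s - rN)) for s in range(0, Ld))
    return below - above

# Synthetic beam line: dark medium, then a bright needle echo starting at depth 50
V = lambda z: 1.0 if 50 <= z < 70 else 0.0
Ld, rN = 10, 3
scores = [J1(p, V, Ld, rN) for p in range(100)]
best_depth = int(np.argmax(scores))
```

As expected, the score is maximal when the curve sits one needle radius under the dark-to-bright edge, i.e. at depth 50 + rN = 53.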
This can for example be obtained by using a model-based estimation of the needle motion between two acquisitions of the US volume. The tracking and timing performances of the algorithm are evaluated in the next section. Experimental validation We propose to illustrate the performances of our needle tracking algorithm in 3D ultrasound (US) during the insertion of a needle. We compare the tracking result with an algorithm that uses the Random Sample Consensus (RANSAC) algorithm after an intensity-based binarization of the volume, in order to show the limitations that can appear with such an algorithm. Experimental conditions (setup in France): We use the wobbler probe and US station from BK Ultrasound to record a sequence of US volumes during the insertion of the Angiotech biopsy needle in a gelatin phantom. The needle is inserted using the Viper s650 and the US probe is held in place. The acquisition parameters of the US probe are set to acquire 31 frames during a sweeping motion with an angle of 1.46 • between successive frames. Due to the strong reflectivity of the walls of the container and the low attenuation in gelatin, a reverberation of the US wave occurs between the surface of the probe and the opposite wall. The acquisition depth is set to 15 cm, which is larger than the container, in order to remove the artifacts created by this reverberation from the region of the volume where the needle is located. This results in the acquisition of one volume every 900 ms and a maximal resolution of 0.3 mm × 1 mm × 2 mm at the level of the needle, which is approximately 5 cm away from the probe. The spatial resolution of the post-scan volume is set to 0.3 mm in all directions and linear interpolation is used for the reconstruction. A focus length of 5 cm is set for the transducer to obtain a good effective resolution near the needle. The needle is inserted slowly at 1 mm.s -1 such that the needle position is only slightly different between two volumes.
Tracking algorithm: We compare our intensity-based tracking algorithm to the result obtained with a tracking using the RANSAC algorithm. For our algorithm, we set the size of the integration regions L d , L n and L t to 10 voxels (see (3.39), (3.41) and (3.44)), corresponding to a distance of 3 mm around the needle. A manual initialization of both tracking algorithms is performed in the first volume after the needle has been inserted 1.5 cm in the phantom. The threshold for the volume binarization necessary for the RANSAC algorithm is chosen just after the initialization. The maximum intensity level along the needle is computed and the threshold is set to 80% of this value. The robustness of the RANSAC algorithm is increased by rejecting obvious outliers during the sampling process, which are identified when the length of the detected needle is lower than 90% of the length of the needle detected in the previous volume. Results: Both algorithms can track the needle without failing in a sequence of 3D US volumes. However they yield different shapes of the tracked needle at the different steps of the insertion. We detail these differences in the following. Limited needle intensity: Figure 3.12 shows two cross sections of a volume acquired near the beginning of one insertion. Due to the location and orientation of the needle with respect to the probe, a great part of the US beam reflected by the needle shaft does not return to the transducer, resulting in a low intensity along the needle. On the contrary, the needle tip is more visible and some strong reflections also occur near the surface. Hence, after applying a threshold to the image for the RANSAC algorithm, only the tip and the artifacts due to the insertion point remain. The needle tip can still be found thanks to the rejection of short fitting curves in the RANSAC algorithm, without which the best linear fit would be the artifact in this case.
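The threshold selection used for the RANSAC baseline can be sketched as follows. The volume, voxel coordinates and intensity values are hypothetical placeholders, chosen only to illustrate the 80% rule.

```python
import numpy as np

def ransac_threshold(volume, needle_voxels, ratio=0.8):
    """Binarization threshold: `ratio` times the maximum intensity sampled
    along the currently known needle voxels."""
    return ratio * max(volume[tuple(p)] for p in needle_voxels)

# Hypothetical 20x20x20 volume: a bright needle segment plus a weaker artifact
vol = np.zeros((20, 20, 20))
vol[10, 10, 5:15] = 200.0   # needle echo
vol[5, 5, 5] = 120.0        # spurious reflection, below the resulting threshold
needle_voxels = [(10, 10, z) for z in range(5, 15)]
thr = ransac_threshold(vol, needle_voxels)   # 0.8 * 200 = 160
mask = vol >= thr                            # binarized volume fed to RANSAC
```

Only the voxels above the threshold survive the binarization; as discussed next, this is precisely what makes the method fragile when the intensity varies along the shaft.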
However the result still overfits the artifact, leading to a global tracking that does not correspond to the real shape of the needle. On the other hand, our algorithm can accurately fit the shape of the needle in spite of the low intensity along the needle shaft. This shows that using a threshold to binarize the volume does not allow an adaptation to the variations of intensity along the needle. On the opposite, taking into account all levels of intensities allows exploiting all the information available on the edge of the needle, leading to a better tracking. Let us now consider the cases where higher intensities are available along the needle shaft and not only at the needle tip. Artifact fitting: Figure 3.13 shows three cross sections of a volume acquired in the middle of the insertion process. This time a part of the needle is almost normal to the beam propagation direction, such that a strong echo is reflected and results in a clearly visible bright region. Reverberation and side lobe artifacts are clearly visible in this case. The algorithm based on RANSAC tends to center in the middle of the bright region, which mainly contains reverberation artifacts. The resulting tracking is thus shifted with respect to the real position of the needle shaft. On the contrary, our algorithm fits the first echo produced by the needle. Conclusion: These experiments have shown that our intensity-based tracking algorithm allows taking into account needle artifacts, created by reverberation or beam width, to accurately detect the position of the needle body in 3D US volumes. Using the voxel intensities directly allows adapting to variations of intensity along the needle shaft. This point is important for real applications since the needle intensity may vary due to different phenomena, such as reflection away from the transducer or shadowing from other structures. Figure 3.12: Tracking of the needle at the beginning of the insertion. The needle tracked using the proposed algorithm is represented in green and the needle tracked using the RANSAC algorithm is represented in red. Due to the large incidence angle with the ultrasound beam, the intensity along the needle shaft is reduced. Thresholding the image for the RANSAC algorithm yields only the needle tip and the strong reflections near the surface, leading to an inaccurate needle shaft detection. On the contrary, taking all voxels into account leads to a better detection of the edges of the needle. Figure 3.13: Tracking of the needle in the middle of the insertion. The needle tracked using the proposed algorithm is represented in green and the needle tracked using the RANSAC algorithm is represented in red. A reverberation artifact is visible along the needle shaft in the beam direction (approximately the x axis), resulting in a comet tail that can be seen in the needle cross section view (xz view). Side lobe artifacts normal to this direction can also be seen on each side of the needle (along the z axis). Some parts of these artifacts are included in the binarized volume after thresholding, resulting in a biased tracking with the RANSAC algorithm. On the contrary, the tracking taking artifacts into account fits the first echo and ignores the reverberation artifact. Therefore, this tracking algorithm will be used in the following for all experiments performed under 3D US feedback. Nevertheless, we tested the tracking using a slow insertion speed such that the needle motion was small between two acquisitions. The local tracking could then perform smoothly. In practice it is possible that motions with greater amplitude occur between two acquisitions, either due to a faster manipulation of the needle base or due to some movements of the tissues induced by physiological motions of the patient.
The first point can be addressed by using a model that can predict the new position of the needle after a given motion has been applied to its base, like the model that we proposed in chapter 2. The second point, however, requires estimating the motions of the tissues, which will be the focus of the next section. Tissue motion estimation During needle insertion, physiological motions of the patient, like breathing, can induce a displacement of the tissues around the needle. This can modify the needle shape and the future trajectory of the needle tip. The effect of lateral tissue motions is all the more important when using flexible needles. Such needles indeed tend to follow the motions of the tissues without applying a lot of resistance. The modification of the future trajectory is also amplified when the part of the needle that is outside of the tissue is long, mainly at the beginning of the insertion. Therefore the interaction model needs to be updated online in order to account for such tissue motions and be able to provide a good estimation of the current state of the insertion. In this section we present the method that we propose and have validated to update the model. The position of the model of the tissues presented in the previous chapter (section 2.4.2) is estimated using an unscented Kalman filter (UKF). We first give a general presentation of Bayesian filtering and the formulations of particle filters and UKF. In a second part we provide more details to explain how we adapted the UKF to different kinds of available measurements, including needle position feedback, provided by visual or electromagnetic tracking, and force feedback. Multimodal estimation Bayesian filtering In this section we present and develop the general principles of Bayesian filtering that lead to the design of the unscented Kalman filter (UKF) and particle filter (PF).
In the following sections and chapters, the UKF will be used for state estimation and applied to the case of needle-tissue interaction modeling. System modeling: Bayesian filtering is a general approach used to estimate the state of a system given some observations of this system. The first step is to provide a model of the evolution of the state of the system over time. Let us consider a system that can be fully parameterized at each instant using a state vector x ∈ R Nx containing N x state variables. The system can also be controlled using an input vector u ∈ R Nu containing N u components. The evolution of the system can generally be modeled with a state equation such that x k+1 = f k (x k , u k , w k ), (3.45) where k represents the time index, w k ∈ R Nw is a process noise of dimension N w with covariance matrix Q k ∈ R Nw×Nw and f k : R Nx × R Nu × R Nw → R Nx is a function to model the deterministic behavior of the system. Let y ∈ R Ny be a vector of N y measures on the system such that y k = h k (x k , u k , ν k ), (3.46) where ν k ∈ R Nν is a measurement noise of dimension N ν with covariance matrix R k ∈ R Nν×Nν and h k : R Nx × R Nu × R Nν → R Ny is a function representing the deterministic measurement model. General principles: Bayesian filtering consists in estimating the probability density function (pdf) p(x k |y k , . . . , y 0 ) of the current state knowing the current and past measurements. In the following we briefly develop the computations that are used to provide a recursive estimation of p(x k |y k , . . . , y 0 ). It can be shown using Bayes law that we have the following relationship: p(x k |y k , . . . , y 0 ) = p(y k |x k , y k-1 , . . . , y 0 ) p(x k |y k-1 , . . . , y 0 ) / p(y k |y k-1 , . . . , y 0 ), (3.47) where p(y k |x k , y k-1 , . . . , y 0 ) is the pdf of the current measure knowing the current state of the system and the past measures, p(x k |y k-1 , . . .
, y 0 ) is the pdf of the current state of the system knowing the past measures and p(y k |y k-1 , . . . , y 0 ) is the pdf of the current measure knowing the past measures. First it can be seen that the denominator p(y k |y k-1 , . . . , y 0 ) does not depend on x k and is thus equivalent to a scaling factor for p(x k |y k , . . . , y 0 ). Since the integral of a pdf is always equal to 1, it is sufficient to compute and normalize the numerator, so that this scaling factor does not need to be computed and can be dropped. In addition, in order to simplify the derivation of the recursive filter, it is assumed that the system follows a first order Markov process, i.e. the state x k of the system at time k only depends on the previous state x k-1 and is independent of the earlier states. Under this assumption, the current measure only depends on the current state: p(y k |x k , y k-1 , . . . , y 0 ) = p(y k |x k ). (3.48) It can also be noted that p(x k |y k-1 , . . . , y 0 ) can be further developed using the chain rule: p(x k |y k-1 , . . . , y 0 ) = ∫ p(x k |x k-1 )p(x k-1 |y k-1 , . . . , y 0 )dx k-1 , (3.49) where p(x k |x k-1 ) is the pdf of the current state knowing the previous state of the system and p(x k-1 |y k-1 , . . . , y 0 ) is the pdf of the previous state knowing the past measures. Finally, we get the recursive formula p(x k |y k , . . . , y 0 ) ∝ p(y k |x k )p(x k |y k-1 , . . . , y 0 ) (3.50) ∝ p(y k |x k ) ∫ p(x k |x k-1 )p(x k-1 |y k-1 , . . . , y 0 )dx k-1 . (3.51) A graphical illustration of this equation is provided in Fig. 3.14. In practice p(x k-1 |y k-1 , . . . , y 0 ) is known from the previous step of the recursive method, p(x k |x k-1 ) can be estimated using the evolution model (3.45) and p(y k |x k ) can be estimated using the measurement model (3.46). Hence most Bayesian filters proceed in two steps: a prediction step, where a prediction of the state is made based on the previous estimate, i.e. p(x k |y k-1 , . . . , y 0 ) is computed using (3.49), and an update step, where the new measure is integrated to correct the prediction, i.e. p(x k |y k , . . .
, y 0 ) is computed using (3.50). Figure 3.15: Illustration of the pdf approximations used by Kalman filters (KF) and particle filters (PF). Implementations: There exist many families of Bayesian filters that use different methods to get the estimations of the different pdfs and perform the prediction and update steps. For an overview of Bayesian filters we invite the reader to refer to [START_REF] Van Der Merwe | The unscented particle filter[END_REF] or [START_REF] Chen | Bayesian filtering: From kalman filters to particle filters, and beyond[END_REF]. The family of the particle filters uses a finite set of samples to approximate the pdf. This allows a good estimation of the pdfs but requires more computational resources, especially for a high dimensional state space. The family of the Kalman filters (KFs) uses the Gaussian approximation, i.e. all the pdfs are Gaussian. This greatly reduces the computations but may lead to approximations when the real pdfs are highly non-Gaussian. Figure 3.15 shows an illustration of these different approximations. In the following we briefly focus on particle filtering before detailing more thoroughly the Kalman filters. Particle filter The principle of the particle filters (PFs) is to use a large set of N p weighted samples X i , called particles, to approximate the different pdfs. The weights w i associated to each particle are a representation of the likelihood of the particle and are defined such that Σ Np i=1 w i = 1. A pdf g(x) of a random variable x is thus approximated as g(x) ≈ Σ Np i=1 w i δ(x -X i ), (3.52) where δ is the Dirac delta function. Using this approximation, the pdfs in (3.51) are reduced to finite sums. The main advantage of the PF is that it can be used with non-linear systems as well as non-Gaussian pdfs. However its performance depends on the number of particles used in the approximations.
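A minimal sketch of the bootstrap variant of the particle filter on a toy scalar system is given below. It uses the common simplification of resampling at every step so that weights are reset to uniform; the system, noise levels and particle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, f, h, y, proc_std, meas_std):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Prediction: propagate every particle through the evolution model with noise
    particles = f(particles) + rng.normal(0.0, proc_std, particles.shape)
    # Update: weight each particle by a Gaussian measurement likelihood
    weights = np.exp(-0.5 * ((y - h(particles)) / meas_std) ** 2)
    weights /= weights.sum()
    # Resampling: draw particles proportionally to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy scalar system: static state x = 2, identity evolution and measurement
f = h = lambda x: x
particles = rng.uniform(-5.0, 5.0, 2000)
for _ in range(20):
    particles = pf_step(particles, f, h, y=2.0, proc_std=0.05, meas_std=0.5)
estimate = particles.mean()
```

The particle cloud concentrates around the true state, illustrating how the finite weighted sum of (3.52) approximates the posterior.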
A high number of particles is usually required to obtain a good accuracy, especially when considering high dimensional state spaces, which increases the required computational load. Many variants of the PF exist depending on the method used to sample and update the particles [START_REF] Chen | Bayesian filtering: From kalman filters to particle filters, and beyond[END_REF]. On the contrary, Kalman filters offer a reduced complexity and are deterministic since they do not rely on a random sampling process. Therefore we will use this kind of filter in the following. Kalman filters Let us develop the case of the KFs a bit further. Under the Gaussian assumption, each pdf can be entirely characterized using only its mean µ and covariance matrix P , such that it takes the form p(x) = 1/√((2π) Nx |P |) e -1/2 (x-µ) T P -1 (x-µ) , (3.53) with |P | the determinant of P and . T the transpose operator. An estimate of the state at the end of each step can directly be built using the mean of the state pdf, and the covariance matrix gives the uncertainty on this estimate. In the following we note xk|k-1 and P x,k|k-1 the mean and covariance matrix, respectively, of the pdf of the state at the end of the prediction step, and xk and P x,k the same at the end of the update step. We also introduce the prediction of the measures at the end of the prediction step ŷk . It can be shown that the update step can be reduced to xk = xk|k-1 + K k (y k -ŷk ), (3.54) P x,k = P x,k|k-1 -K k P ỹ,k K T k , (3.55) K k = P xy,k P -1 ỹ,k , (3.56) where K k ∈ R Nx×Ny is called the Kalman gain, P xy,k ∈ R Nx×Ny is the covariance matrix between x k and y k , and P ỹ,k ∈ R Ny×Ny is the covariance of the innovation ỹk = y k -ŷk . Different versions of KFs can be derived depending on the method used to propagate the pdfs through the evolution and observation equations. For completeness we briefly describe the most known classical KF and extended Kalman filter (EKF) before detailing the UKF.
Kalman filter and extended Kalman filter: For both KF and EKF, the propagations of the pdfs are done by directly propagating the means through the system equations, (3.45) and (3.46), and the covariance matrices through a linearized version of the equations. The prediction step is thus computed according to xk|k-1 = f k (x k-1 , u k-1 , 0), (3.57) P x,k|k-1 = F k-1 P x,k-1 F T k-1 + W k-1 Q k-1 W T k-1 , (3.58) ŷk = h k (x k|k-1 , u k , 0), (3.59) where F k = ∂f k /∂x | x=x k ∈ R Nx×Nx , (3.60) W k = ∂f k /∂w | x=x k ∈ R Nx×Nw . (3.61) The update step is performed as stated previously in (3.54) and (3.55) with the values P xy,k = P x,k|k-1 H T k , (3.62) P ỹ,k = H k P x,k|k-1 H T k + G k R k G T k , (3.63) where H k = ∂h k /∂x | x=x k|k-1 ∈ R Ny×Nx , (3.64) G k = ∂h k /∂ν | x=x k|k-1 ∈ R Ny×Nν . (3.65) The difference between the KF and EKF is that the KF makes the additional assumption that the system is linear, while the EKF can be used with non-linear systems. This way no linearization is required for the KF, which reduces the computational complexity and makes it easy to implement. In this case the system equations become x k+1 = F k x k + B k u k + W k w k , (3.66) y k = H k x k + D k u k + G k ν k , (3.67) with B k ∈ R Nx×Nu and D k ∈ R Ny×Nu . Unscented Kalman filter: The UKF proposed by Julier et al. [START_REF] Julier | A new extension of the kalman filter to nonlinear systems[END_REF] is a sample-based KF and in that sense comes closer to a PF. It uses a small number of weighted state samples, called sigma points, to approximate the Gaussian pdfs. The propagation of the pdfs through the system is done by propagating the sigma points in the system equations. The advantage of this method is that it does not linearize the equations around one point but instead propagates the sigma points through the non-linearities.
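A minimal sketch of the linear KF, restricted to the autonomous case without input or noise shaping matrices, is given below. The scalar toy system and its parameter values are illustrative assumptions.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction step ((3.57)-(3.58)) for a linear system without input."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, y, H, R):
    """Update step (3.54)-(3.56) for a linear measurement model."""
    S = H @ P @ H.T + R                 # innovation covariance, (3.63)
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain, (3.56)
    x = x + K @ (y - H @ x)             # (3.54)
    P = P - K @ S @ K.T                 # (3.55)
    return x, P

# Toy example: estimate a constant scalar from repeated measurements of 5
F = H = np.array([[1.0]])
Q = np.array([[1e-6]])
R = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[100.0]])
for _ in range(50):
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([5.0]), H, R)
```

The estimate converges to the measured value while the covariance shrinks, reflecting the growing confidence of the filter.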
This way the propagation can be achieved with a higher order of approximation than with the EKF in the case of a highly non-linear system [WVDM00] while also being less computationally demanding than a PF. In the case of a numerical model for which the linearization in the EKF can not be done analytically, the UKF requires computations similar to those of the EKF. Therefore, due to its better performances and simplicity, we will use the UKF to update our numerical interaction model instead of the other filters presented previously. We develop its principle a bit further in the following. Augmented state: Usually, the process and observation noises, w and ν, are incorporated with the state x in an augmented state x a ∈ R Nx+Nw+Nν such that x a = [x; w; ν], P a x = [P x , P xw , P xν ; P T xw , Q, P wν ; P T xν , P T wν , R], (3.68) with P xw ∈ R Nx×Nw the covariance between the state and the process noise, P xν ∈ R Nx×Nν the covariance between the state and the measurement noise and P wν ∈ R Nw×Nν the covariance between the process noise and the measurement noise. Under this form, the UKF allows taking into account non-linear incorporation of correlated noises. For simplicity of notation and computation, we assume in the following that the process and measurement noises w and ν are independent additive noises. This way P xw = 0, P xν = 0, P wν = 0 and the system equations take the form x k+1 = f k (x k , u k ) + w k , (3.69) y k = h k (x k , u k ) + ν k . (3.70) This simplification allows us to follow the UKF steps using only the state x instead of the augmented state x a . Prediction and update steps: At each iteration, a set of 2N x + 1 sigma points X i , i ∈ [[0, 2N x ]], is sampled from the current state pdf according to the weighted unscented transform, so that X 0 = xk-1 , X i = xk-1 + (α √(N x P x )) i , i = 1, . . . , N x , X i = xk-1 -(α √(N x P x )) i-Nx , i = N x + 1, . . .
, 2N x , (3.71) where α is a positive scaling factor that is used to control the spread of the sigma points, √(N x P x ) denotes a matrix square root of N x P x and () i denotes the i th column of a matrix. Using a large value for α leads to wide spread sigma points and a small value leads to sigma points close to each other. Tuning this parameter may be difficult as it should depend on the shape of the non-linearity that is encountered. Close sigma points may be equivalent to a linearization of the non-linearity while spread sigma points may be too far from the non-linearity of interest, which may lead to a reduced quality of the filtering in both cases. The prediction step is performed by propagating each sigma point through the evolution equation (3.69): X i ← f k-1 (X i , u k-1 ), i = 0, . . . , 2N x . (3.72) The mean xk|k-1 and covariance matrix P x,k|k-1 of the Gaussian pdf associated to the new propagated set can then be computed using weighted sums along the new propagated sigma points: xk|k-1 = Σ 2Nx i=0 W (m) i X i , (3.73) P x,k|k-1 = Q + Σ 2Nx i=0 W (c) i (X i -xk|k-1 )(X i -xk|k-1 ) T , (3.74) with W (m) 0 = (α 2 -1)/α 2 , (3.75) W (c) 0 = (α 2 -1)/α 2 + 3 -α 2 , (3.76) W (m) i = W (c) i = 1/(2α 2 N x ), i = 1, . . . , 2N x , (3.77) so that the weights W (m) i sum to 1. For the update step, a corresponding estimate of the measures Y i is then associated to each sigma point using the measure equation (3.70): Y i = h k (X i , u k ), i = 0, . . . , 2N x . (3.78) The standard update step ((3.54)-(3.56)) is finally performed to obtain the final estimate of the new state mean and covariance. The different terms are estimated as weighted sums along the sigma points: ŷk = Σ 2Nx i=0 W (m) i Y i , (3.79) P xy,k = Σ 2Nx i=0 W (c) i (X i -xk|k-1 )(Y i -ŷk ) T , (3.80) P ỹ,k = R + Σ 2Nx i=0 W (c) i (Y i -ŷk )(Y i -ŷk ) T . (3.81) An illustration of the different steps of the UKF is provided in Fig. 3.16.
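The sigma point sampling (3.71) and the weighted reconstruction of mean and covariance can be sketched as below, using a Cholesky factor as one valid choice of matrix square root. The sanity check uses a linear map, for which the unscented transform is exact; the state, covariance and map values are illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, alpha):
    """Sigma points and weights of (3.71), (3.75)-(3.77)."""
    n = len(x)
    S = np.linalg.cholesky(alpha**2 * n * P)   # matrix square root of alpha^2*n*P
    pts = np.vstack([x] + [x + S[:, i] for i in range(n)]
                        + [x - S[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * alpha**2 * n))
    Wm[0] = (alpha**2 - 1.0) / alpha**2
    Wc = Wm.copy()
    Wc[0] += 3.0 - alpha**2
    return pts, Wm, Wc

def unscented_transform(pts, Wm, Wc, g):
    """Propagate the sigma points through g and recover mean and covariance."""
    Y = np.array([g(p) for p in pts])
    mean = Wm @ Y
    diff = Y - mean
    cov = (Wc[:, None] * diff).T @ diff
    return mean, cov

# Sanity check on a linear map, where the transform should be exact
x = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
A = np.array([[1.0, 1.0], [0.0, 2.0]])
pts, Wm, Wc = sigma_points(x, P, alpha=1.0)
mean, cov = unscented_transform(pts, Wm, Wc, lambda p: A @ p)
```

For this linear map the recovered moments match A x̂ and A P Aᵀ exactly; the interest of the construction lies in how it degrades gracefully for non-linear maps.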
Discussion: As a side remark, it can be noted that all the operations performed in the KFs assume that the variables lie in a vector space. In the case where one of the variables lies in a manifold that does not reduce to a vector space, the vector operations, such as addition or multiplication by a matrix, lose their signification. The Gaussian pdfs are also harder to define on manifolds. This can typically be the case when considering orientations in the state space or measurement space. In that case we use the manifold version of the KFs as described for the UKF by Hauberg et al. [START_REF] Hauberg | Unscented kalman filtering on riemannian manifolds[END_REF]. This method basically consists in mapping the variables (sigma points or their associated measure estimates) to a tangent space at some point of the manifold using the logarithm map. This way all the linear operations developed previously can be used on this tangent space, which is a vector space. Note also that the covariance matrices only make sense on the tangent space. Once the calculations have been performed, the resulting estimates of the state or measures can be mapped again on the manifold using the exponential map. At each prediction step the tangent space of the state manifold is taken at the current state estimate xk , corresponding to the center sigma point X 0 . The remaining sigma points are sampled in this tangent space according to (3.71). Similarly, at each update step the tangent space of the measure manifold is taken at the measure estimate of the center sigma point Y 0 = h k (X 0 , u k ). The measures associated to each sigma point are then all mapped to this tangent space. The covariance matrices can then be computed using (3.80) and (3.81) by replacing in the equations the sigma points and measure estimates by their corresponding mapping on the tangent spaces.
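The logarithm and exponential maps can be made concrete for the unit sphere S², the measurement manifold used later for needle direction feedback. The sketch below implements the standard angle-axis construction of (3.90)-(3.93); the test directions are arbitrary illustrative values.

```python
import numpy as np

def log_map(y0, yi):
    """Angle-axis vector, in the tangent space at y0, of the rotation taking y0 to yi."""
    c = np.cross(y0, yi)
    norm_c = np.linalg.norm(c)
    if norm_c < 1e-12:
        return np.zeros(3)          # yi is (anti)parallel to y0
    theta = np.arctan2(norm_c, np.dot(y0, yi))
    return theta * c / norm_c

def exp_map(y0, v):
    """Rotate y0 by the angle-axis vector v (valid since the axis is orthogonal to y0)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return y0.copy()
    u = v / theta
    return np.cos(theta) * y0 + np.sin(theta) * np.cross(u, y0)

y0 = np.array([0.0, 0.0, 1.0])
yi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
v = log_map(y0, yi)     # tangent vector whose norm is the angle between y0 and yi
back = exp_map(y0, v)   # maps back to yi
```

Composing the two maps recovers the original direction, which is the round-trip property the manifold UKF relies on.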
An illustration of the logarithm and exponential maps as well as the different steps of the UKF on manifold spaces can be found in Fig. 3.17. Now that the general formulation of the UKF has been presented, the next section develops how we make use of it to update the state of our needle-tissue interaction model. Figure 3.17: Illustration of the unscented Kalman filter on manifolds. Tissue motion estimation using unscented Kalman filter In this section we present how we use the unscented Kalman filter (UKF) to estimate the tissue motions and update our needle-tissue interaction model presented in section 2.4.2. We will consider two kinds of measurements: measurements on the geometry of the needle, such as position or direction of some point of the needle shaft, and measurements of the force and torque exerted at the base of the needle. The method is described in such a way that it is independent of the method actually used in practice to provide the measurements. Position and direction feedback can for example be provided by an electromagnetic (EM) tracker placed somewhere inside the needle or through shape reconstruction using fiber Bragg gratings [PED + 10]. It can also be provided by a needle detection algorithm that runs on some visual feedback; the visual feedback itself can be of various nature, as for example a sequence of 2D or 3D images provided by cameras [START_REF] Bernardes | Robot-assisted automatic insertion of steerable needles with closed-loop imaging feedback and intraoperative trajectory replanning[END_REF], ultrasound [START_REF] Kaya | Visual tracking of biopsy needles in 2d ultrasound images[END_REF], computerized tomography [HGG + 13] or magnetic resonance imaging [PvKL + 15]. We do not consider the case where the position of the tissues is directly provided, for example by using an EM tracker or a visual marker placed on the tissue surface.
Although the method could also be used with such measures, it poses additional issues that will be observed and discussed later in section 3.6.2. Evolution equation Let us define the state of the UKF as the position x ∈ R 3 of the tissues in the two-body model. We take this state as the position of the extremity of the tissue spline near the tissue surface (entry point), expressed in the world frame {F w }, as illustrated in Fig. 3.18. We consider here that the tissue spline can not deform, such that the state x is then sufficient to characterize the whole motion of the tissue spline. In the case where prior information is known on the tissue motions, this can be included in the evolution model by choosing an adequate function f k in (3.45). For example a periodic model of breathing motion [HMB + 10] can be used when needle insertion is performed near the lungs and the patient is placed under artificial breathing, leading to x k = a + b cos^(2n)(πt k /T + φ), (3.82) where T is the period of the motion, a ∈ R 3 is the initial position, b ∈ R 3 is the amplitude of the motion, φ ∈ R is the phase of the motion, n ∈ N is a coefficient used to tune the shape of the periodic motion and t k is the time. Using a model for tissue motions has the advantage that the process noise in the filter can be tuned with lower values of uncertainties in the covariance matrix, leading to an overall better smoothing of the measures. It can also be used to provide a prediction of the future position of the tissues. However, if the model does not fit the real motion, it may lead to poor filtering performances. In most situations, the exact motion of the tissues is not known and additional parameters to estimate need to be added to the state, such as the motion amplitude b, the period T or the phase φ. This, however, adds a layer of complexity to the model and can induce some observability issues if the number of measurements is not increased as well.
In clinical practice, patients are rarely placed under artificial breathing or even general anesthesia for needle insertion procedures such as biopsies. Breathing motion can then be hard to model perfectly since it may have an amplitude or frequency varying over time. It may also happen that the patients suddenly hold their breath or simply move in a way that is not expected by the model. In this case the prediction can be far from the reality and may cause the state estimation to diverge. In order to take into consideration the previous remarks and be able to account for any kind of possible motions, we choose a simple random walk model. This offers great flexibility but at the expense of reduced prediction capabilities on the tissue motions. The corresponding evolution equation can be written as x k+1 = x k + w k . (3.83) The advantage of this form is that the equation is linear and the noise is additive. This way it is not required to use the unscented transform to perform the prediction step, which reduces to xk|k-1 = xk-1 , (3.84) P x,k|k-1 = P x,k-1 + Q k-1 . (3.85) The sigma points can then be sampled using xk|k-1 and P x,k|k-1 for the update step that we describe in the following.
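The two candidate evolution models of this section can be sketched together: a periodic breathing motion, here assuming a Lujan-style cos^(2n) profile for (3.82), and the trivial random walk prediction step (3.84)-(3.85). The amplitudes, period and covariances are illustrative assumptions.

```python
import numpy as np

def breathing_model(t, a, b, T, phi, n):
    """Periodic breathing motion of (3.82), assuming a cos^(2n) profile."""
    return a + b * np.cos(np.pi * t / T + phi) ** (2 * n)

def random_walk_predict(x_est, P_est, Q):
    """Prediction step (3.84)-(3.85) for the random walk model (3.83)."""
    return x_est.copy(), P_est + Q

# Hypothetical 3D breathing motion with a 4 s period (amplitudes in mm)
a, b = np.zeros(3), np.array([0.0, 2.0, 5.0])
x1 = breathing_model(1.3, a, b, T=4.0, phi=0.2, n=2)
x2 = breathing_model(1.3 + 4.0, a, b, T=4.0, phi=0.2, n=2)  # one period later

# Random walk: the mean is kept and only the uncertainty grows
x_pred, P_pred = random_walk_predict(np.array([1.0, 0.0, 0.0]),
                                     np.eye(3), 0.1 * np.eye(3))
```

The breathing model repeats with period T (the even power cancels the sign flip of the cosine after half a cycle of π t/T), while the random walk simply inflates the covariance, which is what gives it its flexibility.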
Needle position: Let us first consider the case where the measurements consist in a set of points belonging to the needle. Let us define a set of M points p_j, j ∈ [[1, M]], located at some given curvilinear coordinates l_j on the needle. In that case the measure vector can be written as

y = (p_1ᵀ · · · p_Mᵀ)ᵀ. (3.86)

From the model of the needle, the estimates p̂_j of the measured needle points can be computed according to

p̂_j = c_N(l_j), (3.87)

where we recall that c_N is the spline curve representing the needle in the model. Note that it is possible to change the dimension of the measure vector y and the curvilinear coordinates l_j depending on the measures that are available. For example if a needle tracking algorithm is used, points can be added when and where the needle is clearly visible in the image, while fewer points may be available when and where the needle is almost not visible. The dimensions of the measurement noise vector ν_k and its covariance matrix R_k will also vary accordingly.

Needle direction: In some cases the direction of the body of the needle at some given curvilinear coordinate l_d can also be measured. This is typically the case when using a 5 degrees of freedom EM tracker embedded in the tip of the needle. In that case the measure vector can be written as

y = d = (d_x, d_y, d_z)ᵀ, (3.88)

where d ∈ S² is a unit vector tangent to the body of the needle at the curvilinear coordinate l_d and S² denotes the unit sphere in R³. From the model of the needle, the estimate d̂ of the needle body direction at curvilinear coordinate l_d can be computed according to

d̂ = dc_N(l)/dl |_{l=l_d}. (3.89)

In that case, since S² is not a vector space, we need to use the version of the UKF on manifolds that was discussed in section 3.5.1.3. The tangent space of S² is taken at the measure estimate Y_0 associated to the center sigma point.
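To make the measure estimates concrete, here is a hypothetical sketch of (3.87) and (3.89) for a needle represented by piecewise cubic polynomial segments (consistent with the order r = 3 segments used in the chapter). The coefficient layout and function names are our assumptions, not the actual implementation:

```python
def eval_spline(segments, seg_len, l):
    """Point c_N(l) on the needle spline (3.87). `segments` is a list of
    segments; each segment holds one (a0, a1, a2, a3) coefficient tuple per
    axis, valid for a local abscissa s in [0, seg_len]."""
    i = min(int(l // seg_len), len(segments) - 1)
    s = l - i * seg_len  # local abscissa within segment i
    return [a0 + a1 * s + a2 * s**2 + a3 * s**3
            for (a0, a1, a2, a3) in segments[i]]

def eval_tangent(segments, seg_len, l):
    """Unit tangent dc_N/dl at l, used for the direction measure (3.89)."""
    i = min(int(l // seg_len), len(segments) - 1)
    s = l - i * seg_len
    d = [a1 + 2 * a2 * s + 3 * a3 * s**2 for (_, a1, a2, a3) in segments[i]]
    n = sum(c * c for c in d) ** 0.5
    return [c / n for c in d]
```

The measure vector (3.86) is then obtained by stacking `eval_spline(segments, seg_len, l_j)` for every measured coordinate l_j.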
In this particular case the logarithm map of a measure point Y_i is the angle-axis rotation vector θu representing the rotation between Y_0 and this measure point. This can be computed using

Log_{Y_0}(Y_i) = θu, (3.90)

with

u = (Y_0 × Y_i) / ||Y_0 × Y_i||, (3.91)
θ = atan2(||Y_0 × Y_i||, Y_0 · Y_i), (3.92)

where × denotes the cross product between two vectors, u is the axis of the rotation and θ is the angle between the two vectors Y_0 and Y_i. The exponential map of an angle-axis rotation vector θu in the tangent space is obtained by rotating Y_0 according to this rotation vector, such that

Exp_{Y_0}(θu) = cos(θ)Y_0 + sin(θ) u × Y_0. (3.93)

Efforts at the needle base: Let us now consider the measures of the force and the torque exerted at the needle base. Since our needle model does not take into account any axial compression or torsion, it can not be used to provide estimates of the axial force and torque exerted on the base. We therefore only consider the measures of the lateral forces and torques, which are sufficient to estimate the lateral motions of the tissues. The corresponding measure vector can be written as

y = (f_bᵀ, t_bᵀ)ᵀ, (3.94)

where f_b ∈ R² is the lateral force exerted at the base of the needle and t_b ∈ R² is the lateral torque exerted at the base of the needle. From the model of the needle, the estimates of the force f̂_b and torque t̂_b can be computed according to the Bernoulli equations

f̂_b = EI d³c_N(l)/dl³ |_{l=0}, (3.95)
t̂_b = EI d²c_N(l)/dl² |_{l=0} × z, (3.96)

where we recall that E is the Young's modulus of the needle, I is the second moment of area of a section of the needle, c_N is the spline curve representing the needle in the model and z is the axis of the needle base.
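The logarithm and exponential maps ((3.90)-(3.93)) translate directly into code. A self-contained sketch follows (helper and function names are ours, with an added guard for (anti)parallel vectors that the text does not discuss):

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def log_map(Y0, Yi):
    """Log_{Y0}(Yi): angle-axis vector theta*u ((3.90)-(3.92)) between two
    unit vectors on the sphere S^2."""
    c = cross(Y0, Yi)
    theta = math.atan2(norm(c), dot(Y0, Yi))  # (3.92)
    if norm(c) < 1e-12:  # Y0 and Yi (anti)parallel: rotation axis undefined
        return [0.0, 0.0, 0.0]
    return [theta * x / norm(c) for x in c]   # (3.91) scaled by theta

def exp_map(Y0, w):
    """Exp_{Y0}(theta*u) (3.93): rotate Y0 by the angle-axis vector w."""
    theta = norm(w)
    if theta < 1e-12:
        return list(Y0)
    u = [x / theta for x in w]
    uxY0 = cross(u, Y0)
    return [math.cos(theta) * y + math.sin(theta) * c for y, c in zip(Y0, uxY0)]
```

Note that (3.93) is the Rodrigues rotation formula simplified by the fact that u, coming from a cross product with Y_0, is orthogonal to Y_0.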
Update step: A complete measure vector first needs to be chosen as a combination of the different measurements defined previously, for example a vector stacking the force measures provided by a force sensor and the position and direction measures provided by an electromagnetic tracker. Let us now describe how the update step is performed at each new acquisition of the measures. The state of the whole needle-tissue model is first saved at the moment of the acquisition. The sigma points X_i are then sampled using (3.71) around the estimate of the tissue position obtained at the prediction step. A new needle-tissue model is then generated for each sigma point and the position of the spline c_T representing the tissues is modified according to the sigma point X_i. The new needle shape of each model is then computed and the estimates of the measures Y_i can be generated from the model as defined previously in (3.87), (3.89), (3.95) or (3.96). Since the actual spread of the sigma points depends on the covariance P_{x,k|k-1}, it can happen that a high uncertainty leads to unfeasible states. For example if the distance between the current state estimate and one of the sigma points is greater than the length of the needle, it is highly probable that the model of the needle corresponding to this sigma point can not interact with the model of the tissues anymore. Such a sigma point should thus be rejected to avoid a failure of the computation of the model, or at least avoid irrelevant estimates of the measures. Therefore the value of α is tuned at each update step to avoid such numerical issues (see (3.71)). A small value α = 10⁻³ is chosen as the default, as is typically done in many works using the UKF [Wan and van der Merwe]. We then adaptively reduce the value of α when needed such that the sigma points do not spread further than 1 mm from the current estimated position of the tissues.
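The text does not specify the exact law used to reduce α; one possible sketch, assuming the standard sigma-point spread sqrt(n + λ)·σ with λ = α²(n + κ) - n (which simplifies to α·sqrt(n + κ)·σ), is:

```python
import math

def tune_alpha(P_diag, n, kappa=0.0, alpha_default=1e-3, max_spread=1.0):
    """Hypothetical adaptive tuning of the sigma-point scaling alpha:
    halve alpha until the farthest sigma point stays within `max_spread`
    (here in mm) of the current estimate. P_diag: diagonal of the state
    covariance (mm^2), n: state dimension."""
    sigma_max = math.sqrt(max(P_diag))  # largest standard deviation (mm)
    alpha = alpha_default
    # spread of the farthest sigma point: alpha * sqrt(n + kappa) * sigma_max
    while alpha * math.sqrt(n + kappa) * sigma_max > max_spread and alpha > 1e-6:
        alpha *= 0.5
    return alpha
```

With the default α = 10⁻³ the sigma points stay extremely close to the mean, so the reduction only triggers when the covariance has grown very large.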
The new state estimate and state covariance can finally be updated according to the update step equations defined previously ((3.54)-(3.56) and (3.79)-(3.81)). The position of the whole tissue spline in the model is then updated according to the value of x̂_k computed by (3.54). Now that we have described a method to estimate the position of the tissues from measures provided on the needle, we propose in the following to use this method and assess its performances in different experiments.

Tissue update validation

In this section we present different experimental scenarios to evaluate the performances of our tissue motion estimation algorithm using the unscented Kalman filter. We first present the results obtained using the effort feedback provided by a force sensor and the position feedback on the needle tip provided by an electromagnetic tracker. We then consider the case of position feedback on the needle shaft provided by cameras. Finally, we estimate the position of the tissues using the position feedback provided by a 3D ultrasound probe and use this estimation to improve the robustness of the needle tracking algorithm.

Update from force and position feedback

We consider in this section the update of the model using the force and torque feedback on the needle base as well as the position and direction feedback on the needle tip.

Experimental conditions (setup in the Netherlands): The setup used in these experiments is depicted in Fig. 3.19. We use the needle insertion device (NID) attached to the UR3 robot. The Aurora biopsy needle with the embedded electromagnetic (EM) tracker is placed inside the NID and inserted in a gelatin phantom. The UR5 robot is used to apply a known motion to the phantom. The ATI force torque sensor is used to measure the interaction efforts exerted at the base of the needle and the Aurora EM tracker is used to measure the position and direction of the tip of the needle.
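For reference, the correction step invoked above can be sketched in the standard UKF form; we assume here that (3.54)-(3.56) follow this usual formulation, since the exact equations are defined earlier in the chapter:

```python
import numpy as np

def ukf_update(X_sigma, Y_sigma, x_pred, P_pred, y_meas, Wm, Wc, R):
    """Standard UKF correction step (assumed form of (3.54)-(3.56)).
    X_sigma (N x n): state sigma points, Y_sigma (N x m): their measure
    estimates from the model, Wm/Wc (N,): mean/covariance weights,
    R (m x m): measurement noise covariance."""
    y_hat = Wm @ Y_sigma                    # predicted measure
    dX = X_sigma - x_pred                   # state deviations
    dY = Y_sigma - y_hat                    # measure deviations
    P_yy = dY.T @ (Wc[:, None] * dY) + R    # innovation covariance
    P_xy = dX.T @ (Wc[:, None] * dY)        # state-measure cross covariance
    K = P_xy @ np.linalg.inv(P_yy)          # Kalman gain
    x_new = x_pred + K @ (y_meas - y_hat)   # corrected state estimate
    P_new = P_pred - K @ P_yy @ K.T         # corrected state covariance
    return x_new, P_new
```

In our case Y_sigma is produced numerically by running the needle-tissue model once per sigma point, which is precisely what replaces an analytic h_k.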
We use the two-body model presented in section 2.4.1 with polynomial needle segments of order r = 3 to represent the part of the needle that is outside the NID.

Registration: Registration of the position of the EM tracking system in the frame of the UR3 robot is performed before the insertions. The needle is moved to different positions and two sets of positions are recorded, one given by the UR3 odometry and one given by the EM tracker. Point cloud matching between the two sets is then used to find the pose of the EM tracking system in the frame of the UR3. The force torque sensor is used to measure the interaction efforts between the needle and the tissues. Since the sensor is mounted between the UR3 robot arm and the NID, it also measures the effect of gravity due to the mass of the NID. The effect of gravity must therefore be removed from the measures, in addition to the sensor's natural biases, to obtain the desired measures. Note that we only apply small velocities and accelerations to the NID during our experiments and for this reason we choose to ignore the effects of inertia on the force and torque measurements. The details of the force sensor registration can be found in Appendix A.

Experimental scenario: The force and EM data were acquired during the experiments on motion compensation that will be presented later in the thesis. In this section we only take into account the different measurements that were acquired during those experiments and we do not focus on the actual control of the needle that was performed. During those experiments, a known motion was applied to the phantom with the UR5 while the needle was inserted at constant speed with the NID. The UR3 was controlled to apply a lateral motion to the whole NID to avoid tearing the gelatin or breaking the needle.
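The point cloud matching used for this registration can be performed with the classic SVD-based (Kabsch) rigid alignment; the following is a sketch assuming known point correspondences between the two recorded sets, not the actual implementation:

```python
import numpy as np

def register_point_clouds(A, B):
    """Rigid registration (Kabsch/SVD): find rotation R and translation t
    such that R @ A[i] + t ~= B[i]. Here A would hold the positions given
    by the EM tracker and B the matching positions from the UR3 odometry."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)          # centroids
    H = (A - ca).T @ (B - cb)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

The returned (R, t) directly gives the pose of the EM tracking system in the UR3 frame.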
Update method: The length of the needle model is updated during the insertion to correspond to the real length of the part of the needle that is outside the NID, measured from the full length of the needle and the current translation of the NID. The pose of the simulated needle base is updated using the pose of the UR3 and the rotation of the needle inside the NID. The position of the modeled tissues is estimated using our update algorithm based on the unscented Kalman filter (UKF) presented in section 3.5.2. In order to determine the contribution of each component, in the following we consider three update cases: one using only the force and torque feedback at the needle base, one using only the position and orientation feedback of the needle tip, and the last one using all the measures. In each case the different measures are stacked in one common measure vector that is then used in the UKF. The estimates for each kind of measure are computed as described in previous section 3.5.2.2, i.e. using (3.94) to (3.96) for the force and torque feedback, (3.86) and (3.87) for the position feedback and (3.88) to (3.93) for the orientation feedback. For each method, we consider that the measurements are independent, such that the measurement noise covariance matrix R in the UKF (used in (3.81)) is set as a diagonal matrix. The value of each diagonal element is set depending on the type of measure: (0.7)² mm² for the tip position, (2)² (°)² for the tip orientation, (0.2)² N² for the force and (25)² (mN.m)² for the torque. These values are chosen empirically, based on the sensor accuracies and the way they are implemented in the setup. The process noise covariance matrix Q (used in (3.74)) is also set as a diagonal matrix with diagonal elements set to (0.2)² mm².

Results on the filtering of the measures: We first compare the difference between the measured quantities and their values estimated using the model updated by the UKF.
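The diagonal matrix R can then be assembled by stacking the per-sensor variances listed above. A hypothetical sketch follows; the number of diagonal entries per measure type (3 per 3D position, 2 per tangent-space direction and per lateral force/torque) is our assumption:

```python
def build_measure_noise(n_pos=0, n_dir=0, n_force=0, n_torque=0):
    """Diagonal of the measurement noise covariance R, stacking the
    empirical per-sensor variances from the text."""
    diag = []
    diag += [0.7 ** 2] * (3 * n_pos)      # tip position, (0.7)^2 mm^2 per axis
    diag += [2.0 ** 2] * (2 * n_dir)      # tip direction, (2)^2 deg^2 (tangent space of S^2)
    diag += [0.2 ** 2] * (2 * n_force)    # lateral force, (0.2)^2 N^2
    diag += [25.0 ** 2] * (2 * n_torque)  # lateral torque, (25)^2 (mN.m)^2
    return diag
```

The order of the blocks must of course match the order in which the measures are stacked in the measure vector y.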
An example of tip positions as well as the position and orientation estimation errors obtained during one of the experiments is shown in Fig. 3.20. The position and orientation measures obtained from the EM tracking system are considered as the ground-truth for these experiments. We can see that the tip position and orientation are better estimated when using only the tip measurements, while using only the force and torque feedback tends to introduce a drift in the estimation that increases with the depth of the needle tip in the gelatin. This could be expected because of the flexible nature of the needle. Near the tissue surface the pose of the needle base has a great influence on the needle shape. On the other hand, the shape of a flexible needle is progressively determined by its interaction with the tissues as it is inserted deeper. The interaction force near the tip of the needle tends to be damped by the tissues and has little influence on the force measured at the needle base. Then, the more the needle is inserted, the less information about the needle tip is provided by the force and torque measured at the needle base. Force and torque measures are respectively shown in Fig. 3.21 and 3.22. The measures obtained from the force sensor are considered as the ground-truth for these experiments. We can observe that even when using only the force and torque feedback, the estimated measures of the torque do not seem to fit the real measures as well as expected. This can be explained by the low value of the torque measures compared to the value of the variance that was set in the UKF, such that the torque is almost not taken into account for the estimation in this case. The low value of the measures can be explained by the experimental conditions. Indeed, the needle can slide in and out of the NID to modify its effective length, meaning that the effective base of the needle is not fixed to the NID.
This introduces a play between the needle and the NID that causes a dead-zone in which torques are not transmitted correctly. On the other hand, we can observe that the force is correctly estimated when using only the force and torque feedback, while some errors can appear when using only the tip position and orientation feedback. This can be explained, as previously, by the fact that the position of the tip provides little information on the force at the base once the needle is inserted in the tissues. Overall it can be observed that using all the measures to perform the update provides a trade-off between the fitting of the different measures by the model. Whatever the kind of measure chosen to update the model, we can see in Fig. 3.20 that the position of the needle tip can be well estimated, with an error under 2.5 mm while a 1.5 cm lateral motion is applied to the tissues. This can be sufficient in clinical practice to reach standard-size tumors in the body while the patient is freely breathing.

Results on the tissue motion estimation: Finally, let us compare the estimation of the position of the tissues to the measure provided by the odometry of the robot moving the phantom. The estimated and measured positions are shown in Fig. 3.23, as well as the absolute estimation error. It can be seen that the overall shape of the tissue motion is well estimated. However, some lag and drift in the estimation can be observed for all combinations of the measures. In the case of the force measurements, the lag can be due to the play between the needle and the NID. Indeed, the tissues have to move by a certain amount and displace the needle before any force can actually be transmitted to the NID and be measured. This issue could be solved, along with the problem of torque measurement mentioned previously, by using a needle manipulator that provides a firmer attachment of the needle.
In the case of the tip position measurements, the drift can be due to modeling errors on the shape of the spline curve simulating the path cut in the tissues. Indeed, the extremity of this spline is progressively updated according to the position of the simulated needle tip during the insertion. However, modeling errors can lead to an incorrect shape of the spline, such that the estimation of the rigid translation of the tissues can not be done properly. A first solution could be to allow some deformations of the spline once it has been created; however, this would introduce many additional parameters that would need to be estimated. This can create observability issues and may require additional sensors, which is not desirable in practice. Another solution would be to directly use the position feedback provided on the needle tip to update the extremity of the spline. This solution will be explored in the following when using visual feedback on the entire needle shaft.

Conclusions: We have seen that the update of the position of the tissues in our model can be done using a method based on the UKF with measures provided by force and torque feedback at the needle base and/or EM position feedback on the tip. Both modalities can provide good results by themselves, such that it may not be required to use both at the same time. However they provide different kinds of information that may be used for different purposes, such as accurate targeting for the EM tracker and reduction of the forces applied to the tissues for the force sensor. An additional advantage of using the force sensor is that it does not require a specific modification of the needle, contrary to the EM tracker that must be integrated into the needle before the insertion and removed before injecting anything through the lumen of the needle.
Nevertheless, neither of them can provide feedback on the position of a target in the tissues, such that an additional modality is required for needle insertion procedures. On the contrary, medical imaging modalities can provide feedback on a target as well as on the position of the needle. Therefore in the next section we focus on the estimation of the tissue motions in our model by using the position feedback provided by an imaging modality.

Update from position feedback

In this section, we propose to test our tissue motion estimation method to update our interaction model using a 3D position feedback on the needle shaft. We focus here on the visual feedback provided by cameras to validate the algorithm. However it could be adapted to any other imaging modality that can provide a measure of the needle localization, as will be done with 3D ultrasound (US) in the next section.

Figure 3.24: Experimental setup used to validate the performances of the tissue motion estimation algorithm when using the visual feedback provided by two cameras to detect the position of the needle body.

In the following we present the experiments that we performed to assess the quality of the model update obtained using the measures of the positions of several points along the needle. The performances are compared in terms of accuracy of the tip trajectory and estimated motions of the tissues.

Experimental conditions (setup in France): The setup used for these experiments is depicted in Fig. 3.24. The Angiotech biopsy needle is attached to the end effector of the Viper s650 and inserted in a gelatin phantom embedded in a transparent plastic container. The needle is inserted in the phantom without steering, i.e. the trajectory of the base of the needle simply describes a straight vertical line. Lateral motions are applied manually to the phantom during the insertion. Visual feedback is obtained using the stereo camera system.
The whole needle shaft is tracked in real-time by the image processing algorithm described previously in section 3.4.1. The position of the phantom is measured from the tracking of two fiducial markers with four dots glued on each side of the container [Horaud et al.] (see Fig. 3.24 and Fig. 3.27). We use the two-body model presented in section 2.4.1 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments with the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 3200 N.m⁻² and the length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm.

Model update: We propose to compare five different methods to represent the needle and to update the spline curve representing the path cut in the tissues in our model, as described in the following:

• Method 1: the needle is modeled as a straight rigid needle.

• Method 2: the needle is modeled using the two-body flexible needle model. The extremity of the tissue spline is updated using the cutting edge of the modeled bevel, as was described in the definition of the model in section 2.4.2.

• Method 3: similar to method 2, except that the extremity of the tissue spline is updated using the visual feedback instead of the model of the bevel. The segment is added to link the last added segment to the current position of the real needle tip measured from the camera visual feedback. However the position of the whole tissue spline is not modified during the insertion.

• Method 4: similar to method 2 with the addition of the proposed update algorithm based on the unscented Kalman filter (UKF) to estimate the position of the tissue spline from the measured position of the needle.
• Method 5: similar to method 3 with the addition of the proposed update algorithm based on the UKF to estimate the position of the tissue spline from the measured position of the needle.

For each method, the position of the simulated needle base is updated during the insertion using the odometry of the robot. For methods 4 and 5, we use the positions of several points along the needle as input for the update algorithm (as described by (3.86) and (3.87) in section 3.5.2.2). The points are extracted 5 mm from each other along the 3D polynomial curve obtained from the needle tracking using the cameras. The measurement noise covariance matrix R in the UKF is set as a diagonal matrix with diagonal elements equal to (0.25)² mm², corresponding to the accuracy of the stereo system used to get the needle points. The process noise covariance matrix Q is set as a diagonal matrix with diagonal elements equal to (0.1)² mm².

Experimental scenario: Five insertions at different locations in the phantom are performed. The needle is first inserted 1 cm in the phantom to be able to initialize the needle tracking algorithm described in section 3.4.1. The insertion is then started, such that the needle base is only translated along the needle axis. The phantom is moved manually along different trajectories for each insertion, such that the motions have an amplitude up to 1 cm in the x and y directions of the world frame {F_w}, as depicted in Fig. 3.24.

Results: We present now the results obtained during the experiments. We first consider the accuracy of the simulated tip trajectories and evaluate the effect of the update rate on this accuracy. The quality of the estimation of the motions of the tissues is then assessed and we discuss some limitations of the modeling.

Comparison of tip trajectories: We first compare the tip trajectories obtained with the different model update methods.
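The extraction of points every 5 mm along the tracked curve amounts to resampling a 3D curve at a fixed curvilinear spacing. A sketch using linear interpolation along a polyline approximation of the tracked curve (the function name is ours):

```python
import math

def resample_polyline(points, spacing=5.0):
    """Resample a 3D polyline (e.g. points sampled densely on the tracked
    needle curve) at a fixed curvilinear spacing, here 5 mm, to produce the
    measure points p_j fed to the UKF."""
    out = [list(points[0])]
    dist_left = spacing  # distance remaining before the next output point
    for p, q in zip(points[:-1], points[1:]):
        seg = math.dist(p, q)
        pos = 0.0
        while seg - pos >= dist_left:
            pos += dist_left
            t = pos / seg  # interpolation factor within segment [p, q]
            out.append([a + t * (b - a) for a, b in zip(p, q)])
            dist_left = spacing
        dist_left -= seg - pos
    return out
```

Densely sampling the 3D polynomial curve first makes the linear interpolation error negligible compared to the 0.25 mm accuracy of the stereo system.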
The average absolute position error between the measured and simulated needle tips, calculated over time and across the different experiments, is summarized in Fig. 3.25 and Table 3.1. An example of measured and simulated tip positions obtained during one experiment is shown in Fig. 3.26. Figure 3.27 shows the corresponding pictures of the needle acquired with the cameras at different steps of the insertion. The tissue spline corresponding to each model is overlaid on the images at each step. It is clearly visible from the simulated tip trajectories that updating the model while taking into account the motions of the tissues is crucial to ensure that the model remains a good representation of the real needle. It can also be observed from the mean absolute error over all the experiments in Fig. 3.25 that the more the model is updated, the better it fits the reality. However we can see that method 3 yields poor results since only the extremity of the tissue spline is updated, by adding new segments that fit the measured positions of the tip. Since the lateral position of the spline is not updated to account for tissue motions, the resulting shape of the spline does not correspond to the reality, as can be seen in Fig. 3.27 (blue curve). On the contrary, modifying the whole position of the spline in addition to the update of its extremity allows taking into account the lateral motions of the tissues, as is done with methods 4 and 5. These results illustrate that feedback on the needle is a necessity during insertion procedures. Indeed, a pre-operative planning alone would not be sufficient to predict the real trajectory of the flexible needle, as is illustrated by the trajectories of the non-updated models (methods 1 and 2). Therefore, the association of the needle model and update algorithm that we propose proves to be a good way to accurately represent the current state of the insertion process.
However, 3D medical imaging modalities typically have an acquisition time that is longer than the frame period of the cameras used in these experiments, such that the update can only be performed at a lower rate. Hence, we propose to compare the results obtained using two different update rates for the update methods that use the visual feedback (methods 3, 4 and 5).

Effect of update rate: In order to simulate a slower imaging modality, like the 3D US that we will use in the following, we set the update rate to 1 Hz, meaning that the update of the tissue spline is performed only once every second. However, the update of the position of the needle base from the robot odometry is still performed at the fast rate available with the robot. The resulting error between the measured and simulated needle tip trajectories during the example experiment can be seen in Fig. 3.28. The average tip position errors calculated over time and across the different experiments are also summarized in Fig. 3.25 and Table 3.1. As expected, a higher update rate (30 Hz) provides better results than a lower update rate (1 Hz), since more measures can be taken into account to estimate the position of the tissues. However, regularly updating the model even at a low rate still allows a good reduction of the modeling errors that occur between two acquisitions, such that we can expect good results from the algorithm with 3D US.

Estimation of tissue motions: Now that we have illustrated the importance of updating the interaction model to ensure a good modeling of the needle during the insertion procedure, we propose to evaluate the performances of the update algorithm to see if it can actually be used to estimate the real motions of the tissues. The position of the phantom is obtained by the tracking of the fiducial markers placed on the container, as can be seen in Fig. 3.27. The measured positions of the tissues during the previous insertion example are presented in Fig.
3.29 along with the estimation provided by method 4. The results using the slower update rate are also shown. Overall the update method allows the tracking of the motions of the tissues and similar results are observed for both high and low update rates. This can also be observed in Fig. 3.27, in which it is visible that the updated tissue splines from methods 4 and 5 follow the motion of the tissues around the needle. We further discuss the quality of the estimation in the following.

Limitations of the model: Some tracking errors can still be observed on the position of the tissues when updating the model. They can be due to the accumulation of errors in the shape of the tissue spline, as was also discussed in previous section 3.6.1 when using force and position feedback. The same solution that was proposed could also be used here, consisting in updating the whole shape of the tissue spline instead of only its global translation. However, observability issues are then very likely to appear, due to the fact that different shapes of the tissue spline can lead to similar needle shapes. Additional phenomena can explain the tracking errors, such as the nonlinear properties of the tissues, on which we briefly focus in the following. During some of our other experiments, large lateral motions were applied to the base of the needle, such that the needle was cutting laterally in the gelatin and a tear appeared at the surface. In this case the needle is moving inside the tissues without external motion of the tissues. The results of the tissue motion estimation using update method 4 in this case are shown in Fig. 3.30. The tearing of the gelatin occurred at the beginning of the insertion, from t = 2.5 s to t = 4.5 s. We can see that the model is automatically updated according to the measures of the needle position, so that a drift appears in the estimated position of the tissues.
Once the needle has stopped cutting laterally in the gelatin (at t = 4.5 s), the needle is embedded anew in the tissues. This is equivalent to changing the rest position of the cut path associated to the real needle, and this is what is actually represented by the tissue spline of the updated model. Hence the following motions of the tissues are well estimated, although the drift remains.

Figure 3.30: Example of tissue motions estimated using the update method 4 with the position feedback obtained from cameras. At the beginning of the insertion (blue zone from t = 2.5 s to t = 4.5 s), the needle base is moved laterally such that a tearing occurs at the surface of the gelatin. This creates an offset in the estimation of the motions of the tissues.

Even if the tearing of the tissues is less likely to appear in real biological tissues, this example shows that our model and update method can lead to a wrong estimation of the real position of the tissues due to unmodeled phenomena. However, it can also be noted that if the simulated position of the tissues was updated according to an external position feedback on the real tissues, for example by tracking a marker on the surface of the tissues, the resulting state of the model would poorly fit the position of the real needle. On the contrary, our update algorithm using the position of the needle allows the model to fit the measures provided on the needle and to remain consistent with the way it locally represents the tissues. This can be seen as an advantage of the method, since the goal of our model is to give a good estimation of the local behavior of the needle without modeling all the surrounding tissues.

Conclusions: From the results of these experiments we can conclude that the method that we proposed to update the state of our model based on the UKF allows taking into account the effect of tissue motions on the shape of the needle.
We have also seen that the nonlinear phenomena occurring in the tissues, such as a lateral cutting by the needle, can have a great impact on the quality of the estimation of the real position of the tissues. In practice, real tissues are less prone to tearing than the gelatin used in the experiments and the needle will also be steered to avoid such tearing; however, the hyper-elastic properties of real biological tissues may induce a similar drift in the estimation. Therefore, the update algorithm will not be used in the following as a way to measure the exact position of the tissues, but only as a way to keep the model in a good state to represent the local behavior of the needle. We could also see that the method provides a good update even when considering the low acquisition rate that is available with a slower, but still real-time, imaging modality such as 3D US. Hence, in the next section we use the update method as a way to increase the modeling accuracy of the needle insertion, such that it can be used as a prediction tool to improve the tracking of a needle in 3D US volumes.

Needle tracking in 3D US with moving soft tissues

In this section we propose to combine the model update method based on the unscented Kalman filter (UKF) that was designed in section 3.5.2 with the needle tracking algorithm in 3D ultrasound (US) proposed in section 3.4.2. This combination is used to provide a robust tracking of a needle in a sequence of 3D US volumes during an insertion in moving tissues. In the previous section we used the visual feedback provided by cameras to track the needle and update the needle model to take into account the lateral motions of the tissues. However, the position of the tracking system was registered beforehand in the frame of the robotic needle manipulator by observing the needle in the acquired images, as described in section 3.4.1.
In the case of a 3D US probe, a similar registration of the pose of the probe would require many insertions of the needle in the tissues to be able to observe its position in the US volume. This is not possible in a clinical context, in which multiple insertions should be avoided and where the registration process should be simple and not time consuming. Therefore, we propose to use a fast registration method performed directly at the beginning of the insertion procedure. In the following we present the results of the experiments that we performed to assess the performances of the tracking method combining our contributions.

Experimental conditions (setup in France): The Angiotech biopsy needle is used and attached to the end effector of the Viper s850. The insertion is done vertically in a gelatin phantom embedded in a transparent plastic container. The container is fixed to the end effector of the Viper s650, which is used to apply a known motion to the phantom. We use the 3D US probe and US station from BK Ultrasound to acquire 3D US volumes online. The US probe is fixed to the same table on which the phantom is placed, such that it is perpendicular to the needle insertion direction and remains in contact with the phantom. The acquisition parameters of the US probe are set to acquire 31 frames during a sweeping motion with an angle of 1.46° between successive frames. The acquisition depth is set to 10 cm, resulting in the acquisition of one volume every 630 ms and a maximal resolution of 0.3 mm × 1 mm × 2 mm at the level of the needle, which is approximately 5 cm away from the probe. The spatial resolution of the post-scan volume is set to 0.3 mm in all directions and linear interpolation is used for the reconstruction.

Tracking method: We use the tracking algorithm proposed in section 3.4.2 that exploits US artifacts to track the needle in the acquired sequence of US volumes.
For each new volume acquisition, the tracking is initialized using one of three different methods described in the following:

• Method 1: the tracking is initialized from the position of the needle tracked in the previous volume. No model of the needle is used in this case.

• Method 2: the tracking is initialized using the projection of the needle model in the 3D US volume. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments, the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 3200 N.m^-2 and the length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm. The model is updated between two volume acquisitions using only the odometry of the Viper s850 to define the position of the simulated needle base.

• Method 3: the same process as method 2 is used, except that the model is updated with the method presented in section 3.5.2 to take into account the motions of the tissues. Similarly to the experiments performed in the previous section with camera feedback, we use the positions of several points separated by 5 mm from each other on the needle body as inputs for the UKF. The measurement noise covariance matrix R in the UKF is set with diagonal elements equal to (2)^2 mm^2 and the process noise covariance matrix Q with diagonal elements equal to (3)^2 mm^2.

Note that the needle model is defined in the frame of the robot, since the position of the simulated base is set according to the robot odometry. A registration between the US volume and the robot is thus necessary for the update method in order to convert the position of the needle body tracked in the volume to the robot frame.
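As an illustration of the filter configuration used in method 3, the following is a minimal, hypothetical UKF sketch (not the thesis implementation). It assumes a scalar state (a single lateral tissue offset in mm), an identity process model, and a measurement model where every tracked needle point observes the same offset; the covariance values match those given above.

```python
import numpy as np

# Minimal unscented Kalman filter sketch (hypothetical, not the thesis code).
# State x: lateral tissue offset (mm), assumed constant between volumes
# (random-walk process model). Measurements z: lateral positions of points
# tracked along the needle body, all observing the same offset.

def sigma_points(x, P, alpha=1e-1, beta=2.0, kappa=0.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + S.T, x - S.T])          # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, z, Q, R, f=lambda s: s, h=lambda s: s):
    # Predict: propagate sigma points through the process model f.
    pts, wm, wc = sigma_points(x, P)
    Xp = np.array([f(p) for p in pts])
    xp = wm @ Xp
    Pp = Q + sum(w * np.outer(p - xp, p - xp) for w, p in zip(wc, Xp))
    # Update: propagate new sigma points through the measurement model h.
    pts, wm, wc = sigma_points(xp, Pp)
    Zp = np.array([h(p) for p in pts])
    zp = wm @ Zp
    Pzz = R + sum(w * np.outer(s - zp, s - zp) for w, s in zip(wc, Zp))
    Pxz = sum(w * np.outer(p - xp, s - zp) for w, p, s in zip(wc, pts, Zp))
    K = Pxz @ np.linalg.inv(Pzz)
    return xp + K @ (z - zp), Pp - K @ Pzz @ K.T

# Covariances as in the experiments: R diagonal (2)^2 mm^2, Q diagonal (3)^2 mm^2.
m = 3                                    # number of tracked needle points
Q = np.diag([3.0**2])                    # process noise (mm^2)
R = np.diag([2.0**2] * m)                # measurement noise (mm^2)
h = lambda s: np.repeat(s, m)            # every point sees the same offset
x, P = np.zeros(1), np.eye(1) * 100.0
for _ in range(20):                      # tissues shifted laterally by 5 mm
    x, P = ukf_step(x, P, np.full(m, 5.0), Q, R, h=h)
```

With the identity process and measurement models above, the filter reduces to a standard Kalman filter and converges to the applied 5 mm offset; the real update uses the full needle-tissue interaction model as process model.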
Registration: We describe here the registration method that we use to find the correspondence between a voxel in a 3D US volume and its real location in the needle manipulator frame. The US volumes are first scaled to real Cartesian space by using the size of a voxel, which is known from the characteristics of the probe and the process used to convert pre-scan data into post-scan data (as explained in section 3.2.2.2). In order to be in accordance with our objective of a reduced registration time and complexity, we use a fast registration method that can be applied directly at the beginning of the insertion procedure. After an initial insertion step, the part of the needle that is visible in the acquired US volume is manually segmented, giving both the tip position and orientation. The pose of the volume is then computed by matching the measured tip position and orientation to the position and orientation obtained from the needle manipulator odometry and the needle model. The manual needle segmentation is also used for the initialization of the needle tracking algorithm. Note, however, that this method provides a registration accuracy that depends on the quality of the manual needle segmentation.

Experimental scenario: We perform 10 straight insertions of 10 cm at different locations in the gelatin phantom with an insertion speed of 5 mm.s^-1. The needle is first inserted 1 cm in the phantom and manually segmented in the US volume to initialize the different tracking algorithms and register the probe pose. The insertion is then started at the same time as the motion applied to the phantom. For each experiment, a similar 1D lateral motion is applied to the container such that the phantom always stays in contact with both the table and the US probe.
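The pose computation of the fast registration step described above (matching a segmented tip position and direction in the volume to the odometry-based ones) can be sketched as follows. This is our own illustrative formulation, not the thesis code; it assumes the roll about the needle axis is irrelevant, so one point and one direction suffice to fix the transform.

```python
import numpy as np

# Hypothetical sketch of the fast registration step: align the needle tip
# position and direction segmented in the US volume (probe frame) with the
# same quantities given by the robot odometry and the needle model (robot
# frame). One point + one direction fixes the transform up to a roll about
# the needle axis, which is assumed irrelevant here.

def rotation_between(a, b):
    """Rotation matrix sending unit vector a onto unit vector b (Rodrigues)."""
    v = np.cross(a, b)
    s, c = np.linalg.norm(v), float(np.dot(a, b))
    if s < 1e-12:
        if c > 0:
            return np.eye(3)              # already aligned
        # antiparallel: 180 deg turn about any axis orthogonal to a
        axis = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 \
            else np.array([0.0, 1.0, 0.0])
        axis -= axis.dot(a) * a
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

def register(tip_us, dir_us, tip_robot, dir_robot):
    """Pose (R, t) of the US volume in the robot frame: p_r = R p_us + t."""
    R = rotation_between(dir_us / np.linalg.norm(dir_us),
                         dir_robot / np.linalg.norm(dir_robot))
    t = tip_robot - R @ tip_us
    return R, t

# Example with made-up values: tip segmented in the scaled volume (mm-like
# units) vs. the odometry-based tip pose in the robot frame (meters).
R, t = register(np.array([10.0, 20.0, 30.0]), np.array([0.0, 1.0, 0.0]),
                np.array([0.05, 0.00, 0.10]), np.array([0.0, 0.0, -1.0]))
```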
The motion follows a profile m(t) similar to a breathing motion [HMB + 10], expressed as

m(t) = b cos^4((π/T) t - π/2),    (3.97)

where t is the time, b is the magnitude of the motion, set to 1 cm, and T is the period of the motion, set to 5 s. The insertion is performed for a duration of 18 s, roughly corresponding to 4 periods of the motion and the acquisition of 29 volumes.

Results on tip tracking: An example of the positions of the tip tracked during one experiment using the different methods is shown in Fig. 3.31, along with the ground-truth acquired by manual segmentation of the needle tip in the volumes.

Figure 3.31: The insertion is performed along the y axis of the probe while the lateral motion of the tissues is applied along the z axis. One tracking method is initialized without a model of the needle (blue), one is initialized from a model of the needle that does not take into account the motions of the tissues (green) and one is initialized from a model updated using the tissue motion estimation algorithm presented in section 3.5.2 (red).

Figure 3.32 shows the result of the needle tracking algorithms in two orthogonal cross sections of the volume near the end of the insertion. We can observe that tracking the needle without any a priori information on the needle motion occurring between two volume acquisitions (method 1) leads to a failure of the tracking as soon as the insertion begins. The tracking gets stuck on the artifact appearing at the surface of the gelatin, due to the fast lateral velocity of the tissues as well as the low visibility of the needle at the beginning of the insertion. We can see that the tracking is able to follow the motion of the tissues (z axis in Fig. 3.31) since the artifact moves with the phantom. However, the length of the inserted part of the needle is not provided to the algorithm, so that the tracking stays at the surface without taking into account the insertion motion, as can be seen in Fig. 3.32.
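As a side note, the breathing-like profile of Eq. (3.97) with the magnitude and period used in these experiments can be reproduced in a few lines (illustrative sketch, units in meters and seconds):

```python
import math

# Breathing-like motion profile of Eq. (3.97), with the magnitude and period
# used in the experiments (b = 1 cm, T = 5 s).

def breathing_motion(t, b=0.01, T=5.0):
    """Lateral displacement m(t) = b * cos^4(pi*t/T - pi/2)."""
    return b * math.cos(math.pi * t / T - math.pi / 2) ** 4

# The profile starts at rest, peaks at b at mid-period and is T-periodic;
# sampling every 0.25 s over the 18 s insertion:
samples = [breathing_motion(0.25 * k) for k in range(73)]
```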
On the contrary, using the needle model to initialize the tracking allows taking into account the length of the needle that is currently inserted, such that both methods 2 and 3 provide a good estimation of the tip position along the y axis. However, when the model is only updated at its base (method 2), the tracking mostly fails due to the wrong lateral location of the initialization that does not take into account the motions of the tissues. We can see that the tracking is rather inconsistent in this case. Sometimes the tracking recovers the correct location of the needle when it is near the model, as can be seen in Fig. 3.31 from volume 13 to 16 (green curve); and when the tissues move far from their initial position, the tracking is initialized near other structures or artifacts, such that it fails to find the needle, as is the case in Fig. 3.32. On the other hand, updating the model according to the tracked position of the needle (method 3) allows taking into account the motions of the tissues. This way, the prediction of the needle localization in the following volume is of good quality and the tracking algorithm can accurately find the position of the needle. Overall, the combination of the tracking algorithm with the updated model allows a good tracking of the needle tip with a mean accuracy of 3.1 ± 2.5 mm over the volume sequences of all the insertions, which may be sufficient for commonly performed needle insertions.

Results on tissue motion estimation: As a final consideration, let us have a look at the estimated position of the tissues in the updated model that is provided in Fig. 3.33. We can see that the overall motions of the tissues are well estimated by the algorithm. However, we can observe a delay between the estimation and the measures. Although a part of this delay may be introduced by the filtering effect of the UKF, it is most probably due to the delay introduced by the acquisition time required to obtain the final 3D US volume.
This issue could be solved by taking into account the known time required by the system to reconstruct the volume from the data acquired by the US transducer. We can also observe a slight drift of the estimation during the insertion. This can be due to the accumulation of modeling errors that can arise because of some local tearing of the gelatin when the phantom moves far from its initial position. It can also come from the fast registration method that we used in these experiments, which can introduce a difference between the real position of the needle and the measured position reconstructed from the tracking in the volume. These observations confirm what has already been discussed in the previous section, namely that updating the position of the tissues in the model should only be used as a way to get a good representation of the needle by the model and not as an accurate measure of the tissue position.

Conclusions: We provided a method for improving the robustness of the tracking of a flexible needle in 3D US volumes when the tissues are subject to lateral motions. Using a mechanics-based model allows a prediction of the motions of the needle tip and shaft due to the motions of the needle manipulator between two volume acquisitions. The prediction can then be used to provide a good initialization of an iterative needle tracking algorithm. Finally, updating the model thanks to the result of the tracking allows taking into account the motions of the tissues and improves the modeling accuracy and the subsequent prediction. The quality of the prediction of the needle location could even be further improved by using a fast information feedback to update the modeled position of the tissues between consecutive volume acquisitions. This could be done using a force sensor or an electromagnetic tracker, as we have demonstrated in section 3.6.1. However, an imaging modality remains a necessity to achieve the steering of the needle toward a target.
Conclusion

In this chapter, we started by a brief comparison of the imaging modalities traditionally used to perform needle insertions. From this we chose to focus on the ultrasound (US) modality and we presented the general principles of US imaging as well as the way to reconstruct 2D images or 3D volumes that can then be exploited. We also covered the case of several artifacts that are specific to the presence of a needle in the field of view of the US probe. A review of current detection and tracking methods used to localize a needle from 2D or 3D US feedback was then provided. We proposed a first contribution in this field consisting in an iterative algorithm that exploits the artifacts observed around a needle to accurately find the position of its whole body in a 3D US volume. The performances of the algorithm were illustrated through an experimental validation and a comparison to another state-of-the-art algorithm. Then we considered the case of a change of position of the tissues due to motions of the patient. We presented the concepts of Bayesian filtering and proposed an algorithm based on an unscented Kalman filter to update the state of the interaction model that we developed in chapter 2 using the different measures available on the needle. We have shown through various experimental scenarios that the update method could be used with several kinds of information feedback on the needle, such as force feedback, electromagnetic position feedback or visual position feedback, in order to take into account the lateral motions of the tissues. We then proposed to fuse our two contributions into one global method to mutually improve both tracking performances in 3D US and insertion modeling accuracy. Good localization of the needle and accurate modeling of the insertion are two important keys to provide an image-guided robotic assistance during an insertion procedure.
Now that we have addressed these two points and have proposed a contribution for both of them, we will focus in chapter 4 on the design of a control framework for robotic needle insertion under visual guidance.

Chapter 4

Needle steering

In this chapter we address the issue of steering a flexible needle inserted in soft tissues. The goal of a needle insertion procedure is to accurately reach a targeted region embedded in the body with the tip of the needle. Achieving this goal is not always easy for clinicians due to the complex behavior exhibited by a thin flexible needle interacting with soft tissues. Robot-assisted needle insertion can then be of great help to improve the accuracy of the operation and to reduce the necessity of repeated insertions. In chapter 2 we presented different ways of modeling the insertion of a flexible needle in soft tissues. In particular we have seen that kinematic and mechanics-based models offer a reasonable computational complexity that makes them suitable for real-time processing and control of a robotic system. In the following, we first provide in section 4.1 a review of current techniques used to steer different kinds of needles using a robotic system. Then we present in section 4.2 different methods used to define the trajectory that the needle tip must follow to reach a target and avoid obstacles. In section 4.3 we propose a new contribution consisting in a generic needle steering framework for closed-loop control of a robotic manipulator holding a flexible needle. This framework is based on the task function framework and can be adapted to steer different kinds of needles. It is formulated such that different kinds of sensing modalities can be used to provide a feedback on the needle and the target. We finally describe different experimental scenarios in section 4.4 that we use to assess the performances of our steering framework.
Parts of the work presented in this chapter on the steering framework were published in two articles presented in international conferences [CKB16a] [CKB16b].

Steering strategies

In this section we present a review of current techniques used to control the trajectory of the tip of a needle inserted in soft tissues. The techniques used to reach a target in soft tissues while avoiding other sensitive regions can be gathered into three main families.

• Tip-based steering methods use a needle with an asymmetric design of the tip to create a deflection of the tip trajectory when the needle is inserted into the tissues without any other lateral motion of its base.

• Base manipulation methods on the contrary use lateral translation and rotation motions of the needle base during the insertion to modify the trajectory of the needle tip.

• Lastly, tissue manipulation is a special case in the sense that no needle steering is actually performed. Instead it uses deformations of the surrounding tissues to modify the position of the target and obstacles.

We present each steering family in further detail in the following.

Tip-based needle steering

As described in section 2.1 on kinematic modeling, it can be observed that the presence of an asymmetry of the needle tip geometry, such as a bevel, leads to a deviation of the needle trajectory from a straight path, as illustrated in Fig. 4.1a. Considering this effect as a drawback, clinicians usually rotate the needle around its axis during the insertion to cancel the effect of the normal component of the reaction force created at the needle tip. This allows the trajectory of the needle tip to follow a straight line. However, many research works have been conducted over the last two decades to use this effect as an advantage to steer the needle tip, leading to the creation of tip-based steering strategies [APM07] [START_REF] Van De Berg | Design choices in needle steering-a review[END_REF].
Tip-based needle steering consists in controlling the orientation of the lateral component of the reaction force at the tip to face a desired direction. The behavior of the needle tip can usually be accurately modeled using kinematic models [WIKC + 06]. Needles used for tip-based control typically have a small diameter and are made of super-elastic alloys, such as Nitinol, to decrease the needle rigidity and to increase the influence of the tip force on the needle trajectory. This allows getting closer to the assumption that the needle is very flexible with respect to the surrounding tissues, which is required for the validity of kinematic models (see section 2.1). The control of the insertion of such needles is often limited to the insertion of the needle along its base axis and the orientation of the needle around this axis. Different control strategies have been developed to steer the needle tip using only these two degrees of freedom (DOF). A constant ratio between the rotation and insertion velocities of the needle can be used to obtain a helical trajectory [HAC + 09]. A low ratio leads to a circular trajectory with a curvature corresponding to the natural curvature of the needle insertion. A high ratio leads to an almost straight trajectory.

Duty-cycling: The duty-cycling control strategy, first tested in [START_REF] Engh | Toward effective needle steering in brain tissue[END_REF] and later formalized in [START_REF] Minhas | Modeling of needle steering via duty-cycled spinning[END_REF], consists in alternating between only the two extreme cases of the helical trajectories: pure insertion of the needle (maximal curvature of the trajectory) and insertion with fast rotation (straight trajectory). The resulting trajectory of the needle tip can be approximated by an arc of a circle with an effective curvature K_eff that can be tuned between 0 and the maximal curvature K_nat.
It has been shown that the relation between K_eff and the duty-cycle ratio DC between the lengths of the phases could be approximated by a linear function [START_REF] Minhas | Modeling of needle steering via duty-cycled spinning[END_REF]:

DC = L_rot / (L_rot + L_ins),    (4.1)

K_eff = (1 - DC) K_nat,    (4.2)

where L_ins and L_rot are the insertion lengths corresponding respectively to the pure insertion phase and the insertion phase with fast rotation. Similarly, in the case of a constant insertion velocity, the duty-cycle DC can be computed from the duration of each phase instead of its insertion length. This method was first used only in 2D, using an integer number of full 2π rotations during the rotation phase [START_REF] Minhas | Modeling of needle steering via duty-cycled spinning[END_REF]. It was later extended to 3D by adding an additional angle of rotation before the insertion phase to orient the curve toward the desired direction [START_REF] Wood | Algorithm for three-dimensional control of needle steering via duty-cycled rotation[END_REF]. A 3D kinematic formulation was also proposed by Krupa [Kru14] and Patil et al. [START_REF] Patil | Needle steering in 3-d via rapid replanning[END_REF]. Duty-cycling control has also been extensively used in its 2D or 3D versions over the past decade, associated with various needle insertion systems, needle tracking algorithms and methods to define the trajectory of the tip (see for example [vdBPA + 11] [BAP + 11] [PBWA14] [CKN15] [MPT16]). Trajectory planning will be covered in section 4.2.

Duty-cycling control presents some drawbacks that have to be addressed. First, the natural curvature K_nat must be known to compute the duty-cycle DC. This parameter is difficult to determine in practice and may even vary with the insertion depth, such that an online estimation can be required [START_REF] Moreira | Needle steering in biological tissue using ultrasound-based online curvature estimation[END_REF].
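Equations (4.1)-(4.2) directly give the duty cycle needed to obtain a desired effective curvature; a short sketch (the numerical values below are illustrative, not from the thesis experiments):

```python
# Duty-cycle computation for a desired effective curvature, following
# Eqs. (4.1)-(4.2). Phase durations assume a constant insertion velocity.

def duty_cycle(k_eff, k_nat):
    """DC in [0, 1] giving effective curvature k_eff <= k_nat."""
    if not 0.0 <= k_eff <= k_nat:
        raise ValueError("required curvature is not reachable")
    return 1.0 - k_eff / k_nat

def phase_durations(k_eff, k_nat, cycle_time):
    """(spinning phase, pure insertion phase) durations for one cycle."""
    dc = duty_cycle(k_eff, k_nat)
    return dc * cycle_time, (1.0 - dc) * cycle_time

# e.g. half the natural curvature -> DC = 0.5, equal phase durations
t_rot, t_ins = phase_durations(k_eff=1.25, k_nat=2.5, cycle_time=0.5)
```

Inverting Eq. (4.2) this way is only as accurate as the estimate of K_nat, which is precisely the practical difficulty mentioned above.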
It may also not be possible to continuously rotate the needle around its axis. This is for example the case when using cabled sensors attached to the needle, such as electromagnetic trackers or optic fibers embedded in the needle. The duty-cycling control has to be adapted in this case to alternate the direction of the rotation around the needle shaft [START_REF] Majewicz | Design and evaluation of duty-cycling steering algorithms for robotically-driven steerable needles[END_REF]. The effect of the bevel angle on the needle insertion has been studied in artificial [START_REF] Webster | Design considerations for robotic needle steering[END_REF], ex-vivo [START_REF] Majewicz | Evaluation of robotic needle steering in ex vivo tissue[END_REF] or in-vivo [MMVV + 12] tissues. It has been observed that it has a direct effect on the amount of deflection of the needle tip from a straight path. However, the curvature of the tip trajectory is very low in biological tissues, which can limit the interest of using duty-cycling control in clinical practice. The natural curvature of the needle can be increased by using a needle with a prebent [AGL + 16] or precurved tip [VDBDJVG + 17]. However, this is not suitable for duty-cycling control since it also increases the damage done to the tissues during the rotation of the needle.

Special design: Particular mechanical designs of the needle have been proposed to control the force created at the needle tip. Swaney et al. [START_REF] Swaney | Webster. A flexure-based steerable needle: High curvature with reduced tissue damage[END_REF] designed a specific flexure-based needle tip to offer the high curvature of a prebent-tip needle during insertion while keeping the reduced tissue damage of a beveled-tip needle during rotations. Active tips were also designed to allow a modification of the lateral force intensity and orientation without using rotation of the needle around its axis. Burrows et al.
[START_REF] Burrows | Smooth online path planning for needle steering with non-linear constraints[END_REF] use a needle made of multiple segments that can slide along each other, thus modifying the shape of the tip of the needle. Shahriari et al. [SRvdB + 16] use a tendon-actuated needle tip with 2 DOF, which acts as a pre-bent tip with a variable tip angle and orientation. The main drawbacks of tip-based steering are that the tip trajectory can only be modified by inserting the needle and that the amplitude of the obtained lateral motions is relatively small in real clinical conditions. Although special designs have been proposed to offer improved steering capabilities, these needles are still unsuitable for a fast and low-cost integration into clinical practice. However, other steering methods can be used to steer traditional needles, as we will see in the following.

Needle steering using base manipulation

Base manipulation consists in controlling the needle tip trajectory using an adequate control of the 6 degrees of freedom (DOF) of the needle base. In the case of a symmetric-tip needle, changing the trajectory of the needle tip from a straight path requires bending the needle and pushing laterally on the tissues, as illustrated in Fig. 4.1b. This is the natural way clinicians use to steer a needle when holding it by its base. Pioneering work on robotic control of a needle attached by its base to a robotic manipulator was performed by DiMaio et al. [START_REF] Dimaio | Needle steering and motion planning in soft tissues[END_REF]. The flexibility of the needle and its interaction with soft tissues was modeled using 2D finite element modeling (FEM) and was used to predict the motion of the needle tip resulting from a given needle base motion. The model was used to compute the trajectory of the needle base that would result in the desired tip trajectory.
Due to the computational complexity of the FEM, only preplanning of the needle trajectory was performed and the actual insertion was carried out in open-loop control. Closed-loop needle base manipulation was performed under fluoroscopic guidance by Glozman and Shoham [START_REF] Glozman | Image-guided robotic flexible needle steering[END_REF] and later under ultrasound guidance by Neubach and Shoham [START_REF] Neubach | Ultrasound-guided robot for flexible needle steering[END_REF]. The 2D virtual springs model was used in both cases to perform a pre-planning of the needle trajectory that also minimizes the lateral efforts exerted on the tissues. Additionally, this mechanics-based model enabled real-time performance and was used in the closed-loop control scheme to ensure that the real needle tip follows the planned trajectory. Despite being among the first works on robotic needle steering, base manipulation has been the subject of little research in recent years compared to the amount of work on tip-steerable needles. This can mainly be explained by the fact that bending the needle and pushing on the tissues to control the lateral motion of the tip can potentially induce more tissue damage than only inserting the needle. The efforts required to induce significant lateral tip motion also rapidly increase as the needle is inserted deeper into the tissues. This can limit the use of base manipulation to superficial targets. However, it can also be noted that the 2 DOF used in tip-based control (translation and rotation along and around the needle axis) can also be controlled using base manipulation. Therefore it is also possible to use a base manipulation framework to perform tip-based steering of a needle with an asymmetric tip, as illustrated in Fig. 4.1c. Using only tip-based control, the needle base can only translate in one insertion direction and it is not possible to compensate for any lateral motions of the tissues that may arise from patient motion.
On the contrary, using all 6 DOF of the needle base offers the advantage of keeping additional DOF available if necessary. Therefore, due to its ability to handle both symmetric and asymmetric tips, base manipulation in the general sense is the steering method that we choose to explore in the following.

Tissue manipulation

Tissue manipulation consists in applying deformations to the internal parts of the tissues by moving one [THA + 09] or multiple points [START_REF] Mallapragada | Robotassisted real-time tumor manipulation for breast biopsy[END_REF][PVdBA11] of the surface of the tissues. This kind of control requires an accurate finite element model (FEM) of the tissues, which is difficult to obtain in practice due to parameter estimation. The computational load of FEM is also an obstacle for real-time use, limiting it to pre-planning of the insertion procedure, which further enhances the need for an accurate modeling. This technique has only been used so far to align the target with a large rigid needle and no work has been conducted to explore the modification of the trajectory of a flexible needle. In addition, it can be observed that the motion of the tissue surface has little influence on the motion of deep anatomical structures: tissue manipulation can then only be used to move superficial targets. Shallow targets are not the only kind of targets that we want to cover in our work, therefore we do not consider tissue manipulation in the following.

Needle tip trajectory

In section 4.1, we presented different methods to control the motion of the tip of a needle being inserted in soft tissues using a robotic manipulator. Once a type of needle and an associated control scheme have been chosen to control the needle tip, a strategy needs to be chosen to define the motion to apply to the needle tip. Two approaches are generally used, which are path planning and reactive control.
The path planning approach uses some predictions of the behavior of the system and tries to find the best sequence of motions that needs to be applied to fulfill the general objective. On the contrary, the reactive control approach only relies on the current state of the system and intra-operative measures to compute the next motion to apply.

Path planning

Path planning is used to define the entire trajectory that needs to be followed by the needle tip to reach the target. This approach requires a model of the needle insertion process to predict the effect of the control inputs on the tip trajectory. It is mostly used in tip-based steering, for which the unicycle model (see section 2.1) can be used because of its simplicity and computational efficiency.

Planning the natural trajectory: Duindam et al. [DXA + 10] planned the trajectory of the needle while considering a stop-and-turn strategy, thus alternating between rotation-only phases and insertion-only phases. Three insertion steps were considered, leading to a tip trajectory following three successive arcs with constant curvature. The best duration of each phase, i.e. the length of each arc, was computed such that the generated trajectory reached the target. Hauser et al. [HAC + 09] exploited the helical shape of the paths obtained when applying constant insertion and rotation velocities to the needle. The best velocities were computed by selecting the helical trajectory that allowed the final tip position to be the closest to the target. A model predictive control scheme was used, in which the best selected velocities are applied for a short amount of time and the procedure is repeated until the target has been reached.

Rapidly-exploring random tree (RRT): Among the many existing path planning algorithms, the RRT algorithm [START_REF] Lavalle | Randomized kinodynamic planning[END_REF] has been widely used in needle steering applications.
This probabilistic algorithm consists in randomly choosing multiple possible control inputs and generating the corresponding output trajectories. The best trajectory is then chosen and the corresponding control inputs are applied to the real system. The RRT can be used in many ways, depending on the underlying model chosen to relate the control inputs to the output tip trajectory. The first use of RRT for 3D flexible needle insertion planning was done by Xu et al. [START_REF] Xu | Motion planning for steerable needles in 3d environments with obstacles using rapidly-exploring random trees and backchaining[END_REF]. The kinematic model of the needle with constant curvature was used to predict the motion of the needle tip for given insertion and rotation velocities. Due to the constant curvature constraint the control inputs were limited to a stop-and-turn strategy. However, a lot of trajectories had to be generated before finding a good one: the algorithm was then slow and could only be used for pre-operative planning of the insertion. The introduction of the duty-cycling control allowed dropping the constant curvature assumption and considering the possibility of controlling the effective curvature of the tip trajectory. This simplified the planning, and online intra-operative replanning could be achieved in 2D [BAP + 11] and 3D [PA10] [START_REF] Bernardes | 3d robust online motion planning for steerable needles in dynamic workspaces using duty-cycled rotation[END_REF]. The RRT was also used with 2D finite element modeling instead of kinematic modeling to provide a more accurate offline preoperative planning that takes into account the tissue deformations due to the needle insertion [START_REF] Patil | Motion planning under uncertainty in highly deformable environments[END_REF].
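To make the idea concrete, here is a deliberately minimal 2D RRT sketch in the spirit of these kinematic planners: tree nodes are tip poses and extensions are short constant-curvature arcs, with the curvature sampled within a bounded range as duty-cycling allows. All numerical values, the goal-biased sampling and the absence of obstacles are our own simplifications, not taken from the planners cited above.

```python
import math, random

# Minimal 2D RRT sketch for a tip-steerable needle: nodes are tip poses
# (x, y, heading) and extensions are short constant-curvature arcs.
# Hypothetical illustration only; real planners add obstacle checks,
# replanning and 3D kinematics.

def arc(x, y, theta, k, L):
    """Tip pose after following an arc of curvature k over length L."""
    if abs(k) < 1e-9:
        return x + L * math.cos(theta), y + L * math.sin(theta), theta
    t2 = theta + k * L
    return (x + (math.sin(t2) - math.sin(theta)) / k,
            y - (math.cos(t2) - math.cos(theta)) / k, t2)

def rrt(target, k_max=20.0, step=0.005, iters=400, seed=0):
    rng = random.Random(seed)
    nodes = [(0.0, 0.0, 0.0)]           # root: tip at origin, heading +x
    parents = {0: None}
    for _ in range(iters):
        # sample a point (with some goal bias) and find the nearest node
        sample = target if rng.random() < 0.2 else \
            (rng.uniform(0.0, 0.1), rng.uniform(-0.05, 0.05))
        i = min(range(len(nodes)),
                key=lambda j: math.hypot(nodes[j][0] - sample[0],
                                         nodes[j][1] - sample[1]))
        # extend the nearest node with a randomly sampled curvature
        k = rng.uniform(-k_max, k_max)
        new = arc(*nodes[i], k, step)
        parents[len(nodes)] = i
        nodes.append(new)
        if math.hypot(new[0] - target[0], new[1] - target[1]) < step:
            break
    return nodes, parents

nodes, parents = rrt(target=(0.05, 0.02))  # 5 cm deep, 2 cm lateral
best = min(math.hypot(n[0] - 0.05, n[1] - 0.02) for n in nodes)
```

The path to the best node is recovered by following the `parents` links back to the root; the curvature bound k_max plays the role of the natural curvature K_nat that duty-cycling lets the planner modulate.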
Planning under uncertainties: Since planning methods always rely on the predictions given by a model of the needle, inaccuracies of the model can diminish the performance of the planning if they are not taken into account. Stochastic planning methods have been proposed to consider uncertainties on the motion of the tip. Park et al. [PKZ + 05] used a path-of-probability approach, where a stochastic version of the kinematic model is used to compute the probability density function of the final position of the needle tip. This is then used to generate a set of tip trajectories that can reach the target. Alterovitz et al. [START_REF] Alterovitz | The stochastic motion roadmap: A sampling framework for planning with markov motion uncertainty[END_REF] used a stochastic motion roadmap to model the probability of obtaining a given 2D pose of the needle tip starting from another tip pose. The optimal sequence of control inputs was then computed from the map to minimize the probability of hitting an obstacle and to maximize the probability of reaching the target. Fuzzy logic was also proposed as a way to cope with uncertainty in the control inputs [START_REF] Lee | A probability-based path planning method using fuzzy logic[END_REF]. Even when modeling the uncertainty, unpredicted tissue inhomogeneities or tissue motions can greatly modify the trajectory of the tip or the geometry of the environment. Using pre-operative planning usually requires the use of reactive control during the real procedure to ensure that the planned trajectory can be accurately followed. Planning can be used not only to plan a feasible trajectory but also to design an optimal controller that can take into account the uncertainties on the model and the intra-operative measures during the procedure. In [vdBPA + 11] a linear-quadratic Gaussian (LQG) controller was designed to robustly follow the trajectory that was pre-planned using RRT.
The controller could take into account the current state uncertainty to minimize the probability of hitting an obstacle. Sun and Alterovitz [START_REF] Sun | Motion planning under uncertainty for medical needle steering using optimization in belief space[END_REF] proposed to take into account the sensor placement directly during the design of the planning and of the LQG controller, in order to minimize the uncertainty on the tip location along the planned trajectory. This way, obstacles could be avoided without having to pass far away from them to ensure avoidance. Online re-planning: Online re-planning of the trajectory can also be used instead of considering uncertainties in a model. By regularly computing a new trajectory that takes into account the current state of the insertion, the control can directly compensate for modeling uncertainties [BAPB13] [PBWA14]. This offers the good prediction capabilities of the planning approach while maintaining a good reactivity to environment changes, which is one of the motivations behind the research on fast planning algorithms that could work in real-time. However, online re-planning is only possible when using simplified models like kinematic models. In the case of base manipulation control, the whole shape of the needle needs to be modeled, limiting the use of such models to pre-operative planning. Reactive control can then be used during the insertion to adapt to the changes in the environment. Reactive control Reactive control consists in using only a feedback on the current state of the system to compute the control inputs to apply. This kind of control usually uses inverse kinematics to compute the control inputs that produce a desired output motion. Since the approach does not rely on an accurate modeling of the system, it uses closed-loop control to compensate for modeling errors.
Reactive control with tip-based control: In the case of beveled-tip needles, sliding mode control can be used to control the bevel orientation during the insertion such that the bevel cutting edge is always directed toward the target. The advantage of this method is that it does not rely on the parameters of an interaction model with the tissues. Rucker et al. [RDG + 13] demonstrated that an arbitrary accuracy could be reached with this method by choosing an appropriate ratio between insertion and rotation velocities. Sliding mode control has proven its efficiency with many feedback modalities, such as electromagnetic (EM) trackers [RDG + 13], fiber Bragg grating (FBG) sensors [START_REF] Abayazid | 3d flexible needle steering in soft-tissue phantoms using fiber bragg grating sensors[END_REF], ultrasound (US) imaging [START_REF] Abayazid | Integrating deflection models and image feedback for realtime flexible needle steering[END_REF][FRS + 16] or computerized tomography (CT)-scan fused with EM tracking [SHvK + 17]. For this reason, we will include it in our control framework in the following section 4.3. Reactive control can also be used to intra-operatively compensate for deviations from a trajectory that has been planned pre-operatively by another planning algorithm. Sliding mode control can for example be adapted to follow keypoints along the planned trajectory instead of directly pointing toward the target [AVP + 14]. The linear-quadratic Gaussian (LQG) control framework can also be used to take into account modeling errors and measurement noise during the insertion [START_REF] Kallem | Image guidance of flexible tip-steerable needles[END_REF]. Since reactive control is expected to work in real-time, kinematic models are most often used. However, such models are only applicable to needles with asymmetric tips, whereas base manipulation must be used in the case of a symmetric-tip needle.
Reactive control with base manipulation: The first robotic needle insertion procedure using base manipulation [START_REF] Dimaio | Needle steering and motion planning in soft tissues[END_REF] proposed to use vector fields to define the trajectory that needed to be followed by the needle tip. The needle and tissues were modeled using 2D finite element modeling (FEM) and the vector field was attached to the tissue model, such that tissue deformations also induced a modification of the vector field. An attractive vector field was placed around the target and repulsive ones were placed around obstacles, defining at each point of the space the desired instantaneous velocity that the needle tip should follow. Inverse kinematics was computed from the current state of the model to find the local base motion that generates the desired tip motion. This was only performed in simulation and then applied in open-loop to a real needle due to the computational complexity of the FEM. Mechanics-based models were also used with closed-loop feedback using fluoroscopic [START_REF] Glozman | Image-guided robotic flexible needle steering[END_REF] or US [START_REF] Neubach | Ultrasound-guided robot for flexible needle steering[END_REF] imaging, allowing the intra-operative steering of a flexible needle toward a target. Reactive control using visual feedback: Visual servoing is a kind of reactive control based on visual feedback. The method computes the control inputs required to obtain desired variations of some visual features defined directly in the acquired images. In [START_REF] Krupa | A new duty-cycling approach for 3d needle steering allowing the use of the classical visual servoing framework for targeting tasks[END_REF] and [START_REF] Chatelain | 3d ultrasoundguided robotic steering of a flexible needle via visual servoing[END_REF] it was used to control the needle trajectory using 3D US imaging and the duty-cycling method.
This approach offers a great accuracy and robustness to modeling errors due to the fact that the control is directly defined in the image. It is also quite flexible since many control behaviors can be obtained depending on the design of the visual features that are chosen. For these reasons we choose visual servoing as a basis for our needle steering framework and we will describe its principles in more detail in the following section 4.3.1. Needle steering framework This section presents our contribution to the field of needle steering in soft tissues. We propose a generic control framework that can be adapted to control the different degrees of freedom of a robotic system holding any kind of needle shaped tool with symmetric or asymmetric tip geometry. The proposed approach is based on visual servoing [START_REF] Espiau | A new approach to visual servoing in robotics[END_REF], which consists in controlling the system to obtain some desired variations of several features defined directly in an image, such as for example the alignment of the needle with a target in an ultrasound image. In order to offer a framework that can be adapted to many kinds of information feedback and that is not limited to visual feedback, we propose a formulation that uses the task function framework [START_REF] Samson | Robot Control: The Task Function Approach[END_REF], which is the core principle used in visual servoing. This way a single control law can be used to integrate the information on the needle and the target provided by several kinds of modalities, such as electromagnetic tracking, force feedback, medical imaging or fiber Bragg grating shape sensors. In the following we first present the fundamentals of the task function framework in section 4.3.1 and the stability aspects of the control in section 4.3.2.
We then describe in section 4.3.3 how we apply this framework to the case of needle steering by using the mechanics-based needle models that we proposed in section 2.4. Finally we present in section 4.3.4 the design of several task functions that can be used in the framework to steer the needle tip toward a target while maintaining a low amount of deformations of the tissues. Experimental validation of the framework in the case of visual feedback will be described in section 4.4. Task function framework A classical method used to control robotic systems is the task function framework [START_REF] Samson | Robot Control: The Task Function Approach[END_REF], that we describe in the following. General formulation: We consider a generic control vector v ∈ R m containing the m different input velocities that are available to control the system. This vector can typically contain the velocity of each joint of a robotic arm or the six components of the velocity screw vector of an end-effector. We denote r ∈ R m the position vector associated to v, i.e. the position of the joints or the pose of the end-effector. In the task function framework, a task vector e ∈ R n is defined and contains n scalar functions that we want to control. In image-based visual servoing these tasks usually correspond to some geometrical features extracted from the images. At each instant the variations of the tasks can be expressed as ė(t, v) = de/dt = ∂e/∂t + (∂e/∂r) v. (4.3) The term ∂e/∂t represents the variations over time of the tasks that are not due to the control inputs. The tasks are linked to the control inputs by the Jacobian matrix J ∈ R n×m defined as J = ∂e/∂r. (4.4) Let us define ėd the desired value for the variation of the task functions. In all the following developments, the subscript d will be used to describe the desired value of a certain quantity.
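As a concrete illustration of this formulation, the short sketch below inverts the relation (4.3) with a Moore-Penrose pseudo-inverse to find the control inputs that realize desired task variations, assuming the static case where the drift term ∂e/∂t vanishes. The Jacobian and desired variations are arbitrary illustrative numbers, not values from a needle model.

```python
import numpy as np

# Illustration of the task-function control: inverting e_dot = J v.
J = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, -0.5]])     # n = 2 tasks, m = 3 control inputs
e_dot_d = np.array([0.2, -0.1])      # desired task variations

v = np.linalg.pinv(J) @ e_dot_d      # minimum-norm input fulfilling the tasks

# With independent tasks and m > n the tasks are achieved exactly:
assert np.allclose(J @ v, e_dot_d)
```

Since the system is redundant here (m > n), the pseudo-inverse selects, among all inputs that fulfill the tasks, the one with the smallest Euclidean norm.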
The best control vector that allows fulfilling the tasks can be computed as v = J + ( ėd − ∂e/∂t ), (4.5) where + stands for the Moore-Penrose pseudo-inverse operator [START_REF] Penrose | A generalized inverse for matrices[END_REF]. The variation ∂e/∂t is usually not directly available and an estimation of ∂e/∂t is necessary to compute v. For simplicity, in the following we consider the case where ∂e/∂t = 0, which is usually associated to a static case where only the control inputs v have an action on the environment. This leads to the control law v = J + ėd , (4.6) which is the main control law that we will use in the experiments. Tasks and inputs priorities: When n < m, there are more degrees of freedom (DOF) than the number of tasks to fulfill. If the tasks are independent, i.e. the rank of J is equal to the number n of tasks, then there are infinitely many solutions that exactly fulfill all the tasks. In this case the Moore-Penrose pseudo-inverse gives the solution with the lowest Euclidean norm. If the components of the input vector are not homogeneous, for example containing both translational and rotational velocities, the Euclidean norm may actually have no physical meaning. A diagonal weighting matrix M ∈ R m×m can be used in this case to give specific weights to the different components: v = M -1 (J M ) + ėd . (4.7) Different methods of Jacobian normalization used to tune the weights of the matrix have been summarized by Khan et al. [START_REF] Khan | Jacobian matrix normalization -a comparison of different approaches in the context of multi-objective optimization of 6-dof haptic devices[END_REF]. When n > m, there are not enough DOF to control the different tasks independently. The same happens if the rank of J is lower than n, meaning that some of the tasks are not independent. A diagonal weighting matrix L ∈ R n×n can then be used in these cases to give specific weights to the different tasks depending on their priority.
Hence, v = (LJ ) + L -1 ėd . (4.8) Both weighting matrices can also be used to deal with dependent tasks in an underdetermined system, leading to the weighted pseudo-inverse [START_REF] Eldén | A weighted pseudoinverse, generalized singular values, and constrained least squares problems[END_REF] expressed as v = M -1 (LJ M ) + L -1 ėd . (4.9) Note however that the weighted pseudo-inverse only achieves a trade-off between tasks, meaning that even high-priority tasks may not be exactly fulfilled. Hierarchical stack of tasks: Absolute priority can be given to some tasks using a hierarchical stack of tasks [START_REF] Siciliano | A general framework for managing multiple tasks in highly redundant robotic systems[END_REF]. In that case, each set of tasks with a given priority is added successively to the control output such that they do not disturb the previous tasks with higher priority. This is done by allowing the contribution of low-priority tasks to lie only in the null space of the higher-priority tasks. The control output that is obtained after adding the contributions of the tasks from priority level 1 to i (1 being the highest priority) is given by v i = v i-1 + P i-1 (J i P i-1 ) + ( ėi,d − J i v i-1 ) , (4.10) where J i is the Jacobian matrix corresponding to the task vector e i containing the tasks with priority level i, ėi,d still denotes the desired value of ėi and P i is the projector onto the null space of all tasks with priority levels from 1 to i. Denoting J 1:i the vertical stacking of the Jacobian matrices J 1 to J i , the projectors P i can be computed according to P i = I m − J 1:i + J 1:i , (4.11) where I m is the m by m identity matrix. Alternatively, these projectors can also be computed iteratively using [BB04] P 0 = I m , P i = P i-1 − (J i P i-1 ) + (J i P i-1 ). (4.12) For example, using only 2 priority levels, the control law thus becomes v = J 1 + ė1,d + P 1 (J 2 P 1 ) + ( ė2,d − J 2 J 1 + ė1,d ), (4.13) with P 1 = I m − J 1 + J 1 .
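The hierarchical scheme of (4.10) with the iterative projector update (4.12) can be sketched in a few lines of Python. The task Jacobians and desired variations below are illustrative assumptions; in this example the two tasks happen to be compatible, so both are exactly fulfilled.

```python
import numpy as np

# Sketch of the hierarchical stack of tasks (Eqs. 4.10 and 4.12).
def stack_of_tasks(jacobians, desired):
    m = jacobians[0].shape[1]
    v = np.zeros(m)
    P = np.eye(m)                                # P_0 = I_m
    for J_i, e_dot_d in zip(jacobians, desired):
        JP = J_i @ P
        v = v + P @ np.linalg.pinv(JP) @ (e_dot_d - J_i @ v)   # Eq. 4.10
        P = P - np.linalg.pinv(JP) @ JP                        # Eq. 4.12
    return v

J1 = np.array([[1.0, 0.0, 0.0]])   # high-priority task
J2 = np.array([[1.0, 1.0, 1.0]])   # low-priority task
v = stack_of_tasks([J1, J2], [np.array([0.3]), np.array([0.6])])
```

The loop guarantees that each lower-priority contribution lies in the null space of the higher-priority tasks, so the first task is never disturbed by the second.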
An illustration of the hierarchical stack of tasks using this formulation can be seen in Fig. 4.2a. (Caption of Fig. 4.2: Each E i is the set of control inputs for which the task i is fulfilled. Each S i is the input vector obtained using a single task i in the classical formulation (4.6). C is the input vector obtained using both tasks in the classical formulation (4.6); the same input vector C is obtained using the hierarchical formulation (4.13). Each R i is the input vector obtained using the singularity robust formulation (4.16) when the task i is given the highest priority. The contributions due to tasks 1 and 2 are shown with blue and red arrows, respectively, when task 1 is given the highest priority, and with green and yellow arrows, respectively, when task 2 is given the highest priority.) Singularities: One issue when using task functions is the presence of singularities. Natural singularities may first arise when one of the tasks becomes singular, meaning that the rank of the Jacobian matrix is lower than the number n of tasks. Algorithmic singularities can also arise when tasks with different priorities become dependent, i.e. when J i P i-1 becomes singular even if J i is not. While the pseudo-inverse is stable exactly at the singularity, it leads to numerical instability around the singularity. This numerical instability is easily illustrated using the singular value decomposition of the matrix: J = Σ i=1..min(n,m) σ i u i v i T , (4.14) where the u i form an orthonormal set of vectors of R n , the v i form an orthonormal set of vectors of R m and the σ i are the singular values of J . The pseudo-inverse of J is then computed as J + = Σ i=1..min(n,m) τ i v i u i T , with τ i = σ i -1 if σ i ≠ 0 and τ i = 0 if σ i = 0. (4.15) The matrix J is singular when at least one of the σ i is equal to zero. In this case the pseudo-inverse can still be computed since it sets the value of τ i to zero instead of inverting the singular value σ i .
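The truncated inversion of (4.15) can be reproduced directly from the SVD; the sketch below uses an illustrative rank-deficient matrix and matches the library pseudo-inverse.

```python
import numpy as np

# Sketch of the pseudo-inverse built from the SVD (Eqs. 4.14-4.15): nonzero
# singular values are inverted, zero ones are left at zero.
def pinv_from_svd(J, tol=1e-12):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    tau = np.array([1.0 / x if x > tol else 0.0 for x in s])   # Eq. 4.15
    return Vt.T @ np.diag(tau) @ U.T

J_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])      # rank 1: second singular value is 0
Jp = pinv_from_svd(J_singular)           # matches numpy.linalg.pinv here
```

In practice a small tolerance is needed, as done above, because floating-point singular values are almost never exactly zero.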
However in practice the matrix is almost never exactly at the singularity because of numerical inaccuracies. Around the singularity, the matrix is ill-conditioned and one of the σ i -1 becomes very large, leading to very large velocity outputs, which are not desirable in practice. Algorithmic singularities can be avoided by using the singularity robust formulation for the control law [START_REF] Chiaverini | Singularity-robust task-priority redundancy resolution for real-time kinematic control of robot manipulators[END_REF]: v i = v i-1 + P i-1 J i + ėi,d . (4.16) While this method entirely removes algorithmic singularities, it leads to distortions of the low-priority tasks, even when they are almost independent of the higher-priority ones. An illustration of the hierarchical stack of tasks using this formulation can be seen in Fig. 4.2b. In order to reduce the effect of singularities on the control outputs, the damped least squares pseudo-inverse [START_REF] Deo | Overview of damped leastsquares methods for inverse kinematics of robot manipulators[END_REF] has been proposed, using a different formulation of (4.15). Then τ i = σ i / (σ i 2 + λ 2 ), (4.17) where λ is a damping factor. This method requires the tuning of λ, and many methods have been proposed to limit the task distortions far from the singularity while providing stability near the singularity. Stability The task function framework is typically used to perform visual servoing, in which a visual sensor is used to provide some visual information on the system. The control of the system is performed by regulating the value of some visual features s ∈ R n , directly defined in the visual space, toward desired values s * ∈ R n .
A typical approach is to design the task functions to regulate the visual features s toward the desired values with an exponential decay, such that e = s − s * , (4.18) ėd = −λ s e, (4.19) where λ s is a positive control gain that tunes the exponential decrease rate of the task vector e. In this particular case the control law (4.6) becomes v = −λ s J + e. (4.20) In practice the real Jacobian matrix J cannot be known perfectly because it depends on the real state of the system. An approximation Ĵ needs to be provided to the controller, such that the real control law becomes v = −λ s Ĵ + e. (4.21) Using this control law, it can be shown that the system remains locally asymptotically stable as long as the matrix J Ĵ + verifies [CH06] J Ĵ + > 0. (4.22) Note that this stability condition is also difficult to check since the real J is not known. However this condition is usually verified in practice if the approximation Ĵ provided to the controller is not too coarse. In the following we describe how we adapt the task function framework to the problem of needle steering and we present the method that we use to compute the estimation Ĵ of the Jacobian matrix corresponding to a given task vector.
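The robustness property behind (4.22) can be illustrated in simulation: even with a deliberately coarse estimate Ĵ, the closed loop (4.21) drives the task error to zero because J Ĵ⁺ remains positive definite in this example. All numerical values below are illustrative assumptions.

```python
import numpy as np

# Closed-loop simulation of the visual servoing law with a coarse Jacobian.
lam = 1.0                                   # control gain lambda_s
dt = 0.01                                   # integration step
J_true = np.array([[1.0, 0.2],
                   [0.0, 1.0]])             # "real" (unknown) Jacobian
J_hat = np.eye(2)                           # coarse approximation provided
                                            # to the controller

e = np.array([1.0, -0.5])                   # initial task error
for _ in range(1000):
    v = -lam * np.linalg.pinv(J_hat) @ e    # control law (4.21)
    e = e + dt * J_true @ v                 # task dynamics (4.3), de/dt = J v

final_error = np.linalg.norm(e)             # small despite J_hat != J_true
```

If J_true were modified so that J_true @ pinv(J_hat) had a negative eigenvalue, the same loop would diverge, which is exactly what condition (4.22) rules out.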
These features can for example consist in the position of a point along the needle shaft or the orientation of a beveled tip. The exact design of the different tasks to perform a successful needle insertion will be covered in detail in the following section 4.3.4. In order to use the task function framework, an estimation of the Jacobian matrices associated to each task should regularly be provided during the insertion process. We propose to compute online numerical approximations of these matrices using the mechanics-based models that we defined in section 2.4. We assume that the features s can be computed from the model and we use a finite difference approach to compute the numerical approximations during the insertion. Let r ∈ SE(3) represent the current pose of the base of the needle model. We note J s ∈ R n×6 the Jacobian matrix associated to the feature vector s such that J s = ∂s/∂r. (4.23) Since the state of the model is computed from r and we assume that s can be computed from the model, s directly depends on r. The finite difference approach consists in computing the value of s taken for several poses r i spread along each direction around the current pose r. Due to the non vector space nature of SE(3), we use the exponential map Exp r to compute each r i according to r i = Exp r (δt v i ), (4.24) where δt is a small time step and v i ∈ R 6 is a unit velocity screw vector corresponding to one DOF of the needle base. v i then represents a translation along one axis of the base frame for i = 1, 2, 3 and a rotation around one of its axes for i = 4, 5, 6. Since each v i corresponds to only one DOF of the base, each column J s,j of the Jacobian matrix (j = 1, . . . , 6) can then be approximated using the forward difference approximation J s,j = ( s(r j ) − s(r) ) / δt .
(4.26) For more accuracy, we use instead the second-order central difference approximation, although it doubles the number of poses r i for which s needs to be evaluated: J s,j = ( s(r j ) − s(r -j ) ) / (2δt), (4.27) with r -j = Exp r (−δt v j ). (4.28) Note that s can also lie on a manifold instead of a vector space, for example when evaluating the Jacobian corresponding to the pose of a point along the needle shaft. In this case the logarithm map Log s(r) to the tangent space should be used, leading to J s,j = ( Log s(r) (s(r j )) − Log s(r) (s(r -j )) ) / (2δt). (4.29) Note that δt should be chosen as small as possible to obtain a good approximation of the Jacobian, but not too small to avoid numerical precision issues. Now that we have defined a method to compute numerical approximations of the Jacobian matrices from our numerical needle model, we focus in the following section on the design of different task functions to control the needle insertion, i.e. the definition of s. Task design for needle steering An important issue to control the behavior of the needle manipulator using the task function framework is the design of the different task functions stacked in the task vector e. In this section we consider the specific case of the needle insertion procedure where a needle is held by its base. The general objectives that we want to fulfill are first the control of the needle tip trajectory to reach a target and then the control of the deformations of the needle and the tissues to avoid safety issues. The main point of the task function formulation is then to hide the complexity of the control of the base motions and to translate it into the control of some easily understandable features. Each elementary task requires three components: the definition of the task function (see for example (4.18)), the computation of the Jacobian matrix associated to the task and the desired variation of the task function (see for example (4.19)).
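The Jacobian component of each task can be obtained numerically with the central-difference scheme (4.27). For readability, the sketch below treats the pose r as a plain vector and uses an arbitrary nonlinear stand-in for the model feature s(r); the actual method perturbs the pose on SE(3) through the exponential map Exp r.

```python
import numpy as np

# Sketch of the central-difference Jacobian estimation (Eq. 4.27).
def model_feature(r):
    return np.array([np.sin(r[0]) + r[1],   # hypothetical feature 1
                     r[0] * r[2]])          # hypothetical feature 2

def central_difference_jacobian(s, r, dt=1e-4):
    n, m = len(s(r)), len(r)
    J = np.zeros((n, m))
    for j in range(m):
        v_j = np.zeros(m)
        v_j[j] = 1.0                        # unit velocity along DOF j
        J[:, j] = (s(r + dt * v_j) - s(r - dt * v_j)) / (2 * dt)   # Eq. 4.27
    return J

r0 = np.array([0.1, 0.2, 0.3])
J_num = central_difference_jacobian(model_feature, r0)
```

The approximation error of the central scheme decreases quadratically with δt, which is why it is preferred over the forward difference despite the doubled number of model evaluations.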
The difficulty is that many different task functions can be designed to fulfill the same general objective. It is also preferable that the dimension of the task vector remains as small as possible. This way we can avoid under-actuation of the system, where the tasks are not exactly fulfilled, and also decrease the probability of incompatibility between different tasks. In the following we first cover the design of the tasks that can be used for the steering of the needle tip toward a target and we then focus on the design of the tasks that can be used to avoid tearing the tissues or breaking the needle. Targeting tasks design The first and main objective to fulfill in an insertion procedure is that the needle tip reaches a target in the tissues. In this section we propose different task vectors that can be used to control the tip trajectory in order to reach the target and we present their different advantages and drawbacks. We first start with general task vectors that give full control over the tip trajectory and then we successively present simpler ones that control only individual aspects of the tip trajectory. The first task vectors can be used with any kind of needle tip, while the last task vector is more specific to beveled tips. We recall here that the subscript d is used to define the desired value of a quantity. Tip velocity screw control: A first idea is to directly control the motion of the tip via its velocity screw vector v t ∈ R 6 . We denote r t ∈ SE(3) the pose of the needle tip and J tip ∈ R 6×6 the associated Jacobian matrix relative to the pose of the needle base r. The Jacobian matrix J tip is then defined such that J tip = ∂r t /∂r , (4.30) v t = J tip v b , (4.31) where the screw vectors v t and v b are defined in their respective frames {F t } and {F b } as illustrated in Fig. 4.4.
Note that J tip is computed using the manifold finite difference method defined by (4.29) in section 4.3.3. This Jacobian matrix will be used as a basis for the other tasks in the following since it entirely describes the relation between the motion of the needle base and the motion of the needle tip. This relation can then directly be inverted according to (4.6) to allow the control of the desired tip motion v t,d , such that v b = J tip + v t,d . (4.32) One advantage of this control is that it translates the control problem from the needle base to the needle tip. It can thus allow an external human operator to directly control the desired motion of the tip v t,d without having to consider the complex interaction of the flexible needle with the tissues. However one drawback is that it constrains the six control inputs, meaning that no additional task can be added. Subsequently, in the case of the design of an autonomous control of the tip trajectory, the desired tip motion v t,d should be designed taking into account its effect on the whole behavior of the needle to avoid unfeasible motions that would damage the tissues. This can thus be as difficult to design as directly controlling the motions of the needle base. However the tip screw vector can be written as v t = (v t , ω t ), with v t ∈ R 3 the translational velocity vector and ω t ∈ R 3 the rotational velocity vector. In the following we propose different ways to separate the components of v t to obtain a task vector of lower dimension that is still adequate for the general targeting task and allows the addition of other task functions. Tip velocity control: A first solution that is better than the one proposed in the previous paragraph is to limit the task vector to the control of the translational velocities v t of the tip.
The corresponding Jacobian matrix is then J vt = [ I 3 0 3 ] J tip , (4.33) where I 3 and 0 3 are the 3 by 3 identity and null matrices, respectively, and J tip was defined by (4.30). This relation can also be directly inverted according to (4.6) to allow the control of the desired tip translations v t,d , such that v b = ( [ I 3 0 3 ] J tip ) + v t,d . (4.34) The main advantage of this control is that it allows a direct control of the tip trajectory and keeps some free degrees of freedom (DOF) to add additional task functions. This can also easily be adapted to follow a trajectory defined by a planning algorithm or to give the control of v t,d to an external human operator. In the case of an autonomous controller, a way to reach the target is to fix the desired variations of the task vector such that the tip moves toward the target with a fixed velocity v tip . Noting p t = (x t , y t , z t ) T the position of the target in the needle tip frame {F t } (see Fig. 4.4), we have v t,d = v tip p t /∥p t ∥. (4.35) Note that this is the main targeting task design that we use in the different experiments to test our framework. In practice p t can be computed from the tracking of the needle tip and the target using any modality that allows this tracking, such as for example an imaging modality or an electromagnetic tracker. One drawback of this task vector when it is used alone is that it does not explicitly ensure that the needle actually aligns with the target. It is thus possible that the tip translates in the direction of the target while the tip axis goes further away from the target, resulting in a motion of the needle shaft that cuts laterally into the tissues. However, since this task vector does not constrain all the DOF of the base, an additional task function can be added to explicitly solve this issue, for example a safety task function that limits the cutting of the tissues (as will be designed later in section 4.3.4.2).
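A short sketch of this targeting task (4.33)-(4.35): the desired tip translation points toward the target at constant speed and is mapped to base velocities through the tip Jacobian. The matrix J_tip below is an arbitrary invertible placeholder, not the output of a real needle model.

```python
import numpy as np

# Sketch of the tip-velocity targeting task (Eqs. 4.33-4.35).
v_tip = 0.005                                  # insertion speed (m/s), assumed
p_t = np.array([0.01, -0.02, 0.05])            # target position in tip frame

v_t_d = v_tip * p_t / np.linalg.norm(p_t)      # Eq. 4.35

J_tip = np.eye(6) + 0.1 * np.random.default_rng(0).standard_normal((6, 6))
S = np.hstack([np.eye(3), np.zeros((3, 3))])   # selects the tip translations
v_b = np.linalg.pinv(S @ J_tip) @ v_t_d        # Eq. 4.34
```

Since S J_tip has 3 rows and 6 columns, three DOF of the base remain free after this task, which is what allows additional safety tasks to be stacked on top of it.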
Alternatively, another targeting task vector can also be designed to directly ensure that the needle aligns with the target. This is what we propose to explore in the following. Minimal set of targeting task functions: We propose to solve the two issues caused by the targeting task vectors that were designed in the previous paragraphs. The issue with the first task vector controlling the whole tip motion is that it constrains all the DOF of the needle base, such that no other task functions can be added. The second task vector controlling only the tip translations solves this issue but does not ensure that the needle aligns with the target. Therefore we propose to decompose the general targeting task into the two fundamental actions that allow reaching the target: inserting the needle and orienting the needle tip axis toward the target. Each action can be achieved using only one scalar task function. In the following we first present a task function to control the insertion of the needle and then we present two possible task functions to control the orientation of the needle tip axis. Needle insertion: The insertion of the needle can easily be controlled using only the velocity v t,z of the needle tip along its axis (z axis of the tip frame {F t } depicted in Fig. 4.4). The associated Jacobian matrix J vt,z ∈ R 1×6 can then be expressed as J vt,z = [ 0 0 1 0 0 0 ] J tip , (4.36) where J tip was defined by (4.30). The desired insertion velocity v t,z,d can then be set to a positive constant v tip (that was defined by (4.35)) during the insertion and set to zero once the target has been reached, such that v t,z,d = v tip if z t > 0 and v t,z,d = 0 if z t ≤ 0, (4.37) where we recall that z t is the distance from the tip to the target along the needle tip axis.
Needle tip axis orientation: Orienting the needle tip axis toward the target can first be achieved by minimizing the angle θ between the needle tip axis and the axis defined by the tip and the target, as illustrated in Fig. 4.4. This angle can be expressed as

θ = atan2(sqrt(x_t² + y_t²), z_t),  (4.38)

where we recall that x_t, y_t and z_t are the components of the position p_t of the target in the tip frame {F_t} depicted in Fig. 4.4. The Jacobian matrix J_θ ∈ R^(1×6) corresponding to this angle can then be derived as follows:

J_θ = ∂θ/∂r = (∂θ/∂p_t)(∂p_t/∂r),  (4.39)

with

∂θ/∂p_t = [ x_t cos²(θ)/(z_t sqrt(x_t²+y_t²)) , y_t cos²(θ)/(z_t sqrt(x_t²+y_t²)) , -sqrt(x_t²+y_t²)/(x_t²+y_t²+z_t²) ]  (4.40)

and

∂p_t/∂r = [ -1 0 0 0 -z_t y_t ; 0 -1 0 z_t 0 -x_t ; 0 0 -1 -y_t x_t 0 ] J_tip.  (4.41)

Finally we obtain

J_θ = [ -x_t cos²(θ)/(z_t sqrt(x_t²+y_t²)) , -y_t cos²(θ)/(z_t sqrt(x_t²+y_t²)) , sqrt(x_t²+y_t²)/(x_t²+y_t²+z_t²) , y_t/sqrt(x_t²+y_t²) , -x_t/sqrt(x_t²+y_t²) , 0 ] J_tip,  (4.42)

where J_tip was defined by (4.30). Aligning the needle axis with the target can then be achieved by regulating the value of θ toward zero, such that

θ̇_d = -λ_θ θ,  (4.43)

where λ_θ is a positive control gain that tunes the exponential decrease rate of θ.

Alternatively the distance d between the needle tip axis and the target can also be used as a feature to minimize in order to reach the target (see Fig. 4.4). This distance can be expressed as

d = sqrt(x_t² + y_t²).  (4.44)

The corresponding Jacobian matrix J_d ∈ R^(1×6) can be derived as follows:

J_d = ∂d/∂r = (∂d/∂p_t)(∂p_t/∂r),  (4.45)

with

∂d/∂p_t = [ x_t/d , y_t/d , 0 ],  (4.46)

which finally gives

J_d = [ -x_t/d , -y_t/d , 0 , y_t z_t/d , -x_t z_t/d , 0 ] J_tip.  (4.47)

The distance can then be regulated toward zero using

ḋ_d = -λ_d d,  (4.48)

where λ_d is a positive control gain that tunes the exponential decrease rate of d.

The different task functions can then be stacked together and used in (4.6), which leads to the following two possible control laws

v_b = [J_vt,z ; J_θ]^+ [v_t,z,d ; θ̇_d],  (4.49)

or

v_b = [J_vt,z ; J_d]^+ [v_t,z,d ; ḋ_d].  (4.50)

Both control laws allow the automatic steering of the needle tip toward the target, while leaving several free DOF of the needle base to perform other tasks at the same time. Note that these control laws give the same priority to both scalar tasks: different priorities could also be given by using a hierarchical formulation as presented in section 4.3.1.

Giving the control of v_t,z along with θ̇_d or ḋ_d to an external human operator would be less intuitive than the direct control of the tip translations defined by (4.34). The exact trajectory of the tip would be harder to handle in this case due to the non-intuitive effect of θ̇_d or ḋ_d on the tip trajectory. However, it could be possible to give only the control of the insertion speed v_t,z to the operator and let the system handle the alignment with the target. Additionally, in the case of an autonomous controller using (4.37) along with (4.43) or (4.48), an adequate tuning of the insertion velocity v_tip and the gain λ_θ or λ_d is required. If the gain is too low with respect to the insertion velocity, the needle tip does not have enough time to align with the target before it reaches the depth of the target. The gain should thus be chosen large enough to avoid mistargeting if the target is initially misaligned.

Tip-based control task functions: All the previously defined task vectors control in some way one of the lateral translations or rotations of the needle tip. They can thus be used with symmetric or asymmetric tip geometries. However, the advantage of a needle with an asymmetric tip is that the tip trajectory can also be controlled during a pure insertion using only the orientation of the asymmetry, without direct control of the lateral translations. In the case of a beveled tip, the lateral force created at the tip during the insertion is directly linked to the bevel orientation. Orientation of the bevel toward the target can then be achieved by regulating the angle σ around the needle axis between the target and the orientation of the bevel cutting edge (y axis), as depicted in Fig. 4.4. This angle can be expressed according to

σ = atan2(y_t, x_t) - π/2.
(4.51)

The corresponding Jacobian matrix J_σ ∈ R^(1×6) can be derived as follows:

J_σ = ∂σ/∂r = (∂σ/∂p_t)(∂p_t/∂r),  (4.52)

with

∂σ/∂p_t = [ -y_t/d² , x_t/d² , 0 ],  (4.53)

which finally gives

J_σ = [ y_t/d² , -x_t/d² , 0 , x_t z_t/d² , y_t z_t/d² , -1 ] J_tip,  (4.54)

where d was defined by (4.44). Regulation of σ toward zero can also be achieved using

σ̇_d = -λ_σ σ,  (4.55)

where λ_σ is a positive control gain that tunes the exponential decrease rate of σ. A smooth sliding mode control will however be preferred, as was done in [RDG + 13], to rotate the bevel as fast as possible while it is not aligned with the target. This is equivalent to defining a maximum rotation velocity ω_z,max in (4.55) and using a relatively high value for λ_σ, such that

σ̇_d = { -ω_z,max sign(σ) if |σ| ≥ ω_z,max/λ_σ ; -λ_σ σ if |σ| < ω_z,max/λ_σ }.  (4.56)

Tip-based control can thus be performed by stacking this task function with the insertion velocity task function defined by (4.36) and (4.37) and using them in (4.6), which leads to the following control law

v_b = [J_vt,z ; J_σ]^+ [v_t,z,d ; σ̇_d].  (4.57)

This control law allows the automatic steering of the needle tip toward the target by using the asymmetry of the needle tip and also leaves several free DOF of the needle base to perform other tasks at the same time. The direct control of both v_t,z and σ̇_d can be given to an external human operator to perform the insertion. Alternatively, it could also be possible to give only the control of the insertion speed v_t,z to the operator and let the system automatically orient the bevel toward the target. In the case of an autonomous controller using (4.37) and (4.56), an adequate tuning of the insertion velocity v_tip with respect to the rotation velocity ω_z,max is necessary to ensure that the bevel can be oriented fast enough toward the target before the needle tip reaches the depth of the target. This can usually be achieved by setting a high value of ω_z,max [RDG + 13].

Conclusion: We have presented several task vectors that could be used in a control law to achieve the steering of the needle tip toward a target.
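To recap the angular features used by these task vectors, the angles θ, d and σ of (4.38), (4.44) and (4.51), together with the sliding-mode rate of (4.56), can be sketched as follows (illustrative Python, not from the thesis; the function names are ours):

```python
import numpy as np

def tip_angles(p_t):
    """Angular features of the orientation tasks, from the target position p_t
    expressed in the tip frame."""
    x, y, z = p_t
    theta = np.arctan2(np.hypot(x, y), z)   # tip axis / target angle (4.38)
    d = np.hypot(x, y)                      # lateral distance target / tip axis (4.44)
    sigma = np.arctan2(y, x) - np.pi / 2.0  # bevel edge / target angle (4.51)
    return theta, d, sigma

def sliding_mode_rate(sigma, w_z_max, lam_sigma):
    """Smooth sliding-mode desired rate for sigma (4.56)."""
    if abs(sigma) >= w_z_max / lam_sigma:
        return -w_z_max * np.sign(sigma)    # saturate at the maximum rotation speed
    return -lam_sigma * sigma               # exponential decrease near alignment
```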
Each task vector uses a different strategy, such as the control of the tip velocity, the alignment of the tip axis with the target or the orientation of the asymmetry of the needle tip toward the target. Most of the task vectors do not constrain all of the available DOF of the needle base, such that they can be used in combination with one another or with other task functions to achieve several objectives at the same time. In particular, the orientation of the bevel of a beveled-tip can be used alongside the control of the lateral translations or rotations of the tip in order to increase the targeting performances of the controller. This will be explored in the experiments presented in section 4.4. The deformations of the needle and the tissues should also be controlled during the insertion, especially when using the control of the lateral translations of the tip, which can only be achieved by bending the needle and pushing on the tissues. Therefore, in the next section we focus on the design of additional task functions in order to ensure the safety of the insertion procedure. Safety tasks design The tasks defined in the previous section are used to control the trajectory of the needle tip. However they do not take into account other criteria that may be relevant to ensure a safe insertion of the needle. Two main points need to be taken into account for the safety of the insertion procedure. First the lateral efforts exerted on the tissues should be minimized to reduce the risks of tearing, which would go against the general concept of minimally invasive procedure. The second point is to avoid breaking the needle, for obvious safety reasons. Both points can be viewed as a same objective since breaking the needle will only occur if large efforts are applied on the tissues. 
In order to address these two points, we propose in the following three task functions that can be used in combination with the targeting task vectors of the previous section using one of the control schemes defined in section 4.3.1 by (4.6), (4.10) or (4.16). The first task is designed toward the control of the deformations of the tissues, the second one toward the control of the deformations of the needle and the third one to achieve a trade-off between the two. An experimental comparison of the performances obtained with each task function will then be provided later in section 4.4.1.2.

Surface stretch reduction task: It can be noted that tissue tearing has the greatest probability to appear near the surface of the tissues. Indeed, this occurs when the skin has already been fragilized by the initial cut of the needle and when less surrounding tissue is present to maintain cohesion. A first solution to avoid tearing the surface of the tissues is to ensure that the body of the needle remains close to the initial position of the insertion point. This can be achieved by reducing the relative lateral position δ ∈ R² on the tissue surface between the current position of the needle c_N(L_free) and the initial position of the insertion point c_T(0), as illustrated in Fig. 4.5 (note that we choose to take the notations c_N, c_T and L_free that were introduced in the definition of the two-body model in section 2.4.2). This task and the associated Jacobian matrix can then be expressed according to

δ = P_s (c_N(L_free) - c_T(0)),  (4.58)

J_δ = P_s ∂c_N(L_free)/∂r = P_s J_Lfree,  (4.59)

where P_s ∈ R^(2×3) is an orthogonal projector onto the tissue surface and J_Lfree ∈ R^(3×6) is the Jacobian matrix linking the variations of the position of the needle point at the curvilinear coordinate L_free to the variations of the needle base pose r. This matrix J_Lfree is computed from the model using the method described by (4.27) in section 4.3.3.

Figure 4.5: Illustration of the geometric features used for the safety task functions. Note that the representation is here limited to a 2D case; however, in the general case the angle γ is defined in the plane containing the needle base axis z and the initial insertion point c_T(0).

Regulation of δ toward zero can be achieved using the classical law

δ̇_d = -λ_δ δ,  (4.60)

where λ_δ is a positive control gain that tunes the exponential decrease rate of δ. Alternatively the scalar distance δ_s = ||δ|| can also be directly used to decrease the dimension of the task:

δ_s = ||δ||,  (4.61)

J_δs = (δ^T/||δ||) J_δ,  (4.62)

δ̇_s,d = -λ_δ δ_s,  (4.63)

where λ_δ is a positive control gain that tunes the exponential decrease rate of δ_s. Note that this formulation introduces a singularity when δ = 0. However, it can be shown that the local asymptotic stability of the system remains valid (see Marey and Chaumette, "A new large projection operator for the redundancy framework"). From a numerical point of view, it can also be noted that δ^T/||δ|| is always a unit vector for δ ≠ 0, such that J_δs does not take arbitrarily large values near δ = 0. Therefore in the following we will use the scalar version of the task for the reduction of the tissue stretch at the surface.

Needle bending reduction task: A solution to avoid breaking the needle is to ensure that the needle remains as straight as possible. However, maintaining the needle strictly straight is not possible, since needle bending is necessary to steer the needle tip, either through lateral base motion or by using the natural curvature at the needle tip. We propose to use the bending energy of the needle as the quantity to minimize. This energy can be computed from the needle models presented in section 2.4.
As defined in (2.19), the energy is given by

E_N = (EI/2) ∫_0^L_N ||d²c_N(l)/dl²||² dl,

where we recall that E is the Young's modulus of the needle, I is the second moment of area of the needle section and c_N is the spline curve of length L_N representing the shape of the needle. The corresponding Jacobian matrix J_EN ∈ R^(1×6) can then be computed from the model using the method described by (4.27) in section 4.3.3:

J_EN = ∂E_N/∂r.  (4.64)

Regulation of E_N toward zero can be achieved using the classical law

Ė_N,d = -λ_EN E_N,  (4.65)

where λ_EN is a positive control gain that tunes the exponential decrease rate of E_N.

Needle base alignment task: Limiting the distance between the needle and the initial position of the insertion point does not ensure that the needle is not bending outside of the tissues. Similarly, once the needle has been inserted, reducing the bending of the needle does not ensure that the needle is not pushing laterally on the surface of the tissues. In order to avoid pushing on the tissues near the insertion point and to also limit the bending of the needle outside the tissues, the needle base axis can be maintained oriented toward the insertion point. This can be viewed as a remote center of motion around the initial insertion point, in the case where this one is not moving, i.e. if no external tissue motions occur. We propose to achieve this goal by regulating toward zero the angle γ between the needle base z axis and the initial location of the insertion point c_T(0), as illustrated in Fig. 4.5. This way the needle base axis should also follow the insertion point in the case of tissue motions that are not due to the interaction with the needle. Noting x_0, y_0 and z_0 the coordinates of the initial position of the insertion point c_T(0) in the needle base frame {F_b} (see Fig. 4.5), the angle γ can be expressed according to

γ = atan2(sqrt(x_0² + y_0²), z_0).
(4.66)

The Jacobian matrix J_γ ∈ R^(1×6) corresponding to this angle can then be derived as follows:

J_γ = ∂γ/∂r = (∂γ/∂c_T(0)) (∂c_T(0)/∂r),  (4.67)

with

∂γ/∂c_T(0) = [ x_0 cos²(γ)/(z_0 sqrt(x_0²+y_0²)) , y_0 cos²(γ)/(z_0 sqrt(x_0²+y_0²)) , -sqrt(x_0²+y_0²)/(x_0²+y_0²+z_0²) ]  (4.68)

and

∂c_T(0)/∂r = [ -1 0 0 0 -z_0 y_0 ; 0 -1 0 z_0 0 -x_0 ; 0 0 -1 -y_0 x_0 0 ].  (4.69)

Finally we obtain

J_γ = [ -x_0 cos²(γ)/(z_0 sqrt(x_0²+y_0²)) , -y_0 cos²(γ)/(z_0 sqrt(x_0²+y_0²)) , sqrt(x_0²+y_0²)/(x_0²+y_0²+z_0²) , y_0/sqrt(x_0²+y_0²) , -x_0/sqrt(x_0²+y_0²) , 0 ].  (4.70)

Regulation of γ toward zero can be achieved using the classical law

γ̇_d = -λ_γ γ,  (4.71)

where λ_γ is a positive control gain that tunes the exponential decrease rate of γ.

Conclusion: In this section we have defined three different task functions that can be used to control the deformations of the needle or the tissues during the insertion. These task functions can be combined with a targeting task using the task function framework in order to obtain a final control law that allows reaching a target with the needle tip while ensuring the safety of the insertion procedure. In the following section we propose to test the whole needle steering framework that we designed in different experimental scenarios. Several combinations of the task vectors defined in sections 4.3.4.1 and 4.3.4.2 will be explored, as well as the different formulations used to fuse them into one control law as described in section 4.3.1.

Framework validation

In this section we present an overview of the experiments that we conducted to test and validate our proposed needle steering framework. We first use the stereo cameras to obtain a reliable feedback on the needle localization in order to test the different aspects of the framework independently from the quality of the tracking.
We then perform insertions under 3D ultrasound visual guidance using the tracking algorithm that we proposed in chapter 3. Insertion under camera feedback In this section we propose to evaluate the performances of our framework when using the visual feedback provided by the stereo camera system presented in section 1.5.2. In all the experiments the stereo camera system is registered and used to retrieve the position of the needle shaft in the tissues using the registration and tracking methods described in section 3.4.1. We first present experiments that we performed to combine our framework with the duty-cycling control technique described in section 4.1.1. We then compare the performances obtained during the needle insertion when using the different safety task functions that were defined in section 4.3.4.2. Finally we propose to test the robustness of the method to modeling errors introduced by lateral motions of the tissues. Switching base manipulation and duty-cycling We first propose to use both base manipulation and tip-based control to insert a needle and reach a virtual target. Tip-based control allows a fine control of the tip trajectory, however the amplitude of the lateral tip motions that can be obtained is limited, such that the target can be unreachable if it is not initially aligned with the needle axis. On the contrary, using base manipulation allows a better control over the lateral tip motions at the beginning of the insertion, however the effect of base motions on the tip motions is reduced once the needle tip is inserted deeper in the tissues. In the following we use an hybrid controller that alternates between dutycycling control (see section 4.1.1), when the target is almost aligned with the needle, and base manipulation using our task framework (see section 4.3) in order to accurately reach a target that may be misaligned at the beginning of the insertion. 
Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. The insertion is done in a gelatin phantom embedded in a stationary transparent plastic container. Visual feedback is obtained using the stereo cameras system and the whole needle shaft is tracked in real-time by the image processing algorithm described in section 3.4.1. A picture of the setup is shown in Fig. 4.6. A virtual target to reach is defined just before the beginning of the insertion such that it is located at a predefined position in the initial tip frame. We use the virtual springs model presented in section 2.4.1 with polynomial needle segments of order r = 3. The stiffness per unit length of the model is set to 10000 N.m -2 for these experiments and the length threshold to add a new virtual spring is set to L thres = 2.5 mm. The rest position of a newly added spring (defined as p 0,i in section 2.4.1, see Fig. 2.5) is set at the position of the tracked needle tip in order to compensate for modeling errors. This is similar to the update method 3 presented in section 3.6.2. The pose of the needle base of the model is updated using the odometry of the robot. Control: We use either base manipulation using the task function framework or duty-cycling control depending on the alignment of the target with the needle tip axis. Duty-cycling is used when the target is almost aligned and only small modifications of the tip trajectory are needed. Base manipulation is used when larger tip motions are necessary to align the needle with the target. Base manipulation control: We use three tasks to control the needle manipulator and we fuse them using the singularity robust formulation of the task function framework, as defined by (4.16) in section 4.3.1. Each task is given a different priority level such that it does not disturb the tasks with higher priority. The tasks are defined as follows. 
• The first task, with highest priority, controls the tip translation velocity v_t, as defined by (4.33) and (4.35). We set the insertion velocity to 1 mm.s⁻¹. Note that we choose this task over the tip alignment tasks defined in (4.49) and (4.50) because it does not require the tuning of an additional gain.

• The second task, with medium priority, controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω_z,max is set to 60°.s⁻¹ and the gain λ_σ is set to 4/3 (see (4.56)), such that the maximal rotation velocity is used when the bevel orientation error is higher than 45°.

• The third task, with lowest priority, is used to reduce the mean deformation of the tissues δ_m, which we compute here from the virtual springs interaction model according to

δ_m = (1/L_ins) Σ_{i=1}^{n} l_i δ_i,  (4.72)

where L_ins is the current length of the needle that is inserted in the tissues, n is the current number of virtual springs, δ_i is the distance between the needle and the rest position of the i-th virtual spring, i.e. the virtual spring elongation, and l_i is the length of the needle model that is supported by this virtual spring. The Jacobian matrix J_δm corresponding to δ_m is numerically computed from the model using the method described by (4.27) and the desired variation of δ_m is computed as

δ̇_m,d = -λ_δm δ_m,  (4.73)

with the control gain λ_δm set to 1.

The final velocity screw vector v_b applied to the needle base is then computed according to

v_b = J_vt^+ v_t,d + P_1 J_σ^+ σ̇_d - λ_δm P_2 J_δm^+ δ_m,  (4.74)

with

P_1 = I_6 - J_vt^+ J_vt,  (4.75)

P_2 = I_6 - [J_vt ; J_σ]^+ [J_vt ; J_σ] = P_1 - (J_σ P_1)^+ (J_σ P_1),  (4.76)

where I_6 is the 6 by 6 identity matrix.

Duty-cycling control: We use duty-cycling control when the target is almost aligned with the needle tip axis.
This is detected by comparing the angle θ between the target and the tip axis (as defined in (4.38)) with the maximum angle θ_DC obtained during one cycle of duty-cycling. This angle corresponds to the angle obtained during a cycle with only insertion (duty-cycle ratio DC = 0), such that

θ_DC = K_nat L_DC,  (4.77)

where K_nat is the natural curvature of the needle tip trajectory during the insertion and L_DC is the total insertion length of a cycle, set to 3 mm in this experiment. If θ < θ_DC, the needle would overshoot the current desired direction in less than a cycle length. In that case it is better to reduce the effective curvature K_eff of the tip trajectory so that it aligns with the desired direction, i.e. using

K_eff = θ / L_DC,  (4.78)

DC = 1 - θ / (L_DC K_nat).  (4.79)

The total rotation of the needle during each rotation phase is set to 2π + σ, where σ is the angle between the target and the bevel as defined in (4.51), such that the bevel is oriented in the target direction before starting the translation phase.

Experimental scenarios: Four experiments are performed with the same phantom to validate our method. At the beginning of each experiment, the needle is placed such that it is normal to the surface of the gelatin and its tip slightly touches it. The insertion point is shifted between the experiments so that the needle cannot cross a previous insertion path. The needle is first inserted 7 mm into the gelatin to allow the manual initialization of the tracking algorithm in the images. Then the insertion procedure starts with an insertion speed of 1 mm.s⁻¹ and is stopped when the target is no longer in front of the needle tip.

Open-loop insertion toward an aligned target: In the first experiment, a virtual target is defined before the beginning of the insertion such that it is aligned with the needle and placed at a distance of 8 cm from the tip. A straight insertion along the needle axis is then performed in open-loop control. Fig.
4.7a shows the view of the front camera at the end of the experiment and Fig. 4.8a shows the 3D lateral distance between the needle tip axis and the target. Note that the measure presents a high level of noise at the beginning of the insertion. This is first due to the noisy estimation of the needle direction at the beginning of the insertion, since the visible part of the needle is small. Second, the needle tip is far from the target at the beginning of the insertion, which amplifies the effect of the direction error on the lateral distance. We can see that the target is missed laterally by 8 mm at the end because of the natural deflection of the needle. This experiment justifies that needle steering is necessary to accurately reach a target even if it is correctly aligned with the needle axis at the beginning of the procedure.

Figure caption: (b) Duty-cycling control with a target shifted 1 cm away from the initial needle axis: duty-cycling control is saturated and the target is missed due to insufficient tip deflection. The target can be reached in both cases using the hybrid control framework ((c) aligned target and (d) shifted target). In each graph, the purple sections marked "DC" correspond to duty-cycling control and the red sections marked "BM" correspond to base manipulation.

Tip-based control with a misaligned target: In a second experiment, the target is shifted 1 cm away from the initial tip axis, such that a 135° rotation is necessary to align the bevel toward the target. The duty-cycling control is used alone for this experiment. Fig. 4.7b shows the view of the front camera at the end of the experiment and Fig. 4.8b shows the 3D lateral distance between the needle tip axis and the target. After the first rotation, the duty-cycling controller is saturated and only performs pure insertion phases. We can see that the lateral alignment error decreases during the insertion.
However the natural curvature of the needle is not sufficient to compensate for the initial error and the target is finally missed by 5 mm. This experiment justifies that base manipulation is necessary to accurately reach a misaligned target with a standard needle, or that a needle offering a higher curvature needs to be used.

Table 4.1: Final lateral position error between the needle tip and the target for different insertion scenarios.

Scenario | Final lateral error (mm)
Straight insertion and aligned target | 7.6
Duty-cycling and shifted target | 4.9
Hybrid control and aligned target | 0.6
Hybrid control and shifted target | 0.1

Hybrid control: Two other experiments were performed with the same initial target placements (one aligned target and one misaligned target) and using the hybrid controller with both base manipulation and duty-cycling. Figures 4.7c and 4.7d show the view of the front camera at the end of the experiments and Fig. 4.8c and 4.8d show the 3D lateral distance between the needle tip axis and the target. We can see that the controller allows reaching the target with sub-millimeter accuracy in both cases. Table 4.1 shows a summary of the final lateral targeting error between the tip and the target. The targeting error along the needle direction was under 0.25 mm in each experiment, which corresponds to the accuracy of the vision system. These experiments show that using base manipulation in addition to tip-based steering allows a larger reachable space compared to the sole use of tip-based control methods.

In addition, we can observe that the controller rarely switched to duty-cycling, as can be seen in Fig. 4.8c and Fig. 4.8d. This is due to the small natural curvature of the needle tip trajectory obtained for this association of needle and phantom. In this case it may not be necessary to reduce the curvature of the needle tip trajectory.
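The duty-cycle ratio used by the hybrid controller, equations (4.77)-(4.79), can be sketched as follows (illustrative Python; the function name is ours):

```python
def duty_cycle_ratio(theta, L_DC, K_nat):
    """Duty-cycle ratio reducing the natural tip curvature, as in (4.77)-(4.79).

    theta : remaining misalignment angle with the desired direction (rad).
    L_DC  : total insertion length of one cycle (m).
    K_nat : natural curvature of the tip trajectory (1/m).
    """
    theta_DC = K_nat * L_DC   # angle turned during one pure-insertion cycle (4.77)
    if theta >= theta_DC:
        return 0.0            # full natural curvature needed: no rotation phase
    return 1.0 - theta / (L_DC * K_nat)   # reduced effective curvature (4.78)-(4.79)
```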
We could just orient the bevel edge toward the target, as is done by the second task of the base manipulation controller, and avoid the high number of rotations required to perform dutycycling control. However duty-cycling control should still be used when using a more flexible needle, and more especially if the tip trajectory is defined by a planning algorithm that allows non-natural curvatures. A second observation concerns the oscillations in the lateral error that appear during the duty-cycling control in Fig. 4.8c and 4.8d. These oscillations are due to a small misalignment between the axis of rotation of the robot and the actual axis of the needle. This misalignment introduces some lateral motions of the needle during the rotation phases, which in turn modify the needle tip trajectory. From a design point of view, it shows that the accuracy of the realization of a needle steering mechanical system can have a direct effect on the accuracy of the needle steering. Furthermore, depending on the frame in which the observation is made, a lateral motion of the needle base can be seen as a motion of the phantom, so that this oscillation phenomenon confirms the fact that tissue motions is an important issue for an open loop insertion procedure. This effect is likely to have a greater importance when using relatively stiff needles, for which base motion have a significant effect on the tip motion. On the contrary it should have a lower impact when using more flexible needles, so that duty-cycling control is better suited for very flexible needles. Conclusion: We have seen that combining both base manipulation and tip-based control during a visual guided robotic insertion allows a good targeting accuracy in a large reachable space. This validate the fact that using additional degrees of freedom of the needle base can be necessary to ensure the accurate steering of the needle tip toward a target. 
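The singularity-robust fusion of (4.74)-(4.76) used by the base-manipulation controller in these experiments can be sketched numerically as follows (illustrative Python; the function name and the array shapes are our assumptions, with the task Jacobians passed as plain arrays):

```python
import numpy as np

def hierarchical_base_velocity(J1, v1_d, J2, v2_d, J3, e3, lam3):
    """Three-task singularity-robust control law in the spirit of (4.74)-(4.76).

    J1 (3, 6) has highest priority, J2 (1, 6) is projected into its null
    space and J3 (1, 6) into the null space of the two stacked tasks.
    """
    P1 = np.eye(6) - np.linalg.pinv(J1) @ J1   # null-space projector of task 1 (4.75)
    J2P1 = J2 @ P1
    P2 = P1 - np.linalg.pinv(J2P1) @ J2P1      # null space of tasks 1 and 2 (4.76)
    return (np.linalg.pinv(J1) @ v1_d                        # tip velocity task
            + P1 @ (np.linalg.pinv(J2) * v2_d).ravel()       # bevel task, projected
            - lam3 * P2 @ (np.linalg.pinv(J3) * e3).ravel()) # deformation task (4.74)
```

With J1 = [I_3 0_3], a third task that conflicts with the tip translations (e.g. a lateral translation) is entirely cancelled by P2, which is consistent with the observation that the low-priority deformation task had little influence on the applied velocity.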
We also observed that duty-cycling control was actually not really adapted to our experimental setup due to the low natural curvature of the tip trajectory during the insertion. Therefore in the following we do not use dutycycling control anymore but we only orient the cutting edge of the bevel toward the target, such that the full curvature of the needle tip trajectory is used. As a final note, we observed during the experiments that the third task used to reduce the deformations of the tissues had almost no influence on the final velocity applied to the needle base. This is due to the singularity robust formulation and the fact that the task is near incompatible with the first task with high priority. The contribution of the third task was then greatly reduced after the projection in the null space of the first task. Therefore we do not use this formulation in the followings and we use instead the classical formulations defined by (4.6) or (4.10). Safety task comparison We propose here to compare the performances obtained when using the different safety tasks that we defined in section 4.3.4.2. Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. The insertion is done in a gelatin phantom embedded in a stationary transparent plastic container. Visual feedback is obtained using the stereo cameras system and the whole needle shaft is tracked in real-time by the image processing algorithm described in section 3.4.1. A raisin is embedded in the gelatin 9 cm under the surface and used as a real target. A picture of the setup is shown in Fig. 4.9. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments and the last segment measuring 0.6 mm. A soft phantom is used in these experiments, such that the stiffness per unit length of the model is set to 1000 N.m -2 . 
The length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm. The pose of the needle base of the model is updated using the odometry of the robot.

Control: We use three tasks for the control of the needle manipulator and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. The different tasks are defined as follows.

• The first task controls the tip translation velocity v_t, as defined by (4.33) and (4.35). We set the insertion velocity v_tip to 5 mm.s⁻¹.

• The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω_z,max is set to 60°.s⁻¹ and the gain λ_σ is set to 10 (see (4.56)), such that the maximal rotation velocity is used when the bevel orientation error is higher than 6°.

• The third task is one of the three safety tasks defined in section 4.3.4.2: reduction of the tissue stretch δ at the surface ((4.61), (4.62) and (4.63)), reduction of the needle bending energy E_N ((2.19), (4.64) and (4.65)) or reduction of the angle γ between the needle base axis and the insertion point ((4.66), (4.70) and (4.71)). The control gain for each of these tasks (λ_δ, λ_EN or λ_γ) is set to 1.

We observed in the previous section that the singularity robust hierarchical formulation (see (4.16)) induces too much distortion of the low priority tasks. Therefore, we choose here to give the same priority level to each task, such that the control should give a trade-off between good targeting and safety of the procedure. The final velocity screw vector applied to the needle base v_b is then computed according to

v_b = [J_vt ; J_σ ; J_3]^+ [v_t,d ; σ̇_d ; ė_3,d],  (4.80)

where J_3 is the Jacobian matrix corresponding to the safety task (either J_δ, J_EN or J_γ) and ė_3,d is the desired variation for the safety task (either δ̇_d, Ė_N,d or γ̇_d).
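The equal-priority fusion of (4.80) reduces to stacking the task Jacobians and applying a single pseudoinverse; a minimal Python sketch (names are ours, with Jacobians assumed given as arrays):

```python
import numpy as np

def stacked_control(J_vt, v_t_d, J_sigma, sigma_dot_d, J3, e3_dot_d):
    """Equal-priority fusion of the targeting and safety tasks, as in (4.80)."""
    J = np.vstack([J_vt, J_sigma, J3])                       # (5, 6) stacked Jacobian
    e = np.concatenate([v_t_d, [sigma_dot_d], [e3_dot_d]])   # stacked desired variations
    return np.linalg.pinv(J) @ e                             # least-squares base screw
```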
Note that the total task vector is here of dimension 5 while we have 6 degrees of freedom available, such that all tasks should ideally be fulfilled. For the computation of the control law, the desired variations of the two first tasks are computed from the measures of the target and tip position using the visual feedback. On the contrary, the desired variations of the safety tasks are computed from the needle interaction model. Experimental scenarios: Five insertions are performed for each kind of safety task. The needle is placed perpendicular to the tissue surface before the beginning of the insertion. Initial insertion locations are chosen such that they are sufficiently far away from the previous insertions, leading to an initial misalignment with the target up to 1.7 cm. The needle is first inserted 1 cm into the phantom to manually initialize the tracking algorithm. The controller is stopped once the needle tip reaches the depth of the target. Pictures of the initial and final state of one experiment are shown in Fig. 4.10. In the following we compare the values taken by each of the three physical quantities defined for the third task during the experiments, namely the tissue stretch at the surface, the needle bending energy and the angle between the needle base axis and the insertion point. These values are recorded from the state of the model during the insertions. Targeting: Let us first look at the targeting performances of the method. The lateral distance between the needle tip axis and the target was measured during the insertions and is shown in Fig. 4.11. As stated previously this measure is noisy at the beginning of the insertion due to the distance between the needle tip and the target. The mean value of the final lateral targeting error across the five insertion procedures is summarized in Fig. 4.12. 
The target could be reached in all cases with an accuracy of less than 2 mm, which is sufficiently accurate for most clinical needle insertion applications. This demonstrates the good performances of our steering method. Similar targeting performances are obtained when reducing the surface tissue stretch or reducing the needle bending energy. Aligning the needle base with the insertion point further decreases the targeting error. However this result should be interpreted with caution and may be due to statistical variance, as the targeting error is indeed close to the diameter of the needle (0.7 mm) and the visual system accuracy (0.25 mm). Surface tissue stretch: Let us now consider the effect of the tasks on the tissue stretch δ at the surface. The value of δ for each experiment is shown in Fig. 4.13. The mean value of δ over time and across the five insertion procedures is summarized in Fig. 4.14 for each active task. As expected, actively reducing the surface stretch effectively reduces the surface stretch compared to the other safety tasks. On the other hand reducing the bending of the needle introduces a higher stress to the tissue surface. This can be explained by the fact that keeping the needle straight outside of the tissues requires that the internal shear force applied to the needle at the tissue surface is small. This is only possible if the integral of the force load applied to the needle in the tissues is near zero. Since the needle tip needs to be steered laterally to reach the target, some load is applied to the tissues near the tip. An opposing load is thus necessary near the tissue surface to drive the integral of the load to zero, leading to a deviation of the needle shaft from the initial position of the insertion point. An intermediate between these two behaviors seems to be obtained when aligning the needle with the initial position of the insertion point. 
This could be expected since orienting the needle base tends to move the needle body toward the same direction, i.e. toward the insertion point, hence reducing the surface stretch. However bending of the needle outside of the tissues is still possible due to the interaction with the tissues, creating a certain amount of stretch at the surface.

Needle bending energy: Let us now look at the effect of the tasks on the bending energy E_N stored in the needle. The value of E_N for each experiment is shown on a logarithmic scale in Fig. 4.15. The mean value of E_N over time and across the five insertion procedures is summarized in Fig. 4.16 for each active task. As expected, actively reducing the bending energy effectively reduces the energy compared to the other safety tasks. On the other hand, reducing the tissue stretch at the surface requires a higher needle bending. This can be explained by the fact that steering the needle tip laterally while keeping the needle near the insertion point results in a force load applied by the tissues on only one side of the needle. Needle bending outside of the tissues is thus necessary to be able to obtain this load. As seen previously for the surface tissue stretch, aligning the needle with the initial position of the insertion point seems to provide an intermediate between these two behaviors. This could be expected since orienting the needle base axis toward the insertion point tends to straighten the part of the needle that is outside of the tissues, hence reducing the overall bending energy. However the needle can still bend near the surface and inside the tissues, which helps to perform the targeting task.

An additional observation can be made on the behavior of the needle bending reduction task. Once the needle has been inserted in the tissues and some natural deflection has appeared, moving the needle base only provides a limited way of changing the shape of the needle inside the tissues.
This creates a non-zero floor value under which the bending energy cannot be reduced without removing the needle from the tissues. From the task function point of view, when the floor value is reached the corresponding task Jacobian matrix becomes incompatible with the task controlling the insertion. A singularity occurs in this case, leading to some instabilities that increase the needle bending, as could be observed in some experiments (for example the blue curve in Fig. 4.15b). This behavior indicates that this task is not suitable for increasing the safety of the control.

Base axis / insertion point angle: Let us finally consider the effect of the tasks on the angle γ between the needle base axis and the initial position of the insertion point. The value of γ for each experiment is shown in Fig. 4.17. The mean value of γ over time and across the five insertion procedures is summarized in Fig. 4.18 for each active task. As expected, actively reducing the angle between the base axis and the insertion point effectively reduces this alignment error when compared to the other safety tasks. As discussed previously, reducing the tissue stretch at the surface requires bending the part of the needle that is outside the tissues to fulfill the targeting task. Since the needle is constrained to pass by the initial position of the insertion point, this bending can only be achieved by rotating the needle base to put it out of alignment, resulting in a higher value of γ. Similarly, we have seen that reducing the bending of the needle introduces a stretch of the tissues at the surface to achieve the targeting task. Since the needle body is aligned with the needle base axis due to the reduced bending, the base cannot be aligned with the insertion point.

It can also be observed during all the experiments that the features associated with the safety tasks tend to increase near the end of the insertion, as visible in Fig. 4.13a and 4.17c.
A small increase of the lateral distance near the end can also be observed in Fig. 4.11. Since the task functions are designed to regulate these features toward zero, this effect indicates an incompatibility between the safety and targeting tasks. The total Jacobian matrix defined in (4.80) is then close to singularity, such that the computation of the pseudo-inverse introduces some distortions. The hierarchical formulation (4.10) of the task function framework could be used instead of the classical formulation (4.6) to choose which task should have the priority in this case. This point will be explored later in section 4.4.2.

Conclusion: Through these experiments we have confirmed that steering a flexible needle in soft tissues requires a certain amount of tissue deformation and needle bending. Trying to steer the needle while actively reducing the deformations at the surface of the tissues can only be achieved by bending the needle. Trying to reduce the amount of bending during the steering can only be achieved through deformations of the tissue surface. Keeping the needle base aligned with the initial position of the insertion point seems to allow needle steering while providing a trade-off between tissue deformations near the surface and needle bending outside the tissues. In conclusion, this last method should generally be preferred to reduce both the needle and tissue deformations. The task reducing the tissue stretch at the surface can be used if the needle is not too flexible, such that it does not bend too much outside of the tissues. On the contrary, the task reducing the needle bending should be avoided, since it introduces stability issues in addition to the deformations of the tissues.

Robustness to modeling errors

We now propose to evaluate the robustness of the base manipulation framework to modeling errors and tissue motions.
Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. The insertion is done in a gelatin phantom embedded in a transparent plastic container. The phantom is moved manually during the first half of the insertion. Visual feedback is obtained using the stereo camera system and the whole needle shaft is tracked in real-time by the image processing algorithm described in section 3.4.1. The setup is similar to the previous section and can be seen in Fig. 4.9. A virtual target is defined just before the beginning of the insertion such that it is 8 cm under the tissue surface and 4 mm away from the initial needle axis. This target is fixed in space and does not follow the motions applied to the phantom, hence simulating a moving target from the point of view of the needle, which is embedded in the phantom. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments, with the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 3200 N.m⁻² and the length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm.

Control: We use two tasks for the control of the needle manipulator and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. The tasks are defined as follows.
• The first task controls the tip translation velocity v_t, as defined by (4.33) and (4.35). We set the insertion velocity v_tip to 2 mm.s⁻¹.
• The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω_z,max is set to 60°.s⁻¹ and the gain λ_σ is set to 10 (see (4.56)), such that the maximal rotation velocity is used when the bevel orientation error is higher than 6°.
The final velocity screw vector applied to the needle base v_b is then computed according to

$$ v_b = \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix}^{+} \begin{bmatrix} v_{t,d} \\ \dot{\sigma}_d \end{bmatrix}. \tag{4.81} $$

The controller is stopped once the needle tip reaches the depth of the target.

Experimental scenarios: We perform four insertions using the controller defined previously. For each experiment, the phantom is manually moved laterally with respect to the insertion direction with an amplitude of up to 1 cm. During two of the insertions, the interaction model is updated using only the pose of the needle manipulator. During the two other insertions, the model is also updated with the UKF-based update algorithm defined in section 3.5. We use the position feedback version of the algorithm by measuring the position of needle points separated by 5 mm along the needle shaft. The process noise covariance matrix is set with diagonal elements equal to 10⁻⁸ m² and the measurement noise covariance matrix with diagonal elements equal to (2.5 × 10⁻⁴)² m².

Results: The lateral distance between the needle tip axis and the target is shown in Fig. 4.19, either measured using the needle tracking (Fig. 4.19a) or estimated from the needle model (Fig. 4.19b). An example of the final state of two models, one updated and one not updated, during a single insertion is shown in Fig. 4.20. We can see that when the position of the tissue model is not updated, the needle model does not fit the real needle. However the target can be reached with sub-millimeter accuracy in all cases, despite the fact that an inaccurate model is used in some cases. This shows that an accurate modeling of the current state of the insertion is not necessary to obtain estimates of the Jacobian matrices which can maintain the convergence of the control law, as previously expressed by (4.22). The task controller proves to be robust to modeling uncertainties thanks to the closed-loop feedback compensating for the errors appearing in the Jacobian matrices.
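This robustness to Jacobian errors can be illustrated with a small closed-loop simulation (a sketch only: the controller deliberately uses a perturbed Jacobian estimate, and all matrices are illustrative values, not taken from the needle model):

```python
import numpy as np

# True task Jacobian and a deliberately wrong estimate used by the controller.
J_true = np.array([[1.0, 0.0, 0.0, 0.0, 0.2, 0.0],
                   [0.0, 1.0, 0.0, -0.2, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])
J_hat = J_true + 0.1 * np.array([[1, -1, 0, 1, 0, 0],
                                 [0, 1, 1, 0, -1, 0],
                                 [1, 0, -1, 0, 0, 1]])

e = np.array([0.01, -0.02, 0.015])            # initial task error (m)
lam, dt = 1.0, 0.01                           # control gain and time step
for _ in range(2000):
    v = np.linalg.pinv(J_hat) @ (-lam * e)    # control computed from the wrong model
    e = e + dt * (J_true @ v)                 # error evolves with the true Jacobian
```

Despite the modeling error, the closed loop still drives the task error to zero, because the product of the true Jacobian with the pseudo-inverse of the estimated one remains close enough to the identity, which is the essence of the convergence condition (4.22).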
Nevertheless it can be noted that updating the model is necessary if it must be used for prediction of the needle tip trajectory. Furthermore, the fact that the phantom is moving while the target is not introduces an apparent motion of the target with respect to the needle tip. The designed targeting tasks show good performances by compensating for this target motion thanks to the closed-loop nature of the control.

Fig. 4.20: Yellow and blue lines are, respectively, the needle and tissue spline curves of a model updated using only the pose feedback of the needle manipulator, such that the position of the tissue spline (blue) is not updated during the insertion. Red and green lines are, respectively, the needle and tissue spline curves of a model updated using the pose feedback of the needle manipulator and the visual feedback, such that the position of the tissue spline (green) is updated during the insertion.

Conclusions: From these results, we have good reasons to expect good targeting performances when using a 3D ultrasound (US) volume as feedback, even if the probe pose is not accurately estimated and causes the model to be updated from inaccurate measures. The closed-loop control may be able to ensure good targeting as long as the desired values for the tasks are computed in the same image space, i.e. both needle and target are detected using the same US volume. In the following section we present additional experiments to see if this intuition can be confirmed.

Insertion under US guidance

In the previous sections we tested our steering framework using cameras to track the needle in a translucent phantom. While cameras offer good accuracy, in clinical practice the needle is inserted in opaque tissues, making cameras unusable for such a procedure. In this section we propose to test whether the framework can be used in practice with a clinically relevant imaging modality.
We present experiments performed using 3D ultrasound (US) as the visual feedback to obtain the 3D position of the body of the needle. We mainly focus our study on the targeting accuracy and also consider the effect of setting different priority levels for the different tasks.

Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. Two phantoms are used, one gelatin phantom and one phantom with a porcine liver embedded in gelatin. We use the 3D US probe and station from BK Ultrasound to acquire 3D US volumes online. The US probe is fixed to the end effector of the Viper s850 and held fixed in contact with the phantom. The needle is inserted from the top of the phantom while the probe is set to the side of the phantom, as illustrated in Fig. 4.21. A thin plastic film replaces one side of the plastic container, allowing a soft contact between the probe and the phantom such that the US waves can propagate through the phantom. This ensures a good visibility of the needle in the US volume, by avoiding too much reflection of the US wave outside of the transducer. This orthogonal configuration can be observed in practice in several medical applications, such as kidney biopsy or prostate brachytherapy, where the needle is inserted perpendicularly to the US wave propagation direction. The whole needle shaft is tracked in each volume using the tracking algorithm described in section 3.4.2. A virtual target is manually defined before the beginning of the insertion. The acquisition parameters of the US probe are set to acquire 31 frames during a sweeping motion with an angle of 1.46° between successive frames. The acquisition depth is set to 15 cm, resulting in the acquisition of one volume every 900 ms.
The needle is around 4 cm from the probe transducer for each experiment, which leads to a maximum resolution of 0.85 mm in the insertion direction and 0.3 mm × 1.72 mm in the other lateral directions. A focal length of 5 cm is set for the transducer to obtain a good effective resolution near the needle. We use the two-body model with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments, with the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 1000 N.m⁻² and the length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm.

Control: We use three tasks for the control of the needle manipulator and we fuse them using the hierarchical formulation of the task function framework, as defined by (4.10) in section 4.3.1. Each task is given a priority level such that it does not disturb the tasks with higher priority. The tasks are defined as follows.
• The first task controls the tip translation velocity v_t, as defined by (4.33) and (4.35). We set the insertion velocity v_tip to 1 mm.s⁻¹.
• The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56).
• The third task is the safety task reducing the tissue stretch δ at the surface, as defined by (4.61), (4.62) and (4.63).

Two priority orderings are compared. In the first set, the targeting tasks have the highest priority and the safety task has the lowest priority. The final velocity screw vector applied to the needle base is then computed according to

$$ v_b = \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix}^{+} \begin{bmatrix} v_{t,d} \\ \dot{\sigma}_d \end{bmatrix} + P_1 \left( J_\delta P_1 \right)^{+} \left( \dot{\delta}_d - J_\delta \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix}^{+} \begin{bmatrix} v_{t,d} \\ \dot{\sigma}_d \end{bmatrix} \right), \tag{4.82} $$

with

$$ P_1 = I_6 - \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix}^{+} \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix}, \tag{4.83} $$

where I_6 is the 6 by 6 identity matrix. In the second set, the safety task has the highest priority and the two targeting tasks have the same lower priority. The final velocity screw vector v_b is then computed according to

$$ v_b = J_\delta^{+} \dot{\delta}_d + P_2 \left( \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix} P_2 \right)^{+} \left( \begin{bmatrix} v_{t,d} \\ \dot{\sigma}_d \end{bmatrix} - \begin{bmatrix} J_{v_t} \\ J_\sigma \end{bmatrix} J_\delta^{+} \dot{\delta}_d \right), \tag{4.84} $$

with

$$ P_2 = I_6 - J_\delta^{+} J_\delta. \tag{4.85} $$

Experimental scenario: Four insertions are performed in the gelatin phantom and four insertions in the porcine liver embedded in gelatin.
For each type of phantom two insertions are performed using a higher priority for the targeting tasks as defined by (4.82) and two insertions are performed using a higher priority for the safety task as defined by (4.84). For each experiment, the needle is first placed perpendicular to the surface of the phantom with its tip slightly touching the surface. This position allows the initialization of the needle model and the tissue surface model using the current pose of the needle holder. The needle is then inserted 1.5 cm in the tissues and a 3D US volume is acquired. The needle tracking algorithm is initialized by manually segmenting the insertion point and the needle tip in the volume. A virtual target point is manually chosen in the volume between 5 cm and 10 cm under the needle. The pose of the probe is initialized separately for each experiment using the registration method described in section 3.6.3. The needle tracking algorithm defined in section 3.4.2 is also initialized at the same time. Then the chosen control law is launched and stops when the tip of the tracked needle reaches the depth of the target. Results: We first discuss the targeting performances obtained for the different experiments and then we discuss the effect of the priority order on the realization of the safety task. We can see that the target can be reached in each case with a final lateral targeting error below 3 mm, which comes close to the maximal accuracy of the reconstructed US volumes. This accuracy may be sufficient for most clinical applications. However, for more demanding applications, using a better resolution for the US volume acquisition could be sufficient to achieve a better targeting accuracy. The priority order does not seem to have a significant impact on the final targeting error, although slightly larger errors could be observed for the insertions in gelatin when the safety task was set to the highest priority. 
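The two priority orderings compared in these experiments, (4.82) and (4.84), share the same null-space projection structure, sketched here for generic task Jacobians (the matrices below are random placeholders, not values from the needle model):

```python
import numpy as np

def two_level_hierarchy(J1, e1_dot, J2, e2_dot):
    """Priority-ordered resolution: task 1 is realized exactly, task 2 is
    realized as well as possible in the null space of task 1 (the structure
    of (4.82)-(4.85), written for generic Jacobians)."""
    J1_pinv = np.linalg.pinv(J1)
    v1 = J1_pinv @ e1_dot
    P = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector of task 1
    return v1 + P @ np.linalg.pinv(J2 @ P) @ (e2_dot - J2 @ v1)

rng = np.random.default_rng(1)
J_target = rng.standard_normal((4, 6))   # stacked targeting tasks (placeholder)
J_delta = rng.standard_normal((1, 6))    # surface-stretch safety task (placeholder)
e_target = np.array([0.0, 0.0, 1e-3, 0.2])
e_delta = np.array([0.0])

v_first = two_level_hierarchy(J_target, e_target, J_delta, e_delta)   # as in (4.82)
v_second = two_level_hierarchy(J_delta, e_delta, J_target, e_target)  # as in (4.84)
```

When the tasks are compatible, both orderings realize all desired variations; when they are not, only the high-priority task is guaranteed, which is exactly the damping behavior discussed in these experiments.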
During these experiments we choose to update the needle model using only the pose feedback of the needle manipulator; no update of the position of the tissue spline is used to compensate for the modeling errors introduced by the constant stiffness per unit length set in the model. This confirms that the targeting performances are quite robust to modeling approximations. The registration of the probe pose performed at the beginning of the insertion is also quite inaccurate, especially concerning the orientation of the probe. Indeed, it depends on the quality of the manual segmentation of the part of the needle that is initially visible in the US volume. Since this needle part is initially short and the resolution of the volume is limited, it is difficult to manually segment the correct orientation of the needle. Nevertheless, since the inputs of the targeting tasks are provided directly using the position of the target in the frame of the needle tip tracked in the volume, the target can still be accurately reached. These experiments have thus demonstrated that the exact pose of the probe is not required by the steering framework to achieve good targeting performances, thanks to the closed-loop nature of the control.

Safety task performances: Let us now look at the safety task that was added to minimize the tissue deformations at the surface. The placement of the probe on the side of the phantom is such that the top surface of the tissues is visible in the US volumes. Hence we can measure the stretch at the surface of the tissues during the insertions. The initial position of the insertion point is recorded at the initialization of the needle tracking algorithm. The surface stretch is then measured as the distance between this initial position and the current position of the tracked needle at the surface. The measured surface stretch during the insertions is shown in Fig. 4.24, along with the corresponding value estimated from the model.
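The surface-stretch measurement described above can be sketched as follows, assuming the tracked needle is available as a 3D polyline and the tissue surface is approximated by a horizontal plane (the data below are illustrative, not actual tracking output):

```python
import numpy as np

def surface_stretch(needle_pts, surface_z, entry_point):
    """Distance between the initial insertion point and the point where the
    tracked needle polyline currently crosses the plane z = surface_z."""
    pts = np.asarray(needle_pts, float)
    for a, b in zip(pts[:-1], pts[1:]):
        # The crossing segment has its endpoints on opposite sides of the plane.
        if (a[2] - surface_z) * (b[2] - surface_z) <= 0 and a[2] != b[2]:
            t = (surface_z - a[2]) / (b[2] - a[2])
            crossing = a + t * (b - a)
            return float(np.linalg.norm(crossing - np.asarray(entry_point, float)))
    return None  # needle does not cross the surface plane

# Illustrative tracked needle (meters): the shaft crosses z = 0 at x = 2 mm,
# i.e. 2 mm of lateral stretch with respect to the initial insertion point.
needle = [[0.0, 0.0, -0.02], [0.002, 0.0, 0.0], [0.004, 0.0, 0.03]]
stretch = surface_stretch(needle, surface_z=0.0, entry_point=[0.0, 0.0, 0.0])
```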
Let us first remark that the measured and model-estimated values seem to follow the same general tendencies, although they do not match closely. The fitting error can easily be explained by two factors. First, it can be the consequence of modeling errors, introduced by non-linearities of the phantom properties, such as the natural non-linearity of the liver or some amount of tearing at the surface of the gelatin. It may also be due to the accuracy of the measure of the surface stretch, which is limited by the volume resolution and by the many artifacts appearing at the tissue surface, which deteriorate the quality of the tracking algorithm around this zone. As expected, we observe that the surface stretch of the model is well regulated to zero when the safety task is set to the highest priority. The measured deformations are also reduced, even if this is less visible from the measures done with the biological tissues. On the other hand, when the targeting tasks have the priority, more surface stretch tends to be observed. This indicates that the safety task is not always compatible with the targeting tasks, such that its contribution to the control law is sometimes damped by the hierarchical formulation due to the projection on the null space of the targeting tasks (see (4.82)). It can be noted that the task compatibility only depends on the Jacobian matrices of the different tasks and is independent of their priority levels. Therefore the incompatibility should be observed as well when the priorities are inverted. However the good targeting performances do not seem to be affected by the priority of the safety task. This shows that the safety task is incompatible with only some components of the targeting tasks, corresponding to the lateral translations of the tip. Indeed, when the safety task has the lowest priority, it is simply damped whenever it becomes incompatible with the control of the lateral translations.
This results in higher tissue deformation, while the targeting performances are not affected. On the other hand, when the safety task has the highest priority, only the components of the targeting tasks controlling the lateral translations of the needle tip are damped in case of incompatibility. The components corresponding to the tip-based control, i.e. the insertion and the rotation around the needle axis, are indeed always compatible with the safety task, since they do not induce much lateral motion of the needle shaft. Hence these components are always available to ensure a certain amount of needle tip steering toward the target, leading to the good targeting performances.

Conclusions: We have seen that our needle steering framework could be used to accurately reach a target in soft tissues using 3D US as visual feedback. A safety task can also be added using the hierarchical stack-of-tasks formulation in order to reduce the tissue deformations. Overall, we could see that setting the highest priority to the safety task provides a better control over the deformations of the tissues, while it does not really affect the good targeting performances of the controller. This could be used in clinical applications to perform accurate needle insertions, while also reducing the amount of tissue deformation that is necessary to reach the target.

Conclusion

In this chapter we presented a review of current flexible needle steering methods used to accurately reach a targeted region in soft tissues. We focused on the two main approaches, using either lateral motions of the needle base to control the motions of the tip or using only the natural deflection generated by an asymmetric tip during the insertion. Then, we provided an overview of different strategies used to define the trajectory that must be followed by the needle tip.
We proposed a contribution consisting of a steering framework that allows the control of the 6 degrees of freedom of the base of a flexible needle to achieve several tasks during the insertion. Two main tasks were considered: a targeting task to achieve a good steering of the tip toward a target and a safety task to reduce the efforts applied to the needle and the tissues. This framework is also generic enough to integrate both steering approaches, using lateral base manipulation and tip-based control, in the targeting task. We then evaluated the performances of the framework through several experiments using closed-loop visual feedback on the needle provided either by cameras or by the 3D ultrasound modality. These experiments demonstrated the robustness of the control framework to modeling errors and showed that it could achieve a good targeting accuracy even after some motions of the tissues occurred. Several ways to fulfill the safety task were compared, and it was found that aligning the needle base with the insertion point during the insertion could provide a trade-off between needle bending and deformations of the tissues. Overall, the framework proved to enable the accurate steering of the tip of the needle toward a target while ensuring low deformations of the tissues. However we only considered virtual targets that were fixed in space, and the reduction of the deformations of the tissues was only assessed in stationary tissues. In order to address these two points, in chapter 5 we will consider the compensation of external motions of the tissues during the needle insertion. The control framework will be extended to cope with moving tissues and to integrate force feedback in the control law.

Chapter 5

Needle insertion with tissue motion compensation

In chapter 4 we proposed a framework to steer a beveled-tip flexible needle under visual guidance.
This framework uses all 6 degrees of freedom of the needle base to provide increased targeting performances compared to only using the natural deflection of the beveled tip. It also allows other tasks to be fulfilled at the same time, such as ensuring the safety of the insertion procedure for the patient. In particular, the efforts exerted by the needle on the tissues should be reduced to the strict minimum to avoid further damage caused by the needle. However these efforts may arise not only from the manipulation of the needle but also from the motions of the tissues themselves. In this chapter we focus on the compensation of such motions of the tissues during the insertion of a needle. The effect of tissue motions on the performances of the needle tracking has already been covered in chapter 3, and in this chapter we will focus on tracking the motions of a real target. We also propose further adaptations of the steering framework designed in chapter 4 to decrease the risks of tearing the tissues due to the lateral tissue motions. We will consider the case of force feedback to perform the steering with motion compensation. The chapter is organized as follows. In section 5.1, we first present some possible causes of tissue motions and an overview of current available techniques that may be used for motion compensation. We then propose some extensions of our control framework in section 5.2 to handle motion compensation via visual feedback or force feedback. The tracking of a moving region of interest using the ultrasound (US) modality will be the focus of section 5.3. Finally, in section 5.4 we report the results obtained using the proposed framework to perform needle insertions in moving ex-vivo tissues, using 2D US together with electromagnetic tracking as position feedback as well as force feedback for motion compensation. The work presented in this chapter was published in an international journal article [CSB+18].
Tissue motion during needle insertion

Tissue motion is a typical issue that arises during needle insertion procedures. When the procedure is performed under local anesthesia, it is possible that the patient moves in an unpredictable manner. In that case, general anesthesia can be needed to reduce unwanted motions [Flaishon et al., "An evaluation of general and spinal anesthesia techniques for prostate brachytherapy in a day surgery setting"]. Whatever the chosen anesthesia method, physiological motions of the patient still occur, mainly due to natural breathing. Motion magnitudes greater than 1 cm can be observed in the case of insertions performed near the lungs, like lung or liver biopsies [HMB+10]. A first consequence of tissue motions is that the targeted region is moving. This can be compensated for by using real-time visual feedback to track the moving target and a closed-loop control scheme to insert the needle toward the measured position of the target. Target tracking using visual feedback is further discussed in section 5.3. Another point of concern in current works on robot-assisted procedures is that the needle is rigidly held by a mechanical robotic system. In the case of base manipulation control, a long part of the needle is outside of the tissues at the early stage of the insertion. Tissue motions can then induce a bending of this part of the needle and modify the orientation of the needle tip. This can greatly influence the resulting tip trajectory, especially if the insertion was planned pre-operatively for an open-loop insertion. In the case of tip-based control, the robotic device is often maintained close to the tissue surface to avoid any bending and buckling of the flexible needle outside the tissues. Hence the needle cannot really bend or move laterally, inducing direct damage to the tissues if the lateral motions of the tissues are large. Motion compensation is thus necessary to limit the risks of tearing the tissues.
Many compensation methods exist and have been applied in various cases.

Predictive control: Predictive control can be used to compensate for periodic motions, like breathing. In this case, the motions of the tissues are first estimated using position feedback and then used to predict the future motions so that they can be compensated for. Cameras and visual markers can be used to track the surface of the body, as was done by Ginhoux et al. [GGdM+05]. However, this does not provide full information on what is happening inside the body, and anatomical imaging modalities can be used instead. For example, Yuen et al. [YPV+10] used 3D ultrasound (US) for beating heart surgery to track and predict the 1D motions of the mitral annulus in the direction of a linear surgical tool. The main drawback of this kind of predictive control is that the motion is assumed to be periodic with a fixed period. This can require placing the patient under artificial breathing, which is usually not the case for classical needle insertions. In the last example, motion compensation of the beating heart was actually performed using a force sensor located between the tissues and the tip of the surgical tool. The motion estimation provided by the visual feedback was only used as a feed-forward term for a force controller.

Force feedback: Force control is another method used to perform motion compensation. For needle insertion procedures, a separate force sensor was used by Moreira et al. to estimate the tissue motions in the insertion direction. The estimated motion was used to apply a periodic velocity to the needle in addition to the velocity used for the insertion. Impedance or admittance control is also often used to perform motion compensation, since tissue damage can directly be avoided by reducing the force applied to the tissues.
This usually requires first modeling the dynamic behavior of the tissues. Many models have been proposed for this purpose [Moreira et al.]. Atashzar et al. [AKS+13] attached a force sensor directly to a needle holder. The force sensor was maintained in contact with the surface of the tissues during the insertion, allowing the needle holder to follow the motions of the tissues. While axial tissue motions could be accurately compensated for, lateral tissue cutting may still occur in such a configuration since the tissues can slip laterally with respect to the sensor. The force sensor can also be directly attached between the manipulator and the needle, as was done by Cho et al. [CSK+15][KSKK16]. This way, lateral tissue motions could be compensated for. Motion compensation in the insertion direction is, however, difficult to perform in this case. Indeed, the insertion naturally requires a certain amount of force to overcome the friction, stiction or tissue cutting forces, such that it is difficult to separate the effect of tissue motions from the necessary insertion forces. Since cutting the tissues in the insertion direction is necessary during the needle insertion procedure, we choose to focus only on lateral tissue motions. These lateral motions are also likely to cause more damage due to a tearing of the tissues. In order to be able to adapt to any kind of lateral motions, such as unpredictable patient motions, in the following we do not consider the case of predictive control. Instead, we propose to adapt the needle steering framework that we defined in chapter 4 so that it can incorporate force feedback in a reactive control scheme to compensate for tissue motions.
Motion compensation in our task framework

In this section we present an extension of the needle steering framework that we proposed in section 4.3 in order to enable motion compensation. We only consider the case of lateral motion compensation to avoid tissue tearing. Compensation in the insertion direction is less critical in our case since it only has an effect at the tip of the needle, which is already controlled by a task designed in section 4.3.4.1 to reach the target. Motion compensation can easily be integrated in our needle steering framework by adding a task to the controller. In the following, we first discuss the use of the safety tasks that were designed in section 4.3.4.2 and then we propose a new task design to use the force feedback provided by a force sensor.

Geometric tasks: A lateral motion of the tissues is equivalent to moving the rest position of the path cut by the needle in the tissues (see section 2.4.2 for the definition of this path), which also modifies the initial position of the insertion point. The safety tasks designed previously to provide a safe behavior of the insertion in stationary tissues can thus directly be used to perform motion compensation. The task designed to minimize the distance between the insertion point and the needle shaft can naturally compensate for the tissue motions since the needle shaft remains close to the insertion point. The task designed to align the needle base with the insertion point will also naturally follow the tissue motions. In this case it is possible that the needle base only rotates to align with the moving insertion point but does not translate. However, if the needle base does not translate while the tissues are moving laterally, then the needle tip deviates from its desired trajectory.
In this case, motion compensation can be obtained by combining the safety task with a targeting task that controls the lateral motions of the tip, such as the tip translations or the alignment of the tip axis with the target. The task designed to minimize the bending of the needle will also be sensitive to tissue motions. Indeed, if the needle is in a state of minimal bending energy, then an external motion of the tissues introduces an additional bending of the needle that the task will compensate. However, this task should not be used for stability reasons, as was discussed in section 4.4.1.2. The main issue concerning the implementation of these safety tasks is that the initial rest position of the insertion point must be known. This rest position cannot be observed directly, even with external tissue tracking, since the observable position of the insertion point results from the lateral interaction with the needle. Therefore an estimation is required, for example using the model update method that we proposed in section 3.5.2, such that it gives a correct estimation of the real state of the tissues. However, as was discussed in section 3.6, an exact estimation of the position of the tissues is difficult to obtain due to their non-linear properties and the modeling approximations. Therefore, we propose instead to use force feedback, which directly provides a measure of the interaction of the needle with the tissues and does not rely on a good estimation of the tissue position.

Force feedback: The ultimate goal of the motion compensation is to reduce the lateral efforts exerted by the needle on the tissues in order to avoid tearing them. A task can then directly be designed to minimize these efforts. In practice it is hard to directly measure the forces applied on the tissues at each point of the needle, as it would require a complex design to place force sensors all along the needle shaft.
The forces could be retrieved indirectly by using the full shape of the needle and its mechanical properties. This would require shape sensors integrated in the needle, like fiber Bragg grating (FBG) sensors [PED+10], which is not always desirable due to the additional design complexity. An imaging modality allowing a full view of the needle could also be used. However, viewing the whole needle is not possible with ultrasound (US) imaging since it is limited to the inside of the tissues. It could be possible to use 3D computerized tomography (CT) or magnetic resonance imaging (MRI); however, their acquisition time is too slow for real-time control. The ideal case would be to measure the forces at only one location of the needle, so that no special modification of the needle itself is required. It can be noted that in a static configuration the total force exerted at the base of the needle corresponds to the sum of the forces exerted along the needle shaft. Inertial effects can usually be ignored in practice because of the low mass of the needle, such that the static approximation is valid in most cases. Therefore, minimizing the lateral force exerted at the base of the needle should also reduce the efforts exerted on the tissues. In the following we thus propose to design a task for our steering framework in order to minimize this lateral force.

Lateral force reduction task: Let us define the lateral component f_l ∈ R^2 of the force exerted on the needle base. As mentioned previously, we ignore the axial component since it is necessary for the insertion of the needle. The task Jacobian J_f ∈ R^{2×6} and the desired variation ḟ_l,d of the task are defined such that

ḟ_l = J_f v_b,    (5.1)
ḟ_l,d = -λ_f f_l,    (5.2)

where λ_f is a positive control gain that tunes the exponential decrease rate of f_l, and we recall that v_b is the velocity screw vector of the needle base.
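As an illustration, the finite-difference computation of J_f and the control law (5.2) can be sketched as follows. The `model_force` interface is a hypothetical stand-in for the spline-based interaction model, and the standalone pseudo-inverse resolution of the task is only shown for clarity; in the actual framework this task is fused with the other tasks of the controller.

```python
import numpy as np

def estimate_force_jacobian(model_force, v_dim=6, dt=0.01):
    """Finite-difference estimate of J_f in (5.1): each column is the
    variation of the simulated lateral base force per unit base velocity
    applied during dt. `model_force(v, dt)` is a hypothetical interface
    standing in for the needle-tissue interaction model."""
    f0 = np.asarray(model_force(np.zeros(v_dim), dt), dtype=float)
    J_f = np.zeros((f0.size, v_dim))
    for i in range(v_dim):
        v = np.zeros(v_dim)
        v[i] = 1.0  # unit velocity along the i-th degree of freedom
        J_f[:, i] = (np.asarray(model_force(v, dt)) - f0) / dt
    return J_f

def lateral_force_task(f_l, J_f, lambda_f=2.5):
    """Desired task variation (5.2), and the minimum-norm base twist
    that would realize it if the task were resolved alone."""
    f_dot_d = -lambda_f * np.asarray(f_l, dtype=float)
    v_b = np.linalg.pinv(np.asarray(J_f, dtype=float)) @ f_dot_d
    return f_dot_d, v_b
```

With a purely linear force model, the finite-difference estimate recovers the underlying matrix exactly; for the real non-linear model it only gives a local linearization around the current state.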
The task Jacobian J_f is computed from the interaction model using the finite difference method (4.27) presented in section 4.3.3. For this computation, the lateral force can directly be computed as the shear force applied at the level of the needle base. Using the two-body model defined in section 2.4.2, this can be expressed according to

f_l = E I (d³c_N(l)/dl³)|_{l=0},    (5.3)

where we recall that E is the Young's modulus of the needle, I is its second moment of area and c_N is the spline curve representing the needle. This task will be used in the following to perform motion compensation when a force sensor is available to provide a measure of the interaction force at the base of the needle. This measure will be used as the input to the control law (5.2). However, motion compensation during a needle insertion procedure is not limited to the reduction of the damage done to the tissues. In order to obtain good targeting performances while the tissues are moving, the motions of the target should also be measured. Therefore, in the following we focus on the tracking of a moving target using an imaging modality.

Target tracking in ultrasound

In all previous experiments we only considered the case of virtual targets. However, in practice, the needle should be accurately steered toward a real target. The target can be moving, either due to physiological motions of the patient or due to the effect of the insertion of the needle on the tissues. In this section we present a tracking algorithm that we developed to follow the motion of a moving spherical target in 2D ultrasound images.

Target tracking in 2D ultrasound

We use a custom tracking algorithm based on the Star algorithm [Friedland et al.] to track the center of a circular target in 2D ultrasound (US) images.
This kind of tracking has proved to yield good performances for vessel tracking [GSM+07]. The process of the tracking algorithm is described in Alg. 1 and illustrated in Fig. 5.1, and we detail its functioning in the following.

Algorithm 1: Target tracking. Initialization is performed manually by selecting the target center p_center and the radius r in the image, as well as the number N of rays for the Star algorithm. A square pixel patch I_patch centered around p_center is extracted for the template matching.

    I_patch, p_center, r, N ← INITIALIZE_TRACKING()
    while Tracking do
        I ← ACQUIRE_IMAGE()
        p_center ← TEMPLATE_MATCHING(I, I_patch)
        E ← ∅
        for i ∈ [0, N-1] do        (Star algorithm)
            θ ← 2πi / N
            Ray ← TRACE_RAY(p_center, 2r, θ)
            p_edge ← EDGE_DETECTION(Ray)
            E ← E ∪ {p_edge}
        end
        p_center, r ← CIRCLE_FITTING(E)
        I_patch ← EXTRACT_REFERENCE_PATCH(I, p_center)
    end

Template matching: This kind of technique is widely used in image processing in general and consists in finding the patch of pixels in an image that best corresponds to a reference patch. Many similarity criteria can be used to assess the resemblance between two patches, like the sum of squared differences, the sum of absolute differences or the normalized cross-correlation, each having their pros and cons. The reference patch can also be defined in two main ways. The first one consists in extracting a patch in the previous image at the location of the object. This way the object can be tracked all along the image sequence even if its shape changes. However, the accumulation of errors can cause the tracking to drift. The second one is to take a capture of the object of interest at the beginning of the process and keep it as a reference. This avoids drift, but the tracking can fail if the shape of the object changes. In our case we first apply a template matching between two successive images to get a first estimation of the target motion.
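A minimal brute-force implementation of template matching with the sum of squared differences might look as follows; production code would typically use an optimized library routine and restrict the search to a window around the previous target position.

```python
import numpy as np

def template_match_ssd(image, patch):
    """Exhaustive template matching with the sum of squared differences.
    Returns the (row, col) of the top-left corner of the best match."""
    H, W = image.shape
    h, w = patch.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            d = image[r:r + h, c:c + w] - patch
            s = np.sum(d * d)  # SSD similarity: lower is better
            if s < best:
                best, best_pos = s, (r, c)
    return best_pos
```

The quadratic search cost is acceptable only for small search windows, which is why the patch is searched around the previous estimate in practice.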
This is represented by the TEMPLATE_MATCHING function in Alg. 1. We chose here the sum of squared differences as similarity measure because it is fast to compute and usually yields good matching. The possible drift will be canceled by the following step of the algorithm, which is the Star algorithm.

Star algorithm: We use the Star algorithm to refine the tracking and to remove the drift induced by the successive template matchings, by exploiting the a priori shape of the target. The Star algorithm is initialized around the center of the target estimated by the template matching. Angularly equidistant rays are then projected from the target center (see Fig. 5.1). The length of each ray is chosen higher than the diameter of the target to ensure that each ray crosses a boundary of the target. An edge detector is run along each ray to find these boundaries. Contrary to the boundaries of a vessel, which are almost anechoic, we consider here a hyperechoic target. Using a classical gradient-based edge detector, as was done for the needle tracking in camera images (section 3.4.1), false edge detections could arise due to noise and inhomogeneities inside the target. To reduce this effect, we find the boundary along each ray as the point which maximizes the difference between the mean intensities on the ray before and after this point.

Figure 5.1: Illustration of the Star algorithm used for the tracking of a circular target in 2D ultrasound. The blue dot is an initial guess of the target center from which rays are projected (blue lines). The estimation of the target center (green cross) is obtained using circle fitting (green circle) on the boundaries detected along each ray (yellow dots).
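The boundary criterion just described and the circle fitting step of the algorithm can be sketched as follows on a discretized ray. The Kåsa algebraic least-squares fit is one possible choice for the circle fitting; the thesis does not specify which fitting method is actually used, so this is an assumption.

```python
import numpy as np

def edge_along_ray(intensities):
    """Discrete split criterion: return the index that maximizes the
    difference between the mean intensity before and after the split
    (hyperechoic target: bright inside, darker outside)."""
    I = np.asarray(intensities, dtype=float)
    csum = np.cumsum(I)
    n = I.size
    best, best_k = -np.inf, 1
    for k in range(1, n):  # split between I[:k] and I[k:]
        diff = csum[k - 1] / k - (csum[-1] - csum[k - 1]) / (n - k)
        if diff > best:
            best, best_k = diff, k
    return best_k

def fit_circle(points):
    """Algebraic least-squares circle fit (Kåsa method) on Nx2 points."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2 * P[:, 0], 2 * P[:, 1], np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r
```

The cumulative sum makes the split criterion linear in the ray length, so the whole Star update remains cheap even with many rays.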
This is equivalent to finding the distance l_edge along each ray such that

l_edge = arg max_{L∈[0,2r]} ( (1/L) ∫₀^L I(l) dl − (1/(2r−L)) ∫_L^{2r} I(l) dl ),    (5.4)

where 2r is the length of the ray and I(l) is the pixel intensity along the ray at the distance l from the initial center (see blue dot and lines in Fig. 5.1). Finally, a circle fitting is performed on the detected boundaries to find the center of the target, as illustrated in Fig. 5.1. This new estimation of the target center is used to extract a new reference patch for template matching, and the whole process is repeated for the next image. Both steps of the algorithm are complementary. Template matching can be used to find the target in the whole image if necessary; however, its performances are degraded by noise and intensity variations, which cause a drift over time. On the contrary, the Star algorithm can find the real center of the target and adapt to noise, changes of intensity and, up to a certain extent, to changes of the shape of the target. However, it requires that the initial guess of the center lies inside the real target in the image. Template matching is thus a good way to provide this initialization. Overall, this tracking algorithm is relatively robust and can be used to track a moving target in 2D US images in spite of speckle noise or shape variations. It can also easily be adapted to track a 3D spherical target in 3D US volumes.

Target tracking validation in 2D ultrasound

In this section we provide the results of experiments performed to validate the performances of the tracking algorithm developed in the previous section.

Experimental conditions (setup in the Netherlands): The UR5 robot is used to move a gelatin phantom with embedded play-dough spherical targets. We use the 3D wobbling probe and the ultrasound (US) station from Siemens to acquire the 2D US images.
A cross-section of the volume is selected to be displayed on the screen of the station such that it contains the target and is normal to the probe axis (US beam propagation direction). The screen of the US scanner is then transferred to the workstation using a frame grabber. The acquisition parameters of the US probe are set to acquire 42 frames during a sweeping motion with an angle of 1.08° between successive frames. The field of view of each frame is set to 70° and the acquisition depth is set to 10 cm, resulting in the acquisition of one volume every 110 ms. The targets are between 24 mm and 64 mm from the probe transducer in the experiments, which leads to a maximum resolution of the US image between 0.45 mm × 0.73 mm and 0.70 mm × 1.49 mm.

Experimental scenario: A 3D translational motion is applied to the phantom to mimic the displacement of the liver during breathing [HMB+10]. The applied motion m(t) has the following profile:

m(t) = a + b cos⁴(πt/T − π/2),    (5.5)

where a ∈ R^3 is the initial position of the target, b ∈ R^3 is the magnitude of the motion and T is the period of the motion. The magnitude of the motion is set to 7 mm and 15 mm respectively in the horizontal and vertical directions in the image, which corresponds to a typical amplitude of motion of the liver during breathing [HMB+10]. No motion is applied in the out-of-plane direction. The period of the motion is set to T = 5 s. After manual initialization, the tracking is performed for a duration of 30 s, corresponding to 6 periods of the motion.

Results: The position of the tracked target is compared with the ground truth obtained from the odometry of the UR5 manipulator. An example of the evolution of the target position is shown in the corresponding figure. (Figure caption: The motion described by (5.5) is applied to the gelatin phantom with a period T = 5 s. The global mean tracking error is 3.6 mm for this experiment. However, it reduces to 0.6 mm after compensating for the delay of about 450 ms introduced by the data acquisition.)
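For reproducibility, the breathing-like motion profile (5.5) can be generated as in the following sketch; the vector shapes and units (meters) are assumptions.

```python
import numpy as np

def breathing_motion(t, a, b, T=5.0):
    """Liver-like periodic motion profile (5.5):
    m(t) = a + b * cos^4(pi * t / T - pi / 2).
    a: initial position (3,), b: motion magnitude (3,), T: period in s."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return a + b * np.cos(np.pi * t / T - np.pi / 2) ** 4
```

The cos⁴ term stays in [0, 1], so the motion oscillates between the rest position a and the extreme position a + b, with a smooth dwell near the rest position that resembles the pause between breaths.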
This delay includes the sweeping of the probe, the conversion of the volume to Cartesian space, the extraction of the slice to display on the screen, the transfer of the image to the workstation and finally the tracking process. The sweeping takes around 110 ms and the mean tracking time is 300 µs, which indicates that the remaining latency of about 340 ms should mostly be due to the post-scan conversion and the frame grabbing. In order to assess the quality of the tracking algorithm, the actual positioning accuracy is measured by adding a delay to the ground truth signal. The mean tracking errors over time between the delayed ground truth and the measures are summarized in Table 5.1. Sub-millimeter accuracy is obtained, which is sufficient for most medical applications. We can also observe that the tracking accuracy decreases with the distance of the target from the probe. This is due to two factors mentioned in section 3.2. First, we use a convex wobbling probe, which means that the distance between the different US beams increases as they get further away from the transducer. Additionally, each beam also tends to widen during its propagation due to the diffusion phenomenon. Overall, the resolution of the 2D image extracted from the 3D volume naturally decreases when its distance from the probe increases. This confirms that the algorithm yields excellent tracking performances and is only limited by the resolution and the latency of the acquisition system. Hence, we use this algorithm in the following to perform needle insertions toward real moving targets.

Redefinition of control inputs and tasks: Using this setup, the control inputs consist of the velocity screw vector v_UR ∈ R^6 of the end-effector of the UR3 plus the 2 velocities v_NID ∈ R^2 of the NID. Hence, we define the control vector v_r ∈ R^8 of the whole robotic system as the stacking

v_r = [v_UR ; v_NID].
(5.6)

In the following we assume that v_UR is expressed as the velocity screw vector of the frame of the tip of the NID, corresponding to the frame {F_b} depicted in Fig. 5.4. In order to use our steering framework based on task functions with this system, the Jacobian matrices associated to the different tasks that we defined in sections 4.3.4 and 5.2 need to be modified to take into account the additional degrees of freedom (DOF) of the NID. The Jacobian matrix J ∈ R^{n×8} associated to a task vector e ∈ R^n of dimension n is now defined such that

ė = J v_r.    (5.7)

Figure 5.5: Illustration of the three different configurations used to insert the needle. The needle insertion device (NID) is shown in black and the needle is the green line. For configurations 1 and 2, the needle is fully outside and does not slide any further in the NID. For configuration 3, it starts fully inside and can slide in the NID. No constraint is added on the external motion of the NID for configuration 1, while a remote center of motion (RCM) is applied around the insertion point for configurations 2 and 3. Additionally, no translation of the tip of the NID is allowed for the third configuration.

Note that we still use our needle model and the method defined in section 4.3.3 to compute the Jacobian matrices. However, the method is adapted to add the two additional DOF of the NID. For simplicity, in the following we will keep the same notations that we used in sections 4.3.4 and 5.2 for the Jacobian matrices of the different tasks. We will also refer to the equations presented in these sections for the definitions of the tasks.

Insertion configurations and control laws: We compare three different insertion configurations, as depicted in Fig. 5.5. The first two configurations are used to simulate the case of a needle held by its base.
The needle is fully outside of the NID and no control of the translation stage inside the NID is performed, which is equivalent to having a 10.8 cm long needle held by its base. A remote center of motion around the insertion point is added for configuration 2. For the third configuration, the tip of the NID (center of frame {F_b} in Fig. 5.4) is set in contact with the surface of the phantom and the needle is initially inside the NID. The insertion is then performed using the translation stage of the NID, resulting in a variable length of the part of the needle that is outside the NID. A remote center of motion is also added around the tip of the NID, which is equivalent to the insertion point in this case. We use several tasks to define the control associated to each configuration and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. Four tasks are common to all configurations and are defined as follows.

• The first task controls the insertion velocity v_t,z of the needle tip along the needle axis, as defined by (4.36) and (4.37). We set the insertion velocity v_tip to 3 mm/s.

• The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω_z,max is set to 180 °/s and the gain λ_σ is set to 10 (see (4.56)), such that the maximal rotation velocity is used when the bevel orientation error is higher than 18°.

• The third task controls the alignment angle θ between the needle tip and the target, as defined by (4.38), (4.42) and (4.43). The control gain λ_θ is set to 1. Due to the stiffness of the phantom and the high flexibility of the needle, this task can rapidly come close to singularity once the needle is deeply inserted. In order to avoid high control outputs, this task is deactivated once the needle tip has been inserted 2 cm.
Given the insertion velocity and the gain set for this task, this gives enough time to globally align the needle with the target, such that the tip-based control tasks (first and second tasks) are then sufficient to ensure a good targeting.

• The fourth task is used to remove the rotation velocity ω_UR,z of the UR3 around the needle axis. Indeed, we can observe that this rotation has the same effect on the needle as the rotation ω_NID of the needle inside the NID. However, using the UR3 for this rotation would result in unnecessary motions of the whole robotic arm and of the NID, which could pose safety issues for the surroundings. Therefore we add a task to set ω_UR,z to zero. The Jacobian matrix J_ω_UR,z ∈ R^{1×8} and the desired value ω_UR,z,d associated to this task are then defined as

J_ω_UR,z = [0 0 0 0 0 1 0 0],    (5.8)
ω_UR,z,d = 0.    (5.9)

For the first two configurations, an additional task is added to remove the translation velocity v_NID of the needle inside the NID. The Jacobian matrix J_v_NID ∈ R^{1×8} and the desired value v_NID,d associated to this task are then defined as

J_v_NID = [0 0 0 0 0 0 1 0],    (5.10)
v_NID,d = 0.    (5.11)

The final control vector v_r,1 ∈ R^8 for the first configuration is then computed according to

v_r,1 = [ J_v_t,z ; J_σ ; J_θ ; [0 0 0 0 0 1 0 0] ; [0 0 0 0 0 0 1 0] ]⁺ [ v_t,z,d ; σ̇_d ; θ̇_d ; 0 ; 0 ],    (5.12)

where the Jacobians and the desired values are stacked vertically and ⁺ denotes the Moore-Penrose pseudo-inverse. Note that the task space is here of dimension 5 while the input space is of dimension 8, such that all tasks should ideally be fulfilled. For the second configuration, a task is added to align the needle base with the initial position of the insertion point at the surface of the phantom, such that there is a remote center of motion. This task is defined via the angle γ between the needle base axis and the insertion point, as defined by (4.66), (4.70) and (4.71).
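The pseudo-inverse fusion of stacked tasks used in (5.12), and in the control laws of the other configurations, can be sketched as follows; the Jacobians and desired values are illustrative placeholders.

```python
import numpy as np

def fuse_tasks(jacobians, desired):
    """Classical task-function fusion: stack the task Jacobians and the
    desired task variations, then solve in the least-squares sense with
    the Moore-Penrose pseudo-inverse, as in (5.12)."""
    J = np.vstack([np.atleast_2d(Ji) for Ji in jacobians])   # (sum n_i) x 8
    e_d = np.concatenate([np.atleast_1d(di) for di in desired])
    return np.linalg.pinv(J) @ e_d                           # control vector v_r
```

When the stacked task space has lower dimension than the 8-dimensional input space and the tasks are compatible, the pseudo-inverse returns the minimum-norm control that fulfills all of them exactly; otherwise it returns the least-squares compromise.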
The final control vector v_r,2 ∈ R^8 for this configuration is then computed according to

v_r,2 = [ J_v_t,z ; J_σ ; J_θ ; J_γ ; [0 0 0 0 0 1 0 0] ; [0 0 0 0 0 0 1 0] ]⁺ [ v_t,z,d ; σ̇_d ; θ̇_d ; γ̇_d ; 0 ; 0 ].    (5.13)

Note that the task space is here of dimension 6 while the input space is of dimension 8, such that all tasks should ideally be fulfilled. Finally, a remote center of motion is applied at the insertion point for the third configuration. Since the tip of the NID is directly located at the insertion point, this is achieved by adding a task to remove the translation velocity v_UR ∈ R^3. The Jacobian matrix J_v_UR ∈ R^{3×8} and the desired value v_UR,d associated to this task are then defined as

J_v_UR = [I_3 0_{3×5}],    (5.14)
v_UR,d = 0,    (5.15)

where I_3 is the 3 by 3 identity matrix and 0_{3×5} is the 3 by 5 null matrix. The final control vector v_r,3 ∈ R^8 for this configuration is then computed according to

v_r,3 = [ J_v_t,z ; J_σ ; J_θ ; [0 0 0 0 0 1 0 0] ; [I_3 0_{3×5}] ]⁺ [ v_t,z,d ; σ̇_d ; θ̇_d ; 0 ; 0 ].    (5.16)

Note that the task space is here of dimension 7 while the input space is of dimension 8, such that all tasks should ideally be fulfilled.

Experimental scenario: The needle is first placed perpendicular to the surface of the tissues such that its tip barely touches the phantom surface. Then a straight insertion of 8 mm is performed, corresponding to the minimal length of the needle that remains outside the NID when the needle is fully retracted inside it. This way, the tip of the NID is just at the level of the phantom surface for the third configuration. This is also done for the two other configurations such that the initial length of needle inside the phantom is the same for every experiment. Four insertions are performed for each configuration.
The virtual target is initialized such that it is 7 cm in the insertion direction and 1 cm in a lateral direction with respect to the needle axis. A different lateral direction is used for each experiment, such that the different targets are rotated by about 90° around the needle axis between each experiment. The controller is then started, and stopped once the needle tip reaches the depth of the target.

Results: We measure the interaction force exerted at the base of the needle, i.e. in the frame {F_b} depicted in Fig. 5.4, during each experiment. The mean value of the absolute lateral force is summarized for each insertion configuration in Fig. 5.6. We can see that when the NID is near the surface of the tissues, it induces an increase in the amount of force exerted at the needle base compared to the case where the needle base is far from the tissues. This could be expected since the lateral motion of the needle shaft near the needle base is directly applied to the tissues when the whole needle is inserted. In the opposite case, the lateral motion can be absorbed by some amount of bending of the needle body, resulting in less force applied to the tissues. While it is better to reduce the amount of force applied to the tissues, it is also important for the motion compensation to have a good sensitivity of the force measurements to the needle and tissue motions. Using the first two configurations would result in a small and noisy measured force at the base of the needle, due to the damping of the lateral motions of the tissues by the compliance of the needle. Such measures would neither be useful for the model update algorithm nor for the compensation of the tissue motions. These configurations also have the drawback that they require a motion of the whole robot only to insert the needle, which increases the risk of collision with the surroundings.
On the contrary, this is not required with the third configuration, since the insertion can be performed using the internal translation stage of the NID, which is better in order to avoid collisions in a medical context. Additionally, this last configuration offers a great sensitivity to the tissue motions, since a small displacement is sufficient to induce a significant measure of force at the needle base. This is beneficial for the model update as well as for the motion compensation. Therefore, in the following we choose to insert the needle using the third configuration.

Needle insertion with motion compensation

We present here the results of the experiments performed to test the performances of our framework during an insertion with lateral tissue motions.

Experimental conditions (setup in the Netherlands): The setup used to hold and insert the needle is the same as in section 5.4.1. The ATI force torque sensor is still used to measure the force applied to the base of the needle and the Aurora electromagnetic (EM) tracker is used to measure the position and direction of the tip of the biopsy needle. The UR5 robot is used to apply a known motion to a phantom. Two phantoms are used, one with porcine gelatin and one with a bovine liver embedded in the gelatin. Artificial targets made of play-dough are placed in the gelatin phantom. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3 to represent the part of the needle that is outside of the NID, from the frame {F_b} depicted in Fig. 5.7 to the needle tip. We fix the length of the needle segments to 1 cm, resulting in one segment of 8 mm when the needle is retracted to the maximum inside the NID and 11 segments, with the last one measuring 8 mm, when the needle is fully outside. We use a rather hard phantom, such that we set the stiffness per unit length of the model to 35000 N/m².
The length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm. The length and the pose of the base of the needle model are updated using the odometry feedback from the UR3 robot and the NID. The position of the tissue spline in the model is also updated using the force feedback and the EM feedback as input for the update algorithm that we defined in section 3.5.2. The performances of the update algorithm during these experiments have already been described in section 3.6.1. Figure 5.8 summarizes the whole setup and algorithms used for these experiments.

Figure 5.8: Block diagram representing the experimental setup and control framework used to perform needle insertions in a moving phantom. The UR5 robot applies a motion to a phantom. The position of the target is tracked in ultrasound images. Measures from the force torque sensor and electromagnetic (EM) tracker are used to update the needle-tissue interaction model. The model and all measures are used by the task controller to control the UR3 and the needle insertion device in order to steer the needle tip towards the target while compensating for tissue motions.

Control: As explained in section 5.4.1, we consider here the input velocity vector v_r of the whole robotic system defined by (5.6). We use three targeting tasks, one motion compensation task and two additional tasks for the control of the system, and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. The different tasks are defined as follows.

• The first task controls the insertion velocity v_t,z of the needle tip along the needle axis, as defined by (4.36) and (4.37). We set the insertion velocity v_tip to 3 mm.s⁻¹.

• The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56).
The maximal rotation speed ω_z,max is set to 180°.s⁻¹ and the gain λ_σ is set to 10 (see (4.56)), such that the maximal rotation velocity is used when the bevel orientation error is higher than 18°.

• The third task controls the alignment angle θ between the needle tip and the target, as defined by (4.38), (4.42) and (4.43). The control gain λ_θ is set to 1. Due to the stiffness of the phantoms and the high flexibility of the needle, this task can rapidly come close to singularity once the needle is deeply inserted. In order to avoid high control outputs, this task is deactivated once the needle tip has been inserted 2 cm. Given the insertion velocity and the gain set for the task, this gives enough time to globally align the needle with the target, such that the tip-based control tasks (first and second tasks) are then sufficient to ensure good targeting.

• The fourth task is the safety task and it is chosen to reduce the lateral force f_l applied to the base of the needle, as defined by (5.1) and (5.2). The control gain λ_f is set to 2.5.

• The fifth task is used to remove the rotation velocity ω_UR,z of the UR3 around the needle axis, as was discussed in section 5.4.1. It is defined by (5.8) and (5.9).

• The sixth task is used to remove the translation velocity v_UR,z of the UR3 along the needle axis. For similar reasons as for the previous task, we can observe that this translation is redundant with the insertion of the needle by the NID. However, translating the UR3 in this direction could drive the NID into the tissues, which should be avoided for safety reasons. Therefore we add a task to set v_UR,z to zero. The Jacobian matrix J_vUR,z ∈ R^(1×8) and the desired value v_UR,z,d associated to this task are then defined as

J_vUR,z = [0 0 1 0 0 0 0 0],   (5.17)

v_UR,z,d = 0.
(5.18)

The final control vector v_r at the beginning of the insertion is then computed by applying the Moore–Penrose pseudo-inverse of the stacked task Jacobians to the stacked desired task rates:

v_r = [ J_vt,z ; J_σ ; J_θ ; J_f ; [0 0 0 0 0 1 0 0] ; [0 0 1 0 0 0 0 0] ]⁺ [ v_t,z,d ; σ̇_d ; θ̇_d ; ḟ_l,d ; 0 ; 0 ],   (5.19)

where the semicolons separate the rows of each stacked matrix. Once the needle tip reaches 2 cm under the initial tissue surface, v_r is then computed according to

v_r = [ J_vt,z ; J_σ ; J_f ; [0 0 0 0 0 1 0 0] ; [0 0 1 0 0 0 0 0] ]⁺ [ v_t,z,d ; σ̇_d ; ḟ_l,d ; 0 ; 0 ].   (5.20)

The inputs of the targeting tasks are computed using the target position measured from the tracking in US images and the pose of the needle tip measured from the EM tracker. The input of the safety task is computed from the lateral force applied at the needle base that is measured from the force sensor. Note that the task space is here of dimension 6 or 7 while the input space is of dimension 8, such that all tasks should ideally be fulfilled.

Registration: The position of the EM tracking system is registered in the frame of the UR3 robot before the experiments using the method that was presented in section 3.6.1. The force torque sensor is also calibrated beforehand to remove the sensor biases and the effect of the weight of the NID in order to reconstruct the interaction forces applied to the base of the needle. The details of the sensor calibration and the force reconstruction can be found in Appendix A. In order to accurately reach the target, its position must be known in the frame of the needle tip, such that the inputs of the targeting tasks can be computed. An initial registration step is thus required to find the correspondence between a pixel in a 2D US image and its real location in a common frame with the EM tracker, which is the UR3 robot frame in our case. The pose of the 3D US probe is first registered beforehand in the UR3 robot frame using the following method.
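Coming back to the task fusion of (5.19), in the classical task function formulation it reduces to applying the Moore–Penrose pseudo-inverse of the stacked Jacobian to the stacked desired rates. A minimal sketch follows; the function name and the random Jacobians are illustrative assumptions, only the dimensions (6 task rows, 8 inputs) match the text.

```python
import numpy as np

def fuse_tasks(jacobians, desired_rates):
    """Compute the control vector v_r = J_stack^+ @ e_dot_d.

    `jacobians` is a list of (k_i, 8) task Jacobians and `desired_rates`
    the corresponding desired task velocities, stacked as in (5.19).
    """
    J = np.vstack(jacobians)
    e_dot = np.concatenate([np.atleast_1d(r) for r in desired_rates])
    return np.linalg.pinv(J) @ e_dot

# Toy example: 6 one-dimensional tasks over an 8-dimensional input space,
# so all tasks can ideally be fulfilled exactly.
rng = np.random.default_rng(0)
J_tasks = [rng.standard_normal((1, 8)) for _ in range(6)]
rates = [1.0, 0.0, 0.2, 0.0, 0.0, 0.0]
v_r = fuse_tasks(J_tasks, rates)
print(v_r.shape)  # (8,)
```

Since the task space has fewer dimensions than the input space, the pseudo-inverse here returns the minimum-norm input that realizes all desired task rates exactly (for full row rank Jacobians).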
The needle is inserted at different locations near the surface of the phantom and two sets of the positions of the needle tip are recorded, one using the needle manipulator odometry and needle model, and the other one using a manual segmentation of the needle tip in acquired 3D US volumes. Point cloud matching between the two sets is then used to find the pose of the US probe in the frame of the UR3 robot. The drawback of this method is that it is not clinically relevant, since many insertions are required before starting the real insertion procedure. However, for a clinical integration the pose of the probe could be measured using an external positioning system, such as an EM tracker fixed on the probe or through the tracking of visual markers with an optical localization system. Before the beginning of each needle insertion, the acquisition of 3D US volumes is launched and a plane normal to the probe axis is selected to be displayed on the US station screen. This plane is chosen such that it contains the desired target for the insertion. The position of the image in space is then calculated from the probe pose and the distance between the probe and the image, available using the US station. Each image can finally be scaled to real Cartesian space by using the size of a pixel, which is also known using the measurement tool available in the US station. Experimental scenario: Five insertions are performed in each phantom. At the beginning of each experiment the image plane displayed on the US station is selected to contain the desired target. The tracking algorithm is manually initialized by selecting the center of the target and its diameter in the image. The needle is initially inside the NID. An initialization of the insertion is performed by moving the tip of the NID to the surface of the tissues, such that the 8 mm of the needle that remain outside the NID are inserted into the tissues. 
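The point cloud matching step described above, between the odometry-based and US-based tip positions, can be solved with a standard SVD-based (Kabsch) least-squares rigid registration. The following sketch is our own illustration of this technique, not the exact implementation used for the experiments.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P and Q are (N, 3) arrays of corresponding needle-tip positions,
    e.g. from robot odometry and from manual segmentation in the
    3D US volumes. Uses the SVD-based (Kabsch) solution.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Sanity check: recover a known rotation and translation.
rng = np.random.default_rng(1)
P = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ R_true.T + t_true
R, t = rigid_registration(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy correspondences the same closed-form solution gives the least-squares optimal pose, which is what makes it suitable for registering the US probe in the robot frame from a handful of tip positions.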
The update algorithm, needle insertion and tissue motions are then started and stopped once the needle tip has reached the depth of the target. The motion applied to the phantom is similar to the motion used for the target tracking validation, defined by (5.5) in section 5.3.2. The amplitude of the motion is set to 15 mm and 7 mm in the x and z directions of the world frame {F_w} depicted in Fig. 5.7, respectively. Several values of the period of the motion are used, between 10 s and 20 s, as recapped in Table 5.2.

Results: An example of the position of the needle tip measured with the EM tracker and the position of the tracked target during an insertion in the liver phantom can be seen in Fig. 5.9. We can see in Fig. 5.9b that both the needle and the target follow the motions of the tissues shown in Fig. 5.9a. The needle tip is steered toward the target and reaches it at the end of the insertion, as can be seen in Fig. 5.9.

Table 5.2: Summary of the conditions and results of the insertions performed in a gelatin phantom and a bovine liver embedded in gelatin. Different periods T are used for the motion of the phantom. The target location in the initial tip frame is indicated for each experiment. The error is calculated as the absolute lateral distance between the needle tip axis and the center of the target at the end of the insertion. The mean and standard deviation of the error for each kind of phantom are presented separately.

An example of a slice extracted from a final volume in gelatin is depicted in Fig. 5.12, showing that the center of the target can be accurately reached. Table 5.2 gives a recap of the initial position of the targets in the initial frame of the needle tip and the absolute lateral errors obtained at the end of each experiment. We can see that the target can be reached with an accuracy under 4 mm in all cases. This can be sufficient in clinical applications to reach medium sized tumors near moving structures.
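The targeting error reported in Table 5.2 is the absolute lateral distance between the needle tip axis and the target center. Assuming the tip pose and the segmented target center are expressed in a common frame, this quantity can be computed as in the short sketch below (function and variable names are ours).

```python
import numpy as np

def lateral_error(tip_pos, tip_dir, target):
    """Absolute lateral distance between the needle tip axis and a target.

    `tip_pos` and direction vector `tip_dir` define the tip axis (e.g.
    measured by the EM tracker); `target` is the target center segmented
    in the final 3D US volume. All quantities are in a common frame.
    """
    d = np.asarray(tip_dir, float)
    d = d / np.linalg.norm(d)                 # unit axis direction
    v = np.asarray(target, float) - np.asarray(tip_pos, float)
    lateral = v - np.dot(v, d) * d            # remove the axial component
    return np.linalg.norm(lateral)

# Target 1 mm off-axis from a tip pointing along z (units in metres).
print(lateral_error([0, 0, 0], [0, 0, 1], [0.001, 0.0, 0.07]))
```

This is simply the norm of the projection of the tip-to-target vector onto the plane orthogonal to the tip axis, which is why it ignores any remaining depth offset along the insertion direction.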
We can note that we choose not to compensate for the latency introduced by the acquisition system (see section 5.3.1). Indeed, this latency will mostly be unknown during a real operation and can also vary depending on the chosen acquisition parameters, such as the field of view and the resolution of the scanning. This point could be addressed if higher accuracy is required for a specific application. The mean targeting error is also higher in biological tissues than in gelatin. Several factors may explain this observation. The main reason is certainly that the tracking of the target is more challenging in this case. The target is less visible and the level of noise is increased in the image due to the texture of the tissues, as can be seen when comparing Fig. 5.10 and 5.11. This also limits the accuracy of the detection of the target center in the final 3D US volume. The evaluation of the final targeting accuracy is thus subject to more variability. The inhomogeneity of the tissues can also be a cause of deviation of the needle tip from its expected trajectory. However the tip-based targeting task orienting the bevel edge toward the target should alleviate this effect.

Motion compensation: Finally, let us consider the motion compensation aspect during the insertion. Due to the proximity of the NID to the surface of the phantom, the lateral forces measured at the needle base are very similar to the forces applied to the tissues, such that we can use these measures to estimate the stress applied to the tissues. Table 5.3 gives a recap of the maximum and mean lateral forces applied to the needle base during the insertion. We can observe that these forces are maintained under 1.0 N during all the experiments, which is sufficiently low to avoid significant tearing of the tissues. In clinical practice, this is also lower than the typical forces that can be necessary to puncture the tissues and insert the needle.
The framework is thus well adapted to perform motion compensation and to avoid tissue damage. Note that we did not perform full insertions without the motion compensation to compare the forces obtained in this case. However, we could observe during preliminary experiments that applying a lateral motion to the phantom while the needle is inserted in the tissues and fixed by the NID results in a large cut in the gelatin and also damages the needle. A small amount of tearing could still be observed at the surface of the gelatin when the motion compensation was performed, essentially due to the natural weakness of the gelatin and the cutting effect when the needle is inserted while a lateral force is applied. Nevertheless, it can be expected that real biological tissues are more resistant and would not be damaged in this case.

Figure 5.13 shows a representative example of the motions of the tissues and the lateral forces measured during an insertion in the bovine liver. Two phases can clearly be distinguished on the force profile. During the first 6 seconds, some fast variations of the force can be observed. This corresponds to the phase where all tasks are active using (5.19). The robotic system is thus controlled to explicitly align the needle tip axis with the target while reducing the applied force. Since the target is initially misaligned, a lateral rotation is necessary, which naturally introduces an interaction with the phantom. Fast motions of the system are thus observed, resulting from the interaction between the alignment and motion compensation tasks. After the tip alignment task has been deactivated, all remaining tasks in (5.20) are relatively independent, since inserting and rotating the needle does not introduce significant lateral forces. This results in a globally smoother motion, where the needle is simply inserted while the lateral motion of the NID naturally follows the motion of the phantom.
Therefore motion compensation is clearly achieved in this case. We can see that the lateral force is well driven towards zero when the tissues are moving slowly. However, the amount of lateral force also tends to increase when the velocity of the tissues increases, which is due to the proportional nature of the control law (5.2). This could be improved by modifying the control law in order to reduce the motion following error, for example by adding an integral term in addition to the proportional gain.

Conclusions: These experiments confirm that, when the needle has a great level of flexibility, lateral motion of the needle should only be performed at the beginning of the insertion to align the needle with the target. It allows a fast and efficient way to modify the needle trajectory without having to insert the needle, which would not be possible by exploiting only the natural deflection of the tip. However, once the needle is inserted deeper in the tissues, the motion of the base has only a low effect on the tip trajectory compared to its effect on the force applied to the surface of the tissues. Alternating between several tasks depending on the state of the insertion has thus proved to be a good way to exploit the different advantages of each task while reducing their undesirable effects. These experiments also demonstrate that motion compensation can be performed at the same time as the accurate steering of the needle tip toward a target. Here we use only a first order control law for the force reduction task, which proves to be sufficient to yield good motion compensation. A more advanced impedance control could be used to obtain even better results and to reduce even further the applied lateral force [Moreira et al.]. Overall this set of experiments also demonstrates the great flexibility of our control framework.
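As an illustration of the integral-term suggestion above, a proportional-integral variant of the force reduction law could be sketched as follows. The class interface, the integral gain and the anti-windup bound are purely illustrative assumptions; only the proportional gain value comes from the experiments.

```python
import numpy as np

class LateralForcePI:
    """Proportional-integral version of the lateral force reduction task.

    A purely proportional law drives the desired force rate with
    f_dot_d = -kp * f_l, which leaves a steady following error when the
    tissues move at constant velocity; the integral term reduces it.
    """
    def __init__(self, kp=2.5, ki=0.5, i_max=1.0):
        self.kp, self.ki, self.i_max = kp, ki, i_max
        self.integral = np.zeros(2)

    def desired_rate(self, f_lateral, dt):
        """Desired rate for the 2D lateral force, given the last sample."""
        self.integral += np.asarray(f_lateral) * dt
        # Simple anti-windup: clamp the accumulated integral term.
        self.integral = np.clip(self.integral, -self.i_max, self.i_max)
        return -self.kp * np.asarray(f_lateral) - self.ki * self.integral

ctrl = LateralForcePI()
rate = ctrl.desired_rate([0.4, -0.1], dt=0.01)
print(rate.shape)  # (2,)
```

The returned rate would then replace the purely proportional desired rate fed to the safety task, leaving the rest of the task fusion unchanged.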
It shows that the framework can be used in a new configuration, where the needle is not held by its base but is instead inserted progressively in the tissues using a dedicated device. It has also proven to be able to maintain the same general formulation and adapt to 2D US, EM sensing and force feedback.

Conclusion

In this chapter we provided an overview of motion compensation techniques that can specifically be used to follow the motions of soft tissues. We then showed that the framework we proposed in chapter 4 could be easily adapted to provide motion compensation capabilities in addition to the needle steering. We then proposed a tracking algorithm that can follow a moving target in 2D ultrasound images. The performances of this algorithm were validated through experiments showing that it could achieve good tracking up to the resolution provided by the image. Finally we demonstrated the great flexibility of our global framework at handling multiple kinds of feedback modalities and robotic systems through experiments performed in a multi-sensor context and using a system dedicated to needle insertion. We showed that it allows fusing the steering of the tip of a needle with the compensation of lateral motions of the tissues. Results obtained during experimental insertions in moving ex-vivo biological tissues demonstrated that performances compatible with the clinical context can be obtained for both tasks performed at the same time, which constitutes a great contribution toward safe and accurate robotic needle insertion procedures.

Conclusion

Conclusions

In this thesis we covered several aspects of robotic needle insertion under visual guidance. In Chapter 1 we first presented the clinical and scientific context of this work and the challenges associated to this context. In Chapter 2 we provided a review on the modeling of the interaction between a needle and soft tissues.
We then proposed two 3D models of the insertion of a beveled tip needle in soft tissues and compared their performances. In Chapter 3 we addressed the issue of needle localization in 3D ultrasound (US) volumes. We first provided an introduction to US imaging and an overview of needle tracking algorithms in 2D and 3D US images. We proposed a 3D tracking algorithm that takes into account the natural artifacts that can be observed around the needle location. We used our needle model to improve the robustness of the needle tracking and proposed a method to update the model and take tissue motions into account. In Chapter 4 we first presented a review of techniques used to control the trajectory of a needle during its insertion and to define the trajectory that must be followed by the needle tip. We then proposed a needle steering framework that is based on the task function framework used to perform visual servoing. The framework allows great flexibility and can be adapted to different steering strategies to control several kinds of needles with symmetric or asymmetric tips. We then validated our framework through several experimental scenarios using the visual guidance provided by cameras or a 3D US probe. In Chapter 5 we considered the case of patient motions during the needle insertion and provided an overview of methods that can be used to compensate for these motions. We then extended our steering framework to handle motion compensation using force feedback. We finally demonstrated the flexibility of our framework by performing needle steering in moving soft tissues using a dedicated needle insertion robot in a multi-sensor context. In the following we draw some conclusions concerning needle modeling, ultrasound visual feedback, needle steering under ultrasound guidance and compensation of tissue motions during needle insertion. Finally we present some perspectives for future developments of our work.
Needle modeling

We first focused on the modeling of the behavior of a flexible needle being inserted in soft tissues. Fast and accurate modeling of the interaction phenomena occurring during the insertion of a needle is a necessity for the control of a robotic system aiming at the assistance of the insertion procedure. We have seen that models based on kinematics are efficient and allow fast and reactive control. However their efficiency often comes at the cost of a limitation of the phenomena that they can take into account. On the contrary, almost every aspect of an insertion can be described using finite element modeling, from the deformations of the needle to the complex modifications that it can create on the tissues. Nevertheless this complexity can only be obtained through heavy parameterization that requires accurate knowledge of boundary conditions and time consuming computations. A trade-off should thus be found to take into account the main interaction phenomena while keeping a low level of complexity to stay efficient. Mechanics-based models offer such a compromise by reducing the computational requirements while still being a realistic representation of reality. In this thesis we proposed a 3D flexible needle model that is simple enough to yield real-time performances, while still being able to take into account the deformations of the needle body due to its interaction with the moving tissues at its tip and all along its shaft.

Ultrasound visual feedback

A second requirement for the development of a robotic system usable in clinical practice is its ability to monitor the state of the environment on which it is acting. For needle procedure assistance this means knowing the state of the needle and of the tissues. Visual feedback is a great way to deal with this issue by means of the dense information it can provide on the environment. Additional conditions should also be fulfilled on the nature of the provided images.
A great quality of the image is only useful if it can be obtained frequently. On the contrary, images provided at a fast rate can only be used if they contain exploitable information. To this end, the ultrasound (US) modality is one of the best choices since it can provide real-time 2D or 3D relevant data on a needle and its surrounding tissues. Extracting this information in a reliable way is however a great challenge due to the inherent properties of this modality. We have presented a review of the different artifacts that can appear in US images as well as current techniques used to localize a needle in these images. In order to be used in the context of a closed-loop control of a robotic system, needle detection algorithms should be fast and accurate. We contributed to this field by proposing a needle tracking method that directly exploits the artifacts created by the needle to find the location of its whole body in 3D US volumes. Ensuring the good consistency of the tracking between successive volumes can be achieved by modeling the expected behavior of the needle. The modeling of the current state of the needle interaction with the tissues can also be improved by exploiting the measures available on the needle. Therefore we fused our contributions by proposing a method to update our interaction model from the measures provided by several sensors and to use the model to improve the quality of the needle tracking.

Needle steering under ultrasound guidance

Once a good model of the insertion process is available along with a reliable measure of the needle location, the closed-loop robotic control of the insertion can be performed. We first reviewed the current techniques available to steer a needle in soft tissues and the different strategies used to define the trajectory of the tip. In order to remain as close as possible to a potential clinical application, we chose to focus on the steering of standard needles and did not consider the case of special designs.
Two main methods are usually used in the literature to steer the needle: either manipulating the needle by its base, as usually done by clinicians, or exploiting an asymmetry of the tip to deflect the needle from a straight path during the insertion. In order to stay generic, we designed a control framework to manipulate the needle by its base and also to exploit the asymmetry of the tip. This framework also directly builds on our previous contribution on needle modeling, by using the model in real-time to compute the motions to apply to the base of the needle that allow specific motions of the needle tip. We performed several validation experiments to assess the performances of our framework in needle insertions performed under visual feedback. In particular we showed that the approach is robust to modeling errors and can adapt to tissue or target motions.

Tissue motion compensation during needle insertion

We addressed the topic of the compensation of patient motions during a needle insertion procedure. The body of a patient may be moving during a medical procedure for several reasons, the most common ones being physiological motions, which can hardly be avoided. Tissue motions can have several impacts on the results of a needle insertion procedure. The first one is that the targeted region inside the body is moving. This point should be taken into account by tracking the target and adapting the control of the needle trajectory accordingly. The steering framework that we proposed was already capable of handling this first issue and we complemented it with a target tracking algorithm in ultrasound images. The second important issue concerns the safety of the insertion, which can be compromised if the robotic system is not designed to follow the motions of the patient. In order to address this issue, we proposed an adaptation of our steering framework to be able to perform motion compensation using either visual feedback or force feedback.
We demonstrated the performances of our method by performing the insertion of a flexible needle in moving ex-vivo biological tissues, while compensating for the tissue motions. Overall, the results of our experiments confirmed that our global steering framework can be adapted to several kinds of robotic systems and can also integrate the feedback provided by several kinds of sensing modalities in addition to the visual feedback, such as force measurements or the needle tip pose provided by an electromagnetic tracker.

Perspectives

We discuss here several extensions and further developments that could be made to complement the current work in terms of technical improvements and clinical acceptance. Both aspects are linked, since driving this work toward the operating room requires first identifying the specific needs and constraints of the medical staff, and then translating them into theoretical and technical constraints. In the following we address the limitations that were already mentioned in this manuscript as well as new challenges arising from specific application cases.

Needle tracking

Speed of needle tracking: We presented a review of needle tracking techniques in ultrasound (US) images and volumes. In order to be usable, automatic tracking of the needle should be fast, such that it gives a good measure of the current state of the insertion. Hence efficient tracking algorithms are a first necessity to achieve this goal, which is why we proposed a fast tracking algorithm. However, independently of the chosen tracking algorithm, current works mostly use 3D volumes reconstructed in Cartesian space. Since the reconstruction of a post-scan volume from pre-scan acquired data requires some time, it introduces a delay in the acquisition. This is a first obstacle for a fast control of the needle trajectory, since the needle has to be inserted slowly to ensure that the visual feedback is not completely out of date once available.
The time of conversion can be reduced using specific hardware optimizations; however, a simple solution would be to directly use the pre-scan data to track the needle. While conversion of the whole post-scan volume remains desirable to provide an easy visualization for the clinician, it is not necessary for a tracking algorithm. The needle could be tracked in the pre-scan space and then converted to Cartesian space if needed. Taking a step further, it can be noted that acquiring the pre-scan volume also takes time. Tracking of the needle could be done frame by frame during the wobbling process to provide 2D feedback on the needle section as soon as it is available.

Reliability of needle tracking: In order to be usable for a control application in a clinical context, the tracking of the needle should be reliable. Reliability in the case of tracking in 3D US is challenging because of the overall low quality and the artifacts present in the image, which is the reason why we proposed a method to account for needle artifacts. In general, even for the human eye, it can be difficult to find the location of the needle in a given US volume when other bright linear structures are present. Temporal filtering is usually applied to reduce the size of the region in which to search for the needle. In our case we used a mechanical model of the needle to predict the position of the needle in the next volume and we updated the model to take into account tissue motions. However, large tissue motions or an external motion of the US probe can cause a large apparent motion of the needle in the volume which is not taken into account by the temporal filtering, resulting in a failure of the tracking. Following the motion of the other structures around the needle could be a way to ensure the spatial consistency of the tracked needle position. Tracking the whole tissue region could be a solution, for example by using methods based on optical flow [TSJ+13].
Deep learning techniques could also be explored, since they show increasingly promising results for the analysis of medical images [LKB+17].

Active tracking: An accurate tracking of the needle location in the US volume is very important for the good progress of the operation. Servoing the position of the US probe could be a good addition to increase the image quality and ease the tracking process. A global optimization of the US quality could be performed [Chatelain et al.]. A control scheme could also be designed to take into account the needle-specific US artifacts, such as intensity dropout due to the incidence angle, and to optimize the quality of the image specifically around the needle [Chatelain et al.].

Needle steering

Framework improvement: The needle insertion framework based on task functions that we proposed can be extended in many ways. First we did not directly consider the case of obstacle avoidance. This could easily be added by the design of a specific avoidance task or by using trajectory planning to take into account sensitive regions [XDA+09]. A higher level control over the task priorities should also be added to adapt the set of tasks to the many different situations that can be encountered [Mansard et al.]. For example, targeting tasks could be deactivated in case a failure of the needle tracking has been detected. A specific task could be designed to take the priority in this case and to move the needle in such a way that it can easily be found by the tracking algorithm.

Active model learning: In our control framework we used a mechanics-based model of the needle to estimate the local effect of the control inputs on the needle shape.
We proposed a method to update the local model of the tissues according to the available measures. A first improvement would be to explore the update of other parameters as well, like the tissue stiffness. However, it is possible that the model does not always accurately reflect the real interaction between the needle and tissues, mainly due to the complex nature of biological tissues. A model-less online learning of the interaction could be explored, using the correlation between the control motions applied to the needle and their real measured effect on the needle and tissue motions. The steering strategy could also be modified to optimize this learning process, for example by stopping the insertion and slightly moving the needle base in all directions to observe the resulting motions of the needle tip and target.

Clinical integration

Before integrating the proposed framework into a real clinical workflow, many points need to be taken into account and discussed directly with the clinical staff.

System registration: A first requirement for a good integration into the clinical workflow is that the system should be easy to use out-of-the-box, without requiring time consuming registration before each operation. In the case where the insertion is done using 3D ultrasound (US) to detect both the needle and the target in the same frame, we have proposed a simple registration method to estimate the pose of the US probe in the frame of the needle manipulator. The method only requires two clicks of the operator through a GUI, and it is necessary anyway to initialize the tracking of the needle. We showed that this was sufficient to achieve good targeting performances thanks to the robustness of the control method; however, the estimation of the motions of the tissues proved to be more dependent on an accurate registration. An online estimation of the probe pose could be used to refine the initial registration.
This may require additional sensors, such as fiber Bragg grating sensors integrated into the needle [PED + 10], to be able to differentiate between the motions due to the tissues and those due to the probe. In clinical practice the US probe is also unlikely to stay immobile during the whole procedure, for example because it is manually held by the clinician or because its field of view is too narrow and it has to be moved to follow the needle and the target. Online estimation would also be an advantage in those cases, and sensors such as electromagnetic trackers or external cameras could provide a direct feedback on the probe pose. A specific mechanical design could also be used to mechanically link the probe to the needle manipulator [YPZ + 07]. In this case the probe pose is known by design and a registration step is unnecessary.

Tele-operation: The method we have proposed so far was aimed at performing a fully automated needle insertion. This still remains a great factor of rejection among the medical community. However, the clinician can easily be integrated into the framework. A first possibility is to consider the robot as an assistant that can perform some predefined automated tasks, such as a standby mode, during which only tissue motion compensation is performed, and an insertion mode, during which the needle is automatically driven toward the target. The clinician would only have to select the set of tasks currently being performed. This way the global flow of the operation would still be controlled by a human operator, while the low-level fusion of the tasks would be handled by the system. However, this only leaves a partial control to the clinician over the actual insertion procedure. A second possibility is to give the clinician full control over one of the tasks and leave the others to the system.
For example, the clinician could control the trajectory of the needle tip while the robot transparently handles the orientation of the bevel and the motion compensation. A haptic interface could be used to provide guidance toward the optimal trajectory to follow in order to avoid some predefined obstacles and reach the target. Other kinds of haptic feedback could also be explored, for example a feedback on the state of the automatic tasks performed by the system, or on the compatibility of the clinician's task with the other tasks. A visual feedback could also be provided to the clinician, such that the control of the tip trajectory could be defined directly in the frame of a screen instead of the frame of the real needle.

Clinical validation: We have shown that our steering framework can be adapted to several robotic systems. In order to move toward clinical integration, repeatability studies have to be conducted in biological tissues to assess the robustness of the method with a specific set of hardware components. These studies should be repeated for each envisioned set of hardware, and the performances should be evaluated in accordance with the exact application that is considered. Performance requirements can indeed differ between applications, for example between moving lung biopsies and prostate brachytherapy.

Long-term vision: Finally, one can believe that fully autonomous surgical robots will one day become reality. Contrary to a human surgeon, robotic systems are not limited to two hands and two eyes. They can have several dexterous arms that perform manipulations with more accuracy than a human. They can also integrate many feedback modalities at once, allowing a good perception of many different aspects of their environment. This is currently not enough to provide them with a good understanding of what is truly happening in front of them and of the best action that should be performed.
However, with the ever-improving performances of artificial intelligence, robotic systems may in the future gain a better comprehension of, and adaptability to, their environment. They could then be able to choose and perform with great efficiency the adequate task, for example a medical act, that is best adapted to the current situation considered in its globality. Before reaching this state, systems and techniques should first be developed that can autonomously perform narrow tasks with the best efficiency, such as a needle insertion. These could then be connected together to form a generic expert system.

Résumé

In this thesis we focus on the automatic control of the trajectory of a flexible beveled-tip needle using ultrasound as the visual feedback modality. We propose a 3D model of the interaction between the needle and the tissues, as well as a needle tracking method in a sequence of 3D ultrasound volumes that exploits the artifacts visible around the needle. These two elements are combined to obtain good needle tracking and modeling performances even when motions of the tissues are observed. We also develop a visual servoing control approach that can be adapted to the steering of different kinds of slender tools. This approach provides an accurate control of the needle trajectory toward a target while adapting to the physiological motions of the patient. The results of numerous experimental scenarios are presented and demonstrate the performances of the different proposed methods.

Abstract

The robotic guidance of a needle has been the subject of many research works in the past years, aiming to provide assistance to clinicians during medical needle insertion procedures.
However, the accurate and robust control of a needle insertion robotic system remains a great challenge due to the complex interaction between a flexible needle and soft tissues, as well as the difficulty to localize the needle in medical images. In this thesis we focus on the ultrasound-guided robotic control of the trajectory of a flexible needle with a beveled tip. We propose a 3D model of the interaction between the needle and the tissues as well as a needle tracking method in a sequence of 3D ultrasound volumes that uses the artifacts appearing around the needle. Both are combined in order to obtain good performances for the tracking and the modeling of the needle even when motions of the tissues can be observed. We also develop a control framework based on visual servoing which can be adapted to the steering of several kinds of needle-shaped tools. This framework allows an accurate placement of the needle tip and the compensation of the physiological motions of the patient. Experimental results are provided and demonstrate the performances of the different methods that we propose.

Figure 1.1: Example of (a) continuum robot (taken from [CMC + 08]) and (b) concentric tubes (taken from [WRC09]).
Figure 1.2: Illustration of several kinds of needle tip.
They are usually designed for a specific kind of intervention (Fig. 1.3), such as prostate interventions under ultrasound (US) imaging [YPZ + 07] [HBLT12]. Special robots have also been designed to be compatible with the limitations imposed by computerized tomography (CT) scanners [MGB + 04], magnetic resonance imaging (MRI) scanners [MvdSK + 17] or both [ZBF + 08].
Figure 1.4: Example of special designs of the needle tip: (a) multi-segment needle (taken from [KFRyB11]), (b) one degree of freedom active pre-bent tip (taken from [AGL + 16]) and (c) two degrees of freedom active pre-bent tip (taken from [RvdBvdDM15]).
Figure 1.5: Pictures of the robotic systems used for the experiments: (a) system used in France and (b) system used in the Netherlands.
Figure 1.6: Picture of the stereo camera system and one gelatin phantom used for the experiments.
Figure 1.8: Picture of the needle used for the experiments in France.
Figure 2.1: Illustration of the reaction forces applied to the needle tip by the tissues depending on the tip geometry. A symmetric tip, on the right, induces symmetric reaction forces. An asymmetric tip, on the left, induces asymmetric reaction forces which can modify the tip trajectory.
Figure 2.3: Illustration of finite element modeling (taken from (a) [ODGRyB13] and (b) [CAR + 09]).
Figure 2.4: Illustration of mechanics-based models of needle-tissue interaction using either virtual springs or a continuous load.
Figure 2.6: Illustration of the reaction forces applied on each side of the bevel (depicted on the right). Point O corresponds to the end of the 3D curve representing the needle. The current velocity of the needle tip v_t defines the angle β in which the cutting occurs in the tissues.
Figure 2.8: Picture of the setup used to acquire the trajectory of the tip of a real needle inserted in a gelatin phantom. The frame attached to the needle base is denoted by {F_b}.

The motion applied between two insertion steps is expressed in the frame {F_b}, depicted in Fig. 2.8, and is one of the following:
• No motion (straight insertion)
• Translation of 2 mm or -2 mm along the x axis
• Translation of 2 mm or -2 mm along the y axis
• Rotation of 3° or -3° around the x axis
• Rotation of 3° or -3° around the y axis
• Rotation of 90°, -90° or 180° around the z axis

Figure 2.9: Tip position obtained when a translation is applied along the x axis of the base frame between two insertion steps along the z axis. Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines.
Rotation around x axis in the needle base frame {F_b}
Figure 2.11: Tip position obtained when a rotation is applied around the x axis of the base frame between two insertion steps along the z axis. Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines.
Rotation around z axis in the needle base frame {F_b}
Figure 2.14: Absolute final error between the position of the simulated needle tip and the measured position of the real needle tip. For each type of base motion, the mean and standard deviation are calculated using the final position error across 3 different insertions performed in the phantom.
Figure 2.15: Average absolute error during the whole insertion between the position of the simulated needle tip and the measured position of the real needle tip. For each type of base motion, the mean and standard deviation are calculated over the whole length of the insertion and across 3 different insertions performed in the phantom.
Figure 2.16: Computation time needed to get the shape of the needle from the base pose and the position of the tissue model (virtual springs or spline segments).
Figure 3.1: Illustration of the reconstruction of an ultrasound (US) scan line from the delay of propagation of the US wave. A first interface is encountered after a time t_1 and a second one after a time t_2 > t_1. A part of the US wave reaches back the transducer after a total time of 2t_1 and a second part after 2t_2. The distances of the first and second interfaces from the transducer are then computed as ct_1 and ct_2, respectively.
Figure 3.2: Illustration of the configuration of the piezoelectric elements (red) on linear and convex ultrasound transducers.
Figure 3.3: Illustration of the effect of US beam width on the lateral and out-of-plane resolution of an US probe. Piezoelectric elements are represented in red and only a linear transducer is depicted here.
Figure 3.4: Illustration of 2D post-scan conversion for linear (top) and convex (bottom) transducers.
Figure 3.5: Illustration of 3D post-scan conversion for a convex transducer wobbling probe.
Figure 3.6: Illustration of several phenomena leading to the appearance of needle artifacts in ultrasound images: needle reverberation artifact, side lobes (blue arrows) artifact and reflection out of the transducer.
Figure 3.7: Two orthogonal cross sections of a 3D ultrasound (US) volume showing the artifacts present around a needle. The needle is in the plane of the picture on the left and the right picture shows a cross section of the needle. In both images the US wave is coming from the left.
Figure 3.9: Illustration of the gradient-based algorithm used to track the needle in the images acquired by two stereo cameras. Steps of the algorithm: 1) initialization of control points from the previous needle location, 2) detection of the minimum and maximum gradient in normal directions, 3) update of the control points and polynomial fitting, 4) tip detection using the gradient along the needle shaft direction.
Figure 3.10: Illustration of the sub-cost functions used for the local tracking algorithm. The voxel intensities in the dark blue box should be low, so they are subtracted from sub-cost J_1 (see eq. (3.39)), while the ones in the orange box should be high and are added to J_1. Similarly, voxels in the green boxes are added to the sub-cost J_2 (see eq. (3.41)). Once the needle has been tracked laterally by maximizing the total cost function J (see eq. (3.38)), a search for the tip is performed along the tangent at the extremity of the needle to maximize the function J_4. The voxel intensities in the light blue box are subtracted from J_4 (see eq. (3.44)) while the ones in the yellow box are added.
Figure 3.11: Picture of the setup used to acquire volume sequences of needle insertions in a gelatin phantom. The Viper s650 robot holds the needle on the left and the Viper s850 robot holds the probe on the right.
Figure 3.14: Illustration of the principle of Bayesian filtering.
Figure 3.16: Illustration of the unscented Kalman filter.
Figure 3.18: Illustration of the two-body model and definition of the state x considered for the unscented Kalman filter (UKF).
Figure 3.19: Experimental setup used to validate the performances of the tissue motion estimation algorithm when using force feedback at the base of the needle and position feedback at the needle tip.
Figure 3.20: Example of measured and estimated tip position as well as the corresponding absolute position and orientation estimation errors. Three different combinations of feedback are used for the update algorithm: force and torque feedback (FT, red), tip position and orientation feedback (PO, green) and feedback from all sources (FT+PO, blue).
Figure 3.22: Example of torques measured and estimated using three different combinations of feedback for the update algorithm: force and torque feedback (FT, red), tip position and orientation feedback (PO, green) and feedback from all sources (FT+PO, blue).
Figure 3.23: Example of measured and estimated tissue position as well as the corresponding absolute position estimation error. Three different combinations of feedback are used for the update algorithm: force and torque feedback (FT, red), tip position and orientation feedback (PO, green) and feedback from all sources (FT+PO, blue).
Figure 3.25: Mean over time and across five experiments of the absolute error between the real and modeled tip position obtained for the different update methods and two different update rates.
Figure 3.26: Example of measured and simulated positions of the needle tip during an insertion in gelatin while lateral motions are applied to the phantom. Five different models and update methods are used for the needle tip simulations. The measured tissue motions are shown in (a), the different tip positions in (b) and the absolute error between the measured and simulated tip positions in (c).
Figure 3.27: Two orthogonal views of a sequence acquired during a needle insertion in gelatin. Different models and update methods are used for the needle tip simulations. Method 1: rigid needle; method 2: flexible needle; method 3: flexible needle with the extremity of the tissue spline updated from the measured tip position; method 4: flexible needle with the tissue spline position updated with lateral tissue motion estimation; method 5: flexible needle with the tissue spline updated with lateral tissue motion estimation and its extremity from the measured tip position. The tissue splines of the different models are overlaid on the images as colored lines. Method 1 does not have any cut path and methods 2, 3, 4 and 5 are depicted in green, blue, red and yellow, respectively. The real needle can be seen in black, although it is mostly covered by the tissue splines associated with methods 4 and 5. Overall only the tissue splines of methods 4 and 5 can follow the real shape of the path cut in the gelatin.
Figure 3.28: Example of absolute error between the measured and simulated positions of the needle tip when using an update rate of 1 Hz for the estimation of the tissue motions.
Figure 3.29: Example of tissue motions measured and estimated using the update method 4 with the position feedback obtained from cameras. Two update rates are compared: (a) fast update rate corresponding to the acquisition with cameras, (b) slow update rate simulating the acquisition with 3D ultrasound. Overall the estimations follow the real motions of the tissues.
Figure 3.31: Position of the tip in the 3D ultrasound volume obtained by manual segmentation and using different needle tracking methods. The insertion is performed along the y axis of the probe while the lateral motion of the tissues is applied along the z axis. One tracking method is initialized without a model of the needle (blue), one is initialized from a model of the needle that does not take into account the motions of the tissues (green) and one is initialized from a model updated using the tissue motion estimation algorithm presented in section 3.5.2 (red).
Figure 3.32: Illustration of the needle tracking in two orthogonal cross sections of a 3D ultrasound volume acquired near the end of the insertion. The result of the tracking initialized without model is represented by the blue curve, the tracking initialized from a non-updated model by the green curve and the tracking initialized from an updated model by the red curve. Without information on the needle motion (blue curve), the tracking fails and gets stuck on an artifact occurring at the surface of the tissues (blue arrow). Updating only the needle base leads to an initialization of the tracking around another bright structure that is tracked instead of the real needle (green curve). Taking into account the tissue motions allows a better initialization between two acquisitions, such that the tracking can find the needle (red curve).
Figure 4.1: Illustration of the different kinds of flexible needle steering methods: (a) tip-based steering of a needle with asymmetric tip, (b) base manipulation of a needle with symmetric tip, (c) base manipulation of a needle with asymmetric tip.
Figure 4.2: Illustration of the task function framework in the case of two near incompatible tasks (n = 2) and a control vector with two components v_x and v_y (m = 2): (a) classical hierarchical formulation, (b) singularity robust formulation. Each E_i is the set of control inputs for which the task i is fulfilled. Each S_i is the input vector obtained using a single task i in the classical formulation (4.6). C is the input vector obtained using both tasks in the classical formulation (4.6). The same input vector C is obtained using the hierarchical formulation (4.13). Each R_i is the input vector obtained using the singularity robust formulation (4.16) when the task i is given the highest priority. The contributions due to tasks 1 and 2 are shown with blue and red arrows, respectively, when task 1 is given the highest priority, and with green and yellow arrows, respectively, when task 2 is given the highest priority.
Figure 4.3: Illustration of the different needle base poses used to estimate the Jacobian matrices associated to the different features.

Figure 4.3 shows an illustration of the frames corresponding to each r_i. Using the first order of the Taylor expansion we then have

s(r_i) ≈ s(r) + δt J_s v_i .    (4.25)

Figure 4.4: Illustration of different geometric features that can be used to define task functions for the general targeting task.
Figure 4.6: Picture of the setup used to test the hybrid base manipulation and duty-cycling controller.
Figure 4.7: Final views of the front camera at the end of 4 insertions with different controls. The crosses represent the target. (a) Straight insertion with an initially aligned target: the target is missed due to tip deflection. (b) Duty-cycling control with a target shifted 1 cm away from the initial needle axis: the duty-cycling control is saturated and the target is missed due to insufficient tip deflection. The target can be reached in both cases using the hybrid control framework ((c) aligned target and (d) shifted target).
Figure 4.8: Measure of the lateral distance between the needle tip axis and the target during 4 insertions with different controls. (a) Straight insertion with an initially aligned target: the target is missed due to tip deflection. (b) Duty-cycling control with a target shifted 1 cm away from the initial needle axis: the duty-cycling control is saturated and the target is missed due to insufficient tip deflection. The target can be reached in both cases using the hybrid control framework ((c) aligned target and (d) shifted target). In each graph, the purple sections marked "DC" correspond to duty-cycling control and the red sections marked "BM" correspond to base manipulation.
Figure 4.9: Picture of the setup used to test the performances of the different safety tasks.
Figure 4.10: Views of the front and side cameras at the beginning and end of one experiment: (a) initial state, front camera, (b) initial state, side camera, (c) final state, front camera, (d) final state, side camera. The green line represents the needle segmentation and the target set for the controller is represented by the red cross.
Figure 4.11: Measure of the lateral distance between the needle tip axis and the target during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task. Measures are noisy at the beginning of the insertion due to the distance between the needle tip and the target.
Panel titles: Bending energy reduction; Base axis / insertion point angle reduction.
Figure 4.12: Mean value of the final distance between the needle tip axis and the target. The mean is taken across the five experiments for each kind of safety task.
Figure 4.13: Value of the distance between the needle and the initial position of the insertion point at the tissue surface during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task.
Figure 4.14: Mean value of the distance between the needle and the initial position of the insertion point at the tissue surface. The mean is taken over time and across the five experiments for each kind of safety task.
Figure 4.15: Value of the energy of bending stored in the needle during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task.
Figure 4.16: Mean value of the energy of bending stored in the needle. The mean is taken over time and across the five experiments for each kind of safety task.
Figure 4.17: Value of the angle between the needle base axis and the initial position of the insertion point at the tissue surface during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task.
Figure 4.18: Mean value of the angle between the needle base axis and the initial position of the insertion point at the tissue surface. The mean is taken over time and across the five experiments for each kind of safety task.
Figure 4.19: Lateral distance between the needle tip axis and the target during a controlled insertion of the needle while lateral motions are applied to the phantom: (a) distance measured using the tracking of the needle, (b) distance estimated using the needle model. Two insertions are performed without update of the model to account for tissue motions (blue and green lines) and two insertions are performed while the model is fully updated (red and black lines).
Figure 4.20: State of two needle models overlaid on the camera views at the end of a needle insertion. The blue cross represents the virtual target. Yellow and blue lines are, respectively, the needle and tissue spline curves of a model updated using only the pose feedback of the needle manipulator, such that the position of the tissue spline (blue) is not updated during the insertion. Red and green lines are, respectively, the needle and tissue spline curves of a model updated using the pose feedback of the needle manipulator and the visual feedback, such that the position of the tissue spline (green) is updated during the insertion.

Targeting performances: The lateral distance between the axis of the measured needle tip and the target during the insertions is shown in Fig. 4.22. Two cross sections of the US volume acquired at the end of the insertion are shown in Fig. 4.23.

Figure 4.22: Measure of the lateral distance between the needle tip axis and the target during the insertions. Insertions are performed either in a gelatin phantom or in porcine liver embedded in gelatin. The highest task priority is given to either the targeting or the safety tasks.
Figure 4.23: Cross sections of an ultrasound volume at the end of the insertion for different experimental conditions: (a) gelatin, targeting tasks with highest priority, (b) gelatin, safety task with highest priority, (c) porcine liver, targeting tasks with highest priority, (d) porcine liver, safety task with highest priority. The result of the needle tracking is overlaid as a red curve and the interaction model is projected back in the two cross sections with the needle spline in blue and the tissue spline in yellow. The target is shown as a red cross. The green dashed lines indicate the surface of the tissues.
Figure 4.24: Distance between the needle shaft and the initial position of the insertion point at the surface during the insertions. The graphs show the value of the distance measured in the acquired ultrasound volume or estimated from the model. Insertions are performed either in a gelatin phantom or in porcine liver embedded in gelatin. The highest task priority is given to either the targeting or the safety tasks.
Figure 5.2: Illustration of the performance of the target tracking algorithm. The motion described by (5.5) is applied to the gelatin phantom with a period T = 5 s. The global mean tracking error is 3.6 mm for this experiment. However, it reduces to 0.6 mm after compensating for the delay of about 450 ms introduced by the data acquisition.
Figure 5.4: Picture of the setup used to compare the force exerted at the needle base using different configurations for the insertion.
Figure 5.6: Mean value of the absolute lateral force exerted at the base of the needle. The mean is taken over time and across the four experiments for each configuration.
Figure 5.7: Picture of the setup used to perform needle insertions toward a target embedded in an ex-vivo bovine liver while compensating for lateral motions of the phantom.
Figure 5.9: Measures during an insertion in a bovine liver embedded in gelatin: (a) measure of the tissue motions from the UR5 odometry, (b) measure of the needle tip position from the electromagnetic tracker and measure of the target position from the tracking in 2D ultrasound, (c) measure of the lateral distance between the target and the needle tip.
Overall the target can be reached with good accuracy even if the tissues are moving.
Figure 5.10: Target tracking in ultrasound images during a needle insertion in a bovine liver embedded in gelatin. The boundaries of the target are not always clearly visible. The needle being inserted can slightly be seen coming from the right.
Figure 5.11: Target tracking in ultrasound images during a needle insertion in a gelatin phantom. The needle can be seen coming from the right.
Figure 5.12: Slice extracted from a high resolution 3D ultrasound volume acquired at the end of an insertion in the gelatin phantom. The needle is coming from the right and the needle tip is shown with a red cross. The line on the left corresponds to a wooden stick used to maintain the spherical target during the conception of the phantom.

List of Symbols

d: Position of the base of the needle
p_0,i: Rest position of a virtual spring
p_N,i: Needle point associated to a virtual spring
T_tip: Normal torque at the needle tip
x, y, z: Generic axes of a frame
χ_i: Characteristic function of a curve defining its domain of definition
φ: Angle between the wheels of a bicycle model
Π_j: Projector onto a virtual spring plane P_i
v_t: Needle tip translation velocity
v_ins: Needle insertion velocity
θ: Orientation of the wheel of a kinematic model
{F_b}: Frame of the needle base
{F_t}: Frame of the needle tip
a: Length of the bevel along the needle axis
b: Length of the face of a bevel
d_in: Inner needle diameter
d_out: Outer needle diameter
E: Needle Young's modulus
E_N: Bending energy stored in the needle
E_T: Deformation energy stored in the tissues
I: Second moment of area of the needle section
Stiffness per unit length of the interaction between the needle and the tissues
K_nat: Natural curvature of the trajectory of an asymmetric needle tip
u_k: Control input vector at time index k
w: Process noise vector for Bayesian filtering
W_k: Process noise matrix of a linearized system for Kalman filtering
w_k: Process noise vector at time index k
x: State vector for Bayesian filtering
x, y, z: Generic axes of a frame
x_a: Augmented state for unscented Kalman filtering
x_k: State vector at time index k
y: Measure vector for Bayesian filtering
y_k: Measure vector at time index k
δ: Dirac delta function
δφ: Angular displacement of the ultrasound transducer of a wobbling probe between the beginning of two frame acquisitions
Binary variable indicating the direction of sweeping of the ultrasound transducer of a wobbling 3D probe
d̂: Estimated unit vector tangent to a point along a needle
f̂_b: Estimated lateral force exerted at the base of a needle
p̂_j: Estimated position of a point along a needle
t̂_b: Estimated lateral torque exerted at the base of a needle
x̂_k: State estimate after the update step for Bayesian filtering
x̂_k|k-1: State estimate after the prediction step for Bayesian filtering
ŷ_k: Measure estimate after the prediction step for Bayesian filtering
λ: Wavelength of an ultrasound wave
⌊·⌋: Floor operator
X_i: Particle for a particle filtering or sigma point for unscented Kalman filtering
Y_i: Measure vector associated to a sigma point for unscented Kalman filtering
φ: Angle between the center and current orientation of the ultrasound transducer of a 3D wobbling probe
φ: Phase of the tissue motion for the breathing motion profile
ρ: Mass density of a medium
θ: Angle of a rotation associated to the angle-axis rotation vector θu
ỹ_k: Innovation vector for Bayesian filtering
×: Cross product operator between two vectors
{F_b}: Frame of the needle base
{F_t}: Frame of the needle tip
{F_w}: Fixed reference frame associated to a robot
atan2(y, x): Multi-valued inverse tangent operator
b: Amplitude of the 1D tissue motion for the breathing motion profile
c: Speed of sound in soft tissues (1540 m.s⁻¹)
Distance between an interface in the tissues and the ultrasound transducer
d_o: Acquisition depth of an ultrasound probe
f: Frequency of an ultrasound wave
f_s: Sampling frequency of the radio-frequency signal
g: Generic probability density function
I_post: Post-scan image
I_pre: Pre-scan image
J: Cost function used for the needle tracking
J_1, J_2, J_3, J_4: Sub-cost functions used for the needle tracking
K: Bulk modulus of a medium
k: Time index for Bayesian filtering
L: Length of a polynomial curve
l_d: Curvilinear coordinate of a point along the needle
L_d, L_n, L_t: Lateral integration distances for the needle tracking sub-costs
l_j: Curvilinear coordinate of a point along the needle
L_p: Distance between two piezoelectric elements along an ultrasound transducer
L_s: Distance between samples along a scan line
M: Number of points along the needle taken as measures for unscented Kalman filtering
m: 1D breathing motion profile applied to the tissues
N: Number of control points defining the polynomial curve for the needle tracking
n: Coefficient tuning the shape of the motion for the breathing motion profile
n: Number of segments in a spline of the two-body model
N_ν: Dimension of the measurement noise vector for Bayesian filtering
N_f: Number of frames acquired during a sweeping motion of the ultrasound transducer of a 3D wobbling probe
N_l: Number of scan lines
N_p: Number of particles of a particle filter
n_p: Number of piezoelectric elements of an ultrasound transducer
N_s: Number of samples acquired along a scan line
N_u: Dimension of the control input vector for Bayesian filtering
N_w: Dimension of the process noise vector for Bayesian filtering
N_x: Dimension of the state vector for Bayesian filtering
N_y: Dimension of the measure vector for Bayesian filtering
p: Generic probability density function
R: Radius of curvature of a convex transducer
r: Polynomial order of spline segments
R_m: Radius of the circular trajectory described by the ultrasound transducer of a 3D wobbling probe
r_N: Radius of the needle expressed in voxels in an ultrasound volume
λ_δ: Positive control gain for the task associated to the distance between the rest position of the insertion point and the needle point at the surface of the tissues
λ_γ: Positive control gain for the task associated to the angle between the needle base axis and the rest position of the insertion point
λ_σ: Positive control gain for the task associated to the angle between the bevel cutting edge and a target
λ_θ: Positive control gain for the task associated to the angle between the needle tip axis and a target
λ_d: Positive control gain for the task associated to the distance between the needle tip axis and a target
λ_δ: Positive control gain for the task associated to the vector between the rest position of the insertion point and the needle point at the surface of the tissues
λ_δm: Positive control gain for the task associated to the mean deformation of the tissues along the needle shaft
λ_E_N: Positive control gain for the task associated to the bending energy stored in the needle
ω_z,max: Maximal rotation velocity around the needle axis
σ: Angle between the bevel cutting edge and a target
σ_i: Singular value of a matrix
τ_i: Singular value of the pseudo-inverse of a matrix
v_t: Translation velocity of the needle tip
v_t,z: Translation velocity of the needle tip along its axis
v_tip: Scalar insertion velocity of the needle tip
θ: Angle between the needle tip axis and a target
θ_DC: Angle of rotation of the tip during one cycle of duty-cycling control
Ĵ: Estimation of the Jacobian matrix J
{F_b}: Frame of the needle base
{F_t}: Frame of the needle tip
{F_w}: Fixed reference frame associated to a robot
atan2(y, x): Multi-valued inverse tangent operator
d: Distance between the needle tip axis and a target
DC: Duty cycle in duty-cycling control
E_N: Bending energy stored in the needle
i: Level of priority of a task
K_eff: Effective curvature of the trajectory of an asymmetric needle tip during duty-cycling control
K_nat: Natural curvature of the trajectory of an asymmetric needle tip
L_N: Length of the spline curve representing the needle model
L_DC: Insertion length of a cycle during duty-cycling control
L_free: Length of the needle that is outside the tissues
L_ins: Length of the insertion phase in duty-cycling control
L_rot: Length of the rotation phase in duty-cycling control
L_thres: Threshold length before the addition of a tissue spline segment in the two-body model
L_thres: Threshold length between the addition of two successive virtual springs
m: Dimension of the control input vector
n: Dimension of the task vector
n: Number of segments in a spline of the two-body model
r: Polynomial order of spline segments
t: Generic time
x_0, y_0, z_0: Components of the rest position of the insertion point in the frame of the needle base
x_t, y_t, z_t: Components of the position of a target in the frame of the needle tip
d Subscript used to indicate the desired value of a quantity 0 3×5 3 by 5 null matrix a Initial position of the tissues for the breathing motion profile b Amplitude of the tissue motion for the breathing motion profile c N Spline curve representing the needle e Task vector f l Lateral force exerted at the base of the needle I 3 3 by 3 identity matrix J Generic Jacobian matrix relating the variation of the task vector with respect to the control inputs J γ Jacobian matrix associated to the angle between the needle base axis and the rest position of the insertion point J σ Jacobian matrix associated to the angle between the bevel cutting edge and a target J f Jacobian matrix associated to the lateral force exerted at the base of the needle J ω U R ,z Jacobian matrix associated to the rotation velocity of the robot around the needle axis J v U R ,z Jacobian matrix associated to the translation velocity of the tip of the needle insertion device along the needle axis J v U R Jacobian matrix associated to the translation velocity of the tip of the needle insertion device J vt,z Jacobian matrix associated to the translation velocity of the needle tip along its axis J v N ID Jacobian matrix associated to the translation velocity of the translation stage of the needle insertion device J θ Jacobian matrix associated to the angle between the needle tip axis and a target m Breathing motion profile applied to the tissues v r Control inputs vector of the robotic system consisting of the UR3 robot and the needle insertion device v N ID Control inputs vector of the needle insertion device v U R Control inputs vector of the UR3 robot 258 LIST OF SYMBOLS x, y, z Generic axes of a frame γ Angle between the needle base axis and the rest position of the insertion point λ γ Positive control gain for the task associated to the angle between the needle base axis and the rest position of the insertion point λ σ Positive control gain for the task associated to the angle between the bevel 
cutting edge and a target λ θ Positive control gain for the task associated to the angle between the needle tip axis and a target λ f Positive control gain for the task associated to the lateral force exerted at the base of the needle ω z,max Maximal rotation velocity around the needle axis ω N ID Rotation velocity of the rotation stage of the needle insertion device ω U R ,z Rotation velocity of the robot around the needle axis σ Angle between the bevel cutting edge and a target v U R ,z Translation velocity of the tip of the needle insertion device along the needle axis v U R Translation velocity of the tip of the needle insertion device v t,z Translation velocity of the needle tip along its axis v tip Scalar insertion velocity of the needle tip v N ID Translation velocity of the translation stage of the needle insertion device θ Angle between the needle tip axis and a target {F b } Frame of the needle base {F t } Frame of the needle tip {F w } Fixed reference frame associated to a robot E Needle Young's modulus I Second moment of area of the needle section L thres Threshold length before the addition of a tissue spline segment in the two-body model Résumé Le guidage robotisé d'une aiguille a été le sujet de nombreuses recherches ces dernières années afin de fournir une assistance aux cliniciens lors des procédures médicales d'insertion d'aiguille. Cependant le contrôle précis et robuste d'un système robotique pour l'insertion d'aiguille reste un grand défi à cause de l'interaction complexe entre une aiguille flexible et des tissus ainsi qu'à cause de la difficulté à localiser l'aiguille dans les images médicales. Table 1 . 1 Needle type Chiba biopsy needle Chiba biopsy stylet Reference Angiotech MCN2208 Aurora Needle 610062 Young's modulus 200 GPa 200 GPa Outer diameter 22G (0.7 mm) 23.5G (0.55 mm) Inner diameter 0.48 mm 0.5 mm Length (cm) 12.6 from 0.8 to 10.8 Tip type Chiba Chiba Tip angle 25 • 25 • 1: Characteristics of the needles used in the experiments. 
The lengths are calculated from the base of the needle holder to the needle tip.

Table 3.1: Mean over time and across five experiments of the absolute error between the real and modeled tip position obtained for the different update methods and two different update rates.

  Absolute position error (mm)
  Update rate   30 Hz        1 Hz
  Method 1      5.9 ± 3.9    5.9 ± 3.9
  Method 2      6.1 ± 3.0    6.1 ± 3.0
  Method 3      2.1 ± 1.6    1.9 ± 1.5
  Method 4      0.6 ± 0.3    0.9 ± 0.5
  Method 5      0.4 ± 0.2    0.7 ± 0.5

Table 5.3: Summary of the lateral force measured at the base of the needle during the insertions performed in a gelatin phantom and a bovine liver embedded in gelatin. The mean and standard deviation of the lateral force are calculated over time.

  Phantom   Max force (mN)   Mean (mN)    Global mean (mN)
  Gelatin   630              189 ± 119    198 ± 133
            494              162 ± 88
            753              217 ± 135
            773              268 ± 176
            1022             137 ± 127
  Liver     870              154 ± 92     88 ± 103
            538              61 ± 72
            601              41 ± 64
            626              64 ± 61
            474              136 ± 127

• The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω_z,max is set to 60°/s and the gain λ_σ is set to 10 (see (4.56)), such that the maximal rotation velocity is used when the bevel orientation error is higher than 6°.
• The third task is the safety task used to reduce the tissue stretch at the surface δ, as defined by (4.61), (4.62) and (4.63). The control gain λ_δ is set to 1. Note that in a general clinical context it is not always possible to see the insertion point at the tissue surface, due to the configuration of the probe with respect to the insertion site. So we choose here to use the estimation of δ obtained from the needle model, instead of the real measure, as input of the safety task.

Two sets of priority levels are tested. In the first set, the two targeting tasks (first and second tasks) have the same priority and the safety task (third task) has a lower priority.
The final velocity screw vector v_b applied

Motion compensation using force feedback

In the previous sections we defined a way to use force feedback in our needle steering framework, as well as a method to track a moving target in 2D ultrasound (US) images. In this section we therefore present the results of experiments that we conducted to test our control framework in the case of a needle insertion performed under tissue motions.

Force sensitivity to tissue motions

We first propose to compare the sensitivity of the force measurements depending on the configuration of the needle. Two configurations are mostly used to perform robotic needle insertions. The first one is mainly used to perform base manipulation and consists in holding the needle by its base, leaving a part of the body of the needle outside the tissues during the insertion. The second configuration is mainly used to perform tip-based steering. The needle is then usually maintained in an insertion device such that only the part outside of the device can bend. The device is placed near the surface such that the needle is directly inserted inside the tissues, with no intermediate length left free to bend between the device and the tissues. In the following we perform needle insertions using different configurations and compare the interaction forces measured at the base of the needle.

Experimental conditions (setup in the Netherlands): We use the needle insertion device (NID) attached to the UR3 robot arm. The biopsy needle with the embedded electromagnetic (EM) tracker is placed inside the NID and is inserted in a gelatin phantom. The ATI force torque sensor is used to measure the interaction efforts exerted at the base of the needle. A picture of the setup is shown in Fig. 5.4. The position of the EM tracking system is registered in the frame of the UR3 robot before the experiments, using the method that was presented in section 3.6.1.
The force torque sensor is also calibrated beforehand to remove the sensor biases and the effect of the weight of the NID, in order to reconstruct the interaction forces applied to the base of the needle (see Appendix A). A fixed virtual target is defined just before the beginning of the insertion, such that it is at a fixed position in the initial frame of the needle tip. We use the two-body model presented in section 2.4.2, with polynomial needle segments of order r = 3, to represent the part of the needle that is outside of the NID, from the frame {F_b} depicted in Fig. 5.4 to the needle tip. We fix the length of the needle segments to 1 cm, resulting in n = 1 segment of 8 mm when the needle is retracted to the maximum inside the NID, and n = 11 segments with the last one measuring 8 mm when the needle is fully outside of the NID. We use a rather hard phantom, such that we set the

List of Publications

Appendix A: Force sensor calibration

This appendix presents the registration process and the computation method, used in the experiments of chapters 3 and 5, to retrieve the interaction forces and torques applied at the base of the needle without the gravity component due to the mass of the needle insertion device. The force f ∈ R^3 measured by the sensor can be expressed according to

  f = m_d g + b_f + f_ext,

where m_d is the mass of the needle insertion device (NID), g ∈ R^3 is the gravity vector, b_f ∈ R^3 is the sensor force bias and f_ext ∈ R^3 is the rest of the forces applied to the sensor, with each vector defined in the sensor frame. The torque t ∈ R^3 measured by the sensor can be expressed similarly according to

  t = c_d × (m_d g) + b_t + t_ext,

where × denotes the cross product operator, c_d ∈ R^3 is the position of the center of mass of the NID, b_t ∈ R^3 is the sensor torque bias and t_ext ∈ R^3 is the rest of the torques applied to the sensor, with again each vector defined in the sensor frame. Note that f_ext and t_ext correspond to the contribution of the interaction forces and torques that we want to measure.
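The gravity and bias compensation described by this measurement model can be sketched in a few lines. This is a minimal Python sketch, not the implementation used in the experiments; the numerical values of m_d, c_d and the biases are placeholders for illustration only:

```python
import numpy as np

# Placeholder calibration values (illustrative only, not the calibrated ones).
m_d = 0.35                          # mass of the NID (kg)
c_d = np.array([0.0, 0.01, 0.03])   # center of mass of the NID in the sensor frame (m)
b_f = np.array([0.1, -0.2, 0.05])   # force bias (N)
b_t = np.array([0.01, 0.0, -0.02])  # torque bias (N.m)

def interaction_wrench(f, t, R_wf, g_w=np.array([0.0, 0.0, -9.81])):
    """Remove the gravity and bias contributions from raw sensor readings.

    f, t : raw force and torque measured in the sensor frame
    R_wf : rotation from the world frame to the sensor frame
    g_w  : gravity vector in the world frame
    Returns the interaction force f_ext and torque t_ext in the sensor frame.
    """
    g = R_wf.T @ g_w                        # gravity expressed in the sensor frame
    f_ext = f - m_d * g - b_f               # f = m_d g + b_f + f_ext
    t_ext = t - np.cross(c_d, m_d * g) - b_t  # t = c_d x (m_d g) + b_t + t_ext
    return f_ext, t_ext
```

Inverting the two measurement equations this way recovers f_ext and t_ext exactly whenever the mass, center of mass and biases are known.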
Let us define g_w as the gravity vector expressed in the world reference frame and w R_f ∈ SO(3) as the rotation from the world frame to the force sensor frame, such that

  g = w R_f^T g_w.

During the insertion procedure, the contribution of the gravity and the biases can be removed, depending on the pose of the NID, to isolate the interaction forces. Then,

  f_ext = f - m_d (w R_f^T g_w) - b_f,
  t_ext = t - c_d × (m_d w R_f^T g_w) - b_t.

The interaction forces f_b ∈ R^3 and torques t_b ∈ R^3 applied to the base of the needle can then be expressed in the needle base frame according to

  f_b = f R_b^T f_ext,
  t_b = f R_b^T (t_ext - f T_b × f_ext),

where f R_b ∈ SO(3) and f T_b ∈ R^3 are, respectively, the rotation and translation from the sensor frame to the needle base frame. In practice only the orientation w R_e ∈ SO(3) of the end effector of the UR3 is known, thanks to the robot odometry, such that w R_f is actually computed according to

  w R_f = w R_e e R_f,

where e R_f is the constant rotation from the end-effector frame to the sensor frame. Noting g_i the gravity vector associated to the i-th orientation of the UR3 end effector, and f_i and t_i the corresponding force and torque measurements, b_f and m_d can first be computed to minimize the cost function J_f defined as

  J_f = Σ_i || f_i - m_d g_i - b_f ||²,

which leads after calculations to the corresponding linear least-squares solution. Then b_t and c_d can be computed to minimize the cost function J_t defined as

  J_t = Σ_i || t_i - c_d × (m_d g_i) - b_t ||²,

which leads after calculation to the associated least-squares solution, obtained with the skew-symmetric matrix

  [g_i]_× = [   0      -g_i,z    g_i,y
              g_i,z      0      -g_i,x
             -g_i,y    g_i,x      0    ],

where g_i,x, g_i,y and g_i,z are the components of g_i.
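The identification of m_d and b_f from measurements taken under several end-effector orientations can likewise be sketched numerically. The sketch below solves the least-squares problem J_f directly with a generic solver rather than with closed-form expressions, so it is an equivalent but not identical computation:

```python
import numpy as np

def calibrate_force(forces, gravities):
    """Estimate the NID mass m_d and force bias b_f from raw forces f_i
    measured under several known gravity directions g_i, by minimizing
    J_f = sum_i || f_i - m_d * g_i - b_f ||^2.

    forces    : list of raw force vectors f_i (sensor frame)
    gravities : list of gravity vectors g_i (sensor frame)
    Returns (m_d, b_f). At least two non-parallel g_i are required.
    """
    n = len(forces)
    # Stack one 3-row block per measurement; unknowns are [m_d, b_f].
    A = np.zeros((3 * n, 4))
    y = np.zeros(3 * n)
    for i, (f, g) in enumerate(zip(forces, gravities)):
        A[3 * i:3 * i + 3, 0] = g          # column multiplying m_d
        A[3 * i:3 * i + 3, 1:] = np.eye(3)  # columns multiplying b_f
        y[3 * i:3 * i + 3] = f
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x[0], x[1:]
```

With noise-free synthetic data the estimates are exact; with real measurements the residual of the least-squares fit gives an indication of the sensor noise level.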
Source: HAL 00175403 (2007), https://hal.science/hal-00175403/file/Ramos_JA073412H-31-44_Revised.pdf
Yuxia Luan, Laurence Ramos (email: [email protected])

Real-time observation of polyelectrolyte-induced binding of charged bilayers

We present real-time observations by confocal microscopy of the dynamic behavior of multilamellar vesicles (MLVs), composed of charged synthetic lipids, when put in contact with oppositely charged polyelectrolyte (PE) molecules. We find that the MLVs exhibit astonishing morphological transitions, which result from the discrete and progressive binding of the charged bilayers induced by a high PE concentration gradient. Our physical picture is confirmed by quantitative measurements of the fluorescence intensity as the bilayers bind to each other. The shape transitions lead eventually to the spontaneous formation of hollow capsules, whose thick walls are composed of lipid multilayers condensed with PE molecules. This class of objects may have some (bio)technological applications.

Introduction

Liposomes are often studied as simplified models of biological membranes 1,2 and are extensively used in industrial areas ranging from pharmacology to bioengineering. [Lasic, Handbook of Biological Physics 1a, Structure and Dynamics of Membranes] The biomimetic properties of the membrane also make liposomes attractive as vessels for model systems in cellular biology. [Karlsson] 5 Composite systems of lipid bilayers and polymers have received special attention due to their similarity with living systems, such as the plasma membrane and various organelle membranes, that mainly consist of complex polymers and lipids. 6 Experimental investigations of vesicle/polymer mixed systems also aim at improving the stability and at controlling the permeability of liposomes for drug delivery or targeting, or for gene therapy.
7-12 For example, stabilization is usually obtained by loose hydrophobic anchoring of water-soluble chains that do not significantly perturb the bilayer organization, such as alkyl-modified dextran or pullulan with a low degree of substitution, 13,14 long poly(ethylene glycol) capped with one or two lipid anchors per macromolecule, or poloxamers. 15,16 It was shown recently 7,17-20 that water-soluble polymers, upon binding to vesicles, can markedly affect the shape, curvature, stiffness, or stability of the bilayer. However, the mechanisms of these polymer-induced reorganizations of membranes sometimes remain conjectural, although it is clear that the hydrophobicity of the polymer plays an important role. On the other hand, interactions between surfactants and polymers in bulk solution have been extensively investigated, due to their numerous applications ranging from daily life to various industries (e.g. pharmaceutical and biomedical applications, detergency, enhanced oil recovery, paints, food and mineral processing). [Goddard, Interactions of Surfactants with Polymers and Proteins; Zintchenko] 23-25 Charged amphiphilic molecules, like lipids or surfactants, and oppositely charged polyelectrolytes (PE) spontaneously form stable complexes, which are very promising objects because of their great variability in structures and properties. 26,27 In this context, the interaction of charged bilayers with oppositely charged PE is of particular interest. For instance, in bioengineering, the interactions between lipid bilayers and DNA molecules are crucial for gene therapy. 28-32 When charged bilayers interact with a polyelectrolyte of opposite charge, it is generally accepted that electrostatic interactions induce the bridging of the lipid bilayers by the PE molecules.
33-35 The resulting structure of the PE/lipid complexes is a condensed lamellar phase, with PE strands intercalated between the lipid bilayers. However, although most studies provide a general picture of the PE/lipid structure, very few have addressed the question of the mechanism of formation of the complexes, or the associated issue of the dynamics and intermediate steps of the assembly process. In addition to our previous work, 36 two noticeable exceptions include the work of Kennedy et al., who found that the order of addition of DNA to cationic lipid, or vice versa, could affect the size and size distribution of the complexes, 31 and that of Boffi et al., who showed that two distinct types of DNA/lipid complexes can be formed depending on the sample preparation procedure. 32 Nevertheless, the determinants of the assembly and dynamics of complex formation remain poorly understood. Under certain conditions, lipids can self-assemble into giant vesicles, the size of living cells. These are very elegant objects that allow manipulation and real-time observation with a light microscope 37-40 and that have opened the way to a wealth of theoretical and experimental investigations. 41 However, unlike experimental work on giant unilamellar vesicles, experimental reports on real-time observation of the effect of a chemical species on the stability and shape changes of multilamellar vesicles are extremely scarce. 42-46 Nevertheless, as demonstrated in this paper, richer behaviors can be expected when multilamellar vesicles are used, since cooperative effects due to the dense packing of bilayers may play an important role. In the present study, we employ a real-time approach to study the dynamics of the interactions between charged membranes and oppositely charged PE molecules, and monitor by light and confocal microscopy the behavior of multilamellar vesicles (MLVs) made of a synthetic lipid in a concentration gradient of PE.
When the gradient is strong enough, the MLV undergoes spectacular morphological transitions, which enable us to visualize the progressive binding of charged bilayers induced by oppositely charged PE molecules. Specifically, these shape transitions lead eventually to the spontaneous formation of a hollow capsule with thick walls that are presumably composed of lipid multilayers condensed with PE molecules. This class of objects may have some potential (bio)technological applications 47 and this contribution could have some significance for mimicking bioprocesses. We first present our experimental observations, then describe the mechanisms at play and provide quantitative measurements, based on the fluorescence intensity, which support our physical picture. Finally, we briefly conclude.

Experimental Results

We use vesicles made of didodecylammonium bromide (DDAB) as the synthetic lipid, and an alternating copolymer of styrene and maleic acid in its sodium salt form as the anionic polyelectrolyte (PE). The DDAB bilayers are labeled with a fluorescent surfactant for confocal and fluorescence imaging. We follow by light and confocal microscopy the behavior of the DDAB vesicles when they are submitted to a PE concentration gradient. The Materials and Methods are described in the Supporting Information.

Interaction between Giant Unilamellar Vesicles and Polyelectrolyte

The time-dependent morphological changes of a giant unilamellar vesicle (GUV) are investigated when the GUV is exposed to a concentrated PE solution (30% w/w). The GUV is floppy and fluctuating before interacting with the PE. Upon contact with the polyelectrolyte solution, the bilayer becomes tense and the vesicle immediately turns perfectly spherical and taut. Some patches, which appear very intense in fluorescence, gradually form on the surface of the GUV. The patches thicken with time (Figure 1). Concomitantly, the size of the GUV decreases.
These processes lead ultimately to the collapse of the GUV, resulting in a single small lump made of a compact DDAB/PE complex. The duration of the whole process is of the order of several minutes. Analogous observations have been recently reported for the interaction of GUVs with small unilamellar vesicles, 48 with the matrix protein of a virus, 49 and with a flavonoid of green tea extracts. 50

Interaction between Multilamellar Vesicles and Polyelectrolyte

Phase diagram

In sharp contrast to the case of GUVs, the interaction of charged multilamellar vesicles (MLVs) with polyelectrolyte molecules of opposite charge leads to unexpectedly rich phenomena. Interestingly, we notice that completely different morphological transitions are observed depending on C_PE, the PE concentration. The "phase" diagram shown in Figure 2 summarizes our experimental findings for MLVs put into contact with different concentrations of PE. Successive peeling events are found when C_PE < ~2%, as shown in Figure 2A, while concentrated PE (C_PE > ~10%) induces the appearance of spectacular morphological changes of the MLV (Figures 2B and 2C). We confirmed by differential interference contrast and phase contrast microscopy that non-fluorescent MLVs exhibit identical morphological changes, and that all dynamical processes reported below are preserved.

Weak Polyelectrolyte Gradient

When a MLV is exposed to a dilute PE concentration, the size of the MLV gradually decreases and, concomitantly, small aggregates form in the vicinity of the MLV. The MLV is peeled progressively, layer after layer, one DDAB/PE complex being formed for each peeling event, while the interior of the MLV always remains intact. Peeling events proceed until the MLV is completely used up. The final state of the MLV is a pile of small aggregates of size ranging from 2 to 10 μm.
The whole consumption of a MLV through the peeling mechanism is a slow process that lasts more than 10 minutes, each peeling event lasting about tens of seconds (Figure 3). We note that the effect of a weak polyelectrolyte gradient on a MLV has been reported previously. 45 However, the novel confocal microscopy pictures given in Figure 3 show unambiguously a single event, which provides compelling evidence for a peeling mechanism.

Strong Polyelectrolyte Gradient

In sharp contrast with our observations for a dilute polyelectrolyte solution, for C_PE > ~10% the morphological transitions of a MLV lead to a finite-size cellular object, with water encapsulated in the cells, and whose walls are very likely made of DDAB/PE complexes (Figure 4E). The angles between the thick walls measured in two-dimensional pictures are about 120°, similar to the angle at which films meet in a three-dimensional dry foam. [Weaire, The Physics of Foams] When the size of the initial MLV is sufficiently small, hollow capsules are eventually obtained (Figures 4B-D), whose size is sensibly equal to that of the initial MLV. Although the large-scale structure depends dramatically on the initial PE concentration, the microscopic structure in all cases is a condensed lamellar phase (Figure 4G), as checked by small-angle X-ray scattering (Figure 4F), whose periodicity is of the order of 3.0 nm, hence only slightly larger than the bilayer thickness (2.4 nm). The typical whole sequence of morphological transformations of a MLV when it is exposed to a concentrated PE solution is shown in the time-series pictures of Figure 5. Before the MLV starts to deform significantly, the fluorescence intensity inside the vesicle becomes heterogeneous, the higher intensity being localized in the region with higher PE concentration. The surprising buds (Figures 2B-C), composed of well-separated sets of bilayers, form subsequently.
Interestingly, we note that the first striated buds form systematically where the PE concentration is lower. The interaction dynamics then speeds up and the MLV is found to experience rapid fluctuations, with the formation of protrusions and buds, while dynamical events can also be distinguished in the core of the vesicle. The initially "full" MLV finally appears essentially devoid of DDAB bilayers: the inside of the resulting object is essentially black, with some thick fluorescent strands. This cellular soft object therefore forms a peculiar kind of biliquid foam. 53-55 As opposed to our observations for a weak polyelectrolyte gradient, the dynamics is here very fast: the whole sequence lasts less than 1 minute. We finally note that we have performed some additional tests. First, we have done experiments in salted water (with NaBr) instead of pure water. The main observations described above are preserved at a salt concentration of 10^-3 M. With a NaBr concentration of 10^-2 M, our experimental observations in pure water cannot be reproduced, due to the lack of stability of the MLVs. 56 Secondly, we have also investigated the interaction of MLVs with other polymers, both neutral and charged (as listed in the Supporting Information), and found results similar to those described here only with the polystyrenesulfonate polyanions, thus confirming that attractive electrostatic interactions between the DDAB bilayers and the polymer molecules are a key ingredient for our observations.

Discussion

Due primarily to the strong electrostatic interactions between charged bilayers and oppositely charged polyelectrolyte molecules, DDAB/PE complexes form, whose structure is a condensed lamellar phase.
36,45 The confocal pictures of a GUV interacting with a PE solution (Figure 1) provide a dynamic observation of the formation of these complexes. In this section, we discuss the experimental findings on the formation of DDAB/PE complexes when polyelectrolyte molecules interact with a MLV. As we showed in the experimental section, depending on the PE concentration, the polyelectrolyte molecules interact with a unique bilayer (when the PE gradient is weak), or with the entire stack of bilayers (when the PE gradient is strong).

Interaction between PE and a unique bilayer

Upon contact with a weak PE gradient, a MLV is peeled off gradually. Each peeling event first implies the formation of a pore, which expands until failure of the entire bilayer. We have previously visualized the expansion of a pore by light microscopy. 45 Pore formation in unilamellar vesicles has been observed under different experimental conditions, including the application of an electric field, 57,58 interaction with proteins 37,50 or with a water-soluble polymer with hydrophobic pendent groups, 59 or attractive interactions with a patterned surface. 60 In our case, pores form because of the adsorption of PE onto the DDAB bilayers, due to a strong electrostatic attraction between the two species. In fact, because of these interactions, part of the surface area of the external bilayer may be used up to form PE/lipid complexes. This creates a tension in the bilayer, which ruptures above a critical tension, leading to the formation of a pore. The peeling mechanism was previously discussed in detail. 45

Interaction between PE and a stack of bilayers

PE-induced binding of two bilayers as elementary mechanism

We argue that the astonishing structures exhibited upon contact of a multilamellar vesicle with a strong gradient of PE concentration are due to a discrete and progressive binding of the bilayers, induced by the PE molecules as they diffuse within the multilayer material.
The elementary initial event can be imaged in real time and is shown in Figure 6. It consists of budding starting from the outermost bilayer. This structure results from the expulsion of the water located between the outermost and the second outer bilayer, as they rapidly bind to each other due to the bridging of the oppositely charged PE between them. The binding front can be followed by confocal imaging: with time, the binding quickly spreads and the water between the bilayers is driven into a small and spherical water pool. Such events typically last a few seconds, and are faster when the PE concentration gradient is higher. A scheme of the microscopic process is shown in Figure 6E. We note that a temperature-induced binding of bilayers has been observed by light microscopy, but the dynamics could not be followed. 61 The succession of such events, i.e. binding of the second outer with the third outer bilayer, then binding of the third outer with the fourth outer bilayer, and so on, leads to the striated structures shown in Figures 2B, 2C and 7. These structures originate from the successive formation of water pools, while the core of the MLV remains intact. The further interaction with PE leads to the binding of bilayers in the core of the MLV: the initially homogeneous contrast inside the MLV (Figures 2A and 4A) becomes progressively extremely heterogeneous as bilayers bind to each other and leave large portions free of bilayers. This is simultaneously accompanied by more pronounced and erratic shape transformations, which lead ultimately to the formation of a cellular biliquid foam or hollow capsule (Figures 4 and 5). More quantitatively, the volumes measured by image analysis can be compared with the volumes evaluated from the simple model (scheme, Figure 6E).
For the water thickness between DDAB bilayers we take 80 nm, the maximum swelling of the lamellar phase prior to interaction with PE (Dubois et al.), and calculate, for the MLV of Figure 6 (radius 10.7 μm), the volume of the water pool after binding of the outermost and second outer bilayers, V_c. We find V_c = 130 μm3. We compare V_c to the volumes V_m of the water pockets evaluated from Figures 6C and D. We find V_m,6C = 400 μm3 ≈ 3V_c and V_m,6D = 530 μm3 ≈ 4V_c, respectively, as expected since 3 and 4 elementary events have occurred in C and D, respectively (as clearly distinguished in a movie of the process, movie S1 in SI). Similarly, we measure that the total water volume for 10 bilayers (white circle, Figure 2C) is about 4300 μm3, while we calculate that the volume resulting from the binding of two bilayers is about 470 μm3, hence roughly 10 times smaller, as expected. The very good agreement between the numerical values confirms the mechanism we propose, and suggests that there is no water release during this process.

Quantification of the discrete binding of the bilayers

We follow the binding of individual surfactant bilayers into thick bundles with confocal microscopy (Figure 7), and analyze the fluorescence intensity distribution with ImageJ. In Figure 8A, we show that the intensity profile, perpendicular to a bilayer, is homogeneous along the bilayer. We define I, the integrated intensity, as the area of the peak of the intensity profile. We found that I is constant for all individual bilayers (labeled a to h). The empty symbols in Figure 8C show I along the thick bundle P1-P2 (marked by the crosses). We measure that the intensity increases along the thick bundle from P1 to P2, which precisely reveals a discrete and continuous increase of I as more and more bilayers bind.
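The water-pool bookkeeping above can be checked numerically. The sketch below (a minimal back-of-the-envelope check, not taken from the paper's SI) approximates the elementary water pool created by binding the two outermost bilayers as a thin spherical shell of thickness d = 80 nm around a vesicle of radius R = 10.7 μm:

```python
import math

def shell_volume(radius_um: float, thickness_um: float) -> float:
    """Volume (um^3) of the thin water shell between two concentric bilayers."""
    r_out = radius_um
    r_in = radius_um - thickness_um
    return 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)

R = 10.7     # MLV radius, um (Figure 6)
d = 0.080    # maximum water thickness between DDAB bilayers, um (80 nm)

v_c = shell_volume(R, d)          # elementary water pool volume
print(f"V_c ~ {v_c:.0f} um^3")    # ~114 um^3, same order as the reported 130 um^3

# Water pockets of Figures 6C and 6D, expressed in elementary events:
for v_m in (400.0, 530.0):
    print(f"V_m = {v_m:.0f} um^3 ~ {v_m / v_c:.1f} x V_c")
```

With this crude geometry the shell volume comes out near 114 μm3, slightly below the paper's 130 μm3 (the exact value depends on where the radius is measured); using the paper's V_c, the measured pockets correspond to ~3 and ~4 elementary events, as stated in the text.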
To quantify this, we add to the intensity (along P1-P2) between bilayers n and n+1 the intensities of all bilayers with labels ranging from n+1 to h (h is the last bilayer). The calculated values are reported as full symbols in Figure 8C. Interestingly, we find that, at any step, these calculated intensities are a very good evaluation of the intensity (full hexagons) of the thickest part of the bundle (close to P2). In fact, all full symbols in Figure 8C lie on the same horizontal line. This provides further and conclusive evidence of a discrete and progressive binding of bilayers. Furthermore, our calculations demonstrate that the number of individual bilayers that compose a bundle can be evaluated from the fluorescence intensity of the bundle. For instance, by comparing the intensity of a single bilayer to that of the thickest wall, we evaluate that for the cellular composite material (Figure 4E) the thickest external wall contains ~20 bilayers.

Kinetics is a key parameter for our observations

Importantly, we have noted that the events are polarized, the "budding" always occurring at the point diametrically opposite the point where the PE concentration is highest. This indicates that the binding always starts where the PE concentration is higher, and that the process is sensitive to the PE concentration gradient. In addition, the formation of buds indicates that the binding kinetics is faster than the diffusion of water across the compacted bilayers. These experimental observations are consistent with the fact that the key parameter for this novel observation is to expose the MLV to a strong gradient of PE. In addition, the hollow capsules formed have more or less the same size as the initial MLV when the initial MLV is not too large. This supports the conclusion that the water release, if any, is weak during the whole process, in full agreement with a binding kinetics faster than the diffusion of water across the bilayers.
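The intensity bookkeeping described earlier (integrated peak intensity I per bilayer, bundle intensity as a multiple of I) can be sketched numerically. The example below uses synthetic Gaussian profiles with illustrative values only, not the measured ImageJ data, to show how the bilayer count of a bundle follows from the intensity ratio:

```python
import numpy as np

def integrated_intensity(x, profile):
    """Area of an intensity peak measured perpendicular to a bilayer (trapezoid rule)."""
    dx = np.diff(x)
    return float(np.sum(0.5 * dx * (profile[:-1] + profile[1:])))

x = np.linspace(-1.0, 1.0, 501)     # position across the wall, um
sigma = 0.1                         # apparent peak width, um

def gaussian_peak(amplitude):
    return amplitude * np.exp(-x**2 / (2 * sigma**2))

i_single = integrated_intensity(x, gaussian_peak(1.0))   # one bilayer
i_bundle = integrated_intensity(x, gaussian_peak(20.0))  # thick composite wall

# Since I is constant per bilayer, the ratio counts the bilayers in the bundle:
n_bilayers = i_bundle / i_single
print(f"estimated bilayers in bundle: {n_bilayers:.0f}")
```

This is the same logic used in the text to estimate ~20 bilayers in the thickest wall of the cellular composite material of Figure 4E.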
In addition, our observations imply that the PE molecules penetrate inside the MLV. Very generally, the entry of the PE molecules into a MLV is driven by three forces: electrostatic interactions (between the surfactant headgroups and the maleic acid units of the PE), hydrophobic interactions (between the surfactant tails and the styrene units of the PE) and osmotic pressure (due to the high concentration of PE outside the MLV). Microscopically, the PE molecules may deform the bilayer, weaken the cohesion among the organized DDAB molecules, and create defects; hence membrane subunits may temporarily be separated, allowing the passage of PE molecules (as observed with lipid vesicles in the presence of surfactant).[62][63][64] We finally note that the penetration of a polymer across a lipid bilayer has recently been observed experimentally,[65][66] in agreement with our experimental findings.

Conclusions

In summary, we have provided experimental data on the kinetics of formation of synthetic charged lipid/polyelectrolyte complexes. By using multilamellar vesicles and a high polyelectrolyte concentration gradient, we were able to visualize by confocal imaging the progressive binding of the charged bilayers as they interact with oppositely charged polyelectrolyte. Although PE/lipid interactions have previously been visualized on the nanometer scale by atomic force microscopy,[67][68][69] our experiments constitute, to the best of our knowledge, one of the first observations on the micrometer scale. We have described the microscopic mechanisms at play and have provided quantitative measurements, which support our physical picture. The key parameter for this novel observation is to expose the MLV to a strong gradient of PE. Indeed, we have demonstrated that a weak gradient induces radically different morphological transitions.
Our description of a gradual binding process of charged bilayers induced by an oppositely charged polyelectrolyte may shed some light on the more complicated behaviors of cell membranes induced by different kinds of charged proteins. Finally, we have also shown that a strong gradient eventually induces the spontaneous evolution of a MLV towards a hollow capsule. Our simple approach may be useful in designing a class of soft composite polyelectrolyte/lipid shells for applications in drug delivery or controlled drug release.

Figure 1. Evolution of the morphology of a giant unilamellar vesicle (GUV) upon contact with a concentrated PE solution (30% W/W). Timing is indicated in white text. The scale is the same for all pictures. Scale bar = 5 μm.

Figure 2. "Phase" diagram of MLV in contact with different PE concentrations, viewed by confocal imaging. Scale bars = 10 μm.

Figure 3. One peeling event of a MLV induced by a diluted PE (0.5% W/W). Timing is indicated in white text. The scale is the same for all pictures. Scale bar = 10 μm.

Figure 4. (A, B) Differential Interference Contrast, (C) fluorescence and (D, E) confocal imaging of (A) a MLV

Figure 5. Time series showing the shape transformation of a MLV upon contact with a concentrated PE solution

Figure 6. (A-D) Series of the morphological transformation of a MLV as it interacts with PE. The PE diffuses from

Figure 7. Pictures showing individual bilayers binding into a thick bundle. Scale bar = 10 μm. Timing is indicated in

Figure 8.

Acknowledgment. We acknowledge financial support from the CNRS-CEA-DFG-MPIKG Network "Complex fluids: from 3 to 2 dimensions" and from the European Network of Excellence "SoftComp" (NMP3-CT-2004-502235). We thank G. Porte for fruitful discussions.
Supporting Information Available: Materials and Methods; movie showing the initial budding process as a MLV interacts with a concentrated PE solution (30% W/W).
https://theses.hal.science/tel-01754038/file/2017PA066209.pdf
M. Alejandro Perez-Luna

Abbreviations
[18-C-6]: 1,4,7,10,13,16-hexaoxacyclooctadecane
Ar: aryl
BDE: Bond Dissociation Energy
Bn: benzyl
Boc: tert-butoxycarbonyl
bpy: 2,2'-bipyridyl
bpz: 2,2'-bipyrazine
COD: 1,5-cyclooctadiene
Cp: cyclopentadienyl
d: doublet
DBU: 1,8-diazabicyclo[5.4.0]undec-7-ene
dppe: ethylenebis(diphenylphosphine)
ESI: Electro-Spray Ionisation
EWG: Electron Withdrawing Group
HOMO: Highest Occupied Molecular Orbital
HRMS: High Resolution Mass Spectrometry

Acknowledgments

I would particularly like to thank my supervisors, Cyril Ollivier and Louis Fensterbank, for giving me the opportunity to carry out my PhD within the MACO team. Thank you for your availability, your advice and remarks, and for your understanding. Although the beginning of the PhD may have been complicated, the road travelled over these three years, both professionally and personally, has been very enriching, and I want to thank you once again for that. I also thank the members of the MACO team, permanent members as well as present and past students, for the good atmosphere and the good times spent at your side. Finally, a big thank you to the people close to me, family and friends, who supported me throughout these three years.

Summary

Although ionic reactions occupy an important place in synthetic chemistry, processes involving radical species have established themselves as alternatives of choice. The first synthetic radical, the triphenylmethyl radical, was evidenced in 1900 by Moses Gomberg.1 Despite this discovery, interest in synthetic radical chemistry took time to emerge. Thanks to the work of several chemists, notably Paneth, Hey and Kharasch, and to the discovery of EPR in the 1940s, the study of reactions involving radicals was simplified, which broadened the applications of radical chemistry in organic synthesis.
Among the processes involving radical species, the development of chain processes mediated by tin hydride, atom- or group-transfer reactions, but also cyclization reactions and cascade processes, strongly contributed to the development of synthetic radical chemistry. Despite all these efforts, their use in industrial processes remains limited. Radical polymerization and the oxidation of cumene for the production of phenol nevertheless deserve to be mentioned. Indeed, radicals were long considered by some chemists as highly reactive, uncontrollable species. Yet the contributions of Kochi, Davis, Giese, Minisci, Curran, Hart and others demystified the reactivity of radicals and made them valid candidates for synthetic chemistry. In contrast to organometallic reagents such as Grignards, organolithiums or cuprates, which must be handled with care, radical precursors allow radicals to be generated in partially hydrated solvents. Moreover, radical reactions have the advantage of being insensitive to the nature of the solvent, of being chemoselective, and of being feasible without protecting groups. In fact, several options are available for the formation of radicals. For chain reactions, various initiators (trialkylboranes, peroxides, azo compounds, …) and radical mediators (stannanes, silanes, thiols, …) are available. Radical species can also be generated by single-electron transfer using stoichiometric amounts of metal salts, either oxidatively (manganese, cerium, silver, …) or reductively (samarium, titanium, zinc, nickel, …). All these alternatives are so many tools available to the chemist for the formation of carbon-carbon or carbon-heteroatom bonds.
Although it has advantages, radical chemistry also has its limitations. Indeed, radical reactions must be carried out in dilute media, which limits the possibility of running them on a large scale. Moreover, in order to trap radicals efficiently, an excess of radical acceptor is often necessary. Note also that radical initiators present explosion hazards. But the most negative aspect of this chemistry is the recurrent use of tin-based mediators, which are toxic to humans and difficult to remove. With the advent of the concept of green chemistry at the end of the 1990s,3 many efforts were made to avoid the use of tin derivatives in synthetic methods. Alternatives were progressively proposed to replace stannylated mediators. Methods using catalytic amounts of these reagents, or tin-based solid supports, were first considered. Other radical mediators such as the less toxic silanes, phosphines or thiols proved to be good but less efficient reagents. Metals mediating single-electron transfers were also considered as alternatives, but they must be used in stoichiometric or over-stoichiometric amounts, and each is specific to a given chemical function. From a sustainability and ecology standpoint, these alternatives to tin derivatives are limited. However, a research field initiated at the end of the 1970s, in which radicals are formed by single-electron transfer from a photoactivated metal complex through the development of visible-light photoredox catalysis, has attracted numerous research groups over the last decade.
After activation by a photon, followed by the promotion of an electron into a higher-energy orbital, certain metal complexes, or photocatalysts, have the ability to reduce or oxidize a substrate by single-electron transfer, thereby generating a radical species. The radicals thus formed can be used, as with the previous methods, in elementary radical processes including recombination, electron transfer, β-fragmentation, homolytic addition and homolytic substitution. The oxidation and reduction processes are governed in part by thermodynamic factors, and more particularly by the redox potentials of the species involved. In order to tune the oxidation or reduction potential of the photocatalysts, the metal (Cu, Ru, Ir, …) or the ligands can be modified, which gives access to a wide range of redox potentials usable according to the substrates to be reduced or oxidized (Scheme 1).4 Even though most photocatalysts are metal complexes, a growing number … Bis-catecholato silicates, described by Corriu,13 can be formed from trialkoxysilanes or trichlorosilanes and catechol. These anionic species possess relatively low oxidation potentials (~ +0.3 to +0.9 V vs SCE).14 An ammonium version as well as a potassium version of these species could be obtained. In our case, potassium alkyl bis-catecholato silicates were the heart of our study. However, it was noticed that the potassium version of the silicates decomposes progressively. The addition of a crown ether, [18-C-6], complexes the potassium and prevents the observed degradation. … [Ir(dF(CF3)ppy)2bpy](PF6) and 3 mol% of a nickel(0) complex [Ni(dtbbpy)].17

17 C. Lévêque, L. Chenneberg, V. Corcé, J.-P. Goddard, C. Ollivier and L. Fensterbank, Org. Chem. Front., 2016, 3, 462-465.

Scheme 9.
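Whether a photoexcited catalyst can oxidize a silicate is, to first order, a matter of redox potentials. The sketch below applies a simplified Rehm-Weller balance (work terms neglected); the potential values are assumed, illustrative round numbers of the order quoted in the text, not measured data:

```python
# Simplified driving force for photoinduced single-electron transfer:
# dG (eV) = -(E_red(PC*) - E_ox(substrate)), potentials in V vs SCE.

def delta_g_et(e_red_pc_star: float, e_ox_substrate: float) -> float:
    return -(e_red_pc_star - e_ox_substrate)

# Illustrative values (V vs SCE):
E_OX_SILICATE = 0.6   # alkyl bis-catecholato silicates: roughly +0.3 to +0.9 V
E_RED_PC_STAR = 1.2   # assumed excited-state reduction potential of an Ir photocatalyst

dg = delta_g_et(E_RED_PC_STAR, E_OX_SILICATE)
print(f"dG_ET ~ {dg:+.2f} eV ({'favorable' if dg < 0 else 'unfavorable'})")
```

With these numbers the electron transfer is exergonic by several tenths of an eV, which is consistent with the easy photooxidation of the silicates; a weaker excited-state oxidant (e.g. +0.3 V) against the upper end of the silicate range (+0.9 V) would make it endergonic.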
Photoredox/nickel dual catalysis in the presence of silicates

With the aim of making the reaction conditions ever "greener", we then envisaged replacing the iridium photocatalyst (very expensive and not recyclable after catalysis) with an organic photocatalyst. However, the latter have less favorable properties for photoredox catalysis than their metal analogues. Since their excited-state lifetimes are too short, the probability of performing single-electron transfers is reduced. Among the chromophores that have proved their efficiency in photoredox catalysis, we selected fluorescein, Eosin Y and the Fukuzumi catalyst. In trapping experiments with TEMPO, only the Fukuzumi catalyst showed a moderate activity towards the benzyl silicate. Indeed, non-activated alkyl silicates could not be converted efficiently under these conditions.18

Scheme 10. Photooxidation of alkyl bis-catecholato silicates by organic photocatalysts

In 2012, Adachi's group, searching for new molecules for the preparation of organic light-emitting diodes (OLEDs), described a family of N-carbazolyl dicyanobenzenes with particularly high-performing properties for this type of application.19 A few years later, after determining the redox potentials of these compounds, Zhang showed that 1,2,3,5-tetrakis(carbazol-yl)-4,6-dicyanobenzene (4CzIPN) could be used as a photocatalyst20 under photoredox/nickel dual catalysis conditions, reproducing the reactions described by Molander and MacMillan.

19 H. Uoyama, K. Goushi, K. Shizu, H. Nomura and C. Adachi, Nature, 2012, 492, 234-238.
20 J. Luo and J. Zhang, ACS Catal., 2016, 6, 873-877.
The dual catalysis conditions we developed subsequently allowed us to couple alkenyl bromides with potassium [18-crown-6] bis(catecholato)-acetoxypropylsilicate. Non-activated alkenyl bromides could be converted into coupling products in moderate yields, although activated alkenyl bromides gave better results. The use of diastereomerically pure β-bromo/chlorostyrenes as electrophiles gave coupling products with retention of the double-bond geometry when 4CzIPN was used.21

Scheme 14. Extension of the coupling method to alkenyl halides

Furthermore, we observed that the styryl derivatives engaged in catalysis could isomerize under light irradiation in the presence of the organic photocatalyst. In view of these results, we concluded that 4CzIPN can act as a photooxidant but also as a simple photosensitizer. Nevertheless, the oxidation of the silicates proved faster than the photosensitization process leading to isomerization.21

Scheme 15. Photosensitization vs. photooxidation by 4CzIPN

After studying C(sp3)-C(sp2) cross-coupling between alkyl silicates and alkenyl or aryl halides under photoredox/nickel dual catalysis conditions, our work turned to the formation of C(sp3)-C(sp3) bonds, this time using alkyl halides.22 Optimization of the system will allow the development of new catalytic systems involving silicates as precursors of carbon-centered radicals.

Chapter I

Despite these efforts, the use of radical reactions in industrial processes has remained limited. But some examples can be mentioned, like radical polymerization or the production of phenol from oxidation of cumene.
Although radicals long remained mysterious and were regarded as uncontrollable by part of the chemistry community, the contributions of Kochi, Curran, Giese, Hart and others6 demystified radicals and turned these species into valid alternatives to the usual anionic reactions. Indeed, contrary to organometallic reagents, radical precursors can provide highly reactive neutral radicals under mild conditions, in air and in non-distilled solvents. In addition, radical reactions are highly chemoselective and can be performed without protecting groups. Several conditions for the generation of radicals are available. Radical initiators (Et3B, peroxides, azo compounds, …) and mediators (stannanes, silanes, thiols, …) can provide radicals for chain reactions. Single-electron transfers from stoichiometric metallic oxidants (manganese, cerium) or reductants (samarium, titanium, zinc, nickel) offer further alternatives for the formation of radicals.

Photoredox catalysis, an opportunity for radical chemistry

Challenges in radical chemistry

Radical chemistry offers many advantages (vide supra) and promising features compared to anionic reactions. However, it suffers from several drawbacks which limit its use in synthesis. Because the reactions are performed under dilute conditions, scale-up is quite complicated. Some of the processes also require an excess of radical acceptor. The most problematic aspects, however, are the use of explosive initiators (peroxides, azo compounds) and of toxic mediators like the tin(IV) derivatives, which are also difficult to remove from the product. In this context, solutions have been proposed to escape from "the tyranny of tin" and to offer the opportunity of using sustainable methods. In order to progressively substitute the use of tin reagents, methodologies involving only catalytic amounts of these reagents, or tin-supported surfaces, have been reported.7 Processes employing stoichiometric mediators such as silanes, phosphines or thiols are also potential alternatives.
Organoboranes, as substrates or radical initiators for chain reactions, have shown promising results as well.6b,8 Less toxic metals mediating single-electron transfers were also considered as a possible solution. However, methodologies involving metal complexes based on iron, copper, manganese, titanium, samarium, … require excess amounts. In terms of sustainability and eco-compatibility, it therefore remains important to develop even more efficient alternatives. At the end of the 1970s, pioneering works9 mentioned the generation of radicals through a single-electron transfer (SET) mediated by a photoactivated metal complex.

The photoredox catalysis as an alternative

Radical chemistry has allowed chemists to develop a wide range of processes for carbon-carbon bond formation. Many efforts to avoid the use of toxic tin reagents and to find other alternatives have been made so far, but many reactions are still performed under these conditions. Since the end of the 2000s, photoredox catalysis has emerged as a powerful and versatile eco-compatible approach for the generation of radicals.

Nature as a source of inspiration

Evolution has allowed living organisms to develop sustainable and highly sophisticated processes. Among them, the photosynthesis of plants attracted scientists' interest early on. The chlorophyll present in plant cells absorbs sunlight in the visible range and initiates the transformation of CO2 and water into saccharides and oxygen. This natural synthetic process highlights a fundamental and efficient conversion of sunlight into chemical energy. Taking advantage of this process, increasing efforts from the radical chemistry community have been made to develop new methodologies using visible light to promote redox reactions for the generation of radicals. Visible-light photoredox catalysis has emerged as a powerful methodology for radical formation in terms of selectivity and sustainability.
Since the pioneering work in this field, more and more groups have incorporated photoredox catalysis into their research, and the number of publications on this topic demonstrates its growing popularity (Figure 2). Inspired by the photoredox process of natural photosynthesis, chemists became interested in the development of photocatalysts absorbing light in the visible region and spanning a large range of potential values to perform efficient redox transformations. The first and still most frequently used man-made photocatalyst is the photoactive complex Ru(bpy)3Cl2. First reported as a photoredox catalyst for organic synthetic purposes by Kellog,9 Pac10 and Deronzier,11 this complex had essentially been used in inorganic chemistry for device applications12 or the transformation of small molecules13 (CO2, H2O). It is only since 2008, with the advances of MacMillan,14 Yoon,15 and Stephenson,16 that organic photoredox catalysis has definitely taken off.

Artificial redox photocatalysts

Photosynthesis inspired the development of catalytic processes involving chromophores as light harvesters to promote electron transfers, for photoelectrochemical cells, photocatalytic water-splitting systems or photobioreactors. In this context, chemists designed photocatalysts that could act like chlorophyll. The complex Ru(bpy)3Cl2, first synthesized by Burstall17 in 1936, shows photophysical and redox properties suited to single-electron processes under visible-light activation. Thus, many efforts have been made to widen the range of transition-metal-based photocatalysts. In order to get closer to the principles of green chemistry, organic dyes have also attracted interest. To be efficient, a valuable photocatalyst has to absorb visible light, display fitting redox properties, and have a long excited-state lifetime to enable electron transfers. Several polypyridyl complexes of metals from the fourth to sixth periods show these properties.
Most of them are ruthenium-,12 rhenium-18 or iridium-based complexes.

Photophysical properties

The MLCT (metal-to-ligand charge transfer) transition is actually the most important one, since the corresponding absorption lies in the visible region of the spectrum and it allows a 3MLCT state to be reached after Inter-System Crossing (ISC), thanks to the strong spin-orbit coupling usually observed with heavy metal atoms (Scheme 2 b)). The consequence is a longer luminescence lifetime (10⁻⁷-10⁻⁶ s)12a,19b compared to 3d metal complexes, and thus the possibility of electron transfers. In the constant effort to evolve toward a greener chemistry, many photoredox processes involve organic dyes as photocatalysts, such as fluorescein, Eosin Y or rose bengal.24 For these organic molecules, the strongest light absorption results in the promotion of an electron from a π orbital to a π* orbital. Because of the lack of heavy atoms, and hence an inefficient intersystem crossing, the light emission is mainly fluorescence. The consequence for these photocatalysts is a very short lifetime of the S1 excited state (2 to 20 ns for most of the organic photocatalysts). The envisaged electron transfers must therefore be at least as fast as the deexcitation of the chromophore.

Principle of photoredox catalysis

As we have seen above, photoexcited chromophores (PC*) can transfer one electron to substrates, providing radicals which can be involved in radical processes. However, to render the overall process catalytic, the photocatalyst (PC) must be regenerated. Regarding a triplet state, the spin multiplicity must change during the back electron transfer (BET), which is less favored. Therefore, free radicals are more likely to be formed once the photocatalyst is in a triplet excited state (Scheme 8). Metal complexes easily reach the triplet excited state thanks to the heavy metal atom, which increases the spin-orbit coupling and thus the intersystem crossing (ISC). For organic molecules, the most populated excited state is most of the time the singlet state, which limits their efficiency in photoredox catalysis.
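The reason photoexcitation turns a mild ground-state redox couple into a strong one can be illustrated with the standard approximation E(PC*/PC⁻) ≈ E(PC/PC⁻) + E0,0. The values below for Ru(bpy)3²⁺ are approximate, commonly cited literature figures (vs SCE), used here only for illustration:

```python
def excited_state_potential(e_ground_v: float, e00_ev: float) -> float:
    """Approximate excited-state reduction potential (V):
    E(PC*/PC-) = E(PC/PC-) + E00 (excitation energy in eV)."""
    return e_ground_v + e00_ev

# Approximate literature values for Ru(bpy)3(2+), vs SCE (illustrative):
E_GROUND = -1.33   # E(Ru2+/Ru+): a poor oxidant in the ground state
E00 = 2.12         # excited-state (0-0) energy, eV

e_star = excited_state_potential(E_GROUND, E00)
print(f"E(*Ru2+/Ru+) ~ {e_star:+.2f} V vs SCE")  # the excited state is a far stronger oxidant
```

The ~2 eV stored in the excited state shifts the couple from strongly negative to positive, which is why the same complex can serve as either a photooxidant or a photoreductant depending on the quenching cycle.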
Despite this property, some organic dyes have nevertheless proved usable as photocatalysts in redox processes.28 At the moment, no explanation rationalizes these observations, which are inconsistent with the theory.

Scheme 8. Cage escape and BET in singlet and triplet ion pairs for photooxidation of a substrate

Since the triplet excited state is more likely to lead to radical processes, organic dyes reaching this state efficiently are more promising for photoredox catalysis. Organic Thermally Activated Delayed Fluorescence (TADF) materials have begun to be exploited in this field. As the name of this category of molecules indicates, they are chromophores with a very long luminescence lifetime compared to other common organic molecules. Usually, organic dyes or complexes reaching a triplet state after ISC lose their excess energy by phosphorescence or vibrational relaxation. In the case of TADF materials, a reverse intersystem crossing (RISC) is possible thanks to thermal activation by the surrounding environment (Scheme 9 a)). This pathway is possible only if the energy difference between S1 and T1 (∆EST) is low enough, and in particular in the range of the thermal energy. Through a quantum-mechanical analysis, Adachi25b found that a reduced overlap between the HOMO and the LUMO results in a small ∆EST. In this context, molecules displaying a donor(red)-acceptor(green) scaffold are unavoidable candidates, such as phenoxazine-triphenyltriazine (PXZ-TRZ),29 4,6-bis[4-(9,9-dimethyl-9,10-

For TADF materials, the kinetics of the ISC, RISC and fluorescence processes are crucial to obtain the enhanced luminescence lifetime. The fluorescence process usually occurs on the nanosecond timescale. In order to avoid this direct deexcitation pathway, the ISC kinetics must be in the range of the fluorescence. Also, the luminescence lifetime is increased if the RISC is slower than the ISC.
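The ∆EST criterion can be made concrete: thermally activated RISC is only viable when the Boltzmann factor exp(-∆EST/kBT) is non-negligible at room temperature. A minimal sketch with generic gap values (not measured rate constants):

```python
import math

KB_EV = 8.617e-5   # Boltzmann constant, eV/K

def risc_boltzmann_factor(delta_e_st_ev: float, temperature_k: float = 298.0) -> float:
    """Relative thermal activation factor for T1 -> S1 up-conversion (RISC)."""
    return math.exp(-delta_e_st_ev / (KB_EV * temperature_k))

# Generic singlet-triplet gaps: a TADF emitter (~0.05 eV) vs a conventional dye (~0.5 eV)
for gap_ev in (0.05, 0.5):
    factor = risc_boltzmann_factor(gap_ev)
    print(f"dE_ST = {gap_ev:.2f} eV -> exp(-dE/kT) = {factor:.1e}")
```

With a 0.05 eV gap the factor is about 0.14, whereas a 0.5 eV gap suppresses it by roughly eight orders of magnitude, which is why only small-∆EST donor-acceptor molecules show delayed fluorescence and long-lived, transfer-competent excited states.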
Actually, for this kind of molecule, the ISC process is rather efficient. The spin-conversion efficiency is correlated with the first-order mixing coefficient between singlet and triplet states (λ). This parameter is inversely proportional to ∆EST and proportional to HSO, the spin-orbit interaction (Scheme 9 c)).31 Because ∆EST is low, the overall spin-orbit coupling is significant. As a consequence, TADF materials can exist in their triplet excited state and perform electron transfers. In particular, 4CzIPN proved to be an efficient photocatalyst in photoredox/nickel dual catalysis.26,32

Scheme 9. a) Energy diagram for TADF materials. b) Examples of organic donor(red)-acceptor(green) TADF materials. c) First-order spin-orbit coupling parameter

Formation of carbon-centered radicals

In terms of synthetic interest, photoredox catalysis offers opportunities to generate various types of radicals by photoreduction or photooxidation of different chemical functions. Radicals centered on carbon, nitrogen,33 sulphur34 or phosphorus35 can be obtained by photoredox catalysis and engaged in the formation of carbon-carbon or heteroatom-carbon bonds. The wide diversity of carbon-centered radicals generated (aryl, alkenyl, alkyl) makes this method of great interest. Thus, many works on the photocatalytic generation of such intermediates have been realized, starting with the formation of aryl radicals.

Formation of aryl radicals by photoreduction

The pioneering example developed by Deronzier was the reduction of arenediazonium salts with Ru(bpy)3Cl2 as photocatalyst. The excited ruthenium complex enables the formation of an aryl radical by photoreduction of the diazonium salt to perform a Pschorr-type reaction (Scheme 10 a)).36 More recently, König extended this catalytic process to intermolecular Meerwein-type arylation reactions with Eosin Y as photocatalyst (Scheme 10 b)).
37 Aryl radicals were further obtained by photoreduction of sulfonium38 or iodonium39 salts and engaged in intermolecular radical allylation reactions. Although aryl radicals are obtained efficiently, alkyl radicals are of greater interest for the construction of molecular scaffolds. Halides are substrates of choice for reduction processes generating alkyl radicals. Fukuzumi40 was the first to report the photoreduction of alkyl halides. More recently, inspired by this work, MacMillan14 managed to merge photoredox catalysis and organocatalysis and to perform enantioselective α-alkylation of aldehydes with bromoalkanes as radical precursors. In the presence of a chiral imidazolidinone, an enamine is formed. Reduction of the bromoalkane by [Ru(bpy)3]+ leads to an alkyl radical which adds to the enamine double bond. The resulting α-amino radical is then oxidized by [Ru(bpy) … Related reductive transformations employed DIPEA and formic acid.43 The presence of formic acid favored the formation of an excellent hydrogen donor, the ammonium salt (iPr2EtNH+·HCO2-). Under similar conditions (Et3N instead of DIPEA), radicals obtained from the reduction of bromomalonates could also be engaged in radical cyclizations onto alkene, alkyne,44 indole and pyrrole45 moieties (Scheme 12 a)). More interestingly, polyene substrates gave polycyclic compounds after cascade cyclizations.

Scheme 11. Enantioselective α-alkylation of aldehydes

Although these processes look efficient, the main limitation is the nature of the halide. Indeed, the reduction potentials of non-activated alkyl iodides or bromides, measured between -1.61 V and -2.5 V vs SCE,46 are more negative than those accessible with [Ru2+]* or [Ru+]. Stephenson showed that they are reduced by the strongly reducing Ir(ppy)3 (-1.73 V vs SCE at the excited state) and that the generated radical can be engaged in 5-exo-trig and 5-exo-dig cyclization processes (Scheme

In 2011, MacMillan reported a reaction between photogenerated α-amino radicals and benzonitrile.
60 The radicals were obtained by oxidation of N-alkyl anilines and added at the ipso position of the cyanoarene, releasing cyanide and delivering the α-amino arene (Scheme 18). The reaction, performed with the Ir(ppy)3 photocatalyst, was suitable for (non-)cyclic anilines and various cyano(hetero)arenes. Photoexcitation of the iridium photocatalyst leads to the reduction of the cyanoarene. The [IrIV] intermediate then oxidizes the aniline, resulting in the formation of the α-amino radical, which couples with the reduced arene. The newly formed anion delivers the final product and cyanide.

Scheme 18. Photoredox amine α-arylation reaction

• Oxidation of α-aminocarboxylates

Anionic species are substrates of choice for oxidation reactions. MacMillan showed the ability of α-aminocarboxylates (E1/2(R-CO2−/R-CO2•) = +0.95 to +1.16 V vs SCE) to undergo photooxidation, generating an α-amino radical by loss of CO2. As he had shown before, these radicals could be coupled with dicyanobenzene under similar conditions.61 Even if the amine must be protected (Boc or Cbz) to avoid the oxidation of the nitrogen atom, secondary and tertiary amines could be engaged in the process because these radical precursors are quite easy to oxidize.

• Oxidation of organoborates

Photooxidation of benzylic trifluoroborates gave TEMPO adducts. However, secondary and primary alkyl trifluoroborates were not converted to the expected product. Only the (triol)borates of 2-(hydroxymethyl)-2-methylpropane-1,3-diol, which show lower oxidation potentials, could provide TEMPO adducts in moderate yields (Scheme 20). Synthetic applications were also extended to Giese-type reactions.

Scheme 20. Photooxidation of organoborates

A few years later, Molander and co-workers extended the synthetic applications of organotrifluoroborates.

Scheme 21. Organic photooxidation of alkyl trifluoroborates

Conclusion

Photoredox catalysis has emerged as a powerful alternative method for the formation of radicals.
The diversity of photocatalysts (ruthenium/iridium-based complexes or organic dyes) with various redox potentials makes it possible to promote single-electron transfers selectively. Many reactions performed under classical conditions can now be realized in a photocatalytic version. The appeal of this field grows year after year and demonstrates its importance to the chemistry community, particularly for the development of a greener radical chemistry.

Chapter II
Merging photoredox and organometallic catalysis for cross-coupling reactions

Context

Photoredox catalysis has shown its efficiency to accomplish radical reactions. Methods merging organocatalysis and photoredox catalysis have also been developed.14,41,42 (For examples of photoredox-organo dual catalysis, see Hamilton.) Recently, photoredox catalysis has proved to be compatible with transition-metal catalysis. In this type of catalysis, the photocatalyst is necessary to generate a radical, and the organometallic catalyst performs the cross-coupling steps between an electrophile and the generated radical. Therefore, metal complexes able to trap radicals are catalysts of choice for such transformations.

Radical trapping by transition metals

Transition-metal-catalyzed reactions have become essential tools for the elaboration of new molecular building blocks. For instance, palladium-catalyzed cross-coupling reactions and C-H functionalization reactions are well-established and efficient processes. The mechanisms, relying on two-electron transfer steps, are inherent to the nature of fifth-period transition metals. Concerning 3d transition metals, however, although they are known to perform oxidative addition into carbon-halogen or carbon-hydrogen bonds, they can also promote single-electron transfers94 and generate radicals. These radicals can then be trapped by the metal and involved in organometallic processes.
• Cobalt

Some cobalt complexes have been shown to reduce alkyl halides to alkyl radicals, which can undergo cyclizations.

Scheme 30. Proposed mechanism for photoredox/palladium C-H arylation catalysis

This mild approach to palladium-catalyzed C-H bond arylation is the first photoredox/transition-metal dual catalytic process; it demonstrates the feasibility of photoredox-mediated cross-coupling reactions and points the way towards new synthetic opportunities.

Towards photoredox/transition-metal dual catalysis processes

Since the seminal work of Sanford, many photoredox/transition-metal dual catalysis processes have been developed. One of the most important features of this kind of catalysis is the opportunity offered by photoredox transformations to modify the oxidation state of organometallic intermediates. This allows tuning of the reactivity of the metallic centers for synthetic applications. Two tandem processes can be distinguished:
- Catalytic reactions which do not involve the formation of radicals (except the superoxide anion [O2]·−). In this case, the SET takes place only between the photocatalyst and the transition metal.
- Reactions in which a photogenerated radical is trapped by an organometallic complex. In this case, a second electron transfer between the organometallic catalyst and the photocatalyst occurs to maintain the redox balance.
Each process will therefore be named ''Catalysis of Redox Steps'' and ''Catalysis of Downstream Steps'', respectively (Figure 4).

Gold-mediated catalysis

Photoredox/gold dual catalysis was first applied to arylative cyclizations of alkenols with aryldiazonium salts, using Ru(bpy)3(PF6)2 as photocatalyst under visible-light irradiation. The authors showed that the gold catalyst Ph3AuNTf2 was the most effective complex in methanol for tandem 5-exo-trig cyclization-arylation reactions of 4-penten-1-ol. The scope could be extended to 5-penten-1-ol and 5-penten-1-tosylamide.
Various aryldiazonium salts bearing substituents such as halogens, esters or methoxy groups led to the formation of products in moderate to good yields (Scheme 35).

Scheme 38. Photoredox/gold-catalyzed ring expansion-arylation reaction

Gold-catalyzed transformation of allenes under photoredox conditions has also been reported by the group of Shin.125 In the same way, allenoates could be converted to arylated furanones in the presence of a cationic gold complex, aryldiazonium salts and Ru(bpy)3(PF6)2. In the presence of a cationic gold(I) catalyst ([Ph3PAu]OTf), allenoates are converted to a gold(I)-furanone intermediate. Then, aryl radical addition and an oxidation step would generate the gold(III) complex which, after reductive elimination, liberates the arylated furanone and regenerates the starting gold(I) catalyst (Scheme 39).

Scheme 39. Photoredox/gold-catalyzed cross-coupling of allenes with diazonium salts

Photoredox/gold dual catalysis has demonstrated its efficiency in cross-coupling reactions with aryldiazonium salts. Moreover, the ability of gold to promote cyclizations or rearrangements and perform cross-coupling reactions depends mainly on the electrophilicity of the catalyst. Two mechanisms have been proposed depending on the nature of the starting gold catalyst (neutral or cationic).

Copper-mediated catalysis

As mentioned in paragraph 2.2, copper is also an efficient radical-trapping agent. After her pioneering works with palladium on dual catalysis, Sanford developed a copper-catalyzed trifluoromethylation of arylboronic acids. This known reaction usually required expensive trifluoromethylating agents or stoichiometric amounts of copper.126 For these reasons, they chose CF3I, which can be reduced to a CF3• radical in the presence of Ru(bpy)3(PF6)2 as photoredox catalyst,41,127 and a copper catalyst (20 mol% of Cu(OAc)) to obtain the cross-coupling products.
128 Various aryl and heteroaryl (pyridine, furan, quinoline) boronic acids could be trifluoromethylated (Scheme 40). Electron-rich and electron-poor substrates were tolerated and converted to the expected products in moderate to excellent yields. In addition, perfluoroalkyl iodides were also suitable partners. However, with these reagents, the copper loading had to be drastically increased to obtain satisfactory yields.

Conclusion

The combination of photoredox catalysis and organometallic catalysis has been shown to be a highly versatile approach for the construction of a wide range of molecular scaffolds. The overall methodology involves either electron transfers between the two catalysts only, or an additional radical-trapping step by the organometallic catalyst. Various transition metals including palladium, gold and copper can be used to perform cyclizations or cross-coupling reactions via C-H or C-X bond activation. Nickel complexes have also widely participated in such transformations. These processes will be detailed in Chapter IV.

Chapter III
Oxidation of alkyl bis-catecholato silicates: a mild way for the formation of carbon-centered radicals

Definition of silicates

Silicon is the second most abundant element by mass in the Earth's crust. Even if pure silicon(0) is widely used for its applications in electronics as a semiconductor, it is commonly found combined with oxygen in minerals called ''silicates'', which contain the silicate ion ((SiO4)4−) associated with metal oxides. However, in some minerals, the silicon-oxygen ratio differs from 1:4 due to the oligomerization of (SiO4)4− units. Since silicon can also reach penta- and hexavalency with alkoxy or fluoride ligands, this definition is biased. Consequently, hypervalent silicon species bearing such groups are considered as silicates as well. Thus, pentacoordinate silicon species with four alkoxy ligands and one organic substituent are so-called silicates (Figure 5).
129 https://goldbook.iupac.org/html/O/OT07579.html

Oxidation of hypercoordinate silicon compounds

Hypercoordinated silicon compounds containing one, two or three alkyl residues have shown various reactivities for the formation of synthetic scaffolds. Indeed, these silicon derivatives have been especially engaged with nucleophiles,130 used as Lewis acids131 for the activation of carbonyl compounds, or involved in Hiyama-type cross-coupling reactions.132 Also, the in situ formation of such compounds led to the same kind of reactivity. Furthermore, anionic penta- and hexacoordinated silicon compounds display a negative charge which gives them a reducing character, which has been investigated.

Metal-mediated oxidation of organopentafluorosilicates

The proposed mechanism starts with a direct oxidation of the silicate by the copper(II) halide, giving the radical and a copper(I) species. Another equivalent of copper(II) halide is required to generate the final alkyl halide (Scheme 44 a)). The methodology was then extended to radical conjugate addition processes. To limit the formation of the alkyl halide, copper(II) acetate was used as oxidant. The radicals were involved in Giese-type reactions with various α,β-unsaturated carbonyl compounds (Scheme 44 b)).

Scheme 44. Mechanism of the oxidation of organopentafluorosilicates

Copper(II)-mediated oxidation of alkylpentafluorosilicates is the first process involving the formation of a carbon-centered radical after fragmentation of the carbon-silicon bond of silicates. During the following twenty years, no other evidence of the oxidation of hypercoordinated silicon species was reported.

Photon-induced electron transfers with alkyl bis-catecholato silicates

Among the variety of hypercoordinated silicon compounds, the alkyl bis-catecholato silicates have shown interesting features for synthetic applications. Frye reported the first synthesis of alkyl bis-catecholato silicates.
Among the bases used for the synthesis, he also demonstrated that pyridine and quaternary ammonium hydroxides were effective (Scheme 48, Eq. 1). At the end of the 1980s, Corriu,136b,143 inspired by the work of Frye, reported the synthesis of alkaline alkyl bis-catecholato silicates. Instead of using amines as base, he proposed to use a methanolic solution of sodium or potassium methoxide to obtain the corresponding sodium and potassium silicates (Scheme 48, Eq. 2).

Scheme 48. First syntheses of alkyl bis-catecholato silicates

In our laboratory, we decided to apply these methods with modifications. Indeed, we found that the potassium version of these silicates is not stable: in less than one week, we observed a slow decomposition of the silicates. In order to stabilize them, we envisioned adding an [18-C-6] crown ether to strengthen the hypervalent bond by charge separation144a (Scheme 49). With this modification, we avoided the introduction of water into the solid and we managed to synthesize a wide range of primary (2-18), secondary (1, 1' and 22), tertiary (21) and aryl (19-20) silicates and store them on the bench for months.32a,144 Regarding the ammonium version, we decided to develop a straightforward synthesis of tetraalkylammonium silicates. Starting from trichlorosilane, catechol and a tertiary amine in THF, the triethylammonium silicates were obtained; a metathesis performed with tetraethylammonium bromide in acetonitrile then gave the tetraethylammonium silicates (21 and 22). With this easy synthetic approach, primary, secondary and tertiary alkyl silicates as well as aryl silicates were obtained in moderate to excellent yields.

Scheme 49. Modified synthesis of alkyl bis-catecholato silicates and scope of the synthesis

All the silicates can be categorized into four groups: activated silicates, secondary silicates, primary silicates and aryl silicates.
Tertiary, benzyl and allyl silicates, as well as silicates substituted in the α position by a heteroatom, will all be considered activated silicates. It has to be noted that the group of Molander reported the synthesis of ammonium silicates without performing any metathesis step.145 The alkoxysilanes were treated with catechol and triethylamine or diisopropylamine in THF or dioxane according to Frye's procedure (Scheme 50). They were able to obtain primary and secondary alkyl silicates in good yields but with questionable stabilities.

145 M. Jouffroy, D. N. Primer and G. A. Molander, J. Am. Chem. Soc., 2016, 138, 475-478.

Scheme 50. Molander's ammonium silicate synthesis

Structural analysis and properties

Hypercoordination of the silicon atom in alkyl bis-catecholato silicates

Like carbon, silicon is tetracoordinated in a tetrahedral geometry, respecting the octet rule. Nevertheless, silicon can extend its coordination to 5 and 6, with a trigonal-bipyramidal or square-pyramidal structure and an octahedral structure around the silicon, respectively. We will not consider the hexacoordinated silicon species in this part. In the case of a pentacoordinated silicon atom, a p orbital is engaged in a hypervalent three-center four-electron bond.

The [18-C-6] potassium alkyl bis-catecholato silicates were crystallized and analyzed by X-ray diffraction. We observed that the stability of these silicates depends on the interaction with the potassium. Even if the crown ether modifies the near environment of the potassium, the Si-O and Si-C bond lengths are not importantly modified (Table 1). The effect is more noticeable for the C-Si-O and O-Si-O angles. According to the values, the potassium cyclohexyl bis-catecholato silicate crystallizes in a trigonal-bipyramidal structure, whereas the [18-C-6] cyclohexyl bis-catecholato silicate (1) is almost in a square-pyramidal structure.

Table 1.
Crystallographic data around the silicon for the cyclohexyl silicates

Redox properties

Even if the redox potential is a thermodynamic value and does not give any information about the kinetics of the reaction, it is important in photocatalysis to know the redox properties of the substrates. Indeed, these data inform us about the feasibility of a considered reaction. Electrochemical experiments were carried out on all the synthesized silicates. Activated silicates proved to be the easiest to oxidize (Eox = +0.34 to +0.72 V vs SCE), followed by the secondary silicates (Eox = +0.69 to +0.76 V vs SCE), the primary silicates (Eox = +0.74 to +0.90 V vs SCE) and the aryl silicates (Eox > +0.88 V vs SCE) (Figure 7). If we consider that the silicates undergo homolytic fragmentation of the C-Si bond to provide the carbon-centered radicals, the stability of the radicals seems linked to the ease of oxidation of the corresponding silicates.

Figure 7. Scale of the oxidation potentials of the silicates

Regarding the oxidation process, DFT calculations revealed that the oxidation likely occurs on a catechol ring of the silicates. The resulting catechol radical is highly delocalized and therefore stabilized. Prokof'ev and co-workers146 showed by ESR experiments on pentacoordinated bis(3,6-di-tert-butylcatecholato)silicate derivatives that the radical in these species is mainly localized on the four oxygen atoms (Figure 8). The tert-butyl groups on the catechol ring were chosen to increase the kinetic stability of the radicals and to simplify the ESR spectrum. Nevertheless, in their case and in ours, the radical may also be delocalized over both rings, increasing the number of mesomeric forms and thus the stability of the radical. Moreover, the calculations revealed that the BDE of the C-Si bond becomes low enough for homolytic fragmentation to occur (Table 2), giving the free radical through an irreversible radical substitution at the silicon by the phenoxyl radical.
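The feasibility criterion invoked at the start of this section can be made explicit. As a sketch (neglecting the Coulombic work term), the driving force for photoinduced electron transfer between a silicate donor D and an excited photocatalyst acceptor A* follows the simplified Rehm-Weller expression:

```latex
% Simplified Rehm-Weller free energy of photoinduced electron transfer
% (Coulombic stabilization term neglected):
\Delta G_{\mathrm{PET}} \simeq F\,\bigl[E_{\mathrm{ox}}(\mathrm{D}^{\bullet+}/\mathrm{D})
      - E_{\mathrm{red}}(\mathrm{A}/\mathrm{A}^{\bullet-})\bigr] - E_{0,0}
```

The transfer is thermodynamically allowed when ΔG_PET < 0, i.e. when the excited-state reduction potential of the photocatalyst exceeds the oxidation potential of the silicate (+0.34 to +0.90 V vs SCE here); it says nothing, however, about the rate of the electron transfer.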
Even though these silicates are easy to oxidize, the differences in their redox potentials are not well rationalized. Since the oxidation is considered to occur on a catechol ring, the oxidation potential should depend only on the electron density of the catechol, which should be the same for all the silicates; indeed, no electron-withdrawing or electron-donating group was added to the ligand. Considering an orbital approach, the oxidation process removes an electron from the HOMO of the silicate, localized on the catechol rings. The higher the energy level of the HOMO, the lower the oxidation potential. In fact, the HOMO-1 is representative of the C-Si bond, and if the HOMO-1 energy level increases, that of the HOMO increases as well. In consequence, different kinds of alkyl groups do not have the same effect on the oxidation potential of the silicates. In addition, Nishigaishi reported that coordinating solvents facilitate the oxidation of the silicates;138 coordination of DMF to the silicon is therefore possible. A combination of both effects probably modifies the oxidation potential of the silicates significantly.

Studies on the photooxidation of alkyl bis-catecholato silicates

Bis-catecholato silicates offer many advantages in terms of preparation and stability. They also display interestingly low oxidation potentials (~0.3-0.9 V vs SCE). The very mild and more sustainable conditions offered by photoredox catalysis were strongly appealing. We therefore investigated the generation of radicals by photooxidation of alkyl bis-catecholato silicates.144a

Photooxidation by polypyridine transition-metal photocatalysts

Prior to engaging the [18-C-6] potassium alkyl bis-catecholato silicates in radical carbon-carbon bond-forming reactions, the generation of alkyl radicals was demonstrated by a spin-trapping experiment with TEMPO. TEMPO is known to be a good radical scavenger that reacts quickly with carbon-centered radicals.
147 To obtain a representative result, benzylsilicate 5 was chosen due to its low oxidation potential. It was found that the benzylsilicate reacts with TEMPO (2.2 equivalents) in the presence of 2 mol% of Ir[(dF(CF3)ppy)2(bpy)](PF6) in DMF for 24 hours under light irradiation (477 nm) at room temperature to afford the benzyl-TEMPO adduct 23 in 95% yield (Table 3). This result demonstrates that the benzylsilicate is an excellent precursor of a benzyl radical under visible-light photooxidative conditions.

147 V. W. Bowry and K. U. Ingold, J. Am. Chem. Soc., 1992, 114, 4992-4996.

Towards a metal-free oxidation

On the basis of this work, we envisioned developing a metal-free oxidation of silicates for eco-compatibility reasons. We also compared their reactivities with those of the corresponding alkyl trifluoroborates.

Stoichiometric oxidation

Inspired by the work of Kumada on the stoichiometric oxidation of pentafluorosilicates,134 we investigated the stoichiometric oxidation of alkyl trifluoroborates, a previously known reaction151 but one not exploited in radical chemistry. Post-functionalization of the resulting radicals was achieved by TEMPO spin trapping, allylation152 or conjugate addition reactions. Our purpose was to avoid the use of metallic oxidants by replacing them with purely organic oxidants. We previously found that Dess-Martin periodinane (DMP) could efficiently oxidize trifluoroborates in Et2O.150 We therefore considered Et2O as an optimized solvent for the oxidation of trifluoroborates with non-metallic oxidants. Owing to the facile oxidation of alkyl bis-catecholato silicates in DMF with the iridium photocatalyst, we also performed the study in this solvent. Among the organic oxidants, tritylium tetrafluoroborate, an underexplored oxidant,153 was tested first with a series of alkyl trifluoroborates and alkyl bis-catecholato silicates.154 For the reasons previously mentioned,147,150 we engaged benzyltrifluoroborate and the benzylsilicate, respectively, in spin-trapping experiments with TEMPO.
We observed that a good yield of TEMPO adduct was obtained in Et2O for the oxidation of benzyltrifluoroborate, and DMF did not appear to be the best solvent, while the benzylsilicate did not react efficiently with the tritylium in either solvent (Table 4). We also considered the Ledwith-Weitz aminium salt as an organic oxidant for both radical precursors. The use of this radical-cation oxidant as a SET oxidative agent (oxidation potential: +1.06 V vs SCE)155 had never been tested. The reactions were performed in DMF and Et2O as well. In Et2O, the benzyl-TEMPO adduct was not obtained in satisfactory yield from either benzyltrifluoroborate or benzylsilicate. However, DMF had a surprising effect on the reaction, and the expected product was isolated in 69% and 86% yield, respectively. The low yields obtained in Et2O for the benzylsilicate might be due to its low solubility in this solvent. The difference in yields obtained with the two oxidants is not easy to rationalize. Less stabilized trifluoroborates and silicates were then engaged under the optimized conditions to assess the synthetic scope of these oxidants. Good yields were obtained with the secondary (25) and primary (24/24') alkyl trifluoroborates under the tritylium/Et2O conditions (Scheme 54). Only the tert-butyl precursor gave a low yield of product 26 (25%), presumably for steric reasons. With the aminium salt oxidation, only 27% of 25 was obtained with the secondary substrate, and no TEMPO adduct was formed with the primary alkyl precursor.

Scheme 54. Oxidation of organotrifluoroborates by organic reagents

5-Hexenyl (3) and cyclohexyl (1) silicates were then involved in this process. The aminium salt proved to be reactive in DMF for the secondary and primary alkyl substrates, giving 44% of 27 and 61% of 24/24', respectively (Scheme 55). Tritylium can also be used as a reliable oxidant for the silicates; in fact, the tritylium conditions also gave a moderate 53% yield of 27.
In the case of the 5-hexenyl silicate, a mixture of linear (24) and cyclized (24') products in a 10:1 ratio was observed, demonstrating once again the radical character of these transformations.

Scheme 55. Screening of the stoichiometric oxidation of silicates

We were then able to oxidize the potassium ((1S,2R,3S,6S)-3-methylbicyclo[4.1.0]heptan-2-yl)trifluoroborate salt, leading to the generation of a secondary radical, which subsequently underwent ring opening of the cyclopropane to give 28 (Scheme 56). In this case, the tertiary radical was trapped in good yield (67%). Interestingly, the tritylium/diethyl ether conditions also proved to be compatible with conjugate addition, since the methyl vinyl ketone (MVK) adduct 29 was isolated in satisfactory yield (63%). No reaction was observed with silicates. Even though we succeeded in oxidizing the alkyl bis-catecholato silicates moderately with stoichiometric organic oxidants, the next step was to turn these conditions into a catalytic version. The solution was to use an organic photocatalyst.

Scheme 56. Generation of a tertiary radical and conjugate addition

Organic dyes as photooxidants

Organic dyes have already demonstrated their efficiency in photoredox processes.156 We envisioned using dyes whose redox potentials (excited state / reduced photocatalyst) match those of the silicates and of TEMPO. Fluorescein, Eosin Y and the Fukuzumi catalyst (9-mesityl-10-methylacridinium perchlorate) were first selected. The main drawback of these photocatalysts is their short excited-state lifetime (< 6 ns) (Scheme 57). The experiments were performed with benzyltrifluoroborate and benzylsilicate to assess the viability of the methodology and its applications in other radical processes.

Scheme 57. Selected organic dyes for the photooxidation of silicates

We chose DMF as solvent and a 10 mol% photocatalyst loading to start the study. Fluorescein and Eosin Y proved not to be suitable photocatalysts.
Only a small amount of benzyl-TEMPO adduct was obtained when the reaction was performed with benzyltrifluoroborate. The Fukuzumi catalyst allowed us to obtain the expected product from benzyltrifluoroborate and benzylsilicate in 92% and 66% yield, respectively (Scheme 58). We then considered this catalyst for further substrates. Under these photooxidative conditions, allyl, 5-hexenyl and cyclohexyl-type substrates were tested. For both kinds of substrates, the same trend was observed: the less stabilized the radical, the lower the yield. However, the yields were rather better for trifluoroborates than for silicates, especially for the primary and secondary radical precursors.

Scheme 58. Photooxidation of silicates and trifluoroborates with organic photocatalysts

At this stage, the metal-free photooxidation of silicates was shown to be possible but not highly efficient. These photocatalysts probably suffer from their short excited-state lifetime: even if the thermodynamic data enable the reaction, the kinetics of the oxidation step are probably too slow compared to the other deexcitation pathways of the excited state. In 2012, Adachi et al. described a family of carbazolyl dicyanobenzenes as light harvesters for organic light-emitting diodes.25e Among them, 1,2,3,5-tetrakis(carbazol-9-yl)-4,6-dicyanobenzene (4CzIPN) displayed promising photophysical properties for photoredox catalysis: a high photoluminescence quantum yield (94.6%) and a long excited-state lifetime (5.1 µs), which is a remarkably high value for an organic photosensitizer. The group of Zhang26 determined the redox parameters of this chromophore through photophysical analysis and electrochemical studies. Under the same conditions as before, we were pleased to see the formation of the benzyl-TEMPO adduct in 92% yield using 4CzIPN as photocatalyst. Moreover, decreasing the photocatalyst loading to 1 mol% showed the same efficiency (Scheme 59).

Scheme 59.
Spin-trapping experiments with 4CzIPN as photocatalyst

In order to explore the potential of this dye, various alkyl silicates were engaged in a series of radical addition reactions (Scheme 60). The first radical acceptor selected was the activated allyl sulfone 31, which had previously proved to be a convenient radical trap.144a,149 Allylation adducts were obtained in excellent yields for stabilized (α-aminyl and tertiary) and secondary radicals (32-34), and in moderate yields for primary radicals (35-37). Interestingly, the 2-(diphenylphosphine oxide)ethyl silicate gave the allylation adduct 36 without fragmentation of the Ph2PO radical.157 The rate of β-fragmentation is probably lower than that of the addition of the carbon-centered radical to the double bond of the acceptor. Cyclohexylsilicate (1) was further chosen as a radical precursor for alkynylation (39),158 vinylation (41)158 and a Giese-type reaction (43). In each case, the expected product was obtained in good yield.

Scheme 60. Radical addition reactions catalyzed by 4CzIPN

Conclusion

Ammonium and [18-C-6] potassium alkyl bis-catecholato silicates can be synthesized quantitatively and efficiently. Oxidation of such species leads to the formation of (non-)stabilized carbon-centered radicals which can be trapped by acceptors. Stoichiometric organic oxidants proved to be valuable, but photocatalysts (metal-based or organic) perform better for the generation of the radicals. This is the first evidence for the generation of non-stabilized alkyl radicals by photooxidation. The development of new synthetic methods combining the photooxidation of silicates with transition-metal catalysis can therefore be envisioned, opening the way to other types of cross-coupling processes.

Chapter IV
Combining photooxidation of alkyl bis-catecholato silicates and nickel catalysis: functionalization of electrophiles

Nickel is the first element of group 10, above palladium, platinum and darmstadtium.
Named after the goblin-like sprite of Germanic mythology, Nickel, this element has been widely used for its corrosion resistance when employed in alloys. Nickel compounds are also involved in electrochemical processes, especially in rechargeable cells. Unfortunately, evidence of allergic reactions and toxicity has restricted the use of nickel. Despite its presumed health hazards, nickel has found applications in efficient catalytic industrial processes (oil, pharmaceutical and food industries) and has recently received significant development in organic synthesis for cross-coupling reactions.

Progress in nickel catalysis

As a fourth-period transition metal, nickel is able to undergo SET processes (2.2), like its neighbors cobalt and copper. Also, located just above palladium, it has a similar reactivity and can promote the same elementary steps such as oxidative addition, reductive elimination or C-H activation (Scheme 61). In contrast to the Hiyama, Sonogashira, Stille and Suzuki couplings and the Heck reaction, where a palladium catalyst is almost always used, nickel has proved to be an alternative to palladium in Negishi cross-coupling reactions. Moreover, nickel is much cheaper (4.5 $/mmol) than palladium (6,200 $/mmol),159 which makes it more attractive for economic reasons. Due to its specific properties, nickel has proven to be an efficient catalyst in various transformations,160 thanks to the large range of accessible oxidation states (Table 5): besides the common 0 and +2 states, the odd oxidation states +1 and +3, and even +4, are also accessible.102,162 This ability of nickel allows different kinds of reactivities, including radical pathways; thus, catalytic cycles involving 1, 2, 3 or 4 different oxidation states are possible. Thanks to these properties, new approaches for the formation of C-C bonds have been envisaged, and studies are still ongoing. One major advance in nickel catalysis was brought recently by the work of Weix: in 2010, he reported the cross-electrophile coupling of aryl halides with alkyl halides.
163 Instead of using an organometallic nucleophile with an electrophile, as in common cross-coupling reactions, his group succeeded in performing the cross-coupling of two electrophiles using a nickel catalyst and a stoichiometric reductant. In situ formation of a Ni(0) complex with 4,4'-di-tert-butyl-2,2'-bipyridine and 1,2-bis(diphenylphosphino)benzene as ligands led to cross-coupling products bearing various functional groups on both partners, such as boronic esters, carbamates or esters (Scheme 62). In order to render the process catalytic, they found that Mn(0) was required as reductant.

Scheme 64. First oxidation step determination

Regarding the next steps of the mechanism, they envisioned the formation of radicals. Radical-clock experiments proved the generation of radicals: when cyclopropylmethyl bromide was used as the alkyl electrophile, only the homoallyl product was obtained, and performing the reaction with an enantiopure alkyl bromide provided a racemic mixture of cross-coupling products (Scheme 65). Both results are consistent with the formation of an alkyl radical after the oxidative addition step of the aryl halide. This nickel-mediated radical formation might occur before addition of the radical onto the same nickel center. To probe this, they used 5-hexenyl iodide, because the 5-hexenyl radical is known to cyclize more slowly (k = 2.3 × 10^5 s-1) than the ring opening of the cyclopropylmethyl radical (k = 7 × 10^7 s-1).6a A mixture of cyclized (A) and linear (B) products should then be observed. Varying the concentration of the catalyst changed the ratio of cyclized to linear products (Figure 9), which contradicts the idea of direct radical formation and radical trapping at the same nickel center. Indeed, if radical formation and radical addition happened at the same nickel center, the ratio would be unchanged, whatever the concentration.
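The concentration argument can be sketched as a simple kinetic competition: the free 5-hexenyl radical either cyclizes (unimolecular, k_c = 2.3 × 10^5 s-1) or is trapped by a nickel complex (bimolecular, with an unspecified rate constant k_t, assumed pseudo-first-order in radical):

```latex
% Competition between unimolecular cyclization and bimolecular trapping:
\frac{[\text{cyclized (A)}]}{[\text{linear (B)}]} \approx
      \frac{k_{\mathrm{c}}}{k_{\mathrm{t}}\,[\mathrm{Ni}]}
```

Only if the radical diffuses into solution does the product ratio depend on [Ni]; radical generation and trapping confined to a single nickel center would give a concentration-independent ratio, contrary to what is observed.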
But, at high concentration, the radical would not have time to cyclize and would be directly trapped by a nickel center (the same one or not).

Scheme 65. Radical clock experiments

Development of visible-light photoredox/nickel dual catalysis

With the dynamic initiated by Sanford110 on photoredox/transition-metal dual catalysis and the work of Weix on nickel catalysis, the idea of merging photoredox and nickel catalysis arose with two distinct reports, from Molander169 on the one hand and MacMillan and Doyle170 on the other.

Pioneer works

• Benzyltrifluoroborates in photoredox/nickel dual catalysis

The work of Molander was inspired by the report of Akita,66 who demonstrated the photooxidation of alkyl trifluoroborates to generate stabilized alkyl radicals in the presence of [Ir(dF(CF3)ppy)2bpy](PF6) as photocatalyst under blue light irradiation. His idea was to engage the formed radical in a nickel catalytic process. Among the alkyl trifluoroborates, the benzyl precursor was selected due to its lowest oxidation potential (E1/2 ox = +1.13 V vs SCE in MeCN). In the presence of 2 mol% of the iridium photocatalyst, 3 mol% of Ni(COD)2 and 4,4'-di-tert-butyl-2,2'-bipyridine (as the ligand), they managed to couple the benzyl radical with (hetero)aryl bromide derivatives in moderate to excellent yields (Scheme 67). The reaction could be performed with benzyltrifluoroborates bearing electron-donating or electron-withdrawing groups in high yields. Various aryl bromides could be coupled with excellent yields. Moreover, bromo-substituted pyridine, pyrimidine, indole, quinoline and thiophene proved to be suitable substrates as well.

Scheme 67. Photoredox cross-coupling of benzyltrifluoroborates with aryl bromides

Concerning the mechanism of this reaction, the authors proposed the direct oxidative addition of the aryl bromide to the nickel(0) complex (Scheme 68).
For its part, the photoexcited iridium complex performs the photooxidation of the benzyltrifluoroborate.

Advances on photoredox/nickel dual catalysis

• Tuning of alkyl trifluoroborates

Among the alkyl trifluoroborates, only activated radical precursors could be engaged. This limitation of the process is due to the photooxidation step. Indeed, generation of unstabilized radicals from trifluoroborates is tricky because of the high oxidation potential of such species. Nevertheless, good yields of cross-coupling products can be obtained when the generated radical is stabilized: secondary,172 benzyl173 and α-amino or α-oxy174 radicals are valuable entities for such coupling reactions. Stabilization promotes the formation of the radicals: the oxidation potential decreases from +1.50 V vs SCE for secondary trifluoroborates to +0.91 V for (2,2,2-trifluoroethyl)benzene trifluoroborates (Scheme 72).

Scheme 72. Scale of organotrifluoroborate oxidation potentials

Concerning the electrophiles, aryl and heteroaryl halides were the most suitable substrates (Scheme 73). Recently, acid derivatives such as acyl chlorides175 and anhydrides176 also revealed themselves to be potent coupling partners. Non-activated primary alkyl trifluoroborates have not afforded cross-coupling products so far, which is one of the drawbacks of such radical precursors.

Scheme 73. Organotrifluoroborates in photoredox/nickel dual catalysis

• Decarboxylative processes

MacMillan demonstrated all the potential of his methodology based on the generation of alkyl radicals by photooxidation of carboxylic acids followed by decarboxylation. Among the substrates susceptible to undergo CO2 extrusion, α-amino acids,170 anhydrides,177 keto acids178 and oxalates179 revealed themselves to be excellent radical precursors. Anhydrides could be formed in situ from a carboxylic acid and an acyl chloride. Interestingly, oxalates derived from benzyl alcohols gave good yields, but these dropped dramatically when non-activated primary alcohols were tested.

Scheme 76.
Photoredox/nickel-catalyzed cross-coupling reactions from oxalates

We then engaged alkyl silicates with various aryl electrophiles.144b We found that 2 mol% of photocatalyst and 3 mol% of nickel catalyst under blue light irradiation gave the cross-coupling product 45 between acetoxypropylsilicate 17 and 4'-bromoacetophenone 44 (Table 6). Control experiments showed that each reagent is necessary for the reaction, as well as the light irradiation. Moreover, the reaction was performed with acetoxypropylsilicate 17' without the crown ether (entry 6). No effect of the chelating agent was observed on the yield of the reaction. We then limited our studies to primary alkyl [18-C-6] potassium silicates.

Using the same conditions, a series of aryl bromides with various substituents was engaged with acetoxypropylsilicate 17 (Scheme 81). We first studied the selectivity towards the substitution of halogen atoms on the aryl ring. Bromobenzenes substituted at the para position with other halogens were screened. Fluoro- and chloro-substituted bromobenzenes were converted to the cross-coupling products 58 and 59 in good yields. The reaction is highly selective, as no substitution of the fluorine or chlorine atom was observed. However, when the reaction was started with 1-bromo-4-iodobenzene, a mixture of products 60 and 61 was obtained in moderate yield in a 10:1 ratio. Functionalization of the iodo position was preferred over the bromo position. Indeed, oxidative addition is more likely at the C-I bond,184 which explains the observed selectivity. 1,4-Dibromobenzene gave largely the monofunctionalized product 60, along with the difunctionalized product 62 in low yield.

Scheme 81. Cross-coupling reactions of silicate 17 with various aryl bromides

This cross-coupling methodology was very efficient for substrates bearing electron-withdrawing groups such as trifluoromethyl (63 and 64) or acetyl (45 and 65) moieties at the ortho, meta or para position. We further investigated electron-enriched aryl substrates.
Electrophiles substituted by mildly donating groups such as methyl (68 and 69) or trimethylsilyl (66 and 67) were converted in good yields. Various bromoanisoles were then engaged under the reaction conditions. Cross-coupling products were obtained from the meta-bromoanisoles 71 and 72 in moderate yields. However, only the starting material was recovered with ortho-bromoanisole and para-bromoanisole. Using 1-bromo-4-iodoanisole instead afforded the expected adduct 70 in moderate yield (46%). These results highlight again the importance of the oxidative addition in the process. Aryl bromides substituted by strongly donating groups are not reactive enough; in such cases, the corresponding iodo compound should be preferred. Remarkably, a pinacol boronate function could also be tolerated under these conditions, as illustrated by the formation of 73, which could be used for further coupling reactions. Since pinacol boronates are sensitive to acidic conditions and to purification by silica column chromatography, a direct oxidation to the phenol was performed (NaOH, H2O2). A higher yield was obtained for the phenol product 74 (69% vs 53% for 73). A similar procedure was applied to give product 75.

This transformation was then extended to heterocyclic bromides. 2-Fluoro-4-bromopyridine was selected as electrophile under the same catalytic conditions and coupled with silicates (Scheme 82). A small library of new alkyl fluoropyridines (77-81) was obtained in satisfactory yields (59-86%). As before, silicate 17' without the crown ether gave adduct 79 in a similar yield (79% vs 81%). Other heteroaryl bromide derivatives were screened. Pyridyl systems proved to be compatible with these cross-coupling conditions. The methylpyridine 82, trifluoromethylpyridine 84 and methyl nicotinate 83 adducts were obtained in 40-75% yields.
Other heteroaromatic systems such as pyrimidine, quinoline, indole, benzofuran and thiophene were converted to the corresponding cross-coupling products 85-89 in low to fair yields. The mechanism presumably proceeds through a Ni(III) complex, which would result from two successive steps: oxidative addition of the aryl halide and addition of the radical on the nickel. The order of these steps is still not elucidated. Some computations on benzyltrifluoroborates suggest a complex mechanism with a thermodynamically favored radical addition, whether to a Ni(0) or a Ni(II) species.185 According to this report, both pathways, i.e. oxidative addition of ArX to the Ni(0) followed by radical addition or the opposite order, can occur. Some of our results (59-62) nevertheless showed the importance of the oxidative addition step.

Toward a green dual catalytic system

We first demonstrated the ability of potassium ([18-C-6]) bis-catecholato silicates to be good precursors of alkyl radicals for cross-coupling reactions. We also proved that photooxidation of such species can be mediated by ruthenium or iridium polypyridine complexes or, more interestingly, by organic dyes and especially by 4CzIPN. Zhang et al. reported26 the first examples of the use of 4CzIPN as photocatalyst in photoredox/nickel dual-catalyzed cross-coupling reactions of aminocarboxylates and benzyltrifluoroborates with aryl bromides. Similarly, we planned to extend this mixed organic photoredox/metallic dual catalysis to alkyl silicates and compare its efficiency to our previous conditions.32a Thus, we selected acetoxypropylsilicate 17 (1.5 equiv.) and 4'-bromoacetophenone 44 as cross-coupling partners (Table 7). We expected Ni(COD)2 to be a workable nickel source, but no product formation was observed after 24 hours (entry 1). Switching to NiCl2.dme as precatalyst (3 mol%) with 4CzIPN (2 mol%) and 4,4'-di-tert-butyl-2,2'-bipyridine, these new conditions provided the cross-coupling product 45 in 77% yield.
We were able to decrease the catalyst loadings to 2 mol% of nickel and 1 mol% of photocatalyst for an 83% yield of the expected product (entry 4). Unfortunately, decreasing the amount of alkyl silicate affected the yield of the reaction (entry 5). This last result highlights once again the radical character of the transformation.

Table 7. Optimization of the organic photoredox/nickel dual-catalyzed coupling of 17 with 44
Entry | Silicate (equiv.) | 4CzIPN (mol%) | Ni source (mol%) | Yield (%)
1 | 1.5 | 2 | Ni(COD)2, 3 | 0%
2 | 1.5 | 2 | NiCl2.dme, 3 | 77%a
3 | 1.5 | 2 | NiCl2.dme, 5 | 81%a
4 | 1.5 | 1 | NiCl2.dme, 2 | 79%a, 83%b
5 | 1.2 | 1 | NiCl2.dme, 2 | 57%a

Scheme 84. Screening of alkyl silicates for organic photoredox/nickel dual catalysis with 44

Again, we screened several aryl and heteroaryl bromides with acetoxypropylsilicate 17 (Scheme 85). Better yields were observed for the electron-neutral substrate 92 and the electron-poor electrophile 45 than for para-bromoanisole 96. Polyaromatic substrates also gave the expected products 93-95 in moderate yields. In these cases, the polyarylated bromides may act as quenchers of the photocatalyst luminescence through energy transfer, this process then competing with the oxidative pathway. Moreover, when 4'-chloroacetophenone was engaged, the yield of cross-coupling adduct decreased dramatically (from 83% to 14% of 45). This highlights the importance of the oxidative addition step, in accordance with the postulated mechanism. Heteroaryl bromides can also act as electrophilic partners: pyridines 79 and 83, quinoline 86 and benzofuran 87 derivatives provided coupling products in moderate to good yields.

Scheme 85. Screening of (hetero)aryl bromides for organic photoredox/nickel dual catalysis with 17

In order to illustrate the efficiency of 4CzIPN as photocatalyst for this type of dual catalysis, a scale-up experiment was performed with acetoxypropylsilicate 17 and 4'-bromoacetophenone 44. The reaction was run on a 3 mmol scale under the usual experimental conditions, except for the reaction time.
Indeed, after 24 hours of reaction, only a 63% yield of 45 was obtained, which could be improved to 76% by increasing the reaction time to 48 hours (Scheme 86).

Scheme 86. Scale-up reaction

We then considered extending this methodology to alkenyl halides. These had already been reported to be valuable electrophiles in such processes,171 and particularly with silicates.188 Therefore, we engaged alkenyl bromides with acetoxypropylsilicate 17 under these modified conditions. Unactivated alkenyl bromides were converted to their corresponding products in moderate yields (96-98). Using activated alkenyl halides such as styryl bromide, styryl chloride and β-gem-dichlorostyrene gave fair to good yields of coupling products. The stereochemical outcome of the coupling of styryl bromide was then compared for 4CzIPN, the ruthenium photocatalyst and [Ir(dF(CF3)ppy)2(bpy)](PF6) (Table 8). Previously reported conditions144b,188 led to the expected products in 75% yield (E/Z: 85/15) with the ruthenium photocatalyst (entry 2) and 90% (E/Z: 67/33) with the iridium photocatalyst (entry 3). In order to verify that the reaction does not follow a purely radical vinylation mechanism involving an addition/β-elimination tandem, the reaction was performed without nickel catalyst, but only traces of products were observed. This result rules out an exclusively radical pathway. In addition, isomerization of β-bromostyrene in the presence of a photosensitizer had already been reported.189 Control experiments were carried out in the absence of the nickel catalyst and silicate and showed that the bromide slightly isomerized in the presence of 4CzIPN (E/Z: 65/35, entry 5), compared to an E/Z ratio of 40/60 with the iridium complex (entry 7). The bromine atom brings a heavy-atom effect and thus easy access to the triplet state,190 which then promotes the isomerization. The photoexcited Ru(bpy)3 2+* did not cause any isomerization of the styryl bromide, probably due to its lower-energy triplet state (entry 6).
Therefore, we can assume that 4CzIPN can act as a photosensitizer of styryl derivatives, but that the photooxidation is faster than the photoisomerization, opening opportunities for stereoselective alkenyl-alkyl cross-coupling reactions. In conclusion, this organic dye has proved its efficiency in photoredox/nickel dual catalysis with alkyl silicates. Later works reported the use of 4CzIPN with alkyl silicates for the functionalization of imines191 and borylated bromoarenes,32b and also with benzyltrifluoroborates.32c

Alkyl bis-catecholato silicates in photoredox/nickel dual catalysis: formation of C(sp3)-C(sp3) bonds

Alkyl bis-catecholato silicates have been demonstrated to be powerful substrates for the generation of primary alkyl radicals, which can be involved in radical addition reactions and in nickel-catalyzed C(sp3)-C(sp2) bond formation. The remaining challenge was to develop a catalytic version for C(sp3)-C(sp3) bond formation.

C(sp3)-C(sp3) cross-coupling reactions involving alkyl carboxylates

In 2016, the group of MacMillan reported new reaction conditions for cross-coupling reactions between alkyl carboxylates and alkyl bromides.192 The reported methodology involves the use of [Ir(dF(CF3)ppy)2(dtbbpy)](PF6) as oxidizing photocatalyst, NiCl2.dme as nickel source and 4,4'-dimethoxy-2,2'-bipyridine (4,4'-dOMe-bpy) as ligand. The carboxylates, obtained by reaction of the corresponding carboxylic acids with potassium carbonate, reacted with various alkyl bromides under these conditions. Decarboxylation of α-amino or α-oxo carboxylates gave stabilized alkyl radicals, which could be coupled with primary or secondary alkyl bromides in good yields (Scheme 88). The reaction conditions tolerated electrophiles bearing a free alcohol, a carbonyl or an oxirane function. In addition, unactivated primary and secondary carboxylic acids were engaged in the process, giving the coupling products in moderate yields.
Silicates as nucleophilic partners

When MacMillan and co-workers reported their advances on C(sp3)-C(sp3) cross-coupling photoredox/nickel dual catalysis, we were developing our own catalytic system involving alkyl bis-catecholato silicates.193

Optimization of the cross-coupling reaction

We started our studies with the coupling of [18-C-6] n-hexylsilicate 2 with ethyl 4-bromobutyrate 103. The catalytic system was composed of 2 mol% of [Ir(dF(CF3)ppy)2(bpy)](PF6) as photocatalyst and 5 mol% of Ni(COD)2 and 2,2'-bipyridine to generate the organometallic catalyst. After 24 hours in DMF under light irradiation with 1.2 equivalents of silicate and 1 equivalent of the electrophile 103, we were pleased to see the formation of cross-coupling product 104 in 29% yield (Table 9, entry 4). Contrary to MacMillan's results, we observed the dimerization product of the electrophile, 105, in 52% yield. This byproduct had been previously reported by Weix and co-workers during their studies on the nickel-catalyzed dimerization of alkyl halides,194 in which they used the same conditions as for the aryl-alkyl electrophilic cross-coupling reactions.163,164 In their proposed mechanism (Scheme 66), the alkyl halide may furnish an alkyl radical. Therefore, two kinds of radicals would be in competition in the reaction mixture. The aim was then to improve the reaction conditions in order to avoid the formation of the homocoupling product and favor the cross-coupling adduct. Increasing the photocatalyst loading to 5 mol% slightly improved the 104/105 ratio (entry 1). Switching to other photocatalysts (Ru(bpy)3(PF6)2 and 4CzIPN) did not give better results (entries 2 and 3). However, starting with 1.5 equivalents of silicate gave the best yields of products 104/105 (42/38, entry 6), whereas increasing the amount of silicate to 2 equivalents (entry 7) or using a nickel(II) precatalyst (entry 8) proved to be less effective. The nature of the ligand was then examined.
Indeed, the efficiency of the process has been shown to depend on the ligand.192,195 Several bidentate ligands were screened (Scheme 90). Modified bipyridyl-type ligands neither enhanced the yield of cross-coupling product 104 nor limited the formation of adduct 105. In particular, the ligand 4,4'-dOMe-bpy proved to be ineffective. Bioxazole derivatives were also engaged as ligands, but again the reaction was less productive. Tridentate ligands were tested as well. Terpyridine proved to be an effective ligand, leading to 104 in 42% yield, but did not prevent the dimerization (50% of 105). 2,6-Bis(oxazoline)pyridine (pybox) gave both products in lower yields.

Scheme 90. Screening of ligands

The electrophilic partner was also varied (Scheme 91). Neither ethyl 4-chlorobutyrate nor the tosylate derivative was converted, while the iodo analogue afforded both products in poor yields. The triflate derivative was also tested but reacted with the solvent (DMF).

Scheme 91. Screening of electrophiles

Under the optimized reaction conditions, we envisioned that the formation of the desired product 104 might be promoted at lower concentration. Thus, we performed the cross-coupling reaction with slow addition of ethyl 4-bromobutyrate 103, the electrophile being added over 18 hours via a syringe pump. Unfortunately, the homocoupling and cross-coupling products were obtained in similar yields.

Scope of the reaction and limitations

After optimization, various silicates were engaged under our reaction conditions. As for the aryl-alkyl cross-coupling reactions, silicates bearing a nitrile group (106 and 107), a halogen (108 and 109) or an oxirane (110) could be coupled with compound 103 (Scheme 92). Activated silicates such as benzyl or anilinomethyl silicates afforded the cross-coupling products 112 and 113. Conversely, acetoxymethyl and chloromethyl silicates did not provide any coupling products (115 and 116).
The secondary cyclohexyl silicate also proved to be reactive, leading to 111. With the triethylammonium cyclohexyl and cyclopentyl silicates, the reaction was poorly effective, giving 111 and 114 in low to moderate yields. It should be noted that the homocoupling product observed in the optimization study was obtained in each case, in yields close to those of the cross-coupling product. This suggests that the formation of both products is simultaneous, and thus that two catalytic pathways are occurring at the same time, possibly with a nickel intermediate involved in both processes. Finally, we also engaged different bromides with acetoxypropylsilicate 17. Primary (117-120) and secondary (121-123) alkyl bromides reacted more or less efficiently (27-37% and 16-35% yields, respectively). 5-Hexenyl bromide provided a mixture of the direct C(sp3)-C(sp3) cross-coupling product and the cyclized adduct.

Extension to other catalytic systems

The efficiency of alkyl bis-catecholato silicates in generating alkyl radicals is now well established. We expected to engage these precursors in other catalytic systems.

Concept of non-innocent ligands

In our group, in the context of the Ph.D. thesis of Jérémy Jacquet,196 it has been demonstrated that some non-innocent ligands can catalyze redox processes through the generation of radicals. In 1966, Jørgensen defined the concept of the innocent ligand: "ligands are innocent when they allow oxidation states of the central atoms to be defined".197 For instance, ligands in palladium-catalyzed cross-coupling reactions are innocent: in each step of the catalysis, the oxidation state of the metal is well defined. By contrast, the iridium and ruthenium complexes used in photoredox catalysis are catalysts with non-innocent ligands, because they are metal-ligand coupled redox systems. In addition, some complexes exist with almost purely ligand-centered redox processes.
This is the case of the bis-iminosemiquinonato copper complex Cu(LSQ)2 125, which displays a Cu(II) center and two iminosemiquinonate radical ligands (Scheme 93). This complex can undergo two successive single-electron reductions leading to the formation of the bis-anionic bis-aminophenolate copper(II) complex Cu(LAP)2 2-, or two successive single-electron oxidations to give the bis-cationic bis-iminobenzoquinone copper(II) complex Cu(LBQ)2 2+ 127 (Scheme 93).198 This ability to undergo single-electron redox processes can be exploited to generate radicals by SET between one of the copper species and a radical precursor.

Scheme 93. Redox properties of Cu(LSQ)2

It was found that complex Cu(LSQ)2 125 can reduce electrophilic CF3 sources to produce radicals. Spin-trapping experiments with TEMPO and stoichiometric amounts of the trifluoromethylating agent and the copper complex were carried out (Scheme 94). In DCM, after 3 hours, the adduct CF3-TEMPO was obtained in 67% yield from the Umemoto reagent. The same product was obtained after 3 hours in 69% yield from the Togni II reagent in NMP.199 Inspired by these catalytic processes, we envisioned developing a copper-mediated catalytic oxidation of alkyl bis-catecholato silicates.

Preliminary results

We first considered two properties of the alkyl bis-catecholato silicates relevant to the oxidation. On the one hand, these substrates are easy to oxidize, their oxidation potentials ranging from +0.34 to +0.90 V vs SCE in DMF. On the other hand, the oxidation process is more efficient in DMF than in other solvents. With these two parameters in mind, we expected to perform the oxidation efficiently with the complex Cu(LBQ)2(OTf)2 (E1/2 = +0.83 V vs SCE in DCM). In addition, this type of copper complex had proved efficient in NMP,199 which displays a polarity similar to that of DMF; for these reasons, we opted for DMF.
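As a back-of-the-envelope check of the driving force, the free energy of the SET can be estimated from the quoted potentials as ΔG° ≈ -F(E1/2(Cu) - Eox(silicate)). A minimal sketch (note that the potentials were measured in different solvents, so the numbers are only indicative, and the function name is ours):

```python
# Rough SET driving force: dG = -n*F*(E_oxidant - E_substrate), with n = 1.
F = 96485.0  # C/mol, Faraday constant

def dG_kJ_per_mol(e_oxidant, e_substrate):
    """Free energy (kJ/mol) of single-electron transfer; negative = favorable."""
    return -F * (e_oxidant - e_substrate) / 1000.0

e_cu = 0.83  # V vs SCE, Cu(LBQ)2(OTf)2 in DCM (from the text)
for e_sub in (0.34, 0.90):  # V vs SCE, silicate oxidation range in DMF
    print(f"E_ox(silicate) = {e_sub:+.2f} V -> dG = {dG_kJ_per_mol(e_cu, e_sub):+.1f} kJ/mol")
```

The most easily oxidized silicates lie comfortably within reach of the oxidized copper complex (ΔG° ≈ -47 kJ/mol), whereas the hardest ones sit at the thermodynamic limit, which is consistent with the modest radical-trapping yields described below.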
We first looked at the possible formation of alkyl radicals from the corresponding bis-catecholato silicates by oxidation with the copper complex 127. Benzylsilicate 4 was engaged in spin-trapping experiments with TEMPO in the presence of a stoichiometric amount of complex 127. The benzyl-TEMPO adduct 23 was isolated in 15% yield after 24 hours in DMF (Scheme 96). The same reaction was also performed with 10 mol% of catalyst and two equivalents of TEMPO; in this case, only a 7% yield of the product was obtained. We also tested allylsulfone 31 as a radical scavenger. However, whether with a catalytic loading or a stoichiometric amount of copper complex 127, we never observed the formation of product 128.

Conclusion

The alkyl radicals generated by photooxidation of alkyl bis-catecholato silicates could be engaged in dual-catalytic processes involving photoredox/nickel dual catalysis. Thanks to the nickel catalysis, the formation of C(sp2)-C(sp3) bonds from alkyl silicates and substituted aryl or heteroaryl halides could be realized with an iridium-based photocatalyst. A dual-catalytic system was also developed with 4CzIPN, an organic dye, as photocatalyst. These new reaction conditions proved to be as efficient as those previously described. Moreover, the methodology was extended to the formation of alkyl-substituted alkenes. A dual-catalytic process for C(sp3)-C(sp3) bond formation was also developed, albeit with limited efficiency due to the formation of a homocoupling adduct as side product. Finally, the ability of alkyl bis-catecholato silicates to undergo radical formation upon oxidation may also be extended to other catalytic processes, such as the use of copper complexes with non-innocent ligands, which requires further investigation.

5 Experimental Section

General information

Unless otherwise noted, reactions were carried out under an argon atmosphere in oven-dried glassware.
THF and diethyl ether were distilled over sodium/benzophenone, and triethylamine over potassium hydroxide. Catechol was purchased from a commercial source and purified by crystallization from toluene followed by sublimation.

Synthesis of photocatalysts

Synthesis of 4CzIPN

4CzIPN was synthesized following a previously reported procedure.[7] To a 100 mL round-bottom flask was added NaH (60% in mineral oil) (15 mmol, 600 mg). THF (40 mL) was added, followed by the slow addition of carbazole (10 mmol, 1.67 g). After 30 min of stirring at room temperature, tetrafluoroisophthalonitrile (2 mmol, 400 mg) was added and the mixture was stirred at room temperature for 20 hours. A yellow precipitate progressively appeared. Water (1 mL) was added to neutralize the excess of NaH and the mixture was evaporated to give a yellow solid. The solid was successively washed with water and ethanol. The crude product was dissolved in the minimum of DCM and crystallized by addition of pentane to give pure 4CzIPN (1.13 g, 71% yield). The spectroscopic data are in agreement with those reported in the literature.

Synthesis of [Ir(dF(CF3)ppy)2(bpy)](PF6)

The solid was washed with hexanes (~30 mL) to afford the iridium µ-Cl dimer (58%). The spectroscopic data are in agreement with those reported in the literature.202 1H NMR (400 MHz, CDCl3): δ 9.52 (s, 1H), 8.46 (dd, J = 8.8, 3.1 Hz, 1H), 8.05 (d, J = 8.3 Hz, 1H), 6.50-6.36 (m, 1H), 5.08 (d, J = 8.9 Hz, 1H).

To a 25 mL round-bottom flask equipped with a magnetic stir bar were added the iridium dimer (302 mg, 0.2 mmol) and 2,2'-bipyridine (66 mg, 0.42 mmol). The flask was attached to a reflux condenser and placed under an inert atmosphere by three evacuation/purge cycles. The reaction components were dissolved in degassed ethylene glycol (13 mL) and heated with stirring at 150 °C for 16 h. Upon cooling to rt, the reaction mixture was diluted with water (120 mL) and transferred to a separatory funnel.
The aqueous phase was washed three times with hexanes (3 x 60 mL), then drained into an Erlenmeyer flask and heated to 85 °C for 15 min to remove residual hexanes. 1H NMR (partial): δ 8.36-8.26 (m, 2H), 8.14 (ddd, J = 5.5, 1.6, 0.8 Hz, 1H), 7.75 (dt, J = 1.9, 0.9 Hz, 1H), 7.69 (ddd, J = 7.7, 5.5, 1.3 Hz, 1H), 6.88 (ddd, J = 12.8, 9.4, 2.3 Hz, 1H).

Synthesis of silicates

General procedure A for [18-C-6] potassium silicate synthesis

To a stirred solution of catechol (2 eq.) in dry methanol (0.25 M) was added 18-C-6 (1 eq.). After dissolution of the crown ether, the trialkoxyorganosilane (1 eq.) was added, followed by a solution of potassium methoxide in methanol (1 eq.). The reaction mixture was stirred for 3 hours and the solvent was removed under reduced pressure. The residue was dissolved in the minimum volume of acetone and diethyl ether was added until a cloudy solution was obtained (scratching the edge of the flask can be done to induce crystallization). The flask was placed at -20 °C overnight. The crystals were collected by filtration, washed with diethyl ether and dried under vacuum to afford the [18-C-6] potassium silicate.

General procedure B for tetraethylammonium silicate synthesis

To a stirred solution of catechol (2 eq.) in dry THF (0.1 M) was added triethylamine (4 eq.). The reaction mixture was cooled to 0 °C with an ice bath and the organotrichlorosilane (1 eq.) was added dropwise. The mixture was stirred for an hour at 0 °C and an additional hour at room temperature. The triethylamine hydrochloride salt was filtered off and the filtrate was evaporated under reduced pressure. The residue was taken up in acetonitrile (0.3 M) and tetraethylammonium bromide (1 eq.) was added. The mixture was stirred for an hour and the solvent was evaporated under reduced pressure. The solid was taken up in water, filtered, washed with water and dried under high vacuum to afford the tetraethylammonium silicate.
Potassium [18-Crown-6] bis(catecholato)cyclohexylsilicate (1)

Following the general procedure A with cyclohexyltriethoxysilane (2.5 mmol, 0.5 mL), catechol (5.0 mmol, 550.6 mg), 18-Crown-6 (2.5 mmol, 660 mg) and potassium methoxide (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol at room temperature. The crude product was purified according to the general procedure to afford 1 (1.23 g, 78%) as a white solid. IR (partial): 2844, 1486, 1454, 1351, 1267, 1100, 1011, 963, 893, 816, 739, 593 cm-1.

Compound 3

Following the general procedure A with potassium methoxide (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol. The crude product was purified according to the general procedure to afford 3 (800 mg, 50%) as a white solid. 1H NMR (400 MHz, Methanol-d4): δ 6.71-6.62 (m, 4H), 6.58-6.52 (m, 4H), 5.69 (ddt, J = 17.0, 10.2, 6.7 Hz, 1H), 4.89-4.82 (m, 1H), 4.80-4.75 (m, 1H), 3.54 (s, 24H), 1.96-1.85 (m, 2H), 1.39-1.20 (m, 4H), 0.71-0.62 (m, 2H). IR (partial): 3063, 2898, 1598, 1485, 1452, 1350, 1283, 1267, 1246, 1102, 1010, 963, 816, 739, 587 cm-1.

Potassium [18-Crown-6] bis(catecholato)benzylsilicate (4)

Following the general procedure A with benzyltriethoxysilane (2.5 mmol, 642 µL), catechol (5 mmol, 550.6 mg), 18-Crown-6 (2.5 mmol, 660 mg) and potassium methoxide (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol. The crude product was purified according to the general procedure to afford 4 (1.35 g, 84%) as a white solid. 13C NMR (partial): δ 54.6, 54.1, 53.6, 53.4, 36.2, 33.5, 32.5, 32.5, 31.7, 31.7, 27.5, 26.4, 25.3, 24.4, 15.4, 15.1. IR (partial): 3044, 2906, 2871, 1591, 1488, 1450, 1349, 1245, 1183, 1095, 1037, 1013, 948, 822, 795, 730, 698, 596 cm-1.

Tetraethylammonium bis(catecholato)-tert-butylsilicate (21)

Following the general procedure B with tert-butyltrichlorosilane (10 mmol, 1.91 g), catechol (20 mmol, 2.20 g), triethylamine (40 mmol, 1.4 mL) and tetraethylammonium bromide (10 mmol, 2.10 g) in 100 mL of dry THF. The crude product was purified according to the general procedure to afford 21 (1.54 g, 39%) as a white solid.
IR (partial): 2951, 2928, 2840, 1596, 1485, 1393, 1361, 1246, 1174, 1100, 1012, 1000, 890, 815, 738, 696, 626, 595 cm-1.

Tetraethylammonium bis(catecholato)cyclopentylsilicate (22)

Following the general procedure B with cyclopentyltrichlorosilane (5 mmol, 830 µL), catechol (10 mmol, 1.1 g) and Et3N (20 mmol, 2.79 mL) in 50 mL of dry THF. The counter-anion metathesis was performed with Et4NBr (5 mmol, 1.05 g) in 20 mL of acetonitrile. The crude product was purified according to the general procedure to afford 22 (1.25 g, 56%) as a white solid.

Synthesis of electrophiles

1-Bromocyclooctene

To a 50 mL round-bottom flask were added cyclooctene (31.5 mmol, 3.5 g) and 7 mL of DCM. At 0 °C, bromine (31.5 mmol, 1.65 mL) was added dropwise and the mixture was stirred for 1 hour at room temperature. The solvent was removed under reduced pressure to give 1,2-dibromocyclooctane. The compound was taken up in 14 mL of piperidine without purification. The mixture was heated at reflux and stirred overnight. The solution was filtered and the solid washed with pentane. The organic phase was then washed with HCl (1 M, 2 x 70 mL), saturated NaHCO3 (2 x 70 mL) and brine (100 mL), and dried over magnesium sulfate. The solvent was removed under reduced pressure to give 1-bromocyclooctene (3.15 g, 37%). The spectroscopic data are in agreement with those reported in the literature. 13C NMR (partial): δ 125.0, 35.3, 30.0, 28.8, 27.6, 26.6, 25.6.

((2-Bromoallyl)oxy)(tert-butyl)dimethylsilane

To a 50 mL round-bottom flask was added tert-butyldimethylsilyl chloride (5.25

(E)-(2-Bromovinyl)benzene

To a solution of cinnamic acid (10 mmol, 1.48 g) in methylene chloride, triethylamine (0.5 mmol, 70 µL) was added at room temperature and the mixture was stirred for five minutes. N-Bromosuccinimide (12.0 mmol, 2.13 g) was added in one portion and the mixture was stirred for 30 minutes. The solvent was removed under reduced pressure. The crude was purified by flash column chromatography (pentane) to afford (E)-(2-bromovinyl)benzene (1.06 g, 99%).
(2-Chlorovinyl)benzene

To a solution of (E)-3-(4-methoxyphenyl)acrylic acid (20 mmol, 3.56 g)

13C NMR: δ 59.3, 40.1, 34.3, 32.6, 25.7, 24.8, 19.9, 17

13C NMR: δ 48.2, 43.7, 42.3, 42.0, 38.8, 36.1, 34.8, 34.5, 34.1, 30.0, 28.1, 23.0, 21.7.

13C NMR: δ 167.3, 140.6, 124.8, 64.4, 60.8, 31.6, 28.3, 25.0, 21.1, 14.4. IR (neat): 2995, 2949, 1741, 1710, 1630, 1368, 1234, 1187, 1146, 1028, 944, 813 cm-1.

Following general procedure E with 1-phenyl-2-p-toluenesulfonylethyne (1.2 mmol, 307.6 mg). The crude product was purified by flash column chromatography (pentane) to afford 39 as a colorless oil (44 mg, 78%). The spectroscopic data are in agreement with those reported in the literature.116a

1-(Tert

1H NMR: δ 1.92-1.85 (m, 2H), 1.79-1.74 (m, 2H), 1.58-1.52 (m, 3H), 1.40-1.33 (m, 3H). 13C NMR (75 MHz, CDCl3): δ 131.7 (2 C), 128.3 (2 C), 127.5, 124.3, 94.6, 80.7, 32.9, 29.8, 26.1 (2 C), 25.1 (2 C).

(2,2-Dichlorovinyl)cyclohexane (41)

Following general procedure E with trichloroethylene (1.2 mmol, 108 µL). The crude product was purified by flash column chromatography (pentane) to afford 41 as a colorless oil (39 mg, 70%). The spectroscopic data are in agreement with those reported in the literature. 13C NMR: δ 118.7, 39.3, 31.8 (2 C), 25.9, 25.7 (2 C).
Dimethyl 2-cyclohexylsuccinate (43)

Following general procedure F with dimethyl maleate (1.2 mmol, 150 µL). The crude product was purified by flash column chromatography (pentane/ethyl acetate, 95/5) to afford 43 as a colorless oil (84 mg, 78%). The spectroscopic data are in agreement with those reported in the literature.

General procedure H for photoredox/nickel cross-coupling dual catalysis with aryl/heteroaryl halides or vinyl bromides

To a Schlenk flask was added the aryl, heteroaryl or alkenyl halide (1 eq., 0.3 mmol), the appropriate silicate (1.5 eq., 0.45 mmol), 4CzIPN (1 mol%, 3 µmol, 2.4 mg) and 4,4'-di-tert-

1H NMR (400 MHz, CDCl3): δ 7.90 (d, J = 8.4 Hz, 2H), 7.28 (d, J = 8.4 Hz, 2H), 5.95 (ddt, J = 17.1, 10.5, 6.7 Hz, 1H), 5.13-5.08 (m, 2H), 3.45 (d, J = 6.7 Hz, 2H), 2.59 (s, 3H). 13C NMR (100 MHz, CDCl3): δ 197.8, 145.8, 136.3, 135.3, 128.8 (2 C), 128.6 (2 C), 116.7, 40.1, 26.6. IR (neat): 3050, 1680, 1604, 1356, 1266 cm-1.

4'-(Anilinomethyl)acetophenone (48)

Following general procedure G with anilinomethylsilicate 8 (0.45 mmol, 294 mg) and 4'-bromoacetophenone 44 (0.3 mmol, 60 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 80/20) to afford 48 as a colorless oil (62 mg, 91%). The spectroscopic data are in agreement with those reported in the literature.212 1H NMR (400 MHz, CDCl3): δ 7.93 (d, J = 8.4 Hz, 2H), 7.47 (d, J = 8.4 Hz, 2H), 7.17-7.15 (m, 2H), 6.75-6.71 (m, 1H), 6.66-6.60 (m, 2H)

13C NMR: δ 149.0, 135.1, 128.7 (2 C), 128.6 (2 C), 36.1, 31.8, 31.2, 29.0, 26.7, 22.7, 14.2. IR (neat): 2900, 1681, 1605, 1265 cm-1.

4'-(Isobutyl)acetophenone (51)

Following general procedure G with isopropylsilicate 5 (0.45 mmol, 272 mg) and 4'-bromoacetophenone 44 (0.3 mmol, 60 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 98/2) to afford 51 as a colorless oil (39 mg, 75%). The spectroscopic data are in agreement with those reported in the literature.
13C NMR: δ 147.7, 135.1, 129.4 (2 C), 128.4 (2 C), 45.5, 30.2, 29.7, 22.5 (2 C). IR (neat): 2909, 1680, 1605, 1265 cm-1.

214 E. V. Bellale, D. S. Bhalerao and K. G. Akamanchi, J. Org. Chem., 2008, 73, 9473-9475.

4-Acetoxypropylphenylboronic acid pinacol ester (73)

Following general procedure G with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 4-bromophenylboronic acid pinacol ester (0.3 mmol, 85 mg). The crude product was purified by flash column chromatography (pentane/EtOAc, 90/10) to afford 73 as a brown oil (49 mg, 53%).

3-(4-Hydroxyphenyl)propyl acetate (74)

Following general procedure G with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 4-bromophenylboronic acid pinacol ester (0.3 mmol, 85 mg). After 24 h of reaction, the crude reaction mixture was filtered through a plug of celite, washing with THF (15 mL). The filtrate was concentrated by rotary evaporation. The resulting DMF solution was diluted with THF (10 mL) and cooled to 0 °C in an ice-water bath. To the cold stirring solution were added 1 M NaOH (1.5 mL, 5 equiv.) and 30% aq. H2O2 (171 µL, 5 equiv.). After 30 min, the mixture was diluted with water (10 mL) and diethyl ether (10 mL) and neutralized by addition of 1 M HCl (2.5 mL). The organic layer was collected, washed with water (2 x 10 mL) and brine (2 x 10 mL), dried over MgSO4 and evaporated under reduced pressure. The crude product was purified by flash column chromatography (pentane/EtOAc, 90/10) to afford 74 as a brown oil (41 mg, 69%). The spectroscopic data are in agreement with those reported in the literature. 13C NMR: δ 153.9, 133.2, 129.4 (2 C), 115.3 (2 C), 63.9, 31.2, 30.4, 21.0. IR (neat): 3356, 2978, 1707, 1595, 1514, 1227, 1035 cm-1.

3-(2-Hydroxyphenyl)propyl acetate (75)

Following general procedure G with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 2-bromophenylboronic acid pinacol ester (0.3 mmol, 67 µL). After 24 h of reaction, the crude reaction mixture was filtered through a plug of celite, washing with THF (15 mL). The filtrate was concentrated by rotary evaporation.
The resulting DMF solution was diluted with THF (10 mL) and cooled to 0 °C in an ice-water bath. To the cold stirring solution were added 1 M NaOH (1.5 mL, 5 equiv.) and 30% aq. H2O2 (171 µL, 5 equiv.). After 30 min, the mixture was diluted with water (10 mL) and diethyl ether (10 mL) and neutralized by addition of 1 M HCl (2.5 mL). The organic layer was collected, washed with water (2 x 10 mL) and brine (2 x 10 mL), dried over MgSO4 and evaporated under reduced pressure. The crude product was

4-(3-Chloropropyl)-2-fluoropyridine (81)

Following general procedure G with 3-chloropropylsilicate 12 (0.45 mmol, 281 mg) and 4-bromo-2-fluoropyridine 76 (0.3 mmol, 31 µL). The crude product was purified by flash column chromatography (pentane/diethyl ether, 99/1 then 95/5) to afford 81 as a colorless oil (42 mg, 81%).

3-(2-Methylpyridin-3-yl)propyl acetate (82)

Following general procedure G with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 3-bromo-2-methylpyridine (0.3 mmol, 35 µL). The crude product was purified by flash column chromatography (pentane/diethyl ether, 50/50) to afford 82 as a colorless oil (39 mg, 67%).

3-(Thiophen-3-yl)propyl acetate (89)

Following general procedure G with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 3-bromothiophene (0.3 mmol, 29 µL). The crude product was purified by flash column chromatography (pentane/diethyl ether, 90/10) to afford 89 as a colorless oil (28 mg, 50%). 13C NMR: δ 143.9, 126.8, 124.5, 123.2, 63.5, 30.5, 26.3, 20.9.

1H NMR: δ 7.74-7.72 (m, 1H), 7.55-7.46 (m, 2H), 7.42-7.39 (m, 1H), 7.34-7.32 (m, 1H), 4.17 (t, J = 6.5 Hz, 2H), 3.18-3.15 (m, 2H), 2.14-2.07 (m, 2H), 2.09 (s, 3H). 13C NMR (100 MHz, CDCl3): δ 171.2, 137.2, 133.9, 131.8, 128.8, 126.9, 126.1, 125.9, 125.5, 125.5, 123.6, 64.1, 29.5, 29.3, 21.0. IR (neat): 2052, 2953, 1930, 1733, 1592, 1360, 1234, 1040, 738 cm-1. HRMS calcd for [C15H16NaO2]+ 251.1043; found 251.1041.

3-(Phenanthren-9-yl)propyl acetate (94)

Following general procedure H with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 9-bromophenanthrene (0.3 mmol, 77 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 95/5) to afford 94 as a colorless oil (38 mg, 45%).

1H NMR (400 MHz, CDCl3): δ 4.12 (q, J = 7.1 Hz, 2H), 2.30-2.26 (m, 2H), 1.65-1.58 (m, 2H), 1.35-1.25 (m, 12H), 1.25 (t, J = 7.1 Hz, 3H), 0.88 (t, J = 6.8 Hz, 3H). 13C NMR (100 MHz, CDCl3): δ 174.1, 60.3, 34.6, 32.0, 29.6, 29.4, 29.4, 29.3, 25.2, 22.8, 14.4, 14.2.

NOESY of 102

Diethyl octanedioate (105)

Following general procedure I with ethyl 4-bromobutyrate 103 (0.3 mmol, 43 µL) and hexylsilicate 2 (0.45 mmol, 284 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 90/10) to afford 105 as a colorless oil (26 mg, 38%). The spectroscopic data are in agreement with those reported in the literature.224

The resulting dark-red solution was stirred for 1 h at room temperature. After evaporation of the solvent, the residue was triturated in hexane and filtered to afford a red-brown solid (quantitative). This Cu(LBQ)2Br2 complex (249 mg, 0.31 mmol, 1 equiv.)
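The HRMS value quoted above for the sodium adduct [C15H16NaO2]+ can be cross-checked from standard monoisotopic atomic masses; the short sketch below does that arithmetic (the constants are tabulated physical values, not data from this thesis).

```python
# Cross-check of the calculated HRMS value for [C15H16NaO2]+ (251.1043).
# Monoisotopic masses of the most abundant isotopes, in u (standard values).
MONOISOTOPIC = {
    "C": 12.0,
    "H": 1.00782503,
    "O": 15.99491462,
    "Na": 22.98976928,
}
ELECTRON = 0.00054858  # u

def mz_cation(composition):
    """m/z of a singly charged cation: sum of atomic masses minus one electron."""
    m = sum(MONOISOTOPIC[el] * n for el, n in composition.items())
    return m - ELECTRON

mz = mz_cation({"C": 15, "H": 16, "Na": 1, "O": 2})
print(f"{mz:.4f}")  # 251.1043, matching the calculated value in the text
```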
and silver triflate (157 mg, 0.62 mmol, 2 equiv.) were introduced into a Schlenk flask under an argon atmosphere. Degassed acetonitrile (5 mL) was added and the resulting mixture was stirred for 2 h at room temperature. The suspension was filtered and the filtrate was evaporated. The residue was dissolved in DCM and filtered. The filtrate was concentrated to give complex 127 (245 mg, 83%). The UV-vis data were consistent with those reported in the literature.106 UV-vis [CH2Cl2; λ nm (ε, M-1 cm-1)]: 290 (14100), 434 (6970), 504 (7370).

These preliminary results on the oxidation of silicates by a bis-iminobenzoquinone copper(II) complex are promising but still require further investigation.

The alkyl bis-catecholato silicates previously used as Lewis acids proved to be excellent radical precursors upon photooxidation and thus versatile substrates for synthetic applications. Photoredox catalysis in visible light has succeeded in establishing itself as a mild and eco-compatible method for the formation of radical species, and more particularly of carbon radicals. Although this catalysis has proved efficient for the formation of carbon-carbon and carbon-heteroatom bonds, the carbon-centered radicals generated are often highly stabilized. Conversely, bis-catecholato silicates have been shown to be capable of generating non-stabilized alkyl radicals by photooxidation, using polypyridine complexes of transition metals (Ru, Ir) that are photoactive in visible light, but also organic photocatalysts. The radicals formed can thus be trapped by various radical acceptors. In addition, the bis-catecholato silicates can be employed in cross-coupling reactions with alkenyl halides and (hetero)aromatic halides under photoredox/nickel dual catalysis conditions for the formation of C(sp2)-C(sp3) bonds. The methodology can also be extended to C(sp3)-C(sp3) coupling, with some limitations. On the other hand, a study comparing silicates and boron "ate" complexes for the formation of radicals by oxidation is presented.
Finally, promising work on the oxidation of silicates by copper complexes bearing non-innocent ligands has been initiated.

Key words: Photoredox catalysis, Dual catalysis, photooxidation, photocatalyst, silicate, radical, cross-coupling

Early photoreduction studies reported the formation of stabilized alkyl radicals, notably by reduction of α-halo esters,6 acid derivatives7 or epoxides,8 for example (Scheme 2).

Scheme 4. Photooxidation of α-amino acids and organotrifluoroborates

In our group, we have shown that hypervalent silicon species can form non-stabilized carbon radicals under photooxidation conditions. Alkyl bis-catecholato silicates, whose first syntheses were carried out by Frye12 and later

Following this work on photoreduction, studies then showed the possibility of coupling aromatic electrophiles with radicals generated by photooxidation, thanks to nickel complexes. The first work on this type of coupling was carried out simultaneously by Molander, using benzyltrifluoroborates15 as the radical source, and, starting from α-amino carboxylates and anilines, through the studies of MacMillan and Doyle.16 …organic photocatalyst.21

Photoredox catalysis, an opportunity for sustainable

Figure 1. Bio-catalyzed dehydroxylation of alcohols
Figure 2. Number of publications on photoredox catalysis since 2000
Scheme 2. a) Simplified molecular orbital diagram for ruthenium and iridium photocatalysts. b) Simplified energy diagram of absorbing and emitting processes.
Figure 3. Electronic absorption spectrum of [Ru(bpy)3](PF6)2 in EtOH.
Scheme 10. a) Photocatalyzed Pschorr reaction. b) Photocatalyzed Meerwein arylation
Scheme 22.
2.4.1 Cobalt-mediated radical reactions
Figure 4. Modes of photoredox/transition metal catalysis
Scheme 40. Photoredox/copper-catalyzed trifluoromethylation and perfluoroalkylation of boronic acids
Figure 5. General representation of organic silicates

Photo-induced electron transfer (PET) between the benzil and the bis-catecholato allylsilicate. To gain better mechanistic insight into the reaction, he performed solvent studies138 and found that coordinating solvents such as DMF or methanol, or additives (pyridine, imidazole, n-butylamine, DMF), drastically improved the yield of the reaction.

144 (a) V. Corcé, L.-M. Chamoreau, E. Derat, J.-P. Goddard, C. Ollivier and L. Fensterbank, Angew. Chem. Int. Ed., 2015, 54, 11414-11418. (b) C. Lévêque, L. Chenneberg, V. Corcé, J.-P. Goddard, C. Ollivier and L. Fensterbank, Org. Chem. Front., 2016, 3, 462-465.

Scheme 51. a) Hybridization of silicon orbitals. b) Molecular orbital diagram of the 3c-4e interaction.
Figure 6. Crystal structures of potassium cyclohexyl bis-catecholato silicates with acetone (left, 1') and [18-C-6] (right, 1)
Figure 8. Valence tautomerism of bis-(3,6-di-tert-butylcatecholato) silicate radicals

E1/2 … photooxidation of the silicates.

Scheme 62. Nickel-catalyzed cross-electrophile coupling of aryl halides and alkyl halides
Figure 9. Evolution of cyclized/linear products depending on the concentration of nickel catalyst

a NMR yield. b Isolated yield.

At first, the compatibility of several types of alkyl silicates was explored with 4'-bromoacetophenone (Scheme 84). Under the previously reported conditions, α-amino and α-oxo alkyl silicates provided products 48 and 49 in good yields. When the chloromethyl silicate was used as radical precursor, no product formation was observed. Aliphatic silicates bearing a phosphine oxide, an ester or an alkene function were tolerated as well.
Concerning the phosphine oxide silicate 16, as previously mentioned, no product resulting from the generation of a P-centered radical was observed. The 5-hexenyl silicate gave rise to a mixture of products 91, 91' and 91'' in a 10/13/77 ratio and 88% overall yield. Here, the intermediate 5-hexenyl radical would react directly to provide products 91 and 91', the latter resulting from post-isomerization of 91.187 Interestingly, the formation of the major product 91'' is attributed to a 5-exo-trig cyclization of the radical prior to the addition to the nickel intermediate in the

187 (a) R. G. Miller, P. A. Pinke, R. D. Stauffer, H. J. Golden and D. J. Baker, J. Am. Chem. Soc., 1974, 96, 4211-4220. (b) F. Weber, A. Schmidt, P. Röse, M. Fischer, O. Burghaus and G. Hilt, Org. Lett., 2015, 17, 2952-2955.

Scheme 87. Reaction of silicates with alkenyl halides
Scheme 88. Dual photoredox/nickel-catalyzed sp3-sp3 coupling reaction between alkyl carboxylates and alkyl bromides

195 F. Lima, M. A. Kabeshov, D. N. Tran, C. Battilochio, J. Sedelmeier, G. Sedelmeier, B. Schenkel, S. V. Ley, Angew. Chem. Int. Ed., 2016, 55, 14085-14089.

Scheme 94. Spin-trapping of CF3· radical experiments with TEMPO
Scheme 95.
Scheme 97. Envisioned mechanism for the copper-mediated oxidation of silicates

To a round-bottom flask equipped with a magnetic stir bar were added 2-(2,4-difluorophenyl)-5-(trifluoromethyl)pyridine (400 mg, 1.50 mmol) and IrCl3 hydrate (209 mg, 0.70 mmol). The flask was equipped with a cold-water condenser and evacuated and purged with argon three times. A degassed mixture of methoxyethanol/water (2/1) (12 mL) was added and the resulting mixture was stirred at 120 °C for 20 h, during which time a yellow precipitate formed. After cooling to rt, 10 mL of water were added and the precipitate was collected by vacuum filtration.
The solid was washed with water (2 x 20 mL)

1H NMR (300 MHz, CDCl3): δ 5.70 (d, J = 9.2 Hz, 1H), 2.43-2.32 (m, 1H), 1.77-1.62 (m, 5H), 1.37-1.04 (m, 6H). 13C NMR (75 MHz, CDCl3): δ 135.116a

General procedure for photoredox/nickel cross-coupling dual catalysis

To a Schlenk flask was added the aryl or heteroaryl halide (1 eq., 0.3 mmol), the silicate (1.5 eq., 0.45 mmol), [Ir(dF(CF3)ppy)2(bpy)](PF6) (2 mol%, 6 µmol, 6 mg) and 4,4'-di-tert-butyl-2,2'-bipyridine (3 mol%, 9 µmol, 2.4 mg). The Schlenk flask was taken into a glovebox and Ni(COD)2 (3 mol%, 9 µmol, 2.5 mg) was added. The Schlenk flask was sealed with a rubber septum, removed from the glovebox, and evacuated/purged with vacuum/argon three times. Degassed DMF (3 mL) was introduced (followed by the aryl or heteroaryl halide if liquid) and the reaction mixture was irradiated with blue LEDs (477 nm) for 24 hours. The reaction mixture was diluted with diethyl ether (50 mL), washed with saturated NaHCO3 (twice) and brine (twice), dried over MgSO4 and evaporated under reduced pressure. The residue was purified by flash column chromatography on silica gel to afford the cross-coupling product.

1H NMR (400 MHz, CDCl3): δ 7.87 (d, J = 8.3 Hz, 2H), 7.23 (d, J = 8.3 Hz, 2H), 2.58 (s, 3H), 2.53 (d, J = 7.2 Hz, 2H), 1.93-1.87 (m, 1H), 0.91 (d, J = 6.6 Hz, 6H). 13C NMR (100 MHz, CDCl3): δ 198.214

1H NMR (400 MHz, CDCl3): δ 4.12 (q, J = 7.1 Hz, 4H), 2.28 (t, J = 7.5 Hz, 4H), 1.66-1.58 (m, 4H), 1.35-1.31 (m, 4H), 1.25 (t, J = 7.1 Hz, 6H). 13C NMR (100 MHz, CDCl3): δ 173.223

Following general procedure I with ethyl 4-bromobutyrate 103 (0.3 mmol, 43 µL) and cyanoethylsilicate 13 (0.45 mmol, 271 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 80/20) to afford 106 as a colorless oil (10 mg, 20%).
The first optimization experiments allowed us to obtain the coupling product between ethyl 4-bromobutyrate and n-hexylsilicate in moderate yield (34%). This result is explained by the formation of a side product, which turned out to be the homocoupling product of the bromide used, obtained in a similar yield (38%). Several optimization routes were explored to favor the formation of the cross-coupling product over the homocoupling product. Modifying the nature of the ligand on nickel, the photocatalyst, the catalyst loadings or the reaction solvent did not allow the cross-coupling product to be formed exclusively. Under the best conditions, the expected product was obtained in 43% yield and the side product in 38% yield.

Scheme 16. Optimized photoredox/nickel dual catalysis conditions for the formation of C(sp3)-C(sp3) bonds

After optimization of the reaction conditions, several alkyl silicates were engaged in the presence of ethyl 4-bromobutyrate as the electrophilic partner. Unfortunately, for all the silicates engaged in the catalytic process, the homocoupling product was always isolated in a yield close to that of the cross-coupling products, which themselves remained low. Subsequently, the electrophilic partner was varied: several primary and secondary bromides were coupled with the acetoxypropylsilicate.

Scheme 17. Photoredox/nickel dual catalysis: cross-coupling between alkyl silicates and ethyl 4-bromobutyrate

Scheme 18. Photoredox/nickel dual catalysis: cross-coupling between the acetoxypropylsilicate and various alkyl bromides

Once again, the yields of cross-coupling products remained low. Moreover, hex-5-enyl bromide led to the formation of three cross-coupling products, including a cyclization product, the latter probably resulting from a 5-exo-trig radical cyclization. This result suggested that a hex-5-enyl radical could be formed during the reaction, meaning that our reaction conditions allow radical formation from the alkyl bromides. Thus, two radicals compete during the coupling reaction, which would explain the formation of the homocoupling product. More thorough mechanistic studies would help rationalize these results.

Having demonstrated the full potential of alkyl bis-catecholato silicates for carbon-carbon bond formation under photoredox catalysis as well as under photoredox/nickel dual catalysis, we turned to another catalytic system also involving single-electron transfers. One research theme developed in our laboratory concerns catalysis by transition metals bearing non-innocent ligands. Preliminary results showed that the copper complex Cu(LBQ)2(OTf)2, used in catalytic amounts, can form an α-amino radical from the corresponding silicate and that this radical can be trapped in an allylation reaction.

In our case, we attempted to use this type of complex to generate alkyl radicals by oxidation of alkyl bis-catecholato silicates. Silicates have a relatively low oxidation potential (~0.3-0.9 V vs SCE). Relative to these values, the Cu(LBQ)2(OTf)2 complex (LBQ = iminobenzoquinone) was therefore chosen (+0.83 V vs SCE). Initial TEMPO trapping experiments were carried out. Under catalytic or stoichiometric conditions in DMF, low yields of the benzyl-TEMPO adduct were obtained. Moreover, using the allylsulfone in place of TEMPO led to no formation of the allylation product.
A ligand is considered non-innocent when the oxidation state of the metal to which it is coordinated cannot be clearly defined. In particular, the copper bis(iminosemiquinonate) complex Cu(LSQ)2 has the distinctive feature that both of its ligands have radical character. Since each ligand can be oxidized or reduced, the complex can exist in five different forms.23

Scheme 19. Redox properties of the Cu(LSQ)2 complex

Work carried out in our group showed that the Cu(LSQ)2 complex, used in catalytic amounts, could form CF3· radicals by reduction of an electrophilic CF3 source such as the Umemoto reagent or the Togni II reagent. The latter was notably used to form these CF3· radicals and engage them in radical addition processes onto alkenes, alkynes and silyl enol ethers, as well as onto heterocycles such as pyrrole or furan.24

Scheme 20. Trapping of the CF3· radicals formed by reduction of the Togni II reagent with the Cu(LSQ)2 complex

Scheme 21. Oxidation of the benzylsilicate by the Cu(LBQ)2(OTf)2 complex

We subsequently replaced the solvent DMF by dichloromethane, and the radical precursor by the anilinomethyl silicate, which is easier to oxidize. In the presence of the allylsulfone as radical acceptor, we could this time observe the formation of the allylation product in a moderate 48% yield. After a few optimization steps (increasing the copper complex loading, the amount of acceptor and the concentration), a 66% yield of the radical addition product could be reached. Although the optimization and exploration of this new silicate oxidation pathway remain to be developed, these preliminary results are promising.

Scheme 22.
Preliminary results of radical addition onto the allylsulfone

In conclusion, the work carried out during this thesis focused on the conversion of light energy into chemical energy through the formation of radical species from organic substrates. In our case, we were particularly interested in the formation of primary alkyl radicals by photooxidation of alkyl bis-catecholato silicates. These species, featuring a hypervalent silicon atom, have several advantages in terms of synthesis and stability, but also a low oxidation potential, which makes them precursors of choice for radical generation by photooxidation. Indeed, oxidation of these species by photoactivated iridium or ruthenium photocatalysts allowed the formation of stabilized as well as non-stabilized carbon radicals. Primary, secondary and tertiary radicals could be engaged in radical addition reactions under catalytic conditions. Subsequently, studies showed that these radicals could be trapped by nickel complexes and coupled with aromatic or heteroaromatic electrophiles under dual catalysis conditions, in good yields. In order to make this type of catalysis more eco-compatible, we sought to replace the iridium photocatalyst with an organophotocatalyst. 4CzIPN proved able to photooxidize the silicates with the same efficiency as the metal complexes. Under dual catalysis, this organophotocatalyst allowed us to obtain cross-coupling products with aromatic or heteroaromatic bromides, but also with alkenyl bromides. Finally, alkyl-alkyl coupling under dual catalysis conditions with silicates could be achieved, unfortunately in low yields.
1.2.2.2 Molecular orbital approach and redox potentials

Excited-state lifetimes of a few microseconds have shown optimal features for photoredox-catalyzed processes.26 In this part, we will focus on the description of metal-based photocatalysts and especially on Ru(bpy)32+, which is the most studied. However, the statements mentioned are applicable to homoleptic and heteroleptic iridium photocatalysts. As previously detailed, ruthenium and iridium photocatalysts reach a triplet state (3MLCT) after visible-light absorption. For RuII(bpy)32+, this corresponds to the promotion of an electron from the t2g orbitals of the metal to a π* orbital of the ligand. The excited complex can be formally written as "[RuIII(bpy)2(bpy)-]2+", equivalent to [RuII(bpy)32+]*, where the metal becomes an oxidant and a bpy ligand a reductant (Scheme 3).

Scheme 3. Simplified orbital diagram of Ru(bpy)32+ during light absorption

Recent work on the development of OLEDs has shown new perspectives in terms of organic photocatalyst design.25 Thanks to a short gap between the singlet and triplet states and efficient spin-conversion processes, a series of tetracarbazolyl dicyanobenzenes with

The photoexcited complex [M]* can then oxidize or reduce a substrate by a SET, giving the opportunity to generate radicals. In order to have an idea of which substrates are amenable to the SET, the redox potentials of the photocatalysts at the excited state have to be determined. From electrochemical and fluorescence data, a qualitative estimation of the excited-state redox potentials can be given. The minimum energy difference between the ground state of the catalyst and its excited state corresponds to the wavelength of the emission maximum, Eλem. Therefore, the energy required to reduce the photocatalyst in the excited state, E([M]*/[M]-), equals the sum of the energy required to reduce the photocatalyst in the ground state, E([M]/[M]-), and Eλem (Scheme 4, left). Also, the energy necessary to oxidize the photocatalyst at the excited state, E([M]+/[M]*), equals the difference between the energy necessary to oxidize the photocatalyst in the ground state, E([M]+/[M]), and Eλem (Scheme 4, right). However, an electrostatic work term wr, describing the charge generation and separation within the electron-transfer complex, must be taken into account but is difficult to estimate. Thus the calculated potentials are slightly over- or underestimated, but experimental values can be measured27 by phase-modulated voltammetry, showing a good match with the calculated ones. Fine-tuning of the redox potentials can be achieved by changing the ligand (bipyridine, bipyrazine, phenanthroline…) or modifying the metal. In this context, heteroleptic iridium complexes are particularly interesting because their two types of ligands (cyclometalating and bidentate) can both be modified.

Scheme 4. Calculation of the redox potentials of photocatalysts at the excited state and comparison with the measured values.

Therefore, the excited-state photocatalyst may pick up or give out an electron to a substrate if the redox potentials match. In terms of orbitals, if the HOMO of a donor D lies between the t2g and π* orbitals of Ru(bpy)32+, a SET can happen, giving the oxidized substrate D·+ and Ru(bpy)3+ (Scheme 5). In the case of an acceptor A, if the LUMO is in the same energy range, the electron transfer forms the radical anion A·- and Ru(bpy)33+.

Scheme 5. SET between excited Ru(bpy)32+ and a donor or an acceptor.

• Photocatalyzed Barton-McCombie deoxygenation

A famous reaction generating alkyl radicals is the Barton-McCombie deoxygenation reaction.
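The excited-state potential relations of this section (Scheme 4) can be written compactly; the last expression is the usual hc/λ conversion of the emission maximum into eV, and, as noted in the text, the work term w_r is neglected:

```latex
E\left([\mathrm{M}]^{*}/[\mathrm{M}]^{-}\right) \;=\; E\left([\mathrm{M}]/[\mathrm{M}]^{-}\right) + E_{\lambda_{\mathrm{em}}},
\qquad
E\left([\mathrm{M}]^{+}/[\mathrm{M}]^{*}\right) \;=\; E\left([\mathrm{M}]^{+}/[\mathrm{M}]\right) - E_{\lambda_{\mathrm{em}}},
\qquad
E_{\lambda_{\mathrm{em}}}\,(\mathrm{eV}) \;\approx\; \frac{1240}{\lambda_{\mathrm{em}}\,(\mathrm{nm})}
```

In words: an excited photocatalyst is both a stronger oxidant and a stronger reductant than its ground state, by the amount of the stored excitation energy.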
Secondary or tertiary alcohols are converted to thiocarbonate, thiocarbamate or xanthate derivatives which are afterwards reduced by a tin-hydride reagent (nBu 3 SnH). During the process, the C-O bond of the modified alcohol undergoes fragmentation, to give an alkyl radical. After a hydrogen abstraction step from the tin-hydride reagent, the deoxygenated product and the reactive tin mediator (nBu 3 Sn·) are obtained. 48 In order to avoid the use of tin reagents, some groups have tried a photocatalytic version. Similarly to the Scheme 15. Evolution of N-centered radical cations classical conditions of the Barton-McCombie deoxygenation, alcohols have to be activated as In 2012, Zheng and co-workers showed that an N-centered radical cation can evolve to an ester, an oxalate or a thiocarbamate (Scheme 13 a)). In 2013, inspired by the N-(acyloxy)phthalimides photocatalyzed decarboxylation of Okada, 49 Overman 50 reported the efficient deoxygenation of aliphatic tertiary alcohols by photoreduction of the corresponding N-(acyloxy)phthalimide oxalates derivatives in the presence of photocatalyst Ru(bpy) 3 (PF 6 ) 2 , Hantzsch ester and i-Pr 2 NEt.HBF 4 as a sacrificial electron donor and a H-atom donor respectively. The generated tertiary alkyl radicals could then be then engaged in Giese-type reactions (Scheme 13 b)). Although no example involving secondary or primary radicals were mentioned, Reiser et al. reported the photoreduction of 3,5-bis(trifluoromethyl)benzoate esters. 51 Starting with the reduction of the photoexcited [Ir(ppy) 2 (dtbbpy)](PF 6 ) by DIPEA, benzyl, α-carbonyl and α-cyano esters were reduced by the [Ir(II)] complex to give the deoxygenated products in good yields (Scheme 13 c)). At the same time in our group, a procedure involving the reduction of O-thiocarbamates was developed. 52 The high reductive potential of such derivatives (-1.56V to -1.73V vs SCE) required the use of the highly reducing photoexcited fac-Ir(ppy) 3 photocatalyst Scheme 13. 
Photoreductive Barton-McCombie type deoxygenation reactions.

• Reduction of α-ketoepoxides and α-ketoaziridines

Our group reported in 2011 the photoreduction of α-ketoepoxides and α-ketoaziridines53 to generate an α-carbonyl radical intermediate that can be trapped by allylsulfones. This transformation requires the use of [Ir(ppy)2(dtbbpy)](PF6), which absorbs in the visible region.

This N-centered radical reactivity was exploited in the formation of carbon-carbon bonds. After oxidation of N-cyclopropylanilines, the cyclopropane ring opening led to an intermediate bearing an iminium moiety and a primary alkyl radical. The radical then adds to an alkene or an alkyne substituted by an aryl or an ester moiety.57 The resulting stabilized radical cyclizes onto the iminium to give the N-cyclopentyl (or N-cyclopentenyl) aniline (Scheme 16).

Scheme 16. Photocatalyzed intermolecular [3+2] cycloaddition of N-cyclopropylanilines

57 (a) S. Maity, M. Zhu, R. S. Shinabery and N. Zheng, Angew. Chem. Int. Ed., 2012, 51, 222-226. (b) T. H. Nguyen, S. A. Morris and N. Zheng, Adv. Synth. Catal., 2014, 356, 2831-2837. (c) T. H. Nguyen, S. Maity and N. Zheng, Beilstein J. Org. Chem., 2014, 10, 975-980.

α-Amino carboxylates61,62 were also engaged in vinylation reactions.63 Oxidation of the carboxylate by the photoexcited [Ir(dF(CF3)ppy)2(dtbbpy)](PF6) catalyst gives the carbon-centered radical, which is directly trapped by a vinylsulfone. After the C-SO2Ph bond fragmentation, the sulfonyl radical (E1/2(PhSO2•/PhSO2Na) = +0.50 V vs SCE)64 participates in the regeneration of the photocatalyst via a SET with the Ir(II) intermediate (Scheme 19).

Scheme 19. Photooxidation of α-aminocarboxylates

• Oxidation of organoborates

In the same line, alkyl trifluoroborates have proved to be suitable substrates to generate radicals upon oxidation.65 Akita and Koike reported the photooxidation of organoborates with the [Ir(dF(CF3)ppy)2(bpy)](PF6) catalyst.
In order to prove the photooxidation feasibility, they performed spin-trapping experiments with TEMPO and found that allyl, benzyl and tertiary alkyl trifluoroborates could form the corresponding alkyl-TEMPO adducts.66

65 (a) Y. Nishigaichi, T. Orimi and A. Takuwa, J. Organomet. Chem., 2009, 694, 3837-3839. (b) G. Sorin, R. Martinez Mallorquin, Y. Contie, A. Baralle, M. Malacria, J.-P. Goddard and L. Fensterbank, Angew. Chem. Int. Ed., 2010, 49, 8721-8723. (c) G. A. Molander, V. Colombel and V. A. Braz, Org. Lett., 2011, 13, 1852-1855.
66 Y. Yasu, T. Koike and M. Akita, Adv. Synth. Catal., 2012, 354, 3414-3420.

2 Processes with radical formation: Catalysis of Downstream Steps

Based on an article of van Leeuwen,112 who reported the coupling reaction between anilides and olefins with Pd(OAc)2 as catalyst and benzoquinone as oxidant, they chose the photooxidizing catalyst [Ir(ppy)2(bpy)](PF6) to perform this catalytic process. Starting from (Z)-(phenylamino)but-2-enoate derivatives in the presence of Pd(OAc)2 and potassium carbonate, they could obtain indoles in good to excellent yields. Control experiments showed that no reaction occurred under an oxygen atmosphere; however, the presence of potassium superoxide proved to be essential. Based on this information, the authors proposed a mechanism in which Pd(OAc)2 performs two consecutive C-H insertions. After reductive elimination, the indole is obtained and a Pd(0) complex is re-oxidized simultaneously by superoxide and the photoexcited iridium complex.

Other photoredox/transition-metal dual catalysis processes involving two SETs between both catalysts were developed. Each SET allows the organometallic catalyst to reach the specific oxidation state required for the catalysis.
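All of the SET steps invoked in this section are justified by the same potential bookkeeping: an electron transfer is considered feasible when the donor is easier to oxidize than the (excited) acceptor is to reduce. As a purely illustrative aside, this comparison can be sketched in a few lines; the function name is our own, the potentials are values quoted in this chapter (vs SCE), and the electrostatic work term wr is neglected:

```python
# Illustrative Rehm-Weller-type bookkeeping for photoinduced SET feasibility.
# dG = -F * (E_red(acceptor) - E_ox(donor)); a negative dG means exergonic.
F = 96485.0  # Faraday constant, C/mol


def set_gibbs_energy(e_ox_donor_V, e_red_acceptor_V):
    """Gibbs energy (kJ/mol) of a single-electron transfer.

    Potentials in V vs SCE; the electrostatic work term is neglected.
    """
    return -F * (e_red_acceptor_V - e_ox_donor_V) / 1000.0


# Values quoted in this chapter (V vs SCE):
E_IR_STAR = 1.21  # E1/2(Ir(III)*/Ir(II)) for [Ir(dF(CF3)ppy)2(bpy)](PF6)
E_BN_SI = 0.61    # E°(Bn-[Si]-/Bn.) for the benzyl bis-catecholato silicate

dg = set_gibbs_energy(E_BN_SI, E_IR_STAR)
print(f"dG = {dg:.1f} kJ/mol ({'exergonic' if dg < 0 else 'endergonic'})")
# prints: dG = -57.9 kJ/mol (exergonic)
```

The same comparison, run with the potentials given for each catalyst/substrate pair in this chapter, reproduces the feasibility arguments made in the text.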
• Two successive SETs catalysis

Tunge and co-workers reported a decarboxylative allylation of para-amino allyl 2-phenylacetates in the presence of Pd(PPh3)4 as transition-metal catalyst and [Ir(ppy)2(bpy)](BF4) as photocatalyst.116 They could obtain para-homoallylaniline products in moderate yields. Alternatively, starting the reaction from carboxylic acids and allyl methyl carbonate in stoichiometric amounts provided the products in the same range of yields.

Regarding the mechanism of the photoredox/nickel C-O coupling (Scheme 34, below), they proposed an oxidative addition of the aryl bromide on the Ni(0)(dtbbpy) active catalyst. A ligand exchange with the alcohol would then generate the Ni(II) aryl alkoxide. At this stage, the crucial oxidation to Ni(III) was carried out by the photoexcited iridium complex (E1/2(NiIII/NiII) = +0.71 V vs Ag/AgCl119 and E1/2(IrIII*/IrII) = +1.21 V vs SCE). The Ni(III) aryl alkoxide complex then underwent reductive elimination to give the aryl ether and a Ni(I) complex, reduced by the Ir(II) intermediate to regenerate both catalysts.

Scheme 34. Photoassisted nickel-catalyzed C-O bond formation

Catalysis of downstream steps involves radical formation by photoredox catalysis. This chemical species adds on the organometallic transition-metal complex, generating a reactive metallic intermediate which can undergo reductive elimination or participate in electrophilic activation of multiple bonds, for instance.

2.4.2.1 Gold-mediated catalysis

Scheme 31. Photocatalyzed Fujiwara-Moritani reaction

• C-H Functionalization

Among the photoredox/transition-metal dual catalytic reactions reported in the literature, C-H functionalization has been particularly studied. Rueping and co-workers developed another approach to the oxidative Heck reaction, the so-called Fujiwara-Moritani reaction, under photoredox conditions.
111 The Ir(II) photocatalyst transfers its excess electron to a molecule of oxygen, regenerating the Ir(III) photocatalyst and providing a superoxide anion (Scheme 31).

During the optimization, the authors noted the formation of a dibenzyl product, suggesting the formation of a benzyl radical in the process. In addition, they observed the formation of 1,5-hexadiene, resulting from the presence of an allyl radical. So they suggested a plausible mechanism involving a benzyl-allyl radical coupling. They proposed the oxidative addition of the allyl ester to Pd(0) to form a Pd-π-allyl complex and a carboxylate anion (Scheme 33). The photoactivated iridium catalyst oxidizes the carboxylate, which undergoes decarboxylation to furnish the benzyl radical. Thereafter, the Pd-π-allyl cationic complex is reduced by the Ir(II) intermediate to provide the allyl radical and regenerate both catalysts.

116 S. B. Lang, K. M. O'Nele and J. A. Tunge, J. Am. Chem. Soc., 2014, 136, 13606-13609.

Scheme 33. Photocatalyzed decarboxylative allylation of phenylacetate derivatives

Nickel also proved to be a valuable candidate for photoredox/transition-metal dual catalysis. MacMillan developed an efficient intermolecular cross-coupling reaction between aryl bromides and alcohols involving photoredox/nickel dual catalysis. Studies on nickel catalysis revealed that C-O bond formation with this metal is tricky: reductive elimination from the C-NiII-O intermediate is endothermic117 and does not favor the C-O bond formation, except for nickel metallacycles.118 However, reductive elimination from a nickel(III) intermediate was suspected to be much easier. In order to reach this oxidation state, the idea of this group was to use a photocatalyst to oxidize the [C-NiII-O] intermediate. Using NiCl2.dme, a bipyridine-type ligand and [Ir(dF(CF3)ppy)2(dtbbpy)]PF6 as photocatalyst, they could obtain the expected aryl ether products in excellent yields (Scheme 34).
Over the last years, gold catalysis has emerged as a powerful tool for the formation of cyclic systems, the functionalization of alkenes, alkynes or allenes by nucleophilic addition reactions120 and cross-coupling reactions. The latter involves a reductive elimination step from a Au(III) intermediate. In 2006, García and co-workers observed that benzyl and aryl radicals react with gold(I) complexes to afford organogold(III) species.121 This property has been exploited by Glorius to develop the first photoredox/gold dual catalysis.122 Aryldiazonium tetrafluoroborate salts were used as aryl radical precursors together with Ru(bpy)3(PF6)2 as photocatalyst.

120 For recent reviews on gold catalysis, see: (a) L. Fensterbank and M. Malacria, Acc. Chem. Res., 2014, 47, 953-965. (b) R. Dorel and A. M. Echavarren, Chem. Rev., 2015, 115, 9028-9072.
121 C. Aprile, M. Boronat, B. Ferrer, A. Corma and H. García, J. Am. Chem. Soc., 2006, 128, 8388-8389.
122 B. Sahoo, M. N. Hopkinson and F. Glorius, J. Am. Chem. Soc., 2013, 135, 5505-5508.

Scheme 35. Ru/Au-catalyzed oxy/aminoarylation of alkenes

Regarding the mechanism, the starting cationic Au(I) coordinates to the alkene, which gives a cyclic alkylgold(I) intermediate. The photoexcited Ru(II)* reduces the aryldiazonium to generate the aryl radical, which adds on the alkylgold(I). The resulting Au(II) complex is then oxidized by Ru(III) to afford the key Au(III) intermediate. A reductive elimination step provides the final product and regenerates the initial Au(I) complex (Scheme 36).

Scheme 36. Proposed mechanism for the Ru/Au dual catalysis

On the same line, our group realized a similar tandem process with o-alkynylphenols.123 A neutral gold catalyst was used, suggesting another possible pathway for the C-C bond formation: the successive radical addition and oxidation steps would happen directly on the Ph3PAuCl complex (Scheme 37). Then, the cationic Au(III) intermediate may promote the cyclization step by electrophilic activation of the triple bond, and the reductive elimination leads to the C-C bond formation.

123 Z. Xia, O. Khaled, V.
Mouriès-Mansuy, C. Ollivier and L. Fensterbank, J. Org. Chem., 2016, 81, 7182-7190.

Scheme 37. Dual photoredox/gold-catalyzed arylative cyclization of o-alkynylphenols

This kind of pathway was first proposed by Toste for the arylative ring-expansion cascade of alkenyl and allenyl cycloalkanols. Upon treatment with aryldiazonium salts, with Ph3PAuCl and Ru(bpy)3(PF6)2 as catalysts, aryl-substituted cyclic ketones were obtained in low to good yields.124 Regarding the mechanism (Scheme 38), the photoredox process is similar to the one proposed by Glorius, except that the neutral complex Ph3PAuCl undergoes the addition of an aryl radical followed by oxidation with Ru(III). The resulting alkylgold(III) intermediate was proposed as the key species; its formation was supported by the observation of [Ph3P-Ph]+.BF4- as a side product formed by reductive elimination. In the case of a fast coordination of the complex to the alkene or allene, the ring expansion may occur. Then, the reductive elimination affords the corresponding ketone and regenerates the starting Au(I) catalyst.

124 X. Shu, M. Zhang, Y. He, H. Frei and F. D. Toste, J. Am. Chem. Soc., 2014, 136, 5844-5847.

It is well established that the fluoride ion has the strongest affinity for silicon. Taking advantage of this property, hypercoordinated silicon compounds containing several fluorine atoms can be easily prepared by treatment of a chlorosilane with a fluoride salt.

• Synthesis and copper-mediated oxidation of organopentafluorosilicates

Some of the synthesized silicates were engaged with stoichiometric amounts of copper halide salts (CuCl2 and CuBr2). They showed that upon treatment in ether, THF or methanol, the alkylpentafluorosilicates were converted into the corresponding alkyl halides in low to good yields, depending on the nature of the starting copper salt (Scheme 42, Eq. 2). Investigating the mechanism of the reaction, they found evidence for a radical pathway.
When the reaction was performed with octylpentafluorosilicate and copper chloride in methanol under an oxygen atmosphere, they observed the formation of octanal. The octylperoxyl radical, obtained from the reaction of the octyl radical with molecular oxygen, would produce octanal and octanol after dismutation (Scheme 43, Eq. 1). Spin-trapping experiments with nitrosobenzene derivatives revealed the formation of an N-oxyl radical, confirmed by ESR (Eq. 2). Finally, treating pure exo- and endo-2-norbornylpentafluorosilicates with copper(II) bromide gave a mixture of exo- and endo-2-norbornyl bromides in an exo/endo ratio of (70-87)/(30-13) (Eq. 3). This epimerization suggests the formation of a planar radical intermediate. Moreover, the obtained ratios were similar to those reported in the literature for the formation of a 2-norbornyl radical.135

Inspired by the work of Tansjö,133 Kumada and co-workers134 reported in 1982 the first efficient synthesis of organopentafluorosilicates. By treating a large panel of trichlorosilanes with potassium fluoride under aqueous conditions, they could synthesize a wide range of organopentafluorosilicates (Scheme 42, Eq. 1).

Scheme 43. Investigation of the mechanism of oxidation of organopentafluorosilicates

135 (a) E. C. Kooyman and G. C. Vegter, Tetrahedron, 1958, 4, 382-392. (b) F. D. Greene, C.-C. Chu and J. Walia, J. Org. Chem., 1964, 29, 1285-1289.

Table 2. Calculated BDE of the C-Si bond (before/after oxidation)
Carbon chain: C-Si BDE* (kcal/mol)
n-hexyl: 88.8/23.8
cyclohexyl: 86.0/20.3
benzyl: 79.8/13.8
PhNHCH2: 86.3/17.0
phenyl: 104.1/37.5

Table 3.
Optimization of the photooxidation of benzylsilicate and spin-trapping experiments with TEMPO

All the screened photocatalysts were competent for the photooxidation of the benzylsilicate, including Ru(bpy)3(PF6)2, Ir[(dF(CF3)ppy)2(dtbbpy)](PF6) and Ir[(dF(CF3)ppy)2(bpy)](PF6). This last photocatalyst was chosen for the subsequent studies. It shows a redox potential at the excited state (E°(Ir(III)*/Ir(II)) = +1.21 V vs SCE) appropriate to oxidize the benzylsilicate (E°(Bn-[Si]-/Bn·) = +0.61 V vs SCE) and liberated the TEMPO adduct with an excellent 95% yield. Indeed, fluorescence experiments demonstrated that the luminescence of the photocatalyst is quenched by the silicate. However, TEMPO is also known as a luminescence quencher66 of this iridium photocatalyst, which can provide the N-oxoammonium cation (E°(TEMPO+/TEMPO) ≈ +1.00 V vs SCE in water148). Nevertheless, comparing the oxidation potentials of TEMPO and benzylsilicate 23, it is reasonable to admit that the excited photocatalyst reacts directly with the silicate rather than with TEMPO. Concerning the influence of the solvent, we observed the same trend as reported by Nishigaichi:138 solvents with a strong donor effect (DMF and methanol) gave better yields of the benzyl-TEMPO adduct than acetone or acetonitrile, which are weakly donating solvents. In summary, and based on the work of Akita on the photooxidation of alkyl trifluoroborates,66 it was proposed that the benzylsilicate is oxidized by the excited Ir(III)*; the benzyl radical is directly trapped by TEMPO and the Ir(II) intermediate transfers an electron to another equivalent of TEMPO (Scheme 52). The methodology was then extended to various radical acceptors. The scope of the reaction proved to be quite large, from primary to tertiary silicates. In particular, we can mention the generation of primary non-activated radicals, a new example of aminomethylation and a never-reported chloromethylation.
The cyclohexyl silicate could be engaged in Giese-type reactions, alkenylation and alkynylation reactions, giving the radical adducts in moderate to good yields.

Entry Photocatalyst Solvent Yielda
1 Ru(bpy)3(PF6)2 Acetone 34%
2 Ir[(dF(CF3)ppy)2(dtbbpy)](PF6) Acetone 34%
3 Ir[(dF(CF3)ppy)2(bpy)](PF6) Acetone 38%
4 Ru(bpy)3(PF6)2 Acetonitrile 25%
5 Ir[(dF(CF3)ppy)2(dtbbpy)](PF6) Acetonitrile 20%
6 Ir[(dF(CF3)ppy)2(bpy)](PF6) Acetonitrile 23%
7 Ru(bpy)3(PF6)2 MeOH 52%
8 Ir[(dF(CF3)ppy)2(dtbbpy)](PF6) MeOH 86%
9 Ir[(dF(CF3)ppy)2(bpy)](PF6) MeOH 77%
10 Ru(bpy)3(PF6)2 DMF 90%
11 Ir[(dF(CF3)ppy)2(dtbbpy)](PF6) DMF 95%
12 Ir[(dF(CF3)ppy)2(bpy)](PF6) DMF 95%
a NMR yields; butadiene sulfone was used as internal standard.

Scheme 52. Mechanism of formation of the benzyl-TEMPO adduct under photooxidative conditions

To explore the reactivity of silicates as carbon-centered radical precursors, the synthesized [18-C-6] potassium alkyl bis-catecholato silicates were engaged in radical allylation reactions with allylsulfone 27.149 Under the previously optimized photooxidative conditions, all categories of silicates (stabilized or not) were tested. The corresponding allylated compounds were isolated in moderate to good yields (Scheme 53).

149 (a) C.-P. Chuang, Tetrahedron, 1991, 47, 5425-5436. (b) M. H. Larraufie, R. Pellet, L. Fensterbank, J. P. Goddard, E. Lacôte, M. Malacria, C. Ollivier, Angew. Chem. Int. Ed., 2011, 50, 4463-4466.

Scheme 53. Examples of intermolecular radical additions

Our group previously reported studies on copper(II)-mediated oxidation of alkyl trifluoroborates.150

150 G. Sorin, R. Martinez Mallorquin, Y. Contie, A. Baralle, M. Malacria, J.-P. Goddard and L. Fensterbank, Angew. Chem. Int. Ed., 2010, 49, 8721-8723.

Table 4.
Spin-trapping experiments with organic oxidants

Benzyl precursor Oxidant Solvent Yield
Bn-BF3K Ph3CBF4 Et2O 65%
Bn-Si(catechol)2K [18-C-6] Ph3CBF4 Et2O 16%
Bn-BF3K Ph3CBF4 DMF 35%
Bn-Si(catechol)2K [18-C-6] Ph3CBF4 DMF 10%
Bn-BF3K Ar3N.SbCl6 Et2O 2%
Bn-Si(catechol)2K [18-C-6] Ar3N.SbCl6 Et2O 16%
Bn-BF3K Ar3N.SbCl6 DMF 69%
Bn-Si(catechol)2K [18-C-6] Ar3N.SbCl6 DMF 86%

Table 5. Comparison of characteristics of nickel and palladium
Nickel / Palladium
Oxidation states: -1, 0, +1, +2, +3, +4 / 0, +1, +2, +3, +4
Atomic radius: 135 pm / 140 pm
Electronegativity: 1.91 / 2.20
Harder / Softer
Facile oxidative addition / Facile reductive elimination
β-migratory insertion / β-migratory elimination
Radical pathways more accessible (Ni)

The photoexcited iridium complex (E1/2 red(IrIII*/IrII) = +1.21 V vs SCE in MeCN) oxidizes the benzyltrifluoroborate by SET to give the benzyl radical and an Ir(II) intermediate.

• α-Amino radicals in photoredox/nickel dual catalysis

At the same time, MacMillan and Doyle reported a similar approach with the participation of α-amino radicals. These radicals were obtained by photooxidation of α-amino carboxylic acids in the presence of cesium carbonate61,63 or by photooxidation of dimethylaniline.60 Both radicals were coupled with aryl halides through nickel catalysis. The radicals were obtained by a photooxidative process with [Ir(dF(CF3)ppy)2(dtbbpy)](PF6) as photocatalyst, and the cross-coupling reactions were realized with NiCl2.dme as precatalyst and 4,4'-di-tert-butyl-2,2'-bipyridine as ligand. Chloro, bromo and iodo arenes, as well as heteroaryl bromides, could be cross-coupled with cyclic α-amino carboxylates in good yields (Scheme 69). This complex may undergo oxidative addition with the aryl halide to form the Ar[Ni(II)]X intermediate. Oxidation of the carboxylate (E1/2(R-CO2-/R-CO2·
) = +0.95 to +1.16 V vs SCE) or of dimethylaniline (E1/2(PhNMe2/PhNMe2+·) = +0.71 V vs SCE) by the photoexcited Ir(III) (E1/2 red(IrIII*/IrII) = +1.21 V vs SCE in MeCN) leads to the corresponding α-amino radical, which adds on the nickel complex to form the intermediate Ar[Ni(III)]RX. After reductive elimination, the benzylamine and the complex [Ni(I)]X are obtained. A final SET with the Ir(II) regenerates both catalysts.

As efficiently, non-cyclic α-amino and α-anilino radicals were engaged with aryl bromides.

Scheme 69. Photoredox cross-coupling of α-amino radicals with aryl halides

The authors proposed the same mechanism as Molander (Scheme 70). They envisaged the reduction of the [Ni(II)] complex by two SET events to give the [Ni(0)] active catalyst.

Scheme 70. Proposed mechanism of the photoredox/nickel dual-catalyzed α-amino arylation

Scheme 68. Proposed single-electron transmetalation in photoredox/nickel cross-coupling

As envisaged by Weix, the radical may react with the complex Ar[Ni(II)]X to give a Ni(III) intermediate, which after reductive elimination affords the cross-coupling product. Both catalysts would then be regenerated from Ni(I) and Ir(II) by a SET, completing both the photoredox and organometallic catalytic cycles.

This photoredox/nickel dual catalysis was then extended to the coupling of α-oxy carboxylic acids with vinyl iodides as electrophiles. The α-oxy carboxylates are as easy to photooxidize as the α-amino carboxylates (E1/2 ox = +1.08 V vs SCE in MeCN)171 and are thus able to provide α-oxy radicals after decarboxylation, which can be engaged under the dual-catalysis conditions. Following the same mechanism, allyl ethers were obtained in good yields with only 1 mol% of iridium photocatalyst and 2 mol% of nickel catalyst. It is interesting to mention that the process proved to be stereoconvergent: no erosion of the E:Z ratio was observed during the process. Moreover, α-amino and simple alkyl carboxylic acids could be engaged in a similar dual catalysis (Scheme 71).

171 A. Noble, S. J.
McCarver and D. W. C. MacMillan, J. Am. Chem. Soc., 2015, 137, 624-627.

Scheme 71. Decarboxylative cross-coupling of carboxylic acids with vinyl iodides

Table 6. Control experiments and influence of the crown ether
Entry Conditions Yield
1 Standard conditions 85%
2 No [Ir] 0%
3 No Ni(COD)2 0%
4 No ligand 0%
5 No light 0%
6 Silicate without [18-C-6] 86%

A comparison of reactivity was made with the benzyltrifluoroborate under our conditions: the benzylsilicate proved to be more reactive.

First of all, the potassium alkyl bis-catecholato silicates were screened with 4'-bromoacetophenone as electrophile (Scheme 80). Silicate precursors of stabilized benzyl, allyl and α-amino radicals gave excellent yields of cross-coupling products 46, 47 and 48, with 88%, 86% and 91% yields respectively. A lower yield was obtained with the acetoxymethylsilicate 49. Synthetically more attractive, non-stabilized radicals could also be involved, furnishing the corresponding coupling products. Aliphatic silicates such as the hexyl or isobutyl silicates gave products 47 and 51 in good yields. The reaction conditions were tolerated by various silicates bearing functional groups such as an ester (45 and 49), a nitrile (52 and 53), an oxirane (54 and 55) or a halogen (56 and 57).

Scheme 80. Cross-coupling reactions of alkyl silicates with 4'-bromoacetophenone

Table 7. Screening of reaction conditions for the organic photoredox/nickel dual-catalyzed cross-coupling of acetoxypropylsilicate and 4'-bromoacetophenone
Entry Silicate (nb equiv.) 4CzIPN loading (mol%) Nickel source/loading (mol%)

Table 8. Comparison of experimental conditions for photoredox/nickel dual catalysis with β-bromostyrene
Entry Photocatalyst (mol%) Nickel source (mol%) Yield (E/Z)
1 4CzIPN (1) NiCl2.dme (2)a 73%b (90/10)
2 Ru(bpy)3(PF6)2 (2) NiCl2.dme (3)a 75%b (85/15)
3 [Ir(dF(CF3)ppy)2(bpy)](PF6) (2) Ni(COD)2 (3)a 93%b (67/33)
4 4CzIPN (1) No nickel <5%b
5 4CzIPN (1) No nickel S.M.
b,c (65/35)
6 Ru(bpy)3(PF6)2 (2) No nickel S.M.b,c (85/15)
7 [Ir(dF(CF3)ppy)2(bpy)](PF6) (2) No nickel S.M.b,c (40/60)

Table 9. Optimization of the reaction conditions
Entry Silicate (equiv.) Photocatalyst (mol%) Ni source (5 mol%) Yield (104/105)
1 1.2 [Ir] (5) Ni(COD)2 34/38
2 1.2 Ru(bpy)3(PF6) (5) Ni(COD)2 22/36
3 1.2 4CzIPN (5) Ni(COD)2 22/25
4 1.2 [Ir] (2) Ni(COD)2 29/52
5 1.5 [Ir] (2) Ni(COD)2 42/50
6 1.5 [Ir] (5) Ni(COD)2 43/38
7 2 [Ir] (5) Ni(COD)2 44/42
8 1.5 [Ir] (5) NiCl2.dme 26/30

Table 10. Optimization of the copper-mediated oxidation of anilinomethylsilicate
Entry Concentration Loading [Cu] Yield
1 0.1 M 10 mol% 52%
2 0.05 M 10 mol% 48%
3 0.2 M 10 mol% 56%
4 0.1 M 5 mol% 12%
5 0.1 M 2 mol% 6%
6 0.1 M 10 mol% 66%a
a 6 equivalents of allylsulfone 27 used.

Regarding the mechanism of the reaction, copper catalyst 127 would oxidize silicate 8 to provide the anilinomethyl radical and complex 126. After addition of the radical onto the acceptor, followed by fragmentation of the C-S bond, product 32 would be obtained together with a tosyl radical. The latter might oxidize complex 126, regenerating catalyst 127 with formation of a sulfinate anion (Scheme 97). However, analysis of the redox potentials of the species involved in this last step reveals that the electron transfer is not thermodynamically favored (E1/2(PhSO2•/PhSO2Na) = +0.50 V vs SCE and E1/2(Cu(LBQ)2 2+/Cu(LBQ)(LSQ)+) = +0.83 V vs SCE).

203 D. Hanss, J. C. Freys, G. Bernardinelli, O. S. Wenger, Eur. J. Inorg. Chem., 2009, 4850-4859.

Potassium [18-Crown-6] bis(catecholato)-acetoxymethylsilicate (18)
29Si NMR (119 MHz, Methanol-d4): δ -75.75. HRMS calc. for [C20H21O5Si]- 369.1164; found 369.1176.
Following the general procedure A with … (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol at room temperature. The crude product was purified according to the general procedure to afford 18 (1.36 g, 87%) as a white solid.
Potassium bis(catecholato)-acetoxypropylsilicate (17')
1H NMR (600 MHz, Methanol-d4): δ 6.68 (dd, J = 5.6, 3.4 Hz, 4H), 6.57 (dd, J = 5.8, 3.5 Hz, 4H), 3.82 (s, 2H), 3.53 (s, 24H), 1.83 (s, 3H). 13C NMR (151 MHz, Methanol-d4): δ 174.1, 150.9 (4 C), 119.5 (4 C), 111.7 (4 C), 71.2 (12 C), 58.1, 20.7. 29Si NMR (119 MHz, Methanol-d4): δ -85.8 (t, J = 5.7 Hz). HRMS calc. for [C15H13O6Si]- 317.0487; found 317.0495. M.p. 110 °C. IR (neat): 3028, 2901, 2868, 1719, 1599, 1487, 1348, 1243, 1102, 963, 830, 737 cm-1.
Following the general procedure A with acetoxypropyl trimethoxysilane (5 mmol, 1.05 mL), catechol (10 mmol, 1.10 g) and potassium methoxide (5 mmol, 1.4 mL of a 3.56 M solution in methanol) in 20 mL of dry methanol. The crude product was purified according to the general procedure to afford 17' (1.55 g, 70%*) as a white solid.
…5 (4 C), 68.6, 24.9, 20.8, 13.7. HRMS calc. for [C17H17O6Si]- 345.0800; found 345.0813. M.p. 160 °C. IR (neat): 3016, 2950, 2882, 1735, 1597, 1486, 1351, 1242, 1105, 955, 819, 749, 725 cm-1.
Potassium [18-Crown-6] bis(catecholato)-4-methoxyphenylsilicate (19)
Following the general procedure A with 4-methoxyphenyltriethoxysilane (2.5 mmol, 656 µL), catechol (5 mmol, 550.6 mg), 18-Crown-6 (2.5 mmol, 660 mg) and potassium methoxide (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol. The crude product was purified according to the general procedure to afford 19 (1.4 g, 85%) as a white solid.
1H NMR (400 MHz, Methanol-d4): δ 7.53 (d, J = 8.8 Hz, 2H), 6.90-6.66 (m, 6H), 6.62-6.45 (m, 4H), 3.69 (s, 3H), 3.52 (s, 24H). 13C NMR (100 MHz, Methanol-d4): δ 161.4, 151.1 (4 C), 137.6 (2 C), 132.8, 119.4 (4 C), 113.6 (2 C), 111.6 (4 C), 71.2 (12 C), 55.3. 29Si NMR
Following the general procedure A with 3-chloropropyltrimethoxysilane (5 mmol, 910 µL), catechol (10 mmol, 1.10 g), 18-Crown-6 (5 mmol, 1.32 g) and potassium methoxide (5 mmol, 1.4 mL of a 3.56 M solution in methanol) in 20 mL of dry methanol.
The crude product was purified according to the general procedure to afford 12 (2.96 g, 95%) as a white solid. M.p. 197 °C. IR (neat): 2895, 1588, 1485, 1453, 1350, 1246, 1228, 1096, 1012, 949, 821, 743, 657, 585 cm-1.
…(119 MHz, Methanol-d4): δ -87.50. HRMS calc. for [C19H15O5Si]- 351.0694; found 351.0700.
Potassium [18-Crown-6] bis(catecholato)-3-chloropropylsilicate (12)
1H NMR (400 MHz, Methanol-d4): δ 6.68 (dd, J = 5.6, 3.5 Hz, 4H), 6.56 (dd, J = 5.6, 3.5 Hz, 4H), 3.88 (t, J = 7.0 Hz, 2H), 2.15 (s, 6H), 1.92 (s, 3H), 1.66-1.56 (m, 2H), 0.7-0.65 (m, 2H). 13C NMR (101 MHz, Methanol-d4): δ 173.3, 150.8 (4 C), 119.4 (4 C), 111.…
*Silicate without [18-Crown-6] crystallizes with a molecule of acetone.
Following the general procedure A with 3-acetoxypropyltrimethoxysilane (2.5 mmol, 590 µL), catechol (5 mmol, 550.6 mg), 18-Crown-6 (2.5 mmol, 660 mg) and potassium methoxide (2… M.p. >250 °C. IR (neat): …
…mmol, 791 mg), 2-bromoallyl alcohol (5 mmol, 685 mg) and 25 mL of DCM. After 15 minutes of stirring, imidazole (5.25 mmol, 374 mg) was added and the mixture stirred overnight. The mixture was filtered over a pad of silica and eluted with Et2O. The solvent was removed under reduced pressure to give ((2-bromoallyl)oxy)(tert-butyl)dimethylsilane (1.25 g, 99%). The spectroscopic data are in agreement with those reported in the literature.204
1H NMR (400 MHz, CDCl3): δ 5.85 (d, J = 1.8 Hz, 1H), 5.43 (d, J = 1.6 Hz, 1H), 4.11-4.10 (m, 2H), 0.82 (s, 9H), 0.00 (s, 6H). 13C NMR (100 MHz, CDCl3): δ 132.0, 114.8, 67.6, 25.9 (3 C), 18.5, -5.2 (2 C). IR (neat): 2958, 2854, 1637, 1463, 124, 1085, 838, 774 cm-1.
Following general procedure C with potassium ((1R,2R,3R,5S)-2,6,6-trimethylbicyclo[3.1.1]heptan-3-yl)trifluoroborate (1 mmol, 244 mg), methyl vinyl ketone (5 mmol, 0.4 mL) and triphenylcarbenium tetrafluoroborate (1 mmol, 330 mg).
The crude product was purified by flash column chromatography (pentane) to afford 29 as a colorless oil (131 mg, 63%). The spectroscopic data are in agreement with those reported in the literature.150
1H NMR (400 MHz, CDCl3): δ 2.46 (ddd, J = 16.5, 10.1, 5.9 Hz, 1H), 2.34 (ddd, J = 16.5, 10.4, 5.1 Hz, 1H), 2.30-2.22 (m, 1H), 2.12 (s, 3H), 2.15-2.06 (m, 1H), 1.91-1.85 (m, 1H), 1.85-1.70 (m, 2H), 1.68-1.50 (m, 2H), 1.45-1.34 (m, 2H), 1.16 (s, 6H), 0.99 (d, J = 7.0 Hz, 3H), 0.96 (s, 3H), 0.70 (d, J = 9.7 Hz, 1H).
2,2,6,6-Tetramethyl-1-((2-((1R,4S)-4-methylcyclohex-2-en-1-yl)propan-2-yl)oxy)piperidine (28)
Following general procedure C with (1S,2R,3S,6R)-3,7,7-trimethylbicyclo[4.1.0]heptan-2-trifluoroborate (0.3 mmol, 65 mg) and triphenylcarbenium tetrafluoroborate (0.3 mmol, 99 mg) in DMF. The crude product was purified by flash column chromatography (pentane/diethyl ether, 99/1) to afford 28 as a colorless oil (48 mg, 67%). The spectroscopic data are in agreement with those reported in the literature.150
1H NMR (400 MHz, CDCl3): δ 5.76 (m, 1H), 5.67 (m, 1H), 2.53 (m, 1H), 2.18 (m, 1H), 1.74-1.65 (m, 2H), 1.57-1.40 (m, 6H), 1.30-1.24 (m, 2H), 1.21 (s, 3H), 1.20 (s, 3H), 1.12 (s, 6H), 1.10 (s, 3H), 1.09 (s, 3H), 0.98 (d, J = 7.2 Hz, 3H). 13C NMR (100 MHz, CDCl3): δ 133.9, 128.8, 81.1, 59.5 (2 C), 47.2, 41.2 (2 C), 35.3, 35.1, 29.3, 29.0, 24.0, 23.8, 21.2, 21.0 (2 C), 20.7, 17.3.
4-((1R,2S,3R,5R)-2,6,6-Trimethylbicyclo[3.1.1]heptan-3-yl)butan-2-one (29)
13C NMR (100 MHz, CDCl3): δ 209.5, …
…HRMS calc. for [C20H23NaO3P]+ 365.1277; found 365.1268, for [(C20H23O3P)2Na]+ 707.2662; found 707.2366. …1437, 1176, 1105, 717, 694 cm-1.
Ethyl 6-acetoxy-2-methylenehexanoate (37)
Following general procedure D with 17 (0.3 mmol, 194.6 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 90/10) to afford 37 as a colorless oil (24 mg, 37%).
The potassium allyl trifluoroborate (0.3 mmol, 44 mg), 9-mesityl-10-methylacridinium perchlorate (0.03 mmol, 12.4 mg) and TEMPO (0.66 mmol, 103 mg) were added to a Schlenk flask. The Schlenk flask was evacuated/purged with vacuum/argon three times. Degassed DMF (3 mL) was introduced, followed by two freeze-pump-thaw cycles, and the reaction mixture was irradiated with a blue LED (477 nm) at room temperature for 24 h under an argon atmosphere. The reaction mixture was diluted with diethyl ether (50 mL), washed with saturated aqueous NaHCO3 (2 times) and brine (2 times), dried over MgSO4 and evaporated under reduced pressure. The crude product was purified by flash column chromatography to afford 30 as a colorless oil (19 mg, 32%). The spectroscopic data are in agreement with those reported in the literature.66
…-butoxy)-2,2,6,6-tetramethylpiperidine (30)
1H NMR (400 MHz, CDCl3): δ 6.15 (d, J = 1.5 Hz, 1H), 5.52 (d, J = 1.4 Hz, 1H), 4.20 (q, J = 7.1 Hz, 2H), 4.07 (t, J = 6.5 Hz, 2H), 2.35-2.31 (m, 2H), 2.04 (s, 3H), 1.69-1.62 (m, 2H), 1.58-1.50 (m, 2H), 1.30 (t, J = 7.1 Hz, 3H). 13C NMR (100 MHz, CDCl3): δ 171.… HRMS calc. for [C11H18NaO4]+ 237.1097; found 237.1097.
(Cyclohexylethynyl)benzene (39)
…, 4.42 (s, 2H), 4.16 (s, 1H), 2.59 (s, 3H). 13C NMR (100 MHz, CDCl3): δ 197.7, 147.7, 145.2, 136.2, 129.3 (2 C), 128.7 (2 C), 127.3 (2 C), 117.9, 112.9 (2 C), 47.9, 26.6. IR (neat): 3321, 1669, 1597, 1510 cm-1.
1H NMR (400 MHz, CDCl3): δ 7.87 (d, J = 8.2 Hz, 2H), 7.26 (d, J = 8.2 Hz, 2H), 2.68-2.64 (m, 2H), 2.58 (s, 3H), 1.67-1.58 (m, 2H), 1.38-1.26 (m, 6H), 0.88 (t, J = 6.8 Hz, 3H). 13C NMR (100 MHz, CDCl3): δ 198.…
212 G.-N. Wang, T.-H. Zhu, S.-Y. Wang, T.-Q. Wei and S.-J. Ji, Tetrahedron, 2014, 70, 8079-8083.
3-(4-Hydroxyphenyl)propyl acetate (74)
1H NMR (400 MHz, CDCl3): δ 7.74 (d, J = 8.0 Hz, 2H), 7.20 (d, J = 8.0 Hz, 2H), 4.08 (t, J = 6.6 Hz, 2H), 2.72-2.68 (m, 2H), 2.05 (s, 3H), 1.97-1.94 (m, 2H), 1.34 (s, 12H).
13C NMR (100 MHz, CDCl3): δ 171.1, 144.6, 135.0 (2 C), 127.8 (2 C), 83.7 (2 C), 63.8, 32.4, 30.1, 24.9 (4 C), 21.0. 11B NMR (128 MHz, CDCl3): 30.6. IR (neat): 2960, 1737, 1611, 1357, 1235, 657 cm-1. HRMS calc. for [C17H25BNaO4]+ 327.1741; found 327.1754. 218
1H NMR (400 MHz, CDCl3): δ 7.04 (d, J = 8.5 Hz, 2H), 6.76 (d, J = 8.5 Hz, 2H), 5.19 (s, 1H), 4.08 (t, J = 6.6 Hz, 2H), 2.63-2.59 (m, 2H), 2.06 (s, 3H), 1.95-1.89 (m, 2H). 13C NMR (100 MHz, CDCl3): δ 171.
1H NMR (400 MHz, CDCl3): δ 8.12 (d, J = 5.1 Hz, 1H), 7.02 (dt, J = 5.0, 1.6 Hz, 1H), 6.85-6.68 (m, 1H), 3.53 (t, J = 6.3 Hz, 2H), 2.83 (dd, J = 8.4, 6.8 Hz, 2H), 2.19-2.02 (m, 2H). 13C NMR (100 MHz, CDCl3): δ 164.1 (d, J = 239.5 Hz), 155.7 (d, J = 7.7 Hz), 147.6, 121.7, 109.3 (d, J = 38.0 Hz), 43.6, 32.5, 31.9 (d, J = 2.8 Hz). 19F NMR (376 MHz, CDCl3): δ -68.6. IR (neat): 2926, 1613, 1558, 1411, 1275, 1148, 908, 728 cm-1. HRMS calc. for [C8H9ClFNNa]+ 196.0300; found 196.0306.
1H NMR (400 MHz, CDCl3): δ 7.13 (dd, J = 5.2, 1.2 Hz, 1H), 6.92 (dd, J = 5.1, 3.4 Hz, 1H), 6.80 (s, 1H), 4.12 (t, J = 6.4 Hz, 2H), 2.95-2.90 (m, 2H), 2.06 (s, 3H), 2.04-1.99 (m, 2H). 13C NMR (100 MHz, CDCl3): δ 171. IR (neat): 2942, 1734, 1232, 1040, 697 cm-1. HRMS calc. for [C9H12SLiO2]+ 191.0713; found 191.0710.
Following general procedure F with 2-(diphenylphosphine oxide)ethylsilicate 16 (0.45 mmol, 350 mg) and 4'-bromoacetophenone 44 (0.3 mmol, 60 mg). The crude product was purified by flash column chromatography (dichloromethane/ethyl acetate, 60/40 to 40/60) to afford 90 contaminated with 15% of ethyldiphenylphosphine oxide as a white solid (79 mg,
Following general procedure H with acetoxypropylsilicate 17 (0.45 mmol, 292 mg) and 1-bromonaphthalene (0.3 mmol, 42 µL). The crude product was purified by flash column chromatography (pentane/diethyl ether, 95/5) to afford 93 as a colorless oil (43 mg, 63%).
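Charges such as "17 (0.45 mmol, 292 mg)" and yields such as "(43 mg, 63%)" in these procedures reduce to the usual m = n × M arithmetic. A small sketch (the molar masses, 648.8 g/mol for silicate 17 and 228.3 g/mol for 93, are computed from molecular formulas inferred from the compound names, so treat them as assumptions):

```python
def mass_mg(n_mmol, molar_mass):
    """Mass (mg) corresponding to n_mmol of a compound of molar mass M (g/mol)."""
    return n_mmol * molar_mass

def yield_percent(isolated_mg, n_mmol_limiting, molar_mass_product):
    """Isolated yield relative to the theoretical mass of product."""
    return 100.0 * isolated_mg / mass_mg(n_mmol_limiting, molar_mass_product)

# Assumed molar masses (g/mol), inferred from the compound names:
M_SILICATE_17 = 648.8  # K[18-C-6] bis(catecholato)-acetoxypropylsilicate
M_93 = 228.3           # 3-(naphthalen-1-yl)propyl acetate, C15H16O2

print(f"{mass_mg(0.45, M_SILICATE_17):.0f} mg")  # ~292 mg of 17
print(f"{yield_percent(43, 0.3, M_93):.0f} %")   # ~63 %
```

Both printed values match the quantities reported in the procedure above, which is a quick sanity check on the assumed formulas.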
3-(Naphthalen-1-yl)propyl acetate (93)
1H NMR (400 MHz, CDCl3): δ 8.03 (dt, J = 7.8, 0.9 Hz, 1H), 7.87 (dd, J = 8.2, 1.3 Hz, 1H),
1-(3-(2-(Diphenylphosphoryl)ethyl)phenyl)ethan-1-one (90)
…68%).
1H NMR (400 MHz, CDCl3): δ 7.82 (d, J = 8.3 Hz, 2H), 7.77-7.72 (m, 4H), 7.53-7.43 (m, 6H), 7.23 (d, J = 8.3 Hz, 2H), 3.06-2.87 (m, 2H), 2.60-2.54 (m, 2H), 2.53 (s, 3H).
Photooxidation of hypervalent silicates for the generation of carbon-centered radicals: radical processes and dual catalysis
Visible-light photoredox catalysis has established itself as a mild and eco-compatible method for the formation of radical species, and more particularly of carbon-centered radicals. Although this catalysis has proved its efficiency for the formation of carbon-carbon/heteroatom bonds, the carbon radicals formed are very often stabilized. Conversely, alkyl bis-catecholato silicates have shown their ability to generate non-stabilized alkyl radicals by photooxidation with visible-light-active polypyridine complexes of transition metals (Ru, Ir), but also with organic photocatalysts. The radicals thus formed can be trapped by various radical acceptors. Furthermore, alkyl bis-catecholato silicates are engaged with electrophiles such as alkenyl or (hetero)aryl halides under photoredox/nickel dual catalysis conditions in order to form C(sp2)-C(sp3) bonds. The methodology was extended to C(sp3)-C(sp3) coupling, albeit with some limitations. In addition, a study comparing silicates and boron "ate-complexes" for the formation of radicals by oxidation processes is presented. Finally, promising work on the oxidation of silicates by copper complexes bearing non-innocent ligands has been initiated.
Keywords: photoredox catalysis, photooxidation, photocatalyst, silicate, radical, cross-coupling, dual catalysis, nickel catalysis
Photooxidation of hypervalent silicon species for the formation of carbon-centered radicals: radical processes and dual catalysis
(a) M. Gomberg, J. Am. Chem. Soc., 1900, 22, 757-771. (b) M. Gomberg, J. Am. Chem. Soc., 1901, 23, 496-502. (c) M. Gomberg, J. Am. Chem. Soc., 1902, 24, 597-628.
C. K. Prier, D. A. Rankic and D. W. C. MacMillan, Chem. Rev., 2013, 113, 5322-5363.
(a) N. A. Romero and D. A. Nicewicz, Chem. Rev., 2016, 116, 10075-10166. (b) M. Neumann, S. Füldner, B. König and K. Zeitler, Angew. Chem. Int. Ed., 2011, 50, 951-954.
J. W. Tucker, J. D. Nguyen, J. M. R. Narayanam, S. W. Krabbe and C. R. J. Stephenson, Chem. Commun., 2010, 46, 4985-4987.
L. Chenneberg, A. Baralle, M. Daniel, L. Fensterbank, J.-P. Goddard and C. Ollivier, Adv. Synth. Catal., 2014, 356, 2756-2762.
E. Hasegawa, S. Takizawa, T. Seida, A. Tamagucgi, N. Yamaguchi, N. Chiba, T. Takahashi, H. Ideka and K. Akiyama, Tetrahedron, 2006, 62, 6581-6588.
J. D. Nguyen, E. M. D'Amato, J. M. R. Narayanam and C. R. J. Stephenson, Nature Chem., 2012, 4, 854-859.
(a) A. McNally, C. K. Prier and D. W. C. MacMillan, Science, 2011, 334, 1114-1117. (b) Z. Zuo and D. W. C. MacMillan, J. Am. Chem. Soc., 2014, 136, 5257-5260. (c) A. Noble and D. W. C. MacMillan, J. Am. Chem. Soc., 2014, 136, 11602-11605.
Y. Yasu, T. Koike and M. Akita, Adv. Synth. Catal., 2012, 354, 3414-3420.
C. L. Frye, J. Am. Chem. Soc., 1964, 86, 3170-3171.
(a) G. Cerveau, C. Chuit, R. J. P. Corriu, L. Gerbier, C. Reye, J.-L. Aubagnac and B. El Amrani, Int. J. Mass Spectrom. Ion Phys., 1988, 82, 259. (b) A. Boudin, G. Cerveau, C. Chuit, R. J. P. Corriu and C. Reye, Bull. Chem. Soc. Jpn., 1988, 61, 101-106.
V. Corcé, L.-M. Chamoreau, E. Derat, J.-P. Goddard, C. Ollivier and L. Fensterbank, Angew. Chem. Int. Ed., 2015, 54, 11414-11418.
J. C. Tellis, D. N. Primer and G. A. Molander, Science, 2014, 345, 433-436.
Z. Zuo, D. T.
Ahneman, L. Chu, J. A. Terrett, A. G. Doyle and D. W. C. MacMillan, Science, 2014, 345, 437-440.
L. Chenneberg, C. Lévêque, V. Corcé, A. Baralle, J.-P. Goddard, C. Ollivier and L. Fensterbank, Synlett, 2016, 27, 731-735.
C. Lévêque, L. Chenneberg, V. Corcé, C. Ollivier and L. Fensterbank, Chem. Commun., 2016, 52, 9877-9880.
C. Lévêque, V. Corcé, L. Chenneberg, C. Ollivier and L. Fensterbank, Eur. J. Org. Chem., 2017, 2118-2121.
P. Chaudhuri, C. Nazari Veani, E. Bill, E. Bothe, T. Weyhermüller and K. Wieghardt, J. Am. Chem. Soc., 2001, 123, 2213-2223.
J. Jacquet, S. Blanchard, E. Derat, M. Desage-El Murr and L. Fensterbank, Chem. Sci., 2016, 7, 2030-2036.
For a comprehensive account on all aspects of radical chemistry in synthesis, see: (a) C. Chatgilialoglu and A. Studer in Encyclopedia of Radicals in Chemistry, Biology and Materials, Eds. John Wiley & Sons Ltd, Chichester, 2012. (b) P. Renaud and M. P. Sibi in Radicals in Organic Synthesis, Vol. 1 & 2, Wiley-VCH, Weinheim, 2001. (c) D. P. Curran, N. A. Porter and B. Giese in Stereochemistry of Radical Reactions, VCH, Weinheim, 1996. (d) A. Gansäuer in Radicals in Synthesis I & II, Topics in Current Chemistry, Springer, Heidelberg, Vols 263 & 264, 2006.
A. Studer and S. Amrein, Synthesis, 2002, 7, 835-849.
C. Ollivier and P. Renaud, Chem. Rev., 2001, 101, 3415-3434.
D. M. Hedstrand, W. H. Kruizinga and R. M. Kellogg, Tetrahedron Lett., 1978, 19, 1255-1258.
For reviews on the use of organic photocatalysts: (a) S. Fukuzumi and K. Ohkubo, Org. Biomol. Chem., 2014, 12, 6059-6071. (b) N. A. Romero and D. A. Nicewicz, Chem. Rev., 2016, 116, 10075-10166. (c) M. Neumann, S. Füldner, B. König and K. Zeitler, Angew. Chem. Int. Ed., 2011, 50, 951-954.
(a) M. A. Baldo, S. Lamansky, P. E. Burrows, M. E. Thompson and S. R. Forrest, Appl. Phys. Lett., 1999, 75, 4-6. (b) A. Endo, K. Sato, K. Yoshimura, T. Kai, A. Kawada, H. Miyazaki and C. Adachi, Appl. Phys. Lett., 2011, 98, 083302. (c) S. Y. Lee, T. Yasuda, H. Nomura and C. Adachi, Appl. Phys.
Lett., 2012, 101, 093306. (e) H. Uoyama, K. Goushi, K. Shizu, H. Nomura and C. Adachi, Nature, 2012, 492, 234-238.
J. Luo and J. Zhang, ACS Catal., 2016, 6, 873-877.
W. E. Jones and M. A. Fox, J. Phys. Chem., 1994, 98, 5095-5099.
(a) D. Ravelli and M. Fagnoni, ChemCatChem, 2012, 4, 169-171. (b) N. A. Romero and D. A. Nicewicz, Chem. Rev., 2016, 116, 10075-10166.
N. J. Turro, Modern Molecular Photochemistry, 98-100 (Benjamin Cummings, 1978).
(a) C. Lévêque, L. Chenneberg, V. Corcé, C. Ollivier and L. Fensterbank, Chem. Commun., 2016, 52, 9877-9880. (b) B. A. Vara, M. Jouffroy and G. A. Molander, Chem. Sci., 2017, 8, 530-535. (c) E. E. Stache, T. Rovis and A. G. Doyle, Angew. Chem. Int. Ed., 2017, 56, 3679-3683.
M. H. Larraufie, R. Pellet, L. Fensterbank, J. P. Goddard, E. Lacôte, M. Malacria and C. Ollivier, Angew. Chem. Int. Ed., 2011, 50, 4463-4466.
(a) J. Zhou and G. C. Fu, J. Am. Chem. Soc., 2004, 126, 1340-1341. (b) M. Guisán-Ceinos, R. Soler-Yanes, D. Collado-Sanz, V. B. Phapale, E. Buñuel and D. J. Cárdenas, Chem. Eur. J., 2013, 19, 8405-8410. (c) D. A. Powell, T. Maki and G. C. Fu, J. Am. Chem. Soc., 2005, 127, 510-511.
M. E. Lorris, R. A. Abramovitch, J. Marquet and M. Moreno-Mañas, Tetrahedron, 1992, 48, 6909-6916.
C.-P. Zhang, Z.-L. Wang, Q.-Y. Chen, C.-T. Zhang, Y.-C. Gu and J.-C. Xiao, Angew. Chem. Int. Ed., 2011, 50, 1896-1900.
J. Zoller, D. C. Fabry, M. A. Ronge and M. Rueping, Angew. Chem. Int. Ed., 2014, 53, 13264-13268.
M. D. K. Boele, G. P. F. van Strijdonck, A. H. M. de Vries, P. C. J. Kamer, J. G. de Vries and P. W. N. M. van Leeuwen, J. Am. Chem. Soc., 2002, 124, 1586-1587.
S. A. Macgregor, G. W. Neave and C. Smith, Faraday Discuss., 2003, 124, 111-127.
(a) P. T. Matsunaga, G. L. Hillhouse and A. L. Rheingold, J. Am. Chem. Soc., 1993, 115, 2075-2077. (b) P. T. Matsunaga, J. C. Mavropoulos and G. L. Hillhouse, Polyhedron, 1995, 14, 175-185. (c) R. Han and G. L. Hillhouse, J. Am. Chem. Soc., 1997, 119, 8135-8136.
A. Klein, A. Kaiser, W. Wielandt, F. Belaj, E. Wendel, H.
Bertagnolli and S. Záliš, Inorg. Chem., 2008, 47, 11324-11333.
D. V. Patil, H. Yun and S. Shin, Adv. Synth. Catal., 2015, 357, 2622-2628.
(a) A. Hosomi, S. Kohra and Y. Tominaga, J. Chem. Soc., Chem. Commun., 1987, 1517-1518. (b) A. Boudin, G. Cerveau, C. Chuit, R. J. P. Corriu and C. Reye, Bull. Chem. Soc. Jpn., 1988, 61, 101-106.
Y. Nishigaichi, A. Suzuki, T. Saito and A. Takuwa, Tetrahedron Lett., 2005, 46, 5149-5151.
D. Matsuoka and Y. Nishigaichi, Chem. Lett., 2014, 43, 559-561.
A. A. Isse and A. Gennaro, J. Phys. Chem. A, 2004, 108, 4180-4186.
M. Montalti, A. Credi, L. Prodi and M. T. Gandolfi, Handbook of Photochemistry, 3rd ed., Taylor & Francis, Boca Raton, 2006.
C. L. Frye, J. Am. Chem. Soc., 1964, 86, 3170-3171.
G. Cerveau, C. Chuit, R. J. P. Corriu, L. Gerbier, C. Reye, J.-L. Aubagnac and B. El Amrani, Int. J. Mass Spectrom. Ion Phys., 1988, 82, 259.
With acetone: d C-Si 1.880 Å; d O-Si O1: 1.778, O2: 1.737, O3: 1.776, O4: 1.735 Å; α C-Si-O O1: 93.48, O2: 116.35, O3: 105.75, O4: 105.98°; α O-Si-O O1-O3: 160.64, O2-O4: 137.40°.
With [18-C-6]: d C-Si 1.883 Å; d O-Si O1: 1.749, O2: 1.769, O3: 1.743, O4: 1.743 Å; α C-Si-O O1: 103.61, O2: 103.61, O3: 106.88, O4: 106.88°; α O-Si-O O1-O3: 149.51, O2-O4: 149.51°.
A. I. Prokof'ev, T. I. Prokof'eva, I. S. Belostotskaya, N. N. Bubnov, S. P. Solodovnikov, V. V. Ershov and M. I. Kabachnik, Tetrahedron, 1979, 35, 2471-2482.
J. L. Hodgson, M. Namazian, S. E. Bottle and M. L. Coote, J. Phys. Chem. A, 2007, 111, 13595-13605.
For selective reports on organic dyes as photocatalysts see: (a) D. P. Hari, T. Hering and B. König, Org. Lett., 2012, 14, 5334-5337. (b) Y. C. Teo, Y. Pan and C. H. Tan, ChemCatChem, 2013, 5, 235-240. (c) D. Leow, Org. Lett., 2014, 16, 5812-5815. (d) S. P. Pitre, C. D. McTiernan, H. Ismaili and J. C. Scaiano, ACS Catal., 2014, 4, 2530-2535. (e) P. D. Morse and D. A. Nicewicz, Chem. Sci., 2014, 6, 270-274. (f) A. Graml, I. Ghosh and B. König, J. Org. Chem., 2017, 82, 3552-3560.
(a) D. Leca, L. Fensterbank, E. Lacôte and M.
Malacria, Angew. Chem. Int. Ed., 2004, 43, 4220. (b) G. Ouvry, B. Quiclet-Sire and S. Z. Zard, Angew. Chem. Int. Ed., 2006, 45, 5002.
A.-P. Schaffner, V. Darmency and P. Renaud, Angew. Chem. Int. Ed., 2006, 45, 5847.
T. T. Tsou and J. K. Kochi, J. Am. Chem. Soc., 1979, 101, 6319-6332.
(a) J. Hanss and H.-J. Krüger, Angew. Chem. Int. Ed., 1998, 37, 360-363. (b) G. E. Martinez, C. Ocampo, Y. J. Park and A. R. Fout, J. Am. Chem. Soc., 2016, 138, 4290-4293. (c) E. Chong, J. W. Kampf, A. Ariafard, A. J. Canty and M. S. Sanford, J. Am. Chem. Soc., 2017, 139, 6058-6061.
D. A. Everson, R. Shrestha and D. J. Weix, J. Am. Chem. Soc., 2010, 132, 920-921.
J. C. Tellis, D. N. Primer and G. A. Molander, Science, 2014, 345, 433-436.
Z. Zuo, D. T. Ahneman, L. Chu, J. A. Terrett, A. G. Doyle and D. W. C. MacMillan, Science, 2014, 345, 437-440.
E. E. Stache, T. Rovis and A. G. Doyle, Angew. Chem. Int. Ed., 2017, 56, 3679-3683.
J. F. Hartwig, Organotransition Metal Chemistry: From Bonding to Catalysis, University Science Books, Sausalito, 2009.
(a) Y.-P. Zhao, L.-Y. Yang and R. S. H. Liu, Green Chem., 2009, 11, 837-842. (b) K. Singh, S. J. Staig and J. D. Weaver, J. Am. Chem. Soc., 2014, 136, 5275-5278.
For a review on molecular photochemistry, see: N. J. Turro, V. Ramamurthy and J. C. Scaiano, Modern Molecular Photochemistry, University Science Books, Sausalito CA, 1991.
N. R. Patel, C. B. Kelly, A. P. Siegenfeld and G. A. Molander, ACS Catal., 2017, 7, 1766-1770.
C. Lévêque, V. Corcé, L. Chenneberg, C. Ollivier and L. Fensterbank, Eur. J. Org. Chem., 2017, 2118-2121.
M. R. Prinsell, D. A. Everson and D. J. Weix, Chem. Commun., 2010, 46, 5743-5745.
For the PhD manuscript see: Catalyse coopérative avec les ligands rédox non-innocents : processus radicalaires et organométalliques, Jérémy Jacquet, Université Pierre et Marie Curie, 2016.
C. K. Jørgensen, Coord. Chem. Rev., 1966, 1, 164-178.
P. Chaudhuri, C. Nazari Veani, E. Bill, E. Bothe, T. Weyhermüller and K. Wieghardt, J. Am. Chem.
Soc., 2001, 123, 2213-2223.
J. Jacquet, S. Blanchard, E. Derat, M. Desage-El Murr and L. Fensterbank, Chem. Sci., 2016, 7, 2030-2036.
C. Mukherjee, U. Pieper, E. Bothe, V. Bachler, E. Bill, T. Weyhermüller and P. Chaudhuri, Inorg. Chem., 2008, 47, 8943-8956.
A. Beeby, S. Bettington, I. J. S. Fairlamb, A. E. Goeta, A. R. Kapdi, E. H. Niemelä and A. L. Thompson, New J. Chem., 2004, 28, 600-605.
M. Nonoyama, Bull. Chem. Soc. Jpn., 1974, 47, 767-768.
M. Charpenay, A. Boudhar, C. Hulot, G. Blond and J. Suffert, Tetrahedron, 2013, 69, 7568-7591.
1H NMR (300 MHz, CDCl3): δ 7.42-7.39 (m, 2H), 7.31-7.24 (m, 3H), 2.64-2.55 (m, 1H),
1H NMR (300 MHz, CDCl3): δ 3.67 (s, 3H), 3.64 (s, 3H), 2.76-2.65 (m, 2H), 2.48-2.38 (m, 1H), 1.74-1.85 (m, 6H), 1.29-0.97 (m, 5H). 13C NMR (100 MHz, CDCl3): δ 175.0, 173.0, 51.80, 51.6, 47.1, 40.03, 33.3, 30.7, 30.2, 29.8, 26.4, 26.2.
1H NMR (400 MHz, CDCl3): δ 7.22-7.18 (m, 1H), … 3H), 4.09 (t, J = 6.6 Hz,
Y. M. Chiang, H. K. Liu, J. M. Lo, S. C. Chien, Y. F. Chan, T. H. Lee, J. K. Su and Y. H. Kuo, J. Chin. Chem. Soc., 2003, 50, 161-166.
Y. Cai, X. Qian and C. Gosmini, Adv. Synth. Catal., 2016, 358, 2427-2430.
1H NMR (400 MHz, CDCl3): δ 4.13 (q, J = 7.1 Hz, 2H), 2.35 (t, J = 7.1 Hz, 2H), 2.32 (t, J =
• Formation of radicals by atom abstraction Once the nickel catalyst undergoes oxidative addition of RX to give the intermediate R [Ni(II)]X and a radical R'• adds upon it to furnish the transient complexe RR' [Ni(III)]X. The way to generate the carbon-centered radical is, therefore an essential point. The direct oxidation of radical precursors proved to be effective toward these dual/catalytic processes. Recently, the formation of radicals by atom abstraction further involved in this kind of synthetic approach has been reported. MacMillan and co-workers could realize the crosscoupling reaction of alkyl bromides with aryl bromides. 180 This type of reaction previously reported by Weix could be performed at room temperature by using as radical mediator TTMSSH (Scheme 77 a)). The authors proposed the formation of a bromine radical in the process which can abstract a hydrogen atom to the TTMSSH. Then, the silyl radical abstracts the bromine of the alkyl bromide and liberates the reactive alkyl radical. The group of Doyle proposed a hydrogen atom abstraction by a chlorine radical. In their system, the cross-coupling was achieved between an aryl chloride and an α-oxy radical (Scheme 77 b)). They proposed that a chlorine radical can be formed from the intermediate Ar [Ni(III)]Cl. Actually, after oxidative addition, the Ar [Ni(II)]Cl is oxidized by the photoexcited catalyst. Then, the intermediate Ar [Ni(III)]Cl would release the chlorine radical which abstract a hydrogen to the solvent (THF 181 or dioxolane 182 ). The generated carboncentered radical reacts in the same manner as above to give the cross-coupling product. In the case of dioxolane, treatment of the product by HCl provided benzaldehyde derivatives. 180 P. Zhang, C. "Chip" Le and D. W. C. MacMillan, J. Am. Chem. Soc., 2016, 138, 8084-8087. 181 B. J. Shields and A. G. Doyle, J. Am. Chem. Soc., 2016, 138, 12719-12722. 182 M. K. Nielsen, B. J. Shields, J. Liu, M. J. Williams, M. J. Zacuto and A. G. Doyle, Angew. 
Chem., 2017, 129, 7297-7300.
Scheme 77. Photoredox/nickel dual catalysis with the generation of radicals by atom abstraction
• Formation of C-P/C-S bonds
Radicals are not only carbon-centered. The formation of C-X bonds by photoredox/nickel dual catalysis has also been envisioned. In 2015, the group of Xiao reported an efficient method for C-P bond formation under mild conditions. Thanks to a ruthenium photocatalyst and an in situ formed nickel catalyst, they could realize the cross-coupling reaction between (hetero)aryl iodides and phosphine oxides in high yields. The authors proposed the same catalytic cycles as above except that a phosphinyl radical is at stake. This radical is obtained by a base-assisted photooxidation of the phosphinous acid (Scheme 78). More recently, another group engaged phosphonates as radical precursors and aryl/alkenyl tosylates, mesylates or sulfamates as electrophiles.
Scheme 78. Photoredox/nickel dual-catalyzed C-P bond formation
Scheme 79. Photoredox/nickel dual-catalyzed C-S bond formation
Alkyl bis-catecholato silicates in photoredox/nickel dual catalysis: formation of C(sp2)-C(sp3) bonds
Process involving a metal-based photocatalyst
We have recently reported that potassium and ammonium alkyl bis-catecholato silicates are a valuable source of C-centered radicals upon visible-light photooxidation using [Ir(dF(CF3)ppy)2(bpy)](PF6) as catalyst. 144a The silicates display several advantages compared to alkyl trifluoroborates in terms of synthesis, byproducts and stability of the generated radicals. In addition, they are easier to oxidize than the analogous trifluoroborates or carboxylates. 14 Due to the ability of these substrates to be engaged in photoredox/nickel dual-catalysis reactions, we envisioned using the silicates as a radical source in such processes.
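Whether such a photooxidation is feasible can be estimated from redox potentials: single-electron transfer from the silicate to the excited photocatalyst is exergonic when Eox(silicate) lies below E*red(photocatalyst). A minimal sketch of that comparison (the value E*red ≈ +1.32 V vs SCE for *[Ir(dF(CF3)ppy)2(bpy)]+ is the commonly cited literature value, +0.34 V vs SCE is the silicate oxidation potential discussed in this work, and work terms are neglected):

```python
# Simplified free energy of photoinduced single-electron oxidation (in eV):
# dG ~ Eox(substrate) - E*red(photocatalyst); negative => favorable.
E_RED_STAR_IR = 1.32  # *Ir(III)/Ir(II) of [Ir(dF(CF3)ppy)2(bpy)]PF6, V vs SCE (literature value)

def pet_driving_force_eV(e_ox_substrate):
    """Driving force of a one-electron photooxidation, work terms neglected."""
    return e_ox_substrate - E_RED_STAR_IR

dg = pet_driving_force_eV(0.34)  # anilinomethyl bis-catecholato silicate
print(f"dG = {dg:+.2f} eV ({'favorable' if dg < 0 else 'unfavorable'})")
```

With the lower oxidation potentials measured for the silicates, the driving force is comfortably negative, which is consistent with their smooth photooxidation compared to trifluoroborates or carboxylates.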
Preliminary results revealed that silicates can be coupled with 4-bromobenzonitrile in the presence of an iridium photocatalyst, Ni(COD)2 and dtbbpy under visible-light irradiation. 144a In order to gain more insight into these early results, we engaged a wide range of silicates with
The latter presumably indicates the generation of the 5-hexenyl radical and its further interception by the nickel complex to enter the cross-coupling catalytic cycle. This radical generation from the alkyl bromide partner may also be at the origin of the homocoupling product.
We succeeded in extending the photoredox/nickel dual catalysis process with alkyl silicates and electrophiles to the formation of C(sp3)-C(sp3) bonds. Unfortunately, the yields of cross-coupling products are low to moderate due to the formation of the homocoupling product. Based on the reports on the nickel electrophilic cross-coupling reaction and photoredox/nickel catalysis, we could not propose a mechanism taking into account the formation of each product. This mechanism raises intriguing questions and requires further investigation.
Scheme 92. Screening of alkyl silicates and alkyl bromides for the photoredox/nickel C(sp3)-C(sp3) cross-coupling reactions
Scheme 96. Copper-mediated trapping experiments of benzyl radical
We next switched to DCM, which proved to be also efficient with this family of copper complexes. 199 Anilinomethyl silicate 8 was selected due to its lowest oxidation potential (Eox = +0.34 V vs SCE in DMF), in order to render the reaction as thermodynamically favorable as possible. In this case, we did not engage the silicate with TEMPO because it would lead to an unstable hemiaminal. Allylsulfone 31 was used as radical acceptor with 10 mol% of complex 127 to give product 32 in 52% yield after 24 hours of reaction (Table 10, entry 1). Several parameters of the reaction were modified to optimize the system. The catalyst loading was first analyzed.
Decreasing the copper complex loading from 10 mol% to 5 mol% dramatically dropped the yield of the reaction (12%), and the worst result was observed with 2 mol% (entries 4 and 5). We then considered the influence of the concentration. Halving or doubling the concentration had no significant impact on the yield (entries 2 and 3). Increasing the number of equivalents of acceptor from 4 to 6 improved the yield to 66% (entry 6).
Experimental Section
Potassium [18-Crown-6] …
IR: 1350, 1298, 1246, 1104, 1013, 949, 911, 893, 866, 818, 751, 738, 586 cm-1.
Potassium [18-Crown-6] bis(catecholato)-hex-5-enylsilicate (3)
Following the general procedure A with hexenyltrimethoxysilane (2.5 mmol, 511 mg), catechol (5 mmol, 550.6 mg) … IR: 2890, 1599, 1487, 1352, 1246, 1209, 1191, 1142, 1105, 1012, 953, 884, 825, 769, 736, 700, 647, 596 cm-1.
Potassium [18-Crown-6] bis(catecholato)-3-chloropropylsilicate (7)
Potassium [18-Crown-6] bis(catecholato)-allylsilicate (10)
Following the general procedure A with allyltriethoxysilane (2.5 mmol, 538 µL), catechol (5 mmol, 550.6 mg), 18-Crown-6 (2.5 mmol, 660 mg) and potassium methoxide (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol at -20 °C. The crude product was purified according to the general procedure to afford 10 (1.25 g, 84%) as a white solid.
IR: 3035, 2892, 2870, 1486, 1350, 1240, 1166, 1100, 1009, 964, 888, 819, 746, 732, 689, 603, 588 cm-1.
Potassium [18-Crown-6] bis(catecholato)-3-cyanopropylsilicate (14)
Following the general procedure A with 3-cyanopropyltriethoxysilane (5 mmol, 1.16 mL), catechol (10 mmol, 1.10 g), 18-Crown-6 (5 mmol, 1.32 g) and potassium methoxide (5 mmol, 1.4 mL of a 3.56 M solution in methanol) in 20 mL of dry methanol. The crude product was purified according to the general procedure to afford 14 (2.76 g, 90%) as a white solid.
IR: 3039, 2952, 2870, 2236, 1702, 1599, 1484, 1353, 1245, 1227, 1098, 1011, 953, 820, 737 cm-1.
Potassium [18-Crown-6] bis(catecholato)-3,3,3-trifluoropropylsilicate (15)
Following the general procedure A with 3,3,3-trifluoropropyltrimethoxysilane (5 mmol, 956 µL), catechol (10 mmol, 1.10 g), 18-Crown-6 (5 mmol, 1.32 g) and potassium methoxide (5 mmol, 1.4 mL of a 3.56 M solution in methanol) in 20 mL of dry methanol. The crude product was purified according to the general procedure to afford 15 (2.56 g, 80%) as a white solid.
1H NMR (600 MHz, Methanol-d4): δ 6.70 (dd, J = 5.6, 3.5 Hz, 4H), 6.59 (dd, J = 5.7, 3.4
IR: 3040, 2907, 2871, 1597, 1485, 1353, 1245, 1201, 1098, 1057, 820, 739 cm-1.
Following the general procedure A with 2-(diphenylphosphine oxide)ethylsilane (2.5 mmol, 981 mg), catechol (5 mmol, 550.6 mg), 18-Crown-6 (2.5 mmol, 660 mg) and potassium methoxide (2.5 mmol, 700 µL of a 3.56 M solution in methanol) in 10 mL of dry methanol at room temperature. The crude product was purified according to the general procedure to afford 16 (1.41 g, 72%) as a white solid.
Tetraethylammonium bis(catecholato)-tert-butylsilicate (21)
The spectroscopic data are in agreement with those reported in the literature.
(Z)-(2-Bromovinyl)benzene
To a solution of cinnamic acid (50 mmol, 8.9 g) in AcOH (25 mL) was added bromine (55 mmol, 2.85 mL) at rt. When the solution turns yellow, the reaction is over. The mixture was quenched with an aqueous solution of sodium thiosulfate (1 M, 25 mL). The precipitate was filtered and washed with water. The dibromo intermediate compound was directly engaged in a solution of triethylamine (100 mmol, 5.8 mL) in DMF at 0 °C. The resulting mixture was warmed to room temperature and stirred for 5 h. The reaction was quenched by addition of water, the two phases were separated and the aqueous phase was extracted with pentane (2x40 mL).
The combined organic layers were washed with water (50 mL), then dried over anhydrous magnesium sulfate and concentrated under reduced pressure to afford (Z)-(2-bromovinyl)benzene (15 g, 97%). The spectroscopic data are in agreement with those reported in the literature.
filtrated over a pad of celite and the solvent removed under reduced pressure to give the pure product (950 mg, 100%). The spectroscopic data are in agreement with those reported in the literature.
Synthesis of ethyl 4-hydroxybutanoate
To a 50 mL round-bottom flask was added butyrolactone (40 mmol, 3.44 g) and 9 mL of distilled methanol. To the mixture was slowly added a solution of potassium hydroxide (40 mmol, 2.24 g) in a minimum of methanol. After 4 hours of reaction, the solvent was removed under reduced pressure. The resulting white solid was washed with AcOEt and pentane. The solid was dissolved in DMF (25 mL) and bromoethane (40 mmol, 4.5 mL) was added to the solution. The mixture was stirred overnight at room temperature and then diluted with water (75 mL). The aqueous phase was extracted with AcOEt (3x50 mL). The combined organic phases were washed with water (2x50 mL), NaHCO3 (2x50 mL) and brine (2x50 mL), and dried over magnesium sulfate. The solvent was removed under reduced pressure to give the product (4.2 g, 79%).
Synthesis of ethyl 4-(tosyloxy)butanoate
To a 100 mL round-bottom flask was added ethyl 4-hydroxybutanoate (7.6 mmol, 1.01 g), pyridine (3 mL) and 30 mL of distilled DCM. At 0 °C was slowly added tosyl chloride (7.6 mmol, 2.43 g). The resulting yellow mixture was stirred overnight. The solution was washed with a saturated solution of CuSO4. The aqueous phase was extracted with DCM (2x30 mL) and the combined organic phases dried over magnesium sulfate. The solvent was removed under
Radical addition reactions
General procedure C for stoichiometric oxidation of organotrifluoroborates and organobis(catecholato) silicates
To a Schlenk flask was added the appropriate trifluoroborate salt or silicate salt (0.3 mmol), the oxidizing agent (0.3 mmol) and TEMPO (0.9 mmol, 141 mg). The Schlenk flask was sealed with a rubber septum and evacuated/purged with vacuum/argon three times. Degassed DMF or diethyl ether (3 mL) was introduced followed by two freeze-pump-thaw cycles. The reaction mixture was stirred at room temperature for 24 h under an argon atmosphere. The reaction mixture was diluted with diethyl ether (50 mL), washed with water (2 times), brine (2 times), dried over MgSO4 and evaporated under reduced pressure. The reaction residue was purified by flash column chromatography on silica gel.
General procedure D for addition of silicates to allylsulfone
To a Schlenk flask was added the appropriate silicate (1 eq., 0.3 mmol), allyl sulfone (4 eq., 1.2 mmol, 322 mg) and 4CzIPN (1 mol%, 3 µmol, 2.4 mg). Degassed DMF was added (3 mL) and the reaction mixture was irradiated with blue LED (477 nm) at room temperature for 24 h under an argon atmosphere. The reaction mixture was diluted with diethyl ether (50 mL), washed with saturated aqueous NaHCO3 (2 times), brine (2 times), dried over MgSO4 and evaporated under reduced pressure. The crude product was purified by flash column chromatography to afford the adduct.
General procedure E for vinylation and alkynylation reactions of cyclohexylsilicate 1
To a Schlenk flask was added potassium [18-C-6] bis(catecholato) cyclohexylsilicate (1 eq., 0.3 mmol, 189.3 mg), 4CzIPN (1 mol%, 3 µmol, 2.4 mg) and the desired acceptor (4 eq., 1.2 mmol) (liquid alkenes were added with the solvent). Degassed DMF was added (3 mL). The reaction mixture was irradiated with blue LED (477 nm) for 24 hours. The reaction mixture was diluted with diethyl ether (50 mL), washed with saturated aqueous NaHCO3 (2 times), brine (2 times), dried over MgSO4 and evaporated under reduced pressure.
The crude product was purified by flash column chromatography to afford the adduct.
General procedure F for addition of cyclohexylsilicate 1 to activated alkenes
To a Schlenk flask was added potassium [18-C-6] …
13C NMR: δ … 44.5, 41.6, 40.8, 40.6, 38.5, 34.7 (3 C), 33.1, 27.6, 23.9, 22.0, 20.5 (2 C), 17.5.
13C NMR: δ …2, 139.1, 126.8, 60.6, 53.4, 44.5, 31.5, 29.2 (3 C), 14.2.
1-(Tert…
Ethyl 2-(cyclohexylmethyl)acrylate (34)
Following general procedure D with 1 (0.3 mmol, 189.3 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 95/5) to afford 34 as a colorless oil (52 mg, 88%). The spectroscopic data are in agreement with those reported in the literature.
13C NMR: δ … 139.8, 125.6, 125.5, 60.6, 40.1, 36.8, 33.4, 33.2, 26.7, 26.4, 14.3.
The spectroscopic data are in agreement with those reported in the literature.
13C NMR: δ …6, 145.3, 135.6, 128.8 (2 C), 128.6 (2 C), 119.1, 34.3, 26.5, 26.5, 16.4. IR (neat): 2905, 2258, 1675, 1605, 1266 cm-1.
4'-(2-(7-Oxabicyclo[4.1.0]hept-3-yl)ethyl)acetophenone (54)
Following general procedure G with 2-(7-oxabicyclo[4.1.0]…
13C NMR: δ …8 (197.8), 148.4 (148.3), 135.0 (135.0), 128.5 (128.5), 128.5 (128.5), 53.0 (52.5), 51.8 (51.7), 38.1 (37.8), 33.3 (33.0), 32.1 (31.8), 30.6 (29.3), 27.0 (25.1), 26.5, 24.3 (23.5).
13C NMR: δ …8, 146.5, 135.5, 128.8 (2 C), 128.7 (2 C), 44.1, 33.6, 32.8, 26.6. IR (neat): 2935, 1678, 1605, 1358, 1265 cm-1.
4'-(3,3,3-Trifluoropropyl)acetophenone (57)
Following general procedure G with 3,3,3-trifluoropropylsilicate 13 (0.45 mmol, 290 mg) and 4'-bromoacetophenone 44 (0.3 mmol, 60 mg). The crude product was purified by flash column chromatography (pentane/diethyl ether, 99/1 then 95/5) to afford 57 as a colorless oil (52 mg, 80%).
13C NMR: δ …6, 144.4, 135.8, 128.8 (2 C), 128.5 (2 C), 126.5 (q, J = 275 Hz), 35.1 (q, J = 28 Hz), 28.2 (q, J = 3 Hz), 26.5. 19F NMR (376 MHz, CDCl3): δ -66.57. IR (neat): 2871, 1677, 1607, 1266 cm-1.
217 X.-Q. Li, W.-K. Wang and C. Zhang, Adv. Synth. Catal., 2009, 351, 2342-2350.
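Couplings reported in Hz, such as the 126.5 (q, J = 275 Hz) CF3 carbon of 57, correspond to line spacings in ppm that scale inversely with the spectrometer frequency. A small sketch of that conversion (100 MHz carbon channel, as in the spectra above; the helper names are ours):

```python
def j_in_ppm(j_hz, spectrometer_mhz):
    """Chemical-shift separation (ppm) corresponding to a coupling of j_hz."""
    return j_hz / spectrometer_mhz

def multiplet_lines(delta_center_ppm, j_hz, n_lines, spectrometer_mhz):
    """Positions (ppm) of an evenly spaced n-line multiplet centered at delta."""
    j_ppm = j_in_ppm(j_hz, spectrometer_mhz)
    start = delta_center_ppm - j_ppm * (n_lines - 1) / 2
    return [round(start + i * j_ppm, 2) for i in range(n_lines)]

# 1J(C-F) = 275 Hz quartet at delta 126.5 on a 100 MHz carbon channel:
# four lines spaced by 2.75 ppm around 126.5
print(multiplet_lines(126.5, 275, 4, 100))
```

The same arithmetic explains why the smaller 2J and 3J quartets (28 and 3 Hz) of 57 appear as much tighter patterns in the carbon spectrum.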
purified by flash column chromatography (pentane/EtOAc, 90/10) to afford 75 as a colorless oil (34 mg, 58%). The spectroscopic data are in agreement with those reported in the literature. 219
1H NMR (400 MHz, CDCl3): δ 7.12-7.06 (m, 2H), 6.88-6.84 (m, 1H), 6.77-6.75 (m, 1H), 5.43 (s, 1H), 4.12 (t, J = 6.5 Hz, 2H), 2.72-2.68 (m, 2H), 2.07 (s, 3H), 1.99-1.94 (m, 2H). 13C NMR (100 MHz, CDCl3): δ 171.6, 153.7, 130.3, 127.4, 127.3, 120.7, 115.4, 64.2, 28.6, 26.3, 21.0. IR (neat): 3355, 2999, 1707, 1491, 1236, 1032 cm-1.
4-Anilinomethyl-2-fluoropyridine (77)
Following general procedure G with anilinomethylsilicate 8 (0.45 mmol, 294 mg) and 4-bromo-2-fluoropyridine 76 (0.3 mmol, 31 µL). The crude product was purified by flash column chromatography (pentane/EtOAc, 80/20) to afford 77 as a colorless oil (53 mg, 86%).
13C NMR: δ … 129.34, 119.7 (d, J = 3.9 Hz), 118.3, 112.8, 107.7 (d, J = 37.8 Hz). IR (neat): 3015, 2895, 2257, 1904, 1674, 1601, 1437, 1264, 1174, 1111, 747, 725 cm-1.
The spectroscopic data are in agreement with those reported in the literature.
Compound 91 (characteristic signals)
13C NMR: δ …2, 141.3, 128.5 (2 C), 128.5 (2 C), 126.1, 63.9, 32.3, 30.3, 21.1.
220 R. Ortiz and M. Yus, Tetrahedron, 2005, 61, 1699-1707.
221 B. Karimi, H. M. Mirzaei, A. Mobaraki and H. Vali, Catal. Sci. Technol., 2015, 5, 3624-3631.
13C NMR: δ …3, 139.6, 124.5, 64.6, 33.7, 30.1, 29.0, 28.9, 27.1, 26.7, 26.4, 26.4, 21.2.
13C NMR: δ …2, 137.5, 131.4, 130.0, 128.8 (2 C), 128.3 (2 C), 126.8, 64.0, 28.9, 25.1, 21.0.
13C NMR: δ …2, 64.7, 40.0, 35.8, 33.7, 32.7, 28.9, 28.6, 25.9, 25.2, 21.0.
4-(((tert-Butyldimethylsilyl)oxy)methyl)pent-…
5-Phenylpent-…
The resulting dark green mixture was refluxed for 2 h under air. After cooling down to room temperature, the dark green complex formed was filtered and washed with cold acetonitrile. The UV-vis data were consistent with those reported in the literature. 106
UV-vis [CH2Cl2; λ nm (ε, M-1.cm-1)]: 307 (21000), 460 (4610), 795 (7270).

Synthesis of complex Cu(L BQ ) 2 (OTf) 2 (127)
To a flame-dried Schlenk flask was introduced complex 125 (200 mg, 0.31 mmol, 1 equiv.). The flask was back-filled three times with argon and degassed DCM (9 mL) was added. A 2.8 M solution of bromine in DCM (111 µL, 0.31 mmol, 1 equiv.) was added and the

Visible-light photoredox catalysis has proved to be an efficient alternative to toxic tin-mediated radical transformations and to redox processes involving a stoichiometric amount of organometallic complexes for the generation of radical species. This catalytic approach and the use of visible light offer several advantages in terms of a mild, selective and more eco-compatible process. In this manuscript, we first reported the formation of non-stabilized alkyl radicals under photooxidative conditions. Photooxidation of alkyl bis-catecholato silicates, simply obtained from organosilanes (which are substrates of sol-gel processes), by a photoactive iridium complex under visible-light irradiation can provide alkyl radicals. Electrochemical studies of bis-catecholato silicates showed lower oxidation potentials than alkyl trifluoroborates or carboxylates. Spin-trapping experiments revealed that the silicates are good radical precursors. Further experiments demonstrated the ability of the generated radicals to be engaged in radical addition reactions such as allylation, vinylation or Giese-type reactions. In addition, the organic dye 4CzIPN (1,2,3,5-tetrakis(carbazol-yl)-4,6-dicyanobenzene) proved to be an efficient photocatalyst as well. The second part of this Ph.D.
thesis focuses especially on a promising and growing application of photoredox catalysis: the merger of photoredox and organometallic catalysis. In conjunction with photooxidative processes, nickel catalysis proved to be a suitable candidate to perform cross-coupling reactions. With an iridium-based photocatalyst and a nickel complex, alkyl bis-catecholato silicates were coupled to (hetero)aryl bromides via the formation of alkyl radicals. A 'greener' version was further developed with 4CzIPN without loss of performance. The conditions for C(sp3)-C(sp2) cross-coupling reactions were compatible with aryl and alkenyl halides. Subsequently, C(sp3)-C(sp3) bond formation in photoredox/nickel dual catalysis was studied. However, the methodology developed suffers from a homocoupling side reaction which lowers the yield of the reactions. Finally, the ability of alkyl bis-catecholato silicates to be easily oxidized provides an opportunity to engage these precursors in other redox catalytic systems. Organometallic complexes with non-innocent ligands can participate in the generation of radicals by single electron transfer through oxidation or reduction. Preliminary results concerning the oxidation
Source: https://hal.science/hal-01754054/file/DeepLearningContSirignano2018.pdf (2018)
Justin Sirignano and Rama Cont

Universal features of price formation in financial markets: perspectives from Deep Learning

Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model exhibits a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. The universal model, trained on data from all stocks, outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing asset- or sector-specific models as commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations improves forecasting performance, showing evidence of path-dependence in price dynamics.
1 Price formation: how market prices react to supply and demand

The computerization of financial markets and the availability of detailed electronic records of order flow and price dynamics in financial markets over the last decade has unleashed TeraBytes of high frequency data on transactions, order flow and order book dynamics in listed markets, which provide us with a detailed view of the high-frequency dynamics of supply, demand and price in these markets [START_REF] Cont | Statistical modeling of high frequency financial data: Facts, models and challenges[END_REF]. This data may be put to use to explore the nature of the price formation mechanism which describes how market prices react to fluctuations in supply and demand. At a high level, a 'price formation mechanism' is a map which represents the relationship between the market price and variables such as price history and order flow:

Price(t + ∆t) = F(Price history(0...t), Order Flow(0...t), Other Information) = F(X_t, ε_t),

where X_t is a set of state variables (e.g., lagged values of price, volatility, and order flow), endowed with some dynamics, and ε_t is a random 'noise' or innovation term representing the arrival of new information and other effects not captured entirely by the state variables. Empirical and theoretical market microstructure models, stochastic models and machine learning price prediction models can all be viewed as different ways of representing this map F, at various time resolutions ∆t. One question, which has been implicit in the literature, is the degree to which this map F is universal (i.e., independent of the specific asset being considered). The generic, as opposed to asset-specific, formulation of market microstructure models seems to implicitly assume such a universality.
Empirical evidence on the universality of certain stylized facts [START_REF] Cont | Empirical properties of asset returns: stylized facts and statistical issues[END_REF] and scaling relations [START_REF] Benzaquen | Unravelling the trading invariance hypothesis[END_REF][START_REF] Andersen | Intraday Trading Invariance in the E-Mini S&P 500 Futures Market[END_REF][START_REF] Kyle | Market microstructure invariance: Empirical hypotheses[END_REF][START_REF] Mandelbrot | The multifractal model of asset returns[END_REF] seems to support the universality hypothesis. Yet, the practice of statistical modeling of financial time series has remained asset specific: when building a model for the returns of a given asset, market practitioners and econometricians only use data from the same asset. For example, a model for Microsoft would only be estimated using Microsoft data, and would not use data from other stocks. Furthermore, the data used for estimation is often limited to a recent time window, reflecting the belief that financial data can be 'non-stationary' and prone to regime changes which may render older data less relevant for prediction. Due to such considerations, models considered in financial econometrics, trading and risk management applications are asset-specific and their parameters are (re)estimated over time using a time window of recent data. That is, for asset i at time t the model assumes the form

Price_i(t + ∆t) = F(X^i_{0:t}, ε_t | θ_i(t)),

where the model parameter θ_i(t) is periodically updated using recent data on price and other state variables related to asset i. As a result, data sets are fragmented across assets and time and, even in the high frequency realm, the size of data sets used for model estimation and training are orders of magnitude smaller than those encountered in other fields where Big Data analytics have been successfully applied.
This is one of the reasons why, except in a few instances [START_REF] Buhler | Deep hedging[END_REF][START_REF] Dixon | Sequence Classification of the Limit Order Book using Recurrent Neural Networks[END_REF][START_REF] Kolanovic | Big Data and AI Strategies: Machine Learning and Alternative Data Approach to Investing[END_REF][START_REF] Sirignano | Stochastic Gradient Descent in Continuous Time[END_REF][START_REF] Sirignano | Deep Learning for Limit Order Books[END_REF][START_REF] Sirignano | Deep Learning for Mortgage Risk[END_REF], large-scale learning methods such as Deep Learning [START_REF] Goodfellow | Deep Learning[END_REF] have not been deployed for quantitative modeling in finance. In particular, the non-stationarity argument is sometimes invoked to warn against their use. On the other hand, if the relation between these variables were universal and stationary, i.e. if the parameter θ_i(t) varies neither with the asset i nor with time t, then one could potentially pool data across different assets and time periods and use a much richer data set to estimate/train the model. For instance, data on a flash crash episode in one asset market could provide insights into how the price of another asset would react to severe imbalances in order flow, whether or not such an episode has occurred in its history. In this work, we provide evidence for the existence of such a universal, stationary relation between order flow and market price fluctuations, using a nonparametric approach based on Deep Learning. Deep learning can estimate nonlinear relations between variables using 'deep' multilayer neural networks which are trained on large data sets using 'supervised learning' methods [START_REF] Goodfellow | Deep Learning[END_REF].
Using a deep neural network architecture trained on a high-frequency database containing billions of electronic market transactions and quotes for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model exhibits a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. We observe that the neural network thus trained outperforms linear models, pointing to the presence of nonlinear relationships between order flow and price changes. Our paper provides quantitative evidence for the existence of a universal price formation mechanism in financial markets. The universal nature of the price formation mechanism is reflected by the fact that a model trained on data from all stocks outperforms, in terms of out-of-sample prediction accuracy, stock-specific linear and nonlinear models trained on time series of any given stock. This shows that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing stock-or sector-specific models as commonly done. Also, we observe that standard data transformations such as normalizations based on volatility or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. 
On the other hand, inclusion of price and order flow history over many past observations improves forecasting performance, showing evidence of path-dependence in price dynamics. Remarkably, the universal model is able to extrapolate, or generalize, to stocks not within the training set. The universal model is able to perform well on completely new stocks whose historical data the model was never trained on. This implies that the universal model captures features of the price formation mechanism which are robust across stocks and sectors. This feature alone is quite interesting for applications in finance where missing data problems and newly issued securities often complicate model estimation.

Outline
Section 2 describes the dataset and the supervised learning approach used to extract information about the price formation mechanism. Section 3 provides evidence for the existence of a universal and stationary relationship linking order flow and price history to price variations. Section 4 summarizes our main findings and discusses some implications.

2 A data-driven model of price formation via Deep Learning

Applications such as image, text, and speech recognition have been revolutionized by the advent of 'Deep Learning': the use of multilayer ('deep') neural networks trained on large data sets to uncover complex nonlinear relations between high-dimensional inputs ('features') and outputs [START_REF] Goodfellow | Deep Learning[END_REF]. At an abstract level, a deep neural network represents a functional relation y = f(x) between a high-dimensional input vector x and an output y through iterations ('layers') consisting of weighted sums followed by the application of nonlinear 'activation' functions. Each iteration corresponds to a 'hidden layer' and a deep neural network can have many hidden layers.
Neural networks can be used as 'universal approximators' for complex nonlinear relationships [START_REF] Hornik | Multilayer Feedforward Networks are Universal Approximators[END_REF], by appropriately choosing the weights in each layer. In supervised learning approaches, network weights are estimated by optimizing a regularized cost function reflecting in-sample discrepancy between the network output and desired outputs. In a deep neural network, this represents a high-dimensional optimization over hundreds of thousands (or even millions) of parameters. This optimization is computationally intensive due to the large number of parameters and large amount of data. Stochastic gradient descent algorithms (e.g., RMSprop or ADAM) are used for training neural networks, and training is parallelized on Graphics Processing Units (GPUs). We apply this approach to learn the relation between supply and demand on an electronic exchange, captured in the history of the order book for each stock, and the subsequent variation of the market price. Our data set is a high-frequency record of all orders, transactions and order cancellations for approximately 1000 stocks traded on the NASDAQ between

Figure 1: The limit order book represents a snapshot of the supply and demand for a stock on an electronic exchange. The 'ask' side represents sell orders and the 'bid' side, buy orders. The size represents the number of shares available for sale/purchase at a given price. The difference between the lowest sell price (ask) and the highest buy price (bid) is the 'spread' (in this example, 1 ¢).

Electronic buy and sell orders are continuously submitted, cancelled and executed through the exchange's order book. A 'limit order' is a buy or sell order for a stock at a certain price and will appear in the order book at that price and remain there until cancelled or executed.
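A minimal sketch of the quantities in Figure 1: from a snapshot of resting buy and sell orders, compute the best bid and ask, the spread, and their average (the mid-price). Prices here are in cents and purely illustrative; the field names are not the paper's data layout.

```python
# Minimal order-book snapshot: prices in cents, sizes in shares.
# Field names are illustrative, not from the paper's dataset.

def best_bid(book):
    """Highest buy price with non-zero resting size."""
    return max(p for p, s in book["bids"].items() if s > 0)

def best_ask(book):
    """Lowest sell price with non-zero resting size."""
    return min(p for p, s in book["asks"].items() if s > 0)

def spread(book):
    return best_ask(book) - best_bid(book)

def mid_price(book):
    return (best_ask(book) + best_bid(book)) / 2.0

book = {
    "bids": {9999: 300, 9998: 500, 9997: 200},    # buy orders
    "asks": {10000: 400, 10001: 250, 10002: 600}  # sell orders
}

assert best_bid(book) == 9999
assert best_ask(book) == 10000
assert spread(book) == 1          # one tick, i.e. 1 cent
assert mid_price(book) == 9999.5
```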
The 'limit order book' is a snapshot of all outstanding limit orders and thus represents the visible supply and demand for the stock (see Figure 1). In US stock markets, orders can be submitted at prices occurring at multiples of 1 cent. The 'best ask price' is the lowest sell order and the 'best bid price' is the highest buy order. The best ask price and best bid price are the prices at which the stock can be immediately bought or sold. The 'mid-price' is the average of the best ask price and best bid price. The order book evolves over time as new orders are submitted, existing orders are cancelled, and trades are executed. In electronic markets such as the NASDAQ, new orders may arrive at high frequency, sometimes every microsecond, and order books of certain stocks can update millions of times per day. This leads to TeraBytes of data, which we put to use to build a data-driven model of the price formation process.

When the input data is a time series, causality constraints require that the relation between input and output respects the ordering in time. Only the past may affect the present. A network architecture which reflects this constraint is a recurrent network (see an example in Figure 2) based on Long Short-Term Memory (LSTM) units [START_REF] Gers | Learning to Forget: Continual Prediction with LSTM[END_REF]. Each LSTM unit has an internal state which maintains a nonlinear representation of all past data. This internal state is updated as new data arrives. Our network has 3 layers of LSTM units followed by a final feed-forward layer of rectified linear units (ReLUs). A probability distribution for the next price move is produced by applying a softmax activation function. LSTM units are specially designed to efficiently encode the temporal sequence of data [START_REF] Gers | Learning to Forget: Continual Prediction with LSTM[END_REF][START_REF] Goodfellow | Deep Learning[END_REF].
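The causality constraint above can be enforced mechanically in feature construction: the model input at event t may draw only on the current order-book snapshot and a fixed number of lagged snapshots, never on future ones. A minimal sketch, with illustrative (hypothetical) feature fields:

```python
def lagged_features(snapshots, t, n_lags):
    """Build the model input at event index t from the current order-book
    snapshot and its n_lags predecessors. Only indices <= t are read,
    so no future information leaks into the features."""
    if t < n_lags:
        raise ValueError("not enough history yet")
    window = snapshots[t - n_lags:t + 1]       # past and present only
    feats = []
    for s in window:
        feats.extend([s["bid_size"], s["ask_size"], s["spread"]])
    return feats

snaps = [{"bid_size": 100 + i, "ask_size": 200 - i, "spread": 1}
         for i in range(5)]
x = lagged_features(snaps, t=4, n_lags=2)
assert len(x) == 9                  # 3 features x (2 lags + 1 current)
assert x[-3:] == [104, 196, 1]      # last entries come from snapshot t = 4
```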
Figure 2: Recurrent (LSTM) network unrolled in time: at each step, the input X_t and the hidden states h^1_t, ..., h^n_t, carried over across the stacked layers from step t-1, produce the output Y_t.

We train the network to forecast the next price move from a vector of state variables, which encode the history of the order book over many observation lags. The index t represents the number of price changes. At a high level, the LSTM network is of the form

(Y_t, h_t) = f(X_t, h_{t-1}; θ).    (2.1)

Y_t is the prediction for the next price move, X_t is the state of the order book at time t, h_t is the internal state of the deep learning model, representing information extracted from the history of X up to t, and θ designates the model parameters, which correspond to the weights in the neural network. At each time point t the model uses the current value of the state variables X_t (i.e. the current order book) and the nonlinear representation of all previous data h_{t-1}, which summarizes relevant features of the history of order flow, to predict the next price move. In principle, this allows for arbitrary history-dependence: the history of the state variables (X_s, s ≤ t) may affect the evolution of the system, in particular price dynamics, at all future times T ≥ t in a nonlinear way. Alternative modeling approaches typically do not allow the flexibility of blending nonlinearity and history-dependence in this manner. A supervised learning approach is used to learn the value of the (high-dimensional) parameter θ by minimizing a regularized negative log-likelihood objective function using a stochastic gradient descent algorithm [START_REF] Goodfellow | Deep Learning[END_REF]. The parameter θ is assumed to be constant across time, so it affects the output at all times in a recursive manner.
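A toy sketch of the recurrence in equation (2.1): a single tanh recurrent layer followed by a softmax over {down, up}. This is only a stand-in for the paper's architecture (which uses 3 LSTM layers plus a ReLU layer); all weights here are random and untrained.

```python
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class ToyRecurrentModel:
    """Toy stand-in for (Y_t, h_t) = f(X_t, h_{t-1}; theta)."""
    def __init__(self, n_in, n_hid):
        rnd = lambda: random.uniform(-0.1, 0.1)
        self.Wx = [[rnd() for _ in range(n_in)] for _ in range(n_hid)]
        self.Wh = [[rnd() for _ in range(n_hid)] for _ in range(n_hid)]
        self.Wo = [[rnd() for _ in range(n_hid)] for _ in range(2)]
        self.h = [0.0] * n_hid            # internal state, carries history

    def step(self, x):
        # h_t = tanh(Wx x_t + Wh h_{t-1});  Y_t = softmax(Wo h_t)
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row_x, x)) +
                            sum(w * hi for w, hi in zip(row_h, self.h)))
                  for row_x, row_h in zip(self.Wx, self.Wh)]
        return softmax([sum(w * hi for w, hi in zip(row, self.h))
                        for row in self.Wo])

model = ToyRecurrentModel(n_in=4, n_hid=8)
for x in [[0.2, -0.1, 0.5, 0.0], [0.1, 0.3, -0.2, 0.4]]:
    y = model.step(x)                     # probability of {down, up}
assert len(y) == 2
assert abs(sum(y) - 1.0) < 1e-9
```

The internal state `self.h` is what lets the forecast at step t depend on the whole sequence of past inputs, not only the current one.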
A stochastic gradient descent step at time t requires calculating the sensitivity of the output to θ, via a chain rule, back through the previous times t -1, t -2, . . . , t -T (commonly referred to as 'backpropagation through time'). In theory, backpropagation should occur back to time 0 (i.e., T = t). However, this is computationally impractical and we truncate the backpropagation at some lag T . In Section 3.4, we discuss the impact of the past history of the order book and the 'long memory' of the market. The resulting LSTM network involves up to hundreds of thousands of parameters. This is relatively small compared to networks used for instance in image or speech recognition, but it is huge compared to econometric models traditionally used in finance. Previous literature has been almost entirely devoted to linear models or stochastic models with a very small number of parameters. It is commonly believed that financial data is far too noisy to build such large models without overfitting; our results show that this is not necessarily true. Given the size of the data and the large number of network parameters to be learned, significant computational resources are required both for pre-processing the data and training the network. Training of deep neural networks can be highly parallelized on GPUs. Each GPU has thousands of cores, and training is typically 10× faster on a GPU than a standard CPU. The NASDAQ data was filtered to create training and test sets. This data processing is parallelized over approximately 500 compute nodes. Training of asset-specific models was also parallelized, with each stock assigned to a single GPU node. Approximately 500 GPU nodes are used to train the stock-specific models. These asset-specific models, trained on the data related to a single stock, were then compared to a 'universal model' trained on the combined data from all the stocks in the dataset. 
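A minimal sketch of the batch-sampling step behind the universal model's training loop: batches are drawn at random from the pooled observations of all stocks, with no per-stock normalization. The asynchronous multi-node machinery described next is omitted, and the data layout is purely illustrative.

```python
import random

random.seed(1)

def sample_pooled_batch(data_by_stock, batch_size):
    """Draw one training batch uniformly from the pooled observations
    of all stocks, as for the universal model (no per-stock split)."""
    pooled = [(stock, obs) for stock, series in data_by_stock.items()
              for obs in series]
    return random.sample(pooled, batch_size)

data = {"AAA": [1, 2, 3], "BBB": [4, 5], "CCC": [6, 7, 8, 9]}
batch = sample_pooled_batch(data, 4)
assert len(batch) == 4
assert all(obs in data[stock] for stock, obs in batch)  # batch may mix stocks
```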
Data from various stocks were pooled together for this purpose without any specific normalization. Due to the large amount of data, we distributed the training of the universal model across 25 GPU nodes using asynchronous stochastic gradient descent (Figure 3). Each node loads a batch of data (selected at random from all stocks in the dataset), computes gradients of the model on the GPU, and then updates the model. Updates occur asynchronously, meaning node j updates the model without waiting for nodes i ≠ j to finish their computations.

Figure 3: Asynchronous stochastic gradient descent for training the neural network. The dataset, which is too large to be held in the nodes' memories, is stored on the Online Storage system. Batches of data are randomly selected from all stocks and sent to the GPU nodes. Gradients are calculated on the GPUs and then the model is asynchronously updated.

3 Results

We split the universe of stocks into two groups of roughly 500 stocks; training is done on transactions and quotes for stocks from the first group. We distinguish:
• stock-specific models, trained using data on all transactions and quotes for a specific stock.
• the 'universal model', trained using data on all transactions and quotes for all stocks in the training set.
All models are trained for predicting the direction of the next price move. Specifically, if τ_1, τ_2, ... are the times at which the mid-price P_t changes, we estimate

P[P_{τ_{k+1}} - P_{τ_k} > 0 | X_{τ_{0:k}}]  and  P[P_{τ_{k+1}} - P_{τ_k} < 0 | X_{τ_{0:k}}],

where X_t is the state of the limit order book at time t. The models therefore predict whether the next price move is up or down. The events are irregularly spaced in time. The time interval τ_{k+1} - τ_k between price moves can vary considerably from a fraction of a second to seconds. We measure the forecast accuracy of a model for a given stock via the proportion of observations for which it correctly predicts the direction of the next price move.
This can be estimated using the empirical estimator

A_i = (Number of price changes where model correctly predicts price direction for stock i) / (Total number of price changes) × 100%.

All results are out-of-sample in time. That is, the accuracy is evaluated on time periods outside of the training set. Model accuracy is reported via the cross-sectional distribution of the accuracy score A_i across stocks in the testing sample, and models are compared by comparing their accuracy scores. In addition, we evaluate the accuracy of the universal model for stocks outside the training set. Importantly, this means we assess forecast accuracy for stock i using a model which is trained without any data on stock i. This tests whether the universal model can generalize to completely new stocks. Typically, the out-of-sample dataset is a 3-month time period. In the context of high-frequency data, 3 months corresponds to millions of observations and therefore provides a lot of scope for testing model performance and estimating model accuracy. In a data set with no stationary trend (as is the case at such high frequencies), a random forecast ('coin-flip') would yield an expected score of 50%. Given the large size of the data set, even a small deviation (i.e. 1%) from this 50% benchmark is statistically significant.

The main findings of our data-driven approach may be summarized as follows:
• Nonlinearity: Data-driven models trained using deep learning substantially outperform linear models in terms of forecasting accuracy (Section 3.1).
• Universality: The model uncovers universal features that are common across all stocks (Section 3.2). These features generalize well: they are also observed to hold for stocks which are not part of the training sample.
• Stationarity: model performance in terms of price forecasting accuracy is remarkably stable across time, even a year out of sample.
This shows evidence for the existence of a stationary relationship between order flow and price changes (Section 3.3), which is stable over long time periods.
• Path-dependence and long-range dependence: inclusion of price and order flow history is shown to substantially increase the forecast accuracy. This provides evidence that price dynamics depend not only on the current or recent state of the limit order book but on its history, possibly over long time scales (Section 3.4).
Our results show that there is far more common structure across data from different financial instruments than previously thought. Provided a suitably flexible model is used which allows for nonlinearity and history-dependence, data from various assets may be pooled together to yield a data set large enough for deep learning.

3.1 Deep Learning versus Linear Models

Linear state space models, such as Vector Autoregressive (VAR) models, have been widely used in the modeling of high frequency data and in empirical market microstructure research [START_REF] Hasbrouck | Empirical Market Microstructure: The Institutions[END_REF] and provide a natural benchmark for evaluating the performance of a forecast. Linear models are easy to estimate and capture in a simple way the trends, linear correlations and autocorrelations in the state variables. The results in Figure 4 show that the deep learning models substantially outperform linear models. Given the large sample size, an increase of 1% in accuracy is considered significant in the context of high-frequency modeling. The linear (VAR) model may be formulated as follows: at each observation we update a vector of linear features h_t and then use a probit model for the conditional probability of an upward price move given the state variables:

h_t = A h_{t-1} + B X_t,   Y_t = P(∆P_t > 0 | X_t, h_t) = G(C X_t + D h_t),    (3.1)

where G depends on the distributional assumptions on the innovations in the linear model.
For example, if we use a logistic distribution for the innovations in the linear model, then the probability distribution of the next price move is given by the softmax (logistic) function applied to a linear function of the current order book and linear features:

P(∆P_t > 0 | X_t, h_t) = Softmax(C X_t + D h_t).

We compare the neural network against a linear model for approximately 500 stocks. To compare models we report the difference in accuracy scores across the same test data set. Let
• L_i be the accuracy of the stock-specific linear model g_{θ_i} for asset i estimated on data only from stock i,
• Â_i be the accuracy of the stock-specific deep learning model f_{θ_i} trained on data only from stock i, and
• A_i be the accuracy for asset i of the universal deep learning model f_θ trained on a pooled data set of all quotes and transactions for all stocks.
The left plot in Figure 4 reports the cross-sectional distribution for the increase in accuracy Â_i - L_i when moving from the stock-specific linear model to the stock-specific deep learning model. We observe a substantial increase in accuracy, between 5% and 10% for most stocks, when incorporating nonlinear effects using the neural networks. The right plot in Figure 4 displays histograms of A_i (red) and L_i (blue). We clearly observe that moving from a stock-specific linear model to the universal nonlinear model trained on all stocks substantially improves the forecasting accuracy by around 10%. The deep neural network outperforms the linear model since it is able to estimate nonlinear relationships between the price dynamics and the order book, which represents the visible supply and demand for the stock. This is consistent with an abundant empirical and econometric literature documenting nonlinear effects in financial time series, but the large amplitude of this improvement can be attributed to the flexibility of the neural network in representing nonlinearities.
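A minimal sketch of the linear benchmark of equation (3.1): a linear feature recursion h_t = A h_{t-1} + B X_t followed by a logistic link for the probability of an upward move. The coefficient values below are placeholders; in practice they would be estimated per stock.

```python
import math

def matvec(M, v):
    """Plain matrix-vector product for small lists of lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def linear_baseline(X_seq, A, B, c, d):
    """VAR-style linear features with a logistic link:
    h_t = A h_{t-1} + B x_t;  P(up) = logistic(c·x_t + d·h_t)."""
    h = [0.0] * len(A)
    probs = []
    for x in X_seq:
        h = [ha + hb for ha, hb in zip(matvec(A, h), matvec(B, x))]
        z = (sum(ci * xi for ci, xi in zip(c, x)) +
             sum(di * hi for di, hi in zip(d, h)))
        probs.append(logistic(z))
    return probs

A = [[0.5, 0.0], [0.0, 0.5]]     # feature persistence
B = [[1.0, 0.0], [0.0, 1.0]]     # loading of current order book
c, d = [0.2, -0.1], [0.3, 0.3]
p = linear_baseline([[1.0, 0.0], [0.0, 1.0]], A, B, c, d)
assert len(p) == 2
assert all(0.0 < v < 1.0 for v in p)   # valid probabilities of an up-move
```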
More specifically, sensitivity analysis of our data-driven model uncovers stable nonlinear relations between state variables and price moves, i.e. nonlinear features which are useful for forecasting. Figure 5 presents an example of such a feature: the relation between the depth on the bid and ask sides of the order book and the probability of a price decrease. Such relations have been studied in queueing models of limit order book dynamics [START_REF] Cont | A stochastic model for order book dynamics[END_REF][START_REF] Cont | Price dynamics in a Markovian limit order market[END_REF]. In particular, it was shown in [START_REF] Cont | Price dynamics in a Markovian limit order market[END_REF] that when the order flow is symmetric then there exists a 'universal' relation, not dependent on model parameters, between bid depth, ask depth and the probability of a price decrease at the next price move. However, the derivations in these models hinge on many statistical assumptions which may or may not hold, and the universality of such relations remained to be empirically verified. Our analysis shows that there is indeed evidence for such a universal relation, across a wide range of assets and time periods. Figure 5 (left) displays the probability of a price decrease as a function of the depth (the number of shares) at the best bid/ask price. The larger the best ask size, the more likely the next price move will be downwards. The probability is approximately constant along the center diagonal where the bid/ask imbalance is zero. However, as observed in queueing models [START_REF] Cont | A stochastic model for order book dynamics[END_REF][START_REF] Cont | Price dynamics in a Markovian limit order market[END_REF][START_REF] Figueroa-Lopez | One-level limit order book model with memory and variable spread[END_REF], even under simplifying assumptions, the relation between this probability and various measures of the bid/ask imbalance is not linear.
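The kind of relation shown in Figure 5 can be estimated nonparametrically by conditioning on a binned queue imbalance. A minimal sketch on toy data (the binning scheme and data layout are illustrative, not the paper's):

```python
def down_prob_by_imbalance(events, n_bins=4):
    """Empirical P(next move down) conditioned on the binned queue
    imbalance I = (bid_size - ask_size) / (bid_size + ask_size).
    events: list of (bid_size, ask_size, next_move), next_move = +1/-1."""
    bins = [[0, 0] for _ in range(n_bins)]   # [count_down, count_total]
    for bid, ask, move in events:
        i = (bid - ask) / (bid + ask)        # imbalance in [-1, 1]
        k = min(int((i + 1) / 2 * n_bins), n_bins - 1)
        bins[k][1] += 1
        if move == -1:
            bins[k][0] += 1
    return [down / tot if tot else None for down, tot in bins]

events = [(900, 100, +1), (800, 200, +1), (100, 900, -1),
          (200, 800, -1), (500, 500, +1), (500, 500, -1)]
probs = down_prob_by_imbalance(events)
# heavy ask side (lowest imbalance bin): downward moves dominate
assert probs[0] == 1.0 and probs[-1] == 0.0
assert probs[2] == 0.5            # balanced book: no directional edge
```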
Furthermore, such queueing models typically focus on the influence of depth at the top of the order book and it is more difficult to extract information from deeper levels of the order book. The right contour plot in Figure 5 displays the influence of limit orders deeper in the order book (here: total size aggregated across levels 5 to 10) on the probability of a price decrease. We see that the influence is less than the depth at the top of the book, as illustrated by the tighter range of predicted probabilities, but still significant.

3.2 Universality across assets

A striking aspect of our results is the stability across stocks of the features uncovered by the deep learning model, and its ability to extrapolate ('generalize') to stocks which it was not trained on. This may be illustrated by comparing forecasting accuracy of stock-specific models, trained only on data of a given stock, to a universal model trained on a pooled data set of 500 stocks, a much larger but extremely heterogeneous data set. As shown in Figure 6, which plots A_i - Â_i, the universal model consistently outperforms the stock-specific models. This indicates there are common features, relevant to forecasting, across all stocks. Features extracted from data on stock A may be relevant to forecasting of price moves for stock B. Given the heterogeneity of the data, one might imagine that time series from different stocks should be first normalized (by average daily volume, average price or volatility etc.) before pooling them. Surprisingly, this appears not to be the case: we have observed that standard data transformations such as normalizations based on average volume, volatility or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks do not improve training results. For example, a deep learning model trained on small tick stocks does not outperform the universal model in terms of forecasting price moves for small tick stocks.
It appears that the model arrives at its own data-driven normalization of inputs based on statistical features of the data rather than ad hoc criteria. The source of the universal model's outperformance is well demonstrated by Figure 7. The universal model most strongly outperforms the stock-specific models on stocks with less data. The stock-specific model is more exposed to overfitting due to the smaller dataset, while the universal model is able to generalize by interpolating across the rich scenario space of the pooled data set and is therefore less exposed to overfitting. The existence of these common features thus argues for pooling the data from different stocks, notwithstanding their heterogeneity, leading to a much richer and larger set of training scenarios. Using 1 year of the pooled data set is roughly equivalent to using 500 years (!) of data for training a single-stock model, and the richness of the scenario space is actually enhanced by the diversity and heterogeneity of behavior across stocks. Due to the large amount of data, very large universal models can be estimated without overfitting. Figure 8 shows the increase in accuracy for a universal model with 150 units per layer (which amounts to several hundred thousand parameters) versus a universal model with 50 units per layer. Remarkably, the universal model is even able to generalize to stocks which were not part of the training sample: if the model is only trained on data from stocks {1, ..., N}, its forecast accuracy is similar for stock N+1. This implies that the universal model is capturing features in the relation between order flow and price variations which are common to all stocks. Table 1 illustrates the forecast accuracy of a universal model trained only on stocks 1-464 (for January 2014-May 2015), and tested on stocks 465-489 (for June-August 2015).
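A leave-stocks-out evaluation of this kind — pool the training data of most stocks, hold a subset of stocks entirely out of training — can be sketched as follows. This is an illustrative helper on toy arrays, not the paper's code; the function name, stock identifiers and array shapes are assumptions.

```python
import numpy as np

def leave_stocks_out_split(data_by_stock, holdout_ids):
    """Pool per-stock training data, keeping some stocks entirely out of training.

    data_by_stock : dict stock_id -> (X, y), with X of shape (n_i, d)
    holdout_ids   : stocks excluded from training and used only for testing
    Returns (X_train, y_train, holdout) where holdout maps id -> (X, y).
    """
    holdout_ids = set(holdout_ids)
    train_X, train_y, holdout = [], [], {}
    for sid, (X, y) in data_by_stock.items():
        if sid in holdout_ids:
            holdout[sid] = (X, y)      # never seen during training
        else:
            train_X.append(X)
            train_y.append(y)
    return np.concatenate(train_X), np.concatenate(train_y), holdout

# Toy example with 489 'stocks', holding out stocks 465-489 as in Table 1.
rng = np.random.default_rng(1)
data = {sid: (rng.normal(size=(100, 4)), rng.integers(0, 2, 100))
        for sid in range(1, 490)}
X_tr, y_tr, held = leave_stocks_out_split(data, holdout_ids=range(465, 490))
```

The pooled `(X_tr, y_tr)` plays the role of the universal training set, while accuracy on `held` measures generalization to stocks the model has never seen.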
This universal model outperforms stock-specific models for stocks 465-489, even though the universal model has never seen data from these stocks in the training set. The universal model trained only on stocks 1-464 performs roughly the same for stocks 465-489 as the universal model trained on the entire dataset of stocks 1-489. Results are reported in Table 1. Figure 9 displays the accuracy of the universal model for 500 completely new stocks, which are not part of the training sample. The universal model achieves a high accuracy on these new stocks, demonstrating that it is able to generalize to assets that are not included in the training data. This is especially relevant for applications, where missing data issues, stock splits, new listings and corporate events constantly modify the universe of stocks.
Stationarity
The relationships uncovered by the deep learning model are not only stable across stocks but also stationary in time. This is illustrated by examining how forecast accuracy behaves when the training period and test period are separated in time. Figure 10 shows the accuracy of the universal model on 500 stocks which were not part of the training sample. The left histogram displays the accuracy in June-August, 2015, shortly after the training period (January 2014-May 2015), while the right plot displays the cross-sectional distribution of accuracy for the same model in January-March, 2017, 18 months after the training period. Interestingly, even 18 months after the training period, the forecasting accuracy is stable, without any adjustments. Such stability contrasts with the common practice of 'recalibrating' models based on a moving window of recent data due to perceived non-stationarity. If the data were non-stationary, accuracy would decrease with the time span separating the training set and the prediction period, and it would be better to train models only on recent periods immediately before the test set.
However, we observe that this is not the case: Table 2 reports forecast results for models trained over periods extending up to 1, 3, 6, and 19 months before the test set. Model accuracy consistently increases as the length of the training set is increased. The message is simple: use all available data, rather than an arbitrarily chosen time window. Note that these results are not incompatible with the data itself being non-stationary. The stability we refer to is the stability of the relation between the inputs (order flow and price history) and outputs (forecasts). If the inputs themselves are non-stationary, the output will be non-stationary but that does not contradict our point in any way. Path-dependence Statistical modeling of financial time series has been dominated by Markovian models which, for reasons of analytical tractability, assume that the evolution of the price and other state variables only depends on their current value and there is no added value to including their history beyond one lag. There is a trove of empirical evidence going against this hypothesis, and pointing to long-range dependence in financial time series [START_REF] Bacry | Continuous cascade models for asset returns[END_REF][START_REF] Lillo | The long memory of the efficient market[END_REF][START_REF] Mandelbrot | The multifractal model of asset returns[END_REF]. Our results are consistent with these findings: we find that the history of the limit order book contains significant additional information beyond that contained in its current state. Figure 11 shows the increase in accuracy when using an LSTM network, which is a function of the history of the order book, as compared with a feedforward neural network, which is only a function of the most recent observation (a Markovian model). The LSTM network, which incorporates temporal dependence, significantly outperforms the Markovian model. 
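The contrast between the two model classes can be made concrete with a minimal numpy sketch: an LSTM cell consumes the whole sequence of observations, while a feedforward (Markovian) map sees only the most recent one, so perturbing an early observation changes only the LSTM's output. The weights here are random — this illustrates the functional forms, not the trained networks of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_final_state(x_seq, W, U, b):
    """Run one LSTM cell over x_seq (shape (T, d)); return the final hidden state.

    W (4h, d), U (4h, h), b (4h,) stack the input/forget/output/candidate gates.
    """
    h_dim = U.shape[1]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    for x in x_seq:
        z = W @ x + U @ h + b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state: carries long-range memory
        h = o * np.tanh(c)
    return h

def feedforward(x_last, A, a):
    """Markovian benchmark: a function of the most recent observation only."""
    return np.maximum(A @ x_last + a, 0.0)  # single ReLU layer

rng = np.random.default_rng(2)
d, h_dim, T = 5, 8, 20
W = 0.3 * rng.normal(size=(4 * h_dim, d))
U = 0.3 * rng.normal(size=(4 * h_dim, h_dim))
b = np.zeros(4 * h_dim)
b[h_dim:2 * h_dim] = 1.0        # common forget-gate bias initialisation
A, a = rng.normal(size=(h_dim, d)), np.zeros(h_dim)

x = rng.normal(size=(T, d))
x2 = x.copy()
x2[0] += 1.0                    # perturb only the first observation in the sequence

lstm_diff = np.abs(lstm_final_state(x, W, U, b) - lstm_final_state(x2, W, U, b)).max()
ff_diff = np.abs(feedforward(x[-1], A, a) - feedforward(x2[-1], A, a)).max()
```

`lstm_diff` is nonzero — the LSTM output depends on the whole path — while `ff_diff` is exactly zero, since the last observation is unchanged. This path-dependence is what allows the recurrent model to exploit the long-memory effects discussed here.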
The accuracy of the forecast also increases when the network is provided with a longer history as input. Figure 12 displays the accuracy of the LSTM network on a 5,000-step sequence minus the accuracy of the LSTM network on a 100-step sequence. Recall that a step Δ_k = τ_{k+1} - τ_k is on average 1.7 seconds in the dataset, so 5,000 lags corresponds to 2 hours on average. There is a significant increase in accuracy, indicating that the deep learning model is able to find relationships between order flow and price change events over long time periods. Our results show that there is significant gain in model performance from including many lagged values of the observations in the input of the neural network, a signature of significant -and exploitable -temporal dependence in order book dynamics.
Discussion
Using a Deep Learning approach applied to a large dataset of billions of orders and transactions for 1000 US stocks, we have uncovered evidence of a universal price formation mechanism relating the history of the order book for a stock to the (next) price variation for that stock. More importantly, we are able to learn this mechanism through supervised training of a deep neural network on a high frequency time series of the limit order book. The resulting model displays several interesting features:
Figure 11: Comparison of out-of-sample forecast accuracy of an LSTM network with a feedforward neural network trained to forecast the direction of next price move based on the current state of the limit order book. Cross-sectional results for 500 stocks for test period June-August, 2015.
Figure 12: Out-of-sample increase in accuracy when using a 5,000-step sequence versus a 100-step sequence, across 1,000 stocks. Test period: June-August 2015.
• Universality: the model is stable across stocks and sectors, and the model trained on all stocks outperforms stock-specific models, even for stocks not in the training sample, showing that features captured are not stock-specific.
• Stationarity: model performance is stable across time, even a year out of sample.
• Evidence of 'long memory' in price formation: including order flow history as input, even up to several hours, improves prediction performance.
• Generalization: the model extrapolates well to stocks not included in the training sample. This is especially useful since it demonstrates its applicability to recently listed instruments or those with incomplete or short data histories.
Our results illustrate the applicability and usefulness of Deep Learning methods for modeling of intraday behavior of financial markets. In addition to the fundamental insights they provide on the nature of price formation in financial markets, these findings have practical implications for model estimation and design. Training a single universal model is orders of magnitude less complex and costly than training or estimating thousands of single-asset models. Since the universal model can generalize to new stocks (without training on their historical data), it can also be applied to newly issued stocks or stocks with shorter data histories.
Figure 2: Architecture of a recurrent neural network.
Figure 4: Comparison of a deep neural network with linear models. Models are trained to predict the direction {-1, +1} of next mid-price move. Comparison for approximately 500 stocks and out-of-sample results reported for June-August, 2015. Left-hand figure: increase in accuracy of stock-specific deep neural networks versus stock-specific linear models. Right-hand figure: accuracy of a universal deep neural network (red) compared to stock-specific linear models (blue).
Figure 5: Left: relation between depth at the bid, depth at the ask and the probability of a price decrease. The x-axis and y-axis display the quantile level corresponding to the observed bid and ask depth.
Right: Contour plot displaying the influence of levels deeper in the order book (5 to 10) on the probability of a price decrease.
Figure 6: Out-of-sample forecasting accuracy of the universal model compared with stock-specific models. Both are deep neural networks with 3 LSTM layers followed by a ReLU layer. All layers have 50 units. Models are trained to predict the direction of the next move. Comparison across 489 stocks, June-August, 2015.
Figure 7: Increase in out-of-sample forecast accuracy (in %) of the universal model compared to stock-specific model, as a function of size of training set for stock-specific model (normalized by total sample size, N = 24.1 million). Models are trained to predict the direction of next price move. Comparison across 500 stocks, June-August, 2015.
Figure 8: Comparison of two universal models: a 150-unit-per-layer model versus a 50-unit-per-layer model. Models are trained to predict direction {-1, +1} of next mid-price move. Out-of-sample prediction accuracy for direction of next price move, across approximately 500 stocks (June-August, 2015).
Figure 9: Performance on approximately 500 new stocks which the model has never seen before. Out-of-sample accuracy reported for June-August, 2015. Universal model trained during time period January 2014-May 2015.
Figure 10: Performance on 500 new stocks which the model has never seen before. Left: out-of-sample accuracy reported for June-August, 2015. Right: out-of-sample accuracy reported for January-March, 2017. Universal model trained on data from January 2014-May 2015.
Table 1: Comparison of universal model trained on stocks 1-464 versus (1) stock-specific models for stocks 465-489 and (2) universal model trained on all stocks 1-489. Models are trained to predict direction of next mid-price move.
Second column shows the fraction of stocks where the universal model trained only on stocks 1-464 outperforms models (1) and (2). The third column shows the average increase in accuracy. Comparison for 25 stocks and out-of-sample results reported for June-August, 2015.

Model            Comparison   Average increase in accuracy
Stock-specific   25/25        1.45%
Universal        4/25         -0.15%

Table 2: Out-of-sample forecast accuracy of deep learning models trained on the entire training set (19 months) vs. deep learning models trained for shorter time periods immediately preceding the test period, across 50 stocks (June-August 2015). Models are trained to predict the direction of next price move. Second column shows the fraction of stocks where the 19-month model outperforms models trained on shorter time periods. The third column shows the average increase in accuracy across all stocks.
Historical order book data was reconstructed from NASDAQ Level III data using the LOBSTER data engine[START_REF] Huang | LOBSTER: Limit Order Book Reconstruction System[END_REF].
The authors thank the London Quant Summit 2018, JP Morgan and Princeton University for their comments. Computations for this paper were performed using a grant from the CFM-Imperial Institute of Quantitative Finance and the Blue Waters supercomputer grant "Distributed Learning with Neural Networks".
https://hal.science/hal-01754055/file/Blavette--Influence%20of%20the%20wave%20dispersion_EWTEC2017-revised_v4.pdf
Anne Blavette email: [email protected] Thibaut Kovaltchouk email: [email protected] François Rongère email: [email protected] Marilou Jourdain De Thieulloy Paul Leahy email: [email protected] Bernard Multon email: [email protected] H Ben Ahmed email: [email protected]
Influence of the wave dispersion phenomenon on the flicker generated by a wave farm
Keywords: Flicker, aggregation effect, hydrodynamic modelling, time delay-based approach
I. INTRODUCTION
The inherently fluctuating nature of waves may be reflected to some extent in the power output of wave energy converters (WECs). These fluctuations can induce voltage fluctuations which can potentially generate flicker [START_REF] Molinas | Power Smoothing by Aggregation of Wave Energy Converters for Minimizing Electrical Energy Storage Requirements[END_REF]- [START_REF] Kovaltchouk | Wave farm flicker severity: Comparative analysis and solutions[END_REF]. Hence, wave farm managers will be required to demonstrate that their farm is compliant with grid codes and similar regulations, in order to be granted grid connection. This is usually performed through grid impact assessment studies by means of numerical power system simulators such as DIgSILENT PowerFactory [START_REF]DIgSILENT PowerFactory[END_REF], PSS®E [START_REF] Pss®e | [END_REF], etc. Hence, numerical models of the considered WEC(s) are necessary.
These models can be based on experimental data in the form of electrical power output time series [START_REF] Blavette | Impact of a Medium-Size Wave Farm on Grids of Different Strength Levels[END_REF] or on so-called "wave-to-wire" models which compute this type of data from the (usually simulated) sea surface elevation. A significant number of such "wave-to-wire" models have been developed, as reviewed in [START_REF] Penalba | A Review of Wave-to-Wire Models for Wave Energy Converters[END_REF]. However, few of them have considered arrays of WECs as described in [START_REF] Forehand | A Fully Coupled Wave-to-Wire Model of an Array of Wave Energy Converters[END_REF]. Regarding the latter, different approaches have been used. The most comprehensive approach consists in simulating the sea surface elevation at each node of the wave farm where a WEC is located, taking into account the wave dispersion phenomenon, as well as the hydrodynamic interactions between WECs due to radiation and diffraction. This approach is computationally very heavy and should be restricted to simulating the output power of a wave farm whose WECs are closely located. In the case where the WECs are sufficiently far away from each other so that their hydrodynamic interactions can be considered as negligible, a second approach should be used, which consists in calculating the sea surface elevation at each node of the wave farm where a WEC is located without taking into account the radiation and diffraction due to the neighbouring WECs. Finally, a third, simplified approach has been widely used in the electrical engineering community in flicker studies focussing on wave farms. This approach consists in calculating the output power of an entire farm based on the power profile of a single WEC.
This power profile serves as a reference to which a random time delay is applied for each WEC in order to model the device aggregation effect [1]- [START_REF] Kovaltchouk | Wave farm flicker severity: Comparative analysis and solutions[END_REF], which will be described in Section C.2. The computational effort of this latter approach is extremely light with respect to the other two approaches. However, questions remain concerning its physical validity, as this approach does not take into account the wave dispersion phenomenon. The objective of this paper is to tackle this question by comparing the flicker levels obtained from the second and the third approaches described in this section. WECs located sufficiently far apart will be considered, so that the inter-WEC hydrodynamic interactions can be neglected. The farm output power is then injected into a local electrical grid model developed under PowerFactory to compute the corresponding voltage profile at the Point of Common Coupling (PCC). This voltage profile serves as input to a flickermeter, from which the associated short-term flicker level is computed. The modelling hypotheses will be described in Section II and the results in Section III. In Section IV, the conclusions will be detailed. The results of the comparative study will contribute to defining the level of hydrodynamic detail required for simulating the output power of a wave farm when it is to be used for flicker analyses.
By discretising the wave spectrum S(ω) using n regularly spaced frequency components, the amplitude of each elementary wave component is given by [START_REF] Faltinsen | Sea loads on ships and offshore structures[END_REF] as:
a(ω_i) = a_i = sqrt(2 S(ω_i) δω) (1)
Each wave component then represents a complex elementary free surface elevation at horizontal position (x, y) and time t:
η̃_i(x, y, t) = a_i exp(j[k_i (x cos θ̄ + y sin θ̄) - ω_i t + φ_i]) (2)
where θ̄ is the mean wave direction of the mono-directional wave field, φ_i ∈ [0, 2π[ is the random phase of the i-th wave component chosen at wave field initialisation, and k_i is the wave number, which is the solution of the dispersion relation given by
ω_i² = g k_i tanh(k_i h) (3)
with h being the mean water depth at position (x, y). If the water depth can be considered as infinite, the relation degenerates to ω_i² = g k_i. Summing the n contributions gives the free surface elevation:
η(x, y, t) = Re{ Σ_{i=1}^{n} η̃_i(x, y, t) } (4)
This linear wave field modelling is then integrated into a linear framework for wave-structure interactions that relies on hydrodynamic coefficients obtained from linear potential flow theory. Note that in this study, no hydrodynamic interactions between the scattered wave fields are taken into account, so that only the coefficients of one isolated body are to be calculated using the seakeeping software NEMOH [START_REF] Babarit | Theoretical and numerical aspects of the open source BEM solver NEMOH[END_REF], which is based on the Boundary Element Method (BEM). The time-domain linear excitation force applying to a body having position (x, y) is then obtained by the superposition of the excitation generated by each wave component as:
F_ex(x, y, t) = Re{ Σ_{i=1}^{n} F̃_ex(ω_i) η̃_i(x, y, t) } (5)
Sea-states were simulated for significant heights equal to 1 m and 3 m, as well as for peak periods equal to 7 s, 9 s, 10 s and 12 s.
B.
Wave device
The wave farm is composed of identical heaving buoys controlled passively and described in a previous paper [START_REF] Kovaltchouk | Influence of control strategy on the global efficiency of a Direct Wave Energy Converter with Electric Power Take-Off[END_REF]. As the focus of this paper is on the comparison of two methods for modelling the device aggregation effect on flicker, a simple, passive control strategy was adopted for the WEC. It consists of the application of a constant damping factor as a function of the sea-state characteristics (significant wave height H_s and peak period T_p). This damping factor is optimised with respect to a given sea-state during a preliminary offline study. For the sake of realism, levelling is applied on the power take-off (PTO) force, which is limited to 1 MN, and on the output electrical power, which is limited to 1 MW. Each WEC is connected to the offshore grid through a fully rated back-to-back power electronic converter.
C. Wave farm
1) Wave farm layout
The wave farm considered in this study is composed of 24 of the devices described in the previous section. All these devices are deemed identical in terms of hydro-mechanical and electro-mechanical properties. They are placed at a distance d of each other, on 3 rows and 8 columns facing the incoming waves, as shown in Fig. 1. The inter-WEC distance d is supposed to be sufficient so that the hydrodynamic interactions between the devices can be considered as negligible. In this research work, it was assumed equal to 600 m in the full hydrodynamic approach, while it is made approximately equal to 600 m in the time delay-based approach, as it will be described in Section C.4 [START_REF] Babarit | Impact of long separating distances on the energy production of two interacting wave energy converters[END_REF].
2) Introduction to the two approaches studied here
The wave farm power output is computed as the sum of the power output of all the WECs composing the farm.
As the temporal profile of the sea surface elevation at a given node in the wave farm is not expected to be identical to that corresponding to another node, it is not expected either that two power output temporal profiles from two different WECs could be identical. Hence, the power output of a wave farm cannot be computed as the product of the power output of a single device times the number of WECs composing the farm. Also, the fact that different WECs achieve peak power at different times leads to a reduced peak-to-average ratio of the wave farm power output compared to that of a single device. This is illustrated in Fig. 2, which shows the temporal power output profile for a single WEC and for the wave farm composed of 24 devices, normalised by their respective average value. While the peak-to-average ratio is equal to 3.6 for the single WEC (even though its power output is limited to 1 MW), it is equal to 1.9 for the wave farm. This decrease in the peak-to-average ratio implies that the temporal power output profile is "smoother" in the case of a wave farm than in the case of a single WEC, which is usually referred to as the device aggregation effect. The objective of this paper was to determine whether a full hydrodynamic simulation was required to model this effect on flicker, or whether a simplified, time delay-based method was sufficient. These two approaches are described in the next sections.
Fig. 2 Temporal power output profile (over 100 s) for a single WEC (blue) and for the wave farm composed of 24 devices (pink) for significant wave height H_s = 3 m and peak period T_p = 7 s. The profiles are normalised with respect to their average value.
2.1) Full hydrodynamic simulation
In this approach, the wave excitation force at each node of the wave farm where a WEC is located is computed by means of the code described in Section A. Then, the power output of each WEC is computed based on its corresponding excitation force temporal profile.
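The wave-field synthesis of Section A (Eqs. (1)-(3) plus the summation of components) can be sketched numerically as follows: discretise a JONSWAP spectrum, solve the dispersion relation by Newton iteration, and sum the elementary components at a given position. The JONSWAP parameterisation, the 50 m depth and the numerical settings below are illustrative assumptions, not values from the paper; only H_s = 3 m and T_p = 7 s come from the simulated sea-states.

```python
import numpy as np

def jonswap(omega, hs, tp, gamma=3.3):
    """Standard JONSWAP spectral density S(omega), normalised so that Hs = 4*sqrt(m0)."""
    wp = 2 * np.pi / tp
    sigma = np.where(omega <= wp, 0.07, 0.09)
    peak = gamma ** np.exp(-((omega - wp) ** 2) / (2 * sigma**2 * wp**2))
    shape = omega ** -5.0 * np.exp(-1.25 * (wp / omega) ** 4) * peak
    m0 = shape.sum() * (omega[1] - omega[0])      # rectangle-rule zeroth moment
    return shape * (hs**2 / 16.0) / m0

def wavenumber(omega, h, g=9.81):
    """Solve the dispersion relation omega^2 = g*k*tanh(k*h) (Eq. 3) by Newton iteration."""
    k = omega**2 / g                              # deep-water initial guess
    for _ in range(50):
        f = g * k * np.tanh(k * h) - omega**2
        df = g * np.tanh(k * h) + g * k * h / np.cosh(k * h) ** 2
        k -= f / df
    return k

def free_surface(t, x, y, omega, a, k, phi, theta=0.0):
    """Eqs. (2) and (4): real part of the summed elementary components at (x, y)."""
    phase = k * (x * np.cos(theta) + y * np.sin(theta)) - np.outer(t, omega) + phi
    return (a * np.exp(1j * phase)).real.sum(axis=-1)

rng = np.random.default_rng(3)
hs, tp, depth, n = 3.0, 7.0, 50.0, 200            # Hs/Tp from the study; depth assumed
omega = np.linspace(0.3, 3.0, n)
a = np.sqrt(2 * jonswap(omega, hs, tp) * (omega[1] - omega[0]))   # Eq. (1)
k = wavenumber(omega, depth)
phi = rng.uniform(0, 2 * np.pi, n)                # random phases at initialisation
t = np.arange(0.0, 600.0, 0.5)
eta = free_surface(t, 0.0, 0.0, omega, a, k, phi)
```

Evaluating `free_surface` at a second position `(x, y)` yields a different elevation time series because each component travels at its own phase speed — this is precisely the dispersion effect that the time delay-based method neglects.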
Hence, the power output P_farm of the wave farm corresponds to the algebraic sum of the power output P_WECi of each WEC i, such that:
P_farm(t) = Σ_{i=1}^{24} P_WECi(t) (7)
2.2) Time delay-based method
The time delay-based method requires only a single power output temporal profile P_WEC1 of a single WEC, to which different uniformly distributed random time delays Δt_i are applied to represent the power output of the other devices composing the farm. Hence, the wave farm output power can be expressed as:
P_farm(t) = Σ_{i=1}^{24} P_WEC1(t + Δt_i) (8)
where Δt_i models the fictive propagation of a wave group whose envelope characteristics are independent of the travelled distance. This means that the wave dispersion phenomenon is not taken into account here. This physical effect implies that in dispersive media such as water, the travel speed of a sine wave is linked to its frequency through the dispersion relationship which, for deep water waves, can be expressed as [START_REF] Falnes | Ocean waves and oscillating systems[END_REF]:
ω² = g k (9)
as mentioned earlier. Term Δt_i is assumed equal to:
Δt_i = d_td / v_g = 4π d_td / (g T_p) (10)
where d_td is the distance between a reference WEC (whose time delay is equal to zero) and a given WEC i, and v_g is the group speed, which is defined here as equal to g T_p / (4π), where g = 9.81 m/s² is the acceleration due to gravity. Given that the incoming waves are simulated as mono-directional waves, the distance d_td taken into account here is equal to the distance along the axis parallel to the wave front propagation direction. In the full hydrodynamic approach, the distance between the WECs was assumed to be equal to a fixed distance d = 600 m. However, if this constant distance were used in the time delay-based approach (i.e.
d_td = d), given that the excitation force temporal profile is similar for all WECs, then all the devices located on a given row of the wave farm would present the same power output at any time t, thus resulting in coincident power profiles for 8 WECs, which is unrealistic. Hence, in order to avoid this situation, an additional uniformly distributed random distance d_rand, arbitrarily selected as ranging between -50 m and +50 m (in order to represent the WECs' linear drift), is added to the fixed inter-WEC distance d such that:
d_td = d + d_rand where d_rand ∈ [-50; 50] m (11)
Ten time delay sets were used in this study.
D. Electrical grid
An electrical grid model was developed under the power system simulator PowerFactory and is shown in Fig. 3. This model is inspired by the Atlantic Marine Energy Test Site (AMETS) [START_REF] Amets | SEAI website[END_REF] located off Belmullet, Ireland, for the onshore local grid part. It is composed of a 10/20 kV transformer whose impedance is equal to 2·10⁻⁴ + j0.06 pu (where j is the imaginary unit) and of a 0.1 MW load representing the onshore substation connected to the rest of the national network through a 5 km-long overhead line of impedance 0.09 + j0.3 Ω/km. On the 20 kV bus (which is the Point of Common Coupling (PCC)), a VAr compensator maintains the power factor at unity. Then, a 20/38 kV transformer (of impedance equal to 2·10⁻⁴ + j0.06 pu) connects the farm to the local (national) grid, where a 2 MW load (representing the consumption of a local town) is also connected. The rest of the national grid is modelled by means of a 38 kV voltage source in series with an impedance. This impedance magnitude is selected to be equal to Z = 20 Ω (i.e. equal to a short-circuit level of 72 MVA), and its angle is selected to be equal to 30°, which corresponds to a "weak grid" and thus constitutes a worst-case scenario in which relatively high flicker levels can be expected.
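The time delay-based aggregation of Eqs. (10)-(11) — summing time-shifted copies of a single WEC's power profile — can be sketched as follows. The single-WEC profile used here is synthetic and purely illustrative; only the farm layout (3 rows of 8 devices, 600 m apart), the ±50 m drift, the 1 MW power cap and T_p = 7 s are taken from the paper.

```python
import numpy as np

G = 9.81  # acceleration due to gravity, m/s^2

def farm_power_time_delay(p_wec1, dt_sample, tp, x_positions, drift=50.0, rng=None):
    """Sum time-shifted copies of a single-WEC power profile (time delay-based method).

    p_wec1      : power time series of the reference WEC, sampled every dt_sample seconds
    tp          : sea-state peak period (s); group speed v_g = g*tp/(4*pi)
    x_positions : distance of each WEC from the reference, along wave propagation (m)
    drift       : extra uniform random offset in [-drift, +drift] m per WEC (Eq. 11)
    """
    rng = np.random.default_rng() if rng is None else rng
    v_g = G * tp / (4 * np.pi)
    p_farm = np.zeros_like(p_wec1)
    for x in x_positions:
        d_td = x + rng.uniform(-drift, drift)          # Eq. (11)
        shift = int(round((d_td / v_g) / dt_sample))   # Eq. (10), converted to samples
        p_farm += np.roll(p_wec1, shift)               # circular shift keeps the mean exact
    return p_farm

# 24 WECs: 3 rows along the propagation axis, 8 devices per row, 600 m apart (Tp = 7 s).
x_positions = np.repeat(np.arange(3) * 600.0, 8)
dt, tp = 0.5, 7.0
t = np.arange(0, 1200, dt)
# Synthetic single-WEC profile: fluctuating, non-negative, capped at 1 MW (illustrative).
rng = np.random.default_rng(4)
raw = (np.sin(2 * np.pi * t / tp) + 0.4 * np.sin(2 * np.pi * t / 11.0 + 1.0)) ** 2
p_wec1 = np.minimum(raw / raw.max() * 1.4e6, 1.0e6)
p_farm = farm_power_time_delay(p_wec1, dt, tp, x_positions, rng=rng)
```

Because the shifts are circular, the farm mean equals 24 times the single-WEC mean, while the farm's peak-to-average ratio is lower than that of the single device — the smoothing (aggregation) effect illustrated in Fig. 2. Drawing several `rng` seeds reproduces the "ten time delay sets" procedure.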
The offshore grid is composed of a 20 km-long submarine cable of series impedance equal to 0.07 + j0.11 Ω/km and capacitance equal to 0.31 µF/km, as described in [START_REF]Nexans data sheet[END_REF]. The cable distance was selected according to the values observed for two planned or already existing wave energy test sites [START_REF] Amets | SEAI website[END_REF], [START_REF][END_REF]. The offshore network is also composed of a 0.4/10 kV transformer (of impedance equal to 2·10⁻⁴ + j0.06 pu) and of 24 wave devices. The influence on the study results of the internal network between the WECs and the 0.4/10 kV transformer was deemed negligible and was therefore not included in the model.
III. RESULTS
A. Flicker level with respect to time delays
As mentioned earlier, ten different time delay sets were used in the time delay-based approach. The corresponding minimum, maximum, and average short-term flicker levels P_st are shown in Table I and Table II for the two significant wave heights considered here (H_s = 1 m and H_s = 3 m). The standard deviation, also shown in these tables, indicates that for most cases the deviation from the average value is relatively small compared to the allowed flicker limits, which usually range between 0.35 and 1 [START_REF] Blavette | Impact of a Medium-Size Wave Farm on Grids of Different Strength Levels[END_REF]. However, some higher values of the standard deviation indicate that the time delay set can have a non-negligible influence on flicker, and that it is therefore important to average the flicker level corresponding to several time delay sets in order to obtain a reasonable estimation of the flicker which would have been obtained through the more realistic, full hydrodynamic approach.
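As a rough order-of-magnitude check (not the standardised flickermeter used in this study), the relative voltage change at the PCC caused by a power swing ΔP through the grid impedance described above can be estimated with the usual first-order approximation ΔV/V ≈ (R·ΔP + X·ΔQ)/V²; with the VAr compensator holding unity power factor, ΔQ ≈ 0. The function name and the 1 MW example swing are illustrative assumptions; the grid parameters come from the paper.

```python
import math

# Grid parameters from the paper: 38 kV source behind Z = 20 ohm at 30 degrees.
V = 38e3                               # line voltage, V
Z = 20.0                               # grid impedance magnitude, ohm
psi = math.radians(30.0)               # grid impedance angle
R, X = Z * math.cos(psi), Z * math.sin(psi)

# Consistency check: short-circuit level S_sc = V^2 / Z (the paper quotes 72 MVA).
S_sc = V**2 / Z

def rel_voltage_change(dP, dQ=0.0):
    """First-order relative voltage change for a power swing (dP, dQ) at the PCC."""
    return (R * dP + X * dQ) / V**2

# A 1 MW swing (one WEC going from zero to rated power) at unity power factor:
dv = rel_voltage_change(1.0e6)
```

This gives S_sc ≈ 72.2 MVA, consistent with the stated short-circuit level, and a relative voltage change of about 1.2% per 1 MW swing — fluctuations of this magnitude, repeated at wave-group frequencies, are precisely the regime in which flicker becomes a concern on a weak grid.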
As the average flicker level is mostly representative of the order of magnitude of the flicker level corresponding to the 10 different time delay sets, it will be used for the comparative study between the time delay-based approach and the full hydrodynamic approach, as described in the following section. B. Comparison of the two approaches It is shown in Fig. 4 that both approaches generate similar results with a difference which is generally negligible in comparison with the usual maximum allowed flicker limits ranging between 0.35 and 1. This observation applies to both the low-energy and the mild sea-states (𝐻 𝑠 =1 m and 𝐻 𝑠 =3 m respectively). Hence, it can be concluded that the flicker level generated by a wave energy farm can be estimated with a relatively high level of accuracy in most cases by means of the average flicker level corresponding to several time delay sets (here, ten time delay sets were used). In other words, this means flicker can be estimated from a single WEC power output, without further requirement for modelling the hydrodynamic conditions at each node in the farm where a WEC is expected to be located. This means also, in physical terms, that the wave dispersion phenomenon can usually be considered as negligible when it comes to flicker studies under the conditions considered in this study. Fig. 4 Short-term flicker level 𝑃 𝑠𝑡 as a function of the sea-state peak period 𝑇 𝑝 for 𝐻 𝑠 =1 m and 𝐻 𝑠 =3 m, and for the two considered approaches IV. CONCLUSIONS This paper has described a comparative study between a time delay-based approach and a more realistic, full hydrodynamic approach for determining the flicker level generated by a wave farm composed of 24 devices. The results have shown that in most cases, using the average flicker level corresponding to 10 different time delay sets leads to a negligible error compared to the full hydrodynamic approach. This means that the wave dispersion phenomenon has a limited impact on flicker. 
However, some non-negligible values for the flicker level error in some rare cases suggest that the time delay-based approach should be restricted to estimating flicker at a first stage, before more refined studies based on the full hydrodynamic approach are conducted. Future work will focus on the comparative analysis, in terms of flicker level, between the two approaches described in this paper and a more comprehensive hydrodynamic approach including the hydrodynamic interactions between WECs. It will also investigate the influence of several parameters such as inter-WEC distance, WEC spatial arrangement, device number, etc.
Fig. 1 Wave farm spatial layout
Fig. 3 Electrical grid model developed under DIgSILENT PowerFactory
Table I Short-term flicker levels P_st for the ten time delay sets (H_s = 1 m)

Peak period T_p (s)    7      9      10     12
Average                0.09   0.40   0.50   0.76
Standard deviation     0.04   0.12   0.14   0.16
Minimum                0.04   0.26   0.31   0.60
Maximum                0.14   0.68   0.84   1.14

Table II Short-term flicker levels P_st for the ten time delay sets (H_s = 3 m)

Peak period T_p (s)    7      9      10     12
Average                0.63   0.80   0.96   0.83
Standard deviation     0.12   0.09   0.21   0.10
Minimum                0.43   0.67   0.73   0.73
Maximum                0.80   0.94   1.40   1.09

ACKNOWLEDGMENT The research work presented in this paper was partly conducted in the frame of the QUALIPHE project (ANR-11-PRGE-0013) funded by the French National Agency of Research (ANR), which is gratefully acknowledged.
01754060
en
[ "spi.nrj" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01754060/file/Matine-Optimal%20sizing%20of%20submarine%20cables--EWTEC2017.pdf
Abdelghani Matine Charles-Henri Bonnard Anne Blavette Salvy Bourguet François Rongère Thibaut Kovaltchouk Emmanuel Schaeffer Optimal sizing of submarine cables from an electro-thermal perspective Keywords: Submarine cables, optimal sizing, electro-thermal, wave energy converter (WEC), finite element analysis (FEA)

Optimal sizing of submarine cables from an electro-thermal perspective

Abdelghani Matine #1 , Charles-Henri Bonnard #2 , Anne Blavette* 3 , Salvy Bourguet #4 , François Rongère +5 , Thibaut Kovaltchouk°6, Emmanuel Schaeffer #7
# IREENA, Université de Nantes, 37 boulevard de l'Université, 44602 Saint-Nazaire, France
1 [email protected] , 2 [email protected], 4 [email protected], 7 [email protected]
* SATIE (UMR 8029), Ecole Normale Supérieure de Rennes, avenue Robert Schuman, 35170 Bruz, France
3 [email protected]
+ LHEEA (UMR 6598), Ecole Centrale de Nantes, 1 Rue de la Noë, 44300 Nantes, France
5 [email protected]
°Lycée F. Roosevelt, 10 rue du Président Roosevelt, 51100 Reims, France
6 [email protected]

Abstract-In similar fashion to most renewables, wave energy is capital expenditure (CapEx)-intensive: the cost of wave energy converters (WECs), infrastructure, and installation is estimated to represent 60-80% of the final energy cost. In particular, grid connection and cable infrastructure are expected to represent up to a significant 10% of the total CapEx. However, substantial economic savings could be realised by further optimising the electrical design of the farm, and in particular by optimising the submarine cable sizing.
This paper will present the results of an electro-thermal study focussing on the temperature response of submarine cables to fluctuating electrical current profiles as generated by wave device arrays, obtained through the finite element analysis tool COMSOL. This study investigates the maximum fluctuating current loading which can be injected through a submarine cable compared to its current rating, which is usually defined under steady-state conditions and is therefore irrelevant in the case of wave energy. Hence, using this value for design optimisation studies in the specific context of wave energy is expected to lead to needless oversizing of the cables, thus hindering the economic competitiveness of this renewable energy source.

I. INTRODUCTION

In similar fashion to most renewables, harnessing wave energy is capital expenditure (CapEx)-intensive: the cost of wave energy converters (WECs), infrastructure, and installation is estimated to represent 60-80% of the final energy cost [START_REF]Ocean Energy Strategic Roadmap 2016 -Building ocean energy for Europe[END_REF]. In particular, cable costs (excluding installation costs) are expected to represent up to a significant 10% of the total CapEx, based on the offshore wind energy experience [START_REF] On | Wind Turbine Technology and Operations Factbook[END_REF][START_REF] Macaskill | Offshore wind-an overview[END_REF]. However, substantial economic savings could be realised by further optimising the electrical design of the farm, and in particular by optimising the submarine cable sizing. Several studies have focussed on optimising the electrical network composed of the wave farm offshore network and/or of the local onshore network [START_REF] Nambiar | Optimising power transmission options for marine energy converter farms[END_REF][START_REF] Blavette | Dimensioning the Equipment of a Wave Farm: Energy Storage and Cables[END_REF].
In [START_REF] Nambiar | Optimising power transmission options for marine energy converter farms[END_REF], a techno-economic analysis was conducted on maximising the real power transfer between a wave energy farm and the grid by varying three design parameters of the considered array, including the export cable length, but excluding its current rating, which was thus not considered as an optimisation variable. It seems that this latter parameter was selected as equal to the maximum current which may theoretically flow through the cable. This theoretical value was obtained from the maximum theoretical power output supposedly reached for a given sea-state, which was extracted from a power matrix. Following this, the corresponding scalar value for the maximum current for a given sea-state was obtained by means of load flow calculations based on a pre-defined electrical network model. Hence, the cable rating was assumed to be equal to the maximum value among the different current scalar values corresponding to several sea-states. In similar fashion, in [START_REF] Beels | A methodology for production and cost assessment of a farm of wave energy converters[END_REF], where a power transfer maximisation is also conducted, the cable rating is also determined based on the maximum power output of a wave device array for different sea-states. In both these papers, there is no limitation on the considered sea-states from which energy can be extracted, apart from the operational limits of the wave energy device itself in [START_REF] Nambiar | Optimising power transmission options for marine energy converter farms[END_REF]. In other words, as long as the sea-state characteristics are compatible with the wave energy device operational limits in terms of significant wave height and period, it is considered that wave energy is harnessed.
Another approach challenging this idea was proposed in [START_REF] Sharkey | Maximising value of electrical networks for wave energy converter arrays[END_REF] based on the offshore wind energy experience [START_REF] Crown | Round 3 Offshore Wind Farm Connection Study[END_REF]. This approach consists in rating the submarine export cable to a current level less than the maximum current which could theoretically be harnessed when the wave device operational constraints only are considered. The rationale underpinning this approach consists in considering that the most highly energetic sea-states contribute to a negligible fraction of the total amount of energy harnessed every year. Hence, this corresponds to a negligible part of the annual revenue. However, harnessing wave energy during these highly energetic sea-states leads to an increased required current rating for the export cable, whose associated cost is expected to be significantly greater than the corresponding revenue. Consequently, it seems more reasonable, from a profit maximisation perspective, to decrease the export cable current rating, even if it means shedding a part of the harnessable wave energy. However, in similar fashion to the papers mentioned previously, current is calculated in this paper as a scalar value representing the maximum level which can be reached during a given sea-state. In other words, the fluctuating nature of the current profile during a sea-state is not considered. However, the maximum current value, from which the cable current rating is usually calculated, flows in the cables during only a fraction of the sea-state duration.
Based on a very simple model, it was shown in [START_REF] Blavette | Dimensioning the Equipment of a Wave Farm: Energy Storage and Cables[END_REF] that the slow thermal response of the cable (relative to the fast current fluctuations generated by the waves) [START_REF] Adapa | Dynamic thermal ratings: monitors and calculation methods[END_REF][START_REF] Hosek | Dynamic thermal rating of power transmission lines and renewable resource[END_REF] leads to temperature fluctuations of limited relative amplitude compared to the current fluctuations. Hence, this implies that it could be feasible to inject a current profile whose maximum value is greater than the current rating without exceeding the conductor maximum allowed temperature, which is usually equal to 90°C for XLPE cables [START_REF] Nexans | Submarine cable 10kV[END_REF][START_REF] Abb | XLPE Submarine Cable Systems Attachment to XLPE Land Cable Systems -User's Guide[END_REF][START_REF]Nexans data sheet[END_REF]. Downrating submarine cables in this manner, compared to rating them with respect to the maximum, but transient, current level flowing through them, could lead to significant savings from a CapEx point of view. In this perspective, this paper presents a detailed study on the thermal response of a submarine cable subject to a fluctuating current as generated by a wave device array. Section II will describe the development of a finite element analysis (FEA) based on a 2D thermal model of a 20 kV XLPE submarine cable, performed using the commercial FEA software COMSOL. The thermal response of a submarine cable to the injection of a fluctuating current profile as generated by wave energy arrays is analysed in Section III. The objective of this study is to determine the maximum current loading which can be injected through a submarine cable without exceeding the conductor thermal limit (equal to 90°C here) and to compare this value to the cable rating.
As mentioned earlier, this latter value is usually defined under static conditions which are irrelevant in the case of wave energy. Hence, its use for design optimisation studies in this specific context is expected to lead to needless oversizing of the cables, thus hindering the economic competitiveness of wave energy.

II. THERMAL MODELLING OF THE SUBMARINE POWER CABLE

A. Cable design and characteristics

This study considers a 20 kV XLPE insulated power cable containing three copper conductors, each with a cross section of 95 mm² and each having its own copper screen, as shown in Fig. 1. The static current carrying capacity of the considered cable is equal to 290 A and is calculated according to IEC standards 60287-1-1 [START_REF]Calculation of the current rating: Part 1-1 Current rating equations (100% load factor) and calculation of losses[END_REF] and 60287-2-1 [START_REF]Electric cables -Calculation of the current rating -Part 2-1: Calculation of Thermal Resistance[END_REF]. It is based on a set of assumptions regarding the operating and environmental conditions (maximum conductor temperature, current frequency, ambient temperature, burial depth and soil thermal resistivity), listed with Fig. 2.

B. Thermal Model

This section describes the development of a 2D finite element analysis (FEA) of the submarine cable thermal model using the commercial software COMSOL. In order to predict the temperature distribution within the cable, the heat transfer equation of thermal conduction in transient state is applied [START_REF] Long | Essential Heat Transfer. s.l[END_REF]:

ρ C_p ∂T/∂t = ∇·(K ∇T) + Q

where ρ is the mass density (kg.m-3), C_p is the specific heat capacity (J.kg-1.K-1), T is the cable absolute temperature (K), K is the thermal conductivity (W.m-1.K-1) and Q is a heat source (W.m-3). The heat sources in cable installations can be divided into two generic groups: heat generated in conductors and heat generated in insulators. The losses in metallic elements are the most significant losses in a cable. They are caused by Joule losses due to impressed currents, circulating currents or induced currents (also referred to as "eddy currents").
The heat produced by the cable metallic components (namely conductors, sheaths and armour) can be calculated based on the equations provided in IEC standard 60287-1-1 [START_REF]Calculation of the current rating: Part 1-1 Current rating equations (100% load factor) and calculation of losses[END_REF]. First, the Joule losses W_c of the conductor can be calculated by using the following formula:

W_c = R_20°C (1 + α(T − 20)) (1 + y_s + y_p) I_c²

where R_20°C is the resistance of the cable conductor at 20°C (Ω/km), α is the constant mass temperature coefficient at 20°C (K-1), and I_c (A) is the conductor current. Terms y_p and y_s are the skin effect factor and the proximity effect factor respectively. The sheath and armour losses (W_s and W_a respectively) can be calculated as:

W_s = λ_1 W_c and W_a = λ_2 W_c

where λ_1 and λ_2 are the dissipation (loss) factors of the sheath and of the armour respectively. Insulating materials also produce heat. The dielectric losses in the insulation are given by:

W_d = ω C U_0² tan δ

where U_0 is the applied voltage, C is the capacitance per unit length, δ is the loss angle, and ω is the angular frequency. However, the heat produced in the insulating layers is expected to be significant, compared to that produced by the metallic components, only under certain high-voltage conditions. Finally, the boundary conditions for this model are illustrated in Fig. 2. The modelled region is 7 m deep (H=7 m). The two side boundaries A and B are placed sufficiently far away from the cable so that there is no appreciable change in temperature with distance in the x-direction close to the boundaries. The cable is placed in the middle of the modelled region with respect to the x-direction, and the region width is equal to 10 m (this value was proved sufficient in [START_REF] Swaffield | Methods for rating directly buried high voltage cable circuits[END_REF] for meeting the zero heat flux boundary conditions for sides A and B). The soil surface C is assumed to be at a constant ambient temperature of 12°C. The vertical sides A and B of the model are assumed to have a zero net heat flux across them due to their distance from the heat source. The boundary condition on sides A and B is thus defined as:

n · (K ∇T) = 0

where n is the unit vector normal to the surface.
On side D, the thermal exchanges via convection between the sea bed and the seawater must be taken into account. The heat convection exchange is defined as:

n · (K ∇T) = h (T_out − T)

where h is the heat transfer coefficient, T_out is the sea temperature and T is the temperature of the upper boundary of the sea bed (side D).

C. Current temporal profiles injected in the cable

The wave farm is composed of 15 to 20 identical heaving buoys controlled passively and described in another paper [START_REF] Kovaltchouk | Influence of control strategy on the global efficiency of a Direct Wave Energy Converter with Electric Power Take-Off[END_REF]. Different power output temporal profiles for a single WEC were computed by means of several combined simulation programmes described in [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF]. The first programme computes the wave excitation force at a single location in the wave farm. Then, the temporal profile of the excitation force is injected into a wave device model to obtain the corresponding electrical power output profile, from which the wave farm power output is calculated. In order to model the device aggregation effect, the power output profiles of the other WECs composing the farm are computed by shifting the power profile of a single device by a random time delay, as described in a paper mentioned earlier [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF]. These power profiles are then injected into an electrical grid numerical model, which is shown in Fig. 3. This model has been developed under the power system simulator PowerFactory [START_REF]DIgSILENT PowerFactory[END_REF] and is described in more detail in [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF].
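The aggregation procedure described above (one simulated WEC profile, replicated with random time delays) can be sketched as follows. The single-WEC profile used here is a synthetic stand-in, not the output of the hydrodynamic simulation chain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the simulated power output of a single WEC (kW):
# a wave-like oscillation (period ~9 s) plus noise, clipped at zero.
t = np.arange(0.0, 600.0, 0.1)  # 10 minutes sampled at 10 Hz
p_single = np.clip(60 + 50 * np.sin(2 * np.pi * t / 9.0)
                   + 8 * rng.standard_normal(t.size), 0.0, None)

def farm_profile(p, n_wec, rng):
    """Farm power output: sum of n_wec copies of a single WEC profile,
    each shifted by a random time delay (circular shift for simplicity)."""
    delays = rng.integers(0, p.size, size=n_wec)
    return sum(np.roll(p, d) for d in delays)

p_farm = farm_profile(p_single, 20, rng)
for name, p in (("single WEC", p_single), ("20-WEC farm", p_farm)):
    print(f"{name:12s} mean = {p.mean():7.1f} kW, std/mean = {p.std() / p.mean():.2f}")
```

The random delays de-correlate the wave-induced oscillations, so the relative fluctuation of the farm output is markedly lower than that of a single device; this is why aggregated profiles, rather than a scaled single-WEC profile, should be injected into the electrical grid model.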
The components of the offshore grid are highlighted in [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF].

A. Model validation under steady-state conditions

A steady-state load current equal to 290 A (i.e. equal to the steady-state capacity of the considered cable) is used for the analysis. If the model is valid, the conductor temperature should remain below its maximum allowed limit, which is equal to 90°C. Table III summarizes the calculation of the different heat sources needed to solve the heat transfer problem according to the equations given in IEC standard 60287-1-1 [START_REF]Calculation of the current rating: Part 1-1 Current rating equations (100% load factor) and calculation of losses[END_REF]. The same heat fluxes are used as source terms for the FEA model, which allows the temperatures calculated with the two methods to be compared. Fig. 8 shows the meshing of the submarine cable and its surrounding area, which require finer elements as they are the most important regions of the presented analysis. The areas which are far away from the cable can then be modelled with a coarser mesh. The steady-state temperature field distributions in the cable and in its environment are shown in Fig. 9 and Fig. 10 respectively. The calculation from the IEC standard leads to a temperature of 67°C for the copper cores and 39°C for the external sheath. It can be seen that the maximum temperature of the copper conductor resulting from the FEA is 75°C while the external sheath reaches 44°C, slightly higher but of the same order of magnitude as the IEC standard values. Note that both methods return copper core temperatures below the critical temperature of 90°C, which provides a safety margin at a normal current load of 290 A.
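As a rough, order-of-magnitude cross-check of the IEC route, the loss formulas of Section II.B can be evaluated and propagated through a simplified lumped thermal ladder. All numerical values below (DC resistance, loss factors, layer thermal resistances, capacitance) are typical illustrative assumptions for a 95 mm² copper core, not the SEM-REV cable data; still, the sketch lands in the same range as the conductor temperatures reported above (67°C from the IEC calculation, 75°C from the FEA):

```python
import math

# --- Losses (IEC 60287-1-1 style), per metre, per core unless noted ---
R20 = 0.193e-3          # DC resistance at 20 C (ohm/m), typical for 95 mm^2 Cu
ALPHA = 3.93e-3         # copper temperature coefficient (1/K)
Y_S, Y_P = 0.01, 0.01   # skin and proximity effect factors (assumed small)
I_C, THETA_C = 290.0, 67.0  # rated current (A), conductor temperature (C)

w_c = R20 * (1 + ALPHA * (THETA_C - 20)) * (1 + Y_S + Y_P) * I_C**2
w_s, w_a = 0.01 * w_c, 0.001 * w_c   # sheath/armour loss factors lambda1, lambda2
w_d = 2 * math.pi * 50 * 0.3e-9 * (20e3 / math.sqrt(3))**2 * 4e-4  # dielectric

# --- Simplified lumped thermal ladder: each layer adds a temperature drop
#     equal to (heat crossing it) x (its thermal resistance). Resistances
#     below are illustrative assumptions (K.m/W). ---
theta = 12.0                                 # seabed ambient temperature (C)
ladder = [
    (0.35, 3 * w_c),                         # insulation + sheaths
    (0.10, 3 * (w_c + w_s)),                 # bedding + armour
    (0.45, 3 * (w_c + w_s + w_a) + w_d),     # outer serving + soil
]
for r_th, flux in ladder:
    theta += r_th * flux
print(f"W_c per core = {w_c:.1f} W/m, conductor temperature = {theta:.0f} C")
```

The per-layer structure mirrors the equivalent thermal circuit of the IEC standard mentioned above, and the resulting Joule losses are of the same order as the value reported in Table III.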
Despite the higher temperatures resulting from the FEA, this comparison of results can be seen as a form of validation of the model, especially considering that the IEC standard uses a simplified model to calculate the temperature, i.e. an electric-equivalent circuit composed of thermal resistors and current sources. Hence, it can be concluded that the FEA model can be used to calculate the temperature of a submarine cable under a fluctuating current as generated by a group of WECs.

B. Thermal response under a fluctuating current profile

This section describes the transient thermal response of the submarine cable to different current profiles as generated by an array of wave energy devices considering several sea states, as described in Table II. The objective of this study is to investigate the levels of current which can be transmitted through a submarine cable without the conductor exceeding the thermal limit of 90°C. For each simulated case, we consider the maximum value of the current and its percentage with respect to the continuous current rating of the cable, i.e. 290 A. Table IV shows the maximum current of each current profile. It is important to highlight that these maximum currents can be far above the continuous current rating. Simulation results of such a thermal problem depend on the initial conditions. Hence, it is important to accurately define the initial thermal conditions of the surrounding soil. The simplest initial condition which can be defined is a uniform temperature field. The value of this initial thermal condition should correspond to the case where the cable is subject to a current load equal to the average of the fluctuating current profile to be applied afterwards. The role of this first phase of the simulation is to quickly bring the cable temperature close to the range within which it is expected to vary once the fluctuating current profile is applied, thus reducing the simulation time.
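The warm-start strategy described above, and the smoothing role of the cable thermal inertia, can be illustrated with a minimal first-order (lumped RC) thermal model. All parameters are illustrative assumptions, loosely tuned so that the 290 A steady-state rating corresponds to a conductor temperature near the values reported in the previous section; they are not those of the COMSOL model:

```python
import numpy as np

R_TH = 1.0       # K.m/W, lumped cable + soil thermal resistance (assumed)
R_EL = 7e-4      # ohm/m, AC resistance of the three cores combined (assumed)
TAU = 3600.0     # s, thermal time constant (assumed, order of an hour)
T_AMB, DT = 12.0, 1.0

def simulate(current):
    """Explicit Euler on tau*dT/dt = (T_amb + R_th*R_el*I^2) - T,
    initialised (warm start) at the temperature of the average current."""
    theta_now = T_AMB + R_TH * R_EL * current.mean() ** 2
    out = np.empty(current.size)
    for k, i in enumerate(current):
        theta_now += DT / TAU * (T_AMB + R_TH * R_EL * i * i - theta_now)
        out[k] = theta_now
    return out

t = np.arange(0.0, 20 * 3600.0, DT)
# Fluctuating current: 200 A base with rectified wave-induced peaks at 400 A,
# i.e. repeatedly well above the 290 A steady-state rating.
i_wave = 200.0 + 200.0 * np.clip(np.sin(2 * np.pi * t / 9.0), 0.0, None)
theta = simulate(i_wave)
print(f"peak current {i_wave.max():.0f} A "
      f"({100 * i_wave.max() / 290:.0f}% of rating), "
      f"peak temperature {theta.max():.1f} C")
```

Because the assumed thermal time constant is several orders of magnitude longer than the wave period, the temperature tracks the mean of I² rather than its peaks: the conductor stays well below 90°C even though the current repeatedly exceeds the 290 A rating, which is the qualitative behaviour observed in the full FEA results below.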
We used enough sequential repetitions of the current profiles depicted in Figs. 4 to 7 to reach a simulation time of 100 ks, i.e. a duration that is necessary to reach close-to-equilibrium conditions for the thermal problem. Figs. 11, 12, 13 and 14 show the conductor thermal response versus time, for Cases 1 to 4 respectively. In these cases, the maximum current values range from 97% to 287% of the cable capacity. The maximum temperature does not exceed the allowed limit of 90°C for the first two cases. In other words, as the temperature is below the allowed limit of 90°C, the cable could be considered as overrated with respect to the considered current profiles. The third case presents good agreement between the sea state and the sizing of the cable (Fig. 13), as the temperature is close to the maximum allowed value of 90°C. Fig. 14 shows the simulation results for Case 4. In this case, the maximum current value reaches 287% of the cable capacity and the temperature exceeds 90°C. Hence, the cable can be considered as underrated here. In summary, under the conditions considered in this study, it appears that the cable is able to carry a fluctuating current profile whose maximum value is approximately equal to two and a half times the cable steady-state current rating.

CONCLUSIONS

This paper describes the results of a study focusing on the electrothermal response of a submarine cable to fluctuating current profiles as generated by a wave farm under different conditions, in particular regarding sea-state characteristics and the number of devices composing the farm. It was shown that the cable temperature remained below the allowed limit (equal to 90°C for the considered cable) when the maximum value of the injected current profile was as high as about two and a half times the (steady-state) rated current.
Hence, it could be possible to size a cable to be included in a wave farm to half of the maximum current, rather than to 100% of this current, while keeping a good safety margin. This could lead to significant savings in the CapEx of wave farm projects, thus contributing to improving the economic competitiveness of wave energy.

ACKNOWLEDGMENT

The research work presented in this paper was conducted in the frame of the EMODI project (ANR-14-CE05-0032) funded by the French National Agency of Research (ANR), which is gratefully acknowledged. This project is also supported by the S2E2 innovation centre ("pôle de compétitivité").

Fig. 1 Cross section of the considered three-phase export cable as modelled under COMSOL. The sheaths are made of polyethylene and the bedding of polypropylene yarn. The surrounding armour is made of galvanized steel. This cable is currently installed in the SEM-REV test site located off Le Croisic, France, and managed by Ecole Centrale de Nantes. Usual values regarding the cable material thermal properties, as provided in IEC standard 60853-2 [15], are presented in Table I.

Fig. 2 Illustration of the cable environment and of its boundary conditions. The current rating calculation is based on the following assumptions:
- Maximum allowed conductor temperature at continuous current load: 90°C
- Current frequency: 50 Hz
- Ambient temperature: 12°C
- Cable burial depth: 1.5 m
- Thermal resistivity of surroundings: 0.7 K.m/W

Fig. 3 Wave farm electrical network model as developed under PowerFactory

Fig. 4 Current profile flowing through the cable (Case 1)

Fig. 9 Steady-state temperature field simulation (°C) of the submarine cable under normal load conditions.

Fig. 10 Steady-state temperature field simulation (°C) of the cable environment.
The blue arrows represent the heat flux.

Fig. 13 Cable temperature versus time (Case 3)

The modelled offshore network is composed of 15 to 20 WECs, of a 0.4/10 kV transformer, of a 6.5 km-long cable, and of a 10/20 kV transformer. The rest of the network, located onshore, is shaded in blue in the figure. Four current temporal profiles were simulated for different sea-state characteristics (the significant wave height H_s and the peak period T_p) and device numbers, as detailed in Table II. They are shown in Figs. 4 to 7.

TABLE III HEAT LOSSES IN THE SUBMARINE CABLE

Losses type                     Numerical value (W/m)
Conductor Joule losses W_c      21.025
Sheath losses W_s               0.2
Armour losses W_a               0.027
Dielectric losses W_d           0.028
00175431
en
[ "spi.auto" ]
2024/03/05 22:32:10
2006
https://hal.science/hal-00175431/file/wodes06_VG-ODS-JMF.pdf
Vincent Gourcuff email: [email protected] Olivier De Smet email: [email protected] Jean-Marc Faure email: [email protected]

Efficient representation for formal verification of PLC programs *

This paper addresses the scalability of model-checking using the NuSMV model-checker. To avoid, or at least limit, combinatorial explosion, an efficient representation of PLC programs is proposed. This representation includes only the states that are meaningful for properties proof. A method to translate PLC programs developed in Structured Text into NuSMV models based on this representation is described and exemplified on several examples. The results (state space size and verification time) obtained with models constructed using this method are compared to those obtained with previously published methods, so as to assess the efficiency of the proposed representation.

I. INTRODUCTION

Formal verification of PLC (Programmable Logic Controllers) programs thanks to model-checking tools has been addressed by many researchers ( [START_REF] Moon | Modeling programmable logic controllers for logic verification[END_REF], [START_REF] Rausch | Formal verification of PLC programs[END_REF], [START_REF] Huuck | A model-checking approach to safe SFCs[END_REF], [START_REF] Zoubek | Automatic verification of temporal and timed properties of control programs[END_REF], [START_REF] Frey | Formal methods in PLC programming[END_REF], [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF], [START_REF] Bel Mokadem | Verification of a timed multitask system with Uppaal[END_REF], [START_REF] Jiménez-Fraustro | A synchronous model of IEC 61131 PLC languages in SIGNAL[END_REF]).
These works have yielded formal semantics of the IEC 61131-3 standardized languages [START_REF]Programmable controllers -Part 3[END_REF] as well as rules to translate PLC programs into formal models that can be taken as inputs of model-checkers such as SMV [START_REF] Mcmillan | The SMV Language[END_REF] or UPPAAL [START_REF] Bengtsson | UP-PAAL -a tool suite for automatic verification of real-time systems[END_REF]. Despite these valuable results, it is easy to observe that model-checking is not employed daily in companies that develop PLC programs (see ( [START_REF] Lucas | A study of current logic design practices in the automotive manufacturing industry[END_REF]) for a comprehensive study of logic design practices). Automation engineers prefer to use the traditional, while tedious and not exhaustive, simulation techniques to verify that the programs they have developed fulfill the application requirements. Several reasons can be put forward to explain this situation: specifying formal properties in temporal logic or in the form of timed automata is an extremely tough task for most engineers; model-checkers provide, in case of negative proof, counterexamples that are difficult to interpret; PLC vendors do not propose commercial software able to automatically translate PLC programs into formal models, ... All these difficulties are real and solutions must be found to overcome them, e.g. libraries of application-oriented properties, explanations of counterexamples in suitable languages, automatic translation software. Nevertheless, in our view, the main obstacle to industrial use of formal verification is the combinatorial explosion that occurs when dealing with large-size control programs. Formal models that underlie model-checking are indeed discrete state models such as finite state machines or timed automata.
Even if properties are proved symbolically, using binary decision diagrams (BDDs) for instance, existing methods produce, from industrial, large-size PLC programs, models that include too many states to be verified by present model-checking tools. In that case, no proof can be obtained and formal verification is then useless. The aim of the research presented in this paper is to tackle, or at least to lessen, this problem by proposing a translation method that yields, from PLC programs, formal models far smaller than those obtained with existing methods. These novel models will include only the states that are meaningful for properties proof and will then be less sensitive to combinatorial explosion. This efficient representation of PLC programs will contribute to improving the scalability of model-checkers and to favoring their industrial use. This paper includes five sections. Section 2 delineates the frame of our research. The principle of the translation method is explained in section 3. Section 4 describes how efficient NuSMV models can be obtained from PLC programs developed in a standardized language thanks to this method, while section 5 presents experimental results. Prospects for extending these works are given in section 6. PLCs (Figure 1) are automation components that receive logic input signals coming from sensors, operators or other PLCs and send logic output signals to actuators or other controllers. The control algorithms that specify the values of outputs according to the current values of inputs and the previous values of outputs are implemented within PLCs in programs written in standardized languages, such as Ladder Diagram (LD), Structured Text (ST) or Instruction List (IL). These programs run under a real-time operating system whose scheduler may be multi- or mono-task. This paper focuses only on mono-task schedulers.
Given this restriction, a PLC performs a cyclic task, termed the PLC cycle, that includes three steps: inputs reading, program execution, outputs updating. The period of this task may be constant (periodic scan) or may vary (cyclic scan).

II. MODEL-CHECKING OF LOGIC CONTROLLERS

Previous works that have been carried out to check PLC program properties by using existing model-checkers addressed either timed ( [START_REF] Zoubek | Automatic verification of temporal and timed properties of control programs[END_REF], [START_REF] Bel Mokadem | Verification of a timed multitask system with Uppaal[END_REF]) or untimed ([1], [START_REF] Rausch | Formal verification of PLC programs[END_REF], [START_REF] Huuck | A model-checking approach to safe SFCs[END_REF], [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF], [START_REF] Jiménez-Fraustro | A synchronous model of IEC 61131 PLC languages in SIGNAL[END_REF]) model-checking. Since our objective is to facilitate industrial use of formal verification techniques by avoiding or limiting combinatorial explosion, and since this objective seems more easily reachable for untimed systems, only untimed model-checking is considered in this paper; the results presented below were obtained with NuSMV [START_REF] Cimatti | NuSMV Version 2: An OpenSource Tool for Symbolic Model Checking[END_REF], though similar results would be obtained with other model-checkers of the same class. It is also worth pointing out that, given the kind of systems that are considered, periodic and cyclic tasks behave in the same fashion: the PLC cycle duration is meaningless.
[START_REF] Rossi | Validation formelle de programmes ladder diagram pour automates programmables industriels (formal verification of PLC program written in ladder diagram)[END_REF] for instance expresses the semantics of each element (contact, coil, links, ...) of LD in the form of a small state automaton. The formal behavior of a given program is then obtained by composition of the different state automata that describe its elements. This method relies upon a detailed semantics of Ladder Diagram and can be extended to programs written in several languages, but it easily gives rise to state space explosion, even for rather small examples. A more efficient approach ([2], [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF]) translates each program statement into an SMV next function. Each PLC cycle is then modeled by a sequence of states, the first and last states being characterized respectively by the values of input-output variables at the input reading and output updating steps, and the intermediary states by the values of these variables after execution of each statement. Figure 2 illustrates this method on a didactic example written in ST. Throughout this paper, PLC program examples will be given in ST. ST is a textual language, similar to PASCAL, but tailor-made for automation engineers, for it includes statements to invoke and to use the outputs of Function Blocks (FB) such as RS (SR) -reset (set) dominant memory -or RE (FE) -rising (falling) edge. This language is advocated for the control systems of power plants that are targeted in the project. Equivalent programs in other sequentially executed languages, like programs written in IL or LD, can be obtained without difficulty.
From this program, it is possible to obtain by using the previous method (translation of each statement into an SMV next function) an execution trace, part of which is shown in Figure 2, assuming that the values of the variables in the initial state (defined when setting up the controller) and the values of the input variables at the inputs reading steps of the first and second PLC cycles are respectively:
• Initial values of variables: I1 = 1, I2 = 0, I3 = 1, I4 = 0, O1 = 0, O2 = 0, O3 = 0 and O4 = 1
• Input variables values at the beginning of the first PLC cycle: I1 = 0, I2 = 0, I3 = 1 and I4 = 1
• Input variables values at the beginning of the second PLC cycle: I1 = 1, I2 = 1, I3 = 0 and I4 = 1
It matters to highlight that the values of the input variables remain constant in all the states of one PLC cycle.
In addition to the formal model of the controller, model-checkers need a set of formal properties to prove. Two kinds of properties are generally considered:
• Intrinsic properties, such as absence of infinite loop, no deadlock, ..., which refer to the behavior of the controller independently of its environment;
• Extrinsic properties, which refer to the behavior of inputs and outputs, e.g. commission of outputs for a given combination of inputs, always-forbidden combination of outputs, allowed sequences of inputs-outputs, ...
This paper focuses only on extrinsic properties. Referring to output behavior, these properties directly impact safety and dependability of the controlled process and are therefore more crucial. If one (or several) of them is not satisfied, hazardous events may occur, leading to significant failures. If focus is put on extrinsic properties verification, the two approaches described above lead to state automata with numerous states that are not meaningful. It can indeed be seen in Figure 2 that the intermediary states defined for each statement are not useful in that case; extrinsic properties are related only to the values of input-output variables when updating the outputs, i.e. at the end of the PLC cycle. A similar reasoning may be done for the other method. Hence an efficient representation for formal verification will include only the states describing the values of input-output variables when updating outputs (shaded states in Figure 2).
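The gap between the two representations can be seen on a toy simulation: modeling every statement yields one state per statement per cycle, whereas the efficient representation keeps only the end-of-cycle states. The four-statement program below only mirrors the shape of the didactic example (two assignments, an IF selection, an assignment); its expressions and the resulting values are hypothetical:

```python
def trace_states(statements, inputs_per_cycle, state):
    """Return (per-statement states, end-of-cycle states) of a
    sequentially executed PLC program."""
    per_statement, end_of_cycle = [], []
    for inputs in inputs_per_cycle:
        state = dict(state, **inputs)            # inputs reading
        for stmt in statements:                  # program execution
            stmt(state)
            per_statement.append(dict(state))    # one state per statement
        end_of_cycle.append(dict(state))         # outputs updating
    return per_statement, end_of_cycle

# Hypothetical program shaped like the example: O1 := ...; O2 := ...;
# IF I1 THEN O3 := O1; O4 := ...
statements = [
    lambda v: v.update(O1=v["I1"] and v["I2"]),
    lambda v: v.update(O2=v["I3"] or v["I4"]),
    lambda v: v.update(O3=v["O1"]) if v["I1"] else None,
    lambda v: v.update(O4=not v["O2"]),
]
inputs = [{"I1": False, "I2": False, "I3": True, "I4": True},
          {"I1": True, "I2": True, "I3": False, "I4": True}]
full, compact = trace_states(statements, inputs,
                             {"O1": False, "O2": False, "O3": False, "O4": True})
```

Here the statement-level trace contains 4 states per cycle (8 in total) while the compact view keeps only 2, which is exactly the reduction the proposed representation aims at.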
This representation may be obtained directly from a PLC program by applying the method whose principle is explained in the next section.

III. METHOD PRINCIPLE

A. Assumptions

In what follows it is assumed that:
• PLC programs are executed sequentially;
• only Boolean variables are used;
• internal variables may be included in the program;
• only the Boolean operators defined in the IEC 61131-3 standard (NOT, AND, OR, XOR) are allowed;
• only the following statements of the ST language are allowed: assignment, function and function block (FB) control statements, IF and CASE selection statements; iteration statements (FOR, WHILE, REPEAT) are forbidden;
• multiple assignments of the same variable are possible;
• Boolean FBs, such as the set and reset dominant memories defined in the standard or FBs that implement application-specific control rules, like actuator starting or shutting-down sequences, may be included in a program.
The first two assumptions are simple and can be made for programs in ST, LD or IL. The third assumption means that a program computes the values of internal and output variables from those of input variables and of computed (internal and output) variables; this allows us to consider internal variables in the same way as outputs in what follows. The fourth and fifth assumptions apply only to ST programs, but similar assumptions for LD or IL programs can easily be drawn up. Iterations are forbidden because they can lead to cycle times that are too long to comply with real-time requirements. The sixth assumption may be puzzling, for it contradicts the usual programming rule which advocates that each variable be assigned only once. Even if this programming rule is helpful when developing a software module from scratch, this assumption must be introduced to cope with industrial PLC programs, in which it is quite usual to find multiple assignments of the same variable. Two reasons can be put forward to explain this situation. First, industrial PLC programs are often developed from previous similar ones; program designers then copy and paste parts of previous programs into the new program. This reuse practice may lead to assigning one variable several times. Second, an ST program may contain both normal assignments and assignments included within selection statements; this is another reason that explains multiple assignments. As our objective is to prove properties on existing programs, without modifying them prior to verification, this specific feature must be taken into account. It will be shown below that multiple assignments do not prevent the construction of an efficient representation. Figure 3 outlines the translation method that has been developed to obtain an efficient representation of PLC programs.
As shown in this figure, this method includes two main steps: static analysis of the program and generation of the NuSMV model that describes formally the behavior of the program with regard to its inputs-outputs. In the second case, the computation of the value of one output variable must use the values of output variables for this cycle if the last assignment of these output variables is located upstream in the program, or the values of output variables at the previous PLC cycle (cycle i) if those variables are assigned downstream; this computation will obviously use the values of input variables for cycle i+1. Hence, the main objective of static analysis is to determine, for each output variable, whether the value of each variable involved in the computation of the value of this output variable at PLC cycle i+1 is related to PLC cycle i+1 or to PLC cycle i. Static analysis is exemplified on the program given in Figure 4. This ST program computes the values of five output variables (O1, ..., O5) from those of four input variables (I1, ..., I4) and includes only allowed statements. Some specific features of this example are to be highlighted:
• the IF statement does not specify the value of O3 if the condition following the IF is not true; this is allowed in the ST language and means that the value of O3 remains the same when this condition is false;
• the assignment of O4 uses the output of an RS (reset dominant memory) FB;
• one output variable (O1) is assigned twice.
Scanning the program sequentially from top to bottom, statement by statement, static analysis yields the dependency relations represented graphically in Figure 5 a). In this figure, an arrow from variable X to variable Y means that the value of Y depends on the value of X (or that the value of X is used to compute the value of Y). Each statement gives rise to one dependency relation.
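The upstream/downstream rule above can be sketched as a small static analyzer. The statement list below is only a skeleton of the Figure 4 program, encoded as (target, referenced variables) pairs: the dependencies of O3, O4 and O5 and the double assignment of O1 follow the text, but the exact expressions (in particular the O2 statement and the second O1 expression) are hypothetical.

```python
def classify_dependencies(statements):
    """For each assignment, tag every referenced variable with the PLC cycle
    its value comes from: 'i+1' if it is an input or if its last assignment
    lies strictly upstream; ('inline', pos) if an upstream definition exists
    but the variable is reassigned later (that definition must be developed);
    'i' (previous cycle) if it is only assigned at or after this statement."""
    positions = {}
    for pos, (target, _) in enumerate(statements):
        positions.setdefault(target, []).append(pos)
    relations = {}
    for pos, (target, refs) in enumerate(statements):
        tagged = {}
        for v in refs:
            assigned = positions.get(v, [])
            upstream = [p for p in assigned if p < pos]
            if not assigned:
                tagged[v] = "i+1"                     # input variable
            elif upstream and upstream[-1] == assigned[-1]:
                tagged[v] = "i+1"                     # last assignment upstream
            elif upstream:
                tagged[v] = ("inline", upstream[-1])  # develop that definition
            else:
                tagged[v] = "i"                       # assigned downstream only
        relations[target] = tagged                    # a later assignment wins
    return relations

program = [
    ("O1", ["I1", "I2"]),              # O1 := I1 AND I2
    ("O2", ["I2", "I3"]),              # hypothetical O2 statement
    ("O3", ["I3", "I4", "O1", "O3"]),  # IF ... : O3 may keep its value
    ("O4", ["I1", "O5", "O4"]),        # RS memory: depends on its own state
    ("O5", ["O2", "O4"]),
    ("O1", ["I1", "I3"]),              # hypothetical second assignment of O1
]
rel = classify_dependencies(program)
```

On this skeleton, the analysis reproduces the relations discussed in the text: O5 uses O2 and O4 at cycle i+1, O4 uses O5 (and its own state) at cycle i, and the final O3 relation requires developing the first definition of O1.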
For instance, the dependency relation obtained from the first statement means that the value of O1 depends on the values of I1 and I2, the third relation that the value of O3 is computed from the values of I3, I4, O1, and O3 itself (in case of a false condition), the fourth relation that the value of O4 is computed from the values of I1, O5, and O4 itself (if the two inputs of a memory are false, the output stays in its previous state), .... From this first set of relations, it is then possible to build another set of more detailed relations such that:
• there is only one dependency relation for each output variable (multiple assignments are removed);
• dependency relations are developed, if possible;
• the value of each output variable Oj (j: positive integer) at PLC cycle i+1, noted Oj,i+1, is obtained from the values of input variables for this cycle, noted Ik,i+1 (k: positive integer), and from the values of output variables for this cycle (Oj,i+1) or for the previous one (Oj,i).
This second set of relations is presented in Figure 5 b). Only the relation coming from the latter assignment of O1 has been kept. The first relation of the previous set has nevertheless permitted to obtain the final dependency relation of O3: the value of this variable at cycle i+1 is obtained from the values of I1, I2, I3, I4 for cycle i+1 and the value of O3 at cycle i. The computation of the value of O4 at cycle i+1 uses the value of O5 at cycle i, for this variable is assigned after O4 in the program, whilst the value of O5 at cycle i+1 is computed from the values of O2 and O4 at this same cycle, because these two variables have been assigned upstream in the program. This set of dependency relations, involving the values of output variables for two successive PLC cycles, makes it possible to translate PLC programs efficiently into NuSMV models, as explained in the next section.

IV. TRANSLATING ST PROGRAMS INTO NUSMV MODELS

It is assumed in this section that the reader has a basic knowledge of the model-checker NuSMV; readers who want to know more about this proof tool can refer to [7]. To check a system, NuSMV takes as input a transition relation that specifies the behavior of a Finite State Machine (FSM) which is assumed to represent this system. The transition relation of the FSM is expressed by defining the values of the variables in the next state.
A. Translation algorithm

Each ST statement that gave rise to one of the final dependency relations is translated into one NuSMV assignment; useless ST statements (assignments that are cancelled by subsequent assignments) are thus not translated. The set of useful statements is noted Pr in what follows. The values of the variables within one assignment are obtained from the corresponding dependency relation. If the value of a variable in this relation is that at PLC cycle i+1, then the next value of this variable will be introduced in the corresponding NuSMV assignment, using the next function; if the dependency relation mentions the value at cycle i, then the corresponding NuSMV assignment will employ the current value of the variable. Given these translation rules, the translation algorithm described in Figure 6 has been developed. This algorithm yields a NuSMV model from a set of statements Pr issued from a PLC program.

BEGIN PLC_prog_TO_NuSMV_model(Pr)
    FOR each statement Si of Pr:
        IF Si is an assignment (Vi := expression_i) THEN
            FOR each variable Vk in expression_i:
                Replace Vk by the variable pointed out in the dependency graph (Vk,i or Vk,i+1)
        ELIF Si is a conditional structure (IF cond THEN stmt1 ELSE stmt2) THEN
            FOR each variable Vk in cond:
                Replace Vk by the variable pointed out in the dependency graph (Vk,i or Vk,i+1)
            FOR each variable Vm assigned in Si:
                Replace the assignment of Vm by:
                    "case cond : <assignment of Vm in PLC_prog_TO_NuSMV_model(stmt1)> ;
                          !cond : <assignment of Vm in PLC_prog_TO_NuSMV_model(stmt2)> ;
                     esac ;"

Fig. 6. Translation algorithm

B. Taking into account Function Blocks

If an ST assignment includes an expression involving a Boolean Function Block (FB), the behavior of this FB must be detailed in the corresponding NuSMV assignment. Hence a library of generic models describing in NuSMV syntax the behavior of the usual FBs has been developed.
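For illustration, the reset-dominant RS memory found in such a library can be modeled by the following Python function (the actual library models are written in NuSMV syntax; this is only a behavioral sketch):

```python
def rs(set_, reset, q_prev):
    """Reset-dominant memory: Reset forces the output to false; otherwise
    Set forces it to true; if both inputs are false, the output keeps its
    previous state."""
    if reset:
        return False
    if set_:
        return True
    return q_prev
```

In a NuSMV assignment, an instance of this generic model defines next(Q) from the values of Set, Reset and Q selected by the dependency analysis, following the same three rules.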
When translating ST assignments that include instances of FBs, instances of these generic models will be introduced into the NuSMV assignments. The RS (reset dominant memory) FB, for instance, has two inputs, noted Set and Reset, and one output Q. Its behavior is recalled below:
• If Reset is true, then Q is false;
• If Reset is false and Set is true, then Q is true;
• If both inputs are false, Q keeps its previous value.
Using the algorithm of Figure 6, the NuSMV model presented in Figure 7 can be obtained from the program of the previous section. It matters to emphasize that the translation algorithm does not introduce auxiliary variables, such as a line counter or an end-of-cycle flag, unlike the method proposed in [6]. It remains nevertheless to assess the efficiency of this representation.

V. ASSESSMENT OF THE REPRESENTATION EFFICIENCY

Several experiments have been carried out to assess the efficiency of the representation proposed in this paper. To facilitate these experiments, an automatic translation program based on the method presented in the previous sections has been developed.

A. First experiment

The objective of this experiment was to compare, on the simple example of Figure 4, the sizes of the state spaces of the NuSMV models obtained with the representation proposed in [6], i.e. direct translation of each statement of the PLC program into one NuSMV assignment, and with that presented in this paper.

                              Reachable states      System diameter
    representation of [6]     314 out of 14336      22
    proposed representation   21 out of 512         2

TABLE I. State space sizes of the program presented in Figure 4.

The two NuSMV models have first been compared, using behavioral equivalence techniques, so as to verify that they behave in the same manner.
This comparison gave a positive result: the sequence of outputs generated by the two models is the same whatever the sequence of inputs. Then the sizes of their state spaces have been computed, using the NuSMV forward check function, as shown in Table I. This table contains, for each representation, the number of reachable states among the possible states (e.g. 314 out of 14336 means that 314 states are really reachable among the 14336 possible ones), as well as the system diameter: the minimum number of iterations of the NuSMV model needed to obtain all the reachable states. These results show clearly that, even for a simple example, the proposed representation reduces the size of the state space by roughly one order of magnitude.

B. Second experiment

The second experiment aimed at assessing the gains in time and in memory size, if any, due to the new representation when proving properties. This experiment has been performed using the test-bed example presented in [6]: the controller of a Fischertechnik system, for which numerical results were already available. Once again two models have been developed and the same properties have been checked on both. This experiment shows that the proposed representation significantly reduces the verification time and the memory consumption. The ratio between the verification times obtained with the two representations, for instance, varies between 9000 and 600, depending on the property. Similar results are obtained with the other properties.

C. Third experiment

This third experiment has been performed with industrial programs developed for the control of a thermal power plant. The control system of this plant comprises 175 PLCs connected by networks. All the programs running on these PLCs have been translated as explained previously.
The objective of this experiment was merely to assess the maximum, mean and minimum sizes of the state spaces of the models obtained from this set of industrial programs when using the proposed representation. These values are given on the fourth line of Table III. Even if the sizes of the state spaces are very different, this experiment shows clearly the possibility of translating real PLC programs without combinatorial explosion. Moreover these state spaces can be explored by the model-checker in a reasonable time, a mandatory condition for checking properties; indeed, only 8 seconds are necessary to explore all the state spaces of these programs. A secondary result is given at the last line of this table: the translation time, i.e. the time necessary to obtain from the set of programs a set of NuSMV models in the presented representation, complies with engineering constraints; translating one PLC program into one NuSMV model will not slow down the PLC program design process. Even if it is not possible to obtain from these three experiments definitive numerical conclusions, such as a state space reduction rate or a verification time improvement ratio, they have allowed to illustrate the benefits of the proposed representation on a large concrete example coming from industry.

VI. CONCLUSION

The representation of PLC programs proposed in this paper can contribute to favoring the dissemination of model-checking techniques, for it strongly lessens state space explosion problems and reduces verification time. The examples given in the paper were written in the ST language. Nevertheless programs written in the LD or IL languages can be represented in the same manner; the principle of the translation method is the same, only the translation rules of the statements are to be modified. Ongoing works concern an extension of this representation to take into account integer variables and the development of a similar representation for timed model-checking.

Fig. 1. PLC basic components.
Fig. 2. A simple program and part of the resulting trace with the method presented in [6].
Fig. 3. Method overview.
Fig. 4. PLC program example.
Fig. 5. Dependency relations obtained by static analysis: a) ordered intermediate relations; b) final relations.
Fig. 7. NuSMV model of the program presented in Figure 4.

Table II gives the duration and memory consumption of the checking process for two properties. These results were obtained by using NuSMV, version 2.3.1, on a PC P4 3.2 GHz, with 1 GB of RAM, under Windows XP.

                          representation of [6]    proposed representation
    liveness property     5h / 526MB               2s / 8MB
    safety property       20min / 200MB            2s / 8MB

TABLE II. Time and memory required for properties verification.

    Number of programs                       175
    Output variables                         max: 47    min: 1    sum: 1822
    Input variables                          max: 50    min: 2    sum: 2329
    State space size of each program         max: 8.10^28    min: 10^5    mean: 5.10^26
    Saturation time of all state spaces      8 sec
    Whole time for translation               50 sec

TABLE III. Results for a set of industrial programs.

* This work was carried out in the frame of a research project funded by Alstom Power Plant Information and Control Systems, Engineering Tools Department.
halid: 01754478
language: en
domains: info.info-ti, info.info-ne, info.info-lg, info.info-cv
timestamp: 2024/03/05 22:32:10
year: 2017
url: https://hal.science/tel-01754478/file/New%20Architectures%20for%20Handwritten%20Mathematical%20Expressions%20Recognition_vf-fr.pdf
Abbreviations:
CYK: Cocke-Younger-Kasami
DT: Delaunay Triangulation
CROHME: Competition on Recognition of Handwritten Mathematical Expressions
CTC: Connectionist Temporal Classification
Dimensional Probabilistic Context-Free Grammars
AC: Averaged Center
ANNs: Artificial Neural Networks
BAR: Block Angle Range
BB: Bounding Box
BBC: Bounding Box Center
BPTT: Back Propagation Through Time
CPP: Closest Point Pair
UAR: Unblocked Angle Range
VAR: Visibility Angle Range

Keywords: Mathematical expression recognition, recurrent neural networks, BLSTM, online handwriting

Thanks to the various encounters and choices in life, I could have the experience of studying in France at a fairly young age. Along the way, I met a lot of beautiful people and things. Christian and Harold, you are such nice professors. This thesis would not have been possible without your considerate guidance, advice and encouragement. Thank you for sharing your knowledge and experience, for reading my papers and thesis over and over and providing meaningful comments. Your serious attitude towards work has a deep impact on me, today and tomorrow. Harold, thanks for your help with the technical side during the three years of study. Thanks to all the colleagues from IVC/IRCCyN and IPI/LS2N for giving me such a nice working environment, for so many warm moments, and for helping me when I needed someone to speak French to negotiate on the phone, many times. Suiyi and Zhaoxin, thanks for being rice friends with me at each lunch in Polytech. Thanks to all the friends I met in Nantes.
Introduction

In this thesis, we explore the idea of online handwritten Mathematical Expression (ME) interpretation using Bidirectional Long Short-Term Memory (BLSTM) networks with a Connectionist Temporal Classification (CTC) topology, and finally build a graph-driven recognition system, bypassing the high time complexity and manual work of classical grammar-driven systems. The advanced recurrent neural network BLSTM with a CTC output layer has achieved great success in sequence labeling tasks, such as text and speech recognition. However, the move from sequence recognition to mathematical expression recognition is far from straightforward.
Unlike text or speech, where only a left-right (or past-future) relationship is involved, an ME has a 2-dimensional (2-D) structure consisting of relationships like subscript and superscript. To solve this recognition problem, we propose a graph-driven system, extending the chain-structured BLSTM to a tree-structured topology able to handle the 2-D structure of MEs, and extending CTC to a local CTC that relatively constrains the outputs. In the first section of this chapter, we introduce the motivation of our work, from both the research and the practical application points of view. Section 1.2 provides a global view of the mathematical expression recognition problem, covering some basic concepts and the challenges involved in it. Then, in Section 1.3, we concisely describe the proposed solution, to offer the reader an overall view of the main contributions of this work. The thesis structure is presented at the end of the chapter.

Motivation

A visual language is defined as any form of communication that relies on two- or three-dimensional graphics rather than simply (relatively) linear text [Kremer, 1998]. Mathematical expressions, plans and musical notations are common cases of visual languages [Marriott]. As an intuitive and easily comprehensible knowledge representation model, the mathematical expression (Figure 1.1) can help the dissemination of knowledge in related domains and is therefore essential in scientific documents. Currently, common ways to input mathematical expressions into electronic devices include typesetting systems such as LaTeX and mathematical editors such as the one embedded in MS Word. But these ways require that users master a large number of codes and syntactic rules, or handle troublesome manipulations with keyboard and mouse as interface.
As another option, being able to input mathematical expressions by hand with a pen tablet, as we write them on paper, is a more efficient and direct means to help the preparation of scientific documents. Thus comes the problem of handwritten mathematical expression recognition. Incidentally, the recent large development of touch-screen devices also drives the research in this field. Handwritten mathematical expression recognition is an appealing topic in the pattern recognition field since it poses a significant research challenge and underpins many practical applications. From a scientific point of view, a large set of symbols (more than 100) needs to be recognized, as well as the 2-dimensional (2-D) structures (specifically the relationships between pairs of symbols, for example superscript and subscript), both of which increase the difficulty of this recognition problem. With regard to applications, it offers an easy and direct way to input MEs into computers, and therefore improves productivity for scientific writers. Research on the recognition of math notation began in the 1960s [Anderson], and several research publications appeared in the following thirty years [Chang; Martin; Anderson]. Since the 90s, with the large development of touch-screen devices, this field has become active, gaining substantial research achievements and considerable attention from the research community.
A number of surveys [Blostein; Chan; Tapia; Zanibbi] summarize the proposed techniques for math notation recognition. This research domain has been boosted by the Competition on Recognition of Handwritten Mathematical Expressions (CROHME) [Mouchère], which began as part of the International Conference on Document Analysis and Recognition (ICDAR) in 2011. It provides a platform for researchers to test their methods and compare them, and thus facilitates progress in this field. It attracts increasing participation of research groups from all over the world. In this thesis, the data and evaluation tools provided by CROHME will be used, and results will be compared to those of the participants.

Mathematical expression recognition

We usually divide handwritten MEs into online and offline domains. In the offline domain, data is available as an image, while in the online domain it is a sequence of strokes, which are themselves sequences of points recorded along the pen trajectory. Compared to the offline case, time information is available in the online form. This thesis focuses on online handwritten ME recognition. In the online case, a handwritten mathematical expression can have one or more strokes, and a stroke is a sequence of points sampled from the trajectory of the writing tool, between a pen-down and a pen-up, at a fixed interval of time. For example, the expression z^d + z shown in Figure 1.2 is written with 5 strokes, two of which belong to the symbol '+'.
Generally, ME recognition involves three tasks [Zanibbi]:
(1) Symbol Segmentation, which consists in grouping strokes that belong to the same symbol. In Figure 1.3, we illustrate the segmentation of the expression z^d + z, where stroke3 and stroke4 are grouped as a symbol candidate. This task becomes very difficult in the presence of delayed strokes, which occur when interspersed symbols are written. For example, it is possible in a real case that someone writes first one part of the symbol '+' (stroke3), then the symbol 'z' (stroke5), and in the end completes the other part of the symbol '+' (stroke4). Thus, in fact, any combination of any number of strokes could form a symbol candidate. Taking into account each possible combination of strokes is prohibitive, especially for complex expressions having a large number of strokes.
(2) Symbol Recognition, the task of labeling the symbol candidates to assign each of them a symbol class. Still considering the same sample z^d + z, Figure 1.4 presents its symbol recognition. This is also a difficult task because the number of classes is quite large: more than one hundred different symbols including digits, letters, operators, Greek letters and some special math symbols. Moreover, (1) there is overlap between some symbol classes: for instance, the digit '0', the Greek letter 'θ', and the character 'O' might look about the same in different handwritten samples (inter-class variability); (2) there is a large intra-class variability because each writer has his own writing style. As an example of inter-class variability, stroke5 in Figure 1.4 could be recognized as 'z', 'Z' or '2'. To address these issues, it is important to design robust and efficient classifiers as well as a large training data set.
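The combinatorial blow-up mentioned above for segmentation can be quantified: if any subset of strokes may form a symbol, grouping n strokes into symbols amounts to partitioning a set of n elements, and the number of partitions (the Bell number) grows super-exponentially. A quick computation (illustrative only; this bound is not part of the thesis):

```python
def bell(n):
    """Bell number: the number of ways to partition a set of n elements,
    computed with the Bell triangle."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]  # each row starts with the previous row's last value
        for value in row:
            nxt.append(nxt[-1] + value)
        row = nxt
    return row[-1]

# The five-stroke expression above already admits 52 possible groupings;
# ten strokes admit more than a hundred thousand.
counts = {n: bell(n) for n in (5, 10, 15)}
```

This is why practical systems restrict the candidate groupings, for instance to strokes that are close in time or in space, rather than enumerating all partitions.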
Nowadays, most of the proposed solutions are based on machine learning algorithms such as neural networks or support vector machines. (3) Structural Analysis, whose goal is to identify spatial relations between symbols and, with the help of a 2-D language, to produce a mathematical interpretation, such as a symbol relation tree, which will be detailed in a later chapter. For instance, there is a Superscript relationship between the first 'z' and 'd', and a Right relationship between the first 'z' and '+', as illustrated in Figure 1.5. Figure 1.6 provides the corresponding symbol relation tree, which is one of the possible ways to represent math expressions. Structural analysis strongly depends on a correct understanding of the relative positions among symbols. Most approaches consider only local information (such as relative symbol positions and their sizes) to determine the relation between a pair of symbols. Although some approaches have proposed the use of contextual information to improve system performance, modeling and using such information is still challenging. These three tasks can be solved sequentially or jointly.
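As an illustration of such purely local decisions, here is a toy rule-based relation classifier working only on bounding boxes; the box convention and thresholds are our own arbitrary assumptions, not those of any cited system:

```python
def relation_from_boxes(parent_box, child_box):
    """Toy spatial relation classifier using only local bounding-box
    information.  Boxes are (x_min, y_min, x_max, y_max) with y growing
    downwards; the 0.25 threshold is an arbitrary illustrative choice."""
    px_min, py_min, px_max, py_max = parent_box
    cx_min, cy_min, cx_max, cy_max = child_box
    p_cy = (py_min + py_max) / 2.0   # vertical center of the parent
    c_cy = (cy_min + cy_max) / 2.0   # vertical center of the child
    p_h = py_max - py_min            # parent height
    if cx_min >= px_max:             # child starts to the right of parent
        if c_cy < p_cy - 0.25 * p_h:
            return "Superscript"
        if c_cy > p_cy + 0.25 * p_h:
            return "Subscript"
        return "Right"
    if cy_max <= py_min:
        return "Above"
    if cy_min >= py_max:
        return "Below"
    return "Inside"
```

Such rules fail precisely in the ambiguous configurations mentioned above, which is why trained classifiers and contextual information are preferred.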
In the early stages of the study, most of the proposed solutions [START_REF] Chou | Recognition of equations using a two-dimensional stochastic context-free grammar[END_REF][START_REF] Koschinski | Segmentation and recognition of symbols within handwritten mathematical expressions[END_REF], Winkler et al., 1995[START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF][START_REF] Zanibbi | Recognizing mathematical expressions using tree transformation[END_REF][START_REF] Tapia | Recognition of on-line handwritten mathematical expressions using a minimum spanning tree construction and symbol dominance[END_REF][START_REF] Tapia | Understanding mathematics: A system for the recognition of on-line handwritten mathematical expressions[END_REF][START_REF] Zhang | Using fuzzy logic to analyze superscript and subscript relations in handwritten mathematical expressions[END_REF] are sequential ones which treat the recognition problem as a two-step pipeline process: first symbol segmentation and classification, and then structural analysis. The task of structural analysis is performed on the basis of the symbol segmentation and classification result. The main drawback of these sequential methods is that errors from symbol segmentation and classification are propagated to structural analysis. In other words, symbol recognition and structural analysis are assumed to be independent tasks in the sequential solutions. However, this assumption conflicts with the real case, in which the three tasks are highly interdependent by nature. For instance, human beings recognize symbols with the help of the global structure, and vice versa.
The recently proposed solutions, considering the natural relationship between the three tasks, perform segmentation while building the expression structure: a set of symbol hypotheses may be generated, and a structural analysis algorithm may select the best hypotheses while building the structure. The integrated solutions use contextual information (syntactic knowledge) to guide segmentation or recognition, preventing the production of invalid expressions like [a + b). These approaches generally take contextual information into account with grammar parsing techniques (string grammars [Yamamoto et al., 2006, Awal et al., 2014, Álvaro et al., 2014b, 2016[START_REF] Maclean | A new approach for recognizing handwritten mathematics using relational grammars and fuzzy sets[END_REF] and graph grammars [Celik and Yanikoglu, 2011, Julca-Aguilar, 2016]), producing expressions conforming to the rules of a manually defined grammar. Both string and graph grammar parsing have a high computational complexity. In conclusion, the current state-of-the-art systems are generally grammar-driven solutions. Such solutions require not only a large amount of manual work for defining grammars, but also a high computational cost for the grammar parsing process. As an alternative, we propose to explore a non-grammar-driven solution for recognizing math expressions. This is the main goal of this thesis: we propose new architectures for mathematical expression recognition that take advantage of recent advances in recurrent neural networks.

The proposed solution

As is well known, the Bidirectional Long Short-term Memory (BLSTM) network with a Connectionist Temporal Classification (CTC) output layer has achieved great success in sequence labeling tasks, such as text and speech recognition.
This success is due to the LSTM's ability to capture long-term dependencies in a sequence and to the effectiveness of the CTC training method. Unlike the grammar-driven solutions, the new architectures proposed in this thesis include contextual information with BLSTM instead of a grammar parsing technique. In this thesis, we will explore the idea of using the sequence-structured BLSTM with a CTC stage to recognize 2-D handwritten mathematical expressions.

Mathematical expression recognition with a single path. As a first step, we consider linking the last point of each stroke to the first point of the next stroke in the input time order, so that the handwritten ME can be handled with the BLSTM topology. As shown in Figure 1.7 (Introduction of strokes "in the air"), after this processing the original 5 visible strokes turn out to be 9 strokes; in fact, they can be regarded as a global sequence, just like regular 1-D text. We use these added strokes to represent the relationships between pairs of strokes by assigning them a ground-truth label. The remaining work is to train a model on this global sequence with a BLSTM and CTC topology, and then to label each stroke in the global sequence. Finally, from the sequence of output labels, we explore how to build a 2-D expression. The framework is illustrated in Figure 1.8.

Mathematical expression recognition by merging multiple paths. Obviously, the solution of linking only pairs of strokes that are consecutive in the input time order can handle only relatively simple expressions. For complex expressions, some relationships could be missed, such as the Right relationship between stroke1 and stroke5 in Figure 1.7. Thus, we turn to a graph structure to model the relationships between strokes in mathematical expressions. We illustrate this new proposal in Figure 1.9.
As shown, the input of the recognition system is a handwritten expression, i.e. a sequence of strokes; the output is the stroke label graph, which consists of the label of each stroke and the relationships between stroke pairs. As a first step, we derive an intermediate graph from the raw input, considering both temporal and spatial information. In this graph, each node is a stroke, and edges are added according to temporal or spatial properties between strokes. We assume that strokes which are close to each other in time and space have a high probability of forming a symbol candidate. Secondly, several 1-D paths are selected from the graph, since the classifier model we are considering is a sequence labeller; indeed, a classical BLSTM-RNN model can only deal with sequentially structured data. Next, we use the BLSTM classifier to label the selected 1-D paths; this stage consists of two steps: the training and the recognition process. Finally, we merge these labeled paths to build a complete stroke label graph.

Mathematical expression recognition by merging multiple trees. Human beings interpret a handwritten math expression considering the global contextual information. However, in the system above, even though several paths from one expression are taken into account, each of them is considered individually. The classical BLSTM model can access information from the past and the future over a long range, but information outside the single sequence is of course not accessible to it. Thus, we would like to develop a neural network model which can directly handle a structure not limited to a chain. With this new neural network model, we can take into account the information in a tree instead of a single path when dealing with one expression. We extend the chain-structured BLSTM to a tree-structured topology and apply this new network model to online math expression recognition.
Figure 1.10 provides a global view of the recognition system. Similar to the framework presented in Figure 1.9, we first derive an intermediate graph from the raw input. Then, instead of 1-D paths, we derive trees from the graph, which will be labeled by the tree-based BLSTM model as a next step. In the end, these labeled trees are merged to build a stroke label graph.

Thesis structure

Chapter 2 describes the previous works on ME representation and recognition. With regard to representation, we introduce the symbol relation tree (symbol level) and the stroke label graph (stroke level). Furthermore, as an extension, we describe the performance evaluation based on the stroke label graph. For ME recognition, we first review the entire history of this research subject, and then focus on the more recent solutions which are used for comparison with the new architectures proposed in this thesis. Chapter 3 is focused on sequence labeling using recurrent neural networks, which is the foundation of our work. First of all, we briefly explain the concept of sequence labeling and the goal of this task. Then, the next section introduces the classical structure of recurrent neural networks. The property of this network is that it can memorize contextual information, but the range of information that can be accessed is quite limited. Subsequently, long short-term memory is presented with the aim of overcoming this disadvantage of the classical recurrent neural network; the new architecture provides the ability to access information over long periods of time. Finally, we introduce how to apply recurrent neural networks to the task of sequence labeling, including the existing problems and the solution to them, i.e. the connectionist temporal classification technique. In Chapter 4, we explore the idea of recognizing MEs with a single path.
Firstly, we introduce the overall proposal that builds a stroke label graph from a sequence of labels, along with the existing limitations at this stage. Then, the entire process of generating the sequence of labels with BLSTM and local CTC from the input is presented in detail, covering first how the inputs are fed to the BLSTM, and then the training and recognition stages. Finally, the experiments and discussion are described. One main drawback of the strategy proposed in this chapter is that only stroke combinations that are consecutive in time are used in the representation model; thus, some relationships are missed at the modeling stage. In Chapter 5, we explore the idea of recognizing MEs by merging multiple paths, as a new model to overcome some limitations of the system of Chapter 4. The proposed solution takes into account more possible stroke combinations in both time and space, such that fewer relationships are missed at the modeling stage. We first provide an overview of graph representations related to building a graph from the raw mathematical expression. Then we globally describe the framework of mathematical expression recognition by merging multiple paths. Next, all the steps of the recognition system are explained one by one in detail. Finally, the experiments and the discussion are presented. One main limitation is that we use the classical chain-structured BLSTM to label graph-structured input data. In Chapter 6, we explore the idea of recognizing MEs by merging multiple trees, as a new model to overcome the limitation of the system of Chapter 5. We extend the chain-structured BLSTM to a tree-structured topology and apply this new network model to online math expression recognition. Firstly, a short overview of non-chain-structured LSTMs is provided. Then, we present the newly proposed neural network model, named tree-based BLSTM.
Next, the framework of the ME recognition system based on the tree-based BLSTM is globally introduced. Thereafter, we focus on the specific techniques involved in this system. Finally, the experiments and discussion are covered. In Chapter 7, we summarize the main contributions of this thesis and give some thoughts about future work.

I State of the art

2 Mathematical expression representation and recognition

This chapter introduces the previous works regarding ME representation and ME recognition. In the first part, we will review the different representation models at the symbol and stroke levels respectively. At the symbol level, the symbol relation (layout) tree is the one we mainly focus on; at the stroke level, we will introduce the stroke label graph, which is a derivation of the symbol relation tree. Note that the stroke label graph is the final output form of our recognition system. As an extension, we also describe the performance evaluation based on the stroke label graph. In the second part, we first review the history of this recognition problem, and then put emphasis on the more recent solutions which are used for comparison with the new architectures proposed in this thesis.

Mathematical expression representation

Structures can be depicted at three different levels: symbolic, object and primitive [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]. In the case of handwritten MEs, the corresponding levels are expression, symbol and stroke. In this section, we will first introduce two representation models of math expressions at the symbol level, especially the Symbol Relation Tree (SRT). From the SRT, going down to the stroke level, a Stroke Label Graph (SLG) can be derived, which is the current official model to represent the ground truth of handwritten math expressions and also the recognition outputs in the CROHME competitions.
Symbol level: Symbol relation (layout) tree

It is possible to describe a ME at the symbol level using a layout-based SRT, as well as an operator tree which is based on operator syntax. A symbol layout tree represents the placement of symbols on baselines (writing lines), and the spatial arrangement of the baselines [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. As shown in Figure 2.1a, the symbols '(', 'a', '+', 'b', ')' share a writing line while '2' belongs to another writing line. An operator tree represents the operator and relation syntax for an expression [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. The operator tree for (a + b)^2 shown in Figure 2.1b represents the addition of 'a' and 'b', squared. We will focus only on the symbol relation tree model in what follows, since it is closely related to our work. In an SRT, nodes represent symbols, while labels on the edges indicate the relationships between symbols. For example, in Figure 2.2a, the first symbol '-' on the baseline is the root of the tree; the symbol 'a' is Above '-' and the symbol 'c' is Below '-'. In Figure 2.2b, the symbol 'a' is the root; the symbol '+' is on the Right of 'a'. As a matter of fact, a node inherits the spatial relationships of its ancestors. In Figure 2.2a, node '+' inherits the Above relationship of its ancestor 'a'; thus, '+' is also Above '-', like 'a'. Similarly, 'b' is on the Right of 'a' and Above the '-'. Note that all the inherited relationships are omitted when we depict SRTs in this work. This will also be the case in the evaluation stage, since knowing the original edges is enough to ensure a proper representation. 101 classes of symbols have been collected in the CROHME data set, including digits, letters, operators and so on. Six spatial relationships are defined in the CROHME competition: Right, Above, Below, Inside (for square roots), Superscript, Subscript.
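The inheritance rule just described can be sketched as follows (the edge-list encoding and function names are our own; only the rule itself comes from the text):

```python
def inherited_relations(edges):
    """Given SRT edges as (parent, child, label) triples, return the
    inherited relations: every descendant of a child node inherits the
    relation that the child has with its parent."""
    children = {}
    for parent, child, _ in edges:
        children.setdefault(parent, []).append(child)

    def descendants(node):
        out = []
        for c in children.get(node, []):
            out.append(c)
            out.extend(descendants(c))
        return out

    inherited = set()
    for parent, child, label in edges:
        for d in descendants(child):
            inherited.add((parent, d, label))
    return inherited

# The SRT of Figure 2.2a ('a + b' over 'c'): '-' is the root.
srt = [('-', 'a', 'Above'), ('a', '+', 'Right'),
       ('+', 'b', 'Right'), ('-', 'c', 'Below')]
```

Applied to this SRT, the rule yields exactly the inherited relations mentioned in the text: '+' and 'b' are Above '-', and 'b' is on the Right of 'a'.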
For the case of nth-roots, like ³√x as illustrated in Figure 2.3a, we define that the symbol '3' is Above the square root and 'x' is Inside the square root. The limits of an integral or a summation are designated as Above or Superscript and Below or Subscript depending on the actual position of the bounds. For example, in a summation written with the limits placed above and below the sign, 'n' is Above the 'Σ' and 'i' is Below the 'Σ' (Figure 2.3b). When the limits are written to the upper right and lower right of the sign, 'n' is Superscript of the 'Σ' and 'i' is Subscript of the 'Σ'. The same strategy holds for the limits of integrals: as can be seen in Figure 2.3c, the first 'x' is Subscript of the '∫' in the expression ∫_x x dx.

File formats for representing SRT

File formats for representing SRTs include Presentation MathML and LaTeX, as shown in Figure 2.4. Compared to LaTeX, Presentation MathML contains additional tags to identify symbol types; these are primarily for formatting [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. In addition, there are several file encodings for operator trees, including Content MathML and OpenMath [START_REF] Davenport | Unifying math ontologies: A tale of two standards[END_REF]Kohlhase, 2009, Dewar, 2000]. An SRT represents a math expression at the symbol level. If we go down to the stroke level, a stroke label graph (SLG) can be derived from the SRT. In an SLG, nodes represent strokes, while labels on the edges encode either segmentation information or symbol relationships. Relationships are defined at the level of symbols, implying that all strokes (nodes) belonging to one symbol have the same input and output edges. Consider the simple expression 2+2 written using four strokes (two strokes for '+') in Figure 2.5. The four strokes are indicated as s1, s2, s3, s4 in writing order. 'R' stands for the left-right relationship. The dashed edges correspond to segmentation information: a dashed edge indicates that a pair of strokes belongs to the same symbol, and in this case the edge label is the same as the common symbol label.
On the other hand, the non-dashed edges define spatial relationships between nodes and are labeled with one of the possible relationships between symbols. As a consequence, all strokes belonging to the same symbol are fully connected, with nodes and edges sharing the same symbol label; when two symbols are in relation, all strokes from the source symbol are connected to all strokes from the target symbol by edges sharing the same relationship label. Since CROHME 2013, the SLG has been used to represent mathematical expressions [START_REF] Mouchère | Advancing the state of the art for handwritten math recognition: the crohme competitions, 2011-2014[END_REF]. As the official format to represent the ground truth of handwritten math expressions and also the recognition outputs, it allows detailed error analysis at the stroke, symbol and expression levels. In order to be comparable to the ground-truth SLG and to allow error analysis at any level, our recognition system aims to generate an SLG from the input. It means that we need a label decision for each stroke and for each stroke pair used in a symbol relation.

File formats for representing SLG

The file format we are using for representing an SLG is illustrated with the example 2 + 2 in Figure 2.6a. For each node, the format is 'N, NodeIndex, NodeLabel, Probability', where Probability is always 1 in the ground truth and depends on the classifier in the system output. For edges, the format is 'E, FromNodeIndex, ToNodeIndex, EdgeLabel, Probability'. An alternative format could be the one shown in Figure 2.6b, which contains the same information as the previous one but with a more compact appearance. In this compact version, each symbol is represented as an individual object, but the stroke-level information is included as well. For each object (or symbol), the format is 'O, ObjectIndex, ObjectLabel, Probability, StrokeList', in which StrokeList lists the indexes of the strokes the symbol consists of.
Similarly, the representation of relationships is formatted as 'EO, FromObjectIndex, ToObjectIndex, RelationshipLabel, Probability'.

Performance evaluation with stroke label graph

As mentioned in the last section, both the ground truth and the recognition output of an expression in CROHME are represented as SLGs. The problem of evaluating the performance of a recognition system is then essentially measuring the difference between two SLGs. This section introduces how to compute the distance between two SLGs. An SLG is a directed graph that can be visualized as an adjacency matrix of labels (Figure 2.7). Figure 2.7a provides the format of the adjacency matrix: the diagonal holds the stroke (node) labels and the other cells hold the stroke pair (edge) labels [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]. Figure 2.7b presents the adjacency matrix of labels corresponding to the SLG in Figure 2.5c. The underscore '_' indicates either that the edge exists with the label NoRelation, or that the edge does not exist. The edge e14 with the label R is an inherited relationship, which is not reflected in the SLG, as explained before. If we have 'n' strokes in one expression, the number of cells in the adjacency matrix is n². Among these cells, 'n' cells represent the labels of strokes, while the other 'n(n-1)' cells carry the segmentation information and relationships. In order to analyze recognition errors in detail, Zanibbi et al. defined a set of metrics for SLGs in [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]. They are listed as follows:

• ∆C, the number of stroke labels that differ.
• ∆S, the number of segmentation errors.
• ∆R, the number of spatial relationship errors.
• ∆L = ∆S + ∆R, the number of edge labels that differ.
• ∆B = ∆C + ∆L = ∆C + ∆S + ∆R, the Hamming distance between the adjacency matrices.
Suppose that the sample '2 + 2' was interpreted as '2 - 1^2' as shown in Figure 2.8; we now compare the two adjacency matrices (the ground truth and the recognition output):

• ∆C = 2, cells l2 and l3. The stroke s2 was wrongly recognized as '1' while s3 was incorrectly labeled as '-'.
• ∆S = 2, cells e23 and e32. The symbol '+' written with 2 strokes was recognized as two isolated symbols.
• ∆R = 1, cell e24. The Right relationship was recognized as Superscript.
• ∆L = ∆S + ∆R = 2 + 1 = 3.
• ∆B = ∆C + ∆L = ∆C + ∆S + ∆R = 2 + 2 + 1 = 5.

Zanibbi et al. defined two additional metrics at the expression level:

• ∆B_n = ∆B / n², the fraction of incorrect labels in the adjacency matrix, where 'n' is the number of strokes. ∆B_n is the Hamming distance normalized by the label graph size n².
• ∆E, the error averaged over three types of errors: ∆C, ∆S, ∆L. As ∆S is part of ∆L, segmentation errors are emphasized more than the other edge errors ∆R in this metric [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF].

∆E = (∆C/n + ∆S/(n(n-1)) + ∆L/(n(n-1))) / 3 (2.1)

For the sample shown in Figure 2.8b, this gives:

∆B_n = ∆B / n² = 5 / 4² = 5/16 = 0.3125

∆E = (∆C/n + ∆S/(n(n-1)) + ∆L/(n(n-1))) / 3 = (2/4 + 2/(4×3) + 3/(4×3)) / 3 = 0.3056 (2.2)

Given the representation form of the SLG and the defined metrics, 'precision' and 'recall' rates at any level (stroke, symbol and expression) can be computed [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF], which are the current indexes for assessing the performance of the systems in CROHME. 'Recall' and 'precision' rates are commonly used to evaluate results in machine learning experiments [START_REF] Powers | Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation[END_REF].
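The metrics above can be computed directly from the two label matrices; below is a sketch for the '2 + 2' example (the matrix encoding and the rule used to tell segmentation errors from relationship errors are simplifying assumptions of ours, as is the symbol-set form of the recall/precision computation):

```python
RELATIONS = {'_', 'R', 'Sup', 'Sub', 'Above', 'Below', 'Inside'}

def delta_metrics(gt, pred):
    """Hamming-style distances between two n x n label matrices
    (diagonal = stroke labels, off-diagonal = edge labels).  An edge
    error counts as a segmentation error when either label is a symbol
    label, and as a relationship error otherwise."""
    n = len(gt)
    dC = sum(1 for i in range(n) if gt[i][i] != pred[i][i])
    dS = dR = 0
    for i in range(n):
        for j in range(n):
            if i == j or gt[i][j] == pred[i][j]:
                continue
            if gt[i][j] in RELATIONS and pred[i][j] in RELATIONS:
                dR += 1
            else:
                dS += 1
    dL = dS + dR
    dB = dC + dL
    return dC, dS, dR, dL, dB, dB / n ** 2

# Ground truth for '2 + 2' (strokes s1..s4, e14 = inherited 'R'):
gt = [['2', 'R', 'R', 'R'],
      ['_', '+', '+', 'R'],
      ['_', '+', '+', 'R'],
      ['_', '_', '_', '2']]
# Recognition output '2 - 1^2' (s2 -> '1', s3 -> '-', e24 -> Sup):
pred = [['2', 'R', 'R', 'R'],
        ['_', '1', '_', 'Sup'],
        ['_', '_', '-', 'R'],
        ['_', '_', '_', '2']]

def segmentation_scores(gt_symbols, pred_symbols):
    """Recall and precision over segmented symbols, a symbol being a
    set of stroke indexes (symbol labels ignored here)."""
    gt_set = set(map(frozenset, gt_symbols))
    pred_set = set(map(frozenset, pred_symbols))
    tp = len(gt_set & pred_set)
    fp = len(pred_set - gt_set)
    fn = len(gt_set - pred_set)
    return tp / (tp + fn), tp / (tp + fp)
```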
In different research fields like information retrieval and classification, different terminology is used to define 'recall' and 'precision'; however, the basic theory behind them remains the same. In the context of this work, we use the case of segmentation results to explain the 'recall' and 'precision' rates. To define them properly, several related terms are given first, as shown in Table 2.1. 'Segmented' and 'not segmented' refer to the prediction of the classifier, while 'relevant' and 'non-relevant' refer to the ground truth. 'Recall' is defined as

recall = tp / (tp + fn) (2.3)

and 'precision' is defined as

precision = tp / (tp + fp) (2.4)

In Figure 2.8, '2+2' written with four strokes was recognized as '2 - 1^2'. In this case, tp is equal to 2, since two '2' symbols were segmented and they exist in the ground truth; fp is equal to 2, because '-' and '1' were segmented but they are not in the ground truth; fn is equal to 1, as '+' was not segmented but it is in the ground truth. Thus, 'recall' is 2/(2+1) and 'precision' is 2/(2+2). A larger 'recall' than 'precision' means the symbols are over-segmented in our context.

Mathematical expression recognition

In this section, we first review the entire history of this research subject, and then focus on the more recent solutions which are provided as a comparison to the new architectures proposed in this thesis.

Overall review

Research on the recognition of math notation began in the 1960's [START_REF] Anderson | Syntax-directed recognition of hand-printed two-dimensional mathematics[END_REF], and several research publications are available from the following thirty years [START_REF] Chang | A method for the structural analysis of two-dimensional mathematical expressions[END_REF][START_REF] Martin | Computer input/output of mathematical expressions[END_REF][START_REF] Anderson | Two-dimensional mathematical notation[END_REF].
Since the 90's, with the large development of touch-screen devices, this field has become active, gaining substantial research achievements and considerable attention from the research community. A number of surveys [START_REF] Blostein | Recognition of mathematical notation[END_REF][START_REF] Chan | Mathematical expression recognition: a survey[END_REF][START_REF] Tapia | A survey on recognition of on-line handwritten mathematical notation[END_REF][START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF], Mouchère et al., 2016] summarize the proposed techniques for math notation recognition. As described already in Section 1.2, ME recognition involves three interdependent tasks [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]: (1) symbol segmentation, which consists in grouping strokes that belong to the same symbol; (2) symbol recognition, the task of labeling the symbols to assign each of them a symbol class; (3) structural analysis, whose goal is to identify the spatial relations between symbols and, with the help of a grammar, to produce a mathematical interpretation. These three tasks can be solved sequentially or jointly. Sequential solutions.
In the early stages of the study, most of the proposed solutions [START_REF] Chou | Recognition of equations using a two-dimensional stochastic context-free grammar[END_REF][START_REF] Koschinski | Segmentation and recognition of symbols within handwritten mathematical expressions[END_REF], Winkler et al., 1995[START_REF] Lehmberg | A soft-decision approach for symbol segmentation within handwritten mathematical expressions[END_REF][START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF][START_REF] Zanibbi | Recognizing mathematical expressions using tree transformation[END_REF][START_REF] Tapia | Recognition of on-line handwritten mathematical expressions using a minimum spanning tree construction and symbol dominance[END_REF][START_REF] Toyozumi | A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information[END_REF][START_REF] Tapia | Understanding mathematics: A system for the recognition of on-line handwritten mathematical expressions[END_REF][START_REF] Zhang | Using fuzzy logic to analyze superscript and subscript relations in handwritten mathematical expressions[END_REF][START_REF] Yu | A unified framework for symbol segmentation and recognition of handwritten mathematical expressions[END_REF] are sequential ones which treat the recognition problem as a two-step pipeline process: first symbol segmentation and classification, and then structural analysis. The task of structural analysis is performed on the basis of the symbol segmentation and classification result. Considerable work has been dedicated to each step.
For segmentation, the proposed methods include the Minimum Spanning Tree (MST) based method [START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF], a Bayesian framework [START_REF] Yu | A unified framework for symbol segmentation and recognition of handwritten mathematical expressions[END_REF], graph-based methods [START_REF] Lehmberg | A soft-decision approach for symbol segmentation within handwritten mathematical expressions[END_REF][START_REF] Toyozumi | A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information[END_REF] and so on. The symbol classifiers used include Nearest Neighbor, Hidden Markov Models, Multilayer Perceptrons, Support Vector Machines, recurrent neural networks and so on. For spatial relationship classification, the proposed features include symbol bounding boxes [START_REF] Anderson | Syntax-directed recognition of hand-printed two-dimensional mathematics[END_REF], relative size and position [START_REF] Aly | Statistical classification of spatial relationships among mathematical symbols[END_REF], and so on. The main drawback of these sequential methods is that errors from symbol segmentation and classification are propagated to structural analysis. In other words, symbol recognition and structural analysis are assumed to be independent tasks in the sequential solutions. However, this assumption conflicts with the real case, in which the three tasks are highly interdependent by nature. For instance, human beings recognize symbols with the help of the structure, and vice versa. Integrated solutions. Considering the natural relationship between the three tasks, researchers have mainly focused on integrated solutions recently, which perform segmentation while building the expression structure: a set of symbol hypotheses may be generated, and a structural analysis algorithm may select the best hypotheses while building the structure.
The integrated solutions use contextual information (syntactic knowledge) to guide segmentation or recognition, preventing the production of invalid expressions like [a + b). These approaches generally take contextual information into account with grammar parsing techniques (string grammars [Yamamoto et al., 2006, Awal et al., 2014, Álvaro et al., 2014b, 2016[START_REF] Maclean | A new approach for recognizing handwritten mathematics using relational grammars and fuzzy sets[END_REF] and graph grammars [Celik and Yanikoglu, 2011, Julca-Aguilar, 2016]), producing expressions conforming to the rules of a manually defined grammar. In fact, both string and graph grammar parsing have a high time complexity. In the next section we will analyze these approaches in more depth. Instead of using a grammar parsing technique, the new architectures proposed in this thesis include contextual information with bidirectional long short-term memory, which can access content from both the future and the past over an unlimited range. End-to-end neural network based solutions. Inspired by recent advances in image caption generation, some end-to-end deep learning based systems were proposed for ME recognition [START_REF] Deng | What you get is what you see: A visual markup decompiler[END_REF], Zhang et al., 2017]. These systems were developed from the attention-based encoder-decoder model which is now widely used for machine translation. They decompile an image directly into presentational markup such as LaTeX. However, since trace information is given in the online case, beyond the final LaTeX string it is necessary to decide a label for each stroke; this information is not available in end-to-end systems.

The recent integrated solutions

In [Yamamoto et al., 2006], a framework based on a stroke-based stochastic context-free grammar is proposed for online handwritten mathematical expression recognition.
They model handwritten mathematical expressions with a stochastic context-free grammar and formulate the recognition problem as the search for the most likely mathematical expression candidate, which can be solved using the Cocke-Younger-Kasami (CYK) algorithm. With regard to the handwritten expression grammar, the authors define production rules for structural relations between symbols and also for the composition of two sets of strokes to form a symbol. Figure 2.9 illustrates the process of searching for the most likely expression candidate with the CYK algorithm on the example x y + 2.

Figure 2.9 - Example of a search for the most likely expression candidate using the CYK algorithm. Extracted from [Yamamoto et al., 2006].

The algorithm, which fills the CYK table bottom-up, is as follows:

• For each input stroke i, corresponding to cell Matrix(i, i) in Figure 2.9, the probability of each stroke label candidate is computed. This calculation is the same as the likelihood calculation in isolated character recognition. In this example, the 2 best candidates for the first stroke are ')' with probability 0.2 and the first stroke of x (denoted x1 here) with probability 0.1.

• In cell Matrix(i, i+1), the candidates for strokes i and i+1 are listed. As shown in cell Matrix(1, 2) of the same example, the candidate x with likelihood 0.005 is generated with the production rule <x → x1 x2, SameSymbol>. The structure likelihood, computed using the bounding boxes, is 0.5 here, so the product of stroke and structure likelihoods is 0.1 × 0.1 × 0.5 = 0.005.

• Similarly, in cell Matrix(i, i+k), the candidates for strokes i to i+k are listed with the corresponding likelihoods.

• Finally, the most likely EXP candidate in cell Matrix(1, n) is the recognition result.

In this work, they assume that symbols are composed only of consecutive (in time) strokes.
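The bottom-up table filling sketched above can be illustrated with a small dynamic-programming routine. The stroke likelihoods and the single production rule below are toy values chosen to mirror the Matrix(1, 2) example (0.1 × 0.1 × 0.5 = 0.005); they are not the actual models of [Yamamoto et al., 2006].

```python
# Toy CYK-style bottom-up filling of the table Matrix(i, j), mirroring
# the x1/x2 example above.  All likelihoods and rules are illustrative,
# not the actual models of the cited work.

def cyk_parse(stroke_likelihoods, rules):
    """stroke_likelihoods[i]: {label: likelihood} for stroke i.
    rules: {(left, right): (parent, structure_likelihood)}.
    Returns M, where M[(i, j)] maps labels covering strokes i..j to
    their best likelihood."""
    n = len(stroke_likelihoods)
    M = {(i, i): dict(stroke_likelihoods[i]) for i in range(n)}
    for span in range(1, n):                  # fill the table bottom-up
        for i in range(n - span):
            j, cell = i + span, {}
            for k in range(i, j):             # split point between children
                for ll, lp in M[(i, k)].items():
                    for rl, rp in M[(k + 1, j)].items():
                        if (ll, rl) in rules:
                            parent, struct = rules[(ll, rl)]
                            p = lp * rp * struct   # stroke x structure
                            cell[parent] = max(p, cell.get(parent, 0.0))
            M[(i, j)] = cell
    return M

# Stroke 1: ')' (0.2) or the first stroke of x (0.1); stroke 2: x2 (0.1).
strokes = [{')': 0.2, 'x1': 0.1}, {'x2': 0.1}]
# <x -> x1 x2, SameSymbol> with structure likelihood 0.5.
rules = {('x1', 'x2'): ('x', 0.5)}
table = cyk_parse(strokes, rules)
# table[(0, 1)]['x'] is 0.1 * 0.1 * 0.5 = 0.005, as in the text.
```

The same table, filled up to cell Matrix(1, n), yields the final expression candidate in a real system.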
In fact, this assumption fails when delayed strokes occur. In [Awal et al., 2014], the recognition system handles mathematical expression recognition as a simultaneous optimization of expression segmentation, symbol recognition, and 2D structure recognition under the restriction of a mathematical expression grammar. The proposed approach is a global strategy allowing mathematical symbols and spatial relations to be learned directly from complete expressions. The general architecture of the system is illustrated in Figure 2.10.

Figure 2.10 - The system architecture proposed in [Awal et al., 2014]. Extracted from [Awal et al., 2014].

First, a symbol hypothesis generator based on a 2-D dynamic programming algorithm provides a number of segmentation hypotheses. It allows grouping strokes which are not consecutive in time. The authors then consider a symbol classifier with a reject capability in order to deal with the invalid hypotheses proposed by the hypothesis generator. The structural costs are computed with Gaussian models which are learned from a training data set. The spatial information used is the baseline position (y) and the x-height (h) of a symbol or sub-expression hypothesis. The language model is defined by a combination of two 1-D grammars (horizontal and vertical). The production rules are applied successively until reaching elementary symbols, and then a bottom-up parse (CYK) is applied to construct the relational tree of the expression. Finally, the decision maker selects the set of hypotheses that minimizes the global cost function.
A fuzzy Relational Context-Free Grammar (r-CFG) and an associated top-down parsing algorithm are proposed in [MacLean et al.]. Fuzzy r-CFGs explicitly model the recognition process as a fuzzy relation between concrete inputs and abstract expressions. The production rules defined in this grammar have the form A0 ⇒r A1 A2 ⋯ Ak, where r denotes a relation. The parsing algorithm used in this work is a tabular variant of Unger's method for CFG parsing [Unger]. This process is divided into two steps: forest construction, in which a shared parse forest is created from the start non-terminal to the leaves, representing all recognizable parses of the input; and tree extraction, in which individual parse trees are extracted from the forest in decreasing order of membership grade. Figure 2.12 shows a handwritten expression and a shared parse forest representing some of its possible interpretations.

In [Álvaro et al., 2016], the statistical framework of a model based on Two-Dimensional Probabilistic Context-Free Grammars (2D-PCFGs) and its associated parsing algorithm are defined. The authors also regard the problem of mathematical expression recognition as obtaining the most likely parse tree given a sequence of strokes. To achieve this goal, two probabilities are required: the symbol likelihood and the structural probability. Since only strokes that are close together will form a mathematical symbol, a symbol likelihood model is proposed based on spatial and geometric information. Two concepts (visibility and closeness) describing the geometric and spatial relations between strokes are used in this work to characterize a set of possible segmentation hypotheses. Next, a BLSTM-RNN is used to calculate the probability that a certain segmentation hypothesis represents a math symbol.
BLSTM has the ability to access context information over long periods of time, from both the past and the future, and is one of the state-of-the-art models. With regard to the structural probability, both the probabilities of the grammar rules and a spatial relationship model, which provides the probability p(r|BC) that two sub-problems B and C are arranged according to spatial relationship r, are required. In order to train a statistical classifier, given two regions B and C, nine geometric features are defined based on their bounding boxes (Figure 2.13, extracted from [Álvaro et al., 2016]). These nine features are written as the feature vector h(B, C) representing a spatial relationship. Next, a GMM is trained with the labeled feature vectors such that the probability of the spatial relationship model can be computed as the posterior probability provided by the GMM for class r. Finally, a CYK-based algorithm for 2D-PCFGs is defined in this statistical framework.

Unlike the previously described solutions, which are based on string grammars, in [Julca-Aguilar, 2016] the authors model the recognition problem as a graph parsing problem. A graph grammar model for mathematical expressions and a graph parsing technique that integrates symbol-level and structure-level information are proposed in this work. The recognition process is illustrated in Figure 2.14. Two main components are involved in this process: (1) the hypotheses graph generator and (2) the graph parser. The hypotheses graph generator builds a graph that defines the search space of the parsing algorithm, and the graph parser does the parsing itself. In the hypotheses graph, vertices represent symbol hypotheses and edges represent relations between symbols. The labels associated to symbols and relations indicate their most likely interpretations; these labels are the outputs of a symbol classifier and a relation classifier.
The graph parser uses the hypotheses graph and the graph grammar to first generate a parse forest consisting of several parse trees, each representing an interpretation of the input strokes as a mathematical expression, and then extracts the best tree from the forest as the final recognition result. In the proposed graph grammar, production rules have the form A → B, defining the replacement of a graph by another graph. With regard to the parsing technique, they propose an algorithm based on Unger's algorithm for parsing strings [Unger]. The algorithm presented in this work is a top-down approach, starting from the top vertex (root) and moving to the bottom vertices.

End-to-end neural network based solutions

In [Deng et al.], the proposed model WYGIWYS (what you get is what you see) is an extension of the attention-based encoder-decoder model. The structure of WYGIWYS is shown in Figure 2.15. Given an input image, a Convolutional Neural Network (CNN) is first applied to extract image features. Then, for each row in the feature map, a Recurrent Neural Network (RNN) encoder re-encodes it, aiming to capture the sequential information. Next, the encoded features are decoded by an RNN decoder with a visual attention mechanism to generate the final outputs. In parallel to the work of [Deng et al.], [Zhang et al., 2017] also use the attention-based encoder-decoder framework to translate MEs into LaTeX notation.
Compared to the recent integrated solutions, the end-to-end neural network based solutions require neither a large amount of manual work for defining grammars nor a computationally expensive grammar parsing process, and they achieve state-of-the-art recognition results. However, since we are given trace information in the online case, a label must be decided for each stroke in addition to the final LaTeX string. This alignment is not available in current end-to-end systems.

Discussion

In this section, we first introduced the development of mathematical expression recognition in general, and then put emphasis on the more recently proposed solutions. Rather than analyzing the advantages and disadvantages of the existing approaches, which consist of various grammars and their associated parsing techniques, the aim of this section is to provide a comparison with the new architectures proposed in this thesis. In spite of the considerably different methods related to the three sub-tasks (symbol segmentation, symbol recognition and structural analysis), and the various grammars and parsing techniques, the key idea behind these integrated techniques is to rely on explicit grammar rules to resolve the ambiguity in symbol recognition and relation recognition. In other words, the existing solutions take contextual or global information into account generally with the help of a grammar. However, using either a string or a graph grammar, a large amount of manual work is needed for defining the grammar, and the grammar parsing process has a high computational complexity. The BLSTM neural network is able to model dependencies in a sequence over indefinite time gaps, overcoming the short-term memory of classical recurrent neural networks. Due to this ability, BLSTM has achieved great success in sequence labeling tasks such as text and speech recognition.
Instead of using grammar parsing techniques, the new architectures proposed in this thesis include contextual information with bidirectional long short-term memory. In [Álvaro et al., 2016], a BLSTM is used as an elementary function to recognize symbols or to control segmentation, and is itself included in an overall complex system. The goal of our work is to develop a new architecture where a recurrent neural network is the backbone of the solution. In the next chapter, we introduce how this advanced neural network takes contextual information into consideration for the problem of sequence labeling.

Sequence labeling with recurrent neural networks

This chapter focuses on sequence labeling using recurrent neural networks, which is the foundation of our work. First, the concept of sequence labeling is introduced in Section 3.1, where we explain the goal of this task. Next, Section 3.2 introduces the classical structure of the recurrent neural network. The property of this network is that it can memorize contextual information, but the range of information which can be accessed is quite limited. Subsequently, in Section 3.3, long short-term memory is presented. This architecture provides the ability to access information over long periods of time. Finally, we introduce how to apply recurrent neural networks to the task of sequence labeling, including the existing problems and the solution to them, i.e. the connectionist temporal classification technique. In this chapter, a considerable number of variables and formulas are involved in order to describe the content clearly, and likewise to make it easy to extend the algorithms in later chapters. We use here the same notations as in [Graves, 2012]. In fact, this chapter is a short version of Alex Graves' book «Supervised sequence labelling with recurrent neural networks».
We use the same figures and a similar outline to introduce this entire framework. Since the architecture of BLSTM and CTC is the backbone of our solution, we devote a whole chapter to this topology to help the understanding of our work.

Sequence labeling

In machine learning, the term 'sequence labeling' encompasses all tasks where sequences of data are transcribed with sequences of discrete labels [Graves, 2012]. Well-known examples include handwriting and speech recognition (Figure 3.1), gesture recognition and protein secondary structure prediction. In this thesis, we only consider supervised sequence labeling, in which the ground truth is provided during the training process. The goal of sequence labeling is to transcribe sequences of input data into sequences of labels, each label coming from a fixed alphabet. For example, looking at the top row of Figure 3.1, we would like to assign the sequence "FOREIGN MINISTER", each label of which comes from the English alphabet, to the input signal on the left side. Suppose that X denotes an input sequence and l is the corresponding ground truth, a sequence of labels; the set of training examples can then be referred to as Tra = {(X, l)}. The task is to use Tra to train a sequence labeling algorithm to label each input sequence in a test data set as accurately as possible. In fact, when people try to recognize a handwriting or speech signal, they focus not only on the local input signal, but also on global, contextual information to help the transcription process. Thus, we hope the sequence labeling algorithm also has the ability to take advantage of contextual information.

Recurrent neural networks

Artificial Neural Networks (ANNs) are computing systems inspired by biological neural networks [Jain et al.].
It is hoped that such systems possess the ability to learn to do tasks by considering some given examples. An ANN is a network of small units, joined to each other by weighted connections. Depending on whether connections form cycles or not, ANNs can usually be divided into two classes: ANNs without cycles are referred to as Feed-forward Neural Networks (FNNs); ANNs with cycles are referred to as feedback, or recurrent, neural networks (RNNs). The cyclical connections can model the dependency between past and future; therefore RNNs possess the ability to memorize, while FNNs have no memory capability. In this section, we focus on recurrent networks with cyclical connections. Thanks to their memory capability, RNNs are suitable for sequence labeling tasks where contextual information plays a key role. Many varieties of RNN have been proposed, such as Elman networks, Jordan networks, time delay neural networks and echo state networks [Graves, 2012]. We introduce here a simple RNN architecture containing only a single, self-connected hidden layer (Figure 3.3).

Topology

In order to better understand the mechanism of RNNs, we first provide a short introduction to the Multilayer Perceptron (MLP) [Rumelhart et al.; Werbos; Bishop], which is the most widely used form of FNN. As illustrated in Figure 3.2, an MLP has an input layer, one or more hidden layers and an output layer. The S-shaped curves in the hidden and output layers indicate the application of 'sigmoidal' nonlinear activation functions. The number of units in the input layer is equal to the length of the feature vector.
Both the number of units in the output layer and the choice of output activation function depend on the task the network is applied to. When dealing with binary classification tasks, the standard configuration is a single unit with a logistic sigmoid activation. For classification problems with K > 2 classes, we usually have K output units with the softmax function. Since there is no connection from past to future or future to past, an MLP depends only on the current input to compute the output and is therefore not suitable for sequence labeling. Unlike the feed-forward network architecture, in a neural network with cyclical connections, presented in Figure 3.3, the connections from the hidden layer to itself (red) can model the dependency between past and future. However, the dependencies between different time steps cannot be seen clearly in this figure. Thus, we unfold the network along the input sequence to visualize them in Figure 3.4. Unlike Figures 3.2 and 3.3, where each node is a single unit, here each node represents a layer of network units at a single time step. The input at each time step is a vector of features; the output at each time step is a vector of probabilities over the different classes. Through the connections weighted by 'w1' from the input layer to the hidden layer, the current input flows to the current hidden layer; through the connections weighted by 'w2' from the hidden layer to itself, the information flows from the hidden layer at t - 1 to the hidden layer at t; through the connections weighted by 'w3' from the hidden layer to the output layer, the activation flows from the hidden layer to the output layer. Note that 'w1', 'w2' and 'w3' represent sets of weights rather than single weight values, and they are reused at each time step.
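Before detailing the forward pass, here is a minimal sketch of an MLP forward pass illustrating the output-layer choice discussed above (a softmax over K units for a K-class problem). All sizes and weights are arbitrary illustrative values, not part of the cited work.

```python
import numpy as np

# Minimal MLP forward pass: sigmoidal hidden units, and a softmax over
# the K output units for a K-class classification problem.

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())      # subtract the max for numerical stability
    return e / e.sum()

def mlp_forward(x, W_ih, W_ho):
    h = sigmoid(W_ih @ x)        # hidden layer
    return softmax(W_ho @ h)     # posterior probabilities over K classes

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # I = 4 input units (feature vector)
W_ih = rng.normal(size=(5, 4))   # H = 5 hidden units
W_ho = rng.normal(size=(3, 5))   # K = 3 classes
y = mlp_forward(x, W_ih, W_ho)   # y > 0 and y sums to 1
```

Since no connection links different inputs, the same x always yields the same y, which is why an MLP alone cannot exploit sequence context.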
Forward pass

The input data flow from the input layer to the hidden layer; the output activation of the hidden layer at t - 1 flows to the hidden layer at t; the hidden layer sums up the information from the two sources; finally, the summed and processed information flows to the output layer. This process is referred to as the forward pass of the RNN. Suppose that an RNN has I input units, H hidden units, and K output units. Let $w_{ij}$ denote the weight of the connection from unit i to unit j, and let $a_j^t$ and $b_j^t$ represent the network input activation to unit j and the output activation of unit j at time t, respectively. Specifically, we use $x_i^t$ to denote the value of input i at time t. Considering an input sequence X of length T, the network input activation to the hidden units can be computed as:

$a_h^t = \sum_{i=1}^{I} w_{ih} x_i^t + \sum_{h'=1}^{H} w_{h'h} b_{h'}^{t-1}$    (3.1)

In this equation, we can see clearly that the activation arriving at the hidden layer comes from two sources: (1) the current input layer, through the 'w1' connections; (2) the hidden layer of the previous time step, through the 'w2' connections. The sizes of 'w1' and 'w2' are respectively size(w1) = I × H (+1 for the bias) and size(w2) = H × H. Then, the activation function $\theta_h$ is applied:

$b_h^t = \theta_h(a_h^t)$    (3.2)

We calculate $a_h^t$, and therefore $b_h^t$, from t = 1 to T. This is a recursive process in which an initial configuration is of course required. In this thesis, the initial value $b_h^0$ is always set to 0. Now, we consider propagating the hidden layer output activation $b_h^t$ to the output layer. The activation arriving at the output units can be calculated as follows:

$a_k^t = \sum_{h=1}^{H} w_{hk} b_h^t$    (3.3)

The size of 'w3' is size(w3) = H × K. Then, applying the activation function $\theta_k$, we get the output activation $b_k^t$ of the output layer unit k at time t. We use the special name $y_k^t$ to represent it:

$y_k^t = \theta_k(a_k^t)$    (3.4)

We introduce the definition of the loss function in Section 3.4.
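Equations (3.1)-(3.4) can be sketched directly in code. Here $\theta_h$ is taken to be tanh and $\theta_k$ a softmax, which are common choices for illustration rather than ones fixed by the text; sizes and weights are illustrative, and biases are omitted for brevity.

```python
import numpy as np

# Forward pass of the simple RNN of Equations (3.1)-(3.4).  w1, w2, w3
# are the input-to-hidden, hidden-to-hidden and hidden-to-output weights.

def rnn_forward(X, w1, w2, w3):
    """X: input sequence of shape (T, I); returns outputs of shape (T, K)."""
    b_prev = np.zeros(w2.shape[0])            # b_h^0 = 0, as in the text
    outputs = []
    for x in X:
        a_h = w1 @ x + w2 @ b_prev            # Eq. (3.1)
        b_h = np.tanh(a_h)                    # Eq. (3.2)
        a_k = w3 @ b_h                        # Eq. (3.3)
        e = np.exp(a_k - a_k.max())           # Eq. (3.4), softmax output
        outputs.append(e / e.sum())
        b_prev = b_h                          # recurrence to time t + 1
    return np.stack(outputs)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))                   # T = 5 steps, I = 3 inputs
w1 = rng.normal(size=(4, 3))                  # H = 4 hidden units
w2 = rng.normal(size=(4, 4))
w3 = rng.normal(size=(2, 4))                  # K = 2 classes
Y = rnn_forward(X, w1, w2, w3)                # one class posterior per step
```

Note that the same three weight sets are reused at every time step, exactly as in the unfolded network of Figure 3.4.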
Backward pass

With the loss function, we can compute the distance between the network outputs and the ground truths. The aim of the backward pass is to minimize this distance in order to train an effective neural network. The widely used solution is gradient descent, the idea of which is to first calculate the derivative of the loss function with respect to each weight and then adjust the weights in the direction of the negative slope to minimize the loss function [Graves, 2012]. To compute the derivative of the loss function with respect to each weight in the network, the common technique used is known as Back Propagation (BP) [Rumelhart et al.; Williams and Zipser; Werbos]. As there are recurrent connections in RNNs, special algorithms were designed to calculate the weight derivatives efficiently, two well-known methods being Real Time Recurrent Learning (RTRL) [Robinson and Fallside] and Back Propagation Through Time (BPTT) [Williams and Zipser, 1995; Werbos, 1990]. Like Alex Graves, we introduce BPTT only, as it is both conceptually simpler and more efficient in computation time. We define

$\delta_j^t = \frac{\partial L}{\partial a_j^t}$    (3.5)

Thus, the partial derivative of the loss function L with respect to the input of output unit k is

$\delta_k^t = \frac{\partial L}{\partial a_k^t} = \sum_{k'=1}^{K} \frac{\partial L}{\partial y_{k'}^t} \frac{\partial y_{k'}^t}{\partial a_k^t}$    (3.6)

Afterwards, the error is back-propagated to the hidden layer. Note that the loss function depends on the activation of the hidden layer not only through its influence on the output layer, but also through its influence on the hidden layer at the next time step.
Thus,

$\delta_h^t = \frac{\partial L}{\partial a_h^t} = \frac{\partial L}{\partial b_h^t} \frac{\partial b_h^t}{\partial a_h^t} = \frac{\partial b_h^t}{\partial a_h^t} \left( \sum_{k=1}^{K} \frac{\partial L}{\partial a_k^t} \frac{\partial a_k^t}{\partial b_h^t} + \sum_{h'=1}^{H} \frac{\partial L}{\partial a_{h'}^{t+1}} \frac{\partial a_{h'}^{t+1}}{\partial b_h^t} \right)$    (3.7)

$\delta_h^t = \theta_h'(a_h^t) \left( \sum_{k=1}^{K} \delta_k^t w_{hk} + \sum_{h'=1}^{H} \delta_{h'}^{t+1} w_{hh'} \right)$    (3.8)

The $\delta_h^t$ terms can be calculated recursively from T down to 1. Of course, this requires the initial value $\delta_h^{T+1}$ to be set. As no error comes from beyond the end of the sequence, $\delta_h^{T+1} = 0 \; \forall h$. Finally, noticing that the same weights are reused at every time step, we sum over the whole sequence to get the derivatives with respect to the network weights:

$\frac{\partial L}{\partial w_{ij}} = \sum_{t=1}^{T} \frac{\partial L}{\partial a_j^t} \frac{\partial a_j^t}{\partial w_{ij}} = \sum_{t=1}^{T} \delta_j^t b_i^t$    (3.9)

The last step is to adjust the weights based on the derivatives computed above. This is an easy procedure and we do not discuss it here.

Bidirectional networks

The RNNs we have discussed so far only possess the ability to access information from the past, not the future. In fact, future information is as important to sequence labeling tasks as the past context. For example, when we see the left bracket '(' in the handwritten expression 2(a + b), it is difficult to decide between '1', 'l' and '(' if we focus only on the signal to its left. But if we also consider the signal on its right side, the answer is straightforward: it is of course '('. An elegant solution for accessing context from both directions is Bidirectional Recurrent Neural Networks (BRNNs) [Schuster and Paliwal; Schuster; Baldi et al.]. Figure 3.5 shows an unfolded bidirectional network. As we can see, there are 2 separate recurrent hidden layers, forward and backward, each of which processes the input sequence in one direction.
No information flows between the forward and backward hidden layers, and these two layers are both connected to the same output layer. With the bidirectional structure, we can use the complete past and future context to help recognize each point in the input sequence.

Long short-term memory (LSTM)

In Section 3.2, we discussed RNNs, which have the ability to access contextual information from one direction, and BRNNs, which can access bidirectional contextual information. Due to their memory capability, they have many applications in sequence labeling tasks. However, there is a problem: the range of context that can be accessed in practice is quite limited. The influence of a given input on the hidden layer, and therefore on the network output, either decays or blows up exponentially as it cycles around the network's recurrent connections [Graves, 2012]. This effect is often referred to in the literature as the vanishing gradient problem [Hochreiter et al.; Bengio et al.]. To address this problem, many methods were proposed, such as simulated annealing and discrete error propagation [Bengio et al.], explicitly introduced time delays [Lang et al.; Lin et al.] or time constants [Mozer], and hierarchical sequence compression [Schmidhuber].
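Before turning to LSTM, the bidirectional wiring described above (two independent hidden layers scanning the input in opposite directions, both feeding the same output layer) can be sketched as follows; the same wiring is reused later with LSTM layers in place of the simple recurrent ones. Weights and sizes are illustrative.

```python
import numpy as np

# Sketch of a bidirectional RNN: a forward and a backward hidden layer
# scan the input in opposite directions; both connect to one output
# layer, so every time step sees full past and future context.

def hidden_states(X, w_in, w_rec):
    b, states = np.zeros(w_rec.shape[0]), []
    for x in X:
        b = np.tanh(w_in @ x + w_rec @ b)
        states.append(b)
    return np.stack(states)

def birnn_forward(X, fwd, bwd, w_out):
    """fwd and bwd are (w_in, w_rec) pairs for the two directions."""
    h_f = hidden_states(X, *fwd)               # past -> future
    h_b = hidden_states(X[::-1], *bwd)[::-1]   # future -> past, re-aligned
    return np.concatenate([h_f, h_b], axis=1) @ w_out.T

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 3))                    # T = 6, I = 3
fwd = (rng.normal(size=(4, 3)), rng.normal(size=(4, 4)))
bwd = (rng.normal(size=(4, 3)), rng.normal(size=(4, 4)))
w_out = rng.normal(size=(2, 8))                # sees both hidden layers
Y = birnn_forward(X, fwd, bwd, w_out)          # shape (6, 2)

# The backward layer makes outputs depend on future inputs as well:
X2 = X.copy()
X2[-1] += 1.0                                  # perturb only the last input
Y2 = birnn_forward(X2, fwd, bwd, w_out)
```

Note that the two hidden layers never exchange information directly, exactly as in Figure 3.5.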
In this section, we focus on the Long Short-Term Memory (LSTM) architecture [Hochreiter and Schmidhuber].

Topology

Replacing the summation units in the hidden layer of a standard RNN with memory blocks (Figure 3.6) generates an LSTM network. There are three gates (input gate, forget gate and output gate) and one or more cells in a memory block. Figure 3.6 shows an LSTM memory block with one cell.

Figure 3.6 - LSTM memory block with one cell. Extracted from [Graves, 2012].

We list below the activations arriving at the three gates at time t:

• Input gate: the current input, the activation of the hidden layer at time t - 1, and the cell state at time t - 1.
• Forget gate: the current input, the activation of the hidden layer at time t - 1, and the cell state at time t - 1.
• Output gate: the current input, the activation of the hidden layer at time t - 1, and the current cell state.

The connections shown by dashed lines from the cell to the three gates are known as 'peephole' connections, which are the only weighted connections inside the memory block. It is through these three peepholes that the cell state is accessible to the three gates. The three gates sum up the information from inside and outside the block with different weights and then apply the gate activation function 'f', usually the logistic sigmoid. Thus, the gate activations are between 0 (gate closed) and 1 (gate open). We present below how these three gates control the cell via multiplications (small black circles):

• Input gate: the input gate multiplies the input of the cell. The input gate activation decides how much information the cell receives from the current input layer, 0 representing no information and 1 representing all the information.
• Forget gate: the forget gate multiplies the cell's previous state.
The forget gate activation decides how much context the cell should memorize from its previous state, 0 representing forgetting everything and 1 representing memorizing everything.

• Output gate: the output gate multiplies the output of the cell. It controls to which extent the cell outputs its state, 0 representing nothing and 1 representing everything.

The cell input and output activation functions ('g' and 'h') are usually tanh or logistic sigmoid, though in some cases 'h' is the identity function [Graves, 2012]. The cell output, as scaled by the output gate, is the only output from the block to the rest of the network. As discussed, the three control gates allow the cell to receive, memorize and output information selectively, thereby easing the vanishing gradient problem. For example, the cell can fully memorize the input at the first time step as long as the forget gate is open and the input gate is closed at the following time steps.

Forward pass

As in [Graves, 2012], we only present the equations for a single memory block, since for multiple blocks the calculation is simply repeated. Let $w_{ij}$ denote the weight of the connection from unit i to unit j, and let $a_j^t$ and $b_j^t$ represent the network input activation to unit j and the output activation of unit j at time t, respectively. Specifically, we use $x_i^t$ to denote the value of input i at time t. Considering a recurrent network with I input units, K output units and H hidden units, the subscripts ς, φ and ω represent the input, forget and output gates, and the subscript c represents one of the C cells.
Thus, the connections from the input layer to the three gates are weighted by $w_{iς}$, $w_{iφ}$ and $w_{iω}$ respectively; the recurrent connections to the three gates are weighted by $w_{hς}$, $w_{hφ}$ and $w_{hω}$; and the peephole weights from cell c to the input, forget and output gates are denoted $w_{cς}$, $w_{cφ}$ and $w_{cω}$. $s_c^t$ is the state of cell c at time t. We use f to denote the activation function of the gates, and g and h to denote the cell input and output activation functions, respectively. $b_c^t$ is the only output from the block to the rest of the network. As with the standard RNN, the forward pass is a recursive calculation starting at t = 1. All the related initial values are set to 0. The equations are given below.

Input gates

$a_ς^t = \sum_{i=1}^{I} w_{iς} x_i^t + \sum_{h=1}^{H} w_{hς} b_h^{t-1} + \sum_{c=1}^{C} w_{cς} s_c^{t-1}$    (3.10)
$b_ς^t = f(a_ς^t)$    (3.11)

Forget gates

$a_φ^t = \sum_{i=1}^{I} w_{iφ} x_i^t + \sum_{h=1}^{H} w_{hφ} b_h^{t-1} + \sum_{c=1}^{C} w_{cφ} s_c^{t-1}$    (3.12)
$b_φ^t = f(a_φ^t)$    (3.13)

Cells

$a_c^t = \sum_{i=1}^{I} w_{ic} x_i^t + \sum_{h=1}^{H} w_{hc} b_h^{t-1}$    (3.14)
$s_c^t = b_φ^t s_c^{t-1} + b_ς^t g(a_c^t)$    (3.15)

Output gates

$a_ω^t = \sum_{i=1}^{I} w_{iω} x_i^t + \sum_{h=1}^{H} w_{hω} b_h^{t-1} + \sum_{c=1}^{C} w_{cω} s_c^t$    (3.16)
$b_ω^t = f(a_ω^t)$    (3.17)

Cell outputs

$b_c^t = b_ω^t h(s_c^t)$    (3.18)

Backward pass

As can be seen in Figure 3.6, a memory block has 4 interfaces receiving inputs from outside the block: the 3 gates and the cell. Considering the hidden layer, the total number of input interfaces is defined as G. For memory blocks consisting of only one cell, G is equal to 4H. We recall Equation 3.5:

$\delta_j^t = \frac{\partial L}{\partial a_j^t}$    (3.19)

Furthermore, define

$\epsilon_c^t = \frac{\partial L}{\partial b_c^t} \qquad \epsilon_s^t = \frac{\partial L}{\partial s_c^t}$    (3.20)

Cell outputs

$\epsilon_c^t = \sum_{k=1}^{K} w_{ck} \delta_k^t + \sum_{g=1}^{G} w_{cg} \delta_g^{t+1}$    (3.21)

As $b_c^t$ is propagated to the output layer and to the hidden layer of the next time step in the forward pass, it is natural, when computing $\epsilon_c^t$, to receive the derivatives from both the output layer and the next hidden layer. G is introduced for convenience of representation.
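A single forward step of Equations (3.10)-(3.18) can be sketched as follows, vectorized over C one-cell blocks. We take f as the logistic sigmoid and g = h = tanh (one of the options mentioned in the text); the weight names (Wii, Whi, wci, ...) are our own shorthand for the $w_{iς}$, $w_{hς}$, $w_{cς}$, ... weights, and all values are illustrative.

```python
import numpy as np

# One LSTM forward step following Equations (3.10)-(3.18), including the
# peephole connections from the cell state to the three gates.  With one
# cell per block, the hidden layer size equals the number of cells C.

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, b_prev, s_prev, W):
    """Returns (block output b_c^t, cell state s_c^t)."""
    b_i = sigm(W['Wii'] @ x + W['Whi'] @ b_prev + W['wci'] * s_prev)  # (3.10)-(3.11)
    b_f = sigm(W['Wif'] @ x + W['Whf'] @ b_prev + W['wcf'] * s_prev)  # (3.12)-(3.13)
    a_c = W['Wic'] @ x + W['Whc'] @ b_prev                            # (3.14)
    s = b_f * s_prev + b_i * np.tanh(a_c)                             # (3.15)
    b_o = sigm(W['Wio'] @ x + W['Who'] @ b_prev + W['wco'] * s)       # (3.16)-(3.17)
    return b_o * np.tanh(s), s                                        # (3.18)

rng = np.random.default_rng(4)
I_, C_ = 3, 4                      # input size, number of cells/blocks
W = {k: rng.normal(size=(C_, I_)) for k in ('Wii', 'Wif', 'Wic', 'Wio')}
W.update({k: rng.normal(size=(C_, C_)) for k in ('Whi', 'Whf', 'Whc', 'Who')})
W.update({k: rng.normal(size=C_) for k in ('wci', 'wcf', 'wco')})     # peepholes
b, s = np.zeros(C_), np.zeros(C_)  # initial values set to 0, as in the text
for x in rng.normal(size=(5, I_)): # process a sequence of length T = 5
    b, s = lstm_step(x, b, s, W)
```

Note how the peephole terms use elementwise products: each cell's state feeds only its own gates.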
Output gates

$$\delta_\omega^t = f'(a_\omega^t) \sum_{c=1}^{C} h(s_c^t)\, \epsilon_c^t \qquad (3.22)$$

States

$$\epsilon_s^t = b_\omega^t h'(s_c^t)\, \epsilon_c^t + b_\phi^{t+1} \epsilon_s^{t+1} + w_{c\varsigma} \delta_\varsigma^{t+1} + w_{c\phi} \delta_\phi^{t+1} + w_{c\omega} \delta_\omega^t \qquad (3.23)$$

Cells

$$\delta_c^t = b_\varsigma^t g'(a_c^t)\, \epsilon_s^t \qquad (3.24)$$

Forget gates

$$\delta_\phi^t = f'(a_\phi^t) \sum_{c=1}^{C} s_c^{t-1} \epsilon_s^t \qquad (3.25)$$

Input gates

$$\delta_\varsigma^t = f'(a_\varsigma^t) \sum_{c=1}^{C} g(a_c^t)\, \epsilon_s^t \qquad (3.26)$$

Variants

There exist many variants of the basic LSTM architecture. Globally, they can be divided into chain-structured LSTM and non-chain-structured LSTM.

Bidirectional LSTM

Replacing the hidden layer units of a BRNN with LSTM memory blocks yields Bidirectional LSTM [START_REF] Graves | Framewise phoneme classification with bidirectional lstm networks[END_REF]. An LSTM network processes the input sequence from past to future, while Bidirectional LSTM, consisting of 2 separate LSTM layers, models the sequence from the two opposite directions (past to future and future to past) in parallel. Both LSTM layers are connected to the same output layer. With this setup, the complete long-term past and future context is available at each time step for the output layer.

Deep BLSTM

DBLSTM [START_REF] Graves | Hybrid speech recognition with deep bidirectional lstm[END_REF] can be created by stacking multiple BLSTM layers on top of each other in order to get a higher-level representation of the input data. As illustrated in Figure 3.7, the outputs of the two opposite hidden layers at one level are concatenated and used as the input to the next level.

Non-chain-structured LSTM

A limitation of the network topologies described thus far is that they only allow sequential information propagation (as shown in Figure 3.8a), since the cell contains a single recurrent connection (modulated by a single forget gate) to its own previous value. Recently, research on LSTM has gone beyond sequential structures. The one-dimensional LSTM was extended to n dimensions by using n recurrent connections (one for each of the cell's previous states along every dimension) with n forget gates.
It is named Multidimensional LSTM (MDLSTM) and is dedicated to the graph structure of an n-dimensional grid, such as images [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. In [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], the basic LSTM architecture was extended to tree structures, the Child-Sum Tree-LSTM and the N-ary Tree-LSTM, allowing for richer network topologies (Figure 3.8b) where each unit is able to incorporate information from multiple child units. In parallel to the work in [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], [START_REF] Zhu | Long short-term memory over recursive structures[END_REF] explores a similar idea. The DAG-structured LSTM was proposed for semantic compositionality [START_REF] Zhu | Dag-structured long short-term memory for semantic compositionality[END_REF]. In a later chapter, we will extend the chain-structured BLSTM to a tree-based BLSTM which is similar to the above-mentioned work, and apply this new network model to online math expression recognition.

Connectionist temporal classification (CTC)

RNNs' memory capability makes them well suited to sequence labeling tasks where context is important. To apply such a recurrent network to sequence labeling, at least a loss function should be defined for the training process. In the typical frame-wise training method, we need to know the ground truth label for each time step to compute the errors, which means pre-segmented training data is required. The network is trained to make a correct label prediction at each point. However, both the pre-segmentation and making a label prediction at each point are heavy burdens on the users or the networks. The technique of CTC was proposed to solve these two points.
It is specifically designed for sequence labeling problems where the alignment between the inputs and the target labels is unknown. By introducing an additional 'blank' class, CTC allows the network to make label predictions at some points instead of at each point of the input sequence, so long as the overall sequence of labels is correct. We introduce CTC briefly here; for a more detailed description, refer to A. Graves' book [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF].

From outputs to labelings

CTC consists of a softmax output layer with one more unit (blank) than there are labels in the alphabet. Suppose the alphabet is $A$ ($|A| = N$); the extended alphabet is $A' = A \cup \{blank\}$. Let $y_k^t$ denote the probability of outputting the $k$-th label of $A'$ at the $t$-th time step given the input sequence $X$ of length $T$, where $k$ goes from 1 to $N+1$ and $t$ from 1 to $T$. Let $A'^T$ denote the set of sequences over $A'$ of length $T$; any sequence $\pi \in A'^T$ is referred to as a path. Then, assuming the output probabilities at each time step to be independent of those at other time steps, the probability of outputting a sequence $\pi$ is:

$$p(\pi|X) = \prod_{t=1}^{T} y_{\pi_t}^t \qquad (3.27)$$

The next step is to get from $\pi$ the real possible labeling of $X$. A many-to-one function $F : A'^T \rightarrow A^{\leq T}$ is defined from the set of paths onto the set of possible labelings of $X$ to do this task. Specifically, first remove the repeated labels and then the blanks (-) from the paths. For example, considering an input sequence of length 11, two possible paths could be cc--aaa-tt- and c---aa--ttt. The mapping function works like: F(cc--aaa-tt-) = F(c---aa--ttt) = cat.
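The many-to-one map $F$ is straightforward to implement. Below is a minimal sketch (the function name is ours): merge consecutive repeated labels first, then drop blanks.

```python
def ctc_collapse(path, blank='-'):
    """F: A'^T -> A^{<=T} -- merge repeated labels, then remove blanks."""
    out = []
    for i, c in enumerate(path):
        if i > 0 and c == path[i - 1]:
            continue  # consecutive repeat: keep only the first occurrence
        if c != blank:
            out.append(c)
    return ''.join(out)
```

Both example paths of length 11 above collapse to 'cat', while a path of only blanks collapses to the empty labeling.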
Since the paths are mutually exclusive, the probability of a labeling $l \in A^{\leq T}$ can be calculated by summing the probabilities of all the paths mapped onto it by $F$:

$$p(l|X) = \sum_{\pi \in F^{-1}(l)} p(\pi|X) \qquad (3.28)$$

Forward-backward algorithm

In Section 3.4.1, we defined the probability $p(l|X)$ as the sum of the probabilities of all the paths mapped onto $l$. The calculation seems problematic because the number of paths grows exponentially with the length of the input sequence. Fortunately it can be solved with a dynamic-programming algorithm similar to the forward-backward algorithm for Hidden Markov Models (HMM) [START_REF] Bourlard | Connectionist speech recognition: a hybrid approach[END_REF]. Consider a modified label sequence $l'$, with blanks added to the beginning and the end of $l$, and inserted between every pair of consecutive labels. If the length of $l$ is $U$, then the length of $l'$ is $U' = 2U + 1$. For a labeling $l$, let the forward variable $\alpha(t, u)$ denote the summed probability of all length-$t$ paths that are mapped by $F$ onto the length-$\lfloor u/2 \rfloor$ prefix of $l$, and let the set $V(t, u) = \{\pi \in A'^t : F(\pi) = l_{1:\lfloor u/2 \rfloor}, \pi_t = l'_u\}$, where $u$ goes from 1 to $U'$ and $u/2$ is rounded down to an integer value. Thus:

$$\alpha(t, u) = \sum_{\pi \in V(t,u)} \prod_{i=1}^{t} y_{\pi_i}^i \qquad (3.29)$$

All the possible paths mapped onto $l$ start with either a blank (-) or the first label ($l_1$) of $l$, so we have:

$$\alpha(1, 1) = y_{-}^1 \qquad (3.30)$$
$$\alpha(1, 2) = y_{l_1}^1 \qquad (3.31)$$
$$\alpha(1, u) = 0, \ \forall u > 2 \qquad (3.32)$$

The forward variables at time $t$ can then be calculated recursively from those at time $t-1$:

$$\alpha(t, u) = y_{l'_u}^t \sum_{i=f(u)}^{u} \alpha(t-1, i), \ \forall t > 1 \qquad (3.33)$$

where

$$f(u) = \begin{cases} u-1 & \text{if } l'_u = blank \text{ or } l'_{u-2} = l'_u \\ u-2 & \text{otherwise} \end{cases} \qquad (3.34)$$

Note that

$$\alpha(t, u) = 0, \ \forall u < U' - 2(T - t) - 1 \qquad (3.35)$$

Given the above formulation, the probability of $l$ can be expressed as the sum of the forward variables with and without the final blank at time $T$:

$$p(l|X) = \alpha(T, U') + \alpha(T, U' - 1) \qquad (3.36)$$

Figure 3.9 illustrates the CTC forward algorithm.
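The forward recursion of Equations 3.29-3.36 can be sketched directly. The following is a minimal numpy version with 0-based indexing, assuming `y[t][k]` holds the softmax outputs and the target labeling is given as a list of class indices; it is an illustration, not the thesis implementation.

```python
import numpy as np

def ctc_forward(y, label, blank=0):
    """Return p(label | X) via the CTC forward variables alpha(t, u)."""
    T = len(y)
    lp = [blank]                      # extended sequence l'
    for k in label:
        lp += [k, blank]
    Up = len(lp)                      # U' = 2U + 1
    alpha = np.zeros((T, Up))
    alpha[0][0] = y[0][lp[0]]         # Eqn 3.30
    if Up > 1:
        alpha[0][1] = y[0][lp[1]]     # Eqn 3.31
    for t in range(1, T):
        for u in range(Up):
            s = alpha[t - 1][u] + (alpha[t - 1][u - 1] if u >= 1 else 0.0)
            # skip over a blank only between two distinct labels (Eqn 3.34)
            if u >= 2 and lp[u] != blank and lp[u] != lp[u - 2]:
                s += alpha[t - 1][u - 2]
            alpha[t][u] = y[t][lp[u]] * s
    return alpha[T - 1][Up - 1] + alpha[T - 1][Up - 2]   # Eqn 3.36
```

With two time steps, outputs [[0.7, 0.3], [0.6, 0.4]] (blank first) and the target labeling 'A' (index 1), the three contributing paths '-A', 'A-' and 'AA' sum to 0.28 + 0.18 + 0.12 = 0.58.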
Similarly, we define the backward variable $\beta(t, u)$ as the summed probability of all paths starting at $t+1$ that complete $l$ when appended to any path contributing to $\alpha(t, u)$. Let $W(t, u) = \{\pi \in A'^{T-t} : F(\pi' + \pi) = l, \ \forall \pi' \in V(t, u)\}$ denote this set of paths. Thus:

$$\beta(t, u) = \sum_{\pi \in W(t,u)} \prod_{i=1}^{T-t} y_{\pi_i}^{t+i} \qquad (3.37)$$

The formulas below are used for the initialization and recursive computation of $\beta(t, u)$:

$$\beta(T, U') = 1 \qquad (3.38)$$
$$\beta(T, U' - 1) = 1 \qquad (3.39)$$
$$\beta(T, u) = 0, \ \forall u < U' - 1 \qquad (3.40)$$
$$\beta(t, u) = \sum_{i=u}^{g(u)} \beta(t+1, i)\, y_{l'_i}^{t+1} \qquad (3.41)$$

where

$$g(u) = \begin{cases} u+1 & \text{if } l'_u = blank \text{ or } l'_{u+2} = l'_u \\ u+2 & \text{otherwise} \end{cases} \qquad (3.42)$$

Note that

$$\beta(t, u) = 0, \ \forall u > 2t \qquad (3.43)$$

If we reverse the direction of the arrows in Figure 3.9, it becomes an illustration of the CTC backward algorithm.

Loss function

The CTC loss function $L(S)$ is defined as the negative log probability of correctly labeling all the training examples in some training set $S$. Suppose that $z$ is the ground truth labeling of the input sequence $X$; then:

$$L(S) = -\ln \prod_{(X,z) \in S} p(z|X) = -\sum_{(X,z) \in S} \ln p(z|X) \qquad (3.44)$$

BLSTM networks can be trained to minimize the differentiable loss function $L(S)$ using any gradient-based optimization algorithm. The basic idea is to find the derivative of the loss function with respect to each of the network weights, then adjust the weights in the direction of the negative gradient. The loss function for a single training sample is defined as:

$$L(X, z) = -\ln p(z|X) \qquad (3.45)$$

and therefore

$$L(S) = \sum_{(X,z) \in S} L(X, z) \qquad (3.46)$$

The derivative of the loss function with respect to each network weight can be written as:

$$\frac{\partial L(S)}{\partial w} = \sum_{(X,z) \in S} \frac{\partial L(X, z)}{\partial w} \qquad (3.47)$$

The forward-backward algorithm introduced in Section 3.4.2 can be used to compute $L(X, z)$ and its gradient. We only provide the final formulas in this thesis; the derivation can be found in [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF].
$$L(X, z) = -\ln \sum_{u=1}^{|z'|} \alpha(t, u)\, \beta(t, u) \qquad (3.48)$$

To find the gradient, the first step is to differentiate $L(X, z)$ with respect to the network outputs $y_k^t$:

$$\frac{\partial L(X, z)}{\partial y_k^t} = -\frac{1}{p(z|X)\, y_k^t} \sum_{u \in B(z,k)} \alpha(t, u)\, \beta(t, u) \qquad (3.49)$$

where $B(z, k) = \{u : z'_u = k\}$ is the set of positions where label $k$ occurs in $z'$. Then we continue to backpropagate the loss through the output layer:

$$\frac{\partial L(X, z)}{\partial a_k^t} = y_k^t - \frac{1}{p(z|X)} \sum_{u \in B(z,k)} \alpha(t, u)\, \beta(t, u) \qquad (3.50)$$

and finally through the entire network during training.

Decoding

We discussed above how to train an RNN with the CTC technique; the next step is to label some unknown input sequence $X$ in the test set with the trained model by choosing the most probable labeling $l^*$:

$$l^* = \arg\max_l p(l|X) \qquad (3.51)$$

The task of labeling unknown sequences is called decoding, a terminology coming from hidden Markov models (HMMs). In this section, we briefly introduce several approximate methods that perform well in practice. Likewise, we refer the interested reader to [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF] for a detailed description. We also design new decoding methods suitable for the tasks of this thesis in later chapters.

Best path decoding

Best path decoding is based on the assumption that the most probable path corresponds to the most probable labeling:

$$l^* \approx F(\pi^*) \qquad (3.52)$$

where $\pi^* = \arg\max_\pi p(\pi|X)$. It is simple to find $\pi^*$: just concatenate the most active outputs at each time step. However, best path decoding can lead to errors when a label is weakly predicted for several successive time steps. Figure 3.10 illustrates one of the failure cases. In this simple case with just two time steps, the most probable path found with best path decoding is '--' with probability $0.42 = 0.7 \times 0.6$, and therefore the final labeling is empty (blank). In fact, the summed probability of the paths corresponding to the labeling 'A' is 0.58, greater than 0.42.
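Best path decoding takes only a few lines; the sketch below (names ours) reproduces the failure case just described, where the greedy path '--' beats every individual path of 'A' even though 'A' has the larger summed probability.

```python
import numpy as np

def best_path_decode(y, alphabet, blank='-'):
    """Concatenate the most active output at each time step, then collapse."""
    path = [alphabet[int(np.argmax(frame))] for frame in y]
    out = []
    for i, c in enumerate(path):
        if (i == 0 or c != path[i - 1]) and c != blank:
            out.append(c)
    return ''.join(out)
```

With `y = [[0.7, 0.3], [0.6, 0.4]]` and `alphabet = ['-', 'A']`, the most probable path is '--' and the decoded labeling is empty, illustrating why the approximation can fail.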
Prefix search decoding

Prefix search decoding is a best-first search through the tree of labelings, where the children of a given labeling are those that share it as a prefix. At each step the search extends the labeling whose children have the largest cumulative probability. As can be seen in Figure 3.11, there exist 2 types of nodes in this tree: end nodes ('e') and extending nodes. An extending node extends the prefix at its parent node, and the number above it is the total probability of all labelings beginning with that prefix. An end node denotes that the labeling ends at its parent, and the number above it is the probability of the single labeling ending at its parent. At each iteration, we explore the extensions of the most probable remaining prefix. The search ends when a single labeling is more probable than any remaining prefix. Prefix search decoding will find the most probable labeling given enough time. However, the number of prefixes it must expand grows exponentially with the input sequence length, which largely limits the feasibility of its application.

Constrained decoding

Constrained decoding refers to the situation where we constrain the output labelings according to some predefined grammar. For example, in word recognition, the final transcriptions are usually required to form sequences of dictionary words. Here, we only consider single word decoding, which means all word-to-word transitions are forbidden. With regard to single word recognition, if the number of words in the target sequence is fixed, one possible method is the following: considering an input sequence $X$, for each word $wd$ in the dictionary, we first calculate $p(wd|X)$, the sum of the probabilities of all the paths $\pi$ which can be mapped onto $wd$, using the forward-backward algorithm described in Section 3.4.2; then, we assign to $X$ the word holding the maximum probability.
II Contributions

4 Mathematical expression recognition with single path

As is well known, BLSTM networks with a CTC output layer have achieved great success in sequence labeling tasks, such as text and speech recognition. This success is due to the LSTM's ability to capture long-term dependencies in a sequence and to the effectiveness of the CTC training method. In this chapter, we explore the idea of using the sequence-structured BLSTM with a CTC stage to recognize 2-D handwritten mathematical expressions (Figure 4.1). CTC allows the network to make label predictions at any point in the input sequence, so long as the overall sequence of labels is correct. This behaviour is not well suited to our case, in which a relatively precise alignment between the input and the output is required. Thus, a local CTC methodology is proposed, aiming to constrain the network to emit, once or several times, the same non-blank label inside a given stroke. This chapter is organized as follows: Section 4.1 globally introduces the proposal of building a stroke label graph from a sequence of labels, along with the limitations of this stage. Then, the entire process of generating the sequence of labels from the input with BLSTM and local CTC is presented in detail, first the feeding of the BLSTM inputs, then the training and recognition stages. The experiments and discussion are presented in Sections 4.3 and 4.4 respectively.

From single path to stroke label graph

This section focuses on the idea of building a SLG from a single path. First, a classification of the degree of complexity of math expressions is given to help understand the different difficulties and the cases that can or cannot be solved by the proposed approach.
Complexity of expressions

Expressions can be divided into two groups: (1) expressions whose SRT is a simple chain, and (2) expressions whose SRT goes beyond the chain structure.

The proposed idea

Currently in CROHME, SLG is the official format to represent the ground truth of handwritten math expressions and also the recognition outputs. The recognition system proposed in this thesis aims to output the SLG directly for each input expression. To be precise, we use 'correct SLG' to denote a SLG which equals the ground truth, and 'valid SLG' to denote a graph where double-direction edges carry segmentation information and all strokes (nodes) belonging to one symbol have the same input and output edges. In this section, we explain how to build a valid SLG from a sequence of strokes. An input handwritten mathematical expression consists of one or more strokes. The sequence of strokes in an expression can be described as $S = (s_1, ..., s_n)$. For $i < j$, we assume $s_i$ has been entered before $s_j$. A path (different from the notation of the CTC part) in a SLG can be defined as $\Phi_i = (n_0, n_1, n_2, ..., n_e)$, where $n_0$ is the starting node and $n_e$ is the end node. The set of nodes of $\Phi_i$ is $n(\Phi_i) = \{n_0, n_1, n_2, ..., n_e\}$ and the set of edges of $\Phi_i$ is $e(\Phi_i) = \{n_0 \rightarrow n_1, n_1 \rightarrow n_2, ..., n_{e-1} \rightarrow n_e\}$, where $n_i \rightarrow n_{i+1}$ denotes the edge from $n_i$ to $n_{i+1}$. In fact, the sequence of strokes described as $S = (s_1, ..., s_n)$ is exactly the path following the stroke writing order (called the time path, $\Phi_t$) in the SLG. Still taking '2 + 2' as an example, the time path is presented in red in Figure 4.3a. If all nodes and edges of $\Phi_t$ are well classified during the recognition process, we obtain a chain-SLG as in Figure 4.3b. We propose to get a complete (i.e. valid) SLG from $\Phi_t$ by adding the edges which can be deduced from the labeled path, to obtain a coherent SLG as depicted in Figure 4.3c.
The process can be seen as: (1) complete the double-direction segmentation edges between the strokes belonging to the same symbol; (2) add the input and output edges shared by all the strokes of a symbol. Considering both the nodes and edges, we rewrite the time path $\Phi_t$ shown in Figure 4.3b in the format $(s_1, s_1 \rightarrow s_2, s_2, s_2 \rightarrow s_3, s_3, s_3 \rightarrow s_4, s_4)$, labeled as (2, R, +, +, +, R, 2). This sequence alternates the node labels {2, +, +, 2} and the edge labels {R, +, R}. Given the labeled sequence (2, R, +, +, +, R, 2), the information that $s_2$ and $s_3$ belong to the same symbol '+' can be derived. With the rule that a double-direction edge represents segmentation information, the edge from $s_3$ to $s_2$ is added automatically. According to the rule that all strokes in a symbol have the same input and output edges, the edges from $s_1$ to $s_3$ and from $s_2$ to $s_4$ are added automatically. The added edges are shown in bold in Figure 4.3c. In this case a correct SLG is built from $\Phi_t$. Our proposal of building the SLG from the time path works well on chain-SRT expressions as long as each symbol is written successively and the symbols are entered following the order from the root to the leaf of the SRT. Successful cases include linear expressions such as 2 + 2 mentioned previously and a part of the 2-D expressions, such as $P^{eo}$ shown in Figure 4.4a. The sequence of strokes and edges is (P, P, P, Superscript, e, R, o). All the spatial relationships are covered in it, and naturally a correct SLG can be generated. Usually users enter the expression $P^{eo}$ following the order P, e, o. Yet the input order e, o, P is also possible. In this case, the corresponding sequence of strokes and edges is (e, R, o, _, P, P, P). Since there is no edge from o to P in the SLG, we use _ to represent it. Obviously, it is not possible to build a complete and correct SLG with this sequence of labels, where the Superscript relationship from P to e is missing. As a conclusion, for a chain-SRT expression written in a specific order, a correct SLG can be built using the time path.
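The first completion step, recovering which consecutive strokes form one symbol from the alternating (node, edge, node, ...) label sequence, can be sketched as follows. This is our own illustrative sketch under the labeling convention described above, not the thesis code: an edge carrying the same label as both neighbouring nodes marks two strokes of the same symbol.

```python
def group_strokes(seq):
    """seq alternates node and edge labels, e.g. ['2','R','+','+','+','R','2'].
    Return the 0-based stroke indices of each segmented symbol."""
    nodes, edges = seq[0::2], seq[1::2]
    groups = [[0]]
    for i, e in enumerate(edges):
        if e == nodes[i] == nodes[i + 1]:
            groups[-1].append(i + 1)   # same symbol as the previous stroke
        else:
            groups.append([i + 1])     # a new symbol starts
    return groups
```

For the '2 + 2' example labeled (2, R, +, +, +, R, 2), this groups strokes as {s1}, {s2, s3}, {s4}; the segmentation and shared input/output edges of the SLG then follow from these groups.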
For those 2-D expressions whose SRTs are beyond the chain structure, the proposal has an intrinsic limitation. Figure 4.4c presents a failure case. In time order, 2 and h are neighbors, but there is no edge between them, as can be seen in Figure 4.4d. In the best case the system can output the sequence of stroke and edge labels (r, Superscript, 2, _, h). The Right relationship existing between r and h, drawn in red in Figure 4.4d, is missing from this sequence. It is not possible to build the correct SLG with (r, Superscript, 2, _, h). If we change the writing order, first r, h and then 2, the time sequence is (r, Right, h, _, 2). Yet we still cannot build a correct SLG, the Superscript relationship being missing. Aware of this limitation, we use the 1-D time sequence of strokes to train the BLSTM, and the sequence of labels output during recognition is used to generate a valid SLG.

Detailed Implementation

An online mathematical expression is a sequence of strokes described as $S = (s_1, ..., s_n)$. In this section, we present the process of generating the above-mentioned 1-D sequence of labels from $S$ with the BLSTM and local CTC model. The CTC layer only outputs the final sequence of labels, while the alignment between the inputs and the labels remains unknown. A BLSTM with a CTC model may emit the labels before, after or during the segments (strokes). Furthermore, it tends to glue together successive labels that frequently co-occur [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. However, the label of each stroke is required to build the SLG, which means the alignment information between the sequence of strokes and the sequence of labels must be provided. Thus, we propose local CTC, constraining the network to emit the label during the segment (stroke), not before or after. The first part is to feed the inputs of the BLSTM with $S$.
Then, we focus on the network training process, the local CTC methodology. Lastly, the recognition strategies adopted in this chapter are explained in detail.

BLSTM Inputs

To feed the inputs of the BLSTM, it is important to scan the points belonging to the strokes themselves (on-paper points) as well as the points separating one stroke from the next one (in-air points). We expect that the visible strokes will be labeled with the corresponding symbol labels and that the non-visible strokes connecting two visible strokes will be assigned one of the possible edge labels (a relationship label, a symbol label or '_'). Thus, besides re-sampling points from visible strokes, we also re-sample points from the straight line which links two visible strokes, as can be seen in Figure 4.5. In the rest of this thesis, given an expression, we first re-sample points both from the visible strokes and from the invisible strokes which connect two successive visible strokes in time order. The 1-D unlabeled sequence can be described as $\{strokeD_1, strokeU_2, strokeD_3, strokeU_4, ..., strokeD_K\}$ with $K$ being the number of re-sampled strokes. Note that if $s$ is the number of visible strokes in this path, $K = 2s - 1$. Each stroke ($strokeD$ or $strokeU$) consists of one or more points. At a given time step, the input provided to the BLSTM is the feature vector extracted from one point. Without a CTC output layer, the ground truth of every point would be required for the BLSTM training process. With a CTC layer, only the target labels of the whole sequence are needed; pre-segmented training data is not required. In this chapter, a local CTC technology is proposed and the ground truth of each stroke is required. The label of $strokeD_i$ is assigned the label of the corresponding node in the SLG; the label of $strokeU_i$ is assigned the label of the corresponding edge in the SLG. If no corresponding edge exists, the label NoRelation, written '_', is used.
Features

A stroke is a sequence of points sampled from the trajectory of a writing tool between a pen-down and a pen-up at a fixed interval of time. An additional re-sampling is then performed with a fixed spatial step to get rid of the writing speed. The number of re-sampled points depends on the size of the expression: for each expression, we re-sample with $10 \times (length / avrdiagonal)$ points. Here, $length$ refers to the length of all the strokes in the path (including the gaps between successive strokes) and $avrdiagonal$ refers to the average diagonal of the bounding boxes of all the strokes in the expression. Since the features used in this work are independent of scale, the re-scaling operation can be omitted. Subsequently, we compute five local features per point, which are quite close to the state of the art [Álvaro et al., 2013, Awal et al., 2014]. For every point $p_i(x, y)$ we obtain 5 features (see Figure 4.6a): $[\sin\theta_i, \cos\theta_i, \sin\phi_i, \cos\phi_i, PenUD_i]$ with:

• $\sin\theta_i, \cos\theta_i$: the sine and cosine directors of the tangent of the stroke at point $p_i(x, y)$;

• $\phi_i = \Delta\theta_i$: the change of direction at point $p_i(x, y)$;

• $PenUD_i$: the state of pen-down or pen-up.

Even though BLSTM can access contextual information from the past and the future over a long range, it is still interesting to see whether a better performance is reachable when contextual features are added to the recognition task. Thus, we extract two contextual features for each point (see Figure 4.6b): $[\sin\psi_i, \cos\psi_i]$ with:

• $\sin\psi_i, \cos\psi_i$: the sine and cosine directors of the vector from the point $p_i(x, y)$ to its closest pen-down point which is not in the current stroke. For single-stroke expressions, $\sin\psi_i = 0, \cos\psi_i = 0$.

Note that the proposed features are size-independent and position-independent, therefore we omit the normalization process in this thesis.
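The five local features can be computed with simple finite differences. The sketch below is our own approximation (the thesis does not specify how the tangent is estimated): the tangent direction at a point is taken from its two neighbours, clamped at the stroke ends, and the change of direction is the difference of neighbouring tangents.

```python
import math

def point_features(stroke, pen_down=True):
    """Per-point features [sin th, cos th, sin ph, cos ph, PenUD].
    stroke: list of (x, y) points of one re-sampled stroke."""
    n = len(stroke)
    theta = []
    for i in range(n):
        xa, ya = stroke[max(i - 1, 0)]
        xb, yb = stroke[min(i + 1, n - 1)]
        theta.append(math.atan2(yb - ya, xb - xa))  # tangent direction
    feats = []
    for i in range(n):
        phi = theta[min(i + 1, n - 1)] - theta[max(i - 1, 0)]  # change of direction
        feats.append([math.sin(theta[i]), math.cos(theta[i]),
                      math.sin(phi), math.cos(phi),
                      1.0 if pen_down else 0.0])
    return feats
```

On a straight horizontal pen-down stroke every point yields [0, 1, 0, 1, 1], as expected for a constant direction with no curvature.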
In the different experiments reported later, we will use either the 5 shape descriptors alone or the 7 features together, depending on the objective of each experiment.

Training process - local connectionist temporal classification

Frame-wise training of RNNs requires separate training targets for every segment or time step of the input sequence. Even when pre-segmented training data is available, it is known that BLSTM with a CTC stage performs better when a 'blank' label is introduced during training [START_REF] Bluche | Framewise and ctc training of neural networks for handwriting recognition[END_REF], so that decisions can be made only at some points of the input sequence. Of course, in doing so, a precise segmentation of the input sequence is not possible. As the label of each stroke is required to build a SLG, we should make decisions at the stroke level ($strokeD$ or $strokeU$) instead of at the sequence level (as classical CTC) or at the point level during the recognition process. Thus, a corresponding stroke-level training method is proposed. For instance, for a stroke re-sampled with 3 points and labeled c, the possible point-level label sequences are ccc, cc-, c--, --c, -cc and -c- ('-' denotes 'blank'). More generally, the number of possible label sequences is $n(n+1)/2$ ($n$ being the number of points), which is indeed 6 in the proposed example. In Section 3.4, the CTC technology proposed by Graves was introduced. We modify the CTC algorithm with a local strategy to let it output a relatively precise alignment between the input sequence and the output sequence of labels. In this way, it can be applied to the training stage of our proposed system. Given the input sequence $X$ of length $T$ consisting of $U$ strokes, $l$ denotes the ground truth, i.e. the sequence of labels. As one stroke belongs to at most one symbol or one relationship, the length of $l$ is $U$. $l'$ represents the label sequence with blanks added to the beginning and the end of $l$, and inserted between every pair of consecutive labels; its length is $U' = 2U + 1$.
The forward variable $\alpha(t, u)$ denotes the summed probability of all length-$t$ paths that are mapped by $F$ onto the length-$\lfloor u/2 \rfloor$ prefix of $l$, where $u$ goes from 1 to $U'$ and $t$ from 1 to $T$. Given the above notations, the probability of $l$ can be expressed as the sum of the forward variables with and without the final blank at time $T$:

$$p(l|X) = \alpha(T, U') + \alpha(T, U' - 1) \qquad (4.1)$$

In our case, $\alpha(t, u)$ can be computed recursively as follows:

$$\alpha(1, 1) = y_{-}^1 \qquad (4.2)$$
$$\alpha(1, 2) = y_{l_1}^1 \qquad (4.3)$$
$$\alpha(1, u) = 0, \ \forall u > 2 \qquad (4.4)$$
$$\alpha(t, u) = y_{l'_u}^t \sum_{i=f_{local}(u)}^{u} \alpha(t-1, i) \qquad (4.5)$$

where

$$f_{local}(u) = \begin{cases} u-1 & \text{if } l'_u = blank \\ u-2 & \text{otherwise} \end{cases} \qquad (4.6)$$

In the original Eqn. 3.34, the value $u-1$ was also assigned when $l'_{u-2} = l'_u$, preventing the direct transition from $\alpha(t-1, u-2)$ to $\alpha(t, u)$. This is the case when there are two repeated successive symbols in the final labeling: in the corresponding paths, there must exist at least one blank between these two symbols, otherwise only one of them would be obtained in the final labeling. In our case, as one label is selected for each stroke, this restriction can be dropped. Suppose that the input at time $t$ belongs to the $i$-th stroke ($i$ from 1 to $U$); then we have

$$\alpha(t, u) = 0, \ \forall u < 2i - 1 \text{ or } u > 2i + 1 \qquad (4.7)$$

which means the only reachable positions at time $t$ are $l'_{2i-1}$, $l'_{2i}$, $l'_{2i+1}$. Figure 4.8 demonstrates the local CTC forward-backward algorithm using the example '2a', which is written with 2 visible strokes. The corresponding label sequences $l$ and $l'$ are '2Ra' and '-2-R-a-' respectively (R stands for the Right relationship). We re-sampled 4 points for the pen-down stroke '2', 5 points for the pen-up stroke 'R' and 4 points for the pen-down stroke 'a'. From this figure, we can see that each part located on one stroke is exactly the CTC forward-backward algorithm. That is why the output layer adopted in this work is called local CTC.
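The local CTC forward pass differs from the standard one only in the simplified transition rule (Eqn 4.6) and in the per-stroke position constraint (Eqn 4.7). A minimal numpy sketch, where the `stroke_of` array mapping each frame to its stroke index is our own convention:

```python
import numpy as np

def local_ctc_forward(y, labels, stroke_of, blank=0):
    """p(l | X) under local CTC. y[t][k]: network outputs;
    labels: one target label per stroke; stroke_of[t]: 0-based
    index of the stroke containing frame t."""
    T, U = len(y), len(labels)
    lp = [blank]                       # extended sequence l'
    for k in labels:
        lp += [k, blank]
    Up = 2 * U + 1
    alpha = np.zeros((T, Up))
    alpha[0][0] = y[0][lp[0]]
    alpha[0][1] = y[0][lp[1]]
    for t in range(1, T):
        i = stroke_of[t]
        # Eqn 4.7: only positions 2i, 2i+1, 2i+2 (0-based) are reachable
        for u in range(2 * i, min(2 * i + 3, Up)):
            s = alpha[t - 1][u] + (alpha[t - 1][u - 1] if u >= 1 else 0.0)
            if u >= 2 and lp[u] != blank:   # Eqn 4.6: no repeat restriction
                s += alpha[t - 1][u - 2]
            alpha[t][u] = y[t][lp[u]] * s
    return alpha[T - 1][Up - 1] + alpha[T - 1][Up - 2]   # Eqn 4.1
```

A useful check is that, thanks to the stroke constraint, the result factorizes over strokes: with two 2-frame strokes the returned value equals the product of the two per-stroke label probabilities.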
Similarly, the backward variable $\beta(t, u)$ denotes the summed probability of all paths starting at $t+1$ that complete $l$ when appended to any path contributing to $\alpha(t, u)$. The formulas for the initialization and recursion of the backward variable in local CTC are as follows:

$$\beta(T, U') = 1 \qquad (4.8)$$
$$\beta(T, U' - 1) = 1 \qquad (4.9)$$
$$\beta(T, u) = 0, \ \forall u < U' - 1 \qquad (4.10)$$
$$\beta(t, u) = \sum_{i=u}^{g_{local}(u)} \beta(t+1, i)\, y_{l'_i}^{t+1} \qquad (4.11)$$

where

$$g_{local}(u) = \begin{cases} u+1 & \text{if } l'_u = blank \\ u+2 & \text{otherwise} \end{cases} \qquad (4.12)$$

Suppose that the input at time $t$ belongs to the $i$-th stroke ($i$ from 1 to $U$); then:

$$\beta(t, u) = 0, \ \forall u < 2i - 1 \text{ or } u > 2i + 1 \qquad (4.13)$$

With the local CTC forward-backward algorithm, $\alpha(t, u)$ and $\beta(t, u)$ are available for each time step $t$ and each allowed position $u$. The errors are then backpropagated to the output layer (Equation 3.49), to the hidden layer (Equation 3.50), and finally to the entire network. The weights of the network are adjusted with the expectation of enabling the network to output the corresponding label for each stroke. As can be seen in Figure 4.8, each part located on one stroke is exactly the CTC forward-backward algorithm. In this chapter, a sequence consisting of $U$ strokes is regarded and processed as a whole. In fact, each stroke $i$ could also be handled separately. To be specific, each stroke $i$ would have its own $\alpha_i(t, u)$, $\beta_i(t, u)$ and $p(l_i|X_i)$ associated to it, with the same initialization of $\alpha_i(t, u)$ and $\beta_i(t, u)$ as described previously. With this treatment, $p(l|X)$ can be expressed as:

$$p(l|X) = \prod_{i=1}^{U} p(l_i|X_i) \qquad (4.14)$$

Either way, the result is the same. We will come back to this point in Chapter 6, where the separate processing method is adopted.
Recognition Strategies

Once the network is trained, we would ideally label an unknown input sequence $X$ by choosing the most probable labeling $l^*$:

$$l^* = \arg\max_l p(l|X) \qquad (4.15)$$

Since local CTC is adopted in the training process, recognition should naturally be performed at the stroke level ($strokeD$ and $strokeU$). As explained in Section 4.1, to build the label graph we need to assign one single label to each stroke. At this stage, for each point (time step), the network outputs the probabilities of this point belonging to the different classes. Hence, a pooling strategy is required to go from the point level to the stroke level. We propose two decoding methods, maximum decoding and local CTC decoding, both operating at the stroke level.

Maximum decoding

Following the method taken in [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF] for isolated handwritten digit recognition using a multidimensional RNN with LSTM hidden layers, we first calculate the cumulative probabilities over the entire stroke. For stroke $i$, let $o_i = \{p_{ct}^i\}$, where $p_{ct}^i$ is the probability of outputting the $c$-th label at the $t$-th point. Suppose that we have $N$ classes of labels (including blank); then $c$ goes from 1 to $N$. As $|s_i|$ points are re-sampled for stroke $i$, $t$ goes from 1 to $|s_i|$. Thus, the cumulative probability of outputting the $c$-th label for stroke $i$ can be computed as

$$P_c^i = \sum_{t=1}^{|s_i|} p_{ct}^i \qquad (4.16)$$

Then we choose for stroke $i$ the label with the highest $P_c^i$ (excluding blank).

Local CTC decoding

With the output $o_i$, we choose the most probable label for the stroke $i$:

$$l_i^* = \arg\max_{l_i} p(l_i|o_i) \qquad (4.17)$$

In this work, each stroke outputs exactly one label, which means we have $N - 1$ possible stroke labels; blank is excluded because it cannot be a candidate label for a stroke. With the already known $N - 1$ labels, $p(l_i|o_i)$ can be calculated using the algorithm depicted in Section 4.2.3.
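The two stroke-level decoders can be sketched together. This is our own minimal sketch, assuming `y_stroke` holds the per-point class probabilities of one stroke with blank at index 0: maximum decoding sums the per-class probabilities over the stroke (Eqn 4.16), while local CTC decoding evaluates, for one candidate label $c$, the forward probability over $l' = (blank, c, blank)$.

```python
import numpy as np

def maximum_decode(y_stroke, blank=0):
    """Maximum decoding: best non-blank class by cumulative probability."""
    P = np.asarray(y_stroke).sum(axis=0)
    P[blank] = -np.inf                  # blank is excluded
    return int(np.argmax(P))

def local_ctc_label_prob(y_stroke, c, blank=0):
    """p(l_i | o_i) for one candidate label c: local CTC forward pass
    over l' = (blank, c, blank), returning alpha(|s_i|,3) + alpha(|s_i|,2)."""
    a = np.array([y_stroke[0][blank], y_stroke[0][c], 0.0])
    for frame in y_stroke[1:]:
        a = np.array([frame[blank] * a[0],
                      frame[c] * (a[0] + a[1]),
                      frame[blank] * (a[1] + a[2])])
    return a[1] + a[2]
```

In practice the second function would only be evaluated on the shortlist of candidates provided by the first one, as described in the next paragraph.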
Specifically, based on Eqn. 6.17 we can write Eqn. 4.18:

p(l_i|o_i) = α(|s_i|, 3) + α(|s_i|, 2) (4.18)

with T = |s_i| and U = 3 (l is (blank, label, blank)). For each stroke, we compute the probabilities corresponding to the N - 1 labels and then select the one with the largest value. In the mathematical expression recognition task, more than 100 different labels are involved; if Eqn. 4.18 were computed more than 100 times for every stroke, it would undoubtedly be time-consuming. A simplified strategy is therefore adopted. We sort the P^i_c from Eqn. 4.16 using maximum decoding and keep the top 10 probable labels (excluding blank). From these 10 candidates, we choose the one which has the highest p(l_i|o_i). In this way, Eqn. 4.18 is computed only 10 times for each stroke, greatly reducing the computation time. Furthermore, we add two constraints when choosing the label of a stroke: (1) the label of a strokeD should be one of the symbol labels, excluding the relationship labels, as for strokes 1, 3, 5, 7, 9, 11 in Figure 4.9; (2) the label of strokeU_i falls into two cases: if the labels of strokeD_{i-1} and strokeD_{i+1} are different, it should be one of the six relationships (strokes 2, 8, 10) or '_' (stroke 4); otherwise, it should be a relationship, '_' or the label of strokeD_{i-1} (= strokeD_{i+1}). Taking stroke 6 in Figure 4.9 as an example, assigning '+' to it means that the corresponding pair of nodes (strokes 5 and 7) belongs to the same symbol, while '_' or a relationship means the two nodes belong to two different symbols. Note that to satisfy these constraints on edge labels, the labels of pen-down strokes are chosen first, then those of pen-up strokes. After recognition, post-processing (adding edges) should be done in order to build the SLG; the way to proceed has already been introduced in Section 4.1.

Figure 4.9 - Illustration for the decision of the label of strokes.
As strokes 5 and 7 have the same label, the label of stroke 6 could be '+', '_' or one of the six relationships. All the other strokes are provided with the ground truth labels in this example.

Experiments

We extend the RNNLIB library. With the Label Graph Evaluation library (LgEval) [Mouchère et al., 2014], the recognition results can be evaluated at the symbol level and at the expression level. We introduce several evaluation criteria: symbol segmentation ('Segments') refers to a symbol that is correctly segmented whatever the label; symbol segmentation and recognition ('Seg+Class') refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.'): a correct spatial relationship between two symbols requires that both symbols are correctly segmented and carry the right relationship label. For all experiments the network architecture and configuration are as follows:

• Input layer size: 5, or 7 when considering the 2 additional context features
• Output layer size: the number of classes (up to 109)
• Hidden layers: 2 layers, forward and backward, each containing 100 single-cell LSTM memory blocks
• Weights: initialized uniformly in [-0.1, 0.1]
• Momentum: 0.9

This configuration has obtained good results in both handwritten text recognition [Graves et al., 2009] and handwritten math symbol classification [Álvaro et al., 2013, 2014a].

Data sets

Being aware of the limitations of our proposal related to the structures of expressions, we would like to see the performance of the current system on expressions of different complexities. Thus, three data sets are considered in this chapter. The blank label is only used for local CTC training. Figure 4.10 shows some handwritten math expression samples extracted from the CROHME 2014 data set.
Experiment 1: theoretical evaluation

As discussed in Section 4.1, there exist obvious limitations in the solution proposed in this chapter. These limitations can be divided into two types: (1) for chain-SRT expressions, if users do not write a multi-stroke symbol successively or do not follow a specific order to enter symbols, it will not be possible to build a correct SLG; (2) for those expressions whose SRTs are beyond the chain structure, regardless of the writing order, the proposed solution will miss some relationships. In this experiment, laying the classifier aside temporarily, we would like to evaluate the limitations of the proposal itself. Thus, to carry out this theoretical evaluation, we take the ground truth labels of the nodes and edges in the time path only of each expression. Table 4.1 and Table 4.2 present the evaluation results on the CROHME 2014 test set at the symbol and expression level respectively using the above-mentioned strategy. We can see from Table 4.1 that the recall ('Rec.') and precision ('Prec.') rates of the symbol segmentation on all 3 data sets are almost 100%, which implies that users generally write a multi-stroke symbol successively. The recall rate of the relationship recognition decreases from Data set 1 to 3 while the precision rate remains almost 100%. With the growing complexity of expressions, more and more relationships are missed due to the limitations. About 5% of relationships are missed in Data set 1 because of the writing order alone. The approximately 25% of relationships omitted in Data set 3 are owing to the writing order and to the conflicts between the chain representation method and the tree structure of expressions, especially the latter. Table 4.2 gives the evaluation results at the expression level: at most 86.79% of Data set 1, which contains only 1-D expressions, could be recognized correctly with the proposal.
For the complete CROHME 2014 test set, only 34.11% of expressions can be interpreted correctly in the best case.

Experiment 2

In this experiment, we evaluate the proposed solution with the BLSTM classifier on data sets of different complexity. Local CTC training and local CTC decoding are used inside the recognition system. Only 5 local features are extracted at each point for training. Each system is trained only once. The evaluation results at the symbol level for the 3 data sets are provided in Table 4.3, including recall ('Rec.') and precision ('Prec.') rates for 'Segments', 'Seg+Class' and 'Tree Rels.'. As can be seen, the results for 'Segments' and 'Seg+Class' increase as the training data set grows. The recall for 'Tree Rels.' decreases across the three data sets. This is understandable since the number of missed relationships grows with the complexity of expressions, given the limitation of our method. The precision for 'Tree Rels.' fluctuates as the data set expands. The results on Data set 3 are comparable to the results of CROHME 2014 because the same training and testing data sets are used. The second part of Table 4.3 gives the symbol level evaluation results of the participant systems in CROHME 2014, sorted by recall of correct symbol segmentation. The best 'Rec.' of 'Segments' and 'Seg+Class' reported in CROHME 2014 are 98.42% and 93.91% respectively. Ours are 93.26% and 84.40%, both ranked 3 out of 8 systems (7 participants in CROHME 2014 + our system). Our solution presents competitive results on the symbol recognition and segmentation tasks even though symbols with delayed strokes are missed. However, our proposal, at this stage, shows limited performance at the relationship level, with 'Rec.' = 61.85% and 'Prec.' = 75.06%. This is mainly because approximately 25% of relationships are missed in the time sequence.
If we consider only the relationships covered by the time sequence, which account for 75.54% of the total, the recall rate becomes 61.85% / 75.54% = 81.88%, close to the second ranked system in the competition. Thus, one of the main goals of the next chapters will be to propose a solution to catch the approximately 25% of relationships omitted at the modeling stage. We present a correctly recognized sample and an incorrectly recognized sample in Figure 4.11 and Figure 4.12 respectively. The expression a ≥ b (Figure 4.11) is a 1-D expression and therefore the time path can cover all the relationships in this expression; it was correctly recognized by our system. Considering the other sample, 44 - 4/4, whose SRT is a tree structure (Figure 4.12), the Right relationship from the minus symbol to the fraction bar was omitted at the modeling stage, as was the Above relationship from the fraction bar to the numerator 4. In addition, the relation from the minus symbol to the numerator 4 was wrongly recognized as Right when it should be labeled as NoRelation.

Experiment 3

In this experiment, we would like to know whether different training and decoding methods and the contextual features improve the performance of our recognition system. We use different training methods and different features to train the recognition system, and take two kinds of strategies at the recognition stage. All the systems in this part are trained and evaluated on Data set 3. Since the weights inside the network are initialized randomly, each system is trained four times in order to compute mean evaluation values and standard deviations, and therefore obtain convincing conclusions and an idea of the system's stability. As shown in

Discussion

The capability of BLSTM networks to process graphical two-dimensional languages such as handwritten mathematical expressions is explored in this chapter as a first attempt.
Using online math expressions, which are available as a temporal sequence of strokes, we produce a labeling at the stroke level using a BLSTM network with a local CTC output layer. Then we propose to build a two-dimensional (2-D) expression from this sequence of labels. Our solution presents competitive results on the CROHME 2014 data set for the symbol recognition and segmentation tasks. Proposing a global solution that performs segmentation, recognition and interpretation at one time, with no dedicated stages, is a major advantage of the proposed solution. To some extent, at the present time, it fails on the relationship recognition task. This is primarily due to an intrinsic limitation: currently, a single path following the time sequence of strokes in the SLG is used to build the expression, so some important relationships are omitted at the modeling stage. We only considered stroke combinations in time series in the work of this chapter. In the coming chapter, the proposed solution will take into account more possible stroke combinations in both time and space, so that fewer relationships are missed at the modeling stage. A sequential model cannot include temporal and spatial information at the same time. To overcome this limitation, we propose to build a graph from the time sequence of strokes to model more accurately the relationships between strokes.

Mathematical expression recognition by merging multiple paths

In Chapter 4, we confirmed that there exist inherent limitations when using a single 1-D path to model expressions. This conclusion was verified from both theoretical and experimental points of view. The sequence of strokes arranged in time order was used in those experiments as an example of a 1-D path since it is the most intuitive and readily available. Due to these limitations, in this chapter we turn to a graph structure to model the relationships between strokes in mathematical expressions.
Further, using the sequence classifier BLSTM to label the graph structure is another research focus. This chapter is organized as follows: Section 5.1 provides an overview of graph representation related to building a graph from a raw mathematical expression. Then we globally describe the framework of mathematical expression recognition by merging multiple paths in Section 5.2. Next, all the steps of the recognition system are explained one by one in detail. Finally, the experiments and the discussion are presented in Section 5.4 and Section 5.5 respectively.

Overview of graph representation

Each mathematical expression consists of a sequence of strokes. Relations between two strokes can be divided into 3 types: belonging to the same symbol (segmentation), one of the 6 spatial relationships, or no relation. It is possible to describe a ME at the stroke level using a SLG whose nodes represent strokes, while the edges encode either segmentation information or one of the spatial relationships. If there is no relation between two strokes, no corresponding edge is found between them in the SLG. All the above discussion supposes the knowledge of the ground truth. In fact, given a handwritten expression, our work is to find the ground truth. Thus the first step is to derive an intermediate graph from the raw information. Specifically, it involves finding the pairs of strokes between which there exist relations (represented as edges). In [Hu, 2016], this stage is called graph representation. We could find all the ground truth edges (100% recall) by adding an edge between every pair of strokes in the derived graph. However, this exhaustive approach brings at the same time the problem of low precision. For an expression with N strokes, if we consider all the possibilities, there would be N(N - 1) edges in the derived graph.
Compared to the ground truth SLG, many of these edges do not exist. Suppose that all symbols in this expression are single-stroke symbols; then there are only N - 1 ground truth edges, and the precision is 1/N. Apparently this exhaustive solution, with a 100% recall and a precision of about 1/N, is impractical: even though a later classifier could reject these invalid edges to some extent, they remain a heavy burden. Thus, better graph models should be explored. In this section, we introduce several models used in the literature as a basis for the model proposed in this thesis. Time Series (TS) is widely used as a model for the task of math symbol segmentation and recognition in previous works [Hu et al.; Koschinski et al.; Kosmala et al.; Yu et al.; Smithies et al.; Winkler and Lang, 1997a,b]. In this model, strokes are represented as nodes, and between two successive strokes in the input order there is an (undirected) edge connecting them. We also considered this model, but in a directed version, in Chapter 4, where it is called the time path. Time Series is a good model for symbol segmentation and recognition since people usually write symbols with no delayed stroke. However, it is not strong enough to capture the global structure of a math expression, which has been clarified in the last chapter. Unlike Time Series, which is in fact a chain structure, K Nearest Neighbor (KNN) is a graph model in the true sense. In a KNN graph, for each stroke, we first search for its K closest strokes.
Then an undirected edge between this stroke and each of its K closest neighbors is added to the graph. Thus, each node has at least K edges connected to it; in other words, the number of edges connected to each node is relatively fixed. However, this is not well suited to math expressions, where nodes are connected to a variable number of edges. In [Matsakis, 1999], a minimum spanning tree (MST) is used as the graph model. A spanning tree is a connected undirected graph in which a set of edges connects all of the nodes with no cycles. To define a minimum spanning tree, the graph edges also need to be assigned a weight, in which case an MST is a spanning tree that has the minimum accumulated edge weight of the graph [Matsakis, 1999]. Minimum spanning trees can be efficiently computed using the algorithms of Kruskal and Prim [Cormen et al.]. In [Matsakis, 1999], each stroke is represented as a node and the edge between two strokes is assigned a weight which is the distance between the two strokes. A Delaunay Triangulation (DT) for a set P of points in a plane is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P) [de Berg et al.]. [Hirata et al.] use a graph model based on Delaunay triangulation. They assume that symbols are correctly segmented; thus, each symbol, instead of each stroke, is taken as a node.
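As an illustration of the KNN model just described, here is a minimal sketch; it assumes each stroke is summarized by a single center point, and the function name is hypothetical.

```python
import numpy as np

def knn_graph(centers, k):
    """Undirected K-nearest-neighbour graph over stroke centers.
    centers: (n, 2) float array; returns a set of frozenset({i, j}) edges."""
    n = len(centers)
    edges = set()
    for i in range(n):
        d = np.linalg.norm(centers - centers[i], axis=1)
        d[i] = np.inf                       # a stroke is not its own neighbour
        for j in np.argsort(d)[:k]:
            edges.add(frozenset((i, int(j))))
    return edges
```

Because neighbourhood is not symmetric, a node can end up with more than K incident edges, which matches the "at least K" behaviour noted above.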
In [Hu, 2016], the Line Of Sight (LOS) graph is considered, since the authors find that a given stroke usually can see the strokes with which it has a relation in the symbol relation tree. The center of each stroke is taken as an eye, and there is a directed edge from the current stroke to each stroke it can see. A sample is available in Figure 5.2. [Muñoz] uses as a segmentation model an undirected graph in which each stroke is a node and edges only connect strokes that are visible and close; the visibility between strokes is also considered there.

Figure 5.2 - An example of line of sight graph for a math expression. Extracted from [Hu, 2016].

Hu carried out considerable experiments to choose the appropriate graph model in [Hu, 2016]. To better stress the characteristics of the various graph models, several related definitions are provided before going into the details. A stroke is a sequence of points which can be represented as a Bounding Box (BB) (Figure 5.3a) or a Convex Hull (CH) (Figure 5.3b). A subset P of the plane is called convex if and only if for any pair of points p, q ∈ P the line segment between p and q is completely contained in P. Several distances between strokes can be defined; one of them is the distance between their closest points. The Closest Point Pair (CPP) of two strokes refers to the pair of points, one from each stroke, having the minimal distance. The experimental results from [Hu, 2016] reveal that both the MST and TS models achieve a high precision rate (around 90% on the CROHME 2014 test set; the AC distance is used for MST) but a relatively low recall (around 87% on the CROHME 2014 test set).
For the KNN graph, the larger K is, the higher the recall and the lower the precision. When K = 6, the recall reaches 99.4% and the precision is 28.3%. Based on these previous works and our own work in Chapter 4, we develop a new graph representation model, a directed graph built using both the temporal and spatial information of strokes.

The framework

In this section, we introduce the global framework (Figure 5.4) of the solution proposed in this chapter, to give the reader an intuitive overview before the detailed implementation is presented. As depicted in Figure 5.4, the input to the recognition system is a handwritten expression, which is a sequence of strokes; the output is the stroke label graph, which consists of the label of each stroke and the relationships between stroke pairs. As a first step, we derive an intermediate graph from the raw input considering both temporal and spatial information. In this graph, each node is a stroke and edges are added according to temporal or spatial properties. The derived graph is expected to have a high recall and a reasonable precision compared to the ground truth SLG. The remaining work is to label each node and each edge of the graph. To this end, several 1-D paths are selected from the graph, since the classifier model we are considering is a sequence labeler: the classical BLSTM-RNN model is able to deal only with sequentially structured data. Next, we use the BLSTM classifier to label the selected 1-D paths. This stage consists of two steps, the training and the recognition processes. Finally, we merge these labeled paths to build a complete stroke label graph, with a strategy of setting different weights for the paths.
Detailed implementation

As explained in the last section, the input data is available as a sequence of strokes S = (s_0, ..., s_{n-1}) (for i < j, we assume s_i has been entered before s_j), from which we would like to obtain the final SLG describing the ME unambiguously. In this part, we introduce the recognition system step by step, following the order of the framework.

Derivation of an intermediate graph G

In a first step, we derive an intermediate graph G, where each node is a stroke and edges are added according to temporal or spatial properties. Based on the previous works on graph representation that we reviewed in Section 5.1, we develop a new directed graph representation model. Some definitions regarding the spatial relationships between strokes are provided first.

Definition 5.1. The distance between two strokes s_i and s_j is defined as the Euclidean distance between their closest points:

dist(s_i, s_j) = min_{p ∈ s_i, q ∈ s_j} √((x_p - x_q)² + (y_p - y_q)²) (5.1)

It is in fact the CPP distance mentioned in Section 5.1.

Definition 5.2. A stroke s_i is considered visible from stroke s_j if the bounding box of the straight line between their closest points does not cross the bounding box of any other stroke s_k. For example, in Figure 5.5, s_1 and s_3 can see each other because the bounding box of the straight line between their closest points does not cross the bounding boxes of strokes s_2 and s_4. In [Muñoz] the visibility is defined by the straight line between the closest points not crossing any other stroke; we simplify it by replacing the stroke with its bounding box to reduce computation.

Definition 5.3. As illustrated in Figure 5.6, point (0, 0) is the center of the bounding box of stroke s_i. The angle of each region is π/4.
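Definitions 5.1 and 5.2 can be illustrated with the following sketch; it assumes strokes are (n × 2) point arrays, uses hypothetical helper names, and implements the simplified bounding-box visibility test described above.

```python
import numpy as np

def cpp(si, sj):
    """Closest point pair of two strokes (Definition 5.1).
    Returns (distance, p, q) where p in si and q in sj are the closest points."""
    diff = si[:, None, :] - sj[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    a, b = np.unravel_index(d.argmin(), d.shape)
    return d[a, b], si[a], sj[b]

def bbox(pts):
    """Axis-aligned bounding box as ((xmin, ymin), (xmax, ymax))."""
    return pts.min(0), pts.max(0)

def overlap(b1, b2):
    """True if two axis-aligned boxes intersect."""
    (l1, d1), (r1, u1) = b1
    (l2, d2), (r2, u2) = b2
    return l1 <= r2 and l2 <= r1 and d1 <= u2 and d2 <= u1

def visible(si, sj, others):
    """Definition 5.2 (simplified): s_i and s_j see each other if the bounding
    box of the segment joining their closest points crosses no other stroke's
    bounding box."""
    _, p, q = cpp(si, sj)
    seg = np.vstack((p, q))
    return not any(overlap(bbox(seg), bbox(sk)) for sk in others)
```

Computing CPP pairwise is quadratic in the number of points, which is acceptable once strokes are re-sampled to a small fixed number of points.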
If the center of the bounding box of s_j is in one of these five regions, for example the R1 region, we say s_j is in the R1 direction of s_i. The purpose of defining these 5 regions is to look for the Above, Below, Sup, Sub and Right relationships between strokes in these 5 preferred directions, not to recognize them.

Definition 5.4. Let G be a directed graph in which each node corresponds to a stroke and edges are added according to the following criteria, in succession. We define for each stroke s_i (i from 0 to n - 2):

• the set of crossing strokes S_cro(i) = {s_cro1, s_cro2, ...} from {s_{i+1}, ..., s_{n-1}};
• the set of the closest stroke S_clo(i) = {s_clo} from {s_{i+1}, ..., s_{n-1}} - S_cro(i).

For stroke s_i (i from 0 to n - 1):

• the set S_vis(i) of the visible closest strokes in each of the five directions respectively, taken from S - ({s_i} ∪ S_cro(i) ∪ S_clo(i)).

Here, the closeness of two strokes is decided by the distance between the centers of their bounding boxes, differently from Definition 5.1. Edges from s_i to the strokes in S_cro(i) ∪ S_clo(i) ∪ S_vis(i) are added to G. Finally, we check whether the edge from s_i to s_{i+1} (i from 0 to n - 2) exists in G. If not, this edge is added to G to ensure that the path covering the sequence of strokes in time order is included in G. An example is presented in Figure 5.7. The mathematical expression d/dx a^x is written with 8 strokes (Figure 5.7a). From the sequence of 8 strokes, the graph shown in Figure 5.7b is generated with the above-mentioned method. Comparing the built graph with the ground truth (Figure 5.7c), we can see the difference in Figure 5.7d. All the ground truth edges are included in the generated graph except the edges (blue ones in Figure 5.7d) from stroke 4 to 3 and from stroke 7 to 6. This flaw can be overcome as long as strokes 3 and 4 and the edge from stroke 3 to 4 are correctly recognized.
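The edge-assembly logic of Definition 5.4 can be sketched as follows; the sketch assumes the three criteria (crossing, closest stroke by bounding-box centers, visible closest stroke per direction) have already been computed and are supplied as sets, and the function name and input representation are hypothetical.

```python
def build_graph(n, S_cro, S_clo, S_vis):
    """Assemble the directed intermediate graph G of Definition 5.4.
    n: number of strokes; S_cro, S_clo, S_vis: dicts mapping a stroke index i
    to the set of target stroke indices selected by each criterion.
    Returns the edge set of G as (i, j) pairs."""
    edges = set()
    # edges to crossing strokes and to the closest later stroke (i < n-1)
    for i in range(n - 1):
        for j in S_cro.get(i, set()) | S_clo.get(i, set()):
            edges.add((i, j))
    # edges to the visible closest strokes in the five directions (all i)
    for i in range(n):
        for j in S_vis.get(i, set()):
            edges.add((i, j))
    # guarantee that the time path s_i -> s_{i+1} is covered
    for i in range(n - 1):
        edges.add((i, i + 1))
    return edges
```

The last loop mirrors the final check of Definition 5.4: whatever the spatial criteria produce, the time path is always present in G.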
Indeed, if strokes 3 and 4 are recognized as belonging to the same symbol, the edge from stroke 4 to 3 can be completed automatically, as can the edge from stroke 7 to 6. In addition, Figure 5.7d indicates the unnecessary edges (red edges) that appear when matching the built graph to the ground truth. The built graph is expected to include as many of the ground truth edges as possible while containing as few unnecessary edges as possible.

Graph evaluation

Hu evaluates the graph representation model by comparing the edges of the graph with the ground truth edges at the stroke level [Hu, 2016]; the recall and precision rates are considered. In this section, we take a similar but more directive method, the same solution as introduced in Section 4.3.2. Specifically, provided the ground truth labels of the nodes and edges in the graph, we would like to see the evaluation results at the symbol and expression levels. We restate the evaluation criteria as a reminder to the reader: symbol segmentation ('Segments') refers to a symbol that is correctly segmented whatever the label; symbol segmentation and recognition ('Seg+Class') refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.'): a correct spatial relationship between two symbols requires that both symbols are correctly segmented and carry the right relationship label.

Figure 5.7 - (a) d/dx a^x is written with 8 strokes; (b) the SLG built from the raw input using the proposed method; (c) the SLG from the ground truth; (d) illustration of the difference between the built graph and the ground truth graph: red edges denote the unnecessary edges and blue edges refer to the ones missed compared to the ground truth.
Table 5.1 and Table 5.2 present the evaluation results of the graph construction on the CROHME 2014 test set (provided the ground truth labels) at the symbol and expression level respectively. We re-show the evaluation results of the time graph, already given in Tables 4.1 and 4.2, as a reference for the new graph. Due to delayed strokes, the time graph misses a small part of the segmentation edges; thus around 0.27% of symbols are wrongly segmented. The new graph achieves a 100% recall rate and a 99.99% precision rate on the segmentation task (the 0.01% error results from a small error in the data set, not from the model itself). These figures show that the new model can handle the case of delayed strokes. With regard to the relationship recognition task, the time graph model misses about 25% of relationships, whereas the new graph catches 93.48% of relationships: compared to the time graph, there is a great improvement in relationship representation. However, owing to the missed 6.52% of relationships, only 67.65% of expressions are correctly recognized, as presented in Table 5.2. These values will be upper bounds for the recognition system based on this graph model.

Select paths from G

The final aim of our work is to build the SLG of the 2-D expression. The proposed solution merges several 1-D paths from G. These paths are expected to cover all the nodes and as many of the edges of the ground truth SLG as possible (at least the edges of the ground truth SRT). With correctly recognized node and edge labels, we have the possibility to build a correct 2-D expression. Obviously, a single 1-D path is not able to cover all these nodes and edges, except for some simple expressions; we explained this point in detail in Chapter 4. This section explains how we generate several paths from the graph, different enough to cover the SRT, and then how to merge the different decisions into a final graph. A path in G can be defined as Φ_i = (n_0, n_1, n_2, ..., n_e), where n_0 is the starting node and n_e is the end node.
The node set of Φ_i is n(Φ_i) = {n_0, n_1, n_2, ..., n_e} and the edge set of Φ_i is e(Φ_i) = {n_0 → n_1, n_1 → n_2, ..., n_{e-1} → n_e}. Two types of paths are selected in this chapter: the time path and random paths. The time path starts from the first input stroke and ends with the last input stroke, following the time order. For example, in Figure 5.7d, the time path is (0, 1, 2, 3, 4, 5, 6, 7). Then, we consider several additional random paths. To ensure a good coverage of the graph, we guide the random selection by choosing the less visited nodes and edges (giving higher priority to less visited ones). The algorithm for selecting a random path is as follows:

(1) Initialize T_n = 0, T_e = 0. T_n records the number of times that node n has been chosen as a starting node; T_e records the number of times that edge e has been used in chosen paths.
(2) Update T_n = T_n + 1 and T_e = T_e + 1 for all the nodes and edges in the time path.
(3) Randomly choose one node N from the nodes having the minimum T_n; update T_N = T_N + 1.
(4) Find all the edges connected to N and randomly choose one from the edges having the minimum T_e, denoted as E; update T_E = T_E + 1. If no edge is found, finish.
(5) Set N to the destination node of E and go back to step 4.

One random path could be, for example, (1, 5, 6, 7).

Training process

Each path, time or random, is handled independently during the training process as a training sample.
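Steps (3)-(5) of the selection algorithm can be sketched as follows. This is a hypothetical helper: the counters Tn and Te are assumed to be already primed with the time path as in steps (1)-(2), only outgoing edges of a node are followed, and the length guard against cycles is our own addition, not stated in the text.

```python
import random

def random_path(out_edges, Tn, Te):
    """One guided random path over the intermediate graph G.
    out_edges: dict node -> list of (edge_id, destination_node);
    Tn: dict node -> visit count; Te: dict edge_id -> visit count."""
    # step 3: pick a least-often-chosen starting node
    m = min(Tn.values())
    N = random.choice([n for n in Tn if Tn[n] == m])
    Tn[N] += 1
    path = [N]
    # steps 4-5: repeatedly follow a least-visited edge
    while out_edges.get(N) and len(path) <= len(Tn):
        cand = out_edges[N]
        m = min(Te[e] for e, _ in cand)
        E, dest = random.choice([(e, d) for e, d in cand if Te[e] == m])
        Te[E] += 1
        N = dest
        path.append(N)
    return path
```

Because Tn and Te are updated in place, calling the helper repeatedly steers later paths toward the nodes and edges still under-covered.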
In Chapter 4, we introduced the technique related to training a time path, such as how to feed BLSTM inputs (Section 4.2.1), extracting features(Section 4.2.2) and local CTC training method (Section 4.2.3); the same training process is kept for random paths in this chapter. Totally, we have 3 types of BLSTM classifiers trained with respectively only time path, only random paths and time + random paths. More related contents could be found in the experimental section of this chapter. Recognition Since we use local CTC technique in the training process in this work, naturally the recognition stage should be performed on stroke (strokeD and strokeU ) level. As explained previously, to build the SLG, we also need to assign one single label to each stroke. Considering these two causes, a pooling strategy is required to go from the point level to the stroke level since for each point or time step, the network outputs the probabilities of this point belonging to different classes. We proposed two kinds of decoding methods based on stroke level (maximum decoding, local CTC decoding) and tested the effects of them in Chapter 4. According to the evaluation results, maximum decoding is a better choice for its low computation and same level effectiveness as local CTC decoding. With the Equation 4.16, we can compute P s c , the cumulative probability of outputting the c th label for stroke s. Then we sort the normalized P s c and only keep the top n probable labels (excluding blank) with the accumulative probability ≥ 0.8. Note that n is maximum 3 even though the accumulative probability of top 3 labels is not up to 0.8. Merge paths Each stroke belongs to at least one path, but possibly to several paths. Hence, several recognition results can be available for a single stroke. At this stage, we propose to compute the probability P s (l) to assign the 86CHAPTER 5. 
label l to the stroke s by summing over all paths Φ_i with the formula:

$$P_s(l) = \frac{\sum_{\Phi_i} W_{\Phi_i}\, P_l^{(\Phi_i,s)}\, \mathbf{1}_{A}(s)\, \mathbf{1}_{label(\Phi_i,s)}(l)}{\sum_{\Phi_i} W_{\Phi_i}\, \mathbf{1}_{A}(s)} \quad (5.2)$$

$$A = n(\Phi_i) \cup e(\Phi_i) \quad (5.3)$$

$$\mathbf{1}_M(m) = \begin{cases} 1 & \text{if } m \in M \\ 0 & \text{otherwise} \end{cases} \quad (5.4)$$

W_{Φ_i} is the weight set for path Φ_i and label(Φ_i, s) is the set of candidate labels for stroke s from path Φ_i, with 1 ≤ |label(Φ_i, s)| ≤ 3. If stroke s exists in path Φ_i but l ∉ label(Φ_i, s), then P_l^{(Φ_i,s)} is 0: the classifier answers that there is no possibility of outputting label l for stroke s from path Φ_i. We still add W_{Φ_i} to the normalization factor of P_s(l). If stroke s does not exist in path Φ_i, the classifier's answer for stroke s is unknown, and we should not take path Φ_i into account; thus W_{Φ_i} is not added to the normalization factor of P_s(l). After normalization, the label with the maximum probability is selected for each stroke. As shown in Figure 5.8, we consider merging 3 paths Φ_1, Φ_2, Φ_3. Stroke s only belongs to paths Φ_1 and Φ_2. In path Φ_1, the candidate labels for stroke s are a, b, c, while in path Φ_2, the candidate labels are b, c, d. The probability of assigning a to stroke s is computed as:

$$P_s(a) = \frac{W_{\Phi_1} \cdot P_a^{\Phi_1,s} + W_{\Phi_2} \cdot 0 + W_{\Phi_3} \cdot 0}{W_{\Phi_1} + W_{\Phi_2} + W_{\Phi_3} \cdot 0} \quad (5.5)$$

Stroke s is not covered by path Φ_3, thus W_{Φ_3} is not added to the normalization factor. The probability of outputting label a for stroke s in path Φ_2 is 0. Furthermore, the probability of assigning b to stroke s is computed as:

$$P_s(b) = \frac{W_{\Phi_1} \cdot P_b^{\Phi_1,s} + W_{\Phi_2} \cdot P_b^{\Phi_2,s} + W_{\Phi_3} \cdot 0}{W_{\Phi_1} + W_{\Phi_2} + W_{\Phi_3} \cdot 0} \quad (5.6)$$

In conclusion, we combine the recognition results from the different paths (while in Chapter 4, only one path is used), and then select for each node or edge the most probable label. Afterwards, an additional process is carried out in order to build a valid LG, i.e. adding edges, in the same way as was done in Chapter 4.
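Equation (5.2) with its two special cases (label absent from a covering path, stroke absent from a path) can be sketched as below; the (weight, strokes, candidates) triple layout is a hypothetical one:

```python
def merge_paths(stroke, paths):
    """Eq. (5.2): weighted vote over paths. Each entry of `paths` is a
    triple (W, strokes, candidates): the path weight, the set of
    strokes the path covers, and a dict mapping the candidate labels
    proposed for the queried stroke to their probabilities."""
    scores, norm = {}, 0.0
    for weight, strokes, candidates in paths:
        if stroke not in strokes:
            continue            # unknown answer: weight not added to the norm
        norm += weight          # covered stroke: weight counted even if l absent
        for label, p in candidates.items():
            scores[label] = scores.get(label, 0.0) + weight * p
    return {label: s / norm for label, s in scores.items()}
```

With three paths weighted 0.4, 0.3, 0.3 where only the first two cover the stroke, the normalizer is 0.7, reproducing the pattern of Eqs. (5.5) and (5.6).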
We first look for the segments (symbols) using a connected component analysis: a connected component where nodes and edges have the same label is a symbol. With regard to the relationship between two symbols, we choose the label having the maximum accumulated probability among the edges between the two symbols. Then, according to the rule that all strokes in a symbol have the same input and output edges and that double-direction edges represent the segments, some missing edges can be completed automatically.

Experiments

In the CROHME 2014 data set, there are 8834 expressions for training and 982 expressions for test. As before, we divide the 8834 expressions into training (90%) and validation (10%) sets, and use the CROHME 2014 test set for test. Based on the RNNLIB library, the recognition system is developed by merging multiple paths. For each training process, the network having the best CTC error on the validation data set is saved. Then, we evaluate this network on the test data set. The Label Graph Evaluation library (LgEval) [Mouchère et al., 2014] is used to analyze the recognition output. The specific configuration of the network architecture is the same as the one we set in Section 4.3. This configuration has obtained good results in both handwritten text recognition [Graves et al., 2009] and handwritten math symbol classification [Álvaro et al., 2013, 2014a]. The size of the input layer is 5 (5 local features, the same as in Chapter 4) while the output layer size in this experiment is 109 (101 symbol classes + 6 relationships + NoRelation + blank). For each expression, we extract the time path and 6 (or 10) random paths.
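The connected-component rule described at the beginning of this section (strokes joined by an edge whose label equals both stroke labels belong to one symbol) can be sketched with a small union-find; the data layout is assumed:

```python
def find_symbols(node_label, edge_label):
    """Group strokes into symbols. `node_label`: stroke id -> label;
    `edge_label`: (i, j) -> label. Two strokes are merged when the
    edge between them carries the same label as both strokes."""
    parent = {s: s for s in node_label}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s

    for (i, j), lab in edge_label.items():
        if node_label[i] == node_label[j] == lab:
            parent[find(i)] = find(j)
    groups = {}
    for s in node_label:
        groups.setdefault(find(s), []).append(s)
    return sorted(sorted(g) for g in groups.values())
```

For the expression a ≥ b with a two-stroke ≥, this yields the three symbol groups used in the walkthrough of Figure 5.9.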
In total, 3 types of classifiers are trained: the first one with only the time path (denoted CLS_T; it is actually the same classifier as in Chapter 4); the second BLSTM network is trained with only 6 random paths (denoted CLS_R6; for 10 random paths we use CLS_R10); and the third classifier uses the time path + 6 random paths (denoted CLS_T+R6). We train these 3 types of classifiers to see the effect of different training content on the recognition result, and also the impact of the number of paths. We use these 4 different classifiers (CLS_T, CLS_R6, CLS_R10, CLS_T+R6) to label the different types of paths extracted from the test set, as presented in Table 5.3: exp.1 uses CLS_T to label the time path; exp.2 uses CLS_T to label both the time path and the random paths; exp.3 uses CLS_T to label the time path and CLS_R6 to label the random paths; exp.4 uses CLS_T+R6 to label both the time path and the random paths; exp.5 uses CLS_T to label the time path and CLS_R10 to label the random paths. In exp.1, only the labeled time path is used to build a 2-D expression; this is actually the same case as carried out in Chapter 4. For exp.(2, 3, 4), the time path and 6 random paths are merged to construct a final graph. The time path and 10 random paths are combined in exp.5. The weight of the time path is set to 0.4 and that of each random path to 0.1.

Table 5.3 - The classifiers used to label each type of path in each experiment.

Exp. | time path  | random paths
1    | CLS_T      | -
2    | CLS_T      | CLS_T
3    | CLS_T      | CLS_R6
4    | CLS_T+R6   | CLS_T+R6
5    | CLS_T      | CLS_R10

The evaluation results at the symbol level are provided in Table 5.4, including recall ('Rec.') and precision ('Prec.') rates for symbol segmentation ('Segments') and for symbol segmentation and recognition ('Seg+Class'). At this stage in Chapter 4, the Above relationship from the fraction bar to the numerator 4 was likewise missed; in addition, the relation from the minus symbol to the numerator 4 was wrongly recognized as Right while it should be labeled NoRelation. In this chapter, a ≥ b is also recognized correctly (Figure 5.9). We present the derived graph in Figure 5.9b. Then, from the graph, we extract the time path and 6 random paths.
In this example, all the nodes and edges of Figure 5.9b are included in the extracted 7 paths. After merging the results of the 7 paths with Equation 5.2, we get the labeled graph illustrated in Figure 5.9c. The edge from stroke 0 to 1 is wrongly labeled as Sup. Next, we carry out the post-processing stage. The segments (symbols) are decided using connected component analysis: 3 symbols (a, ≥, b) in this expression. With regard to the relationship between a and ≥, we have 2 candidates: Sup with the probability 0.604 and Right with the probability 0.986. With the strategy of choosing the label having the maximum accumulated probability, the relationship between a and ≥ is therefore Right. After post-processing, we construct a correct SLG, provided in Figure 5.9d. The recognition result for 44 − 4/4 is presented in Figure 5.10. From the handwritten expression (Figure 5.10a), we derive a graph presented in Figure 5.10b. Then, we extract paths from the graph, label them, and finally merge the labeled paths to build a labeled graph. Figure 5.10c provides the built SLG, from which we can see that several extraneous edges appear owing to the multiple paths; in this sample they are all recognized correctly as NoRelation. We remove these NoRelation edges to get an intuitive view of the recognition result (Figure 5.10d). As can be seen, the Right relationship from the minus symbol to the fraction bar is missed. This error comes from the graph representation stage, where we find no edge from stroke 2 to 4. Both stroke 3 and stroke 4 are located in the R1 region of stroke 2, but stroke 3 is closer to stroke 2 than stroke 4. Thus, we miss the edge from stroke 2 to 4 at the graph representation stage, and naturally miss the Right relationship from the minus symbol to the fraction bar in the built SLG. This error could be overcome by searching for a better graph model or by some post-processing strategies regarding the connectivity of the SLG.
As discussed above, our solution presents competitive results on the symbol recognition and segmentation tasks, but not on the relationship detection and recognition task. Compared to the work of Chapter 4, the solution in this chapter improves the recall rate of 'Tree Rels.' but at the same time decreases its precision rate. Thus, at the expression level, the recognition rate remains at the same level as the solution with a single path. One of the intrinsic causes is that even though several paths from one expression are considered in this system, the BLSTM model processes each path separately, which means the model can only access the contextual information in one path during the training and recognition stages. Obviously, this conflicts with the real case in which human beings recognize the raw input using the entire contextual information. In the coming chapter, we will search for a model which can take into account more contextual knowledge at one time, instead of just the content limited to one single path.

Discussion

We recognize 2-D handwritten mathematical expressions by merging multiple 1-D labeled paths in this chapter. Given an expression, we propose an algorithm to generate an intermediate graph using both temporal and spatial information between strokes. Next, from the derived graph, different types of paths are selected and later labeled with the strong sequence labeler, BLSTM. Finally, we merge these labeled paths to build a 2-D math expression. The proposal presents competitive results on the symbol recognition and segmentation tasks, and promising results on the relationship recognition task. Compared to the work of Chapter 4, the solution in this chapter improves the recall rate of 'Tree Rels.' but at the same time decreases its precision rate.
Thus, at the expression level, the recognition rate remains at the same level as the solution with a single path. Currently, even though several paths from one expression are considered in this system, in essence the BLSTM model deals with each path in isolation. The classical BLSTM model can access information from the past and the future over a long range, but information outside the single sequence is of course not accessible to it. In fact, this conflicts with the real case in which human beings recognize the raw input using the entire contextual information. As shown in our experiments, it is laborious to solve a 2-D problem with a chain-structured model. Thus, we would like to develop a tree-structured neural network model which can directly handle a structure not limited to a chain. With the new neural network model, we can take into account more contextual information in a tree instead of a single 1-D path.

Mathematical expression recognition by merging multiple trees

In Chapter 5, we concluded that it is hard to use the classical chain-structured BLSTM to solve the problem of recognizing a mathematical expression, which is a tree structure. In this chapter, we extend the chain-structured BLSTM to a tree structure topology and apply this new network model to online math expression recognition. Firstly, we provide a short overview of non-chain-structured LSTMs. Then, we propose in Section 6.2 a new neural network model named tree-based BLSTM which seems to be appropriate for this recognition problem. Section 6.3 globally introduces the framework of the mathematical expression recognition system based on tree-based BLSTM. Thereafter, we focus on the specific techniques involved in this system in Section 6.4. Finally, the experiments and discussion are covered in Section 6.5 and Section 6.7 respectively.
Overview: Non-chain-structured LSTM

A limitation of the classical LSTM network topology is that it only allows for sequential information propagation (as shown in Figure 6.1a), since the cell contains a single recurrent connection (modulated by a single forget gate) to its own previous value. Recently, research on LSTM has gone beyond the sequential structure. The one-dimensional LSTM was extended to n dimensions by using n recurrent connections (one for each of the cell's previous states along every dimension) with n forget gates, such that the new model can take into account the context from n sources. It is named Multidimensional LSTM (MDLSTM) and is dedicated to the graph structure of an n-dimensional grid such as images [Graves, 2012]. The MDLSTM model exhibits great performance on offline handwriting recognition tasks where the input is an image [Graves and Schmidhuber, 2009, Messina and Louradour, 2015, Bluche et al., 2016, Maalej and Kherallah, 2016, Maalej et al., 2016]. In [Tai et al., 2015], the basic LSTM architecture was extended to tree structures for improving semantic representations. Two extensions, the Child-Sum Tree-LSTM and the N-ary Tree-LSTM, were proposed to allow for richer network topologies where each unit is able to incorporate information from multiple child units (Figure 6.1b). Since the Child-Sum Tree-LSTM unit conditions its components on the sum of child hidden states, it is well-suited for trees with a high branching factor or whose children are unordered. The N-ary Tree-LSTM can be used on tree structures where the branching factor is at most N and where children are ordered.
In parallel to the work in [Tai et al., 2015], [Zhu et al., 2015] explored a similar idea and proposed the S-LSTM model, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. Furthermore, a DAG-structured LSTM was proposed for semantic compositionality in [Zhu et al., 2016], possessing the ability to incorporate external semantics, including non-compositional or holistically learned semantics.

The proposed Tree-based BLSTM

This section focuses on the Tree-based BLSTM. Differently from the tree structures depicted in [Tai et al., 2015, Zhu et al., 2015], we devote it to the kind of structures presented in Figure 6.2, where most nodes have only one next node. In fact, this kind of structure can be regarded as several chains with shared or overlapping segments. A traditional BLSTM processes a sequence both from left to right and from right to left in order to access information coming from the two directions. In our case, the tree will be processed from root to leaves and from leaves to root in order to visit all the surrounding context.

From root to leaves. There are 2 special nodes (red) having more than one next node in Figure 6.2. We name them Mul-next nodes. The hidden states of a Mul-next node are propagated to its next nodes equally. The forward propagation of a Mul-next node is the same as for a chain LSTM node; with regard to the error propagation, the errors coming from all the next nodes are summed up and propagated to the Mul-next node.

From leaves to root.
Suppose all the arrows in Figure 6.2 are reversed; we obtain the new structure, which is actually beyond a tree, in Figure 6.3. The 2 red nodes are still special cases because they have more than one previous node. We call them Mul-previous nodes. The information from all the previous nodes is summed up and propagated to the Mul-previous node; the error propagation is processed as for a typical LSTM node. We give the specific formulas below for the forward propagation of a Mul-previous node and the error back-propagation of a Mul-next node. The same notations as in Chapter 3 and [Graves et al., 2012] are used here. The network input to unit i at node n is denoted a^n_i and the activation of unit i at node n is b^n_i. w_{ij} is the weight of the connection from unit i to unit j. Considering a network with I input units, K output units and H hidden units, let the subscripts ς, φ, ω refer to the input, forget and output gates. The subscript c refers to one of the C cells. Thus, the peephole weights from cell c to the input, forget and output gates are denoted w_{cς}, w_{cφ}, w_{cω}. s^n_c is the state of cell c at node n. f is the activation function of the gates, and g and h are respectively the cell input and output activation functions. L is the loss function used for training. We only give the equations for a single memory block; for multiple blocks the calculations are simply repeated for each block. Let Pr(n) denote the set of previous nodes of node n and Ne(n) denote the set of next nodes. The terms that differ from the classical LSTM formulas recalled in Chapter 3 are the sums over Pr(n) and Ne(n).

The forward propagation of a Mul-previous node

Input gates:

$$a^n_\varsigma = \sum_{i=1}^{I} w_{i\varsigma} x^n_i + \sum_{h=1}^{H} w_{h\varsigma} \sum_{p \in Pr(n)} b^p_h + \sum_{c=1}^{C} w_{c\varsigma} \sum_{p \in Pr(n)} s^p_c \quad (6.1)$$

$$b^n_\varsigma = f(a^n_\varsigma) \quad (6.2)$$

Forget gates:

$$a^n_\phi = \sum_{i=1}^{I} w_{i\phi} x^n_i + \sum_{h=1}^{H} w_{h\phi} \sum_{p \in Pr(n)} b^p_h + \sum_{c=1}^{C} w_{c\phi} \sum_{p \in Pr(n)} s^p_c \quad (6.3)$$

$$b^n_\phi = f(a^n_\phi) \quad (6.4)$$
Cells:

$$a^n_c = \sum_{i=1}^{I} w_{ic} x^n_i + \sum_{h=1}^{H} w_{hc} \sum_{p \in Pr(n)} b^p_h \quad (6.5)$$

$$s^n_c = b^n_\phi \sum_{p \in Pr(n)} s^p_c + b^n_\varsigma \, g(a^n_c) \quad (6.6)$$

Output gates:

$$a^n_\omega = \sum_{i=1}^{I} w_{i\omega} x^n_i + \sum_{h=1}^{H} w_{h\omega} \sum_{p \in Pr(n)} b^p_h + \sum_{c=1}^{C} w_{c\omega} s^n_c \quad (6.7)$$

$$b^n_\omega = f(a^n_\omega) \quad (6.8)$$

Cell outputs:

$$b^n_c = b^n_\omega \, h(s^n_c) \quad (6.9)$$

The error back-propagation of a Mul-next node

We define

$$\epsilon^n_c = \frac{\partial L}{\partial b^n_c}, \quad \epsilon^n_s = \frac{\partial L}{\partial s^n_c}, \quad \delta^n_i = \frac{\partial L}{\partial a^n_i} \quad (6.10)$$

Then

$$\epsilon^n_c = \sum_{k=1}^{K} w_{ck} \delta^n_k + \sum_{g=1}^{G} w_{cg} \sum_{e \in Ne(n)} \delta^e_g \quad (6.11)$$

Output gates:

$$\delta^n_\omega = f'(a^n_\omega) \sum_{c=1}^{C} h(s^n_c)\, \epsilon^n_c \quad (6.12)$$

States:

$$\epsilon^n_s = b^n_\omega h'(s^n_c)\, \epsilon^n_c + \sum_{e \in Ne(n)} b^e_\phi\, \epsilon^e_s + w_{c\varsigma} \sum_{e \in Ne(n)} \delta^e_\varsigma + w_{c\phi} \sum_{e \in Ne(n)} \delta^e_\phi + w_{c\omega} \delta^n_\omega \quad (6.13)$$

Cells:

$$\delta^n_c = b^n_\varsigma\, g'(a^n_c)\, \epsilon^n_s \quad (6.14)$$

Forget gates:

$$\delta^n_\phi = f'(a^n_\phi) \sum_{c=1}^{C} \Big(\sum_{p \in Pr(n)} s^p_c\Big)\, \epsilon^n_s \quad (6.15)$$

Input gates:

$$\delta^n_\varsigma = f'(a^n_\varsigma) \sum_{c=1}^{C} g(a^n_c)\, \epsilon^n_s \quad (6.16)$$

The framework

We apply the proposed tree-based BLSTM model to online mathematical expression recognition. This section provides a general view of the recognition system (Figure 6.4). Similar to the framework proposed in Chapter 5, we first derive an intermediate graph from the raw input. Then, instead of 1-D paths, we derive trees from the graph, which are labeled by the tree-based BLSTM model as a next step. In the end, these labeled trees are merged to build a stroke label graph.

Figure 6.4 - Illustration of the proposal that uses BLSTM to interpret 2-D handwritten MEs: from the input, derive an intermediate graph G; derive trees from G; label the trees with the tree-based BLSTM; merge the labeled trees to produce the output.

Tree-based BLSTM for online mathematical expression recognition

In this section, each step illustrated in Figure 6.4 is elaborated. The input data is available as a sequence of strokes S from which we would like to obtain the final LG describing the ME unambiguously. Let S = (s_0, ..., s_{n-1}), where we assume s_i has been written before s_j for i < j.
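As a side illustration of the equations of the previous section, the Mul-previous forward pass (Eqs. 6.1-6.9) can be sketched in NumPy, assuming f is the logistic sigmoid and g = h = tanh, with a hypothetical dictionary layout for the weights:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def mul_previous_forward(x, prev, W, C):
    """One Mul-previous node for a block of C cells. `prev` is a list
    of (b, s) pairs (hidden output, cell state) from the previous
    nodes; their sums enter the gates and the cell state as in
    Eqs. 6.1, 6.3, 6.5 and 6.6. `W` holds input ('x*'), recurrent
    ('h*') and peephole ('c*') weights."""
    b_sum = sum((b for b, _ in prev), np.zeros(C))
    s_sum = sum((s for _, s in prev), np.zeros(C))
    b_i = sigmoid(W['xi'] @ x + W['hi'] @ b_sum + W['ci'] * s_sum)  # Eqs. 6.1-6.2
    b_f = sigmoid(W['xf'] @ x + W['hf'] @ b_sum + W['cf'] * s_sum)  # Eqs. 6.3-6.4
    a_c = W['xc'] @ x + W['hc'] @ b_sum                             # Eq. 6.5
    s = b_f * s_sum + b_i * np.tanh(a_c)                            # Eq. 6.6
    b_o = sigmoid(W['xo'] @ x + W['ho'] @ b_sum + W['co'] * s)      # Eqs. 6.7-6.8
    return b_o * np.tanh(s), s                                      # Eq. 6.9
```

Note that the output is invariant to the order of the previous nodes, since only their summed activations and states enter the computation.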
Derivation of an intermediate graph G

In a first step, we derive an intermediate graph G where each node is a stroke and edges are added according to temporal or spatial relationships between strokes. In fact, we already introduced a graph representation model and evaluated it in Chapter 5. The evaluation results showed that around 6.5% of the relationships are missed compared to the ground truth graph. In this section, we aim to improve the graph model to reduce the quantity of missed relationships. Similarly, we first provide several definitions related to the graph building.

Definition 6.1. A stroke s_i is considered visible from stroke s_j if the straight line between their closest points does not cross any other stroke s_k.

For example, s_1 and s_3 can see each other because the straight line between their closest points does not cross stroke s_2 or s_4, as shown in Figure 6.5. This definition is the same as the one used in [Muñoz, Mathematical Expression Recognition based on Probabilistic Grammars]. Compared to Definition 5.2, where we replaced the stroke with its bounding box to reduce computation, the current one is more accurate.

Figure 6.5 - Illustration of visibility between a pair of strokes. s_1 and s_3 are visible to each other.

Definition 6.2. For each stroke s_i, we define 5 regions (R1, R2, R3, R4, R5, shown in Figure 6.6). The center of the bounding box of stroke s_i is taken as the reference point (0, 0).

Figure 6.6 - Five regions for a stroke s_i. Point (0, 0) is the center of the bounding box of s_i. The angle range of the R1 region is [-π/8, π/8]; R2: [π/8, 3π/8]; R3: [3π/8, 7π/8]; R4: [-7π/8, -3π/8]; R5: [-3π/8, -π/8].

The purpose of defining these 5 regions is to look for the Right, Superscript, Above, Below and Subscript relationships between strokes.
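The region test of Definition 6.2 reduces to an angle test on the vector between bounding-box centers; a sketch (assuming y increases upward, and giving the lower-numbered region priority on shared boundaries):

```python
import math

def region_of(center_i, center_j):
    """Which of the five regions of stroke s_i contains the bounding-box
    center of s_j; returns None for the two left-side gaps covered by
    no region. Angle ranges follow Figure 6.6."""
    dx = center_j[0] - center_i[0]
    dy = center_j[1] - center_i[1]
    a = math.atan2(dy, dx)                      # angle in (-pi, pi]
    pi = math.pi
    if -pi / 8 <= a <= pi / 8:
        return 'R1'
    if pi / 8 < a <= 3 * pi / 8:
        return 'R2'
    if 3 * pi / 8 < a <= 7 * pi / 8:
        return 'R3'
    if -7 * pi / 8 <= a < -3 * pi / 8:
        return 'R4'
    if -3 * pi / 8 <= a < -pi / 8:
        return 'R5'
    return None                                 # |angle| > 7*pi/8
```

With ink coordinates where y grows downward (as in most online handwriting formats), the roles of R2/R3 and R4/R5 would simply be swapped.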
If the center of the bounding box of s_j is located in one of the five regions of stroke s_i, for example the R1 region, we say s_j is in the R1 direction of s_i. In Definition 5.3, the angle of each region was π/4. Here, a wider searching range is defined for both the R3 and R4 regions. That is because in some expressions like (a+b+c)/(d+e+f), a larger searching range means more chances to catch the Above relationship from '-' to 'a' and the Below relationship from '-' to 'd'.

Definition 6.3. Let G be a directed graph in which each node corresponds to a stroke and edges are added according to the following criteria in succession. We define, for each stroke s_i (i from 0 to n-2):

• the set of crossing future strokes S_cro(i) = {s_cro1, s_cro2, ...} from {s_{i+1}, ..., s_{n-1}}.

For each stroke s_i (i from 0 to n-1):

• the set S_vis(i) of the visible leftmost (considering the center of the bounding box only) strokes in the five directions respectively.

Edges from s_i to the strokes of S_cro(i) ∪ S_vis(i) are added to G. Then, we check whether the edge from s_i to s_{i+1} (i from 0 to n-2) exists in G. If not, this edge is added to G to ensure that the path covering the sequence of strokes in the time order is included in G. Each edge is tagged depending on the specific criterion used to find it. Consequently, we have at most 7 types of edges (Crossing, R1, R2, R3, R4, R5 and Time) in the graph. For the edges from s_i to strokes in S_cro(i) ∩ S_vis(i), the type Crossing is assigned. Figure 6.7 illustrates the process of deriving the graph from the raw input step by step using the example of f a = b f.
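The edge-adding procedure of Definition 6.3 can be sketched as below; the geometric helpers (crossing test, visibility test, region test, bounding-box center abscissa) are assumed given:

```python
def build_graph(n, crossing, visible, region, center_x):
    """Add the edges of Definition 6.3 for strokes 0..n-1 and return
    them as {(i, j): type}. `crossing(i, j)` and `visible(i, j)` are
    predicates, `region(i, j)` returns 'R1'..'R5' or None, and
    `center_x(j)` is the abscissa of the bounding-box center of s_j."""
    edges = {}
    for i in range(n):
        for j in range(i + 1, n):                # crossing future strokes
            if crossing(i, j):
                edges[(i, j)] = 'Crossing'
        for r in ('R1', 'R2', 'R3', 'R4', 'R5'):
            cands = [j for j in range(n)
                     if j != i and visible(i, j) and region(i, j) == r]
            if cands:                            # leftmost visible stroke
                j = min(cands, key=center_x)
                edges.setdefault((i, j), r)      # Crossing keeps priority
    for i in range(n - 1):                       # ensure the time path
        edges.setdefault((i, i + 1), 'Time')
    return edges
```

The `setdefault` calls implement the rule that an edge found by an earlier criterion keeps its type: Crossing over R1..R5, and Time only when no other edge already links consecutive strokes.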
First, according to the 10 strokes in the raw input (Figure 6.7a), we create 10 nodes, one for each stroke (Figure 6.7b); for each stroke, we look for its crossing stroke or strokes and add the corresponding edges labeled Crossing between nodes (Figure 6.7c); proceeding to the next step, for each stroke, we look for the visible leftmost strokes in the five directions respectively and add the corresponding edges labeled as one of R1, R2, R3, R4, R5 between nodes if the edges do not yet exist in the graph (Figure 6.7d); finally, we check whether the edge from s_i to s_{i+1} (i from 0 to n-2) exists in G and, if not, add this edge to G labeled as Time, to ensure that the path covering the sequence of strokes in the time order is included in G (Figure 6.7e).

Graph evaluation

With the same method adopted in Sections 6.4.2 and 4.3.2, we evaluate the newly proposed graph representation model. Specifically, provided the ground truth labels of the nodes and edges in the graph, we would like to see the evaluation results at the symbol and expression levels. We restate the evaluation criteria here as a reminder to readers: symbol segmentation ('Segments') refers to a symbol that is correctly segmented whatever the label; symbol segmentation and recognition ('Seg+Class') refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.'): a correct spatial relationship between two symbols requires that both symbols are correctly segmented and have the right relationship label. Table 6.1 and Table 6.2 present the evaluation results on the CROHME 2014 test set (provided the ground truth labels) at the symbol and expression level respectively. We show again the evaluation results of the time graph and of the graph proposed in Chapter 5 as references for the new graph. Compared to the graph model proposed in Chapter 5, the new graph model stays at the same level with regard to the recall and precision rates on the symbol segmentation and recognition tasks.
When it comes to the relationship classification task, the new graph presents a small improvement, about 0.5%. The new graph catches 93.99% of the relationships. Owing to the missed 6.01% of relationships, around 30% of the expressions cannot be correctly recognized, as presented in Table 6.2. So far, we have derived a graph from the raw input considering the temporal and spatial information. Figure 6.8 illustrates the ME f a = b f written with 10 strokes and the derived graph G. We would like to label the nodes and edges of G correctly in order to finally build an SLG. The solution proposed in this chapter is to derive trees from G, then recognize the trees using the tree-based BLSTM model. There exist different strategies to derive trees from G. In any case, a start node should be selected first. We take the leftmost stroke (considering the leftmost point in a stroke) as the starter. For the example illustrated in Figure 6.8a, stroke s2 is the starter. From the starting node, we traverse the graph with the Depth-First Search algorithm. Each node should be visited only once. When there is more than one edge going out of a node, the visiting order follows (Crossing, R1, R3, R4, R2, R5, Time). With this strategy, a tree is derived, to which we give the name Tree-Left-R1, dedicated to catching the R1 relationship. If the visiting order follows (Crossing, R2, R1, R3, R4, R5, Time), another tree named Tree-Left-R2 is derived, focusing more on the R2 relationship. Likewise, the trees Tree-Left-R3, Tree-Left-R4, Tree-Left-R5 are derived to emphasize respectively the R3, R4, R5 relationships. The Crossing edge is always at the top of the list because we assume that a pair of crossing strokes belongs to a single symbol. In Figure 6.8b, Tree-Left-R1 is depicted in red with its root in s2. Note that in this case, all the nodes are accessible from the start node s2. However, as G is a directed graph, in some cases some nodes are not reachable from one starter.
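The derivation of one tree, e.g. Tree-Left-R1, is thus a depth-first traversal with an edge-type priority; a sketch, with an assumed edge layout:

```python
def derive_tree(edges, start, priority):
    """Depth-first traversal of the graph from `start`, visiting each
    node once; outgoing edges are explored in the order given by
    `priority`, e.g. ('Crossing', 'R1', 'R3', 'R4', 'R2', 'R5', 'Time')
    for Tree-Left-R1. `edges` maps (i, j) -> type; returns tree edges."""
    rank = {t: k for k, t in enumerate(priority)}
    out = {}
    for (i, j), t in edges.items():
        out.setdefault(i, []).append((rank[t], j))
    visited, tree = {start}, []

    def dfs(i):
        for _, j in sorted(out.get(i, [])):      # best priority first
            if j not in visited:
                visited.add(j)
                tree.append((i, j))
                dfs(j)

    dfs(start)
    return tree
```

Swapping the priority tuple yields the other Tree-Left-Rx variants from the same graph, and changing `start` yields the Tree-0-Rx variants introduced next.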
Therefore, we consider deriving trees from different starters. Besides the leftmost stroke, it is interesting to derive trees from the first input stroke s0, since users sometimes start writing an expression from its root. Note that in some cases the leftmost stroke and stroke s0 can be the same one. We replace the leftmost stroke with stroke s0 and keep the same strategy to derive the trees. These new trees are named Tree-0-R1, Tree-0-R2, Tree-0-R3, Tree-0-R4, Tree-0-R5 respectively. Finally, if s0 is taken as the starting point and the time order is considered first, a special tree is obtained which we call Tree-Time. Tree-Time is proposed with the aim of having a good coverage of segmentation edges, since users usually write a multi-stroke symbol continuously. As a matter of fact, it is a chain structure. Tree-Time is defined by s0 → s1 → s2 → s3 ... → s9 for the expression in Figure 6.8. Table 6.3 offers a clear look at the different types of trees derived from the graph. The solution is to go from the previous trees, defined at the stroke level, down to a tree at the point level, points being the raw information recorded along the pen trajectory in the online signal. To be free of the interference of different writing speeds, an additional re-sampling process is carried out with a fixed spatial step. In the considered trees, nodes, which represent strokes, are re-sampled with a fixed spatial step, and the same holds for edges, by considering the straight lines in the air between the last point and the first point of a pair of strokes that are connected in the tree. This is illustrated in Figure 6.9, where the re-sampled points are displayed inside the nodes (on-paper points for nodes) and above the edges (in-air points for edges).

Figure 6.9 - A re-sampled tree. The small arrows between points provide the directions of the information flows. With regard to the sequence of points inside one node or edge, most of the small arrows are omitted.
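Re-sampling a stroke (or an in-air segment) with a fixed spatial step can be sketched as below; the text fixes the step so that a piece of length l receives about 10·l/d points, d being the average bounding-box diagonal of the strokes:

```python
import math

def resample(points, step):
    """Walk along the polyline and emit a point every `step` units of
    arc length, starting from the first input point."""
    res = [points[0]]
    carry = 0.0                  # distance covered since the last emitted point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0.0:
            continue
        d = step - carry         # distance to the next sample on this segment
        while d <= seg:
            t = d / seg
            res.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += step
        carry = seg - (d - step)
    return res
```

Because the step is spatial rather than temporal, slow and fast renditions of the same shape produce (nearly) the same point sequence.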
Since this tree will be processed by the BLSTM network, we need, for the training stage, to assign it a corresponding ground truth. We derive it from the SLG by using the corresponding symbol label of the strokes (nodes) for the on-paper points, and the corresponding symbol or relationship label for the in-air points (edges) when the edge exists in the SLG. When an edge of the tree does not exist in the SLG, the label NoRelation, noted '_', is used. In this way, an edge of the graph which was originally denoted with a C, Ri (i = 1...5) or T relation is assigned one of the 7 labels (Right, Above, Below, Inside, Superscript, Subscript, _), or a symbol label when the two strokes belong to the same symbol. In total, for the ground truth, we have 108 classes (101 symbol classes + 6 relationships + NoRelation). The number of re-sampled points depends on the size of the expression. For each node or edge, we re-sample with 10 × l/d points. Here, l refers to the length of a visible stroke or of a straight line connecting 2 strokes, and d refers to the average diagonal of the bounding boxes of all the strokes in an expression. Subsequently, for every point p(x, y) we compute the 5 features [sinθ, cosθ, sinφ, cosφ, PenUD], which were already described in Section 4.2.2.

Training process

Figure 6.10 illustrates a tree-based BLSTM network with one hidden level. To provide a clear view, we only draw the full network connections on one short sequence (red) instead of the whole tree. Globally, the data structure we are dealing with is a tree; locally, it consists of several short sequences. For example, the tree presented in Figure 6.10 has 6 short sequences, one of which is highlighted in red.

Figure 6.10 - A tree-based BLSTM network with one hidden level. We only draw the full connections on one short sequence (red) for a clear view.
The system processes each node or edge (which is in fact a short sequence) separately, but following an order which ensures the correct propagation of activations or errors. The training process of a short sequence (the red one in Figure 6.10 for example) is similar to the classical BLSTM model, except that some outside information should be taken into account. In the classical BLSTM case, the incoming activation or error of a short sequence is initialized to 0.

Forward pass. Here, when proceeding with the forward pass from the input layer to the output layer, for the hidden layer (from root to leaves) we need to consider the information coming from the root direction, and for the hidden layer (from leaves to root) we need to consider the information coming from the leaves direction. Obviously, no matter which processing order we follow for the sequences, it is not possible to have the information from both directions in one run. Thus another stage, which we call pre-computation, is required. The pre-computation stage has two runs: (1) From the input layer to the hidden layer (from root to leaves), we process the short sequence containing the root point first and then the next sequences (Figure 6.11a). In this run, each sequence in the tree stores the activation from the root direction. (2) From the input layer to the hidden layer (from leaves to root), we process the short sequences containing the leaf points first and then the next sequences (Figure 6.11b). In this run, each sequence in the tree sums and stores the activations from the leaves direction. After the pre-computation stage, the information from both directions is available to each sequence, so the forward pass from input to output is straightforward.

Error propagation.
The backward pass of the tree-based BLSTM network has 2 parallel propagation paths: (1) from the output layer to the hidden layer (root to leaves), then to the input layer; (2) from the output layer to the hidden layer (leaves to root), then to the input layer. As these 2 propagation paths are independent, no pre-stage is needed here. For propagation (1), we process the short sequences containing the leaf points first and then the following sequences. For propagation (2), we process the short sequence containing the root point first and then the following sequences. Note that when there are several hidden levels in the network, a pre-stage is also required for error propagation.

Loss function. It is known that BLSTM with a CTC stage performs better when a "blank" label is introduced during training [Bluche et al.], so that decisions can be made only at some points in the input sequence. One characteristic of CTC is that it does not provide the alignment between the input and output, only the overall sequence of labels. Since we need to assign each stroke a label to build a SLG, a relatively precise alignment between input and output is preferred. A local CTC algorithm was proposed in Chapter 4, aiming to constrain each label to its corresponding stroke while still taking advantage of the "blank" label; experiments showed that it outperforms frame-wise training. We succeeded in applying local CTC to a global sequence labeling task in Chapter 4; in this chapter, we use the local CTC training method for a tree labeling task. The theory behind the two types of tasks remains the same. The difference is that in Chapter 4 a global sequence consisting of several strokes was the entity being processed, whereas here we treat each short sequence (stroke) as a processing unit.
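Before turning to the recursions of the next section, the quantity local CTC computes for one short sequence can be written down by brute force: enumerate every framewise path over {blank, label}, collapse it with F (merge repeats, then remove blanks), and sum the probabilities of the paths that collapse to the single label. A toy sketch (names and probabilities are ours):

```python
from itertools import product

def collapse(path):
    # CTC's F mapping: merge repeated labels, then delete blanks ('-').
    merged = [c for i, c in enumerate(path) if i == 0 or c != path[i - 1]]
    return [c for c in merged if c != '-']

def brute_force_prob(y_blank, y_label, label='l'):
    # Sum the probabilities of all framewise paths collapsing to [label];
    # y_blank[t], y_label[t] are the per-point output probabilities.
    T, total = len(y_blank), 0.0
    for path in product(['-', label], repeat=T):
        if collapse(path) == [label]:
            p = 1.0
            for t, c in enumerate(path):
                p *= y_blank[t] if c == '-' else y_label[t]
            total += p
    return total
```

The forward-backward recursions compute exactly this sum without enumerating the exponentially many paths.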
Inside each short sequence, that is, each node or edge, a local CTC loss function is easily computed from the output probabilities related to this short sequence. The total CTC loss function of a tree is defined as the sum of the local CTC loss functions over all the short sequences in the tree. Since each short sequence has one label, the possible labels of the points in one short sequence are shown in Figure 6.12. The equations provided in Section 4.2.3 are for a global sequence of one or more strokes; in the remainder of this section, the equations relate to a single short sequence (a single stroke). Given the tree input $X$ consisting of $N$ short sequences, each short sequence is denoted $X_i$, $i = 1, \ldots, N$, with ground-truth label $l_i$ and length $T_i$. $l'_i$ represents the label sequence with blanks added to the beginning and the end of $l_i$, i.e. $l'_i = (blank, l_i, blank)$, of length 3. The forward variable $\alpha_i(t, u)$ denotes the summed probability of all length-$t$ paths that are mapped by $\mathcal{F}$ onto the length-$u/2$ prefix of $l_i$, where $u$ ranges from 1 to 3 and $t$ from 1 to $T_i$. Given the above notations, the probability of $l_i$ can be expressed as the sum of the forward variables with and without the final blank at point $T_i$:

$$p(l_i|X_i) = \alpha_i(T_i, 3) + \alpha_i(T_i, 2) \quad (6.17)$$

$\alpha_i(t, u)$ can be computed recursively as follows:

$$\alpha_i(1, 1) = y^1_{blank} \quad (6.18)$$
$$\alpha_i(1, 2) = y^1_{l_i} \quad (6.19)$$
$$\alpha_i(1, 3) = 0 \quad (6.20)$$
$$\alpha_i(t, u) = y^t_{(l'_i)_u} \sum_{j=u-1}^{u} \alpha_i(t-1, j) \quad (6.21)$$

Note that

$$\alpha_i(T_i, 1) = 0 \quad (6.22)$$
$$\alpha_i(t, 0) = 0, \ \forall t \quad (6.23)$$

Figure 6.12 -The possible labels of points in one short sequence.

Figure 6.13 demonstrates the local CTC forward-backward algorithm limited to one stroke. Similarly, the backward variable $\beta_i(t, u)$ denotes the summed probability of all paths starting at $t + 1$ that complete $l_i$ when appended to any path contributing to $\alpha_i(t, u)$.
The formulas for the initialization and recursion of the backward variable are as follows:

$$\beta_i(T_i, 3) = 1 \quad (6.24)$$
$$\beta_i(T_i, 2) = 1 \quad (6.25)$$
$$\beta_i(T_i, 1) = 0 \quad (6.26)$$
$$\beta_i(t, u) = \sum_{j=u}^{u+1} \beta_i(t+1, j)\, y^{t+1}_{(l'_i)_j} \quad (6.27)$$

Note that

$$\beta_i(1, 3) = 0 \quad (6.28)$$
$$\beta_i(t, 4) = 0, \ \forall t \quad (6.29)$$

With the local CTC forward-backward algorithm, we can compute $\alpha_i(t, u)$ and $\beta_i(t, u)$ for each point $t$ and each allowed position $u$ at point $t$. The CTC loss function $L(X_i, l_i)$ is defined as the negative log probability of correctly labeling the short sequence $X_i$:

$$L(X_i, l_i) = -\ln p(l_i|X_i) \quad (6.30)$$

According to Equation 3.48, we can rewrite $L(X_i, l_i)$ as:

$$L(X_i, l_i) = -\ln \sum_{u=1}^{3} \alpha_i(t, u)\beta_i(t, u) \quad (6.31)$$

The errors are then back-propagated to the output layer (Equation 3.49), to the hidden layer (Equation 3.50), and finally to the entire network. The weights of the network are updated after each entire tree structure is processed. The CTC loss function of an entire tree structure is defined as the sum of the errors over all the short sequences in the tree:

$$L(X, l) = \sum_{i=1}^{N} L(X_i, l_i) \quad (6.32)$$

This function is used for evaluating the performance of the network, and can therefore serve as the metric deciding whether the training process stops.

Recognition process

As mentioned, the system treats each node or edge as a short sequence. A simple decoding method is adopted here, as in the previous chapter: for each node or edge, we choose the label with the highest cumulative probability over the short sequence. Suppose that $p_{ij}$ is the probability of outputting the $i$-th label at the $j$-th point. The probability of outputting the $i$-th label is computed as $P_i = \sum_{j=1}^{s} p_{ij}$, where $s$ is the number of points in the short sequence. The label with the highest probability is assigned to this short sequence.

Post process

After labeling, the several trees derived from one expression are merged to build a SLG.
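The forward-backward recursions above (Equations 6.17-6.31) and the simple decoding rule can be checked numerically for a single short sequence; the per-point probabilities below are toy values and the function names are ours:

```python
def local_ctc_check(y_blank, y_label):
    # Per-point probabilities for blank and for the sequence's single label
    # (toy values); l' = (blank, l, blank), positions u = 1..3.
    T = len(y_blank)
    y = lambda t, u: y_blank[t] if u in (1, 3) else y_label[t]
    # Forward pass, Equations 6.18-6.23.
    alpha = [[0.0] * 4 for _ in range(T)]
    alpha[0][1], alpha[0][2] = y(0, 1), y(0, 2)
    for t in range(1, T):
        for u in (1, 2, 3):
            alpha[t][u] = y(t, u) * (alpha[t - 1][u - 1] + alpha[t - 1][u])
    alpha[T - 1][1] = 0.0                      # Equation 6.22
    p = alpha[T - 1][3] + alpha[T - 1][2]      # Equation 6.17
    # Backward pass, Equations 6.24-6.29 (index 4 stays 0).
    beta = [[0.0] * 5 for _ in range(T)]
    beta[T - 1][3], beta[T - 1][2] = 1.0, 1.0
    for t in range(T - 2, -1, -1):
        for u in (1, 2, 3):
            beta[t][u] = sum(beta[t + 1][j] * y(t + 1, j)
                             for j in (u, u + 1) if j <= 3)
    beta[0][3] = 0.0                           # Equation 6.28
    # Equation 6.31: sum_u alpha*beta equals p at every point t.
    for t in range(T):
        s = sum(alpha[t][u] * beta[t][u] for u in (1, 2, 3))
        assert abs(s - p) < 1e-12
    return p

def decode_label(point_probs):
    # Recognition: cumulative probability per label, P_i = sum_j p_ij.
    totals = {}
    for probs in point_probs:
        for lab, pr in probs.items():
            totals[lab] = totals.get(lab, 0.0) + pr
    return max(totals, key=totals.get)
```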
Besides the merging strategy, in this section we exploit several structural constraints which were not used when building the SLG in the previous chapter. Generally, 5 steps are included in the post process:

(1) Merge trees. Each node or edge belongs to at least one tree, but possibly to several trees. Hence, several recognition results can be available for a single node or edge. We deal with the problem of multiple results in an intuitive and simple way, choosing the result with the highest probability.

(2) Symbol segmentation. We look for the symbols using connected component analysis: a connected component whose nodes and edges share the same label is a symbol.

(3) Relationships. We solve two possible kinds of conflicts in this step. (a) Between two symbols, edges may exist in both directions. In each direction, we choose the label having the maximum probability. If the labels in the two directions are both one of (Right, Above, Below, Inside, Superscript, Subscript), as illustrated in Figure 6.14a, we again choose the one having the larger probability. (b) Another type of conflict is the case illustrated in Figure 6.14b, where one symbol has two (or more) input relationships (among the 6 relationships). Observing the structure of SRTs, we can easily see that there is at most one input relationship for each node (symbol) in a SRT. Therefore, when one symbol has two (or more) input relationships, we keep the one having the maximum probability.

(4) Make a connected SRT. As a SRT should be a connected tree (a structural constraint, not a language-specific one), each SRT has one root node and one or multiple leaf nodes, and each node except the root has exactly one input edge. After performing the first three steps, we may still output a SRT with several root nodes, in other words a forest instead of a tree.
To address this type of error, we take a hard but quite simple decision: for each root r (except the one input earliest), we add a Right edge to r from the leaf nearest to r in input time. We choose Right since, statistically, it is the relationship appearing most often in math expressions.

(5) Add edges. According to the rules that all strokes in a symbol have the same input and output edges and that double-direction edges represent segmentation, some missing edges can be completed automatically.

Setup. We constructed the tree-based BLSTM recognition system with the RNNLIB library. As described in Section 3.3.4, a DBLSTM [Graves et al., 2013] can be created by stacking multiple BLSTM layers on top of each other in order to get a higher-level representation of the input data. Several configurations are considered in this chapter: Networks (i), (ii), (iii) and (iv). The first consists of one bidirectional hidden level (two opposite LSTM layers of 100 cells); this configuration obtained good results in both handwritten text recognition [Graves et al., 2009] and handwritten math symbol classification [Álvaro et al., 2013, 2014a]. Network (ii) is a deep structure with two bidirectional hidden levels, each containing two opposite LSTM layers of 100 cells. Networks (iii) and (iv) have 3 and 4 bidirectional hidden levels, respectively. The setup of the input and output layers remains the same: the size of the input layer is 5 (5 features); the size of the output layer is 109 (101 symbol classes + 6 relationships + NoRelation + blank).

Evaluation. With the Label Graph Evaluation library (LgEval) [Mouchère et al., 2014], the recognition results can be evaluated at symbol level and at expression level.
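Steps (2) and (4) of the post process can be sketched as follows; the graph encoding is our own, and the tie-breaking plus our reading of "nearest to r considering input time" are simplifying assumptions:

```python
def segment_symbols(node_labels, edge_labels):
    # Step (2): connected components where nodes and edges share one label.
    parent = {s: s for s in node_labels}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for (a, b), lab in edge_labels.items():
        if node_labels[a] == node_labels[b] == lab:   # segmentation edge
            parent[find(a)] = find(b)
    groups = {}
    for s in node_labels:
        groups.setdefault(find(s), []).append(s)
    return sorted(sorted(g) for g in groups.values())

def connect_srt(symbols, edges, input_order):
    # Step (4): symbols with no incoming relationship edge are roots; every
    # root except the earliest-input one receives a Right edge from the
    # leaf nearest in input time.
    has_in = {dst for _, dst, _ in edges}
    has_out = {src for src, _, _ in edges}
    roots = [s for s in symbols if s not in has_in]
    fixed = list(edges)
    for r in roots[1:]:
        leaves = [s for s in symbols if s not in has_out and s != r]
        leaf = min(leaves, key=lambda s: abs(input_order[s] - input_order[r]))
        fixed.append((leaf, r, 'Right'))
        has_out.add(leaf)
    return fixed
```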
We introduce several evaluation criteria: symbol segmentation ('Segments') refers to a symbol that is correctly segmented, whatever the label; symbol segmentation and recognition ('Seg+Class') refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.') requires, for a correct spatial relationship between two symbols, that both symbols are correctly segmented and carry the correct relationship label.

Experiment 1

In this experiment, we study the effect of the depth of the network on the recognition results, and choose the proper network configuration for the task accordingly. For each expression, only Tree-Time is derived to train the classifier. The evaluation results at symbol level and at global expression level are presented in Tables 6.4 and 6.5 respectively. From the tables, we can conclude that as the network becomes deeper, the recognition rate first increases and then stays at a relatively stable level: there is a large increase from Network (i) to Network (ii), a slight increase from Network (ii) to Network (iii) and no improvement from Network (iii) to Network (iv). These results show that 3 bidirectional hidden levels is a proper option for the task in this thesis; a network deeper than 3 levels brings no improvement but higher computational complexity. Thus, in the following experiments we no longer consider Network (iv).

Experiment 2

In this section, we carry out experiments merging several trees. As a first try, we derive only 3 trees, Tree-Time, Tree-Left-R1 and Tree-0-R1, for each expression to train classifiers separately. For each tree, we consider 3 network configurations, Network (i), Network (ii) and Network (iii); we thus have 9 classifiers in total in this section. After training, we use these 9 classifiers to label the relevant trees and finally merge them to build a valid SLG.
We merge the 3 trees labeled by the corresponding 3 classifiers sharing the same network configuration to obtain the systems (i, Merge3), (ii, Merge3) and (iii, Merge3). We then merge the 3 trees labeled by all 9 classifiers to obtain the system Merge9. The evaluation results at symbol level and at global expression level are presented in Tables 6.6 and 6.7 respectively; each table gives both the individual tree recognition results and the merging results. Tree-Time covers all the strokes of the input expression but can miss some relational edges between strokes; Tree-Left-R1 and Tree-0-R1 can catch additional edges not covered by Tree-Time. The experimental results verify this tendency: compared to (iii, Tree-Time), the symbol segmentation and classification results of (iii, Merge3) stay at almost the same level while the recall rate of relationship classification is greatly improved (about 12%). The recognition results of network (ii) are systematically higher than those of (i), as the deep structure obtains higher-level representations of the input data; the performance of network (iii) is moderately improved compared to (ii), as in Experiment 1. When merging all 9 classifiers, we obtain a further slight improvement, as shown by Merge9. We compare the result of Merge9 to the systems of CROHME 2014: for symbol classification and recognition rates, our system performs better than the second-ranked system; for relationship classification rate, it lies between the second-ranked and third-ranked systems. The global expression recognition rate is 29.91%, ranking third in CROHME 2014.

Experiment 3

In this experiment, we derive more trees from the graph (11 per expression) and merge the labeled trees to obtain the system Merge11. As can be seen, all trees give similar recognition results except Tree-Time, and more trees do have some effect on the relationship classification task.
Compared to (ii, Merge3), the relationship classification results are slightly improved, by around 1%, while the symbol segmentation and recognition results are slightly reduced, by around 0.5%. Finally, at expression level, we see no significant changes, as shown in Table 6.11.

Error analysis

In this section, we make a deep error analysis of the recognition results of (Merge9) to better understand the system and to explore directions for improving the recognition rate in future. The Label Graph Evaluation library (LgEval) [Mouchère et al., 2014] evaluates the recognition system by comparing the output SLG of each expression with its ground-truth SLG; the node label confusion matrix and the edge label confusion matrix are thus available to us. Based on these two confusion matrices, we analyze the errors below.

Node label

In Table 6.12, we list the types of SLG node label errors with a high frequency on the CROHME 2014 test set recognized by the (Merge9) system. The first column gives the node labels output by the classifier; the second column provides the ground-truth node labels; the last column records the corresponding number of occurrences. As can be seen from the table, the most frequent error (x → X, 46) belongs to the lowercase-uppercase type; (p → P, 24), (c → C, 16), (X → x, 16) and (y → Y, 14) belong to the same lowercase-uppercase type. Another type of error which happens quite often in our experiment is the similar-look error, such as (x → ×, 26), (× → x, 10), (z → 2, 10), (q → 9, 10) and so on. In principle, the two main types of node label error, the lowercase-uppercase error and the similar-look error, could be eased when more training data is introduced. Thus, one piece of future work could be to collect as much new data as possible.
Edge label

Table 6.13 provides the edge (SLG) label errors on the CROHME 2014 test set using (Merge9). As can be seen, a large amount of errors come from the last row, which represents the missing edges: 1858 edges with label Right are missed by our system, along with 929 segmentation edges. In addition, errors of high frequency appear in the sixth row, which represents the five relationship edges (excluding Right), segmentation edges or NoRelation ('_') edges mis-classified as Right edges. One reason is that we take a hard decision (adding Right edges) in the post-process step; another possible reason is that, as Right is the most frequent relationship in math expressions, the classifiers may answer this frequent class too often.

We now explore deeper the problem of the missing edges appearing in the last row of Table 6.13. In fact, there are three sources of missing edges: (1) the edges are already missing in the graph derived from the raw input; (2) some edges of the derived graph are recognized by the system as NoRelation ('_') although their ground-truth label is one of the 6 relationships or a symbol (segmentation edge); (3) the trees derived from the graph G do not cover the graph completely. We have tried to ease source (3) by deriving more trees, for example 11 trees in Experiment 3; however, the idea of using more trees did not work well in fact. Thus, a better strategy for deriving trees from the graph will be explored in future works.

We reconsider 2 test samples (a ≥ b and 44 -4 4) from the CROHME 2014 test set recognized by system (Merge9). For each test sample, we provide the handwritten input, the graph derived from the raw input, the trees derived from the graph, and the built SLG (Figures 6.15 and 6.16). These 2 samples were recognized correctly by system (Merge9). As shown, several extra edges appear owing to the multiple trees, and they were all correctly recognized as NoRelation.
We remove these NoRelation edges to give an intuitive view of the recognition result (Figures 6.15f and 6.16f). In addition, we present a failed case in Figure 6.17. As for the previous samples, we illustrate the handwritten input of $\frac{9}{9+\sqrt{9}}$, the graph derived from the raw input, the trees derived from the graph, and the final SLG built. As can be seen, the structure of this expression was correctly recognized, the only error being that the first symbol '9' of the denominator was recognized as '→'. This error belongs to the similar-look type explained in the error analysis section. Enlarging the training data set could be a solution; the error could also be eased by introducing a language model, since $\frac{9}{\rightarrow+\sqrt{9}}$ is not a valid expression from the language-model point of view.

Discussion

In this chapter, we extended the classical BLSTM to a tree-based BLSTM and applied the new network model to recognizing online mathematical expressions. The new model has a tree topology and possesses the ability to directly model the dependencies among tree-structured data. The proposed tree-based BLSTM system, requiring neither the high time complexity nor the manual work involved in classical grammar-driven systems, achieves competitive results in the online mathematical expression recognition domain. Another major difference with the traditional approaches is that there are no explicit segmentation, recognition and layout extraction steps, but a unique trainable system that directly produces a SLG describing a mathematical expression. With regard to symbol segmentation and classification, the proposed system performs better than the second-ranked system in CROHME 2014 (the top-ranked system used a much larger training data set, not available to the public). For relationship recognition, we achieve better results than the third-ranked system.
When considering the expression recognition rate with ≤ 3 errors, our result is 50.15%, close to the second-ranked system (50.20%). In future, several directions could be explored to extend the current work. (1) As analyzed in Section 6.6, we could put effort into collecting more training data to ease the lowercase-uppercase and similar-look errors. (2) The current graph model still misses around 6% of the relationships on the CROHME 2014 test set. At present, only one rule is used to define the visibility between a pair of strokes; in future, we will try to set several rules for defining visibility between strokes, deciding that two strokes can see each other as long as they meet any one of these rules. (3) A better strategy for deriving trees from the graph should be explored, to get a better coverage of the graph. (4) As we cover more and more edges, the precision rate decreases accordingly, as shown in the previous experiments. Thus, one future direction could be developing a better training protocol to enforce the training of the class NoRelation; a stronger post-process step should then be considered to improve the recognition rate.

Conclusion and future works

In this chapter, we first summarize the work of the thesis and list the main contributions made during the research process. Then, based on the current method and the experimental results, we propose several possible directions for future work.

Conclusion

We study the problem of online mathematical expression recognition in this thesis. Generally, ME recognition involves three tasks: symbol segmentation, symbol recognition and structural analysis [Zanibbi and Blostein, 2012]. The state-of-the-art solutions, considering the natural relationship between the three tasks, perform them at the same time by using grammar parsing techniques.
Commonly, a complete grammar for math expressions consists of hundreds of production rules. These rules need to be designed manually and carefully for different data sets. Furthermore, the time complexity of grammar-driven parsing is usually exponential if no constraints are set to control it. Thus, to bypass the high time complexity and the manual work of the classical grammar-driven systems, we proposed a new architecture for online mathematical expression recognition in this thesis. The backbone of our system is the framework of BLSTM recurrent networks with a CTC output layer, which achieved great success in sequence labeling tasks such as text and speech recognition, thanks to its ability to learn long-term dependencies and to its efficient training algorithm.

Mathematical expression recognition with a single path. Since a BLSTM network with a CTC output layer is capable of processing sequence-structured data, as a first step we proposed a simple strategy where a BLSTM directly labels the sequence of pen-down and pen-up strokes in time order. The later-added (pen-up) strokes are used to represent the relationships between pairs of visible strokes, by assigning them a ground-truth label. In order to assign each stroke (visible or later added) a label in the recognition process, we extended the CTC training technique to local CTC, constraining the output labels to the corresponding strokes while still benefiting from the additional 'blank' class. Finally, we built the 2-D expression from the output sequence of labels. The main contributions of this first proposal are: (1) we propose a new method to represent the relationship of a pair of visible strokes by linking the last point of one to the first point of the other; with this method, a global sequence is generated that can be handled by the BLSTM and CTC topology; (2) we extend the CTC training technique to local CTC.

Mathematical expression recognition by merging multiple paths.
In the above-mentioned simple proposal, we considered only the pairs of strokes which are successive in time order. Obviously, a sequence-structured model is not able to cover all the relationships of 2-D expressions. Thus, we turned to a graph structure to model the relationships between strokes in mathematical expressions. Globally, the input of the recognition system is a handwritten expression, i.e. a sequence of strokes; the output is the stroke label graph, which carries the label of each stroke and the relationships between stroke pairs. Firstly, we derived an intermediate graph from the raw input using both the temporal and spatial information between strokes. In this intermediate graph, each node represents a stroke, and edges representing the relations of stroke pairs are added according to temporal or spatial properties between strokes. Secondly, several 1-D paths were selected from the graph, since the classifier model used is a 1-D sequence labeler. Next, we used the BLSTM classifier to label the selected 1-D paths. Finally, we merged these labeled paths to build a complete stroke label graph. Compared to the proposal with a single path, the solution merging multiple paths improved the recall rate of 'Tree Rels.' but at the same time decreased its precision rate; thus, at the expression level, the recognition rate remained at the same level as the solution with a single path. One main contribution of this proposal is that multiple paths are used to represent a 2-D expression. However, even though several paths of one expression were considered in this system, the BLSTM model essentially dealt with each path separately: the classical BLSTM model can access information from past and future over a long range, but information outside the single sequence is of course not accessible to it.
This is, in fact, not how human beings recognize the raw input, using the entire contextual information.

Mathematical expression recognition by merging multiple trees. As explained above, human beings interpret handwritten math expressions by considering the global contextual information. In the system merging multiple paths, each path was processed separately, implying that only the contextual information of the path could be visited. Thus, we developed a neural network model which can directly handle a structure not limited to a chain: we extended the chain-structured BLSTM to a tree topology and applied this new network model to online math expression recognition. With this new neural network model, we can take into account the information of a whole tree instead of a single path when dealing with an expression. Similar to the framework of the solution merging multiple paths, we first derived an intermediate graph from the raw input. Then, instead of 1-D paths, we derived trees from the graph, to be labeled by the tree-based BLSTM model as a next step. In the end, these labeled trees were merged to build a stroke label graph. Compared to the proposal merging multiple paths, the new recognition system was globally improved, as verified by experiments. One main contribution of this part is that we extend the chain-structured BLSTM to a tree-based BLSTM, providing the new topology with the ability to model dependencies in a tree.

We list the main contributions here:

• One major difference with the traditional approaches is that there are no explicit segmentation, recognition and layout extraction steps, but a unique trainable system that directly produces a SLG describing a mathematical expression.

• We propose a new method to represent the relationship of a pair of visible strokes by linking the last point of one to the first point of the other.

• We extend the CTC training technique to local CTC.
The new training technique proposed improves the system performance globally compared to frame-wise training, while relatively constraining the position of the output labels.

• We extend the chain-structured BLSTM to a tree-based BLSTM, providing the new topology with the ability to model dependencies in a tree.

• The proposed system, without using any grammar, achieves competitive results in the online math expression recognition domain.

Future works

Based on the current method and the error analysis, we summarize here several possible directions for future work.

• Some work should be done to improve the existing method, such as improving the graph model, proposing a better strategy for deriving trees and developing a stronger post-process stage.

• Some effort could be put into introducing a language model into the graph. For example, as n-gram models are widely used in 1-D language processing such as text and speech, how to take into account the statistical properties of n-grams in the math expression recognition task is an interesting direction to explore. A master project has actually already been proposed in this direction.

• Another interesting work could be to extend the BLSTM model to a DAG structure, which would better cover the derived graph and therefore handle more contextual information than the tree-structured BLSTM; the stage of deriving trees could then be left aside.

• The current recognition system achieves competitive results without using any grammar knowledge. In future, we could apply graph grammars to improve the current recognition rate.

• In this thesis, we extend the chain-structured BLSTM to a tree topology, letting it directly model the dependencies of a tree structure. Furthermore, we extend the CTC training technique to local CTC, relatively constraining the output positions while improving the training efficiency compared to frame-wise training.
These proposed algorithms are generic, and we will apply them to other research fields in future.

Finally, we observe that current solutions are almost systematically grammar-driven. This imposes both a laborious task to build the grammar and a high computational cost to perform the parsing step. In contrast to these approaches, the solution we explore dispenses with a grammar. This is the stance taken in this thesis: we propose new architectures to directly produce an interpretation of mathematical expressions, taking advantage of recent progress in recurrent network architectures.

The label graph (LG). Going down to the stroke level, it is possible to derive from the SRT a labeled stroke graph (LG). In an LG, the nodes represent the strokes, while the labels on the edges encode either segmentation information or spatial relationships. Consider the simple expression "2+2" written with four strokes, two of which form the symbol '+'; the handwriting is shown in Figure 8.2a and the LG in Figure 8.2b. As can be seen, the nodes of the SLG are labeled with the label of the symbol they belong to. A dashed edge carries segmentation information: it indicates that the associated pair of strokes belongs to the same symbol, in which case the edge carries the symbol's label. Otherwise, a solid edge defines a spatial relationship between the associated symbols. More precisely, all the strokes of a symbol are connected to all the strokes of the symbol with which a spatial relationship exists. The possible spatial relationships have been defined by the CROHME competition [Mouchère et al.]; there are six of them: Right, Above, Below, Inside (for roots), Superscript and Subscript.
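The LG construction for the "2+2" example can be sketched as follows; the stroke names, the data encoding and the helper are our own illustration:

```python
def build_slg(symbols, relations):
    # symbols: list of (strokes, label); relations: (sym_i, sym_j, label),
    # with indices into the symbols list.
    nodes, edges = {}, {}
    for strokes, label in symbols:
        for s in strokes:
            nodes[s] = label
        for a in strokes:              # segmentation edges, both directions
            for b in strokes:
                if a != b:
                    edges[(a, b)] = label
    for i, j, rel in relations:        # all stroke pairs across the symbols
        for a in symbols[i][0]:
            for b in symbols[j][0]:
                edges[(a, b)] = rel
    return nodes, edges

# "2+2" written with four strokes: s1 = '2', s2+s3 = '+', s4 = '2'
nodes, edges = build_slg(
    [(['s1'], '2'), (['s2', 's3'], '+'), (['s4'], '2')],
    [(0, 1, 'Right'), (1, 2, 'Right')])
```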
Long Short-Term Memory networks

Recurrent networks (RNNs). RNNs can access contextual information and are well suited to the sequence labeling task [Graves, 2012]. In Figure 8.3, a unidirectional recurrent network is shown unfolded; each node at a given time step represents a layer of the network. The output of the network at time t_i depends not only on the input at time t_i but also on the state at time t_{i-1}. The same weights (w1, w2, w3) are shared across time steps.

LSTM. Classical RNN architectures have the drawback of suffering from an exponential forgetting, which limits them to the use of a short context [Hochreiter et al., 2001]. Long short-term memory (LSTM) networks [Hochreiter and Schmidhuber, 1997] are able to circumvent this problem by using a memory block capable of preserving the current state as long as necessary. An LSTM network is similar to an RNN, except that the summation units of the hidden layers are replaced by memory blocks. Each block contains several recurrent cells equipped with three control units: the input, output and forget gates. These gates act through multiplicative factors to block or allow the propagation of information.

BLSTM. LSTM networks process the input sequence directionally, from past to future. Complementarily, Bidirectional LSTMs [Graves and Schmidhuber, 2005] are composed of 2 separate LSTM layers, each working in the direction opposite to the other (past to future and future to past). The two LSTM layers are fully connected to the same output layer.
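The gate mechanism of the memory block described above can be illustrated with a scalar LSTM cell step; the equations follow the standard LSTM formulation and the weight names are ours (a sketch, not the exact parametrization of the cited works):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    # One step of a scalar LSTM cell: the input, forget and output gates
    # multiplicatively control what enters, stays in and leaves the cell.
    i = sigmoid(W['wi'] * x + W['ui'] * h_prev + W['bi'])    # input gate
    f = sigmoid(W['wf'] * x + W['uf'] * h_prev + W['bf'])    # forget gate
    o = sigmoid(W['wo'] * x + W['uo'] * h_prev + W['bo'])    # output gate
    g = math.tanh(W['wg'] * x + W['ug'] * h_prev + W['bg'])  # candidate
    c = f * c_prev + i * g                                   # cell state
    h = o * math.tanh(c)                                     # hidden output
    return h, c
```

With the forget gate saturated open and the input gate closed, the cell state is preserved unchanged across the step, which is exactly the long-term memorization behavior described above.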
In this way, both the short-term and long-term contexts in each direction are available to the output layer at every time step.

Deep BLSTM. DBLSTM networks [START_REF] Graves | Hybrid speech recognition with deep bidirectional lstm[END_REF] can be built by stacking several BLSTM layers on top of one another. The outputs of the two opposite layers are concatenated and used as input to the next level.

BLSTM over non-linear structures. The previous structures can only process data organized as sequences. Multidimensional LSTM networks [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF] can handle information coming from n directions by introducing n forget gates in each memory cell. Furthermore, the work of [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF] extended these networks to tree structures: the Child-Sum Tree-LSTM and N-ary Tree-LSTM topologies allow a unit to incorporate information from multiple child cells. Similar approaches are proposed in [START_REF] Zhu | Long short-term memory over recursive structures[END_REF]. Finally, an LSTM architecture for acyclic graphs was proposed for semantic composition [START_REF] Zhu | Dag-structured long short-term memory for semantic compositionality[END_REF]. Our solution first builds a graph from the input data using both temporal and spatial proximity. In this graph, each stroke is represented by a node, and edges are added according to the spatio-temporal properties of the strokes. We make the hypothesis that strokes which are close either spatially or temporally may belong to the same symbol or may share a spatial relationship.
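For reference, the Child-Sum Tree-LSTM cited above generalizes the LSTM update to a node $j$ with an arbitrary set of children $C(j)$; the equations below follow Tai et al., with $\sigma$ the logistic sigmoid and $\odot$ the element-wise product. Each child $k$ gets its own forget gate $f_{jk}$, which is what lets a unit incorporate information from multiple child cells:

```latex
\begin{aligned}
\tilde{h}_j &= \textstyle\sum_{k \in C(j)} h_k \\
i_j &= \sigma\bigl(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)}\bigr) \\
f_{jk} &= \sigma\bigl(W^{(f)} x_j + U^{(f)} h_k + b^{(f)}\bigr) \\
o_j &= \sigma\bigl(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)}\bigr) \\
u_j &= \tanh\bigl(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)}\bigr) \\
c_j &= i_j \odot u_j + \textstyle\sum_{k \in C(j)} f_{jk} \odot c_k \\
h_j &= o_j \odot \tanh(c_j)
\end{aligned}
```

When every node has exactly one child, these equations reduce to the standard chain LSTM update.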
From this graph, several paths are extracted; they form sequences, each of which is processed by the BLSTM sequence labeler. A merging step then combines these independent results and builds a single SLG.

The CTC layer: Connectionist temporal classification

This way of proceeding has the advantage of handling several paths, which increases the chances of recovering useful relationships. However, each path is processed individually, independently of the others. The context taken into account is thus limited to the current path alone; information present on the other paths cannot be integrated. This is a limitation compared with human visual analysis.

Recognition of MEs by merging trees

As mentioned previously, we use the whole context globally to recognize a handwritten mathematical expression. This requires going beyond the viewpoint of simply merging traversals over individual paths. To reach this goal, a new network structure is proposed that can process data not limited to chains. These BLSTM-type networks can handle trees and are therefore usable to recognize MEs. The advantage of this new type of structure is that it takes into account richer information: not a single path where each node has a single successor, but trees representing an expression.

List of Tables
8.1 Symbol-level results on the CROHME 2014 test set, comparing this work with the competition participants.
8.2 Expression-level results on the CROHME 2014 test set, comparing this work with the competition participants.
List of Figures

1.1 Illustration of mathematical expression examples. (a) A simple and linear expression consisting of only the left-right relationship. (b) A 2-D expression where left-right, above-below and superscript relationships are involved.
1.2 Illustration of expression $z^d + z$ written with 5 strokes.
1.3 Illustration of the symbol segmentation of expression $z^d + z$ written with 5 strokes.
1.4 Illustration of the symbol recognition of expression $z^d + z$ written with 5 strokes.
1.5 Illustration of the structural analysis of expression $z^d + z$ written with 5 strokes. Sup: Superscript, R: Right.
1.6 Illustration of the symbol relation tree of expression $z^d + z$. Sup: Superscript, R: Right.
1.7 Introduction of "in the air" strokes.
1.8 Illustration of the proposal of recognizing MEs with a single path.
1.9 Illustration of the proposal of recognizing MEs by merging multiple paths.
1.10 Illustration of the proposal of recognizing MEs by merging multiple trees.
2.1 Symbol relation tree (a) and operator tree (b) of expression $(a+b)^2$. Sup: Superscript, R: Right, Arg: Argument.
2.2 The symbol relation tree (SRT) for (a) $\frac{a+b}{c}$, (b) $a+\frac{b}{c}$. 'R' refers to the Right relationship.
2.3 The symbol relation trees (SRT) for (a) $\sqrt[3]{x}$, (b) $\sum_{i=0}^{n} x_i$ and (c) $\int x^x dx$. 'R' refers to the Right relationship while 'Sup' and 'Sub' denote Superscript and Subscript respectively.
2.4 Math file encoding for expression $(a+b)^2$. (a) Presentation MathML; (b) LaTeX. Adapted from [Zanibbi and Blostein, 2012].
2.5 (a) $2+2$ written with four strokes; (b) the symbol relation tree of $2+2$; (c) the SLG of $2+2$. The four strokes are indicated as s1, s2, s3, s4 in writing order. 'R' is for the left-right relationship.
2.6 The file formats for representing the SLG of the expression in Figure 2.5a. (a) The file format taking the stroke as the basic entity. (b) The file format taking the symbol as the basic entity.
2.7 Adjacency matrices for the Stroke Label Graph. (a) The adjacency matrix format: li denotes the label of stroke si and eij is the label of the edge from stroke si to stroke sj. (b) The adjacency matrix of labels corresponding to the SLG in Figure 2.5c.
2.8 '$2+2$' written with four strokes was recognized as '$2-1^2$'. (a) The SLG of the recognition result; (b) the corresponding adjacency matrix.
3.1 Illustration of the sequence labeling task with examples of handwriting (top) and speech (bottom) recognition. Input signals are shown on the left side while the ground truth is on the right. Extracted from [Graves et al., 2012].
3.2 A multilayer perceptron.
3.3 A recurrent neural network. The recurrent connections are highlighted in red.
3.4 An unfolded recurrent network.
4.1 Illustration of the proposal that uses BLSTM to interpret 2-D handwritten MEs.
4.2 Illustration of the complexity of math expressions.
4.3 (a) The time path (red) in the SLG; (b) the SLG obtained by using the time path; (c) the post-processed SLG of '$2+2$', added edges depicted in bold.
4.4 (a) P eo written with four strokes; (b) the SRT of P eo; (c) $r^2h$ written with three strokes; (d) the SRT of $r^2h$; the red edge cannot be generated by the time sequence of strokes.
4.5 Illustration of on-paper points (blue) and in-air points (red) in the time path, $a_1+a_2$ written with 6 strokes.
4.6 Illustration of (a) $\theta_i$, $\phi_i$ and (b) $\psi_i$ used in the feature description. The points related to feature computation at $p_i$ are depicted in red.
4.7 The possible sequences of point labels in one stroke.
4.8 Local CTC forward-backward algorithm. Black circles represent labels and white circles represent blanks. Arrows signify allowed transitions. Forward variables are updated in the direction of the arrows, and backward variables are updated in the reverse direction.
4.9 Illustration of the decision of stroke labels. As strokes 5 and 7 have the same label, the label of stroke 6 could be '+', '_' or one of the six relationships. All the other strokes are provided with the ground-truth labels in this example.
4.10 Real examples from the CROHME 2014 data set. (a) sample from Data set 1; (b) sample from Data set 2; (c) sample from Data set 3.
4.11 (a) $a \geq b$ written with four strokes; (b) the SLG of $a \geq b$ built according to the recognition result, all labels correct.
4.12 (a) $44-\frac{4}{4}$ written with six strokes; (b) the ground-truth SLG; (c) the rebuilt SLG according to the recognition result.
Three edge errors occurred: the Right relation between strokes 2 and 4 was missed because there is no edge from stroke 2 to 4 in the time path; the edge from stroke 4 to 3 was missed for the same reason; the edge from stroke 2 to 3 was wrongly recognized, as it should be labeled NoRelation.
5.1 Examples of graph models. (a) An example of a minimum spanning tree at stroke level. (b) An example of a Delaunay-triangulation-based graph at symbol level.
5.3 Stroke representation. (a) The bounding box. (b) The convex hull.
5.4 Illustration of the proposal that uses BLSTM to interpret 2-D handwritten MEs.
5.5 Illustration of visibility between a pair of strokes. $s_1$ and $s_3$ are visible to each other.
5.6 Five directions for a stroke $s_i$. Point (0, 0) is the center of the bounding box of $s_i$. The angle of each region is $\frac{\pi}{4}$.
5.7 (a) $\frac{d}{dx}a^x$ is written with 8 strokes; (b) the SLG built from the raw input using the proposed method; (c) the SLG from the ground truth; (d) illustration of the difference between the built graph and the ground-truth graph; red edges denote unnecessary edges and blue edges refer to missed ones compared to the ground truth.
5.8 Illustration of the merging strategy.
5.9 (a) $a \geq b$ written with four strokes; (b) the graph derived from the raw input; (c) the labeled graph (with labels and the related probabilities) after merging 7 paths; (d) the built SLG after post-processing, all labels correct.
5.10 (a) $44-\frac{4}{4}$ written with six strokes; (b) the derived graph; (c) the SLG built by merging several paths; (d) the built SLG with NoRelation edges removed.
6.1 (a) A chain-structured LSTM network; (b) a tree-structured LSTM network with arbitrary branching factor.
Extracted from [Tai et al., 2015].
6.2 A tree-based structure for chains (from root to leaves).
6.3 A tree-based structure for chains (from leaves to root).
6.4 Illustration of the proposal that uses BLSTM to interpret 2-D handwritten MEs.
6.5 Illustration of visibility between a pair of strokes. $s_1$ and $s_3$ are visible to each other.
6.6 Five regions for a stroke $s_i$. Point (0, 0) is the center of the bounding box of $s_i$. The angle range of region R1 is $[-\frac{\pi}{8}, \frac{\pi}{8}]$; R2: $[\frac{\pi}{8}, \ldots]$.
6.7 (a) f a = b f is written with 10 strokes; (b) create nodes; (c) add Crossing edges; (d) add R1, R2, R3, R4, R5 edges; (e) add Time edges. C: Crossing, T: Time.
6.8 (a) f a = b f is written with 10 strokes; (b) the derived graph G; the red part is one of the possible trees with s2 as the root. C: Crossing, T: Time.
6.9 A re-sampled tree. The small arrows between points give the directions of the information flows. Regarding the sequence of points inside one node or edge, most of the small arrows are omitted.
6.10 A tree-based BLSTM network with one hidden level. We only draw the full connection on one short sequence (red) for a clear view.
6.11 Illustration of the pre-computation stage of tree-based BLSTM. (a) From the input layer to the hidden layer (from root to leaves); (b) from the input layer to the hidden layer (from leaves to root).
6.12 The possible labels of points in one short sequence.
6.13 CTC forward-backward algorithm in one stroke $X_i$. Black circles represent label $l_i$ and white circles represent blanks. Arrows signify allowed transitions.
Forward variables are updated in the direction of the arrows, and backward variables are updated in the reverse direction. This figure is a local part (limited to one stroke) of Figure 4.8.
6.14 Possible relationship conflicts existing in merging results.
6.15 (a) $a \geq b$ written with four strokes; (b) the derived graph; (c) Tree-Time; (d) Tree-Left-R1 (in this case, Tree-0-R1 is the same as Tree-Left-R1); (e) the built SLG of $a \geq b$ after merging several trees and performing the other post-processing steps, all labels correct; (f) the built SLG with NoRelation edges removed.
6.16 (a) $44-\frac{4}{4}$ written with six strokes; (b) the derived graph; (c) Tree-Time; (d) Tree-Left-R1 (in this case, Tree-0-R1 is the same as Tree-Left-R1);
8.1 The symbol relation tree (SRT) for (a) $\frac{a+b}{c}$ and (b) $a+\frac{b}{c}$. 'R' denotes a Right relationship.
8.2 (a) "2 + 2" written with four strokes; (b) the SLG of "2 + 2". The four strokes are labeled s1, s2, s3 and s4 in chronological order. (ver.) and (hor.) were added to distinguish the horizontal and vertical strokes of the '+'. 'R' represents the Right relationship.
8.3 A unidirectional recurrent network unfolded in time.
8.4 Illustration of the single-path method.
8.5 Introduction of "in the air" strokes.
8.6 Recognition by merging paths.
8.7 Recognition by merging trees.

Figure 1.1 - Illustration of mathematical expression examples. (a) A simple and linear expression consisting of only the left-right relationship. (b) A 2-D expression where left-right, above-below and superscript relationships are involved.
Figure 1.2 - Illustration of expression $z^d + z$ written with 5 strokes.
Figure 1.3 - Illustration of the symbol segmentation of expression $z^d + z$ written with 5 strokes.
Figure 1.4 - Illustration of the symbol recognition of expression $z^d + z$ written with 5 strokes.
Figure 1.5 - Illustration of the structural analysis of expression $z^d + z$ written with 5 strokes. Sup: Superscript, R: Right.
Figure 1.6 - Illustration of the symbol relation tree of expression $z^d + z$. Sup: Superscript, R: Right.
Figure 1.8 - Illustration of the proposal of recognizing MEs with a single path.
Figure 1.9 - Illustration of the proposal of recognizing MEs by merging multiple paths.
Figure 1.10 - Illustration of the proposal of recognizing MEs by merging multiple trees.
Figure 2.1 - Symbol relation tree (a) and operator tree (b) of expression $(a+b)^2$. Sup: Superscript, R: Right, Arg: Argument.
Figure 2.2 - The symbol relation tree (SRT) for (a) $\frac{a+b}{c}$, (b) $a+\frac{b}{c}$. 'R' refers to the Right relationship.
Figure 2.3 - The symbol relation trees (SRT) for (a) $\sqrt[3]{x}$, (b) $\sum_{i=0}^{n} x_i$ and (c) $\int x^x dx$. 'R' refers to the Right relationship while 'Sup' and 'Sub' denote Superscript and Subscript respectively.
Figure 2.4 - Math file encoding for expression $(a+b)^2$. (a) Presentation MathML; (b) LaTeX. Adapted from [Zanibbi and Blostein, 2012].
Figure 2.5 - (a) $2+2$ written with four strokes; (b) the symbol relation tree of $2+2$; (c) the SLG of $2+2$.
The four strokes are indicated as s1, s2, s3, s4 in writing order. 'R' is for the left-right relationship.
Figure 2.6 - The file formats for representing the SLG of the expression in Figure 2.5a. (a) The file format taking the stroke as the basic entity. (b) The file format taking the symbol as the basic entity.
Figure 2.7 - Adjacency matrices for the Stroke Label Graph. (a) The adjacency matrix format: li denotes the label of stroke si and eij is the label of the edge from stroke si to stroke sj. (b) The adjacency matrix of labels corresponding to the SLG in Figure 2.5c.
Figure 2.8 - '$2+2$' written with four strokes was recognized as '$2-1^2$'. (a) The SLG of the recognition result; (b) the corresponding adjacency matrix. 'Sup' denotes the Superscript relationship.

... where $A_0$ belongs to the non-terminals and $A_1, \cdots, A_k$ belong to the terminals. $r$ denotes a relation between the elements $A_1, \cdots, A_k$. They use five binary spatial relations: four arrow relations and one containment relation. The arrows indicate a general writing direction, while containment covers notations such as $\sqrt{x}$, for instance.
Figure 2.11 - A simple example of a Fuzzy r-CFG. Extracted from [MacLean and Labahn, 2013].
Figure 2.12 - (a) An input handwritten expression; (b) a shared parse forest of (a) considering the grammar depicted in Figure 2.11. Extracted from [MacLean and Labahn, 2013].
Figure 2.14 - Architecture of the recognition system proposed in [Julca-Aguilar, 2016]. Extracted from [START_REF] Julca-Aguilar | Recognition of Online Handwritten Mathematical Expressions using Contextual Information[END_REF].
Figure 2.15 - Network architecture of WYGIWYS. Extracted from [START_REF] Deng | What you get is what you see: A visual markup decompiler[END_REF].
Figure 3.1 - Illustration of the sequence labeling task with examples of handwriting (top) and speech (bottom) recognition.
Input signals are shown on the left side while the ground truth is on the right. Extracted from [Graves et al., 2012].
Figure 3.2 - A multilayer perceptron.
Figure 3.3 - A recurrent neural network. The recurrent connections are highlighted in red.
Figure 3.4 - An unfolded recurrent network.
Figure 3.5 - An unfolded bidirectional network. Extracted from [Graves et al., 2012].
Figure 3.7 - A deep bidirectional LSTM network with two hidden levels.
Figure 3.9 - Illustration of the CTC forward algorithm. Blanks are represented with black circles and labels with white circles. Arrows indicate allowed transitions. Adapted from [Graves et al., 2012].
Figure 3.10 - Mistake incurred by best-path decoding. Extracted from [Graves et al., 2012].
Figure 3.11 - Prefix search decoding on the alphabet {X, Y}. Extracted from [Graves et al., 2012].
Figure 4.1 - Illustration of the proposal that uses BLSTM to interpret 2-D handwritten MEs.

We distinguish (1) linear (1-D) expressions which consist of only Right relationships, such as $2+2$, $a+b$; and (2) 2-D expressions whose relationships are not only Right relationships, such as P eo, $\sqrt{36}$, $\frac{a+b}{c+d}$. There are in total 9817 expressions (8834 for training and 983 for test) in the CROHME 2014 data set. Among them, the number of linear expressions is 2874, accounting for a proportion of around 30%. Furthermore, we define chain-SRT expressions as those expressions whose symbol relation trees are essentially a chain structure. Chain-SRT expressions contain all the linear expressions and a part of the 2-D expressions such as P eo, $\sqrt{36}$. Figure 4.2 illustrates this classification of expressions.
Figure 4.2 - Illustration of the complexity of math expressions.
Figure 4.3 - (a) The time path (red) in the SLG; (b) the SLG obtained by using the time path; (c) the post-processed SLG of '$2+2$', added edges depicted in bold.
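The distinction between chain-SRT and general 2-D expressions can be made concrete with a small sketch. Assuming a hypothetical representation where the SRT is a mapping from each symbol to the list of its children and each edge carries a relationship label (these data structures are ours, for illustration only):

```python
def is_chain_srt(children):
    """A symbol relation tree is a chain iff every node has at most one child.
    `children` maps each symbol id to the list of its child symbol ids."""
    return all(len(c) <= 1 for c in children.values())

def is_linear(children, labels):
    """A chain whose every edge is a Right relationship is a linear (1-D) expression.
    `labels` maps (parent, child) pairs to relationship labels."""
    return is_chain_srt(children) and all(
        lab == "Right" for lab in labels.values())

# usage with the running examples:
# 2 + 2 : '2' -R-> '+' -R-> '2' is linear, hence also chain-SRT
chain = {"s1": ["s2"], "s2": ["s3"], "s3": []}
rels = {("s1", "s2"): "Right", ("s2", "s3"): "Right"}
print(is_chain_srt(chain), is_linear(chain, rels))   # True True

# (a+b)/c : the fraction bar has two children (Above, Below), so not a chain
frac = {"bar": ["a", "c"], "a": ["plus"], "plus": ["b"], "b": [], "c": []}
print(is_chain_srt(frac))                            # False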
Figure 4.4 - (a) P eo written with four strokes; (b) the SRT of P eo; (c) $r^2h$ written with three strokes; (d) the SRT of $r^2h$; the red edge cannot be generated by the time sequence of strokes.
Figure 4.5 - Illustration of on-paper points (blue) and in-air points (red) in the time path, $a_1+a_2$ written with 6 strokes.
Figure 4.6 - Illustration of (a) $\theta_i$, $\phi_i$ and (b) $\psi_i$ used in the feature description. The points related to feature computation at $p_i$ are depicted in red.
Figure 4.7 - The possible sequences of point labels in one stroke.
Figure 4.8 - Local CTC forward-backward algorithm. Black circles represent labels and white circles represent blanks. Arrows signify allowed transitions. Forward variables are updated in the direction of the arrows, and backward variables are updated in the reverse direction.

... by introducing the local CTC training technique, and we use the extended library to train several BLSTM models. Both frame-wise training and local CTC training are adopted in our experiments. For each training process, the network with the best classification error (frame-wise) or CTC error (local CTC) on the validation data set is saved. Then, we test this network on the test data set. Maximum decoding (Eqn. 4.16) is used for the frame-wise-trained network. With regard to local CTC, either maximum decoding or local CTC decoding (Eqn. 4.18) can be used.

Data set 1. We select the expressions which include no 2-D spatial relation, only the left-right relation, from the CROHME 2014 training and test data. 2609 expressions are available for training, about one third of the full training set, and 265 expressions for testing. In this case, there are 91 classes of symbols. Next, we split the training set into a new training set and a validation set, 90% for training and 10% for validation. The output layer size is 94 (91 symbol classes + Right + NoRelation + blank).
In left-right expressions, NoRelation is used each time a delayed stroke breaks the left-right time order.

Data set 2. The depth of the expressions in this data set is limited to 1, which imposes that two sub-expressions having a spatial relationship (Above, Below, Inside, Superscript, Subscript) must be left-right expressions. It adds some more complex MEs to the previous linear expressions. 5820 expressions are selected for training from the CROHME 2014 training set and 674 expressions for test from the CROHME 2014 test set. Again, we divide the 5820 expressions into a new training set and a validation set, 90% for training and 10% for validation. The output layer size is 102 (94 symbol classes + 6 relationships + NoRelation + blank).

Data set 3. The complete data set from CROHME 2014: 8834 expressions for training and 983 expressions for test. Again, we divide the 8834 expressions into training (90%) and validation (10%). The output layer size is 109 (101 symbol classes + 6 relationships + NoRelation + blank).

Figure 4.10 - Real examples from the CROHME 2014 data set. (a) sample from Data set 1; (b) sample from Data set 2; (c) sample from Data set 3.
Figure 4.11 - (a) $a \geq b$ written with four strokes; (b) the SLG of $a \geq b$ built according to the recognition result, all labels correct.

Figure 5.1a presents an example of MST.
Figure 5.1 - Examples of graph models. (a) An example of a minimum spanning tree at stroke level. Extracted from [Matsakis, 1999]. (b) An example of a Delaunay-triangulation-based graph at symbol level. Extracted from [Hirata and Honda, 2011].
Figure 5.1b presents an example of a Delaunay-triangulation-based graph applied to math expressions.
Figure 5.3 - Stroke representation. (a) The bounding box. (b) The convex hull.
Figure 5.4 - Illustration of the proposal that uses BLSTM to interpret 2-D handwritten MEs.
Figure 5.5 - Illustration of visibility between a pair of strokes.
$s_1$ and $s_3$ are visible to each other.
Figure 5.7 - (a) $\frac{d}{dx}a^x$ is written with 8 strokes; (b) the SLG built from the raw input using the proposed method; (c) the SLG from the ground truth; (d) illustration of the difference between the built graph and the ground-truth graph; red edges denote unnecessary edges and blue edges refer to missed ones compared to the ground truth.
Figure 5.8 - Illustration of the merging strategy.
Figure 5.9 - (a) $a \geq b$ written with four strokes; (b) the graph derived from the raw input; (c) the labeled graph (with labels and the related probabilities) after merging 7 paths; (d) the built SLG after post-processing, all labels correct.
Figure 5.10 - (a) $44-\frac{4}{4}$ written with six strokes; (b) the derived graph; (c) the SLG built by merging several paths; (d) the built SLG with NoRelation edges removed.
Figure 6.1 - (a) A chain-structured LSTM network; (b) a tree-structured LSTM network with arbitrary branching factor. Extracted from [Tai et al., 2015].
Figure 6.2 - A tree-based structure for chains (from root to leaves).
Figure 6.3 - A tree-based structure for chains (from leaves to root).
Figure 6.7 - (a) f a = b f is written with 10 strokes; (b) create nodes; (c) add Crossing edges. C: Crossing.
Figure 6.8 - (a) f a = b f is written with 10 strokes; (b) the derived graph G; the red part is one of the possible trees with s2 as the root. C: Crossing, T: Time.

Table 6.3 - The different types of the derived trees.
Type         | Root                | Traverse algorithm  | Visiting order
Tree-Left-R1 | the leftmost stroke | Depth-First Search  | (Crossing, R1, R3, R4, R2, R5, Time)
Tree-Left-R2 | the leftmost stroke | Depth-First Search  | (Crossing, R2, R1, R3, R4, R5, Time)
Tree-Left-R3 | the leftmost stroke | Depth-First Search  | (Crossing, R3, R1, R4, R2, R5, Time)
Tree-Left-R4 | the leftmost stroke | Depth-First Search  | (Crossing, R4, R1, R3, R2, R5, Time)
Tree-Left-R5 | the leftmost stroke | Depth-First Search  | (Crossing, R5, R1, R3, R4, R2, Time)
Tree-0-R1    | s0                  | Depth-First Search  | (Crossing, R1, R3, R4, R2, R5, Time)
Tree-0-R2    | s0                  | Depth-First Search  | (Crossing, R2, R1, R3, R4, R5, Time)
Tree-0-R3    | s0                  | Depth-First Search  | (Crossing, R3, R1, R4, R2, R5, Time)
Tree-0-R4    | s0                  | Depth-First Search  | (Crossing, R4, R1, R3, R2, R5, Time)
Tree-0-R5    | s0                  | Depth-First Search  | (Crossing, R5, R1, R3, R4, R2, Time)
Tree-Time    | s0                  | Depth-First Search  | only the time order

6.4.4 Feed the inputs of the Tree-based BLSTM

In section 6.4.3, we derived trees from the intermediate graph. Nodes of the tree represent visible strokes and edges denote the relationships between pairs of strokes. We would like to label each node and edge correctly with the Tree-based BLSTM model, aiming to finally build a complete SLG. To realize this, the first step is to feed the derived tree into the Tree-based BLSTM model.

Figure 6.11 - Illustration of the pre-computation stage of tree-based BLSTM. (a) From the input layer to the hidden layer (from root to leaves); (b) from the input layer to the hidden layer (from leaves to root).
Figure 6.13 - CTC forward-backward algorithm in one stroke $X_i$. Black circles represent label $l_i$ and white circles represent blanks. Arrows signify allowed transitions. Forward variables are updated in the direction of the arrows, and backward variables are updated in the reverse direction. This figure is a local part (limited to one stroke) of Figure 4.8.
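The derivation of one such tree can be sketched as a depth-first traversal that expands outgoing edges in the priority order of Table 6.3. This is a simplified illustration of the idea, assuming a hypothetical adjacency-list representation of the stroke graph; the actual implementation may differ:

```python
def derive_tree(graph, root, order=("Crossing", "R1", "R3", "R4", "R2", "R5", "Time")):
    """Derive a tree from the stroke graph by depth-first search, visiting
    outgoing edges by relationship priority (here the Tree-Left-R1 order).
    `graph[u]` is a list of (edge_label, v) pairs."""
    rank = {lab: i for i, lab in enumerate(order)}
    visited = {root}
    tree = []                              # list of (parent, edge_label, child)
    def dfs(u):
        # expand edges of u by priority; skip strokes already in the tree
        for lab, v in sorted(graph.get(u, []),
                             key=lambda e: rank.get(e[0], len(rank))):
            if v not in visited:
                visited.add(v)
                tree.append((u, lab, v))
                dfs(v)
    dfs(root)
    return tree

# usage on a toy graph: s0 -R1-> s1, s0 -Time-> s2, s1 -Crossing-> s2
g = {"s0": [("Time", "s2"), ("R1", "s1")], "s1": [("Crossing", "s2")]}
print(derive_tree(g, "s0"))
# [('s0', 'R1', 's1'), ('s1', 'Crossing', 's2')]
```

Changing the `order` argument (or the root) yields the other tree types of Table 6.3; each derived tree keeps every stroke exactly once, so the merged labels can later be assembled into one SLG.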
CHAPTER 6. MATHEMATICAL EXPRESSION RECOGNITION BY MERGING MULTIPLE TREES

Figure 6.14 - Possible relationship conflicts existing in merging results.
Figure 6.15 - (a) $a \geq b$ written with four strokes; (b) the derived graph; (c) Tree-Time; (d) Tree-Left-R1 (in this case, Tree-0-R1 is the same as Tree-Left-R1); (e) the built SLG of $a \geq b$ after merging several trees and performing the other post-processing steps, all labels correct; (f) the built SLG with NoRelation edges removed.
Figure 6.16 - (a) $44-\frac{4}{4}$ written with six strokes; (b) the derived graph; (c) Tree-Time; (d) Tree-Left-R1 (in this case, Tree-0-R1 is the same as Tree-Left-R1); (e) the built SLG after merging several trees and performing the other post-processing steps; (f) the built SLG with NoRelation edges removed.
Figure 6.17 - (a) $\frac{9}{9+\sqrt{9}}$ written with 7 strokes; (b) the derived graph; (c) Tree-Time; ...

CHAPTER 7. CONCLUSION AND FUTURE WORKS

We extend the CTC training technique to local CTC. The new training technique proposed can improve the system performance globally compared to frame-wise training, while also constraining the output. The limitation of this simple proposal is that it takes into account only pairs of visible strokes that are successive in the input time, and therefore misses some relationships for 2-D mathematical expressions.

Figure 8.1 - The symbol relation tree (SRT) for (a) $\frac{a+b}{c}$ and (b) $a+\frac{b}{c}$. 'R' denotes a Right relationship.
Figure 8.2 - (a) "2 + 2" written with four strokes; (b) the SLG of "2 + 2". The four strokes are labeled s1, s2, s3 and s4 in chronological order. (ver.) and (hor.) were added to distinguish the horizontal and vertical strokes of the '+'. 'R' represents the Right relationship.
Figure 8.3 - A unidirectional recurrent network unfolded in time.

The use of recurrent networks (RNNs) is particularly relevant for sequence labeling tasks in which context plays an important role.
Training these networks requires defining a cost function that classically relies on knowing the desired labels (ground truth) at every time step of the outputs. This imposes having a training set in which each sequence is completely labeled at the level of every point making up the signal frame, a very tedious task of assigning a label to each of these points. Using a CTC layer circumvents this difficulty: it suffices to know the label sequence globally, without requiring a complete alignment with the input signal. Thanks to an additional "blank" label, CTC allows the network to provide decisions only at a few specific instants, while still permitting a complete reconstruction of the sequence.

Figure 8.4 - Illustration of the single-path method.
Figure 8.6 - Recognition by merging paths.
Figure 8.7 - Recognition by merging trees.

2.13 Geometric features for classifying the spatial relationship between regions B and C. Extracted from [Álvaro et al., 2016].
2.14 Architecture of the recognition system proposed in [START_REF] Julca-Aguilar | Recognition of Online Handwritten Mathematical Expressions using Contextual Information[END_REF]. Extracted from [Julca-Aguilar, 2016].
2.15 Network architecture of WYGIWYS.

Table 2.1 - Illustration of the terminology related to recall and precision.

              | relevant            | non relevant
segmented     | true positive (tp)  | false positive (fp)
not segmented | false negative (fn) | true negative (tn)

Table 4.1 - The symbol-level evaluation results on the CROHME 2014 test set (provided the ground-truth labels on the time path).

Data set | Segments Rec./Prec. (%) | Seg + Class Rec./Prec. (%) | Tree Rels. Rec./Prec. (%)
Prec. 1 99.73 99.46 99.73 99.46 95.78 99.40 2 99.75 99.49 99.73 99.48 80.33 99.39 3 99.73 99.45 99.72 99.44 75.54 99.27 Table 4.2 -The expression level evaluation results on CROHME 2014 test set (provided the ground truth labels on the time path). Data set correct (%) <= 1 error <= 2 errors <= 3 errors 1 86.79 87.55 91.32 93.96 2 44.21 51.63 61.87 68.69 3 34.11 40.94 50.51 58.25 Table 4 . 4 3 -The symbol level evaluation results on CROHME 2014 test set, including the experiment results in this work and CROHME 2014 participant results. Data set, features Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. 1, 5 90.11 80.75 78.91 70.71 79.87 73.66 2, 5 91.88 84.47 82.42 75.77 64.75 71.96 3, 5 93.26 86.86 84.40 78.61 61.85 75.06 system CROHME 2014 participant results III 98.42 98.13 93.91 93.63 94.26 94.01 I 93.31 90.72 86.59 84.18 84.23 81.96 VII 89.43 86.13 76.53 73.71 71.77 71.65 V 88.23 84.20 78.45 74.87 61.38 72.70 IV 85.52 86.09 76.64 77.15 70.78 71.51 VI 83.05 85.36 69.72 71.66 66.83 74.81 II 76.63 80.28 66.97 70.16 60.31 63.74 Table 4 . 4 4 shows the recognition rates at the global expression level with no error, and with at most one Table 4.4 -The expression level evaluation results on CROHME 2014 test set, including the experiment results in this work and CROHME 2014 participant results. Data set, features correct (%) <= 1 error <= 2 errors <= 3 errors 1, 5 25.28 40.75 49.06 52.08 2, 5 12.76 25.07 31.16 36.20 3, 5 12.63 21.28 27.70 31.98 system CROHME 2014 participant results III 62.68 72.31 75.15 76.88 I 37.22 44.22 47.26 50.20 VII 26.06 33.87 38.54 39.96 VI 25.66 33.16 35.90 37.32 IV 18.97 28.19 32.35 33.37 V 18.97 26.37 30.83 32.96 II 15.01 22.31 26.57 27.69 to three errors in the labels of SLG. This metric is very strict. For example one label error can happen only on one stroke symbol or in the relationship between two one-stroke symbols; a labeling error on a 2-stroke symbol leads to 4 errors (2 nodes labels and 2 edges labels). 
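The tp/fp/fn terminology of Table 2.1 maps directly onto the 'Rec.' and 'Prec.' columns used throughout these evaluation tables. A minimal helper making that mapping explicit (illustrative code, not from the thesis):

```python
# Recall and precision as percentages, following the tp/fp/fn
# terminology of Table 2.1 (illustrative helper, not thesis code).
def recall(tp: int, fn: int) -> float:
    # share of relevant items that were actually segmented
    return 100.0 * tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    # share of segmented items that were actually relevant
    return 100.0 * tp / (tp + fp)
```

For instance, with 933 true positives, 67 false negatives and 100 false positives, this gives Rec. = 93.3% and Prec. = 90.3%, the kind of figures reported in the tables.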
As can be seen, the expression recognition rates decrease as the data sets become more and more complex from Data set 1 to 3. On Data set 1, containing only linear expressions, the ME recognition rate is 25.28%. The recognition rate with no error on the CROHME 2014 test set is 12.63%; the best and worst rates reported by CROHME 2014 are 62.68% and 15.01%. When looking at the recognition rate with fewer than three errors, four participants ranked between 27% and 37%, while our result is 31.98%.

From Table 4.5, comparing the first 2 networks, we can conclude that local CTC training improves the system performance globally compared to frame-wise training. Furthermore, the proposed local CTC training method accelerates the convergence process and therefore reduces the training time significantly.

Table 4.5 - The symbol level evaluation results (mean values) on CROHME 2014 test set with different training and decoding methods and features.

  Feat.  Train       Decode      Segments (%)    Seg + Class (%)   Tree Rels. (%)
                                 Rec.    Prec.   Rec.    Prec.     Rec.    Prec.
  5      frame-wise  maximum     92.71   85.88   83.76   77.59     59.84   73.71
  5      local CTC   maximum     93.21   86.73   84.11   78.26     61.75   74.51
  5      local CTC   local CTC   93.2    86.71   84.11   78.25     61.71   74.67
  7      local CTC   maximum     93      86.43   84      78.06     61.73   74.06

Comparing the results of the second and third systems, the conclusion is straightforward: local CTC decoding does not help the recognition process but costs more computation. Maximum decoding is the better choice in this work. In the fourth system, we test the effect of contextual features on BLSTM networks; the results stay at the same level as the system trained with only local features. In addition, to examine the stability of our recognition system, we provide in Table 4.6 the standard deviations of the symbol level evaluation results on the CROHME 2014 test set with local CTC training and the maximum decoding method, using 5 local features. As shown, the standard deviations are quite low, indicating that our system has a high stability.

Table 4.6 - The standard deviations of the symbol level evaluation results on CROHME 2014 test set with local CTC training and the maximum decoding method, 5 local features.

  Feat.  Train      Decode    Segments (%)    Seg + Class (%)   Tree Rels. (%)
                              Rec.    Prec.   Rec.    Prec.     Rec.    Prec.
  5      local CTC  maximum   0.12    0.26    0.26    0.32      0.29    0.83

Consequently, in the coming chapters we will use the local CTC training method instead of frame-wise training, maximum decoding instead of local CTC decoding, and 5 local features instead of 7 features for all the experiments, aiming at an effective and efficient system.

In the LOS graph model, the bounding box center is taken as the eye of a stroke. Each stroke s_i has an Unblocked Angle Range (UAR), initialized as [0, 2π]. For any other stroke s_j, the Block Angle Range (BAR) of its convex hull is calculated. If the Visibility Angle Range (VAR), the overlap of BAR and UAR, is nonzero, s_i can see s_j; UAR is then updated to UAR - VAR. The model called LOS CH symmetric (CH: the convex hull is used to compute the block angle range of each stroke; symmetric: for an edge from s_i to s_j, there is also the reverse edge from s_j to s_i) has a recall of 99.9% and a precision of 29.7% (CROHME 2014 test set, CPP distance). The recall and precision of the DT graph are 97.3% and 39.1% respectively (CROHME 2014 test set, AC distance). In fact, each graph model has its strong points and also its limitations. Hu chose LOS CH symmetric as the graph representation in his work, as a high recall and a reasonable precision are required.

Table 5.1 - The symbol level evaluation results on CROHME 2014 test set (given the ground truth labels of the nodes and edges of the built graph).

  Model        Segments (%)      Seg + Class (%)   Tree Rels. (%)
               Rec.     Prec.    Rec.    Prec.     Rec.    Prec.
  time graph   99.73    99.45    99.72   99.44     75.54   99.27
  new graph    100.00   99.99    99.99   99.98     93.48   99.95

Table 5.2 - The expression level evaluation results on CROHME 2014 test set (given the ground truth labels of the nodes and edges of the built graph).

  Model        correct (%)   <= 1 error   <= 2 errors   <= 3 errors
  time graph   34.11         40.94        50.51         58.25
  new graph    67.65         76.70        85.76         90.74

Table 5.3 - Illustration of the classifiers used in the different experiments, depending on the type of path.

  exp.   Time    Random
  1      CLS_T

Table 5.5 - The expression level evaluation results on CROHME 2014 test set, including the experiment results in this work and the CROHME 2014 participant results.

  exp.     correct (%)   ≤ 1 error   ≤ 2 errors   ≤ 3 errors
  1        11.80         19.33       26.55        31.43
  2        8.55          16.89       23.40        29.91
  3        13.02         22.48       30.21        35.71
  4        11.19         19.13       26.04        31.13
  5        13.02         21.77       30.82        36.52
  system   CROHME 2014 participant results
  III      62.68         72.31       75.15        76.88
  I        37.22         44.22       47.26        50.20
  VII      26.06         33.87       38.54        39.96
  VI       25.66         33.16       35.90        37.32
  IV       18.97         28.19       32.35        33.37
  V        18.97         26.37       30.83        32.96
  II       15.01         22.31       26.57        27.69

Table 6.1 - The symbol level evaluation results on CROHME 2014 test set (given the ground truth labels).

  Model               Segments (%)      Seg + Class (%)   Tree Rels. (%)
                      Rec.     Prec.    Rec.    Prec.     Rec.    Prec.
  time graph          99.73    99.45    99.72   99.44     75.54   99.27
  graph (Chapter 5)   100.00   99.99    99.99   99.98     93.48   99.95
  new graph           99.97    99.93    99.96   99.92     93.99   99.86

Table 6.2 - The expression level evaluation results on CROHME 2014 test set (given the ground truth labels).

  Model               correct (%)   <= 1 error   <= 2 errors   <= 3 errors
  time graph          34.11         40.94        50.51         58.25
  graph (Chapter 5)   67.65         76.70        85.76         90.74
  new graph           69.89         77.21        85.96         90.54

6.4.3 Derivation of trees from G

The complete data set from CROHME 2014 is used: 8834 expressions for training and 983 expressions for testing. We randomly extract 10% of the 8834 training expressions as a validation set.
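The line-of-sight (LOS) construction described above can be sketched compactly. The following is a simplified, illustrative version (names and the reduction of stroke geometry to point lists are assumptions, not the thesis implementation): each stroke keeps its Unblocked Angle Range as a list of intervals, and the Block Angle Range of each candidate stroke is subtracted once that stroke is seen.

```python
# Minimal sketch of the LOS visibility test: UAR is a list of disjoint
# [lo, hi) angle intervals in radians; BAR of a visible stroke is
# subtracted from it. Illustrative code, not the thesis implementation.
import math

def block_angle_range(eye, points):
    """BAR of a stroke seen from `eye`: min/max of the point angles.
    Simplification: ignores intervals wrapping around 2*pi."""
    angles = [math.atan2(y - eye[1], x - eye[0]) % (2 * math.pi)
              for x, y in points]
    return min(angles), max(angles)

def subtract(intervals, lo, hi):
    """Remove [lo, hi) from a list of disjoint [a, b) intervals."""
    out = []
    for a, b in intervals:
        if hi <= a or b <= lo:      # no overlap: keep as is
            out.append((a, b))
        else:                       # keep only the uncovered parts
            if a < lo:
                out.append((a, lo))
            if hi < b:
                out.append((hi, b))
    return out

def visible_strokes(eye, strokes):
    """Indices of strokes visible from `eye` (strokes pre-sorted by
    distance to the eye, as (index, point_list) pairs)."""
    uar = [(0.0, 2 * math.pi)]
    seen = []
    for idx, pts in strokes:
        lo, hi = block_angle_range(eye, pts)
        # VAR = total overlap between BAR and the current UAR
        var = sum(max(0.0, min(hi, b) - max(lo, a)) for a, b in uar)
        if var > 0:                 # nonzero VAR: the stroke is visible
            seen.append(idx)
            uar = subtract(uar, lo, hi)   # UAR <- UAR - BAR
    return seen
```

In this sketch a nearer stroke whose angular span fully covers a farther one blocks it, mirroring how the LOS CH symmetric model prunes candidate edges before graph labeling.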
6.5 Experiments

Data sets. To get a more recent comparison with the state of the art, we also use the latest CROHME 2016 data set to evaluate the best configuration. The training data set remains the same as for CROHME 2014; 1147 expressions are included in the CROHME 2016 test data set.

Table 6.4 - The symbol level evaluation results on CROHME 2014 test set with Tree-Time only.

  Network, model   Segments (%)     Seg + Class (%)   Tree Rels. (%)
                   Rec.    Prec.    Rec.    Prec.     Rec.    Prec.
  i, Tree-Time     92.93   84.82    84.12   76.78     60.70   76.19
  ii, Tree-Time    95.10   90.47    87.53   83.27     65.06   83.18
  iii, Tree-Time   95.43   91.13    88.26   84.28     65.45   83.57
  iv, Tree-Time    95.57   91.21    87.81   83.80     65.98   82.85

Table 6.5 - The expression level evaluation results on CROHME 2014 test set with Tree-Time only.

  Network, model   correct (%)   ≤ 1 error   ≤ 2 errors   ≤ 3 errors
  i, Tree-Time     12.41         20.24       26.14        30.93
  ii, Tree-Time    16.09         25.46       32.28        37.27
  iii, Tree-Time   16.80         25.56       32.89        38.09
  iv, Tree-Time    16.19         25.97       33.20        38.09

Table 6.6 - The symbol level evaluation results on CROHME 2014 test set with 3 trees, along with CROHME 2014 participant results.

  Network, model      Segments (%)     Seg + Class (%)   Tree Rels. (%)
                      Rec.    Prec.    Rec.    Prec.     Rec.    Prec.
  i, Tree-Time        92.93   84.82    84.12   76.78     60.70   76.19
  i, Tree-Left-R1     84.82   72.49    72.80   62.21     44.34   57.78
  i, Tree-0-R1        85.31   72.88    74.17   63.37     42.92   60.08
  i, Merge3           93.53   87.20    86.10   80.28     71.16   66.13
  ii, Tree-Time       95.10   90.47    87.53   83.27     65.06   83.18
  ii, Tree-Left-R1    86.71   75.64    76.85   67.03     48.14   61.91
  ii, Tree-0-R1       87.52   76.66    77.00   67.45     48.14   63.04
  ii, Merge3          95.01   90.05    88.38   83.76     76.20   72.28
  iii, Tree-Time      95.43   91.13    88.26   84.28     65.45   83.57
  iii, Tree-Left-R1   88.03   78.13    78.56   69.72     50.31   65.87
  iii, Tree-0-R1      87.41   77.02    77.63   68.40     48.23   64.28
  iii, Merge3         95.25   90.70    88.90   84.65     77.33   73.72
  Merge9              95.52   91.31    89.55   85.60     78.08   74.64
  system              CROHME 2014 participant results
  III                 98.42   98.13    93.91   93.63     94.26   94.01
  I                   93.31   90.72    86.59   84.18     84.23   81.96
  VII                 89.43   86.13    76.53   73.71     71.77   71.65
  V                   88.23   84.20    78.45   74.87     61.38   72.70
  IV                  85.52   86.09    76.64   77.15     70.78   71.51
  VI                  83.05   85.36    69.72   71.66     66.83   74.81
  II                  76.63   80.28    66.97   70.16     60.31   63.74

Table 6.12 - Illustration of node (SLG) label errors of Merge9 on CROHME 2014 test set. Only the cases that occur ≥ 10 times are listed.

  output label   ground truth label   no. of occurrences
  x              X                    46
  x              ×                    26
  p              P                    24
  ,              1                    19
  c              C                    16
  y              Y                    14
  +              t                    14
  .              .                    13
  X              x                    16
  a              x                    14
  1              |                    11
  -              1                    10
  ×              x                    10
  z              2                    10
  q              9                    10

Table 6.13 - Illustration of edge (SLG) label errors of Merge9 on CROHME 2014 test set. The first column gives the output labels; the first row gives the ground truth labels; the other cells give the corresponding numbers of occurrences. '*' represents segmentation edges, grouping two nodes into a symbol; the label of a segmentation edge is a symbol (for convenient representation we do not give the specific symbol types, but an overall label '*').

  (ground truth columns: *, Above, Below, Inside, Right, Sub, Sup, _)
  *        208 0 0 17 1 1 29
  Above    8 1 21 10
  Below    2 1 1 114 7
  Inside   5 1 1 9
  Right    344 65 22 40 152 112 1600
  Sub      4 6 3 44 1 7
  Sup      1 3 35 3 31
  _        929 300 80 109 1858 189 235

We evaluated the graph model in Section 6.4.2, where around 6% of the relationships were missed at the graph representation stage. One of the future works could be searching for a better graph representation model to catch these 6% of relationships.

Mathematical Markup Language (MathML) version 3.0, https://www.w3.org/Math/.
Graves A. RNNLIB: A recurrent neural network library for sequence learning problems. http://sourceforge.net/projects/rnnl/.
The weights are manually optimized: we tested several different weight assignments and then chose the best one among them.

Figure 8.5 - Introduction of 'in-air' (pen-up) strokes.

[...] associated with a stroke at one of the points of that stroke, thereby limiting the flexibility of the 'blank' label inside that stroke. This makes it possible to rebuild the SLG graph with one and only one label per node and per edge.

Acknowledgments

The third metric concerns spatial relationship classification ('Tree Rels.'): a correct spatial relationship between two symbols requires that both symbols are correctly segmented and carry the right relationship label. As presented, the results for 'Segments' and 'Seg+Class' do not show a big difference among exps. (1 2 3 4). This can be explained by the fact that the time path alone is enough to give good results, and the random paths contribute little. With regard to 'Tree Rels.', the 'Rec.' of exps. (2 3 4) improves compared to exp. 1 because random paths catch some ground truth edges which are missed by the time path; but the 'Prec.' rate declines, which means that random paths also cover some edges which are not in the ground truth LG. Unfortunately, these extra edges are not labeled as NoRelation. Among the (1 2 3 4) experiments, exp. 3 outperforms the others on all items. Thus, it is a better strategy to use CLS_T for labeling the time path and CLS_R for the random paths.
Our results are comparable to the results of CROHME 2014 because the same training and testing data sets are used. The second part of Table 5.4 gives the symbol level evaluation results of the participants in CROHME 2014, sorted by the recall rate for correct symbol segmentation. The best 'Rec.' values for 'Segments' and 'Seg+Class' reported by CROHME 2014 are 98.42% and 93.91% respectively. Ours are 92.77% and 85.17%, both ranked 3 out of 8 systems (7 participants in CROHME 2014, plus ours). Our solution presents competitive results on the symbol recognition and segmentation tasks. Table 5.5 shows the recognition rates at the global expression level with no error, and with at most one to three errors in the labels of the LG. Among the (1 2 3 4) experiments, exp. 3 outperforms the others on all items. Compared to exp. 1, where only the time path is considered, we see an increase in the recall rate of 'Tree Rels.' but a decrease in its precision rate in exp. 3. As only 6 random paths are used there, we would like to see whether more random paths could bring any change. We therefore carry out another experiment, exp. 5, where 10 random paths are used to train the classifier CLS_R. The evaluation results at the symbol level and the expression level are provided in Table 5.4 and Table 5.5 respectively. As shown, when we consider more random paths, the recall rate of 'Tree Rels.' keeps increasing but its precision rate keeps decreasing. Thus, at the expression level, the recognition rate remains at the same level as the experiment with 6 random paths. To illustrate these results, we reconsider the 2 test samples (a ≥ b and 44 - 4/4) recognized with the system of exp. 3. In the last chapter, where we used only the single time path, a ≥ b was correctly recognized, while for 44 - 4/4 the Right relationship from the minus symbol to the fraction bar was omitted in the modeling.

Compared with all the participating systems, when we compute the recognition rate with ≤ 3 errors, our result is 50.15%, very close to the second-ranked system (50.20%). The top-ranked system is from the MyScript company, and they use a much larger training data set which is not available to the public. Furthermore, as far as we know, all the top 4 systems in the CROHME 2014 competition are grammar-driven solutions, which require a large amount of manual work and have a high computational complexity. No grammar is considered in our system. To have more up-to-date comparisons, we also evaluate the Merge9 system on the CROHME 2016 test data set. As can be seen in Table 6.8, compared to the other systems that participated in CROHME 2016, our system is still competitive on the symbol segmentation and classification tasks; for the relationship recognition task, there is room for improvement. The results at the expression level are presented in Table 6.9; the global expression recognition rate is 27.03%.

Experiment 3

In Experiment 2, we considered merging 3 trees to build a SLG describing a math expression. Compared to the results of symbol segmentation and recognition, the relationship classification results are not particularly prominent. One possible reason could be that 3 trees cannot cover the graph G well; in other words, some edges of the graph G are not used in the 3 derived trees. Thus, in this experiment, we test all the 11 trees illustrated in Table 6.3 to see whether more trees could improve the relationship classification results. Taking into consideration that we saw a small increase in recognition results but a much larger increase in time complexity from Network (ii) to Network (iii) in the previous experiments, we use Network (ii) as a cost-effective choice to train 11 different classifiers in this section. The evaluation results at the symbol level and the global expression level are presented in Tables 6.10 and 6.11 respectively.
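The merging step combines the labels proposed by several trees (Merge3, Merge9) into a single SLG, resolving conflicts when trees disagree. A hypothetical sketch of such a merge as a weighted vote over node/edge labels (the weights, data layout and function names are illustrative assumptions, not the actual implementation):

```python
# Weighted vote over the labels that several labeled trees propose for
# the same SLG nodes/edges. Illustrative stand-in for the merging step;
# weights and data layout are assumptions, not the thesis code.
from collections import defaultdict

def merge_votes(proposals, weights):
    """proposals: one dict {item: label} per tree; weights: one weight
    per tree. Returns {item: winning label}."""
    scores = defaultdict(lambda: defaultdict(float))
    for tree_labels, w in zip(proposals, weights):
        for item, label in tree_labels.items():
            scores[item][label] += w
    # for each node/edge, keep the label with the highest total weight
    return {item: max(lab, key=lab.get) for item, lab in scores.items()}
```

With manually tuned weights, as mentioned for the real system, a tree that is more reliable for a given relationship type can be given a larger say in the final SLG label.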
In each table, we provide in detail the individual tree recognition results and the merging results.

Publications of the author

Journals

Thèse de Doctorat, Ting ZHANG
Nouvelles architectures pour la reconnaissance des expressions mathématiques manuscrites
New Architectures for Handwritten Mathematical Expressions Recognition

Résumé: A genuine scientific challenge, the recognition of handwritten mathematical expressions is a very attractive field of pattern recognition leading to innovative practical applications. The large number of symbols used (more than 100), together with the two-dimensional structure of expressions, increases the difficulty of their recognition. In this thesis, we address online handwritten mathematical expression recognition by using, in an innovative way, deep BLSTM recurrent neural networks with CTC to build a graph-based analysis system. We extend the linear BLSTM structure to tree structures (Tree-based BLSTM), covering the two dimensions of the language. We also propose adding localization constraints in the CTC layer to adapt the network decisions to the scale of the handwriting strokes, allowing robust modeling and evaluation. The proposed system builds a graph from the strokes to be recognized and their spatial relationships. Several trees are derived from this graph and labeled by our Tree-based BLSTM. The resulting trees are then merged to build a SLG (stroke label graph) modeling a 2-D expression. A major difference from traditional systems is the absence of explicit steps of symbol segmentation, isolated symbol recognition and spatial relationship analysis: our approach directly produces a SLG graph. Our grammar-free system achieves results comparable to the specialized state-of-the-art systems.

Abstract: As an appealing topic in pattern recognition, handwritten mathematical expression recognition presents a big research challenge and underpins many practical applications. Both a large set of symbols (more than 100) and the 2-D structure increase the difficulty of this recognition problem. In this thesis, we focus on online handwritten mathematical expression recognition using BLSTM networks and the CTC topology, and finally build a graph-driven recognition system, bypassing the high time complexity and manual work of classical grammar-driven systems. To allow the 2-D structured language to be handled by a sequence classifier, we extend the chain-structured BLSTM to an original Tree-based BLSTM, which can label tree-structured data. The CTC layer is adapted with local constraints, to align the outputs while still benefiting from the additional 'blank' class. The proposed system addresses the recognition task as a graph building problem. The input expression is a sequence of strokes, from which an intermediate graph is derived considering temporal and spatial relations among strokes. Next, several trees are derived from the graph and labeled with the Tree-based BLSTM. The last step is to merge these labeled trees to build an admissible stroke label graph (SLG), modeling 2-D formulas uniquely. One major difference from traditional approaches is that there are no explicit segmentation, recognition and layout extraction steps, but a unique trainable system that directly produces a SLG describing a mathematical expression. The proposed system, without any grammar, achieves competitive results in the online math expression recognition domain.
HAL id: 01754481 | language: en | domain: sdv.aen | year: 2015
Source: https://hal.univ-lorraine.fr/tel-01754481/file/DDOC_T_2015_0219_BEKHIT.pdf
"Lactococcus lactis subsp. lactis ATCC 11454 in microcapsules based on biopolymers"

Keywords: biopolymer, bioactive films, microbeads, starch, hydroxypropylmethylcellulose, mechanical properties

First of all, I would like to thank Professor Stéphane Desobry for agreeing to welcome me into his research team and for supervising me throughout this thesis. I learned a great deal from working with him, and I am grateful for the time he devoted to me and for all the opportunities he gave me during this thesis. I would also like to express my particular gratitude to my thesis co-supervisor, Dr Laura Sanchez-Gonzalez, for her advice, comments and help, and for her valuable commitment to improving this work. I sincerely enjoyed working with her and I am grateful for the time she devoted to me, as well as for her involvement and availability. I warmly thank the members of the Erasmus Mundus programme for awarding me the scholarship that funded this thesis, and in particular Delphine for her presence, her contribution, her help and her warm advice. I also thank Mr Denis Poncelet, Ms Andrée Voilley, Carole Jeandel, Carole Perroud, Myriam Michel and Sylvie Wolff.

List of Abbreviations
List of Tables
List of Figures

To avoid incorporating LAB directly into foods, where they would grow very rapidly, one approach is to place them in the packaging material and allow active nisin to migrate towards the food, following the active-packaging concept. This can be achieved with LAB entrapped in microbeads that are incorporated into biopolymer films. Such bioactive polymer films would also slow bacterial growth while allowing nisin synthesis during extended storage.
[...] against Listeria spp., which is a major contaminant of refrigerated foods. L. lactis can produce nisin. The inclusion of this bacterium in microbeads, in order to protect the cells from a hostile environment and to control nisin release, is beginning to be studied internationally. Various polymers are used for encapsulation (mainly polysaccharides and proteins), together with numerous encapsulation techniques. Selecting a technique compatible with the stabilization and survival of lactic acid bacteria is rather complex. The method most widely used to produce microcapsules containing probiotics is extrusion, owing to its operational simplicity, its low cost, and formation conditions that are gentle enough to preserve bacterial viability.

LITERATURE REVIEW

Chapter I: Literature Review

Introduction

The encapsulation process aims to entrap a specific component within a matrix (proteins, polysaccharides, etc.). Examples of components that need to be encapsulated for use in the food industry include flavors, to control aroma release; antimicrobials, to protect from spoilage; antioxidants, to increase nutritional value and delay chemical degradation; vitamins, to increase their bioavailability; and probiotics (such as lactic acid bacteria), to improve food value.

Capsule roles and encapsulation objectives

Probiotics are encapsulated to protect the cells against harsh environments, control their release, avoid alteration in the stomach and improve LAB viability in products [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF]; [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF]. For this purpose, particles with diameters from a few µm to a few mm are produced.
The substance used for encapsulation goes by various names: coating, membrane, shell, carrier material, wall material, external phase or matrix. The matrix used to prepare microcapsules in food processes should be food grade and able to protect the LAB entrapped in it [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF].

The different structures of capsules

Microencapsulation typically occurs in three phases. The first phase is the integration of the bioactive component in a matrix, which can be liquid or solid. With a liquid core, integration is a dissolution or a dispersion in the matrix; with a solid core, incorporation is an agglomeration or an adsorption. The second phase distributes the liquid, or the powder dissolved in a matrix. The last step stabilizes the capsules by a chemical (polymerization), physicochemical (gelation) or physical (evaporation, solidification, coalescence) process [START_REF] Passos | Innovation in food engineering: New techniques and products[END_REF]. Microcapsules can be ranked in 4 categories (Fig. 1): (i) matrix-core/shell microcapsules, processed by gelling biopolymer drops in a solution containing a bivalent ion, followed by treatment of the surface with a polycation ("multi-step technique") [START_REF] Murua | In vitro characterization and in vivo functionality of erythropoietin-secreting cells immobilized in alginate-poly-L-lysine-alginate microcapsules[END_REF]; [START_REF] Willaert | Applications of cell immobilisation biotechnology[END_REF]; (ii) liquid-core/shell microcapsules, processed by dropping a cell suspension containing bivalent ions into a biopolymer solution ("one-step technique"); (iii) cells-core/shell microcapsules (coating); (iv) hydrogel microcapsules, in which the cells are hydrogel-embedded.
Many attempts have been made over the years to improve microencapsulation techniques, such as proper biomaterial characterization and purification, and improvements in microcapsule production procedures [START_REF] Wilson | Layer-by-layer assembly of a conformal nanothin PEG coating for intraportal islet transplantation[END_REF].

Application to cell protection and release

Capsules prevent cell release and increase mechanical and chemical stability [START_REF] Overgaard | Immobilization of hybridoma cells in chitosan alginate beads[END_REF]. Capsules are often obtained by a coating technique (negatively charged polymers such as alginate coated with positively charged polymers such as chitosan), which enhances the stability of the gel [START_REF] Smidsrød | Alginate as immobilization matrix for cells[END_REF] and provides a barrier to cell release [START_REF] Dumitriu | Polysaccharides in medicinal applications[END_REF], [START_REF] Gugliuzza | Smart Membranes and Sensors: Synthesis, Characterization, and Applications[END_REF], [START_REF] Zhou | Spectrophotometric quantification of lactic bacteria in alginate and control of cell release with chitosan coating[END_REF]. [START_REF] Tanaka | A novel immobilization method for prevention of cell leakage from the gel matrix[END_REF] reported the coating of gel capsules by a cell-free alginate gel layer. Cross-linking with cationic polymers improves the stability of microcapsules [START_REF] Kanekanian | Encapsulation Technologies and Delivery Systems for Food Ingredients and Nutraceuticals[END_REF]. [START_REF] Kolot | Immobilized microbial systems: principles, techniques, and industrial applications[END_REF]; [START_REF] Garti | Encapsulation technologies and delivery systems for food ingredients and nutraceuticals[END_REF]; (Kwak, 2014) developed a membrane around the beads to minimize cell release and produce stronger microcapsules.
The reaction of the bifunctional reagent with the chitosan membrane results in bridge formation linking the chitosan molecules; the length of the bridge depends on the type of cross-linking agent [START_REF] Hyndman | Microencapsulation of Lactococcus lactis within cross-linked gelatin membranes[END_REF]. For dry capsules, the incorporation of cryoprotectants such as glycerol enhances the survival of encapsulated cells after lyophilization and rehydration [START_REF] Lakkis | Encapsulation and controlled release technologies in food systems[END_REF]. (Aziz Homayouni, Azizi, Javadi, Mahdipour, & Ejtahed, 2012) showed that the survival of bifidobacteria increased significantly. [START_REF] Sheu | Improving survival of culture bacteria in frozen desserts by microentrapment[END_REF] reported Lb. delbrueckii ssp. bulgaricus survival of up to 90%, because these agents reduce ice crystal formation by binding water. The capsules with glycerol also exhibited a 43% decrease in capsule size, due to the higher alginate concentration per unit volume once glycerol binds water.

Polymers used for encapsulation

The selection of a biopolymer or combination of biopolymers depends on many factors: the desired physicochemical and functional properties of the particles (e.g., size, charge, polarity, loading capacity, permeability, degradability, and release profile), the properties of the biopolymers (e.g., charge, polarity, and solubility), and the nature of any enclosed active ingredient (e.g., charge, polarity, solubility, and stability) [START_REF] Joye | Biopolymer-based nanoparticles and microparticles: Fabrication, characterization, and application[END_REF]. Several matrices are used, offering a wide range of properties adapted to the entrapped bacteria.
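Viability figures such as the 90% survival quoted above are usually derived from plate counts before and after a treatment (e.g. freeze-drying), and are often also reported as a log10 reduction. A small illustrative calculation (not from the cited studies):

```python
# Cell survival after a processing step, from viable counts (CFU),
# as a percentage and as a log10 reduction. Illustrative helper only.
import math

def survival_percent(n_final: float, n_initial: float) -> float:
    # fraction of the initial viable population remaining, in percent
    return 100.0 * n_final / n_initial

def log_reduction(n_final: float, n_initial: float) -> float:
    # decimal log of the viability loss; 1.0 means a 10-fold drop
    return math.log10(n_initial / n_final)
```

For example, going from 1e9 to 9e8 CFU/g corresponds to 90% survival, i.e. only about a 0.05-log loss, which is why cryoprotectants such as glycerol are judged effective.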
Carbohydrates

Alginate

Alginate hydrogels are extensively used in microcapsules with probiotics because of their simplicity, non-toxicity, biocompatibility and low cost [START_REF] Rowley | Alginate hydrogels as synthetic extracellular matrix materials[END_REF] [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF]. Alginate is a linear heteropolysaccharide extracted from different types of algae, with two structural units consisting of D-mannuronic (M) and L-guluronic (G) acids (Fig. 2). Depending on the source, the composition and sequence of D-mannuronic and L-guluronic acids vary widely and influence the functional properties of the material. G-units have a buckled shape, while M-units tend towards an extended band. Two G-units aligned side-by-side create a cavity of specific dimensions, which is able to bind divalent cations selectively. To prepare alginate capsules, sodium alginate droplets fall into a solution containing multivalent cations (usually Ca2+ in the form of CaCl2). The droplets form gel spheres instantaneously, entrapping the cells in a three-dimensional structure, as the polymer cross-links by exchange of sodium ions from the guluronic acids with divalent cations (Ca2+, Sr2+, or Ba2+). This results in a chain-chain association to form the "egg-box" model. Alginate capsules nevertheless have some disadvantages: they are very porous and sensitive to acidic environments [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF][START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF], which is not compatible with bacterial preservation and with the resistance of the microparticles under stomach conditions.
κ-Carrageenan

Carrageenan is a polysaccharide extracted from marine macroalgae, commonly used in the food industry. Three types of carrageenan (κ-, ι- and λ-) are commonly used in foods and exhibit different gelation properties: κ-carrageenan forms rigid, brittle gels; ι-carrageenan produces softer, elastic and cohesive gels; while λ-carrageenan does not form gels (Fig. 3). These differences can be attributed to differences in sulfate groups and anhydro-bridges [START_REF] Burey | Hydrocolloid gel particles: formation, characterization, and application[END_REF]. Carrageenans can be used to form microcapsules with different production techniques. Carrageenan forms gels through ionotropic gelation coupled to a cold-set mechanism, which involves helix formation upon cooling and cross-linking in the presence of K+ ions (in the form of KCl) to stabilize the gel, prevent swelling and induce gelation. However, KCl has been reported to have an inhibitory effect on some lactic acid bacteria; as an alternative to K+, NH4+ ions have been recommended and produce stronger gel capsules [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF]. High carrageenan concentrations, such as 2-5%, require temperatures of 60-90°C for dissolution. Gelation is induced by temperature changes: probiotics are added to the polymer solution at 40-45°C and gelation occurs on cooling down to room temperature. Encapsulation of probiotic cells in κ-carrageenan beads keeps the bacteria in a viable state [START_REF] Dinakar | Growth and viability of Bifidobacterium bifidum in Cheddar cheese[END_REF], but the gels produced are brittle and not able to withstand stresses (Chen & Chen, 2007).

Cellulose

Cellulose is a structural polysaccharide from plants that becomes usable in the food industry after various physical or chemical modifications.
Cellulose acetate phthalate (CAP)

This polymer is a derivative of cellulose (Fig. 4) that is used for controlled drug release in the intestine [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]. CAP is insoluble in acidic media (pH ≤ 5) but soluble at pH ≥ 6. It provides good protection for probiotic bacteria under simulated gastrointestinal (GI) conditions [START_REF] Favaro-Trindade | Microencapsulation of L. acidophilus (La-05) and B. lactis (Bb-12) and evaluation of their survival at the pH values of the stomach and in bile[END_REF], [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF]. [START_REF] Rao | Survival of microencapsulated Bifidobacterium pseudolongum in simulated gastric and intestinal juices[END_REF] found that preparing an emulsion with starch and oil and adding CAP maintained high viability of probiotics in a simulated gastric environment.

Sodium carboxymethyl cellulose

Sodium carboxymethyl cellulose (NaCMC) is a hydrophilic polyanionic cellulose derivative commonly used as a food-grade building block for assembling polymers [START_REF] Tripathy | Designing carboxymethyl cellulose based layer-bylayer capsules as a carrier for protein delivery[END_REF]. It consists of linked glucopyranose residues with varying levels of carboxymethyl substitution (Fig. 5). Ethyl cellulose, by contrast, is not biodegradable and has been used as a hydrophobic polymer to produce hollow-shell particles and particles with a solid core [START_REF] Gunduz | Continuous generation of ethyl cellulose drug delivery nanocarriers from microbubbles[END_REF][START_REF] Montes | Coprecipitation of amoxicillin and ethyl cellulose microparticles by supercritical antisolvent process[END_REF].
NaCMC can be used with drugs and probiotics because of its resistance to gastric acid and its solubility in the intestine [START_REF] Hubbe | Cellulosic nanocomposites: a review[END_REF].

Xanthan gum

Xanthan gum is an extracellular anionic polysaccharide produced by the microorganism Xanthomonas campestris. It is a complex polysaccharide consisting of a primary β-D-(1,4)-glucose backbone carrying a branching trisaccharide side chain comprising β-D-(1,2)-mannose attached to β-D-(1,4)-glucuronic acid and terminating in a β-D-mannose [START_REF] Elçin | Encapsulation of urease enzyme in xanthan-alginate spheres[END_REF], [START_REF] Goddard | Principles of polymer science and technology in cosmetics and personal care[END_REF] (Fig. 6). Xanthan is soluble in cold water and hydrates rapidly; it is considered essentially non-gelling. The viscosity gradually decreases with increasing shear stress and thereafter recovers, remaining stable over a wide range of pH (2-12) and temperature. H-bonding and polymer chain entanglement form a network, which leads to the high viscosity. Two chains may align to form a double helix, providing a rather rigid configuration. The conversion between the ordered double-helical conformation and the single, more flexible extended chain may take place over hours between 40°C and 80°C [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF]. The extraordinary resistance to enzymatic degradation is attributed to the shielding of the backbone by the side chains. Xanthan also undergoes cryogelation [START_REF] Giannouli | Cryogelation of xanthan[END_REF].

Chitosan

Chitosan is a linear, positively charged polysaccharide obtained by extracting chitin from crustacean shells and then deacetylating it.
Chitosan is a hydrophilic, cationic, crystalline polymer that demonstrates film-forming ability and gelation characteristics. It is composed of glucosamine units, which can be cross-linked in the presence of anions and polyanions (Fig. 7). Chitosan is preferably used as a coating rather than as a capsule material, because simple encapsulation in chitosan is not efficient at increasing cell viability [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]. Chitosan capsules work well with hydrophilic macromolecules through electrostatic interactions or hydrogen bonding. However, chitosan seems to have inhibitory effects on lactic acid bacteria (Groboillot, [START_REF] Green | Membrane formation by interfacial cross-linking of chitosan for microencapsulation of Lactococcus lactis[END_REF].

Pectin

Pectin is a heteropolysaccharide that reinforces cellulose structures in plant cell walls. The backbone of pectin contains regions of galacturonic acid residues that can be methoxylated. Natural pectin typically has a degree of esterification of 70 to 80%, but this may be varied by changing the extraction and production conditions. The gelation and structure-forming characteristics of pectin depend on the degree of esterification (DE) and the arrangement of the methyl groups along the pectin molecule (Fig. 8). Gelation of high-methoxyl pectin (HMP) requires a pH of about 3 and a high content of soluble solids, while gelation of low-methoxyl pectin (LMP) requires the presence of a controlled amount of calcium ions and needs neither sugar nor acid [START_REF] Joye | Biopolymer-based nanoparticles and microparticles: Fabrication, characterization, and application[END_REF]. Pectin forms rigid gels by reacting with calcium salts or multivalent cations, which crosslink the galacturonic acids of the main polymer chains.
The calcium pectinate "egg-box" structure results from the combination of the carboxylic acid groups of the pectin molecules with the calcium ions [START_REF] Jain | Perspectives of biodegradable natural polysaccharides for site-specific drug delivery to the colon[END_REF]; (L. [START_REF] Liu | Pectin-based systems for colonspecific drug delivery via oral route[END_REF]; [START_REF] Tiwari | Preparation and characterization of satranidazole loaded calcium pectinate microbeads for colon specific delivery; Application of response surface methodology[END_REF]. Biopolymer particles can be produced by combining pectin with other polymers or cations, and its degradation rate can be modified by chemical modification (de Vos, Faas, Spasojevic, & Sikkema, 2010). Pectin is degraded by colonic bacteria; it can therefore be used as a gelling agent in foods and medicines and as a source of dietary fiber, because it remains intact in the stomach and the small intestine.

Dextran

Dextran (Fig. 9) is a polysaccharide that is a promising target for modification, because it has a large number of hydroxyl groups [START_REF] Suarez | Tunable protein release from acetalated dextran microparticles: a platform for delivery of protein therapeutics to the heart post-MI[END_REF]; (H. [START_REF] Wang | Amphiphilic dextran derivatives nanoparticles for the delivery of mitoxantrone[END_REF]. Dextran is highly soluble in water but can be modified to change its solubility pattern, which has implications when forming biopolymer particles by antisolvent precipitation [START_REF] Broaders | Acetalated dextran is a chemically and biologically tunable material for particulate immunotherapy[END_REF].
Dextran is widely used as a biomaterial among polysaccharide polymers, because the required chemical modifications via its hydroxyl groups are low cost [START_REF] Hu | Biodegradable amphiphilic polymer-drug conjugate micelles[END_REF]; (Kuen Yong Lee, Jeong, Kang, [START_REF] Lee | Electrospinning of polysaccharides for regenerative medicine[END_REF]. Dextran is rapidly degraded by dextranases produced in the gut.

Gellan gum

Gellan gum is a microbial polysaccharide derived from Pseudomonas elodea, consisting of a repeating tetrasaccharide unit composed of two glucose units, one glucuronic acid and one rhamnose (M.-J. Chen & Chen, 2007) (Fig. 10). Gellan is not easily degraded by enzymes and is stable over a wide pH range; it is therefore used in the food industry [START_REF] Nag | Microencapsulation of probiotic bacteria using pHinduced gelation of sodium caseinate and gellan gum[END_REF], [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF]. Gellan gum particles are useful for colonic delivery of active compounds because they can be degraded by galactomannanases in colonic fluids [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF]. For forming delivery systems, two types of gellan gum exist: the highly acylated (native) form and the deacylated form [START_REF] Singh | Effects of divalent cations on drug encapsulation efficiency of deacylated gellan gum[END_REF].
Starch

Resistant starch is not digested by amylases in the small intestine [START_REF] Singh | Starch digestibility in food matrix: a review[END_REF]; [START_REF] Anal | Recent advances in microencapsulation of probiotics for industrial applications and targeted delivery[END_REF] and releases bacterial cells in the large intestine [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]. It improves probiotic delivery to the intestine in a viable and metabolically active state, because it offers an ideal surface for the adherence of probiotic cells [START_REF] Anal | Recent advances in microencapsulation of probiotics for industrial applications and targeted delivery[END_REF][START_REF] Crittenden | Adhesion of bifidobacteria to granular starch and its implications in probiotic technologies[END_REF].

Proteins

Collagen

Collagen is the major component of mammalian connective tissue. It is found in high concentrations in tendon, skin, bone, cartilage and ligament. The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2) (Fig. 12). Owing to its biocompatibility, biodegradability, abundance in nature and natural ability to bind cells, it has been used for cell immobilization. Human-like collagen (HLC) is produced by recombinant Escherichia coli BL21 containing human-like collagen cDNA. Collagen may be gelled using changes in pH, allowing cell encapsulation in a minimally traumatic manner [START_REF] Rosenblatt | Injectable collagen as a pH-sensitive hydrogel[END_REF], [START_REF] Senuma | Bioresorbable microspheres by spinning disk atomization as injectable cell carrier: from preparation to in vitro evaluation[END_REF].
Collagen may be processed into fibers and macroporous scaffolds [START_REF] Chevallay | Collagen-based biomaterials as 3D scaffold for cell cultures: applications for tissue engineering and gene therapy[END_REF], [START_REF] Roche | Native and DPPA crosslinked collagen sponges seeded with fetal bovine epiphyseal chondrocytes used for cartilage tissue engineering[END_REF], while [START_REF] Su | Encapsulation of probiotic Bifidobacterium longum BIOMA 5920 with alginate-human-like collagen and evaluation of survival in simulated gastrointestinal conditions[END_REF] prepared microspheres from alginate and HLC by electrostatic droplet generation. The results showed that encapsulated probiotic bacteria tolerated simulated gastric juice better than free probiotic bacteria.

Figure 12: Collagen triple helix (from Wikipedia)

Gelatin

Gelatin is a protein derived from collagen by partial hydrolysis. It contains between 300 and 4,000 amino acids (Fig. 13). Gelatin is soluble in most polar solvents and forms solutions of high viscosity in water. Gelatin gels form on cooling of solutions with concentrations above about 1 wt%. The characteristics of the gels depend on the quality of the gelatin and the pH: they are clear, elastic, transparent and thermo-reversible, dissolving again at 35-40°C (Zuidam & Nedovic, 2010). Gelatin has been used to prepare microcapsules of probiotic bacteria, alone or with other compounds. When the pH is below the isoelectric point, the net charge of gelatin is positive, which leads to strong interactions with negatively charged materials [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF], [START_REF] Anal | Recent advances in microencapsulation of probiotics for industrial applications and targeted delivery[END_REF]. Higher concentrations of gelatin produce strong capsules that are resistant to cracking and breaking.
A mixture of gellan gum and gelatin has been used for the encapsulation of Lactococcus lactis ssp. cremoris [START_REF] Hyndman | Microencapsulation of Lactococcus lactis within cross-linked gelatin membranes[END_REF].

Figure 13: Gelatin structure

Milk proteins

Caseins

Caseins are the most prevalent phosphoproteins in milk and are extremely heat-stable proteins. They are obtained by destabilizing skim milk micelles by various processes. The main casein types are presented in Fig. 14, and the products obtained are mineral acid casein, lactic acid casein and rennet casein. Microcapsules can be prepared from a milk protein solution using the enzyme rennet: aggregation of the caseins occurs when the rennet enzyme cleaves the κ-casein molecule [START_REF] Heidebach | Microencapsulation of probiotic cells for food applications[END_REF]. Non-covalent cross-links then form gradually between chains to yield a final gel above 18°C [START_REF] Bansal | Aggregation of rennet-altered casein micelles at low temperatures[END_REF]. Caseins are able to encapsulate probiotics without significant cell loss during the encapsulation process, and they can protect the bacteria in capsules during incubation under simulated gastric conditions. This technique is suitable for probiotic applications in food.

Whey proteins

Whey is a by-product of cheese or casein production. Whey proteins include α-lactalbumin, β-lactoglobulin, immunoglobulins and serum albumin, as well as various minor proteins (Fig. 15). Whey proteins can be gelled in two different ways: heat-induced gelation (heating above the thermal denaturation temperature under appropriate pH and ionic strength conditions) or cold-induced gelation (addition of calcium to a mixture, using both extrusion and phase separation methods).
Whey protein gels have been produced from various whey proteins, such as β-lactoglobulin [START_REF] Jones | Comparison of proteinpolysaccharide nanoparticle fabrication methods: Impact of biopolymer complexation before or after particle formation[END_REF], (Jones & McClements, 2011), and lactoferrin [START_REF] Bengoechea | Formation and characterization of lactoferrin/pectin electrostatic complexes: Impact of composition, pH and thermal treatment[END_REF].

Plant proteins

Proteins derived from plants provide some advantages over animal proteins for forming biopolymer particles: reduced risk of contamination and infection, lower cost and usability in vegetarian food products [START_REF] Regier | Fabrication and characterization of DNA-loaded zein nanospheres[END_REF], (L. [START_REF] Chen | Elaboration and Characterization of Soy/Zein Protein Microspheres for Controlled Nutraceutical Delivery[END_REF]. Among plant proteins, hydrophobic cereal proteins are widely used to produce biopolymer particles by various methods. These particles have been shown to be suitable for the encapsulation of active ingredients (A. Patel, Hu, Tiwari, & Velikov, 2010a), [START_REF] Ezpeleta | Gliadin nanoparticles for the controlled release of all-trans-retinoic acid[END_REF], [START_REF] Duclairoir | Alpha-tocopherol encapsulation and in vitro release from wheat gliadin nanoparticles[END_REF], because they are water-insoluble, biodegradable and biocompatible. Emulsifier-stabilized protein particles may be used to improve the chemical stability of encapsulated active ingredients [START_REF] Podaralla | Influence of Formulation Factors on the Preparation of Zein Nanoparticles[END_REF], (A. Patel et al., 2010a), (A. R. [START_REF] Patel | Colloidal approach to prepare colour blends from colourants with different solubility profiles[END_REF], because they have good physical stability over a wide range of pH conditions (A. [START_REF] Patel | Synthesis and characterisation of zein-curcumin colloidal particles[END_REF].
Other plant proteins that have recently been shown to be suitable for producing biopolymer particles include pea protein [START_REF] Pierucci | Comparison of alpha-tocopherol microparticles produced with different wall materials: pea protein a new interesting alternative[END_REF] and soy protein isolate [START_REF] Liu | Soy Protein Nanoparticle Aggregates as Pickering Stabilizers for Oil-in-Water Emulsions[END_REF]. Soy protein isolate (SPI) contains an abundant mixture of hydrophilic globular proteins and is a cheap, renewable raw material. Chickpea proteins also show good functional attributes; two salt-soluble globulin-type proteins, legumin and vicilin, dominate (J. [START_REF] Wang | Entrapment, survival and release of Bifidobacterium adolescentis within chickpea protein-based microcapsules[END_REF]. Chickpea protein coupled with alginate was designed to serve as a suitable probiotic carrier for food applications [START_REF] Klemmer | Pea protein-based capsules for probiotic and prebiotic delivery[END_REF]. These capsules were able to protect B. adolescentis in simulated gastric juice and simulated intestinal fluids.

The advantages (Adv.), disadvantages (Dis.) and overall suitability (+/-) of these encapsulating materials are summarized below.

Alginate. Adv.: non-toxicity; biocompatibility; low cost. Dis.: beads sensitive to the acidic environment; microparticles not resistant under stomach conditions; very porous. (+) [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF]; [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]

κ-Carrageenan. Adv.: softer, elastic and cohesive gels; K+ ions stabilize the gel and prevent swelling. Dis.: brittle gels, not able to withstand stresses; KCl has been reported to have an inhibitory effect on some lactic acid bacteria. (-)

Cellulose derivatives. Ethylcellulose is not biodegradable and has been used as a hydrophobic polymer to produce hollow-shell particles and particles with a solid core; sodium carboxymethyl cellulose (NaCMC) shows high hydrophilicity. [START_REF] Gunduz | Continuous generation of ethyl cellulose drug delivery nanocarriers from microbubbles[END_REF]; (Montes, Gordillo, Pereyra, & de la Ossa, 2011); [START_REF] Hubbe | Cellulosic nanocomposites: a review[END_REF]

Xanthan gum. Adv.: stable over a wide range of pH (2-12) and temperature; resistance to enzymatic degradation. Dis.: soluble in cold water, hydrates rapidly; considered basically non-gelling. (+) [START_REF] Elçin | Encapsulation of urease enzyme in xanthan-alginate spheres[END_REF]; (Goddard & Gruber, 1999); [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF]; [START_REF] Giannouli | Cryogelation of xanthan[END_REF]

Chitosan. Adv.: useful for encapsulation of hydrophilic macromolecules. Dis.: poor efficiency at increasing cell viability by encapsulation, preferably used as a coating rather than a capsule; inhibitory effects on LAB. (-) [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]; (Groboillot et al., 1993)

Pectin. Adv.: can be used to form biopolymer particles in combination with other polymers; used as a gelling agent in food, in medicines and as a source of dietary fiber. (-)

Dextran. Adv.: rapidly degraded by dextranases produced in the gut; soluble in water. Dis.: rapid degradation and acid sensitivity. (+) [START_REF] Suarez | Tunable protein release from acetalated dextran microparticles: a platform for delivery of protein therapeutics to the heart post-MI[END_REF]; (H. [START_REF] Wang | Amphiphilic dextran derivatives nanoparticles for the delivery of mitoxantrone[END_REF]; [START_REF] Hu | Biodegradable amphiphilic polymer-drug conjugate micelles[END_REF]; (Kuen Yong [START_REF] Lee | Electrospinning of polysaccharides for regenerative medicine[END_REF]

Gellan gum. Adv.: not easily degraded by enzymes; stable over a wide pH range; useful for colonic delivery of active compounds. (+) (M.-J. Chen & Chen, 2007); [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF]; (B. N. [START_REF] Singh | Effects of divalent cations on drug encapsulation efficiency of deacylated gellan gum[END_REF]

Starch. Adv.: good enteric delivery characteristics, with better release of the bacterial cells in the large intestine; ideal surface for adherence of the probiotic cells to the starch granules; improves probiotic delivery to the intestine in a viable and metabolically active state. Dis.: poor solubility; high surface tension.

Gelatin. Adv.: clear, elastic, transparent and thermo-reversible gels; used for probiotic encapsulation. Dis.: high deformation of capsules; low values of the viscoelastic parameters. (+) (Zuidam & Nedovic, 2010); [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF]; [START_REF] Hyndman | Microencapsulation of Lactococcus lactis within cross-linked gelatin membranes[END_REF]

Collagen. Adv.: used in cell immobilization due to its biocompatibility, biodegradability, abundance in nature and natural ability to bind cells; can form a high-water-content hydrogel composite. Dis.: high cost to purify; natural variability of isolated collagen. (-)

Plant proteins. Adv.: reduced risk of contamination and infection compared with animal proteins; cheap; can be used in vegetarian products; water-insoluble, biodegradable and biocompatible; good physical stability over a range of pH conditions. [START_REF] Podaralla | Influence of Formulation Factors on the Preparation of Zein Nanoparticles[END_REF]; (A. R. [START_REF] Patel | Colloidal approach to prepare colour blends from colourants with different solubility profiles[END_REF]; (A. Patel, Hu, Tiwari, & Velikov, 2010b); (F. [START_REF] Liu | Soy Protein Nanoparticle Aggregates as Pickering Stabilizers for Oil-in-Water Emulsions[END_REF]

Chickpea protein. Adv.: good functional attributes and nutritional importance; good protection of probiotic bacteria. (+) (J. [START_REF] Wang | Entrapment, survival and release of Bifidobacterium adolescentis within chickpea protein-based microcapsules[END_REF]; [START_REF] Klemmer | Pea protein-based capsules for probiotic and prebiotic delivery[END_REF]

Microencapsulation methods

There are several encapsulation techniques. Before selecting one of them, the following points must be taken into consideration (N.J. Zuidam & Nedovic, 2010): (i) Which conditions affect the viability of the probiotics? (ii) Which processing conditions are used during food production? (iii) What will be the storage conditions of the food product containing the microcapsules? (iv) Which particle size is needed for incorporation into the food product?

Methods to produce dried capsules

Spray-drying

Spray-drying is widely used to produce probiotic microcapsules [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]; [START_REF] Zhao | Measurement of particle diameter of Lactobacillus acidophilus microcapsule by spray drying and analysis on its microstructure[END_REF]. Different parameters must be optimized in spray-drying, such as air flow, feed rate, feed temperature, inlet air temperature and outlet air temperature [START_REF] Vega | Invited review: spray-dried dairy and dairy-like emulsions-compositional considerations[END_REF]; [START_REF] O'riordan | Evaluation of microencapsulation of a Bifidobacterium strain with starch as an approach to prolonging viability during storage[END_REF].
The spray-drying process has many advantages: it can be operated continuously, it is rapid and relatively low cost, and it can be applied on a large scale suitable for industrial applications [START_REF] Brun-Graeppi | Cell microcarriers and microcapsules of stimuli-responsive polymers[END_REF]; [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF]. Its disadvantages are linked to the high temperatures used for drying, which are not suitable for probiotic bacteria [START_REF] Favaro-Trindade | Microencapsulation of L. acidophilus (La-05) and B. lactis (Bb-12) and evaluation of their survival at the pH values of the stomach and in bile[END_REF]; [START_REF] Ananta | Cellular injuries and storage stability of spraydried Lactobacillus rhamnosus GG[END_REF]; (Oliveira, Moretti, Boschini, Baliero, Freitas, & Favaro-Trindade, 2007). The compatibility between the bacterial strain and the type of encapsulating polymer has to be controlled to allow bacterial survival during the spray-drying process as well as during storage [START_REF] Desmond | Improved survival of Lactobacillus paracasei NFBC 338 in spray-dried powders containing gum acacia[END_REF].

Freeze-drying

Freeze-drying has been widely used to produce probiotic powders. During the process, the solvent or the suspension medium is frozen and then sublimed [START_REF] Santivarangkna | Alternative drying processes for the industrial preservation of lactic acid starter cultures[END_REF][START_REF] Solanki | Development of Microencapsulation Delivery System for Long-Term Preservation of Probiotics as Biotherapeutics Agent[END_REF]. The freeze-drying process is divided into three stages: freezing, primary drying and secondary drying.
Freeze-drying processing conditions are milder than those of spray-drying, and higher probiotic survival rates are typically achieved [START_REF] Wang | Entrapment, survival and release of Bifidobacterium adolescentis within chickpea protein-based microcapsules[END_REF]. The disadvantage of freeze-drying is linked to ice crystal formation and to stress conditions caused by high osmolarity, which damage the cell membrane. To increase the viability of probiotics during dehydration, skim milk powder, whey proteins, glucose, maltodextrin or trehalose are added to the drying medium before freeze-drying to act as cryoprotectants [START_REF] Basholli-Salihu | Effect of lyoprotectants on β-glucosidase activity and viability of Bifidobacterium infantis after freeze-drying and storage in milk and low pH juices[END_REF]. Cryoprotectants accumulate within the cells and thereby reduce the osmotic difference between the internal and external environments [START_REF] Kets | Effect of Compatible Solutes on Survival of Lactic Acid Bacteria Subjected to Drying[END_REF].

Spray freeze-drying

The spray freeze-drying technique combines processing steps common to freeze-drying and spray-drying. The probiotic cells are in a solution that is atomized into the cold vapor phase of a cryogenic liquid such as liquid nitrogen. The microcapsules formed by dispersion of the frozen droplets are then dried in a freeze dryer (Amin, Thakur, Jain, 2013); (H. [START_REF] Wang | Amphiphilic dextran derivatives nanoparticles for the delivery of mitoxantrone[END_REF][START_REF] De Vos | Encapsulation for preservation of functionality and targeted delivery of bioactive food components[END_REF]; (K. [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]; [START_REF] Semyonov | Microencapsulation of Lactobacillus paracasei by spray freeze drying[END_REF].
The main advantages of the spray freeze-drying technique are a controlled particle size and a higher capsule surface area than spray-dried powders. The technique nevertheless has some disadvantages, including high energy use, long processing times and a cost 30-50 times higher than spray-drying (Zuidam & Nedovic, 2010). In some studies, the use of polysaccharides contributed to reducing cell mobility in the glassy-state matrix, which acts as a protective excipient and improves cell viability during freezing [START_REF] Semyonov | Microencapsulation of Lactobacillus paracasei by spray freeze drying[END_REF].

Spray-chilling / spray-cooling

Spray-chilling, spray-cooling and spray-congealing are processes similar to spray-drying, but no water is evaporated and the air used is cold, which enables particle solidification. The microcapsules form quickly when the matrix containing the bioactive compound comes into contact with the cold air [START_REF] Champagne | Microencapsulation for the improved delivery of bioactive compounds into foods[END_REF]. Spray-chilling mainly uses a molten lipid matrix as carrier. Its disadvantages are a low encapsulation capacity and the release of core material during storage [START_REF] Sato | Polymorphism in Fats and Oils[END_REF]. Spray-chilling is a cheap encapsulation technology with potential for industrial-scale manufacture [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF], and it can generate smaller beads, which may be desirable in food processing. [START_REF] Pedroso | Protection of Bifidobacterium lactis and Lactobacillus acidophilus by microencapsulation using spray-chilling[END_REF] used spray-chilling to microencapsulate Bifidobacterium lactis and Lactobacillus acidophilus with palm and palm kernel fats as wall materials.
The solid lipid microparticles provided effective protection of the probiotics against gastric and intestinal fluids.

Fluid bed

In the fluid bed process, a cell suspension is sprayed and dried on inert carriers. The main advantages of the fluid bed are the control over the temperature and the lower cost; its disadvantage is that the technology is difficult to master over long processing durations. This method can be used to produce multilayer coatings with two different fats [START_REF] Champagne | The determination of viable counts in probiotic cultures microencapsulated by spray-coating[END_REF]. Fluid bed coating is one of the encapsulation technologies most commonly applied commercially to probiotics, and some companies have developed commercial products such as Probiocap® and Duolac® [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF].

Impinging aerosol technology

Impinging aerosol technology uses two separate aerosols: one with the microbial suspension in alginate solution and the other with calcium chloride. The alginate mixture is injected from the top of a cylinder while the calcium chloride is injected from the base, producing alginate microcapsules [START_REF] Sohail | Survivability of probiotics encapsulated in alginate gel microbeads using a novel impinging aerosols method[END_REF]. The advantages of this technology are that it is suitable for encapsulating heat-labile and solvent-sensitive materials, it allows large-volume production, and the capsules can be spray- or freeze-dried in a later stage. Microcapsules with a diameter of 2 mm were obtained and offered high protection to L. rhamnosus GG against gastric acid and bile [START_REF] Sohail | Survivability of probiotics encapsulated in alginate gel microbeads using a novel impinging aerosols method[END_REF].

Electrospinning

This technique is a combination of two techniques, namely electrospray and spinning.
In this technique, a high electric field is applied to a fluid coming out of the tip of a die that acts as one of the electrodes. This leads to droplet deformation and finally to the ejection of a charged jet from the tip towards the counter-electrode, leading to the formation of continuous cylindrical capsules. The main advantage of the electrospinning technique is that the capsules are very thin (down to a few nanometers) and have large surface areas [START_REF] Agarwal | Use of electrospinning technique for biomedical applications[END_REF]. (López-Rubio, Sanchez, Wilkanowicz, Sanz, & Lagaron, 2012) compared two types of electrospun microcapsules, with probiotics encapsulated either in a protein-based matrix (whey protein concentrate) or in a carbohydrate-based matrix (pullulan). Whey protein microcapsules gave greater cell viability than the pullulan structures.

Methods to produce humid capsules

Emulsification and ionic gelation

Emulsification is a technique for encapsulating live probiotics that uses different polysaccharides as encapsulating materials, such as alginate, κ-carrageenan, gellan gum, xanthan or pectin. For encapsulation in an emulsion, an emulsifier and/or a surfactant are needed; a solidifying agent is then added to the emulsion (Chen & Chen, 2007); [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]; [START_REF] De Vos | Encapsulation for preservation of functionality and targeted delivery of bioactive food components[END_REF]. This coupled technique gives a high survival rate of the bacteria, but it produces capsules with a wide range of sizes and shapes. The gel beads can be coated with a second polymer to provide better protection to the cells and improve the organoleptic properties (K. [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF].
Emulsification and enzymatic gelation

This technique uses milk proteins with probiotics that are encapsulated by means of an enzymatically induced gelation; milk proteins have high gelation properties and offer good protection for probiotics [START_REF] Heidebach | Microencapsulation of probiotic cells by means of rennet-gelation of milk proteins[END_REF], [START_REF] Heidebach | Microencapsulation of probiotic cells for food applications[END_REF]. This method produces spherical particles, insoluble in water. [START_REF] Heidebach | Microencapsulation of probiotic cells by means of rennet-gelation of milk proteins[END_REF] detailed an example of rennet gelation to prepare microcapsules. This technique permitted the use of alginate, κ-carrageenan, gellan gum or xanthan for capsule coating, even though they are not allowed in dairy products in some countries [START_REF] Picot | Production of Multiphase Water-Insoluble Microcapsules for Cell Microencapsulation Using an Emulsification/Spray-drying Technology[END_REF].

Emulsification and interfacial polymerization

Interfacial polymerization is an alternative technique that is performed in one step. The technique requires the formation of an emulsion. The discontinuous phase contains an aqueous suspension of probiotic cells and the continuous phase is an organic solvent. To start the polymerization reaction, a biocompatible agent is added. The microcapsules obtained are thin and have a strong membrane (Kaila [START_REF] Kailasapathy | Microencapsulation of probiotic bacteria: technology and potential applications[END_REF]. Interfacial polymerization of microcapsules is used to improve productivity in fermentation [START_REF] Yáñez-Fernández | Rheological characterization of dispersions and emulsions used in the preparation of microcapsules obtained by interfacial polymerization containing Lactobacillus sp[END_REF].
Extrusion

The extrusion technique is the most popular method for humid microcapsule production [START_REF] Green | Membrane formation by interfacial cross-linking of chitosan for microencapsulation of Lactococcus lactis[END_REF], [START_REF] Koyama | Cultivation of yeast and plant cells entrapped in the lowviscous liquid-core of an alginate membrane capsule prepared using polyethylene glycol[END_REF], [START_REF] Özer | Effect of Microencapsulation on Viability of Lactobacillus acidophilus LA-5 and Bifidobacterium bifidum BB-12 During Kasar Cheese Ripening[END_REF]. It involves the preparation of a hydrocolloid solution, mixed with the microbial cells, and the extrusion of the cell suspension through a needle; the droplets then fall into the solution of a cross-linking agent [START_REF] Heidebach | Microencapsulation of probiotic cells for food applications[END_REF]. Gelation occurs by a combination of the polymer and the cross-linking agent. The advantages of this technique are the simplicity of its operation, the lower cost, and operational conditions suitable for probiotic bacteria viability [START_REF] De Vos | Encapsulation for preservation of functionality and targeted delivery of bioactive food components[END_REF]. The main disadvantage of this technique is that the microcapsules are larger than 500 µm [START_REF] Reis | Review and current status of emulsion/dispersion technology using an internal gelation process for the design of alginate particles[END_REF]. In addition, rapid cross-linking between the polymer solution droplets and the cross-linking agent leads to rapid hardening of the microcapsule surfaces, which delays the movement of cross-linking ions into the inner core [START_REF] Liu | Characterization of structure and diffusion behaviour of Ca-alginate beads prepared with external or internal calcium sources[END_REF].
Co-extrusion

Co-extrusion technology is based on a laminar liquid jet that is broken into equally sized droplets by a vibrating nozzle (Prüsse, Bilancetti, Bučko, Bugarski, Bukowski, Gemeiner, Lewińska, Manojlovic, Massart, Nastruzzi, Nedovic, et al., 2008), [START_REF] Del Gaudio | Mechanisms of formation and disintegration of alginate beads obtained by prilling[END_REF]. The droplets are then gelled in a cross-linking solution. The diameter of the microcapsules is controlled by two main factors, which are the flow rate and the polymer solution viscosity [START_REF] Del Gaudio | Mechanisms of formation and disintegration of alginate beads obtained by prilling[END_REF]. [START_REF] Graff | Increased intestinal delivery of viable Saccharomyces boulardii by encapsulation in microspheres[END_REF] encapsulated Saccharomyces boulardii using a laminar jet break-up technique. Microcapsules were obtained by coating with a chitosan solution and significantly reduced the degradation of yeast cells in the gastrointestinal tract. [START_REF] Huang | Microfluidic device utilizing pneumatic micro-vibrators to generate alginate microbeads for microencapsulation of cells[END_REF] obtained microcapsules with two concentrations of alginate solution that were introduced separately into the inner and outer chambers of a coaxial nozzle. The polymer droplets were cross-linked in a calcium chloride solution. Adjusting the concentrations of the shell and core materials provided a high degree of control over the size of the alginate microspheres and over the release of the microbial cells from the microspheres.

Coacervation

This technique can be used to encapsulate flavor oils, preservatives, and enzymes as well as microbial cells [START_REF] John | Bioencapsulation of microbial cells for targeted agricultural delivery[END_REF]; (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007); (Oliveira, Moretti, Boschini, Baliero, Freitas, & Favaro-Trindade, 2007).
This technique uses specific pH, temperature and solution compositions to separate one or more incompatible polymers from an initial coating solution. The incompatible polymers are added to the coating polymer solution and the dispersion is stirred. The separation of the incompatible polymer and the deposition of a dense coacervate phase surrounding the core material to form microcapsules occur as a result of changes in the physical parameters [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF]; [START_REF] John | Bioencapsulation of microbial cells for targeted agricultural delivery[END_REF][START_REF] Nihant | Microencapsulation by coacervation of poly(lactide-co-glycolide) IV. Effect of the processing parameters on coacervation and encapsulation[END_REF]; (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007); (Oliveira, Moretti, Boschini, Baliero, Freitas, & Favaro-Trindade, 2007). The most important factors in the coacervation technique are the volume of the dispersed phase, the ratio of incompatible polymer to coating polymer, the stirring rate of the dispersion and the core material to be encapsulated (N. [START_REF] Nihant | Microencapsulation by coacervation of poly(lactide-co-glycolide) IV. Effect of the processing parameters on coacervation and encapsulation[END_REF]. In the coacervation technique, the composition and viscosity of the polymer solutions in the supernatant phases affect the size distribution, surface morphology and internal porosity of the microcapsules (Nicole [START_REF] Nihant | Microencapsulation by Coacervation of Poly(lactide-co-glycolide). III. Characterization of the Final Microspheres[END_REF], (N. [START_REF] Nihant | Microencapsulation by coacervation of poly(lactide-co-glycolide) IV. Effect of the processing parameters on coacervation and encapsulation[END_REF].
(Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007) used the coacervation technique to encapsulate B. lactis (BI 01) and L. acidophilus (LAC 4) in a casein/pectin complex. This technology presented a good encapsulation capacity and a controlled liberation of the core material from the microcapsules by mechanical stress, temperature and pH changes. Thus, the coacervation technique is favorable for probiotic bacteria (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007). The disadvantage of the coacervation method is that it cannot be used for producing very small microspheres [START_REF] John | Bioencapsulation of microbial cells for targeted agricultural delivery[END_REF].

Table 3: Encapsulation methods

Methods to produce dry capsules

Spray drying (valid with probiotic bacteria: -)
Advantages: it can be operated on a continuous basis; it can be applied on a large scale; it is suitable for industrial application; it is rapid and relatively low cost.
Disadvantages: the high temperature used in the process may not be suitable for encapsulating probiotic bacteria.

Freeze drying (valid with probiotic bacteria: +)
Advantages: the freeze-drying processing conditions are milder than spray-drying, so higher probiotic survival rates are typically achieved.
Disadvantages: freezing causes damage to the cell membrane because of crystal formation and imposes stress conditions through high osmolarity.

Spray freeze drying (valid with probiotic bacteria: +)
Advantages: controlled size; larger specific surface area than spray-dried capsules.
Disadvantages: high energy use; long processing time and a cost 30-50 times higher than spray-drying.

Spray chilling / spray cooling (valid with probiotic bacteria: +)
Advantages: cheapest encapsulation technology; potential for industrial-scale manufacture; generates smaller beads.
Disadvantages: spray chilling mainly uses a molten lipid matrix as carrier; the microparticles produced can present a low encapsulation capacity and expulsion of the core material during storage.

Fluid bed (valid with probiotic bacteria: +)
Advantages: good temperature control; low cost; easy scale-up.
Disadvantages: technology difficult to control over longer durations.

Impinging aerosol technology (valid with probiotic bacteria: +)
Advantages: suitable for encapsulating heat-labile and solvent-sensitive materials; large volume production capacity.

Electrospinning (valid with probiotic bacteria: +)
Advantages: production of very thin capsules; large surface areas.

Methods to produce humid capsules

Emulsification and ionic gelation (valid with probiotic bacteria: +)
Advantages: high survival rate of the bacteria; gel beads can be coated with a second polymer that provides more protection for the bacteria.
Disadvantages: large size ranges and varied shapes.

Emulsification and enzymatic gelation (valid with probiotic bacteria: +)
Advantages: production of water-insoluble, spherical particles; use of coatings.
Disadvantages: the microcapsules are coated with alginate, κ-carrageenan, gellan gum or xanthan, which are not allowed in dairy products in some countries.

Extrusion (valid with probiotic bacteria: +)
Advantages: simple operation; low cost; mild operational conditions ensuring high cell viability.
Disadvantages: inefficiency in producing microspheres smaller than 500 µm; less stable microspheres.

Co-extrusion (valid with probiotic bacteria: +)
Advantages: size-controlled microspheres; significant protection of the microorganisms in the gastrointestinal tract; high production rate.

Coacervation (valid with probiotic bacteria: +)
Advantages: good encapsulation capacity; controlled liberation of the core material from the microspheres by mechanical stress.
Disadvantages: this method may not be used for producing small microspheres.

Techniques for capsules characterization

4.1. Microcapsules size, morphology and stability

The particle size is often the most important characteristic of the capsules; it is measured by different types of microscopy or by light scattering.

Microscopy

Optical and electron microscopies are used to measure the size of the capsules, the surface topography, the thickness of the membrane and, sometimes, the permeability of the capsule membrane.
Conventional optical microscopy

This microscope is used to characterize capsule structures that are ≥ 0.2 µm, as fixed by the wavelength of visible light, and the size of the capsules can be measured (N.J. Zuidam & Nedovic, 2010).

Confocal laser scanning microscopy (CLSM)

CLSM produces in-focus images of a fluorescent specimen by optical sectioning. CLSM provides a better spatial 3D image than electron microscopy and provides additional information such as the three-dimensional localization and quantification of the encapsulated phase. CLSM may allow determination of the encapsulation rate without the need for any destruction, extraction or chemical assays [START_REF] Lamprecht | Characterization of microcapsules by confocal laser scanning microscopy: structure, capsule wall composition and encapsulation rate[END_REF]. CLSM can also provide the distribution of polymers and cross-linking ions [START_REF] Strand | Visualization of alginatepoly-L-lysine-alginate microcapsules by confocal laser scanning microscopy[END_REF]. [START_REF] Lamprecht | Characterization of microcapsules by confocal laser scanning microscopy: structure, capsule wall composition and encapsulation rate[END_REF].

Transmission electron microscopy (TEM)

TEM is capable of resolving structures with smaller dimensions than optical microscopy [START_REF] Dash | Kinetic modeling on drug release from controlled drug delivery systems[END_REF]. TEM is used to measure the structures of very thin samples by passing electrons through them. TEM gives the morphology and shell thickness of encapsulates following fixation, dehydration, and sectioning [START_REF] Chen | Chitosan/beta-lactoglobulin core-shell nanoparticles as nutraceutical carriers[END_REF]. [START_REF] Chiu | Encapsulation of doxorubicin into thermosensitive liposomes via complexation with the transition metal manganese[END_REF] used TEM to analyze various liposome preparations (Fig. 17).
[START_REF] Xu | Effect of molecular structure of chitosan on protein delivery properties of chitosan nanoparticles[END_REF] used TEM to examine the diameter and spherical shape of capsules.

Atomic force microscopy

Atomic force microscopy is used for observing the surface structure of particles. It provides resolution in the nanometer range and three-dimensional imaging. Samples undergo relatively mild preparation procedures that reduce the risk of damaging or altering the sample properties prior to measurement [START_REF] Burey | Hydrocolloid gel particles: formation, characterization, and application[END_REF]. Particles produced using fluid gels typically have irregular shapes and can even have tail-like structures (Frith, Garijo, Foster, & Norton, 2002) [START_REF] Williams | Microstructural origins of the rheology of fluid gels[END_REF]. (Burgain et al., 2013) (Fig. 19.20) used AFM to identify specific interactions between bacteria and whey proteins. Force measurements and topography images were made at room temperature and at different pH values. It was observed that many factors influence "bacteria/dairy matrix" interactions, including the nature of the proteins, the nature of the strains and the pH of the media.

Laser light scattering

Laser light scattering also measures the size of microcapsules, in the size range between 0.02 and 2000 µm.

Single particle optical sensing (SPOS)

SPOS is used to measure particle size distributions (Onwulata, 2005); it is based on the magnitude of the pulse generated by single particles passing through a small photo zone, illuminated by laser light, which can be correlated with the size of the particles [START_REF] Dodds | 13 -Techniques to analyse particle size of food powders[END_REF].
Focused beam reflectance measurement (FBRM)

FBRM provides in situ/online characterization of non-spherical particles by measuring the chord lengths of particles [START_REF] Li | Determination of non-spherical particle size distribution from chord length measurements. Part 2: Experimental validation[END_REF], [START_REF] Barrett | Characterizing the Metastable Zone Width and Solubility Curve Using Lasentec FBRM and PVM[END_REF]. In Fig. 21, FBRM measurements were conducted in aqueous suspensions prepared with distilled water. In the image analysis experiments, the particles were evenly distributed on microscope slides and were measured in dry form [START_REF] Li | Determination of non-spherical particle size distribution from chord length measurements. Part 2: Experimental validation[END_REF].

Malvern Zetasizer

The Malvern Zetasizer characterizes the electrical properties of biopolymer particles through their ζ-potential, to evaluate the magnitude of the repulsion between capsules [START_REF] Legrand | Polymeric nanocapsules as drug delivery systems. A review[END_REF] and to predict the stability of particle suspensions against aggregation (Jahanshahi & Babaei, 2008). ζ-potential measurements are highly sensitive to pH and ionic strength. In addition, in systems consisting of a biopolymer mixture, it may be difficult to interpret the data since all biopolymers will contribute to the overall signal.

Microcapsules composition, physical state and release

To determine the particle composition and the distribution of active ingredients within the particles, several techniques have been explored.

FTIR spectroscopy

FTIR gives information on the chemical structures and interactions between the matrix and the active compound (or bacteria). (Ben Messaoud et al., 2015a) used FTIR to investigate molecular interactions between alginate and thickening agents.
(Ben Messaoud et al., 2015b) studied the influence of thickening agents on alginate capsules modulated with cationic chitosan, xanthan gum and maltodextrin. The results showed that the release profile of the cochineal red food dye changed considerably with the different thickening agents. After one day of storage, capsules filled with chitosan prevented any molecular transport, while 35% of the encapsulated red dye remained in the capsules filled with maltodextrin.

X-ray photoelectron spectroscopy

X-ray photoelectron spectroscopy is used for chemical analysis of the particle surface composition, while elemental analysis has been used to study the overall composition of particles and to evaluate whether a certain compound was encapsulated within the particles. (Montes, Gordillo, Pereyra, & de la Ossa, 2011) used X-ray analysis to determine the particle composition, size and shape. The powder samples were mounted on double-sided adhesive and analyzed without any further treatment.

Differential scanning calorimetry (DSC)

DSC is used to detect thermal changes in the sample during heating or cooling, in order to reveal the presence of particular organization such as crystals or to detect interactions between biopolymers in the particles. [START_REF] Ribeiro | Chitosan-reinforced alginate microspheres obtained through the emulsification/internal gelation technique[END_REF] assessed interactions between alginate and chitosan in capsule membranes [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF].

Rheological gel characterization

The rheometer investigates the influence of polymer composition on gel properties, such as the elastic modulus.

Spectrophotometer

A spectrophotometer is used to measure the relative amount of released active compound versus time. At scheduled time intervals, the amount released was determined from the solution absorbance at a 500 nm wavelength.
At the end of the experiments, to determine the total mass initially encapsulated and the remaining amount, the capsules are destructured by sonication [START_REF] Leick | Deformation of liquidfilled calcium alginate capsules in a spinning drop apparatus[END_REF].

Table 4: Characterization techniques

Particle size
- Malvern particle sizing (Mastersizer): particles in sizes ranging from 0.05 up to 900 µm.
- Single particle optical sensing (SPOS): particle size distribution (Onwulata, 2005).

Particle charge
- Malvern Zetasizer: characterization of the electrical properties of biopolymer particles by ζ-potential (Jahanshahi & Babaei, 2008); [START_REF] Legrand | Polymeric nanocapsules as drug delivery systems. A review[END_REF].

Particle morphology
- Conventional optical microscopy: characterization of capsule structures; size of capsules (Zuidam & Nedovic, 2010b).
- Scanning electron microscopy (SEM): surface characteristics, such as composition, shape and size (Mohsen Jahanshahi & Babaei, 2008); (Montes, Gordillo, Pereyra, & Martínez de la Ossa, 2011).
- Transmission electron microscopy (TEM): structures of very thin samples; morphology and shell thickness of encapsulates following fixation, dehydration, and sectioning [START_REF] Chen | Chitosan/beta-lactoglobulin core-shell nanoparticles as nutraceutical carriers[END_REF].
- Atomic force microscopy (AFM): surface structure of particles in the nanometer range and in three dimensions [START_REF] Burey | Hydrocolloid gel particles: formation, characterization, and application[END_REF]; (Burgain et al., 2013).
- Confocal laser scanning microscopy (CLSM): provides a better spatial 3D image; provides additional information such as the three-dimensional location and quantification of the encapsulated phase; distribution of polymers and cross-linking ions [START_REF] Lamprecht | Characterization of microcapsules by confocal laser scanning microscopy: structure, capsule wall composition and encapsulation rate[END_REF].
- Focused beam reflectance measurement (FBRM): in situ/on-line characterization of non-spherical particles by measuring their chord lengths (Li et al., 2005a), [START_REF] Barrett | Characterizing the Metastable Zone Width and Solubility Curve Using Lasentec FBRM and PVM[END_REF].
- QICPIC: 2D and 3D views of the particles, from which several size and shape parameters (sphericity, convexity) are determined (Cellesi, Weber, Fussenegger, Hubbell, & Tirelli, 2004); [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF].

Particle composition, physical state and release
- X-ray photoelectron spectroscopy: chemical analysis of the particle surface composition; particle size and shape; presence of compounds in the resulting precipitates.
- Spectrophotometry: determination of the total mass initially encapsulated [START_REF] Leick | Deformation of liquidfilled calcium alginate capsules in a spinning drop apparatus[END_REF].

Particular interest of LAB encapsulation in Alginate

Pure alginate capsules

The common materials used in the encapsulation of probiotic bacteria involve polysaccharides originating from seaweed (κ-carrageenan, alginate), plants (starch, gum arabic), bacteria (gellan, xanthan), and animal proteins (milk, gelatin). Alginates are extensively used at laboratory and industry scale for encapsulation because they are cheap, readily available, biocompatible, and have low toxicity [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF].
There are various techniques to prepare alginate microcapsules, based on dry techniques such as freeze-drying [START_REF] Shah | Microencapsulation of probiotic bacteria and their survival in frozen fermented dairy desserts[END_REF][START_REF] Giulio | Use of alginate and cryo-protective sugars to improve the viability of lactic acid bacteria after freezing and freeze-drying[END_REF][START_REF] Capela | Effect of cryoprotectants, prebiotics and microencapsulation on survival of probiotic organisms in yoghurt and freeze-dried yoghurt[END_REF][START_REF] Ross | Microencapsulation of probiotic strains for swine feeding[END_REF], spray-drying (K. Y. [START_REF] Lee | Survival of Bifidobacterium longum immobilized in calcium alginate beads in simulated gastric juices and bile salt solution[END_REF], or electrospraying [START_REF] Laelorspoen | Microencapsulation of Lactobacillus acidophilus in zein-alginate core-shell microcapsules via electrospraying[END_REF]. Humid capsules are also largely used, based on the liquid form: emulsification and ionic gelation [START_REF] Hansen | Survival of Caalginate microencapsulated Bifidobacterium spp. in milk and simulated gastrointestinal conditions[END_REF][START_REF] Mandal | Effect of alginate concentrations on survival of microencapsulated Lactobacillus casei NCDC-298[END_REF][START_REF] Allan-Wojtas | Microstructural studies of probiotic bacteria-loaded alginate microcapsules using standard electron microscopy techniques and anhydrous fixation[END_REF], and extrusion [START_REF] Ivanova | Encapsulation of lactic acid bacteria in calcium alginate beads for bacteriocin production[END_REF][START_REF] Chandramouli | An improved method of microencapsulation and its evaluation to protect Lactobacillus spp.
in simulated gastric conditions[END_REF], [START_REF] Sathyabama | Co-encapsulation of probiotics with prebiotics on alginate matrix and its effect on viability in simulated gastric environment[END_REF], [START_REF] Corbo | Immobilization and microencapsulation of Lactobacillus plantarum: Performances and in vivo applications[END_REF], [START_REF] Muthukumarasamy | Survival of Escherichia coli O157: H7 in dry fermented sausages containing micro-encapsulated probiotic lactic acid bacteria[END_REF]. There are some disadvantages related to the alginate microbeads. For example, the microbeads are very porous, which is a drawback when the aim is to protect the cells from their environment [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF]. Moreover, alginate microbeads are sensitive to acidic environments [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF], which makes them poorly suited to stomach conditions. [START_REF] Sousa | Characterization of freezing effect upon stability of, probiotic loaded, calciumalginate microparticles[END_REF] showed that the alginate microparticles were not able to protect the encapsulated probiotic cells stored at -20 °C for 60 days, especially from acid and particularly from bile salts.
Nevertheless, the defects of alginate microparticles can be mitigated by mixing alginates with other polymer compounds, coating the capsules with another compound or applying structural modifications using different additives [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF].

Microparticles preparation

Microbeads (matrix) preparation

Alginate and pectin solutions (1 % (w/w)) were prepared with sterile physiological water (9 g.L⁻¹ sodium chloride, VWR Belgium) or with sterile M17 broth supplemented with 0.5 % D (+) glucose. Preliminary studies indicated a positive effect of the addition of 0.5 % glucose on L. lactis growth and nisin production (data not shown). The L. lactis culture was regenerated by transferring a loopful of the stock culture into 10 mL of M17 broth and incubating at 30 ºC overnight. A 10 µL aliquot from the overnight culture was again transferred into 10 mL of M17 broth and grown at 30 ºC to the exponential or stationary phase of growth (6 and 48 h respectively). L. lactis cells were collected by centrifugation (20 min, 4 °C, 5000 rpm) and diluted to obtain a target inoculum in the microbeads of 10⁵ CFU.mg⁻¹. Alginate/pectin hydrogel microspheres were made using the Encapsulator B-395 Pro (BÜCHI Labortechnik, Flawil, Switzerland). In this study five polymer ratios (A/P) were selected: 100/0; 75/25; 50/50; 25/75; 0/100. The encapsulation technology is based on the principle that a laminar flowing liquid jet breaks up into equally sized droplets under a superimposed nozzle vibration. The vibration frequency determined the quantity of droplets produced and was adjusted to 1200 Hz to generate 1200 droplets per second. The flow rate was 3 mL.min⁻¹. A 120 µm diameter nozzle was used for the preparation of the beads. The droplets fell into 250 mL of a sterile CaCl2 solution (100 mM) continuously stirred at 150 rpm to allow microbead formation.
The beads were maintained in the gelling bath for 15 minutes to complete the reticulation process and were then filtered and washed with buffer solution (9 g.L⁻¹ sodium chloride).

Microcapsules (core-membrane) preparation

SA, pure or with L. lactis, composed the membrane of the microcapsules. The SA solution (1.3 % (w/w)) was prepared with sterile physiological water (9 g.L⁻¹ sodium chloride).

Physico-chemical characterization of microparticles

Size

The mean size distribution of the capsules was measured using a laser light scattering particle size analyzer Mastersizer S (Malvern Instruments Ltd, UK) equipped with a He-Ne laser, with a beam of light of 360 nm. The system was able to determine particles in sizes ranging from 0.05 up to 900 µm. Measurements were performed in ten replicates for each system. Results were reported as the volume weighted mean globule size D (4,3) in µm:

D (4,3) = Σ ni di⁴ / Σ ni di³    (1)

where ni is the number of particles and di is the diameter of the particle (µm). The D (4,3) was chosen instead of D (3,2) since it is very sensitive to the presence of small amounts of large particles.

Morphology

Microparticles were observed under an optical microscope (Olympus AX70, Japan) equipped with a camera (Olympus DP70). DP Controller software (version 2.1.1) was used for taking pictures. The microparticle shape was also determined using a QICPIC™ analyzer (Sympatec GmbH, Clausthal-Zellerfeld, Germany). The analyzer was directly connected to the reactor and made measurements every 5 min during 60 min. The liquid with the capsules was pumped into the reactor, passed through the measuring cell, and images were captured and recorded. The analysis of the results provided 2D and 3D particle views, from which shape parameters were determined. The diameter of a circle of equal projection area (EQPC) was calculated. It identifies the diameter of a circle with the same area as the 2D image of the particle.
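As a quick numerical check, the volume-weighted mean diameter D (4,3) and the EQPC-based sphericity described above can be computed directly from per-particle measurements. This is a minimal sketch; the particle values below are made-up illustrations, not data from this work:

```python
import math

def d43(diameters):
    """Volume-weighted mean diameter: D(4,3) = sum(ni*di^4) / sum(ni*di^3).
    Here each entry of `diameters` is one detected particle (ni = 1)."""
    return sum(d ** 4 for d in diameters) / sum(d ** 3 for d in diameters)

def eqpc(area):
    """Diameter of the circle with the same projected area as the particle."""
    return 2.0 * math.sqrt(area / math.pi)

def sphericity(area, perimeter):
    """Ratio of the EQPC circle perimeter to the real particle perimeter (<= 1)."""
    return math.pi * eqpc(area) / perimeter

# Made-up diameters in micrometers: one large particle dominates D(4,3)
sizes = [100.0, 110.0, 120.0, 500.0]
print(round(d43(sizes), 1))  # -> 487.8

# Made-up shape: a square particle of side 100 um
area, perimeter = 100.0 * 100.0, 4 * 100.0
print(round(sphericity(area, perimeter), 3))  # -> 0.886
```

Note how D (4,3) (≈488 µm here) sits far above the number-mean diameter (≈208 µm), illustrating the sensitivity to a few large particles mentioned in the text.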
As particles of different shapes may have the same EQPC, other parameters were used to characterize the particles. The sphericity was defined as the ratio of the EQPC perimeter to the real particle perimeter. The convexity provides information about the roughness of the particle: a particle with smooth edges has a convexity value of 1, whereas a particle with irregular edges has a lower convexity [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF]. All tests were run in triplicate.

Mechanical stability

To investigate the mechanical stability of the alginate microparticles, individual capsules were compressed between two parallel plates. A rotational rheometer Malvern Kinexus Pro (Malvern Instruments, Orsay, France) with a plate-and-plate (20 mm) geometry was used. A force gap test was used to compress the microparticles in a droplet of water from 500 to 5 µm with a linear compression speed of 10 µm.s⁻¹. The gap and the imposed normal force were measured simultaneously at the upper plate. Three replicates were considered for each type of microcapsule.

FTIR spectroscopy

Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra of freeze-dried microparticles were acquired using a Tensor 27 mid-FTIR Bruker spectrometer (Bruker, Karlsruhe, Germany) equipped with an ATR accessory (128 scans, 4 cm⁻¹ resolution, wavenumber range 4000-550 cm⁻¹) and a DTGS detector. The spectral manipulations were performed using OPUS software (Bruker, Karlsruhe, Germany). All tests were run in triplicate.

Encapsulation of L. lactis

L. lactis survival and nisin activity

L. lactis, free or encapsulated in the different microcapsules, was placed in physiological water for 10 days at 30 °C. Periodically during the storage period, bacterial survival and nisin activity inside and outside the microparticles were studied.
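One common way to reduce force-gap compression data to a single number is to take the initial slope of normal force versus plate displacement as an apparent capsule stiffness. This is only an illustrative post-processing sketch with made-up data, not the analysis reported in this work:

```python
def apparent_stiffness(gaps_um, forces_n, fraction=0.2):
    """Least-squares slope (N/m) of normal force vs. displacement over the
    first `fraction` of the compression (small-deformation regime)."""
    # displacement = how far the upper plate moved from the initial gap
    disp_m = [(gaps_um[0] - g) * 1e-6 for g in gaps_um]
    n = max(2, int(len(disp_m) * fraction))
    x, y = disp_m[:n], forces_n[:n]
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx  # N/m

# Made-up data: gap decreasing from 500 to 400 um, force rising linearly
gaps = [500.0 - 10.0 * i for i in range(11)]
forces = [1e-4 * (500.0 - g) for g in gaps]  # 1e-4 N per um of compression
print(round(apparent_stiffness(gaps, forces), 1))  # -> 100.0 (N/m)
```

For real capsules the force-displacement curve is nonlinear, so restricting the fit to the small-deformation region (the `fraction` parameter) keeps the slope meaningful.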
To analyze the bacterial survival and nisin activity inside the microparticles, 1 g of capsules was placed in a 0.1 M citrate solution to dissolve the hydrogel microspheres by calcium chelation.

Bacterial survival

Serial dilutions were made from the dissolved microparticles and from the physiological water, and then poured onto M17 agar. Plates were incubated for 24 hours at 30 ºC before the colonies were counted. All tests were run six times.

Nisin activity

Micrococcus flavus DSM 1790, sensitive to nisin, was used to evaluate nisin activity. Two successive M. flavus cultures in TSBYE medium (TSB: Biomérieux, Marcy l'Étoile, France; YE: Biokar Diagnostics, Beauvais, France) were made from cryotubes stored at -80 °C. The optical density (OD) at 660 nm of the culture was measured, then a dilution in TSAYE medium (TSAYE; Bacteriological Agar Type A: Biokar Diagnostics, Beauvais, France; Tween 80: Merck, Hohenbrunn, Germany) was performed to obtain a final absorbance at 660 nm of 0.01. 12 mL of this medium were poured into plates, which were placed at 4 °C for 2 h to allow agar solidification. Then, wells were hollowed in the agar using a Durham tube and 25 µL of liquefied microcapsules or of physiological water containing L. lactis were deposited in the wells. In parallel, a negative control (M17) was performed. Plates were incubated overnight at 4 °C, and then for 24 h at 37 °C. The inhibition diameters were measured. Results were expressed in cm and converted into nisin mg.mL⁻¹ using a standard curve obtained from a commercial solution of nisin (Sigma-Aldrich, St Louis, USA). All tests were run in triplicate.

Antimicrobial activity of microparticles

Bacterial strain

A culture of Listeria monocytogenes CIP 82110 was regenerated by transferring a loopful of stock culture into 10 mL of TSB and incubating at 37 ºC overnight. A 10 µL aliquot from the overnight culture was again transferred into 10 mL of TSB and grown at 37 ºC to the end of the exponential phase of growth.
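The two quantification steps above come down to simple arithmetic: a viable count from a colony count and the serial-dilution factor, and a nisin concentration read off a calibration line. The calibration coefficients `a` and `b` below are hypothetical placeholders, not the actual standard curve used in this work:

```python
def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=1.0):
    """Viable count from a plate made at a 10^-dilution_exponent dilution."""
    return colonies * 10 ** dilution_exponent / plated_volume_ml

def nisin_from_diameter(diameter_cm, a=0.5, b=-0.8):
    """Convert an inhibition-zone diameter (cm) into nisin (mg/mL) assuming a
    linear standard curve: diameter = a * log10(concentration) + b.
    The coefficients a and b are hypothetical placeholders only."""
    return 10 ** ((diameter_cm - b) / a)

# Made-up count: 47 colonies on the 10^-5 dilution plate, 0.1 mL plated
print(f"{cfu_per_ml(47, 5, plated_volume_ml=0.1):.1e} CFU/mL")  # -> 4.7e+07
```

In practice the standard curve would be fitted by linear regression on the diameters measured for serial dilutions of the commercial nisin solution.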
Subsequently, this appropriately diluted culture was used to inoculate the synthetic media containing free or encapsulated L. lactis, in order to obtain a target inoculum of 10² CFU.mL⁻¹.

Antimicrobial activity

A synthetic medium, TSBYE broth, was inoculated with L. monocytogenes and with free or encapsulated L. lactis in the exponential state, in alginate-xanthan capsules enriched with M17 supplemented with 0.5 % glucose. The medium was stored at 30 °C for 7 days. L. monocytogenes and L. lactis counts were examined both immediately after the inoculation and periodically during the storage period. Serial dilutions were made and then poured onto PALCAM agar (Biokar diagnostics, Beauvais, France) and M17 agar plates. Plates were incubated for 48 hours at 37 °C (PALCAM agar) or 24 hours at 30 °C (M17 agar) before colonies were counted. All tests were run in triplicate.

Preparation of the bioactive films

The film-forming aqueous dispersions (FFD) contained 4 % (w/w) of HPMC or corn starch and glycerol as plasticizer. The hydrocolloid:glycerol mass ratio was 1:0.25 in every case. Polymers were dissolved in distilled water (pH 6.5) under continuous stirring (400 rpm) at 25 °C. Lactococcus lactis subsp. lactis ATCC 11454 was used for the preparation of the bioactive films. The selection of the strain was based on its antimicrobial activity and its ability to produce nisin, a bacteriocin. The microbial culture was regenerated according to the methodology described above. Free or encapsulated lactic acid bacteria were incorporated by adding the bacterial cell preparation into the FFD. The ratio was fixed in order to have a final concentration of 3 log CFU.cm⁻² in the dry film. FFD were then placed under magnetic stirring for 5 minutes. A casting method was used to obtain the polysaccharide films without lactic acid bacteria and the bioactive films.
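Two routine conversions used throughout these methods, turning plate counts from serial dilutions into CFU and converting inhibition-zone diameters into nisin concentration via a linear standard curve, can be sketched in Python. The calibration points below are hypothetical; the real curve was built from the commercial nisin solution:

```python
import math

def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=1.0):
    # colonies counted on a plate poured with `plated_volume_ml`
    # of the 10^-dilution_exponent serial dilution
    return colonies * 10 ** dilution_exponent / plated_volume_ml

def log_cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=1.0):
    # survival data are usually reported as log CFU.mL-1
    return math.log10(cfu_per_ml(colonies, dilution_exponent, plated_volume_ml))

def linear_fit(xs, ys):
    # ordinary least-squares fit y = slope * x + intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical calibration: inhibition diameter (cm) vs nisin concentration (mg.mL-1)
slope, intercept = linear_fit([0.5, 1.0, 1.5, 2.0], [0.25, 0.50, 0.75, 1.00])

def nisin_concentration(diameter_cm):
    # convert a measured inhibition-zone diameter with the standard curve
    return slope * diameter_cm + intercept

print(cfu_per_ml(150, 5))  # 15000000.0
```

For example, 150 colonies on the 10⁻⁵ plate correspond to 1.5 × 10⁷ CFU.mL⁻¹, i.e. about 7.2 log CFU.mL⁻¹.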
FFD were poured onto framed and levelled PET Petri dishes (85 or 140 mm diameter) and were dried at 25 °C and 40 % relative humidity for approximately 48 hours. Film thickness was controlled by pouring the amount of FFD that would provide a surface density of solids in the dry films of 56 g.m⁻² in all cases. Dry films were peeled off the casting surface and preconditioned in desiccators at 5 °C and 75 % relative humidity (RH) prior to testing. These values of temperature and RH were chosen to simulate the storage conditions of refrigerated coated products.

Films characterization

Moisture content and thickness

After equilibration, films were dried in triplicate at 60 °C for 24 h in a natural convection oven and for a further 24 h in a vacuum oven in order to determine their moisture content. Measurements of film thickness were carried out using an electronic digital micrometer (0-25 mm, 1 µm).

Water vapour permeability

Water vapour permeability (WVP) was measured on dry film discs, which were equilibrated at 75 % RH and 5 °C, according to the gravimetric method described in the AFNOR NF H00-030 standard (1974). The dry film was sealed in a glass permeation cell containing silica gel, a desiccant. The glass permeation cells were 5.8 cm (i.d.) × 7.8 cm (o.d.) × 3.6 cm deep with an exposed area of 26.42 cm². The permeation cells were placed in a chamber at controlled temperature (5 °C) and RH (75 %) via ventilation. The water vapour transport was determined from the weight gain of the cell. After 30 min, steady-state conditions were reached, and weighings were made. To calculate the WVTR, the slopes of weight gain as a function of time in the steady-state period were determined by linear regression. For each type of film, WVP measurements were replicated three times and WVP was calculated according to McHugh et al. (1993).
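The WVP computation described above (steady-state slope of weight gain, normalized by exposed area, film thickness and vapour-pressure gradient) can be sketched as follows; the regression step matches the gravimetric method, while the example readings are hypothetical:

```python
def steady_state_slope(times_h, masses_g):
    # least-squares slope of weight gain vs time in the steady-state period (g.h-1)
    n = len(times_h)
    mt, mm = sum(times_h) / n, sum(masses_g) / n
    return (sum((t - mt) * (m - mm) for t, m in zip(times_h, masses_g))
            / sum((t - mt) ** 2 for t in times_h))

def wvtr(times_h, masses_g, area_m2):
    # water vapour transmission rate (g.m-2.h-1)
    return steady_state_slope(times_h, masses_g) / area_m2

def wvp(wvtr_value, thickness_m, delta_p_pa):
    # permeability = transmission rate * film thickness / partial-pressure gradient
    # (g.m-1.h-1.Pa-1)
    return wvtr_value * thickness_m / delta_p_pa

# hypothetical steady-state readings over the 26.42 cm2 exposed area
rate = wvtr([0.0, 1.0, 2.0, 3.0], [0.00, 0.05, 0.10, 0.15], 26.42e-4)
```

The same rate-times-thickness-over-pressure-gradient form is reused for the oxygen permeability in the next subsection.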
Oxygen permeability

The oxygen permeability (OP) of the films was measured in triplicate by using an oxygen permeation measurement system (Systech Illinois 8100 Oxygen Permeation Analyser, France) at 20 °C and 75 % RH (ASTM, 2005). A sample of the film was placed in a test cell and pneumatically clamped in place. Films were exposed to a pure nitrogen flow on one side and a pure oxygen flow on the other side. An oxygen sensor read the permeation through the barrier material, and the rate of permeation, or oxygen transmission rate, was calculated taking into account the amount of oxygen and the area of the sample. Oxygen permeability was calculated by dividing the oxygen transmission rate by the difference in oxygen partial pressure between the two sides of the film, and multiplying by the average film thickness.

Mechanical properties

A Lloyd Instruments universal testing machine (AMETEK, LRX, U.K.) was used to determine the tensile strength (TS), elastic modulus (EM), and elongation (E) of the films, according to ASTM standard method D882 (2001). EM, TS, and E were determined from the stress-Hencky strain curves, estimated from force-distance data obtained for the different films (2.5 cm wide and 10 cm long). At least six replicates were obtained for each formulation. Equilibrated film specimens were mounted in the film-extending grips of the testing machine and stretched at a deformation rate of 50 mm/min until breaking. The relative humidity of the environment was held constant at 53 % during the tests, which were performed at 25 °C.

Optical properties

The transparency of the films was determined through the surface reflectance spectra in a spectrocolorimeter CM-5 (Konica Minolta Co., Tokyo, Japan). Measurements were taken from three samples of each formulation by using both a white and a black background. The transparency was determined by applying the Kubelka-Munk theory for multiple scattering to the reflection spectra.
As each light flux passes through the layer, it is affected by the absorption coefficient (K) and the scattering coefficient (S). Transparency was calculated, as indicated by Hutchings (1999), from the reflectance of the sample layer on a white background of known reflectance and on an ideal black background, through the internal transmittance (Ti). Colour coordinates of the films, L*, C*ab (Equation 1) and h*ab (Equation 2) from the CIELAB colour space, were determined using the D65 illuminant and the 10° observer, and taking into account R∞ (Equation 3), which corresponds to the reflectance of an infinitely thick layer of the material.

C*ab = √((a*)² + (b*)²)    (Equation 1)

h*ab = arctg(b*/a*)    (Equation 2)

R∞ = a - b    (Equation 3)

Finally, the whiteness index (WI) was calculated by applying Equation 4:

WI = 100 - √((100 - L*)² + (a*)² + (b*)²)    (Equation 4)

FTIR analysis

ATR-FTIR spectra of freeze-dried alginate-pectin microbeads without bacteria and of preconditioned polysaccharide films were recorded with a Tensor 27 mid-FTIR Bruker spectrometer (Bruker, Karlsruhe, Germany) equipped with an ATR accessory. 128 scans were used for both reference and samples between 4000 and 400 cm⁻¹ at 4 cm⁻¹ resolution. Spectral manipulations were then achieved using OPUS software (Bruker, Karlsruhe, Germany). Raw absorbance spectra were smoothed using a 13-point smoothing function. After elastic baseline correction using 200 points, spectra were centred and normalized. All tests were run at least in triplicate.

Antimicrobial activity of the films against Listeria monocytogenes

Bacterial strain

A stock culture of Listeria monocytogenes CIP 82110 was regenerated by transferring a loopful into 10 mL of TSB and incubating at 37 °C overnight. A 10 µL aliquot from the overnight culture was again transferred into 10 mL of TSB and grown at 37 °C to the end of the exponential phase of growth.
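The optical descriptors of Equations 1-4, together with the Kubelka-Munk internal transmittance, can be made concrete with a short sketch. The auxiliary parameters a and b follow the standard Kubelka-Munk relations; reflectances are on a 0-1 scale and the sample values are hypothetical:

```python
import math

def internal_transmittance(r_black, r_white, r_bg):
    # Kubelka-Munk: r_black = reflectance on an ideal black background,
    # r_white = reflectance on a white background of reflectance r_bg
    a = 0.5 * (r_white + (r_black - r_white + r_bg) / (r_black * r_bg))
    b = math.sqrt(a * a - 1.0)
    return math.sqrt((a - r_black) ** 2 - b * b)  # Ti

def chroma(a_star, b_star):
    # Equation 1
    return math.hypot(a_star, b_star)

def hue_angle(a_star, b_star):
    # Equation 2, in degrees
    return math.degrees(math.atan2(b_star, a_star))

def whiteness_index(L, a_star, b_star):
    # Equation 4
    return 100.0 - math.sqrt((100.0 - L) ** 2 + a_star ** 2 + b_star ** 2)

# hypothetical film reflectances on black and white backgrounds
ti = internal_transmittance(0.30, 0.80, 0.90)
```

A transparent film gives Ti close to 1, and a perfectly white sample (L* = 100, a* = b* = 0) gives WI = 100.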
Subsequently, this appropriately diluted culture was used to inoculate the agar plates in order to obtain a target inoculum of 10² CFU.cm⁻².

Antimicrobial effectiveness of films

The methodology followed for the determination of the antimicrobial effectiveness of the films was adapted from Kristo et al. (2008). Aliquots of Tryptone Soy Agar (TSA, Biokar Diagnostics, Beauvais, France) (20 g) were poured into Petri dishes. After the culture medium solidified, a properly diluted overnight culture of L. monocytogenes was inoculated on the surface, and the different films (containing L. lactis or not), of the same diameter as the Petri dishes, were placed onto the inoculated surfaces. Plates were then covered with parafilm to avoid dehydration and stored at 5 °C for 12 days. L. monocytogenes and L. lactis counts on the TSA plates were examined both immediately after the inoculation and periodically during the storage period. The agar was removed aseptically from the Petri dishes and placed in a sterile plastic bag with 100 mL of tryptone soy water (Biokar Diagnostics, Beauvais, France). The bag was homogenized for 2 minutes in a Stomacher blender 400 (Interscience, Saint-Nom-La-Breteche, France). Serial dilutions were made and then poured onto M17 agar and PALCAM agar. Plates were incubated for 48 hours at 37 °C before colonies were counted. All tests were run in duplicate.

Statistical analysis

A statistical analysis of the data was performed through a one-way analysis of variance.

Microspheres were prepared either with physiological water (9 % sodium chloride) or with sterile M17 broth supplemented with 0.5 % D-(+)-glucose. 10⁵ CFU.mg⁻¹ of L. lactis were inoculated into the microspheres. The physico-chemical characterization of the microspheres consisted in monitoring the diameter, sphericity and convexity of the beads from 0 to 7 days at 30 °C.
All microspheres were produced under the same conditions, but the bead size was shown to depend on the viscosity of the injected fluid. After 7 days, the A/P beads 100/0, 75/25 and 50/50 were observed to be more stable than the other beads. The survival and growth of L. lactis in the microspheres over 7 days at 30 °C were measured. L. lactis survival was evaluated according to three factors: the physiological state of L. lactis at the moment of encapsulation, the internal composition of the microspheres (physiological water or M17 enriched with 0.5 % glucose), and the A/P ratio. The L. lactis population increased more rapidly with the glucose-enriched M17 medium than with the physiological water used for polymer dissolution. The polymer ratio and the physiological state of the bacteria did not appear to have a significant influence. It was observed that alginate/pectin beads provided a better protection of L. lactis against environmental factors than those made with pure alginate or pure pectin. The A/P ratio 75/25 gave the best results for maintaining a stable population within the beads, owing to the more stable mechanical properties of the microspheres. Nisin production and activity were determined for the different A/P ratios in physiological water at 30 °C for 7 days. The results show that several factors act significantly, such as the physiological state and the composition of the matrix in polymers and nutrients. The best results were obtained when L. lactis was encapsulated in the exponential phase with A/P (75/25) in the presence of glucose-enriched M17. This activity was measured via the inhibition of L. monocytogenes growth throughout the storage period. The reduction of the L. monocytogenes population was higher with encapsulated bacteria than with free cells.
In conclusion, the best overall results were obtained with microspheres (ratio 75/25) enriched with M17 supplemented with 0.5 % glucose and containing L. lactis in the exponential phase.

Introduction

The interest in the application of lactic acid bacteria (LAB) for the prevention of food spoilage and the growth of foodborne pathogens has increased in the last twenty years (Scannell et al., 2000). Many studies have shown that LAB can reduce the presence of Listeria monocytogenes in meat and seafood (Budde et al., 2003; Jacobsen et al., 2003; Tahiri et al., 2009) or inhibit other foodborne pathogens such as Escherichia coli, Pseudomonas aeruginosa, Salmonella Typhimurium, Salmonella Enteritidis and Staphylococcus aureus (Trias et al., 2008). Several mechanisms, such as lactic acid production, competition for nutrients or production of antimicrobial compounds, explain the inhibition of spoilage or pathogenic microorganisms by LAB. Among LAB, L. lactis subsp. lactis is particularly used for food preservation because of its ability to produce a bacteriocin, nisin, to control spoilage and pathogenic bacteria. However, possible interactions between food components and LAB decrease their effectiveness. The immobilization of LAB by encapsulation using natural polymers such as proteins or polysaccharides appears as an interesting strategy to protect the strain and modulate nisin release. Encapsulation of bacteria in calcium alginate beads is one of the most studied systems for probiotic immobilization and protection (Léonard et al., 2014; Madziva et al., 2005; Polk et al., 1994; Smrdel et al., 2008). Some studies focus on the interest of designing composite systems by associating several biopolymers, such as pectin and alginate, to control the release of active components (Jaya et al., 2008). These authors reported that an increase in pectin caused a diminution of the gel barrier and increased the percentage of drug release.
Moreover, the morphology of alginate-pectin microcapsules showed a porous micro-structure, which also facilitates the release of active components.

Encapsulation of Lactococcus lactis subsp. lactis on alginate / pectin composite microbeads

Sodium alginate is a water-soluble anionic polysaccharide, mainly found in the cell walls of brown algae, and can be isolated from Pseudomonas bacteria (Pawar and Edgar, 2012). This natural polymer possesses several attractive properties such as good biocompatibility, wide availability, low cost, and a simple gelling procedure under mild conditions. Alginate composition is variable and consists of homopolymeric and alternating blocks of 1,4-linked β-D-mannuronic acid (M) and α-L-guluronic acid (G) residues. The physical properties of alginate depend on the composition, sequence and molecular weight. Gel formation is driven by interactions between G-blocks, which associate to form firmly held junctions with divalent cations. In addition to G-blocks, MG blocks also participate by forming weaker junctions. Pectin is one of the main structural water-soluble polysaccharides of plant cell walls. It is commonly used in the food industry as a gelling and stabilizing agent. Basically, pectins are polymers of (1-4)-linked, partially methyl-esterified α-D-galacturonic acid (Synytsya et al., 2003). Pectin gelation is driven by the interaction between the polygalacturonate chains and divalent cations and is described by the egg-box model, where the divalent cations are thought to be held in the interstices of adjacent helical polysaccharide chains (Braccini and Pérez, 2001). Therefore, the objectives of the present study were (a) to develop novel alginate-pectin hydrogel microspheres for the microencapsulation of L. lactis subsp.
lactis, a lactic acid bacterium, by dripping using the vibrating technology, (b) to analyze the physicochemical properties of the composite microbeads, (c) to evaluate the effect of the polymer ratio and of the physiological state of the encapsulated bacteria (exponential or stationary phase) on microbial survival, nisin release and antilisterial activity, and (d) to determine whether a nutritional enrichment of the hydrogel matrix by addition of synthetic medium (M17) supplemented with 0.5 % glucose can improve the results.

Results and Discussion

Physico-chemical characterization of microbeads

Shape and size

Microscopic images of A/P composite microbeads are presented in Fig. 22. Microbeads were fairly regular and spherical. Bead diameter, sphericity and convexity at day 0 and after 7 days at 30 °C are reported in Table 7. In general, the size and shape of microspheres depend on the intrinsic properties of the injected polymer solutions such as viscosity, density and surface tension (Chan et al., 2009). Initially, only small differences were observed in terms of microbead size and sphericity, but convexity increased clearly with the pectin content in the matrix. After 7 days, significant differences were observed in size and convexity. Convexity provides information about the roughness of the particle. Convexity values are between 0 and 1. A particle with smooth edges has a convexity value of one whereas a particle with irregular edges has a lower convexity (Burgain et al., 2011). After 7 days at 30 °C, sphericity also decreased with pectin content. A sphericity value of "1" corresponds to a perfect sphere, and a particle with a sphericity close to "0" is highly irregular. Therefore, sphericity is a good way to describe particle shape deviation. Alginate contributed largely to bead sphericity, as observed by Sandoval-Castilla et al. (2010). As observed in the present work, Díaz-Rojas et al. (2004) reported that the use of both polymers pectin/alginate in composite matrix beads induces a loss of sphericity as the proportion of pectin in the matrix increases, as a consequence of the weaker mechanical stability of the calcium-pectinate network compared to that of calcium-alginate. Particularly significant changes occurred for pure pectin microbeads. A swelling phenomenon was responsible for the changes in size, as no aggregation occurred. The 100 % pectin microbeads were distorted and became more irregular. From the results in Table 7, it was concluded that the beads A/P 100/0, 75/25 and 50/50 were more stable than the other beads with higher pectin content.

Mechanical stability

The functionality of microcapsules is closely related to their chemical and mechanical stability. In fact, microspheres are sensitive to deformations that may lead to their rupture or to an undesirable early release of their contents. In order to evaluate the mechanical stability of the prepared systems, microbeads were compressed between two parallel plates. As shown in Fig. 23, the normal force (N) versus the gap distance (mm) is plotted as double-logarithmic compression curves (Degen et al., 2011). As expected, for all the systems, the normal force increases with a decreasing gap, because the bead becomes more and more compressed. However, some differences in the force evolution were observed. In fact, the initial force at 0.2 mm differed between the systems, which could be related to the bead size. Some curves showed a region of force diminution at small gaps (<0.005 mm), which could be related to potential bead rupture. Nevertheless, due to the low speed used in these experiments, microbead break-up was not clearly observed.
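The force/gap analysis above (force rising as the gap closes, then a force drop flagging possible rupture) can be mimicked on raw rheometer data; a minimal sketch with hypothetical readings:

```python
def rupture_gap(gaps_mm, forces_n):
    """Scan the compression curve from large to small gap and return the first
    gap at which the normal force drops, a signature of possible bead rupture;
    return None if the force increases monotonically during compression."""
    curve = sorted(zip(gaps_mm, forces_n), reverse=True)
    for (g1, f1), (g2, f2) in zip(curve, curve[1:]):
        if f2 < f1:
            return g2
    return None

# hypothetical curve: force rises as the gap closes, then drops below 0.005 mm
gaps = [0.200, 0.100, 0.050, 0.010, 0.004, 0.002]
forces = [0.01, 0.05, 0.20, 0.50, 2.00, 1.60]
print(rupture_gap(gaps, forces))  # 0.002
```

Applied to the measured curves, such a scan would localize the bursting zone discussed below without changing the interpretation given in the text.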
For comparison purposes, at an intermediate gap (0.01 mm) the beads 25/75 and 75/25 showed a better stability than the other systems, where a force of 0.5 N is needed to maintain the beads at the corresponding gap in Fig. 23. The differences in mechanical stability could be related to a potential synergy between alginate and pectin (Walkenström et al., 2003). This synergism is attributed to a heterogeneous association of the G blocks of alginate and the methyl-ester regions of pectin (Oakenfull et al., 1990). On the other hand, molecular modeling showed that the G blocks and methyl-esterified polygalacturonic acid ribbons could pack together in parallel twofold crystalline arrays (Thom et al., 1982). Practically all the force/gap curves showed a region where the force decreased at small gaps (<0.005 mm), which could be related to potential bead rupture in the compressed state. For alginate beads, the start of hydrogel break-down is associated with the rupture of one single polymer chain (Zhang et al., 2007). It is interesting to note that the bursting zone was less obvious for composite alginate/pectin beads, which are stabilized by alginate G and MG junctions and pectin polygalacturonic junctions. The formation of an IPN (Inter-Penetrating Network) could lead to stiffer microbeads.

FTIR spectroscopy

Fig. 24a shows FTIR spectra of freeze-dried alginate, pectin and alginate/pectin mixtures, and Fig. 24b spectra of the corresponding freeze-dried microbeads. As shown in Fig. 24a, the alginate spectrum (100/0) displayed two vibrations in the infrared spectrum due to the carboxylate group: an antisymmetric stretch at 1600 cm⁻¹ and a symmetric stretch at 1412 cm⁻¹ (Sartori et al., 1997). The pectin spectrum (0/100) also displayed a peak between 1720 and 1760 cm⁻¹. The region between 1000 and 1140 cm⁻¹ corresponds to the stretching vibrations of (C-OH) side groups and the (C-O-C) glycosidic bond vibration (Kamnev et al., 1998).
The absorption bands between 1100 and 1200 cm⁻¹ arise from ether (R-O-R) and cyclic C-C bonds in the ring structure of pectin molecules. The region between 1590 and 1600 cm⁻¹ is due to aromatic ring stretching. The region between 1600 and 1800 cm⁻¹ is of special interest and is usually used to compare pectin samples (Synytsya et al., 2003). This spectral region reveals the existence of two bands at 1620-1650 and 1720-1760 cm⁻¹, from free and esterified carboxyl groups, respectively. The full assignment of the infrared bands of alginate and pectin is presented in Table 8. As expected, the mixture solutions of alginate/pectin (75/25, 50/50 and 25/75) displayed the typical bands of the two corresponding biopolymers without significant shifts. However, some differences in band intensities were noticed. In particular, the 75/25 mixture showed a significant increase of the bands corresponding to the carboxylate groups of pectin and alginate (1600 and 1400 cm⁻¹) and to the glycosidic pectin vibrations (1000-1300 cm⁻¹). The IR spectra of calcium alginate beads (100/0) (Fig. 24b) displayed the same characteristic peaks as sodium alginate, with greater intensities of the bands corresponding to the carboxylate groups (1600 and 1412 cm⁻¹) (Pereira et al., 2003). The infrared spectrum of calcium pectinate microbeads (0/100) showed an intensity increase of the band corresponding to the carboxylate groups (1620 cm⁻¹) and the appearance of a narrow peak at 1400 cm⁻¹ due to the interactions between the galacturonic residues and divalent cations (Braccini and Pérez, 2001). The alginate/pectin blend beads showed the same typical bands of alginate and pectin, with some differences as a function of the alginate/pectin ratio, especially in the wavenumber range between 1000 and 1200 cm⁻¹. In fact, FTIR analysis of the 75/25 beads showed the same two bands (1020 and 1060 cm⁻¹) as alginate 100/0.
The increase of the pectin amount resulted in the appearance of a shoulder peak at 1145 cm⁻¹ for (50/50) and then in a well-defined peak for the 25/75 system. Moreover, the pH values of the studied mixtures 100/0, 0/100, 75/25, 50/50 and 25/75 were 7.04, 3.30, 4.34, 3.94 and 3.60, respectively. Alginate and pectin can form synergistic mixed gels at low pH values (but >4) in the absence of calcium, with relatively slow gelation kinetics (>200 min) (Walkenström et al., 2003). In our case the alginate/pectin mixture was not allowed to stand and the formation of a composite gel was not observed; however, we could suppose the possible formation of cooperative bonds between alginate and pectin in the mixtures before the encapsulation step, which could explain the increase of the intensity of the characteristic bands and therefore the better physico-chemical properties. Otherwise, the freeze-drying step of the hydrogel microspheres before the FTIR study could also initiate or reinforce the potential interactions between the two polymers.

L. lactis survival

The suitability of the different matrices for L. lactis survival and growth inside and outside the hydrogel microspheres was studied in physiological water for 7 days at 30 °C (Figs. 25 and 26). Bacterial viability changed significantly with three factors: the A/P ratio, the physiological state of L. lactis during bead formation, and the internal composition of the microsphere (physiological water or M17 enriched with 0.5 % glucose). Non-encapsulated L. lactis was used as control (Fig. 27). The bacterial population decreased significantly over the storage period at 30 °C, independently of the physiological state of the strain at the beginning of the assay. The addition of nutrients within the microbeads led to significant differences in terms of L. lactis counts inside and outside the beads.
As expected, the population decreased more rapidly with physiological water than with glucose-enriched M17 as the medium used for polymer dissolution, independently of the polymer ratio and of the L. lactis physiological state. From the 5th day of storage, the bacterial population decreased dramatically inside the microbeads; a reduction of approximately 50 % was observed when the strain was encapsulated in the exponential phase. This decrease was less marked with the strain in the stationary phase. Concerning the bacterial physiological state at the moment of encapsulation (stationary or exponential phase), L. lactis counts inside the microbeads after 7 days of storage were higher when LAB were encapsulated in the exponential phase than in the stationary phase. These results highlight the importance of two parameters, the physiological state and the presence of nutrients. These factors certainly impact cellular stress and therefore bacterial survival. Finally, the composition of the matrix (A/P ratio) significantly modified the bacterial population inside and outside the hydrogel microspheres. The use of pectin led to significant variations, especially when physiological water was used for polymer dissolution. Composite A/P hydrogel microspheres tend to present interesting properties compared to pure alginate or pectin beads. Sandoval-Castilla et al. (2010) observed that calcium-alginate-pectin bead matrices provided a better protection against adverse environmental factors to Lactobacillus casei than those made with pure alginate or pectin. The lack of alginate in beads reduced the protective effect, suggesting that both polymers, alginate and pectin, form a structured trapping matrix that is more resistant, especially to acids. In this study, the 75/25 microbeads presented the best results in maintaining microbial counts within the beads. This could be related to the mechanical properties (Fig. 23) or to a potential reduction of the pore size of the composite system, which would increase the retention of the encapsulated bacteria. The best mechanical properties were found for A/P 75/25; these beads were more stable. With this matrix composition, LAB are certainly better retained in the microbeads. Outside the hydrogel microspheres, L. lactis counts changed significantly with the polymer ratio: the bacterial population was higher for microbeads 0/100 and 25/75. Alginate has more linear and organized chains than pectin, so reticulation links with calcium ions were more efficient. This higher crosslinking for alginate increased the cohesive forces between chains (Silva et al., 2009) and hindered bacteria release. In addition, a greater swelling can occur with pectin, also increasing the release of LAB. Previous studies reported a greater swelling of pectin films, compared to alginate films, as a lower crosslinking extent allowed more water absorption (Silva et al., 2009; Sriamornsak and Kennedy, 2008).

Nisin activity

Nisin activity inside and outside the microbeads was determined for the different A/P ratios in physiological water at 30 °C for 7 days (Tables 9 and 10). As observed for bacterial survival, the physiological state of the strain as well as the matrix composition (polymer ratios and addition of nutrients) are key factors. When bacteria were encapsulated in the stationary state, a concentration of active nisin was detected inside the microbeads at day 0, because nisin was produced before the encapsulation step. However, when L. lactis was encapsulated during the exponential phase of the growth curve, no nisin was detected initially in the beads, but after 1 day active nisin was present. Nisin production occurs during bacterial growth, and the amount of peptide adsorbed on the cell surface is higher during the stationary phase than during the exponential phase of the bacterial growth curve.
Therefore, the physiological state of the strain at the time of encapsulation impacts the initial concentration of active nisin in the microbeads. During the storage period, no nisin was detected after 3 days of storage inside the microspheres prepared with physiological water, whatever their polymer composition. The enrichment of the microbead internal medium with glucose-enriched M17 improved these data. After a storage period of 7 days, a significant amount of antimicrobial peptide was detected, independently of the physiological state of the bacteria and of the polymer ratio used. Previous studies similarly reported changes in bacteriocin concentration with the composition of the nutrient broth (Parente and Ricciardi, 1999). The A/P ratio did not significantly affect the concentration of active nisin inside the microspheres. However, the microbead release properties were modified by the A/P ratio. The mixed matrix (alginate-pectin) revealed a better suitability for nisin production than pure alginate or pectin microbeads, due to intermediate properties of diffusion and stability of the bead wall. After a storage period of 3 days at 30 °C, two ratios seemed interesting independently of the other factors (physiological state of the strain and addition of nutrients): 50/50 and 25/75. In conclusion, the physiological state of the bacteria during the encapsulation process and the composition of the microbeads (A/P ratio, enrichment of the internal medium with nutrients) were determining factors for both bacterial viability and bacteriocin activity, which could be related to nutritional or cellular stress-producing effects. Of the several matrices tested, A/P (75/25) with glucose-enriched M17 gave the best results when L. lactis was encapsulated in the exponential state.
[Tables 9 and 10: active nisin concentration (mg.mL⁻¹) inside and outside the microbeads for the different A/P ratios (100/0, 75/25, 50/50, 25/75, 0/100), with physiological water or M17 enriched with 0.5 % glucose as internal medium. a,b,c Different letters in the same column indicate significant differences among samples (p < 0.05); x,y,z different letters in the same row indicate significant differences among times for the same sample (p < 0.05).]

Antimicrobial activity

As commented above, the best system to protect L. lactis and permit nisin release is the use of composite microbeads (A/P 75/25; internal medium: glucose-enriched M17) with LAB in the exponential state. The possible antilisterial effect of this system at 30 °C was determined in TSB medium and is shown in Figure 28a. Non-encapsulated L. lactis was used as control. The Listeria monocytogenes population increased from 2.8 to 7.9 log CFU.mL⁻¹ by the end of the storage period. As expected, in the presence of L. lactis, free or encapsulated, a complete inhibition of L. monocytogenes growth was observed during the whole storage period. The mechanisms underlying these antimicrobial effects have not been studied, but they may be a combination of several factors such as the production of organic acids, hydrogen peroxide, enzymes, lytic agents and other antimicrobial peptides, or bacteriocins (Alzamora et al., 2000). The antimicrobial properties of L. lactis were not limited by the encapsulation system developed in this study.
Moreover, from the 5th day of storage at 30 °C, the reduction of L. monocytogenes counts was greater with microbeads. These data can certainly be explained by a difference in LAB viability. As shown in Fig. 28b, the L. lactis population grew immediately after the incorporation of the strain into the TSB medium. No differences between non-encapsulated bacteria and microbeads were observed until the 3rd day of storage. From the 5th day, L. lactis counts decreased to reach 8.8 and 6.9 log CFU.mL-1 for the encapsulated and free strain, respectively. The depletion of elementary nutrients in the synthetic medium explains these results.

Figure 27: Survival of non-encapsulated Lactococcus lactis during a 7-day storage period in physiological water at 30 °C (stationary state in solid line, exponential state in dashed line). Mean values and standard deviations.
The best results were obtained with composite microbeads (75/25) enriched with M17 supplemented with 0.5 % glucose.

- Budde, B. B., Hornbaek, T., Jacobsen, T., Barkholt, V., & Koch, A. G. (2003). Leuconostoc carnosum 4010 has the potential for use as a protective culture for vacuum-packed meats: culture isolation, bacteriocin identification, and meat application experiments. International Journal of Food Microbiology, 83(2), 171-184.
- Burgain, J., Gaiani, C., Linder, M., & Scher, J. (2011). Encapsulation of probiotic living cells: From laboratory scale to industrial applications. Journal of Food Engineering, 104(4), 467-483.
- Cellesi, F., Weber, W., Fussenegger, M., Hubbell, J. A., & Tirelli, N. (2004). Towards a fully synthetic substitute of alginate: Optimization of a thermal gelation/chemical crosslinking scheme ("tandem" gelation) for the production of beads and liquid-core capsules. Biotechnology and Bioengineering, 88(6), 740-749.
- Degen, P., Leick, S., Siedenbiedel, F., & Rehage, H. (2011). Magnetic switchable alginate beads. Colloid and Polymer Science, 290(2), 97-106.
- Díaz-Rojas, E. I., Pacheco-Aguilar, R., Lizardi, J., Argüelles-Monal, W., Valdez, M. A., Rinaudo, M., & Goycoolea, F. M. (2004). Linseed pectin: gelling properties and performance as an encapsulation matrix for shark liver oil. Food Hydrocolloids, 18
- Sandoval-Castilla, O., Lobato-Calleros, C., García-Galindo, H. S., Alvarez-Ramírez, J.

Introduction

Application of lactic acid bacteria (LAB) as a biopreservation strategy has attracted increasing interest in the last decades. LAB are considered GRAS (Generally Recognized As Safe) and can inhibit the growth of different bacteria, yeasts and fungi through the production of organic acids, hydrogen peroxide, enzymes, defective phages, lytic agents and antimicrobial peptides, or bacteriocins (Alzamora et al.).
Among pathogens present in foodstuffs, Listeria monocytogenes remains one of the major problems, particularly in dairy products. Previous studies have already proved the antilisterial efficacy of LAB in model systems (Antwi et al., 2008), in dairy products (Liu et al., 2008), in sea-food products (Concha-Meyer et al., 2011), as well as in meat products (Maragkoudakis et al., 2009). To guarantee food safety, the incorporation of LAB into food packaging appears to be an interesting novel approach, but some recent studies reported problems of LAB viability (Sánchez-González et al., 2013; Sánchez-González et al.). The use of encapsulation techniques to protect LAB before their addition into bioactive films could be an interesting way to limit this phenomenon. Indeed, microencapsulation methods permit the entrapment of microbial cells within particles based on different materials and their protection against unfavorable external conditions (Champagne et al.; Zuidam & Shimoni, 2010). Different factors such as the encapsulation method, the type and concentration of the materials used, the particle size and porosity, or the type of microparticle (bead, capsule, composite, coating layer, etc.) affect the effectiveness of the bacterial protection (Ding & Shah, 2009). Several biopolymers have been studied for encapsulation: alginate, pectin, κ-carrageenan, xanthan gum, gellan gum, starch derivatives, cellulose acetate phthalate, casein, whey proteins and gelatin.
Alginate has been widely used as a microencapsulation material, as it is non-toxic, biocompatible, and cheap (Jen et al.; Léonard et al.; Léonard et al., 2014). The composition of sodium alginate (SA) is variable and consists of homopolymeric and heteropolymeric blocks alternating 1,4-linked β-D-mannuronic acid (M) and α-L-guluronic acid (G) residues (Pawar & Edgar, 2012). The physical properties of alginate depend on its composition, sequence and molecular weight (Pawar & Edgar, 2012). Gel formation is driven by interactions between G-blocks, which associate to form firmly held junctions through divalent cations. In addition to G-blocks, MG-blocks also participate by forming weaker junctions (Pawar & Edgar, 2012). Some studies have focused on designing composite systems associating several biopolymers, such as alginate and xanthan gum, to control the release of active components (Wichchukit et al.). Xanthan gum (XG) is an extracellular anionic polysaccharide secreted by Xanthomonas campestris. It is a complex polysaccharide consisting of a primary β-D-(1,4)-glucose backbone bearing a branching trisaccharide side chain comprised of β-D-(1,2)-mannose attached to β-D-(1,4)-glucuronic acid and terminating in a β-D-mannose (Elçin; Goddard et al.). Recently, XG has been combined with SA in beads to preserve LAB viability and modulate release properties.
This could be due to molecular interactions between SA and XG, which lead to the formation of a complex matrix structure (Fareez et al., 2015; Pongjanyakul et al.). The aim of the present study was to develop novel SA-XG microspheres that can enhance the stability of L. lactis and the release of nisin during storage for future food packaging applications. LAB are usually present inside beads or in the core of capsules. One of the originalities of this study was to immobilize the bacteria in the SA membrane of the capsule and to use the core as a nutrient pool permitting gradual bacterial growth. The physico-chemical properties of the microcapsules were studied, as well as the effect of the bacterial physiological state during the encapsulation step (exponential or stationary phase) and of a possible enrichment of the aqueous core with nutrients (M17 supplemented with 0.5 % glucose vs physiological water) on bacterial survival, nisin release and antilisterial activity.

Results and Discussion

Physico-chemical characterization of microcapsules

Shape and size

A microscopic image of freshly prepared SA-XG microcapsules is presented in Fig. 29. The capsule was fairly spherical and the aqueous core was centered. Microcapsule size, sphericity and convexity at day 0 and after 7 days at 30 °C are reported in Table 11. For a given procedure of capsule production (given nozzle diameter, vibration frequency, extrusion flow rate), the average diameter was shown to depend on the viscosity of the injected fluid (Cellesi et al., 2004). In this study, the viscosity was set so as to obtain rather spherical particles. The sphericity and convexity results were in accordance with the microscopic observations and indicated that just-prepared capsules were rather spherical, with an irregular surface.
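The sphericity and convexity descriptors reported in Table 11 are typically computed from the projected particle outline. As an illustration only, the sketch below uses common 2D image-analysis definitions (circularity 4πA/P² and the convex-hull perimeter ratio); the exact formulas of the particle analyzer and all numerical values are assumptions, not the measured data.

```python
import math

def circularity(area, perimeter):
    """2D sphericity (circularity): 4*pi*A / P**2; equals 1 for a perfect disc."""
    return 4.0 * math.pi * area / perimeter ** 2

def convexity(perimeter, convex_hull_perimeter):
    """Convex-hull perimeter over actual perimeter; equals 1 for smooth edges."""
    return convex_hull_perimeter / perimeter

# Hypothetical projected outline of a ~500 µm capsule
area = 1.96e5            # µm^2, roughly pi * 250**2
perimeter = 1.65e3       # µm, slightly rough outline
hull_perimeter = 1.57e3  # µm, smooth convex envelope

s = circularity(area, perimeter)        # < 1: slightly irregular shape
c = convexity(perimeter, hull_perimeter)  # < 1: somewhat rough edges
```

With these hypothetical values, both descriptors fall just below 1, consistent with "rather spherical, with an irregular surface".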
A sphericity value of 1 corresponds to a perfect sphere, whereas a particle with a sphericity close to 0 is highly irregular. Convexity provides information about the roughness of the particle; its values lie between 0 and 1. A particle with smooth edges has a convexity of 1, whereas a particle with irregular edges has a lower convexity (Burgain et al.). All measured parameters remained constant during the storage period: the microcapsules thus remained stable under the conditions tested in this study. SA capsules are sensitive to deformations that may lead to their rupture or to an undesirable early release of their contents. In order to evaluate the mechanical stability of the prepared systems, individual microcapsules were compressed between two parallel plates. Fig. 30 reports the plot of the normal force (N) versus the displacement (µm) for SA-XG microcapsules prepared with physiological water and with glucose-enriched M17. The microcapsules did not show a rupture point (a maximum peak force followed by a dramatic force decrease), even when compressed down to a thickness of around 5 µm from an original diameter of 500 µm. The compression curves showed the same profiles, which means that the aqueous-core composition did not influence the compression behavior of the microcapsules. Under compression, the alginate physical cross-links could restructure in a denser fashion, releasing the excess water and resulting in a volume reduction (Cellesi et al., 2004). The water expulsion could also be accentuated by the low speed used during the compression experiments (10 µm s-1), which might be lower than the kinetics of water release from the squeezed microcapsules. The FTIR spectra of freeze-dried SA-XG solutions and microcapsules are shown in Fig. 31.
All spectra displayed a band between 3000 and 3700 cm-1 (O-H stretching) followed by a small band (3000-2850 cm-1) due to C-H stretching. The FTIR spectrum of SA (a) showed two characteristic peaks around 1595 and 1408 cm-1, indicating the asymmetric and symmetric stretching of COO-, respectively. The band at 1020 cm-1 is an antisymmetric C-O-C stretch given by the guluronic units (Pereira et al., 2003). The FTIR spectrum of xanthan gum (b) shows two carbonyl (C=O) peaks: at 1725 cm-1, corresponding to the acetate groups of an inner mannose unit, and at 1600 cm-1, the characteristic band of the carboxylate of the pyruvate group and of glucuronic acid (Hamcerencu et al., 2007). The reticulation of alginate with calcium cations caused a decrease in intensity of the COO- stretching peaks and of the 1031 cm-1 peak (c). This indicated an ionic bonding between calcium ions and the carboxyl groups of SA, and a partial covalent bonding between calcium and the oxygen atoms of the ether groups, respectively. The incorporation of XG into calcium alginate microcapsules (d) did not induce significant modifications. However, the SA-XG microcapsule spectra (d) showed a decrease in the intensity of the carboxylate bands. This observation could be related to potential hydrogen bonding between alginate carboxylate groups and XG hydroxyl groups (Pongjanyakul et al.). Otherwise, the freeze-drying of the microcapsules before the FTIR study could also initiate or reinforce hydrogen bonding between the two polymers. As described in the literature, barrier capsules can be obtained by a coating technique in which negatively charged polymers, such as alginate, are coated with positively charged polymers, such as chitosan.
Coating is used to enhance gel stability (Smidsrød et al.; Kanekanian) and to provide a better barrier to cell release (Gugliuzza; Zhou et al.). The reaction of the bifunctional molecule with the membrane results in bridge formation; the length of the bridge depends on the type of cross-linking agent (Hyndman et al.). Nevertheless, in the present study, the FTIR spectra of SA-XG microcapsules did not exhibit the characteristic bands of XG, which could be related to a potential ionic bond formation between XG and calcium ions. The low amount of XG compared with the SA concentration could explain this lack of apparent change in the FTIR spectra.

Encapsulation of L. lactis

L. lactis survival

The survival of L. lactis inside and outside the two types of microcapsules was studied in physiological water for 10 days at 30 °C (Fig. 32). The aim of this part was to determine whether a possible enrichment of the microcapsule core with nutrients (M17 supplemented with 0.5 % glucose) and the physiological state of the strain during microcapsule production were key factors to preserve the viability of L. lactis. Non-encapsulated L. lactis was used as control (data not presented). During the storage period, a significant decrease in bacterial counts was observed after 10 days at 30 °C for bacteria in either the exponential or the stationary state at the beginning of the assay, regardless of the physiological state of the strain and of the core composition (Fig. 32a,b).
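Survival curves of this kind are often summarized with a log-linear (first-order) inactivation model, log10 N(t) = log10 N0 − t/D, where D, the decimal reduction time, is the storage time needed to lose one log cycle. The sketch below is a minimal least-squares fit of that model; the counts used are hypothetical, not the measured data of this study.

```python
def fit_log_linear(times, log_counts):
    """Least-squares fit of log10 N(t) = log10 N0 - t / D.
    Returns (log10 N0, D); assumes a simple first-order decay."""
    n = len(times)
    mt = sum(times) / n
    mc = sum(log_counts) / n
    slope = (sum((t - mt) * (c - mc) for t, c in zip(times, log_counts))
             / sum((t - mt) ** 2 for t in times))
    intercept = mc - slope * mt
    return intercept, -1.0 / slope  # D-value in the same time unit as `times`

# Hypothetical survival data: log CFU per mg over 10 days of storage
days = [0, 1, 3, 5, 7, 10]
logN = [8.0, 7.6, 6.9, 6.1, 5.4, 4.9]

log_n0, d_value = fit_log_linear(days, logN)  # D of roughly 3 days here
```

Comparing D-values between systems (core composition, physiological state) gives a single number to rank how well each formulation preserves viability.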
The addition of nutrients within the microcapsules led to small differences in terms of L. lactis population. This is certainly due to a rapid release of some nutrients (small molecules) into the storage medium. Indeed, recent studies reported that the pore size of hydrogel beads may range from around 5 to 200 nm depending on their composition and preparation (Zeeb et al., 2015; Gombotz & Wee, 1998). The combination of a nutrient supply in the core and the encapsulation of LAB during the exponential phase gave the best results in terms of viability (Fig. 32a); under these conditions, L. lactis counts were maintained at 5 log CFU.mg-1 after 10 days at 30 °C. Concerning bacterial counts outside the capsules, in physiological water, a significant increase was observed at the beginning of the storage period whatever the composition of the microcapsules. A maximum level of 4.5 log CFU.mg-1 was reached after 5 days of microcapsule storage at 30 °C. When the capsules were placed in physiological water, bacteria adsorbed on the surface or present in the alginate membrane were certainly progressively released to the external medium. From the 5th day of storage, the L. lactis population started to decrease significantly, reaching 3.5 log CFU.mg-1 at the end of the storage period. The choice of polymers and the structure of the microcapsules were key factors, but the results of this study highlighted the importance of two supplementary parameters: the physiological state of the LAB during the encapsulation step and the possible enrichment of the capsules with nutrients. These factors could be related to nutritional or cellular stress-producing effects.

Nisin activity

Nisin activity inside and outside the microcapsules was determined for the different systems in physiological water at 30 °C for 10 days (Table 12). As observed for bacterial survival, the bacterial physiological state as well as the addition of nutrients in the core were key factors to optimize nisin activity and consequently the antimicrobial properties of the microcapsules.
The maximum concentration of active nisin inside the capsules was detected after 1 day of storage at 30 °C, independently of the aqueous-core composition and of the bacterial physiological state. A significant decrease in the amount of active nisin was then observed. The addition of nutrients in the core reduced this phenomenon: a slight concentration of active bacteriocin was still detected at the end of the storage period. Previous studies equally reported changes in bacteriocin concentration with the composition of the nutrient broth, but for non-encapsulated bacteria (Parente & Ricciardi, 1999). The physiological state of L. lactis during the encapsulation step also appears to be an important factor. When the LAB were encapsulated in the stationary state, a small concentration of active nisin was detected inside the microcapsules at day 0, because nisin had already been produced by the bacteria before encapsulation. Conversely, when L. lactis was encapsulated in the exponential phase, no nisin was initially detected in the capsules. Therefore, the physiological state of the LAB at the time of encapsulation impacts the initial concentration of active nisin in the microcapsules. Concerning the release of the active compound, one day of storage was necessary before a significant concentration of active nisin could be detected outside the microcapsules, in physiological water. This amount gradually increased to reach a maximum after one or three days at 30 °C when L. lactis was encapsulated in the stationary or exponential state, respectively. Nisin production occurs during bacterial growth. When the bacteria were encapsulated in the stationary phase, cell growth was completed: a high concentration of nisin had been released, but a significant part certainly remained adsorbed on the surface of the bacterial cells. This fraction was encapsulated with the bacteria and quickly released outside the microcapsules.
After 10 days at 30 °C, active nisin in the NaCl solution used for microcapsule storage was detected for only one of the tested systems: capsules with an alginate membrane and a nutrient-based aqueous core (M17 enriched with 0.5 % glucose) prepared with bacteria in the exponential state. To conclude this part, optimizing an encapsulation system requires taking into account two parameters that influence bacterial survival and bacteriocin production: the bacterial physiological state during the encapsulation process and the possible addition of nutrients to the system. These factors could be related to nutritional or cellular stress-producing effects. Microcapsules with L. lactis in the exponential state encapsulated in the alginate membrane, with an aqueous core based on xanthan gum with nutrients (M17 enriched with 0.5 % glucose), gave the best results.

Antimicrobial activity

As discussed above, the best system to preserve bacterial survival and permit nisin release was the use of microcapsules with L. lactis in the exponential state encapsulated in the SA membrane, with an aqueous core based on XG with nutrients (M17 enriched with 0.5 % glucose). The antilisterial properties of this system were determined at 30 °C in a synthetic medium, TSB (Fig. 33a). Non-encapsulated L. lactis was used as a control. The Listeria monocytogenes population increased from 2.8 to 7.9 log CFU.mL-1 at the end of the storage period. A clear inhibition of L. monocytogenes growth was observed in the presence of L. lactis, free or encapsulated. The antimicrobial properties of L. lactis were not limited by the encapsulation system developed in this study. Moreover, from the 3rd day of storage at 30 °C, the reduction of L. monocytogenes counts was greater with encapsulated L. lactis than with free L. lactis. These data can certainly be explained by a difference in LAB viability. As shown in Fig. 33b, the L. lactis population grew immediately after the incorporation of the strain into the TSB medium.
No differences between non-encapsulated and encapsulated bacteria were observed until the 2nd day of storage. From the 3rd day, L. lactis counts decreased to reach, at the end of the storage period, 8.6 and 6.6 log CFU.mL-1 for the encapsulated and free strain, respectively. The depletion of elementary nutrients in the synthetic medium explains these results. Considering the effect of capsule size on antilisterial activity, smaller beads were more effective than larger beads due to their higher surface/volume ratio, as previously observed by Anal, Stevens, and Remunan-Lopez (2006).

Introduction

Lactic acid bacteria (LAB) are traditionally used to provide taste and texture and to increase the nutritional value of fermented foods such as dairy products (yoghurt, cheese) and meat products, as well as some vegetables. However, a large amount of research has focused on the great potential of LAB for food preservation. Studies have shown that LAB can inhibit the growth of different microorganisms, including bacteria, yeasts and fungi, through the production of organic acids, hydrogen peroxide, enzymes, defective phages, lytic agents and antimicrobial peptides, or bacteriocins (Alzamora et al.). During the last years, innovative bioactive films enriched with LAB have been developed (Gialamas et al.; Sánchez-González et al., 2013; Sánchez-González et al.). Among the biopolymers used as supports for LAB, cellulose derivatives appear as remarkable film-forming compounds.
They are not only biodegradable, odorless and tasteless (Krochta et al.), but they also exhibit good barrier properties against lipids, oxygen and carbon dioxide at low and intermediate relative humidity (Nispero-Carriedo, 1994). Hydroxypropylmethyl cellulose (HPMC) has also been used for its good film-forming properties and mechanical resistance. The third interesting polysaccharide used in active packaging is starch. This biopolymer is a renewable resource, inexpensive (compared with other compounds) and widely available (Lourdin et al.). However, one of the major problems encountered was the decrease of the films' antimicrobial activity over time due to LAB viability problems (Sánchez-González et al., 2013; Sánchez-González et al.). To limit this problem and increase film effectiveness, encapsulation techniques appear as an interesting approach. Indeed, microencapsulation methods permit the entrapment of microbial cells within particles based on different materials and their protection against unfavorable external conditions (Champagne et al.; Zuidam & Shimoni, 2010). Different factors such as the encapsulation method,

Physical properties and antilisterial activity of bioactive films containing alginate/pectin composite microbeads with entrapped Lactococcus lactis subsp. lactis.
conditions as temperature, RH gradient, kind and amount of plasticizer, etc. It was verified that biopolymer films are highly permeable to water vapor, which is coherent with the hydrophilic nature of polysaccharides (Han et al.). Under these experimental conditions, significant differences in WVP values were observed between corn starch and HPMC films. This high WVP is of great interest to allow mass transport through the film and nisin activity for food safety applications. The optical properties of the films were evaluated through their color and transparency, since these properties have a direct impact on the appearance of the coated product. Film transparency was evaluated through the internal transmittance, Ti (0-1, theoretical range). An increase in Ti can be assumed as an increase in transparency (Hutchings). The spectral distribution of Ti (400-700 nm) is shown in Figure 1. The main impact was observed when both capsules and bacteria were added to the film, where the transparency slightly decreased. All systems remained highly transparent independently of the capsule or bacteria content. These properties, confirmed by Table 2, would also facilitate the application of the films as packaging materials.

FTIR analysis

FTIR spectra of freeze-dried (A/P 75/25) microbeads and of preconditioned starch and HPMC films before and after microbead incorporation are shown in Fig. 2. As discussed in our previous study, the FTIR spectrum of A/P microbeads displayed the typical bands of alginate and pectin biopolymers (Fig. 2a).
The band at 1590-1600 cm-1
The small band at 1640-1650 cm-1 indicated the C-O of the HPMC pyranose molecules, or could be related to the O-H stretching of water molecules coupled to the structure of HPMC (Klangmuang et al.). The sharp band at 1050 cm-1 (C-O stretching) presented an evident shoulder at 1110 cm-1, attributed to a C-O-C asymmetric stretching vibration (Akhtar et al.). In the same sense, previous studies of nano-functionalized films showed that the incorporation of particles did not necessarily result in important FTIR spectra modifications (García et al.).

Viability of lactic acid bacteria during storage of the films

The viability of free and encapsulated L. lactis added to HPMC and starch films was tested throughout a storage period of 12 days at 5 °C and 75 % RH. L. lactis microbial counts as a function of the storage time are shown in Fig. 3a. As can be seen, the viability of encapsulated L. lactis was greater than that of free bacteria in both polymer matrices. For free L. lactis, a significant reduction of the initial population was observed during the storage period, which indicated that free L. lactis was more sensitive to storage stresses. Comparing both hydrocolloid matrices, starch appeared to be a more favorable environment for L. lactis survival. Regardless of the nature of the matrix, worse results were obtained with free L. lactis in comparison with encapsulated L. lactis. Counts for free L. lactis were lower than 2 log CFU/cm2 in all films after 5 days of storage, which indicates the great sensitivity of this strain to the lack of nutrients and to the decrease of the water content.
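Counts expressed in log CFU make viability losses and antimicrobial effects easy to compare as decimal-log reductions. A minimal sketch of this arithmetic, using hypothetical plate counts rather than the measured data:

```python
import math

def log_reduction(cfu_reference, cfu_sample):
    """Decimal-log reduction of a sample relative to a reference count."""
    return math.log10(cfu_reference) - math.log10(cfu_sample)

# Hypothetical plate counts (CFU/cm2), for illustration only
reference = 3.2e7  # e.g. population on a control film
sample = 2.9e4     # e.g. population on a bioactive film

r = log_reduction(reference, sample)  # about a 3-log reduction
```

A "3-log reduction with respect to the control" thus corresponds to a thousand-fold drop in viable counts.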
Antilisterial activity

The antimicrobial activity of the developed films against Listeria monocytogenes was tested in a synthetic non-selective medium (TSA) stored at 5 °C (Fig. 3b). Pure HPMC and starch films, and films with A/P capsules without L. lactis, were used as control samples. As shown in Fig. 3b, the L. monocytogenes population increased from 2.5 to 7.5 log CFU/cm2 at the end of the storage period. As expected, pure HPMC and starch films and films with A/P capsules without L. lactis were not effective against L. monocytogenes growth, since no significant differences in microbial growth were observed on TSA plates. All films containing bioactive cultures exhibited a significant antilisterial activity since, during the storage period, a reduction of the initial microbial population was observed in all cases. The films with free and encapsulated L. lactis therefore showed bactericidal activity. After 3 storage days, the best growth limitation was obtained with starch and HPMC films with encapsulated L. lactis; all films with free and encapsulated L. lactis led to a reduction of the microbial growth of approximately 3 logs with respect to the control. For polysaccharide-based films, different results were obtained. The initial population remained constant during the first 7 days and, after that, a slight decrease was observed, in agreement with Gialamas et al. and Sánchez-González et al. In this sense, the viability of free L. lactis significantly decreased in comparison with encapsulated L. lactis and limited L.
monocytogenes inhibition.

Conclusion

Abstract

Nisin is an antimicrobial peptide produced by strains of Lactococcus lactis subsp. lactis, recognized as safe for food applications by the Joint Food and Agriculture Organization/World Health Organization (FAO/WHO). Nisin can be applied for shelf-life extension, biopreservation, control of fermentation flora, and potentially as a clinical antimicrobial. The entrapment of nisin-producing bacteria in calcium alginate beads is a promising way to immobilize cells in active films to extend food shelf-life. The present PhD work aimed to design biopolymeric active packaging entrapping bioprotective lactic acid bacteria (LAB) to control the growth of undesirable microorganisms in foods, particularly L. monocytogenes. First, the mechanical and chemical stability of the alginate beads was improved, and consequently the effectiveness of encapsulation was increased. Alginate/pectin (A/P) biopolymers were prepared, as a first microsphere design, by an extrusion technique to encapsulate nisin-producing Lactococcus lactis subsp. lactis in different physiological states (exponential phase, stationary phase). Results showed that A/P composite beads improved the bead properties more efficiently than beads formulated with pure alginate or pectin. The association of alginate and pectin induced a synergistic effect which improved the mechanical properties of the microbeads. As a second microsphere design, aqueous-core microcapsules were prepared with an alginate hydrogel membrane and a xanthan gum core. Results showed that microcapsules with L. lactis in the exponential state, encapsulated in the alginate membrane with a nutrient-enriched xanthan gum aqueous core, gave the best results and exhibited an interesting antilisterial activity. These microparticles were applied in food preservation and particularly in active food packaging.
Novel bioactive films (HPMC, starch) were developed and tested, entrapping active beads of alginate/xanthan gum core-shell microcapsules and alginate/pectin hydrogel enriched with L. lactis.

Figure 1: The different types of capsules
Figure 2: Alginate structure
Figure 3: Carrageenan structure
Figure 4: Cellulose acetate phthalate (CAP) structure
Figure 5: Sodium carboxymethyl cellulose (NaCMC) structure
Figure 6: Xanthan gum structure
Figure 7: Chitosan structure
Figure 8: Pectin structure
Figure 9: Dextran structure
Figure 10: Gellan gum structure
Figure 11: Starch structure
Figure 12: Collagen triple helix (from Wikipedia)
Figure 13: Gelatin structure
Figure 14: Caseins structure
Figure 15: Whey protein structure
Figure 16: Visualization of microcapsules containing a Nile-red-stained oil phase by light microscopy (a) and by CLSM using the red fluorescence channel and transmitted light detection (b); the fluorescence signal allows the oil-containing and air-containing microcapsules to be unambiguously distinguished; scale bar in µm (Lamprecht et al., 2000)
Figure 17: TEM of CS-βlg and chitosan nanoparticles in simulated gastrointestinal fluids and after enzymatic degradation
Figure 18: Scanning electron microscopy of the outer surface of the BSA nanoparticles, at magnifications of 30000 (a) and 60000 (b)
Figure 19: Deflection images of micellar casein at two pHs ((A) pH 6.8 and (B) pH 4.8) and whey proteins ((C) pH 4.8); each image corresponds to 512 horizontal lines describing the outward and return of the AFM cantilever tip (1024 scans per image), with height profiles from a cross-section shown below each image
Figure 20: Height images of bacterial strains L. rhamnosus GG and L. rhamnosus GR-1.
Each image corresponds to 512 horizontal lines that describe the outward and return of the AFM cantilever tip (1024 scans per image); insets: 3D views of bacterial strains.

(v) What are the mechanisms of release? (vi) What are the cost constraints? Different technologies, presented below, are used to produce microcapsules according to the answers to the above questions. Spray drying is one of the oldest and most widely used microencapsulation techniques in the food industry sector. It is an economical and flexible operation. The process involves the atomization of a suspension of microbial cells in a polymeric solution in a chamber supplied with hot air, which leads to solvent evaporation. The dried particles are then separated by a filter or cyclone (M.-J. Chen, Chen, & Kuo, 2007; de Vos et al., 2010).

Figure 16: Visualization of microcapsules containing a Nile-red-stained oil phase by light microscopy (a) and by CLSM using the red fluorescence channel and transmitted light detection (b). The fluorescence signal allows the oil-containing and air-containing microcapsules to be unambiguously distinguished.
Scale bar is shown in µm (Lamprecht et al., 2000).

Figure 17: (a) TEM of CS-βlg nanoparticles (N-native) in simulated gastric fluid with pepsin for 0.5 h, (b) in simulated intestinal fluid with pancreatin for 10 h, (c) then degraded by chitosanase and lysozyme for 4 h. (d) TEM of chitosan nanoparticles in simulated intestinal fluid with pancreatin for 10 h, (e) and degraded by chitosanase and lysozyme for 4 h.

4.1.1.4. Scanning Electron Microscopy (SEM)

SEM provides information on surface characteristics such as composition, shape and size (Mohsen Jahanshahi & Babaei, 2008; Pierucci et al.; Montes et al.). The samples must be frozen, dried or fractured and subsequently coated with metal compounds, which can alter the representativeness of the sample. As an example, Rahimnejad, Jahanshahi and Najafpour (2006) determined nanoparticle size and distribution by SEM (Fig. 18). The samples (protein (BSA) nanoparticles) were dipped into liquid nitrogen for 10 min and then freeze-dried. Each sample was fixed on an aluminium stub and coated with 20 nm of gold-palladium. The nanoparticles were shown to be spherical, with sizes well below 100 nm.

Figure 18: Scanning electron microscopy of the outer surface of the BSA nanoparticles, at magnifications of 30000 (a) and 60000 (b).

Figure 19: Deflection images of micellar casein at two pHs ((A) pH 6.8 and (B) pH 4.8) and whey proteins ((C) pH 4.8).
Each image corresponds to 512 horizontal lines that describe the outward and return of the AFM cantilever tip (1024 scans per image). The graphics below each image correspond to height profiles taken from a cross-section of the AFM images.

Figure 20: Height images of bacterial strains L. rhamnosus GG and L. rhamnosus GR-1. Each image corresponds to 512 horizontal lines that describe the outward and return of the AFM cantilever tip (1024 scans per image); insets: 3D views of bacterial strains.

Figure 21: Raw micrographs of samples: (a) ceramic beads; (b) plasma aluminium; and (c) zinc dust.

The elastic (G′) and viscous (G″) moduli of gels are investigated by dynamic mechanical analyses with a plate-plate geometry (20 mm) at 20 °C. Rheological frequency sweep tests are performed, and dynamic strain sweep tests are measured at a frequency of 1 Hz to investigate the linear viscoelastic range. In general, the elastic modulus of a gel depends on the number of cross-links and on the length and stiffness of the chains between cross-links (Pongjanyakul & Puttipipatkhachorn; Montes, Gordillo, Pereyra, & de la Ossa, 2011).

Other characterization techniques include:
- Differential scanning calorimetry (DSC): detection of crystals of an encapsulated compound and of interactions between biopolymers in the particles (Ribeiro et al., 2005; Yang et al., 2013).
- Rheological gel characterization: rheological parameters of the initial preparation and mechanical parameters of the capsules (Pongjanyakul & Puttipipatkhachorn, 2007).
- Spectrophotometry: the relative amount of released compounds.

(Allan-Wojtas et al.) prepared calcium alginate microcapsules, with or without probiotic bacteria, using emulsification.
Results showed large differences between the alginate matrix of microcapsules with and without bacteria. The presence of bacteria during gelation caused local changes to the gelation process and the occurrence of a "void space" phenomenon, as observed in fermented dairy products. (Chandramouli et al.) studied the effect of capsule size, sodium alginate concentration and calcium chloride concentration on the viability of encapsulated bacteria. The viability of probiotic bacteria in the microcapsules increased with alginate capsule size and gel concentration; no significant differences in viability were observed when the calcium chloride concentration was increased.

The L. lactis culture was regenerated by transferring a loopful of the stock culture into 10 mL of M17 broth and incubating at 30 °C overnight. A 10 µL aliquot from the overnight culture was then transferred into 10 mL of M17 broth and grown at 30 °C to the exponential or stationary phase of growth (6 and 48 h, respectively). L. lactis cells were collected by centrifugation (20 min, 4 °C, 5000 rpm), diluted and added to the SA solution to obtain a target inoculum in the microspheres of 10⁵ CFU.mg⁻¹. The core of the capsules was composed of XG (0.2 % (w/w)) dissolved in sterile physiological water (0.9 % sodium chloride) or sterile M17 broth supplemented with 0.5 % D(+)-glucose. Preliminary studies indicated a positive effect of the addition of 0.5 % glucose on L. lactis growth and nisin production (data not shown). Microcapsules were made using the Encapsulator B-395 Pro (BÜCHI Labortechnik, Flawil, Switzerland). The BÜCHI technology is based on the principle that a laminar flowing liquid jet breaks up into equal-sized droplets under a superimposed nozzle vibration.
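This jet-breakup principle lends itself to a quick order-of-magnitude check: if the nozzle delivers a volumetric flow rate Q and the vibration pinches off one droplet per period, each droplet has volume Q/f. A minimal sketch of that estimate; the flow rate below is a made-up illustrative value, not an operating parameter reported in this work:

```python
import math

def droplet_diameter_um(flow_rate_ul_per_s: float, frequency_hz: float) -> float:
    """Estimate mean droplet diameter for a vibrating-nozzle encapsulator,
    assuming the laminar jet breaks into exactly one droplet per vibration
    period, so each droplet has volume V = Q / f and d = (6V/pi)**(1/3)."""
    volume_ul = flow_rate_ul_per_s / frequency_hz      # µL per droplet
    volume_um3 = volume_ul * 1e9                       # 1 µL = 1e9 µm^3
    return (6.0 * volume_um3 / math.pi) ** (1.0 / 3.0)

# Hypothetical flow rate of 25 µL/s at a 700 Hz vibration frequency
print(f"estimated droplet diameter: {droplet_diameter_um(25.0, 700.0):.0f} µm")
```

Doubling the frequency at fixed flow rate halves the droplet volume, so diameter scales as f^(-1/3); the real diameter also depends on shrinkage during gelation, which this sketch ignores.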
The vibration frequency determines the number of droplets produced; it was set at 700 Hz to generate 700 droplets per second. Nozzle diameters of 200 µm for the membrane and 80 µm for the core were used for the preparation of the capsules. Droplets fell into a CaCl2 solution (200 mM) to allow microparticle formation. Capsules were kept in the gelling bath for 15 min to complete the reticulation process, then filtered and washed with buffer solution (0.9 % sodium chloride).

Figure 22: Microscopic images of A/P (75/25) beads population (a) and a single microbead (b).

Figure 23: Force/gap curves of alginate, pectin and alginate/pectin microbeads.

Figure 24: FTIR spectra of alginate, pectin and alginate/pectin solutions (a) and alginate, pectin and alginate/pectin composite microbeads (b).

Figure 25: Survival of Lactococcus lactis, encapsulated in exponential state, during a 7-day storage period in physiological water at 30 °C (▲ 100/0, × 75/25, □ 50/50, 25/75, • 0/100): (a) L. lactis inside physiological-water microbeads; (b) L. lactis inside glucose-enriched M17 microbeads; (c) L. lactis outside physiological-water microbeads; (d) L. lactis outside glucose-enriched M17 microbeads. Mean values and standard deviations.

Figure 26: Survival of Lactococcus lactis, encapsulated in stationary state, during a 7-day storage period in physiological water at 30 °C (▲ 100/0, × 75/25, □ 50/50, 25/75, • 0/100): (a) L. lactis inside physiological-water microbeads; (b) L. lactis inside glucose-enriched M17 microbeads; (c) L. lactis outside physiological-water microbeads; (d) L. lactis outside glucose-enriched M17 microbeads. Mean values and standard deviations.

Figure 28:
Effect of Lactococcus lactis, free or encapsulated (microbeads 75/25), on the growth of Listeria monocytogenes (▲ L. lactis non-encapsulated, microbeads 75/25, control in solid line) on TSB medium stored at 30 °C (a), and survival of L. lactis in contact with the culture medium (▲ L. lactis non-encapsulated, • microbeads 75/25) (b). Mean values and standard deviations.

- Braccini, I., & Pérez, S. (2001). Molecular basis of Ca2+-induced gelation in alginates and pectins: the egg-box model revisited. Biomacromolecules, 2(4), 1089-1096.
- T., Budde, B., & Koch, A. (2003). Application of Leuconostoc carnosum for biopreservation of cooked meat products. Journal of Applied Microbiology, 95(2), 242-249.
- Jaya, S., Durance, T. D., & Wang, R. (2008). Effect of alginate-pectin composition on drug release characteristics of microcapsules. Journal of Microencapsulation, 26(2), 143-153.
- Léonard, L., Degraeve, P., Gharsallaoui, A., Saurel, R., & Oulahal, N. (2014). Design of biopolymeric matrices entrapping bioprotective lactic acid bacteria to control Listeria monocytogenes growth: comparison of alginate and alginate-caseinate matrices entrapping Lactococcus lactis subsp. lactis cells. Food Control, 37, 200-209.
- Madziva, H., Kailasapathy, K., & Phillips, M. (2005). Alginate-pectin microcapsules as a potential for folic acid delivery in foods. Journal of Microencapsulation, 22(4), 343-351.
- Parente, E., & Ricciardi, A. (1999). Production, recovery and purification of bacteriocins from lactic acid bacteria. Applied Microbiology and Biotechnology, 52(5), 628-638.
- Pawar, S. N., & Edgar, K. J. (2012). Alginate derivatization: a review of chemistry, properties and applications. Biomaterials, 33(11), 3279-3305.
- Pereira, L., Sousa, A., Coelho, H., Amado, A. M., & Ribeiro-Claro, P. J. A. (2003). Use of FTIR, FT-Raman and 13C-NMR spectroscopy for identification of some seaweed phycocolloids. Biomolecular Engineering, 20(4-6), 223-228.
- Polk, A., Amsden, B., De Yao, K., Peng, T., & Goosen, M.
F. A. (1994). Controlled release of albumin from chitosan-alginate microcapsules. Journal of Pharmaceutical Sciences, 83(2), 178-185.

Figure 29: Microscopic image of a single alginate-xanthan gum microcapsule.

Figure 30: Typical force-displacement curves of alginate-xanthan microcapsules prepared with M17 and with physiological water.

Figure 31: FTIR spectra of freeze-dried (a) alginate solution, (b) xanthan gum solution, (c) alginate microbeads and (d) alginate-xanthan microcapsules.

The physiological-water core permits only the control of osmotic pressure and the avoidance of cellular lysis. Immobilization of L. lactis in the SA membrane seems to preserve bacterial viability in physiological water: although the bacterial population inside the capsules decreased significantly only after 3 days at 30 °C, at the end of the storage period microbial counts remained above 3 log CFU.mg⁻¹.

Figure 32: Survival of Lactococcus lactis inside (a, b) and outside (c, d) microcapsules during a 10-day storage period in physiological water at 30 °C (▲ physiological water, • M17 supplemented with glucose); (a, c) L. lactis encapsulated in exponential state; (b, d) L. lactis encapsulated in stationary state. Mean values and standard deviations.

Figure 34: Spectral distribution of internal transmittance (Ti) of films equilibrated at 5 °C and 75 % RH.

The band at 1600 cm⁻¹ is related to the antisymmetric stretch of C-O of the alginate and pectin carboxylate groups. The peak at 1413 cm⁻¹ is a symmetric stretch of the COO⁻ from the carboxylate groups. The band at 1030 cm⁻¹ is an antisymmetric stretch (C-O-C) given by the guluronic units (Sartori et al., 1997).

Figure 35: FTIR spectra of A/P microbeads (a), starch and HPMC films with and without A/P microbead incorporation (b), and magnifications of the starch film (c) and HPMC film (d) spectra.

Figure 3.
Effect of bioactive films on the survival of Lactococcus lactis in the film in contact with the culture medium (a) and on the growth of Listeria monocytogenes (b) on TSA medium, stored at 5 °C. Mean values and standard deviations. (□ HPMC, ∆ HPMC+B, ○ HPMC+C, ◊ HPMC+C+B, ■ ST, ▲ ST+B, • ST+C, ♦ ST+C+B, control in solid line). B = bacteria; C = capsules.

…antilisterial activity of bioactive films containing alginate/pectin composite microbeads with entrapped Lactococcus lactis subsp. lactis.

List of Tables

Table 1: Comparison of various carbohydrates used for bacteria encapsulation in the literature
Table 2: Comparison of various proteins used for bacteria encapsulation in the literature
Table 3: Encapsulation methods
Table 4: Particle characterization
Table 5: Pure alginate capsules
Table 6: Capsules produced with alginate mixed with other polymers
Table 7: Size, sphericity and convexity of microbeads with different ratios of polymers, at day 0 and after a storage period in physiological water of 7 days at 30 °C
Table 8: FTIR bands of alginate and pectin with assignments
Table 9: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in exponential state; two internal compositions of microbeads were tested: physiological water and M17 enriched with 0.5 % glucose
Table 10: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in stationary state; two internal compositions of microbeads were tested: physiological water and M17 enriched with 0.5 % glucose
Table 11: Size, sphericity and convexity of microcapsules at day 0 and after a storage period in physiological water of 10 days at 30 °C
Table 12: Concentrations of active nisin for 10 days at 30 °C inside and outside alginate-xanthan microcapsules containing L. lactis encapsulated in stationary or exponential state; two internal compositions of microcapsules were tested: physiological water and M17 enriched with 0.5 % glucose; mean values and standard deviations

Table 1: Comparison of various carbohydrates used for bacteria encapsulation in the literature (carbohydrate; advantages; disadvantages; interest with probiotics; references)
- Alginate: simplicity (M.-J. Chen & Chen, 2007; Burey et al., 2008; Krasaekoopt et al., 2003).
- Cellulose acetate phthalate (CAP): insoluble in acid media (pH ≤ 5) and soluble when the pH is ≥ 6; good protection for microorganisms; adding CAP with starch and oil improved the viability of probiotics (Favaro-Trindade & Grosso, 2002; Burgain et al., 2011; Rao et al., 1989; Tripathy & Raichur, 2013).
- Sodium carboxymethyl cellulose (NaCMC).

Table 2: Comparison of various proteins used for bacteria encapsulation in the literature (Fuentes-Zaragoza et al., 2010; J.
Singh et al., 2010; Anal & Singh, 2007; Mortazavian et al., 2008; Crittenden et al., 2001).

Table 4: Particle characterization (machine; measurement; references)
- Static light scattering (SLS): particle size, from the intensity of scattered light waves as a function of scattering angle.
- Dynamic light scattering (DLS): particle size, from the direction and speed of particle movement due to Brownian motion.
- Confocal laser scanning microscopy (CLSM).
Lactobacillus casei in different beads types made of sodium alginate and different ratio "amidated low(A)-methoxyl pectin(P)", by the extrusion technique. The beads made with A-P blends in 1:4 and 1:6 ratios provided a better protection to Lb. casei under simulated gastric juice and bile salts. Finally,[START_REF] Albertini | Development of microparticulate systems for intestinal delivery of Lactobacillus acidophilus and Bifidobacterium lactis[END_REF] prepared beads by the extrusion method and nine formulations were developed using alginate as main carrier, xanthan gum (XG) as hydrophilic retardant polymer, and the cellulose derivative, cellulose acetate phthalate (CAP), as gastro-resistant polymer. The results showed that the combination of 0.5% of XG or 1% of CAP within the 3% of alginate solution increased the survival of the probiotic bacteria in acidic conditions from 63% (freeze-dried bacteria) up to 76%.From all the different results were presented above, it can be concluded that the mixing alginate with other polymers in one matrix or coating alginate capsules with other polymers enhanced the protective properties of the beads, and provided the more stable capsules. (Vodnar & Socaciu, 2014) encapsulated of viable cells in chitosan coated alginate beads. Microencapsulated L. casei and L. plantarum were resistant to simulated gastric conditions and bile solution. (González-Forte, Bruno, & Martino, 2014) used starch coating alginate to efficiently protect the L. plantarum probiotic bacteria through a simulated gastrointestinal system (HCL pH 1-2). [START_REF] Chan | Bioencapsulation by compression coating of probiotic bacteria for their protection in an acidic medium[END_REF] demonstrated as a novel encapsulation proposal, a coating of alginate micro-particles by hydroxypropyl cellulose. Results showed there was significant improvement in survival of encapsulated cells when exposed to acidic media of pH 1.2 and 2. 
(Martin, Lara-Villoslada, Ruiz, & Morales, 2013) mixed alginate and starch and showed that the viability of probiotic decreases in 3 log cell numbers in the case of the formulae with alginate only and in 0.3 log in the formulae with mixed alginate and starch. Moreover, the alginate/starch allowed to obtain a suitable particle size and the viability of probiotic was not modified after 45 days at 4 °C. [START_REF] Rajam | Effect of whey proteinalginate wall systems on survival of microencapsulated Lactobacillus plantarum in simulated gastrointestinal conditions[END_REF] mixed denatured and undenatured whey proteins with alginate to prepare microencapsules with Lactobacillus plantarum. (Sandoval-Castilla, Lobato-Calleros, García-Galindo, Alvarez-Ramírez, & Vernon-Carter, 2010) entrapped Table 6 : 6 Capsules produced with alginate mixed with other polymers Polymer Encapsulation method Probiotic bacteria Survival Stability of capsules Reference Alginate3%, skim Spray drying Bifidobacterium + + (Yu, Yim, Lee, & milk, poly dextrose, longum ATCC Heo, 2001) soy fiber, yeast extract, chitosan, Ƙ- 15707, Bifidobacterium carageenan, and infantis ATCC whey (0.6%) 25962 and Bifidobacterium breve ATCC 15700 Alginate/Skim milk Freeze drying Acetobacter xylinum + + (Jagannath, Raju, & Bawa, 2010) Alginate and Beads coating Lactococcus lactis + + (Klinkenberg, Chitosan ssp. Lactis Lystad, Levine, & Dyrset, 2001) Extrusion Lactobacillus + (Le-Tien, Millette, rhamnosus Mateescu, & Lacroix, 2004) Extrusion Lactobacillus + (Göksungur, helveticus Gündüz, & Harsa, 2005) Extrusion/Coating Lactobacillus + (K. Kailasapathy & acidophilus Iyer, 2005) Stock cultures of Lactococcus lactis subsp. lactis ATCC 11454, a nisin producing-strain, Micrococcus flavus DSM 1790 and Listeria monocytogenes CIP 82110 were kept frozen (-80 ºC) in synthetic media enriched with 30 % glycerol (M17 broth for the LAB and TSB broth (Biokar diagnostics, Beauvais, France) for the others strains). 1.2. 
Biofilm Materials Sodium alginate from brown algae (viscosity ≤0.02 Pa.s for an aqueous solution of 1 % wt at 20°C), pectin from citrus peel (galacturonic acid ≥ ι4 %, Methoxy Groups ≥ 6. Stock cultures of Lactococcus lactis subsp. lactis ATCC 11454 and Listeria Monocytogenes CIP 82110 were kept frozen (-80 ºC) in synthetic media enriched with 30 % glycerol (M17 Broth for LAB and Tryptone Soy Broth (TSB, Biokar diagnostics, Beauvais, France) for the other strain). Encapsulation of Lactococcus lactis subsp. lactis on alginate / pectin composite microbeads: effect of matrix composition on bacterial survival and nisin release Ce chapitre présente l'encapsulation de Lactococcus lactis subsp. lactis dans des microsphères d'alginate / pectine. Différents ratios d'alginate / pectine ont été utilisés (A / P) ((100/0, 75/25, 50/50, 25/75, 0/100) et des microsphères ont été préparés par extrusion en encapsulant Lactococcus lactis subsp Lactis sous 4 conditions : deux états physiologiques différents (phase exponentielle, phase stationnaire) et en présence de milieux de croissance différents (glucose enrichi M17, l'eau physiologique). L'objectif de cette étude était d'élaborer les microsphères alginate / pectine et d'évaluer le meilleur ratio de polymères permettant la survie microbienne, la libération de nisine, l'activité Statgraphics Plus for Windows 5.1. Homogeneous sample groups were obtained by using LSD test (95 % significance level). Chapter III: anti-listéria en comparaison entre L.lactis libre et L.lactis encapsulée. Des solutions d'alginate et de pectine (1% (p / p)) ont été préparées avec de l'eau physiologique Table 7 : 7 Size, sphericity and convexity of microbeads with different ratios of polymers, at day 0 and after a storage period in physiological water of 7 days at 30 °C. 
A/P ratio; size D[4,3] (µm) at day 0 / day 7; sphericity at day 0 / day 7; convexity at day 0 / day 7:
- 100/0: 264 (13) ax / 255 (5) ax; 0.90 (0.01) ax / 0.93 (0.01) ax; 0.25 (0.01) ax / 0.23 (0.04) ax
- 75/25: 274 (10) ax / 267 (3) bx; 0.90 (0.02) ax / 0.90 (0.01) ax; 0.27 (0.01) ax / 0.24 (0.05) ax
- 50/50: 275 (2) ax / 268 (7) bx; 0.95 (0.01) bx / 0.90 (0.02) ay; 0.43 (0.01) bx / 0.35 (0.06) bx
- 25/75: 272 (11) ax / 285 (6) cx; 0.94 (0.01) bx / 0.88 (0.01) ay; 0.50 (0.01) cx / 0.50 (0.01) cx

Table 8: FTIR bands of alginate and pectin with assignments.
- Alginate: 1600 cm⁻¹, COO⁻ antisymmetric stretch; 1412 cm⁻¹, COO⁻ symmetric stretch; 1024 cm⁻¹, C-O-C antisymmetric stretch.
- Pectin: 1740 cm⁻¹, C=O stretching; 1640-1610 cm⁻¹, COO⁻ antisymmetric stretching; 1440 cm⁻¹, COO⁻ symmetric stretch; 1380 cm⁻¹, C-H bending; 1240 cm⁻¹, CO, CC ring stretching; 1145 cm⁻¹, COC of glycosidic link/ring; 1100 cm⁻¹, CO, CC, CCH, OCH ring.

2.2. Encapsulation of L. lactis

Table 9: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in exponential state, as a function of internal composition and A/P microbead ratio. Two internal compositions of microbeads were tested: physiological water and M17 enriched with 0.5 % glucose.

Table 10: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in stationary state, as a function of A/P ratio and internal composition. Two internal compositions of microbeads were tested: physiological water and M17 enriched with 0.5 % glucose. a, b, c: different letters in the same column indicate significant differences among samples (p < 0.05).

Chapter IV: Design of microcapsules containing Lactococcus lactis subsp. lactis in an alginate shell and a xanthan gum with nutrients core

This chapter presents the encapsulation of Lactococcus lactis subsp. lactis in particles composed of an alginate membrane and a xanthan gum core.
The microbeads were prepared with alginate for the membrane and, in the core, xanthan gum enriched with glucose-supplemented M17 or with physiological water, combined with L. lactis in different physiological states. The objective of this study was to develop a new bead-preparation technique that would improve the survival of L. lactis, as well as its antimicrobial activity, compared with free L. lactis. Nisin activity was compared between encapsulated and free bacteria in exponential phase in physiological water for 10 days at 30 °C. Nisin activity was best for L. lactis encapsulated in exponential phase with an M17 nutrient medium enriched with 0.5 % glucose over the 10 days; indeed, the reduction of the L. monocytogenes population was greater with encapsulated L. lactis than with free L. lactis. In conclusion, the best results were obtained with microspheres having an alginate membrane and a xanthan core with 0.5 % glucose in M17 medium, for L. lactis encapsulated in exponential phase. The capsules were prepared with alginate for the membrane fraction (1.3 %) in physiological water and 0.2 % xanthan gum in the core, with physiological water or sterile M17 broth supplemented with 0.5 % D(+)-glucose. The bacterial inoculum in the microspheres was 10⁵ CFU.mg⁻¹. The physico-chemical characterization of the microspheres evaluated the diameter, sphericity and convexity of the beads from day 0 to day 10 at 30 °C. All microspheres were produced under the same conditions. The microbead diameter, sphericity and convexity showed no significant differences at day 0.
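Bead diameters in this work are reported as the volume-weighted mean D[4,3], which can be computed directly from any measured size distribution. A minimal sketch of that calculation, with invented diameters rather than the raw data behind the reported values:

```python
def d43(diameters_um, counts=None):
    """Volume-weighted (De Brouckere) mean diameter:
    D[4,3] = sum(n_i * d_i**4) / sum(n_i * d_i**3)."""
    if counts is None:
        counts = [1] * len(diameters_um)  # unweighted list of measurements
    num = sum(n * d ** 4 for n, d in zip(counts, diameters_um))
    den = sum(n * d ** 3 for n, d in zip(counts, diameters_um))
    return num / den

# Invented bead diameters (µm); larger beads dominate this average
sizes = [200, 250, 260, 270, 280, 300]
print(f"D[4,3] = {d43(sizes):.1f} µm")
```

Because each diameter is weighted by its volume, D[4,3] sits above the arithmetic mean whenever the distribution has a tail of large beads, which is why it is the usual choice for laser-diffraction sizing.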
After 10 days the observation was the same, which means that this type of capsule is more stable and offers good protection of the bacteria against external conditions. The survival and growth of L. lactis in the microspheres over 10 days at 30 °C changed significantly according to two factors: the internal composition of the microsphere (physiological water or M17 enriched with 0.5 % glucose) and the physiological state of L. lactis. The results show that the optimal conditions for viability of the encapsulated bacteria are an exponential growth state and 0.5 % glucose in M17 medium with the xanthan gum.

- Sandoval-Castilla, Lobato-Calleros, García-Galindo, Alvarez-Ramírez, & Vernon-Carter (2010). Textural properties of alginate-pectin beads and survivability of entrapped Lb. casei in simulated gastrointestinal conditions and in yoghurt. Food Research International, 43(1), 111-117.

Table 11: Size, sphericity and convexity of microcapsules at day 0 and after a storage period in physiological water of 10 days at 30 °C.
- Day 0: size D[4,3] 413 (10) a µm; sphericity 0.6 (0.2) a; convexity 0.23 (0.02) a.
- Day 10: size D[4,3] 419 (7) a µm; sphericity 0.6 (0.2) a; convexity 0.27 (0.02) a.
a, b, c: different letters indicate significant differences between samples within the same column (p < 0.05).

Table 12: Concentrations of active nisin for 10 days at 30 °C inside and outside alginate-xanthan microcapsules containing L. lactis encapsulated in stationary or exponential state, as a function of bacterial physiological state and core composition. Two internal compositions of microcapsules were tested: physiological water and M17 enriched with 0.5 % glucose. Mean values and standard deviations. Different letters in the same row indicate significant differences over time for a given sample (p < 0.05).

3. Conclusion

SA/XG core-shell microcapsules were developed for the immobilization of L. lactis in the membrane. Encapsulation did not
change nisin release. The physiological state of the bacteria during the encapsulation process and the enrichment of aqueous-core with nutrients (M17 supplemented with 0.5% glucose) were determining factors for both bacteria viability and bacteriocin activity.Microcapsules with L. lactis in exponential state encapsulated in alginate membrane and aqueous-core based on xanthan gum with nutrients (M17 enriched with 0.5% glucose) gave the best results in terms of bacterial viability and nisin activity. These microcapsules allow a complete inhibition of L. monocytogenes growth for 7 days at 30 °C. Antimicrobial properties of L. lactis were not limited by the system of encapsulation developed in this study. It could be interesting to design novel bioactive food packaging based on biopolymers films enriched with these SA/XG core-shell microcapsules.Meyer, A., Schöbitz, R., Brito, C., Fuentes, R. (2011). Lactic acid bacteria in an alginate film inhibit Listeria monocytogenes growth on smoked salmon. FoodControl, 22, log CFU . mL -1 a) 0 b) 2 Figure 33: Effect of Lactococcus lactis free or encapsulated in exponential state (alginate-xanthan Time (days) 1 2 3 4 5 6 7 Time (days) 0 1 3 4 5 6 7 0 2 4 10 12 14 microcapsules with internal medium prepared with glucose-enriched M17) on the growth of Listeria Effect of monopotassium phosphate. International Journal of Food Microbiology, 125( 3) 320-329. -Burgain, J., Gaiani, C., Linder, M., & Scher, J. (2011). Encapsulation of probiotic living cells: From laboratory scale to industrial applications. Journal of Food Engineering, 104(4), 467-483. -Champagne, C.P., & Kailasapathy, K. (2008). Encapsulation of probiotics. Delivery and Controlled Release of Bioactives in Foods and Nutraceuticals, 154, 344-369. -Cellesi, F., Weber, W., Fussenegger, M., Hubbell, J. a., & Tirelli, N. (2004). Towards a fully -Concha-485-489. -Ding, W.K., & Shah, N.P. (2009). 
An improved method of microencapsulation of probiotic bacteria for their stability in acidic and bile conditions during storage. Journal of Food Science, 74(2), M53-M61. -Elçin, Y. M. (1995). Encapsulation of urease enzyme in xanthan-alginate spheres. Biomaterials, 16(15), 1157-1161. -Goddard, E. D., & Gruber, J. V. (1999). Principles of Polymer Science and Technology in Cosmetics and Personal Care. CRC Press. enteritidis. International Journal of Food Microbiology, 130, 219-226. -Parente, E., & Ricciardi, A. (1999). Production, recovery and purification of bacteriocins from lactic acid bacteria. Applied Microbiology and Biotechnology, 52(5), 628-638. Design Microcapsules present a plastic behavior and no differences were observed in terms of 6 Biotechnology and Bioengineering, 50(4), 357-364. log CFU . mL -1 8 -Jen, A.C., Wake, M.C. & Mikos, A.G. (1996). Review: Hydrogels for cell immobilization. Chapter V: synthetic substitute of alginate: Optimization of a thermal gelation/chemical cross-linking Pawar, S. N., & Edgar, K. J. (2012). Alginate derivatization: A review of chemistry, properties mechanical properties among studied systems. The aqueous core composition of the microcapsules did not affect SA network stability. Addition of XG caused a change in matrix and applications. Biomaterials, 33(11), 3279-3305. scheme ("tandem" gelation) for the production of beads and liquid-core beads. Biotechnology and Bioengineering, 88(6), 740-749. -Sánchez-González, L., Saavedra-Quintero, J. & Chiralt, A. (2013). Physical properties and structure of microcapsules membrane by establishment of potential hydrogen bonding between antilisterial activity of bioactive edible films containing Lactobacillus plantarum. Food XG hydroxyls groups and SA carboxylate groups. This certainly leads to better LAB protection Hydrocolloids, 33(1), 92-98. 
monocytogenes ( L.lactis non-encapsulated, ▲ alginate-xanthan microcapsules and control in solid line) on TSB medium stored at 30°C (a) and survival of L.lactis in contact with the culture medium ( L.lactis non-encapsulated, ▲ alginate-xanthan microcapsules) (b). Mean values and standard deviations. and References -Alzamora, S.M., Tapia, M.S., López-Malo, A. (2000). Minimally processed fruits and vegetables: fundamental aspects and applications. Aspen Publishers, Inc. -Antwi, M., Theys, T.E., Bernaerts, K.,Van Impe, J.F. & Geeraerd A.H. (2008). Validation of a model for growth of Lactococcus lactis and Listeria innocua in a structured gel system: -Léonard, L., Gharsallaoui, A., Ouaali, F., Degraeve, P., Waché, Y., Saurel, R., & Oulahal, N. (2013). Preferential localization of Lactococcus lactis cells entrapped in a caseinate/alginate phase separated system. Colloids and Surfaces B: Biointerfaces, 109, 266-272. -Léonard, L., Degraeve, P., Gharsallaoui, A., Saurel, R., & Oulahal, N. (2014). Design of biopolymeric matrices entrapping bioprotective lactic acid bacteria to control Listeria monocytogenes growth: Comparison of alginate and alginate-caseinate matrices entrapping Lactococcus lactis subsp. lactis cells. Food Control, 37, 200-209. -Liu, L., O'Conner, P., Cotter, P.D., Hill, C., Ross, R.P. (2008). Controlling Listeria monocytogenes in Cottage cheese through heterologous production of enterocin A by Lactococcus lactis. Journal of Applied Microbiology, 104, 1059-1066. -Maragkoudakis, P.A., Mountzouris, K.C., Psyrras, D., Cremonese, S., Fischer, J., Cantor, M.D., Tsakalidou, E. (2009) . Functional properties of novel protective lactic acid bacteria and application in raw chicken meat against Listeria monocytogenes and Salmonella Physical properties and antilisterial activity of bioactive films containing alginate/pectin composite microbeads with entrapped Lactococcus lactis subsp lactis. 
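The antilisterial comparisons above are made on counts expressed in log CFU.mL-1. As a hedged illustration only (function and numbers are assumptions, not code or data from the thesis), the arithmetic behind a statement such as "the reduction of the L. monocytogenes population was higher with encapsulated L. lactis" is a difference of log10 counts:

```python
# Illustrative only (not from the thesis): populations are reported in
# log CFU/mL, so the "reduction" of L. monocytogenes compared between free
# and encapsulated L. lactis is a difference of log10 counts.
import math

def log_reduction(control_cfu_per_ml: float, treated_cfu_per_ml: float) -> float:
    """log10(N_control) - log10(N_treated), in log CFU/mL."""
    return math.log10(control_cfu_per_ml) - math.log10(treated_cfu_per_ml)

# e.g. a drop from 1e8 to 1e5 CFU/mL corresponds to a 3-log reduction
```

In this convention, larger values mean a stronger antimicrobial effect.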
Table 1. Effect of the incorporation of lactic acid bacteria (Lactococcus lactis) entrapped in microcapsules (C) and microbeads (B) on film mechanical properties (Elongation E, Tensile Strength TS, Elastic Modulus EM), water vapor permeability (WVP), oxygen permeability (OP), moisture content and thickness of biopolymer films (HPMC and starch, ST) equilibrated at 5 °C and 75% relative humidity. Mean values and standard deviations.

Film     | E (%)        | TS (MPa)    | EM (MPa)    | WVP (g.µm.m-2.j-1.kPa-1) | OP (cc.m.Pa-1.s-1) x 10^7 | Moisture content (g water.g film-1) | Thickness (µm)
HPMC     | 57 (7) a     | 24 (4) a    | 524 (45) a  | 2.15 (0.11) a | 46 (3) a      | 0.158 (0.002) a | 159 (6) a
HPMC+C   | 41 (7) b     | 24 (3) a    | 561 (26) a  | 2.02 (0.11) a | 51 (8) a      | 0.162 (0.002) a | 210 (4) b
HPMC+B   | 58 (7) a     | 25.5 (3) a  | 473 (51) a  | 2.25 (0.13) a | 43 (5) a      | 0.153 (0.007) a | 153 (8) a
HPMC+C+B | 34 (6) bd    | 18.0 (4) a  | 447 (34) a  | 2.22 (0.16) a | 55 (7) a      | 0.154 (0.006) a | 205 (3) b
ST       | 3.3 (0.2) c  | 20 (2) a    | 962 (59) b  | 3.05 (0.11) b | 2.50 (0.14) b | 0.166 (0.016) b | 123 (7) c
ST+C     | 3.6 (0.3) c  | 12 (4) b    | 615 (100) c | 2.51 (0.11) c | 2.07 (0.06) c | 0.242 (0.006) c | 124 (4) c
ST+B     | 30.6 (1.8) d | 7.1 (0.5) c | 298 (58) d  | 3.20 (0.11) b | 2.3 (0.2) b   | 0.130 (0.002) d | 122 (4) c
ST+C+B   | 31 (2) d     | 6.0 (0.4) d | 280 (62) d  | 2.42 (0.12) c | 1.91 (0.13) c | 0.273 (0.008) e | 125 (7) c

Table 2. Lightness (L*), chroma (C*ab), hue (h*ab) and whiteness index (WI) of biopolymer films equilibrated at 5 °C and 75% relative humidity. Mean values and standard deviations.

Film     | L*           | C*ab        | h*ab          | WI
HPMC     | 81 (3) a     | 1.8 (0.6) a | 104 (3) a     | 73 (3) a
HPMC+C   | 79.1 (1.9) a | 2.3 (1.2) a | 99 (2) a      | 72 (2) a
HPMC+B   | 68.3 (0.6) b | 0.8 (0.4) a | 103.0 (1.5) a | 68.2 (0.6) b
HPMC+C+B | 63 (2) c     | 1.2 (0.2) a | 103 (2) a     | 63 (2) c
ST       | 85.7 (1.3) d | 7.3 (0.7) b | 103.2 (1.8) a | 83.9 (0.9) d
ST+C     | 73 (2) e     | 6.8 (1.9) b | 102.4 (1.3) a | 80 (3) d
ST+B     | 86 (3) d     | 5.0 (1.2) b | 99.3 (1.9) a  | 86 (2) d
ST+C+B   | 73 (4) e     | 5.2 (1.7) b | 101.8 (1.2) a | 82 (2) d

Encapsulation of L. lactis in alginate, combined with entrapment in starch or HPMC films, showed a high efficiency for producing active packaging materials. The mechanical, transport and transparency properties remained adapted to packaging applications despite the incorporation of capsules. Moreover, the developed films with microcapsules presented an interesting antilisterial activity. These systems must be considered as a possible solution for active packaging in the near future.

The objective of this study was to develop aqueous-core microcapsules, to analyze the physico-chemical properties of these microcapsules, and to evaluate their impact on the survival of bacteria encapsulated in exponential or stationary phase. In addition, the release and the antilisterial activity of the nisin produced were monitored over 10 days. The results showed that the composition of the liquid core of the microcapsules did not affect the stability of the alginate network, and that the capsules withstood the storage conditions well. The addition of xanthan gum in the core made it possible, first, to control the viscosity and to obtain a well-formed spherical core in the center of the microcapsules; but it also reinforced the structure of the capsules through the establishment of hydrogen bonds between xanthan gum and the hydroxyl groups of alginate. Microcapsules of L. lactis encapsulated in exponential phase in an alginate membrane and a core based on M17 enriched with 0.5% glucose and 0.2% xanthan gum gave the best results. These capsules were finally incorporated into starch and HPMC films, and an excellent anti-Listeria effect was demonstrated, which is promising for the preparation of active packagings. After some technology-transfer studies towards industry, the films enriched with microcapsules could be finalized for the preservation of moist, short-shelf-life foods. These new bioactive films, based on biopolymers (starch, HPMC) and enriched with L. lactis incorporated and stabilized in a system composed of alginate/xanthan gum for the core and alginate/pectin for the membrane, could then be used.

Nisin is an antimicrobial peptide produced by the strain Lactococcus lactis subsp. lactis, authorized for food applications by the Joint Expert Committee on Food Additives of the Food and Agriculture Organization of the United Nations and the World Health Organization (FAO/WHO). Nisin can be applied, for example, to food preservation, biopreservation, control of fermentation flora, and potentially as a clinical antimicrobial agent. The entrapment of nisin-producing bacteria in calcium alginate beads is thus a promising route to immobilize active cells and extend the shelf life of foods. The thesis work aimed at designing active biopolymer packagings containing bioprotective lactic acid bacteria (LAB) to control the growth of undesirable microorganisms in foods, in particular L. monocytogenes. The mechanical and chemical stability of the alginate beads was first improved, and the encapsulation efficiency was increased.
Résumé: Alginate/pectin (A/P) capsules were prepared as the first microspheres, by an extrusion technique. Nisin production by Lactococcus lactis subsp. lactis encapsulated in different physiological states (exponential phase, stationary phase) was studied. The results showed that the composite (A/P) beads had better properties than those formulated with pure alginate or pure pectin. The association of alginate and pectin induces a synergistic effect that improved the mechanical properties of the microbeads. The second part of the work concerned the development of liquid-core microcapsules with an alginate hydrogel membrane and a xanthan gum core. The results showed that these microcapsules containing L. lactis encapsulated during the exponential phase in an alginate matrix with a nutritive xanthan gum core gave the best results and exhibit an interesting anti-Listeria activity. These microbeads were finally applied to food preservation, and in particular to active food packaging. Films (HPMC, starch) were produced by entrapping active alginate/xanthan gum beads enriched with L. lactis in packaging films and applied to food preservation.

Design of microcapsules containing Lactococcus lactis subsp. lactis in alginate shell and xanthan gum with nutrients core

...and 1.8 log CFU.mg-1, respectively. The lack of nutrients can certainly explain these results.

a,b,c,d Different letters in the same column indicate significant differences among formulations (p < 0.05).
a,b,c,d,e Different letters in the same column indicate significant differences among formulations (p < 0.05).

Remerciements / Acknowledgements
The authors thank the European Commission for the Erasmus Mundus grant to Mrs Bekhit (Erasmus Mundus External Window "ELEMENT" Program).
Chapter II: Material and methods

Abstract

Alginate/pectin hydrogel microspheres were prepared by extrusion based on a vibrating technology to encapsulate bacteriocin-producing lactic acid bacteria. The effects of both the alginate/pectin (A/P) biopolymer ratio and the physiological state of Lactococcus lactis subsp. lactis (exponential phase, stationary phase) were examined for nisin release properties, L. lactis survival and bead physico-chemical properties. Results showed that A/P composites were more efficient at increasing bead properties than those formulated with pure alginate or pectin. The association of alginate and pectin induces a synergistic effect which improves microbead mechanical properties. FTIR spectroscopy confirms possible interactions between alginate and pectin during inter-penetrating network formation. The physiological state of the bacteria during the encapsulation process and the microbead composition (A/P ratio, enrichment of the internal medium with nutrients) were determining factors for both bacteria viability and bacteriocin release. Of the several matrices tested, A/P (75/25) with glucose-enriched M17 gave the best results when L. lactis was encapsulated in the exponential state.

Keywords: hydrogel microspheres, biopolymers, lactic acid bacteria, physico-chemical properties, antilisterial activity.

-Scannell, A. G. M., Hill, C., Ross, R. P., Marx, S., Hartmeier, W., & Arendt, E. K. (2000). Development of bioactive [...]
-Synytsya, A., Čopíková, J., Matějka, P., & Machovič, V. (2003). Fourier transform Raman and infrared spectroscopy of pectins. Carbohydrate Polymers, 54(1), 97-106.
-Tahiri, I., Desbiens, M., Kheadr, E., Lacroix, C., & Fliss, I. (2009). Comparison of different application strategies of divergicin M35 for inactivation of Listeria monocytogenes in cold-smoked wild salmon. Food Microbiology, 26(8), 783-793.
-Trias, R., Bañeras, L., Badosa, E., & Montesinos, E. (2008). Bioprotection of Golden Delicious apples and Iceberg lettuce against foodborne bacterial pathogens by lactic acid bacteria. International Journal of Food Microbiology, 123(1-2), 50-60.
-Walkenström, P., Kidman, S., Hermansson, A.-M., Rasmussen, P. B., & Hoegh, L. (2003). Microstructure and rheological behaviour of alginate/pectin mixed gels. Food Hydrocolloids, 17(5), 593-603.
-Zhang, J., Daubert, C. R., & Allen Foegeding, E. (2007). A proposed strain-hardening mechanism for alginate gels. Journal of Food Engineering, 80(1), 157-165.

Abstract

Aqueous-core microcapsules with a sodium alginate (SA) hydrogel membrane and a xanthan gum (XG) core were prepared using the ionotropic gelation method to encapsulate bacteriocin-producing lactic acid bacteria (LAB). In this study, LAB were immobilized in the microcapsule membrane. XG was applied to reinforce the SA microcapsules. The molecular interaction between SA and XG in the microcapsules was investigated using FTIR spectroscopy. The microcapsule morphology and mechanical properties were examined. The impact of an enrichment of the core with nutrients (M17 broth supplemented with 0.5% glucose vs physiological water) and of the physiological state of Lactococcus lactis during the encapsulation step (exponential vs stationary state) was studied on L. lactis survival and nisin release. Furthermore, the antimicrobial effectiveness of the best system to preserve bacterial survival and permit nisin release was studied against Listeria monocytogenes. No differences were observed in terms of mechanical properties among the studied systems. FTIR spectroscopy confirmed the establishment of possible hydrogen bonding between XG hydroxyl groups and SA carboxylate groups, which could modify microcapsule release properties. Microcapsules with L. lactis in the exponential state encapsulated in the SA membrane, with an aqueous core based on XG with nutrients, gave the best results. At 30 °C, a complete inhibition of L.
monocytogenes growth throughout the storage period was observed for these microcapsules. These microcapsules could be used for future applications in food preservation and particularly in food packaging. Novel bioactive films based on biopolymers and enriched with L. lactis in alginate/xanthan gum core-shell microcapsules could be designed.

The type and concentration of materials used, the particle size and porosity, and the type of microparticle (bead, capsule, composite, coating layer, etc.) were shown to affect the effectiveness of bacterial protection (Ding & Shah, 2009). Alginate has been widely used as a microencapsulation material as it is non-toxic, biocompatible, and cheap (Jen et al., 1996; Léonard et al., 2013; Léonard et al., 2014). Alginate consists of homopolymeric and heteropolymeric blocks alternating 1,4-linked β-D-mannuronic acid (M) and α-L-guluronic acid (G) residues, in which the G units form crosslinks with divalent ions to produce "egg-box" model gels (Pawar & Edgar, 2012). Studies have reported that alginate can form strong complexes with other natural polyelectrolytes such as pectin (also a polyuronate) by undergoing chain-chain association and forming hydrogels upon addition of divalent cations (e.g., Ca2+) (Fang et al.; Pillay & Fassihi, 1999a), improving the mechanical and chemical stability of alginate beads, and consequently improving their encapsulation effectiveness (Pillay & Fassihi, 1999b). The aim of the present paper was to evaluate how HPMC and corn starch films were affected by the incorporation of L.
lactis, free or encapsulated in alginate/pectin composite microbeads, through the analysis of different physical properties (water vapor barrier, oxygen permeability, mechanical and optical properties) as well as their antilisterial impact.

Results and discussion

Physico-chemical properties

Properties of films equilibrated at 5 °C and 75% RH are reported in Table 1. Globally, the properties of HPMC films were slightly affected by the incorporation of capsules and/or bacteria, while starch films were dramatically modified (huge increase of elongation properties, strong reduction of EM and TS). With entrapped bacteria, the properties of both films became similar, except for the oxygen barrier properties, which remained much lower in the case of the starch film. The WVP values of corn starch films were in the range of those reported by Greener and Fennema (1989). The slight differences can be attributed to minor changes in the experimental conditions.

Absorption bands between 1100 and 1200 cm-1 arise from ether (R-O-R) and cyclic C-C bonds in the ring structure of pectin molecules (Synytsya et al., 2003). The FTIR spectra of the preconditioned films in the presence and absence of A/P microbeads are shown in Fig. 2b. In general, films with and without A/P microbeads present similar features in the FTIR spectral regions. Starch film spectra showed characteristic bands at 931 and 1149 cm-1, which are associated with C-O bond stretching. The peaks at 1016 and 1077 cm-1 are characteristic of the C-O stretching of the anhydroglucose ring, and the peak at 1645 cm-1 is related to O-H stretching of water molecules linked to the starch structure. Finally, the band between 3100 and 3600 cm-1, due to (O-H) stretching, is followed by a band at 2929 cm-1 associated with C-H stretching in the glucose ring.
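Table 1 reports WVP in g.µm.m-2.j-1.kPa-1. As a hedged illustration only (the thesis protocol is not reproduced here, and the function and numbers are placeholders), values in that unit are commonly derived from a gravimetric cup test: the steady mass-gain rate is scaled by the film thickness and normalized by the exposed area and the water-vapor partial-pressure difference across the film.

```python
# Hedged illustration of a common gravimetric WVP calculation; not the
# measurement code of the thesis. Inputs: steady mass-gain rate (g/day),
# film thickness (um), exposed area (m^2), partial-pressure difference (kPa).

def wvp(mass_rate_g_per_day: float, thickness_um: float,
        area_m2: float, dp_kpa: float) -> float:
    """WVP in g.um.m-2.day-1.kPa-1, the unit used in Table 1."""
    return mass_rate_g_per_day * thickness_um / (area_m2 * dp_kpa)
```

Under this convention, a lower WVP means a better water-vapor barrier, which is how the HPMC and starch formulations in Table 1 can be ranked.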
The region below 800 cm-1 displayed complex vibrational modes due to the skeletal vibrations of the pyranose ring in the glucose unit (Kizil et al.). The incorporation of A/P microbeads in the starch film results in very few modifications. In fact, the absorption band at 3300 cm-1 showed a decreased intensity (Fig. 2c), and the band around 1000 cm-1 showed a narrower width in the presence of A/P microbeads. These spectral modifications could be related to a potential interaction between alginate (COO-) and starch (-OH) groups via hydrogen bonding (Swamy et al., 2008). Although the FTIR spectra of starch before and after functionalization with A/P microbeads show only slight differences, these differences were reproduced in all the tests made on different samples. The HPMC FTIR spectra with and without A/P microbead incorporation showed the same typical bands, with neither significantly different intensities nor evident shifts (Fig. 2b and d). The band situated at 3000-3750 cm-1, corresponding to hydroxyl group stretching (O-H), is followed by a small band at 3000-2800 cm-1 due to C-H stretching.

Chapter VI: General Conclusion
01754791
en
[ "spi.meca.mema" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01754791/file/Sgueglia_19580.pdf
Alessandro Sgueglia, Peter Schmollgruber, Nathalie Bartoli, Olivier Atinault, Emmanuel Benard, Joseph Morlier

Exploration and Sizing of a Large Passenger Aircraft with Distributed Ducted Electric Fans

In order to reduce CO2 emissions, a disruptive concept in aircraft propulsion has to be considered. As studied in the past years, hybrid distributed electric propulsion is a promising option. In this work the feasibility of a new concept aircraft using this technology has been studied. Two different energy sources have been used: fuel-based engines and batteries. The latter have been chosen because of their flexibility during operations and their promising improvements over the next years. The technological horizon considered in this study is 2035: thus some critical hypotheses have been made for electrical components, airframe and propulsion. Due to the uncertainty associated with these data, sensitivity analyses have been performed in order to assess the impact of technology variations. To evaluate the advantages of the proposed concept, a comparison with a conventional aircraft (EIS 2035), based on evolutions of today's technology (airframe, propulsion, aerodynamics), has been made.

I. Introduction

In the next decades, due to the cost of fuel and the increasing number of aircraft flying every day, the world of aviation will cope with more stringent environmental constraints and a traffic density increase. Both ACARE (Advisory Council for Aviation Research and Innovation in Europe) 1 and NASA 2 published their targets in terms of environmental impact for the next years. In Table 1 the noise, emission and energy consumption reductions according to ACARE for the next years are reported: the fuel and energy consumption have to be drastically reduced to meet the 2050 goals. To achieve these objectives, disruptive changes at the aircraft level have to be made.
Fostered by the progress made in the automotive industry, aeronautics found an interest in hybrid propulsion. An idea is to merge this concept with distributed propulsion, where the engines are distributed along the wing. As shown by Kirner 3 and Ko et al., 4 distributed propulsion increases performance in fuel consumption, noise, emissions and handling qualities. The resulting Distributed Electric Propulsion (DEP) technology has been applied to different aircraft configurations (such as the N+3 aircraft generation from NASA 5): results show a drag reduction (which leads to a lower fuel consumption) and also a better efficiency due to the aero-propulsive effects.

Nomenclature

This work presents the exploration and the sizing of a large passenger aircraft with distributed electric ducted fans, EIS 2035. The objective is to carry out a Multidisciplinary Design Analysis (MDA) in order to consider all the couplings between the disciplines: airframe, hybrid electric propulsion and aerodynamics. The aero-propulsive effects are also considered, in order to converge towards a disruptive concept. In the first part of the paper the proposed concept is described; then the propulsive chain architecture is presented, including a review of the key components and their models. The second part is dedicated to their integration in the in-house aircraft sizing code developed by ONERA and ISAE-Supaero, identified as FAST. 6 Then the design mission is presented, and the hypotheses for the 2035 technology horizon are discussed. Finally, the performance of the integrated design is presented, and the conclusions regarding the feasibility of such a vehicle are reported.

II. Electric hybrid aircraft concept

The definitive aircraft concept is shown in Fig. 1: for the modeling, OpenVSP, 7 a free tool for visualization and VLM computation, has been used. New components are added to the aircraft architecture:
• Turbogenerators, which are the ensemble of a fuel-burning engine and a converter device;
• Batteries (not shown in the figure), which provide electric power and are located in the cargo bay;
• Electric motors and ducted fans, in the nacelles on the wing upper surface;
• DC/DC and DC/AC converters (called respectively converter and inverter 8) in order to provide the current in the right mode and at the same voltage;
• Cables for the current transport, including a cooling system and protections.
Finally, the performance of the integrated design are presented, and the conclusions regarding the feasibility of such vehicle are reported. II. Electric hybrid aircraft concept The aircraft definitive concept is shown in Fig. 1: for the modelisation it has been used OpenVSP, 7 a free tool for visualization and VLM computation. New components are added to the aircraft architectures: • Turbogenerators, which are the ensemble of a fuel burning engine and a converter device; • Batteries (not shown in figure) which provide electric power and are located in the cargo; • Electric motor and ducted fan, in the nacelle on the wing upper surface; • DC/DC and DC/AC converter (called respectively converter and inverter [START_REF] Bradley | Subsonic Ultra Green Aircraft Research: Phase II-VolumeII-Hybrid Electric Design Exploration[END_REF] ) in order to provide the current in the right mode and at the same voltage; • Cables for the current transport, including a cooling system and protections. Detailed models of each component are described in the next section. The wing-body airframe is the usual "tube and wing" configuration, and no changes on that part have been done. For the engine positions, different choices are possible: 9 upper-wing trailing edge engines, lower-wing leading edge engines and imbedded engines. In this work they are located in the upper part of the wing, at the trailing edge. This allows some advantages in terms of blowing: from an internal project in the frame of the EU program CleanSky 2, 10 it has been estimated the 2D maximum lift coefficient in the zone affected by the engines varies from 4 to 5. For the results presented later it has been used the mean value of 4.5. This effect has three main advantages: • If the approach speed constraint is used for the wing sizing, the wing surface is reduced. • High-lift devices are no more needed for takeoff and landing, leading to a minor wing weight. • It is possible to have a shorter takeoff length. 
Also, in previous works 9 the engines are mounted near the tip, since it is in that zone that the stall begins and a higher C_L is needed. In this concept they are located on the inner part in order not to increase the structure at the tip: a twist has to be added in order to make the stall begin in the center part. The motors also provide some moment which partially balances the bending at the wing root: from an internal work at ONERA, it has been estimated that the impact of the engine position on the wing weight is of 5%. Another advantage of the DEP architecture is that the EM weight is reduced. In fact, the One Engine Inoperative (OEI) condition (which is assumed as the critical case for the design) is less stringent, as shown by Steiner et al.: 11 in case of an OEI condition, the supplementary power (and thus also thrust) required by the other engines is smaller; in particular, the total power of a single motor increases with the ratio N/(N-1), N being the number of engines. It is clear that, when the number of engines increases, the effect of the OEI condition becomes negligible, and the weight of each motor decreases. The aircraft is also sized in order to have all the EM working even if one of the energy sources is inoperative. Regarding the energy sources, the generators are located at the rear, on the fuselage, in order to reduce the pylon wetted area and the interferences with the wing. The batteries are instead located in the cargo zone, half of them ahead of the wing and the other half behind the wing. This choice has been made because the center of gravity is expected to lie in proximity of the wing, and with this disposition the batteries do not drastically affect its position. Also, due to the battery location (in the cargo bay), the maximum payload is reduced since only part of the freight volume is available for luggage. A T-tail configuration has been used for the empennage.
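The N/(N-1) scaling above can be made concrete with a short sketch (not the paper's actual sizing code): under the OEI rule, each motor is sized so that the remaining N-1 motors can still deliver the full required power.

```python
# Sketch of the OEI sizing rule discussed above (not the paper's actual code):
# with N motors and one inoperative, the remaining N - 1 must still deliver the
# full required power, so each motor is oversized by the factor N / (N - 1).

def motor_installed_power(p_total_required: float, n_motors: int) -> float:
    """Installed power per motor, in the same unit as p_total_required."""
    if n_motors < 2:
        raise ValueError("OEI sizing needs at least 2 motors")
    return (p_total_required / n_motors) * n_motors / (n_motors - 1)

# The oversizing penalty fades as N grows: with N = 2 each motor must carry
# the full required power; with N = 16 each motor carries only ~6.7% of it.
```

This is exactly why, as the text notes, the OEI effect becomes negligible for large N and the individual motor weight decreases.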
In this work it has been decided to size the aircraft in order to fly fully electric at least up to 3000 ft. The reason for this choice is that the mean atmospheric boundary layer height is about 1 km (it changes according to the atmospheric conditions), and in that region the convective effects create turbulence which mixes the air 12 (Fig. 2): when the emissions are released in this region, the quality of the air is decreased, whereas at higher altitudes this effect is no longer as relevant as it is in the boundary layer region.

III. Propulsive chain architecture

The generic scheme of the propulsive chain is shown in Fig. 3: it refers to only one half-wing, and the numbers of batteries, generators, EM and fans are not specified since they are variables to be optimized. In this kind of architecture, batteries are coupled with a turbogenerator in order to supply power: they are connected through an electrical node (called bus in this work), thus they can be defined as a serial architecture. Converters are placed after these components in order to bring the current to the right transport voltage. The total power is then transferred to the inverters, which convert DC to AC current, as required by the electric motors. The EM work in parallel; each of them is connected to a ducted fan, which generates thrust. Since all the cables from batteries and generators are connected to the bus, from which the current is then transported to the EM, the motors are always operative, even with one energy source inoperative. The power available at each step of the chain is also specified in Fig. 3: η is the efficiency of a generic component, P_V the power density, M the Mach number, z the altitude, N the number of electric motors, T the thrust and V∞ the velocity. In the following sections, each component is detailed.
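The power bookkeeping of Fig. 3 can be sketched as a cascade of stage efficiencies: each conversion or transport stage multiplies the available power by its efficiency η. The stage values used below are placeholders, not figures from the paper.

```python
# Sketch of the chain of Fig. 3: source power times the product of the stage
# efficiencies gives the power reaching the fans. Efficiencies are assumed
# placeholders, not values from the paper.
from functools import reduce

def power_after_chain(p_source_w: float, efficiencies) -> float:
    """Power left after a cascade of stages with the given efficiencies."""
    return p_source_w * reduce(lambda acc, eta: acc * eta, efficiencies, 1.0)

# e.g. converter -> cables -> inverter -> electric motor
chain = [0.98, 0.99, 0.98, 0.96]
```

With a cascade like `chain`, roughly 9% of the source power would be lost before the fans, which illustrates why each component efficiency matters in the sizing.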
Power is controlled using two different power rates, one for the batteries and another one for the generators. It is then possible to write:

P_tot = δ_batt P_batt,2 + δ_gen P_gen,2    (1)

having defined the battery and generator power rates as below:

δ_batt = P_batt / P_batt,max    (2)

δ_gen = P_gen / P_gen,max    (3)

where P_batt,2 and P_gen,2 are defined from Fig. 3. In previous works on hybrid architectures, 13, 14 a hybrid factor has been defined in order to control how much of the total power had to be supplied by each source. In this work, there is no factor splitting the power required between the two sources: with the law presented in Eq. (1) it is possible to have batteries and generators supply the maximum of their available power at the same time. The advantage offered by this approach is that at takeoff or climb (critical conditions in terms of power required), a failure of one energy source can be easily sustained: in case one of them is inoperative, it is possible to ask more power from the second source. Finally, the power required by the secondary systems (such as the environmental control system, the ice protection system, lighting and so on) has to be considered too: in this work it has been decided to use the estimation done by Seresinhe and Lawson 15 for a More Electric Aircraft concept, similar to the A320.

A. Gas turbine generator

One of the two power sources is the gas turbine generator: it is composed of a turboshaft engine connected to a generator which converts shaft power to electrical power. The turboshaft has been modeled using GSP (Gas turbine Simulation Program), a software developed at NLR. 16 The scheme is shown in Fig. 4.
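A minimal sketch of the control law of Eqs. (1)-(3), treating P_batt,2 and P_gen,2 as the maximum powers each source can put on the bus; the numeric values are illustrative placeholders, not design data from the paper.

```python
# Sketch of Eq. (1): P_tot = delta_batt * P_batt,2 + delta_gen * P_gen,2,
# with the two rates acting as independent throttles in [0, 1]. The power
# figures used in examples are placeholders, not the paper's design values.

def total_power(delta_batt: float, delta_gen: float,
                p_batt_max: float, p_gen_max: float) -> float:
    """Total power delivered to the bus for the given source rates."""
    for rate in (delta_batt, delta_gen):
        if not 0.0 <= rate <= 1.0:
            raise ValueError("power rates must lie in [0, 1]")
    return delta_batt * p_batt_max + delta_gen * p_gen_max

# Failure handling as described in the text: zero one rate and raise the
# other, e.g. total_power(0.0, 1.0, 3e6, 5e6) draws power from the
# generators alone.
```

Because the two rates are independent, both sources can sit at their maximum simultaneously, which is the stated advantage over a single hybrid-factor split.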
A single compressor has been used, while there are two turbines after the combustion chamber: the first is the high speed turbine, directly linked to the compressor, while the second is the low speed turbine. Since the engine has to produce shaft power, the main outputs are the power and the Power Specific Fuel Consumption (PSFC), which depend on the altitude and the Mach number. The design conditions are reported in Table 2. The turboshaft engine has also been sized in order to supply enough power in case of failure of one energy source in cruise. The gas turbine is not included in the sizing process: once the curves of power and PSFC have been obtained from GSP, they are provided to the software FAST and interpolated to get the values of interest. An estimation of the weight is given in the work of Burguburu et al.: [START_REF] Burguburu | Turboshaft Engine Presedesign and performance Assessment[END_REF] it is based on empirical data from a large number of existing turboshaft engines. As previously said, the converter device is mounted on the low speed turbine shaft. The only parameter used for sizing is the power to mass ratio P m , defined as the power converted per unit mass. Starting from this parameter it is possible to estimate the weight: m gen = P gen / P m (4) where P gen is the power delivered to the device at the design point. B. Battery In the proposed architecture, the battery is a vital component: it is a main source of power and it introduces significant weight into the entire system. Different types of battery are available (such as Li-Ion or Li-S 18 ): in this work it has been decided to use a Li-Ion battery type.
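Returning to the gas turbine: the interpolation of the GSP power map described above, together with the mass estimation of Eq. (4), can be sketched as follows. The grid values and the power to mass ratio are placeholders, not actual GSP outputs, and the bilinear scheme is only one possible interpolation choice.

```python
import bisect

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation of table[i][j] on the grid (xs, ys)."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return (table[i][j] * (1 - tx) * (1 - ty) + table[i + 1][j] * tx * (1 - ty)
            + table[i][j + 1] * (1 - tx) * ty + table[i + 1][j + 1] * tx * ty)

altitudes = [0.0, 5000.0, 10000.0]   # m
machs = [0.0, 0.4, 0.7]
power_map = [                        # fabricated shaft-power values, W
    [8.0e6, 8.2e6, 8.5e6],
    [6.5e6, 6.7e6, 7.0e6],
    [5.0e6, 5.2e6, 5.5e6],
]

# Available shaft power at an off-grid flight condition.
p_avail = bilinear(altitudes, machs, power_map, 7500.0, 0.55)

# Eq. (4): generator mass, with an assumed P_m of 10 kW/kg.
m_gen = p_avail / 10.0e3
```

The same lookup would be repeated for the PSFC map to estimate the fuel flow.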
A battery is fully defined by a set of five parameters: [START_REF] Pornet | Methodology for Sizing and Performance Assessment of Hybrid Energy Aircraft[END_REF][START_REF] Lowry | Electric Vehicle Technology Explained[END_REF][START_REF] Cinar | Development of a parametric Power Generatin and Distribution Subsystem Models at Conceptual Aircraft Design Stage[END_REF] • Specific energy density E m , which represents how much energy can be stored in a battery per unit mass (in W h kg -1 ); • Energy density E V , which represents how much energy can be stored in a battery per unit volume (in W h L -1 ); • Specific power density P m , which represents how much power can be delivered per unit mass (in kW kg -1 ); • Power density P V , which represents how much power can be delivered per unit volume (in kW L -1 ); • Density ρ, which represents the mass per unit volume (in kg m -3 ). These variables are not independent of each other: only three of them are necessary to compute the remaining ones. In this work, a battery is defined by its specific energy density, specific power density and density. The missing values are calculated as follows: E V = ρ E m (5) P V = ρ P m (6) The energy stored and the maximum power which can be delivered by the battery are then computed: E batt = E m m batt = E V V batt (7) P max,batt = P m m batt = P V V batt (8) where m batt is the battery mass and V batt the battery volume. For monitoring the state of the battery, the state of charge (SoC) has to be defined: it is the ratio between the remaining energy E at a certain time t and the total stored energy E batt . The complement of the SoC is defined as the Depth of Discharge (DoD). SoC = E(t) / E batt = 1 - E cons (t) / E batt (9) DoD = E cons (t) / E batt = 1 - SoC (10) Due to safety reasons, the SoC cannot drop below a certain limit, which in general depends on the battery type.
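The battery relations of Eqs. (5)-(10) can be gathered in a small class; this is a sketch, not the actual Battery.py module of FAST, and the numeric values at the bottom are illustrative Li-Ion figures.

```python
class Battery:
    """Battery defined by specific energy density, specific power density
    and density; the remaining quantities follow from Eqs. (5)-(8)."""

    def __init__(self, e_m, p_m, rho, mass):
        self.e_m = e_m            # Wh/kg
        self.p_m = p_m            # kW/kg
        self.rho = rho            # kg/m^3
        self.mass = mass          # kg
        self.e_v = rho * e_m      # Eq. (5), Wh/m^3
        self.p_v = rho * p_m      # Eq. (6), kW/m^3
        self.e_batt = e_m * mass  # Eq. (7), Wh
        self.p_max = p_m * mass   # Eq. (8), kW
        self.e_cons = 0.0         # energy drawn so far, Wh

    def draw(self, energy):
        """Consume some energy and return the resulting SoC, Eq. (9)."""
        self.e_cons += energy
        return self.soc

    @property
    def soc(self):
        return 1.0 - self.e_cons / self.e_batt   # Eq. (9)

    @property
    def dod(self):
        return 1.0 - self.soc                    # Eq. (10)

# Illustrative numbers: 500 Wh/kg cells, 2 kW/kg, 2700 kg/m^3, 1000 kg pack.
batt = Battery(e_m=500.0, p_m=2.0, rho=2700.0, mass=1000.0)
batt.draw(300_000.0)          # use 300 kWh out of the 500 kWh stored
assert batt.soc >= 0.2        # Li-Ion safety margin, Eq. (11)
```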
For a Li-Ion battery the minimum limit for the SoC is 20%; therefore the following constraint will be used in the sizing process: SoC final = 1 - DoD final ≥ 0.2 (11) C. Electric motor Electric motors are the other main components of the hybrid propulsion: they convert electrical power to mechanical power. Their high reliability allows them to work at very high efficiency; furthermore, as opposed to a traditional combustion engine, their efficiency is independent of altitude, which represents their main advantage. [START_REF] Lowry | Electric Vehicle Technology Explained[END_REF] The performance of these electric motors is determined by their torque and rotational speed characteristics. In this work it has been decided to use AC motors, since they are lighter than DC ones. Electric motors also have a very high efficiency (about 0.95); the inefficiencies can be caused by various factors of different types [START_REF] Lowry | Electric Vehicle Technology Explained[END_REF] (a complete analysis of them is beyond the scope of this work, in which only the total efficiency is considered). The main sizing parameter for the electric motors is the power to mass ratio, defined as the power delivered per unit mass. Once the maximum power required by the electric motor is known, it is possible to estimate its weight: m EM = P max,EM / P m (12) In subsequent steps the rotational speed and the torque are computed according to the fan requirements, and the motor is then fully defined. D. DC/DC and DC/AC transformers In order to convert current within the energy chain, converters and inverters are used. The performance of these devices depends on their efficiency, which is around 0.9.
Since the architectures of inverter and converter are similar, it is possible to compute their total weight directly with the equation m IC = (P inverter N EM + P converter N gen + P converter N batt) / P m (13) where P m is the power to mass ratio, N EM the number of electric motors, N gen the number of generators and N batt the number of batteries. E. Cables The cables have to transport the current from one device to another within the hybrid architecture. They are sized in order to carry a certain current, which must stay below the maximum allowed threshold. The current, and therefore the sizing, depends on the voltage used for the transport. First the current which flows through a cable is computed as i = P / ∆V (14) Then a check has to be done in order to make sure this value is lower than the maximum current. If it is not, more cables have to be installed; their number is computed by dividing the current by the maximum one: N cable = [i / i max] (15) where the square brackets represent the integer part of i/i max . Finally, according to the EM, generator and battery positions, it is possible to estimate the cable length and thus the weight: m cable = N cable (m/L) L cable (16) where m/L is the cable linear density. Installation and the Health Monitoring System have to be included in the weight calculation: preliminary works at ONERA 10 show a weight increase of 30% for the installation and of 5% for the HMS. Typical values for the cable parameters are reported in Table 3. 21 Table 3. Values used for the cable sizing i max 360 A ∆V 2160 V m/L 1.0 kg m -1 F. Cooling system All the components have a finite efficiency: this means that not all the power generated by batteries or generators is converted into electrical power; part of it is converted into heat. The cooling system consists of two different devices: heat exchangers and an air cooling system. The first are devices which surround the cables and artificially dissipate the heat.
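The cable sizing of Eqs. (14)-(16) above, including the installation and HMS margins, can be sketched as follows. Note one hedged choice: the number of cables is rounded up here so that each cable stays below i_max, and the run length is an assumed value.

```python
import math

I_MAX = 360.0       # A, Table 3
DELTA_V = 2160.0    # V, Table 3
M_PER_L = 1.0       # kg/m, Table 3

def cable_mass(power, length):
    """Mass of a cable run carrying `power` (W) over `length` (m)."""
    i = power / DELTA_V                      # Eq. (14)
    n_cable = max(1, math.ceil(i / I_MAX))   # Eq. (15), rounded up
    mass = n_cable * M_PER_L * length        # Eq. (16)
    return mass * 1.30 * 1.05                # +30% installation, +5% HMS

# Example: 2 MW transported over an assumed 15 m run.
m = cable_mass(2.0e6, 15.0)   # 926 A -> 3 cables -> 61.4 kg
```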
The amount of power to dissipate is: P diss = (1 - η batt) P batt,max N batt + (1 - η gen) P gen,max N gen + (1 - η EM) P EM,max N EM (17) The heat exchangers introduce a mass penalty: in the framework of an internal project at ONERA, the penalty has been estimated at 1 t. 10 This value is also based on the work of Anton. [START_REF] Anton | High-output Motor Technology for Hybrid-Electric Aircraft[END_REF] The air cooling system is used instead to circulate cold air through the system: it consists of air inlets placed on the fuselage. It does not add weight but introduces a penalty on the drag coefficient: [START_REF] Hoerner | Fluid-Dynamic Drag -Theoretical, Experimental and Statistical Information[END_REF] in the same internal project, the impact has been estimated to be 5% on the C D . The penalties due to the cooling system are summed up in Table 4. Table 4. Penalties due to the cooling system considered in this work (estimation from an internal work at ONERA) Mass +1 t C D +5 % G. Fan The ducted fans are the last devices in the energy chain: they are directly connected to the electric motors. The design point for the preliminary sizing has been chosen as the beginning of the cruise. As the fans are directly connected to the electric motors, the torque and rotational speed have to be the same for both: if the motor torque is too small, a resizing of the fan has to be carried out, or a gearbox must be added. In Fig. 5 the scheme of a ducted electric fan is shown, while in Fig. 6 a 3D rendering is given, where the elements are drawn separately in order to show the architecture. Due to technological limits, there is a minimum for the fan diameter: if it is too small, it is not possible to design the fan. In order to avoid this situation, the operating Mach number should not exceed the value of 0.7. In Appendix B the fan sizing, based on isentropic equations, is fully described.
Since the air passes through the fan, the wetted area of the duct is relevant for the aerodynamic calculations. It is computed considering the total area of the external duct, the disk created by the actuator, the central duct and the total area of the electric motor. The FAST code has been modified in order to also cover the sizing of the hybrid architecture. New modules have been added in the new category "HybridDEP"; a brief description follows: • Battery.py: this module contains the battery model and the functions used for computing the actual SoC, the weight, the volume, the energy and the maximum power available. The battery sizing is also included. • Cable.py: this module contains the definition of the cable and the functions used for computing the maximum current, the diameter and the weight. • DuctedFan.py: this module contains the functions used for sizing the ducted fan and computing the power required for the condition of interest. • ElectricMotor.py: this module contains the definition of the electric motor and the function used for its sizing. • HybridEngine.py: this is the main propulsion module, since it calls the components' modules for computing both the power and the thrust (as in Fig. 3) according to the actual requirement during the flight phase, and the PSFC to estimate the fuel consumption. The standard mass breakdown module, based on the French norm AIR 2001/D, 24 has also been modified. Since it considers only a classical "tube and wing" configuration, there are no entries for the hybrid architecture, such as batteries or cables; thus five new elements have been added in the propulsion category. The detailed structure is presented in Appendix A (Table 16). Finally, two new sections have been added to the .xml input file: one contains all the parameters for the hybrid distributed electric propulsion, while the other contains an estimation of the secondary systems power. The FAST workflow is presented in Fig.
7: since, from a methodological point of view, it can be considered as a MDA, an eXtended Design Structure Matrix (xDSM) scheme [START_REF] Lambe | Extensions to the Design Structure Matrix for the Description of Multidiplinary Design, Analysis, and Optimization Processes[END_REF] has been used to describe the main process. Under this format, each rectangular box represents an analysis (e.g. a function or computational code). The input variables of an analysis are placed vertically while its outputs are placed horizontally. Thick gray lines represent data dependencies whereas thin black lines represent process connections. The order of execution is established by the component number. Finally, the superscript notation defined by Lambe et al. [START_REF] Lambe | Extensions to the Design Structure Matrix for the Description of Multidiplinary Design, Analysis, and Optimization Processes[END_REF] has been used. Algorithm 1 details the different steps, based on the input given by the works of Pornet et al. [START_REF] Pornet | Methodology for Sizing and Performance Assessment of Hybrid Energy Aircraft[END_REF] and Cinar et al., [START_REF] Cinar | Sizing, Integration and Performance Evaluation of Hybrid Electric Propulsion Subsystem Architectures[END_REF] which describe a sizing process for electric aircraft. With respect to the original version, a new analysis is added at step 2; all the other blocks have been indirectly modified due to the presence of the new propulsive architecture. 2: Battery sizing. Batteries are sized with respect to two different criteria: the power at takeoff and the energy consumed; the latter is divided by 0.8 in order to account for the 20% safety margin on the SoC. Using Eq. (7) and Eq. (8), the battery volume is computed for each criterion, then the maximum value is taken. Finally, using the same equations, the available power and energy are defined. At the first iteration the initial value of the volume from step 0 is used, since there is no information yet about the energy consumption.
3: Wing sizing. The wing area is sized with respect to the fuel capacity and the approach speed. As for the battery, at the first iteration no wing sizing is performed, since there is no information about the fuel consumption. 4: Compute the initial geometry. 5: Resize the geometry in order to match the center of gravity and stability constraints. 6: Aerodynamic calculation. 7: Mass breakdown calculation. For the DEP components, the weight estimation is based on the values from the previous loop. 8: Design mission simulation. The mission includes: takeoff, initial climb (up to 1500 ft), climb to cruise altitude, cruise, descent, an alternate flight of 200 NM, 45 minutes of holding, landing and taxi in. For the cruise two approaches are possible (step climb and cruise climb); more details are provided in section V. For the Hybrid-Electric concept, the balance equation is written in terms of power instead of thrust; at each step the code computes the fuel and energy consumption and updates the aircraft weight and the battery SoC. 9: Update the MTOW. 10: Check if the convergence criterion is satisfied; if not, proceed to the next iteration. Convergence is reached when the relative difference between the Operating Weight Empty (OWE) computed at step 7 and step 8 is less than 0.05%. If this condition is satisfied, the code checks that the mission fuel is lower than the maximum fuel that can be stored, and that the battery SoC is greater than 20%: if these conditions are fulfilled, the sizing loop is over, otherwise it proceeds to the next iteration. until 10 → 1: MDA has converged V. Design mission and sizing parameters A. Design mission definition In the FAST code, the design mission is made of two blocks: the first one represents the mission itself, and the second one is used for computing the reserves, according to the certification rules.
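The convergence logic of steps 9-10 above can be sketched as a fixed-point loop. The inner functions below are stand-ins for the real FAST analyses (replaced here by a toy linear model so that the loop converges); only the structure and the 0.05% OWE criterion reflect the process described in the text.

```python
def size_aircraft(mtow_guess, max_iter=50, tol=5e-4):
    """Toy version of the sizing loop: iterate mass breakdown and mission
    simulation until the relative OWE difference is below 0.05%."""
    payload = 15000.0                                 # kg, fabricated

    def run_mass_breakdown(mtow):
        # Stand-in for steps 2-7: OWE as a fabricated fraction of MTOW.
        return 0.55 * mtow

    def fly_design_mission(mtow):
        # Stand-in for step 8: fabricated fuel law; SoC assumed above 20%.
        fuel = 0.12 * mtow + 2000.0
        return mtow - fuel - payload, fuel, 0.25      # (OWE, fuel, SoC)

    mtow = mtow_guess
    for _ in range(max_iter):
        owe_struct = run_mass_breakdown(mtow)             # step 7
        owe_mission, fuel, soc = fly_design_mission(mtow)  # step 8
        if abs(owe_struct - owe_mission) / owe_struct < tol:
            assert soc >= 0.20                             # SoC check, Eq. (11)
            return mtow, fuel
        mtow = owe_struct + fuel + payload                 # step 9: update MTOW
    raise RuntimeError("sizing loop did not converge")
```

With these toy laws the loop contracts toward a unique MTOW; in FAST each stand-in is of course a full analysis module.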
[START_REF] Schmollgruber | Use of a Certification Constraints Module for Aircraft Design Activities[END_REF] In particular, the reserve fuel is computed considering an alternate flight of 200 NM and 45 minutes of holding. For the key segment of the mission (the cruise), two different approaches can be selected: the step climb mission and the cruise climb mission. [START_REF] Bradley | Subsonic Ultra Green Aircraft Research: Phase II-VolumeII-Hybrid Electric Design Exploration[END_REF] In the first case the cruise starts at the optimal altitude (computed by the code), which is kept constant until the code finds it more efficient to climb to a higher level with a step climb of 2000 ft. In the second case the aircraft always flies at the point of maximum efficiency and at the same Mach number: to keep these conditions the altitude is increased at each time step. In terms of computational cost, the cruise climb option is faster than the step climb, since the code does not check at each iteration whether it is convenient to perform the step climb or not. (The sizing loop of Algorithm 1 starts from the initial guesses L(0), MTOW(0), MLW(0), MZFW(0) and S(0).) In order to assess the difference between the two approaches, the case of a cruise climb has been run using the TLAR of the CeRAS aircraft 27 (2750 NM of range for 150 passengers). The results have been compared with those reported by Schmollgruber et al.: 6 the differences shown in Table 5 are negligible. Thus, in the sizing loops presented in the remainder of this paper, the cruise climb option is always used. Since a hybrid propulsion system is used, the degree of hybridization over the entire mission has to be defined: recalling Eq. (1), the battery power rate defines the use of the battery for each segment. Two cases are possible: the power is not balanced (e.g. for takeoff and climb) or it is balanced (e.g. in cruise).
In the first case the battery and generator power rates are given as input, while in the second case only the percentage of power required from the battery is given as input, and the two power rates are then computed. Batteries are never used in cruise, since the energy consumption would lead to a weight increase that is not affordable for the aircraft. In order to make sure the batteries are used in the most efficient way, the SoC at the end of the mission has to be 20%: if at the end of the sizing the SoC is greater than this value, the degree of hybridization is manually changed until the final SoC is 20%. B. Sizing parameters For the sizing, a certain number of TLAR have to be defined in the .xml input file. In Table 6 the design parameters for the hybrid aircraft are reported: the number of passengers is the same as for an A320-type aircraft (150); the range varies from 800 to 1600 NM, while the Mach number is 0.7, lower than that of a traditional aircraft. As said, the value of 0.7 has been chosen in order to avoid a fan diameter that would be too small, as there is a limit due to the technology level. Regarding the propulsive architecture, 2 generators, 4 batteries and 40 engines have been considered; finally, the minimum power required at takeoff is fixed at 28 MW. After having defined the TLAR, the parameters for the electrical components have to be chosen.
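The balanced case described above can be sketched in code: given the required power and the battery share, the two rates of Eqs. (2)-(3) are recovered. The function name and the numeric values are illustrative, not taken from FAST.

```python
def balanced_rates(p_req, f_batt, p_batt_max, p_gen_max):
    """Power rates when the split is balanced: the battery supplies the
    fraction f_batt of p_req and the generator supplies the rest."""
    delta_batt = f_batt * p_req / p_batt_max         # Eq. (2)
    delta_gen = (1.0 - f_batt) * p_req / p_gen_max   # Eq. (3)
    assert delta_batt <= 1.0 and delta_gen <= 1.0, "source overloaded"
    return delta_batt, delta_gen

# Cruise-like example: 12 MW required, 0% from the battery (as in the text).
db, dg = balanced_rates(12.0e6, 0.0, 10.0e6, 14.0e6)
```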
As already mentioned, the focus is on the 2035 horizon: in the literature there are different values for the chosen technological horizon (see for example the works of Lowry and Larminie, [START_REF] Lowry | Electric Vehicle Technology Explained[END_REF] Bradley and Droney, [START_REF] Bradley | Subsonic Ultra Green Aircraft Research: Phase II-VolumeII-Hybrid Electric Design Exploration[END_REF] Belleville, [START_REF] Belleville | Simple Hybrid Propulsion Model for Hybrid Aircraft Design Space Exploration[END_REF] Friedrich et al., [START_REF] Friedrich | Hybrid-Electric Propulsion for Aircraft[END_REF] Delhaye, [START_REF] Delhaye | Electrical Technologies for Aviation of the Future, Airbus, 2015 32 HASTECS project: programme de recherche aéronautique européen mené par le Laplace[END_REF] as well as the HASTECS project 32 and the estimation of the Fraunhofer Institute 18 ). The data found in the literature differ, leading to uncertainty: after an internal discussion at ONERA and ISAE-Supaero, the technology assumptions reported in Table 7 have been defined. However, due to the aforementioned uncertainty in defining the technological horizon, sensitivity analyses will be shown later, with variations of the parameters in order to assess their effect. Finally, due to the development of new materials, a reduction in weight has to be considered; the values used are reported in Table 8. The wing weight reduction is valid only for a conventional aircraft: it is not possible to use composites because of the level of current that flows in the cables inside the wing. Thus, for the hybrid concept, no wing weight reduction is considered. In the next section, where the results are presented, the hybrid aircraft is compared with a conventional aircraft: the latter has the same TLAR reported in Table 6, the weight reduction reported in Table 8, an assumed maximum aerodynamic efficiency of 19, and an engine model based on the CeRAS engine, 27 with a SFC reduction of 20%. VI.
Preliminary results for the Hybrid-Electric Aircraft concept As stated in the previous section (Table 6), the range is not fixed: it is a variable to be explored in order to find the break-even point below which the hybrid concept is advantageous with respect to a traditional aircraft. The first parametric study shows the fuel consumption as a function of the range: the result is presented in Fig. 8. For both configurations (conventional and hybrid) the fuel consumption increases with the range, but it is possible to note a value (about 1200 NM) for which the two configurations have the same fuel consumption. Below that value the hybrid configuration is advantageous with respect to the traditional one. This effect is due to the battery sizing: below the break-even value, the sizing criterion is the power requirement at takeoff (28 MW), which means that the available energy is the same. When the range is decreased, the MTOW decreases too, and this leads to a final SoC greater than 0.20: it is then possible to change the degree of hybridization and save more fuel. On the contrary, when the range is greater than 1200 NM, the energy requirement becomes the driving criterion: the batteries are resized and the weight increase makes the hybrid architecture worse than the traditional one. For the rest of the work presented here, the design range considered is 1200 NM, with the mission hybridization reported in Table 9: since the battery is sized according to the power requirements, there is more energy than needed for a mission fully electric up to 3000 ft; for this reason the entire climb segment is flown fully electric, with a battery power rate of 70%. Table 10 shows the comparison between the two configurations. In order to have a unique parameter for comparison, the Payload Fuel Energy Efficiency (PFEE) has been used as a figure of merit.
[START_REF] Hileman | Payload Fuel Energy Efficiency as a Metric for Aviation Environmental Performance[END_REF] This parameter is defined as the payload times the range divided by the energy consumed: PFEE = (PL)(Range) / E cons (18) The PFEE has been used since it combines the payload carried over a certain range with the energy consumed for the mission. The PFEE is similar for both configurations, which confirms that, at the chosen range, the hybrid and the traditional aircraft are comparable (about 98 kg km MJ -1 ), but the former produces no emissions close to the ground. The OEI condition is already included in the FAST calculation, but it is not critical for the design. The cases in which one generator or two batteries are inoperative have then been considered as additional failure cases, under the hypothesis that the failure occurs during takeoff. As explained in section II and section III, the aircraft is designed so as to keep all the EM operative even if one energy source is inoperative. The fuel breakdown for these cases is reported in Table 11. In case one generator is inoperative, there are no differences in the takeoff and climb phases, since they are fully electric; it is then still possible to complete the cruise, even if the fuel consumption is higher, since more power is required from a single generator and the PSFC increases. For the reserve phase, the aircraft is not able to climb again for an alternate flight, as the power requirement for this segment is higher than the maximum power of one generator, so only 45 minutes of holding have been considered. In case two batteries are out, instead, it is not possible to have a fully electric segment and the help of the generators is required in each phase. No great differences appear in the reserve calculation, since in that phase only the generators are used in the baseline as well, but the fuel consumption increases for the design flight.
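Eq. (18) above in code form; the payload, range and energy values are illustrative, chosen only to show the order of magnitude of the PFEE figure quoted in the text.

```python
def pfee(payload_kg, range_km, energy_mj):
    """Payload Fuel Energy Efficiency, Eq. (18), in kg km / MJ."""
    return payload_kg * range_km / energy_mj

# Illustrative values: 15 t payload over 1200 NM (~2222 km), 340 GJ consumed.
value = pfee(15_000.0, 2222.0, 340_000.0)   # ~98 kg km/MJ
```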
This study has been performed in order to understand the behavior in case of failure, but it does not yet consider the possible certification requirements (i.e. that if one source is inoperative the aircraft has to climb to 500 ft and then land again); a detailed study will have to be done in the future. VII. Exploration of the design space The technology data for 2035 are affected by uncertainty. In order to assess the effects of a different technological level on the feasibility of the proposed concept, an exploration of the design space has been performed in this section. Variations of the battery, generator, electric motor and gas turbine technologies have been considered, as well as the effects of the number of engines (to assess the DEP advantages) and of the maximum 2D lift coefficient. Table 12 reports the minimum and maximum values for the parameters of the hybrid chain components. Some assumptions have been made: • The TLAR used are the same as in the previous section (Table 6). • The effect of each component has been studied separately, changing one component's technology while keeping all the others constant and equal to the baseline (Table 7, also reported in Table 12 for the sake of clarity). • The mission hybridization is the same as for the baseline (Table 9): if at the end of a simulation the SoC is greater than 0.20, as required by Eq. (11), the hybridization has been changed in order to use all the available battery energy. Such changes are always reported. For all the studies three key parameters have been considered as outputs: the MTOW, the wing area S w and the fuel consumption FC. They have been chosen since they are the most important parameters in a design process. The results are presented in Fig. 10: each column represents the impact of one component's technology variation on the key parameters. The same scale has been used, in order to better compare the effect of a variation on each output (MTOW, S w and FC).
In the next sections the impact of each parameter is described separately; for the sake of clarity, all the graphs are reported in Appendix C (from Fig. 14 to Fig. 19), both in the common and in the real scales. A. Impact of battery technology variation In this section the variation of the battery technology in the range defined in Table 12 has been studied; the results are shown in the first column of Fig. 10. The battery technology affects all the parameters considered: between the minimum and the maximum value, the MTOW is reduced by about 20%, the wing area by about 15% and the fuel consumption by about 18%. It is possible to note that between the first and the second point (from an energy density of 350 W h kg -1 to 500 W h kg -1 ) the MTOW is reduced more than in the second segment: this happens because, for low values of E m,batt , the batteries are resized according to the energy requirement, leading to a divergence in the MTOW. In the last case (E m,batt = 700 W h kg -1 ), instead, the sizing criterion is the power at takeoff, and since the MTOW is reduced by 8% with respect to the baseline, there is more energy available in the batteries and the degree of hybridization for the alternate climb is changed with respect to Table 9: -δ gen al,climb = 0.0 -δ batt al,climb = 0.65 Thus, for the last point the alternate climb is also fully electric, leading to a major gain in fuel consumption. B. Impact of generator technology variation The generator technology is varied in the range identified in Table 12. The minimum and maximum values correspond to a variation of ±50% with respect to the design value. The results correspond to the second column of Fig. 10. The effects on the MTOW, wing area and fuel consumption are smaller than those of the battery technology: the MTOW varies by about 4%, while the wing area is almost constant.
The major effect is on the FC (about 7%): when the weight of the generator is decreased, the nacelle is smaller, and thus there is a small gain in efficiency, which affects the FC. C. Impact of electric motor technology variation The variation of the electric motor technology has then been studied, within the range presented in Table 12: as for the generator, the minimum and maximum values of the power to mass ratio have been defined considering a variation of ±50% with respect to its base value. The results are shown in the third column of Fig. 10: on both the MTOW and the fuel consumption there is a gain of about 7%, while the effect on the wing area is not relevant. The effects are greater than those of the generator technology variation, but still smaller than those of the battery technology variation. D. Impact of the number of engines In this section the effect of the number of engines has been considered: it varies from 10 to 40 (as reported in Table 12). This parameter affects the maximum lift coefficient. In fact, as said earlier, the surface affected by the blowing has a maximum C l of about 4.5; the maximum wing C L is computed as: C L,wingmax = C l,max S blow / S w (19) where S blow is the surface affected by the blowing, shown in red in Fig. 9. From Eq. (19) it can be deduced that the maximum C L is reduced when S blow is reduced, that is, when the number of engines is smaller. The results of this study correspond to the fourth column of Fig. 10: there are no relevant effects on the MTOW and FC, while the wing area changes by about 20%. This is explained by the fact that the wing area is sized according to the approach speed constraint: when there are fewer engines the maximum C L is smaller, and this leads to a greater wing area to sustain the flight. E. Impact of the maximum 2D lift coefficient The effect of the maximum 2D lift coefficient has been considered: as already said, it is estimated to vary between 4 and 5.
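Eq. (19) and its dependence on the number of engines can be illustrated with a short sketch; the fan diameter, wing area and mean chord below are assumed values, and the blown surface is approximated as the total fan width times the chord, which is only an illustrative simplification.

```python
def wing_cl_max(cl_max_2d, n_engines, fan_diameter, wing_area):
    """Maximum wing CL from blowing, Eq. (19), with an assumed blown
    surface equal to the total width of the fans times the mean chord."""
    mean_chord = 4.0                       # m, assumed
    s_blow = n_engines * fan_diameter * mean_chord
    s_blow = min(s_blow, wing_area)        # cannot blow more than the wing
    return cl_max_2d * s_blow / wing_area

# Fewer engines -> smaller blown surface -> smaller maximum CL.
cl_40 = wing_cl_max(4.5, 40, 0.6, 120.0)   # 3.6
cl_10 = wing_cl_max(4.5, 10, 0.6, 120.0)   # 0.9
```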
The results are shown in the fifth column of Fig. 10. The only effect is on the wing area (which is reduced by 15%): this means that the main advantage of a higher C l is not in the FC, but in the possibility of a shorter takeoff field length and a smaller power requirement at takeoff (a parameter that affects the battery sizing). F. Impact of PSFC variation Finally, the effects of the PSFC variation have been considered: the value is first decreased and then increased by 10% with respect to the baseline (as in Table 12). The results are shown in the last column of Fig. 10: the main effect is on the FC, which varies by 20% over the range considered. This result is expected, since the PSFC mainly depends on the efficiency of the combustion process. The effect on the MTOW and the wing area is instead negligible (less than 1%). These analyses show that, when the technologies improve, the performance generally gets better (MTOW and fuel consumption are reduced), while the study on the number of engines clearly shows the advantage of using a DEP architecture. It is also possible to note that a linear change in a technology does not imply a linear change in the results: this happens because a reduction in weight leads to a reduction in the energy consumption, and thus to a reduction in the battery weight (if the battery is sized according to energy) or in the fuel consumption (if it is sized according to the power at takeoff, since more energy is available). In order to better understand the effects of each parameter, a sensitivity analysis has been performed, based on the method of sparse polynomial chaos expansions, with a design of experiments made of 800 points. For its generation, a Latin Hypercube Sampling has been used. The idea is to identify the sensitivity indices of the three key parameters (MTOW, S w and FC) with respect to the design variables, in order to understand the impact of each variable on a given output.
It is also possible to identify the interactions between the variables, if any. VIII. Sensitivity studies with respect to the technological levels The analysis performed is based on the sparse polynomial chaos expansion method for computing global sensitivity indices. [START_REF] Blatman | Efficient Computational of Global Sensitivity Indices Using Sparse Polynopmial Chaos Expansions[END_REF][START_REF] Dubreuil | Construction of Bootstrap Confidence Intervals on Sensitivity Indices Computed by Polynomial Chaos Expansion[END_REF] This method has been selected since it allows to compute the sensitivity (Sobol) indices as the Monte Carlo method does, but it requires fewer points for the estimation. In this section it is assumed that Y = M(X), where X = (X_i), i = 1, ..., n (n being the number of design variables), is a random vector modeling the input parameters (independent and uniformly distributed) and M is the numerical solver used to compute a scalar quantity of interest Y (the sizing tool FAST in this work). Assuming that Y is a second order random variable, it can be shown that 36 Y = Σ_{i=0}^∞ C_i φ_i(X) (20) where {φ_i}_{i∈N} is a polynomial basis orthogonal with respect to the probability density function (pdf) of X and the C_i are unknown coefficients. Sparse polynomial chaos consists in the construction of a sparse polynomial basis {φ_α}_{α∈A}, where α = (α_1, ..., α_n) is a multi-index used to identify the polynomial acting with the power α_i on the variable X_i and A is a set of indices α. In practice A is a subset of the set B which contains all the indices α up to a total degree d, i.e. card(B) = (d+n)! / (d! n!). The objective of the sparse approach is to find an accurate polynomial basis {φ_α}_{α∈A} such that card(A) << card(B). This is achieved by Least Angle Regression, i.e. the unknown coefficients C_α are computed by iteratively solving a mean square problem and selecting, at each iteration, the polynomial most correlated with the residual.
[START_REF] Blatman | Adaptive Sparse Polynomial Chaos Expansion Based on Least Angle Regression[END_REF] Finally, the following approximation is deduced:

$$Y \approx \hat Y = \sum_{\alpha\in A} C_\alpha\,\phi_\alpha(X) \qquad (21)$$

Due to the orthogonality of the polynomial basis $\{\phi_\alpha\}_{\alpha\in A}$ it is possible to write:

$$E[\hat Y] = C_0, \qquad \mathrm{Var}[\hat Y] = \sum_{\alpha\in A,\ \alpha\neq 0} C_\alpha^2\, E[\phi_\alpha^2(X)] \qquad (22)$$

where $E[\hat Y]$ is the mean value and $\mathrm{Var}[\hat Y]$ is the variance of the response variable ($\hat Y$). Sudret [START_REF] Sudret | Global Sensitivity Analysis Using Polynomial Chaos Expansions[END_REF] identifies the polynomial chaos expansion with the ANOVA decomposition, from which it is possible to show that the first-order sensitivity index of the variable $X_i$ is

$$\hat S_i = \frac{\sum_{\alpha\in L_i} C_\alpha^2\, E[\phi_\alpha^2(X)]}{\mathrm{Var}[\hat Y]} \qquad (23)$$

where $L_i = \{\alpha\in A : \alpha_i \neq 0 \text{ and } \alpha_j = 0\ \forall j\neq i\}$; that is, only the polynomials acting exclusively on the variable $X_i$ are considered. The total sensitivity index can also be computed:

$$\hat S_{T_i} = \frac{\sum_{\alpha\in L_i^+} C_\alpha^2\, E[\phi_\alpha^2(X)]}{\mathrm{Var}[\hat Y]} \qquad (24)$$

where $L_i^+ = \{\alpha\in A : \alpha_i \neq 0\}$; that is, all the polynomials acting on the variable $X_i$ are considered (which means that all the variance caused by its interactions, of any order, with the other input variables is included). The idea is to determine the sensitivity indices of the MTOW, the S_w and the FC with respect to the component technologies used in FAST.

Figure 10. Parametric analysis results: each column represents the effect of the variation of one component's technology, keeping all the others constant. The first column represents the impact of the battery, the second the impact of the generator, the third the impact of the electric motors, the fourth the impact of the number of engines, the fifth the impact of the maximum 2D lift coefficient, and the sixth the effect of the PSFC reduction. The outputs considered are the MTOW, the wing area (Sw) and the fuel consumption (FC).
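Assuming an orthonormal basis (so that E[φ_α²] = 1), Eqs. (22)-(24) reduce to sums of squared coefficients. The sketch below computes the first-order and total indices from a toy coefficient set keyed by multi-index; the coefficient values are invented for illustration and are not FAST outputs:

```python
def sobol_from_pce(coeffs, n_vars):
    """coeffs: dict mapping multi-index tuples to C_alpha; the basis is
    assumed orthonormal, E[phi_alpha^2] = 1.  Returns (first_order, total)."""
    zero = (0,) * n_vars
    var = sum(c * c for a, c in coeffs.items() if a != zero)   # Eq. (22)
    first, total = [], []
    for i in range(n_vars):
        Li = sum(c * c for a, c in coeffs.items()
                 if a[i] != 0 and all(a[j] == 0 for j in range(n_vars) if j != i))
        Lp = sum(c * c for a, c in coeffs.items() if a[i] != 0)
        first.append(Li / var)   # Eq. (23): polynomials acting only on X_i
        total.append(Lp / var)   # Eq. (24): all polynomials involving X_i
    return first, total

# Toy expansion in two variables: Y ~ 1 + 2*phi(1,0) + 1*phi(0,1) + 0.5*phi(1,1)
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 0.5}
S, ST = sobol_from_pce(coeffs, 2)
```

Here the interaction term (1,1) makes the first-order indices sum to less than one, while the total indices capture it twice.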
The analysis considers the same parameters as in Table 12, except for the battery specific power density (which from the table can be estimated as 4 times the specific energy density) and the number of engines (since this method only handles continuous variables). For each variable it is possible to define the coefficient of variation CV:

$$CV = \frac{\sigma}{\mu} \qquad (25)$$

where $\sigma = \frac{x_{max} - x_{min}}{\sqrt{12}}$ is the standard deviation of a uniform variable on $[x_{min}, x_{max}]$ and $\mu = \frac{x_{max} + x_{min}}{2}$ is its mean value. It has been decided to keep the CV constant for each variable: once the mean value is fixed, the minimum and maximum values of the variation range can be deduced. In Table 13 the mean values and the range of variation of each parameter are reported. It has to be noted that, compared to Table 12, the range of variation is smaller.

Table 13. Mean values, minimum and maximum values of the parameters considered for the sensitivity analysis (with CV = 0.05 kept constant for all the variables).

A database of 800 points has been generated for the experiment (via Latin Hypercube Sampling). The coefficient of variation used is 0.05 (i.e. each variable varies by about 5% between its minimum and maximum value). Results are shown in Table 14 and Fig. 11. The following conclusions can be drawn:

• MTOW is mostly affected by the battery technology (sensitivity index of 0.86).
• The driving parameter for the FC is the PSFC reduction (sensitivity index of 0.64), but there is also an effect of the battery: when it is resized, the MTOW increases and so does the FC. The EM efficiency also has an effect: recalling the propulsive chain (Fig. 3), this parameter sets the power required from the generators, which affects the FC (at the same PSFC, more power means more fuel burnt).
• The wing area is driven by the maximum 2D lift coefficient (sensitivity index of 0.84); for this output too there is a small effect of the battery technology, because the sizing criterion for the wing is the approach speed, and when the MTOW increases a greater area is needed to sustain the flight.

From Table 14 it is also possible to note that the sum of all the indices is about one (from 0.99 to 1.07): this means that all the variance of the response variables is explained and there are no high-order interactions between the input variables. A study of the total sensitivity indices (see Eq. (24)) therefore does not provide more information than the first-order indices. Having fixed the CV, the range in which the parameters vary is smaller than the technological range established in Table 12. For this reason a second analysis has been performed, using the assumptions made in the technological table (which means that the CV is no longer the same for each parameter). A new database of 800 points has been defined. Results are presented in Table 15 and Fig. 12. Compared to the previous case, the effects are mostly due to the battery variation, except for the wing area, for which there is still an effect of the maximum lift coefficient, although reduced. This means that, as long as the uncertainty in battery technology is as hypothesized, there is no gain in improving the technological level of the other components, since the sizing will be driven mostly by the battery parameters. In conclusion, with the current level of fidelity used in FAST, it is possible to capture the effects of technology variations; the results show that the main driver of the design process is the battery.

IX. Conclusion & future perspectives

In this work the feasibility of a large passenger hybrid aircraft has been studied, in which a set of batteries and generators work in synergy to supply power.
The proposed concept is based on a distributed electric propulsion architecture, in which a number of ducted fans located along the wing provide the necessary thrust. A focus has been made on the advantages of such an architecture (engine weight reduction thanks to the less stringent OEI condition, blowing effects which increase the maximum lift coefficient). Some effort has also been put into the modeling of the electrical components and into the description of the propulsive energy chain. The technological hypotheses made refer to a 2035 horizon. All these aspects have been coded into the FAST sizing tool and the modifications made have been presented. The sizing tool is based on empirical equations and low-fidelity tools: its level of fidelity can be classified as low. Results show that the hybrid concept has a potential gain up to a certain range, beyond which the battery weight becomes so large that the fuel consumption exceeds that of an aircraft with conventional engines at the same technological horizon. Once the baseline had been assessed, two failure cases (for batteries and generator) were studied in order to understand whether the aircraft can tolerate the partial loss of one power source. A deeper investigation of the failure cases will have to be considered in the future, according to new certification rules that might not require the whole mission to be fulfilled: in that case, particular attention must be paid to the maximum power that can be lost. Also, in the proposed concept it has been assumed that all electric motors keep working even if an energy source is lost; the case in which the loss of an energy source leads to the loss of a certain number of engines (which affects the maximum wing lift coefficient) has to be considered as well. However, the scenarios considered in this work are conservative in that sense.
Due to the uncertainty in the data for the 2035 horizon, an exploration of the design space, based on the available technology table, and sensitivity analyses have been performed. The trade-off study shows that the main parameter in the design process is the battery technology, with the PSFC reduction and the maximum 2D lift coefficient having minor effects on the FC and the wing area. The conclusion of these analyses is that, as long as the uncertainty in the battery technology holds, improving the other components' technologies does not affect the results in a relevant way. From an analysis point of view, FAST performs an MDA: this means that it reaches a viable aircraft, which is not necessarily the optimal one (with respect to fuel consumption). The next step is to bring the sizing loop described here into an MDO framework. This work can be divided into different phases:

• The first step is to choose an MDO framework and include the sizing loop into an optimization loop, in order to find the set of TLAR and the hybridization degree which minimize the energy and fuel consumption. A suitable choice for the MDO framework could be OpenMDAO, an open-source software developed by the NASA Glenn Research Center in collaboration with the University of Michigan. 39
• FAST is a tool based on a low fidelity level. A second step is then to study different fidelity levels in FAST in order to assess the differences in results when using multifidelity tools. These first two steps consider different scenarios only for the battery technology.
• Finally, in order to better understand the effect of the technology level, an MDO formulation using uncertainty quantification could be derived. [START_REF] Brevault | Decoupled MDO formulation for interdisciplinary coupling satisfaction under uncertainty[END_REF]

A. Mass breakdown standard

As mentioned in section IV, the mass breakdown standard used in FAST is based on the French norm AIR 2001/D.
The detailed mass breakdown is reported in Table 16: the aircraft has been divided into five categories (airframe, propulsion, systems and fixed installations, operational items and crew), plus fuel weight and payload. Each category has been divided into subsections, one per component, as shown in the table. In category B, the sections B4, B5, B6, B7 and B8 have been added in order to account for the hybrid architecture.

B. Methodology for the preliminary design of a ducted fan

In this section the methodology used for the sizing of a ducted fan is explained. The scheme of the fan is shown in Fig. 13. Knowing the operating conditions and the compression ratio, it is possible to deduce the power required and then the size of the inlet and outlet areas. The inputs for the model are:
- the gas constants for air: $\gamma = 1.4$ and $R = 287\ \mathrm{J\,kg^{-1}\,K^{-1}}$;
- the flight Mach number $M_0$;
- the non-dimensional thrust coefficient of one fan, defined as $C_T = \dfrac{T}{\frac{\gamma}{2}\,p_{s0}\,M_0^2\,S_{ref}}$.

It has to be noted that the thrust coefficient refers to the thrust required from a single ducted fan, not to the total thrust. The process is described below.

1. The first step is to compute the total pressure and temperature at the inlet, using the de Saint-Venant relations:

$$p_{t0} = p_{s0}\left(1 + \frac{\gamma-1}{2}\,M_0^2\right)^{\frac{\gamma}{\gamma-1}}, \qquad \theta_{t0} = \theta_{s0}\left(1 + \frac{\gamma-1}{2}\,M_0^2\right)$$

2. Then it is possible to compute the Mach number at the exit of the nozzle:

$$M_{3f} = \sqrt{\frac{2}{\gamma-1}\left[\left(1 + \frac{\gamma-1}{2}\,M_0^2\right)\mathrm{FPR}^{\frac{\gamma-1}{\gamma}} - 1\right]} = f(M_0, \mathrm{FPR})$$

This relation is obtained by considering the nozzle adapted, that is, the pressure at the exit of the nozzle equal to the ambient pressure ($p_{3f} = p_{s0}$). It is also possible to compute the velocity ratio $\beta$ as follows:

$$\beta = \frac{V_{3f}}{V_0} = \frac{M_{3f}}{M_0}\sqrt{\frac{\theta_{3f}}{\theta_0}} = \frac{M_{3f}}{M_0}\sqrt{\mathrm{FPR}^{\frac{\gamma-1}{\gamma\eta_f}}\,\frac{1 + \frac{\gamma-1}{2}\,M_0^2}{1 + \frac{\gamma-1}{2}\,M_{3f}^2}} = f(M_0, \mathrm{FPR})$$

$\eta_f$ being the polytropic efficiency of the fan.
This value is introduced through the ratio between the total pressure and the total temperature across the fan:

$$\frac{\theta_{t3f}}{\theta_{t0}} = \left(\frac{p_{t3f}}{p_{t0}}\right)^{\frac{\gamma-1}{\gamma\eta_f}}$$

If $\eta_f = 1$ the compression is isentropic. In practice the polytropic efficiency can be computed with the semiempirical relation

$$\eta_f = 0.98 - 0.08\,(\mathrm{FPR} - 1)$$

which takes into account the effect of the FPR: the higher it is, the less efficient the compression.

3. At this stage it is possible to compute the nozzle exit area:

$$\frac{S_{3f}}{S_{ref}} = \frac{C_T}{2}\,\mathrm{FPR}^{\frac{\gamma(1-\eta_f)-1}{\gamma\eta_f}}\left(\frac{1 + \frac{\gamma-1}{2}\,M_0^2}{1 + \frac{\gamma-1}{2}\,M_{3f}^2}\right)^{\frac{1}{\gamma-1}}\frac{1}{\beta^2 - \beta} = f(M_0, \mathrm{FPR}, C_T)$$

Finally, assuming a circular section, the diameter can be deduced:

$$D_{3f} = 2\sqrt{\frac{S_{3f}}{\pi}}$$

4. At this step it is possible to compute the mass flow and then the power required by the fan. The mass flow exiting the nozzle is

$$\dot m = p_{3f}\,M_{3f}\,S_{3f}\sqrt{\frac{\gamma}{R\,\theta_{3f}}} = p_{3f}\,M_{3f}\,S_{3f}\sqrt{\frac{\gamma}{R\,\theta_{t3f}}\left(1 + \frac{\gamma-1}{2}\,M_{3f}^2\right)}$$

5. Knowing the ratio $\sigma$ between the hub and tip radii, it is possible to compute the fan radius:

$$r_{fan} = \sqrt{\frac{S_{fan}}{\pi\,(1 - \sigma^2)}}$$

6. Once the fan is sized, it is possible to deduce the rotational velocity and the torque. To do so, a tip velocity has to be defined: this value is determined by aerodynamic criteria and allows the desired polytropic fan efficiency to be obtained. Some values are summed up in Table 17; the data can be interpolated for different values of FPR.

The process just described has an estimated error of about 10%. It is still valid for off-design conditions; the only difference is that a different value of FPR has to be found in order to provide the same $S_{3f}$. In practice a lower FPR corresponds to a lower RPM on a real fixed-pitch fan. This is handled automatically in the code.

C. Parametric analysis results

In this section the detailed results of the design space exploration (presented in Section VII) are reported.
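As a rough numerical sketch of the fan-sizing chain of Appendix B (steps 1 to 4), the function below links the de Saint-Venant relations, the adapted-nozzle exit Mach number, the semiempirical polytropic efficiency, the nozzle area, and the mass flow and power. The flight point (M = 0.70 at 11 km ISA) matches the design conditions quoted earlier, while C_T and S_ref are made-up illustrative values; this is not the FAST implementation:

```python
import math

def size_ducted_fan(M0, FPR, p_s0, theta_s0, C_T, S_ref,
                    gamma=1.4, Rg=287.0):
    """Sizing chain of Appendix B, steps 1-4 (sketch, not the FAST code)."""
    eta_f = 0.98 - 0.08 * (FPR - 1.0)               # semiempirical polytropic eff.
    # 1. total conditions at the inlet (de Saint-Venant)
    tau0 = 1.0 + (gamma - 1.0) / 2.0 * M0**2
    th_t0 = theta_s0 * tau0
    # 2. exit Mach number (adapted nozzle, p3f = p_s0) and velocity ratio
    M3f = math.sqrt(2.0 / (gamma - 1.0)
                    * (tau0 * FPR ** ((gamma - 1.0) / gamma) - 1.0))
    tau3 = 1.0 + (gamma - 1.0) / 2.0 * M3f**2
    fpr_eta = FPR ** ((gamma - 1.0) / (gamma * eta_f))
    beta = (M3f / M0) * math.sqrt(fpr_eta * tau0 / tau3)
    # 3. nozzle exit area and diameter
    S3f = (C_T / 2.0 * S_ref
           * FPR ** ((gamma * (1.0 - eta_f) - 1.0) / (gamma * eta_f))
           * (tau0 / tau3) ** (1.0 / (gamma - 1.0)) / (beta**2 - beta))
    D3f = 2.0 * math.sqrt(S3f / math.pi)
    # 4. mass flow (p3f = p_s0 for the adapted nozzle) and power required
    th_t3f = th_t0 * fpr_eta
    mdot = p_s0 * M3f * S3f * math.sqrt(gamma / (Rg * th_t3f) * tau3)
    dH = gamma * Rg / (gamma - 1.0) * th_t0 * (fpr_eta - 1.0)
    return dict(M3f=M3f, beta=beta, S3f=S3f, D3f=D3f, mdot=mdot, P=dH * mdot)

# Illustrative cruise point: M = 0.70 at 11 km ISA; C_T and S_ref are invented.
fan = size_ducted_fan(M0=0.70, FPR=1.30, p_s0=22632.0, theta_s0=216.65,
                      C_T=0.02, S_ref=122.0)
```

For off-design conditions the same chain can be run while iterating on FPR so that the computed exit area matches the fixed geometric one, as described above.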
As explained, each study refers to the variation of only one technology, keeping all the others constant at the base values (Table 12). The results are shown both in real and in normalized form (the same normalization used in Fig. 10), in order to better understand the overall effect of each variation.

Target parameter [%]       2025    2035    2050
Noise                     -10.0   -11.0   -15.0
Emissions                 -81.0   -84.0   -90.0
Fuel\Energy consumption   -49.0   -60.0   -75.0

Figure 1. Hybrid aircraft concept proposed with distributed electric ducted fans, modeled in OpenVSP.7
Figure 2. Typical evolution of the atmospheric boundary layer during the day; the extension of the convective boundary layer is shown.12
Figure 3. Distributed electric propulsion architecture.
Figure 4. Turboshaft scheme, as modeled in GSP (software developed at NLR).[START_REF] Visser | GSP: A Generic Object-Oriented Gas Turbine Simulation Environment[END_REF]
Figure 5. General scheme of a ducted electric fan with its different parts.
Figure 7. FAST xDSM. Black lines represent the main workflow; thick grey lines represent the data flow; green blocks indicate an analysis, grey and white blocks input/output data. Algorithm 1 details the MDA.
Figure 8. Fuel consumption with respect to the range variation, keeping all the other TLAR constant.
Figure 9. Wing view. The red zone represents the surface affected by blowing, used for computing the maximum lift coefficient in Eq. (19).
Figure 11. Plot of the sensitivity indices of MTOW, FC and Sw for the analysis considered.
Figure 12. Plot of the sensitivity indices of MTOW, FC and Sw, considering the range of variation defined in Table 12.
Figure 13. Scheme of a ducted fan for the model presented.

(From the fan pressure ratio definition, $p_{t3f} = p_{t0}\,\mathrm{FPR}$ and $T_{t3f} = T_{t0}\,\mathrm{FPR}^{\frac{\gamma-1}{\gamma\eta_f}}$.)
The total enthalpy variation is then

$$\Delta H = c_p\,(\theta_{t3f} - \theta_{t0}) = \frac{\gamma R}{\gamma-1}\,\theta_{t0}\left(\mathrm{FPR}^{\frac{\gamma-1}{\gamma\eta_f}} - 1\right)$$

and finally the power required by the fan is

$$P_{fan} = \Delta H\,\dot m$$

It is finally possible to compute the fan area. For aerodynamic reasons, the Mach number at the fan section is 0.65; with this assumption the area is computed from the mass flow and the total conditions as

$$S_{fan} = \frac{\dot m}{p_{t0}\,\mathrm{FPR}\,M_{fan}}\sqrt{\frac{R\,\theta_{t3f}}{\gamma}}\left(1 + \frac{\gamma-1}{2}\,M_{fan}^2\right)^{\frac{\gamma+1}{2(\gamma-1)}}$$

FPR            1.1   1.2   1.3   1.4   1.5   1.6   1.7   1.8
V_tip [m s-1]  200   230   290   330   370   390   400   400

The rotational speed, in revolutions per minute, is then

$$\Omega_{fan} = \frac{V_{tip}}{r_{fan}}\,\frac{60}{2\pi}$$

and finally the torque, in N m, is

$$C_{fan} = \frac{P_{fan}}{\Omega_{fan}\,\frac{2\pi}{60}}$$

Figure 14. Parametric analyses with respect to battery technology. (E/m) is the battery specific energy density, MTOW the Maximum TakeOff Weight, Sw the wing area and FC the fuel consumption.
Figure 17. Parametric analyses with respect to the number of engines. N_EM is the number of engines (10, 20 and 40), MTOW the Maximum TakeOff Weight, Sw the wing area and FC the fuel consumption.

Table 1. ACARE targets for the next years.[START_REF]ACARE project[END_REF]

Table 2. Turboshaft design parameters
Design Mach number          0.70
Design operating altitude   11000 m
Power at design point       13293 kW
PSFC at design point        0.22 kg kW-1 h-1

Algorithm 1: FAST algorithm
Require: initial design parameters (TLAR)
Ensure: sized aircraft, drag polars, masses, design mission trajectory
0: Initialize the values. Estimate initial values of the weights and wing surfaces, using methods from the ISAE & Airbus design manual,26 as initialization of the DEP components.
repeat
1: Initialize the loop.

Table 5. Comparison between the step climb and the cruise climb approaches, using the CeRAS aircraft27 (Npax = 150, M = 0.78, R = 2750 NM)
                MTOW [kg]   OWE [kg]    Wing area [m2]   Mission fuel [kg]
Step climb      74618.96    42200.58    122.74           18799.11
Cruise climb    74562.82    42190.71    122.68           18798.85
Diff. [%]       -0.075      -0.023      -0.481           -0.001

In Table 8 the estimated impacts of 2035 technology on the weight of the different components are reported; the values come from an internal project at ONERA, in the frame of the EU program Clean Sky 2.10

Table 6. Design parameters for the hybrid aircraft with DEP
Range                      800-1600 NM
Cruise Mach number         0.70
Number of passengers       150
Approach speed             132 kn
Wing span                  <=36 m
Number of engines          40
Number of generators       2
Number of batteries        4
Minimum power at takeoff   28 MW

Table 7. Design parameters for the electric components for the 2035 horizon
(Columns: Battery | Generator | Electric motor | IDC)

Table 8. Estimated impact of new materials on weight for the 2035 horizon
Wing     Fuselage   Landing gear   Cabin seats
-10%     -5%        -5%            -30%

Table 10. Comparison between the hybrid-electric concept and the conventional aircraft, EIS 2035, for the desired range of 1200 NM
(Columns: Hybrid | Traditional | Diff. %)

Table 11. Fuel breakdown comparison between the baseline and the identified failure scenarios (one generator or two batteries inoperative)
                        No failure   Generator out   Batteries out
Taxi out [kg]           0            0               0
Takeoff [kg]            0            0               36.67
Initial climb [kg]      0            0               118.52
Climb [kg]              0            0               362.41
Cruise [kg]             4543.81      4984.86         4857.90
Descent [kg]            206.46       227.79          228.58
Alternate climb [kg]    562.62       -               604.35
Alternate cruise [kg]   325.39       -               365.10
Alternate descent [kg]  156.98       -               172.68
Holding [kg]            994.74       1102.73         1078.89
Block fuel [kg]         4750.26      5212.65         5604.13
Reserve fuel [kg]       2182.22      1259.11         2389.14
Mission fuel [kg]       6932.49      6471.76         7993.27

Table 12. Technology table for evaluating the sensitivity to technology
(Columns: Minimum | Maximum | Baseline)
Table 14. Sensitivity indices for MTOW, FC and Sw for the analysis considered (relevant values in bold in the original)
                                  MTOW     FC       Sw
Battery specific energy density   0.8642   0.1799   0.1561
Battery efficiency                0.0002   0.0000   0.0000
Generator power density           0.0126   0.0561   0.0010
Generator efficiency              0.0004   0.0001   0.0000
EM power density                  0.0001   0.0086   0.0000
EM efficiency                     0.1070   0.1848   0.0003
C_l,max                           0.0119   0.0089   0.8421
PSFC reduction                    0.0003   0.6389   0.0000
Index sum                         0.9968   1.0773   0.9995

Table 15. Sensitivity indices for MTOW, FC and Sw, considering the range of variation defined in Table 12 (relevant values in bold in the original)
                                  MTOW     FC       Sw
Battery specific energy density   0.9358   0.7388   0.3792
Battery efficiency                0.0000   0.0001   0.0000
Generator power density           0.0569   0.0561   0.0223
Generator efficiency              0.0002   0.0001   0.0002
EM power density                  0.0017   0.0086   0.0004
EM efficiency                     0.0032   0.1848   0.0005
C_l,max                           0.0015   0.0089   0.5953
PSFC reduction                    0.0000   0.0026   0.0000
Index sum                         0.9992   0.9975   0.9974

Table 16. Standard for the mass breakdown used in FAST
A  Airframe
   A1 Wing
   A2 Fuselage
   A3 Horizontal and vertical tail
   A4 Flight controls
   A5 Landing gear
   A6 Pylons
   A7 Paint
B  Propulsion
   B1 Engines
   B2 Fuel and oil systems
   B3 Unusable oil and fuel
   B4 Cables and cooling system
   B5 Batteries
   B6 Generators
   B7 IDC
   B8 Bus protection
C  Systems and fixed installations
   C1 Power systems (APU, electrical and hydraulic systems)
   C2 Life support systems (pressurization, de-icing, seats, ...)
   C3 Instruments and navigation
   C4 Transmissions
   C5 Fixed operational systems (radar, cargo hold mechanization)
   C6 Flight kit
D  Operational items
E  Crew
F  Fuel
G  Payload

Table 17. Tip velocity for different values of FPR

Acknowledgments

The authors would like to thank:
• AIRBUS for the financial support in the frame of the Chair CEDAR (Chair for Eco Design of AircRaft).
• The European Commission for the financial support within the frame of the Joint Technology Initiative JTI Clean Sky 2, Large Passenger Aircraft Innovative Aircraft Demonstration Platform "LPA IADP" (contract N CSJU-CS2-GAM-LPA-2014-2015-01). • Michael Ridel and David Donjat for their contribution on cables and cooling system models as well as Sylvain Dubreuil for his work on the sensitivity analysis.
2017
https://hal.science/hal-01754876/file/mortagne_19813.pdf
Caroline Mortagne, Kevin Lippera, Philippe Tordjeman, Michael Benzaquen, Thierry Ondarçuhu

Dynamics of anchored oscillating nanomenisci

I. INTRODUCTION

The study of liquid dynamics in the close vicinity of the contact line is fundamental to understanding the physics of wetting [START_REF] De Gennes | Wetting: Statics and dynamics[END_REF][START_REF] Bonn | Wetting and spreading[END_REF]. The strong confinement inherent to this region leads, in the case of a moving contact line, to a divergence of the energy dissipation. This singularity can be released by introducing microscopic models based on long-range interactions, wall slippage, or a diffuse interface [START_REF] Snoeijer | Moving contact lines: Scales, regimes, and dynamical transitions[END_REF], which are still difficult to determine experimentally. In most cases, the spreading is also controlled by the pinning of the contact line on surface defects [START_REF] Joanny | A model for contact angle hysteresis[END_REF][START_REF] Perrin | Defects at the Nanoscale Impact Contact Line Motion at all Scales[END_REF]. For nanometric defects, the intensity and localization of the viscous energy dissipation are crucial to understanding the wetting dynamics. The aim of this paper is to study the hydrodynamics of a nanomeniscus anchored on nanometric topographic defects and subjected to an external periodic forcing. This configuration allows one to investigate the viscous dissipation in a meniscus down to the very close vicinity of the fixed contact line and to assess the dynamics of the pinning on nanometric defects.
In addition to being an important step towards the elucidation of the wetting dynamics on rough surfaces, this issue is relevant for vibrated droplets or bubbles [START_REF] Noblin | Vibrated sessile drops: Transition between pinned and mobile contact line oscillations[END_REF] and for the reflection of capillary waves on a solid wall [START_REF] Michel | Acoustic Measurement of Surface Wave Damping by a Meniscus[END_REF]. Atomic force microscopy (AFM) has proven to be a unique tool to carry out measurements on liquids down to the nanometer scale: liquid structuration [START_REF] Fukuma | Water distribution at solid/liquid interfaces visualized by frequency modulation atomic force microscopy[END_REF] or slippage [START_REF] Maali | Measurement of the slip length of water flow on graphite surface[END_REF] at solid interfaces has been evidenced, while the use of specific tips fitted with either micro- or nanocylinders has allowed quantitative measurements in viscous boundary layers [START_REF] Dupré De Baubigny | AFM study of hydrodynamics in boundary layers around micro-and nanofibers[END_REF] and at the contact line [START_REF] Guo | Direct Measurement of Friction of a Fluctuating Contact Line[END_REF]. In this study, we have developed an AFM experiment based on the frequency modulation mode (FM-AFM) to monitor simultaneously the mean force and the energy dissipation experienced by an anchored nanomeniscus. Artificial defects of adjustable size are deposited on cylindrical fibers (radius below 100 nm) to control the pinning of the contact line and the stretching of the meniscus during the oscillation. The experiments are analyzed in the frame of a nanohydrodynamics model based on the lubrication approximation. Interestingly, the meniscus oscillation does not lead to any stress divergence at the contact line, allowing a full resolution without the use of cutoff lengths, in contrast with the case of a moving contact line.
This study thus provides a comprehensive description of dissipation mechanisms in highly confined menisci and an estimate of the critical depinning contact angle for nanometric defects. II. EXPERIMENTAL METHODS The fibers used in the experimental study were carved with a dual beam FIB (1540 XB Cross Beam, Zeiss) from conventional silicon AFM probes (OLTESPA, Bruker). Using a beam of Ga ions, a 2 to 3 µm long cylinder of radius R ∼ 80 nm is milled at the end of a classical AFM tip. An ELPHY MultiBeam (Raith) device allows to manufacture nanometric spots of platinum by electron beam induced deposition (EBID) in order to create ring defect of controlled thickness around the cylinders (see Supplemental Material [12]). An example of a homemade cylinder with three annular rings is displayed in Fig. 1(d). The liquids used are ethylene glycol (1EG), diethylene glycol (2EG), triethylene glycol (3EG), and an ionic liquid, namely, 1-ethyl-3-methylimidazolium tetrafluoroborate. The liquids have a low volatility at room temperature. Their dynamic viscosities are η = 19.5, 34.5, 46.5, and 44 mPa • s, and their surface tensions are γ = 49.5, 49.5, 48, and 56 mN • m -1 at 20 • , respectively. As surface conditions play a crucial role in wetting, measurements are made before and after a 5 min UV/O 3 treatment aimed at removing contaminants and making the surface more hydrophilic [START_REF] Vig | UV/ozone cleaning of surfaces[END_REF]. Using a PicoForce AFM (Bruker), the tips are dipped in and withdrawn from a millimetric liquid drop deposited on a silicon substrate. Prior to any experiment series, the cantilever quality factor Q and deflection sensitivity are measured, and its spring constant k is determined using standard calibration technique [START_REF] Butt | Calculation of thermal noise in atomic force microscopy[END_REF]. 
The experiments are performed in frequency modulation (FM-AFM) mode using a phase-locked loop device (HF2LI, Zurich Instruments), which oscillates the cantilever at its resonance frequency f. A proportional-integral-derivative controller adjusts the excitation signal A_ex in order to keep the tip oscillation amplitude A constant. The excitation signal A_ex is therefore a direct indication of the system dissipation. In particular, it is linearly related to the friction coefficient of the interaction through β = β0(A_ex/A_ex,0 − 1), where A_ex,0 and β0 = k/(ω0 Q) are, respectively, the excitation signal and the friction coefficient of the free system in air, measured far from the liquid interface [START_REF] Giessibl | Advances in atomic force microscopy[END_REF]. We used cantilevers with a quality factor Q ∼ 200, high enough to ensure that the resonance frequency is related to the natural angular frequency through ω0 = 2πf. We showed recently that this procedure, with the appropriate calibration, gives quantitative measurements of the dissipation in the viscous layer around the tip [START_REF] Dupré De Baubigny | AFM study of hydrodynamics in boundary layers around micro-and nanofibers[END_REF]. In the present case, it allows us to monitor, during the whole process, both the capillary force F and the friction coefficient β, which are related to the shape of the meniscus and to the viscous dissipation, respectively. Note that both values are obtained with a 20% accuracy, mainly coming from the uncertainty in the determination of k.

III. RESULTS

Figure 1 shows the results of a typical experiment performed on a 3EG drop. The measured force F [Fig. 1(a)] and friction coefficient β [Fig. 1(b)] are plotted as a function of the immersion depth d for a ramp of 2.5 µm. The cylinder is dipped into (light blue curves) and withdrawn from (dark blue curves) the liquid bath at 2.5 µm·s-1.
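As an illustration of this calibration, the sketch below converts an excitation-signal ratio into a friction coefficient through β = β0(A_ex/A_ex,0 − 1) with β0 = k/(ω0 Q). The cantilever values (k = 1.5 N m-1, f = 66 820 Hz, Q ≈ 200) are those quoted in the text; the excitation ratio itself is a made-up example:

```python
import math

def friction_coefficient(k, f0, Q, Aex_ratio):
    """Interaction friction coefficient from the FM-AFM excitation signal.

    k         : cantilever spring constant [N/m]
    f0        : resonance frequency in air [Hz]
    Q         : quality factor of the free cantilever
    Aex_ratio : A_ex / A_ex,0, excitation normalized to the free-air value
    """
    omega0 = 2.0 * math.pi * f0
    beta0 = k / (omega0 * Q)            # friction coefficient of the free system
    return beta0 * (Aex_ratio - 1.0)    # beta = beta0 (A_ex/A_ex,0 - 1)

# Cantilever of the experiment described above; the excitation ratio of 3
# (dissipation twice the free-air value) is an invented example.
beta = friction_coefficient(k=1.5, f0=66820.0, Q=200.0, Aex_ratio=3.0)
```

With these values β0 is of the order of 1.8e-8 kg/s, so the measured friction coefficients are in the 1e-8 kg/s range.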
The tip oscillates at its resonance frequency (66 820 Hz in air) with an amplitude of 6 nm. The cantilever stiffness is k = 1.5 N • m -1 , soft enough to perform deflection measurements while being adapted for the dynamic mode. The force curve can be interpreted using the expression of the capillary force [START_REF] Delmas | Contact Angle Hysteresis at the Nanometer Scale[END_REF]: F = 2πRγ cos θ , where R is the fiber radius and θ is the mean contact angle during the oscillation. After the meniscus formation at d = 0, and until the contact line anchors on the first ring [at reference (i)] F and θ remain constant, consistent with Refs. [START_REF] Delmas | Contact Angle Hysteresis at the Nanometer Scale[END_REF][START_REF] Barber | Static and Dynamic Wetting Measurements of Single Carbon Nanotubes[END_REF][START_REF] Yazdanpanah | Micro-Wilhelmy and related liquid property measurements using constant-diameter nanoneedle-tipped atomic force microscope probes[END_REF]. A small jump of the force is observed when the contact line reaches a platinum ring on reference points (i), (ii), or (iii). Once the meniscus is pinned, the contact angle increases as the cylinder goes deeper into the liquid, leading to a decrease of the force F . Conversely, the withdrawal leads to a decrease of θ and an increase of the force F on the left of (i), (ii), and (iii). Hence, each ring induces two hysteresis cycles characteristic of strong topographic defects [START_REF] Joanny | A model for contact angle hysteresis[END_REF]. Different contributions to the probe-liquid system account for the friction coefficient behavior. The global increase of β with d observed on Fig. 1(b) results from the contribution of the viscous layer around the tip which is proportional to the immersion depth [START_REF] Dupré De Baubigny | AFM study of hydrodynamics in boundary layers around micro-and nanofibers[END_REF]. 
At withdrawal, β increases dramatically when the probe reaches the reference points (iv), (v), and (vi) of Fig. 1(b). In those regions, the force curve indicates that the meniscus is pinned on a defect. The dissipation growth is therefore attributed to the decrease of the contact angle before depinning, as schematized in the zoom on the friction coefficient curve [Fig. 1(c)]. This large effect can be qualitatively understood by considering that small contact angles, corresponding to a reduced film thickness, generate strong velocity gradients in the meniscus and thus a large dissipation. Note that a similar behavior is observed for a moving contact line, for which the friction coefficient also displays a strong dependence upon the contact angle, β ∼ 1/θ [START_REF] De Gennes | Wetting: Statics and dynamics[END_REF].

IV. THEORETICAL MODEL

In order to account for the experimental results, we developed a theoretical model for the oscillation of a liquid meniscus in a cylindrical geometry (see the Supplemental Material [12]). We consider the problem in the frame of reference attached to the cylinder (see Fig. 2). The flow induced by the interface motion leads to a friction coefficient β_men. The latter is related to the mean energy loss P during an oscillation cycle through P = β_men(Aω)²/2 [START_REF] Pérez | Mécanique: fondements et applications: avec 300 exercices et problèmes résolus[END_REF]. Since the capillary number is small (see Ref. [START_REF]The global capillary number can be evaluated to Ca = Aωη/γ ∼ 10 -3 . However[END_REF]) we may safely state that viscous effects do not affect the shape of the liquid interface.
Therefore, the meniscus profile is a solution of the Laplace equation resulting from the balance between capillary and hydrostatic pressures, which yields the well-known catenary shape [START_REF] Derjaguin | Theory of the distortion of a plane surface of a liquid by small objects and its application to the measurement of the contact angle of the wetting of thin filaments and fibres[END_REF][START_REF] James | The meniscus on the outside of a small circular cylinder[END_REF][START_REF] Dupré De Baubigny | Shape and effective spring constant of liquid interfaces probed at the nanometer scale: finite size effects[END_REF]:

$$h(z) = (R + r_0)\cos\bar\theta\,\cosh\!\left[\frac{z}{(R + r_0)\cos\bar\theta} - \ln\zeta\right] \qquad (1)$$

where $\zeta = \cos\bar\theta/(1 + \sin\bar\theta)$. The meniscus height $Z_0$ is given, in the limit of small contact angles, by

$$Z_0 = (R + r_0)\cos\bar\theta\left[\ln\!\left(\frac{4\,l_c}{R + r_0}\right) - \gamma_E\right] \qquad (2)$$

with $\gamma_E \simeq 0.577$ the Euler constant and $l_c$ the capillary length. Since $Z_0(t)$ oscillates around its mean position as $Z_0[\theta(t)] = Z_0(\bar\theta) + A\cos(\omega t)$, we can derive the temporal evolution of the contact angle:

$$\cos\theta(t) = \cos\bar\theta + \frac{A\cos(\omega t)}{(R + r_0)\left[\ln\!\left(\frac{4\,l_c}{R + r_0}\right) - \gamma_E\right]} \qquad (3)$$

Note that our model is meant to deal with positive contact angles only, even if the defect thickness could in principle allow slightly negative ones. This defines a critical contact angle $\theta_{crit}$ related to the minimum value of $\theta$ allowed by the model. One has $\cos\theta_{crit} = 1 - A/\big((R + r_0)\{\ln[4\,l_c/(R + r_0)] - \gamma_E\}\big)$. This critical depinning angle on an ideally strong defect increases with $A$ and decreases with $R + r_0$. The interface motion being known, the velocity field is derived using the Stokes equation. Indeed, gravity and inertia can be safely neglected (Re ∼ 10-8 and l_c ≃ 2 mm). Moreover, the viscous diffusion time scale τ_ν = R²/ν is much smaller than the oscillation period (τ_ν f ∼ 10-7), such that the Stokes equation reduces to the steady Stokes equation.
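Equations (1)-(3) can be evaluated directly; the sketch below uses the experimental orders of magnitude quoted in the paper (R = 85 nm, r0 = 40 nm, A = 18 nm, lc ≈ 2 mm, f = 65 kHz; the function names are ours) and checks that the catenary meets the fiber-plus-defect radius exactly at the contact line:

```python
import math

GAMMA_E = 0.5772156649  # Euler constant

def meniscus(theta_bar, R=85e-9, r0=40e-9, lc=2e-3):
    """Catenary profile h(z), Eq. (1), and small-angle height Z0, Eq. (2)."""
    Rc = R + r0
    a = Rc * math.cos(theta_bar)            # catenary length scale
    zeta = math.cos(theta_bar) / (1.0 + math.sin(theta_bar))
    h = lambda z: a * math.cosh(z / a - math.log(zeta))
    Z0 = a * (math.log(4.0 * lc / Rc) - GAMMA_E)
    return h, Z0

def cos_theta_of_t(t, theta_bar, A=18e-9, f=65e3, R=85e-9, r0=40e-9, lc=2e-3):
    """Contact-angle modulation of Eq. (3) for an anchored contact line."""
    Rc = R + r0
    L = math.log(4.0 * lc / Rc) - GAMMA_E
    return math.cos(theta_bar) + A * math.cos(2.0 * math.pi * f * t) / (Rc * L)

# Mean angle of 20 degrees: the profile starts exactly at R + r0 on the defect.
h, Z0 = meniscus(math.radians(20.0))
```

With these parameters Z0 is about 1.2 µm, i.e. roughly ten times the probe radius, so the lubrication region near the wall is long compared with the fiber diameter.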
Using the lubrication approximation, we finally have ∂_z P = (η/r) ∂_r(r ∂_r v), where P is the hydrodynamic pressure and v is the velocity component in the z direction. Combining the mass conservation equation, ∂_t(π h²) + ∂_z q = 0, where q is the local flow rate through a liquid section of normal z, with the no-slip (at r = R) and free-interface (at r = h) boundary conditions yields the velocity profile:

v(r,z,t) = 2 [R² + 2 h² ln(r/R) − r²] ∫₀^z ∂_t(h²) du / [R⁴ + 3 h⁴ − 4 h² R² − 4 h⁴ ln(h/R)].  (4)

From Eq. (4) we derive the expression of β_men:

β_men(⟨θ⟩) = (4πη / (A²ω²)) ⟨ ∫₀^{Z_0} ∫_R^h (∂_r v)² r dr dz ⟩_t,  (5)

where ⟨·⟩_t designates the temporal average over an oscillation cycle (see the Supplemental Material [12]). Figure 2 displays an example of the viscous stress field (color gradient) and velocity profile (vertical dark arrows) inside a nanomeniscus, computed from Eqs. (1), (3), and (4) for a fiber of radius R = 85 nm and a defect with r_0 = 10 nm, under typical operating conditions (f = 65 kHz and A = 10 nm). We observe that the stress is essentially localized at the fiber wall and is maximum at a distance of the order of R beneath the contact line. Interestingly, the meniscus oscillation does not lead to any stress singularity at the contact line.

FIG. 3. Normalized friction coefficient β_men/η plotted as a function of ⟨θ⟩ [see Eq. (5)]. The dashed line is the theoretical model, and the experimental dotted curves were performed over all the studied liquids, before and after UV/O3 treatment, with R = 85 nm, A = 18 nm, and r_0 = 40 nm. The values of the free parameters used are θ_break = 18.5°, 12.6°, 15.1°, 9.5°, 10.9°, and 14.9° and β_bottom/ηR = 8.7, 4.5, 10.2, 8.9, 13.2, and 9.1 for 1EG, 2EG, and 3EG, before and after UV/O3 treatment, respectively.
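The velocity profile of Eq. (4) can be checked numerically against its three defining constraints: no slip at r = R, zero shear at the free interface r = h, and mass conservation, which requires the radial shape factor of Eq. (4) to integrate to −π over a section (so that q(z) = −π ∫₀^z ∂_t(h²) du). A sketch with arbitrary R and h, since the check is dimensionless:

```python
import numpy as np

def velocity_shape(r, R, h):
    """Radial shape of Eq. (4): v(r,z,t) = shape(r) * integral_0^z d_t(h^2) du."""
    D = R**4 + 3*h**4 - 4*h**2*R**2 - 4*h**4*np.log(h/R)
    return 2.0 * (R**2 + 2*h**2*np.log(r/R) - r**2) / D

R, h = 1.0, 3.0                    # arbitrary radii with h > R
r = np.linspace(R, h, 20001)
v = velocity_shape(r, R, h)

no_slip = v[0]                                    # v(R) should vanish
shear_at_h = (v[-1] - v[-2]) / (r[-1] - r[-2])    # dv/dr should vanish at r = h
g = v * 2*np.pi*r                                 # integrand of the flow rate
section_integral = float(np.sum(0.5*(g[1:] + g[:-1]) * np.diff(r)))  # trapezoid
print(no_slip, shear_at_h, section_integral)      # 0, ~0, ~ -pi
```

The section integral equals −π analytically, which is exactly the mass-conservation statement quoted above.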
It does not require the introduction of slippage in the vicinity of the (moving) contact line, as in the case of wetting dynamics [START_REF] Kirkinis | Hydrodynamic Theory of Liquid Slippage on a Solid Substrate Near a Moving Contact Line[END_REF][START_REF] Thompson | Simulations of Contact-Line Motion: Slip and the Dynamic Contact Angle[END_REF]. We therefore used the standard no-slip boundary condition, validated by the molecular-scale values of slip lengths measured on hydrophilic surfaces [START_REF] Bocquet | Nanofluidics, from bulk to interfaces[END_REF]. The viscous stress maps also allow one to check a posteriori the interface profile hypothesis. The local capillary number is Ca_local = η ∂_z v / ΔP, where ΔP ≃ γ/R − γ/(R + A) ≃ Aγ/R². Taking a maximum value of η ∂_z v = 3000 Pa, obtained for 10 nm defects [see Fig. 2(b)], we find that Ca_local ≲ 5 × 10⁻², thus validating the hypothesis. The fact that the viscous stress strongly decays when z becomes of the order of a few probe radii also strengthens the lubrication approximation, which is only valid for small surface gradients (∂_z h ≪ 1). When the mean contact angle ⟨θ⟩ is decreased, a strong increase of the viscous stress is observed but its localization remains mostly unchanged (see the Supplemental Material [12]). Another striking result is the influence of the defect height r_0: for contact angles close to the critical one, a reduction in size of the defect significantly increases the viscous stress but also affects its localization, which becomes concentrated closer to the contact line as r_0 is decreased (Fig. 2). This effect is not straightforward and may have important consequences for wetting on surfaces with defects. Finally, the integration of the stress according to Eq. (5) leads to the normalized friction coefficient β_men/η as a function of ⟨θ⟩, an example of which is plotted in Fig. 3 (dashed line).
A significant increase of β_men is observed for decreasing contact angles, in agreement with the experimental observations.

V. DISCUSSION

To quantitatively confront the FM-AFM experiments with the theoretical model, we use the force signal to determine the experimental contact angles θ. We assume that, due to the inhomogeneous thickness of the platinum rings, the meniscus depins from the defect at a contact angle θ_break larger than the θ_crit value expected for an ideal defect. The maximum force before depinning then reads F_max = 2πγ(R + r_0) cos θ_break, which allows one to calculate the experimental contact angle for any immersion depth d using cos θ = (F/F_max) cos θ_break. The latter equation enables us to determine the contact angle at each d position without using the cantilever stiffness k, which is only known to within 20% error. For each experiment, we make a linear fit of the friction coefficient curve taking into account only the regions which are not influenced by the defects, such as, for example, the portion between points (iv) and (v) in Fig. 1(b). The subtraction of this fit allows one to dispose of the viscous layer contribution on the side of the fiber, leaving only β_men and a constant term induced by the dissipation associated with the bottom of the tip, called β_bottom.
The data are then fitted by computing the two free parameters β_bottom and θ_break which minimize the standard deviation between the experimental data and the theoretical curve [Eq. (5)]. The routine is performed with MATLAB, using the Curve Fitting toolbox. It (independently) determines the values of the adjusting parameters β_bottom and θ_break using the nonlinear least-squares method. As for R and r_0, we use effective values measured by SEM. FM experiments were then performed over all the studied liquids. More than 90 experiments were carried out with two different home-made probes (R = 80 nm and 85 nm), defect thicknesses r_0 between 10 and 50 nm, and oscillation amplitudes A ranging from 5 to 35 nm. Additionally, experiments were performed before and after surface cleaning by UV/O3 treatment to assess the influence of tip wettability. As an example, Fig. 3 displays six curves performed with three different liquids, before and after UV/O3 treatment, on the same defect (R = 85 nm and r_0 = 40 nm) with an amplitude A = 18 nm. The agreement between the experimental data and the theoretical model is remarkable. A 10-fold enhancement of dissipation is observed when the contact angle is decreased from 50° to 10°. As expected, the 5 min surface cleaning does not affect the dissipation process, since all curves superpose on a same master curve. Yet ozone cleaning has a strong impact on the θ_break values. The hydrophilic surfaces obtained after UV/O3 treatment lead to a strong pinning which makes it possible to reach smaller contact angle values. For example, for 1EG θ_break decreases from 18.5° to 9.5°, the latter value being very close to the value of θ_crit = 9.4°. Consequently, the dissipation can reach larger values after ozone treatment. This is a common observation on all the measurements.
When the tip is more hydrophobic, the liquid may detach between the dots forming the defect before the θ_crit value is reached. Note that, while the model is developed for small contact angles, confrontation with experiments demonstrates that it remains valid up to θ_break ∼ 50°, values giving a weak dissipation. This is consistent with previous observations that the lubrication approximation yields good predictions for moderately large contact angles [START_REF] Bonn | Wetting and spreading[END_REF]. In order to further discuss the influence of the various parameters and the resulting values of the fitting variables θ_break and β_bottom, we report in Fig. 4 a comparison between the theoretical model and FM experiments performed on 3EG for (a) different defect thicknesses and (b) various oscillation amplitudes. Figure 4(a) shows that the ring thickness r_0 has a low impact on the friction coefficient curve for 30 nm ≤ r_0 ≤ 50 nm. Nevertheless, a systematic evolution of θ_break is observed: larger defect thicknesses lead to a stronger pinning of the defect, which results in a smaller θ_break value, as marked by the arrows on the curves. We also found that the oscillation amplitude plays a significant role only for contact angles close to θ_crit. Therefore its influence can be noticed only after the UV/O3 treatment. The theoretical model reproduces well the influence of amplitude observed for contact angles smaller than 15° [see Fig. 4(b)]. A larger amplitude increases the value of β_men at low ⟨θ⟩ and also leads to an increase of the θ_break value, a general trend observed in all experiments.

FIG. 5. Superposition of 30 experimental curves. In order to visualize the different curves, the color is related to the θ_break value. The range of theoretical values is limited by two solid lines (r_0 = 5 nm, A = 33 nm for the higher one and r_0 = 50 nm, A = 6 nm for the lower one). Inset: histogram of the β_bottom values extracted from the experimental data.
These results show that the experimental conditions, namely the defect size r_0, the oscillation amplitude A, and the surface wettability, have a small influence on the shape of the friction coefficient as a function of the contact angle. We therefore report in Fig. 5 the 30 curves obtained using different tips, defects, liquids, and amplitudes. All curves superimpose in a rather thin zone which is nicely bounded by the theoretical curves giving the extreme cases within the range of experimental conditions (10 nm ≤ r_0 ≤ 50 nm and 6 nm ≤ A ≤ 33 nm). The highest dissipation is obtained for a small defect and a high amplitude (r_0 = 5 nm and A = 33 nm). From all the measurements (more than 90), we extracted the values of the two adjustable parameters, namely θ_break and β_bottom. The value of θ_break gives an indication of the pinning behavior. Strong pinning, which corresponds to low θ_break values, is reached for large defects on hydrophilic tips under weak forcing. This trend, consistent with macroscopic expectation, therefore remains valid down to nanometer-scale defects. In the optimal case, the θ_crit value expected for an ideal defect could be approached [see Fig. 4(c)]. Dynamic effects are also probably involved in the depinning transition, since three liquids with similar surface tension and contact angle but varying viscosities show different pinning behaviors. This result, which has important consequences for the description of wetting dynamics on real surfaces, requires further investigation. Unlike θ_break, β_bottom does not show any systematic influence of amplitude, defect size, or wettability, as expected from the model. Statistics over all experiments (see inset of Fig. 5) show that β_bottom is proportional to the liquid viscosity and is centered around a mean value β_bottom/(ηR) = 7.
This is consistent with the expected values for either a flat end or a hemispherical end, leading to β_bottom = 8ηR [START_REF] Zhang | Oscillatory motions of circular disks and nearly spherical particles in viscous flows[END_REF] or β_bottom = 3πηR, respectively. The large dispersion comes from the fact that the tip end is ill-defined and, moreover, may evolve with time, since measurements on hard surfaces are required after each series of measurements for calibration purposes. This hinders a more quantitative comparison with the theory.

VI. CONCLUSION

In conclusion, this work provides a comprehensive investigation of the viscous dissipation in anchored oscillating menisci. We find an excellent agreement between the experimental results and our lubrication-based theoretical model describing the flow pattern inside the oscillating meniscus. The confinement induced by the stretching of the meniscus leads to a strong increase of viscous stress, which accounts for the surge of dissipated energy observed at small angles. Note that this effect is amplified for small defect sizes, in which case the stress is strongly localized at the contact line, with important consequences for the wetting dynamics on surfaces with defects. The fabrication of artificial nanometric defects also gives new insights into the depinning of the contact line, which occurs at a contact angle value θ_break larger than the theoretical one, θ_crit, obtained for a perfect pinning. The latter value could be approached using hydrophilic tips, showing that the pinning becomes stronger as the oscillation amplitude A decreases and the defect size r_0 increases.
This study demonstrates that FM-AFM combined with the nanofabrication of dedicated probes with controlled defects is a unique tool for quantitative measurements of dissipation in confined liquids, down to the nanometer scale, and paves the way for a systematic study of open questions in wetting science regarding the extra dissipation which occurs when the contact line starts to move. In particular, our approach brings new insights into the role of surface defects, their pinning behavior, and the associated induced dissipation, down to the nanometer scale.

FIG. 1. FM-AFM spectroscopy curves performed on a 3EG liquid drop. (a) Force F and (b) friction coefficient β as a function of the immersion depth d. (c) SEM image of the 3.2 µm long and 170 nm diameter probe, covered by three platinum rings of thicknesses r_0 = 10, 15, and 40 nm, from bottom to top. (d) Zoom on the friction coefficient curve on the second defect, with sketches of the meniscus.

FIG. 2. (a) Oscillating meniscus anchored on a defect, displayed in the frame of reference of the fiber. The velocity profile (black arrows) is calculated from Eq. (4). The stress field η ∂_r v (color gradient) is computed for R = 100 nm, r_0 = 40 nm, l_c = 2 mm, A = 10 nm, f = 65 kHz, ⟨θ⟩ = θ_crit = 6.73°, and η = 30 mPa·s. Color bar in Pa. (b) Same with r_0 = 10 nm and ⟨θ⟩ = θ_crit = 7.5°.

FIG. 4. Normalized friction coefficient β_men/η vs mean contact angle ⟨θ⟩ for different operating conditions. The dashed lines are plots of the theoretical model [Eq. (5)]. (a) Influence of ring thickness r_0 on 2EG for A = 6 nm. The arrows indicate the value of θ_break. The values of the free parameters used are θ_break = 19.6°, 15°, and 12.5° and β_bottom/ηR = 8.6, 8.2, and 7 for r_0 = 30, 40, and 50 nm, respectively. (b) Influence of oscillation amplitude A for 1EG on a defect with r_0 = 40 nm: A = 6 nm, 17.7 nm, and 29.5 nm are plotted in blue, green, and yellow, respectively.
The values of the free parameters used are θ_break = 7.9°, 9.5°, and 11.5° and β_bottom/ηR = 0.13, 0.89, and 2.7 for A = 6 nm, 17.7 nm, and 29.5 nm, respectively. Inset: plot of θ_break (symbols) and θ_crit (solid line) as a function of the oscillation amplitude for a defect of thickness r_0 = 40 nm. The symbol size corresponds to the error bar in the θ_break measurements.

ACKNOWLEDGMENTS

The authors thank P. Salles for his help in the development of tip fabrication procedures, Dominique Anne-Archard for viscosity measurements, and J.-P. Aimé, D. Legendre, and E. Raphaël for fruitful discussions. This study has been partially supported through the ANR by the NANOFLUIDYN project (Grant No. ANR-13-BS10-0009).
Source: HAL hal-01754878 (2018), https://hal.science/hal-01754878/file/PapierNESactifR2.pdf
Experimental study of a hybrid electro-acoustic nonlinear membrane absorber

P. Y. Bryk, S. Bellizzi, R. Côte (email: [email protected])

Keywords: Noise Reduction, Hybrid Absorber, Nonlinear Energy Sink, Electroacoustic Absorber, High Sound Level, Low Frequency

A hybrid electro-acoustic nonlinear membrane absorber working as a nonlinear energy sink (hereafter named EA-NES) is described. The device is composed of a thin circular visco-elastic membrane working as an essentially cubic oscillator. One face of the membrane is coupled to the acoustic field to be reduced and the other face is enclosed. The enclosure includes a loudspeaker for the control of the acoustic pressure felt by the rear face of the membrane through proportional feedback control. An experimental set-up has been developed where the EA-NES is weakly coupled to a linear acoustic system. The linear acoustic system is an open-ended tube, coupled on one side to the EA-NES by a box, and on the other side to a source loudspeaker by another box. Only sinusoidal forcing is considered. It is shown that the EA-NES is able to perform resonance capture with the acoustic field, resulting in noise reduction by targeted energy transfer, and to operate in a large frequency band, tuning itself passively to any linear system. We demonstrate the ability of the feedback gain defining the active loop to modify the resonance frequency of the EA-NES, which is a key factor to tune the triggering threshold of energy pumping. The novelty of this work is to combine active control with passive nonlinear energy transfer in order to improve it. In this paper, only experimental results are analyzed.

Introduction

The reduction of noise and vibration at low frequencies is still nowadays a main issue in many fields of engineering. In order to overcome this issue, a new concept of absorbers including nonlinear behavior has been developed in the past decade.
This type of absorber is based on the principle of "Targeted Energy Transfer" (TET), also named "energy pumping" [START_REF] Vakakis | Nonlinear targeted energy transfer in mechanical and structural systems[END_REF]. TET is an irreversible transfer of the vibrational energy from an input subsystem to a nonlinear attachment (the absorber) called a Nonlinear Energy Sink (NES). TET makes it possible to reduce undesirable large vibration amplitudes of structures or acoustic modes. Nonlinear energy transfer results from nonlinear mode bifurcations, or through spatial energy localization by formation of nonlinear normal modes. The phenomenon can be described as a 1:1 resonance capture [START_REF] Vakakis | Energy pumping in nonlinear mechanical oscillators: Part II -Resonance capture[END_REF] and, considering harmonic forcing, as response regimes characterized in terms of periodic and Strongly Modulated Responses (SMR) [START_REF] Starosvetsky | Dynamics of a strongly nonlinear vibration absorber coupled to a harmonically excited two-degree-of-freedom system[END_REF]. The basic NES generally consists of a light mass, an essentially nonlinear spring, and a viscous linear damper. In the field of structural vibration, a wide variety of NES designs with different types of stiffness (cubic, non-polynomial, non-smooth nonlinearities, ...) has been proposed [START_REF] Gourdon | Nonlinear energy pumping under transient forcing with strongly nonlinear coupling: Theoretical and experimental results[END_REF][START_REF] Sigalov | Resonance captures and targeted energy transfers in an inertially-coupled rotational nonlinear energy sink[END_REF][START_REF] Gourc | Quenching chatter instability in turning process with a vibro-impact nonlinear energy sink[END_REF][START_REF] Mattei | Nonlinear targeted energy transfer of two coupled cantilever beams coupled to a bistable light attachment[END_REF].
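The TET mechanism described above can be illustrated with a toy two-degree-of-freedom model: a lightly damped linear primary oscillator coupled to a light mass through an essentially cubic spring and a linear damper (the NES). All parameter values below are illustrative, not taken from the paper; the sketch integrates the free response after an impulsive excitation of the primary system and checks the energy balance, in particular that part of the input energy is irreversibly dissipated in the NES damper.

```python
import numpy as np

# Toy 2-DOF system (illustrative values): primary oscillator + cubic NES
m1, k1, c1 = 1.0, 1.0, 0.002     # primary: mass, stiffness, damping
m2, k3, c2 = 0.05, 1.0, 0.01     # NES: mass, cubic stiffness, damping

def rhs(y):
    x1, v1, x2, v2 = y
    f_nl = k3 * (x1 - x2)**3 + c2 * (v1 - v2)   # cubic spring + damper coupling
    return np.array([v1, (-k1*x1 - c1*v1 - f_nl) / m1, v2, f_nl / m2])

def rk4_step(y, dt):
    a = rhs(y); b = rhs(y + dt/2*a)
    c = rhs(y + dt/2*b); d = rhs(y + dt*c)
    return y + dt/6*(a + 2*b + 2*c + d)

def energy(y):
    x1, v1, x2, v2 = y
    return 0.5*(m1*v1**2 + m2*v2**2 + k1*x1**2) + 0.25*k3*(x1 - x2)**4

def power(y):                    # instantaneous dissipated power in each damper
    _, v1, _, v2 = y
    return c1*v1**2, c2*(v1 - v2)**2

dt, nsteps = 0.01, 20000                 # 200 time units
y = np.array([0.0, 0.5, 0.0, 0.0])      # impulsive excitation of the primary
E0, diss1, diss2 = energy(y), 0.0, 0.0
for _ in range(nsteps):
    p1a, p2a = power(y)
    y = rk4_step(y, dt)
    p1b, p2b = power(y)
    diss1 += dt * 0.5 * (p1a + p1b)      # trapezoidal rule for dissipated energy
    diss2 += dt * 0.5 * (p2a + p2b)
Ef = energy(y)
print(E0, Ef, diss1, diss2)
```

The energy balance E0 = Ef + diss1 + diss2 holds to integration accuracy, and diss2 > 0 quantifies the share of the initial energy pumped into and dissipated by the nonlinear attachment.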
In the acoustic field, to the best of our knowledge, only one type of vibro-acoustic NES design has been tested; see the series of papers [START_REF] Cochelin | Experimental evidence of energy pumping in acoustics[END_REF][START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF][START_REF] Mariani | Toward an adjustable nonlinear low frequency acoustic absorber[END_REF][START_REF] Cote | Experimental evidence of simultaneous multi-resonance noise reduction using an absorber with essential nonlinearity under two excitation frequencies[END_REF][START_REF] Shao | Theoretical and numerical study of targeted energy transfer inside an acoustic cavity by a non-linear membrane absorber[END_REF]. It was demonstrated that a passive control of sound at low frequency can be achieved using a vibroacoustic coupling between the acoustic field (the primary system) and a geometrically nonlinear thin clamped structure (the NES). In [START_REF] Cochelin | Experimental evidence of energy pumping in acoustics[END_REF][START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF][START_REF] Shao | Theoretical and numerical study of targeted energy transfer inside an acoustic cavity by a non-linear membrane absorber[END_REF], the thin baffled structure consists of a simple thin circular latex (visco-elastic) membrane, whereas in [START_REF] Mariani | Toward an adjustable nonlinear low frequency acoustic absorber[END_REF] a loudspeaker used as a suspended piston is considered. In both cases, the thin baffled structure has to be part of the frontier of the closed acoustic domain (to be controlled). Hence only one face (named the front face) is exposed to the acoustic field, whereas the other face (the rear face) radiates outside. This type of device therefore has to be modified to be used in cavity noise reduction.
A simple way to do this is to enclose the rear face of the thin clamped structure, limiting the sound radiation. This principle has been used to design electroacoustic absorbers based on the use of an enclosed loudspeaker including an electric load that shunts the loudspeaker electrical terminals [START_REF] Boulandet | Optimization of electroacoustic absorbers by means of designed experiments[END_REF]. An electroacoustic absorber can be either passive or active in terms of its noise suppression characteristics, including, as in [START_REF] Lissek | Electroacoustic absorbers : Bridging the gap between shunt loudspeaker and active sound absorption[END_REF][START_REF] Boulandet | Toward broad band electroacoustic resonators through optimized feedback control strategies[END_REF], pressure and/or velocity feedback techniques. Loudspeakers have also been used to design devices to control the normal surface impedance [START_REF] Lacour | Preliminary experiments on noise reduction in cavities using active impedance changes[END_REF]. Two approaches have been developed. The first is referred to as direct control: the acoustic pressure is measured close to the diaphragm of the loudspeaker and used to produce the desired impedance. In the second approach, passive and active means are combined: the rear face of a porous layer is actively controlled so as to make the front face normal impedance take a prescribed value. In this paper, a hybrid passive/active nonlinear absorber is developed. The absorber is composed of a clamped thin circular visco-elastic membrane with its rear face enclosed. The acoustic field inside the hood (i.e. the acoustic load of the rear face) is controlled using a loudspeaker with proportional feedback control. Three objectives are assigned. Firstly, the device has to be designed such that it can be used inside a cavity. Secondly, noise reduction must mainly result from TET due to the nonlinear behavior of the membrane, thus defining a new concept of NES.
Thirdly, the control loudspeaker has to be used as a linear electroacoustic absorber inside the hood. The control loudspeaker does not act directly on the acoustic field to be reduced. It only modifies the relative acoustic load exciting the membrane. This absorber is hereafter named hybrid electroacoustic NES (EA-NES). The paper is organized as follows. In Section 2, the principle of functioning of the EA-NES under study is described, considering each sub-structure separately. In Section 3, we first describe the experimental set-up. It is composed of an acoustic field (in a pipe, excited by a loudspeaker) coupled to the EA-NES. Then we check the stability analysis of the feedback loop and perform a frequency analysis under broadband excitation. In Section 4, we analyze in detail the responses under sinusoidal excitations and we bring some confirmations of the efficiency of the EA-NES.

The Hybrid Electro-Acoustic NES

General presentation

The EA-NES is shown in Fig. 1. It is based on the conjugate functioning of three elements:

• The clamped membrane, which interacts with the acoustic field in order to provide noise attenuation in its non-linear range;

• The hood, by which the EA-NES can work inside a surrounding acoustic field, unlike previously developed NES (see for example [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]);

• The feedback loop, which reduces the pressure in the hood; it allows the use of a small hood volume and also tunes the linear stiffness and damping behaviour of the EA-NES.

2.2. About each subsystem of the EA-NES

The clamped membrane

The clamped membrane with its supporting device is shown in Fig. 1 (right). It was already used in [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]. The device makes it possible to change the diameter of the membrane (from 40 to 80 mm).
It includes a sliding system used to apply a constant and permanent in-plane pre-stress to the membrane. Once the pre-stress is set, the membrane is clamped to the supporting device. Applying an in-plane pre-stress modifies the modal component of the associated underlying linear system. Coupled to an undesirable acoustic field, the clamped membrane device can be used as an acoustic NES absorber [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF] to reduce the acoustic pressure. Direct [START_REF] Shao | Theoretical and numerical study of targeted energy transfer inside an acoustic cavity by a non-linear membrane absorber[END_REF] or indirect [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF] couplings are possible. If the clamped membrane device is properly designed, TET is obtained thanks to resonance capture phenomena. Resonance capture occurs between two nonlinear modes resulting from a weak coupling between the nonlinear subsystem (the NES) and the linear subsystem (the acoustic field). At low excitation level, the two nonlinear modes coincide respectively with the linear mode of the NES and the target mode of the acoustic field. Hence, when the in-plane pre-stress of the clamped membrane sets the resonance frequency of the NES lower than the resonance frequency of the target mode of the acoustic field, TET is possible above a threshold excitation level. Furthermore, Bellet [START_REF] Bellet | Vers une nouvelle technique de controle passif du bruit : Absorbeur dynamique non lineaire et pompage energetique[END_REF] has also shown that the gap between the two resonance frequencies has an impact on the threshold of the TET: the closer the two resonance frequencies are, the smaller the TET threshold is.
As a subsystem of the EA-NES, the clamped membrane provides the coupling between the EA-NES and the acoustic field to be reduced. It is also responsible for the resonance capture phenomena (as a nonlinear component). The other subsystems (the hood and the feedback loop) must preserve the essential properties of a NES, namely a weakly damped system with a hardening nonlinear mode, and with a frequency at low vibratory amplitude smaller than the resonance frequency of the target mode of the acoustic field.

The hood

The hood is a wooden cubic box with the clamped membrane fixed on one face and an enclosed control loudspeaker on the opposite face (see Fig. 1 (left)). The hood has been added to meet two key objectives. Firstly, with the hood, the EA-NES can be used to reduce the noise in a cavity, where only the front face of the clamped membrane is loaded by the undesirable acoustic field. Secondly, the hood creates a difference of pressure between the two faces of the membrane, which makes it possible to place it anywhere inside the acoustic field. When the acoustic wavelength is large in comparison with the largest dimension of the hood, the acoustic field can be considered as homogeneous in the hood and the acoustic pressure p_e(t) can be related to the relative variation of the volume as

p_e(t) = −ρ_a c_0² ΔV_e(t) / V_e,  (1)

where V_e denotes the volume of the box at rest, ΔV_e(t) the variation of the volume, ρ_a the volumetric mass of the air, and c_0 the celerity of sound in the air. Assuming the motion of the membrane is primarily defined on its first transversal mode, Eq. (1) reduces to

p_e(t) = (ρ_a c_0² / V_e) (S_m^e x_m(t) − S_ls^e x_ls(t)),  (2)

where x_m denotes the transversal membrane motion with S_m^e the effective area of the membrane, and x_ls denotes the transversal motion of the diaphragm of the loudspeaker with S_ls^e its effective area. Eq.
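Eq. (2) and the compliance argument can be illustrated numerically. The values of ρ_a, c_0, the hood volume, and the effective areas below are assumed for illustration, not those of the actual device; the sketch checks that in-phase motion with matched volume displacements cancels p_e, and that the equivalent stiffness added to the membrane, k_hood = ρ_a c_0² (S_m^e)² / V_e, grows as the hood volume shrinks:

```python
import math

rho_a, c0 = 1.2, 343.0        # air density (kg/m^3) and speed of sound (m/s)
V_e = 8e-3                    # hood volume (m^3), assumed ~20 cm cube
S_m, S_ls = 4e-3, 8e-3        # assumed effective areas of membrane / loudspeaker (m^2)

def p_hood(x_m, x_ls):
    """Acoustic pressure inside the hood, Eq. (2)."""
    return rho_a * c0**2 / V_e * (S_m * x_m - S_ls * x_ls)

def k_hood(V):
    """Equivalent stiffness seen by the membrane when the loudspeaker is still:
    k = S_m * d(p_e)/d(x_m) = rho_a c0^2 S_m^2 / V."""
    return rho_a * c0**2 * S_m**2 / V

print(p_hood(1e-3, 0.0))             # membrane alone -> overpressure (Pa)
print(p_hood(1e-3, S_m/S_ls * 1e-3)) # matched in-phase motion cancels p_e
print(k_hood(8e-3), k_hood(4e-3))    # halving the volume doubles k_hood
```

This is exactly the statement made below Eq. (2): the smaller the hood volume, the stiffer the air spring, hence the higher the resonance frequency of the underlying linear EA-NES, and p_e can be driven to zero by in-phase loudspeaker motion.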
(2) means that the air inside the hood is equivalent to an acoustic compliance that resists the motion of the membrane and hardens it. This compliance being inversely related to the volume V_e, it results that the smaller the volume of the hood is, the higher the resonance frequency of the underlying linear EA-NES is. Finally, Eq. (2) also shows that the acoustic pressure p_e(t) can be reduced to zero if x_ls and x_m are in phase. The feedback loop is composed of an enclosed loudspeaker inside the hood, a microphone placed at the geometrical center of the hood, and a control unit (see Fig. 2). The reference of the loop (u_t(t) = 0) corresponds to a reduction of the pressure p_e(t) down to zero, which is only possible with an infinite value of K, as illustrated by the following total loop transfer in Laplace variables,

P_e(s) = (1 − H_p(s) K H_F(s) H_s(s))^{-1} P_p(s),  (3)

between the acoustic pressure p_p(t) and the acoustic pressure p_e(t). However, the choice of the gain K is limited by instability phenomena, which usually exist for positive gains K at high frequencies (resulting from the acoustic modes of the cavity) and for negative gains K at low frequency (< 100 Hz) (resulting from the control loudspeaker and membrane dynamics). The influence of the acoustic modes of the cavity is reduced by placing the microphone at the geometrical center of the hood. This location is near the pressure node of some of the first modes. The stability margin analysis results from the properties of the Open Loop Transfer Function (OLTF) between (t) and u_M(t), defined by H_F(s) H_S(s) H_p(s) K (see Fig. 2). The variation range of the gain values can be increased by selecting the filter H_F(s) adequately. This is what we do in the next section.

Choice and sizing of the components of the EA-NES

Our objective is to design an EA-NES able to interact with primary acoustic fields in the frequency range [40, 100] Hz.
The clamped latex membrane was designed following the recommendations discussed in [Bellet et al.]. The membrane has a radius of 0.05 m and a thickness of 0.0002 m. These dimensions guarantee a proper functioning of the clamped latex membrane as a NES when the membrane is coupled to a resonant tube (without a hood, i.e. alone) [Bellet et al., Cote et al.]. The sliding system is adjusted such that the pre-stress of the membrane gives the coupled setup (EA-NES and primary system) a resonance associated with the EA-NES at a frequency around 70 Hz (see Fig. 5).

Experimental set-up and stability analysis

Experimental set-up

The experimental set-up shown in Fig. 3 consists in a vibroacoustic system (also named primary system) coupled to the EA-NES. The same set-up was used in [Bellet et al.]. The primary system is made of an open interchangeable U-shaped tube, whose length L can be adjusted, coupled at each end to coupling boxes. The diameter of the tube is 0.095 m. One coupling box (box 1) contains an acoustic source, and the EA-NES (as described in the previous section) is mounted on one face of the other coupling box (box 2). The volume of coupling box 1 is V_1 = 27 × 10^-3 m^3.

During the measurement, a target voltage signal e(t) from a generator (not shown in Fig. 3) and a power amplifier (TIRA, BAA120) provide an input current or input voltage signal to the source loudspeaker (depending on the selected driving mode: current- or voltage-feedback control mode). The responses of the system are recorded simultaneously (using a multi-channel analyzer/recorder OROS, OR38): the acoustic pressures p_tube(t) (at the mid-length of the tube), p_2(t) and p_e(t) (see Fig. 3, right).

The lowest resonance, around 20 Hz, results from the whole coupled system (the tube acts as an acoustic mass coupled to the coupling boxes acting like springs). The next two resonances, around 72 Hz and 87 Hz, as seen in Fig. 5, are respectively assigned to the EA-NES and to the first mode of the tube alone. Beyond 100 Hz, the resonance frequencies coming from the higher modes of the tube and the coupling boxes appear. From now on we focus on a frequency span below 100 Hz.

Stability of the feedback loop

As a preliminary step, the OLTF, H_F(s)H_S(s)H_p(s)K at s = j2πf, is measured with the EA-NES coupled to the tube of L_1 length with a unity gain (K = 1) (see Fig. 3). Selecting the gain K in the range [0, 200] results in an OLTF with a gain margin greater than or equal to 6 dB, the 0 dB gain margin corresponding to K = 400. Gain margins and phase margins with the associated critical frequencies are reported in Table 1 for several values of K. Similarly, selecting the gain K in the range [-64, 0] results in an OLTF with a gain margin from 0 dB to 36 dB.

Note that one can also modify the use of the feedback loop in order to amplify p_e(t) instead of reducing it (hardening the clamped membrane behaviour). To achieve this goal, the loudspeaker is fed with a negative value of the gain K. This case is equivalent to studying the feedback loop with the OLTF shifted by 180 degrees, which gives the different phase and gain margins reported in Table 1 for K = -1. One can notice that the gain margin for K = 1 is higher than the gain margin for K = -1. It means the feedback loop can soften the membrane more than it can harden it.
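A sketch of how gain margins such as those in Table 1 can be read off a sampled OLTF. The second-order-plus-delay loop model below is a stand-in for the measured transfer, and its parameters are invented for illustration.

```python
import cmath
import math

def gain_margin(freqs, H):
    # Gain margin (dB) and its frequency: first crossing of the negative
    # real axis (phase = -180 deg) of the sampled open-loop transfer H
    for i in range(1, len(freqs)):
        if H[i - 1].imag * H[i].imag <= 0.0 and H[i].real < 0.0:
            return -20.0 * math.log10(abs(H[i])), freqs[i]
    return None, None

def oltf(f, K, f0=70.0, zeta=0.5, tau=1e-3):
    # Stand-in OLTF: resonant second-order plant with a small loop delay tau,
    # scaled by the control gain K (illustrative parameters)
    plant = 1.0 / complex(1.0 - (f / f0) ** 2, 2.0 * zeta * f / f0)
    return K * plant * cmath.exp(-2j * math.pi * f * tau)

freqs = [0.1 * n for n in range(1, 10000)]   # 0.1 Hz to ~1 kHz
gm_1, f_gm = gain_margin(freqs, [oltf(f, 1.0) for f in freqs])
```

Since the crossover frequency does not depend on K, multiplying the gain by some factor lowers the gain margin by exactly 20·log10 of that factor, which is why the admissible range of K is bounded (K = 400 gives the 0 dB margin in the experiments).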
The gain K as a tuning parameter of the EA-NES

The objective of this section is to verify that the gain K in the feedback loop can be used to tune the modal component associated with the EA-NES without disturbing the modal component associated with the primary system. The influence of the gain K on the behavior of the EA-NES is analyzed after measuring the FRF p_2/e. Here also the source loudspeaker is driven in voltage-feedback control mode. It is excited with a low-level band-limited white noise.

In terms of modal parameters, these results confirm that the gain K affects simultaneously the resonance frequency and the associated damping ratio of the modal component assigned to the EA-NES. Increasing the gain K reduces the resonance frequency and simultaneously increases the damping ratio. Finally, as expected, negative values of the gain K increase the resonance frequency. The EA-NES is thus able to work as a linear electroacoustic absorber inside the hood with any linear primary system having its resonance frequency in a large frequency range.

Study at high excitation level: efficiency of the EA-NES from TET

Now let us look at the behavior of the coupled system under a sinusoidal forcing when the frequency and the amplitude of the forcing vary. Here the source loudspeaker is driven using a current-feedback control mode, reducing the dissipation introduced in the system by the source loudspeaker [Bortoni et al.]. During the measurement, a target signal e(t) from a generator (TTI TGA1244) (not shown in Fig. 3) and a power amplifier (TIRA, BAA120) provide an input current signal to the source loudspeaker. The target signal e(t) is of the form

e(t) = E cos(2π f_e t + φ_e)    (4)

where f_e denotes the excitation frequency and E is the associated excitation amplitude. The phase φ_e is introduced arbitrarily by the signal generator.
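The tuning action of K described in this section can be caricatured by a single-degree-of-freedom picture in which the feedback softens (K > 0) or hardens (K < 0) the effective membrane stiffness. The sensitivity constant beta below is invented; only the trend (resonance frequency down for K > 0, up for K < 0) reflects the measurements.

```python
import math

def eanes_resonance(K, f0=70.0, beta=8e-4):
    # Effective stiffness k_eff = k0 * (1 - beta*K) shifts the linear
    # resonance to f = f0 * sqrt(1 - beta*K); beta is a made-up
    # loop-sensitivity constant, not identified from the measurements
    return f0 * math.sqrt(1.0 - beta * K)

for K in (-40, 0, 50, 200):
    print(f"K = {K:4d}: f_res = {eanes_resonance(K):.1f} Hz")
```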
A measurement run consists in making a series of experiments, where the value of the scanning frequency f_e is updated for each experiment, while the other parameter E remains unchanged. The duration of an experiment must be limited for practical reasons, but must be long enough to capture the physics of the response. We have chosen a duration of 13 s for each experiment.

In order to characterize the pumping phenomenon, we first describe in detail the results obtained with the EA-NES with K = 200 coupled to the tube of L_1 length, followed by an analysis of the influence of K. Then, the experiment involving the tube of L_2 length is considered.

EA-NES with the tube of L_1 length and K = 200

The behavior of the system with the tube of L_1 length and K = 200 under sinusoidal excitation, scanned around the first mode of the tube, is presented. Concerning the command level of the source loudspeaker, the power amplifier driving the loudspeaker in current-feedback mode provides a nearly constant source loudspeaker current i_s(t), as shown in Fig. 9, where the Root Mean Square (RMS) values of i_s(t) are plotted versus frequency for some excitation amplitudes. Hence the source loudspeaker is not modified by the system (tube + EA-NES) and it plays its full role as a controlled source. Concerning the system response, the RMS value in the steady state regime is extracted from each measurement of the acoustic pressure p_tube(t) and is plotted in Fig. 10(a), showing a peak around f = 83.5 Hz, smaller than the resonance frequency observed at low excitation level. These behaviors were classically reported in NES analyses [Bellet et al.] and can be attributed to the pumping phenomenon and to TET from the primary system to the NES. TET signatures are of two types.
First, TET occurs only when the primary linear system reaches a certain vibration energy threshold; secondly, TET is associated with a quasi-periodic response regime with a slow evolution of the amplitudes (also named Strongly Modulated Regimes (SMR) [Starosvetsky et al.]). This point can be confirmed by an analysis of energy conversion. To analyze the energy conversion occurring from the fundamental frequency f_e to the harmonic frequencies (k f_e for integer k > 1) and to the non harmonic frequencies (f ≠ k f_e), the Fundamental Conversion Ratio (FCR), the Harmonic Conversion Ratio (HCR) and the Non Harmonic Conversion Ratio (NHCR) are used. The definitions of these indicators are recalled in [Cote et al.]. For each signal, FCR, HCR and NHCR are obtained from a Fourier analysis estimated on the steady state response. Basically, the FCR is the proportion of signal energy at the source frequency; for a linear system this indicator should be 100%. The HCR is the proportion of signal energy at the integer harmonics of the source frequency; it can give information about nonlinear effects like saturation. The NHCR is the proportion of signal energy at frequencies outside the integer harmonics of the source frequency; it can give information about nonlinear effects like a loss of periodicity.

As shown in Fig. 10(d), a fraction of the energy is transferred to the non harmonic frequencies. This transfer of energy increases with the excitation amplitude, but the associated RMS level of the acoustic pressure in the tube remains limited (see Fig. 10(a)). In this domain, the responses of the system are no longer periodic and are replaced by the so-called SMR. This domain is the domain where the EA-NES works well.
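The per-run indicators used above can be sketched as follows: the steady-state RMS is taken over the recorded tail of each run, and the FCR/HCR/NHCR split the spectral energy between the fundamental, its integer harmonics and everything else. This is our reading of the indicators; the exact estimators of [Cote et al.] may differ in detail, and the demo signal is synthetic.

```python
import cmath
import math

def steady_state_rms(signal, fs, keep_last_s=7.0):
    # RMS over the last `keep_last_s` seconds of a run (each 13 s experiment
    # keeps only the final 7 s, after the transients have died out)
    tail = signal[-int(keep_last_s * fs):]
    return math.sqrt(sum(x * x for x in tail) / len(tail))

def conversion_ratios(signal, fs, fe, tol_bins=1):
    # FCR / HCR / NHCR: share of spectral energy at fe, at k*fe (k > 1),
    # and at all remaining (non harmonic) frequencies.  Plain O(n^2) DFT,
    # acceptable for a short illustrative record.
    n = len(signal)
    spec = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2)]
    df = fs / n
    total = sum(spec[1:])                       # DC bin ignored
    def band(f):                                # energy near frequency f
        b = round(f / df)
        return sum(spec[max(1, b - tol_bins):b + tol_bins + 1])
    fund = band(fe)
    harm = sum(band(k * fe) for k in range(2, int((fs / 2) / fe)))
    return fund / total, harm / total, (total - fund - harm) / total

# Synthetic steady-state record: fundamental plus a half-amplitude 2nd harmonic
fs, fe = 256, 32
sig = [math.sin(2 * math.pi * fe * t / fs)
       + 0.5 * math.sin(2 * math.pi * 2 * fe * t / fs) for t in range(fs)]
fcr, hcr, nhcr = conversion_ratios(sig, fs, fe)
```

For this signal the energies at f_e and 2f_e are in ratio 1 : 0.25, so FCR ≈ 0.8, HCR ≈ 0.2 and NHCR ≈ 0, as expected for a periodic response.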
Another amplitude-frequency domain, detached from the previous one, is also visible in Fig. 10(c) at low frequencies and high amplitudes, showing that, in this case, energy can be transferred from the fundamental frequency to the high harmonic frequencies. In this domain, the response of the system is periodic but unfortunately the associated RMS level increases, which leads to undesired periodic regimes. This is the domain where the EA-NES is not efficient for noise reduction, and it is characterized by the appearance of undesired periodic regimes. The threshold for the appearance of undesired periodic regimes is denoted E_S. In its efficiency span, the EA-NES leads to a decrease by a factor of two to three (6 to 10 dB) in the acoustic pressure. The influence of the gain K is similar to the influence of the linear stiffness of a classical NES [Bellet et al.].

4.3. EA-NES with the tube of L_2 length: influence of K

The same analysis was also conducted with the tube of length L_2. A very important property of this EA-NES can first be highlighted: its ability to adapt and tune itself to the resonance frequency of different linear systems. To demonstrate this property experimentally, we report in Fig. 16 the ridge lines of the acoustic pressure p_tube measured on the system with the tube of L_1 length and on the system with the tube of L_2 length, using the EA-NES with the same gain value K = 50. In both cases, we observe that the EA-NES works well; the associated efficiency spans ([E_s, E_S]) are [0.5, 1.1] and [1, 2.2], respectively. However, it appears that the EA-NES does not perform noise reduction with the same efficiency for the two tubes. Indeed, with the tube of L_1 length the acoustic pressure in the efficiency span is near 750 Pa, whereas it is about 1500 Pa for the tube of L_2 length.
This difference is due to the fact that the tube of L_2 length has a higher resonance frequency than the tube of L_1 length. In consequence, more energy has to be provided to the tube of L_2 length so that resonance capture occurs [Bellet]. A complete excursion, as in the case of the tube of L_1 length, was not possible due to a limitation of the performance of the acoustic source used in this setup.

Finally, the maximum power consumed by the control loudspeaker is displayed in Fig. 18 for several values of the gain K. In all cases, the power consumed by the loudspeaker increases with the excitation amplitude and varies slowly with the gain K for positive values. The maximum power consumed by the loudspeaker remains below 1.5 W RMS. Thus the EA-NES requires a limited amount of energy to work, which is a good asset for industrial applications.

Conclusion

A new acoustic NES with a controlled acoustic load has been presented. The control of the linear stiffness of the membrane by means of a feedback loop has been validated experimentally. Furthermore, this NES has been tested coupled to two different tubes and has performed acoustic pumping with various efficiencies depending on the gain K. It appears that there is no optimum value of the gain K because it depends on the excitation level. Indeed, one can obtain either a low threshold but a short span when f_NES is close to f_T, or a high threshold with a large span when f_NES is distant from f_T. However, unlike previous passive NES, the frequency of the EA-NES can be easily modified in real time. Future work is ongoing, focusing on using a variable gain K. It would allow optimizing K according to the excitation in order to obtain the strongest sound attenuation.
We are also working on the integration of the EA-NES in a cavity, with a narrow-band noise excitation, in the framework of progress towards applications.

The EA-NES is composed of a clamped circular latex membrane with one face (the front face) exposed to the acoustic field to be reduced and the other one (the rear face) enclosed. The hood includes a feedback loop composed of a microphone, a loudspeaker and a control unit. The feedback loop controls the acoustic pressure inside the hood seen by the rear face of the membrane.

Figure 1: EA-NES: (a) Schematic representation and (b) Front face.

The feedback loop (see Fig. 1, left) is based on a proportional controller following the block diagram displayed in Fig. 2, where H_p (the plant transfer) denotes the transfer function between the tension u_c(t) applied to the control loudspeaker (voltage-feedback control) and the acoustic pressure p_e(t), and H_s the transfer function characterizing the microphone. A part of the acoustic pressure p_e(t) is due to p_p(t) (perturbation term), resulting from the undesirable acoustic field acting on the front face of the clamped membrane. The control unit includes an analogue band-pass filter H_F and a controller that sets a scalar (real) gain K.

The dimensions of the enclosure are 0.38 m × 0.22 m × 0.22 m, which gives V_e = 0.018 m^3. The membrane and the control loudspeaker are fixed respectively on the opposite faces of size 0.22 m × 0.22 m. Note that the pressure p_e(t) inside the volume V_e is equivalent to a compliance as long as the acoustic wavelength is large in comparison with the largest dimension of the enclosure. With λ > 10 × 0.38 m one obtains a maximum frequency of f_max = 89.5 Hz.
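The lumped-compliance limit quoted above follows from λ = c/f; assuming the usual c ≈ 340 m/s for air (the text does not state the value used), the rule λ > 10 × 0.38 m reproduces the stated 89.5 Hz.

```python
C_AIR = 340.0   # speed of sound in air (m/s); assumed, not given in the text

def max_compliance_frequency(largest_dim_m, wavelength_ratio=10.0):
    # Highest frequency at which the enclosure can still be treated as a
    # lumped acoustic compliance: lambda = c/f > ratio * largest dimension
    return C_AIR / (wavelength_ratio * largest_dim_m)

f_max = max_compliance_frequency(0.38)   # hood largest dimension: 0.38 m
```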
Efficient control loudspeakers need the following properties: a high resonance frequency of the driver part (such that the resonance frequency of the enclosed loudspeaker is above the working frequency band of the EA-NES), a large force factor, a linear behavior for large excursions of the diaphragm and an effective diaphragm area as large as possible compared to the effective area of the latex membrane. Note that the area of the diaphragm is also limited by the size of the boxes. The BEYMA 8P300Fe/N (8 inch) loudspeaker family was selected, corresponding to the following Thiele/Small parameter values: m_LS = 19.4 × 10^-3 kg, c_LS = 1.7 N s m^-1, k_LS = 2834.9 N m^-1, S_e,LS = 0.022 m^2, R_e = 6.6 Ω and Bl = 9.21 N A^-1. The rear enclosure of the loudspeaker is chosen as V_LS = 0.0248 m^3. The effective diaphragm area is six times larger than the effective latex membrane area, so for the same volume variation the displacement of the diaphragm is reduced to the same extent with respect to the displacement of the membrane (see Eq. (2)), which should happen in case of a perfect control. The resonance frequency of the driver part alone of the control loudspeaker is 60.8 Hz. It increases to 95 Hz when V_e and V_LS are taken into account. Finally, the other elements of the control unit are a G.R.A.S 40BH microphone, chosen for its high dynamics (up to 181 dB SPL), an analogue band-pass filter KEMO Benchmaster VBF8, and a TIRA BAA 120 amplifier used to set the gain K.

Figure 3: Experimental set-up under study: (a) Picture and (b) Schema (the blue lines define the configuration used to measure the OLTF and the same notations as in Fig. 2 were used).

The acoustic pressures p_2(t) (inside the box 2) and p_e(t) (inside the hood of the EA-NES) are measured, together with p_tube(t), with three microphones (GRAS, 40BH) (see Fig. 3, right). Also recorded (not shown on the schema) are the source loudspeaker current i_s(t) and voltage e_s(t) responses and the control loudspeaker current i_c(t) and voltage e_c(t) responses.
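The quoted free-air resonance of the driver can be checked directly from the Thiele/Small values given above:

```python
import math

M_LS = 19.4e-3    # moving mass (kg), from the Thiele/Small data above
K_LS = 2834.9     # suspension stiffness (N/m)

def driver_resonance(k, m):
    # Free-air resonance of the driver part: f0 = sqrt(k/m) / (2*pi)
    return math.sqrt(k / m) / (2.0 * math.pi)

f0 = driver_resonance(K_LS, M_LS)   # about 60.8 Hz, as stated in the text
```

The increase to 95 Hz when the enclosure volumes V_e and V_LS are included comes from the extra air stiffness acting on the diaphragm, which adds to k_LS.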
The sampling frequency is f_s = 8192 Hz.

Figure 4 shows the Frequency Response Function (FRF), denoted p_2/e, between the voltage e(t) applied to the source loudspeaker (in voltage-feedback control mode) and the acoustic pressure p_2(t).

Figure 4: FRF p_2/e measured with the EA-NES with K = 0 coupled to the tube of L_1 length: (a) Modulus and (b) Phase.

Figure 5 shows the FRFs p_tube/e and p_2/e measured with the tube of L_1 length and with the tube of L_2 length, imposing in both cases the control gain K = 0. For the two tubes, two resonance peaks appear in the frequency band [40, 120] Hz. In both cases (L_1 and L_2 lengths), the first resonance peak is localized around the frequency 70 Hz and can be attributed to the EA-NES. The second resonance peak (around the frequency 87 Hz for the tube of L_1 length and around the frequency 99 Hz for the tube of L_2 length) is primarily attributed to the tube (in accordance with L_1 < L_2). In both cases, they exhibit high response levels.

Figure 5: FRFs (a,c) p_tube/e and (b,d) p_2/e measured with the EA-NES with K = 0 coupled to the tube of L_1 (blue curves) and L_2 (red dashed curves) length: (a,b) Modulus and (c,d) Phase.

Figure 6: Open loop transfer function in the Nyquist domain measured with the EA-NES with unity gain coupled to the tube of L_1 (red dashed curves) and L_2 (blue curves) length.

Figure 7 shows the modulus of the FRF p_2/e with the tube of L_1 length for several values of the gain K. Also reported is the FRF obtained by replacing the EA-NES with the clamped membrane NES alone (no hood).

Figure 7: FRF p_2/e measured with the EA-NES for several values of the gain K and with a clamped membrane NES (red continuous line) coupled to the tube of L_1 length.

Figure 8: FRF p_2/e measured with the EA-NES for several values of the gain K coupled to the tube of L_2 length.

There are two steps in an experiment.
The first step lasts 3 s with no source signal. It permits us to get null initial conditions, whatever happened before. The second step lasts 10 s with the source on, but we record only the last 7 s; the first 3 s generally include the transient effects of the excitation. A measurement test consists in making a series of runs where the value of the amplitude E is updated for each run. Six tests were performed with the tube of L_1 length with the following six values of the gain K: -40, 0, 50, 100, 200 and 400. Five tests were performed with the tube of L_2 length with the following five values of the gain K: -40, 0, 50, 65 and 100. The frequency step δf = 0.25 Hz was used to define the runs in the frequency band [80, 93] Hz for the tube of L_1 length and [90, 105] Hz for the tube of L_2 length. The step in the amplitude band [0.01414, E_max] is equal to 0.07, with E_max = 2.9 for the tube of L_1 length and E_max = 4 for the tube of L_2 length.

Figure 9: RMS values of the source loudspeaker current i_s(t) measured with the EA-NES with K = 200 coupled to the tube of L_1 length for several values of the excitation amplitude E versus excitation frequency.

Figure 10: System with the EA-NES with K = 200 coupled to the tube of L_1 length: (a) RMS values, (b) FCR, (c) HCR and (d) NHCR of the steady state regime of p_tube as a surface level according to frequency and excitation amplitude.

At low excitation amplitudes, the corresponding response regimes are periodic (see Fig. 10(b)), resulting from the linear behavior of the EA-NES. By increasing the excitation amplitude, an amplitude-frequency domain appears, characterized by steady state responses, a fraction of the energy of which is transferred from the fundamental frequency to the non harmonic frequency domain (see Fig. 10(d)).
Figure 11: System with the EA-NES with K = 200 coupled to the tube of L_1 length: Steady state responses of (a) p_tube and (b) p_e versus time and (c) parametric plot (p_tube, p_e) obtained with the excitation frequency f_e = 86.75 Hz and excitation amplitude E = 0.29.

Figure 12: Idem as Fig. 11 with the excitation frequency f_e = 86.75 Hz and excitation amplitude E = 1.5.

Figure 13: Idem as Fig. 11 with the excitation frequency f_e = 83.5 Hz and excitation amplitude E = 2.24.

Figure 16: System with the EA-NES with the gain K = 50 coupled to the tube of (a) L_1 and (b) L_2 length: Ridge line (blue line) of the acoustic pressure p_tube and corresponding resonance frequencies (red line with bullets, y-axis on the right side) versus the excitation amplitude.

Figure 17: System with the EA-NES with several values of the gain K coupled to the tube of L_2 length: Ridge line of the RMS values of p_tube according to the level of excitation.

Figure 17 shows the ridge lines of the acoustic pressure p_tube obtained with five values of the gain K (-40, 0, 50, 65 and 100). The results are similar to those observed with the tube of length L_1 for the configurations with the EA-NES with K = -40, 0 and 50, where the thresholds E_s and E_S are visible. For the configurations with the EA-NES with K > 50, only the threshold E_s is reached.

Figure 18: System with the EA-NES with several values of the gain K coupled to the tube of L_2 length: Maximum value of the power consumed by the control loudspeaker depending on the level of excitation.

Table 1: Gain G_m and phase P_m margins and associated frequencies f_Gm and f_Pm of the feedback loop measured with the EA-NES for several values of the gain K coupled to the tubes of L_1 and L_2 length.

   K   | f_Gm (Hz)      | G_m (dB)     | f_Pm (Hz)     | P_m (deg)
       | L_1     L_2    | L_1    L_2   | L_1     L_2   | L_1    L_2
  -1   | 75      76.25  | 36.1   36.9  | -       -     | 360    360
   1   | 441     436    | 52.    52.7  | -       -     | 360    360
   50  | 441     436    | 18.    18.7  | -       -     | 360    360
   100 | 441     436    | 12.    12.7  | 131.9   133   | 93     102
   200 | 441     436    | 6.     6.7   | 207     203   | 45     40

The threshold E_S is defined as the excitation amplitude where the variation of the acoustic pressure again increases, corresponding to an abrupt change in the resonance frequency resulting in a smaller value (see Fig. 7).

Acknowledgment

The first author acknowledges DGA-France for the financial support.
https://hal.science/hal-01753246/file/LeMoigne-ECCM-Seville-2014.pdf
Processing and characterization of PHBV/clay nano-biocomposite foams by supercritical CO2 assisted extrusion

Martial Sauceau, Nicolas Le Moigne (email: [email protected]), Mohamed Benyakhlef, Rabeb Jemai, Jean-Charles Bénézet, José-Marie Lopez-Cuesta, Élisabeth Rodier, Jacques Fages

Keywords: polyhydroxyalkanoates, nanocomposite, foam, supercritical fluid

Introduction

Bio-based polymers like polyhydroxyalkanoates (PHAs) are marketed as eco-friendly alternatives to the currently widespread non-degradable oil-based thermoplastics, due to their natural and renewable origin, their biodegradability and biocompatibility. Poly(3-hydroxybutyrate) (PHB) has properties similar to those of various synthetic thermoplastics like polypropylene and hence it can be used alternatively in several applications, especially in agriculture and packaging, but also in biomedicine where biodegradability and biocompatibility are of great interest. However, some drawbacks have prevented its introduction to the market as an effective alternative to oil-based thermoplastics. PHB is indeed brittle and presents a slow crystallization rate and a poor thermal stability, which makes it difficult to process [Bordes et al., Cabedo et al.]. In order to improve the PHB properties, several kinds of PHA copolymers have been described in the literature, such as poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) with various hydroxyvalerate (HV) contents and molecular weights, which presents better mechanical properties, a lower melting temperature and an extended processing window [Bordes et al.]. Improved properties can also be obtained by the addition of nanoparticles of layered silicates such as clays.
Indeed, clay minerals present a high aspect ratio and specific surface area, and can be dispersed in small amounts in polymer matrices to prepare nanocomposites with improved thermal stability, mechanical properties or barrier properties [Ray et al.]. One of the key parameters is the clay dispersion, which can be controlled by the elaboration route: solvent intercalation, in-situ intercalation or melt intercalation; the latter being preferred for a sustainable development since it limits the use of organic solvents [Bordes et al.]. The organomodifiers inserted in the clay interlayer spaces to improve the polymer/clay affinity and the intercalation of polymer chains have a strong influence on the dispersion, but they have also been shown to catalyse PHBV degradation during processing [Bordes et al., Hablot et al.]. The use of supercritical fluids has recently appeared as an innovative way to improve clay dispersion, leading to new "clean and environment friendly" processes. Supercritical carbon dioxide (sc-CO2) has favorable interactions with polymers: it can dissolve in them in large quantities and act as a plasticizer, which drastically modifies the polymer properties (viscosity, interfacial tension, …). In addition, the dissolved sc-CO2 can act as a foaming agent during processing.
It is therefore possible to control pore generation and growth by controlling the operating conditions [Nikitine et al., Sauceau et al.], and to generate low-density porous structures of interest for the lightening of packaging or the storage of active ingredients, e.g. for drug release applications. All these features also make sc-CO2 able to modify the dispersion of nanoparticles inside polymer matrices, which in turn has an effect on the foam structure. Improved dispersion of clays and modified porous structures of synthetic polymers have been obtained with sc-CO2, mainly in batch processes via the in-situ intercalation method. Only a few studies have reported on the preparation of nanocomposite systems by sc-CO2 assisted continuous processes [Sauceau et al.], which are more easily adaptable for an industrial scale-up. Zhao et al. have recently investigated the possibility of using a supercritical N2 assisted injection molding process to develop microcellular PLA/PHBV clay nano-biocomposites. The results showed a decrease of the average cell size and an increased cell density with the addition of clays in PLA/PHBV blends. The rheological behaviour of the PLA/PHBV/clay nanocomposites suggested a good dispersion of the clays within the matrix.
In this study, we developed a continuous sc-CO2 assisted extrusion process to prepare PHBV/clay nano-biocomposite foams by two methods: a one-step method based on the direct foaming of physical PHBV/clay mixtures, and a two-step method based on the foaming of PHBV/clay mixtures prepared beforehand by twin-screw extrusion. The structures obtained are characterized in terms of clay dispersion, matrix crystallization, porosity, pore size distribution and density, and discussed with regard to the processing conditions such as temperature, shearing/pressure and CO2 mass fraction.

Materials and methods

Materials

PHBV with an HV content of 13 wt%, nucleated with boron nitride and plasticized with 10% of a copolyester, was purchased from Biomer (Germany). The weight-average molecular weight is 600 kDa. The clay used is an organo-modified montmorillonite (MMT), Cloisite C30B (C30B), produced by Southern Clay Products, Inc. (USA). To limit hydrolysis of PHBV upon processing, C30B and PHBV were dried at 80°C before use.

Preparation of PHBV/C30B extruded mixtures and physical mixtures

PHBV based nanocomposites containing 2.5% w/w C30B and PHBV based masterbatches containing 10% and 20% w/w C30B were prepared by melt intercalation using a co-rotating twin-screw extruder BC21 (Clextral, France) having an L/D (length to diameter) ratio of 48. A parabolic temperature profile not exceeding 165°C was used to limit thermal degradation of PHBV. The mixing and the dispersion of the C30B clays within the PHBV matrix were ensured by two kneading sections. Extrudates were water-cooled at the exit of the die and dried overnight at 50°C under vacuum. About 2.5 kg of granules were collected for each batch. All the batches were moulded by injection with a Krauss Maffei KM-50-180-CX into test specimens. The barrel-to-die temperature profile was 40 to 165°C.
PHBV based masterbatches containing 10% and 20% w/w C30B were diluted to 2.5% w/w C30B in the injection molding machine to analyze the effect of the dilution on the nanocomposite structures and properties. As will be shown in the following, this dilution procedure was also used to produce PHBV / 2.5% C30B nanocomposite foams by sc-CO2 assisted extrusion. In addition to the extruded mixtures, physical mixtures of PHBV pellets coated with 2.5% w/w of C30B were prepared by simple manual batch mixing, by placing a mixture of both components in a stainless steel rotating drum for 10 minutes (Faraday cage linked to a Keithley 6514 electrometer). These physical mixtures of PHBV pellets / 2.5% C30B were then also foamed by sc-CO2 assisted extrusion.

Foaming by sc-CO2 assisted extrusion

Figure 1 shows the experimental set-up, which has previously been described elsewhere [Nikitine et al., Kamar et al.]. The single-screw extruder (Rheoscam, SCAMEX) has a screw diameter of 30 mm and a length to diameter ratio (L/D) of 37. It is equipped with four static mixer elements (SMB-H 17/4, Sulzer, Switzerland). Sensors allow measuring the temperature and the pressure of the polymer during the extrusion process. CO2 (N45, Air Liquide) is pumped from a cylinder by a syringe pump (260D, ISCO, USA) and then introduced at a constant volumetric flow rate. The pressure, the temperature and the volumetric sc-CO2 flow rate are measured within the syringe pump. The sc-CO2 density, obtained with the equation of state established by Span and Wagner, is used to calculate the mass flow rate and thus the sc-CO2 mass fraction w_CO2.
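The mass-fraction bookkeeping described above can be sketched as follows; the CO2 density would come from the Span-Wagner equation of state at the pump temperature and pressure, and the numerical values below are placeholders, not operating conditions from this study.

```python
def co2_mass_fraction(q_v_co2, rho_co2, m_dot_polymer):
    # w_CO2 from the volumetric CO2 flow rate at the syringe pump (m^3/s),
    # the CO2 density at pump conditions (kg/m^3, e.g. Span-Wagner EoS)
    # and the polymer mass flow rate (kg/s)
    m_dot_co2 = q_v_co2 * rho_co2
    return m_dot_co2 / (m_dot_co2 + m_dot_polymer)

# Placeholder numbers: 0.072 L/h of CO2 at 800 kg/m^3 against ~1.15 kg/h of melt
w_co2 = co2_mass_fraction(q_v_co2=2.0e-8, rho_co2=800.0, m_dot_polymer=3.2e-4)
```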
Once steady state conditions are reached with the chosen operating conditions, extrudates are collected and water-cooled at ambient temperature. Several samples were collected during each experiment in order to check the homogeneity of the extrudates. The experimental conditions, chosen after preliminary trials (not described here), are summarized in Table 1. The lowest possible screw speed was selected to increase the residence time of the mixtures, and thus the mixing time. For all experiments, T_a, T_b and T_c were fixed at 160°C to ensure the melting of the PHBV matrix while limiting thermal degradation. Moreover, the polymer temperature was reduced at the exit of the die in order to favour foaming: T_d and T_e were fixed at 140°C and T_f not higher than 140°C.

Structural characterizations

The structures of the nanocomposites and masterbatches were observed with a Quanta 200 FEG (FEI Company) electron microscope in transmission mode (STEM). Ultra-thin specimens of 70 nm thickness were cut from the middle section of the moulded specimens and deposited on Cu grids. The wide angle X-ray diffraction (WAXD) measurements were performed using an AXS D8 Advance diffractometer (Bruker, Germany) equipped with a Cu cathode (λ = 1.5405 Å). The interlayer distance d_001 of the C30B clays was determined from the (001) diffraction peak using Bragg's law.

Porosity ε is defined as the ratio of the void volume to the total volume of the sample and can be calculated by Equation (1):

ε = 1 - ρ_app / ρ_p    (1)

where ρ_app is the apparent density, calculated from the weight of the samples and their volume evaluated by measuring their diameter and length with a vernier caliper (Facom, France), and ρ_p is the solid polymer density, determined by helium pycnometry (Micromeritics, AccuPyc 1330), which is about 1.216 g/cm^3. The fracture surfaces of the foamed extrudates were sputter-coated with gold and observed using an Environmental Scanning Electron Microscope XL30 ESEM FEG (Philips, Netherlands).
On the basis of the images obtained, a mean diameter D cell was determined and a cell density N cell per volume unit of unfoamed sample was calculated according to Equation (2): N cell = (N / A) 3/2 × (ρ p / ρ app ) (2) where N is the number of cells on an SEM image, A the area of the image, ρ app the apparent density of the foam and ρ p the solid polymer density. Results and discussion Structure and rheology of extruded PHBV / C30B nanocomposites and masterbatches The structures of the PHBV / C30B nanocomposites and masterbatches were investigated by two complementary methods, WAXD and STEM. The WAXD measurements allowed the determination of the interlayer distance of the C30B clays within the PHBV matrix. The STEM gave a direct visualization of the clay dispersion within the matrix. As shown in Figure 2a, WAXD patterns showed a diffraction peak at 2θ = 4.6° for C30B clays, which corresponds to an interlayer distance of 18 Å. Two diffraction peaks were detected in the patterns of the PHBV / 2.5, 10 and 20 wt% C30B mixtures. For the PHBV / 10% and 20% C30B masterbatches, the peak at a d 001 distance of 17 -18 Å, corresponding to the initial interlayer distance of C30B, suggests that a part of the clays is still aggregated. The second peak, observed at d 001 distances of 38 Å, 36.1 Å and 34 Å for PHBV / C30B containing 2.5, 10 and 20 wt% C30B, respectively, indicates a significant intercalation of the polymer chains within the interlayer space of the clays. Similar results were found by Choi et al. [START_REF] Choi | Preparation and characterization of poly(hydroxybutyrate-co-hydroxyvalerate)-organoclay nanocomposites[END_REF] and Bordes et al. [START_REF] Bordes | Structure and properties of PHA/clay nano-biocomposites prepared by melt intercalation[END_REF] for PHBV / 2 -3% C30B prepared by melt intercalation. The intensity of the peak at 38 Å for the PHBV / 2.5% C30B is particularly weak, suggesting a possible exfoliation of the clays.
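Equation (2) can be applied directly to SEM cell counts; the sketch below takes the 3/2 exponent to act on the areal count N/A, as in the standard stereological conversion from an areal to a volumetric cell density (all counts and density values are illustrative assumptions):

```python
def cell_density(n_cells, area_cm2, rho_app, rho_p=1.216):
    """Cell density per cm^3 of unfoamed polymer, Equation (2):
    N_cell = (N / A)^(3/2) * (rho_p / rho_app).

    n_cells : number of cells counted on the SEM image
    area_cm2: analyzed image area (cm^2)
    rho_app : apparent foam density (g/cm^3)
    rho_p   : solid polymer density (g/cm^3)
    """
    return (n_cells / area_cm2) ** 1.5 * (rho_p / rho_app)

# Illustrative values: 80 cells on a 2e-4 cm^2 micrograph of a foam
# with apparent density 0.5 g/cm^3
print(f"N_cell = {cell_density(80, 2e-4, 0.5):.2e} cells/cm^3")
```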
Moreover, the masterbatches were diluted in a single-screw injection press to analyse the effect on the clay dispersion. The WAXD patterns are very similar for the PHBV / 2.5% C30B nanocomposite and the PHBV / 2.5% C30B nanocomposites diluted from the masterbatches. These results were confirmed by the STEM pictures, on which mostly intercalated and possibly exfoliated layered structures are observed for both PHBV / 2.5% C30B (Figure 2b) and PHBV / 2.5% C30B diluted from the 20% C30B masterbatch (Figure 2c). The rheological behaviour of extruded PHBV and PHBV / C30B nanocomposites was compared to that of unprocessed PHBV. The zero-shear-rate viscosity of the PHBV decreased strongly after the extrusion process. This can be directly related to a decrease of the average molecular weight of the PHBV chains due to thermal degradation and shearing upon extrusion. When adding 2.5% C30B, a strong shear-thinning behaviour was observed at high angular frequencies, highlighting the catalytic degradation effect of the clays on the PHBV matrix. The rheological behaviour at high angular frequencies is indeed known to be mainly determined by the macromolecular structure of the polymer matrix, with little influence of the clays. When diluting the 20% C30B masterbatch to 2.5% C30B, it was interesting to observe that at high angular frequencies the viscosity of the unprocessed PHBV was recovered, owing to the input of unprocessed and hence non-degraded PHBV in the mixture. This preliminary study of the nanostructure and the rheological behaviour of the PHBV / 2.5% C30B nanocomposites, and of those produced from the masterbatches, demonstrated that the dilution of the masterbatches to lower clay contents in a single-screw apparatus is a good approach to prepare PHBV / C30B nanocomposites with good dispersion and limited degradation.
The good dispersion capacity of the clays is attributed to the physico-chemical interactions between PHBV and C30B, which originate from strong hydrogen bonding between the ester carbonyl groups of PHBV and the hydroxyl groups in the interlayer space of C30B [START_REF] Choi | Preparation and characterization of poly(hydroxybutyrate-co-hydroxyvalerate)-organoclay nanocomposites[END_REF]. In the following, this dilution procedure has thus been used to prepare PHBV / 2.5% C30B nanocomposite foams, i.e. PHBV / 20% C30B was diluted to 2.5% C30B with unprocessed PHBV during the sc-CO 2 assisted single-screw extrusion. The obtained foams were compared to PHBV / 2.5% C30B foams based on physical mixtures. Processing and characterization of the PHBV / clays nano-biocomposite foams 3.2.1. Effect of the sc-CO 2 mass fraction on the PHBV and PHBV / 2.5% C30B foams The effect of the sc-CO 2 mass fraction on the porosity ε is shown in Figure 3a. In all cases, the porosity decreases with increasing sc-CO 2 mass fraction, which is rather surprising since more sc-CO 2 is theoretically available for nucleation. This could be explained by a faster cooling of the extrudates, as the sc-CO 2 pressure drop is endothermic. The fast cooling increases the stiffness of the polymer, which limits the growth of the cells and hence the expansion and the porosity. A part of the sc-CO 2 could also be in excess [START_REF] Han | Continuous microcellular polystyrene foam extrusion with supercritical CO 2[END_REF] due to the formation of PHBV crystals upon cooling that limits the diffusion. This excess of sc-CO 2 diffuses to the walls of the extruder and does not participate in cell nucleation. This also decreases the processing pressures and temperatures, which in turn limits the expansion. SEM pictures of the extrudates of neat PHBV and of the physical and extruded PHBV / 2.5% C30B mixtures are shown in Figure 4.
When the sc-CO 2 mass fraction increases, the pores become smaller, more numerous and more regular. These observations are illustrated by the evolution of the cell density and the mean cell diameter in Figure 3b and c. This confirms that the presence of more sc-CO 2 increases nucleation (higher cell density) but accelerates the cooling and thus limits the growth and the coalescence of the pores (lower cell diameter). The sc-CO 2 mass fraction must thus be optimized to promote nucleation while keeping enough growth and expansion. Generally, the dependence of the PHBV and PHBV / 2.5% C30B foam structures on the sc-CO 2 mass fraction can be summarized as follows: (i) Low sc-CO 2 mass fractions induce higher processing pressures and temperatures that promote a higher pressure drop, extensive growth and higher porosity, but also less nucleation and poor homogeneity due to the coalescence of the pores. (ii) High sc-CO 2 mass fractions promote nucleation and homogeneity, but also induce lower processing pressures and temperatures, which limit the growth and porosity due to the increased stiffness of the frozen polymer. Clays / Foaming interrelationships and effect on the nanocomposite foam structure As shown in Figure 4, the clay particles are inhomogeneously dispersed in the extrudates based on the physical PHBV / 2.5% C30B mixtures (series 3). Several aggregates of 10 µm to 350 µm are observed whatever the sc-CO 2 mass fraction. The effect of the sc-CO 2 on the properties of the polymer matrix and the presence of the static mixer were thus not sufficient to induce the intercalation of the polymer chains within the interlayer space of the C30B clays and their dispersion during single-screw extrusion. The static mixer indeed enhances the distributive mixing of the sc-CO 2 and the clays but has only little dispersive efficiency. Concerning the extruded PHBV / 2.5% C30B mixtures (series 4), a very good dispersion of the C30B is observed, with no visible aggregates (Figure 4).
This supports that the prior preparation of a PHBV / 20% C30B masterbatch and its further dilution to 2.5% C30B during the sc-CO 2 assisted extrusion process is a necessary step to obtain a good dispersion. WAXD patterns (not shown) revealed that the position of the intercalation peak at 36.2 Å for series 4 remains unchanged by the foaming process whatever the sc-CO 2 mass fraction, meaning that no significant additional intercalation occurs with the plasticization of the matrix induced by the sc-CO 2 . A possible improvement of the dispersion capacity of the clays could be obtained by improving their 'CO 2 -philicity' with a surfactant bearing both hydroxyl groups, to conserve a good affinity with PHBV, and a 'CO 2 -philic' carbonyl group. Good dispersion of the clays has been shown to favour homogeneous nucleation, limit the coalescence, and hence give porous structures with higher cell density [START_REF] Ngo | Processing of nanocomposite foams in supercritical carbon dioxide. Part I: Effect of surfactant[END_REF][START_REF] Zeng | Polymer-Clay Nanocomposite Foams Prepared Using Carbon Dioxide[END_REF]. As shown in Figures 3 and 4, at low sc-CO 2 mass fraction (w CO2 < 1.5%) the foams based on extruded PHBV / 2.5% C30B mixtures indeed appear less heterogeneous, with rather smaller cell diameter compared to neat PHBV foams, while showing equivalent porosity. At high sc-CO 2 mass fraction (w CO2 > 3.5%), the presence of the clays decreases the porosity significantly. The diffusion of the sc-CO 2 within the PHBV matrix may be slightly hampered by the C30B clays, which are known to act as barriers to gases and fluids in polymeric materials [START_REF] Ray | Biodegradable polymers and their layered silicate nanocomposites: In greening the 21 st century materials world[END_REF]. Consequently, the excess of sc-CO 2 diffuses to the walls of the extruder and accelerates the cooling of the extrudate, limiting the growth of the pores and the expansion of the foams.
Conclusions A continuous sc-CO 2 assisted extrusion process has been developed to prepare PHBV/clays nano-biocomposite foams. The prior preparation of a PHBV / 20% C30B masterbatch and its further dilution during the sc-CO 2 assisted single-screw extrusion process are necessary steps to obtain good clay dispersion and limited PHBV degradation. By controlling the sc-CO 2 mass fraction in a narrow window, good clay dispersion appears to favour homogeneous nucleation while limiting the coalescence, and hence makes it possible to obtain PHBV/clays nano-biocomposite foams with better homogeneity and porosity higher than 50%.
Figure 1. Experimental device used for the foaming by sc-CO 2 assisted single-screw extrusion.
Figure 2. WAXD patterns (a) and STEM pictures of PHBV / 2.5% C30B (b), and PHBV / 2.5% C30B diluted from the 20% C30B masterbatch (c).
Figure 3. Evolution of (a) the porosity ε, (b) the cell density N cell and (c) the mean diameter D cell as a function of the CO 2 mass fraction for foams series 2, 3 and 4.
Figure 4. SEM pictures for foams series 2, 3 and 4 at different sc-CO 2 mass fractions.
Table 1. Experimental conditions used for the foaming by sc-CO 2 assisted single-screw extrusion
Series  Material                               Screw speed (rpm)  T f (°C)  w CO2 (mass %)  Static mixer  Die length (mm)  Die diameter (mm)
2       Neat PHBV                              30                 140       0 to 4          no            20               1
3       PHBV / 2.5% C30B (physical mixture)    40                 140       0 to 3          yes           5                0.5
4       PHBV / 2.5% C30B (extruded mixture)    55                 140       0 to 4          yes           20               1
01755086
en
[ "spi.auto" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01755086/file/ECC18_Emilia_A.pdf
On hyper-exponential output-feedback stabilization of a double integrator by using artificial delay The problem of output-feedback stabilization of a double integrator is revisited with the objective of achieving rates of convergence faster than exponential. It is assumed that only the position is available for measurements, and the designed feedback is based on the output and its delayed values, without an estimation of the velocity. It is shown that by selecting the closed-loop system to be homogeneous with negative or positive degree it is possible to accelerate the rate of convergence in the system at the price of a small steady-state error. Efficiency of the proposed control is demonstrated in simulations. I. INTRODUCTION The design of regulators for dynamical systems is a fundamental and complex problem studied in control theory. An important feature of the different existing methods for control synthesis is the achievable quality of transients and the robustness against exogenous perturbations and noises. Very frequently the design methods are oriented towards various canonical models, and the linear ones are the most popular. The double integrator is then a conventional benchmark system, since the tools designed for it can be easily extended to other, more generic models. If non-asymptotic rates of convergence (i.e. finite-time or fixed-time [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF]) are needed in the closed-loop system, then homogeneous systems usually come to attention as canonical dynamics, which include linear models as a subclass.
The theory of homogeneous dynamical systems is well-developed for continuous time-invariant differential equations [START_REF] Bacciotti | Liapunov Functions and Stability in Control Theory[END_REF], [START_REF] Bhat | Geometric homogeneity with applications to finite-time stability[END_REF], [START_REF] Kawski | Progress in systems and control theory: New trends in systems theory[END_REF], [START_REF] Zubov | On systems of ordinary differential equations with generalized homogenous right-hand sides[END_REF] and for time-delay systems [START_REF] Efimov | Development of homogeneity concept for time-delay systems[END_REF], [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] (applications of the conventional homogeneity theory to the analysis of time-delay systems, considering the delay as a kind of perturbation, have been considered in [START_REF] Aleksandrov | On the asymptotic stability of solutions of nonlinear systems with delay[END_REF], [START_REF] Asl | Analytical solution of a system of homogeneous delay differential equations via the Lambert function[END_REF], [START_REF] Bokharaie | D-stability and delayindependent stability of homogeneous cooperative systems[END_REF], [START_REF] Diblik | Asymptotic equilibrium for homogeneous delay linear differential equations with l-perturbation term[END_REF]). The main feature of a homogeneous system (described by an ordinary differential equation) is that the local behavior of its trajectories is the same as the global one (local attractiveness implies global asymptotic stability, for example [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF]), while for time-delay homogeneous systems the independent-of-delay (IOD) stability follows [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF], with certain robustness to exogenous inputs in both cases.
The rate of convergence for homogeneous ordinary differential equations is related to the degree of homogeneity [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF], but for time-delay systems the links are not so straightforward [START_REF] Efimov | Comments on finite-time stability of time-delay systems[END_REF]. In addition, homogeneous stable/unstable systems admit homogeneous Lyapunov functions [START_REF] Zubov | On systems of ordinary differential equations with generalized homogenous right-hand sides[END_REF], [START_REF] Rosier | Homogeneous Lyapunov function for homogeneous continuous vector field[END_REF], [START_REF] Efimov | Oscillations conditions in homogenous systems[END_REF]. Analysis of the influence of delay on system stability is vital in many cases [START_REF] Gu | Stability of Time-Delay Systems[END_REF], [START_REF] Fridman | Introduction to Time-Delay Systems: Analysis and Control[END_REF]. Despite the variety of applications, most of them deal with linear time-delay models, which stems from the complexity of stability analysis for time-delay systems [START_REF] Fridman | Introduction to Time-Delay Systems: Analysis and Control[END_REF]. However, in some cases the introduction of a delay may lead to an improvement of the system performance [START_REF] Fridman | Delay-induced stability of vector secondorder systems via simple Lyapunov functionals[END_REF], [START_REF] Fridman | Stabilization by using artificial delays: An LMI approach[END_REF]. The goal of this work is to develop the results obtained in [START_REF] Fridman | Delay-induced stability of vector secondorder systems via simple Lyapunov functionals[END_REF], [START_REF] Fridman | Stabilization by using artificial delays: An LMI approach[END_REF] for linear systems to a nonlinear homogeneous case, restricting for brevity the attention to the double integrator model.
A design method is proposed which uses the position and its delayed values for practical output stabilization with hyper-exponential convergence rates. The outline of this work is as follows. The preliminary definitions and the homogeneity concept for time-delay systems are given in Section II. The problem statement and the control design and stability analysis are presented in sections III and IV, respectively. An example is considered in Section V. II. PRELIMINARIES Consider an autonomous functional differential equation of retarded type with inputs [START_REF] Kolmanovsky | Stability of functional differential equations[END_REF]: ẋ(t) = f (x t , d(t)), t ≥ 0 (1) where x(t) ∈ R n and x t ∈ C [-τ,0] is the state function, x t (s) = x(t + s) for s ∈ [-τ, 0]; f : C [-τ,0] × R m → R n is a continuous function ensuring forward uniqueness and existence of the system solutions, f (0, 0) = 0. We assume that for the initial functional condition x 0 ∈ C [-τ,0] and d ∈ L m ∞ the system (1) admits a unique solution x(t, x 0 , d), which is defined on some time interval [-τ, T ) for T > 0. The upper right-hand Dini derivative of a locally Lipschitz continuous functional V : C [-τ,0] → R + along the solutions of the system (1) is defined as follows for any φ ∈ C [-τ,0] and d ∈ R m : D + V (φ, d) = lim h→0 + sup 1 h [V (φ h ) -V (φ)], where φ h ∈ C [-τ,0] for 0 < h < τ is given by φ h = φ(θ + h), θ ∈ [-τ, -h) φ(0) + f (φ, d)(θ + h), θ ∈ [-h, 0]. A continuous function σ : R + → R + belongs to class K if it is strictly increasing and σ(0) = 0; it belongs to class K ∞ if it is also radially unbounded. A continuous function β : R + × R + → R + belongs to class KL if β(•, r) ∈ K and β(r, •) is strictly decreasing to zero for any fixed r ∈ R + . The symbol 1, m is used to denote a sequence of integers 1, ..., m. For a symmetric matrix P ∈ R n×n , the minimum and maximum eigenvalues are denoted as λ min (P ) and λ max (P ), respectively. A.
ISS of time delay systems The input-to-state stability (ISS) property is an extension of the conventional stability paradigm to systems with external inputs [START_REF] Pepe | A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems[END_REF], [START_REF] Teel | Connections between Razumikhin-type theorems and the ISS nonlinear small gain theorem[END_REF]. Definition 1. [START_REF] Pepe | A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems[END_REF] The system (1) is called ISS, if for all x 0 ∈ C [-τ,0] and d ∈ L m ∞ the solutions are defined for all t ≥ 0 and there exist β ∈ KL and γ ∈ K such that |x(t, x 0 , d)| ≤ β(||x 0 ||, t) + γ(||d|| ∞ ) ∀t ≥ 0. Definition 2. [20] A locally Lipschitz continuous functional V : C [-τ,0] → R + is called an ISS Lyapunov-Krasovskii functional for the system (1) if there exist α 1 , α 2 ∈ K ∞ and α, χ ∈ K such that for all φ ∈ C [-τ,0] and d ∈ R m : α 1 (|φ(0)|) ≤ V (φ) ≤ α 2 (||φ||), V (φ) ≥ χ(|d|) =⇒ D + V (φ, d) ≤ -α(V (φ)). Theorem 1. [START_REF] Pepe | A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems[END_REF] If there exists an ISS Lyapunov-Krasovskii functional for the system (1), then it is ISS with γ = α 1 -1 ∘ χ. B. Homogeneity For any r i > 0, i = 1, n and λ > 0, define the dilation matrix Λ r (λ) = diag{λ ri } n i=1 and the vector of weights r = [r 1 , ..., r n ] T . Definition 3. [START_REF] Efimov | Homogeneity for time-delay systems[END_REF] The function g : C [-τ,0] → R is called r-homogeneous (r i > 0, i = 1, n), if for any φ ∈ C [-τ,0] the relation g(Λ r (λ)φ) = λ ν g(φ) holds for some ν ∈ R and all λ > 0. The vector field f : C [-τ,0] → R n is called r-homogeneous (r i > 0, i = 1, n), if for any φ ∈ C [-τ,0] the relation f (Λ r (λ)φ) = λ ν Λ r (λ)f (φ) holds for some ν ≥ -min 1≤i≤n r i and all λ > 0. In both cases, the constant ν is called the degree of homogeneity.
The introduced notion of homogeneity in C [-τ,0] is reduced to the standard one in R n [START_REF] Zubov | On systems of ordinary differential equations with generalized homogenous right-hand sides[END_REF] under a vector argument substitution. For any x ∈ R n the homogeneous norm can be defined as follows: |x| r = ( n i=1 |x i | p/ri ) 1/p , p ≥ max 1≤i≤n r i . For all x ∈ R n , its Euclidean norm |x| is related with the homogeneous one: σ r (|x| r ) ≤ |x| ≤ σ̄ r (|x| r ) for some σ r , σ̄ r ∈ K ∞ . The homogeneous norm in the Banach space has the same homogeneity property, that is, ||Λ r (λ)φ|| r = λ||φ|| r for all φ ∈ C [a,b] . In C [-τ,0] , for a radius ρ > 0, denote the corresponding sphere S τ ρ = {φ ∈ C [-τ,0] : ||φ|| r = ρ} and the closed ball B τ ρ = {φ ∈ C [-τ,0] : ||φ|| r ≤ ρ}. An advantage of homogeneous systems described by nonlinear ordinary differential equations is that any of its solutions can be obtained from another solution under the dilation re-scaling and a suitable time parameterization. A similar property holds for functional homogeneous systems. Proposition 1. [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] Let x(t, x 0 ) be a solution of the r-homogeneous system dx(t)/dt = f (x t ), t ≥ 0, x t ∈ C [-τ,0] (3) with the degree ν for an initial condition x 0 ∈ C [-τ,0] , τ ∈ (0, +∞). For any λ > 0 the functional differential equation dy(t)/dt = f (y t ), t ≥ 0, y t ∈ C [-λ -ν τ,0] (4) has a solution y(t, y 0 ) = Λ r (λ)x(λ ν t, x 0 ) with the initial condition y 0 ∈ C [-λ -ν τ,0] , y 0 (s) = Λ r (λ)x 0 (λ ν s) for s ∈ [-λ -ν τ, 0].
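The dilation Λ r (λ) and the homogeneous norm are easy to realize numerically, and the defining identity |Λ r (λ)x| r = λ|x| r can be checked directly. A minimal sketch; the weights follow the pattern r 1 = 1, r 2 = (α + 1)/2 used for the closed-loop system in Section IV, and all numerical values are illustrative:

```python
def hom_norm(x, r, p=None):
    """Homogeneous norm |x|_r = (sum_i |x_i|^(p/r_i))^(1/p), p >= max r_i."""
    if p is None:
        p = max(r)
    return sum(abs(xi) ** (p / ri) for xi, ri in zip(x, r)) ** (1.0 / p)

def dilate(x, r, lam):
    """Weighted dilation Lambda_r(lambda) x = (lambda^(r_i) * x_i)_i."""
    return [lam ** ri * xi for xi, ri in zip(x, r)]

# Weights for the closed-loop double integrator with alpha = 0.8:
r = [1.0, (0.8 + 1.0) / 2.0]   # r1 = 1, r2 = (alpha + 1)/2
x = [0.3, -1.7]
lam = 2.5
lhs = hom_norm(dilate(x, r, lam), r)
rhs = lam * hom_norm(x, r)
print(abs(lhs - rhs) < 1e-12)  # the norm is r-homogeneous of degree one
```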
In [START_REF] Efimov | Development of homogeneity concept for time-delay systems[END_REF], using that result it has been shown that for (3) with ν = 0 the local asymptotic stability implies global one (for the ordinary differential equations even more stronger conclusion can be obtained: local attractiveness implies global asymptotic stability [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF]). For time-delay systems with ν = 0 that result has the following correspondences: Lemma 1. [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] Let the system (3) be r-homogeneous with degree ν = 0 and globally asymptotically stable for some delay 0 < τ 0 < +∞, then it is globally asymptotically stable for any delay 0 < τ < +∞ (i.e. IOD). Corollary 1. [7] Let the system (3) be r-homogeneous with degree ν and asymptotically stable with the region of attraction B τ ρ for some 0 < ρ < +∞ for any value of delay 0 ≤ τ < +∞, then it is globally asymptotically stable IOD. Corollary 2. [7] Let the system (3) be r-homogeneous with degree ν < 0 and asymptotically stable with the region of attraction B τ ρ for some 0 < ρ < +∞ for any value of delay 0 ≤ τ ≤ τ 0 with 0 < τ 0 < +∞, then it is globally asymptotically stable IOD. Corollary 3. [7] Let the system (3) be r-homogeneous with degree ν > 0 and the set B τ ρ for some 0 < ρ < +∞ be uniformly globally asymptotically stable for any value of delay 0 ≤ τ ≤ τ 0 , 0 < τ 0 < +∞ 1 , then (3) is globally asymptotically stable (at the origin) IOD. III. PROBLEM STATEMENT Consider the double integrator system: ẋ1 (t) = x 2 (t), ẋ2 (t) = u(t), (5) y(t) = x 1 (t), where x 1 (t) ∈ R and x 2 (t) ∈ R are the position and velocity, respectively, u(t) ∈ R is the control input and y(t) ∈ R is the output available for measurements. 
The goal is to design a static output-feedback control practically stabilizing the system with a hyper-exponential convergence rate, i.e. with a convergence faster than any exponential. IV. MAIN RESULTS The solution considered in this paper is the delayed nonlinear controller u(t) = -(k 1 + k 2 ) ⌈y(t)⌋ α + k 2 ⌈y(t -h)⌋ α , (6) where ⌈y⌋ α = |y| α sign(y), k 1 > 0 and k 2 > 0 are tuning gains, α > 0, α ≠ 1 is a tuning power and h > 0 is the delay (if α = 1 then the control (6) is linear and has been studied in [START_REF] Fridman | Delay-induced stability of vector secondorder systems via simple Lyapunov functionals[END_REF], [START_REF] Fridman | Stabilization by using artificial delays: An LMI approach[END_REF]). The restrictions on the selection of these parameters and the conditions to check are given in the following theorem. ( 1 In this case for any 0 ≤ τ ≤ τ 0 , any ε > 0 and κ ≥ 0 there is 0 ≤ T ε κ,τ < +∞ such that |x(t, x 0 )| r ≤ ρ + ε for all t ≥ T ε κ,τ for any x 0 ∈ B τ κ , and |x(t, x 0 )| r ≤ σ τ (||x 0 || r ) for all t ≥ 0 for some function σ τ ∈ K ∞ , for all x 0 ∈ C [-τ,0] .) Theorem 2. For any k 1 > 0, k 2 > 0, h 0 > 0, if the system of linear matrix inequalities Q ≤ 0, P > 0, q > 0, (7) Q = [ Q 11 , k 2 Zb, Zb ; k 2 b T Z T , k 2 2 h 2 -4 e -εh h 2 q, qh 2 k 2 ; b T Z T , qh 2 k 2 , qh 2 -γ ], Q 11 = A T P + P A + qh 2 A T bb T A + εP, Z = P + qh 2 A , A = [ 0, 1 ; -k 1 , -k 2 h ], b = [ 0 ; 1 ] is feasible for some ε > 0, γ > 0 and any 0 < h ≤ h 0 , then for any 0 < η < +∞ there exists ε ∈ (0, 1) sufficiently small such that the system (5), (6) is a) globally asymptotically stable with respect to the set B 2h η for any α ∈ (1 -ε, 1); b) locally asymptotically stable at the origin from B 2h η for any α ∈ (1, 1 + ε). All proofs are omitted due to space limitations. Note that for any α ≥ 0 the closed-loop system (5), (6) is r-homogeneous for r 1 = 1 and r 2 = (α + 1)/2 with the degree ν = (α -1)/2, so the result of Proposition 1 can be used for substantiation.
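The qualitative behaviour of the closed loop (5), (6) can be illustrated with a simple explicit-Euler simulation; for small h the delayed difference in (6) acts as a damping term. The gains, delay, horizon and initial condition below are illustrative assumptions, not the values of the paper's example:

```python
from collections import deque

def sgn_pow(v, a):
    """Signed power |v|^a * sign(v) used in the controller (6)."""
    return abs(v) ** a * (1.0 if v >= 0 else -1.0)

def simulate(alpha, k1=2.0, k2=20.0, h=0.1, dt=1e-3, t_end=30.0,
             x1_0=1.0, x2_0=0.0):
    """Explicit-Euler integration of the double integrator (5) under the
    delayed feedback (6); returns |x1(t_end)|.  All parameter values are
    illustrative assumptions."""
    n_delay = int(round(h / dt))
    hist = deque([x1_0] * (n_delay + 1))  # y on [t - h, t], constant initial history
    x1, x2 = x1_0, x2_0
    for _ in range(int(round(t_end / dt))):
        y_h = hist.popleft()  # delayed output y(t - h)
        u = -(k1 + k2) * sgn_pow(x1, alpha) + k2 * sgn_pow(y_h, alpha)
        x1, x2 = x1 + dt * x2, x2 + dt * u
        hist.append(x1)
    return abs(x1)

# The delayed term damps the motion: the position settles near the origin
# for the nonlinear (alpha != 1) and the linear (alpha = 1) feedback alike.
print(simulate(alpha=0.8), simulate(alpha=1.0))
```

With these assumed parameters the degree ν = (α - 1)/2 is negative for α = 0.8, and the decay accelerates close to the origin, which is the hyper-exponential effect discussed in the theorem.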
The requirement that the matrix inequalities (7) have to be verified for any 0 < h ≤ h 0 may be restrictive for given gains k 1 and k 2 ; then another local result can be obtained by relaxing this constraint. Corollary 4. For any k 1 > 0, k 2 > 0 and 0 < h 1 < h 0 , let the system of linear matrix inequalities (7) be verified for some ε > 0 and all h 1 ≤ h ≤ h 0 . Then for any 0 < ρ 1 < +∞ there exist ε ∈ (0, 1) sufficiently small and ρ 2 > ρ 1 such that the system (5), (6) is asymptotically stable with respect to the set B 2h ρ1 with the region of attraction B 2h ρ2 for any α ∈ (1 -ε, 1 + ε). Remark 1. The result of Theorem 2 complements corollaries 2 and 3. Note that in all cases, for ν ≠ 0, global stability at the origin cannot be obtained in (5), (6) (due to the homogeneity of the system, following the result of Lemma 1, globality would imply an IOD result), while in the linear case with ν = 0 such a result is possible to derive for any 0 < h ≤ h 0 . Then it is necessary to justify the need for the control with ν ≠ 0 in comparison with the linear feedback with the same gains. An answer to this question is presented in the following result, and to this end denote for the system (5), (6): T (α, ρ 1 , ρ 2 , h) = arg sup t≥T -h sup x0∈S 2h ρ2 |x(t, x 0 )| r ≤ ρ 1 as the time of convergence of all trajectories initiated on the sphere S 2h ρ2 to the set B 2h ρ1 , provided that the delay h and the power α are applied in the feedback. Proposition 2. For given k 1 > 0, k 2 > 0, h 0 > 0, let the system of linear matrix inequalities (7) be verified for some ε > 0 and any 0 < h ≤ h 0 .
Then there exist ε ∈ (0, 1) sufficiently small and 0 < ρ 1 < ρ 2 < +∞ such that in the system (5), (6): T (α, ρ 1 , ρ 2 , h) < T (1, ρ 1 , ρ 2 , h) (8) for some 0 < h ≤ h 0 and any α ∈ (1 -ε, 1) or α ∈ (1, 1 + ε), provided that sup 0<h≤h0 T (α, 0.5, 1, h) ≤ T α for some T α ∈ R + and all α ∈ (1 -ε, 1 + ε). In other words, the result above claims that for any fixed feedback gains k 1 and k 2 , if the conditions of Theorem 2 are satisfied, then the nonlinear closed-loop system (5), (6) with ν ≠ 0 (α ≠ 1) always converges faster than its linear analog with ν = 0 (α = 1) between properly selected levels ρ 1 and ρ 2 (whose values depend on whether α is smaller or larger than 1) for a delay h. The result of Proposition 2 provides a motivation for using nonlinear control in this setting: by playing with the degree of homogeneity of the closed-loop system it is possible to accelerate the obtained linear feedback, fixing the gains and delay values but introducing an additional power tuning parameter. Note that another, conventional solution, which consists in increasing the gains k 1 and k 2 for acceleration, may be infeasible for the given delay value h 0 . V. EXAMPLE Let us consider some results of application of the proposed control and an illustration of the obtained acceleration. The matrix inequalities (7) are satisfied for h 1 < h ≤ h 0 with h 1 = 5 × 10 -4 , and the results of verification are presented in Fig. 1. Thus, all conditions of Corollary 4 are verified. The errors of regulation obtained in simulation of the system (5), (6) with delay h 0 for different initial conditions with α = 0.8 and α = 1.2, in comparison with the linear controller with α = 1, are shown in figures 2 and 3, respectively (the solid lines represent the trajectories of the system with α ≠ 1 and the dashed ones correspond to α = 1; since the plots are given in a logarithmic scale, the latter trajectories are close to straight lines).
As we can conclude, in the nonlinear case the convergence is much faster than in the linear one close to the origin for α ∈ (0, 1) and far outside for α > 1, which confirms the statement of Proposition 2. Note that the value of η (the radius of the set to which the trajectories converge for α < 1 or from which they converge to the origin for α > 1) is not restrictive. VI. CONCLUSIONS The paper addresses the problem of output stabilization of the double integrator using a nonlinear delayed feedback by obtaining hyper-exponential (faster than any exponential) rates of convergence. The control does not need an estimation of velocity, and the applicability of the approach can be checked by resolving linear matrix inequalities. The efficiency of the proposed approach is demonstrated in simulations and a comparison with a linear controller is carried out. The homogeneous norm is an r-homogeneous function of degree one: |Λ r (λ)x| r = λ|x| r for all x ∈ R n . Similarly, for any φ ∈ C [a,b] the homogeneous norm can be defined as ||φ|| r = ( n i=1 ||φ i || p/ri ) 1/p , p ≥ max 1≤i≤n r i , and there exist two functions ρ r , ρ̄ r ∈ K ∞ such that for all φ ∈ C [a,b] : ρ r (||φ|| r ) ≤ ||φ|| ≤ ρ̄ r (||φ|| r ).
Figure 1. The results of verification of (7) for different h.
Figure 2. Trajectories of the stabilized double integrator with α = 0.8.
Figure 3. Trajectories of the stabilized double integrator with α = 1.2.
D. Efimov, W. Perruquetti and J.-P. Richard are at Inria, Non-A team, Parc Scientifique de la Haute Borne, 40 av. Halley, 59650 Villeneuve d'Ascq, France and CRIStAL (UMR-CNRS 9189), Ecole Centrale de Lille, BP 48, Cité Scientifique, 59651 Villeneuve-d'Ascq, France. E. Fridman is with School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel. D.
Efimov is with Department of Control Systems and Informatics, Saint Petersburg State University of Information Technologies Mechanics and Optics (ITMO), 49 Kronverkskiy av., 197101 Saint Petersburg, Russia. This work was partially supported by the Government of Russian Federation (Grant 074-U01), the Ministry of Education and Science of Russian Federation (Project 14.Z50.31.0031) and by Israel Science Foundation (grant No 1128/14).
00175517
en
[ "sdv.bbm.gtp" ]
2024/03/05 22:32:10
2005
https://ens-lyon.hal.science/ensl-00175517/file/brodie_etal_manuscript.pdf
Edward-Benedict Brodie of Brodie, Samuel Nicolay, Marie Touchon, Benjamin Audit, Yves d'Aubenton-Carafa, Claude Thermes, A. Arneodo

From DNA sequence analysis to modeling replication in the human genome

PACS numbers: 87.15.Cc, 87.16.Sr, 87.15.Aa

DNA replication is an essential genomic function responsible for the accurate transmission of genetic information through successive cell generations. According to the "replicon" paradigm derived from prokaryotes [1], this process starts with the binding of some "initiator" protein to a specific "replicator" DNA sequence called origin of replication (ori). The recruitment of additional factors initiates the bidirectional progression of two divergent replication forks along the chromosome. One strand is replicated continuously from the origin (leading strand), while the other strand is replicated in discrete steps towards the origin (lagging strand). In eukaryotic cells, this event is initiated at a number of ori and propagates until two converging forks collide at a terminus of replication (ter) [2]. The initiation of different ori is coupled to the cell cycle but there is a definite flexibility in the usage of the ori at different developmental stages [3,4]. Also, it can be strongly influenced by the distance and timing of activation of neighbouring ori, by the transcriptional activity and by the local chromatin structure [3]. Actually, sequence requirements for an ori vary significantly between different eukaryotic organisms. In the unicellular eukaryote Saccharomyces cerevisiae, the ori spread over 100-150 bp and present some highly conserved motifs [2]. In the fission yeast Schizosaccharomyces pombe, there is no clear consensus sequence and the ori spread over at least 800 to 1000 bp [2]. In multi-cellular organisms, the ori are rather poorly defined and initiation may occur at multiple sites distributed over thousands of base pairs [5].
Actually, cell diversification may have led higher eukaryotes to develop various epigenetic controls over the ori selection rather than to conserve specific replicator sequences [6]. This might explain why only very few ori have been identified so far in multi-cellular eukaryotes, namely around 20 in metazoa and only about 10 in human [7]. The aim of the present work is to show that with an appropriate coding and an adequate methodology, one can challenge the issue of detecting putative ori directly from the genomic sequences. According to the second parity rule [8], under no-strand-bias conditions, each genomic DNA strand should present equimolarities of A and T and of G and C. Deviations from intrastrand equimolarities have been extensively studied in prokaryotic, organelle and viral genomes, for which they have been used to detect the ori [9]. Indeed the GC and TA skews abruptly switch sign at the ori and ter, displaying step-like profiles, such that the leading strand is generally richer in G than in C, and to a lesser extent in T than in A. During replication, mutational events can affect the leading and lagging strands differently, and an asymmetry can result if one strand incorporates more mutations of a particular type or if one strand is more efficiently repaired [9]. In eukaryotes, the existence of compositional biases has been debated and most attempts to detect the ori from strand compositional asymmetry have been inconclusive. In primates, a comparative study of the β-globin ori has failed to reveal the existence of a replication-coupled mutational bias [10]. Other studies have led to rather opposite results. The analysis of the yeast genome presents clear replication-coupled strand asymmetries in subtelomeric chromosomal regions [11].
A recent space-scale analysis [12] of the GC and TA skews in Mbp-long human contigs has revealed the existence of compositional strand asymmetries in intergenic regions, suggesting the existence of a replication bias. Here, we show that the (TA + GC) skew profiles of the 22 human autosomal chromosomes display a remarkable serrated "factory roof"-like behavior that differs from the crenelated "castle rampart"-like profiles resulting from the prokaryotic replicon model [9]. This observation will lead us to propose an alternative model of replication in higher eukaryotes. Sequences and gene annotation data were downloaded from the UCSC Genome Bioinformatics site and correspond to the assembly of July 2003 of the human genome. To exclude repetitive elements that might have been inserted recently and would not reflect long-term evolutionary patterns, we used the repeat-masked version of the genome, leading to a homogeneous reduction of ∼40-50% of the sequence length. All analyses were carried out using "knowngene" gene annotations. The TA and GC skews were calculated as S_TA = (T − A)/(T + A) and S_GC = (G − C)/(G + C) in adjacent 1 kbp windows. Here, we will mainly consider S = S_TA + S_GC, since by adding the two skews, the sharp transitions of interest are significantly amplified. In Fig. 1 are shown the skew S profiles of 3 fragments of chromosomes 8 and 20 that contain 3 experimentally identified ori. As commonly observed for eubacterial genomes [9], these 3 ori correspond to rather sharp (over several kbp) transitions from negative to positive S values that clearly emerge from the noisy background. The leading strand is relatively enriched in T over A and in G over C.
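The window computation described above can be sketched in a few lines (a hypothetical helper in standard Python; the paper's actual pipeline additionally works on repeat-masked sequences and gene annotations):

```python
def skew_profile(seq, window=1000):
    """S = S_TA + S_GC with S_TA = (T-A)/(T+A) and S_GC = (G-C)/(G+C),
    computed in adjacent non-overlapping windows (default 1 kbp)."""
    profile = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window].upper()
        a, t, g, c = (w.count(b) for b in "ATGC")
        s_ta = (t - a) / (t + a) if t + a else 0.0
        s_gc = (g - c) / (g + c) if g + c else 0.0
        profile.append(s_ta + s_gc)
    return profile

# toy check: a T/G-rich window gives S = +2, an A/C-rich window S = -2
print(skew_profile("TG" * 500 + "AC" * 500))  # -> [2.0, -2.0]
```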
The investigation of 6 other known human ori [7] confirms the above observation for at least 4 of them (the 2 exceptions, namely the Lamin B2 and β-globin ori, might well be inactive in germline cells or less frequently used than the adjacent ori). According to the gene environment, the amplitude of the jump can be more or less important and its position more or less localized (from a few kbp to a few tens of kbp). Indeed, it is known that transcription generates positive TA and GC skews on the coding strand [13,14], which explains that larger jumps are observed when the sense genes are on the leading strand and/or the antisense genes on the lagging strand, so that replication and transcription biases add to each other. Contrary to the replicon characteristic step-like profile observed for eubacteria [9], S is definitely not constant on each side of the ori location, making quite elusive the detection of the ter since no corresponding downward jumps of similar amplitude can be found in Fig. 1. In Fig. 2 are shown the S profiles of long fragments of chromosomes 9, 14 and 21, that are typical of a fair proportion of the S profiles observed for each chromosome. Sharp upward jumps of amplitude (∆S ∼ 0.2) similar to the ones observed for the known ori in Fig. 1 seem to exist also at many other locations along the human chromosomes. But the most striking feature is the fact that in between two neighboring major upward jumps, not only does the noisy S profile not present any comparable downward sharp transition, but it displays a remarkable decreasing linear behavior. At chromosome scale, one thus gets jagged S profiles that have the aspect of "factory roofs" rather than the "castle rampart" step-like profiles expected for the prokaryotic replicon model [9]. The S profiles in Fig.
2 look somehow disordered because of the extreme variability in the distance between two successive upward jumps, from spacings ∼50-100 kbp (∼100-200 kbp for the native sequences) up to 2-3 Mbp (∼4-5 Mbp for the native sequences), in agreement with recent experimental studies that have shown that mammalian replicons are heterogeneous in size with an average size ∼500 kbp, the largest ones being as large as a few Mbp [15]. We report in Fig. 3 the results of a systematic detection of upward and downward jumps using the wavelet-transform (WT) based methodology described in Ref. [12(b)]. The selection criterion was to retain only the jumps corresponding to discontinuities in the S profile that can still be detected with the WT microscope up to the scale 200 kbp, which is smaller than the typical replicon size and larger than the typical gene size. In this way, we reduce the contribution of jumps associated with transcription only and maintain a good sensitivity to replication-induced jumps. A set of 5100 jumps was detected (with, as generally expected, an almost equal proportion of upward and downward jumps). In Fig. 3(a) are reported the histograms of the amplitude |∆S| of the so-identified upward (∆S > 0) and downward (∆S < 0) jumps respectively, for the repeat-masked sequences. These histograms do not superimpose, the former being significantly shifted to larger |∆S| values. When plotting N(|∆S| > ∆S*) vs ∆S* in Fig. 3(b), one can see that the number of large-amplitude upward jumps exceeds the number of large-amplitude downward jumps. These results confirm that most of the sharp upward transitions in the S profiles in Figs. 1 and 2 have no sharp downward transition counterpart. This demonstrates that these jagged S profiles are likely to be representative of a general asymmetry in the skew profile behavior along the human chromosomes.
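The amplitude statistic used for the jump histograms, ΔS = S(3′) − S(5′) with each side averaged over the adjacent 20 kbp window, can be sketched as follows; this is only an illustration of the amplitude estimate around a candidate position, not the wavelet-transform detector itself:

```python
def jump_amplitude(s, i, half=20):
    """Delta S = S(3') - S(5') around index i of a 1-kbp-window skew
    profile s, each side averaged over `half` values (20 kbp by default)."""
    left = s[i - half:i]        # 5' side
    right = s[i:i + half]       # 3' side
    return sum(right) / len(right) - sum(left) / len(left)

# toy profile with a sharp upward jump of amplitude 0.2 at index 30
s = [-0.1] * 30 + [0.1] * 30
print(jump_amplitude(s, 30) > 0)  # -> True: an upward (Delta S > 0) jump
```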
As reported in a previous work [14], the analysis of a complete set of human genes revealed that most of them present TA and GC skews and that these biases are correlated to each other and are specific to gene sequences. One can thus wonder to what extent the transcription machinery can account for the jagged S profiles shown in Figs. 1 and 2. According to the estimates obtained in Ref. [14], the mean jump amplitudes observed at the transition between transcribed and non-transcribed regions are |∆S_TA| ∼ 0.05 and |∆S_GC| ∼ 0.03, respectively. The characteristic amplitude of a transcription-induced transition, |∆S| ∼ 0.08, is thus significantly smaller than the amplitude ∆S ∼ 0.20 of the main upward jumps in Fig. 2. Hence, it is possible that, at the transition between an antisense gene and a sense gene, the overall jump from negative to positive S values may reach sizes ∆S ∼ 0.16 that can be comparable to the ones of the upward jumps in Fig. 2. However, if some co-orientation of the transcription and replication processes may account for some of the sharp upward transitions in the skew profiles, the systematic observation of "factory roof" skew scenery in intergenic regions as well as in transcribed regions strongly suggests that this peculiar strand bias is likely to originate from the replication machinery. To further examine whether intergenic regions present typical "factory roof" skew profiles, we report in Fig. 4 the results of the statistical analysis of 287 pairs of putative adjacent ori that actually correspond to 486 putative ori almost equally distributed among the 22 autosomal chromosomes. These putative ori were identified by (i) selecting pairs of successive jumps of amplitude ∆S ≥ 0.12, and (ii) checking that none of these upward jumps could be explained by an antisense gene - sense gene transition. In Fig.
4(a) is shown the S profile obtained after rescaling the putative ori spacing l to 1 prior to computing the average S values in windows of width 1/10 that contain more than 90% of intergenic sequences. This average profile is linear and crosses zero at the median position n/l = 1/2, with an overall upward jump ∆S ≃ 0.17. The corresponding average S profile over windows that are now more than 90% genic is shown in Fig. 4(b). A similar linear profile is obtained but with a jump of larger mean amplitude ∆S ≃ 0.28. This is a direct consequence of the gene content of the selected regions. As shown in Fig. 4(b), sense (antisense) genes are preferentially on the left (right) side of the 287 selected sequences, which implies that the replication and - when present - transcription biases tend to add up. In Fig. 4(c) is shown the histogram of the linear slope values of the 287 selected skew profiles after rescaling their length to 1. The histogram of the mean absolute deviation from a linear decreasing profile, reported in Fig. 4(d), confirms the linearity of each selected skew profile. Following these observations, we propose in Fig. 5 a rather crude model for replication that relies on the hypothesis that the ori are quite well positioned while the ter are randomly distributed. In other words, replication would proceed in a bi-directional manner from well-defined initiation positions, whereas the termination would occur at different positions from cell cycle to cell cycle [16]. Then if one assumes that (i) the ori are identically active and (ii) any location in between two adjacent ori has an equal probability of being a ter, the continuous superposition of step-like profiles like those in Fig. 5(a) leads to the anti-symmetric skew pattern shown in Fig. 5(b), i.e. a linear decreasing S profile that crosses zero at middle distance from the two ori.
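This superposition argument can be checked with a small Monte-Carlo sketch (two ori at the rescaled positions 0 and 1; the bias value and sample size are illustrative, not fitted to the data):

```python
import random

def mean_skew(x, bias=0.1, n_cells=100000, seed=1):
    """Average skew at rescaled position x between two adjacent ori (at 0
    and 1), the ter being drawn uniformly in (0, 1) for each cell cycle:
    the point lies on the leading strand of the left ori (+bias) if x < ter,
    otherwise on the lagging side of the right ori (-bias)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_cells):
        total += bias if x < rng.random() else -bias
    return total / n_cells

# the superposition is linear, bias*(1 - 2x): positive at the left ori,
# zero at x = 1/2, negative at the right ori
xs = (0.1, 0.5, 0.9)
print(all(abs(mean_skew(x) - 0.1 * (1 - 2 * x)) < 0.01 for x in xs))  # -> True
```

Analytically, E[S(x)] = bias·P(ter > x) − bias·P(ter < x) = bias·(1 − 2x), i.e. exactly the decreasing linear profile crossing zero at x = 1/2.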
This model is in good agreement with the overall properties of the skew profiles observed in the human genome and sustains the hypothesis that each detected upward jump corresponds to an ori. To summarize, we have proposed a simple model for replication in the human genome whose key features are (i) well-positioned ori and (ii) a stochastic positioning of the ter. This model predicts jagged skew profiles as observed around most of the experimentally identified ori as well as along the 22 human autosomal chromosomes. Using this model as a guide, we have selected 287 domains delimited by pairs of successive upward jumps in the S profile and covering 24% of the genome. The 486 corresponding jumps are likely to mark 486 ori active in the germline cells. Given the rather large size of the selected sequences (∼2 Mbp on the native sequence), these putative ori are likely to correspond to the large replicons that require most of the S-phase to be replicated [15]. Another possibility is that these ori might correspond to the so-called replication foci observed in interphase nuclei [15]. These stable structures persist throughout the cell cycle and subsequent cell generations, and likely represent a fundamental unit of chromatin organization. Although the prediction of 486 ori seems a significant achievement compared with the very small number of experimentally identified ori, one can reasonably hope to do much better relative to the large number (probably several tens of thousands) of ori. Actually, what makes the analysis quite difficult is the extreme variability of the ori spacing, from 100 kbp to several Mbp, together with the necessity of disentangling the part of the strand asymmetry coming from replication from that induced by transcription, a task which is rather delicate in regions with high gene density. To overcome these difficulties, we plan to use the WT with the theoretical skew profile in Fig. 5(b) as an adapted analyzing wavelet.
The identification of a few thousand putative ori in the human genome would be a very promising methodological step towards the study of replication in mammalian genomes. This work was supported by the Action Concertée Incitative IMPbio 2004, the Centre National de la Recherche Scientifique, the French Ministère de la Recherche and the Région Rhône-Alpes.

FIG. 1: S = S_TA + S_GC vs the position n in the repeat-masked sequences, in regions surrounding 3 known human ori (vertical bars): (a) MCM4 (native position 48.9 Mbp in chr. 8 [7(b)]); (b) c-myc (nat. pos. 128.7 Mbp in chr. 8 [7(a)]); (c) TOP1 (nat. pos. 40.3 Mbp in chr. 20 [7(c)]). The values of S_TA and S_GC were calculated in adjacent 1 kbp windows. The dark (light) grey dots refer to "sense" ("antisense") genes with coding strand identical (opposed) to the sequence; black dots correspond to intergenes.

FIG. 2: S = S_TA + S_GC skew profiles in 9 Mbp repeat-masked fragments of the human chromosomes 9 (a), 14 (b) and 21 (c). Qualitatively similar but less spectacular serrated S profiles are obtained with the native human sequences.

FIG. 3: Statistical analysis of the sharp jumps detected in the S profiles of the 22 human autosomal chromosomes by the WT microscope at scale a = 200 kbp for repeat-masked sequences [12(b)]. |∆S| = |S(3′) − S(5′)|, where the averages were computed over the two adjacent 20 kbp windows, respectively in the 3′ and 5′ direction from the detected jump location. (a) Histograms N(|∆S|) of |∆S| values. (b) N(|∆S| > ∆S*) vs ∆S*. In (a) and (b), the solid (resp. thin) line corresponds to downward ∆S < 0 (resp. upward ∆S > 0) jumps.

FIG. 4: Statistical analysis of the skew profiles of the 287 pairs of ori selected as explained in the text. The ori spacing l was rescaled to 1 prior to computing the mean S values in windows of width 1/10, excluding from the analysis the first and last half intervals.
(a) Mean S profile (•) over windows that are more than 90% intergenic. (b) Mean S profile (•) over windows that are more than 90% genic; the symbols ( ) (resp. ( )) correspond to the percentage of sense (antisense) genes located at that position among the 287 putative ori pairs. (c) Histogram of the slope s of the skew profiles after rescaling l to 1. (d) Histogram of the mean absolute deviation of the S profiles from a linear profile.

FIG. 5: A model for replication in the human genome. (a) Theoretical skew profiles obtained when assuming that two equally active adjacent ori are located at n/l = 0 and 1, where l is the ori spacing; the 3 profiles in thin, thick and normal lines correspond to different ter positions. (b) Theoretical mean S profile obtained by summing step-like profiles as in (a), under the assumption of a uniform random positioning of the ter in between the two ori.
01755248
en
[ "spi.nrj" ]
2024/03/05 22:32:10
2018
https://pastel.hal.science/tel-01755248/file/65155_LEE_2018_archivage.pdf
Heejae Lee ([email protected])

Jean-Charles Vanel, Abderrahim Yassar, Gael Zucchi, Chloe Dindault, Minjin Kim, Dr Warda Hadouchi, Dr Anna Shirinskaya, Dr Mark Chaigneau, Dr Jongwoo Jin, Dr Taewoo Jeon, Dr Chang-Seok Lee, Dr Chang-Hyun Kim, Dr Heechul Woo, Dr Sungyup Jung, Seonyong Park, Sanghuk Yoo, Junha Park, Hyeonseok Sim, Gookbin Cho

Analysis of Current-Voltage Hysteresis and Ageing Characteristics for CH3NH3PbI3-xClx Based Perovskite Thin Film Solar Cells

Abstract

The synthesis of the halide perovskite materials is optimized in a first step by controlling the deposition conditions, such as the annealing temperature (80 °C) and the spinning rate (6000 rpm), in the one-step spin-casting process. CH3NH3PbI3-xClx based perovskite solar cells are then fabricated in the inverted planar structure and characterized optically and electrically in a second step. Direct experimental evidence of the motion of the halide ions under an applied voltage has been observed using glow discharge optical emission spectroscopy (GDOES). An ionic diffusion length of 140 nm and a mobile iodide fraction of 65% have been deduced. It is shown that the current-voltage hysteresis in the dark is strongly affected by the halide migration, which causes a substantial screening of the applied electric field. We thus found a shift of the zero-current voltage (< 0.25 V) and a leakage current (< 0.1 mA/cm²) in the dark, depending on the measurement conditions. From the current-voltage curves as a function of temperature, we have identified the freezing temperature of the mobile iodides at 260 K. Using the Nernst-Einstein equation, we have deduced a value of 0.253 eV for the activation energy of the mobile ions. Finally, the ageing process of the solar cell has been investigated with optical and electrical measurements.
We deduced that the ageing process appears first at the perovskite grain surfaces and boundaries. The electrical characteristics are degraded through a deterioration of the silver top electrode due to the diffusion of iodides toward the silver, as shown by GDOES analysis.

Abstract (translated from French)

Organic-inorganic lead halide perovskites are very promising materials for the next generation of solar cells, with intrinsic advantages such as a low fabrication cost (wide availability of the base materials and low-temperature processing) and a good photovoltaic conversion efficiency. However, perovskite solar cells are still unstable and show deleterious current-voltage hysteresis effects. In this thesis, results of the physical analysis of CH3NH3PbI3-xClx based perovskite thin films and solar cells are presented. The electrical transport characteristics and the ageing processes were studied with different approaches. In a first step, the synthesis of the perovskite material was optimized by controlling the deposition conditions of the one-step films, such as the spin-coating rate (6000 rpm) and the annealing temperature of the films (80 °C). In a second step, CH3NH3PbI3-xClx based perovskite solar cells were fabricated using the inverted planar structure and characterized optically and electrically. Thanks to glow discharge optical emission spectroscopy (GDOES), a displacement of the halide ions was observed experimentally and directly under an applied voltage. An ionic diffusion length of 140 nm and a mobile ion fraction of 65% were deduced.
It is shown that the current-voltage hysteresis in the dark is strongly affected by the migration of the halide ions, which causes a substantial screening of the applied electric field. We thus found, in the dark, a shift of the zero-current voltage of up to 0.25 V and a leakage current of up to 0.1 mA/cm², depending on the measurement conditions. From the current-voltage curves as a function of temperature, we determined the ion/electron conductivity transition temperature at 260 K and analyzed the experimental results using the Nernst-Einstein equation, yielding an activation energy of 0.253 eV for the mobile ions. Finally, the ageing process of the solar cell was studied with optical and electrical measurements. We deduced that the ageing process appears first at the surface of the perovskite crystals as well as at the grain boundaries. The GDOES measurements indicate that the electrical characteristics of the perovskite cells are lost through a progressive corrosion of the silver top electrode caused by the diffusion of iodide ions.

Chapter 1. Introduction

Solar cell operation

Photovoltaics

The worldwide consumption of energy has increased every year by several percent over the last thirty years 1 . Due to an overall growth of the world population and further development in areas like Asia, Africa and Latin America, it is believed that this growth will persist or even accelerate over the coming decades 2 . Most of this energy is nowadays supplied by fossil fuels on the one hand and by nuclear energy on the other hand. However, these resources are limited and their use has a serious environmental impact, which will probably extend over several future generations 3 . This situation already poses an enormous challenge for the present generation to start a transition in energy consumption and production.
More efficient usage of produced energy could possibly lead to a decreased consumption, whereas new technologies could steer this transition toward a more sustainable energy production. Sustainable development meets the needs of today without jeopardizing the future. In this respect, renewable energy sources fit in very well. Besides their environmental friendliness, they offer several advantages 4 . Diversification of energy supplies can lead to more economic and political stability. Moreover, countries and regions can become more independent by supplying their own energy from renewables instead of having to import fuels or electricity from large production plants. Finally, a transition from traditional fuel-based energy production to renewable energy resources can even lead to a substantial increase in employment. Several renewable energy resources are under development or have even already been introduced on the market. Still, they make up only a limited part of the total energy production 5 . Among these renewable energy resources, the direct and indirect use of solar energy is believed to have a much larger potential than is exploited nowadays. The total amount of solar irradiation per year on the earth's surface equals 10000 times the world's yearly energy need. This solar energy can on the one hand be applied passively, for lighting and space heating in buildings. Besides this, active applications concern the heating of water or heat-transfer fluids through concentrator systems for domestic use or even in industrial processes. The
Combining these effects with the complication that besides a component of radiation directly from the sun also a significant part is scattered as indirect or diffuse light, other AMstandards are developed. A commonly used reference radiation distribution is the global spectrum. This global spectrum combines a direct AM1.5 part with a (= 100 mW/cm 2 ). It accords rather well to circumstances for Western European countries on a clear sunny day in summer. common to list the short-circuit current density (mA/ ) rather than the short-circuit current. Second condition is the number of photons. J SC from a solar cell is directly dependent on the light intensity. So the spectrum of the incident light is also important. Third condition is the optical properties with absorption and reflection of the solar cell. It can be controlled directly by the thickness of active layer. And last condition is the collection probability of the solar cell. It depends chiefly on the surface passivation and the minority carrier lifetime in the base. When comparing solar cells of the same material type, the most critical material parameter is the diffusion length and surface passivation. In a cell with perfectly passivated surface and uniform generation, the equation for the short-circuit current can be approximated as: = ( + ) (1.5) When G is the generation rate, and L n and L p and the electron and hole diffusion lengths respectively. Although this equation makes several assumptions which are not true for the conditions encountered in most solar cells, the above equation nevertheless indicates that the short-circuit current depends strongly on the generation rate and the diffusion lengths. Open-circuit voltage (V OC ) The open-circuit voltage, V OC , is the maximum voltage available from a solar cell, and The above equation shows that V OC depends on the saturation current of the solar cell and the light-generated current. 
While J SC typically has a small variation, the key effect is the saturation current, since this may vary by orders of magnitude. The saturation current, I 0 depends on recombination in the solar cell. Open-circuit voltage is then a measure of the amount of recombination in the device. The V OC can also be determined from the carrier concentration 11 : this = ( ∆ )∆ (1.7) Where kT/q is the thermal voltage, N A is the doping concentration, n is the excess carrier concentration and n i is the intrinsic carrier concentration. The determination of V OC from the carrier concentration is also termed implied V OC . Fill factor & Power conversion efficiency (PCE) The J SC and the V OC are the maximum current and voltage respectively from a solar cell. However, at both of these operating points, the power from the solar cell is zero. The fill factor (FF) is a parameter which, when considering the J-V curve in conjunction with V OC and J SC , determines the maximum power from a solar cell. The FF is defined as the ratio of the maximum power from the solar cell to the product of V OC and J SC . Graphically, the FF is a measure of the squareness of the J-V curve of the solar cell, so is the area of the largest rectangle inscribed in the J-V curve in Figure 1.3b. % = × ×100 = × × ×100 (1.8) The PCE is the most commonly used parameter to compare the performance of one solar cell to another. Efficiency is defined as the ratio of electrical power output from the solar cell to input incoming optical power from the sun. In addition to reflecting the performance of the solar cell itself, the efficiency depends on the spectrum and intensity of the incident sunlight and the temperature of the solar cell. Therefore, conditions under which efficiency is measured must be carefully controlled in order to compare the performance of one device to another. 
The efficiency of a solar cell is determined as the fraction of incident power which is converted to electricity and is defined by the following equation:

PCE = (V_OC × J_SC × FF)/P_in (1.9)

The J-V characteristic of a real cell, including the parasitic series and shunt resistances, is described by:

J = J_L − J_0 [exp(q(V + J R_S)/kT) − 1] − (V + J R_S)/R_SH (1.10)

In the ideal case, R_S becomes zero and R_SH becomes infinite. To understand the effect of the resistances on the solar cell properties, Figure 1.4 shows the equivalent circuit and examples of J-V characteristics with poor R_S and R_SH 12 .

Introduction of Perovskite Solar Cells

While the organic-inorganic halides have been of interest since the early twentieth century 13 , the first report of perovskite-structured hybrid halide compounds was published by Weber in 1978 14,15 . He reported both CH3NH3PbX3 (X = Cl, Br, I) and the CH3NH3SnBr1-xIx alloy. In the subsequent decades, these materials were studied in the context of their unusual chemistry and physics [16][17][18], with the first solar cell appearing in 2009 19 . The notable achievements in the photovoltaic applications of hybrid perovskites have been the subject of many reviews. The basics of the perovskite crystal structure are introduced and the unique dynamic behaviors of the hybrid organic-inorganic materials are presented in this chapter, which underpins their performance in the photovoltaic devices that will be discussed in the later chapters. Walsh et al. is the leading group that verified the molecular motion and dynamic crystal structure of hybrid halide perovskites [20][21][22][23][24][25][26][27][28][Leguy et al.].
Perovskite

Alkali-metal lead and tin halides had already been synthesized in 1893 [Wells], yet the first crystallographic studies that determined that cesium lead halides have a perovskite structure with the chemical formula CsPbX3 (X = Cl, Br or I) were only carried out in 1957 by the Danish scientist Christian Møller [Møller]. He also observed that these coloured materials were photoconductive, thus suggesting that they behave as semiconductors. In 1978, Dieter Weber replaced cesium with methylammonium (MA) cations (CH3NH3+) to generate the first three-dimensional organic-inorganic hybrid perovskites [Weber]. The general crystal structure of these materials is shown in Fig. 1.5. the undistorted facets of the cuboctahedral cavity. Correspondingly, molecules belonging to different planes are anti-aligned with a head-tail motif. Such an anti-ferroelectric alignment is expected from consideration of the molecular dipole-dipole interaction 21 . In the low-temperature orthorhombic phase, the CH3NH3+ sub-lattice is fully ordered (a low-entropy state). The ordering may be sensitive to the material preparation and/or the cooling rate into this phase, i.e. the degree of quasi-thermal equilibrium. It is possible that different ordering might be frozen into the low-temperature phase by mechanical strain or electric fields. At 165 K, MAPI goes through a first-order phase transition from the orthorhombic to the tetragonal space group, which continuously undergoes a second-order phase transition to the cubic phase by ca.
327 K 16,[START_REF] Weller | Complete structure and cation orientation in the perovskite photovoltaic methylammonium lead iodide between 100 and 352[END_REF] . As with the orthorhombic phase, this can be considered a √2×√2×2 expansion of the cubic perovskite unit cell. The molecular cations are no longer in a fixed position as in the orthorhombic phase: CH3NH3+ is disordered between two non-equivalent positions in each cage. [START_REF] Wasylishen | Cation rotation in methylammonium lead halides[END_REF] With increasing temperature the tetragonal lattice parameters become more isotropic. The molecular disorder also increases, to the point where a transition to a cubic phase occurs around 327 K. The transition can be seen clearly from changes in the heat capacity, 17 as well as in temperature-dependent neutron diffraction [START_REF] Weller | Complete structure and cation orientation in the perovskite photovoltaic methylammonium lead iodide between 100 and 352[END_REF] . Indeed, for the bromide and chloride analogues of MAPI, pair-distribution function analysis of X-ray scattering data indicates a local structure with significant distortion of the lead halide framework at room temperature.

Hysteresis characteristics and device stability

1.2.3.1. Hysteresis

The rapid rise in performance of the cells has been accomplished essentially through minor modifications of the device structure, film morphology, fabrication methods, etc. However, despite the excellent device efficiencies, several fundamental issues remain. Many observations indicate that perovskite films suffer from un-stabilized optoelectronic performance: hysteresis in the current-voltage curves, wide distribution in performance, limited performance durability, difficulties in reproducing the results, etc. These issues require deeper scientific understanding and demand serious attention. Among all the above problems, hysteresis has apparently been considered the major one.
It has been widely observed that perovskite solar cells show a substantial discrepancy between the current density-voltage (J-V) curves measured on the forward scan (from short circuit to open circuit) and on the backward scan (from open circuit to short circuit). J-V hysteresis can also be found in dye-sensitized solar cells (DSSCs), organic thin-film solar cells (OSCs) and Si solar cells when the voltage scan is too fast [START_REF] Naoki | Methods of measuring energy conversion efficiency in dye-sensitized solar cells[END_REF] . This hysteresis is explained by the effect of capacitive charge, including space charges and trapped charges: when the scanning speed is faster than the release rate of the traps, or faster than the space-charge relaxation time, hysteresis is seen. In organic-inorganic perovskite solar cells, the hysteresis behavior is much slower but more complex and anomalous. This anomalous property of the perovskite solar cells creates confusion about the actual cell performance [START_REF] Editorial | Solar cell woes[END_REF][START_REF] Editorial | Bringing solar cell efficiencies into the light[END_REF][START_REF] Editorial | Perovskite fever[END_REF] . Hysteretic J-V curves imply that there is a clear difference in transient carrier collection at a given voltage during the forward and backward scans. It is generally observed that the backward scan measures a higher current than the forward scan, independent of the scan sequence, which confirms that carrier collection is always more efficient during the backward scan. In general, carrier collection (or current) in the device depends on carrier generation, separation, and on transport and recombination in the bulk of the layers and across the different interfaces of the device. As carrier generation and separation are fast processes and depend only on illumination (not on the voltage scan), any difference in collection must be caused by transport and/or transfer at the interfaces.
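The qualitative picture above — carrier collection that depends on the scan history — can be reproduced with a toy model in which a single slow state variable P (standing in for trapped charge or accumulated ions) relaxes toward the applied voltage. This is only an illustrative sketch, not a physical device model; all numbers are arbitrary.

```python
def scan(v_start, v_end, rate, tau=1.0, steps=500,
         Jph=20.0, g=10.0, c=5.0):
    """Sweep V from v_start to v_end at a given rate (V per unit time).
    A slow internal state P relaxes toward V, dP/dt = (V - P)/tau,
    and the measured current is J = Jph - g*V + c*P (arbitrary units)."""
    dt = abs(v_end - v_start) / rate / steps
    dv = (v_end - v_start) / steps
    V, P = v_start, float(v_start)  # start pre-equilibrated (P = V)
    curve = []
    for _ in range(steps):
        V += dv
        P += (V - P) * min(dt / tau, 1.0)  # Euler step, clamped for stability
        curve.append((V, Jph - g * V + c * P))
    return curve
```

A fast scan leaves P frozen near its starting value, so the backward curve (starting equilibrated at open circuit) lies above the forward one, while a very slow scan lets P track V and the two curves coincide — mirroring the scan-rate dependence described above.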
Parameters affecting hysteresis

As carrier collection depends on the conductivity of the perovskite and of the other layer materials and on the connectivity at their interfaces, hysteresis is affected by many factors that can slightly change the characteristics of the layers in a perovskite device. Therefore, the diversity in device structures and fabrication methods, and even changes in the measurement conditions, result in a wide variation in the observed trends of hysteresis. As a result, the problem of hysteresis becomes too complex to be understood completely.

Device structure and process parameters

Perovskite devices of different architectures using the same perovskite but different electron and hole collecting layers show different magnitudes of hysteresis [START_REF] Kim | Control of I-V Hysteresis in CH3NH3PbI3 Perovskite Solar Cell[END_REF] . For instance, the standard planar heterojunction architectures FTO / TiO2 compact layer / perovskite / spiro-OMeTAD / Ag, FTO / PCBM / perovskite / spiro-OMeTAD / Ag and FTO / TiO2-PCBM / perovskite / spiro-OMeTAD / Ag (PCBM is [6,6]-phenyl-C61-butyric acid methyl ester) exhibit different degrees of hysteresis.

In addition, temperature and light intensity can also alter the J-V hysteresis significantly. For a planar perovskite solar cell (FTO/TiO2 compact layer/perovskite/spiro-OMeTAD/Au), as reported by Ono et al. [START_REF] Ono | Temperature-dependent hysteresis effects in perovskite-based solar cells[END_REF] , the J-V hysteresis is large at low (250 K) and room temperature (300 K), whereas it becomes weak at higher temperature (360 K). Grätzel et al. [START_REF] Meloni | Ionic polarization-induced current-voltage hysteresis in CH3NH3PbX3 perovskite solar cells[END_REF] also observed that the hysteresis of iodide-based perovskite cells increases as the temperature decreases. Another interesting observation is that the reverse scan shows only a minor dependence on temperature, while the forward J-V curve is strongly affected by temperature changes.
Even in the case of hysteresis-free perovskite cells of inverted architecture, remarkably large hysteresis appears when the device is measured at low temperature [START_REF] Bryant | Observable hysteresis at low temperature in "Hysteresis Free" organicinorganic lead halide perovskite solar cells[END_REF] . As in other solar cells, the photocurrent of perovskite cells increases linearly with light intensity, and with increasing photocurrent the gap between the forward and backward J-V curves increases proportionately. For a planar perovskite cell (FTO/TiO2 compact layer/CH3NH3PbI3-xClx/spiro-OMeTAD/Au), as observed in our lab, the difference between the forward and backward performance increases with light intensity. However, when normalized to the photocurrent, the J-V hysteresis remains almost unchanged. For a TiO2 mesoscopic device (FTO/TiO2 dense layer/TiO2 mesoporous layer/CH3NH3PbI3-xClx/spiro-OMeTAD/Au), as reported by Grätzel et al., the shape and magnitude of the J-V hysteresis remain independent of light intensity. Such independence of the hysteresis from light intensity rules out a direct involvement of the photo-generated carriers (Fig. 1.14).

Ferroelectric polarization

Piezoresponse force microscopy (PFM) of CH3NH3PbI3 at different poling (pre-biasing in dark) conditions shows ferroelectric domains in CH3NH3PbI3 [START_REF] Yasemin Kutes | Direct observation of ferroelectric domains in solution-processed CH3NH3PbI3 perovskite thin films[END_REF][START_REF] Kim | Ferroelectric polarization in CH3NH3PbI3 perovskite[END_REF][START_REF] Chen | Interface band structure engineering by ferroelectric polarization in perovskite solar cells[END_REF] .
Such poling of the perovskite films dramatically alters the cell performance and J-V hysteresis, which is attributed to polarization of the ferroelectric domains leading to a modification of the band structure at the interfaces [START_REF] Chen | Interface band structure engineering by ferroelectric polarization in perovskite solar cells[END_REF] . While some results evidence the ferroelectric property of perovskite and support its strong effect on hysteresis, other reports contradict the presumption that hysteresis is caused by ferroelectric polarization [START_REF] Snaith | Anomalous hysteresis in perovskite solar cells[END_REF] . Fan et al. [START_REF] Fan | Ferroelectricity of CH3NH3PbI3 perovskite[END_REF] reported that perovskites are not ferroelectric at room temperature, although their theoretical calculations predicted the material to be mildly ferroelectric. Therefore, whether the perovskite compound exhibits ferroelectric behavior under the operating conditions of the device still remains a matter of debate. Furthermore, the effect in an actual device consisting of thin layers can differ from the ferroelectric property of bulk perovskite. The explanation of a transient effect caused by ferroelectric polarization must remain consistent when all the different interfaces are included. Therefore, it is reasonable that the interface properties play an equally important role in causing hysteresis.

Unbalanced carrier extraction

Although polarization by pre-biasing the perovskite was found to enhance hysteresis, Jena et al. [START_REF] Jena | The interface between FTO and the TiO2 compact layer can be one of the origins to hysteresis in planar heterojunction perovskite solar cells[END_REF] discovered that hysteresis is also exhibited by a cell made of non-ferroelectric PbI2 (Fig. 1.16). This result indicated that the ferroelectric property could not be the only reason for hysteresis. Examination of the J-V characteristics of different interfaces in simplified structures like FTO/TiO2 compact layer (CL)/spiro-OMeTAD/Au and FTO/TiO2 CL disclosed that the interface between FTO and TiO2 can be one of the contributors to hysteresis. Besides, it strongly suggested that the interface between perovskite and TiO2 could be a major player in the hysteretic J-V curves. In devices made of perovskite, modification of the interface between the perovskite and the electron collecting layer visibly affects hysteresis. For instance, modification of the TiO2 compact layer (CL) with C60 83 reduces hysteresis. Incorporation of Zr [START_REF] Nagaoka | Zr incorporation into TiO2 electrodes reduces hysteresis and improves performance in hybrid perovskite solar cells while increasing carrier lifetimes[END_REF] and Au nanoparticles [START_REF] Yuan | Hot-electron injection in a sandwiched TiOx-Au-TiOx structure for highperformance planar perovskite solar cells[END_REF] in the TiO2 compact layer also reduces hysteresis. In addition, the absence of (or negligible) hysteresis observed in perovskite-based cells using organic electron collecting layers [START_REF] Im | 18.1 % hysteresis-less inverted CH3NH3PbI3 planar perovskite hybrid solar cells[END_REF][START_REF] Tao | 17.6 % steady state efficiency in low temperature processed planar perovskite solar cells[END_REF] instead of a TiO2 compact layer sandwiched between FTO and perovskite also supports the fact that the interface properties can be crucial for hysteresis. Carrier extraction depends strongly on the physical and electrical contact at the interfaces; therefore, the conductivity of the layers, their morphology and the interface connectivity play a role in causing hysteresis.
Gaps at the interfaces can act as capacitors due to carrier accumulation and thereby alter carrier extraction significantly [START_REF] Cojocaru | Origin of the hysteresis in I-V curves for planar structure perovskite solar cells rationalized with a surface boundary-induced capacitance model[END_REF] . Although ferroelectricity can be involved in the above-mentioned interfacial phenomena, it is not solely responsible for hysteresis. The carrier dynamics at the interface, which might be influenced either by ferroelectric polarization or by ion migration, is held responsible for causing hysteresis.

Ion migration

In halide perovskites, smaller organic cations can diffuse faster than larger ones, and the halide anions are considered to have significantly higher mobility than the heavy metal cation (Pb2+). The calculated activation energy for migration of halide vacancies (iodide vacancies) is significantly lower than that of the organic cation vacancies [START_REF] Haruyama | First-principles study of ion diffusion in perovskite solar cell sensitizers[END_REF][START_REF] Eames | Ionic transport in hybrid lead iodide perovskite solar cells[END_REF] . The methylammonium cation (MA+) and the iodide anion (I-) are freely diffusible in MAPbI3 at room temperature. Moreover, MA+I- is thermally unstable and can evaporate from the crystal structure at elevated temperature [START_REF] Alberti | Similar structural dynamics for the degradation of CH3NH3PbI3 in air and in vacuum[END_REF] . These ions in the halide perovskite never react with the photo-generated carriers; rather, they contribute electrostatically and geometrically to the self-organization of the crystal structure. However, migration of the ions in the perovskite is considered to cause carrier localization, which gives rise to J-V hysteresis.
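The activation-energy argument above can be made semi-quantitative with a simple Arrhenius estimate of vacancy hop rates. The sketch below is illustrative only: the activation energies are representative values of the kind reported in first-principles studies such as the cited literature (roughly 0.58 eV for iodide vacancies, 0.84 eV for MA vacancies and 2.31 eV for Pb vacancies), and the attempt frequency is an assumed order of magnitude.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def hop_rate(Ea_eV, T=300.0, nu0=1e12):
    """Arrhenius rate of a thermally activated ion hop:
    nu = nu0 * exp(-Ea / (kB*T)); nu0 (attempt frequency, Hz) is assumed."""
    return nu0 * math.exp(-Ea_eV / (K_B * T))

# Representative literature activation energies (eV) -- illustrative inputs
EA = {"I_vacancy": 0.58, "MA_vacancy": 0.84, "Pb_vacancy": 2.31}
```

At 300 K this gives iodide hops roughly four orders of magnitude more frequent than MA hops, consistent with the statement that halide migration dominates; the rates drop steeply on cooling, which connects to the reduced ion motion at low temperature discussed later in this thesis.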
More importantly, the process of ion motion can affect the stability of perovskite devices through a continuous change in the chemical and physical properties of the perovskite under photovoltaic operation. Such ionic migration under a voltage applied to the device has been discussed in relation to the generation of hysteresis. Snaith et al., based on the photocurrent behavior of a perovskite cell influenced by pre-biasing, proposed that under forward bias MAPbI3 becomes polarized due to the accumulation of positive and negative space charges near the electron and hole collector interfaces [START_REF] Zhang | Charge selective contacts, mobile ions and anomalous hysteresis in organic-inorganic perovskite solar cells[END_REF] . This charge accumulation is assumed to cause n-type and p-type doping at the interfaces (formation of a p-i-n structure), which temporarily enhances photocurrent generation. Accumulation of migrated ions at the interfaces can change the polarity [START_REF] Zhao | Anomalously large interface charge in polarity-switchable photovoltaic devices: an indication of mobile ions in organic-inorganic halide perovskites[END_REF] (Fig. 1.17), and the dynamic change in the accumulation of these charges with the scanning voltage (changing internal field) is assumed to generate the hysteresis in the photocurrent of perovskite devices [START_REF] Zhang | Charge selective contacts, mobile ions and anomalous hysteresis in organic-inorganic perovskite solar cells[END_REF][START_REF] Zhao | Anomalously large interface charge in polarity-switchable photovoltaic devices: an indication of mobile ions in organic-inorganic halide perovskites[END_REF] .

Figure 1.17. Device and mechanism of switchable polarity in perovskite photovoltaic devices [START_REF] Zhao | Anomalously large interface charge in polarity-switchable photovoltaic devices: an indication of mobile ions in organic-inorganic halide perovskites[END_REF]

Li et al.
[START_REF] Li | Iodine migration and its effect on hysteresis in perovskite solar cells[END_REF] studied iodine migration and its effect on hysteresis in perovskite solar cells.

Trap states

Among all the possible causes of hysteresis, trap states are also considered to be major. Even though recent reports strongly support the hypothesis that ion migration is responsible for hysteresis, most of the proposed models cannot rule out the involvement of trap states. While some results partially agree with the assumption that trap states cause hysteresis, other results clearly indicate their participation in the process. Trap states on the surface and at the grain boundaries of the perovskite have been proposed to be the origin of hysteresis [START_REF] Shao | Origin and elimination of photocurrent hysteresis by fullerene passivation in CH3NH3PbI3 planar heterojunction solar cells[END_REF] . Fullerene deposition on the perovskite is believed to passivate these trap states and thereby eliminate the notorious hysteresis in the photocurrent (Fig. 1.18). While ion migration is much discussed in relation to J-V hysteresis at present, the lack of more direct evidence and of comprehensive models makes it hard to conclude that ion migration is the sole cause of hysteresis. In fact, according to a model proposed by Snaith et al., ion migration alone cannot explain hysteresis, but ion migration and interfacial trap states together can [START_REF] Van Reenen | Modeling anomalous hysteresis in perovskite solar cells[END_REF] . Reducing the density of either the mobile ions or the trap states can reduce hysteresis. However, a few facts conflict with the theory that trap states are responsible for hysteresis.
One of them is the observed low trap density in perovskite [START_REF] Shi | Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals[END_REF][START_REF] Oga | Improved understanding of the electronic and energetic landscapes of perovskite solar cells: high local charge carrier mobility, reduced recombination, and extremely shallow traps[END_REF] . Such a relatively small trap density would be expected to result in slight or no hysteresis if trap states were the only players. In addition, the effective change in trap density with illumination intensity should be reflected in an altered hysteresis of the perovskite cells: as trap states are better filled under more intense light, hysteresis would be expected to decrease under stronger illumination. On the contrary, it has been observed that hysteresis increases with light intensity, in proportion to the photocurrent. Therefore, the presumption that trapping-detrapping of existing trap states in the perovskite causes hysteresis is not completely convincing. Further investigation is needed to find more direct and distinct evidence for an active participation of trap states in creating hysteresis. Also, the mechanism of generation of the trap states and their density in the perovskite need to be examined further to support the models based on trap states.

Stability

In order to check the performance stability, Jena et al. measured the photocurrent density of (non-encapsulated) cells operated across a load (600 Ω) with cyclic switching on and off of the light (1 sun). [START_REF] Jena | Steady state performance, photoinduced performance degradation and their relation to transient hysteresis in perovskite solar cells[END_REF] The time for each on-off cycle was fixed to 3 min and all the devices were measured for 10 cycles, the complete measurement thus lasting almost 1 h. As can be seen in Fig.
1.20, hysteresis increased and performance stability decreased with the thickness of the perovskite film in the planar perovskite solar cells, indicating that hysteresis may be directly related to such performance instability. (*PEDOT = PEDOT:PSS)

Motivation

The first motivation of this thesis is the lack of theoretical understanding of J-V hysteresis in organic-inorganic hybrid perovskite films and devices. In spite of the impressive progress in terms of device performance, the current knowledge of many fundamental phenomena is still incomplete. From a historical point of view, this problem might be due to the urgent need to prove that perovskite devices can compete with existing technologies in the photovoltaic field. Tremendous research efforts have thus mainly been put into experimental work, which could finally meet such requirements. However, many theoretical questions remain unsolved. To answer these questions, various analysis technologies are used in this thesis. In conclusion, direct experimental evidence of the special characteristics of organic-inorganic hybrid perovskite films is provided. Such evidence makes it possible to interpret the experimental results, and even to predict them, based on proper physical insight; this process forms a positive feedback loop for experimental design and fabrication.

The second motivation is understanding the ageing mechanisms, with a view to overcoming the poor stability of organic-inorganic hybrid perovskite solar cells. Organic devices, such as displays (OLED, organic light-emitting diodes) and photovoltaics (OPV, organic photovoltaics), have been studied since the 1990s, and ageing has remained an intractable issue for organic materials and devices. To solve the stability issue, most academic groups still focus on encapsulation technologies or on identifying the ageing mechanisms. I previously studied an innovative encapsulation technology for OPV devices.
[START_REF] Lee | Solution processed encapsulation for organic photovoltaic[END_REF] However, encapsulation is not a fundamental solution to the stability issue; a more fundamental understanding is needed. In this thesis, diverse electrical and optical measurements are used to understand the ageing mechanisms. This study makes a step forward in the quest to elucidate the ageing phenomena usually observed in perovskite-based solar cells.

Thesis overview

This thesis is titled 'Electrical transport characteristics and ageing characteristics of Perovskite thin film'. As the title implies, our foremost aims have been the development of fabrication techniques for perovskite solar cells and the investigation of perovskite film characteristics for photovoltaic devices using various measurement systems. Here, a brief description of each chapter is given as an overview.

Chapter 1 (Introduction) provides short summaries of the basic physical background needed to understand perovskite thin films and perovskite solar cell devices. The origin of the semiconducting properties of organic-inorganic hybrid materials is briefly described, as are the parameters affecting hysteresis and the mechanisms at its origin.

Chapter 2 (Experimental Methods) explains the main techniques and methods that have been applied to assess the electrical and optical characteristics of the perovskite thin films and devices. The seven main methods used for analyzing thin films are UV-Vis spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), ellipsometry, transmission line measurement (TLM), steady-state photo-carrier grating (SSPG) and atomic force microscopy (AFM). The two main methods used for analyzing the solar cell devices are current density-voltage (J-V) measurement and glow discharge optical emission spectroscopy (GD-OES). As will be seen in the result chapters, all these methods have been used in a complementary way.
An introduction and practical knowledge for each method are given here to understand the results obtained with it.

In Chapter 3 (Development of Cells Performance by Studying Film Characteristics), we discuss the development of cell performance using various thin-film analysis techniques. These techniques address two categories of characteristics of the perovskite thin film: electrical and optical. For the electrical characteristics, the contact resistance and conductivity are measured by the TLM system and the diffusion length is estimated by the SSPG measurement. For the optical characteristics, we can check the crystallinity (by XRD and SEM) and the energy band gap (by UV-Vis spectroscopy). Furthermore, all the thin-film measurement systems are used to understand the ageing characteristics of the perovskite thin films. After one and a half years of effort, the PCE of our perovskite solar cells has risen from about 6 % to 12.7 % with great reproducibility.

In Chapter 4 (Ionic migration in CH3NH3PbI3-xClx based perovskite solar cells), direct experimental evidence of ionic migration is obtained using the GD-OES system. Voltages applied between -2.5 V and +2.5 V induce ionic migration in the perovskite thin film. Lead (Pb), nitrogen (N), iodine (I) and chlorine (Cl) are the main elements monitored to study the ionic transport. We confirm that the I ions in the perovskite thin film consist of both mobile and fixed ions. In addition, the average length of the ionic migration is estimated from the GD-OES measurements.

In Chapter 5 (Hysteresis characteristics in CH3NH3PbI3-xClx based perovskite solar cells), the J-V hysteresis is studied not only under light but also in dark conditions, in order to understand the intrinsic material characteristics without considering photo-generated (excess) carriers. The initial voltage, the scan rate and the temperature are the parameters used to control the hysteresis.
In conclusion, we confirm the presence of J-V hysteresis in the dark, even with the inverted planar structure. In addition, we determine the ionic freezing temperature from J-V analysis at low temperature.

Chapter 6 (Ageing Characteristics in CH3NH3PbI3-xClx based Perovskite solar cells) presents the device performance results together with the film characteristics in order to understand the ageing mechanism. The J-V curves and the GD-OES system are used for studying the device performance; TLM, XRD and UV-Vis spectroscopy are used for studying the film characteristics. Finally, it is proposed that ageing originates not from variations in the bulk but from significant alterations at the interface between the perovskite and PCBM.

In Chapter 7 (Conclusion and Outlook), the major results found and analyzed in this thesis are summarized with concluding remarks. Limitations encountered during the research and related suggestions for future work are also specified.

Chapter 2. Experimental methods

Introduction

In this chapter, the fabrication details and analysis methods are described. The chapter is separated into three parts: device fabrication, film characterization and device characterization. In the first part, the device fabrication details are explained, from substrate/solution preparation to all the layer deposition processes using different techniques; this part also includes a description of the different materials used in this study. In the second part, the characterization methods for the perovskite film are explained in two categories, optical and electrical, and the working principles of the equipment used for the characterizations are described. Finally, the electrical characterization of the perovskite solar cells and the device analysis method are discussed in the last part.
Organic-inorganic hybrid perovskite solar cell device fabrication

Substrate and solution preparation

Substrate preparation is important to maintain the reproducibility of the device performance. The indium tin oxide (ITO) substrates were prepared according to the following process (Figure 2.1(a)). The ITO-coated glass substrates were purchased from Xinyan Technologies (below 20 Ω/sq). A wet etching process was used to pattern the purchased ITO glass: the substrate was masked with 3M Scotch tape and the ITO left uncovered by the mask was etched with an acid solution of hydrochloric acid (HCl) and zinc powder (Zn). 1 The tape mask was removed after etching, and the substrates were cleaned in sequential ultrasonic baths of detergent (Liqui-Nox Phosphate-Free Liquid Detergent, Alconox, Inc.) diluted 1% in deionized water, pure deionized water and 2-propanol (IPA) (30 min each). Nitrogen gas (N2) was used to dry the substrates after each bath.

Spin coating

During spin coating, the majority of the ink is flung off the side of the substrate. Airflow then dries the majority of the solvent, leaving a plasticized film, before the film fully dries to leave the useful molecules on the substrate. The rotation of the substrate at high speed means that the centripetal force, combined with the surface tension of the solution, pulls the liquid coating into an even covering. 5 During this time the solvent evaporates to leave the desired material on the substrate as a uniform layer. In this study, the spin coating technique was used to deposit PEDOT:PSS (HTL) and PCBM (ETL) under air or N2.

Dynamic dispense (spin casting)

In a dynamic dispense, the substrate is first set spinning and allowed to reach the desired spin speed before the solution is dispensed onto the center of the substrate. The centripetal force then rapidly pulls the solution from the middle of the substrate across the entire area before it dries.
In general, a dynamic dispense is preferred as it is a more controlled process that gives lower substrate-to-substrate variation. 6 This is because the solvent has less time to evaporate before the start of spinning, and the ramp speed and dispense time are less critical (as long as the substrate has been allowed to reach the desired rpm). A dynamic dispense also uses less ink in general, although this depends on the wetting properties of the surface. The disadvantage of a dynamic dispense is that it becomes increasingly difficult to obtain complete substrate coverage when using either low spin speeds below 1000 rpm or very viscous solutions. 7 This is because there is insufficient centrifugal force to pull the liquid across the surface, and the lower rotation speed also increases the chance that the ink will be dispensed before the substrate has completed a full rotation. As such, it is generally recommended to use a static dispense at 500 rpm or below, either technique being possible in the region between 500 and 1000 rpm. For the majority of spin coating above 1000 rpm, a dynamic dispense is normally used as standard unless there are special circumstances or difficulties. For a more controlled process, the perovskite film was deposited by the dynamic dispense process rather than by a static dispense in this study.

UV-Vis spectroscopy

In the UV-Vis spectrophotometer, the incident light is split into two equal-intensity beams by a half-mirrored device. One beam passes through the sample, consisting of the target film on a transparent substrate. The other beam, the reference, passes through a transparent substrate identical to that used for the sample. The intensities of these light beams are then measured by electronic detectors and compared. The intensity of the reference beam, which should have suffered little or no light absorption, is defined as I0. The UV region scanned is normally from 200 nm to 400 nm, and the visible portion from 400 to 800 nm.
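The two-beam comparison just described reduces to a short calculation: the absorbance is A = log10(I0/I), and the wavelength of the absorption edge can be converted to a photon energy via E(eV) = hc/λ ≈ 1239.84/λ(nm). The sketch below also includes a linear extrapolation of the edge to A = 0, of the kind used to estimate the optical band gap; the two edge points in the usage example are hypothetical values, not measured data.

```python
import math

def absorbance(I, I0):
    """Absorbance from sample (I) and reference (I0) beam intensities."""
    return math.log10(I0 / I)

def nm_to_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, using h*c = 1239.84 eV*nm."""
    return 1239.84 / wavelength_nm

def optical_gap(lam1, a1, lam2, a2):
    """E_opt from two points (lambda, A) on the linear absorption edge:
    extrapolate the edge to A = 0, then convert that wavelength to eV."""
    lam0 = lam1 - a1 * (lam2 - lam1) / (a2 - a1)
    return nm_to_ev(lam0)
```

As a point of reference, an absorption edge near 775 nm corresponds to a photon energy of about 1.6 eV, which is in the range commonly quoted for the MAPbI3 band gap.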
UV-Vis spectroscopy is based on the principle of electronic transitions in atoms or molecules: upon absorbing suitable energy from the incident light, an electron is excited from a lower energy state to a higher excited state. While interaction with infrared light causes molecules to undergo vibrational transitions, the shorter-wavelength, higher-energy radiation in the UV and visible range of the electromagnetic spectrum causes many atoms/molecules to undergo electronic transitions. If the sample compound does not absorb light of a given wavelength, I = I0. However, if the sample compound absorbs light, then I is less than I0, and this difference may be plotted as a function of wavelength. The photon energy is given by E = hν = hc/λ, where h is the Planck constant, ν the wave frequency and c the speed of light in vacuum. Experimentally, the optical band gap E_opt of the thin film is estimated by linear extrapolation of the absorption edge to A = 0 and subsequent conversion of the wavelength (nm) into an energy value versus vacuum (eV). In conclusion, E_opt can be determined from the absorbance spectra. 11 In this study, UV-Vis spectroscopy was used for calculating the E_opt value and for studying the ageing characteristics of the perovskite thin film.

Scanning electron microscopy (SEM)

Scanning electron microscopes (SEM) use a beam of highly energetic electrons to examine objects on a very fine scale. In a SEM, when the electron beam strikes a sample, a large number of signals are generated. Their analysis can yield information such as topography (the surface features of the object), composition (the elements and compounds the object is composed of, and their relative amounts) and crystallographic information (how the atoms are arranged in the object). 13 The combination of high magnification, large depth of focus, good resolution and ease of observation makes the SEM one of the most widely used instruments. Figure 2.7 shows a schematic illustration of the SEM measurement system.
Secondary electrons (SE), corresponding to the most intense emission due to electronic impact, are produced when an incident electron excites an electron in the sample and loses some of its energy in the process. 14 The excited electron moves towards the surface of the sample and, if it still has sufficient energy, escapes from the surface; it is then called a secondary electron. Secondary electrons are emitted with energies of less than 50 eV (a non-conductive material can be coated with a conductive layer to increase the secondary electron yield). Alternatively, when the electron beam strikes the sample, some of the electrons are scattered (deflected from their original path) by atoms in the specimen in an elastic fashion (no loss of energy). These essentially elastically scattered primary electrons (high-energy electrons) that rebound from the sample surface are called backscattered electrons (BSE). The mean free path of secondary electrons in many materials is around 10 Å. Thus, although electrons are generated throughout the region excited by the incident beam, only those that originate less than about 10 Å deep in the sample escape and are detected as secondary electrons. This production volume is very small compared with those associated with BSE and X-rays; therefore, the resolution using SE is better than with either of these and is effectively the same as the electron beam size. The shallow production depth of the detected secondary electrons makes them ideal for examining topography. The secondary electron yield depends on many factors and is generally higher for high-atomic-number targets and at higher angles of incidence. 15 BSE can be used to generate an image in the microscope that shows the different elements present in a sample. 16

Ellipsometry

In our study, we used ellipsometry analysis for investigating the crystallinity and ageing characteristics of the perovskite thin film.
Electrical characterization

Transmission line measurement (TLM)

Transmission line measurement (TLM) is a technique used in semiconductor physics and engineering to determine the contact resistance between a metal and a semiconductor. 20 The technique involves making a series of metal-semiconductor contacts separated by various distances. (Figure 2.

AFM operation is usually described in terms of three modes, according to the nature of the tip motion: contact mode, also called static mode (as opposed to the other two modes, which are called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating mode, or, after the detection mechanism, amplitude modulation AFM; and non-contact mode, or, again after the detection mechanism, frequency modulation AFM. In this study, AFM measurement was used for identifying the surface roughness and the ageing mechanisms of the perovskite thin film.

Perovskite solar cells device characterization

Current density-voltage (J-V) characterization

The J-V characteristics of perovskite solar cells were measured in a N 2 glove box using a source meter (Keithley 2635) in the dark and under illumination. An AM 1.5 solar simulator SC575PV with 100 mW/cm 2 was used as the light source. Key parameters of perovskite solar cells could be extracted using the methods mentioned in chapter 1.1.2. V OC and J SC were directly obtained from the X and Y intercepts of the J-V characteristics, respectively. The FF was calculated as follows:

FF = (V max × J max ) / (V OC × J SC ) (2.3)

where V max and J max refer to the values at which the photovoltaic device generates maximum power. Finally, the PCE can be written as:

PCE = P max / P in = (V OC × J SC × FF) / P in (2.4)

Furthermore, based on the equivalent circuit diagram model, the series resistance and shunt resistance could be calculated from the J-V characteristic.
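Equations (2.3) and (2.4) can be applied to a measured J-V curve as in the following sketch. The curve here is synthetic (an ideal single-diode photocurrent with assumed parameters), so only the extraction procedure, not the numbers, reflects the devices studied in this thesis.

```python
import numpy as np

# Minimal sketch: extract V_OC, J_SC, FF and PCE from a J-V curve
# exactly as in Eqs. (2.3)-(2.4).

def key_parameters(v, j, p_in=100.0):
    """v in V, j in mA/cm^2 (positive = photocurrent), p_in in mW/cm^2."""
    jsc = np.interp(0.0, v, j)          # J at V = 0 (Y intercept)
    voc = np.interp(0.0, -j, v)         # V where J crosses 0 (X intercept)
    p = v * j
    v_max, j_max = v[np.argmax(p)], j[np.argmax(p)]
    ff = (v_max * j_max) / (voc * jsc)  # Eq. (2.3)
    pce = voc * jsc * ff / p_in * 100.0 # Eq. (2.4), in %
    return voc, jsc, ff, pce

# Synthetic ideal-diode photocurrent: J = Jph - J0*(exp(V/Vt) - 1)
v = np.linspace(0.0, 1.1, 2000)
j = 20.0 - 1e-7 * (np.exp(v / 0.0259) - 1.0)

voc, jsc, ff, pce = key_parameters(v, j)
```

`np.interp` is valid here because `-j` increases monotonically with `v`; for noisy measured data, a local fit around the intercepts would be more robust.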
In the high forward voltage region (around V OC ), the slope of the J-V curve is dominated by the series resistance, while the slope around J SC reflects the shunt resistance.

Glow discharge optical emission spectrometry (GD-OES)

In 1968, Werner Grimm introduced a glow discharge tube as a light source for spectroscopic analyses investigating the chemical composition of metallic materials. 24,25 The so-called Grimm discharge tube is characterized by a special arrangement of the electrodes: the two electrodes of the DC current source consist of a cylindrical hollow anode and the sample as cathode, which seals the anode tightly (Figure 2.12). Since then, this technique and its applications have been continuously refined; an important development relevant for our work has been the introduction of pulsed RF sources. With pulsed RF, not only non-conductive specimens and layers can be measured but also fragile and heat-sensitive materials. [26] Today, GD-OES is one of the most precise methods of elemental analysis and layer thickness measurement. The glow discharge source is generally filled with argon gas under low pressure (0.5-10 hPa). As shown in figure 2.12 (which describes the simple historical configuration with a DC source), a high direct voltage (DC) is applied between the anode and the sample (cathode). Due to the DC voltage, electrons are released from the sample surface and accelerated towards the anode, gaining kinetic energy. Through inelastic collisions the electrons transfer their kinetic energy to argon atoms, which causes them to dissociate into argon cations and further electrons. This avalanche effect triggers an increase in the charge carrier density, making the insulating argon gas conductive. The resulting mixture of neutral argon atoms and free charge carriers (argon cations and electrons) is called a plasma. The argon cations are accelerated towards the sample surface because it is held at a high negative potential.
Striking the sample surface, the argon cations knock out some sample atoms. This process is referred to as sputtering. The sample surface is ablated in a plane-parallel manner. The knocked-out sample atoms diffuse into the plasma, where they collide with high-energy electrons. During these collisions, energy is transferred to the sample atoms, promoting them to excited energy states. Returning to the ground state, the atoms emit light with a characteristic wavelength spectrum. Passing through the entrance slit, the emitted light reaches a concave grating where it is dispersed into its spectral components. These components are registered by the detection system. The intensity of the lines is proportional to the concentration of the corresponding element in the plasma. In this study, GD-OES analysis was used for obtaining direct experimental evidence of ionic migration in the perovskite thin film, which is one of the key factors behind current-voltage hysteresis and poor stability.

Evaporated PSCs have been studied since March 2016. The PCE increased from 0% to 11.1% in 6 months, but with poor reproducibility: the devices show different performances even though the perovskite thin films are deposited at the same time. Finally, the 1-step spin-coating process was chosen in this thesis for the investigation of the perovskite solar cells.

Introduction

Inorganic-organic hybrid perovskite solar cells have attracted great attention due to their solution processability and high performance. The hybrid perovskite solar cell was initially demonstrated in liquid dye-sensitized solar cells (DSSCs). 1 Miyasaka and coworkers were the first to utilize perovskite (CH 3 NH 3 PbI 3 and CH 3 NH 3 PbBr 3 ) nanocrystals as absorbers in a DSSC structure, achieving an efficiency of 3.8% in 2009. 1 Later, in 2011, Park et al. obtained 6.5% by optimizing the processing. 6 However, these devices showed fast degradation due to the decomposition of the perovskite by the liquid electrolyte.
In 2012, Park and Gratzel et al. reported a solid state perovskite solar cell using a solid hole transport layer (Spiro-OMeTAD) to improve stability. 7 After that, several milestones in device performance have been achieved using the DSSC structure. 6-16 However, these mesoporous devices need a high temperature sintering step that increases the processing time and cost of cell production. It was found that methylammonium-based perovskites are characterized by large charge carrier diffusion lengths (around 100 nm for CH 3 NH 3 PbI 3 and around 1000 nm for CH 3 NH 3 PbI 3-x Cl x ). 17,18 Further studies demonstrated that perovskites exhibit ambipolar behavior, indicating that the perovskite materials themselves can transport both electrons and holes between the cell terminals. 17 All of these results indicated that a simple planar structure was feasible. The first successful demonstration of the planar structure can be traced back to the perovskite/fullerene structure reported by Guo, 19 showing a 3.9% efficiency. The breakthrough of the planar perovskite structure was obtained using dual-source vapor deposition, providing dense and high-quality perovskite films that achieved 15.4% efficiency. 20 Recently, the efficiency of the planar structure was pushed over 19% through perovskite film morphology control and interface engineering. 12 These results showed that the planar structure can achieve a device performance similar to that of the mesoporous structure. The planar structure can be divided into regular (n-i-p) and inverted (p-i-n) structures, depending on which selective contact is used on the bottom. The regular n-i-p structure has been extensively studied and derives from dye-sensitized solar cells with the mesoporous layer removed, while the p-i-n structure derives from the organic solar cell; several charge transport layers used in organic solar cells have been successfully transferred into perovskite solar cells.
11 The p-i-n inverted planar perovskite solar cells offer the advantages of high efficiency, lower temperature processing, flexibility and, furthermore, negligible J-V hysteresis effects. Due to these various advantages, only the inverted planar structure of PSCs was used in our study. In this chapter, we will focus on the performance development of two kinds of PSCs (2-steps dipping and 1-step spin-coating processed PSCs), including the optical and electrical characteristics of their perovskite thin films.

Solution processed perovskite solar cells (PSCs)

Dipping processed (2-steps) PSCs

Solution preparation for PSCs

In dipping processed (2-steps) PSCs, the perovskite solution and the [6,6]-phenyl-C 61 -butyric acid methyl ester (PC 60 BM) solution were prepared in advance.

Device fabrication

As explained in chapter 2, the substrates used in this study were prepared in two steps: a wet-etching patterning process and a gold deposition process (figure 3.2). We performed a UV-ozone treatment on the substrates just before the deposition processes. The layers, excluding the electrodes, were deposited layer by layer under N 2 by solution processes. The PEDOT:PSS used as hole transport layer (HTL) was deposited by spin coating. First, we filtered AI 4083 PEDOT:PSS using a 0.45 µm PVDF filter. Then we dispensed 35 µl of the filtered PEDOT:PSS solution using a pipette onto the ITO substrate spinning at 6000 rpm (so-called dynamic dispense), with the total spin time set to 40 s. Methanol was used for patterning the water-based PEDOT:PSS. The perovskite deposited on top of the gold was wiped off for measuring the current-voltage (J-V) characteristics of the device. The substrate was then placed on a hotplate at 120 °C for 20 minutes. This process creates a PEDOT:PSS film with a thickness of 50 nm. Complete drying of the PEDOT:PSS layer is important because of the poor water stability of the perovskite layer. The perovskite layer used as active layer was deposited by the 2-steps dipping process.
First, we used the spin-coating process for depositing the PbI 2 layer on top of the PEDOT:PSS layer. After dropping 60 µl using a pipette, the thickness of the PbI 2 layer was controlled by the revolutions per minute (rpm) condition. As for the PEDOT:PSS deposition process, the perovskite layer was patterned before the annealing process, using DMF solvent. Lastly, we annealed the PbI 2 layer at 70 °C for 30 minutes. For transforming the PbI 2 into the perovskite (CH 3 NH 3 PbI 3 ), the film was dipped into a methylammonium iodide (MAI) solution.

The spin-coating process was used for depositing the PC 60 BM layer (ETL) on top of the perovskite thin film. We used a 0.45 µm PTFE filter for filtering the PCBM solution just before depositing. We controlled the rpm conditions to determine the ideal thickness of the PC 60 BM layer for performing the roles of hole blocking and electron transport. This deposition is completed by an annealing process at 70 °C for 5 minutes. Aluminum (Al) was deposited as the electrode on top of the PC 60 BM layer by the thermal vacuum evaporation technique. The target thickness was 100 nm, and the electrode was patterned by a dedicated mask to keep the active area at 0.28 cm 2 . There was no encapsulation process in this study, since most of the device characteristics can be measured in the glove box (N 2 condition). Figure 3.2c is a photo of the full PSCs device just before measuring the device performances.

Characteristics of perovskite thin film

As discussed in the previous chapter, the dipping technique used for depositing the perovskite layer is a sensitive process. Therefore, the analysis of the perovskite thin film was essential for improving the cell performances and the reproducibility. In this study, we used optical (UV-Vis spectroscopy, SEM, XRD) and electrical (TLM, SSPG, J-V characteristics) techniques for analyzing the perovskite thin film. In general, the thickness of the active layer is one of the critical factors in the photovoltaic device performance.
The amount of light absorption and the number of electron-hole pairs (EHP) increase with a thicker active layer. However, we have to consider the carrier diffusion length to prevent recombination effects. In this study, the thickness of the thin films was measured with a depth profiler system. The film has to be scratched before measurement with this system. A sensitive tip scans the scratched area in contact with the surface during the analysis process. The height difference between the scratched film and the non-scratched film is defined as the thickness of the thin film. Figure 3.3 shows the result of the depth profiler measurement, which shows the perovskite thickness difference depending on the IPA dip-cleaning process. This process reduces both the thickness of the perovskite thin film, from 300 nm to 270 nm, and the surface roughness. Considering the thinness of the PCBM layer (~50 nm) that will be deposited on top of the perovskite film, this roughness control technique is essential for fabricating a state-of-the-art device. Considering the error range (~10 nm) of this measurement system, the variations induced by the IPA dip-cleaning are significant and meaningful. The light absorbance and transmittance of the perovskite thin film were studied using the UV-Vis spectroscopy system.

Considering figure 3.21 and figure 3.22, we found that a fast rpm and a high annealing temperature induce a large grain size in the perovskite thin film. However, the XRD result only changed when the grain size was controlled by the annealing temperature. The link between these XRD results and the J-V hysteresis will be discussed in chapter 5.

Solution deposition engineering of perovskite and electron transport layer (ETL)

From now on, the J-V performances of 1-step spin-coated (spin-casted) devices will be discussed.
After reaching the reproducibility limit of the 2-steps dipping processed PSCs device, the 1-step spin-coating (spin-casting) technique was studied for depositing the perovskite thin film. Figure 3.23 and table 3.4 show the J-V performances of the first PSCs device that we fabricated using the 1-step spin-coating technique for depositing the perovskite layer. Extra ETL layers, namely C60 (10 nm) and BCP (8 nm), were deposited onto the PCBM layer using the thermal evaporation process to avoid short circuits. The thermal evaporation technique is less sensitive, and better suited than the spin-coating technique, for depositing onto a rough film. Simply by adding 10 nm of C60 and 8 nm of BCP, all the J-V performances increased significantly. Among them, there was a remarkable increase in V OC together with a large increase in R SH . Short circuits generated by pinholes cause the R SH to drop because of their leakage current. Therefore, preventing short circuits by depositing the C60 and BCP layers induced the R SH increase. The R SH is strongly linked with the V OC , as explained in chapter 2. In conclusion, the PCE jumped significantly from 0.3% to 10%. There were no significant differences in J-V performance for C60 thicknesses between 10 nm and 60 nm, as shown below.

Conclusion

The optimized processes of the solution processed PSCs of two different types (2-steps dipping and 1-step spin-casting techniques) were discussed in this chapter using various thin film analysis techniques. The electrical transport characteristics of the perovskite thin films were analyzed using TLM and SSPG measurement systems. The optical characteristics of the perovskite thin film were studied using SEM, UV-Vis spectroscopy and XRD measurement techniques in order to understand the J-V performance and optimize the fabrication conditions.
With the 2-steps dipping process it was difficult to achieve good reproducibility by hand because of its various critical experimental conditions, such as the dipping speed, the dipping angle and the pulling speed. Achieving good device performance or good reliability was impossible with poor reproducibility. We therefore switched the perovskite deposition technique from the 2-steps dipping process to the 1-step spin-coating (spin-casting) process in order to achieve good reproducibility. The solution dropping moment (spin coating → spin casting) and the perovskite thin film patterning position were controlled for outstanding reproducibility of the device J-V performance. During the optimization study, we identified the seriousness of DMF gas damage in the glove box. Consequently, the PCE of the PSCs device reached around 10% with settled reproducibility. Further studies were carried out with these 1-step spin-casting processed PSCs to investigate the ionic migration, the J-V hysteresis and the ageing characteristics. The detailed results will be discussed in the following chapters (chapters 4, 5 and 6).

Chapter 4. Ionic migration in CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells using GD-OES analysis

Summary

The ionic migration in the perovskite thin film is reported as a key factor for explaining the current-voltage (J-V) hysteresis and ageing characteristics. This chapter shows directly the migration of the halogen components (I - and Cl - ) of the CH 3 NH 3 PbI 3-x Cl x perovskite film under an applied bias using glow discharge optical emission spectrometry (GD-OES). Furthermore, no migration of lead and nitrogen ions has been observed. The ratio of fixed to mobile iodide ions is deduced from the evolution of the GD-OES profile lines as a function of the applied bias. The average length of iodide and chloride ion migration has been deduced from the experimental results.
Introduction

Charged ions, as well as charge carriers, are mobile under an applied electric field in hybrid perovskite solar cells (PSCs). Although the phenomenon of ion migration in halide-based perovskite materials has been reported for over 30 years, 1 ion migration in perovskites did not draw considerable attention until the broad observation of the current-voltage (J-V) hysteresis problem in PSCs. The J-V hysteresis behavior of PSCs was first reported by Snaith et al. 2 and Hoke et al. 3 for the mesoporous structure in 2013, and by Xiao et al. 4 for the planar heterojunction structure in 2014. Various mechanisms have been proposed to explain the origins of the J-V hysteresis, such as filaments, a giant dielectric constant, an imbalance between hole and electron mobility, trapping effects, ferroelectricity, and the ionic migration effect. [2][3][4][5][6] Among them, we intensively studied the ionic migration to explain the J-V hysteresis in PSCs devices. The possible mobile ions in the MAPbI 3 crystal include MA + ions, Pb 2+ ions, I - ions [7][8][9] , and other impurities such as hydrogen-related impurities (H + and H - ) 10 . Considering the activation energy of ion migration and the distance to their nearest neighbors, it is entirely reasonable to expect that both the MA + ions and the I - ions are mobile in the MAPbI 3 films, while the Pb 2+ ions can hardly move. 7,11-13 Furthermore, the I - ions are the most likely (majority) mobile ions in MAPbI 3 . However, while the migration of MA + ions has been firmly proved, 14 more direct experimental evidence is needed to find out whether the I - ions are mobile, and the I - ion migration under the operation or measurement conditions of perovskite devices at room temperature had not yet been revealed experimentally. Here, we elucidate the mobile ion migrations in CH 3 NH 3 PbI 3-X Cl X based cells by direct measurement using glow discharge optical emission spectrometry (GD-OES).
In this study, we show experimentally the migration of ions in hybrid perovskite CH 3 NH 3 PbI 3-x Cl x based solar cells as a function of an applied bias using glow discharge optical emission spectrometry (GD-OES) (Figure 4.1a). Pulsed RF GD-OES is a destructive technique, so only one measurement per sample is possible. This is why it was crucially important first to set up a stable process giving the possibility to generate a large number of samples in order to statistically validate the obtained results. The size of the Ag top contact in our cells was slightly larger than the GD anode, therefore a direct analysis was possible. Figure 4.1a (right) shows the sample after the GD measurement, with the crater visible in the Ag contact. GD-OES allows the direct determination of major and trace elements. [17][18][19] The ratio of fixed ions versus mobile ions is deduced by applying an electrical bias on the device. These results show directly that the halogen ions (I - and Cl - ) move through the device while the lead and nitrogen ions are immobile. This migration of halogen ions influences the electrical characteristics of PSCs devices and may be responsible for the J-V hysteresis.

Table 1 shows a good reproducibility of the devices, and we can thus consider that all the samples used for the GD-OES experiments have the same characteristics. As shown in table 4.1, the power conversion efficiency (PCE) under 1 sun equivalent illumination is 12.6% for the best cell (11.6% on average) with an active area of 0.28 cm 2 . Figure 4.1b and table 1 present the J-V characteristics of the best cell scanned in the forward and in the reverse directions. The hysteresis effect is small (less than 2.5%) in our case. This is consistent with the results in the literature [20][21][22] reporting that the p-i-n architecture does not show hysteresis while the n-i-p architecture shows significant hysteresis. The GD-OES profile line of iodide shows symmetry when the perovskite film was annealed at 100 °C.
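A simple way to quantify the small hysteresis mentioned above is a hysteresis index comparing reverse- and forward-scan efficiencies. This is an illustrative sketch only: the index definition is a common convention rather than the thesis's own metric, and the PCE values are hypothetical, chosen to mimic a < 2.5% effect on a 12.6% cell.

```python
# Illustrative only: quantify J-V hysteresis with a simple index
# HI = (PCE_reverse - PCE_forward) / PCE_reverse.
# The numbers below are hypothetical example values.

def hysteresis_index(pce_forward, pce_reverse):
    """Fractional loss of the forward scan relative to the reverse scan."""
    return (pce_reverse - pce_forward) / pce_reverse

hi = hysteresis_index(pce_forward=12.3, pce_reverse=12.6)
print(f"HI = {hi:.3f}")  # -> HI = 0.024, i.e. below 2.5 %
```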
There is a connection between the profile lines of I and Cl, which are the anions in the perovskite thin film. As reported, Cl is eventually totally removed from the perovskite thin film. In our study, Cl is totally removed when the perovskite thin film is annealed at 100 °C. However, Cl remains when the annealing temperature is 80 °C. Due to the Cl gas evaporation process, Cl atoms start to disappear from the top surface (the interface between PCBM and perovskite). On this account, the profile line of Cl is not symmetric, and it has a decisive effect on the I profile line since both carry negative charges.

PSCs device characteristics

GD-OES results under applied bias

Iodide and chloride ion migration

The GD-OES profile lines versus sputtering time were recorded under different applied biases (-2. Only the initial iodide peak is present without applied voltage. However, we observe the appearance of second peaks when a bias is applied to the device. The second peak is at 47 s of sputtering time under positive bias (+1.5 V), and at 37 s of sputtering time under negative bias (-1.5 V). We attribute these second peaks to iodide ionic migration due to the applied bias. These second peaks begin to shrink after removing the applied voltage, and disappear within 2 minutes when the device was under positive bias and within 3 minutes when the device was under negative bias. This signifies the reversibility (slow reaction) of the iodide ionic migration. In addition, we observed the same phenomena in the GD-OES profile lines of the chloride ions in the perovskite film (blue solid lines in Fig. 4.15a and 4.15b, respectively). The initial peak is at 50 s of sputtering time before the voltage is applied. However, we observe a movement of the peaks when a bias is applied to the device. The shifted peaks are at 54 s and 42 s of sputtering time under positive bias (+1.5 V) and negative bias (-1.5 V), respectively. We attribute this peak movement to chloride ionic migration due to the applied bias.
These shifted peaks return to the initial position (sputtering time of 50 s) within 1 minute, which is shorter than for iodide. This also indicates the reversibility of the chloride ionic migration. These observations are consistent with the results in the literature.

Conclusion

In conclusion, this GD-OES study has provided the direct experimental evidence of the ionic (I and Cl) migration in the CH 3 NH 3 PbI 3-x Cl x based perovskite films under applied bias. We show that lead and MA ions do not migrate under the applied bias on the 2-minute time scale (Figure 4.16). Considering the short voltage scanning time (a few tens of seconds) in J-V measurements, the initial applied voltage is one of the critical conditions for halide ionic migration. A detailed discussion will follow in chapter 5. Based on GD-OES, this study provides a way of directly observing ionic movements in hybrid perovskite films. It is a step forward in the quest to elucidate electrical phenomena usually observed in perovskite-based solar cells, such as J-V hysteresis, external electric field screening or interfacial effects with electrodes.

fast processes depending only on illumination (not on the voltage scan), any difference in initial collection must be influenced by the time of transport and/or the time of transfer at the interfaces. Carrier collection depends on the type of conductivity existing in the perovskite and in the other layers, and on the connectivity at their interfaces. Therefore, the diversity of device structures and fabrication methods, together with changes in measurement conditions, results in a wide variation of the hysteretic behavior 4 . As a result, the issue of hysteresis becomes too complex to be understood completely. [5][6][7] The reported parameters possibly affecting hysteresis are the device structure [8][9][10][11][12][13][14] , process parameters [15][16][17] , and measurement and prior-measurement conditions [18][19][20][21][22][23][24][25] .
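The peak shifts quoted above can be turned into an average migration depth once the sputter rate is known. The sketch below assumes a constant, hypothetical sputter rate of 5 nm/s (in practice the rate would be calibrated against the known layer thicknesses); the peak times are those reported for the chloride profile.

```python
# Sketch: deduce an average migration depth from the shift of a
# GD-OES peak on the sputtering-time axis. The sputter rate below is
# a hypothetical, assumed-constant calibration value.

def migration_depth(t_peak_bias_s, t_peak_rest_s, sputter_rate_nm_per_s):
    """Positive result = peak shifted deeper into the stack."""
    return (t_peak_bias_s - t_peak_rest_s) * sputter_rate_nm_per_s

RATE = 5.0  # nm/s, assumed calibration value (hypothetical)

# Chloride peak: 50 s at rest, 54 s under +1.5 V, 42 s under -1.5 V
d_cl_pos = migration_depth(54.0, 50.0, RATE)  # -> 20.0 nm (deeper)
d_cl_neg = migration_depth(42.0, 50.0, RATE)  # -> -40.0 nm (toward top)
print(d_cl_pos, d_cl_neg)
```

The sign convention makes the direction of the shift explicit: under positive bias the chloride peak moves deeper into the stack, under negative bias it moves toward the top contact.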
In recent years, a lot of effort has been made to understand the cause of the J-V hysteresis in PSCs, and different mechanisms have been proposed. Only a few approaches have been successful in reducing or eliminating the hysteresis in the devices. The anomalous hysteresis in the J-V characteristics of PSCs could be due to ferroelectric polarization 14,19,[25][26][27] , ion migration 13,14 , carrier dynamics at the different interfaces, or deep trap states in the perovskite layer 14 . Although, at present, there is no single universally accepted mechanism that can explain the phenomenon coherently, the studies done so far have certainly provided deeper insights into the topic. What makes the problem complex is that several factors, such as the device structure (planar or mesoporous), the perovskite film characteristics, the electron collecting layer properties, etc., can possibly influence the J-V curves at the same time. The lack of complete understanding and the inadequacy of direct evidence demand further investigation. Under illumination, not only the original film characteristics but also the effect of photo-generated carriers have to be considered when analyzing the J-V curve. In this work, we focused on the J-V hysteresis under dark conditions to take into account the influence of the original film characteristics only. We explain here how the halide ionic migration we reported in chapter 4 influences the electrical characteristics of PSC devices and may be responsible for the J-V hysteresis. In conventional semiconductors like Si, Ge, GaAs, or CdTe, the electrical conductivity concerns the electron and hole populations. When describing a semiconductor in thermal equilibrium, the Fermi level (chemical potential) of each carrier type is everywhere constant throughout the entire crystal, even across a p-n junction.
From this requirement, for which the electron and hole current densities cancel, a relation can be derived between the diffusion constant or diffusivity (D n and D p ) and the mobility (µ n and µ p ) of the electrons and the holes, respectively. These relations are called the Einstein relations:

D n = (kT/q) µ n and D p = (kT/q) µ p

Electron mobility and hole mobility depend upon temperature and dopant concentrations through lattice scattering and impurity scattering. In ionic crystals, comprising the alkali halides and the metal oxides, the electrically charged particles are ions (cations and anions) and electrons. The ionic and electronic charge carriers are exposed to chemical and electrical potential gradients, which combine into electrochemical potential gradients. The hybrid perovskite (for instance CH 3 NH 3 PbI 3 ) may be considered as an inhomogeneous mixture of a conventional semiconductor and an ionic crystal in which ion transport is essentially associated with the migration of vacancies (see chapter 4). The mobility of electrons and holes is several orders of magnitude larger than that of the ions. When the perovskite is working under dark conditions, the intrinsic concentration of electrons and holes, defined by the band gap value (1.58 eV), is smaller than the concentration of ionic carrier defects (around 10 17 cm -3 ). This point will be detailed below in section 5.3.3. Nevertheless, the hybrid perovskite may still be considered as an electronic conductor, depending upon temperature. In thermal equilibrium, the electrochemical potential of each carrier type throughout the entire crystal of the perovskite must be kept constant, even across a heterojunction. From this requirement, more complex relations can be derived between the diffusivity and the mobility of the particles. These are the Nernst-Einstein relations that intervene in the migration of species in crystalline solids when the species are subjected to a force.
Considering the electrical force (F i = Z i eE, where Z i is the charge number of the ion, e is the electronic charge, and E is the electric field), we can define the electrical mobility as the velocity per unit electric field: u i = V i /E = Z i eD i /(kT), where V i is the velocity and k is the Boltzmann constant. The electrical conductivity S i is defined as the charge flux per unit electric field, with units of S/m. It can be expressed as S i = C i Z i e u i , where C i is the concentration. For ionic species, we can apply the Nernst-Einstein equation:

S i = C i Z i 2 e 2 D i /(kT)

In conclusion, the perovskite thickness variation does not influence its electrical transport characteristics. The perovskite thickness was varied between 450 nm (Fig. 5.5a and 5d), 390 nm (Fig. 5.5b and 5e), and 350 nm (Fig. 5.5c and 5f) via the rpm conditions in the spin-casting process. The J-V performance is always higher when the applied voltage is scanned in the reverse direction (+1 V → -1.5 V) than when it is scanned in the forward direction (-1.5 V → +1 V). As the perovskite thickness increased, the J-V hysteresis became stronger both in light and dark conditions. We can observe an increase of the open circuit voltage (V OC ) and fill factor (FF), but no difference in the short circuit current density (J SC ) (Table 5.1). Reducing the thickness of the perovskite films decreases V OC because of the short circuit current density reduction. Moreover, reducing the perovskite thickness also decreases the V OC difference between forward and reverse bias. In dark conditions, the J-V hysteresis tendency is observable in a clearer way than under illumination. This is because the effect of the ion migration is less important, compared with the applied electric field, in a thinner perovskite layer for which the total number of point defects is decreased. As shown in Fig. 5.12c, V OC_dark is fixed at zero when the voltage scanning direction is forward (initial voltage negative: -3, -2, -1.5, -1 and -0.5 V).
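As a numerical illustration of the Nernst-Einstein relation S i = C i Z i 2 e 2 D i /(kT): taking the ionic defect concentration of about 10 17 cm -3 quoted above and a hypothetical iodide-vacancy diffusivity (the value below is an assumption, not a measured quantity), the ionic conductivity at room temperature can be estimated.

```python
import math

# Numerical sketch of the Nernst-Einstein relation
# S_i = C_i * (Z_i * e)^2 * D_i / (k * T), in SI units.

K_B = 1.380649e-23   # Boltzmann constant, J/K
E_CH = 1.602176634e-19  # elementary charge, C

def ionic_conductivity(c_m3, z, d_m2_s, t_k):
    """Ionic conductivity in S/m."""
    return c_m3 * (z * E_CH) ** 2 * d_m2_s / (K_B * t_k)

C = 1e17 * 1e6   # defect concentration: 1e17 cm^-3 -> m^-3
D = 1e-16        # m^2/s, hypothetical iodide-vacancy diffusivity
sigma = ionic_conductivity(C, z=1, d_m2_s=D, t_k=300.0)
print(f"{sigma:.2e} S/m")  # ~6.2e-11 S/m
```

The point of the exercise is the scaling: with a fixed defect concentration, the ionic conductivity is directly proportional to the diffusivity and inversely proportional to temperature through the kT denominator.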
However, V 0_dark increases to 0.16 V when the initial voltage is positive (voltage scanning direction backward: 1, 1.5, 2, 2.5 and 3 V). This shift of V 0_dark can be explained using our model discussed above in Fig. 5. We have already experimentally evidenced the migration of halide ions (iodide and chloride) under an applied bias in these 'hysteresis free' p-i-n structures, using glow discharge optical emission spectrometry (GD-OES). When the bias is negative, we found, as expected, that the mobile iodide ions move toward the PCBM side, and when the bias is positive, the mobile iodide ions are shifted toward the PEDOT:PSS side. The influence of the iodide ion migration on the energy bands of the perovskite thin film device is described in fig. 5.13. Considering the intrinsic character of the perovskite layer, the increase and decrease of the iodide ion concentration close to the interfaces can be viewed as equivalent to an N a - depleted region and an N d + depleted region, respectively. The resulting band bending within the perovskite thin film (due to the ionic migration) directly impacts carrier injection as well as the leakage current. We suggest this model can explain the current-voltage (J-V) hysteresis observed under dark conditions. The so-called leakage current density was measured versus the initial applied voltage. We chose -1 V as the reference for the leakage current density. The leakage current density is almost fixed at 0.007 mA/cm 2 when the initial voltage is positive. A slight increase of the leakage current is induced by the electrons created on the PEDOT:PSS side. When the initial voltage is negative, the leakage current density is this time increased to 0.012 mA/cm 2 .
The tilted band energy under a negative initial applied voltage induces an increase of the drifted leakage current potential, as shown in Fig. 5.13b.

5.3.3. J-V hysteresis versus measuring temperature

To conclude our study of the electrical transport characteristics of the hybrid perovskite thin film, we analyzed current-voltage (J-V) measurements at low temperature (135-370 K) using a photovoltaic (PV) device and a transmission line measurement (TLM) device. Figures 5.14a and 5.14b show a PV device and a TLM device on the sample stage for measuring J-V at low temperature. The sample stage is located in a vacuum chamber. There are 4 mobile tips for measuring J-V at various positions while keeping vacuum conditions (10^-4 to 10^-2 Torr). All the J-V performances were measured in dark conditions, in order to study the electrical characteristics without photo-generated carriers. The transition temperature to ion-migration-dominated conduction is 264 K, which agrees well with the value that we identified as the ionic freezing temperature measured with the PV device (Figure 5.17), and with the following article 28.

5.4. Conclusion

In this chapter, we studied the J-V hysteresis versus the structure variation and the J-V measurement conditions. The perovskite layer thickness and the type of cathode (Al or Ag) are considered as the parameters of the structure variation. The voltage scanning rate, the initial applied voltage, and the measuring temperature are considered as the parameters of the measurement conditions for understanding the J-V hysteresis. In particular, the dark J-V curve study, together with complementary GD-OES measurements, provides a deeper understanding of the relation between halide (iodide and chloride) migration and the J-V performance in CH3NH3PbI3-xClx based PSCs. We verified that the halide migration is slow (1-3 minutes) and reversible.
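Activation energies such as those quoted in this work are typically obtained from an Arrhenius analysis of the temperature-dependent conductivity, sigma(T) = sigma_0 * exp(-E_a / kT), fitted as ln(sigma) versus 1/T. A minimal sketch with synthetic data (generated, purely for illustration, with E_a = 0.25 eV, close to the ionic value reported here):

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_activation_energy(temps_k, conductivities):
    """Least-squares fit of ln(sigma) versus 1/T; returns E_a in eV."""
    x = [1.0 / t for t in temps_k]
    y = [math.log(s) for s in conductivities]
    n = len(x)
    xm = sum(x) / n
    ym = sum(y) / n
    slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
            sum((xi - xm) ** 2 for xi in x)
    return -slope * K_B_EV  # slope = -E_a / k_B

# Synthetic, perfectly Arrhenius-like data with E_a = 0.25 eV
ea_true = 0.25
temps = [260.0, 280.0, 300.0, 320.0, 340.0]
sigma = [1e-6 * math.exp(-ea_true / (K_B_EV * t)) for t in temps]
ea_fit = arrhenius_activation_energy(temps, sigma)
```

On real data, a kink in the Arrhenius plot near the electronic-to-ionic transition temperature would show up as two distinct slopes, one per conduction regime.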
Even with a 'quasi-hysteresis-free' structure (p-i-n), we were able to evidence a J-V hysteresis under dark conditions, versus the initial applied voltage, the voltage scanning direction, and the measuring temperature. The effect of halide migration on the J-V performance is more visible in the absence of photo-generated carriers. This is because the ion-migration-related phenomena, including photocurrent hysteresis, switchable photovoltaics, the photo-induced poling effect, light-induced phase separation, the giant dielectric constant, and self-healing, are unlikely to occur with excess carriers. The V_0_dark value shifts only under the reverse scanning direction, owing to the electron barrier created by the halide migration at the interfaces. The leakage current density under the forward scanning direction is always higher than that under the backward scanning direction. The maximum leakage current density is obtained at 343 K, which is consistent with the phase transition temperature. The temperature of 263 K, the transition temperature from electronic- to ion-migration-dominated conduction, is the point at which the J-V hysteresis appears in dark conditions. The activation energy is 0.253 eV for ionic conduction and 0.112 eV for electronic conduction. On the basis of GD-OES, this study gives a framework for directly observing ionic movement in hybrid perovskite films, and the dark J-V curve study provides a step forward in the quest of elucidating a link between halide migration and J-V hysteresis performance. This behaviour has major repercussions for understanding PSC device performance and for the design of future architectures.

The stability of PSCs also depends on that of the other layers of the solar cell stack. For instance, the organic hole transporting material (HTM) is unstable when in contact with water.
This can be partially limited by proper device encapsulation [12-14], by using buffer layers between the perovskite and the HTM [15], or by a moisture-blocking HTM [16] such as NiOx, delivering in this case up to 1,000 h stability at room temperature [17]. However, this approach increases the device complexity and the cost of materials and processing. It is also worth mentioning that most of the device stability measurements reported in the literature are often done under arbitrary conditions far from the required standards [18], such as not being performed under continuous light illumination [17], being measured at an undefined temperature, or leaving the device under uncontrolled light and humidity conditions [19]. This makes a proper comparison among the different strategies challenging.

6.2. Thin film analysis of CH3NH3PbI3-xClx based perovskite

In this study, optical and electrical thin-film analysis techniques, together with surface analysis techniques, are used to investigate the ageing mechanisms of the CH3NH3PbI3-xClx based perovskite thin film. UV-Vis spectroscopy and XRD are used to investigate the energy band gap and the crystallinity variation during the ageing process, respectively. TLM is used to follow the variation of the contact resistance (R_C) and the sheet resistance (R_Sheet). Finally, AFM is used to analyze the surface variation versus the ageing effects. This indicates that the MAI is removed as a gas and that the PbI2 remains in the perovskite thin film.

6.2.1. Optical thin film analysis

The variation observed at the surface is considered as the proof of the ageing effect. This indicates that a significant variation occurred at the surface of the perovskite film. Figure 6.7 shows the SEM image of the fresh perovskite film. We can observe that the surface exhibits a dense-grained, uniform morphology with grain sizes in the range of 100-700 nm. The entire film is composed of a homogeneous, well-crystallized perovskite layer.
Among the AFM topographies (Figures 6.6a, 6.6c, and 6.6e), the observed grain size becomes, through the ageing process, similar to that of the fresh sample measured by SEM. The grain size observed in Figure 6.6e is around 500-700 nm. This means that the grain bulk remains stable during the ageing process. In summary, we concluded that the interface variation is more critical than the bulk variation in the perovskite thin film during the first week of ageing. The following three points can be drawn. First, we observed that the absorbance variation is significant only in the low-wavelength part (between 440 nm and 520 nm) of the UV-Vis spectroscopy results. Second, we observed that the contact resistance variation is more significant than the sheet resistance variation in the TLM results. Third, the RMS values in the AFM results show remarkable differences in the first week of perovskite ageing. This explains why we observe a more dynamic variation of J_SC and R_S than of V_OC and R_Sh during the ageing process. Of course, there are various processes and reasons for the poor stability of PSCs; however, the iodide ionic diffusion must be one of them. In particular, on the basis of GD-OES, this study gives a framework for directly observing the iodide ionic diffusion in hybrid perovskite films, and it provides a step forward in the quest of elucidating a link between the interface variation and the J-V ageing performance. This behavior will be a useful guideline for understanding PSC device ageing performance and for the design of future stable PSC devices.

Chapter 7. Conclusion and Outlook

In this thesis, results on physics-based thin-film analyses of CH3NH3PbI3-xClx based perovskite thin films and solar cells have been presented. The current-voltage (J-V) hysteresis and the causes of the ageing process have been investigated with multiple approaches. Here, the major findings and related conclusions are summarized, and some ideas for further work are suggested.
First, we optimized the fabrication processes of the perovskite solar cells. The perovskite thin-film deposition is a sensitive process and is critical for the cell performance. Among the various materials and deposition techniques, we decided to study the CH3NH3PbI3-xClx based perovskite thin film deposited by a 1-step spin-casting process. Finally, the power conversion efficiency reached 12.7%, indicating a state-of-the-art device considering the structure and the active area (0.28 cm²). Our optimized deposition technique is very simple, so even those who are new to this method can produce perovskite solar cells with efficiencies of more than 9%.

Second, the GD-OES analysis technique was applied for the first time to obtain direct experimental evidence of the ionic (I and Cl) migration in CH3NH3PbI3-xClx based perovskite films under an applied bias. We verified that the lead and MA ions do not migrate under the applied bias. We found that the ratio of fixed to mobile iodine saturates at 35% and that the average length of iodine migration is around 120 nm. In addition, we observed that the halide ionic migration is a reversible and slow reaction, both under positive and under negative applied bias. It takes 1 min (chloride ions) and 3 min (iodide ions) for the ions to come back from the migrated position after stopping the applied negative voltage (-1.5 V). On the other hand, it takes 1 min (chloride ions) and 2 min (iodide ions) after applying a positive voltage (+1.5 V). Based on GD-OES, this study gives a way to directly observe ionic movements in hybrid perovskite films. It makes a step forward in the quest of elucidating electrical phenomena usually observed in perovskite-based solar cells, like J-V hysteresis, external electric field screening, or interfacial effects with the electrode.

Third, the J-V hysteresis, which is a special characteristic of perovskite solar cells, has been studied in this work versus the structure variation and the J-V measurement conditions.
The perovskite layer thickness and the type of cathode (Al or Ag) are considered as the parameters of the structure variation. The voltage scanning rate, the initial applied voltage, and the measuring temperature are considered as the parameters of the measurement conditions. In particular, the dark J-V curve study, together with complementary GD-OES measurements, provides a deeper understanding of the relation between halide (iodide and chloride) migration and the J-V performance. The effect of halide migration on the J-V performance is more visible in the absence of photo-generated carriers. The V_0_dark value shifts only under the reverse scanning direction, due to the electron barrier created by the halide migration. The leakage current density under the forward scanning direction is always higher than that under the backward scanning direction. In addition, we studied the J-V performance at low temperature with PSC and TLM devices to check the J-V hysteresis versus the measuring temperature. The maximum leakage current density is obtained at 343 K, which is consistent with the phase transition temperature. The temperature of 263 K, the transition temperature from electronic- to ion-migration-dominated conduction, is the point at which the J-V hysteresis appears in dark conditions. The activation energy is 0.253 eV for ionic conduction and 0.112 eV for electronic conduction. On the basis of the dark J-V analysis, this study provides a step forward in the quest of elucidating a link between halide migration and J-V hysteresis performance. This behavior has major repercussions for understanding PSC device performance and for the design of future architectures. In addition, studying the effect of the electron or hole affinity difference at both interfaces, between the HTL and the perovskite layer and between the ETL and the perovskite layer, on the J-V hysteresis could be an informative topic for future work.
Finally, we studied the ageing process of the CH3NH3PbI3-xClx based perovskite thin film with optical (XRD, UV-Vis spectroscopy, and AFM) and electrical (TLM, J-V performance) thin-film analysis techniques. We could conclude that the perovskite interface variation is more critical than the variation in the bulk. The GD-OES analysis gave direct experimental evidence of iodide ionic diffusion toward the silver electrode during the ageing process. Finally, we understood why we observe a more dynamic variation of J_SC and R_S than of V_OC and R_Sh during the ageing process. Of course, there are various processes and reasons for the poor stability of PSCs. The iodide ionic diffusion, which we

Chapter 3. Development of cells performance by studying film characteristics
Summary
3.1. Introduction
3.2. Solution processed perovskite solar cells (PSCs)
3.2.1. Dipping processed (2-steps) PSCs
3.2.2. Spin-casting processed (1-Step) PSCs
3.3. Conclusion
Reference
Chapter 4. Ionic migration in CH3NH3PbI3-xClx based perovskite solar cells using GD-OES analysis
Summary
4.1. Introduction
4.2. PSCs device characteristics
4.3. GDOES optimization process for perovskite thin film
4.4. GD-OES analysis without applied bias
4.5. GD-OES result under applied bias
4.5.1. Iodide and chloride ion migration
4.5.2. Lead and MA ion migration
4.6. Conclusion
Reference
Chapter 5. Hysteresis characteristics in CH3NH3PbI3-xClx based inverted solar cells
Summary
5.1. Introduction
5.2. J-V Hysteresis depending on device structure
5.2.1. J-V Hysteresis versus perovskite thickness and Al as cathode
5.2.2. J-V Hysteresis versus perovskite thickness and Ag as cathode
5.3. J-V Hysteresis depending on measurement conditions
5.3.1. J-V hysteresis versus voltage scanning rate
5.3.2. J-V hysteresis versus initial applied voltage
5.3.3. J-V hysteresis versus measuring temperature
5.4. Conclusion
Reference
Chapter 6. Ageing study in CH3NH3PbI3-xClx based inverted perovskite solar cells
Summary
6.1. Introduction
6.2. Thin film analysis of CH3NH3PbI3-xClx based perovskite
6.2.1. Optical thin film analysis
6.2.2. Electrical thin film analysis
6.2.3. Surface analysis
6.3. J-V characteristics of CH3NH3PbI3-xClx based PSCs
6.4. Ionic diffusion
6.5. Conclusion
Reference
Chapter 7. Conclusion and outlook

Figure 1.2 shows the summary of the best research cell efficiencies from different types of photovoltaic devices throughout the timeline. The first practical photovoltaic devices were demonstrated in the 1950s. Research and development of photovoltaics received its first major boost from the space industry in the 1970s, which required a power supply separate from "grid" power for satellite applications. There was also the oil crisis in the 1970s, which focused world attention on the desirability of alternative energy sources for terrestrial use; this in turn promoted the investigation of photovoltaics as a means of generating terrestrial power. In the 1980s, research into silicon solar cells paid off, and solar cells began to increase their efficiency. In 1985, silicon solar cells achieved the milestone of 20% efficiency. Over the next decade, the photovoltaic industry experienced steady growth rates of between 15% and 20%, largely promoted by the remote power supply market. Furthermore, researchers began studying various solar cells, such as dye-sensitized cells in 1991, organic photovoltaic devices (OPV) in 2001, and quantum dot cells in 2010.

The open-circuit voltage occurs at zero current. It corresponds to the amount of forward bias on the solar cell due to the bias of the solar cell junction with the light-generated current. The open-circuit voltage is shown in Figure 1.3b. The equation for V_OC is found by setting the net current equal to zero in the solar cell equation, to give V_OC = (nkT/q)·ln(I_L/I_0 + 1), where n is the ideality factor, I_L is the light-generated current, and I_0 is the dark saturation current.

Figure 1.4. J-V curve showing 2 different resistances.

Figure 1.16.
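The photovoltaic figures of merit discussed above (J_SC, V_OC, FF, PCE) can be extracted programmatically from a measured J-V sweep. A minimal sketch, assuming the convention of positive photocurrent and 1 sun = 100 mW/cm², with a synthetic linear J-V curve used only for illustration:

```python
def jv_parameters(voltage, current, p_in=100.0):
    """Extract (J_SC, V_OC, FF %, PCE %) from a light J-V sweep.
    voltage in V (ascending grid that brackets both V=0 and J=0),
    current in mA/cm^2 (photocurrent positive), p_in in mW/cm^2."""
    jsc = voc = None
    for i in range(len(voltage) - 1):
        v0, v1 = voltage[i], voltage[i + 1]
        j0, j1 = current[i], current[i + 1]
        if v0 <= 0.0 <= v1 and v1 > v0:   # linear interpolation of J at V = 0
            jsc = j0 + (j1 - j0) * (0.0 - v0) / (v1 - v0)
        if j0 >= 0.0 >= j1 and j0 != j1:  # linear interpolation of V at J = 0
            voc = v0 + (0.0 - j0) * (v1 - v0) / (j1 - j0)
    p_max = max(v * j for v, j in zip(voltage, current))  # mW/cm^2
    ff = 100.0 * p_max / (voc * jsc)
    pce = 100.0 * p_max / p_in
    return jsc, voc, ff, pce

# Synthetic linear curve J(V) = 20*(1 - V): J_SC = 20, V_OC = 1, FF = 25 %, PCE = 5 %
v = [i / 10.0 for i in range(-1, 12)]
j = [20.0 * (1.0 - vi) for vi in v]
jsc, voc, ff, pce = jv_parameters(v, j)
```

A real curve has a diode-like shape and a much higher fill factor than this artificial linear example; forward and reverse sweeps would be processed separately to quantify hysteresis.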
J-V characteristics of (a) planar heterojunction PbI2 and (b) CH3NH3PbI3-xClx perovskite solar cells (from Jena et al., "The interface between FTO and the TiO2 compact layer can be one of the origins to hysteresis in planar heterojunction perovskite solar cells").

Recent studies have confirmed iodide migration to the positive electrode, leaving iodide vacancies at the negative electrode, through DC-dependent electroabsorption (EA) spectra, temperature-dependent electrical measurements, and XPS characterization. According to the authors, accumulation of iodide ions at one interface and of the corresponding vacancies at the other creates barriers for carrier extraction. Modulation of such interfacial barriers at the CH3NH3PbI3-xClx/spiro-OMeTAD and TiO2/CH3NH3PbI3-xClx interfaces, caused by the migration of iodide ions/interstitials driven by an external electrical bias, leads to J-V hysteresis in planar (FTO/TiO2 CL/CH3NH3PbI3-xClx/spiro-OMeTAD/Au) perovskite solar cells. Based on the temperature dependence of the hysteretic change in current density, Grätzel et al. 70 estimated the activation energy for the diffusion of different ions in MAPbI3 and found that the iodide ions have the highest mobility with the lowest activation energy. Hence, it is the halide anions (I-), not the MA+ ions, which migrate more easily in the perovskite, causing polarization and charge accumulation at the interfaces under voltage scans and eventually creating hysteresis 70.

Figure 1.18. (a) Device structure and (b) forward and backward J-V curves of perovskite cells without and with PCBM (thermally annealed for 15 and 45 min) (from Shao et al., "Origin and elimination of photocurrent hysteresis by fullerene passivation in CH3NH3PbI3 planar heterojunction solar cells").

Figure 1.19.
Schematics showing shallow and deep trap states, and their role in causing hysteresis (from Li et al., "Hysteresis mechanism in perovskite photovoltaic devices and its potential application for multi-bit memory devices").

Figure 2.1(b) shows the ITO substrate after the etching process.

The spectrometer records the absorption as a graph versus wavelength. Absorption may be presented as transmittance (T = I/I0) or absorbance (A = log(I0/I)). If no absorption has occurred, T = 1.0 and A = 0. Most spectrometers display absorbance on the vertical axis, and the commonly observed range is from 0 (100% transmittance) to 2 (1% transmittance). The wavelength of maximum absorbance is a characteristic value, designated as λmax. The optical band gap (E_opt), expressed in electronvolts, depends on the incident photon wavelength by means of the Planck relation, E_opt = hc/λ, i.e. E_opt [eV] ≈ 1240/λ [nm].

All elements have different sized nuclei, and as the size of the atom

The light incident upon the sample may be decomposed into s and p components (the s-component of the electric field oscillates parallel to the sample surface and perpendicular to the plane of incidence; the p-component oscillates parallel to the plane of incidence). The reflection coefficients of the s and p components after reflection are denoted by Rs and Rp. The fundamental equation of ellipsometry is then written ρ = Rp/Rs = tan(Ψ)·e^(iΔ). Thus, tan Ψ is the amplitude ratio upon reflection, and Δ is the phase shift. Since ellipsometry measures the ratio of two values (rather than the absolute value of either), it is very robust, accurate (it can achieve angstrom resolution) and reproducible. For instance, it is insensitive to scatter and fluctuations, and requires no standard or calibration.
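The transmittance/absorbance and Planck relations above translate directly into code. The 620 nm onset used below is an illustrative value, chosen because it corresponds to roughly the 2 eV band gap mentioned for PbI2 in this work:

```python
import math

def absorbance(transmittance):
    """A = log10(I0/I) = -log10(T); T = 1.0 gives A = 0."""
    return -math.log10(transmittance)

def photon_energy_ev(wavelength_nm):
    """Planck relation E = h*c/lambda, i.e. E [eV] ~ 1239.84 / lambda [nm]."""
    return 1239.84 / wavelength_nm

a = absorbance(0.01)             # 1 % transmittance corresponds to A = 2
e_opt = photon_energy_ev(620.0)  # an onset near 620 nm gives ~2.0 eV
```

This is the simple onset-wavelength estimate; a Tauc-plot analysis of the absorption edge gives a more rigorous band gap value.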
Experimentally, the advantages of ellipsometry measurements are:
- Non-destructive and non-contact technique
- No sample preparation
- Solid and liquid samples
- Fast thin film thickness mapping
- Single and multi-layer samples
- Accurate measurement of ultra-thin films of thickness < 10 nm

Probes are applied to a pair of contacts, and the resistance is measured by applying a voltage across these contacts and measuring the resulting current. The current flows from the first probe into the metal contact, across the metal-semiconductor junction, through the sheet of semiconductor, and across the metal-semiconductor junction again.

AFM applications also include probe lithography and local stimulation of cells. Simultaneously with the acquisition of topographical images, other properties of the sample can be measured locally and displayed as an image, often with similarly high resolution. Examples of such properties are mechanical properties, like stiffness or adhesion strength, and electrical properties, such as conductivity or surface potential. In fact, the majority of SPM techniques are extensions of AFM that use this modality.

Figure 3.2. Photo images of the perovskite dipping process: (a) dipping in MAI solution, (b) just after the dipping process, (c) full device after electrode deposition, and (d) annealing process.

Methylammonium iodide (MAI) has to penetrate into the PbI2 film during the dipping process in the MAI solution. First, we dipped the PbI2 film into isopropyl alcohol (IPA) solvent for 10-15 seconds for cleaning. Thereafter, we carried out the dipping process in the MAI solution for 30-40 seconds, as shown in Figure 3.2a. We could see the color of the film change from yellow to dark brown with dipping time. The IPA dip-cleaning was carried out again for 10-15 seconds, immediately after the MAI dipping process. Finally, we performed a spin-coating step at 4000 rpm for 30 seconds and annealing at 70 °C for 45 minutes to remove the IPA solvent remaining in the perovskite film.
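The TLM measurement described above yields the contact resistance and sheet resistance from a linear fit of the total resistance versus pad spacing, R(d) = 2·R_C + R_sheet·d/W. A minimal sketch with synthetic data (the resistance values and pad geometry below are hypothetical, not measurements from this work):

```python
def tlm_fit(spacings_um, resistances_ohm, width_um):
    """Linear fit R(d) = 2*R_c + R_sheet*d/W.
    Returns (R_c in ohm, R_sheet in ohm/sq)."""
    n = len(spacings_um)
    xm = sum(spacings_um) / n
    ym = sum(resistances_ohm) / n
    slope = sum((x - xm) * (y - ym)
                for x, y in zip(spacings_um, resistances_ohm)) / \
            sum((x - xm) ** 2 for x in spacings_um)
    intercept = ym - slope * xm
    r_sheet = slope * width_um   # slope = R_sheet / W
    r_c = intercept / 2.0        # intercept = 2 * R_c
    return r_c, r_sheet

# Synthetic data: R_sheet = 1e6 ohm/sq, R_c = 500 ohm, pad width W = 1000 um
d = [10.0, 20.0, 40.0, 80.0]
r = [2 * 500.0 + 1e6 * di / 1000.0 for di in d]
rc, rsheet = tlm_fit(d, r, 1000.0)
```

On aged perovskite films, repeating this fit over time would separate the contact-resistance drift (intercept) from the sheet-resistance drift (slope), which is exactly the distinction drawn in the ageing study.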
The dipping technique is a sensitive process. There are numerous experimental conditions affecting the crystallinity of the perovskite thin films, such as the dipping angle, the dipping speed, and the pulling speed of the dipped samples. However, we decided to study the dipping process because of the high performance reported among the solution-processed PSCs at that time.

Figures 3.5 and 3.6 show the light absorbance and transmittance characteristics of the PbI2 and perovskite films in the range of 300 to 800 nm, respectively. First, the PEDOT:PSS layer, well known as a transparent hole transport layer (HTL), has zero absorption in the range of 300 to 800 nm (red solid lines in Figures 3.5 and 3.6). As depicted in Figure 3.5, the slower the PbI2 solution was spun, the more light was absorbed, owing to the greater thickness. However, the energy band gaps of the PbI2 films remained fixed around 2 eV in spite of the different thicknesses. In general, the energy band gap of a thin film can be defined by the maximum wavelength of the absorption range.

Figure 3.5. The (a) transmittance and (b) absorbance graphs of the PbI2 film at different thicknesses controlled by the rpm conditions.

Figure 3.6 shows the absorbance and transmittance characteristics of perovskite thin films depending on different dipping times in the MAI solution. The absorbance of the film increased with dipping time up to 50 seconds and dropped beyond 50 seconds. The crystallinity of the perovskite thin film collapsed owing to the MAI solution damage caused by over-dipping. We can confirm the damage by SEM measurement, as shown in Figure 3.7. The perovskite thin film dipped for the ideal time has an outstanding crystalline quality with a 100 nm grain size (Figure 3.7a). However, the film crystallinity collapsed immediately for dipping times longer than 50 seconds (Figure 3.7b).
As shown in figure 3.23a, the color of the perovskite thin film was completely dark brown, with a nice uniformity compared to the photos in figure 3.12 (the perovskite thin film deposited using the 2-step dipping technique).

Figure 4.1a shows the inverted (or p-i-n) CH3NH3PbI3-xClx based planar-structure PSC devices used in this work (ITO = anode, Ag = cathode). The photovoltaic performance of a series of 10 samples is reported in Table 1. Table 1 shows a good reproducibility of the photovoltaic performance.
For a negative bias, I -ions move towards PCBM layer and for the positive bias I -ions move towards PEDOT:PSS layer. Figure 4 . 4 Figure 4.15 shows GD-OES profile lines of iodide and chloride ions versus sputtering time for different recovery time (from 1 to 3 minutes after stopping the applied voltage) showing directly the reversibility of ionic migration. By considering the plasma etching direction (from silver to ITO), the shorter sputtering time is silver cathode side and the longer sputtering time is the ITO anode side. For generating the ionic migration, 1.5 V or -1.5 V were applied in the PSCs device during 30 seconds (Figure 4.15). In comparison with the GD-OES profile lines measured before applying bias, both iodide and chloride ions are shifted towards the PEDOT:PSS side (longer sputtering time) under the positive bias and both iodide and chloride ions are shifted toward the silver cathode side (shorter sputtering time) under the negative bias. These results confirm that the iodide and chloride ions are negatively charged species. The GD-OES profile lines of iodide ions in the perovskite film (red solid lines in Fig 4.15a and 4.15b) show only one peak (sputtering time around 40 s) Figure 5 . 4 . 54 Figure 5. 4. Transmission line measurement (TLM) versus perovskite thickness on top of glass; (a) 450 nm, (b) 390 nm, and (c) 350 nm Fig. 5 . 5 Fig. 5.13b, while the slope generated under positive initial applied voltage induces the decrement of leakage current potential as Fig. 5.13d. Both V 0 and leakage current density versus initial applied voltage represent an interpretation of the energy band model considering the halide ionic migration. Fig. 6 . 6 Fig.6.1 represents the photos of the perovskite sample showing the color variation during ageing progress. The sample structure is glass/ITO/PEDOT:PSS/perovskite for confirming the identical conditions of bottom layers as PSCs device. 
The sample is stored in the glove box (N 2 condition) without illumination at room temperature. It is for removing the ageing effects due to thermal, water, moisture and illumination. During the 1 st week, the color of the perovskite thin film fixed in dark brown as initial color. However, the film color becomes light brown between the 2 nd week and the 4 th week. Finally, it takes over 6 weeks to change color to full yellow. As shown in Figure6.1 for the 5 th week aged sample, the perovskite thin film turns yellow from the edge of the sample due to the exposed area difference to air. The color variation of the perovskite thin film (dark brown à yellow) CH 3 NH 3 33 PbI 3-x Clx based perovskite thin film deposited by 1-step spin-casting process. The PEDOT:PSS and PC 60 BM layers are deposited by spin-coating process as HTL and ETL, respectively. The electrical transport characteristics (conductivity, resistivity, contact resistance, and carrier diffusion length) of the film are measured by TLM and SSPG techniques. The crystallinity of the film is optimized with the results of SEM and XRD. Real space carrier distribution measurement with the help of Kelvin Probe Force Microscopy (KPFM) also shows unbalanced hole and electron extraction rate in perovskite devices using TiO2 and spiro-OMeTAD as electron collector and HTM, respectively[START_REF] Tao | 17.6 % steady state efficiency in low temperature processed planar perovskite solar cells[END_REF] . It is known that structural defects and/or mismatching at any heterogeneous interface can develop a potential barrier for carrier extraction and thus, results in accumulation of these carries at the interfaces. Therefore, such interfacial defects lead to unbalanced carrier extraction. . Unbalanced carrier extraction (hole extraction rate ≠ electron extraction rate) caused by imperfect matching of the properties of layers is believed to result in J-V hysteresis. Heo et al. 
[Im et al., 18.1% hysteresis-less inverted CH3NH3PbI3 planar perovskite hybrid solar cells] found that perovskite cells of inverted architecture using PCBM as electron collector do not show hysteresis. PCBM, being more conductive (0.16 mS/cm) than the widely used TiO2 (6 × 10^-6 mS/cm), collects/separates electrons more efficiently from the perovskite (CH3NH3PbI3), resulting in balanced carrier extraction (hole extraction rate = electron extraction rate), which in consequence eliminates the hysteresis.

Table 1.1. Several representative device performances of inverted planar structured PSCs.

Perovskite processing | HTL | ETL | VOC (V) | JSC (mA/cm2) | FF (%) | PCE (%) | Stability
One-step | PEDOT | PC61BM/BCP | 0.60 | 10.32 | 63 | 3.9 | -
Two-step | PEDOT | PC61BM | 0.91 | 10.8 | 76 | 7.4 | -
One-step (Cl) | PEDOT | PC61BM | 0.87 | 18.5 | 72 | 11.5 | -
One-step (Cl) | PEDOT | PC61BM/TiOx | 0.94 | 15.8 | 66 | 9.8 | -
Solvent engineering | PEDOT | PC61BM/LiF | 0.87 | 20.7 | 78.3 | 14.1 | -
One-step (moisture, Cl) | PEDOT | PC61BM/PFN | 1.05 | 20.3 | 80.2 | 17.1 | -
One-step (hot-casting, Cl) | PEDOT | PC61BM | 0.94 | 22.4 | 83 | 17.4 | -
One-step (HI additive) | PEDOT | PC61BM | 1.1 | 20.9 | 79 | 18.2 | -
Co-evap | PEDOT/Poly-TPD | PC61BM | 1.05 | 16.12 | 67 | 12.04 | -
Co-evap | PEDOT/PCDTBT | PC61BM/LiF | 1.05 | 21.9 | 72 | 16.5 | -
Two-step spin-coating | PTAA | PC61BM/C60/BCP | 1.07 | 22.0 | 76.8 | 18.1 | -
One-step solvent | PEDOT | C60 | 0.92 | 21.07 | 80 | 15.44 | -
One-step (Cl) | PEDOT | PC61BM/ZnO | 0.97 | 20.5 | 80.1 | 15.9 | 140 h
One-step (Cl) | PEDOT | PC61BM/ZnO | 1.02 | 22.0 | 74.2 | 16.8 | 60 days
One-step | NiOx | PC61BM/BCP | 0.92 | 12.43 | 68 | 7.8 | -
One-step | NiOx | PC61BM/C60 | 1.11 | 19.01 | 73 | 15.4 | 244 h
Solvent engineering | NiOx | PC61BM/LiF | 1.06 | 20.2 | 81.3 | 17.3 | -
Two-step | NiOx | ZnO | 1.01 | 21.0 | 76 | 16.1 | > 60 days
Solvent engineering | NiLiMgO | PC61BM/TiO2:Nb | 1.07 | 20.62 | 74.8 | 16.2 | 1000 h (sealed)

Perovskite owns an ABX3 crystal structure, where A, B, and X are an organic cation, a metal cation, and a halide anion, respectively.1,2 The bandgap can be tuned from the ultraviolet to the infrared region by varying these components.
[2][3][4] This family of materials exhibits a myriad of properties ideal for PV, such as high dual electron and hole mobility, large absorption coefficients resulting from s-p antibonding coupling, a favorable band gap, a strong defect tolerance and shallow point defects, benign grain-boundary recombination effects and reduced surface recombination.5 After 7 years of effort, the power conversion efficiency (PCE) of perovskite solar cells has risen from about 3% to 22%.1,6-16

PC60BM was used as the electron transport layer (ETL). The highest occupied molecular orbital (HOMO) level of PC60BM (-6.3 eV) is deep enough to block the holes generated in the perovskite film (the valence band of the perovskite is at -5.4 eV). PC60BM has the same lowest unoccupied molecular orbital (LUMO) level as the perovskite film (-3.9 eV), so electrons can pass readily from the perovskite to the Al electrode (-4.2 eV). PC60BM is therefore a suitable ETL material. PC60BM was diluted in chlorobenzene (CB) at 2 wt% and then stored in the glove box (N2 condition) at room temperature. The [6,6]-phenyl-C60-butyric acid methyl ester (PC60BM) solution has to be prepared one day before the deposition process so that it dissolves sufficiently.

First, the CH3NH3PbI3 film was used as the active layer in the perovskite solar cells. The PbI2 and CH3NH3I solutions have to be dissolved at least 16 hours before the deposition process. For the PbI2 solution, PbI2 was diluted in N,N-dimethylformamide (DMF) at 33 wt%; it was stored under N2 with annealing at 80 °C. For the CH3NH3I (MAI) solution, MAI was diluted in 2-propanol (IPA) at 10 mg/ml; the solution was stored in the glove box at room temperature.

Table 4.1. Photovoltaic performance of the perovskite solar cells used in this study.
JSC (mA/cm2) | VOC (V) | FF (%) | PCE (%)
Average performance (10 samples): 19.9 (±2) | 0.92 (±0.02) | 64.2 (±4) | 11.7 (±0.8)
Best performance (forward scan): 19.9 | 0.92 | 67.0 | 12.3
Best performance (reverse scan): 20.2 | 0.93 | 67.0 | 12.6

Table 4.2. Performance parameters of the perovskite solar cells fabricated for the direct measurement of ion migration using GD-OES analysis.

Device (#) | VOC (V) | JSC (mA/cm2) | FF (%) | PCE (%)
1 | 0.94 | 20.2 | 60 | 11.5
2 | 0.92 | 19.8 | 63 | 11.5
3 | 0.91 | 20.0 | 63 | 11.5
4 | 0.92 | 19.1 | 67 | 11.7
5 | 0.92 | 19.9 | 64 | 11.7
6 | 0.91 | 18.7 | 67 | 11.3
7 | 0.93 | 18.4 | 67 | 11.3
8 | 0.94 | 21.9 | 62 | 12.6
9 | 0.91 | 20.8 | 62 | 11.6
10 | 0.93 | 20.2 | 67 | 12.6
Average | 0.92 | 19.9 | 64.2 | 11.7

Figure 4.6 shows the GD-OES results for I and Cl in the perovskite thin film without applied voltage. As shown in Figure 4.6a, the initial I profile line (before applied voltage) changes depending on the annealing temperature of the perovskite thin film. It is not symmetric (slightly shifted toward PCBM) when the perovskite film is annealed at 80 °C in N2 conditions.

4.5.2. Lead and MA ion migration

The GD-OES profile lines of Pb and N show no ionic migration under an applied bias (Figures 4.7c, 4.7d, 4.8c, and 4.8d). These results are in accordance with the fact that the migration activation energies of the Pb (2.31 eV) and MA (0.84 eV) ion vacancies are higher than the value for the I ion vacancies (0.58 eV), as reported by Eames et al.7

Chapter 5. Hysteresis characteristics in CH3NH3PbI3-xClx based inverted solar cells

Summary

The current-voltage (J-V) hysteresis observed in perovskite solar cells (PSCs) is reported as a key issue. This chapter presents the J-V hysteresis versus structure variation (thickness of the perovskite layer and type of cathode) and versus measurement conditions (scanning rate, initially applied voltage, and measuring temperature).
Not only the J-V curves under illumination but also the J-V curves in the dark were analyzed, in order to understand the effect of ionic migration on the J-V performance. This is because the J-V hysteresis needs to be considered without photo-generated excess carriers.

Introduction

The continuous and skyrocketing rise in the power conversion efficiency (PCE) of hybrid organic-inorganic perovskite (HOIP) based solar cells [1][2][3] has attracted enormous attention in the photovoltaics community. These materials have become of utmost interest to all those working on photovoltaic technologies because of their high absorption coefficient and long-range carrier diffusion with minimal recombination, which are the main factors usually used to explain the large current density, high open-circuit voltage, and thus high PCE of perovskite solar cells (PSCs). However, several unusual issues must be tackled in order to further improve their efficiency, among them the hysteresis observed in current-voltage curves, the wide distribution in performance, difficulties in reproducing results, etc. These issues require deeper scientific understanding and demand serious attention. Among all the above issues, hysteresis has been considered by the community as crucial. It has been widely observed that perovskite solar cells show a substantial mismatch between the current density-voltage (J-V) curve measured on a forward scan (from negative to positive bias) and on a backward scan (from positive to negative bias). Hysteretic J-V curves imply that there is a clear difference in transient carrier collection at a given voltage depending on whether it is measured during a forward or a backward scan. In general, carrier collection in the device depends on carrier generation, separation, and transport from the bulk across the different interfaces of the device.
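One common way to quantify this forward/backward mismatch is a hysteresis index comparing the maximum power of the two scans. A minimal sketch (the J-V arrays are illustrative, and this index definition is one of several variants used in the literature, not the specific metric of this thesis):

```python
# Quantify J-V hysteresis from forward and reverse voltage scans.
# Hysteresis index HI = (P_max,rev - P_max,fwd) / P_max,rev, where P_max is
# the maximum of J*V over the scan; HI ~ 0 means hysteresis-free behavior.

def max_power(voltages, currents):
    """Maximum power point (mW/cm^2 if V in volts and J in mA/cm^2)."""
    return max(v * j for v, j in zip(voltages, currents))

def hysteresis_index(v, j_forward, j_reverse):
    p_fwd = max_power(v, j_forward)
    p_rev = max_power(v, j_reverse)
    return (p_rev - p_fwd) / p_rev

# Illustrative scan data (V in volts, J in mA/cm^2)
v = [0.0, 0.2, 0.4, 0.6, 0.8, 0.9]
j_fwd = [19.9, 19.8, 19.5, 18.5, 13.0, 4.0]
j_rev = [20.2, 20.1, 19.9, 19.2, 14.5, 5.0]

print(f"HI = {hysteresis_index(v, j_fwd, j_rev):.3f}")
```

As the scans converge (slow scanning rates, see below), the two maximum-power values coincide and HI tends to zero.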
As carrier generation and separation are considered as fast processes, the transient carrier collection is expected to be governed by carrier transport and extraction.

We must notice that the temperature dependence of the diffusivity shows two distinct regions: an extrinsic region (low temperature) and an intrinsic region (high temperature). For the same reasons, the temperature dependence of the ionic conductivity also exhibits intrinsic and extrinsic regions. We can now develop a theory for ionic conduction using Boltzmann statistics, writing the ionic jump rate as Γ = α·ν·exp(-Ea/kBT), where α is the accommodation coefficient, ν is the vibrational frequency of the ions, and Ea can be considered as the activation energy, or free-energy change, per atom, and see whether we arrive at the same formalism as for the ionic conductivity versus diffusivity. We can write an expression for the conductivity of the form σ ∝ (1/T)·exp(-Ea/kBT), i.e. σT = σ0·exp(-Ea/kBT). This expression has a form similar to the one obtained above using diffusivity and mobility, and it shows the same dependence on temperature as well as on the activation energy for migration. In conclusion, the activation energy of the ionic conductivity quantitatively characterizes the rate of ion migration, and it can be extracted from the temperature-dependent electrical conductivity through the Nernst-Einstein relation.

UV-exposed ITO was used as the anode. Al or Ag was deposited as the cathode on top of the PCBM layer. The two squares (dashed lines) represent the parameters of the device structure used to control the J-V hysteresis: the thickness of the active layer (perovskite thin film) and the cathode (Al / Ag). The thickness of the active layer was controlled by the rpm conditions in the spin-casting process, and the cathode layer was deposited by thermal vacuum evaporation.

J-V hysteresis depending on device structure

J-V hysteresis depending on measurement conditions

J-V hysteresis versus voltage scanning rate

The measurement condition studied for the J-V hysteresis is the voltage scanning rate, in the range between 25 mV/s and 25,000 mV/s.
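The extraction of Ea from temperature-dependent conductivity described above amounts to an Arrhenius fit of ln(σT) versus 1/T. A minimal sketch (the σ(T) values are synthetic, generated for illustration, and `activation_energy` is a hypothetical helper, not code from the thesis):

```python
# Extract the ionic-migration activation energy Ea from temperature-dependent
# conductivity using the Arrhenius form sigma*T = sigma0 * exp(-Ea/(kB*T)):
# ln(sigma*T) = ln(sigma0) - (Ea/kB)*(1/T), so the slope of ln(sigma*T)
# versus 1/T gives -Ea/kB.
import math

KB_EV = 8.617333e-5  # Boltzmann constant in eV/K

def activation_energy(temperatures_K, sigmas):
    """Least-squares slope of ln(sigma*T) vs 1/T, converted to Ea in eV."""
    x = [1.0 / t for t in temperatures_K]
    y = [math.log(s * t) for t, s in zip(temperatures_K, sigmas)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return -slope * KB_EV  # Ea in eV

# Synthetic data generated with Ea = 0.58 eV (the iodide-vacancy value above)
Ea_true, sigma0 = 0.58, 1.0e3
temps = [280.0, 300.0, 320.0, 340.0]
sigmas = [sigma0 / T * math.exp(-Ea_true / (KB_EV * T)) for T in temps]

print(f"fitted Ea = {activation_energy(temps, sigmas):.3f} eV")  # ~0.580 eV
```

In practice the fit should be restricted to one regime (extrinsic or intrinsic), since the two regions have different effective activation energies.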
We already discussed the reversibility of the halide ionic migration and its recovery time (around 2-3 minutes) using the GD-OES analysis (figure 4.15). This recovery time is longer than the voltage scanning time of a J-V curve measurement (less than 1 minute); the halide ionic migration can therefore be sustained during the J-V measurement, and the amount of halide ionic migration can be changed by the voltage scanning rate. Fig. 5.7 and Fig. 5.8 show the J-V performances under illumination versus voltage scanning rate. As the scanning rate increased, the J-V hysteresis became stronger: the ionic migration induced by the initially applied voltage is well preserved when the voltage scanning rate is high. At fast scanning rates, the short-circuit current (JSC) in the reverse-scan measurement is higher than that in the forward-scan measurement. As the scanning rate is reduced, not only JSC but also VOC, FF, and PCE approach similar values (0.912 V, 65%, and 11.4%), indicating that the J-V hysteresis becomes weaker. When the voltage scanning direction is forward, the VOC value is constant at 0.913 V over the whole range of scanning rates (25 - 25,000 mV/s). In the reverse scanning direction, however, the VOC value increases as the voltage scanning rate decreases. In both the forward and reverse scanning directions, the fill factor (FF) increases as the voltage scanning rate decreases.

Chapter 6. Ageing study in CH3NH3PbI3-xClx based inverted perovskite solar cells

Summary

Poor stability is reported as one of the key issues of perovskite solar cells (PSCs). In this study, we used various thin-film analysis techniques to investigate the ageing process of the CH3NH3PbI3-xClx based perovskite thin film: optical analysis (UV-Vis spectroscopy and X-ray diffraction (XRD)), electrical analysis (transmission line measurement (TLM)), and surface analysis (atomic force microscopy (AFM)).
We found that the surface variation is more critical than the bulk variation during the ageing process. Direct experimental evidence of iodide ionic diffusion toward the silver electrode during the ageing process was obtained by glow discharge optical emission spectroscopy (GD-OES) analysis.

Introduction

With power conversion efficiencies (PCE) beyond 22%, organic-lead-halide perovskite solar cells (PSCs) are stimulating the photovoltaic research scene. However, despite the big excitement, the unacceptably low device stability under operating conditions currently represents an apparently unbearable barrier for their market uptake.[1-5] Notably, a marketable product requires a warranty for 20-25 years with <10% drop in performance. This corresponds, in standard accelerated ageing tests, to having <10% drop in PCE for at least 1,000 h. Hybrid perovskite solar cells are still struggling to reach this goal. Perovskites are sensitive to water and moisture, ultraviolet light and thermal stress.[6-8] When exposed to moisture, the perovskite structure tends to hydrolyse,6 undergoing irreversible degradation and decomposing back into the precursors, for example the highly hygroscopic CH3NH3X and CH(NH2)2X salts and PbX2, with X = halide, a process that can be dramatically accelerated by heat, electric field and ultraviolet exposure.7,8 Material instability can be controlled to a certain extent using cross-linking additives9 or by compositional engineering10, that is, adding a combination of Pb(CH3CO2)2·3H2O and PbCl2 in the precursors11 or using a cation cascade including Cs and Rb cations, as recently demonstrated,2,3 to reduce the material photoinstability and/or optimize the film morphology. However, solar cell degradation is not only due to the poor stability of the perovskite layers, but can also be accelerated by the instability of the other layers of the device. The TLM configuration used below was presented in figure 3.19 (chapter 3).
Though the contact resistance is high, the TLM results are highly reproducible. After 3 days, the contact resistance increased to 6E+10 Ω, around 130 times higher than the initial value. The resistivity after 3 days is 3.4E+6 Ω·cm, 3 times higher than the initial resistivity. We can conclude that the variation of the contact resistance is more significant than the variation of the resistivity: the ageing of the perovskite thin film is more critical at the surface than in the bulk. These TLM results agree with what we inferred from the UV-Vis spectroscopy analysis (figure 6.2). After 3 days, however, the TLM method is no longer usable because the current becomes too low as a result of the ageing process: the total resistance exceeds 5E+12 Ω and the linear increase versus electrode spacing is no longer observed.

Surface analysis

As already discussed with the UV-Vis spectroscopy results (figure 6.2) and the TLM analysis (figure 6.5), the ageing effect is more critical at the surface than in the bulk of the perovskite thin film. In this paragraph, we use AFM and SEM analysis to investigate the surface variation during the ageing process. To prepare the AFM and SEM samples, the CH3NH3PbI3-xClx based perovskite thin film was deposited onto glass/ITO/PEDOT:PSS by spin-casting in the glove box (N2 condition), under the same conditions as for the PSCs. The AFM and SEM samples were stored under N2 and in the dark.

The iodide ionic diffusion toward the silver electrode that we found must be one of these degradation mechanisms. This behavior will be a great guideline for understanding the ageing of PSC device performance and for the design of future stable PSC devices.
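The TLM analysis used above, in which the total resistance grows linearly with the electrode spacing, can be sketched as a linear fit R_total = 2·Rc + Rsheet·(L/W). The data and geometry below are illustrative, not the measured values from this chapter:

```python
# Transmission line measurement (TLM): fit total resistance versus electrode
# spacing L with R_total = 2*Rc + Rsheet*(L/W). The intercept gives twice the
# contact resistance Rc; the slope gives the sheet resistance Rsheet.

def tlm_fit(spacings_um, resistances_ohm, width_um):
    """Least-squares fit; returns (Rc in ohm, Rsheet in ohm/square)."""
    x = [L / width_um for L in spacings_um]
    y = resistances_ohm
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return intercept / 2.0, slope

# Illustrative data: pad spacings (um) and measured total resistances (ohm)
width = 1000.0                          # electrode width, um (assumed geometry)
spacings = [50.0, 100.0, 150.0, 200.0]
resist = [1.1e9, 1.7e9, 2.3e9, 2.9e9]   # grows linearly with spacing

rc, rsheet = tlm_fit(spacings, resist, width)
print(f"Rc ~ {rc:.2e} ohm, Rsheet ~ {rsheet:.2e} ohm/sq")
```

When ageing suppresses the linear dependence on spacing, as reported above, this fit is no longer meaningful, which is why the TLM method breaks down after 3 days.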
Marie Touchon, Samuel Nicolay, Benjamin Audit, Edward-Benedict Brodie of Brodie, Yves d'Aubenton-Carafa, Alain Arneodo, Claude Thermes

Minor category: EVOLUTION

Replication-associated strand asymmetries in mammalian genomes: towards detection of replication origins

In the course of evolution, mutations do not affect both strands of genomic DNA equally. This mainly results from asymmetric DNA mutation and repair processes associated with replication and transcription. In prokaryotes, prevalence of G over C and T over A is frequently observed in the leading strand. The sign of the resulting TA and GC skews changes abruptly when crossing replication origin and termination sites, producing characteristic step-like transitions. In mammals, transcription-coupled skews have been detected, but so far, no bias has been associated with replication. Here, analysis of intergenic and transcribed regions flanking experimentally identified human replication origins and the corresponding mouse and dog syntenic regions demonstrates the existence of compositional strand asymmetries associated with replication. Multi-scale analysis of human genome skew profiles reveals numerous transitions that allow us to identify a set of one thousand putative replication initiation zones. Around these putative origins, the skew profile displays a characteristic jagged pattern also observed in the mouse and dog genomes. We therefore propose that in mammalian cells, replication termination sites are randomly distributed between adjacent origins. Altogether, these analyses constitute a step toward genome-wide studies of replication mechanisms.

INTRODUCTION

Comprehensive knowledge of genome evolution relies on understanding the mutational processes that shape DNA sequences. Nucleotide substitutions do not occur at similar rates and in particular, owing to strand asymmetries of the DNA mutation and repair processes, they can affect each of the two DNA strands differently.
Asymmetries of substitution rates coupled to transcription have been observed in prokaryotes (1-3) and in eukaryotes (4-6). Strand asymmetries (i.e. G ≠ C and T ≠ A) associated with the polarity of replication have been found in bacterial, mitochondrial and viral genomes, where they have been used to detect replication origins (7-9). In most cases, the leading replicating strand presents an excess of G over C and of T over A. Along one DNA strand, the sign of this bias changes abruptly at the replication origin and at the terminus. In eukaryotes, the situation is unclear. Several studies failed to show compositional biases related to replication, and analyses of nucleotide substitutions in the region of the ß-globin replication origin did not support the existence of a mutational bias between the leading and the lagging strands (8, 10, 11). In contrast, strand asymmetries associated with replication were observed in the subtelomeric regions of Saccharomyces cerevisiae chromosomes, supporting the existence of replication-coupled asymmetric mutational pressure in this organism (12). We present here analyses of strand asymmetries flanking experimentally determined human replication origins, as well as the corresponding mouse and dog syntenic regions. Our results demonstrate the existence of replication-coupled strand asymmetries in mammalian genomes. Multiscale analysis of skew profiles of the human genome, using the wavelet transform methodology, reveals the existence of numerous putative replication origins associated with randomly distributed termination sites.

Data and Methods

Human replication origins. Nine replication origins were examined, namely those situated near the genes MCM4 (13), HSPA4 (14), TOP1 (15), MYC (16), SCA-7 (17), AR (17), DNMT1 (18), LaminB2 (19) and ß-globin (20).

Sequences. Sequence and annotation data were retrieved from the Genome Browser of the University of California Santa Cruz (UCSC) for the human (May 2004), mouse (May 2004) and dog (July 2004) genomes. To delineate the most reliable intergenic regions, transcribed regions were retrieved from "all_mrna", one of the largest sets of annotated transcripts. To obtain intronic sequences, we used the KnownGene annotation (containing only protein-coding transcripts); when several transcripts presented common exonic regions, only common intronic sequences were retained. For the dog genome, only preliminary gene annotations were available, precluding the analysis of intergenic and intronic sequences. To avoid biases intrinsic to repeated elements, all sequences were masked with RepeatMasker, leading to a 40-50% sequence reduction.

Strand asymmetries. The TA and GC skews were calculated as S TA = (T - A)/(T + A), S GC = (G - C)/(G + C), and the total skew as S = S TA + S GC, in non-overlapping 1 kbp windows (all values are given in percent). The cumulated skew profiles Σ TA and Σ GC were obtained by cumulative addition of the values of the skews along the sequences. To calculate the skews in transcribed regions, only the central regions of introns were considered (after removal of 530 nt from each extremity), in order to avoid the skews associated with splicing signals (6). To calculate the skews in intergenic regions, only windows that did not contain any transcribed region were retained. To eliminate the skews associated with promoter signals and with transcription downstream of polyA sites, transcribed sequences were extended by 0.5 kbp and 2 kbp at their 5' and 3' extremities, respectively (6).

Sequence alignments. Mouse and dog regions syntenic to the six human regions shown in Fig. 1 were retrieved from UCSC (Human Synteny).
Mouse intergenic sequences were individually aligned using PipMaker (21), leading to a total of 150 conserved segments larger than 100 bp (>70% identity), corresponding to a total of 26 kbp (5.3% of intergenic sequences).

Wavelet-based analysis of the human genome. The wavelet transform (WT) methodology is a multiscale discontinuity-tracking technique (22, 23) (for details, see Supplementary material). The main steps involved in the detection of jumps were the following. We selected the extrema of the first derivative S′ of the skew profile S smoothened at large scale (i.e. computed in large windows). The scale of 200 kbp was chosen as being just large enough to reduce the contribution of discontinuities associated with transcription (i.e. larger than most human genes (24)), yet as small as possible so as to capture most of the contributions associated with replication. In order to delineate the positions corresponding to the jumps in the skew S at smaller scale, we then progressively decreased the size of the analyzing window and followed the positions of the extrema of S′ across the whole range of scales down to the shortest scale analyzed (the precision was limited by the noisy background fluctuations in the skew profile). As expected, the set of extrema detected by this methodology corresponded to similar numbers of upward and downward jumps. The putative replication origins were then selected among the set of upward jumps on the basis of their ΔS amplitude (see text).

RESULTS AND DISCUSSION

Strand asymmetries associated with replication. We examined the nucleotide strand asymmetries around 9 replication origins experimentally determined in the human genome (Data and Methods).
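The skew quantities S TA, S GC and the cumulated profile defined in Data and Methods are simple to compute. A minimal sketch over a toy sequence (not genomic data):

```python
# Compute TA and GC skews (in percent) in non-overlapping windows, and the
# cumulated skew profile, following the definitions in Data and Methods:
# S_TA = (T - A)/(T + A), S_GC = (G - C)/(G + C), S = S_TA + S_GC.

def window_skews(seq, window):
    """Per-window (S_TA, S_GC, S) in percent for non-overlapping windows."""
    out = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window]
        a, t = w.count("A"), w.count("T")
        c, g = w.count("C"), w.count("G")
        s_ta = 100.0 * (t - a) / (t + a) if t + a else 0.0
        s_gc = 100.0 * (g - c) / (g + c) if g + c else 0.0
        out.append((s_ta, s_gc, s_ta + s_gc))
    return out

def cumulated(skews):
    """Cumulative addition of per-window total skews (Sigma profile)."""
    total, prof = 0.0, []
    for _, _, s in skews:
        total += s
        prof.append(total)
    return prof

seq = "GGTTAACC" * 250              # toy 2 kbp sequence
skews = window_skews(seq, 1000)     # two 1 kbp windows
sigma = cumulated(skews)
print(skews[0], sigma[-1])          # balanced toy sequence -> all zeros
```

On real data, the cumulated profile produces the V-shapes and jagged patterns discussed below; on this balanced toy sequence it stays flat.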
For most of them, the S skew measured in the regions situated 5' to the origins on the Watson strand (lagging strand) presented negative values that shifted abruptly (over a few kbp) to positive values in the regions situated 3' to the origins (leading strand), displaying sharp upward transitions with large ΔS amplitudes, as observed in bacterial genomes (7-9) (Fig. 1a). This was particularly clear with the cumulated TA and GC skews, which presented decreasing (increasing) profiles in regions situated 5' (3') to the origins, displaying characteristic V-shapes pointing to the initiation zones. These profiles could, at least in part, result from transcription, as shown in previous work (6). To measure compositional asymmetries that would result only from replication, we calculated the skews in intergenic regions on both sides of the origins. The mean intergenic skews shifted from negative to positive values when crossing the origins (Fig. 2). This result strongly suggested the existence of mutational pressure associated with replication, leading to the mean compositional biases S TA = 4.0 ± 0.4% and S GC = 3.0 ± 0.5% (note that the value of the skew could vary from one origin to another, possibly reflecting different initiation efficiencies) (Table 1). In transcribed regions, the S bias presented large values when transcription was co-oriented with replication fork progression ((+) genes on the right, (-) genes on the left), and close-to-zero values in the opposite situation (Fig. 2). In these regions, the biases associated with transcription and replication added to each other when transcription was co-oriented with replication fork progression, giving the skew S Lead; they subtracted from each other in the opposite situation, giving the skew S lag (Table 1). We could estimate the mean skews associated with transcription by subtracting the intergenic skews from the S Lead values, giving S TA = 3.6 ± 0.7% and S GC = 3.8 ± 0.9%.
These estimations were consistent with those obtained with a large set of human introns, S TA = 4.49 ± 0.01% and S GC = 3.29 ± 0.01% in ref. (6), further supporting the existence of replication-coupled strand asymmetries.

Table 1. Strand asymmetries associated with human replication origins. The skews were calculated in the regions flanking the six human replication origins (Fig. 1a) and in the corresponding syntenic regions of the mouse genome. Intergenic sequences were always considered in the direction of replication fork progression (leading strand); they were considered in totality (all) or after elimination of conserved regions (ncr.) between human (H.s.) and mouse (M.m.) (see Data and Methods). To calculate the mean skew in introns, the sequences were considered on the non-transcribed strand: S Lead, the orientation of transcription was the same as the replication fork progression; S lag, opposite situation. The mean values of the skews S TA, S GC and S are given in % (± SEM); l, total sequence length in kbp.

Fig. 1. Abscissa represents the distance (kbp) of a sequence window to the corresponding origin; ordinate represents the values of S, given in percent; red, (+) genes (coding strand identical to the Watson strand); blue, (-) genes (coding strand opposite to the Watson strand); black, intergenic regions; in (c), genes are not represented.

Could the biases observed in intergenic regions result from the presence of as yet undetected genes? Two reasons argued against this possibility. First, we retained as transcribed regions one of the largest sets of transcripts available, resulting in a stringent definition of intergenic regions. Second, several studies have demonstrated the existence of hitherto unknown transcripts in regions where no protein-coding genes had been previously identified (25-28).
Taking advantage of the set of non-protein-coding RNAs identified in the "H-Inv" database (29), we checked that none of them was present in the intergenic regions studied here. Another possibility was that the skews observed in intergenic regions result from conserved DNA segments. Indeed, comparative analyses have shown the presence of nongenic sequences conserved between human and mouse (30). These could present biased sequences, possibly contributing to the observed intergenic skews. We examined the mouse genome regions syntenic to the six human replication zones (Fig. 1b). Alignment of the corresponding intergenic regions revealed the presence of homologous segments, but these accounted for only 5.3% of all intergenic sequences. Removal of these segments did not significantly change the skew in intergenic regions, therefore eliminating the possibility that the intergenic skews are due to conserved sequence elements (Table 1).

Fig. 2. Skew S in regions situated on both sides of human replication origins. The mean values of S were calculated in intergenic regions and in intronic regions situated 5' (left) and 3' (right) of the six origins analyzed in Fig. 1a; colors are as in Fig. 1; mean values are in percent ± SEM.

Conservation of replication-coupled strand asymmetries in mammalian genomes. We analyzed the skew profiles in DNA regions of mammalian genomes syntenic to the six human origins (Fig. 1). The human, mouse and dog profiles were strikingly similar to each other, suggesting that in mouse and dog these regions also correspond to replication initiation zones (indeed, they were very similar in primate genomes). Examination of mouse intergenic regions showed, as for human, significant skew S values with opposite signs on each side of these putative origins, suggesting the existence of a compositional bias associated with replication, S = 5.8 ± 0.5% (Table 1).
Human and mouse intergenic sequences situated at these homologous loci presented significant skews, even though they presented almost no conserved sequence elements. This presence of strand asymmetry in regions that strongly diverged from each other during evolution further supported the existence of compositional bias associated with replication in both organisms: in the absence of such process, intergenic sequences would have lost a significant fraction of their strand asymmetry. Altogether, these results establish, in mammals, the existence of strand asymmetries associated with replication in germ-line cells. They determine that most replication origins experimentally-detected in somatic cells coincide with sharp upward transitions of the skew profiles. The results also imply that for the majority of experimentally-determined origins, the positions of initiation zones are conserved in mammalian genomes (a recent work confirmed the presence of a replication origin in the mouse MYC locus (31)). Among nine human origins examined, three do not present typical V-type cumulated profiles. For the first one (DNMT1), the central part of the V-profile is replaced by a large horizontal plateau (several tens of kbp) possibly reflecting the presence of several origins dispersed over the whole plateau. Dispersed origins have been observed for example in the hamster DHFR initiation zone (32). By contrast, the skew profiles of the LaminB2 and ß-globin origins present no upward transition suggesting that they might be inactive in germ-line cells, or less active than neighboring origins (data not shown). Detection of putative replication origins. Human experimentally-determined replication origins coincided with large amplitude upward transitions of skew profiles. The corresponding ΔS ranged between 14% and 38% owing to possible different replication initiation efficiencies and/or different contributions of transcriptional biases (Fig. 1a). 
Are such discontinuities frequent in human sequences, and can they be considered as diagnostic of replication initiation zones? In particular, can they be distinguished from the transitions associated with transcription only? Indeed, strand asymmetries associated with transcription can generate sharp transitions in the skew profile at both gene extremities. These jumps are of same amplitude and of opposite signs, e.g. upward (downward) jumps at 5' (3') extremities of (+) genes (6). Upward jumps resulting from transcription only, might thus be confused with upward jumps associated with replication origins. To address these questions, systematic detection of discontinuities in the S profile was performed with the wavelet transform methodology, leading to a set of 2415 upward jumps and, as expected, to a similar number of downward jumps (see Data and Methods). The distributions of the ΔS amplitude of these jumps were then examined, showing strong differences between upward and downward jumps. For large ΔS values, the number of upward jumps exceeded by far the number of downward jumps (Fig. 3). This excess likely resulted from the fact that, contrasting with prokaryotes where downward jumps result from precisely positioned replication termination, in eukaryotes, termination appears not to occur at specific positions but to be randomly distributed (this point will be detailed in the last section) (33,34). Accordingly, the small number of downward jumps with large ΔS resulted from transcription, not replication. These jumps were due to highly biased genes that also generated a small number of large amplitude upward jumps, giving rise to false positive candidate replication origins. The number of large downward jumps was thus taken as an estimation of the number of false positives. In a first step, we retained as acceptable a proportion of 33% of false positives. 
This value resulted from the selection of upward and downward jumps presenting an amplitude ΔS ≥ 12.5%, corresponding to a ratio of downward jumps over upward jumps r = 0.33. The values of this ratio r were highly variable along the chromosomes (Fig. 3). In G+C-poor regions (G+C < 37%) we observed the smallest r values (r = 0.15). In regions with 37% ≤ G+C ≤ 42%, we obtained r = 0.24, contrasting with r = 0.53 in regions with G+C > 42%. In these latter regions (accounting for about 40% of the genome) with high gene density and small gene length (24), the skew profiles oscillated rapidly with large upward and downward amplitudes (Fig. 5d), resulting in a too large number of false positives (53%). In a final step, we retained as putative origins upward jumps (with ΔS ≥ 12.5%) detected in regions with G+C ≤ 42%. This led to a set of 1012 candidates among which we could estimate the proportion of true replication origins to be 79% (r = 0.21, Fig. 3a). The mean amplitude of the jumps associated with the 1012 putative origins was 18%, consistent with the range of values observed for the six origins in Fig. 1. Note that the experimentally-determined origins were all recovered by this detection process. In close vicinity of the 1012 putative origins (± 20 kbp), most DNA sequences (55% of the analyzing windows) are transcribed in the same direction as the progression of the replication fork. By contrast, only 7% of sequences are transcribed in the opposite direction (38% are intergenic). These results show that the ΔS amplitude at putative origins mostly results from the superposition of biases associated (i) with replication and (ii) with transcription of the gene proximal to the origin. Whether transcription is co-oriented with replication at larger distances will require further studies. We then determined the skews of intergenic regions on both sides of these putative origins. As shown in Fig.
4, the mean skew profile calculated in intergenic windows shifts abruptly from negative to positive values when crossing the jump positions. To avoid the skews that could result from incompletely annotated gene extremities (e.g. 5' and 3' UTRs), 10 kbp sequences were removed at both ends of all annotated transcripts. The removal of these intergenic sequences did not significantly modify the skew profiles, indicating that the observed values do not result from transcription. On both sides of the jump, we observed a steady decrease of the bias, with some flattening of the profile close to the transition point. Note that, due to (i) the potential presence of signals implicated in replication initiation, and (ii) the possible existence of dispersed origins (32), one might question the meaningfulness of this flattening, which leads to a significant underestimate of the jump amplitude. As shown in Fig. 4, extrapolating the linear behavior observed at a distance from the jump would lead to a skew of 5.3%, a value consistent with the skew measured in intergenic regions around the six origins (7.0 ± 0.5%, Table 1). Overall, the detection of upward jumps with characteristics similar to those of experimentally-determined replication origins and with no downward counterpart further supports the existence, in human chromosomes, of replication-coupled strand asymmetries, leading to the identification of numerous putative replication origins active in germ-line cells.

Fig. 3. Histograms of the ΔS amplitudes of the jumps in the S profile. Using the wavelet transform, a set of 5101 discontinuities was detected (2415 upward jumps and 2686 downward jumps, Data and Methods). The ΔS amplitude was calculated as in Fig. 1a.
(a) ΔS distributions of the jumps presenting G+C < 42%, corresponding to 1647 upward jumps and 1755 downward jumps; the threshold ΔS ≥ 12.5% (vertical line) corresponded to 1012 upward jumps that were retained as putative replication origins, and to 211 downward jumps (r = 0.21). (b) ΔS distributions of the jumps presenting G+C > 42%, ΔS ≥ 12.5% corresponding to 528 upward jumps and 280 downward jumps (r = 0.53). The G+C content was measured in the 100 kbp window surrounding the jump position. Upward jumps (black); downward jumps (dots); abscissa represents the values of the ΔS amplitudes calculated in percent.

Random replication termination in mammalian cells. In bacterial genomes, the skew profiles present upward and downward jumps at origin and termination positions, respectively, separated by constant S values (7-9). Contrasting with this step-like shape, the S profiles of intergenic regions surrounding putative origins did not present downward transitions, but decreased progressively in the 5' to 3' direction on both sides of the upward jump (Fig. 4). This pattern was typically found along S profiles of large genome regions showing sharp upward jumps connected to each other by segments of steadily decreasing skew (Fig. 5 a-c). The succession of these segments, presenting variable lengths, displayed a jagged motif reminiscent of the shape of "factory roofs", which was observed around the experimentally-determined human origins (Fig. 5a and data not shown), as well as around a number of putative origins (Fig. 5 b,c). Some of these segments were entirely intergenic (Fig. 5 a,c), clearly illustrating the particular profile of a strand bias resulting solely from replication. In most other cases, we observed the superposition of this replication profile and of the transcription profile of (+) and (-) genes, appearing as upward and downward blocks standing out from the replication pattern (Fig. 5c).
Overall, this jagged pattern could not be explained by transcription only, but was perfectly explained by termination sites more or less homogeneously distributed between successive origins. Although some replication terminations have been found at specific sites in S. pombe (35), they occur randomly between active origins in S. cerevisiae and in Xenopus egg extracts (33,34). Our results indicate that this property can be extended to replication in human germ-line cells. According to our results, we propose a scenario of replication termination relying on the existence of numerous termination sites distributed along the sequence (Fig. 6). For each termination site (used in a small proportion of cell cycles), strand asymmetries associated with replication will generate a skew profile with a downward jump at the position of termination and upward jumps at the positions of the adjacent origins, separated by constant values (as in bacteria). Various termination positions will correspond to elementary skew profiles (Fig. 6, first column). Addition of these profiles will generate the intermediate profile (second column) and further addition of many elementary skews will generate the final profile (third column). In a simple picture, we can suppose that termination occurs with constant probability at any position on the sequence. This can result from the binding of some termination factor at any position between successive origins, leading to a homogeneous distribution of termination sites during successive cell cycles. The final skew profile is then a linear segment decreasing between successive origins (Fig. 6, third column, black line). In a more elaborate scenario, termination would take place when two replication forks collide. This would also lead to various termination sites, but the probability of termination would then be maximum at the middle of the segment separating neighboring origins, and decrease towards extremities. 
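The superposition argument can be checked with a toy model: averaging the step profiles generated by uniformly distributed termination sites between two adjacent origins yields exactly the linearly decreasing "factory roof" segment. A sketch (the skew amplitude s and the discretization are arbitrary choices of ours):

```python
def elementary_profile(n, term_idx, s=1.0):
    """Skew between two origins at positions 0 and n for one cell cycle
    terminating at `term_idx`: +s where the rightward fork replicated
    (leading strand on the Watson strand), -s where the leftward fork did."""
    return [s if x < term_idx else -s for x in range(n)]

def mean_profile(n, s=1.0):
    """Average over all equally likely termination sites between the two
    origins: the step profiles superpose into a linearly decreasing segment,
    from about +s at the left origin to about -s at the right one."""
    acc = [0.0] * n
    for t in range(n + 1):  # termination anywhere between the origins
        for x, v in enumerate(elementary_profile(n, t, s)):
            acc[x] += v
    return [v / (n + 1) for v in acc]
```

Replacing the uniform distribution of termination sites by one peaked mid-segment (fork collision) would instead flatten the averaged profile near the origins, as in the grey curve of Fig. 6.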
Considering that firing of replication origins occurs during time intervals of the S phase (White et al., Proc. Natl. Acad. Sci. USA) could result in some flattening of the skew profile at the origins, as sketched in Fig. 6 (third column, grey curve). In the present state, our results clearly support the hypothesis of random replication termination in mammalian cells, but further analyses will be necessary to determine which scenario is precisely at work. Importantly, the "factory roof" pattern was not specific to human sequences, but was also observed in numerous regions of the mouse and dog genomes (e.g. Fig. 5 e,f), indicating that random replication termination is a common feature of mammalian germ-line cells. Moreover, this pattern was displayed by a set of one thousand upward transitions, each flanked on each side by DNA segments of approximately 300 kbp (without repeats), which can be roughly estimated to correspond to 20-30% of the human genome. In these regions, characterized by low and medium G+C contents, the skew profiles revealed a portrait of germ-line replication, consisting of putative origins separated by long DNA segments of about 1-2 Mbp. Although such segments are much larger than could be expected from the classical view of ≈ 50-300 kbp long replicons (37), they are not incompatible with estimations showing that replicon size can reach up to 1 Mbp (38,39) and that replicating units in meiotic chromosomes are much longer than those engaged in somatic cells (Callan, Proc. R. Soc. Lond.). Finally, it is not unlikely that in G+C-rich (gene-rich) regions, replication origins are closer to each other than in other regions, further explaining the greater difficulty in detecting origins in these regions.
In conclusion, analyses of strand asymmetries demonstrate the existence of mutational pressure acting asymmetrically on the leading and lagging strands during successive replicative cycles of mammalian germ-line cells. Analyses of the sequences of human replication origins show that most of these origins, determined experimentally in somatic cells, are likely to be active also in germ-line cells. In addition, the results reveal that the positions of these origins are conserved in mammalian genomes. Finally, multi-scale studies of skew profiles allow us to identify a large number (1012) of putative replication initiation zones and provide a genome-wide picture of replication initiation and termination in germ-line cells.

Fig. 1. TA and GC skew profiles around experimentally-determined human replication origins. (a) The skew profiles were determined in 1 kbp windows in regions surrounding (± 100 kbp without repeats) experimentally-determined human replication origins (Data and Methods). First row, TA and GC cumulated skew profiles Σ TA (thick line) and Σ GC (thin line). Second row, skew S calculated in the same regions. The ΔS amplitudes associated with these origins, calculated as the difference of the skews measured in 20 kbp windows on both sides of the origins, are: MCM4 (31%), HSPA4 (29%), TOP1 (18%), MYC (14%), SCA7 (38%), AR (14%). (b) Cumulated skew profiles calculated in the 6 regions of the mouse genome syntenic to the human regions figured in (a). (c) Cumulated skew profiles in the 6 regions of the dog genome syntenic to human regions figured in (a). Abscissa (x) represents the distance (kbp) of a sequence window to the corresponding origin; ordinate represents the values of S given in percent; red, (+) genes (coding strand identical to the Watson strand); blue, (-) genes (coding strand opposite to the Watson strand); black, intergenic regions; in (c) genes are not represented.

Fig. 4.
Mean skew profile of intergenic regions around putative replication origins. The skew S was calculated in 1 kbp windows (Watson strand) around the position (± 300 kbp without repeats) of the 1012 upward jumps (Fig. 3); 5' and 3' transcript extremities were extended by 0.5 and 2 kbp, respectively (full circles) or by 10 kbp at both ends (stars) (Data and Methods). Abscissa represents the distance (kbp) to the corresponding origin; ordinate represents the skews calculated for the windows situated in intergenic regions (mean values for all discontinuities and for ten consecutive 1 kbp window positions); the skews are given in percent (vertical bars, SEM). The lines correspond to linear fits of the values of the skew (stars) for x < -100 kbp and x > 100 kbp.

Fig. 5. S profiles along mammalian genome fragments. (a) Fragment of chr. 20 including the TOP1 origin (red vertical line); (b), (c), chr. 4 and chr. 9 fragments, respectively, with low G+C content (36%); (d) chr. 22 fragment with larger G+C content (48%). In (a) and (b), vertical lines correspond to selected putative origins; yellow lines, linear fits of the S values between successive putative origins. Black, intergenic regions; red, (+) genes; blue, (-) genes; note the fully intergenic regions upstream of TOP1 in (a) and from positions 5290 to 6850 kbp in (c). (e) Fragment of mouse chr. 4 syntenic to the human fragment shown in (c); (f) fragment of dog chr. 5 syntenic to the human fragment shown in (c); in (e) and (f), genes are not represented.

Fig. 6. Model of replication termination.
Schematic representation of the skew profiles associated with three replication origins O1, O2, O3; we suppose that these are adjacent, bidirectional origins with similar replication efficiency; abscissa represents the sequence position; ordinate represents the S values (arbitrary units); upward (downward) steps correspond to origin (termination) positions; for convenience the termination sites are symmetric relative to O2. First column, three different termination positions Ti, Tj, Tk, leading to elementary skew profiles Si, Sj, Sk; second column, superposition of these 3 profiles; third column, superposition of a large number of elementary profiles leading to the final "factory roof" pattern. Simple model: termination occurs with equal probability on both sides of the origins, leading to the linear profile (3rd column, thick line). Alternative model: replication termination is more likely to occur at lower rates close to the origins, leading to a flattening of the profile (3rd column, grey line).

Figure S1: (top) skew profile of a fragment of human chromosome 12; (middle) WT of S; values are coded from black (min) to red (max); three cuts of the WT at constant scale a = a* = 200 kbp, 70 kbp and 20 kbp are superimposed together with five maxima lines identified as pointing to upward jumps in the skew profile; (bottom) WT skeleton defined by the maxima lines in black (resp. red) when corresponding to positive (resp. negative) values of the WT.

Acknowledgements
This work was supported by the ACI IMPBIO 2004, the Centre National de la Recherche Scientifique (CNRS), the French Ministère de l'Education et de la Recherche and the PAI Tournesol. We thank O. Hyrien for very helpful discussions.

Supplementary material
Detection of jumps in skew profiles using the continuous wavelet transform. For effective detection of jumps or discontinuities, the simple intuitive idea is that these jumps are points of strong variation in the signal that can be detected as maxima of the modulus of the (regularized) first derivative of the signal.
In order to avoid confusion between "true" maxima of the modulus and maxima induced by the presence of a noisy background, the rate of signal variation has to be estimated using a sufficiently large number of signal samples. This can be achieved using the continuous wavelet transform (WT), which provides a powerful framework for the estimation of signal variations over different length scales. The WT is a space-scale analysis which consists in expanding signals in terms of wavelets that are constructed from a single function, the analyzing wavelet, by means of dilations and translations (Arneodo et al., The Science of Disaster; Nicolay et al.). When using the first derivative of the Gaussian function, namely $g^{(1)}(x) = \mathrm{d}g(x)/\mathrm{d}x$, with $g(x) = e^{-x^2/2}$, the WT of the skew profile S takes the following expression:

$$W[S](x,a) = \frac{1}{a}\int S(y)\, g^{(1)}\!\left(\frac{x-y}{a}\right) \mathrm{d}y = a\,\frac{\mathrm{d}}{\mathrm{d}x}\,(g_a * S)(x), \qquad (1)$$

where $g_a(y) = g(y/a)/a$, and x and a (> 0) are the space and scale parameters, respectively. Equation (1) shows that the WT computed with $g^{(1)}$ is the derivative of the signal S smoothened by a dilated version of the Gaussian function. This property is at the heart of various applications of the WT microscope as a very efficient multi-scale singularity tracking technique (Arneodo et al.; Nicolay et al.). The basic principle of the detection of jumps in the skew profiles with the WT is illustrated in Figure S1. From equation (1), it is obvious that at any fixed scale a, a large value of the modulus of the WT coefficient corresponds to a large value of the derivative of the skew profile smoothened at that scale. In particular, jumps manifest as local maxima of the WT modulus, as illustrated for three different scales in Figure S1 (middle). The main issue when dealing with noisy signals like the skew profile in Figure S1 (top) is to distinguish between the local WT modulus maxima (WTMM) associated with the jumps and those induced by the noise.
In this respect, the freedom in the choice of the smoothing scale a is fundamental, since the noise amplitude is reduced when increasing the smoothing scale, while an isolated jump contributes equally at all scales. As shown in Figure S1 (bottom), our methodology consists in computing the WT skeleton defined by the set of maxima lines obtained by connecting the WTMM across scales. Then, we select a scale a large enough to reduce the effect of the noise, yet small enough to take into account the typical distance between jumps. The maxima lines that exist at that scale are likely to point to jump positions at small scale. The detected jump locations are estimated as the positions at scale 20 kbp of the so-selected maxima lines. According to equation (1), upward (resp. downward) jumps are identified by the maxima lines corresponding to positive (resp. negative) values of the WT as illustrated in Figure S1 (bottom) by the black (resp. red) lines. For the considered fragment of human chromosome 12, we have thus identified 7 upward and 8 downward jumps. The amplitude of the WTMM actually measures the relative importance of the jumps compared to the overall signal. The black dots in Figure S1 (middle) correspond to the 5 WTMM of largest amplitude ( ΔS ≥ 12.5%); it is clear that the associated maxima lines point to the 5 major jumps in the skew profile. Note that these are 5 upward jumps with no downward counterpart and that they have been reported as 5 putative replication origins.
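The procedure above amounts to locating extrema of a Gaussian-smoothed derivative and thresholding the amplitude of the associated transition. A simplified single-scale NumPy sketch (the published method additionally chains maxima lines across scales, which we omit here; function names, the flank width and thresholds are our choices):

```python
import numpy as np

def gaussian_smooth(signal, scale):
    """Smooth `signal` with a normalized Gaussian of width `scale` (in samples)."""
    half = int(4 * scale)
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-x**2 / (2 * scale**2))
    return np.convolve(signal, g / g.sum(), mode="same")

def detect_jumps(signal, scale, min_delta):
    """Flag candidate jumps at local maxima of the modulus of the derivative
    of the Gaussian-smoothed signal (cf. equation 1: positive derivative ->
    upward jump, negative -> downward), keeping only those whose amplitude
    measured across the transition exceeds `min_delta`."""
    smooth = gaussian_smooth(signal, scale)
    d = np.gradient(smooth)
    flank = int(2 * scale)
    up, down = [], []
    for i in range(flank, len(d) - flank):
        if not (abs(d[i]) >= abs(d[i - 1]) and abs(d[i]) > abs(d[i + 1])):
            continue  # not a local maximum of the derivative modulus
        delta = smooth[i + flank] - smooth[i - flank]  # skew change across the jump
        if delta >= min_delta:
            up.append(i)
        elif delta <= -min_delta:
            down.append(i)
    return up, down
```

On a noisy step signal, the smoothing suppresses noise-induced derivative maxima while an isolated jump survives at all scales, which is the rationale behind chaining maxima lines across scales in the full method.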
01755329
2017
https://hal.sorbonne-universite.fr/hal-01755329/file/K%C3%BChl%20Lourenco%202017_sans%20marque.pdf
Gabriele Kühl, Wilson R. Lourenço (email: [email protected])

A new genus and species of fossil scorpion (?Euscorpiidae) from the Early-Middle Eocene of Pesciara (Bolca, Italy)

Keywords: Pesciara of Bolca, Euscorpiidae, Lower Eocene, Scorpions

Abstract: Fossil scorpions are among the oldest terrestrial arthropods known from the fossil record. They have a worldwide distribution and a rich fossil record, especially for the Paleozoic. Fossil scorpions from Mesozoic and Cenozoic deposits are usually rare (except in amber deposits). Here, we describe the only fossil scorpion from the Early to Middle Eocene Pesciara Lagerstätte in Italy. Eoeuscorpius ceratoi gen. et sp. nov. is probably a genus and species within the family Euscorpiidae. This may be the first fossil record of the Euscorpiidae, which are so far only known from four extant genera. Eoeuscorpius ceratoi gen. et sp. nov. was found in the "Lower Part" of the Pesciara Limestone, which is currently dated to the late Ypresian stage (between 49.5 and 49.7 Ma). Besides a possible pseudoscorpion, the fossil scorpion described here is only the second arachnid species known from the Bolca locality.

Introduction
The Bolca Fossil Lagerstätte is a world-famous Lagerstätte for exceptionally preserved fossils from the Eocene. Several hundred plant and animal species have been described from here. Among the animal species, vertebrates (fishes) are dominant and known worldwide. Invertebrates, such as the scorpion described here, belong to the so-called minor fauna, perhaps less famous but no less spectacular.
Geological and paleobiological background and taphonomy
The Bolca region is located in the eastern part of the Lessini Mountains, within the Southern Alps in Northern Italy (Papazzoni et al.). The limestone developed during the Eocene in two phases of uplift of the Tethys Ocean. It is surrounded by volcanic ash, and the limestone deposits are about 19 m thick (Tang). The limestone (Lessini Shelf) is especially recognized for a rich fossil record of fishes, which are known throughout the world for their excellent preservation. The Lessini Shelf is surrounded by deep marine basins and restricted northwards by terrestrial deposits (Papazzoni et al.). The fossil scorpion described herein (Cerato; Fig. 4a) was discovered in the Pesciara Fossil-Lagerstätte, which contains marine life forms as well as terrestrial organisms (Guisberti et al.). The Pesciara Limestone was deposited during the Ypresian stage (Lower Eocene), roughly 49.5 Ma ago. The fossils are found in fine-grained, laminated limestones that were deposited between coarser storm-induced limestone layers (Schwark et al.). The paleoenvironment of the Bolca area is generally regarded as rich in variety. Several ecosystems, ranging from pelagic and shallow marine habitats to brackish, fluvial and terrestrial habitats, are evidenced by the deposits.
Due to the Eocene climatic conditions, temperatures were tropical to subtropical. The deposits generally attest to formerly advantageous living conditions (Tang), allowing a high degree of biodiversity. However, temporarily anoxic and euxinic conditions are suggested by the absence of bottom dwellers (Papazzoni et al.). These events may have led to extinction events that are responsible for the rich fossil fauna. First, a diverse and abundant fish fauna is represented. The rich fish fauna is interpreted as having been close to coral reefs (Papazzoni et al.). Other vertebrates are represented by two snake specimens, a turtle and several bird remains (for a review, see Carnevale et al.). Plants are also very abundant, with more than 105 macrofloral genera having been described (Wilde et al., Flora). The fossil fauna and flora are two-dimensionally preserved and frequently fully articulated. Soft-part preservation with organs and cuticles is common among the fossils. Even color preservation has been reported. Microbial fabrics may be related to the fossilization of soft tissues (Briggs). However, the taphonomic process is actually not well known.

Arthropods from Pesciara of Bolca
The scorpion described (Fig.
1) belongs to the so-called minor fauna of the Pesciara Lagerstätte, which comprises arthropods, polychaete worms, jellyfishes, mollusks, brachiopods and bryozoans. Among arthropods, insects and crustaceans are most abundant (Guisberti et al.). Arachnids, such as the scorpion Eoeuscorpius ceratoi gen. et sp. nov., are only known from two fossils. One is the scorpion described here. The other arachnid is a possible pseudoscorpion from this Lagerstätte (Guisberti et al.). In general, the order Scorpiones goes back to the mid-Silurian of Scotland (see Dunlop 2010 for a review). Scorpions from the early Paleozoic generally differ from subsequent groups by a simple coxo-sternal region and the lack of trichobothria, as the latter structures developed as a consequence of terrestrialization. Beginning with the Devonian, a coxapophysis/stomotheca developed in scorpions, which were then very abundant during the Carboniferous (Dunlop). Currently, 136 valid fossil scorpion species have been described, most of them from Paleozoic fossil sites. Mesozoic and Tertiary fossils are comparatively rare, though amber seems to be a good resource for fossil scorpions (Dunlop; Lourenço).
Materials and methods
The description of the Pesciara scorpion is based on a single specimen, which is the property of Mr. Massimo Cerato in Bolca of Vestenanova and was kindly loaned to us for scientific description. The scorpion, collection number CMC1, is currently stored at the Museum of the Cerato Family (via San Giovanni Battista, 50-37030 Bolca di Vestenanova-Verona). The scorpion was x-rayed with a Phoenix v|tome|x s Micro Tomograph, but this provided no additional information. The scorpion was also photographed using a Nikon D3x. Details were photographed with a Keyence Digital Microscope. A Leica MZ95 was used to produce fluorescence images. Image editing was carried out with Adobe Photoshop CS6 in addition to Adobe Illustrator CS6. Measurements were made with the help of ImageJ. Each structure was measured in length and width [mm], except some structures that were regarded as highly fragmentary. Length and width were measured along the middle of each segment (see Table 1).

Systematic paleontology
Phylum Arthropoda von Siebold, 1848
Class Arachnida Lamarck, 1801
Order Scorpionida Koch, 1837
Family ?Euscorpiidae Laurie, 1896
Genus Eoeuscorpius gen. nov.

Diagnosis of the genus. Scorpion of small size with a total length of 38 mm. General morphology very similar to that of most extant species of the genus Euscorpius Thorell, 1876; however, both body and pedipalps are more bulky and less flattened. Carapace with a strong anterior emargination. Trichobothrial pattern most certainly of type C (Vachon); a number of bothria can be observed: 3 on the femur (internal, dorsal and external); 1 dorsal d1, 1 internal and a few external on the patella, the internal one partially displaced; 4-6 on the dorso-external aspect of the chela hand and 5 on the chela fixed finger.

Diagnosis of the species. As for the genus.

Etymology. The name honors Mr. Massimo Cerato, Bolca of Vestenanova, Italy, who allowed us to study the specimen.

Description. The scorpion is exposed from its dorsal side, nearly completely preserved and in an excellent state of preservation (Fig. 1a), although the distal segments of the walking legs are not well preserved. The cuticle has a brownish coloration and therefore stands out optically against the paler sediment. The cuticle surface is not completely preserved, as smaller areas of usually less than 1 mm are missing. The cuticle surface of the mesosomal tergites is to some extent transparent, allowing a view of the ventral cuticle. The median eyes are not preserved. In that area the sediment matrix is exposed, indicating the former position of the eyes and eye tubercle. The right carapace half is incomplete but can be reconstructed from the left half. On the cuticle surface, especially of the pedipalps, numerous insertion points of bothria and setae are preserved (Fig. 2a,b). The body of the fossil scorpion measures ca. 38 mm in length and ca. 20 mm in width (Fig. 1b). Chelicerae are ca. 2 mm long and 1.3 mm wide. With a width of 4.6 mm and a length of ca. 7 mm, the chela-bearing segments of the pedipalps are very strong, and they are the optically dominant structures of the scorpion. The carapace is 4.8 mm long, 2.1 mm wide anteriorly and 4.8 mm wide posteriorly. In the middle region, the mesosoma is 6.6 mm wide on average. These measurements give an impression of the scorpion's size. More detailed measurements are listed in Table 1.
The carapace of the scorpion is sub-quadrangular with a strong convex emargination at the anterior margin. The cuticle of the anterior region is darker than the rest, which may be because of the carapace thickness. Lateral eyes would at best be preserved on the left carapace half, but their number cannot be determined with certainty. As mentioned before, the median eyes are not preserved, but their former position is marked by a heart-like notch on the carapace surface. The median eyes are situated in the middle of the carapace. Posteriorly, a deep furrow separates the carapace into two halves. Anterior to the frontal carapace margin, a small (1.3-mm-long and 0.8-mm-wide) laminated structure is exposed (Fig. 1b shaded area, Fig. 3). About 36 laminae are within the oval structure, with an average thickness of 0.02 mm. This structure is interpreted as exposed muscle fibrils. Although the chelicerae are normally best seen from the ventral side, in the specimen described herein the tibia and tarsus of the chelicerae are also exposed dorsally (Fig. 1a,b). As in most scorpions, the chelicerae are very small (2 mm long/1.3 mm wide for the complete structure). Parts of the more basal segments (e.g., the coxa) are also preserved, but unfortunately not in detail. The tibia is produced into the fixed finger. The inner ventral margin is equipped with a bristlecomb (Fig. 4a,b). On both the fixed and moveable fingers (tibia and tarsus), tooth denticles are preserved. Roughly, the teeth can be divided into distal, subdistal and median teeth (Vachon; Sissom). Again, the preservation is not detailed enough for further description. The pedipalps of the fossil are comparatively short and stout. The patella in particular is considerably large.
It has a roundish shape and, on the cuticle surface, three carinae running from anterior to posterior. The two carinae on the right side form a V as they meet at the posterior region of the patella (Fig. 2a,b). On the cuticle surface of the patella, several insertion points show the former position of the bothria and other setae. These structures cannot be fully observed. However, some of the insertion points are larger than others. These larger insertion points (filled circles in Fig. 2b) are most certainly the remains of bothria. The trichobothrial pattern can be observed on the dorsal side of the pedipalps. Three bothria (internal, dorsal, external) are preserved on the femur. On the patella, one dorsal and one internal bothrium are preserved, as well as four external bothria. The internal bothrium is partially displaced; four to six bothria are preserved on the dorso-external aspect of the chela hand, and five bothria are on the chela fixed finger. Though incompletely preserved, the overall impression of the legs is that they were slender compared to the robust pedipalps. The tibia of the walking legs is broader than the preceding and following leg segments. The basitarsus and tarsus are shorter than the more basal segments. Tarsal claws are not preserved. The mesosoma consists of seven segments, covered by mesosomal tergites. Tergites are rectangular in shape, with rounded edges. A carina runs close beneath the anterior margin of each mesosomal tergite, except on tergite VII (Fig. 1a,b). The width of the dorsal mesosomal tergites varies from 5.8 to 6.5 mm from tergite I to tergite VII. There is more variation in the length (measured from anterior to posterior). The first and second mesosomal tergites are only about 0.9 mm long; the third measures 1.4 mm. The fourth mesosomal tergite is about 1.6 mm long.

Fig. 3. Laminated structure on the scorpion surface above the carapace margin. Possible remains of muscle fibrils. Scale 0.5 mm.
The following mesosomal tergites show similar lengths of 2.3 and 2.4 mm. The seventh mesosomal tergite differs from the others in being trapezoid. It is roughly 3 mm long. The anterior part is 5 mm wide, whereas the posterior part measures only 3.3 mm. Two additional ventral plates of mesosomal segments five and six are preserved. Their shape and size are comparable to those of their dorsal counterparts. The metasoma of the fossil scorpion consists of five segments, plus the telson (Fig. 1a,b). With the exception of segment V, the tail segments are approximately the same size. Segment V is one and a half times longer than the preceding ones. On each metasomal segment, two longitudinal carinae are preserved. The cuticle surface is densely covered with granules. The telson aculeus is partly covered by segment V, because it is bent backwards (as in life position). The tip of the aculeus is broken.

Discussion

The scorpion is one individual of the minor fauna (Guisberti et al. 2014), i.e., the non-fish animal fauna, which comprises inter alia a few arthropod genera. The scorpion was a terrestrial animal, so it was deposited allochthonously. As the depositional environment was not far from the coastline, it was most probably washed in from nearby land. A definitive phylogenetic position of the scorpion is difficult to determine because of the incompleteness of the specimen. According to the observed characters (the general morphology, the shape of the pedipalp segments, i.e., femur, patella and chela, the presence of a small apophysis on the internal aspect of the patella, and the same numbers and positions of some trichobothria), the specimen is unquestionably a representative of the Chactoidea and can be assigned tentatively to the extant family ?Euscorpiidae (see Lourenço 2015). However, because of the incompleteness of the specimen and in particular because of the geological horizon (Eocene), the specimen is assigned provisionally to a new genus.
Euscorpiidae were hitherto only known from extant representatives. The family comprises 31 species, belonging to four genera (Lourenço 2015). Eoeuscorpius ceratoi gen. et sp. nov. would be the fifth genus (comprising one species), probably giving the family Euscorpiidae a fossil record that goes back more than 49 million years. Etymology. The generic name refers to the geological position of the new genus within the Eocene, which is intermediate in relation to elements of the family Palaeoeuscorpiidae Lourenço, 2003 and the extant genus Euscorpius (Lourenço 2003). Type species. E. ceratoi gen. et sp. nov., from the "Lower Part" of Pesciara Limestone, Early Eocene, Ypresian stage. Eoeuscorpius ceratoi gen. et sp. nov. Figure 1a, b. Material. Holotype, CMC1 (Museum of the Cerato family, Verona).

Fig. 1 a Holotype of Eoeuscorpius ceratoi gen. et sp. nov.; the color of the photograph is inverted. b Line drawing of the holotype; dashed lines of the trunk and legs indicate indistinct leg segments and dorsal sternite margins, respectively. Densely dashed lines beside and

Fig. 2 a, b Right pedipalp chela of Eoeuscorpius ceratoi gen. et sp. nov. br bristles, bt bothria, c carina, mf moveable finger, ti tibia. Scale 1 mm

Fig. 4 a Left and right chelicera of Eoeuscorpius ceratoi gen. et sp. nov., fluorescence illumination. b Line drawing of chelicera. bc bristlecomb, d distal tooth, sd subdistal tooth, m median tooth, mf moveable finger, ti tibia. Scale 1 mm

Table 1 Measurements of Eoeuscorpius ceratoi gen. et sp. nov. (in mm, length/width)

Carapace and trunk: dorsal carapace, left half (complete width) 4.8/3.2 (6.4); sternal plates (ms5, ms6) 2.5/6.2 and 2.5/6.2; tergal plates ms1–ms7: 0.9/5.1, 0.9/5.8, 1.4/6.0, 1.6/6.6, 2.3/6.7, 2.4/6.5, 3.1/4.8; tail segments ts1–ts4: 2.4/2.9, 1.9/2.7, 2.4/2.3, 2.6/2.5; ts5: 3.4/2.9; telson sting: 4.1/1.8; laminated structure (near carapace): 1.3/0.8; eye hole: 0.6/0.7

Appendage segments (preserved values listed per tagma from anterior to posterior):
cx l, cx r: —
tr l: 1.2/1.9; tr r: 1.8/2.7, 1.1/1.2
fe l: 3.5/2.0, 3.2/1.6, 2.4/1.4; fe r: 4.6/2.2, 2.3/1.1, 2.3/?
pa l: 2.4/2.0, 2.4/1.5, 3.0/1.2; pa r: 3.1/2.2, 2.0/1.4, 3.1/1.2
ti l: 1.9/1.3, 6.9/4.7, 1.6/0.7, 3.3/1.6, 2.7/1.6; ti r: 1.9/1.3, 6.9/4.5, 1.6/0.7, 2.7/1.5, 2.7/1.2
bt l: 1.4/0.5, 1.8/0.7, 1.3/0.8; bt r: 1.6/0.7, 1.2/0.6
ta l: 1.3/?, 0.5/0.4, 1.5/0.6, 1.6/0.6; ta r: 1.3/?, 4.3/0.9, 0.8/0.4, 1.6/0.4

First row is divided into the tagmata from anterior to posterior; first column is divided into appendage segments. l/r left/right, cx coxa, tr trochanter, fe femur, pa patella, ti tibia, bt basitarsus, ta tarsus, ch chelicera, pd pedipalp, wl walking leg, ms mesosomal segment, ts tail segment

Acknowledgements Before all others, we thank Mr. Massimo Cerato (Verona) for access to his fossil scorpion. We also sincerely thank Roberto Zorzin (Verona) for friendly communication, for information on the fossil and helpful comments on the manuscript. Additionally, we thank Torsten Wappler (Bonn) for technical help with pictures and Georg Oleschinski (Bonn) for the high-quality photographs. Additionally, we would like to thank the reviewers (Jason Dunlop and Andrea Rossi) and editors (Joachim Haug and Mike Reich) for helpful comments and constructive suggestions, which helped to improve our manuscript.
01755339
en
[ "shs.litt" ]
2024/03/05 22:32:10
1999
https://amu.hal.science/hal-01755339/file/Bangkok%20streets%20in%20Thai%20short%20stories.pdf
The aim of this paper is to compare the vision of the city and its streets in modern Thai short stories. This study is based on the research I am doing for my thesis, entitled "Fiction, ville et société : les signes du changement social en milieu urbain dans les nouvelles thaïes contemporaines", at the Institut National des Langues et Civilisations Orientales (Paris). This research focuses on short stories by four authors who all received the Southeast Asia Write Award: Atsiri Thammachot, Chat Kopchitti, Sila Khomchai and Wanit Charungkit-anan. From my corpus of short stories, I chose six which show different aspects of the street, such as: description, traffic jams, relations between people in the street, the vision of the street according to social class, and the conflict between tradition and modernity.

Corpus

Besides the fact that the four authors chosen for my corpus have received the SEA Write Award, they all belong to the same generation (born between 1947 and 1954) and are all well known in their country. The four of them were born in the countryside and came to Bangkok to attend university. They retain from their childhood in provincial towns a nostalgia that is apparent in their short stories and novels. By Atsiri Thammachot, I have selected two short stories: "Thoe yang mi chiwit yu yang noy ko nai chai chan" (She is still alive, at least in my heart) and "Thung khra cha ni klai pai chak lam khlong sai nan" (It is time now to escape far from this khlong). Both are taken from the collection of short stories entitled "Khunthong chaw cha klap mua fa sang" (Khunthong, you will come back at dawn), for which Atsiri received the SEA Write Award in 1981. Born in 1947, Atsiri spent his childhood in Hua Hin, where his parents had fisheries. After studying journalism at Chulalongkorn University, he began to work for the Sayam Rat, where he still works today, writing short stories at the same time.
In his literary work, the action is often located in the countryside or in small towns, and the problems of urbanisation, modernisation and social change are recurrent. He also focuses on the return to the village after working in Bangkok, as shown in the short story Sia lew sia pai (What is gone is gone). The first short story, Thoe yang mi chiwit yu yang noy ko nai chai chan, takes place during the events of October 1976. Coming out of his house, a journalist runs into a young woman who is involved in the struggle for democracy. Days later, he receives the list of the people killed in the massacre. The young woman's name is on it. Atsiri shows in this short story how a journalist can see himself as a coward and feel ashamed for not having had the courage to take part directly in the events. In Thung khra cha ni klai pai chak lam khlong sai nan, the author shows how a woman, living alone with her two children in a hut on a khlong bank, feels so bad about the life and the surroundings she is offering her children that she decides to leave the khlong. This short story points out how the city can destroy people. Chat Kopchitti, born in 1954 in Samut Sakhon, studied art in Bangkok. He had many jobs before deciding to devote himself entirely to writing. According to Marcel Barang: "He wasn't quite 20 when he decided that creative writing was his life and five years later he turned down a life in business to gamble on a literary career" (Barang, 1994, p. 334). Chat received the SEA Write Award twice: once in 1982, for his novel Khamphiphaksa (The Judgement), and again in 1994, for Wela (Time). In 1983, Raphiporn declared: "Chat Kopchitti, with his novel The Judgement, breaks new ground in this literary genre, with a more brilliant composition that we, the older generation, had never thought of." [START_REF] Fels | Promotion de la littérature en Thaïlande[END_REF]
Chat has published many novels and short-story collections, showing the life of people living on the margins of society (Phan ma ba) or in social rupture (Khamphiphaksa). Some of his short stories, written like tales, such as Mit pracham tua and Nakhon mai pen rai, point out the flaws of society and the use of power by the elite. Chat's vision of society is rather pessimistic. The short story Ruang thamada (An ordinary story) presents the relations between the narrator and an old woman whose daughter is dying of cancer. The narrator is witness and actor at the same time. Beyond the relations between these two characters, the story shows all the problems encountered by people coming from the countryside. The conflict between tradition and modernity is evident in this text: traditional healer versus modern doctor; village customs versus city behaviour; solidarity versus individuality. Throughout the story, the narrator speaks to the reader, using khun, to involve him in the action. Khrop khrua klang thanon (A family in the street), written by Sila Khomchai, is one of the most representative short stories of this corpus because the whole story takes place in the street. Actually, the street is the main character in the story, which recounts the life of a middle-class couple. Living in the suburbs of Bangkok, they spend most of their time in their car, stuck in traffic jams, eating, reading, speaking, playing, observing the others. Sila Khomchai (born in 1952 in the province of Nakhon Si Thammarat) took an active part in the events of 1973-1976 and had to hide in the jungle after the massacre. When he came back to Bangkok in 1981, he became a journalist and continued to write fiction. While most of his short stories and novels reflect his political commitment, Khrop khrua klang thanon is more a criticism of urban life and, somehow, of the middle class. This short story won the SEA Write Award in 1993. The second story by Sila Khomchai that I chose is entitled Khop khun ...
Krungthep (Thank you, Bangkok). In this short story, a taxi driver and his customer, driving through Bangkok at night, develop a strange relationship: each imagines that the other is going to attack him. Throughout the journey, they are anxious, with feelings of fear and mistrust. The last author I will consider here is Wanit Charungkit-anan. Born in 1949 in the province of Suphan Buri, Wanit studied at Silpakorn University in Bangkok. Editor, columnist, poet, and author of numerous novels and short stories, Wanit is a very famous writer. He has received many awards, among which the SEA Write Award in 1984 for his short story Soi diaw kan (The same soi). The short story I selected, Muang luang (The capital), is quite well known in Thailand and even abroad, since it has been translated into several languages. This story gives a very different vision of the city from that of Sila's Khrop khrua klang thanon. The main character is a rather poor worker in Bangkok who describes the street and the people seen from the bus. The description he gives is very pessimistic: tired and desperate workers, inextricable traffic jams, a dangerous and hostile city. In the bus, a man from Isan province is singing a folk song that gives the narrator a deep feeling of nostalgia.

Thematic analysis

The usual definition of 'street', as found in dictionaries, is reduced to a minimum. In English dictionaries, as well as in French or Thai dictionaries, the street is defined as a town or village road with houses on one side or both. Beyond this simple description, the aim of this research is to analyse the way the authors see the street and show it in short stories as a social area where all the different communities of the city pass by one another, whether or not they come into contact. In Bangkok, life in the street is very rich. Three kinds of 'street' (in the sense of 'way of communication') can be distinguished: the large avenues, the soi and the khlong.
A fourth kind, reserved for motor vehicles, is the express highway, created to relieve the traffic jams. Highways are now very extensive, forming a second level of road network above the old streets. The organisation of traffic in the avenues and the soi is rather difficult, especially because many soi are dead ends, making communication between the large roads almost impossible. Life in the avenues is quite different from life in the soi. While the large streets seem to be only ways of communication and commercial areas, the soi are where people live, re-creating the village. Nowadays many khlong, the traditional waterways, have been filled in and covered by roads and buildings. Although Bangkok is no longer the Venice of Asia, the khlong still play a role in communication. In Thon Buri, of course, but also in other districts of the town, boats transport people and goods, using the khlong as a street. Bangkok, and the city generally, is viewed by the authors of my corpus as a terrible place, both for the conditions of life (traffic jams, housing far from the centre, difficulties in finding a job...) and for the relations between people. Urban society and the city are often described as monsters devouring the people of the countryside, leading men and women astray and becoming more and more Westernised. In his book Aphet kamsuan (Bad omen), Win Liaw-warin gives a dictionary of life for middle-class people in Bangkok. Under 'Krungthep', he writes: "If Krungthep were a woman, she would be a woman of easy virtue fascinated by cheap Western culture". About the word 'dream', Win says: "There are two kinds of dream: the good one is dreaming that you fall into Hell (and wake up in Krungthep); the bad one is that you go to Heaven (and wake up in Krungthep)" [START_REF] Liaw-Warin | Aphet kamsuan[END_REF]. This clearly shows the feelings of the writers, which are shared by many of Bangkok's inhabitants.
The different themes relating to the street in the city which appear in the short stories reveal the importance of the street in the urban social context.

Description

In most of the short stories of my corpus, the descriptive part is not very important. Due to the shortness of the texts or by choice of the author, the emphasis is more on the characters and the action than on the description. However, some places are described throughout the stories. The opposition between avenues and soi is quite evident in Chat's story Ruang thamada. Living in an old wooden house located in a soi, the narrator draws a distinction between the street where he lives, the streets around, which he calls "the jungle", and the big avenues, where "one can find all the things that make up civilisation, such as luxurious hotels, cinema halls, massage parlours, bowling alleys, restaurants, bookshops (...) and very smart people. (...) The atmosphere is perfumed and air-conditioned, people look beautiful, there are lifts, escalators and other signs of progress. Coming from the soi is like coming out of barbarism and emerging in the centre of a fairy-tale city, except that it is a real city." Speaking about the city, Chat calls it muang neramit, the 'city built by supernatural powers'. The opposition between soi and main street is really evident in this text. Atsiri, in Thoe yang mi chiwit yu..., describes the soi (actually called a trok) where the narrator lives as "long and narrow as a railroad". The narrator has to walk to reach the main road, since no buses pass through his soi. The khlong as a way of communication is well represented by Atsiri in his short story Thung khra cha ni klai... The mother and her two children live in a hut under the arch of a bridge crossing over a dirty khlong, surrounded by big buildings. Just as in a street, vendors pass along the khlong, paddling in the stream.
Even the sex market is present: at night, the family can hear the prostitutes paddling up and down the khlong. Afraid that her little girl could become one of them, the mother decides to leave the dirty khlong. During the night, the city changes its appearance. Lights, streets and people are not the same as in the daytime. For the narrator of Ruang thamada (Chat), the city at night is a place of pleasure. At the end of the story, after the death of his neighbour's daughter, the narrator decides to go into the city: "Tonight, I am going to walk around, to sit somewhere having a drink, or even to get a girl in the fairy-tale city". In Khop khun... Krungthep, Sila shows a deserted city, crossed by fast cars and illuminated by advertising lights. It is two o'clock in the morning: "[the taxi] goes fast through the dark streets. In the headlights, closed buildings appear on both sides of the streets; the sidewalks are deserted. From time to time, the headlights of another car shine as it passes the taxi, in a roar of engine". Arriving at Anusawri Chai: "[the place] is empty and wide. The white shining lights of the street lamps create a warm atmosphere. The advertising billboards pierce the black screen of the night with multicoloured, flashing lights". This description is quite far from that of the daytime city, crowded, polluted and congested!

Traffic jam

From the moment a short story is set in the context of the city, traffic jams take a central role in the story. The congested streets are described in many short stories, but in two of them they are almost the principal characters. Muang luang, by Wanit, and Khrop khrua klang thanon, by Sila, give two very different visions of the traffic jams in Bangkok. The family of Sila's story, actually a middle-class couple, drives around Bangkok, spending most of their time in the car.
Having an appointment at three o'clock in the afternoon, they decide to leave their house, located in the northern suburbs, at nine in the morning. The husband, who is also the narrator, describes the way his wife prepares the car: "She put on the back seat a basket full of food and an icebox with cool drinks (...) She also put in some plastic bags for rubbish, a spittoon, and a spare suit hanging above the window. Just as if we were going on a picnic!" In the car, they eat, play, listen to the radio, and even make love. They think about the new car they want to buy, more spacious. Especially at the end of the story, when the wife announces to her husband that she is pregnant: "My wife is pregnant! Pregnant in the street...", the husband wants to yell. For this couple, the traffic jam becomes more or less a way of life: the car is a means of transport, a house, and an office as well. The hero of Muang luang, by Wanit, does not have the same reaction towards the city and its traffic jams. A bus user, he feels exhausted and sick of his life in Bangkok. Spending hours packed tightly into buses stuck in traffic jams, he dreams of the village where he was born and of the girl he left there. At a crossroads, the traffic jam is so long that he gets off the bus: "How could the cars move? Going through this city is so difficult. The traffic lights have no meaning. Cars which get a green light can't move because other cars are stuck in the middle of the crossroads. (...) The green light becomes red; on the other side, the red light becomes green. And it all comes to the same thing: cars crawl along and stop". And when the narrator walks in the street, it is even worse: "I was feeling so bad I could have died while I was waiting to cross the street at the Rachaprasong corner. I was standing on a traffic island, exposed to polluted fumes, almost wanting to spit. (...) I was suffocating, almost blacking out".
Relations between people in the street

Reading these short stories gives the feeling that relations between people in the streets of Bangkok are quite similar to those encountered in European capitals. The main feeling shown in the texts is indifference towards others, just as in Paris. This indifference is especially clear in Muang luang (Wanit). The hero, walking in the street to the bus stop, is almost hit on the head by a stone fallen from a building under construction. Nobody notices, neither the other pedestrians nor the workers. Arriving at the bus stop, he looks at the people waiting for the bus: "People waiting at the bus stop are as usual. Nobody pays attention to the others". After the struggle to get onto the bus, the narrator tries to find a seat in the crowded bus: "Two children and their mother are holding on to the back of a seat, standing in the middle of the bus. A young guy is sitting in front of them, but he does not think of giving up his seat. I do not blame him; if I were sitting, I am not sure I would give my seat to someone else". The indifference sometimes verges on non-assistance. Chat describes in Ruang thamada how people walk in the street past a man lying on the sidewalk: "(...) quite often, I see somebody lying on the sidewalk or on a footbridge. People come and go, but nobody stops, nobody takes care of him, nobody takes the time to check if he is still alive, or if he is still breathing. People pass in front of him as if he were a rubbish heap; some of them do not even see him. This is an ordinary story (in our urban society). If someone stops to check or to give assistance, that is extraordinary". In other cases, people feel contempt for those who act in an unusual way. When the Isan man begins to sing in Muang luang, some people appreciate it, but most of them laugh with contempt, looking at him as if he were crazy.
Along with indifference and contempt, a third feeling is shown by the inhabitants of the city: fear. Fear of each other, even when there is no reason to feel it. Sila's short story Khop khun... Krungthep illustrates this irrational feeling well enough. The taxi driver, remembering that a friend of his was attacked one night and that every day he can read about violence in the newspapers, becomes really frightened of his customer. Tall and strong, the customer wears a thick moustache and has a scar on his cheekbone, under the left eye. He holds tight on his knees a black bag that seems very precious. His odd appearance makes the taxi driver really nervous and anxious. The driver tries several times to start up a conversation with the customer, but the latter answers only in a few words. Actually, the customer is afraid of the driver, and the driver of the customer. Throughout the story, they feel more and more anxious, suspicious and frightened, until they arrive at the house where the customer wanted to go. After he leaves the car, both of them feel free and thankful. Thankful towards each other, and thankful towards the city, which is not as bad as they thought. That is why the short story is entitled Khop khun... Krungthep. Fortunately, relations between people in the street are not always that pessimistic! The characters of the short stories sometimes encounter people they like or who give them good feelings. The hero of Khrop khrua klang thanon takes advantage of being stuck in traffic jams to meet people who could be useful to him in his job. Walking around his stopped car, he speaks with other men: "We speak about our problems, we criticise politics, we chat about business or sport. We are like neighbours. (...) I work in the advertising business (...) I sometimes find unexpected customers". Later, the narrator meets a strange guy who is planting banana trees on the central strip of the road.
The guy wants to plant more and more trees to fight pollution. Despite the discouraging feelings shown by the hero of Muang luang, he has an encounter on the bus which changes his state of mind. When he hears the Isan man singing, the narrator thinks at first that he is dreaming. Listening to the folk song, his mind is transported to his village, to his girlfriend. It makes him feel better, forgetting his bad situation in Bangkok. At the end of the story, the narrator gets off the bus, following the singer. He asks him: "Excuse me for asking, but are you crazy?" The singer answers: "No, but I wish I were". The journalist of Thoe yang mi chiwit yu... (Atsiri) meets a young woman who is running in his soi, frightened by the people chasing her. The story takes place during the events of October 1976, and the young woman is carrying some political posters. Although they speak only for a brief moment, the journalist feels very involved in this encounter. When the girl leaves, she gives him her name, which he writes on a piece of paper. Later on, he finds her name on the list of the people killed in the massacre. This encounter symbolises the relation between the people who were involved in the political events and those who did not dare to be. There is a lot of emotion throughout this short story and, despite the sadness, a kind of hope.

Vision of the street according to social class

As seen before, the vision of the street and of the city is quite different depending on whether the heroes are car users, bus users or pedestrians. And of course, the mode of transportation is usually connected to social class.
In the short stories chosen for this paper, three kinds of people are represented: the middle class, in Khrop khrua klang thanon, in which the hero works in advertising and seems quite fashionable, living in a modern style and appreciating urban life; the employees, in Ruang thamada or Muang luang, who definitely do not have the same standard of living and who did not really choose to live in Bangkok but have to for economic reasons; and the very poor, in Thung khra cha ni klai..., who have to struggle for life at every moment, living in a slum, having no job, and feeling bad because their children are not living in good conditions. Even if the narrator of Khrop khrua klang thanon sometimes seems to miss the countryside, he appears very well integrated into urban society. Living in the suburbs, he points out a paradox: "If we were poor, we could live in a slum in the heart of the city, like the high-class people who reside in condominiums (...)". This is actually what is shown in Thung khra cha ni klai...: the poor woman who lives in a hut by the khlong is surrounded by rich buildings and restaurants. The hero of Khrop khrua klang thanon is attached to the signs that prove his social status: the place where he lives, and the car: "Having a car allows us to raise our social position". By contrast, the hero of Muang luang endures his life in Bangkok with great difficulty. He suffers from the transport, the heat, the loneliness. Forced to come to Bangkok to work and live, he always keeps his home province in mind: "If only I could choose! I would not be in this terrible big city". The mother in Thung khra cha ni klai... has an even worse vision of the city. She compares the city to a tiger that tears her life to pieces. She too came from the countryside, with her husband. He promised her that they would have a better life, jobs and money. But then he left her with their two children and disappeared into the big city.
And her life is worse than before, because of the city and the hard urban life.

Conflict between tradition and modernity

The opposition between tradition and modernity can be seen in many short stories. Many themes are linked to this conflict, especially nostalgia for the provinces. For most of the characters of the short stories (and thus, for the authors), tradition as found in the villages is often idealised in opposition to the bad effects of the modern city. The narrator of Muang luang feels really nostalgic listening to the Isan folk song: "Yes, that's it! Exactly! Behind my house, there were some palm trees. I played the flute, I was an applauded ram wong singer in the village". Thinking about his girlfriend, he dreams: "To take my girlfriend along in a boat, to go fishing together. It is a dream I have, but it is only a dream". But still, he keeps a sense of reality, saying: "I would like to go back home, to the province. I would like it so much, but what could I do there? There are no jobs at all, except fishing or collecting shellfish. Not enough for living expenses. I could not stand a labourer's job in a rice-processing factory". Even the narrator of Khrop khrua klang thanon, who seems to like his urban life, thinks about the traditional way of life: "I know that after we, human beings, have destroyed the nature all around us, our own inner nature has been consumed by urban life, pollution, traffic jams... Family life, which was a hymn to happiness in its rhythm and elements, fell into incoherence and instability". The characters of Ruang thamada, living in an old house in a soi, re-create village life in their home. The old woman, refusing modern medicine in hospital, calls a traditional healer to cure her daughter. Although she lives in Bangkok, not far from hospitals, she reacts as if she were still living in a village. A traditional healer, an astrologer and a masseuse try to cure the cancer, but without success.
The narrator tries several times to persuade the mother to take her daughter to hospital, but she refuses, arguing that the modern methods are not effective and are too expensive. The narrator does not dare to insist, feeling that if the girl dies in hospital, the mother will blame him. The narrator is really representative of the young employee class in urban society. He is always hesitating between tradition and modernity, solidarity and indifference, commitment to traditional values and fascination with the Westernised city. These oppositions are symbolised by the three conditions that determined his choice of a room: "I wanted a room that was cheap, near civilisation and far from the crowd".

Conclusion

As seen in this paper, short stories are very rich material for studying the city and urban society. I have tried here to present only a few themes connected to the street as a social area, but, of course, many other themes can be analysed, especially how traditional ways of life are re-created in the urban environment and how urban specificities are taken back to the villages. Although the four authors of my corpus have distinct visions of the city and of urban culture, they all point out the changes in Thai society and the transformations of traditional values in contact with the modernisation and Westernisation of the city. Prospects for research on literature and the city are obviously wide and numerous. Since the city is in perpetual change and development, we can imagine that literature will follow the same path. How the financial crisis (which is also changing the city) will be perceived and depicted by Thai writers should be a very interesting point to study. Loved and hated, Bangkok concerns everybody: inhabitants, writers, researchers and even tourists... In his dictionary, Win Liaw-warin writes: "If Krungthep were a cocktail, it would be composed of: 10% natural sweetness; 40% synthetic sweetness; 30% leaded petrol; 20% dirty sediment".
Let us hope that the natural sweetness will grow.

Selected bibliography

Corpus
Atsiri Thammachot
halid: 01755342 | lang: en | domains: [chim] | year: 2018 | url: https://hal.sorbonne-universite.fr/hal-01755342/file/A%20tribute%20to%20Professor%20Juan%20Faus%20Pay%C3%A1.pdf
Professor Juan Faus Payá
Miguel Julve (email: [email protected]), Francesc Lloret, Michel Verdaguer
Journal of Coordination Chemistry, special issue celebrating his retirement from the University of Valencia.

The three authors have very close, friendly and long-lasting relations with Professor Faus, even in different contexts. Two of us (FL, MJ) received our chemical education from him and became his close co-workers, whereas the third (MV) developed a tight Spanish-French collaboration between the team of Professor Faus in Valencia and the laboratory of Olivier Kahn at Orsay in the 1980s, and then with his group in Paris in the 1990s. Professor Faus is a scholar with a wide and deep culture. Even if his well-

The teacher

Professor Faus was appointed as an Assistant Professor in 1968 at the Department of Inorganic Chemistry of the University of Valencia. He became Full Professor at the same institution in 1980. He held this position until his retirement in 2017, after almost 50 years of continuous dedication to the University. Amazingly, he decided to continue to work in the old Inorganic Chemistry building of the university, even when the new and more functional buildings of the Institute of Molecular Chemistry (ICMol) became available. His passion for teaching induced him to stay closer to the students by remaining in the old building. Professor Faus was, and remains, a great teacher. He taught many generations of students. His lectures, perfectly clear, organized and documented, and the care he brought to helping the students are unanimously appreciated and recognized by his students and colleagues. In this respect, M. Julve, one of his former students, can testify that the students were so delighted with Professor Faus's classes that they asked him to extend them after the end of the academic year, which he did with pleasure. He was accustomed to receiving the students between his classes to answer their questions kindly.
He had a special ability to listen, to instil confidence, to push everyone to reach her/his full potential, and to recognize in each her/his individual skills. His present co-workers are indebted to him for such confidence and permanent support.

Figure 1. Juan Faus, going for lunch at the Albufera after a working session in the university. From left to right: J. Faus, M. Julve, F. Lloret, and M. Verdaguer.

The researcher and group leader

Prof. Faus and his former PhD student, Prof. José María Moratal, created the research group of Coordination Chemistry at the Department of Inorganic Chemistry of the University of Valencia in 1976. Before this date, his first publication had appeared in "Quimica Analitica". It concerned analytical chemistry, namely the identification of alkali metal cations by paper chromatography using the reaction of dihydrogen sulfide with alkali metal violurates. It is cosigned by the then Head of Inorganic Chemistry in Valencia. In the mid-1970s, Spain experienced a sudden burst of changes and hopes in many aspects of social life. It is significant that the new group was born at this time. Juan Faus and his first students (Moratal, Lloret, Julve, and Garcia-España) first focused on the determination of stability constants of metal complexes in solution with violurate (a very strong field ligand), catechol, porphyrins, and Schiff base ligands. The techniques used were potentiometry and spectrophotometry. A few years later, a Spanish governmental program allowed talented young Spanish scientists to go abroad as postdoctoral fellows. With the support of Professor Faus, Miguel Julve was the first to seize this opportunity and he spent two years in France in the laboratory of Prof. Olivier Kahn at Orsay, where he worked closely with M. Verdaguer. Others (Moratal, Garcia-España) visited the groups of Bertini and Paoletti in Florence (toward bioinorganic chemistry and supramolecular chemistry) and found their own way.
This national and European opening-up deeply transformed Faus's group and, beyond it, the Inorganic Chemistry Department. It brought new blood, modernized and diversified the themes, and opened the way to publications in European and American journals. From then on, the programmed postdoctoral stays of members of Faus's group in France (Orsay) (M. Julve, then F. Lloret, working with Y. Journaux and, later, J. A. Real working with J. Zarembowitch on spin cross-over) and their reincorporation into the mother group in Valencia allowed the group to build a solid background in solid-state coordination chemistry, in structural studies and in magnetism. Furthermore, some of the systems investigated by Professor Faus underwent spin changes [complexes of Co(II) and Fe(II) with violurate and its alkyl derivatives], which led him to become interested in Molecular Magnetism. New equipment, a variable-temperature Faraday balance, made it possible to pursue this new research avenue at home, always keeping and reinforcing the collaboration with foreign teams. Over the years, the scientific partnership turned into reciprocal esteem and friendship. The group is located nowadays in the premises of the Institute of Molecular Science of Valencia and is constituted by two Full Professors (Miguel Julve and Francesc Lloret), one Assistant Professor (Isabel Castro), a lecturer (Salah Eddine-Stiriba), two permanent researchers (Joan Cano and Rafael Ruiz), four hired researchers [Emilio Pardo and F. José Martínez (Ramón y Cajal positions), Marta Viciano (Juan de la Cierva position), and Luminita Toma (Marie Skłodowska-Curie, H2020, position)] and one technician (F. Nicolás Moliner). Professor Faus supervised a good number of PhD theses.
Most of his doctoral students occupy permanent positions in public institutions, beyond the coordination chemistry group itself: Enrique García-España and Miguel Mollar are Full Professors at the University of Valencia and the Universidad Politécnica de Valencia, respectively; Hermás Jiménez is an Assistant Professor at the University of Valencia; Maria Luisa Calatayud is Full Professor in a public secondary school; abroad, Raúl Chiozzone, Ricardo González, and Alicia Cuevas are Assistant Professors at the Universidad de la República, Montevideo, Uruguay. Professor Faus promoted international collaborations of his team with national and foreign institutions. The international exchange was especially strong and fruitful in the framework of the European Union, where the Coordination Chemistry group was involved as a partner in several bilateral Spanish-French "Picasso" integrated actions (1990, 1991, 1993) with teams in France (Paris and Orsay) and in Spain (Valencia and Barcelona), and then in European networks. This collaboration, led at the beginning by Professors Faus and M. Verdaguer and then by their co-workers, followed the initial interest of Professor Faus in combined solution and solid-state studies and was dedicated to metal complexes with oxalate, oximes, functionalized oxamate/oxamidate derivatives, 2,2′-bipyrimidine, squarate, and croconate as ligands. The choice of these ligands was mainly based on both their great versatility as bridges between transition metals and their potential ability to mediate exchange interactions between the bridged paramagnetic centers. This management and scientific strategy had many positive consequences: (i) beautiful scientific results (about 150 articles) on original chemical systems published in prestigious journals (J. Am. Chem. Soc., Angewandte Chemie, Inorg. Chem., Dalton Transactions, etc.); (ii) a long-term exchange program for training of doctoral and postdoctoral fellows in the 1990s and later (I.
Castro is now Assistant Professor in Valencia and R. Lescouëzec is Full Professor in Paris); (iii) close association of the Coordination Chemistry group with the European networks launched in the framework of the 4th-6th PCRD. Other very fruitful international collaborations were established later (from 1999) on another of Professor Faus's favorite topics, the chemistry and magnetic properties of rhenium(IV), a paramagnetic and highly anisotropic metal ion. The collaboration involved Prof. Carlos Kremer and his co-workers from the Universidad de la República (Uruguay) and also the Italian group of Professors G. De Munno and D. Armentano. One can state that Professor Faus, after the studies first published by the Polish and Russian schools on halogenorhenates(IV), established in a very creative way a new coordination chemistry of Re(IV) to exploit the remarkable magnetic anisotropy of this ion. Three jointly supervised PhD theses issued from the Uruguayan collaboration on Re(IV), together with 37 published articles with Professor Faus as co-author. The intensity and quality of this research, up to a recent Nature Communications paper, pushed the Valencia group to the first world rank for research on Re(IV) coordination chemistry. These achievements were summarized in a recent review. Professor Faus was distinguished with the Gascó Oliag Medal on 14 November 2009 by the Colegio de Químicos de la Comunitat Valenciana for his brilliant teaching and high-level research, as well as his continuous collaboration with the Colegio de Químicos.

The man and the friend

We would like to conclude this tribute by evoking more personal memories. Juan Faus is a quiet, modest, and friendly man, always ready, after work, to enjoy life with friends (Figure 2). We cannot remember him in the university without his clean white lab coat and his friendly teasing smile. Juan Faus is a man who is always ready to face adversity and find the right solutions.
He enjoys life, likes his town (Valencia), its paella, and quiet boat cruises on the Albufera. He liked to share his lunch in the university restaurant with colleagues and friends and, open-minded, he was not the last to participate in vivid discussions where the world is passionately rebuilt every day on firmer and more rational bases. We are sure that all his collaborators and colleagues join us in thanking Prof. Faus for his invaluable contributions to the development of Coordination Chemistry in Spain and worldwide. We wish him all the best for his retirement. To conclude, we are convinced that this special issue will contribute to the recognition of his scientific legacy and his high-quality work which exerts a strong influence not only on the past generations of chemists, but also on the present and future ones.
halid: 01459079 | lang: en | domains: [chim.orga, chim.mate, sdv, sdv.can, sdv.bio, sdv.sp.pg] | year: 2016 | url: https://hal.science/hal-01459079/file/Manuscript%20Liu%20PDF.pdf
A Fluorinated Bola-Amphiphilic Dendrimer for On-Demand Delivery of siRNA, via Specific Response to Reactive Oxygen Species
Xiaoxuan Liu, Yang Wang, Chao Chen, Aura Tintaru, Yu Cao, Juan Liu, Fabio Ziarelli, Jingjie Tang, Hongbo Guo, Roseline Rosas, Suzanne Giorgio, Laurence Charles, Palma Rocchi, Ling Peng (email: [email protected])
Keywords: bola-amphiphile, stimuli-responsive, siRNA delivery, gene therapy, 19F-NMR

INTRODUCTION

Molecular science has revolutionized the central paradigm of drug delivery, especially for establishing smart or intelligent materials to deliver therapeutic agents on-demand. [1][2][3] In particular, many of the advances in the design of delivery systems for anticancer therapeutics have been inspired by a growing understanding of tumor microenvironments and the exploitation of specific characteristics of cancer cells and subtle differences in tumor lesions. 1,2,4 Tumor microenvironments are characterized by a variety of atypical features, such as abnormal tumor vasculature, absence of lymphatic drainage, altered redox environment, hypoxia and a lower pH gradient. 5 It is of note that an intrinsic factor specifically linked to cancer cells is the high level of reactive oxygen species (ROS), as ROS are constantly generated within cancer cells because of the highly stressful environment caused by rapid and uncontrollable cancer cell proliferation. 6,7 Despite the obvious interest in harnessing this factor for specific targeting in cancer treatment, few strategies have been explored to develop delivery systems that are capable of ROS-triggered controlled release, in particular for small interfering RNA (siRNA) therapeutics. 8,9 Therapeutics based on siRNA provide an enormous opportunity for cancer treatment by virtue of the ability of siRNA to specifically and efficiently turn off the expression of genes associated with cancer development and drug resistance.
[10][11][12] However, the main challenge facing siRNA therapeutics is their safe and efficient delivery. 13,14 This is because siRNA is too negatively charged to spontaneously cross biomembranes; at the same time, siRNA is vulnerable to nuclease attack. Consequently, safe and efficient on-demand delivery vectors, which can prevent siRNA degradation and convey functional siRNA into cancer cells, are in high demand. During the past years, various natural and synthetic delivery systems have been developed for siRNA delivery. Among these, the most effective are viral vectors. However, because of concerns about the immunogenicity and safety of viral delivery systems, there is an urgent demand for developing alternative nonviral vectors. 13,14 The two main classes of nonviral vectors are based on lipids and polymers. [14][15][16][17] Dendrimers, a special type of synthetic polymer, have recently emerged as a promising delivery platform for siRNA therapeutics by virtue of their well-defined and precisely controlled molecular structures as well as the unique multivalent cooperativity confined within a nanoscale volume. 18,19 In particular, amphiphilic dendrimers with judiciously tailored hydrophilic and hydrophobic components are able to benefit from the combined advantages of lipid and dendrimer vectors for effective siRNA delivery. [20][21][22] Here, we report an innovative bola-amphiphilic dendrimer, bola4A, for on-demand delivery of siRNA in response to the high level of ROS in cancer cells (Figure 1). This dendrimer features a bola-lipid/dendrimer hybrid harboring a ROS-sensitive thioacetal group at the central hydrophobic core and two poly(amidoamine) (PAMAM) dendrons as the polar terminals (Figure 1A).
The PAMAM dendrons, which bear amine terminals and are positively charged at physiological pH, have been devised to interact with the negatively charged siRNA via electrostatic interactions, 23 while the hydrophobic "bola-lipid" core scaffold is designed to mimic the robust and strong assembly properties of the bola-amphiphiles observed in extremophilic archaea. 24 The thioacetal entity at the bola-lipid core is expected to decompose upon exposure to the high level of ROS in cancer cells, 8,9 thereby promoting siRNA unpacking for potent gene silencing within cancer cells (Figure 1B). Also of note is the presence of fluorine tags within this vector, which allow tracking of the ROS-responsive delivery process by 19F-NMR. 25 Last but not least, this distinctive and ingenious bola-amphiphilic vector, by combining the advantages of both lipid and dendrimer vectors, will provide a new perspective on the design of ROS-responsive vectors for targeted siRNA delivery.

RESULTS AND DISCUSSION

Dendrimer bola4A is readily synthesized via "click" chemistry. We synthesized bola4A according to the plan illustrated in Scheme 1 (and in Scheme S1 in the supplementary materials). The bola-core 1 was prepared via the condensation of 3,5-difluorobenzaldehyde with the freshly prepared corresponding azido-bearing thiol in the presence of the Lewis acid catalyst boron trifluoride etherate. 26,27 Using the Cu-catalyzed Huisgen "click" reaction, 20,21,28 we successfully conjugated the bola-core 1 with the alkynyl-terminated dendron 2 to yield 3. The bola-dendrimer 3 was subsequently subjected to amidation with ethylenediamine to provide the desired bola4A. After purification via dialysis, bola4A was obtained in excellent purity with a yield exceeding 90%.

Bola4A responds to ROS. With the bola-dendrimer bola4A in hand, we investigated its response to ROS under simulated conditions, that is, in the presence of H2O2 (Figure 2).
We incubated bola4A with H2O2 and found that the sharp 19F-NMR signal originating from the intrinsic fluorine atoms of bola4A (δ = -110.5 ppm) disappeared rapidly, while new 19F-NMR peaks associated with the degradation products of bola4A progressively appeared (Figure 2A and Figure S1). This finding implies that bola4A was decomposed following exposure to H2O2. Subjecting the same samples to electrospray ionization mass spectrometry (ESI-MS) further confirmed the degradation of bola4A upon treatment with H2O2 (Figure 2B). On the one hand, signals assigned to bola4A in multiple protonation states (from 3+ to 6+) were observed to decrease and disappear as a function of the H2O2 exposure time (Figure 2B). On the other hand, new signals detected during the H2O2 treatment indicated the decomposition of bola4A and the formation of degradation products (Figure 2B). Based on the elemental compositions derived from the accurate mass measurements (Table S1), structures could be proposed for these degradation products (Figure 2C), highlighting the effective disintegration of the thioacetal function after exposure to H2O2. Collectively, these results demonstrate that bola4A is readily decomposed under ROS conditions, and hence possesses favorable properties for potential controlled release in response to ROS.

Bola4A forms nanoparticles with siRNA, protects siRNA and promotes cellular uptake. For siRNA delivery, it is important that the delivery vector is able to bind and condense the siRNA into nanosized complexes and protect it from degradation before promoting its cellular uptake. With this in mind, we examined the formation of siRNA/bola4A complexes using gel shift analysis. As shown in Figure 3A, bola4A was able to form stable complexes with siRNA and completely retard the migration of siRNA on the gel at an N/P ratio ≥ 2.5.
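The N/P ratio used throughout is the molar ratio of protonatable amine nitrogens (N) on the vector to phosphate groups (P) on the siRNA. As a hedged illustration only, the calculation can be sketched as follows; the amine count per bola4A molecule and the phosphate count per siRNA duplex below are assumed values, not figures from this work:

```python
# Hedged sketch of the N/P charge-ratio calculation used in the gel-shift
# assay. The amine count per bola4A molecule and the phosphate count per
# siRNA duplex are illustrative assumptions, not values from this work.

def n_to_p_ratio(mol_vector, amines_per_vector, mol_sirna, phosphates_per_duplex=40):
    """Molar ratio of protonatable amine nitrogens (N) to siRNA phosphates (P)."""
    n = mol_vector * amines_per_vector
    p = mol_sirna * phosphates_per_duplex
    return n / p

# Hypothetical mixture: 16 assumed amines per bola4A (two dendrons),
# 25 pmol vector per 1 pmol of a ~21-bp siRNA duplex (~40 phosphates)
print(round(n_to_p_ratio(2.5e-11, 16, 1.0e-12), 6))  # -> 10.0, i.e. N/P = 10
```

In practice the vector excess (and hence N/P) is tuned until the gel shift shows complete retardation, as observed above at N/P ≥ 2.5.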
Further results from transmission electron microscopy (TEM) and scanning electron microscopy (SEM) revealed that the resulting siRNA/bola4A complexes formed uniform, compact and spherical nanoparticles (Figures 3B and 3C). Additional dynamic light scattering (DLS) analysis confirmed that the siRNA/bola4A nanoparticles were well dispersed, with an average size of 50 nm (Figure 3D). The surface charge of these nanoparticles was characterized by a ζ-potential of +28 mV, implying stable colloidal nanoparticles. Indeed, the so-formed siRNA/bola4A nanoparticles effectively protected the siRNA from degradation by the enzyme RNase (Figure 3E), further indicating the formation of stable siRNA/bola4A complexes. Finally, these siRNA/bola4A nanoparticles were rapidly and efficiently internalized by cells (Figure 3F), an advantageous prerequisite for effective siRNA delivery.

Bola4A/siRNA complexes are responsive to ROS. Importantly, the siRNA/bola4A nanoparticles were readily disassembled in response to the high ROS levels in cancer cells. We first studied the siRNA/bola4A nanoparticles in response to H2O2, which simulates ROS conditions. As shown by TEM imaging, the siRNA/bola4A nanoparticles collapsed upon exposure to H2O2, accompanied by a significant change in morphology, suggesting a ROS-triggered disassembly of the siRNA/bola4A complexes (Figure 4A and Figure S2). In line with the TEM imaging, further results from DLS analysis (Figure S3) also indicated the destruction of the siRNA/bola4A nanoparticles upon treatment with H2O2. Moreover, we examined the ROS-triggered decomposition of bola4A once the siRNA/bola4A complexes were internalized into ROS-abundant human prostate cancer PC-3 cells. To do this, we used 19F high-resolution magic angle spinning (HRMAS) NMR, a nondestructive method for in situ analysis of 19F-containing compounds within intact cells.
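The hydrodynamic size that DLS reports is derived from the measured diffusion coefficient via the Stokes-Einstein relation, d_H = k_B T / (3 π η D). A minimal sketch follows; the diffusion coefficient used is an illustrative value chosen to land near the ~50 nm average reported above, not measured data from this work:

```python
import math

# Hedged sketch: converting a translational diffusion coefficient measured
# by DLS into a hydrodynamic diameter via the Stokes-Einstein relation.
# The diffusion coefficient below is illustrative, not data from this work.

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(diff_coeff, temp_k=298.15, viscosity=0.89e-3):
    """Hydrodynamic diameter (m) from a diffusion coefficient (m^2/s)
    in water at temp_k (K) with dynamic viscosity (Pa*s)."""
    return KB * temp_k / (3 * math.pi * viscosity * diff_coeff)

d_nm = hydrodynamic_diameter(9.8e-12) * 1e9
print(round(d_nm))  # ~50 nm, in line with the reported average size
```

Note that the relation is inverse: the smaller fragments produced upon disassembly diffuse faster and therefore report a smaller apparent size, which is what the DLS data in Figure S3 track.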
19F HRMAS NMR is particularly powerful and sensitive for investigating fluorinated bola4A in cells, since there are no endogenous fluorinated compounds in cells and hence no 19F-NMR signal in the control background. 25,29,30 As shown in Figure 4B, only weak 19F-NMR signals were observed in ROS-abundant prostate cancer PC-3 cells, although FACS flow cytometry demonstrated that the siRNA/bola4A complexes were rapidly and effectively taken up by PC-3 cells (Figure 3F). In contrast, a sharp, strong and single 19F-NMR signal was detected in ROS-depleted PC-3 cells that were pretreated with the antioxidant N-acetylcysteine (NAC) (Figure 4C). It is of note that treating PC-3 cells with the antioxidant NAC significantly downregulated the ROS level (Figure 5A). In addition, the chemical shift of the 19F-NMR signal detected in the NAC-treated PC-3 cells was a perfect match with that of bola4A, indicating that bola4A remained intact in ROS-poor PC-3 cells pretreated with the antioxidant NAC. Similar results were also observed with ROS-poor Chinese hamster ovary (CHO) cells (Figure 4D). Collectively, these results indicate that bola4A was indeed decomposed or metabolized in ROS-rich PC-3 cells, but not in ROS-poor CHO or antioxidant-treated low-ROS PC-3 cells. Thus, bola4A demonstrates effective ROS-responsiveness in cancer cells.

Bola4A-mediated specific ROS-responsive delivery of siRNA and gene silencing. Encouraged by the ROS-responsive properties of our bola4A dendrimer, we evaluated its ability to deliver siRNA and inhibit gene expression in two high-ROS cell lines, human prostate cancer PC-3 cells and breast cancer MCF-7 cells, and three low-ROS cell lines, human embryonic kidney (HEK) cells, Chinese hamster ovary (CHO) cells and antioxidant-pretreated PC-3 cells (Figure 5).
The siRNA molecules used in this study were devised to target either heat shock protein 27 (Hsp27) 31,32 or translationally controlled tumor protein (TCTP), 33,34 both of which are actively involved in cancer development and drug resistance. Following bola4A-mediated siRNA delivery, expression of Hsp27 and TCTP was considerably suppressed in the two high-ROS cancer cell lines PC-3 (Figure 5B) and MCF-7 (Figure 5C), whereas no noticeable gene silencing was observed in the low-ROS HEK (Figure 5D) and CHO cells (Figure 5E). This can be reasonably ascribed to the inherently higher level of ROS in cancer cells, which leads to the decomposition of bola4A and consequently the disassembly of the siRNA/bola4A complexes, thereby enhancing siRNA release and gene silencing. We further demonstrated that down-regulation of the ROS level in PC-3 cells by pretreatment with the antioxidant NAC (Figure 5A) led to a dramatic decrease in gene silencing (Figure 5F). This provides additional evidence that gene silencing is specifically triggered by ROS. Together, our results indicate that bola4A is able to mediate specific and efficient siRNA delivery and gene silencing in response to a ROS-rich environment, in perfect agreement with our design concept for bola4A as a ROS-responsive vector.

Bola4A benefits from the integrated delivery advantages of lipid and dendrimer vectors. In addition to the ROS-responsive feature of bola4A, we wanted to forge a strong and stable vector based on our bola-amphiphile by combining the benefits of both lipid and PAMAM dendrimer vectors for siRNA delivery. 20,21 Our bola4A is a lipid/dendrimer hybrid bearing a hydrophobic chain entity and two hydrophilic PAMAM dendrons (Figure 1A). When we compared the gene silencing activity of bola4A with mono4A (the amphiphilic dendrimer without the bola-lipid core), dendron4A (the dendron entity alone) or the bola-core 1, only bola4A was effective (Figure 6A).
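Comparisons like these are conventionally quantified as percent knockdown of the target relative to a non-targeting (scrambled) control. A minimal sketch, with purely illustrative intensity values rather than data from this work:

```python
# Hedged sketch: expressing gene silencing as percent knockdown of the
# target relative to a non-targeting (scrambled siRNA) control.
# The signal intensities below are illustrative, not measured values.

def percent_knockdown(target_signal, control_signal):
    """Percent reduction of target expression versus the control."""
    return 100.0 * (1.0 - target_signal / control_signal)

# e.g. a normalized Hsp27 signal of 0.25 after treatment vs 1.00 in control
print(percent_knockdown(0.25, 1.00))  # -> 75.0, i.e. 75% silencing
```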
Thus, the unique bola-amphiphilic architecture did indeed endow bola4A with the ability to deliver siRNA effectively. It is worth mentioning that the hydrophobic bola-core in bola4A is shorter than the native phospholipid bilayer. We designed bola4A with this unique molecular architecture to avoid insertion into the cell membrane, thus obviating deleterious effects on cell membrane integrity and cell viability. The biocompatibility of bola4A was confirmed by the absence of serum hemolysis (Figure S4) and of cytotoxicity in both lactate dehydrogenase (LDH) and MTT tests (Figure S5). We further examined bola4A-mediated siRNA delivery and gene silencing in the presence of dioleoylphosphatidylethanolamine (DOPE). DOPE is a fusogenic lipid, which promotes membrane fusion and is frequently used to enhance the delivery efficacy of lipid vectors. Our results showed that gene silencing was significantly increased in the presence of DOPE (Figure 6B), confirming that bola4A is endowed with the delivery characteristics of lipid vectors. We also surmised that bola4A might profit from the "proton sponge effect" 35 of the PAMAM dendron entities for effective nucleic acid delivery. The proton sponge phenomenon occurs for PAMAM dendrimers in acidic environments such as endosomes, and is thought to be important for release of the cargo from endosomes into the cytoplasm. The tertiary amine groups in the interior of the PAMAM dendron are ready to mop up protons within endosomes (Figure S6), leading to an ionic imbalance which results in endosomal lysis and cargo release. We used the proton pump inhibitor bafilomycin A1 to impede endosome acidification, in order to test whether endosomal acidification affects bola4A-mediated siRNA delivery and gene silencing. Indeed, treatment with bafilomycin A1 significantly reduced the bola4A-mediated silencing of Hsp27 expression (Figure 6C), suggesting that acidic endosomes are necessary for effective siRNA release and delivery by bola4A.
This result is consistent with the hypothesis that the PAMAM dendrons act as a proton sponge, and that this property is important for siRNA delivery.

CONCLUSION

In conclusion, we have established a ROS-sensitive bola-amphiphilic dendrimer, bola4A, as an innovative on-demand vector for stimulus-responsive siRNA delivery and gene silencing. The so-devised bola4A is also able to benefit from the combined advantages of lipid and dendrimer vectors. The distinctive ROS-sensitive thioacetal motif within bola4A allows efficient disassembly of the siRNA/bola4A complexes under ROS-rich conditions for effective siRNA delivery and potent gene silencing in cancer cells. In addition, the presence of fluorine atoms within this vector makes it possible to study the ROS-responsive delivery process by 19F-NMR. Collectively, our results show that this ROS-sensitive bola-amphiphilic dendrimer offers a unique opportunity to achieve controllable release of siRNA for effective gene silencing in cancer cells by capitalizing on the high level of intracellular ROS. Our study provides a new perspective on the design of tailor-made stimulus-responsive materials for on-demand drug delivery.

ASSOCIATED CONTENT

Supporting information: supplementary figures, materials and methods, dendrimer synthesis and characterization, as well as all experimental protocols for NMR, MS, TEM, DLS, cell uptake, siRNA delivery, gene silencing, etc. This information is available free of charge via the Internet.

Figure legends:

Scheme 1: Synthesis of the bola-amphiphilic dendrimer bola4A.

Figure 1: The bola-amphiphilic dendrimer bola4A, designed for ROS-responsive siRNA delivery. (A) Chemical structure of bola4A. (B) Schematic representation of bola4A for the ROS-triggered delivery of siRNA and consequential gene silencing.
Bola4A dendrimers form complexes with siRNA molecules, which can be internalized by the cancer cell before releasing siRNA in response to ROS, leading to effective gene silencing.

Figure 2: Study of bola4A decomposition upon treatment with 200 mM H2O2 (simulated ROS conditions) for 0, 2 and 24 h, using (A) 19F-NMR and (B) ESI-MS analysis. (C) Proposed structures, based on accurate mass measurements (Table S1), of degradation products of bola4A formed upon treatment with H2O2.

Figure 3: Bola4A is able to form nanocomplexes with siRNA, protect siRNA from degradation and promote cellular uptake. (A) Agarose gel migration of siRNA (200 ng per well) in the presence of bola4A dendrimer at N/P charge ratios of 1/5 - 10/1. (B) TEM image of the siRNA/bola4A complexes using 5 ng/µL siRNA and bola4A at an N/P ratio of 10. (C) SEM image of the siRNA/bola4A complexes using 5 ng/µL siRNA and bola4A at an N/P ratio of 10. (D) Size distribution of the siRNA/bola4A complexes (at an N/P ratio of 10 with 1 µM siRNA) determined using DLS. (E) Protection of siRNA by bola4A against enzymatic degradation. Compared to the naked siRNA (200 ng per well), which was degraded within 5 min in the presence of RNase, siRNA complexed with bola4A at an N/P ratio of 10 was resistant to RNase and remained stable even after 1 h of incubation. (F) Uptake of Dy647-labeled Hsp27 siRNA/bola4A complexes (at an N/P ratio of 10 with 50 nM siRNA) by human prostate cancer PC-3 cells, evaluated using flow cytometry. Experiments were carried out in triplicate.

Figure 4: (A) TEM images of siRNA/bola4A complexes (N/P=10) before and after incubation with H2O2 (1.06 M) for 2, 4 and 24 h at 37 ºC. Scale bars indicate 200 nm.
19F HRMAS NMR recording of bola4A at different time points (0, 1, 2 and 4 h) in (B) ROS-rich normal prostate cancer PC-3 cells, (C) ROS-poor PC-3 cells pretreated with the antioxidant N-acetylcysteine (NAC) and (D) ROS-poor Chinese hamster ovary (CHO) cells after treatment with the siRNA/bola4A complexes.

Figure 5: Bola4A-mediated specific and efficient ROS-responsive siRNA delivery and gene silencing. (A) ROS levels in Chinese hamster ovary (CHO) cells, human embryonic kidney (HEK) cells, breast cancer MCF-7 cells, prostate cancer PC-3 cells, and PC-3 cells pretreated with the antioxidant N-acetylcysteine (NAC) (10 mM), quantified using CellROX® orange reagent by flow cytometry. Bola4A-mediated siRNA delivery and gene silencing in ROS-abundant (B) PC-3 cells and (C) MCF-7 cells, as well as in ROS-poor (D) HEK cells, (E) CHO cells and (F) PC-3 cells pretreated with NAC (50 nM siRNA at N/P=10). SiRNAs targeting heat shock protein 27 (Hsp27) and translationally controlled tumor protein (TCTP) were used.

Figure 6: Bola4A-mediated siRNA delivery benefits from both the distinctive bola-amphiphilic structure and the delivery features of lipid and dendrimer vectors. (A) Compared to bola4A, neither the amphiphilic dendrimer mono4A without the bola-lipid core, nor the dendron entity dendron4A, nor the bola-core 1 led to any gene silencing (50 nM siRNA at N/P=10). (B) Dioleoylphosphatidylethanolamine (DOPE) enhanced the bola4A-mediated siRNA delivery and gene silencing (20 nM siRNA at N/P=10). (C) Bafilomycin A1 decreased the bola4A-mediated gene silencing (50 nM siRNA at N/P=10). PC-3 cells and Hsp27 siRNA were used.
ACKNOWLEDGEMENTS

Financial support from La Ligue Nationale Contre le Cancer (LP), Association pour la Recherche sur les Tumeurs de la Prostate (LP, XL), Association Française contre les Myopathies (XL), Fondation pour la Recherche Médicale (YC), the international ERA-Net EURONANOMED European Research project "Target4Cancer" (LP), China Scholarship Council (YW, CC, JL, JT), PACA Canceropôle, INCa, CNRS and INSERM is gratefully acknowledged. We thank Serge Netische and Damien Chaudanson for SEM experiments,

AUTHOR INFORMATION

LP conceived the project. XL, YW, LP designed experiments, YW, CC, YC and JT synthesized the dendrimer, AT, FZ, RR and LC executed MS and NMR analysis, CC, JT, SG, HG performed TEM and DLS experiments, XL carried out all biological experiments with the help of PR and JL. XL and LP wrote the manuscript with comments from all the other authors.

REFERENCES:
01755522
en
[ "sdu.stu.ag" ]
2024/03/05 22:32:10
2018
https://brgm.hal.science/hal-01755522/file/CBA_field_validation.pdf
Field validation of mineral prospectivity approaches, a first test of the CBA method
Bertrand Guillaume 1,2, Gutierrez Thomas 1, Tourlière Bruno 1, Melleton Jérémie 1, Gloaguen Eric 1,2 and Cheval-Garabédian Florent 2
1 - BRGM, Orléans, France
2 - ISTO, UMR7327, Orléans, France
Numerous methods of mineral prospectivity mapping have been developed during the last decades, and it is often difficult to evaluate their reliability and adequacy. Some objective a posteriori approaches allow calculating their performance, such as, for instance, AUC values on ROC curves. However, we believe that field control is a necessary step, especially in the context of mineral targeting in greenfield exploration. Mineral prospectivity mapping generally relies on unequivocal associations between discrete data (e.g., known deposits) and polygons (e.g., lithology, geophysics, geochemistry, etc.). Consequently, the quality of results obtained from 'classical' methods (e.g., Weights of Evidence, Fuzzy Logic, Logistic Regression, Neural Networks, etc.) strongly depends on the accuracy of the input data (location of points and contours of polygons), which, in many cases, may be questionable. To address this issue of geographic inaccuracy of map elements, BRGM has developed the CBA (Cell Based Association) method. Its base principle is not to rank the polygons in which known deposits are located, but to identify favorable associations neighboring these deposits (i.e., in cells of a regular grid containing them). In order to assess the adequacy of CBA for mineral targeting in early exploration phases, we have carried out a first field validation campaign for Sb in the Vendée region (western France).
107 soil samples have been collected (by auger drilling) in both favorable and unfavorable cells (according to CBA results) and across known Sb-bearing veins. They have been analyzed at ultra-trace level using ICP-MS/ICP-AES (detection range of 0.05 to 10 000 ppm for Sb). In this contribution, we present an overview of the CBA method, its resulting prospectivity map for Sb in Vendée, the sampling and analyzing procedure, the preliminary results we have obtained and the first conclusions they allow.
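The base principle of CBA — describing each cell of a regular grid by the association of map units it contains, then scoring cells by the similarity of their association to those of deposit-bearing cells — can be sketched in a few lines. The toy data, attribute names and the Jaccard scoring below are hypothetical simplifications for illustration, not BRGM's actual implementation.

```python
# Minimal CBA-style sketch (illustrative only): each grid cell is
# summarized by the set of map units (lithology codes, structures,
# geochemical anomalies, ...) it intersects.
cells = {
    (0, 0): frozenset({"granite", "fault", "As_anomaly"}),
    (0, 1): frozenset({"granite", "fault"}),
    (1, 0): frozenset({"schist"}),
    (1, 1): frozenset({"granite", "fault", "As_anomaly"}),
}
deposit_cells = [(0, 0)]  # cells containing known Sb occurrences

# Favorable associations: those observed in deposit-bearing cells
training = {cells[c] for c in deposit_cells}

def favorability(cell_id):
    """Score a cell by the best Jaccard similarity between its
    attribute association and the deposit-cell associations."""
    attrs = cells[cell_id]
    best = 0.0
    for assoc in training:
        union = len(attrs | assoc)
        best = max(best, len(attrs & assoc) / union if union else 0.0)
    return best

scores = {c: favorability(c) for c in cells}
```

Cells sharing the full favorable association score 1, unrelated cells score 0, and partial matches fall in between — exactly the kind of ranking a field campaign can then test in "favorable" and "unfavorable" cells.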
01755560
en
[ "spi.meca" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01755560/file/I2M_ECSSMET_2016_LAEUFFER.pdf
Hortense Laeuffer email: [email protected] Brice Guiot Jean-Christophe Wahl Nicolas Perry email: [email protected] Florian Lavelle Christophe Bois email: [email protected] RELATIONSHIP BETWEEN DAMAGE AND PERMEABILITY IN LINER-LESS COMPOSITE TANKS: EXPERIMENTAL STUDY AT THE LAMINA LEVEL The aim of this study is to provide a relevant description of damage growth and the resultant crack network to predict leaks in liner-less composite vessels. Tensile tests were carried out on three different laminates: [0 2 /90 n /0 2 ], [+45/-45] 2s and [0/+67.5/-67.5] s . Number n varies from 1 to 3 in order to study the effect of ply thickness. Transverse crack and delamination at crack tips were identified with an optical microscope during tensile loading. A length of 100 mm was observed for several loading levels to evaluate statistical effects. Results highlight a preliminary step in the damage scenario with small crack densities before a second step where the crack growth speeds up. In bulk, cross-section examinations showed that no delamination occurred at crack tip in the material of the study (M21 T700). Cross-section examinations were also performed on [+45/-45] 2s and [0/+67.5/-67.5] s layups in order to bypass the issue of free edge effects. Damage state in those layups was shown to be significantly different in the bulk than at the surface. Observations of the damage state in bulk for those layups demonstrated that there is no transverse crack in [+45/-45] 2s specimens subjected to shear strains up to 4%, and that interactions between damage of consecutive plies strongly impact both the damage kinetics and the arrangement of cracks. These elements are fundamental for the assessment of permeability performance, and will be introduced in the predictive model. INTRODUCTION Designing liner-less composite vessels for launch vehicles enables to save both cost and weight. Therefore, this is a core issue for aerospace industry. 
In usual composite vessels, the liner is a metallic or polymer part in charge of the gas barrier function. One challenge when designing a liner-less vessel is to reach the permeability requirement with the composite wall itself. Pristine composite laminates meet the permeability requirement, but as those materials are heterogeneous, damage growth may occur in cases of low thermo-mechanical loads. Transverse cracks and micro-delamination in adjacent plies grow and may connect together (Fig. 1), resulting in a leakage network through the composite wall. On the ply scale, transverse cracks are generated by transverse stress (mode I) and shear stress (mode II). Developing a model that predicts damage densities and arrangement for any laminate requires reliable experimental data for both modes. The aim of this study is to provide a relevant description of damage growth and the resultant network to predict leaks. The first part of this experimental study describes the method for damage characterisation. The second part presents the results obtained on the evolution of damage densities. Finally, the third part is devoted to the morphology of the damage network.
METHOD FOR DAMAGE CHARACTERISATION
Damage description
The meso-damage state of each ply of a laminate can be described by two damage densities, as illustrated in Fig. 2: the crack density ρ, which is the average number of transverse cracks over the observed length L, and the micro-delamination length µ, which is the average length of micro-delamination at each crack tip. The corresponding dimensionless variables are defined by:

the crack rate: ρ̄ = ρ h = (N/L) h ,   (1)
the delamination rate: μ̄ = µ ρ = µ N/L ,   (2)

where h denotes the ply thickness and N the number of cracks observed on the length L. Crack and micro-delamination rates are average variables, so that the damage state is considered homogeneous in each ply.
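As a concrete illustration of Eqs. (1)-(2), the dimensionless damage variables can be computed directly from a set of measurements; the crack positions and micro-delamination lengths below are made-up illustrative values, not data from the study.

```python
# Illustrative computation of the damage variables of Eqs. (1)-(2)
# from hypothetical measurements on a 100 mm observed length.
L = 100.0          # observed length, mm
h = 0.26           # ply thickness, mm
crack_positions = [5.0, 18.0, 31.0, 47.0, 60.0, 74.0, 88.0]   # mm
delam_lengths   = [0.02, 0.03, 0.00, 0.05, 0.01, 0.02, 0.04]  # mm per crack tip

N = len(crack_positions)
rho = N / L                     # crack density, cracks/mm
rho_bar = rho * h               # dimensionless crack rate, Eq. (1)
mu = sum(delam_lengths) / N     # average micro-delamination length, mm
mu_bar = mu * N / L             # dimensionless delamination rate, Eq. (2)
```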
When used as damage variables in a model, the damage densities define the crack pattern, which is commonly considered as periodic [START_REF] Ladevèze | Relationships between 'micro' and 'meso' mechanics of laminated composites[END_REF]. Experimental values and evolution of damage densities can be obtained by optical microscopy [START_REF] Huchette | Sur la complémentarité des approches expérimentales et numériques pour la modélisation des mécanismes d'endommagement des composites stratifiés[END_REF][START_REF] Malenfant | Étude de l'influence de l'endommagement sur la perméabilité des matériaux composites, application à la réalisation d'un réservoir cryogénique sans liner[END_REF][START_REF] Bois | A multiscale damage and crack opening model for the prediction of flow path in laminated composite[END_REF] or by X-ray tomography as well.
Instrumentation and method
Transverse cracks are generated by transverse stress (mode I) and shear stress (mode II). This requires testing different laminates in order to characterise damage growth in the ply in both modes or in mixed mode I + II. In this study, three different laminates were tested: [0 2 /90 n /0 2 ], [+45/-45] 2s and [0/+67.5/-67.5] s . The effect of ply thickness is also studied by making number n vary from 1 to 3. In the present work, the assessment of the damage densities was carried out by optical microscopy for several loading levels. Tensile tests were performed on specimens polished on one edge. Transverse cracks and delamination at crack tips were identified with a travelling optical microscope (Fig. 3). The observation area was chosen quite large, i.e. about 100 mm, in order to evaluate statistical effects. The material is carbon fibre and thermoset matrix M21 T700. Plate samples were manufactured by Automated Fibre Placement (AFP). The thickness of the elementary layer is 0.26 mm. Several layers of the same orientation can be stacked together in order to obtain a thicker ply, e.g.
in a [0 2 /90 3 /0 2 ] laminate the thickness of the 90° ply is 0.78 mm. A specific protocol was applied in order to bypass the issue of free edge effects. Observation of the surface under loading was combined with cross-section examination: polishing of the edge was performed after loading in order to remove from 15 µm to 2 mm of the edge surface and observe the damage state in bulk. After polishing, the sample was loaded to a lower level so that damage did not propagate but cracks opened and became easier to distinguish. Those steps were repeated for increasing maximum loads to obtain the evolution laws of damage in bulk. Besides crack growth characterisation, performing several cross-section examinations through the width of a specimen also allows the arrangement of cracks to be studied in three dimensions.
EVOLUTION OF DAMAGE DENSITIES
Edge effects
Cross-section examinations were carried out on damaged tensile test specimens of [0 2 /90 n /0 2 ] and [+45/-45] 2s layups. Measurements were performed on representative areas about 50 mm long. Fig. 4 shows a transverse crack and the associated micro-delamination at crack tips on the edge of the sample before and after removing 15 microns by a polishing procedure. 15 microns under the edge surface, no visible delamination remains. Fig. 5(a) presents the evolution of micro-delamination rates before and after polishing for three ply thicknesses. Independently of the ply thickness, the micro-delamination rate tends quickly to zero. Cross-section examinations were continued to verify the length of cracks. The crack rate with respect to the polishing depth is reported in Fig. 5(b). Cracks were shown to be continuous for the double and triple 90° plies, while a few cracks disappeared in the [0 2 /90 1 /0 2 ] lay-up. This is consistent with edge effects in this layup. The variation of crack rate for the simple ply is about 15% of the crack rate at the surface (see Fig. 6).
The [+45/-45] 2s layup was subjected to shear strains up to ε 12 = 4%, polished and then observed. For the highest strain, and after removing 1.5 mm, only a few transverse cracks remained in the central (and double) ply and in the external plies. Deeper, after removing 3 mm of the surface, no crack remained at all. To exclude the eventuality of cracks being too tightly closed to be observable, the sample was observed under loading. The results demonstrate that neither transverse cracks nor micro-delamination occur in the bulk under shear load. This phenomenon can be due to the use of a toughened matrix (M21) with thermoplastic nodules. For another material, e.g. with a more brittle matrix, delamination would likely be lower in bulk than at the surface but may persist in bulk. Although no meso-damage is observable, this layup nevertheless undergoes irreversible strains and stiffness loss due to diffuse damage at the micro (fibre) scale. However, even if shear loading alone does not lead to transverse cracking in this material, the shear component associated with a tensile loading will likely contribute to crack growth. Quantifying this contribution may require additional testing on other layups.
Fig. 5. Damage rates with respect to the polishing depth y after the transverse strain ε was applied: (a) delamination rate μ̄ vs polishing depth; (b) crack rate ρ̄ vs polishing depth, for [0 2 /90 n /0 2 ] layups with n = 1 to 3 and ε = 1.1 to 1.5%.
Fig. 6. Crack rate ρ̄ with respect to the transverse strain ε 22 , at the surface of the edge and in bulk, for the [0 2 /90 1 /0 2 ] layup.
Evolution of transverse cracking
Evolution of transverse cracking was observed, as described in Section 2.2, for [+45/-45] 2s and [0 2 /90 n /0 2 ] layups with n varying from 1 to 3.
As the first one was shown not to be subject to transverse cracking, this section is devoted to the latter. The position of cracks measured for several loading levels on a [0 2 /90 1 /0 2 ] specimen is plotted in Fig. 7(a). The corresponding crack rate has been computed for the whole observed area and also for each separate third of the same area, see Fig. 7(b). The two figures highlight the benefit of observing a large enough, and hence representative, area. Depending on the piece of sample one chooses to focus on, the initiation and slope of crack growth may be very different, at least concerning the beginning of crack growth. This is likely due to weak areas (defect locations) that drive the position of the first cracks. The effect of defects on the damage threshold vanishes when damage increases: for higher strains, the distance between consecutive cracks becomes more homogeneous, and no major difference can be observed between the three small areas. The average crack rate computed on the whole length reveals a preliminary step with a progressive beginning of cracking. The evolution of damage densities according to the applied strain for two ply thicknesses is presented in Fig. 8. In all cases, a preliminary step in the damage scenario, with small cracking rates, was observed before a second step where crack growth speeds up. The curves show an increase in cracking threshold when the ply thickness decreases. This phenomenon is explained by the energy released by the fracture being lower for a thinner ply. This point is widely described in the literature [START_REF] Parvizi | Constrained cracking in glass fibre-reinforced epoxy cross-ply laminates[END_REF][START_REF] Gudmundson | Initiation and growth criteria for transverse matrix cracks in composite laminates[END_REF][START_REF] Leguillon | Strength or toughness? A criterion for crack onset at a notch[END_REF] and obviously makes thin plies very competitive for liner-less composite vessels.
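The defect-driven scatter in crack onset described above can be mimicked by a toy probabilistic-threshold model (a deliberate simplification, not the Finite Fracture Mechanics model used in the study): each potential crack site draws its onset strain from a scattered distribution, and averaging over many sites yields a progressive preliminary step before rapid growth. All numbers are illustrative.

```python
import random

random.seed(0)
# Toy model: one onset-strain threshold per potential crack site,
# drawn from a scattered (Gaussian) distribution around 1% strain.
n_sites = 10000
thresholds = [random.gauss(1.0, 0.1) for _ in range(n_sites)]  # % strain

def cracked_fraction(strain):
    """Average fraction of sites whose threshold has been exceeded."""
    return sum(t <= strain for t in thresholds) / n_sites

f_low  = cracked_fraction(0.8)   # below the mean threshold: few cracks
f_mid  = cracked_fraction(1.0)   # at the mean threshold
f_high = cracked_fraction(1.3)   # well above: nearly saturated
```

The averaged curve rises smoothly from near zero, reproducing qualitatively the progressive onset seen on the whole 100 mm length, while any single small sub-area (few sites) shows an abrupt, defect-dependent onset.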
The modelling of the preliminary step is fundamental for the prediction of the first leak path. This phenomenon, due to the variability of the material properties, can be introduced in the damage model through a probabilistic approach [START_REF] Nairn | Matrix microcracking in composites[END_REF]. Evolution laws for transverse cracks and micro-delamination were built based on Finite Fracture Mechanics and energy release rates [START_REF] Tual | Multiscale model based on a finite fracture approach for the prediction of damage in laminate composites[END_REF][START_REF] Laeuffer | A model for the prediction of transverse crack and delamination density based on a strength and fracture mechanics probabilistic approach[END_REF]. Probability density functions are defined for the fracture toughness and the strength threshold. A set of simulations is then performed and the average response of the model is computed according to the weights associated with the density functions. Results of the simulation for both ply thicknesses are presented in Fig. 8 by dotted and dashed lines. The preliminary step in the damage scenario is well described even though the probability densities have not yet been accurately identified.
MORPHOLOGY OF THE DAMAGE NETWORK
To study the shape of a damage network, damage observation under loading and cross-section examination were applied to [0/+67.5/-67.5] s specimens. The interest of this lay-up is that transverse cracking can occur in three different and consecutive plies, leading to the creation of a network. The results presented here concern one damage network (without damage growth) observed at several polishing depths from 0.2 mm to 5 mm. At the scale of the specimen, the results provide an overview of the network. The position of cracks is schematically drawn in Fig. 9 from the observations. The centre ply, -67.5°, is also a double ply, and thus has a lower cracking threshold. In this ply, cracks occurred first and were continuous through the width.
Conversely, cracks in the simple +67.5° plies were short and located around the cracks of the double ply. Hence, the existence of cracks in an adjacent ply drives the position and the length of the new cracks. Moreover, the cracking threshold of the simple plies is also modified: cracks occur almost simultaneously in the three plies despite their different thicknesses. Tests on [0/+67.5/0/-67.5] s specimens, in which the cracked plies are isolated, are scheduled. Comparing the results obtained with isolated and non-isolated plies will make it possible to quantify the effect of the interaction between cracks in adjacent plies. Fig. 10 focuses on the intersection of three cracks. At the interface between two plies, delamination connects the cracks. This makes the connection area larger than the Crack Opening Displacement (COD) at crack tip and may increase the leakage rate induced by a leak path.
CONCLUSION
Transverse cracks and delamination at crack tips were identified with an optical microscope during tensile loading. A length of 100 mm was observed for several loading levels. This allowed us to highlight a preliminary step in the damage scenario, with small crack densities and progressive growth, before a second step with steeper growth. In bulk, cross-section examinations showed that no delamination occurred at crack tips in the material of the study. Cross-section examinations were also performed for the observation of [+45/-45] 2s and [0/+67.5/-67.5] s layups in order to bypass the issue of free edge effects. The damage state in those layups was shown to be significantly different in the bulk of the specimens from that at the surface of the edges. In particular, there is no transverse crack in [+45/-45] 2s specimens subjected to shear strains up to 4%. It was also observed that crack growth and crack length are modified by the damage state of adjacent plies. These elements are fundamental for the assessment of permeability performance, and thus will be introduced in the model.
Predicting the percolation of the network also requires describing the network in terms of the number of connections between two adjacent plies. It could be achieved by using only crack density and ply angle, but this is not trivial any more since crack growth and crack length are modified by the damage state of neighbouring plies. Cross-section examinations give an insight into the network pattern; nevertheless, this method is restrictive because it is destructive, not very accurate and time-consuming. Additional experiments involving X-ray tomography are required to accurately characterise the crack network pattern. This kind of experiment remains a challenge because of the mismatch between the size of the crack pattern (3 mm in the case of the [0/+67.5/-67.5] s specimen) and the size of its elements (COD ≈ 2 µm when the specimen is unloaded). Those experiments will also allow assessment of the effect of the interactions between damage of consecutive plies.
Fig. 1. Transverse crack and delamination: crack network in two damaged plies and micrograph of one transverse crack with delamination at crack tip.
Fig. 2. Measurement of the crack density ρ and average delamination at crack tip µ in a [0 2 /90/0 2 ] lay-up.
Fig. 3. Damage observation under tensile loading test.
Fig. 4. Cross-section examinations on [0 2 /90 2 /0 2 ]: diagram of the cross-sections and micrographs at the surface (y = 0 µm) and after removing 15 microns (y = 15 µm).
Fig. 7. Damage measurements on a [0 2 /90 1 /0 2 ] specimen, ply thickness h 90 = 0.26 mm: (a) position of cracks observed on the edge for 5 loading levels; (b) crack density ρ at the surface, for the whole observed length and for each third (L = 0-30, 30-60 and 60-90 mm).
Fig. 8. Measured and predicted crack rates.
Fig. 9. Crack network in a [0/+67.5/-67.5] s layup: diagram of the network and micrograph of the three damaged plies.
Fig. 10. Intersection between the cracks of three plies in a [0/+67.5/-67.5] s layup for several polishing depths.
ACKNOWLEDGEMENTS
The authors acknowledge the council of Region Aquitaine and the French space agency CNES for their support.
01755718
en
[ "sdu.ocean", "sdu.stu.me", "sdu.stu.cl", "phys.meca.mefl" ]
2024/03/05 22:32:10
2018
https://hal.sorbonne-universite.fr/hal-01755718/file/Improved_mcRSW_PrePrint.pdf
Masoud Rostami, Vladimir Zeitlin (email: [email protected])
Improved moist-convective rotating shallow water model and its application to instabilities of hurricane-like vortices
Keywords: Moist Convection, Rotating Shallow Water, Tropical Cyclones, Baroclinic Instability
We show how the two-layer moist-convective rotating shallow water model (mcRSW), which proved to be a simple and robust tool for studying effects of moist convection on large-scale atmospheric motions, can be improved by including, in addition to the water vapour, precipitable water, and the effects of vaporisation, entrainment, and precipitation. The thus-improved mcRSW becomes cloud-resolving. It is applied, as an illustration, to model the development of instabilities of tropical cyclone-like vortices.
Introduction
Massive efforts have been undertaken in recent years in order to improve the quality of weather and climate modelling, and significant progress was achieved. Nevertheless, water vapour condensation and precipitation remain a weak point of weather forecasts, especially long-term ones. Thus, predictions of climate models significantly diverge in what concerns humidity and precipitation [START_REF] Stevens | What climate models miss?[END_REF] . The complexity of the thermodynamics of moist air, which includes phase transitions and microphysics, is prohibitive. That is why the related processes are usually represented through simplified parameterisations in general circulation models. However, the essentially non-linear, switch character of phase transitions poses specific problems in modelling the water cycle. Parametrisations of numerous physical processes in general circulation models often obscure the role of the water vapour cycle upon the large-scale atmospheric dynamics. The moist-convective rotating shallow water (mcRSW) model was proposed recently precisely in order to understand this role in rough but robust terms.
The model is based on vertically averaged primitive equations with pseudo-height as vertical coordinate. Instead of proceeding by a direct averaging of the complete system of equations with full thermodynamics and microphysics, which necessitates a series of specific ad hoc hypotheses, a hybrid approach is used, consisting in a combination of vertical averaging between pairs of isobaric surfaces and Lagrangian conservation of the moist enthalpy [START_REF] Bouchut | Fronts and nonlinear waves in a simplified shallow-water model of the atmosphere with moisture and convection[END_REF][START_REF] Lambaerts | Simplified two-layer models of precipitating atmosphere and their properties[END_REF]. Technically, convective fluxes, i.e. an extra vertical velocity across the material surfaces delimiting the shallow-water layers, are added to the standard RSW model, and are linked to condensation. For the latter a relaxation parametrisation in terms of the bulk moisture of the layer, of the type applied in general circulation models, is used. The thus-obtained mcRSW model combines simplicity and fidelity of reproduction of the moist phenomena at large scales, and allows the use of the efficient numerical tools available for rotating shallow water equations. It also proved to be useful in understanding moist instabilities of atmospheric jets and vortices [START_REF] Lambaerts | Moist versus dry baroclinic instability in a simplified two-layer atmospheric model with condensation and latent heat release[END_REF][START_REF] Lahaye | Understanding instabilities of tropical cyclones and their evolution with a moist-convective rotating shallow-water model[END_REF](Rostami and Zeitlin 2017; Rostami et al. 2017). The mcRSW model, however, gives only the crudest representation of the moist convection. The water vapour can condense, but after that the liquid water is dropped off, so there are no co-existing phases and no inverse vaporisation phase transition in the model.
Yet, it is rather simple to introduce precipitable water in the model, and to link it to the water vapour through bulk condensation and vaporisation. At the same time, the convective fluxes present in mcRSW can be associated with entrainment of precipitable water, and its exchanges between the layers, adding more realism in representing the moist convection. Below, we will make these additions to the mcRSW model, and thus obtain an "improved" mcRSW, which we call imcRSW. We will illustrate the capabilities of the new model on the example of moist instabilities of hurricane-like vortices. Multi-layer modelling of tropical cyclones goes back to the pioneering paper [START_REF] Ooyama | Numerical simulation of the life cycle of tropical cyclones[END_REF], which had, however, a limited range due to the constraint of axisymmetry. Strictly barotropic models were also used, e.g. [START_REF] Guinn | Hurricane spiral bands[END_REF], as well as shallow water models with ad hoc parametrisations of latent heat release, e.g. [START_REF] Hendricks | Hurricane eyewall evolution in a forced shallow-water model[END_REF]. The imcRSW model is a logical development of such an approach.
Derivation of the improved mcRSW
Reminder on mcRSW and its derivation
Let us recall the main ideas and the key points of the derivation of the 2-layer mcRSW model. The starting point is the system of "dry" primitive equations with pseudo-height as vertical coordinate [START_REF] Hoskins | Atmospheric frontogenesis models: Mathematical formulation and solution[END_REF]. We recall that pseudo-height is the geopotential height for an atmosphere with an adiabatic lapse rate: z = z 0 [1 - (p/p 0 )^(R/cp)], where z 0 = cp θ 0 /g, and the subscript 0 indicates reference (sea-level) values. Horizontal momentum and continuity equations are vertically averaged between two pairs of material surfaces z 0 , z 1 , and z 1 , z 2 , where z 0 is at the ground, and z 2 is at the top.
The pseudo-height z being directly related to pressure, the lower boundary is a "free surface" and the upper boundary is considered to be at a fixed pressure ("rigid lid"). The mean-field approximation is then applied, consisting, technically, in replacing averages of the products of dynamical variables by products of averages, which expresses the hypothesis of columnar motion. In the derivation of the "ordinary" RSW, the fact that the material surfaces z i , i = 0, 1, 2 are moving, by definition, with the corresponding local vertical velocities w i allows the elimination of the latter. The main assumption of the mcRSW model is that there exist additional convective fluxes across z i , such that

w 0 = dz 0 /dt ,  w 1 = dz 1 /dt + W 1 ,  w 2 = dz 2 /dt + W 2 ,   (1)

where W 1,2 are contributions from the extra fluxes, whatever their origin, cf. Figure 1. The resulting continuity equations for the thicknesses of the layers h 2 = z 2 - z 1 , h 1 = z 1 - z 0 are modified in a physically transparent way, acquiring additional source and sink terms:

∂ t h 1 + ∇ • (h 1 v 1 ) = -W 1 ,  ∂ t h 2 + ∇ • (h 2 v 2 ) = +W 1 - W 2 .   (2)

The modified momentum equations contain terms of the form W i v at the boundaries z i of the layers. An additional assumption is, hence, necessary in order to fix the value of the horizontal velocity at the interface. In the layered models the overall horizontal velocity, by construction, has the form

v(z) = Σ_{i=1}^{N} v i H(z i - z) H(z - z i-1 ) ,

where H(z) is the Heaviside (step-) function. Assigning a value to the velocity at z i means assigning a value to the Heaviside function at zero, where it is not defined. This is a well-known modelling problem, and any value between zero and one can be chosen, depending on the physics of the underlying system. In the present case this choice would reflect the processes in an intermediate buffer layer interpolating between the main layers, and replacing the sharp interface, if a vertically refined model is used.
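A zero-dimensional sketch (with made-up values) shows the key property of the modified continuity equations (2) with W 2 = 0: the convective flux W 1 only transfers mass from the lower to the upper layer, so the total column depth h 1 + h 2 is unchanged.

```python
# 0-D analogue of Eqs. (2) with W2 = 0: dh1/dt = -W1, dh2/dt = +W1.
# Forward-Euler integration; all values are illustrative.
h1, h2 = 1.0, 2.0
dt = 0.01
total0 = h1 + h2
for k in range(100):
    # a transient convective event during the first 20 steps
    w1 = 0.5 if k < 20 else 0.0
    h1 -= dt * w1   # lower layer loses mass to the convective flux
    h2 += dt * w1   # upper layer gains exactly the same amount
```

The transferred depth is 20 × 0.01 × 0.5 = 0.1, so h1 ends at 0.9 and h2 at 2.1, with h1 + h2 conserved to round-off.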
The "asymmetric" (non-centred) assignment H(0) = 1 was adopted in previous works. The "symmetric" (centred) assignment H(0) = 1/2 will be adopted below. This choice does not affect qualitatively the previous results obtained with mcRSW; however, it does affect the forcing terms in the conservation laws. It corresponds to a choice of efficiency of momentum transport between the layers. In this way, the vertically averaged momentum equations become:

∂ t v 1 + (v 1 • ∇)v 1 + f k × v 1 = -∇φ(z 1 ) + g (θ 1 /θ 0 ) ∇z 1 + [(v 1 - v 2 )/(2h 1 )] W 1 ,
∂ t v 2 + (v 2 • ∇)v 2 + f k × v 2 = -∇φ(z 2 ) + g (θ 2 /θ 0 ) ∇z 2 + [(v 1 - v 2 )/(2h 2 )] W 1 + [v 1 /(2h 2 )] W 2 .   (3)

Note that, whatever the assignment for the Heaviside function, the total momentum of the two-layer system (z 1 - z 0 )v 1 + (z 2 - z 1 )v 2 is locally conserved (modulo the Coriolis force terms). In what follows, we will be assuming that W 2 = 0. The system is closed with the help of hydrostatic relations between geopotential and potential temperature, which are used to express the geopotential at the upper levels in terms of the lower-level one:

φ(z) = φ(z 0 ) + g (θ 1 /θ 0 )(z - z 0 )   if z 0 ≤ z ≤ z 1 ,
φ(z) = φ(z 0 ) + g (θ 1 /θ 0 )(z 1 - z 0 ) + g (θ 2 /θ 0 )(z - z 1 )   if z 1 ≤ z ≤ z 2 .   (4)

The vertically integrated (bulk) humidity in each layer, Q i = ∫_{z i-1}^{z i} q dz, i = 1, 2, where q(x, y, z, t) is the specific humidity, measures the total water vapour content of the air column, which is locally conserved in the absence of phase transitions. Condensation introduces a humidity sink:

∂ t Q i + ∇ • (Q i v i ) = -C i ,  i = 1, 2.
(5) In the regions of condensation (Ci > 0) the specific humidity is saturated, q(zi) = qs(zi), and the potential temperature θ(zi) + (L/cp) qs(zi) of an elementary air mass Wi dt dx dy, which is rising due to the latent heat release, is equal to the potential temperature of the upper layer θi+1:

θi+1 = θ(zi) + (L/cp) q(zi) ≈ θi + (L/cp) q(zi). (6)

If the background stratification, at constant θ(zi) and constant q(zi), is stable, θi+1 > θi, then by integrating the three-dimensional equation of moist-adiabatic processes,

d/dt [θ + (L/cp) q] = 0, (7)

we get

Wi = βi Ci, βi = L / [cp (θi+1 - θi)] ≈ 1/q(zi) > 0. (8)

In this way the extra vertical fluxes in (3), (2) are linked to condensation. For the system to be closed, condensation should be connected to moisture. This is done via the relaxation parametrisation, where the moisture relaxes with a characteristic time τc towards the saturation value Qs, if this threshold is crossed:

Ci = [(Qi - Qsi)/τc] H(Qi - Qsi). (9)

The essentially nonlinear, switch-like character of the condensation process is reflected in this parametrisation, which poses no problem for the finite-volume numerical scheme we are using below. For alternative, e.g. finite-difference, schemes a smoothed Heaviside function could be used. In what follows we consider the two-layer model assuming that the upper layer is dry: even with entrainment of water from the lower moist layer, water vapour in this layer remains far from saturation, so the convective flux W2 is negligible.
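The switch-like relaxation (9) is simple to implement. The sketch below (with illustrative, not physical, parameter values) also shows that, for dt < τc, an explicit update drives a supersaturated column monotonically back down to Qs without undershooting:

```python
def condensation(Q, Q_s, tau_c):
    """Relaxational condensation, eq. (9): active only above saturation."""
    return (Q - Q_s) / tau_c if Q > Q_s else 0.0

# explicit relaxation of a supersaturated column towards Q_s
Q, Q_s, tau_c, dt = 1.0, 0.75, 0.05, 0.01
history = [Q]
for _ in range(100):
    Q -= dt * condensation(Q, Q_s, tau_c)
    history.append(Q)
```

Each step multiplies the supersaturation Q - Qs by (1 - dt/τc), so the excess decays geometrically, which is the intended behaviour of the parametrisation.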
In this way we get the mcRSW equations for such a configuration:

∂t v1 + (v1·∇)v1 + f k × v1 = -g∇(h1 + h2) + [(v1 - v2)/(2h1)] βC,
∂t v2 + (v2·∇)v2 + f k × v2 = -g∇(h1 + s h2) + [(v1 - v2)/(2h2)] βC,
∂t h1 + ∇·(h1 v1) = -βC,
∂t h2 + ∇·(h2 v2) = +βC,
∂t Q + ∇·(Q v1) = -C,
C = [(Q - Qs)/τc] H(Q - Qs), (10)

where s = θ2/θ1 > 1 is the stratification parameter, v1 = (u1, v1) and v2 = (u2, v2) are the horizontal velocity fields in the lower and upper layer (counted from the bottom), with ui the zonal and vi the meridional components, h1, h2 are the thicknesses of the layers, and we will be considering the Coriolis parameter f to be constant. As in the previous studies with mcRSW, we will not develop sophisticated parametrisations of the boundary layer and of the fluxes across the lower boundary of the model. Such parametrisations exist in the literature (Schecter), and may be borrowed, if necessary. We will limit ourselves to the simplest version of the exchanges with the boundary layer, with a source of bulk moisture in the lower layer due to surface evaporation E. The moisture budget thus becomes:

∂t Q + ∇·(Q v1) = E - C. (11)

The simplest parametrisations used in the literature are the relaxational one,

E = [(Q̃ - Q)/τE] H(Q̃ - Q), (12)

and the one where surface evaporation is proportional to the wind, which is plausible for the atmosphere over the oceanic surface:

E ∝ |v|. (13)

The two can be combined, in order to prevent the evaporation due to the wind from continuing beyond saturation:

Es = [(Q̃ - Q)/τE] |v| H(Q̃ - Q). (14)

The typical evaporation relaxation time τE is about one day in the atmosphere, to be compared with τc, which is about an hour. Thus τE ≫ τc. Q̃ can be taken equal, or close, to Qs, as we are doing, but not necessarily, as it represents complex processes in the boundary layer, and can be, in turn, parametrised.
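The evaporation laws (12) and (14) can be sketched in the same switch form. In the snippet below, Q_target stands for the boundary-layer value written Q̃ in the text, and all numbers are illustrative:

```python
def evaporation_relax(Q, Q_target, tau_E):
    """Relaxational surface evaporation, eq. (12)."""
    return (Q_target - Q) / tau_E if Q < Q_target else 0.0

def evaporation_wind(Q, Q_target, tau_E, wind_speed):
    """Wind-dependent evaporation, eq. (14): proportional to |v|, but
    switched off once Q reaches the target, so it cannot oversaturate."""
    return (Q_target - Q) / tau_E * wind_speed if Q < Q_target else 0.0
```

In a full time-stepping code, the ordering τE much larger than τc means that evaporation acts as a slow source replenishing the moisture that the fast condensation switch removes.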
Improving the mcRSW model

An obvious shortcoming of the mcRSW model presented above is that, although it includes condensation and related convective fluxes, the condensed water vapour disappears from the model. In this sense, condensation is equivalent to precipitation in the model. Yet, as is well known, condensed water remains in the atmosphere in the form of clouds, and precipitation is switched on only when water droplets reach a critical size. It is easy to include precipitable water in the model in the form of another advected quantity, with a source due to condensation and a sink due to vaporisation, the latter process having been neglected in the simplest version of mcRSW. We thus introduce the bulk amount of precipitable water, W(x, y, t), in the air column of a given layer. It obeys the following equation in each layer:

∂t W + ∇·(W v) = +C - V, (15)

where V denotes vaporisation. Vaporisation can be parametrised similarly to condensation:

V = [(Qs - Q)/τv] H(Qs - Q). (16)

Contrary to condensation, vaporisation engenders cooling, and hence a downward convective flux, which can be related to the background stratification along the same lines as the upward flux due to condensation:

Wv = -β* V, β* = L* / [Cv (θ2 - θ1)], (17)

where L* is the latent heat absorption coefficient and Cv is the specific heat of vaporisation. β* is an order of magnitude smaller than β. There is still no precipitation sink in (15). Such a sink can be introduced, again as a relaxation with a relaxation time τp, conditioned by some critical bulk amount of precipitable water in the column:

P = [(W - Wcr)/τp] H(W - Wcr). (18)

The extra fluxes (17) due to cooling give rise to extra terms in the mass and momentum equations of the model in each layer. Another important phenomenon, which is absent in the simplest version of mcRSW, is the entrainment of liquid water by updrafts.
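Vaporisation (16) and the precipitation sink (18) follow the same switch pattern as condensation; a minimal sketch with arbitrary illustrative values:

```python
def vaporisation(Q, Q_s, tau_v):
    """Vaporisation of cloud water in subsaturated air, eq. (16)."""
    return (Q_s - Q) / tau_v if Q < Q_s else 0.0

def precipitation(W, W_cr, tau_p):
    """Precipitation sink, eq. (18): active once the bulk precipitable
    water W exceeds the critical amount W_cr."""
    return (W - W_cr) / tau_p if W > W_cr else 0.0
```

Note the opposite switch conditions: vaporisation acts below saturation (and cools, driving the downward flux of (17)), while precipitation acts above the critical cloud-water threshold.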
This process can be modelled in a simple way as a sink in the lower-layer precipitable-water equation, which is proportional, with some coefficient γ, to the updraft flux, and hence to condensation, and which provides a corresponding source of precipitable water in the upper layer. Including the above-described modifications in the mcRSW model, and neglecting for simplicity 1) condensation and precipitation in the upper layer, by supposing that it remains far from saturation, and 2) vaporisation in the lower layer, which is supposed to be close to saturation, we get the following system of equations:

d1 v1/dt + f ẑ × v1 = -g∇(h1 + h2) + [(βC - β*V)/h1] (v1 - v2)/2,
d2 v2/dt + f ẑ × v2 = -g∇(h1 + s h2) + [(βC - β*V)/h2] (v1 - v2)/2,
∂t h1 + ∇·(h1 v1) = -βC + β*V,
∂t h2 + ∇·(h2 v2) = +βC - β*V,
∂t W1 + ∇·(W1 v1) = +(1 - γ) C - P,
∂t W2 + ∇·(W2 v2) = +γ C - V,
∂t Q1 + ∇·(Q1 v1) = -C + E,
∂t Q2 + ∇·(Q2 v2) = V, (19)

where di .../dt = ∂t ... + (vi·∇)..., i = 1, 2. Here C is condensation in the lower layer, considered to be close to saturation, Wi is the bulk amount of precipitable water and Qi the bulk humidity in each layer, γ is the entrainment coefficient, and V is vaporisation in the upper layer, considered as mostly dry. C, V, and P obey (9), (16), (18), respectively. Note that if the above-formulated hypotheses of a mostly dry upper layer and an almost saturated lower layer are relaxed (or become inconsistent during simulations), the missing condensation, precipitation, and vaporisation in the corresponding layers can easily be restored according to the same rules.
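Neglecting advection, the local source/sink structure of the scalar budgets in (19) can be sketched as a single-column update. With surface evaporation and precipitation switched off (E = P = 0), the total water Q1 + Q2 + W1 + W2 should be invariant, which gives a useful sanity check. All parameter and initial values below are hypothetical, chosen only so that the explicit step is stable:

```python
def column_step(state, beta, beta_s, gamma, tauc, tauv, Qs, dt):
    """One explicit step of the local (advection-free) source/sink terms
    of system (19), with E = P = 0; beta_s stands for beta-star."""
    h1, h2, Q1, Q2, W1, W2 = state
    C = (Q1 - Qs) / tauc if Q1 > Qs else 0.0   # lower-layer condensation, eq. (9)
    V = (Qs - Q2) / tauv if Q2 < Qs else 0.0   # upper-layer vaporisation, eq. (16)
    return (h1 + dt * (-beta * C + beta_s * V),
            h2 + dt * (+beta * C - beta_s * V),
            Q1 + dt * (-C),
            Q2 + dt * (+V),
            W1 + dt * ((1.0 - gamma) * C),
            W2 + dt * (gamma * C - V))

state = (0.9, 1.1, 0.80, 0.10, 0.02, 0.05)     # hypothetical initial column
water0 = sum(state[2:])                        # Q1 + Q2 + W1 + W2
for _ in range(50):
    state = column_step(state, beta=5.0, beta_s=0.5, gamma=0.3,
                        tauc=0.05, tauv=5.0, Qs=0.75, dt=0.005)
water1 = sum(state[2:])
```

The condensation terms move water from Q1 into W1 and W2 (entrainment γ) while thickening the upper layer, and vaporisation moves it from W2 into Q2 while thickening the lower layer, so the water total only changes through E and P.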
Conservation laws in the improved mcRSW model

As was already said, the total momentum of the system is locally conserved in the absence of the Coriolis force (f → 0), as can be seen by adding the equations for the momentum density in the layers:

(∂t + v1·∇)(h1 v1) + h1 v1 ∇·v1 + f ẑ × (h1 v1) = -g∇(h1²/2) - g h1 ∇h2 - [(v1 + v2)/2] (βC - β*V), (20a)
(∂t + v2·∇)(h2 v2) + h2 v2 ∇·v2 + f ẑ × (h2 v2) = -g s ∇(h2²/2) - g h2 ∇h1 + [(v1 + v2)/2] (βC - β*V). (20b)

The last term in each equation corresponds to a Rayleigh drag produced by vertical momentum exchanges due to the convective fluxes. The total mass (thickness) h = h1 + h2 is also conserved, while the mass in each layer h1,2 is not. However, we can construct a moist enthalpy in the lower layer,

m1 = h1 - βQ1 - β*W2, (21)
∂t m1 + ∇·(m1 v1) = 0. (22)

The inclusion of the precipitable water of the upper layer in (21) is necessary to compensate the downward mass flux due to vaporisation. The dry energy of the system E = ∫ dxdy (e1 + e2) is conserved in the absence of diabatic effects, where the energy densities of the layers are:

e1 = h1 v1²/2 + g h1²/2,
e2 = h2 v2²/2 + g h1 h2 + s g h2²/2.

In the presence of condensation and vaporisation the energy budget changes, and the total energy density e = e1 + e2 is not locally conserved, acquiring a sink/source term:

∂t e = -∇·fe - (βC - β*V) g (1 - s) h2, (23)

where fe is the standard energy density flux in the two-layer model. For the total energy E = ∫ dxdy e of the closed system we thus get

∂t E = g (s - 1) ∫ dxdy (βC - β*V) h2. (24)

For stable stratifications s > 1, the r.h.s. of this equation represents an increase (decrease) of potential energy due to upward (downward) convective fluxes caused by condensation heating (vaporisation cooling).
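The cancellation of the Rayleigh-drag terms in (20a) and (20b) is easy to verify pointwise. The sketch below recomputes the momentum tendencies of h_i v_i from the velocity and mass tendencies of (19), with arbitrary point values and the convective flux abbreviated as flux = βC - β*V:

```python
def momentum_exchange(h1, h2, v1, v2, flux):
    """Pointwise momentum tendencies of h_i v_i due to the convective mass
    flux, with the centred choice H(0) = 1/2, i.e. interface velocity
    (v1 + v2)/2."""
    dv1 = flux / h1 * (v1 - v2) / 2.0   # velocity tendencies, cf. (19)
    dv2 = flux / h2 * (v1 - v2) / 2.0
    dh1, dh2 = -flux, +flux             # mass tendencies, cf. (19)
    dm1 = h1 * dv1 + v1 * dh1           # = -(v1 + v2)/2 * flux, eq. (20a)
    dm2 = h2 * dv2 + v2 * dh2           # = +(v1 + v2)/2 * flux, eq. (20b)
    return dm1, dm2

dm1, dm2 = momentum_exchange(h1=0.7, h2=1.3, v1=0.4, v2=-0.2, flux=0.05)
total = dm1 + dm2   # the exchange terms cancel layer against layer
```

This makes explicit why, with the symmetric assignment, the vertical exchange acts as a pure redistribution of momentum between the layers, with no net source.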
Note that with the "asymmetric" assignment of the Heaviside function at zero, an extra term corresponding to kinetic energy loss due to the Rayleigh drag would appear in the energy budget, cf. Lambaerts et al. (2011). Potential vorticity (PV) is an important characteristic of the flow. In the presence of diabatic effects it ceases to be a Lagrangian invariant, and evolves in each layer as follows:

d1/dt [(ζ1 + f)/h1] = +[(ζ1 + f)/h1] (βC - β*V)/h1 + (ẑ/h1) · ∇ × [ (v1 - v2)/2 (βC - β*V)/h1 ], (25a)
d2/dt [(ζ2 + f)/h2] = -[(ζ2 + f)/h2] (βC - β*V)/h2 + (ẑ/h2) · ∇ × [ (v1 - v2)/2 (βC - β*V)/h2 ], (25b)

where ζi = ẑ·(∇ × vi) = ∂x vi - ∂y ui (i = 1, 2) is the relative vorticity and qi = (ζi + f)/hi is the potential vorticity in each layer. One can construct a moist counterpart of the potential vorticity in the lower layer with the help of the moist enthalpy (21), cf. Lambaerts et al. (2011):

q1m = (ζ1 + f)/m1. (26)

The moist PV is conserved in the lower layer, modulo the Rayleigh drag effects:

d1/dt [(ζ1 + f)/m1] = + ẑ · ∇ × [ (v1 - v2)/2 (βC - β*V)/m1² ]. (27)

Note that the "asymmetric" assignment of the value of the step-function, which was discussed above, would render the moist PV in the lower layer exactly conserved.

3. Illustration: application of the improved mcRSW model to moist instabilities of hurricane-like vortices

3.1. Motivations

We will illustrate the capabilities of the improved mcRSW, the imcRSW, on the example of moist instabilities of hurricane-like vortices. The mcRSW model, in its simplest one-layer version, captures well the salient properties of moist instabilities of such vortices, and clearly displays the important role of moisture in their development (Lahaye and Zeitlin).
Below we extend the analysis of Lahaye and Zeitlin to baroclinic tropical cyclones (TC), and use the imcRSW to check the role of the new phenomena included in the model. Some questions which remained unanswered will be addressed, as well as new ones which become possible to answer with the improved version of the model. In particular, we will investigate the influence of the size of the TC (the radius of maximum wind) upon the structure of the most unstable mode, the role of vertical shear, and the evolution of the inner and outer cloud bands at the nonlinear stage of the instability.

Fitting the velocity and vorticity distribution of hurricanes

We begin with building velocity and vorticity profiles of a typical TC within the two-layer model. An analytic form of the velocity profile is convenient both for the linear stability analysis and for the initialisation of the numerical simulations, so we construct a simple analytic fit with a minimal number of parameters:

Vi(r) = ǫi (r - r0)^αi exp[-mi (r - r0)^βi] / max_r { (r - r0)^αi exp[-mi (r - r0)^βi] },  r ≥ r0,
Vi(r) = m0 r,  r ≤ r0. (28)

Here i = 1, 2 indicates the lower and upper layer, respectively, r is the non-dimensional distance from the centre, ǫi measures the intensity of the velocity field, r0 sets the non-dimensional distance of maximum wind from the centre, and the other parameters allow one to fit the shape of the distribution. A cubic Hermite interpolation across r = r0 is made to prevent a discontinuity in vorticity. Here and below we use a simple scaling where distances are measured in units of the barotropic deformation radius Rd = √(gH)/f, and velocities are measured in units of √(gH), where H is the total thickness of the atmospheric column at rest. (Hence, the parameter ǫ acquires the meaning of a Froude number.)
Under this scaling the Rossby number of the vortex is proportional to the inverse of the non-dimensional radius of maximum wind (RMW). A useful property of this parametrisation is the possibility of tuning the ascending or descending trends of the wind near to and far from the velocity peak. The velocity is normalised in such a way that the maximum velocity is equal to ǫ. We suppose that the velocity profile (28) corresponds to a stationary solution of the "dry" equations of the model. Such solutions obey the cyclo-geostrophic balance in each layer:

V1²/r + f V1 = g ∂/∂r (H1 + H2), (29a)
V2²/r + f V2 = g ∂/∂r (H1 + s H2), (29b)

so the related Hi(r) are obtained by integrating these equations using (28). The radial distribution of the relative vorticity in the vortex is given by (1/r) d[rV(r)]/dr. It should be emphasised that the radial gradient of the PV corresponding to the profile (28) has a sign reversal, and hence an instability of the vortex is expected. Typical velocity and vorticity fields of an intense (category 3) vortex are presented in Figure 2. In what follows, we will be studying instabilities of the thus constructed vortices, and their nonlinear saturation. The strategy will be the same as in Lahaye and Zeitlin: namely, we identify the unstable modes of the vortex by performing a detailed linear stability analysis of the "dry" adiabatic system, with switched-off condensation and vaporisation, and then use the unstable modes to initialise numerical simulations of the nonlinear saturation of the instability, by superimposing them, with small amplitude, onto the background vortex. We will give below the results of numerical simulations of the developing instabilities for the three typical configurations presented in Table 1: weak barotropic (BTW) and baroclinic (BCW), and strong baroclinic (BCS) vortices.
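The profile (28) is straightforward to evaluate numerically. The sketch below uses the BCW lower-layer parameters of Table 1 (the slope m0 of the inner linear part is an arbitrary illustrative value) and normalises the outer branch on the same radial grid, so that the maximum wind equals ǫ by construction; the cubic Hermite smoothing across r = r0 mentioned in the text is omitted:

```python
import math

def v_profile(r, eps, alpha, beta, m, r0, m0, norm):
    """Bare piecewise wind profile of eq. (28), without Hermite smoothing."""
    if r <= r0:
        return m0 * r                        # linear core, slope m0
    x = r - r0
    return eps * x ** alpha * math.exp(-m * x ** beta) / norm

# BCW lower-layer parameters from Table 1; m0 is illustrative
eps, alpha, beta, m, r0, m0 = 0.4, 2.25, 0.25, 14.0, 0.1, 1.0
rs = [0.001 * k for k in range(1, 4001)]     # radial grid up to r = 4
norm = max((r - r0) ** alpha * math.exp(-m * (r - r0) ** beta)
           for r in rs if r > r0)
winds = [v_profile(r, eps, alpha, beta, m, r0, m0, norm) for r in rs]
vmax = max(winds)
```

With these parameters the wind peaks slightly outside r0 and then decays, so the shape parameters indeed control the ascending and descending trends around the velocity maximum.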
Results of the linear stability analysis: the most unstable mode and its dependence on the radius of maximum wind

By applying the standard linearisation procedure and considering small perturbations to the axisymmetric background flow, we determine the eigenfrequencies and eigenmodes of the "dry" system linearised about the background vortex. The technicalities of such an analysis are the same as in Lahaye and Zeitlin, extended to the two-layer configuration as in Rostami and Zeitlin (2017), and we pass directly to the results, which we will not give in detail either, limiting ourselves to what is necessary for the numerical simulations in the next section. The most unstable mode, with azimuthal wavenumber l = 3, of the BCS vortex is presented in Fig. 3. The unstable mode of Figure 3 is clearly of mixed Rossby-inertia-gravity wave type. Such unstable modes of hurricane-like vortices are well documented in the literature, both in shallow-water models (Zhong) and in full 3D models (Menelau). It should be stressed that at Rossby numbers which are about 40, and at small Burger number, as can be inferred from the right panel of Fig. 2, the vortex Rossby wave part of the unstable mode, which is known since the work of Montgomery and is clearly seen in the upper panels of Fig. 3, is inevitably coupled to the inertia-gravity wave field through the Lighthill radiation mechanism, cf.
(Zeitlin). The most unstable modes of the BCW and BTW vortices have wavenumber l = 4. With our scaling the strength of the vortex is inversely proportional to its non-dimensional RMW, and thus the structure of the most unstable mode depends on the RMW. Yet, as follows from Figure 4, the mode l = 4 is dominant throughout a wide range of RMW. In general, higher values of the RMW correspond to higher azimuthal wavenumbers and lower growth rates.

Nonlinear evolution of the instability

We now use the unstable modes identified by the linear stability analysis to initialise numerical simulations of the nonlinear evolution of the instability. We superimpose the unstable modes with weak amplitude (several per cent with respect to the background values) onto the vortex and trace the evolution of the system in numerical simulations with the well-balanced finite-volume scheme developed for the moist-convective RSW model (Bouchut). Numerical simulations with each of the vortex configurations of Table 1 were performed both in "dry" (M) environments, with diabatic effects switched off, and in moist-convective (MCEV) environments. The values of the parameters controlling condensation (τc, Qs), stratification (s = θ2/θ1), evaporation (Q̃, τE), vaporisation (τv), precipitation (Wcr, τp), and entrainment (γ) in the MCEV environment are given in Table 2; ∆t is the time-step of the code:

τc    Qs    Q̃      s    τE                τv    Wcr   τp    γ
5∆t   0.75  ≈ Qs   1.5  200∆t (≈ 1 day)   10τc  0.01  3∆t   0.3

We present below some outputs of the simulations, illustrating different aspects of the moist vs "dry" evolution, and the difference in behaviour of the baroclinic and barotropic vortices.
Evolution of potential vorticity

We start with the evolution of the PV field of the weak cyclone, as it is understandably slower than the evolution of the strong one, and the different stages can be clearly distinguished. The evolution of the potential vorticity in both layers during the nonlinear saturation of the instability of the BCW vortex in the MCEV environment is presented in Figs. 5, 6. The simulations show the formation of a transient polygonal pattern inside the RMW at the initial stages, with the symmetry of the most unstable linear mode. Patterns of this kind are frequently observed (Muramatsu; Lewis; Kuo). The polygon is further transformed into an annulus of high PV. Such annuli of elevated vorticity (the so-called hollow PV towers (Hendricks and Schubert 2010)) are found in both the moist-convective and the dry cases. It is worth mentioning that the growth of the primary unstable mode is accompanied by an enhancement of the outer gravity-wave field, as follows from the divergence field presented in Fig. 7. As follows from Figs. 5, 6, the polygon loses its shape at t ≈ 17. At this time the modes with azimuthal wavenumbers l = 1, 2 are being produced by nonlinear interactions, and start to grow and interact with the polygonal eye-wall, which leads to a loss of symmetry by the core. A secondary, dipolar instability of the core thus develops, and gives rise to the formation of an elliptical central vortex, corresponding to the azimuthal mode l = 1, and of a pair of satellite vortices indicating the presence of the l = 2 mode. The interaction of the initial l = 4 mode with the emerging l = 1 and l = 2 modes is accompanied by inertia-gravity wave (IGW) emission and an enhancement of water vapour condensation, which will be discussed below.
It should be emphasised that an interaction between the l = 2 mode and the elliptical eye, of the kind we observe in the simulations, was described in the TC literature, e.g. by Kuo, where reflectivity data from a Doppler radar were used to hypothesise that it was due to the azimuthal propagation of l = 2 vortex Rossby waves around the eye-wall. Further nonlinear evolution consists in the breakdown of the central ellipse with subsequent axisymmetrisation of the PV field, and its intensification at the centre. This process characterises the evolution of both the BTW (not shown) and the BCW vortices, but is more efficient in the baroclinic case, as follows from Fig. 8. As seen in the Figure, the azimuthal velocity in the core region with r < 0.5 RMW is subject to strong intensification. The exchanges of PV between the eye-wall and the eye, and the intensification, are known from barotropic simulations (Montgomery; Schubert; Lahaye and Zeitlin). As we see in Fig. 8, the intensification is enhanced by the baroclinicity of the background vortex. This is confirmed by Fig. 9 and by Fig. 10, which illustrate the enhancement effect of both moist convection and baroclinicity upon the palinstrophy, defined in each layer as

P(t) = (1/2) ∫ ∇ζ·∇ζ dxdy, (30)

which diagnoses the overall intensity of the vorticity gradients. It is worth emphasising that, because of the higher vorticity and smaller RMW, the axisymmetric steady state is achieved in the lower layer more rapidly than in the upper one in the case of the baroclinic vortices.
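The palinstrophy diagnostic (30) can be sketched with centred differences on a doubly periodic grid. For ζ = sin(2πx/L) on a unit square the continuum value is π², which the discrete sum approaches as resolution increases (the resolution below is illustrative):

```python
import math

def palinstrophy(zeta, dx, dy):
    """P = (1/2) * sum |grad zeta|^2 dx dy on a doubly periodic grid,
    eq. (30), with centred differences (illustrative diagnostic only)."""
    ny, nx = len(zeta), len(zeta[0])
    total = 0.0
    for j in range(ny):
        for i in range(nx):
            gx = (zeta[j][(i + 1) % nx] - zeta[j][(i - 1) % nx]) / (2 * dx)
            gy = (zeta[(j + 1) % ny][i] - zeta[(j - 1) % ny][i]) / (2 * dy)
            total += 0.5 * (gx * gx + gy * gy) * dx * dy
    return total

# test field: zeta = sin(2*pi*x/L), for which the exact palinstrophy is pi^2
n, L = 64, 1.0
dx = dy = L / n
zeta = [[math.sin(2 * math.pi * i / n) for i in range(n)] for j in range(n)]
P = palinstrophy(zeta, dx, dy)
```

Applied to the simulated vorticity fields, a growing P(t) signals steepening vorticity gradients, which is how the baroclinic and moist-convective enhancements of Figs. 9 and 10 are quantified.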
In the case of the intense vortex, the nonlinear evolution of the instability follows a similar scenario, but is considerably accelerated, as follows from Fig. 11.

Spiral cloud bands

Tropical cyclones exhibit specific cloud patterns. The new version of the model gives the possibility to follow clouds and precipitation, and it is interesting to test its capability to produce realistic cloud patterns. There are two types of cloud and rain bands associated with tropical cyclones, as reported in the literature: the inner bands, which are situated close to the vortex core, within ≈ 2 RMW, and the outer spiral bands, located farther from the centre and having larger horizontal scales (Guinn; Wang). Fig. 12 shows the formation of inner and outer cloud bands, the latter having a characteristic spiral form, during the nonlinear evolution of the instability. The spiral cloud bands are related to the inertia-gravity "tail" of the developing unstable mode. The link between spiral bands and inertia-gravity waves in a "dry" RSW model of hurricanes was discussed in the literature (Zhong). Here we see it in the "cloud-resolving" imcRSW. It is to be stressed that the amount of clouds strongly depends on the initial water vapour content. If it is closer to the saturation value, the amount of clouds and precipitation obviously increases and eventually covers the whole vortex.
Conclusions and discussion

Thus, we have shown that the moist-convective rotating shallow water model, "augmented" by adding precipitable water and relaxational parametrisations of the related processes of vaporisation and precipitation, together with entrainment, is capable of capturing some salient features of the evolution of instabilities of hurricane-like vortices in a moist-convective environment, and allows one to analyse the importance of moist processes in the life-cycle of these instabilities. There exists an extensive literature on the dynamics of the hurricane eyewall, with tentative explanations in terms of transient internal gravity waves, which form spiral bands, cf. Lewis and Hawkins (1982), Willoughby (1978), Kurihara, or alternative explanations (Guinn) in terms of PV dynamics and vortex Rossby waves. Thus Schubert et al. obtained the formation of polygonal eyewalls as a result of barotropic instability near the radius of maximum wind in a purely barotropic model, without gravity waves. A detailed analysis of instabilities of tropical cyclones was undertaken with a cloud-resolving model by Naylor, and showed that the results of Schubert et al. give a useful first approximation for the eyewall instabilities. As was already mentioned in section 3.3, at high Rossby numbers the vortex Rossby wave motions are inevitably coupled to inertia-gravity waves, and our linear stability analysis confirms this fact.
The mixed character of the wave perturbations of axisymmetric hurricane-like vortices has been abundantly discussed in the literature, e.g. by Zhong. A detailed analysis of the instabilities of hurricane-like vortices in a continuously stratified fluid was given recently by Menelau, where it was shown that the inertia-gravity part of the unstable modes intensifies with increasing Froude number. The vortex profiles used above in section 3.2 have moderate Froude numbers, and the corresponding unstable modes have weak inertia-gravity tails. They are, however, sufficient to generate spiral cloud patterns, as we showed. The development of the instability of the eyewall proper at early stages (up to ≈ 40 f^-1) is only weakly influenced by moist convection, in accordance with the findings of Naylor. This can be seen from a comparison of the right and left panels of Fig. 9, and from Fig. 10. An advantage of our model, as compared to simple barotropic models, is its ability to capture both vorticity and inertia-gravity waves. Although we limited ourselves above to an application to tropical cyclones, the model can be used for the analysis of various phenomena in mid-latitudes and in the tropics. The passage to the equatorial beta-plane is straightforward in the model, and it can be easily extended to the whole sphere. An important advantage of the model is that it allows for a self-consistent inclusion of topography in the numerical scheme, giving the possibility to study a combination of moist and orographic effects.
As was already mentioned, more realistic parametrisations of the boundary layer are available, and generalisations to three-layer versions are straightforward.

Figure 1. Notations for the simplified two-layer scheme with mass flux across material surfaces. From Lambaerts et al. (2011), with permission of AIP.

Figure 2. Normalised radial structure of the azimuthal tangential wind with a fixed slope close to the centre (left panel) and of the relative vorticity (right panel) in both layers, corresponding to the BCS vortex in Table 1.

Figure 3. Upper row: pressure and velocity fields in the x-y plane in the lower (left panel) and upper (right panel) layers corresponding to the most unstable mode with azimuthal wavenumber l = 3 of the BCS vortex of Table 1. Lower row: left panel, corresponding divergence field of the most unstable mode; right panel, radial structure of the three components of the most unstable mode: pressure anomaly η, and radial (v) and tangential (u) components of velocity; dashed (solid) lines: imaginary (real) part; thick (thin) lines correspond to the upper (lower) layer. Note that the domain in the lower left panel is ≈ ten times larger than that of the upper panels.

Figure 4. Dependence of the linear growth rates (in units of f^-1) of the unstable modes with different azimuthal wavenumbers on the radius of maximum wind (RMW).

Figure 5. Nonlinear evolution of the most unstable l = 4 mode superimposed onto the background BCW vortex in the MCEV environment, as seen in the potential vorticity field in the lower layer. The formation of meso-vortices (zones of enhanced PV in the vorticity ring) is clearly seen, giving way to axisymmetrisation and monotonisation of the PV profile. Time is measured in units of f^-1.

Figure 6.
Same as in Fig. 5, but for the upper layer.

Figure 9. Effect of vertical shear on the intensification of vorticity in the vortex core in environments without (M) and with (MCEV) moist convection and surface evaporation. The vorticity is normalised by its initial value.

Table 1. Parameters of the background vortices. BCW(S): weak (strong) baroclinic; BTW: weak "barotropic", without vertical shear; l: the most unstable azimuthal mode.

config.  l   ǫ1    ǫ2    α1    α2    β1     β2     m1   m2    r01   r02
BCS      3   0.41  0.49  4.5   4.5   0.180  0.178  48   47.5  0.01  0.0101
BTW      4   0.4   0.40  2.25  2.25  0.25   0.25   14   14    0.1   0.1
BCW      4   0.4   0.36  2.25  2.25  0.25   0.237  14   12.6  0.1   0.115

The profile (28) has a form consistent with Mallen et al. (2005), where flight-level observations collected from Atlantic and eastern Pacific storms during 1977-2001 were used.

Table 2. It must be stressed that the amount of precipitable water in each layer is highly sensitive to the values of the parameters, especially to the intensity of the surface evaporation. The condensation and precipitation time scales are chosen to be short, just a few time steps ∆t, while the vaporisation and surface evaporation time scales are much larger, which is consistent with physical reality. Changing these parameters within the same order of magnitude does not lead to qualitative changes in the results. Wcr is an adjustable parameter that controls precipitation, and γ controls the entrainment of condensed water. The MCEV simulations were initialised with spatially uniform moisture content.
https://theses.hal.science/tel-01755720/file/DEEST_Gael.pdf
Warm thanks to Ali Hassan El Moussawi, Antoine Morvan, Nicolas Simon, Christophe Huriaux, Laurent Perraudeau, Angeliki Kritikakou, Patrice Quinton, Nicolas Estibals, Baptiste Roux, Simon Rokicki, Rafail Psiakis. . . Thomas Levesque, Gabriel Radanne, Benoit Le Gouis, Nicolas Braud-Santoni. Every thesis is a unique experience. While for some it seems to be nothing more than a rite of passage without any real difficulty, such was certainly not my case. Over the course of these ✘ ✘ ✘ thesis on the subject, after quite a winding path! Thank you to my family for everything it taught me, despite the trials and the storms. Thank you, finally, to the one who has shared my life for more than eleven years and who believed in me like no one else, living these last few years with the same intensity as I did: Caroline, my Love.

Summary in French (Résumé en français)

Context

From telephones to data centres, computer systems today play a major role in most human activities. They are found in a wide variety of contexts, a diversity that mirrors that of their applications. As a first approximation, three types of computer system can be distinguished: General-purpose systems, such as workstations, are designed to carry out a good number of tasks reasonably efficiently (such as browsing the web, using an office suite or playing video games). Their main characteristic is their flexibility, since they can execute arbitrary programs installed or written by the end user. Embedded systems, in contrast, are specifically dedicated to one or a few well-defined tasks. They are buried in all kinds of systems, from smartphones (where they accelerate, for instance, encoding/decoding operations) to the command modules of spacecraft. They are characterised by the strong constraints that weigh on their design.
Indeed, they must, for instance, respect constraints on latency (real time), energy consumption, size, cost and fault tolerance, often simultaneously.

Supercomputers, by contrast, are optimized for compute-intensive tasks, such as scientific simulations (for example in climatology or meteorology) and artificial intelligence (neural networks). Compared with general-purpose systems, the major difference is their large computing power, since they must perform a great number of operations per second to satisfy performance requirements.

All these differences influence the way these systems are designed and programmed. For example, in order to support a large number of use cases, general-purpose computers are based on generic processors offering a predefined instruction set. To improve performance, several strategies are implemented in these architectures, such as the use of several levels of cache or of superscalar techniques. A modern processor may thus, for instance, schedule instructions dynamically according to the available resources, or "guess" the value of a branch condition from past executions in order to execute some parts of the code in advance. These complex mechanisms benefit both programmers and end users:

• The same instruction set can be reused from one processor generation to the next, providing backward and forward compatibility between software and hardware.

• Programs can be written in a high-level programming language, then translated into a series of elementary instructions by a compiler, without precise knowledge of the underlying architecture being necessary.
• Static (compile-time) and dynamic (run-time, such as superscalar techniques) optimization techniques ensure decent performance for users in most cases, without excessive effort on the part of programmers.

These conveniences have made possible the sophisticated software environments we use daily. Unfortunately, these advantages are not free: by nature, generic architectures cannot provide optimal performance or efficiency for every application. In fact, generic processors represent a sensible compromise between performance and flexibility. While this compromise suits many applications, such is not always the case. For example, in embedded systems that are highly resource-constrained, the use of generic processors can lead to exceeding the allocated budget in terms of silicon area or energy consumption. The use of specific architectures, called hardware accelerators, is then required.

This thesis deals with the design of such accelerators (more specifically, of accelerators implemented on FPGAs). We are interested in the design of performance models enabling application-specific trade-offs along various metrics (accuracy, area, throughput, etc.). This manuscript consists of two major parts: Chapters 2 and 3 address trade-offs between accuracy and hardware cost (among others); Chapters 4 and 5 focus on one class of applications, stencils. The rest of this chapter briefly presents these two problems and our contributions.

Trade-offs Between Cost and Accuracy

In the first half of this manuscript, we are interested in the accuracy of results. Accuracy trade-offs represent a vast field of opportunities for hardware architects.
A classic example is the use of fixed-point instead of floating-point arithmetic to reduce hardware cost and energy consumption. Naturally, accuracy cannot be reduced indefinitely: each implementation must satisfy specific accuracy constraints, compliance with which must be verified during design-space exploration.

Determining whether a given fixed-point implementation meets an accuracy constraint is a difficult problem in general. Two classes of techniques can be distinguished: simulation-based and analytical. Simulation-based techniques are easy to apply and give good results provided enough samples are available. However, they are slow to apply, because estimating the accuracy of each solution requires a large number of executions. Analytical models are much faster to evaluate, which allows more solutions to be explored in a short time, and thus better trade-offs to be identified. However, their limited applicability represents a major challenge. Before this work, analytical techniques could only handle one-dimensional systems.

In Chapter 3, we extend previous techniques to multi-dimensional algorithms, such as image filters. We focus on Linear Shift-Invariant (LSI) filters, a generalization of the Linear Time-Invariant filters supported by other approaches. We propose a flow starting from an algorithmic description (written in C/C++). The two main challenges we address are:

• Extracting a compact mathematical representation of a linear filter from an imperative C/C++ description.

• Deriving a reliable accuracy model from such a representation.

The first of these challenges is addressed within the polyhedral model.
We represent LSI filters as Systems of Uniform Recurrence Equations (SUREs) or, equivalently, as multidimensional data-flow graphs (MDFGs), by analogy with the signal-flow graphs (SFGs) used as an intermediate representation by previous approaches. A major difference between SFGs and MDFGs is that MDFGs/SUREs do not impose a canonical iteration order on each dimension. This allows us to support complex recursive image filters that scan their inputs in all directions. We use polyhedral dependence-analysis techniques to transform the program into systems of affine recurrence equations. A number of simplifications and transformations are required, such as dependence uniformization, before the system can be recognized as a SURE.

For the second problem, inferring accuracy models, we propose two approaches. Both boil down to computing the sum and the L2 norm of the filter's impulse response, but from dual points of view:

• In the time domain, we derive these sums by unrolling/evaluating the recurrence equations defining the system.

• In the frequency domain, we exploit the algebraic properties of transfer functions to compute those representing the propagation of each error. We then compute the required sums from the frequency response.

For the latter, we propose a simplified, more efficient version of the algorithm proposed by Ménard et al. [Analytical fixed-point accuracy evaluation in linear time-invariant systems]. Our experiments show that our models are obtained quickly, and their effectiveness is illustrated by comparison with simulations on real data.
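To make the time-domain route concrete, here is a small sketch of ours (not the thesis's tool): it unrolls the recurrence of a first-order filter y[n] = a·y[n-1] + x[n] to approximate the sum and the squared L2 norm of its impulse response, the two quantities the accuracy models are built from; for |a| < 1 these converge to 1/(1-a) and 1/(1-a^2).

```c
#include <assert.h>
#include <math.h>

/* Approximate the sum and squared L2 norm of the impulse response of
 * y[n] = a*y[n-1] + x[n] by unrolling the recurrence for `len` steps.
 * These sums drive analytical noise models: a quantization error of
 * variance s^2 injected at the input yields output noise power
 * s^2 * ||h||_2^2, and a constant bias mu yields output bias mu * sum(h). */
static void impulse_sums(double a, int len, double *sum, double *l2sq) {
    double h = 0.0;   /* current impulse-response sample h[n] = a^n */
    *sum = 0.0;
    *l2sq = 0.0;
    for (int n = 0; n < len; n++) {
        double x = (n == 0) ? 1.0 : 0.0;  /* unit impulse */
        h = a * h + x;                    /* unrolled recurrence */
        *sum += h;
        *l2sq += h * h;
    }
}
```

Unrolling until the remaining terms are negligible is exactly what makes the time-domain approach simple but potentially slow compared to the frequency-domain one.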
Finally, we show how the frequency-domain approach can be used upstream of word-length optimization to handle coefficient quantization, a problem generally ignored in other works.

Implementation Trade-offs for Stencils

In the second part of this thesis, we focus on the possible trade-offs for implementing iterative stencils on FPGAs. Iterative stencils (or simply stencils) are a computational pattern found in many applications, from scientific simulations to computer vision. Each application presents specific constraints depending on the domain size, the dependence pattern and the intrinsic characteristics of the computation. To a first approximation, the performance of a stencil is mainly determined by the computing resources used and by memory performance. Tiling, presented in Chapter 4, is an essential tool for optimizing both aspects, improving memory locality on the one hand and enabling parallelization at several levels on the other.

In Chapter 5, we propose a systematic method for implementing iterative stencils on FPGAs. Our method relies on a flexible architecture template, based on tiling and exposing various parameters:

• Peak performance can be controlled by adjusting the unrolling factor of the datapath. This enables trade-offs between throughput and area.

• Larger tiles can be used to reduce bandwidth requirements, at the price of higher local-memory usage.

• The iteration space can be tiled along a subset of its dimensions, to reduce bandwidth usage even further. The price to pay is a partial loss of control over local-memory usage, which becomes proportional to the size of each untiled dimension.
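As a minimal illustration of the tiling idea (spatial tiling of a single sweep only; the actual template also covers the time dimension and datapath unrolling), the sketch below, with made-up sizes, computes a 3-point stencil tile by tile, each tile reading a one-cell halo:

```c
#include <assert.h>

#define N 16
#define TILE 4

/* One sweep of a 3-point stencil, computed tile by tile. Each tile
 * [t, t+TILE) only needs a one-cell halo on each side, which is what
 * bounds the footprint of a local tile buffer in hardware. */
static void stencil_tiled(const float in[N], float out[N]) {
    for (int t = 0; t < N; t += TILE) {
        for (int i = t; i < t + TILE; i++) {
            float l = (i > 0) ? in[i - 1] : 0.0f;      /* halo cell */
            float r = (i < N - 1) ? in[i + 1] : 0.0f;  /* halo cell */
            out[i] = 0.25f * l + 0.5f * in[i] + 0.25f * r;
        }
    }
}
```

The tiled loop performs exactly the same operations as the untiled one, so the two can be checked against each other bit for bit.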
Moreover, we propose simple performance models, derived from a high-level analysis of system performance. These models can serve as a basis for design-space exploration. To validate our architecture and our models, we implemented our approach as a code-generation tool targeting Vivado SDSoC. For different performance targets, we identified several candidate solutions using our performance/area models. Our experiments demonstrate the good accuracy of our performance models. Our area models, although less precise, prove sufficient to estimate a priori the most interesting solutions. These models have thus proven their usefulness during the design phase.

Finally, during this work, we observed the importance of memory contiguity to reduce latency and benefit from the burst accesses offered by the bus. We therefore designed a memory layout specific to stencils. For now, this layout is only implemented for 2D stencils. Its generalization to higher-dimensional stencils would deserve further work.

Perspectives

Our work opens several perspectives. An obvious research direction would consist in extending our work on accuracy models to a larger class of programs, such as linear non-shift-invariant filters, or filters made of arbitrary operations. Another, perhaps more interesting, avenue would be to support non-polyhedral programs. For example, some algorithms, such as the fast Fourier transform, exhibit great regularity and static control flow without being representable with affine dependences. Error propagation in such algorithms still presents challenges, and we do not know whether unrolling can be avoided.
One could also imagine using analytical accuracy models in other contexts, for example to analyze quantization errors in floating-point programs, or to predict the impact of transient errors ("soft errors") or approximate operators on program correctness. A major difficulty is that, in such cases, the errors do not satisfy the same statistical properties as quantization noise in the fixed-point case.

Our work on stencils relies on a clear understanding of the major factors affecting their performance. Similar observations can be made for many algorithms. A more generic approach, targeting all algorithms amenable to tiling, could probably be proposed.

Our research on contiguous data placement in memory also opens perspectives. This problem, important in practice, has not been studied much. It raises interesting questions, such as how to reduce the number of required buffers, or whether local memories are needed to reorder inputs. We suspect the existence of interesting trade-offs between the sequentiality of data on the one hand and their contiguity on the other. However, this question would deserve further investigation.

Finally, the interaction between stencils and accuracy is difficult to study. The ability of FPGAs to handle data of arbitrary size is one of their major advantages, enabling significant area reductions for applications that can tolerate some accuracy degradation. Being able to characterize the impact of (for example) fixed-point formats on the accuracy of stencils would enable very interesting trade-offs.
Conclusion

When designing hardware accelerators, the major challenge lies in the size of the design space to explore, especially when design dimensions such as accuracy are considered. Since each application has specific requirements, a single solution cannot answer every need. In this thesis, we advocate a rigorous, model-based approach to hardware design. We have demonstrated the effectiveness of this strategy on two distinct problems: stencils and word-length optimization. As accelerators become widespread, the use of specific models will become essential to understand the impact of design choices on system behavior. Our work represents a step in that direction.

Chapter 1

Introduction

Context

From smartphones to data centers, computer systems now play a major role in most human activities. They are found in vastly different contexts, reflecting the diversity of their applications. We distinguish three main types of computing environments:

General-purpose computers, such as desktop workstations, are designed to run reasonably efficiently a number of applications (e.g., web browsers, office suites or video games). Their main feature is their flexibility, as they can run arbitrary programs installed or written by the end user.

Embedded systems, on the other hand, are dedicated to some specific task or set of tasks. They can be found within all sorts of systems and devices, from smartphones (where they typically handle encoding/decoding tasks) to the command system of spacecraft. Their characteristic is the highly constrained environment in which they operate, as they are usually subject to real-time, power, size, cost or fault-tolerance constraints (often at the same time).
Supercomputers are optimized towards High-Performance Computing (HPC) workloads, such as scientific simulations (e.g., climatology, seismology) or machine learning applications. Compared to general-purpose computers, the major difference is their significant computing power, as they must perform a large number of operations per second to meet performance requirements.

All these differences influence the way these systems are designed and programmed. For example, since they must support a large number of use cases, general-purpose computers are based on generic processor designs offering a predefined instruction set. Several design strategies are used to improve performance, such as adding multiple levels of cache or applying superscalar techniques in the processor. A modern processor may thus, for instance, schedule instructions dynamically based on available resources, or "guess" the value of a branching condition based on past executions. These complex mechanisms benefit both end users and programmers, since:

• The same instruction set may be used over several CPU generations, providing backward and forward compatibility between software and hardware.

• Compilers can translate programs written in a high-level programming language into sequences of elementary instructions, without exact knowledge of the supporting architecture.

• Compile- and run-time optimizations (from memory hierarchy to superscalar techniques) ensure that users get decent performance in most cases without excessive optimization efforts from the programmer.

These facilities have made possible the sophisticated software environments that we use daily. Unfortunately, these advantages come at a price: generic architectures cannot, by their very nature, provide optimal performance or efficiency for any particular application. Generic processors represent a convenient trade-off between performance and flexibility. While this is suitable for many applications, such is not always the case.
For instance, in resource-constrained embedded systems, generic processors may exceed the power and area budget for a given performance goal. The use of more efficient, special-purpose hardware accelerators is then necessary. Such accelerators, and the problem of their design, are at the core of this thesis.

Hardware Accelerators

In a broad sense, the term accelerator denotes any processing device that gives up some genericity to execute a type of computation more efficiently than a general-purpose processor (alongside which they are commonly used). Floating-point coprocessors, such as Intel's C8087 (introduced in 1980), constitute good examples. More generally, for the purpose of this discussion, we distinguish:

• Programmable accelerators, designed for a class of applications while retaining some level of programmability. Well-known examples include Digital Signal Processors (DSPs) and Graphics Processing Units (GPUs). DSPs are special-purpose processors that offer efficient support for common signal processing operations (e.g., dot products). GPUs, on the other hand, have evolved from domain-specific chips into powerful semi-generic computing platforms for data-parallel floating-point computations.

• Custom, fixed-function accelerators, on the other hand, are specifically designed for a single, well-defined task. They may be implemented as costly Application-Specific Integrated Circuits (ASICs), or on top of reconfigurable logic, such as Field-Programmable Gate Arrays (FPGAs).

Naturally, fixed-function implementations represent the highest level of specialization, and offer the greatest potential for optimization. Notice, though, that such architectures move the responsibility of hardware design closer to application developers. Since it is a notoriously difficult and costly endeavour, their adoption has long been mostly limited to applications with extreme constraints and requirements.
The last fifteen years, however, have seen a renewed interest in custom accelerators. This may be explained by several factors. First, our computing needs have increased significantly, partly due to the growing amount of data produced each day, and the need to process them. Secondly, this period coincides with a turning-point in the hardware industry, with the end of the traditional scaling "laws" that have driven its development for more than 40 years.

In particular, the breakdown of Dennard scaling, which stated that power density (W/cm^2) would remain constant as transistor density increased, has had significant impact on both software and hardware design. Since dynamic (transistor-switching) power is proportional to clock frequency, manufacturers could exploit reductions in processor size to raise frequencies from one generation to the next without increasing the power budget. This resulted in regular performance upgrades for single-threaded code. Nowadays, however, static power is no longer negligible compared to dynamic power, mainly due to current leakages, and this strategy is no longer applicable. Consequently, thermal dissipation is becoming a major issue. In fact, it is expected that, as transistor density continues to increase (albeit more slowly than before), a growing portion of integrated circuits will have to be turned off at any given time to stay below nominal thermal dissipation power (a phenomenon sometimes called "Dark Silicon"). Further improvements will then only come from better use of available transistors. Heterogeneous, accelerator-rich architectures are thus expected to become the norm.

Unfortunately, designing hardware accelerators is still significantly more difficult than writing software. Lowering the barrier to entry, for example by developing new tools and methodologies, is thus an important challenge to address the needs of tomorrow's computing.
Accelerator Design

In both embedded systems and HPC, accelerator design may be stated as an optimization problem. One either seeks to minimize resource usage under performance constraints, or to maximize performance under resource constraints. In particular, when designing custom accelerators, the design space is extremely large, as many factors can influence the quality of the design. For example, wordlength may be reduced to trade accuracy for lower area cost, or local memory usage may be increased to tackle bandwidth limitations. The number of solutions is so large that one cannot expect to come up with an "optimal" or near-optimal design at first try; a time-consuming Design-Space Exploration (DSE) phase is usually required. New methodologies are needed to enable and speed up this exploration.

Traditional hardware design relies on the use of low-level Hardware Description Languages (HDLs), such as VHDL and Verilog, to specify the architecture. These Register-Transfer Level (RTL) languages provide a poor level of abstraction; in particular, cycle accuracy is part of their semantics, as state changes happen synchronously at clock signal edges. While this level of control is sometimes required when designing accelerators, it typically hinders DSE by making it difficult to explore architectural variants.

In contrast, High-Level Synthesis (HLS) tools compile an algorithmic specification (usually written in C or C++) to a low-level (RTL) description. With this approach, many implementation details are handled automatically by the tool, based on target frequency, hardware platform and designer directives. This rise in abstraction allows much faster DSE, as far-reaching, system-level architectural changes can be implemented in a few lines of code. Consequently, HLS can be combined with methodologies based on code generation to explore a large number of design points in a short amount of time.
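To give a flavor of the input of such tools, the function below is ordinary C that an HLS tool could synthesize; the pragma uses Vivado HLS directive syntax and is simply ignored by a regular compiler (the example itself is ours, not taken from the thesis):

```c
#include <assert.h>

/* FIR-like dot product as an HLS tool would receive it: the tool infers
 * the datapath and schedule from this description; the pragma requests a
 * pipelined loop with an initiation interval of one (Vivado HLS syntax). */
static int dot8(const int a[8], const int b[8]) {
    int acc = 0;
    for (int i = 0; i < 8; i++) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];
    }
    return acc;
}
```

Changing the directive (or the loop structure) is enough to obtain a different architecture, which is precisely what makes code-generation-based DSE practical.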
Presentation of This Work

For efficient exploration, though, HLS alone is not sufficient. Tools are required to guide the designer and help him/her make the right implementation choices in each situation. As accelerators are used in different contexts (HPC vs. embedded systems), and since each application has unique characteristics (access patterns, arithmetic intensity, numerical stability), such tools must integrate domain-specific constraints to identify a suitable set of trade-offs between all performance metrics. This thesis focuses on such DSE methodologies for (i) fixed-point accelerators and (ii) FPGA accelerators for stencil computations. The document is organized as follows:

• The first two chapters are concerned with performance/accuracy trade-offs. Many applications can tolerate significant accuracy degradations before the quality of results is strongly affected. It is common to exploit this tolerance by converting floating-point applications to use fixed-point arithmetic, to benefit from its overall lower cost. Chapter 2 discusses this problem, and some methods to evaluate the accuracy degradation resulting from conversion to fixed-point. In Chapter 3 we present our contributions to the construction of analytical accuracy models for linear systems.

• The next chapters are concerned with implementation trade-offs for iterative stencil computations. Iterative stencil computations form a large class of algorithms with applications in scientific computing, embedded vision and more. They are presented in Chapter 4, along with implementation strategies. While many authors have proposed a "one size fits all" approach, in Chapter 5, we embrace the diversity of applications by proposing multiple architectural variants, along with associated performance models.

• We conclude in Chapter 6 with a review of our contributions and a discussion of potential perspectives.
Chapter 2

Accuracy Evaluation

Introduction

Floating-point arithmetic is based on a flexible approximation of real numbers offering sensible trade-offs between precision and representable range, freeing application developers from these concerns. Software programmers often forget about the complexity of the underlying machinery and take its almost universal support on general-purpose hardware for granted. However, because of its significant resource cost, floating-point arithmetic is not always an option for embedded system designers. Instead, they must settle on less convenient, but more cost-effective fixed-point implementations.

Implementing fixed-point computations is inherently challenging, as the programmer or designer must take extra care to avoid numerical overflows while retaining enough accuracy for application requirements. At the same time, he/she must also ensure that design concerns, such as power consumption and area budget, are correctly addressed. Reconciling all these constraints at once is a difficult task, and applications are usually first specified, prototyped and functionally validated in floating-point arithmetic, with floating-point to fixed-point conversion handled at a later stage in the design flow.

Floating-point to fixed-point conversion exposes trade-offs between performance (area cost, power consumption) and accuracy. Design goals can be formalized as a constrained optimization problem. For example, one may wish to maximize accuracy under some fixed area budget, or minimize area cost subject to an application-specific accuracy constraint. The process of solving such problems is called Word-Length Optimization (WLO). Finding an optimal or near-optimal solution usually implies exploring a large design space, especially in a hardware design context where datapaths can be tailored to arbitrary bit-widths.
During WLO, the cost and accuracy of each candidate implementation must be assessed to determine whether it represents an improvement over the best known solution. Quick accuracy evaluation is especially challenging, as the impact of numerical errors on the output can be hard to predict. For this reason, bit-accurate fixed-point simulations are often used, but their poor performance combined with the large number of simulations required to produce reliable accuracy estimates leads to significant iteration times. As a consequence, WLO is often performed semi-manually by expert designers, driving the exploration by identifying the most interesting design points. It is an error-prone, time-consuming task, taking up to 50% of overall design time [Accelerating fixed-point design for MB-OFDM UWB systems], which is often interrupted as soon as a satisfying solution is found, leading to suboptimal implementations.

Combinatorial optimization algorithms can be used to perform WLO in a more systematic manner. In practice, because of long simulation times, this choice supposes the availability of a more efficient accuracy evaluation method. Analytical techniques try to solve this problem by constructing an accuracy model from a floating-point or infinite-precision specification, allowing the accuracy degradation associated with a fixed-point implementation to be estimated almost instantly, without simulations. Such methods have the potential to considerably improve the applicability of fully automated WLO. Unfortunately, as we will see in this chapter, they are currently limited to one-dimensional signal processing kernels and cannot properly handle higher-dimensional filters such as image or video processing algorithms.

Hardware Representation of Real Numbers

Implementing numerical computations implies choosing a finite, explicit approximation of real numbers.
The two most popular options, fixed-point and floating-point arithmetic, have mostly opposite characteristics in terms of ease-of-use, hardware cost and numerical properties. After a brief review of fixed-point and floating-point arithmetic, this section describes their respective advantages and drawbacks, along with the trade-offs they expose.

Fixed-Point Arithmetic

In fixed-point arithmetic, real numbers are represented as integers, with an implicit scaling factor determining the position of the binary point. Concretely, let x be some arbitrary number and I_x its integral representation. The interpretation x̂ ≈ x is given by:

x̂ = radix^e × I_x

where radix is the base of the numeral system (usually 2) and e is a fixed exponent.

[Figure 2.1: Bit layout of a fixed-point number: sign bit S, integral bits b_m ... b_0 (weights 2^m ... 2^0) and fractional bits b_{-1} ... b_{-n} (weights 2^{-1} ... 2^{-n}).]
This is not the only solution: for example, both operands could be further shifted by one position to keep output wordlength under 6 bits, or rounding could be used instead of truncation to improve error bounds. Fixed-point multiplication illustrates well the challenges of fixed-point arithmetic. Consider the product v 1 ×v 2 , with v 1 , v 2 defined as in the previous paragraph. No alignment is required to perform the operation, as: Binary point position 1 0 0 1 1 0 1 1 0 1 v 1 v 2 ≫2 Quantized bits + 1 0 1 1 0 0 = Guard bit v 1 × v 2 = (2 -1 × 10011b) × (2 -3 × 01101b) = 2 -4 × (10011b × 01101b). The result can thus be computed without loss of accuracy, irrelevant of the scaling factors. However, fixed-point multiplication of same word-length numbers produces a result of double bit-width, which can lead to a phenomenon sometimes called bitwidth explosion. Additional truncations or roundings, called quantizations, must be introduced to avoid this problem. This leads to computational errors whose magnitude depends on the computation and the severity of the quantizations. Floating-Point Arithmetic Contrary to fixed-point arithmetic, where scaling factors are implicitly encoded in the computation, floating-point arithmetic uses explicit exponents in the representation itself in order to automatically scale to different ranges of values. It can be seen as a form of binary scientific notation. For example, whereas chemists refer to the Avogadro constant as: 6.02214086 × 10 23 mol -1 , it can also be written in binary form as: 1.11111110000110000101111 × 10 1001110 mol -1 . More formally, a floating-point number consists in a sign bit S, a signed exponent E and a fixed-point number M with 1-bit integral part, called the mantissa or significand. The represented value is: Most floating-point implementations are based on the IEEE754 standard and use the binary representation pictured in Figure 2.3. 
This encoding saves 1 bit by not storing the integral part of the significand, but inferring it from the fractional part and the exponent. The interpretation, detailed in Table 2.1, assumes that (most) floating-point numbers are stored in so-called normal form, which also ensures that they are represented with a maximum number of significand bits. The exponent e is stored as a biased integer e' such that e = e' - 2^{w-1} + 1. We write e = emin when e' = 0, and e = emax when e' = 2^w - 1. The layout of Figure 2.3 consists of a sign bit S (1 bit), a biased exponent field E (w bits) and a trailing significand field T (p - 1 bits).

While floating-point arithmetic can be emulated on top of integer arithmetic, performance is prohibitive. Consequently, most implementations rely on dedicated hardware support, usually in the form of a Floating-Point Unit (FPU), off-the-shelf operators or a custom datapath.

Comparison

In the following, we discuss the main differences between fixed-point and floating-point arithmetic with respect to programmability, hardware cost, availability and numerical properties.

Programmability

To the programmer or hardware designer, floating-point arithmetic offers many advantages in terms of simplicity, as floating-point hardware automatically performs the necessary rescalings to maximize accuracy and minimize the risk of overflows. In contrast, programming in fixed-point arithmetic often implies dealing with such problems manually, for example when multiplying two 8-bit unsigned fixed-point numbers with respective exponents -2 and -4. The programmer must manually keep track of the implicit factor of each datum in order to perform the right operation. Libraries such as SystemC (OSCI), ac_fixed (Mentor Graphics) and ap_fixed (Xilinx) can handle some of these concerns for the hardware designer by performing automatic rescalings given the format of operands.
However, fine-grained control of word-length often requires the introduction of manual quantizations, expressed as intermediary variables which clutter the specification with implementation details.

Area Cost and Power Consumption

Floating-point hardware implementations are significantly more costly than fixed-point implementations. For example, floating-point adders require pre-alignment logic (usually in the form of expensive barrel shifters), adder/rounding logic and normalization logic with leading-zero detection. This makes floating-point arithmetic prohibitive for area-constrained applications. The power usage of floating-point operators is also typically higher than that of the integer operators used in fixed-point arithmetic.

Availability

Because of their significant hardware cost, many embedded processors do not even feature FPUs. On these platforms, the use of fixed-point arithmetic is virtually mandatory.

Numerical Properties

In fixed-point arithmetic, the scaling factor is the weight of the least significant bit and corresponds to the smallest non-zero representable value, also called the quantization step. Along with word-length, it determines the range of representable numbers. For example, the b = (m + n + 1)-bit Q_{m,n} format has quantization step 2^{-n} and covers the range:

    [-2^{b-n-1}, 2^{b-n-1} - 2^{-n}].

This interval can be extended by increasing the size b of the representation or choosing a coarser quantization step. This necessary trade-off between range and accuracy is a major disadvantage of fixed-point arithmetic. In contrast, floating-point formats feature a variable exponent which allows them to represent a wide interval of numbers.
Specifically, the range of (normalized) non-negative values that can be represented by a b = (w + p)-bit IEEE754 floating-point number is:

    [2^{-(2^{w-1}-2)}, 2^{2^{w-1}} - 2^{2^{w-1}-p}].

Small values are encoded with a small exponent and considerable accuracy, while bigger exponents allow very large values to be represented, albeit possibly with larger errors. In other words, while fixed-point numbers have a fixed quantization step and bounded absolute errors, floating-point has quantization steps proportional to the magnitude of numbers and bounded relative errors. Floating-point quantization step size as a function of number magnitude is plotted in Figure 2.4.

The notion of dynamic range can be introduced to summarize these properties. Dynamic range is defined as the ratio between the largest and smallest representable values, and hence does not depend on the scaling factor of fixed-point formats. The dynamic ranges of fixed-point and floating-point formats are plotted in Figure 2.5 as a function of word-length. We can observe that for very small bit-widths, fixed-point numbers actually offer a larger dynamic range. But as word-length increases, this phenomenon is reversed and floating-point arithmetic gives much more flexibility.

Summary

Floating-point arithmetic is more versatile than fixed-point arithmetic and offers better numerical properties, with a wide range of representable values and small relative errors. Unfortunately, its significant hardware cost hinders its adoption in many embedded contexts, and fixed-point arithmetic must then be used.

Floating-Point to Fixed-Point Conversion

As many DSP processors lack support for floating-point arithmetic, or because of its significant area cost in FPGA and ASIC designs, many applications must be implemented in fixed-point. Unfortunately, programming in fixed-point arithmetic is a challenging task, as overflows and numerical errors may significantly degrade accuracy.
For this reason, these concerns are usually addressed at a later design stage: the application is specified and validated in floating-point arithmetic, before being converted to fixed-point. In this section, we discuss the problem of floating-point to fixed-point conversion. We then expose the techniques available to address this problem automatically, with a focus on accuracy evaluation, a step that is often the bottleneck of the process.

Problem Setup

The intent of float-to-fix is to assign to each value in a computation a fixed-point format minimizing the risk of overflow and ensuring that enough accuracy is retained with respect to the reference implementation/specification. That specification may be given as a block diagram, a Signal Flow Graph (see Section 2.4.1), or as C/C++/Matlab source code. Preventing overflows is usually achieved by performing range analysis on the input specification: the interval of each value (i.e., each signal in the graph, or each variable in the program) is determined in order to allocate enough bits in the most significant positions. Word-lengths are then adjusted until a suitable trade-off is found between performance/cost and accuracy. Depending on the context and target platform, the problem being solved can be stated differently:

• In a software/DSP setting, the challenge consists in finding a fixed-point specification that i) meets (or exceeds) accuracy requirements, ii) can be implemented using available CPU primitives. The design space is thus limited by the word-lengths and instructions proposed by the target processor.

• In a hardware design setting, floating-point to fixed-point conversion takes the form of a Word-Length Optimization (WLO), where one tries to minimize area cost or maximize accuracy, subject to some accuracy or cost constraint. The design space is usually considerably larger than in software fixed-point, as the datapath can be fine-tuned to arbitrary word-lengths.

WLO problems are combinatorial in nature.
It has been shown that a restricted analytical form of the multiple word-length assignment problem is of NP-hard complexity [START_REF] Constantinides | The complexity of multiple wordlength assignment[END_REF]. Let w denote a fixed-point configuration, λ(w) the associated error and C(w) the cost estimate of that implementation. We distinguish two forms of WLO problems. The accuracy maximization problem may be stated as:

    minimize λ(w) subject to C(w) ≤ C_max,

while the dual cost minimization problem is:

    minimize C(w) subject to λ(w) ≤ λ_max.

These optimization problems can be solved using combinatorial optimization algorithms, or a more ad-hoc, semi-manual exploration. At a high level, all approaches boil down to the same idea: starting from an initial configuration w_0, the current solution is iteratively refined to optimize the objective function. At each step, the cost C(w) and accuracy λ(w) are evaluated and a new configuration is chosen until some acceptance condition is reached (for example, the absence of progress), or until the designer in charge of the conversion is satisfied with the results.

To perform this optimization automatically and reach a good solution in reasonable time, sufficiently fast methods are required for evaluating the cost and accuracy of a design. Cost estimation is a challenging task, as the actual cost may depend on decisions made by the design tool after float-to-fix conversion, such as operator sharing. In practice, more-or-less comprehensive high-level cost estimates are used [3, 4].

While quantization noise can negatively impact the behavior of the system, overflows have even stronger consequences on numerical correctness. Word-Length Optimization mostly focuses on tweaking the size of the fractional part of fixed-point operands to approach the optimal solution. However, before entering the optimization loop, it is necessary to determine the size of the integral part in order, depending on the criticality of the application, to limit the risk or guarantee the absence of overflows. This analysis is called range estimation.
Range Estimation

In order to avoid overflows, it is necessary to ensure that the range of values spanned by each variable during the computation does not exceed the capacity of its representation. This range naturally depends on the inputs and can be estimated from their own range or from representative samples.

When the input range is known, any static analysis designed to compute safe variable bounds can be used. Interval and Affine Arithmetic have been extensively applied to this problem. Affine arithmetic can model exactly range propagation through a linear non-recursive program, but non-linearity or the presence of feedback loops gives rise to approximations. For LTI systems, the L1 norm of the transfer function (which can be computed by hand, or automatically from an adequate representation) gives precise information on the range of outputs. David Cachera and Tanguy Risset proposed a formal approach based on the polyhedral model and the (max, +) tropical algebra to compute ranges on affine loop nests operating on uni-dimensional arrays [5].

In general, without stringent restrictions on the nature of the system, any safe static method is bound to produce pessimistic over-approximations: computing the precise semantics of a program is an undecidable problem. Moreover, even when error bounds can be determined exactly (for example, using affine arithmetic in a basic block with linear operations), numbers close to the minimum or maximum values are unlikely to be observed in practice, as they correspond to statistical extremes. As a constrained example, consider the addition of 5 independent uniform random variables ranging over the interval [0, 1]. Their sum is obviously distributed over the interval [0, 5]. However, as evident from the plot of its probability density function (see Figure 2.6), values over 4 are unlikely to be observed - in fact, the probability is less than 1%.
One may choose to use saturating arithmetic and assume only values less than 4, without significantly affecting the results of the computation. However, purely static methods are unlikely to help the designer recognize such situations. Except in critical systems, where overflows are not acceptable, simulation is thus often preferred, or used in complement, to static analyses: provided that inputs are in sufficient number and statistically representative, measured bounds indirectly reflect signal characteristics, and are thus often much tighter than those obtained with static approaches.

Accuracy Metrics

Formulating an accuracy constraint supposes the choice of a particular metric to characterize performance degradations. Two main classes of metrics may be used. "Hard" metrics (error bounds): in critical systems, accuracy constraints are usually expressed as bounds on the worst-case error. "Soft" statistical metrics: in signal processing applications, quantization errors are commonly characterized by their noise power, defined as the second-order moment of the error:

    P(e) = µ_e² + σ_e²,

where µ_e and σ_e² denote the mean and variance of the variable e. Noise power is generally given in decibels (dB): P_log(e) = 10 log_10 P(e) dB. A related way to measure the relative magnitude of signals and errors is the Signal to Quantization Noise Ratio (SQNR). It is defined as:

    SQNR = P_log(Signal / Error) = P_log(Signal) - P_log(Error).

Signal power P_log(Signal) is generally known from representative inputs. Computing noise power or SQNR is thus equivalent. Finally, whereas noise power uses only the first two moments, estimates of higher-order moments give more information on the shape of the probability distribution and can be used to define even more fine-grained constraints [START_REF] Parashar | Shaping probability density function of quantization noise in fixed point systems[END_REF]. However, in the following, we will mostly consider noise power, as noise power and SQNR are the most widely used metrics.

Accuracy Evaluation

Accuracy evaluation is the process of evaluating the accuracy degradation occasioned by a fixed-point implementation.
Simulation

Given a fixed-point specification w, the obvious way to determine its accuracy is to perform bit-accurate fixed-point simulations with representative inputs and compare the results with the reference implementation. This approach produces reliable estimates and is easy to implement for any computation. However, it is also very time-consuming:

• Compared to floating-point or native integer operations, fixed-point simulations suffer from a large performance hit on general-purpose hardware.

• To produce reliable estimates of statistical metrics, this process must be repeated a large number of times to determine the statistical moments of the error with enough confidence.

Since accuracy evaluation is performed at every WLO step, the use of simulations is often a bottleneck limiting the depth of the design space exploration, leading to suboptimal implementations.

Analytical Approaches

In analytical methods, an accuracy model of the specification is constructed prior to WLO to avoid simulations and speed up accuracy evaluation. While the construction of the model may be relatively costly, it can be used to quickly determine the accuracy of any solution, thus considerably increasing the number of optimization steps that can be performed.

Analytical Accuracy Evaluation

Our contributions, exposed in the next chapter, focus on analytical accuracy evaluation. The principle of analytical accuracy evaluation is to derive an accuracy model from a floating-point specification. The statistical moments of quantization errors, viewed as random variables, are propagated through the computation to construct a symbolic expression of the overall noise power at the output of the system. Two kinds of models are required: first, the statistical properties of quantization errors need to be determined. Secondly, the overall impact of the system on these errors at the output must be captured by abstract models.
Current methods operate on dataflow models such as Signal Flow Graphs as an intermediate representation of the system (Section 2.4.1). These graphs are transformed with simple rewrite rules to introduce error sources (Section 2.4.2) representing quantization errors as additional inputs. Quantization noise models (Section 2.4.3) provide expressions for the mean and variance of these errors as a function of input and output precisions. The challenge then consists in constructing a noise formula representing the moments of errors at the output of the system. A variety of techniques have been proposed to achieve this goal, with different assumptions on the system. They are discussed in the rest of this section.

Signal-Flow Graphs

Signal Flow Graphs [START_REF] Robichaud | Signal flow graphs and applications[END_REF] (SFGs) are a flavor of synchronous data flow [START_REF] Lee | Synchronous data flow: Describing signal processing algorithm for parallel computation[END_REF] graphs used in the signal processing community to model discrete computations. Semantically, each node in an SFG represents a sequence of values, defined in terms of the node's predecessors. In particular, SFGs contain explicit delay operations, in the form of "register" nodes (usually marked z^-1): at any point in time, the output of these nodes is defined as their input at the previous clock cycle.

As an example, an SFG for a Finite Impulse Response (FIR) filter is shown in Figure 2.7. The input sequence x(n) is delayed through a series of register nodes which can collectively be seen as a shift register. The output y(n) is defined as the dot product of the content of the shift register and a vector of coefficients (b_i)_{0≤i≤3}. Alternatively, SFG nodes may be expressed as recurrence equations.
For example, the SFG in Figure 2.7 is a graphical representation of the following system:

    y(n)   = m_3(n) + p_2(n)        m_2(n) = b_2 δ_2(n)
    p_2(n) = m_2(n) + p_1(n)        m_3(n) = b_3 δ_3(n)
    p_1(n) = m_1(n) + m_0(n)        δ_3(n) = δ_2(n - 1)
    m_0(n) = b_0 x(n)               δ_2(n) = δ_1(n - 1)
    m_1(n) = b_1 δ_1(n)             δ_1(n) = x(n - 1)

SFGs are schedulable if every cycle contains at least one delay node, while graphs with 0-weight cycles do not represent any meaningful system. An SFG verifying this condition is unambiguously defined modulo initial conditions - the state of the system before the beginning of the computation. This validity condition may be seen as a restriction of the conditions [START_REF] Karp | The organization of computations for uniform recurrence equations[END_REF] given by Karp, Miller and Winograd for a system of uniform recurrence equations to be explicitly defined.

Figure 2.7: SFG of a FIR filter computing the formula y(n) = Σ_{i=0}^{3} b_i x(n - i). Nodes labeled z^-1 represent one-cycle delays and triangle-shaped nodes multiplication by a constant coefficient.

Some tools such as Id.Fix [START_REF] Cairn | ID.Fix[END_REF] build an SFG out of an annotated C program. After WLO, a C/C++ fixed-point implementation, using the ac_fixed library, is output. There are two main advantages to this approach. First, a source code implementation is more easily integrated into a custom validation framework than an SFG or a block diagram. Perhaps more importantly, this technique can be used in an HLS context, with WLO viewed as a source-to-source transformation.
In Figure 2.8, an implementation of the FIR filter of Figure 2.7 is given, as accepted by Id.Fix:

    #define N 4

    #pragma MAIN_FUNC
    float fir8() {
        #pragma DYNAMIC [-1, 1]
        float sample;
        #pragma OUTPUT
        float acc;
        int i;
        #pragma DELAY
        static float X[N];

        X[0] = sample;
        acc = X[N - 1] * b[N - 1];
        for (i = N - 2; i >= 0; i--) {
            acc += X[i] * b[i];
            X[i + 1] = X[i];
        }
        return acc;
    }

This code actually represents one iteration of the FIR. After parsing, the control flow of the top function (marked by the MAIN_FUNC pragma) is fully flattened and an acyclic data flow graph is built with a producer-tracking simulation. Finally, the DELAY pragma helps the tool insert the delay nodes, corresponding to dependences across consecutive function calls.

Error Sources

In a fixed-point implementation, an arithmetic operator can introduce multiple errors: inputs may need to be quantized to fit the operator's format, and the precision of the output may also be reduced to limit bit-width growth. Quantizations may be expressed explicitly as additional operations, as shown in Figure 2.9. In analytical accuracy evaluation, though, round-off errors are modeled as additive noise perturbing an infinite-precision signal. This is usually reflected through a graph transformation: each quantization is replaced with an addition between the original signal and the quantization error. The virtue of this transformation is to reframe quantization errors as new system inputs, which can be modeled as a stochastic process known as Pseudo Quantization Noise (PQN).

Figure 2.9: Quantizations Q_0, Q_1, Q_2 applied to the inputs and output of an addition z = x + y (left), and the equivalent graph where each quantization is replaced by the addition of an error source e_0, e_1, e_2 (right).

Pseudo Quantization Noise Model

Analytical accuracy evaluation seeks to predict the influence of quantization errors on the output of the system. At first, this may appear like an infeasible task, since actual errors depend on system inputs.
As it turns out, in the vast majority of cases, quantization errors can be statistically characterized from the precision of the original and quantized signals, and from the mode of quantization (truncation, rounding or convergent rounding). Moreover, this Pseudo Quantization Noise is uncorrelated with the input signal and other error sources, which further simplifies the analysis.

For example, consider the truncation of some infinite-precision signal x to x', with quantization step q = 2^-n. We have x' = x + e_x, with error e_x = x' - x lying within the interval I = ]-q, 0[. It can be shown [START_REF] Widrow | Statistical theory of quantization[END_REF] that, if the quantization step q is sufficiently small, e_x can be modeled as a uniformly distributed variable e_x ∼ U(I) such that signal x and quantization noise e_x are uncorrelated. We can thus give the mean and variance of the error:

    E(e_x) = -q/2        Var(e_x) = q²/12

The model above captures the distribution of errors as a continuous probability distribution, and is thus suitable for modeling the quantization of an infinite-precision (analog, floating-point) signal to fixed-point precision. Using rounding instead of truncation leads to a similar model with I = ]-q/2, q/2] and E(e_x) = 0, whereas discrete distributions can be used to characterize round-off errors between fixed-point formats [START_REF] Constantinides | Truncation noise in fixedpoint sfgs [digital filters[END_REF]. Noise models for different configurations are shown in Figure 2.10.

    Quantization Mode    | Mean                            | Variance
                         | discrete           | continuous | discrete             | continuous
    ---------------------+--------------------+------------+----------------------+-----------
    Truncation           | -q/2 (1 - 2^-k)    | -q/2       | q²/12 (1 - 2^-2k)    | q²/12
    Rounding             | -q/2 × 2^-k        | 0          | q²/12 (1 - 2^-2k)    | q²/12
    Convergent Rounding  | 0                  | 0          | q²/12 (1 - 2^-2k)    | q²/12

Figure 2.10: PQN characteristics based on input/output signal precision and quantization mode. q represents the quantization step and k the number of eliminated bits when converting between fixed-point formats.
Note that when k → ∞, the discrete model converges towards the continuous model.

Operator-Level Noise Propagation

Under certain conditions, noise mean and variance can be naturally propagated through linear operations (addition of signals and multiplication by a constant). In particular, if x' = x + e_x and y' = y + e_y, then:

    λx' + y' = (λx + y) + e_{λx+y}    where e_{λx+y} = λe_x + e_y.

Thanks to the linearity of the expected value operator, E(e_{λx+y}) = E(λe_x + e_y) = λE(e_x) + E(e_y), and the mean of the error at the output can thus be computed from the average error of the input signals.

Variance propagation is a bit more subtle. If X and Y represent two uncorrelated random variables, then:

    Var(λX + Y) = λ² Var(X) + Var(Y).

When input errors are known to be independently distributed, this formula may be used to derive the output error variance of linear operators. Unfortunately, this is not true when noises are, in fact, correlated. This may happen even when all noise sources are independently distributed. For example, consider the degenerate case where λ = 1 and X = Y. X and Y denote the same random variable and are thus obviously correlated. We have:

    Var(X + Y) = Var(2X) = 4 Var(X) ≠ Var(X) + Var(Y) = 2 Var(X),

which only holds if Var(X) = 0. Operator-level propagation of noise statistical parameters is used by Tourreilles et al. [START_REF] Tourreilles | A study on discrete wavelet transform implementation for a high level synthesis tool[END_REF] to implement a Word-Length Optimization pass within the GAUT HLS framework. However, the assumption is made that errors are always independent. In some cases, this can lead to large over- or under-approximations of noise power.

The following conditions together guarantee the absence of correlations between errors within an SFG, and can thus be used to verify whether the simple noise propagation method exposed above is applicable.

• All noise sources are uncorrelated.
• Distinct paths between the same two nodes contain different numbers of delays.

The second condition ensures that a single value cannot contribute twice to the same error. It is verified, for example, by the FIR filter in Figure 2.7. However, many computations do not possess this property; in such cases, accuracy estimation must capture correlations between noises to produce reliable results. This is a global property of the graph and cannot be handled at the operator level.

Another major difficulty is the presence, in many applications, of non-linear operations such as multiplications between signals. Indeed, we have:

    (x + e_x) × (y + e_y) ≈ xy + (x e_y + y e_x),    (2.1)

(where the term e_x e_y is deemed negligible and voluntarily omitted). The error term x e_y + y e_x depends not only on the error signals e_x and e_y, but also on the signals x and y. To address this issue, many techniques are restricted to linear systems, while others try to fall back to the linear case through linearization.

Noise Propagation in Linear Systems

Linear systems form a convenient framework for the study of error propagation. In such algorithms, signals and errors do not interfere and may be considered independently. Let T be a linear system, x an input signal and x̃ = x + ε the same input perturbed by some random noise ε. By definition of linearity:

    T(x̃) - T(x) = T(ε)

In other words, the propagated error is simply the output of the system when applied to the input error. A linear system is called Linear Time-Invariant (LTI) if a translation of its input by a constant offset results in the same offset at the output. Formally:

    ∀δ, T(τ_δ(x)) = τ_δ(T(x)),

where τ_δ represents the translation of a signal by δ: τ_δ(x)(n) = x(n + δ). The temporal behavior of an LTI system y = T(x) is fully characterized by its impulse response h, i.e., the output of the system when stimulated by a unit impulse. For any input x: y = h * x, where * denotes the convolution operation.
Equivalently, the transfer function H, defined as the Z-transform of the impulse response h, contains the same information in the complex-frequency domain, where multiplication replaces convolution: Y = HX.

A well-formed single-input, single-output (SISO) SFG where the only operations (besides delays) are additions and multiplications by scalar values always represents an LTI system. We give an intuition of the proof, by inductive reasoning on the structure of the graph, in the case of a non-recursive system:

• If the output node is also the input node, then the computation is the identity transformation x(t) ↦ x(t), which is obviously LTI.

• Otherwise, suppose the hypothesis true for each sub-SFG induced by all the ancestors of one of the output's predecessors (i.e., the sub-graphs computing the operands of the output node). Proceeding by case analysis:

- If the output is a delay, let T be the system represented by the sub-graph corresponding to its unique predecessor. Then, the SFG implements the system τ_{-1} ∘ T.

- Similarly, if the output is a linear operation (x, y) ↦ λx + y, let T_x and T_y be the systems corresponding to its operands. The implemented system is λT_x + T_y.

For example, the FIR filter represented by the SFG in Figure 2.7 is an LTI system. In general, a multiple-input, multiple-output (MIMO) SFG verifying the same conditions as above (well-formedness, linear operators) does not represent an LTI system per se, but can be modeled as a combination of LTI systems. More precisely, let T: (x_1, ..., x_m) ↦ (y_1, ..., y_n) be the mapping between signals represented by the SFG. For any i ∈ {1, ..., n}, there exist m LTI systems T_{i,1}, ..., T_{i,m} such that:

    y_i = Σ_{j=1}^{m} T_{i,j}(x_j).

In other words, each input contributes additively to each output. In the temporal and frequency domains, we may also write:

    y_i = Σ_{j=1}^{m} h_{i,j} * x_j        Y_i = Σ_{j=1}^{m} H_{i,j} X_j.

This observation is key to analyzing error propagation in an LTI system.
Indeed, after modeling quantization errors as additional inputs (see Section 2.4.3), the SFG can be seen as a MIMO system. Since PQN-modeling only introduces additions, the propagation of each error to each output can be modeled as an LTI system, and thus be fully captured by the corresponding impulse response or transfer function. This approach is used in [START_REF] Constantinides | The multiple wordlength paradigm[END_REF] to implement automatic word-length optimization on an annotated Simulink block diagram using transfer functions. However, the computation of these transfer functions is not detailed, and may not be automatic. Menard et al. [START_REF] Menard | Automatic floating-point to fixed-point conversion for dsp code generation[END_REF] proposed a similar approach based on a graph algorithm computing the transfer functions. The SFG is decomposed into acyclic subgraphs whose transfer functions are recursively computed and combined into a single one modeling the propagation of each noise source.

Noise Propagation in Non-Linear Systems

As mentioned in Section 2.4.4, non-linear operations such as multiplication between signals introduce a problematic dependency between signals and noise propagation. Constantinides et al. [START_REF] Constantinides | Perturbation analysis for word-length optimization[END_REF] proposed to recast non-linear systems as linear time-varying systems to apply some results on LTI systems to non-linear algorithms. Their approach supposes that each node represents a differentiable operation:

    y(t) = f(x_1(t), ..., x_n(t)).

Since error values ε_1(t), ..., ε_n(t) are small in comparison with signals, a first-order Taylor approximation of the output error is given by:

    ε(t) ≈ (∂f/∂x_1)(t) ε_1(t) + ... + (∂f/∂x_n)(t) ε_n(t)

For example, if f(x_1, x_2) = x_1 × x_2:

    ε(t) ≈ x_2(t) ε_1(t) + x_1(t) ε_2(t)

(Note that this expression is essentially the same as Equation 2.1.)
This small-signal model is a linear function of input errors with time-varying coefficients: as evidenced by the above example, the partial derivatives depend on the value of t. To account for this fact, values of the derivatives are computed numerically through simulation with sufficiently large representative inputs. The SFG is then transformed into its corresponding small-signal model, with derivatives as input coefficients. For each noise source, another simulation is run with a noise of known mean and variance as input. The statistical moments of the output are then computed, and linearity is used to i) scale the results to noises of different mean and variance, ii) build an accuracy model reflecting the additive contribution of each noise source as a function of fixed-point encodings.

This method can be seen as a hybrid simulation-based and analytical method: whereas simulations are used to construct the accuracy model, none are required during accuracy evaluation. Unfortunately, the authors make the unrealistic assumption that variance contributions of different noise sources can be summed, implicitly supposing that they are uncorrelated. There is no validation of accuracy estimates against simulations. Their propagation model is thus similar to that of Tourreilles [START_REF] Tourreilles | A study on discrete wavelet transform implementation for a high level synthesis tool[END_REF] and probably of limited applicability.

In other approaches [START_REF] Menard | Automatic SQNR determination in non-linear and non-recursive fixed-point systems[END_REF][START_REF] Rocher | Analytical accuracy evaluation of fixed-point systems[END_REF], a system is characterized by its time-varying impulse response. Expressions of noise power are then derived from signal statistics (cross-correlation, second-order moments) computed after a single floating-point simulation.
Finally, an approach based on Affine Arithmetic [START_REF] Caffarena | SQNR estimation of non-linear fixed-point algorithms[END_REF] has been proposed. We believe this approach to be fundamentally tied to that of Rocher et al. [START_REF] Rocher | Analytical accuracy evaluation of fixed-point systems[END_REF], with correlation between signals captured by a different formalism.

System-Level Approaches

To handle large systems made of several sub-components and manage the combinatorial explosion of the design space, a hierarchical, divide-and-conquer approach may be beneficial [START_REF] Parashar | System-level approaches for fixed-point refinement of signal processing algorithms[END_REF]. The WLO process is decomposed into sub-problems where each component is assigned an accuracy budget. The issue is that the output noise of sub-systems is not uniformly distributed. Its statistical distribution must be captured by different means, for example with PDF-shaping [START_REF] Parashar | Shaping probability density function of quantization noise in fixed point systems[END_REF] or by its power spectral density [START_REF] Barrois | Leveraging power spectral density for scalable system-level accuracy evaluation[END_REF].

Limitations of Analytical Accuracy Evaluation

The analytical accuracy evaluation techniques overviewed in this chapter are intrinsically limited by the use of SFGs as an intermediate representation. Indeed, this representation can only compactly model one-dimensional systems and is thus not suitable for image and video processing applications. While SFGs can in fact be built from such algorithms by unrolling the full computation, their size is proportional to the dimensions of the image/scene (whereas the SFG of an IIR filter does not depend on the length of the sequence), which leads to severe scalability issues when constructing the accuracy model.
In trivial cases, such as the convolution of an image by a mask of coefficients, this problem can be sidestepped by restricting the analysis to the computation of a single element. For example, the kernel of a Gaussian blur filter may be extracted into a C function computing one output pixel from its input window, which is seen by tools such as ID.Fix [START_REF] Cairn | ID.Fix[END_REF] as a multiple-input, single-output LTI system. Assuming that input quantization noise is spatially uncorrelated and identically distributed, as per the Widrow hypothesis, output noise does not depend on the position in the image, and the accuracy model built for this function may be used to estimate the accuracy of the full algorithm. However, this strategy suffers from several shortcomings. The first one is that output noise is almost always spatially correlated, even when input noise is not. For example, consider two adjacent pixels in the output of the Gaussian filter: since they are computed from overlapping windows of inputs, a large error in one value can strongly affect both results, and the errors are thus correlated. This is not a problem when processing a single filter, as noise power or SQNR can still be obtained from the statistical moments determined by the accuracy model. However, it means that the trick above cannot be repeatedly applied to handle image processing pipelines made of the composition of multiple filters. Another issue is that more complex cases, such as recursive algorithms, cannot be conveniently expressed as a simple function as above. Unfortunately, the scalability issues of current methods are even more important for these cases, as the number of propagation paths to be considered grows exponentially with the size of the image. Finally, the impact of the quantization of constant coefficients is not fully explored by previous approaches.
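The extracted per-pixel function is not reproduced here. Purely as an illustration, such a kernel could look like the following hypothetical sketch, using 3x3 binomial weights as stand-ins for actual Gaussian coefficients:

```c
#include <math.h>

/* Hypothetical per-pixel Gaussian blur kernel (illustration only):
   one output pixel computed from a 3x3 input window with binomial
   weights summing to 1. */
float gaussian_kernel(float w[3][3]) {
    static const float k[3][3] = {
        { 1.0f/16, 2.0f/16, 1.0f/16 },
        { 2.0f/16, 4.0f/16, 2.0f/16 },
        { 1.0f/16, 2.0f/16, 1.0f/16 },
    };
    float acc = 0.0f;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            acc += k[i][j] * w[i][j];
    return acc;
}
```

Under the PQN model, each of the nine window reads can be annotated with an independent noise source, and the function analyzed as a 9-input, single-output LTI system.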
In methods handling LTI systems [START_REF] Constantinides | The multiple wordlength paradigm[END_REF][START_REF] Menard | Automatic floating-point to fixed-point conversion for dsp code generation[END_REF][START_REF] Constantinides | Wordlength optimization for linear digital signal processing[END_REF][START_REF] Menard | Analytical fixed-point accuracy evaluation in linear time-invariant systems[END_REF], it is usually assumed that the sensitivity of the transfer function to the quantization of coefficients has been assessed before floating-point to fixed-point conversion. Accuracy evaluation tools should also help the designer in this process. Solutions to some of these problems are discussed in the next chapter.

Chapter 3

Improving Applicability of Source-Level Accuracy Evaluation

Introduction

In the previous chapter, we exposed the Word-Length Optimization and accuracy evaluation problems. We described a range of analytical techniques to derive closed-form accuracy models from floating-point specifications. We saw that such models can considerably speed up design space exploration and allow for better implementations; however, their elaboration requires a precise modeling of the computation, a technical challenge that currently limits the scope of analytical methods to small sets of problems. Recent work [REFs] has focused on statistical modeling of decision (branching) operators to handle non-static, data-dependent control flow. To our knowledge, noise propagation through arithmetic operations with singularities, such as division, is still difficult to capture reliably. In this chapter, we extend the applicability of analytical approaches in another direction. Specifically, we add support for multi-dimensional algorithms, such as image or video processing filters. As a first step in this endeavor, we focus on Linear Shift-Invariant (LSI) filters, a multi-dimensional generalization of LTI systems.
This class of computations is exemplified by the Deriche edge detector, a recursive image filter beyond the reach of previous approaches. This algorithm is discussed in Section 3.2. In Section 3.3, we proceed with a more formal account of LSI systems. Our contributions per se are described in Section 3.4. They can be summarized as follows:

• We replace SFGs with Multi-Dimensional Flow-Graphs (MDFGs) as an intermediate representation of the system/program. This representation allows us to capture regular access patterns over multiple dimensions and is thus well-suited to model the class of computations we target.

• We propose a recursive algorithm to efficiently compute all the multi-dimensional transfer functions within an MDFG. These transfer functions compactly model the propagation of quantization noise between any two operations.

• We (partially) address the problem of inferring MDFGs from source code specifications. Specifically, we borrow a powerful dependence analysis technique from the polyhedral compilation toolset to transform affine loop-nests into systems of recurrence equations. The MDFG is retrieved through a series of equational, semantics-preserving transformations.

• Finally, we present a methodology to handle the quantization of coefficients prior to Word-Length Optimization, by automatically computing the frequency response of the modified system.

We present experimental results in Section 3.5. We discuss future work, extensions and some open problems in Section 3.6 and conclude in Section 3.7.

Motivating Example: Deriche Filter

The Deriche or Canny-Deriche edge detector is a recursive image filter that cannot be handled by current techniques. It constitutes a good example of a Linear Shift-Invariant algorithm. Like most edge detection techniques, the Deriche filter proceeds by computing the gradient field of the image. Horizontal and vertical gradients $G_x$ and $G_y$ are computed independently.
By symmetry, computing $G_y$ is the same as computing $G_x$ on the transpose of the image: from now on, we thus focus on the horizontal gradient $G_x$ to simplify the discussion. The algorithm can be decomposed into two groups of recursive passes. The image is first processed in both horizontal directions (left-to-right and right-to-left). The results are then summed and the output is processed similarly along the vertical axis. More precisely, the computation can be described by the following equations, where $I$ represents the input image:

$$x_1(i, j) = a_1 I(i, j) + a_2 I(i-1, j) + b_1 x_1(i-1, j) + b_2 x_1(i-2, j)$$
$$x_2(i, j) = a_3 I(i+1, j) + a_4 I(i+2, j) + b_1 x_2(i+1, j) + b_2 x_2(i+2, j)$$
$$x = x_1 + x_2$$
$$y_1(i, j) = a_5 x(i, j) + a_6 x(i, j-1) + b_1 y_1(i, j-1) + b_2 y_1(i, j-2)$$
$$y_2(i, j) = a_7 x(i, j+1) + a_8 x(i, j+2) + b_1 y_2(i, j+1) + b_2 y_2(i, j+2)$$
$$G_x = y_1 + y_2$$

These equations describe the flow of data along with the actual operations. Some of them are recursively defined ($x_1$, $x_2$, $y_1$ and $y_2$); for this specification to be computable, their value must hence be explicitly given outside some region. We simply define each of these signals to be 0 at any $(i, j) \notin [0; W-1] \times [0; H-1]$, where $W$ and $H$ are symbolic constants representing the dimensions of the image. The expression of the coefficients $a_1, \ldots, a_8$ and $b_1, b_2$ depends on a scalar parameter $\alpha > 0$, defining the amount of smoothing applied prior to gradient computation. With other edge detectors, such as the Sobel filter, a smoothing pass is usually needed as a pre-processing phase to remove high-frequency noise that could cause false edge detections. In the Deriche filter, low-pass filtering and gradient computation are combined into a single step. The coefficients of the recurrence equations are defined such that the frequency response of the filter reflects the combination of the two phases.
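As an illustration, the first (left-to-right) pass over one image row could be written as follows. This is our own sketch: coefficient values are passed as parameters, since their actual expressions as functions of $\alpha$ are not reproduced here.

```c
#include <math.h>

#define ROWLEN 8   /* row length used in this sketch */

/* Left-to-right pass over one image row:
   x1(i) = a1*I(i) + a2*I(i-1) + b1*x1(i-1) + b2*x1(i-2),
   with out-of-domain values taken as 0. */
void deriche_ltr_row(const double in[ROWLEN], double x1[ROWLEN],
                     double a1, double a2, double b1, double b2) {
    for (int i = 0; i < ROWLEN; i++) {
        double im1 = (i >= 1) ? in[i - 1] : 0.0;
        double xm1 = (i >= 1) ? x1[i - 1] : 0.0;
        double xm2 = (i >= 2) ? x1[i - 2] : 0.0;
        x1[i] = a1 * in[i] + a2 * im1 + b1 * xm1 + b2 * xm2;
    }
}
```

The right-to-left pass is symmetric, iterating downward over $i$ and using positive offsets.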
The main benefit is that the number of operations per pixel is not affected by the amount of smoothing required. In a non-recursive implementation with a convolution kernel, the size of the mask can vary greatly depending on the value of $\alpha$. This makes the Deriche filter a great fit for noisy images requiring a large amount of filtering. Finally, this algorithm exhibits interesting signal-processing properties: as the smoothing (i.e., noise filtering) filter is recursively implemented, quantization noise itself tends to be filtered out in later computations. These qualities make the Deriche filter an interesting choice for limited-wordlength implementations on DSP processors or FPGAs, whose design phase often includes a floating-point to fixed-point conversion step. Unfortunately, earlier analytical accuracy evaluation methods are unable to construct an accuracy model for the Deriche filter or similar algorithms. Indeed, SFGs can only represent regular computations such as Deriche by fully flattening the control flow, thus distinguishing the computation of individual pixels and intermediary values. This results in a very large graph, where the number of propagation paths increases exponentially with each dimension of the image because of recursion. Current techniques are thus affected by severe scalability issues when processing multi-dimensional algorithms. Besides multi-dimensionality, Deriche exhibits another feature that cannot be appropriately captured by SFGs: non-causality. Fixing some column index $j$, let us write $x_2(i, j) = x_{2,j}(i)$. The equation $x_{2,j}$ defines an LTI system, specifically an Infinite Impulse Response (IIR) filter. However, interpreting $i$ as the time dimension, the processing order imposed by the recurrence is reversed: $x_{2,j}(i)$ depends on $x_{2,j}(i+1)$. This may be viewed as a dependence on "future" outputs, and could be modeled as an SFG by reversing the interpretation of delay nodes.
Now, consider the summation of $x_{1,j}$ and $x_{2,j}$ in $x = x_1 + x_2$: as the summation of two LTI filters, it is also LTI, but each output now depends simultaneously on the "past" and the "future". Signal Flow Graphs are designed to model streaming computations, and cannot handle such dependence patterns. In other words, some one-dimensional LTI filters cannot be compactly represented as SFGs either. In the following, we address these difficulties by using a more suitable representation that does not require any unrolling/flattening and does not impose a single, synchronous execution order. The class of systems currently supported by our approach, called Linear Shift-Invariant systems, is described in the next section.

Linear Shift-Invariant Systems

Some basic notions on LTI systems have already been defined in the previous chapter. However, in order to make the description of our contributions clearer, we want to take the time to discuss LTI and LSI systems in a more formal way.

Signals

An n-dimensional signal (or nD-signal for short) is a mapping from n-dimensional coordinates to scalar values. For example, a time-varying signal is a 1D-signal mapping time to values, whereas a gray image is a 2D-signal from pixel coordinates to intensity levels. Signals can be defined on continuous ($\mathbb{R}^n$) or discrete ($\mathbb{Z}^n$) domains. In computers, though, we almost always process discrete signals. In the following, an nD-signal thus denotes a function $\mathbb{Z}^n \to \mathbb{R}$. If a signal $x$ is only defined on some subset $D \subset \mathbb{Z}^n$, we extend it to $\mathbb{Z}^n$ by fixing $x(\vec v) = 0$ if $\vec v \notin D$. Operations on real numbers can be "lifted" to signals in a natural way. In particular, if $x$ and $y$ are nD-signals and $\lambda \in \mathbb{R}$ is a scalar constant, we define addition and scalar multiplication of signals as $(x + y)(\vec u) = x(\vec u) + y(\vec u)$ and $(\lambda x)(\vec u) = \lambda(x(\vec u))$. Equipped with these two operations, $\mathbb{Z}^n \to \mathbb{R}$ is a vector space over $\mathbb{R}$. Another important operation is the shifting of a signal by a constant offset.
Let $x$ be an nD-signal and $\vec u \in \mathbb{Z}^n$ a vector of coordinates. The translation of $x$ by $\vec u$, denoted $\tau_{\vec u}(x)$, is defined as $\tau_{\vec u}(x) = x \circ \Delta_{\vec u}$, where $\Delta_{\vec u}$ is the translation of coordinates $\vec v \mapsto \vec v + \vec u$. In other words, $\tau_{\vec u}(x)(\vec v) = x(\vec v + \vec u)$.

Linear Shift-Invariant Systems

A multidimensional system (or filter) $T$ is a dimension-preserving transformation between signals: it transforms an n-dimensional input into an n-dimensional output. It is thus a map $T : (\mathbb{Z}^n \to \mathbb{R}) \to (\mathbb{Z}^n \to \mathbb{R})$. $T$ is called Linear, Shift-Invariant (LSI) if it verifies the following properties:

Linearity: For any scalar $\lambda \in \mathbb{R}$ and any inputs $a, b \in (\mathbb{Z}^n \to \mathbb{R})$: $T(\lambda a + b) = \lambda T(a) + T(b)$.

Shift-Invariance: For any vector $\vec u \in \mathbb{Z}^n$, $T(\tau_{\vec u}(x)) = \tau_{\vec u}(T(x))$.

The linearity requirement implies that $T$ is a linear map in the usual sense, i.e., it preserves the vector space structure. Shift-invariance means that it also preserves shifts: applying a shift before or after $T$ produces the same result. In other words, LSI filters commute with translation of signals.

Examples

• Any LTI system, such as the FIR filter, is also LSI.

• The Deriche filter, like many image processing algorithms, is a two-dimensional LSI system. This is a sensible property for an edge detector: indeed, the gradient operator $\vec\nabla$ is linear, and edge localization should be translation-invariant.

Algebraic Structure of LSI Systems

In Section 3.2, we saw that the 2D Deriche filter is made of compositions and summations of simpler 1D LTI filters, operating over rows and columns in both directions. LTI and LSI systems are stable under linear operations and composition, which guarantees that the Deriche kernel is indeed linear and shift-invariant. More precisely, let $T$, $T'$ be two filters of the same dimensionality $n$, $x$ be an nD-signal, $\lambda$ a scalar value and $\vec d$ a vector of $\mathbb{Z}^n$. We define the following operations:

Addition: $(T + T')(x) \equiv T(x) + T'(x)$.

Multiplication by a scalar: $(\lambda T)(x) \equiv \lambda(T(x))$.
Composition: $(T' \circ T)(x) = T'(T(x))$.

Note that these operations are not the same as those defined for signals: they map systems to systems, not signals to signals. One easily proves that the space of LSI filters is stable (or closed) under these operations, i.e., their result is always linear and shift-invariant. Moreover, composition is a bilinear operation:

$$(\lambda x + y) \circ z = \lambda(x \circ z) + y \circ z \quad \text{and} \quad z \circ (\lambda x + y) = \lambda(z \circ x) + z \circ y$$

As a direct consequence, observe by setting $\lambda = 1$ that composition distributes over addition. Finally, the "empty" system $id_n : x \mapsto x$ is an identity element with respect to composition. These properties equip LSI systems with the structure of a unital algebra over $\mathbb{R}$, i.e., a vector space with a bilinear product and a multiplicative identity.

Impulse Response

An LSI system is characterized by its impulse response, i.e., the output of the system for a unit impulse. An n-dimensional unit impulse is an nD-signal defined as:

$$\delta_n(\vec x) = \begin{cases} 1 & \vec x = \vec 0 \\ 0 & \text{otherwise} \end{cases}$$

Let $h_T = T(\delta_n)$ (or just $h$, where $T$ is clear from context) denote the impulse response of $T$. It can be shown that for any signal $x$, $T(x) = h_T * x$, where $*$ denotes the convolution operation:

$$(x * y)(\vec v) = \sum_{\vec w \in \mathbb{Z}^n} x(\vec v - \vec w)\, y(\vec w)$$

Examples

• The impulse response of a FIR filter is given by its coefficients. Indeed,

$$h(n) = \sum_{i=0}^{N} b_i\, \delta(n - i) = \begin{cases} b_n & 0 \le n \le N \\ 0 & \text{otherwise} \end{cases}$$

In other words, a FIR filter computes the convolution of its input with its coefficients.

• In general, the impulse response of an n-dimensional recursive LSI system is an n-dimensional surface with infinite support. For example, the truncated impulse response of the Deriche filter, centered around the origin, is shown in Figure 3.1. This impulse response corresponds to the mask that should be used in a non-recursive implementation to approximate the same frequency response.
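The identity $T(x) = h_T * x$ can be checked numerically. The sketch below (our own illustration) measures the impulse response of a small 3-tap FIR filter by feeding it a unit impulse, then verifies that explicit convolution with $h$ matches direct filtering:

```c
#include <math.h>

#define LEN 16

/* A 3-tap FIR filter: y(n) = 0.25 x(n) + 0.5 x(n-1) + 0.25 x(n-2),
   with x(n) = 0 for n < 0. */
void fir3(const double x[LEN], double y[LEN]) {
    for (int n = 0; n < LEN; n++) {
        double xm1 = (n >= 1) ? x[n - 1] : 0.0;
        double xm2 = (n >= 2) ? x[n - 2] : 0.0;
        y[n] = 0.25 * x[n] + 0.5 * xm1 + 0.25 * xm2;
    }
}

/* Causal convolution y = h * x restricted to indices [0, LEN). */
void convolve(const double h[LEN], const double x[LEN], double y[LEN]) {
    for (int n = 0; n < LEN; n++) {
        y[n] = 0.0;
        for (int k = 0; k <= n; k++)
            y[n] += h[k] * x[n - k];
    }
}

/* Max |T(x) - h*x| where h is measured on a unit impulse. */
double conv_identity_error(const double x[LEN]) {
    double delta[LEN] = {1.0};   /* unit impulse, remaining entries 0 */
    double h[LEN], direct[LEN], viaconv[LEN], m = 0.0;
    fir3(delta, h);              /* impulse response */
    fir3(x, direct);
    convolve(h, x, viaconv);
    for (int n = 0; n < LEN; n++)
        if (fabs(direct[n] - viaconv[n]) > m)
            m = fabs(direct[n] - viaconv[n]);
    return m;
}
```

The measured $h$ is exactly the coefficient vector $(0.25, 0.5, 0.25)$ followed by zeros, as stated above for FIR filters.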
Transfer Function and Z-Transform

Impulse responses describe the behavior of systems in the spatial (or temporal) domain. For analysis purposes, though, the frequency-domain point of view is often more convenient. The frequency behavior of LSI systems can be captured through transfer functions. The transfer function of a system is the result of taking the Z-transform of its impulse response. The Z-transform is the discrete analog of the Laplace transform. It takes a signal from the spatial domain to the complex frequency domain, and is defined as:

$$\mathcal{Z}(x)(\vec z) = \sum_{\vec w \in \mathbb{Z}^n} x(\vec w)\, z_1^{-w_1} \cdots z_n^{-w_n},$$

where $\vec z = (z_1, \ldots, z_n) \in \mathbb{C}^n$. In practice, this definition is rarely useful to compute transfer functions, thanks to the following properties:

Linearity: For any $\lambda \in \mathbb{R}$ and any signals $x$ and $y$, $\mathcal{Z}(\lambda x + y) = \lambda\mathcal{Z}(x) + \mathcal{Z}(y)$.

Space shifting: For any vector $\vec u = (u_1, \ldots, u_n)$, $\mathcal{Z}(\tau_{-\vec u}(x))(\vec z) = z_1^{-u_1} \cdots z_n^{-u_n}\, \mathcal{Z}(x)(\vec z)$.

Spatial convolution / frequential product: The Z-transform maps spatial convolutions to products in the frequency domain: $\mathcal{Z}(x * y) = \mathcal{Z}(x)\mathcal{Z}(y)$.

These results allow us to quickly determine the transfer function of a composite system from those of its sub-parts.

Example: The FIR Filter

Recall that the impulse response of a FIR filter is given by its coefficients. It may be written as a linear combination of unit impulses:

$$h_T = \sum_{i=0}^{N} b_i\, \tau_{-i}(\delta)$$

Knowing that $\mathcal{Z}(\delta) = 1$ and applying the above properties, we have:

$$\mathcal{Z}(h_T)(z) = \sum_{i=0}^{N} b_i z^{-i}.$$

Example: IIR Filter

An IIR filter is usually defined by a recurrence equation of the form:

$$y(n) = \sum_{i=0}^{N} b_i x(n-i) - \sum_{j=1}^{M} a_j y(n-j) \;\Leftrightarrow\; y(n) + \sum_{j=1}^{M} a_j y(n-j) = \sum_{i=0}^{N} b_i x(n-i).$$
With the same kind of reasoning as for the FIR filter, we can compute its transfer function:

$$z \mapsto \frac{\sum_{i=0}^{N} b_i z^{-i}}{1 + \sum_{j=1}^{M} a_j z^{-j}}$$

Frequency Response

The frequency response of the system is computed from the transfer function by constraining each component of its input vector to lie on the unit circle:

$$H_T(\omega_1, \ldots, \omega_n) \equiv H_{z,T}(e^{i\omega_1}, \ldots, e^{i\omega_n}),$$

where $\omega_1, \ldots, \omega_n$ are real numbers. We have the following fundamental identity: $H_T = \mathcal{F}(h_T)$, where $\mathcal{F}$ denotes the Fourier transform. The relation between the impulse response, the transfer function and the frequency response of a system is pictured in Figure 3.2. Remark that, even though $H_T$ only has real parameters, it does not contain less information on the frequency behavior than $H_{z,T}$. In fact, $H_{z,T}$ can be retrieved from $H_T$ using the following relation: $H_{z,T} = \mathcal{Z}(\mathcal{F}^{-1}(H_T))$.

Analytical Accuracy Evaluation for LSI Systems

We now present our approach to derive analytical accuracy models from source-code descriptions of multi-dimensional LSI systems.

Overview

Our approach can be decomposed into four steps.

Representation Extraction: The first step of our method is to extract a multi-dimensional flow-graph representation from an algorithmic (e.g., C/C++) specification. Our technique is based on the polyhedral model, a mathematical framework for the analysis and transformation of programs with regular control flow and access patterns. The source code is analyzed and translated into an equivalent system of recurrence equations, which is then further refined into a flow-graph.

Coefficients Quantization: The effect of coefficients quantization is assessed by comparing the frequency responses of the floating-point and fixed-point implementations. The frequency responses are computed automatically.
Accuracy Model Construction: Once a set of fixed-point coefficients compatible with application requirements has been determined, an analytical accuracy model is derived from the flow-graph representation. This is done by computing the transfer function from each node to the output, modeling the impact of quantization noise on the final result.

Wordlength Optimization: The accuracy model constructed in the previous step can then be exploited to perform fast automatic wordlength optimization.

Multidimensional Flow-Graphs

Multidimensional LSI systems can be conveniently represented as multidimensional flow-graphs (MDFGs). As an example, consider the following subset of the Deriche filter:

$$x_1(i, j) = a_1 I(i, j) + a_2 I(i-1, j) + b_1 x_1(i-1, j) + b_2 x_1(i-2, j)$$
$$x_2(i, j) = a_3 I(i+1, j) + a_4 I(i+2, j) + b_1 x_2(i+1, j) + b_2 x_2(i+2, j)$$
$$x = x_1 + x_2$$

The corresponding flow-graph is shown in Figure 3.3. Each node in the graph represents an equation (or the input signal $I$). Each equation is the summation of its incoming edges, each labeled with a multiplier and an offset vector: the edges from $I$ to $x_1$ are labeled $(a_1, (0, 0))$ and $(a_2, (-1, 0))$, the self-loops on $x_1$ are labeled $(b_1, (-1, 0))$ and $(b_2, (-2, 0))$, and symmetrically $x_2$ carries the labels $(a_3, (1, 0))$, $(a_4, (2, 0))$, $(b_1, (1, 0))$ and $(b_2, (2, 0))$. For example, the edge $(a_3, (1, 0))$ from $I$ to $x_2$ represents the summand $a_3 I(i+1, j)$ in the definition of $x_2$. Linear SFGs can be trivially encoded as MDFGs. For example, a delay node may be replaced by a node with an incoming edge labeled $(1, -1)$, where 1 represents the unit multiplier and $-1$ the time offset.

Inference of MDFGs from Source Code

The first step in our approach is to extract the flow-graph representation of a system from its algorithmic specification. It is based on Systems of Affine Recurrence Equations (SAREs), a denotational representation of programs that can be inferred from polyhedral source code.
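A minimal encoding of MDFG edges is sketched below (types and names are ours, not those of an actual tool): each edge stores a multiplier and a coordinate offset, which is all the information the subsequent analyses need. Here the edges feeding $x_1$ in a 1-D restriction of the example are evaluated, with placeholder coefficient values:

```c
#include <math.h>

/* One MDFG edge: a multiplier coefficient and a coordinate offset
   (restricted to one dimension in this sketch). */
typedef struct { double k; int di; } Edge;

/* Edges feeding node x1 in a 1-D restriction of the example:
   x1(i) = a1*I(i) + a2*I(i-1) + b1*x1(i-1) + b2*x1(i-2).
   Coefficient values below are placeholders. */
static const Edge from_input[2] = { { 0.6, 0 }, { 0.3, -1 } };
static const Edge from_self[2]  = { { 0.4, -1 }, { -0.1, -2 } };

/* Evaluate x1 at index i by summing its incoming edges, reading
   previously computed values of x1; out-of-domain values are 0. */
double eval_x1(const double in[], const double x1[], int i, int w) {
    double acc = 0.0;
    for (int e = 0; e < 2; e++) {
        int s = i + from_input[e].di;
        if (s >= 0 && s < w) acc += from_input[e].k * in[s];
        s = i + from_self[e].di;
        if (s >= 0 && s < w) acc += from_self[e].k * x1[s];
    }
    return acc;
}
```

Evaluating nodes in dependence order amounts to executing the system; the same edge lists drive the transfer-function computation of Section 3.4.4.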
Polyhedral Model

The polyhedral model is a framework for modeling, analyzing and transforming a large class of regular programs and program fragments known as Static Control Parts (SCoPs) or Affine Control Loops (ACLs). While several extensions to the model have been proposed [START_REF] Benabderrahmane | The polyhedral model is more widely applicable than you think[END_REF][START_REF] Rajopadhye | Alphabets: An extended polyhedral equational language[END_REF], SCoPs are usually defined by the following constraints:

• The program is composed exclusively of for loops, if-then-else statements and computations on scalar values and array elements.

• Guards and array indices take the form of affine constraints on enclosing loop iterators, statically-known constants and symbolic parameters.

A consequence of this definition is that control flow is static, as guards cannot refer to values computed within the same SCoP. In other words, the sequence of instructions is determined exactly by parameter values.

Example

The following program is a SCoP.

    float sum = 0;
    for (int i = 0; i < N; i++) {
        sum += ar[i];
    }

Array Dataflow Analysis

The flow of computations in polyhedral programs can be determined exactly using a powerful dependence analysis technique called Array Dataflow Analysis (ADA) [START_REF] Feautrier | Dataflow analysis of array and scalar references[END_REF]. It captures instance-wise and element-wise dependences:

• Instance-wise means that the successive iterations of each statement in the program are distinguished from each other.

• Element-wise means that accesses to different elements of the same array are also distinguished.

As a simple example, consider the following fragment:

    for (int i = 0; i < N; i++) {
    S0:   x[i] = 0;
          for (int j = 0; j < N; j++)
    S1:       x[i] = x[i] + a[j];
    }

Without instance-wise information, all that can be said is that S1 depends on both S0 and S1.
Without element-wise information, an instance of S1 at $(i, j)$ depends on all instances of S0 and S1, as they all write the same variable x. With ADA, though, the dependence information found for the read x[i] in S1 is the following:

$$\text{Producer of } x[i] \text{ at } S1(i, j) = \begin{cases} S0(i) & j = 0 \\ S1(i, j-1) & j > 0 \end{cases}$$

For each operand used in each statement instance, we can thus determine exactly the statement instance (if any) that produced this value.

From ADA to SAREs

The dataflow information given by ADA can be used to transform the program into a semantically equivalent system of affine recurrence equations. Each statement in the program becomes an equation, where references to array elements are replaced by the producer of the corresponding value. For example, the program above can be written:

$$S_0(i) = 0 \qquad S_1(i, j) = a(j) + \begin{cases} S_0(i) & j = 0 \\ S_1(i, j-1) & j > 0 \end{cases}$$

This form captures computations and data dependences in a single representation, abstracting away storage locations.

From SAREs to Multidimensional Flow-Graphs

While multidimensional flow-graphs can always be seen as systems of recurrence equations, the opposite is not necessarily true. Moreover, even when that is the case, additional work is sometimes required to exhibit the underlying flow-graph nature. This is almost always the case for SAREs built from ADA. For example, consider the following code, representing a FIR filter:

    float tmp[3];
    for (int i = 0; i < N; i++) {
    S0:   tmp[2] = x[i];
    S1:   y[i] = 0.25 * tmp[0] + 0.5 * tmp[1] + 0.25 * tmp[2];
          for (int j = 0; j < 2; j++)
    S2:       tmp[j] = tmp[j+1];
    }

A simplified version of the output obtained after ADA and SARE extraction is shown below:

$$S_0(i) = x(i)$$
$$S_1(i) = 0.25\, S_2(i-1, 0) + 0.5\, S_2(i-1, 1) + 0.25\, S_0(i)$$
$$S_2(i, j) = \begin{cases} S_0(i) & j = 1 \\ S_2(i-1, j+1) & \text{otherwise} \end{cases}$$

Even though FIR filters are LSI systems, the system above is not directly equivalent to a flow-graph.
The reason is that equations and operands do not all have the same dimensionality: $S_0$ and $S_1$ are one-dimensional, while $S_2$ has two dimensions because of the copy loop. The solution here is to inline the definition of $S_2$ at all use sites. However, it cannot be done directly because of the recursive reference in $S_2$. We tackle this problem by computing the transitive closure of the copy relation defined by $S_2$. The equation changes to:

$$S_2(i, j) = S_0(i + j - 1)$$

We can now inline $S_2$ and the system becomes:

$$S_0(i) = x(i) \qquad S_1(i) = 0.25\, S_0(i-2) + 0.5\, S_0(i-1) + 0.25\, S_0(i)$$

Finally, we can also inline $S_0$ into $S_1$, which gives us the following single equation:

$$S_1(i) = 0.25\, x(i-2) + 0.5\, x(i-1) + 0.25\, x(i).$$

Another common, more subtle difficulty arises because index dimensions in SAREs extracted from source code correspond to iteration dimensions and not data dimensions. To illustrate this problem, consider the simplified Deriche filter code below:

    // Horizontal (left-to-right) pass
    for (int i = 0; i < W; i++) {
        ym1 = 0; ym2 = 0; xm1 = 0;
        for (int j = 0; j < H; j++) {
            ...

The corresponding (simplified) SARE is:

$$S_1(i, j) = a_1 I(i, j) + a_2 S_2(i, j-1) + b_1 S_4(i, j-1) + b_2 S_3(i, j-1)$$
$$S_2(i, j) = I(i, j)$$
$$S_3(i, j) = S_4(i, j-1)$$
$$S_4(i, j) = S_1(i, j)$$
$$S_5(i, j) = S_1(i, j) + x_2(i, j)$$
$$S_6(j, i) = a_7 S_8(j, i+1) + a_8 S_7(j, i+2) + b_1 S_{10}(j, i+1) + b_2 S_9(j, i+1)$$
$$S_7(j, i) = S_8(j, i+1)$$
$$S_8(j, i) = S_5(i, j)$$
$$S_9(j, i) = S_{10}(j, i+1)$$
$$S_{10}(j, i) = S_6(j, i)$$
$$S_{11}(i, j) = y_1(i, j) + S_6(j, i)$$

Because ADA constructs statement domains by examining the loop structure, and not array dimensions, statements corresponding to the horizontal passes ($S_1, \ldots, S_4$) and array summations ($S_5$, $S_{11}$) are defined on $(i, j)$ coordinates, whereas vertical passes ($S_6, \ldots, S_{10}$) are defined on transposed $(j, i)$ coordinates.
The link between the two views is made in equations $S_8$ and $S_{11}$. To transform a SARE into an MDFG, we actually need to turn it into a system of uniform recurrence equations (SURE), such that dependence patterns are reflected by constant dependence vectors. Such is not the case here: for example, the dependence vector between $S_5$ and $S_8$ in the definition of $S_8$ is $(i - j, j - i)$, which is clearly non-constant. However, since the non-uniformity is introduced by benign permutations of dimensions, the SARE can be transformed into an equivalent SURE. We adapt techniques for uniformization/localization [START_REF] Van Dongen | Uniformization of linear recurrence equations: a step toward the automatic synthesis of systolic arrays[END_REF][START_REF] Shang | On uniformization of affine dependence algorithms[END_REF] of dependences to properly align the equations. Simple pattern-matching can then be used to retrieve the constant coefficients and build a SURE/MDFG. If the system cannot be properly uniformized, then the analysis fails and no MDFG can be built. We voluntarily left aside the problem of boundary conditions. They are reflected in the system as additional case branches. For example, the true equation for $S_1$ is equivalent to:

$$S_1(i, j) = \begin{cases} a_1 I(i, j) + a_2 S_2(i, j-1) + b_1 S_4(i, j-1) + b_2 S_3(i, j-1) & j > 0 \\ a_1 I(i, j) & j = 0 \end{cases}$$

We use ad-hoc heuristics on domains and conditions to select the case branch that corresponds to the general case. More sophisticated algorithmic-template recognition methods [START_REF] Barthou | On the equivalence of two systems of affine recurrence equations[END_REF][START_REF] Shashidhar | Verification of source code transformations by program equivalence checking[END_REF] could be used to determine that other case branches correspond to 0-initial conditions outside the image domain.
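The inlining performed on the FIR example of the previous section can be validated numerically: the delay-line code and the collapsed equation $S_1(i) = 0.25\,x(i-2) + 0.5\,x(i-1) + 0.25\,x(i)$ compute the same outputs under zero initial conditions. The sketch below is our own check:

```c
#include <math.h>

#define FN 12   /* number of samples in this sketch */

/* Delay-line version, as in the analyzed source code
   (zero initial conditions, matching the SARE semantics). */
void fir_buffered(const double x[FN], double y[FN]) {
    double tmp[3] = {0.0, 0.0, 0.0};
    for (int i = 0; i < FN; i++) {
        tmp[2] = x[i];
        y[i] = 0.25 * tmp[0] + 0.5 * tmp[1] + 0.25 * tmp[2];
        for (int j = 0; j < 2; j++)
            tmp[j] = tmp[j + 1];
    }
}

/* Collapsed equation: S1(i) = 0.25 x(i-2) + 0.5 x(i-1) + 0.25 x(i). */
void fir_inlined(const double x[FN], double y[FN]) {
    for (int i = 0; i < FN; i++) {
        double xm1 = (i >= 1) ? x[i - 1] : 0.0;
        double xm2 = (i >= 2) ? x[i - 2] : 0.0;
        y[i] = 0.25 * xm2 + 0.5 * xm1 + 0.25 * x[i];
    }
}

/* Maximum absolute difference between the two versions. */
double fir_versions_max_diff(const double x[FN]) {
    double a[FN], b[FN], m = 0.0;
    fir_buffered(x, a);
    fir_inlined(x, b);
    for (int i = 0; i < FN; i++)
        if (fabs(a[i] - b[i]) > m) m = fabs(a[i] - b[i]);
    return m;
}
```

Automated equivalence-checking techniques generalize this kind of spot check to symbolic proofs over all inputs.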
Computation of Transfer Functions

A key enabler to analyze the quantization of coefficients and construct the accuracy model is the ability to derive transfer functions from an MDFG. In this section, we present a recursive algorithm computing the transfer function from each node in the graph to the output. The result is a map associating each node in the graph with the transfer function representing the propagation of its contribution to the final result. The idea behind our algorithm is the following. Let $v_i$ be a node in the graph and $(v_j, k_j, \vec d_j)$ the set of (source, multiplier, offset) triples representing its incoming edges. We can distinguish two cases:

• Case 1: Node $v_i$ does not belong to any cycle. Then, for each predecessor $v_j$ and any node $v_k$, let $TF_{k \to j}$ be the transfer function from $v_k$ to $v_j$. By simple application of the rules in Section 3.3.5, we have:

$$TF_{k \to i} = \sum_j k_j\, z_1^{d_{j,1}} \cdots z_n^{d_{j,n}}\, TF_{k \to j}.$$

• Case 2: There is at least one cycle involving $v_i$. Then, let $v_{i'}$ be a dummy node replacing $v_i$ as the source of every edge going out of $v_i$, thus breaking any cycle. We have:

$$TF_{k \to i} = \frac{\displaystyle\sum_j k_j\, z_1^{d_{j,1}} \cdots z_n^{d_{j,n}}\, TF_{k \to j}}{1 - \displaystyle\sum_j k_j\, z_1^{d_{j,1}} \cdots z_n^{d_{j,n}}\, TF_{i' \to j}}.$$

In other words, in the absence of any cycle, transfer functions can be computed by simple application of the computation rules given in Section 3.3.5. Cycles are eliminated simply by considering the current node as an input node and then solving for the actual transfer function by including recursive contributions. A pseudocode description of the algorithm is given in Figure 1. By construction, we observe that no transfer function between any pair of nodes is computed twice. We conclude that this algorithm is of quadratic complexity $O(n^2)$, where $n$ represents the number of vertices in the graph. This algorithm is fundamentally similar to the one presented by Menard et al.
[START_REF] Menard | Analytical fixed-point accuracy evaluation in linear time-invariant systems[END_REF] for LTI systems: likewise, we dismantle cycles to recursively solve for the global transfer function. However, their technique was much more complex, requiring 4 graph transformations, enumeration techniques to break down the graph into cycles, and explicit substitutions to compute the transfer function. Our algorithm simply relies on a memoization strategy in the depth-first search traversal to resolve circular references and guarantee termination.

Quantization of Coefficients

Word-Length Optimization is essentially concerned with the quantization of signals: input values and intermediary values. The quantization error of constant system coefficients, such as the coefficients of the FIR or Deriche filters, does not vary over time/space and hence does not lend itself well to statistical analysis and accuracy characterization. However, these errors affect the response of the system, and aggressive quantizations can naturally compromise its function. It is usually assumed that these problems have been handled prior to floating-point to fixed-point conversion, for example by assessing the sensitivity of the system transfer function to small variations of the coefficients. Instead, we propose to exploit the knowledge gathered during the analysis of the system to help the designer verify that the frequency characteristics of the filter with quantized coefficients match functional requirements. These requirements are usually expressed as a frequency response template. Typically, min- and max-bounds are defined on the frequency response of the system. However, inspection by an expert designer is often required. Since we can already compute the transfer function of the quantized system, we propose to compute and display the frequency response, using the relation:
$$H(\omega_1, \ldots, \omega_n) = H_z(e^{i\omega_1}, \ldots, e^{i\omega_n}).$$
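To make the computation rules concrete, the sketch below applies the Case 2 formula to a hypothetical single-node example (not the tool's actual implementation): the first-order recurrence y[n] = x[n] + a·y[n-1]. Breaking the self-loop yields the transfer function 1/(1 - a·z⁻¹), which can be cross-checked against a truncated impulse-response simulation and evaluated on the unit circle to obtain the frequency response, as in the relation above.

```python
import cmath

a = 0.5  # feedback coefficient of the example recurrence (assumed |a| < 1)

def tf(z):
    # Case 2 applied to y[n] = x[n] + a*y[n-1]:
    # numerator = 1 (direct input edge), denominator = 1 - a*z^-1 (self-loop).
    return 1.0 / (1.0 - a * z ** -1)

def tf_sim(z, r=200):
    # Truncated impulse response: h[n] = a**n, so H(z) ~= sum of h[n] * z^-n.
    return sum((a ** n) * z ** (-n) for n in range(r))

def freq_response(omega):
    # Frequency response: H(omega) = H_z(e^{i*omega}).
    return tf(cmath.exp(1j * omega))
```

With these definitions, `tf` and `tf_sim` agree to machine precision for any z on the unit circle, since the neglected tail of the impulse response decays geometrically.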
If possible, automatic validation of the frequency response based on a specified template can also be performed. Integrating the quantization of coefficients into the scope of computer-assisted floating-point to fixed-point conversion provides benefits in terms of productivity and leaves less room for errors.

Accuracy Model Construction

The propagation of quantization noise from each node in the graph is fully determined by the coefficients of the impulse response associated with that node. Indeed, let $e_x$ be the error signal at node x and $e'_x$ its propagation to the output. As we are dealing with LSI systems, the resulting error at the output is given by:
$$e'_x = h_{x,T} * e_x$$
Following the PQN model, we represent the values of $e_x$ over space as independent, identically distributed (i.i.d.) random variables. Let $\mu_{e_x}$ and $\sigma^2_{e_x}$ stand respectively for the mean and variance of the underlying probability distribution. We have:
$$e'_x(\vec{v}) = \sum_{\vec{w}} h_{x,T}(\vec{w}) \, e_x(\vec{v} - \vec{w}).$$
Because of non-correlation, we conclude that:
$$\mu_{e'_x} = \mu_{e_x} \sum_{\vec{v}} h_{x,T}(\vec{v}) \qquad \sigma^2_{e'_x} = \sigma^2_{e_x} \sum_{\vec{v}} h^2_{x,T}(\vec{v})$$
To compute the first two moments of $e'_x$, we thus need to determine the sum and sum of squares of the impulse response coefficients. We will show two different ways to achieve this: a direct one, using the flowgraph representation to approximate the impulse response by abstract simulation on a unit impulse; and a slightly more efficient approach based on the frequency response.

Direct approach

A well-formed [START_REF] Karp | The organization of computations for uniform recurrence equations[END_REF] SURE/MDFG gives a computable specification of a system. In our context, the SURE are derived from the schedule of a program. This property ensures that the resulting recurrence equations are indeed computable.
We exploit this fact to simulate the subsystem corresponding to the propagation of each quantization noise: we use a unit impulse as input to compute the impulse response coefficients over a sufficiently large window around the origin, naively computing the values of each equation "on demand". This direct simulation scheme requires the introduction of boundary conditions for each equation. We assume that the filters (and sub-filters) we study are stable:
$$M = \sum_{\vec{v}} |h_T(\vec{v})| < \infty.$$
If the input is bounded by B, then the output is bounded by MB: the system cannot result in infinite amplification of inputs. This is a reasonable assumption in almost all applications. A direct consequence is that:
$$\lim_{|\vec{v}| \to \infty} h_T(\vec{v}) = 0.$$
We exploit this fact by setting $x(\vec{v}) = 0$ for any equation x when $|\vec{v}| > r$. This is a good approximation provided r is large enough, and allows the computation of the impulse responses to terminate. However, determining the correct value of r such that the approximation error is below some bound ε is not trivial. In our experiments, though, we found values of r = 50 to give excellent results.

Approach based on the frequency response

An alternative approach to the direct solution above is to compute the frequency response of the system based on the formulas in Section 3.3. The sums $\sum h(\vec{v})$ and $\sum h^2(\vec{v})$ can also be interpreted in the frequency domain:

• The first one is simply the gain of the system for a constant input, i.e., the frequency response at $\vec{0}$: $\sum_{\vec{v}} h(\vec{v}) = H(\vec{0})$.

• The second one is the $L_2$ norm of the impulse response, which is preserved in the frequency domain. According to Parseval's theorem:
$$\sum_{\vec{v}} h^2(\vec{v}) \approx \frac{1}{N} \sum_{\vec{w}} |H(\vec{w})|^2,$$
where N denotes the number of samples over $(-\pi, \pi]^n$ taken for the frequency response.
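As an illustration, the sketch below (a hypothetical first-order example with impulse response h[n] = aⁿ, not taken from the thesis toolchain) computes the sum and sum of squares of the impulse response both by truncated simulation and from the sampled frequency response, then uses them to predict the output-noise moments introduced above.

```python
import cmath

a = 0.5  # example pole; the impulse response is h[n] = a**n (stable, |a| < 1)

def sums_direct(r=50):
    # Direct approach: truncate the impulse response at radius r.
    h = [a ** n for n in range(r)]
    return sum(h), sum(c * c for c in h)

def sums_freq(N=256):
    # Frequency-domain approach: gain at omega = 0, and Parseval's theorem.
    H = [1.0 / (1.0 - a * cmath.exp(-2j * cmath.pi * k / N)) for k in range(N)]
    return H[0].real, sum(abs(x) ** 2 for x in H) / N

def noise_moments(mu_e, var_e):
    # Predicted moments of the propagated quantization noise:
    # mu' = mu_e * sum(h), var' = var_e * sum(h^2).
    s1, s2 = sums_direct()
    return mu_e * s1, var_e * s2
```

For this example the closed forms are sum(h) = 1/(1-a) = 2 and sum(h²) = 1/(1-a²) = 4/3, which both approaches recover to high precision.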
The method above is easy to implement given the transfer functions of the system, and is slightly more efficient than the direct simulation-based approach, as a closed-form expression of the frequency response can be obtained without having to unroll the recurrence equations.

Experimental Validation

We integrated our approach, using the direct (time-domain) variant, within a compiler framework. The front-end (SURE extraction) was implemented within the GeCoS flow, developed at Irisa. The backend was written in OCaml. In Table 3.1, we contrast the scalability of our approach to that of ID.Fix, an accuracy analysis tool based on the work of Ménard et al. [START_REF] Menard | Analytical fixed-point accuracy evaluation in linear time-invariant systems[END_REF]. The lesson we draw from these experiments is that, while our tool is insensitive to problem size when dealing with image filters, ID.Fix is not and suffers from major scalability problems with larger problem sizes. In fact, at the time we ran these experiments, ID.Fix could not handle filters over images larger than 64×64, even for non-recursive filters. ID.Fix has seen major engineering efforts and it is possible that performance has improved since then. In particular, for the non-recursive Sobel and Gaussian blur filters, a simple analysis shows that run time should be proportional to the number of pixels in the image and thus only increase by a factor of four between the 32×32 and 64×64 versions. In practice, though, we observed a performance degradation of more than 13 times. This suggests the presence of performance bugs. In any case, the story stays the same: our tool can scale to any (or even parametric) image size because it captures the multidimensional nature of these filters, whereas ID.Fix cannot, as it needs to "flatten" the control flow to treat these algorithms as 1D kernels.
In a second round of experiments, we compared the error predicted by our tool for a set of fixed-point configurations with the one actually observed between fixed-point and floating-point simulations. We emphasize that, unlike much prior work, these experiments were run against real data (sound and image files) and not randomly generated test benches. This means that we also take into account potential estimation errors due to correlation between noises. This is important, as most real-world data shows strong spatial correlation, unlike artificial data where each sample has been randomly generated. Our results show that these phenomena do not strongly affect the validity of the Widrow hypothesis, as the deviations we observed were all below 5%.

Although we have not studied this topic in much detail, we believe this approach can also be applied to multidimensional algorithms. However, to properly estimate signal statistics, the number of required samples increases exponentially with the number of dimensions (this phenomenon is sometimes called the curse of dimensionality). It is possible that, for systems of high dimensionality, the amount of memory and/or time required to perform the parameterization phase becomes prohibitive.

An obvious limitation of our approach is its applicability to polyhedral programs only. We cannot compactly capture regular but non-affine control flow, as in the FFT transform: in such cases, parts of the programs must still be unrolled until the polyhedral conditions are met. Abstract models based on probabilistic semantics may be constructed to conservatively capture the propagation of noise probability distributions through arbitrary control flow. Some work has been done in this direction [START_REF] Monniaux | An abstract monte-carlo method for the analysis of probabilistic programs[END_REF] based on discretization of the measure space.
However, the accuracy and performance of the technique are tied to the granularity of that discretization; to our knowledge, no purely analytical lattice has been defined. Even within the polyhedral model, we expect the round-off error behavior of some algorithmic patterns to be difficult to capture. For example, in the next two chapters, we will consider hardware implementations of iterative stencil computations (or stencils for short). Let $G \subset \mathbb{Z}^n$ denote an n-dimensional domain. A Jacobi-style stencil is given by a transformation $(G \to K) \to (G \to K)$, defined as a recurrence relation of the form:
$$D_{n+1}(\vec{w}) = f(\vec{w}, D_n(\vec{w} + \vec{d}_0), \ldots, D_n(\vec{w} + \vec{d}_m))$$
In most applications, one is interested, given some initial state $D_0$, in computing $D_T$, where T may be a constant or a computation-dependent number of iterations. Note that, in the general case, the computation may depend on the spatial coordinates $\vec{w}$ or on the iteration number. For example, $\vec{w}$ could be used to implement absorbing boundary conditions, or the coefficients may actually depend on the spatial position. In simpler cases, such as the heat equation with uniform coefficients, the recurrence relation is defined as a dot product with a vector of constant coefficients:
$$D_{n+1}(\vec{w}) = \left( D_n(\vec{w} + \vec{d}_0), \ldots, D_n(\vec{w} + \vec{d}_m) \right) \cdot (k_0, \ldots, k_m)^T$$
In such cases, assuming boundary conditions $D_n(\vec{w}) = 0$ if $\vec{w} \notin G$, we may model the effect of a single stencil iteration as the convolution of the grid with a mask of coefficients, which is a linear filter with transfer function $H_z$. Then the stencil as a whole may be modeled as a linear filter with transfer function $H_z^T$. If T is a known constant, the techniques presented in this chapter may be applied, with a caveat: signal power over successive iterations may be reduced such that Widrow's quantization theorem does not apply anymore, thus breaking a fundamental assumption of our modeling.
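The modeling of a constant-coefficient stencil as an iterated linear filter can be sketched as follows, on a toy 1D example with a hypothetical 3-point averaging mask and zero values outside the grid: applying the stencil T times coincides with a single convolution by the T-fold self-convolved mask, i.e., the filter with transfer function H_z^T.

```python
def conv(a, k):
    # Full linear convolution of two sequences (zero values outside).
    out = [0.0] * (len(a) + len(k) - 1)
    for i, ai in enumerate(a):
        for j, kj in enumerate(k):
            out[i + j] += ai * kj
    return out

def stencil_T(a, k, T):
    # Apply one stencil iteration (= one convolution by mask k), T times.
    for _ in range(T):
        a = conv(a, k)
    return a

def mask_power(k, T):
    # T-fold self-convolution of the mask: impulse response of H_z^T.
    m = [1.0]
    for _ in range(T):
        m = conv(m, k)
    return m
```

By associativity of convolution, `stencil_T(a, k, T)` equals `conv(a, mask_power(k, T))` up to floating-point rounding.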
Stencils with space-varying coefficients and non-constant coefficients are even harder to model. In the general case, assessing the accuracy of iterative computations may require other sets of techniques.

Conclusion

Over the last two chapters, we discussed trade-off opportunities between, e.g., silicon area and accuracy. In Chapter 2, we exposed the problem of accuracy evaluation and the limitations of simulation-based methods. In Chapter 3, we presented our contributions to analytical accuracy evaluation, a class of methods that seek to enable more thorough design space exploration through accuracy models. Our work generalizes earlier techniques, applicable to linear systems, to the multidimensional case. To accommodate a source-level design flow, this generalization required the use of techniques from the polyhedral compilation toolset.

In the remainder of this thesis, we focus on another class of algorithms: iterative stencil computations. Due to their ubiquity, stencils and their acceleration have been the subject of much study. The diversity of their applications and the large number of corresponding algorithms give rise to different constraint sets that each call for specific trade-offs between throughput, bandwidth requirements and area, to name a few. After a review of generic techniques for the optimization of stencils in Chapter 4, we will present our results and contributions in Chapter 5, based once again on a rigorous modeling of the design space.

Chapter 4

Implementation and Optimization of Stencil Computations

Introduction

Iterative stencil computations (often just called stencils) are a family of regular algorithms used in application domains as varied as numerical analysis, computer simulations and image processing. Stencils operate over multidimensional grids of data, repeatedly updating each cell from neighbor values in successive timesteps.
This computational pattern can be easily described as compact loop nests, as in Figure 4.1, where the outer loop iterates over time iterations while the innermost loops scan the spatial grid. Perhaps surprisingly, such simple algorithms are challenging to implement efficiently. Not only are they often applied to very large domains (requiring a large amount of computing power to meet performance needs), but each update operation also typically involves many earlier results (up to 29 in some applications). For these reasons, naive implementations are severely memory-bound, since values need to be fetched redundantly from external memory.

    for (t = 1; t <= T; t++)
      for (x = 1; x <= N; x++)
        a[t][x] = (a[t-1][x-1] + a[t-1][x] + a[t-1][x+1]) / 3;

The main goal of this chapter is to expose these obstacles and strategies to overcome them. It is organized as follows. In Section 4.2, we give some basic definitions on stencil computations. In Section 4.3, we motivate the relevance of these concepts with selected examples. In Section 4.4, we discuss the main implementation challenges in the formalism of the roofline model [START_REF] Williams | Roofline: an insightful visual performance model for multicore architectures[END_REF]. In Section 4.5, we introduce the tiling transformation, a fundamental tool in the implementation of regular algorithms such as stencils. In Section 4.6, other forms of tiling are presented, which offer varying advantages in terms of parallelism or external communication. In Section 4.7, we discuss the problem of memory allocation. We conclude in Section 4.9.

Definitions

A d-dimensional stencil is an iterative computation over a d-dimensional grid (or array) of data. The grid is iteratively updated a problem- or instance-dependent number of times, called timesteps. At each timestep, the value of every point in the grid is recomputed from the values of its neighbors, according to a uniform (fixed) dependence pattern.
Precisely, let $A(t, \vec{x})$ denote the value of the grid at point $\vec{x}$ and timestep t. It is defined by a relation of the form:
$$A(t, \vec{x}) = f\left( \vec{x}, A((t, \vec{x}) + \vec{d}_0), \ldots, A((t, \vec{x}) + \vec{d}_{m-1}) \right), \quad (4.1)$$
where m is the number of dependences and the $\vec{d}_i$'s are constant dependence vectors.

Classification

Spatial invariance: Remark that, in the general case, the computation may depend on the spatial position $\vec{x}$. When such is not the case, except perhaps at domain boundaries, we call the stencil spatially invariant. Spatially varying stencils typically depend on position-specific coefficients. However, in our definition, the update formula does not depend on t, and the computation for any given point is thus the same across time iterations.

Access patterns: Stencils are usually classified based on their dependence patterns:

• A stencil with Jacobi-style dependences (or Jacobi stencil) is a stencil where new values are computed exclusively from previous timesteps. Dependence vectors are of the form $(-k, \vec{v})$ with k > 0.

• Otherwise, it is called Gauss-Seidel. Dependence vectors are of the form $(-k, \vec{v})$ with k ≥ 0, with k = 0 for at least one dependence.

The names Jacobi and Gauss-Seidel refer to the eponymous iterative methods for solving systems of linear equations. These methods are themselves stencil computations, with Gauss-Seidel slightly improving upon Jacobi by using more recent results to improve convergence.

Boundary Conditions

Equation 4.1 only properly defines the computation in the interior of the grid; boundary points, where dependence vectors reach outside of the domain, need special handling. Many forms of boundary conditions have been proposed, depending on application requirements. For example:

• Homogeneous Dirichlet boundary conditions, where values are set to 0 at grid boundaries, are often used in image processing applications.
• In physics simulations, more exotic schemes such as reflecting, absorbing or periodic boundary conditions are often used. Reflecting and periodic boundary conditions ensure that the energy of the system stays constant over time, while absorbing conditions may be used to simulate an infinite field by letting energy dissipate outside of the grid.

In this chapter, we mostly ignore boundary conditions, as they typically account for a small fraction of the computation and do not represent a major performance factor.

Examples

In this Section, we illustrate the above definitions with various examples from different fields.

Cellular Automata

Cellular (finite-state) automata (CA), such as Conway's Game of Life (GoL) [START_REF] Gardner | Mathematical games: The fantastic combinations of john conway's new solitaire game "life[END_REF], are famous examples of stencil computations. In GoL, points (called cells) admit only two states: dead or alive. Between consecutive timesteps, live cells switch off if they have less than 2 or more than 3 live neighbors, while dead cells turn to life if they are surrounded by exactly 3 live cells. As an example, Figure 4.2 illustrates a common GoL pattern known as the glider.

CAs have many practical applications: they are used, for example, in the simulation of physical [START_REF] Chopard | Cellular automata modeling of physical systems[END_REF] and biological systems [START_REF] Ermentrout | Cellular automata approaches to biological modeling[END_REF]. They are also studied from a theoretical point of view for their ability to exhibit complex global behavior out of local rules and configurations [START_REF] Wolfram | Cellular automata as models of complexity[END_REF]. However, automata are uncommon examples of stencils in that cells can only take a small number of values. Thanks to this small state-space, GoL and related automata lend themselves very well to implementation techniques based on memoization, such as the HashLife algorithm.
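The generic update rule of equation 4.1, combined with homogeneous Dirichlet boundaries, can be sketched for a 1D Jacobi stencil as follows (a hypothetical 3-point averaging example, not tied to any particular application):

```python
def jacobi_step(grid, offsets=(-1, 0, 1)):
    # One timestep: every point is recomputed from its neighbors at the
    # previous timestep; values outside the grid read as 0 (Dirichlet).
    n = len(grid)
    def at(i):
        return grid[i] if 0 <= i < n else 0.0
    return [sum(at(i + o) for o in offsets) / len(offsets) for i in range(n)]

def run(grid, timesteps):
    # Iterate the update rule a given number of timesteps.
    for _ in range(timesteps):
        grid = jacobi_step(grid)
    return grid
```

For instance, a single point of value 3.0 spreads into three points of value 1.0 after one timestep.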
In this thesis, we mostly focus on numerical stencils, which expose a virtually infinite state-space. For such stencils, memoization is impractical and the computation rules must be applied repeatedly for each point in the grid.

Smith-Waterman

Smith-Waterman is a dynamic programming algorithm for sequence alignment, with applications in bioinformatics. A sequence is defined as a list of symbols from a finite alphabet Σ (for example, a DNA sequence is a list of nucleotides). The Smith-Waterman algorithm determines the similarity between two biological sequences $A = a_1 \ldots a_m$ and $B = b_1 \ldots b_n$ by inserting "gaps" within A and B to find the best alignment (in a given sense). Those gaps, called indels, model the insertion or deletion of a symbol in either A or B since the sequences diverged from a hypothetical common ancestor. The higher their similarity, the more likely they are to be historically related. For example, let Σ = {A, B, C, D}. Consider the following alignments of sequences:

    BBAABAC      BBAABAC--
    ABA-BAD      ---ABABAD

Both are valid, as the resulting strings have the same length. However, one clearly represents a more plausible biological evolution, as it maximizes the number of matching symbols and reduces the count of indels. To model this intuition, we assign a similarity score D(a, b) to any pair of symbols, and affect a penalty W to the introduction of an indel in either A or B. The best alignment score for the two sequences is built iteratively from that of their prefixes, by filling out a matrix H of size m × n with the following formula:
$$H(i, j) = \max \begin{cases} 0 \\ H(i-1, j-1) + D(a_i, b_j) \\ H(i-1, j) - W \\ H(i, j-1) - W \end{cases}$$
The best score is given by H(m, n). The alignment itself can be retrieved by walking backwards to reconstruct the series of insertions/deletions and substitutions. The computation of the score matrix is a 1D stencil, where each row corresponds to a timestep.
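A direct sketch of the score-matrix fill is given below. The particular scoring (+2 for a match, -1 for a mismatch, gap penalty W = 1) is an illustrative assumption, not taken from the text:

```python
def smith_waterman(A, B, match=2, mismatch=-1, W=1):
    m, n = len(A), len(B)
    # H is (m+1) x (n+1); row 0 and column 0 encode the empty-prefix base case.
    H = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D = match if A[i - 1] == B[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + D,  # align a_i with b_j
                          H[i - 1][j] - W,      # gap in B
                          H[i][j - 1] - W)      # gap in A
    return H[m][n]
```

With this scoring, two identical sequences of length L score 2L, while two fully dissimilar symbols score 0 thanks to the max with 0.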
Since H(i, j) depends on H(i, j-1), it has Gauss-Seidel dependences, with the following dependence vectors: (-1, -1), (-1, 0), (0, -1).

Finite-Difference Methods

The bulk of stencil computations arises from forward-time, finite-difference discretization of partial differential equations (PDEs). Depending on the application, the output of such stencils is the state of the system after some predetermined simulation time, or when some equilibrium has been reached. The stability of such numerical schemes usually requires smaller timesteps compared to backward, implicit methods, but such schemes are also simpler to implement. They typically result in large computational workloads, especially in the High-Performance Computing world.

Heat Equation

The heat equation is one of the canonical examples of stencils. It models the transfer of heat in a medium over time. Consider a surface with uniform thermal conductivity. The function u(t, x, y) giving the temperature at time t and position (x, y) is a solution to the following PDE:
$$\frac{\partial u(t, x, y)}{\partial t} = \alpha \left( \frac{\partial^2 u(t, x, y)}{\partial x^2} + \frac{\partial^2 u(t, x, y)}{\partial y^2} \right).$$
This equation can be discretized to the following stencil computation:
$$\begin{aligned} U(t, x, y) = U(t-1, x, y) &+ c_0 \left( U(t-1, x-1, y) + U(t-1, x+1, y) - 2U(t-1, x, y) \right) \\ &+ c_1 \left( U(t-1, x, y-1) + U(t-1, x, y+1) - 2U(t-1, x, y) \right), \end{aligned}$$
where $c_0 = \alpha \Delta t / \Delta x^2$ and $c_1 = \alpha \Delta t / \Delta y^2$ are constant coefficients depending solely on the discretization steps along each dimension. It thus defines a 5-point, spatially-invariant Jacobi stencil with the following dependence vectors: (-1, 0, 0), (-1, -1, 0), (-1, 1, 0), (-1, 0, -1), (-1, 0, 1). Boundaries may be handled, for example, by fixing u(t, x, y) = K outside the surface. In general, the coefficients $c_0$ and $c_1$ may depend on the position (x, y), as thermal conductivity is not necessarily uniform.
In such a case, the stencil becomes:
$$\begin{aligned} U(t, x, y) = U(t-1, x, y) &+ c_0(x, y) \left( U(t-1, x-1, y) + U(t-1, x+1, y) - 2U(t-1, x, y) \right) \\ &+ c_1(x, y) \left( U(t-1, x, y-1) + U(t-1, x, y+1) - 2U(t-1, x, y) \right) \end{aligned}$$
and hence loses the property of space-invariance. Note that we may recover this property by switching to a multi-field stencil. Indeed, the definition of stencils does not constrain update operations to compute a single value; we may choose to embed the field of coefficients with the actual values, simply propagating the coefficients from one time iteration to the next.

Seismic Modeling

Finite-difference stencils are also used in seismology, for example to model the propagation of seismic waves in a medium. This problem is called seismic modeling. Specifically, one is interested in computing the pressure field P after a pre-determined simulation time, given some initial conditions and a space-varying propagation velocity field v. As with most PDEs, many finite-difference schemes may be used to discretize this problem, resulting in stencils of varying numerical complexity and stability properties. A simple 6-point scheme, spanning two iteration steps, is illustrated in Figure 4.3. The update equation is of the form:
$$P_{x,y,t} = b_{x,y} + P_{x,y,t-1} - P_{x,y,t-2} + a_{x,y} \left( P_{x+1,y,t-1} + P_{x-1,y,t-1} + P_{x,y+1,t-1} + P_{x,y-1,t-1} \right)$$

Figure 4.3: Seismic Modeling.

Fields a and b represent spatially-varying coefficients, computed from the derivatives of the velocity field. The same remarks as for the heat equation regarding spatial invariance also apply here.

Yee's Algorithm

Yee's algorithm for the Maxwell equations is one of the first finite-difference methods for PDEs. It simultaneously solves for the electrical and magnetic fields E and H in the time domain. At first glance, this method does not exactly match the definition of stencils given at the beginning of this Section. Let ∆t be the size of time iterations.
At the n-th timestep, fields E and H are mutually recomputed at times (n - 1/2)∆t and n∆t. Moreover, in Cartesian 3D space, the discretization lattices used for E and H are offset from one another by (∆x/2, ∆y/2, ∆z/2), so that each point of E (or H) is surrounded by 6 points of H (or E) along the canonical axes. To simplify the discussion, we will restrict ourselves to the 1D case. The spatial and temporal decomposition of the iteration space is illustrated by the picture below, along with the data dependence vectors.

(Figure: interleaved lattices of E at times (n - 1/2)∆t and H at times n∆t.)

We can transform this scheme into a stencil in the conventional sense by "grouping" elements of E and H diagonally. In this form, Yee's algorithm is a multi-field Gauss-Seidel stencil, where each point computes two values out of four other groups. The dependence vectors are: (-1, -1), (-1, 0), (-1, 1), (0, -1). The apparent self-dependency in the above diagram only imposes an order on the computation of E and H within the same iteration.

Implementation Challenges

The performance of scientific kernels (such as stencils) is the result of a complex interplay between hardware resources (cache/local memory, bandwidth, computing power), architectural behavior and application-/implementation-specific factors. In the case of stencils, the most important performance factor is the balance between computation and communication. An implementation is said to be compute-bound if its performance is ultimately limited by the available amount of computing power; in contrast, it is said to be IO-bound if computing power is in excess compared to the available memory bandwidth. The balance between both is the prominent challenge in the implementation of stencils. This problem may be conveniently exposed in the formalism of the roofline model [START_REF] Williams | Roofline: an insightful visual performance model for multicore architectures[END_REF], an intuitive tool for understanding the performance characteristics of parallel implementations.
It relates the performance of numerical algorithms, measured in GFlops, to their arithmetic intensity and the characteristics of the implementation platform. Arithmetic intensity I (also called compute/IO ratio) is the implementation-specific ratio, expressed in Flops/B, of computations over communication volume. Let β be the bandwidth limit (in GB/s) of our architecture. We see easily that the maximum performance achievable via parallelization (not taking other resource constraints into account) is simply β × I (GFlops). Since β is a constant, the only way to improve this throughput is thus to reduce memory accesses in order to increase arithmetic intensity. Naturally, performance is also ultimately limited by the peak throughput P of the architecture (for example, on a multi-core architecture, throughput is limited by the number of cores and their frequency). We may thus derive a bound on achievable performance: min(P, β × I) (GFlops).

All these phenomena are illustrated in Figure 4.4. Arithmetic intensity (in Flops/byte) is represented on the horizontal axis, while attainable performance (in GFlops) is displayed on the vertical axis. This plot only gives coarse-grained, conservative performance estimates; in most cases, other factors such as bad data locality also negatively affect the attainable throughput. This model may be refined to account for other limitations, materialized as additional ceilings; for example, peak throughput may be lower without SIMD support. Actual bandwidth limits also depend on a variety of factors, such as access patterns, which can degrade attainable throughput by introducing additional latency. Finally, observe that arithmetic intensity is ultimately capped by the amount of local memory; indeed, it can only be improved by reducing the number of memory accesses via buffering.
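The roofline bound can be sketched as a one-liner, assuming (as above) that β is given in GB/s and I in Flops/byte so that β × I is directly in GFlops; the numbers used below are illustrative, not measurements:

```python
def attainable_gflops(peak, beta, intensity):
    # Roofline bound: memory ceiling (beta * intensity) capped by peak throughput.
    return min(peak, beta * intensity)
```

Below the "ridge point" I = P/β the implementation is IO-bound; above it, it is compute-bound.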
The tiling transformation, exposed in the next sections, gives a means to control trade-offs between computation, communication and memory usage.

Tiling Transformation

Tiling is a fundamental tool in many implementations of stencil computations. Its first use is to improve data locality, by partitioning the computation into smaller chunks, called tiles, that can fit in local memory. Its second purpose is to extract coarse-grained parallelism, to dispatch batches of work to multiple processing elements, or to pipeline the execution of independent tiles on the same datapath. For example, most GPU stencil implementations map tiles to different thread blocks, where fine-grained parallelism is used to concurrently execute independent computations on multiple threads [START_REF] Holewinski | High-performance code generation for stencil computations on gpu architectures[END_REF]. Intermediary values can thus be kept in block-local shared memory, without external memory accesses.

Multiple flavors of tiling have been proposed in the literature. In this Section, we introduce the fundamental ideas of tiling with rectilinear (or hyper-parallelepipedic) tiling. Like most tiling methods, it consists in partitioning the iteration space into convex regions, according to carefully chosen tiling hyperplanes. We will see in Section 4.6 that different hyperplanes may be used to expose trade-offs in terms of parallelism and external communication.

Iteration Space and Dependences

Consider the affine stencil loop nest in Figure 4.5. As you may recall from Chapter 3, we can represent all instances of statement S as a (convex) polyhedral domain:
$$D_S = \{(t, x) \in \mathbb{Z}^2 \mid 1 \le t \le T \wedge 1 \le x \le N\}.$$
$D_S$ is called the iteration space of statement S. As any polyhedron, it is the intersection of finitely many half-spaces, determined by affine constraints. In this case, the affine constraints are simply derived from the loop bounds, which depend on parameters T and N.
Most iterative stencils may be implemented as such perfectly-nested loops, where the outermost loop iterates over timesteps, and the d innermost loops scan the spatial grid. A d-dimensional stencil hence gives rise to an n = (d + 1)-dimensional iteration domain. The iteration space of statement S can be conveniently represented as a regular lattice of integral points (see Figure 4.6). Each point represents an instance (or execution) of statement S. Its coordinates are the values of the iterators (t, x).

Schedule

In Figure 4.6, the iteration order of the original program is represented as a dashed path. Since the loops iterate over each dimension in increasing order, it simply corresponds to a lexicographic scan of the domain: (1, 1), (1, 2), ..., (1, N), (2, 1), (2, 2), ..., (2, N), ..., (T, 1), (T, 2), ..., (T, N).

In general, the iteration order may be linked to that of a schedule. Given an iteration domain $D_S$, a schedule is a map $\Theta : D_S \to O$, where O is a set equipped with a partial or total order $\preceq$. The relative execution order between two instances $\vec{i}, \vec{j} \in D_S$ is given by that of their images in $(O, \preceq)$ by Θ. Precisely: $\Theta(\vec{i}) \prec \Theta(\vec{j})$ implies that $\vec{i}$ is executed before $\vec{j}$.

An interesting class of schedules is given by affine schedules, the set of piecewise, quasi-affine maps to some lexicographically ordered polyhedron.

Let $I = \Theta(D_S) \subset \mathbb{Z}^m$ stand for the image of domain $D_S$ by an affine schedule Θ. For simplicity, let us assume that Θ is an injective function, i.e., each point in I has a single pre-image in $D_S$. When such is the case, the schedule defines a total execution order. In pseudo-code, it corresponds to the following program:

    lexfor (i⃗ ∈ I) S[Θ⁻¹(i⃗)];

where lexfor denotes a lexicographic domain scan. In other words, the program scans the schedule image of the domain in lexicographic order, and executes the corresponding pre-image instance.
Under some conditions, it is possible to modify the execution order of the program without altering its semantics; in the formalism of the polyhedral model, we often apply combinations of affine transformations to the co-domain of the schedule, before regenerating imperative code. However, turning the lexfor construct above into efficient code is a non-trivial problem that was not truly solved until 1991 by Ancourt et al. [START_REF] Ancourt | Scanning polyhedra with do loops[END_REF]. More recently, Boulet et al. [START_REF] Boulet | Scanning polyhedra without do-loops[END_REF] proposed to implement this scan as a state machine, by computing the next relation, mapping each iteration point to its successor in the domain.

Dependence Relation

The schedule of the original program is simply: {S[t, x] → [t, x]}. Since each statement instance writes to a different memory cell, we easily see that data accesses induce the following dependence relation: {S[t, x] → S[t', x'] : (t' = t − 1 ∧ x' = x) ∨ (t' = t ∧ x' = x − 1)}. In more complex cases, dependence analysis techniques such as Array Dataflow Analysis [START_REF] Feautrier | Dataflow analysis of array and scalar references[END_REF], discussed in the previous chapter (see Section 3.4.3), may be used to determine the dependence relation of polyhedral programs, such as stencils, automatically. Intuitively, this relation defines a validity condition for any schedule: an instance must always be executed after all its dependences. Because of our single-assignment addressing, our program only has true (or data-flow) dependences: a statement instance depends on another iff it uses the value produced by that instance. In many cases, to reduce memory usage, some form of memory contraction must be implemented (see Section 4.7). For example, the result of instance S(t, x) could be mapped to A[t%2][x] without changing semantics under the initial schedule.
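Reading the dependence relation above as the union of the two maps (t−1, x) and (t, x−1), it can be made executable (a sketch; the function name and domain convention are ours):

```c
/* In-domain producers read by instance (t,x) of the running example:
   (t-1,x) and (t,x-1).  Reads falling outside {t>=1, x>=1} are program
   inputs rather than dependences.  Returns the number of dependences
   and fills `out` with their coordinates. */
int deps_of(long t, long x, long out[2][2]) {
    int n = 0;
    if (t - 1 >= 1) { out[n][0] = t - 1; out[n][1] = x;     n++; }
    if (x - 1 >= 1) { out[n][0] = t;     out[n][1] = x - 1; n++; }
    return n;
}
```

Instances on the domain border, such as (1, 1), have no in-domain dependences: their reads are initial values.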
However, such redundant memory allocations may result in false or spurious dependences, which may impede re-scheduling or prevent parallelization. Fortunately, for polyhedral programs, such dependences can always be eliminated [Feautrier SSA], in order to separate scheduling and memory allocation concerns. Consider an arbitrary schedule Θ : D_S → Z^m. We say that Θ is valid if it is compatible with the dependence relation defined by the original program, in the following sense: ∀(t, x), (t', x') ∈ D_S, (t', x') → (t, x) ⇒ Θ(t, x) ≺ Θ(t', x'). As we will see, tiling consists in defining a new, higher-dimensional schedule for the original stencil.

Tiling

In the original schedule, consider the number of steps between the production of a value and its re-use by a later iteration. The value produced by instance (t, x) is re-used by iteration (t + 1, x): the reuse distance is thus equal to N, the spatial size of the domain. If N is large enough, the value will be evicted from cache before being accessed again. Ignoring domain boundaries, cache miss rates are thus of 50%. Since RAM accesses can take hundreds of CPU cycles on modern architectures, this problem can significantly affect performance. It is even more severe for higher-dimensional stencils, as in numerical solvers, the size of the grid typically grows exponentially with dimensionality. Tiling addresses this issue by partitioning the domain into tiles of arbitrary size, in order to improve locality. Figure 4.7 illustrates rectangular tiling for our running example. A fundamental idea of tiling is that tiles must be atomic, i.e., there must exist a valid schedule of instances in which the executions of distinct tiles do not overlap. An example of such a schedule, corresponding to a lexicographic ordering of tiles, is represented in Figure 4.7 as a dashed path. As we will see, this schedule is not unique; others can be used, for example, to harvest inter-tile parallelism.
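For the 2×2 rectangular tiling of Figure 4.7, the map from an instance to its tile coordinates and intra-tile offsets can be sketched as follows (tile size 2 assumed, as in the running example):

```c
/* Tile coordinates and intra-tile offsets of instance (t,x)
   under a 2x2 rectangular tiling of {(t,x) | t,x >= 1}. */
void tile_coords(long t, long x, long out[4]) {
    out[0] = (t - 1) / 2;  /* tile index along t        */
    out[1] = (x - 1) / 2;  /* tile index along x        */
    out[2] = (t - 1) % 2;  /* offset within tile, t dim */
    out[3] = (x - 1) % 2;  /* offset within tile, x dim */
}
```

The first two components index the space of tiles; the last two scan the interior of each tile.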
The pattern in the figure corresponds to a 4-dimensional schedule: the two outermost dimensions correspond to the space of tiles, while the innermost dimensions correspond to intra-tile iterations. The schedule may be described by the following quasi-affine map:

{S[t, x] → (⌊(t − 1)/2⌋, ⌊(x − 1)/2⌋, (t − 1) mod 2, (x − 1) mod 2)}

Assuming that 2, the size of tiles, divides T and N evenly, this tiling may be implemented in source code as in Figure 4.8, whose intra-tile loops are:

    /* -------- External communication boundary -------- */
    // Intra-tile iterations.
    for (int t1 = 0; t1 < 2; t1++)
      for (int x1 = 0; x1 < 2; x1++) {
        int t = t0 + t1;
        int x = x0 + x1;
        A[t][x] = f(A[t-1][x], A[t][x-1]);
      }

Tiling improves locality by reducing the re-use distance between two intra-tile iterations, which is now bounded by the volume of the tile. Tile size thus gives a means to control temporal locality, enhancing performance under memory and bandwidth constraints, since results can now be kept in fast memory, reducing the need for external memory accesses. Thanks to this property, tiles are often used as communication boundaries: results produced within a tile are kept in cache for later iterations, while outer dependences must be fetched from elsewhere (e.g., loaded from shared memory, or sent by another node on the network).

Tile-Level Dependencies

Instance-level dependencies naturally induce tile-level dependencies. We say that a tile depends on another tile if a point in the first tile depends on a point in the second tile. Because of atomicity, we may view the result of tiling as a new stencil, where tiles take the role of statement instances. In the Gauss-Seidel example, this new stencil happens to have the same dependences as the untiled stencil. However, this need not be the case. For example, consider the tiled Jacobi stencil in Figure 4.9. Observe that instance-level dependencies cross the border between two tiles in both directions: in other words, with this tiling, the tile-level stencil has a circular dependence. There is naturally no schedule of instances that is atomic for tiles and compatible with this dependence relation: this tiling is invalid.

Skewing

The problem of the stencil in Figure 4.9 is the bi-directionality of data flow across the hyperplane normal to the horizontal dimension. In a more general sense, a tiling of an n-dimensional iteration space is defined by n tiling hyperplanes (φ_0, . . . , φ_{n−1}), together with size and offset parameters. It can be shown [START_REF] Irigoin | Supernode partitioning[END_REF] that a tiling is valid if the projections of the dependence vectors along each normal to the tiling hyperplanes are of the same sign: this condition captures the necessity of uni-directionality we already mentioned. In rectilinear tiling, the tiling hyperplanes are simply the canonical hyperplanes, each normal to one of the canonical basis vectors. For our Jacobi stencil, the dependence vectors are: (−1, −1), (−1, 0), (−1, 1). The vertical tiling hyperplane is normal to the (0, 1) basis vector. We see that: (−1, −1) · (0, 1) = −1 whereas (−1, 1) · (0, 1) = 1, which are of opposite signs. One way to address this issue is to apply rectilinear tiling to a skewed iteration space, where uni-directionality with respect to canonical hyperplanes has been enforced. For example, consider the domain transformation: {S[t, x] → S'[t, x + t]}. We may apply this transformation to the dependence vectors to verify that rectilinear tiling is now valid: (−1, −2), (−1, −1), (−1, 0). All dependence vectors now have non-positive components along each dimension: rectilinear tiling can now be applied. The result of skewing followed by tiling for the Jacobi stencil is illustrated in Figure 4.10. Inevitably, tiling after skewing introduces incomplete tiles at domain boundaries.
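The skewing step can be checked mechanically: apply {S[t,x] → S'[t, x+t]} to each dependence vector and verify that all components become non-positive. A sketch (helper names are ours):

```c
/* Dependence vectors of the 1-D Jacobi stencil. */
static const int jacobi_deps[3][2] = { {-1, -1}, {-1, 0}, {-1, 1} };

/* Skewing S[t,x] -> S'[t, x+t] applied to a dependence vector. */
static void skew(const int d[2], int out[2]) {
    out[0] = d[0];
    out[1] = d[1] + d[0];
}

/* Rectilinear tiling is valid when every dependence vector has
   non-positive components along every canonical dimension. */
int rectilinear_valid(const int (*v)[2], int n) {
    for (int i = 0; i < n; i++)
        if (v[i][0] > 0 || v[i][1] > 0) return 0;
    return 1;
}

/* Skew the Jacobi dependences and re-check validity. */
int skewed_jacobi_valid(void) {
    int s[3][2];
    for (int i = 0; i < 3; i++) skew(jacobi_deps[i], s[i]);
    return rectilinear_valid((const int (*)[2])s, 3);
}
```

The original Jacobi dependences fail the test because of the (−1, 1) vector; the skewed ones, (−1, −2), (−1, −1), (−1, 0), pass.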
If the domain is small compared to tile size, partial tiles can account for a large proportion of tiles and introduce a significant overhead, as their implementation cannot be fully specialized. It is to be noted that writing efficient implementations of skewed and tiled stencils by hand is a challenging task. When tile dimensions are compile-time constants, polyhedral code generation techniques can be applied. For example, the Pluto compiler features efficient algorithms for finding valid tiling hyperplanes and performing source-to-source tiling transformations. However, when tile dimensions are dynamic parameters, tiling can no longer be seen as a polyhedral transformation. Parametric code generators have been proposed, for example, for rectilinear tiling [START_REF] Kim | Efficient tiled loop generation: D-tiling[END_REF][START_REF] Tavarageri | Parametric tiling of affine loop nests[END_REF][START_REF] Hartono | Dyntile: Parametric tiled loop generation for parallel execution on multicore processors[END_REF]. Such generators allow dynamic tile-size tuning.

Tile Halo and Communication Volume

The set of external dependences of a tile is called its halo. For 1D stencils, halo volume grows linearly with tile size, while computation volume grows quadratically; this fact generalizes to higher-dimensional stencils as well. Since the halo of a tile can be seen as its (input) communication volume, this means that increasing tile size also improves the compute/IO ratio, thus reducing bandwidth requirements, at the cost of increased memory usage. For multidimensional rectilinear tiles, a tight upper bound on communication volume, based on the concept of depth, can be given. Let (d_j)_{1≤j<M} be the dependence vectors of the stencil. For reasons that will become clear shortly, we require that all dependence vectors have non-positive coordinates.
Dependence depth along the n-th dimension is defined as: δ_n = max_j (−d_j · 1_n), where 1_n denotes the n-th canonical basis vector. Let S_n denote tile size in the n-th dimension. Then, communication volume is bounded by:

Σ_{i=0}^{d} δ_i Π_{j≠i} S_j

Consider the restricted case where S_i = S for all i. Then, this bound simplifies to (Σ_i δ_i) S^d, while the volume of the tile is S^{d+1}. The ratio between computation and communication is thus S / (Σ_i δ_i), and it increases linearly with size S.

Wavefront Parallelism

Wavefront parallelism was initially introduced for instance-wise parallelism, and is not directly linked to tiling. A wavefront is a hyperplane of parallel instances in regular computations such as stencils. When a stencil is tiled, one is interested in finding sets of independent tiles for parallel execution. In Figure 4.11, tile wavefronts are pictured in alternating colors for the Gauss-Seidel stencil. Observe that, because wavefronts scan diagonals of the domain, the first and last wavefronts contain fewer tiles, and thus less parallelism, than the others. This problem is called load imbalance, and its elimination is one of the main motivations behind many non-rectilinear tiling techniques. Extracting wavefront parallelism may once again be seen as an application of skewing. It consists in exposing parallelism in the outer n − 1 schedule dimensions, through a suitable domain transformation. For example, we may apply the following skewing to the domain of tiles of the Gauss-Seidel stencil: {S[t, x] → S[t + x, x]}. We can see in Figure 4.12 that wavefronts actually correspond to vertical bands of tiles in the transformed tile domain.

Hierarchical Tiling

In this Section we only considered one level of tiling. In order to accommodate multiple levels of memory hierarchy, a recursive tiling strategy may be adopted, by splitting tiles repeatedly.
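Returning to the communication-volume bound above, it can be evaluated numerically; here for the skewed Jacobi dependence vectors and an arbitrarily chosen 8×8 tile (a sketch, 2-D case only):

```c
/* Dependence depth along dimension n: delta_n = max_j (-d_j . 1_n),
   assuming all dependence components are non-positive. */
long depth(const long (*d)[2], int m, int n) {
    long dmax = 0;
    for (int j = 0; j < m; j++)
        if (-d[j][n] > dmax) dmax = -d[j][n];
    return dmax;
}

/* Upper bound on the communication volume of a 2-D rectilinear tile
   of sizes S[0], S[1]:  sum_i delta_i * prod_{j != i} S_j. */
long comm_volume_bound(const long (*d)[2], int m, const long S[2]) {
    long total = 0;
    for (int i = 0; i < 2; i++) {
        long prod = 1;
        for (int j = 0; j < 2; j++)
            if (j != i) prod *= S[j];
        total += depth(d, m, i) * prod;
    }
    return total;
}
```

For dependences {(−1,−2), (−1,−1), (−1,0)} the depths are δ = (1, 2); with S = (8, 8) the bound is 1·8 + 2·8 = 24, against a tile volume of 64.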
For example, the smallest tiles may be unrolled for register-level tiling, while the largest ones serve as coarse-grained communication units. While in most hierarchical approaches the same tiling hyperplanes are used at all levels, hierarchical tiling can be used to expose nested parallelism through further domain transformations. For example, jagged tiling allows better control of the grain of intra-tile parallelism [START_REF] Shrestha | Jagged tiling for intra-tile parallelism and fine-grain multithreading[END_REF].

Tiling Variants

Rectilinear tiling offers a convenient way to extract coarse-grained parallelism from regular computations. However, it suffers from a significant drawback called load imbalance. Indeed, in hyper-parallelepipedic domains, tile wavefronts usually contain varying numbers of tiles: at the beginning and the end of the computation, there is thus not enough parallelism to evenly dispatch tiles on multiple processors. As a synchronization point is required after each wavefront to enforce inter-tile dependences, this results in a suboptimal use of computational resources, as some processing elements need to wait for others to finish their work. This problem is especially severe on small grid sizes, since the number of large wavefronts does not compensate for this waste of computing power. Dynamic tile scheduling may help mitigate the issue [START_REF] Mullapudi | Tiling for dynamic scheduling[END_REF], by allowing some tiles to run earlier, but does not address the root of the problem. In technical terms, for stencil computations, rectilinear tiling is said to lack the concurrent start-up property. We say that a tiling technique enables concurrent start along some (usually canonical) dimension i if tiles along the corresponding unit vector can be computed in parallel. This definition can be extended to any subspace of Z^n. Concurrent start over several dimensions can be beneficial in two ways:
• First, it provides more coarse-grained parallelism.
• Secondly, it reduces the need for synchronizations.
In this Section, we review several techniques which all feature concurrent start along at least one canonical dimension. Each of them also exposes subtle trade-offs, e.g., in terms of communication volume and fine-grained parallelism.

Overlapped Tiling

The first technique we review is called overlapped tiling [START_REF] Krishnamoorthy | Effective automatic parallelization of stencil computations[END_REF]. While most tiling transformations are tessellating, i.e., partition the iteration space into non-overlapping tiles, overlapped tiling enables concurrent start through redundant computations (see Figure 4.13). This method eliminates the need for communication along one or several dimensions, but comes with a significant computational overhead, especially for "tall" tiles spanning a large number of timesteps. Overlapped tiling is hence an interesting strategy on architectures where memory bandwidth is commonly the performance bottleneck, such as GPUs. Even then, though, actual performance results from a complex interplay of factors, and tile shape selection is an important concern. Indeed, the amount of redundant computation increases with tile height, and can thus quickly become detrimental to performance. Moreover, as illustrated by Figure 4.14, the minimal tile and its halo may have a relatively complex shape; to simplify the control flow, more data than necessary must often be transferred. The proportion of useful communication thus also decreases with tile height. Finally, the amount of "live" data within a tile decreases at each time iteration, resulting in a suboptimal use of accelerator memory resources.

Diamond Tiling

For Jacobi stencils, rectilinear tiling constrains one of the tiling hyperplanes to be parallel to the original spatial dimensions.
Diamond tiling [START_REF] Bandishti | Tiling stencil computations to maximize parallelism[END_REF] lifts that restriction and determines tiling hyperplanes from the envelope of the cone spanned by the dependence vectors (see Figure 4.15). The main benefit of this technique is to enable concurrent start along one spatial dimension. However, in most cases, tiles cannot be started along all dimensions concurrently. Diamond tiling has proven more efficient than conventional tiling on multi-core CPUs [START_REF] Malas | Multicoreoptimized wavefront diamond blocking for optimizing stencil updates[END_REF] and GPU architectures [START_REF] Korch | Diamond-like tiling schemes for efficient explicit euler on gpus[END_REF]. However, it suffers from two drawbacks: i) bandwidth requirements are irregular, since the first and last intra-tile timesteps require more memory accesses per iteration. This can lead to a computation being memory-bound, in spite of its average bandwidth usage not exceeding the architectural limit. ii) Similarly, the first and last timesteps expose less parallelism than the others, since tiles exhibit narrow "peaks" with fewer iterations. Consequently, intra-tile wavefronts exhibit the same pipelined start-up pattern as tile wavefronts in conventional tiling. One may argue that diamond tiling does not truly solve the problem of load imbalance: it displaces it to a finer-grained level.

Hexagonal Tiling

Hexagonal tiling [START_REF] Grosser | Hybrid hexagonal/classical tiling for GPUs[END_REF], shown in Figure 4.16, is a variant of diamond tiling that partially addresses the problems exposed above. Instead of deriving tile shapes from the narrowest dependence cone, tile peaks are "flattened" so that each intra-tile timestep exposes a minimal level of parallelism. This strategy makes it possible to reduce load and bandwidth imbalance.
We observe that, as with diamond tiling, full generalization to higher-dimensional stencils, while enabling concurrent start along all spatial dimensions, is not possible.

Split Tiling

Another approach, developed for GPUs, is named split tiling. It combines some of the benefits and drawbacks of diamond and overlapped tiling. Like diamond tiling, it enables concurrent start without introducing redundant computations. On the other hand, like overlapped tiling, it only requires external memory communication at time boundaries. The technique is illustrated in Figure 4.17. It consists in splitting each spatial wavefront into phases of heterogeneously-shaped trapezoidal tiles. These phases are executed sequentially, with the output halos of the first phase kept in scratchpad memory for the second phase, reducing external bandwidth requirements. The main drawback of this approach is probably the large amount of shared memory needed to keep inter-tile dependences on-chip, together with the complexity introduced by alternating tile shapes; this makes the technique a poor fit for architectures such as FPGAs, which are more optimized towards streaming access patterns with limited buffering.

Memory Allocation

Naive stencil implementations such as the one in Figure 4.5, where each memory cell is written at most once, are suboptimal in terms of memory usage. Indeed, the memory footprint of the kernel increases linearly with simulation time, while the lifespan of each value does not exceed a few timesteps. As we will see, it is possible to reduce memory usage by re-using space assigned to values that are no longer needed. We call memory mapping or memory allocation a function assigning statement instances to memory addresses. For the purpose of this discussion, let us define mappings as functions of the form: i ∈ S → φ(i) ∈ N, where S denotes the iteration space of some statement S.
For example, if each statement instance S(i, j) writes to address S[i][j], where S is a multidimensional array of size M × N, then the memory allocation is given by: φ(i, j) = i × N + j. A natural problem is to minimize the memory usage of the algorithm by assigning multiple instances in S to the same location. The challenge is to reclaim memory containing outdated values without overwriting live values. The validity of an allocation is thus closely tied to execution order: we can reuse memory cells when all the consumers of the value they contain have been scheduled. To make this statement clearer, we need to introduce some terminology. Let i denote an instance in iteration space S. The uses of i, uses(i) ⊂ S, are the set of iteration points j such that i ∈ deps(j) (or equivalently, (j, i) ∈ deps). We wish to ensure that cell φ(i) is not overwritten before the last use of i. Let us restrict the discussion to quasi-affine schedules. Such a schedule θ : S → S' maps iteration points to their position in a new, lexicographically ordered domain S', such that i is executed before j if and only if θ(i) ≺ θ(j). The last use of a value i is naturally the lexicographic maximum of the image of its use set by the schedule: Last_θ(i) = lexmax(θ(uses(i))). We can now state the validity condition for a memory mapping φ under a schedule θ: ∀ i ≠ j, θ(i) ≺ θ(j) ≼ Last_θ(i) ⇒ φ(i) ≠ φ(j). Two instances i and j which cannot be mapped to the same location are said to be in conflict (written i ⋈ j). A schedule thus implicitly defines a set of conflicting statement instances, similar to the conflict graph used in register allocation. The analogy goes even further; for example, whereas in register allocation the maximum number of live values is given by the size of the maximum clique, in our context, we can define a maximum set D such that D × D is a subset of the conflict set.
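The last use Last_θ(i) reduces to a lexicographic maximum over the use set; a small sketch over explicit point lists, under the identity schedule:

```c
/* Lexicographic maximum of a non-empty list of 2-D points (t,x):
   the last use of a value whose uses are the given points. */
void lexmax2(const long (*pts)[2], int n, long out[2]) {
    out[0] = pts[0][0];
    out[1] = pts[0][1];
    for (int i = 1; i < n; i++)
        if (pts[i][0] > out[0] ||
            (pts[i][0] == out[0] && pts[i][1] > out[1])) {
            out[0] = pts[i][0];
            out[1] = pts[i][1];
        }
}
```

For a Jacobi value (t, x), whose uses are (t+1, x−1), (t+1, x) and (t+1, x+1), the last use is (t+1, x+1).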
The size |D| gives a lower bound on the required memory size. In general, fixing a particular memory allocation early on restricts the set of valid schedules and can thus hinder, e.g., efficient parallelization. The main problem is that memory reuse introduces non-flow dependences. For this reason, it is customary to consider these two problems independently: reordering optimizations, such as tiling and parallelization, are applied first; one then looks for an allocation that reduces memory usage, without changing the order of execution. This second step is called memory contraction. For simplicity, the search is often restricted to modular allocations, i.e., affine mappings combined with modulos by compile-time constants on each array dimension. Another approach is to use a schedule-independent memory allocation. In the specific case of stencils, one can use allocations based on Universal Occupancy Vectors (UOV) [START_REF] Strout | Schedule-independent storage mapping for loops[END_REF]. Such allocations are easy to compute and are not affected by execution order.

Tile Size Selection

Tile size can significantly impact the performance of a stencil implementation. Unprincipled tile size selection is likely to result in sub-optimal performance and/or waste precious resources such as bandwidth, scratchpad memory or (in the case of overlapped tiling) computing power. For this reason, various methods have been proposed to adapt the size of tiles to the problem, requirements and/or architecture at hand, usually with the goal of maximizing performance. These approaches may be classified into static (analytical) and empirical methods. Static methods are usually based on more or less elaborate performance models.
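Memory contraction can be illustrated on a 1-D Jacobi sweep: under the original schedule, the modular mapping A[t mod 2][x] preserves semantics. Below, a full single-assignment version and its contracted counterpart compute the same result (the kernel `jf`, sizes and boundary handling are illustrative assumptions):

```c
#define JT 5    /* timesteps (illustrative) */
#define JX 8    /* interior points (illustrative) */

static int jf(int a, int b, int c) { return a + b + c; }  /* stand-in kernel */

/* Single-assignment version: one row of storage per timestep. */
void jacobi_full(int out[JX + 2]) {
    int A[JT + 1][JX + 2];
    for (int x = 0; x < JX + 2; x++) A[0][x] = x;          /* arbitrary init */
    for (int t = 1; t <= JT; t++) {
        A[t][0] = A[t - 1][0];                              /* copy borders  */
        A[t][JX + 1] = A[t - 1][JX + 1];
        for (int x = 1; x <= JX; x++)
            A[t][x] = jf(A[t - 1][x - 1], A[t - 1][x], A[t - 1][x + 1]);
    }
    for (int x = 0; x < JX + 2; x++) out[x] = A[JT][x];
}

/* Contracted version: modular mapping A[t % 2][x], two rows of storage. */
void jacobi_contracted(int out[JX + 2]) {
    int B[2][JX + 2];
    for (int x = 0; x < JX + 2; x++) B[0][x] = x;
    for (int t = 1; t <= JT; t++) {
        int cur = t % 2, prv = 1 - cur;
        B[cur][0] = B[prv][0];
        B[cur][JX + 1] = B[prv][JX + 1];
        for (int x = 1; x <= JX; x++)
            B[cur][x] = jf(B[prv][x - 1], B[prv][x], B[prv][x + 1]);
    }
    for (int x = 0; x < JX + 2; x++) out[x] = B[JT % 2][x];
}
```

The contraction is valid here because, under the lexicographic schedule, the last use of row t − 1 precedes the first write of row t + 1; applying the same mapping after a reordering transformation would require re-checking this condition.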
For example, early techniques mostly consider cache/local memory size [START_REF] Lam | The cache performance and optimizations of blocked algorithms[END_REF][START_REF] Coleman | Tile size selection using cache organization and data layout[END_REF], while more recent work focuses on fine-grained modeling. Note that parametric tiling may be combined with static methods to dynamically select tile size based on, e.g., domain size parameters. Empirical methods take the form of compile-time tuning. They are often used to adapt generic software packages to an unknown target platform. For example, the build process of several linear algebra libraries (such as LAPACK/CUBLAS) famously includes an auto-tuning phase. In this case, empirical tuning is more practical than using analytical models, as the same software may be built on a large number of (future) platforms, with possibly vastly different architectures.

Conclusion

In this chapter, we have exposed the problem of accelerating iterative stencil computations, a large class of compute- and data-intensive algorithms. We adopted a fairly high-level point of view, exposing the main challenges and techniques in a non-architecture-specific way. A large part of our discussion was devoted to the tiling transformation and its variants, a tool used in most efficient implementations of stencil computations, particularly on multi-core processors and Graphics Processing Units. In the next chapter, we focus on implementation trade-offs for stencil computations on FPGAs. Their strong embedded-systems roots make them a natural choice for the implementation of stencils in computer vision applications. Moreover, due to their flexibility and power efficiency compared to GPUs, FPGAs are also gaining traction in the HPC world. Embedded and HPC applications possess dramatically different requirements: in the embedded world, one is usually interested in minimizing cost (area / power consumption) under a fixed performance constraint.
In contrast, implementations targeting HPC algorithms should provide maximum performance for a given platform. Applications themselves have very different characteristics; some stencils are much more compute-intensive than others, and grid sizes vary greatly between use cases. Unfortunately, most FPGA stencil accelerators are ad-hoc implementations, or generic templates that fail to integrate this diversity. In the next chapter, we will see that there is, in fact, no single best accelerator architecture for stencils. Based on the tiling transformation and the high-level view exposed in the current chapter, we derive a systematic methodology for FPGA stencil accelerators that can accommodate a large set of concerns.

Chapter 5

Managing Trade-Offs in FPGA Implementations of Stencil Computations

Introduction

In Chapter 4, we discussed the problem of accelerating stencil computations from a high-level, architecture-agnostic point of view. We showed the importance of the tiling transformation for both parallelizing stencils and reducing communication needs, problems that arise on most implementation platforms. We now focus on the design and implementation of FPGA stencil accelerators. Thanks to their inherent flexibility, FPGA platforms are natural choices for implementing stencils, as they can accommodate a large set of design constraints. However, we observe that earlier work on the topic has usually focused on ad-hoc implementations for specific algorithms, or has failed to provide ways to enable application-specific trade-offs. In this work, we attempt to fill this gap by providing a systematic FPGA design methodology for stencil accelerators, based on iteration-space tiling. We propose a family of designs based on sensible design parameters, exposing intuitive trade-offs between throughput, bandwidth requirements and local memory usage. We focus on system-level issues, not on fine-grained performance tuning.
For this reason, we have developed a code generator producing HLS-optimized C/C++ architectural descriptions. The main design knobs are:
• Unrolling Factor: Our accelerators are based on a heavily pipelined datapath derived by HLS tools. The amount of fine-grained parallelism in the datapath is configured through unrolling of the innermost loops. This adjusts the level of parallelism, in terms of stencil update operations per clock cycle, to application requirements.
• Tile Shape: The choice of tile shapes, characterized by the size of a tile in each dimension (possibly leaving some dimensions untiled), enables trade-offs between on-chip memory usage and bandwidth consumption.
We also propose simple analytical models for both performance and area cost to guide the exploration of the design space. Using these models, a designer is easily able to identify the most interesting design points. We also tackle the largely ignored problem of deriving memory layouts optimized for contiguity. Indeed, in the kind of architectures we are targeting, memory accesses typically suffer from a significant latency overhead. Contiguous burst memory transfers must be used to achieve reasonable performance. However, typical memory layouts for stencils (such as affine modular allocations) only result in poor contiguity and do not allow for large burst transfers. We propose a data layout based on canonical projections of tile faces, partitioning the inputs and outputs of a tile into a small number of contiguous regions. This Chapter is organized as follows. In Section 5.2, we present our design space of stencil accelerators. We discuss our design principles and implementation choices, as well as our design parameters and the trade-offs they enable. In Section 5.3, we derive performance and area models for our architecture. In Section 5.4, we present our contiguity-optimizing memory layout. In Section 5.5, we present our HLS implementation and code generation flow.
In Section 5.6, we present and discuss our experimental results. We discuss related work on stencil accelerators and other considerations in Section 5.7, and conclude in Section 5.8.

Architectural Design Space

In this section, we present our parameterized family of FPGA stencil accelerators.

Target Platform

In this work we target the Xilinx Zynq platform. However, our work is equally applicable to other hybrid System-on-Chip platforms, such as the Intel SoC, featuring an FPGA fabric tightly coupled with a general-purpose processor. In both the Zynq and the Intel SoC, the FPGA shares coherent access to main memory with the CPU cores through the last level of cache.

Accelerator Overview

An overview of the architecture can be seen in Figure 5.1. Our accelerator takes as input a series of tile coordinates, and computes them in a single pass. It is implemented as a bus-master device on the AXI4 bus, using HP ports to access external memory. The implementation decouples memory accesses from execution through macro-pipelining at the tile level. Macro-pipeline stages are implemented as HLS actors:
• Communication (Read and Write) actors read/write tile inputs/results from/to main memory through HP ports.
• The compute actor performs the actual tile computations.
Communication and compute actors are inter-connected with FIFOs of sufficiently large size. The main benefit of this decoupling is that it provides overlapping between computation and communication: tile inputs can be fetched / committed in parallel with computation. If available bandwidth is sufficient, memory access time can thus be effectively hidden, provided that the number of tiles to execute is large. Indeed, even with computation/communication overlapping, the execution of the first and last tiles in a sequence necessarily introduces some latency overhead.

Compute Actor

We now focus on the compute actor, the computational core of our architecture.
Execution Datapath

The compute actor computes each tile in a single sweep using a deeply pipelined datapath, derived by the HLS tool. The entire set of operations in a tile is pipelined with an initiation interval of one. In terms of input C code to HLS, this pipelining is realized by coalescing the loop nest iterating over a tile into a single loop, and pipelining the body of that loop. The updates within a single time step are independent of each other and can be fed to the datapath every cycle, provided that the data is available on the FIFOs. However, pipelining across time steps requires that the results from the previous time step have been entirely computed before being first accessed, which is not necessarily true in the presence of pipelining. This imposes a constraint on tile size and pipeline depth; such constraints are discussed in Section 5.2.5. Our datapath can be further configured to perform an arbitrary number of stencil updates per cycle, simply by unrolling the innermost loop by a fixed factor before coalescing. Adjusting this factor allows us to control the computational intensity of our IP. Empirically, we observe that the pipeline depth ∆ depends on the target operating frequency provided by the user, but not on the unrolling factor. This is not surprising, since propagation time should not be affected by the replication induced by unrolling, but it is a beneficial property for controlling throughput.

Memory Re-Use

The compute actor takes advantage of the reuse of input data and intermediate results within a tile. We apply a technique similar to the one by Cong et al. [START_REF] Cong | An optimal microarchitecture for stencil computation acceleration based on non-uniform partitioning of data reuse buffers[END_REF] to minimize local memory usage and avoid memory bank conflicts. However, we use HLS arrays instead of explicit FIFOs, which necessitates dealing with the initialization of these arrays for each tile. This can be achieved in two ways: 1.
By increasing the number of memory ports in the on-chip memory to parallelize the initialization (i.e., without performance overhead); 2. By inserting wait states to serialize the initialization phase.

We use the latter, as on-chip memory is a scarce resource. These wait states correspond to the halo of the tile: our loop scans halo regions along with iteration points to pre-load external dependencies. Were this not the case, the first iteration on a tile, for example, would need to read all its dependencies in parallel from different BRAMs or FIFOs. With this approach, all external dependencies are already available in registers or on-chip memory when an update is started.

Overview of the Read/Write Actors

The read actor streams input data in to the compute actor, and the write actor streams results out of the compute actor. These actors perform burst accesses to external memory through the AXI4 interface. We use a custom data layout, discussed in Section 5.4, to ensure that most memory accesses are contiguous. We take special care to minimize idle time and maximize bus occupation to get as close as possible to the maximum achievable bandwidth (e.g., 600 MB/s per HP port with a 32-bit bus width). To this aim, the communication actors are in fact split into several parallel actors to, e.g., re-order memory elements and compute addresses in parallel with actual memory transfers.

Design Parameters

Our approach aims at exposing relevant design knobs to drive the design space exploration. These knobs are:

• The choice of the Unrolling Factor (UF), representing the number of stencil updates per cycle (in steady state). The datapath performs UF updates in parallel. The value of UF hence determines the amount of parallelism. Increasing this factor will boost the maximum throughput that can be attained, but will also raise bandwidth requirements to keep feeding the datapath. The choice of this parameter is mostly driven by the throughput requirements.
Larger values give higher throughput, but increase area cost. In HPC applications, one will want to increase this factor as much as possible, while in many embedded applications, a small value of UF may be sufficient.

• The choice of the tile sizes (S_0, ..., S_d) is also critical, as tile shape determines data locality. Tile sizes control the trade-off between off-chip bandwidth requirements and on-chip memory usage. Larger sizes reduce bandwidth requirements, but increase on-chip data storage. Larger tile sizes also reduce the overhead due to halo regions, further improving throughput.

• Finally, we allow for partial tiling (see Figure 5.2): one may choose to only tile outer domain dimensions, while leaving one or several inner dimensions untiled. In these dimensions, a tile spans the entirety of the domain. This technique can be interesting if the stencil grid is too large to fit in on-chip memory, but bands across only some dimensions are not. Another benefit of this approach, compared to full tiling, is that it reduces the overhead of partial tiles, since only a subset of domain dimensions need to be skewed. While tiling in all dimensions can be used for domains of any (and even unknown) size, partial tiling is only applicable if domain dimensions are known at compile time and relatively small. For this reason, it is an interesting approach, for example, for computer vision algorithms, while tiling all dimensions may be required in most HPC use cases.

Constraints

We require that the tile size in the innermost dimension, S_d, is evenly divisible by UF. This constraint avoids the complex control arising from cases where only a subset of unrolled iterations are valid computations. Also, tile sizes in the spatial dimensions are constrained to have more iterations than the pipeline depth: (S_1 × S_2) / UF > ∆, to prevent dependence violations.

Performance Modeling

The parameters above expose a huge design space to be explored.
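When enumerating candidate design points, the constraints above translate into a simple validity filter. A minimal sketch (our helper, not part of the toolchain; names are ours, 3D case with innermost tile size S2):

```cpp
#include <cassert>  // only needed for the usage check below

// Sketch: checks the two constraints of Section 5.2.5 for a candidate
// 3D design point (spatial tile sizes S1 x S2, unrolling factor UF,
// pipeline depth delta reported by the HLS tool).
bool valid_design(int S1, int S2, int UF, int delta) {
  bool divisible = (S2 % UF) == 0;            // innermost size divisible by UF
  bool deep_enough = (S1 * S2) / UF > delta;  // more iterations than pipeline depth
  return divisible && deep_enough;
}
```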
In this section we present performance models to guide the exploration of this space. We adopt the following conventions:

• The dimensions of a tile in the skewed iteration space are written as S_0 × S_1 × ... × S_d, where d is the number of data dimensions. S_0 is thus the number of time steps spanned by a tile, while for i > 0, S_i represents its extent along the i-th data dimension. In the case of partial tiling, dimensions S_{d-k+1}, ..., S_d, where k denotes the number of untiled dimensions, correspond to iteration domain dimensions.

• Dependence vectors are denoted (d_0, ..., d_{n_d - 1}), where n_d is the number of such dependencies. For any i ≤ d, u_i is the i-th canonical basis vector of Z^{d+1}, with 1 as its i-th component and 0 elsewhere.

• The unrolling factor is written UF.

Asymptotic Performance Model

The important metric to model is the number of stencil updates per cycle, computed as follows:

UpdatesPerCycle = TileVolume(S_0 × S_1 × S_2) / TileCycles.

TileCycles denotes the number of cycles it takes to execute a tile, while TileVolume = Π S_i simply corresponds to the amount of computation in a tile. Assuming overlapping between computation and communication, TileCycles corresponds (in steady state) to the slower of the communication and computation tasks. In other words, it is the maximum of the number of cycles spent computing, CompCycles, and the number of cycles spent on memory transfers, CommCycles:

TileCycles = max(CompCycles, CommCycles)

This is the asymptotic performance of our design, which is reached when the problem size is large enough to make the overhead at the boundaries (where computation and communication are not fully overlapped) negligible.

Performance of the Compute Actor

Recall that the pipelined datapath of the compute actor computes, in steady state, UF updates per cycle. In addition to the tile volume, the compute actor scans the boundary halo regions to fetch input data.
Representing the dependence depth in the d-th dimension (i.e., the thickness of the halo in that dimension) as h_d, the number of times the compute actor datapath is invoked per tile is thus:

CAVolume = S_0 × (S_1 + h_1) × ⌈(S_2 + h_2) / UF⌉

Since the initiation interval is always 1 for our design, the total number of cycles it takes to execute the compute actor, assuming all inputs are ready, is given by:

CompCycles = CAVolume + ∆ - 1

where ∆ denotes the pipeline depth of the compute actor datapath. The pipeline depth, determined by the HLS tool during RTL generation, is a function of the update formula and synthesis frequency, but is not influenced by the choice of tile sizes or unrolling factors.

Communication Modeling

It is critical to make use of burst communication to maximize bandwidth utilization. Indeed, the number of cycles for a memory transfer of n contiguous words can be accurately estimated as:

BurstCycles(n) = n + BurstLatency
BurstLatency = Frequency(MHz) × K_burst × 0.01

where K_burst is a constant representing the burst latency at 100 MHz (about 30 cycles in our case). For this reason, we use a custom memory layout, detailed in Section 5.4, to ensure that almost all memory transfers permit burst accesses. Moreover, we use all HP ports concurrently, such that burst latency can be totally hidden. Hence, modeling the communication cost can be simplified to modeling the data volume. The data volume to be communicated is exactly the halo region of a tile. This can be approximated by:

CommVolume = Π_{i=0}^{d} (S_i + h_i) - Π_{i=0}^{d} S_i.

When the data element is one word, CommVolume directly translates to the number of transfer cycles, CommCycles. If the stencil operates, for example, on multiple numerical fields, this formula may need to be changed to reflect the larger element size.

Modeling the Area Cost

Precise modeling of the area cost can be extremely challenging, and is heavily influenced by the HLS tool and the algorithm.
However, it is not difficult to make relative comparisons among design points in our parameter space. Indeed, we can expect the unrolling factor and the communication volume to both have linear relationships with area: UF with LUTs/DSPs, and communication volume with on-chip buffer requirements. This suggests, prior to design space exploration, sampling the design space and performing linear regressions in order to compute area models. To aggregate usage reports into a single number, we use the sum of the utilization rates (Slice/BRAM/DSP) as the area metric. While no metric is perfect, we choose this one as it captures the relative scarceness of hardware resources on the board. To represent the interaction between UF and tile size, and to relate these values to area, we propose to infer the following linear functions from a few design points:

C_dp = a_dp × UnrollFactor + b_dp
C_mem = a_mem × CommVolume + b_mem

where the a_i, b_i must be learned with linear regression. As we demonstrate in Section 5.6, a simple estimate based on the unrolling factor and tile face volumes gives sufficient insight into the area cost to guide the design space exploration. Moreover, we will see that only a few design points are required to obtain sufficiently accurate models, which makes this approach practical.

Data Layout

We mentioned in Section 5.3.1 that contiguity plays an important role in maximizing memory bus occupation, as it reduces access latency thanks to burst accesses. Indeed, only a small fraction of the available bandwidth can be effectively used without bursts. For example, on the ZC706 board we use for our experiments, each memory access incurs a latency penalty of about 30 cycles. This means that the best bandwidth efficiency that can be achieved on a given memory port, using bursts of maximum size (256 words), is approximately 90%. However, if we instead use bursts of only 32 words, efficiency drops to about 50% because of burst latency.
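Taken together, the throughput, burst and area models of this section can be assembled into a small cost estimator. The following sketch (our code, with names of our choosing; not part of the toolchain) mirrors these formulas for the 3D case:

```cpp
#include <algorithm>
#include <cassert>  // only needed for the usage check below
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the Section 5.3 cost models (3D case). h0..h2 are dependence
// depths (halo thicknesses), delta the pipeline depth from the HLS tool.
struct Design { long S0, S1, S2, UF, h0, h1, h2, delta; };

long comp_cycles(const Design& d) {
  // CAVolume = S0 * (S1 + h1) * ceil((S2 + h2) / UF)
  long ca_volume = d.S0 * (d.S1 + d.h1) * ((d.S2 + d.h2 + d.UF - 1) / d.UF);
  return ca_volume + d.delta - 1;  // CompCycles = CAVolume + delta - 1
}

long comm_cycles(const Design& d) {  // halo volume, one word per element
  return (d.S0 + d.h0) * (d.S1 + d.h1) * (d.S2 + d.h2) - d.S0 * d.S1 * d.S2;
}

double updates_per_cycle(const Design& d) {  // TileVolume / max(comp, comm)
  return (double)(d.S0 * d.S1 * d.S2) /
         (double)std::max(comp_cycles(d), comm_cycles(d));
}

long burst_cycles(long n, long latency) {  // n words + fixed burst latency
  return n + latency;
}

// Ordinary least squares fit for the linear area models C = a*x + b.
void linreg(const std::vector<double>& x, const std::vector<double>& y,
            double& a, double& b) {
  double n = (double)x.size(), sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
  }
  a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  b = (sy - a * sx) / n;
}
```

For instance, a hypothetical 32x32x32_8 point with unit halos and ∆ = 80 is compute-bound under this model (CompCycles > CommCycles), yielding slightly over 6 updates per cycle.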
It is thus critical for performance that most communication is performed using bursts of maximum size. Traditional memory layouts for multi-dimensional arrays (and modular contractions thereof, such as alternating between two copies of the spatial grid) typically provide contiguity outward from the innermost array dimensions: for example, in a 3D array A[M][N][P], consecutive elements A[t][x][y] and A[t][x][y+1] may be stored contiguously, as well as full consecutive lines A[t][x][_] and A[t][x+1][_]. Consequently, when tiling iteration spaces along all dimensions, with such layouts, we find ourselves repeatedly accessing many small contiguous fragments. For example, the rectangular region A[t][x][y] → A[t][x+N][y+M] is composed of N + 1 contiguous segments of length M + 1. Suppose that M and N represent tile dimensions, and that this region represents the halo face of a tile along the temporal dimension. M and N are likely much smaller than the maximum burst size, and communication time is thus dominated by burst latency. The discussion above suggests partitioning the input set of a tile into faces, and storing those faces contiguously. It is easy, using this kind of decomposition, to ensure that each face can be read contiguously. In fact, it is equally easy to enforce that all tile inputs form one contiguous segment in memory, for example by ordering them by their first read in the tile. The problem is that writes then cannot be performed contiguously, and nothing is gained performance-wise because of this asymmetry. Moreover, some results may be used by multiple tiles and must hence be stored redundantly; there is thus an imbalance between read and write volume.

(Figure 5.3: decomposition of the halo, of thicknesses h_0, h_1, h_2 around a tile of size S_0 × S_1 × S_2, into buffers B_0, B_1, B_2 and B_aux.)

Of course, we can also ensure contiguity for writes, with symmetric problems. We tackle this challenge by storing projections of tile inputs/outputs along each dimension into distinct buffers.
In this section, we present an allocation strategy providing the following benefits:

• Each tile can read/write all its inputs/outputs in a small number of contiguous accesses, usually providing enough contiguity that burst latency becomes negligible.
• Read and write volumes are strictly equal, with some outputs being stored in multiple buffers.

Our idea is to project tile faces along canonical dimensions, splitting the communication volume into distinct buffers such that both tile inputs and outputs map to contiguous segments in these buffers.

Example: 3D Iteration Space

Our decomposition is illustrated in Figure 5.3 for a 3D iteration space. The inputs to a tile are partitioned into 4 buffers: B_0, B_1, B_2 and B_aux. The "thickness" of each buffer B_0, ..., B_2 corresponds to the dependence depth h_d along that dimension. Each buffer is extended along exactly one dimension; this is necessary to enforce contiguity, and necessitates the introduction of buffer B_aux, of size h_0 × h_1 × h_2, to store the "corner" of the tile. Observe that each input corresponds to a full face from a neighboring tile, plus part of a face from a diagonal neighbor. For example, let (t, x, y) denote the current tile coordinates; buffer B_2 corresponds to outputs of tiles (t, x, y - 1) and (t - 1, x, y - 1). This means that we must be able to "shift" a window on the axis along which this face is extended, and that the corresponding regions must always correspond to contiguous memory segments. This requirement imposes that all input/output buffers B_i are stored in a single array. Also, it constrains the order of dimensions within these arrays: the projection dimension i must be the innermost, and the one along which we are extending the outermost. In this example, we have chosen to extend tile face 0 along dimension 1, tile face 1 along dimension 2, and tile face 2 along dimension 0.
The dimension orders for arrays B_0, B_1 and B_2 are thus:

B_0: 1, 2, 0
B_1: 2, 0, 1
B_2: 0, 1, 2

Since h_0 = 1 in many stencils, the effective order of dimensions in buffer B_0 is usually canonical; however, in buffer B_1, array dimensions are "rotated" compared to the natural order 0, 1, 2. This means that, while inputs can be read/written contiguously, they must unfortunately be buffered on chip, as the order in which they are used (produced) by a tile does not match the order in which they are read (written) from/to main memory. Finally, as with most memory allocations (see Section 4.7), one of our goals is to reduce memory consumption by re-using memory cells. This can easily be done at the face-buffer level using modular allocation.

Generalization

A natural question is whether the allocation strategy presented above for 3D iteration spaces can be extended to higher-dimensional cases. A first attempt is to replicate the partitioning above by similarly extending each face along exactly one dimension, and using one auxiliary buffer B_aux of size Π h_i. However, there are some issues with this approach. Indeed, let X = (x_0, x_1, ..., x_d) be the vector of coordinates of a tile. Suppose that face B_i is extended along dimension j. Then, B_i contains points from tiles X - u_i and X - u_i - u_j, where the u_k are unit vectors. B_aux contains dependencies from tile X - Σ u_k. However, by simply extending each face along one dimension, we miss dependencies from diagonal tiles distant from the current tile by more than 2 and less than d + 1 unit vectors. For example, in a 4D iteration space, we might typically miss dependencies from tile (x_0 - 1, x_1 - 1, x_2 - 1, x_3). Generalizing our memory layout to higher-dimensional spaces hence requires the introduction of additional buffers to handle such "corner" and "edge" cases and cover all dependencies. For a 4D iteration space, 3 additional buffers of size S_0 × h_0 × h_1 × h_2, ..., must be introduced.
Overall, we thus need 8 distinct memory arrays. At the time of this writing, we have only implemented our memory layout for the 3D iteration space case, and further work is required to properly handle higher-dimensional cases.

Implementation

Our family of accelerators has been implemented via a code generator producing C/C++ architectural descriptions. This code is optimized for HDL synthesis by Vivado HLS, and system integration (block diagram generation, hardware invocation) is handled automatically by SDSoC. The output of our generator thus consists almost exclusively of C/C++ code, for both the hardware and software parts. We chose not to produce HDL (VHDL or Verilog) descriptions because (i) fine-tuning for performance was not our primary goal and (ii) HLS tools are now mature enough that system-level issues impact performance more significantly than missed micro-optimizations. In fact, we believe that targeting a high-level language and relying on (pragma-driven) HLS optimizations significantly increased our productivity and the quality of our design. For example, the ability of Vivado HLS to derive multiple-input, multiple-output deeply pipelined operators (e.g., 87 stages for anisotropic diffusion) is essential to our methodology.

Code Generator Overview

Our experimental toolchain has been made freely available as open-source software. The current implementation is specialized to 2D stencils operating over 3D iteration spaces, and the implementations of full and partial tiling are distinct. However, there is no fundamental reason for this, and much of the code is in fact shared between both implementations; some engineering work is required to generalize them into a single generic tool. The inputs of the generator are the tile size, the unroll factor and the update formula. In the case of partial tiling, the tile size in the last dimension corresponds to the domain size.
All other domain dimensions (for both full and partial tiling) are left as runtime parameters. Based on these inputs, the generator produces:

• A software program, in charge of allocating/initializing memory buffers and orchestrating the execution of tile wavefronts.
• HLS code for the IP.
• A Makefile and synthesis scripts.

The Makefile can be used to compile the program to software code, and for simulating and validating the implementation against a reference (untiled) software implementation. It can also be used to invoke the synthesis scripts and generate a bootable SD card for the Zynq board. This SD card contains a bitstream with our IP, a minimal Linux distribution and a version of the generated program invoking our IP for the computation of tiles. Our code generator is implemented as a set of Python scripts. Generation of the code for scanning tile wavefronts is handled by the Integer Set Library [START_REF] Verdoolaege | isl: An integer set library for the polyhedral model[END_REF]. The HLS code is mostly based on a template.

HLS Code Overview

It is well known that, while the efficiency of HLS tools has significantly improved over the years, HLS-optimized code written by a seasoned hardware designer is still very different from functionally equivalent code written by a software programmer. The goal of this paragraph is to give the reader a hint of some of the challenges and code changes we had to implement to reach reasonable performance. Our guiding principle was to bring the performance of our IP as close as possible to the theoretical ideal of the roofline model. This was done by exploiting parallelism at all levels to hide latency, and by avoiding at all costs the introduction of idle states in the read/write/compute processes. Our architecture is realized as a set of HLS actors, implemented with the DATAFLOW directive. An extremely simplified architecture skeleton illustrating the use of this directive is shown in Figure 5.4.
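In the spirit of Figure 5.4, a drastically simplified stand-in for such a skeleton is shown below. This is our illustration, not the generated code: std::deque replaces hls::stream so that the sketch runs as plain C++, and the pragma is ignored by software compilers.

```cpp
#include <cassert>  // only needed for the usage check below
#include <deque>

// Simplified macro-pipeline skeleton: read, compute and write actors
// connected by FIFOs. In hardware, the DATAFLOW directive turns each
// call into an independent process running concurrently.
using Fifo = std::deque<float>;

void read_actor(const float* mem, int n, Fifo& out) {
  for (int i = 0; i < n; ++i) out.push_back(mem[i]);  // burst-like copy in
}

void compute_actor(Fifo& in, Fifo& out, int n) {
  for (int i = 0; i < n; ++i) {  // placeholder for the pipelined update
    float v = in.front(); in.pop_front();
    out.push_back(2.0f * v);
  }
}

void write_actor(Fifo& in, float* mem, int n) {
  for (int i = 0; i < n; ++i) { mem[i] = in.front(); in.pop_front(); }
}

void top(const float* src, float* dst, int n) {
#pragma HLS DATAFLOW
  Fifo f0, f1;
  read_actor(src, n, f0);
  compute_actor(f0, f1, n);
  write_actor(f1, dst, n);
}
```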
With this pragma, the HLS tool automatically infers task-level parallelism within a basic block, and implements tasks as independent HDL processes communicating through FIFOs or ping-pong buffers. Unfortunately, this directive presents several limitations:

• First, it is inherently oriented toward streaming access patterns and does not allow for feedback loops, which unfortunately prevents us from using it for handling reuse within a tile. While our code could have benefited from actor-based parallelism at different levels, we must restrict its use to the macro-pipeline of tiles.

• Secondly, it is not reliable for inferring useful parallelism when communicating through arrays, unless access patterns are extremely regular and the tool can prove that each element is read/written exactly once. Since we must deal with complex guards to handle tile boundaries, our code breaks this analysis and we must use explicit FIFOs for communication between actors.

Communication and computation are thus performed by independent processes to implement computation/communication overlapping. We rely crucially on the ability of the HLS tool to infer burst accesses when using pipelined loops to read/write from/to AXI4 bus master interfaces. Using this implicit mechanism, instead of explicit bursts performed with the memcpy() function (natively recognized by Vivado HLS), allowed us to reliably stream values to FIFOs in parallel with memory accesses. Full tiling uses a custom, tile-face-based memory layout (see Section 5.4). Accesses to different buffers are in fact performed in parallel by different read/write actors. Since some faces cannot be read/written in lexicographic order via burst accesses, we use additional HLS actors to re-order these values. This introduces additional re-ordering buffers, and thus constitutes a "hidden" cost of full tiling. Note, however, that this overhead is indirectly taken into account when deriving area models through linear regressions.
Since the DATAFLOW directive does not allow for loops in the actor graph, the execution datapath of the compute actor cannot be kept as "clean" as we would like. It actually performs three different tasks:

• Computing results.
• Reading inputs from input FIFOs (through additional iterations) and writing outputs to output FIFOs (when relevant).
• Managing reuse memory.

Our memory architecture for storing inputs and intermediary results is similar to that proposed by Cong et al. [START_REF] Cong | An optimal microarchitecture for stencil computation acceleration based on non-uniform partitioning of data reuse buffers[END_REF], with one memory buffer per iteration space dimension. Finally, to minimize the number of memory ports used and to simplify control overall, we implement an unrolled stencil as a stencil operating over a wider data type. This allows reading and writing from at most one FIFO per loop iteration, and significantly simplifies some analyses by the HLS tool.

Experimental Validation

We proceed with the experimental validation of our methodology. We emphasize that our motivation is not to derive the design with the highest throughput; indeed, we wish to accommodate different situations with specific performance requirements. We need to show that we are able to select the "right size" for a given context. In this section, our goal is thus:

• To establish the accuracy of the performance model for different design parameters.
• To show that design points can be successfully compared in terms of hardware resource usage.

Experimental Setup

Kernels

We validate our work on two different stencil kernels: Jacobi 2D and anisotropic diffusion. Jacobi 2D is a standard example of stencils that have relatively few operations, such as the heat equation, and is strongly bandwidth-constrained. Anisotropic diffusion is an iterative smoothing filter, which is much more compute-intensive. The characteristics of their update operations are summarized in Table 5.1.
Tools / Platform

We use our code generator (see Section 5.5) and Xilinx SDSoC 2016.3. Designs are synthesized for the ZC706 Zynq evaluation board (featuring an XC7Z045 chip). The target frequency for all designs was set to 142.86 MHz (i.e., the maximum frequency supported by default by SDSoC, which is below the architectural limit of 150 MHz for the AXI4 bus). We could have used multiple clock domains to synthesize the compute actor at a higher frequency than the communication actors; however, this is not currently natively supported by SDSoC, and would have required dropping to Vivado HLS and performing the integration manually in IP Integrator. Since (i) our goal is to validate our methodology as a whole, rather than fine-tuning the hardware for better performance, and (ii) this would likely provide only minor frequency improvements, we believe this compromise to be benign. We note that extending our models to the case where compute and communication actors are synthesized at different frequencies is straightforward.

Methodology

For each kernel, we used our code generator (see Section 5.5) to generate a series of designs using different tile sizes, unrolling factors and tiling modes (full and partial). These designs were then synthesized using Xilinx SDSoC 2016.3, targeting the ZC706 Zynq evaluation board with an XC7Z045 chip. For each design point, we sampled the number of CPU cycles it took to execute a set of tiles on the board with the Zynq Global Timer. This timer counts CPU cycles, which we then converted to FPGA cycles based on their relative frequencies (the default clock frequency of the ARM cores is 800 MHz). Contrary to much prior work, all performance numbers provided in this section were obtained from actual accelerator instances running on the target FPGA platform. Hence, our results account for all performance degradation issues related to the bus interconnect and/or external memory.
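The CPU-to-FPGA cycle conversion mentioned above is a simple frequency ratio; a minimal sketch (our helper, using the frequencies quoted in this section):

```cpp
#include <cassert>  // only needed for the usage check below

// Sketch: convert a CPU-clock cycle count into FPGA cycles by scaling
// with the ratio of the two clock frequencies (in MHz), rounding to
// the nearest integer.
long fpga_cycles(long cpu_cycles, double cpu_mhz, double fpga_mhz) {
  return (long)((double)cpu_cycles * fpga_mhz / cpu_mhz + 0.5);
}
```

For example, with the 800 MHz ARM cores and the 142.86 MHz fabric clock, 800 CPU cycles correspond to roughly 143 FPGA cycles.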
We adopt the following convention: a design is abbreviated as S0xS1xS2_UF. As mentioned in Section 5.3.2, the area cost is the sum of the utilization rates for Slices/BRAMs/DSPs, and takes a value between 0 and 300. The target board has 54650 slices, 545 BRAM tiles, and 900 DSPs.

Full Tiling

Jacobi 2D

We use four performance targets (1, 2, 4, and 8 GFlop/s) to illustrate the trade-offs exposed by our design knobs. A number of design points that reach the desired performance with different tile shapes were synthesized. Linear regression for the area model used the following four design points: 4x16x16_2, 8x16x32_4, 32x32x32_8, and 64x64x64_16. Figure 5.5 summarizes the area and throughput of the resulting designs, as well as those predicted by the model. Throughput is represented on the horizontal axis, while area is shown on the vertical axis. For readability reasons, area is displayed as a rank: design points are vertically ordered from the least costly to the most expensive in terms of resource usage. One thing that is clearly visible in the figure is that the performance model is quite accurate: almost all points are on the target GFlop/s predicted by the model. Moreover, performance results were extremely stable across repeated experiments. This is not necessarily surprising, as FPGAs are mostly deterministic; however, bus contention issues and access to shared memory could have introduced significant variability, which was not the case. The largest divergence from the model is 7% (16x120x240_12); as these divergences were also highly reproducible, this suggests that the performance model could be further improved. The area results are also mostly in agreement with the model. One unexpected observation was that some design points swapped ranks compared to the predictions of the linear area model.
It was found that these differences were due to power-of-two tile sizes leading to much simpler address computations (especially modulo operations) in the communication actors, and thus using fewer DSPs. This favors power-of-two tile sizes over slightly smaller shapes using less on-chip memory. However, it is mostly an artefact of the HLS tool: since tile sizes are known at HDL-synthesis time, these operations could be implemented more efficiently in FPGA logic, probably reducing or eliminating the occurrence of this phenomenon. To minimize area cost, the design knobs we provide suggest first setting the unrolling factor to the smallest value that can meet the performance requirement, and then adjusting the tile size to "feed" the datapath at a sufficient rate. However, we observe that in some use cases, using a slightly higher UF may be beneficial for area cost. Design points such as 8x60x180_6 and 16x120x240_12 are examples of such cases. We explain this phenomenon by the diminishing returns from increasing the tile sizes. Indeed, increasing tile size improves performance in two different ways:

• by improving locality;
• by reducing the overhead due to the halo regions.

Once the tile size is large enough to keep the datapath busy (i.e., the accelerator is no longer I/O-bound), further performance comes only from reductions in overhead. The above designs are in such situations, where the performance target is at the limit of what can be achieved with the given UF, such that large tile sizes have to be used to meet the goal. Our performance model can identify these situations and point to better design points. We report the resource usage of the best performing designs for each target performance in Table 5.2.

Anisotropic Diffusion

We use four target performance levels: 4, 8, 12, and 16 GFlop/s. A number of design points that reach the desired performance with different tile shapes were synthesized.
The area model is learnt from the following points: 2x16x32_1, 4x16x32_2, and 16x16x32_4. The area-throughput trade-off is summarized in Figure 5.6, and Table 5.3 reports the detailed resource usage for the best performing designs. We do not repeat the discussion from the Jacobi 2D case; all of it applies to anisotropic diffusion as well. One key difference is that BRAM usage is much less significant compared to Jacobi 2D. This is because the arithmetic intensity of this kernel is high (37 floating-point operations, including 9 exponentiations), and not much data locality is needed to keep the accelerator busy.

Partial Tiling

Partial tiling is an attractive alternative to full tiling for small problem sizes. To illustrate this, we implemented this strategy and compared it to fully tiled cases. Figure 5.7 illustrates the trade-offs between the two approaches. We selected the least expensive design meeting different performance targets for both full and partial tiling. In the case of partial tiling, observe that the area cost actually depends on the domain size, since tiles span an entire dimension of the domain. Consequently, only one result is reported for full tiling, while in the case of partial tiling, several designs were generated for different domain sizes. For both kernels, observe that partial tiling always provides a lower area cost for small problem sizes, but the situation is reversed for larger domains. This can be explained by the growing buffer requirements of partial tiling. This reversal also occurs sooner as performance requirements increase, since larger tile sizes must be used; partial tiling offers fewer degrees of freedom for adjusting tile size to increase the compute/IO ratio. Partial tiling is thus beneficial for relatively small problem sizes with moderate performance requirements. Note that, for anisotropic diffusion, it scales to larger problem sizes for a given performance target.
This is a consequence of choosing GFlop/s as the performance metric: since anisotropic diffusion is much more compute-intensive than Jacobi, more operations are performed per update, and thus per byte transferred from main memory. Smaller tile sizes therefore suffice to meet a given GFlop/s target.

Discussion

In this section, we conclude with a discussion of earlier work and some additional considerations.

Comparison with Earlier Work

The main contribution of our work is a family of designs that covers a wide range of performance requirements, accompanied by a performance model to quickly narrow down on the most relevant design points. It would be interesting to directly compare our design with earlier work experimentally. However, this is not practical. Indeed, most implementations are not freely available, and often target different FPGA platforms. Fortunately, a large subset of prior work is within, or comparable to, the family of designs described in this chapter. We may hence compare their relative system-level characteristics. Untiled variants [START_REF] Cong | An optimal microarchitecture for stencil computation acceleration based on non-uniform partitioning of data reuse buffers[END_REF][START_REF] Kunz | An FPGA-optimized architecture of horn and schunck optical flow algorithm for real-time applications[END_REF][START_REF] Luzhou | Domain-specific language and compiler for stencil computation on FPGA-based systolic computational-memory array[END_REF] may be viewed as designs with the tile size in the time dimension set to 1, and the remaining dimensions equal to the problem size. They can give similar performance to other designs for very small problem sizes, where the entire data fits on chip; however, for larger instances, intermediary results must be "spilled" to main memory and temporal locality cannot be exploited. To demonstrate the ineffectiveness of such approaches compared to tiled variants, we have implemented untiled stencils.
Attaining 1 GFlop/s with the Jacobi 2D kernel on a 256 × 256 image uses 25% of the available BRAM, and the limit is reached with 512 × 512 images, using 95% of the BRAMs. Natale et al. [START_REF] Natale | A polyhedral model-based framework for dataflow implementation on FPGA devices of iterative stencil loops[END_REF] propose to scale computations of iterative stencils to multi-FPGA systems by replicating and chaining Streaming Stencil Timesteps (SST) over several FPGAs. Each SST is in charge of computing one timestep over the entire grid. Within a timestep, intermediary results are kept in FIFOs of optimal size, but are also forwarded to the next SST. Compared to our design, this approach provides only limited control over throughput and bandwidth requirements. Throughput may be increased by increasing the number of SSTs; however, this leads to a corresponding increase in local memory requirements. One may view this strategy as a combination of (i) temporal tiling with (ii) an unrolling of the outermost loop dimension. Chaining multiple FPGAs allows this architecture to scale to arbitrary throughput requirements, but the absence of tiling in the spatial dimensions means that it cannot handle arbitrary problem sizes. We did not implement overlapped tiling, because the overhead due to redundant computations is in most cases too significant, as revealed by a quick analysis. For example, consider the Jacobi 2D example, where the data dimensions are tiled by S × S. Overlapped tiling over two time steps requires the halo of the S × S tile, extended by one point in each direction, to be redundantly computed. The amount of redundant computation grows as the tile size along the time dimension is increased. Generalizing to a tile size of S_t + 1 in the time dimension, the overall computational volume is:

Σ_{x=1}^{S_t} ((S + 2x)² - S²) = Σ_{x=1}^{S_t} (4Sx + 4x²)

which can be further simplified to:

2S S_t(S_t + 1) + (2/3) S_t(S_t + 1)(2S_t + 1)

The relative importance of the overhead depends on the non-redundant computation (S_t × S²).
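The quick analysis above can be checked numerically. This small sketch evaluates the sum directly and against its closed form, and expresses the overhead relative to the useful computation S_t × S²:

```python
# Overhead of overlapped tiling for the Jacobi 2D example:
# S x S data tiles, S_t extra time steps.  Illustrative check of the
# closed form derived in the text.

def redundant_volume(S, St):
    """Redundantly computed points: x steps below the top of the tile,
    the S x S region must be extended by x points in each direction."""
    return sum((S + 2 * x) ** 2 - S ** 2 for x in range(1, St + 1))

def closed_form(S, St):
    # sum of 4*S*x is 2*S*St*(St+1);
    # sum of 4*x^2 is (2/3)*St*(St+1)*(2*St+1)  (exact in integers)
    return 2 * S * St * (St + 1) + 2 * St * (St + 1) * (2 * St + 1) // 3

def relative_overhead(S, St):
    """Redundant work relative to the useful computation St * S^2."""
    return redundant_volume(S, St) / (St * S * S)
```

For instance, `relative_overhead(12, 12)` evaluates to ≈ 3.67, which reproduces the 367% overhead quoted for S_t = S.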
When S_t = S, the overhead is more than three times larger than the actual computation (367%). Tiles must hence be kept thin and flat to reduce this overhead; but doing so also decreases temporal reuse, and is not attractive on our platform.

Additional Considerations

We have extensively discussed the trade-offs of different design choices in Section 5.6. Many other factors, if taken into account, could also influence the trade-off, such as the synthesis frequency of the different actors. In this work, all designs were synthesized at the same frequency (143 MHz) on a single clock domain. The communication and computation parts could use independent clocks, allowing the compute actor to reach higher frequencies than the 150 MHz limit of the AXI bus. This change would unveil another design dimension that should be considered when selecting a design for a particular context. We have not discussed how to handle domain boundaries. In our implementation, boundary and incomplete tiles are currently handled in software, in parallel with "full" tiles in the same wavefront. For small domains and/or large tile sizes, this can introduce a large performance overhead, as the software is much slower than the accelerator and the number of "full" tiles is low: the software then becomes a performance bottleneck. Moreover, steady-state performance is not attained on small tile wavefronts. This problem is not fundamental, as the domain could easily be padded with "dummy" iterations to execute all tiles in hardware. Padding has an impact on overall performance that depends on tile size, problem size, and tiling strategy. For example, we would have 10.09% dummy iterations with 4 × 16 × 16_2 tiles on a 50 × 512 × 512 domain. Note, however, that this overhead decreases with partial tiling for the same performance target, dropping to 2.63% with 2 × 16 × 512_2 tiles on the same domain.
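A simplified padding model gives a feel for where the dummy iterations come from. Note that this is only a rectangular approximation: the actual iteration space is skewed before tiling, so the 10.09% and 2.63% figures quoted above are larger than what this formula alone predicts:

```python
from math import ceil

def padding_overhead(domain, tile):
    """Fraction of dummy iterations when each dimension of a
    rectangular domain is padded up to a multiple of the tile size.
    Simplified model: skewing of the iteration space is ignored."""
    padded = real = 1
    for extent, size in zip(domain, tile):
        padded *= ceil(extent / size) * size
        real *= extent
    return padded / real - 1

# 50 x 512 x 512 domain with 4 x 16 x 16 tiles: only the time
# dimension (50 -> 52) requires padding in this approximation,
# giving a 4% overhead before skewing is taken into account.
overhead = padding_overhead((50, 512, 512), (4, 16, 16))
```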
In this work, we only considered single-field stencils with Jacobi-style dependences operating on 32-bit floating-point data. Our generator could easily be extended to Gauss-Seidel dependence patterns (with an additional skewing) and to multi-field stencils such as FDTD. The use of fixed-point or custom floating-point arithmetic would open a whole avenue of trade-offs involving accuracy, giving FPGAs a real advantage over less flexible platforms such as GPUs. All these factors influence throughput and/or area, and could impact the specific trade-offs and performance modeling by introducing additional design knobs.

Conclusions

In this chapter, we discussed a design methodology for FPGA stencil accelerators based on a tunable family of designs and accurate performance/area models. Our results show that different constraints call for different implementations, a fact that is not acknowledged by much of the earlier work. For example, partial tiling provides benefits over full tiling for some problem sizes but not others, as evidenced by both our performance models and our experimental validation. Our methodology is based on the tiling transformation, and our implementation targets HLS tools for their ability to turn well-optimized high-level code into efficient hardware. Targeting C/C++ instead of HDL code frees us from the need to implement many optimizations ourselves, by re-using those provided by the HLS tool. Our experience suggests that the use of domain-specific generative approaches is an effective strategy. Finally, we have shown that high-level system modeling, using reasonably simple performance models based on a system-level view, was sufficient to predict performance with excellent accuracy. We believe that the overall approach of providing a few, carefully chosen design knobs and simple performance models to drive the exploration could be generalized to other classes of accelerators.
Chapter 6

Conclusion

Review of our Contributions

In this thesis, we present methodological and technical contributions to the design of hardware accelerators. As discussed in Chapter 1, such architectures are expected to meet strict resource or performance constraints. Often, these requirements can only be satisfied by implementing fine-grained trade-offs between hardware resources (e.g., power, bandwidth and area) and performance metrics (e.g., throughput, latency and accuracy). However, the design space is usually extremely large: identifying such trade-offs can be a challenging task. A core idea of this work is to tackle this complexity by providing analytical models to drive the exploration.

Performance-Accuracy Trade-Offs

In the first half of this manuscript, we focus on accuracy. Trade-offs between accuracy and performance constitute a large field of opportunities for hardware designers. A classical example is the use of fixed-point arithmetic instead of floating-point to cut down hardware cost and reduce power consumption. Naturally, accuracy may not be reduced indefinitely: the implementation must satisfy application-specific accuracy constraints, which should be validated/enforced during design-space exploration. Determining whether a given fixed-point configuration satisfies accuracy constraints happens to be a difficult problem in general. Two main classes of techniques are proposed in the literature: techniques based on simulations, and analytical techniques. Simulation is the most widely applicable, and is also very effective given enough real-world inputs. However, analytical models are much faster to evaluate than simulations, and can thus be used to explore a large number of design points in a short time, possibly identifying better solutions. The main challenge remains their limited applicability. Prior to our work, analytical techniques could only handle one-dimensional systems.
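The statistical foundation of such analytical techniques is the classical pseudo-quantization-noise result: rounding a signal to a fixed-point grid of step q adds a noise of power q²/12. The standalone Python sketch below (illustrative only, not part of any conversion tool) verifies this empirically:

```python
import random

def quantize(x, frac_bits):
    """Round-to-nearest on a fixed-point grid with 2**-frac_bits
    resolution."""
    q = 2.0 ** -frac_bits
    return round(x / q) * q

random.seed(0)
frac_bits = 8
q = 2.0 ** -frac_bits

# Quantize a large sample of inputs and measure the error power.
xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
errors = [quantize(x, frac_bits) - x for x in xs]
measured = sum(e * e for e in errors) / len(errors)

# PQN model for round-to-nearest: error uniform on [-q/2, q/2].
predicted = q * q / 12.0
```

With 100,000 samples, the measured power agrees with q²/12 to within a few percent.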
In Chapter 3, we extend earlier methods to multi-dimensional algorithms, such as image filters. We focus on Linear, Shift-Invariant (LSI) filters, a generalization of the 1D Linear Time-Invariant filters supported by other approaches. We propose a source-level design flow, where the architecture is specified as a C/C++ description. Our main challenges are thus:

• From a given C/C++ algorithm, how can we retrieve a compact, mathematical representation of the underlying linear system?

• From that representation, how can we derive a reliable accuracy model?

The first challenge is addressed within the polyhedral model. We choose to represent LSI algorithms as Systems of Uniform Recurrence Equations (SUREs). SUREs may also be seen as Multi-Dimensional Flow Graphs (MDFGs), a generalization of the Signal-Flow Graphs used as an intermediate representation in earlier approaches. A major difference is that, contrary to SFGs, MDFGs/SUREs do not impose a "canonical" iteration order for each dimension. Complex, recursive image filters, that scan the image in all directions, may thus be represented. We use dataflow analysis to transform the program into a system of affine recurrence equations. Before this system can be recognized as a SURE, some amount of transformation is required, including linearization/uniformization of dependences. For the second problem - inferring accuracy models - we propose two different approaches. Both boil down to computing the integral and L2 norm of the impulse response, but from dual points of view:

• In the time-domain approach, we derive these sums by unrolling/evaluating the recurrence equations defining the system.

• In the frequency-domain approach, we use algebraic properties to derive the transfer functions of the system. We then compute the sums above from the frequency response.
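The role of the L2 norm can be illustrated on a toy 2D LSI filter: for zero-mean white noise of power σ² injected at the input, the output noise power is σ² · ||h||₂² (the integral of h only matters for the biased part of the noise, which is zero here). The sketch below is illustrative; the 3×3 kernel coefficients are hypothetical and not taken from the thesis tool:

```python
import random

# Illustrative 3x3 smoothing kernel (hypothetical coefficients).
H = [[0.025, 0.108, 0.025],
     [0.108, 0.469, 0.108],
     [0.025, 0.108, 0.025]]

# Squared L2 norm of the impulse response.
l2_norm_sq = sum(c * c for row in H for c in row)

random.seed(1)
N = 200
q = 2.0 ** -10
sigma2 = q * q / 12.0  # power of round-to-nearest quantization noise

# Zero-mean white noise field, as injected by a quantization source.
noise = [[random.uniform(-q / 2, q / 2) for _ in range(N)]
         for _ in range(N)]

def output_noise_power(noise, H):
    """Empirical power of the noise after 2D convolution by H
    (image borders are skipped)."""
    n = len(noise)
    acc, cnt = 0.0, 0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            y = sum(H[a][b] * noise[i + a - 1][j + b - 1]
                    for a in range(3) for b in range(3))
            acc += y * y
            cnt += 1
    return acc / cnt

measured = output_noise_power(noise, H)
predicted = sigma2 * l2_norm_sq  # sigma^2 * ||h||_2^2
```

The Monte-Carlo estimate matches the analytical prediction closely, which is exactly the quantity the time-domain and frequency-domain approaches compute without any simulation.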
For the latter case, we propose a much simplified and more efficient version of the algorithm proposed by Menard et al. [START_REF] Menard | Analytical fixed-point accuracy evaluation in linear time-invariant systems[END_REF]. Experiments show that our models are fast to derive. Their effectiveness is demonstrated on real-world input: results show an excellent match between measured and predicted accuracy degradations. Finally, we show how the frequency-domain approach could be used, before WordLength Optimization, to handle the quantization of coefficients, a problem largely dismissed by other works.

Implementation Trade-Offs for Stencils on FPGAs

In the second part of this thesis, we focus on implementation trade-offs for iterative stencil computations. Stencil computations form a widespread computational pattern used in many applications, from scientific simulations to embedded vision. Applications vary greatly in terms of constraints, domain size, dependencies and computational intensity. At a high level, the performance of stencil implementations is mostly determined by computational throughput and memory performance. The tiling transformation, discussed in Chapter 4, is an essential tool to improve both aspects, by enhancing memory locality and enabling parallelism at multiple levels. In Chapter 5, we present a systematic design methodology for implementing stencil computations on FPGAs. Our approach is based on a flexible design template, rooted in the tiling transformation, and featuring several design knobs:

• Maximum throughput can be controlled by adjusting the unrolling factor of the core datapath. This allows trade-offs between throughput and area.

• Larger tile sizes can be used to reduce bandwidth requirements at the cost of increased memory usage.

• The iteration space can be tiled in only some dimensions to further reduce bandwidth usage.
The price is a partial loss of control over local memory size, as it is proportional to the size of the domain along the untiled dimensions. In addition, we provide simple performance models, derived from a high-level analysis, to serve as a basis for Design-Space Exploration. To validate our design and models, we implemented our approach as a code-generation flow targeting Vivado SDSoC. For multiple performance targets, we identified interesting design points using our performance models. We then compared predicted values against measured performance/area. Our experiments show that our performance model is extremely accurate and that our resource usage model is accurate enough to identify the most interesting design points. Our models can thus serve as a basis for systematic design space exploration. Finally, during this work, we realized the importance of memory contiguity to reduce latency and truly benefit from burst accesses on FPGA platforms. We thus devised a custom memory layout for our use case, currently only implemented for 2D-data stencils. However, it is still unclear whether this layout could be efficiently generalized to higher-dimensional stencils.

Perspectives

Our work opens several perspectives. An obvious direction would be to extend our work on analytical accuracy models to a wider class of programs, such as linear, non-shift-invariant algorithms or arbitrary non-linear filters made of "smooth" operations. A more interesting research direction might be to support non-polyhedral programs. Some of these algorithms, such as the Fast Fourier Transform, have regular, static control flow, yet cannot be represented compactly with affine dependences. It is unclear how quantization noise propagates in such algorithms, and whether unrolling can be avoided.
One could also imagine the use of analytical accuracy models in other contexts, for example to handle round-off errors in floating-point programs, or to predict the impact of transient ("soft") errors and approximate operators on the correctness of the program. A major difficulty is that, in such cases, errors do not satisfy the same statistical assumptions as fixed-point quantization noise. For example, the latter two would probably show highly discontinuous error distributions and correlation between error sources. Our work on stencils relies on a clear understanding of the major factors affecting the performance of stencil computations, as illustrated by the Roofline model. Similar observations also apply to many other algorithms. A more generic approach, targeting a broader class of algorithms, could probably be derived. Properly extending our contiguity-optimizing memory layout beyond 2D stencils is another research direction. While this problem is important in practice to fully utilize the available bandwidth, it has not been widely investigated. One issue is to limit the number of additional memory buffers required to communicate tile faces. Another is to reduce the need for re-ordering data in memory. We suspect that contiguity and sequentiality are both desirable, but essentially incompatible properties: in higher-dimensional algorithms, one cannot have both in general. However, this question deserves proper investigation. Another idea would be to use tile-face based halo decomposition to reduce communication volume, for example by shrinking the wordlength of communicated data (thus introducing round-off errors at tile boundaries) or by applying compression at the tile level. Finally, the interaction of stencils and accuracy is unclear, and should be investigated. The ability of FPGAs to handle arbitrary wordlengths is one of their major selling points, as significant reductions in area can be achieved when applications tolerate some level of round-off noise.
Being able to properly characterize the impact of (for example) fixed-point encodings on the accuracy of stencils could allow for very effective trade-offs.

Conclusion

The main challenge of designing hardware accelerators is the huge design space of possible implementations for each application, especially when design dimensions such as accuracy are considered. Because different applications have different needs, a single solution cannot be expected to satisfy all constraints in every case. In this thesis, we defend a systematic, model-based approach to design. We have demonstrated the efficiency of this strategy on two different problems (stencils and wordlength optimization). We believe that, as accelerators become more and more widespread, the use of domain-specific models will become essential to understand the implications of each design choice on the behavior of the system. Our work is a step in that direction.
The current state-of-the-art experiments in gamma-ray astronomy are the Fermi-LAT in space and the ground-based H. E. S. S., VERITAS and MAGIC experiments. The monitoring of the very-high-energy gamma-ray emitting sources indicates the diverse physics taking place in astrophysical environments. To study the most energetic form of radiation and the most violent phenomena taking place in the Universe, individual source analyses are important. BL Lac objects, a subcategory of active galaxies, are the most abundant source class detected both in the GeV and TeV energies, while pulsar wind nebulae represent the most numerous identified source class in the galactic plane. Both source classes exhibit gamma-ray flux variations. In this thesis, the gamma-ray variability of the BL Lac object B2 1215+30 is presented with Fermi-LAT data. A bright flare, with 16 times the average quiescent flux, was detected in February 2014. In collaboration with the VERITAS experiment, the gamma-ray variability was investigated over five decades in energy. This work resulted in the detection of a luminous flare, seen simultaneously in GeV and TeV energies by both instruments. These results were used to set constraints on the size of the emission region and on the Doppler factor of the relativistic jet. Additionally, the long-term variability was studied using nine years of Fermi-LAT data. This brought out new flux enhancements, which characterize the long-term lightcurve from 100 MeV up to 500 GeV. Other striking characteristics are a steady linear increase of the yearly average flux, together with a hardening of the spectral index. The investigation of the lightcurve indicates a hint of quasi-periodic behavior with a period of around 1083 ± 32 days.
This work includes spectrum and flux variability studies for the well-studied but ever-surprising Crab Nebula at TeV energies with more than a decade of H. E. S. S. observations. The spectrum measured in this work goes from 280 GeV to 62 TeV, making this the first measurement that extends to such very high energies. Considered as a standard candle for ground-based gamma-ray astronomy, the Crab Nebula is also used for calibration and instrument studies. The detection of GeV flares by the Fermi-LAT was unexpected and motivated the search for flux variations at TeV energies with the H. E. S. S. experiment. The position of the Crab Nebula in the northern hemisphere makes this investigation challenging due to the large systematic uncertainties introduced by the non-optimal observation conditions. This work showed that the systematic uncertainties can be reduced by taking into account the atmospheric transparency. No flux variations were found at energies above 1 TeV from the H. E. S. S. I data. A flare reported by the Fermi-LAT in October 2016 was also investigated. This analysis showed the GeV flare lasting for one month, while the flux with H. E. S. S. II had an excess variance of 15%. This should be compared to the commonly quoted 20% systematic uncertainty of the H. E. S. S. experiment.

Résumé

High-energy astrophysical gamma rays are the messengers of the non-thermal Universe. They are the product of the acceleration of charged particles, a phenomenon that takes place in a great many locations in the Universe (see for example the Galactic Center in Figure 0.1). In particular, they carry important information about the most powerful particle acceleration mechanisms in the extreme environments of the Universe.
Les processus qui accélèrent ces particules énergétiques dans l'Univers peuvent être étudiés indirectement par la détection de rayons gamma, produits par les interactions avec le milieu interstellaire (ISM) ou avec les champs de rayonnement. Une introduction à l'astronomie gamma est présentée au Chapitre 1 de ce manuscrit. Cela couvre une introduction aux rayons gamma cosmiques, aux mécanismes possibles responsables de la production de rayons gamma de haute énergie et un résumé des sources astrophysiques émettant aux très hautes énergies. Les rayons gamma de très haute énergie émis par différents processus dans l'Univers sont étudiés par des détecteurs spatiaux et des détecteurs au sol. Comme l'atmosphère est opaque aux rayons gamma, empêchant une mesure directe des propriétés de ce rayonnement à partir du sol, le rayonnement gamma de haute énergie provenant de l'Univers peut donc seulement être étudié directement par des satellites envoyés au-dessus de l'atmosphère. Ces instruments sont conçus pour détecter et localiser des sites d'émission de rayons gamma de haute énergie en utilisant les propriétés d'interaction des rayons gamma avec des matériaux denses. C'est une technique robuste pour reconstruire l'énergie, la direction et le temps d'arrivée des rayons gamma avec des énergies allant du MeV au GeV. Aux énergies supérieures au téraélectronvolt (TeV), les flux de rayons gamma sont faibles et des détecteurs avec de grandes surfaces de collection sont nécessaires. A ces énergies, seules les expériences au sol offrent cette possibilité. Lorsque les rayons gamma et les rayons cosmiques de très haute énergie atteignent la Terre et interagissent avec les molécules de l'atmosphère, ils déclenchent une cascade de particules secondaires. L'atmosphère servant de calorimètre pour le dépôt d'énergie d'une particule de très haute énergie, les détecteurs au sol peuvent détecter les particules secondaires produites au cours de ce processus.
La lumière Cherenkov produite pendant les cascades atmosphériques peut être détectée avec la technique d'imagerie Cherenkov atmosphérique (IACT). Les principes de base de la détection des rayons gamma dans l'espace et au sol sont décrits au Chapitre 2. Les expériences principales actuellement en service en astronomie gamma sont le satellite Fermi-Large Area Telescope (LAT) et les expériences au sol telles que H.E.S.S., VERITAS et MAGIC (voir Figure 0.2). La génération actuelle des détecteurs a ouvert une nouvelle fenêtre pour étudier l'émission gamma dans l'Univers. En particulier, la génération actuelle des télescopes Cherenkov a amélioré la compréhension du ciel à haute énergie. Pour le travail présenté dans ce manuscrit de thèse, les données de Fermi-LAT et de l'expérience H.E.S.S. sont utilisées. Le satellite Fermi-LAT observe le ciel entier toutes les trois heures dans la gamme d'énergie allant de 30 MeV à plus de 500 GeV depuis juin 2008 et est décrit dans la première partie du Chapitre 2. L'expérience H.E.S.S., située dans l'hémisphère sud en Namibie, détecte quant à elle les rayons gamma de très haute énergie de quelques dizaines de GeV à des centaines de TeV depuis 2003. L'expérience H.E.S.S. est décrite plus en détails dans le Chapitre 3 de ce manuscrit. La combinaison des données des deux expériences couvre plus de cinq ordres de grandeur en énergie et aide à étudier l'émission de différentes sources astrophysiques. La surveillance des sources de très haute énergie révèle une physique diversifiée se déroulant dans différentes parties du ciel. Afin d'étudier la forme la plus énergétique de radiation et les phénomènes les plus violents qui se déroulent dans l'Univers, l'analyse des sources individuelles est importante.
Les BL Lacs, un type de galaxies actives, constituent la classe de sources extragalactiques la plus abondante détectée aux énergies du GeV au TeV, tandis que les nébuleuses de vents de pulsar constituent la classe la plus peuplée dans le plan galactique. Ces deux types de sources ont des émissions variables de rayons gamma. Les blazars constituent la grande majorité des sources détectées en rayons gamma. Les observations multi-longueurs d'onde des blazars montrent qu'ils sont variables dans tout le spectre électromagnétique. Les observations révèlent que leur émission est caractérisée par des événements de haute luminosité avec une variation rapide de flux qui ont lieu dans de petites régions d'émission. Les variations de flux qu'ils subissent ont un comportement différent à haute énergie, offrant une nouvelle opportunité d'étudier et de caractériser l'émission gamma de ces sources. Une partie du travail présenté dans ce manuscrit est consacrée à l'étude de l'émission d'une source de cette classe : le blazar B2 1215+30, qui représente un cas intéressant pour l'étude de l'émission de haute énergie. Cette étude tire parti des données publiques obtenues avec le satellite Fermi-LAT. Une grande variation de flux, détectée par Fermi-LAT en février 2014, est accompagnée simultanément par une éruption très lumineuse observée au TeV par l'expérience VERITAS (Very Energetic Radiation Imaging Telescope Array System). VERITAS est un réseau de quatre télescopes Cherenkov situé à l'observatoire Fred Lawrence Whipple dans le sud de l'Arizona, sensible aux rayons gamma entre 0,1 et 30 TeV. En collaboration avec l'expérience VERITAS, la variabilité du flux en rayons gamma a été utilisée pour établir des contraintes sur la taille de la région d'émission et sur le facteur Doppler des jets relativistes.
Les observations multi-longueurs d'onde prises quasi simultanément au cours de l'épisode d'observation 2014 ont été utilisées pour modéliser et comprendre l'émission de haute énergie. Deux scénarios ont été considérés pour expliquer l'émission de B2 1215+30 : la partie la plus élevée de la distribution spectrale d'énergie peut être expliquée par les modèles synchrotron self-Compton ou Compton externe. Dans le scénario « Synchrotron Self Compton », le rapport entre les luminosités synchrotron et inverse-Compton a été utilisé pour estimer le champ magnétique. Les observations à long terme avec le Fermi-LAT ont ouvert une nouvelle fenêtre pour étudier et surveiller l'émission gamma des objets du ciel sur le long terme. La variabilité à long terme du blazar B2 1215+30 est également étudiée, en utilisant près de neuf ans de données de Fermi-LAT.

Le spectre de la Nébuleuse du Crabe est mesuré par H.E.S.S. pour des énergies comprises entre 280 GeV et 62 TeV. Ceci est la première mesure qui s'étend jusqu'à des énergies aussi élevées. Les variations de flux au GeV ont motivé la recherche de variations de flux au TeV en utilisant les données de l'expérience H.E.S.S. La position de la Nébuleuse du Crabe dans l'hémisphère nord et la localisation de H.E.S.S. en Namibie rendent cette approche complexe en raison des importantes erreurs systématiques introduites par des conditions d'observation non optimales. Le travail sur la Nébuleuse du Crabe a montré que la prise en compte de la transparence de l'atmosphère pour l'étude de l'évolution du flux avec le temps résulte en une réduction des effets systématiques. En tenant compte de cet effet, aucune variation de flux supérieure à 20 % n'a été observée à des énergies supérieures au TeV dans toutes les données de H.E.S.S. I. Une autre éruption au GeV, signalée par le Fermi-LAT en octobre 2016 par télégramme astronomique, a été étudiée avec H.E.S.S. II. Cette analyse a montré que l'éruption au GeV a duré un mois, tandis que le flux mesuré par H.E.S.S.
a un excès de variance de 15 %. Cela peut être comparé à l'incertitude systématique de 20 % considérée en général par H.E.S.S. Les résultats de l'étude avec H.E.S.S. I sont présentés au Chapitre 5, et ceux de H.E.S.S. II au Chapitre 6 de ce manuscrit de thèse.

Chapter 1
High-Energy Gamma-Ray Astronomy

Astrophysical high-energy gamma rays are the messengers of the non-thermal Universe. They are secondary products of charged particle acceleration, a phenomenon taking place all over the Universe. In particular, they carry important information on the most powerful particle acceleration mechanisms in extreme environments. High-energy particles from outer space hit the Earth's atmosphere continuously. The up-to-date cosmic-ray energy spectrum is measured from below 10^9 eV up to 10^20 eV. Measurements of cosmic rays over twelve orders of magnitude in energy reveal that the majority of cosmic rays are high-energy protons and nuclei. Their emission sites are difficult to locate as they are deflected by turbulent magnetic fields in the Galaxy and arrive isotropically upon the atmosphere. Presently, hunting for cosmic-ray sites and their acceleration mechanisms relies on gamma rays, which preserve their direction information. High-energy gamma rays are produced in the most extreme environments of the Universe from the interaction of energetic charged particles with radiation and magnetic fields and/or from hadronic interactions. The creation of gamma rays takes place via either leptonic or hadronic processes, while the full acceleration scenario of the charged particles is still a matter of debate. Identifying the high-energy sources which accelerate cosmic rays up to the PeV range is important if we wish to solve the cosmic-ray origin problem. Hence, high-energy gamma rays offer a unique possibility to understand and complete the "puzzle" of the origin and acceleration mechanisms of cosmic rays. This chapter gives an introduction to the field of gamma-ray astronomy.
A short historical overview on cosmic rays and on the early days of gamma-ray astronomy is given in Section 1.1. The possible mechanisms responsible for the production of high-energy gamma rays and the acceleration of charged particles in astrophysical environments are described in Section 1.2 and in Section 1.3. A brief introduction to the astrophysical sources established as gamma-ray emitters is given in Section 1.4 and a short summary is given in Section 1.5.

Introduction to Cosmic Gamma Rays

One Century of Cosmic Rays

In the early years of the 20th century, Victor Hess used a sequence of ten balloon ascents to measure the level of ionizing radiation at different altitudes (Figure 1.1a). He finalized his study in 1912, using the measurements of three electrometers in a free balloon flight to an altitude of 5300 meters. About the results of his measurements, Hess wrote (from a translation by [START_REF] Bertolotti | Celestial Messengers[END_REF]): "Immediately above ground the total radiation decreases a little...at altitudes of 1000 to 2000 m there occur again a noticeable growth of penetrating radiation. The increase reaches, at altitudes of 3000 to 4000 m, already 50 % of the total radiation observed on the ground. At 4000 to 5200 m the radiation is stronger by (producing) 15 to 18 (more) ions than on the ground."

Figure 1.1: (a) Victor Hess in one of his balloon ascents. Picture taken before take-off of one of his famous flights that took place between 1911 and 1913. Image courtesy [START_REF] Wikipedia | The Free Encyclopedia[END_REF]. (b) Present-day spectrum of cosmic rays spanning over twelve orders of magnitude in energy as measured by several independent experiments. The majority of the spectrum follows a power-law over twelve orders of magnitude in energy with a spectral index of 2.7. The two features of the cosmic-ray spectrum, known as the "knee" and the "ankle", are seen around 10^15 eV and 10^17 eV respectively.
Image courtesy [START_REF] Bietenholz | The most powerful particles in the Universe: a cosmic smash[END_REF]. In the conclusions of the study, Hess wrote: "The results of my observation are best explained by the assumption that a radiation of very great penetrating power enters our atmosphere from above." These results were confirmed later and honored with the Nobel Prize in Physics in 1936 for the discovery of cosmic rays [START_REF] Nobelprize | The Nobel Prize in Physics[END_REF]. The cosmic radiation, or as Hess referred to it, the "radiation of very great penetrating power", has become a very powerful research tool leading to important new results, as well as new problems to understand and solve about the composition of matter in the Universe. Ever since its discovery, physicists have been trying to understand the origin of this radiation. In the 105 years since, many experimental and theoretical contributions have broadened the knowledge on the cosmic-ray spectrum and its composition. Observations from numerous successive experiments reveal that cosmic rays are mainly high-energy protons or nuclei whose flux follows a power-law over more than ten orders of magnitude in energy (Figure 1.1b). The overall measured cosmic-ray spectrum exhibits some features, connected to a spectral change or "break" of the spectrum. The first feature, commonly referred to as the "knee", is seen around energies of 10^15 eV, whereas a second break occurs around 10^17 eV, which is known as the "ankle" [START_REF] Carroll | An Introduction To Modern Astrophysics[END_REF]. There is growing evidence for another spectral change, a "second knee" between the knee and the ankle, which is still under investigation [START_REF] Olive | [END_REF]. In the present picture, various sources are considered as possible candidates for powering different ranges of the measured cosmic-ray spectrum.
The acceleration of cosmic rays from the lowest energies up to the knee is attributed to solar flares and to galactic sources. Beyond the knee, extragalactic sources are potential candidates for cosmic-ray acceleration. Changes in the spectral index of the cosmic-ray spectrum are commonly explained by a change of the acceleration mechanism. The spectral index changes around the knee and the ankle are believed to correspond to the transition from galactic to extragalactic origin. The exact explanation of how and where this transition takes place is still not fully understood. The measured spectrum of the cosmic rays has reached levels that were unexpected in the beginning of the field: the Universe is a TeVatron accelerator. It is remarkable that, to date, state-of-the-art experiments have not yet found a natural end of the cosmic-ray spectrum. An ultimate theoretical limit on the propagation of proton cosmic rays, the Greisen-Zatsepin-Kuzmin cutoff, is expected at around 5 × 10^19 eV [START_REF] Greisen | End to the Cosmic-Ray Spectrum?[END_REF]. Despite all the efforts and progress, the fundamental question of the origin of cosmic rays remains not fully solved and is still a matter of debate. Gamma-ray astronomy offers an opportunity to study the high-energy sky using observations of gamma rays from space and from the ground. The early days of the field and some remarkable years are given in the following.

The Birth of Gamma-Ray Astronomy

In 1912, cosmic rays introduced for the first time the non-thermal processes taking place in the Universe and raised a series of questions about their origin and propagation. Right after their discovery, supernova remnants were proposed as possible candidates of galactic cosmic-ray emission [START_REF] Baade | Cosmic Rays from Super-novae[END_REF]. The discovery of the π-meson in 1947, and the π⁰ decay into two photons, indicated that charged cosmic rays can produce gamma rays.
During these years, particle cascades and high-energy nuclear reactions were seen in nuclear emulsions (as seen for example in extensive air showers) [START_REF] Edwards | Analysis of nuclear interactions of energies between 1000 and 100 000 BeV[END_REF]. Therefore, possible sources of cosmic rays, like supernova remnants and regions of cosmic-ray confinement, were expected to be visible and detectable at gamma-ray energies [START_REF] Hayakawa | Propagation of the Cosmic Radiation through Intersteller Space[END_REF]. Additionally, it was expected that the sites of nucleosynthesis would reveal themselves in the gamma-ray energy range [START_REF] Burbidge | Synthesis of the Elements in Stars[END_REF]. It was soon understood that detecting gamma rays can help to trace the sources of cosmic rays. Accelerated particles undergo interactions and energy losses via different mechanisms, producing gamma rays as they travel through the astrophysical source and beyond. Therefore, gamma rays are expected from different environments containing populations of charged particles. Unlike the charged cosmic rays, gamma rays that arise indirectly from nuclear or very-high-energy processes preserve source information as they move in straight lines. On the other hand, charged particles lose this information on their way to the Earth, as they are deflected by the Universe's turbulent magnetic fields, except possibly at the highest energies [START_REF] Aublin | Arrival directions of the highestenergy cosmic rays detected with the Pierre Auger Observatory[END_REF]. In September 1952 a simple experiment carried out by Galbraith and Jelley gave hope for the start of a new field of astronomy: gamma-ray astronomy from the ground [START_REF] Galbraith | Light Pulses from the Night Sky associated with Cosmic Rays[END_REF]. Their experiment allowed the first observation of the Cherenkov light emitted by cosmic-ray air showers in the atmosphere.
They measured the counts from the oscilloscope using a 25 cm mirror and a 5 cm phototube arranged in a rubbish bin painted black. It took decades to establish this technique, and the first source emitting in gamma rays was detected in 1989 [START_REF] Weekes | Observation of TeV gamma rays from the Crab nebula using the atmospheric Cerenkov imaging technique[END_REF]. In this remarkable year, the Whipple air-Cherenkov telescope in the US detected the Crab Nebula in TeV gamma rays, as originally suggested by G. Cocconi at the 1959 ICRC in Moscow [START_REF] Hayakawa | High Energy Gamma-Rays from the Crab Nebula[END_REF]. In the meantime, the detection of gamma rays with space satellites had already made quite some progress. As Earth's atmosphere is opaque to gamma rays, satellites above the atmosphere are needed to directly detect gamma rays, e.g. via Compton scattering or electron-positron pair creation in the detector. Gamma-ray astronomy from space was presented by Morrison in 1957, predicting gamma-ray fluxes from various sources [START_REF] Morrison | On gamma-ray astronomy[END_REF]. A first milestone in gamma-ray astronomy from space was reached in 1961 with Explorer 11, the first gamma-ray satellite in orbit, which picked up a little more than 100 cosmic gamma-ray photons [START_REF] Nasa | Gamma-ray Astronomy[END_REF][START_REF] Kraushaar | Explorer XI Experiment on Cosmic Gamma Rays[END_REF]. Another remarkable year was 2008, with the launch of the Fermi-Large Area Telescope (LAT), which scans the entire sky every three hours and has brought new, unexpected discoveries. Gamma rays cover a large dynamic range of the electromagnetic spectrum, and the study of the sky in this energy range requires more than one type of detection technique (see Chapter 2). Nowadays, the study of the sky in gamma rays is covered by space- and ground-based instruments.
Space detectors played an important role in the beginning of the field and they are suitable for observations in the energy range from 100 MeV to 100 GeV. As the flux at higher energies is low and launching large collection-area detectors to space is challenging, Cherenkov telescopes on the ground are more suitable for high energies due to their large collection areas associated with the large size of the Cherenkov light pool on the ground. The field of gamma-ray astronomy relies strongly on observations and detectors with good sensitivity, good angular resolution, and a large field of view. The highlights of each decade since the early days of the field are given in the following:

• 1950s - Also referred to as the decade of predictions of gamma-ray astronomy. The main goals of gamma-ray astronomy were defined: to establish the sources of cosmic rays and the seats of nucleosynthesis.

• 1960s - The detectors of this decade were background dominated and poor in sensitivity, e.g. the Orbiting Solar Observatory OSO-3 [START_REF] Kraushaar | High-Energy Cosmic Gamma-Ray Observations from the OSO-3 Satellite[END_REF].

• 1970s - The space detector OSO-3 produced for the first time a Milky Way map in gamma rays [START_REF] Kraushaar | High-Energy Cosmic Gamma-Ray Observations from the OSO-3 Satellite[END_REF]. Other successful observations were done during this decade by SAS 2 [START_REF] Derdeyn | SAS-B digitized spark chamber gamma ray telescope[END_REF], Cos-B [START_REF] Bignami | The COS-B experiment for gamma-ray astronomy[END_REF], SMM and others [START_REF] Strong | The many faces of the sun : a summary of the results from NASA's Solar Maximum Mission[END_REF].

• 1980s - Preparation decade of an all-sky-view satellite named the Compton Gamma-Ray Observatory. The Whipple Observatory detected for the first time the Crab Nebula from the ground [START_REF] Krennrich | Stereoscopic observations of gamma rays at the Whipple observatory[END_REF].
• 1990s - The Energetic Gamma Ray Experiment Telescope (EGRET) on board the Compton Gamma-Ray Observatory produced the first all-sky map in gamma rays, at energies of 20 MeV to 30 GeV [START_REF] Hartman | The Third EGRET Catalog of High-Energy Gamma-Ray Sources[END_REF].

• 2000s - Design and operation of other space- and ground-based instruments, e.g. the Fermi-LAT satellite as well as the MAGIC and H.E.S.S. experiments.

• 2010s - The current decade counts the largest operating ground-based gamma-ray instruments ever. Details about these instruments can be found in the reference list given here and references therein [START_REF] Kraushaar | High-Energy Cosmic Gamma-Ray Observations from the OSO-3 Satellite[END_REF][START_REF] Derdeyn | SAS-B digitized spark chamber gamma ray telescope[END_REF][START_REF] Thompson | Calibration of the Energetic Gamma-Ray Experiment Telescope (EGRET) for the Compton Gamma-Ray Observatory[END_REF][START_REF] Rando | Post-launch performance of the Fermi Large Area Telescope[END_REF][START_REF] Weekes | a Fast Large Aperture Camera for Very High Energy Gamma-Ray Astronomy[END_REF][START_REF] Barrau | The CAT imaging telescope for very-high-energy gamma-ray astronomy[END_REF][START_REF] Paré | CELESTE: an atmospheric Cherenkov telescope for high energy gamma astrophysics[END_REF][START_REF] Covault | Progress and Recent Results from the Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE)[END_REF][START_REF] Lorenz | Status of the 17 m MAGIC telescope[END_REF][START_REF] Holder | VERITAS: Status and Highlights[END_REF][START_REF] Anderhub | Design and operation of FACTthe first G-APD Cherenkov telescope[END_REF][START_REF] Smith | HAWC: Design, Operation, Reconstruction and Analysis[END_REF].

The field of gamma-ray astronomy is rapidly progressing. The pioneering efforts and data from space and ground-based detectors have expanded the knowledge on the high-energy sky.
Despite the efforts and improvements in the field, the full scenario of the particle acceleration mechanisms taking place in the Universe is, however, still incomplete. Possible mechanisms responsible for accelerating particles to such very high energies and the possible cosmic-ray acceleration sites are given next.

Acceleration Mechanisms

The number of detected astrophysical sources emitting at very high energies has increased significantly in the last decade thanks to the observations carried out by the gamma-ray experiments. These sources exhibit non-thermal spectra of very-high-energy gamma rays, which can often be approximated by a power-law:

dN/dE ∝ E^−Γ,  (1.1)

with a spectral index Γ typically between 2 and 3. The cosmic-ray spectrum, measured over twelve orders of magnitude in energy, has a spectral index of the same order. Therefore, the Universe's most extreme particle acceleration scenarios must account for both a gamma-ray and a cosmic-ray spectrum of the form given in Equation 1.1 and for the extension of the cosmic-ray spectrum up to energies of 10^20 eV.

Diffusive Shock Acceleration

The first particle acceleration mechanism was proposed by E. Fermi in 1949 [START_REF] Fermi | On the Origin of the Cosmic Radiation[END_REF]. First-order Fermi acceleration, or diffusive shock acceleration, describes the acceleration of charged particles in the vicinity of strong shock waves. In his model, Fermi used stochastic means to explain how particles colliding with clouds in the interstellar medium could be accelerated to high energies. Supernova remnants were considered as cosmic-ray acceleration candidates right after their discovery in the 1930s. In the late 1970s, many authors, such as [START_REF] Bell | The acceleration of cosmic rays in shock fronts. I[END_REF], adapted the Fermi acceleration to the supernova shocks (for a review see [36,37]).
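To make the steepness of a spectrum like Equation 1.1 concrete, the power-law can be integrated above a threshold energy. The short sketch below (illustrative only; the normalization k is arbitrary) shows that for Γ = 2.7, the index measured for the bulk of the cosmic rays, each decade in energy costs a factor 10^1.7 ≈ 50 in integral flux:

```python
def integral_above(e_min, gamma, k=1.0):
    """Number of particles above e_min for dN/dE = k * E**(-gamma),
    i.e. N(>=E) = k * E**(1 - gamma) / (gamma - 1), valid for gamma > 1."""
    if gamma <= 1.0:
        raise ValueError("integral diverges for gamma <= 1")
    return k * e_min ** (1.0 - gamma) / (gamma - 1.0)

# Ratio of integral fluxes one decade apart for gamma = 2.7:
ratio = integral_above(1.0, 2.7) / integral_above(10.0, 2.7)
# ratio = 10**1.7, i.e. roughly a factor 50 per decade
```

This is why large collection areas are mandatory at the highest energies: the event rate drops by orders of magnitude for each decade gained in energy.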
Suppose a strong (supersonic) shock wave of relativistic particles propagating through a diffuse plasma medium at velocity U. Non-thermal particles are moving at relativistic velocities, and the shock is moving non-relativistically [START_REF] Longair | High Energy Astrophysics[END_REF]. The plasma in front of the shock is referred to as the unshocked plasma (or upstream medium) and the plasma behind the shock as the shocked plasma (or downstream medium). Let us first consider the flow of the interstellar gas in the vicinity of the shock front. The upstream gas enters the shock front at a velocity v₁ = U and leaves it at a velocity v₂. Applying the continuity equation and asking for mass conservation, the densities and velocities of the gas are related by ρ₁v₁ = ρ₂v₂. In the case of strong shocks in a fully ionized or monoatomic gas, ρ₂/ρ₁ = (γ + 1)/(γ − 1), where γ = 5/3 is the ratio of the specific heat capacities of the gas. Hence, the gas leaves the downstream at a velocity v₂ = U/4 (as shown in Figure 1.2, left). Secondly, let us consider high-energy particles in the rest frame of the upstream medium (Figure 1.2, middle). The shock advances through the medium at velocity U whereas the gas behind it travels at 3U/4. Particles crossing the shock will get scattered by the turbulence behind the shock, become isotropic with respect to the downstream medium and gain an increase of ∆E in energy. Now, let us consider the process in the rest frame of the downstream medium.

Figure 1.2 (right): Rest frame of the downstream medium; particles from the upstream are advancing with velocity 3U/4. Every time particles cross the shock, there is a gain in energy of ∆E (shown as blue and orange lines). Image courtesy [START_REF] Funk | A new population of very high-energy -ray sources detected with H[END_REF].

The particles diffusing from the downstream to the upstream medium again encounter gas moving towards the shock front with velocity 3U/4.
Hence, the particles crossing the shock from downstream to upstream will undergo the same increase of ∆E in energy as when they cross in the opposite direction. Regardless of the way the particles enter the shock front, there will always be an increase of ∆E in energy. In the case of a shock front, the energy is transferred to the particles via head-on collisions. It can be shown that the average energy gain after crossing the shock is 2V/3c. Hence, the total energy gain after a round trip across the shock and back again is:

∆E/E = (4/3)(V/c),  (1.2)

with V = 3U/4, the relative velocity of the upstream and downstream medium. If the particle energy increases by a factor β after a crossing cycle, the new energy of the particle is E = βE₀, and the probability that the particle remains in the shock region after one cycle is P. After k cycles in the acceleration region, there are N = N₀P^k particles with energies E = E₀β^k (for more details see [START_REF] Bell | The acceleration of cosmic rays in shock fronts. I[END_REF]). From this, the number of particles is N(≥ E) = N₀(E/E₀)^(ln P/ln β). Since β > 1 (energy gain) and P < 1, the exponent ln P/ln β is negative; for a strong shock the compression ratio gives ln P/ln β = −1, i.e. P = 1/β. The energy spectrum of the high-energy particles is then found to be of the following form:

dN/dE ∝ E^−2.  (1.3)

What is necessary to obtain a spectral index of ∼2 is that the acceleration happens in the vicinity of a strong shock. There is evidence for strong shocks in astrophysical sources such as supernova remnants, active galaxies and the extended components of extragalactic radio sources.

Sites of Gamma-Ray Emission

Several sources are considered as possible sites of cosmic-ray particle accelerators in the Universe.
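The cycle-by-cycle bookkeeping behind Equations 1.2 and 1.3 can be illustrated with a toy Monte Carlo (a sketch with arbitrary parameters, not a physical shock model): in each cycle a particle stays in the accelerator with probability P, and the fraction surviving at least k cycles, N(≥ E₀β^k)/N₀, should follow P^k.

```python
import random

def dsa_cycles(P, n_particles=200_000, seed=1):
    """Toy diffusive-shock model: per cycle a particle remains in the
    acceleration region with probability P (each cycle multiplies its
    energy by a fixed factor beta) and escapes otherwise.
    Returns the number of completed cycles for each particle."""
    rng = random.Random(seed)
    cycles = []
    for _ in range(n_particles):
        k = 0
        while rng.random() < P:
            k += 1
        cycles.append(k)
    return cycles

# The fraction surviving at least k cycles tends to P**k, so with
# beta = 1/P (strong shock) the integral spectrum is N(>=E) ∝ E**-1,
# i.e. dN/dE ∝ E**-2 as in Equation 1.3.
```

Counting the survivors at a few values of k and fitting a line to log N(≥E) versus log E recovers the −1 integral slope, independently of the chosen P.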
A limit on the maximum energy a particle accelerator can achieve can be set from its size and typical magnetic field values [START_REF] Fraschetti | On the acceleration of ultra-high-energy cosmic rays[END_REF]. The gyroradius r_g is the radius of the motion of a particle in a uniform magnetic field B. The condition to contain the particles in a source of size R, so that they can be accelerated, is that the gyroradius r_g must be smaller than R. Thus, the maximum energy that an accelerator can achieve is estimated to be:

E_max = ZeβcBR,  (1.4)

where Ze is the electric charge of the particle and βc its velocity. Accordingly, the maximum energy is proportional to the magnetic field and size of the source and to the charge of the cosmic-ray particle. The possible cosmic-ray emission sites as a function of the magnetic field and of the average size are shown in Figure 1.3. The diagonal lines correspond to the minimum B and L required for the acceleration of protons to 100 EeV (10^20 eV) and 1 ZeV (10^21 eV). It is remarkable that cosmic-ray acceleration can take place in astrophysical sources of very different sizes, from neutron stars up to galaxy clusters.

Origin of Cosmic Gamma Rays

Very-high-energy gamma rays are products of the interaction of charged particles with ambient matter or electromagnetic fields, e.g. synchrotron radiation or inverse Compton scattering. The charged particles themselves have first been accelerated to ultra-relativistic energies by the electromagnetic fields of the very-high-energy emitting sources or via diffusive shock acceleration processes. The emission processes responsible for producing energetic gamma rays are classified as being of leptonic or hadronic origin, based on the type of charged particles involved. Gamma rays from hadronic interactions are of great interest since they can be used as a bridge to discover the origin of cosmic rays.
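Equation 1.4 can be evaluated directly; expressed in eV the elementary charge cancels, leaving E_max[eV] = Z β c B[T] R[m]. Below is a small sketch with illustrative numbers (the ~100 µG, ~1 pc values stand in for a generic supernova-remnant shell and are assumptions, not measurements from the text):

```python
def hillas_emax_eV(Z, beta, B_tesla, R_m):
    """Maximum attainable energy from Equation 1.4, E_max = Z e beta c B R,
    returned in eV (dividing joules by e turns them into eV, so e cancels)."""
    c = 2.998e8  # speed of light, m/s
    return Z * beta * c * B_tesla * R_m

PARSEC = 3.086e16  # m
GAUSS = 1e-4       # tesla

# Illustrative supernova-remnant-like numbers: B ~ 100 microgauss, R ~ 1 pc
e_snr = hillas_emax_eV(Z=1, beta=1.0, B_tesla=100e-6 * GAUSS, R_m=1.0 * PARSEC)
# ~1e17 eV: protons confined up to around the knee region of the spectrum
```

The linear scalings in Z, B and R are what produce the diagonal lines of constant E_max in the Hillas-type plot of Figure 1.3.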
A brief description of the hadronic and leptonic processes resulting in the production of very-high-energy gamma rays is given in the following.

Hadronic Origin of Gamma-Rays

Inelastic proton-proton (pp) and proton-nuclei (pN) collisions are the two main processes responsible for producing gamma rays from hadronic interactions. For inelastic pp interactions:

p + p → p + p + π⁰
p + p → p + n + π⁺
p + p → p + p + π⁺ + π⁻  (1.5)

and for pN interactions:

N + p → X + π⁰.  (1.6)

The pN interactions can also produce charged muons. The neutral pions, with a short lifetime of 8.4 × 10^−17 s [START_REF] Patrignani | Review of Particle Physics[END_REF], decay predominantly into two photons:

π⁰ → γ + γ.  (1.7)

Charged pions decay into muons and neutrinos:

π⁺ → µ⁺ + ν_µ → e⁺ + ν_e + ν̄_µ + ν_µ,  (1.8)
π⁻ → µ⁻ + ν̄_µ → e⁻ + ν̄_e + ν_µ + ν̄_µ.  (1.9)

The last equations show the connection between neutrino physics and gamma-ray astronomy [START_REF] Waxman | High energy neutrinos from astrophysical sources: An upper bound[END_REF]. The energy threshold for the production of pions π⁰ from pp collisions is ≈ 280 MeV (for m_π = 135 MeV). Regions of space filled with dense gas and ambient material, together with relativistic protons, can produce highly energetic pions, which in turn produce very-high-energy gamma rays. Observations indicate that gamma rays from the Milky Way disk and supernova remnants are most likely produced in hadronic interactions. An interesting way to trace the acceleration of cosmic-ray protons is the study of high-energy emission from young supernova remnants (see Section 1.4.1), as hadronic interactions are most probable to take place in their environment.

Leptonic Origin of Gamma-Rays

Electrons accelerated to high energies radiate high-energy photons via synchrotron radiation or upscatter ambient photons via inverse Compton scattering.
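The ≈ 280 MeV threshold quoted above follows from relativistic kinematics: for p + p → p + p + π⁰ on a proton at rest, the beam proton needs kinetic energy T_th = 2m_π(1 + m_π/4m_p) (in units with c = 1). A quick numerical check (a sketch using the masses quoted in the text, not the thesis's own code):

```python
def pp_pi0_threshold_MeV(m_pi=134.98, m_p=938.27):
    """Threshold kinetic energy (MeV) of the beam proton for
    p + p -> p + p + pi0 on a target proton at rest, derived from
    s_min = (2*m_p + m_pi)**2:  T_th = 2*m_pi*(1 + m_pi/(4*m_p))."""
    return 2.0 * m_pi * (1.0 + m_pi / (4.0 * m_p))

t_th = pp_pi0_threshold_MeV()  # close to the ~280 MeV quoted in the text
```

Cosmic-ray protons only slightly above this kinetic energy already feed the π⁰ → γγ channel, which is why dense gas regions traversed by relativistic protons glow in gamma rays.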
These two, together with bremsstrahlung (to a lesser extent), are the main radiation processes in very-high-energy astrophysics. A brief description of them is given next. Bremsstrahlung Electrons traversing the electric field of a nucleus produce electromagnetic radiation. During this process, the electron transfers energy to the nucleus, is decelerated and emits radiation called "braking radiation", or bremsstrahlung. The electrons have an average energy loss rate dE_e/dt ∝ E_e. The characteristic time for energy loss by bremsstrahlung for electrons with energy E_e, in an ambient gas of density n, is: t_br = E_e / (−dE/dt) ≈ 4 × 10^7 (n / 1 cm^−3)^−1 yr. (1.10) Thus, bremsstrahlung losses do not modify the shape of the electron spectrum. The emission of astrophysical sources at very-high energies is dominated by synchrotron or inverse Compton radiation. Synchrotron Radiation Synchrotron radiation is emitted when a charged relativistic particle spirals around magnetic field lines. The synchrotron radiation of ultra-relativistic electrons is responsible for the emission observed in many astrophysical sources. The average energy loss rate of an electron by synchrotron radiation is given by: −dE/dt = (4/3) σ_T c U_mag (v/c)² γ_e², (1.11) where γ_e is the Lorentz factor of an electron moving at a speed v, σ_T = (8π/3) r_e² is the Thomson scattering cross-section and U_mag = B²/(2µ_0) is the energy density of the magnetic field. The lifetime of electrons losing their energy due to synchrotron emission is: t_s = E_e / (−dE/dt) ≈ 1.3 × 10^10 (E_e / 1 GeV)^−1 (B / 1 µG)^−2 yr. (1.12) Synchrotron radiation is likely the source of X-rays and low-energy gamma rays in the Fermi-LAT regime, but explaining very-high-energy radiation with it would require unrealistic electron energies and magnetic field strengths.
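The synchrotron lifetime scaling of Equation 1.12 is easy to evaluate; taking the magnetic field in µG (the conventional unit for this scaling, assumed here since the unit is garbled in the extracted text), the function `t_sync_yr` introduced below gives the cooling time:

```python
def t_sync_yr(E_GeV, B_muG):
    """Synchrotron cooling time in years, from the scaling of Eq. 1.12:
    t_s ~ 1.3e10 (E / 1 GeV)^-1 (B / 1 uG)^-2 yr."""
    return 1.3e10 / (E_GeV * B_muG**2)

# A 10 TeV electron in a 100 uG field cools in only ~130 yr, which is why
# very-high-energy synchrotron electrons must be (re)accelerated locally.
print(f"{t_sync_yr(1e4, 100):.0f} yr")
```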
Inverse Compton Scattering Inverse Compton scattering happens when high-energy electrons upscatter photons of the ambient radiation fields to very-high energies: e + γ_target → e + γ, where γ_target is a target low-energy background photon and γ is the upscattered photon. Relativistic electrons can collide with target photons from the cosmic microwave background, infrared or optical stellar radiation, or even synchrotron photons, and upscatter them to GeV energies or even higher. Inverse Compton scattering is the only purely leptonic process producing very-high-energy gamma rays. This can help to understand the emission from active galaxies, gamma-ray bursts and some supernovae. The unpolarized Klein-Nishina differential cross-section, obtained from quantum electrodynamics, is: dσ/dΩ = (3/16π) σ_T (ε_f/ε_i)² (ε_f/ε_i + ε_i/ε_f − sin²θ). (1.13) The final photon energy in the electron rest frame is: ε_f = ε_i / [1 + (ε_i/m_e c²)(1 − cos θ)]. (1.14) After integrating over all angles in Equation 1.13, the total Klein-Nishina cross-section is: σ_IC = (3σ_T/4) [ (1+x)/x³ (2x(1+x)/(1+2x) − ln(1+2x)) + ln(1+2x)/(2x) − (1+3x)/(1+2x)² ], (1.15) with x = ε_i/m_e c². If this energy is small compared to the electron rest mass, the photon is scattered at a different angle but its energy remains unchanged (ε_i ≈ ε_f). This is known as the Thomson regime, and the differential cross-section reduces to: dσ/dΩ = (3/16π) σ_T (1 + cos²θ). (1.16) For low-energy photons (ε_i << m_e c²), the total cross-section reduces to the Thomson cross-section: σ_IC ≈ σ_T (1 − 2x) ≈ σ_T. (1.17) In the so-called Klein-Nishina regime, when the photons are of low energy in the lab frame but the electrons are so relativistic that x >> 1 in the electron rest frame, the process is no longer treated as continuous but as discrete.
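The total cross-section of Equation 1.15 can be evaluated numerically to check its two limits (the function `sigma_kn` is a helper introduced here; the 3/4 prefactor is the standard form, reconstructed since it is garbled in the extracted text):

```python
import math

SIGMA_T = 6.6524587e-29  # Thomson cross-section, m^2

def sigma_kn(x):
    """Total Klein-Nishina cross-section (m^2), Eq. 1.15;
    x = photon energy / (m_e c^2) in the electron rest frame."""
    if x < 1e-4:                       # avoid numerical cancellation at tiny x
        return SIGMA_T * (1 - 2 * x)   # Thomson limit, Eq. 1.17
    L = math.log(1 + 2 * x)
    term1 = (1 + x) / x**3 * (2 * x * (1 + x) / (1 + 2 * x) - L)
    return 0.75 * SIGMA_T * (term1 + L / (2 * x) - (1 + 3 * x) / (1 + 2 * x)**2)

for x in (1e-3, 1.0, 100.0):
    print(f"x = {x:6g}   sigma / sigma_T = {sigma_kn(x) / SIGMA_T:.4f}")
```

At x = 1 the suppression relative to Thomson is already about a factor ~0.43, and at x = 100 the cross-section has dropped by almost two orders of magnitude, illustrating the Klein-Nishina cutoff discussed in the text.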
The total inverse Compton cross-section becomes: σ_IC ≈ (3/8) σ_T (1/x) (ln 2x + 1/2). (1.18) When the photons have much larger energies than m_e c², the cross-section falls quite rapidly (the "Klein-Nishina regime"). Even though inverse Compton scattering also happens with nuclei, it can be neglected, since the rate of proton interactions is suppressed by a factor of (m_e/m_p)² with respect to electrons. In either case, it is important to estimate the energy loss of electrons by inverse Compton scattering. In the Thomson regime, the electrons lose energy by inverse Compton at a rate: −dE/dt = (4/3) σ_T c γ² U_ph. (1.19) From this equation together with Equation 1.11 for the synchrotron radiation, the following relation can be found: P_IC / P_synch = U_ph / U_B. (1.20) The ratio of the inverse Compton and synchrotron emission is equal to the ratio of the radiation field energy density over the magnetic field energy density. In the so-called Synchrotron Self-Compton (SSC) scenario, high-energy electrons upscatter the synchrotron background photons that they emit themselves and produce inverse Compton radiation. This is the most common model used to describe the emission of the relativistic jets in active galactic nuclei. Multi-wavelength observations of very-high-energy emitting sources show the presence of a double hump in the Spectral Energy Distribution (SED). Leptonic scenarios are commonly used to explain the double-hump structure of the SED, since the synchrotron and inverse Compton components trace the same electron population in a source. The first peak is attributed to the synchrotron emission, whereas the high-energy peak is due to the inverse Compton. The observations of the current generation of gamma-ray instruments reveal a large number of sources emitting very-high-energy gamma rays. In the following, the main sources emitting very-high-energy gamma rays, their main characteristics and possible acceleration mechanisms are covered.
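Equation 1.20 gives a quick handle on whether a source is synchrotron- or inverse-Compton-dominated. A minimal sketch, assuming SI units and illustrative field values (the CMB energy density of ~4.2e-14 J/m^3 and a 3 µG interstellar field are typical textbook numbers, not taken from the thesis):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T m / A

def ic_to_sync_ratio(U_ph_J_m3, B_tesla):
    """Eq. 1.20 in the Thomson regime: P_IC / P_sync = U_ph / U_B,
    with the magnetic energy density U_B = B^2 / (2 mu_0)."""
    U_B = B_tesla**2 / (2 * MU0)
    return U_ph_J_m3 / U_B

# CMB photons against a 3 uG (3e-10 T) interstellar magnetic field:
# the two channels turn out to be of comparable power.
print(f"P_IC / P_sync = {ic_to_sync_ratio(4.2e-14, 3e-10):.2f}")
```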
High Energy Gamma-Ray Sources The third Fermi-LAT catalog (3FGL), produced with four years of data, lists 3033 gamma-ray sources detected in the energy range from 100 MeV to 300 GeV [START_REF] Acero | Fermi Large Area Telescope Third Source Catalog[END_REF]. This survey shows that blazars, a type of active galactic nuclei, are the dominant source class (Figure 1.4a) and that pulsars are the most abundant sources in our Galaxy. Blazars are subdivided into BL Lacs and Flat Spectrum Radio Quasars (FSRQs). Recent observations with the current generation of imaging atmospheric Cherenkov telescopes reveal that blazars represent the majority of the TeV sources (see Figure 1.4b) and that pulsar wind nebulae are the most abundant TeV source class in the galactic plane (refer to [45] for the latest updates). In the following, the main gamma-ray sources are introduced. The two main source classes used for the work presented here, active galactic nuclei and pulsar wind nebulae, are described in more detail. Supernova Remnants Supernova Remnant (SNR) is a common name for sources created either from the collapse of a massive star or from the explosion of a white dwarf. The first type of SNR is the outcome of the collapse of a massive star at the end of its lifetime, when the star can no longer withstand its own gravitational force. The second type is created from the explosion of a carbon-oxygen white dwarf in a binary system that has accreted enough matter from its companion star to exceed the Chandrasekhar limit [START_REF] Chandrasekhar | The Maximum Mass of Ideal White Dwarfs[END_REF]. The term "super-novae" was first introduced in 1934 in the historical papers of Baade and Zwicky, published a few years after the discovery of cosmic rays.
They also proposed supernovae as candidate sources of cosmic rays and linked supernova explosions with the formation of neutron stars [START_REF] Baade | Cosmic Rays from Super-novae[END_REF][START_REF] Baade | On Super-novae[END_REF]. Ever since, SNRs have been of great interest, as they are considered the prime candidates for galactic cosmic ray acceleration up to at least 10^15 eV (see [START_REF] Blasi | The origin of galactic cosmic rays[END_REF] for a review). This paradigm is supported by comparing the observed energy density of cosmic rays with that of the thermal gas or magnetic fields in our Galaxy. However, these connections are not fully understood. The study of SNRs across the electromagnetic spectrum has helped to better understand the acceleration scenario. SNRs exhibit shock fronts (shells) that can accelerate charged particles to high energies; hence they offer a unique laboratory to study the hadronic origin of gamma rays from pp collisions (introduced in Section 1.3.1). Recent observations carried out by state-of-the-art gamma-ray experiments have increased the sample of gamma-ray detected SNRs. They reveal that gamma-ray emission is associated with a large variety of SNRs, from young shell-type SNRs up to evolved SNRs interacting with molecular clouds and historical SNRs. Young SNRs are the best candidates to study the acceleration of cosmic ray protons through their interaction with the surrounding molecular gas. This may help to establish possible proton acceleration to very-high energies. The young SNR RX J1713.7-3946 was the first ever resolved in gamma rays at TeV energies, by the H. E. S. S. experiment. It has the largest surface brightness among SNRs, which allows the morphology and spatially resolved spectra of such very-high-energy gamma-ray sources to be studied (Figure 1.5a).
The measured spectrum up to 100 TeV demonstrates that particle acceleration goes beyond these energies in the shell of the source. However, the origin of the gamma-ray emission is still under debate: it could be hadronic, leptonic or a mixture of the two. For instance, a pion-decay signature was reported in the gamma-ray spectra of two known SNRs, IC 443 and W44 [START_REF] Ackermann | Detection of the Characteristic Pion-Decay Signature in Supernova Remnants[END_REF]. On the other hand, correlation studies between the X-rays and gamma rays are in favour of leptonic models [START_REF] Aharonian | Discovery of Gamma-Ray Emission From the Shell-Type Supernova Remnant RCW 86 With Hess[END_REF]. In order to better understand and complete the acceleration scenario, instruments with better angular resolution are required to resolve the emission at very-high energies. Pulsars and Pulsar Wind Nebulae A Pulsar Wind Nebula (PWN) consists of a pulsar and the wind nebula, a flow of relativistic particles in the vicinity of the pulsar. The central engine is the pulsar, a fast rotating neutron star created in a supernova event. The particles are accelerated up to very-high energies in the electromagnetic fields in the proximity of the pulsar. The neutron star rotational axis is misaligned with respect to the magnetic axis. As the neutron star rotates, co-rotating cones of light are emitted and a pulsed beam of radiation is seen when it crosses the observer's line of sight on Earth. After being accelerated, the particles move freely within the light cylinder along the ordered magnetic field and are advected downstream from the shock. These particles create an ultra-relativistic cold wind. Due to the rotation of the pulsar, the magnetic field lines also move and expand in a toroidal pattern. The equilibrium point is reached when the ram pressure of the particle wind is balanced by the pressure of the particles in the surrounding nebula.
Beyond the termination shock, the magnetic field lines open up and the particles are accelerated in the presence of the magnetic field and emit synchrotron radiation. Several models of pulsar magnetospheres have been proposed [START_REF] Roberts | Model of Pulsar Magnetospheres[END_REF]. The full scenario of PWN particle acceleration is still under debate and the exact site of particle acceleration remains unresolved (for a review see [START_REF] Gaensler | The Evolution and Structure of Pulsar Wind Nebulae[END_REF][START_REF] Bühler | The surprising Crab pulsar and its nebula: a review[END_REF]). It is commonly assumed that the non-thermal emission of PWNe is of leptonic origin. Hadronically induced emission is unlikely, as it would require dense target material nearby, e.g. a molecular cloud. The leptonic scenario is favoured because the material has been swept away from the pulsar since the supernova explosion. The prototype of the entire PWNe class is the Crab Nebula, one of the best-studied objects in the sky. The detection of pulsed emission from the Crab Nebula at TeV energies has challenged the current theoretical emission models of pulsars [START_REF] Albert | VHE γ-Ray Observation of the Crab Nebula and its Pulsar with the MAGIC Telescope[END_REF]. Furthermore, the detection of flux variations at GeV energies by the Fermi-LAT experiment was unexpected and calls for more complex models to explain these observations. H. E. S. S. has measured for the first time the extension of the Crab Nebula at very-high energies [START_REF] Holler | First measurement of the extension of the Crab nebula at TeV energies[END_REF]. Chapter 5 covers the Crab Nebula in more detail. Active Galactic Nuclei Active Galactic Nuclei (AGN) are galaxies with an active core hosting a central supermassive black hole. The accretion of matter onto the black hole powers ultra-relativistic jets in the form of collimated outflows.
The presence of relativistic jets is an important feature of AGNs, since they transport energy and momentum from the black hole up to Mpc scales or even further. Emission from active galaxies is detected at all wavelengths, from radio up to gamma rays for the most energetic objects. Contrary to normal galaxies, active galaxies exhibit flux variability at all energy bands. Their flux can vary on time scales from years down to minutes, as variability studies of PKS 2155-304 by the H. E. S. S. collaboration show [START_REF] Aharonian | An Exceptional Very High Energy Gamma-Ray Flare of PKS 2155-304[END_REF]. From causality arguments, the emission from a region of size R cannot vary on time scales shorter than the time needed to cross this region at the speed of light, i.e. R/c. This is used to set limits on the size and on the relativistic boost of the emitting region (Doppler factor). Observations in gamma rays show that in the jet, particles are accelerated to ultra-relativistic energies, reaching Doppler factors greater than 100. High luminosity stands as another distinctive characteristic of AGNs. Even though active galaxies are distant extragalactic sources, they can sometimes outshine other stars and galaxies. For example, the quasar 3C 273 at a distance of z = 0.158 (2.4 Gly) is a bright source in the sky. Back in the 1960s, it was surprising to discover that 3C 273 is an extragalactic source of such brightness. AGNs are known to produce very high luminosities in very compact volumes. Their luminosity can range from 10^40 erg s^−1 for some nearby galaxies up to 10^48 erg s^−1 for some distant quasars. Figure 1.7a illustrates the Centaurus A galaxy including its prominent jet and Figure 1.7b shows the jet of the famous quasar 3C 273.
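The causality argument above can be put in numbers. A minimal sketch, assuming a Doppler factor delta boosting the apparent variability (the function name `max_region_size_m` and the few-minute doubling time, of the order seen in the PKS 2155-304 flare, are illustrative choices):

```python
C_LIGHT = 2.998e8  # speed of light, m/s

def max_region_size_m(t_var_s, doppler=1.0, z=0.0):
    """Causality limit on the emitting-region size:
    R <= c * t_var * delta / (1 + z), with delta the jet Doppler factor."""
    return C_LIGHT * t_var_s * doppler / (1 + z)

# A ~200 s flux-doubling time with no relativistic boost (delta = 1)
# already confines the emitter to well below a light-hour in size.
print(f"R <= {max_region_size_m(200):.2e} m")
```

Inverting the argument, a region larger than this bound requires a Doppler factor delta > 1, which is how the large boosts quoted in the text are inferred.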
In the AGN unification model proposed in 1995 by Urry and Padovani, the different active galaxies are classified based on the inclination of the jet towards the observer on Earth [START_REF] Urry | Unified Schemes for Radio-Loud Active Galactic Nuclei[END_REF]. Based on this classification scheme, when looking down the jet, we see a so-called blazar, as shown in Figure 1. Very-high-energy photons travelling cosmological distances can interact with the low-energy photons of the Extragalactic Background Light (EBL), resulting in an attenuation of the photon flux. The mean free path is used to quantify the distance a photon travels without any interaction. The free path of a very-high-energy photon before interacting depends on its primary energy: higher-energy photons have a shorter free path length than lower-energy ones. The opaqueness of the Universe to very-high-energy gamma rays leaves an imprint in the spectra of these sources, which is used to set limits on the EBL from measurements. Blazars detected by H. E. S. S. have been used to determine the EBL level [START_REF] Abdalla | Measurement of the EBL spectral energy distribution using the VHE gamma-ray spectra of[END_REF]. A particular case of rapid flux variations and luminous flares is found when studying the gamma-ray variability of the BL Lac object B2 1215+30 over five decades in energy, which is presented in Chapter 4. Other Sources The list of gamma-ray emitters also includes other types of source classes and unidentified sources. More sources are found interesting for studies in the very-high-energy regime; the ones presented here are a selection by the author. Binary Systems About 70% of the stars found in our Galaxy form either a binary system with one companion or live in more complex systems (see Figure 1.8 as illustration) [START_REF] Kobulnicky | A New Look at the Binary Characteristics of Massive Stars[END_REF]. These systems include gamma-ray binaries and microquasars (MQs).
Both are found to be variable in gamma rays and in some cases periodic. Binary systems consist of a compact object, either a neutron star or a stellar-mass black hole, and a massive O or B-type star. MQs are binary systems emitting in X-rays and with extended radio emission [START_REF] Bordas | Observations of TeV binary systems with the[END_REF]. Globular Clusters Globular clusters are regions with extremely high star densities (Figure 1.8). They host the most evolved and oldest stellar populations of our Galaxy. Their gamma-ray emission can originate from the numerous millisecond pulsars or from inverse Compton scattering of relativistic electrons accelerated in the globular cluster. In the analysis performed with the H. E. S. S. data, no point-like or extended emission was detected [START_REF] Collaboration | Search for very-highenergy γ-ray emission from Galactic globular clusters with H[END_REF]. These regions are also considered as potential targets for dark matter searches [START_REF] Abramowski | Observations of the Globular Clusters NGC 6388 and M15 and Search for a Dark Matter Signal[END_REF]. Molecular Clouds Although molecular clouds are not gamma-ray sources, they are interesting for gamma-ray astronomy as interaction targets for cosmic rays accelerated at SNRs and are important for studying the gamma-ray diffuse emission [START_REF] Abramowski | Diffuse Galactic gammaray emission with H[END_REF]. Gamma-Ray Bursts Gamma-Ray Bursts (GRBs) are the most energetic events in the gamma-ray regime. They are the most luminous, highly relativistic events, lasting from a few milliseconds up to hundreds of seconds. They are divided into short and long GRBs and can be the result of the merging of compact objects or the gravitational collapse of a massive star. No GRB has been detected by the H. E. S. S. experiment so far [START_REF] Aharonian | HESS observations of γ-ray bursts in 2003-2007[END_REF].
Summary In this chapter, gamma-ray astronomy above 100 MeV was introduced. The basic radiative processes responsible for producing energetic gamma rays were described briefly. Possible sites of cosmic ray acceleration range from neutron stars up to galaxy clusters. The exact picture of cosmic ray acceleration is not known, but some scenarios, such as Fermi acceleration, are considered as possible mechanisms for powering the non-thermal emission. The main gamma-ray sources and their characteristics were covered. Observations show that blazars are the most abundant source class in the GeV and TeV energy ranges and PWNe the most abundant source type in the Galaxy. In the past decade, the understanding of the high-energy sky has changed thanks to breakthrough discoveries by state-of-the-art experiments. Despite all these achievements, the cosmic ray picture is not complete and not fully understood. Chapter 2 Detectors for High-Energy Gamma-Ray Astronomy The high-energy processes accelerating particles in the Universe can be studied indirectly through the detection of gamma rays produced by interactions with the interstellar medium or radiation fields. High-energy gamma rays interact with matter via well-understood quantum electrodynamics processes. Thus, properties of gamma rays such as energy and direction can be reconstructed by detectors that "see" the secondary products. By reconstructing the directions of the incoming gamma rays, one can locate the emission sites and perform morphology studies. The reconstructed energy gives information on the emission power of astrophysical sources. The observed flux provides important information to study and understand the particle acceleration mechanisms powering particles to such high energies. Sophisticated detection techniques along with continuously advancing reconstruction methods are essential to study the most extreme forms of radiation coming from the non-thermal Universe.
In gamma-ray astronomy, space satellites such as Fermi-LAT or AGILE detect and locate high-energy emission sites by directly exploiting the interactions of gamma rays with dense materials. At energies E > 30 MeV, pair production is the dominant photon interaction process in most materials. A pair conversion instrument typically uses thin foils of dense metal to convert incoming gamma rays, a tracker to follow the resulting electrons, and a calorimeter to measure their energy. This is a robust technique for reconstructing the energy, direction and arrival time of gamma rays with energies from MeV to GeV. At energies above 1 TeV, the gamma-ray fluxes are low and large collection areas are required. Building and sending large collection area detectors to space is complicated and challenging due to the launch vehicles required; hence, ground-based experiments are better suited at those energies. When energetic gamma and cosmic rays reach the Earth's atmosphere and interact with its molecules, they initiate an extensive air shower of secondary particles. With the atmosphere acting as a calorimeter for the energy deposition of a very-high-energy particle, ground-based detectors can detect the secondary particles produced during this process. The Cherenkov light produced during extensive air showers can be detected with Imaging Atmospheric Cherenkov Telescopes (IACTs). The Whipple telescope, the first successful IACT, was a leap in the development of the technique, which is currently employed by the MAGIC, VERITAS and H. E. S. S. experiments. In this chapter, the basic principles of high-energy gamma-ray detectors are described. After describing briefly the photon interaction processes in matter in Section 2.1, the main principles of the Fermi-LAT detector are given in Section 2.2. The basic concepts of gamma-ray detection from the ground are described in Section 2.3 and the properties of extensive air showers in Section 2.4.
A historical review of the Cherenkov technique, its application to high-energy gamma-ray astronomy, and present and future ground-based gamma-ray experiments are discussed in Section 2.5. A short summary of the chapter is given in Section 2.6. Space Detectors The Earth's atmosphere is opaque to gamma rays, preventing a direct measurement from the ground. The properties of high-energy radiation coming from the Universe can therefore only be studied directly from satellites sent above the atmosphere. These instruments are designed to detect and locate sites of high-energy gamma-ray emission using the interaction properties of gamma rays with dense materials. Gamma rays with energies E ≥ 1.022 MeV can be converted into massive particles near an atomic nucleus via the electron pair production process, in accordance with Einstein's mass-energy equivalence principle (γ + N → e+ + e− + N). Pair production is the dominant photon interaction process at high energies (E > 100 MeV) for most materials [66] [6]. A space-borne pair creation telescope is therefore the instrument of choice for detecting gamma rays above approximately 100 MeV. Using a high-Z material, a large fraction of high-energy photons can be converted. A detector on board a satellite has to identify gamma rays against a large rate of background charged particles (cosmic rays) coming from all directions. To reject the cosmic ray background, a plastic scintillator anti-coincidence detector serving as "veto" (SAS-2 [START_REF] Derdeyn | SAS-B digitized spark chamber gamma ray telescope[END_REF], COS-B [START_REF] Van De Hulst | Spectral Analysis of Gamma Rays with the Cos-B Satellite. The Caravane Collaboration[END_REF], EGRET/CGRO [START_REF] Esposito | In-Flight Calibration of EGRET on the Compton Gamma-Ray Observatory[END_REF] and Fermi-LAT [START_REF] Moiseev | The anti-coincidence detector for the GLAST large area telescope[END_REF]) is placed in the outer part of the satellite.
It detects the passage of charged particles. Once the gamma ray has converted in the detector, its arrival time, energy and direction are extracted from the electron pair properties. Figure 2.1b shows the schematic construction of the EGRET satellite, which provided a comprehensive view of the gamma-ray sky, producing the first all-sky map in gamma rays. The successor of EGRET, the Fermi-LAT, is based on similar technologies but uses a silicon tracker instead of a spark chamber. Measuring the polarization of the gamma rays could give important information about the astrophysical sources, but this has not been realized by space detectors so far. Future space satellites, using time projection chambers, are being designed in a way that also provides this information (see [START_REF] Gros | First measurement of the polarisation asymmetry of a gamma-ray beam between 1.7 to 74 MeV with the HARPO TPC[END_REF]). The current state-of-the-art gamma-ray instruments in space are Fermi-LAT and AGILE. The former is used for performing part of the work presented in this thesis and is described next. The Fermi Large Area Telescope The Fermi Gamma-ray Space Telescope (Fermi) mission was launched on 2008 June 11 with two instruments on board: the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). The Fermi spacecraft is shown in Figure 2.2. The GBM instrument is used to monitor and study transient phenomena in the Universe, e.g. Gamma-Ray Bursts [START_REF] Meegan | The Fermi Gamma-ray Burst Monitor[END_REF]. The LAT is a pair-conversion telescope detecting gamma rays with energies from 20 MeV up to more than 500 GeV, an energy band that had only partially been explored by previous space satellites. The LAT has a large field of view of 2.5 sr, corresponding to 20% of the sky at every instant.
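The quoted 20% sky coverage follows directly from the 2.5 sr field of view compared to the full-sky solid angle of 4π sr:

```python
import math

def sky_fraction(fov_sr):
    """Fraction of the full 4*pi sr sky covered by a field of view (in sr)."""
    return fov_sr / (4 * math.pi)

# The LAT's 2.5 sr field of view: 2.5 / (4*pi) ~ 0.199, i.e. ~20% of the sky.
print(f"{sky_fraction(2.5):.1%}")
```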
The Fermi-LAT mainly observes in survey mode, where it scans the entire sky every three hours, making two orbits in a zenith-pointing mode and rocking at 35° north and south of zenith on alternating orbits. The LAT can also observe in pointing mode when required, e.g. for Target of Opportunity (ToO) events. The Fermi-LAT, originally planned for five years of operation, is now close to its 10th successful year of operation. Its main science goals are listed below:
• Monitoring fast transient events from GRBs and variable sources.
• Complete coverage of the high-energy sky.
• Measuring spectra over an extended energy range, from ∼ 20 MeV to 500 GeV.
• Localization of point sources, e.g. pulsars, blazars and new source types.
• Extension studies in sources like SNRs, molecular clouds or nearby galaxies.
• Dark matter searches.
• Diffuse isotropic gamma-ray emission studies.
The Fermi-LAT is the successor of the EGRET telescope, with better sensitivity and performance compared to previous missions [START_REF] Thompson | Calibration of the Energetic Gamma-Ray Experiment Telescope (EGRET) for the Compton Gamma-Ray Observatory[END_REF]. The reason for this is a combination of a better detector and reconstruction techniques. The LAT detector is based on principles of high-energy particle detectors, as described below. Principles of the Large Area Telescope A high-energy gamma ray hitting a high-Z material is converted into an electron-positron pair, which in turn creates a cascade of secondary particles, called a particle shower, until the energy is completely absorbed by the material. The energy of the primary particle is transferred to the new particles created during the shower development with minimal losses, until ionization starts to be dominant. Measuring all the energy deposited during the shower development is equivalent to measuring the energy of the primary particle which initiates the shower, in this case the gamma ray.
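The cascade described above can be illustrated with the simplest toy description of an electromagnetic shower (a Heitler-type model, introduced here for illustration, not the LAT's actual simulation): the particle number doubles every radiation length until the energy per particle drops below a critical energy Ec, assumed here to be ~10 MeV:

```python
import math

def heitler_shower_max(E0_MeV, Ec_MeV=10.0):
    """Toy Heitler model: particles double each radiation length until the
    energy per particle reaches Ec. Returns (depth of shower maximum in
    radiation lengths, particle number at maximum)."""
    n_generations = math.log2(E0_MeV / Ec_MeV)
    return n_generations, E0_MeV / Ec_MeV

# A 100 GeV photon: maximum after ~13 radiation lengths, ~1e4 particles.
depth, n_particles = heitler_shower_max(1e5)
print(f"shower max at ~{depth:.1f} radiation lengths, ~{n_particles:.0f} particles")
```

The logarithmic growth of the shower-maximum depth with energy is why a calorimeter of fixed depth can cover several decades in gamma-ray energy.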
The determination of the gamma-ray direction is done by reconstructing the trajectories of the electron/positron pair created in the first step of the shower development. These two important parameters are reconstructed using a tracker, which reveals the path of electrically charged particles, and a calorimeter, which measures the energy deposited during the shower development. The electromagnetic calorimeter is typically segmented transversely and consists of layers of high-density material. The aim of the calorimeter is to measure the energy deposition and the development profile of the electromagnetic shower. In order to have a precise reconstruction of the shower, the electromagnetic calorimeter works in conjunction with a tracker. The Fermi-LAT instrument detects gamma rays by converting them into an electron pair in tungsten foils and following their trajectories using a silicon tracker. When a high-energy gamma ray hits the detector, it traverses the silicon tracker until it interacts with an atom of the thin tungsten foils. The gamma ray then generates a particle shower by first converting into an electron pair, and the energy of each electron is deposited and measured in the electromagnetic calorimeter. The development of the electromagnetic shower in the calorimeter depends on the direction and energy of the gamma ray hitting the detector. The segmented calorimeter is designed to allow the profile of this energy deposition to be measured, for better particle discrimination. In the energy range of the Fermi-LAT, the charged cosmic ray flux is about 10^5 times higher than that of the gamma rays. To maximize the charged cosmic ray background rejection, the Fermi-LAT detector is composed of three main systems: the silicon tracker, the calorimeter and the Anti-Coincidence Detector (ACD) (see Figure 2.3). All these parts are combined in an optimal way to reject the largest part of the background.
To shield the detector from charged cosmic rays, the ACD is placed in the outer part. In the next sections, the basic principles of each subdetector are described, starting from the outermost one. The Anticoincidence Detector The main source of background for the Fermi-LAT satellite are the charged cosmic rays, which hit the detector at much higher rates than the gamma rays. The ACD is made of plastic scintillator tiles and issues veto responses to the passage of charged particles. It was designed under two main requirements: to have a high (0.9997) detection efficiency for charged cosmic rays and to suppress self-vetoes caused by the backsplash effect. The latter effect is encountered in instruments with massive calorimeters, from which a small fraction of secondary particles of the electromagnetic shower travel backwards through the tracker and cross the ACD. False vetoes are created by the recoil electrons resulting from Compton scattering of these particles in the ACD material. This effect limited the performance of the EGRET instrument, where it caused false vetoes yielding low detection efficiencies for E > 1 GeV [START_REF] Thompson | Calibration of the Energetic Gamma-Ray Experiment Telescope (EGRET) for the Compton Gamma-Ray Observatory[END_REF]. In order to suppress the backsplash effect, the LAT team segmented the ACD into 89 tiles of different sizes, and only the segment on the trajectory of the incident particle is considered. With only one segment of the ACD contributing to the backsplash, this effect is dramatically suppressed (for more details on the LAT ACD see [START_REF] Moiseev | The anti-coincidence detector for the GLAST large area telescope[END_REF]). The Tracker The tracker system of the Fermi-LAT is the central part of the detector, located between the ACD and the calorimeter. The tracker is made of 16 planes, each with a high-Z tungsten converter foil and two layers of silicon strips oriented at 90 degrees to each other.
The tracker measures the path of the electron-positron pair into which the gamma ray converts. More details about the LAT can be found in [START_REF] Atwood | The Large Area Telescope on the Fermi Gamma-Ray Space Telescope Mission[END_REF]. The calorimeter The role of the calorimeter is to measure the energy deposited by the electron-positron pair resulting from the converted gamma ray. The LAT calorimeter is made of 96 CsI(Tl) crystals arranged in 12 columns and 8 layers, with a depth of 8.3 radiation lengths. PIN diodes are attached to each side of the crystals to read out the scintillation light at both ends. The location and the energy deposit along the crystals are given by the ratio and the sum, respectively, of these light signals at the two ends [START_REF] Johnson | The Construction and Performance of the CsI Hodoscopic Calorimeter for the GLAST Beam Test Engineering Module[END_REF]. Data-Processing Pipeline The Fermi-LAT was set to all-sky survey mode after the on-orbit check-out and calibration period was completed in September 2009 [START_REF] Abdo | The on-orbit calibration of the Fermi Large Area Telescope[END_REF]. In this observation strategy, the normal to the front of the instrument is "rocked" to ±50 degrees above and below the orbital plane on alternate orbits. The orbital period is about 96 minutes and the full sky is observed with an almost uniform exposure after two orbits. The all-sky survey is the primary observation strategy, but a pointed observation mode is also supported by the Fermi-LAT. The data-processing pipeline of the Fermi-LAT detector is designed to be prompt. The data are first reduced on board the Fermi-LAT. The online trigger and the LAT software monitor the performance during nominal science data taking. After the trigger decision, on-board software filters are applied to classify events likely to be used for calibration or scientific purposes [73] [26].
The events passing the filters are then included in the LAT data stream and transmitted to the Solid State Recorder for transmission to the ground. The LAT data are downloaded every three hours and the downlinked satellite data are processed in a time-critical manner. This procedure is done on a computer cluster for a fast and effective processing of the data [START_REF] Zimmer | Extending the Fermi-LAT Data Processing Pipeline to the Grid[END_REF]. Analysis Method The standard analysis of the Fermi-LAT data is based on a likelihood framework. The application of the likelihood method to photon-counting experiments in astronomy was introduced in 1979 [START_REF] Cash | Parameter estimation in astronomy through application of the likelihood ratio[END_REF]. This method was successfully adapted and implemented to analyze the data from the EGRET satellite [START_REF] Mattox | The Likelihood Analysis of EGRET Data[END_REF]. The Fermi-LAT standard analysis framework is similar to the one used for the EGRET data. In the field of gamma-ray astronomy, the challenge is to detect the signal on top of the background. Given the observations, a proper model that describes the data is mandatory. A good background model is essential, since it affects the accuracy of the scientific results. The LAT analysis technique uses a three-dimensional counts "map", which contains energy and position information for each event. Figure 2.4a shows an illustration of a 3D map with squared pixels. The emission within a Region of Interest (RoI) is parametrized by the superposition of models describing the different sources. Within the RoI, each gamma-ray source is modeled by a spatial and a spectral component. The likelihood for a set of models describing the data is composed of the likelihoods of the n individual bins in the map as follows:

log L = Σ_{i=1}^{n} log p_i ,  (2.1)

where p_i = λ_i^{n_i} e^{-λ_i} / n_i! is the Poisson probability of observing n_i counts in pixel i when λ_i counts are predicted.
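The binned Poisson log-likelihood of Equation 2.1 can be sketched numerically. The helper below is purely illustrative, not the FermiScienceTools implementation; the function and variable names are my own:

```python
import math

def poisson_loglike(observed, predicted):
    """Binned Poisson log-likelihood, log L = sum_i log p_i (Eq. 2.1),
    with p_i = lambda_i**n_i * exp(-lambda_i) / n_i!.
    The factorial term is evaluated via lgamma for numerical stability."""
    total = 0.0
    for n_i, lam_i in zip(observed, predicted):
        total += n_i * math.log(lam_i) - lam_i - math.lgamma(n_i + 1.0)
    return total
```

As expected, the log-likelihood is maximal when the predicted counts λ_i match the observed counts n_i, which is what a fit of the source parameters exploits.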
The logarithm is taken for computational reasons. Given the instrument limitations and imperfections, source parameters like energy, position and morphology are convolved with the Instrument Response Functions (IRFs). Thus, each model M within the RoI has to account for the IRFs, i.e. the probability to reconstruct an event with measured attributes p′_i for a photon with true attributes p_i. The IRFs are usually derived from Monte Carlo simulations and split into the effective area, the point spread function (PSF) and the energy response. The effective area (A_eff) is the detection probability for a given photon p_i, in units of cm². The energy response is the probability to reconstruct an energy E′_i for an event with true energy E_i, and the PSF is the probability to reconstruct a direction (x′_i, y′_i) for a photon with true direction (x_i, y_i). Suppose a source located in the center of the map (see Figure 2.4a) and three other sources around it. These sources also contribute to the observed counts in the central pixel. The probability to detect an event p′_i originating from a source parametrized by M_j is defined by convolving the model components with the IRFs and integrating over the true photon attributes (Equation 2.2). For the Fermi-LAT analysis, the latest software release, named Pass 8, was used. This reconstruction technique is described in [START_REF] Ackermann | The Fermi Large Area Telescope on Orbit: Event Classification, Instrument Response Functions, and Calibration[END_REF] and gives a better gamma-ray acceptance compared to the previous one [START_REF] Atwood | Pass 8: Toward the Full Realization of the Fermi-LAT Scientific Potential[END_REF]. This allows analyses with energies up to 500 GeV. The gamma-ray acceptance for these software releases is shown in Figure 2.5.
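As a rough numerical illustration of how a source model is folded with the IRFs to predict counts per reconstructed-energy bin, the sketch below folds a toy power-law spectrum with a flat effective area and a Gaussian energy-dispersion matrix. All numbers (energy grids, area, exposure, resolution) are invented stand-ins, not LAT IRFs:

```python
import numpy as np

# Toy forward folding: lambda_j = sum_k D[j, k] * F(E_k) * A_eff * T * dE_k
e_true = np.logspace(0, 2, 50)           # true-energy grid in GeV (assumed)
e_reco = np.logspace(0, 2, 20)           # reconstructed-energy bin centres
flux = 1e-9 * e_true ** -2.0             # toy power-law spectrum, ph / (cm^2 s GeV)
a_eff = 8000.0                           # flat toy effective area, cm^2
t_obs = 1e6                              # toy exposure, s

# Gaussian energy dispersion with 10% resolution, column-normalised so that
# each true-energy bin redistributes its events over the reco bins.
sigma = 0.1 * e_true
disp = np.exp(-0.5 * ((e_reco[:, None] - e_true[None, :]) / sigma) ** 2)
disp /= disp.sum(axis=0, keepdims=True)

d_e = np.gradient(e_true)                # bin widths on the true-energy grid
predicted = disp @ (flux * a_eff * t_obs * d_e)   # expected counts per reco bin
```

The resulting `predicted` vector plays the role of the λ_i entering the Poisson likelihood of Equation 2.1; a real analysis would use the tabulated Pass 8 IRFs instead of these toy responses.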
M_j(p′_i) = ∫ dx dy dE S(E, x, y) × IRF(p′_i, p_i),  (2.2)

When point-like sources are analyzed, a RoI with a typical radius of 15-20 degrees is used (as shown in Figure 2.4b). In order to obtain the best parameter estimation for a specific analysis, a proper background model is crucial. The background model has to account for the emission of all the known gamma-ray sources located within the RoI, as well as the Galactic and isotropic diffuse emission. At Fermi-LAT energies, the diffuse emission dominates the entire high-energy gamma-ray sky, with the highest level of emission along the Galactic plane. Further away from the Galactic plane, the background is dominated by a more isotropic emission. More details on this measurement can be found in [START_REF] Abdo | Spectrum of the Isotropic Diffuse Gamma-Ray Emission Derived from First-Year Fermi Large Area Telescope Data[END_REF][START_REF] Bechtol | The intensity of isotropic diffuse emission measured with the Fermi Large Area Telescope[END_REF]. The Galactic diffuse emission is represented by energy-dependent sky maps, whereas the isotropic diffuse model depends only on the energy. The FermiScienceTools is a likelihood analysis framework developed by the LAT team and provided publicly for data analysis. The latest Galactic and isotropic diffuse models and the corresponding IRFs can be retrieved from the public Fermi-LAT server and used to define the background. Ground-based Detectors At energies above 100 GeV the photon fluxes of astrophysical sources decrease rapidly with energy. In this energy range the detection of gamma rays from space becomes challenging, since detectors with large collection areas are required. To study the very-high-energy sky, ground-based detectors with large collection areas are used.
When high-energy gamma rays reach the Earth's atmosphere, they interact with the atmospheric nuclei, create a cascade of particles and emit Cherenkov radiation (described in Section 2.4.3). Using the atmosphere as a calorimeter for the energy deposition of a very-high-energy gamma ray, detectors on the ground detect the Cherenkov light produced during this process. Imaging atmospheric Cherenkov telescopes reconstruct the direction and energy of gamma rays using large reflective areas, finely pixelized cameras and fast read-out electronics. The flux of charged cosmic rays is much higher than that of gamma rays, making the background suppression challenging. Using stereoscopic measurements with more than one telescope offers an efficient method to distinguish gamma rays from cosmic rays. There are also ground-based experiments in gamma-ray astronomy that detect air-shower particles by recording the Cherenkov light emitted when they pass through water tanks. An example is the High Altitude Water Cherenkov (HAWC) experiment, which consists of an array of 300 water tanks (7.3 m diameter and 4.5 m height). The energy range covered by HAWC is from 100 GeV up to 100 TeV [START_REF] Abeysekara | Sensitivity of the high altitude water Cherenkov detector to sources of multi-TeV gamma rays[END_REF]. This detector is not restricted to night-time observations and operates continuously. Describing the principle of these detectors is beyond the scope of this thesis. The current generation of Cherenkov telescopes in the gamma-ray astronomy field includes the H. E. S. S., VERITAS and MAGIC experiments. The properties of the air showers created when high-energy particles interact in the atmosphere are important to establish methods and variables to discriminate gamma-ray- from cosmic-ray-induced showers. The basic properties of extensive air showers are given next.
Extensive Air Showers High-energy particles interact with atmospheric nuclei and initiate Extensive Air Showers (EAS). An EAS is a cascade of particles created by collisions and decays occurring during the shower development. Depending on the type of the particle initiating the process, one distinguishes EAS of electromagnetic and hadronic origin. Given the rate of cosmic rays (mostly protons, arriving isotropically from outer space) with respect to gamma rays, the majority of EAS are of hadronic origin. The study of the EAS development and of the intrinsic differences between the two EAS types is important for their discrimination. Electromagnetic Showers Electromagnetic showers are primarily initiated by energetic gamma rays or by electrons. The main processes in the development of electromagnetic showers are pair production and bremsstrahlung; ionization losses of electrons become dominant at lower energies. The majority of secondary charged particles are produced via electron-positron pair production in the electric field of a nucleus. Bremsstrahlung emission from electrons in the nuclear electric field is responsible for the creation of further high-energy photons. Therefore, the energy of the primary particle initiating an EAS is redistributed over many particles during the shower development in the atmosphere. The amount of matter traversed by electrons and photons before undergoing one interaction is characteristic of the material. The material-dependent radiation length X_0 is defined as the length scale over which the energy of a particle is reduced to 1/e of its initial value during the shower development. The energy E of a particle after traveling a given distance x is given by:

E(x) = E_0 e^{-x/X_0}  (2.3)

The radiation lengths for electrons emitting bremsstrahlung and for photons undergoing pair production in air are X_0 ≈ 37-38 g cm^-2 and X_{γ,0} = (9/7) X_0, respectively.
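Equation 2.3 can be illustrated directly; the function name is mine, and X_0 is the air value quoted above:

```python
import math

X0_AIR = 37.0  # radiation length in air, g cm^-2 (value quoted in the text)

def energy_after_depth(e0, x, x0=X0_AIR):
    """Remaining particle energy after traversing a column depth x (Eq. 2.3)."""
    return e0 * math.exp(-x / x0)
```

After one radiation length the particle energy has dropped to 1/e of its initial value, as the definition of X_0 requires.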
A simple model describing the shower development is the Heitler model, developed in the 1930s [START_REF] Matthews | A Heitler model of extensive air showers[END_REF]. This model relies on the following simplifying assumptions:

• pair production and bremsstrahlung are the only processes contributing to the development of the electromagnetic cascade

• the radiation lengths X_0 for pair production and bremsstrahlung are equal

• at each step, the energy of each particle is distributed evenly among the particles created in these processes

After n splitting lengths, at the depth x = n X_0 ln 2, the total number of particles is N = 2^n = e^{x/X_0}. Radiation becomes less important than collisional energy loss when the particle energy falls below the threshold for pair production or bremsstrahlung. This energy, at which the bremsstrahlung and ionization losses are equal, is referred to as the critical energy and is E_c = 85 MeV in air. The depth at which the shower reaches its maximum size is:

X_max = X_0 ln(E_0/E_c)  (2.4)

In addition to pair production, bremsstrahlung and ionization loss, other effects such as multiple scattering and the Earth's magnetic field play a role in the shower development, mainly at lower energies. The shower development is centered around the axis of the incoming gamma ray initiating the EAS, preserving the directional information down to ground level. Hadronic Showers Hadronic interactions in the atmosphere are more complex to describe, as many more processes take place during the shower development, each with a different characteristic interaction length. A simple model for hadronic showers based on the Heitler model is given in [START_REF] Matthews | A Heitler model of extensive air showers[END_REF]. In this approach, the atmosphere is assumed to be divided into n layers of equal, fixed thickness.
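Under the Heitler assumptions above, the number of particles at shower maximum and the depth of the maximum follow directly. A minimal sketch (function name mine, constants taken from the text):

```python
import math

def heitler_maximum(e0_mev, ec_mev=85.0, x0=37.0):
    """Toy Heitler cascade: particles double every splitting length X0*ln2
    until they reach the critical energy Ec. Returns the number of particles
    at shower maximum, N_max = E0/Ec, and the depth of the maximum,
    X_max = X0 * ln(E0/Ec)  (Eq. 2.4), in g cm^-2."""
    n_max = e0_mev / ec_mev
    x_max = x0 * math.log(e0_mev / ec_mev)
    return n_max, x_max
```

For a 1 TeV photon this gives roughly 1.2 × 10^4 particles at a depth of about 350 g cm^-2, and X_max grows only logarithmically with the primary energy.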
In the upper part of the atmosphere, arriving protons interact with air molecules via the strong force and create a cascade comprising, most importantly, nucleons and charged and neutral pions (π±, π0). The neutral pions π0 have a mean lifetime of 0.8 × 10^-16 s, hence they decay almost immediately into photons: π0 → γ + γ. The charged pions continue interacting in the atmosphere until their energy falls below the critical energy, E_π < E_π^c. Below this energy, the charged pions (π±), with a mean lifetime of 2.6 × 10^-8 s, decay predominantly into muons: π+ → µ+ + ν_µ, π- → µ- + ν̄_µ. The typical energy above which the charged pions interact rather than decay is about 30 GeV. If the photons resulting from the π0 decay have enough energy, they can initiate an electromagnetic shower. Electromagnetic showers produced in the early phase of the hadronic shower development might be confused with pure electromagnetic showers. However, the lateral and longitudinal developments of hadronic and electromagnetic showers are different (see Figure 2.8 and Figure ??). Hadronic showers are typically larger and more irregular than electromagnetic showers. Thus, the lateral distribution and the irregularities of the EAS are used as the basic discriminants to separate gamma-ray from cosmic-ray showers. Cherenkov radiation First investigated by Pavel Cherenkov, Cherenkov light is the radiation emitted when charged particles move through a dielectric medium at speeds faster than the local phase velocity of light in that medium [START_REF] Grieder | Extensive Air Showers: High Energy Phenomena and Astrophysical Aspects -A Tutorial, Reference Manual and Data Book[END_REF]. Fast-moving charged particles (v > c/n) polarize the surrounding molecules and atoms while traversing a medium with refractive index n. Once the particle has passed, the molecules and atoms relax into their normal state, emitting electromagnetic radiation.
According to Huygens' construction, the wavelets from all points of the particle track are in phase with one another at one particular emission angle. During this process, a coherent wave front is generated at an angle θ with respect to the particle direction. This coherent radiation is the so-called Cherenkov radiation. The basic principle is shown in Figure 2.9. By applying simple geometrical rules, as in Figure 2.9, the opening angle of the Cherenkov light cone for a particle moving at v ≈ c is given by:

θ_c = arccos( c / (n c) ) = arccos( 1/n ),  (2.5)

where n is the refractive index of the medium. The Cherenkov light is emitted at an opening angle of 2θ_c that depends on the energy of the particle and on the refractive index of the medium. As described in Section 2.4, very-high-energy particles initiate an EAS by interacting with the nuclei of the atmosphere. The highly energetic electrons produced during the EAS development travel faster than the local phase velocity of light in the atmosphere and emit Cherenkov light. Since the number of particles is amplified during the EAS development, the emitted Cherenkov light becomes detectable. The Cherenkov radiation creates a "light pool" on the ground with a duration of a few nanoseconds. The refractive index depends on the density of the medium, hence the radius of the Cherenkov light pool depends on the height at which the emission was originally produced and on the detection altitude. The altitude dependence of the refractive index can be parametrized as:

n(h) = 1 + n_0 exp(-h/h_0),  (2.6)

where n_0 = 0.00029 and h_0 = 7250 m (for a hydrostatic, isothermal atmosphere). The superposition of the Cherenkov light emitted during the electromagnetic development of the shower illuminates the ground in a light pool with a radius of 80-150 m. The number of photons produced per unit wavelength and per unit distance is given by the Frank-Tamm formula:

dN / (dx dλ) = (2π α Z² / λ²) (1 - 1/(β² n²(λ)))  (2.7)

where α = 2πe²/(hc) is the fine-structure constant.
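Equations 2.5-2.7 can be combined in a short sketch. The function names are mine, and the Frank-Tamm yield is integrated crudely over the 400-700 nm optical band:

```python
import math

def refractive_index(h_m, n0=0.00029, h0=7250.0):
    """Atmospheric refractive index n(h) = 1 + n0 * exp(-h/h0)  (Eq. 2.6)."""
    return 1.0 + n0 * math.exp(-h_m / h0)

def cherenkov_angle(h_m, beta=1.0):
    """Cherenkov angle theta_c = arccos(1/(beta*n))  (Eq. 2.5), in radians."""
    return math.acos(1.0 / (beta * refractive_index(h_m)))

def photons_per_metre(beta=1.0, n=1.00029, z=1, lam_min=400e-9, lam_max=700e-9):
    """Frank-Tamm formula (Eq. 2.7) integrated over a wavelength band for a
    particle of charge z:
    dN/dx = 2*pi*alpha*z^2 * (1 - 1/(beta^2 n^2)) * (1/lam_min - 1/lam_max)."""
    alpha = 1.0 / 137.036  # fine-structure constant
    factor = 1.0 - 1.0 / (beta ** 2 * n ** 2)
    return 2.0 * math.pi * alpha * z ** 2 * factor * (1.0 / lam_min - 1.0 / lam_max)
```

At sea level this gives θ_c ≈ 1.4° and a yield of a few tens of photons per metre in the optical band, and the angle shrinks with altitude as the refractive index approaches unity, consistent with the light-pool picture above.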
The formula is named after Ilya Frank and Igor Tamm, who shared the 1958 Nobel Prize in Physics with Pavel Cherenkov [START_REF] Nobelprize | The Nobel Prize in Physics[END_REF]. The peak of the Cherenkov emission from an EAS is in the blue to ultraviolet (UV) region, at the edge of the optical band (the UV part is mostly absorbed by the atmosphere) [START_REF] Doering | Measurement of the Cherenkov light spectrum and of the polarization with the HEGRA-IACT-system[END_REF]. The Cherenkov light emitted during an EAS is detected by photomultiplier tubes (PMTs) sensitive to wavelengths between 400-700 nm. In the following, the basic principles of the detection of Cherenkov light from the ground are given. Imaging Atmospheric Cherenkov Telescopes The detection of gamma rays from the ground is possible by detecting the Cherenkov radiation emitted during an EAS. The main challenge is the discrimination between gamma-ray- and hadron-induced showers. Milestones in the development of IACTs were the Whipple experiment, which demonstrated the power of large mirror areas and imaging cameras [START_REF] Krennrich | Stereoscopic observations of gamma rays at the Whipple observatory[END_REF], and the HEGRA (High Energy Gamma-Ray Astronomy) experiment, which demonstrated the power of telescope arrays to improve the angular and energy resolution [START_REF] Daum | First Results on the Performance of the HEGRA IACT Array[END_REF]. In an IACT, the Cherenkov radiation is focused by large reflective areas onto an imaging camera in the focal plane, which is accompanied by fast electronics for read-out and signal processing (the principle is shown in Figure 2.12). The reflective area is made of mirror facets supported by a solid frame on an altazimuth or equatorial mount to track the sources during their diurnal motion. A finely pixelized camera is important to achieve a high sensitivity, as the number of pixels, and in particular the field of view of each pixel, determines the angular resolution of the telescope.
Traditionally, the camera of an IACT uses photomultiplier tubes as pixels. Lately, silicon photomultipliers (SiPMs) are employed as well, for example in the FACT experiment [START_REF] Anderhub | Design and operation of FACTthe first G-APD Cherenkov telescope[END_REF]. The Cherenkov light emitted during a typical 1 TeV shower illuminates an area of 10^5 m² on the ground. A Cherenkov telescope placed anywhere in the light pool can detect the shower. Multiple factors have to be considered to determine the properties of the emitted light. First, the Cherenkov light emitted during an air shower depends on the energy of the primary particle: higher-energy particles have sufficient energy to generate more secondary particles. Second, the number of particles generated during the air shower depends on the altitude of the shower. The refractive index of the atmosphere at high altitudes is smaller, and in the early stages of the shower development there are not many particles above the energy threshold for Cherenkov light production, resulting in a low amount of emitted light. Other factors that affect the detectability of the Cherenkov light are related to light pollution from the ground and the night-sky background, i.e. the light from stars. The gamma-ray energy and direction reconstruction becomes more robust when stereoscopy is employed. The shower direction is reconstructed more precisely when more than one telescope triggers on the same event and the Cherenkov light emitted from the same shower is seen from different angles. The use of stereoscopy results in a larger collection area on the ground, and a large fraction of the background caused by night-sky-background photon fluctuations is suppressed. The currently operating IACTs make use of stereoscopy for the event reconstruction. Currently Operating IACTs The major currently operating IACTs are MAGIC, VERITAS and H. E. S. S. The study at very high energies presented in this thesis is done with the H. E. S. S. instrument, which is covered in detail in Chapter 3.
Observations with IACTs are limited to the night (a low duty cycle of ∼ 10 %, corresponding to moonless and cloud-free conditions). MAGIC experiment The MAGIC (Major Atmospheric Gamma Imaging Cherenkov) experiment is a system of two 17 m diameter telescopes on the Canary Island of La Palma [START_REF] Lorenz | Status of the 17 m MAGIC telescope[END_REF]. The experiment started operating in 2003 with a single 17 m diameter telescope, known as MAGIC I, whose reflective area is arranged in a parabolic reflector design and whose camera, composed of 397 small PMT pixels, has a 3.5° field of view. In 2009, a second telescope (MAGIC II) was added 85 m away from the first one, with a camera of 1039 pixels and a field of view of 3.5°. VERITAS experiment The VERITAS (Very Energetic Radiation Imaging Telescope Array System) is an array of four telescopes in southern Arizona, USA [START_REF] Weekes | VERITAS: the Very Energetic Radiation Imaging Telescope Array System[END_REF]. The telescopes are installed on the site of the Whipple telescope. Each telescope has a 3.5° field of view, a 499-pixel camera and a 12 m reflector. The VERITAS telescopes follow a Davies-Cotton optical design. The VERITAS observatory has been studying the high-energy sky since 2007, detecting gamma rays with energies from 50 GeV up to 50 TeV. Figure 2.13b shows the VERITAS telescopes. Chapter 4 describes observations of B2 1215+30 with the VERITAS and Fermi-LAT experiments. H. E. S. S. experiment The H. E. S. S. array in Namibia comprises five IACTs observing the high-energy sky from the southern hemisphere: four 12 m telescopes arranged in a square configuration and a bigger, 28 m telescope placed in the middle. The H. E. S. S. II telescope, installed on the site in 2012, is the biggest IACT to date. The energy range covered by the H. E. S. S. experiment goes from tens of GeV up to a few hundred TeV. The H. E. S. S.
experiment is covered in more detail in Chapter 3. Future Telescope Arrays At present, a new ground-based gamma-ray observatory is under development [START_REF] Acharya | Introducing the CTA concept[END_REF]. The Cherenkov Telescope Array (CTA) aims to reach a sensitivity improved by a factor of ten compared to the current experiments. The expected CTA differential sensitivity, together with that of the currently running experiments, is shown in Figure 2.14 (taken from the CTA homepage1). CTA is planned to cover the energy range from a few tens of GeV to some hundreds of TeV using telescopes of three different sizes, with a better angular resolution. CTA is designed to cover the full sky from two observation sites: La Palma in the northern hemisphere and Chile in the southern hemisphere. Summary and Conclusions In this chapter the basic principles of the detection of gamma rays from space and from the ground were given. For a direct detection of gamma rays from space, pair-conversion telescopes are sent above the atmosphere. The current gamma-ray space observatory based on this technique is the Fermi-LAT, which observes the sky from 20 MeV up to more than 500 GeV. Energetic gamma rays interact with the atmospheric nuclei and generate extensive air showers, which produce Cherenkov light. The MAGIC, VERITAS and H. E. S. S. experiments detect very-high-energy gamma rays from the ground using arrays of imaging atmospheric Cherenkov telescopes. Given their different locations, the science covered by the current IACTs is diverse and has brought unexpected results to the scientific community. The next generation of Cherenkov telescopes, the Cherenkov Telescope Array (CTA), is under development to achieve a better sensitivity than the current experiments. Chapter 3 The H.E.S.S. Experiment The High Energy Stereoscopic System (H.E.S.S.) is located in the Khomas Highland of Namibia, at an altitude of 1800 m above sea level. The H. E. S. S.
site was chosen, among other reasons, for the quality of its atmosphere and for its proximity to the Southern Tropic, which provides optimal conditions for observations of the Galactic center. Designed to study the very-high-energy sky, the H.E.S.S. experiment detects gamma rays with energies from a few tens of GeV up to hundreds of TeV using five imaging atmospheric Cherenkov telescopes. The main characteristics of the H. E. S. S. telescopes are given in Sections 3.1 and 3.2. The data acquisition and the calibration procedure are described in Sections 3.3 and 3.4. A gamma-hadron separation method based on the Hillas parameters and an alternative relying on template fits with simulated shower shapes are described in Sections 3.5 and 3.5.2 respectively. A possible monoscopic discrimination relying on boosted decision trees is elaborated in Section 3.7. H.E.S.S. Phase-I The steel structure supports a 900 kg finely pixelized camera, positioned at a focal distance f = 15 m from the center of the reflector area. Given the characteristic dish size d of 13 m, the telescopes have f/d ≈ 1.2. The camera consists of 960 photomultiplier tubes (PMTs) with a 0.16° field of view each, arranged in 60 drawers of 16 channels containing all the electronics needed for triggering, signal processing and digitization. The drawers are fitted in a cylindrical structure of 2 m in length and 1.6 m in diameter, giving each camera a 5° field of view. Light-concentrating "Winston" cones are installed in front of each PMT to guide the Cherenkov light onto the central region of the photocathode, where the quantum efficiency of the PMTs is maximal, and to eliminate dead space between the PMTs [START_REF] Welford | High collection nonimaging optics[END_REF]. They also shield against albedo light from the ground. More details are given in [START_REF] Bernlöhr | The optical system of the H.E.S.S. imaging atmospheric Cherenkov telescopes.
Part I: layout and components of the system[END_REF]. H.E.S.S. Phase-II Following more than ten years of successful operation of H. E. S. S., a major camera upgrade of the H. E. S. S. I telescopes started in 2015, with the aim of improving the performance and reducing the failure rate of the ageing systems. During this upgrade, all the components inside the camera except for the PMTs were replaced. The design is motivated by studies performed for the NECTAr camera [START_REF] Naumann | NECTAR: New electronics for the Cherenkov Telescope Array[END_REF]. The Data Acquisition The PMT signal is read out in three channels: one for the analog trigger and two for the high- and low-gain digitization. The sampling of the signal with different gains is done to increase the dynamic range of the instrument. The amplified signal from the PMTs is sent to an analogue memory which samples the signal at a frequency of 1 GHz. The third channel sends the PMT signal to the trigger system, which has two levels. The idea of such a system is to account for the presence of accidental events caused by night-sky background photons in the cameras of the telescopes. To properly account for this effect, the correlation between neighboring pixels is considered. After the trigger decision at camera level, the coincidences between the telescopes are considered. For a more detailed description of the H.E.S.S. data acquisition see [START_REF] Balzer | central data acquisition system[END_REF]. Calibration In the calibration process, the coefficients required to convert the electronic signals registered during the trigger into physical units are determined. Important information needed for the event reconstruction and data analysis is determined and stored during this process. A set of calibration runs is taken during each observation period to ensure that the system is working properly. The list of calibration runs taken regularly with the H. E. S. S.
telescopes includes FlatField, SinglePE and Pointing runs. Camera calibration In the following, the camera calibration procedure is described. A detailed description of the H. E. S. S. camera calibration can be found in [START_REF] Aharonian | Calibration of cameras of the H.E.S.S. detector[END_REF]. The amplitudes of a pixel in the high-gain (HG) and low-gain (LG) channels are calculated using the following formulas:

S_HG = (ADC_HG - P_HG) / γ_HG × FF
S_LG = (ADC_LG - P_LG) / γ_HG × (HG/LG) × FF  (3.1)

The coefficients appearing in Equation 3.1 are described below.

• ADC_HG and ADC_LG are the measured numbers of ADC (Analogue to Digital Converter) counts in the high- and low-gain channels respectively.

• P_HG and P_LG are the noise pedestals of the electronics in HG and LG.

• γ_HG is the gain, the conversion coefficient from ADC counts to photo-electrons.

• FF is the flat-fielding coefficient. It characterizes the measured efficiency of one specific PMT with respect to the mean over the camera.

• HG/LG is the amplification ratio between the high- and low-gain channels.

In order to complete the full calibration process, the calibration coefficients presented below are estimated separately in several steps. Gain Calibration A flashing LED, a stable and controlled light source, is placed in front of the camera to measure the single-photo-electron peak of the PMTs. In order to avoid NSB contamination, the LED is placed within the camera shelter, at a distance of 2 m from the camera. Only the high-gain channel has the resolution required for this measurement. These dedicated SinglePE runs are taken every two nights and serve to measure γ_HG in Equation 3.1. Flat-Field Coefficients The collection and quantum efficiency of the individual PMTs is important information to be measured and stored. This information is needed to convert photo-electrons into incident photons. The procedure is done by taking dedicated FlatField runs.
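Equation 3.1 amounts to a simple per-pixel conversion. A sketch with invented coefficient values (names follow the text; the numbers are not actual H. E. S. S. calibration constants):

```python
def pixel_amplitudes(adc_hg, adc_lg, p_hg, p_lg, gamma_hg, hg_over_lg, ff):
    """Convert raw ADC counts in both channels to photo-electron
    amplitudes following Eq. 3.1."""
    s_hg = (adc_hg - p_hg) / gamma_hg * ff
    s_lg = (adc_lg - p_lg) / gamma_hg * hg_over_lg * ff
    return s_hg, s_lg

# Example with made-up coefficients: a pedestal-subtracted HG signal of
# 900 ADC counts and a gain of 80 ADC counts per photo-electron.
s_hg, s_lg = pixel_amplitudes(adc_hg=1000.0, adc_lg=200.0, p_hg=100.0,
                              p_lg=50.0, gamma_hg=80.0, hg_over_lg=13.0, ff=1.0)
```

The low-gain channel reuses the high-gain conversion coefficient γ_HG, scaled by the measured HG/LG amplification ratio, which is why only the high-gain single-photo-electron peak needs to be measured directly.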
A flashing LED (or a laser for H. E. S. S. II) uniformly illuminates the camera and the signal of each PMT is recorded. The FlatField coefficients FF are then estimated for each individual pixel by comparing the individual PMT responses to the average across the camera. Pedestal Calibration The baseline of the electronics is referred to as the noise pedestal. It must be subtracted from the measured ADC values to calculate the Cherenkov signal, and therefore requires a precise measurement in order not to bias the determination of the Cherenkov signal. The pedestal is the response of the camera in the absence of light. The mean and width of the electronic pedestal are determined by taking dedicated runs with the camera lid closed, to avoid noise from the NSB. Since the level of NSB, the atmospheric temperature and other instrumental effects change from one night to another, this process is done for each observation run. Pointing Correction The direction reconstruction of individual gamma rays is directly affected by the orientation of the telescopes. Dedicated pointing runs are taken in order to calibrate the camera orientation by mapping the positions of known stars from catalogs onto the camera of the telescope. A correct pointing model is of fundamental importance, since it affects the whole reconstruction and, as a consequence, the analysis results. The mirror planes of the H. E. S. S. I telescopes are equipped with two CCD cameras: the LidCCD (at the centre) and the SkyCCD (at an offset of three meters, parallel to the optical axis). During the pointing calibration the camera lid is kept closed and only the caps of the camera LEDs are opened, as they serve as reference points for the camera position. Images of bright stars are taken simultaneously with the LidCCD and the SkyCCD and are compared with respect to the LEDs, in order to determine the discrepancies between the expected and measured star positions relative to the known LED positions.
This helps to measure the distortions of star images on the camera lid and to obtain a properly corrected pointing model, which is of fundamental use in the analysis. Another, less robust, method is to apply a pointing correction while taking observations, which is described in [START_REF] Braun | Improving the Pointing Precision of the H[END_REF].

Optical efficiency

The quality of data taken on a specific source depends on several criteria, which can be instrument related and/or due to external factors. The reconstructed energy is calculated from the intensity of the shower in the camera, which depends on the optical efficiency of the reflective area. The accumulated data are often spread over several months or even years, showing the importance of the long-term stability of the optical system. The optical efficiency of each telescope is measured using the known properties of muons. As muons interact minimally with the atmosphere, they have a roughly constant speed and therefore a constant Cherenkov angle, which results in a ring-like image on the camera. The velocity of the muon defines the radius of the ring, and this, along with its impact point on the ground, gives the number of photons in the muon ring. Hence, using muons to determine the optical efficiency of the telescopes is a very robust method. By definition, the muon efficiency is the ratio of detected to predicted photons. The number of detected photons is affected by a set of factors such as the mirror reflectivity, the Winston cones and the PMT efficiencies. For each period, the calculated muon efficiency is plotted together with the mean (red) and the one σ error (green). Figure courtesy [START_REF] Chalme-Calvet | Muon efficiency of the H.E.S.S. telescope[END_REF].

Atmospheric Monitoring

The H. E. S. S. site is far from city lights (100 km from the closest city), but weather conditions can affect the observations. The quality of the atmosphere affects the density of Cherenkov photons through absorption.
In order to take data under optimal conditions and reduce the systematic effects at analysis level, additional facilities are provided on the site. Each telescope is equipped with a radiometer, which points in the same direction as the telescope to monitor the sky temperature, the humidity level and the clouds crossing the telescope field of view, but not the altitude of the clouds. Other full-sky weather information is provided by the weather station and the scanning radiometer installed on the site. The radiometer gives detailed information about the weather and the cloud coverage. The shift crew has the necessary weather monitoring information in the control room, helping to decide whether the atmospheric conditions are good enough to start the observations and take good quality data.

Reconstruction Techniques

The main challenge of the reconstruction in gamma-ray astronomy is the high background rate [START_REF] Ohm | γ/hadron separation in very-highenergy γ-ray astronomy using a multivariate analysis method[END_REF]. The basics of the reconstruction techniques used, the Hillas and the Model reconstruction, are given in the following.

Hillas Reconstruction

The shape of the images on the camera can be used to separate gamma-ray from hadron initiated showers. Given the main properties of extensive air showers (discussed in Section 2.4), gamma-ray induced showers have a more regular shape in the camera compared to hadron showers.

Model Reconstruction

An alternative method, called the Model reconstruction, was adapted for the stereoscopic reconstruction of the H. E. S. S. data by de Naurois & Rolland [START_REF] De Naurois | A high performance likelihood reconstruction of γ-rays for imaging atmospheric Cherenkov telescopes[END_REF], based on the work of Le Bohec for the CAT experiment [START_REF] Bohec | A new analysis method for very high definition imaging atmospheric Cherenkov telescopes as applied to the CAT telescope[END_REF].
Based on image fitting with a semi-analytical model template, it is a powerful separation technique and yields improved sensitivity. With the installation of H. E. S. S. II, this method has been successfully adapted for monoscopic reconstruction. The finely pixelized cameras of IACTs enable the use of a likelihood reconstruction, where the recorded shower images are compared to shower images predicted by a semi-analytical model of the Cherenkov light distributions induced by electromagnetic showers. The shower images are parametrized as a function of energy, primary interaction depth, impact distance and direction, which results in a better reconstruction and gamma-hadron separation compared to other methods. The work performed here with H. E. S. S. uses the Model Analysis pipeline. In the following, the basic principles of the method are presented.

Shower Image Model

A proper and precise shower model to predict the Cherenkov photon density emitted during electromagnetic showers is highly important. A semi-analytical shower model can be constructed by fitting a template function to simulated showers. The longitudinal and lateral distributions of the charged particles in an air shower depend on the gamma-ray energy and the primary interaction altitude. For a given energy and altitude, the longitudinal, lateral and angular air shower profiles under different conditions are modeled using Monte Carlo simulations from KASKADE [START_REF] Kertzman | Computer simulation methods for investigating the detection characteristics of TeV air Cherenkov telescopes[END_REF]. The average number of charged particles produced during an electromagnetic air shower as a function of the distance from the primary interaction point is shown for different gamma-ray energies in Figure 3.7.
Following Greisen's formula, the longitudinal distribution of charged particles is modeled as follows:

N_e(y, t) = (a/√y) × exp[ t × (1 − b/(b − 1) × ln s) ] + (2 − a/√y) × exp(−t),    (3.2)

where t is the distance from the first interaction point in units of radiation lengths X_0, y is the primary photon energy in terms of the critical energy, y = ln(E_prim/E_crit), and s represents the shower age, which is given by:

s = b / (1 + c × (b − 1)/t)    (3.3)

By definition, the shower age is s = 0 at the first interaction and reaches s = 1 at the shower maximum. The parameters c and b are the depth of the shower maximum measured from the first interaction and the scaling factor of the shower development, respectively. The best-fit values obtained from the simulations are given below:

a = 1.05 + 0.033 × y,  b = 2.66,  c = 0.97 × y − 1.32    (3.4)

The fit is represented by the solid red line in Figure 3.7 along with the simulations. Similarly, the lateral air shower profiles are considered in the semi-analytical model as well. The Cherenkov light density recorded by the telescope camera is calculated from the following integral:

I(x_T, y_T) = ∫dz ∫dE × dN_e/dE(t, E) × dt/dz(y)
  × ∫du F_u(u(E, s)) × ∫dφ/2π ∫dX_r dY_r F_XY(X_r, Y_r, E, s, u)
  × ∫dφ_ph ∫dλ/λ² × d²n_γ/(cos θ dz dλ) × exp(−τ(z, λ)) × Q_eff(λ)
  × Col(z, X_r, Y_r, u, φ, φ_ph).    (3.5)

Each line in the integral represents one element of the electromagnetic shower development: the longitudinal development and position of the electrons, the direction of the electrons, the lateral position of the electrons, the Cherenkov photon angles and distributions, and the camera response.
The terms in Equation 3.5 are:
• dN_e/dE(t, E), the longitudinal distribution of charged particles in the shower
• F_u(u(E, s)), the normalized angular distribution of particles as a function of the energy and shower age
• F_XY(E, s, u), the normalized lateral distribution of particles as a function of energy, age and rescaled angular direction of the particles
• 1/λ² × d²n_γ/(cos θ dz dλ), the Cherenkov photon production rate
• exp(−τ(z, λ)), the atmospheric absorption
• Q_eff(λ), the detector quantum efficiency
• Col(z, X_r, Y_r, u, φ, φ_ph, x_T, y_T), the average geometrical collection efficiency, which also depends on the incident parameters of the gamma ray

The integrals correspond to the:
• integral over atmospheric altitude z or depth t to account for the longitudinal development of the shower
• integral over the energy E of the electron/positron in the shower
• integral over the electron direction in the coordinate system of the camera
• integral over the electron/positron positions with respect to their directions in X_r and Y_r, the lateral coordinates in the shower frame in units of radiation lengths
• integral over the wavelengths of the Cherenkov emission

Additional instrument and electronics related effects such as the instrument point spread function, the trigger response and the integration time are taken into account. These effects, together with the geometric light collection efficiency, are simulated and stored in look-up tables over a wide range of different parameters, i.e. zenith angles, impact distances, energies and interaction depths. At analysis level, the tables are interpolated over these parameters. Two-dimensional shower images in the camera frame are produced from this procedure. Figure 3.8 shows two examples of Model showers falling at different distances from the telescope. The raw shower images are then compared with the shower model template from the semi-analytical model on a pixel-by-pixel basis.
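The longitudinal profile of Equations 3.2–3.4 can be evaluated numerically. The sketch below is illustrative only (it is not part of the Model Analysis code) and assumes a critical energy in air of about 83 MeV:

```python
import math

def shower_age(t, c, b=2.66):
    """Shower age s of Eq. 3.3: s = 0 at the first interaction, 1 at shower maximum."""
    return b / (1.0 + c * (b - 1.0) / t)

def n_electrons(t, E_prim_GeV, E_crit_GeV=0.083):
    """Longitudinal charged-particle profile N_e(y, t) of Eq. 3.2.

    t is the depth from the first interaction in radiation lengths.
    E_crit_GeV (~83 MeV in air) is an assumed value.
    """
    y = math.log(E_prim_GeV / E_crit_GeV)
    a = 1.05 + 0.033 * y          # best-fit parameters of Eq. 3.4
    b = 2.66
    c = 0.97 * y - 1.32
    s = shower_age(t, c, b)
    return (a / math.sqrt(y)) * math.exp(t * (1.0 - b / (b - 1.0) * math.log(s))) \
           + (2.0 - a / math.sqrt(y)) * math.exp(-t)
```

Note that N_e tends to 2 as t → 0, i.e. the shower starts from the electron-positron pair produced at the first interaction, and peaks near t = c, the depth of the shower maximum.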
A likelihood minimization procedure gives the best parameters from a log-likelihood fit, which determine the properties of the incoming gamma rays.

Model Variables

To quantify the deviation of the gamma-ray shower from the template shower, i.e. the quality of the fit, a parameter called Goodness is determined. This quantity is calculated from the log-likelihood values in the pixels i as follows:

G = Σ_i [ ln L(s_i | µ_i) − ⟨ln L⟩|_{µ_i} ] / √(2 × NdF),    (3.6)

where ln L(s_i | µ_i) represents the log-likelihood to observe a signal s_i in photo-electrons under the hypothesis of a signal µ_i, ⟨ln L⟩ is the expected value of ln L if there are µ_i hypothesized photo-electrons, and NdF is the number of degrees of freedom. Unlike the Goodness, which is calculated for the whole camera, the Shower Goodness (SG) is calculated for the pixels attributed to the shower. For each telescope, the Shower Goodness is determined from the pixels of the shower core, using Equation 3.6. The Mean Scaled Shower Goodness is averaged over all telescopes t:

MSSG = Σ_t [ (SG_t − ⟨SG⟩) / σ_SG ] / √N_t,    (3.7)

where N_t is the number of telescopes and σ_SG is the width of the SG distribution determined from Monte Carlo simulations. The Background Goodness is another parameter for event selection, calculated from Equation 3.6 for the pixels outside the shower only. Fluctuations caused by the NSB are also taken into account by an NSB Likelihood variable for the case µ_i = 0. Another parameter with significant separation power, the DirectionError, is calculated from the uncertainty of the fit. Cutting on the direction uncertainty distribution helps to keep the events with the best angular resolution. The PrimaryDepth, the depth of the first interaction point, is another parameter used to discriminate between gamma and hadron-induced showers, as electromagnetic showers interact later in the atmosphere compared to hadron-induced showers. Cuts based on these parameters are used in the analysis of H. E. S. S. data.
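As a toy transcription of Equation 3.7 (the function name and the example values are hypothetical, not H. E. S. S. software or data):

```python
import math

def mean_scaled_shower_goodness(sg_values, sg_mean, sg_width):
    """Mean Scaled Shower Goodness (Eq. 3.7).

    sg_values : Shower Goodness of each participating telescope
    sg_mean   : expected mean <SG> from Monte Carlo simulations
    sg_width  : width sigma_SG of the SG distribution from simulations
    """
    n_tel = len(sg_values)
    scaled = [(sg - sg_mean) / sg_width for sg in sg_values]
    return sum(scaled) / math.sqrt(n_tel)

# Four telescopes with SG values close to the simulated expectation -> MSSG near 0
mssg = mean_scaled_shower_goodness([0.1, -0.2, 0.05, 0.0], sg_mean=0.0, sg_width=1.0)
```

Gamma-like events cluster around MSSG ≈ 0, while background events populate the tails, which is what makes this variable useful for event selection.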
Their cut values depend on the analysis method, which is presented next.

Model Analysis

After defining variables with powerful separation power, they are optimized for the analysis of different astrophysical sources. Since the trigger system of H. E. S. S. supports monoscopic and stereoscopic triggers, different event reconstructions have been developed within Model Analysis. For Phase I of H. E. S. S., a trigger required the participation of at least two of the small telescopes, named CT1, CT2, CT3 and CT4. Since the experiment entered its Phase II, different observation strategies are supported by the array. The array can be split, with H. E. S. S. II (CT5) and CT1-4 taking data independently, or CT1-5 can observe in Hybrid mode. The monoscopic (hereafter Mono) trigger uses only events from CT5. The stereoscopic (hereafter Stereo) reconstruction uses CT1-4 observations, with or without the CT5 telescope. Observations including CT5 require it to trigger together with at least two of the CT1-4 telescopes (hereafter Hybrid). In the following, the different analyses for the H. E. S. S. I and H. E. S. S. II eras are described.

H. E. S. S. I

Only events with at least two participating telescopes (hereafter the telescope multiplicity) are accepted for reconstruction. A threshold on the number of photo-electrons for each event in the camera is set to remove faint events caused by noise or NSB. Additional cuts are applied to reject events falling on the camera edge. This is done by cutting on the nominal distance, i.e. the distance between the camera center and the center of gravity of the shower, at 2 degrees. Another cut, applied on the primary interaction depth, i.e. the amount of atmosphere traversed before a particle initiates an electromagnetic EAS, removes showers produced by electrons. Motivated by the diversity of astrophysical sources and their emission spectra, different cut parameters have been developed for H. E. S. S. I data.
Table 3

Source detection and background estimation

Even after the set of cuts introduced above is applied, there is still a relatively high level of background contamination. The main background consists of misclassified events, i.e. gamma-like events from hadron and electron induced showers. For source detection, it is important to have a methodology for the background estimation. The background is estimated by counting events which are reconstructed in a region or regions of the sky where no gamma-ray sources are expected. A number of approaches have been suggested, as discussed below. The standard method to estimate the background is to define the source region as ON and the background region as OFF. A proper method for defining the OFF background is essential for good and reliable results. The number of events attributed to a source, referred to as the number of excess events, is the difference between N_ON and the background N_OFF:

N_excess = N_ON − α N_OFF,    (3.8)

where α is a normalization factor that takes into account the exposure times, the region sizes and the detector responses in each corresponding region. The normalization factor can be estimated using the Reflected Region or the Ring Background method [START_REF] Berge | Background modelling in very-high-energy γ-ray astronomy[END_REF]. This work makes use of both methods to estimate the background. In the Ring Background method, an annulus region around the source is selected to estimate the OFF background, as shown in Figure 3.10b. Since the ring covers areas with different offsets from the observation position, the camera acceptance to gamma rays has to be taken into account while calculating α. For extragalactic sources this is a straightforward calculation, while complex regions like the Galactic Centre require the size of the ring to be adapted. This method allows the spatial distributions of the excess and the significance to be represented in two-dimensional sky maps.
The application of the Reflected Region method requires that the observations are taken with the source of interest deliberately offset from the center of the camera by a given angular distance, known as "wobble" mode. Typically the offset angles are chosen between 0.5 and 1.0 degrees, considering the instrument field of view and the broadening of the PSF at large offset angles. Multiple OFF regions n_OFF are taken around the camera center, with the same offset as the ON region. To avoid contamination from the source, a circular region around the ON region is excluded from the background estimation. To compensate for the excluded ON regions, wobble observations with alternating positive and negative offsets in right ascension (R.A.) and declination (Decl.) are taken. The size of the excluded ON region depends on the source and on the type of analysis chosen. For point-like sources, the radius of the excluded ON region is 0.1 degrees for the Stereo analysis and 0.4 degrees for Mono. With this method, where the ON and OFF regions have the same wobble offset, the background can be estimated during each observation. The normalization factor α in Equation 3.8 is simply 1/n_OFF. Following Eq. 17 from Li & Ma [START_REF] Li | Analysis methods for results in gamma-ray astronomy[END_REF], the statistical significance of the excess is:

S = √2 { N_ON ln[ (1 + α)/α × N_ON/(N_ON + N_OFF) ] + N_OFF ln[ (1 + α) × N_OFF/(N_ON + N_OFF) ] }^{1/2}.    (3.9)

The background subtraction also affects the energy range of the measured spectrum. The intrinsic source properties are provided by the measured spectrum, which gives insights into the acceleration power. The method used in this work to measure the energy spectrum with H. E. S. S. data is described next.

Spectrum

To study the high-energy emission from a given source, an accurately reconstructed gamma-ray energy is important. In astrophysics, the energy spectrum refers to the source differential flux as a function of energy.
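Equations 3.8 and 3.9 translate directly into code; the following is a sketch with hypothetical event counts, not the H. E. S. S. analysis software:

```python
import math

def excess(n_on, n_off, alpha):
    """Number of excess events (Eq. 3.8)."""
    return n_on - alpha * n_off

def li_ma_significance(n_on, n_off, alpha):
    """Statistical significance of an excess, Eq. 17 of Li & Ma (Eq. 3.9)."""
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1.0 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0) * math.sqrt(term_on + term_off)

# Hypothetical Reflected Region run with 7 OFF regions -> alpha = 1/7
alpha = 1.0 / 7.0
n_excess = excess(120, 350, alpha)       # 120 - 50 = 70 excess events
sig = li_ma_significance(120, 350, alpha)
```

With seven OFF regions of the same size and offset as the ON region, α reduces to 1/n_OFF, as stated above.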
The measured energy spectrum gives information about the acceleration processes responsible for powering particles up to very high energies. Telescope arrays built on the ground must determine the gamma-ray energy based on the sampled density of the Cherenkov light, which is affected by different observation conditions, i.e. zenith angles, off-axis angles, optical efficiency and other instrument limitations. The observational conditions, such as the zenith angle, off-axis angle and optical efficiency, are denoted as C in the following. To correct for these effects, it is important to know the effective area and the angular resolution of the detector as a function of C. The effective area A(E, C) is related to the detector acceptance, the probability to detect a particle as a function of C, impact parameter and energy. For a gamma ray with a fixed energy, the effective area A(E, C) is calculated as a function of the zenith angle, off-axis angle and optical efficiency by integrating the detector acceptance over the impact points on the ground (transforming a probability into an area) and is stored in multidimensional tables. Thus, depending on the observation conditions, the tables allow interpolation over the whole parameter space. The energy resolution R(E′, E, C) is the probability density of reconstructing an event energy E′ given the true energy E. Both the effective area and the energy resolution are calculated from simulations. Figure 3.11a shows an example of an energy resolution table in Model Analysis. The expected number of events n_γ in a given energy bin [E′_1, E′_2] is calculated as follows:

n_γ = ∫_{E′_1}^{E′_2} dE′ ∫_0^∞ R(E′, E, C) × A(E, C) × φ(E) dE,    (3.10)

where φ(E) is the true flux of the source. For a given energy bin with n_γ gamma rays and n_h hadron events described by Poisson statistics, a likelihood minimization procedure is performed. The minimization fit determines the spectrum parameters that best describe the distribution of events.
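The folding of Equation 3.10 can be illustrated numerically with toy ingredients; the constant effective area, the Gaussian energy resolution and all numerical values below are purely illustrative assumptions, not H. E. S. S. response functions:

```python
import math

def expected_counts(e1, e2, flux, area, resolution, n_grid=200):
    """Crude numerical evaluation of Eq. 3.10: expected counts in the
    reconstructed-energy bin [e1, e2], folding the true spectrum with
    the effective area and the energy resolution.

    flux(E)             : true differential flux
    area(E)             : effective area
    resolution(Erec, E) : density of reconstructing Erec given true E
    """
    total = 0.0
    # log-spaced grid in true energy, 0.1-100 TeV (a sketch)
    e_true = [10 ** (-1 + 3 * i / n_grid) for i in range(n_grid)]
    for i in range(n_grid - 1):
        de = e_true[i + 1] - e_true[i]
        e = 0.5 * (e_true[i] + e_true[i + 1])
        # integrate the resolution over the reconstructed-energy bin
        m, p = 50, 0.0
        for j in range(m):
            erec = e1 + (j + 0.5) * (e2 - e1) / m
            p += resolution(erec, e) * (e2 - e1) / m
        total += p * area(e) * flux(e) * de
    return total

# Toy ingredients: power-law flux, flat effective area, 10% Gaussian resolution
flux = lambda E: 1e-11 * E ** -2.6                      # cm^-2 s^-1 TeV^-1
area = lambda E: 1e9                                    # cm^2, constant
res = lambda erec, e: math.exp(-0.5 * ((erec - e) / (0.1 * e)) ** 2) \
                      / (0.1 * e * math.sqrt(2 * math.pi))
n = expected_counts(1.0, 2.0, flux, area, res)          # expected rate in counts/s
```

This is the quantity the forward-folding fit compares to the measured counts per bin when adjusting the spectral parameters.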
Additional information like the parameter uncertainties and the covariance matrix between the parameters is also determined. The spectrum is calculated using a "forward-folding" method, i.e. the fit is performed assuming a shape for the spectrum of the source. The spectral shapes of different sources, motivated by Fermi acceleration, are approximated by different functions. The simplest spectral shape is given by a power-law function:

dN/dE = N_0 (E/E_0)^{−Γ},    (3.11)

where N_0, E_0 and Γ are the normalization, the reference energy and the spectral index. After their mathematical expressions are given, a physical interpretation is important. The simplest case of a power-law spectral shape indicates that the differential flux changes (increasing or decreasing depending on the value of Γ) as a function of E in a manner that must be related to the acceleration and radiation mechanisms at play in the source. Other spectral shapes are found to better describe the energy spectrum of some sources; the ones used in this work are presented below. In case of curvature in the measured spectrum, a log-parabola function is used:

dN/dE = N_0 (E/E_0)^{−α − β ln(E/E_0)}.    (3.12)

A power law with an exponential cut-off at E_c is popular as well:

dN/dE = N_0 (E/E_0)^{−Γ} exp(−E/E_c).    (3.13)

At the analysis level, the energy threshold for the spectrum is calculated from the effective area. For the H. E. S. S. I analysis, the energy threshold for each event is set to 10% of the maximum effective area. For the Mono analysis the threshold is set at 25% of the maximum effective area. However, these values can be changed for specific sources and different sky regions. After the spectrum of a given astrophysical source is measured, other physical properties can be studied. The lightcurve, which provides information about the flux evolution with time, is described next.
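The three spectral shapes of Equations 3.11–3.13 can be written as simple functions; this is an illustrative sketch and any parameter values used with it are hypothetical:

```python
import math

def power_law(E, N0, E0, gamma):
    """dN/dE for a power law (Eq. 3.11)."""
    return N0 * (E / E0) ** (-gamma)

def log_parabola(E, N0, E0, alpha, beta):
    """dN/dE for a log-parabola (Eq. 3.12)."""
    return N0 * (E / E0) ** (-alpha - beta * math.log(E / E0))

def power_law_cutoff(E, N0, E0, gamma, Ec):
    """dN/dE for a power law with exponential cut-off at Ec (Eq. 3.13)."""
    return N0 * (E / E0) ** (-gamma) * math.exp(-E / Ec)
```

Note that at E = E_0 all three shapes reduce to N_0 (up to the exponential suppression in the cut-off case), which is why E_0 is called the reference energy.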
Lightcurve

The term lightcurve commonly denotes the time evolution of the integrated flux in a given energy range. It is based on the principles described above and uses the best-fit spectrum values to calculate the integral flux in a given energy range, or above a given energy. The accuracy of the lightcurve is important when studying the flux evolution of variable sources like active galaxies, which are known to undergo flaring activities. The lightcurve also has to be sensitive when investigating TeV flux variability of other sources like the Crab Nebula, which is found to be variable in the GeV energy range. In the method presented here, the integral flux is determined via a likelihood minimization. For each time bin, the excess number of events is defined by Equation 3.8. The spectral index is kept fixed to the value obtained in the spectrum determination and the integral flux is calculated in different time bins. The integral flux above 1 TeV is calculated as follows:

Φ(E > 1 TeV) = ∫_{1 TeV}^∞ (dN/dE) dE,    (3.14)

where the spectral shape dN/dE varies depending on the source properties. This method is also used when performing the analysis in short time bins, if there are enough statistics. The systematic error on the flux for H. E. S. S. is estimated to be 20% [START_REF] Aharonian | Observations of the Crab nebula with HESS[END_REF]. Other methods have been developed, e.g. the Transient Analysis for variability studies. These tools are described in [START_REF] Bernhard | The The Variable High Energy γ-Ray Sky with H.E.S.S. & Towards a Calibration Unit for CTA FlashCam[END_REF] and they are used as a cross-check for the H. E. S. S. II lightcurve presented in Chapter 6.

Multivariate Analysis

Discriminating signal events from the vast number of background events requires sophisticated analysis techniques. If an event is characterized by a number of independent variables, it can be categorized using a MultiVariate Analysis (MVA).
The MVA used for this analysis is implemented in the Toolkit for MultiVariate Analysis (TMVA), a software package distributed with ROOT [START_REF] Hoecker | TMVA: Toolkit for Multivariate Data Analysis[END_REF]. TMVA includes different algorithms for MVA such as neural networks, Likelihood classifiers or Fisher discriminants, which in general have a very similar working scheme. Boosted Decision Trees (BDT) are an MVA method commonly used in high-energy physics. Like all MVA methods, a BDT combines a set of input variables into one single discriminant. The strength of the BDT is its capability to exploit nonlinear correlations of the input variables and its high efficiency in ignoring input variables with low or no separation power. The BDT algorithm adapted for the H. E. S. S. I data proved to be powerful for discriminating gamma rays from background hadron-induced showers (see [START_REF] Ohm | γ/hadron separation in very-highenergy γ-ray astronomy using a multivariate analysis method[END_REF]). In the following, the basic principles of a BDT and the adaptation of this method for the monoscopic reconstruction of H. E. S. S. II events are described.

Boosted Decision Trees

BDTs provide a powerful machine learning algorithm for event classification. A decision tree is based on a binary split criterion which classifies events as signal or background. The classification starts at the first node with a binary decision taken based on the most discriminating input variable (see Figure 3.12). Subsequently, the classified events undergo further binary splits based on the most discriminating remaining input variables, until a maximum tree depth is reached. The input variables can be ranked by discriminating power using, for example, the Gini index, which is based on the signal purity p after the cut (p = N_S/(N_S + N_B)). It is calculated as p(1 − p) and the optimal cut is found by minimizing it. The most powerful input variable is the one with the lowest Gini index.
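A minimal toy version of the Gini-based node split described above (a simplified single-sided split with hypothetical data, not the TMVA implementation):

```python
def gini_index(n_signal, n_background):
    """Gini index p(1 - p) with purity p = N_S / (N_S + N_B)."""
    total = n_signal + n_background
    if total == 0:
        return 0.0
    p = n_signal / total
    return p * (1.0 - p)

def best_cut(signal, background, candidate_cuts):
    """Scan candidate cut values and keep the one minimizing the Gini
    index of the events passing the cut (x < cut)."""
    best = None
    for cut in candidate_cuts:
        n_s = sum(1 for x in signal if x < cut)
        n_b = sum(1 for x in background if x < cut)
        g = gini_index(n_s, n_b)
        if best is None or g < best[1]:
            best = (cut, g)
    return best

# Toy variable: signal concentrated at low values, background at high values
sig_events = [0.1, 0.2, 0.3, 0.4]
bkg_events = [0.8, 0.9, 1.0, 1.1]
cut, g = best_cut(sig_events, bkg_events, candidate_cuts=[0.5, 0.7, 1.2])
```

A pure selection (p = 0 or p = 1) yields a Gini index of 0, while the worst case, an even signal/background mixture, yields the maximum of 0.25.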
This phase-space decision is sequentially repeated at each node, until each event is well positioned in a classification parameter space. For their application as a signal-background discriminant in the H. E. S. S. analysis, the BDT is trained with Monte Carlo data. Since a single decision tree is prone to statistical fluctuations in the training sample, a forest of decision trees is grown (boosting), i.e. the classification process is repeated many times. After each boost, misclassified events from the previous tree are multiplied by a weight α. This is called the AdaBoost or adaptive boost method, and α is calculated as follows:

α = (1 − ε)/ε,    (3.15)

where ε is the fraction of misclassified events. The output of the BDT, ζ, is calculated from the following formula:

ζ(x) = (1/N_Trees) Σ_i ln(α_i) h_i(x),    (3.16)

where N_Trees is the total number of trees in the forest, and h_i and α_i are the classifier response and the boost weight for each tree, respectively. The classifier response h_i delivers −1 for background-like and +1 for signal-like events, hence the values of ζ are distributed between −1 and +1.

BDT Settings

The BDT implementation in ROOT is distributed with default parameters. In general they are optimized for stable training, but they need to be understood and checked as they might cause other problems, e.g. overtraining. A stable set of parameters found for this thesis work is described below:
• NTrees: the number of trees in the forest is 200, a compromise between separation performance and processing power. The larger this number, the larger the computing time needed for the classification.
• Gini Index: the method used for ranking the variables (see text).
• MaxDepth: the maximum depth of the decision trees allowed is 500.
• nCuts: to find the optimal cut value for each event variable, the parameter space is scanned in 100 steps. This is to find the optimal cut in each node splitting.
• PruneMethod: tree pruning is a process to eliminate unimportant nodes. This work uses the CostComplexity algorithm, which compares the cost and the additional gain in classification performance in case of further splitting below a certain node. The default algorithm parameters are used.

Monoscopic reconstruction

A multivariate analysis relying on BDTs is an alternative method that provides powerful gamma/hadron separation for H. E. S. S. data. The multivariate analysis method was tested on the H. E. S. S. I data and was also implemented in the ParisAnalysis framework as an additional analysis tool for the users. Observations with H. E. S. S., but also with high-energy instruments in general, are prone to large systematic effects, so different reconstruction techniques are developed. Cross-checking results with two different analysis frameworks has become a mandatory procedure within H. E. S. S. Besides ParisAnalysis, which is based on Model Analysis, another analysis framework called HAP works with an independent calibration and event reconstruction scheme [START_REF] Parsons | A Monte Carlo template based analysis for air-Cherenkov arrays[END_REF]. The MVA methods are widely used in the HAP framework. For the study presented in the following, the MVA with the BDT was adapted and implemented for monoscopic reconstruction in the ParisAnalysis framework. The analysis of H. E. S. S. II data using solely information from CT5 is challenging due to the large systematic effects, especially at low energies. This motivated the refinement of the monoscopic identification with the BDT, planned for a later application to lower the energy threshold on the Crab Nebula, the main subject of this thesis with the H. E. S. S. experiment. After describing the BDT training procedure, the main input variables used for the signal/background event separation and the corresponding results are given.
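Before turning to the training, the boosting quantities of Equations 3.15 and 3.16 can be sketched as follows (a toy illustration with hypothetical values, not the TMVA code):

```python
import math

def boost_weight(misclassified_fraction):
    """AdaBoost weight alpha = (1 - eps) / eps (Eq. 3.15)."""
    eps = misclassified_fraction
    return (1.0 - eps) / eps

def bdt_output(responses, alphas):
    """Forest output zeta(x) = (1/N_Trees) * sum_i ln(alpha_i) * h_i(x) (Eq. 3.16).

    responses : h_i(x) of each tree, -1 (background-like) or +1 (signal-like)
    alphas    : boost weight of each tree
    """
    n_trees = len(responses)
    return sum(math.log(a) * h for h, a in zip(responses, alphas)) / n_trees

# Toy forest of three trees, each with 20% misclassified training events
alpha = boost_weight(0.2)                      # (1 - 0.2) / 0.2 = 4.0
zeta = bdt_output([1, 1, -1], [alpha] * 3)     # two signal-like votes out of three
```

A well-classified tree (small ε) gets a large ln(α) and hence a large vote in the forest output, which is the essence of adaptive boosting.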
Training of the BDT

A BDT must be trained with events of known type and classification, in this case signal and background event samples. The signal sample consists of Monte Carlo gamma events simulated with a power-law spectrum with a spectral index of 3, using the latest software of ParisAnalysis. The generation of Monte Carlo events was done at offset angles of 0.5 degrees, zenith angles between 0 and 25 degrees and azimuth angles of 180 degrees. This restriction limits the computing time while checking the validity of the method for a given acceptance range, before it is expanded to the whole dynamic range of H. E. S. S. The background events are taken from the OFF regions of observations of PKS 2155-304, a distant (z = 0.116) extragalactic BL Lac source continuously monitored by H. E. S. S. As an extragalactic source it has the advantage of a uniform background distribution across the camera of the telescope and no contamination from the diffuse emission of the Galactic plane. A run selection is done to match the Monte Carlo simulations: only runs with zenith angles between 15 and 25 degrees and offset angles of 0.5 degrees from the camera center were used. To avoid problematic events which cannot be well parametrized, both samples are filtered before the training. For instance, a cut on the nominal distance, i.e. the distance between the center of gravity of the reconstructed image and the camera center, is applied to avoid events falling on the camera edge. These events are usually truncated and consequently not well parametrized. Possible contamination from real gamma rays from the ON region of PKS 2155-304 is mitigated by a cut on the squared angular distance between the reconstructed and true direction, with values of ϑ² < 0.1 for signal and ϑ² > 0.3 for background events.
The input variable distributions for each training sample were checked and, since some strange behavior was seen in the Direction Error distributions, another cut at DirError < 0.8 was applied for both samples. The list of considered input variables includes the Mean Scaled Shower Goodness, the Primary Depth, the Mean Scaled Background Goodness, the NSB Likelihood and the Direction Error. Figure 3.13 shows some input variable distributions, where it can be seen that the Mean Scaled Background Goodness has no separation power, which led to its exclusion from the list of input variables. The input variables with the highest discriminating power are found with the TMVA variable ranking and correlation plots. The list of input variables kept for the training is:
• Mean Scaled Shower Goodness
• Primary Depth
• Direction Error

The selection and ranking of these variables as the most important ones is not surprising, as for example the Mean Scaled Shower Goodness was deliberately constructed for a good separation in the Model Analysis framework: it is the primary discrimination parameter used in Model Analysis, along with the other ones. These variables complement each other to cover a wide energy range, as some variables are better for the discrimination at high energies while others are better at low energies. For instance, the DirectionError is a better parameter at high energies. For the training phase, 70% of the sample was used. The rest was reserved for the application, where the performance of the BDT in each training band is benchmarked. Figure 3.14 shows the BDT output distribution for signal and background events in both a high and a low energy band for zenith angles between 15 and 25 degrees. The other training bands are not shown. Even after adding new input variables, the BDT's discrimination power does not improve and is still problematic at low energies.
This is expected, as the monoscopic background separation in the energy range of 30 to 70 GeV is complicated. At the highest energies, the discrimination is more powerful. Based on the BDT output, the cut on ζ for gamma/hadron separation can be optimized depending on the physical motivation and use.
Discussion
A gamma-hadron separation of monoscopic events with a MVA was investigated. Performing an event separation with the BDT algorithm requires a good training sample. For the work presented here, the BDT provides a good separation at the highest energies, especially in the last band (700 GeV - 1 TeV), whereas at low energies O(10 GeV) the signal and background BDT outputs overlap more strongly. The BDT was also tested on the Crab Nebula, the prime subject of this thesis work with H. E. S. S. The goal is to measure the spectrum in a wide energy range, hence to take advantage of the H. E. S. S. II observations to lower the energy threshold. This calls for a stable and powerful gamma-hadron separation, especially at low energies, where the separation is difficult due to the monoscopic reconstruction. As the Crab Nebula can only be observed at zenith angles above 45 degrees, the separation is even more complicated. The energy threshold from H. E. S. S. I, using stereoscopy, was 440 GeV on the Crab Nebula [START_REF] Aharonian | Observations of the Crab nebula with HESS[END_REF]. The Model Analysis for monoscopic reconstruction gave an energy threshold of 250 GeV (this work is presented in Chapter 6). The poor separation power of the BDT at low energies and high zenith angles excluded the possibility of using the BDT for the Crab Nebula analysis. The BDT already used for event separation within H. E. S. S. is based on the Hillas parameters. Adapting the BDT separation power to the Model variables for monoscopic use would require a deep and detailed study which goes beyond the scope of this thesis.
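The optimization of the cut on ζ mentioned above can be sketched as a scan maximizing the Li & Ma significance. The ON/OFF construction below is a simplified toy (expected background counts are used in place of measured ones), not the actual H. E. S. S. procedure:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Significance from Eq. 17 of Li & Ma (1983) for ON/OFF counting."""
    if n_on <= 0 or n_off <= 0:
        return 0.0
    t1 = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    t2 = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (t1 + t2)) if (t1 + t2) > 0 else 0.0

def best_zeta_cut(signal_zeta, background_zeta, alpha=0.2, n_steps=50):
    """Scan cuts on the BDT output zeta (roughly in [-1, 1]) and return the
    (cut, significance) pair maximizing the Li & Ma significance."""
    best_cut, best_sig = None, -1.0
    for k in range(n_steps):
        cut = -1.0 + 2.0 * k / (n_steps - 1)
        s_pass = sum(1 for z in signal_zeta if z > cut)
        b_pass = sum(1 for z in background_zeta if z > cut)  # OFF counts
        n_on = s_pass + alpha * b_pass                       # expected ON counts (toy)
        sig = li_ma_significance(n_on, b_pass, alpha)
        if sig > best_sig:
            best_cut, best_sig = cut, sig
    return best_cut, best_sig
```

Depending on the physics case, the figure of merit in the scan could equally be the signal efficiency at fixed background rejection rather than the significance.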
The parameters are adapted in the ParisAnalysis framework for the users and remain available for further sources or tests.
Chapter 4 Characterizing the Gamma-Ray Variability of B2 1215+30
Blazars are Active Galactic Nuclei with a relativistic jet pointing close to the observer's line-of-sight, in which highly relativistic particles move in a magnetic field and emit non-thermal radiation. This radiation, studied at all wavelengths, reveals blazars as amongst the most energetic and luminous objects in the Universe. They are known to undergo extreme, high-amplitude, variable emission at all wavelengths, with some of them undergoing flux increments at very-high-energies by a factor of almost one hundred on time scales of only three minutes [START_REF] Aharonian | An Exceptional Very High Energy Gamma-Ray Flare of PKS 2155-304[END_REF]. Doppler factors greater than 100 are required to explain such extreme phenomena [START_REF] Aharonian | An Exceptional Very High Energy Gamma-Ray Flare of PKS 2155-304[END_REF]. Variability of the gamma-ray flux on time scales as short as minutes is a common property of blazars [START_REF] Aleksić | MAGIC Discovery of Very High Energy Emission from the FSRQ PKS 1222+21[END_REF][START_REF] Arlen | Rapid TeV Gamma-Ray Flaring of BL Lacertae[END_REF]. Multiwavelength monitoring of such events disclosed simultaneous flux increments at different energies for some blazars, which is important information for understanding and characterizing the emission from blazars. B2 1215+30 is a BL Lac object located at a redshift of z = 0.13. It was listed as a gamma-ray emitter in the first Fermi-LAT bright source catalog in 2009, where it was classified as a potential TeV emitter.
It was first detected at TeV energies by the MAGIC experiment in 2011, after an optical high flux state triggered the observations [START_REF] Abdo | VizieR Online Data Catalog: Fermi/LAT bright gamma-ray source list (0FGL) (Abdo+[END_REF][START_REF] Aleksić | Discovery of VHE γ-rays from the blazar 1ES 1215+303 with the MAGIC telescopes and simultaneous multi-wavelength observations[END_REF], and was later detected by the VERITAS experiment as well [START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF].
Introduction
B2 1215+30, also known as ON 325 or 1ES 1215+303, is a BL Lac object located at a redshift of z = 0.13 [START_REF] Paiano | On the Redshift of TeV BL Lac Objects[END_REF], corresponding to a luminosity distance of d_L = 592 Mpc (for a Friedmann universe with H_0 = 73 km s^-1 Mpc^-1, Ω_m = 0.27 and Ω_λ = 0.73). The 408 MHz radio source catalog of the Bologna Northern Cross telescope lists 3235 radio sources, named after the B2 survey plus the source coordinates. The appearance of B2 1215+30 in this catalog in 1970 marks its first discovery [START_REF] Colla | A catalogue of 3235 radio sources at 408 MHz[END_REF]. B2 1215+30 was subsequently detected by other instruments in different energy bands, resulting in its classification as a BL Lac type object. It was first detected by Fermi-LAT in 2009, when it was listed among 205 sources in the first Fermi-LAT bright source catalog produced with only 3 months of data [START_REF] Abdo | VizieR Online Data Catalog: Fermi/LAT bright gamma-ray source list (0FGL) (Abdo+[END_REF], and it appears in all later Fermi-LAT catalogs with a hard spectral index [START_REF] Ackermann | The Third Catalog of Active Galactic Nuclei Detected by the Fermi Large Area Telescope[END_REF]. As B2 1215+30 is located in the northern hemisphere, it can be seen at very-high-energies only by the VERITAS and MAGIC experiments.
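The luminosity distance quoted above can be reproduced by numerically integrating the comoving distance for the stated flat cosmology; a minimal sketch (trapezoidal integration, curvature and radiation terms neglected):

```python
import math

H0 = 73.0                       # km s^-1 Mpc^-1, as quoted in the text
OMEGA_M, OMEGA_L = 0.27, 0.73
C_KM_S = 299792.458

def luminosity_distance_mpc(z, n=1000):
    """d_L = (1 + z) * (c/H0) * int_0^z dz'/E(z'), with E(z) = sqrt(Om*(1+z)^3 + OL)."""
    def inv_e(zp):
        return 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    h = z / n
    # trapezoidal rule over n steps
    integral = h * (0.5 * (inv_e(0.0) + inv_e(z)) + sum(inv_e(i * h) for i in range(1, n)))
    return (1.0 + z) * (C_KM_S / H0) * integral

# luminosity_distance_mpc(0.13) gives roughly 590 Mpc, in line with the value quoted above
```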
The first TeV emission from the source was reported in 2011 by the MAGIC telescopes, detected after an optical outburst triggered the observations [START_REF] Aleksić | Discovery of VHE γ-rays from the blazar 1ES 1215+303 with the MAGIC telescopes and simultaneous multi-wavelength observations[END_REF]. It was subsequently detected by the VERITAS experiment in 2012, with observations carried out between 2008 and 2012 [START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF]. MAGIC detected the source in a brighter state during the 2011 observations, with an integral flux of (7.7 ± 0.9) × 10^-12 cm^-2 s^-1 above 200 GeV and a spectral index of Γ = -2.96 ± 0.14 [START_REF] Aleksić | Discovery of VHE γ-rays from the blazar 1ES 1215+303 with the MAGIC telescopes and simultaneous multi-wavelength observations[END_REF]. VERITAS detected the source in a relatively bright state in 2011, with a power-law spectral index of Γ = -3.6 ± 0.4 and an integral flux above 200 GeV of (8.0 ± 0.9) × 10^-12 cm^-2 s^-1 [START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF]. The MAGIC and VERITAS observations discussed here reported flux variability on time scales of months or longer. High-amplitude flux variability is a distinctive characteristic of blazars. During such events, some blazars are found to exceed the very-high-energy flux of the Crab Nebula, which is remarkable given that the Crab Nebula is much closer and is in fact the brightest very-high-energy source. The spectral energy distribution is characterized by a double-hump structure, with one peak located in the radio-to-UV/X-ray range and a second one in the X-ray-to-gamma-ray range. The first peak is attributed to synchrotron radiation from the relativistic electrons moving in the jet's magnetic field. The origin of the second component is still under debate, with two different scenarios, of hadronic or leptonic origin.
In the leptonic scenarios, it is commonly believed that the second component arises from electrons that undergo inverse-Compton scattering off low-energy photons. The origin of the low-energy photons responsible for the inverse-Compton emission is still unclear. B2 1215+30 is classified either as an intermediate BL Lac or as a high-frequency-peaked BL Lac (HBL), based on the position of the SED synchrotron peak at log10(ν_peak/Hz) = 15.58 [START_REF] Nieppola | Spectral energy distributions of a large sample of BL Lacertae objects[END_REF][START_REF] Ackermann | The Second Catalog of Active Galactic Nuclei Detected by the Fermi Large Area Telescope[END_REF]. Simultaneous observations of B2 1215+30 with Fermi-LAT and VERITAS during two flaring episodes detected at very-high-energies are presented in the following sections. Some of the observations of B2 1215+30 in other energy ranges are briefly described afterwards.
VERITAS Observations
VERITAS (Very Energetic Radiation Imaging Telescope Array System) is an array of four IACTs located at the Fred Lawrence Whipple Observatory in southern Arizona [START_REF] Holder | VERITAS: Status and Highlights[END_REF]. VERITAS is sensitive to gamma rays in the energy range between 0.1 and 30 TeV. For these observations of B2 1215+30, all data were taken in wobble pointing mode [START_REF] Fomin | New methods of atmospheric Cherenkov imaging for gammaray astronomy. I. The false source method[END_REF], considering that another TeV source, 1ES 1218+304, lies in the same field of view, offset 0.76° from B2 1215+30. Standard VERITAS data processing techniques were used for the analysis, as described in [START_REF] Acciari | Veritas Observations of a Very High Energy γ-Ray Flare From the Blazar 3C 66A[END_REF][START_REF] Archambault | Discovery of a New TeV Gamma-Ray Source: VER J0521+211[END_REF].
VERITAS observations carried out between MJD 56686 and 56802 (see Table 4.1) resulted in the detection of a gamma-ray signal from B2 1215+30 with a statistical significance of 23.6σ, and observations between MJD 56298 and 56424 in a detection with 8.8σ. The top panel of Figure 4.2 shows the 2013 light curve in 1-day time bins, with an average flux over the entire data set of (6.0 ± 1.2) × 10^-12 cm^-2 s^-1. The top panel of Figure 4.3 shows the 2014 light curve in 3-day time bins, with an average flux over the entire data set of (2.4 ± 0.2) × 10^-11 cm^-2 s^-1. With the exception of 2014 February 08, the observed nightly fluxes are comparable to previously reported yearly-averaged values [START_REF] Aleksić | Discovery of VHE γ-rays from the blazar 1ES 1215+303 with the MAGIC telescopes and simultaneous multi-wavelength observations[END_REF][START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF]. On the night of MJD 56696 (2014 February 08), VERITAS measured a gamma-ray flux of (5.0 ± 0.1) × 10^-10 cm^-2 s^-1, which is more than twice that of the Crab Nebula. The source was detected with a statistical significance of 46.5σ in an exposure of 45 minutes. Given the strength of the signal, a light curve in 5-minute time bins was derived. The flux on the night of the flare was more than 60 times brighter than the average flux previously reported from this source by MAGIC and VERITAS [START_REF] Aleksić | Discovery of VHE γ-rays from the blazar 1ES 1215+303 with the MAGIC telescopes and simultaneous multi-wavelength observations[END_REF][START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF], making this one of the brightest flares ever detected from a blazar. All errors quoted are 1σ statistical errors. The preliminary VERITAS results on B2 1215+30 motivated the search for variability in the GeV energy range with Fermi-LAT.
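A standard way to quantify whether a binned lightcurve like the ones above is variable is a chi-square test against the constant-flux hypothesis; the sketch below is a generic stand-in, not the actual VERITAS fitting code:

```python
def chi2_constant_flux(fluxes, errors):
    """Chi-square of a lightcurve against a constant flux equal to the
    error-weighted mean; returns (chi2, degrees of freedom)."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(f * w for f, w in zip(fluxes, weights)) / sum(weights)
    chi2 = sum(((f - mean) / e) ** 2 for f, e in zip(fluxes, errors))
    return chi2, len(fluxes) - 1

# a chi2 far above the number of degrees of freedom rejects the constant-flux hypothesis
```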
The analysis and results obtained from the Fermi-LAT observations simultaneous with the two periods discussed here are given in the following.
Fermi-LAT Observations
Fermi-LAT has been monitoring the high-energy sky in survey mode since 2008. The data accumulated by Fermi-LAT are publicly available, released in the form of event (PH) and spacecraft (SC) files. They contain all the event information for a given source and the spacecraft pointing positions. Since the launch of Fermi-LAT, the data have been continuously reprocessed by the LAT team using more sophisticated reconstruction techniques and IRFs. The latest reconstruction technique (Pass 8) leads to a better PSF and a substantial increase of the gamma-ray acceptance, i.e. a higher photon detection probability. This allowed a better reconstruction up to 500 GeV compared to the previous one (Pass 7), as described in Chapter 2. The data presented here were retrieved from the Fermi Science Support Center.
Event Selection
Two time ranges were selected from the Fermi-LAT data matching the VERITAS observation periods in order to perform a contemporaneous data analysis: from 2013 January 6 to 2013 May 12 and from 2014 January 1 to 2014 May 25. A circular RoI of 10 degrees, centered on the position of B2 1215+30 (R.A. = 12h17m52s, decl. = +30°07′00.1″, J2000), was selected. Only events with energies 100 MeV < E < 500 GeV from the RoI were selected and analyzed. Gamma rays from the Earth's limb contaminate the sample each time the Earth enters the Fermi-LAT field of view; these time intervals, corresponding to a rocking angle > 52 degrees, were removed. A further quality selection was applied by accepting only events with zenith angles less than 90 degrees.
Background Modeling and Source Detection
The likelihood framework requires a proper background model, and three background categories were considered in the analysis.
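In practice these cuts are applied with the Fermi Science Tools (gtselect, gtmktime); purely to illustrate the geometry of the selection, a stand-alone sketch with a haversine angular separation could look like this (the event-field names are illustrative):

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions (haversine form)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a)))

SRC_RA, SRC_DEC = 184.467, 30.117    # B2 1215+30, J2000, in degrees

def select_events(events, roi_deg=10.0, emin_mev=100.0, emax_mev=5e5, zmax_deg=90.0):
    """Keep events inside the RoI, in the energy range, and below the zenith-angle cut."""
    return [e for e in events
            if emin_mev < e["energy_mev"] < emax_mev
            and e["zenith_deg"] < zmax_deg
            and angular_separation_deg(e["ra"], e["dec"], SRC_RA, SRC_DEC) < roi_deg]
```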
The first two are the Galactic and extragalactic diffuse emission, to which the Fermi-LAT is sensitive [START_REF] Abdo | Spectrum of the Isotropic Diffuse Gamma-Ray Emission Derived from First-Year Fermi Large Area Telescope Data[END_REF][START_REF] Ackermann | The Spectrum of Isotropic Diffuse Gamma-Ray Emission between 100 MeV and 820 GeV[END_REF]. The Galactic diffuse model is provided in the FermiScienceTools by a spatial and spectral template, represented by a set of energy-dependent maps scaled to the expected intensity. The isotropic diffuse model is a spectral template from a fit to the all-sky emission (|b| > 30°) that includes both extragalactic diffuse gamma rays and the remaining residual (misclassified) cosmic-ray emission. The isotropic background and the Galactic diffuse emission were modeled with the iso_source_v05 and gll_iem_v05 templates, and the recent instrument response functions (IRFs) P7REP_SOURCE_V15 from the FermiScienceTools were used. The third major background consists of events from the other sources in the vicinity of B2 1215+30. This type of background is modeled considering all known gamma-ray sources from the third Fermi-LAT catalog (3FGL) up to 5° outside the RoI edges [START_REF] Acero | Fermi Large Area Telescope Third Source Catalog[END_REF]. Including this 5° band outside the RoI accounts for spill-over events caused by PSF tails from sources close to the RoI edges. The catalog lists sources detected with a significance of at least 3σ. The detection significance is determined from the likelihood difference of the background-only model and the background-plus-source model:
TS = 2∆ log L. (4.1)
The source significance is inferred from this test statistic: σ ≈ √TS.
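The test statistic of Equation 4.1 can be illustrated with the simplest Poisson case, a single bin with a known background (the full likelihood fit is of course multi-dimensional):

```python
import math

def poisson_loglike(n, mu):
    """Poisson log-likelihood, dropping the constant log(n!) term (it cancels in TS)."""
    return n * math.log(mu) - mu

def detection_ts(n_obs, background):
    """TS = 2 * (logL(best-fit source + background) - logL(background only));
    the detection significance is then roughly sqrt(TS)."""
    s_hat = max(n_obs - background, 0.0)    # maximum-likelihood source counts
    ts = 2.0 * (poisson_loglike(n_obs, s_hat + background)
                - poisson_loglike(n_obs, background))
    return ts, math.sqrt(ts)
```

For example, 100 observed counts on an expected background of 50 give TS ≈ 38.6, i.e. roughly a 6.2σ detection.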
Given that the 3FGL catalog was built on 4 years of data, while only half-year periods were analyzed in these Fermi-LAT/VERITAS studies, sources detected with less than 5σ were treated differently from sources detected above the 5σ threshold, in order to avoid fit convergence problems. To model the > 5σ sources as well as possible in the contemporaneous data analysis, their spectral parameters were first determined in a global fit over the full time range. The initial parameters for this global fit were set to the ones from the 3FGL catalog. In the subsequent modeling of the half-year time periods, only the spectral parameters were left free. The 3FGL sources below the 5σ threshold were included with all their parameters fixed to the catalog values, both in the global and in the half-year fits. The resulting background model for the chosen RoI consists of 50 point sources and no extended sources. The best-fit parameters from the global fit were used to properly model the background, which is then used to derive the spectrum and lightcurve. The procedure and the results are presented next.
Spectrum
A spectral analysis of B2 1215+30 was performed in the energy range from 0.1 to 500 GeV, covering the same half-year time periods. The spectral points were produced with the standard unbinned maximum likelihood analysis in six energy bands, equally spaced on a logarithmic scale. To initialize the background model fit, a procedure similar to the global fit described above was followed. For sources within the RoI with TS > 9, the integral flux was left free, while the spectral index was fixed at the value obtained during the global fit. In each energy band, the best-fit spectral parameters were derived using a simple power-law model:
dN/dE = N_0 (E/E_0)^(-Γ), (4.2)
where N_0 is the normalization factor at a chosen reference energy E_0, and Γ is the photon index.
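As a cross-check of a power-law model like Equation 4.2, the index and normalization can be recovered from binned differential fluxes by least squares in log-log space. This is a simplified stand-in for the unbinned likelihood fit, not the FermiScienceTools procedure:

```python
import math

def fit_power_law(energies_mev, fluxes, e0=1000.0):
    """Least-squares fit of log(dN/dE) = log(N0) - Gamma * log(E/E0);
    returns (N0, Gamma) for the reference energy e0."""
    xs = [math.log(e / e0) for e in energies_mev]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    gamma = -slope
    n0 = math.exp(my + gamma * mx)   # intercept: log N0 = mean(y) - slope * mean(x)
    return n0, gamma
```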
The best-fit power-law model gave a spectral index of Γ = 1.84 ± 0.06 at a normalization of N_0 = (6.89 ± 0.56) × 10^-12 cm^-2 s^-1 MeV^-1.
Lightcurve
To derive the lightcurve, the data were divided into time bins of three or one day duration. The spectrum of B2 1215+30 in each bin was modeled with a simple power law. For all the sources within the RoI detected with a test statistic value of TS ≥ 9, only the integral flux was left free, while the spectral index was fixed to the value obtained during the fitting procedure for the entire time range. For the other sources inside the RoI, detected with TS < 9, all the parameters were fixed to the values obtained during the fitting procedure over the whole time period. The strength of the signal allows a 1-day time bin lightcurve to be derived around the flaring period. The spectrum was also derived for this period, resulting in a spectral index of Γ = 1.7 ± 0.09, showing a hardening during the flare. The decaying phase of the 1-day bin lightcurve is fitted with a function of the form
F(t) = F_0 (1 + 2^(-(t - t_0)/t_var)).
Multiwavelength observations of the source in the X-ray, UV and optical bands covering this flaring period are described next.
Multiwavelength observations
X-ray Observations
Swift was launched on 10 November 2004 by NASA with three instruments on board: the Burst Alert Telescope (BAT), the Ultraviolet/Optical Telescope (UVOT) and the X-ray Telescope (XRT) [START_REF] Burrows | The Swift X-Ray Telescope[END_REF]. These instruments are designed for monitoring the sky over a wide energy range. The analysis of the data taken with two of these instruments is described below.
Swift-XRT
The X-ray telescope is designed to measure X-rays in the 0.2 - 10 keV energy range [START_REF] Burrows | The Swift X-Ray Telescope[END_REF]. An observation by the Swift-XRT was carried out one day after the VERITAS-detected 2014 flare, with an exposure of 1.97 ks.
The data were obtained in photon-counting (PC) mode and processed with the xrtpipeline tool (HEASOFT 6.16). The source and background extraction regions were defined as a circle of 20 pixels (∼47 arcsec) radius and a 40-pixel radius circle positioned near the former without overlapping, respectively. The exposure shows a stable source-count rate of 0.3 counts s^-1, suggesting negligible pile-up effects in photon-counting mode [START_REF] Moretti | In-flight calibration of the Swift XRT Point Spread Function[END_REF]. The spectral fitting was performed with PyXspec v1.0.4 [START_REF] Arnaud | XSPEC: The First Ten Years[END_REF], using the dedicated Ancillary Response Functions generated by the xrtmkarf tool. The spectrum was rebinned to have at least 20 counts per bin using the grppha tool, ignoring the channels with energy below 0.3 keV in the XRT-PC data [START_REF] Petre | Highlights of the BBXRT mission[END_REF]. A power-law model, dN/dE = N_0 (E/E_0)^(-Γ_X), was fitted to the data. Using a hydrogen column density of N_H = 1.68 × 10^20 cm^-2, the power-law model describes the data well (chi-square probability P(χ²) = 0.42), with a photon index of Γ_X = 2.54 ± 0.07.
Swift-UVOT
Swift-UVOT is a telescope with a 17 × 17 arcmin field of view operating in the ultraviolet and optical regime [START_REF] Roming | The Swift Ultra-Violet/Optical Telescope[END_REF]. The Swift-UVOT data were analyzed by extracting the source counts from an aperture of 5.0 arcsec radius around the source. The background counts were taken from four neighboring regions of equal radius. The magnitudes were computed using the uvotsource tool (HEASOFT v6.16) and corrected for extinction using E(B-V).
UV and Optical Observations
The optical R-band observations of B2 1215+30 were taken with two telescopes: the 35 cm Celestron telescope attached to the KVA 60 cm telescope (La Palma, Canary Islands, Spain) and the 50 cm Searchlight Observatory Network telescope (San Pedro de Atacama, Chile).
The data were taken as part of the Tuorla blazar monitoring program and were analyzed by their team using a semi-analytical pipeline developed at the Tuorla Observatory [START_REF] Takalo | Tuorla Blazar Monitoring Program[END_REF]. From the data analysis one can conclude that there is no hint of flux variations during the period analyzed. The contemporaneous optical data described here are plotted in the SED, which is described next.
Spectral Energy Distribution
The spectral energy distribution, in νF_ν representation, is shown in Figure 4.4. Cyan and red points show the results obtained during this work. Observations from Tuorla and XRT taken during the flare period are shown as well. The archival data, shown in gray, comprise all data published before the time period analyzed in this work [START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF]. The two-bump structure of the SED is clearly visible, with the location of the synchrotron peak between the UV and X-rays. The inverse-Compton peak is located in the gamma-ray energy band. The 2014 period presented and analyzed here resulted in flux variability at high and very-high energies, detected by Fermi-LAT and VERITAS respectively. The Fermi-LAT spectral points, marked as red squares, correspond to the 2014 period. They show a clear shift in flux compared to the previous observations reported in [START_REF] Aliu | Long Term Observations of B2 1215+30 with VERITAS[END_REF]. The SED with a leptonic model can be found in [START_REF] Abeysekara | A Luminous and Isolated Gamma-Ray Flare from the Blazar B2 1215+30[END_REF].
Size of the Emission Region
During a flaring episode, the flux cannot vary on time scales shorter than the time needed for light to cross the emission region, which limits the region's size. For relativistic speeds, the Doppler boost of the emission region has to be taken into account.
If the bulk emission region moves with a Lorentz factor Γ at an angle θ to the line of sight, the resulting Doppler factor is:
δ = 1 / (Γ(1 − β cos θ)), (4.3)
with β = v/c and Γ = 1/√(1 − β²). For a source at a redshift z, with a variability time scale t_var, the source size is limited to:
R ≤ c t_var δ / (1 + z). (4.4)
The lightcurves derived with VERITAS and Fermi-LAT show evidence for a short variability time scale. Opacity arguments for pair production were used to set a limit on the Doppler factor of the relativistic jet, as explained below. In the following, the approach proposed by Dondi & Ghisellini was adopted [START_REF] Dondi | Gamma-ray-loud blazars and beaming[END_REF]. These arguments are used to explain the high luminosities produced in compact regions of BL Lac objects. Photons of sufficiently high energy can annihilate with low-energy photons to produce an e−/e+ pair, so energetic gamma rays would suffer attenuation inside a very compact source, resulting in softer radiation; the observed combination of short variability time scales and high gamma-ray luminosities is therefore not expected from very compact volumes unless the radiation is beamed. The beamed radiation from the relativistic jet can explain the high luminosities from very compact regions and the short variability time scales. In this calculation we assume a Friedmann universe with H_0 = 73 km s^-1 Mpc^-1, Ω_m = 0.27 and Ω_λ = 0.73, and that the high- and low-energy radiation are produced in a single region of the source. Assuming a spherical emission region, its radius is constrained by Equation 4.4. It can be shown that the Doppler factor must satisfy:
δ ≥ [ (σ_T d_L² / (5 h c²)) (1 + z)^(2α) (F_1keV / t_var) (E_γ / GeV)^α ]^(1/(4+2α)), (4.5)
where z is the redshift and d_L the luminosity distance (see Appendix A for the details of the calculation). The highest-energy photon in the 2014 Fermi-LAT data set had an energy of 73 GeV.
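The causality limit of Equation 4.4 is simple to evaluate; a minimal sketch with illustrative numbers (a few-hour variability time scale and δ = 10):

```python
C_CM_S = 2.998e10   # speed of light in cm/s

def max_region_size_cm(t_var_s, doppler, z):
    """Causality limit R <= c * t_var * delta / (1 + z) on the emission-region size."""
    return C_CM_S * t_var_s * doppler / (1.0 + z)

# e.g. t_var = 3.6 h and delta = 10 at z = 0.13 give R of a few 10^15 cm,
# i.e. only a few hundred astronomical units
```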
The redshift of B2 1215+30 (z = 0.13) and the corresponding luminosity distance d_L = 614.6 Mpc are used in this calculation. To derive the X-ray photon flux density, observations from the Swift observatory were taken into account. As already mentioned above, these observations were taken one day after the flare at very-high-energies. Assuming the flux did not change, they were used to derive the flux at 1 keV, which is found to be F_1keV = 1.7 µJy. Using the variability time scale and the X-ray spectral index of α = 2.5 derived from the Swift data, a limit on the beaming factor of δ_Fermi ≥ 5.0 is found. Given the simultaneous flare at GeV and TeV energies, more constraining results can be derived using the flare observations from VERITAS. The variability time scale derived from the VERITAS 5-minute lightcurve is 3.6 h and was used to calculate the Doppler factor: δ ≥ 10. In the simplest leptonic emission scenario, the high-energy component of the spectral energy distribution is produced via the synchrotron self-Compton mechanism. In the modeling of the spectral energy distribution, a synchrotron self-Compton component and an external-Compton component were considered [START_REF] Abeysekara | A Luminous and Isolated Gamma-Ray Flare from the Blazar B2 1215+30[END_REF]. In an SSC scenario, the ratio between the synchrotron and inverse-Compton luminosities can be used to estimate the magnetic field. The magnetic field is constrained following [START_REF] Ghisellini | Diagnostics of Inverse-Compton models for the γ-ray emission of 3C 279 and MKN 421[END_REF] and using two arguments: (I) Non-detections by Swift-BAT (15-50 keV) and MAXI (4-10 keV) on the day of the TeV flare (MJD 56696) can be interpreted as a limit on the hard X-ray flux of the order of ν_X F_ν_X ≲ 2 × 10^-10 erg cm^-2 s^-1. This limits the peak synchrotron luminosity to L_syn ≤ 10^46 erg s^-1. (II) Equation 4.5 is used for the Doppler factor calculation.
From these relations one finds:
B ≲ (1 + z) δ^-3 [ 2 L_syn² / (L_γ c³ t_var) ]^(1/2) ≤ 1.8 G (L_syn / 10^46 erg s^-1) (δ/10)^-3. (4.6)
A full discussion of magnetic reconnection and of the possible causes of short-time-scale flares in blazars can be found in [START_REF] Abeysekara | A Luminous and Isolated Gamma-Ray Flare from the Blazar B2 1215+30[END_REF].
Long-Term Variability with Fermi-LAT
In the previous sections, the analysis of two time periods simultaneous with the flaring activities detected by VERITAS was presented. Such events are a distinctive characteristic of BL Lac objects. To further investigate the emission from B2 1215+30, the variability at GeV energies was studied using the entire available Fermi-LAT data set. Since Fermi-LAT observes the whole sky every three hours, the probability to detect such events is higher compared to ground-based detectors. The LAT team runs an online monitoring program which is used to send alerts to other experiments and to trigger multiwavelength observations of such events. Despite the brightness of the 2014 flare (16× the average flux), the Fermi-LAT online monitoring missed the high-flux activity of B2 1215+30. Hence, a long-term lightcurve should reveal more about the activity of the source at high energies. All the data accumulated with the Fermi-LAT on B2 1215+30 were used to study the long-term variability. The lightcurve is calculated in time bins by assuming a power-law spectral shape. Given the long-term data set, the fluxes are derived with a separate power-law spectral-index fit for each year, to account for any spectral variability in this time range. The flux evolution of B2 1215+30 is studied by calculating the integral flux from 100 MeV to 500 GeV in one-week time bins. The one-week time bin represents a trade-off between statistics and sensitivity to such events.
The long-term lightcurve in Figure 4.5 shows an increase of the flux with time. To check the significance of this flux increase, a linear fit was performed on the data. The fluxes and the spectral indices were also investigated on a year-by-year basis. To test the significance of this correlation, each of them was fitted with a constant (orange) and with a line (green). The values are summarized in Table 4.3. The linear fit agrees better with the measurements and shows a clear correlation of the flux increase with time and a hardening of the spectral index with time. To investigate the origin of the GeV flares, the long-term lightcurve was studied further. If the variability originates from random processes, the fluxes are expected to be normally distributed around the mean. This was checked by performing a Gaussian fit to the flux distribution (Φ_i − Φ̄)/σ_i, where Φ_i is the flux with error σ_i and Φ̄ is the mean flux weighted by the errors. Fluxes above this value were also checked with a Gaussian fit (see Figure 4.7). Therefore, the high flux states during luminous flares are not likely connected to the same random processes that dominate the quiet state. In some blazars, e.g. PKS 2155-304, it was found that flares may be related to a self-amplifying, multiplicative process [START_REF] Abdalla | Characterizing the γ-ray long-term variability of PKS 2155-304 with H[END_REF]. Unlike random additive processes, the flares would in that case result from multiplicative processes; the fluxes are then not expected to follow a normal distribution, but the logarithms of the fluxes are. A "log-normal" distribution of the fluxes was checked, and the logarithms of the fluxes were not compatible with a normal distribution either. Multiplicative processes therefore do not explain the flares either.
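The additive-versus-multiplicative distinction above can be illustrated with a toy simulation: for a multiplicative process the logarithms of the fluxes, not the fluxes themselves, are normally distributed. The data below are synthetic, not the B2 1215+30 measurements:

```python
import math
import random
import statistics

random.seed(42)

def multiplicative_fluxes(n=5000, sigma=0.8):
    """Log-normal draws: exp of a Gaussian, as produced by a multiplicative process."""
    return [math.exp(random.gauss(0.0, sigma)) for _ in range(n)]

def skewness(values):
    """Sample skewness: zero for a symmetric (e.g. normal) distribution."""
    m = statistics.fmean(values)
    s = statistics.pstdev(values)
    return sum((v - m) ** 3 for v in values) / (len(values) * s ** 3)

fluxes = multiplicative_fluxes()
log_fluxes = [math.log(f) for f in fluxes]
# the flux distribution is strongly right-skewed, the log-flux one is nearly symmetric
```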
Another possible explanation would be a periodic behavior, as seen for example in the blazar PG 1553+113 [START_REF] Ackermann | Multiwavelength Evidence for Quasi-periodic Modulation in the Gamma-Ray Blazar PG 1553+113[END_REF]. To probe the B2 1215+30 lightcurve for periodic behavior, a Fourier analysis was performed; it is described in Section 4.6.
Recent flaring activities
This analysis resulted in the detection of several flux increments from 2015 to 2017, of which only one flare was reported by the Fermi-LAT, on 2017 April 13, in the form of an Astronomer's Telegram [START_REF]Fermi LAT detection of a GeV gamma-ray flare from the high-energy peaked BL Lac object 1ES 1215+303[END_REF]. The corresponding lightcurves, in one-week time bins for these years, are plotted in Figure 4.8. The flux enhancements found during these years show the high-flux activity of the source and provide evidence of quasi-periodic behaviour. This is most evident when looking at the 2016 data (middle plot of Figure 4.8). The flaring episodes with weekly average flux exceeding Φ > 2.4 × 10^-8 cm^-2 s^-1 are summarized in Table 4. These results, along with the other years, are used to characterize the fractional variability, presented next.
Fractional variability
To quantify the variability, the statistical properties of the light curves are also considered. The fractional variability F_var is a measure of the intrinsic variability that corrects for the noise [START_REF] Vaughan | On characterizing the variability properties of X-ray light curves from active galaxies[END_REF]. F_var is the square root of the excess variance and is calculated as follows, where S² is the total variance of the lightcurve, X̄ is the mean of the flux measurements and σ̄²_err is the mean squared error:
For N points, F_var and its propagated uncertainty are calculated as follows:

F_var = √((S² − σ²_err)/X²), (4.7)

σ_F_var = (1/(2 F_var)) √(1/N) S²/X². (4.8)

For the long-term lightcurve, only the time bins with a detection of at least TS ≥ 4 are considered, and the upper limits are excluded. X is the unweighted mean flux. The F_var, which is a measure of the variability power of the total lightcurve, was found to be 30%. This value is consistent with what was previously reported for B2 1215+30.

Periodic Behaviour

The lightcurve is further tested for quasi-periodic modulations with a Discrete Fourier Transform (DFT) of the long-term lightcurve, shown in Figure 4.9a. Gaussian noise with mean and width corresponding to the distribution of all integrated flux measurements was simulated and Fourier transformed as well, in order to judge whether an excess seen in the data at a given period is significant or not. These significance thresholds have to be interpreted carefully, as white noise is certainly not the optimal model for blazar flux measurements in the absence of quasi-periodicity or flaring activities; it does, however, allow for a preliminary measure of the significance. Another possibility would be to draw toy lightcurves from the measured lightcurve by scrambling the individual bins, in order to match the null hypothesis of no quasi-periodicity. Figure 4.9a hints at periodic components with periods of about 1000 and 3000 days. As those are the longest periods accessible to the DFT, the latter could also be the manifestation of the seemingly linear long-term flux increase (Figure 4.3) in the Fourier transform, instead of a truly periodic component. To account for that, the lightcurve was fit with a polynomial of degree one, and the DFT was applied to the lightcurve after subtracting this fit. This is shown in Figure 4.9b. The peak around 3000 days disappears, as expected, but a significant peak around 1000 days remains.
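The detrend-then-DFT procedure with a white-noise significance band, followed by the time-domain fit of a linear-plus-sine model (the form used in Eq. (4.9) below), can be sketched on a toy lightcurve; the injected 1000-day modulation and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)

# Toy weekly lightcurve: linear trend + 1000-day modulation + white noise
t = np.arange(0.0, 3300.0, 7.0)                      # days
flux = (2.0 + 2e-4 * t + 0.5 * np.sin(2 * np.pi * t / 1000.0)
        + 0.3 * rng.standard_normal(t.size))

# Subtract a degree-one polynomial so the linear flux increase does not
# masquerade as a long-period component, then take the one-sided DFT
coeffs = np.polyfit(t, flux, deg=1)
detrended = flux - np.polyval(coeffs, t)
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(t.size, d=7.0)               # cycles per day

# White-noise reference band from simulated Gaussian lightcurves
noise_power = np.array([
    np.abs(np.fft.rfft(detrended.std() * rng.standard_normal(t.size))) ** 2
    for _ in range(500)
])
threshold = np.percentile(noise_power, 99, axis=0)   # 99% band per frequency
best = np.argmax(power[1:]) + 1                      # skip the zero frequency
best_period = 1.0 / freqs[best]
print(f"strongest DFT period: {best_period:.0f} d, "
      f"above 99% noise band: {power[best] > threshold[best]}")

# Follow-up in the time domain: linear trend plus sinusoid
def model(tt, a, b, big_t, phi, c):
    return a * tt + b * np.sin(2.0 * np.pi * tt / big_t - phi) + c

p0 = [0.0, 0.3, 1000.0, 0.0, flux.mean()]            # seed the period at 1000 d
popt, pcov = curve_fit(model, t, flux, p0=p0)
print(f"fitted period: {popt[2]:.0f} +/- {np.sqrt(pcov[2, 2]):.0f} d")
```

Note that the DFT only samples discrete periods (total span divided by an integer), so the recovered peak lands near, not exactly at, 1000 days, while the fit returns the period with an uncertainty.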
This observation goes in the same direction as the conclusion of [START_REF] Arlen | Rapid TeV Gamma-Ray Flaring of BL Lacertae [END_REF], where an approximately 1000-day quasi-periodicity was claimed for the gamma-ray blazar PG 1553+113. Going back from the frequency to the time domain, the natural follow-up is a fit of the lightcurve with the shape expected from the DFT, a polynomial of degree one plus a sine:

Φ(t) = A t + B sin(2π t/T − ϕ) + C. (4.9)

The long-term lightcurve including the fit is shown in Figure 4.10. The initial value for the period T was set to 1000 days. The fit parameters and their uncertainties are given in Table 4.5. While the DFT gives the power density at discrete frequencies defined by the discrete measurements, the fit yields an estimate for the period including an uncertainty, which is the most striking number in Table 4.5. To further collect evidence for this quasi-periodic component with a relatively long period, it would be promising to redo the Fermi-LAT lightcurve of B2 1215+30 in time bins better adapted to lay open features in the 1000-day period domain, for example a 250-day time binning. This behaviour has been cross-checked with a Lomb-Scargle periodogram, an independent method (this check was done by M. de Naurois). The method was applied with and without subtracting the polynomial fit; when the linear increase is subtracted, it gives a period of 1081.94 days at 3.3σ, consistent with the results mentioned above. Further checks on this behaviour, the interpretation and the constraints derived from these results are a prospect for future projects.

Summary and Discussion

In this chapter, the gamma-ray variability of the BL Lac object B2 1215+30 was studied. For the scope of this thesis, the publicly available Fermi-LAT data was used.
The variability time scale of the source during the 2014 flare was derived from the 1-day time bin Fermi-LAT lightcurve and was found to be t_var ≈ 9.0 h. Such short variability timescales constrain the size of the emission region using causality arguments. Using the opacity argument and following the calculation of [START_REF] Dondi | Gamma-ray-loud blazars and beaming [END_REF], we derived a minimum Doppler factor δ ≥ 5. A more constraining limit on the relativistic Doppler factor was set by using the VERITAS flare: with the VERITAS data taken during the night of the flare, an upper limit on the flux halving time of t_var < 3.6 h was found. These results were used to set a limit on the Doppler factor of δ ≥ 10. The value reported here is in agreement with what is found in other TeV blazars. To better understand the GeV emission picture on a large time scale, the long-term variability of B2 1215+30 in the energy range 100 MeV < E < 500 GeV was investigated with nine years of Fermi-LAT data. Several flux increments were detected. Three major flares with the same flaring amplitude as the 2014 flare were distinguished in the long-term lightcurve: in October 2008, February 2014 and April 2017. This analysis showed that the source underwent several flaring activities after the here-discussed 2014 flare. Data from the IACT experiments covering the latest flares would help to better understand the gamma-ray emission and to search for counterparts at TeV energies. Such an investigation concerning the latest period of observations could not be carried out here due to the data privacy policy of the IACT experiments, and this source is outside the field of view of H. E. S. S. These flaring episodes offer the possibility to study and characterize the gamma-ray emission. When studying the long-term gamma-ray emission from B2 1215+30, a yearly flux increase with time was found.
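The causality limit behind the Doppler-factor estimates above can be sketched numerically as R ≤ c δ t_var / (1 + z); the redshift z = 0.13 and the pairing of each t_var with a δ value are assumptions of this sketch, using the numbers quoted in the text:

```python
# Causality limit on the emission-region size: R <= c * delta * t_var / (1 + z).
# The redshift z = 0.13 for B2 1215+30 is an assumed value here, not a
# quantity derived in the text.
C_LIGHT = 2.998e10                      # speed of light in cm/s

def max_region_size(t_var_hours: float, delta: float, z: float) -> float:
    """Upper limit on the emission-region radius in cm."""
    return C_LIGHT * delta * t_var_hours * 3600.0 / (1.0 + z)

z = 0.13                                # assumed redshift of B2 1215+30
for t_var_h, delta in [(9.0, 5.0), (3.6, 10.0)]:
    r = max_region_size(t_var_h, delta, z)
    print(f"t_var = {t_var_h:4.1f} h, delta = {delta:4.1f} -> R <= {r:.2e} cm")
```

Both combinations give region sizes of order 10¹⁵ cm, i.e. a compact emission zone well inside the jet.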
A linear correlation between the yearly average fluxes and the spectral indices was identified and is under investigation. Since 2009, the flux from this source has been increasing and the spectral index has been getting harder, showing a correlation between the two. The presence of flaring activities in the GeV energy range was investigated to understand the origin of the variability, assuming different processes. To this end, the flux and the log-flux distributions were fitted with normal distributions; the former is often associated with additive processes and the latter with multiplicative processes. Both cases are unlikely to describe the data. There remains the possibility that these processes are due to quasi-periodic behavior. A quasi-periodicity study was performed using the long-term lightcurve binned in seven-day time bins. The checks on the periodicity of the source using a Discrete Fourier Transform give a hint for a period of 1083 ± 32 days. These results were checked with a different method, a Lomb-Scargle periodogram test, which gives a period of 1081.94 days. Quasi-periodic behaviour is found in the literature, reported in the optical for some blazars and lately at GeV energies for the BL Lac object PG 1553+113. Possible explanations can be related to a quasi-periodic process within the relativistic jet. This behaviour can also be related to geometrical effects, which might be due to a binary black hole system or precession of the supermassive black hole [START_REF] Sillanpaa | OJ 287 -Binary pair of supermassive black holes [END_REF][START_REF] Rieger | On the Geometrical Origin of Periodicity in Blazar-type Sources [END_REF][START_REF] Graham | A possible close supermassive black-hole binary in a quasar with optical periodicity [END_REF][START_REF] Fender | GRS 1915+105 and the Disc-Jet Coupling in Accreting Black Hole Systems [END_REF].
Quasi-periodic behaviour is also seen in X-rays in some microquasars [START_REF] Rau | The 590 Day Long-Term Periodicity of the Microquasar GRS 1915+105 [END_REF]. Similarities between X-ray observations of microquasars and gamma-ray observations of AGN should be investigated in the future. However, the X-rays come from the accretion disk in microquasars and the gamma rays from the jet in blazars; there could nevertheless be some coupling between them. This is an interesting direction for future and much longer observations of AGN. This important hint of periodicity in the BL Lac object B2 1215+30 and the physics responsible for it are going to be further investigated. The temporal variability of B2 1215+30 has been analyzed on time scales from days to years in the GeV energy range with the Fermi-LAT. This study reveals B2 1215+30 as a promising gamma-ray emitting source to study and characterize the emission of this source class at high energies and to better understand the blazar picture.

Chapter 5 Crab Nebula with a decade of H. E. S. S. I observations

The first source ever detected at very-high-energies is the Crab Nebula, a pulsar wind nebula in the Galactic plane, ∼2 kpc from the Earth [START_REF] Weekes | Observation of TeV gamma rays from the Crab nebula using the atmospheric Cerenkov imaging technique [END_REF]. The Crab Nebula was created in the supernova explosion of 1054 and contains a highly magnetized rotating neutron star (the Crab pulsar), which powers a wind of relativistic particles in the nebula [START_REF] Bühler | The surprising Crab pulsar and its nebula: a review [END_REF]. Although it is one of the best-studied objects in the sky, it is an ever-surprising source. Since its first detection at very-high-energies by the Whipple telescope in 1989 [START_REF] Weekes | Observation of TeV gamma rays from the Crab nebula using the atmospheric Cerenkov imaging technique [END_REF], the Crab Nebula has been regularly monitored by the gamma-ray experiments.
It serves as a calibration source for many imaging atmospheric Cherenkov experiments, since it is very bright and no flux variations are predicted by the simple SSC models traditionally used to explain the emission at very-high-energies; indeed, no flux variations have been reported by the IACTs so far. In contrast, the space-borne experiments AGILE and Fermi-LAT have reported the detection of flux variations of the synchrotron component at GeV energies [START_REF] Striani | The Crab Nebula Super-flare in 2011 April: Extremely Fast Particle Acceleration and Gamma-Ray Emission [END_REF][START_REF] Abdo | Gamma-Ray Flares from the Crab Nebula [END_REF], with the latest major flare at energies above 100 MeV reported by the Fermi-LAT in March 2013 [START_REF] Mayer | Rapid Gamma-Ray Flux Variability during the 2013 March Crab Nebula Flare [END_REF]. During this flaring episode, the flux increased by a factor of 20 on time scales of only a few hours, which came as a complete surprise, as it was not expected from theoretical predictions. The H. E. S. S., VERITAS and MAGIC experiments have not reported any evidence of simultaneous flux variations at very-high-energies so far [START_REF] Meagher | Six years of VERITAS observations of the Crab Nebula [END_REF][START_REF] Aleksić | Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes [END_REF]. The unexpected flux variations detected at high energies motivated the search for a similar behavior in the very-high-energy regime with the H. E. S. S. experiment, which has regularly monitored the Crab Nebula since the start of operation in 2003. As the Crab Nebula is in the northern hemisphere, it can only be seen by the H. E. S. S. telescopes at zenith angles larger than 45 degrees, resulting in larger systematic uncertainties. This is in contrast to the optimal observation conditions for the VERITAS and MAGIC experiments, which are both in the northern hemisphere.
In addition, a large fraction of the data is taken from September to October, the rainy season in Namibia. Another source of systematics is the presence of dust in the atmosphere from bushfires during this period. The tight visibility window on the Crab Nebula and the non-optimal atmospheric conditions increase the probability of missing such flares, which are exceptional and important events that help to understand the very-high-energy emission from this source. However, large zenith angle observations increase the effective area at the highest energies, which increases the sensitivity to short flares in this regime. The study presented in this chapter of the very-high-energy spectrum and variability of the Crab Nebula is based on ten years of Crab Nebula observations with the H. E. S. S. experiment. All the good-quality data accumulated with the H. E. S. S. I experiment is used. To increase the sensitivity to flux variations, we correct for the atmospheric transparency, which was shown to have an effect on the flux measurements. During the H. E. S. S. I observations carried out between 2003 and 2005, the energy threshold for the Crab Nebula was 440 GeV [START_REF] Aharonian | Observations of the Crab nebula with HESS [END_REF]. Since then, the data set has been greatly expanded, a fifth telescope was added to the array in 2012, lowering the energy threshold, and new reconstruction techniques have been developed. Combined, these allow a comprehensive study of the Crab Nebula: measuring the energy spectrum in a wider energy range, studying flux variations throughout the years and searching for an extension at very-high-energies. When the H. E. S. S. II telescope was added to the already existing array in 2012, the Crab Nebula was one of the main targets during the commissioning phase. The trigger of H. E. S. S. II supports mono and stereo observations; the advantages of this trigger are seen in the analysis and reconstruction results.
Observations taken with CT1 to CT5 enable the reconstruction of lower-energy events, bridging gamma-ray astronomy from ground and space. The observations with H. E. S. S. II are covered in the next chapter. In this chapter, Section 5.1 introduces the Crab Nebula with a historical overview, followed by a description of the H. E. S. S. data on the Crab Nebula in Section 5.2. The analysis results are presented in Section 5.3. The measured spectrum and the variability studies are given in Sections 5.4 and 5.5 respectively. The atmospheric transparency effect on the flux measurements is described in Section 5.6. A summary and a discussion are found at the end of the chapter, in Section 5.7.

Introduction

In 1054, Chinese astronomers recorded the presence of a new star above the southern horn of the Taurus constellation [START_REF] Clark | The historical supernovae [END_REF][START_REF] Duyvendak | Further Data Bearing on the Identification of the Crab Nebula with the Supernova of 1054 A.D. Part I. The Ancient Oriental Chronicles [END_REF]. It was visible during day time for about a month, with a brightness of about six times that of Venus, as outstanding as the full Moon [START_REF] Collins | A Reinterpretation of Historical References to the Supernova of A.D. 1054 [END_REF]. This "guest star", as the Chinese astronomers called it, started to fade after six months. After one year it disappeared and was not seen again with the naked eye until the invention of telescopes. In 1731, John Bevis observed the nebula of this explosion with an optical telescope. The comet hunter Charles Messier mistook this nebula for a comet and listed it as the first entry (M1) in his "Catalogue of Nebulae and Star Clusters", published in 1758 (see [START_REF] Hester | The Crab Nebula: An Astrophysical Chimera [END_REF] for a review).
In 1844, the nebula was named the Crab Nebula by William Parsons, third Earl of Rosse, after he discovered the filaments and drew a sketch resembling a crab, as shown in Figure 5.2a. We now understand that this "guest star" was the brilliant flash of a supernova explosion. Throughout the years, astronomers made more detailed observations and measurements. They found an expanding nebula and a central, rapidly rotating pulsar, the Crab pulsar. Together they form the system nowadays known as the Crab Nebula (see Figure 5.3). The Crab Nebula is a surprisingly unusual pulsar wind nebula. The synchrotron component of the Crab Nebula extends up to high energies, where it is found to be variable. The emission from the pulsar remained constant during the flares, indicating that the flares come from the nebula. During these episodes, the synchrotron component exceeded its average luminosity by up to a factor of 30. The unexpected gamma-ray flares have broadened the knowledge of this source but also challenged the theoretical models, where complex models have to be considered to explain the observations. Another peculiarity of the Crab Nebula is the Crab pulsar and the detection of pulsed emission up to TeV energies. So far, the Crab pulsar has been detected by MAGIC [START_REF] Albert | VHE γ-Ray Observation of the Crab Nebula and its Pulsar with the MAGIC Telescope [END_REF] and VERITAS [START_REF] Mccann | Detection of the Crab Pulsar with VERITAS above 100 GeV [END_REF], but not by the H. E. S. S. experiment. The observational conditions, the zenith angle and the high background make it challenging to reach the low energy threshold needed to detect the pulsar with H. E. S. S. II. However, the Crab pulsar is beyond the scope of this thesis.
Ever since the Whipple telescope detected the first very-high-energy gamma rays from the Crab Nebula [START_REF] Weekes | Observation of TeV gamma rays from the Crab nebula using the atmospheric Cerenkov imaging technique [END_REF], the succeeding IACTs have continued to monitor the Crab Nebula and measured its energy spectrum over a wide energy range. MAGIC measured the energy spectrum over three decades in energy, from 50 GeV to almost 30 TeV [START_REF] Aleksić | Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes [END_REF], VERITAS reported a spectrum measurement from 115 GeV to 42 TeV [START_REF] Meagher | Six years of VERITAS observations of the Crab Nebula [END_REF], and the published H. E. S. S. I spectrum extended from 440 GeV to 40 TeV [START_REF] Aharonian | Observations of the Crab nebula with HESS [END_REF]. Despite all these measurements, the puzzle of the very-high-energy emission is incomplete. To better understand the picture, the Crab Nebula spectrum is investigated using all the available H. E. S. S. data, described in the following.

Data Set

Hardware problems can interrupt the data taking, but the data stored before the problems can be used. Other problems which do not interrupt the data taking but have to be taken into account before the analysis are related to faulty drawers or dead pixels. The telescope multiplicity, zenith angles, offset angles and other parameters differ from one run to another, and the response of the instrument depends strongly on these parameters. It is highly important to select good-quality data after calibration; this selection is done to remove events caused by noise and to reduce the systematics, improving the physics results. Additionally, combining observations spread over a period of ten years might introduce other systematics: the aging of the optical system affects the Cherenkov light collection.
The optical efficiencies of the telescopes degrade with time, yielding a lower light-collection efficiency. All these parameters and their effects are checked to obtain more reliable scientific results and to reduce the systematic effects they can introduce. Table 5.1 summarizes the details of the observations of the Crab Nebula with the H. E. S. S. I telescopes. The selection criteria applied to the H. E. S. S. I data are explained in the following.

Data Quality Selection

Before the data analysis and the extraction of scientific results, a careful data selection is needed. During the calibration process (see Section 3.4), run quality parameters are stored for each observation run, and the run selection is built on these parameters. This selection is based on pre-defined standard criteria, which depend on the reconstruction technique. As described in Chapter 3, different event reconstructions are possible depending on the trigger type. The trigger system of the H. E. S. S. I telescopes is discussed in detail in [START_REF] Funk | The trigger system of the H.E.S.S. telescope array [END_REF]. For the spectrum and variability studies in this work, a custom selection was adopted. The selection of data based on the parameters given in Table 5.2 is the first step before the analysis procedure. On top of the standard data selection criteria, additional cuts were applied. A limit on the off-axis angle was set at 0.8 degrees, to avoid problems in the spectrum and lightcurve calculation caused by the non-stable effective area at large offset angles. Also, runs with zenith angles larger than 55 degrees were excluded: when including runs with larger zenith angles, the effective area becomes unstable, which is then reflected in the spectrum, introducing systematic effects at the energy threshold. Additionally, other checks were performed for each observation run individually.
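The run-selection logic can be sketched as a simple filter; the `Run` fields and the example run list below are hypothetical, and only the cut values (zenith ≤ 55°, off-axis ≤ 0.8°, homogeneous COG map) come from the text:

```python
from dataclasses import dataclass

@dataclass
class Run:
    run_id: int
    zenith_deg: float      # mean zenith angle of the run
    offset_deg: float      # off-axis angle of the source
    cog_ok: bool           # homogeneous center-of-gravity map in all telescopes
    duration_h: float

def passes_selection(run: Run) -> bool:
    """Custom cuts from the text on top of the standard quality criteria."""
    return run.zenith_deg <= 55.0 and run.offset_deg <= 0.8 and run.cog_ok

# Hypothetical run list for illustration only
runs = [
    Run(20031, 48.0, 0.5, True, 0.47),
    Run(20032, 58.0, 0.5, True, 0.47),   # rejected: zenith angle too large
    Run(20033, 50.0, 1.2, True, 0.47),   # rejected: off-axis angle too large
    Run(20034, 46.0, 0.5, False, 0.47),  # rejected: faulty drawers bias the COG map
]
good = [r for r in runs if passes_selection(r)]
exposure = sum(r.duration_h for r in good)
print(f"{len(good)} good runs, {exposure:.2f} h live time")
```

In the actual analysis this kind of filter, applied to the full run database, yields the 113 runs and 46.6 h quoted above.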
For each run, the Center of Gravity (COG) map of the participating telescopes was checked. The expected distribution over the camera is more or less homogeneous; during some observation runs, faulty or non-operational drawers cause holes in the COG, biasing the final map. Therefore, this check was performed for every telescope individually, and only runs with homogeneous COG maps were kept. This resulted in 113 H. E. S. S. I observation runs, corresponding to an exposure of 46.6 h, taken between 2003 and 2013. Finally, six runs from 2006 passing all the previously applied run quality selection were excluded due to an incorrect pointing model. This mispointing leads to an incorrect reconstruction, and hence to incorrect results. If the problem turns out to be solvable, correcting the pointing model for these observations could make the data available again, but this can only be achieved through significant software development. The run distributions based on the month of observation for the full data set, without any selection criteria applied, are shown in Figure 5.5, along with the zenith angles of the remaining good runs. The latter runs (unless specified differently) are used for the study and results described next.

Analysis Results

The selected data was analysed with the Model Analysis, using the Std and Loose cuts to find the configuration which provides the best analysis results. An ON region of 0.25 degrees was defined around the source to avoid a possible spill-over into the OFF regions used for the background estimation. The background was estimated with the Reflected Region method (described in Section 3.6). The number of excess events and the significance are calculated from Equations 3.8 and 3.9. The distributions of ON, OFF and excess events are traditionally visualized in θ² histograms, which represent the squared radial distribution of events. The θ² distributions for the here-discussed configurations are presented in Figure 5.7.
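Assuming that Equations 3.8 and 3.9 refer to the standard excess and the Li & Ma (1983, Eq. 17) significance, which is the common choice with the Reflected Region background method (an assumption here), the calculation can be sketched with illustrative counts:

```python
import numpy as np

def excess_and_significance(n_on, n_off, alpha):
    """Excess counts and Li & Ma (1983, Eq. 17) significance.

    alpha is the ON/OFF exposure ratio, e.g. 1/n_regions for the
    Reflected Region background method.
    """
    excess = n_on - alpha * n_off
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    sig = np.sign(excess) * np.sqrt(2.0 * (term_on + term_off))
    return excess, sig

# Illustrative numbers, not the actual Crab Nebula counts
excess, sig = excess_and_significance(n_on=5000, n_off=3000, alpha=1.0 / 7.0)
print(f"excess = {excess:.0f}, significance = {sig:.1f} sigma")
```

With seven reflected OFF regions (alpha = 1/7), a strong source like the Crab Nebula yields a significance of well over 100σ for such counts.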
The numbers of events n_ON and n_OFF from the chosen ON and OFF regions, together with the other analysis details, are summarized in the corresponding table. As expected, the number of events is larger for the Loose cut configuration than for the Std cuts, due in particular to a lower threshold, but this comes at the price of a lower signal-to-background ratio. The Crab Nebula significance map and the significance distribution of this map are shown in Figure 5.7. As seen from Figure 5.7b, the mean and sigma for the regions of the map far from the Crab Nebula are almost 0 and 1 respectively, indicating a good background subtraction. The same check was performed for the Loose configuration and shows a similar behavior. These results are used to derive the energy spectrum of the Crab Nebula, which is described next.

Differential Energy Spectrum

When reconstructing gamma-ray events, the direction and energy are the most important parameters to be measured. The energy information is used to reconstruct the spectrum of the source, which gives important insights into the acceleration mechanisms of the gamma-ray source. The Crab Nebula spectrum, first measured by the Whipple telescope at energies above 700 GeV, has now been extended up to 50 TeV [START_REF] Meagher For The | Six years of VERITAS observations of the Crab Nebula [END_REF][START_REF] Aleksić | Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes [END_REF]. In general, the spectrum of the Crab Nebula as measured by the IACT experiments is compatible with a log-parabola spectral shape. The differential energy spectrum measured by the H. E. S. S. experiment in 2006 was best described by an exponential cut-off spectral shape [START_REF] Aharonian | Observations of the Crab nebula with HESS [END_REF]. This measurement was performed with data from 2003 (the commissioning phase with three telescopes), 2004 and 2005.
Since then, the statistics have increased and more sophisticated reconstruction techniques have been developed for the H. E. S. S. data. We used the extended data set to lower the energy threshold, access a new energy range and perform a precise spectrum measurement of the Crab Nebula. The spectrum was measured using the "forward-folding" technique described in Section 3.6. To exploit a new energy range, the Std and Loose cuts were used. The energy threshold for the events passing the cuts was set to the energy where the effective area reaches 20% of its maximum for the corresponding observation parameters. Motivated by the previous measurements, the spectrum was fitted with three different spectral shapes: a power-law, a curved power-law (log-parabola) and a power-law with exponential cut-off. The table gives the details of the best-fit values obtained in this work and also a summary from the other IACT measurements.

Table columns: Fit, Cuts, E_min [TeV], E_max [TeV], N_0 [×10⁻¹¹ cm⁻² s⁻¹ TeV⁻¹], Γ, β, E_0 [TeV], E_c [TeV], χ²/ndf.

The spectrum is fitted with a power-law, a log-parabola and a power-law with exponential cut-off. The energy range of the spectrum fit (E_min to E_max), the normalisation N_0, the reference energy E_0, the spectral index Γ, the curvature β and the other parameters of the fit are summarized. Spectrum parameters from the Whipple [START_REF] Carter-Lewis | Spectrum of TeV Gamma rays from the Crab Nebula [END_REF], CAT [START_REF] Finley | The spectrum of TeV gamma rays from the Crab Nebula [END_REF], HEGRA [START_REF] Aharonian | The Crab Nebula and Pulsar between 500 GeV and 80 TeV: Observations with the HEGRA Stereoscopic Air Cerenkov Telescopes [END_REF], MAGIC [START_REF] Aleksić | Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes [END_REF], VERITAS [START_REF] Meagher For The | Six years of VERITAS observations of the Crab Nebula [END_REF] and H. E. S. S.
I [START_REF] Aharonian | Observations of the Crab nebula with HESS [END_REF] experiments are shown for comparison. After fitting the spectrum with the different spectral shapes, a statistical method is needed to decide which is best adapted to describe the data. A likelihood-ratio test to compare two models requires them to be nested, i.e. the more complex model can be transformed into the simpler one by imposing a set of constraints on the parameters. The power-law and log-parabola are nested, and so are the power-law and exponential cut-off models, but the log-parabola and exponential cut-off models are not. The log-parabola and exponential cut-off fits are therefore compared with the reduced χ² of the fit. Given the fit results for these models (see Table 3.14b), the data is described best by a log-parabola spectrum:

dN/dE = (3.26 ± 0.02) × 10⁻¹¹ (E/1.2 TeV)^[−(2.45 ± 0.01) − (0.18 ± 0.01) ln(E/1.2 TeV)] TeV⁻¹ cm⁻² s⁻¹. (5.1)

The quoted errors on the spectrum parameters are statistical uncertainties only. The spectrum corresponds to the Std cut configuration, and Figure 5.8 shows the reconstructed energy spectrum of the Crab Nebula, measured by H. E. S. S. I in the energy range from 480 GeV to 62.4 TeV. The spectrum is best described by a log-parabola spectral shape; the 1σ confidence interval of the fitted shape is plotted as a solid red line along with the spectral points, shown in black. The residuals, defined as (N_obs − N_exp)/N_exp from the measured events N_obs in an energy interval and the events N_exp expected from the best-fit model, are shown in the panel below the spectrum. The uncertainties are statistical only.
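The best-fit log-parabola of Equation (5.1) can be evaluated numerically, e.g. to obtain the integral flux above 1 TeV (central values only; the upper integration bound of 100 TeV is an arbitrary choice of this sketch, chosen well above the measured range):

```python
import numpy as np
from scipy.integrate import quad

# Central values of the best-fit log-parabola, Eq. (5.1)
N0 = 3.26e-11        # cm^-2 s^-1 TeV^-1
E0 = 1.2             # TeV, reference energy
GAMMA = 2.45         # spectral index
BETA = 0.18          # curvature

def dnde(energy_tev):
    """Differential flux dN/dE in cm^-2 s^-1 TeV^-1."""
    x = energy_tev / E0
    return N0 * x ** (-GAMMA - BETA * np.log(x))

# Integral flux above 1 TeV in cm^-2 s^-1 (100 TeV upper bound is arbitrary)
flux_gt_1tev, _ = quad(dnde, 1.0, 100.0)
print(f"dN/dE(1 TeV) = {dnde(1.0):.2e} cm^-2 s^-1 TeV^-1")
print(f"F(>1 TeV)    = {flux_gt_1tev:.2e} cm^-2 s^-1")
```

This is the spectral model used below when computing the run-by-run integral flux above 1 TeV for the lightcurve.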
It is possible to lower the energy threshold further, but the systematics need to be understood and controlled. The energy threshold for this analysis is set at 20% of the maximum effective area. As the effective area depends in particular on the zenith angle, and given the zenith angle range of the Crab Nebula with H. E. S. S., it is challenging to lower the energy threshold. For the measurement of the Crab Nebula spectrum, two independent analysis frameworks, ParisAnalysis (PA, this work) and HAP (by J. Hahn), are involved, which is a standard procedure for cross-checking the results before publication. Since they rely on independent calibration schemes, the intersection run list is used (99 runs). This results in a more stable background control close to the energy threshold and hence allows the use of 15% of the maximum effective area as the threshold for the spectrum measurement. The spectrum is shown in Appendix B (Figure B.1) in the E^2.5 dN/dE representation for a better comparison. The bottom panel of Figure B.1 shows the relative difference between the "all-world" Crab Nebula spectrum, the average spectrum defined from the published measurements by MAGIC and VERITAS, and the here-discussed H. E. S. S. I measurements from PA and HAP. The relative difference for each spectrum is defined as (F_i − F)/F, where F stands for the average "all-world" spectrum. Within the scope of this thesis, the goal is a precise spectrum measurement, and having a large data set that allows an extension to the very-high-energies is highly important; the latter is of particular importance for the variability studies presented next. For this reason, the spectrum measurement discussed here uses the run list from ParisAnalysis, not the intersection run list between the two analysis frameworks. A discussion of the spectrum is given in Section 6.5, which includes measurements with H. E. S. S. I and H. E. S. S. II for a better overview.
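The threshold definition used above (the energy where the effective area reaches a given fraction of its maximum) can be sketched with a toy effective-area curve; the curve shape below is purely illustrative, not a H. E. S. S. instrument response:

```python
import numpy as np

def energy_threshold(energies, aeff, fraction=0.2):
    """Energy where the effective area first reaches `fraction` of its maximum.

    Linear interpolation in log-energy between the bracketing bins.
    """
    target = fraction * np.max(aeff)
    idx = np.argmax(aeff >= target)          # first bin at or above the target
    if idx == 0:
        return energies[0]
    x0, x1 = np.log10(energies[idx - 1]), np.log10(energies[idx])
    y0, y1 = aeff[idx - 1], aeff[idx]
    log_e = x0 + (target - y0) / (y1 - y0) * (x1 - x0)
    return 10.0 ** log_e

# Toy effective-area curve rising steeply and then saturating
E = np.logspace(-1, 2, 60)                   # 0.1 to 100 TeV
A = 1e5 / (1.0 + (0.5 / E) ** 3)             # m^2, saturates at 1e5

print(f"20% threshold: {energy_threshold(E, A, 0.20):.2f} TeV")
print(f"15% threshold: {energy_threshold(E, A, 0.15):.2f} TeV")
```

Lowering the fraction from 20% to 15% moves the threshold to lower energies, which is why the stabler background control of the cross-checked run list is needed before adopting the looser definition.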
A detailed study of the TeV flux evolution with time is described next.

Long-Term Variability Studies

Although the Crab Nebula is considered a standard candle in gamma-ray astronomy, several flares at energies above 100 MeV were reported by the space-borne satellites. These flares showed that the flux variations come from the end part of the synchrotron emission, accompanied by a hardening of the spectrum. Generally, two arguments can be invoked to explain the high-energy flares: they can be related either to an enhancement of the parent electron population or to changes in the magnetic field. If the flares are due to a variation in the electron injection, then for a flare detected in the synchrotron component one expects a corresponding flare in the inverse Compton component. Hence, flaring activities at TeV energies from the Crab Nebula are not excluded. This study profits from all H. E. S. S. observations of the Crab Nebula to search for variability in the TeV energy range. As satellites with good sensitivity, i.e. AGILE and Fermi-LAT, were launched a few years after the H. E. S. S. experiment started to operate, this study also gives information about the early years of operation of H. E. S. S. The lightcurve is derived with the method described in Chapter 3. The integral flux above 1 TeV is calculated on a run-by-run basis, from October 23, 2003 up to March 14, 2013, corresponding to Modified Julian Dates (MJD) 52935-56365. Figure 5.9 shows the evolution of the integral flux with time over this range; the plotted error bars are statistical only. A χ² test was performed to check whether the flux is compatible with a constant C. The result, χ²/NdF = 583.2/142, corresponds to a p-value below 0.0001, i.e. a very low probability of obtaining these measurements if the assumption of constant flux is correct.
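The constant-flux χ² test can be reproduced in a few lines; the lightcurve below is synthetic, chosen to have 143 points so that NdF = 142 as in the text, with intrinsic scatter exceeding the statistical errors:

```python
import numpy as np
from scipy import stats

def constant_flux_test(flux, err):
    """Chi-square test of a run-by-run lightcurve against a constant flux.

    The best-fit constant is the inverse-variance weighted mean; the
    p-value is the chance probability of the observed scatter if the
    flux were truly constant.
    """
    flux = np.asarray(flux, dtype=float)
    err = np.asarray(err, dtype=float)
    w = 1.0 / err**2
    c = np.sum(w * flux) / np.sum(w)          # weighted mean = best constant
    chi2 = np.sum(((flux - c) / err) ** 2)
    ndf = flux.size - 1
    pval = stats.chi2.sf(chi2, ndf)
    return c, chi2, ndf, pval

# Toy lightcurve with scatter larger than the statistical errors
rng = np.random.default_rng(3)
err = np.full(143, 0.1)
flux = 1.0 + 0.2 * rng.standard_normal(143)

c, chi2, ndf, pval = constant_flux_test(flux, err)
print(f"chi2/ndf = {chi2:.1f}/{ndf}, p = {pval:.1e}")
```

As in the measured lightcurve, a χ² far above the number of degrees of freedom rejects the constant-flux hypothesis, but systematic errors must still be folded in before claiming intrinsic variability.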
Since only statistical errors were considered, the systematic error has to be taken into account before making any statement about flux variations. The systematic error on the flux for H. E. S. S. measurements has been estimated to be 20 % [START_REF] Aharonian | Observations of the Crab nebula with HESS[END_REF]. The low fluxes of some particular runs (as seen between MJD 54500 to 55500 in Figure 5.9) were investigated. Systematic effects can be introduced by the instrument, the reconstruction or external effects such as atmospheric variations. Given the spread of the observations over ten years, another suspect is a change of the optical efficiencies due to the degradation of the mirror reflectivity. The distributions of the integral fluxes with zenith angle, off-axis angle and optical efficiency were checked and are scattered around the mean (see Figure 5.10). There is no evidence of an apparent dependency which would introduce a bias. After all the detailed checks carried out, the presence of low fluxes was not found to be caused by known instrument or reconstruction problems. Additionally, the flux distribution with the number of excess events over the exposure time was checked and shows the expected linear increase, as shown in Figure 5.11. To understand and reduce the systematic effects, a possible flux attenuation due to varying atmospheric transparency has been studied. Details are given in the following.

Atmospheric Transparency Effect

The Cherenkov Transparency Coefficient (TC) was developed to measure the atmospheric transparency and identify H. E. S. S. runs taken under non-optimal atmospheric conditions. The data analysis techniques developed for imaging Cherenkov telescopes rely strongly on Monte Carlo simulations, which assume a particular atmospheric model. A deviation of the atmosphere from this model introduces a systematic effect on the measured flux. Normally, the monitoring of atmospheric aerosols is done by a LIDAR installed on the H. E. S. S.
site. Unfortunately, the LIDAR was not working for the data set used here. Monitoring only the trigger rate from a given source does not allow to distinguish whether changes are caused by a non-optimal atmospheric quality or are instrument related. A study performed on H. E. S. S. data developed a quantity, the transparency coefficient, to identify trigger-rate changes caused by large-scale atmospheric absorption [START_REF] De Los Reyes | Influence of aerosols from biomass burning on the spectral analysis of Cherenkov telescopes[END_REF]. The coefficient was first introduced and developed by R. De Los Reyes et al. in the HAP analysis framework, and was later implemented in the ParisAnalysis framework by C. Mariaud. The transparency coefficient from ParisAnalysis is used for the study presented here. The transparency coefficient depends on the trigger rate R, the mean PMT high gain over the camera g, and on the optical efficiency µ. It is calculated for each telescope from the following formula:

tc_i = R_i^(1/1.7) / (µ_i g_i),   (5.2)

and averaged over the telescopes by taking into account the uncertainties:

TC = ( Σ_{i=1}^{n} tc_i / dtc_i² ) / ( Σ_{i=1}^{n} 1 / dtc_i² ),   (5.3)

where n is the number of telescopes participating in a given run. The presence of thin layers of clouds, dust or fumes affects the transparency of the atmosphere, attenuating the Cherenkov photon density at ground level and resulting in an underestimated particle energy. Hence the quality of the atmosphere affects the outcome of the results, in particular when studying flux variations with time. The TC values for the full Crab Nebula data set, taken over more than ten years, are shown in Figure 5.12. It can be seen that some runs have low TC values, some as low as 0.4. Two strategies are used for the study presented here: the first is to identify and reject runs taken under bad conditions, and the second is to try to correct for this effect.
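A sketch of the per-telescope coefficient and its uncertainty-weighted average is given below. The 1/1.7 exponent follows the empirical trigger-rate scaling of the transparency-coefficient definition, and the overall normalization that brings TC to roughly 1 under good conditions is omitted here; numerical inputs are illustrative:

```python
# Sketch of the transparency coefficient of Eqs. 5.2-5.3. The exponent 1.7 is
# the empirical trigger-rate scaling; any overall normalization is omitted.

def telescope_tc(rate, mu, gain, exponent=1.7):
    """Per-telescope coefficient: tc_i = R_i**(1/1.7) / (mu_i * g_i)."""
    return rate ** (1.0 / exponent) / (mu * gain)

def average_tc(tcs, dtcs):
    """Uncertainty-weighted average over the n telescopes of a run (Eq. 5.3)."""
    num = sum(t / d ** 2 for t, d in zip(tcs, dtcs))
    den = sum(1.0 / d ** 2 for d in dtcs)
    return num / den
```

A lower trigger rate at fixed gain and optical efficiency thus directly yields a lower per-telescope coefficient, which is the signature of atmospheric absorption.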
To study this effect on the Crab Nebula flux, all the available data was used. For an initial study, the run selection was based on only two criteria, before performing the study with a good run list to minimize other systematic effects. The two criteria were: a broken-pixel fraction below 15 % and a multiplicity of at least two telescopes. The whole data set was divided into categories based on the TC value: 80-90 %, 90-95 %, 95-100 % and 100-105 %. A separate analysis was performed on each of the sub-data sets with the Std configuration and the integral flux was calculated. Figure 5.13 shows the integral fluxes versus time for both the initial run selection and the good run list in the different TC categories. The atmosphere has a non-negligible impact on the flux attenuation for TC below about 85 %. To clarify, the TC value of 105 % is an artifact of the calculation method, which is based on an empirical formula; this value refers to perfect atmospheric conditions. Based on those results, the runs for which the atmosphere had TC < 90 % are considered as not good, and an additional cut is introduced to investigate the residual systematic uncertainties. Only runs that fulfill the H. E. S. S. I quality selection criteria and have TC > 90 % are considered. The results are shown in Figure 5.14. By taking the atmospheric coefficient into account to reject bad runs when studying the flux evolution with time, the systematic effects and the excess variance are reduced. With a χ² test (see Table 5.5), these results can be compared with the previous ones, where the transparency of the atmosphere was not considered in the run selection. The value of the reduced χ² shows the improvement: the flux measurements are spread more uniformly around the mean. The improvement can also be seen in the pull distributions before and after the TC cut in Figure 5.15. In the following, possible methods to correct for this effect are discussed.
Correcting the Flux for Atmospheric Transparency

Introducing the transparency coefficient as a quality cut when studying the flux time evolution improves the results and reduces the excess variance, at the cost of statistics. Based on Figure 5.13, the TC was introduced as a data-quality cut by accepting only runs with TC > 0.9. The goal is to recover the data taken under non-optimal weather conditions, which had to be excluded before due to the effects introduced in the flux measurements. When performing variability studies, in particular at short time scales, each run may contain important information. To keep the data, the possibility to correct for this effect was studied. The attenuation of the Cherenkov light by the atmosphere can lead to an underestimated energy reconstruction. If E_reco and E_true are the reconstructed and the true particle energy, with E_true ∝ E_reco/TC, the differential flux is found to be:

dN/dE_reco ∝ E_reco^(−Γ) TC^(Γ−1),   (5.4)

which leads to a change of the normalization with the transparency coefficient, described by a power-law function (as shown in [START_REF] De Los Reyes | Influence of aerosols from biomass burning on the spectral analysis of Cherenkov telescopes[END_REF]). At first order, the correction of the flux is done assuming a linear correlation between the flux and the TC. In this case, the flux of each observation run is divided by its corresponding TC value. The resulting lightcurve can be seen in Figure 5.16. The errors on the flux are propagated as follows:

ΔF_corr. = √( (ΔF/F)² + (ΔTC/TC)² ) × F/TC   (5.5)

Linearly accounting for the atmospheric transparency causes a shift of the flux normalization [START_REF] De Los Reyes | Influence of aerosols from biomass burning on the spectral analysis of Cherenkov telescopes[END_REF].
This hinted at a possible nonlinear dependence of the integral flux on the transparency coefficient, F ∝ TC^α, in which case the flux correction and the error propagation should be done as follows:

F_corr. = F / TC^α,   ΔF_corr. = √( (ΔF/F)² + α² (ΔTC/TC)² ) × F/TC^α   (5.6)

The projected lightcurves for the uncorrected, linearly corrected and α = 1.7 power-law corrected flux are shown in Figure 5.17. The projections are fitted with Gaussians in an unbinned likelihood fit. It is evident that correcting for the atmospheric transparency reduces the systematic uncertainty, as can be seen from the reduced standard deviations of the corrected flux fits. However, one cannot conclude whether the linear or the power-law dependence of the flux on the TC is the better model to correct the flux for the data set used here.

Systematic Flux Uncertainty

To check whether the flux is compatible with a constant, a χ² test is commonly used. We include a systematic error in the definition of the χ², which is proportional to the constant flux of the source C with a proportionality constant α. This is given as follows:

χ² = Σ_{i ∈ data} (x_i − C)² / (σ_i² + α²C²).   (5.7)

If the Crab Nebula flux is really constant, this should result in χ² ∼ N_DF. Requiring that this condition is satisfied, the systematic error is estimated to be α = 16.1 %. Such a value is acceptable for H. E. S. S. I flux measurements; it slightly improves on the systematic error of 20 % quoted in the previously published Crab Nebula results [START_REF] Aharonian | Observations of the Crab nebula with HESS[END_REF], and is approximately what would be expected given the uncertainties in the Monte Carlo simulation. With a stricter data selection, it is thus possible to reduce the systematic effects on the flux measurements. This estimation is based on the assumption that the flux from the Crab Nebula is constant, but the possibility of significant flux variations is not excluded. Table 5.5 summarizes the results for the different data sets.
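The corrections of Eqs. 5.5-5.6 and the systematic-fraction estimate of Eq. 5.7 can be sketched as follows; α = 1 reproduces the linear case, and the root of χ²(α) = N_DF is found by bisection. All numerical inputs used with this sketch are illustrative, not H. E. S. S. measurements:

```python
import math

# Sketch of the TC flux correction (Eqs. 5.5-5.6) and of the systematic
# fraction of Eq. 5.7, solved by bisection so that chi2/NDF = 1.

def correct_flux(flux, dflux, tc, dtc, alpha=1.0):
    """Return (F/TC**alpha, propagated error); alpha=1 is the linear case."""
    f_corr = flux / tc ** alpha
    df_corr = math.sqrt((dflux / flux) ** 2 + (alpha * dtc / tc) ** 2) * f_corr
    return f_corr, df_corr

def systematic_fraction(fluxes, errs, const):
    """Smallest alpha with chi2(alpha) <= NDF, where alpha*const is added in
    quadrature to the statistical errors (cf. Eq. 5.7)."""
    ndf = len(fluxes) - 1

    def chi2(a):
        return sum((x - const) ** 2 / (e ** 2 + (a * const) ** 2)
                   for x, e in zip(fluxes, errs))

    lo, hi = 0.0, 10.0
    for _ in range(200):          # bisection: chi2 decreases with alpha
        mid = 0.5 * (lo + hi)
        if chi2(mid) > ndf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because χ²(α) is monotonically decreasing in α, the bisection converges to the unique value at which the test is just compatible with a constant flux.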
The data set I corresponds to the lightcurve presented in Figure 5.9, where the transparency coefficient was not taken into account. Data set II is the lightcurve with a transparency-coefficient cut of 0.9 (see Figure 5.14). The data sets III and IV correspond to the corrected flux assuming a linear and a power-law dependence, respectively. As shown in Table 5.5, the systematic uncertainty is lower when the flux is corrected by the TC. The two cases in which the flux was corrected by a linear and a power-law dependence were further investigated; their similar performance was found to be related to the TC distribution of this particular data set, which peaks at unity.

Season Dependence on Flux Measurements

To account for any seasonal variation, due for instance to variations of the atmospheric profile, another check was performed on the Crab Nebula flux. The data were divided into two categories corresponding to the observation season: data taken from September to December and data taken from January to April were grouped together. The former is called the first season and the latter the second season. This division accounts for seasonal changes from September to November, as this period is the rainy season in Namibia. As shown in Figure 5.5, the majority of the data is taken during these months; hence the spectrum would possibly manifest changes if there were a systematic effect due to this. The differential energy spectrum for the two seasons is shown in Figure 5.18. Taking into account the 20 % systematic uncertainty (the published value from H. E. S. S.), no significant change is seen between the spectra derived for the two seasons. The fit results are summarized in Table 5.6.

Discussion and Conclusions

In this chapter, the high-energy emission from the Crab Nebula was investigated with almost a decade of H. E. S. S. I observations using the newest reconstruction techniques.
The differential energy spectrum and the flux variability were studied on long time scales. Observing the Crab Nebula from the southern hemisphere is challenging, but not without prospects. On the one hand, zenith angles larger than 45 degrees enhance systematic effects, in particular atmospheric effects. On the other hand, the effective area increases with the zenith angle, allowing the H. E. S. S. I measurement to extend to very-high energies. The spectrum of the Crab Nebula was reconstructed using the Model Analysis. Three models were fitted to the data: a power-law, a log-parabola and a power-law with exponential cut-off. The Crab Nebula spectrum presented here, measured from 480 GeV to 62.4 TeV, is best described by a log-parabola spectral shape. This spectral shape has already been measured by the VERITAS and MAGIC experiments, whereas the energy spectrum published by the H. E. S. S. collaboration about a decade ago was most compatible with a power-law with exponential cut-off in the energy range from 440 GeV to 30 TeV. The long-term variability of the Crab Nebula at energies above 1 TeV was studied from MJD 53331-56358, putting emphasis on systematic effects from varying atmospheric conditions. Introducing the atmospheric transparency coefficient in the data selection improved the results by reducing the systematic uncertainties. Given that the observation time with IACTs is limited to the night and that their field of view is small, some attempts to correct for this effect were considered so that runs with bad atmospheric transparency could be kept. With a flux correction factor linear in the transparency coefficient, the systematic uncertainties on the flux measurements were reduced to 12 %. This is a significant improvement compared to the 20 % previously quoted by the H. E. S. S. collaboration.
As it is actually the energy measurement which should scale with the transparency coefficient, another way to correct for this effect would be to correct the energy of each event by the transparency coefficient before the analysis level. This study concluded that the integral flux above 1 TeV is stable within the systematic and statistical uncertainties of H. E. S. S.

Chapter 6

Crab Nebula with H. E. S. S. Phase-II Observations

With the installation of the H. E. S. S. II telescope in 2012 on site in Namibia, the H. E. S. S. experiment entered a new phase. The current trigger schemes allow to go down to a few tens of GeV for some sources and to explore a wider energy range of the very-high-energy sky. We profit from this scheme to perform spectrum and variability studies of the best studied object in the sky: the Crab Nebula.

Introduction

The H. E. S. S. I telescope array with its large effective area is adapted for very-high-energy studies. Combined with H. E. S. S. II, the full array allows to measure the Crab Nebula spectrum over a wide energy range. This is also important for variability studies tracing the physical phenomena underlying the GeV flares. Different analysis configurations, namely monoscopic and stereoscopic reconstructions, are used to exploit the lower energy range and to measure the spectrum of the Crab Nebula over a wide energy range. The Crab Nebula spectrum measurement is more challenging in the low energy range: the monoscopic reconstruction of CT5 events suffers in general from a degraded hadron rejection. Reconstructing events with CT1 to CT5 provides a better background separation, with an increased signal over background ratio at the analysis level, at the expense of a higher threshold. Other flares have lately been reported by the spaceborne experiments, i.e. Fermi-LAT and AGILE [START_REF] Buehler | Enhanced gamma-ray activity from the Crab nebula.
The Astronomer's Telegram[END_REF][START_REF] Munar-Adrover | New episode of enhanced gamma-ray emission from the Crab Nebula detected by AGILE[END_REF][START_REF] Cheung | Fermi-LAT confirmation of enhanced gamma-ray activity from the Crab nebula[END_REF]. The latest flare, reported in October 2016 by the Fermi-LAT satellite, increased the chances to understand the origin of the gamma-ray flares by investigating it [START_REF] Cheung | Fermi-LAT confirmation of enhanced gamma-ray activity from the Crab nebula[END_REF]. During this flaring episode, the small H. E. S. S. telescopes were in a re-commissioning phase after the camera upgrade. The Crab Nebula was observed as a Target of Opportunity (ToO) with CT5 and CT1 for several nights. These observations, along with the public Fermi-LAT data, were used to perform the flux variability and correlation studies reported here.

Data Set

The 28 m diameter mirror telescope, installed on site in 2012, was commissioned and inaugurated during 2013. The Crab Nebula was the prime target during the commissioning phase. The observations taken during this period are not included in the reconstruction. The observation conditions are similar to the ones described in Section 5.2 for H. E. S. S. I. After calibration, the data selection criteria depend on the observation strategy and on the scientific goal. Given the different physical trigger modes of the H. E. S. S. II telescope array, introduced in Section 3.6.2, different reconstruction methods exist to analyze the data. Each of them has its own set of selection criteria.
These criteria for the monoscopic reconstruction, which demands the most careful event selection, are given below:

• source location within 0.5-0.7 degrees offset from the camera center
• minimum trigger rate of 1200 Hz with a stability ≤ 10 %
• minimum run duration of 5 minutes
• maximum broken-pixel fraction of 5 %
• zenith angles of 54-60 degrees

Only runs with an atmospheric transparency larger than 80 % were selected. Other external parameters, like the relative humidity or the temperature, are treated similarly to H. E. S. S. I. To select events for the Hybrid reconstruction (CT1-5 events), a mixture of the H. E. S. S. I and Mono selection criteria is applied. A total of 33 runs passed the run selection, and the subsequent results are derived from this data set (unless specified differently). More runs pass the selection criteria for the Hybrid analysis; the choice to use the same run list is made for a better comparison between the results of the different analysis configurations.

Data Analysis

To fully exploit the low- and high-energy events, the data was analyzed with the Combined, Stereo and Mono configurations, using Std and Loose cuts. The Mono reconstruction uses exclusively the CT5 events, whereas the Stereo Hybrid reconstruction uses CT1-5 events. The Combined analysis method combines the monoscopic reconstruction at low energies with the stereoscopic reconstruction at high energies, and was developed to cover a wider energy range. The analysis results and the corresponding cut parameters for the hadron rejection are summarized in Table 6.1. The θ² cuts for Mono, Stereo and Combined were set to 0.015, 0.006 and 0.015 degree², respectively. The θ² distribution for the Mono analysis with Std cuts is shown in Figure 6.2, where the OFF events are uniformly distributed as expected. The θ² histograms for the other analysis configurations were also checked. The main challenge of the H. E. S. S. II data analysis is the background subtraction at low energies.
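The monoscopic selection criteria listed above amount to a simple per-run filter. The dictionary keys below are hypothetical placeholders for the monitoring quantities stored in the actual run database:

```python
# A minimal run-selection filter implementing the monoscopic criteria listed
# above. The dictionary keys are hypothetical, not the real database fields.

def passes_mono_selection(run):
    return (0.5 <= run["offset_deg"] <= 0.7
            and run["trigger_rate_hz"] >= 1200.0
            and run["rate_stability"] <= 0.10
            and run["duration_min"] >= 5.0
            and run["broken_pixel_fraction"] <= 0.05
            and 54.0 <= run["zenith_deg"] <= 60.0
            and run["transparency_coefficient"] > 0.80)
```

A good run list is then simply `[r for r in runs if passes_mono_selection(r)]`.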
Controlling the background systematics at low energies with the Mono reconstruction is challenging; the stereoscopic reconstruction with events from CT1-5 provides a better background subtraction. Additional consistency checks were performed, including the acceptance maps and the distributions of the main background-separation variables (MSSG, Direction Error and Primary Depth). The center-of-gravity maps for every telescope participating in a given run were also checked. These results were used for a spectrum measurement of the Crab Nebula, which is described in the following.

Energy Spectrum with H. E. S. S. II

The spectrum of the Crab Nebula has been measured from H. E. S. S. II data with different analysis configurations, with the same method as for the H. E. S. S. I spectrum measurement described in Section 3.6.4. Only the events above the safe energy threshold are used for the spectrum measurement. For the Mono spectral analysis, the energy threshold is set to 25 % of the maximum effective area, which is higher than for the H. E. S. S. I Stereo analysis due to the larger background systematic effects. For the Stereo and Combined analysis configurations, the threshold was set to 15 %. The spectrum is fitted with a simple power-law and a log-parabola 1. The results of the spectrum fits for the Mono, Hybrid and Combined analyses are summarized in Table 6.4. The Mono analysis provides the lowest energy threshold, whereas the Combined analysis provides a spectrum measurement up to 63 TeV. For all configurations, the data is best described by the log-parabola shape. The superimposed spectra for the Mono, Stereo and Combined analysis configurations, with Std or Loose cuts, are shown in Figure 6.4. The spectral shapes of Combined and Stereo agree in the full energy range, whereas the Mono spectrum is shifted relative to them at the highest energies.
This gap is seen for both the Std and Loose cuts, but for Loose it is more than 20 %, which is under investigation. It could be a systematic effect related to the monoscopic reconstruction at very-high energies. At high energies, the flux uncertainties are larger due to the lack of statistics.

1 log-parabola is also referred to as curved power-law (CPL)

Spectral Energy Distribution

The spectra measured with the H. E. S. S. II and H. E. S. S. I experiments in this work are plotted along with measurements from the other high-energy experiments in Figure 6.5 in an SED representation. The highest-energy points from the Fermi-LAT measurement connect to the low-energy spectral points from the IACT experiments close to the inverse Compton peak. The first spectral points from the H. E. S. S. II Mono analysis provide a link to the Fermi-LAT points as a continuation of the spectral energy distribution. The MAGIC experiment, located on the Canary Islands, observes the Crab Nebula near zenith and has the best observation position to bridge the spectrum measurements from space and from the ground; the data points from the two experiments overlap. The peak of the spectral energy distribution was estimated by MAGIC at (53 ± 3) GeV in a joint fit with Fermi-LAT data [START_REF] Aleksić | Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes[END_REF]. The H. E. S. S. I spectrum from this work extends up to 62 TeV, making it the highest-energy spectrum measurement of the Crab Nebula from all IACTs. The spectrum measured by VERITAS extends up to 42 TeV, even though VERITAS observes the Crab Nebula at zenith angles between 8 and 35 degrees; the telescope configuration, i.e. the collection area and telescope design, is more suited for low-energy measurements than for the high energies [START_REF] Meagher For The | Six years of VERITAS observations of the Crab Nebula[END_REF].
The spectrum measured here is more curved than the spectrum from the other analysis framework; it is still under investigation whether this is due to systematic effects or to physics. A more precise measurement could be achieved with Run-Wise Simulations (RWS), which provide a simulation for each set of run observation parameters, i.e. zenith angle, optical efficiencies, azimuth, night-sky background level, etc. The current scheme uses classical Monte Carlo simulations with predefined ranges of these parameters. RWS are currently being developed for H. E. S. S.

Variability Studies

This part is dedicated to variability studies with H. E. S. S. II, strongly motivated by the puzzle of the origin of the GeV flares. Four major flares detected by the Fermi-LAT are summarized in Table 6.3. During all these flares only the flux from the synchrotron component varied, and the corresponding details are summarized there. The flare amplitudes are compared to the average quiescent synchrotron photon flux of (6.1 ± 0.2) × 10⁻⁷ cm⁻² s⁻¹ from [START_REF] Bühler | The surprising Crab pulsar and its nebula: a review[END_REF]. The spectral energy distributions of the GeV flares reported by the AGILE and Fermi-LAT spaceborne satellites, obtained at the maximum flare level, are plotted in Figure 6.6. As seen from the plot, during the flares the spectrum of the synchrotron component hardens with the increased flux levels. Of all the flares, only the most recent one (at the time of writing), the October 2016 flare, was observed with both the Fermi-LAT and H. E. S. S. II; it was a subject of this thesis and the analysis details are described next.
The Crab Nebula 2016 GeV Flare

The Fermi-LAT and AGILE collaborations reported an increase of the flux from the Crab Nebula on 2016 October 03; flare alerts were posted on the Astronomer's Telegram [START_REF] Cheung | Fermi-LAT confirmation of enhanced gamma-ray activity from the Crab nebula[END_REF][START_REF] Munar-Adrover | New episode of enhanced gamma-ray emission from the Crab Nebula detected by AGILE[END_REF]. The H. E. S. S. telescopes observed the Crab Nebula during this period. The upgrade phase of the CT2-4 telescopes had started at the beginning of the year, so unfortunately three telescopes were not available for observations during this flaring period. Nevertheless, one of the small telescopes (CT1) and the H. E. S. S. II telescope observed the Crab Nebula as a target of opportunity for several days.

H. E. S. S. II Observations

Crab Nebula observations with H. E. S. S. II started in January 2016 and were intensified after the upgrade phase was completed, with many observation runs taken for re-commissioning purposes (see Figure 6.7). The zenith angle range during this period extended to more than 60 degrees, making the analysis of this data challenging. Of all the observation runs taken during 2016 that pass the standard quality selection criteria, 16 runs were taken at zenith angles greater than 60 degrees; these runs were excluded from the analysis. Many runs had other problems or were of too short duration, which also led to their exclusion from the final run list. Additionally, several runs with non-optimal transparency-coefficient values were not taken into account. For the GeV flare, the only public information was the initial flare report on the Astronomer's Telegram by the Fermi-LAT. Other information important for the variability study with H. E. S. S. II, like the flare amplitude or its duration in the GeV range, was not known.
Therefore, an analysis of the Crab Nebula with the publicly available Fermi-LAT data taken during this flaring period was performed; the details are given next.

Fermi-LAT Analysis

As the Crab Nebula analysis with the Fermi-LAT is more challenging compared to extragalactic sources, the analysis of the March 2013 flare was first reproduced to check the credibility of the 2016 results. After obtaining results consistent with the published ones for the 2013 flare, the analysis of the 2016 flare was carried out, as described next.

Background Model File

In order to perform the likelihood analysis, the first step after selecting good-quality data is to determine the background model for the RoI over the entire time range. For this purpose, all known gamma-ray sources from the 3FGL catalog [START_REF] Acero | Fermi Large Area Telescope Third Source Catalog[END_REF], located within 20 degrees of the RoI, were considered. A total of 123 point sources and 2 extended sources were found within the RoI. The extended sources, namely S 147 and IC 443, were modeled by the templates provided with the 3FGL catalog. The background model also accounts for the Galactic and isotropic emission by including the two diffuse components provided by the Fermi-LAT collaboration (iso_source_v05 and gll_iem_v05) in the model. In the default 3FGL-based background model, the parameters of the sources detected with a significance >5σ are left free. As the 3FGL is based on 4 years of data, the short time period analyzed in this work prohibits such significance limits: it would cause convergence problems in the likelihood fit, because for some sources there is not enough statistics. For this reason, only sources listed with a detection significance >12σ were left free.
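The catalog-based choice described above, freeing only the brightest sources, amounts to a simple thresholding step. The field names below are assumptions for illustration, not the actual Fermi science-tools interface:

```python
# Sketch of the background-model choice: free only the spectral parameters of
# sources detected above a significance threshold, keep the rest fixed to
# their 3FGL catalog values. Field names are assumptions, not a real API.

def set_free_parameters(sources, threshold_sigma=12.0):
    for src in sources:
        src["free"] = src["significance"] > threshold_sigma
    return sources
```

In the real model file, the same decision is encoded per source in the XML model passed to the likelihood tool.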
In the previous analyses of the Crab Nebula (see [START_REF] Abdo | Fermi Large Area Telescope Observations of the Crab Pulsar And Nebula[END_REF][START_REF] Mayer | Rapid Gamma-Ray Flux Variability during the 2013 March Crab Nebula Flare[END_REF]), as well as in the 3FGL catalog [START_REF] Acero | Fermi Large Area Telescope Third Source Catalog[END_REF], the Crab Nebula spectrum consists of three independent components. For this Fermi-LAT analysis, the Crab Nebula emission was therefore split into three components accounting for the pulsar, the inverse Compton and the synchrotron emission, named J0534.5+2201i, J0534.5+2201s and 3FGL J0534.5+2201. The Crab Nebula components were modeled as point-like in this analysis, given the Fermi-LAT resolution of 0.1° and the apparent size of 0.03° of the Crab Nebula. The extension of the Crab at very-high energies would require a dedicated study, which goes beyond the scope of this work. Note that H. E. S. S. measures the spectrum well beyond the synchrotron peak and is not able to resolve the pulsar, so the decomposition into the three components is not considered in the H. E. S. S. analyses.

Spectral Analysis

To derive the average spectrum of the Crab Nebula over the time period mentioned above, a likelihood analysis with gtlike2 was performed. In the background model, the components of the Crab Nebula were left free, together with the Galactic diffuse emission components, while all the other sources within 20 degrees were initialized to their 3FGL catalog values. The synchrotron component was modeled by a so-called PowerLaw2, which is normalized to the integrated flux in each bin:

dN/dE = F_synch (−Γ+1) / (E_max^(−Γ+1) − E_min^(−Γ+1)) × E^(−Γ)
(6.2)

The inverse Compton component is modeled by a log-parabola:

dN/dE = Φ_IC (E/E_0)^(−α − β log(E/E_0))   (6.3)

The Crab pulsar spectrum, following [START_REF] Abdo | The Second Fermi Large Area Telescope Catalog of Gamma-Ray Pulsars[END_REF], was parametrized by a smoothly broken power law:

dN/dE = Φ_pulsar (E/100 MeV)^(−p_1) [ 1 + (E/E_b)^((p_2−p_1)/s) ]^(−s),   (6.4)

where p_1,2 are the spectral indices before and after the energy break E_b, and s is the sharpness of the transition between the two slopes. The spectral parameters of the Crab pulsar are kept fixed to their catalog values, whereas the synchrotron and the inverse Compton components were left free. From the Fermi-LAT observations of the previous flares, a constant flux from the Crab pulsar was found, which is assumed here as well to simplify the calculation. Checking whether the Crab pulsar emission really remains constant would require the pulsar ephemeris for this period.

Temporal flux variations

To investigate the temporal flux evolution, the data was divided into time bins of 3-day duration. This choice represents a trade-off between being sensitive to variability and having enough statistics in the time bins where there is no flux enhancement. In each bin, the synchrotron component was modeled as a simple power law. The only parameter left free in the short time bins is the integral flux of the Crab Nebula, with the spectral index fixed to the value obtained for the complete data set. The pulsar spectral parameters were fixed to the catalog values during the fitting procedure. The lightcurve is presented in Figure 6.10. It can be seen that the flare duration was about one month and that the flux peaked on 2016 October 07. The flux points plotted in the lightcurve correspond to the synchrotron component. The average flux of the synchrotron component from MJD 57632-57692 was F_synch = (2.01 ± 0.12) × 10⁻⁶ ph cm⁻² s⁻¹.
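The three spectral shapes of Eqs. 6.2-6.4 can be written down directly. Parameter values used in any example are illustrative, not fit results, and the logarithm in the log-parabola is taken as natural here, which is the convention of the Fermi-LAT LogParabola model:

```python
import math

# The three spectral components of Eqs. 6.2-6.4 as plain functions.

def power_law2(e, f_int, gamma, e_min, e_max):
    """PowerLaw2 (Eq. 6.2): normalized so that the integral over
    [e_min, e_max] equals f_int."""
    norm = f_int * (-gamma + 1.0) / (e_max ** (-gamma + 1.0)
                                     - e_min ** (-gamma + 1.0))
    return norm * e ** (-gamma)

def log_parabola(e, phi_ic, alpha, beta, e0):
    """Eq. 6.3; natural log, as in the Fermi-LAT LogParabola convention."""
    return phi_ic * (e / e0) ** (-alpha - beta * math.log(e / e0))

def smoothly_broken_pl(e, phi_p, p1, p2, e_b, s):
    """Eq. 6.4, with e in MeV and the pivot fixed at 100 MeV."""
    return (phi_p * (e / 100.0) ** (-p1)
            * (1.0 + (e / e_b) ** ((p2 - p1) / s)) ** (-s))
```

A quick sanity check of the PowerLaw2 normalization: for Γ = 2 the analytic integral of norm × E⁻² over [E_min, E_max] indeed returns the input integral flux.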
The flux at the amplitude peak, from MJD 57671-57674, was F_peak = (4.27 ± 0.26) × 10⁻⁶ ph cm⁻² s⁻¹. The flux of the synchrotron component thus reached a factor of five compared to the quiescent flux reported before by the LAT collaboration [START_REF] Buehler | Gamma-Ray Activity in the Crab Nebula: The Exceptional Flare of 2011 April[END_REF].

H. E. S. S.-Fermi lightcurves

The H. E. S. S. II observations taken between September and October resulted in nine good-quality runs, corresponding to an acceptance-corrected live time of 3.6 hours. For the spectral and variability studies discussed here, the Mono analysis configuration with Std cuts was used. The spectrum of the Crab Nebula for the data taken between September and October was fitted by a curved power-law shape, which best represents the shape of the IC peak. The H. E. S. S. spectrum derived simultaneously with the Fermi-LAT observations is shown in Figure 6.9, superimposed on the overall H. E. S. S. II spectrum, and extends from 230 GeV to 10.44 TeV. The corresponding lightcurve shows the run-wise integral flux above 1 TeV, compared with the published measurement [START_REF] Aharonian | Observations of the Crab nebula with HESS[END_REF]. The dashed red line corresponds to the average flux for this period, whereas the black line corresponds to the average flux from 2006. In the bottom part of the plot, the Crab Nebula integral flux of the synchrotron component between MJD 57632-57692, as derived from the Fermi-LAT data, is plotted (blue crosses). The integral fluxes are calculated in 3-day time bins in the range 100 MeV < E < 500 GeV and normalized to the published average synchrotron flux from [START_REF] Buehler | Gamma-Ray Activity in the Crab Nebula: The Exceptional Flare of 2011 April[END_REF]. The dashed blue line corresponds to the average synchrotron flux from the previous Fermi-LAT measurements, (6.1 ± 0.2) × 10⁻⁷ cm⁻² s⁻¹.

Summary and Discussion

The installation of the H. E. S. S.
II telescope in 2012, with new trigger schemes and the sophisticated analysis techniques developed for it, opened up the low GeV energy range to the H. E. S. S. experiment. This chapter was dedicated to observations of the Crab Nebula with H. E. S. S. Phase-II. Even though the experiment was taking data continuously during 2013, the commissioning runs were not used to derive scientific results, due to the continuous changes in the camera configuration during this period. The study performed on the Crab Nebula profited from the current trigger schemes, which allowed lowering the energy threshold, particularly important for spectral and variability studies. All the available H. E. S. S. II Crab Nebula data taken after the commissioning were used to measure the energy spectrum down to 260 GeV. This allows H. E. S. S. to get closer to the inverse Compton peak and to the Fermi-LAT spectral points. The spectrum was best described by a log-parabola function, similar to the shape previously measured by the MAGIC and VERITAS experiments. In combination with the spectrum measured by H. E. S. S. I, presented in Chapter 5, this is the broadest H. E. S. S. Crab Nebula spectrum measurement so far, as it covers more than three decades in energy. All the spectra obtained in this work have been cross-checked with another analysis framework (HAP, done by J. Hahn), using the intersection run list between the two analysis frameworks. The relative differences in the differential flux are compared with the "global" spectrum derived from all IACT measurements. In general, the spectra measured by MAGIC, VERITAS and H. E. S. S. agree well with each other and show the curved shape around the inverse Compton peak. The relative differences of all spectrum measurements are within 30 %, and the measurement from this work exhibits a higher curvature compared to the other measurements.
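The "within 30 %" statement above amounts to a pointwise relative-difference comparison between spectra. A minimal sketch with invented flux points (not the real MAGIC / VERITAS / H. E. S. S. measurements):

```python
import numpy as np

# Illustrative differential-flux points for two measurements (invented values):
E = np.array([0.3, 1.0, 3.0, 10.0])                       # TeV
flux_this_work = np.array([1.1e-10, 2.1e-11, 1.9e-12, 8.0e-14])
flux_global = np.array([1.0e-10, 2.0e-11, 2.0e-12, 9.0e-14])

rel_diff = (flux_this_work - flux_global) / flux_global   # pointwise relative difference
within_30_percent = bool(np.all(np.abs(rel_diff) < 0.30))
```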
B2 1215+30 with almost a decade of Fermi-LAT data

One important feature of BL Lac objects is the flux variation, with different amplitudes and on different time scales, in some cases down to minutes. Such events are important to characterize the high-energy emission from these objects.

Prospects

In nine years of Fermi-LAT data, an almost linear increase of the year-averaged integral flux and a hardening of the spectrum were found. The evolution of the flux was further investigated to understand the origin of the flares. The possibility of additive and multiplicative processes within the emission region was investigated by fitting the fluxes and the log-fluxes with a normal distribution. Neither case describes the data in a satisfactory way. As the source exhibits several flux variations and high states through the years, a quasi-periodic modulation could explain the flaring activities. A discrete Fourier transform of the lightcurve hints at a periodicity of about 1083 ± 32 days. A similar periodicity has been reported in only one blazar, PG 1553+113 [START_REF] Ackermann | Multiwavelength Evidence for Quasi-periodic Modulation in the Gamma-Ray Blazar PG 1553+113[END_REF]. The steady increase of the integrated flux over the long term of nine years and the rich variability pattern make this blazar a promising object for future projects.

Crab Nebula with more than a decade of H. E. S. S. data

The Crab Nebula, one of the best studied objects in the sky, was the subject of this thesis work with H. E. S. S. More than ten years of observations from the H. E. S. S. experiment were used to measure the energy spectrum and perform variability studies on the Crab Nebula. The H. E. S. S. I experiment provides the best opportunities for studies at energies above 10 TeV due to the large collection area. With the installation of the H. E. S. S. II telescope, there is a new possibility to exploit the lower energy range and, with the increased H. E. S. S.
I statistics, to expand the spectrum to even higher energies. Crab Nebula flux measurements with H. E. S. S. are prone to systematic uncertainties due to the large zenith angle observations. This makes a spectrum measurement over a large time scale of ten years particularly challenging. The atmosphere acts as an electromagnetic calorimeter which cannot be calibrated with beam tests, so the reconstruction relies heavily on Monte Carlo simulations assuming an atmospheric model. A deviation of the atmosphere from this model introduces systematic uncertainties. Observations at large zenith angles mean that the Cherenkov light traverses more atmosphere before reaching the camera, emphasizing the importance of correcting for varying atmospheric conditions. In this work, the effect of the atmospheric transparency on the Crab Nebula flux measurements was studied. This represents the first usage of the atmospheric transparency coefficient for large-zenith data with H. E. S. S., including flux corrections. The spectrum of the Crab Nebula presented here, measured with H. E. S. S. Phase-I and Phase-II, extends from 280 GeV up to 62 TeV. The increased data set was used to refine a H. E. S. S. I legacy spectrum, and the H. E. S. S. II data set allows measuring the spectrum with a lower energy threshold, down to 280 GeV. With the monoscopic reconstruction, it is possible to measure the spectrum down to 240 GeV, get closer to the inverse Compton peak and connect with the highest points of Fermi-LAT. The measured spectrum is best described by a log-parabola shape, compatible with the spectral shape measured by the MAGIC and VERITAS experiments, while the spectrum previously measured by the H. E. S. S. collaboration was best described by a power law with an exponential cut-off. Another particularity of the Crab Nebula is the flux variations of the high-energy synchrotron component. With the current scheme of H. E. S.
S., it is possible to study the very-high-energy emission, taking advantage of the large collection area of H. E. S. S. II to bridge the GeV and TeV energy ranges. Among all the flares reported by the Fermi-LAT, only the recent flare of October 2016 was observed by H. E. S. S. II. The analysis of the Fermi-LAT public data reveals a GeV flare lasting for about a month. A simultaneous data analysis was conducted with H. E. S. S. II data. The flaring episode was not completely covered by H. E. S. S., as the small telescopes were being recommissioned after a camera upgrade. Observations carried out between the 29th of September and the 12th of October 2016, with a total of 3.4 h of acceptance-corrected live time, were used for investigating the variability with H. E. S. S. II. The energy spectrum of this period was found to be compatible with the spectrum measured for the complete data set within the systematic uncertainties. The evolution of the integral flux above 1 TeV during this time period gives an excess variance of 15 %. Possible flux variability in the energy range covered by H. E. S. S. is therefore not excluded. Given the observation limitations of ground-based detectors, such flares can also be missed. Given the puzzle of the GeV flares, part of the thesis was dedicated to hunting for TeV flares in 15 years of H. E. S. S. observations. Adding together the H. E. S. S. I and H. E. S. S. II studies on the variability of the Crab Nebula, no evidence of an excess variance larger than 15 % was found in a run-by-run lightcurve. This favours the scenarios that relate the GeV flares to changes in the magnetic field. However, more is to be seen from future observations.

Prospects

The Crab Nebula spectrum measured in this work starts at 280 GeV and extends up to 62 TeV. The current limitations on the lower energy threshold are related to systematic effects.
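The excess variance quoted above measures variability beyond what the statistical errors alone would produce. A minimal sketch of the commonly used normalized (fractional) form, with toy run-wise fluxes rather than the H. E. S. S. data; the exact estimator used in the thesis may differ:

```python
import numpy as np

def fractional_rms(flux, flux_err):
    """Fractional rms variability: sqrt of the excess of the sample variance
    over the mean squared measurement error, normalized by the mean flux."""
    mean = flux.mean()
    excess = flux.var(ddof=1) - np.mean(flux_err ** 2)
    return np.sqrt(excess) / mean if excess > 0 else 0.0

# Toy run-wise integral fluxes: intrinsic scatter on top of measurement errors.
rng = np.random.default_rng(3)
f0 = 2.3e-11                                    # arbitrary mean flux, cm^-2 s^-1
err = 0.05 * f0
flux = rng.normal(f0, 2.0 * err, size=100)      # total scatter = 2x the errors
fvar = fractional_rms(flux, np.full(100, err))
```

With the total scatter set to twice the per-run error, the recovered fractional rms is about sqrt(3) times the relative error, illustrating how measurement noise is subtracted off before quoting an intrinsic variability amplitude.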
One possibility to reduce the systematic effects and lower the energy threshold is the use of run-wise simulations: dedicated simulations of the observational conditions on a run-by-run basis, instead of the averaged values used in classical Monte Carlo simulations. Simulating the zenith angle, azimuth, night sky background and other observable variables for each run would decrease the systematic uncertainty. This would result in a more stable effective area and allow lowering the energy threshold even further. These simulations are still under development and not yet ready for use within the H. E. S. S. analysis framework. The flux correction by the atmospheric transparency coefficient improves the results and reduces the excess variance. Correcting the flux by the transparency coefficient changes the normalisation of the spectrum, which a correction at the event energy level would not. Hence, a more precise correction of this effect would be to correct the energy of the events before the analysis, which would increase the sensitivity. After the upgrade phase was successfully completed and H. E. S. S. became fully operational again, the possibility to detect such flares is higher. The H. E. S. S. experiment will continue to monitor the Crab Nebula in the future. There is no future project confirmed as the Fermi-LAT successor as of now, but the Cherenkov Telescope Array, the future ground-based gamma-ray observatory, is currently under design and development. The two sites of CTA, planned in La Palma in the northern hemisphere and in Chile in the southern hemisphere, offer the exclusive possibility to observe and explore a new energy range for the Crab Nebula. CTA South, planned as a large array including many small-sized telescopes, offers the opportunity to study the very-high-energy emission from the Crab Nebula, whereas the northern site, planned to operate with four large-sized telescopes, will allow getting closer to the inverse Compton peak.
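The correction described above amounts to scaling each run's measured flux by its atmospheric transparency coefficient. A minimal sketch with invented coefficients (the actual H. E. S. S. coefficient is typically derived from cosmic-ray trigger rates, and the normalization convention here is an assumption):

```python
import numpy as np

# Toy run-wise integral fluxes and transparency coefficients (invented values).
flux = np.array([2.0e-11, 1.6e-11, 2.1e-11, 1.2e-11])   # measured fluxes
tc   = np.array([1.00,    0.82,    1.02,    0.60])      # transparency coefficient

# A nominal atmosphere (tc = 1) leaves the flux unchanged; absorption
# (tc < 1) suppresses the measured flux, so divide it out run by run:
flux_corr = flux / tc

# The relative scatter of the corrected lightcurve is reduced:
improved = flux_corr.std() / flux_corr.mean() < flux.std() / flux.mean()
```

In this toy example the apparent variability is entirely of atmospheric origin, so the correction flattens the lightcurve, which is the qualitative effect described in the text (a reduced excess variance).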
For the design of CTA, a more sophisticated atmospheric monitoring is planned.

Appendix A

The observed energies of the high-energy photons and of the soft target photons are:

E_γ^obs = E_γ δ/(1+z) ;  E_t^obs = E_t δ/(1+z)   (A.1)

The observed energy of the soft photons can be written as:

E_t = [δ²/(1+z)²] (mc²)²/E_γ

Taking δ = 4, z = 0.13 and E_γ = 73.551 GeV, the observed energy of the soft photons is estimated to be E_t = 44.485 eV.

Following the Dondi paper [START_REF] Dondi | Gamma-ray-loud blazars and beaming[END_REF], the monochromatic luminosity of the target photons is given by:

L = 4π d_L² (mc²/h) [δ⁻³/(1+z)] F(δ²(mc²)²/((1+z)² hν))

and, in terms of the pair-production opacity of a source of radius R,

L = 20π τ_γγ (mc³/σ_T) R

From where we find:

τ_γγ = [σ_T d_L² / (5 δ³ hcR (1+z))] F(δ²(mc²)²/((1+z)² hν))

The flux F(δ²(mc²)²/((1+z)² hν)) ≡ F(E_t) can be represented by a power law:

F(E_t) = F_0 (E_t/E_0)^(−α)

Figure 0.1: Superimposed images of the Galactic plane in optical light and in very-high-energy gamma rays. The very-high-energy images were taken with the H.E.S.S. telescopes in Namibia. Photograph and montage by F. Acero.

Figure 0.2: Top: the H.E.S.S. experiment, located in the southern hemisphere in Namibia, is composed of five Cherenkov telescopes. Bottom left: the MAGIC experiment, installed at La Palma. Bottom right: the VERITAS experiment, composed of four Cherenkov telescopes, located in the northern hemisphere, in Arizona.

Fermi-LAT, from 100 MeV up to 500 GeV, allowed several flares to be detected.
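The Appendix A estimate above can be reproduced numerically. This sketch only re-evaluates the quoted numbers (δ = 4, z = 0.13, E_γ = 73.551 GeV) with m_e c² = 0.511 MeV:

```python
# Target photon energy for gamma-gamma pair production, Appendix A:
#   E_t = delta^2 / (1+z)^2 * (m_e c^2)^2 / E_gamma
MEC2_EV = 0.511e6       # electron rest energy in eV

def target_photon_energy_ev(e_gamma_ev, delta, z):
    """Observed soft-photon energy absorbing a photon of observed energy e_gamma_ev."""
    return (delta ** 2 / (1.0 + z) ** 2) * MEC2_EV ** 2 / e_gamma_ev

e_t = target_photon_energy_ev(73.551e9, delta=4.0, z=0.13)  # about 44.5 eV
```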
This analysis revealed other GeV flux flares, during which the flux reached values similar to those of the spectacular outburst of February 2014. Three major flares, during which the average flux was multiplied by 16, were identified. Long-term variability studies show that the yearly average flux increases linearly with time, correlated with a hardening of the spectral index. The study of the flux variability indicates a quasi-periodic behaviour with a period of 1083 ± 32 days. Possible interpretations may be related to a quasi-periodic process within the relativistic jet. This behaviour may also be related to geometrical effects. This is an interesting direction for future, and much longer, observations of active galaxies. The work on B2 1215+30 is presented in Chapter 4. In a final part, the TeV flux variability and the spectrum of one of the most studied objects, the Crab Nebula (Figure 0.3), are studied with ten years of observations from the H.E.S.S. experiment. This source, the first detected at very high energies, as early as 1989, is a pulsar wind nebula in the Galactic plane, at a distance of almost 2 kpc from the Earth.

Figure 0.3: The Crab Nebula system, and the zoomed pulsar.

Figure 1.2: Schematic of Fermi acceleration in a strong shock wave. The dynamics of high-energy particles in the rest frames of the shock front, the upstream and the downstream medium. (Left): Rest frame of the shock front; the upstream gas moves with velocity v_1 = U and the shocked plasma with velocity v_2 = 3/4 U. (Middle): Rest frame of the upstream medium; particles from downstream move with velocity 3/4 U. (Right): Rest frame of the downstream medium; particles from upstream advance with velocity 3/4 U.
Every time particles cross the shock, there is a gain of energy ΔE (shown in blue and orange lines). Image courtesy [START_REF] Funk | A new population of very high-energy -ray sources detected with H[END_REF].

Figure 1.3: Hillas plot showing possible sources of proton acceleration for E = 100 EeV and E = 1 ZeV. The linear size R of different sources is plotted versus the magnetic field B required to accelerate particles up to E ∼ 10^20 eV. The diagonal line corresponds to the maximum energy reachable by a population of sources. Image courtesy [41].

Figure 1.4: (a) The total sources detected by the Fermi-LAT instrument, in percentage, plotted together with the BL Lac and FSRQ type sources, which belong to the blazar source class. (b) The total sources detected at TeV energies show that the majority belong to the BL Lac source class, a subclass of active galactic nuclei. The GeV and TeV sky is populated mainly by BL Lac type objects. Plotted with data from [45].

Figure 1.5: (a) The gamma-ray excess map of the known shell-type SNR RX J1713.7-3946 as measured by H. E. S. S. Image courtesy [49]. (b) First extension measurement of the Crab Nebula at very high energies by H. E. S. S. Image courtesy [50].

Figure 1.6: Active Galactic Nuclei unification model as described by Urry and Padovani [58]. The classification is based on the orientation of the jet with respect to the observer's line of sight. If the observer is looking down the jet, it sees a blazar (BL Lac or FSRQ).

Figure 1.8: (a) Artistic illustration of a millisecond pulsar and its companion. The pulsar is accreting material from its companion star, increasing its rotation rate. (b) A globular cluster. Images courtesy: ESA.
Figure 2.1: (a) Interaction probability, in radiation lengths, of photons as a function of energy W in eV in lead, with σ_p and σ_c the probabilities of pair creation and Compton scattering. Image courtesy [67]. (b) Schematic construction of EGRET, the predecessor of the Fermi-LAT satellite. The main parts of the detector are given to be compared to the Fermi-LAT. Image courtesy [68].

Figure 2.2: The Fermi spacecraft with two instruments on board: the Large Area Telescope and the Gamma-Ray Burst Monitor. Image courtesy: NASA.

Figure 2.3: (a) Schematic view of the Fermi-LAT cutaway, where the three main parts of the detector are shown. (b) Cosmic gamma rays hitting the silicon tracker are converted into an electron/positron pair. The energy released is measured in the calorimeter. Image courtesy: NASA.

Figure 2.4: (a) Sketch of the 3D maps used to perform data analysis with the Fermi-LAT. The source i and other sources are marked with j = 1, 2, 3. (b) The counts map of B2 1215+30 for a 15° radius RoI. The known gamma-ray sources from the Fermi catalog are marked in green.

Figure 2.5: Comparison of the gamma-ray acceptance maps for Pass 7 and Pass 8, as indicated by the legend in the plot. Image courtesy [81].

Figure 2.6: The Fermi-LAT full sky map in Aitoff projection in Galactic coordinates. It shows the gamma-ray intensity for energies E > 300 MeV, produced from 48 months of observations. Image courtesy [82].

Figure 2.7: All sources detected by the Fermi-LAT using 4 years of data. The majority of the sources detected are AGNs, whereas PWNe are the most abundant source class in the Galactic plane. Image courtesy [44].

Figure 2.8 shows a schematic view of this model. Each particle splits into two new particles after having traveled a thickness d = X_0 ln 2.
Figure 2.9: Schematic view of the Cherenkov light production when a charged particle moves at speed v > c/n through a medium of refractive index n. The Cherenkov light is produced at an emission angle θ.

Figure 2.10: The Cherenkov light emitted by a gamma ray with an initial energy of 1 TeV. The emission angle, marked as α here, changes with altitude, and the superposition of the Cherenkov light illuminates a light pool of 250 m diameter on the ground. This is seen at an observation level of 1800 m above sea level. Image courtesy [88].

Figure 2.11 [START_REF] Burbidge | Synthesis of the Elements in Stars[END_REF] shows different Cherenkov light pools on the ground for different EAS. The Cherenkov light distribution is approximately flat within the pool and rises at the edges (Figure 2.10, left), due to the varying refractive index.

Figure 2.11: Cherenkov light pool of an extensive air shower induced by a photon with an energy of 300 GeV (left) and by a proton with an energy of 1 TeV (right). Image courtesy [91].

Figure 2.12: Illustration of the shower imaging principle in a telescope. The image shape of the shower in the telescope camera is almost elliptic, and the corresponding reflections of different points of the shower into the focal plane of the camera are shown. The two main properties of gamma rays, the energy and the direction, are derived from the shower image on the camera. For gamma-ray induced showers, the image intensity gives information about the primary energy. Image courtesy [88].

Figure 2.13: (a) The MAGIC telescopes located in La Palma. Image courtesy: Daniel Lopez/IAC. (b) An artistic image of the VERITAS telescopes located in Arizona. Image courtesy UCLA [93].

Figure 2.14: The differential sensitivity of CTA South and North compared to the H. E. S. S., MAGIC, VERITAS and HAWC experiments.
CTA South and North, for 50 h of observations, are expected to have a higher sensitivity with respect to the other experiments and to extend up to 100 TeV. Image courtesy [96].

Figure 3.1: Picture of the H.E.S.S. telescopes in Namibia. Different trigger modes are supported by the current telescope configuration.

Figure 3.2: (a) A picture of the CT1 camera taken during the H. E. S. S. I camera upgrade in 2015. (b) One of the camera drawers, a unit of 16 photomultiplier tubes. The drawers are fitted in the hexagonal structure of the camera. Image courtesy of H. E. S. S.

Figure 3.3: (a) The H. E. S. S. I telescopes, with a reflective area made of facet mirrors arranged in the Davies-Cotton fashion. (b) The H. E. S. S. II telescope, with a reflective area made of facets arranged in a parabolic fashion. Image courtesy of H. E. S. S.

Figure 3.4: (a) Example of a muon event on the H. E. S. S. camera. (b) The same image after cleaning, fitted by the model used for the calculation of the optical efficiency.

Figure 3.5 [START_REF] Carroll | An Introduction To Modern Astrophysics[END_REF] shows the evolution of the H.E.S.S. Phase-I optical efficiency as a function of the run number [START_REF] Chalme-Calvet | Muon efficiency of the H.E.S.S. telescope[END_REF]. A continuous drop is seen until 2011 (run number 60913), related to the degradation of the mirror reflectivity and of the Winston cones. The optical efficiency increased after the re-aluminization of the mirrors.

Figure 3.5: Efficiency evolution for one (CT2) of the H. E. S. S. I telescopes (blue). For each period, the calculated muon efficiency is plotted together with the mean (red) and the one σ error (green). Figure courtesy [START_REF] Chalme-Calvet | Muon efficiency of the H.E.S.S. telescope[END_REF].
Figure 3.6 [START_REF] Olive | [END_REF] shows two camera images, one with a gamma-ray and one with a proton-induced shower [START_REF] Hillas | Cerenkov light images of EAS produced by primary gamma[END_REF]. Based on Monte Carlo studies, a shower parameterization model using the second moments of the pixel amplitudes in the camera was proposed by Hillas in 1985. The width, length, center of gravity, angular orientation and position of the parametrized ellipses on the camera are used to reconstruct the shower properties. The camera images are then compared with simulated gamma-ray images to extract the shower parameters and reject hadrons. The robustness of this technique increases with the use of stereoscopy.

Figure 3.6: Two example images of the light intensity distribution in the camera of the telescope. Left: a 1.0 TeV gamma-ray shower image, with a regular ellipse-like shape. Right: the image of a 2.6 TeV proton in the camera, with an irregular and wider shape. Image courtesy [88].

Figure 3.7: Longitudinal shower development as a function of energy, measured from the first interaction point. The black and red histograms correspond to the simulated and analytical (from Equation 3.2) results, respectively. Image courtesy [108].

Figure 3.8: (a) Model of a 1 TeV shower started at one radiation length and falling 250 m away from the telescope. (b) Shower falling 20 m away from the telescope. Image courtesy [108].

Figure 3.9 shows the different reconstruction techniques supported by H. E. S. S.

Figure 3.9: Schematic illustration of the different event trigger and reconstruction types of the H. E. S. S. experiment. The top part of the figure corresponds to the triggers of H. E. S. S. Phase-I, which are reconstructed with Stereo. In the bottom, the H. E. S. S. Phase-II trigger modes are shown. The array can trigger CT5 and CT1-4, or only CT5, and reconstruct events using the Combined, Hybrid and Mono modes.
Figure 3.10: Schematic of the ON and OFF regions in (a) the Reflected Region and (b) the Ring Background method. Images courtesy [111].

Figure 3.11b shows an example of a distribution of the effective area versus the energy and the zenith angle. In the absence of statistics in the low energy range, irregularities in the tables and zero acceptance values are introduced. Such problems are avoided in the spectrum calculation by introducing an energy threshold, which for a standard H. E. S. S. I analysis is typically set at 10 % of the maximum effective area.

Figure 3.11: (a) Example of an energy resolution table in the Model Analysis. For a set of parameters, it gives the probability density to measure the energy E_rec for a given true energy E_true. (b) Effective area of the Model Analysis for a given set of parameters.

Figure 3.12: Schematic view of a decision tree, where each event is characterized by a set of input variables M_i (m_i,1, ...). The event classification at each node follows a binary split criterion. Image courtesy [106].

Figure 3.13: Example of input variable distributions for signal and background in the energy bin 700 GeV < E < 1 TeV, for zenith angles between 15-25 degrees. They reveal the variables with little to no separation power, e.g. the Mean Scaled Background Goodness, and those with high separation power, e.g. the Mean Scaled Shower Goodness.

Table 3.3: The BDT training bins in energy and zenith.

For the training, 6 energy bins and 7 zenith bins have been constructed as in Table 3.3, resulting in a total of 42 bins. For each energy and zenith band, one BDT is created and trained with the samples introduced above. The different training bands in energy and zenith are motivated by the dependence of the Cherenkov photon density on these parameters. For the chosen samples, only 14 bins have enough events to complete the training, which is due to the choice of zenith angles.
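The binary split criterion mentioned in the Figure 3.12 caption is the elementary operation of every tree in the BDT. A minimal numpy-only sketch of one such split on a toy discriminating variable (values invented; the actual training uses the TMVA-style variables and samples described above):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of 0/1 labels."""
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    return 2.0 * p * (1.0 - p)

def best_binary_split(x, y):
    """Scan candidate thresholds on variable x and return the one minimizing
    the weighted Gini impurity of the two daughter nodes."""
    best_t, best_imp = None, np.inf
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if imp < best_imp:
            best_t, best_imp = t, imp
    return best_t, best_imp

# Toy "shower goodness"-like variable: signal peaks low, background high.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 1000)    # >= 1000 events per class, as required per bin
bkg = rng.normal(4.0, 1.0, 1000)
x = np.concatenate([sig, bkg])
y = np.concatenate([np.ones(1000), np.zeros(1000)])
threshold, impurity = best_binary_split(x, y)
```

For well-separated toy populations the chosen threshold lands between the two peaks and the impurity drops far below the 0.5 of the unsplit node.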
To proceed with the BDT training in each bin, a minimum of 1000 signal and 1000 background events is required.

Figure 3.14: The BDT output distributions for signal (blue) and background (red) in the first zenith bin (15-25 degrees), in different energy bands. (a) Energy band 30 to 70 GeV. (b) Energy band 700 GeV to 1 TeV. The overlap of signal and background in the low energy band is expected, as monoscopic observations are poor for background discrimination. At high energies the separation is better.

This work investigated the gamma-ray emission of B2 1215+30 over five decades in energy with the Fermi-LAT and VERITAS experiments. In collaboration with VERITAS, two gamma-ray flares detected during 2013 and 2014 at very high energies were studied in the energy range of the Fermi-LAT. This work resulted in the detection of a simultaneous flare, occurring in 2014, by VERITAS and the Fermi-LAT. The results of this study, to which the author contributed, were published in [START_REF] Abeysekara | A Luminous and Isolated Gamma-Ray Flare from the Blazar B2 1215+30[END_REF]. Along with other multiwavelength observations, they characterize the emission from this source. The variability doubling time scale detected by the Fermi-LAT during the 2014 flare was less than 9.0 h. The measurement of the flaring amplitudes with the Fermi-LAT allowed for an estimation of the Doppler factor and for setting constraints on the size of the emission region, results which were further constrained by the VERITAS flaring amplitude measurements at very high energies. To study other flares and further investigate the emission at GeV energies, all the data accumulated by the Fermi-LAT, spanning more than nine years, were used. This resulted in the detection of three major flares, where the flux exceeded the quiescent state by a factor of 16, including the 2014 flare. Other significant flaring activities occurred between 2015 and 2017, but none as bright as the 2014 flare.
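The sub-9 h variability quoted above translates, via light-travel-time causality, into an upper limit on the emission-region size, R ≤ c t_var δ/(1+z). A quick numeric sketch: z = 0.13 is the redshift used for B2 1215+30 elsewhere in this work, while δ = 10 is purely an illustrative Doppler factor, not the fitted value:

```python
C_CM_S = 2.998e10          # speed of light [cm/s]

def emission_region_limit_cm(t_var_s, delta, z):
    """Causality limit on the emission-region radius: R <= c * t_var * delta / (1+z)."""
    return C_CM_S * t_var_s * delta / (1.0 + z)

t_var = 9.0 * 3600.0       # < 9.0 h doubling time from the 2014 flare, in seconds
r_max = emission_region_limit_cm(t_var, delta=10.0, z=0.13)  # example Doppler factor
```

For these example numbers the bound is of order 10^16 cm, showing how a shorter variability time scale or a smaller Doppler factor tightens the constraint.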
These flaring activities are used to determine the variability time scale and to set constraints on the size of the emission region as well. The analysis of public Fermi-LAT data in the energy range from 100 MeV to 500 GeV is conducted and presented in this chapter. This chapter is organized as follows: Section 4.1 gives a short summary on the blazar B2 1215+30. Sections 4.2, 4.3 and 4.4 describe the VERITAS, Fermi-LAT and multiwavelength observations of the 2013 and 2014 time periods, respectively. The long-term lightcurve and the latest flaring activities of B2 1215+30 with the Fermi-LAT are presented in Section 4.5. The summary and conclusions are given in Section 4.7.

Figure 4.1 shows the counts map around the position of B2 1215+30 for the 2014 data set. This region of the extragalactic sky is interesting, since it is populated with bright high- and very-high-energy gamma-ray sources. All of them are BL Lac type objects and TeV emitters, except for MS 1221.8+2552 and GB6 J1159+2914, with no TeV emission detected so far. One can distinguish the BL Lac object 1ES 1218+304 (z = 0.182), a bright TeV emitter located at (R.A. = 12h21m26.3s, decl. = +30°11′29″, J2000). B2 1215+30 and 1ES 1218+304 are two nearby sources that exhibit opposite brightness behaviour in the GeV and TeV energy bands. B2 1215+30 appears brighter in the GeV energy range and fainter at TeV, whereas 1ES 1218+304 exhibits the opposite behaviour [121].

Figure 4.1: The B2 1215+30 counts map for a 20 degrees x 20 degrees RoI, chosen larger than the RoI in the analysis for illustrative reasons. The map was produced with Fermi-LAT 2014 data. The position of B2 1215+30 is marked with a green cross and all other bright sources are marked with white crosses. The VERITAS field of view (3.5 degrees) is shown for comparison.
Fermi-LAT Observations

In Section 4.4.3, the flux in each energy band as measured by the Fermi-LAT is given, plotted with the other observations in the SED representation. Figure 4.2 shows the corresponding 2013 lightcurve. For the 2014 data set, the lightcurve is shown in Figure 4.3. The source exhibits clear variations in flux, with a large flare starting on MJD 56693 (2014 February 05).

F(t) = F(t_0) · 2^(−|t − t_0|/t_var),   (4.4)

where t_var is the flux halving time and t_0 is the time corresponding to the peak of the flare. From the Fermi-LAT flare, an upper limit on the flux halving time of t_var < 9.0 h, constraining the flare at the 90 % confidence level, was found. This variability time scale is seen in other blazars as well. The strength of the TeV flare seen by VERITAS on the night of the flare (2014 Feb 08) allows the lightcurve to be derived in 5-minute time bins. VERITAS observations on the next night (2014 Feb 09) did not show an elevated flux from B2 1215+30. This yields a 90 % c.l. limit on the flux halving time of 3.0.7 × 10^-7.

Figure 4.2: TeV and GeV lightcurves of B2 1215+30 in 2013. Fluxes are calculated in 1-day bins for VERITAS (red squares). The red dashed line shows the yearly averaged TeV flux in 2011 (8.0 × 10^-12 cm^-2 s^-1) [121]. The Fermi-LAT fluxes are calculated in 1-day integration bins (blue crosses), in the energy range 100 MeV < E < 500 GeV. The blue dashed line corresponds to the average flux from the 3FGL catalog [START_REF] Acero | Fermi Large Area Telescope Third Source Catalog[END_REF]. For the Fermi-LAT data, down-pointing triangles indicate 95 % c.l. upper limits for time bins with a signal smaller than 2σ.

Figure 4.3: B2 1215+30 lightcurves for the 2014 data. In the top panel, the VERITAS integral fluxes in 1-day time bins are plotted. The red dashed line shows the yearly averaged TeV flux in 2011 [121]. The gray dashed lines correspond to one and two Crab Nebula fluxes.
One Crab corresponds to (2.1 ± 0.2) × 10^-10 cm^-2 s^-1, for E > 0.2 TeV [132]. Using the Fermi-LAT data, the integral fluxes were calculated in 3-day bins (blue crosses). Down-pointing triangles indicate 95 % c.l. upper limits for time bins with a signal smaller than 2σ. The blue dashed line corresponds to the average flux from the 3FGL catalog [44]. The yellow points correspond to 1-day time bins derived around the flare period. The bright flare is seen simultaneously by the two experiments at GeV and TeV energies in February 2014.

Figure 4.4: Spectral energy distribution of B2 1215+30. The cyan and red points correspond to the Fermi-LAT data and other observations from this work, as indicated in the legend. The gray points are the archival points from [121].

Figure 4.5 shows the long-term lightcurve, derived from 2008 September 1st to 2017 June 30, or MJD 54710-57934. This analysis resulted in the detection of three major flares, as spectacular as the 2014 flare [141]. These events were detected during October 2008, February 2014 and April 2017, and the flux exceeded the quiescent flux by a factor of 16. The strength of the signal during these flaring episodes allows the lightcurve to be derived down to one-day time bins. The one-day time bin lightcurves for the three major flares are shown in the bottom part of Figure 4.5, and the corresponding details are summarized in Table 4.2.

Date                         Significance   Integral flux (cm^-2 s^-1)
- 2008 Oct 13                15.5 σ         (9.2 ± 0.15) × 10^-7
2014 Feb 08 - 2014 Feb 09    12.2 σ         (9.2 ± 0.18) × 10^-7
2017 Apr 12 - 2017 Apr 13    14.0 σ         (8.0 ± 0.15) × 10^-7

Figure 4.5: Top panel: Long-term lightcurve of B2 1215+30, from 2008 September 1st to 2017 June 30. The integral fluxes are calculated in one-week time bins. The major flares, with weekly averaged flux Φ > 3.0 × 10^-7 cm^-2 s^-1, are indicated by dashed black lines. Bottom panel: lightcurve in one-day time bins around the major flares detected during October 2008, February 2014 and April 2017.
Figure 4.6: Top: The average year-by-year fluxes for B2 1215+30 from 2008 to 2017. Bottom: The corresponding spectral indices for the marked years. The orange line corresponds to a constant and the green line to a linear fit.

Figure 4.7 shows the Gaussian fit to the distribution of the data around the mean flux, which is assumed to be constant. By considering the linear increase of the flux, the distribution of the flux values around the linear trend is fitted as well.

Figure 4.7: Distribution of B2 1215+30 fluxes for nine years of Fermi-LAT data measured in one-week time bins. The distribution is fitted by a Gaussian assuming a constant flux (blue) and a linear flux (green).

The highest flux states of B2 1215+30 are found to occur during 2017 March and April (Table 4.4):

    - 2015 Jan 08                12.0 σ    (2.6 ± 0.48) × 10⁻⁷ cm⁻² s⁻¹
    2015 Dec 23 - 2015 Dec 30     9.4 σ    (2.5 ± 0.48) × 10⁻⁷ cm⁻² s⁻¹
    2016 Dec 16 - 2016 Dec 23    14.2 σ    (2.4 ± 0.44) × 10⁻⁷ cm⁻² s⁻¹
    2017 Mar 25 - 2017 Apr 01    21.6 σ    (2.7 ± 0.35) × 10⁻⁷ cm⁻² s⁻¹
    2017 Apr 07 - 2017 Apr 15    20.1 σ    (3.1 ± 0.38) × 10⁻⁷ cm⁻² s⁻¹

Table 4.4: Summary of the latest flares from B2 1215+30 detected with the Fermi-LAT from 2015-2017. The strength of the signal and the integral flux between 0.1 and 500 GeV are given.

Figure 4.8: Integral flux Φ (100 MeV < E < 500 GeV) in one-week time bins for the 2015 (MJD 57023-57387) and 2016 (MJD 57388-57753) data sets in the top and middle panels. The bottom panel shows six months of data, from 2017 January 1st to 2017 June 30 (MJD 57754-57934), in one-week time bins.

(a) Discrete Fourier transform of the long-term lightcurve. (b) Discrete Fourier transform of the long-term lightcurve after subtraction of the linear fit.
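The two statistical checks described around Figures 4.7 and 4.9, comparing flux residuals around a constant versus a linear trend and searching the Fourier transform of the detrended lightcurve against simulated Gaussian-noise percentiles, can be sketched as follows. This is a schematic with synthetic weekly fluxes; the real Fermi-LAT values are not reproduced here.

```python
import numpy as np

def residual_widths(t, flux):
    """Gaussian widths of the flux distribution around a constant mean
    and around a linear trend (cf. the two fits in Figure 4.7)."""
    lin = np.poly1d(np.polyfit(t, flux, 1))(t)
    return flux.std(), (flux - lin).std()

def dft_power(t, flux):
    """DFT power of the lightcurve after subtracting the linear fit."""
    lin = np.poly1d(np.polyfit(t, flux, 1))(t)
    return np.abs(np.fft.rfft(flux - lin)) ** 2

def noise_percentiles(sigma, n, n_sims=500, q=(68.27, 95.45, 99.73), seed=1):
    """1/2/3-sigma envelopes of the DFT power of pure Gaussian noise,
    as used for the dashed significance lines in Figure 4.9."""
    rng = np.random.default_rng(seed)
    t = np.arange(n, dtype=float)
    sims = [dft_power(t, rng.normal(0.0, sigma, n)) for _ in range(n_sims)]
    return np.percentile(sims, q, axis=0)

# Synthetic lightcurve: linear rise plus a 50-bin periodic term and noise.
rng = np.random.default_rng(2)
t = np.arange(400, dtype=float)
flux = 1.0 + 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 50.0) \
       + rng.normal(0.0, 0.3, 400)
w_const, w_lin = residual_widths(t, flux)
power = dft_power(t, flux)
peak_period = 1.0 / np.fft.rfftfreq(400)[1:][np.argmax(power[1:])]
```

For a genuinely rising source the residual width around the linear trend is narrower than around the constant, which is the diagnostic shown in Figure 4.7; a periodic component appears as a power peak above the simulated-noise envelopes.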
Figure 4.9: The discrete Fourier transform of the long-term lightcurve of B2 1215+30 measured with Fermi-LAT as a function of the period in arbitrary units, before and after subtracting the linear fit. The black dots are the discrete points in Fourier space, connected by a smooth curve to guide the eye. The dashed lines indicate the 1σ, 2σ and 3σ percentiles of the simulated Gaussian noise.

Figure 4.10: The long-term lightcurve including a fit of the form given in Equation 4.9. The fit parameters and their uncertainties are listed in Table 4.5.

The periods from January to May in 2013 and 2014 were analyzed using the Fermi-LAT data. This study was motivated by two major flares detected by the VERITAS experiment during 2013 and 2014. In collaboration with the VERITAS experiment, the gamma-ray variability of B2 1215+30 was studied over five decades in energy. VERITAS data resulted in the detection of a flare during 2013, whereas searching for counterparts in the GeV range with the Fermi-LAT data did not show evidence of any significant flux enhancement during this period. VERITAS and Fermi-LAT observed a bright gamma-ray flare from the BL Lac object B2 1215+30 on 2014 February 08, with a TeV flux equivalent to 200 % of the Crab Nebula flux (a standard measure in gamma-ray astronomy). At GeV energies the flare started on 2014 February 05, whereas at TeV energies the flare was seen by VERITAS on 2014 February 08 (although the observatory did not operate on the previous nights). Quasi-simultaneous observations at other wavelengths during the flaring period did not detect any significant flare evidence. Observations from the Swift X-ray telescope, taken one day after the flare, were used in the estimation of the relativistic jet Doppler factor.

Figure 5.1: The Crab Nebula system, and the zoomed pulsar.

Figure 5.2: (a) Lord Rosse's sketch of the Crab Nebula filaments in 1844. Image courtesy [160].
(b) Image of the Crab Nebula activity produced in 1968. Image courtesy [161].

Figure 5.3: (a) The Crab pulsar as seen from the Chandra X-ray Observatory. (b) Composite image of the Crab Nebula from five telescopes: the Karl G. Jansky Very Large Array, the Spitzer Space Telescope, the Hubble Space Telescope, the XMM-Newton Observatory, and the Chandra X-ray Observatory. Image courtesy: NASA, ESA, NRAO/AUI/NSF and G. Dubner (University of Buenos Aires).

Figure 5.4 shows example COG maps for the CT1-4 telescopes with the faulty map for CT2, which led to the exclusion of the run.

Figure 5.4: An example of a run that passed all the run selection criteria but the COG map check. The COG maps for CT1, CT2, CT3 and CT4 (left to right) for run number 42556 are shown. The second COG map, which corresponds to CT2, is problematic and this run is excluded from the final run list.

Figure 5.5: Left: Zenith angle distributions for the run list used in the analysis after applying the custom selection criteria as described in the text. Right: Observations of the Crab Nebula taken with the H. E. S. S. I telescopes, with runs distributed from 2003 to 2015, obtained after applying only the standard run quality selection. The majority of the runs are taken during October.

Figure 5.6: The θ² (squared angular distance) distributions for gamma-like events (filled histogram) compared with normalized θ² distributions of off regions (black) for the (a) Std and (b) Loose cuts. The dashed blue points correspond to the background distribution, which for the Crab Nebula is very low compared to the signal from the source.

(a) Significance map. (b) Significance distribution.

Figure 5.7: (a) The significance map of the Crab Nebula using the full H. E. S. S. I data set with Std cuts. The position of the Crab Nebula is shown in the center. (b) The corresponding significance distribution of the whole map is shown in black. The distribution after excluding a circular region of 0.25 degrees around the source, with a Gaussian fit, is shown in red; the fit values are given in the plot.

Figure 5.8: Differential energy spectrum of the Crab Nebula as measured by H. E. S. S. I from 480 GeV to 62.4 TeV. The spectrum is best described by a log-parabola spectral shape.
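The log-parabola shape quoted here is the standard parametrization used for curved very-high-energy spectra such as that of the Crab Nebula. The symbols below follow common IACT usage; the fitted values themselves are not reproduced in this extraction:

```latex
\frac{\mathrm{d}N}{\mathrm{d}E} \;=\; \Phi_0 \left(\frac{E}{E_0}\right)^{-\alpha \,-\, \beta \ln\!\left(E/E_0\right)}
```

Here Φ₀ is the flux normalization at the reference energy E₀, α the spectral index at E₀, and β the curvature parameter; β = 0 recovers a pure power law.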
The 1σ confidence interval of the fitted spectrum shape is plotted as a solid red line. The residual plot, in the form (N_obs - N_exp)/N_exp, is shown in the bottom part.

Figure 5.9: Long-term lightcurve of the Crab Nebula on a run-by-run basis, from MJD 52935-56365. The integral fluxes, calculated at energies above 1 TeV, are plotted along with one-sigma statistical errors. The red line corresponds to the error-weighted average flux.

Figure 5.10: Top: Run-by-run integral flux versus optical efficiencies. Middle: Run-by-run integral flux versus off-axis angles. Bottom: Distribution of the integral fluxes above 1 TeV with observation zenith angles. Their values are distributed around the mean, not showing any evidence of bias introduced.

Figure 5.12: The transparency coefficient averaged over telescopes for each observation run. The data plotted here are taken on the Crab Nebula over ten years.

(a) Lightcurve with run selection based on broken pixels and telescope multiplicity. (b) Lightcurve with good-quality data run list.

Figure 5.13: Lightcurves of the Crab Nebula including the mean, with different TC categories in different colors. The categories are named after the averages of their TC ranges. The effect of the transparency on the data can be seen, and it has a bigger impact for low TC values.

Figure 5.14: The Crab Nebula long-term lightcurve on a run-by-run basis after removing runs with TC < 0.9. The flux errors are statistical only. The cyan dashed line corresponds to the mean error-weighted flux.

Figure 5.16: The H. E. S. S. I long-term lightcurve of the Crab Nebula on a run-by-run basis. The orange points correspond to the lightcurve without any correction or any cut on the transparency coefficient. The blue points correspond to the same data set, corrected for the transparency coefficient.
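The error-weighted average flux referenced in these lightcurve captions is the standard inverse-variance mean. A minimal sketch, using made-up run-wise fluxes rather than the H. E. S. S. values:

```python
import numpy as np

def weighted_mean(flux, err):
    """Inverse-variance weighted mean flux and its uncertainty."""
    flux = np.asarray(flux, dtype=float)
    w = 1.0 / np.asarray(err, dtype=float) ** 2
    mean = np.sum(w * flux) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# Example with hypothetical run-wise fluxes (arbitrary units).
flux = np.array([2.1, 2.3, 1.9, 2.2])
err = np.array([0.1, 0.2, 0.1, 0.3])
mean, mean_err = weighted_mean(flux, err)
```

Runs with small statistical errors dominate the average, which is why quality cuts (e.g. on the transparency coefficient) matter before computing it.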
The dashed lines represent the mean error-weighted flux for each data set.

Figure 5.17: The projected lightcurve for the uncorrected flux and the TC-corrected flux with power-law index of α = 1 and α = 1.7, including unbinned Gaussian fits. The uncertainty bands visualize the fit error on the width.

Figure 5.18: Superimposed Crab Nebula spectra from two different seasons as described in the text. The first season is represented by the black points and the second season by the blue ones. The 1σ confidence intervals of the fitted spectrum shapes are plotted as solid lines. The spectra are fitted with a curved power-law spectral shape and are compatible within the estimated H. E. S. S. systematic uncertainties. The residual plots, in the form (N_obs - N_exp)/N_exp, are shown in the bottom part.

The long-term flux of the Crab Nebula was studied on a run-by-run basis. No significant flux variations were found with stand-alone H. E. S. S. I data. The flux variability studied by H. E. S. S. I indicated a stable flux from the inverse Compton component, whereas the high-energy synchrotron component is not stable. Over nine years, the Fermi-LAT reported the detection of major flares in 2009, 2010, 2011, 2013 and 2016. Observations from the ground-based detectors have not reported evidence of simultaneous flux variation at the highest energies so far. Future observations of the Crab Nebula at high energies and very high energies will reveal more about the evolution of the flux with time.

Figure 6.1: The H. E. S. S. II telescopes. Image courtesy M. Lorentz.

H. E. S. S. did not observe the brightest flare detected by Fermi-LAT in 2011. The other flare occurred in March 2013 and coincided with the H. E. S. S. II commissioning phase. Hence, the H. E. S. S.
II runs from this flaring period cannot be used to derive scientific results, as the experimental setup was changed almost every night. In March 2013, the Crab Nebula underwent the second brightest flare detected at GeV energies by the Fermi-LAT. The flare lasted for about two weeks and the flux increased by a factor of six relative to the average within less than six hours. As during the other gamma-ray flares, only the synchrotron component of the nebula varied. Multiwavelength campaigns undertaken by different experiments in radio, optical and X-rays had an excellent coverage of the 2013 flaring period. A good coverage was also accomplished at the very high energies by the major IACT experiments pointing at the Crab Nebula. Despite this coverage of the Crab Nebula during the 2013 Fermi-LAT flare, no enhancement was reported in this energy range by any experiment. The absence of other correlated flux enhancements at other wavelengths has kept the origin of the Crab Nebula flares a mystery.

Figure 6.2: The distribution of θ², the squared angular distance between the reconstructed direction and the source position, for the Mono Std configuration. The background from the OFF regions is shown in red.

Figure 6.3 shows the Ring Background significance distribution for the Std cuts in the Mono, Stereo and Combined analyses. The red line corresponds to the distribution of the events with the target region excluded. This is fitted with a Gaussian, which gives a mean and sigma of almost zero and one respectively, indicating a good background estimation and subtraction.

Figure 6.3: The ring background significance distributions for Mono, Hybrid and Combined analysis with Std cuts. The significance distribution of the whole map is shown in black. The distribution after excluding a circular region around the source is shown in red, with a Gaussian fit and the fit parameters indicated.
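The per-bin significances entering such distributions are conventionally computed with the Li & Ma (1983) likelihood-ratio formula from the ON/OFF counts. A sketch of that formula (the full H. E. S. S. analysis chain is of course not reproduced here):

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of an ON-source excess,
    given the OFF-region counts and the ON/OFF exposure ratio alpha.
    Negative values correspond to a deficit (n_on < alpha * n_off)."""
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot) if n_on > 0 else 0.0
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot) if n_off > 0 else 0.0
    sign = 1.0 if n_on > alpha * n_off else -1.0
    return sign * math.sqrt(2.0 * (term_on + term_off))
```

Applied to empty sky bins, these values follow a standard normal distribution, which is why a Gaussian with mean ≈ 0 and sigma ≈ 1 (after excluding the source region) indicates correct background estimation.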
Figure 6.4: The spectral energy distribution of the Crab Nebula as measured with H. E. S. S. II and fitted with a log-parabola spectral shape. The Mono, Stereo and Combined spectra are derived using the Std (a) and Loose (b) cuts. The spectrum derived with the Mono configuration (cyan) provides a lower energy threshold compared to the spectra from the Stereo and Combined analyses, which agree well with one another.

Figure 6.5: The spectral energy distribution of the Crab Nebula as measured with H. E. S. S. I and H. E. S. S. II (from this work), along with the published data points from the MAGIC, VERITAS and Fermi-LAT experiments [172, 164, 155].

Figure 6.6: Spectral energy distribution of the Crab Nebula compiled with data from the space-borne satellites. The spectral points correspond to the maximum flare level of the major flares detected from the Crab Nebula by the AGILE and Fermi-LAT satellites. The blue data points belong to the average nebula flux values. Image courtesy [55].

Figure 6.7: Observations of the H. E. S. S. experiment before and after the upgrade phase. The CT1 telescope was not included in observations until March, the month during which the three other H. E. S. S. telescopes entered the upgrade phase. After August 2016, the array started to be re-commissioned with five telescopes. The months during which the H. E. S. S. experiment observed the Crab Nebula are marked with cyan down-pointing triangles and the Fermi-LAT flare is indicated in orange.

For this study, the publicly available Fermi-LAT data from 2016 September 01 to 2016 October 31, corresponding to MJD 57632-57692, were analyzed. All events with energies between 100 MeV and 500 GeV coming from a 15-degree circular region of interest (RoI) of the sky centered on the position of the Crab Nebula were selected.
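The event selection just described can be encoded in a fermipy-style configuration file. The fragment below is a sketch: the file names, time bounds, event class, zenith cut and IRF set are assumptions for illustration, since the exact values used in the thesis are not given in this extraction.

```yaml
# Hypothetical fermipy configuration mirroring the selection in the text.
data:
  evfile: ft1_events.txt          # assumed event file list
  scfile: ft2_spacecraft.fits     # assumed spacecraft file
selection:
  ra: 83.633                      # Crab Nebula position (J2000)
  dec: 22.015
  rad: 15.0                       # 15-degree circular RoI
  emin: 100                       # MeV
  emax: 500000                    # 500 GeV
  # tmin/tmax: MET bounds for 2016 Sep 01 - Oct 31 (not shown)
  zmax: 90                        # assumed zenith cut against the Earth limb
gtlike:
  irfs: P8R2_SOURCE_V6            # assumed IRF set
model:
  galdiff: gll_iem_v06.fits       # Galactic diffuse model
  isodiff: iso_P8R2_SOURCE_V6_v06.txt
```

The large RoI and the diffuse model entries correspond to the stated goals of accounting for nearby-source contamination and normalizing the Galactic diffuse emission.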
This RoI size was chosen in order to account properly for the background contamination from nearby sources and to have an optimal normalization of the Galactic diffuse model. Considering the motion of the LAT instrument during operation, good time intervals were selected by excluding the times when the Earth was in the field of view of the LAT. The counts map of the RoI for the time period discussed here is shown in Figure 6.8.

Figure 6.8: A 15° × 15° RoI counts map centered on the position of the Crab Nebula. This corresponds to the period of the 2016 flare and the brightness of the Crab Nebula is visible. The Crab Nebula is marked with a green cross and other bright sources are marked as well. The squared symbols indicate two extended sources located within the RoI.

The flux emission from two short time periods of five months during 2013 and 2014 was investigated with Fermi-LAT data. The study of these two episodes was done in collaboration with the VERITAS experiment, with the gamma-ray emission studied over five decades in energy. This work resulted in the detection of one major flare on 2014 February 08, simultaneously seen by Fermi-LAT and VERITAS. The flux from B2 1215+30 during the 2014 TeV flare was 16 times higher than the average in the GeV range, whereas the TeV flux scaled by a remarkable factor of 60. The blazar Mrk 421 (z = 0.0308) would have to exhibit a 35 Crab flare to reach the luminosity of the B2 1215+30 outburst reported here. To date, only few blazars have been found to reach this brightness during flaring episodes. Around the time period of the flare, a hardening of the spectral index in the GeV energy range was measured. These results were used to set limits on the size of the emission region and to estimate a minimum Doppler factor following opacity arguments. From the strength of the GeV flare at the Fermi-LAT energies, an upper limit of t_var < 9.0 h on the variability time scale and a minimum Doppler factor of 5 were found.
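The opacity argument behind the minimum Doppler factor is the standard one: the γγ pair-production optical depth of the emission region, whose comoving size is bounded by the variability time scale, must stay below unity for the highest-energy photons to escape. Schematically, for a power-law target photon spectrum (a generic form; the exact numerical coefficients and the Swift-XRT photon-field normalization used in the thesis are not reproduced here):

```latex
R' \;\lesssim\; \frac{c\, t_{\mathrm{var}}\, \delta}{1+z}, \qquad
\tau_{\gamma\gamma}(E,\delta) \;\propto\; \frac{\sigma_{\mathrm{T}}\, d_L^{2}\, f_{\hat\epsilon}}{c^{2}\, t_{\mathrm{var}}\, \delta^{\,4+2\alpha}} \;<\; 1
\;\;\Longrightarrow\;\; \delta \;\gtrsim\; \delta_{\min}
```

Here R' is the comoving size of the emission region, d_L the luminosity distance, f_ε̂ the observed flux of the target (synchrotron) photons at the pair-production threshold for a gamma ray of energy E, and α the local spectral index of the target photon spectrum. A shorter t_var (from the TeV lightcurve) tightens the bound, which is how the VERITAS variability raised the minimum Doppler factor from 5 to 10.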
The Fermi-LAT measurements were complemented by Swift-XRT data taken 24 h after the flare to estimate the synchrotron photon field density. The variability time scale of the TeV flux measured with VERITAS further constrained the results on the Doppler factor of the emission jet, with a minimum Doppler factor of 10. Multiwavelength observations taken quasi-simultaneously during the 2014 flaring episode were used to model and understand the high-energy emission. Two scenarios were considered to explain the emission from B2 1215+30: the highest part of the spectral energy distribution can be explained by the synchrotron self-Compton (SSC) or external Compton models. In the SSC scenario, the ratio between the synchrotron and inverse-Compton luminosities was used to estimate the magnetic field.

Observations with the Fermi-LAT have opened a new window to study and monitor the gamma-ray emission of sky objects. The long-term flux evolution of the BL Lac object B2 1215+30 was studied with almost nine years of observations. This showed other GeV flares with the flux reaching similar values as in the spectacular flare of February 2014. Three major flares, where the average flux increases by a factor of 16, were found. Studies of the long-term variability show that the yearly average flux is increasing linearly with time, correlated with a hardening of the spectral index.
The integral fluxes, calculated above 1 TeV, are normalized to the average H. E. S. S. flux from the 2006 publication [113]. The dashed red line corresponds to the average flux for this period, whereas the black line corresponds to the average flux from 2006. In the bottom part of the plot, the Crab Nebula integral flux of the synchrotron component between MJD 57632-57692, as derived from Fermi-LAT data, is plotted (blue crosses). The integral fluxes are calculated in 3-day time bins, for 100 MeV < E < 500 GeV, and normalized to the published average synchrotron flux from [172]. The dashed blue line corresponds to the average synchrotron flux from the previous Fermi-LAT measurements ((6.1 ± 0.2) × 10^-7 cm^-2 s^-1).
B.1 The Crab Nebula spectrum and the relative difference with respect to the "global" spectrum. The relative difference between each spectrum and the average is plotted in the bottom part of the plot. The H. E. S. S. points are derived from the intersection run list between ParisAnalysis and HAP, two different analysis frameworks. The shaded blue area corresponds to a 30% relative difference. "PA Std" (green) corresponds to this work and "HAP" (orange) to the other analysis framework (internal H. E. S. S. results produced by J. Hahn using the HAP framework).

Acronyms List

ACD Anti-coincidence detector.
ADC Analog to digital converter.
AGN Active galactic nucleus.
BDT Boosted decision trees.
BL Lac BL Lacertae.
CMB Cosmic microwave background.
COG Center of gravity map.
DAQ Data acquisition.
DFT Discrete Fourier transform.
EAS Extensive air showers.
EBL Extragalactic background light.
FSRQ Flat spectrum radio quasar.
GBM Gamma-ray Burst Monitor.
GRB Gamma-ray burst.
HAP H.E.S.S. Analysis Package.
HBL High frequency peaked BL Lac.
IACT Imaging atmospheric Cherenkov telescope.
IBL Intermediate frequency peaked BL Lac.
IC Inverse Compton.
IRF Instrument response function.
LAT Large area telescope.
LP Log parabola.
MJD Modified Julian date.
MSSG Mean scaled shower goodness.
MVA Multivariate analysis.
ndf Number of degrees of freedom.
NSB Night sky background.
PA Paris Analysis.

Table 1.1: Main historical instruments used in high-energy gamma-ray astronomy.

Decade         1970s           1980s     1990s           2000s                          2010s
Space-based    OSO-3, SAS II   -         EGRET           Fermi-LAT                      -
Ground-based   -               Whipple   CAT, CELESTE    STACEE, MAGIC, VERITAS,        FACT, HAWC
                                                         H. E. S. S.

The presently operating imaging atmospheric Cherenkov telescopes are H. E. S. S., MAGIC, VERITAS and FACT.

To solve this problem, several reconstruction techniques have been introduced by the community. The oldest successful technique for signal and background separation is based on the Hillas reconstruction. Since its introduction, the robust Hillas reconstruction has been widely used. Another powerful technique, called Model reconstruction, is based on a semi-analytical model, which has been used to perform the analysis in this work. The most sensitive analysis techniques for H. E. S. S. data are provided by the Model and Multivariate analysis methods.

Table 3.1 summarizes the Standard (Std) and Loose cut parameters, as they are used for this work. Other cut configurations for faint or extended sources are available but are not covered here.

Table 3.1: Cut parameters for the H. E. S. S. I Stereo analysis and their corresponding cut parameters for the Std and Loose configurations in Model Analysis.

Parameter   min p.e.   max MSSG   min NSB   max direction error   min primary depth
Std         60         0.3        20        0.1                   -1.1
Loose       40         0.9        10        0.2                   -1.1

3.6.2 H. E. S. S. II

The diverse trigger modes supported by the array allow different event reconstructions to be performed with the H. E. S. S. Phase-II data (Figure 3.9, bottom). The Mono reconstruction allows the energy threshold of the analysis to be lowered, but in the absence of stereoscopy the background subtraction is more challenging. The high event rates at lower energies require precise background estimation methods.
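The Hillas parametrization mentioned above reduces each camera image to the second moments of its amplitude distribution. The following is a minimal sketch of that idea, not the H. E. S. S. implementation; the pixel coordinates, amplitudes and the eigenvalue-based width/length definition are illustrative assumptions:

```python
import numpy as np

def hillas_parameters(x, y, amp):
    """Second-moment (Hillas-style) parametrization of a camera image.

    x, y: pixel coordinates; amp: pixel amplitudes (p.e.).
    Returns the center of gravity and the width/length of the image
    ellipse, taken as the square roots of the eigenvalues of the
    amplitude-weighted covariance matrix.
    """
    w = amp / amp.sum()
    cog_x, cog_y = np.sum(w * x), np.sum(w * y)
    dx, dy = x - cog_x, y - cog_y
    cov = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    width, length = np.sqrt(eigvals[0]), np.sqrt(eigvals[1])
    return (cog_x, cog_y), width, length
```

For an elongated image the length exceeds the width, which is the basis of the gamma/hadron separation cuts used by the classical Hillas analysis.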
The triggered events where CT5 was part of the observations provide better results if they are reconstructed using stereoscopy. The Combined reconstruction makes use of both Mono and Stereo events. The H. E. S. S. II telescope allows the lower energy range to be exploited, and in combination with H. E. S. S. I a wider energy range is covered. The cut parameters for the Standard and Loose configurations of the Mono, Hybrid and Stereo analyses are summarized in Table 3.2.

Table 3.2: Cut parameters for the Mono, Hybrid and Combined analyses with H. E. S. S. Phase-II and their corresponding cut parameters for the Std and Loose configurations in Model Analysis. The Combined profile has two sets of cuts, adapted to be applied to Mono and Stereo events.

We used all the data on B2 1215+30 (publicly available, analyzed with Pass 8) accumulated by Fermi-LAT in survey mode to study and characterize the gamma-ray emission at energies larger than 100 MeV. The data analyses for two periods of five months in 2013 and 2014, contemporaneous with VERITAS observations, are presented in the following section. They were conducted with the likelihood framework provided in the FermiScienceTools software package, version v10r0p5.

Table 4.1: Summary of the VERITAS and Fermi-LAT results from observations of B2 1215+30 in different epochs from 2013 and 2014. The VERITAS upper limit is computed at 95% c.l. assuming a power-law spectrum with index Γ = 3.0.

Table 4.2: Summary of the major flares of B2 1215+30 detected in nine years of Fermi-LAT observations. The strength of the signal and the integral flux between 0.1 and 500 GeV are given.

Table 4.3: Year-by-year fluxes and spectral indices of B2 1215+30, corresponding to the points presented in Figure 4.6. The fit values assuming a constant (C) and a linear increase (L) for the flux and spectral index are given.

Year   Φ [10^-8 cm^-2 s^-1]   Γ
2009   3.7                    1.96
2010   6.8                    2.04
2011   5.2                    1.98
2012   6.5                    1.95
2013   8.6                    1.90
2014   9.4                    1.92
2015   7.7                    1.90
2016   11.4                   1.90
2017   12.8                   1.85
C: χ²/ndf   266.69/8   18.88/8
L: χ²/ndf   41.85/7    3.88/7

The first H. E. S. S. I telescope on site was built in 2002, science data taking started in 2003 with three telescopes, and the array became fully operational in 2004. The Crab Nebula was a prime target during the commissioning phase in 2003 and has been regularly observed ever since. The installation of the H. E. S. S. II telescope in 2012 allowed a new energy range to be accessed for the study of the Crab Nebula. The observations of the Crab Nebula with H. E. S. S. I are discussed here, whereas the H. E. S. S. II observations are described in Chapter 6. The Crab Nebula (R.A. = 05h34m31.1s, decl. = +22°00'52", J2000) is visible to the H. E. S. S. telescopes at zenith angles larger than 45 degrees, from September to March. Observations are also possible during August and April, but only at zenith angles larger than 65 degrees. The data accumulated on the Crab Nebula during H. E. S. S. Phase-I is spread over almost ten years and was taken during different seasons, under different weather conditions and different instrument response (in particular reflectivity). For instrument response studies, a fraction of the data was taken with different observation strategies. For instance, some runs taken in wobble mode have offset angles up to three degrees from the camera center. To extract meaningful results from this rich but varied dataset, a good understanding of the data and cross-checks excluding runs which might affect the reconstructed gamma-ray energy and direction are needed.

Table 5.1: Summary of Crab Nebula observations with the H. E. S. S.
I telescopes. The H. E. S. S. observations are split into runs of 28 nominal minutes duration. This is done to balance the run live time with changing observation conditions during the night, i.e. pointing, night sky background or moving objects on the sky. However, different problems encountered during data taking can shorten the run duration. These problems can be external, e.g. thin clouds or shooting stars entering the field of view of the telescopes, or technical, e.g. camera voltage or DAQ problems.

Data Set   Dates        Zenith   Offset    N runs
I          2003-2004    45-50    0.5       10
II         2004-2005    45-52    0.5       21
III        2005-2006    45-51    0.5       8
IV         2006-2007    45-55    0.5-0.8   6
V          2007-2008    45-48    0.5       12
VI         2008-2009    45-51    0.5       7
VII        2009-2010    45-48    0.5       22
VIII       2010-2011    45-50    0.7-0.8   24
IX         2011-2012    -        -         -
X          2012-2013    -        -         -
XI         2013-2014    45-52    0.5-0.8   39
XII        2014-2015    45-48    0.5       2
XIII       2015-2016    45-55    0.5       5

Table 5.2: Selection criteria applied to the H. E. S. S. I data.

Criterion                  Cut value
Participating telescopes   ≥ 3
Broken pixels              ≤ 20 %
Trigger rate               ≥ 100 Hz
Trigger rate stability     ≤ 4 %
Run duration               ≥ 5 minutes
Relative humidity          ≤ 90 %
Radiometer temperature     ≤ -20 degrees
Radiometer stability       ≤ 3 degrees

The analysis results are summarized in Table 5.3.

Cut config   n_ON    n_OFF   n_excess   S_LiMa   S/B    Rate [γ's/min]
Std          19929   9504    19269.6    270.7    29.2   6.9±0.05
Loose        29057   29821   26977.6    284.8    13.0   9.7±0.06

Table 5.3: Summary of H. E. S. S. I analysis results obtained with the Std and Loose cut configurations. The number of events in the ON and OFF regions n_ON, n_OFF, the number of excess events n_excess, the significance S_LiMa, the signal-to-background ratio (S/B) and the gamma-ray rate for each cut configuration are given.

Table 5.4: Summary of the Crab Nebula spectrum fit parameters for the Std and Loose cut configurations.
Spectrum        Cut     E_min   E_max   N_0           Γ             β             E_0   E_cut         χ²/ndf
PL              Std     0.48    62.37   2.05 ± 0.12   2.66 ± 0.01   -             1.4   -             296.6/74
CPL             Std     0.48    62.37   3.26 ± 0.21   2.45 ± 0.01   0.18 ± 0.01   1.2   -             91.5/73
Exp-CutOff      Std     0.48    62.37   3.56 ± 0.35   2.38 ± 0.02   -             1.2   9.9 ± 0.57    119.5/73
PL              Loose   0.39    62.37   2.56 ± 0.13   2.61 ± 0.01   -             1.0   -             345.2/88
CPL             Loose   0.39    62.37   4.97 ± 0.28   2.40 ± 0.01   0.15 ± 0.01   -     -             122.2/87
Exp-CutOff      Loose   0.39    62.37   4.05 ± 0.34   2.38 ± 0.01   -             1.1   11.5 ± 0.65   148.5/87
Whipple         -       -       -       3.20 ± 0.17   2.49 ± 0.06   -             -     -             -
CAT             -       -       -       2.20 ± 0.05   2.80 ± 0.03   -             -     -             -
HEGRA           -       -       -       2.83 ± 0.04   2.62 ± 0.02   -             -     -             -
MAGIC           -       0.05    30.0    3.80 ± 0.11   2.21 ± 0.02   0.24 ± 0.01   1.0   -             20.0/11
VERITAS         -       0.12    42.0    3.75 ± 0.03   2.47 ± 0.01   0.16 ± 0.01   1.0   -             12.9/13
H. E. S. S. I   -       0.44    30.5    3.76 ± 0.07   2.39 ± 0.03   -             1.0   14.3 ± 2.1    15.9/9

Table 5.5: Crab Nebula lightcurve fit parameters for data sets I, II, III and IV (as defined in the text). The integral flux Φ, the fit to a constant C, the number of runs and the systematic uncertainty α are given.

Data Set   Φ > 1 TeV [10^-12 cm^-2 s^-1]   χ²/ndf      nr runs   α
I          28.73 ± 0.21                    583.2/142   143       16.1 %
II         29.30 ± 0.21                    471.0/133   134       14.0 %
III        29.27 ± 0.21                    431.6/142   143       12.6 %
IV         29.29 ± 0.21                    427.4/142   143       12.5 %

The Crab Nebula spectrum as measured by H. E. S. S. I and H. E. S. S. II is discussed and compared to measurements from other experiments in the next chapter. Motivated by the surprising Crab Nebula flares reported by the spaceborne satellites, which arise from the high-energy part of the synchrotron component, the flux variability at very high energies was studied with H. E. S. S. The Fermi-LAT instrument is suitable for studying the high-energy synchrotron emission, whereas H. E. S. S. can access the high-energy part of the inverse Compton component. The origin of the flares is still not understood, but different scenarios could explain these events.
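The three spectral shapes fitted in Table 5.4, a power-law, a log-parabola (CPL) and a power-law with exponential cut-off (taking the conventional functional forms), can be written down directly. The sketch below evaluates them with the Std-cut best-fit values; the absolute normalization scale, taken here as units of 10^-11 cm^-2 s^-1 TeV^-1, is an assumption, since the table does not state it:

```python
import numpy as np

def power_law(E, N0, E0, gamma):
    """dN/dE = N0 * (E/E0)**(-gamma)."""
    return N0 * (np.asarray(E, dtype=float) / E0) ** (-gamma)

def log_parabola(E, N0, E0, gamma, beta):
    """dN/dE = N0 * (E/E0)**(-gamma - beta*ln(E/E0))."""
    x = np.asarray(E, dtype=float) / E0
    return N0 * x ** (-gamma - beta * np.log(x))

def exp_cutoff(E, N0, E0, gamma, Ecut):
    """dN/dE = N0 * (E/E0)**(-gamma) * exp(-E/Ecut)."""
    E = np.asarray(E, dtype=float)
    return N0 * (E / E0) ** (-gamma) * np.exp(-E / Ecut)

# Std-cut values from Table 5.4; energies in TeV, N0 scale assumed.
E = np.logspace(np.log10(0.48), np.log10(62.37), 100)
f_pl = power_law(E, 2.05, 1.4, 2.66)
f_cpl = log_parabola(E, 3.26, 1.2, 2.45, 0.18)
f_exp = exp_cutoff(E, 3.56, 1.2, 2.38, 9.9)
```

At the reference energy E_0 each shape reduces to N_0, which is why the normalizations in the table are only comparable when quoted at the same E_0.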
If the flux variations are related to the parent population of electrons, the flux enhancements would be accompanied by flux variations in the inverse Compton component. Alternatively, if the rapid flares are due to changes in the magnetic fields, the inverse Compton flux is not expected to vary. Since the Fermi-LAT started operating at the end of 2008, there is no information on GeV flux variations before this time, but if the synchrotron flare has a counterpart at TeV energies, investigating all H. E. S. S. data also helps to better understand the high-energy emission.

Table 6.1 summarizes the Combined, Stereo and Mono analysis results. The total number of events in the source region n_ON, in the background region n_OFF and the corresponding excess are given for each configuration. For the Loose cuts, it can be seen that the signal-to-background ratio is generally lower. This cut configuration provides a lower threshold but weaker hadron rejection. The use of Stereo Hybrid allows a lower energy threshold and better hadron rejection.

Config     Cut     n_ON    n_OFF   n_excess   S_LiMa   S/B    Rate [γ's/min]
Mono       Std     6799    4224    6319.3     131.4    13.2   7.2±0.09
Mono       Loose   7798    5988    7106.1     133.4    10.3   8.1±0.10
Hybrid     Std     5667    2676    5495.9     147.4    32.1   6.9±0.09
Hybrid     Loose   8127    4764    7775.6     165.0    22.1   9.8±0.11
Combined   Std     8953    4599    8469.2     160.2    17.5   9.6±0.11
Combined   Loose   12147   8851    11211.6    173.9    12.0   12.7±0.13

Table 6.1: Analysis results of H. E. S. S. II Mono, Stereo and Combined with the Std and Loose cut configurations. The number of events in the ON and OFF regions n_ON, n_OFF, the number of excess events n_excess, the significance S_LiMa, the signal-to-background ratio (S/B) and the gamma-ray rate for each cut configuration are given.

Table 6.2: Summary of the Crab Nebula spectrum fit parameters derived from Mono, Hybrid and Combined using the Std and Loose cut configurations.
The best fit parameters for each spectrum shape, a power-law (PL) and a log-parabola (CPL), for the Std and Loose cut configurations are given. The energy threshold E_thresh, the spectral index Γ, the curvature β, the normalization N_0, the reference energy E_0 and the fit parameters for each configuration are given. E_min and E_max correspond to the energy range of the fit, obtained during the spectrum fit procedure.

Chapter 6 Crab Nebula with H. E. S. S. Phase-II Observations

These results have been cross-checked with the HAP framework, another reconstruction framework within H. E. S. S. (done by J. Hahn). The intersection run list of both analysis pipelines was used for the cross-check. Both analysis frameworks agree on the differential flux within a 20% uncertainty. Systematic checks performed to understand the difference indicated that it is due to the flux normalization. For this purpose, the spectrum measured by the two different H. E. S. S. analysis chains was compared to the "global" Crab Nebula spectrum, taken as the average spectrum of the three major IACT experiments, i.e. MAGIC, VERITAS and H. E. S. S. (PA + HAP). The relative flux differences (F_i - F_Average)/F_i were calculated, where F_i stands for the H. E. S. S. measurement (this work) and F_Average is the average spectrum obtained from the H. E. S. S. (PA + HAP), MAGIC and VERITAS measurements. An example of this check is shown in Appendix B. The three instruments take independent measurements, during different time periods and under different observation conditions. Comparing spectra measured by different experiments is not an easy task, since the systematics between the experiments are not known. Systematic uncertainties are due to the different calibration and reconstruction methods of the individual experiments, and an absolute calibration between the experiments is difficult.
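The significances S_LiMa quoted in Tables 5.3 and 6.1 follow Eq. 17 of Li & Ma (1983). A minimal sketch is given below; the ON/OFF exposure ratio α is not listed in the tables, so the example infers it from n_excess = n_ON - α·n_OFF, which is an assumption of this sketch:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Significance of an ON/OFF excess, Eq. 17 of Li & Ma (1983).

    alpha is the ratio of the ON to OFF exposures.
    """
    n_on, n_off = float(n_on), float(n_off)
    total = n_on + n_off
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / total)
    term_off = n_off * np.log((1 + alpha) * n_off / total)
    sign = np.sign(n_on - alpha * n_off)
    # Clip tiny negative rounding errors before the square root.
    return float(sign * np.sqrt(np.maximum(2.0 * (term_on + term_off), 0.0)))

# Mono Std numbers from Table 6.1; alpha inferred from the quoted excess.
n_on, n_off, n_excess = 6799, 4224, 6319.3
alpha = (n_on - n_excess) / n_off
s = li_ma_significance(n_on, n_off, alpha)
```

With the Mono Std numbers this reproduces the quoted 131.4σ to within rounding.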
In the comparison plot shown in Figure B.1, the uncertainty between the experiments is assumed to be 30%. The Combined analysis configuration provides a lower energy threshold compared to the Hybrid. The best fit spectrum parameters for the Combined Std are:

dN/dE = (5.9 ± 0.06) × 10^-11 (E / 0.87 TeV)^(-(2.3 ± 0.02) - (0.17 ± 0.01) ln(E / 0.87 TeV)) TeV^-1 cm^-2 s^-1.   (6.1)

The flare reported by the Fermi-LAT during October 2016 was observed by the H. E. S. S. II telescope. Studying the emission at TeV energies and searching for correlated variability is important to understand the origin of the flares.

Table 6.3: The major GeV flares detected by the Fermi-LAT, along with their duration, amplitude and variability time scale. The table also indicates whether the flare was observed by H. E. S. S. II. The details of the major flares are taken from [152, 172, 153], while the 2016 flare details are from this analysis.

Fermi-LAT flares      Feb 2009   Sep 2010   Apr 2011   Mar 2013   Sep 2016
Duration [days]       16         4          9          14         30
Amplitude             x4         x6         x30        x6         x5
Variability [h]       <10        <10        <8         <6         -
H. E. S. S. II Obs.   -          -          -          -          yes

Figure 6.9: The Crab Nebula energy spectrum derived with data from September to October 2016, superimposed on the overall H. E. S. S. II Mono spectrum for comparison. The 1σ confidence intervals of the fitted spectrum shapes are plotted as solid lines. Both spectra are obtained from Mono Std analysis configurations and are compatible within the estimated H. E. S. S. uncertainties. The errors plotted are statistical only.

This difference has to be compared to the 20% systematic uncertainty quoted by H. E. S. S. The lightcurves obtained by Fermi-LAT and H. E. S. S. are shown in Figure 6.10. The lightcurve points from H. E. S. S. II appear to have higher fluxes in the beginning and lower fluxes afterwards, which could indicate a flare starting earlier at TeV energies.
However, by assuming a systematic uncertainty of 15% in each run, the χ²/ndf is reduced to 17.86/21 (P = 0.66). Since the error on the flux is increased, this automatically reduces the χ²/ndf.

All the Crab Nebula flares at GeV energies, including the one presented here, come from the synchrotron component. To investigate the high and low flux variations with H. E. S. S., the spectrum of the Crab Nebula was divided into two parts. The "low" energy spectrum is restricted to events with energies up to 1 TeV, whereas the "high" energy spectrum is fitted at energies above 1 TeV. The former does not show evidence of flux variability, which would be characterized by an excess variance larger than 15%. The latter case, for the monoscopic reconstruction above 1 TeV, shows the same behaviour.

6.7 The Crab Nebula 2016 GeV Flare

The integral flux above 1 TeV was calculated on a run-by-run basis (28 minutes). The highest value corresponds to 2016 October 07. The lightcurve was fitted to a constant flux and gave a reduced χ²/ndf = 67.6/21 (P = 0.0001). The lightcurve derived by H. E. S. S. II has an excess variance of 15%.

Figure 6.10: TeV and GeV lightcurves of the Crab Nebula in 2016. Top: The lightcurve on a run-by-run basis measured by H. E. S. S. II from MJD 57390.94-57724.02 (red squares). The integral fluxes, calculated above 1 TeV, are normalized to the average H. E. S. S. flux from the 2006 publication [113].

Table 6.4: Analysis results for each observation run taken during September-October 2016.
The run number, the Modified Julian Date (MJD), the live time t_live, the mean zenith angle Z_mean, the number of ON (N_ON) and OFF (N_OFF) events, the excess and the significance for each run are given. The integral flux Φ above 1 TeV is also given.

6.8 Summary and Discussion

The flux variability at GeV energies is another peculiarity of the Crab Nebula. A flare reported by the Fermi-LAT via an Astronomer's Telegram prompted H. E. S. S. to trigger observations of the Crab Nebula during September and October 2016. In this chapter, the Fermi-LAT and H. E. S. S. II observations of this flare were also described. The analysis of the Fermi-LAT data revealed that the Crab Nebula flare lasted about one month in the energy range of Fermi-LAT. This analysis showed that the flux instabilities were in the synchrotron component, whereas the inverse Compton component remained at the level of the reported constant flux, as seen in the previous flares. The H. E. S. S. I telescopes could not observe the flare as they were being recommissioned, and commissioning runs are generally not used to derive scientific results due to the continuously changing experiment setup. Fortunately, the H. E. S. S. II telescope was operating during this period and its data were used to investigate the flux variability. With a total of 9 runs, corresponding to an exposure of 3.4 hours, the emission from the Crab Nebula simultaneous with the Fermi-LAT observations was studied. The flux from this period had an excess variance of 15%, which is within the H. E. S. S. uncertainty level. The energy spectrum measured by H. E. S. S. II during this time was compatible with the time-averaged spectrum from all H. E. S. S. II data, also within the uncertainties quoted by H. E. S. S. There is a possibility that H. E. S. S. missed significant flux enhancements in this period, as only a few runs (3.4 h of observations) were considered good quality and used to perform the study. The variability study from the H. E. S. S.
and Fermi-LAT data hints at a TeV flare that starts to increase before the GeV flare. Run-wise simulations can help to reduce and control the systematic uncertainties and to draw firmer conclusions about the flux variation of the Crab Nebula. A detailed correlation study of the flaring amplitudes between the Fermi-LAT and H. E. S. S. experiments could also provide more information about the flares. The 2016 flare was partly observed with H. E. S. S. II standing alone. H. E. S. S. is now fully operational again after the successful upgrade phase, offering the opportunity to observe the Crab Nebula with the full array if any flux enhancement is reported again during the time when the Crab Nebula is visible to H. E. S. S.

Energetic gamma rays from high-energy processes in the Universe are studied by space satellites and ground-based detectors. The Fermi-LAT satellite has scanned the whole sky every three hours in the energy range from about 30 MeV up to more than 500 GeV since June 2008. The H. E. S. S. experiment, located in the southern hemisphere in Namibia, has detected very-high-energy gamma rays from a few tens of GeV up to hundreds of TeV since 2003. The present generation of detectors has opened a new window to study the gamma-ray emission from the Universe.

In the work presented here, data from the Fermi-LAT and H. E. S. S., state-of-the-art experiments in gamma-ray astronomy, were used to perform spectral and variability studies at high energies. The gamma-ray emission from B2 1215+30 and the Crab Nebula, two prototypical sources representing the most abundant source types at GeV and TeV energies, is studied. B2 1215+30 belongs to the blazar source class, a type of active galactic nucleus, whereas the Crab Nebula is a pulsar wind nebula located in the galactic plane.

A systematic investigation of the complete H. E. S. S. data set on the Crab Nebula was performed to study in particular the flux, spectrum and variability. Standalone H. E. S. S.
measurements did not result in any evidence for variability. The synchrotron emission from the Crab Nebula was found to be variable at high energies by the space detectors. Multiwavelength observations of the Crab Nebula flare with the H. E. S. S. experiment left the origin of these flares uncertain by not revealing any variability at very high energies. New flaring activity in the GeV energy range was detected by the Fermi-LAT in 2016, and the simultaneous H. E. S. S. II observations were presented here.

Chapter 7 Conclusions and Outlook

Blazars constitute the vast majority of sources detected at gamma-ray energies. Multiwavelength observations of blazars reveal them as variable at all wavelengths. Observations of high luminosity with rapid flux variations from small emission regions characterize blazars at very high energies. Their flaring activities reveal different behaviour at high energies, offering a new opportunity to study and characterize the emission from these sources. Part of this work was dedicated to the study of the emission from one source of this class; the blazar B2 1215+30 represents an interesting case for studying the high-energy emission. In this work, a large-amplitude gamma-ray flare was studied and characterized, and long-term variability studies were performed. This study relied on publicly available Fermi-LAT data.

Footnotes: hereafter the electron notation is used for electron/positron; r_e is the electron radius; the magnetic constant is given in the MKS system; SMBH: M_SMBH > 10^6 M_⊙; 1 Mpc = 3.09 × 10^19 km; https://www.cta-observatory.org; http://fermi.gsfc.nasa.gov/ssc/data/analysis/user/; https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/overview.html

Acknowledgements

This thesis was financially supported by the University of Paris-Sud and the PHENIICS doctoral school.
Appendix

Figure: The pull distributions of the flux measurements before ("No TC cut") and after ("TC > 0.9") the TC cut, both normalized to unity. The pull is the flux Φ_i minus the mean flux, divided by the flux error σ_Φ,i. For statistical errors only, the pull distribution has width one. For the runs before the TC cut the width is 2.2 and after the TC cut it is 1.6, indicating that effects introduced by the varying atmospheric quality make up a large fraction of the systematic uncertainty. The flux errors are statistical only.

List of Tables

1.1 Main historical instruments used in high-energy gamma-ray astronomy. Details about these instruments can be found in the references given here and therein [Kraushaar et al.; Derdeyn et al.; Thompson et al.; Rando et al.; Weekes et al.; Barrau et al.; Paré et al.; Covault et al.; Lorenz et al.; Holder et al.; Anderhub et al.; Smith et al.].
5.4 Summary of the Crab Nebula spectrum fit parameters for the Std and Loose cut configurations.
The spectrum is fitted with a power-law, a log-parabola and a power-law with exponential cut-off. The energy range of the spectrum fit (E_min and E_max), the normalisation N_0, the reference energy E_0, the spectral index Γ, the curvature β and the other parameters of the fit are summarized. Spectrum parameters from Whipple [Carter-Lewis et al.], CAT [Finley et al.], HEGRA [Aharonian et al.], MAGIC [Aleksić et al.], VERITAS [Meagher et al.] and H. E. S. S. I [Aharonian et al.] are also given.

TMVA Toolkit for multivariate analysis.
ToO Target of opportunity.
TS Test statistic.
UV Ultraviolet.

This can then be transformed as:

with F_keV the measured flux at 1 keV. The optical depth can be written as:

With the requirement that τ(E_t) < 1, a lower limit on the Doppler factor δ can be obtained:
The current state-of-the-art experiments in gamma-ray astronomy are the Fermi-LAT in space and the ground-based H. E. S. S., VERITAS and MAGIC experiments. The monitoring of very-high-energy gamma-ray emitting sources reveals the diverse physics taking place in astrophysical environments. To study the most energetic form of radiation and the most violent phenomena taking place in the Universe, individual source analyses are important. BL Lac objects, a subcategory of active galaxies, are the most abundant source class detected both at GeV and TeV energies, while pulsar wind nebulae represent the most numerous identified source class in the galactic plane. Both source classes exhibit gamma-ray flux variations. In this thesis, the gamma-ray variability of the BL Lac object B2 1215+30 is presented with Fermi-LAT data. A bright flare, with 16 times the average quiescent flux, was detected in February 2014. In collaboration with the VERITAS experiment, the gamma-ray variability was investigated over five decades in energy. This work resulted in the detection of a luminous flare, seen simultaneously at GeV and TeV energies by both instruments. These results were used to set constraints on the size of the emission region and on the Doppler factor of the relativistic jet.
Additionally, the long-term variability was studied using nine years of Fermi-LAT data. This revealed new flux enhancements that characterize the long-term lightcurve from 100 MeV up to 500 GeV. Other striking characteristics are a steady linear increase of the yearly average flux, together with a hardening of the spectral index. The investigation of the lightcurve indicates a hint of quasi-periodic behavior with a period of around 1083 ± 32 days.

production gives access to the Higgs boson trilinear self-coupling and is sensitive to the presence of physics beyond the standard model. A considerable effort has been devoted to the development of an algorithm for the reconstruction of τ-lepton decays to hadrons (τ_h) and a neutrino for the Level-1 calorimeter trigger of the experiment, which has been upgraded to face the increase in centre-of-mass energy and instantaneous luminosity expected for the LHC Run II operations. The algorithm implements a sophisticated dynamic energy clustering technique and dedicated background rejection criteria. Its structure, optimisation and implementation, its commissioning for the LHC restart at TeV, and the measurement of its performance are presented. The algorithm is an essential element in the search for production. The investigation of the bbτ τ process explores the three decay modes of the τ τ system with one or two τ_h in the final state. A dedicated event selection and categorisation is developed and optimised to enhance the sensitivity, and multivariate techniques are applied for the first time to these final states to separate the signal from the background. Results are derived using an integrated luminosity of . fb⁻¹. They are found to be consistent, within uncertainties, with the standard model background predictions. Upper limits are set on resonant and nonresonant production and constrain the parameter space of the minimal supersymmetric standard model and anomalous Higgs boson couplings.
The observed and expected upper limits are about 30 and 25 times the standard model prediction, respectively, corresponding to one of the most stringent limits set so far at the LHC. Finally, prospects for future measurements of production at the LHC are evaluated by extrapolating the current results to an integrated luminosity of fb⁻¹ under different detector and analysis performance scenarios.

Université Paris-Saclay, Espace Technologique / Immeuble Discovery, Route de l'Orme aux Merisiers RD 128, 91190 Saint-Aubin, France
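The three spectral shapes named in the fits above (power law, log-parabola, power law with exponential cut-off) can be written compactly. A minimal sketch follows; every numerical value in it is an illustrative placeholder, not a fitted result from the text.

```python
import numpy as np

# Spectral shapes commonly fitted to VHE gamma-ray spectra (dN/dE).
# N0, E0, gamma_idx, beta_curv, E_cut are illustrative placeholders.
N0, E0, gamma_idx, beta_curv, E_cut = 3.0e-11, 1.0, 2.6, 0.2, 10.0  # E in TeV

def power_law(E):
    return N0 * (E / E0) ** (-gamma_idx)

def log_parabola(E):
    return N0 * (E / E0) ** (-gamma_idx - beta_curv * np.log(E / E0))

def power_law_exp_cutoff(E):
    return N0 * (E / E0) ** (-gamma_idx) * np.exp(-E / E_cut)

E = np.logspace(-1, 2, 50)  # 0.1 to 100 TeV
curves = {f.__name__: f(E) for f in (power_law, log_parabola, power_law_exp_cutoff)}
```

By construction all three shapes equal N0 at the reference energy E0, and both the log-parabola and the cut-off power law fall below the pure power law at high energies.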
01755880
en
[ "phys" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01755880/file/DissipationDrivenTimeReversaloFWaves.pdf
Vincent Bacot Sander Wildeman Surabhi Kottigegollahalli Sreenivas Maxime Harazi Xiaoping Jia Arnaud Tourin Mathias Fink Emmanuel Fort email: [email protected]

Dissipation driven time reversal for waves

Dissipation is usually associated with irreversibility. Here we present a counter-intuitive concept to perform wave time reversal using a dissipation impulse. A sudden and strong modification of the damping in the propagating medium, localized in time, generates a counter-propagating time-reversed version of the initial wave. In the limit of a high dissipation shock, it amounts to a 'freezing' of the wave, where the initial wave field is retained while its time derivative is set to zero at the time of the impulse. The initial wave then splits into two waves with identical profiles but opposite time evolution. In contrast with other time-reversal methods, the present technique produces an exact time reversal of the initial wave field, compatible with broadband time reversal. Experiments performed with interacting magnets placed on a tunable air cushion give a proof of concept. Simulations show the ability to perform time reversal in 2D complex media.

Dissipation in physical systems introduces time irreversibility by breaking the time symmetry of their dynamics [1] [2] [3] [4] [5]. Similarly, damping in wave propagation deteriorates time-reversal (TR) operations. However, for times short compared to the characteristic dissipation time, the dissipation-free wave-propagation model remains a good approximation of the wave dynamics [6]. Thus, increasing dissipation can generally be seen as effectively reducing the reversibility of the system. On the other hand, dissipation can be seen as the time reverse of gain. This is exemplified by the concept of coherent perfect absorbers (CPA), which combine an interplay between interference and absorption to produce the equivalent of a time-reversed laser [7] [8]. CPA addresses the TR of sources into sinks [9].
In this paper, we present a concept in which dissipation produces a time-reversed wave. The generation process is similar to that employed in the Instantaneous Time Mirror (ITM) approach [10], in which a sudden change in the wave propagation speed in the entire medium produces a time-reversed wave. Here, we consider instead a dynamical change of the dissipation coefficient. We will show that in this case too, the production mechanism can be interpreted as a modification of the initial conditions characterizing the system at a given instant, by virtue of Cauchy's theorem [START_REF] Hadamard | Lectures on Cauchy's Problem in Linear Partial Differential Equations[END_REF]. In the following we will refer to this new, dissipation-based, TR concept as DTR, for Dissipation Time Reversal. DTR can, in principle, be implemented for any type of wave propagating in a (dissipative) medium: for example, for acoustic waves, through a modulation of the viscosity, and for EM waves, through a sudden change in the conductivity of the medium. Because the source term involved is a first-order time derivative, this new TR technique is able to create an exact TR wave, i.e. one precisely proportional to the initial wave field. This results in a higher fidelity and enhanced broadband capabilities compared to other methods [10] [START_REF] Fink | [END_REF]. In the following, we demonstrate the validity of the DTR concept by performing a proof-of-concept 1D experiment using a chain of coupled magnets. In addition, we show the performance of DTR in a complex 2D highly scattering medium using computer simulations.

Theory of Dissipation driven Time Reversal (DTR)

Waves in a homogeneous non-dissipative medium are usually governed by the d'Alembert wave equation [START_REF] Crawford | Berkeley Physics Course[END_REF]. More generally, the wave fields can be described by an equation of the same structure, but in which the Laplacian operator is replaced by a more complex spatial operator.
This is the case, for instance, when describing acoustic waves in a non-homogeneous medium or gravity-capillary waves at the surface of deep water. This type of equation can be written in a general manner in the spatial Fourier space for the wave vector 𝒌 [START_REF] Benjamin | [END_REF] [15]:

∂²φ/∂t²(𝒌, t) + ω₀²(𝒌) φ(𝒌, t) = 0, (1)

where ω₀(𝒌) is the dispersion relation of the waves. The time-reversal symmetry is a direct consequence of the second order of the time derivative: if φ(𝒌, t) is a solution of the equation, φ(𝒌, -t) obviously satisfies the same equation. The damping effect induced by dissipation is usually described by an additional first-order derivative term in this equation:

∂²φ/∂t²(𝒌, t) + ζ(𝒌, t) ∂φ/∂t(𝒌, t) + ω₀²(𝒌) φ(𝒌, t) = 0, (2)

where ζ(𝒌, t) is the time-dependent damping coefficient. This additional dissipation term breaks the time symmetry of the equation, which is precisely why dissipation is generally associated with irreversibility. For simplicity, we drop the 𝒌-dependence of the dissipation in the following notation. If ζ(t) remains small compared to ω₀, an approximate reversibility is retained for times smaller than 1/ζ(t) [10]. In the following, we consider a medium where the damping coefficient is initially small or negligible, i.e. ζ ≈ 0. At a time t₁, the dissipation coefficient is set to a very high value over the entire medium, ζ ≫ ω₀, and stays at this value until a later time t₂ = t₁ + Δt, where it is set back to its original value. ζ(t) can thus be written as ζ(t) = ζ·Π(t), where Π(t) is a unit rectangle function spanning from t₁ to t₂. An initial wave φ₀ with Fourier transform φ̂₀ is originally present in the medium. During the dissipation impulse, the last term of equation (2) is negligible and one may write the approximate expressions for the wave field and its time derivative at times t after the damping impulse at time t₁ as:

φ(t) = φ₀(t₁) + (1/ζ)(∂φ₀/∂t)(t₁) [1 - e^(-ζ(t - t₁))],
(∂φ/∂t)(t) = (∂φ₀/∂t)(t₁) e^(-ζ(t - t₁)). (3)

Taking ζ towards infinity, we obtain that the field remains approximately constant, φ(t) ≈ φ₀(t₁), during the dissipation impulse. The system behaves as an overdamped harmonic oscillator that returns very slowly to a steady equilibrium state without oscillating. In this regime, the characteristic oscillations corresponding to wave motion are stopped, and the time it takes for the system to relax increases with dissipation, so that in the high-dissipation limit we are considering, amplitude damping does not have time to occur. A more detailed calculation shows that in the long run the amplitude decreases as exp[-(ω₀²/ζ)(t - t₁)] (see supplemental material). For the strong-dissipation limit to hold and for the wave amplitude to be retained, the duration Δt of the dissipation pulse should thus satisfy 1/ζ < Δt < ζ/ω₀². If the damping is strong enough, the duration of the dissipation phase may be large compared to the period of the original wave. At time t₂, when the dissipation ends, the wave field starts evolving again according to equation (2), with the initial conditions:

(φ(t₂), (∂φ/∂t)(t₂)) = (φ₀(t₁), 0). (4)

The DTR process can be interpreted as a change of the initial Cauchy conditions which characterize the future evolution of the wave field from this initial time t₂. As in the case of the ITM [10], this state can be decomposed into two counter-propagating wave components using the superposition principle:

(φ(t₂), (∂φ/∂t)(t₂)) = ½(φ₀(t₁), (∂φ₀/∂t)(t₁)) + ½(φ₀(t₁), -(∂φ₀/∂t)(t₁)). (5)

The first term is associated (up to a factor one half) with the exact state of the incident wave field before the DTR; it corresponds to the same wave shifted in time: φ_f(t) = ½φ₀(t - t₂ + t₁). The second term is associated with a wave whose derivative has a minus sign. It corresponds to the time-reversed wave: φ_b(t) = ½φ₀(t₁ + t₂ - t). Figure 1 shows the principle of DTR with a chirped pulse containing several frequencies.
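The freezing described by Eq. (3) can be checked on a single Fourier mode of Eq. (2). The sketch below integrates the damped-oscillator equation through a strong dissipation pulse; all parameter values are illustrative assumptions, not values from the text.

```python
import numpy as np

# One Fourier mode of Eq. (2): phi'' + zeta(t) phi' + w0^2 phi = 0.
# A strong dissipation pulse (zeta >> w0) on [t1, t2] should leave the
# field phi almost unchanged while killing dphi/dt, as in Eq. (3).
w0, zeta_max = 1.0, 200.0
t1, t2, dt = 10.0, 10.5, 1e-4      # 1/zeta < (t2 - t1) < zeta/w0^2

def zeta(t):
    return zeta_max if t1 <= t < t2 else 0.0

phi, dphi = 1.0, 0.0               # free oscillation before the pulse
phi_t1 = dphi_t1 = None
t = 0.0
while t < t2:
    if phi_t1 is None and t >= t1:
        phi_t1, dphi_t1 = phi, dphi        # state just before the pulse
    # semi-implicit Euler, stable even for large zeta
    dphi = (dphi - dt * w0**2 * phi) / (1.0 + dt * zeta(t))
    phi += dt * dphi
    t += dt
# here phi ~ phi_t1 (frozen field) while |dphi| has dropped to ~(w0^2/zeta)|phi|
```

At the end of the pulse the field is retained to within a fraction of a percent, while the time derivative has collapsed to the slow drift of order ω₀²φ/ζ, which is the Cauchy state of Eq. (4).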
The pulse created at time t = 0 propagates in a dispersive medium and undergoes spreading (see figure 1a). At time t_DTR, a damping impulse is applied and the pulse is frozen (see figure 1b). The damping is then removed, resulting in the creation of two counter-propagating pulses with half the amplitude of the initial wave field. The forward-propagating pulse is identical to the initial propagating pulse, as if no DTR had been applied, apart from the amplitude factor. The backward-propagating pulse is the TR version of the initial pulse. It thus narrows as it propagates, reversing the dispersion, until at time 2t_DTR it returns to the initial profile (with a factor of one half in amplitude).

1D proof-of-concept experiment using a chain of magnets

We have performed a 1D experiment using a chain of magnets as a proof of concept for DTR. Figure 2a shows the experimental set-up, composed of a circular chain of 41 plastic disks. A small magnet oriented vertically with its North pole upward is glued on each 1 cm disk. The disk orientation being constrained, this induces a repulsive interaction between the disks. The disks are confined horizontally on a circle of 30 cm diameter by an underlying circle of fixed magnets with their North poles oriented upward. In addition, the friction of the disks on the table is drastically reduced by the use of an air-cushion system. This allows waves to propagate in this coupled-oscillator chain. A computer-controlled electromagnet is used to trigger longitudinal waves in the chain. The change in dissipation is obtained by a sudden stop of the air flow using a controlled valve, which results in 'freezing' the disk chain. Thus, in that case the damping coefficient ζ can be considered infinite. For small-amplitude oscillations, the chain can be modeled as a system of coupled harmonic oscillators with a rigidity constant κ, which can be fitted from the dispersion curve measurements.
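Before turning to the dispersive magnet chain, the freeze-and-split principle of Figure 1 can be reproduced in a minimal numerical sketch on an idealised non-dispersive 1D string. The grid sizes and pulse shape below are our illustrative choices, not parameters of the experiment.

```python
import numpy as np

# DTR on an idealised non-dispersive 1D string, u_tt = u_xx, solved by
# leapfrog with Courant number 1 (exact transport of d'Alembert data).
nx, n_half = 1200, 300
x = np.arange(nx, dtype=float) - 300.0
f = lambda s: np.exp(-(s / 20.0) ** 2)        # smooth pulse profile

u_prev = f(x)                                  # right-moving pulse u = f(x - t)
u = f(x - 1.0)

def step(u, u_prev):
    u_next = np.zeros_like(u)
    u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]   # leapfrog, C = 1, fixed ends
    return u_next, u

for _ in range(n_half):                        # free propagation until the impulse
    u, u_prev = step(u, u_prev)

u_prev = u.copy()                              # dissipation impulse: keep u,
                                               # set du/dt to zero, as in Eq. (4)
for _ in range(n_half):                        # two half-amplitude pulses emerge,
    u, u_prev = step(u, u_prev)                # the decomposition of Eq. (5)

left, x_left = u[x < 300.0], x[x < 300.0]
back_amp = np.abs(left).max()                  # time-reversed pulse, ~0.5
back_pos = x_left[int(np.argmax(np.abs(left)))]  # refocused near x = 0
fwd_amp = np.abs(u[x >= 300.0]).max()          # forward pulse, also ~0.5
```

After the freeze, the backward pulse returns to the launch position at twice the freeze time with half the initial amplitude, while the forward pulse continues as if no DTR had occurred.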
When the air pressure is slightly increased, the disks are set in random motion, as shown in Figure 2b, which presents the displacements of the disks as a function of time in gray scale. It is possible to retrieve the dispersion curve of the chain from this noisy motion by performing a space-time Fourier transform. The color scale is given in degrees along the circle; one degree is approximately equivalent to 2.6 mm. Figure 2c shows the resulting experimental dispersion curve together with the fit of the harmonic model (dashed line). The good agreement of the fit confirms the validity of the harmonic interaction and yields a value κ = 0.13 N·m⁻¹ for the rigidity constant. Figure 3a shows the propagation of an initially localized perturbation of the magnetic chain (magnet #37 acting as the source). A wave packet is launched in the magnetic chain. At time t₀ the chain is suddenly damped by turning off the air flow, resulting in a freezing of the disk motion. At time t₁ the air flow is restored, resulting in the creation of two counter-propagating wave packets: i) one which resembles the initial wave packet, as if no DTR had been applied, apart from a decrease of the global amplitude by a factor of approximately 2; ii) a time-reversed wave packet refocusing on the source. Figure 3b shows the disk displacements simulated using the harmonic-interaction model with the fitted coupling constant. The initial magnet displacement is also taken from the experiment at time t_init. This signal processing enables one to remove the forward-propagating wave packet after the DTR. The resulting displacement pattern clearly shows the two counter-propagating waves after the DTR impulse. Its similarity with the experimental data also shows the validity of the model.

Simulations of DTR in a 2D disordered medium

We performed computer simulations for a 2D disordered system to show the robustness of the DTR technique in complex media.
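For reference, a chain of coupled harmonic oscillators such as the one used in the fit above has the textbook dispersion relation ω(q) = 2√(κ/m)|sin(qa/2)|. In the sketch below, κ is the fitted value quoted in the text, while the disk mass m is an illustrative placeholder and the spacing a is merely estimated from the 30 cm circle carrying 41 disks.

```python
import numpy as np

# Textbook dispersion of a 1D harmonic chain of masses m and springs kappa:
# omega(q) = 2 sqrt(kappa/m) |sin(q a / 2)|.
kappa = 0.13                      # N/m (fitted value from the text)
m = 1.0e-3                        # kg (illustrative placeholder)
a = np.pi * 0.30 / 41             # m, ~2.3 cm between neighbouring disks (estimate)

def omega(q):
    return 2.0 * np.sqrt(kappa / m) * np.abs(np.sin(q * a / 2.0))

q = np.linspace(0.0, np.pi / a, 200)   # first Brillouin zone
w = omega(q)
```

The curve rises linearly at small q (the sound-like branch) and flattens at the zone edge, which is the qualitative shape such a fit matches against the measured dispersion.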
The simulations are based on the mass-spring model of Harazi et al. [START_REF] Harazi | Multiple Scattering and Time Reversal of Ultrasound in Dry and Immersed Granular Media[END_REF] [17], introduced to simulate wave propagation in disordered stressed granular packings. The model consists of a two-dimensional percolated network of point particles of mass m, connected by linear springs of random stiffness, as shown in the schematics in Figure 4a. The masses are able to move in the plane of the network and are randomly placed on a 70×70 square lattice with a filling factor of 91%. Each mass of the network obeys Newton's equation:

d²𝒓_i/dt² = Σ_{j∈V_i} ω_ij² (r_ij - a₀) 𝒓_ij/r_ij, (6)

where 𝒓_i is the position vector of particle i, V_i the set of neighboring particles connected to this particle, ω_ij is the angular frequency and a₀ the rest length of the spring, and 𝒓_ij and r_ij are the vector between the two particles i and j and its norm, respectively. The angular frequencies ω_ij are uniformly distributed between 0.5 ω_m and 1.5 ω_m, where ω_m is the average angular frequency of the spring-mass systems. Before launching the wave, the network is submitted to a static stress by pulling the four walls of the domain (strain equal to 0.2) in order to ease the propagation of transverse waves. After this phase, the boundaries of the domain are fixed (zero displacement). The network is then excited by a horizontal displacement of one of the particles during a finite time. The profile of the excitation is given by u(t) = W(t) cos(ω_e t), where W(t) is a temporal window restricting the oscillation to a single period and ω_e is the driving source pulsation, chosen at 0.35 of the average angular frequency ω_m of the spring-mass systems. Figures 4b and 4c show, respectively, the time evolution of the horizontal displacement of the source and maps of the particle displacements at various times.
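A minimal sketch of the force law of Eq. (6), together with one velocity-Verlet step, is given below. The two-particle setup and all numerical values are illustrative, not the 70×70 network of the text.

```python
import numpy as np

# Force law of Eq. (6) for a mass-spring network, plus one velocity-Verlet step.
def acceleration(r, springs):
    """r: (N, 2) positions; springs: list of (i, j, w_ij, a0)."""
    acc = np.zeros_like(r)
    for i, j, w, a0 in springs:
        rij = r[j] - r[i]
        d = np.linalg.norm(rij)
        force = w**2 * (d - a0) * rij / d    # attractive when stretched
        acc[i] += force
        acc[j] -= force
    return acc

def verlet_step(r, v, springs, dt):
    a = acceleration(r, springs)
    v_half = v + 0.5 * dt * a
    r_new = r + dt * v_half
    v_new = v_half + 0.5 * dt * acceleration(r_new, springs)
    return r_new, v_new

# Two particles joined by one spring stretched 0.5 beyond its rest length:
r = np.array([[0.0, 0.0], [1.5, 0.0]])
springs = [(0, 1, 2.0, 1.0)]                 # w_ij = 2, a0 = 1
acc = acceleration(r, springs)               # expected magnitude: w^2 (d - a0) = 2
acc_rest = acceleration(np.array([[0.0, 0.0], [1.0, 0.0]]), springs)  # zero force
r1, v1 = verlet_step(r, np.zeros_like(r), springs, 0.01)  # particles pull together
```

In this picture, the DTR freeze used in the simulations simply amounts to setting all the velocities v to zero at t_DTR before continuing the integration.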
At t = 0, the displacement is confined to the source particle. Then the displacement of the source decreases rapidly to a noise level of approximately one tenth of its initial value. The initial perturbation propagates and is strongly scattered in the inhomogeneous mass-spring network. At time t = 95 T_e, where T_e = 2π/ω_e is the excitation period, the perturbation is spread over the network and the particle displacements become randomly distributed. The network acts as a complex medium due to the random spring stiffnesses and the random vacancies in the square network. The DTR freezing is applied at t_DTR = 300 T_e: all the particle velocities are set to zero, as if an infinite damping were applied instantaneously, keeping only the potential energy in the system. Right after this instant, the masses are released from their frozen positions with zero velocity (see third panel of Fig. 4d) and evolve according to equation (1). After a complex motion of the particles, a coherent field appears, refocusing back onto the initial source around time t = 2t_DTR = 600 T_e (see fourth panel of Fig. 4d). The horizontal displacement of the source undergoes a very sharp increase, reaching approximately 80% of its initial value (see Fig. 4b). Right after this time, the converging wave diverges again, yielding a new complex displacement field (see the movie of the propagation in the supplemental material). In addition to the spatial focusing on the initial source, a temporal refocusing is also observed, showing a temporal shortening of the initial impulse at time t = 2t_DTR (see Fig. 4c). The time width of the refocusing is approximately 6T_e (FWHM), i.e. twice that of the source signal.

Discussion

The DTR process is associated with a decrease of the amplitude of the initial wave field. The kinetic energy, associated with the time derivative of the field ∂φ₀/∂t, vanishes during the dissipation impulse, leaving the potential energy, associated with the wave field φ₀, unaffected.
In the case of an initially propagating wave, the energy of the wave is equally partitioned between potential and kinetic energy. Thus, half of the initial energy is lost in the DTR process, resulting in a quarter of the initial energy being time reversed while another quarter is retained in the initial wave. It is interesting to note that, for standing waves, the effect of the DTR depends on its time phase relative to the impulse, since the wave energy alternates between kinetic and potential forms. The Cauchy analysis in terms of the initial conditions that determine the wave-field evolution enables one to make a link with Loschmidt's Gedankenexperiment for particle evolution. Loschmidt imagined a demon capable of instantaneously reversing the velocities of the particles of a gas while keeping their positions unaffected, thus time reversing the gas evolution [18] [19]. Although this scheme is impossible in the case of particles, due to the extreme sensitivity to initial conditions, it is more amenable for waves because they can often be described with a linear operator and any error in the initial conditions will not suffer from chaotic behavior. The wave analogue of this Loschmidt demon, in terms of Cauchy's initial conditions (φ₀, ∂φ₀/∂t), is to change the sign of the wave-field time derivative: (φ₀, -∂φ₀/∂t). Because of the superposition principle, the DTR thus acts as a Loschmidt demon by decoupling the wave field from its derivative. The DTR concept is generic and applies in the case of complex inhomogeneous materials, as shown by the 2D simulations (see Fig. 4). This can be shown directly from the Cauchy theorem. After the freezing, the wave-field initial conditions are reset with the time derivative of the wave field equal to zero. The superposition principle given in Eq. 5 holds.
In contrast with the ITM approach based on wave-velocity changes [10] and with standard time-reversal cavities [START_REF] Fink | [END_REF], the backward-propagating wave is directly proportional to the TR of the original wave, and not to the time reversal of its time derivative or antiderivative. From that perspective, DTR has no intrinsic spectral limitation and can be applied to the TR of broadband wave packets, removing one of the limitations of existing TR techniques. The remaining limitation on the TR spectral range comes from the ability to freeze the field sufficiently rapidly compared with the phase change in the wave packet, resulting in the condition ζ ≫ ω₀ on the largest time pulsation.

Conclusion

This paper presents a new way to perform an instantaneous time mirror using a dissipation-modulation impulse. This concept is generic and could be applied to other types of waves. In optics, DTR could be induced by abruptly changing the conductivity of the medium, as in graphene [20]; in acoustics it could be obtained with an electrorheological medium [21].

Figure captions:
Figure 1: Principle of the DTR mirror: a) Propagation of a chirped pulse in a dispersive medium.
Figure 2: a) Schematics of the experimental set-up composed of a circular chain of 41 plastic disks.
Figure 3: a) Displacement amplitudes of the magnets as a function of time. The excitation is initially a localized perturbation of the magnetic chain, magnet #37 acting as the source. At time t₀ the chain is suddenly damped by turning off the air flow, resulting in a freezing of the disk motion. At time t₁ the air flow is restored, resulting in the creation of two counter-propagating wave packets. b) Simulations of the magnet displacements using the harmonic-interaction model with the fitted coupling constant. The initial magnet displacement is taken from the experiment at time t_init.
Figure 4: a) Schematic view of the 2D spring model. The model consists of a 2D percolated network.

Supplemental Material

For t ∈ (t₁, t₂):

∂²φ/∂t²(𝒌, t) + ζ(𝒌) ∂φ/∂t(𝒌, t) + ω₀²(𝒌) φ(𝒌, t) = 0. (S1)

We consider the regime of high dissipation, so that we assume ζ(𝒌) > 2ω₀(𝒌). (S1) is thus the equation of a damped harmonic oscillator in the overdamped regime, whose solutions are given by:

φ(t) = A exp[-(ζ(𝒌)/2)(1 - √(1 - 4ω₀²(𝒌)/ζ²(𝒌)))(t - t₁)] + B exp[-(ζ(𝒌)/2)(1 + √(1 - 4ω₀²(𝒌)/ζ²(𝒌)))(t - t₁)], (S2)

with A and B two constants. Given the initial conditions of continuity of the field and its time derivative at t₁, we obtain:

φ(𝒌, t) = {[1 + √(1 - 4ω₀²/ζ²)] φ₀(𝒌, t₁) + (2/ζ)(∂φ₀/∂t)(𝒌, t₁)} / [2√(1 - 4ω₀²/ζ²)] · exp[-(ζ/2)(1 - √(1 - 4ω₀²/ζ²))(t - t₁)]
- {[1 - √(1 - 4ω₀²/ζ²)] φ₀(𝒌, t₁) + (2/ζ)(∂φ₀/∂t)(𝒌, t₁)} / [2√(1 - 4ω₀²/ζ²)] · exp[-(ζ/2)(1 + √(1 - 4ω₀²/ζ²))(t - t₁)]. (S3)

Developing at first order in ω₀(𝒌)/ζ(𝒌) → 0 in front of, and at order 2 inside, the exponential terms:

φ(t) = -(1/ζ(𝒌))(∂φ₀/∂t)(𝒌, t₁) exp[-ζ(𝒌)(1 - ω₀²(𝒌)/ζ²(𝒌))(t - t₁)] + [φ₀(𝒌, t₁) + (1/ζ(𝒌))(∂φ₀/∂t)(𝒌, t₁)] exp[-(ω₀²(𝒌)/ζ(𝒌))(t - t₁)], (S4)

where we used the fact that (1/ζ(𝒌))(∂φ₀/∂t)(𝒌, t₁) is of order one in ω₀(𝒌)/ζ(𝒌). Taking the zeroth order inside the exponentials yields equation (4) of the main text. Equation (S4) also reveals that the wave amplitude decreases like exp[-(ω₀²(𝒌)/ζ(𝒌))(t - t₁)] in the long run.

Acknowledgements: We are very grateful to Y. Couder for fruitful and stimulating discussions. We thank A. Fourgeaud for help in building the experimental set-up. S. K. S. acknowledges the French Embassy in India for a Charpak scholarship. The authors acknowledge the support of the AXA research fund and LABEX WIFI (Laboratory of Excellence ANR-10-LABX-24) within the French Program 'Investments for the Future' under reference ANR-10-IDEX-0001-02 PSL*.
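As a numerical cross-check of the overdamped regime discussed in the Supplemental Material, the sketch below integrates the damped oscillator (S1) by brute force and compares it with the standard closed-form overdamped solution. All parameters are illustrative, with ζ ≫ ω₀.

```python
import numpy as np

# Overdamped regime of (S1): phi'' + zeta phi' + w0^2 phi = 0.
w0, zeta = 1.0, 50.0
s = np.sqrt(1.0 - 4.0 * w0**2 / zeta**2)
lam_slow = -0.5 * zeta * (1.0 - s)       # ~ -w0^2 / zeta  (slow relaxation)
lam_fast = -0.5 * zeta * (1.0 + s)       # ~ -zeta         (fast transient)

phi0, dphi0 = 1.0, 0.3                   # state at the start of the pulse
A = ((1.0 + s) * phi0 + 2.0 * dphi0 / zeta) / (2.0 * s)
B = -((1.0 - s) * phi0 + 2.0 * dphi0 / zeta) / (2.0 * s)

def exact(t):                            # closed form, t measured from pulse start
    return A * np.exp(lam_slow * t) + B * np.exp(lam_fast * t)

dt, T = 1e-5, 2.0                        # brute-force integration of (S1)
phi, dphi = phi0, dphi0
for _ in range(int(round(T / dt))):
    dphi += dt * (-zeta * dphi - w0**2 * phi)
    phi += dt * dphi
```

The slow decay rate reproduces the long-run behaviour exp[-(ω₀²/ζ)(t - t₁)] quoted in the text, while the fast transient dies out on the 1/ζ time scale.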
01756026
en
[ "math.math-ap" ]
2024/03/05 22:32:10
2020
https://hal.science/hal-01756026/file/PKB_AKG_PL_20180328.pdf
Prasanta Kumar Barik, Ankik Kumar Giri, Philippe Laurençot

Mass-conserving solutions to the Smoluchowski coagulation equation with singular kernel

Keywords: Coagulation, Singular coagulation kernels, Existence, Mass-conserving solutions
MSC (2010): Primary: 45J05, 45K05; Secondary: 34A34, 45G10

Cueto Camejo & Warnecke (2015). In particular, linear growth at infinity of the coagulation kernel is included and the initial condition may have an infinite second moment. Furthermore, all weak solutions (in a suitable sense), including the ones constructed herein, are shown to be mass-conserving, a property which was proved in Norris (1999) under stronger assumptions. The existence proof relies on a weak compactness method in L¹, and a by-product of the analysis is that both conservative and non-conservative approximations to the SCE lead to weak solutions which are then mass-conserving.

Introduction

The kinetic process in which particles undergo changes in their physical properties is called a particulate process. The study of particulate processes is a well-known subject in various branches of engineering, astrophysics, physics, chemistry and many other related areas. During a particulate process, particles merge to form larger particles or break up into smaller particles. Through this process, particles change their size, shape and volume, among other properties. There are various types of particulate processes, such as coagulation, fragmentation, nucleation and growth. In particular, this article deals with the coagulation process, which is governed by the Smoluchowski coagulation equation (SCE). In this process, two particles coalesce to form a larger particle at a particular instant. The SCE is a nonlinear integral equation which describes the dynamics of the evolution of the concentration g(ζ, t) of particles of volume ζ > 0 at time t ≥ 0 [START_REF] Smoluchowski | Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen[END_REF]. The evolution of g is given by

∂g(ζ, t)/∂t = (1/2) ∫₀^ζ Ψ(ζ - η, η) g(ζ - η, t) g(η, t) dη - ∫₀^∞ Ψ(ζ, η) g(ζ, t) g(η, t) dη =: B_c(g)(ζ, t) - D_c(g)(ζ, t), (1.1)

supplemented with the initial condition

g(ζ, 0) = g^{in}(ζ) ≥ 0, ζ ∈ (0, ∞). (1.2)
The evolution of g is given Here ∂g(ζ,t) ∂t represents the time partial derivative of the concentration of particles of volume ζ at time t. In addition, the non-negative quantity Ψ(ζ, η) denotes the interaction rate at which particles of volume ζ and particles of volume η coalesce to form larger particles. This rate is also known as the coagulation kernel or coagulation coefficient. The first and last terms B c (g) and D c (g) on the right-hand side to (1.1) represent the formation and disappearance of particles of volume ζ due to coagulation events, respectively. Let us define the total mass (volume) of the system at time t ≥ 0 as: M 1 (g)(t) := ∞ 0 ζg(ζ, t)dζ. (1.5) According to the conservation of matter, it is well known that the total mass (volume) of particles is neither created nor destroyed. Therefore, it is expected that the total mass (volume) of the system remains conserved throughout the time evolution prescribed by (1.1)-(1.2), that is, M 1 (g)(t) = M 1 (g in ) for all t ≥ 0. However, it is worth to mention that, for the multiplicative coagulation kernel Ψ(ζ, η) = ζη, the total mass conservation fails for the SCE at finite time t = 1, see [START_REF] Leyvraz | Singularities in the kinetics of coagulation processes[END_REF]. The physical interpretation is that the lost mass corresponds to "particles of infinite volume" created by a runaway growth in the system due to the very high rate of coalescence of very large particles. These particles, also referred to as "giant particles" [START_REF] Aldous | Deterministic and stochastic model for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists[END_REF] are interpreted in the physics literature as a different macroscopic phase, called a gel, and its occurrence is called the sol-gel transition or gelation transition. The earliest time T g ≥ 0 after which mass conservation no longer holds is called the gelling time or gelation time. 
Since the works by Ball & Carr [START_REF] Ball | The discrete coagulation-fragmentation equations: Existence, uniqueness and density conservation[END_REF] and Stewart [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF], several articles have been devoted to the existence and uniqueness of solutions to the SCE for coagulation kernels which are bounded for small volumes and unbounded for large volumes, as well as to the mass conservation and gelation phenomenon, see [START_REF] Dubovskii | Existence, uniqueness and mass conservation for the coagulationfragmentation equation[END_REF][START_REF] Escobedo | Gelation in coagulation and fragmentation models[END_REF][START_REF] Escobedo | Gelation and mass conservation in coagulation-fragmentation models[END_REF][START_REF] Giri | Weak solutions to the continuous coagulation with multiple fragmentation[END_REF][START_REF] Ph | From the discrete to the continuous coagulation-fragmentation equations[END_REF][START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF][START_REF] Stewart | A uniqueness theorem for the coagulation-fragmentation equation[END_REF], see also the survey papers [START_REF] Aldous | Deterministic and stochastic model for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists[END_REF][START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF][START_REF] Ph | On coalescence equations and related models[END_REF] and the references therein. 
However, to the best of our knowledge, there are fewer articles in which existence and uniqueness of solutions to the SCE with singular coagulation rates have been studied; see [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF][START_REF] Escobedo | On self-similarity and stationary problem for fragmentation and coagulation models[END_REF][START_REF] Escobedo | Dust and self-similarity for the Smoluchowski coagulation equation[END_REF][START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF]. In [START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF], Norris investigates the existence and uniqueness of solutions to the SCE locally in time when the coagulation kernel satisfies

Ψ(ζ, η) ≤ φ(ζ)φ(η), (ζ, η) ∈ (0, ∞)², (1.6)

for some sublinear function φ : (0, ∞) → [0, ∞), that is, φ enjoys the property φ(aζ) ≤ aφ(ζ) for all ζ ∈ (0, ∞) and a ≥ 1, and the initial condition g^{in} belongs to L¹((0, ∞); φ(ζ)² dζ). Mass conservation is also shown as soon as there is ε > 0 such that φ(ζ) ≥ εζ for all ζ ∈ (0, ∞). In [START_REF] Escobedo | Dust and self-similarity for the Smoluchowski coagulation equation[END_REF][START_REF] Escobedo | On self-similarity and stationary problem for fragmentation and coagulation models[END_REF], global existence, uniqueness, and mass conservation are established for coagulation rates of the form Ψ(ζ, η) = ζ^{µ₁} η^{µ₂} + ζ^{µ₂} η^{µ₁} with -1 ≤ µ₁ ≤ µ₂ ≤ 1, µ₁ + µ₂ ∈ [0, 2], and (µ₁, µ₂) ≠ (0, 1).
Recently, global existence of weak solutions to the SCE for coagulation kernels satisfying

Ψ(ζ, η) ≤ k* (1 + ζ + η)^λ (ζη)^{-σ}, (ζ, η) ∈ (0, ∞)², with σ ∈ [0, 1/2], λ - σ ∈ [0, 1), and k* > 0,

is obtained in [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF] and further extended in [START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF] to the broader class of coagulation kernels

Ψ(ζ, η) ≤ k* (1 + ζ)^λ (1 + η)^λ (ζη)^{-σ}, (ζ, η) ∈ (0, ∞)², (1.7)

with σ ≥ 0, λ - σ ∈ [0, 1), and k* > 0. In [START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF], multiple fragmentation is also included and uniqueness is shown for the following restricted class of coagulation kernels:

Ψ(ζ, η)² ≤ k* (ζ^{-σ} + ζ^{λ-σ})(η^{-σ} + η^{λ-σ}), (ζ, η) ∈ (0, ∞)²,

where σ ≥ 0 and λ - σ ∈ [0, 1/2]. The main aim of this article is to extend and complete the previous results in two directions. We actually consider coagulation kernels satisfying the growth condition (1.6) for the non-negative function

φ_β(ζ) := max{ζ^{-β}, ζ}, ζ ∈ (0, ∞),

and prove the existence of a global mass-conserving solution of the SCE (1.1)-(1.2) with initial conditions in L¹((0, ∞); (ζ^{-2β} + ζ)dζ), thereby removing the finiteness of the second moment required to apply the existence result of [START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF] and relaxing the assumption λ < σ + 1 used in [START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF] for coagulation kernels satisfying (1.7).
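The function φ_β above is indeed sublinear in the sense required by (1.6); a quick numerical check, in which the value of β is an illustrative choice:

```python
import numpy as np

# Check that phi_beta(z) = max(z^(-beta), z) satisfies
# phi_beta(a z) <= a phi_beta(z) for all a >= 1 (sublinearity in (1.6)).
beta = 0.4
phi_beta = lambda z: np.maximum(z ** (-beta), z)

rng = np.random.default_rng(1)
z = rng.uniform(1e-6, 1e3, 5000)
a = rng.uniform(1.0, 1e3, 5000)
sublinear = bool(np.all(phi_beta(a * z) <= a * phi_beta(z) + 1e-12))
```

The restriction a ≥ 1 matters: for a < 1 the singular branch ζ^{-β} grows under scaling, so the inequality fails there.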
Besides this, we show that any weak solution in the sense of Definition 2.2 below is mass-conserving, a feature which was enjoyed by the solution constructed in [START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF] but not investigated in [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF]. An important consequence of this property is that it gives some flexibility in the choice of the method used to construct a weak solution to the SCE (1.1)-(1.2), since the solution will be mass-conserving whatever the approach. Recall that two different approximations of the SCE (1.1) by truncation have been employed in recent years, the so-called conservative and non-conservative approximations, see (4.4) below. While it is expected, and actually verified in several papers, that the conservative approximation leads to a mass-conserving solution to the SCE, a similar conclusion is not expected when using the non-conservative approximation, which has rather been designed to study the gelation phenomenon, in particular from a numerical point of view [START_REF] Filbet | Numerical simulation of the Smoluchowski coagulation equation[END_REF][START_REF] Bourgade | Convergence of a finite volume scheme for coagulation-fragmentation equations[END_REF]. Still, it is by now known that, for the SCE with locally bounded coagulation kernels growing at most linearly at infinity, the non-conservative approximation also allows one to construct mass-conserving solutions [START_REF] Filbet | Mass-conserving solutions and non-conservative approximation to the Smoluchowski coagulation equation[END_REF][START_REF] Barik | A note on mass-conserving solutions to the coagulation-fragmentation equation by using non-conservative approximation[END_REF].
The last outcome of our analysis is that, in our case, the conservative and non-conservative approximations can be handled simultaneously and both lead to a weak solution to the SCE which might not be the same due to the lack of a general uniqueness result but is mass-conserving. We now outline the results of the paper: In the next section, we state precisely our hypotheses on coagulation kernel and on the initial data together with the definition of solutions and the main result. In Section 3, all weak solutions are shown to be mass-conserving. Finally, in the last section, the existence of a weak solution to the SCE (1.1)-(1.2) is obtained by using a weak L 1 compactness method applied to either the non-conservative or the conservative approximations of the SCE. Main result We assume that the coagulation kernel Ψ satisfies the following hypotheses. Hypotheses 2.1. (H1) Ψ is a non-negative measurable function on (0, ∞) × (0, ∞), (H2) There are β > 0 and k > 0 such that 0 ≤ Ψ(ζ, η) = Ψ(η, ζ) ≤ k(ζη) -β , (ζ, η) ∈ (0, 1) 2 , 0 ≤ Ψ(ζ, η) = Ψ(η, ζ) ≤ kηζ -β , (ζ, η) ∈ (0, 1) × (1, ∞), 0 ≤ Ψ(ζ, η) = Ψ(η, ζ) ≤ k(ζ + η), (ζ, η) ∈ (1, ∞) 2 . Observe that (H2) implies that Ψ(ζ, η) ≤ k max ζ -β , ζ max η -β , η , (ζ, η) ∈ (0, ∞) 2 . Let us now mention the following interesting singular coagulation kernels satisfying hypotheses 2.1. (a) Smoluchowski's coagulation kernel [START_REF] Smoluchowski | Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen[END_REF] (with β = 1/3) Ψ(ζ, η) = ζ 1/3 + η 1/3 ζ -1/3 + η -1/3 , (ζ, η) ∈ (0, ∞) 2 . (b) Granulation kernel [16] Ψ(ζ, η) = (ζ + η) θ 1 (ζη) θ 2 , where θ 1 ≤ 1 and θ 2 ≥ 0. (c) Stochastic stirred froths [START_REF] Clark | Stably coalescent stochastic froths[END_REF] Ψ(ζ, η) = (ζη) -β , where β > 0. Before providing the statement of Theorem 2.3, we recall the following definition of weak solutions to the SCE (1.1)-(1.2). We set L 1 -2β,1 (0, ∞) := L 1 ((0, ∞); (ζ -2β + ζ)dζ). Definition 2.2. 
Let T ∈ (0, ∞] and g^in ∈ L¹_{−2β,1}(0, ∞), g^in ≥ 0 a.e. in (0, ∞). A non-negative real-valued function g = g(ζ, t) is a weak solution to equations (1.1)-(1.2) on [0, T) if g ∈ C([0, T); L¹(0, ∞)) ∩ L^∞(0, T; L¹_{−2β,1}(0, ∞)) and satisfies
∫_0^∞ [g(ζ, t) − g^in(ζ)]ω(ζ)dζ = (1/2) ∫_0^t ∫_0^∞ ∫_0^∞ ω̃(ζ, η)Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds, (2.1)
for every t ∈ (0, T) and ω ∈ L^∞(0, ∞), where
ω̃(ζ, η) := ω(ζ + η) − ω(ζ) − ω(η), (ζ, η) ∈ (0, ∞)².
Now, we are in a position to state the main theorem of this paper.
Theorem 2.3. Assume that the coagulation kernel satisfies hypotheses (H1)-(H2) and consider a non-negative initial condition g^in ∈ L¹_{−2β,1}(0, ∞). There exists at least one mass-conserving weak solution g to the SCE (1.1)-(1.2) on [0, ∞), that is, g is a weak solution to (1.1)-(1.2) in the sense of Definition 2.2 satisfying M₁(g)(t) = M₁(g^in) for all t ≥ 0, the total mass M₁(g) being defined in (1.5).
Weak solutions are mass-conserving
In this section, we establish that any weak solution g to (1.1)-(1.2) on [0, T), T ∈ (0, ∞], in the sense of Definition 2.2 is mass-conserving, that is, satisfies
M₁(g)(t) = M₁(g^in), t ≥ 0. (3.1)
To this end, we adapt an argument designed in [2, Section 3] to investigate the same issue for the discrete coagulation-fragmentation equations and show that the behaviour of g for small volumes required in Definition 2.2 allows us to control the possible singularity of Ψ. In order to prove Theorem 3.1, we need the following sequence of lemmas.
Lemma 3.2. Assume that (H1)-(H2) hold. Let g be a weak solution to (1.1)-(1.2) on [0, T). Then, for q ∈ (0, ∞) and t ∈ (0, T),
∫_0^q ζg(ζ, t)dζ − ∫_0^q ζg^in(ζ)dζ = −∫_0^t ∫_0^q ∫_{q−ζ}^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds. (3.2)
Proof. Set ω(ζ) = ζχ_{(0,q)}(ζ) for ζ ∈ (0, ∞) and note that
ω̃(ζ, η) = 0 if ζ + η ∈ (0, q) and ω̃(ζ, η) = −(ζ + η) if ζ + η ≥ q, for (ζ, η) ∈ (0, q)²;
ω̃(ζ, η) = −ζ for (ζ, η) ∈ (0, q) × [q, ∞);
ω̃(ζ, η) = −η for (ζ, η) ∈ [q, ∞) × (0, q);
ω̃(ζ, η) = 0 for (ζ, η) ∈ [q, ∞)².
Inserting the above values of ω into (2.1) and using the symmetry of Ψ, we have q 0 [g(ζ, t) -g in (ζ)]ζdζ = 1 2 t 0 ∞ 0 ∞ 0 ω(ζ, η)Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = - 1 2 t 0 q 0 q q-ζ (ζ + η)Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds - 1 2 t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds - 1 2 t 0 ∞ q q 0 ηΨ(ζ, η)g(ζ, s)g(η, s)dηdζds =- t 0 q 0 q q-ζ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds - t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds, which completes the proof of Lemma 3.2. In order to complete the proof of Theorem 3.1, it is sufficient to show that the right-hand side of (3.2) goes to zero as q → ∞. The first step in that direction is the following result. Lemma 3.3. Assume that (H1)-(H2) hold. Let g be a solution to (1.1)-(1.2) on [0, T ) and consider t ∈ (0, T ). Then (i) ∞ q [g(ζ, t) -g in (ζ)]dζ = - 1 2 t 0 ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds + 1 2 t 0 q 0 q q-ζ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds, (ii) lim q→∞ t 0 q q 0 q q-ζ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζ - ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζ ds = 0. Proof. Set ω(ζ) = χ [q,∞) (ζ) for ζ ∈ (0, ∞) and the corresponding ω is ω(ζ, η) =                0, if ζ + η ∈ (0, q), 1, if ζ + η ∈ [q, ∞), (ζ, η) ∈ (0, q) 2 , 0, if (ζ, η) ∈ (0, q) × [q, ∞), 0, if (ζ, η) ∈ [q, ∞) × (0, q), -1, if (ζ, η) ∈ [q, ∞) 2 . Inserting the above values of ω into (2.1), we obtain Lemma 3.3 (i). Next, we readily infer from the integrability of ζ → ζg(ζ, t) and ζ → ζg in (ζ) and Lebesgue's dominated convergence theorem that lim q→∞ q ∞ q [g(ζ, t) -g in (ζ)]dζ ≤ lim q→∞ ∞ q ζ[g(ζ, t) + g in (ζ)]dζ = 0. Multiplying the identity stated in Lemma 3.3 (i) by q, we deduce from the previous statement that the left-hand side of the thus obtained identity converges to zero as q → ∞. Then so does its right-hand side, which proves Lemma 3.3 (ii). Then, for t ∈ (0, T ), (i) lim q→∞ t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0, and (ii) lim q→∞ q t 0 ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0. Proof. Let q > 1, t ∈ (0, T ), and s ∈ (0, t). 
To prove the first part of Lemma 3.4, we split the integral as follows:
∫_0^q ∫_q^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζ = J₁(q, s) + J₂(q, s), (3.3)
with
J₁(q, s) := ∫_0^1 ∫_q^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζ, J₂(q, s) := ∫_1^q ∫_q^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζ.
On the one hand, it follows from (H2) and Young's inequality that
J₁(q, s) ≤ k ∫_0^1 ∫_q^∞ ζ^{1−β} η g(ζ, s)g(η, s)dηdζ ≤ k ∫_0^∞ ζ^{1−β} g(ζ, s)dζ ∫_q^∞ ηg(η, s)dη ≤ k ‖g(s)‖_{L¹_{−2β,1}(0,∞)} ∫_q^∞ ηg(η, s)dη,
and the integrability properties of g from Definition 2.2 and Lebesgue's dominated convergence theorem entail that
lim_{q→∞} ∫_0^t J₁(q, s)ds = 0.
On the other hand, we infer from (H2) that
J₂(q, s) ≤ k ∫_1^q ∫_q^∞ ζ(ζ + η)g(ζ, s)g(η, s)dηdζ ≤ 2k ∫_1^q ∫_q^∞ ζη g(ζ, s)g(η, s)dηdζ ≤ 2kM₁(g)(s) ∫_q^∞ ηg(η, s)dη,
and we argue as above to conclude that lim_{q→∞} ∫_0^t J₂(q, s)ds = 0. Recalling (3.3), we have proved Lemma 3.4 (i). Similarly, by (H2),
q ∫_q^∞ ∫_q^∞ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζ ≤ k ∫_q^∞ ∫_q^∞ (qζ + qη)g(ζ, s)g(η, s)dηdζ ≤ 2k ∫_q^∞ ∫_q^∞ ζηg(ζ, s)g(η, s)dηdζ ≤ 2kM₁(g)(s) ∫_q^∞ ηg(η, s)dη,
and we use once more the previous argument to obtain Lemma 3.4 (ii).
Now, we are in a position to prove Theorem 3.1.
Proof of Theorem 3.1. Let t ∈ (0, T). From Lemma 3.4 (i), we obtain
lim_{q→∞} ∫_0^t ∫_0^q ∫_q^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0, (3.4)
while Lemma 3.3 (ii) and Lemma 3.4 (ii) imply that
lim_{q→∞} q ∫_0^t ∫_0^q ∫_{q−ζ}^q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0. (3.5)
Since
∫_0^t ∫_0^q ∫_{q−ζ}^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds ≤ q ∫_0^t ∫_0^q ∫_{q−ζ}^q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds + ∫_0^t ∫_0^q ∫_q^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds,
it readily follows from (3.4) and (3.5) that the right-hand side of (3.2) converges to zero as q → ∞. Consequently,
M₁(g)(t) = lim_{q→∞} ∫_0^q ζg(ζ, t)dζ = lim_{q→∞} ∫_0^q ζg^in(ζ)dζ = M₁(g^in).
This completes the proof of Theorem 3.1.
Existence of weak solutions
This section is devoted to the construction of weak solutions to the SCE (1.1)-(1.2) with a non-negative initial condition g^in ∈ L¹_{−2β,1}(0, ∞).
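The mass-conservation property just proved can be illustrated on the one case where the SCE is explicitly solvable: the constant kernel Ψ ≡ 1 with g^in(ζ) = e^{−ζ}, for which g(ζ, t) = 4(t + 2)^{−2} e^{−2ζ/(t+2)} is the classical solution. This kernel is bounded, hence outside the singular class considered here, but it is convenient for a sanity check (our own, not from the paper) of (3.1) and of the weak formulation (2.1) with ω ≡ 1, for which ω(ζ + η) − ω(ζ) − ω(η) = −1.

```python
import numpy as np
from scipy.integrate import quad

def g(z, t):
    """Explicit solution of the SCE for Psi = 1, g_in(z) = exp(-z)."""
    return 4.0 / (t + 2.0) ** 2 * np.exp(-2.0 * z / (t + 2.0))

t = 3.0
# first moment (total mass) at times 0 and t
M1_0 = quad(lambda z: z * g(z, 0.0), 0, np.inf)[0]
M1_t = quad(lambda z: z * g(z, t), 0, np.inf)[0]

# weak formulation (2.1) with omega = 1:
#   int (g(t) - g(0)) dz  ==  -(1/2) int_0^t N(s)^2 ds,  N(s) = int g(s) dz
lhs = quad(lambda z: g(z, t) - g(z, 0.0), 0, np.inf)[0]
N = lambda s: quad(lambda z: g(z, s), 0, np.inf)[0]
rhs = -0.5 * quad(lambda s: N(s) ** 2, 0, t)[0]

print(M1_0, M1_t, lhs, rhs)
```

The first moment stays equal to 1 for all t, while the total number of particles decays like 2/(t + 2), so both sides of (2.1) equal −t/(t + 2).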
It is achieved by a classical compactness technique, the appropriate functional setting being here the space L¹(0, ∞) endowed with its weak topology, first used in the seminal work [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF] and subsequently further developed in [START_REF] Barik | A note on mass-conserving solutions to the coagulation-fragmentation equation by using non-conservative approximation[END_REF][START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF][START_REF] Escobedo | Gelation and mass conservation in coagulation-fragmentation models[END_REF][START_REF] Filbet | Mass-conserving solutions and non-conservative approximation to the Smoluchowski coagulation equation[END_REF][START_REF] Giri | Weak solutions to the continuous coagulation with multiple fragmentation[END_REF][START_REF] Ph | From the discrete to the continuous coagulation-fragmentation equations[END_REF]. Given a non-negative initial condition g^in ∈ L¹_{−2β,1}(0, ∞), the starting point of this approach is the choice of an approximation of the SCE (1.1)-(1.2), which we set here to be
∂g_n(ζ, t)/∂t = B_c(g_n)(ζ, t) − D^θ_{c,n}(g_n)(ζ, t), (ζ, t) ∈ (0, n) × (0, ∞), (4.1)
with truncated initial condition
g_n(ζ, 0) = g^in_n(ζ) := g^in(ζ)χ_{(0,n)}(ζ), ζ ∈ (0, n), (4.2)
where n ≥ 1 is a positive integer, θ ∈ {0, 1},
Ψ^θ_n(ζ, η) := Ψ(ζ, η)χ_{(1/n,n)}(ζ)χ_{(1/n,n)}(η)[1 − θ + θχ_{(0,n)}(ζ + η)] (4.3)
for (ζ, η) ∈ (0, ∞)², and
D^θ_{c,n}(g)(ζ) := ∫_0^{n−θζ} Ψ^θ_n(ζ, η)g(ζ)g(η)dη, ζ ∈ (0, n), (4.4)
the gain term B_c(g)(ζ) being still defined by (1.3) for ζ ∈ (0, n).
The introduction of the additional parameter θ ∈ {0, 1} allows us to handle simultaneously the so-called conservative approximation (θ = 1) and non-conservative approximation (θ = 0) and thereby prove that both approximations allow us to construct weak solutions to the SCE (1.1)-(1.2), a feature which is of interest when no general uniqueness result is available. Note that we also truncate the coagulation for small volumes to guarantee the boundedness of Ψ^θ_n, which is a straightforward consequence of (H2) and (4.3). Thanks to this property, it follows from [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF] (θ = 1) and [START_REF] Filbet | Mass-conserving solutions and non-conservative approximation to the Smoluchowski coagulation equation[END_REF] (θ = 0) that there is a unique non-negative solution g_n ∈ C¹([0, ∞); L¹(0, n)) to (4.1)-(4.2) (we do not indicate the dependence upon θ for notational simplicity) which satisfies
∫_0^n ζg_n(ζ, t)dζ = ∫_0^n ζg^in_n(ζ)dζ − (1 − θ) ∫_0^t ∫_0^n ∫_{n−ζ}^n ζΨ^θ_n(ζ, η)g_n(ζ, s)g_n(η, s)dηdζds (4.5)
for t ≥ 0. The second term on the right-hand side of (4.5) vanishes for θ = 1 and the total mass of g_n remains constant throughout the time evolution, which is the reason why this approximation is called conservative. In contrast, when θ = 0, the total mass of g_n decreases as a function of time. In both cases, it readily follows from (4.5) that
∫_0^n ζg_n(ζ, t)dζ ≤ ∫_0^n ζg^in_n(ζ)dζ ≤ M₁(g^in), t ≥ 0. (4.6)
For further use, we next state the weak formulation of (4.1)-(4.2): for t > 0 and ω ∈ L^∞(0, n), there holds
∫_0^n ω(ζ)[g_n(ζ, t) − g^in_n(ζ)]dζ = (1/2) ∫_0^t ∫_{1/n}^n ∫_{1/n}^n H^θ_{ω,n}(ζ, η)Ψ^θ_n(ζ, η)g_n(ζ, s)g_n(η, s)dηdζds, (4.7)
where
H^θ_{ω,n}(ζ, η) := ω(ζ + η)χ_{(0,n)}(ζ + η) − [ω(ζ) + ω(η)][1 − θ + θχ_{(0,n)}(ζ + η)]
for (ζ, η) ∈ (0, n)².
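The different fate of the total mass under the two truncations can be reproduced on a toy discrete analogue: integer sizes 1, …, n with constant kernel K ≡ 1 and monodisperse initial data (a bounded-kernel caricature of (4.1)-(4.4), sketched by us purely for illustration). For θ = 1, pairs with i + j > n simply do not react and the first moment is a linear invariant preserved exactly by the explicit Euler step; for θ = 0 such pairs still deplete c_i and c_j while the product is discarded, so mass leaks, mimicking the decay in (4.5).

```python
import numpy as np

def truncated_coagulation(n, theta, T, dt):
    """Explicit Euler for the truncated discrete SCE with kernel K = 1.

    theta = 1: conservative truncation (pairs with i + j > n do not react).
    theta = 0: non-conservative truncation (such pairs react, product lost).
    """
    c = np.zeros(n + 1)          # c[j] = number density of size-j clusters
    c[1] = 1.0                   # monodisperse initial condition, mass = 1
    for _ in range(int(round(T / dt))):
        dc = np.zeros(n + 1)
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                rate = c[i] * c[j]
                if i + j <= n:
                    dc[i + j] += 0.5 * rate   # gain of an (i+j)-cluster
                    dc[i] -= 0.5 * rate       # symmetrised loss terms
                    dc[j] -= 0.5 * rate
                elif theta == 0:
                    dc[i] -= 0.5 * rate       # reaction happens, product lost
                    dc[j] -= 0.5 * rate
        c += dt * dc
    return c

n, T, dt = 5, 4.0, 0.002
sizes = np.arange(n + 1)
mass_cons = sizes @ truncated_coagulation(n, 1, T, dt)
mass_noncons = sizes @ truncated_coagulation(n, 0, T, dt)
print(mass_cons, mass_noncons)
```

With these (arbitrary) parameters the conservative run keeps the mass at 1 up to rounding, while the non-conservative run visibly loses mass, exactly as (4.5) predicts.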
In order to prove Theorem 2.3, we shall show the convergence (with respect to an appropriate topology) of a subsequence of (g n ) n≥1 towards a weak solution to (1.1)-(1.2). For that purpose, we now derive several estimates and first recall that, since g in ∈ L 1 -2β,1 (0, ∞), a refined version of de la Vallée-Poussin theorem, see [START_REF] Châu-Hoàn | Etude de la classe des opérateurs m-accrétifs de L 1 (Ω) et accrétifs dans L ∞ (Ω)[END_REF] or [START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF]Theorem 8], guarantees that there exist two non-negative and convex functions σ 1 and σ 2 in C 2 ([0, ∞)) such that σ ′ 1 and σ ′ 2 are concave, σ i (0) = σ ′ i (0) = 0, lim x→∞ σ i (x) x = ∞, i = 1, 2, (4.8) and I 1 := ∞ 0 σ 1 (ζ)g in (ζ)dζ < ∞, and I 2 := ∞ 0 σ 2 ζ -β g in (ζ) dζ < ∞. (4.9) Let us state the following properties of the above defined functions σ 1 and σ 2 which are required to prove Theorem 2.3. Lemma 4.1. For (x, y) ∈ (0, ∞) 2 , there holds (i) σ 2 (x) ≤ xσ ′ 2 (x) ≤ 2σ 2 (x), (ii) xσ ′ 2 (y) ≤ σ 2 (x) + σ 2 (y), and (iii) 0 ≤ σ 1 (x + y) -σ 1 (x) -σ 1 (y) ≤ 2 xσ 1 (y) + yσ 1 (x) x + y . Proof. A proof of the statements (i) and (iii) may be found in [START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF]Proposition 14] while (ii) can easily be deduced from (i) and the convexity of σ 2 . We recall that throughout this section, the coagulation kernel Ψ is assumed to satisfy (H1)-(H2) and g in is a non-negative function in L 1 -2β,1 (0, ∞). Moment estimates We begin with a uniform bound in L 1 -2β,1 (0, ∞). Lemma 4.2. There exists a positive constant B > 0 depending only on g in such that, for t ≥ 0, n 0 ζ + ζ -2β g n (ζ, t)dζ ≤ B. Proof. Let δ ∈ (0, 1) and take ω(ζ) = (ζ + δ) -2β , ζ ∈ (0, n), in (4.7) . 
With this choice of ω,
H^θ_{ω,n}(ζ, η) ≤ [(ζ + η + δ)^{−2β} − (ζ + δ)^{−2β} − (η + δ)^{−2β}]χ_{(0,n)}(ζ + η) ≤ 0
for all (ζ, η) ∈ (0, n)², so that (4.7) entails that, for t ≥ 0,
∫_0^n (ζ + δ)^{−2β} g_n(ζ, t)dζ ≤ ∫_0^n (ζ + δ)^{−2β} g^in_n(ζ)dζ ≤ ∫_0^∞ ζ^{−2β} g^in(ζ)dζ.
We then let δ → 0 in the previous inequality and deduce from Fatou's lemma that
∫_0^n ζ^{−2β} g_n(ζ, t)dζ ≤ ∫_0^∞ ζ^{−2β} g^in(ζ)dζ, t ≥ 0.
Combining the previous estimate with (4.6) gives Lemma 4.2 with B := ‖g^in‖_{L¹_{−2β,1}(0,∞)}.
We next turn to the control of the tail behaviour of g_n for large volumes, a step which is instrumental in the proof of the convergence of each integral on the right-hand side of (4.1) to its respective limit on the right-hand side of (1.1).
Lemma 4.3. For T > 0, there is a positive constant Γ(T) depending on k, σ₁, g^in, and T such that
(i) sup_{t∈[0,T]} ∫_0^n σ₁(ζ)g_n(ζ, t)dζ ≤ Γ(T), and
(ii) (1 − θ) ∫_0^T ∫_1^n ∫_1^n σ₁(ζ)χ_{[n,∞)}(ζ + η)Ψ(ζ, η)g_n(ζ, s)g_n(η, s)dηdζds ≤ Γ(T).
Proof. Let T > 0 and t ∈ (0, T). We take ω(ζ) = σ₁(ζ), ζ ∈ (0, n), in (4.7) and obtain
∫_0^n σ₁(ζ)[g_n(ζ, t) − g^in_n(ζ)]dζ = (1/2) ∫_0^t ∫_{1/n}^n ∫_{1/n}^n σ̃₁(ζ, η)χ_{(0,n)}(ζ + η)Ψ(ζ, η)g_n(ζ, s)g_n(η, s)dηdζds − ((1 − θ)/2) ∫_0^t ∫_{1/n}^n ∫_{1/n}^n [σ₁(ζ) + σ₁(η)]χ_{[n,∞)}(ζ + η)Ψ(ζ, η)g_n(ζ, s)g_n(η, s)dηdζds,
recalling that σ̃₁(ζ, η) = σ₁(ζ + η) − σ₁(ζ) − σ₁(η). Hence, using (H2) and Lemma 4.1,
∫_0^n σ₁(ζ)[g_n(ζ, t) − g^in_n(ζ)]dζ ≤ (k/2) Σ_{i=1}^4 J_{i,n}(t) − (1 − θ)R_n(t),
with
J_{1,n}(t) := ∫_0^t ∫_0^1 ∫_0^1 σ̃₁(ζ, η)(ζη)^{−β} g_n(ζ, s)g_n(η, s)dηdζds,
J_{2,n}(t) := ∫_0^t ∫_0^1 ∫_1^n σ̃₁(ζ, η)ζ^{−β}η g_n(ζ, s)g_n(η, s)dηdζds.
Owing to the concavity of σ₁′ and the property σ₁(0) = 0, there holds
σ̃₁(ζ, η) = ∫_0^ζ ∫_0^η σ₁″(x + y)dydx ≤ σ₁″(0)ζη, (ζ, η) ∈ (0, ∞)². (4.10)
By (4.10), Lemma 4.2, and Young's inequality,
J_{1,n}(t) ≤ σ₁″(0) ∫_0^t ∫_0^1 ∫_0^1 ζ^{1−β} η^{−β} g_n(ζ, s)g_n(η, s)dηdζds ≤ σ₁″(0) ∫_0^t (∫_0^1 (ζ + ζ^{−2β})g_n(ζ, s)dζ)² ds ≤ σ₁″(0)B²t.
Next, Lemma 4.1 (iii), Lemma 4.2, and Young's inequality give J 2,n (t) = J 3,n (t) ≤ 2 t 0 1 0 n 1 ζσ 1 (η) + ησ 1 (ζ) ζ + η ζ -β ηg n (ζ, s)g n (η, s)dηdζds ≤ 2 t 0 1 0 n 1 ζ 1-β σ 1 (η) + σ 1 (1)ζ -β η g n (ζ, s)g n (η, s)dηdζds ≤ 2 t 0 1 0 ζ + ζ -2β g n (ζ, s)dζ n 1 σ 1 (η)g n (η, s)dη ds + σ 1 (1) t 0 1 0 ζ + ζ -2β g n (ζ, s)dζ n 1 ηg n (η, s)dη ds ≤ 2σ 1 (1)B 2 t + 2B t 0 n 0 σ 1 (η)g n (η, s)dηds, and J 4,n (t) ≤ 2 t 0 n 1 n 1 (ησ 1 (ζ) + ζσ 1 (η)) g n (ζ, s)g n (η, s)dηdζds ≤ 4B t 0 n 0 σ 1 (η)g n (η, s)dηds. Gathering the previous estimates, we end up with n 0 σ 1 (ζ)[g n (ζ, t) -g in n (ζ)]dζ ≤ k σ ′′ 1 (0) 2 + 2σ 1 (1) B 2 t + 4kB t 0 n 0 σ 1 (η)g n (η, s)dηds -(1 -θ)R n (t), and we infer from Gronwall's lemma and (4.9) that n 0 σ 1 (ζ)g n (ζ, t)dζ + (1 -θ)R n (t) ≤ e 4kBt n 0 σ 1 (ζ)g in n (ζ)dζ + σ ′′ 1 (0) 8 + σ 1 (1) 2 Be 4kBt ≤ I + σ ′′ 1 (0) + σ 1 (1) B e 4kBt . This completes the proof of Lemma 4.3. Uniform integrability Next, our aim being to apply Dunford-Pettis' theorem, we have to prevent concentration of the sequence (g n ) n≥1 on sets of arbitrary small measure. For that purpose, we need to show the following result. Lemma 4.4. For any T > 0 and λ > 0, there is a positive constant L 1 (λ, T ) depending only on k, σ 2 , g in , λ, and T such that sup t∈[0,T ] λ 0 σ 2 ζ -β g n (ζ, t) dζ ≤ L 1 (λ, T ). Proof. For (ζ, t) ∈ (0, n) × (0, ∞), we set u n (ζ, t) := ζ -β g n (ζ, t). Let λ ∈ (1, n) , T > 0, and t ∈ (0, T ). Using Leibniz's rule, Fubini's theorem, and (4.1), we obtain d dt λ 0 σ 2 (u n (ζ, t))dζ ≤ 1 2 λ 0 λ-η 0 σ 2 ′ (u n (ζ + η, t))(ζ + η) -β Ψ θ n (ζ, η)g n (ζ, t)g n (η, t)dζdη. (4.11) It also follows from (H2) that Ψ θ n (ζ, η) ≤ Ψ(ζ, η) ≤ 2kλ 1+2β (ζη) -β , (ζ, η) ∈ (0, λ) 2 . 
∫_0^λ ζ^{−β}|g_n(ζ, t₂) − g_n(ζ, t₁)|dζ ≤ ∫_{t₁}^{t₂} ∫_0^λ ζ^{−β}[B_c(g_n)(ζ, s) + D^θ_{c,n}(g_n)(ζ, s)]dζds ≤ [kλ^{1+3β}B² + 2kB²(1 + λ^{1+β})λ^β](t₂ − t₁),
which gives Lemma 4.5 with L₂(λ) := kλ^{1+3β}B² + 2kB²(1 + λ^{1+β})λ^β and completes the proof.
(4.12) We then infer from (4.11), (4.12), Lemma 4.1 (ii) and Lemma 4.2 that d dt λ 0 σ 2 (u n (ζ, t))dζ ≤kλ 1+2β λ 0 λ-η 0 σ ′ 2 (u n (ζ + η, t))(ζ + η) -β u n (ζ, t)u n (η, t)dζdη ≤kλ 1+2β λ 0 λ-η 0 η -β [σ 2 (u n (ζ + η, t)) + σ 2 (u n (ζ, t))] u n (η, t)dζdη ≤2kλ 1+2β λ 0 η -2β g n (η, t) λ-η 0 σ 2 (u n (ζ + η, t))dζdη ≤2kλ 1+2β B λ 0 σ 2 (u n (ζ, t))dζ. Then, using Gronwall's lemma, the monotonicity of σ 2 , and (4.9), we obtain λ 0 σ 2 (ζ -β g n (ζ, t))dζ ≤ L 1 (λ, T ), where L 1 (λ, T ) := I 2 e 2kλ 1+2β BT , and the proof is complete. Time equicontinuity The outcome of the previous sections settles the (weak) compactness issue with respect to the volume variable. We now turn to the time variable. Lemma 4.5. Let t 2 ≥ t 1 ≥ 0 and λ ∈ (1, n). There is a positive constant L 2 (λ) depending only on k, g in , and λ such that λ 0 ζ -β |g n (ζ, t 2 ) -g n (ζ, t 1 )|dζ ≤ L 2 (λ)(t 2 -t 1 ). Proof. Let t > 0. On the one hand, by Fubini's theorem, (4.12), and Lemma 4.2, λ 0 ζ -β B c (g n )(ζ, t)dζ ≤ 1 2 λ 0 λ-ζ 0 (ζ + η) -β Ψ(ζ, η)g n (ζ, t)g n (η, t)dηdζ ≤ kλ 1+2β λ 0 λ 0 ζ -β η -2β g n (ζ, t)g n (η, t)dηdζ ≤ kλ 1+3β λ 0 ζ -2β g n (ζ, t)dζ 2 ≤ kλ 1+3β B 2 . On the other hand, since Ψ θ n (ζ, η) ≤ Ψ(ζ, η) ≤ 2kλ β ηζ -β , 0 < ζ < λ < η < n, we infer from (4.12) and Lemma 4.2 that λ 0 ζ -β D θ c,n (g n )(ζ, t)dζ ≤ λ 0 n 0 ζ -β Ψ(ζ, η)g n (ζ, t)g n (η, t)dηdζ ≤ 2kλ 1+2β λ 0 λ 0 ζ -2β η -β g n (ζ, t)g n (η, t)dηdζ + 2kλ β λ 0 n λ ζ -β ηg n (ζ, t)g n (η, t)dηdζ ≤ 2kB 2 (1 + λ 1+β )λ β . Consequently, by (4.1), Convergence We are now in a position to complete the proof of the existence of a weak solution to the SCE (1.1)-(1.2). Proof of Theorem 2.3. For (ζ, t) ∈ (0, n) × (0, ∞), we set u n (ζ, t) := ζ -β g n (ζ, t). Let T > 0 and λ > 1. Owing to the superlinear growth (4.8) of σ 2 at infinity and Lemma 4.4, we infer from Dunford-Pettis' theorem that there is a weakly compact subset K λ,T of L 1 (0, λ) such that (u n (t)) n≥1 lies in K λ,T for all t ∈ [0, T ]. 
Moreover, by Lemma 4.5, (u_n)_{n≥1} is strongly equicontinuous in L¹(0, λ) at all t ∈ (0, T) and thus also weakly equicontinuous in L¹(0, λ) at all t ∈ (0, T). A variant of Arzelà-Ascoli's theorem [26, Theorem 1.3.2] then guarantees that (u_n)_{n≥1} is relatively compact in C_w([0, T]; L¹(0, λ)). This property being valid for all T > 0 and λ > 1, we use a diagonal process to obtain a subsequence of (g_n)_{n≥1} (not relabeled) and a non-negative function g such that
g_n −→ g in C_w([0, T]; L¹(0, λ)) for all T > 0 and λ > 1.
Owing to Lemma 4.3 and the superlinear growth (4.8) of σ₁ at infinity, a by-now classical argument allows us to improve the previous convergence to
g_n −→ g in C_w([0, T]; L¹((0, ∞); (ζ^{−β} + ζ)dζ)). (4.13)
To complete the proof of Theorem 2.3, it remains to show that g is a weak solution to the SCE (1.1)-(1.2) on [0, ∞) in the sense of Definition 2.2. This step is carried out by the classical approach of [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF] with some modifications as in [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF] and [START_REF] Ph | From the discrete to the continuous coagulation-fragmentation equations[END_REF] to handle the convergence of the integrals for small and large volumes, respectively. In particular, on the one hand, the behavior for large volumes is controlled by the estimates of Lemma 4.3 with the help of the superlinear growth (4.8) of σ₁ at infinity and the linear growth (H2) of Ψ. On the other hand, the behavior for small volumes is handled by (H2), Lemma 4.2, and (4.13). Finally, g being a weak solution to (1.1)-(1.2) on [0, ∞) in the sense of Definition 2.2, it is mass-conserving according to Theorem 3.1, which completes the proof of Theorem 2.3.
For the reader's convenience, we finally recall the SCE (1.1)-(1.4), the statements of Theorem 3.1 and Lemma 3.4, and the definitions of J_{3,n}, J_{4,n}, and R_n used in the proof of Lemma 4.3. The SCE reads
∂g(ζ, t)/∂t = B_c(g)(ζ, t) − D_c(g)(ζ, t), (ζ, t) ∈ (0, ∞)², (1.1)
with initial condition
g(ζ, 0) = g^in(ζ) ≥ 0, ζ ∈ (0, ∞), (1.2)
where the operators B_c and D_c are expressed as
B_c(g)(ζ, t) := (1/2) ∫_0^ζ Ψ(ζ − η, η)g(ζ − η, t)g(η, t)dη (1.3)
and
D_c(g)(ζ, t) := ∫_0^∞ Ψ(ζ, η)g(ζ, t)g(η, t)dη. (1.4)
Theorem 3.1. Suppose that (H1)-(H2) hold. Let g be a weak solution to (1.1)-(1.2) on [0, T) for some T ∈ (0, ∞]. Then g satisfies the mass-conserving property (3.1) for all t ∈ (0, T).
Lemma 3.4. Assume that (H1)-(H2) hold and let g be a weak solution to (1.1)-(1.2) on [0, T). Then, for t ∈ (0, T),
(i) lim_{q→∞} ∫_0^t ∫_0^q ∫_q^∞ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0, and
(ii) lim_{q→∞} q ∫_0^t ∫_q^∞ ∫_q^∞ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0.
J_{3,n}(t) := ∫_0^t ∫_1^n ∫_0^1 σ̃₁(ζ, η)ζη^{−β} g_n(ζ, s)g_n(η, s)dηdζds,
J_{4,n}(t) := ∫_0^t ∫_1^n ∫_1^n σ̃₁(ζ, η)(ζ + η)g_n(ζ, s)g_n(η, s)dηdζds,
R_n(t) := ∫_0^t ∫_{1/n}^n ∫_{1/n}^n σ₁(ζ)χ_{[n,∞)}(ζ + η)Ψ(ζ, η)g_n(ζ, s)g_n(η, s)dηdζds.
Acknowledgments
This work was supported by the University Grants Commission (UGC), India, through a Ph.D. fellowship to PKB. AKG would like to thank the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), India, for funding support through the project YSS/2015/001306.
https://hal.science/hal-00111435/file/lopezpamies2004.pdf
Oscar Lopez-Pamies, Pedro Ponte Castañeda
Second-Order Estimates for the Macroscopic Response and Loss of Ellipticity in Porous Rubbers at Large Deformations (2004)
Keywords: homogenization, finite deformations, elastomeric foams, microstructure evolution
The reasons for this result have been linked to the evolution of the microstructure, which, under appropriate loading conditions, can induce geometric softening leading to overall loss of ellipticity. Furthermore, the "second-order" homogenization method has the merit that it recovers the exact evolution of the porosity under a finite-deformation history in the limit of incompressible behavior for the matrix.
Introduction
Oscar Lopez-Pamies Pedro Ponte Castañeda Second-Order Estimates for the Macroscopic Response and Loss of Ellipticity in Porous Rubbers at Large Deformations Keywords: homogenization, finite deformations, elastomeric foams, microstructure evolution . The reasons for this result have been linked to the evolution of the microstructure, which, under appropriate loading conditions, can induce geometric softening leading to overall loss of ellipticity. Furthermore, the "second-order" homogenization method has the merit that it recovers the exact evolution of the porosity under a finite-deformation history in the limit of incompressible behavior for the matrix. Introduction This article is concerned with the use of recently developed homogenization methods to determine the macroscopic behavior of porous elastomers, as well as the associated evolution of their microstructure (e.g., porosity) and the possible development of instabilities, when these materials are subjected to finite deformations. In an early contribution, Blatz and Ko [START_REF] Blatz | Application of finite elastic theory to the deformation of rubbery materials[END_REF] performed various experiments on a polyurethane rubber with a random distribution of voids about 40 μ in diameter and an approximate volume fraction of 50%. The experimental results generated allowed these authors to propose a phenomenological model, known as the Blatz-Ko model, which turns out to give adequate predictions at least for certain types of loadings. A physically appealing property of this constitutive model is that it is found to lose ellipticity at finite deformation histories [START_REF] Knowles | On the ellipticity of the equations of nonlinear elastostatics for a special material[END_REF]. 
This property is in agreement with experimental evidence suggesting that such materials can develop macroscopic shear band instabilities at sufficiently high deformations, which correspond to buckling of the ligaments between the pores at the micro scale. Motivated by this earlier work, Abeyaratne and Triantafyllidis [START_REF] Abeyaratne | An investigation of localization in a porous elastic material using homogenization theory[END_REF] attempted a numerical study of the overall behavior of a nearly incompressible Neo-Hookean matrix with a periodic distribution of cylindrical pores. They made use of the results of homogenization theory for periodic media [START_REF] Sanchez-Palencia | Comportements local et macroscopique d'un type de milieux physiques heterogenes[END_REF], which allow the reduction of the problem of determining the effective behavior of a composite to a numerical calculation on a unit cell. An interesting and important finding in this work was that the homogenized constitutive model for the porous material loses ellipticity, even when the matrix material itself does not. This pioneering work has been generalized and developed further by Triantafyllidis and coworkers [START_REF] Triantafyllidis | On the comparison between microscopic and macroscopic instability mechanisms in a class of fiber-reinforced composites[END_REF][START_REF] Geymonat | Homogenization of nonlinearly elastic materials, microscopic bifurcation and macroscopic loss of rank-one convexity[END_REF], always in the context of periodic hyperelastic composites. One of the main conclusions of this work is that loss of strong ellipticity for the homogenized material can be shown rigorously to correspond to the bifurcation of the composite at wavelengths much larger than the size of the unit cell. 
Furthermore, the overall loss of strong ellipticity (i.e., the possible emergence of shear bands) in the homogenized composite, which is due to the buckling of the actual material at the microstructural level, provides an upper bound for the stable domain of the composite. Although a proper homogenization framework has been available for some time for hyperelastic composites with random microstructures [START_REF] Hill | On constitutive macro-variables for heterogeneous solids at finite strain[END_REF], the application of this framework to random porous elastomers has apparently not yet been attempted, presumably because of the technical difficulties associated with this problem. There are, however, some rigorous estimates for special loadings [START_REF] Hashin | Large isotropic elastic deformation of composites and porous media[END_REF], as well as other estimates based on various types of ad hoc approximations, mostly for low-density foams (see, e.g., [START_REF] Gent | The deformation of foamed elastic materials[END_REF][START_REF] Feng | Nonlinear deformation of elastomeric foams[END_REF][START_REF] Gibson | Cellular Solids[END_REF]). Our proposal here is to make use of the "second-order" homogenization method, originally developed by Ponte Castañeda [START_REF] Castañeda | Second-order homogenization estimates for nonlinear composites incorporating field fluctuations[END_REF] for viscoplastic materials, and extended recently for hyperelastic composites by Lopez-Pamies and Ponte Castañeda [START_REF] Lopez-Pamies | Second-order estimates for the large-deformation response of particle-reinforced rubbers[END_REF][START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF]. 
For comparison purposes, we will also make use of an earlier version of the method due to Ponte Castañeda and Tiberio [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF] (see also [START_REF] Castañeda | Exact second-order estimates for the effective mechanical properties of nonlinear composite materials[END_REF]). The advantage of these methods is that they can be used for any type of composite system, and that they make use of standard estimates for suitably optimized "linear comparison composites" to generate corresponding estimates for the nonlinear hyperelastic composite. These methods have already been used to estimate the behavior of particle-reinforced elastomers [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF][START_REF] Lopez-Pamies | Second-order estimates for the large-deformation response of particle-reinforced rubbers[END_REF][START_REF] Lahellec | Second-order estimate of the macroscopic behavior of periodic hyperelastic composites: Theory and experimental validation[END_REF][START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF], and have been shown to be able to handle the strongly nonlinear constraint of material incompressibility (a constraint on the determinant of the deformation) for these material systems. These encouraging results for particle-reinforced elastomers suggest that the methods can also be used successfully for porous elastomers. This being our first attempt to handle porous elastomers in the context of finite deformations, explicit results will be generated only for a model problem involving aligned two-dimensional pores distributed randomly and isotropically in an (in)compressible, isotropic elastomer.
It is important to emphasize, however, that the method can be applied for general classes of constitutive models in finite elasticity, as well as for very general classes of microstructures. The aim of this first work is to explore the capabilities of the methods in the context of a simple example, albeit one that incorporates the essential features of the problem including strongly nonlinear behavior for the matrix phase, the possible evolution of the microstructure and its implications for the overall stability of the material, as determined by the strong ellipticity condition. Preliminaries on Hyperelastic Composites Consider a material made up of N different (homogeneous) phases distributed randomly in a specimen occupying a volume 0 in the reference configuration. Here, the characteristic length of the inhomogeneities (e.g., voids) is assumed to be much smaller than the size of the specimen and the scale of variation of the applied loading. The constitutive behavior of the phases is characterized by stored-energy functions W (r) (r = 1, . . . , N) that are nonconvex functions of the deformation gradient F. The local energy function of the composite may thus be written as W (X, F) = N r=1 χ (r) (X)W (r) (F), (1) where the functions χ (r) are equal to 1 if the position vector X is inside phase r (i.e., X ∈ (r) 0 ) and zero otherwise. The stored-energy functions of the phases are, of course, assumed to be objective in the sense that W (r) (QF) = W (r) (F) for all proper orthogonal Q and arbitrary deformation gradients F. Making use of the polar decomposition F = RU, where U is the right stretch tensor and R is the rotation tensor, it follows, in particular, that W (r) (F) = W (r) (U). The local or microscopic constitutive relation for the material is then given by S = ∂W ∂F (X, F), (2) where S denotes the first Piola-Kirchhoff stress tensor. Note that sufficient smoothness has been assumed for the W (r) on F. 
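Objectivity is easy to verify numerically for any concrete stored-energy function. The snippet below (our own illustration; the compressible neo-Hookean energy and its constants are arbitrary choices, not taken from the paper) checks both W(QF) = W(F) for a random proper rotation Q and the resulting identity W(F) = W(U) via the polar decomposition F = RU.

```python
import numpy as np

MU, LAM = 1.0, 2.0

def W(F):
    """Compressible neo-Hookean energy (an arbitrary objective example):
    W = (MU/2)(tr(F^T F) - 3 - 2 ln J) + (LAM/2)(ln J)^2, J = det F."""
    J = np.linalg.det(F)
    return (0.5 * MU * (np.trace(F.T @ F) - 3.0 - 2.0 * np.log(J))
            + 0.5 * LAM * np.log(J) ** 2)

def rand_rot(rng):
    """Random proper rotation from the QR factorisation of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0
    return Q

rng = np.random.default_rng(1)
# deformation gradient with guaranteed det F > 0 (material impenetrability)
F = rand_rot(rng) @ np.diag([1.3, 0.8, 1.1]) @ rand_rot(rng)
Q = rand_rot(rng)

# polar decomposition F = R U via the SVD: U is the right stretch tensor
Usvd, s, Vt = np.linalg.svd(F)
U = Vt.T @ np.diag(s) @ Vt

err_rot = abs(W(Q @ F) - W(F))
err_polar = abs(W(U) - W(F))
print(err_rot, err_polar)
```

Both errors sit at floating-point level, since tr(FᵀF) and det F depend on F only through the singular values, which Q leaves unchanged.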
Furthermore, the stored-energy functions W^(r) will be assumed to be such that W^(r)(F) → ∞ as det F → 0+, to ensure the material impenetrability condition: det F(X) > 0 for X in Ω_0. This condition would be automatically satisfied for incompressible materials, where det F is required to be exactly 1. Following Hill [START_REF] Hill | On constitutive macro-variables for heterogeneous solids at finite strain[END_REF] and Hill and Rice [START_REF] Hill | Elastic potentials and the structure of inelastic constitutive laws[END_REF], the global or macroscopic constitutive relation for the composite is defined by

S̄ = ∂W̃/∂F̄, (3)

where S̄ = ⟨S⟩ and F̄ = ⟨F⟩ are the average stress and average deformation gradient, respectively, and

W̃(F̄) = min_{F∈K(F̄)} ⟨W(X, F)⟩ = min_{F∈K(F̄)} ∑_{r=1}^{N} c^(r) ⟨W^(r)(F)⟩^(r) (4)

is the effective stored-energy function of the composite. In the above expressions, the symbols ⟨•⟩ and ⟨•⟩^(r) denote volume averages over the composite (Ω_0) and over the phase r (Ω_0^(r)), respectively, so that the scalars c^(r) = ⟨χ^(r)⟩ represent the (initial) volume fractions of the given phases. Furthermore, K denotes the set of admissible deformation gradients:

K(F̄) = {F | x = χ(X) with F = Grad χ in Ω_0, x = F̄X on ∂Ω_0}. (5)

Note that W̃ physically represents the average elastic energy stored in the composite when subjected to an affine displacement boundary condition. Moreover, from the definition (4) and the objectivity of the W^(r), it can be shown that W̃ is objective, namely, W̃(F̄) = W̃(Ū). Here, Ū represents the macroscopic right stretch tensor associated with the macroscopic polar decomposition F̄ = R̄Ū, with R̄ denoting the macroscopic rotation tensor.
In turn, the objectivity of W̃ implies that the macroscopic rotational balance equation S̄ F̄^T = F̄ S̄^T is automatically satisfied [START_REF] Hill | On constitutive macro-variables for heterogeneous solids at finite strain[END_REF][START_REF] Ogden | Extremun principles in non-linear elasticity and their application to composites -I Theory[END_REF]. It is further recalled that, since W cannot be convex, suitable hypotheses are needed to ensure the existence of minimizers in (4). Ball [START_REF] Ball | Convexity conditions and existence theorems in nonlinear elasticity[END_REF] has provided sufficient conditions for the existence of such minimizers, including the hypothesis of polyconvexity of W, together with suitable growth conditions for W. More mathematically precise definitions of the effective energy W̃ are available at least for periodic microstructures [START_REF] Braides | Homogenization of some almost periodic coercive functionals[END_REF][START_REF] Müller | Homogenization of nonconvex integral functionals and cellular elastic materials[END_REF]. Such definitions generalize the classical definition of the effective energy for periodic media with convex energies [START_REF] Marcellini | Periodic solutions and homogenization of nonlinear variational problems[END_REF], by allowing for possible interactions between unit cells, essentially by taking an infimum over the set of all possible combinations of unit cells. Physically, this corresponds to accounting for the possible development of instabilities in the composite at sufficiently high deformation. In this connection, it is important to remark that Geymonat et al.
[START_REF] Geymonat | Homogenization of nonlinearly elastic materials, microscopic bifurcation and macroscopic loss of rank-one convexity[END_REF], following earlier work by Triantafyllidis and Maker [START_REF] Triantafyllidis | On the comparison between microscopic and macroscopic instability mechanisms in a class of fiber-reinforced composites[END_REF], have shown rigorously that loss of strong ellipticity in the homogenized behavior of the composite corresponds to the development of long-wavelength instabilities in the form of localized shear bands. Furthermore, the "failure surfaces" defined by the macroscopic loss of strong ellipticity condition are actually upper bounds for the onset of other types of instabilities. The objective of this work is to obtain estimates for the effective stored-energy function W̃ of the above-defined hyperelastic composites subjected to finite deformations, with particular interest in the special case of porous elastomers, where the second phase is vacuous. Motivated by the earlier work of Abeyaratne and Triantafyllidis [START_REF] Abeyaratne | An investigation of localization in a porous elastic material using homogenization theory[END_REF], a second objective will be to investigate whether or not the homogenized behavior of the porous material can lose strong ellipticity, even when the local behavior of the matrix phase is assumed to be strongly elliptic itself. The determination of the effective stored-energy function of a porous elastomer is a very difficult problem, because it amounts to solving a set of highly nonlinear partial differential equations with random coefficients. As a consequence, there are very few analytical estimates for W̃. Ogden [START_REF] Ogden | Extremun principles in non-linear elasticity and their application to composites -I Theory[END_REF] noted that use of the trial field F = F̄ in the definition (4) for W̃ leads to an upper bound analogous to the well-known Voigt upper bound in linear elasticity.
Due to the well-known difficulties associated with the definition of a complementary energy principle in finite elasticity, the equivalent of a Reuss-type bound in linear elasticity is not straightforward. A non-trivial lower bound that used only information on the phase volume fractions was proposed by Ponte Castañeda [START_REF] Castañeda | The overall constitutive behaviour of nonlinearly elastic composites[END_REF], exploiting the polyconvexity hypothesis. Our proposal will be to use the "second-order" homogenization method for hyperelastic composites developed by Lopez-Pamies and Ponte Castañeda [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF]. This is a general homogenization technique, exact to second order in the heterogeneity contrast, which has the capability to incorporate statistical information beyond the phase volume fraction and that can be applied to large classes of hyperelastic composites including reinforced and porous rubbers. For completeness, a brief outline of the second-order method is included in the following section. For a more detailed description of the theory, the reader is referred to [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF]. 
Outline of the Second-Order Variational Methods

The key concept behind the second-order homogenization method for hyperelastic composites developed by Lopez-Pamies and Ponte Castañeda [START_REF] Lopez-Pamies | Second-order estimates for the large-deformation response of particle-reinforced rubbers[END_REF][START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF], as well as the earlier version of the method [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF], is the introduction of a fictitious "linear comparison composite" (LCC) with the same microstructure as the nonlinear composite (i.e., the same χ^(r)). Thus, the stored-energy function of the LCC can be formally expressed as

W_T(X, F) = ∑_{r=1}^{N} χ^(r)(X) W_T^(r)(F), (6)

where the stored-energy functions of the phases W_T^(r) are given by the second-order Taylor approximations of the nonlinear stored-energy functions W^(r) about certain uniform reference deformation gradients F^(r):

W_T^(r)(F) = W^(r)(F^(r)) + S^(r)(F^(r)) • (F − F^(r)) + ½ (F − F^(r)) • L_0^(r) (F − F^(r)). (7)

In this relation, the L_0^(r) are fourth-order constant tensors, which, together with the F^(r), are left to be specified later, and use has been made of the notation:

S^(r)(F) ≐ ∂W^(r)(F)/∂F. (8)

Note further that the constitutive relations of the phases in the LCC are given by the expressions:

S = L_0^(r) F + Ŝ^(r), (9)

where Ŝ^(r) = S^(r)(F^(r)) − L_0^(r) F^(r) is a fixed polarization stress for fixed L_0^(r) and F^(r).
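The role of the Taylor expansion (7) can be checked numerically: for a smooth W, the LCC phase energy W_T built from S(F^(r)) and a tangent-type choice of L_0 should agree with W up to a remainder that is third order in |F − F^(r)|, so halving the increment should cut the error by roughly a factor of eight. The matrix energy, reference state, and increment direction in the sketch below are all illustrative assumptions.

```python
import numpy as np

# Illustrative compressible neo-Hookean matrix energy (an assumption, not the
# paper's specific model): W(F) = mu/2 (tr(F^T F) - 2 - 2 ln J) + kappa/2 (J - 1)^2
mu, kappa = 1.0, 10.0

def W(F):
    J = np.linalg.det(F)
    return 0.5 * mu * (np.trace(F.T @ F) - 2.0 - 2.0 * np.log(J)) + 0.5 * kappa * (J - 1.0) ** 2

def dW(F, h=1e-5):
    # S(F) = dW/dF, eq. (8), by central finite differences
    S = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2)); E[i, j] = h
            S[i, j] = (W(F + E) - W(F - E)) / (2.0 * h)
    return S

def d2W(F, h=1e-4):
    # tangent moduli L_t = d2W/dFdF, i.e., the tangent choice of L_0 in eq. (18)
    L = np.zeros((2, 2, 2, 2))
    for k in range(2):
        for l in range(2):
            E = np.zeros((2, 2)); E[k, l] = h
            L[:, :, k, l] = (dW(F + E) - dW(F - E)) / (2.0 * h)
    return L

def W_T(F, Fr, L0):
    # second-order Taylor approximation about Fr, eq. (7)
    D = F - Fr
    return W(Fr) + np.sum(dW(Fr) * D) + 0.5 * np.sum(D * np.einsum('ijkl,kl->ij', L0, D))

Fr = np.array([[1.2, 0.1], [0.0, 0.95]])   # illustrative reference deformation
L0 = d2W(Fr)
D = np.array([[0.3, -0.2], [0.1, 0.4]])    # illustrative increment direction
err = lambda t: abs(W(Fr + t * D) - W_T(Fr + t * D, Fr, L0))
ratio = err(0.1) / err(0.05)               # cubic remainder: roughly 8
```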
Thus, the LCC can be thought of as a linear "thermoelastic" composite, but in a generalized sense, since the relevant "stress" and "strain" measures are not the standard linearized measures of stress and strain [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF]. While it is well known that such a material model is not suitable to describe the constitutive behavior of actual elastomers at finite strain, the LCC is only an intermediate construction that will allow the simplification of the original fully nonlinear problem, as described by (4) with (1) and (2). Next, "corrector" functions measuring the error made in the approximation of the stored-energy functions W^(r) of the nonlinear composite by the corresponding stored-energy functions W_T^(r) of the LCC are introduced such that

V^(r)(F^(r), L_0^(r)) = stat_{F̂^(r)} [W^(r)(F̂^(r)) − W_T^(r)(F̂^(r))], (10)

where the optimization operation stat with respect to a variable means differentiation with respect to that variable and setting the result equal to zero to generate an expression for the optimal value of the variable. Then, it follows that the local stored-energy functions of the phases of the nonlinear composite may be approximated as

W^(r)(F) ≈ W_T^(r)(F) + V^(r)(F^(r), L_0^(r)), (11)

and therefore that the effective stored-energy function W̃ of the nonlinear composite may be correspondingly approximated as

W̃(F̄) ≈ W̃_T(F̄; F^(s), L_0^(s)) + ∑_{r=1}^{N} c^(r) V^(r)(F^(r), L_0^(r)), (12)

where

W̃_T(F̄; F^(s), L_0^(s)) = min_{F∈K} ⟨W_T(X, F)⟩ = min_{F∈K} ∑_{r=1}^{N} c^(r) ⟨W_T^(r)(F)⟩^(r) (13)

is the effective energy function associated with the LCC. In this last expression, it is implicitly assumed that the variables L_0^(s) remain strongly elliptic, at least up to the point where macroscopic instabilities could develop.
These macroscopic instabilities correspond to the loss of strong ellipticity of the effective constitutive relation for the nonlinear composite; the use of the approximation (12) could become questionable beyond the onset of these instabilities. The approximation (12) is valid for any reference deformations F^(r) and moduli L_0^(r), which naturally suggests its optimization with respect to these variables. In fact, the solution of this optimization problem for the variables F^(r) and L_0^(r) in the estimate (12) for W̃ depends on the solution of the optimization problems (10) defining the "corrector" functions V^(r). Thus, stationarity with respect to the variables F̂^(r) in (10) leads to the conditions:

S^(r)(F̂^(r)) − S^(r)(F^(r)) = L_0^(r) (F̂^(r) − F^(r)), (14)

which correspond to the linearization of the nonlinear constitutive relation for the hyperelastic material in the phases. Now, it is observed that relations (14) have several possible solutions. For example, these conditions could be identically satisfied by taking the F̂^(r) equal to the F^(r), which would lead to the so-called "tangent" approximation. On the other hand, solutions are also possible such that F̂^(r) ≠ F^(r) and V^(r) ≠ 0, which leads to a new type of linearization which has been referred to as a "generalized secant" approximation [START_REF] Castañeda | Second-order homogenization estimates for nonlinear composites incorporating field fluctuations[END_REF].
Depending on the type of linearization chosen, the general expression (12) can deliver, as discussed in the next sections, various types of second-order estimates. It is recalled that the reason for calling these estimates "second-order" is related to the fact that they are exact to second order in the limit of small heterogeneity contrast among the phases [START_REF] Suquet | Small-contrast perturbation expansions for the effective properties of nonlinear composites[END_REF].

TANGENT SECOND-ORDER ESTIMATES (VERSION 1)

Considering the limit as F̂^(r) tends to F^(r) in (14) makes the functions V^(r) vanish identically. Then, optimality of the reference deformations F^(r) in (12) leads to the prescriptions:

F̂^(r) = F^(r) ≐ F̄^(r), (15)

where F̄^(r) has been defined as the average deformation field over phase r in the LCC, defined by relations (6) and (7). As shown by Ponte Castañeda and Tiberio [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF], under condition (15), the general second-order estimate (12) simplifies to

W̃(F̄) = ∑_{r=1}^{N} c^(r) [W^(r)(F̄^(r)) + ½ S^(r)(F̄^(r)) • (F̄ − F̄^(r))]. (16)

A key disadvantage of the estimates (16) is that, by setting F^(r) = F̄^(r), the optimality conditions for the moduli L_0^(r) in expression (12), which for this case specialize to

⟨(F − F̄^(r)) ⊗ (F − F̄^(r))⟩^(r) = 0, (17)

where it is recalled that F = F(X) is the solution of the LCC problem (13), cannot be satisfied in general [START_REF] Castañeda | Variational second-order estimates for nonlinear composites[END_REF].
Instead, we implement the physically motivated prescription:

L_0^(r) = L_t^(r) ≐ ∂²W^(r)/∂F² (F^(r)), (18)

which, as already mentioned, is consistent with the limit F̂^(r) → F^(r) in (14). Note that the estimate (16) then depends exclusively on the average fields F̄^(r) over the phases of the linear comparison composite.

SECOND-ORDER ESTIMATES WITH FLUCTUATIONS (VERSION 2)

Considering the more elaborate "generalized secant" linearization scheme, where F̂^(r) ≠ F^(r) and V^(r) ≠ 0, and optimizing the resulting expression (12) with respect to the moduli L_0^(r), leads to the following conditions:

⟨(F − F^(r)) ⊗ (F − F^(r))⟩^(r) = (F̂^(r) − F^(r)) ⊗ (F̂^(r) − F^(r)). (19)

Unlike the corresponding conditions (17) associated with the tangent second-order estimate (Version 1), the new conditions (19) are more flexible in that they allow fluctuations of the fields in the phases of the linear comparison composite. The question then becomes what is the optimal choice of the reference deformations F^(r). Unfortunately, this is a difficult question that has not yet been completely resolved [START_REF] Castañeda | Second-order homogenization estimates for nonlinear composites incorporating field fluctuations[END_REF]. For this reason, two different approximate choices, neither of which is expected to be optimal, will be considered here. The first is to identify the reference deformations with the phase averages of the deformation gradients in the LCC: F^(r) = F̄^(r).
(20)

This prescription was used by Lopez-Pamies and Ponte Castañeda [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF] and has the advantage that it makes stationary (with respect to the F^(r)) the stored energy W̃_T of the LCC. Furthermore, this prescription can be shown [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF] to lead to the following second-order estimate:

W̃(F̄) = ∑_{r=1}^{N} c^(r) [W^(r)(F̂^(r)) − S^(r)(F̄^(r)) • (F̂^(r) − F̄^(r))], (21)

where the phase moduli tensors L_0^(r) in the LCC are determined by the conditions:

S^(r)(F̂^(r)) − S^(r)(F̄^(r)) = L_0^(r) (F̂^(r) − F̄^(r)), (22)

and the variables F̂^(r) by the conditions:

(F̂^(r) − F̄^(r)) ⊗ (F̂^(r) − F̄^(r)) = ⟨(F − F̄^(r)) ⊗ (F − F̄^(r))⟩^(r) ≐ C_F^(r). (23)

In this last relation, it is useful to note that the phase fluctuation covariance tensors C_F^(r) may be estimated via

C_F^(r) = (2/c^(r)) ∂W̃_T/∂L_0^(r) |_{F^(r)=F̄^(r)}. (24)

It should be emphasized that, because the phase fluctuation tensors C_F^(r) are not of rank 1, it is not possible to satisfy conditions (19) in full generality. Instead, as explained later in more detail, only appropriate traces of these expressions should be enforced [START_REF] Castañeda | Second-order homogenization estimates for nonlinear composites incorporating field fluctuations[END_REF].

SECOND-ORDER ESTIMATES WITH FLUCTUATIONS: A SIMPLIFIED VERSION (VERSION 3)

An alternative choice for the reference deformation F^(r), which is also probably not optimal, is provided by F^(r) = F̄.
(25)

This prescription has the advantage of being simpler than (20), while still keeping dependence on the field fluctuations. Using condition (25), together with the appropriate specialization of (19), the general second-order estimate (12) can be shown to reduce to

W̃(F̄) = ∑_{r=1}^{N} c^(r) [W^(r)(F̂^(r)) − S^(r)(F̄) • (F̂^(r) − F̄^(r))], (26)

where now the phase moduli tensors L_0^(r) in the LCC are determined by the conditions:

S^(r)(F̂^(r)) − S^(r)(F̄) = L_0^(r) (F̂^(r) − F̄), (27)

and the variables F̂^(r) by (appropriate traces of) the conditions:

(F̂^(r) − F̄) ⊗ (F̂^(r) − F̄) = C_F^(r) + (F̄^(r) − F̄) ⊗ (F̄^(r) − F̄). (28)

It is seen that the second-order estimate (26) depends explicitly on the variables F̄^(r). In addition, as opposed to (16) and like (21), the estimate (26) also depends directly on the variables F̂^(r), which are associated with the fluctuations of the deformation fields in the phases, as specified by relation (28).
All three estimates, (16), (21), and (26), can be shown to be exact to second order in the heterogeneity contrast, provided that the corresponding estimates for the LCC are also taken to be exact to second order in the contrast. For instance, the fact that both the Hashin-Shtrikman (HS) and Self-Consistent (SC) estimates are exact to second order in the contrast for linear composites implies that the corresponding hyperelastic HS and SC estimates for W̃ obtained from (16), (21), and (26) will also be exact to second order in the contrast. However, it should also be recalled [START_REF] Castañeda | Second-order homogenization estimates for nonlinear composites incorporating field fluctuations[END_REF] that the second-order methods exhibit a small "duality gap", which has the implication that the overall stress-strain relation for the nonlinear hyperelastic composite, as generated from equation (3), is not exactly the same as that for the LCC. As a final remark, it is noted that Lopez-Pamies and Ponte Castañeda [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF] have shown, in the context of incompressible elastomers reinforced with rigid particles, that the estimates delivered by (21) are superior to those delivered by (16), which (as previously stated) do not take into account field fluctuations.
In this particular context, the incorporation of fluctuations by the version (21) proved to be crucial in order to recover the correct overall incompressibility constraint. In the present context of porous elastomers, there is no such incompressibility constraint, as the overall behavior of a porous elastomer remains compressible even when the matrix phase is incompressible. Instead, the challenge for porous systems is to predict correctly the evolution of the relevant microstructural variable (the porosity).

Effective Behavior of Porous Elastomers

In this section, the second-order estimates (16) and (26) for the effective stored-energy function W̃ are applied to the special case of two-phase composites consisting of vacuous (i.e., W^(2) = 0) inclusions with given initial volume fraction c^(2) = f_o in a compressible elastomeric matrix with stored-energy function W^(1) = W. Note that for this particular case the average stress in phase 2 is identically zero, so that the average stress in phase 1 is given by S̄^(1) = (1/c^(1)) S̄. It is also emphasized that, by exploiting the objectivity of W̃, only macroscopic pure stretch loading histories (i.e., F̄ = Ū; R̄ = I) need to be considered. Here, general expressions are derived for two types of geometry and distribution of the pores: (i) initially spherical pores distributed randomly and isotropically in the reference configuration, and (ii) aligned cylindrical pores with initially circular cross section distributed randomly and isotropically in the transverse plane of the reference configuration. It is remarked that the former situation corresponds to a statistically isotropic composite, whereas the latter corresponds to a statistically transversely isotropic one. The loading will be assumed to be general three-dimensional in the first case, and in-plane two-dimensional in the second.
Before proceeding with the various second-order estimates, it is useful to recall the classical Voigt upper bound, which depends only on the phase concentrations and follows from the principle of minimum potential energy [START_REF] Ogden | Extremun principles in non-linear elasticity and their application to composites -I Theory[END_REF]. Thus, the specialization of this Voigt bound to porous elastomers with hyperelastic matrix phase W leads to

W̃_V(Ū) = (1 − f_o) W(Ū). (29)

Note that in the limit when the matrix phase is made incompressible, the Voigt upper bound becomes infinite for all loadings except for those with macroscopically isochoric deformations (i.e., det F̄ = J̄ = 1). Although rigorously an upper bound, the Voigt estimate is physically unrealistic, because it would suggest that a porous elastomer with an incompressible matrix phase would be itself incompressible, which is in contradiction with experimental evidence. This spectacular failure of the Voigt bound can be used as motivation for generating the new types of estimates that we propose to develop in this paper. Although they are less rigorous in the sense that they will not be bounds, they will be much more accurate and will give realistic predictions, at least for non-isochoric overall deformations. We conclude this section by noting that the available lower bounds [START_REF] Castañeda | The overall constitutive behaviour of nonlinearly elastic composites[END_REF] vanish identically for the case of porous elastomers, and are therefore also of little practical value in this context.

THE LINEAR COMPARISON COMPOSITE

In order to compute the second-order estimates (16) and (26) for porous rubbers, it is necessary to determine the effective stored-energy function (13) associated with a fictitious linear porous "thermoelastic" composite (LCC) with the same microstructure as the original elastomer, as well as the corresponding phase averages F̄^(r) and fluctuations C_F^(r).
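Returning to the Voigt bound (29) recalled above, its failure under matrix incompressibility is easy to make concrete. In the sketch below, the matrix is an illustrative compressible neo-Hookean material (an assumption, not the text's choice) whose volume changes are penalized by a bulk-like modulus kappa; as kappa grows, the bound diverges for any volume-changing macroscopic stretch, while it stays finite for isochoric ones.

```python
import numpy as np

f_o = 0.3  # initial porosity (illustrative)

def W_matrix(F, mu=1.0, kappa=1.0):
    # illustrative compressible neo-Hookean matrix; kappa penalizes volume change
    J = np.linalg.det(F)
    return 0.5 * mu * (np.trace(F.T @ F) - 2.0 - 2.0 * np.log(J)) + 0.5 * kappa * (J - 1.0) ** 2

def W_voigt(U, kappa):
    # Voigt upper bound, eq. (29): only the initial porosity enters
    return (1.0 - f_o) * W_matrix(U, kappa=kappa)

U_dil = np.diag([1.2, 1.2])         # volume-changing stretch: det U != 1
U_iso = np.diag([1.2, 1.0 / 1.2])   # isochoric stretch: det U = 1

# as the matrix is made incompressible (kappa -> infinity), the bound diverges
# for the dilatational stretch but remains unchanged for the isochoric one
grow = [W_voigt(U_dil, k) for k in (1e0, 1e2, 1e4)]
flat = [W_voigt(U_iso, k) for k in (1e0, 1e2, 1e4)]
```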
Note that this fictitious linear thermoelastic problem involves measures of stress and deformation that are not symmetric, and hence suitable generalizations of the classical thermoelastic problem are required. These generalizations are straightforward and were provided by Ponte Castañeda and Tiberio [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF] in the broader context of N-phase thermoelastic composites. For conciseness, the general expressions will not be repeated here; instead, only the relevant results specialized to two-phase porous systems will be considered. To this end, it is first noted that for the special class of two-phase composites, the work of Levin [START_REF] Levin | Thermal expansion coefficients of heterogeneous materials[END_REF] allows great simplification of the general relations of linear thermoelastic composites. In fact, the effective potential energy (13) for W̃_T for two-phase composites may be simply written in the form:

W̃_T(F̄) = f̄ + T̄ • H̄ + ½ H̄ • L̄_0 H̄ + ½ [H̄ + (ΔL_0)^{-1}(ΔT)] • (L̃_0 − L̄_0) [H̄ + (ΔL_0)^{-1}(ΔT)], (30)

where the notations H̄ = F̄ − I and H^(r) = F^(r) − I have been introduced for convenience. Also, here f^(r) = W^(r)(F^(r)) − T^(r) • H^(r) − ½ H^(r) • L_0^(r) H^(r), with T^(r) = S^(r)(F^(r)) − L_0^(r) H^(r), and ΔL_0 = L_0^(1) − L_0^(2), ΔT = T^(1) − T^(2). Furthermore, in this relation, f̄ and L̄_0 are the volume averages of f and L_0, while L̃_0 is the effective modulus tensor of the two-phase, linear-elastic comparison composite with moduli L_0^(1) and L_0^(2), and the same microstructure, in its undeformed configuration, as the nonlinear hyperelastic composite.
Now, by letting L_0^(2) → 0 and defining L_0 = L_0^(1) and M_0 = L_0^{-1}, it is straightforward to show that relation (30) in the case of porous systems specializes to

W̃_T(F̄) = (1 − f_o) W(F^(1)) − ((1 − f_o)/2) S(F^(1)) • M_0 S(F^(1)) + ½ [F̄ − F^(1) + M_0 S(F^(1))] • L̃_0 [F̄ − F^(1) + M_0 S(F^(1))]. (31)

Next, the average deformation F̄^(1) and the fluctuations C_F^(1) in the matrix phase can be determined from the stored-energy function (31) (see, for example, [START_REF] Castañeda | Nonlinear composites[END_REF]), and may be simply written as

F̄^(1) = F̄ + (1/(1 − f_o)) M_0 [L̃_0 − (1 − f_o) L_0] [F̄ − F^(1) + M_0 S(F^(1))] (32)

and

C_F^(1) = (2/(1 − f_o)) ∂W̃_T/∂L_0, (33)

respectively. Note that expressions (31) through (33) are valid for any reference deformation tensor F^(1) and modulus L_0. Furthermore, these expressions are valid for any effective modulus tensor L̃_0. For example, for the case of a random and isotropic distribution of initially spherical pores, use can be made of the isotropic Hashin-Shtrikman-type estimate [START_REF] Willis | Variational and related methods for the overall properties of composites[END_REF]:

L̃_0 = [L_0^{-1} + (f_o/(1 − f_o)) Q^{-1}]^{-1}, (34)

where Q = L_0 − L_0 P L_0, with P being obtained by setting L^(0) equal to L_0 in the expression:

P = (1/4π) ∫_{|ξ|=1} H^(0)(ξ) dS, (35)

with K^(0)_ik = L^(0)_ijkl ξ_j ξ_l, N^(0) = (K^(0))^{-1}, and H^(0)_ijkl(ξ) = N^(0)_ik ξ_j ξ_l.
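The microstructural tensor P is straightforward to evaluate numerically. The sketch below does this for the two-dimensional (in-plane) specialization, replacing the surface integral by quadrature over the unit circle, and then forms the Hashin-Shtrikman-type estimate (34). The modulus L_0 used is an illustrative major-symmetric, strongly elliptic tensor without minor symmetry (the coefficient beta is a hypothetical choice), so that it is invertible on the space of non-symmetric second-order tensors, as required in the finite-strain setting.

```python
import numpy as np

# Illustrative matrix modulus L0 (an assumption): major-symmetric and strongly
# elliptic, but with beta != mu so that it has no minor symmetry and is
# invertible on non-symmetric second-order tensors.
lam, mu, beta = 1.0, 1.0, 0.3
d = np.eye(2)
L0 = (lam * np.einsum('ij,kl->ijkl', d, d)
      + mu * np.einsum('ik,jl->ijkl', d, d)
      + beta * np.einsum('il,jk->ijkl', d, d))

def as_mat(T):   # 2x2x2x2 tensor -> 4x4 matrix acting on 2x2 "vectors"
    return T.reshape(4, 4)

def inv4(T):     # inverse of a fourth-order tensor via its 4x4 representation
    return np.linalg.inv(as_mat(T)).reshape(2, 2, 2, 2)

def P_tensor(L, n=720):
    # In-plane case: P = (1/2pi) \oint H(xi) dS over the unit circle, with
    # K_ik = L_ijkl xi_j xi_l, N = K^{-1}, H_ijkl = N_ik xi_j xi_l.
    P = np.zeros((2, 2, 2, 2))
    for t in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        xi = np.array([np.cos(t), np.sin(t)])
        K = np.einsum('ijkl,j,l->ik', L, xi, xi)
        P += np.einsum('ik,j,l->ijkl', np.linalg.inv(K), xi, xi)
    return P / n

def HS_estimate(L, f_o):
    # Eq. (34): Ltilde = [L^{-1} + f_o/(1 - f_o) Q^{-1}]^{-1}, with Q = L - L P L
    P = P_tensor(L)
    Q = L - np.einsum('ijmn,mnpq,pqkl->ijkl', L, P, L)
    return inv4(inv4(L) + f_o / (1.0 - f_o) * inv4(Q))

def energy(L, H):
    # quadratic energy 0.5 H . L H, used here to compare stiffnesses
    return 0.5 * np.einsum('ij,ijkl,kl->', H, L, H)

Lt0 = HS_estimate(L0, 0.0)   # no pores: should recover L0
Lt3 = HS_estimate(L0, 0.3)   # 30% porosity: should be softer than L0
```

Since Q is positive definite for a positive definite L_0, the estimate satisfies L̃_0^{-1} = L_0^{-1} + (f_o/(1 − f_o)) Q^{-1} ⪰ L_0^{-1}, so the porous estimate is softer than the matrix in every direction.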
Similarly, for the case of a random and isotropic distribution of aligned cylindrical pores with initially circular cross section, the corresponding HS-type estimate is given by (34), but with P being obtained by setting L^(0) equal to L_0 in the expression [START_REF] Willis | Variational and related methods for the overall properties of composites[END_REF]:

P = (1/2π) ∫_{ξ_1²+ξ_2²=1} H^(0)(ξ_1, ξ_2, ξ_3 = 0) dS, (36)

where the cylindrical pores have been aligned in the x_3 direction. From a computational point of view, it is seen that the microstructural tensor P depends on the anisotropy of the modulus L_0, which in turn depends on the functional form of the potential W, as well as on the particular type of loading, as determined by F̄ = Ū.

TANGENT SECOND-ORDER ESTIMATES (VERSION 1)

The specialization of the second-order estimate (16) to the case of elastomeric porous composites leads to

W̃(F̄) = (1 − f_o) [W(F̄^(1)) + ½ S(F̄^(1)) • (F̄ − F̄^(1))], (37)

where the variable F̄^(1) needs to be determined. Now, by recognizing that within the framework of the tangent second-order estimates F^(1) = F̄^(1), equation (32) can be readily shown to reduce to

F̄^(1) = F̄ + (1/(1 − f_o)) M_0 [L̃_0 − (1 − f_o) L_0] [F̄ − F̄^(1) + M_0 S(F̄^(1))]. (38)

Next, recalling that in this context L_0 = ∂²W(F̄^(1))/∂F², it is seen that (38) constitutes a system of nine nonlinear algebraic equations for the components of the average deformation F̄^(1). The solution of these equations can then be used to compute the effective stored-energy function W̃ for the porous elastomer using (37).
SECOND-ORDER ESTIMATES WITH FLUCTUATIONS (VERSION 3)

The second-order estimate (26) for the case of elastomeric porous composites specializes to

W̃(F̄) = (1 − f_o) [W(F̂^(1)) − S(F̄) • (F̂^(1) − F̄^(1))]. (39)

Here, the variables F̄^(1) and F̂^(1), as well as the modulus tensor L_0 of the matrix phase of the linear comparison composite, need to be determined. Now, recognizing that in connection with the second-order estimate (26) the reference field F^(1) = F̄, equation (32) can be shown to yield the following expression for the average deformation gradient F̄^(1) associated with the estimate (39):

F̄^(1) = F̄ + (1/(1 − f_o)) M_0 [L̃_0 − (1 − f_o) L_0] M_0 S(F̄). (40)

With regard to the above equations, it is necessary to remark that (40) provides an explicit expression for F̄^(1) in terms of the modulus L_0, which can be determined, along with F̂^(1), by making use of the generalized secant condition:

S(F̂^(1)) − S(F̄) = L_0 (F̂^(1) − F̄), (41)

together with suitably chosen traces of the expression:

(F̂^(1) − F̄) ⊗ (F̂^(1) − F̄) = (2/(1 − f_o)) ∂W̃_T/∂L_0. (42)

More specifically, the traces to be taken depend on the choice of the form of L_0, as will be explained later in more detail. Moreover, in this last expression, W̃_T is the stored-energy function for the relevant LCC, given by

W̃_T(F̄) = (1 − f_o) W(F̄) + ½ S(F̄) • M_0 [L̃_0 − (1 − f_o) L_0] M_0 S(F̄). (43)

THE STRONG ELLIPTICITY CONDITION

A complete study of the stability of porous elastomers with random microstructures is an extremely difficult problem, and well beyond the scope of this work. However, following Geymonat et al.
[START_REF] Geymonat | Homogenization of nonlinearly elastic materials, microscopic bifurcation and macroscopic loss of rank-one convexity[END_REF], the onset of macroscopic instabilities can be identified from the loss of strong ellipticity of the effective constitutive behavior of the porous elastomers, which, as has already been seen in the prior subsections, can be estimated easily and efficiently by means of the second-order variational procedure. In this subsection, it is quickly recalled that the condition of strong ellipticity for a given constitutive relation is that the corresponding acoustic tensor be positive definite. More precisely, the condition of strong ellipticity for the homogenized porous elastomer characterized by the effective stored-energy function $\widetilde{W}$ can be written in the form:

$\widetilde{L}_{ijkl}\, N_j N_l\, m_i m_k > 0$  (44)

for all $m \otimes N \neq 0$. Here, $\widetilde{L} = \partial^2 \widetilde{W}/\partial \bar{F}^2$ is the fourth-order tensor of first-order elastic moduli characterizing the effective incremental behavior of the porous composite. When condition (44) is satisfied, the associated macroscopic equilibrium equations form a strongly elliptic system of partial differential equations. In connection with condition (44) it should be emphasized that $\widetilde{L} \neq \tilde{L}_0$. In other words, the modulus associated with the effective stored-energy function of the porous elastomer does not correspond exactly to the effective modulus associated with the auxiliary linear thermoelastic composite. This is related to the abovementioned existence of a (small) "duality gap" in the second-order variational estimates. Parenthetically, it is recalled that the condition of ordinary ellipticity requires the acoustic tensor to be merely nonsingular, not necessarily positive definite. Hence, strong ellipticity implies ordinary ellipticity, but the converse is not true in general.
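Condition (44) lends itself to a direct numerical check: scan unit directions $N$ and test positive definiteness of the (symmetrized) acoustic tensor. A 2-D sketch, with the modulus supplied as a numpy array; isotropic linear elasticity appears only in the test, as a case whose acoustic tensor is known in closed form:

```python
import numpy as np

def acoustic_tensor(L, N):
    """A_ik = L_ijkl N_j N_l for a fourth-order modulus L of shape (d, d, d, d)."""
    return np.einsum('ijkl,j,l->ik', L, N, N)

def is_strongly_elliptic(L, n_dirs=720):
    """Condition (44): m.A(N).m > 0 for all nonzero m and N, i.e. the
    symmetric part of the acoustic tensor is positive definite for every
    direction N (scanned here over the unit half-circle, 2-D case)."""
    for t in np.linspace(0.0, np.pi, n_dirs, endpoint=False):
        N = np.array([np.cos(t), np.sin(t)])
        A = acoustic_tensor(L, N)
        A = 0.5 * (A + A.T)          # the quadratic form only sees the symmetric part
        if np.linalg.eigvalsh(A).min() <= 0.0:
            return False
    return True
```

For isotropic moduli $L_{ijkl} = \lambda\,\delta_{ij}\delta_{kl} + \mu(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})$ the acoustic tensor is $\mu I + (\lambda+\mu)N\otimes N$, so the check returns true precisely when $\mu > 0$ and $\lambda + 2\mu > 0$.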
However, the interest here is in the determination of the loss of strong ellipticity of homogenized materials that are strictly convex, and therefore strongly elliptic, under infinitesimal deformations. Then, elliptic regions which contain the point $\bar{\lambda}_1 = \bar{\lambda}_2 = \bar{\lambda}_3 = 1$ will, by continuity, necessarily be strongly elliptic as well. That is, for the cases studied here, ellipticity and strong ellipticity are equivalent. In summary, use can be made of the second-order expressions (37) and (39) to produce estimates for the effective stored-energy function $\widetilde{W}$ in order to determine the strongly elliptic domain of porous elastomers through condition (44).

Plane Strain Loading of Transversely Isotropic Porous Elastomers

The results presented in the previous section are specific to porous elastomers with (3-D and 2-D) isotropic microstructures, hence the use of the Hashin-Shtrikman (HS)-type estimate [START_REF] Wang | Stability and vibrations of elastic thick-walled cylindrical and spherical shells subjected to pressure[END_REF] for the LCC. However, the results are general as far as the material behavior and loading conditions are concerned. The aim of this paper is to make use of these results for the first time, and therefore some additional hypotheses will be made in this section in order to simplify the calculations involved, in an attempt to generate results that are as explicit as possible. Thus, the problem of plane-strain loading of porous elastomers consisting of cylindrical voids (with initially circular cross section), perpendicular to the plane of deformation and aligned in the $x_3$ direction, will be considered. Note that the applied deformation $\bar{F} = \bar{U}$ in this context is entirely characterized by the macroscopic principal stretches $\bar{\lambda}_1$ and $\bar{\lambda}_2$, with the out-of-plane principal stretch $\bar{\lambda}_3$ being identically 1. Furthermore, attention will be restricted to porous elastomers with isotropic matrix phases.
This restriction, along with objectivity and the plane strain conditions, implies that the stored-energy function of the matrix phase can be expressed as a function of the principal invariants of the right Cauchy-Green deformation tensor $C = F^T F$:

$I = \operatorname{tr} C = \lambda_1^2 + \lambda_2^2, \qquad J = \sqrt{\det C} = \lambda_1 \lambda_2,$  (45)

or, equivalently, as a symmetric function of the principal stretches $\lambda_1$, $\lambda_2$ of $F$. Therefore, $W$ may be written as

$W(F) = \varphi(I, J) = \Phi(\lambda_1, \lambda_2).$  (46)

However, actual rubber being nearly incompressible, it will suffice to consider isotropic matrix phases characterized by the stored-energy function:

$W(F) = g(I) + h(J) + \dfrac{\mu'}{2}(J - 1)^2,$  (47)

where the Lamé parameter $\mu'$ will be taken to tend to infinity in order to recover incompressible behavior ($J \to 1$). Here, $g$ and $h$ are assumed to be twice continuously differentiable and to satisfy the conditions: $g(2) = h(1) = 0$, $g_I(2) = \mu/2$, $h_J(1) = -\mu$, and $4 g_{II}(2) + h_{JJ}(1) = \mu$, where the subscripts $I$ and $J$ denote partial differentiation with respect to these invariants. Note that when these conditions are satisfied, $W(F) \to \tfrac{1}{2}\mu' (\operatorname{tr}\varepsilon)^2 + \mu \operatorname{tr}\varepsilon^2$, where $\varepsilon$ is the infinitesimal strain tensor, as $F \to I$, so that the stored-energy function (47) linearizes properly. A relatively simple model of the general type (47), which captures the limiting chain extensibility of elastomers, is the Gent model [START_REF] Gent | A new constitutive relation for rubber[END_REF]:

$W(F) = -\dfrac{\mu J_m}{2} \ln\!\left(1 - \dfrac{I - 2}{J_m}\right) - \mu \ln J + \left(\dfrac{\mu'}{2} - \dfrac{\mu}{J_m}\right)(J - 1)^2,$  (48)

where the parameter $J_m$ is the limiting value of $I - 2$ at which the material locks up. Note that (48) reduces to a Neo-Hookean material on taking the limit $J_m \to \infty$.

5.1.
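A direct transcription of (48), with `mu_p` standing for $\mu'$; the default parameter values in the signatures are illustrative placeholders, not values taken from this work:

```python
import math

def gent_energy(l1, l2, mu=1.0, mu_p=1e4, Jm=50.0):
    """Plane-strain Gent stored energy, eq. (48):
    W = -(mu*Jm/2) ln(1 - (I-2)/Jm) - mu ln J + (mu_p/2 - mu/Jm)(J - 1)^2,
    with I = l1^2 + l2^2 and J = l1*l2."""
    I = l1 * l1 + l2 * l2
    J = l1 * l2
    if I - 2.0 >= Jm:
        raise ValueError("lock-up: I - 2 must stay below Jm")
    return (-0.5 * mu * Jm * math.log(1.0 - (I - 2.0) / Jm)
            - mu * math.log(J) + (0.5 * mu_p - mu / Jm) * (J - 1.0) ** 2)

def neo_hookean_energy(l1, l2, mu=1.0, mu_p=1e4):
    """Jm -> infinity limit of (48): W = mu/2 (I-2) - mu ln J + mu_p/2 (J-1)^2."""
    I = l1 * l1 + l2 * l2
    J = l1 * l2
    return 0.5 * mu * (I - 2.0) - mu * math.log(J) + 0.5 * mu_p * (J - 1.0) ** 2
```

As required by the conditions stated after (47), the energy vanishes at the identity, and for very large $J_m$ the Gent energy approaches the compressible Neo-Hookean form.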
TANGENT SECOND-ORDER ESTIMATES (VERSION 1)

Compressible Matrix

Within the framework of the second-order estimate (37), it is seen that under plane strain loading and given the matrix material behavior (47), expression (38) reduces simply to two nonlinear equations for the components $\bar{F}^{(1)}_{11} = \bar{\lambda}^{(1)}_1$ and $\bar{F}^{(1)}_{22} = \bar{\lambda}^{(1)}_2$ (the other in-plane components being identically zero) of the average deformation $\bar{F}^{(1)}$ in the matrix phase of the linear comparison composite. These two coupled equations, labeled (49), involve the components

$L_{1111} = 2\bar{g}^{(1)}_I + 4 (\bar{\lambda}^{(1)}_1)^2\, \bar{g}^{(1)}_{II} + \left[\bar{h}^{(1)}_{JJ} + \mu'\right](\bar{\lambda}^{(1)}_2)^2,$
$L_{2222} = 2\bar{g}^{(1)}_I + 4 (\bar{\lambda}^{(1)}_2)^2\, \bar{g}^{(1)}_{II} + \left[\bar{h}^{(1)}_{JJ} + \mu'\right](\bar{\lambda}^{(1)}_1)^2,$
$L_{1122} = \bar{h}^{(1)}_J - \mu' + \left[4\bar{g}^{(1)}_{II} + \bar{h}^{(1)}_{JJ} + 2\mu'\right]\bar{\lambda}^{(1)}_1 \bar{\lambda}^{(1)}_2,$

and

$S^{(1)}_{11} = 2\bar{g}^{(1)}_I \bar{\lambda}^{(1)}_1 + \bar{h}^{(1)}_J \bar{\lambda}^{(1)}_2 + \mu' \left(\bar{\lambda}^{(1)}_1 \bar{\lambda}^{(1)}_2 - 1\right)\bar{\lambda}^{(1)}_2,$
$S^{(1)}_{22} = 2\bar{g}^{(1)}_I \bar{\lambda}^{(1)}_2 + \bar{h}^{(1)}_J \bar{\lambda}^{(1)}_1 + \mu' \left(\bar{\lambda}^{(1)}_1 \bar{\lambda}^{(1)}_2 - 1\right)\bar{\lambda}^{(1)}_1.$

Here, for convenience, the subscript '0' has been dropped for the components of $L_0$. Also, use has been made of the notation $\bar{g}^{(1)} = g(\bar{I}^{(1)})$ and $\bar{h}^{(1)} = h(\bar{J}^{(1)})$, where $\bar{I}^{(1)} = (\bar{\lambda}^{(1)}_1)^2 + (\bar{\lambda}^{(1)}_2)^2$ and $\bar{J}^{(1)} = \bar{\lambda}^{(1)}_1 \bar{\lambda}^{(1)}_2$ denote the principal invariants associated with $\bar{F}^{(1)}$. In passing, it is noted that for the tangent modulus tensor of (47) and the applied loading, the relevant tensor P can be computed analytically, but, for brevity, the explicit expressions will not be included here. In summary, upon solving numerically the two nonlinear algebraic equations (49) for the nonzero components of $\bar{F}^{(1)}$, the effective stored-energy function $\widetilde{W}$ can be readily computed using (37).

Incompressible Matrix

The limit when the matrix phase is made incompressible (i.e., $\mu' \to \infty$) can be shown to simplify the above expressions considerably. As already stated, this limit is interesting from a practical perspective, given that actual elastomers exhibit a nearly incompressible behavior (i.e., they usually exhibit a ratio between the bulk and shear moduli of the order of $10^4$).
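The text leaves the numerical solution of the two equations (49) unspecified. A generic damped-Newton scheme of the kind one might use is sketched below; the residual function encapsulating (49) is assumed to be supplied by the user, and a simple algebraic system appears in the test only to exercise the solver:

```python
import numpy as np

def newton_2x2(residual, x0, tol=1e-12, max_iter=60, h=1e-7):
    """Damped Newton iteration with a forward-difference Jacobian for a
    system of two nonlinear algebraic equations residual(x) = 0, the kind
    of solve needed for the two components (l1, l2) of F1 in (49)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(residual(x), dtype=float)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((2, 2))
        for j in range(2):                      # finite-difference Jacobian
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(residual(xp)) - r) / h
        dx = np.linalg.solve(J, r)
        step = 1.0                              # backtracking line search
        while step > 1e-6 and np.linalg.norm(residual(x - step * dx)) >= np.linalg.norm(r):
            step *= 0.5
        x = x - step * dx
    return x
```

Starting the iteration from the identity ($\bar\lambda^{(1)}_1 = \bar\lambda^{(1)}_2 = 1$) and continuing the solution in small loading increments is the natural strategy for tracking the physically relevant branch.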
Due to the cumbersomeness of the final expressions, the asymptotic analysis of the incompressible limit involving the general stored-energy function (47) will not be included here. Instead, for illustrative purposes, only the particular case of a Neo-Hookean matrix phase will be discussed. The details of the relevant asymptotic analysis are given in Appendix A, but the final expression for the tangent second-order estimate (37) for the effective stored-energy function of a porous elastomer with incompressible Neo-Hookean matrix phase may be written as

$\widetilde{W}_I(\bar{F}) = \widetilde{\Phi}_I(\bar{\lambda}_1, \bar{\lambda}_2) = \dfrac{(1 - f_o)\,\mu}{2u} \left[ (u^2 - 1)(\bar{\lambda}_1 - \bar{\lambda}_2) + \dfrac{(1 + u^2)(\bar{\lambda}_2 u^2 - 2u + \bar{\lambda}_1)\left((1 + f_o)u^2 + (\bar{\lambda}_2 - \bar{\lambda}_1)u - 1 - f_o\right)}{u(\bar{\lambda}_1 - \bar{\lambda}_2 u^2)} \right].$  (50)

In this expression, $u$ is the solution of the equation

$(\bar{\lambda}_2^2 + f_o^2 - 1)u^4 + 2(\bar{\lambda}_1 + (f_o - 1)\bar{\lambda}_2)u^3 + (\bar{\lambda}_2^2 - \bar{\lambda}_1^2)u^2 - 2((f_o - 1)\bar{\lambda}_1 + \bar{\lambda}_2)u + 1 - f_o^2 - \bar{\lambda}_1^2 = 0,$  (51)

which can be determined explicitly as a function of $f_o$, $\bar{\lambda}_1$, and $\bar{\lambda}_2$. Note that (51) is a quartic polynomial equation and hence may be solved in closed form. However, for practical purposes, it proves more convenient to solve (51) numerically. In this regard, it is emphasized that only one of the 4 roots of (51) gives the correct linearized behavior for the effective response of porous materials; this is indeed the root that should be chosen, at least initially. For the special case of in-plane hydrostatic finite expansion or contraction, i.e., $\bar{\lambda}_1 = \bar{\lambda}_2 = \bar{\lambda}$, the second-order estimate (50) can be shown to further reduce to

$\widetilde{\Phi}_I(\bar{\lambda}, \bar{\lambda}) = \dfrac{2\mu (1 - f_o)(\bar{\lambda} - 1)^2}{f_o + \bar{\lambda} - 1}.$  (52)

For later use, it is also noted that the average deformation field in the matrix phase associated with the stored-energy function (52) is given by

$\bar{F}^{(1)} = I.$  (53)

Finally, it is noted that the result (53) holds true not only for a porous elastomer with incompressible Neo-Hookean matrix, but for a porous elastomer with a general incompressible isotropic matrix phase.

5.2.
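The root selection for (51) and the hydrostatic specialization (52) can be sketched as follows. Picking the real root closest to $u = 1$ implements the "correct linearized behavior" criterion near the identity ($u = 1$ solves (51) exactly when $\bar\lambda_1 = \bar\lambda_2 = 1$); away from the identity this selection rule is an assumption of this sketch:

```python
import numpy as np

def solve_u(l1, l2, fo):
    """Real root of the quartic (51) closest to u = 1."""
    c = [l2 ** 2 + fo ** 2 - 1.0,
         2.0 * (l1 + (fo - 1.0) * l2),
         l2 ** 2 - l1 ** 2,
         -2.0 * ((fo - 1.0) * l1 + l2),
         1.0 - fo ** 2 - l1 ** 2]
    roots = np.roots(c)
    real = roots[np.abs(roots.imag) < 1e-8].real
    return real[np.argmin(np.abs(real - 1.0))]

def phi_v1(l1, l2, fo, mu=1.0):
    """Version 1 estimate (50) for an incompressible Neo-Hookean matrix.
    Indeterminate (0/0) at exactly l1 = l2; use phi_v1_hydro there."""
    u = solve_u(l1, l2, fo)
    frac = ((1.0 + u ** 2) * (l2 * u ** 2 - 2.0 * u + l1)
            * ((1.0 + fo) * u ** 2 + (l2 - l1) * u - 1.0 - fo)) / (u * (l1 - l2 * u ** 2))
    return (1.0 - fo) * mu / (2.0 * u) * ((u ** 2 - 1.0) * (l1 - l2) + frac)

def phi_v1_hydro(lam, fo, mu=1.0):
    """Hydrostatic limit (52)."""
    return 2.0 * mu * (1.0 - fo) * (lam - 1.0) ** 2 / (fo + lam - 1.0)
```

Evaluating (50) at a slightly perturbed hydrostatic state recovers the closed form (52), which is how the reduction stated in the text can be verified numerically.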
SECOND-ORDER ESTIMATES WITH FLUCTUATIONS (VERSION 3)

Compressible Matrix

Given the transverse isotropy of the microstructure and the orthogonal symmetry of the prescribed loading, it is reasonable to assume that the linear comparison problem of relevance here will also exhibit orthotropic symmetry, with the symmetry axes aligned with the applied loading $\bar{F} = \bar{U}$. In order to carry out the computation of the HS-type second-order estimates (39) for porous elastomers with matrix phase (47) under plane strain conditions, it suffices to consider the in-plane components of a general deformation field $F$ relative to the symmetry axes. It is convenient to denote these components as a vector in $\mathbb{R}^4$:

$[\,F_{11}\;\; F_{22}\;\; F_{12}\;\; F_{21}\,]^T.$  (54)

The modulus tensor $L_0$ of the matrix phase of the linear comparison composite is consequently denoted as a matrix in $\mathbb{R}^{4\times 4}$:

$\begin{bmatrix} L_{1111} & L_{1122} & 0 & 0 \\ L_{1122} & L_{2222} & 0 & 0 \\ 0 & 0 & L_{1212} & L_{1221} \\ 0 & 0 & L_{1221} & L_{2121} \end{bmatrix},$  (55)

where, for convenience, the subscript '0' has been suppressed for the components of $L_0$, and use has been made of the major symmetry of the tensor $L_0$ (i.e., $L_{ijkl} = L_{klij}$). Next, due to the above-stated assumptions, it is seen that the tensor $\hat{F}^{(1)}$ has at most 4 independent components ($\hat{F}^{(1)}_{11}$, $\hat{F}^{(1)}_{22}$, $\hat{F}^{(1)}_{12}$, $\hat{F}^{(1)}_{21}$), which should be determined from equations (42). This implies that the modulus $L_0$ must be chosen to have at most four independent components, with respect to which $\widetilde{W}_T$ will be differentiated to generate 4 consistent equations for the components of $\hat{F}^{(1)}$. Note that these four conditions correspond to 4 different "traces" of the equations (42). For this reason, the components of $L_0$ will be required to satisfy the prescriptions:

$L_{1212} = L_{2121} \quad\text{and}\quad L_{1221} + L_{1122} = \sqrt{(L_{1111} - L_{1212})(L_{2222} - L_{1212})}.$  (56)

These relations reduce the components of $L_0$ to only 4 independent ones, namely, $L_{1111}$, $L_{2222}$, $L_{1122}$, and $L_{1212}$.
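A small helper assembling the matrix (55) from the 4 independent components via the prescriptions (56); the positive branch of the square root in (56), as reconstructed here, is an assumption of this sketch:

```python
import math

def assemble_L0(L1111, L2222, L1122, L1212):
    """Build the 4x4 matrix (55), ordered as [F11, F22, F12, F21], from the
    4 independent components, using (56): L2121 = L1212 and
    L1221 + L1122 = sqrt((L1111 - L1212)(L2222 - L1212))."""
    L1221 = math.sqrt((L1111 - L1212) * (L2222 - L1212)) - L1122
    return [[L1111, L1122, 0.0, 0.0],
            [L1122, L2222, 0.0, 0.0],
            [0.0, 0.0, L1212, L1221],
            [0.0, 0.0, L1221, L1212]]
```

By construction the result has the major symmetry of (55) and satisfies both prescriptions (56).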
It is recalled [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF] that the motivation for the choices (56) was twofold: (i) the components of the tangent modulus of a Neo-Hookean material, expressed relative to the symmetry axes, satisfy (56); and (ii) the conditions (56) simplify significantly the computation of the microstructural tensor P (see [20, Appendix A]). Now, making use of the equations [START_REF] Wang | Stability and vibrations of elastic thick-walled cylindrical and spherical shells subjected to pressure[END_REF] for the HS estimate for $\tilde{L}_0$, the equations (42), together with (43), can be used to generate 4 equations for the 4 components of $\hat{F}^{(1)}$, which are of the form:

$\left(\hat{F}^{(1)}_{11} - \bar{\lambda}_1\right)^2 + 2 f_1\, \hat{F}^{(1)}_{12}\hat{F}^{(1)}_{21} = k_1,$
$\left(\hat{F}^{(1)}_{22} - \bar{\lambda}_2\right)^2 + 2 f_2\, \hat{F}^{(1)}_{12}\hat{F}^{(1)}_{21} = k_2,$  (57)
$\left(\hat{F}^{(1)}_{12}\right)^2 + \left(\hat{F}^{(1)}_{21}\right)^2 + 2 f_3\, \hat{F}^{(1)}_{12}\hat{F}^{(1)}_{21} = k_3,$
$\left(\hat{F}^{(1)}_{11} - \bar{\lambda}_1\right)\left(\hat{F}^{(1)}_{22} - \bar{\lambda}_2\right) - \hat{F}^{(1)}_{12}\hat{F}^{(1)}_{21} = k_4,$

where $f_1$, $f_2$, $f_3$, $k_1$, $k_2$, $k_3$, $k_4$ are functions of the components of $L_0$, i.e., $L_{1111}$, $L_{2222}$, $L_{1122}$, and $L_{1212}$, as well as of the deformation $\bar{F}$, the initial volume fraction of voids $f_o$, and the constitutive functions $g$, $h$, and $\mu'$ of the matrix phase. Equations (57) can be shown to yield two distinct solutions for $\hat{F}^{(1)}_{11}$ and $\hat{F}^{(1)}_{22}$, in terms of which $\hat{F}^{(1)}_{12}$ and $\hat{F}^{(1)}_{21}$ may be determined. Note, however, that the variables $\hat{F}^{(1)}_{12}$ and $\hat{F}^{(1)}_{21}$ only enter the equations through the combinations $\hat{F}^{(1)}_{12}\hat{F}^{(1)}_{21}$ and $(\hat{F}^{(1)}_{12})^2 + (\hat{F}^{(1)}_{21})^2$, and hence only these combinations can be determined uniquely from (57). The two solutions for $\hat{F}^{(1)}_{11}$ and $\hat{F}^{(1)}_{22}$ are as follows:

$\hat{F}^{(1)}_{11} - \bar{\lambda}_1 = \pm \dfrac{2 f_1 k_4 + k_1}{\sqrt{4 f_1^2 k_2 + 4 f_1 k_4 + k_1}}, \qquad \hat{F}^{(1)}_{22} - \bar{\lambda}_2 = \pm \dfrac{2 f_1 k_2 + k_4}{\sqrt{4 f_1^2 k_2 + 4 f_1 k_4 + k_1}},$  (58)

where it must be emphasized that the positive (and negative) signs in the roots for $\hat{F}^{(1)}_{11}$ and $\hat{F}^{(1)}_{22}$ go together.
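The roots (58) can be checked against (57) directly: eliminating $\hat F^{(1)}_{12}\hat F^{(1)}_{21}$ through the fourth equation of (57), i.e. $P = ab - k_4$ with $a = \hat F^{(1)}_{11} - \bar\lambda_1$ and $b = \hat F^{(1)}_{22} - \bar\lambda_2$, the first equation of (57) is then satisfied identically, for arbitrary $f_1$, $k_1$, $k_2$, $k_4$ (the remaining equations constrain these coefficients through their definitions). A sketch of this consistency check, with illustrative coefficient values:

```python
import math

def root_58(f1, k1, k2, k4, sign=1.0):
    """One of the two roots (58): returns (a, b), where a = F11 - l1 and
    b = F22 - l2; both components take the same sign."""
    D = 4.0 * f1 ** 2 * k2 + 4.0 * f1 * k4 + k1
    a = sign * (2.0 * f1 * k4 + k1) / math.sqrt(D)
    b = sign * (2.0 * f1 * k2 + k4) / math.sqrt(D)
    return a, b
```

The identity follows from $(k_1 + 2f_1k_4)(k_1 + 4f_1k_4 + 4f_1^2k_2)/D = k_1 + 2f_1k_4$ with $D = 4f_1^2k_2 + 4f_1k_4 + k_1$.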
Next, each of the two distinct roots of (57) can be substituted into the generalized secant condition (41) to obtain a system of 4 equations for the 4 unknowns $L_{1111}$, $L_{2222}$, $L_{1122}$, and $L_{1212}$, which must be solved numerically. Having computed the values of all the components of $L_0$ for a given initial porosity ($f_o$), given material behavior ($g$, $h$, and $\mu'$), and given loading ($\bar{\lambda}_1$ and $\bar{\lambda}_2$), the values of the components of $\bar{F}^{(1)}$ and $\hat{F}^{(1)}$ can be readily determined using relations (40) and (58), respectively. In turn, the second-order estimate (39) for the effective stored-energy function $\widetilde{W}$ of porous isotropic elastomers can be computed using these results. Finally, it is noted that the two above roots lead to very similar results for the effective behavior of the porous elastomer when small values of $\mu'$ are considered (i.e., for $\mu'$ of the order of $\mu$). However, for larger values of $\mu'$, the estimates produced by the two distinct roots are very different. In fact, as explained in more detail in the following subsection, it can be shown that for large values of $\mu'$ only one of the two roots provides physically meaningful results. Consequently, this is the root that should be chosen to compute the effective behavior of the porous elastomer.

Incompressible Matrix

The above expressions can be simplified in the limit of incompressibility of the matrix phase (i.e., $\mu' \to \infty$). In this context, it is important to realize that the asymptotic behavior of one of the two above roots leads to nonphysical predictions in the limit as $\mu'$ becomes unbounded. More specifically, for deformations satisfying $\bar{e}_1 + \bar{e}_2 \geq 0$, only the "positive" (+) root provides physically reasonable predictions, whereas for deformations with $\bar{e}_1 + \bar{e}_2 \leq 0$, only the "negative" (-) root has the physically plausible asymptotic behavior; here, the logarithmic strains $\bar{e}_1 = \ln(\bar{\lambda}_1)$ and $\bar{e}_2 = \ln(\bar{\lambda}_2)$ have been introduced for convenience.
Taking this observation into account, it can be shown that the second-order estimate (39) for the effective stored-energy function of a porous elastomer with an incompressible isotropic matrix phase may be written as

$\widetilde{W}_I(\bar{U}) = \widetilde{\Phi}_I(\bar{\lambda}_1, \bar{\lambda}_2) = (1 - f_o)\, g(\hat{I}^{(1)}),$  (59)

where $\hat{I}^{(1)} = \hat{I}^{(1)}(\alpha, \beta, \gamma)$, and $\alpha$, $\beta$, $\gamma$ are the solution of three nonlinear algebraic equations, not shown here for their bulkiness, which can be solved for in terms of the initial porosity $f_o$, given material behavior $g$, and given loading $\bar{\lambda}_1$ and $\bar{\lambda}_2$. In general, it is not possible to solve these equations in closed form. However, for the particular case of a porous elastomer with an incompressible Neo-Hookean matrix phase, the general estimate (59) can be shown (see Appendix B) to reduce to

$\widetilde{\Phi}_I(\bar{\lambda}_1, \bar{\lambda}_2) = \dfrac{(1 - f_o)\mu}{2} \left[ \dfrac{p_4 v^4 + p_3 v^3 + p_2 v^2 + p_1 v + p_0}{(q_2 v^2 + q_1 v + q_0)^2} - 2 \right],$  (60)

where $v$ is the solution of the quartic polynomial:

$r_4 v^4 + r_3 v^3 + r_2 v^2 + r_1 v + r_0 = 0.$  (61)

Here, the coefficients $p_0, p_1, p_2, p_3, p_4$, $q_0, q_1, q_2$, and $r_0, r_1, r_2, r_3, r_4$, which depend on $f_o$, $\bar{\lambda}_1$, and $\bar{\lambda}_2$, are given in explicit form in Appendix C. Since the estimate (60) depends effectively on the solution of the quartic polynomial equation (61), it may be written in closed form. However, for all practical purposes, it is simpler to solve (61) numerically. In this regard, it is emphasized that only one of the 4 roots of (61) gives the correct linearized behavior for the effective response of porous materials; this is indeed the root that should be chosen, at least initially. It is useful, for comparison purposes, to spell out the simplification of the second-order estimate (60) for the case of in-plane hydrostatic loading, i.e., $\bar{\lambda}_1 = \bar{\lambda}_2 = \bar{\lambda}$. The result reads as follows:

$\widetilde{\Phi}_I(\bar{\lambda}, \bar{\lambda}) = \dfrac{2\mu}{1 - f_o} \left[ (1 + f_o)\bar{\lambda}^2 + f_o - 1 - 2\bar{\lambda}\sqrt{f_o(\bar{\lambda}^2 + f_o - 1)} \right].$  (62)
For later use, it is noted that the average deformation field in the matrix phase associated with the stored-energy function (62) is given by

$\bar{F}^{(1)} = \bar{\lambda}_I\, I,$  (63)

where

$\bar{\lambda}_I = \dfrac{\sqrt{f_o(\bar{\lambda}^2 + f_o - 1)} - \bar{\lambda}}{f_o - 1}.$  (64)

Finally, it is emphasized that the result (64) holds true not only for a porous elastomer with incompressible Neo-Hookean matrix, but in fact for a porous elastomer with a general incompressible isotropic matrix phase.

COMPARISON OF THE SECOND-ORDER ESTIMATES WITH EXACT RESULTS

Hydrostatic Loading

With regard to porous elastomers subjected to finite deformations, there are very few exact results available. For the special case of hydrostatic loading, Hashin [START_REF] Hashin | Large isotropic elastic deformation of composites and porous media[END_REF] obtained the exact equilibrium solution by making use of the idea of the composite spheres assemblage. Following that work, it is straightforward to show that the exact stored-energy function for the in-plane hydrostatic deformation of a porous rubber with incompressible isotropic matrix $W(F) = \Phi(\lambda_1, \lambda_2)$ may be written as

$\widetilde{W}_I(\bar{U}) = 2 \displaystyle\int_{\sqrt{f_o}}^{1} \Phi(\lambda, \lambda^{-1})\, R\, dR,$  (65)

where

$\lambda = \sqrt{1 + \dfrac{\bar{\lambda}^2 - 1}{R^2}}.$  (66)

The corresponding exact average deformation in the matrix phase reads as

$\bar{F}^{(1)} = \bar{\lambda}_I\, I,$  (67)

with

$\bar{\lambda}_I = \dfrac{\sqrt{f_o(\bar{\lambda}^2 + f_o - 1)} - \bar{\lambda}}{f_o - 1},$  (68)

where $\bar{\lambda}$ must be greater than $\sqrt{1 - f_o}$. In general, the integral in (65) cannot be computed analytically; however, for the particular case of a porous elastomer with incompressible Neo-Hookean matrix phase, the exact function may be expressed as

$\widetilde{\Phi}_I(\bar{\lambda}, \bar{\lambda}) = \dfrac{\mu}{2}(\bar{\lambda}^2 - 1)\left[ \ln\!\left(\dfrac{\bar{\lambda}^2 + f_o - 1}{f_o}\right) - \ln(\bar{\lambda}^2) \right].$  (69)

It can thus be seen that the two versions of the second-order estimates, as defined by (52) and (62), do not recover the exact result (69) for the effective stored-energy function of porous elastomers with incompressible Neo-Hookean matrix phase subjected to general in-plane hydrostatic finite deformations.
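The chain (65)-(69) can be checked numerically for the Neo-Hookean case, where $\Phi(\lambda, \lambda^{-1}) = \frac{\mu}{2}(\lambda^2 + \lambda^{-2} - 2)$: a simple midpoint-rule quadrature of (65) should reproduce the closed form (69), and (62) and (64) should recover their identity-limit values. A sketch:

```python
import math

def phi_exact_nh(lam, fo, mu=1.0, n=20000):
    """Exact composite-cylinder energy (65) for an incompressible
    Neo-Hookean matrix, integrated by the midpoint rule in R."""
    a, b = math.sqrt(fo), 1.0
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        R = a + (i + 0.5) * h
        lam2 = 1.0 + (lam * lam - 1.0) / (R * R)     # eq. (66), squared
        s += 0.5 * mu * (lam2 + 1.0 / lam2 - 2.0) * 2.0 * R * h
    return s

def phi_exact_closed(lam, fo, mu=1.0):
    """Closed form (69)."""
    return 0.5 * mu * (lam * lam - 1.0) * (
        math.log((lam * lam + fo - 1.0) / fo) - math.log(lam * lam))

def phi_v3_hydro(lam, fo, mu=1.0):
    """Version 3 hydrostatic estimate (62)."""
    return 2.0 * mu / (1.0 - fo) * ((1.0 + fo) * lam * lam + fo - 1.0
            - 2.0 * lam * math.sqrt(fo * (lam * lam + fo - 1.0)))

def lam_I(lam, fo):
    """Average matrix stretch (64)/(68)."""
    return (math.sqrt(fo * (lam * lam + fo - 1.0)) - lam) / (fo - 1.0)
```

All four functions vanish or reduce to 1 at $\bar\lambda = 1$, as the undeformed state requires, and the quadrature of (65) agrees with (69) to quadrature accuracy.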
Nonetheless, both estimates can be shown to be exact to third order in the infinitesimal strain (i.e., up to terms of $O((\bar{\lambda} - 1)^3)$). For larger finite deformations, however, the behavior of the two estimates is very different. As will be seen in the following section, whereas the expression (62), which takes into account the field fluctuations, provides estimates that are in very good agreement with the exact result, the corresponding tangent expression (52) delivers estimates that deviate drastically from (69). This disparity is mainly due to the difference in the prediction of the evolution of the microstructure. In fact, while the average deformation gradient in the matrix phase (53) predicted by the tangent second-order theory is exactly equal to the identity, the corresponding $\bar{F}^{(1)}$ (63) predicted by the second-order method with fluctuations is consistent with the exact result (67). This is a remarkable result. Indeed, by taking into account the fluctuations, the second-order estimate (26) is able to improve on the tangent second-order estimate [START_REF] Knowles | On the failure of ellipticity of the equations for finite elastostatic plane strain[END_REF] in that it recovers the exact average deformation fields in a porous elastomer with an incompressible isotropic matrix under finite in-plane hydrostatic loading.

General Loading

For more general loadings, there are no other known exact solutions to the problem for porous elastomers. However, for incompressible matrix phases, a simple conservation-of-mass argument (for the matrix phase) allows the determination of the evolution of the porosity $f$ as a function of the deformation. The result is

$f = 1 - \dfrac{1 - f_o}{\bar{J}},$  (70)

where $\bar{J} = \det\bar{F}$. Now, it can be shown that the estimate for the porosity associated with the second-order estimate with fluctuations (59) for porous elastomers with incompressible isotropic matrix phases is in exact agreement with the exact result (70).
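Equation (70) is a one-liner in code; it already encodes the geometric hardening/softening discussed in the results section:

```python
def porosity(J_bar, fo):
    """Exact porosity evolution (70) for an incompressible matrix phase:
    the matrix volume fraction (1 - fo) of the reference volume is
    preserved, so f = 1 - (1 - fo)/J_bar, with J_bar = det(F_bar)
    (= l1 * l2 in plane strain)."""
    return 1.0 - (1.0 - fo) / J_bar
```

Isochoric loadings ($\bar J = 1$, e.g. pure shear) leave $f = f_o$, hydrostatic compression decreases $f$, tension increases it, and the pore closes ($f = 0$) exactly when $\bar J = 1 - f_o$.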
The proof of this is sketched out for the particular case of a Neo-Hookean porous material at the end of Appendix B. On the other hand, the corresponding estimate for the porosity arising from the tangent second-order estimate (37) deviates radically from (70) for large deformations, as will be shown explicitly in the next section. In summary, Version 3 (with fluctuations), unlike Version 1 (tangent), of the second-order method is able to predict the exact evolution of the porosity for general finite loading, and consequently the exact average deformation fields for hydrostatic loading, for a porous elastomer with incompressible isotropic matrix phase. This is a nontrivial result, since the actual fields in a deformed porous elastomer are highly heterogeneous. However, it appears that the "generalized secant" linearization, together with the incorporation of field fluctuations, leads to improved approximations capable of better capturing the heterogeneity of these fields, thereby delivering the exact results mentioned above.

LOSS OF STRONG ELLIPTICITY

For the particular case of plane strain deformations, the strong ellipticity condition (44) can be written more explicitly.
In fact, under plane strain deformations, necessary and sufficient conditions for strong ellipticity of the effective constitutive behavior of the type of porous systems considered here have been shown [START_REF] Knowles | On the failure of ellipticity of the equations for finite elastostatic plane strain[END_REF][START_REF] Hill | On the theory of plane strain in finitely deformed compressible materials[END_REF] to reduce to:

$\widetilde{L}_{1111} > 0, \quad \widetilde{L}_{2222} > 0, \quad \widetilde{L}_{1212} > 0, \quad \widetilde{L}_{2121} > 0,$ and  (71)
$\widetilde{L}_{1111}\widetilde{L}_{2222} + \widetilde{L}_{1212}\widetilde{L}_{2121} - (\widetilde{L}_{1122} + \widetilde{L}_{1221})^2 > -2\sqrt{\widetilde{L}_{1111}\widetilde{L}_{2222}\widetilde{L}_{1212}\widetilde{L}_{2121}},$

where

$\widetilde{L}_{iijj} = \dfrac{\partial^2 \widetilde{W}}{\partial\bar{\lambda}_i\, \partial\bar{\lambda}_j}, \qquad \widetilde{L}_{ijij} = \dfrac{1}{\bar{\lambda}_i^2 - \bar{\lambda}_j^2}\left( \bar{\lambda}_i \dfrac{\partial\widetilde{W}}{\partial\bar{\lambda}_i} - \bar{\lambda}_j \dfrac{\partial\widetilde{W}}{\partial\bar{\lambda}_j} \right), \quad i \neq j,$  (72)
$\widetilde{L}_{ijji} = \dfrac{1}{\bar{\lambda}_i^2 - \bar{\lambda}_j^2}\left( \bar{\lambda}_j \dfrac{\partial\widetilde{W}}{\partial\bar{\lambda}_i} - \bar{\lambda}_i \dfrac{\partial\widetilde{W}}{\partial\bar{\lambda}_j} \right), \quad i \neq j \quad (i, j = 1, 2),$

are the components of the modulus $\widetilde{L}$ written in the Lagrangean principal axes. Note that the third and fourth conditions in (71) are equivalent and that for loadings with $\bar{\lambda}_i = \bar{\lambda}_j$ ($i \neq j$), suitable limits must be taken for some of the components in (72). In particular, equations (72)₂ and (72)₃ transform into:

$\widetilde{L}_{ijij} = \dfrac{1}{2}\left[ \widetilde{L}_{iiii} - \widetilde{L}_{iijj} + \dfrac{1}{\bar{\lambda}_i}\dfrac{\partial\widetilde{W}}{\partial\bar{\lambda}_i} \right], \qquad \widetilde{L}_{ijji} = \dfrac{1}{2}\left[ \widetilde{L}_{iiii} - \widetilde{L}_{iijj} - \dfrac{1}{\bar{\lambda}_i}\dfrac{\partial\widetilde{W}}{\partial\bar{\lambda}_i} \right], \quad i \neq j,$  (73)

respectively. It is important to remark here that most stored-energy functions of the form (47) describe the actual behavior of elastomers best when "calibrated" to be strongly elliptic. For example, a compressible Gent material, characterized by the stored-energy function (48), is strongly elliptic under plane-strain deformations if (but not only if) $\mu > 0$, $J_m > 0$, and $\mu' > 2\mu/J_m$. Note that for a Neo-Hookean elastomer, these sufficient conditions simplify to $\mu > 0$ and $\mu' > 0$. In fact, for realistic elastomers, $\mu > 0$, $J_m > 0$, and $\mu'$ is not only positive but several orders of magnitude larger than $\mu$, namely, $\mu'/\mu \approx 10^4$. Consequently, the Gent elastomers utilized in this work to model the matrix behavior of the porous elastomers are strongly elliptic for all deformations.
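Conditions (71)-(73) admit a generic numerical check for any energy given as a function $\widetilde W(\bar\lambda_1, \bar\lambda_2)$ of the principal stretches. A finite-difference sketch follows; the compressible Neo-Hookean matrix energy appears in the test only as a conveniently strongly elliptic example:

```python
import math

def moduli_components(phi, l1, l2, h=1e-5):
    """Components (72) of the incremental moduli from phi(l1, l2) by central
    finite differences; the limiting forms (73) are used when l1 ~ l2."""
    d1 = (phi(l1 + h, l2) - phi(l1 - h, l2)) / (2 * h)
    d2 = (phi(l1, l2 + h) - phi(l1, l2 - h)) / (2 * h)
    L1111 = (phi(l1 + h, l2) - 2 * phi(l1, l2) + phi(l1 - h, l2)) / h ** 2
    L2222 = (phi(l1, l2 + h) - 2 * phi(l1, l2) + phi(l1, l2 - h)) / h ** 2
    L1122 = (phi(l1 + h, l2 + h) - phi(l1 + h, l2 - h)
             - phi(l1 - h, l2 + h) + phi(l1 - h, l2 - h)) / (4 * h ** 2)
    if abs(l1 - l2) > 1e-6:
        L1212 = (l1 * d1 - l2 * d2) / (l1 ** 2 - l2 ** 2)
        L1221 = (l2 * d1 - l1 * d2) / (l1 ** 2 - l2 ** 2)
    else:                                       # eq. (73)
        L1212 = 0.5 * (L1111 - L1122 + d1 / l1)
        L1221 = 0.5 * (L1111 - L1122 - d1 / l1)
    return L1111, L2222, L1122, L1212, L1221

def strongly_elliptic(phi, l1, l2):
    """Necessary and sufficient plane-strain conditions (71); the formulas
    (72) give L2121 = L1212 identically, which is used here."""
    L1111, L2222, L1122, L1212, L1221 = moduli_components(phi, l1, l2)
    if min(L1111, L2222, L1212) <= 0.0:
        return False
    lhs = L1111 * L2222 + L1212 ** 2 - (L1122 + L1221) ** 2
    return lhs > -2.0 * math.sqrt(L1111 * L2222) * L1212
```

Scanning $(\bar\lambda_1, \bar\lambda_2)$ with this predicate is how the strongly elliptic domains reported below can, in principle, be traced out for any of the second-order estimates.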
As a final remark, it is recalled that having strict convexity in the linearized region, i.e., $\mu > 0$ and $\kappa = \mu + \mu' > 0$ (where $\kappa$ denotes the plane-strain bulk modulus) in the limit $\bar{\lambda}_1 \to 1$ and $\bar{\lambda}_2 \to 1$, does not ensure strong ellipticity of a Gent material for all deformations. In the next section, the second-order methods described in this section will be used to generate estimates for the strongly elliptic domains of random porous elastomers with incompressible Gent and Neo-Hookean matrix phases subjected to plane-strain deformations. It will be shown that even though the behavior of the matrix phase is strongly elliptic, the homogenized behavior of the porous elastomer can lose strong ellipticity. This is consistent with the observations of Abeyaratne and Triantafyllidis [START_REF] Abeyaratne | An investigation of localization in a porous elastic material using homogenization theory[END_REF] for porous elastomers with periodic microstructures.

Results for General Plane-Strain Loading

This section presents results associated with the (Versions 1 and 3) second-order HS estimates for in-plane hydrostatic, uniaxial, and pure shear loading of porous elastomers with incompressible Gent and Neo-Hookean matrix phases. Results are given for $\mu = 1$ and various levels of initial porosity $f_o$, and were computed up to the point at which the effective incremental moduli were found to lose strong ellipticity, or truncated at some sufficiently large strain if no such loss was found. For clarity, the points at which loss of strong ellipticity occurred are denoted with the symbols and • for Versions 1 and 3, respectively. The characterization of the strongly elliptic domains is given in the last subsection. It is further noted that exact results and bounds are presented when available.
HYDROSTATIC LOADING

Figure 1 presents the comparison between the effective behavior as predicted by Versions 1 and 3 of the second-order method and the exact result, for a Neo-Hookean porous elastomer with incompressible matrix phase ($\mu' \to \infty$) under in-plane hydrostatic loading. Recall that closed-form expressions for the effective stored-energy functions shown in Figure 1(a) are given by (52), (62), and (69) for Version 1, Version 3, and the exact result, respectively. The main observation that can be made from Figure 1 is that Version 3 of the second-order variational procedure provides estimates for the effective constitutive behavior which are in excellent agreement with the exact result. Version 1 also delivers estimates that compare reasonably well with the exact result for compressive loadings. However, the predictions of Version 1 deviate significantly from the exact behavior for large tensile deformations. It is also seen that both versions of the second-order method predict loss of strong ellipticity of the homogenized porous elastomer under in-plane hydrostatic compression, while no such behavior is observed under in-plane hydrostatic tension. Moreover, both second-order estimates for the effective behavior exhibit better agreement with the exact result for higher values of $f_o$. Finally, it is interesting to note from Figure 1(b) that the overall constitutive behavior of the composite consistently exhibits hardening under compression and softening under tension. This feature will be shown shortly to be due mainly to a geometric effect caused by the evolution of the porosity. Figure 2 provides plots associated with the results shown in Figure 1 for: (a) the porosity as a function of the logarithmic strain $\bar{e} = \ln(\bar{\lambda})$; and (b) the critical stretch $\bar{\lambda}_{crit}$ at which the loss of strong ellipticity of the homogenized elastomer takes place, as a function of the initial porosity $f_o$.
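The agreement pattern described here can be reproduced directly from the closed forms (52), (62), and (69), restated here so that the block is self-contained:

```python
import math

def w_exact(lam, fo, mu=1.0):
    """Exact result, eq. (69)."""
    return 0.5 * mu * (lam * lam - 1.0) * (
        math.log((lam * lam + fo - 1.0) / fo) - math.log(lam * lam))

def w_v1(lam, fo, mu=1.0):
    """Version 1 (tangent), eq. (52)."""
    return 2.0 * mu * (1.0 - fo) * (lam - 1.0) ** 2 / (fo + lam - 1.0)

def w_v3(lam, fo, mu=1.0):
    """Version 3 (with fluctuations), eq. (62)."""
    return 2.0 * mu / (1.0 - fo) * ((1.0 + fo) * lam * lam + fo - 1.0
            - 2.0 * lam * math.sqrt(fo * (lam * lam + fo - 1.0)))
```

For example, at $f_o = 0.3$ and $\mu = 1$, the Version 3 estimate lies closer to the exact curve than Version 1 at both $\bar\lambda = 0.9$ (compression) and $\bar\lambda = 1.5$ (tension), consistent with Figure 1.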
First, a key point to be drawn from Figure 2(a) is that the porosity decreases for compressive deformations and increases for tensile ones. This entails a geometric hardening/softening on the overall response of the porous elastomer which is entirely consistent with the hardening/softening exhibited by the effective constitutive behavior observed in Figure 1(b). Moreover, the porosity predicted by Version 3 of the second-order method reduces to the exact result, as already pointed out in the previous section (see expressions (63) and ( 67)). On the other hand, the porosity delivered by Version 1 deviates from the exact evolution for large finite deformations, especially for tensile hydrostatic loading. In fact, under hydrostatic tension, Version 1, which does not take into account information about the field fluctuations, predicts unrealistic values for the porosity (i.e., greater than unity). This explains the extremely soft effective constitutive behavior observed in Figure 1 for Version 1 of the second-order method under hydrostatic tension. Also, in accordance with the trend discerned from Figure 1, it appears that the porosity evolution predicted by Version 1 gets worse with decreasing initial porosity. The main observation with regard to Figure 2(b) is the somewhat counterintuitive result that the porous material becomes more stable ( λcrit smaller) with increasing initial values of the porosity. In this connection, it is relevant to remark that while exact results are available for the effective stored-energy function and porosity evolution in in-plane hydrostatic loading of composite cylinders (with incompressible matrix phase), the loss of strong ellipticity of these structures has not been studied. 
However, Wang and Ertepinar [START_REF] Wang | Stability and vibrations of elastic thick-walled cylindrical and spherical shells subjected to pressure[END_REF] did study the stability of an isolated cylindrical Neo-Hookean shell under in-plane hydrostatic loading. Results of that work comprising the buckling flexural modes $n = 2$, which corresponds to the collapse to an oval shape, and $n = 3$ have been included in Figure 2(b) for reference purposes. It should be emphasized that the buckling behavior of an isolated shell cannot be rigorously identified with the buckling instabilities that would take place in an actual composite system, even for a composite with the Hashin composite-shell microstructure. Nevertheless, it is interesting to remark that these results appear to be consistent with the results derived from the second-order theory, in that the overall stability is enhanced with increasing initial porosity, at least initially. Moreover, it is noted that the critical stretches characterizing the loss of strong ellipticity predicted by both versions of the second-order variational procedure are smaller than the corresponding critical stretches associated with the buckling modes for an isolated cylindrical shell. Figure 2(b) also shows that for the interval $0 < f_o < 0.4$ the loss of strong ellipticity predicted by Version 3 is slightly smaller than the one obtained from Version 1. In contrast, for initial porosities higher than 0.4, the prediction of loss of strong ellipticity by Version 1 becomes smaller than the one computed from Version 3. The difference between the results of the two versions becomes more pronounced in the limit $f_o \to 1$, where $\bar{\lambda}_{crit} \to 0.73$ and $\bar{\lambda}_{crit} \to 0$ for Versions 1 and 3, respectively. We expect the estimate for the critical stretch associated with Version 3 to be more accurate, but we do not have an explanation for its relatively low values at high porosities.
However, it should be kept in mind that it is expected that other (smaller wavelength) instabilities would take place before reaching the loss of ellipticity condition. UNIAXIAL LOADING In Figure 3, plots for the effective stress-strain behavior associated with Versions 1 and 3 are presented for a Neo-Hookean porous elastomer with incompressible matrix phase (μ → ∞) under uniaxial loading with λ2 = 1, λ1 = λ. The results for the stress components S 11 and S 22 are presented in parts (a) and (b), respectively, for values of f o = 30, 50 and 70%, as functions of the logarithmic strain ē = ln( λ). Similar to the case of in-plane hydrostatic loading, the results for compressive (tensile) deformations shown in Figure 3 exhibit a clear hardening (softening) behavior with increasing deformation, but less pronounced than the corresponding results for in-plane hydrostatic loading. From Figure 3(a) it is seen that the effective constitutive behavior for S 11 obtained from Version 1 is significantly softer than the one obtained from Version 3. This is also the case for the component S 22 as shown by Figure 3(b). In fact, the prediction for this component of the stress by Version 1 is not only much softer than the corresponding stress predicted by Version 3, but it even decreases for tensile loadings reaching negative values, which is physically unrealistic. This suggests that the predictions of Version 1 could be too soft for large finite deformations, especially for tensile loadings. Furthermore, as it was the case for hydrostatic loadings, loss of ellipticity was found for compressive loadings but not for tensile ones. Figure 4 provides corresponding results for: (a) the porosity; and (b) the average aspect ratio of the pores ω, as function of the logarithmic strain ē = ln( λ). 
Note that the aspect ratio has been defined as ω = λ1 (2) / λ2 (2) , with λ1 (2) and λ2 (2) denoting the principal stretches associated with the average deformation gradient tensor of the vacuous phase F (2) , so that ω > (<) 1 corresponds to an oblate (prolate) average shape of the pores. As was the case for hydrostatic loading, it is seen from Figure 4(a) that the porosity decreases for compressive deformations and increases for tensile ones. In turn, this can be related to the aforementioned hardening/softening trend exhibited by the effective stress-strain behavior in Figure 3. As already anticipated in Section 5.3.2, Figure 4(a) also shows that the prediction for the evolution of the porosity by Version 3 of the second-order method is in agreement with the exact result, whereas the prediction by Version 1 deviates from the correct behavior for large deformations. This deviation, which is much more drastic for tensile loadings and lower values of f o , helps explain the unphysical behavior observed in Figure 3(b) for S 22 . In particular, it is seen that S 22 tends to negative values whenever f approaches one. Figure 4(b) shows that both versions of the second-order method give similar predictions for the average aspect ratio of the pores. Note that in compression the changes in aspect ratio are more rapid for smaller f o . It is concluded from the observations made in the context of these figures for uniaxial stretch, as well as the earlier figures for hydrostatic deformation, that Version 3 of the second-order method leads to more consistent predictions for the overall behavior and microstructure evolution of the porous elastomers, and therefore it should be preferred over Version 1. For this reason, in the following sections only results associated with Version 3 will be presented.
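Since the matrix is incompressible, its volume is conserved, which yields the exact plane-strain evolution law (1 − f) J̄ = 1 − f o with J̄ = λ̄1 λ̄2; we assume this standard volume-conservation identity coincides with the exact result the figures compare against. A minimal numerical sketch (Python; the function name is ours):

```python
def porosity_exact(l1, l2, fo):
    """Exact porosity for a porous elastomer with an incompressible matrix
    (plane strain): the matrix volume is conserved, so (1 - f)*J = 1 - fo,
    with J = l1*l2 the determinant of the average deformation gradient."""
    J = l1 * l2
    return 1.0 - (1.0 - fo) / J
```

Pure shear (J̄ = 1) leaves f = f o , and in-plane compression drives f to zero at J̄ = 1 − f o , consistent with the zero-porosity boundaries discussed in connection with Figure 7.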
PURE SHEAR LOADING

Figure 5 provides Version 3 second-order estimates for a Gent porous elastomer with incompressible matrix phase under pure shear ( λ1 = 1/ λ2 = λ). Results are shown for an initial porosity of 10% and values of the lock-up parameter J m = 42, 100, and J m → ∞, as functions of the logarithmic strain ē = ln( λ), for: (a) the effective stored-energy function compared with the Voigt upper bound; and (b) the evolution of the aspect ratio ω. First, it is observed from Figure 5(a) that the Version 3 estimates for the effective stored-energy function satisfy the rigorous Voigt upper bound. It is emphasized again that this bound is only helpful when considering isochoric loadings, like the one considered in this section, since it becomes unbounded otherwise. Note that no loss of ellipticity was detected at any level of deformation. In connection with the evolution of the microstructure, it is remarked that the porosity does not evolve under pure shear deformations. On the other hand, as clearly shown by Figure 5(b), the aspect ratio of the pores does increase fairly rapidly with increasing strains. Furthermore, note that ω appears to be insensitive to the value of the material parameter J m . Figure 6 presents plots of the corresponding results for the stress components: (a) S 11 ; and (b) S 22 . One of the main points that can be drawn from Figure 6 is the strong dependence of the effective stress-strain relation of the porous rubber on the lock-up parameter J m of the matrix phase. Interestingly, it can also be deduced from these figures that the evolution of the aspect ratio appears to have little effect on the effective constitutive behavior of the porous elastomer under pure shear. In order to aid the discussion of the results, the boundary at which the porosity vanishes has also been included in Figure 7. Note that once the zero-porosity boundary is reached, further compressive deformation of the material is not possible.
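To make the role of J m concrete, the sketch below evaluates the matrix stored energy under pure shear using the standard Gent form W = -(μ J m /2) ln(1 - (I1 - 3)/J m ); this is a common normalization, which we assume matches the one used here. It reduces to the Neo-Hookean energy as J m → ∞ and locks up as I1 - 3 → J m :

```python
import math

def gent_energy_pure_shear(lam, mu, Jm):
    """Stored energy of an incompressible Gent matrix under pure shear
    (l1 = 1/l2 = lam, unit out-of-plane stretch), standard Gent form:
    W = -(mu*Jm/2) * ln(1 - (I1 - 3)/Jm)."""
    I1m3 = lam**2 + lam**-2 - 2.0     # I1 - 3 for this loading
    if I1m3 >= Jm:
        return math.inf               # beyond the lock-up stretch
    return -0.5 * mu * Jm * math.log(1.0 - I1m3 / Jm)
```

Smaller J m stiffens the response at a given strain, consistent with the strong J m dependence observed in Figure 6.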
LOSS OF STRONG ELLIPTICITY

An interesting observation that can be made from Figure 7 is that the loci of points describing loss of strong ellipticity satisfy ē2 + ē1 < 0, which implies that a necessary condition for loss of strong ellipticity to occur is the existence of a compressive component in the state of deformation. Also, note that the predictions from both versions of the second-order method have roughly the same qualitative behavior; however, the results of Version 1 appear to be more restrictive than those of Version 3 for all initial values of porosity and loadings, with the exception of cases satisfying f o < 0.3, ē1 < 0, and ē2 < 0, for which the onset of loss of strong ellipticity of Version 3 precedes that of Version 1. Furthermore, it is interesting to note that the strongly elliptic (and non-elliptic) domains shown in Figure 7 are similar to the results obtained by Abeyaratne and Triantafyllidis [START_REF] Abeyaratne | An investigation of localization in a porous elastic material using homogenization theory[END_REF] for the loss of strong ellipticity of periodic porous elastomers with a nearly incompressible Neo-Hookean matrix phase. However, their results appear to be more restrictive than the ones obtained here. In particular, these investigators did find loss of (strong) ellipticity for deformations with ē2 + ē1 > 0, which includes pure shear loading. These discrepancies seem to be consistent with their periodic microstructure, as it is more susceptible to instabilities than the random microstructure utilized in this work. Another important point that deserves further comment is the trend followed by the onset of loss of ellipticity as a function of initial porosity. In effect, Figure 7 suggests that a Neo-Hookean porous elastomer with random and isotropic microstructure becomes more stable with increasing value of initial porosity.
This behavior is counterintuitive, as one might expect an elastomer to become more unstable with increasing porosity. However, this is a complex and difficult problem, which will be pursued in future work. Finally, it is interesting to remark that it was through the failure of the third (and equivalently fourth) condition of (71) that the porous elastomer with incompressible Neo-Hookean matrix phase lost strong ellipticity systematically. Indeed, whereas the evolution of the microstructure for compressive loadings led to the already-mentioned hardening of some of the components of the effective incremental modulus, it also led to the softening of the shear component L 1212 , which resulted in the overall loss of ellipticity of the porous elastomer. For completeness, it is noted that the corresponding domains of strong ellipticity for porous elastomers with incompressible Gent matrix phases are essentially identical to those shown in Figure 7. Indeed, the results predicted by the second-order theory indicate that the value of the lock-up parameter J m does not play a major role in estimating the onset of loss of ellipticity of porous elastomers with random and isotropic microstructures. In summary, the second-order estimates for the homogenized constitutive behavior of porous elastomers with isotropic, strongly elliptic, matrix phases have been found to admit loss of strong ellipticity at reasonable levels of deformation. This behavior has been linked to the evolution of the microstructure under finite deformations, which, depending on the specific loading conditions, was found to induce hardening or softening behavior resulting in the loss of strong ellipticity for the porous elastomer.
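In 2-D, checking strong ellipticity of the homogenized incremental moduli amounts to verifying that the (symmetrized) acoustic tensor A ik (n) = L ijkl n j n l is positive definite for every direction n, since m·A(n)·m = L ijkl m i n j m k n l . The following brute-force scan is a generic numerical sketch, not the paper's specialized conditions (71); all names are ours:

```python
import numpy as np

def strongly_elliptic(L, n_angles=720):
    """Check 2-D strong ellipticity of incremental moduli L[i,j,k,l]:
    L_ijkl m_i n_j m_k n_l > 0 for all unit m, n, i.e. the symmetrized
    acoustic tensor A_ik = L_ijkl n_j n_l is positive definite for all n."""
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        n = np.array([np.cos(t), np.sin(t)])
        A = np.einsum('ijkl,j,l->ik', L, n, n)
        if np.min(np.linalg.eigvalsh(0.5 * (A + A.T))) <= 0.0:
            return False
    return True

def isotropic_moduli(lam, mu):
    """Isotropic moduli L_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)."""
    d = np.eye(2)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)))
```

Driving the shear stiffness negative destroys ellipticity, mirroring the softening of L 1212 discussed above.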
Concluding Remarks

In this work, analytical estimates have been derived for the effective behavior of porous elastomers with random microstructure subjected to finite deformation, by means of an implementation of the second-order procedure of Lopez-Pamies and Ponte Castañeda [START_REF] Lopez-Pamies | Second-order estimates for the large-deformation response of particle-reinforced rubbers[END_REF][START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF]. It is emphasized that this homogenization technique, which is an extension of the variational method developed by Ponte Castañeda [START_REF] Castañeda | Second-order homogenization estimates for nonlinear composites incorporating field fluctuations[END_REF] in the context of viscoplastic materials, is applicable to a large class of hyperelastic composites including reinforced and porous rubbers. A key issue in the general framework of the second-order variational procedure is the scheme employed for the linearization of the constitutive relation of the hyperelastic phases in the composite. In this regard, it has been seen that the earlier tangent linearization proposed by Ponte Castañeda and Tiberio [START_REF] Castañeda | A second-order homogenization procedure in finite elasticity and applications to black-filled elastomers[END_REF] results in estimates for the effective stored-energy function that depend exclusively on the average fields of the constituent phases. On the other hand, the estimates associated with the generalized secant linearization scheme not only depend on the average fields, but also exhibit a direct dependence on the field fluctuations.
The difference between these two approaches has already been shown to be significant in the context of reinforced incompressible elastomers, where the incorporation of field fluctuations proved necessary to obtain the correct overall incompressibility constraint for these materials (see [START_REF] Lopez-Pamies | Second-order homogenization estimates incorporating field fluctuations in finite elasticity[END_REF]). Within the richer class of porous elastomers, the direct incorporation of field fluctuations into the computation of the effective behavior has turned out to be essential as well. Thus, by incorporating field fluctuations, Version 3 of the second-order method has been shown to lead to the exact evolution of the porosity in porous elastomers with incompressible, isotropic, matrix phases, under general plane strain loading. This is a remarkable result in view of the strong nonlinearity of the problem. Furthermore, for the particular case of hydrostatic loading, the effective constitutive estimates delivered by Version 3 exhibit excellent agreement with the available exact result, which can be related to the correct prediction of the porosity evolution. Unfortunately, no other exact results are available for the effective constitutive behavior of porous elastomers. However, based on the comparisons presented, it seems plausible that Version 3 of the second-order variational procedure should also be able to deliver accurate estimates for the homogenized behavior of porous elastomers under more general loading conditions. On the contrary, Version 1 of the second-order method, which only makes use of the average fields, delivers predictions for the evolution of the microstructure that deviate rapidly from the expected behavior at finite deformations, especially for tensile loadings.
The negative consequences of this deviation were made evident by the comparisons with the exact result for hydrostatic loading, where the Version 1 estimates, even though exact to third order in the infinitesimal strain, break down under large tensile deformations. A major result of this work is the strong influence of the microstructure evolution on the overall behavior of porous elastomers, in particular, through geometric hardening/softening mechanisms arising as a consequence of the evolution of the pore microstructure during a finite-deformation history. Indeed, it was seen that the decrease of the porosity during compressive deformations results in a significant hardening of the effective constitutive behavior of the porous elastomer. On the other hand, the increase of the porosity associated with tensile deformations leads to a pronounced softening. Finally, it has been shown that loss of strong ellipticity, corresponding to the possible development of shear-band instabilities, can take place in porous elastomers with random microstructures at physically realistic levels of compressive deformation. This is consistent with earlier findings by Abeyaratne and Triantafyllidis [START_REF] Abeyaratne | An investigation of localization in a porous elastic material using homogenization theory[END_REF] for porous systems with periodic microstructures. Indeed, in this work, we have been able to relate softening mechanisms associated with the evolution of the microstructure under finite deformations with the possible onset of macroscopic instabilities, even for materials with strongly elliptic matrix phases. These encouraging results for two-dimensional microstructures should provide ample motivation to carry out corresponding analyses for porous and other types of elastomeric composites with more general three-dimensional, random microstructures, where comparisons with appropriate experimental results should be feasible.
Next, under condition (A.2), the equations of order O(ε⁰) yield the relationship

$\alpha_2 + \alpha_1^2\beta_2 = \frac{\big(1+\alpha_1^2\big)\big({-1} - f_o + (1+f_o)\alpha_1^2 + (\bar\lambda_2 - \bar\lambda_1)\alpha_1\big)\,\mu}{\bar\lambda_1 - \alpha_1^2\,\bar\lambda_2}$, (A.3)

which determines the combination α₂ + α₁²β₂ in terms of α₁. Finally, the equations of order O(ε) derived from (49) are considered. Making use of relations (A.2) and (A.3) in these equations can be shown to lead to the following expressions:

$\alpha_3 + G_1(\alpha_1, \alpha_2)\,\beta_3 = G_2(\alpha_1, \alpha_2)$ (A.4)

and

$(\bar\lambda_2^2 + f_o^2 - 1)\,\alpha_1^4 + 2\big(\bar\lambda_1 + (f_o - 1)\bar\lambda_2\big)\,\alpha_1^3 + (\bar\lambda_2^2 - \bar\lambda_1^2)\,\alpha_1^2 - 2\big((f_o - 1)\bar\lambda_1 + \bar\lambda_2\big)\,\alpha_1 + 1 - f_o^2 - \bar\lambda_1^2 = 0$, (A.5)

where G₁ and G₂ are known functions of their arguments, too cumbersome to be included here. It is noted that (A.4) establishes a linear relationship between β₃ and α₃ analogous to the one established by equation (A.3) between β₂ and α₂. More importantly, (A.5) provides a fourth-order polynomial equation for the coefficient α₁ in terms of the initial concentration of pores f o and the applied loading as determined by λ̄₁ and λ̄₂. This equation is precisely the equation (51) given in the main body of the text, where for clarity of notation α₁ was denoted as u. It turns out that the leading order term of the effective energy (37) in the limit of incompressibility may eventually be characterized entirely in terms of the coefficient α₁. The final result is given by expression (50) in the text, where, as already pointed out, u must be identified with α₁. It is noted that for the particular case of hydrostatic loading, i.e., λ̄₂ = λ̄₁ = λ̄, a suitable limit must be taken in the above expressions. For this type of deformation, it is straightforward to show that λ̄₂⁽¹⁾ = λ̄₁⁽¹⁾, and hence that β₁ = α₁, β₂ = α₂, and β₃ = α₃. Now, making use of these relations together with the equation of order O(ε⁻¹) given by (A.2) leads to α₁ = 1.
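Equation (A.5), which is equation (51) of the main text, is a simple quartic in α₁ and is easy to solve numerically. The sketch below (function name ours) transcribes its coefficients as written above; for hydrostatic loading λ̄₁ = λ̄₂, the root α₁ = 1 is recovered, in agreement with the hydrostatic limit derived above:

```python
import numpy as np

def alpha1_roots(l1, l2, fo):
    """Roots of the quartic (A.5) for alpha_1; l1 and l2 are the applied
    macroscopic stretches, fo the initial porosity (coefficients
    transcribed from the text)."""
    c4 = l2**2 + fo**2 - 1.0
    c3 = 2.0 * (l1 + (fo - 1.0) * l2)
    c2 = l2**2 - l1**2
    c1 = -2.0 * ((fo - 1.0) * l1 + l2)
    c0 = 1.0 - fo**2 - l1**2
    return np.roots([c4, c3, c2, c1, c0])
```

Among the four roots, one would in practice select the admissible one, e.g. by continuity from the undeformed state.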
In turn, this result for α₁ makes the equation (A.3) (of order O(ε⁰)) be satisfied trivially, whereas the one of order O(ε) can be shown to render the following identities:

$\alpha_2 = \frac{(\bar\lambda - 1)\,\mu}{\bar\lambda + f_o - 1}$, $\alpha_3 = \frac{(\bar\lambda - 1)\big(3 - 5f_o + 2f_o^2 + (7f_o - 6)\bar\lambda + 3\bar\lambda^2\big)\,\mu^2}{2\,(\bar\lambda + f_o - 1)^3}$. (A.6)

Recognizing now that, under expression (A.1), hydrostatic loading, and α₁ = 1, the expansion of the second-order estimate (37) in the incompressibility limit can be written, to first order, as

$\widetilde W_I(\bar{\mathbf F}) = 2\,(1 - f_o)(\bar\lambda - 1)\,\alpha_2 + O(\epsilon)$, (A.7)

which, together with (A.6)₁, leads to the final result (52).

Appendix B. Incompressible Limit for a Neo-Hookean Porous Elastomer (Version 3)

In this appendix, a brief outline of the asymptotic analysis corresponding to the incompressibility limit associated with the second-order estimate (39) for a porous elastomer with a Neo-Hookean matrix phase is presented. As discussed in the main body of the text, only one of the roots derivable from this version of the second-order method has a physically consistent asymptotic behavior in the limit of incompressibility; the limit associated with this root is the one presented here. It is noted that the results obtained from the following asymptotic analysis have been checked to be in agreement with the full numerical solution. Based on numerical evidence from the results for general μ, an expansion is attempted in the limit as μ → ∞ of the form given in (B.1). First, it is remarked that for the particular case of a Neo-Hookean matrix phase one of the generalized secant equations (41) can be solved exactly for the variable L 1212 in terms of the other components of the modulus L 0 .
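For completeness, the Version 1 leading-order estimate under hydrostatic loading, obtained by combining (A.6)₁ with (A.7) above, can be evaluated directly (function name ours):

```python
def W_hydrostatic(lam, fo, mu):
    """Leading-order effective energy under in-plane hydrostatic loading,
    obtained by inserting alpha_2 from (A.6)_1 into (A.7):
    W = 2*(1 - fo)*mu*(lam - 1)**2 / (lam + fo - 1)."""
    alpha2 = (lam - 1.0) * mu / (lam + fo - 1.0)
    return 2.0 * (1.0 - fo) * (lam - 1.0) * alpha2
```

The energy vanishes in the undeformed state and, at fixed stretch, stiffens as the initial porosity decreases, which is the trend visible in Figure 1.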
This, together with the constraints (56), can be shown to result in the simplifications recorded in (B.2). Next, introducing relations (B.1) and (B.2) in the general expression (58) for the components of F̄⁽¹⁾ − F̄ can be shown to lead to the expansions (B.3). Prescriptions (B.4) through (B.8) can be shown to be sufficient to fully determine the first-order terms of all of the components of F̄⁽¹⁾ − F̄ and F̄⁽¹⁾; the final expressions may be written as in (B.9) and (B.10). At this point, it is important to remark that relations (B.9) and (B.10), by means of (B.6), ultimately depend on the variable n₂, which can be determined in closed form by solving the fourth-order polynomial equation (B.7). This is precisely the same equation as (61) given in the main body of the text, where for clarity of notation n₂ was relabelled as v. Under the above development, it is then straightforward to show that the leading order term of the expansion of the second-order estimate (39) in the limit of incompressibility may be expressed in closed form, as it ultimately depends on the coefficient n₂. The final explicit expression (in terms of the variable n₂ = v) is given by (60) in the text. Next, it is shown that the porosity associated with the second-order estimate (60) for a porous elastomer with an incompressible Neo-Hookean matrix phase reduces to the exact result (70). Given that an HS-type approximation is utilized in the homogenization process, the fields in the porous phase are assumed constant. This implies that the average change in volume of the porous phase is simply given by

$\bar J^{(2)} = \overline{\det(\mathbf F)}^{(2)} = \det\big(\bar{\mathbf F}^{(2)}\big) = \frac{\big(f_o\bar\lambda_1 - (1 - f_o)\,x_1\big)\big(f_o\bar\lambda_2 - (1 - f_o)\,y_1\big)}{f_o^2}$, (B.11)

where use has been made of the relation F̄ = (1 − f o )F̄⁽¹⁾ + f o F̄⁽²⁾. Expression (B.11) can now be used to compute the porosity associated with the second-order estimate (60) through the relation

$f = \frac{\bar J^{(2)}}{\bar J}\, f_o$, (B.12)

which, after some simplification, can be shown to reduce to the exact result (70).
Finally, it should be emphasized that this result has been proven to hold not only for Neo-Hookean porous elastomers but, more generally, for porous elastomers with incompressible isotropic matrix phases. Note that the factors of μ in the above expressions have been included for consistency with the results from Appendix B. However, these factors cancel out in equations (60) and (61), and therefore μ may be dropped from the above expressions.

Figure 1. Comparisons of the second-order estimates (Versions 1 and 3) with the exact results for the effective response of a porous rubber subjected to in-plane hydrostatic loading ( λ2 = λ1 = λ). The results correspond to an incompressible Neo-Hookean matrix phase with various initial concentrations f o of aligned cylindrical voids, and are shown as a function of the logarithmic strain ē = ln( λ). (a) The stored-energy function W ; and (b) the corresponding stresses S 11 = S 22 = S.

Figure 2. In-plane hydrostatic loading ( λ2 = λ1 = λ) of a porous rubber with an incompressible, Neo-Hookean matrix phase with various initial concentrations f o of aligned cylindrical voids. (a) The evolution of porosity f as predicted by Versions 1 and 3 of the second-order method compared with the exact result as a function of the logarithmic strain ē = ln( λ). (b) The critical stretches λcrit at which the loss of strong ellipticity of the homogenized elastomer takes place as a function of initial porosity f o . (This last plot also includes the critical loads for the first two buckling modes (n = 2 and 3) of a cylindrical shell [34].)

Figure 3. Versions 1 and 3 estimates of the second-order method for the effective response of a porous rubber subjected to uniaxial loading ( λ2 = 1 and λ1 = λ). The results correspond to an incompressible Neo-Hookean matrix phase with various initial concentrations f o of aligned cylindrical voids, and are shown as a function of the logarithmic strain ē = ln( λ).
(a) The stress component S 11 . (b) The stress component S 22 .

Figure 4. Versions 1 and 3 estimates of the second-order method for the effective response of a porous rubber subjected to uniaxial loading ( λ2 = 1 and λ1 = λ). The results correspond to an incompressible Neo-Hookean matrix phase with various initial concentrations f o of aligned cylindrical voids, and are shown as a function of the logarithmic strain ē = ln( λ). (a) The evolution of the porosity f compared with the exact result. (b) The evolution of the average aspect ratio of the voids ω.

Figure 5. Version 3 estimates of the second-order method for the effective response of a porous rubber subjected to pure shear loading ( λ1 = 1/ λ2 = λ). The results correspond to an incompressible Gent matrix phase with given initial porosity f o = 0.1 and various values of the material parameter J m , and are shown as a function of the logarithmic strain ē = ln( λ). (a) The effective stored-energy function W compared with the Voigt upper bound. (b) The evolution of the aspect ratio ω.

Figure 6. Version 3 estimates of the second-order method for the effective response of a porous rubber subjected to pure shear loading ( λ1 = 1/ λ2 = λ). The results correspond to an incompressible Gent matrix phase with given initial porosity f o = 0.1 and various values of the material parameter J m , and are shown as a function of the logarithmic strain ē = ln( λ). (a) The stress component S 11 . (b) The stress component S 22 .

Figure 7 displays the strongly elliptic (and non-elliptic) domains for the 2-D porous elastomer with incompressible Neo-Hookean matrix phase, subjected to in-plane deformations.
The results are shown in the plane ( ē1 -ē2 ) for: (a) Versions 1 and 3 estimates for initial porosities f o = 30, 50 and 70%; and (b) Version 3 estimates for initial porosities f o = 10, 20 and 30%.

Figure 7. Domains of strong ellipticity on the ( ē1 -ē2 )-plane for a porous elastomer with incompressible, Neo-Hookean, matrix phase and various levels of initial concentrations f o of aligned cylindrical voids, as determined by Versions 1 and 3 of the second-order variational procedure. The dotted lines denote the boundary at which the level of zero porosity has been reached upon compressive deformation. (a) Comparisons between the Versions 1 and 3 estimates; and (b) Version 3 estimates for low initial porosity.

$L_{1111} = a_1\epsilon^{-1} + a_2 + a_3\epsilon + O(\epsilon^2)$, $L_{2222} = b_1\epsilon^{-1} + b_2 + b_3\epsilon + O(\epsilon^2)$, $L_{1122} = c_1\epsilon^{-1} + c_2 + c_3\epsilon + O(\epsilon^2)$, (B.1) $L_{1212} = d_1\epsilon^{-1} + d_2 + O(\epsilon)$, $L_{1221} = e_1\epsilon^{-1} + e_2 + O(\epsilon)$, where
= 1/μ is a small parameter and a 1 ,a 2 , a 3 , b 1 , b 2 , b 3 , c 1 , c 2 , c 3 , d 1 , d 2 , e 1, and e 2 are unknown coefficients that ultimately depend on the applied loading F, the initial concentration of voids f o , and the material parameter μ. d 1 1 = 0, d 2 = μ, e 1 = a 1 b 1c 1 , and (B.2) e 2 = a 2 b 1 + a 1 b 2 -(a 1 + b 1 )μ 2 √ a 1 b 1 c 2 . 1 ) 1 + λ2 ( λ1 λ2 -1)((f o -1)n 2 + 2μ( λ1 + λ2 ))] a 1 [2(f o -1)n 2 λ1μ( λ1 + λ2 )((f o -3) λ1 -(1 + f o ) λ2 )],p 1 = x 1 y 1 + λ2 x 1 + λ1 y 1 + λ1 λ2 -1, (B.9)s 1 = μ( λ1 -λ2 ) 2 ( λ1 + λ2 ) 2 a 2 1 f o λ1 [2(f o -1)n 2 λ1μ( λ1 + λ2 )((f o -3) λ1 -(1 + f o ) λ2 )] 2 × a 1 f o μ( λ1 + λ2 ) 2 + (f o -1) λ2 ( λ1 λ2 -1) n 2μ( λ1 + λ2 ) × 2a 1 f o λ1 + λ2 (1 + f o -(1 + f o ) λ1 λ2 ) 1) 2 -λ2 )(L 1122 P 1111 + L 2222 P 1122 ) + ( λ(1) 1 -λ1 ) (L 1122 P 1122 -1)(1 + (f o -1)L 1122 P 1122 ) -(f o -1)(L 2222 + L 2 1122 P 1111 -L 1111 L 2222 P 1111 )P 2222 + L 1111 (P 1111 -(f o -1)L 2222 P 2 1122 ) f o P 1111 -(f o -1)L 2222 P 2 1122 + (f o -1)L 2222 P 1111 P 2222 S -(f o -1)(L 1111 + L 2 1122 P 2222 -L 2222 L 1111 P 2222 )P 1111 + L 2222 (P 2222 -(f o -1)L 1111 P 2 1122 ) f o P 2222 -(f o -1)L 1111 P 2 1122 + (f o -1)L 1111 P 2222 P 1111 S (1) 11 (1) 22 = 0 and f o ( λ(1) (1) 22 f o P 1122 + (f o -1)L 1122 P 2 1122 -(f o -1)L 1122 P 1111 P 2222 S 1 -λ1 )(L 1122 P 2222 + L 1111 P 1122 ) + ( λ(1) 2 -λ2 ) (L 1122 P 1122 -1)(1 + (f o -1)L 1122 P 1122 ) f o P 1122 + (f o -1)L 1122 P 2 1122 -(f o -1)L 1122 P 2222 P 1111 S (1) 4μ 3 λ1 ( λ1 + λ2 ) (-1 + f o ) 3 λ4 + f o ) 4 -(1 + f o (-4 + f o + f 2 + λ2 2f o (4 -4f o + f 3 o + (-3 + f o ) λ2 2 )) , r 0 = -μ 4 ( λ1 + λ2 ) 2 (-1 + f o ) 4 λ4 1 -(-1 + f o ) 2 λ3 1 (2f o (1 + f o ) -(-1 + f o ) λ2 1 ) λ2 + λ2 1 (-2 + f o (2 + 3f o + f 3 o -2(-3 + f 2 1 λ2 + λ1 λ2 2 (-1 + f 2 o + (1 + f o + f 2 o ) λ2 2 ) + λ3 1 ((-1 o )) λ2 2 ) + λ3 2 (-1 + f o (f o + λ2 2 )) + λ2 1 λ2 (1 o ) λ2 1 )) λ2 2 + λ1 (2f o (1 + f o ) 2 + (2 + f o (4 + f o + f 2 o )) λ2 1 ) λ3 2 + (1 + f o )(1 + f 
o + 2f o λ2 1 ) λ4 2 + (-1 + f o ) λ1 λ5 2 .

Acknowledgement

This work was supported by NSF grant DMS-0204617.

Appendix A. Incompressible Limit for a Neo-Hookean Porous Elastomer (Version 1)

In this appendix some details are presented concerning the incompressibility limit associated with the tangent second-order estimate (37) for a porous elastomer with a Neo-Hookean matrix. The asymptotic solution resulting from this heuristic derivation has been checked to be in agreement with the full numerical results. Motivated by the observed properties of the numerical solution for general μ, an expansion is attempted in the limit as μ → ∞ of the form given in (A.1), where ε = 1/μ is a small parameter, and α₁, α₂, α₃, β₁, β₂, and β₃ are unknown coefficients which ultimately depend on the applied loading F̄, the initial concentration of voids f o , and the material parameter μ. By making use of expressions (A.1) in relation (49), a hierarchical system of equations is obtained for the coefficients α₁, α₂, α₃, β₁, β₂, and β₃. The leading order terms O(ε⁻¹) of these equations can be shown to lead to the relationship (A.2), which implies that the determinant of F̄⁽¹⁾, denoted by J̄⁽¹⁾, is exactly equal to one in the incompressible limit.

$\bar F^{(1)}_{11} - \bar\lambda_1 = x_1 + x_2\epsilon + O(\epsilon^2)$, $\bar F^{(1)}_{22} - \bar\lambda_2 = y_1 + y_2\epsilon + O(\epsilon^2)$, (B.3) $\bar F^{(1)}_{12}\,\bar F^{(1)}_{21} = p_1 + p_2\epsilon + O(\epsilon^2)$, $(\bar F^{(1)}_{12})^2 + (\bar F^{(1)}_{21})^2 = \ldots$

The explicit expressions for the coefficients of these expansions have not been included here due to their bulkiness; however, it is useful to spell out their dependence on the variables introduced in (B.1).
Thus, the coefficients of first order x₁, y₁, … In connection with relations (B.3), it is necessary to clarify that the asymptotic expressions for the combinations F̄⁽¹⁾₁₂F̄⁽¹⁾₂₁ and (F̄⁽¹⁾₁₂)² + (F̄⁽¹⁾₂₁)² have been specified in (B.3), rather than those for the independent components F̄⁽¹⁾₁₂ and F̄⁽¹⁾₂₁, since, as discussed previously, they are the relevant variables in this problem. Now, by introducing expressions (B.1)-(B.3) into the three reduced (recall that L 1212 = μ) generalized secant equations (41), a hierarchical system of equations is obtained for the remaining unknown coefficients introduced in (B.1). Thus, the equations of first order O(ε⁻¹) lead to the following results: whereas the equations of second order O(ε⁰), by making use of (B.4), can be shown to render the following relations: where n₂ = λ̄₁a₂ − λ̄₂c₂, and q₂, q₁, q₀, r₄, r₃, r₂, r₁, and r₀ have been given in explicit form in Appendix C.

Appendix C. Coefficients Associated with the Incompressible Limit for a Neo-Hookean Porous Elastomer (Version 3)

In this appendix, the expressions for the coefficients introduced in relations (60) and (61) are given in explicit form in terms of λ̄₁, λ̄₂, f o , and μ:
On a new computational method for the simulation of periodic structures subjected to moving loads

Application to vented brake discs

Introduction

The brake is a major safety component of a car. In friction braking systems subjected to very severe conditions, various kinds of defects can appear: honeycomb cracking on the rubbing surface, cracks through the disc thickness, fracture of the elbow of the bowl, wear... A numerical model able to predict these phenomena is an alternative and a complement to expensive bench tests. The computational approach we develop consists of:

-new numerical strategies suitable for problems involving components subjected to moving loads;

-a relevant modelling of the behavior of the material;

-a modelling of the different damage phenomena undergone by the disc, which takes into account the multiaxial and anisothermal characteristics of the loads.

For the numerical determination of the thermomechanical state of the disc, it is essential to take into account the main couplings between the different phenomena, the transient character of the thermal history, the inelastic behavior of the material, the non-homogeneous thermomechanical gradients taking place in the disc, and the rotation of the disc. The use of classical finite element methods leads to excessive computational times. More precisely, one particularity of brake discs is that they are subjected to repeated thermomechanical rotating loads whose amplitude varies with the rotations. To simulate a whole braking, dozens of rotations are necessary, and dozens of load increments are needed to compute each rotation, so that the total computational time becomes prohibitive. To circumvent this difficulty, algorithms adapted to problems of structures subjected to moving thermomechanical loads have been developed.
The alternative approach consists of using the stationary methods, first proposed by Nguyen and Rahimian [START_REF] Nguyen | Mouvement permanent d'une fissure en milieu élastoplastique[END_REF] and later developed by Dang Van and Maitournam [START_REF] Maitournam | Formulation et résolution numérique des problèmes thermoviscoplastiques en régime permanent[END_REF][START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF], which make it possible to directly calculate the mechanical state of the structure after one rotation, or even the asymptotic state reached after repeated passes. These algorithms lead to important reductions of the computational time. They can be applied to structures whose geometries are generated by the translation or the rotation of a two-dimensional section. Many industrial applications, for instance railways [START_REF] Van | On some recent trends in modelling of contact fatigue and wear in rail[END_REF] and brake discs [NGU02A, NGU02B], have been treated with these methods. In a previous paper [START_REF] Nguyen-Tajan T | Une méthode de calcul de structures soumises à des chargements mobiles. Application au freinage automobile[END_REF], we treated only solid discs. Here, we consider the case of vented discs (figure 1). These structures have a periodic geometry, so that the stationary methods cannot be used in their previous formulations. The objective of the paper is to propose an extension of such methods to periodic structures subjected to moving loads, and to quickly determine the mechanical state after a given number of rotations by calculating the solution cycle by cycle. First, we present the principle of the method and its formulation in the case of an elastoplastic material with linear kinematic hardening. Then, an example of simulation of a vented disc is given.

The periodic stationary method

Overview of the stationary method

The stationary method was first proposed by Nguyen and Rahimian [START_REF] Nguyen | Mouvement permanent d'une fissure en milieu élastoplastique[END_REF]; Dang Van and Maitournam [MAI89, [START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF]] developed it in the case of repeated moving loads. The considered structures are generated by the translation or the rotation of a given 2D section. They are subjected to repeated moving thermomechanical loads. Quasi-static evolution and infinitesimal deformations are assumed. The objective of the pass-by-pass stationary algorithm is to directly determine the thermomechanical response of the structure after each pass of the moving loads. The stationary method relies on the following hypotheses:

-the amplitude and velocity of the loads remain constant during one load pass;

-in a reference frame attached to the moving loads, the thermomechanical quantities are stationary (steady-state assumption).

The idea is then to use this frame instead of the one related to the structure, and thus to use Eulerian coordinates. The steady-state assumption makes the problem time-independent. Internal variables are therefore directly calculated (without incremental resolutions in time) by integration along the streamlines, which are known since the hypothesis of small transformations holds: it is as if the constitutive law were non-local [START_REF] Maitournam | Formulation et résolution numérique des problèmes thermoviscoplastiques en régime permanent[END_REF][START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF]. In fact, we consider a continuous medium subjected to a thermomechanical load moving with a velocity V(t) relatively to a frame R = (O, e X , e Y , e Z ). We adopt the frame R′ = (O′, e x , e y , e z ) attached to the moving load.
The periodic stationary method

Overview of the stationary method

The stationary method was first proposed by Nguyen and Rahimian [START_REF] Nguyen | Mouvement permanent d'une fissure en milieu élastoplastique[END_REF]; Dang Van and Maitournam [MAI89,[START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF] developed it in the case of repeated moving loads. The considered structures are generated by the translation or the rotation of a given 2D section. They are subjected to repeated moving thermomechanical loads. Quasi-static evolution and infinitesimal deformations are assumed. The objective of the pass-by-pass stationary algorithm is to directly determine the thermomechanical response of the structure after each pass of the moving loads. The stationary method relies on the following hypotheses:
-the amplitude and velocity of the loads remain constant during one load pass;
-in a reference frame attached to the moving loads, the thermomechanical quantities are stationary (steady-state assumption).
The idea is then to use this frame instead of the one related to the structure, and thus to use Eulerian coordinates. The steady-state assumption makes the problem time-independent. Internal variables are therefore directly calculated (without time-incremental resolutions) by integration along the streamlines, which are known since the hypothesis of small transformations holds: it is as if the constitutive law were nonlocal [START_REF] Maitournam | Formulation et résolution numérique des problèmes thermoviscoplastiques en régime permanent[END_REF][START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF]. Formally, we consider a continuous medium subjected to a thermomechanical load moving with a velocity V(t) relative to a frame R = (O, e_X, e_Y, e_Z). We adopt the frame R' = (O', e_x, e_y, e_z) attached to the moving load.
In this frame, the structure is therefore moving with the velocity -V(t). The material derivative of a tensorial quantity B related to the material is given by:

Ḃ(x, t) = ∂B/∂t (x, t) + ∇_x B(x, t) · v(x, t), with v(x, t) = v_r(x, t) - V(x, t)

where x is the geometrical position of the material point in R', v the velocity of the material point relative to R', and v_r its velocity relative to R. Thanks to the hypothesis of infinitesimal transformations in the frame linked to the solid, the term v_r(x, t) is negligible compared to V(x, t). Hence, the expression of the material derivative of B becomes:

Ḃ(x, t) = ∂B/∂t (x, t) - ∇_x B(x, t) · V(x, t) [1]

The assumption of a steady state in a frame moving with the loads leads to a time-independent problem for which all time partial derivatives vanish. So the expression of the material derivative of B becomes:

Ḃ(x, t) = -∇_x B(x, t) · V(x, t) [2]

To numerically solve the problem, we use a frame moving with the loads and replace the material derivatives in the governing equations (equilibrium, constitutive law and boundary conditions) by the expressions given above.

The periodic stationary method

We consider a structure with a periodic geometry subjected to a repeated moving load. The load moves with a constant velocity V e_x. Its intensity is constant or varies periodically. Although we treat here a rotating load, we choose to present the method for a translating load; the two cases are formally identical, but the notations are lighter in the case of translation. By periodic structure (figure 1), we mean a structure which is generated by the translation or the rotation of an inhomogeneous material volume (a heterogeneous solid made of different materials or containing voids). An elementary heterogeneous volume is called a "cell". Figure 1.
The vented disc: an example of a periodic structure

Due to this geometrical and material inhomogeneity, the thermomechanical state of each cell depends on the relative position of the load over the cell. The steady-state assumption in the load reference no longer holds. On the other hand, the non-stationary behavior is assumed to be periodic. The solution method adopted relies on two main features:
-determination of the transient solution over a time period (the time necessary for the load to move along a cell);
-use of the steady-state assumption at the cell scale in the reference frame related to the moving load.
So the computations involve two stages: the trial transient elastic solution over a time period is first sought, and then integrations along the streamlines associated with the cells are performed (as in the "classical" stationary method). These two points are used in the formulation of the problem to be numerically solved: on each time interval of period T (T is the time necessary for the load to cover the distance X equal to the length of a cell along e_x), the response is variable; it is T-periodic in the load reference. In other words, in the load reference, for any physical quantity B and for any point x of the structure:

B(x, t) = B(x - X e_x, t - T) [3]

This equation allows the transport of the physical quantities from cell to cell along the streamlines, as in the case of the stationary method. On the other hand, for the transient regime (lasting the time necessary for the load to cover the length of a cell), one simply has Ḃ(x, t) = ∂B/∂t (x, t), and a method similar to the Large Time Increment Method [LAD96] is adopted.

Formulation of the periodic stationary method in elastoplasticity

From equations [2] and [3], we are able to propose a solution scheme for the periodic stationary method and give the discretized equations of the problem.
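Before detailing the discretized equations, relation [2] can be illustrated numerically. The sketch below (a self-contained check, not part of the original formulation) takes a scalar field that is steady in the load frame, B(X, t) = f(X - Vt) in the fixed frame, and verifies that its material derivative, obtained by time differencing at a fixed material point, coincides with -V ∂B/∂x evaluated in the load frame:

```python
import math

V = 2.0  # load velocity (an assumed value, arbitrary units)

def f(x):
    # profile attached to the load (a Gaussian bump)
    return math.exp(-x * x)

def B_fixed(X, t):
    # field seen in the fixed frame: steady in the load frame x = X - V t
    return f(X - V * t)

X, t, h = 0.7, 1.3, 1e-5

# material derivative at a fixed material point X (central time difference)
mat_dot = (B_fixed(X, t + h) - B_fixed(X, t - h)) / (2 * h)

# steady-state formula [2]: B_dot = -V dB/dx in the load frame
x = X - V * t
grad = (f(x + h) - f(x - h)) / (2 * h)
steady = -V * grad

assert abs(mat_dot - steady) < 1e-6
```

The agreement degrades only with the finite-difference step, which is the content of the steady-state assumption: following a material point backwards in time amounts to moving upstream along its streamline.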
We consider here a von Mises elastic-plastic material with linear kinematic hardening (modulus c) and a yield function f of the following form:

f(σ, cε^p) = √((dev σ - cε^p) : (dev σ - cε^p)) - k = ‖dev σ - cε^p‖ - k = ‖ξ‖ - k [4]

Let us recall that a steady state at the cell scale is assumed; the solution is then entirely determined by the knowledge of the response over one period (t ∈ [0, T]). This time interval is discretized into m instants corresponding to the number of positions of the load on a cell. Practically, we have the following two-stage algorithm. A global stage, consisting in calculating the elastic solution over the whole time interval [0, T] with given internal variables, is first carried out: in fact, one performs a discrete sequence of elastic calculations for all the positions of the load over a particular cell considered as the reference. It is followed by a local stage for the determination of the internal variables by integration along the streamlines, and this is done for all the instants of the interval [0, T]. This integration scheme, using the closest point projection, is detailed in the following. We denote by j the position of the load on the reference cell, j ∈ {1, ..., m}, and by ε^p_j the plastic deformation of the current point of the cell (n). (.)_j denotes quantities at a point of the cell (n), for the jth position of the load, while (.)_j(n-1) denotes quantities at the homologous point of the preceding cell (n-1), for the jth position of the load.
Within the periodic stationary method, the plastic deformation ε^p is calculated as follows.

If j ≠ 1, we define ξ*_j = dev σ_(j-1) - cε^p_(j-1) + 2μΔ(dev ε)_j; then:
-if ‖ξ*_j‖ > k, plastification occurs, so ε^p_j = ε^p_(j-1) + 1/(2μ+c) (1 - k/‖ξ*_j‖) ξ*_j;
-if ‖ξ*_j‖ ≤ k, no plastification occurs, so ε^p_j = ε^p_(j-1).

If j = 1, the state is transported from the last load position (j = m) of the preceding cell: we define ξ*_1 = dev σ_m(n-1) - cε^p_m(n-1) + 2μΔ(dev ε)_1; then:
-if ‖ξ*_1‖ > k, plastification occurs, so ε^p_1 = ε^p_m(n-1) + 1/(2μ+c) (1 - k/‖ξ*_1‖) ξ*_1;
-if ‖ξ*_1‖ ≤ k, no plastification occurs, so ε^p_1 = ε^p_m(n-1).

Figure 2. Calculated positions of the load

This algorithm has been implemented in the code Castem 2000 for a von Mises elastic-plastic material with linear kinematic hardening.

Application to a vented disc

In this section, we present an application of the periodic stationary method to the vented disc (figure 1). Instead of making a realistic simulation of a braking, in which thermal effects are prominent, we choose to illustrate the different kinds of results that can be obtained with this method in the case of purely mechanical problems. The dimensions of the disc are the following: external radius R_e = 133 mm, internal radius of the rubbing surfaces R_i = 86.5 mm, thickness of these surfaces e = 13 mm. The disc constitutive material is cast iron, assumed to be, at room temperature, a von Mises elastic-plastic material with linear kinematic hardening. Its characteristics are: Young modulus E = 110000 MPa, Poisson coefficient ν = 0.3, yield limit in traction σ_y = 90 MPa, hardening modulus h = 90000 MPa. Figure 3 shows the adopted mesh and the applied loading. The loading consists in two distributions of Hertzian pressure (prescribed at the contact zones between the disc and the pads) with a maximum pressure equal to 500 MPa.
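The local stage described above (elastic predictor followed by projection onto the yield surface) can be sketched in Python; tensors are stored as flattened 3×3 lists, mu, c and k denote the shear modulus, the kinematic hardening modulus and the yield radius, and the numerical values used below are illustrative only (this is a sketch, not the Castem 2000 implementation):

```python
import math

def dev(t):
    """Deviatoric part of a 3x3 tensor stored as a flat 9-element list."""
    tr = (t[0] + t[4] + t[8]) / 3.0
    d = t[:]
    d[0] -= tr; d[4] -= tr; d[8] -= tr
    return d

def norm(t):
    return math.sqrt(sum(x * x for x in t))

def return_map(dev_sig_prev, eps_p_prev, d_dev_eps, mu, c, k):
    """One closest-point-projection update with linear kinematic hardening:
    trial state xi* = dev(sigma)_prev - c eps_p_prev + 2 mu d(dev eps),
    plastic correction if ||xi*|| exceeds the yield radius k."""
    xi = [s - c * ep + 2.0 * mu * de
          for s, ep, de in zip(dev_sig_prev, eps_p_prev, d_dev_eps)]
    n = norm(xi)
    if n <= k:                               # elastic step: no plastification
        return eps_p_prev[:]
    factor = (1.0 - k / n) / (2.0 * mu + c)  # plastic corrector
    return [ep + factor * x for ep, x in zip(eps_p_prev, xi)]

# illustrative update: a uniaxial strain increment applied to a virgin point
zeros = [0.0] * 9
d_eps = dev([2e-3, 0, 0, 0, 0, 0, 0, 0, 0])
ep = return_map(zeros, zeros, d_eps, mu=42000.0, c=90000.0, k=90.0)
```

For j = 1, the same function is simply called with the state of the last load position of the preceding cell, which is how equation [3] transports the internal variables from cell to cell along the streamlines.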
This pressure is greater than those encountered during real brakings; here, as thermal effects are not taken into account, a higher pressure is chosen simply to illustrate the capabilities of the method, which is of interest only in inelastic cases. Eight load positions are used for the simulation. They are represented with different colors in figure 3. Thanks to the periodicity of the solution in the cells "away" from the load, the mesh is truncated; only five cells are represented.

Figure 3. Hertzian pressure distributions on the two surfaces of the vented disc: the eight considered load positions are represented

The evolutions of the equivalent plastic deformations during the first pass of the loading are shown in figure 4 for the eight positions of the load. The load moves anticlockwise. One can notice that the solution is transient and depends on the relative position of the loading with respect to the cell. On the first of figures 4, we observe a plastic deformation induced behind the moving load. In figure 5, one can see the equivalent plastic deformations during the first pass of the loading. The calculation of five successive passes of the loading shows that the mechanical stabilized state is reached quickly. The stabilized state is defined by the periodicity of the mechanical response at any material point. In figure 7, we plot the evolution of the equivalent plastic deformations along a constant-radius streamline: one can then see that the stabilized state is a plastic shakedown. One can also notice that, behind the load, the plastic deformation becomes periodic, with a period equal to the length of a cell (0.08 m). In figure 6, we show the equivalent stresses in the limit state obtained for the fifth pass of the loading.

Conclusion

In this paper, an extension of the stationary methods to periodic inelastic structures subjected to repeated moving loads is presented.
It is based on the hypothesis of the periodicity of the response in a reference frame moving with the loads, and on the use of an approach similar to the Large Time Increment Method. It permits the direct determination of the mechanical state during a whole pass of the loading, and therefore the determination of the limit state in the case of repeated moving loads. Even though only the capabilities of this method have been shown here, it is clear that it considerably reduces the computational times and accordingly allows the simulation of complicated structures such as vented discs, and the determination of the asymptotic state of such structures subjected to repeated loads.

Figure 4. Equivalent plastic deformations during the movement of the load along a cell
Figure 5. Equivalent plastic deformations during the first pass of the loading
Figure 6. Equivalent stresses in the limit state (fifth pass of the loading)
Figure 7. Evolution of equivalent plastic deformations along a streamline for the five first passes of the loading: case of a plastic shakedown

Mac Lan Nguyen-Tajan, PSA Peugeot Citroën, Direction de la Recherche, Route de Gisy, F-78943 Vélizy-Villacoublay, [email protected]
Habibou Maitournam, LMS, UMR 7649, Ecole Polytechnique, F-91128 Palaiseau cedex, [email protected]
Luis Maestro, intern at ENSTA

ABSTRACT. The purpose of the paper is to present a new numerical method suitable for the computation of periodic structures subjected to repeated moving loads. It derives directly from the stationary methods proposed for cylindrical and axisymmetrical structures. Its main features are the use of a calculation frame related to the moving loads and the periodic property of the thermomechanical response. These methods are developed by PSA and the Ecole Polytechnique in order to design vented brake discs. In this paper, a brief description of the algorithm is first given, and examples of numerical simulations of a vented brake disc are treated.

KEYWORDS: periodic steady-state algorithm, cyclic moving load, brake disc, thermomechanics.
2018
https://hal.science/hal-01756172/file/1803.10828.pdf
Raul Ramos, Diego Scoca, Rafael Borges Merlo, Francisco Chagas Marques, Luiz Fernando Alvarez Fernando Zagonel (email: [email protected])

Study of nitrogen ion doping of titanium dioxide films

Keywords: nitrogen ion doping, titanium dioxide, Anatase, transparent conducting oxide, diffusion, electronic transport

This study reports on the properties of nitrogen-doped titanium dioxide (TiO2) thin films considering the application as a transparent conducting oxide (TCO). Sets of thin films were prepared by sputtering a titanium target under an oxygen atmosphere on a quartz substrate at 400 or 500°C. The films were then doped at the same temperature by 150 eV nitrogen ions. The films were prepared in the Anatase phase, which was maintained after doping. Up to 30 at.% nitrogen concentration was obtained at the surface, as determined by in situ x-ray photoelectron spectroscopy (XPS). Such a high nitrogen concentration at the surface led to nitrogen diffusion into the bulk, which reached about 25 nm. Hall measurements indicate that the average carrier density reached over 10¹⁹ cm⁻³, with mobility in the range of 0.1 to 1 cm² V⁻¹ s⁻¹. A resistivity of about 3×10⁻¹ Ω·cm could be obtained with 85% light transmission at 550 nm. These results indicate that low-energy implantation is an effective technique for TiO2 doping that allows an accurate control of the doping process independently from the TiO2 preparation. Moreover, this doping route seems promising to attain high doping levels without significantly affecting the film structure. Such an approach could be relevant for the preparation of N:TiO2 transparent conducting electrodes (TCEs).

Introduction

The increasing demand for energy efficiency and cost-effectiveness in display and energy technologies pushes continuously the search for new materials to surpass industry standards.
In flat panel displays, light-emitting devices and some solar cells, efficiency is linked to the performance of the transparent conducting electrodes (TCEs) used, which allow front electrical contacts while simultaneously letting visible light in or out of the device. While tin-doped indium oxide (ITO) is the industry-standard TCE, it presents a high cost linked to indium scarcity. Most alternatives available today, such as ZnO or SnO2, have interesting niche applications. New and better TCEs, both in terms of cost and efficiency, are of great interest for several wide or niche applications. Since titanium oxide (TiO2) doped with niobium was proposed as a TCE by Hasegawa's group, see refs. [1,2], several studies have discussed the optical and electrical properties of TiO2 doped with Nb, Ta, W, and N [3,4,5,6]. Moreover, it has been shown that, by doping TiO2 with nitrogen, it is possible to reduce its optical gap and favor catalytic activity with visible light, with considerable interest for water splitting, among other applications [7,8]. Since then, several studies have explored the properties of nitrogen-doped titanium oxides (mainly Anatase and Rutile) prepared in various ways with respect to their optical, catalytic and transport properties. Nitrogen-doped TiO2 has already been synthesized by reactive sputtering and by post-treatments with ammonia or by ion implantation, among others. Using electron cyclotron resonance plasma sputtering under O2 and N2 gases, H. Akazawa showed that it is possible to continuously control the carrier concentration, and obtained films with a resistivity of 0.2 Ω·cm and a maximum transparency in the visible of about 80%, but the films were frequently amorphous, while large crystalline grains might favor better conductivity at similar transparency [4,6]. Using reactive d.c. magnetron sputtering in an Ar+O2+N2 gas mixture, N. Martin et al. obtained about 25 Ω·cm with about 30% transmission, depositing TiNxOy.
Again, in this study, the crystal structure was difficult to control and, besides Rutile and Anatase, even Ti3O5 was observed [9]. In another work, J.-M. Chappé et al., also using d.c. reactive magnetron sputtering, prepared TiOxNy films with a visible light transmittance ranging from very low to nearly 80%, with a resistivity ranging from 10⁻³ Ω·cm to 50 Ω·cm and with a complex crystal structure where Anatase was not the majority phase present [10]. Given the inherent difficulty in independently controlling composition and crystal structure/quality in reactive sputtering, splitting the process in two parts is an interesting alternative. In this approach, Anatase or Rutile samples can be prepared and doped a posteriori with nitrogen by, for instance, ion implantation or NH3 gas [11,12,13]. Using this approach, H. Shen et al. showed that implantation with 200 eV nitrogen ions successfully doped Anatase nanoparticles and enhanced the photocatalytic efficiency without changing the crystal structure [13]. Considering that ion doping of TiO2 could be relevant for TCE preparation, in this work we deposited Anatase thin films at 400 and 500°C, which were then doped by low-energy nitrogen ion implantation at 150 eV from a simple laboratory ion gun. Such a process allows doping pure Anatase thin films with controllable nitrogen amounts and following closely their properties. By heating the sample during the ion implantation, we could allow the diffusion of nitrogen from the surface into the bulk, thus developing a dopant profile. The results indicate that low-energy ion implantation is an effective technique for doping TiO2 thin films.

Experimental Methods

Sample preparation was performed in two sequential steps. First, a Ti target was sputtered using an ion beam (ion beam deposition) to grow a thin film on an amorphous quartz substrate. Argon was used as the inert gas for bombarding the Ti target at an energy of 1.5 keV. During the deposition, a partial pressure of 2.5×10⁻² Pa of oxygen was maintained (the chamber base pressure was about 2×10⁻⁴ Pa).
Such a partial pressure results in Anatase films with well-defined x-ray diffraction peaks [14]. During the deposition, the Argon partial pressure in the chamber was about 1×10⁻² Pa. In a second step, nitrogen ions were implanted at low energy, 150 eV, into the thin film surface. The ions were produced in a Kaufman cell fed with 5 sccm of nitrogen and 0.5 sccm of hydrogen, resulting in 2.1×10⁻² Pa and 2.1×10⁻³ Pa partial pressures in the chamber, respectively (Argon and oxygen were not used in this step). Such sample preparation was performed in a custom-built system that features two ion guns (one pointing to a sputter target and the other to the sample holder) in one vacuum chamber that is directly connected to another chamber for in situ X-ray Photoemission Spectroscopy analysis. More details of the deposition system and its capabilities can be found in references [15] and [16]. Hydrogen was used in analogy with ref. [17] (see also references therein) to remove oxygen from the surface and make it more reactive for incoming nitrogen. Indeed, the formation enthalpy favors TiO2 over TiN [18] and, in principle, residual oxygen gas and water vapor in the vacuum chamber could keep the surface partially oxidized, preventing nitrogen intake. The ion gun points perpendicularly to the sample surface and is located about 30 cm from the sample. The samples were prepared at different substrate temperatures (for both steps), 400 and 500°C, and with different implantation times: 0, 10, 30 and 60 minutes. The film thickness was evaluated by profilometry and ranges from 70 to 100 nm. Just after preparation, the samples were analyzed in situ in UHV by X-ray Photoemission Spectroscopy (XPS) using Al Kα radiation. Spectra were fitted using the Avantage software. The average inelastic mean free path in Anatase for kinetic energies of 900-1100 eV is estimated at about 2 nm [19]. X-ray diffraction (XRD) was performed using Cu Kα radiation, keeping the incidence angle at 1°.
In this geometry, the average penetration depth is estimated as 0.05 μm for Anatase [START_REF] Noyan | Residual Stress: Measurement by Diffraction and Interpretation[END_REF]. Sheet resistance was measured using the four-probe technique. Mobility, resistivity and carrier concentration were determined by Hall measurements using the van der Pauw method in an Ecopia-3000 device with a 0.55 T permanent magnet. For the Hall measurements, indium was used to provide ohmic contacts. Optical transparency measurements were performed in an Agilent 8453 device, which uses a CCD detector. A resistance versus temperature measurement was performed for the sample deposited at 500°C and implanted for 60 minutes (at the same temperature). The measurement was carried out in a CTI Cryodine closed-cycle helium refrigerator in the temperature range from 80 K to 300 K. The electrical data were acquired using a Keithley model 2602A SourceMeter and the indium contacts previously used for the Hall measurement, in the van der Pauw geometry. A constant current of 10 μA was applied between two contacts and the voltage was measured between the other two, in a parallel configuration. A complete thermal loop was carried out to confirm the reproducibility of the data. The samples' morphology was studied by Atomic Force Microscopy (AFM) and Transmission Electron Microscopy (TEM). TEM analysis was performed in a JEOL 2100F TEM equipped with a Field Emission Gun (FEG) operating at 200 kV with an energy resolution of about 1 eV. EELS was obtained using a Gatan GIF Tridiem installed in this TEM, and Gatan Digital Micrograph routines were used for quantification. The data were acquired in Scanning Transmission Electron Microscopy (STEM) mode in the form of spectrum lines (the electron beam is focused on the sample and a spectrum is acquired for each position along a line, forming a bidimensional dataset).
Topographic images of the samples' surfaces were taken with a Bruker Innova Atomic Force Microscope (AFM) in non-contact mode.

Results and Discussion

Composition and structural characterization

The effectiveness of the ion implantation at 150 eV was demonstrated by the presence of large amounts of nitrogen at the surface, as observed by in situ XPS. Figure 1 shows XPS spectra for the sample prepared at 500°C and implanted for 60 minutes, with similar results for all other samples. The main features observed by XPS are those expected for TiO2 and for TiN. In situ XPS spectra of TiO2 samples grown at 400 and 500°C (not shown) are similar to those in refs. [14] and [START_REF] Scoca | [END_REF], typical of Anatase films close to stoichiometry. It must be noted that the absolute binding energies are not accurately known due to some degree of uncontrolled spectral shift, which is attributed to sample charging. From the indicated decomposition into several proposed chemical bonds, it is possible to observe that the nitrogen concentration is similar to that of oxygen and that a TiOxNy alloy was created at the surface (TiN and TiO2 components are observed in the Ti 2p spectrum). It is noteworthy in Figure 1(c) that two XPS peaks are present for N 1s: a smaller one at higher binding energy, and a larger one close to 396 eV (which is in turn composed of two peaks). The smaller and larger peaks are attributed to interstitial nitrogen and substitutional nitrogen, respectively [12,13,22,23]. In our case, interstitial N accounts for about 10% of the total amount of nitrogen observed on the surface, a much lower value when compared to ref. [8], which used NH3 as the nitrogen source, or ref. [13], which used 200 eV nitrogen ions without hydrogen. Therefore, depending on the incorporation route, different chemical locations are possible. This difference is relevant since, depending on the nitrogen site, different diffusion mechanisms apply [12].
To evaluate whether hydrogen was significantly affecting the N 1s spectra, a sample was prepared without hydrogen gas during the implantation step. The N 1s peaks for samples prepared with and without hydrogen at 500°C and implanted for 60 minutes are shown in Figure 2. The spectra show that even without hydrogen the peak is still present, and with a similar (although smaller) ratio with respect to the main N 1s peak. This shows that it does not depend on the presence of hydrogen during the nitriding process, in contrast to a suggestion in the literature [24]. To support the given interpretation of the XPS results, SRIM simulations have been performed [25,[START_REF]The Stopping and Range of Ions in Matter[END_REF]. The simulations considered nitrogen ions (N+) on Anatase. The average penetration depth is about 0.9 nm, while 90% of the implanted nitrogen ions reach depths within 1.7 nm. These simulations indicate that XPS probes exactly the implanted region and hence is suitable to investigate the nitrogen intake by the sample from the ion beam. Following the procedures detailed in references [START_REF] Scofield | [END_REF] and [28], we used the XPS results to calculate the elemental concentrations of the surface components. The results are shown in Figure 3 for samples prepared at 400 and at 500°C with implantation times ranging from 0 to 60 minutes. It is observed that in the first few minutes of implantation a significant nitrogen concentration builds up at the surface, and afterwards the concentration increases further to reach about 33 at.%. This is explained by the high reactivity of the nitrogen ion beam and the low diffusion coefficient of nitrogen into the interior of the thin film.
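The quantification step mentioned above reduces, in its simplest form, to weighting each core-level peak area by a sensitivity factor, C_i = (A_i/S_i) / Σ_j (A_j/S_j). The sketch below uses hypothetical peak areas and Scofield-type factors for illustration only; they are not the calibrated values of the cited procedures:

```python
def atomic_fractions(areas, sensitivity):
    """Relative atomic concentrations from XPS peak areas and
    sensitivity factors: C_i = (A_i / S_i) / sum_j (A_j / S_j)."""
    weighted = {el: areas[el] / sensitivity[el] for el in areas}
    total = sum(weighted.values())
    return {el: w / total for el, w in weighted.items()}

# hypothetical peak areas (arbitrary units) and sensitivity factors
areas = {"Ti 2p": 5.4, "O 1s": 6.6, "N 1s": 2.7}
sens = {"Ti 2p": 1.8, "O 1s": 2.2, "N 1s": 1.5}
conc = atomic_fractions(areas, sens)
```

With these illustrative inputs, the nitrogen fraction comes out at about 23 at.%; the actual concentrations of Figure 3 come, of course, from the measured areas and calibrated factors.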
In this scenario, we consider that a high nitrogen concentration builds up during the first moments and is maintained by the ion beam, creating a high nitrogen chemical potential at the surface. This nitrogen concentration is the driving force for nitrogen diffusion into the thin film. The extent of the diffusion depends mainly on the temperature, but also on several details of the thin film microstructure, such as vacancies, grain boundaries, stress, and so on. This process is in close analogy to the plasma or ion beam nitriding of steels at low temperatures, where nitrogen diffusion is also slow [29,30]. It is important to note that, for both studied temperatures and with sufficient nitriding time, the surface builds a titanium oxynitride alloy with a stoichiometry close to TiO1N1 (note that this result applies only to the outer 2 nm of the thin film surface). It is also interesting that the sample prepared without hydrogen had a nitrogen concentration of only 25 at.%, while the sample prepared with hydrogen in the same conditions (500°C, 60 minutes) had 33 at.% of nitrogen at the surface. Again, this indicates that hydrogen may favor oxygen removal, opening sites for nitrogen chemical adsorption and reaction, even if nitrogen arrives at the surface at 150 eV and the sample is in high vacuum. X-ray diffractograms are shown in Figure 4 for samples prepared at 400 and 500°C without nitrogen implantation and implanted for 10, 30 and 60 minutes, as before. It is observed that all samples display peaks associated with the Anatase phase, with considerable intensity, indicating the crystalline nature of the thin films. Moreover, implantation with nitrogen and hydrogen does not disturb the crystal structure, similarly to what has been reported in the literature for 200 eV nitrogen implantation into Anatase thin films [13].
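As a quick consistency check on the diffractograms (a sketch assuming Cu Kα radiation, λ = 1.5406 Å), Bragg's law λ = 2d sin θ converts a peak position into a lattice spacing:

```python
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha (assumed for this sketch)

def d_spacing(two_theta_deg):
    """Lattice spacing from a diffraction peak position via Bragg's law."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

# reference Anatase (101) position, 2-theta = 25.33 deg (ICSD 9852)
d101 = d_spacing(25.33)  # about 3.51 angstrom
```

A shift of the measured (101) position away from the reference angle would therefore translate directly into a change of the (101) interplanar spacing.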
The position of the Anatase (101) peak remains within (25.30±0.05)° for all diffractograms, while the reference Anatase (101) reflection is expected at 25.33° according to ICSD 9852. It must be noted that the samples are kept at the deposition temperature during the implantation and are therefore annealed, which could affect their crystalline structure. It is also noteworthy that the peak ratios do not agree with the expected values for Anatase powder, and hence the films should have some texture [31]. Transmission Electron Microscopy was applied to determine the extent of nitrogen diffusion into the film from the surface. For that, we considered only the sample prepared at 500°C and implanted for 60 minutes, since the other samples would have a shallower nitrogen penetration depth. Figure 5 shows a cross-section of the sample. Apart from the amorphous quartz substrate and the protective coating used for FIB lamella preparation, the N-implanted TiO2 thin film can be observed as two layers: on top, a layer that has apparently been modified by the implantation/diffusion process, and at the bottom, the pristine Anatase film. HRTEM images indicate the presence of atomic planes and grains from the bottom to the top of the thin film, again confirming that the Anatase film preserved its crystal structure even after the shallow nitrogen implantation (shallower than 2 nm according to the SRIM simulations) and subsequent diffusion. Electron energy loss spectroscopy was used to detect nitrogen and determine its profile in the sample cross-section. Figure 6(a) shows the profiles of nitrogen, oxygen and titanium from the surface to the interior of the thin film. The detection of nitrogen was difficult due to sample damage: apparently, the electron beam removed nitrogen during beam exposure.
For this reason (and also because diffraction and thickness effects were not taken into account), the results in Figure 6(a) may underestimate the original concentration (due to damage even in reduced-dose measurements) or carry other systematic errors (due to the other mentioned effects). However, they can be considered semi-quantitatively to measure the diffusion depth. In Figure 6(a), a complementary error function fit is added to the nitrogen profile as a thin line. Despite the noise, it is clear that nitrogen is detected down to 25 nm or so (where the estimated nitrogen concentration decreases to 10% of its surface value). The presence of nitrogen is also clear in the fine structure of the titanium and oxygen absorption edges, shown in Figure 6(b). Again, a transition from one edge shape to the other is observed around 30 nm from the surface. Moreover, it is interesting to note that the obtained nitrogen profile did not affect the crystal structure as observed by HR-TEM (Figure 5(b)); that is, no amorphous layer was found despite the observed nitrogen concentration. Indeed, in some studies, the presence of nitrogen in reactive sputtering leads to amorphous N:TiO2 films [4]. Atomic force microscopy was used to gather a broader picture of the growth and also a clearer view of the surface before and after ion implantation. The average surface roughness is about 1 nm for all samples, indicating a smooth growth of the TiO2 Anatase thin film by reactive sputtering, and also that implantation at 150 eV does not induce surface roughening. Illustrative results, for samples grown at 400°C without implantation and implanted for 30 minutes, are shown in Figure 7. Without implantation, the surface shows small grains of about 40-60 nm in diameter, but the height difference from peak to valley is just about 3 nm. After implantation, crystal grains are partially revealed, as indicated by arrows in Fig.
7(b), and their diameter is in fact about 200 to 300 nm, a result more consistent with the TEM observations. Since we consider that neither the ion implantation at 150 eV nor the annealing time changed the crystal structure, grains should have had diameters in the hundreds of nanometers from the beginning of the deposition, but ion polishing was necessary to reveal the actual grains, due to preferential sputtering of different crystal orientations [32].

From the XRD, XPS, TEM and AFM results, it is possible to form the following picture of the implantation process: the ion beam is highly reactive and, with the help of hydrogen, creates a high surface chemical potential which is, together with the process temperature, the driving force for nitrogen diffusion into the thin TiO2 film. Considering that a roughly constant nitrogen concentration builds up in the first minutes, a diffusion coefficient of nitrogen in Anatase at 500°C can be calculated as approximately 2×10⁻⁸ m² s⁻¹, considering the solution of Fick's second law for a constant surface concentration. R. G. Palgrave et al. found similar values (2.53×10⁻⁸ m² s⁻¹) at 675°C for rutile and also, analyzing very accurate nitrogen concentration profiles, reported different diffusion coefficients, indicating more than one diffusion route [12]. Moreover, they showed that interstitial nitrogen diffuses much faster than substitutional nitrogen. In this case, the quantitative concentration of nitrogen obtained by EELS should be compared to the interstitial nitrogen concentration, which, from our in situ XPS results, is about 3 at.%, meaning a better agreement between the EELS nitrogen concentration near the surface and the XPS results. It must be noted that, since only about half to one third of the film is actually doped, the average nitrogen concentration is probably closer to 1-2 at.%.
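The complementary-error-function description of the concentration profile used above can be illustrated on synthetic data. This is a sketch, not the authors' fitting code: the surface concentration, diffusion length and noise level below are invented for illustration. It fits the constant-surface-concentration solution c(x) = c_s·erfc(x/L) and reads off the depth at which the concentration falls to 10% of its surface value, mirroring how the 25 nm figure was defined:

```python
import math
import random

def erfc_profile(x_nm, c_surface, L_nm):
    """Constant-surface-concentration diffusion profile: c(x) = c_s * erfc(x / L)."""
    return c_surface * math.erfc(x_nm / L_nm)

# Synthetic noisy "EELS-like" profile with a known diffusion length (illustrative values)
random.seed(0)
true_cs, true_L = 3.0, 18.0  # at.% and nm
xs = [2.0 * i for i in range(25)]  # sampling depths, 0..48 nm
data = [erfc_profile(x, true_cs, true_L) + random.gauss(0.0, 0.1) for x in xs]

# Fit the diffusion length L by grid search, fixing the surface value to the x = 0 reading
cs_fit = data[0]
best_L = min(
    (sum((erfc_profile(x, cs_fit, L) - c) ** 2 for x, c in zip(xs, data)), L)
    for L in (0.5 * k for k in range(2, 101))
)[1]

# Depth where concentration drops to 10% of the surface value: erfc(z) = 0.1 -> z ~ 1.163
depth_10pct = 1.163 * best_L
print(f"fitted L = {best_L:.1f} nm; 10% depth = {depth_10pct:.1f} nm")
```

Despite the added noise, the grid search recovers the known diffusion length within a couple of nanometers, which is why the noisy EELS profile can still yield a meaningful diffusion depth.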
Electrical characterization

Very generally, the effect of nitrogen implantation and diffusion into the Anatase thin film can be monitored by four-probe electrical resistivity measurements. Such results, converted into sheet resistance, are shown in Figure 8(a). It is observed that the sheet resistance drops by 7 orders of magnitude and reaches 54.7 kΩ/□. These results are in close agreement with the resistivity, ρ, as measured by the van der Pauw method using indium contacts, shown in Figure 8(b). The film resistivity could be as low as 3×10⁻¹ Ω·cm, for the sample prepared at 500°C and implanted for 60 minutes (average nitrogen concentration about 1-2%). This resistivity is about 10-fold lower than that reported for TiO1.88N0.12 (4 at.% of nitrogen) prepared by plasma-assisted molecular beam epitaxy [33]. Moreover, the presented results are very similar to N:TiO2 prepared by electron cyclotron resonance and by reactive sputtering in refs [4,6] (with slightly lower light transmittance, see below) but higher than TiO2 doped with Nb, Ta or W, which may show resistivities much lower than 10⁻² Ω·cm [5,34,35]. In Figures 8 and 9 the symbols cover the estimated uncertainty bars.

Mobility and carrier concentration results are shown in Figure 9. The highest carrier concentration is observed for the sample implanted at 500°C for 60 minutes and reaches up to 6×10¹⁹ cm⁻³. The measured mobility values are always lower than 1 cm² V⁻¹ s⁻¹ (and just above 0.1 cm² V⁻¹ s⁻¹), which is much lower (by at least one or even two orders of magnitude) than usual TCOs like ITO or SnO2 [36]. Such mobility is however similar to that reported for Nb-doped TiO2 [34]. Note that resistivity and carrier concentration values are calculated considering the full film thickness and, as the EELS nitrogen profile showed (Figure 6(a)), the nitrogen concentration is far from homogeneous along the thin film.
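The transport quantities quoted above can be cross-checked with the standard relations ρ = R_s·t and ρ = 1/(q·n·μ). The sketch below uses the numbers from the text (90 nm film thickness, 54.7 kΩ/□ sheet resistance, 6×10¹⁹ cm⁻³ carrier density) purely as a consistency check:

```python
q = 1.602e-19  # elementary charge, C

# Values quoted in the text for the 500°C / 60 min sample
sheet_resistance = 54.7e3      # ohm per square
thickness = 90e-9              # m, full film thickness
carrier_density = 6e19 * 1e6   # 6e19 cm^-3 converted to m^-3

resistivity = sheet_resistance * thickness              # ohm*m
resistivity_ohm_cm = resistivity * 100.0                # ohm*cm, same order as 3e-1
mobility = 1.0 / (q * carrier_density * resistivity)    # m^2 V^-1 s^-1

print(f"resistivity ~ {resistivity_ohm_cm:.2f} ohm*cm")
print(f"mobility    ~ {mobility * 1e4:.2f} cm^2 V^-1 s^-1")
```

The resulting mobility, about 0.2 cm² V⁻¹ s⁻¹, indeed falls between the 0.1 and 1 cm² V⁻¹ s⁻¹ bounds stated in the text, and the resistivity is of the same order as the quoted 3×10⁻¹ Ω·cm.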
If one takes into account that nitrogen is present in about one third of the film (30 nm instead of 90 nm), then the carrier concentration in that region would be on average about 2×10²⁰ cm⁻³ (it should be higher close to the surface). Such a corrected carrier concentration becomes similar to values reported in the literature, such as 3×10²⁰ cm⁻³ for Ta:TiO2 and 10²¹ cm⁻³ for Nb:TiO2 [35,34]. Industry-standard TCEs have again similar carrier concentrations, such as 1.5×10²⁰ cm⁻³ for FTO, or about 10²¹ cm⁻³ for ITO and AZO [37,38,39]. Similarly, the sheet resistance (or resistivity) in the doped region would be threefold smaller than the thin-film average. Moreover, the nitrogen concentration gradient may explain the low mobility, since the region most relevant to the electrical measurements has a higher carrier concentration, which in turn may reduce mobility.

The temperature dependence of the resistance is shown in Figure 10(a) for the sample prepared at 500°C and implanted for 60 minutes. A very small hysteresis was observed. The failure to fit the data in an Arrhenius plot (ln R vs. 1/T) shows that the resistance is not governed by thermal activation, contrary to plasma-assisted molecular beam epitaxy N:TiO2 samples that contained mostly substitutional nitrogen [33]. However, log R scales linearly with T⁻¹/², as shown in Figure 10(b). This suggests that the conduction mechanism close to room temperature is variable range hopping (VRH). For this regime the resistivity should follow:

ρ(T) = ρ₀ exp[(T₀/T)ᵖ], (1)

where p = 1/4 for Mott (Mott-VRH) [40] and p = 1/2 for Efros and Shklovskii (ES-VRH) [41]. Both mechanisms were observed in ion-implanted TiO2 single crystals [42] and disordered TiO2 thin films [43,44] over a wide temperature range. To determine which of the mechanisms is dominant in our sample we used the method proposed by Zabrodskii and Zinoveva [45] to obtain the exponent p, where w(T) = -∂log(R)/∂log(T) and log(w) = log(p·T₀ᵖ) - p·log(T).
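The Zabrodskii-Zinoveva procedure can be illustrated numerically: generate resistance data obeying R(T) = R₀ exp[(T₀/T)ᵖ] with a known p, compute w(T) by finite differences, and recover p from the slope of log w versus log T. The parameters below are illustrative and not fitted to the measured data:

```python
import math

p_true, T0, R0 = 0.5, 4000.0, 10.0  # ES-VRH exponent; T0 and R0 are illustrative
temps = [200.0 + 5.0 * i for i in range(25)]  # 200..320 K
lnR = [math.log(R0) + (T0 / T) ** p_true for T in temps]
lnT = [math.log(T) for T in temps]

# w(T) = -d ln R / d ln T, estimated by central finite differences
log_w = [math.log(-(lnR[i + 1] - lnR[i - 1]) / (lnT[i + 1] - lnT[i - 1]))
         for i in range(1, len(temps) - 1)]
log_T = lnT[1:-1]

# Least-squares slope of log w versus log T; the slope equals -p
n = len(log_T)
mx, my = sum(log_T) / n, sum(log_w) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(log_T, log_w))
         / sum((x - mx) ** 2 for x in log_T))
p_recovered = -slope
print(f"recovered p = {p_recovered:.3f}")  # ~0.5
```

On the synthetic ES-VRH data the method recovers p very close to 1/2, which is the same analysis applied to the measured curve in the inset of Figure 10(b).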
By plotting log(w) versus log(T) we can find the value of the exponent p from the slope of the curve. As depicted in the inset of Figure 10(b), for T > 235 K the curve is fitted with p = 0.488 ± 0.007, very close to the value expected for the ES-VRH conduction mechanism. For lower temperatures, the data diverge, indicating a change in the conduction mechanism. Further study is necessary to understand this behavior but is outside the scope of this work.

Optical Characterization

The UV-Vis-NIR light transmission spectra for the studied samples are shown in Figure 11. The transmission spectra for amorphous quartz (used as substrate) and undoped Anatase prepared at 400 and 500°C are also shown. Interference fringes are observed for the thin films and the maximum transmission is in the range from 500 to 600 nm (green). It is observed that undoped Anatase has a transmission maximum very similar to amorphous quartz and that doping the thin films lowers the transmission from about 90% to 85% (with respect to air). Such transmission is better than some literature results for doped TiO2 with similar resistivity, as indicated above [4,6,34]. The transmission curves were simulated (not shown) using the method described in ref. [46] and the general shape is very well described considering only the thickness and refractive indexes of the film and substrate. Transmission spectra measured further into the IR, up to 3000 nm (not shown), are still featureless, with only one absorption region near 2720 nm due to the quartz substrate.

Absorption spectra can be used to determine the optical band-gap. Taking into account that Anatase has an indirect band-gap and following the procedure indicated in [47], the optical band-gap can be obtained by plotting the square root of the absorption coefficient as a function of energy and extrapolating the absorption edge at high energies [48].
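The extrapolation for an indirect gap can be sketched on synthetic data: for α ∝ (E - E_g)², the square root of α is linear in E and crosses zero at E_g. The absorption prefactor and energy range below are invented for illustration, with the gap deliberately set to the value found for these films:

```python
E_g = 3.29  # eV, the gap built into the synthetic data
energies = [3.35 + 0.01 * i for i in range(30)]        # eV, above the absorption edge
alpha = [1.0e4 * (E - E_g) ** 2 for E in energies]     # indirect-gap form: alpha ~ (E - Eg)^2

sqrt_a = [a ** 0.5 for a in alpha]
# Least-squares line through (E, sqrt(alpha)); its x-intercept estimates Eg
n = len(energies)
mx, my = sum(energies) / n, sum(sqrt_a) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(energies, sqrt_a))
         / sum((x - mx) ** 2 for x in energies))
intercept = my - slope * mx
gap_estimate = -intercept / slope
print(f"extrapolated gap = {gap_estimate:.2f} eV")  # recovers 3.29 eV
```

Because the indirect-gap absorption is exactly quadratic in (E - E_g), the linear extrapolation of √α recovers the gap, which is the procedure applied to the measured spectra in Figure 12.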
Figure 12 shows the results for two extreme cases: the undoped Anatase thin film prepared at 500°C and the film nitrogen-implanted for 60 minutes, also prepared at 500°C. The optical band-gap in both cases is about 3.29 eV, in agreement with the Anatase value [47,49]. This indicates that the doped region does not significantly affect the overall light absorption edge. Similar results were obtained for all other samples (not shown). Such a result agrees with literature reports indicating that gap narrowing is related to substitutional nitrogen, which in our case could be restricted to the surface. Interstitial nitrogen, on the other hand, does not reduce the band-gap [12,50,51].

Shelf stability

Finally, the shelf stability was evaluated by measuring the resistivity, mobility and carrier concentration over an interval of several days for the sample implanted for 30 minutes at 500°C (without any surface protection/coating). The results, shown in Figure 13, indicate that the film largely maintains its resistivity, with only a 40% increase, despite the fact that TiO2 formation is thermodynamically favored with respect to TiN. It is observed that the resistivity increases slightly from (1.8±0.2) to (2.5±0.3) Ω·cm, see Figure 13(a). This change is accompanied by a decrease in carrier density and an increase in mobility, as shown in Figures 13(b) and (c). Such changes could be due to surface oxidation that would displace nitrogen and reduce its concentration. However, oxygen diffusion would be too slow to displace nitrogen deeper in the thin film. The stability of this N:TiO2 film, even when not subjected to heat or UV light, is interesting with respect to the literature [52]. Further study is needed to compare the stability of N:TiO2 to that of other TCOs [53].
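The quoted stability figures are internally consistent, as a one-line check shows (values read from Figure 13(a)):

```python
# Resistivity values read from Figure 13(a), in ohm*cm
rho_initial, rho_aged = 1.8, 2.5
increase = (rho_aged - rho_initial) / rho_initial
print(f"resistivity increase ~ {increase:.0%}")  # ~40%, matching the text
```

Since ρ = 1/(q·n·μ), the net resistivity increase also confirms that the observed drop in carrier density outweighs the gain in mobility.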
Conclusions

In summary, a comprehensive study showed that, by implanting 150 eV nitrogen and hydrogen ions into Anatase films, it is possible to build ~33% nitrogen surface concentrations which drive nitrogen diffusion into the volume of the film. For the sample prepared at 500°C and implanted for 60 minutes, the film resistivity could be as low as 3×10⁻¹ Ω·cm while the transparency at 550 nm is about 85%. In this case, nitrogen diffusion could reach about 25 to 30 nm into the thin film. Note that 150 eV nitrogen ions are readily available with simple laboratory ion guns. The proposed two-step deposition and doping technique could, as planned, provide Anatase thin films with a nitrogen-doped zone near the surface. Moreover, carrier densities and conductivities similar to other established TCOs could be obtained. These results show the effectiveness of nitrogen diffusion into the Anatase film from the surface, due to the obtained nitrogen surface concentration and the applied temperature. Clearly, it was not possible to dope the whole film with nitrogen or to create a homogeneously doped sample at 500°C with 60 minutes of implantation. However, by properly adjusting the implantation time, the desired thin-film properties could be obtained. Indeed, the presented results indicate that, by doping for longer times until the whole thin film is doped, it could be possible to obtain resistivities lower than 10⁻¹ Ω·cm. This study supports the interpretations that interstitial nitrogen has a higher binding energy in XPS, that it diffuses faster in Anatase (with respect to nitrogen in substitutional sites) and that it does not affect the Anatase optical band-gap. Finally, low-energy ion doping using simple ion guns can be applied to Anatase prepared by other means, even colloidally synthesized nanoparticles. We speculate that the high reactivity of low-energy nitrogen ions, combined with hydrogen ions, could also be effective in other kinds of Anatase samples.
Figure 1: In situ X-ray photoemission spectra from the sample grown at 500°C and implanted (at the same temperature) for 60 minutes with 150 eV nitrogen ions. The spectra include decomposition into components for the different expected chemical bonds in the sample.

Figure 2: X-ray photoemission spectra for samples prepared with and without hydrogen in the gas feed to the ion gun. The peak associated with nitrogen in interstitial sites is observed in both samples.

Figure 3: Nitrogen, oxygen and titanium concentrations obtained by in situ XPS for samples prepared at 400 and 500°C for several implantation times. Lines are a guide to the eye. The nitrogen concentration reaches about 33 at.%.

Figure 4: Diffractograms from grazing-incidence X-ray diffraction for TiO2 films prepared at 400 and 500°C and implanted for the time indicated in each curve. Anatase lines are indicated at the bottom according to ICSD 9852.

Figure 5: TEM cross-section micrograph from the sample prepared at 500°C and implanted for 60 minutes. (a) A modified region at the surface is observed. (b) HR-TEM shows atomic planes from small grains from bottom to top of the thin film. The inset shows a SAED pattern obtained nearby on a region with larger grains.

Figure 6: (a) Nitrogen, oxygen and titanium semi-quantitative profiles determined by STEM-EELS on a cross-section lamella. A complementary error function fit was added to the nitrogen profile. (b) Averaged EEL spectra of the titanium 2p and oxygen 1s absorption edges indicating the difference between the upper (nitrogen-doped) layer and the bottom (pristine) TiO2.

Figure 7: AFM images of the surface of samples implanted for 0 minutes (undoped) and 30 minutes, both prepared at 400°C, shown in (a) and (b), respectively. Ion implantation partially reveals the grains by preferential sputtering. Arrows in (b) indicate grain boundaries.
Figure 8: (a) Sheet resistance and (b) resistivity for all samples, measured by four-probe and van der Pauw methods, respectively. The results are in close agreement.

Figure 9: Hall mobility and carrier concentration measured by the van der Pauw method. Carrier density is shown in open symbols while solid symbols show carrier mobility.

Figure 10: (a) Resistance versus temperature for the film prepared at 500°C and implanted for 60 minutes. A small hysteresis was observed. (b) Logarithmic plot of resistance versus T⁻¹/² showing a linear dependence (blue line) at high temperature. The inset shows the double-log plot of w(T) versus T and the value of the exponent p, as determined from the slope of the curve.

Figure 11: Light transmission for samples prepared at 500 and 400°C. The amorphous quartz substrate and an undoped Anatase thin film are also shown.

Figure 12: Square root of the absorption coefficient for undoped and 60-minute N-doped Anatase films prepared at 500°C. In both cases, the gap is about 3.29 eV.

Figure 13: Resistivity, carrier concentration and mobility during shelf storage of the sample nitrided for 30 minutes and prepared at 500°C.

Acknowledgments

Part of this work was supported by FAPESP, projects 2014/23399-9 and 2012/10127-5. TEM experiments were performed at the Brazilian Nanotechnology National Laboratory (LNNano/CNPEM).
https://ensta-bretagne.hal.science/hal-01698582/file/Relating%20Student%2C%20Teacher%20and%20External%20Assessments%20in%20a%20Bachelor%20Capstone%20Project.pdf
Keywords: process assessment, competencies model, capstone project

The capstone is arguably the most important course in any engineering program because it provides a culminating experience and is often the only course intended to develop non-technical, but essential, skills. In a software development program, the capstone runs from requirements to qualification testing. Indeed, the project progress is sustained by software processes. This paper presents different settings in which students, teachers and third-party assessors performed [self-]assessment, and analyses the corresponding correlation coefficients. The paper also presents some aspects of the bachelor capstone. The research question asks whether an external process assessment can be replaced or completed with students' self-assessment. Our initial findings were presented at the International Workshop on Software Process Education Training and Professionalism (IWSPETP) 2015 in Gothenburg, Sweden, and we aimed to improve the assessment using teacher and third-party assessments. Revised findings show that, for topics related to the curriculum, student and teacher assessments are correlated, but that external assessment is not well suited to an academic context.

Introduction

Project experience for graduates of computer science programs has the following characteristic in the ACM Computer Science Curricula [START_REF]Computer Science Curricula -Curriculum Guidelines for Undergraduate Degree Programs in Computer Science[END_REF]: "To ensure that graduates can successfully apply the knowledge they have gained, all graduates of computer science programs should have been involved in at least one substantial project. […] Such projects should challenge students by being integrative, requiring evaluation of potential solutions, and requiring work on a larger scale than typical course projects. Students should have opportunities to develop their interpersonal communication skills as part of their project experience."
The capstone is arguably the most important course in any engineering program because it provides a culminating experience and is often the only course used to develop non-technical, but essential, skills [START_REF]The glossary of education reform[END_REF]. Many programs run capstone projects in different settings [START_REF] Dascalu | Computer science capstone course senior projects: from project idea to prototype implementation[END_REF][START_REF] Umphress | Software process in the classroom: the Capstone project experience[END_REF][START_REF] Karunasekera | Preparing software engineering graduates for an industry career[END_REF][START_REF] Vasilevskaya | Assessing Large-Project Courses: Model, Activities, and Lessons Learned[END_REF][START_REF] Bloomfield | A service learning practicum capstone[END_REF][START_REF] Goold | Providing process for projects in capstone courses[END_REF]. The capstone project is intended to provide students with a learning-by-doing approach to software development, from requirements to qualification testing. Indeed, the project progress is sustained by software processes. Within the ISO/IEC 15504 series and the ISO/IEC 330xx family of standards, process assessment is used for process improvement and/or process capability determination. Process assessment helps students to be conscious of, and improve, what they are doing. Hence, a capstone teacher's activity is to assist students with appreciation and guidance, a task that relies on the assessment of students' practices and students' products. This paper presents different settings in which students, teachers and third-party assessors performed [self-]assessment, and analyses the correlation coefficients. Incidentally, the paper presents some aspects of the bachelor capstone project at Brest University. Data collection started 3 years ago. Initial findings were presented in [START_REF] Ribaud | Process Assessment Issues in a Bachelor Capstone Project[END_REF].
The paper is structured as follows: Section 2 overviews process assessment, Section 3 presents the different settings in which we carried out process assessments, and we finish with a conclusion.

2 Process assessment

Process Reference Models

Most software engineering educators will agree that the main goal of the capstone project is to learn, by doing, a simplified cycle of software development through a somewhat realistic project. For instance, Dascalu et al. use a "streamlined" version of a traditional software development process [START_REF] Dascalu | Computer science capstone course senior projects: from project idea to prototype implementation[END_REF]. Umphress et al. state that using software processes in the classroom helps in three ways: 1 - processes describe the tasks that students must accomplish to build software; 2 - processes can give the instructor visibility into the project; 3 - processes can provide continuity and corporate memory across academic terms [START_REF] Umphress | Software process in the classroom: the Capstone project experience[END_REF]. Consequently, exposure to some kind of process assessment is considered a side-effect goal of the capstone project. It is a conventional assertion that assessment drives learning [START_REF] Dollard | Personality and psychotherapy; an analysis in terms of learning, thinking, and culture[END_REF]; hence process assessment drives the learning of processes. Conventionally, a process is seen as a set of activities or tasks converting inputs into outputs [START_REF]Systems and software engineering --Software life cycle processes[END_REF]. This definition is not well suited to process assessment. Rout states that "it is of more value to explore the purpose for which the process is employed. Implementing a process results in the achievement of a number of observable outcomes, which together demonstrate achievement of the process purpose [START_REF] Rout | The evolving picture of standardisation and certification for process assessment[END_REF]."
This approach is used to specify processes in a Process Reference Model (PRM). We use a small subset of the ISO/IEC 15504-5 model.

Ability model

From an individual perspective, the ISO/IEC 15504 Exemplar Process Assessment Model (PAM) is seen as a competencies model related to the knowledge, skills and attitudes involved in a software project. A competencies model defines and organizes the elements of a curriculum (or a professional baseline) and their relationships. During the capstone project, all the students use the model and self-assess their progress. A hierarchical model is easy to manage and use. We kept the hierarchical decomposition issued from the ISO/IEC 15504 Exemplar PAM: process groups - processes - base practices and products. A competency model is decomposed into competency areas (mapping to process groups), each area corresponding to one of the main divisions of the profession or of a curriculum. Each area organizes the competencies into families (mapping to processes). A family corresponds to the main activities of the area. Each family is made of a set of knowledge and abilities (mapping to base practices), called competencies; each of these entities is represented by a designation and a description. The ability model and its associated tool eCompas have been presented in [START_REF] Ribaud | Towards an ability model for SE apprenticeship[END_REF].

Process assessment

The technique of process assessment is essentially a measurement activity. Within ISO/IEC 15504, process assessment has been applied to a characteristic termed process capability, defined as "a characterization of the ability of a process to meet current or projected business goals" [START_REF]Information technology --Process assessment --Part 5: An exemplar software life cycle process assessment model[END_REF].
It is now replaced in the 330xx family of standards by the larger concept of process quality, defined as the "ability of a process to satisfy stated and implied stakeholders needs when used in a specific context" [START_REF]Information technology --Process assessment --Concepts and terminology[END_REF]. In ISO/IEC 33020:2015, process capability is defined on a six-point ordinal scale that enables capability to be assessed from the bottom of the scale, Incomplete, through to the top end of the scale, Innovating [START_REF]Information technology --Process assessment --Process measurement framework for assessment of process capability[END_REF]. We see Capability Level 1, Performed, as an achievement: through the performance of necessary actions and the presence of appropriate input and output work products, the process achieves its purpose and outcomes. Hence, Capability Level 1 is the goal and the assessment focus. If students are able to perform a process, it denotes a successful learning of software processes, and teachers' assessments rate this capability. Because we believe that learning is sustained by continuous, self-directed assessment, complemented by teachers or a third party, the research question asks how students' self-assessments and teachers' assessments are correlated, and whether self-assessment of BPs and WPs is an alternative to external assessment of ISO/IEC 15504 Capability Level 1. Obviously, the main goal of assessment is students' ability to perform the selected set of processes.

3 The Capstone Project Overview

Schedule

The curriculum is a 3-year Bachelor of Computer Science. The project is performed during two periods. The first period is spread over the whole semester and homework is required. The second period (2 weeks) happens after the final exams and before the students' internship.
Students are familiar with the Author-Reader cycle: each deliverable can be reviewed as many times as needed by the teacher, who provides students with comments and suggestions. It is called Continuous Assessment in [START_REF] Karunasekera | Preparing software engineering graduates for an industry career[END_REF][START_REF] Vasilevskaya | Assessing Large-Project Courses: Model, Activities, and Lessons Learned[END_REF].

System Architecture

The system is made of 2 sub-systems: PocketAgenda (PA) for address book and agenda management and the interface with a central directory; WhoIsWho (WIW) for managing the directory and a social network. PocketAgenda is implemented with Java and JSF, relying on an Oracle RDBMS. WhoIsWho is implemented in Java using a RDBMS. Both sub-systems communicate through a protocol, to be established by the students, over UDP. The system is delivered in 3 batches. Batch 0 established and analyzed requirements. Batch 1 performed collaborative architectural design, separate client and server development, and integration. Batch 2 focused on information system development.

Students consent

Students were advised that they could freely participate in the experiment described in this paper. The class contains 29 students; all agreed to participate; 4 did not complete the project and do not take part in the study. Students have to regularly update the competencies model, consisting of the ENG process group, the 6 processes above and their Base Practices and main Work Products, and self-assess on an achievement scale: Not - Partially - Largely - Fully. There were also teacher and third-party assessments that were anonymously joined to self-assessments by volunteer students.

Batch 0: writing and analyzing requirements

Batch 0 is intended to capture, write and manage requirements through use cases. It is a non-technical task not familiar to students.
In [START_REF] Bloomfield | A service learning practicum capstone[END_REF], requirements are discussed as one of the four challenges for capstone projects. Students use an iterative process of writing and reviewing by the teacher. Usually, 3 cycles are required to complete the task. Table 1 presents the correlation coefficient r between student and teacher assessments for ENG.4 Software requirements analysis. It relies on 3 BPs and 2 WPs. Table 1 also presents the average assessment for each assessed item. The overall correlation coefficient relates 25 * 6 = 150 self-assessment measures with the corresponding teacher assessment measures; its value r = 0.64 indicates a correlation. Thanks to the Author-Reader cycle, specification writing iterated several times during the semester, and the final mark given to almost all 17-8 Interface requirements and 17-11 Software requirements documents was Fully Achieved. Hence the correlation between student and teacher assessments of these documents is complete. However, students mistook the documents' assessment for that of BP1: Specify software requirements. Documents were improved through the Author-Reader cycle, but only reflective students improved their practices accordingly. Also, students did not understand ENG.4 BP4: Ensure consistency and failed its self-assessment. Most students did not take any interest in traceability and self-assessed at a much higher level than the teacher did. A special set of values can bias a correlation coefficient; if we remove the BP4: Ensure consistency assessment, we get r = 0.89, indicating an effective correlation. However, a bias still exists because students mostly self-assess using the continuous feedback they got from the teacher during the Author-Reader cycle. Students reported that they wrote use cases from a statement of work for the first time and that they could not have succeeded without the Author-Reader cycle.
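The sensitivity of the correlation coefficient to a single divergent item, noted above for BP4, is easy to reproduce. The sketch below codes the Not/Partially/Largely/Fully scale as 0-3 and computes Pearson's r with and without one strongly divergent item; the scores are invented for illustration and are not the study data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

scale = {"N": 0, "P": 1, "L": 2, "F": 3}
# Invented self- vs. teacher assessments over six items; the last item diverges strongly
students = [scale[c] for c in "FLLFPF"]
teachers = [scale[c] for c in "FLLFPN"]

r_all = pearson(students, teachers)
r_trimmed = pearson(students[:-1], teachers[:-1])
print(f"r with divergent item:    {r_all:.2f}")
print(f"r without divergent item: {r_trimmed:.2f}")
```

Removing the single divergent item moves r from about 0.28 to 1.00, the same qualitative jump as the 0.64 to 0.89 change reported when dropping BP4. Note also that if an assessor gives every student the same mark, the denominator is zero and r is undefined, which is the bias discussed for the third-party assessments below.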
Batch 1: a client-server endeavor

For batch 1, students have to work closely in pairs, to produce the architectural design and interface specification and to integrate the client and server sub-systems, each sub-system being designed, developed and tested by one student. Defining the high-level architecture and producing the medium- and low-level design are typical activities of the design phase [START_REF] Dascalu | Computer science capstone course senior projects: from project idea to prototype implementation[END_REF]. 4 pairs failed to work together and split; consequently, these lonesome students worked alone and had to develop both sub-systems. We were aware of two biases: 1 - students interpret the teacher's feedback to self-assess accordingly; 2 - relationship issues might prevent teachers from assessing students at their effective level. Hence, for the ENG.3 System architectural design process and the ENG.7 Software integration process, in addition to the teachers' assessment, another teacher, experienced in ISO/IEC 15504 assessments, acted as a third-party assessor.

Architectural design

For ENG.3 System architectural design, Table 2 presents the correlation coefficient between student and teacher assessments and the correlation coefficient between student and third-party assessments. Assessment relies on 3 BPs and 2 WPs. Table 2 also presents the average assessment for each assessed item. The correlation coefficient between self-assessment and teacher assessment measures is r1 = 0.28 and the correlation coefficient between self-assessment and third-party assessment measures is r2 = 0.24. There is no real indication of a correlation. Third-party assessment averages are poor, except maybe for database design and interface design, but these technical topics are deeply addressed in the curriculum. Half of the students performed very superficial architectural work because they were eager to jump to the code. They believed that the work was fair enough, but teachers did not. The BP4.
Ensure consistency is a traceability matter that suffers from the same problem described above. A concern similar to the requirements phase arose: most students took the Work Products (Design Documents) assessment as an indication of their achievement. Students reported that the requirements analysis greatly helped to figure out the system behavior and facilitated the design phase and interface specification. However, students had never really learnt architectural design and interfacing between sub-systems; indeed, this explains the low third-party assessment averages for BPs and WPs.

Integration

ENG.7 Software integration is assessed with 6 main Base Practices and 2 Work Products. The correlation coefficient between self- and teacher assessments is r1 = -0.03 and the correlation coefficient between self- and third-party assessments is r2 = 0.31. However, several BPs or WPs were assessed by the third-party assessor with the same mark for all students (N or P): the standard deviation is zero and the correlation coefficient is biased, so it was not used. Table 3 presents the assessment averages for the three types of assessment. All BPs and WPs related to integration and test are weakly third-party assessed, indicating that students are not really aware of these topics, a common hole in a Bachelor curriculum. Some students were aware of the poor maturity of the integrated product, partly due to the lack of testing. Although the JUnit framework had been taught during the first semester, some students did not see the point of using it, while others did not see how to use it for the project. As mentioned by [START_REF] Umphress | Software process in the classroom: the Capstone project experience[END_REF], we came to doubt the veracity of the process data we collected. Students reported that they appreciated the high-level discipline that the capstone imposed, but they balked at the details.
Batch 2: information system development

For batch 2, students had to work loosely in pairs; each of the two developed different components of the information system and was assessed individually. Table 4 presents the correlation coefficient r between student and teacher assessments for the ENG.6 Software construction process. It relies on 4 Base Practices and 2 Work Products. Table 4 also presents the average assessment for each assessed item. The correlation coefficient is r = 0.10 and there is no indication for a correlation. However, BPs and WPs related to unit testing were assessed by the teacher with almost the same mark for all students (N or P), biasing the correlation coefficient. If we remove the BPs and WPs related to unit testing (17-14 Test cases specification; 15-10 Test incidents report; BP1: Develop unit verification procedures), we get r = 0.49, indicating a possible correlation. Our bachelor students have little awareness of the importance of testing, including test specification and bug reporting. This issue has been raised by professional tutors many times during the internships but no effective solution has been found yet. Students reported that the ENG.6 Software construction process raised a certain anxiety because they had doubts about their ability to develop a stand-alone server interoperating with a JDeveloper application and two databases, but most students succeeded. For some students, poor Java literacy compromised the project progress. It is one problem reported by Goold: the lack of technical skills in some teams [START_REF] Karunasekera | Preparing software engineering graduates for an industry career[END_REF].

Conclusion

The research question aims to see how students' self-assessment and external assessment (by a teacher or a third party) are correlated. Assessments are correlated to some extent, but this is not true for topics not addressed in the curriculum or unknown by students.
For well-known topics, assessments are correlated for roughly half of the study population. It might indicate that in a professional setting, where employees are skilled for the required tasks, self-assessment might be a good replacement for external assessment. Using a third-party assessment instead of the coaches' assessment was not convincing. Third-party assessment is too harsh and tends to assess almost all students with the same mark. Self-knowledge or the teacher's understanding tempers this rough assessment towards a finer appreciation. The interest of a competencies model (process/BPs/WPs) is to supply a reference framework for doing the job. Software professionals may benefit from self-assessment using a competencies model in order to record abilities gained through different projects, to store annotations related to new skills, and to establish snapshots in order to evaluate and recognize knowledge, skills and experience gained over long periods and in diverse contexts, including in non-formal and informal settings.

Table 1: ENG.4 assessment (self and teacher)
                                            Stud. avg  Tch. avg  r
BP1: Specify software requirements            2.12       1.84    0.31
BP3: Develop criteria for software testing    1.76       1.76    1.00
BP4: Ensure consistency                       1.92       0.88    0.29
17-8 Interface requirements                   1.88       1.88    1.00
17-11 Software requirements                   2.08       2.08    1.00

Table 2: ENG.3 assessment (self, teacher and third-party). Columns: Stud. avg, Tch. avg, 3-party avg., r Std-Tch, r Std-3party.

Table 3: ENG.7 indicators. Columns: Stud. avg, Tch. avg, 3-party avg.

Table 4: ENG.6 assessment (self and teacher). Columns: Stud. avg, Tch. avg, r.

Acknowledgements

We thank all the students of the 2016-2017 final year of the Bachelor in Computer Science for their agreement to participate in this study, and especially Maxens Manach and Killian Monot who collected and anonymized the assessments. We thank Laurence Duval, a teacher who coached and assessed half of the students during batch 1.
01756185
en
[ "info.info-se" ]
2024/03/05 22:32:10
2017
https://hal.univ-brest.fr/hal-01756185/file/PID5031269.pdf
Cassandra Balland, Néné Satorou Cissé, Louise Hergoualc'h, Gwendoline Kervot, Audrey Lidec, Alix Machard, Lisa Ribaud-Le Cann, Constance Rio, Maelle Sinilo, Valérie Dantec, Catherine Dezan, Cyrielle Feron, Claire François, Chabha Hireche, Arwa Khannoussi, Vincent Ribaud

Girls Who… Do Scratch: a First Round with the Essence Kernel

Keywords: Scratch, gender equality, elementary school

I. INTRODUCTION

"Girls who…" is a facility intended to disseminate Scratch in elementary schools. It is part of a French national plan accompanying schools in science and technology (ASTEP - Accompagnement en Sciences et Technologie à l'École Primaire). "Girls who…" is also a girl collective that develops and maintains a practicum of sciences, called the factory. "Girls who…" have a double goal: to set an example of sciences by women and to support the practice of sciences by elementary school pupils. Scratch is an open-source environment for multi-media creation (https://scratch.mit.edu) for young people 8 to 16 years old. The Scratch language is made of blocks, arranged in categories. Series of connected blocks form scripts. The Motion and Pen categories allow the displacement of sprites in a scene (a Cartesian coordinate system) and the layout of geometrical and artistic figures. The Data, Control and Operators categories provide typical primitives of imperative programming languages. The Sensing category handles inputs/outputs and manages the usual peripherals: mouse, keyboard, webcam, and clock. The Events category allows event-driven programming and a parallel way of thinking. The Looks and Sound categories provide the user with multi-media blocks.
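To illustrate how Motion and Pen blocks combine into a script, here is a toy interpreter, in plain Python and deliberately not the actual Scratch runtime, that traces the pen path of the classic "draw a square" exercise ("repeat 4 [move 100 steps, turn 90 degrees]"):

```python
import math

def run_script(blocks):
    """Toy interpreter: executes a flat list of Scratch-like blocks and
    returns the list of points visited by the sprite's pen."""
    x, y, heading = 0.0, 0.0, 90.0        # Scratch sprites start facing right (90 deg)
    path = [(x, y)]
    for block, arg in blocks:
        if block == "move":               # Motion category: move <arg> steps
            rad = math.radians(90.0 - heading)   # Scratch: 90 deg points along +x
            x += arg * math.cos(rad)
            y += arg * math.sin(rad)
            path.append((round(x, 6), round(y, 6)))
        elif block == "turn_cw":          # Motion category: clockwise turn
            heading += arg
    return path

# "repeat 4 [move 100, turn 90]" unrolled into a flat script
square = [("move", 100), ("turn_cw", 90)] * 4
print(run_script(square))   # four corners of a square, back to the start
```

The returned path visits the four corners of a 100-step square and returns to the origin, which is what pupils see the pen draw on screen.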
The Essence kernel [START_REF]Object Management Group, Essence -Kernel and Language for Software Engineering Methods, Version 1.1[END_REF], issued from the SEMAT initiative [START_REF] Jacobson | The essence of software engineering: the SEMAT kernel[END_REF], helps to assess projects' progress and health. Essence yields an actionable and extensible kernel of universal elements, called alphas. Each alpha has a specific set of states that codify points along a progress dimension addressed by the alpha [START_REF] Jacobson | A new software engineering[END_REF]. The first round of "girls who…" produced introductory courses for Scratch, some courses based on the story of Charlie and the Chocolate Factory [START_REF] Dahl | Charlie et la chocolaterie[END_REF]. Exemplary lessons have been tested in five elementary schools around Brest, France. The development and the delivery of the courses were performed in five sprints using Trello boards. After the sprints, we performed an Essence assessment, presented in this paper, which revealed several weaknesses of the project and helped to improve the facility. Essence was chosen because we share the SEMAT concerns: to refound software engineering based on a solid theory, proven principles and best practices. Section II describes the project according to the alphas of the Customer and Solution areas of concern: opportunities, stakeholders, requirements and software system. After having discussed the time, space, actions and matter of "girls who…", section III presents the Endeavor area of concern: work, team and way-of-working. Finally, section IV concludes.

II. ASSESSMENT OF A PROJECT'S PROGRESS AND HEALTH

A unique feature of the Essence kernel is the way that it handles "things we always work with" through alphas. An alpha is an essential element of the software engineering endeavor - one that is relevant to an assessment of its progress and health. Alphas move through a set of states.
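This alpha/state progression can be sketched as a small data model; this is our own simplification for illustration, not the OMG Essence metamodel. Each state carries a checklist, and the alpha's reached state is the furthest state whose checklist is fully satisfied:

```python
def current_state(states, achieved):
    """states: ordered list of (state_name, checklist); achieved: set of
    checklist items judged fulfilled. Returns the furthest state whose
    checklist items are all achieved (states must be reached in order)."""
    reached = None
    for name, checklist in states:
        if all(item in achieved for item in checklist):
            reached = name
        else:
            break                  # a later state cannot be reached
    return reached

# Abridged checklists for the Opportunity alpha (wording simplified)
opportunity = [
    ("Identified",      {"idea clear", "stakeholders identified"}),
    ("Solution Needed", {"need confirmed", "root causes identified"}),
]
print(current_state(opportunity,
                    {"idea clear", "stakeholders identified", "need confirmed"}))
```

With only part of the second checklist fulfilled, the alpha stays at its first state, which mirrors how the assessments in this paper are reported.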
Each state has a checklist that specifies the criteria needed to reach the state.

A. Opportunity

In France, the 2016 reform for elementary and secondary schools defined a common grounding of knowledge, skills and culture, structured in five areas [START_REF]Le socle commun de connaissances, de compétences et de culture[END_REF]. The common grounding acts as a reference for the school curriculum from 6 to 16 years. Learning Scratch programming and algorithmic thinking was introduced into the "languages to think and communicate" area. According to its authors [START_REF] Resnick | Scratch: programming for all[END_REF], Scratch is a creative design environment that appeals to people who hadn't imagined themselves as programmers. As a technological tool, it supports the learning of many skills of the common grounding. Since 1996, within the framework of the ASTEP, science students have accompanied primary school teachers during science and technology lessons. This accompaniment associates the teacher, the science student and the pupils around scientific and technological practice, for a mutual enrichment and a natural division of competences. In Brest University, each second-year student of any Bachelor of Science degree attends a "Preparation to professional life" course with an internship. The student's objectives have two parts regarding her/his professional integration: 1 - the student must carry out a task or a set of specific tasks inside the internship organization or company; 2 - the student must be able to show her/his understanding of a professional environment. For students considering a teaching or academic career, the ASTEP is a unique opportunity to experiment with the teaching situation. In Western countries, there is a disaffection of girls for the STEM area (Science, Technology, Engineering, and Mathematics).
However, new generations of students easily invest in communication and information technologies and appreciate the ludic and creative aspects of Scratch. The "girls who…" facility allows elementary schools to be technologically accompanied in Scratch by second-year Bachelor girls. The students demonstrate an example of sciences by women to the elementary school pupils. The combination of a) the Scratch introduction in the elementary school, b) the existence of the French ASTEP program and c) the commitment of second-year students in the "Preparation to professional life" course establishes an identified opportunity. It corresponds to the first progress level of the alpha Opportunity [1, Table 8.3 - Checklist for Opportunity]: the idea is clear and the stakeholders (detailed in the following section) are identified and interested. The second level for the alpha Opportunity is to have a solution needed. In our facility, "girls who…" participation is made of three distinct activities: design and realization of science or technology learning sessions (based on Scratch); delivery of learning sessions in the elementary schools; participation in a science project with primary school classes under the responsibility of a PhD female student. The whole facility is called the factory, and constitutes the solution. Not all items of the Solution Needed checklist of the alpha Opportunity [1, Table 8.3 - Checklist for Opportunity] are achieved yet: although the need for a solution is confirmed and a first solution is identified and suggested, the stakeholders' needs are not entirely established and not all problems and their root causes are completely identified.

B. Stakeholders

As mentioned in the previous section, second-year students have to perform a practical experience through an internship and gain the associated course credits. Parallel to their research work, PhD students also have to gain credits.
With primary education pupils, three groups are part of an example chain towards scientific professions. Each group has a role in the factory:
• Apprentices: elementary school pupils (girls and boys) who are learning Scratch and performing projects.
• Workers: second-year female students of any Bachelor of Science program who like to prepare Scratch exercises and examples, deliver learning sessions to apprentices and accompany teachers.
• Tutors: PhD female students who animate science projects for apprentices with the workers' help and who might assist workers in their disciplinary learning.
Several categories of Ministry of National Education employees are also stakeholders. Educational institution employees, called resource persons, provide the factory with various services. Elementary school teachers engage their class in Scratch learning and science projects. School district inspectors (inspecteurs de circonscription, in French) are the teachers' supervisors; they approve students' participation in schools and represent the institutional guarantee. Inside the university, heads of Bachelor of Science degrees and PhD advisors facilitate students' participation as "girls who…". Stakeholder groups are identified, key stakeholders are represented and responsibilities are defined; the first alpha level is thus reached: stakeholders are recognized [1, Table 8.2 - Checklist for Stakeholders]. The second level is to have stakeholders represented. In the "girls who…" facility, the stakeholders' representatives are authorized and responsibilities are agreed. As detailed in section III.C, the collaborative approach is largely agreed. Consequently, to achieve the second level, the remaining task is to ensure that the way-of-working is supported and respected.

C. Requirements

"Use cases have been around for almost 30 years as a requirements approach and have been part of the inspiration for more recent techniques such as user stories [START_REF] Jacobson | Use-case 2.0[END_REF]."
Some stories are presented below. A resource person creates, on a Trello board, a schedule list for a school, then a card for each learning sequence where s/he enters the sequence description, fills in the expiry date and assigns the workers. A worker consults a Trello board related to daily tasks, decides to resume the realization of a sequence and consults the task card. She downloads the Moodle resources associated with the card, works on the instructions and the project, and updates the Moodle resources. A worker arrives in a school to animate a learning sequence. She logs into Moodle and sets up the resources for the sequence on each workstation. She starts the presentation of the sequence. Once the apprentices are working, she explains the work to be carried out using the presentation and supervises the apprentices' activities. An apprentice starts working. S/he chooses one activity of her/his current sequence. Then, s/he loads the related Scratch project and gets on with the task. Not all stakeholders are familiar with requirements engineering. The choice of "user stories" for sketching the requirements makes it possible to communicate them and to effectively share them between the stakeholders. User stories are represented using cards, called story cards, which constitute the smallest grain of requirements. As noted by [START_REF] Sharp | The Role of Story Cards and the Wall in XP teams: a distributed cognition perspective[END_REF], story cards and tasks to be implemented, together with the wall where they are displayed, are two tangible artefacts of the distributed knowledge of the team. The need for a facility such as the factory has been agreed by stakeholders. Users (pupils) are identified. Funding and opportunity are clear. The first level for the alpha Requirements is reached [1, Table 8.4 - Checklist for Requirements]: requirements are conceived. The second level aims to have requirements bounded.
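The story cards described in these user stories can be captured in a minimal data model. The field names below are ours for illustration and do not match Trello's real API identifiers:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class StoryCard:
    """Smallest grain of requirements: one card on a backlog list."""
    title: str
    due: Optional[date] = None                                # expiry date
    members: list = field(default_factory=list)               # assigned workers
    checklist: dict = field(default_factory=dict)             # item -> done?
    resources: list = field(default_factory=list)             # Moodle links

    def done(self) -> bool:
        return bool(self.checklist) and all(self.checklist.values())

# A resource person schedules a learning sequence for a school
card = StoryCard(
    title="School A - sequence 3: loops with the mbot robot",
    due=date(2017, 11, 17),
    members=["worker-1", "worker-2"],
    checklist={"instructions updated": True, "Scratch project ready": False},
)
print(card.done())   # False until every checklist item is ticked
```

Modeling the checklist as item-to-boolean pairs mirrors how the cards track a sequence's stage of progress on the board.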
Workers and tutors are the main stakeholders involved in the factory development. All stakeholders agree on the factory purpose - Scratch learning - and the factory will be successful if apprentices learn effectively. The stakeholders have a shared understanding of the ASTEP program. Requirements are managed using a Trello board, with a list for each user story made of pieces materialized by story cards. Assumptions about the system changed (see next section) but are now clearly stated. We are struggling with two items of the Requirements bounded checklist: is the prioritization scheme clear; are constraints identified and considered? In the short history of "girls who…", each new school joining the factory brought new circumstances and new needs, and workers adapted the requirements consequently. In order to move to the third level - Requirements coherent - one needs to find a way to manage new constraints and prioritize requirements.

D. System

The initial idea behind the system was to build an online Scratch training. The first source of inspiration was the web site code.org. It offers many well-organized activities, from simple to complex, in terms of teaching programming. Kalelioglu studied the effects of learning programming with code.org regarding reflexive thought, problem resolution, and gender issues. She stated: "Students developed a positive attitude towards programming, and female students showed that they were as successful as their male counterparts, and that programming could be part of their future plans [START_REF] Kalelioğlu | A new way of teaching programming skills to K-12 students: code.org[END_REF]." During preliminary discussions with elementary school teachers and inspectors, the need moved from a remote online course to classroom training sessions. The first round of "girls who…" established the proof of concept thanks to five elementary schools. We kept the code.org principle of learning sequences divided into increasingly complex activities.
The strength of code.org is an integrated environment, providing the user with learning instructions, a programming workspace, online help and learning feedback. This integration remains a requirement of the system, but for a future version. Moreover, some learning sequences use small robots called mbot. Robotics programming is performed in the mblock environment, an open-source Scratch extension. The architecture of the system is thus made up of:
• Several Trello boards (https://trello.com).
• Prezi presentations (https://prezi.com).
• A Moodle server from Brest University, dedicated to online courses and accessible by external users (https://moodlespoc.univ-brest.fr).
• School computers or tablets with Scratch and mblock.
The architecture selection criteria have been agreed on, hardware platforms identified and technologies selected. The system boundary is known and the key technical risks agreed to. Significant decisions about the system have been made: open-source software and free contents. All criteria of the first level for the alpha System are fulfilled [1, Table 8.5 - Checklist for System Software] and the architecture is selected. The second level is to have a demonstrable system. According to the checklist associated with this level, one needs to demonstrate architectural characteristics, critical hardware, critical interfaces and integration with other existing systems. One needs to exercise the system and measure its performance. Then the relevant stakeholders will agree that the demonstrated architecture is appropriate. All verifications and measurements will be realized in the next round, where "girls who…" will run courses in many schools; the goal is to demonstrate that the architecture of the system is fit for purpose and supports testing.

III. THE ENDEAVOUR AREA OF CONCERN

The Endeavour area of concern contains everything to do with the team, and the way that they approach their work.
Endeavour contains three alphas: Team, Work and Way-of-Working. Key aspects of the facility are presented - time, space, actions and matter - then the alphas' assessment is drafted.

A. Time of girls who…

Each year, "girls who…" will welcome new workers (second-year students) and new tutors (starting PhD students). "Girls who…" go to schools and have three kinds of contributions: Scratch discovery sessions (in May/June), Scratch learning sequences (from October to February), and science project accompaniments (from March to June). A Trello board equivalent to the product backlog, called the service backlog, contains a list per school. Each school visit is materialized by a card and its attributes: date, members, checklist, and resources. During her first period in the factory (1-4 weeks, see sections II.A and III.D), each worker produces contents that will be integrated in one of the factory courses. Workers work in pairs and the time unit is the week: the product backlog (a Trello board) contains a list per week. The list keeps the cards (tasks or sequences in schools) organized in their various stages of progress.

B. Spaces of girls who…

The factory managed by "girls who…" is a common property (like Wikipedia or an open-source software) under a double license, GNU GPL and CC BY-SA. The digital spaces of the project are numerous: Moodle courses and Prezi presentations, product and service backlogs. The entry point is a public Trello board, in French (https://trello.com/b/83omDwmt/les-filles-qui). The integration of new workers in the factory occurs during their first internship (1-4 weeks) in the facility. Thanks to a partnership with an engineering school, École Nationale Supérieure de Techniques Avancées - ENSTA Bretagne, new workers are accommodated in a dedicated room at ENSTA Bretagne. Thereafter, "girls who…" will not always share the same space-time: girls telework, go to schools in pairs, and exchange synchronously and asynchronously.
However, this integration space-time where "girls who…" meet in person is a key element of the network dynamics. It makes it possible to acquire tool and method usage that will then be exercised remotely. A distributed common facility such as our factory has to schedule gathering space-time events throughout the year to keep the girls trained, to strengthen network membership and to ensure continuity between presence and distance.

C. Actions of girls who…

"Girls who…" work is ruled by two key principles: setting an example of science and gender equality. "Girls who…" act and are models for other girls who act, and so on. A worker achieves various tasks. She contributes to the facility organization and digital spaces management. She designs and realizes learning sequences for Scratch, Scratch junior and mbot programming, and carries out assessments. She accompanies elementary school classes within the national program ASTEP (see section II.A), either delivering learning sequences or participating in science projects. She contributes to an action-research endeavor related to the introduction of programming and algorithmic thinking in elementary schools. A tutor achieves targeted actions. Upon teachers' request, she leads science projects for elementary school pupils. She might assist workers during their Bachelor studies.

D. Matter of girls who…

At the end of the first round, three courses have been developed and partly tested: a 6-sequence course of Scratch programming for 10-12 year-olds, a 6-sequence course (sharing a few sequences with the previous course) of mbot robot programming for 10-12 year-olds, and a 3-sequence introductory course of Scratch junior for 6-8 year-olds. Scratch courses use characters inspired by the book "Charlie and the chocolate factory" by Roald Dahl [START_REF] Dahl | Charlie et la chocolaterie[END_REF].

E. A first endeavor

During the first five-week round, nine workers realized the programming courses described above and are co-authors of this paper.
Whereas the initial idea was to prepare courses for the next year, it became quite clear that "girls who…" needed to test the courses in schools. Generally speaking, the design, realization and delivery of courses occurred in short cycles (one week) and were built incrementally, each week incorporating the previous week's feedback. A regular health evaluation of the alphas Work and Team guided the project progress. The alpha Way-of-Working was barely assessed.

1) Work. A Trello board was used for the definition, planning and assignment of tasks: a list per week and a card per workday with the workers present that day. Card descriptions were informal but, for recurring tasks, helped to feed various to-do lists. The lists, as indicated in section III.A, control tasks and school accompaniments and constitute the product backlog and the service backlog of the project. Required results, constraints, funding sources and priorities are clear; stakeholders are known; initiators are identified. All criteria of the first level of the alpha Work [1, Table 8.7 - Checklist for Work] are met and work is initiated. The second level is to have the work prepared. The commitment of the faculty is made; a credible plan and funding are in place and renewable each year. Resource availability (mainly workers) is understood; the cost and effort of the work are roughly estimated. Work is broken down sufficiently into learning sequences for productive work to start; tasks are identified and prioritized; at least one of the members (the Science Faculty) is ready. Exposure to risks is minimal, thus one may admit that it is understood. There are two points left to fulfill to have the work prepared. Acceptance criteria have to be established, for instance to constitute pupil cohorts and to perform measurements. Integration points have to be defined, in terms of contents - how to enrich the pool of courses - and in terms of schools - how to insert a new school in the facility.

2) Team.
Seven workers of the first round came from the Bachelor of Mathematics and the Bachelor of Informatics. They knew each other; some students committed immediately to the project and this was the key to convincing the others. The project evolved from an online course to face-to-face courses; however, the team mission and composition are now defined; constraints are known; the required competences in programming and teaching are identified. Students themselves outlined their responsibilities and the need for a clear level of commitment. The "girls who…" initiators received training intended for animating collaborative projects (http://animacoop.net). The training was carried out part-time over three months. It facilitated the selection of a distributed and co-operative leadership model and helped to set rules of governance, which is shared between workers, tutors, heads of Bachelor studies and network animators. The greatest difficulty arose from the two latter items of the checklist: definition of mechanisms to grow the team and determination of the team size. An identified risk is that schools' demand increases too fast; during the first round, we selected five pilot schools; for the 2017-2018 academic year, 18 elementary classes are already part of the program. Due to the introduction of Scratch into the elementary school curriculum, this can lead to a demand exceeding supply. In French Faculties of Science, biology studies are largely invested by girls, and there is no need for exemplary models in Biology. However, "girls who…" who do biology can be interested in Scratch technology and in going to schools as well. We have to evolve in a way that "girls who…" doing biology can bring forces to the facility and find an interest for their scientific discipline. This is the goal of science project accompaniments; thus all criteria of the first level of the alpha Team are met [1, Table 8.6 - Checklist for Team] and the team is seeded. The second level is to have a formed team.
Workers know their role and their place in the team; individual and team responsibilities are accepted and aligned with competencies. As volunteers, members accept the work, and it was underlined in the previous section that members must commit to the work and the team. Team communication mechanisms have been defined: Trello for backlog, planning and tasks, and Moodle as a Learning Management System. "Girls who…" use Slack for instant messaging and Loomio for collective decision making. The unknown factor relates to growth: will enough workers be available, and did we fail to identify external collaborators? These points will be addressed in the next round.

3) Way-of-working

The first level of the alpha Way-of-Working is to have principles established [1, Table 8.8 - Checklist for Way-Of-Working]. The criteria are: the team actively supports the principles, the stakeholders agree with the principles, the tool needs are agreed, the approach is recommended, the operational context is understood and the constraints of practices and tools are understood. The first round of "girls who…" did not have established principles before its start, but the workers defined their way-of-working and made the following decisions. A pair of workers is assigned to a school; the design and realization of learning sequences is assigned individually; Prezi presentations are used to present learning instructions and exercises interactively with pupils; Slack, Trello and Loomio usage rules are defined; the tools and resources required to be available in schools are identified; a back-up solution is needed and implemented, for content availability as well as for software tools. The foundations are laid and, at the next round, we will see if the new "girls who…" follow these principles. We will check if the first level is reached and principles established.

IV.
CONCLUSION

The first round of "girls who…" shaped a facility with a double goal: to set the example of "girls who…" doing sciences and technology and to support the introduction of Scratch and algorithmic thinking in the elementary schools. "Girls who…" prepared Scratch lessons, delivered courses in five pilot schools and organized the system for the next rounds. Kids loved it when "girls who…" came to their schools, and they learned Scratch and robot programming with enthusiasm. For the 2017-2018 year, 18 elementary classes are already part of the program. The facility has evolved considerably since we wrote the initial specifications nine months ago. The facility was led with a double help: training intended for animating collaborative projects, and use of the Essence kernel to assess the state of the different areas of concern and to define the next steps to complete. Essence provides an understanding of the progress and health of development efforts and of how practices can be combined into an effective way of working. The use of Essence can help teams, among others, to detect systemic problems early and take appropriate action; to evaluate the completeness of the set of practices selected, and understand the strengths and weaknesses of the way of working; to keep an up-to-date record of the team's way of working, and share information and experiences with other teams [START_REF] Jacobson | Agile and SEMAT: perfect partners[END_REF]. The Essence assessment gave us a clear vision of the project state and raised critical weaknesses of the facility. Indeed, we are now using Essence to assess our research projects' health and progress.

Fig. 1. Illustrations by Lisa, inspired by "Charlie and the chocolate factory", the novel by Roald Dahl and the eponymous movie by Tim Burton.

Acknowledgments

Many persons facilitated the birth of "girls who…", including Pascale Huret-Cloastre, Hélène Klucik, Isabelle Queré, Corinne Tarits, Yann Ti-Coz.
01756186
en
[ "info.info-se" ]
2024/03/05 22:32:10
2017
https://hal.univ-brest.fr/hal-01756186/file/PID5031297.pdf
Vincent Leildé email: [email protected]
Vincent Ribaud email: [email protected]

Does process assessment drive process learning? The case of a Bachelor capstone project

Keywords: process assessment, ability model, capstone project

In order to see if process assessment drives process learning, process assessments were performed in the capstone project of a Bachelor in Computer Science. The assessments use an ability model based on a small subset of ISO/IEC 15504 processes, its main Base Practices and Work Products. The students' point of view was also collected through an anonymous questionnaire. Self-assessment using a competency model helps students to recognize knowledge, skills and experience gained over time and in diverse contexts. The capstone project offered a starting point. Students' self-assessment and external assessment are correlated to some point but are not correlated for topics unaddressed in the curriculum or unknown by students.

I. INTRODUCTION

At Brest University, the Bachelor capstone lasts roughly 100 hours during the Bachelor's last semester. Students learn by doing a simplified cycle of software development through a somewhat realistic project. A rainbow metaphor is used to position the phases in the cycle, divided into four main phases: requirement (red), design (yellow), construction (blue) and integration (indigo). Three phases are used to transition between the main phases: requirements analysis (orange), detailed design (green), validation (violet). A side-effect goal of the capstone project is to expose students to some kind of process assessment. A process is "a set of interrelated or interacting activities which transforms inputs into outputs [START_REF]Information technology --Process assessment[END_REF]." Process assessment is applicable by or on behalf of an organization with the objective of understanding the state of its own processes for process improvement.
Our hypothesis is that process assessment is worthwhile for students if they are conscious of the processes underlying their way of working. It is a common assertion that assessment drives learning. Our research question examines whether process assessment drives process learning. 25 students consented to use an ability model based on a small subset of the ISO/IEC 12207:2008 Process Model [START_REF]Information technology --Process assessment[END_REF] and self-assessed their skills proficiency using a 4-point Likert scale. Teachers assessed students using the same ability model and Likert scale, and for two processes, a 3rd-party assessment was also performed. Volunteer students collected the results, including a satisfaction questionnaire, gathered students' and teachers' assessments, and anonymized the data. Results are used to present the achievement degree of students together with a possible correlation between self and external assessments. Some initial findings were presented in [START_REF] Ribaud | Process Assessment Issues in a Bachelor Capstone Project[END_REF]. The structure of the paper is: section 2 overviews process assessment; section 3 presents the different assessments we carried out; section 4 presents the questionnaire results; and we finish with a discussion and a conclusion. II. PROCESS ASSESSMENT A. ISO/IEC 15504 standard The ISO/IEC 15504 standard uses a Process Assessment Model, which is a two-dimensional model of process capability. In the process dimension, processes are defined and classified into process categories. In the capability dimension, a set of process attributes grouped into capability levels is defined. Process Attributes are features of a process that can be evaluated on a scale of achievement, providing a measure of the capability of the process. The process dimension uses a Process Reference Model (PRM) replicated from ISO/IEC 12207:2008. 
A PRM is a model comprising definitions of processes in a life cycle described in terms of process purpose and outcomes, together with an architecture describing the relationships between processes. The capability dimension defines process assessment, which is essentially a measurement activity of process capability on the basis of evidence related to assessment indicators. There are two types of assessment indicators: process capability indicators, which apply to all capability levels, and process performance indicators, which apply exclusively to capability level 1. The process performance indicators are Base Practices (BP) and Work Products (WP). "A base practice is an activity that, when consistently performed, contributes to achieving a specific process purpose", and "a work product is an artifact associated with the execution of a process" [START_REF]Information technology --Process assessment[END_REF]. B. Ability Model From an individual human perspective, this subset can be seen as a competency model related to the knowledge, skills and attitudes involved in a software development project. A competency model (also called an ability model) defines and organizes the elements of a curriculum and their relationships. A hierarchical model is easier to manage and use. We kept the hierarchical decomposition inherited from ISO/IEC 15504: process groups - processes - base practices and products. A competency model is decomposed into competency areas (mapping to process groups); each area roughly corresponds to one of the main divisions of the profession or of a curriculum. Each area organizes the competencies into families (mapping to processes). A family roughly corresponds to the main activities of the area. Each family is made of a set of knowledge and abilities (mapping to base practices), sometimes called competencies; each of these entities is represented by a designation and a detailed description. 
The ability model and its associated tool eCompas have been presented in [START_REF] Ribaud | Towards an ability model for software engineering apprenticeship[END_REF]. C. Process Assessment ISO 15504 [START_REF]Information technology --Process assessment[END_REF] defines a measurement framework for the assessment of process capability, based on a six-point ordinal scale which represents increasing capability of the implemented process, from not achieving the process purpose through to meeting current and projected business goals [START_REF]Information technology --Process assessment[END_REF]. Capability Level 0 denotes an incomplete process, either not performed at all, or for which there is little or no evidence of systematic achievement of the process purpose [START_REF]Information technology --Process assessment[END_REF]. Capability Level 1 denotes a performed process that achieves its process purpose through the performance of necessary actions and the presence of appropriate input and output work products which, collectively, ensure that the process purpose is achieved [START_REF]Information technology --Process assessment[END_REF]. We are not interested in levels higher than 1 in this study. Evidence of performance of the base practices, and the presence of work products with their expected work product characteristics, provide objective evidence of the achievement of the purpose of the process. Achievement is characterized on a defined rating scale: N (Not Achieved), P (Partially Achieved), L (Largely Achieved), F (Fully Achieved). If students are able to perform a process, it denotes a successful learning of software processes, and teachers' assessments rate this capability. The research question aims to state whether self-assessment and external assessment help students to improve their software process practice, and how student, teacher and third-party assessments are correlated. III. THE CAPSTONE PROJECT A. 
Overview 1) Schedule The curriculum is a 3-year Bachelor of Computer Science. The project happens in the third year, before the students' internship. The project is performed during a dedicated period of two weeks. Before the dedicated weeks, a dozen two-hour labs are conducted throughout the semester and some homework is required. According to students' estimates, they spent an average of 102 hours on the capstone project. Each phase is driven by the main Base Practices of the related software process and ends with the delivery of a few related Work Products. Each deliverable can be reviewed as many times as needed by the teacher, who provides students with comments and suggestions. When the dedicated period starts, students are familiar with the Author-Reader cycle and have performed the requirements and architectural design processes. 2) Statement of work The software is made of 2 sub-systems: PocketAgenda manages an address book and an agenda and interfaces with a central directory; WhoIsWho manages the directory and a social network. PocketAgenda is implemented with Java and JSF relying on an Oracle RDBMS. WhoIsWho is implemented in Java using a small RDBMS. Both sub-systems interact in a client-server mode and communicate with a protocol established using UDP. 3) Students' consent Students were advised that they could freely participate in the experiment described in this paper. The class contains 29 students; all agreed to participate; 4 did not complete the project and do not take part in the study. Students have to regularly update the competency model, consisting of the ENG process group, the 6 processes above with their Base Practices and main Work Products, and self-assess on the N-P-L-F scale. There are also teacher and 3rd-party assessments that have to be attached to self-assessments by volunteer students. 
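As a concrete illustration of the N-P-L-F ordinal scale and of the all-or-none aggregation used for the Table 1 statistics below, here is a minimal sketch (a hypothetical helper written for this text, not code from the study):

```python
# N-P-L-F ordinal scale from the ISO/IEC 15504 achievement ratings.
SCALE = ["N", "P", "L", "F"]  # Not, Partially, Largely, Fully Achieved

def aggregate(ratings):
    """All-or-none aggregation: a set of BP (or WP) ratings reaches a
    level only if ALL individual ratings reach at least that level,
    i.e. the aggregate is the minimum on the ordinal scale."""
    return min(ratings, key=SCALE.index)

# A process whose Base Practices are rated L, F and L aggregates to L;
# a single N pulls the aggregate down to N.
print(aggregate(["L", "F", "L"]))  # L
print(aggregate(["L", "N", "F"]))  # N
```

The design choice is simply to treat the letters as an ordered scale and take the minimum, which matches the "rated at least Not, Partially, Largely or Fully" wording.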
4) Statistics Process assessment was continuous and communicated to students regularly; hence they were made aware of their progression very often and adjusted their effort. Table 1 presents the teacher's assessments. BP and WP ratings are aggregated using an all-or-none principle: if all BPs or WPs in a process are rated at least Not, Partially, Largely or Fully, the BPs or WPs are rated Not, Partially, Largely or Fully. B. Writing and analyzing requirements Before the 2-week dedicated period starts, students have to capture, write and manage requirements through use cases. It is a non-technical task unfamiliar to students. Students report that eliciting and writing requirements was a difficult task, and the Author-Reader cycle helped to produce complete and usable use cases and to acquire a writing style. In most cases, three cycles were required to achieve the task. According to the average of students' estimates, they spent 20 hours (roughly a fifth of the total hours) to capture, write and manage use cases. This period refers to the red/orange colors corresponding to the ENG.1 and ENG.2 processes. A 4-hour lecture about use cases was delivered in January at the beginning of the semester; then the iterative process of writing and being reviewed by the teacher started. Table 2 also presents the average assessment for each assessed item. The correlation coefficient of each item relates 25 self-assessment measures with the corresponding teacher assessment measures; the overall correlation coefficient relates 25 * 5 = 125 couples of measures, and its value r = 0.64 indicates a correlation. Thanks to the Author-Reader cycle, the final mark given to almost all Interface requirements and Software requirements documents was Largely Achieved or Fully Achieved. Students were made aware of their mark and reproduced the mark during the self-assessment; obviously the correlation between student and teacher assessments is complete. However, students mistook document assessment for practice assessment. 
Documents were improved through the Author-Reader cycle, but only reflective students improved their practices accordingly. Other students self-assessed their practices at the same level as the corresponding work products, but the teacher did not; see for example BP1: Specify software requirements. Also, students failed the self-assessment for BP4: Ensure consistency. Most students neglected traceability and self-assessed at a higher level than the teacher did. An abnormal set of values can bias a correlation coefficient; if we remove the BP4: Ensure consistency assessment, we get r = 0.89, indicating an effective correlation. However, a bias still exists because students are assessing themselves using the continuous feedback they got from teachers during the Author-Reader cycle. C. Architecture and integration: a client-server endeavor The ACM/IEEE joint Computer Science Curriculum [START_REF]CS Curriculum Guidelines for Undergraduate Degree Programs in Computer Science[END_REF] states about the capstone project: "Such projects should challenge students by […] requiring work on a larger scale than typical course projects. Students should have opportunities to develop their interpersonal communication skills as part of their project experience." The first week was used for these purposes: students have to work closely in pairs, to produce an interface specification and to integrate the client and server sub-systems, each sub-system being designed, developed and tested by one student. The scope of the week is bounded and defined by 4 use cases, previously written by students. The architectural design had already been done during the semester using SADT and E-R models. The week schedule follows a kind of agile sprint schedule: agreement on requirements and high-level architecture (0.5 day), pair work on interface design and low-level decisions (1-1.5 days), individual sub-system development and unit testing (2.5-3 days). 
Some pairs chose continuous integration; other pairs performed integration on the last day of the week. Both schemes worked, but 4 pairs failed to work together and split; consequently those students worked alone and had to develop both sub-systems, with or without success. We designed the assessment with the intent to minimize the bias mentioned in the previous section: students interpret the teacher's feedback to self-assess accordingly. Also, we wished to focus on the pair work and on the relationship between architectural design and integration. Hence, we focused the first week's assessment on the ENG.3 System architectural design process (yellow) and the ENG.7 Software integration process (indigo). One author had worked several years in a software company and had some experience in 15504 assessments, so he did not participate in the week and acted as a third-party assessor. The other author and another teacher coached and assessed students' BPs and WPs during the whole week. 1) Architectural design UML modeling and object-oriented design are taught in dedicated lectures (30 hours each). However, nearly all students had no idea how to perform architecture and interface design. Architectural design was taught by example: teachers performed a step-by-step design for one of the four use cases; students reproduced the scheme for the remaining use cases. For the ENG.3 System architectural design, Table 3 presents the correlation coefficient between student and teacher assessments and the correlation coefficient between student and third-party assessments. Assessment relies on 3 Base Practices and 2 Work Products. Table 3 also presents the average assessment for each assessed item. The overall correlation coefficient between self-assessment and teacher assessment measures is r1 = 0.28 and the overall correlation coefficient between self-assessment and third-party assessment measures is r2 = 0.24. There are no real differences and indeed no indication of a correlation. 
In Table 3, we see that item correlation is poor, except for database design and interface design, but these topics are deeply addressed in the curriculum. Half of the students performed superficial architectural work because they were eager to jump to the code. They believed the work was fair enough, and so did the teachers, but the external assessor did not. BP4: Ensure consistency is a traceability matter suffering from the same problem described above. Teachers also have a weak awareness of the topic; they over-assess students, and there is no correlation between students' and teachers' assessments. Students reported that requirement analysis with SADT greatly helped to figure out the system behavior and facilitated the design phase and interface specification. However, students had never really learnt architectural design and interfaces between sub-systems, which indeed explains the lower 3rd-party assessment average for BPs and WPs. 2) Integration The integration topic is not addressed in the Bachelor curriculum. In the best case, students respected their interface specifications and few problems arose when they integrated client and server code. In some cases, they were unable to perform the integration and the integration merely failed. ENG.7 Software integration (indigo) is assessed with 6 Base Practices and 2 Work Products. The overall correlation coefficient between self and teacher assessments is r1 = -0.03 and the overall correlation coefficient between self and 3rd-party assessments is r2 = 0.31. However, several BPs or WPs were assessed by the 3rd-party assessor with the same mark for all students (N or P): the standard deviation is zero, so the correlation coefficient is biased and is not used. Table 4 presents the averages of the different assessments. All BPs and WPs related to integration and test are weakly third-party assessed, indicating that students are not really aware of these topics, a common hole in a Bachelor curriculum. 
Some students were aware of the poor maturity of the integrated product, partly due to the lack of testing. D. Construction: information system development The second week was devoted to a continuous and significant development endeavor. Students had to work loosely in pairs; each of them developed separate components of the information system and was assessed individually. Unfortunately, a teacher quit the capstone project for strong personal reasons; consequently the 3rd-party assessor moved back to being a teacher and an internal assessor. 1) Construction JDeveloper is a Java IDE for the Oracle Application Development Framework (ADF). ADF is an end-to-end development framework, built on top of the Enterprise Java platform, providing integrated solutions including data access, business services development, a controller layer and a JSF tag library implementation. Most of the application logic relies on the database schema, without the need to write code. During the semester, 16 lab hours were devoted to learning the framework: a few for mastering the IDE, but enough for a start. Java, database, network and SQL programming are taught in dedicated lectures during the curriculum (60 hours each). Despite this amount, a third of the students judged themselves as having a poor knowledge of SQL and Java. Students had almost no idea of test-driven development and lacked a test strategy; hence units were poorly tested. Although the JUnit framework had been taught during the first semester, no student used it. These points have to be improved in the future. Table 5 presents the correlation coefficient r between student and teacher assessments for the ENG.6 Software construction process (blue). It relies on 4 Base Practices and 2 Work Products. Table 5 also presents the average assessment for each assessed item. The overall correlation coefficient is r = 0.10 and there is no indication of a correlation. 
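For readers who want to reproduce such figures, here is a minimal sketch of a Pearson correlation over Likert ratings mapped to 0-3 (N=0 … F=3), with a guard for the zero-standard-deviation case that biased the third-party figures; the ratings below are made up for illustration, not taken from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient; returns None when either
    series is constant (zero standard deviation), in which case the
    coefficient is undefined and should not be used."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return None  # constant series: correlation is meaningless
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

# Illustrative ratings on the 0..3 scale (N=0, P=1, L=2, F=3).
self_ratings    = [2, 1, 2, 3, 1, 2]
teacher_ratings = [2, 1, 1, 3, 0, 2]
print(round(pearson(self_ratings, teacher_ratings), 2))  # 0.89

# An assessor who gives every student "P" yields a constant series:
flat = [1, 1, 1, 1, 1, 1]
print(pearson(self_ratings, flat))  # None
```

Returning None rather than a number makes the "biased and not used" decision explicit instead of silently dividing by zero.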
Our Bachelor students have little awareness of the importance of testing, including test specification and bug reporting. Consequently, BPs and WPs related to unit testing were assessed by teachers with almost the same mark for all students (N or P), biasing the correlation coefficient. If we remove the BPs and WPs related to unit testing management (17-14 Test cases specification; 15-10 Test incidents report; BP1: Develop unit verification procedures), we get r = 0.49, indicating a plausible correlation. Students reported that the ENG.6 Software construction process raised a certain anxiety because they had doubts about their ability to develop a stand-alone server interoperating with a JDeveloper application and two databases, but most students succeeded. For some students, poor Java literacy compromised the project's progress. 2) Qualification testing On average, students spent less than 10% of the total hours performing integration and qualification tests of the software. These topics are unaddressed in the curriculum, and because they mostly occur at the end of the project, no time is available to complete the learning. For the ENG.8 Software testing process, 2 WPs were assessed: 08-21 Software test plan and 15-11 Defect report. The teachers' assessment was Not Achieved for each product; however, the students' assessment average is 1.56 for the Test Plan (one half Partially, one half Largely) and 1.32 for the Defect Report (two thirds Partially, one third Largely). Only 2 students self-assessed Not Achieved for both products. The discrepancy might come from the lack of lectures dedicated to testing and from a misunderstanding of the topic. Hence, both WPs are not considered here. We were also interested in informal feedback. An anonymous questionnaire let students express their opinions about the capstone project, which are presented in Table 7. 
Although one project objective is to relate to previous lectures and to mobilize knowledge and skills gained during the Bachelor studies, it was not effective and was rather seen as a new learning experience by half of the students. We were surprised by the poor use of and interest in the reviewing facilities. Students' comments. Students appreciated that each project phase was explained from experience and through examples. Students were convinced of the usefulness of the different phases performed in a software project and that they might be applied to other types of projects. Shared documents could be an alternative to mail exchange and might trigger the use of the reviewing facilities that some students misused. Students asked to be exposed to a whole picture of the project at the beginning, not piece by piece. Some students found the workload too heavy and the time devoted to the project too short. V. DISCUSSION AND CONCLUSION The first research question aims to see if process assessment fosters process learning. The interest of a competency model (process/BPs/WPs) is to supply a reference framework for doing the job. Software professionals may benefit from self-assessment using a competency model in order to record abilities gained through different projects, to store annotations related to new skills, and to establish snapshots in order to evaluate and recognize knowledge, skills and experience gained over long periods and in diverse contexts, including in non-formal and informal settings. Students committed to self-assessment, but we don't know if they will integrate this reflective practice into their usage or if they did it only as a part of the capstone project. However, it is a starting point. The second research question examines how students' self-assessment and external assessment [by a teacher or a third party] are correlated. They are correlated to some point, but this is not true for topics not addressed in the curriculum or unknown by students. 
For known topics, assessments are correlated for roughly half of the study population. It might indicate that in a professional setting, where employees are skilled for the required tasks, self-assessment might be a good replacement for external assessment. A 3rd-party assessment did not prove to be useful: third-party assessment is too harsh and tends to assess almost all students with the same mark. Computer Science (CS) Bachelor students are generally focused either on technical topics or on theoretical subjects. Little attention is paid to software engineering in a CS Bachelor curriculum. The PRM we use is a small subset of the ISO/IEC 15504:2006 Process Reference Model, mainly the Software-related Processes of the ENG Process Group. Process Purpose, Process Objectives and Base Practices have been kept without any modification; Input and Output Work Products have been reduced to the main products. To foster a practical understanding of SE, we use a colored cycle and we slightly rearrange the ENG Software-related Processes Process Group. The cycle is ENG.1 Requirements elicitation: red; ENG.2 System requirements analysis: orange; ENG.3 System architectural design: yellow; ENG.5 Software design: green; ENG.6 Software construction: blue; ENG.7 Software integration: indigo; ENG.8 Software testing: violet.

TABLE I. TEACHER'S ASSESSMENTS (AGGREGATED)
Rating              | Base Practices (N P L F) | Work Products (N P L F)
ENG.1/2 Requirement | 0 5 19 1                 | 0 5 17 3
ENG.3/5 Design      | 0 8 17 0                 | 0 5 17 3
ENG.6 Construction  | 0 4 21 0                 | 17 7 1 0
ENG.7 Integration   | 0 5 12 8                 | 1 1 17 6
ENG.8 Testing       | 3 19 3 0                 | 25 0 0 0

TABLE II. ENG.1/2 ASSESSMENT (SELF AND TEACHER)
Item                                       | Stud. | Teach. | r
BP1: Specify software requirements         | 2.12  | 1.84   | 0.31
BP3: Develop criteria for software testing | 1.76  | 1.76   | 1.00
BP4: Ensure consistency                    | 1.92  | 0.88   | 0.29
17-8 Interface requirements                | 1.88  | 1.88   | 1.00
17-11 Software requirements                | 2.08  | 2.08   | 1.00

Table 2 presents the correlation coefficient r between student and teacher assessments for the ENG.1 Requirements elicitation and ENG.2 Requirements analysis processes. It relies on 3 Base Practices and 2 Work Products.

TABLE III. ENG.3 ASSESSMENT (SELF, TEACHER AND THIRD-PARTY)
Item                            | Std. avg | Tch. avg | 3rd avg | r Std-Tch | r Tch-3rd
BP1: Describe system architect. | 2.24     | 2.02     | 1.68    | -0.22     | 0.18
BP3: Define interfaces          | 1.96     | 2.16     | 1.56    | 0.48      | 0.36
BP4: Ensure consistency         | 2        | 1.72     | 0.88    | 0         | 0.44
04-01 Database design           | 2.48     | 2.2      | 1.88    | 0.49      | 0.35
04-04 High level design         | 2.12     | 1.84     | 1.64    | 0.37      | -0.11

TABLE IV. ENG.7 AVERAGES (SELF, TEACHER AND THIRD-PARTY)
Item                                             | Stnd. avg | Teac. avg | 3rd avg
BP1: Develop software integration strategy       | 1.56      | 1.20      | 0.40
BP2: Develop tests for integrated software items | 2.08      | 1.08      | 0.52
BP3: Integrate software item                     | 2.00      | 2.12      | 1.76
BP4: Test integrated software items              | 2.00      | 1.80      | 1.16
BP5: Ensure consistency                          | 1.76      | 1.20      | 0.72
BP6: Regression test integrated software items   | 1.64      | 0.52      | 0.2
08-10 Software integration test plan             | 1.44      | 0.88      | 0.00
11-01 Software product                           | 2.04      | 2.12      | 1.48

TABLE V. ENG.6 ASSESSMENT (SELF AND TEACHER)
Item                                      | Stud. | Teach. | r
BP1: Develop unit verification procedures | 1.84  | 0.40   | 0.05
BP2: Develop software units               | 1.92  | 1.84   | 0.37
BP3: Ensure consistency                   | 1.92  | 0.92   | 0.25
BP4: Verify software units                | 1.96  | 1.00   | -0.2
17-14 Test cases specification            | 1.80  | 0.36   | 0.07
15-10 Test incidents report               | 1.52  | 0.12   | -0.45

TABLE VI. ENG.8 ASSESSMENT (SELF AND TEACHER)
Item                                               | Stud. | Teach. | r
BP1: Develop tests for integrated software product | 1.96  | 1.00   | 0.53
BP2: Test integrated software product              | 1.84  | 1.08   | 0.27
BP3: Regression test integrated software           | 1.52  | 0.56   | -0.03

Table 6 presents the correlation coefficient r between student and teacher assessments for the ENG.8 Software testing process (violet). It relies on 3 Base Practices. Table 6 also presents the average assessment for each assessed item. The overall correlation coefficient is r = 0.30 and there is little indication of a correlation. Students are not familiar with regression tests, and BP3: Regression test integrated software was assessed by the teacher as Not Achieved for half of the students and Partially Achieved for the other half. If we remove BP3, we get r = 0.41, indicating a possible correlation.

IV. STUDENTS' VIEWPOINT

TABLE VII. STUDENTS' SELF-PERCEPTION ABOUT THE PRACTICUM
The Agenda project                                                             | strg agr | agr | neutral | dsgr | strg dsgr
I had the time to learn and do the project.                                    | 8  | 6  | 3 | 3 | 2
I found the project complex.                                                   | 5  | 10 | 5 | 1 | 1
I committed to perform the project.                                            | 10 | 10 | 2 | 0 | 0
I found the project realistic.                                                 | 11 | 7  | 2 | 0 | 2
I understand relationships between specifications, design, building and tests. | 10 | 6  | 5 | 1 | 0
I had to deepen my knowledge and skills to perform the project.                | 10 | 7  | 2 | 1 | 2
My work for the project helped me to understand lectures.                      | 5  | 3  | 8 | 3 | 3
I used a lot the reviewing facilities.                                         | 2  | 8  | 7 | 2 | 3
I made progress thanks to the reviewing facilities.                            | 3  | 7  | 7 | 2 | 3
I improved my working methods thanks to the project.                           | 5  | 10 | 3 | 2 | 2

Acknowledgment We thank all the students of the 2016-2017 final year of the Bachelor in Computer Science for their agreement to participate in this study, and especially Maxens Manach and Killian Monot, who collected and anonymized the assessments. We thank Laurence Duval, a teacher who coached and assessed half of the students during the first week.
01756217
en
[ "spi.auto" ]
2024/03/05 22:32:10
2014
https://hal.science/tel-01756217/file/50376-2014-Haddad.pdf
Alain Haddad; Vincent Cocquempot. Thesis of Alain Haddad, Lille 1, 2014. Acknowledgments: I express my thanks to Professor Ahmed El Hajjaji and to Mohammed M'saad. I would particularly like to thank Brigitte Cantegrit, Marie-Hélène Bekaert, Michel Edel, Jean-Marc Vannobel, Lotfi Belkoura, John Klein and Christophe Fiter. I wish to address my thanks to them for the trust they showed me by accepting the scientific supervision of my research work. I am very grateful to them for having guided and encouraged me and for letting me benefit from their experience. To my parents. List of figures: Figure 1.1: The different steps of model-based diagnosis [START_REF] Toscano | Commande et diagnostic des systèmes dynamiques[END_REF]. General introduction: To cope with malfunctions of automated systems, it has proved necessary to develop diagnosis and fault-tolerant control strategies. These strategies aim to ensure operational safety, while continuing the systems' mission, by detecting and compensating for faults. The work presented in this thesis falls within the framework of active fault-tolerant control for overactuated systems. A system is said to be overactuated if the number of its available actuators is greater than the number of actuators required to accomplish a mission. Redundant actuators are used to increase the system's efficiency and to obtain better performance [START_REF] Song | Comparison between braking and steering yaw moment controllers considering ABS control aspects[END_REF], but also to tolerate actuator failures, as in [START_REF] Vermillon | Optimal Modular Control of Overactuated Systems -Theory and Applications[END_REF]. 
The strategy developed in this thesis is divided into 4 steps: fast fault detection; activation of a fault-tolerant control that ensures the system's trajectory tracking in the presence of the fault; precise fault isolation; and, finally, reconfiguration of the system using only the healthy components. We apply the approach to an overactuated autonomous electric vehicle with 2 steering wheels and 4 driving wheels (2WS4WD: 2 Wheel Steering, 4 Wheel Driving). Active fault-tolerant control strategies aim to modify the control law according to the detected and identified fault. They require a diagnosis tool whose role is to detect, isolate and even estimate the fault(s) online. Active fault-tolerant controls are classically grouped into two approaches [START_REF] Patton | Fault-tolerant control systems: The 1997 situation[END_REF]. The first is accommodation, characterized by the online modification of the control law according to the identified fault. The second is reconfiguration, which consists in switching from one control law to another, established offline, in order to tolerate the fault. Fast detection, isolation and/or estimation of faults are essential to guarantee the required system performance, as shown in [START_REF] Mariton | Detection delays, false alarm rates and the reconfiguration of control systems[END_REF]. In practice, diagnosis delays can be non-negligible. As a result, the system may lose not only its nominal performance but also its stability before the fault is tolerated. However, most works applying active fault-tolerant control approaches do not consider the influence of the delay required for diagnosis on the system's performance [START_REF] Zhang | Bibliographical review on reconfigurable faulttolerant control systems[END_REF]. 
The first objective of this thesis is to design an active fault-tolerant control law that quickly guarantees certain performance requirements of the specifications for an overactuated system, after a fault has been detected by a supervision module. The control signals of the initially used actuators are not modified. Redundant actuators are activated to compensate for the effect of the fault. The second objective is to reconfigure the faulty system, keeping only the healthy components. Precise isolation of the actuator fault is then necessary. This task is computationally expensive, since generating the fault indicators requires parameter identification [START_REF] Haddad | Hierarchical Diagnosis for an overactuated autonomous vehicle[END_REF]. The system's performance must not be degraded during the time needed for diagnosis. Thesis outline: In Chapter 1, we give an overview of model-based diagnosis and fault-tolerant control strategies. We show the limits of these approaches, which justify the need to develop a fault-tolerant control able to restore and maintain the overall system's performance quickly after a fault occurs. Diagnosis and fault-tolerant control approaches applied to a 2WS4WD autonomous vehicle are presented in Chapter 2. The fault-tolerance strategies in the literature mainly use a centralized control with a task allocation among the redundant actuators established offline or online. To ensure the vehicle's trajectory tracking, this task allocation consists in controlling either the steering of the front wheels and the braking of one of the vehicle's front or rear wheels, or the steering of the front and rear wheels. 
Thèse de Alain Haddad, Lille 1, 2014

In Chapter 3, we develop an active fault-tolerant control strategy with a decentralized architecture, inspired by work in the aeronautics field ([START_REF] Falcone | A Hierarchical Model Predictive Control Framework for Autonomous Ground Vehicles[END_REF], [START_REF] Vermillon | Optimal Modular Control of Overactuated Systems -Theory and Applications[END_REF], [START_REF] Da Ronch | A Framework for Constrained Control Allocation Using CFD-based Tabular Data[END_REF]). It is based on the dynamic generation of references and applies to over-actuated systems such as the 2WS4WD vehicle. We first use a global monitoring of the system. When the global objective is no longer met (path tracking for the 2WS4WD vehicle), we activate the redundant actuators, which are unused in normal operation. These are controlled so as to compensate for the effect of the fault. The control of the redundant actuators is composed of two loops. The first, called the outer loop, computes the so-called local objectives needed to reach the global objective of the system [START_REF] Haddad | Fault Tolerant Control for Autonomous Vehicle by Generating References for Rear Wheels Steering[END_REF]. The second, called the inner loop, ensures the tracking of the local objectives produced by the outer loop [START_REF] Haddad | Fault Tolerant Control Strategy for an Overactuated Autonomous Vehicle Path Tracking[END_REF]. This approach is applied to an over-actuated 2WS4WD autonomous vehicle. To ensure the vehicle's path tracking, the four traction actuators and the front-axle steering actuator are used in normal operation.
When the global monitoring module detects a deviation of the vehicle's trajectory, the rear-axle steering actuator, unused in normal operation, is controlled so as to restore and maintain path tracking in the presence of the fault.

In Chapter 4, we address the precise isolation and identification of an actuator fault with a view to reconfiguring the control system. From the model equations, we generate structured residuals, whose computation requires the knowledge of a certain number of parameters. Identifying these parameters can be delicate, especially when the system is faulty: not only is the system's performance degraded during the identification phase, but the identification itself may be biased.

Introduction

The automation of systems reduces production costs, optimizes manufacturing time and energy expenditure, and reduces the rate of human error. On the other hand, the complexity of automation has made these systems more prone to failure. Given the importance of the tasks entrusted to them, a partial or total failure of these systems can lead to harmful, even catastrophic, consequences. To cope with process malfunctions, the design of fault-tolerant control laws has proved necessary. These algorithms aim to ensure dependability by reacting to compensate for the fault. Two types of fault-tolerant control are distinguished in the literature ([START_REF] Patton | Fault-tolerant control systems: The 1997 situation[END_REF], [START_REF] Blanke | Concepts and Methods in Fault -Tolerant Control[END_REF], [START_REF] Zhang | Bibliographical review on reconfigurable faulttolerant control systems[END_REF]): passive fault-tolerant control and active fault-tolerant control.
Passive fault-tolerant control seeks to make the system robust to certain anticipated faults. Active fault-tolerant control aims to modify the control law according to the detected and identified fault. This type of control requires a diagnosis tool, whose role is to detect, isolate and, if possible, estimate the fault(s) online.

Model-based diagnosis module

The goal of diagnosis is to detect the occurrence of a fault and to find its cause. The diagnosis modules developed in the literature can be grouped into two classes ([START_REF] Patton | Issues of Fault Diagnosis for Dynamic Systems[END_REF], [START_REF] Toscano | Commande et diagnostic des systèmes dynamiques[END_REF]): diagnosis methods that do not use an analytical behavioral model, and model-based diagnosis methods. Model-free diagnosis, which is not the subject of this thesis, uses information from previous experiments and heuristic rules to characterize the operating mode of the system. If partial knowledge of the relation linking faults to symptoms exists, inference algorithms can be applied to express this relation as rules of the form IF <condition> THEN <conclusion> ([START_REF] Heckerman | An empirical comparison of three inference methods[END_REF], [START_REF] Sheppard | Inducing diagnostic inference models from case data[END_REF]). The condition expresses the observed symptoms, while the conclusion includes the events and the faults. When no knowledge exists about the relation between faults and symptoms, classification methods are applied [START_REF] Isermann | Fault-diagnosis systems[END_REF].
These are based on classification by pattern recognition [START_REF] Duda | Pattern classification[END_REF], neural networks [START_REF] Rebouças | Use of artificial neural networks to fault detection and diagnosis[END_REF], and fuzzy logic [START_REF] Frank | Fuzzy logic and neural network applications to fault diagnosis[END_REF]. In what follows, our study is devoted to model-based diagnosis, which uses an analytical model of the system. We present below the steps of this diagnosis technique.

Model-based diagnosis uses an analytical representation of the system to generate fault indicators (also called residuals). These fault indicators are then analyzed in order to isolate any occurring fault and, if possible, to estimate its magnitude and evolution. Three main steps summarize the diagnosis procedure: generation of the fault indicators, fault detection, and fault isolation (see Figure 1.1).

Generation of the fault indicators

A fault indicator must reflect the consistency of the measured signals with the system model. For a signal generated from the inputs and outputs of the system to be a fault indicator, it must be affected by a subset of faults. Several techniques exist in the literature to generate these fault indicators. They are based on parameter estimation, state estimation, and the use of Analytical Redundancy Relations (or the parity-space method in the case of linear models).
Observer-based approaches ([START_REF] Wang | Actuator fault diagnosis: an adaptive observer -based technique[END_REF], [START_REF] Edwards | Sliding mode observers for fault detection and isolation[END_REF], [START_REF] Jiang | Fault diagnosis based on adaptive observer for a class of non-linear systems with unknown parameters[END_REF], [START_REF] Yan | Robust sliding mode observer-based actuator fault detection and isolation for a class of nonlinear systems[END_REF]): the general principle consists in comparing estimated output functions with the same measured output functions; the discrepancy between these functions is used as a residual.

Parity space: the principle of this approach, applied in Chapter 4 of the thesis, is to rewrite the equations of the analytical model of the system so as to obtain particular relations called ARRs, for Analytical Redundancy Relations ([START_REF] Chow | Analytical redundancy and the design of robust failure detection system[END_REF], [START_REF] Bibliographie Aitouche | Détection et localisation de défaillances de capteurs[END_REF], [START_REF] Patton | Fault detection and diagnosis in aerospace systems using analytical redundancy[END_REF], [START_REF] Cocquempot | Generation of robust analytical redundancy relations[END_REF]). These relations have the property of linking only known quantities, available online. The residuals are then obtained by substituting, in the ARRs, the known variables by their actual values measured on the running system. The residuals are zero when the system is fault-free, assuming that the system model is exact and that disturbances are neglected. They become nonzero when a fault to which they are sensitive occurs. Detecting all possible faults in a system requires being able to compute a number of mutually decoupled residuals equal to the number of these faults.
Obtaining the ARRs offline is a general problem of variable elimination in a system of algebraic-differential equations. When the model is linear, the elimination is performed by projection onto a particular subspace called the parity space ([START_REF] Chow | Analytical redundancy and the design of robust failure detection system[END_REF], [START_REF] Cocquempot | Fault detection and isolation for hybrid systems using structured parity residuals[END_REF], (Staroswiecki et al., 2001), [START_REF] Leuschen | Fault residual generation via nonlinear analytical redundancy[END_REF], [START_REF] Bokor | Fault detection and isolation in nonlinear systems[END_REF]). Let us return to the case of linear systems to present the redundancy relation. From equation (1.4), with the superscript $(i)$ denoting the $i$-th time shift of the signal, one can write:

$$
\begin{aligned}
x^{(1)} &= Ax + Bu \\
x^{(2)} &= Ax^{(1)} + Bu^{(1)} = A^2 x + ABu + Bu^{(1)} \\
x^{(3)} &= Ax^{(2)} + Bu^{(2)} = A^3 x + A^2 Bu + ABu^{(1)} + Bu^{(2)} \\
&\;\;\vdots \\
x^{(h)} &= A^h x + \sum_{i=1}^{h} A^{h-i} B\, u^{(i-1)}
\end{aligned}
\tag{1.6}
$$

The outputs of the system are then written as follows:

$$
\begin{bmatrix} y \\ y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(h)} \end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^h \end{bmatrix} x
+
\begin{bmatrix}
0 & 0 & \cdots & 0 \\
CB & 0 & \cdots & 0 \\
CAB & CB & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
CA^{h-1}B & CA^{h-2}B & \cdots & CB
\end{bmatrix}
\begin{bmatrix} u \\ u^{(1)} \\ \vdots \\ u^{(h-1)} \end{bmatrix}
\tag{1.7}
$$

Setting:

$$
Y = \begin{bmatrix} y \\ y^{(1)} \\ \vdots \\ y^{(h)} \end{bmatrix},\quad
U = \begin{bmatrix} u \\ u^{(1)} \\ \vdots \\ u^{(h-1)} \end{bmatrix},\quad
OBS = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^h \end{bmatrix},\quad
COM = \begin{bmatrix}
0 & \cdots & 0 \\
CB & \cdots & 0 \\
\vdots & \ddots & \vdots \\
CA^{h-1}B & \cdots & CB
\end{bmatrix}
\tag{1.8}
$$

equation (1.7) becomes:

$$ Y = OBS \cdot x + COM \cdot U \tag{1.9} $$

Multiplying both sides of equation (1.9) by a parity matrix $W$ orthogonal to $OBS$ — provided such a matrix $W$ exists (which is always the case since $OBS$ is not a regular matrix) — we obtain the following parity vector:

$$ r = W\,(Y - COM \cdot U) \tag{1.10} $$

Detection step

This step consists in evaluating the computed residual by comparing it to a reference threshold. If the evaluation result exceeds the threshold, an alarm is generated signaling the presence of a failure. Two types of error can occur during residual evaluation: the first is signaling a fault while the system is healthy (false alarm), and the second is failing to signal the fault (missed detection). To clarify this point, we apply below a hypothesis test, which is a statistical method, for fault detection [START_REF] Foucard | Introduction aux tests statistiques, enseignement assisté par ordinateur[END_REF]. Denoting by $\gamma$ the detection threshold and by $p(r \mid H_i)$ the probability density of the residual under hypothesis $H_0$ (fault-free) or $H_1$ (faulty):

$$
\begin{aligned}
P_{D0} &= 1-\alpha = P(H_0 \text{ chosen} \mid H_0 \text{ true}) = \int_{-\infty}^{\gamma} p(r \mid H_0)\,dr \\
P_{D1} &= 1-\beta = P(H_1 \text{ chosen} \mid H_1 \text{ true}) = \int_{\gamma}^{+\infty} p(r \mid H_1)\,dr \\
P_{D} &= \alpha = P(H_1 \text{ chosen} \mid H_0 \text{ true}) = \int_{\gamma}^{+\infty} p(r \mid H_0)\,dr \\
P_{ND} &= \beta = P(H_0 \text{ chosen} \mid H_1 \text{ true}) = \int_{-\infty}^{\gamma} p(r \mid H_1)\,dr
\end{aligned}
$$

Minimizing the false-alarm probability $P_D$ by increasing the reference threshold $\gamma$ increases the missed-detection probability $P_{ND}$. Likewise, minimizing $P_{ND}$ by decreasing $\gamma$ increases $P_D$. A compromise between type I and type II errors is therefore necessary. The numerical value of a static threshold is determined using empirical methods [START_REF] Depold | A unified metric for fault detection and isolation in engines[END_REF], statistical methods [START_REF] Gustafsson | Adaptive Filtering and Change Detection[END_REF], or deterministic methods [START_REF] Zhong | An LMI approach to robust fault detection filter design for discrete-time systems with model uncertainty[END_REF].
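As an illustration of equations (1.6)–(1.10), the following sketch builds the $OBS$ and $COM$ matrices for a small discrete-time model and evaluates the parity residual. The numerical values of $A$, $B$, $C$ and the horizon $h$ are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Illustrative discrete-time LTI model: x(k+1) = A x(k) + B u(k), y(k) = C x(k).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def parity_matrices(A, B, C, h):
    """Stack the model over a horizon h, giving Y = OBS.x + COM.U (eq. (1.9))."""
    p, m = C.shape[0], B.shape[1]
    OBS = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(h + 1)])
    COM = np.zeros(((h + 1) * p, h * m))
    for i in range(1, h + 1):        # block row i: y(i) = CA^i x + sum_j CA^(i-1-j) B u(j)
        for j in range(i):
            COM[i * p:(i + 1) * p, j * m:(j + 1) * m] = \
                C @ np.linalg.matrix_power(A, i - 1 - j) @ B
    return OBS, COM

def parity_residual(OBS, COM, Y, U):
    """r = W (Y - COM.U), with W spanning the left null space of OBS (eq. (1.10))."""
    u_svd, s, _ = np.linalg.svd(OBS)
    rank = int(np.sum(s > 1e-10))
    W = u_svd[:, rank:].T            # rows of W satisfy W @ OBS = 0
    return W @ (Y - COM @ U)
```

On a fault-free simulation the residual stays at numerical zero, whatever the input sequence; an actuator fault (applied input differing from the commanded one) makes it deviate, since the bias enters through $COM$ but is not annihilated by $W$.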
The choice of this fixed threshold results from a compromise between robustness and sensitivity of the decision: lowering the threshold increases false alarms, while raising it increases missed detections. To improve the decision so as to account for changing operating conditions and/or operating modes of the system, techniques using a dynamic threshold have been designed. They mostly apply adaptive methods ([START_REF] Johansson | Parametric uncertainty in sensor fault detection for a turbofan jet engine[END_REF], [START_REF] Pisu | Adaptive threshold based diagnostics for steer-by-wire systems[END_REF], [START_REF] Liu | An adaptive threshold based on support vector machine for fault diagnosis[END_REF]). The value of a dynamic threshold is then determined online and depends on the operating point of the system, on model uncertainties assumed bounded, and on measured disturbances [START_REF] Bask | Dynamic threshold generators for robust fault detection[END_REF].

Isolation step

Fault isolation corresponds to identifying the faulty component(s). From a signature matrix (also called sensitivity matrix), one can determine the origin of the fault [START_REF] Patton | Issues of Fault Diagnosis for Dynamic Systems[END_REF]. Each row i of this matrix corresponds to a residual and each column j corresponds to a fault. A 1 at position (i, j) indicates that fault j is detectable by residual i. The binary number formed by column j is called the "signature of fault j".
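To make the static/dynamic threshold trade-off concrete, the sketch below computes the type I/II error probabilities for a Gaussian residual, together with a simple adaptive threshold based on an exponentially weighted estimate of the residual spread. The distributions and the tuning constants ($\gamma_0$, $\kappa$, $\lambda$) are illustrative assumptions, not values from the thesis.

```python
import math

def error_probabilities(gamma, mu0=0.0, mu1=2.0, sigma=1.0):
    """Residual r ~ N(mu0, sigma) under H0 and N(mu1, sigma) under H1.
    Returns (alpha, beta): false-alarm and missed-detection probabilities."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    alpha = 1.0 - Phi((gamma - mu0) / sigma)   # P(r > gamma | H0): false alarm
    beta = Phi((gamma - mu1) / sigma)          # P(r < gamma | H1): missed detection
    return alpha, beta

def adaptive_threshold(residuals, gamma0=0.05, kappa=3.0, lam=0.95):
    """Dynamic threshold gamma(k) = gamma0 + kappa * sigma_hat(k), where sigma_hat
    is an exponentially weighted (EWMA) estimate of the residual spread."""
    s2, out = 0.0, []
    for r in residuals:
        s2 = lam * s2 + (1.0 - lam) * r * r
        out.append(gamma0 + kappa * math.sqrt(s2))
    return out
```

Raising $\gamma$ lowers $\alpha$ but raises $\beta$, which is exactly the compromise discussed above; the adaptive threshold widens automatically when the residual becomes noisier, trading sensitivity for robustness online.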
Two types of matrices allow fault isolation: a full-rank non-diagonal matrix made up of a set of residual vectors sensitive to the system faults, and a diagonal matrix in which each residual is sensitive to a single fault. Fault isolation can likewise be performed from the signature of directional residuals; in this case, the presence of a fault drives the residual vector along a fixed direction. For more details on this technique, the reader may consult [START_REF] Gertler | Analytical redundancy methods in fault detection and isolation : survey and synthesis[END_REF]. Once the fault is isolated, corrective actions are executed manually or automatically; fault-tolerance strategies can then be applied. In case of a major failure affecting sensitive components of the system, an emergency stop is executed to avoid material and human damage.

General principles of fault-tolerant control approaches

Research in the field of fault-tolerant control has led to the emergence of two classes of controls [START_REF] Blanke | Concepts and Methods in Fault -Tolerant Control[END_REF]: passive fault-tolerant control and active fault-tolerant control. To present the difference between these two classes, we first recall the standard control problem defined in [START_REF] Dugard | Commande adaptative : méthodologie et applications[END_REF] and taken up in (Staroswiecki et al., 2001).
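A minimal sketch of signature-based isolation: the observed Boolean pattern of fired residuals is matched against the columns of a signature matrix. The matrix below is an illustrative assumption (three residuals, three faults), not one of the thesis's matrices.

```python
import numpy as np

# Illustrative signature matrix: rows = residuals r1..r3, columns = faults f1..f3.
# A 1 at (i, j) means residual i is sensitive to fault j.
SIGNATURE = np.array([[1, 0, 1],
                      [1, 1, 0],
                      [0, 1, 1]])
FAULTS = ["f1", "f2", "f3"]

def isolate(fired):
    """Return the faults whose signature matches the observed residual pattern."""
    fired = np.asarray(fired)
    return [FAULTS[j] for j in range(SIGNATURE.shape[1])
            if np.array_equal(SIGNATURE[:, j], fired)]
```

Because the columns are pairwise distinct, any single fault yields a unique signature and is isolated unambiguously; an all-zero pattern matches no column, which corresponds to the healthy system.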
The approaches developed in this domain are generally based on H∞ control [START_REF] Yang | Reliable controller design for linear systems[END_REF], sliding mode control [START_REF] Fang | Robust antisliding control of autonomous vehicles in presence of lateral disturb ances[END_REF], and backstepping control [START_REF] Chen | Backstepping Control Design and Its Applications to Vehicle Lateral Control in Automated Highway Systems[END_REF]. The main drawbacks of this approach are ([START_REF] Li | Passive and active nonlinear fault-tolerant control of a quadrotor unmanned aerial vehicle based on the sliding mode control technique[END_REF], [START_REF] Blanke | What is faulttolerant control?[END_REF]) the inability to tolerate unanticipated faults, the limitation of the class of faults considered, and the masking of faults after their occurrence, which can lead to their propagation.

Active fault-tolerant control

Active Fault Tolerant Control (AFTC) consists in changing the structure and/or the parameters of the system's control according to the detected fault ((Staroswiecki et al., 2001), [START_REF] Hoblos | Contribution à l'analyse de la tolérance aux fautes des systèmes d'instrumentation[END_REF]) [START_REF] Jiang | Active fault tolerant control for a class of nonlinear systems[END_REF]. Methods using this approach are based on gain-scheduled control [START_REF] Leith | Survey of gain-scheduling analysis design[END_REF], the modified pseudo-inverse method [START_REF] Staroswiecki | Fault Tolerant Control : the pseudo-inverse method revisited[END_REF], and eigenstructure assignment [START_REF] Liu | Eigenstructure Assignment for Control Systems Design[END_REF]. When applying accommodation, it is impossible to guarantee the system's trajectory tracking as long as the diagnosis algorithm has not provided the information about the fault.
Moreover, the system's control (designed for nominal operation) must be modified in order to tolerate the failure. Yet, for safety-critical applications where security must be maintained in all circumstances, such as the control of an autonomous vehicle, modifying the initial controls is not recommended [START_REF] Trevathan | A Guide to the Automation Body of Knowledge, 2nd Edition[END_REF].

Reconfiguration

Reconfiguration uses the hardware redundancy of the system, or redundant control laws, to tolerate a fault. It is obtained either by hardware reconfiguration of the system or by reconfiguration of the control laws. In the first case, the internal structure of the system is changed so as to use only the healthy components. In the second, a new control law, designed offline, is chosen so as to reach the system's objective; reconfiguration then consists in finding such a control law. Methods applying the reconfiguration approach include hybrid control (Yang et al., 2010a), multiple-model interaction [START_REF] Rong | Hidden coupling in multiple model based PID controller networks[END_REF], and control reallocation [START_REF] Kalat | Control allocation and reallocation for a modified quadrotor helicopter against actuator faults[END_REF]. Reconfiguration is well suited to industrial applications since it does not modify the initial controls [START_REF] Härkegård | Resolving actuator redundancy-optimal control vs. control allocation[END_REF]. On the other hand, this approach cannot guarantee that the required performance is maintained before the fault is isolated (isolation being necessary to apply the reconfiguration strategy).
Note that a recent approach proposed in [START_REF] Yang | Spacecraft formation stabilization and fault tolerance: a state-varying switched system approach[END_REF], using reconfiguration of the control laws, switches between control laws without requiring fault isolation. Nevertheless, abrupt switching between control laws is sometimes unavoidable, which stresses the closed-loop system ([START_REF] Yang | Spacecraft formation stabilization and fault tolerance: a state-varying switched system approach[END_REF], [START_REF] Yamé | On bumps and reduction of switching transients in multicontroller systems[END_REF]). This problem of managing switched systems has been the subject of several works. Strategies ensuring a more gradual transition between controls have been proposed, relying on linear quadratic optimal control [START_REF] Turner | Linear quadratic bumpless transfer[END_REF], the $L_2$ norm [START_REF] Zaccarian | A common framework for anti-windup, bumpless transfer and reliable designs[END_REF], and interpolation [START_REF] Stoustrup | A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation[END_REF].

Conclusion

In this chapter, we have presented a quick overview of model-based diagnosis and fault-tolerant control strategies. The reconfiguration approach preserves the initial controls, which makes it suitable for certain industrial applications, such as the control of a 2WS4WD vehicle. On the other hand, when applying classical reconfiguration strategies, no performance guarantee can be provided during the transition period between the occurrence of the fault and the use of a new control law. It is therefore essential to design an approach that reacts very quickly after the detection of a fault.
Its objective must be to guarantee the performance required by the specifications in the presence of a failure, while preserving the initial control laws. In the following chapter, we present the 2WS4WD vehicle models used to design the fault-tolerant control laws, as well as the control strategies applied to this type of vehicle in case of failure. We then position our contribution with respect to existing work.

Vehicle model

Model of a 2WS4WD vehicle in the body frame $(G, \vec{x}, \vec{y}, \vec{z})$

According to the fundamental law of dynamics, the longitudinal motion equation of the vehicle, expressed at its center of gravity G, is written (Matsumoto et al., 1992):

$$ M\ddot{x}_G = F_{X1} + F_{X2} + F_{X3} + F_{X4} - F_c \sin(\beta) \tag{2.1} $$

The lateral force transmitted by wheel $i$ in the body frame is expressed from the tire forces $F_{xi}$, $F_{yi}$ and the steering angle $\delta_i$ as:

$$ F_{Yi} = F_{xi}\sin(\delta_i) + F_{yi}\cos(\delta_i) \tag{2.4} $$

The centrifugal force $F_c$ of equations (2.1) and (2.3) is expressed by:

$$ F_c = \frac{M V_G^2}{\rho} \tag{2.5} $$

with $V_G$ the speed at the center of gravity of the vehicle and $\rho$ the radius of curvature. The radius of curvature $\rho$ and the vehicle speed $V_G$ are linked by the following relation:

$$ V_G = \rho\,(\dot{\psi} + \dot{\beta}) \tag{2.6} $$

(A nomenclature table — symbols, quantities and units for the positions, forces, angles, inertias, geometric parameters and cornering stiffnesses — accompanies the model in the original document.)

The relation given in (2.5) can then be written:

$$ F_c = M V_G\,(\dot{\psi} + \dot{\beta}) \tag{2.7} $$

with $\dot{\psi} + \dot{\beta}$ the slip rate at vehicle level. The yaw motion equation of the vehicle, expressed at its center of gravity G, is given in (2.8) below. Since these equations are nonlinear, we rewrite them in the following section so as to obtain a linear model of the system (as in (Matsumoto et al., 1992), [START_REF] Plumlee | Control of a ground vehicle using quadratic programming based control allocation technique[END_REF]).
$$ J\ddot{\psi} = l_f\,(F_{Y1} + F_{Y2}) - l_r\,(F_{Y3} + F_{Y4}) + d\,(F_{X1} - F_{X3}) + d\,(F_{X2} - F_{X4}) \tag{2.8} $$

with $J$ the yaw inertia of the vehicle, $l_f$ and $l_r$ the distances from the center of gravity to the front and rear axles, and $d$ the half-track. (A figure showing the wheel forces $F_{Xi}$, $F_{Yi}$, the steering angles $\delta_i$ and the wheel speeds $V_{wi}$ in the body frame $(G, \vec{u}_x, \vec{u}_y)$ accompanies this model in the original document.)

Considering the steering angles as small, equations (2.2) and (2.4) can be re-expressed as:

$$ F_{Xi} = F_{xi} - F_{yi}\,\delta_i \tag{2.9} $$

$$ F_{Yi} = F_{xi}\,\delta_i + F_{yi} \tag{2.10} $$

The loads applied to the wheels can be expressed as functions of the distances to the center of gravity (equation (2.11): the front-wheel loads are proportional to $l_r/(2(l_f+l_r))$ and the rear-wheel loads to $l_f/(2(l_f+l_r))$). The longitudinal tire force is represented as:

$$ F_{xi} = \mu_{xi}\, m_i\, g \tag{2.12} $$

where the adhesion coefficient $\mu_{xi}$ (2.13) is given by a nonlinear function $f_i$ depending on the tire/road contact, $\alpha_i$ denoting the side-slip angle, also called lateral adhesion. The lateral tire force is represented by:

$$ F_{yi} = C_i\,\alpha_i \tag{2.14} $$

with the cornering stiffness $C_i$ expressed as:

$$ C_i = f_i(G_i)\, m_i\, g \tag{2.15} $$

The wheel side-slip angle is rewritten:

$$ \alpha_i = \delta_i - \beta_i \tag{2.16} $$

with:

$$ \beta_{1,2} = \frac{\dot{y}_G + l_f\,\dot{\psi}}{V_G}, \qquad \beta_{3,4} = \frac{\dot{y}_G - l_r\,\dot{\psi}}{V_G} \tag{2.17} $$

Considering that the steering angles of the wheels of a same axle (front and rear) are equal, we have:

$$ \delta_1 = \delta_2 = \delta_f, \qquad \delta_3 = \delta_4 = \delta_r \tag{2.18} $$

The longitudinal forces generated at the wheels are represented as:

$$ F_{x1} = F_{x2} = \frac{F_{xf}}{2}, \qquad F_{x3} = F_{x4} = \frac{F_{xr}}{2} \tag{2.19} $$

After these rewritings, the lateral motion equation of the vehicle expressed in (2.3) can be re-expressed as:

$$ M\ddot{y}_G = -\frac{C_f + C_r}{V_G}\,\dot{y}_G - \left(M V_G + \frac{C_f l_f - C_r l_r}{V_G}\right)\dot{\psi} + (F_{xf} + C_f)\,\delta_f + (F_{xr} + C_r)\,\delta_r \tag{2.20} $$

and the yaw motion equation presented in
(2.8) can be rewritten as follows:

$$ J\ddot{\psi} = -\frac{C_f l_f - C_r l_r}{V_G}\,\dot{y}_G - \frac{C_f l_f^2 + C_r l_r^2}{V_G}\,\dot{\psi} + l_f\,(F_{xf} + C_f)\,\delta_f - l_r\,(F_{xr} + C_r)\,\delta_r + d\,\Delta F_{xf} + d\,\Delta F_{xr} \tag{2.21} $$

with $\Delta F_{xf}$, $\Delta F_{xr}$ the front and rear left/right longitudinal force differences, which create a yaw moment. The state-space representation of this system can then be expressed as:

$$ \begin{bmatrix} \ddot{y}_G \\ \ddot{\psi} \end{bmatrix} = A_1 \begin{bmatrix} \dot{y}_G \\ \dot{\psi} \end{bmatrix} + B_1 \begin{bmatrix} \delta_f \\ \delta_r \\ \Delta F_{xf} \\ \Delta F_{xr} \end{bmatrix} \tag{2.22} $$

with:

$$
A_1 = \begin{bmatrix} -\dfrac{P_1}{M} & -\dfrac{P_2}{M} - V_G \\[2mm] -\dfrac{P_2}{J} & -\dfrac{P_3}{J} \end{bmatrix},
\qquad
B_1 = \begin{bmatrix} \dfrac{Q_{1f}}{M} & \dfrac{Q_{1r}}{M} & 0 & 0 \\[2mm] \dfrac{l_f Q_{1f}}{J} & -\dfrac{l_r Q_{1r}}{J} & \dfrac{d}{J} & \dfrac{d}{J} \end{bmatrix}
$$

$$ P_1 = \frac{C_f + C_r}{V_G}, \quad P_2 = \frac{C_f l_f - C_r l_r}{V_G}, \quad P_3 = \frac{C_f l_f^2 + C_r l_r^2}{V_G}, \quad Q_{1f} = F_{xf} + C_f, \quad Q_{1r} = F_{xr} + C_r \tag{2.23} $$

From equation (2.22) one can obtain the classical bicycle model, as in [START_REF] Klomp | Longitudinal force distribution and road vehicle handling[END_REF]:

$$
\begin{bmatrix} \ddot{y}_G \\ \ddot{\psi} \end{bmatrix}
=
\begin{bmatrix} -\dfrac{P_1}{M} & -\dfrac{P_2}{M} - V_G \\[2mm] -\dfrac{P_2}{J} & -\dfrac{P_3}{J} \end{bmatrix}
\begin{bmatrix} \dot{y}_G \\ \dot{\psi} \end{bmatrix}
+
\begin{bmatrix} \dfrac{C_f}{M} & \dfrac{C_r}{M} \\[2mm] \dfrac{l_f C_f}{J} & -\dfrac{l_r C_r}{J} \end{bmatrix}
\begin{bmatrix} \delta_f \\ \delta_r \end{bmatrix}
\tag{2.24}
$$

The linear model presented by equation (2.22) is not unique. Other models exist that include the road curvature angle [START_REF] Zhou | Robust sliding mode control of 4WS vehicles for automatic path tracking[END_REF], the vehicle's deviation speed (Moriwaki 2005), and the vehicle's slip angle [START_REF] Plumlee | Control of a ground vehicle using quadratic programming based control allocation technique[END_REF]. In the following, we present the model including the vehicle slip angle β, which will be used in Section 2.4.1.
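The bicycle model (2.24) can be checked numerically. The sketch below builds the $A$ and $B$ matrices for illustrative parameter values (assumed here, not taken from the thesis) and integrates a front-steering step, verifying that the model is stable and yields a yaw rate of the expected sign.

```python
import numpy as np

# Illustrative single-track (bicycle) model, state (lateral velocity v_y, yaw rate psi_dot).
M, J = 1500.0, 2500.0          # mass [kg], yaw inertia [kg m^2]  (assumed values)
lf, lr = 1.2, 1.4              # CoG-to-axle distances [m]
Cf, Cr = 8.0e4, 8.0e4          # cornering stiffnesses [N/rad]
Vg = 20.0                      # longitudinal speed [m/s]

A = np.array([[-(Cf + Cr) / (M * Vg), -Vg - (Cf * lf - Cr * lr) / (M * Vg)],
              [-(Cf * lf - Cr * lr) / (J * Vg), -(Cf * lf**2 + Cr * lr**2) / (J * Vg)]])
B = np.array([[Cf / M, Cr / M],
              [lf * Cf / J, -lr * Cr / J]])

def step_response(delta_f, T=5.0, dt=1e-3):
    """Euler-integrate the model under a constant front steering angle delta_f."""
    x = np.zeros(2)
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x + B @ np.array([delta_f, 0.0]))
    return x   # near steady-state (v_y, yaw rate)
```

With $C_f l_f < C_r l_r$ (an understeering configuration), both eigenvalues of $A$ have negative real parts at any speed, and a positive front steering angle settles to a positive (left-turn) yaw rate.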
The vehicle's slip angle, not considered in the model presented previously ([START_REF] Plumlee | Control of a ground vehicle using quadratic programming based control allocation technique[END_REF]), is estimated by:

$$ \beta = \arctan\!\left(\frac{V_y}{V_x}\right) \tag{2.25} $$

Differentiating this angle with respect to time, we obtain [START_REF] Plumlee | Control of a ground vehicle using quadratic programming based control allocation technique[END_REF]:

$$ \dot{\beta} = \frac{\dot{V}_y V_x - V_y \dot{V}_x}{V_x^2 + V_y^2} \tag{2.26} $$

Rewriting the lateral and yaw dynamics in terms of β yields equation (2.27), and the state-space representation then becomes:

$$ \dot{X}_2 = A_2 X_2 + B_2 \begin{bmatrix} \delta_f \\ \delta_r \\ \Delta F_{xf} \\ \Delta F_{xr} \end{bmatrix} \tag{2.28} $$

where the state $X_2$ now includes the slip angle β, and the matrices $A_2$ and $B_2$ (2.29) are built from the same parameters $P_1, P_2, P_3$ as before, with the β-row terms scaled by $1/(M V_G)$ (entries of the form $-\frac{C_f + C_r}{M V_G}$, $\frac{C_f}{M V_G}$, $\frac{C_r}{M V_G}$).

Model of the steering dynamics

Applying the angular momentum theorem at the center of axle $i$, as in [START_REF] Dumont | Tolérance active aux fautes des systèmes d'instrumentation[END_REF], we obtain the following equation:

$$ J_i \ddot{\delta}_{wi} = \sum M_{ext}(\vec{F}_i) \tag{2.30} $$

where $\sum M_{ext}(\vec{F}_i)$ denotes the set of moments applied at the center of axle $i$. The moments considered in equation (2.30) are the following ((Gillespie, 1992), [START_REF] Proca | Identification of power steering system dynamic models[END_REF], [START_REF] Dumont | Tolérance active aux fautes des systèmes d'instrumentation[END_REF]):

- the self-aligning moment $M_{SAi}$, created by the resistive force $F_{ri}$ acting through the offset $d_i$ and depending on the caster angle $\nu_i$ and on the steering angle (2.31)–(2.33);
- the moment $M_{vi}$ created by the vertical force $F_{zi}$ acting through the offset $d_i$ at the wheel steering angle $\delta_{wi}$ (2.34);
- the moment created by the lateral force at wheel $i$ about the vertical axis through the axle center, described below.
This moment likewise depends on the caster angle of the wheel: when the caster angle is positive, it has an understeering effect. It is represented by:

$$ M_{Li} = F_{yi}\, r_i \tan(\nu_i) \tag{2.35} $$

Equation (2.30) finally becomes:

$$ J_i \ddot{\delta}_{wi} = u_i - b_i \dot{\delta}_{wi} - M_{Ti} \tag{2.36} $$

with

$$ M_{Ti} = M_{ATi} + M_{SAi} + M_{vi} + M_{Li} \tag{2.37} $$

Adding the front and rear wheel steering dynamics to models (2.23) and (2.28), we obtain the augmented models (2.38) and (2.39), of the form:

$$ \dot{X}_T = A_T X_T + B_T \begin{bmatrix} u_f \\ u_r \\ \Delta F_{xf} \\ \Delta F_{xr} \end{bmatrix} + E_T \begin{bmatrix} M_{Tf} \\ M_{Tr} \end{bmatrix} $$

where the augmented state $X_T$ collects the lateral and yaw dynamics together with the front and rear steering angles and rates $(\delta_{wf}, \dot{\delta}_{wf}, \delta_{wr}, \dot{\delta}_{wr})$, $u_f$ and $u_r$ are the steering actuator inputs, and the matrices $A_T$, $B_T$, $E_T$ (given in (2.40)–(2.44)) combine the entries of $A_1$, $B_1$ (respectively $A_2$, $B_2$) with the steering dynamics terms $1/J_f$, $1/J_r$, $b_f/J_f$, $b_r/J_r$.

Model of a 2WS4WD vehicle in the ground frame $(O, \vec{x}_0, \vec{y}_0, \vec{z}_0)$

A different modeling technique is presented in this section: the system expressed with equations (2.1)–(2.8) is now written in a fixed frame attached to the ground.
In a first step, we establish the model of a standard 2WS4WD vehicle moving in the ground-fixed plane OXYZ, before moving to the model of the 2WS4WD vehicle of [START_REF] Baille | Le Cycab de l'INRIA Rhône-Alpes[END_REF]. Projecting the fundamental law of dynamics onto the ground frame, with $V_G = \sqrt{\dot{x}^2 + \dot{y}^2}$, β the slip angle and ψ the yaw angle, yields the longitudinal and lateral equations (2.70) and (2.71), which combine the wheel forces $F_{Xi}$, $F_{Yi}$, the centrifugal force $F_c$ and trigonometric terms in $\psi$ and $\beta$. Collecting these equations, the vehicle model is put in the compact nonlinear form (2.75), with state

$$ X = (x,\; y,\; \psi,\; \dot{x},\; \dot{y},\; \dot{\psi})^T, \qquad U = (\delta_{wf},\; \delta_{wr})^T \tag{2.76} $$

Such over-actuated models are also the natural setting for control-allocation techniques ([START_REF] Durham | Constrained Control Allocation[END_REF], [START_REF] Bodson | Evaluation of Optimization Methods for Control Allocation[END_REF], (Lewis et al.), [START_REF] Hac | Unified Control of Brake-and Steer-by-Wire Systems Using Optimal Control Allocation Methods[END_REF]). To this model, we add the front and rear wheel steering dynamics as follows:

$$ \dot{X} = f(X) + g(U) \tag{2.77} $$

with the augmented state

$$ X = (x,\; y,\; \psi,\; \dot{x},\; \dot{y},\; \dot{\psi},\; \delta_{wf},\; \dot{\delta}_{wf},\; \delta_{wr},\; \dot{\delta}_{wr})^T \tag{2.78} $$
                                  ( tan( ) tan( )) ) L x y ( tan( ) tan( )) L BM J BM J                                                                              (2.79) T fr U u u    (2.80) ( ) 0 0 0 0 0 0 0 T f r fr u u gU JJ          (2. Commande tolérante aux fautes centralisée pour véhicule autonome 2WS4WD avec allocation établie hors ligne Nous présentons dans ce qui suit un recueil de travaux qui appliquent la commande tolérante aux fautes centralisée sur un véhicule autonome suractionné 2WS4WD en considérant une allocation établie hors ligne. Ces travaux sont majoritairement basés sur la commande tolérante aux fautes passive qui cherche à assurer l'insensibilité du système à une classe de défauts. Ils utilisent des modèles du système avec des régions incertaines pour lesquels ils élaborent une commande robuste. Les méthodes présentées dans cette section sont basées sur la commande par mode glissant ( [START_REF] Zhou | Robust sliding mode control of 4WS vehicles for automatic path tracking[END_REF], [START_REF] Hiraoka | Automatic path-tracking controller of a four-wheel steering vehicle[END_REF]) et la commande Backstepping [START_REF] Chen | Backstepping Control Design and Its Applications to Vehicle Lateral Control in Automated Highway Systems[END_REF]. 
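Avant de détailler ces commandes, le comportement cinématique simplifié sous-jacent au véhicule à double train directeur (vitesse de lacet θ̇ = V(tan δwf − tan δwr)/L, déplacements ẋ = V cos θ et ẏ = V sin θ, glissements négligés) peut être illustré par l'esquisse suivante. Les valeurs numériques (V, L, angles) sont arbitraires ; il ne s'agit pas d'une implémentation du modèle dynamique complet 2WS4WD du mémoire.

```python
import math

def simulate(delta_f, delta_r, V=1.0, L=2.0, T=1.0, dt=1e-3):
    """Integre le modele cinematique simplifie d'un vehicule a deux trains
    directeurs (sans glissement) : theta_dot = V (tan d_f - tan d_r) / L."""
    x = y = theta = 0.0
    for _ in range(round(T / dt)):
        x += dt * V * math.cos(theta)
        y += dt * V * math.sin(theta)
        theta += dt * V * (math.tan(delta_f) - math.tan(delta_r)) / L
    return x, y, theta
```

Avec δwf = δwr, la vitesse de lacet est nulle et le véhicule reste en ligne droite ; des braquages opposés (δwr = −δwf) doublent la vitesse de lacet, ce qui illustre l'intérêt de la redondance offerte par le double train directeur.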
La commande par mode glissant :

Considérons le modèle latéral du véhicule mis sous la forme d'état suivante :

Ẋ₁ = X₂  (2.82)

Ẋ₂ = (A₁+ΔA₁)X₁ + (A₂+ΔA₂)X₂ + (B₁+ΔB₁)U + (B₂+ΔB₂)d + (B₃+ΔB₃)W = A₁X₁ + A₂X₂ + B₁U + B₂d + E  (2.83)

avec :

E = ΔA₁X₁ + ΔA₂X₂ + ΔB₁U + (B₃+ΔB₃)W  (2.84)

U = (δwf δwr)ᵀ  (2.85)

X₁, X₂ ∈ R^{n₁}, U ∈ R^{m₁}, d ∈ R^{p₁}  (2.86)

où X₁ regroupe les écarts latéraux mesurés à l'avant et à l'arrière du véhicule et X₂ = Ẋ₁ leurs dérivées, d représente l'entrée de perturbation et W un terme exogène lié à la route. Les matrices A₁, A₂, B₁, B₂ et B₃ s'expriment en fonction des raideurs de dérive Cf et Cr, des distances lf et lr, de la masse M, de l'inertie de lacet J et de la vitesse V_G ; leurs expressions détaillées ne sont pas reproduites ici.

On définit la surface de glissement S = P₁X₁ + P₂X₂ et la fonction de Lyapunov candidate :

V = ½ SᵀS  (2.87)

En dérivant l'équation (2.87) par rapport au temps, on obtient :

V̇ = SᵀṠ = Sᵀ(P₁Ẋ₁ + P₂Ẋ₂) = Sᵀ(P₁X₂ + P₂(A₁X₁ + A₂X₂ + B₁U + B₂d + E))  (2.88)

Pour une commande U définie par :

U = −(P₂B₁)⁻¹ [P₂A₁X₁ + (P₁ + P₂A₂)X₂ + P₂B₂d + η₁ sgn(S)]  (2.89)

l'équation (2.88) s'écrit :

V̇ = Sᵀ(P₂E − η₁ sgn(S)) = SᵀP₂E − η₁‖S‖ ≤ ‖P₂‖·‖E‖·‖S‖ − η₁‖S‖  (2.90)

D'où, pour η₁ > ‖P₂‖·‖E‖ :

V̇ ≤ −(η₁ − ‖P₂‖·‖E‖)‖S‖ < 0  (2.91)

La condition de glissement est ainsi satisfaite malgré les incertitudes regroupées dans E. La fonction signe introduit toutefois un phénomène de réticence (chattering), que des modes glissants d'ordre deux permettent d'atténuer ((Bartolini et al.), (Mammar et al.)).

La commande backstepping :

La commande backstepping est une commande récursive qui s'applique à des classes de systèmes non linéaires.
La loi de commande est calculée en plusieurs étapes en utilisant un calcul récursif des fonctions de Lyapunov, comme dans (Chen et al.). Pour ce faire, le système global est divisé en sous-systèmes. Chaque sous-système est ensuite contrôlé par des entrées virtuelles, qui sont des variables du sous-système, de manière à garantir sa stabilité. Des extensions successives de ces sous-systèmes sont par la suite appliquées pour finalement atteindre tout le système. La loi de commande est ainsi calculée dans une dernière étape de manière à garantir la stabilité du système global.

Considérons le système suivant (Khalil) :

ż₁ = z₁² − z₁³ + z₂
ż₂ = u  (2.92)

En première étape, partageons le système (2.92) en deux sous-systèmes :

ż₁ = z₁² − z₁³ + z₂  ;  ż₂ = u  (2.93)

Pour le premier sous-système, choisissons z₂ comme entrée virtuelle. Ce sous-système est alors réécrit de la manière suivante :

ż₁ = z₁² − z₁³ + φ  (2.94)

avec φ = z₂ l'entrée virtuelle. Choisissons la fonction de Lyapunov candidate :

V₁(z₁) = z₁²/2  (2.95)

Sa dérivée par rapport au temps s'écrit :

V̇₁(z₁) = z₁ż₁ = z₁(z₁² − z₁³ + φ)  (2.96)

Choisissons alors la loi de commande virtuelle :

φ = −z₁² − z₁  (2.97)
Introduisons la valeur de φ définie dans (2.97) dans (2.96) ; nous obtenons :

V̇₁(z₁) = z₁(z₁² − z₁³ − z₁² − z₁) = −z₁⁴ − z₁² < 0, ∀z₁ ≠ 0  (2.98)

Ceci implique qu'en appliquant la loi de commande virtuelle au sous-système (2.94), celui-ci converge asymptotiquement vers le point z₁ = 0.

Considérons le changement de variable suivant :

z̄₂ = z₂ − φ  (2.99)

et choisissons une fonction de Lyapunov candidate de la forme suivante :

V₂(z₁, z̄₂) = V₁(z₁) + z̄₂²/2  (2.100)

Ensuite, calculons la loi de commande u qui assure que la dérivée de V₂(z₁, z̄₂) par rapport au temps vérifie la condition suivante :

V̇₂(z₁, z̄₂) < 0 pour (z₁, z̄₂) ≠ (0, 0)  (2.101)

V̇₂(z₁, z̄₂) s'exprime par :

V̇₂(z₁, z̄₂) = z₁(−z₁ − z₁³ + z̄₂) + z̄₂(u + (1 + 2z₁)(−z₁ − z₁³ + z̄₂)) = −z₁⁴ − z₁² + z̄₂(z₁ + (1 + 2z₁)(−z₁ − z₁³ + z̄₂) + u)  (2.102)

Pour une loi de commande u définie par :

u = −z₁ − (1 + 2z₁)(−z₁ − z₁³ + z̄₂) − z̄₂  (2.103)

l'équation (2.102) devient :

V̇₂(z₁, z̄₂) = −z₁⁴ − z₁² − z̄₂²  (2.104)

De (2.104), nous obtenons :

V̇₂(z₁, z̄₂) = −z₁⁴ − z₁² − z̄₂² < 0 pour (z₁, z̄₂) ≠ (0, 0)  (2.105)

La loi de commande (2.103) garantit ainsi la convergence asymptotique du système global (2.92) vers l'origine.

Commande tolérante aux fautes centralisée pour véhicule autonome 2WS4WD avec allocation établie en ligne

Nous présentons dans ce qui suit un recueil de travaux qui appliquent la commande tolérante aux fautes centralisée sur un véhicule autonome suractionné 2WS4WD en considérant une allocation établie en ligne. Ces travaux relèvent de la commande tolérante aux fautes active et nécessitent donc un module de diagnostic pour détecter et isoler les défauts (voir le Chapitre 1 du mémoire).
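La convergence établie en (2.105) pour l'exemple académique (2.92) bouclé par la loi (2.103) peut être vérifiée numériquement. L'esquisse suivante intègre le système par un schéma d'Euler ; le pas d'intégration, l'horizon et la condition initiale sont des choix arbitraires d'illustration.

```python
import numpy as np

def phi(z1):
    """Commande virtuelle (2.97) stabilisant le sous-système en z1."""
    return -z1 - z1**2

def u_law(z1, z2):
    """Loi de commande (2.103), réécrite dans les variables d'origine (z1, z2)."""
    z2b = z2 - phi(z1)            # changement de variable (2.99)
    dz1 = z1**2 - z1**3 + z2      # dynamique de z1 (2.92)
    return -z1 - (1 + 2 * z1) * dz1 - z2b

def simulate(z0, T=10.0, dt=1e-3):
    """Intégration d'Euler du système (2.92) bouclé par (2.103)."""
    z1, z2 = z0
    for _ in range(round(T / dt)):
        u = u_law(z1, z2)
        z1, z2 = z1 + dt * (z1**2 - z1**3 + z2), z2 + dt * u
    return z1, z2
```

Conformément à (2.105), l'état converge vers l'origine quelle que soit la condition initiale testée.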
Nous détaillons dans cette section plus particulièrement deux techniques de commande, à savoir la commande linéaire quadratique (LQ) ((Anderson et al.), (Esmailzadeh et al.), (Plumlee et al.)) et la commande hybride ou à commutations ((Yang et al., 2010a), (Yang et al.)). Appliquées au véhicule autonome 2WS4WD, ces stratégies consistent à redistribuer en ligne les forces au niveau des roues ((Sakai et al.), (Casavola et al.), (Luo et al.)).

Le critère de performance J est établi à partir de la somme pondérée de l'énergie de l'état x et de la commande u (Anderson et al.). Considérons l'exemple suivant, détaillé dans (Plumlee et al.), où l'état x = (β r)ᵀ regroupe l'angle de dérive β et la vitesse de lacet r du véhicule, r_des désignant la vitesse de lacet désirée. Le modèle s'écrit, sous une forme usuelle du modèle bicyclette à deux degrés de liberté :

ẋ = A x + B_u u

avec

A = [ −(Cf+Cr)/(M V_G)    −1 + (lr Cr − lf Cf)/(M V_G²) ;
      (lr Cr − lf Cf)/J    −(lf² Cf + lr² Cr)/(J V_G) ]

où Cf et Cr sont les raideurs de dérive avant et arrière, lf et lr les distances des essieux au centre de gravité, M la masse, J l'inertie de lacet et V_G la vitesse du véhicule.
Le critère d'allocation combine un terme quadratique pénalisant l'effort de commande, uᵀQ_u u, et des termes de pénalité sur l'écart entre les efforts généralisés produits et les efforts désirés (équation (2.112)). On peut ensuite exprimer l'erreur d'allocation e comme l'écart entre l'effort généralisé obtenu et l'effort désiré.

Cette méthode permet de maintenir une vitesse constante et de réduire l'usure des pneus causée par des freinages répétés et excessifs (Song et al.). L'application de cette stratégie de commande nécessite de gérer correctement les commutations. En effet, le changement brusque des lois de commande provoque des sollicitations instantanées des actionneurs, ce qui accélère leur usure et engendre des changements de trajectoire rapides et saccadés.

Conclusion

Ce chapitre a passé en revue les méthodes décrites dans la littérature qui appliquent la commande tolérante aux fautes aux véhicules suractionnés.

Introduction

Nous présentons dans ce chapitre une approche qui permet de tolérer les défauts actionneurs et qui tente de répondre au cahier des charges présenté ci-dessus pour des systèmes suractionnés.

Cette section s'organise comme suit : dans un premier temps, nous rappelons le modèle de véhicule utilisé pour l'élaboration de la loi de commande. Dans un deuxième temps, nous présentons l'algorithme de diagnostic (FD) utilisé pour détecter rapidement un comportement défaillant correspondant à un défaut d'actionneur. Puis, nous présentons la synthèse de la loi de commande tolérante aux fautes calculée à partir de deux boucles de commande interconnectées : une boucle externe et une boucle interne. Finalement, nous présentons des résultats de simulation en utilisant les logiciels CarSim (pour simuler la dynamique du véhicule) et Matlab/Simulink (pour implanter les lois de commande).
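Le principe d'allocation en ligne évoqué au chapitre précédent (répartir un effort généralisé désiré v entre actionneurs redondants en minimisant un critère quadratique, puis mesurer l'erreur d'allocation e = Bu − v) peut s'esquisser comme suit. La matrice d'efficacité B et les bornes d'actionneurs sont fictives ; la résolution par pseudo-inverse remplace ici, pour simplifier, la programmation quadratique utilisée dans les travaux cités.

```python
import numpy as np

def allocate(B, v, u_min, u_max):
    """Allocation par moindres carres : solution de norme minimale de B u = v
    (pseudo-inverse), puis saturation des actionneurs.
    Retourne u et l'erreur d'allocation e = B u - v."""
    u = np.linalg.pinv(B) @ v          # solution de norme minimale
    u = np.clip(u, u_min, u_max)       # prise en compte des bornes
    return u, B @ u - v

# Exemple fictif : 4 forces de roue produisant (effort longitudinal, moment de lacet).
B = np.array([[1.0,  1.0, 1.0,  1.0],
              [0.8, -0.8, 0.8, -0.8]])
u, e = allocate(B, np.array([2.0, 0.0]), -1.0, 1.0)
```

En cas de défaut d'un actionneur, il suffit d'annuler la colonne correspondante de B pour réallouer l'effort aux actionneurs sains, ce qui est précisément l'esprit des stratégies d'allocation tolérantes aux fautes citées plus haut.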
Modèle du véhicule 2WS4WD X X X X c Y Y Y Y c wf wr X X X X c Y Y Y Y c x y cos( ) x y sin( ) Pour des variations faibles de la vitesse G V en fonction du temps [START_REF] Rajamani | Lateral Control of a backward driven front-steering vehicle[END_REF], les accélérations longitudinales et latérales du véhicule, exprimées dans l'équation (3.4), peuvent être approximées comme suit : ( F F F F F ) ( F F F F F ) cos( ) MM ( tan( ) tan( )) ( x y ) sin( ) L ( F F F F F ) ( F F F F F ) sin( ) MM f ( X ) ( x y )cos(                                                ( tan( ) tan( )) ) L x y ( tan( ) tan( )) L BM J BM J                                                                              (3.(3.9) ( ) 0 0 0 0 0 0 0 T f r fr u u gU JJ          (3.10) Tf bf ATf SAf vf Lf M M M M M M      (3.11) ( cos( )) sin( ) G G dV x V dt        (3.13) ( sin( )) cos( ) G G dV yV dt      (3.14) En introduisant les équations (3.13) et (3.14) dans (3.9), on obtient : x y cos( ) x y sin( ) ( tan( ) tan( )) ( x y ) sin( ) L ( tan( ) tan( )) ( x y )cos( ) L x y ( tan( ) tan( )) f ( X ) L BM J BM J                                                                             (3.15) Notre objectif est d'assurer le contrôle du mouvement latéral du véhicule. Nous réécrivons alors les équations du système en séparant les dynamiques du mouvement longitudinal, latéral et de lacet, ainsi que les dynamiques de braquage des roues avant et arrière, comme suit : Toute variation de   induit une variation de l'angle de lacet  et par la suite la variation de la position latérale du véhicule. Ainsi, un défaut au niveau de l'un des actionneurs de traction ou de direction modifie   et affecte alors la trajectoire du véhicule. 
1 1 1 2 3 4 5 X f ( X , X , X , X , X )   (3.16) 2 2 1 2 3 4 5 X f ( X , X , X , X , X )   (3.17) 3 3 1 2 4 5 X f ( X , X , X , X )   (3.18) 4 4 4 f X f ( X ,u )   (3.19) 5 5 5 r X f ( X ,u )   (3.20) avec Thèse de Alain Haddad, Lille 1, 2014 1 T X ( y y )   , 2 T X ( x x )   , 3 X ( )   4 T wf wf X ( )    , 5 T wr wr X ( )    (3.21) 2 2 1 1 1 2 3 1 1 2 3 4 5 2 2 2 4 2 5 1 1 1 2 3 ( N X ) ( N X ) sin( X ) f ( X , X , X , X , X ) ( tan( N X ) tan( N X )) (( N X ) ( N X ) )cos( X ) L               (3.22) 2 2 1 1 1 2 3 2 1 2 3 4 5 2 2 2 4 2 5 1 1 1 2 3 ( N X ) ( N X ) cos( X ) f ( X , X , X , X , X ) ( tan( N X ) tan( N X )) (( N X ) ( N X ) )sin( X ) L                (3.23) 22 1 1 1 2 2 4 2 5 3 1 2 4 5 ( N X ) ( N X ) ( tan( N X ) tan( N X )) f ( X , X , X , X ) ( ) L   (3.24) 14 4 4 1 4 T f Tf f f f B N X M u f ( X ,U ) N X J         (3.25) 15 5 5 1 5 T Tr r r r N X M u f ( X ,U ) N X J        (3. l F F F F J l F F F F J d F F F F J d F F F J                                 4 sin( )) Nous choisissons donc comme résidu : 1 ref r y( t ) y( t ) y ( t )     (3.31) Ce résidu est ensuite comparé à un seuil dynamique qui est fonction de la vitesse et de l'orientation du véhicule. Le choix de ce seuil dynamique est justifié par le fait que la vitesse et l'orientation du véhicule ont un impact important sur sa stabilité [START_REF] Kiencke | Automotive control systems : for engine, driveline and vehicle[END_REF]. Une augmentation de la vitesse entraîne une augmentation des angles de glissement au niveau des roues, ce qui rend le véhicule plus difficile à contrôler. De même, une erreur de direction du véhicule plus importante exige plus de temps pour rétablir la trajectoire consigne. 
Pour obtenir le seuil dynamique, nous allons retrancher la valeur de la position latérale du véhicule prédite à l'instant tT  à la valeur d'un seuil maximal statique choisi, qui est fonction de la largeur de la route.                       (3.32) En posant: 0 0 0 ( ) ( ) ( ) y t y t T y t         (3.33) et en considérant l'erreur latérale nulle pour 0 t  , l'équation (3.32) devient : 0 0 0 ( ) ( ) ( ) y t T y t y t         ( 3 Synthèse de la loi de commande tolérante aux fautes Nous considérons dans notre étude que, dans le cas de fonctionnement normal, la direction des roues avant est seule contrôlée pour assurer le suivi de trajectoire du véhicule. Une fois une défaillance détectée, la direction des roues arrière est contrôlée de manière à garantir le suivi de trajectoire du véhicule en présence de la défaillance. Le système de direction des roues arrière est supposé non défaillant. Notre but est de calculer dans une boucle externe l'objectif local wrdes  (angle de braquage désiré des roues arrière) qui, si suivi, assurera les performances souhaitées du système, à savoir le suivi de trajectoire comme dans [START_REF] Haddad | Fault Tolerant Control for Autonomous Vehicle by Generating References for Rear Wheels Steering[END_REF]. Une fois que la référence de braquage désirée des roues arrière wrdes  est obtenue, la commande r u est calculée dans la boucle interne de manière à assurer le suivi de cet objectif local. Le suivi de trajectoire du système global sera ainsi garanti en présence de la défaillance, comme montré dans [START_REF] Haddad | Fault Tolerant Control Strategy for an Overactuated Autonomous Vehicle Path Tracking[END_REF]. Conception de la boucle externegénération de la référence locale La boucle externe a pour objectif de calculer l'angle de braquage désiré (0) 0 V  et ( ) 0 Vx , 0 x  . Théorème 3.1. 
((Khalil, 2002), page 124) Considérons un système autonome représenté par ẋ = f(x) et x = 0 son point d'équilibre. Soit V(x) une fonction continue et dérivable telle que :

V(0) = 0 et V(x) > 0, ∀x ≠ 0 ; V(x) → ∞ lorsque ‖x‖ → ∞.

Si :

dV(x)/dt < 0, ∀x ≠ 0  (3.52)

alors le point d'équilibre x = 0 est globalement asymptotiquement stable.

Pour appliquer ce théorème, nous construisons une fonction de Lyapunov candidate V₁(e₁, ė₁) dont la dérivée V̇₁(e₁, ė₁) doit satisfaire la condition (3.52). Une fois les conditions obtenues, nous déterminons l'angle de lacet désiré qui permet de satisfaire cette condition. La démarche de calcul est résumée ci-après. V₁(e₁, ė₁) doit vérifier les deux conditions suffisantes :

C3 : V₁(e₁, ė₁) > 0 pour (e₁, ė₁) ≠ (0, 0) et V₁(0, 0) = 0 ;

C4 : V̇₁(e₁, ė₁) < 0 pour (e₁, ė₁) ≠ (0, 0).

En exploitant la dynamique latérale du modèle, nous obtenons une première expression de l'angle de braquage désiré des roues arrière en fonction de δwf, de la consigne ÿref, de l'erreur e₁ et des vitesses ẋ et ẏ (équation (3.67)). Sachant que :

β = arctan(ẏ/ẋ)  (3.68)

nous pouvons finalement obtenir l'expression de l'angle de braquage désiré des roues arrière :

δwrdes = arctan( tan(δwf) − L (K₀e₁ + K₁ė₁ + ÿref) / ((ẋ² + ẏ²) cos(arctan(ẏ/ẋ))) )  (3.69)

Remarque 3.1. Il existe des singularités pour arctan(ẏ/ẋ) = ±π/2, comme le montre l'équation (3.69). Il est alors nécessaire de trouver une solution pour ce cas critique.
Pour ceci, un changement de base peut être effectué pour passer du repère OXYZ au repère OYXZ lorsque les deux conditions suivantes ne sont pas vérifiées [START_REF] Rajamani | Lateral Control of a backward driven front-steering vehicle[END_REF] : C 5 : () 44 y arctan x       (3.70) C 6 : () 44 y arctan x       (3.71) La matrice de transition appliquée pour effectuer le changement de base est alors de la forme : 0 1 0 1 0 0 0 0 1 T        (3.72) L'expression de la position de braquage référence des roues arrière peut alors s'écrire comme suit : La matrice de transition est dans ce cas de la forme suivante : 01 22 ( ( ) ( ) ) ( ( )) ( ) sin( ( )) ref ref ref wrdes wf L K x x K x x x arctan tan y x y arctan x              ( 3 1 0 1 0 1 0 0 0 0 1 T        (3.76) L'expression de la position de braquage référence des roues arrière s'écrit alors comme (3.69) : 0 1 1 1 22 () ( ( )) ( ) cos( ( ) Conception de la boucle internesuivi de la référence locale Dans cette partie, nous calculons la loi de commande r u qui doit assurer le suivi de l'angle de braquage désiré des roues arrière δ wrdes , tout en garantissant le suivi de trajectoire du véhicule 2WS4WD. Comme le modèle global de véhicule considéré est non linéaire, nous choisissons une méthode adaptée à cette classe de système, à savoir la méthode Backstepping [START_REF] Härkegård | Backstepping and control allocation with application to flight control[END_REF], qui a été présentée dans la section 2.4.1. Pour ce faire, le système global est divisé en deux sous-systèmes. Le premier sous-système est contrôlé par une entrée virtuelle, qui est l'angle de braquage des roues arrière, de manière à assurer la stabilité de l'erreur de position latérale. Une extension à ce soussystème est ensuite réalisée pour inclure le deuxième sous-système, qui représente la dynamique de braquage des roues arrière. 
La loi de commande r u est finalement calculée en dernière étape en utilisant un calcul récursif des fonctions de Lyapunov. Le modèle dynamique du système global considéré est donné par les équations (3.16)- (3.20). Nous les rappelons ci-dessous : 1 1 1 2 3 4 5 X f ( X , X , X , X , X )   (3.77) 2 2 1 2 3 4 5 X f ( X , X , X , X , X )   (3.78) 3 3 1 2 4 5 X f ( X , X , X , X )   (3.79) 4 4 4 f X f ( X ,u )   (3.80) 5 5 5 r X f ( X ,u )   (3.81) avec 1 T X ( y y )   , 2 T X ( x x )   , 3 X ( )   , 4 T wf wf X ( )    , 5 T wr wr X ( )    2 2 1 1 1 2 3 1 1 2 3 4 5 2 2 2 4 2 5 1 1 1 2 3 ( N X ) ( N X ) sin( X ) f ( X , X , X , X , X ) ( tan( N X ) tan( N X )) (( N X ) ( N X ) )cos( X ) L               2 2 1 1 1 2 3 2 1 2 3 4 5 2 2 2 4 2 5 1 1 1 2 3 ( N X ) ( N X ) cos( X ) f ( X , X , X , X , X ) ( tan( N X ) tan( N X )) (( N X ) ( N X ) )sin( X ) L                22 1 1 1 2 2 4 2 5 3 1 2 4 5 ( N X ) ( N X ) ( tan( N X ) tan( N X )) f ( X , X , X , X ) ( ) L   14 4 4 1 4 T f Tf f f f B N X M u f ( X ,U ) N X J         15 5 5 1 5 T Tr r r r N X M u f ( X ,U ) N X J        où     1 2 01 10 N N        Dans notre étude, nous nous intéressons au suivi de trajectoire du véhicule. Nous considérons alors, extraits du modèle du système global, deux sous-systèmes donnés par les équations (3.77) et (3.81) : 1 1 1 2 3 4 5 X f ( X , X , X , X , X )   5 5 5 r X f ( X ,u )   avec X 2 , X 3 , X 4 , et X 5 des états mesurés. Pour le premier sous-système, nous choisissons 5 X comme entrée virtuelle au sous-système. Ce sous-système est alors réécrit de la manière suivante : 1 1 1 2 3 4 X f ( X ,X ,X ,X , )    (3.82) avec 5 wr wr X         . Nous calculons alors la loi de commande virtuelle  qui assure la stabilité de l'erreur de position latérale à ce sous-système, tout en garantissant des performances désirées. Cette étape a été réalisée dans la section précédente. 
En effet, pour une loi de commande virtuelle  exprimée par : ( , , , , ) wf wrdes wr X f X X X        (3.85) avec 11 1 2 3 1 1 2 3 12 1 2 3 22 22 ( , , , , ) ( , , , , ) ( , , , , ) sin( ) ( ) cos( )( ( ) ( ) f X X X f X X X f X X X xy x y tan tan L                                            (3.86) l'équation (3.85) peut être réécrite comme suit : ,,,, ) V e e     par : 1 1 1 2 3 1 1 2 3 1 1 2 3 ( , , , , ) ( , , , , ) ( , , , , , ) wf wrdes X f X X X f X X X X X X                     (3.87) avec 1 1 2 3 1 1 2 3 1 1 2 3 ( , , , , ) ( , , , , ) ( , wf wr wf wrdes wf wr wr wr f X X X f X X X X X X                 (3.88) C'est à dire 11 1 2 3 1 1 2 3 12 1 2 3 11 1 2 3 12 1 2 3 12 1 2 3 12 1 2 3 ( , , , , , ) ( , , , , , ) ( , , , , , ) ( , , , , ) ( , , , , ) ( , , , , ) ( , , , , ) wf wr X X X X X X X X X f X X X f X X X f X X X f X X X                                                            (3. 2 2 2 2 1 1 1 1 1 ( ) ( ) ( , , , ) ( ,          , 2 0 K  et 111 ( , ) V e e  la fonction de Lyapunov établie dans l'équation (3.53). V e e     par rapport au temps, nous obtenons: 2 1 1 1 1 1 2 ( , , , ) ( , ) ( )( ) ( )(                               (3.92) Pour faire apparaitre la commande r u dans l'équation (3.92), on utilise la représentation de wr   établie dans l'équation (3.81) et qui est r wr Tr r wr r B M u J         (3.93) On obtient alors : 2 1 1 1 1 1 2 ( , , , ) ( , ) ( )( ) ( )(                                  (3.94) Le moment Tr M dans l'équation (3.94) représente une perturbation bornée et non mesurée pour le système (Gillespie, 1992). Cette perturbation sera négligée dans la suite du calcul. 
L'équation (3.94) est alors réécrite de la manière suivante : V e e 2 1 1 1 1 1 2 ( , , , ) ( , ) ( )( ) ( )( ) V e e K Bu J                                (3.95) De l'équation (3.54) on a : 1 1 1 0 1 1 1 1 ( , ) V e e K e e e e       (3.96) En introduisant le résultat de l'équation (3.87) dans l'équation (3.96), nous pouvons réécrire 111 ( , ) V e e   comme suit: V e e K e e e y f X X X 1 1 1 0 1 1 1 12 1 2 3 12 1 2 3 2 1 1 12 1 2 3 1 ( , ) ( ( , , , , ) ( , , , , , ) ) ( , , , , , ) ref wf X X X K e X X X e                                  (3.97) avec 1 12 1 2 3 12 1 2 3 ( , , , , ) ( , , , , , ) ref wf wrdes wf wr wr wr e y f X X X X X X                 Nous pouvons vérifier de l'équation (3.97) que si 0 wr   , l'équation (3.97) est la même que l'équation (3.56). En introduisant l'expression (3.97) dans l'équation (3.95), on a Thèse de Alain Haddad, Lille 1, 2014 2 2 1 1 1 1 12 1 2 3 1 2 ( , , , ) ( , , , , , ) ( )( ) ( )( V e e K e X X X e Bu K J                                            (3.98) Une condition suffisante pour satisfaire C 10 est d'avoir r u exprimé comme suit : 12 1 2 3 1 23 12 1 2 3 1 23 ( ( , , , , , ) ( ) ( )) ( ( , , , , , ) ) r u B J X X X e KK B J X X X e KK                                              (3.99) avec 3 0 K  . 
En effet, réécrivons l'équation (3.98) en substituant la valeur r u par celle donnée dans l'équation (3.99) : 2 2 1 1 1 1 12 1 2 3 1 2 2 3 12 1 2 3 1 ( , , , ) ( , , , , , ) ( ( , , , , , ) ) wr V e e K e X X X e K K K X X X e                                                 (3.100) Ce qui donne u B J X X X x x K K                         (3.105) avec 2 2 1 2 3 2 1 2 3 2 1 2 3 ( , , , , ) ( , , , , ) ( , , , , , ) wf wrdes wr wf wrdes wf wr wr wr ,,,, ) X f X X X f X X X X X X                     (3.106) 21 1 2 3 2 1 2 3 22 1 2 3 22 22 ( , , , , ) ( , , , , ) ( , , , , ) cos( ) ( )sin( )( ( ) ( ) f X X X f X X X f X X X xy x y tan tan L                                             (3.107) 2 1 2 3 ( , , , , , ) wf wr wr X X X       étant exprimé comme suit : 2 1 2 3 2 1 2 3 2 1 2 3 ( , , , , ) ( , , , , ) ( , wf wr wf wrdes wf wr wr wr f X X X f X X X X X X                 (3.108) et 21 1 2 3 2 1 2 3 22 1 2 3 21 1 2 3 22 1 2 3 22 1 2 3 22 1 2 3 ( , , , , , ) ( , , , , , ) ( , , , , , ) ( , , , , ) ( , , , , ) ( , , , , ) ( , , , , ) wf wr X X X X X X X X X f X X X f X X X f X X X f X X X                                                            (3. u B J X X X e K K                        Résultats de simulation                               0 0 X X X Y Y Y                                 Figure 3.  , avec 1   l'amortissement de l'équation (3.57). Le temps de réponse à 5% de ce système est de l'ordre de 2.5s. Cette simulation permet de visualiser les performances du système en présence de la défaillance. Nous remarquons dans ce test que le véhicule ne dépasse plus les limites de la route pour fd tt  (cf. Figure 3.6). 
Le contrôle du braquage des roues arrière est capable de compenser l'effet de la défaillance. Le suivi de trajectoire du véhicule est alors assuré en présence du défaut. Comme la dynamique de braquage des roues arrière est négligée, on peut voir dans la Figure 3.7 que l'angle de braquage réel des roues arrière suit exactement l'angle de braquage désiré. Conclusion Introduction Nous avons vu dans le Chapitre 1 que grâce à la redondance analytique et à l'évaluation des résidus issus de cette redondance, la détection des défauts d'un système est possible. Afin de localiser les défauts, les résidus doivent réagir différemment aux différents défauts. Plusieurs techniques de localisation de défaut existent dans la littérature tels les résidus directionnels et les résidus structurés. Les résidus directionnels consistent à générer un vecteur de résidu de norme nulle dans le cas d'un fonctionnement non défaillant et qui se dirige vers une direction spécifique en fonction du type de défaut. La localisation des défauts consiste donc à déterminer parmi les directions possibles laquelle est la plus proche de celle du résidu observé. L'approche des résidus structurés est largement utilisée pour la localisation des défauts, comme dans [START_REF] Gertler | Fault detection and isolation using parity relations[END_REF] Identification de l'interface roue-chaussée pour une évaluation précise des résidus Différentes approches existent dans la littérature pour identifier l'interface roue-chaussée. Nous pouvons citer l'approche appliquant le filtre de Kalman étendu (EKF), l'approche appliquant des observateurs non linéaires, et l'approche utilisant les techniques de redondance analytique. L'approche appliquant le filtre de Kalman étendu utilise le modèle global du système [START_REF] Dakhlallah | Tire-road forces estimation using extended Kalman filter and sideslip angle evaluation[END_REF]. 
Elle suppose de connaître un modèle valide, le véhicule est considéré toujours en fonctionnement nominal. Des mesures supplémentaires sont parfois requises, comme dans [START_REF] Ray | Nonlinear Tire Force Estimation and Road Friction Identification: Simulation and experiments[END_REF], pour appliquer cette approche, telle que la mesure de l'angle de roulis. L'approche basée sur des observateurs non linéaires nécessite l'excitation permanente du système. Cette excitation doit induire une variation de vitesse latérale et d'accélération du véhicule pour permettre d'obtenir les informations requises [START_REF] Grip | Nonlinear vehicle sideslip estimation with friction adaptation[END_REF], [START_REF] Canudas-De-Wit | A new nonlinear observer for Tire/road distributed contact friction[END_REF], [START_REF] Stephant | Evaluation of a sliding mode observer for vehicle sideslip angle[END_REF]). Pour réduire la complexité des calculs [START_REF] Baffet | Estimation of vehicle sideslip, tire force and wheel cornering stiffness[END_REF], un nombre fini de surfaces de chaussée est parfois utilisé dans l'approche par observateurs. Pour déterminer l'interface roue-chaussée, l'approche décrite dans [START_REF] Villagra | A diagnosis-based approach for tire-road forces and maximum friction estimation[END_REF] est basée sur des techniques de filtrage et d'estimation des dérivées de signaux bruités. Nous élaborons dans ce qui suit une approche utilisant la génération de résidus par la redondance analytique pour déterminer l'interface roue-chaussée. Pour générer ces résidus, nous utilisons uniquement le modèle de direction des roues arrière. Après la détection du comportement défaillant (écart par rapport à la trajectoire suivie), l'actionneur de braquage des roues supposé non défaillant est activé pour rétablir le suivi (voir le chapitre précédent). 
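Si le résidu issu du modèle de direction est affine en le paramètre d'interface μ, soit r = a − μ b avec a et b des signaux calculables à partir des mesures, l'identification se ramène à des moindres carrés, suivis d'une projection sur un ensemble fini de surfaces candidates, comme évoqué ci-dessus. L'esquisse suivante utilise des signaux synthétiques ; les valeurs de μ candidates et le niveau de bruit sont purement illustratifs.

```python
import numpy as np

def estimate_mu(a, b):
    """Moindres carres : mu_hat = argmin_mu sum_k (a_k - mu*b_k)^2 = (a.b)/(b.b)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (b @ b))

# Signaux synthetiques generes avec mu = 0.85 (chaussee seche) et un leger bruit.
rng = np.random.default_rng(0)
b = rng.uniform(1.0, 2.0, 200)
a = 0.85 * b + rng.normal(0.0, 0.01, 200)
mu_hat = estimate_mu(a, b)

# Projection sur un nombre fini de surfaces candidates (valeurs illustratives).
candidats = {"neige": 0.2, "mouillee": 0.5, "seche": 0.9}
surface = min(candidats, key=lambda k: abs(candidats[k] - mu_hat))
```

La restriction à un nombre fini de surfaces candidates rejoint la simplification mentionnée plus haut pour les approches par observateurs.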
Génération des résidus pour l'identification

Dans cette partie, nous élaborons le résidu utilisé pour déterminer l'interface roue-chaussée à partir du modèle de braquage des roues arrière présenté dans le Chapitre 2. Nous considérons dans ce modèle que les roues arrière tournent d'un même angle, ce qui permet d'utiliser une seule variable δwr pour désigner l'angle de braquage arrière.

Le modèle dynamique de direction est obtenu à partir du théorème du moment cinétique. Les moments résistants qui y interviennent (moment d'auto-alignement, moment dû à la chasse, frottements) dépendent des forces de contact pneu-chaussée, et donc du paramètre d'interface roue-chaussée. En évaluant le résidu associé à ce modèle pour un ensemble fini d'interfaces candidates, l'interface réelle est identifiée comme celle qui annule, au bruit près, le résidu.

Le même raisonnement appliqué au train avant conduit à :

J_f δ̈wf = u_f − M_bf − M̂_ATf − M̂_SAf − M̂_vf = u_f − M_bf − M̂_Tf  (4.19)

avec

M̂_Tf = M̂_ATf + M̂_SAf + M̂_vf  (4.20)

Pour obtenir le modèle de traction des quatre roues du véhicule, nous appliquons ce même théorème au centre de la roue i, avec i ∈ {1, 2, 3, 4}.
Nous pouvons écrire :

Jᵢ ω̇ᵢ = Σ M_ext  (4.21)

soit, en explicitant les moments appliqués à la roue i :

Jᵢ ω̇ᵢ = uᵢ − M_fi − M_ti − M_tri − fᵢ ωᵢ − r (F_xi cos(δ_wi) + F_yi sin(δ_wi))  (4.22)

Nous pouvons alors exprimer les équations modélisant la relation traction-roue-chaussée des quatre roues du véhicule, pour i = 1, …, 4 :

Jᵢ ω̇ᵢ + fᵢ ωᵢ = uᵢ − r (F_xi cos(δ_wi) + F_yi sin(δ_wi))  (4.23)

d'où les résidus associés aux quatre roues, complétés du résidu issu du modèle de direction avant :

rᵢ = Jᵢ ω̇ᵢ + fᵢ ωᵢ − uᵢ + r (F̂_xi cos(δ_wi) + F̂_yi sin(δ_wi)),  i = 1, …, 4  (4.24)

Nous remarquons de l'équation (4.24) que l'évaluation des résidus ne peut se réaliser sans l'estimation précise des forces longitudinales et latérales. Or, cette estimation exige la connaissance du paramètre d'interface roue-chaussée (Pacejka et al.). Comme ce paramètre a été identifié dans la section précédente, nous pouvons alors évaluer ces résidus.

Localisation du défaut

Nous utilisons dans cette partie la matrice de signatures structurées pour la localisation d'un défaut d'actionneur (de traction ou de direction). Chaque ligne i de cette matrice correspond à un résidu et chaque colonne j correspond à une défaillance. Un 1 à la position (i, j) indique que la défaillance j est détectable par le résidu i. Le nombre binaire formé par la colonne j est appelé « signature de la défaillance j ». La localisation consiste alors, comme dans (Villagra et al.), à comparer la signature observée des résidus aux signatures des différentes défaillances. Une fois le défaut localisé, l'actionneur défaillant est déconnecté : les roues avant sont bloquées dans la direction longitudinale du véhicule à t = 7 s.

(Voir (Seddiki et al.) pour une analyse comparative des modèles de contact pneu-chaussée.)
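La logique de localisation par signatures structurées décrite ci-dessus se code directement : la signature binaire observée des résidus est comparée aux colonnes de la matrice de signatures. La matrice S et les noms de défaillances ci-dessous sont purement illustratifs et ne reproduisent pas ceux du mémoire.

```python
import numpy as np

# Matrice de signatures illustrative : lignes = residus, colonnes = defaillances.
# Un 1 en (i, j) signifie que le residu i est sensible a la defaillance j.
S = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
defauts = ["traction roue 1", "traction roue 2", "direction avant", "traction roue 4"]

def isoler(signature_observee):
    """Retourne les defaillances dont la signature coincide avec celle observee."""
    obs = np.asarray(signature_observee)
    return [defauts[j] for j in range(S.shape[1]) if np.array_equal(S[:, j], obs)]
```

Une signature observée nulle correspond au fonctionnement non défaillant ; une signature ne correspondant à aucune colonne signale un défaut non modélisé ou un défaut multiple.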
Il existe de même d'autres modèles utilisés dans la littérature pour le calcul des forces longitudinales et latérales, qui n'ont pas été considérés dans ce mémoire (voir (Seddiki)). Avant de présenter les modèles retenus, commençons par détailler deux paramètres présents dans ces modèles :

1- le glissement longitudinal des pneus, défini pour une roue $i$ par :

$$G_i = \frac{r\,\omega_i - v_{xi}}{\max(r\,\omega_i,\ v_{xi})}$$

2- l'angle de dérive $\alpha_i$, angle entre le plan de la roue $i$ et la direction de son vecteur vitesse au point de contact.

Le modèle de Dugoff

Le modèle de Dugoff exprime les forces longitudinales et latérales en considérant un couplage entre le glissement $G_i$ et la dérive $\alpha_i$, imposé par la limite de frottement du contact pneu-chaussée.

Le modèle de Gim

Le modèle de Gim ne considère pas de couplage entre $G_i$ et $\alpha_i$. Il suppose que l'aire de contact se répartit en forme de rectangle et que la force verticale peut être calculée à partir de l'intégration de la pression sur la surface de contact. Les forces longitudinales et latérales sont fonction de la charge verticale $F_{zi}$, du glissement longitudinal $G_i$, de l'angle de dérive $\alpha_i$, de la longueur de la surface de contact, des rigidités longitudinale et latérale et du coefficient de frottement.

Abstract: An active fault tolerant control (AFTC) strategy for overactuated systems is presented in this thesis. It consists of four steps: detecting the fault very quickly, activating a fault-tolerant control law that preserves the stability of the overactuated system in the presence of the fault, localizing the faulty component precisely, and finally reconfiguring the system by keeping only the healthy components. This strategy is applied to an autonomous 2WS4WD vehicle: when the vehicle's lateral deviation exceeds a dynamic security threshold, the fault-tolerant control algorithm is activated. It is based on dynamic reference generation and consists in controlling the redundant actuators which are not used in normal behavior. The control law used for this task is designed using Lyapunov theory and the backstepping technique. It consists of two interconnected control loops: an outer loop and an inner loop.
The outer loop ensures the computation of the dynamic references necessary for preserving the trajectory tracking of the vehicle. The inner loop ensures the tracking of the dynamic references generated in the outer loop. A fault isolation module is then applied to determine the faulty component precisely. Once it is isolated, the system is controlled by using only the healthy components. The diagnosis and fault-tolerant control schemes are validated on a realistic vehicle model using a co-simulation between the CarSim and Matlab/Simulink software packages.

Keywords: diagnosis, active fault tolerant control, over-actuated system, allocation, autonomous vehicle, nonlinear control, Lyapunov theory, backstepping, external loop, internal loop.

Figure 1.2 : Les régions de décision en appliquant le test d'hypothèse
Figure 1.3 : Structure générale d'un système de contrôle tolérant aux fautes actif
Figure 2.1 : Vue éclatée de RobuCAR
Figure 2.2 : Modes de fonctionnement de robuCAR
Figure 2.3 : Les mouvements du véhicule dans les 3 dimensions (Dumont 2006)
Figure 2.4 : Schéma d'un véhicule 4WS4WD
Figure 2.5 : Les vues de face et de profil d'une roue (Gillepsie 1992)
Figure 2.6 : Schéma d'un véhicule standard
Figure 2.7 : Schéma d'un véhicule 2WS4WD
Figure 2.8 : Structure générale d'une commande centralisée
Figure 2.9 : Structure générale d'une commande décentralisée
Figure 2.10 : Braquage des roues avant et arrière, courbe extraite de (Zhou, Wang and Li 2005)
Figure 2.11 : Evolution du braquage des roues avant, courbe extraite de (Plumlee et al., Hodel 2004)
Figure 2.12 : Evolution de la force de freinage différentielle, courbe extraite de (Plumlee et al., Hodel 2004)
Figure 2.13 : Evolution de l'angle de lacet, courbe extraite de (Plumlee et al., Hodel 2004)
Figure 2.14 : La stratégie de commande, extrait de (Yang et al., 2010b)
Figure 2.15 : Evolution de la déviation latérale du véhicule, courbe extraite de (Yang et al., 2010b)
Figure 2.16 : La variation du braquage des roues avant et arrière, courbe extraite de (Yang et al., 2010b)
Figure 3.1 : Structure générale de la commande tolérante aux fautes (FTC) active basée sur la génération de références pour un système suractionné
Figure 3.2 : Stratégie de FTC basée sur la génération dynamique de références
Figure 3.3 : Calcul de l'angle de braquage désiré
Figure 3.4 : Co-simulation CarSim/Matlab-Simulink
Figure 3.5 : Trajectoires du véhicule en fonctionnement nominal et défaillant
Figure 3.6 : Trajectoire du véhicule après l'activation du FTC pour une dynamique de braquage des roues arrière négligeable
Figure 3.7 : Angles de braquage des roues arrière, désiré et réel
Figure 3.8 : Trajectoire du véhicule après l'activation du FTC pour une dynamique de braquage des roues arrière négligeable
Figure 3.9 : Trajectoire du véhicule après l'activation du FTC pour une dynamique de braquage des roues arrière non négligeable
Figure 3.10 : Angles de braquage des roues arrière, désiré et réel
Figure 4.1 : Evaluation du résidu r1 pour détecter une sortie de trajectoire
Figure 4.2 : Génération des résidus pour l'identification de l'interface roue/chaussée
Figure 4.3 : Changement de l'interface roue-chaussée
Figure 4.4 : Génération des résidus pour la localisation de la défaillance
Figure 4.5 : Trajectoire du véhicule avant et après l'application du FTC

Une défaillance inconnue modifie les caractéristiques du système et perturbe les signaux utilisés pour identifier les paramètres. Dans un système suractionné, ces paramètres peuvent être partagés par plusieurs actionneurs. À titre d'exemple, pour un véhicule électrique 2WS4WD, le paramètre d'interface roue-chaussée est commun aux 4 roues (si on suppose que la route est homogène).
La méthode que nous proposons est d'utiliser les actionneurs redondants, contrôlés pour assurer les performances du système (cf. Chapitre 3) et supposés non défaillants, pour identifier les paramètres communs. Lorsque ces paramètres sont obtenus, les résidus structurés sont calculés et l'actionneur défaillant peut être identifié. Le système est alors reconfiguré en utilisant uniquement les actionneurs sains. Les algorithmes de diagnostic et de commande élaborés dans les chapitres 3 et 4 sont testés en co-simulation avec les logiciels CarSim et Matlab/Simulink. Le mémoire se termine par une conclusion résumant l'ensemble du travail réalisé et précisant quelques directions de recherches futures.

1.2 Module de diagnostic à base de modèle
1.2.1 Génération des indicateurs de défaut
1.2.2 Etape de détection
1.2.3 Etape de localisation
1.3.1 Commande tolérante aux fautes passive
1.3.2 Commande tolérante aux fautes active
1.3.2.1 Accommodation
1.3.2.2 Reconfiguration
1.4 Conclusion
Figure 1.2 : Les régions de décision en appliquant le test d'hypothèse

Un problème de commande est défini en considérant trois entités : un objectif $O$, une classe de commandes admissibles $U$ et un ensemble de contraintes $C$. Les contraintes du système sont exprimées en utilisant des paramètres $\theta$ et la structure globale du système $S$, qui englobe le procédé et le contrôleur. En fonctionnement nominal, la résolution du problème de commande revient à trouver, sous les contraintes $C(\theta, S)$, une loi de commande $u \in U$ permettant d'atteindre l'objectif $O$. Autrement dit, le but est de trouver une solution pour le triplet $\langle O, C(\theta, S), U \rangle$. Un indicateur de performance $J$ est associé à l'objectif $O$. On a recours à cet indicateur de performance pour sélectionner la solution $u$ à appliquer lorsque plusieurs solutions existent pour un problème de commande donné.

Dans le cas d'un fonctionnement défaillant, les contraintes du système changent. Une perte partielle des capacités d'actionnement, par exemple, modifie le vecteur des paramètres $\theta$, alors qu'un blocage en position fermée d'une vanne modifie la structure $S$. Nous présentons dans ce qui suit les approches assurant la commande des systèmes défaillants.

1.3.1 Commande tolérante aux fautes passive

L'objectif de la commande tolérante aux fautes passive est d'assurer l'insensibilité du système à une classe de défauts. La détection de défaut n'est pas nécessaire. On utilise des modèles de système avec des régions incertaines pour lesquels une commande robuste est élaborée. Considérons l'ensemble $\Theta$ des valeurs possibles des paramètres $\theta$ du système en fonctionnement normal ou défaillant. La commande tolérante aux fautes passive cherche alors à trouver, sous les contraintes $C(\Theta, S)$, une loi de commande $u \in U$ permettant d'atteindre l'objectif $O$. Autrement dit, le problème de commande tolérante aux fautes, dite passive, cherche à résoudre le triplet $\langle O, C(\Theta, S), U \rangle$.
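L'évaluation d'un résidu par test d'hypothèse binaire (régions de décision de la Figure 1.2) peut s'esquisser comme suit. Il s'agit d'une illustration hors mémoire : l'hypothèse d'un bruit gaussien, le seuil à $z$ écarts-types et les noms de fonctions sont des choix hypothétiques faits pour l'exemple.

```python
import math

# H0 : pas de défaut (résidu ~ bruit gaussien N(0, sigma^2)) ; H1 : défaut.
# Le seuil fixe le compromis entre probabilité de fausse alarme et de non-détection.

def decider(residu, sigma, z=3.0):
    """Retourne 'H1' (défaut) si |residu| dépasse z écarts-types du bruit, sinon 'H0'."""
    return "H1" if abs(residu) > z * sigma else "H0"

def proba_fausse_alarme(z):
    """P(|N(0,1)| > z) = 2*(1 - Phi(z)), calculée via la fonction d'erreur complémentaire."""
    return math.erfc(z / math.sqrt(2.0))

print(decider(0.05, sigma=0.1))            # -> H0 : résidu dans la région « sans défaut »
print(decider(0.45, sigma=0.1))            # -> H1 : résidu dans la région « défaut »
print(round(proba_fausse_alarme(3.0), 4))  # -> 0.0027
```

Abaisser $z$ élargit la région de décision $H_1$ (détection plus rapide) au prix d'une probabilité de fausse alarme plus élevée, ce qui illustre le compromis représenté par les deux régions de la Figure 1.2.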
Figure 2.2 : Modes de fonctionnement de robuCAR

Les modèles de véhicule 2WS4WD classiquement utilisés dans la littérature se basent sur les équations modélisant les mouvements longitudinaux, latéraux et de rotation du véhicule (Roche 2008) (voir Figure 2.3). Ils sont élaborés soit dans le repère véhicule $(G, x, y, z)$, soit dans le repère absolu $(O_0, x_0, y_0, z_0)$. Le pompage, qui cause un mouvement de translation suivant l'axe $(Gz)$, ainsi que les mouvements de roulis et de tangage, qui causent la rotation du véhicule autour des axes $(Gx)$ et $(Gy)$, ne sont pas considérés dans cette thèse.

Figure 2.3 : Les mouvements du véhicule dans les 3 dimensions (Dumont 2006)
Figure 2.4 : Schéma d'un véhicule 4WS4WD

On note $h$ la distance qui sépare le centre de gravité du véhicule du sol et $g$ la constante de gravité. L'équation (2.32) fait intervenir $d_i$, le décalage du pivot, l'angle d'inclinaison latérale du pivot, $v_i$, l'angle de chasse, et l'angle d'inclinaison du train de la roue $i$ (voir Figure 2.5).

Figure 2.5 : Les vues de face et de profil d'une roue (Gillepsie 1992)
Figure 2.6 : Schéma d'un véhicule standard
Figure 2.8 : Structure générale d'une commande centralisée
Figure 2.9 : Structure générale d'une commande décentralisée

L'inégalité (2.91) fournit alors une condition suffisante pour que l'état converge vers la surface $S$. Avec la commande décrite par l'équation (2.89), le système devient asymptotiquement stable en présence d'incertitudes de modèle bornées.

Dans l'article (Zhou, Wang and Li 2005), la loi de commande est testée sur un simulateur modélisant le comportement dynamique d'un véhicule réel. Dans ce test, le véhicule circule à une vitesse de 30 m/s (108 km/h) en présence de perturbations bornées. On considère dans ce test que l'adhérence est variable, en raison, par exemple, d'un défaut de pneu.
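Le principe de la commande à mode glissant évoqué ci-dessus (convergence vers la surface $S$ malgré des incertitudes bornées) peut s'esquisser sur un double intégrateur jouet, sans rapport quantitatif avec le modèle du véhicule ; la surface, les gains et la perturbation sont des choix hypothétiques d'illustration.

```python
import numpy as np

# Surface S = e_dot + lam*e ; commande u = u_eq - k*sign(S), garantissant
# S*S_dot <= -eta*|S| dès que k dépasse la borne de la perturbation.

lam, k, dt = 2.0, 5.0, 1e-3

def smc_step(e, e_dot, perturbation):
    S = e_dot + lam * e
    u = -lam * e_dot - k * np.sign(S)   # commande équivalente + terme discontinu
    e_ddot = u + perturbation           # dynamique jouet : e_ddot = u + d, |d| < k
    return e + dt * e_dot, e_dot + dt * e_ddot

e, e_dot = 1.0, 0.0
for i in range(8000):                   # 8 s de simulation (intégration d'Euler)
    e, e_dot = smc_step(e, e_dot, perturbation=0.5 * np.sin(0.01 * i))
print(abs(e) < 0.05)                    # -> True : erreur annulée malgré la perturbation
```

Le terme discontinu $-k\,\mathrm{sign}(S)$ produit le réticence (chattering) bien connue ; en pratique, on lui substitue souvent une saturation continue autour de $S = 0$.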
Figure 2.10 : Braquage des roues avant et arrière, courbe extraite de (Zhou, Wang and Li 2005)

D'autres stratégies consistent à contrôler activement les roues arrière (Yang et al., 2010b), et ceci dans le but de compenser l'effet d'une perte d'efficacité de l'un des actionneurs de traction ou de direction. Peu de travaux étudient la relation entre le module de diagnostic et le contrôleur tolérant aux fautes. Généralement, le contrôleur élaboré est testé en supposant connaître le composant défaillant, et dans certains cas l'amplitude du défaut, grâce à un module de diagnostic.

La commande linéaire quadratique (LQ) :

La commande LQ détermine une commande $u$, par retour d'état statique, qui stabilise le système en minimisant un critère de performance quadratique (équation (2.111)), avec $Q_u$ et $c_u$ des matrices de pondération et $q$ un scalaire positif. On déduit de l'équation (2.111) que l'erreur $e$ peut être annulée ; l'équation (2.112) devient alors équivalente à l'équation (2.109).

Pour résoudre ce problème de réallocation, la matrice $B$ doit être connue. En cas de défaut d'actionneur, cette matrice devient $B_f$. Il est alors essentiel d'identifier le défaut après sa détection pour calculer en ligne cette nouvelle matrice, comme dans (Casavola et al.).

Dans l'article (Plumlee et al., Hodel 2004), la loi de commande est testée sur un simulateur Matlab/Simulink modélisant le comportement dynamique d'un véhicule réel. Trois vitesses différentes du véhicule sont considérées dans les simulations : 20 m/s (72 km/h), 24,5 m/s (88,2 km/h) et 29 m/s (104,4 km/h). Le module de diagnostic n'est pas élaboré dans cette étude : le modèle du système, modifié après le défaut, est considéré connu et les délais de diagnostic ne sont pas considérés.
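La réallocation de commande décrite ci-dessus — recalculer la distribution des efforts lorsque $B$ devient $B_f$ — peut s'esquisser par pseudo-inverse. La matrice d'efficacité ci-dessous est fictive (sans lien avec les valeurs réelles du véhicule) et la pseudo-inverse n'est qu'une des techniques possibles, plus simple que la programmation quadratique employée dans (Plumlee et al., Hodel 2004).

```python
import numpy as np

# v = B u : v est l'effort généralisé désiré, u le vecteur des commandes d'actionneurs.
B = np.array([[1.0, 1.0, 1.0, 1.0],     # contribution à l'effort longitudinal total
              [0.8, -0.8, 0.8, -0.8]])  # contribution au moment de lacet

def allouer(B, v):
    """u = B^+ v : solution de norme minimale réalisant l'effort généralisé v."""
    return np.linalg.pinv(B) @ v

v = np.array([2.0, 0.0])                  # effort désiré (traction pure, lacet nul)
u_nominal = allouer(B, v)

B_f = B.copy(); B_f[:, 2] = 0.0           # perte totale du troisième actionneur
u_defaillant = allouer(B_f, v)            # réallocation en ligne avec B_f

print(np.allclose(B @ u_nominal, v))      # -> True
print(np.allclose(B_f @ u_defaillant, v)) # -> True : l'effort est maintenu à 3 actionneurs
```

Tant que $v$ reste dans l'image de $B_f$ (ce qui est le cas tant que l'indice de suractionnement n'est pas dépassé), la pseudo-inverse redistribue l'effort sur les actionneurs sains sans modifier la boucle de commande principale.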
Figure 2.11 : Evolution du braquage des roues avant, courbe extraite de (Plumlee et al., Hodel 2004)
Figure 2.14 : La stratégie de commande, extrait de (Yang et al., 2010b)

Dans l'article (Yang et al., 2010b), cette stratégie est testée en simulation sur un modèle de robuCAR. Le véhicule circule sur une surface homogène à une vitesse constante égale à 5 m/s (18 km/h).

Figure 2.16 : La variation du braquage des roues avant et arrière, courbe extraite de (Yang et al., 2010b)

Les stratégies appliquant une commande tolérante aux fautes centralisée à un véhicule autonome 2WS4WD utilisent soit la commande du couple de braquage des roues avant et du couple de freinage de l'une des roues (avant ou arrière) du véhicule, soit la commande du couple de braquage des roues avant et arrière. Les stratégies qui utilisent le braquage des roues avant et arrière pour assurer le suivi de trajectoire du véhicule sont plus avantageuses que celles qui appliquent le freinage de l'une des roues. Ceci a été démontré par (Song et al.) en considérant différents scénarios. De plus, elles permettent de conserver une vitesse constante du véhicule et de réduire l'usure des pneus, causée majoritairement par le freinage excessif.

Dans le chapitre suivant, nous élaborons une commande tolérante aux fautes décentralisée pour un véhicule 2WS4WD en utilisant le braquage des roues avant et arrière. La distribution des tâches entre les actionneurs redondants est établie en ligne : en fonctionnement normal, une commande principale contrôle les 4 actionneurs de traction et l'actionneur de direction des roues avant de manière à assurer le suivi de trajectoire du véhicule. Lorsqu'un défaut, qui se manifeste par une déviation de trajectoire du véhicule, est détecté au niveau du système, la commande principale est maintenue.
Une commande est activée pour contrôler l'actionneur de braquage des roues arrière (non utilisé en fonctionnement normal), de manière à garantir le suivi de trajectoire du véhicule en compensant l'effet du défaut. La commande de l'actionneur de direction du train arrière permet de tolérer efficacement le défaut.

Figure 3.1 : Structure générale de la commande tolérante aux fautes (FTC) active basée sur la génération de références pour un système suractionné
Figure 3.2 : Stratégie de FTC basée sur la génération dynamique de références
Figure 3.3 : Calcul de l'angle de braquage désiré

La stratégie de commande tolérante aux fautes active est testée en utilisant une co-simulation entre CarSim, un logiciel professionnel de simulation de dynamiques de véhicule, et le logiciel Matlab-Simulink (Figure 3.4). Le logiciel CarSim, développé par Mechanical Simulation Corporation (cf. http://www.carsim.com/), est utilisé par de nombreux constructeurs d'automobiles tels que Volkswagen, Toyota ou Opel. Nous nous servons de ce logiciel pour simuler la dynamique globale du véhicule et pour assurer le contrôle des quatre actionneurs de traction et de l'actionneur de direction des roues avant. Le contrôle de direction des roues arrière du véhicule est assuré par les deux contrôleurs élaborés dans la section 3.4.3. Ils sont implantés sous Matlab/Simulink et fournissent, une fois activés, le couple de braquage $u_r$ au logiciel CarSim.

Figure 3.5 : Trajectoires du véhicule en fonctionnement nominal et défaillant (légende : trajectoire en fonctionnement défaillant avec et sans activation du FTC ; limites maximale et minimale du seuil dynamique et de la route ; détection d'une déviation latérale et activation du système FTC basé sur la génération de références)

Figure 3.6 : Trajectoire du véhicule après l'activation du FTC pour une dynamique de braquage des roues arrière négligeable
Figure 3.8 : Trajectoire du véhicule après l'activation du FTC pour une dynamique de braquage des roues arrière négligeable
Figure 3.10 : Angles de braquage des roues arrière, désiré et réel

La méthode des résidus structurés consiste à générer un vecteur de résidus où chaque résidu est sensible à un sous-ensemble différent des défauts surveillés. Une matrice d'incidence est utilisée pour déterminer l'influence de chaque défaut sur chacun des résidus. Cette approche est appliquée dans ce chapitre pour localiser un défaut actionneur. Or, dans le cas d'un véhicule 2WS4WD, le calcul des résidus nécessite de connaître précisément le paramètre d'adhérence (contact pneu/chaussée). Pour estimer ce paramètre, le résidu associé à l'actionneur de direction arrière (supposé non défaillant) est calculé suivant plusieurs hypothèses de chaussée. Le résidu le plus proche de zéro indique le type de chaussée. Une fois que le paramètre d'adhérence est connu, les résidus associés aux 5 actionneurs initialement utilisés sont calculés et évalués. Le défaut peut ainsi être localisé. La commande du véhicule est ensuite reconfigurée en déconnectant l'actionneur défaillant et en utilisant uniquement les actionneurs sains. Cette reconfiguration reste possible tant que le nombre des actionneurs défaillants ne dépasse pas l'indice de suractionnement du système.

4.3 Localisation de défaut actionneur pour véhicule 2WS4WD

Dans cette section, nous nous intéressons à la localisation d'un défaut d'actionneur pour un véhicule suractionné 2WS4WD. Nous utilisons pour ce faire des résidus issus de relations de redondance analytique (RRA), qui ne lient que des grandeurs connues disponibles en ligne. Ces résidus sont sensibles aux défauts des 4 actionneurs de traction et de l'actionneur de direction. Les RRA sont obtenues à partir des modèles de traction et de direction du véhicule 2WS4WD. Le modèle de direction des roues avant a été présenté dans le Chapitre 2 du mémoire.
Nous rappelons ce modèle puis nous présentons le modèle de traction des 4 roues du véhicule, comme dans (Dumont 2006).

Lorsqu'un défaut est détecté, le système de commande tolérante aux fautes basé sur la génération dynamique de références est activé. La direction des roues arrière du véhicule est alors utilisée afin de maintenir le suivi de trajectoire. L'angle de direction désiré des roues arrière est calculé dans une boucle externe de manière à ce que l'objectif global du système soit préservé. Ensuite, la commande de l'actionneur de direction des roues arrière est calculée dans une boucle interne pour assurer le suivi de cet angle de direction désiré (voir Chapitre 3). La sécurité du véhicule est alors garantie en présence du défaut.

Figure 4.1 : Evaluation du résidu r1 pour détecter une sortie de trajectoire
Figure 4.2 : Génération des résidus pour l'identification de l'interface roue/chaussée
Figure 4.3 : Changement de l'interface roue-chaussée
Figure 4.4 : Génération des résidus pour la localisation de la défaillance

Nous avons présenté dans ce chapitre un algorithme de diagnostic pour des systèmes suractionnés. Cet algorithme assure une localisation précise d'un défaut d'actionneur sans compromettre la sécurité du système. Il est activé après la détection d'un défaut et le lancement d'une commande tolérante aux fautes active qui permet de rétablir et maintenir les performances du système en présence du défaut. Une fois le défaut localisé, le système suractionné est reconfiguré de manière à éliminer le composant défaillant. Cette stratégie de diagnostic est appliquée à un véhicule autonome 2WS4WD. Pour ce type de véhicule, la localisation fine du défaut nécessite l'estimation de l'interface roue-chaussée. Pour ceci, une technique d'identification de ce paramètre est proposée. Elle repose sur l'utilisation du modèle dynamique des actionneurs de braquage des roues arrière (considérés non défaillants) pour déterminer l'interface. Un résidu est calculé avec différentes valeurs de paramètres.
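Cette sélection par hypothèses de chaussée peut s'esquisser comme suit. L'esquisse est hypothétique : elle remplace le modèle dynamique de braquage du mémoire par le modèle de frottement de Burckhardt $\mu(s) = c_1(1 - e^{-c_2 s}) - c_3 s$, dont les coefficients ci-dessous sont des valeurs classiques de la littérature, données à titre purement indicatif.

```python
import math

# Une hypothèse de chaussée = un triplet (c1, c2, c3) du modèle de Burckhardt.
HYPOTHESES = {
    "asphalte sec":     (1.2801, 23.99, 0.52),
    "asphalte mouillé": (0.857, 33.822, 0.347),
    "neige":            (0.1946, 94.129, 0.0646),
}

def mu(s, c1, c2, c3):
    """Coefficient de frottement en fonction du glissement longitudinal s."""
    return c1 * (1.0 - math.exp(-c2 * s)) - c3 * s

def identifier(s, Fz, Fx_mesure):
    """Retourne l'hypothèse dont le résidu |Fx_mesure - mu(s)*Fz| est le plus proche de zéro."""
    residus = {nom: abs(Fx_mesure - mu(s, *c) * Fz) for nom, c in HYPOTHESES.items()}
    return min(residus, key=residus.get)

# Force longitudinale « mesurée » simulée sur neige (s = 0.1, Fz = 3000 N) :
Fx = mu(0.1, *HYPOTHESES["neige"]) * 3000.0
print(identifier(0.1, 3000.0, Fx))   # -> neige
```

Dans le mémoire, le résidu est évalué sur le modèle de braquage arrière plutôt que sur une force directement mesurée, mais le mécanisme de décision — retenir l'hypothèse minimisant le résidu — est le même.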
Le résidu le plus proche de zéro indique le paramètre le plus probable. L'algorithme de diagnostic est finalement testé en utilisant la co-simulation des logiciels CarSim et Simulink. Les résultats obtenus démontrent qu'en utilisant uniquement les actionneurs sains, le suivi de trajectoire du véhicule est maintenu en fonctionnement défaillant.

Figure 4.5 : Trajectoire du véhicule avant et après l'application du FTC (légende : limites maximale et minimale de la route ; détection d'une déviation latérale ; activation du système FTC basé sur la génération de références ; arrêt du braquage des roues avant)

Le modèle de Kiencke

Les forces y sont exprimées en fonction de $K_{xi}$ et $K_{yi}$, respectivement les constantes de raideur longitudinale et latérale, de la longueur de la surface de contact et de $W$, la largeur du pneumatique. Les paramètres du modèle sont :
- $c_1$, $c_2$, $c_3$ : paramètres dépendant du sol ;
- $c_4$ : paramètre dépendant de la vitesse du véhicule ;
- $c_5$ : paramètre dépendant de la charge de la roue ;
- $S$ : glissement total.

Le modèle Pacejka

Le modèle Pacejka est l'un des modèles les plus utilisés dans le domaine automobile. Ce modèle fait appel à des paramètres obtenus à partir de mesures hors ligne, qui correspondent à des caractéristiques physiques du contact entre le pneu et la chaussée. Plusieurs améliorations ont été apportées à ce modèle depuis son élaboration en 1987, ce qui a permis de l'approcher au mieux du comportement réel du véhicule. Le modèle est établi à partir de la « formule magique » :

$$F = D \sin\big(C \arctan\big(B\,s - E\,(B\,s - \arctan(B\,s))\big)\big)$$

où $B$, $C$, $D$ et $E$ sont respectivement les coefficients de raideur, de forme, de pic et de courbure, et $s$ la variable de glissement considérée.

Figure 1.1 : Les différentes étapes du diagnostic à base de modèle (Toscano, 2011)

Estimation des états : L'estimation des états d'un système est réalisée soit en utilisant des observateurs dans le cas déterministe, soit en utilisant des filtres dans le cas stochastique. Les observateurs ou filtres sont classiquement utilisés à des fins de commande en boucle fermée. Le principe général de ces systèmes dynamiques est de donner une image, ou estimation, de certaines variables ou combinaisons de variables, nécessaires au bouclage.
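Le principe d'un observateur déterministe peut s'esquisser sur un double intégrateur discret (position mesurée, vitesse reconstruite). L'exemple est hypothétique : le système, le gain d'observation et le pas d'échantillonnage sont choisis pour l'illustration, indépendamment des modèles du mémoire.

```python
import numpy as np

# Observateur de Luenberger discret : x^ <- A x^ + B u + L (y - C x^).
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # double intégrateur discrétisé
Bm = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])              # seule la position est mesurée
L = np.array([[0.4], [4.0]])            # gain d'observation (pôles de A - L C en 0.8)

def pas_observateur(x_hat, u, y):
    return A @ x_hat + Bm * u + L * (y - C @ x_hat)

x = np.array([[0.0], [1.0]])            # état réel : vitesse inconnue de l'observateur
x_hat = np.zeros((2, 1))
for _ in range(500):
    y = C @ x                           # mesure courante
    x_hat = pas_observateur(x_hat, 0.0, y)
    x = A @ x                           # évolution libre (u = 0)
print(abs(x[1, 0] - x_hat[1, 0]) < 1e-2)   # -> True : la vitesse est reconstruite
```

L'erreur d'estimation obéit à $e_{k+1} = (A - LC)\,e_k$ : le gain $L$ fixe la dynamique de convergence, ce qui illustre le temps de convergence évoqué ci-après pour des conditions initiales inconnues.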
Lorsque le système est dynamique et que les conditions initiales sont inconnues, l'estimation n'est correcte qu'après un certain temps de convergence, fixé par la dynamique de l'observateur ou du filtre. Ces outils ont été adaptés à des fins de diagnostic.

(Schéma de la Figure 1.1 : le système à surveiller fournit, via les capteurs et l'acquisition de données, les mesures ; l'étape 1 génère les indicateurs de défaut (résidus ou symptômes) ; l'étape 2 détecte les défauts par évaluation des résidus (résidu ≠ 0) ; l'étape 3 localise l'élément défaillant ; la prise de décision conduit à la modification des lois de commande, au changement de point de consigne, à un arrêt normal pour maintenance ou à un arrêt d'urgence pour réparation.)

Tableau 1.1 : Approches de conception de lois de commande tolérantes aux fautes actives (Zhang et al., 2008)

Ce type d'approche utilise explicitement un outil de diagnostic qui détecte, localise et, pour certains cas, estime le défaut en ligne. Les méthodes et outils de conception recensés sont les suivants :
- Lois de commande établies hors ligne : la théorie du retour d'état statique « Quantitative Feedback Theory (QFT) » ; la commande par modèle interne « Generalized Internal Model Control (GIMC) » ; le contrôleur à gains préprogrammés « Gain Scheduling (GS) » ; le contrôleur à paramètres linéaires variables « Linear Parameter Varying (LPV) ».
- Lois de commande établies en ligne : le contrôleur adaptatif « Adaptive Control (AC) » ; la méthode de la pseudo-inverse « Pseudo-Inverse (PI) » ; la commande par placement de pôles « Eigenstructure Assignment (EA) » ; l'inversion dynamique « Dynamic Inversion (DI) » ; la commande par modèle prédictif « Model Predictive Control (MPC) » ; le régulateur à commande optimale « Linear Quadratic (LQ) ».

Différentes techniques de modification de lois de commande peuvent être envisagées. Ces techniques visent à calculer une commande optimale pour le système, à commuter d'une commande à une autre (établie hors ligne), ou encore à élaborer une nouvelle loi de commande qui assure un fonctionnement nominal au système (voir Tableau 1.2).
La dernière approche recensée dans le Tableau 1.1 est la commande à structure variable / commande à mode glissant « Variable Structure Control (VSC) / Sliding Mode Control (SMC) ». Un tour d'horizon sur les commandes appliquant cette approche est présenté dans le Tableau 1.1.

Tableau 1.2 : Mécanismes de modification de la loi de commande (Zhang et al., 2008)
- Optimisation : LQR ; H∞ ; MPC.
- Commutation : GS ; LPV ; VSC/SMC.
- Correspondance avec le fonctionnement nominal : PIM ; EA.

Il existe deux types d'approches de commande tolérante aux fautes active. Le premier est l'accommodation, qui se caractérise par la modification en ligne de la loi de commande en fonction de l'amplitude de la défaillance estimée. Le deuxième est la reconfiguration, qui consiste à commuter d'une commande à une autre, établie hors ligne, ou à reconfigurer la structure interne du système en fonction du défaut. La structure générale de ces deux approches est présentée dans la Figure 1.3.

Figure 1.3 : Structure générale d'un système de contrôle tolérant aux fautes actif (schéma : la supervision et le module de diagnostic pilotent la modification de la loi de commande ; boucle objectif global → contrôleur → actionneurs → processus → capteurs)

1.3.2.1 Accommodation

Soit $\theta_f$ le vecteur des paramètres du système en présence de la défaillance et $\hat{\theta}_f$ son estimation. L'accommodation consiste à calculer en ligne, sous les contraintes $C(\hat{\theta}_f, S)$, une loi de commande $u \in U$ permettant d'atteindre l'objectif $O$. Autrement dit, la commande tolérante aux fautes appliquant l'accommodation cherche à résoudre le triplet $\langle O, C(\hat{\theta}_f, S), U \rangle$. Dans le cas où aucune solution n'existe pour la reconfiguration, il est nécessaire de changer l'objectif $O$ : on passe alors en mode dégradé.
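La logique de supervision qui enchaîne fonctionnement nominal, accommodation, reconfiguration puis mode dégradé peut s'esquisser comme suit. L'esquisse est purement structurelle et hypothétique : les noms de fonctions et la représentation des lois pré-établies sont des choix d'illustration.

```python
# Superviseur FTC : nominal tant qu'aucun défaut ; sinon accommodation si les
# paramètres défaillants theta_f sont estimés ; sinon reconfiguration si une loi
# pré-établie hors ligne couvre l'objectif ; sinon passage en mode dégradé.

def superviseur(defaut_detecte, theta_f_estime, lois_preetablies, objectif):
    if not defaut_detecte:
        return "nominal"
    if theta_f_estime is not None:        # accommodation : triplet <O, C(theta_f, S), U>
        return "accommodation"
    if objectif in lois_preetablies:      # reconfiguration : commutation hors ligne
        return "reconfiguration"
    return "mode dégradé"                 # l'objectif O doit être relâché

print(superviseur(False, None, set(), "suivi"))        # -> nominal
print(superviseur(True, {"mu": 0.5}, set(), "suivi"))  # -> accommodation
print(superviseur(True, None, {"suivi"}, "suivi"))     # -> reconfiguration
print(superviseur(True, None, set(), "suivi"))         # -> mode dégradé
```

Cette hiérarchie reflète la distinction du texte : l'accommodation recalcule la loi en ligne à partir de $\hat{\theta}_f$, alors que la reconfiguration commute vers une solution établie hors ligne sans recalcul.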
1.3.2.2 Reconfiguration

La reconfiguration consiste à trouver une loi de commande $u \in U$, établie hors ligne, permettant d'atteindre l'objectif $O$ soit sous les contraintes $C(\theta_r, S_r)$, avec $(S_r, \theta_r)$ représentant la structure et les paramètres non modifiés par la défaillance et $\mathcal{S}$ l'ensemble des structures possibles, soit sous les contraintes $C(\theta_f, S_f)$. Autrement dit, la commande tolérante aux fautes appliquant la reconfiguration cherche à résoudre soit le triplet $\langle O, C(\theta_r, S_r), U \rangle$, soit le triplet $\langle O, C(\theta_f, S_f), U \rangle$. La reconfiguration se réalise alors sans modification des lois de commande établies hors ligne. Le rôle du système de diagnostic dans cette approche est de détecter, d'isoler et dans certains cas d'identifier le composant défaillant afin de déterminer la paire $(S_r, \theta_r)$ ou $(S_f, \theta_f)$.

Chapitre 2 Commande tolérante aux fautes pour véhicule autonome 2WS4WD
2.1 Introduction
2.2 Le véhicule robuCAR (2WS4WD)
2.3 Modèle de véhicule
2.3.1 Modèle d'un véhicule 2WS4WD dans le repère (G, x, y, z)
2.3.1.1 Modèle linéarisé d'un véhicule 2WS4WD dans le repère (G, x, y, z)
2.3.2 Modèle d'un véhicule 2WS4WD dans le repère (O₀, x₀, y₀, z₀)
2.4 Les stratégies de commande tolérante aux fautes pour véhicule autonome 2WS4WD
2.4.1 Commande tolérante aux fautes centralisée pour véhicule autonome 2WS4WD avec allocation établie hors ligne
2.4.2 Commande tolérante aux fautes centralisée pour véhicule autonome 2WS4WD avec allocation établie en ligne
2.5 Conclusion

2.1 Introduction

Les travaux qui portent sur les véhicules autonomes ont débuté dans les années 1920 mais ont connu leur vrai essor à partir des années 1980. Nous citons d'une manière non exhaustive des projets essentiels dans ce domaine :

• Le projet ALV (Autonomous Land Vehicle), financé par la DARPA (Defense Advanced Research Projects Agency), une agence du département de la défense des États-Unis, et réalisé entre les années 1984 et 1998, qui a permis pour la première fois l'usage de nouveaux outils (tels que le radar laser, la vision numérique, etc.) afin d'assurer un déplacement autonome du véhicule.
• Le projet européen EUREKA, réalisé entre les années 1987 et 1995, qui a permis de concevoir des outils technologiques dédiés à la conduite automatique de l'automobile.
• Le projet Navlab, réalisé par l'université américaine Carnegie Mellon entre les années 1984 et 2007. Le véhicule semi-autonome créé dans ce projet a pu parcourir 5000 km.
• La compétition « DARPA Grand Challenge », organisée par l'agence DARPA depuis mars 2004, qui a pour but de parcourir 241,4 km à l'aide d'un véhicule entièrement autonome.

Une analyse de 2013 (Litman, 2013) montre que les applications à grande échelle de ce type de véhicule débuteront entre 2020 et 2030, mais que leur vrai impact sur la société surviendra entre les années 2040 et 2060, avec pour conséquences l'augmentation de la capacité et de la sécurité routières, la réduction des coûts de déplacement, l'utilisation plus efficace des espaces de stationnement, la réduction de la pollution et une dépense plus optimale de l'énergie.

Afin que les véhicules autonomes soient commercialisés, leur sûreté de fonctionnement doit être garantie. L'implémentation d'une commande optimale pour ce type de véhicule n'est pas suffisante pour garantir le bon fonctionnement du système tout au long d'un trajet. Ce véhicule doit avoir la possibilité de réagir pour se reconfigurer lors de la détection d'une défaillance d'un de ses composants, sans avoir besoin de s'arrêter ni d'avoir recours à une intervention humaine.

Un véhicule autonome 2WS4WD peut, en fonctionnement normal, suivre une trajectoire sans utiliser l'ensemble de ses actionneurs : 2 actionneurs pour contrôler les directions des trains avant et arrière, 4 roues motrices actionnées pour garantir le contrôle indépendant de la rotation de ces roues. Suivant la mission demandée, il s'agit d'un système suractionné puisque le nombre de ses actionneurs disponibles peut être supérieur au nombre des actionneurs requis. Les actionneurs redondants sont utilisés pour augmenter l'efficacité du système et pour obtenir de meilleures performances (Song et al., 2009), mais aussi, comme dans notre approche (voir Chapitres 3 et 4), pour tolérer des défaillances d'actionneurs (Vermillon 2009). Afin de nous positionner vis-à-vis des travaux existants, ce chapitre présente un tour d'horizon des méthodes de commande tolérante proposées dans la littérature pour ce type de véhicule.

Nous présentons dans un premier temps le prototype de véhicule (2WS4WD) fabriqué par la société Robosoft et dont dispose le laboratoire LAGIS UMR CNRS 8219. Nous présentons ensuite les modèles de ce type de véhicule classiquement utilisés dans la littérature pour élaborer la loi de commande. Nous détaillons finalement certaines stratégies basées sur ces modèles pour élaborer les lois de commande tolérantes aux fautes pour un véhicule 2WS4WD.

2.2 Le véhicule robuCAR (de type 2WS4WD)

RobuCAR est un prototype de véhicule autonome servant de plate-forme expérimentale au laboratoire LAGIS UMR CNRS 8219 (voir Figure 2.1).
It is an electric vehicle designed by Robosoft, a variant of the Cycab vehicle (Baille et al., 1999). Compared with a traditional vehicle built around one traction motor and one steering motor, robuCAR has the particularity of operating in a decentralized manner through 6 motors (4 traction motors and 2 steering motors). This actuator redundancy means that, in case of failure of some of its actuators, the vehicle remains able to deliver the required performance using only the remaining healthy components, thereby preserving the safety of the passengers, the vehicle and its environment.

Figure 2.1: Exploded view of RobuCAR (1, 7: 12 V, 60 Ah batteries; 2: car chassis; 3: front right wheel; 4: front control panel; 5: front electric steering motor; 6: front left wheel; 8: rear left wheel; 9: rear electric steering motor; 10: rear control panel).

Sensors informing the vehicle about its environment, its position and its speed allow robuCAR to operate in autonomous mode (Dumont 2006):

- A scanning laser rangefinder measuring the distance and direction of a possible obstacle.
- An incremental encoder on each wheel for odometric measurements (wheel speed measurements).
- Two absolute encoders used to control the steering of the front and rear wheels.
- A GPS giving the absolute position of the vehicle.
- An inertial measurement unit delivering the yaw rate, the yaw acceleration and an estimate of the yaw angle.

The vehicle is equipped with eight batteries giving it an autonomy of two hours at maximum speed (20 km/h). Thanks to the steering capability of both the front and rear axles, robuCAR can operate in three modes, presented in Figure 2.2:

- Single mode: the operation of an ordinary car. Only the front axle steers, and all wheels turn in the same direction of rotation.
- Dual mode: single mode plus the possibility of steering the rear wheels as well, to accentuate the curvature of the vehicle trajectory.
- Park mode: the front and rear axles steer in the same direction. This mode is particularly useful for parking a car, for example.

After presenting the dynamic equations modelling the longitudinal, lateral and yaw motions of the vehicle in the frame (G, x, y, z) — where ψ̈ denotes the yaw acceleration, l_f (resp. l_r) the distance between the centre of gravity of the vehicle and the front (resp. rear) axle, and d the half-track of the front or rear axle — we consider the moments acting at axle i:

- The moment generated by the traction torque at the wheels of axle i, denoted u_i.
- The damping moment between the tyres of axle i and the road, M_bi = B_i δ̇_wi (2.31).
- The self-aligning moment that resists the motion of the wheels, denoted M_ATi. This moment is the source of the understeer effect in a vehicle; it is caused by the lateral forces of a wheel being generated at a point other than the wheel centre.
- The moment created by the longitudinal force at wheel i about the vertical axis passing through the centre of the axle. Even when the vehicle moves in a straight line, this moment is present.
It is denoted M_vi.

Note that equations (2.46) and (2.48) are deduced from the relation linking the velocities of the front and rear axles.

Let us now turn to the model of a 2WS4WD vehicle (shown in Figure 2.7). We define the point H as the orthogonal projection of the instantaneous centre of rotation G_r onto the longitudinal axis of the vehicle, F and R as the centres of the front and rear axles, and L = FH + RH as the wheelbase (2.65).

The equations of motion at the point F are:

ẋ_F = v_F cos(δ_wf + ψ),  ẏ_F = v_F sin(δ_wf + ψ)  (2.58)

and at the point R:

ẋ_R = v_R cos(δ_wr + ψ),  ẏ_R = v_R sin(δ_wr + ψ)  (2.60)

Projecting both velocities onto the longitudinal axis of the vehicle gives the relation linking the instantaneous speeds of the front and rear axles:

v_F cos(δ_wf) = v_R cos(δ_wr)  (2.59)

The steering geometry yields:

tan(δ_wf) = FH / G_rH  (2.63)
tan(δ_wr) = −RH / G_rH  (2.64)

so that

tan(δ_wf) − tan(δ_wr) = L / G_rH  (2.66)–(2.67)

and the distance G_rH is expressed as:

G_rH = L / (tan(δ_wf) − tan(δ_wr))  (2.68)

The yaw rate of the vehicle being given by

ψ̇ = V_G cos(β) / G_rH  (2.62)

with V_G the instantaneous speed at the centre of gravity and β the sideslip angle, we can express the yaw rate of the vehicle as a function of the steering angles:

ψ̇ = V_G cos(β) (tan(δ_wf) − tan(δ_wr)) / L  (2.69)

To this model, we introduce two state variables (Rajamani et al., 2003): the longitudinal acceleration and the lateral acceleration.
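Equation (2.69) lends itself to a quick numerical check. The sketch below is a minimal illustration, not part of the thesis: the function names and the parameter values (speed, wheelbase) are assumptions chosen for the example.

```python
import math

def yaw_rate(v_g, delta_wf, delta_wr, wheelbase, beta=0.0):
    """Kinematic yaw rate of a 4WS vehicle, eq. (2.69):
    psi_dot = V_G * cos(beta) * (tan(delta_wf) - tan(delta_wr)) / L.
    Angles in radians, speed in m/s, wheelbase in m."""
    return v_g * math.cos(beta) * (math.tan(delta_wf) - math.tan(delta_wr)) / wheelbase

# Counter-phase rear steering (delta_wr = -delta_wf) doubles the yaw rate
# obtained with front steering alone, hence the tighter turns of dual mode.
r_front_only = yaw_rate(10.0, 0.1, 0.0, 2.5)
r_dual = yaw_rate(10.0, 0.1, -0.1, 2.5)
```

With same-sign steering of equal amplitude (park mode), the two tangent terms cancel and the vehicle translates without yawing, consistent with Figure 2.2.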
These two state variables — the longitudinal and lateral accelerations — are expressed from the vehicle speed, the sideslip angle and the yaw motion (2.56)–(2.57). Using the relations (2.49)–(2.57) between RH, HF and the steering angles, the instantaneous-speed relation (2.62) leads to the yaw-rate expression (2.69).

Figure 2.7: Diagram of a 2WS4WD vehicle.

2.4 Fault-tolerant control strategies for the 2WS4WD autonomous vehicle

The nonlinear model of the autonomous vehicle given by equation (2.77) does not require the assumptions of small steering angles or of negligible actuator dynamics. It can therefore represent any variation of the front and rear steering angles and include the effect of the wheel steering dynamics on the behaviour of the overall system. This makes the model suitable for the development of our control law, presented in Chapters 3 and 4.
In this section, we present some work from the literature applying fault-tolerant control to a 2WS4WD vehicle. This work is grouped according to the task-distribution (allocation) strategy applied, namely offline task distribution and online task distribution.

An actuator fault is a partial or total loss of effectiveness of that actuator ((Guo et al.), (X. C. Zhang 2014)). A loss of effectiveness is said to be partial (Partial Loss of Effectiveness) when the effectiveness of the actuator belongs to a bounded interval [λ̄, 1], with λ̄ the value for which the actuator effectiveness is minimal. A loss of effectiveness is said to be total when the actuator is stuck in a given position and no longer responds to the command (Lock in Place), when it is stuck in its maximum or minimum position regardless of the command (Hard-Over Fault), or when no torque can be delivered by the actuator; in this last case, the actuator effectiveness is zero.

In this thesis, we consider faults of the 2WS4WD vehicle that produce a lateral deviation of the vehicle, namely faults of the 4 traction actuators and of the 2 steering actuators. Other kinds of faults can directly or indirectly cause a lateral deviation of the vehicle but will not be addressed in our study, such as faults of the tyres or of the mechanical structure of the vehicle, faults of the information transmission (data bus), or faults of the sensors used in the control algorithms: accelerometer, inertial measurement unit, angular position sensors or wheel rotation speed sensors. For further information, see (Fischer et al.) and (Dumont 2006).

According to (Blanke et al.), the main causes of these electric actuator faults are a malfunction of the inverter gate driver ("gate-driver malfunction"), a short-circuit of the switches of the integrated power electronics ("power switch short-circuit"), an open circuit of those switches ("power switch fail open"), an open phase of a stator winding ("winding open phase"), and a short-circuit in an internal stator winding ("internal turn fault").
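The fault classes defined above can be captured by a small actuator model. The sketch below is illustrative only — the function name, mode labels and numeric values are assumptions, not taken from the thesis:

```python
def actuator_output(u_cmd, mode, gamma=1.0, stuck_value=0.0, u_min=-1.0, u_max=1.0):
    """Output of an actuator under the fault classes above.
    mode: 'healthy'; 'ploe' (partial loss of effectiveness, gamma in [lambda_bar, 1]);
    'lip' (lock in place at stuck_value); 'hof' (hard-over: stuck at a limit,
    whatever the command); 'loss' (no torque at all, effectiveness zero)."""
    if mode == 'healthy':
        return u_cmd
    if mode == 'ploe':
        return gamma * u_cmd          # command still acts, but attenuated
    if mode == 'lip':
        return stuck_value            # frozen, insensitive to the command
    if mode == 'hof':
        return u_max if stuck_value >= 0 else u_min
    if mode == 'loss':
        return 0.0
    raise ValueError(mode)
```

Such a model is convenient for injecting faults in simulation: the controller keeps issuing `u_cmd`, while the plant receives `actuator_output(u_cmd, mode)`.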
Six actuators can be used to ensure the trajectory tracking of a 2WS4WD vehicle: 2 actuators to control the steering of the front and rear axles, and 4 actuated motor-wheel units to guarantee the independent control of each wheel. In (Zhou et al.), an offline task distribution is adopted, which consists in applying the same command to the front and rear steering actuators whatever the state of the system. In (Plumlee et al.), the braking commands of each wheel are distributed online as a function of the estimated fault. In both approaches, the fault-tolerant control law designed in the literature for a 2WS4WD vehicle is computed by a single on-board computer: it is a centralized fault-tolerant control, as described in Figure 2.8, where the controller generates a desired command u_des by considering the global dynamics of the system.

Thèse de Alain Haddad, Lille 1, 2014

Let V = ½ wᵀPw, with w = (w₁, w₂, ..., wₙ)ᵀ, and express the control law U in the following way. In a second step, we compute the virtual control law α that guarantees the desired performance for this subsystem.
To this end, we choose a positive-definite candidate Lyapunov function of the form:

V₁(z₁) = ½ z₁²  (2.95)

We then compute the condition that α must satisfy so that dV₁(z₁)/dt < 0 for z₁ ≠ 0. This condition guarantees the global asymptotic convergence of subsystem (2.94) to the equilibrium point z₁ = 0. The derivative of V₁ writes:

V̇₁(z₁) = z₁ ż₁  (2.96)

A solution making V̇₁(z₁) strictly negative for any nonzero value of z₁ is to define the virtual control α so that ż₁ = −k₁ z₁, with k₁ > 0, which yields V̇₁(z₁) = −k₁ z₁²  (2.97).

An LQ control law is then established from this model by solving algebraic Riccati equations; we thus obtain the command u_des, with u_des the vector of desired control efforts (2.108). To ensure an optimal redistribution of the command u_des to the different actuators, a performance criterion J is considered. The goal is to find the command u minimising J, with:

J = ½ uᵀQu + cᵀu  (2.109)

where Q and c are weighting matrices, and u such that:

B u = u_des,  u_min ≤ u ≤ u_max  (2.110)

If no solution exists for this optimisation problem, the performance criterion J is rewritten so as to relax the allocation constraint (2.107).

In (Yang et al., 2010b), a switching law is determined to ensure the trajectory tracking of a 2WS4WD vehicle. Two controllers are designed to ensure trajectory tracking in normal operation and in the presence of an actuator fault: the first is an LQR controller and the second a fault-robust controller (see Figure 2.14). The LQR controller, which is not a robust controller, is used in normal operation to control simultaneously the front and rear wheel steering and ensure the trajectory tracking of the vehicle. When a fault is detected, the system switches to a robust control law.
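The allocation step (2.109)–(2.110) can be sketched numerically. The fragment below is an illustrative simplification, not the thesis implementation: the actuator bounds of (2.110) are dropped so that the remaining equality-constrained quadratic program can be solved in closed form through its KKT system.

```python
import numpy as np

def allocate(B, u_des, Q=None, c=None):
    """Minimise J = 0.5*u^T Q u + c^T u  subject to  B u = u_des
    (eq. 2.109 with only the equality constraint of 2.110; bounds omitted).
    Solves the KKT system [[Q, B^T], [B, 0]] [u; lam] = [-c; u_des]."""
    k, m = B.shape
    Q = np.eye(m) if Q is None else Q
    c = np.zeros(m) if c is None else c
    kkt = np.block([[Q, B.T], [B, np.zeros((k, k))]])
    rhs = np.concatenate([-c, u_des])
    return np.linalg.solve(kkt, rhs)[:m]

# Four traction actuators asked to produce a total effort of 4 N.m:
# with Q = I and c = 0 the minimum-energy allocation spreads it evenly.
B = np.ones((1, 4))
u = allocate(B, np.array([4.0]))
```

A full implementation would additionally enforce the box constraints u_min ≤ u ≤ u_max, e.g. with an active-set or interior-point QP solver.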
This control law, designed using Lyapunov theory, guarantees the trajectory tracking of the system in the presence of the fault. This gives the diagnosis module the time needed to estimate the amplitude of the fault. Once the fault amplitude is estimated, the system switches to a new LQR control law adapted to the fault.

A switched system writes:

ẋ(t) = f_σ(t)(t, x(t), u(t))  (2.115)

with x(t) ∈ ℝⁿ the system state, u(t) ∈ ℝᵐ the command, f_σ(t)(t, x(t), u(t)) the vector field describing the operating regime of the system, and σ(t) a piecewise-constant function called the switching law, defined as:

σ : ℝ → E = {1, 2, ..., N}  (2.116)

which characterizes the active regime. Only one subsystem is active at a given instant. The choice of the active subsystem can be linked to a time criterion, to regions of the state space of the system, or to an external parameter (Hetel 2007). Systems can be intrinsically hybrid when they can operate in several operating modes, or hybrid through the control when several commands are available and a particular command is selected according to an internal or external event (control switching).

Active fault-tolerant control for an over-actuated system based on reference generation

In our study, we consider the case of an over-actuated system having m actuators and p controlled state variables, with m > p. The control problem of an over-actuated system involves four entities: a global objective O, a set of local references o, a control class U and a set of constraints C. The constraints of the system are expressed using the global structure S of the system and a set of parameters θ. Before fault detection, the over-actuated system is controlled by a main controller using k actuators.

Definition 3.1.
A system is over-actuated if the number of available actuators is greater than the minimum number of actuators required to accomplish a mission.

Definition 3.2. The over-actuation index for a given mission is the number of actuators that are potentially usable but not necessary to execute the mission.

The definition of over-actuated systems that we adopt adds to the classical definitions, as in (Levine 2010) and (Vermillon 2009), the notion of a mission for the system. A mathematical definition is given in (Härkegård et al., 2005) for over-actuated systems represented in the following state-space form:

ẋ = f(x) + B_u(x) u  (3.1)

with f(x) ∈ ℝⁿ, u(t) ∈ ℝᵐ the control signal and B_u(x) ∈ ℝⁿˣᵐ. The system is over-actuated if B_u(x) is not of full rank, i.e. rank(B_u(x)) = k < m. This allows the matrix B_u(x) to be factorized as:

B_u(x) = B_v(x) B(x)  (3.2)

where the matrices B_v(x) ∈ ℝⁿˣᵏ and B(x) ∈ ℝᵏˣᵐ are of rank k. Introducing (3.2) into (3.1), we can write:

ẋ = f(x) + B_v(x) v,  v = B(x) u  (3.3)

where v(t) ∈ ℝᵏ represents the total effort produced by the actuators. This implies that, for an over-actuated system, infinitely many control laws u can lead to the same total effort v.

- Outer loop: In the outer loop, we seek to solve the problem described by the triplet ⟨O, C_e(S, θ), o^a⟩. This consists in finding a set of local objectives o^a, linked to the new actuators activated after fault detection, allowing the global objective O to be reached while respecting the constraint relations C_e, with C_e = C \ C_d, C being the constraint relations corresponding to the dynamic model of the global system and C_d the constraint relations linked to the actuator dynamics. At this stage, the activated actuators are therefore assumed to respond instantaneously and exactly.
- Inner loop: Knowing o^a, computed in the previously described outer loop, we seek in the inner loop to solve the control problem described by the triplet ⟨o^a, C_d^a(S, θ), U⟩. The goal is to find in U (the set of admissible commands) a control law reaching the set of local objectives o^a while respecting the constraints C_d^a of the dynamics of the healthy actuators activated after fault detection, with C_d^a ⊂ C_d.
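A toy numerical instance of the factorization (3.1)–(3.3) — all values below are assumed for illustration: with three actuators producing two total effects, the effectiveness matrix has a nontrivial null space, so distinct controls u yield the same total effort v.

```python
import numpy as np

# k = 2 controlled effects, m = 3 actuators: B plays the role of B(x) in v = B u.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def overactuation_margin(B):
    """Dimension of the null space of B: the number of independent ways the
    controls can be changed without altering the total effort v = B u."""
    return B.shape[1] - np.linalg.matrix_rank(B)

# Two different allocations producing the same total effort v = [2, 1]:
u1 = np.array([2.0, 0.0, 1.0])
u2 = np.array([0.5, 1.5, 1.0])
```

Any u + u_n with u_n in the null space of B gives the same v; a fault-tolerant allocator exploits exactly this freedom to move effort away from a failed actuator.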
The local objectives of the initial actuators, generated in normal and faulty operation, are denoted o_i, with i ∈ {1, 2, ..., k}, and the local objectives of the actuators used only after fault detection are denoted o_j^a, with j ∈ {1, 2, ..., m−k} (cf. Figure 3.1). The control laws C_i→o_i established offline, with i ∈ {1, 2, ..., k}, are the only ones initially used to control the k actuators and ensure the tracking of these references. Local objectives (or local references) o_j^a, with j ∈ {1, 2, ..., m−k}, linked to the redundant actuators, are then computed in an outer loop so that the global objective of the system is preserved despite the non-tracking of the local references o_i (with i ∈ {1, 2, ..., k}). The control laws of the new subsystems used ensure the tracking of this set of local references (inner loop).

For this over-actuated system with decentralized control, a diagnosis module is integrated. It detects faulty operation when the system deviates from the fixed global objective; the origin of this deviation is the non-tracking of certain fixed local objectives. Once a fault is detected, the (m−k) actuators not used in normal operation are activated. The controllers, tuned to track the local objectives of the initial actuators, are not modified: new local references (references for each actuator) are computed in order to compensate the effect of the fault. A general presentation of this approach is given in Section 3.3 and is then applied in Section 3.4, on a 2WS4WD autonomous vehicle, using a co-simulation between the CarSim and Matlab/Simulink software packages.

An equivalent definition of over-actuation is given in (Michellod 2009), which considers a system to be over-actuated if there are infinitely many control laws to accomplish a given mission. One also finds work, in particular in robotics, for which a system is over-actuated if the number of its actuators is greater than its degrees of freedom, as in (Oppenheimer et al., 2006) and (Vissers 2005). Since v is computed so as to accomplish a given mission, one recovers the definition of over-actuated systems given by (Michellod 2009).

The definition we adopt adds the notion of mission: the degree of over-actuation depends on this mission. For example, depending on the mission and the conditions of use, a 2WS4WD vehicle may or may not be considered over-actuated. Under normal adhesion conditions and for a standard trajectory-tracking objective, only one steering actuator and one traction actuator are necessary (this corresponds to classical vehicles, front- or rear-wheel drive with a steerable front axle). Under these conditions, the 2WS4WD vehicle is thus over-actuated, with an over-actuation index equal to 4. On the other hand, under low adhesion conditions and/or for demanding trajectory tracking (very tight successive bends), additional actuators must be used to maintain the performance of the vehicle; the over-actuation index decreases and can become zero when all actuators must be used.

We present in the next part the design of an active fault-tolerant control law. This law is based on dynamic reference generation and consists in redistributing the tasks among the different actuators of an over-actuated system as soon as a fault is detected. It does not require modifying the control laws designed offline: as soon as faulty behaviour of the system appears, new references are generated.

When the global objective O of the system is no longer reached, a fault is signalled by the diagnosis module. The initial controller is kept and continues to control the k active actuators. The failure can be compensated by controlling the (m−k) redundant actuators, assumed healthy since they are not used initially. This controller, acting on the redundant actuators, can be qualified as fault-tolerant even though it does not act on the faulty elements: indeed, it ensures the global mission (objective O) of the system in the presence of actuator faults. This fault-tolerant controller is made of two controllers implemented in two interconnected loops: an outer loop and an inner loop.

We subsequently apply this strategy to a model of an over-actuated system with decentralized control. The choice of decentralized control is justified by the fact that, unlike centralized control, it makes it possible to modify the local control laws of the subsystems without having to recompute the global control law (Vermillon 2009).
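The two-loop principle can be sketched on a scalar toy system. Everything below — plant, gains, thresholds, names — is an assumed illustration, not the thesis controller: the nominal local law keeps tracking its unchanged reference; after a total loss of actuator 1, the outer loop generates a reference for the redundant actuator from the global error, and the inner loop (here an ideal actuator) applies it.

```python
def simulate(fault_at=1.0, t_end=12.0, dt=0.01, x_ref=1.0):
    """Plant x' = -x + u1_eff + u2 with global objective x = x_ref.
    u1 follows its offline local law (never modified); at t = fault_at
    actuator 1 fails totally. When the global error exceeds a threshold,
    the redundant actuator is activated: the outer loop integrates the
    global error into a local reference u2_ref, and the (ideal) inner
    loop tracks it."""
    x, u2_ref, detected, t = 1.0, 0.0, False, 0.0  # start at the objective
    while t < t_end:
        u1 = x_ref + 2.0 * (x_ref - x)            # offline local law
        u1_eff = 0.0 if t >= fault_at else u1     # total loss of effectiveness
        if t >= fault_at and abs(x_ref - x) > 0.2:
            detected = True                       # diagnosis: objective O missed
        if detected:
            u2_ref += 2.0 * (x_ref - x) * dt      # outer loop: reference generation
        u2 = u2_ref                               # inner loop: ideal tracking
        x += dt * (-x + u1_eff + u2)
        t += dt
    return x
```

In normal operation the redundant actuator stays idle; after the fault, the generated reference u2_ref converges to the effort the failed actuator used to provide, restoring the global objective.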
Or, une liaison existe, d'une part, entre les forces longitudinales et les couples de traction, et de l'autre, entre les forces latérales et les couples de direction (voir Annexe).  y F  wr (3.29) Les forces longitudinales et latérales (resp. xi F et yi F ) au niveau des roues i du  i  véhicule, avec  1, 2,3, 4 Toute variation au niveau des couples de traction (resp. direction) induit une variation au niveau des forces longitudinales (resp. latérales) et par la suite une G yV     variation de   . Or, dans la section précédente on a montré la relation : cos( ) (3.30) T  est choisi en fonction du temps de réponse du système de braquage des roues arrière du véhicule, puisque l'on souhaite réagir avant que le véhicule ne sorte de la route. Nous définissons tout d'abord un seuil maximal fixe max S qui est fonction de la largeur de la route considérée. Nous prenons comme hypothèse que l'accélération de l'erreur de position latérale du véhicule est constante sur un intervalle de temps T  . Cette hypothèse est réaliste puisque l'intervalle de temps T  est court en pratique. Commençons par exprimer la prédiction de l'erreur latérale () yt  pour 0    comme suit: t t T ( y t 0 T ) 0 tT 0 () dy t dt dt 0 0 t ( ) dy t dt dt t 0 0 t T ( ) dy t dt dt 0 ( ( ) (0)) ( ( y t y y t 0 0 ) ( )) T y t   est une fonction de Lyapunov et le point 0 x  est un point d'équilibre globalement asymptotiquement stable. Pour trouver la vitesse de lacet désirée des  qui assure la convergence asymptotique globale au point d'équilibre ( , ) ( , ref y y y y ref   ) nécessaire pour garantir le suivi de trajectoire du véhicule tout en assurant les performances requises, nous cherchons une fonction de Lyapunov quadratique candidate 111 ( , ) V e e  , avec 1 e y  et ref y 1 e y  ref  y . 
We then determine sufficient conditions on the command for an augmented candidate Lyapunov function V₂(e₁, ė₁, δ̃_wr, δ̇̃_wr) — obtained by adding to V₁ quadratic terms in the rear-steering tracking errors δ̃_wr = δ_wr − δ_wrdes and δ̇̃_wr = δ̇_wr − δ̇_wrdes — to decrease along the trajectories. The rear-steering command u_r, given in equation (3.99), compensates the moments M_br and M_Tr acting on the rear axle through the inertia J_r and imposes the desired steering dynamics with gains K₂ and K₃. Condition C10 is then verified, so that V̇₂ ≤ 0. We can thus conclude that, for the command computed in equation (3.99), V₂(e₁, ė₁, δ̃_wr, δ̇̃_wr) is a Lyapunov function and (y, ẏ, δ_wr, δ̇_wr) = (y_ref, ẏ_ref, δ_wrdes, δ̇_wrdes) is a globally asymptotically stable equilibrium point.

Remark 3.3. The control law presented in equation (3.99) applies when the yaw angle of the vehicle satisfies one of the following conditions (as in Section 3.4.3.1):

C5: −π/4 < arctan(ẏ/ẋ) < π/4  (3.103)
C6: 3π/4 < arctan(ẏ/ẋ) < 5π/4  (3.104)

Otherwise, we perform a change of basis from the frame OXYZ to the frame OYXZ, as presented in Remark 3.1, and the command u_r is written accordingly.

Co-simulation CarSim/Matlab-Simulink

We consider in our study the following case. A 2WS4WD autonomous vehicle drives at a constant speed of 60 km/h. This speed is higher than the maximum speed of RobuCar, which is 20 km/h, but it makes it possible to test the effectiveness of the control law under more demanding conditions. The vehicle performs a double lane change on a dry asphalt road (the maximum adhesion being μ_max = 1.2). At t_f = 3.3 s, a drop of effectiveness occurs in the front steering actuator. In a first scenario, the lateral control of the vehicle is ensured by the front wheels alone.
We can see from Figure 3.5 that the vehicle exceeds the maximum road limits at t = 4.59 s. These limits, determined from the standard ISO 3888-1:1999(F), AFNOR, are computed as follows:

S_max1 = y_ref + 1.3 × (vehicle width) + 0.25
S_max2 = y_ref − 1.3 × (vehicle width) − 0.25

Figure 3.5: Vehicle trajectories in nominal and faulty operation (lateral position versus time: reference trajectory, trajectory in nominal operation, trajectory in faulty operation, maximum and minimum road limits; the instant of fault appearance is marked between the bounds S_max1 and S_max2).

In a second scenario, when the lateral error y_r − y_ref exceeds a dynamic threshold at t_fd = 4.04 s, the active fault-tolerant control based on dynamic reference generation is activated. At first, we simulate the vehicle without including the rear-wheel steering dynamics; in this case δ_wr(t) = δ_wrdes(t), i.e. δ̃_wr(t) = 0 for all t. For the gains K₀ and K₁ we choose K₀ = ω_n² = 4, with ω_n = 2 rad/s the natural frequency of equation (3.57), and K₁ = 2ω_n = 4.

For the rear-wheel steering, we apply the angular momentum theorem at the centre of the rear axle and obtain the following equation:

J_r δ̈̂_wr = u_r − M_br − M̂_ATr − M̂_SAr − M̂_vr  (4.1)

i.e. J_r δ̈̂_wr = u_r − M_br − M̂_Tr, with:

M̂_Tr = M̂_ATr + M̂_SAr + M̂_vr  (4.2)

In equation (4.1), we measure the rear steering rate δ̇_wr and the steering torque u_r, and we estimate the rear steering acceleration δ̈̂_wr and the moments M̂_Tr generated at the rear axle. To estimate the moments M_ATr and M_SAr, however, it is necessary to know the tyre-road interface.
Indeed, using the Pacejka model (Pacejka, 2002) (see Appendix) to estimate the moments M̂_ATr and M̂_SAr, we obtain:

M̂_ATr = k_mr D_mr sin(C_mr arctan(B_mr α_r)) + S_vmr  (4.3)

M̂_SAr is expressed in (4.4) from the estimated longitudinal and lateral forces F̂_xr and F̂_yr, weighted by trigonometric functions of the rear steering angle δ_wr and the geometric offsets d and r, with:

F̂_xr = k_xr D_xr sin(C_xr arctan(B_xr x_Gr))  (4.5)
F̂_yr = k_yr D_yr sin(C_yr arctan(B_yr α_yr))  (4.6)

The variables of equations (4.3), (4.4), (4.5) and (4.6) are detailed in the Appendix. The parameter S_vmr is subsequently considered zero because it depends on the camber angle, which has not been taken into account in the steering-motion model of the system (Gillespie 1992).

Remark 4.1. The parameters k_mr, k_xr and k_yr depend on the tyre-road interface.

Evaluation of residuals for identification

The residual r₇ is built from equation (4.8), with M̂_vr proportional to F̂_zr d sin(δ_wr) (4.9). As equation (4.8) shows, r₇ is based on estimated quantities and on measured quantities. The rear-wheel steering system being considered fault-free, the residual r₇ must be close to 0 if the parameters k_mr, k_xr and k_yr of equation (4.8) are exact or if their estimates are close to the real values. When the residual is nonzero, these parameters differ from the real values.

To evaluate the preceding residual, we assumed that the nature of the road surface is exactly known. Otherwise, the residual r₇ cannot be applied; in that case we evaluate another residual, r₈, built with primed estimates k'_mr, k'_xr and k'_yr computed under an assumed tyre-road interface hypothesis. Among these residuals, the one with the smallest absolute value indicates the closest tyre-road interface hypothesis. We then estimate the longitudinal and lateral forces at each wheel using the identified tyre-road interface parameter. This then allows us to generate the structured residuals r_i, with i ∈ {1, 2, 3, 4, 5, 6}, which will enable the isolation of the fault.
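The structure of the estimates (4.3)–(4.6) and of the residual r₇ can be sketched as follows. This is a simplified illustration with assumed coefficient values; the actual Pacejka coefficients and slip definitions are detailed in the thesis Appendix.

```python
import math

def magic_formula(slip, B, C, D, k=1.0, Sv=0.0):
    """Pacejka 'magic formula' shape used in eqs. (4.3)-(4.6):
    k * D * sin(C * arctan(B * slip)) + Sv."""
    return k * D * math.sin(C * math.atan(B * slip)) + Sv

def residual_r7(u_r, J_r, ddelta_wr, M_br, M_ATr_hat, M_SAr_hat, M_vr_hat):
    """Residual built from eq. (4.1): close to zero when the tyre-road
    parameters used in the moment estimates match the real interface."""
    return u_r - J_r * ddelta_wr - M_br - M_ATr_hat - M_SAr_hat - M_vr_hat

# With parameters consistent with the measured motion, the residual vanishes:
J_r, u_r, M_br = 2.0, 5.0, 0.5
M_AT = magic_formula(0.05, B=10.0, C=1.5, D=1.0)  # assumed coefficients
M_SA, M_v = 0.2, 0.1
ddelta = (u_r - M_br - M_AT - M_SA - M_v) / J_r   # steering acceleration
```

If the interface hypothesis behind the `k` scaling factors is wrong, the estimated moments are biased and the residual moves away from zero — the mechanism exploited next by r₈.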
Ceci nous permet de générer par la En posant 1 tan ( cos( ) sin( ˆsin( mr mr mr k D C d d ATr c r    B mr )      suite les résidus structurés i M      r , avec   1, 2,3, 4,5,6 i  , qui permettrons la localisation ))  (4.17) l'équation (4.16) est réécrite de la manière suivante : du défaut. 8 r  ' ˆˆˆy '' ˆˆ( 1) ( 1) cos( ) ( r mr xr ATr xr wr mr xr yr k kk M F k k k  1) sin( ) yr wr F      d c (4.18) D'après (Gillepsie, 1992), les moments ˆATr M , ˆcos( ) xr wr Fd  c et ˆsin( ) yr wr Fd  c de l'équation (4.18) ont la même direction. Ils s'opposent au mouvement de rotation du véhicule et produisent un effet de sous-virage. De plus, les paramètres ˆ'mr k , ˆ'xr k et ˆ'yr k de l'équation (4.18) varient en fonction de l'interface roue-chaussée. Si l'hypothèse d'interface roue-chaussée considérée pour calculer ˆ'mr k , ˆ'xr k et ˆ'yr k suppose que les frottements roue-chaussée sont plus importants que ceux considérés pour le calcul de ˆmr k , ˆxr k et ˆyr k , on a alors ˆ' mr kk  mr , ˆ' xr k  k xr et ˆ' yr k  k yr . Si on suppose qu'il y a moins de frottements roue-chaussée, les valeurs de ˆ'mr k , ˆ'xr k et ˆ'yr k sont inférieurs respectivement à celles de ˆmr k , ˆxr k et ˆyr k . Nous pouvons alors conclure que lorsque les erreurs d'estimation des forces longitudinales et latérales et du moment d'auto-alignement augmentent, ˆ'mr mr k k 4.15) , ˆ'xr xr k k et ˆ'yr yr k k s'éloignent de 1. Dans ce cas là, 8 r augmente vu que les moments ˆATr M , Si on suppose que les paramètres mr k , xr k et yr k , sont exacts, le résidu r 7 est nul ˆcos( ) xr wr c Fd  et ˆsin( ) yr wr c Fd  de l'équation (4.18) ont la même direction. Pour et ce qui permet d'avoir l'expression suivante du résidu r 8 : chaque hypothèse d'interface roue-chaussée considérée, on obtient alors une valeur 1 ˆ'' ˆ( 1) sin( tan ( )) (( ˆ' mr mr mr mr mr xr xr kk k D C B  1) cos( ) xr wr F    de résidu unique. 
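Equations (4.3), (4.5) and (4.6) are all instances of Pacejka's "magic formula". A minimal sketch of that formula (the coefficient values used below are illustrative, not the thesis calibration, which is given in its Appendix):

```python
import math


def magic_formula(x, B, C, D, E=0.0, S_h=0.0, S_v=0.0):
    """Pacejka 'magic formula':
        y(x) = D * sin(C * atan(B*xs - E*(B*xs - atan(B*xs)))) + S_v,
    with xs = x + S_h.  B: stiffness factor, C: shape factor,
    D: peak value, E: curvature factor, S_h/S_v: horizontal and
    vertical shifts.  With E = S_h = S_v = 0 this reduces to the
    D*sin(C*atan(B*x)) form of equations (4.3), (4.5) and (4.6)."""
    xs = x + S_h
    bx = B * xs
    return D * math.sin(C * math.atan(bx - E * (bx - math.atan(bx)))) + S_v
```

Under this convention, F̂_xr of equation (4.5) would read k_xr * magic_formula(G_r, B_xr, C_xr, D_xr).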
Considering a finite number of road-surface hypotheses determined off-line, as in the observer-based approaches to tyre-road interface estimation (see [START_REF] Baffet | Estimation of vehicle sideslip, tire force and wheel cornering stiffness[END_REF]), a finite number of unique residuals is generated.

The moments applied at the centre of wheel i, considered in equation (4.21) below, are the following:
- the moment generated by the viscous friction force between wheel i and its traction motor, M_fi = f_i ω_i;
- the impact moment generated by the longitudinal and lateral forces arising from the contact between wheel i and the road, M_ti = r (F_xi cos(δ_wi) + F_yi sin(δ_wi)), with r the wheel radius;
- the motor torque generated by actuator i, M_tri = u_i.

We finally express the traction model at wheel i as:

J_i ω̇_i = Σ M(F_ext→i) = u_i − f_i ω_i − r (F_xi cos(δ_wi) + F_yi sin(δ_wi))    (4.21)

with Σ M(F_ext→i) denoting the sum of the moments applied at the centre of wheel i (the inertia term on the left-hand side, illegible in the source, is reconstructed from the listed moments).

Table 4.1: Fault signature matrix

      f1  f2  f3  f4  f5
r1    1   1   1   1   1
r2    1   0   0   0   0
r3    0   1   0   0   0
r4    0   0   1   0   0
r5    0   0   0   1   0
r6    0   0   0   0   1

In Table 4.1, f_i denotes the failure of actuator i. Traction actuator failures carry the index i ∈ {1, 2, 3, 4}, while the failure of the front-axle steering actuator carries the index i = 5. The failure of the rear-axle steering actuator does not appear in this table, since that actuator is assumed fault-free. As this signature matrix has full rank, the fault signatures are independent; any actuator fault can therefore be localized.

4.4 Control reconfiguration

Once the fault has been detected and localized, we apply a hardware reconfiguration to the 2WS4WD vehicle. This type of reconfiguration consists in changing the internal structure of the system, using only the healthy components (see Chapter 1). To carry out this reconfiguration, we compute a control law u ∈ U, established off-line, allowing the objective O to be reached under the constraints C(S_r, θ_r), with (S_r, θ_r) representing the structure and parameters not modified by the failure and S the set of possible structures.

We consider two hardware reconfiguration strategies, depending on the localized fault. If the vehicle's trajectory deviation is caused by a fault of one of the four traction actuators, the wheel controlled by this actuator is left free (as in (Dumont, 2006)). The vehicle control is then reconfigured by disconnecting the faulty traction actuator and using only the healthy actuators. We keep controlling the rear-axle steering actuator in order to guarantee the desired performance.

This hardware reconfiguration remains applicable to a 2WS4WD vehicle as long as the over-actuation index of the system is nonzero. We recall that the over-actuation index, defined in Chapter 3, depends on the given mission: it represents the number of actuators that are potentially usable but not necessary to execute the mission. In our case, the mission the 2WS4WD vehicle must accomplish is to follow the reference trajectory while guaranteeing the desired performance. To carry out this mission, we initially used 5 actuators of the system: the 4 traction actuators and the front-axle steering actuator.
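Given the signature matrix of Table 4.1 (as reconstructed here), where r1 fires for any actuator fault and r_{i+1} fires only for fault f_i, localization amounts to matching the boolean residual vector against the columns. A minimal sketch, assuming the raw residuals have already been thresholded upstream (the names are ours):

```python
# Columns of the fault signature matrix of Table 4.1, one signature per
# fault over the residuals (r1, ..., r6).  f1..f4: traction actuators,
# f5: front-axle steering actuator.
FAULT_SIGNATURES = {
    "f1": (1, 1, 0, 0, 0, 0),
    "f2": (1, 0, 1, 0, 0, 0),
    "f3": (1, 0, 0, 1, 0, 0),
    "f4": (1, 0, 0, 0, 1, 0),
    "f5": (1, 0, 0, 0, 0, 1),
}


def locate_fault(fired):
    """Return the fault whose column signature matches the boolean
    residual vector (r1..r6), or None if no column matches."""
    fired = tuple(int(bool(v)) for v in fired)
    for fault, signature in FAULT_SIGNATURES.items():
        if signature == fired:
            return fault
    return None
```

Because the matrix has full rank, at most one column can match a given residual vector.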
The over-actuation index of the vehicle is then 1, since the rear-axle steering actuator is not needed to accomplish the mission.

If the vehicle's trajectory deviation is caused by a fault of the front-axle steering actuator, the front wheels are locked in the longitudinal direction of the vehicle. The vehicle control is then reconfigured by using the rear-wheel steering to ensure trajectory tracking. The vehicle's mission is then guaranteed, since we assumed from the start that 4 traction actuators and 1 steering actuator are sufficient to accomplish it. In both cases considered, the over-actuation index of the system becomes zero: if the rear-axle steering actuator also fails, the remaining actuators can no longer ensure trajectory tracking, and the vehicle must stop on an emergency lane.

4.5 Simulation results

In this part we test the diagnosis module presented in this chapter, using a co-simulation between CarSim and Matlab-Simulink. CarSim simulates the overall vehicle dynamics and controls the four traction actuators and the front-wheel steering actuator. The diagnosis module developed in this chapter, as well as the rear-wheel steering control presented in Chapter 3, are implemented in Matlab-Simulink; this control is activated as soon as a fault is detected by the diagnosis module.

We consider the following scenario: an autonomous 2WS4WD vehicle drives at a constant speed of 60 km/h. The road has variable characteristics and is divided into three parts: the first and third parts are dry asphalt, while the second is wet asphalt. For t ∈ [0, 2] s and t ∈ ]6, 14] s the vehicle drives on dry asphalt, and for t ∈ ]2, 6] s on wet asphalt. The maximum adhesion coefficient is μ_max = 1.2 on dry asphalt and μ_max = 0.8 on wet asphalt. An efficiency drop of the front-axle steering actuator occurs at t = 3.3 s and leads to a trajectory deviation. The lateral position error exceeds the adaptive threshold at t = 4.04 s, as shown in the corresponding figure.

To identify the tyre-road interface, we consider five different surface types. We note in Figure 4.2 that at certain instants all residuals are zero; this corresponds to the case where the rear-axle steering actuator is not excited. Indeed, in equation (4.18), the moments M̂_ATr, F̂_xr cos(δ_wr) d_c and F̂_yr sin(δ_wr) d_c oppose the rotational motion of the rear axle; if the system is not excited, these moments are zero for every tyre-road interface hypothesis. By applying this technique, the tyre-road interface is determined on-line, and the transition from one contact surface to another is detected as well (as shown in Figure 4.3 when passing from wet asphalt to dry asphalt at t = 6 s). Once the tyre-road interface information is obtained, it is used to generate the residuals r_i, i ∈ {2, 3, 4, 5, 6}.

1 - The longitudinal slip. Let V_i be the speed at wheel i; the slip G_i is defined from V_i and the wheel's rolling speed r ω_i (the exact expression is garbled in the source; it is normalized so that G_i = −1 at wheel lock). The slip G_i is created by the traction torque applied to wheel i. It is positive when the longitudinal traction force F_xi is positive: in that case the wheel's angular speed increases, so that r ω_i > V_i. When the vehicle drives on a slippery surface, G_i reaches large values. During braking, the vehicle speed becomes larger than r ω_i.
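The slip and slip-angle definitions can be sketched as follows. The exact slip denominator used in the thesis is not recoverable from the source; the convention below is a common one chosen to match the stated properties (positive under traction, −1 at wheel lock):

```python
import math


def longitudinal_slip(v_mps: float, omega_radps: float, r_m: float,
                      eps: float = 1e-6) -> float:
    """Longitudinal slip of a wheel.  Positive when r*omega > v
    (traction), negative when braking, and -1 at wheel lock
    (omega = 0).  The denominator choice is ours, not the thesis'."""
    v_wheel = r_m * omega_radps
    denom = max(abs(v_mps), abs(v_wheel), eps)
    return (v_wheel - v_mps) / denom


def slip_angle(v_lat_mps: float, v_long_mps: float) -> float:
    """Tyre slip angle alpha_i = atan(V_yi / V_xi), with V_yi and V_xi
    the lateral and longitudinal speeds at the wheel centre."""
    return math.atan2(v_lat_mps, v_long_mps)
```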
The slip value is then negative and keeps decreasing until it reaches −1 when wheel i locks.

2 - The slip angle, also called the transverse adhesion characteristic of the tyre: tan(α_i) = V_yi / V_xi, where V_yi and V_xi are respectively the lateral and longitudinal speeds at the centre of wheel i. The angle α_i generates transverse accelerations and can cause either understeer, when it is negative, or oversteer, when it is positive.

The Kiencke model computes the friction coefficient using the extended Burckhardt model. This coefficient is a function of the longitudinal slip G_i, of the vertical force F_zi and of the speed of the vehicle's centre of gravity. The model expresses the longitudinal and lateral forces in the form F_xi = F_zi μ_t(·)(G_i cos(·) + · sin(·)) and F_yi = F_zi μ_t(·)(· cos(·) + G_i sin(·)); the detailed coefficients of the extended Burckhardt law are garbled beyond recovery in the source.

Abstract: An active fault-tolerant control strategy for over-actuated systems is presented in this thesis. It consists of 4 steps: fast fault detection; activation of a fault-tolerant control that ensures trajectory tracking of the system in the presence of the fault; precise fault localization; and finally reconfiguration of the system by disconnecting the faulty component or locking it in a determined position. This control strategy is applied to a 2WS4WD autonomous vehicle: when the lateral deviation of the vehicle exceeds a dynamic safety threshold, a fault-tolerant control based on reference generation is activated. Its objective is to redistribute the tasks among the healthy actuators, unused in normal operation, in order to compensate for the effect of the fault. The control law is derived using Lyapunov theory and the backstepping technique, and is computed by two interconnected loops. The first loop, called the outer loop, computes the new local objectives required to reach the global objective of the system. The second loop, called the inner loop, computes the control law required to track the local objectives produced by the outer loop. A precise fault-localization algorithm is then applied to determine the faulty component. Once this component is identified, the over-actuated system is reconfigured using only the healthy components. The diagnosis and fault-tolerant control algorithms are finally validated using a co-simulation of the CarSim and Matlab/Simulink software.

Appendix (Pacejka model). The shifted and distorted inputs of the magic formula are of the form:

Φ_y = (1 − E_yr)(α_r + S_hyr) + (E_yr / B_yr) tan⁻¹(B_yr (α_r + S_hyr))
Φ_x = (1 − E_xr) G_r + (E_xr / B_xr) tan⁻¹(B_xr G_r)

with: B: stiffness factor; C: shape factor; D: peak value of the curve; E: curvature factor.

Keywords: active fault-tolerant control, diagnosis, over-actuated system, allocation, autonomous vehicle, nonlinear control, Lyapunov theory, backstepping, outer loop, inner loop.

© 2014 Tous droits réservés. doc.univ-lille1.fr

Acknowledgements
00175627
en
[ "phys.cond.cm-sm", "info.info-lg" ]
2024/03/05 22:32:10
2007
https://inria.hal.science/hal-00175627/file/BPitsc-final.pdf
Cyril Furtlehner, Jean-Marc Lasgouttes, Arnaud De La Fortelle

A Belief Propagation Approach to Traffic Prediction using Probe Vehicles

This paper deals with real-time prediction of traffic conditions in a setting where the only available information is floating car data (FCD) sent by probe vehicles. Starting from the Ising model of statistical physics, we use a discretized space-time traffic description, on which we define and study an inference method based on the Belief Propagation (BP) algorithm. The idea is to encode into a graph the a priori information derived from historical data (marginal probabilities of pairs of variables), and to use BP to estimate the actual state from the latest FCD. The behavior of the algorithm is illustrated by numerical studies on a simple simulated traffic network. The generalization to the superposition of many traffic patterns is discussed.

I. INTRODUCTION

With an estimated 1% GDP cost in the European Union (i.e. more than 100 billion euros), congestion is not only a waste of time for drivers and an environmental challenge, but also an economic issue. Today, some urban and interurban areas have traffic management and advice systems that collect data from stationary sensors, analyze them, and post notices about road conditions ahead and recommended speed limits on display signs located at various points along specific routes. However, these systems are not available everywhere and they are virtually non-existent in rural areas. In this context, the EU-funded REACT project developed new traffic prediction models to be used to inform the public and possibly to regulate the traffic, on all roads. The REACT project combines a traditional traffic prediction approach on equipped motorways with an innovative approach on non-equipped roads. The idea is to obtain floating car data from a fleet of probe vehicles and reconstruct the traffic conditions from this partial information.
Two types of approaches are usually distinguished for traffic prediction, namely data driven (application of statistical models to a large amount of data, for example regression analysis) and model based (simulation or mathematical models explaining the traffic patterns). Models (see e.g. [START_REF] Chowdhury | Statistical physics of vehicular traffic and some related systems[END_REF], [START_REF] Klar | Mathematical models for vehicular traffic[END_REF] for a review) may range from microscopic descriptions encoding driver behavior, with many parameters to be calibrated, to macroscopic ones, based on fluid dynamics, mainly adapted to highway traffic and subject to controversy [START_REF] Daganzo | Requiem for second order fluid approximation of traffic flow[END_REF], [START_REF] Aw | Resurrection of "second order" models of traffic flow[END_REF]. Intermediate kinetic descriptions, including cellular automata [START_REF] Nagel | A cellular automaton model for freeway traffic[END_REF], are instrumental for powerful simulation and prediction systems in equipped road networks [START_REF] Chrobok | Traffic forecast using simulations of large scale networks[END_REF].

C. Furtlehner is with Project Team TAO at INRIA Futurs, Université de Paris-Sud 11 - Bâtiment 490, 91405 Orsay Cedex, France. ([email protected])
J.-M. Lasgouttes is with Project Team IMARA at INRIA Paris-Rocquencourt, Domaine de Voluceau - BP 105, 78153 Rocquencourt Cedex, France. ([email protected])
A. de La Fortelle is with INRIA Paris-Rocquencourt and École des Mines de Paris, CAOR Research Centre, 60 boulevard Saint-Michel, 75272 Paris Cedex 06, France. (arnaud.de la [email protected])
On the other hand, the statistical approach mainly focuses on time series analysis on single road links, with various machine learning techniques [START_REF] Guozhen | Traffic flow prediction based on generalized neural network[END_REF], [START_REF] Chun-Hsin | Travel-time prediction with support vector regression[END_REF], while global prediction systems on a network combine data analysis and model simulations [START_REF] Chrobok | Traffic forecast using simulations of large scale networks[END_REF], [START_REF] Kanoh | Short-term traffic prediction using fuzzy c-means and cellular automata in a wide-area road network[END_REF]. For more information about traffic prediction methods, we also refer the reader to [START_REF] Benz | Information supply for intelligent routing servicesthe INVENT traffic network equalizer approach[END_REF], [START_REF] Versteegt | PredicTime -state of the art and functional architecture[END_REF]. We propose here a hybrid approach, taking full advantage of the statistical nature of the information, in combination with a stochastic modeling of traffic patterns and a powerful message-passing inference algorithm. The belief-propagation algorithm, originally designed for Bayesian inference on tree-like graphs [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference[END_REF], is widely used in a variety of inference problems (e.g. computer vision, coding theory. . . ) but to our knowledge has not yet been applied in the context of traffic prediction. The purpose of this paper is to give the first principles of such an approach, able to exploit both space and time correlations on a traffic network. The main focus is on finding a good way to encode some coarse information (typically whether traffic on a segment is fluid or congested), and to decode it in the form of real-time traffic reconstruction and prediction.
In order to reconstruct the traffic and make predictions, we propose to use the so-called Bethe approximation of an underlying disordered Ising model (see e.g. [START_REF] Mézard | Spin Glass Theory and Beyond[END_REF]) to encode the statistical fluctuations and stochastic evolution of the traffic, and the belief propagation (BP) algorithm to decode the information. Those concepts are familiar to the computer science and statistical physics communities since it was shown [START_REF] Yedidia | Constructing free-energy approximations and generalized belief propagation algorithms[END_REF] that the output of BP is in general the Bethe approximation. The paper is organized as follows: Section II describes the model and its relationship to the Ising model and the Bethe approximation. The inference problem and our strategy to tackle it using the belief propagation approach are stated in Section III. Section IV is devoted to a more practical description of the algorithm, and to numerical results illustrating the method. Finally, some new research directions are outlined in Section V.

II. TRAFFIC DESCRIPTION AND STATISTICAL PHYSICS

We consider a road network for which we want both to reconstruct the current traffic conditions and to predict future evolutions. To this end, all we have is partial information in the form of floating car data arriving every minute or so. The difficulty is of course that the up-to-date data only covers part of the network and that the rest has to be inferred from it. In order to take into account both spatial and temporal relationships, the graph on which our model is defined is made of space-time vertices that encode both a location (road link) and a time (discretized on a few minutes scale). More precisely, the set of vertices is V = L ⊗ Z+, where L corresponds to the links of the network and Z+ to the time discretization.
To each point α = (ℓ, t) ∈ V, we attach a variable τ_α ∈ {0, 1} indicating the state of the traffic (1 if congested, 0 otherwise). On such a model, the problems of prediction and reconstruction are equivalent, since they both amount to estimating the value of a subset of the nodes of the graph. The difference, which is obvious for the practitioner, lies mainly in the nature (space or time) of the correlations which are most exploited to perform the two tasks. Each vertex is correlated to its neighbors (in time and space) and the evaluation of this local correlation determines the model. In other words, we assume that the joint probability distribution of τ_V ≝ {τ_α, α ∈ V} ∈ {0,1}^V is of the form

p({τ_α, α ∈ V}) = ∏_{α∈V} φ_α(τ_α) ∏_{(α,β)∈E} ψ_{αβ}(τ_α, τ_β),    (1)

where E ⊂ V² is the set of edges, and the local correlations are encoded in the functions ψ and φ. V together with E describes the space-time graph G, and V(α) ⊂ V denotes the set of neighbors of vertex α. The model described by (1) is actually equivalent to an Ising model on G, with arbitrary coupling between adjacent spins s_α = 2τ_α − 1 ∈ {−1, 1}, the up or down orientation of each spin indicating the status of the corresponding link (Fig. 1). The homogeneous Ising model (uniform coupling constants) is a well-studied model of ferro- (positive coupling) or anti-ferromagnetic (negative coupling) material in statistical physics. It displays a phase transition phenomenon with respect to the value of the coupling. At weak coupling, only one disordered state occurs, where spins are randomly distributed around a mean-zero value. Conversely, when the coupling is strong, there are two equally probable states that correspond to the onset of a macroscopic magnetization either in the up or down direction: each spin has a larger probability to be oriented in the privileged direction than in the opposite one.
From the point of view of a traffic network, this means that such a model is able to describe three possible traffic regimes: fluid (most of the spins up), congested (most of the spins down) and dense (roughly half of the links congested). For real situations, we expect other types of congestion patterns, and we seek to associate them to the possible states of an inhomogeneous Ising model with possibly negative coupling parameters, referred to as spin glasses in statistical physics [START_REF] Mézard | Spin Glass Theory and Beyond[END_REF]. The practical information of interest which one wishes to extract from (1) is in the form of local marginal distributions p_α(τ_α), once a certain number of variables have been fixed by probe vehicle observations. They give the probability for a given node to be saturated at a given time, and can in turn be the basis of a travel time estimation. From a computational viewpoint, the extraction cost of such information from an Ising model on a multiply connected graph is known to scale exponentially with the size of the graph, so one has to resort to some approximate procedure. As we explain now, such an approximation exists for dilute graphs (graphs with a tree-like local structure). On a simply connected graph, the knowledge of the one-vertex marginals p_α(τ_α) and of the two-vertices marginals p_{αβ}(τ_α, τ_β) is sufficient [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference[END_REF] to describe the measure (1):

p(τ_V) = ∏_{(α,β)∈E} p_{αβ}(τ_α, τ_β) / ∏_{α∈V} p_α(τ_α)^{q_α−1}
       = ∏_{α∈V} p_α(τ_α) ∏_{(α,β)∈E} p_{αβ}(τ_α, τ_β) / (p_α(τ_α) p_β(τ_β)),    (2)

where q_α denotes the number of neighbors of α.
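Identity (2) can be checked numerically on any small simply connected graph. A sketch on a three-node chain a - b - c (the numbers below are ours, purely illustrative):

```python
from itertools import product

# Joint law of a 3-node chain a - b - c of binary variables, built from
# an arbitrary Markov chain.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
joint = {(a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
         for a, b, c in product((0, 1), repeat=3)}


def marg(keep):
    """Marginal over the kept coordinate indices, by summation."""
    out = {}
    for tau, p in joint.items():
        key = tuple(tau[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out


pab, pbc, pb = marg([0, 1]), marg([1, 2]), marg([1])


def reconstructed(a, b, c):
    """Formula (2) on this chain: edges (a,b) and (b,c); only node b
    has q_b - 1 = 1, so its marginal appears once in the denominator."""
    return pab[(a, b)] * pbc[(b, c)] / pb[(b,)]
```

Here q_b = 2 while q_a = q_c = 1, so the product of edge marginals divided by the marginal of b recovers the joint exactly.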
Since our space-time graph G is multiply connected, this relationship between local marginals and the full joint probability measure can only be an approximation, which in the context of statistical physics is referred to as the Bethe approximation. This approximation is provided by the minimum of the so-called Bethe free energy, which, based on the form (2), is an approximate form of the Kullback-Leibler distance

D(b‖p) ≝ Σ_{τ_V} b(τ_V) ln( b(τ_V) / p(τ_V) )

between the reference measure p and an approximate one b. This rewrites in terms of a free energy as D(b‖p) = F(b) − F(p), where

F(b) ≝ U(b) − S(b),    (3)

with the respective definitions of the energy U and of the entropy S:

U(b) ≝ − Σ_{(α,β)∈E} Σ_{τ_α,τ_β} b_{αβ}(τ_α, τ_β) log ψ_{αβ}(τ_α, τ_β) − Σ_{α∈V} Σ_{τ_α} b_α(τ_α) log φ_α(τ_α),

S(b) ≝ − Σ_{(α,β)∈E} Σ_{τ_α,τ_β} b_{αβ}(τ_α, τ_β) log b_{αβ}(τ_α, τ_β) + Σ_{α∈V} Σ_{τ_α} (q_α − 1) b_α(τ_α) log b_α(τ_α).

The set of single-vertex marginals b_α and two-vertices marginals b_{αβ} that minimize (3) form the Bethe approximation of the Ising model. For reasons that will become evident in Section III-B, these will also be called beliefs. It is known that the quality of the approximation may deteriorate in the presence of short loops. In our case, the fact that the nodes are replicated along the time axis alleviates this problem. In practice, what we retain from an inhomogeneous Ising description is the possibility to encode a certain number of traffic patterns in a statistical physics model. This property is also shared by its Bethe approximation (BA), and it is actually easier to encode the traffic patterns in this simplified model rather than in the original one. Indeed, it will be shown in Section III-C that the computation of the BA from the marginal probabilities is immediate. The data collected from the probe vehicles is used in two different ways. The most evident one is that the data of the current day directly influences the prediction.
In parallel, this data is collected over long periods (weeks or months) in order to estimate the model (1). Typical historical data that is accumulated is:
• p̂_α(τ_α): the probability that vertex α is congested (τ_α = 1) or not (τ_α = 0);
• p̂_{αβ}(τ_α, τ_β): the probability that a probe vehicle going from α to β ∈ V(α) finds α in state τ_α and β in state τ_β.

The computation of p̂_α and p̂_{αβ} requires a proper congestion state indicator τ_α, which we assume to be the result of the practitioner's pretreatment of the FCD. The definition of this indicator is a problem of its own and is outside the scope of this article. A relevant FCD variable is instantaneous speed. An empirical threshold may be attached to each link in order to discriminate (in a binary or in a continuous manner) between a fluid and a congested situation. Another approach is to convert the instantaneous speed into a probability distribution of the local car density, when an empirical fundamental diagram is known for a given link. Aggregation of these estimators over a long period then yields the desired historical data. The edges (α, β) of the space-time graph G are constructed based on the presence of a measured mutual information between α and β, which is the case when p̂_{αβ}(τ_α, τ_β) ≠ p̂_α(τ_α) p̂_β(τ_β).

III. THE RECONSTRUCTION AND PREDICTION ALGORITHM

A. Statement of the inference problem

We turn now to our present work concerning an inference problem, which we state in general terms as follows: a set of observables τ_V = {τ_α, α ∈ V}, which are stochastic variables, are attached to the set V of vertices of a graph. For each edge (α, β) ∈ E of the graph, an accumulation of repetitive observations allows one to build the empirical marginal probabilities {p̂_{αβ}}.
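The accumulation of historical marginals is plain counting. A minimal sketch for a single edge, assuming the congestion indicator has already mapped raw FCD to 0/1 states (the names are ours):

```python
from collections import Counter


def empirical_marginals(observations):
    """Build the historical marginals p_hat_a, p_hat_b and p_hat_ab of
    one edge from a list of joint binary observations (tau_a, tau_b),
    by plain counting."""
    n = len(observations)
    p_ab = {pair: c / n for pair, c in Counter(observations).items()}
    p_a = {v: sum(p for (a, _), p in p_ab.items() if a == v) for v in (0, 1)}
    p_b = {v: sum(p for (_, b), p in p_ab.items() if b == v) for v in (0, 1)}
    return p_a, p_b, p_ab
```

In the application, one such table would be maintained per edge of the space-time graph, aggregated over weeks or months of probe-vehicle traversals.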
The question is then: given the values of a subset τ_{V*} = {τ_α, α ∈ V*}, what prediction can be made concerning V̄*, the complementary set of V* in V? There are two main issues:
• how to encode the historical observations (inverse problem) in an Ising model, such that its marginal probabilities on the edges coincide with the p̂_{αβ}?
• how to decode this information in the most efficient manner (typically in real time), in terms of conditional probabilities P(τ_α | τ_{V*})?

The answer to the second question will somehow give a hint to the first one.

B. The belief propagation algorithm

BP is a message passing procedure [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference[END_REF], whose output is a set of estimated marginal probabilities (the beliefs b_{αβ} and b_α) for the measure (1). The name "belief" reflects the artificial intelligence roots of the algorithm. The idea is to factor the marginal probability at a given site into a product of contributions coming from neighboring sites, which are the messages. The message sent by a vertex α to β ∈ V(α) depends on the messages it received previously from the other vertices:

m_{α→β}(τ_β) ← Σ_{τ_α∈{0,1}} n_{α→β}(τ_α) φ_α(τ_α) ψ_{αβ}(τ_α, τ_β),    (4)

where

n_{α→β}(τ_α) ≝ ∏_{γ∈V(α)\{β}} m_{γ→α}(τ_α).    (5)

The messages are iteratively propagated in the network with a parallel, sequential or random policy. If they converge to a fixed point, the beliefs b_α are then reconstructed according to

b_α(τ_α) ∝ φ_α(τ_α) ∏_{β∈V(α)} m_{β→α}(τ_α),    (6)

and, similarly, the belief b_{αβ} of the joint probability of (τ_α, τ_β) is given by

b_{αβ}(τ_α, τ_β) ∝ n_{α→β}(τ_α) n_{β→α}(τ_β) φ_α(τ_α) φ_β(τ_β) ψ_{αβ}(τ_α, τ_β).    (7)

In the formulas above and in the remainder of this paper, the symbol ∝ indicates that one must normalize the beliefs so that they sum to 1. A simple computation shows that equations (6) and (7) are compatible, since (4)-(5) imply that

Σ_{τ_α∈{0,1}} b_{αβ}(τ_α, τ_β) = b_β(τ_β).
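The update (4)-(5) and read-out (6) can be sketched for a pairwise binary model as follows (our own minimal implementation, with normalized messages and a parallel schedule; not the authors' code):

```python
import math


def belief_propagation(nodes, edges, phi, psi, iters=50):
    """Sum-product BP on a pairwise binary model, following the message
    update (4)-(5) and the belief read-out (6).  phi[a][s] and
    psi[(a, b)][(s, t)] are tables over states s, t in {0, 1}."""
    states = (0, 1)
    msgs = {}
    neigh = {a: [] for a in nodes}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
        msgs[(a, b)] = {s: 0.5 for s in states}
        msgs[(b, a)] = {s: 0.5 for s in states}

    def n_msg(a, b, s):
        # eq. (5): product of messages received by a from all neighbors but b
        out = 1.0
        for g in neigh[a]:
            if g != b:
                out *= msgs[(g, a)][s]
        return out

    def pairwise(a, b, s, t):
        # psi is stored once per undirected edge
        return psi[(a, b)][(s, t)] if (a, b) in psi else psi[(b, a)][(t, s)]

    for _ in range(iters):
        new = {}
        for (a, b) in msgs:  # eq. (4), normalized, parallel schedule
            m = {t: sum(n_msg(a, b, s) * phi[a][s] * pairwise(a, b, s, t)
                        for s in states) for t in states}
            z = m[0] + m[1]
            new[(a, b)] = {t: m[t] / z for t in states}
        msgs = new

    beliefs = {}
    for a in nodes:  # eq. (6)
        raw = {s: phi[a][s] * math.prod(msgs[(g, a)][s] for g in neigh[a])
               for s in states}
        z = raw[0] + raw[1]
        beliefs[a] = {s: raw[s] / z for s in states}
    return beliefs
```

On simply connected graphs, the beliefs returned this way coincide with the exact marginals of measure (1).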
In most studies, it is assumed that the messages are normalized so that

Σ_{τ_β∈{0,1}} m_{α→β}(τ_β) = 1

holds. The update rule (4) indeed indicates that there is an important risk to see the messages converge to 0 or diverge to infinity. It is however not immediate to check that the normalized version of the algorithm has the same fixed points as the original one (and therefore yields the Bethe approximation). This point has been analyzed in [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF] and the conclusion is that the fixed points of both versions of the algorithm coincide, except possibly when the graph has a unique cycle. We can safely expect that this is not the case in practical situations. It has been realized a few years ago [START_REF] Yedidia | Generalized belief propagation[END_REF] that the fixed points of the BP algorithm coincide with stable points of the Bethe free energy (3), and that moreover stable fixed points correspond to local minima of (3) [START_REF] Heskes | Stable fixed points of loopy belief propagation are minima of the Bethe free energy[END_REF]. BP is therefore a simple and efficient way to compute the Bethe approximation of our inhomogeneous Ising model. We propose to use the BP algorithm for two purposes: estimation of the model parameters (the functions ψ_{αβ} and φ_α) from historical data, and reconstruction of traffic from current data.

C. Setting the model with belief propagation

The fixed points of the BP algorithm (and therefore the Bethe approximation) allow one to approximate the joint marginal probabilities p_{αβ} when the functions ψ_{αβ} and φ_α are known. Conversely, they can provide good candidates for ψ_{αβ} and φ_α from the historical values p̂_{αβ} and p̂_α. To set up our model, we are looking for a fixed point of the BP algorithm satisfying (4)-(5) and such that b_{αβ}(τ_α, τ_β) = p̂_{αβ}(τ_α, τ_β), and therefore b_α(τ_α) = p̂_α(τ_α).
It is easy to check that the following choice of φ and ψ,

ψ_αβ(τ_α, τ_β) = p̂_αβ(τ_α, τ_β) / (p̂_α(τ_α) p̂_β(τ_β)),   (8)

φ_α(τ_α) = p̂_α(τ_α),   (9)

leads (1) to coincide with (2). These functions correspond to a normalized BP fixed point for which all messages are equal to 1/2. It has been shown in [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF] that this form of φ and ψ is in some sense canonical: any other set of functions yielding the same beliefs is equal to (8)-(9), up to a change of variable. This equivalence holds for the beliefs at the other fixed points of the algorithm and their stability properties. This crucial result means that it is not needed to learn the parameters of the Ising model, but that they can be readily recovered from the Bethe approximation. The message update scheme (4) of the previous section can therefore be recast as

m_α→β(τ_β) ← Σ_{τ_α ∈ {0,1}} n_α→β(τ_α) p̂_αβ(τ_α | τ_β),   (10)

and the beliefs are now expressed as

b_α(τ_α) ∝ p̂_α(τ_α) Π_{γ ∈ V(α)} m_γ→α(τ_α),   (11)

b_αβ(τ_α, τ_β) ∝ p̂_αβ(τ_α, τ_β) n_α→β(τ_α) n_β→α(τ_β).   (12)

There is no guarantee that the trivial constant fixed point is stable. However, the following theorem, proved in [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF], shows that this can be decided from the mere knowledge of the marginals which we want to model.

Theorem 1: The fixed point {p̂} is stable if, and only if, the matrix defined, for any pair of oriented edges (α, β) ∈ E and (α′, β′) ∈ E, by the elements

J_αβ^α′β′ = |p̂_αβ(1|1) − p̂_αβ(1|0)| 𝟙{α′ ∈ V(α)\{β}, β′ = α},

has a spectral radius (largest eigenvalue in norm) smaller than 1. A sufficient condition for this stability is therefore

|p̂_αβ(1|1) − p̂_αβ(1|0)| < 1/(q_α − 1), for all α ∈ V, β ∈ V(α),

where q_α denotes the connectivity of vertex α.
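The stability test of Theorem 1 can be sketched numerically. The sketch below assumes the empirical pairwise tables are stored as 2x2 joint distributions per undirected edge; a pure-Python power iteration estimates the spectral radius of the oriented-edge matrix, avoiding any dependency on an eigensolver. All names and data layouts are illustrative assumptions.

```python
def cond_gap(pab):
    # |p(1|1) - p(1|0)| from a 2x2 joint table pab[ta][tb]
    pb = [pab[0][0] + pab[1][0], pab[0][1] + pab[1][1]]
    return abs(pab[1][1] / pb[1] - pab[1][0] / pb[0])

def jacobian(neighbors, p):
    # oriented-edge matrix of Theorem 1: row (a, b) has entries in the
    # columns (a', a) for every a' in V(a)\{b}
    edges = [(a, b) for a in neighbors for b in neighbors[a]]
    idx = {e: i for i, e in enumerate(edges)}
    J = [[0.0] * len(edges) for _ in edges]
    for (a, b) in edges:
        pab = p[(a, b)] if (a, b) in p else [list(r) for r in zip(*p[(b, a)])]
        c = cond_gap(pab)
        for ap in neighbors[a]:
            if ap != b:
                J[idx[(a, b)]][idx[(ap, a)]] = c
    return J

def spectral_radius(J, iters=200):
    # power iteration; sufficient here since J has nonnegative entries
    v = [1.0] * len(J)
    r = 0.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in J]
        r = max(abs(x) for x in w)
        if r == 0.0:
            return 0.0
        v = [x / r for x in w]
    return r
```

The sufficient condition of the theorem can also be read directly from `cond_gap`: stability is guaranteed whenever the gap stays below 1/(q_α − 1).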
In addition, on a dilute graph, the knowledge of the Jacobian coefficient distribution and of the connectivity distribution of the graph is enough to determine the stability property by a mean-field argument [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF].

D. Traffic reconstruction and prediction

Let V* be the set of vertices that have been visited by probe vehicles. Reconstructing traffic from the data gathered by those vehicles is equivalent to evaluating the conditional probability

p_α(τ_α | τ_V*) = p_{α,V*}(τ_α, τ_V*) / p_{V*}(τ_V*),   (13)

where τ_V* is a shorthand notation for the set {τ_α}_{α ∈ V*}. The BP algorithm applies to this case if a specific rule is defined for vertices α ∈ V*: since the value of τ_α is known, there is no need to sum over possible values and (10) becomes

m_α→β(τ_β) ← n_α→β(τ_α) p̂_αβ(τ_α | τ_β).   (14)

IV. PRACTICAL CONSIDERATIONS AND SIMULATION

The algorithm outlined in Section III can be summarized by the flowchart of Fig. 2. It is supposed to be run in real time, over a graph which corresponds to a time window (typically a few hours) centered around the present time, with probe vehicle data added as they become available. In this perspective, the reconstruction and prediction operations are done simultaneously on an equal footing, the distinction being simply the time-stamp (past for reconstruction or future for prediction) of a given computed belief. The output of the previous run can be used as initial messages for a new run, in order to speed up convergence. Full re-initialization (typically a random set of initial messages) has to be performed within a time interval of the order of, but smaller than, the time scale of typical traffic fluctuations. We have tested the algorithm on the artificial traffic network shown on the program's screenshot of Fig. 3. To this end, we used a simulated traffic system which has the advantage of yielding exact empirical data correlations.
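The reconstruction target of eq. (13) can be made concrete by brute-force enumeration on a toy model; the BP rule (14) is a scalable approximation of exactly this quantity. The data layout and function names below are illustrative assumptions.

```python
from itertools import product

def conditional(edges, phi, psi, target, observed):
    """Brute-force eq. (13): P(tau_target = 1 | observed) for a tiny model.
    edges: list of undirected edges; observed: {vertex: fixed value}."""
    nodes = sorted(phi)
    num = den = 0.0
    for assign in product((0, 1), repeat=len(nodes)):
        tau = dict(zip(nodes, assign))
        if any(tau[v] != x for v, x in observed.items()):
            continue  # keep only configurations matching the probe data
        w = 1.0
        for v in nodes:
            w *= phi[v][tau[v]]
        for (a, b) in edges:
            w *= psi[(a, b)][tau[a]][tau[b]]
        den += w
        if tau[target] == 1:
            num += w
    return num / den
```

With attractive (ferromagnetic) couplings, observing a congested neighbor raises the conditional congestion probability of nearby vertices above the unconditional one, which is exactly the mechanism the decoding exploits.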
For real data, problems may arise because of noise in the historical information used to build the model; this additional difficulty will be treated in a separate work. The simulator implements a queueing network, where each queue represents a link of the traffic network (a single-way lane) and has a finite capacity. To each link, we attach a variable ρ ∈ [0, 1], the car density, which is represented by a color code on the user interface snapshot. As already stated in Section II, the physical traffic network is replicated, to form a space-time graph, in which each vertex α = (ℓ, t) corresponds to a link ℓ at a given time t of the traffic graph. To any space-time vertex α, we associate a binary congestion variable τ_α ∈ {0, 1}. The statistical physics description amounts to relating the probability of saturation P(τ_α = 1) to the density ρ_α. For the sake of simplicity, we consider a linear relation and build our historical p̂ according to some time-averaging procedure.

Fig. 2. The traffic reconstruction algorithm:
  ∀α ∈ V, β ∈ V(α): m_α→β ← 1; it ← 0
  repeat:
    ∀α ∈ V, β ∈ V(α): m_old_α→β ← m_α→β; it ← it + 1
    choose randomly α_0 ∈ V
    if α_0 ∉ V*: compute m_α0→β for all β ∈ V(α_0) using (10)
    else: compute m_α0→β for all β ∈ V(α_0) using (14)
  until it = itmax or ‖m_old − m‖ < ε
  compute the beliefs using (11) and (12)
The parameters, besides those that have already been defined elsewhere, are the total number of iterations itmax, the maximal error ε > 0 and a norm ‖·‖ on the set of messages, whose choice is up to the implementer. Likewise, the update policy described here is "random", but parallel or sequential updates can also be used.
In practice, the car density would not be available from the FCD and a preprocessing of the information would be necessary. In our oversimplified setting, the single-vertex beliefs yield directly an estimation of the car density. Nevertheless, a more realistic data collection and modeling would be completely transparent w.r.t. the algorithm. To estimate the quality of the traffic restoration we use the following estimator:

reconstruction rate ≝ (1/|V|) Σ_{α ∈ V} 𝟙{|b_α − ρ_α| < 0.2},

which computes the fraction of space-time nodes α for which the belief b_α does not differ by more than an arbitrary threshold of 0.2 from ρ_α. A typical prediction time series is shown in Fig. 4. The overall traffic level, characterized by some real number between 0 and 1, oscillates between dense and fluid conditions with a certain amount of noise superimposed. In this setting, we observe first that BP has three fixed points, among which the reference b = p̂ (see Section III-C), which is in fact unstable because it is a superposition of distinct measures. The two additional fixed points actually represent the dense and fluid traffic conditions. These additional states appear spontaneously and some fine tuning is required to control saturation effects [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF]. This is reflected in the sudden drops of the reconstruction rate when the algorithm jumps from one state to the other during transient phases. A selection criterion based on free energy measurements may be used to choose the most relevant fixed point [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF]. Concerning the dependence of the reconstruction rate on the number of probe vehicles, Fig. 5 indicates that the knowledge of less than 10% of the links (10 vehicles for 122 links) is sufficient in this setting to identify the traffic regime correctly most of the time.
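The estimator above can be sketched directly; the dictionary layout for beliefs and true densities is an illustrative assumption.

```python
def reconstruction_rate(beliefs, densities, threshold=0.2):
    # fraction of space-time nodes whose estimated density (belief) falls
    # within the threshold of the true density, as in the estimator above
    hits = sum(1 for a in densities if abs(beliefs[a] - densities[a]) < threshold)
    return hits / len(densities)
```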
However, when the number of probes is increased, the reconstruction rate given by our algorithm saturates around 80%. In addition to the fact that time correlations are not incorporated in our model, the main reason for this saturation is that in our practical implementation [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF] correlations between probe vehicles are neglected when imposing (13).

V. CONCLUSION AND PERSPECTIVES

We have presented a novel methodology for reconstruction and prediction of traffic using the belief propagation algorithm on floating car data. We have shown how the underlying Ising model can be determined in a straightforward manner and that it is unique up to some change of variables. The algorithm has been implemented and illustrated using an artificial traffic model. While our main focus is currently on testing on more realistic simulations, several generalizations are considered for future work, using the extension of the results of Section III to a more general factor graph setting done in [START_REF] Furtlehner | Belief propagation inference with a prescribed fixed point[END_REF]. Firstly, the binary description corresponding to the underlying Ising model is arbitrary. Traffic patterns could be represented in terms of s different inference states. A Potts model with s-state variables would leave the belief propagation algorithm and its stability properties structurally unchanged. However, since the number of correlations to evaluate for each link is s² − 1, this number of states should be subject to an optimization procedure. Secondly, our way of encoding traffic network information might need to be augmented to cope with real-world situations. This would simply amount to a modification of the factor graph used to propagate this information.
In particular it is likely that a great deal of information is contained in the correlations of local congestion with aggregate traffic indexes, corresponding to sub-regions of the traffic network. Taking these correlations into account would result in the introduction of specific variables and function nodes associated to these aggregate traffic indexes. These aggregate variables would naturally lead to a hierarchical representation of the factor graph, which is necessary for inferring the traffic on a large-scale network. Additionally, the time-dependent correlations which are needed for the description of traffic, which by essence is an out-of-equilibrium phenomenon, could be conveniently encoded in these traffic index variables. Ultimately, for the elaboration of a powerful prediction system, the structure of the information content of a traffic-road network has to be elucidated through a specific statistical analysis. The use of probe vehicles, based on modern communication devices, combined with a belief propagation approach, is in this respect very promising.

Fig. 1. Traffic network (a) and Ising model (b) on a random graph. Up (resp. down) arrows correspond to fluid (resp. congested) traffic.

Fig. 3. Traffic network as produced by the simulator. The continuous color code represents the traffic index from 0 (green/light) to 1 (red/dark). There are 35 physical nodes and 122 physical links (road segments), simulated on 40 time steps, which yields a time-space graph G with 4880 nodes.
Eliane Giraud (email: [email protected]), Michel Suéry, Michel Coret

High temperature compression behavior of the solid phase resulting from drained compression of a semi-solid 6061 alloy

1. Introduction

The rheological behavior of alloys in the solidification range is now studied increasingly extensively [1][2][3][4][5] owing to the importance of processes during which solid and liquid phases coexist. This coexistence obviously occurs during conventional solidification of castings, ingots or billets, but also during liquid phase sintering, welding and forming processes such as rheocasting or thixocasting [6][7][8][9][10]. The knowledge of this behavior is indeed important for modeling purposes, to avoid numerous trial-and-error experiments. For example, during semi-continuous casting of Al billets, defects like hot tears or macrosegregations [11][12][13] can form depending on alloy composition and process parameters. Their prediction requires modeling the whole process by considering the behavior of the semi-solid alloy, taking into account both the deformation of the solid and the flow of the liquid. Criteria for hot tear formation are then introduced based on critical solid deformation or cavitation pressure in the liquid [14,15]. To determine the rheological behavior of the solid, it is generally assumed that its composition is not far from that of the alloy. Experiments are then carried out at temperatures close to but below the solidus temperature of the alloy to determine the various parameters of the constitutive equation. In this temperature range, viscoplastic behavior is generally a good approximation, so that the constitutive equation is mainly determined by the strain rate sensitivity parameter and by the activation energy [16][17][18].
This procedure therefore does not take into account the fact that the composition of the solid phase is not that of the alloy, nor that it changes with temperature and thus with the solid volume fraction present in the alloy. Indeed, during solidification or partial melting of a binary alloy, solid and liquid coexist with compositions given by the phase diagram and proportions given by the lever rule in equilibrium conditions. The composition and the proportion of these two phases thus change continuously with temperature. To determine the constitutive equation of the solid phase, it is therefore necessary to test alloys with various compositions corresponding to those of the solid phase at various temperatures below the solidus. Extrapolation of the results at the solidus temperature for each composition allows determining the behavior of the solid phase in a semi-solid alloy at various temperatures. This procedure can quite easily be considered in the case of a binary alloy, but it is hardly possible for a multi-constituent alloy. In this case indeed, the composition of the solid phase is not simply given by the phase diagram, and the solid may even consist of several phases. Specific software tools are thus required, able to give the composition and the proportion of the various phases as a function of temperature for various solidification or partial melting conditions. The next step of the procedure would be to prepare alloys with the composition of the solid phase at various temperatures and to test them. Another, simpler procedure can be considered, i.e. drainage of the liquid from the semi-solid alloy and testing of the remaining solid. If all the liquid present at a given temperature can be drained out of a sample, the remaining solid has the exact composition of the solid phase at this temperature.
Repeating this procedure at various temperatures then leads to various specimens having the composition of the solid phase at these temperatures. One way to carry out this drainage of the liquid is to use drained oedometric compression, already performed on aluminum alloys [2,19]. It consists in compressing a semi-solid sample placed in a container with a filter on top of it, the compression being applied by a hollow piston so as to drain the liquid through the filter. This procedure also allows one to measure the compressibility of the solid skeleton, provided that the pressure required to drain the liquid is low compared to that required to compress the solid. In the present work, the feasibility of the procedure exposed above has been studied in the case of a 6061 alloy. Drained compressive tests have been carried out at various temperatures within the solidification range. These tests allow measuring the compressibility of the solid at various temperatures and obtaining specimens representative of the solid phase found in the semi-solid state. Then compressive tests at high temperature (below the solidus temperature) have been carried out on these drained specimens in order to determine the constitutive equation of the solid. Finally, a comparison with the behavior of the non-drained 6061 alloy within the same temperature range has been performed.

2. Experimental procedure

The alloy used for this investigation is a 6061 alloy provided by Almet (France) in the form of a rolled plate of 50 mm thickness in the heat-treated T6 condition. In order to drain the liquid from a semi-solid sample, a drained compression apparatus has been designed. It consists in a container of 35 mm diameter in which the alloy is placed while still solid (Fig. 1). During the test, the alloy is initially melted, and then partially solidified at a cooling rate of 20 K/min until a given solid fraction is reached.
At this stage, the temperature is kept constant, and a downward vertical displacement is imposed to the hollow piston. Two displacement rates of the piston have been studied: 0.008 and 0.015 mm/s. A filtration system allows the liquid to flow out of the sample, and, consequently, the solid fraction increases. This system is constituted of two different filters, as shown in Fig. 1: a rigid stainless steel filter with quite large holes (about 2 mm diameter) associated with a much thinner stainless steel filter with very small holes (about 200 µm diameter). The first filter transmits the load from the hollow piston, whereas the second is required for the filtration of the liquid. This test can be seen as a means to impose solidification mechanically at constant temperature. Assuming that the solid fraction evolves only by liquid drainage, the following equation can be used to relate the imposed axial strain ε_z to the solid fraction g_s:

g_s = g_s0 · exp(−ε_z),   (1)

where g_s0 is the initial solid fraction before any strain is imposed. Two values of this initial solid fraction (0.6 and 0.8) have been selected by using two temperatures (910 K and 897 K, respectively) at which oedometric compression has been carried out. These two temperatures have been determined by using the ProPhase software (from ALCAN CRV, France). The compression of the sample is pursued until g_s is theoretically equal to 1, which means that all the liquid should have been drained from the semi-solid sample. The test was thus stopped when ε_z satisfied this condition. After drained compression, the solid remaining in the container has been cut in two pieces through the axis of the cylinder to observe the microstructure of the alloy after drainage of the liquid. These two pieces have thereafter been machined in order to get compression specimens of 8 mm in diameter and 6 mm in height.
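Eq. (1) can be sketched numerically; with its sign convention the axial strain ε_z is negative in compression, so the solid fraction grows as the piston moves down. The function names are illustrative.

```python
import math

def solid_fraction(gs0, eps_z):
    # eq. (1): gs = gs0 * exp(-eps_z)
    return gs0 * math.exp(-eps_z)

def strain_for_solid_fraction(gs0, gs):
    # axial strain at which eq. (1) reaches a target solid fraction
    return -math.log(gs / gs0)
```

For instance, starting from g_s0 = 0.8, full mechanical solidification (g_s = 1) corresponds to a compressive strain of magnitude ln(1/0.8) ≈ 0.22, which is the strain at which the tests described above were stopped.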
These specimens have been used for compression tests carried out at high temperature by using the strain rate jump procedure. Four temperatures have been investigated: 723, 773, 803 and 823 K, as well as five strain rates: 10⁻⁴, 2.5 × 10⁻⁴, 10⁻³, 2.5 × 10⁻³ and 6 × 10⁻³ s⁻¹. Similar compression tests have also been carried out, for comparison, on samples machined in the non-drained 6061 rolled plate.

3. Experimental results

3.1. Drained compression tests

3.1.1. Influence of initial solid fraction

Fig. 2 shows the variation of the applied stress as a function of the theoretical solid fraction (given by Eq. (1)) for the two tests carried out with different initial solid fractions at a displacement rate of the piston of 0.015 mm/s. The applied stress obviously increases with increasing strain, i.e. with increasing solid fraction. It increases slowly at first when the initial solid fraction is small (0.60) and much more rapidly when the initial solid fraction is larger (0.80). It should be noted that the two curves come close to each other at large solid fractions, but the curve corresponding to the larger initial solid fraction is always below that for the lower initial solid fraction. In addition, it is surprising to observe that the stress does not increase very sharply when the solid fraction is close to 1, although the solid is not compressible.

3.1.2. Influence of displacement rate

Fig. 3 shows the variation of the applied stress as a function of the solid fraction for the two tests carried out with different displacement rates of the piston and for an initial solid fraction of 0.8. The displacement rate has an influence on the stress required to drain the liquid from the specimen: a high strain rate leads to a higher measured stress at a given solid fraction. This strain rate sensitivity is correlated to the viscoplastic behavior of the solid network in this solid fraction range, as already observed in tensile [20] and shear [21] experiments.

3.2. Compressive tests at high temperature (below the solidus temperature)

3.2.1. Compressive tests at constant strain rate on drained samples

Since most of the solidification defects form in the last stage of solidification (i.e. g_s > 0.8), only samples machined in the remaining solid obtained after a drained experiment at a high initial solid fraction (i.e. g_s0 = 0.8) have been tested in order to determine the high temperature behavior of the solid phase. Fig. 4 shows the stress-strain curves obtained at the various temperatures and for the different strain rates applied successively from 10⁻⁴ s⁻¹ to 6 × 10⁻³ s⁻¹. For a given strain rate, the stress increases up to a plateau where it remains relatively constant, thus showing negligible strain hardening during deformation of the solid material. Moreover, the stress obviously increases with increasing strain rate and decreasing temperature, which highlights the viscoplastic behavior of the solid material.

3.2.2. Compressive tests at constant strain rate on non-drained 6061 samples

The same type of compression experiments at various temperatures in the solid state as for drained samples has been performed on non-drained 6061 samples. Fig. 5 shows the stress-strain curves obtained at the various temperatures and for the different strain rates applied successively from 2.5 × 10⁻⁴ s⁻¹ to 6 × 10⁻³ s⁻¹. Negligible strain hardening and viscoplastic behavior of the solid material are again found.

4. Discussion

The behavior of the semi-solid alloy during the drained experiments will be discussed first. Then the behavior observed during compression at high temperature of the solid phase will be examined in order to determine the appropriate rheological law. Finally, a comparison with the behavior at high temperature of the non-drained 6061 alloy will be performed.
4.1. Behavior of the semi-solid alloy during drained compressive tests

The stress required for the drained oedometric compression of the partially solidified specimen increases with increasing strain or solid fraction (Figs. 2 and 3). This stress is due to both the densification of the solid skeleton and the liquid flow out of the sample, which becomes more and more difficult as the remaining liquid volume fraction decreases. Indeed, a decrease of the liquid volume fraction increases the number of contacts between dendrites and reduces the interdendritic spaces, thus decreasing the permeability of the solid skeleton. Adopting Darcy's law to describe liquid flow and assuming homogeneous deformation in the specimen, its contribution to the stress measured during the drained compression test can be evaluated. The maximal interstitial pressure P_L, which corresponds to the pressure at the bottom of the container, is then given by [22]:

P_L = (V_p · η · h²) / (2 · K · H_0) + P_0,   (2)

where V_p is the velocity of the piston, η is the liquid viscosity, K is the solid skeleton permeability, H_0 and h are the initial and current positions of the piston from the bottom of the container and P_0 is the pressure due to the drained liquid located above the filters. The variation of the position of the piston during the test is given by h = H_0 · g_S0 / (1 − g_L), where g_S0 is the initial solid fraction and g_L is the current liquid fraction. To determine the solid skeleton permeability, the Kozeny-Carman equation can be used [23,24]: K = g_L³ · λ₁² / 5, where λ₁ is the primary dendrite arm spacing. λ₁ is assumed equal to that given by Flemings for pure aluminum in [START_REF] Flemings | Solidification Processing[END_REF] since, to measure this parameter during our experiments, it would have been necessary to cool the specimen at a very high rate when a solid fraction of 0.8 is reached, which is not possible.
P_0 is given by P_0 = (H_0 − h) · ρ_L · g, with ρ_L the liquid density and g the gravity acceleration. The variation of interstitial pressure with liquid fraction, calculated using Eq. (2), is shown in Fig. 6: it increases slowly up to a solid fraction of 0.97 and then very sharply when the solid fraction exceeds 0.97, which corresponds to the coalescence solid fraction [20,21]. This sharp increase is due to the fact that, for solid fractions larger than that for coalescence (>0.97), the number of solid bridges increases drastically, thus leading to a drop of the solid skeleton permeability and then to more and more difficult liquid flow. However, the interstitial pressure remains much lower (0.07 MPa) than the experimentally measured stress (12 MPa, as shown in Figs. 2 and 3), so that it can be neglected during the deformation of the semi-solid alloy. The mushy material therefore behaves during drained compression like a solid without liquid. Metallographic observation of the part of the specimen which has been drained through the filter (Fig. 7) and of the part which remained in the container (Fig. 8) shows that liquid has been effectively drained from the specimen. The solidified drained liquid (Fig. 7) contains much more eutectic and intermetallics than the non-drained 6061 alloy [20]. Since this liquid was drained from the specimen when the solid fraction was equal to 0.8, it is possible to determine its composition by using the ProPhase software for the solidification conditions of the experiment (20 K/min) (Table 1). The liquid is theoretically enriched in Mg, Si, Fe and Cu, which are the main secondary elements of the 6061 alloy. An analysis by X-ray diffraction of the drained liquid, using the Cu Kα radiation, allows detecting various phases which contain these elements, such as Mg2Si, CuAl2, Al15(FeMn)3Si2 and Al4Cu2Mg8Si7, in addition to the Al-rich matrix (Fig. 9). This result confirms that liquid has been drained from the specimen.
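As a rough numerical illustration of Eq. (2) combined with the Kozeny-Carman permeability, the sketch below evaluates the interstitial pressure as a function of the liquid fraction. All numerical values (piston velocity, viscosity, dendrite arm spacing, container geometry, liquid density) are illustrative placeholders, not the paper's data, and the permeability expression follows the form reconstructed above.

```python
def interstitial_pressure(g_l, v_p=1.5e-5, eta=1.3e-3, lam1=1e-4,
                          h0=0.05, gs0=0.8, rho_l=2.4e3, g=9.81):
    # Eq. (2) in SI units; placeholder parameter values
    h = h0 * gs0 / (1.0 - g_l)       # current piston position (m)
    k = g_l ** 3 * lam1 ** 2 / 5.0   # Kozeny-Carman permeability (m^2)
    p0 = (h0 - h) * rho_l * g        # head of already-drained liquid (Pa)
    return v_p * eta * h ** 2 / (2.0 * k * h0) + p0
```

Even with placeholder values, the model reproduces the qualitative trend of Fig. 6: since K scales as g_L³, the pressure rises steeply as the last liquid is drained.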
The solid part which remains in the container after drainage contains some intermetallics homogeneously distributed throughout the specimen (Fig. 8). Their concentration is not high enough to allow detection by X-ray diffraction: as shown in Fig. 9, only the Al matrix is detected. Since these intermetallics are observed after cooling, one must determine whether they were already present in the material before liquid drainage or whether they formed upon cooling after liquid drainage. ProPhase calculation indicates that the first intermetallics form in the 6061 alloy when the volume fraction of the primary phase is 0.84. Therefore, the intermetallics present in the solid part are more likely to result from the solidification of the liquid which was not drained from the specimen. Thus, the drainage of the liquid has not been complete, which seems to contradict the stress-solid fraction curves (Figs. 2 and 3): these curves indicate that the drained compression tests were stopped when the solid fraction was theoretically very close to 1, so that almost no liquid should have remained in the specimen. The fact that some liquid remains can be explained by two factors: the filter has been slightly deformed at the end of the compression test, so that the real axial strain applied to the specimen was smaller than expected, and/or some solid was extruded through the filter. These assumptions allow us to explain the behavior observed when the apparent solid fraction reaches 1. Indeed, since the solid is not compressible, the stress should have increased asymptotically to infinity, which is not observed experimentally. In addition to the viscoplastic behavior of the solid phase shown in Fig. 3, the influence of the initial solid fraction and of the accumulated strain on the mechanical behavior of the semi-solid material has been investigated. Fig. 2 also shows that stress increases more or less rapidly with increasing strain depending on the initial solid fraction.
This is due to the initial morphology of the solid phase: when the initial solid fraction is high, the solid grains are more connected, so that drained compression involves extensive deformation of the solid with less rearrangement, thus leading to an important increase of stress with increasing strain. Fig. 2 shows also that, at a given solid fraction, the measured stress is higher when the initial solid fraction is low. This result can be explained by the level of accumulated strain required to reach this solid fraction, which is larger when the initial solid fraction is low. A larger accumulated strain obviously leads to a more deformed microstructure and consequently to a larger number of solid-solid contacts. However, when the strain is such that the solid fraction approaches 1, the microstructure is largely deformed with almost no liquid remaining, but, as strain hardening is not observed in this temperature range, the stress no longer depends on the initial solid fraction.

4.2. Behavior of the drained solid at high temperature

Fig. 4 has shown that the solid phase exhibits a viscoplastic behavior with negligible strain hardening. The most common approximation used to describe the mechanical behavior of such a material consists in using a classical creep law as follows [16,18]:

ε̇ = A · σⁿ · exp(−Q/RT),   (3)

where ε̇ is the strain rate, σ is the measured stress, n is the stress sensitivity parameter, Q is the activation energy, A is a material constant, T is the temperature and R is the ideal gas constant. The behavior of the solid material is thus described by two main parameters: n and Q. The slope of the curves showing the variation of stress as a function of strain rate in logarithmic scales (Fig. 10) allows determining the strain rate sensitivity parameter m, which is close to 0.08 whatever the temperature. The stress sensitivity parameter n is then equal to 12 (n = 1/m). From the curves showing the variation of ln(ε̇/σⁿ) as a function of −1/RT (Fig. 11), it is possible to deduce the activation energy Q, which is equal to 347 kJ/mol. The values obtained for n and Q are very high. Indeed, as a general rule, the stress sensitivity parameter for materials deformed at high temperature is between 3 and 5. An exponent of 3 is generally attributed to dislocation glide controlled by viscous drag, whereas a value of 5 corresponds to climb-controlled dislocation glide [START_REF] Kloc | [END_REF]. A value of 8 can sometimes be observed when the substructure does not evolve with stress. The activation energy is usually around 130-140 kJ/mol, which is the activation energy for self-diffusion in aluminum [27,28].

Table 2. Parameters of the rheological law (4) for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.

Thus, the classical creep law (3) is not adapted to describe the behavior of the solid phase of the 6061 alloy, since the values of the various parameters have no physical meaning. Another possibility consists in using a hyperbolic sine law. Indeed, some alloys can exhibit a stronger stress increase than that predicted by an exponential law when exceeding a given stress or strain rate level [18,[29][30][31][32]. This leads to the presence of two regimes: a power-law region and a power-law breakdown region, presumably governed by different physical processes not yet completely understood. For intermediate strain rates and stresses, the alloy exhibits a behavior governed by a power law, whereas, for high strain rates and stresses, a hyperbolic sine law intervenes. Therefore, due to the stress level reached during the experiments, it can be assumed that the experimental conditions lead to the typical behavior of the power-law breakdown regime.
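The identification procedure described above (slope of the log-log plot for m, Arrhenius slope for Q) and the hyperbolic sine form can be sketched on synthetic data. The constants A and alpha below, and the synthetic stress generator, are placeholders, not the paper's identified values; only n = 12, Q = 347 kJ/mol (power law) and n = 5, Q = 142 kJ/mol (sinh law) echo the text.

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)

def fit_slope(xs, ys):
    # ordinary least-squares slope, used both for the log(sigma)-log(rate)
    # plot (slope m) and for the Arrhenius plot (slope Q)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def power_law_stress(rate, T, A=1e-8, n=12, Q=347e3):
    # invert the creep law (3): sigma = (rate * exp(Q/RT) / A)**(1/n)
    return (rate * math.exp(Q / (R * T)) / A) ** (1.0 / n)

def sinh_law_rate(sigma, T, A=1e10, alpha=0.04, n=5, Q=142e3):
    # hyperbolic sine law with G(T) = 3.022e4 - 16 T (MPa)
    G = 3.022e4 - 16.0 * T
    b, k = 2.86e-10, 1.38e-23   # Burgers vector (m), Boltzmann constant (J/K)
    return (A * G * b / (k * T) * math.exp(-Q / (R * T))
            * math.sinh(alpha * sigma) ** n)
```

At low stresses sinh(ασ) ≈ ασ, so the hyperbolic sine law reduces to a power law of exponent n, which is the regime exploited in the first step of the identification.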
In this case, the following equation can be used to relate the strain rate ε̇ to the experimental stress σ [18,29,30]:

ε̇ = (A·G·b)/(k·T) · exp(−Q/RT) · [sinh(α·σ)]^n    (4)

where b is the Burgers vector (2.86 × 10⁻¹⁰ m), k is the Boltzmann constant (1.38 × 10⁻²³ J/K), A and α are material constants and G is the shear modulus. It is assumed that the shear modulus depends on temperature according to the equation suggested in [33]: G = 3.022 × 10⁴ − 16 × T (in MPa). The standard procedure to determine the parameter values can be divided into two main steps: (i) determination of the activation energy Q and the stress sensitivity parameter n by applying Eq. (3) to experimental data within the power-law regime, and (ii) determination of the material constants A and α by applying Eq. (4) to experimental data within the power-law breakdown regime and by using the values of Q and n calculated in (i). This procedure assumes that sufficient data are available for each regime, which is not the case in this study. Therefore, the values of Q and n have been fixed to classical values for hot deformation of aluminum alloys [34,35], i.e. 142 kJ/mol and 5, respectively. The material constants A and α have then been determined so that the variation of ε̇·k·T·exp(Q/RT)/(G·b) as a function of sinh(α·σ) in a logarithmic scale exhibits a slope equal to 5, as shown in Fig. 12. The values of the different parameters of Eq. (4) are summarized in Table 2. It can be noticed that the material constant α is in agreement with classical values of the literature for aluminum alloys: from 0.01 to 0.08 [29-31,36], which tends to show that a hyperbolic sine law is relatively appropriate to describe the behavior of the solid phase of the 6061 alloy.

The same study as for the drained alloy has been performed on the 6061 alloy in order to determine the most appropriate rheological law describing its behavior at high temperature. Since Eq.
(3) still leads to high values for the activation energy Q (about 287 kJ/mol) and the stress sensitivity parameter (about 11), an identical procedure has been followed to estimate the values of the parameters for Eq. (4). These values are summarized in Table 3. Although the behavior of the non-drained 6061 alloy is also governed by a hyperbolic sine law, it can be noticed, by comparing Tables 2 and 3, that the values of the constitutive parameters are different depending on the type of tested material. Therefore, the deformation at high temperature of the non-drained 6061 alloy differs from the deformation of the drained alloy, which confirms the necessity to study the behavior of the material with the appropriate composition to determine the response of the solid phase in the semi-solid state.

Conclusion

In this study, an original technique to determine the behavior at high temperature of the solid phase of a multi-constituent alloy was presented. It consists in: (i) drainage of the liquid present at a given temperature in order to obtain a solid with the exact composition of the solid phase at this temperature, and (ii) deformation of the solid at high temperature to determine the rheological law. The densification behavior of the 6061 alloy in the mushy state has been investigated thanks to drained compression tests. These tests consist in a mechanical solidification of the semi-solid alloy at a constant temperature resulting from liquid drainage through filters, which leads to a nearly complete densification of the solid phase. The results have shown that the compression behavior of the semi-solid alloy depends on the strain rate, on the initial morphology of the solid skeleton and on the accumulated strain. Indeed, at a given solid fraction achieved during compression, the pressure required to densify the solid increases with decreasing initial solid fraction and thus increasing accumulated strain.
Since strain hardening is not present in this temperature range, this result is correlated to the liquid distribution in the specimen, which depends on the accumulated strain. Specimens resulting from drained compression have been further tested in simple compression at high temperature to determine the constitutive equation of the solid phase present in a semi-solid 6061 alloy at a given temperature and to compare it with that of the 6061 alloy. Compression tests with strain rate jumps at various temperatures have shown that the behavior of the two materials is governed by a hyperbolic sine law. The values of the parameters of the constitutive equation are however different. This thus shows that the behavior of the solid phase in the 6061 alloy differs from that of the alloy. It is therefore necessary to study the behavior of the material with the appropriate composition if the behavior of the solid phase must be known within the solidification range.

Fig. 1. Schematic view of the oedometric compression apparatus (a) with a picture of the two filters (b).
Fig. 2. Variation of the applied stress during drained compression as a function of the solid fraction present in the specimen. The two curves correspond to two different initial solid fractions.
Fig. 3. Variation of the applied stress during drained compression as a function of the solid fraction present in the specimen. The two curves correspond to two different displacement rates of the piston.
Fig. 4. Stress-strain curves at various temperatures and various strain rates applied by compression on specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.
Fig. 5. Stress-strain curves at various temperatures and various strain rates applied by compression on non-drained 6061 specimens.
Fig. 6. Pressure necessary to drain the liquid through a porous solid medium, calculated using Eq. (2).
Fig. 7.
Microstructure of the liquid part of the 6061 specimen which has been drained through the filters for an initial solid fraction equal to 0.8: overall view (a) and magnified view (b).
Fig. 8. Microstructure of the solid part of the 6061 specimen remaining in the container after a drained compression at an initial solid fraction of 0.8: in the vicinity of the filters (a) and in the bottom of the specimen (b).
Fig. 9. X-ray diffraction patterns on the liquid part of the 6061 specimen which has been drained through the filters (a) and on the solid part which remained in the container (b), for an initial solid fraction equal to 0.8.
Fig. 10. Stress-strain curves in a logarithmic scale at various temperatures for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.
Fig. 11. Variation of ln(ε̇/σ^n) as a function of −1/RT for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.
Fig. 12. Determination of the material constants A and α from Eq. (4) for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.

4.3. Comparison with the behavior at high temperature of the non-drained 6061 alloy

Table 1. Composition (in wt%) of the liquid in a 6061 alloy when the solid fraction is equal to 0.8, obtained by ProPhase calculation.
    Si: 2.5 | Mg: 2.9 | Cu: 1.1 | Fe: 1.2 | Mn: 0.2 | Cr: 0.06

Table 3. Parameters for the rheological law (4) for non-drained 6061 specimens.
    n = 5 | Q = 142 kJ/mol | α = 0.055 MPa⁻¹ | A = 6 × 10⁻¹⁵ m² s⁻¹

Acknowledgements

One of the authors (EG) is grateful to CNRS (French National Center for Scientific Research) and AREVA for financial support through a scholarship. The authors thank Cédric Gasquères, ALCAN CRV (France), for providing the ProPhase calculations and Stéphane Coindeau, CMTC (France), for the analysis by X-ray diffraction.
halid: 01756490
lang: en
domain: shs.litt
timestamp: 2024/03/05 22:32:10
year: 1999
url: https://hal.science/cel-01756490/file/HJAMESPortrait.pdf
Pr P. Carmignani

THE PORTRAIT OF A LADY

"It is a complex fate, being an American" (H. James)
"Life is learning to know oneself" (H. James)
"As far as we are concerned, we have, in order to know man, only reading: the marvellous reading that judges a man by what he writes. What we love above all in man is what can be written of him. Is what cannot be written worth living?" (G. Bachelard)

With 22 novels (2 unfinished), 112 tales and 7 plays, to say nothing of his critical and descriptive work (the New York edition of his novels and tales published between 1907 and 1909 comprises 24 volumes), Henry James, an American novelist of Irish antecedents who became a naturalized British citizen a year before his death, is a formidable literary monument; as W. Morris put it, "the central defect in the mind and art of James is a defect of riches: he is simply too much for us". Needless to say, one can hardly hope to get a comprehensive view of, or achieve in-depth acquaintance with, such a body of fiction, unless one is willing to devote (not to say sacrifice) one's whole life to it, and I daresay there are countless other objects worthier of study in a (wo)man's lifetime. In other words, the following notes are but a tentative approach to the author and his work, a series of guidelines based on a long but desultory familiarity with James's fiction and criticism, a miscellany of observations, interpretations and hypotheses (some personal, others borrowed from various sources) to be taken not as Gospel truth but as a point of departure for your own exploration and appreciation of The Portrait of a Lady (by the way, all subsequent page references being to the Penguin Modern Classics Edition of the novel instead of the recommended Norton Critical Edition, I must apologize to you, dear reader, for a most inconvenient discrepancy in page numbers; I'll leave it to you to figure out some sort of conversion table).
James's fiction is a source of irresistible fascination for some readers and of almost unbearable irritation for others, as witness H. G. Wells. In the words of T. S. Eliot, James is "an author who is difficult for English readers, because he is an American; and who is difficult for Americans, because he is a European; and I do not know whether he is possible to other readers at all" (quoted by L. Edel, emphasis mine). Well, forewarned is forearmed, as the saying goes: we'll have to take up the challenge and prove that James is not only "possible to other readers" but also, hopefully, quite profitable and palatable. Whatever his merits or demerits, attractiveness or repulsiveness (there's no disputing about tastes), H. James is beyond dispute a master of the craft of the novel, and his importance lies in the fact that he was "the first great novelist, and perhaps still the only one, to have fused the European's sense of the objective limits of life, which habitually expresses itself in the novel of manners, and the American's sense of its limitless conceivable possibilities, which habitually expresses itself in the so-called metaphysical romance..." (S. Gorley Putt). As for The Portrait of a Lady (1881), it has always been recognized as the best novel of James's middle period. It was a book of great importance in the history of the novel; in it, James proved to be "years in advance of his time in his psychological interest in his characters. In The Portrait he wrote on two planes; he told a story full of significant action almost entirely in terms of the inner life of his protagonists" (Swan). Both quotations highlight one of the dominant features of the man and his work, what one is tempted to call, for want of a better term and at the risk of being guilty of crude oversimplification, their in-betweenness, if not double-sidedness, i.e.
everything that bears upon James, whether it be the period, background, education, themes, language, etc., partakes of duality, of the dialectics of sameness and otherness, as witness:

- Family background. Among the facts to bear in mind → the overwhelming influence of the father, Henry James Senior, whom the novelist was named after, and rivalry with the elder brother, William James: "H. James, born in 1843, proved a precocious 'genius', since his father described him in a letter of 1857 as 'a great devourer of libraries and a gigantic scribbler of novels and plays'. The child was the son of Henry James Senior, whose first name he bore, a dilettante 'intellectual' steeped in the theories of Swedenborg, Fourier, Sandeman and Emerson, and himself the son of a millionaire merchant of Albany, the 'ancestor' William. Birth thus enrolled little Henry in a prestigious quartet: through the relentless play of repetition and naming (of what he called 'the label'), the child found himself placed in the position of a direct rival of William, his elder by one year, the future pragmatist philosopher and very image of social success. As late as 1880, the novelist would still mention the crushing weight of this family 'twinship' which had forced him to flee the paternal home..." (J. Perrot in L'Arc, emphasis mine). More about the head of the family: Henry James Senior was born into a Calvinist family against which he soon rebelled, and he remained a critic of all institutions, including "the New England conscience, with its fussy self-consciousness and self-culture". Wishing to preserve his sons' minds from any contamination through formal schooling, and to leave them open to experience, he sent them to schools in America, France, Germany and Switzerland, and H. James, the novelist-to-be, left for Europe in 1869 after a short period at Harvard Law School.
As stated above, Henry James Senior soon came under the influence of Swedenborg, the mystic, and Fourier, the social reformer, to both of whom he remained an ardent, if somewhat eccentric, disciple (by the way, Charles Fourier coined the word "feminism" and stated that "marriage is the tomb of woman, the principle of all human servitude", a radical opinion The Portrait of a Lady somewhat substantiates). He even published a major work, Society: The Redeemed Form of Man, in which he attempted to show how Fourier's "Divine Society" and Swedenborg's "Grand Man" can be brought into existence. H. James Senior believed (the point is highly relevant to one of the main themes of The Portrait of a Lady) that "growing up required the individuating crisis which in Genesis is dramatized as the Fall of man: the fatal necessary quickening within the unconscious chunk of innocence of the awareness of self. This egotism and selfhood is essentially sinful and can only be overcome by a second crisis leading to the individual's re-birth as a social being". We'll find an echo of this doctrine in the Jamesian myth of the Fortunate Fall (see below the reference to Hawthorne), and in the "spiritual adventures" undertaken by several of the characters in his novels.

- Nationality → cf. the following quotation from the author himself: "it would be impossible for an outsider to say whether I am at a given moment an American writing about England or an Englishman writing about America." (H. James)

- Ambivalent attitude to US culture → James experienced a love-hate feeling for 19th-century American society. He was very critical of the shortcomings of his native country ("the soil of American perception is a poor little barren artificial deposit," he said), yet he was also perfectly aware of the advantages or promises it held out: "We are Americans born-il faut en prendre son parti. I look upon it as a great blessing; I think that to be an American is an excellent preparation for culture.
We have exquisite qualities as a race, and it seems to me that we are ahead of the European races in the fact that more than either of them we can deal freely with forms of civilisation not our own, can pick and choose and assimilate and in short (aesthetically, etc.) claim our property wherever we find it... We must of course have something of our own-something distinctive and homogeneous-and I take it that we shall find it in the moral consciousness, our unprecedented spiritual lightness and vigour."

- Theme → see below "The international situation", or the confrontation of New World innocence with the cultural richness of the Old World.

- Vision → cf. what has been called "the double vision": "for James, the poetic imagination was to be very largely a matter of seeing things from both sides: from the early tales to the final Prefaces his writing is full of images invoking the obverse and reverse, the back and the front, the passive and the active, the efficient and the visionary, the romance and the disillusion." (G. Putt)

- Characterization → very often dominated by "fascination with the notion of two twinlike personalities inhabiting one consciousness", hence also the characteristic tendency to "distribute over two or more contrasting characters, often cousins, the split personality of the geminian author" (G. Putt). The cousinly relationship serves to underline the contradictions in the characters' natures → Isabel Archer and Ralph Touchett are a case in point.

- Art → according to the critic R. Chase, James's art can be defined in terms of "an assimilation of romance into the substance of the novel". James's fiction is situated at the interface of the romance and the novel (cf. below), in the same way as James himself belongs to a hybrid literary category, i.e. "he is a poet-novelist combining J. Austen's skill of observing and dramatizing manners with Hawthorne's 'profoundly moral and psychological [...] poetic art of fiction'." (Critics on H. James)
In The Portrait, James "explores the limits of romance [...] but though he rejects romance as a moral view of the world, he assimilates into the very substance of the novel, by means of metaphor and the charm of the heroine herself, the appeal of romance" (Ibid.). Cf. James's definition of "romance" as opposed to the actual: "The real represents to my eyes the things we cannot possibly not know, sooner or later, in one way or another [...]. The romantic, on the other hand, stands for the things that, with all the facilities in the world [...], we can never know directly, the things that can reach us only through the beautiful circuits and subterfuges of our thought and our desire."

- Language and Style → James's language is a very idiosyncratic variety of English, known as "the mid-Atlantic variety", a strange combination of English, American and European influences subserving "his ambition of appearing to write from a sort of detached equipoise in Mid-Atlantic" (M. Swan). Note also that James is partial to "the warring words of an oxymoron", a figure of speech by which a locution produces an effect by seeming self-contradiction, as in "cruel kindness" or "to make haste slowly."

- Sex → James was barred from active service in the American Civil War by reason of an injury sustained when, helping to put out a small farmyard fire, he strained himself with the pump-handle. This "obscure hurt" (a back injury, slipped disc, muscular strain or psychosomatic backache?) was, in James's own words, "the most odious, horrid, intimate thing that can happen to a man...". A hurt which amounted to castration? There has been much speculation about its nature and consequences, and Henry James critics tend to see a relationship between the accident and his celibacy/homosexuality, his apparent avoidance of involvements with women and the absence of overt sexuality in his work.
Biographic information (from American Fiction, Longman): The peculiarities of James' life, which spans three quarters of a century (from the age of the common man and of Jacksonian democracy to the First World War), are crucial for an understanding of his work. "He is a member of the James family and has no other country," his brother William, the famous philosopher of pragmatism, once wrote, pointing to a central idiosyncrasy of the James clan. Their father wished his five children to be citizens of the world and therefore gave them a remarkably cosmopolitan, eclectic and liberal education. Henry James Jr. thus received a highly informal training which encouraged in him the habit of observation and a certain withdrawal from participation. Privately tutored until 1855, he spent the next three years touring Europe with his family, a crucial experience which brought him in touch with the high culture he was to make his world. From then on, his youth was divided between restricted settings of the American scene (the artistic and intellectual centers of Newport, Cambridge and New York) and further educational experiences in Europe. Kept out of the Civil War, the great ordeal of his time, by a mysterious injury, James left the Harvard Law School after one year in 1862 and turned to literature. In 1869, he became a transatlantic pilgrim again and from 1875 established his residence in England, while making frequent sojourns in France and Italy. What he found in Europe was precisely what he missed most in the States: all the attributes of civilization, without which the novelist was left empty. James' mock-epic list of items missing in America was more than a humorous reassessment of the poverty and crudity of the American landscape, which Hawthorne had deplored in his day; it pointed to the fundamental blankness of American culture which James himself cruelly encountered and which made him an expatriate.
He felt unfitted for life in the postwar United States and experienced dismay at the greed and vulgarity of the Gilded Age (cf. below). To him the period between the Civil War and the First World War came to be seen as "the Age of the Mistake", and he grew increasingly pessimistic about the possibilities of culture in America. "My choice was the Old World," he wrote, where his interest in highly refined manners and his devotion to art could be fully satisfied. Hence his lifelong quarrel with his native land, which he officially renounced in 1915, shortly before his death, by becoming a British citizen. Yet if the United States remained an impossible place for him to feel at home in, it did provide him with an essential component of his international theme and thus remained a constant referent in his vision. In Paris he met Flaubert and Turgenev, who were to exert a considerable influence on his conception of art. From them he learned that writing was more than a profession, it was a vocation, and that the novel was the mastery of a form and as such belonged to art. James thus set about writing novels worthy of this lofty conception, which would lift American fiction to the heights of European and Russian fiction and make it rank alongside the works of Balzac, whom he greatly admired, George Eliot, and his Parisian mentors, Flaubert and Turgenev. Having written Roderick Hudson (1876), The American (1877), and The Europeans (1878), which were all variations on his international theme, it was Daisy Miller (1878), another incursion into the same theme, which brought him fame. In The Portrait of a Lady (1881), James brought the first of his three periods to a close, with a triumphant sample of his method of psychological realism. This climax in his career was followed by a period of experimentation, in which James refined his medium into an extremely sophisticated vehicle of perception which earned him a reputation for unnecessary difficulty and undue elaborateness.
The Princess Casamassima (1886), The Aspern Papers (1888), The Tragic Muse (1890), The Lesson of the Master (1892) and The Spoils of Poynton (1897) constitute some of the landmarks of this controversial period, in which James had to face increasing public neglect and incomprehension, which left him disheartened but which was perhaps the necessary transition, in F. R. Leavis' terms, "to pass from talent to genius." After an unsuccessful attempt at play-writing which turned into a humiliating failure, James produced the three novels considered to be the masterpieces of his maturity: The Wings of the Dove (1902), The Ambassadors (1903) and The Golden Bowl (1904). These late novels, which offer a more and more searching insight into the complexities of human relationships and the depths of human consciousness, testify to James' mastery, though they also point to an increasing loss of touch with the real world, as well as to an often frustrating incapacity for directness and an exhausting array of subtleties. His prose became accordingly endlessly convoluted, ruminative and elusive, forever accreting, ramifying and qualifying, all Jamesian mannerisms of overtreatment which have divided his readers, among them his brother William, urging him "to say it out, for God's sake, and have done with it." His artistry had become overly self-conscious and led him into realms of his own, with characters altogether cut off from the material world. James' friend, the novelist Edith Wharton, thus criticized The Golden Bowl: "What was your idea in suspending the four principal characters in The Golden Bowl in the void? What sort of life did they lead when they were not watching each other and fending with each other?", a comment which left James baffled. This relinquishing of social surface accentuated tendencies already present in his earlier work, and placed his protagonists in rarefied spheres where perception proceeded against ethereal backgrounds of affluence and sophistication.
Even though he greatly admired Balzac's Human Comedy, he himself obviously did not care to paint characters from all strata of society, in particular the ordinary, commonplace characters that were the stuff of traditional realistic fiction, and devoted himself solely to portraying contrasting types of European aristocrats or artists and American pilgrims. Just as his vision excluded a vast painting of the social canvas, it also discarded apprehension of the physical realities of the flesh: if sexuality appears in his fiction, it surfaces in the blanks of the page, the ellipses of the story, the repressed perceptions of an otherwise hyperactive consciousness. James' own celibate and rather dusky private life may help account for this shadowy treatment of sex. His characters share in their author's essential isolation and withdrawal from life, Strether in The Ambassadors being perhaps one of James' closest personae in this respect. The scene in which Strether exhorts his young expatriate American friend to live ("Live all you can; it's a mistake not to. It doesn't so much matter what you do in particular, so long as you have your life. If you haven't had that, what have you had?") speaks eloquently of James' sense of a wasted life. In a life sacrificed to art and gnawed by the awareness of the unlived life, excesses of refinement thus made up for a fundamental loss: art became for him a means to redeem life. This is perhaps the ultimate import of James' fiction. Not only did he greatly stretch the scope and mastery of the novel by introducing psychological realism and emphasizing form, but he also prefigured the modernist vision of art, art being the quintessential form of culture and culture being the sole barrier against barbarism.
James' exile is a living testimony to this belief and his fiction a magnificent tribute to the redeeming power of art: "It is art that makes life, makes interest, makes importance and I know of no substitute whatever for the force and beauty of its process."

Before going any further, it is necessary to situate the novel in its context and to assess the rôle and importance of James in the evolution of US literature.

Place in the context of American history

Although The Portrait of a Lady is a work of fiction and no sociological document, the novel was not composed in a vacuum; it is firmly grounded in history and incorporates a solid fabric of factual detail. The novel takes place in the latter half of the XIXth century, in post-Civil-War America (the Civil War is mentioned p. 35), and particularly in the decades known as "The Gilded Age" (after the title of a satirical novel, The Gilded Age: A Tale of Today, that Charles Warner and Mark Twain wrote in 1873). It was the period between the Civil War and the "Progressive Era" which started around the end of the 19th century (the Progressive Movement which gave its name to the following period was a powerful new idealist movement which aimed at reforming the paltry, disappointing reality of the money-grubbing society the USA had become). The Gilded Age refers to the period when the nation was undergoing a dramatic alteration, passing from an agrarian-commercial economy to an industrial-capitalist one. It was a period of intense economic development which saw the birth of the industrial revolution, the extension of the railway network, the rise of social Darwinism and of the Labor Movement.
According to numerous historians, the intellectual and ethical atmosphere of the country was never so poor as during that "age"; thus the tag is quite appropriate, since it emphasizes both the primary rôle played by capital and the spurious values of the age, gilded and not golden (the period was also labelled the "Age of Sham", i.e. pretence, fraud, spurious imitation). The Press was one of those spurious values or institutions, the crudest expression of the prevalent vulgarity. Throughout his life, James never missed an opportunity to satirize American newspapers and newspaper(wo)men. This can partly be explained by his own unsatisfactory relationship with certain American periodicals and his contempt for their low standards. The delineation of Henrietta Stackpole in The Portrait of a Lady bears witness to James's dissatisfaction with and distrust of the Press: "there's something of the 'people' in her [...] she is a kind of emanation of the great democracy, of the continent, the country, the nation. I don't say that she sums it all up, that would be too much to ask of her. But she suggests it; she vividly figures it" (p. 93); "I don't like Miss Stackpole, everything about her displeases me; she talks so much too loud and looks at one as if one wanted to look at her, which one doesn't. I'm sure she has lived all her life in a boarding-house and [...] I detest a boarding-house civilization" (95).

Place in the context of American literary history

The time of publication of The Portrait of a Lady is still very close to the origins of American literature. One must bear in mind that at first, i.e. in colonial times, there was no American literature to speak of: in its early days, American literature was little more than English literature transplanted on new ground; the New Continent was too immature for the production of original works.
As Henry James was to put it much later: "it takes a great deal of history to make a little tradition, a great deal of tradition to make a little taste, and a great deal of taste to make a little art". In Europe, literature was the product of a sophisticated civilization; in the early days of the settlement, conditions were not favorable to the flowering of belles-lettres and there was little encouragement to the writing of literature; pioneering engaged all the colonists' faculties and efforts. What little literature eventually emerged in the formative years of colonial growth had no distinctive American quality and was represented by minor genres such as diaries, chronicles and local histories. So, although America was discovered in the 15th century, the American novel is a recent invention. After a first slow phase of germination came the flowering of American literature; by the middle of the XIXth century, a group of New England authors (Emerson, Thoreau, Hawthorne, Melville and Whitman) took the lead in American letters and produced in the space of twenty years (1840-1860) some of the finest and greatest works ever written in the land and in the world. Hence the name of American Renaissance that the famous critic F. O. Matthiessen gave to those two brilliant decades. Henry James (April 15th, 1843 - February 28th, 1916) does not exactly belong to the American Renaissance, but he is "a continuator of the New England genius" as characterized by the above-mentioned authors, and he showed great interest in Nathaniel Hawthorne (1804-1864), the third major figure of the American Renaissance, who influenced him to a great extent. James wrote that "the fine thing in Hawthorne is that he cared for the deeper psychology, and that, in his way, he tried to become familiar with it". In Hawthorne's time, the literary and philosophic scene was dominated by Transcendentalism, a philosophic movement launched by R. W. Emerson. The concept of Transcendentalism (i.e.
going beyond experience) is borrowed from Kant whose philosophy was known indirectly through the writings of Coleridge and Carlyle. Besides German influences, Transcendentalism also included some elements of Oriental mysticism and French Fourierism. The movement was an outgrowth of the reaction against Puritanism, materialism, rationalism and bourgeois commercialism. Its advocates laid emphasis on the intuitive and mystical above the empirical ; they also claimed that each human being has divine attributes that can be discovered by intuition and they were convinced that the way to attain spiritual growth was through nature. They rejected formalism in religion for spontaneous individual worship and rejected the tyranny of social conformity for a free personal ethical code. Transcendentalism was in fact as much a mode of life as a philosophy. Hawthorne never was a convert to Transcendentalism though he was once enough of a sympathizer to take part in the Brook Farm experiment (a farm in West Roxbury, Mass. where a communistic community was established from 1841 to 1847). He did not share Emerson's optimism and rejection of the past ; on the contrary, he was closely associated with the New England past through the traditions of his own family which included a judge in the notorious Salem witchcraft trials of the XVIIth century and was obsessed by the traditional view of Sin as rooted in New England conscience. Much of his work is devoted to the probing of the darker regions of the human mind ; his romanticism was not primarily that of an advocate of the new faith in Man's divine essence but rather, that of an artist for whom evil did exist, and who found in Man's sense of sin a rich field for psychological studies. He published short stories : Twice-Told Tales (1837), and novels : -The Scarlet Letter (1850) : the story of a woman found guilty of adultery and condemned to wear in public the scarlet letter "A" as a sign of her sin. 
-The House of the Seven Gables (1851) : the working out of an old curse visiting the sins of fathers upon the children of several generations.
-The Blithedale Romance (1852) : whose setting is a Transcendental community like Brook Farm.
-The Marble Faun (1860) : this novel set in Italy is a predecessor to The Portrait of a Lady. It is a modern version of the Garden of Eden with the Miltonic thesis of the "fortunate fall" or felix culpa through which humankind can be elevated to a new and greater estate than that of innocence.

A key distinction → We owe N. Hawthorne the categorization of American fiction into romances and novels. The main difference between the novel and the romance is in the way in which they view reality. The novel renders reality closely and in comprehensive detail ; it attaches great importance to character and psychology and strives after verisimilitude. Romance is free from the ordinary novelistic requirements of verisimilitude ; it shows a tendency to plunge into the underside of consciousness and often expresses dark and complex truths unavailable to realism. The word romance also refers to a medieval narrative treating of heroic, fantastic or supernatural events, often in the form of an allegory. A third meaning is appropriate : romance means a love affair, with the usual connotations of idealism and sentimentalism, and as such it is not far removed from 'romanticism' which according to Th. Mann "bears in its heart the germ of morbidity, as the rose bears the worm ; its innermost character is seduction, seduction to death". In the "Introduction" to The Scarlet Letter, Hawthorne defined the field of action of romance as being "the borderland of the human mind where the actual and the imaginary intermingle". The distinction is still valid and may account, as some critics have argued, notably R.
Chase in The American Novel and its Tradition, for the original and characteristic form of the American novel which Chase actually calls "romance-novel" to highlight its hybrid nature. This being said, and to return to H. James, who belongs to the XXth century and the beginnings of the modern era → At the turn of the century, a group of literati dissatisfied with social and cultural conditions in the States turned to Europe to find what was still lacking in their own country : a cultural tradition, a sophisticated civilization, a social climate favorable to literary creation, cf. Hawthorne's opinion : "No author, without a trial, can conceive of the difficulty of writing a romance about a country where there is no shadow, no antiquity, no mystery, no picturesque and gloomy wrong, nor anything but a commonplace prosperity, in broad and simple daylight, as is happily the case with my dear native land" (N. Hawthorne, Preface to The Marble Faun). They consequently often settled in Europe, whether in France, Great Britain or Italy, and were given the name of "expatriates". Henry James was the first to make the pilgrimage back to Europe. There are four periods in James's writing life :
1. -1871-1883 : James explored the effects of the Old World on the New (and occasionally of the New World on the Old, cf. Roderick Hudson, 1876, The American, 1877). This period culminated in The Portrait of a Lady (1881) ;
2. -1884-1890 : James's production consisted mainly of studies of current social themes in his native and adopted countries. The Bostonians (a novel of intersectional - and not international - contrast dealing with social conditions in America, the opposition between North and South i.e. progressive females and conservative males) and The Princess Casamassima obviously belong to that period ;
3. -1890-1895 : While keeping a steady output of fiction, James tried to win a place in the theatre. He dramatized The American but the play did not run for long.
He then wrote four comedies but he was hissed at the first night of his play Guy Domville and abandoned the theatre for ever ;
4. -1895-1904 : James gave his mind to the theory of fiction and the question of point of view. The novels of this time (What Maisie Knew, 1897 ; The Spoils of Poynton, 1896) marked the beginning of the celebrated "later manner" (by which is often meant a hypertrophy of technique) and they culminated in the publication of The Wings of the Dove (1902), The Ambassadors (1903) and The Golden Bowl (1904).

The international situation

The Portrait of a Lady, begun in Florence in the spring of 1879 and published serially in The Atlantic Monthly in 1880, belongs to the series of early novels having in common the theme of the international situation. The phrase refers to what one might call "the mutual interrogation of America and Europe" (T. Tanner) or "the interplay of contrasted cultural traditions" (Leavis). Thus, the confrontation of the distinctively American outlook and the distinctively European outlook, which lies at the heart of nearly all of James's fiction, was his great discovery for and contribution to the American novel. H. James considered America a continent too immature for the production of great literature - "the flower of art blooms only where the soil is deep", he said -, while Europe was ancient and ripe with traditions ; it also represented for him that romantic "otherness" which seemed to be necessary to him as an artist. In a way, one might say that H. James exemplified a unique case of 'divided loyalty' ; hence the contradiction that is the motive power of his fiction-writing. R. Chase stated in The American Novel and its Tradition that the natural bent of the American novel is to exploit and expose the contradictions and discrepancies of the culture it originates in : "[...]
many of the best American novels achieve their very being, their energy, their form, from the perception and acceptance not of unities but of radical disunities" ; James compounded the situation, made it more complex, by enriching domestic contradictions with those resulting from the confrontation of the New World with the Old. James's task as an artist was precisely to dramatize the psychological and cultural complexities that grew out of such conflicts. So James was drawn to the international setting by temperament and training, as well as by what he judged to be the particular aesthetic requirements of the novel, but that pilgrimage to Europe was also the spiritual journey of an author in quest of selfhood :

It is this quest for the self that critics have mistaken for the fascination of Europe, which is only one of its forms, an intermittent and superficial objective correlative. The journey is not the character's displacement in space, discovering, inventorying or recognizing its concrete particulars ; it is only space absorbed by the individual consciousness and transmuted into a dynamized inner dimension that allows the character to take the measure of himself - to seek himself and, perhaps, to discover himself (M.-H. Bergeret in L'Art de la fiction)

Thus, there is more to the international situation than meets the eye ; a French critic, H. Cixous, puts readers on their guard against too simplistic an interpretation of James's voluntary exile :

the international situation, an inexact expression, as is the idea of cosmopolitanism, which must be reduced to the opposition between Europe (body, object, matter, mother, desired origin, traversed through and through, despised because it is reduced to its monuments, its ruins, its collections, its bones, because it is cynical and desiccated, because it needs the blood of its children) and America (soul, subject, spirituality, nobly stripped bare in its Puritans, avid, deceived). And James, who is neither European nor American, experiences himself as a loving and all-too-lucid son, eager to regress toward a childhood where lucidity does not yet entail the obligation of choice.

I grant you all that may be a little complex (but then James is no run-of-the-mill novelist) ; it nonetheless highlights the originality of the author of The Portrait. As far as The Portrait of a Lady is concerned, the theme of the international situation is embodied in the confrontation between Isabel Archer (and Henrietta Stackpole) and Madame Merle and Gilbert Osmond, who represent the two countries of James's imagination : America, the boring paradise, and Europe, seductive, sensual, an enchanting hell. Isabel's fate illustrates "the disabling effects upon the American mind of the simplicities and freedoms of the American life, and their effect in particular of placing Americans at a severe disadvantage in their intercourse with the English and the Europeans" (The Wings of the Dove). After those necessary preliminaries bearing on the general framework of the "international theme", it is necessary to revert to places in particular, for The Portrait of a Lady (which is also a portrait of places) sends its heroine (and its readers) on an instructive journey through both a "geographic" and a "social" map ; in other words Isabel's journey in space is also an initiation into the intricacies of "the social atlas" and the complexities of the self.
Consequently, the symbolic and social values of the places Isabel traverses differ widely, and one could almost trace Isabel's education by reference to the houses and rooms she occupies : from the cluttered office in Albany to the comforting spaciousness of Gardencourt ; through the "stout grey pile" of Lockleigh (Lord Warburton's estate which, to Isabel, looks like 'a noble picture' and 'a castle in a legend') ; Mrs Touchett's Florentine palace ; Osmond's ancient villa with its imposing front and perpetually chilled antechamber ; the garish hotel room in which Osmond proposes marriage ; Osmond's Palazzo Roccanera, that 'dark and massive structure' like 'a dungeon' ; and finally the bare cold apartments of the convent to which Pansy is banished by her father. Each of these references to places, houses, palaces, etc. - in a word, architecture - is a clue to Isabel's development. Cf. the following opinion from Critics on Henry James : "Figuratively speaking the story told in the novel is of Isabel's leaving an American house - a way of life, that is - for a European house. Ostensibly she conceives of this as an escape from frustrating and cramping confinement to a fuller, freer, more resonant and significant life." First of all there is "the large, square, double house" in Albany (the capital of New York) ; it has "the appearance of a bustling provincial inn kept by a gentle old landlady" (Isabel's grandmother) and affords the heroine "uncontrolled use of a library full of books". This is where the first association of Isabel with gardens manifests itself (note by the way that Isabel's visits to her grandmother's are said to have "a flavour of peaches", 24). In her innocence she is repeatedly associated with gardens - the garden behind her grandmother's house, then the wide, fresh expanse of Gardencourt, and eventually with gardens that harbour corruption and profound egotistical temptation.
Note also that on her arrival in England she is described as "having a certain garden-like quality, a suggestion of perfume and murmuring boughs, of shady bowers and lengthening vistas, which made her feel that introspection was, after all, an exercise in the open air, and that a visit to the recesses of one's spirit was harmless when one returned from it with a lapful of roses. But she was often reminded that there were other gardens in the world than those of her remarkable soul, and that there were moreover a great many places which were not gardens at all - only dusky pestiferous tracts, planted with ugliness and misery." (p. 53, emphasis mine) The second feature of the house in Albany is Isabel's "office", whose meaningless clutter of furnishings is a direct antithesis to the studiously contrived aesthetic harmony of Osmond's villa. It is here that Isabel, "the intellectual superior" (p. 30), seals herself off from the world, never opening the bolted door that leads to the street. She imagines herself in this way protected from what she thinks of as "the vulgar street", "a region of delight or of terror" (p. 25), and it is this failure of experience, the chronic inability to assess the world as distinct from her romantic vision of the world, which will spell her doom ; only in the cloistered office or the walled-in garden behind the house does Isabel feel safe ; she lacks the energy and the conviction to meet the demands of her imagination or to test them against life ; Isabel ventures out into the world only when led forth by her practical aunt, Mrs Touchett. Then Gardencourt, the country-seat whose name suggests the garden of Eden. In Gardencourt, the symbol of country-house civilization with its rich perfection, Isabel - through her romanticism - sees life as a novel (cf. her reaction on being introduced to Lord Warburton : "Oh, I hoped there would be a lord ; it's just like a novel" [17]) or a picture/painting (cf.
"Her uncle's house seemed a picture made real... the rich perfection of Gardencourt at once revealed a world and gratified a need" [p. 54]), and becomes herself subject to an artistic analogy ("A character like that [i.e. Isabel] is finer than the finest work of art. Suddenly I receive a Titian, by the post, to hang on my wall - a Greek bas-relief to stick over my chimney-piece", p. 63 ; cf. also "She asked Ralph to show her the pictures [...] she was better worth looking at than most works of art", pp. 45-46), an analogy that is carried to its extreme in Italy. Gardencourt is also said to be haunted by a ghost whose function is highly symbolic, since it is associated with the knowledge of deceit, evil and death, and is only visible to people who have suffered, cf. p. 48 : Ralph shook his head sadly. 'I might show it to you, but you'd never see it. The privilege isn't given to everyone ; it's not enviable. It has never been seen by a young, happy, innocent person like you. You must have suffered first... I saw it long ago,' said Ralph.

Italy, with its three main cities Florence, Venice and Rome, each endowed in James's fiction with a highly symbolic value :
-Florence, where Isabel lives in Palazzo Crescentini with Mrs Touchett, is the locale of her education to aesthetic appreciation and enjoyment : "she felt her heart beat in the presence of immortal genius and knew the sweetness of rising tears in eyes to which faded fresco and darkened marble grew dim" (246)
-Venice : Isabel just passes through Venice [322] (which doesn't play the same rôle as in The Wings of the Dove). However, its importance in James's fiction lies in the fact that it is the city in which eastern and western cultures meet and mingle, the vast cemetery of past culture and the city of death.
-Rome is the terminus ad quem of Isabel's international pilgrimage from Albany to Italy, the locale of the confrontation with and temptation of "abysses".
Rome is a city of decay, offering a kind of openness and freedom impossible in New York or London ; it is a milieu which allows the heroine a certain free play, an exhilarating imaginative range : cf. the beginning of chapter 27, and p. 288 : "from the Roman past to Isabel Archer's future was a long stride, but her imagination had taken it in a single flight and now hovered in slow circles over the nearer and richer field." To know more about the international situation and the confrontation between America and the Mediterranean world at large (bear in mind that Isabel, feeling that "the world lay before her - she could do whatever she chose" [p. 322], makes "a little pilgrimage to the East" [323] and spends three months in Greece, in Turkey, in Egypt [p. 323] before leaving for Italy), see the following extract from a paper read by yours truly at the International Conference on "Saveurs, senteurs : le goût de la Méditerranée" :

The relationship between America and the Mediterranean in the background of James's novel is a highly complex question : experienced essentially in the imaginary mode, these relations are marked by a fundamental ambivalence in which allure and attraction contend with aversion and repulsion. It nonetheless remains that, historically, the ties established between America and the Mediterranean go back to the very birth of the nation and are, as such, of primordial importance : the process of the discovery and colonization of America is essentially bound up with the Mediterranean world ; but, as we know, the United States willed and asserted itself as WASP, that is, Anglo-Saxon and Protestant, which entailed the rejection of everything reminiscent of the Mediterranean, very early perceived as a third world avant la lettre. Yet the Mediterranean world is not totally excluded and banished from the Anglo-American universe ; it resurfaces, reappears with unsuspected frequency and obstinacy in the most diverse fields, in particular art and literature, which often seem bathed in "a luminosity that does not seem to come from the day itself but from ancient classical times". I will not hesitate to advance the hypothesis that the Mediterranean, and everything attached to it (Catholicism, ethics, mores, sexuality, etc.), has been the object, in American culture, of a primal repression which lies at the origin of the American unconscious ; it remains to discover its reasons and to assess its effects. The relations between America and the Mediterranean world are inscribed from the outset in the fundamental dialectic of desire and prohibition, of the Law and its transgression. For instance, the Mediterranean embodies a relation to the body, to sensuality and to sexuality, personified by the literary archetype of the Dark Lady, the ardent brunette, the seductress, the anti-virgin (Catholic, Jewish, Latin or black) who symbolizes for the American psyche, to quote the critic L. Fiedler, "our relationship with the enslaved Africa in our midst or with the Mediterranean Europe from which our culture began ; she is surrogate for all the Otherness against which an Anglo-Saxon world attempts to define itself and a Protestant one to justify its existence". In the same vein, the Mediterranean also represents a certain form of hedonism, at times even of licence and exuberance, fundamentally foreign to the Puritan mentality, which was rather repressive toward the pleasures of the senses and privileged only two of them, the most intellectualized and socialized, those that maintain the distance between self and other (and even keep the other at a distance) : sight, the organ of contemplation, which allows one to read the Word, and hearing, which possesses the other theological virtue of understanding it. [I]n such a context, to dream of the Mediterranean, to evoke it, to set a novel there or, naturally, to travel there, will always mean going against certain fundamental tropisms of American culture, taking the opposite course to founding principles and characteristic tendencies. Access to the Mediterranean allows the American to become a stranger to himself, for the (re)discovery of the Mare Nostrum is placed under the sign of return and regression, if not of transgression. A return to what ? The answers vary : first of all, obviously, a return to the sources, to the origin, that is, finally and fatally, to the body : the body of the woman, of the lover and of the mother (one might evoke here the mare-madre relation lodged in the recesses of the Mediterranean unconscious), that is, to the language of the body (gesture, contact, caresses) and to intimacy. "Too much outside, not enough inside", it has been said of America ; conversely, the Mediterranean countries will offer [...] decadence are intimately mingled. There one feels the crushing weight of the past ("sense of ponderous remembrances", MF), the obsessive presence of death ("some subtle allusion to death carefully veiled but forever peeping forth amid emblems of mirth and riot", MF), and one is confronted with formidable contradictions echoed by N. Hawthorne : "Catholicism is convenient but corrupt, aestheticism is enriching but pagan and tradition is profound but carries along its burden of sin" (MF, 342). If Rome is the place where R. Hudson comes to benefit from an "education to the senses and the imagination" (147) and, overwhelmed by so much beauty, feels, just like H. James, that he is being born or opening up to life, Venice, on the contrary, offers Milly Theale, the heroine of The Wings of the Dove, the opportunity to taste the bittersweet poetry of melancholy and misfortune ("the poetry of misfortune", 441) and to expose herself to beauty but also to duplicity and evil. In the city of the Doges one feels, she says, the antiquity of the race, and the interpenetration of art and life makes it "the most beautiful of tombs" (461), where Milly, in accordance with her wish ("I should like to die here", 269), will pass away. Finally, for the American who stays there long enough, Italy, like Europe, has the effect of enriching and complexifying consciousness ; it is par excellence the place where the representatives of the New World can experience what the critic T. Tanner aptly calls, in opposition to Transcendentalism, descendentalism, that is, a plunge into the thickness of time and history, into the murky depths of experience and the dark recesses of the unconscious, into the dizzying whirl of sin, corruption and dissolution (Hawthorne evokes in one of his texts "those dark caverns, into which all men must descend, if they would know anything beneath the surface and illusive pleasures of existence"). The American, whom J. de Crèvecoeur called a "stripped European", there recovers his cast-off skins and garments and experiences "the sense of the abyss" 8. [...] Thus the novels in which Italy serves as a setting are so many varied illustrations of a pattern that the American novel has often worked out in the course of its history, namely that "to repatriate oneself indeed implies a detour, both through a prehistory and through an elsewhere". Italy allows the protagonist to "become master of the chaos that one is oneself, [to] force one's own chaos to become form" (see below the quotation referring to Isabel's incoherence). Such, ultimately, would be one of the possible lessons of the Mediterranean for the American who agrees to venture and to lose himself there, the better, perhaps, to find himself again.

8. G. Bachelard, La Poétique de la rêverie, Paris, PUF, 1960, 128.

Such observations, however theoretical or far-fetched they may seem to the layman, are not so remote from the novel under study as one may imagine ; they parallel and throw light on some aspects of The Portrait which are easier to understand against such a background, for instance :
-Symbolic function of Italy → "The charm of the Mediterranean coast only deepened for our heroine on acquaintance, for it was the threshold of Italy, the gate of admirations. Italy, as yet imperfectly seen and felt, stretched before her as a land of promise, a land in which a love of the beautiful might be comforted by endless knowledge" (223)
-Sense of the ponderous past → In Florence Isabel inhabits "an historic building" with Mrs Touchett : "To live in such a place was, for Isabel, to hold to her ear all day a shell of the sea of the past. This vague eternal rumour kept her imagination awake" (247)
-Education to the senses ("to breakfast on the Florentine sunshine", 348) and the imagination : cf. the visit to Osmond's apartments → "Isabel was oppressed with the accumulation of beauty and knowledge to which she found herself introduced" (263) and the sojourn in Rome : "I may not attempt to report in its fulness our young woman's response to the deep appeal of Rome... She had an imagination that kindled at the mention of great deeds... These things strongly moved her, but moved her all inwardly. [...] The sense of the terrible human past was heavy to her..." (287) Cf. p. 517 : "She had long before this taken old Rome into her confidence, for in a world of ruins the ruin of one's happiness seemed a less unnatural catastrophe"
-Italy as the scene/site of "descendentalism" → The Portrait of a Lady is a modern reenactment of "the happy fall" : cf.
Hawthorne again, for whom sin and the fall, the lapse from edenic innocence, were "an element of human education through which we struggled to a higher and purer state than we could otherwise have attained. Did Adam fall, that we might ultimately rise to a far loftier paradise than his ?". Isabel, through her confrontation with evil, hypocrisy and dishonesty, loses her naiveté, her New World innocence, to achieve the higher innocence of those who are aware of the moral intricacy of the world and, having recognized their own limitations, triumph over them by rejoining the world rather than succumbing to self-pity and despair.
-Motif of the abyss → "...she [Isabel] had suddenly found the infinite vista of a multiplied life to be a dark, narrow alley with a dead wall at the end. Instead of leading to the high places of happiness, from which the world would seem to lie below one... it led rather downward and earthward, into realms of restriction and depression where the sound of other lives, easier and freer, was heard as from above, and where it served to deepen the feeling of failure" (p. 425) ; Madame Merle → "I am afraid of the abyss into which I shall have cast her [Isabel]" (p. 286). And to round off this short disquisition on the international situation, cf. H. Stackpole's statement to the effect that : "It's nothing to come to Europe [...] It is something to stay at home ; this is much more important" (487). That gives one food for thought, doesn't it ?

MAIN CHARACTERS, SYMBOLS AND METAPHORS IN THE PORTRAIT OF A LADY

Men and women

Before dealing with the question, a word of warning is in order : the following observations do not purport to be a thorough-going psychological study of each character but a sketch aiming at bringing together some essential clues or semes scattered throughout the text in order to emphasize the traits accounting for the fate of the protagonists, i.e. Isabel Archer, Madame Merle, Ralph Touchett and Gilbert Osmond.
I'll leave it to you to complete the "conversation piece" or group portrait and fill in the gaps. Men and women, or more appropriately, men vs. women, embody a fundamental question not only in The Portrait of a Lady but also in the whole of James's fiction, as witness the following quotation from the critic N. Blake :

The difference between the sexes, especially in America, is such that James becomes convinced that the American scene is above all the woman's scene. Man intervenes only in an 'occult', 'secret', 'practically disavowed' way. America, from An International Episode onward, is a society of women situated in a world of men. And James finds a telling image : 'the men supply, so to speak, all the canvas, and the women all the embroidery'.

H. James being "the writer of woman", I'll deal first with women, who fall into two categories.

Isabel Archer

In the creation of the main protagonist, Isabel Archer, James was directly inspired by the memory of his beloved cousin from Albany, Minny Temple, who died of tuberculosis at the age of twenty-four ; the cousins were to have made together the grand tour (i.e. the tour of the chief towns of Europe) but Minny's illness prevented it. In both The Portrait and The Wings of the Dove (a companion novel to the former that you are strongly advised to read), Minny served as a prototype of the charmingly ingenuous American girl James was to send on a symbolic voyage to the Old World that would test, temper, and eventually threaten to destroy the very essence of American innocence. However, if we are to believe the author himself, this biographic parallel requires qualifications : "You are both right and wrong about Minny Temple. I had her in mind and there is in the heroine a considerable infusion of my impression of her remarkable nature. But the thing is not a portrait. Poor Minny was essentially incomplete and I have attempted to make my young woman more rounded, more finished."
While we are on the subject of "biographic fallacy" (i.e. an evaluation of literary works in terms of the personality and life of their author, or a literal one-for-one equation of fiction with the details of the life from which it grows), let me add that Ralph Touchett is frequently judged to be a thinly disguised and self-indulgent portrait of the novelist himself. There exist numerous parallels between James and Ralph, e.g. their detachment, their desire to test the resources of imagination, their unfulfilled love for a cousin from Albany, etc. Now to return to Isabel Archer, she clearly represents :

the American girl [who] possesses the double attribute of audacity and innocence, of spontaneity and naïveté. Imported into Europe, at the heart of the famous international situation, she contrasts sharply with her English sisters, who are less idealistic and more conventional. She lands in Europe brand new, a blank page asking only to be written, a provisional identity waiting to constitute itself as essence. There she becomes the focus of every gaze, the object of every desire. The subject of every conversation as well, for what first seduces in her is the contradiction on which she is founded... (L'Arc)

Like many Jamesian heroines, Isabel is "an interesting mixture" ; there is in the description of the alloy composing her nature a suggestion of some flaw : a chronic inability to assess the world as distinct from her romantic vision of it (cf. "She spent half her time in thinking of beauty and bravery and magnanimity ; she had a fixed determination to regard the world as a place of brightness, of free expansion, of irresistible action", 51). Isabel's aim is "the free exploration of life" (110) ; she wishes to "drain the cup of experience" (150), to roam about the world, to see places and people, and there is nothing wrong with it for "an independent young lady", except that she is unable to see through them : cf.
"she paid the penalty of having given undue encouragement to the faculty of seeing without judging" (p. 33 ; emphasis mine) and her tell-tale exclamation : "Good heavens, how you see through one !" (48). In other words, "sight" is a poor substitute for "insight", and the failure of any such discriminating vision is one of the most serious of Isabel's faults (cf. in aesthetic matters, "her fear of exposing her possible grossness of perception" 263). This "tragic flaw" (or hamartia i.e. error of judgment resulting from a defect in the character of a tragic hero and bringing about disaster, is an essential ingredient of tragedy, and so is "inescapability" : cf. "I can't escape my fate" p. 131 and "I can't escape unhappiness" p. 132) clearly manifests itself in her taste for "ideas" and "theories" (etymologically both refer to the notion of vision : "idea" comes from Indo-European *weid idea/eidos (idole) in Greek  videre in Latin  voir/vision in French  wit, wisdom in English ; "theory" comes from the Greek theoria, meaning "contemplation", "speculation", "sight") two essential signifiers (which shouldn't come as a surprise in a novel entitled The Portrait of a Lady since painting is a visual art ; however, the eye is not only an organ of aesthetic experience but also of moral discrimination  much of James's work is an exploration of the profound identity of the aesthetic and the moral ; moral and aesthetic experience have in common their foundation in feeling and their distinction from the useful.) 
as witness the number of occurrences : "Isabel Archer was a young person of many theories" 49 ; "theories" (53 ; 127 ; 128) ; "a system, a theory, that she had lately embraced", 160 ; "That love of liberty...was as yet almost exclusively theoretic", 164 ; "according to her own theory" (180) ; "having invented a fine theory about Gilbert Osmond, she loved him not for what he really possessed but for his poverties dressed out as honors" 348 ; "What had become of all her ardours, her aspirations, her theories ?" 352 ; "by theory" 414 ; "if she had really married on a fictitious theory" 427 ; "she had too many ideas" 428 ; "theory", 583, etc. - and the number of metaphors using visual images as their vehicle → the question of seeing is central to all of James's major works and it is even more so in the case of The Portrait : The title, The Portrait, asks the eye to see. And the handling of the book is in terms of seeing. The informing and strengthening of the eye of the mind is the theme - the ultimate knowledge, the thing finally "seen," having only the contingent importance of stimulating a more subtle and various activity of perception. The dramatization is deliberately "scenic", moving in a series of recognition scenes that are slight and low-keyed at first, or blurred and erroneous, in proportion both to the innocence of the heroine and others' skill in refined disguises and obliquities ; then, toward the end, proceeding in swift and livid flashes. For in adopting as his compositional center the growth of a consciousness, James was able to use the bafflements and illusions of ignorance for his "complications," as he was able to use, more consistently than any other novelist, "recognitions" for his crises.
Further, this action, moving through errors and illuminations of the inward eye, is set in a symbolic construct of things to be seen by the physical eye - paintings and sculptures, old coins and porcelain and lace and tapestries, most of all buildings : the aesthetic riches of Europe, pregnant with memory, with "histories within histories" of skills and motivations, temptations and suffering. (D. Van Ghent, emphasis mine) Isabel's lack of discrimination is also the logical concomitant of her New World innocence ("her innocent ignorance" p. 543), an innocence which is the reverse side of her "sublime soul" and is, paradoxically and ironically enough, strengthened by her intellectualism and devotion to literature ("Her reputation of reading a great deal hung about her like the cloudy envelope of a goddess in an epic" 35) ; even if "her love of knowledge had a fertilizing quality and her imagination was strong" (23), they can't make up for actual experience, as Henrietta Stackpole bluntly states : "The peril for you is that you live too much in the world of your own dreams. You're not enough in contact with reality - with the toiling, striving, suffering, I may even say sinning, world that surrounds you. You're too fastidious ; you've too many graceful illusions... You think you can lead a romantic life, that you can live by pleasing yourself and pleasing others" (216-217), a fault that Isabel herself had come to acknowledge, earlier in the novel, without acting accordingly : "It appeared to Isabel that the unpleasant had been even too absent from her knowledge, for she had gathered from her acquaintance with literature that it was often a source of interest and even of instruction" (33). That it is, indeed, as the rest of the story will show ! Moreover, to highlight another paradox in her psychological make-up : "The love of knowledge coexisted in her mind with the finest capacity for ignorance.
With all her love of knowledge she had a natural shrinking from raising curtains and looking into unlighted corners." (199). Her reluctance to face "the unpleasant" stems from her almost exclusive preoccupation with aesthetics ; she tends to place too exclusive a trust in social and aesthetic forms themselves, without considering the ethical distinctions which should inform these. To that shortcoming may be ascribed several other inadequacies : a fear of life, if not a withdrawal from life, and particularly sexuality, associated with three clusters of images bearing on :
- shadows → they become increasingly symbolic of the cultural ambiguities and moral obscurity into which Isabel is moving and of her fear of the world, of human commitments and physical contact ;
- light → "if a certain light should dawn she could give herself completely" (53) ; "His kiss was like white lightning, a flash that spread, and spread again, and stayed ; and it was extraordinarily as if, while she took it, she felt each thing in his hard manhood that had least pleased her... justified of its intense identity and made one with this act of possession" (591) ;
- water → her persistent fear of her suitor's sexuality is conveyed by the use of water imagery ; water becomes concomitant with the surrender to passion which would destroy her own image → p. 590 : "The world [...] seemed to open out, all round her, to take the form of a mighty sea, where she floated in fathomless waters. She had wanted help, and here was help ; it had come in a rushing torrent," etc.
Note also that Isabel turns down Lord Warburton and Caspar Goodwood as being - in her opinion - too safe, too conservative, too predictable, but one may wonder whether she does not reject those suitors because they are hardy and robust men, and turns instead to the sexless love of a dying cousin and to a husband who is slight, effete and cold.
Be that as it may, all these implications of frigidity suggest a fear of life itself, of passion and instinct, and it is the most fatal of Isabel's presumptions that she can see life from a detached, almost theoretical point of view, without actually experiencing it ; hence Ralph's pointed reproof : "You want to see but not to feel" (p. 150). Two other characteristics are also to be taken into account in any discussion of Isabel's personality :
- self-complacency → "Isabel was probably very liable to the sin of self-esteem ; she often surveyed with complacency the field of her own nature [note the variant of the garden-metaphor] ; she was in the habit of taking for granted, on scanty evidence, that she was right" (50) ;
- "perversity" → Isabel, whom James describes in the Preface as "an intelligent but presumptuous girl" (XI), is a perverse heroine, perverse, that is, in both the English sense of the word and the etymological one, i.e. someone who "turns from the right way, turns aside from a right course or opinion, or who wilfully determines not to do what is expected or desired ; contrary, persistent or obstinate in what is wrong" → Isabel determines, against the advice of her family, to make an unconventional marriage to an effete older man and must learn to accept the limitations of that choice, and to retrieve from it what dignity she can.
Having everything (beauty, intelligence, wealth, etc.), Isabel is thus in an ideal position not only to give everything but also to lose everything, to end up with nothing, the common fate of most Jamesian heroes and heroines, and yet to be eventually the richer for the loss of everything, because one is radically changed in the process ; it's like a game in which the loser takes all (a sort of "qui perd gagne").
In her evolution from naivety to maturity, Isabel will have to learn to see people and things plainly, as they actually are and not as they appear to be : this moment of realization (or anagnorisis in Aristotle's definition of tragedy ; anagnorisis "refers to the point in the plot at which the protagonist recognizes his or her or some other character's true identity or discovers the true nature of his or her own situation". If tragedy is, as some critics maintain, the realization of the unthinkable - in the present instance, that Madame Merle "had made a convenience of [her]" (573) and that Osmond "had married [her] for [her] money" (576) - then The Portrait clearly belongs to the genre) is the climax of Isabel's apprenticeship. It goes through various stages :
- first inkling of the truth : p. 408 → "What struck Isabel first was that he was sitting while Madame Merle stood ; there was an anomaly in this that arrested her. [...] But the thing made an image, lasting only a moment, like a sudden flicker of light" ;
- revelation/"apocalypse" (in the original sense : "lifting of the veil of appearances") : p.
Isabel's conscious decision to return to her 'satanic' tormentor at the conclusion of the novel may seem disturbing ; however, that this decision is not only inevitable but also the thematic climax of the work is fully appreciated when we grasp one of the novel's most crucial ambiguities : if Isabel is wronged, she is not herself blameless (cf. "the sole source of her mistake had been within herself" p. 405) ; if Osmond is the victimizer, he is also, in important measure, a victim : Isabel feels herself responsible for having deceived Osmond into believing she was more pliable, less opinionated than she was : "She had made herself small, pretending that there was less of her than there really was... Yes she had been hypocritical ; she had liked him so much." (page reference ?).
It is an ironic blend of strength and weakness which determines her return to Osmond ; she actually obeys one of the most endearing aspects of her personality : "the temptation to give". Thus, Isabel accepts the consequences of her error of judgment, and her decision is more or less anticipated by the fact that "suffering, with Isabel, was an active condition ; it was a passion of thought, of speculation, of response to every pressure" (p. 425). In the work of Henry James, suffering is a supreme discipline - it is the price one pays for being able to feel → "For James, man's most terrifying fate is to exist without feeling, and he never more clearly argues his point than he does in his short story 'The Beast in the Jungle', for here the special fate, the beast waiting to spring, is simply the realization that one has lived without ever having felt or committed oneself at all". As for Isabel's splendid imagination, it is worth noting that it is not destroyed : it is transmuted into the finer stuff of conscience. This is quite in keeping with one of the most fundamental tenets of James's literary credo, i.e. that "his characters can only reach the ultimate stage of consciousness when they have coupled aesthetic perception with ethical perception".

Serena Merle

A manifestation of the forces of evil, which in The Portrait are embodied by expatriated or hybrid Americans, Madame Merle is the arch betrayer, a born plotter, who loves handling everyone. The pair composed by Isabel/Madame Merle (heroine/anti-heroine) is underlain by the well-known opposition in American fiction between an innocent heroine, the Fair Maiden, and a worldly woman, the Dark Lady : the former stands for American innocence and the latter for European experience. This conventional moral color-scheme is an integral part of James's deepest symbolism and it crops up repeatedly in his characterization : like the blackbird (merle in French) from which she takes her name (by the way, note the irony of Ralph's words on p.
251 : "On the character of everyone else you may find some little black speck... For my own, I'm spotted like a leopard. But on Madame Merle's nothing, nothing, nothing !"), she is crafty and capable of viciousness. Even if Isabel sometimes entertains doubts as to the true nature of her lady-friend - "She liked her as much as ever, but there was a corner of the curtain that never was lifted" (324) - and even realizes that "she [Serena Merle] belonged to the 'old, old' world, and never lost the impression that she was the product of a different moral or social clime from her own, that she had grown under other stars. She believed then that at bottom she had a different morality," 324 (this is the heart of the matter, i.e. of the international motif), she won't see beyond appearances and will continue to confide in Mme Merle ("it was as if she had given to a comparative stranger the key to her cabinet of jewels" 187) until she is betrayed. In "the social battle" (401) that the novel depicts - « le choc du pot de fer contre le pot de terre », the clash of the iron pot against the earthen pot, to use a French phrase (cf. 192 : "there are many more iron pots certainly than porcelain") - Isabel Archer, the weaker vessel, who is in a way incomplete (she has much to learn), is no match for Serena Merle, who is "too complete" (251) and a formidable opponent : "That personage was armed at all points ; it was a pleasure to see a character so completely equipped for the social battle" (188). She, unlike the others, who are often mere spectators, will take an active part in engineering the show, her two main motives being, of course, money (like Kate Croy, her counterpart in The Wings of the Dove, she is "a loyal apostle to money" : "the idea of a distribution of property...irritated her with a sense of exclusion", 207 ; cf. her admission to Isabel : "I wish you had a little money" 203) and social ambition : "She was always plain Madame Merle" (252). However, beware of blackening S.
Merle more than she deserves ; the character is a villainess, but she also has extenuating circumstances ; she is also, in a way, a victim of Osmond's : "You've not only dried up my tears ; you've dried up my soul" (p. 522).

Ralph Touchett

With Osmond and Serena Merle, Ralph is a major force shaping and coloring Isabel's life and lending the final tones to her portrait. An expatriated American, who like his father can be said to "be living in England assimilated yet unconverted" (39), Ralph is defined by Lord Warburton - half in jest, half in earnest - as "a regular cynic. He doesn't seem to believe in anything" (10). There is a modicum of truth in this judgment, as witness Ralph's partiality for/indulgence in paradoxes : "The increasing seriousness of things, then - that's the great opportunity of jokes" (12). Several features stand out in his delineation :
- Cleverness : "clever" 243 ; "a clever man" 394 ; hence his tendency to manipulate people, e.g. his father (see Ralph's own admission on p. 186 : "But it's scandalous, the way I've taken advantage of you !") and his cousin. Even though altruism is not totally absent from his motivations (as an "apostle of freedom" [462] he urges Isabel to "Spread your wings" [222]), they reveal a good deal of selfishness. Ralph is consciously taking chances, cf. the recurrence of the word "calculations", as on p. 338 : "his calculations had been false and the person in the world in whom he was most interested was lost". The source of his cynicism and dilettantism is, of course, his state of health.
- Health → "the state of his health had seemed not a limitation, but a kind of intellectual advantage ; it absolved him from all professional and official emotions and left him the luxury of being exclusively personal" (337).
His illness leads him to live by proxy and to see life and the world as a stage from which he is excluded : he is restricted to mere spectatorship → "what's the use of being ill and disabled and restricted to mere spectatorship at the game of life if I can't really see the show when I have paid so much for my ticket ?" [148]. This theatrical metaphor is a leitmotif with Ralph, cf. : "There will be plenty of spectators" (149) ; "he wanted to see what she would make of her husband - or what her husband would make of her. This was only the first act of the drama, and he was determined to sit out the performance" (395). The motif of vision is closely bound up with theatricality : Ralph, like his cousin and most of the other characters in the novel, wants to "see" : "I shall have the thrill of seeing what a young lady does who won't marry Lord Warburton" (149).
- Wealth : his wealth will enable him to fulfil the requirements of his imagination (cf. his own definition : "I call people rich when they're able to meet the requirements of their imagination", 183) and "to put a little wind in his cousin's sails" (182), i.e. endow Isabel with the wherewithal to carry out her plans.
It follows from the above features that Ralph is in certain respects a flawed character, and indeed he and Isabel share some of the same shortcomings (e.g. presumptuousness), just as he shares with Osmond the fond delusion that "life was a matter of connoisseurship... She [Isabel] trusted she should learn in time" (262). Although he loves Isabel more intensely than any other character, he plays a major rôle in her downfall. He urges his father to "put money in her purse" because he should like to see her "going before the breeze", without realizing that she may be driven out to sea, as Henrietta fears. When Mr. Touchett remarks that "You speak as if it were for your mere amusement", Ralph answers bluntly, "So it is, a good deal".
Such an idea strikes the old man as immoral, and he predicts that Isabel "may fall a victim to the fortune-hunters" (186). "That's a risk," Ralph replies, "and it has entered into my calculations. I think it appreciable, but I think it's small, and I'm prepared to take it" (ibid.). The gross presumptuousness with which Ralph imagines he can steer Isabel's course is made keenly apparent when he concludes the interview with the statement that "I shall get the good of having met the requirements of my imagination" (186). Even if it is love for his cousin which plants in him the desire to play Pygmalion to Isabel's Galatea, the motive of selfishness (his desire to meet the requirements of his imagination whatever the cost) cannot be overlooked. It is this hint of egotism which links Ralph with Mme Merle and Osmond. However, there is a redeeming feature in the portrait of Ralph Touchett : his faithful love is to prove the key to Isabel's salvation. In returning to Gardencourt, Isabel is eventually returning to life, for she has achieved through suffering a knowledge of life (remember the "ghost"), and more important still, of truth : "the knowledge that they were looking at the truth together" (576).

Gilbert Osmond

Another "clever" character, only this time to the power of three ("clever" (243) ; "one of the cleverest men", 244/272 ; "awfully clever", 297), and a key element in the confrontation between New World innocence and Old World corruption : "You [Isabel] are remarkably fresh and I'm remarkably well-seasoned" 352. Osmond is a compendium of the worst traits all the expatriate characters share among themselves :
- he is "small, narrow, selfish" (345) ; "his egotism lay hidden like a serpent in a bank of flowers," 430 ;
- "very perverse" (236) ; "a streak of perversity" (507) ; "his faculty of making everything wither that he touched, spoiling everything that he looked at" (424) ; cf. Madame Merle → "You're very bad" (523).
The description of his surroundings leaves no doubt as to his evil nature → "It was the house of darkness, the house of dumbness, the house of suffocation" (429) ;
- "an original without being an eccentric" 261 ;
- "a sterile dilettante" (345) → "he was the elegant complicated medal struck off for a special occasion" (228) ; "Osmond was a specimen apart" (261).
Osmond, as a villain ("to prove him a villain" 343), is an embodiment of the worst combination in James's fictitious world : "If he had English blood in his veins it had probably received some French or Italian commixture" (228) ; "He's a vague, unexplained American who has been living these thirty years, or less, in Italy" (249). As for his favourite subjects - Machiavelli, Vittoria Colonna, Metastasio (259) - they leave no doubt as to Osmond's true nature. Imbued with a sense of his own importance ("It implied a sovereign contempt for everyone but some three or four very exalted people whom he envied, and for everything in the world but half a dozen ideas of his own" 430) and superiority ("as if he were made of some superior clay" 272), Osmond is a slave to decorum ("what a worship I have for propriety" 312) and propriety : "under the guise of caring only for intrinsic values Osmond lived exclusively for the world....Everything he did was pose" (394) ; cf. also : "His ideal was a conception of high prosperity and propriety" 431 ; "he was fond of the old, the consecrated, the transmitted" (431). Isabel is impressed by "Osmond's artistic, plastic view of Pansy's innocence" (353), yet all this emphasis on form and propriety is but a façade : "He always had an eye to effect, and his effects were deeply calculated. They were produced by no vulgar means, but the motive was as vulgar as the art was great" (393).
Note an anticipation of Isabel's fate in his tenet : "It's one's own fault if one isn't happy" (266), and the irony : Isabel is to him a collector's piece, if not a commodity, "a young lady who had qualified herself to figure in his collection" (304) → "he would have liked her to have nothing of her own but her pretty appearances" 428 ; "He wished her to have no freedom of mind", 462.

Caspar Goodwood

Main semes in the delineation of the character : "he expressed for her an energy that was of his very nature" 114 ; "nothing cottony about him" 116 ; "a mover of men" ; "honesty", 494 ; "the most modern man in the world", 505 ; "something really formidable in his resolution", 587 ; "firm as a rock", 590. Main failing → "a want of easy consonance with the deeper rhythms of life" 116 → "he was naturally plated and steeled, armed essentially for aggression", 155. What such a cursory presentation of the main characters tends to obliterate is that much of their significance stems not so much from their individual personalities as from their mutual relationships and the influence they exert upon each other ; actually each character is what he becomes through his contacts with the others : The Portrait is a drama of mutual initiation. For Isabel, it takes the form of an education of her conscience, since she is faced with moral choices ; her trip to Europe is also an initiation, Columbus's journey in reverse, "the exposure of American innocence to a knowing Europe". Isabel is introduced to the intricacies of European social life and organization, but to a certain extent her initiation also takes the form of an "education of the eye" (R. W. Emerson), an artistic, sentimental and above all moral education. Like Milly Theale, her opposite number in The Wings of the Dove, Isabel learns "the art of seeing things as they were" (The Wings).
The Portrait of a Lady is of course a portrait, a novel of characters, but it is also a novel in which relations are the centre ; it is a "novel of relations", i.e. a novel whose characters act upon and react to one another (this is to be linked with James's compositional credo : "to treat a subject is to exhibit relations").

SYMBOLS & METAPHORS

Two essential devices in James's poetics and the most obvious by-products of the writer's traffic with language. Metaphors and symbols highlight the verbal nature of the text → a text is a discursive construct, an artefact made of words. However, if a writer starts from scratch, words don't ; they have a life of their own, a memory, a history, and their integration within a sentence may bring to life long-forgotten or unsuspected meanings. In other words, the writer's medium is never virgin, yet the aim of a writer is to turn to personal use this collective material called language ("To have style is to speak the language of everybody as nobody else does", or to put it differently, style is "an individual linguistic deviation from the general norm") ; hence the creation of various figures of speech (among which metaphor and metonymy, two processes standing at the heart of verbal activity), images, similes, symbols, etc., resulting in the emergence of a personal style (i.e. "a selection made within a linguistic repertoire, triggered but not determined by the subject of the work, and constitutive of that work" → the imaginative writer creates what he describes ; "style is not a decorative embellishment upon subject matter, but the very medium in which the subject is turned into art" (D. Lodge in Language of Fiction)). Hence R. Barthes's challenging definition of a writer : "A writer is not someone who expresses his thought, his passion or his imagination in sentences, but someone who thinks sentences."
However thought-provoking it may be, Barthes's statement deserves qualification : style is not just a question of technique (i.e. it is not reducible to an inventory of various devices and features) but a question of metaphysics, as J.-P. Sartre stated. Without going that far, one might say that style is a quality of vision (cf. infra) and the "way a writer is able to communicate to the reader the quality of his mind" (S. Foote). A few quotations as food for thought :
- Metaphors
"The greatest merit is to be a master of metaphor... For a true metaphor implies the intuitive perception of the similarity in dissimilar things" (R. Caillois).
"There is in every metaphor both the working of a resemblance and that of a difference, an attempt at assimilation and a resistance to that assimilation, without which there would be nothing but a sterile tautology" (G. Genette).
M. Proust (a contemporary of James) : "There is no fine style without metaphor : 'metaphor alone can give style a kind of eternity'. This is not, for him, a mere formal requirement, a point of aesthetic honour such as was cultivated by the adherents of the 'artistic style' and, more generally, by naive amateurs for whom the 'beauty of images' is the supreme value of literary writing. According to Proust, style is 'a question not of technique but of vision', and metaphor is the privileged expression of a profound vision : the vision which goes beyond appearances to reach the 'essence' of things" (G. Genette). Cf. also S. Foote : "Metaphor is not an ornament, but the instrument necessary to a restitution, through style, of the vision of essences" (Metaphor is not a fanciful embroidery of the facts. It is a way of experiencing the facts).
- Symbols
One of the five codes out of which - according to R. Barthes - a literary text (which is etymologically akin to "tissue, fabric, weft", etc.) is woven, viz. :
1. The proairetic code ("voix de l'empirie", the voice of empirics) : the code of actions → referring to all that happens/takes place in a text ;
2. The semic code ("voix de la personne", the voice of the person) : its objective is characterization, achieved through the distribution of "semes" throughout the text. Semes = traits, features, attributes whose sum - subsumed under a proper name - composes the image of a particular character. Bear in mind that "character is merely the term by which the reader alludes to the pseudo-objective image he composes of his responses to an author's verbal arrangements" (C. H. Rickword in D. Lodge, op. cit.) ;
3. The hermeneutic code ("voix de la vérité", the voice of truth) : devices for stimulating interest, curiosity or suspense, such as delays and gaps, which turn the reading process into a guessing game, an attempt to solve a riddle or a puzzle. The text suggests the existence of an enigma and delays its solution while, at the same time, it keeps promising an answer ;
4. The cultural code ("voix de la science", the voice of science) : a code of reference, a shared body of knowledge or Doxa = opinions, rationalizations, commonplaces, received ideas : "modest women blush", "boys will be boys". It refers to concepts, generalizations, principles governing our perception of the world, justifying our actions, attitudes or judgments. Its function is to naturalize what is in the main cultural : fragments of something that has always already been read, seen, done, experienced ;
5. The symbolic code ("champ symbolique", the symbolic field) : the code governing the production of symbolic meaning (a symbol is something that refers to or represents something else). But in R. Barthes's theory the notion has strong psychoanalytic undertones : symbolism arises as the result of an intrapsychic conflict between the repressing tendencies and the repressed... only what is repressed is symbolized. Repression triggers off a series of substitutions resulting from displacement and condensation, two key primary processes governing unconscious thinking → according to R.
Barthes, the way in which a text will regulate a series of antithetical terms. Note that in The Portrait of a Lady characterization (the semic code) heavily resorts to symbolism : the names of almost all the characters in the novel convey some degree of symbolical innuendo, and there seems to be a certain correspondence between a person's name and his/her character. Isabel (a variant of Elizabeth = "oath of God") Archer is obviously connected with Artemis, the archer goddess. Artemis, a Greek goddess, was a virgin huntress, associated with uncultivated places and wild animals. The Romans identified her with the Italian goddess Diana, 'chaste and fair', associated with wooded places, women and childbirth, and with the moon. Pansy is flowerlike ; Madame Merle combines the various connotations of a blackbird, a bird of bad omen, and a blackguard ; Henrietta Stackpole's Christian name is appropriately derived from the name of a man, and a critic has remarked - ironically but hardly irrelevantly - that Henrietta's utilitarian-sounding family name contains echoes of 'haystack' and 'flagpole'. Lord Warburton is associated with manly virtues, and at one point is directly compared to Mars ; Caspar Goodwood (a pretty obvious surname) is a sturdy, unbending, and pre-eminently good character. Gilbert (pledge + bright) Osmond (God + protection) bears a name that does justice to his high opinion of himself : a domestic god extending his nefarious protection to all the people around him. The Touchett family obviously presents particular difficulties with regard to such nominal symbolism, since their personalities are so strikingly different ; however, it has been suggested that there might be an implied emblematic significance of something like 'touching' for Mr. Touchett, of 'touchy' for his wife Lydia, and of 'touché' for Ralph, who becomes Isabel's 'touchstone' (etymologically the name Ralph associates two semes, "counsel + wolf" : he both advises and preys upon Isabel, i.e.
uses her to satisfy the requirements of his own imagination). Obvious conclusion → nothing in a novel can be wholly gratuitous or neutral. Now to revert to the first device, it is worthy of note that in his Preface to the novel James consistently resorts to metaphors to tell about his art and the composition of The Portrait. For instance, to evoke the process of literary inspiration, he depicts himself as awaiting "the ship of some right suggestion, of some better phrase, of the next happy twist of my subject, the next true touch for my canvas" (V), etc., a fundamental metaphor acting as a filter through which reality within the novel is viewed (e.g. Isabel is compared to a ship : cf. "I should like to put a little wind in her sails", 182. Note, by the way, that Milly Theale, the heroine of The Wings of the Dove, is likened to "a great new steamer" p. 81 ; pretty consistent, isn't it ?), as witness its recurrence throughout the text, e.g. :
Henrietta to Isabel : "Do you know where you're drifting ? [...] You're drifting to some great mistake" 165-166 ;
"She [Isabel] had started on an exploring expedition... she'll be steaming away again", 275 ;
"her [Isabel's] adventure wore already the changed, the seaward face of some romantic island from which, after feasting on purple grapes, she was putting off" 308 ;
"he [Ralph] drifted... like a rudderless vessel" 338 ;
"You seemed to me to be soaring far up in the blue - to be sailing in the bright light, over the heads of men.
Suddenly someone tosses a faded rosebud...and straight you drop to the ground" 344 The encounter between Caspar/Isabel : "it was like a collision between vessels in broad daylight...she had only wished to steer wide...to complete the metaphor" (emphasis mine) ; 485 "Isabel far afloat on a sea of wonder" ; 550 "The tide of her (Merle's) confidence ebbed, and she was able only to just glide into port, faintly grazing the bottom" ; 552 Caspar : "he let out sail" ; 587 "The world seemed to open out, to take the form of a mighty sea, where she [Isabel] floated in fathomless waters" ; 590, etc. A choice obviously determined by the image of life as a sea full of obstacles which has to be crossed (cf. James's definition of his main theme as "the conception of a certain young woman affronting her destiny", X), not to mention the various symbolic meanings attaching to the notion : the sea symbolizing the feminine principle and blind forces of chaos, the sea as source of all life, containing all potentials, etc. (just look up the term in any Dictionary of Symbols) Another fundamental metaphor is the Book/Sheet metaphor which can be seen as a sort of self-reflexive device pointing to the literariness/fictionality of The Portrait as a discursive construct, an artefact made of words and sometimes of other texts (intertextuality). Source of the metaphor : life/nature/world as a book  the book of life (cf. "I [Osmond] had been putting out my eyes over the book of life" 351). Osmond, Madame Merle are quite literate ; they can even read between the lines ; Isabel, despite her fondness for reading, is illiterate when it comes to deciphering the social code/text and interpreting the book of life (cf. "she [Isabel] had not read him [Osmond] right" 426  hence the need for "an education of the eye" as stated earlier in these notes). 
Another interesting use of the metaphor is that in James's fiction, women are consistently likened to books ; they form - as a character put it in The Wings - "a whole library of the unknown, the uncut" : L'assimilation de [l'héroïne de James] à une page - page que la lecture ne saurait épuiser - présente la situation fondatrice - le rapport homme-femme - comme une page encore mystérieuse qui appelle indéfiniment la glose, qui sollicite l'interprétation, qui provoque la réverbération : la romance est un texte à lire et à relire, toujours inachevé et toujours se faisant (C. Richard, DELTA). It has been pointed out that in most novels by James "the centers of perception are both acting in the drama and organizing their involvement in it into coherent artistic patterns ; they live their experience as if they were writing about it" (L. Bersani), hence the omnipresent literary metaphor turning the characters into books, pages or making them feel as if they were sentences, texts, parentheses, images, e.g. : "Isabel's written in a foreign tongue" (says Mr Ludlow) 31 ; "living as he (Ralph) now lived was like reading a good book in a poor translation" 40 "[Stackpole] was as crisp and new and comprehensive as a first issue before the folding" ; 84 Pansy = a sheet of blank paper ; 278  [Isabel] hoped so fair and smooth a page would be covered with an edifying text 279 "she [Countess Gemini] had been written over in a variety of hands, a number of unmistakable blots" 279 ; "Don't put us in a parenthesis - give us a chapter to ourselves" (Gilbert Osmond to Isabel) 307 ; "You express yourself like a sentence in a book" (Osmond to Merle), 525 ; "You're more like a copybook than I", 525 ; Gardencourt  "No chapter of the past was more perfectly irrecoverable", 497 ; etc. The preface also provides another fundamental metaphor i.e. one having to do with James's poetics, let's call it the architectural metaphor (the novel as construct cf.
"The house of fiction" : "The house of fiction has in short not one window, but a million [...] The spreading field, the human scene, is the 'choice of subject' ; the pierced aperture, either broad or balconied or slit-like and low-browed, is the 'literary form' ; but they are singly or together, as nothing without the posted presence of the watcher - without, in other words, the consciousness of the artist" IX) : "She (Isabel) lived at the window of her spirit", 541 ; "the truth of things...rose before her with a kind of architectural vastness", 560 ; "It (Osmond's house) was the house of darkness, the house of dumbness, the house of suffocation" 429 ; "when she saw this rigid system close about her, draped though it was in pictured tapestries, that sense of darkness and suffocation of which I have spoken took possession of her", 431 ; "He (Rosier) comes and looks at one's daughter as if she were a suite of apartments, etc." 490 ; etc. Art metaphor (Painting, Images, etc.)  life as art (one of Isabel's fond illusions she'll have to grow out of) : "the peculiarly English picture I have attempted to sketch" 6 ; "the weather had played all sorts of pictorial tricks" "a person [Isabel] who had just made her appearance in the ample doorway"  Isabel is framed like a painting 15 cf. 367 : "framed in the gilded doorway, she [Isabel] struck our young man as the picture of a gracious lady" ; Isabel = a Cimabue Madonna, 210 ; pictures shown by Ralph at Gardencourt 45 ; Lord W seen as a picture by Stackpole, 89 ; "she [Isabel] formed a graceful and harmonious image", 99 ; "Ralph had such a fanciful pictorial way of saying things..."
274 ; Osmond  "One ought to make one's life a work of art", 307 ; "her thoughts would have evoked a multitude of interesting pictures", 319 ; Pansy = an Infanta of Velazquez, 369 ; Osmond sees Isabel as "a young lady who had qualified herself to figure in his collection", 304 "Osmond had given her a sort of tableau of her position", 441 ; Osmond  "putting a thing into words - almost into pictures", 532 ; "he regarded his daughter as a precious work of art", 532 ; "She envied the security of valuable pieces which change by no hair's breadth, only grow in value while their owners lose inch by inch youth, happiness, beauty", 569. Theatre metaphor (a vestige/reminder of James's long flirtation with the stage ; H. James : a failed playwright turned successful novelist  failure in one field of endeavour used as springboard for success in another - any relevance to Isabel Archer ?). Source of the metaphor  Society as a stage, people as dramatis personae, life as tragedy or comedy, truth vs. appearance, showing vs. telling, seeing vs. action, active vs. passive, the unconscious as "der andere Schauplatz" (cf. good old Sigmund et alii), and the whole caboodle : "for him [R] she would always wear a mask", 392 ; "her mask had dropped for an instant", 466 ; "Ralph restricted to mere spectatorship at the game of life", 148 ; "I shall have the thrill of seeing what a young lady does who won't marry Lord W", 149 ; "There will be plenty of spectators", 149 ; "This was only the first act of the drama, and he [Ralph] was determined to sit out the performance", 395 [about company at Isabel's home] "They are part of the comedy. You others are spectators" "The tragedy then if you like. You're all looking at me", 501. The five metaphors we have just mentioned invite us to discuss life and art, life as art, life vs. art or the relation of art to life which is of paramount importance in James's poetics. H.
James stated once that "the only reason for the existence of the novel is that it should attempt to represent life", however such a profession of faith does not automatically turn the author of The Portrait of a Lady into a realistic writer for at least two simple reasons : firstly, the function of the novel is not just to "represent life", it aims also at "lending composition to the splendid waste that is life" ; secondly, James consistently sees the real world through the prism of art. Reality is rarely described in itself and for itself ; it is systematically rendered or better still filtered through some key metaphors pertaining to the world of art viz., embroidery and/or tapestry-weaving, painting, literature and the theatre, etc., and through such metaphors James hoped to find new ways to enable his readers "to read with the senses as well as with the reason". As stated above, James declared men represented the canvas and women the embroidery ; now this reference to embroidery recurs throughout his work and its significance is essential to an understanding of James's work and conception of art all the more so as "text" originally means cloth/"tissu" and that, as R. Barthes put it, "le texte se fait, se travaille dans un entrelacs perpétuel" (Le Plaisir du texte). It's also interesting to note that just like lace-making, James's texts are as often as not formed around a central blank, a void : cf. his statement : "all my reserves are blanks". What James's indirection often skirts round is something much like emptiness ; although he stated in the Preface to The Portrait that "the novel is of its very nature an 'ado', an ado about something, and the larger the form it takes the greater of course the ado" (XI), James's novels evinced over the years a marked tendency to become "an ado about nothing", hence the characteristic dialectics of presence and absence at work in James's style (cf. 
a characteristic example in The Wings of the Dove : "the subject was made present to them [...] only by the intensity with which it mutely expressed its absence" ; in James's fictitious world, reality becomes not what is fully present to us but what is absent) and the peculiar quality of his conversations that are just « de vastes dispositifs énigmatiques construits à partir du vide, du rien : non-dit ou dit à demi-mot, flottaison des signifiants vagues, toujours sur le point de basculer dans le silence » (N. Blake) : Dans l'oeuvre de James, nous trouvons constamment un double mouvement : exposer pour refouler, montrer pour cacher, développer et interroger pour, en fin de compte, ignorer. Les métaphores familières de l'auteur, celles de la tapisserie ou de la broderie sont là pour souligner ce trait de son travail. Embroidery and weaving are thus archetypal symbols for the nature of the literary text and the process of fiction-writing. The theatrical metaphor also accounts for some of James's idiosyncrasies as a stylist. James, who would have "loved to be popular", turned to the theatre in the early nineties to achieve this aim. His popularity as a fiction-writer had declined at the time and he felt that a great theatrical success would restore it. He enjoyed the problems of the dramatic form, and his play The American had a moderate success ; but the cold reception of Guy Domville (another play) made him decide to put an end to his unrequited flirtation with the drama. However, James returned to novel-writing with a new technique which was largely influenced by his work for the theatre as witness his wish to "produce a novel all dramatic, all scenic" (cf. notes on "showing" and "telling" in last lecture on James's style and "écriture"). As for the painterly metaphor, it also is of paramount importance ; it must be borne in mind that James went so far as to state that "a psychological reason is to my imagination an object adorably pictorial".
The novel under discussion being, among other things, a portrait study of Isabel Archer, it is no wonder that the heroine's progress is punctuated with references and allusions to various pictures. Lastly, the literary metaphor at work in James's novels is even more important than the pictorial metaphor ; James had one of his characters declare that "la littérature, la vie : c'est tout un", a contention borne out by much of what is depicted in The Portrait and above all Isabel's tendency to perceive reality through the prism of fiction or better still romance. However sketchy such observations may be, what they point to is that in James's world, life and art are both antagonistic and complementary notions. The relationship between the two notions is all the more complex as it seems that the frustration of life is in James's fiction one of the conditions of success in art. However paradoxical it may seem, James always insisted on the moral beauties of failure and the crudity of worldly success ; he had one of his characters, Henry St George, who refused the price of dedication to art, declare : "I've had everything. In other words, I've missed everything", and Isabel, who likewise has everything, runs the risk of missing everything. In her eagerness to "drain the cup of experience" and to launch out into "the free exploration of life" [...] Flight/Wings metaphor (cf. Archer, a symbol of ascent) : "her [Isabel's] poor winged spirit", 405 ; "He [Caspar] had never supposed she hadn't wings and the need of beautiful free movements" 161 ; Ralph  Isabel : "Spread your wings", 222 ; "the terrible human past was heavy to her but that of something altogether contemporary would suddenly give it wings", 287.
Light/Darkness metaphor (a variant of the rise and fall pattern : revelation, illumination / obfuscation ; desire/repression) : "if a certain light should dawn she could give herself completely" 53 ; "His kiss was like white lightning, a flash that spread, and spread again, and stayed ; and it was extraordinarily as if, while she took it, she felt each thing in his hard manhood that had least pleased her...justified of its intense identity and made one with this act of possession" 591. Military metaphor ( society seen as a battlefield where only the fittest survive) : "she [Isabel] had spent much ingenuity in training [her mind] to a military step and teaching it to advance, to halt, retreat, to perform even more complicated manoeuvres", 25 ; "That personage [Mme Merle] was armed at all points ; it was a pleasure to see a character so completely equipped for the social battle. She carried her flag discreetly, but her weapons were polished steel and she used them with a skill which struck Isabel as more and more that of a veteran." (401). Money metaphor : "for advice read cash", 365 ; "At bottom her money had been a burden", 427 ; "Lord Warburton's friendship was like having a large balance at the bank", 440 ; "an account to settle with Caspar", 486 ; "Caspar would make it out as over a falsified balance-sheet...Deep in her breast she believed that he had invested his all in her happiness, while the others had invested only a part", 487 ; (Isabel) "missing those enrichments of consciousness", 571. "Having" and "being" thus form a fundamental antithesis enabling one to categorize Jamesian characters into two classes i.e. those who, like Isabel, have everything but feel they are not complete, and strive to reach a higher state of being (Archer), and those whose sense of being is undermined by the painful awareness of not having enough e.g. Madame Merle.
So money is the controlling force in the novel, and the acquisitive drive is the motive power, the primum mobile in the society depicted by a novel which, among other things, is a reflection on the notion of "value", for "le déchirement de la conscience moderne naît pour James de la permanence du Beau dans un monde aliéné par l'argent" (L'Arc), the supreme value endowed with permanence in a fleeting, transitory world. However, the status of money in James's fiction is ambiguous if not paradoxical because if acquiring money is morally despicable, the possession of it is the requisite for the good life that all Jamesian characters strive for, but money destroys those who are associated with it, for possession not only takes away the charm but is always impure. Be that as it may, in The Portrait, « l'arithmétique d'une économie monétaire contamine les domaines intérieurs » (L'Arc) : feelings are expressed in financial or monetary terms reminiscent of the notions of "emotional currency" or even "emotional bankruptcy" used by W. Faulkner and F. S. Fitzgerald in their own denunciation of the role of money and of business ethics in the society of their time. Likewise in James's fiction, « l'argent sert à consacrer la possession » (Cahiers Cistre) and above all the possession of woman who is seen as a commodity : woman is an object and, as in ancient Rome, marriage (Latin : coemptio) is tantamount to an act of selling (emptio) : love is replaced by "le commerce amoureux", a sort of "emotional bargaining". Thus the narrative is punctuated by a series of deals and bargains that the different characters make with each other. An interesting aspect of the question of money is that the two female deuteragonists also embody opposite notions of value : if Madame Merle stands for "market value", Isabel at the end of her initiation (when she makes the superior choice of renunciation) comes to represent "the real thing" i. e. 
a kind of value based on what one might call in French, "l'économie du don" i.e. "du don de soi/oblativité" (self-sacrifice, self-denial or abnegation). In the final stage of her evolution, Isabel Archer shares with Milly Theale "the imagination of expenditure" (The Wings of the Dove) in the figurative sense. Possession assuming two forms - it is either financial or sexual - this leads us now to the question of sex in The Portrait.
Sex
Though there is no direct reference to sex, it is omnipresent in James's fiction (« James ne couche les femmes que sur le papier » as some critic, who shall be nameless, humorously put it), but inasmuch as "the names of things, the verbal terms of intercourse, [are], compared with love itself, horribly vulgar" (H. James), « l'acte sexuel se dit [...] presque exclusivement par détours [...] Dans ces romans si verbeux, tout se dit en fin de compte en silence. Le roman jamesien, c'est le triomphe du non-dit » (Cistre) i.e. the untold and "obliquity". Cf. following metaphors in The Portrait : Book metaphor  Pansy = "a sheet of blank paper", 278. Pansy = "unspotted nature, a blank page, a pure white surface, successfully kept so", 315 ("If he were not my papa, I should like to marry him" 316) "[Isabel] hoped so fair and smooth a page would be covered with an edifying text", 279 ; "she [Countess Gemini] had been written over in a variety of hands, a number of unmistakable blots", 279 ; that one is a beauty !  Is novel-writing a surrogate for...a more strenuous and dangerous activity ? Cf. ironic sexual overtones of Osmond's description of what Isabel's life would have been like had she married Caspar : It would have been an excellent thing, like living under some tall belfry which would strike all the hours and make a queer vibration in the upper air.
He declared he liked to talk with the Great Goodwood ; it wasn't easy at first, you had to climb up an interminable steep staircase, up to the top of the tower ; but when you got there you had a big view and felt a little fresh breeze (495). Caspar's words : « I can't understand, I can't penetrate you ! » 511 ; Osmond  "We're as united as the candlestick and the snuffers" 505 ; etc. This is the crux of the matter : sex in James's universe is not something stated but rather something understood. But the fact that love is never dealt with directly is more than compensated for by an unmistakable "érotisation du discours" : in James's fiction, sex never pertains to "showing" but to "telling" obliquely, and discourse becomes, so to speak, a substitute for sexual intercourse (cf. in The Wings : "the verbal terms of intercourse"). Sexual intercourse does take place between Isabel and Osmond, but as usual with James the satisfaction of desire involves an immediate penalty : « tout succès charnel aboutit en effet à une dissipation de conscience, tandis que tout échec accepté et surmonté, toute absence reconnue et explorée, tous les états de manque cultivés mènent à une intensité prolongée : telle est la formule du monde jamesien » (J. J. Mayoux : this, by the way, is one of the most penetrating observations ever made about James's fictitious world ; cf. also Freud's opinion that « la frustration est la seule mesure éducative »). James's main subject is thus the growth, the emergence of conscience in an individual, and conscience, as we have seen, evolves out of suffering and deprivation. His realism is therefore one of analysis rather than of documentation, focused on psychological portrayal and human intercourse rather than sociological reporting and social scope. 
Through his emphasis on principles of composition and his technical mastery of point of view, James perfected the form of the novel and gave it a consistency and a tightness unequalled in European and Russian fiction. Together with "The Art of Fiction," the Notebooks which he kept all his life and the prefaces which he added to his novels in his last decade form a kind of manifesto of his art. His influence in the history of the novel is thus unquestionable : he served as a bridge between America and Europe, between the past and the present, between the 19th century traditional novel of plot and the 20th century novel of consciousness. No one surpassed him in his time and the title of one of his short stories fittingly characterizes his literary legacy - "The Lesson of the Master."
3. M. Gresset, "La Tyrannie du regard ou la relation absolue", Thèse de doctorat (Paris III), 1976, 610.
4. Love and Death in the American Novel, Harmondsworth, Penguin Books, 1984, 301.
[...] all the seductions and repulsions of proximity, of interiority, even of promiscuity. This is the place and the moment to evoke, for instance, the protagonist of Despair (a novel by V. Nabokov published in French as La Méprise), who wanders through the streets of Perpignan "crushed by the southern crowd" and overwhelmed by "rich nauseating smells" (216). A social and bodily intimacy (bound up with contact, touch and smell), but also a cultural one: it then takes the form of the palpable presence of art and of the past, a point to which I shall return. For the American psyche, the Mediterranean also embodies the possibility of leaving behind the reassuring, because long familiar, tyranny of the geometric, linear and mechanical mode, so as to expose oneself to the risk of the chaotic, the amorphous, the tortuous and, as a final stage, the instinctual.
What then offers itself is the possibility of regressing towards the first cogito, made up of sense perceptions, the radical and lost cogito 5, prior to the thinking subject - the ante-subject - which makes the environment the direct object of the verbs to smell, to taste, to touch, to hear, and substitutes for the formula the world is my representation "the world is my appetite" (G. Bachelard). The Mediterranean thus becomes the ground of a sensuality that is permitted, or rather reconquered, and of a certain hedonism: there begins, or there is perfected, the education of the senses, notably taste and smell, through an initiation into the most diverse savours and scents, the most ordinary as well as the most subtle and the deepest, those "touching on the dead and their decay" (Serres). [...] Hence an awakening of the senses and of sensations that will pass from earthly nourishment to spiritual nourishment, from the food of art to the cuisine of meaning: the American then opens up to a complex and disconcerting world of codes, rituals and meanings of a social or cultural order, to the necessity of making subtle and tenuous distinctions or discriminations, the first condition of that aisthêsis - the faculty of judging - which is, let us not forget, indissolubly linked to the Mediterranean. The philosopher Luc Ferry reminds us that historically "it is first in Italy and in Spain that the term taste acquires pertinence in designating a new faculty, empowered to distinguish the beautiful from the ugly and to apprehend through immediate feeling (aisthêsis) the rules of such a separation - of this Krisis 6". Thus, for the Puritan mentality and the American imagination, the Mediterranean will represent now a place of perdition and damnation, now, on the contrary, "an elsewhere as if saved from the Fall and relieved of guilt [...] where the instinctual can run free without coming into conflict with the cultural 7".
In the latter case, the Mediterranean finds itself placed under the sign of Pan (the holiness of original wildness) and of Dionysus (the orgiastic abolition of legal order), and symbolizes either the lifting of prohibitions or the irruption of an Id, of what American culture has repressed and which continues to make its dynamism felt, often managing to break through the defences that a Puritan Superego opposes to the American soul, divided, according to D. H. Lawrence, between innocence and desire, the spiritual and the sensual. This general frame having been established, we must now narrow our field of investigation to the borders of Italy [...] What do Italy and its two beacon cities, Rome and Venice, embody for the American imagination? Another complex question to which various answers will be given depending on the authors consulted; yet whether one considers N. Hawthorne, whose The Marble Faun inaugurated in 1860 the motif of the encounter, if not the confrontation, between an American artist and old Europe, symbolized by Italy, H. James, his continuator, who in Roderick Hudson (1875), The Portrait of a Lady (1881) or The Wings of the Dove (1909) broadened the "international theme" and brought it to its highest degree of perfection, or their distant successors and our contemporaries, Ernest Hemingway, Francis Scott Fitzgerald, William Styron, John Cheever and Elizabeth Spencer, three fundamental constants will appear: a schema - that of the journey from West to East -, a dominant tonality - ambivalence -, and a metaphor coined by H. James: that of the "banquet of initiation", which puts us in direct touch with the theme of our conference.
5. G. Bachelard, La Poétique de la rêverie, Paris, PUF, 1960, 125.
6. Homo aestheticus, Paris, Grasset, 1990, 27.
7. M. Gresset, 610-612.
In the works of the authors just mentioned, Italy represents, set against the founding East-to-West tropism of the New World, the endpoint of a pilgrimage backwards through time and space, where the American, that new Adam, will take part, to complete his education, in an initiatory ritual (one always implying the symbolic sequence birth-death-rebirth/resurrection) with multiple stakes. He will thus have the opportunity to confront the innocence and naivety which are in a sense the natural prerogative of homo americanus with the subtle conventions and sometimes corrupt mores of a European society perceived as the antithesis of American society: a symbolic barter of innocence for the experience the character lacks (or, in the tragicomic variant, a fool's bargain in which the American may lose his precious innocence without thereby acquiring experience). He may also - as in The Marble Faun - drink from "the silver cup of initiation" (RH, XIX) containing "the wine of the Golden Age" (MF, 164), that era blessed by the gods, "before mankind was burdened with sin and sorrow" (MF, 67), and, in the cases most disconcerting for a mind tinged with rigorism if not Puritanism, serve his apprenticeship in the liberating virtue of sin. James evokes in a striking image - that of the "whip in the sky" (RH, XVII) - the persistent fear that darkens the mood and conscience of the New England Puritan ("the suspended fear in the old, the abiding Puritan conscience", RH, XVIII); now this threat seems absent from the skies of Italy, and so, on discovering the Eternal City, the American feels the sensation of being alive: "At last - for the first time - I live!" he exclaims.
This initiation into the joys of existence is often doubled by a revelation of the ineffable pleasures of culture, art and aesthetics. By contrast with the United States, where Hawthorne complained that there was neither shadow, nor antiquity, nor mystery, and that conditions were hardly favourable to artistic creation - "poetry, romance, ivy, lichens and wallflowers need ruins to grow", he wrote - Italy naturally embodies the homeland of the arts. The same attitude is found in H. James, some of whose heroes, like Theobald in the tale "The Madonna of the Future" (1879), perceiving themselves as "the disinherited of Art" in their native country, with its silent past and deafening present ("our silent past, our deafening present"), travel to Italy ("an immemorial, a complex and accumulated civilisation", RH, 247) to taste the pleasures of acculturation and of the accumulation of monuments, works of art and historical remains ("superpositions of history", RH, 69), and to gain admission to the "magic circle" of art and culture. A ritual not without danger for the novice, who sometimes pays for this initiation with a kind of withering of his creative faculties, a form of impotence before the crushing talent and unsurpassable example of his predecessors. It sometimes happens that the banquet of initiation to which Italy invites the American gives him access to realities less smiling than the picturesque or what Americans call romance (say, the romantic and the romanesque): thus the revelation of art does not come without the painful awareness of what it is supposed to transcend, time and death, and the initiation into love always carries its counterpart of disappointments, betrayals and suffering.
In Italy, land of contrasts, paradoxes and contradictions, everything interpenetrates: innocence and corruption, love and death, nature and culture, order and chaos, civilization and barbarism, grandeur and [...]
9. P.-Y. Pétillon, L'Europe aux anciens parapets, Paris, Le Seuil, 1986, 193.
10. L. Ferry, Homo aestheticus, Paris, Grasset, 1990, 259.
- On the deleterious influence of Italy cf. p. 258 : "Italy all the same had spoiled a great many people."
- Motif of "the banquet" & "the silver cup of initiation" (Roderick Hudson) : According to Ralph  "You [Isabel] want to drain the cup of experience" (Portrait, 150, emphasis mine). cf. also : "Madame Merle continued to remark that even among the most classic sites, the scenes most calculated to suggest repose and reflection, a certain incoherence prevailed in her [Isabel]. Isabel travelled rapidly and recklessly ; she was like a thirsty person draining cup after cup". (323) ; "the dregs of the banquet" (571)
- Romance  "drinking deep, in secret, of romance, she [Isabel] was, etc." p. 321
- Initiation (i.e. Europe complicates consciousness) & ensuing disappointment : "she had had no personal acquaintance with wickedness" (519). In Italy, Madame Merle fills that gap, and Isabel gets a taste of "wickedness". After that shattering initiation, Rome feels a congenial place : "She had grown to think of it [Rome] chiefly as the place where people had suffered. [...] the marble columns, transferred from pagan ruins, seemed to offer her a companionship in endurance" (518).
[...] categories since in the Preface to the novel James makes a distinction between main characters and what he calls "smaller female fry" or "wheels to the coach" (e.g. H. Stackpole) i.e. a character who "neither belongs to the body of that vehicle, or is for a moment accommodated with a seat inside" (XV).
There are six women in The Portrait of a Lady (Mrs Touchett, Isabel Archer, Madame Merle, Henrietta Stackpole, Countess Gemini and Pansy) forming various couples pairing an elderly or middle-aged woman with a young one (Mrs Touchett/Isabel ; Madame Merle/Isabel ; Countess Gemini/Isabel ; Isabel/Pansy ; only exception Henrietta/Isabel) or, of course, a man and a woman (Mrs Touchett/Mr. Touchett ; Ralph/Isabel ; Lord Warburton/Isabel ; Caspar Goodwood/Isabel ; Osmond/Isabel ; Madame Merle/Osmond). 517  "What have you to do with me ?" Isabel went on. - "Everything !" she [Madame Merle] answered ; admission : "I'm wretched" (488).
Money ("Money is like a sixth sense without which you cannot make a complete use of the other five", an anonymous wit)
As a starting-point I'd like to quote M. Zéraffa's opinion that : « il est permis de représenter la pensée de James comme motivée et polarisée dans ses effets théoriques, techniques, poétiques et même scripturaux, par l'opposition ou par le dilemme, entre Avoir et Être ». Hence the importance of money in the universe of The Portrait as witness the number of references scattered throughout the text : Isabel  "few of the men she saw seemed worth a ruinous expenditure", 53 ; Madame Merle  "I wish you had a little money", 203 ; "She [Isabel] had not given her last shilling, sentimentally speaking", 224 ; "the element between them [Osmond/Madame Merle]" 241 ; money = Madame Merle's motive, 545 ; "Isabel wondered a little what was the nature of the tie binding these superior spirits"  the inducement of profit, 245 ; Passion  "It was there like a large sum stored in a bank - which there was a terror in having to begin to spend", 310. A key metaphor - feelings/love equated with money - that keeps cropping up in American fiction (Fitzgerald, Faulkner, etc. ;
see for instance in The Wild Palms the tell-tale statements : "I am still in the puberty of money" ; "I have repudiated money and hence love" ; "the value of love is the sum of what you have to pay for it and any time you get it cheap you have cheated yourself".). As French philosopher G. Bataille put it : « L'érotisme ressortit à la dilapidation [dépense d'énergie, de sentiments, de ressources, don de soi] ; l'union charnelle relève à la fois de la 'consommation' (oeuvre de chair) et de la 'consumation' (dépense) ; c'est un excès faisant pièce à l'avarice et au calcul froid de l'ordre réel. [....]. Dans l'érotisme 'JE me perds' ; c'est le déséquilibre dans lequel l'être se met lui-même en question. » Isabel's fighting shy of/shrinking back from sex, physical surrender, is akin to emotional miserliness (to coin a phrase !). [...]tle. "She carried her flag discreetly, but her weapons were polished steel and she used them with a skill which etc." (401). Unlike Isabel, Serena Merle can judge people (see the irony in her warning to Isabel : "One can't judge till one's forty" 188) ; this capacity is the result of experience. If Isabel is totally inexperienced in the field of social relations, Serena Merle has learnt her craft the hard way, "on the job" ; there's nothing theoretical about it ("That isn't - the knowledge I impute to you - a common sort of wisdom. You've gained it in the right way - experimentally" 239), hence her cold pragmatism : "I don't pretend to know what people are meant for, I only know what I can do with them" (240). She uses people like pawns on a chess-board and even if she evinces a certain fondness for Isabel, she considers her basically as an asset, "an investment" that is to bear interest in the form of Pansy's marriage to someone above her station i.e. Lord Warburton.
Madame Merle has been used and abused by the world as witness the image of the "chipped cup" she applies to herself with a wry sort of humour : "I've been shockingly chipped and cracked...cleverly mended...remain in the cupboard" (192 ; there are of course moral undertones to this comparison). Madame Merle is "in a word too perfectly the social animal that man and woman are supposed to have been intended to be [...] she existed only in her relations, direct or indirect, with her fellow mortals" (192). Hence the connotations of mystery ("She's too fond of mystery" 174), duplicity (403), cynicism (245) and calculation (268) attaching to her delineation. Setting great store by appearances ("she's always had a worship of appearances", 545), Madame Merle "lived entirely by reason and by wisdom" (401) (wisdom i.e. sight + insight). If, like all the other characters in the novel, Madame Merle "wants to see what life makes of you [Isabel]", Isabel is in danger of forgetting that « vivre, c'est ainsi [et aussi] se retenir de vivre pour conserver intacte l'imagination illimitée de la vie » (J.-J. Mayoux), which seems to be the fate of the typically Jamesian heroine. Lastly to come full circle and complete this short disquisition on the relation of art to life, I'd like to stress that the absolutely gratuitous gesture with which Isabel completes her initiation i.e. her resolve to return to Osmond, her renunciation, is a kind of "beau geste" : it is a transcendent gesture expressive of a lofty moral and aesthetic sensibility. It is necessary to stress the and, for in James's fiction the moral sensibility is never remote from the aesthetic ; in other words, the moral sense and the artistic sense lie very near together, hence James's definition of "taste as the active sense of life".
Eden/Garden/flower metaphor → her [Isabel] nature had...a certain garden-like quality, 53 ; Ed Rosier, 357 ; Pansy and Isabel → "the effect of one's carrying a nosegay composed all of the same flower" 406 ; "it's all pansies ; it must be hers", 436 ; "his [Osmond's] faculty of making everything wither that he touched, spoiling everything that he looked at" 424 ; "his [Osmond's] egotism lay hidden like a serpent in a bank of flowers", 430 ; [Merle wanted to be] a kind of full-blown lily - the incarnation of propriety 546
Wings → A leitmotif in The Wings of the Dove ; not so recurrent in The Portrait but noticeable all the same. The motif refers to idealism, transcendence (cf. Archer, a symbol of ascent. Bears out Jean-Jacques Mayoux's contention that James is « un mondain mystique » fascinated with "the poetic drama of the inner life of the soul" trying to rise to ideal heights) i.e. passage from one ontological plane to another, and finds expression in a pattern of rise (flight, floating, etc.) and fall (abyss, gulf, depths, etc.) : "[Isabel] a winged creature", 383 ;
R. Chase, The American Novel and its Tradition, Baltimore : The Johns Hopkins UP, 1957, 7. James published in 1883 a collection of essays entitled Portraits of Places. The Marble Faun, New York, The New American Library, 1961.
James's Art revisited
H. James was not only an outstanding practitioner of the novel but also a most perceptive theoretician of fiction-writing ; in his prefaces and studies, particularly The Art of Fiction, he's made numerous observations on the art of novel-writing and novelistic technique.
It should be borne in mind, however, that most prefaces were composed years after the publication of the corresponding works so that there is as often as not a discrepancy between the texts and the questions the Prefaces intend to throw some light on : to put it bluntly, theory is not always reflected in the author's actual practice and there may even be a wide gap between the novel as planned and the end-product, the final version of the text, as James himself explicitly acknowledges : "Yet one's plan, alas, is one thing and one's result another," which doesn't detract from the interest and value of the various prefaces but is a clear indication that they have to be taken with a pinch of salt... In the preface to The Portrait, as in The Art of Fiction, James resorts, to define his art, to various metaphors and images drawn from three essential fields : "architecture", "painting" and "the drama" ; to these must be added the notions of "indirection" and "reflection". The architectural metaphor ("the house of fiction") is strengthened by the allusion to the "use of windows and balconies" which is linked to the question of point of view since the novel aims to build, to borrow and adapt a phrase from The Wings, "the whole bright house of Isabel's exposure", in other words to evoke Isabel "through the successive windows of other people's interest in her". To achieve such an objective, James will resort to a principle of composition and exposition called "indirection" or "indirect approach" i.e. "all the events and actions are represented as they unfold before, and filter to the reader through, the particular consciousness of one of his characters" (M. H. Abrams). Which means that the author "presents the reader not with a narrator's objective account of the characters but with the characters' subjective and therefore partial, colored and often warped accounts of themselves" (R. C. McLean).
James dubbed "reflectors" such characters whose consciousness plays the rôle of a mirror ; hence an important stylistic consequence : the frequent use of free indirect speech with a marked effect of bivocality i.e. the voice of the narrator intermingles with the character's to express the thoughts of the reflector. Incidentally, it is worthy of note that the function of reflector is not limited to characters only : things, places, elements of the setting are also likely to serve this purpose and connote a situation, a state of mind, etc. : the description of the characters' various interiors (cf. Osmond's palazzo or Gardencourt) is a good example of such "characterization by environmental implication" (cf. "We're each of us made up of some cluster of appurtenances. What shall we call our 'self' ? Where does it begin ? where does it end ? It overflows into everything that belongs to us - and then flows back again", 201). Another consequence of James's circuitous approach "by narrowing circumvallations from an outer ring" is that it enables the author to study his object from various angles, to see the obverse and the reverse : Jamesian images "have sides and backs" as S. Gorley Putt convincingly pointed out : For James, the poetic imagination was to be very largely a matter of seeing things from both sides: from the early tales to the final Prefaces his writing is full of images invoking the obverse and reverse, the back and the front, the passive and the active, the efficient and the visionary, the romance and the disillusion. [...] That complete honesty of the double vision in James's work...helps to explain the tortuosities of the high style where he makes the reader dizzy by his conscientious efforts to be fair all round, to take every possible aspect into consideration.
The desire to do justice to the complex situations and motivations, as well as the dual nature of numerous characters and the diverse relations prevailing between them led James, according to the critic above-mentioned, to strike "the geminian notes of antithesis or parallel, dissonance or assonance, contradiction or compensation". No sooner is one term laid down than somewhere in the same sentence or paragraph there crops up its opposite (p. 7 : "It seemed to tell that he had been successful in life, yet it seemed to tell also that his success had not been exclusive and invidious, but had had much of the inoffensiveness of failure." ) ; the high rate of dual forms created by the bringing together of the two terms of a polarity makes for uncertainty and indecision, which proves that James aims not so much at realism as at "the intensity of an illusion". The rôle of James the artist is to make the reader see the multiple, the complex and not to impose unity of vision (Cf. what he wrote in the preface to What Maisie Knew : "The effort really to see and really to represent is no idle business in face of the constant force that makes for muddlement"). However such "double vision" is sometimes thwarted by an opposite principle which consists in not showing everything, in not telling everything. It has been repeatedly pointed out that James is an illusionist practising « l'art d'exposer pour refouler, de montrer pour cacher » or weaving his texts around numerous blanks or things untold : thus his texts very often hinge around « des pivots obscurs parce que non représentés » (such as the intimate relationship between Madame Merle and G. 
Osmond) so that : laissées à l'imagination du lecteur et parfois des protagonistes, ces ellipses qui trouent le récit de brusques suspensions et de silences se transforment peu à peu en d'invisibles mais inépuisables matrices de significations, analogues, par leur dialectique de la plénitude et du vide, aux abîmes dont les gouffres, les tourbillons et les naufrages de la préface offrent autant de réfractions mélodramatiques. (E. Labbé). Such ellipses force the reader to implement a virtue James deemed essential : "the power to guess the unseen from the seen, to trace the implication of things, to judge the whole piece by the pattern". The other two metaphors "the picture" and "the drama" represent "the two rival techniques of the novel" (L. Edel). "Picture" refers to "narrative from a point of view" and "drama" to "direct representation". To throw some more light on those two devices, one might say, following P. Lubbock's cue that "a scene is pictorially depicted when it is the reflection of events in the mirror of somebody's receptive consciousness". So the novel as picture is based on a central consciousness and a kind of inner soliloquy. "Picture" is in some respects equivalent to "sommaire"/summary i.e. « un raccourci de plusieurs moments tel qu'il s'effectue dans la conscience d'un personnage » (C. Verley). The second device consists in erasing all references to the narrative instance to give the floor to the character ( « discours immédiat, émancipé de tout patronage narratif »), hence James's own watchword in his Notebooks : "Dramatize, dramatize." Thus drama, as opposed to picture (a non-scenic rendering of some character's consciousness of a situation), renders scenically the character's speech and behaviour. This opposition parallels the one existing between the two modes of regulation of narrative distance i.e. "showing and telling", two novelistic strategies associated with the name of H. James. 
Actually the opposition is as old as the hills since Plato made a distinction between "diegesis" (i.e. pure narrative innocent of mimetic elements ; the poet speaks in his own words without trying to make the reader believe that someone else is uttering the words) and "mimesis" (the author tries to create the illusion that it is not he who speaks). From such a perspective, pure narrative is considered to be more distant from reality than imitation : it says less (condensation) in a more mediate way (indirection). Actually, this opposition, neutralized by Aristotle, reappeared in the theory of the novel in the late XIXth century with James and his disciples under the names of "showing" vs. "telling". However G. Genette rightly pointed out that the notion of "showing" is quite illusory inasmuch as no narrative whatsoever can show or imitate what it conveys : language signifies without imitating unless, of course, the narrated pertains to language (« La mimésis verbale ne peut être que mimésis du verbe » Genette). After experimenting with the theatre, H. James tried to dramatize the action as much as possible hence his emphasis on the notion of "showing" of which the two main characteristics are the predominance of scene (detailed narrative) and the transparency of the narrator (which in Genettian parlance results in the formula : showing implies the maximum of information and the minimum of informant). Consequently, the best narrative strategy for James is « un récit focalisé, raconté par un narrateur qui n'est pas l'un des personnages mais qui en adopte le point de vue » (Genette). Thus the reader sees the events as they are filtered through the consciousness of one of the characters, but s/he perceives it directly as it impinges upon that consciousness. By way of introduction to a reading of The Art of Fiction cf. the following excerpt from American Fiction : The Art of Fiction (1884) figures as the classic text on the realistic novel.
In this essay, James pleaded for his own brand of realism, very different from William Dean Howells', and one which contributed to giving American fiction its noblest coat of arms. For James the novel was to be concerned with impression - "a novel is in its broad definition, a personal, a direct impression of life" - and with experience, experience understood not as surface incident but as "an immense sensibility" : "Experience is never limited, and it is never complete ; it is an immense sensibility, a kind of huge spider-web of the finest silken threads suspended in the chamber of consciousness, and catching every air-borne particle in its tissue. It is the very atmosphere of the mind." With this seminal essay, James struck an altogether new note in the concert of realistic fiction and gave birth to a new type of novel called "the James novel." If the novel was to be "the art of representation," and if the novelist was to transcribe "experience," the stuff of fiction was not to be confined to mere external objective material, incident or plot in the traditional sense of the term, but was to concentrate on sensibility and perception. To the sensationalism dear to naturalistic fiction, James thus opposed the fascination for the enlarging consciousness and the excitement of penetrating insight. No need for the novelist to resort to violence to entice his readers' attention : the drama is played out in the mind, in the web of relationships established by consciousness : "It is an incident for a woman to stand up with her hand resting on a table and to look out at you in a certain way." Consciousness thus becomes the pivot of fiction for James, and his realism is ultimately that of the inner, subjective life, and not of the outer, objective life. As a consequence, character reigns supreme and prevails over incident, or rather, determines incident - "What is character but the determination of incident ? What is incident but the illustration of character ?
What is either a picture or a novel that is not of character ?" In order to present experience through a consciousness, James needed a central intelligence, one that would serve as reflector, or refractor - also called focus, mirror, reverberator - of the events and scenes represented. Hence he was brought to give form to a major innovation in the history of the novel, that of the mediation of an active consciousness between the story and the reader. In doing so he got rid of the foremost convention of the traditional novel, that of the authorial voice of fiction or omniscient narrator, through which all the events filter to the reader. As a consequence of this chosen center, the author is never allowed to intrude directly, even though his presence may be felt in an implicit counterpoise and through a subtle play on a dual consciousness. In The Ambassadors, James refined this narrative mode into an unprecedentedly complex structure of mirrors and ironic patterns. Clearly, the more finely aware the refractor, the more intense the refraction, which led James to select characters of unusual perceptive power and exceptional sensitivity, like Maisie in What Maisie Knew or Strether in The Ambassadors. Thus the restrictive point of view, far from being a limitation, becomes on the contrary the ground for an ever-widening enlargement of experience and a fascinating medium for the penetration of reality. James's stories embark the reader on an inner voyage, in which the narrative attention is drawn inward and which prefigures the later developments of the stream of consciousness technique.
https://hal.science/hal-00125398v3/file/1_over_f_noise_alkyl_on_Si_-_Clement_et_al_-PRB.pdf
Nicolas Clément, Stéphane Pleutin, Oliver Seitz, Stéphane Lenfant, Dominique Vuillaume (email: [email protected])

1/f Tunnel Current Noise through Si-bound Alkyl Monolayers

Keywords: Td ; 81.07.Nb

I. INTRODUCTION

Molecular electronics is a challenging area of research in physics and chemistry. Electronic transport in molecular junctions and devices has been widely studied from a static (dc) point of view. 1,2 More recently electron - molecular vibration interactions were investigated by inelastic electron tunneling spectroscopy. 3 In terms of the dynamics of a system, fluctuations and noise are ubiquitous physical phenomena. Noise is often composed of 1/f noise at low frequency and shot noise at high frequency. Although some theories about shot noise in molecular systems were proposed, 4 it is only recently that it was measured, in the case of a single D2 molecule. 5 Low frequency 1/f noise was studied in carbon nanotube transistors, 6 but, up to now, no study of the low frequency current noise in molecular junctions (e.g., electrode/short molecules/electrode) has been reported. Low frequency noise measurements in electronic devices usually can be interpreted in terms of defects and transport mechanisms. 7 While it is obvious that 1/f noise will be present in molecular monolayers as in almost any system, only a detailed study can lead to new insights in the transport mechanisms, defect characterization and coupling of molecules with electrodes. We report here the observation and detailed study of the 1/f^γ power spectrum of current noise through organic molecular junctions.
n-Si/C18H37/Al junctions were chosen for these experiments because of their very high quality, which allows reproducible and reliable measurements. 8 The noise current power spectra (S_I) are measured for different biases. Superimposed on the background noise, we observe noise bumps over a certain bias range and propose a model that includes trap-induced tunnel current, which satisfactorily describes the noise behaviour in our tunnel molecular junctions.

II. CURRENT-VOLTAGE EXPERIMENTS

Si-C linked alkyl monolayers were formed on Si(111) substrates (0.05-0.2 Ω.cm) by thermally induced hydrosilylation of alkenes with Si:H, as detailed elsewhere. 8,9 50 nm thick aluminium contact pads with different surface areas between 9x10^-4 cm^2 and 4x10^-2 cm^2 were deposited at 3 Å/s on top of the alkyl chains. The studied junction, n-Si/C18H37/Al, is shown in Fig. 1-a (inset). Figure 1-a shows typical current density-voltage (J-V) curves. We measured 13 devices with different pad areas. The maximum deviation of the current density between the devices is not more than half an order of magnitude. It is interesting to notice that although devices A and C have different contact pad areas (see figure caption), their J-V curves almost overlap. This confirms the high quality of the monolayer. 9 Figure 1-b shows a linear behaviour around zero bias and we deduce a surface-normalized conductance of about 2-3x10^-7 S.cm^-2. For most of the measured devices, the J-V curves diverge from that of device C at V > 0.4 V, with an increase of current that can reach an order of magnitude at 1 V (device B). Taking into account the difference of work functions between n-Si and Al, considering the level of doping in the Si substrate (resistivity ~ 0.1 Ω.cm), there will be an accumulation layer in the Si at V > -0.1 V.
[Sze, Physics of Semiconductor Devices] From capacitance-voltage (C-V) and conductance-frequency (G-f) measurements (not shown here), we confirmed this threshold value (± 0.1 V). As a consequence, for positive bias, we can neglect any large band bending in Si (no significant voltage drop in Si). The J-V characteristics are then calculated with the Tsu-Esaki formula [Tsu] that can be recovered from the tunnelling Hamiltonian. [Mahan, Many-Particle Physics] Assuming the monolayer to be in between two reservoirs of free quasi-electrons and the system to be invariant with respect to translation in the transverse directions (parallel to the electrode plates) we get

$$J(V) = \frac{e\, m\, k_B \Theta}{2\pi^2 \hbar^3} \int_0^{+\infty} dE\; T(E)\, \ln\!\left( \frac{1 + e^{\beta(\mu - E)}}{1 + e^{\beta(\mu - E - eV)}} \right) \qquad (1)$$

where e is the electron charge, m the effective mass of the charge carriers within the barrier, k_B the Boltzmann constant, ħ the reduced Planck constant, µ the Fermi level and β = 1/k_BΘ (Θ the temperature in K). T(E) is the transfer coefficient for quasi-electrons flowing through the tunnel barrier with longitudinal energy E. The total energy, E_T, of quasi-electrons is decomposed into a longitudinal and a transverse component, E_T = E + E_t ; E_t was integrated out in Eq. (1). The transfer coefficient is calculated for a given barrier height, Φ, and thickness, d, and shows two distinct parts: T(E) = T_1(E) + T_2(E). T_1(E) is the main contribution to T(E) that describes transmission through a defect-free barrier. T_2(E) contains perturbative corrections due to assisted tunnelling mechanisms induced by impurities located at or near the interfaces. The density of defects is assumed to be sufficiently low to consider the defects as independent from each other, each impurity at position r_i interacting with the incoming electrons via a strongly localized potential at energy U_i, U_i δ(r - r_i). The value of U_i is random.
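Equation (1) is straightforward to evaluate numerically. The sketch below is only an illustration: it replaces the Appelbaum-Brinkman transfer coefficient used in the text by a simple WKB estimate for a defect-free rectangular barrier (so it plays the role of T_1 only), with the barrier parameters quoted for the fits (Φ = 4.7 eV, d = 2 nm, m = 0.614 m_e); the choice µ = 0 is an added assumption.

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19      # C
me = 9.1093837015e-31    # kg
kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J*s

# Parameters quoted in the text for the fits
phi = 4.7 * e            # barrier height (J)
d = 2e-9                 # barrier thickness (m)
m = 0.614 * me           # effective mass within the barrier
mu = 0.0                 # Fermi level taken as energy origin (assumption)
theta = 300.0            # temperature (K)
beta = 1.0 / (kB * theta)

def T1(E):
    """Defect-free transmission: simple WKB estimate for a rectangular barrier,
    a stand-in for the Appelbaum-Brinkman calculation used in the paper."""
    kappa = np.sqrt(2.0 * m * np.clip(phi - E, 0.0, None)) / hbar
    return np.exp(-2.0 * kappa * d)

def J(V, nE=4000):
    """Tsu-Esaki current density, Eq. (1), by trapezoidal integration."""
    E = np.linspace(0.0, phi, nE)          # longitudinal energies (J)
    supply = np.log((1.0 + np.exp(beta * (mu - E))) /
                    (1.0 + np.exp(beta * (mu - E - e * V))))
    integrand = T1(E) * supply
    dE = E[1] - E[0]
    pref = e * m * kB * theta / (2.0 * np.pi**2 * hbar**3)
    return pref * np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dE  # A/m^2

for V in (0.1, 0.5, 1.0):
    print(f"V = {V:4.1f} V  ->  J = {J(V):.3e} A/m^2")
```

With these parameters the result lands within a few orders of magnitude of the measured surface-normalized conductance, which is about as much as a bare WKB sketch can be expected to do.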
We write $T_2(E) = \sum_{i=1}^{N_{imp}} T_2(E, U_i)$, with N_imp being the number of impurities and T_2(E, U_i) the part of the transmission coefficient due to the impurity i. The two contributions of T(E) are calculated following the method of Appelbaum and Brinkman. [Appelbaum] Using Eq. (1), we obtain a good agreement with experiments. The theoretical J-V characteristics for devices C and B are shown in Fig. 1-a. The best fits are obtained with Φ = 4.7 eV, m = 0.614 m_e (m_e is the electron mass), 10^10 traps/cm^2 uniformly distributed in energy for device C and additional 10^13 traps/cm^2 for device B distributed according to a Gaussian peaked at 3 eV. The transfer coefficients T_2(E, U_i) show pronounced quasi-resonances at energies depending on U_i that explain the important increase of current. The thickness is kept fixed, d = 2 nm (measured by ellipsometry 8).

III. NOISE BEHAVIOR

The differences observed in the J-V curves are well correlated with specific behaviours observed in the low frequency noise. Figure 2 shows the low frequency current noise power spectrum S_I for different bias voltages from 0.02 V to 0.9 V. All curves are almost parallel and follow a perfect 1/f^γ law with γ = 1 at low voltages, increasing up to 1.2 at 1 V. We could not observe the shot noise because the high gains necessary for the amplification of the low currents induce a cut-off frequency of our current preamplifier lower than the frequency of the 1/f - shot noise transition. At high currents, 1/f^γ noise was observed up to 10 kHz. The low frequency 1/f current noise usually scales as I^2, where I is the dc tunnel current, 14 as proposed for example by the standard phenomenological equation of Hooge 15, S_I = α_H I^2 / (N_c f), where N_c is the number of free carriers in the sample and α_H is a dimensionless constant frequently found to be 2x10^-3.
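As a back-of-the-envelope illustration of Hooge's relation, the snippet below evaluates the scaling; the values of I and N_c are hypothetical, chosen only to show the 1/f and I^2 dependences, and are not measured values from this work.

```python
def hooge_noise(I, Nc, f, alpha_H=2e-3):
    """Hooge's phenomenological relation S_I = alpha_H * I^2 / (Nc * f).
    Returns the current noise power spectral density in A^2/Hz."""
    return alpha_H * I**2 / (Nc * f)

# Hypothetical numbers, for illustration only
I = 1e-9      # 1 nA dc tunnel current (assumed)
Nc = 1e6      # assumed number of free carriers
for f in (1.0, 10.0, 100.0):
    print(f"f = {f:6.1f} Hz  ->  S_I = {hooge_noise(I, Nc, f):.2e} A^2/Hz")

# 1/f scaling: doubling f halves S_I
assert hooge_noise(I, Nc, 20.0) == hooge_noise(I, Nc, 10.0) / 2.0
# I^2 scaling: doubling I quadruples S_I
assert hooge_noise(2 * I, Nc, 10.0) == 4.0 * hooge_noise(I, Nc, 10.0)
```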
This expression was used with relative success for homogeneous bulk metals 14,15 and more recently also for carbon nanotubes. 6 Similar relations were also derived for 1/f noise in variable range hopping conduction. 16 In Fig. 3-a we present the normalized current noise power spectrum (S_I/I^2) at 10 Hz (it is customary to compare noise spectra at 10 Hz) as a function of the bias V for devices B and C. Device C has a basic characteristic with the points following the dashed line asymptote. We use it as a reference for comparison with our other devices. We basically observed that S_I/I^2 decreases with |V|. For most of our samples, in addition to the background normalized noise, we observe a local (Gaussian with V) increase of noise at V > 0.4 V. The amplitude of the local increase varies from device to device. This local increase of noise is correlated with the increase of current seen in the J-V curves. The J-V characteristics (Fig. 1-a) of device B diverge from those of device C at V > 0.4 V and this is consistent with the local increase of noise observed in Fig. 3. The observed excess noise bump is likely attributed to this Gaussian distribution of traps centred at 3 eV responsible for the current increase. Although the microscopic mechanisms associated with conductance fluctuations are not clearly identified, it is believed that the underlying mechanism involves the trapping of charge carriers in localized states. 17 The nature and origin of these traps are however not known. We can hypothesize that the low density of traps uniformly distributed in energy may be due to Si-alkyl interface defects or traps in the monolayer, while the high density, peaked in energy, may be due to metal-induced gap states (MIGS) 18 or residual aluminum oxide at the metal-alkyl interface. The difference in the noise behaviours of samples B and C simply results from inhomogeneities of the metal deposition, i.e.
of the chemical reactivity between the metal and the monolayer, or is due to the formation of a residual aluminum oxide due to the presence of residual oxygen in the evaporation chamber. More 1/f noise experiments on samples with various physical and chemical natures of the interfaces are in progress to figure out how the noise behaviour depends on specific conditions such as the sample geometry, the metal or monolayer quality, the method used for the metal deposition and so forth.

IV. TUNNEL CURRENT NOISE MODEL

To model the tunnel current noise in the monolayers, we assume that some of the impurities may trap charge carriers. Since we do not know the microscopic details of the trapping mechanisms and the exact nature of these defects, we use a qualitative description that associates to each of them an effective Two-Level Tunnelling System (TLTS) characterized by an asymmetric double well potential with the two minima separated in energy by 2ε_i. We denote as ∆_i the term allowing tunneling from one well to the other, and get, after diagonalization, two levels that are separated in energy by $2E_i = 2\sqrt{\varepsilon_i^2 + \Delta_i^2}$. Since we are interested in low frequency noise, we focus on defects with very long trapping times, i.e. defects for which ∆_i << ε_i. The lower state (with energy -E_i) corresponds to an empty trap, the upper state (with energy +E_i) to a charged one. The relaxation rate from the upper to the lower state is determined by the coupling with the phonons and/or with the quasi-electrons, giving in both cases $\tau_i^{-1} \propto \Delta_i^2\, E_i \coth\!\left(\frac{E_i}{2 k_B \Theta}\right)$. In all cases, the time scale of the relaxation, τ, is very long compared to the duration of a scattering event. This allows us to consider the TLTS with a definite value at any instant of time.
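The diagonalization step behind the level splitting quoted above is a standard two-level calculation, not spelled out in the text; written out it reads:

```latex
% Effective TLTS Hamiltonian in the (left well, right well) basis:
H_i = \begin{pmatrix} \varepsilon_i & \Delta_i \\ \Delta_i & -\varepsilon_i \end{pmatrix},
\qquad
\det\!\left(H_i - \lambda\,\mathbb{1}\right) = \lambda^2 - \left(\varepsilon_i^2 + \Delta_i^2\right) = 0
\;\Rightarrow\;
\lambda_\pm = \pm E_i = \pm\sqrt{\varepsilon_i^2 + \Delta_i^2}.
```

The two eigenstates are thus separated by 2E_i; for ∆_i << ε_i they remain nearly localized in one well each, which is what makes the trapping times long.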
We then consider the following spectral density of noise for each TLTS 19

$$S_I(f) = \left(I_- - I_+\right)^2 \frac{\tau}{1 + \omega^2 \tau^2}\, \cosh^{-2}\!\left(\frac{E_i}{2 k_B \Theta}\right) \qquad (2)$$

where ω = 2πf and I_∓ is the tunnel current for the empty (charged) impurity state. In this equation, we consider the average of (I_- - I_+) over the TLTSs having similar ε_i and ∆_i. The difference between the two levels of current has two different origins. The first one is the change in energy of the impurity level, which directly affects T_2(E) ; the second one is the change in the charge density at the interfaces of the molecular junction induced by the trapped quasi-electron, which produces a shift in the applied bias, δV. We write

$$I_\mp(V) \cong A\, \frac{e\, m\, k_B \Theta}{2\pi^2 \hbar^3} \int_0^{+\infty} dE \left[ T(E) + \delta U_i \frac{\partial T_2(E, U_i)}{\partial U_i} \right] \ln\!\left( \frac{1 + e^{\beta(\mu - E)}}{1 + e^{\beta(\mu - E - e(V \mp \delta V))}} \right) \qquad (3)$$

where A is the junction (metal electrode) area. The first term in the right hand side is due to the fluctuating applied bias, the second to the change in the impurity energy. Since T_2(E) is already a perturbation, the second contribution is in general negligible but becomes important to explain the excess noise. We focus first on the background noise and therefore we keep only the first term of Eq. (3). We assume for simplicity that all the charged impurities give the same shift of bias, δV = e/(C_TJ A), where C_TJ is the capacitance of the tunnel junction per unit surface. Capacitance-voltage measurements (not shown) indicate that C_TJ is constant for positive bias. By using usual approximations regarding the distribution in relaxation times, τ, and energies, E_i, [Kogan, Electronic noise and fluctuations in solids] we get

$$S_I \propto E^* N^*_{imp}\, A \left(\frac{\partial I}{\partial V}\right)^2 \left(\frac{e}{C_{TJ}\, A}\right)^2 \frac{1}{f} \qquad (4)$$

We assume that the distribution function of ε_i and ∆_i, P(ε_i, ∆_i), is uniform to get the 1/f dependence. In this last expression, the derivative of the current is evaluated for the lower impurity state, and N*_imp is the impurity density per unit energy and surface area. We have E* = E_max, the maximum of E_i, if E_max << k_BΘ. The quasi resonances of T_2(E) are at the origin of the local increase. The Gaussian distribution selects defects for which T_2(E, U_i) shows quasi resonance in the appropriate range of energy. These traps may be associated to a non-uniform contribution to the distribution function P(ε_i, ∆_i) that would break the 1/f dependence of S_I above a certain bias. This is what is observed in Fig. 2, with γ changing from 1 to 1.2.

V. CONCLUSION

In summary, we have reported the study of low frequency (1/f^γ) current noise in molecular junctions. We have correlated the small dispersion observed in dc J-V characteristics and the local increase of normalized noise at certain biases (mainly at V > 0.4 V). A theoretical model qualitatively explains this effect as due to the presence of an energy-localized distribution of traps. The model predicts that the power spectrum of the background current noise is proportional to (∂I/∂V)^2, as observed in our experiments. [Alers: a similar behavior has sometimes been observed in SiO2 tunnel devices] We also show that the power spectrum of the current noise should be normalized as S_I/I^1.7. The background noise is associated with a low density of traps uniformly distributed in energy that may be due to Si-alkyl interface defects or traps in the monolayer. The local increase of noise for bias V > 0.4 V is ascribed to a high density of traps, peaked in energy, probably induced by the metal deposition on the monolayer.

[Fig. 3 caption, continued: theoretical curves from Eq. (4) with a uniform defect distribution (dashed line), with a Gaussian energy-localized distribution of defects added (thin solid line), and keeping the two terms of Eq. (3) with E* = 5eδV (bold solid line). An ad-hoc multiplicative factor has been applied to the theoretical results. The normalized noise S_I/I^2 in Fig. 3 decreases with V ; the appropriate normalization factor to obtain a flat background is I^1.7.]
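The statement that a broad, uniform distribution P(ε_i, ∆_i) yields a 1/f spectrum can be checked numerically: since τ depends exponentially on the tunnelling parameters, a uniform distribution of those parameters spreads τ uniformly in log τ, and summing the Lorentzians of Eq. (2) over such a spread gives a spectrum close to 1/f across the covered decades. The sketch below uses equal, arbitrary amplitudes and is only an illustration of that superposition argument:

```python
import numpy as np

def tlts_lorentzian(f, tau):
    """Single-TLTS noise spectrum, Eq. (2), up to a constant prefactor."""
    omega = 2.0 * np.pi * f
    return tau / (1.0 + (omega * tau) ** 2)

def summed_spectrum(f, taus):
    """Superposition of independent TLTS contributions with equal weights."""
    return sum(tlts_lorentzian(f, tau) for tau in taus)

# Relaxation times spread uniformly in log(tau) over 8 decades
taus = np.logspace(-4, 4, 400)          # seconds
freqs = np.logspace(-1, 2, 30)          # 0.1 Hz .. 100 Hz, inside the plateau
S = np.array([summed_spectrum(f, taus) for f in freqs])

# Fit the log-log slope: S(f) ~ f^(-gamma) with gamma close to 1
gamma = -np.polyfit(np.log10(freqs), np.log10(S), 1)[0]
print(f"fitted spectral exponent gamma = {gamma:.3f}")
```

Analytically, with x = ωτ, each decade of τ contributes ∫ dx/(1+x²)/ω, so the sum tends to π/(2ω) ∝ 1/f as long as f lies well inside the range of corner frequencies 1/(2πτ).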
Fig. 1: (a) Experimental J-V curves at room temperature for n-Si/C18H37/Al junctions. The contact areas are 0.36 mm^2 for device A and 1 mm^2 for devices B and C. The voltage V is […]

Fig. 2: Low frequency (1/f^γ) power spectrum current noise for device C. Although we […]

Fig. 3: (A) Normalized power spectrum current noise S_I/I^2 as a function of bias V for […]. S_I/I^2 decreases with V; the appropriate normalization factor to obtain a flat background is S_I/I^1.7.

Fig. 4: S_I vs. ∂I/∂V curve for device C on a log-log scale. The dashed line represents the result of Eq. (4) with a uniform defect distribution; the thin solid line, the result obtained by adding a Gaussian energy-localized distribution of defects; the bold solid line, the result of keeping the two terms of Eq. (3) with E* = 5eδV. An ad-hoc multiplicative factor has been applied to the theoretical results.

ACKNOWLEDGEMENTS

We thank David Cahen for many valuable discussions. N.C. and S.P. acknowledge support from the "ACI nanosciences" program and IRCICA. We thank Hiroshi Inokawa and Frederic Martinez for helpful comments.
00175653
en
[ "phys.phys.phys-atom-ph" ]
2024/03/05 22:32:10
2007
https://hal.science/hal-00175653/file/article_fermion.pdf
Xavier Baillard, Mathilde Hugbart, Rodolphe Le Targat, Philip G. Westergaard, Arnaud Lecallier, Frédéric Chapelet, Michel Abgrall, Giovanni D. Rovera, Philippe Laurent, Peter Rosenbusch, Mathilde Fouché, Sébastien Bize, Giorgio Santarelli, André Clairon, Pierre Lemonde (email: [email protected]), Gesine Grosche, Burghard Lipphardt, Harald Schnatz

An Optical Lattice Clock with Spin-polarized 87Sr Atoms

Introduction

The possibility to build high accuracy optical clocks with 87Sr atoms confined in an optical lattice is now well established. Since this idea was published [2], experiments rapidly proved the possibility to obtain narrower and narrower resonances with atoms in the Lamb-Dicke regime [3,4,5]. The narrowest observed resonances now have a width in the Hz range [6] and the corresponding potential fractional frequency instabilities are better than 10^-16 over 1 s of averaging time. On the other hand, systematic effects were also shown to be highly controllable. It was theoretically demonstrated that the residual effects of the atomic motion could be reduced down to the 10^-18 level for a lattice depth as small as 10 E_r [7], with E_r the recoil energy associated with the absorption or emission of a lattice photon. Higher order frequency shifts due to the trapping light were then also shown to be controllable at that level [5]^1. Altogether, the accuracy of the frequency measurement of the 1S0 → 3P0 clock transition of Sr has steadily improved by four orders of magnitude since its first direct measurement in 2003 [8]. Three independent measurements performed in Tokyo University [9], JILA [4] and SYRTE [10] were reported with a fractional uncertainty of 10^-14, giving excellent agreement. Recently, the JILA group improved their uncertainty down to 2.5 × 10^-15 [1], a value that is comparable to the frequency difference between the various primary standards throughout the world [Wolf, Comparing high accuracy frequency standards via TAI].
We report here a new and independent measurement of this clock transition with an accuracy of 2.6 × 10^-15. The major modification as compared to our previous evaluation is an improved control of the Zeeman effect. By applying a bias field of typically 0.1 mT and pumping atoms into extreme Zeeman states (we alternate measurements with m_F = +9/2 and m_F = -9/2) we cancel the first order Zeeman effect while getting a real time measurement of the actual magnetic field seen by the atoms [9]. The measured frequency of the 1S0 → 3P0 clock transition of 87Sr is 429 228 004 229 873.6 (1.1) Hz. This value differs from the one of Ref. [1] by 0.4 Hz only.

2 Experimental setup

2.1 Atom manipulation

The apparatus is derived from the one described in Ref. [10]. The clock is operated sequentially, with a typical cycle duration of 400 ms. We use a dipole trap formed by a vertical standing wave at 813.428 nm inside an enhancement Fabry-Pérot cavity. The depth of the wells of the resulting lattice is typically 100 µK. A beam of 87Sr atoms from an oven at a temperature of about 450 °C is sent through a Zeeman slower and then loaded into a magneto-optical trap (MOT) based on the 1S0 → 1P1 transition at 461 nm. The MOT temperature is about 1 mK. The dipole trap laser is aligned in order to cross the center of the MOT. Two additional laser beams tuned to the 1S0 → 3P1 and 3P1 → 3S1 transitions, at 689 nm and 688 nm respectively, are superimposed on the trapping laser. The atoms that cross these beams are therefore drained into the metastable states 3P0 and 3P2 at the center of the trap, and those with a small enough kinetic energy remain confined in the potential wells forming the lattice. The MOT and drain lasers are then switched off and the atoms are optically pumped back to the ground state, where they are further cooled using the narrow 1S0 → 3P1 transition. About 95% of the atoms are cooled down to the ground state of the trap (see Fig. 1), corresponding to a temperature of 3 µK. They are optically pumped into either the (1S0, m_F = 9/2) or (1S0, m_F = -9/2) Zeeman sub-state. This is achieved by means of a bias magnetic field of about 10^-4 T and a circularly polarized laser (σ+ or σ- depending on the desired m_F state) tuned to the 1S0 (F = 9/2) → 3P1 (F = 9/2) transition. This transition is power-broadened to a few hundred kHz. The magnetic field can then be switched to a different value (up to a fraction of a mT) for the clock transition interrogation. We use a π-polarized laser at 698 nm to probe the 1S0 → 3P0 transition with adjustable frequency to match the desired (m_F = ±9/2 → m_F = ±9/2) transition. Finally, the populations of the two states 1S0 and 3P0 are measured by laser induced fluorescence using two blue pulses at 461 nm separated by a repumping pulse.

2.2 Measurement scheme

The spectroscopy of the clock transition is performed with an extended-cavity diode laser at 698 nm which is pre-stabilized with an interference filter [Baillard]. The laser is stabilized to an ultrastable cavity of finesse F = 25000, and its frequency is constantly measured by means of a femtosecond fiber laser [13,14] referenced to the three atomic fountain clocks FO1, FO2 and FOM. The femtosecond fiber laser is described in paragraph 2.3. The fountain ensemble used in this measurement is described extensively in [15,16]. The three atomic fountains FO1, FO2 and FOM are used as primary frequency standards measuring the frequency of the same ultra-low noise reference derived from a cryogenic sapphire oscillator. Practically, the reference signal at 11.98 GHz is divided to generate a 100 MHz reference which is disseminated to FO1 and FOM. Being located in the neighboring lab, FO2 benefits from using the 11.98 GHz directly.
Another 1 GHz reference is also generated from the 11.98 GHz signal and sent through a fiber link as a reference for the fiber femtosecond optical frequency comb. The 100 MHz signal is also compared to the 100 MHz output of an H-maser. A slow phase-locked loop (time constant of 1000 s) is implemented to ensure coherence between the reference signals and the H-maser, to avoid long term frequency drift of the reference signals. During the 15 days of measurement reported here, the three fountains were operated continuously as primary frequency standards measuring the same reference oscillator. The overall frequency instability for this measurement is 3.5 × 10^-14 at 1 s for FO2, 4.2 × 10^-14 at 1 s for FO1 and 7.2 × 10^-14 at 1 s for FOM. The accuracies of these clocks are 4 × 10^-16 for FO1 and FO2 and 1.2 × 10^-15 for FOM. The fractional frequency differences between the fountain clocks are all consistent with zero within the combined 1-sigma error bar, which implies consistency to better than 10^-15. The link between the cavity stabilized laser and the frequency comb is a fiber link of 50 m length with a phase noise cancellation system. The probe laser beam is sent to the atoms after passing through an acousto-optic modulator (AOM) driven at a computer-controlled frequency. Each transition is measured for 32 cycles before switching to the transition involving opposite m_F states. Two digital servo-loops to both atomic resonances therefore run in parallel with interlaced measurements. For each servo-loop we alternately probe both sides of the resonance peak. The difference between two successive transition probability measurements constitutes the error signal used to servo-control the AOM frequency to the atomic transition. In addition, we interlace sets of 64 cycles involving two different trapping depths. The whole sequence is repeated for up to one hour. This operating mode allows the independent evaluation of three clock parameters.
The difference between the frequency measurements made for each Zeeman sub-state can be used to accurately determine the magnetic field. As we switch to the other resonance every 32 cycles, this gives a real-time calibration of the magnetic field averaged over 64 cycles. The global average of the measurement is the value of the clock frequency and is independent of the first order Zeeman effect, as the two probed transitions are symmetrically shifted. Finally, the two frequencies corresponding to two different dipole trap depths are used for a real-time monitoring of the possible residual light shift of the clock transition by the optical lattice. The frequency stability of the Sr lattice clock-FO2 fountain comparison is shown in Fig. 2. The Allan deviation is 6 × 10^-14 τ^-1/2, so that the statistical uncertainty after one hour of averaging time is 10^-15, corresponding approximately to 0.5 Hz.

Fig. 2. Allan standard deviation of the frequency measurements for a magnetic field B = 0.87 G and a time of interrogation of 20 ms. The line is a fit to the data using a τ^-1/2 law. The corresponding stability at 1 s is 6 × 10^-14.

2.3 The frequency comb

For the absolute frequency measurement of the Sr transition we have used a fibre-based optical frequency comb which is based on an FC1500 optical frequency synthesizer supplied by Menlo Systems. The laser source for the FC1500 comb is a passively mode-locked femtosecond fibre laser which operates at a centre wavelength of approximately 1550 nm and has a repetition rate of 100 MHz. The repetition rate can be tuned over approximately 400 kHz, by means of an end mirror mounted on a translation stage controlled by a stepper motor and a piezoelectric transducer. The output power from the mode-locked laser is split and fed to three erbium-doped fiber amplifiers (EDFAs). These are used to generate three phase-coherent optical frequency combs whose spectral properties can be independently optimized to perform different functions.
The output from the first EDFA is broadened using a nonlinear fibre to span the wavelength range from approximately 1000 nm to 2100 nm. This provides the octave-spanning spectrum required for detection of the carrier-envelope offset frequency f_0 using the self-referencing technique [17,18]. With proper adjustment of the polarisation controllers, a beat signal with a signal-to-noise ratio SNR = 40 dB in a resolution bandwidth of 100 kHz is achieved. The electronics for the stabilization of the carrier offset frequency comprises a photodetector, a tracking oscillator, and a digital phase-locked loop (PLL) with adjustable gain and bandwidth. The offset frequency is stabilized by feedback to the pump laser diode current. The critical frequency to be measured is the pulse repetition frequency f_rep, since the optical frequency is measured as a very high multiple of this pulse repetition frequency. To overcome noise limitations due to the locking electronics and to enhance the resolution of the counting system, we detect f_rep at a high harmonic using a fast InGaAs photodiode with a bandwidth of 25 GHz. Locking of the repetition frequency is provided by an analogue phase-locked loop comparing the 90th harmonic of f_rep with a microwave reference and controlling the cavity length. The subsequent use of a harmonic tracking filter further enhances the short term resolution of our counting system. For this purpose, the 9 GHz beat signal is down-converted with a low noise 9 GHz signal synthesized from the microwave frequency reference (CSO / hydrogen maser referenced to a Cs-fountain clock). The difference frequency is again multiplied by 128, reducing frequency counter digitization errors to below the level of the noise of the microwave reference. Thereby, the frequency at which f_rep is effectively measured is 1.15 THz. A second EDFA generates high power radiation which is frequency-doubled using a PPLN crystal, generating a narrow-band frequency comb around 780 nm.
This is subsequently broadened in a nonlinear fiber to generate a comb spanning the range 600-900 nm. Light of the spectroscopy laser at 698 nm is superimposed on the output of the frequency comb using a beam splitter. To assure proper mode matching of the beams and proper polarization adjustment, one output of the beam splitter is launched into a single mode fiber and detected with a DC photodetector. The other output is dispersed by means of a diffraction grating. A subsequent pinhole placed in front of the photodetector then selects a narrow range of the spectrum at 698 nm and improves the signal-to-noise ratio of the observed heterodyne beat. For the heterodyne beat signal with the Sr clock laser, an SNR of 30-35 dB in a bandwidth of 100 kHz was achieved. Again, a tracking oscillator is used for optimal filtering and conditioning of the heterodyne signal. All beat frequencies and relevant AOM frequencies were counted using totalizing counters with no dead-time; the counters correspond to Π-estimators [19] for the calculation of the standard Allan variance.

3 First order Zeeman effect

In the presence of a magnetic field, both clock levels, which have a total momentum F = 9/2, are split into 10 Zeeman sub-states. The linear shift ∆_Z of a sub-state due to a magnetic field B is

∆_Z = m_F g_F µ_B B / h,   (1)

where m_F is the Zeeman sub-state (here 9/2 or -9/2), g_F the Landé factor of the considered state, µ_B the Bohr magneton, and h the Planck constant. Using the differential g-factor between 3P0 and 1S0 reported in Ref. [20], ∆g = 7.77(3) × 10^-5, we can determine the magnetic field by measuring two symmetrical resonances. Fig. 3 shows the typical resonances observed with a magnetic field B = 87 µT for both m_F = ±9/2 sub-states. The linewidth is of the order of 30 Hz, essentially limited by Fourier broadening, hence facilitating the lock of the frequency to each resonance. This linewidth corresponds to an atomic quality factor Q = 1.4 × 10^13.
At this magnetic field, two successive π-transitions are separated by 96 Hz, which is high enough to entirely resolve the Zeeman sub-structure with that type of resonance and to limit possible line-pulling effects to below 10^-15 (see section 4.3). The magnetic field used for pumping and detecting the atoms is provided by two coils in Helmholtz configuration to produce a homogeneous field at the center of the trap. They are fed by a fast computer-controlled power supply to reach the desired value in a few ms. This setup requires an accurate characterization of the stability of the magnetic field, as the residual magnetic field fluctuations are a possible issue for the clock accuracy and stability. The Zeeman effect can provide a precise measurement of this field and its calibration when we measure the clock transition for two symmetrical transitions. When probing the two transitions for the m_F = ±9/2 sub-states, the difference ∆ν between the two frequencies can be related to the magnetic field using Eq. (1): ∆ν = 9∆g µ_B B/h. To evaluate the stability of the magnetic field, we chose a particular set of parameters (B = 87 µT and a modulation depth of the numerical servo-loop of 10 Hz) and repeated a large number of times the corresponding time sequence as described in section 2.2. The measured magnetic field is averaged over 64 cycles. We then concatenated all the averaged data and calculated the Allan standard deviation to determine the long term stability of the magnetic field. The result is plotted in Fig. 4. The deviation is 10^-2 in fractional units at 32 s, going down following a τ^-1/2 law for longer times. The deviation for long times is below 10^-3. This measurement is totally dominated by the frequency noise of the Sr clock, and no fluctuations of the field itself are visible at the present level of resolution. For a magnetic field of 87 µT, this represents a control of the magnetic field at the sub-µT level over long timescales.
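The numbers in this section are easy to check from Eq. (1). The short script below is our own consistency check, not part of the paper; the physical constants are standard values (not quoted in the text):

```python
mu_B = 9.274e-24          # Bohr magneton, J/T (standard value)
h = 6.626e-34             # Planck constant, J*s (standard value)
dg = 7.77e-5              # differential g-factor from Ref. [20]
B = 87e-6                 # bias field, T
nu = 429228004229873.6    # measured clock frequency, Hz

# Separation of two successive pi-transitions (Delta m_F = 1), Eq. (1):
sep = dg * mu_B * B / h   # ~95 Hz; the quoted 96 Hz corresponds to the
                          # remeasured dg = 7.90e-5 of section 4.3
# Splitting between the m_F = +9/2 and -9/2 lines used for calibration:
dnu = 9 * sep             # Delta(nu) = 9*dg*mu_B*B/h, ~0.85 kHz
# Atomic quality factor for a 30 Hz linewidth:
Q = nu / 30.0             # ~1.4e13, as quoted
print(sep, dnu, Q)
```

A sub-µT control of B thus corresponds to a sub-10 Hz control of the splitting ∆ν, consistent with the Allan deviation of Fig. 4.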
4 Frequency accuracy

4.1 Second order Zeeman effect

The clock has been operated with different values of the bias magnetic field up to 0.6 mT. As explained before, our method of interrogation makes the measurements independent of the first order Zeeman effect. On the other hand, both resonances are shifted by the same quadratic Zeeman shift, which has to be evaluated. From the calibration of the magnetic field, we can evaluate the dependence of the transition frequency as a function of this field. The results are plotted in Fig. 5. The line plotted on the graph represents the expected quadratic dependence of -23.3 Hz/mT^2 [21], where we adjusted only the frequency offset to fit the data. The statistical uncertainty on this fit is 0.2 Hz, and there is no indication for a residual first order effect to within less than 1 Hz at 0.6 mT. At B = 87 µT, where most of the measurements were done, the correction due to the quadratic Zeeman effect is 0.1 Hz only. Conversely, an experimental value for the quadratic Zeeman effect coefficient can be derived from the data plotted in Fig. 5 with a 7% uncertainty. We find -24.9(1.7) Hz/mT^2, which is in agreement with theory.

4.2 Residual lattice light shift

The clock frequency as a function of the trapping depth is plotted in Fig. 6. Measurements have been done with depths ranging from 50 to 500 E_r, corresponding to an individual light shift of both clock levels up to 1.8 MHz. Over this range, the scatter of points is less than 2 Hz and the statistical uncertainty of each point lower than 1 Hz. The control of this effect has been evaluated by fitting the data with a line. The slope represents a shift of 0.5(5) Hz at 500 E_r. The differential shift between both clock states is therefore controlled at a level of 3 × 10^-7. In ultimate operating conditions of the clock, a trapping depth of 10 E_r is theoretically sufficient to cancel the motional effects down to below 10^-17 [7].
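The scalings of this section can be chained together numerically. This is our own arithmetic check, using only the values quoted in the text (lattice depth scales the light shift linearly):

```python
nu = 429228004229873.6            # measured clock frequency, Hz

# Quadratic Zeeman shift at the operating field of 87 uT:
quad_87uT = -23.3 * 0.087 ** 2    # ~ -0.18 Hz, i.e. a ~0.1 Hz-level correction
# Differential light-shift control: 0.5 Hz residual slope at 500 E_r,
# where the individual light shift of the clock levels is 1.8 MHz:
control = 0.5 / 1.8e6             # ~3e-7, as quoted
# At the "ultimate" depth of 10 E_r the light shift scales down linearly:
shift_10Er = 1.8e6 * 10 / 500     # = 36 kHz
residual = control * shift_10Er / nu   # ~2e-17 residual fractional light shift
print(quad_87uT, control, shift_10Er, residual)
```

The last number reproduces, at the level of rounding, the ~2 × 10^-17 residual lattice light shift claimed for ultimate operating conditions.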
The light shift corresponding to this depth is 36 kHz for both levels, or 8 × 10^-11 in fractional units. The kind of control demonstrated here would correspond, in these ultimate conditions, to a residual light shift below 2 × 10^-17.

4.3 Uncertainty budget

Other systematic effects have been evaluated and included in the accuracy budget listed in Table 1. The line pulling by neighbouring transitions has been carefully evaluated. Two types of transitions should be considered: transverse motional sidebands and transitions between the various Zeeman states of the atoms. Transverse motional sidebands can be excited by the transverse k-content of the probe laser. For a lattice depth of 100 E_r, the transverse oscillation frequency is about 150 Hz. Both diffraction and misalignment are below 1 mrad here, so that the transverse dynamics is deeply in the Lamb-Dicke regime and the height of the transverse sidebands is expected to be at most 5 × 10^-3 of the carrier (experimentally, they do not emerge from the noise of the measurements). The corresponding line pulling is therefore below 0.4 Hz. This is confirmed by the absence of pathological dependence of the clock frequency as a function of the lattice depth (Fig. 6). Unwanted Zeeman transitions result from the imperfection of the optical pumping process and of the polarization of the probe laser. In the standard configuration, the only visible stray resonance is the m_F = ±7/2 → m_F = ±7/2 transition, with a height that is about half of the one of the m_F = ±9/2 → m_F = ±9/2 resonance. It is difficult to set a realistic theoretical upper limit on this effect since we have no direct access to the level of coherence between the various m_F states, nor to the degree of polarization of the probe laser. Experimentally however, several parameters can be varied to test the effect. The magnetic field dependence shown in Fig. 5 shows no deviation from the expected law to within the error bars.
On the other hand, measurements performed with various depths of the servo-loop modulation also show no differences to within 0.5 Hz. Finally, we operated the clock with a probe laser polarization orthogonal to the bias field and using the m_F = ±9/2 → m_F = ±7/2 transitions as clock resonances. The clock frequency in this configuration is found 0.2(5) Hz away from the frequency in the standard configuration. These measurements also test for a possible line pulling from higher order stray resonances involving both a change of the transverse motion and of the internal Zeeman state, which can […]. Incidentally, the combination of the measurements using σ and π transitions allows the derivation of the differential Landé factor between both clock states [20]. We find g(3P0) - g(1S0) = 7.90(7) × 10^-5, a value that differs from the one reported in Ref. [20] by twice the combined 1-sigma uncertainty. The light shift due to the probe laser is essentially due to the off-resonant coupling of the 3P0 state with the 3S1 state. The typical light intensity used for the clock evaluation was of a few mW/cm^2. By varying this intensity by a factor up to 2, no visible effect has been observed to within the uncertainty. A more precise evaluation was carried out using the bosonic isotope 88Sr [22]. The measured light shift for an intensity of 6 W/cm^2 was in this case of -78 Hz [Wolf, Comparing high accuracy frequency standards via TAI]. The corresponding effect for our current setup, where the probe power is 3 orders of magnitude smaller, is about 0.1 Hz with an uncertainty in the 10^-2 Hz range. The blackbody radiation shift is derived from temperature measurements of the vacuum chamber using two Pt resistors placed on opposite sides of the apparatus and using the accurate theoretical calculation reported in Ref. [23]. The blackbody radiation shift in our operating conditions is 2.39(10) Hz.
Finally, a 1 Hz uncertainty is attributed to an effect that has not been clearly identified. After having varied all the parameters necessary for estimating the systematic effects, we decided to check the overall consistency by performing three series of measurements with fixed parameters. The first two were performed with a bias field of 87 µT and servo-loop modulation depths of 7 and 10 Hz respectively. The third one was performed with a larger field of 140 µT and a modulation depth of 7 Hz. The results of series 2 and 3 are shown in Fig. 7, where the error bars include the statistical uncertainty of each measurement only. The scatter of points of series 3 is clearly incompatible with the individual error bars (the reduced χ^2 of this distribution is 4.3). In addition, its average value is 1.5 Hz away. Having not clearly identified the reason for this behaviour (one possibility could be a problem in the injection locking of one of the slave lasers at 698 nm), we decided to keep this series of data. We also cannot ensure that the effect is not present (though at a smaller level) in the other measurements, and decided to attribute an uncertainty of 1 Hz to this effect. Taking into account these systematic effects, the averaged clock frequency is determined to be ν_clock = 429 228 004 229 873.6(1.1) Hz. The global uncertainty, 2.6 × 10^-15 in fractional units, corresponds to the quadratic sum of all the uncertainties of the systematic effects listed in Table 1. The statistical uncertainty is at the level of 0.1 Hz.

5 Conclusion

We have reported here a new measurement of the frequency of the 1S0 → 3P0 transition of 87Sr with an uncertainty of 1.1 Hz, or 2.6 × 10^-15 in fractional units. The result is in excellent agreement with the values reported by the JILA group with a similar uncertainty [1] and by the Tokyo group with a 4 Hz error bar [9].
Obtained in independent experiments with significant differences in their implementation, this multiple redundancy strengthens the obtained results and further confirms the possibility to build high accuracy clocks with cold atoms confined in an optical lattice. It also further assesses this transition as a possible candidate for a future redefinition of the second.

SYRTE is Unité Associée au CNRS (UMR 8630) and a member of IFRAF. This work is supported by CNES and DGA. PTB acknowledges financial support from the German Science Foundation through SFB 407.

Fig. 1. Spectrum at high power of the carrier and the first two longitudinal sidebands of the trapped atoms. The ratio between both sidebands is related to the population of the ground state of the trap. 95% of the atoms are in the lowest vibrational state of the lattice wells.

Fig. 3. Experimental resonances observed for the m_F = 9/2 → m_F = 9/2 (left) and m_F = -9/2 → m_F = -9/2 (right) transitions for a magnetic field B = 87 µT and an interrogation time of 20 ms. The lines are Gaussian fits to the data. The asymmetry between both resonances results from the imperfection of the optical pumping.

Fig. 4. Allan standard deviation of the magnetic field. The line is a τ^-1/2 fit to the data.

Fig. 5. (a) Clock frequency as a function of the applied magnetic field. The line represents a fit of the experimental data by a quadratic law with one adjustable parameter: the frequency offset. The linear term was set to 0 and the quadratic term to its theoretical value. (b) Clock frequency after correction for the second order Zeeman effect. The line is the average of the data.

Fig. 6. Clock frequency as a function of the dipole trap depth in terms of recoil energies. On the upper scale is the corresponding light shift of the clock levels. The line is a linear fit to the data. The value of the light shift due to the trap at 500 E_r is only 0.5(0.5) Hz.

Fig. 7.
Two series of measurements performed with different clock parameters (see text). The series plotted on the right hand side of the figure clearly exhibits a scatter of points that is incompatible with the individual statistical error bars of the measurements. Its reduced χ^2 is 4.3. The other series behaves normally and is shown for reference.

Table 1. Uncertainty budget.

| Effect | Correction (Hz) | Uncertainty (Hz) | Fractional uncertainty (×10^-15) |
| Zeeman | 0.1 | 0.1 | 0.2 |
| Probe laser Stark shift | 0.1 | <0.1 | <0.1 |
| Lattice AC Stark shift (100 E_r) | 0 | 0.2 | 0.4 |
| Lattice 2nd order Stark shift (100 E_r) | 0 | 0.1 | 0.2 |
| Line pulling (transverse sidebands) | 0 | 0.5 | 1.1 |
| Cold collisions | 0 | 0.1 | 0.2 |
| Blackbody radiation shift | 2.39 | 0.1 | 0.1 |
| See text | 0 | 1 | 2.3 |
| Fountain accuracy | 0 | 0.2 | 0.4 |
| Total | 2.59 | 1.1 | 2.6 |

1. The first order shift can be made to vanish in this type of clock at the so-called "magic wavelength".
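The totals of Table 1 can be reproduced by adding the individual uncertainties in quadrature. This is our own check, not part of the paper; the probe-laser entry, quoted only as an upper bound "<0.1", is taken at face value:

```python
import math

# Uncertainties in Hz from Table 1 (probe-laser bound "<0.1" taken as 0.1)
unc_hz = [0.1, 0.1, 0.2, 0.1, 0.5, 0.1, 0.1, 1.0, 0.2]
total_hz = math.sqrt(sum(u * u for u in unc_hz))

# Fractional uncertainties in units of 1e-15 (last column of Table 1)
unc_frac = [0.2, 0.1, 0.4, 0.2, 1.1, 0.2, 0.1, 2.3, 0.4]
total_frac = math.sqrt(sum(u * u for u in unc_frac))

print(total_hz, total_frac)  # ~1.17 Hz and ~2.64e-15
```

Both quadrature sums are dominated by the unexplained 1 Hz ("see text") entry and agree with the quoted totals of 1.1 Hz and 2.6 × 10^-15 at the level of rounding.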
01383023
en
[ "math.math-ds" ]
2024/03/05 22:32:10
2020
https://hal.science/hal-01383023v3/file/KS_main_30Mar2018.pdf
Shingo Kamimoto, David Sauzin

Iterated convolutions and endless Riemann surfaces

Introduction

In this article, we deal with the following version of Écalle's definition of resurgence:

Definition 1.1. A convergent power series φ̂ ∈ C{ζ} is said to be endlessly continuable if, for every real L > 0, there exists a finite subset F_L of C such that the holomorphic germ at 0 defined by φ̂ can be analytically continued along every Lipschitz path γ : [0, 1] → C of length smaller than L such that γ(0) = 0 and γ((0, 1]) ⊂ C \ F_L. We denote by R ⊂ C{ζ} the space of endlessly continuable functions.

Definition 1.2. A formal series φ̃ = ϕ_0 + Σ_{j≥1} ϕ_j z^{-j} ∈ C[[z^{-1}]] is called a resurgent series if the associated series φ̂(ζ) := Σ_{j≥1} ϕ_j ζ^{j-1}/(j-1)! is an endlessly continuable function.

In other words, the space of resurgent series is R̃ := B^{-1}(Cδ ⊕ R) ⊂ C[[z^{-1}]], where B : C[[z^{-1}]] → Cδ ⊕ C[[ζ]] is the formal Borel transform, defined by B φ̃ := ϕ_0 δ + φ̂(ζ) in the notation of Definition 1.2. We will also treat the more general case of functions which are "endlessly continuable w.r.t. bounded direction variation": we will define a space R_dv containing R and, correspondingly, a space R̃_dv containing R̃, but for the sake of simplicity, in this introduction, we stick to the simpler situation of Definitions 1.1 and 1.2. Note that the radius of convergence of an element of R̃ may be 0. As for the elements of R, we will usually identify a convergent power series and the holomorphic germ that it defines at the origin of C, as well as the holomorphic function which is thus defined near 0. Holomorphic germs with meromorphic or algebraic analytic continuation are examples of endlessly continuable functions, but the functions in R can have a multiple-valued analytic continuation with a rich set of singularities. The convolution product is defined as the Borel image of multiplication and denoted by the symbol *: for φ̂, ψ̂ ∈ C[[ζ]], φ̂ * ψ̂ := B(B^{-1}φ̂ · B^{-1}ψ̂), and δ is the convolution unit (obtained from (C[[ζ]], *) by adjunction of unit).
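A concrete illustration of the formal Borel transform (the example is ours, not taken from the paper): for the Euler series φ̃ = Σ_{j≥1} (-1)^{j-1} (j-1)! z^{-j}, which diverges for every z, the coefficient rule ϕ_j ↦ ϕ_j/(j-1)! yields the geometric series φ̂(ζ) = Σ_{j≥1} (-ζ)^{j-1} = 1/(1+ζ), a convergent germ that continues endlessly with a single singular point at ζ = -1 (so F_L = {-1} works for every L):

```python
from math import factorial

def borel_coeffs(phi, n):
    """Formal Borel transform on coefficients:
    sum(phi_j z^-j) -> sum(phi_j zeta^(j-1)/(j-1)!).
    phi[j-1] holds phi_j; returns the first n Taylor coefficients in zeta."""
    return [phi[j - 1] / factorial(j - 1) for j in range(1, n + 1)]

# Euler series coefficients: phi_j = (-1)^(j-1) * (j-1)!  (factorially divergent)
n = 8
euler = [(-1) ** (j - 1) * factorial(j - 1) for j in range(1, n + 1)]
print(borel_coeffs(euler, n))  # alternating +/-1: coefficients of 1/(1+zeta)
```

The factorial growth of ϕ_j is exactly compensated by the 1/(j-1)! of the Borel transform, which is why B maps interesting divergent series to convergent germs.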
As is well known, for convergent power series, convolution admits the integral representation

(φ̂ * ψ̂)(ζ) = ∫_0^ζ φ̂(ζ_1) ψ̂(ζ - ζ_1) dζ_1.   (1.1)

Our aim is to study the analytic continuation of the convolution product of an arbitrary number of endlessly continuable functions, to check its endless continuability, and also to provide bounds, so as to be able to deal with nonlinear operations on resurgent series. A typical example of nonlinear operation is the substitution of one or several series without constant term φ̃_1, ..., φ̃_r into a power series F(w_1, ..., w_r), defined as

F(φ̃_1, ..., φ̃_r) := Σ_{k∈N^r} c_k φ̃_1^{k_1} ⋯ φ̃_r^{k_r}   (1.2)

for F = Σ_{k∈N^r} c_k w_1^{k_1} ⋯ w_r^{k_r}. One of our main results is

Theorem 1.3. Let r ≥ 1 be an integer. Then, for any convergent power series F(w_1, ..., w_r) ∈ C{w_1, ..., w_r} and for any resurgent series φ̃_1, ..., φ̃_r without constant term, F(φ̃_1, ..., φ̃_r) ∈ R̃.

The proof of this result requires suitable bounds for the analytic continuation of the Borel transform of each term in the right-hand side of (1.2). Along the way, we will study the Riemann surfaces generated by endlessly continuable functions. We will also prove similar results for the larger spaces R_dv and R̃_dv. Resurgence theory was developed in the early 1980s, with [Écalle, Les fonctions résurgentes], and has many mathematical applications in the study of holomorphic dynamical systems, analytic differential equations, WKB analysis, etc. (see the references e.g. in [Nonlinear analysis with resurgent functions]). More recently, there has been a burst of activity on the use of resurgence in Theoretical Physics, in the context of matrix models, string theory, quantum field theory and also quantum mechanics; see e.g.
[Aniceto, Nonperturbative ambiguities and the reality of resurgent transseries], [Aniceto, The resurgence of instantons in string theory], [Argyres, The semi-classical expansion and resurgence in gauge theories: new perturbative, instanton, bion, and renormalon effects], [Cherman, Decoding perturbation theory using resurgence: Stokes phenomena, new saddle points and Lefschetz thimbles], [Couso-Santamaría, Finite N from resurgent large N], [Dunne, Resurgence and trans-series in quantum field theory: the CP^(N-1) model], [Dunne, Uniform WKB, multi-instantons, and resurgent trans-series], [Garay, Resurgent deformation quantisation], [Mariño, Lectures on non-perturbative effects in large N gauge theories, matrix models and strings]. In almost all these applications, it is an important fact that the space of resurgent series be stable under nonlinear operations: such stability properties are useful, and at the same time they account for the occurrence of resurgent series in concrete problems. These stability properties were stated in a very general framework in [Écalle, Les fonctions résurgentes], but without detailed proofs, and the part of [Candelpergher, Approche de la résurgence] which tackles this issue contains obscurities and at least one mistake. It is thus our aim in this article to provide a rigorous treatment of this question, at least in the slightly narrower context of endless continuability. The definitions of resurgence that we use for R and R_dv are indeed more restrictive than Écalle's most general definition [Écalle, Les fonctions résurgentes]. In fact, our definition of R_dv is almost identical to the one used by Pham et al.
in [START_REF] Candelpergher | Approche de la résurgence[END_REF], and our definition of R is essentially equivalent to the definition used in [START_REF] Deleabaere | Endless continuability and convolution product[END_REF], but the latter preprint has flaws which induced us to develop the results of the present paper. These versions of the definition of resurgence are sufficient for a large class of applications, which virtually contains all the aforementioned ones-see for instance [START_REF] Kamimoto | Resurgence of formal series solutions of nonlinear differential and difference equations[END_REF] for the details concerning the case of nonlinear systems of differential or difference equations. The advantage of the definitions based on endless continuability is that they allow for a description of the location of the singularities in the Borel plane by means of discrete filtered sets or discrete doubly filtered sets (defined in Sections 2.1 and 2.5); the notion of discrete (doubly) filtered set, adapted from [START_REF] Candelpergher | Approche de la résurgence[END_REF] and [START_REF] Deleabaere | Endless continuability and convolution product[END_REF], is flexible enough to allow for a control of the singularity structure of convolution products. A more restrictive definition is used in [START_REF]Nonlinear analysis with resurgent functions[END_REF] and [START_REF] Mitschi | Divergent Series, Summability and Resurgence[END_REF] (see also [START_REF] Écalle | Les fonctions résurgentes[END_REF]): Definition 1.4. Let Σ be a closed discrete subset of C. A convergent power series φ is said to be Σ-continuable if it can be analytically continued along any path which starts in its disc of convergence and stays in C \Σ. The space of Σ-continuable functions is denoted by RΣ . This is clearly a particular case of Definition 1.1: any Σ-continuable function is endlessly continuable (take F L = { ω ∈ Σ | |ω| ≤ L }). 
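A classical one-point illustration of Definition 1.4, not taken from the present paper but standard in the resurgence literature, is Euler's series:

```latex
% Euler's series: a prototypical Sigma-continuable germ with Sigma = {-1}.
\[
  \tilde\varphi(z) = \sum_{k \ge 0} (-1)^k \, k! \, z^{-k-1}
  \qquad\Longrightarrow\qquad
  \hat\varphi(\zeta) = \mathcal{B}\tilde\varphi(\zeta)
  = \sum_{k \ge 0} (-\zeta)^k = \frac{1}{1+\zeta}.
\]
% The Borel transform extends holomorphically to C \ {-1}, hence along every
% path avoiding -1: it is Sigma-continuable with Sigma = {-1}, and in
% Definition 1.1 one may take F_L = {-1} for every L.
```

Here the factorial divergence of the coefficients is converted by the Borel transform into a single pole at ζ = −1.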
It is proved in [MS16] that, if Σ′ and Σ″ are closed discrete subsets of C, and if also Σ := {ω′ + ω″ | ω′ ∈ Σ′, ω″ ∈ Σ″} is closed and discrete, then φ̂ ∈ R̂_Σ′, ψ̂ ∈ R̂_Σ″ ⇒ φ̂ * ψ̂ ∈ R̂_Σ. This is because in formula (1.1), heuristically, singular points tend to add to create new singularities; so, the analytic continuation of φ̂ * ψ̂ along a path which does not stay close to the origin is possible provided the path avoids Σ. In particular, if a closed discrete set Σ is closed under addition, then R̂_Σ is closed under convolution; moreover, in this case, bounds for the analytic continuation of iterated convolutions φ̂_1 * ⋯ * φ̂_n are given in [START_REF]Nonlinear analysis with resurgent functions[END_REF], where an analogue of Theorem 1.3 is proved for Σ-continuable functions. The notion of Σ-continuability is sufficient to cover interesting applications, e.g. differential equations of the saddle-node singularity type or difference equations like Abel's equation for one-dimensional tangent-to-identity diffeomorphisms, in which cases one may take for Σ a one-dimensional lattice of C. However, reflecting for a moment on the origin of resurgence in differential equations, one sees that one cannot handle situations beyond a certain level of complexity without replacing Σ-continuability by a more general notion like endless continuability.
- For instance, for one equation dϕ/dz − λϕ = b_0(z) + b_1(z)ϕ + b_2(z)ϕ^2 + ⋯ with b(z, w) = ∑_m b_m(z) w^m ∈ z^{−1} C{z^{−1}, w} given, we may expect a formal solution whose Borel transform φ̂ has singularities at ζ = −nλ, n ∈ Z_{>0} (because, as an effect of the nonlinearity, the singular points tend to add), i.e. φ̂ will be Σ-continuable with Σ = {−λ, −2λ, . . .} (see [START_REF]Mould expansions for the saddle-node and resurgence monomials[END_REF] for a rigorous proof of this), but in the multidimensional case, for a system of r coupled equations with left-hand sides of the form dϕ_j/dz − λ_j ϕ_j with λ_1, . . . , λ_r ∈ C*, we may expect that the Borel transforms φ̂_j of the components of the formal solution have singularities at the points ζ = −(n_1 λ_1 + ⋯ + n_r λ_r), n ∈ Z^r_{>0}; this set of possible singular points may fail to be closed and discrete (depending on the arithmetical properties of (λ_1, . . . , λ_r)), hence, in general, we cannot expect these Borel transforms to be Σ-continuable for any Σ. Still, this does not prevent them from being always endlessly continuable, as proved in [START_REF] Kamimoto | Resurgence of formal series solutions of nonlinear differential and difference equations[END_REF].
- Another illustration of the need to go beyond Σ-continuability stems from parametric resurgence [START_REF] Écalle | Cinq applications des fonctions résurgentes[END_REF]. Suppose that we are given a holomorphic function b(t) globally defined on C, with isolated singularities ω ∈ S ⊂ C, e.g. a meromorphic function, and consider the differential equation
(1.3) dϕ/dt − zλϕ = b(t),
where λ ∈ C* is fixed and z is a large complex parameter with respect to which we consider perturbative expansions. It is easy to see that there is a unique solution which is formal in z and analytic in t, namely φ̃(z, t) := −∑_{k=0}^{∞} λ^{−k−1} z^{−k−1} b^{(k)}(t), and its Borel transform φ̂(ζ, t) = −λ^{−1} b(t + λ^{−1}ζ) is singular at all points of the form ζ_{t,ω} := λ(−t + ω), ω ∈ S. Now, if we add to the right-hand side of (1.3) a perturbation which is nonlinear in ϕ, we can expect to get a formal solution whose Borel transform possesses a rich set of singular points generated by the ζ_{t,ω}'s, which might easily be too rich to allow for Σ-continuability with any Σ; however, we can still hope for endless continuability. These are good motivations to study endlessly continuable functions. As already alluded to, we will use discrete filtered sets (d.f.s. for short) to work with them. A d.f.s.
is a family of sets Ω = (Ω_L)_{L∈R_{≥0}}, where each Ω_L is a finite set; we will define Ω-continuability when Ω is a d.f.s., thus extending Definition 1.4, and the space of endlessly continuable functions will appear as the totality of Ω-continuable functions for all possible d.f.s. This was already the approach of [START_REF] Candelpergher | Approche de la résurgence[END_REF], and it was used in [START_REF] Deleabaere | Endless continuability and convolution product[END_REF] to prove that the convolution product of two endlessly continuable functions is endlessly continuable, hence R̃ is a subring of C[[z^{−1}]]. However, to reach the conclusions of Theorem 1.3, we will need to give precise estimates on the convolution product of an arbitrary number of endlessly continuable functions, so as to prove the convergence of the series of holomorphic functions ∑_k c_k φ̂_1^{*k_1} * ⋯ * φ̂_r^{*k_r} (Borel transform of the right-hand side of (1.2)) and to check its endless continuability. We will proceed similarly in the case of endless continuability w.r.t. bounded direction variation, using discrete doubly filtered sets. Notice that explicit bounds for iterated convolutions can be useful in themselves; in the context of Σ-continuability, such bounds were obtained in [START_REF]Nonlinear analysis with resurgent functions[END_REF] and they were used in [K 3 16] in a study in WKB analysis, where the authors track the analytic dependence upon parameters in the exponential of the Voros coefficient. As another contribution to the study of endlessly continuable functions, we will show how to construct, for each discrete filtered set Ω, a universal Riemann surface X_Ω whose holomorphic functions are in one-to-one correspondence with Ω-continuable functions. The plan of the paper is as follows. -Section 2 introduces discrete filtered sets, the corresponding Ω-continuable functions and their Borel images, the Ω-resurgent series, and discusses their relation with Definitions 1.1 and 1.2. The case of discrete doubly filtered sets and the spaces R̂^dv and R̃^dv is in Section 2.5. -Section 3 discusses the notion of Ω-endless Riemann surface and shows how to construct a universal object X_Ω (Theorem 3.2). -In Section 4, we state and prove Theorem 4.8, which gives precise estimates for the convolution product of an arbitrary number of endlessly continuable functions. We also show the analogous statement for functions which are endlessly continuable w.r.t. bounded direction variation. -Section 5 is devoted to applications of Theorem 4.8: the proof of Theorem 1.3 and even of a more general and more precise version, Theorem 5.2, and an implicit resurgent function theorem, Theorem 5.3. Some of the results presented here have been announced in [START_REF] Kamimoto | Nonlinear analysis with endlessly continuable functions[END_REF]. 2 Discrete filtered sets and Ω-continuability In this section, we review the notions concerning discrete filtered sets (usually denoted by the letter Ω), the corresponding Ω-allowed paths and Ω-continuable functions. The relation with endless continuability is established, and sums of discrete filtered sets are defined in order to handle convolution of endlessly continuable functions. Discrete filtered sets We first introduce the notion of discrete filtered set, which will be used to describe the singularity structure of endlessly continuable functions (the first part of the definition is adapted from [START_REF] Candelpergher | Approche de la résurgence[END_REF] and [START_REF] Deleabaere | Endless continuability and convolution product[END_REF]): Definition 2.1. We use the notation R_{≥0} = {λ ∈ R | λ ≥ 0}. 1) A discrete filtered set, or d.f.s. for short, is a family Ω = (Ω_L)_{L∈R_{≥0}} where i) Ω_L is a finite subset of C for each L, ii) Ω_{L_1} ⊆ Ω_{L_2} for L_1 ≤ L_2, iii) there exists δ > 0 such that Ω_δ = Ø. 2) Let Ω and Ω′ be d.f.s. We write Ω ⊂ Ω′ if Ω_L ⊂ Ω′_L for every L. 3) We call upper closure of a d.f.s.
Ω the family of sets Ω̄ = (Ω̄_L)_{L∈R_{≥0}} defined by (2.1) Ω̄_L := ⋂_{ε>0} Ω_{L+ε} for L ∈ R_{≥0}. It is easy to check that Ω̄ is a d.f.s. and Ω ⊂ Ω̄. Example 2.2. Given a closed discrete subset Σ of C, the formula Ω(Σ)_L := {ω ∈ Σ | |ω| ≤ L} for L ∈ R_{≥0} defines a d.f.s. Ω(Σ) which coincides with its upper closure. From the definition of d.f.s., we find the following Lemma 2.3. For any d.f.s. Ω, there exists a real sequence (L_n)_{n≥0} such that 0 = L_0 < L_1 < L_2 < ⋯ and, for every integer n ≥ 0, L_n < L < L_{n+1} ⇒ Ω̄_{L_n} = Ω̄_L = Ω_L. Proof. First note that (2.1) entails (2.2) Ω̄_L = ⋂_{ε>0} Ω̄_{L+ε} for every L ∈ R_{≥0} (because Ω_{L+ε} ⊂ Ω̄_{L+ε} ⊂ Ω_{L+2ε}). Consider the weakly order-preserving integer-valued function L ∈ R_{≥0} → N(L) := card Ω̄_L. For each L the sequence k → N(L + 1/k) must be eventually constant, hence there exists ε_L > 0 such that, for all L′ ∈ (L, L + ε_L], N(L′) = N(L + ε_L), whence Ω̄_{L′} = Ω̄_{L+ε_L}, and in fact, by (2.2), this holds also for L′ = L. The conclusion follows from the fact that R_{≥0} = ⋃_{k∈Z} N^{−1}(k) and each non-empty N^{−1}(k) is convex, hence an interval, which by the above must be left-closed and right-open, hence of the form [L, L′) or [L, ∞). Given a d.f.s. Ω, we set (2.3) S_Ω := {(λ, ω) ∈ R × C | λ ≥ 0 and ω ∈ Ω_λ} and denote by S̄_Ω the closure of S_Ω in R × C. We then call (2.4) M_Ω := (R × C) \ S̄_Ω (open subset of R × C) the allowed open set associated with Ω. Lemma 2.4. One has S̄_Ω = S_Ω̄ and M_Ω = M_Ω̄. Proof. Suppose (λ, ω) ∈ S_Ω̄. Then ω ∈ Ω_{λ+1/k} for each k ≥ 1, hence (λ + 1/k, ω) ∈ S_Ω, whence (λ, ω) ∈ S̄_Ω. Suppose (λ, ω) ∈ S̄_Ω. Then there exists a sequence (λ_k, ω_k)_{k≥1} in S_Ω which converges to (λ, ω). If ε > 0, then λ_k ≤ λ + ε for k large enough, hence ω_k ∈ Ω_{λ+ε}, whence ω ∈ Ω_{λ+ε} (because a finite set is closed); therefore (λ, ω) ∈ S_Ω̄. Therefore S_Ω̄ = S̄_Ω; in particular S_Ω̄ is closed, so S̄_Ω̄ = S̄_Ω and M_Ω̄ = M_Ω. Ω-allowed paths When dealing with a Lipschitz path γ : [a, b] → C, we denote by L(γ) its length.
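Here is a small concrete illustration of the upper closure introduced above (the example is ours, not from the paper):

```latex
% A d.f.s. that differs from its upper closure at exactly one level.
\[
  \Omega_L := \begin{cases} \emptyset & \text{for } L \le 1,\\ \{1\} & \text{for } L > 1, \end{cases}
  \qquad\Longrightarrow\qquad
  \overline{\Omega}_L = \bigcap_{\varepsilon > 0} \Omega_{L+\varepsilon}
  = \begin{cases} \emptyset & \text{for } L < 1,\\ \{1\} & \text{for } L \ge 1. \end{cases}
\]
% Omega and its upper closure agree except at L = 1, where
% \overline{\Omega}_1 = \{1\} while \Omega_1 = \emptyset.
```

The upper closure is thus the "right-continuous" regularization of Ω, constant on the left-closed, right-open intervals of Lemma 2.3.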
We denote by Π the set of all Lipschitz paths γ : [0, t*] → C such that γ(0) = 0, with some real t* ≥ 0 depending on γ. Given such a γ ∈ Π and t ∈ [0, t*], we denote by γ_{|t} := γ|_{[0,t]} ∈ Π the restriction of γ to the interval [0, t]. Notice that t → L(γ_{|t}) is also Lipschitz continuous on [0, t*] since γ′ exists a.e. and is essentially bounded by Rademacher's theorem. Definition 2.5. Given a d.f.s. Ω, we call Ω-allowed path any γ ∈ Π such that (L(γ_{|t}), γ(t)) ∈ M_Ω for all t. We denote by Π_Ω the set of all Ω-allowed paths. Notice that, given t* ≥ 0, (2.5) if t ∈ [0, t*] → Γ(t) = (λ(t), γ(t)) ∈ M_Ω is a piecewise C^1 path such that Γ(0) = (0, 0) and λ′(t) = |γ′(t)| for a.e. t, then γ ∈ Π_Ω. In view of Lemmas 2.3 and 2.4, we have the following characterization of Ω-allowed paths: Lemma 2.6. Let Ω be a d.f.s. Then Π_Ω = Π_Ω̄ and, given γ ∈ Π, the following are equivalent: 1) γ ∈ Π_Ω, 2) γ(t) ∈ C \ Ω̄_{L(γ_{|t})} for every t, 3) for every t, there exists n such that L(γ_{|t}) < L_{n+1} and γ(t) ∈ C \ Ω̄_{L_n} (using the notation of Lemma 2.3). Proof. Obvious. Notation 2.7. For L, δ > 0, we set M_Ω^{δ,L} := {(λ, ζ) ∈ R × C | dist((λ, ζ), S_Ω) ≥ δ and λ ≤ L}, (2.6) Π_Ω^{δ,L} := {γ ∈ Π_Ω | (L(γ_{|t}), γ(t)) ∈ M_Ω^{δ,L} for all t}, (2.7) where dist(•, •) is the Euclidean distance in R × C ≃ R^3. Note that M_Ω = ⋃_{δ,L>0} M_Ω^{δ,L}, Π_Ω = ⋃_{δ,L>0} Π_Ω^{δ,L}. Ω-continuable functions and Ω-resurgent series Definition 2.8. Given a d.f.s. Ω, we call Ω-continuable function a holomorphic germ φ̂ ∈ C{ζ} which can be analytically continued along any path γ ∈ Π_Ω. We denote by R̂_Ω the set of all Ω-continuable functions and define R̃_Ω := B^{−1}(C δ ⊕ R̂_Ω) ⊂ C[[z^{−1}]] to be the set of Ω-resurgent series. Remark 2.9. Given a closed discrete subset Σ of C, the Σ-continuability in the sense of Definition 1.4 is equivalent to the Ω(Σ)-continuability in the sense of Definition 2.8 for the d.f.s. Ω(Σ) of Example 2.2. Remark 2.10.
Observe that Ω ⊂ Ω′ implies S_Ω ⊂ S_Ω′, hence M_Ω′ ⊂ M_Ω and Π_Ω′ ⊂ Π_Ω, therefore Ω ⊂ Ω′ ⇒ R̂_Ω ⊂ R̂_Ω′. Remark 2.11. Notice that, for the trivial d.f.s. Ω = Ø, R̂_Ø = O(C), hence O(C) ⊂ R̂_Ω for every d.f.s. Ω, i.e. entire functions are always Ω-continuable. Consequently, convergent series are always Ω-resurgent: C{z^{−1}} ⊂ R̃_Ω. However, R̂_Ω = O(C) does not imply Ω = Ø (consider for instance the d.f.s. Ω defined by Ω_L = Ø for 0 ≤ L < 2 and Ω_L = {1} for L ≥ 2). In fact, one can show R̂_Ω = O(C) ⇔ ∀L > 0, ∃L′ > L such that Ω_{L′} ⊂ {ω ∈ C | |ω| < L}. Remark 2.12. In view of Lemma 2.6, we have R̂_Ω = R̂_Ω̄. Therefore, when dealing with Ω-resurgence, we can always suppose that Ω coincides with its upper closure (by replacing Ω with Ω̄). We now show the relation between resurgence in the sense of Definition 1.2 and Ω-resurgence in the sense of Definition 2.8. Theorem 2.13. A formal series φ̃ ∈ C[[z^{−1}]] is resurgent if and only if there exists a d.f.s. Ω such that φ̃ is Ω-resurgent. In other words, (2.8) R̂ = ⋃_{Ω d.f.s.} R̂_Ω, R̃ = ⋃_{Ω d.f.s.} R̃_Ω. Before proving Theorem 2.13, we state a technical result. Lemma 2.14. Suppose that we are given a germ φ̂ ∈ C{ζ} that can be analytically continued along a path γ : [0, t*] → C of Π, and that F is a finite subset of C. Then, for each ε > 0, there exists a path γ* : [0, t*] → C of Π such that • γ*((0, t*)) ⊂ C \ F, • L(γ*) < L(γ) + ε, • γ*(t*) = γ(t*), the germ φ̂ can be analytically continued along γ*, and the analytic continuations along γ and γ* coincide. Proof of Lemma 2.14. Without loss of generality, we can assume that γ([0, t*]) is not reduced to {0} and that t → L(γ_{|t}) is strictly increasing. The analytic continuation assumption allows us to find a finite subdivision 0 = t_0 < ⋯ < t_m = t* of [0, t*] together with open discs ∆_0, . . . , ∆_m so that, for each k, γ(t_k) ∈ ∆_k, the analytic continuation of φ̂ along γ_{|t_k} extends holomorphically to ∆_k, and γ([t_k, t_{k+1}]) ⊂ ∆_k if k < m. For each k ≥ 1, let us pick s_k ∈ (t_{k−1}, t_k) such that γ([s_k, t_k]) ⊂ ∆_{k−1} ∩ ∆_k; increasing the value of s_k if necessary, we can assume γ(s_k) ∉ F. Let us also set s_0 := 0 and s_{m+1} := t*, so that, for 0 ≤ k ≤ m: γ([s_k, s_{k+1}]) ⊂ ∆_k, the analytic continuation of φ̂ along γ_{|s_k} is holomorphic in ∆_k, γ(s_k) ∉ F except maybe if k = 0, and γ(s_{k+1}) ∉ F except maybe if k = m. We now define γ* by specifying its restriction γ*|_{[s_k,s_{k+1}]} for each k so that it has the same endpoints as γ|_{[s_k,s_{k+1}]} and, -if the open line segment S := (γ(s_k), γ(s_{k+1})) is contained in C \ F, then we let γ*|_{[s_k,s_{k+1}]} start at γ(s_k) and end at γ(s_{k+1}) following S, by setting γ*(t) := γ(s_k) + ((t − s_k)/(s_{k+1} − s_k)) (γ(s_{k+1}) − γ(s_k)) for t ∈ [s_k, s_{k+1}], -if not, then S ∩ F = {ω_1, . . . , ω_ν} with ν ≥ 1 (depending on k); we pick ρ > 0 small enough so that πρ < min{ (1/2)|ω_i − γ(s_k)|, (1/2)|ω_i − γ(s_{k+1})|, (1/2)|ω_j − ω_i|, ε/(ν(m+1)) | 1 ≤ i, j ≤ ν, i ≠ j } and we let γ*|_{[s_k,s_{k+1}]} follow S except that it circumvents each ω_i by following a half-circle of radius ρ contained in ∆_k. This way, γ*|_{[s_k,s_{k+1}]} stays in ∆_k; the resulting path γ* : [0, t*] → C is thus a path of analytic continuation for φ̂ and the analytic continuations along γ and γ* coincide. On the other hand, the length of γ*|_{[s_k,s_{k+1}]} is < |γ(s_k) − γ(s_{k+1})| + ε/(m+1), whereas the length of γ|_{[s_k,s_{k+1}]} is ≥ |γ(s_k) − γ(s_{k+1})|, hence L(γ*) < L(γ) + ε. Proof of Theorem 2.13. Suppose first that Ω is a d.f.s. and φ̂ ∈ R̂_Ω. Then, for every L > 0, φ̂ meets the requirement of Definition 1.1 with F_L = Ω̄_L, hence φ̂ ∈ R̂. Thus R̂_Ω ⊂ R̂, which yields one inclusion in (2.8). Suppose now φ̂ ∈ R̂. In view of Definition 1.1, the radius of convergence δ of φ̂ is positive and, for each positive integer n, we can choose a finite set F_n such that (2.9) the germ φ̂ can be analytically continued along any path γ : [0, 1] → C of Π such that L(γ) < (n + 1)δ and γ((0, 1]) ⊂ C \ F_n. Let F_0 := Ø. The property (2.9) holds for n = 0 too. For every real L ≥ 0, we set Ω_L := ⋃_{k=0}^{n} F_k with n := ⌊L/δ⌋. One can check that Ω := (Ω_L)_{L∈R_{≥0}} is a d.f.s. which coincides with its upper closure. We will show that φ̂ ∈ R̂_Ω. Pick an arbitrary γ : [0, 1] → C such that γ ∈ Π_Ω. It is sufficient to prove that φ̂ can be analytically continued along γ. Our assumption amounts to γ(t) ∈ C \ Ω_{L(γ_{|t})} for each t ∈ [0, 1]. Without loss of generality, we can assume that γ([0, 1]) is not reduced to {0} and that t → L(γ_{|t}) is strictly increasing. Let N := ⌊L(γ)/δ⌋. We define a subdivision 0 = t_0 < t_1 < ⋯ < t_N ≤ 1 by the requirement L(γ_{|t_n}) = nδ and set I_n := [t_n, t_{n+1}) for 0 ≤ n < N, I_N := [t_N, 1]. For each integer n such that 0 ≤ n ≤ N, (2.10) t ∈ I_n ⇒ nδ ≤ L(γ_{|t}) < (n + 1)δ, thus Ω_{L(γ_{|t})} = ⋃_{k=0}^{n} F_k, in particular (2.11) t ∈ I_n ⇒ γ(t) ∈ C \ F_n. Let us check by induction on n that φ̂ can be analytically continued along γ_{|t} for any t ∈ I_n. If t ∈ I_0, then γ_{|t} has length < δ and the conclusion follows from (2.9). Suppose now that 1 ≤ n ≤ N and that the property holds for n − 1. Let t ∈ I_n. By (2.10)-(2.11), we have L(γ_{|t}) < (n + 1)δ and γ([t_n, t]) ⊂ C \ F_n. -If γ((0, t_n)) ∩ F_n is empty, then the conclusion follows from (2.9). -If not, then, since C \ F_n is open, we can pick t* < t_n so that γ([t*, t]) ⊂ C \ F_n, and the induction hypothesis shows that φ̂ can be analytically continued along γ_{|t*}. We then apply Lemma 2.14 to γ_{|t*} with F = F_n and ε = (n + 1)δ − L(γ_{|t}): we get a path γ* : [0, t*] → C which defines the same analytic continuation for φ̂ as γ_{|t*}, avoids F_n and has length < L(γ_{|t*}) + ε. The concatenation of γ* with γ|_{[t*,t]} is a path γ** of length < (n + 1)δ which avoids F_n, so it is a path of analytic continuation for φ̂ because of (2.9), and so is γ itself. Sums of discrete filtered sets It is easy to see that, if Ω and Ω′ are d.f.s., then the formula (2.12) (Ω * Ω′)_L := {ω_1 + ω_2 | ω_1 ∈ Ω_{L_1}, ω_2 ∈ Ω′_{L_2}, L_1 + L_2 = L} ∪ Ω_L ∪ Ω′_L for L ∈ R_{≥0} defines a d.f.s. Ω * Ω′. We call it the sum of Ω and Ω′. The proof of the following lemma is left to the reader. Lemma 2.15. The law * on the set of all d.f.s. is commutative and associative. The formula Ω^{*n} := Ω * ⋯ * Ω (n times, for n ≥ 1) defines an inductive system, which gives rise to a d.f.s. Ω^{*∞} := lim_{→n} Ω^{*n}. As shown in [START_REF] Candelpergher | Approche de la résurgence[END_REF] and [START_REF] Deleabaere | Endless continuability and convolution product[END_REF], the sum of d.f.s. is useful to study the convolution product: Theorem 2.16 ([OD15]). Assume that Ω and Ω′ are d.f.s. and φ̂ ∈ R̂_Ω, ψ̂ ∈ R̂_Ω′. Then the convolution product φ̂ * ψ̂ is Ω * Ω′-continuable. Remark 2.17. Note that the notion of Σ-continuability in the sense of Definition 1.4 does not give such flexibility, because there are closed discrete sets Σ and Σ′ such that Ω(Σ) * Ω(Σ′) ≠ Ω(Σ″) for any closed discrete Σ″ (take e.g. Σ = Σ′ = (Z_{>0}√2) ∪ Z_{<0}), and in fact there are Σ-continuable functions φ̂ such that φ̂ * φ̂ is not Σ″-continuable for any Σ″. In view of Theorem 2.13, a direct consequence of Theorem 2.16 is that the space R̂ of endlessly continuable functions is stable under convolution, and the space R̃ of resurgent formal series is a subring of the ring of formal series C[[z^{−1}]]. Given φ̃ ∈ R̃_Ω ∩ z^{−1}C[[z^{−1}]], Theorem 2.16 guarantees the Ω^{*k}-resurgence of φ̃^k for every integer k, hence its Ω^{*∞}-resurgence. This is a first step towards the proof of the resurgence of F(φ̃) for F(w) = ∑ c_k w^k ∈ C{w}, i.e.
Theorem 1.3 in the case r = 1; however, some analysis is needed to prove the convergence of ∑ c_k φ̃^k in some appropriate topology. What we need is a precise estimate for the convolution product of an arbitrary number of endlessly continuable functions, and this will be the content of Theorem 4.8. In Section 5, the substitution problem will be discussed in a more general setting, resulting in Theorem 5.2, which is more general and more precise than Theorem 1.3. Discrete doubly filtered sets and a more general definition of resurgence We now define the spaces R̂^dv and R̃^dv which were alluded to in the introduction. We first require the notion of "direction variation" of a C^{1+Lip} path. We denote by Π^dv the set of all C^1 paths γ belonging to Π, such that γ′ is Lipschitz and never vanishes. By Rademacher's theorem, γ″ exists a.e. on the interval of definition [0, t*] of γ and is essentially bounded. We can thus define the direction variation V(γ) of γ ∈ Π^dv by V(γ) := ∫_0^{t*} |Im(γ″(t)/γ′(t))| dt (notice that one can write γ′(t) = |γ′(t)| e^{iθ(t)} with a real-valued Lipschitz function θ, and then Im(γ″(t)/γ′(t)) = θ′(t), hence V(γ) is nothing but the length of the path θ). Note that the function t → V(γ_{|t}) is Lipschitz. Definition 2.18. A convergent power series φ̂ ∈ C{ζ} is said to be endlessly continuable w.r.t. bounded direction variation (and we write φ̂ ∈ R̂^dv) if, for every real L, M > 0, there exists a finite subset F_{L,M} of C such that φ̂ can be analytically continued along every path γ : [0, 1] → C such that γ ∈ Π^dv, L(γ) < L, V(γ) < M, and γ((0, 1]) ⊂ C \ F_{L,M}. We also set R̃^dv := B^{−1}(C δ ⊕ R̂^dv). Note that R̂ ⊂ R̂^dv ⊂ C{ζ} and R̃ ⊂ R̃^dv ⊂ C[[z^{−1}]]. Definition 2.19. A discrete doubly filtered set, or d.d.f.s. for short, is a family Ω = (Ω_{L,M})_{L,M∈R_{≥0}} that satisfies i) Ω_{L,M} is a finite subset of C for each L and M, ii) Ω_{L_1,M_1} ⊆ Ω_{L_2,M_2} when L_1 ≤ L_2 and M_1 ≤ M_2, iii) there exists δ > 0 such that Ω_{δ,M} = Ø for all M ≥ 0.
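Returning for a moment to the sum (2.12) of discrete filtered sets: by monotonicity (condition ii) of Definition 2.1), a d.f.s. which coincides with its upper closure and has finitely many singular points altogether is determined by the map ω ↦ min{L | ω ∈ Ω_L}, and (2.12) then becomes a finite computation. The following sketch is our own illustration, not code from the paper; the dictionary encoding and the function names are our assumptions.

```python
# Sketch: a finite d.f.s. encoded as {omega: first_length}, where
# first_length(omega) = min{L : omega in Omega_L}. This assumes Omega equals
# its upper closure and has finitely many singular points in total.

def dfs_sum(om1, om2):
    """Sum (2.12) of two d.f.s.: omega1 + omega2 enters at length l1 + l2,
    and each original singular point is kept with its own entry length."""
    out = dict(om1)
    for w, l in om2.items():
        out[w] = min(out.get(w, float("inf")), l)
    for w1, l1 in om1.items():
        for w2, l2 in om2.items():
            w, l = w1 + w2, l1 + l2
            out[w] = min(out.get(w, float("inf")), l)
    return out

def level(om, L):
    """The finite set Omega_L."""
    return {w for w, l in om.items() if l <= L}

# Omega(Sigma) for Sigma = {1}: the point 1 enters at length |1| = 1.
om = {1.0: 1.0}
s = dfs_sum(om, om)
print(level(s, 1.5))   # the sum 2 = 1 + 1 is not yet visible at L = 1.5
print(level(s, 2.0))   # contains both 1.0 (at length 1) and 2.0 (at length 2)
```

This reproduces the heuristic that "lengths add along with singular points": the sum 1 + 1 = 2 only constrains paths of length at least 2.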
Notice that a d.f.s. Ω can be regarded as a d.d.f.s. Ω^dv by setting Ω^dv_{L,M} := Ω_L for L, M ≥ 0. For a d.d.f.s. Ω, we set S_Ω := {(µ, λ, ω) ∈ R^2 × C | µ ≥ 0, λ ≥ 0 and ω ∈ Ω_{λ,µ}} and M_Ω := (R^2 × C) \ S̄_Ω, where S̄_Ω is the closure of S_Ω in R^2 × C. We call Ω-allowed path any γ ∈ Π^dv such that (2.13) γ^dv(t) := (V(γ_{|t}), L(γ_{|t}), γ(t)) ∈ M_Ω for all t. We denote by Π^dv_Ω the set of all Ω-allowed paths. Finally, the set of Ω-continuable functions (resp. Ω-resurgent series) is defined in the same way as in Definition 2.8, and denoted by R̂^dv_Ω (resp. R̃^dv_Ω). Arguing as for Theorem 2.13, one obtains (2.14) R̂^dv = ⋃_{Ω d.d.f.s.} R̂^dv_Ω, R̃^dv = ⋃_{Ω d.d.f.s.} R̃^dv_Ω. The sum Ω * Ω′ of two d.d.f.s. Ω and Ω′ is the d.d.f.s. defined by setting, for L, M ∈ R_{≥0}, (2.15) (Ω * Ω′)_{L,M} := {ω_1 + ω_2 | ω_1 ∈ Ω_{L_1,M}, ω_2 ∈ Ω′_{L_2,M}, L_1 + L_2 = L} ∪ Ω_{L,M} ∪ Ω′_{L,M}. 3 The endless Riemann surface associated with a d.f.s. We introduce the notion of Ω-endless Riemann surfaces for a d.f.s. Ω as follows: Definition 3.1. We call Ω-endless Riemann surface any triple (X, p, 0) such that X is a connected Riemann surface, p : X → C is a local biholomorphism, 0 ∈ p^{−1}(0), and any path γ : [0, 1] → C of Π_Ω has a lift γ̃ : [0, 1] → X such that γ̃(0) = 0. A morphism of Ω-endless Riemann surfaces is a local biholomorphism q : (X, p, 0) → (X′, p′, 0′), i.e. a local biholomorphism q : X → X′ such that p′ ∘ q = p and q(0) = 0′. In this section, we prove the existence of an initial object (X_Ω, p_Ω, 0_Ω) in the category of Ω-endless Riemann surfaces: Theorem 3.2. There exists an Ω-endless Riemann surface (X_Ω, p_Ω, 0_Ω) such that, for any Ω-endless Riemann surface (X, p, 0), there is a unique morphism q : (X_Ω, p_Ω, 0_Ω) → (X, p, 0). The Ω-endless Riemann surface (X_Ω, p_Ω, 0_Ω) is unique up to isomorphism and X_Ω is simply connected. Construction of X_Ω We first define the "skeleton" of Ω: Definition 3.3.
Let V_Ω ⊂ ⋃_{n=1}^{∞} (C × Z)^n be the set of vertices v := ((ω_1, σ_1), ⋯, (ω_n, σ_n)) ∈ (C × Z)^n that satisfy the following conditions: 1) (ω_1, σ_1) = (0, 0) and (ω_j, σ_j) ∈ C × (Z \ {0}) for j ≥ 2, 2) ω_j ≠ ω_{j+1} for j = 1, ⋯, n − 1, 3) ω_j ∈ Ω̄_{L_j(v)} with L_j(v) := ∑_{i=1}^{j−1} |ω_{i+1} − ω_i| for j = 2, ⋯, n. Let E_Ω ⊂ V_Ω × V_Ω be the set of edges e = (v′, v) that satisfy one of the following conditions: i) v = ((ω_1, σ_1), ⋯, (ω_n, σ_n)) and v′ = ((ω_1, σ_1), ⋯, (ω_n, σ_n), (ω_{n+1}, ±1)), ii) v = ((ω_1, σ_1), ⋯, (ω_n, σ_n)) and v′ = ((ω_1, σ_1), ⋯, (ω_n, σ_n + 1)) with σ_n ≥ 1, iii) v = ((ω_1, σ_1), ⋯, (ω_n, σ_n)) and v′ = ((ω_1, σ_1), ⋯, (ω_n, σ_n − 1)) with σ_n ≤ −1. We denote the directed tree diagram (V_Ω, E_Ω) by Sk_Ω and call it the skeleton of Ω. Notation 3.4. For v ∈ V_Ω ∩ (C × Z)^n, we set ω(v) := ω_n and L(v) := L_n(v). From the definition of Sk_Ω, we find the following Lemma 3.5. For each v ∈ V_Ω \ {(0, 0)}, there exists a unique vertex v↑ ∈ V_Ω such that (v, v↑) ∈ E_Ω. To each v ∈ V_Ω we assign a cut plane U_v, defined as the open set U_v := C \ (C_v ∪ ⋃ C_{v′→v}), where the union is taken over all the vertices v′ ∈ V_Ω that have an edge (v′, v) ∈ E_Ω of type i), and where C_v := Ø when v = (0, 0), C_v := {ω_n − s(ω_n − ω_{n−1}) | s ∈ R_{≥0}} when v ≠ (0, 0), and C_{v′→v} := {ω_{n+1} + s(ω_{n+1} − ω_n) | s ∈ R_{≥0}}. Figure 1: The set U_v. We patch the U_v's along the cuts according to the following rules: Suppose first that (v′, v) is an edge of type i), with v′ = (v, (ω_{n+1}, σ_{n+1})) ∈ V_Ω. To it, we assign a line segment or a half-line ℓ_{v′→v} as follows: If there exists u = (v, (ω′_{n+1}, ±1)) ∈ V_Ω such that ω′_{n+1} ∈ C_{v′→v} \ {ω_{n+1}}, take u^{(0)} = (v, (ω^{(0)}_{n+1}, ±1)) ∈ V_Ω so that |ω^{(0)}_{n+1} − ω_{n+1}| is minimum and assign the open line segment ℓ_{v′→v} := {ω_{n+1} + s(ω^{(0)}_{n+1} − ω_{n+1}) | s ∈ (0, 1)} to (v′, v). Otherwise, we assign the open half-line ℓ_{v′→v} := C_{v′→v} \ {ω_{n+1}} to (v′, v).
Since each Ω̄_L (L ≥ 0) is finite, we can take a connected neighborhood U_{v′→v} of ℓ_{v′→v} so that (3.1) U_{v′→v} \ ℓ_{v′→v} = U^+_{v′→v} ∪ U^−_{v′→v} and U^±_{v′→v} ⊂ U_v ∩ U_{v′}, where U^±_{v′→v} := {ζ ∈ U_{v′→v} | ±Im(ζ • ζ′) > 0 for ζ′ ∈ ℓ_{v′→v}}. Then, if σ_{n+1} = 1, we glue U_v and U_{v′} along U^−_{v′→v}, whereas if σ_{n+1} = −1 we glue them along U^+_{v′→v}. Suppose now that (v′, v) is an edge of type ii) or iii). As in the case of i), if there exists u = (v, (ω′_{n+1}, ±1)) ∈ V_Ω such that ω′_{n+1} ∈ C_v \ {ω_n}, then we take u^{(0)} = (v, (ω^{(0)}_{n+1}, ±1)) ∈ V_Ω so that |ω^{(0)}_{n+1} − ω_n| is minimum and assign ℓ_{v′→v} := {ω_n + s(ω^{(0)}_{n+1} − ω_n) | s ∈ (0, 1)} to (v′, v). Otherwise, we assign ℓ_{v′→v} := C_v \ {ω_n} to (v′, v). Then, we take a connected neighborhood U_{v′→v} of ℓ_{v′→v} satisfying (3.1), and glue U_v and U_{v′} along U^−_{v′→v} in case ii), and along U^+_{v′→v} in case iii). Patching the U_v's and the U_{v′→v}'s according to the above rules, we obtain a Riemann surface X_Ω, in which we denote by 0_Ω the point corresponding to 0 ∈ U_{(0,0)}. The map p_Ω : X_Ω → C is naturally defined using the local coordinates U_v and U_{v′→v}. Figure 2: The set U_{v′→v}. We keep the notations U_e, ℓ_e (e ∈ E_Ω) and U_v (v ∈ V_Ω) for the corresponding subsets of X_Ω. Notice that each ζ ∈ X_Ω belongs to one of the ℓ_e's or U_v's (e ∈ E_Ω or v ∈ V_Ω). Therefore, we have the following decomposition of X_Ω: X_Ω = (⊔_{v∈V_Ω} U_v) ⊔ (⊔_{e∈E_Ω} ℓ_e). Definition 3.6. We define a function L : X_Ω → R_{≥0} by the following formula: L(ζ) := L(v) + |p_Ω(ζ) − ω(v)| when ζ ∈ U_v ⊔ ℓ_{v→v↑}. We call L(ζ) the canonical distance of ζ from 0_Ω. We obtain from the construction of L the following Lemma 3.7. The function L : X_Ω → R_{≥0} is continuous and satisfies, for every γ ∈ Π_Ω with lift γ̃ on X_Ω, the inequality L(γ̃(t)) ≤ L(γ_{|t}) for t ∈ [0, 1]. We now show the fundamental properties of X_Ω. Lemma 3.8.
The Riemann surface X_Ω constructed above is simply connected. Proof. We first note that, since Sk_Ω is connected, X_Ω is path-connected. Let γ : [0, 1] → X_Ω be a path such that γ(0) = γ(1). Since the image of γ is a compact set in X_Ω, we can take a finite number of vertices {v_j}_{j=1}^{p} ⊂ V_Ω and edges {e_j}_{j=1}^{q} ⊂ E_Ω so that v_1 = (0, 0) and the image of γ is covered by {U_{v_j}}_{j=1}^{p} and {U_{e_j}}_{j=1}^{q}. Since each of the v_j's (j ≥ 2) and e_j's has a path to v_1 that contains it, interpolating a finite number of vertices and edges if necessary, we may assume that the subdiagram Sk of Sk_Ω defined by {v_j}_{j=1}^{p} and {e_j}_{j=1}^{q} is connected. Now, let U be the union of {U_{v_j}}_{j=1}^{p} and {U_{e_j}}_{j=1}^{q}. Since all of these open sets are simply connected and Sk is acyclic, we can inductively confirm using van Kampen's theorem that U is simply connected. Therefore, the path γ can be contracted to the point 0_Ω. This proves the simple connectedness of X_Ω. Lemma 3.9. The Riemann surface X_Ω constructed above is Ω-endless. Proof. Take an arbitrary Ω-allowed path γ and δ, L > 0 so that γ ∈ Π^{δ,L}_Ω. Let V^{δ,L}_Ω denote the set of vertices v = ((ω_1, σ_1), ⋯, (ω_n, σ_n)) ∈ V_Ω that satisfy L^δ(v) := L_n(v) + ∑_{j=2}^{n} (|σ_j| − 1)δ ≤ L and set E^{δ,L}_Ω := {(v, v↑) ∈ E_Ω | v ∈ V^{δ,L}_Ω}. Notice that V^{δ,L}_Ω and E^{δ,L}_Ω are finite. We set, for ε > 0 and v ∈ V^{δ,L}_Ω, U^{δ,L,ε}_v := {ζ ∈ U_v | inf_{(v′,v)∈E_Ω} |ζ − ω(v′)| ≥ δ, D^ε_ζ ⊂ U_v} ∩ D^{L−L^δ(v)}_{ω(v)}, where D^r_ζ := {ζ̃ ∈ C | |ζ̃ − ζ| ≤ r} for ζ ∈ C, r > 0. We also set, for ε > 0 and (v, v↑) ∈ E^{δ,L}_Ω, U^{δ,L,ε}_{v→v↑} := {ζ ∈ U_{v→v↑} | min_{j=1,2} |ζ − ω̃_j| ≥ δ, inf_{ζ̃∈ℓ_{v→v↑}} |ζ − ζ̃| ≤ ε} ∩ D^{L−L^δ(v↑)}_{ω(v)}, where ω̃_1 := ω(v) and ω̃_2 is the other endpoint of ℓ_{v→v↑} if it exists, and ω̃_2 := ω(v) otherwise. Since E^{δ,L}_Ω is a finite set, we can take ε > 0 sufficiently small so that D^ε_ζ ⊂ U_{v→v↑} for all ζ ∈ U^{δ,L,ε}_{v→v↑} and (v, v↑) ∈ E^{δ,L}_Ω. We fix such a number ε > 0.
Now, let I be the maximal interval such that the restriction of γ to I has a lift γ̃ on X_Ω. Obviously, I ≠ Ø and I is open. Assume that I = [0, a) for a ∈ (0, 1]. We take b ∈ (0, a) so that L(γ_{|a}) − L(γ_{|b}) < ε. Then, notice that, since γ ∈ Π^{δ,L}_Ω and γ_{|b} has a lift on X_Ω, γ̃(b) is in U^{δ,L,ε}_v for some v ∈ V^{δ,L}_Ω or in U^{δ,L,ε}_e for some e ∈ E^{δ,L}_Ω. Since D^ε_{γ(b)} ⊂ U_v (resp., D^ε_{γ(b)} ⊂ U_e) when γ̃(b) ∈ U^{δ,L,ε}_v (resp., γ̃(b) ∈ U^{δ,L,ε}_e), and since γ([b, a]) ⊂ D^ε_{γ(b)} by the choice of b, the lift γ̃ extends to [0, a], which contradicts the maximality of I. Hence I = [0, 1], i.e. γ has a lift on X_Ω starting at 0_Ω. Proof of Theorem 3.2 We first show the following: Lemma 3.10. For all ε > 0 and ζ ∈ X_Ω, there exists an Ω-allowed path γ such that L(γ) < L(ζ) + ε and its lift γ̃ on X_Ω satisfies γ̃(0) = 0_Ω and γ̃(1) = ζ. Proof. Let ζ ∈ U_v for v = ((ω_1, σ_1), ⋯, (ω_n, σ_n)). We consider a polygonal curve P^0_ζ obtained by connecting the line segments [ω_j, ω_{j+1}] (j = 1, ⋯, n), where we set ω_{n+1} := p_Ω(ζ) for the sake of notational simplicity. Now, collect all the points ω_{j,k} on (ω_j, ω_{j+1}) such that (L_{j,k}, ω_{j,k}) ∈ S̄_Ω, where L_{j,k} := L_j(v) + |ω_{j,k} − ω_j|. Since (3.2) S̄_Ω ∩ ({λ ∈ R_{≥0} | λ ≤ L} × C) can be written, for each L > 0, as the union of a finite number of line segments of the form {λ ∈ R_{≥0} | L̃ ≤ λ ≤ L} × {ω} (L̃ > 0, ω ∈ C), such points are finite in number. We order the ω_j's and ω_{j,k}'s so that L_j(v) and L_{j,k} increase along the order and denote the resulting sequence by (ω′_1, ω′_2, ⋯, ω′_{n′}). We set L′_j := ∑_{i=1}^{j−1} |ω′_{i+1} − ω′_i|. We extend v to v′ = ((ω′_1, σ′_1), ⋯, (ω′_{n′}, σ′_{n′})) by setting σ′_j = 1 (resp., σ′_j = −1) when (ω′_j, L′_j) = (ω_{i,k}, L_{i,k}) for some i, k and σ_{i+1} ≥ 1 (resp., σ_{i+1} ≤ −1). Then, in view of (3.2), we can take δ > 0 so that {(L′_j + |ζ′ − ω′_j| + δ, ζ′) | ζ′ ∈ (ω′_j, ω′_{j+1})} ∩ S̄_Ω = Ø and {(L′_j + δ, ζ′) | 0 < |ζ′ − ω′_j| < δ} ∩ S̄_Ω = Ø hold for j = 1, ⋯, n′. Let ω′_{j,−} (resp., ω′_{j,+}) be the intersection point of [ω′_{j−1}, ω′_j] (resp., [ω′_j, ω′_{j+1}]) and C^{ε′}_{ω′_j} := {ζ′ ∈ C | |ζ′ − ω′_j| = ε′} for sufficiently small ε′ > 0.
We replace the part [ω ′ j,-, ω ′ j ] ∪ [ω ′ j , ω ′ j,+ ] of P 0 ζ with a path that goes anti-clockwise (resp., clockwise) along C ε ′ ω ′ j from ω ′ j,-to ω ′ j,+ and turns around ω ′ j (|σ ′ j | -1)-times when σ ′ j ≥ 1 (resp., when σ ′ j ≤ -1). Let P ε ′ ζ denote the path obtained in this way. It defines an Ω-allowed path and its lift P ε ′ ζ on X Ω satisfies the conditions. Further, by taking ε ′ sufficiently small so that 2πε ′ n ′ j=2 |σ ′ j | < ε, we find L(P ε ′ ζ ) < L(ζ) + ε, hence one can take γ = P ε ′ ζ . When ζ ∈ ℓ e for an edge e = (v, v ↑ ) ∈ E Ω , we can construct such a path P ε ′ ζ ∈ Π Ω . There exist a neighborhood U ζ of ζ and a continuous deformation Q ε ′ ζ,ζ ′ ∈ Π Ω (ζ ′ ∈ U ζ ) of the path γ = P ε ′ ζ constructed in the proof of Lemma 3.10 such that L(Q ε ′ ζ,ζ ′ ) < L(ζ ′ ) + 2ε for each ζ ′ ∈ U ζ and the lift Q ε ′ ζ,ζ ′ on X Ω satisfies Q ε ′ ζ,ζ ′ (0) = 0 and Q ε ′ ζ,ζ ′ (1) = ζ ′ . Indeed, the deformation of P ε ′ ζ is concretely given as follows: - When ζ ∈ U v for v ∈ V Ω , taking a neighborhood U ζ ⊂ U v of ζ sufficiently small, we find that the family of the paths P ε ′ ζ ′ (ζ ′ ∈ U ζ ) constructed in the proof of Lemma 3.10 gives such a deformation. -When ζ ∈ ℓ e for e ∈ E Ω , we can take a neighborhood U ζ ⊂ U e of ζ so that [ω ′ n ′ ,+ (ζ ′ ), p Ω (ζ ′ )] ⊂ U e for all ζ ′ ∈ U ζ , where ω ′ n ′ ,+ (ζ ′ ) is the intersection point of [ω ′ n ′ , p Ω (ζ ′ )] and C ε ′ ω ′ n ′ . Define a deformation Q ε ′ ζ,ζ ′ (ζ ′ ∈ U e ) of P ε ′ ζ by continuously varying the arc of C ε ′ ω ′ n ′ from ω ′ n ′ ,-to ω ′ n ′ ,+ (ζ ′ ) and the line segment [ω ′ n ′ ,+ (ζ ′ ), p Ω (ζ ′ )] and fixing the other part of P ε ′ ζ . Then, shrinking U ζ if necessary, we find that Q ε ′ ζ,ζ ′ satisfies Q ε ′ ζ,ζ ′ ∈ Π Ω and L(Q ε ′ ζ,ζ ′ ) < L(ζ ′ ) + 2ε for each ζ ′ ∈ U ζ . Beware that, when the edge (v, v ↑ ) is of type i), Q ε ′ ζ,ζ ′ is different from P ε ′ ζ ′ for ζ ∈ ℓ v→v ↑ and ζ ′ ∈ U ζ ∩ U v ↑ . On the other hand, Q ε ′ ζ,ζ ′ = P ε ′ ζ ′ holds for ζ ′ ∈ U ζ ∩ U v .
When the edge (v, v ↑ ) is the type ii) or iii), Q ε ′ ζ,ζ ′ = P ε ′ ζ ′ holds for ζ ∈ ℓ v→v ↑ and ζ ′ ∈ U ζ . Let (X, p, 0) be an Ω-endless Riemann surface. For each ζ ∈ X Ω , take γ ∈ Π Ω such that γ(1) = ζ and let γ X be its lift on X. Then, define a map q : X Ω → X by q(ζ) = γ X (1). We now show the well-definedness of q. For that purpose, it suffices to prove the following Proposition 3.12. Let γ 0 , γ 1 ∈ Π Ω such that γ 0 (1) = γ 1 (1). Then, there exists a continuous family (H s ) s∈[0,1] of Ω-allowed paths satisfying the conditions 1. H s (0) = 0 and H s (1) = γ 0 (1) for all s ∈ [0, 1], 2. H j = γ j for j = 0, 1. The proof of Proposition 3.12 is reduced to the following Lemma 3.13. For each γ ∈ Π Ω and ε ′ > 0 sufficiently small, there exists a continuous family ( Hs ) s∈[0,1] of Ω-allowed paths satisfying the following conditions: 1. L Hs ≤ L(γ |s ) and Hs (1) = γ(s) for all s ∈ [0, 1], 2. Hs = P ε ′ γ(s) for s = 0, 1. Notice that, since γ(0 ) = 0 Ω , P ε ′ γ(0) is the constant map P ε ′ γ(0) = 0. Reduction of Proposition 3.12 to Lemma 3.13. For each γ ∈ Π Ω and s ∈ (0, 1], define H s using Hs constructed in Lemma 3.13 as follows: H s (t) := Hs (t/s) when t ∈ [0, s], γ(t) when t ∈ [s, 1]. It extends continuously to s = 0 and gives a continuous family (H s ) s∈[0,1] of Ω-allowed paths satisfying the assumption in Proposition 3.12 with γ 0 = γ and γ 1 = P ε ′ γ(1) . Now, let γ 0 and γ 1 be the Ω-allowed paths satisfying the assumption in Proposition 3.12. Applying the above discussion to each of γ 0 and γ 1 , we obtain two families of Ω-allowed paths connecting them to P ε ′ γ 0 (1) and, concatenating the deformations at P ε ′ γ 0 (1) , we obtain a deformation (H s ) s∈[0,1] satisfying the conditions in Proposition 3.12. Proof of Lemma 3.13. Take δ, L > 0 so that γ ∈ Π δ,L Ω . 
We first show the following: (3.3) When γ(t 0 ) ∈ U v→(0,0) for t 0 ∈ (0, 1] and v = ((0, 0), (ω 2 , σ 2 )), the following estimate holds for t ∈ [t 0 , 1]: L(γ(t)) + √(|ω 2 | 2 + δ 2 ) -|ω 2 | ≤ L(γ |t ). Notice that, since γ ∈ Π δ,L Ω , the length L(γ |t 0 ) of γ |t 0 must be longer than that of the polygonal curve C obtained by concatenating the line segments [0, ω 2 + δe iθ ] and [ω 2 + δe iθ , γ(t 0 )], where θ = arg(ω 2 )σ 2 π/2. Then, we find that, for an arbitrary ε > 0, taking ε ′ > 0 sufficiently small, the path γε ′ obtained by concatenating the paths P ε ′ γ(t 0 ) and γ| [t 0 ,1] satisfies γε ′ ∈ Π Ω , γε ′ (t) = γ(t) and L(γ ε ′ |t ) ≤ L(γ 0 |t ) + ε for t ∈ [t 0 , 1]. Therefore, we have L(γ(t)) ≤ L(γ 0 |t ) for t ∈ [t 0 , 1]. Since L(C) ≥ √(|ω 2 | 2 + δ 2 ) + |γ(t 0 ) -ω 2 |, we find L(γ |t ) = L(γ 0 |t ) + L(γ |t 0 ) -L([0, γ(t 0 )]) ≥ L(γ(t)) + √(|ω 2 | 2 + δ 2 ) -|ω 2 | holds for t ∈ [t 0 , 1], and hence, we obtain (3.3). Now, we shall construct (H s ) s∈[0,1] . Let ε > 0 be given. We assign the path P ε ′ t γ(t) (ε ′ t > 0) to each t ∈ [0, 1] and take a neighborhood U γ(t) of γ(t) and the deformation Q ε ′ t γ(t),ζ ′ (ζ ′ ∈ U γ(t) ) of P ε ′ t γ(t) constructed in Lemma 3.11. Then, we can cover [0, 1] by a finite number of intervals I j = [a j , b j ] (j = 1, 2, • • • , k) satisfying the following conditions: -The interior I • j of I j satisfies I • j 1 ∩ I • j 2 ≠ Ø when |j 1 -j 2 | ≤ 1 and I j 1 ∩ I j 2 = Ø otherwise. -There exists t j ∈ I j such that t j < t j+1 for j = 1, • • • , k -1 and γ(I j ) ⊂ U γ(t j ) . Notice that, since U γ(t) is taken for each t ∈ [0, 1] so that it is contained in one of the charts U v (v ∈ V Ω ) or U e (e ∈ E Ω ), one of the following holds: - γ(t j ) ∈ U v and γ(I j ) ⊂ U v (v ∈ V Ω ). -γ(t j ) ∈ ℓ e and γ(I j ) ⊂ U e (e ∈ E Ω ). We set ε ′ = min j {ε ′ t j | γ(t j ) ∉ U (0,0) }. Then, P ε ′ γ(t j ) and its deformation Q ε ′ γ(t j ),ζ ′ (ζ ′ ∈ U γ(t j ) ) also satisfy the conditions in Lemma 3.10 and Lemma 3.11.
Let J E ⊂ {1, • • • , k} denote the set of suffixes satisfying the condition that there exists e ∈ E Ω such that γ(t j ) ∈ ℓ e and let j 0 be the minimum of J E . Shrinking the neighborhood U γ(t) for each t ∈ [0, 1] at the first, we may assume without loss of generality that, -|γ(t) -γ(t j )| ≤ ε for t ∈ I j and j = 1, • • • , k, -if j, j + 1 ∈ J E , there exists an edge e ∈ E Ω such that γ(t j ), γ(t j+1 ) ∈ ℓ e . Recall that, from the construction of Q ε ′ ζ,ζ ′ , Q ε ′ γ(t j ),γ(t) = Q ε ′ γ(t j+1 ),γ(t) for t ∈ I j ∩ I j+1 except for the cases where there exists an edge e = (v, v ↑ ) ∈ E Ω of the type i) such that -γ(t j ) ∈ U e and γ(t j+1 ) ∈ U v ↑ , -γ(t j ) ∈ U v ↑ and γ(t j+1 ) ∈ U e . In the first case, the difference between Q ε ′ γ(t j ),γ(t) and Q ε ′ γ(t j+1 ),γ(t) is the part from ω t (v ↑ ) to γ(t), where ω t (v ↑ ) is the intersection point of C ε ′ ω(v ↑ ) and [ω(v ↑ ), γ(t)]: Let ω e,i (i = 0, • • • , m + 1) be the points on the line segment [ω(v ↑ ), ω(v)] satisfying the conditions (L e,i , ω e,i ) ∈ S Ω and L e,i < L e,i+1 , where L e,i := L(v ↑ ) + |ω e,i -ω(v ↑ )|. Then, the part of Q ε ′ γ(t j ),γ(t) from ω t (v ↑ ) to γ(t) is given by concatenating the arcs of C ε ′ ω e,i (i = 0, • • • , m + 1), the intervals of the line segment [ω(v ↑ ), ω(v)] and [ω t (v), γ(t)], where ω t (v) is the intersection point of C ε ′ ω(v) and [ω(v), γ(t)]. (See Figure 4 (a).) On the other hand, Q ε ′ γ(t j+1 ),γ(t) goes directly from ω t (v ↑ ) to γ(t). (See Figure 4 (d).) Now, let ω t i,+ (resp. ω t i,-) be the intersection point of C ε ′ ω e,i and [ω t (v ↑ ), ω t (v)] that is the closer to ω t (v) (resp. ω t (v ↑ )). While t moves on I j ∩ I j+1 , we first deform the part of Q ε ′ γ(t j ),γ(t) from ω t (v ↑ ) to ω t (v) to the line segment [ω t (v ↑ ), ω t (v)] by shrinking the part of Q ε ′ γ(t j ),γ(t) from ω t i,- to ω t i,+ (resp. from ω t i,+ to ω t i+1,-) to the line segment [ω t i,-, ω t i,+ ] (resp. [ω t i,+ , ω t i+1,-]) for each i. 
(See Figure 4 (b) and (c).) Then, further shrinking the polygonal line given by concatenating [ω t (v ↑ ), ω t (v)] and [ω t (v), γ(t)] to the line segment [ω t (v ↑ ), γ(t)], we obtain a continuous family of Ω-allowed paths Hs s∈[t j ,t j+1 ] satisfying the following conditions: -Hs = Q ε ′ γ(t j ),γ(s) when s ∈ [t j , t j+1 ] \ I j+1 , -Hs = Q ε ′ γ(t j+1 ),γ(s) when s ∈ [t j , t j+1 ] \ I j , -L Hs ≤ L Q ε ′ γ(t j ),γ(s) and Hs (1) = γ(s) when s ∈ I j ∩ I j+1 . Figure 4. For the second case, we can also construct a continuous family of Ω-allowed paths Hs s∈[t j ,t j+1 ] satisfying the first and the second conditions above and -L Hs ≤ L Q ε ′ γ(t j+1 ),γ(s) and Hs (1) = γ(s) when s ∈ I j ∩ I j+1 . Then, we can continuously extend Hs to [0, 1] by interpolating it by Q ε ′ γ(t j ),γ(s) so that it satisfies (3.4) L Hs ≤ max j L Q ε ′ γ(t j ),γ(s) | s ∈ I j and Hs (1) = γ(s) for all s ∈ [0, 1]. Since I j 0 is taken so that |γ(t) -γ(t j 0 )| ≤ ε holds on I j 0 , applying (3.3) with t 0 = t j 0 , we have the following estimates: L(γ(t)) + √(|ω 2 | 2 + δ 2 ) -|ω 2 | -ε ≤ L(γ |t ) for t ∈ [a j 0 , 1]. On the other hand, since γ(t) ∈ U (0,0) for t ∈ [0, a j 0 ], we find L Q ε ′ γ(t j ),γ(t) = L(γ(t)) holds for t ∈ I j and j < j 0 from the construction of Q ε ′ ζ,ζ ′ . Therefore, taking ε > 0 sufficiently small so that 3ε ≤ √(|ω 2 | 2 + δ 2 ) -|ω 2 |, we obtain the following estimates from Lemma 3.10 and (3.4): L Hs ≤ L(γ |s ) for s ∈ [0, 1]. Finally, from the construction of Hs , we find that Hs satisfies Hs = P ε ′ γ(s) for s = 0, 1. Since p Ω = p • q and p is an isomorphism near 0, all the maps q : X Ω → X must coincide near 0 Ω , and hence, the uniqueness of q follows from the uniqueness of the analytical continuation of q. Finally, X Ω is unique up to isomorphism because X Ω is an initial object in the category of Ω-endless Riemann surfaces.
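To orient the reader, here is the simplest possible illustration of the construction just completed (a sketch added for exposition, not contained in the original text; we assume a d.f.s. Ω with a single singular point ω 1 ∈ C * , so that the conditions of Definition 3.3 can only produce chains ending at ω 1 ):

```latex
% Single singular point: \Omega_\lambda=\{\omega_1\} for all \lambda\ge|\omega_1|.
% The skeleton then reduces to the root and one vertex per winding index:
V_\Omega=\{(0,0)\}\;\cup\;\bigl\{\,v_\sigma=\bigl((0,0),(\omega_1,\sigma)\bigr)\;\bigm|\;\sigma\in\mathbb{Z}\setminus\{0\}\,\bigr\},
\qquad L_2(v_\sigma)=|\omega_1|.
```

Gluing the cut plane U (0,0) to the cut planes U vσ along the edges ((0, 0), v σ ) then reproduces, up to the (here trivial) length filtration, the universal cover of C \ {ω 1 } pointed at 0: the lift of an Ω-allowed path ends on the sheet U vσ when, roughly speaking, the path winds σ times around ω 1 .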
Supplement to the properties of X Ω Let O X denote the sheaf of holomorphic functions on a Riemann surface X and consider the natural morphism p * Ω : p -1 Ω O C → O X Ω induced by p Ω : X Ω → C . Since X Ω is simply connected, we obtain the following: Proposition 3.14. Let φ ∈ O C,0 . Then the followings are equivalent: i) φ ∈ O C,0 is Ω-continuable, ii) p * Ω φ ∈ O X Ω ,0 Ω can be analytically continued along any path on X Ω , iii) p * Ω φ ∈ O X Ω ,0 Ω can be extended to Γ(X Ω , O X Ω ). Therefore, we find p * Ω : RΩ ∼ -→ Γ(X Ω , O X Ω ). Notation 3.15. For L, δ > 0, using Π δ,L Ω of (2.7), we define a compact subset K δ,L Ω of X Ω by (3.5) K δ,L Ω := ζ ∈ X Ω | ∃γ ∈ Π δ,L Ω such that ζ = γ(1) . Notice that X Ω is exhausted by (K δ,L Ω ) δ,L>0 . Therefore, the family of seminorms • δ,L Ω (δ, L > 0) defined by f δ,L Ω := sup ζ∈K δ,L Ω | f (ζ)| for f ∈ Γ(X Ω , O X Ω ) induces a structure of Fréchet space on Γ(X Ω , O X Ω ). Definition 3.16. We introduce a structure of Fréchet space on RΩ by a family of seminorms • δ,L Ω (δ, L > 0) defined by φ δ,L Ω := |ϕ 0 | + p * Ω φ δ,L Ω for φ ∈ RΩ , where B( φ) = ϕ 0 δ + φ ∈ C δ ⊕ RΩ . Let Ω ′ be a d.f.s. such that Ω ⊂ Ω ′ . Since Π Ω ′ ⊂ Π Ω , X Ω is Ω ′ -endless. Therefore, Theorem 3.2 yields a morphism q : (X Ω ′ , p Ω ′ , 0 Ω ′ ) → (X Ω , p Ω , 0 Ω ), which induces a morphism q * : q -1 O X Ω → O X Ω ′ . Since q(K δ,L Ω ′ ) ⊂ K δ,L Ω , we have q * f δ,L Ω ′ ≤ f δ,L Ω for f ∈ Γ(X Ω , O X Ω ), and hence, φ δ,L Ω ′ ≤ φ δ,L Ω for φ ∈ RΩ . In view of Theorem 4.8 below, the product map RΩ × RΩ ′ → RΩ * Ω ′ is continuous and hence, when Ω * Ω = Ω, RΩ is a Fréchet algebra. 3.4 The endless Riemann surface associated with a d.d.f.s. In this section, we discuss the construction of the endless Riemann surfaces associated with an arbitrary d.d.f.s. Ω. Let us first define the skeleton of Ω: Definition 3.17. 
Let V Ω ⊂ ∞ n=1 (C × Z) n be the set of vertices v := ((ω 1 , σ 1 ), • • • , (ω n , σ n )) ∈ (C × Z) n that satisfy the conditions 1) and 2) in Definition 3.3 and 3') (M j (v), L j (v), ω j ) ∈ S Ω for j = 2, • • • , n, with L j (v) := j-1 i=1 |ω i+1 -ω i | (j = 2, • • • , n), M j (v) := { 0 (j = 2), j-1 i=2 A i (v) + 2π(|σ i | -1) (j = 3, • • • , n), and A i (v) := |θ i | if θ i σ i ≥ 0, 2π -|θ i | if θ i σ i < 0, where θ i := arg((ω i+1 -ω i )/(ω i -ω i-1 )) is taken so that θ i ∈ (-π, π]. Let E Ω ⊂ V Ω × V Ω be the set of edges e = (v ′ , v) that satisfy one of the conditions i) ∼ iii) in Definition 3.3. We denote the directed tree diagram (V Ω , E Ω ) by Sk Ω and call it the skeleton of Ω. Now, assigning a cut plane U v (resp. an open set U e ) to each v ∈ V Ω (resp. each e ∈ E Ω of type i)) defined in exactly the same way as in Section 3.1 and patching them as in Section 3.1, we obtain an initial object (X Ω , p Ω , 0 Ω ) in the category of Ω-endless Riemann surfaces associated with a d.d.f.s. Ω. We denote the lift of γ ∈ Π dv Ω on X Ω by γ. Estimates for the analytic continuation of iterated convolutions In this section, our aim is to prove the following theorem, which is the analytical core of our study of the convolution product of endlessly continuable functions. Theorem 4.1. Let δ, L > 0 be real numbers. Then there exist c, δ ′ > 0 such that, for every d.f.s. Ω such that Ω 4δ = Ø, for every integer n ≥ 1 and for every f1 , . . . , fn ∈ RΩ , the function 1 * f1 * • • • * fn (which is known to belong to RΩ * n ) satisfies (4.1) p * Ω * n 1 * f1 * • • • * fn (ζ) ≤ c n n! sup L 1 +•••+Ln=L p * Ω f1 δ ′ ,L 1 Ω • • • p * Ω fn δ ′ ,Ln Ω for ζ ∈ K δ,L Ω * n (with notation (3.5)). Using the Cauchy inequality, the identity d dζ (1 * f1 * • • • * fn ) = f1 * • • • * fn and the inverse Borel transform, one easily deduces the following Corollary 4.2. Let δ, L > 0 be real numbers. Then there exist c, δ ′ , L ′ > 0 such that, for every d.f.s.
Ω such that Ω 4δ = Ø, for every integer n ≥ 1 and for every f1 , . . . , fn ∈ RΩ without constant term, the formal series f1 • • • fn (which is known to belong to RΩ * n ) satisfies f1 • • • fn δ,L Ω * n ≤ c n+1 n! f1 δ ′ ,L ′ Ω • • • fn δ ′ ,L ′ Ω . In fact, one can cover the case f1 ∈ RΩ 1 , . . . , fn ∈ RΩn with different d.f.s.'s Ω 1 , . . . , Ω n as well (see Theorem 4.8), but we only give details for the case of one d.f.s. so as to lighten the presentation. Notations and preliminaries We fix an integer n ≥ 1 and a d.f.s. Ω. In view of Remark 2.12, without loss of generality, we can suppose that Ω coincides with its upper closure: (4.2) Ω = Ω. Let ρ > 0 be such that Ω 3ρ = Ø. We set U := { ζ ∈ C | |ζ| < 3ρ }. (See [Sau15] for the notations and notions related to integration currents.) As in [Sau15], our starting point will be Lemma 4.3. Let f1 , . . . , fn ∈ RΩ and β := (p * Ω f1 ) ζ 1 • • • (p * Ω fn ) ζ n dζ 1 ∧ • • • ∧ dζ n , where we denote by dζ 1 ∧ • • • ∧ dζ n the pullback by p ⊗n Ω : X n Ω → C n of the n-form dζ 1 ∧ • • • ∧ dζ n . Then 1 * f1 * • • • * fn (ζ) = D(ζ) # [∆ n ](β) for ζ ∈ U . Proof. This is just another way of writing the formula (4.4) 1 * f1 * • • • * fn (ζ) = ζ n ∫ ∆n f1 (ζs 1 ) • • • fn (ζs n ) ds 1 • • • ds n . See [Sau15] for the details. Notation 4.4. We set N (ζ) := ζ 1 , . . . , ζ n ∈ X n Ω | p Ω ζ 1 + • • • + p Ω ζ n = ζ for ζ ∈ C, (4.5) N j := ζ 1 , . . . , ζ n ∈ X n Ω | ζ j = 0 Ω for 1 ≤ j ≤ n. (4.6) γ-adapted deformations of the identity Let us consider a path γ : [0, 1] → C in Π Ω * n for which there exists a ∈ (0, 1) such that the condition (4.7) holds. We now introduce the notion of γ-adapted deformation of the identity, which is a slight generalization of the γ-adapted origin-fixing isotopies which appear in [Sau15, Def. 5.1]. Definition 4.5.
A γ-adapted deformation of the identity is a family (Ψ t ) t∈[a,1] of maps Ψ t : V → X n Ω , for t ∈ [a, 1], where V := D γ(a) (∆ n ) ⊂ X n Ω , such that Ψ a = Id, the map t, ζ ∈ [a, 1] × V → Ψ t ζ ∈ X n Ω is locally Lipschitz, and for any t ∈ [a, 1] and j = 1, . . . , n, (4.8) Ψ t V ∩ N γ(a) ⊂ N γ(t) , Ψ t V ∩ N j ⊂ N j (with the notations (4.5)-(4.6)). Let γ denote the lift of γ in X Ω starting at 0 Ω . The analytical continuation along γ of a convolution product can be obtained as follows: Proposition 4.6 ([Sau15]). If (Ψ t ) t∈[a,1] is a γ-adapted deformation of the identity, then (4.9) p * Ω * n 1 * f1 * • • • * fn γ(t) = Ψ t • D γ(a) # [∆ n ](β) for t ∈ [a, 1]. The following is the key estimate: Theorem 4.7. Let δ ∈ (0, ρ) and L > 0. Let γ ∈ Π δ,L Ω * n satisfy (4.7) and let (4.12) δ ′ (t) := ρ e -2 √ 2δ -1 L(γ| [a,t] ) , c(t) := ρ e 3δ -1 L(γ| [a,t] ) for t ∈ [a, 1]. Then there exists a γ-adapted deformation of the identity (Ψ t ) t∈[a,1] such that (4.13) Ψ t • D γ(a) (∆ n ) ⊂ L 1 +•••+Ln=L(γ |t ) K δ ′ (t),L 1 Ω × • • • × K δ ′ (t),Ln Ω . Proof that Theorem 4.7 implies Theorem 4.1. Let δ, L > 0. We will show that (4.1) holds with δ ′ := min δ, ρ e -4 √ 2(1+δ -1 L) , c := max 2ρ, ρ e 6(1+δ -1 L) , where ρ := 4 3 δ. Let Ω be a d.f.s. such that Ω 4δ = Ø. Without loss of generality we may suppose that Ω = Ω. In view of formula (4.4), the inequality (4.1) holds for ζ ∈ K δ,L Ω * n ∩ U , where U is defined by (4.3), because the Lebesgue measure of ∆ n is 1/n!. Let ζ ∈ K δ,L Ω * n \ U . We can write ζ = γ(1) with γ ∈ Π δ,L Ω * n . If γ| [a,1] is not C 1 , then we use a sequence of paths γ k ∈ Π δ/2,L+δ Ω * n such that γ k | [0,a] = γ| [0,a] , γ k (1) = γ(1), γ k | [a,1] is C 1 and sup t∈[a,1] |γ(t) -γ k (t)| → 0 as k → ∞; (4.15) p * Ω 1 * f1 * • • • * fn (ζ) ≤ c n n! sup L 1 +•••+Ln=L p * Ω 1 f1 δ ′ ,L 1 Ω 1 • • • p * Ωn fn δ ′ ,Ln Ωn for ζ ∈ K δ,L Ω . Proposition 4.10. Let ζ = L (ζ 1 ), . . . , L (ζ n ) ∈ V , i.e. ζ j = s j γ(a) with (s 1 , . . . , s n ) ∈ ∆ n .
We define v := (|ζ 1 |, ζ 1 ), . . . , (|ζ n |, ζ n ) ∈ (R × C) n and Γ = (γ 1 , . . . , γn ) : [0, 1] → (R × C) n by t ∈ [0, a] ⇒ Γ(t) := t a (|ζ 1 |, ζ 1 ), . . . , t a (|ζ n |, ζ n ) , t ∈ [a, 1] ⇒ Γ(t) := Φ a,t ( v ). Then, for each j ∈ {1, . . . , n}, γj is a path [0, 1] → R × C whose C-projection γ j belongs to Π Ω , and the formula (4.21) Ψ t ζ := γ 1 (t), . . . , γ n (t) ∈ X n Ω for t ∈ [a, 1]. defines a γ-adapted deformation of the identity. Proof. We first prove that γ 1 , . . . , γ n ∈ Π Ω . In view of (2.5), we just need to check that, for each j ∈ {1, . . . , n}, the path γj = (λ j , γ j ) satisfies v ∈ (R × C) n | v j = v * j } is invariant by the maps Φ t 1 ,t 2 (because η(v j ) = 0 implies that X j = 0 on this submanifold), in particular Φ t,a (R × C) n \ M n Ω ⊂ (R × C) n \ M n Ω , whence (4.23) follows because Φ a,t and Φ t,a are mutually inverse bijections. Therefore the paths γ 1 , . . . , γ n are Ω-allowed and have lifts in X Ω starting at 0 Ω , which allow us to define the maps Ψ t by (4.21) on V . We now prove that (Ψ t ) t∈[a,1] is a γ-adapted deformation of the identity. The map (t, v ) → Ψ t ( v ) is locally Lipschitz because the flow map (4.20) is locally Lipschitz, and Ψ a = Id because Φ a,a is the identity map of (R × C) n ; hence, we just need to prove (4.8). We set which can be itself checked as follows: consider first an arbitrary initial condition v ∈ (R × C) n and the corresponding solution v(t) := Φ a,t ( v ), and let v 0 (t) := v 1 (t) + We now show that the γ-adapted deformation of the identity that we have constructed in Proposition 4.10 meets the requirements of Theorem 4.7. Ñ (w) := (v 1 , . . . , v n ) ∈ (R × C) n | v 1 + • • • + v n = w for w ∈ R × C, Ñj := (v 1 , . . . , v n ) ∈ (R × C) n | v j = (0, 0) for 1 ≤ j ≤ n. In view of (2.6)-(2.7) and (3.5), the inclusion (4.13) follows from Φ a,t Ṽ ⊂ L 1 +•••+Ln=L(γ |t ) M L 1 ,δ ′ (t) Ω × • • • × M Ln,δ ′ (t) Ω for all t ∈ [a, 1], with δ ′ (t) as in (4.12). Proof of Lemma 4.11. 
Let us consider an initial condition v ∈ Ṽ and the corresponding solution v(t) := Φ a,t ( v ), whose components we write as v j (t) = λ j (t), ζ j (t) for j = 1, . . . , n. We also have v j (a) = s j γ(a) for some (s 1 , . . . , s n ) ∈ ∆ n . We first notice that . . . X n := η n (v n ) D(t, v ) γ′ (t), where η j (v) := dist v, {(0, 0)} ∪ S Ω j , D t, v := η 1 (v 1 ) + • • • + η n (v n ) + |γ(t) -(v 1 + • • • + v n )|. The case of endless continuability w.r.t. bounded direction variation In this subsection, we extend the estimates of Theorem 4.1 to the case of a d.d.f.s. Let us fix an arbitrary d.d.f.s. Ω. We fix ρ > 0 such that Ω 3ρ,M = Ø for every M ≥ 0. We consider a path γ : [0, 1] → C in Π δ,M,L Ω * n , with arbitrary δ ∈ (0, ρ) and L > 0, satisfying the following condition: (see proof of Theorem 4 in [Sau15] for the details). Since F (z, w) ∈ RΩ {w}, we obtain from Corollary 4.2 the following estimates: For every δ, L > 0, there exist δ ′ , L ′ , C > 0 such that F k δ ′ ,L ′ Ω ≤ C k+1 and Hm δ,L Ω * ∞ ≤ k≥1 (m + k -1)! m!k! n 1 +•••+n k =m+k-1 n 1 ,••• ,n k ≥1 C k+1 k! Fn 1 δ ′ ,L ′ Ω • • • Fn k δ ′ ,L ′ Ω ≤ k≥1 2 m+k n 1 +•••+n k =m+k-1 n 1 ,••• ,n k ≥1 C m+3k k! ≤ k≥1 2 2m+3k-2 C m+3k k! ≤ e 8C 3 (4C) m . This yields H(z, w) ∈ RΩ * ∞ {w}, whence H(z, F0 (z)) ∈ RΩ * ∞ . Definition 1.2. A formal series φ(z) = ∞ j=0 ϕ j z -j ∈ C[[z -1 ]] is said to be resurgent if its Borel transform φ(ζ) = ∞ j=1 ϕ j ζ j-1 /(j -1)! has positive radius of convergence and is endlessly continuable. The convolution of two such germs is given by (φ * ψ)(ζ) = ∫ 0 ζ φ(ξ) ψ(ζ -ξ) dξ for ζ in the intersection of the discs of convergence of φ and ψ. Let us illustrate this point on two examples. -The equation dϕ dz -λϕ = b(z), where b(z) is given in z -1 C{z -1 } and λ ∈ C * , has a unique formal solution in C[[z -1 ]], namely φ(z) := -λ -1 Id -λ -1 d dz -1 b, whose Borel transform is φ(ζ) = -(λ + ζ) -1 b(ζ); here, the Borel transform b(ζ) of b(z) is entire, hence φ is meromorphic in C, with at worst a pole at ζ = -λ and no singularity elsewhere.
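As a worked instance of the linear case above (an illustration added here, not part of the original text), take λ = -1 and b(z) = z -1 , so that b(ζ) ≡ 1:

```latex
\frac{d\varphi}{dz}+\varphi=z^{-1}
\quad\Longrightarrow\quad
\widehat\varphi(\zeta)=-(\lambda+\zeta)^{-1}\,\widehat b(\zeta)
=\frac{1}{1-\zeta}
=\sum_{j\ge 1}\zeta^{\,j-1},
\qquad\text{so that}\qquad
\widetilde\varphi(z)=\sum_{j\ge 1}(j-1)!\,z^{-j}.
```

This is the classical Euler series: it is divergent, but its Borel transform is convergent with a single singularity, the pole at ζ = 1 = -λ, in agreement with the general statement above.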
Therefore, heuristically, for a nonlinear equation dϕ dz -λϕ = b 0 n+1 -ω n+1 | gives the minimum of |ω ′ n+1 -ω n+1 | for such vertices and assign an open line segment ℓ v ′ →v := {ω n+1 + s(ω we obtain a lift of γ| [0,a] by concatenating γ |b and γ| [b,a] in the coordinate. It contradicts the maximality of I, and hence, I = [0, 1]. Figure 3. by totally the same discussion. Notice that, since the sequence v ′ in the proof of Lemma 3.10 is uniquely determined by ζ ∈ X Ω , the choice of the path P ε ′ ζ depends only on the radius ε ′ of the circles C ε ′ ω ′ j . Further, from the construction of the path P ε ′ ζ , we can extend Lemma 3.10 as follows: Lemma 3.11. For all ε > 0 and ζ ∈ X Ω , there exist a neighborhood U ζ of ζ and, for ε ′ small enough, a continuous deformation For each ζ ∈ U , the path γ ζ : t ∈ [0, 1] → tζ is Ω-allowed and hence has a lift γ ζ on X Ω starting at 0 Ω . Then L (ζ) := γ ζ (1) defines a holomorphic function on U and induces an isomorphism (4.3) L : U ∼ -→ U , where U := L (U ) ⊂ X Ω , such that p Ω • L = Id. Let us denote by ∆ n the n-dimensional simplex ∆ n := { (s 1 , . . . , s n ) ∈ R n ≥0 | s 1 + • • • + s n ≤ 1 } with the standard orientation, and by [∆ n ] ∈ E n (R n ) the corresponding integration current. For ζ ∈ U , we define a map D(ζ) on a neighbourhood of ∆ n in R n by D(ζ) : s = (s 1 , . . . , s n ) → D(ζ, s ) := L (s 1 ζ), . . . , L (s n ζ) ∈ U n ⊂ X n Ω and denote by D(ζ) # [∆ n ] ∈ E n (X n Ω ) the push-forward of [∆ n ] by D(ζ). (4.7) γ(t) = t a γ(a) for t ∈ [0, a], |γ(a)| = ρ, γ| [a,1] is C 1 . for k large enough one has γ k (1) = ζ, thus one can then replace γ by γ k . Hence we can assume that (4.7) holds. Let (Ψ t ) t∈[a,1] denote the γ-adapted deformation of the identity provided by Theorem 4.7, possibly with (δ, L) replaced by (δ/2, L + δ). Proposition 4.6 shows that, for f1 , . . .
, fn ∈ RΩ , p * Ω * n 1 * f1 * • • • * fn (ζ) can be written as (4.10) with t = 1, and (4.13)-(4.14) then show that (4.1) holds because δ ′ (t) ≥ δ ′ and c(1) ≤ c. Therefore, (4.1) holds on K δ,L Ω * n \ U too. In fact, in view of the proof of Theorem 4.7 given below, one can give the following generalization of Theorem 4.1: Theorem 4.8. Let δ, L be positive real numbers. Then there exist positive constants c and δ ′ such that, for every integer n ≥ 1 and for all d.f.s. Ω 1 , . . . , Ω n with Ω j,4δ = Ø (j = 1, • • • , n) and f1 ∈ RΩ 1 , . . . , fn ∈ RΩn , the function 1 * f1 * • • • * fn belongs to RΩ , where Ω := Ω 1 * • • • * Ω n , and ( Let j ∈ {1, . . . , n}. The second part of (4.8) follows from the inclusionΦ a,t Ñj ⊂ Ñj for t ∈ [a, 1],which stems from the fact that the jth component of the vector field (4.19) vanishes on Ñj (because η (0, 0) = 0).Sinceζ 1 + • • • + ζ n = γ(a) ⇒ |ζ 1 | + • • • + |ζ n | = |γ(a)| for any (ζ 1 , . . . , ζ n ) ∈ V, the first part of (4.8) follows from the inclusion Φ a,t Ñ γ(a) ⊂ Ñ γ(t) for t ∈ [a, 1], ), whence γ(t)v 0 (t) ≤ γ(a)v 0 (a) exp δ -1 √ 2 L(γ| [a,t] )for all t; now, if v ∈ Ñ γ(a) , we find v 0 (a) = γ(a), whence v 0 (t) = γ(t) for all t. Lemma 4.11. Let Ṽ := s 1 γ(a), . . . , s n γ(a) | (s 1 , . . . , s n ) ∈ ∆ n ∈ (R × C) n . Then (4.24) , s n ) ∈ ∆ n , whence λ 1 (a) + • • • + λ n (a)≤ |γ(a)| = ρ and |v j (a)| = ρ for j = 1, . . . , n. Notation 4. 14 . 14 Given δ, M, L > 0, we denote by Π δ,M,L Ω the set of all paths γ ∈ Π dv Ω such that V (γ) ≤ M , L(γ) ≤ L and inf t∈[0,t * ] dist 1 γ dv (t), S Ω ≥ δ, where γ dv is as in (2.13) and dist 1 is the distance associated with the norm• 1 defined on R 2 × C by (µ, λ, ζ) 1 := |µ| + |λ| 2 + |ζ| 2 . (4.29)There exists a ∈ (0, 1) such that γ(t) = t a γ(a) for t ∈ [0, a] and |γ(a)| = ρ.Then, for t ∈ [0, 1] and v ∈ R × C, we setη(t, v) := dist 1 (V (γ |t ), v), R ×{(0, 0)} ∪ S Ω . 
and, for v = (v 1 , • • • , v n ) ∈ (R × C) n , D t, v := η(t, v 1 ) + • • • + η(t, v n ) + |γ(t) -(v 1 + • • • + v n )|. See the proof of [Sau15, Prop. 5.2]. Note that the right-hand side of (4.9) must be interpreted as (4.10) ∫ ∆n (p * Ω f1 ) ζ t 1 • • • (p * Ω fn ) ζ t n det ∂ζ t i ∂s j 1≤i,j≤n ds 1 • • • ds n with the notation (4.11) ζ t 1 , . . . , ζ t n := Ψ t • D γ(a) , ζ t i := p Ω • ζ t i for 1 ≤ i ≤ n (each function ζ t i is Lipschitz on ∆ n and Rademacher's theorem ensures that it is differentiable almost everywhere on ∆ n , with bounded partial derivatives). 1] for any f1 , . . . , fn ∈ RΩ , with β as in Lemma 4.3. Proof.
Since η is 1-Lipschitz, we can define a Lipschitz function on [a, 1] by the formula h j (t) := η v j (t) , and its almost everywhere defined derivative satisfies |h ′ j (t)| ≤ g(t)h j (t), where g(t) := δ -1 √ 2 |γ ′ (t)|. Hence (4.26) η v j (a) e -δ -1 √ 2 L(γ| [a,t] ) ≤ η v j (t) ≤ η v j (a) e δ -1 √ 2 L(γ| [a,t] ) for all t ∈ [a, 1]. Let us now fix t ∈ [a, 1]. We conclude by distinguishing two cases. , Gronwall's lemma yields V (t) ≤ V (a) e 3δ -1 L(γ| [a,t] ) , and hence, since V (a) = ρ n j=1 |s j -s ′ j |, we have (4.28) V (t) ≤ ρ e 3δ -1 L(γ| [a,t] ) Then, (4.28) entails via Rademacher's theorem that the following estimate holds a.e. on ∆ n : ≤ ρ e 3δ -1 L(γ| [a,t] ) . Remark 4.13. Theorem 4.8 is verified by replacing the vector field (4.19) by n j=1 |γ |h ′ |λ ′ j (t)| = n j=1 η v j (t) D(t, v(t)) j (t)| ≤ |v ′ j (t)| = h j (t) D(t, v(t)) a g(τ ) dτ = δ -1 √ t 2 L(γ| [a,t] ), we deduce that j=1 n i=1 ∂ζ t i ∂s j Finally, (4.14) follows from the inequality Since (4.26) det ∂ζ t i ∂s j 1≤i,j≤n ≤ n j=1 X(t, v ) = η v j Therefore n X 1 := η 1 (v 1 ) |s j -s ′ j |. n i=1 ∂ζ t i ∂s j D(t, v ) γ′ (t) . ′ (t)| ≤ |γ ′ (t)|, hence λ 1 (t) + • • • + λ n (t) ≤ λ 1 (a) + • • • + λ n (a) + t a |γ ′ | ≤ L(γ |t ). Therefore, we just need to show that (4.25) dist v j (t), S Ω ≥ δ ′ (t) for j = 1, . . . , n. Suppose first that η(v j (a)) ≥ ρ e - √ 2 δ -1 L(γ| [a,t] ) . Then the first inequality in (4.26) yields η(v j (t)) ≥ δ ′ (t), and since dist v j (t), S Ω ≥ η(v j (t)) we get (4.25). Graduate School of Sciences, Hiroshima University. 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan. 2 IMCCE, CNRS-Observatoire de Paris, France. Acknowledgements. This work has been supported by Grant-in-Aid for JSPS Fellows Grant Number 15J06019, French National Research Agency reference ANR-12-BS01-0017 and Laboratoire Hypathie A*Midex.
The authors thank Fibonacci Laboratory (CNRS UMI 3483), the Centro Di Ricerca Matematica Ennio De Giorgi and the Scuola Normale Superiore di Pisa for their kind hospitality. Proof of Theorem 4.7 We suppose that we are given n ≥ 1, ρ > 0, a d.f.s. Ω such that Ω = Ω and Ω 3ρ = Ø, and γ ∈ Π δ,L Ω * n satisfying (4.7) with δ ∈ (0, ρ) and L > 0. We set γ(t) := L(γ |t ), γ(t) and define functions by the formulas (4.16) η(v) := dist v, {(0, 0)} ∪ S Ω , D t, v := η(v 1 ) + where | • | is the Euclidean norm in R × C ≃ R 3 . The assumptions Ω = Ω and γ ∈ Π δ,L Ω * n yield Lemma 4.9. The function D satisfies (4.17) Proof. Let (t, v ) ∈ [a, 1] × (R × C) n . For each j ∈ {1, . . . , n}, pick u j ∈ {(0, 0)} ∪ S Ω so that η(v j ) = |v j -u j |, and let u := u 1 + • • • + u n . Either all of the u j 's equal (0, 0), in which case u = (0, 0) too, or u = (λ, ω) is a non-trivial sum of at most n points of the form u j = (λ j , ω j ) ∈ S Ω , in which case we have in fact ω j ∈ Ω λ j because of Lemma 2.4 and the assumption Ω = Ω, hence (2.12) then yields ω ∈ Ω * n λ . We thus find Otherwise, u ∈ S Ω * n and (4.18) shows that D t, v ≥ δ because γ ∈ Π δ,L Ω * n . Since D never vanishes, we can define a non-autonomous vector field . . . The functions X j : [a, 1] × (R × C) n → R × C are locally Lipschitz, thus we can apply the Cauchy-Lipschitz theorem on the existence and uniqueness of solutions to differential equations and get a locally Lipschitz flow map (value at time t of the unique maximal solution to d v/dt = X(t, v ) whose value at time t * is v ). We construct a γ-adapted deformation of the identity out of the flow map as follows: Then the second inequality in (4.26) yields and we are done. Only the inequality (4.14) remains to be proved. We first show the following: Lemma 4.12. For any t ∈ [a, 1] and u, v ∈ (R × C) n , the vector field (4.19) satisfies Proof of Lemma 4.12.
We rewrite X j (t, u ) -X j (t, v ) as follows: Then, summing up |X j (t, u ) -X j (t, v )| in j, we obtain (4.27) from the inequality n j=1 η(v j ) ≤ D(t, v ). We conclude by deriving the inequality (4.14) from Lemma 4.12. We use the notation (4.11) to define ζ t 1 , . . . , ζ t n : ∆ n → C, and we now define v t j : ∆ n → R × C for t ∈ [a, 1] by the formulas v a j ( s ) := s j γ(a) and We obtain from (4.17) and (4.27) the following estimate: Choosing (µ j , u j ) ∈ R ×{(0, 0)} ∪ S Ω so that η(t, v j ) = (V (γ |t ), v j ) -(µ j , u j ) 1 for each j and using (µ We can thus define a map (t . . . ) be the flow of (4.30) with the initial condition v a j := (|γ(a)|s j , γ(a)s j ) with s ∈ ∆ n . Since γ′ (t), η(t, v j ) and D(t, v ) are Lipschitz continuous on [a, 1] × (R × C) n , we find by Rademacher's theorem that dζ t j /dt is differentiable a.e. on [a, 1] and satisfies when s j = 0. Since η(v t j ) and D(t, v t ) are real valued functions, we have Therefore, the following holds for every t ∈ [a, 1]: Arguing as for Theorem 4.1, we obtain Theorem 4.15. Let δ, L, M > 0 be real numbers. Then there exist c, δ ′ > 0 such that, for every d.d.f.s. Ω such that Ω 4δ,M = Ø (M ≥ 0), for every integer n ≥ 1 and for every f1 , . . . , fn ∈ R dv Ω , the function 1 * f1 * • • • * fn belongs to R dv Ω * n and satisfies where the seminorm Applications In this section, we display some applications of our results of Section 4. We first introduce convergent power series with coefficients in RΩ : Definition 5.1. Given Ω a d.f.s. and r ≥ 1, we define RΩ {w 1 , • • • , w r } as the space of all such that, for every δ, L > 0, there exists a positive constant C satisfying Fk where |k| := k 1 + • • • + k r (with the notation of Definition 3.16 for • δ,L Ω ). We can now deal with the substitution of resurgent formal series in a context more general than in Theorem 1.3. Theorem 5.2. Let r ≥ 1 be an integer and let Ω 0 , . . . , Ω r be d.f.s. Then for any F (w 1 , . . . 
, w r ) ∈ RΩ 0 {w 1 , • • • , w r } and for any φ1 , . . . , φr ∈ C[[z −1 ]] without constant term, one has … where …

Proof. Since the family of formal series φ1 , . . . , φr satisfies the conditions in Theorem 4.8 for sufficiently small δ > 0, for every L > 0 there exist δ ′ … Therefore, since F (w 1 , . . . , w r ) ∈ RΩ 0 {w 1 , • • • , w r }, we find that F ( φ1 , . . . , φr ) converges in RΩ 0 * Ω * ∞ and defines an Ω 0 * Ω * ∞ -resurgent formal series.

Notice that, in view of Theorem 2.13, Theorem 1.3 is a direct consequence of Theorem 5.2. Next, we show the following implicit function theorem for resurgent formal series:
Source: https://hal.science/hal-01756689/file/Rapport-CSTwiem.pdf (2018)
Wiem Housseyni, Maryline Chetto (email: [email protected]@univ-nantes.fr)

ARTE: a Simulation Tool for Reconfigurable Energy Harvesting Real-time Embedded Systems

This report presents ARTE, a new free Java-based software tool that we have developed for the simulation and evaluation of reconfigurable energy harvesting real-time embedded systems. It is designed to generate, compare, and evaluate different real-time scheduling strategies with respect to both schedulability performance and energy efficiency in the presence of external, unpredictable reconfiguration scenarios. A reconfiguration scenario is defined as an unpredictable event from the environment, such as the addition or removal of software tasks or an increase or decrease of the power rate. The occurrence of such events may drive the system towards an overload state. For this purpose, ARTE provides a task set generator, a reconfiguration scenario generator, and a simulator. It also offers the designer the possibility to generate and run simulations for specific systems. In the current version, ARTE can simulate the execution of periodic, independent, and (m,k)-firm constrained task sets on a monoprocessor architecture. Energy consumption is treated as a scheduling parameter in the same manner as the Worst Case Execution Time (WCET).

Introduction

For decades, the literature has shown a substantial interest in the real-time scheduling problem, and several approaches and algorithms have been proposed over the years to optimize the scheduling of real-time tasks on single- and multi-processor systems. The energy constraint has emerged as a major issue in the design of such systems, despite the large research effort addressing this problem. Research works have focused either on dynamic energy-aware algorithms that reduce system energy consumption or on energy harvesting technology.
In order to enhance system lifespan and achieve energy autonomy, energy harvesting technologies have recently attracted tremendous interest. Energy scavenging, i.e., harvesting energy from renewable sources such as photovoltaic cells and piezoelectric vibrations, has emerged as a new alternative to ensure sustainable autonomy and perpetual operation of the system. By the same token, the literature has shown a substantial interest in energy-aware and power-management scheduling for real-time systems. Although uniprocessor real-time scheduling for energy harvesting systems is well studied, there is still ample scope for research. In particular, scheduling techniques for reconfigurable energy harvesting real-time systems are not yet as mature, applicable, or optimal as currently available uniprocessor real-time scheduling techniques. Nowadays, new criteria such as energy efficiency, high performance, and flexible computing give rise to the need for a new generation of real-time systems. As a result, such systems need to solve a number of problems that have so far not been addressed by traditional real-time systems. Reconfigurable systems are a solution for providing higher energy efficiency together with high performance and flexibility. In recent years, a substantial number of research works from both academia and industry have been devoted to developing reconfigurable control systems [START_REF] Gharbi | Functional and operational solutions for safety reconfigurable embedded control systems[END_REF], [START_REF] Da Silva | Modeling of reconfigurable distributed manufacturing control systems[END_REF]. Reconfiguration is usually performed in response to both user requirements and dynamic changes in the environment, such as the unpredictable arrival of new tasks and hardware or software failures.
Some examples of such systems are multi-robot systems [START_REF] Chen | Combining re-allocating and rescheduling for dynamic multi-robot task allocation[END_REF] and wireless sensor networks [START_REF] Grichi | Rwin: New methodology for the development of reconfigurable wsn[END_REF]. From the literature we can derive different definitions of a reconfigurable system. The authors of [START_REF] Grichi | Rocl: New extensions to ocl for useful verification of flexible software systems[END_REF] define a reconfiguration of a distributed system as any addition/removal/update of one or more software or hardware elements. In this work, we define a reconfiguration as a dynamic operation that gives the system the capability to adjust and adapt its behavior (i.e., scheduling policy and power consumption) according to the environment and the fluctuating behavior of the renewable source, or to modify the applicative functions (i.e., add, remove, or update software tasks). Most embedded systems are real-time constrained. A real-time system involves a set of tasks, where each task performs a computational activity subject to deadline constraints. The main purpose of a real-time system is to produce not only the required results but to produce them within strict time constraints. In order to check whether a set of tasks respects its temporal constraints under a given scheduling algorithm, or to evaluate the efficiency of a new approach, software simulation against other algorithms is considered a valid comparison technique and is commonly used in the evaluation of real-time systems.
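To make this definition concrete, a reconfiguration can be modeled as a timed event acting on the task set and the power supply. The sketch below is our own minimal illustration in Python (ARTE itself is Java-based); the event taxonomy and all names (`ReconfKind`, `ReconfEvent`, `apply_event`) are hypothetical and not taken from ARTE's API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReconfKind(Enum):          # hypothetical event taxonomy
    ADD_TASK = auto()
    REMOVE_TASK = auto()
    POWER_CHANGE = auto()

@dataclass
class ReconfEvent:
    time: float                  # instant at which the event occurs
    kind: ReconfKind
    payload: object              # task to add/remove, or the new power rate

def apply_event(tasks, power_rate, event):
    """Return the new (tasks, power_rate) after one reconfiguration."""
    if event.kind is ReconfKind.ADD_TASK:
        return tasks + [event.payload], power_rate
    if event.kind is ReconfKind.REMOVE_TASK:
        return [t for t in tasks if t != event.payload], power_rate
    return tasks, event.payload  # POWER_CHANGE carries the new rate

tasks, rate = apply_event([("tau1",)], 2.0,
                          ReconfEvent(5.0, ReconfKind.ADD_TASK, ("tau2",)))
print(tasks, rate)               # [('tau1',), ('tau2',)] 2.0
```

After each such event, the schedulability tests discussed later would be re-run on the new state.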
Over the last years, several simulation tools have been proposed: MAST [START_REF] Harbour | Mast: Modeling and analysis suite for real time applications[END_REF], CHEDDAR [START_REF] Singhoff | Cheddar: a flexible real time scheduling framework[END_REF], STORM [START_REF] Urunuela | Storm a simulation tool for real-time multiprocessor scheduling evaluation[END_REF], FORTAS [START_REF] Courbin | Fortas: Framework for real-time analysis and simulation[END_REF] and YARTISS [START_REF] Chandarli | YARTISS: A Generic, Modular and Energy-Aware Scheduling Simulator for Real-Time Multiprocessor Systems[END_REF]. However, none of these approaches provides support for reconfiguration-based applications yet. More recently, a new simulation tool has been proposed for reconfigurable computing systems, Reconf-Pack [START_REF] Gammoudi | Reconf-pack: A simulator for reconfigurable battery-powered real-time systems[END_REF]. However, this attempt does not support energy harvesting requirements. New challenges are raised by reconfigurable real-time embedded energy harvesting systems in terms of reconfigurability, scheduling, and energy harvesting management. Indeed, in such a context, it is very difficult to evaluate and compare scheduling algorithms on both their schedulability performance and their energy efficiency. Unlike any prior work, we formulate and solve the challenging problem of scheduling real-time applications on reconfigurable energy-harvesting embedded platforms. Based on these motivations, we investigate in this report the challenges and viability of designing an open and flexible simulation tool able to generate and accurately simulate the behavior of such systems. We introduce ARTE, a new simulation tool for reconfigurable energy harvesting real-time embedded systems, which provides various functions to simulate the scheduling of real-time task sets and their temporal behavior under a given scheduling policy.
It provides the classical real-time scheduling policy EDF, the optimal scheduling algorithm for energy harvesting systems EDH, EDF scheduling for (m,k)-constrained task sets, a new scheduling policy EDH-MK, which extends EDH to (m,k)-firm constrained task sets, and finally a new hierarchical approach, Reconf-Algorithm. It also implements classical and new feasibility tests for both real-time and energy harvesting requirements. The main aim of this research work is to guarantee feasible execution of the whole system in the presence of unpredictable events while satisfying a quality of service (QoS) measured, first, in terms of the percentage of satisfied deadlines; second, the percentage of satisfied deadlines weighted by the degree of importance of tasks; and finally, the overhead introduced by the proposed approach. The remainder of this report is organized as follows: we review related works and examples of real-time simulators in Section 2. Section 3 presents background on the EDF and EDH scheduling algorithms and the (m,k)-firm model. In Section 4 we detail the proposed new scheduling approach for reconfigurable energy harvesting real-time systems. In Section 5 we present the various functionalities of our simulation tool. A case study is given in Section 6. Finally, we discuss future work in Section 7, and Section 8 concludes the report.

Research Aims

New challenges are raised by reconfigurable real-time embedded energy harvesting systems in terms of reconfigurability, scheduling, and energy harvesting management. Such systems work in a dynamic environment where unpredictable events may occur: arrival or removal of software tasks, or increase or decrease of the power rate. When such events occur, the system may evolve towards an unfeasible state (processor/energy overload) and the current operating mode is no longer optimal.
Hence, the main focus of this research is on how to achieve system feasibility while ensuring graceful QoS degradation. For this purpose we define three operating modes:
- Normal mode: all the tasks in the system execute 100% of their instances while meeting all their deadlines.
- Degradation mode level 1: used when the normal mode is not feasible. The K least important tasks execute in degraded mode according to the (m,k)-firm model, and the other tasks execute normally. The schedulability test is performed by considering the tasks iteratively according to their importance.
- Degradation mode level 2: used when degradation mode level 1 is not feasible. Abandonable tasks are gradually eliminated in order of increasing importance.
When the system is executing under an operating mode and an external event occurs, it is imperative to re-run the schedulability test. We identify three cases:
- the system may continue to execute in the current operating mode,
- the system may execute under a degraded mode,
- the system may execute under normal mode.
The quality of service (QoS) is measured in terms of:
- the percentage of satisfied deadlines,
- the percentage of satisfied deadlines weighted by the degree of importance of tasks,
- the overhead introduced by the proposed approach.

Related Works

In this section we outline the existing works related to the simulation of real-time task scheduling. There are many tools to test and visualize the temporal and execution behavior of real-time systems, and they fall mainly into two categories: execution analyzer frameworks and simulation software. MAST [START_REF] Harbour | Mast: Modeling and analysis suite for real time applications[END_REF] is a modeling and analysis suite for real-time applications developed in 2000. MAST is an event-driven scheduling simulator that permits modeling of distributed real-time systems and offers a set of tools to, e.g.,
test their feasibility or perform sensitivity analysis. Another known simulator is Cheddar [START_REF] Hamdaoui | A dynamic priority assignment technique for streams with (m, k)-firm deadlines[END_REF][START_REF] Bernat | Combining (/sub m//sup n/)-hard deadlines and dual priority scheduling[END_REF], developed in 2004, which handles the scheduling of real-time tasks on multiprocessor systems. It provides many implementations of scheduling, partitioning, and analysis algorithms, and it comes with a friendly Graphical User Interface (GUI). Unfortunately, no API documentation is available to help with the implementation of new algorithms and to facilitate its extensibility. Moreover, Cheddar is written in the Ada programming language [START_REF] Mccormick | Building parallel, embedded, and real-time applications with Ada[END_REF], which is used mainly in embedded systems and has strong features such as modularity mechanisms and parallel processing. Ada is often the language of choice for large systems that require real-time processing, but in general it is not a common language among developers. We believe that the choice of Ada as the base of Cheddar reduces the potential contributions to the software from average developers and researchers. Finally, STORM [START_REF] Urunuela | Storm a simulation tool for real-time multiprocessor scheduling evaluation[END_REF], FORTAS [START_REF] Courbin | Fortas: Framework for real-time analysis and simulation[END_REF], and YARTISS [START_REF] Chandarli | YARTISS: A Generic, Modular and Energy-Aware Scheduling Simulator for Real-Time Multiprocessor Systems[END_REF] are tools written in Java. STORM, released in 2009, is described as a simulation tool for real-time multiprocessor scheduling. It has modular architectures (both software and hardware) that simulate the scheduling of task sets on multiprocessor systems based on the rules of a chosen scheduling policy.
The specifications of the simulation parameters and the scheduling policies are modifiable through an XML file. However, the simulator lacks detailed documentation and a description of the available scheduling methods and software features. FORTAS, on the other hand, is a real-time simulation and analysis framework that targets uniprocessor and multiprocessor systems. It was developed mainly to facilitate the comparison of different scheduling algorithms, and it includes features such as task generators and the computation of results for each tested and compared scheduling algorithm. FORTAS represents a valuable contribution to the effort towards providing an open and modular tool. Unfortunately, it seems to suffer from the following issues: its development is not open to other developers for now, only .class files can be downloaded, no documentation is yet provided, and it seems that no new version has been released to the public since its presentation in [START_REF] Courbin | Fortas: Framework for real-time analysis and simulation[END_REF]. More recently, YARTISS has been proposed as a modular and extensible tool. It is a real-time multiprocessor scheduling simulator that provides various functions to simulate the scheduling of real-time task sets and their temporal behavior under a given scheduling policy. The functionalities of YARTISS are: 1) simulating a task set on one or several processors while monitoring the system energy consumption, 2) concurrently simulating a large number of task sets and presenting the results in a user-friendly way that permits isolating interesting cases, and 3) randomly generating a large number of task sets. However, none of these simulation tools provides support for unpredictable reconfiguration scenarios yet. To date, only a few recent works target the real-time scheduling issue in reconfigurable systems.
In [START_REF] Gammoudi | Reconf-pack: A simulator for reconfigurable battery-powered real-time systems[END_REF], a simulation tool for reconfigurable battery-powered real-time systems, Reconf-Pack, is proposed. Reconf-Pack is a simulation tool for analyzing a reconfiguration and applying the proposed strategy to real-time systems. It is based upon another tool, Task-Generator, which generates random tasks. According to the state of the system after a reconfiguration, Reconf-Pack dynamically calculates a deterministic solution. Moreover, it compares the pack-based solutions with related works. However, it seems to suffer from the following issues: its development is not open to other developers for now, and it is not available for download. During the development of ARTE, we learned from these existing tools and included some of their features in addition to others of our own. Our aim is to provide a simulation tool for reconfigurable energy harvesting real-time systems that can easily be used to generate task sets and to compare and simulate the scheduling of real-time tasks on such systems.

Background

This section gives background on, first, the EDF scheduling algorithm, then the EDH scheduling algorithm, and finally the (m,k)-model.

Earliest Deadline First scheduling (EDF)

EDF is probably the most famous dynamic-priority scheduler. As a consequence of its optimality for preemptive uniprocessor scheduling of independent jobs, the runtime scheduling problem is perfectly solved if we assume there are no additional constraints on the jobs. EDF is the scheduler of choice, since any feasible set of jobs is guaranteed to have a valid EDF schedule [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF].
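The EDF rule itself ("at each instant, run the ready job with the earliest deadline") can be illustrated by a tiny discrete-time simulation. This is our own didactic Python sketch, not code from ARTE, and it assumes implicit deadlines (D = T) and unit time steps:

```python
def edf_schedule(tasks, horizon):
    """tasks: list of (C, T) with implicit deadlines D = T.
    Discrete-time EDF simulation over [0, horizon);
    returns True iff no deadline is missed."""
    jobs = []                          # each active job: [deadline, remaining]
    for t in range(horizon):
        for c, p in tasks:
            if t % p == 0:             # periodic release at multiples of T
                jobs.append([t + p, c])
        if jobs:
            jobs.sort()                # earliest deadline first
            jobs[0][1] -= 1            # run the most urgent job for one tick
            if jobs[0][1] == 0:
                jobs.pop(0)
        # any pending job whose deadline has arrived with work left => miss
        if any(d <= t + 1 and r > 0 for d, r in jobs):
            return False
    return True

print(edf_schedule([(1, 2), (2, 4)], 8))   # True: U = 1, EDF schedules it
print(edf_schedule([(2, 2), (1, 4)], 8))   # False: U > 1, a deadline is missed
```

This kind of trace-based check is what a simulator does empirically, while the analytic conditions below decide feasibility without simulating.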
Time-feasibility: a set ψ of n tasks τ_i = {C_i, T_i, E_i} is time feasible if and only if

Σ_{i=1..n} C_i / T_i ≤ 1    (1)

However, when energy harvesting requirements are considered, EDF is no longer optimal, and a necessary but not sufficient condition of schedulability is as follows.

Time-feasibility: a set ψ of n tasks is time feasible if and only if

Σ_{i=1..n} C_i / T_i ≤ 1    (2)

Energy-feasibility: a set ψ of n tasks is energy feasible if

Σ_{i=1..n} E_i / (C + E_p(0, t)) ≤ 1    (3)

where E_p(0, t) is the amount of energy that will be produced by the source between 0 and t, and C is the capacity of the energy storage unit (supercapacitor or battery).

Earliest Deadline First-Energy Harvesting (EDH)

Dertouzos [START_REF] Dertouzos | Control robotics: The procedural control of physical processes[END_REF] shows that the Earliest Deadline First algorithm (EDF) is optimal: at each instant t, EDF schedules the ready job whose deadline is closest to t. The problem with EDF is that it does not consider future job arrivals and their energy requirements. In [START_REF] Chetto | A note on edf schedulingfor real-time energy harvesting systems[END_REF], the authors prove that EDF is no longer optimal for RTEH systems: jobs are processed as soon as possible, thus consuming the available energy greedily. Although non-competitive, EDF turns out to remain the best non-idling scheduler for uniprocessor RTEH platforms [START_REF] Chetto | A note on edf schedulingfor real-time energy harvesting systems[END_REF]. In [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF], the authors describe the Earliest Deadline-Harvesting (ED-H) scheduling algorithm, which is proved to be optimal for the scheduling problem of energy-harvesting real-time systems. ED-H is an extension of the EDF algorithm that adds energy-awareness capabilities by using the notions of slack time and slack energy.
The idea behind ED-H is to order the jobs according to the EDF rule, since the jobs issued from the periodic tasks have hard deadlines. Executing them in accordance with their relative urgency appears to be the best approach, even if they are not systematically executed as soon as possible due to possible energy shortage. The difference between ED-H and classical EDF lies in deciding when to execute a job and when to let the processor idle. Before any job is authorized to execute, the energy level of the storage must be sufficient for all future jobs to execute in time with no energy starvation, considering their timing and energy requirements and the replenishment rate of the storage unit. According to EDH, a processor P_j with battery B_j that executes the task set ψ_j should satisfy the following schedulability tests:

- Time-feasibility [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF]: ψ_j is time feasible if and only if

U_Pj = Σ_{i=1..n} C_i / T_i ≤ 1    (4)

- Energy-feasibility [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF]: ψ_j is energy feasible if and only if

U_e^j ≤ 1    (5)

where U_e^j is the energy load of the set of tasks ψ_j assigned to processor P_j:

U_e^j = sup_{0 ≤ t1 ≤ t2 ≤ H} E_cj(t1, t2) / (C + E_Pj(t1, t2))

Theorem 1 [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF]: The set of tasks ψ_j assigned to processor P_j is feasible if and only if U_Pj ≤ 1 and U_e^j ≤ 1. Theorem 1 gives a necessary and sufficient condition of schedulability.
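The two conditions of Theorem 1 are easy to illustrate numerically. The following Python sketch is our own (ARTE is written in Java): it computes U_P exactly and a discretized version of the energy load U_e, assuming, purely for illustration, integer-valued windows, a linear harvesting function E_p(t1, t2) = harvest_rate × (t2 − t1), and an energy demand counted over whole jobs contained in [t1, t2].

```python
# Illustrative check of Theorem 1: U_P <= 1 and U_e <= 1.
# Tasks are (C, T, E): WCET, period, worst-case energy per job; D = T.

def processor_utilization(tasks):
    return sum(c / t for c, t, _ in tasks)

def energy_load(tasks, capacity, harvest_rate, horizon):
    """Discretized energy load: max over integer 0 <= t1 < t2 <= horizon of
    demand(t1, t2) / (capacity + harvested(t1, t2))."""
    def demand(t1, t2):
        total = 0.0
        for c, t, e in tasks:
            first = -(-t1 // t)            # ceil(t1 / t): first release >= t1
            last = t2 // t                 # jobs whose deadline k*t is <= t2
            total += max(0, last - first) * e
        return total
    points = range(0, horizon + 1)
    return max(demand(t1, t2) / (capacity + harvest_rate * (t2 - t1))
               for t1 in points for t2 in points if t2 > t1)

def feasible(tasks, capacity, harvest_rate, horizon):
    return (processor_utilization(tasks) <= 1.0
            and energy_load(tasks, capacity, harvest_rate, horizon) <= 1.0)

tasks = [(2, 8, 4), (3, 12, 6)]            # (C_i, T_i, E_i)
print(processor_utilization(tasks))        # 0.5
print(feasible(tasks, capacity=10, harvest_rate=1, horizon=24))
```

The exact U_e takes a supremum over all real intervals; restricting to integer bounds over one hyperperiod is a simplification for the sketch.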
(m,k)-model

For its intuitiveness and its capability of capturing not only statistical but also deterministic quality of service (QoS) requirements, the (m,k)-model has been widely studied, e.g., [START_REF] Bernat | Combining (/sub m//sup n/)-hard deadlines and dual priority scheduling[END_REF], [START_REF] Hamdaoui | A dynamic priority assignment technique for streams with (m, k)-firm deadlines[END_REF], [START_REF] Hua | Energy-efficient dual-voltage soft real-time system with (m, k)-firm deadline guarantee[END_REF], [START_REF] Quan | Enhanced fixed-priority scheduling with (m, k)-firm guarantee[END_REF], and [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF]. The (m,k)-model was originally proposed by Hamdaoui et al. [START_REF] Hamdaoui | A dynamic priority assignment technique for streams with (m, k)-firm deadlines[END_REF]. According to this model, each repetitive task of the system is associated with an (m,k) (0 < m < k) constraint requiring that m out of any k consecutive job instances of the task meet their deadlines. A dynamic failure occurs if, within any k consecutive jobs, more than (k − m) job instances miss their deadlines; it implies that the temporal QoS constraint is violated and the scheduler is thus considered to have failed. Based on this (m,k)-model, Ramanathan et al. [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF] proposed to partition the jobs into mandatory and optional jobs. As long as all of the mandatory jobs meet their deadlines, the (m,k)-constraints are ensured. The mandatory jobs are those that must meet their deadlines in order to satisfy the (m,k)-constraints, while the optional jobs can be executed to further improve the quality of service or simply be dropped to save computing resources. Quan et al.
[START_REF] Quan | Enhanced fixed-priority scheduling with (m, k)-firm guarantee[END_REF] formally proved that the problem of scheduling with the (m,k)-guarantee for arbitrary values of m and k is NP-hard in the strong sense. They further proposed to improve the mandatory/optional partitioning by reducing the maximal interference between mandatory jobs.

Mandatory/Optional Job Partitioning with the (m,k)-Pattern

The (m,k)-pattern of task τ_i, denoted by Π_i, is a binary string Π_i = {π_i0, π_i1, ..., π_i(k_i−1)} satisfying the following: 1) the j-th job is mandatory if π_ij = 1 and optional if π_ij = 0, and 2) Σ_{j=0..k_i−1} π_ij = m_i. By repeating the (m,k)-pattern, we get a mandatory job pattern for τ_i. It is not difficult to see that the (m,k)-constraint for τ_i is satisfied if the mandatory jobs of τ_i are selected accordingly.

Evenly Distributed Pattern (Even Pattern)

The Even Pattern strategy was proposed by Ramanathan et al. [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF] as follows: the first release is always mandatory, and the subsequent mandatory and optional jobs alternate as evenly as possible. Mathematically,

π_ij = 1 if j = ⌊⌈j × m_i / k_i⌉ × k_i / m_i⌋, for j = 0, 1, ..., k_i − 1, and π_ij = 0 otherwise.    (6)

In [START_REF] Niu | Energy minimization for real-time systems with (m, k)-guarantee[END_REF], a necessary and sufficient feasibility test is proposed for (m,k)-constrained tasks executing under the EDF scheduling policy. Theorem 2: Let the system be T = {τ_0, τ_1, ..., τ_{n−1}}, where τ_i = {C_i, T_i, D_i, m_i, k_i}, and let Ψ be the mandatory job set according to the E-patterns. Also, let L represent either the ending point of the first busy period when scheduling only the mandatory jobs or the LCM of the T_i, i = 0, ..., n−1, whichever is smaller.
Then, Ψ is schedulable with EDF if and only if all the mandatory jobs arriving within [0, L] meet their deadlines, i.e., for every deadline t ∈ [0, L],

Σ_i W_i(0, t) = Σ_i ⌈ (m_i / k_i) × (⌊(t − D_i) / T_i⌋ + 1) ⌉ × C_i ≤ t    (7)

5 New Scheduling Approach for Reconfigurable Energy Harvesting Real-Time Systems

Reconfiguration is usually performed in response to both user requirements and dynamic changes in the environment, such as the unpredictable activation of new tasks, the removal of tasks, or an increase or decrease of the power supply of the system. Some examples of reconfigurable systems are multi-robot systems [START_REF] Chen | Combining re-allocating and rescheduling for dynamic multi-robot task allocation[END_REF] and wireless sensor networks [START_REF] Grichi | Rwin: New methodology for the development of reconfigurable wsn[END_REF]. At run-time, the occurrence of an unpredictable task activation makes the static schedule no longer optimal and may drive the system towards an unfeasible state due to energy and processing overloads. Thereafter, some existing or added tasks may violate their deadlines. The system has to dynamically adjust and adapt the task allocation and scheduling in order to cope with unpredictable reconfiguration scenarios. We identify three operating modes:
- Normal mode: all the tasks in the system execute 100% of their instances while meeting all their deadlines.
- Degradation mode level 1: used when the normal mode is not feasible. The K least important tasks execute in degraded mode according to the (m,k)-firm model, and the other tasks execute normally. The schedulability test is performed by considering the tasks iteratively according to their importance.
- Degradation mode level 2: used when degradation mode level 1 is not feasible. Abandonable tasks are gradually eliminated in order of increasing importance.
At any instant, external unpredictable reconfigurations may occur that add or remove software tasks or increase or decrease the power supply of the system.
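Returning to the (m,k) machinery above, the Even Pattern of equation (6) and the demand-bound test of Theorem 2 can be made concrete. The following is our own illustrative Python sketch (ARTE is Java-based); the floor/ceiling placement follows the usual E-pattern formulation from the literature, to which the flattened equations above appear to correspond, and L is simplified to the hyperperiod.

```python
from math import ceil, floor, gcd

def e_pattern(m, k):
    """Even (m,k)-pattern: job j is mandatory iff j = floor(ceil(j*m/k) * k/m)."""
    return [1 if j == floor(ceil(j * m / k) * k / m) else 0 for j in range(k)]

def mandatory_demand(tasks, t):
    """Execution demand of mandatory jobs with deadline <= t,
    for tasks (C, T, m, k) with implicit deadlines D = T."""
    total = 0
    for c, p, m, k in tasks:
        if t >= p:
            jobs = (t - p) // p + 1            # jobs with deadline <= t
            total += ceil(m / k * jobs) * c    # mandatory jobs among them
    return total

def mk_edf_feasible(tasks):
    """Theorem-2-style test, checked at every deadline up to the hyperperiod."""
    hyper = 1
    for _, p, _, _ in tasks:
        hyper = hyper * p // gcd(hyper, p)
    deadlines = sorted({j * p for _, p, _, _ in tasks
                        for j in range(1, hyper // p + 1)})
    return all(mandatory_demand(tasks, t) <= t for t in deadlines)

print(e_pattern(2, 3))                               # [1, 1, 0]
print(mk_edf_feasible([(3, 4, 1, 2), (3, 6, 2, 2)])) # True: degraded set fits
print(mk_edf_feasible([(3, 4, 2, 2), (3, 6, 2, 2)])) # False: full set overloads
```

The last two calls show the point of degraded execution: the task set with utilization 1.25 is infeasible as-is, but becomes schedulable once the first task is downgraded to a (1,2)-firm constraint.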
The occurrence of such events triggers the execution of schedulability tests to identify in which mode the tasks should be executed.

Normal Mode

In the normal mode, tasks are assumed to be executed under the optimal scheduler for real-time energy harvesting systems, the EDH algorithm. The task set should then satisfy the following theorem. Theorem 1 [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF]: The set of tasks ψ_j assigned to processor P_j is feasible if and only if U_Pj ≤ 1 and U_e^j ≤ 1.

Degradation mode level 1: the EDH-MK Algorithm

In this work we propose EDH-MK, a new real-time scheduler for reconfigurable energy harvesting real-time systems. The proposed algorithm EDH-MK is an extension of the EDH algorithm with (m,k)-firm guarantees. When it is impossible to execute the task set in normal mode due to processor and/or energy overloads, we propose to execute the K least important tasks in degraded mode according to the (m,k)-firm model while the other tasks execute normally. We give the necessary and sufficient schedulability conditions for the EDH-MK algorithm below.

Definition 1: The static slack time of a mandatory job set Ψ on the time interval [t1, t2) is

SST_Ψ(t1, t2) = t2 − t1 − Σ_i W_i(t1, t2)    (8)

SST_Ψ(t1, t2) gives the longest time that could be made available within [t1, t2) after executing the mandatory jobs of Ψ with release time at or after t1 and deadline at or before t2.

Definition 2: The total mandatory energy demand within the interval [0, t] is

g(0, t) = Σ_i ⌈ (m_i / k_i) × ⌈t / T_i⌉ ⌉ × En_i    (9)

Proof: As shown in [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF], if the mandatory jobs are determined according to (6), then among the first p_i jobs of τ_i there are l_i = ⌈(m_i / k_i) × p_i⌉ mandatory jobs.
Therefore, the total mandatory energy load within the interval [0, t] that has to be finished by time t, denoted by g(0, t), can be formulated as follows:

g(0, t) = Σ_i ⌈ (m_i / k_i) × (⌊(t − D_i) / T_i⌋ + 1) ⌉ × En_i    (10)

g(0, t) = Σ_i ⌈ (m_i / k_i) × ⌈t / T_i⌉ ⌉ × En_i    (11)

Let E_p(0, t) be the amount of energy that will be produced by the source between 0 and t, and let C be the capacity of the energy storage unit (supercapacitor or battery).

Definition 3: The total mandatory static slack energy on the time interval [0, t] is

SSE_Ψ(0, t) = C + E_p(0, t) − g(0, t)    (12)

SSE_Ψ(0, t) gives the largest amount of energy that could be made available within [0, t] after executing the mandatory jobs with release time at or after 0 and deadline at or before t.

…

U_E_Ψ ≤ 1    (15)

Proof: As above, since SSE_Ψ(t1, t2) ≥ 0 amounts to U_E_Ψ(t1, t2) ≤ 1.

The necessary and sufficient schedulability conditions fall into two constraints which should be respected.
- Real-time constraints: for each processor P_j, the task set Ψ_j assigned to P_j should satisfy its deadlines. From equation (7), Time-feasibility: the set Ψ_j is time feasible if and only if

Σ_i W_i(0, t) = Σ_i (m_i / k_i) × (1 / T_i) × C_i ≤ 1    (16)

- Energy constraints: each processor P_j must not, at any moment, lack the energy to execute the task set assigned to it. Energy-feasibility: P_j is energy feasible if and only if

U_E_Ψ ≤ 1    (17)

We give a necessary and sufficient condition for EDH-MK schedulability and feasibility. Theorem 3: Let Ψ be the mandatory job set according to the E-patterns. Also, let L represent either the ending point of the first busy period when scheduling only the mandatory jobs or the LCM of the T_i, i = 0, ..., n−1, whichever is smaller. Then, Ψ is schedulable with EDH-MK if and only if all the mandatory jobs arriving within [0, L] meet their deadlines, i.e.,

Σ_i W_i(0, t) ≤ t and SSE_Ψ ≥ 0    (18)

Proof: "If": We suppose that constraint (17) is satisfied and Ψ is not schedulable by EDH-MK. Let us show a contradiction.
First, we assume that Ψ is not schedulable by EDH-MK because of time starvation or because of energy starvation. Lemma 2 states that there exists a time interval [t0, d1) such that g(t0, d1) > C + E_p(t0, d1), i.e., C + E_p(t0, d1) − g(t0, d1) < 0. Thus, SSE_Ψ < 0 and condition (17) in Theorem 2 is violated. "Only if": Suppose that Ψ is feasible. Thus, Ψ is time feasible and energy feasible. From constraint (7) in Theorem 2 and constraint (14) in Lemma 2, constraint (17) is satisfied.

Degradation mode level 2: the Removal Algorithm

This is the case where degradation mode level 1 is not feasible. Abandonable tasks are gradually eliminated in order of increasing importance. We sort all the abandonable tasks in ascending order of degree of importance so that we can reject the least important ones one by one until system feasibility is established. Theorem 4: The set of tasks Ψ_j assigned to processor P_j is feasible under degradation mode level 2 (the Removal algorithm) if and only if the set of non-abandonable tasks Ψ_j^na satisfies

U_{Ψ_j^na} ≤ 1 and U_e_{Ψ_j^na} ≤ 1    (19)

Proof: This follows directly from the proof of Theorem 4 in [18].

Reconf-Algorithm

To adapt the framework to cope with any unpredictable external event, such as new task arrivals, task removals, or an increase or decrease of the power supply, we characterize a reconfiguration as any procedure that makes the system feasible again, i.e., that satisfies its real-time and energy constraints while taking system performance optimization into account. We propose an approach with two successive adaptation strategies to reconfigure the system at run-time. The two adaptation strategies are performed in a hierarchical order, as depicted in Fig. 1:
- Degradation mode level 1: EDH-MK Algorithm
- Degradation mode level 2: Removal Algorithm

Functionalities

In this section we explain all the functionalities of ARTE in detail, showing their specifications and their various characteristics with regard to the problem of real-time scheduling in reconfigurable energy harvesting systems.

Task Set Generator

Task Model

The current version proposes a task model based on the Liu and Layland task model, extended with energy-related parameters. All tasks are considered periodic and independent. Each task τ_i is characterized by i) its worst-case execution time (WCET) C_i, ii) its worst-case energy consumption (WCEC) E_i, and iii) its period T_i. Tasks are considered to have implicit deadlines, i.e., deadlines equal to periods. In addition, each task is characterized by a degree of importance L_i, which defines the functional and operational importance of the execution of the task with respect to the application. Moreover, tasks are subject to (m,k)-firm deadline constraints. Tasks are classified into two categories: the first is a set of firm tasks with (1,1)-firm deadline constraints; the other is a set of soft tasks with (m,k)-soft deadline constraints. Finally, a boolean A_i indicates whether a task is abandonable or not.

Task sets can be loaded into the simulator either through the GUI, by using a file browser or entering the parameters manually, or by using the task set generator, as depicted in Fig. 2. For the simulation results to be credible, the task sets used should be randomly generated and sufficiently varied. The current version includes by default a generator based on the UUniFast-Discard algorithm [Bini] coupled with a hyper-period limitation technique [Goossens], adapted to energy constraints.
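As a rough illustration of how such a generator splits a target utilization, here is a minimal sketch of the UUniFast step with a discard test; function and parameter names are illustrative assumptions, not ARTE's actual code.

```python
import random

def uunifast(n, u_total):
    """Split u_total into n random utilizations (the UUniFast recipe)."""
    utils, remaining = [], u_total
    for i in range(1, n):
        nxt = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils

def uunifast_discard(n, u_total, limit=1.0):
    """Redraw whole sets until no single utilization exceeds `limit`."""
    while True:
        utils = uunifast(n, u_total)
        if all(u <= limit for u in utils):
            return utils
```

Calling `uunifast_discard(10, 0.8)` returns ten utilizations summing to 0.8, each below 1; the same recipe can be applied a second time to split the energy utilization U_e.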
This algorithm generates task sets by dividing among them the CPU utilization (U = Σ C_i / T_i) and the energy utilization (U_e = Σ E_i / (T_i × Pr), where Pr is the recharging function) chosen by the user. The idea behind the algorithm is to distribute the system utilization over the tasks of the system. When the energy cost of tasks is added to the system, we end up with two parameters to vary and two conditions to satisfy. In its current version, the algorithm distributes U and U_e uniformly over the tasks, then finds the pair (C_i, E_i) which satisfies all the conditions, namely U, U_e and the energy consumption constraints. The operation is repeated several times until the obtained pair approaches the imposed conditions. Finally, the algorithm returns as a result a time-feasible and potentially energy-feasible system. The (m,k)-firm parameters are randomly generated within a bounded interval. We define three levels of importance, and the degree of importance is randomly generated in the interval [1, 3], where 1 is the highest importance level. The parameter A_i is randomly generated in {0, 1}.

Reconfiguration Scenarios Generator

In order to reproduce as closely as possible the real behavior of physical reconfigurable energy harvesting real-time embedded systems, we developed a reconfiguration scenarios generator tool.
Through the GUI, the user can define personalized reconfiguration scenarios by selecting the user personalization option, or use random reconfiguration scenarios produced by the random generator, as depicted in Fig. 3. The user personalization option offers the possibility to generate reconfiguration scenarios that modify the applicative functions, i.e., add or remove software tasks, or increase or decrease the power supply of the system. For the simulation results to be credible, the reconfiguration scenarios used should be randomly generated and sufficiently varied. The current version includes by default a reconfiguration scenarios generator which offers three kinds of reconfiguration scenarios: i) high dynamic system, ii) medium dynamic system, and iii) low dynamic system. The random reconfiguration scenarios algorithm computes the number of jobs Njobs in the system over one hyper-period.

- High dynamic system: the generator randomly adds n tasks, where n is drawn in the interval [3%, 10%] of Njobs.
- Medium dynamic system: the generator randomly adds n tasks, where n is drawn in the interval [1%, 5%] of Njobs.
- Low dynamic system: the generator randomly adds n tasks, where n is drawn in the interval [0%, 1%] of Njobs.

Simulation Tool

The aim of this tool is to simulate the scheduling of a system according to the parameters and assumptions of the user, mainly the task set and the scheduling policy. The purpose of ARTE is not restricted to checking the feasibility of a given system, but also extends to simulating and analyzing the performance of scheduling policies when unpredictable reconfiguration scenarios occur in the system. Through the main interface, as depicted in Fig. 4, the user can use task sets loaded into the simulator either through the GUI, by using a file browser or entering the parameters manually, or by using the task set generator.
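The three dynamism levels of the reconfiguration scenarios generator can be sketched as a single helper that picks how many tasks to inject over one hyper-period; the bounds below mirror the percentages of Njobs listed above, and all names are illustrative assumptions.

```python
import random

# Fraction-of-Njobs bounds assumed for each dynamism level.
LEVELS = {
    "high":   (0.03, 0.10),
    "medium": (0.01, 0.05),
    "low":    (0.00, 0.01),
}

def tasks_to_add(njobs, level):
    """Number of tasks to inject over one hyper-period containing njobs jobs."""
    lo, hi = LEVELS[level]
    return random.randint(int(lo * njobs), int(hi * njobs))
```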
When the user creates a system, the hyper-period is calculated automatically and displayed in the GUI. For simulation, the user chooses the scheduling policy as well as the time interval of the simulation. Two kinds of analyses can be performed: scheduling simulation and feasibility testing. The user can also apply a set of reconfiguration scenarios to the system.

Case Study

This section presents a case study through which we show the different features and functionalities implemented in ARTE, and explore the performance of the proposed EDH-MK algorithm and Reconf-Algorithm in keeping executions feasible with graceful QoS after any external reconfiguration scenario that may drive the system towards an unfeasible state. To this end, we create a new system with a randomly generated task set whose parameters are given in Table 1. Initially, we verified the feasibility of the task set using the simulator tool (Fig. 5). Then, we chose to generate a random high dynamic reconfiguration scenario using the reconfiguration scenario tool (Fig. 6). Thereafter, the system evolves toward an unfeasible state (Fig. 7). In order to analyze the performance of the EDH scheduler, we ran a simulation over 100 time units: the EDH scheduler produces 114 deadline misses (Fig. 8). In order to analyze the performance of the EDH-MK scheduler, we ran a simulation over the same 100 time units: the EDH-MK scheduler produces 16 deadline misses (Fig. 9).

Future Works

The current release offers many important features, the main purpose being to provide a simulation tool with the flexibility and performance needed to deal with unpredictable reconfiguration scenarios. This first release was developed quickly, however, and improvements are planned to address all the features targeted by the proposed simulator tool.
The authors are now working on:
- the development of an extension of the graphical user interface, to facilitate the use of the simulator by a larger number of users and to display the simulation results in an interactive and intuitive way; three different views are envisaged, a time chart, a processor view and an energy curve, as well as a comparison view that lets the user compare the simulations of selected scheduling policies;
- the implementation of other task models in the simulator;
- the implementation of other scheduling approaches in the simulator, such as fixed-priority approaches;
- the support of multiprocessor platforms, and the development of new scheduling techniques based on the migration of tasks between the different processors;
- finally, the use of a distributed system decentralizing the control, and more precisely the use of a multi-agent system (MAS). Through the use of MAS, we aim to reproduce as closely as possible the real behavior of physical networked reconfigurable energy harvesting real-time embedded systems. Motivated by these considerations, we choose to deploy intelligent agents to simulate the dynamic behavior of networked reconfigurable energy harvesting real-time embedded systems.

Conclusion

This report presents ARTE, a real-time scheduling simulator for reconfigurable energy harvesting real-time embedded systems. Presently, the ARTE simulator is able to simulate accurately the execution of task sets of a reconfigurable energy harvesting system. We briefly presented existing simulation tools; however, none of the aforementioned efforts responds to the new criteria of flexibility, agility and high performance. Thus, there is a need for a new simulation tool that offers the ability to deal with the dynamic behavior of reconfigurable systems.
We have detailed the different features provided by ARTE: 1) scheduling simulation, 2) feasibility tests, and 3) measurement of the percentage of missed deadlines. We have described the three main tools of ARTE: 1) a random generator of task sets, 2) a random generator of reconfiguration scenario sets, and 3) a simulator tool. Finally, we presented some features we plan to implement.

Fig. 1. The reconfiguration scenarios generator tool.
Fig. 2. The task set generator tool.
Fig. 3. The reconfiguration scenarios generator tool.
Fig. 4. The simulation tool.
Fig. 5. Random system generation.
Fig. 6. Random reconfiguration scenario generation.
Fig. 7. Test system feasibility.
Fig. 8. EDH simulation.
Fig. 9. EDH-MK simulation.

Table 1. Initial System Configuration.
File Name: systemfile.txt
Power Rate: 10
Processor Utilization: 0.8
Energy Utilization: 0.8
Number of tasks: 10
Emax: 200
Emin: 2
Battery Capacity: 200
Proc number: 1

Definition 4: Let d be the deadline of the active job at current time t_c. The preemption slack energy of a mandatory job set Ψ at t_c is

PSE_Ψ(t_c) = min_{t_c ≤ r_i ≤ d_i ≤ d} SE_{τ_i}(t_c) (13)

Lemma 1: If d_1 is missed in the EDH-MK schedule because of energy starvation, there exists a time instant t such that g(t, d_1) > C + E_p(t, d_1), and no schedule exists where d_1 and all earlier deadlines are met.

Proof: Recall that we have to consider the energy starvation case where d_1 is missed with E(d_1) = 0. Let t_0 be the latest time before d_1 where a mandatory job with deadline after d_1 releases, no other mandatory job is ready just before t_0 and the energy storage unit is fully charged, i.e. E(t_0) = C. The initialization time can be such a time. The processor is idle within [t_0 − 1, t_0), since no mandatory jobs are ready. As no energy is wasted except when there are no ready jobs, the processor is busy at least from time t_0 to t_0 + 1. We consider two cases:

Case 1: No mandatory job with deadline after d_1 executes within [t_0, d_1). Consequently, all the mandatory jobs that execute within [t_0, d_1) have release time at or after t_0 and deadline at or before d_1. The amount of energy required by these mandatory jobs is g(t_0, d_1). As Ψ is feasible, g(t_0, d_1) is no more than the maximum storable energy plus all the incoming energy, i.e., C + E_p(t_0, d_1). As E(t_0) = C, we conclude that all mandatory jobs ready within [t_0, d_1) can be executed with no energy starvation, which contradicts the deadline violation at d_1 with E(d_1) = 0.

Case 2: At least one mandatory job with deadline after d_1 executes within [t_0, d_1). Let t_2 be the latest time where a mandatory job, say τ_2, with deadline after d_1 is executed. As d_1 is lower than d_2 and mandatory jobs are executed according to the earliest deadline rule in EDH-MK, we have r_2 < r_1. At time t_2, one of the following situations occurs.

Case 2a: The processor is busy at all times in [t_0, d_1). τ_2 is preempted by a higher priority job, say τ_3, with d_3 ≤ d_1. From rule 4.2 in [Chetto], PSE_Ψ(r_3) > 0, which implies that SE_{τ_1}(r_3) > 0 and in consequence g(r_3, d_1) < E(r_3) + E_p(r_3, d_1). All mandatory jobs that are executed within [r_3, d_1) have release time at or after r_3 and deadline at or before d_1. Consequently, the amount of energy they require is at most g(r_3, d_1). That contradicts the deadline violation and E(d_1) = 0.

Case 2b: The processor is idle in [t_3 − 1, t_3), with t_3 > t_2, and busy at all times in [t_3, d_1). The processor imperatively stops idling at time t_3 by rule 4.1 in [Chetto] if E(t_3) = C. By hypothesis, there is no mandatory job waiting with deadline at or before d_1 at t_3, because t_0 is the latest one. Furthermore, no mandatory job with deadline after d_1 is executed after t_2 and consequently after t_3. In order not to waste energy, all the energy which arrives from the source is used to advance mandatory jobs with deadline after d_1. The processor continuously commutes from active state to inactive state. The storage is maintained at maximum level until τ_1 releases. Consequently, we have E(r_1) = C. As τ_1 is feasible, g(r_1, d_1) ≤ C + E_p(r_1, d_1). Thus, E(r_1) + E_p(r_1, d_1) ≥ g(r_1, d_1). That contradicts the deadline violation and E(d_1) = 0.

Lemma 2: The set Ψ is energy-feasible if and only if

SSE_Ψ ≥ 0 (14)

Proof: "If": Directly follows from Lemma 1. "Only if": Since Ψ is energy-feasible, let us consider an energy-valid schedule produced within [0, d_Max). The amount of energy demanded in each interval of time [t_1, t_2), g(t_1, t_2), is necessarily less than or equal to the actual energy available in [t_1, t_2), given by E(t_1) + E_p(t_1, t_2). An upper bound on E(t_1) is the maximum storable energy at time t_1, that is C. Consequently, g(t_1, t_2) is lower than or equal to C + E_p(t_1, t_2). This leads to ∀(t_1, t_2) ∈ [0, d_Max), g(t_1, t_2) ≤ C + E_p(t_1, t_2), i.e. SSE_Ψ(t_1, t_2) ≥ 0. Thus, SSE_Ψ ≥ 0.
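The quantities used in these feasibility conditions are simple to evaluate; the sketch below computes the mandatory energy demand of equation (11) and the static slack energy of Definition 3, under the simplifying assumption of a constant recharging power Pr, so that E_p(0, t) = Pr × t (task fields and names are illustrative).

```python
def mandatory_energy_demand(tasks, t):
    """Approximate mandatory energy demand g(0, t) of equation (11):
    each task contributes the fraction m_i/k_i of its jobs, each
    consuming En_i units of energy."""
    return sum(tau["m"] / tau["k"] * (t / tau["T"]) * tau["En"] for tau in tasks)

def static_slack_energy(tasks, t, capacity, pr):
    """SSE(0, t) = C + E_p(0, t) - g(0, t) (equation (12)), assuming
    E_p(0, t) = pr * t for a constant recharging power pr."""
    return capacity + pr * t - mandatory_energy_demand(tasks, t)

# Two tasks: the first only has to meet 1 deadline out of every 2.
tasks = [
    {"m": 1, "k": 2, "T": 10, "En": 40},
    {"m": 1, "k": 1, "T": 5,  "En": 10},
]
print(static_slack_energy(tasks, t=100, capacity=200, pr=5))  # 300.0
```

A non-negative result over every prefix [0, t] is exactly the SSE_Ψ ≥ 0 side of the schedulability condition.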
Son Quoc Nguyen

Instability and friction

Keywords: Friction, Unilateral contact, Instability, Stick-slip-separation waves

A review of the stability analysis of solids in unilateral and frictional contact is given. The presentation is focussed on the stability of an equilibrium position of an elastic solid in frictional contact with a fixed or moving obstacle. The problem of divergence instability and the derivation of a criterion of static stability are discussed first for the case of a fixed obstacle. The possibility of flutter instability is then considered for a steady sliding equilibrium with a moving obstacle. The steady sliding solution is generically unstable by flutter and leads to a dynamic response which can be chaotic or periodic. This dynamic response leads to the generation of stick-slip-separation waves on the contact surface, in a similar way to Schallamach waves in statics. Illustrating examples and principal results recently obtained in the literature are reported. Some problems of friction-induced vibration and noise emission, such as brake squeal, can be interpreted in this spirit.
Solids in frictional contact with an obstacle

Coulomb's law of dry friction

At a contact point of a solid with an obstacle, the relative velocity v = v_s − v_o is by definition the difference between the velocities of the material points of the solid and of the obstacle in contact. The relative velocity v and the reaction R can be decomposed into normal and tangential components

v = v_T + v_N n, R = T + N n (1)

where n denotes the external normal vector to the obstacle at a contact point, v_T is the sliding velocity vector, T the tangential reaction vector and N the normal reaction; N and v_N are scalars. The unilateral condition of contact implies that the normal reaction must be non-negative, N ≥ 0. When there is contact, Coulomb's law of dry friction states that the friction criterion must be satisfied and that the friction force must have the opposite direction to the sliding velocity:

φ = ‖T‖ − f N ≤ 0, and ‖T‖ = f N, T = −a v_T, a ≥ 0, if v_T ≠ 0 (2)

where f denotes the friction coefficient. In particular, the dissipation by friction is −T · v_T = f N ‖v_T‖. Coulomb's law has often been interpreted in the literature as a non-associated law, since the velocity (v_T, v_N) is not a normal to the domain of admissible forces. In particular, it has been discussed as a bi-potential law, cf. [De Saxcé]. A more standard interpretation consists of saying that Coulomb friction is a standard dissipative law with a state-dependent dissipation potential, cf.
[Moreau; Nguyen]. Indeed, the normality law is satisfied by the flux v_T and the force T, since T and v_T are related through a state-dependent dissipation potential D(v_T, N):

T = −D,v_T with D = f N ‖v_T‖ (3)

where D,v_T is understood in the sense of the sub-gradient. The set of admissible forces, which is a sphere of radius f N, depends on the present state through the present value of N.

Governing equations

The simple case of an elastic solid occupying a volume V in the undeformed position is considered. The solid is submitted to given forces and displacements r_d = r_d(λ(t)), u_d = u_d(λ(t)), respectively on the portions S_r, S_u of the boundary S, where λ(t) denotes a control parameter defining the loading history. On the complementary part S_R, the solid may enter into contact with a moving obstacle h(m, t) < 0, and the non-penetration condition is

h(x + u(x, t), t) ≥ 0 ∀x ∈ S_R (4)

The mechanical response of the solid is governed by the unilateral contact conditions, Coulomb's law and the classical equations of elastodynamics. In the Lagrange description, the governing equations are

Div b = ρ ü, b = W,∇u ∀x ∈ V
b · n_s = r_d ∀x ∈ S_r, b · n_s = N n + T ∀x ∈ S_R, u = u_d ∀x ∈ S_u (5)

where W(∇u) and b denote respectively the elastic energy per unit volume and the unsymmetric Piola-Lagrange stress.
In particular, the unknowns (u, N) must satisfy the local equations

u = u_d ∀x ∈ S_u, h ≥ 0, N ≥ 0, N h = 0 ∀x ∈ S_R (6)

and the variational inequality

∫_V (∇u̇* − ∇u̇) : W,∇u(∇u) dV − ∫_{S_r} r_d · (u̇* − u̇) da + ∫_V ρ ü · (u̇* − u̇) dV − ∫_{S_R} (v*_N − v_N) N da + ∫_{S_R} f N (‖v*_T‖ − ‖v_T‖) dS ≥ 0 ∀u̇* ∈ U_ca (7)

where U_ca denotes the set of kinematically admissible rates

U_ca = { u̇* | u̇* = u̇_d on S_u } (8)

and v* and v are the relative rates

v_N = n · u̇ − v_oN, v_T = u̇ − (n · u̇) n − v_oT,
v*_N = n · u̇* − v_oN, v*_T = u̇* − (n · u̇*) n − v_oT (9)

with n = h,m / ‖h,m‖ and v_oN = −h,t / ‖h,m‖. The system of a solid in frictional contact under loads is associated with an energy potential and a dissipation potential:

E(u, λ, µ) = ∫_V W(∇u) dV − ∫_{S_r} r_d(λ) · u da − ∫_{S_R} µ h(x + u) da
D(N, v_T) = ∫_{S_R} f N ‖v_T‖ da (10)

where µ denotes the Lagrange multiplier associated with the constraints (4) and N = µ ‖h,m‖. The variational inequality (7) can be condensed as

(J + E,u) · (u̇* − u̇) + D(N, v*_T) − D(N, v_T) ≥ 0 ∀u̇* ∈ U_ca (11)

where J denotes the inertial terms. Some regularizations of the frictional contact problem have been proposed in the literature:

- Non-local Coulomb's law, proposed by Duvaut [Duvaut], in which the local normal reaction N is replaced by its mean value over an elementary representative surface S_re:

Ñ = (1/S_re) ∫_{S_re} N(x) ds

- Normal compliance law, discussed by Oden and Martins [Oden and Martins], Kikuchi and Oden [Kikuchi and Oden], Andersson [Andersson], Klarbring et al.
[Klarbring et al.], ... It consists of replacing Signorini's relations of unilateral contact by a relationship giving the normal reaction as a function of the gap h(x + u(x)). For example, nonlinear springs of energy ϕ(h) per unit surface may be added to the system, where ϕ(h) is a regular function permitting an approximation of Signorini's conditions. In particular, with ϕ(h) = (k/2)(h⁻)², the normal compliance law consists of writing

N = k ‖h,m‖ h⁻

General discussions on the existence of a solution of the quasi-static problem in small deformation have been given in the literature, for the regularization by normal compliance, cf. [Andersson; Klarbring et al.], and for the non-local Coulomb law, cf. [Cocu]. In these cases, it has been proved that the existence of at least one solution is ensured when the friction coefficient is small enough.

Divergence instability of an equilibrium

There is flutter or divergence instability if, under disturbances, the system leaves the equilibrium position with or without growing oscillations. As usual, the possibility of divergence instability can be discussed in a purely static approach [Hill; Petryk; Nguyen]. The quasi-static problem has been much discussed, cf.
[Kikuchi and Oden; Cocu; Klarbring; Andersson; Klarbring et al.]. For the sake of simplicity, the particular case of fixed obstacles is considered here. The obstacle is given by the time-independent domain h(m) ≤ 0. The normal and tangential relative velocities are v_N = u̇_N and v_T = u̇_T. The governing equations follow from (6), (7) by deleting the inertia terms. If the rigid motion of the solid is not excluded by the implied displacements, it is well known that the existence of an equilibrium position under applied loads is not always ensured.

Rate problem in quasi-statics

The analysis of the static stability of an equilibrium follows from the consideration of the rate response, as in the theory of elastic and plastic buckling of solids. With the following notation

V_ad = { u̇* | u̇* ∈ U_ca and, ∀x ∈ S_Rc: u̇*_N = 0 if N > 0 and u̇*_N ≥ 0 if N = 0; u̇*_T = 0 if φ < 0, and u̇*_T = −b T, b ≥ 0, if φ = 0, N > 0 } (12)

it is clear that u̇ ∈ V_ad, which is the set of admissible rates. The virtual work equation holds in rate form for all u̇* ∈ V_ad:

∫_V ∇(u̇* − u̇) : L : ∇u̇ dV − ∫_{S_r} ṙ · (u̇* − u̇) da − ∫_{S_R} Ṙ · (u̇* − u̇) da = 0

where L = W,∇u∇u denotes the elastic modulus.
Since Ṫ · (u̇* − u̇) = Ṫ · (u̇*_T − u̇_T) + (Ṫ · n)(u̇*_N − u̇_N) and Ṫ · n = 0 when T = 0, the last term can also be written as

−Ṙ · (u̇* − u̇) = −N (u̇*_T − u̇_T) · C · u̇_T − Ṅ (v*_N − v_N) − Ṫ · (v*_T − v_T)

where C denotes the curvature tensor of the obstacle at the contact point. Taking into account the fact that

Ṫ · (v_T − v*_T) + f Ṅ (‖v_T‖ − ‖v*_T‖) ≥ 0 (13)

which is a consequence of Coulomb's law, cf. [Klarbring; Chateau; Nguyen], the following statement is obtained:

The rate u̇ belongs to V_ad and satisfies the variational inequality

∫_V (∇u̇* − ∇u̇) : L : ∇u̇ dV − ∫_{S_r} ṙ_d · (u̇* − u̇) da − ∫_{S_Rc} N (u̇*_T − u̇_T) · C · u̇_T da − ∫_{S_Rc} (u̇*_N − u̇_N) Ṅ(u̇) da + ∫_{S_Rc} f Ṅ(u̇) (‖u̇*_T‖ − ‖u̇_T‖) da ≥ 0 ∀u̇* ∈ V_ad (14)

where

Ṅ(u̇) = d/dt (n · b · n_s) = (n_s ⊗ n) : L : ∇u̇ − R · C · u̇ (15)

If a regularization of the unilateral contact by normal compliances is introduced, there are no more unilateral constraints. For some particular cases of elastic structures in frictional contact with normal compliances, the framework of standard dissipative systems can be applied directly, cf. [Biot; Nguyen; Mroz]. Such a system is defined by an energy potential E(q, λ) and a dissipation potential D(q, q̇), which is convex and positively homogeneous of degree 1 with respect to q̇. The quasi-static evolution is governed by

E,q + D,q̇ = 0 (16)

where D,q̇ denotes the sub-gradient of D with respect to q̇.
The associated rate equations are

E,qq · q̇ + E,qλ λ̇ + D,q̇q · q̇ = 0 (17)

These equations can also be written in variational form as

E,q · (q̇* − q̇) + D(q, q̇*) − D(q, q̇) ≥ 0 ∀q̇*
(q̇* − q̇) · (E,qq · q̇ + E,qλ λ̇) + q̇ · (D,q(q, q̇*) − D,q(q, q̇)) ≥ 0 ∀q̇* ∈ V_ad

Rate equations (14) are written in the same spirit, with some additional difficulties due to the presence of constraints and to the state-dependent expressions of the tangential and normal components in terms of q.

Static stability criterion

It is assumed that, under the action of a given dead load, an equilibrium position of a solid in frictional contact with a fixed obstacle exists. The stability of this equilibrium is the subject of interest here. The classical concept of static stability consists of defining stability as the absence of additional displacement when the load does not vary, λ̇ = 0. The equilibrium is thus statically stable if the rate problem admits only the trivial solution u̇ = 0. The following statement is then straightforward [Chateau; Nguyen; Klarbring; Martins; Mroz]:

The condition of positivity

I(u̇*) = ∫_V ∇u̇* : L : ∇u̇* dV + ∫_{S_Rc} N u̇*_T · C · u̇*_T da + ∫_{S_Rc} f Ṅ(u̇*) ‖u̇*_T‖ da > 0 for all u̇* ≠ 0 ∈ V⁰_ad (18)

is a criterion of stability, since it ensures the static stability of the considered equilibrium.
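After a finite-element discretization, criterion (18) amounts to the positivity of a quadratic form x ↦ xᵀKx in which K is non-symmetric because of the frictional term f Ṅ. On the whole space, such positivity is equivalent to the positive definiteness of the symmetric part of K, which gives a simple numerical test, sketched below; since restricting to the cone V⁰_ad is less demanding, this whole-space check is only a sufficient (conservative) condition, and the matrix shown is an arbitrary illustration.

```python
import numpy as np

def quadratic_form_positive(K, tol=1e-12):
    """x^T K x > 0 for every x != 0  iff  the symmetric part of K is
    positive definite (the skew-symmetric part contributes nothing)."""
    sym = 0.5 * (K + K.T)
    return bool(np.linalg.eigvalsh(sym).min() > tol)

# Elastic stiffness with a small non-symmetric frictional coupling:
K = np.array([[2.0, 0.3],
              [0.1, 1.0]])
print(quadratic_form_positive(K))  # True: symmetric part is positive definite
```

When the friction coefficient grows, the off-diagonal frictional terms grow with it, and the symmetric part can lose definiteness, signalling possible divergence instability.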
The significance of this criterion can be better understood from its energy interpretation, in the same spirit as Hill's criterion of stability in plasticity, cf. [Hill; Petryk; Chateau; Nguyen]. A perturbation of the equilibrium by perturbation forces is introduced for t ≥ 0. Let u(t) denote the perturbed motion starting from the equilibrium, u(0) = u_eq. The energy balance at time t is

W_per(t) = ∫_V [W(∇u(t)) − W(∇u_eq)] dV − ∫_{S_r} r_d(λ) · (u(t) − u_eq) da − ∫_0^t ∫_{S_R} R(τ) · u̇ da dτ + C(t)

where W_per(t) is the energy supplied by the perturbation forces and C(t) is the kinetic energy of the solid at time t. At the beginning of the perturbation, i.e., for small t, if the perturbed motion and the energy balance can be expanded as

u(t) = u_eq + u_1 t + u_2 t²/2 + ⋯, W_per(t) = W_0 + W_1 t + W_2 t²/2 + ⋯

then the following expressions are obtained: W_0 = 0, W_1 = 0, W_2 = I(u_1), cf. [Chateau; Nguyen]. Thus, condition (18) implies that the external world must supply energy at early times in order to perturb the system from equilibrium. This criterion ensures a certain stability in the energy sense, also called directional stability [Petryk].
Its violation in a direction of displacement leads to a divergence instability of the considered equilibrium, since the system will leave this position with a growing kinetic energy.

Flutter instability of the steady sliding equilibrium

For the sake of clarity, the stability of the steady sliding equilibrium of an elastic solid in contact with a moving rigid half-space, in translation at a constant velocity w parallel to the free surface, is considered here in small deformation.

Steady sliding equilibrium

The steady sliding equilibrium u of the solid must satisfy

∫_V ∇δu : L : ∇u dV − ∫_{S_r} r_d · δu da − ∫_{S_R} (δu_N N + f N τ · δu_T) da = 0

This equation leads formally to a system of reduced equations of the form

N = k_NN[u_N] + k_NT[u_T] + N_d, T = f N τ = k_TN[u_N] + k_TT[u_T] + T_d

The principal unknown u_N must satisfy

u_N = A[N] + B, N ≥ 0, u_N ≥ 0, N · u_N = 0 (19)

with

A = (k_NN − k_NT k_TT⁻¹ k_TN)⁻¹ (I − f k_NT h_TN), h_TN[N] = k_TT⁻¹[N τ],
B = (k_NN − k_NT k_TT⁻¹ k_TN)⁻¹ (N_d − k_NT k_TT⁻¹ T_d) (20)

It is clear that the linear operator A is not symmetric if f ≠ 0:

⟨N*, A[N]⟩ = ∫_{S_R} N*(x) A[N](x) dS ≠ ⟨N, A[N*]⟩ (21)

Thus, a linear complementarity problem (LCP) must be considered. In particular, the existence and uniqueness of a steady sliding solution are ensured if A is positive-definite or P-positive [Cottle; Klarbring; Nguyen].

Instability of the steady sliding equilibrium

The stability of the steady sliding position can be obtained from the study of small perturbed motions. However, the equations of motion cannot be linearized without the assumption of effective contact.
Indeed, in the presence of a loose contact, a small perturbed motion is not necessarily governed by linear equations because of the possibility of separation. Under the assumption of an effective contact, if the sliding speed is never zero, the dynamic equations can be written as V δu • ρ ü dV + V ∇δu : L : ∇u dV + S R Nδu N dS + S R f N ( u̇ T -w)/| u̇ T -w| • δu T dS = 0 ∀δu, δN (22) The linearization is then possible for sliding motions. The nature of this particular problem can be better understood in the discretized form. After discretization, the equations of motion are U N = 0, M Y Y Ÿ + K Y Y Y = f Φ( Ẏ )N + F Y , M NY Ÿ + K NY Y = N + F N ( 23 ) where u = (U N , Y ) and Φ( Ẏ ) is a matrix dependent on the direction of sliding. The linearized equations for sliding motions are U * N = 0, M Y Y Ÿ * + K Y Y Y * = f Φ N * + f Φ Ẏ Ẏ * , M NY Ÿ * + K NY Y * = N * (24) The general expression u * = e st U with U = (U N = 0, X) then leads to s 2 M Y Y X + K Y Y X = f sΦ Ẏ X + f ΦN , s 2 M NY X + K NY X = N (25) i.e., to the generalized eigenvalue problem s 2 (M Y Y -f ΦM NY )X -sf Φ Ẏ X + (K Y Y -f ΦK NY )X = 0 (26) Thus, the considered equilibrium is asymptotically stable (with respect to sliding motions) if Re(s) < 0 for all s and unstable if there exists at least one value s such that Re(s) > 0. This generalized eigenvalue problem can be written as (s 2 M + s C + K)X = 0 with non-symmetric matrices M, K and complex eigenvalues and eigenvectors. This analysis leads to the definition of a critical value f d ≥ 0 such that the considered equilibrium is unstable when f > f d . Example on the sliding contact of two elastic layers The simple example of the frictional contact of two elastic infinite layers is considered here as an illustrative example. This problem was discussed analytically by Adams [START_REF] Adams | Self-excited oscillations of two elastic half-spaces sliding with a constant coefficient of friction[END_REF], by Martins et al.
[START_REF] Martins | Dynamic surface solutions in linear elasticity and viscoelasticity with frictional boundary conditions[END_REF] and by Renardy [START_REF] Renardy | Ill-posedness at the boundary for elastic solids sliding under Coulomb friction[END_REF]. The contact in plane strain with friction of two infinite elastic layers, of thickness h and h * respectively as shown in Fig. 2, is considered. The lower face of the bottom layer is maintained fixed in the axes Oxyz. The upper face of the top layer, assumed to be in translation of velocity w in the direction Ox, is compressed to the bottom layer by an implied displacement δ < 0. At the interface y = 0, the contact is assumed to obey Coulomb's law of friction with a constant friction coefficient. The celerities of dilatation and shear waves are first introduced: c 1 = √((λ + 2µ)/ρ), c 2 = √(µ/ρ), τ = c 2 /c 1 for the top layer and for the bottom layer (superscript *), to write the governing equations for the displacements of the top layer wt e 1 + u(x -wt, y, t) and of the bottom layer u * (x, y, t) under the form: (1 -τ 2 (w/c 2 ) 2 ) u x,xx + τ 2 u x,yy + (1 -τ 2 ) u y,xy = τ 2 u x,tt -2(w/c 2 ) u x,tx , u y,yy + τ 2 (1 -(w/c 2 ) 2 ) u y,xx + (1 -τ 2 ) u x,xy = τ 2 u y,tt -2(w/c 2 ) u y,tx , u * x,xx + τ * 2 u * x,yy + (1 -τ * 2 ) u * y,xy = τ * 2 u * x,tt , u * y,yy + τ * 2 u * y,xx + (1 -τ * 2 ) u * x,xy = τ * 2 u * y,tt (27) Boundary and interface conditions are u(x -wt, h, t) = 0, u * (x, -h * , t) = 0, u y (x -wt, 0, t) = u * y (x, 0, t) σ yy (x, 0, t) = σ * yy (x, 0, t), σ xy (x, 0, t) = σ * xy (x, 0, t), f σ yy (x, 0, t) = -σ xy (x, 0, t) The stability of the steady sliding solution can be obtained by a linearization of the dynamic equation under the assumption of sliding perturbed motions near the steady sliding state.
These motions are searched for under the form of slip waves of wave-length L = 1/k: u(x -wt, y, t) = e 2πkct e 2ikπ(x-wt) X(y), u * (x, y, t) = e 2πkct e 2ikπx X * (y) The condition of existence of non null displacement modes (X, X * ) requires that the pair c and k must be a root of the following equation: F (c, k) = ρc 2 2 A(p, q, kh) (iB(p * , q * , kh * ) + f C(p * , q * , kh * )) + ρ * c * 2 2 A(p * , q * , kh * ) (iB(p, q, kh) -f C(p, q, kh)) = 0 ( 28 ) where p, q, A, B, C are appropriate functions [START_REF] Moirot | Some problems of friction-induced vibrations and instabilities[END_REF]: p 2 = 1 + ((c -iw)/c 2 ) 2 , q 2 = 1 + τ 2 ((c -iw)/c 2 ) 2 , A(p, q, kh) = -4pq 1 + p 2 + pq 4 + 1 + p 2 2 cosh(2πpkh) cosh(2πqkh) -1 + p 2 2 + 4p 2 q 2 sinh(2πpkh) sinh(2πqkh), B(p, q, kh) = q 1 -p 2 sinh(2πpkh) cosh(2πqkh) -pq cosh(2πpkh) sinh(2πqkh), C(p, q, kh) = pq 3 + p 2 -pq 3 + p 2 cosh(2πpkh) cosh(2πqkh) + 2p 2 q 2 + 1 + p 2 sinh (2πpkh) sinh (2πqkh) The case of a rigid top layer is obtained when c 2 ⇒ +∞: F (c, k) = iB(p * , q * , kh * ) + f C(p * , q * , kh * ) = 0 (29) For an elastic half-plane compressed into a moving rigid half-plane, cf. Martins et al. [START_REF] Martins | Dynamic surface solutions in linear elasticity and viscoelasticity with frictional boundary conditions[END_REF], the results are: F (c) = iq * (1 -p * 2 ) + f (1 + p * 2 -2p * q * ) = 0 ( 30 ) In the case of two elastic half-planes, cf. Adams [START_REF] Adams | Self-excited oscillations of two elastic half-spaces sliding with a constant coefficient of friction[END_REF], this equation can be written as: F (c) = ρc 2 2 ((1 + p 2 ) 2 -4pq) (iq * (1 -p * 2 ) + f (1 + p * 2 -2p * q * )) + ρ * c * 2 2 ((1 + p * 2 ) 2 -4p * q * ) (iq(1 -p 2 ) -f (1 + p 2 -2pq)) = 0 ( 31 ) It has been established in each case that there exists a critical value f d ≥ 0 such that the steady sliding solution is unstable for f ≥ f d .
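In practice, critical values such as f d are located numerically: one scans the friction coefficient and tests whether the eigenvalue problem of the form (s 2 M + s C + K)X = 0 acquires an eigenvalue with positive real part. A minimal sketch with a 2-DOF toy stiffness matrix in which friction destroys the symmetry of K (the matrices and all numerical values are illustrative assumptions, not the discretized operators of the text; the damping/gyroscopic term is dropped for clarity):

```python
import numpy as np

def max_growth_rate(K):
    # Undamped flutter problem s^2 X + K X = 0 (mass matrix = identity):
    # for each eigenvalue lam of K, the roots are s = +/- i sqrt(lam).
    lam = np.linalg.eigvals(K).astype(complex)
    s = 1j * np.sqrt(lam)
    return float(np.maximum(s.real, -s.real).max())

def critical_friction(f_values):
    # Toy mode-coupling stiffness: for f below the coalescence value the
    # eigenvalues of K are real and positive (purely oscillatory motion);
    # above it they merge into a complex pair and one root of s grows.
    for f in f_values:
        K = np.array([[2.0, 1.0 - f],
                      [1.0, 3.0]])
        if max_growth_rate(K) > 1e-6:
            return f
    return None

f_d = critical_friction(np.arange(0.0, 3.0, 0.01))
```

For this toy matrix the two modes coalesce when the discriminant (5 - 4f) changes sign, i.e. near f = 1.25, which is the kind of mode-coupling mechanism behind the critical value f d discussed in the text.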
For example, f d = 0 occurs for the system of two layers of finite depths while the possibility f d > 0 may happen in the sliding contact of half-spaces, cf. [START_REF] Martins | Dynamic surface solutions in linear elasticity and viscoelasticity with frictional boundary conditions[END_REF][START_REF] Adams | Dynamic instabilities in the sliding of two layered elastic half-spaces[END_REF][START_REF] Martins | Some notes on friction and instabilities[END_REF][START_REF] Moirot | Etude de la stabilité d'un équilibre en présence du frottement de Coulomb[END_REF][START_REF] Ranjith | Slip dynamics at an interface between dissimilar materials[END_REF]. Stick-slip-separation waves The fact that the steady sliding solution is unstable leads to the study of possible dynamic bifurcations of the sliding contact of solids. In the spirit of Hopf bifurcation [START_REF] Troger | Nonlinear Stability and Bifurcation Theory[END_REF], a periodic response can be expected as an alternative stable response. This possibility has been explored in the example of two coaxial cylinders [START_REF] Moirot | Etude de la stabilité d'un équilibre en présence du frottement de Coulomb[END_REF][START_REF] Moirot | An example of stick-slip waves[END_REF][START_REF] Oueslati | Transition vers une onde glissement-adhérence-décollement sous contact frottant de Coulomb[END_REF]. The mechanical response in plane strain of a brake-like system composed of an elastic tube, of internal radius R and external radius R * , in frictional contact on its inner surface with a rotating rigid cylinder of radius R + ∆ and of angular rotation Ω has been discussed, cf. Fig. 3. The mismatch ∆ ≥ 0 is a load parameter controlling the normal contact pressures. This model problem enables us to exhibit the existence of nontrivial periodic solutions in the form of stick-slip or stick-slip-separation waves propagating on the contact surface.
The governing equations of the system follow from the kinetic relations, the fundamental law, the linear elastic constitutive equations, and the boundary unilateral contact conditions with Coulomb's friction: ε = (∇ū) s , Div σ̄ = γ ü , σ̄ = (ν/((1 + ν)(1 -2ν))) Tr(ε)I + (1/(1 + ν)) ε , ūr (ξ, θ, t) = ūθ (ξ, θ, t) = 0 , σ̄rr (1, θ, t) = -p(θ, t), σ̄rθ (1, θ, t) = -q(θ, t) , ūr ≥ δ, p ≥ 0, p(ū r -δ) = 0 , |q| ≤ f p, q(1 -u̇θ ) -f p|1 -u̇θ | = 0 (32) where non-dimensional variables are introduced, ū = u/R, σ̄ = σ/E, r̄ = r/R, γ = ρR 2 Ω 2 /E, ξ = R * /R, δ = ∆/R, t̄ = Ωt, u̇ = dū/dt̄ The steady sliding solution is given by, ūer = (δ/(ξ 2 -1))(ξ 2 /r̄ -r̄), ūeθ = (δf /(ξ 2 -1))(ξ 2 /r̄ -r̄)(1 + 1/(ξ 2 (1 -2ν))), p e = (δ/(ξ 2 -1))(1/(1 + ν))(ξ 2 + 1/(1 -2ν)) > 0, q e = f p e (33) Since closed form dynamical solutions cannot be generated, two complementary approaches have been followed. The first approach is semi-analytical after a reduction to a simpler system of equations. The second approach consists of a numerical simulation by the finite element method and appropriate time-integrations. An interesting simplification to the problem is obtained when the displacement is sought in the form ūr = U(θ, t)F (r̄), ūθ = V (θ, t)F (r̄), F (r̄) = (1/(ξ 2 -1))(ξ 2 /r̄ -r̄) (34) In this approximation, the following local equations are obtained from the virtual work equation: Ü -bU″ -dV′ + gU = P , V̈ -aV″ + dU′ + hV = Q , P ≥ 0, U -δ ≥ 0, P(U -δ) = 0 , |Q| ≤ f P, Q(1 -V̇ ) -f P |1 -V̇ | = 0 (35) where ′ denotes the derivative with respect to θ and a, b, g, h, d are material and geometry constants. All of them are positive except for the coupling coefficient d. Finally, only the non-dimensional displacements on the contact surface U(θ, t) and V (θ, t) and the non-dimensional reactions P (θ, t) and Q(θ, t) remain as unknowns in the reduced equations. The steady sliding solution, given by U e = δ, V e = δf g/h, P = P e and Q e = f P e , is unstable for the reduced system.
Indeed, under the assumption of sliding motions, a small perturbed motion is described by U = U e , V = V e + V * , P = P e + P * and Q = Q e + Q * . It follows that V̈ * -aV *″ + f dV *′ + hV * = 0 (36) If a general solution is sought in the form V * = e st e ikθ , then -s 2 = ak 2 + h + ikf d. When f = 0, it follows that s = ±iω k with ω k 2 = ak 2 + h. Thus two harmonic waves propagating in opposite senses of the form cos(kθ ± ω k t + ϕ) are obtained as in classical elasticity. When f > 0 and d > 0, then s = ±(s rk + is ik ), s rk > 0, s ik < 0; thus a general solution of the difference V * of the form V * = e ±s rk t cos(kθ ± s ik t + ϕ) is obtained and represents two waves propagating in opposite senses: an exploding wave in the sense of the implied rotation, and a damped wave propagating in the opposite direction. If f > 0 and d < 0, the exploding wave propagates in the opposite sense since the previous expression of s is still valid with s rk > 0 and s ik > 0. It is expected that in some particular situations, there is a dynamic bifurcation of Poincaré-Andronov-Hopf type. This means that the perturbed motion may evolve to a periodic response. This transition has been observed numerically in many examples of the literature, cf., for example, [START_REF] Oestreich | Bifurcation and stability analysis for a non-smooth friction oscillator[END_REF] or [START_REF] Vola | Friction and instability of steady sliding squeal of a glass/rubber contact[END_REF]. To explore this idea, a periodic solution has been sought in the form of a wave propagating at constant velocity: U = U(φ), V = V (φ), φ = θ -c̄t (37) where c̄ is the non-dimensional wave velocity, and U and V are periodic functions of period T = 2π/k. The physical velocity of the wave is thus c = |c̄|RΩ and the associated dynamic response is periodic of frequency |c̄|kΩ. The propagation occurs in the sense of the rotation when c̄ > 0.
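The dichotomy between the exploding and the damped wave follows directly from -s 2 = ak 2 + h + ikf d; a quick numerical check (the values of a, h, f, d and the mode number k below are arbitrary illustrations, not the material constants of the cylinder problem):

```python
import numpy as np

def mode_growth_rate(k, a, h, f, d):
    # -s^2 = a k^2 + h + i k f d  =>  s = +/- i sqrt(a k^2 + h + i k f d).
    # The two roots have opposite real parts: one wave grows, the other decays.
    z = a * k**2 + h + 1j * k * f * d
    s = 1j * np.sqrt(z)
    return float(max(s.real, -s.real))

# frictionless (f = 0): z is real, s is purely imaginary -> harmonic waves
# f > 0 and d != 0: z has a nonzero imaginary part -> strictly positive
# growth rate for every mode k != 0
```

The check confirms that any nonzero product f d, whatever its sign, produces one growing and one decaying wave, which is the flutter mechanism described above.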
According to the regime of contact, a slip wave, a stick-slip wave, a slip-separation wave or a stick-slip-separation wave can be discussed. The governing equations of such a wave follow from [START_REF] Oueslati | Transition vers une onde glissement-adhérence-décollement sous contact frottant de Coulomb[END_REF]: (c̄ 2 -b)U″ -dV′ + gU = P , (c̄ 2 -a)V″ + dU′ + hV = Q , P ≥ 0, U ≥ δ, P (U -δ) = 0 , |Q| ≤ f P, Q(1 -V̇ ) -f P |1 -V̇ | = 0 (38) The existence of stick-slip waves is obtained when the load is sufficiently strong or when the rotation is slow. For example, for ξ = 1.25 and f = 1, stick-positive-slip solutions are obtained for 8 ≤ k ≤ 12. It is found that c̄ must have the sign of d. These waves propagate in the sense of the previous exploding perturbed motions, thus opposite to the rotation of the cylinder when d < 0, with a frequency and a celerity independent of the rotation velocity Ω. The celerity is close to the celerities of dilatation and shear waves in the solid while the frequency is inversely proportional to the radius R. For ξ = 1.15, for example, d is positive and the propagation goes in the rotation direction. The limiting case ξ ⇒ 1 can be interpreted as the sliding motion of a rigid plate on an elastic layer or of a rigid half-space on an elastic half-space [START_REF] Adams | Self-excited oscillations of two elastic half-spaces sliding with a constant coefficient of friction[END_REF][START_REF] Martins | Dynamic surface solutions in linear elasticity and viscoelasticity with frictional boundary conditions[END_REF]. The obtained solution is a wave with an oscillation about the steady sliding response. The amplitude of the wave is linearly proportional to the rotation Ω. It also increases with the friction coefficient f and decreases with the mismatch. Thus, for vanishing rotations, the steady sliding solution is recovered as the limit of the dynamic response.
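The quoted frequencies and celerities are tied together by the relation frequency = |c̄|kΩ combined with c = |c̄|RΩ, so that frequency = ck/R independently of Ω; a quick consistency check against the numerical values quoted in the figure captions (units as reported in the text):

```python
def wave_frequency(c, k, R):
    # frequency = |c_bar| * k * Omega with c = |c_bar| * R * Omega,
    # so Omega cancels and frequency = c * k / R
    return c * k / R

# mode-4 stick-slip wave (xi = 2):    c = 1030 m/s, R = 0.5 m -> 8240 Hz
# mode-8 stick-slip wave (xi = 1.25): c = 1255 m/s, R = 1 m   -> ~10045 Hz
```

This also makes explicit the statement that the frequency is inversely proportional to the radius R at fixed celerity and mode number.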
The stick-slip solution may no longer exist if the rotation is strong enough, since the associated pressure would become negative. In the same spirit, for a small mismatch, the pressure may become negative under the assumption of a stick-slip regime everywhere. This means that the possibility of separation is not excluded when the mismatch is not strong enough or when the rotation or the friction coefficient is sufficiently high. A numerical simulation with an explicit scheme using Lagrange multipliers [START_REF] Carpenter | Lagrange constraints for transient finite element surface contact[END_REF][START_REF] Baillet | Tribologie de l'interface fibre/matrice. Approche théorique et expérimentale[END_REF] has been performed. The case ξ = 2 and f = 0.3 has been considered. Starting from a motionless initial state, the mismatch displacement is then increased linearly from 0 to its final value. A cyclic limit response is then obtained for large time. The numerical simulation leads to a stick-slip wave in mode 4 without forcing, and the obtained response is close to the analytical solution of the reduced approach. It was also checked that a stick-slip-separation wave is effectively obtained when the mismatch is small enough or when the friction is high enough. For example, when Ω = 50 rad/s, δ = 0.001 and f = 0.7, the limit cycle is a stick-slip-separation wave. The result for radial displacements is shown in Fig. 5. In fact, it is well known that a periodic response does not systematically result from the flutter instability of the steady sliding solution.
It has been observed in various examples of discrete or continuous systems that the response may be quasi-periodic, non-periodic or chaotic [START_REF] Van Der Pol | On relaxation-oscillations[END_REF][START_REF] Oden | Models and computational methods for dynamic friction phenomena[END_REF][START_REF] Popp | Stick-slip vibrations and chaos[END_REF][START_REF] Nguyen | Stability and Nonlinear Solid Mechanics[END_REF]. Periodic responses prevail in the example of coaxial cylinders because of the special geometry of the system. Periodic solutions in the form of stick-slip waves have also been obtained by Adams in the sliding contact of two elastic half-spaces [START_REF] Adams | Steady sliding of two elastic half-spaces with friction reduction due to interface stick-slip[END_REF]. The generation of dynamic stick-slip-separation waves on the contact surface can be compared to Schallamach waves in the sliding contact of rubber in statics, cf. [START_REF] Zharii | Frictional contact between the surface wave and a rigid strip[END_REF][START_REF] Baillet | Tribologie de l'interface fibre/matrice. Approche théorique et expérimentale[END_REF][START_REF] Moirot | Etude de la stabilité d'un équilibre en présence du frottement de Coulomb[END_REF][START_REF] Martins | Dynamic stability of finite dimensional linear elastic system with unilateral contact and Coulomb's friction[END_REF][START_REF] Oueslati | Transition vers une onde glissement-adhérence-décollement sous contact frottant de Coulomb[END_REF][START_REF] Vola | Consistent time discretization for a dynamical frictional contact problem and complementary techniques[END_REF][START_REF] Renard | Modélisation des instabilités liées au frottement sec des solides, aspects théoriques et numériques[END_REF][START_REF] Adams | Steady sliding of two elastic half-spaces with friction reduction due to interface stick-slip[END_REF][START_REF] Vola | Friction and instability of steady sliding squeal of a glass/rubber contact[END_REF][START_REF] Martins | Some notes on friction and instabilities[END_REF][START_REF] Raous | Numerical characterization and computation of dynamic instabilities for frictional contact problems[END_REF]. Friction-induced vibrations and noises It is well known that the presence of friction induces mechanical vibrations and noise emissions in the sliding contact of solids. For example, the creaking noise of a door or the unsteady motion with fits and starts of a windscreen wiper can be interpreted as stick-slip motions resulting from the instability of the steady sliding solution. In particular, the phenomena of squeal [START_REF] Crolla | Brake noise and vibration: state of art[END_REF] have been interpreted in the literature in this way. The squeals of band brakes in washing machines have been discussed in [START_REF] Nakai | Band brake squeal[END_REF].
The brake squeals of an automotive disk brake have been examined in [START_REF] Moirot | Etude de la stabilité d'un équilibre en présence du frottement de Coulomb[END_REF][START_REF] Moirot | Brake squeal: a problem of flutter instability of the steady sliding solution?[END_REF], cf. Fig. 6. The squeal of a glass-rubber system in finite deformation is considered in [START_REF] Vola | Friction and instability of steady sliding squeal of a glass/rubber contact[END_REF]. Fig. 6. Some unstable modes of the system pad-disk in an automotive disk brake [START_REF] Moirot | Etude de la stabilité d'un équilibre en présence du frottement de Coulomb[END_REF][START_REF] Moirot | Some problems of friction-induced vibrations and instabilities[END_REF] Fig. 1. A solid in unilateral contact with an obstacle. Fig. 2. Sliding contact of two elastic layers. Fig. 3. The problem of coaxial cylinders in frictional contact. For example, for √(E/ρ) = 1000 m/s, ξ = 1.25, f = 1, R = 1 m and Ω = 100 rad/s, the results obtained concerning the mode-8 wave are Ψ = 0.839, c = 1255 m/s and the associated frequency is 10045 Hz. If ξ = 2, f = 0.3, R = 0.5 m, Ω = 10 rad/s, a frequency 8240 Hz and a celerity 1030 m/s are obtained for k = 4 as shown in Fig. 4. Fig. 4. Semi-analytical approach: an example of stick-slip wave in mode 4 with ξ = 2, f = 0.3, Ω = 10 rad/s, R = 0.25 m and δ = 0.005. It is found that Ψ = 0.644, c = 1030 m/s. Phase diagram and variations of V /V e , P /P e and Q/Q e in [0, 2π/k]. Fig. 5. A mode-4 stick-slip-separation wave, obtained by numerical simulations for ξ = 2, Ω = 50 rad/s, f = 0.7, δ = 0.001. The isovalues of the radial displacement (mm): separation, slip and stick nodes on the contact surface are given respectively in red, green, blue.
2019
https://univ-tln.hal.science/hal-01756728/file/Chang-Jin-Novotny-final.pdf
T. Chang, B. J. Jin, A. Novotný Compressible Navier-Stokes system with general inflow-outflow boundary data Keywords: Compressible Navier-Stokes system, inhomogeneous boundary conditions, weak solutions, renormalized continuity equation, large inflow, large outflow Introduction We consider the problem of identifying the non-steady motion of a compressible viscous fluid driven by general in/out flux boundary conditions on general bounded domains. Specifically, the mass density ϱ = ϱ(t, x) and the velocity u = u(t, x), (t, x) ∈ I × Ω ≡ Q T , I = (0, T ) of the fluid satisfy the Navier-Stokes system, ∂ t ϱ + div x (ϱu) = 0, (1.1) ∂ t (ϱu) + div x (ϱu ⊗ u) + ∇ x p(ϱ) = div x S(∇ x u), (1.2) S(∇ x u) = µ (∇ x u + ∇ t x u) + λ div x u I, µ > 0, λ ≥ 0, (1.3) in Ω ⊂ R d , d = 2, 3, where p = p(ϱ) is the barotropic pressure. The system is endowed with initial conditions ϱ(0) = ϱ 0 , (ϱu)(0) = ϱ 0 u 0 . (1.4) We consider general boundary conditions, u| ∂Ω = u B , ϱ| Γ in = ϱ B , (1.5) where Γ in = {x ∈ ∂Ω | u B • n < 0}, Γ out = {x ∈ ∂Ω | u B • n > 0}. (1.6) We concentrate on the inflow/outflow phenomena; we have therefore deliberately omitted the contribution of external forces f. Nevertheless, all results of this paper remain valid also in the presence of external forces. Investigation and better insight into the equations in this setting is important for many real-world applications. In fact, this is a natural and basic abstract setting for flows in pipelines, wind tunnels and turbines, to name a few concrete examples. In spite of this fact, the problem in its full generality has resisted all attempts at its solution for decades. To the best of our knowledge, this is the first work ever treating this system for large boundary data in a very large class of bounded domains.
Indeed, the only available results on the existence of strong solutions in setting (1.1-1.6) are on a short time interval or deal with small boundary data perturbations of an equilibrium state, see e.g. Valli, Zajaczkowski [START_REF] Valli | Navier-Stokes equations for compressible fluids: Global existence and qualitative properties of the solutions in the general case[END_REF]. The only results on the existence of weak solutions for large flows for system (1.1-1.6) with large boundary data are available in papers by Novo [START_REF] Novo | Compressible Navier-Stokes model with inflow-outflow boundary conditions[END_REF] (where the domain is a ball and the incoming/outgoing velocity field is constant) or by Girinon [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF], where the domain is more general but the inflow boundary has to be a convex set included in a cone, and the velocity at the inflow boundary has to satisfy a so-called no-reflux condition. In the steady case, the problem with large boundary conditions is open for barotropic flows. It was solved only recently for the constitutive law of pressure of the so-called hard sphere model (when lim ϱ→ϱ̄ p(ϱ) = ∞ for some ϱ̄ > 0), see [START_REF] Feireisl | Stationary solutions to the compressible Navier-Stokes system with general boundary conditions Preprint Nečas Center for Mathematical Modeling[END_REF].
There are several results dealing with data close to equilibrium flows, see Plotnikov, Ruban, Sokolowski [START_REF] Plotnikov | Inhomogeneous boundary value problems for compressible Navier-Stokes equations: well-posedness and sensitivity analysis[END_REF], [START_REF] Plotnikov | Inhomogeneous boundary value problems for compressible Navier-Stokes and transport equations[END_REF], Mucha, Piasecki [START_REF] Mucha | Compressible perturbation of Poiseuille type flow[END_REF], Piasecki [START_REF] Piasecki | On an inhomogeneous slip-inflow boundary value problem for a steady flow of a viscous compressible fluid in a cylindrical domain[END_REF], Piasecki and Pokorny [START_REF] Piasecki | Strong solutions to the Navier-Stokes-Fourier system with slip-inflow boundary conditions[END_REF], among others. Our goal is to establish the existence of a weak solution (ϱ, u) to problem (1.1-1.6) for general large boundary data ϱ B , u B in an arbitrary bounded, sufficiently smooth domain with no geometric restrictions on the inflow boundary. Such a general result requires a completely different approach to the construction of solutions than the approach employed by Novo or Girinon. We suggest a new (spatially local) method of construction of solutions via regularization of the continuity equation by a specific non homogenous parabolic boundary value problem, instead of using the transport equation based approximation as in [START_REF] Novo | Compressible Navier-Stokes model with inflow-outflow boundary conditions[END_REF] or [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF]. This approach allows us to remove the restrictions imposed on the domain and data.
Another novelty with respect to the two above mentioned papers is the fact that we include in our investigation the pressure laws that may be non monotone on a compact portion of interval [0, ∞), in the spirit of [START_REF] Feireisl | Compressible Navier-Stokes equations with a non-monotone pressure law[END_REF]. (It is to be noticed that a method allowing non monotone pressure laws on a non compact portion of [0, ∞) was recently suggested in [START_REF] Bresch | Global existence of weak solutions for compresssible Navier-Stokes equations: Thermodynamically unstable pressure and anisotropic viscous stress tensor[END_REF], but this method does not work if growth of p (expressed through coefficient γ) is less than 9/5 and it is not clear whether it would work with the non homogenous boundary conditions.) The paper is organized as follows. In Section 2 we define weak solutions to the problem and state the main theorem (Theorem 2.4). In Section 4 the approximated problem (including two small parameters ε > 0 and δ > 0) is specified and its solvability is proved. Limit ε → 0 is performed in Section 5 and limit δ → 0 in Section 6. At each stage of the convergence proof from the approximate system to the original system (ε → 0 and δ → 0, respectively) our approach follows closely the Lions approach [START_REF] Lions | Mathematical topics in fluid dynamics[END_REF] (for ε → 0) and Feireisl's approach [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF] (for δ → 0). This includes the main tools as effective viscous flux identity, oscillations defect measure and renormalization techniques for the continuity equation. 
The first two tools are local, and remain essentially unchanged (with respect to their use in the case of homogenous Dirichlet boundary conditions), while the third tool -the renormalization technique for the continuity equation introduced in Di Perna-Lions [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] (in the case of squared integrable densities) and in Feireisl [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF] (in the case of non squared integrable densities) -has to be essentially modified in order to be able to accommodate general non homogenous boundary data. This topic is investigated in Section 3 (and applied in Sections 5.4 (for the limit ε → 0) and 6.5 (for the limit δ → 0)). Besides the original approximation presented in Section 4 (that allows to treat the outflow/inflow problem in full generality, without geometric restrictions on the form and position of the inflow/outflow boundaries, in contrast with all previous treatments of this problem) the content of Sections 3, 5.4, 6.5 represents the main novelty of this paper. The results on the renormalized continuity equation formulated in Lemmas 3.1, 3.2 are of independent interest within the context of the theory of compressible fluids. Main result In order to avoid additional technicalities, we suppose that the boundary data satisfy u B ∈ C 2 (∂Ω; R d ), ϱ B ∈ C(∂Ω). (2.1) In agreement with the standard existence theory in the absence of inflow/outflow, we assume for the pressure p = p -p, p ∈ C[0, ∞) ∩ C 1 (0, ∞), p(0) = 0, (2.2) ∀ϱ > 0, p′(ϱ) > max{0, a 1 ϱ γ-1 -b}, p(ϱ) ≤ a 2 ϱ γ + b, p ∈ C 2 c [0, ∞), p ≥ 0, p′(0) = 0, where γ > 1 and a 1 , a 2 , b > 0. We allow, in general, a non-monotone pressure p.
If p = 0 then the pressure is monotone, p = p, and conditions (2.2) include the isentropic pressure law p(ϱ) = aϱ γ , a > 0, γ > 1, which can be taken as a particular case. In general the splitting (2.2) into strictly increasing and bounded negative compactly supported functions complies with pressure laws that obey the assumptions a 1 ϱ γ-1 -b ≤ p′(ϱ), p(ϱ) < b + a 2 ϱ γ , p(0) = 0, where p(ϱ) ≥ 0 in a (small) right neighborhood of 0. We notice that the very latter condition (or p′(0) = 0) is not needed in the homogenous case, cf. Feireisl [START_REF] Feireisl | Compressible Navier-Stokes equations with a non-monotone pressure law[END_REF], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF]; it is specific for the non-homogenous problem. It enters into the game only in order to treat the signs of the boundary terms arising from the non-zero boundary conditions in the a priori estimates (see Section 4.3.3 for more details). For further convenience, it will be useful to introduce the Helmholtz functions: H(ϱ) = ϱ ∫ 0 ϱ p(z)/z 2 dz, H(ϱ) = ϱ ∫ 0 ϱ p(z)/z 2 dz, H(ϱ) = -ϱ ∫ 0 ϱ p(z)/z 2 dz (2.3) and relative energy functions E(ϱ|r) = H(ϱ) -H′(r)(ϱ -r) -H(r), E(ϱ|r) = H(ϱ) -H′(r)(ϱ -r) -H(r), (2.4) E(ϱ|r) = H(ϱ) -H′(r)(ϱ -r) -H(r). We begin with the definition of weak solutions to system (1.1-1.6). Definition 2.1 [Weak solutions to system (1.1-1.6)] We say that (ϱ, u) is a bounded energy weak solution of problem (1.1-1.6) if: 1. It belongs to the functional spaces ϱ ∈ L ∞ (0, T ; L γ (Ω)), 0 ≤ ϱ a.a. in (0, T ) × Ω, u ∈ L 2 (0, T ; W 1,2 (Ω; R d )), u| I×∂Ω = u B ; (2.5) 2. Function ϱ ∈ C weak ([0, T ], L γ (Ω)) and the integral identity Ω ϱ(τ, •)ϕ(τ, •) dx -Ω ϱ 0 (•)ϕ(0, •) dx = τ 0 Ω [ϱ ∂ t ϕ + ϱu • ∇ x ϕ] dxdt -τ 0 Γ in ϱ B u B • n ϕ dS x dt (2.6) holds for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )). 3.
Function ϱu ∈ C weak ([0, T ], L 2γ/(γ+1) (Ω; R d )), and the integral identity Ω ϱu(τ, •) • ϕ(τ, •) dx - Ω ϱ 0 u 0 (•) • ϕ(0, •) dx (2.7) = τ 0 Ω [ϱu • ∂ t ϕ + ϱu ⊗ u : ∇ x ϕ + p(ϱ)div x ϕ -S(∇ x u) : ∇ x ϕ] dxdt holds for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R d ). 4. There exists a Lipschitz extension u ∞ ∈ W 1,∞ (Ω; R d ) of u B whose divergence is non-negative in a certain interior neighborhood of ∂Ω, i.e. divu ∞ ≥ 0 a.e. in Û - h ≡ {x ∈ Ω | dist(x, ∂Ω) < h}, h > 0 (2.8) such that the energy inequality Ω [ 1 2 ϱ|u -u ∞ | 2 + H(ϱ) ](τ ) dx + τ 0 Ω S(∇ x (u -u ∞ )) : ∇ x (u -u ∞ ) dxdt (2.9) ≤ Ω [ 1 2 ϱ 0 |u 0 -u ∞ | 2 + H(ϱ 0 ) ] dx - τ 0 Ω p(ϱ)divu ∞ dxdt - τ 0 Ω ϱu • ∇ x u ∞ • (u -u ∞ ) dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x (u -u ∞ ) dxdt - τ 0 Γ in H(ϱ B )u B • n dS x dt -H τ 0 Γout u B • n dS x dt holds. In inequality (2.9), H = inf ϱ>0 H(ϱ) > -∞. (2.10) Remark 2.1. 1. An extension u ∞ of u B verifying (2.8) always exists, due to the following lemma (see [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF], Lemma 3.3). Lemma 2.2. Let V ∈ W 1,∞ (∂Ω; R d ) be a Lipschitz vector field on the boundary ∂Ω of a bounded Lipschitz domain Ω. Then there is h > 0 and a vector field V ∞ ∈ W 1,∞ (R 3 ) ∩ C c (R d ), divV ∞ ≥ 0 a.e. in Ûh (2.11) verifying V ∞ | ∂Ω = V, where Ûh = {x ∈ R 3 | dist(x, ∂Ω) < h}. 2. A brief inspection of formula (2.10) gives the estimate of the value H, H ≥ -sup ϱ∈(0,1) p(ϱ) -sup ϱ>1 p(ϱ) > -∞, provided supp p ⊂ [0, r], where r > 1 without loss of generality. Equation (2.6) implies the total mass inequality Ω ϱ(τ ) dx ≤ Ω ϱ 0 dx - τ 0 Γ in ϱ B u B • n dS x dt (2.12) for all τ ∈ [0, T ]. To see it, it is enough to take for test functions a convenient sequence ϕ = ϕ δ , δ > 0 (e.g. the same as suggested in (5.27)) and let δ → 0.
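For the model isentropic law p(ϱ) = aϱ^γ, the Helmholtz function (2.3) reduces, up to a linear term that drops out of (2.4), to H(ϱ) = aϱ^γ/(γ - 1), and the relative energy E(ϱ|r) is then non-negative by convexity of H. A small numerical sanity check (the values a = 1, γ = 1.4 and the sample points are arbitrary choices for illustration):

```python
import numpy as np

A, GAMMA = 1.0, 1.4  # illustrative values; any a > 0 and gamma > 1 behave the same

def H(rho):
    # Helmholtz function for p(rho) = A * rho**GAMMA (modulo a linear term)
    return A * rho**GAMMA / (GAMMA - 1.0)

def dH(rho):
    # H'(rho)
    return A * GAMMA * rho**(GAMMA - 1.0) / (GAMMA - 1.0)

def E(rho, r):
    # relative energy E(rho | r) = H(rho) - H'(r)(rho - r) - H(r)
    return H(rho) - dH(r) * (rho - r) - H(r)

rhos = np.linspace(0.1, 5.0, 200)
```

The check confirms that E(ϱ|r) vanishes at ϱ = r and is non-negative elsewhere, which is the convexity mechanism behind the stability estimates built on (2.9).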
Definition 2.2 We say that the couple (ϱ, u) ∈ L^p(Q_T) × L^2(0,T; W^{1,2}(Ω; R^d)), p > 1, is a renormalized solution of the continuity equation if b(ϱ) ∈ C_weak([0,T]; L^1(Ω)) and if, in addition to the continuity equation (2.6), it satisfies the equation

∫_Ω (b(ϱ)ϕ)(τ) dx − ∫_Ω b(ϱ_0)ϕ(0) dx (2.13)
= ∫_0^τ ∫_Ω [ b(ϱ)∂_t ϕ + b(ϱ)u • ∇_x ϕ − ϕ (b'(ϱ)ϱ − b(ϱ)) div_x u ] dxdt − ∫_0^τ ∫_{Γ_in} b(ϱ_B) u_B • n ϕ dS_x dt

for any ϕ ∈ C^1_c([0,T] × (Ω ∪ Γ_in)) and any

b ∈ C[0,∞) ∩ C^1(0,∞), zb' − b ∈ C[0,∞), |b(z)| ≤ c(1 + z^{5p/6}), |zb'(z) − b(z)| ≤ c(1 + z^{p/2}).

Throughout, the initial data are assumed to satisfy

∫_Ω ( (1/2) ϱ_0 |u_0|^2 + H(ϱ_0) ) dx < ∞, 0 ≤ ϱ_0, ∫_Ω ϱ_0 dx > 0.

Remark. 1. The by now classical existence theory (see the monographs [START_REF] Lions | Mathematical topics in fluid dynamics[END_REF], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]) covers the case u|_{(0,T)×∂Ω} = 0 (which is covered by Theorem 2.4 as well) and the case u • n|_{(0,T)×∂Ω} = 0 completed with the Navier conditions, possibly with friction (which is not covered by Theorem 2.4).

2. Theorem 2.4 still holds if one adds to the right-hand side of the momentum equation a term corresponding to large external forces f, provided f ∈ L^∞(Q_T) (modulo the necessary changes in the weak formulation to accommodate the presence of this term).

3. The regularity conditions on the two pressure components, on ϱ_B and on u_B in Theorem 2.4 could be slightly weakened (to the monotone component continuous on [0,∞) and locally Lipschitz on [0,∞) instead of belonging to C[0,∞) ∩ C^1(0,∞), the bounded component differentiable with Lipschitz derivative and compact support in [0,∞) instead of belonging to C^2_c[0,∞), ϱ_B ∈ L^∞(∂Ω), u_B ∈ W^{1,∞}(∂Ω)), at the expense of some additional technical difficulties.

We shall carry out the proof in full detail in the case d = 3, assuming tacitly that both Γ_in and Γ_out have non-zero (d−1)-dimensional Hausdorff measure. The remaining case, d = 2, is left to the reader as an exercise.
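For orientation, a worked check (ours) of the growth conditions on the renormalizations b in Definition 2.2: for power functions one computes

```latex
b(z) = z^{\theta},\qquad z\,b'(z) - b(z) = (\theta - 1)\,z^{\theta},
```

so the constraints |b(z)| ≤ c(1 + z^{5p/6}) and |zb'(z) − b(z)| ≤ c(1 + z^{p/2}) admit every exponent 0 < θ ≤ p/2, while the truncations T_k introduced in Section 3 satisfy both conditions trivially, since T_k and zT'_k(z) − T_k(z) are bounded.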
3 Renormalized continuity equation with non homogenous data The case of squared integrable density In this section we generalize the Di-Perna, Lions transport theory [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] to the continuity equation with non homogenous boundary data. The main result reads: Lemma 3.1. Suppose that Ω ⊂ R d , d = 2, U + h (Γ in ) ≡ {x 0 + zn(x 0 ) | 0 < z < h, x 0 ∈ Γ in } ∩ (R d \ Ω) (3.1) and extend the vector field u B to U + h (Γ in ), ũB (x) = u B (x 0 ), x = x 0 + zn(x 0 ) ∈ U + h (Γ in ). (3.2) If Γ in ∈ C 2 , such extension always exists and ũB ∈ C 1 (U + h (Γ in ), cf. Foote [START_REF] Foote | Regularity of the distance function[END_REF]. Consider now the flow generated in U + h (Γ in ) by the field -ũ B defined on U + h (Γ in ), X (s, x 0 ) = -ũ B (X(s, x 0 )), X(0) = x 0 ∈ U + h (Γ in ) ∪ Γ in , for s > 0, X(s; x 0 ) ∈ U + h (Γ in ). (3.3) Let Ũ+ h (Γ in ) = x ∈ U + h (Γ in ) x = X(s, x 0 ) for a certain x 0 ∈ Γ in and 0 < s < h . Employing the local Cauchy-Lipschitz theory for ODEs to equation (3.3) and evoking the differentiability properties of its solutions with respect to the "initial" data (see e.g. the book of Taylor [START_REF] Taylor | Partial Differential Equations (Basic theory)[END_REF]Chapter 1] or of Benzoni-Gavage [START_REF] Benzoni-Gavage | Calcul différentiel et équations différentielles[END_REF]), we infer that: 1. For any x 0 ∈ U + h (Γ in ), there is unique T (x 0 ) > 0 and T (x 0 ) > 0 such that the map X ∈ C 1 ((-T (x 0 ), T (x 0 )); U + h (Γ in )) is a maximal solution of problem (3.3). If x 0 ∈ Γ in , then there is unique T (x 0 ) > 0 such that the map X ∈ C 1 ([0, T (x 0 )); U + h (Γ in ) ∪ Γ in ) is a maximal solution. 2. For any compact K ⊂ U + h (Γ in ) ∪ Γ in , T K ≡ inf x 0 ∈K T (x 0 ) > 0 and for any compact K ⊂ U + h (Γ in ), T K ≡ inf x 0 ∈K T (x 0 ) > 0. 3. 
For any z ∈ Ũ+ h (Γ in ) there is an open ball B(z) centered at z and δ z > 0 such that X ∈ C 1 ([-δ z , δ z ]× B(z)). In particular, item 1. in the above list implies that the set Ũ+ h (Γ in ) is not empty. With points 2. and 3. at hand, we are ready to show that Ũ+ h (Γ in ) is an open set. Indeed, let z 0 = X(s 0 ; x 0 ), s 0 ∈ (0, T ), T = min{h, T (x 0 )} with x 0 = γ(0), where γ : B (0) → R d , γ(σ) = x 0 + O(σ, a(σ)) T with a ∈ C 2 (B (0); R + ) representing the local description of Γ in in the vicinity of x 0 . In the above O is a fixed orthonormal matrix, B (0) is a d -1 dimensional ball centered at 0, and we may suppose, without loss of generality, that ∇ σ a = 0 in B (0). We may now consider the map Φ : (-T K , T K ) × B (0) (s, σ) → z ∈ Φ (-T K , T K ) × B (0) ⊂ Ũ+ h (Γ in ), z = Φ(s, σ) = X(s; X(s 0 ; γ(σ))), K = {X(s 0 , γ(σ) | σ ∈ B (0)}. We have clearly Φ(0, 0) = z 0 . It is a cumbersome calculation to show that det ∂ s Φ, ∇ σ Φ (0, 0) = 0, see [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF]Section 3.3.6]. We may therefore apply to this map the implicit function theorem and conclude that there is an open set (0, 0) ∈ U ⊂ (-T K , T K ) × B (0), open set z 0 ∈ V ⊂ R d , and a map Ψ ∈ C 1 (V ; U ) such that Φ • Ψ(z) = z for any z ∈ V . In particular, V ⊂ Φ (-T K , T K ) × B (0) . We may therefore extend the boundary data B to Ũ+ h (Γ in ) by setting ˜ B (X(s, x 0 )) = B (x 0 )exp s 0 divũ B (X(z, x 0 ))dz . (3.4) Clearly, ˜ B ∈ W 1,∞ ( Ũ+ h (Γ in )) and div x (˜ B ũB ) = 0 in Ũ+ h (Γ in ), ˜ B | Γ in = B . (3.5) Now we put Ωh = Ω ∪ Γ in ∪ Ũ+ h (Γ in ) and extend ( , u) from (0, T ) × Ω to (0, T ) × Ωh by setting ( , u)(t, x) = (˜ B , ũB )(x), (t, x) ∈ (0, T ) × Ũ+ h (Γ in ). 
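The extension formulas (3.2)-(3.4) can be sketched numerically in one space dimension (a toy example under our own assumptions; the field ũ_B(x) = 1 + x and the datum ϱ_B = 2 below are fabricated for the test): the density is transported along the backward flow (3.3) and weighted by exp ∫ div ũ_B, and the resulting stationary continuity equation div(ϱ̃_B ũ_B) = 0 means in one dimension that the flux ϱ̃_B ũ_B stays constant along the extension.

```python
import math

def extend_density(rho_B, u, div_u, s_max, n=20000):
    """Transport rho_B along the backward flow X'(s) = -u(X), X(0) = 0,
    setting rho(X(s)) = rho_B * exp( int_0^s div u(X(z)) dz ), cf. (3.3)-(3.4).
    Returns the sampled positions X(s) and densities rho(X(s))."""
    h = s_max / n
    x, log_rho = 0.0, math.log(rho_B)
    xs, rhos = [x], [rho_B]
    f = lambda xx: (-u(xx), div_u(xx))   # right-hand side for (X, log rho)
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5*h*k1[0])
        k3 = f(x + 0.5*h*k2[0])
        k4 = f(x + h*k3[0])
        x       += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0
        log_rho += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0
        xs.append(x)
        rhos.append(math.exp(log_rho))
    return xs, rhos

# fabricated data: boundary point x = 0, field u_B(x) = 1 + x, datum rho_B = 2
xs, rhos = extend_density(2.0, lambda x: 1.0 + x, lambda x: 1.0, 1.0)
# stationary continuity div(rho u) = 0 means in 1D: rho*(1+x) is constant
fluxes = [r*(1.0 + x) for x, r in zip(xs, rhos)]
print(max(abs(fl - fluxes[0]) for fl in fluxes))   # ≈ 0 (machine precision)
```

Here the exact flow is X(s) = e^{−s} − 1 with ϱ̃(X(s)) = 2e^{s}, so the flux equals the boundary value ϱ_B u_B(0) = 2 everywhere, as the printed residual confirms.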
Conserving notation ( , u) for the extended fields, we easily verify that ( , u) ∈ L 2 ((0, T ) × Ωh ) × L 2 (0, T ; W 1,2 ( Ωh ; R d )), and that it satisfies the equation of continuity (1.1), in particular, in the sense of distributions on (0, T ) × Ωh . Next, we use the regularization procedure due to DiPerna and Lions [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] applying convolution with a family of regularizing kernels obtaining for the regularized function [ ] ε , ∂ t [ ] ε + div x ([ ] ε u) = R ε a.e. in (0, T ) × Ωε,h , (3.6) where Ωε,h = x ∈ Ωh dist(x, ∂ Ωh ) > ε , R ε ≡ div x ([ ] ε u) -div x ([ u] ε ) → 0 in L 1 loc ((0, T ) × Ωh ) as ε → 0. The convergence of R ε evoked above results from the application of the refined version of the Friedrichs lemma on commutators, see e.g. [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] ∂ t b([ ] ε ) + div x (b([ ] ε )u) + (b ([ ] ε )[ ] ε -b([ ] ε )) div x u = b ([ ] ε )R ε or equivalently, Ωh b([ ] ε (τ ))ϕ(τ )dx - Ωh b([ 0 ] ε )ϕ(0)dx = τ 0 Ωh b([ ] ε )∂ t ϕ + b([ ] ε )u • ∇ x ϕ -ϕ (b ([ ] ε )[ ] ε -b([ ] ε )) div x u dxdt - τ 0 Ωh ϕb ([ ] ε )R ε dx dt for all τ ∈ [0, T ], for any ϕ ∈ C 1 c ([0, T ] × Ωh ), 0 < ε < dist(supp(ϕ), ∂ Ωh ). Thus, letting ε → 0 we get Ωh b( (τ ))ϕ(τ )dx - Ωh b( 0 )ϕ(0)dx (3.7) = τ 0 Ωh b( )∂ t ϕ + b( )u • ∇ x ϕ -ϕ (b ( ) -b( )) div x u dxdt for all τ ∈ [0, T ], for any ϕ ∈ C 1 c ([0, T ] × Ωh ). Now we write Ωh b( )u • ∇ x ϕdx = Ω b( )u • ∇ x ϕ dx + Ũ+ h (Γ in ) b( )u • ∇ x ϕdx, (3.8) where, due to (3.5), the second integral is equal to Γ in ϕb( B )u B • ndS x + Ũ+ h (Γ in ) ϕ(˜ B b (˜ B ) -b(˜ B ))divũ B dx (3.9) We notice that ϕ is vanishing on a neighborhood of ∂Ω \ Γ in and that Γ in is Lipschitz. This justifies the latter integration by parts although Ũ+ h (Γ in ) may fail to be Lipschitz. Now, we insert the identities (3.8-3.9) into (3.7) and let h → 0. 
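The convergence R_ε → 0 of the Friedrichs-type commutator invoked above can be observed numerically (a one-dimensional sketch with fabricated data, illustrating only the commutator [ϱu]_ε − [ϱ]_ε u itself rather than its divergence): for a piecewise-constant ϱ and a Lipschitz u the L^1 norm of the commutator shrinks as the mollification parameter ε decreases.

```python
import math

def mollify(f, x, eps, m=200):
    """[f]_eps(x): convolution with the triangular kernel of width eps."""
    h = 2.0*eps/m
    total = 0.0
    for i in range(m + 1):
        y = -eps + i*h
        w = (1.0 - abs(y)/eps)/eps            # kernel, integrates to 1
        trap = 0.5 if i in (0, m) else 1.0    # trapezoid rule weights
        total += trap*w*f(x - y)*h
    return total

rho = lambda x: 1.0 if x < 0.5 else 3.0       # discontinuous density
u   = lambda x: 2.0 + math.sin(3.0*x)         # Lipschitz velocity

def commutator_l1(eps, n=400):
    """L1 norm of the commutator [rho u]_eps - [rho]_eps u over (1/4, 3/4)."""
    h = 0.5/n
    s = 0.0
    for j in range(n):
        x = 0.25 + (j + 0.5)*h
        s += abs(mollify(lambda z: rho(z)*u(z), x, eps)
                 - mollify(rho, x, eps)*u(x))*h
    return s

big, small = commutator_l1(0.1), commutator_l1(0.025)
print(big, small)    # the norm visibly shrinks with eps
```

The pointwise identity [ϱu]_ε(x) − [ϱ]_ε(x)u(x) = ∫ θ_ε(y) ϱ(x−y)(u(x−y) − u(x)) dy bounds the commutator by Lip(u)·ε·‖ϱ‖_∞, which is the mechanism behind the refined Friedrichs lemma.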
Recalling regularity of (˜ B , ũB ) evoked in (3.5) and summability of ( , u), we deduce finally that Ω b( (τ ))ϕ(τ )dx - Ω b( 0 )ϕ(0)dx = τ 0 Ω b( )∂ t ϕ + b( )u • ∇ x ϕ -ϕ (b ( ) -b( )) div x u dxdt + τ 0 ∂Ω b( B )u B • nϕdS x dt for all τ ∈ [0, T ], for any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )) . This finishes proof of Lemma 3.1. The case of bounded oscillations defect measure If the density is not square integrable, but if it is a weak limit of a sequence whose oscillations defect measure is bounded, one replaces in the case of theory with no inflow/outflow boundary data Lemma 3.1 by another result, see [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]. The goal of this Section is to generalize this result to continuity equation with non homogenous boundary data. We introduce the L p -oscillations defect measure of the sequence δ which admits a weak limit in L 1 (Q T ) as follows osc p [ δ ](Q T ) ≡ sup k≥1 lim sup δ→0 Q T T k ( δ ) -T k ( ) p dxdt , (3.10) where truncation T k ( ) of is defined as follows T k (z) = kT (z/k), T ∈ C 1 [0, ∞), T (z) =            z if z ∈ [0, 1], concave on [0, ∞), 2 if z ≥ 3, (3.11) where k > 1. The wanted result is the following lemma. Lemma 3.2. Suppose that Ω ⊂ R d , d = 2, 3 is a bounded Lipschitz domain and let ( B , u B ) satisfy assumptions (2. 1). Assume that the inflow portion of the boundary Γ in is a C 2 open (d-1)-dimensional manifold. Suppose further that δ in L p ((0, T ) × Ω), p > 1, u δ u in L r ((0, T ) × Ω; R d ), ∇u δ ∇u in L r ((0, T ) × Ω; R d 2 ), r > 1. (3.12) and that osc q [ δ ]((0, T ) × Ω) < ∞ (3.13) for 1 q < 1 -1 r , where ( δ , u δ ) solve the renormalized continuity equation (2.13) (with any b ∈ C 1 [0, ∞) and b having compact support). Then the limit functions , u solve again the renormalized continuity equation (2.13) for any b belonging to the same class. Remark 3.3. 
Let {Ω n } ∞ n=1 , Ω n ⊂ Ω n+1 ⊂ Ω, Ω n ⊂ Ω ∪ Γ in be a family of domains satisfying condition: For all compact K of Ω ∪ Γ in there exists n ∈ N * such that K ⊂ Ω n . Then one can replace in Lemma 3.2 assumption (3.12) by a slightly weaker assumption ∀n ∈ N * , δ in L p ((0, T ) × Ω n ), where ∈ L p (Ω), p > 1 ∀n ∈ N * , u δ u in L r ((0, T ) × Ω n ; R d ), where u ∈ L r (Ω; R d ), ∀n ∈ N * , ∇u δ ∇u in L r ((0, T ) × Ω n ; R d 2 ), where ∇u ∈ L r (Ω; R d 2 ), r > 1. This observation (which is seen from a brief inspection of the proof hereafter) is not needed in the present paper but may be of interest whenever one deals with the stability of weak solutions with respect to perturbations of the boundary in the case of non homogenous boundary data. Proof of Lemma 3.2 The proof of the lemma follows closely with only minor modifications the similar proof when boundary velocity is zero, see [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]. During the process of the proof one shall need Lemma 3.1. This is the only moment when the requirement on the regularity of Γ in is needed. We present however here the entire proof for the sake of completeness. Renormalized continuity eqution (2.13) with b = T k reads Ω T k ( δ )ϕ(τ, x) dx - Ω T k ( 0 )ϕ(0, x) dx = (3.14) τ 0 Ω T k ( δ )∂ t ϕ + T k ( δ )u δ • ∇ x ϕ -ϕ (T k ( δ ) δ -T k ( δ )) div x u δ dxdt - τ 0 Γ in T k ( B )u B • nϕ dS x dt for any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )). Passing to the limit δ → 0 in (3.14), we get Ω T k ( )ϕ(τ, x) dx - Ω T k ( 0 )ϕ(0, x) dx = τ 0 Ω T k ( )∂ t ϕ + T k ( )u • ∇ x ϕ -ϕ(T k ( ) -T k ( ))div x u dxdt - τ 0 Γ in T k ( B )u B • nϕ dS x dt for any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )). 
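The cut-off functions (3.11) used in this limit passage can be realized concretely (our explicit choice of T; any C^1 concave function with the stated properties works): T(z) = z on [0,1], a matching concave quadratic on [1,3], and T ≡ 2 beyond 3, rescaled as T_k(z) = kT(z/k). The sketch also checks the algebraic relation |zT'_k(z) − T_k(z)| ≤ 2z·1_{z≥k} that enters the estimates of this section.

```python
def T(z):
    """C^1, concave, nondecreasing cut-off: T(z) = z on [0,1],
    a matching concave quadratic on [1,3], and T = 2 beyond 3."""
    if z <= 1.0:
        return z
    if z <= 3.0:
        return -0.25*z*z + 1.5*z - 0.25    # T(1)=1, T'(1)=1, T(3)=2, T'(3)=0
    return 2.0

def dT(z):
    if z <= 1.0:
        return 1.0
    if z <= 3.0:
        return -0.5*z + 1.5
    return 0.0

def T_k(z, k):        # truncation (3.11): T_k(z) = k T(z/k)
    return k*T(z/k)

def dT_k(z, k):
    return dT(z/k)

k = 5.0
zs = [0.01*i for i in range(2001)]                                   # sample [0, 20]
assert all(abs(T_k(z, k) - z) < 1e-9 for z in zs if z <= k)          # identity below k
assert all(abs(T_k(z, k) - 2.0*k) < 1e-9 for z in zs if z >= 3.0*k)  # constant above 3k
assert all(T_k(zs[i+1], k) >= T_k(zs[i], k) - 1e-12 for i in range(2000))  # monotone
for z in zs:   # |z T_k'(z) - T_k(z)| <= 2 z 1_{z >= k}
    assert abs(z*dT_k(z, k) - T_k(z, k)) <= (2.0*z if z >= k else 0.0) + 1e-9
```

Since T is concave with T(0) = 0, one also has zT'_k(z) ≤ T_k(z), so the commutator-type quantity zT'_k(z) − T_k(z) is supported in {z ≥ k} and dominated by T_k(z) ≤ 2k ≤ 2z there, which is exactly the relation verified above.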
Since for fixed k > 0, T k ( ) ∈ L ∞ ((0, T ) × R d ) , we can employ Lemma 3.1 (with a slight obvious modification which takes into account the non zero right hand side (T k ( ) -T k ( ))div x u) in order to infer that Ω b M (T k ( ))ϕ(τ, x)dx - Ω b M ( 0 )ϕ(0, x)dx = (3.15) τ 0 Ω b M (T k ( ))∂ t ϕ + b M (T k ( ))u • ∇ x ϕ -ϕ b M (T k ( ))T k ( ) -b M (T k ( )) div x u dxdt + τ 0 Ω ( T k ( ) -)div x u b M (T k ( )) dxdt - τ 0 Γ in b M (T k ( B ))u B • nϕ dS x dt holds with any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )) and any b M ∈ C 1 [0, ∞) with b M having a compact support in [0, M ). Seeing that by lower weak semi-continuity of L 1 norms, T k ( ) → in L 1 ((0, T ) × Ω) as k → ∞ , we obtain from equation (3.15) by using the Lebesgue dominated convergence theorem Ω b M ( )ϕ(τ, x)dx - Ω b M ( 0 )ϕ(0, x)dx = (3.16) τ 0 Ω b M ( )∂ t ϕ + b M ( )u • ∇ x ϕ -ϕ (b M ( ) -b M ( )) div x u dxdt - τ 0 Γ in b M ( B )u B • nϕ dS x dt with any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )) as k → ∞, provided we show that ( T k ( ) -)div x u)b M (T k ( )) L 1 ((0,T )×Ω) → 0 as k → ∞. (3.17) To show the latter relation we use lower weak semicontinuity of L 1 norm, Hölder's inequality, uniform bound of u δ in L r (0, T ; W 1,r (Ω)) and interpolation of L r between Lebesgue spaces L 1 and L q to get ( T k ( ) -)div x u)b M (T k ( )) L 1 ((0,T )×Ω) ≤ max z∈[0,M ] |b M (z)| {T k ( )≤M } |( T k ( ) -)div x u)|dxdt ≤ c sup δ>0 δ T k ( δ ) -δ ) q(r-1)-r r(q-1) L 1 ((0,T )×Ω) lim inf δ→0 δ T k ( δ ) -δ ) q r(q-1) L q ({T k ( )≤M }) . 
We have

‖ϱ_δ T'_k(ϱ_δ) − T_k(ϱ_δ)‖_{L^1((0,T)×Ω)} ≤ 2 sup_{δ>0} ‖ϱ_δ‖_{L^1({ϱ_δ ≥ k})} → 0 as k → ∞

by virtue of the uniform bound of ϱ_δ in L^p((0,T)×Ω) (in the above we have also used the algebraic relation |zT'_k(z) − T_k(z)| ≤ 2z 1_{z≥k}), while

‖ϱ_δ T'_k(ϱ_δ) − T_k(ϱ_δ)‖_{L^q({T_k(ϱ) ≤ M})} ≤ 2 ‖T_k(ϱ_δ)‖_{L^q({T_k(ϱ) ≤ M})}
≤ 2 ( ‖T_k(ϱ_δ) − \overline{T_k(ϱ)}‖_{L^q((0,T)×Ω)} + ‖\overline{T_k(ϱ)} − T_k(ϱ)‖_{L^q((0,T)×Ω)} + ‖T_k(ϱ)‖_{L^q({T_k(ϱ) ≤ M})} ),

where \overline{T_k(ϱ)} denotes the weak limit of the sequence T_k(ϱ_δ). All three terms on the right-hand side are bounded uniformly with respect to k and δ: the first two in view of assumption (3.13) and the lower weak semi-continuity of the L^q norm, the third by M|Q_T|^{1/q}. Relation (3.17) therefore follows, and the proof of Lemma 3.2 is complete.

4 Approximate problem

Our goal is to construct the solutions whose existence is claimed in Theorem 2.4. To this end, we adopt an approximation scheme based on a pressure regularization (small parameter δ > 0) and on adding artificial viscosity terms to both (1.1) and (1.2) (small parameter ε > 0). This is by now a standard procedure. Moreover, mostly for technical reasons, we regularize the momentum equation by adding a convenient dissipative monotone operator with small parameter ε > 0. (This step is not needed when treating zero boundary data, but it seems to be necessary if the boundary velocity is non-zero.) In sharp contrast with [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF], we consider for the new system a boundary value problem on Ω with non-homogeneous boundary conditions for the velocity and a convenient nonlinear Neumann-type boundary condition for the density. The approximate problem is formulated in Section 4.1, and the crucial theorem on its solvability and estimates is stated in Section 4.2 (see Lemma 4.1). Lemma 4.1 is proved by a standard combination of a fixed-point argument and the Galerkin method, see Section 4.3. After this process we have at our disposal a convenient solution of the approximate problem. Once the sequence of approximate solutions is available, the limit passages are carried out in the order

1. ε → 0 (see Section 5),
2. δ → 0 (see Section 6).
Approximating system of equations The approximate problem reads: ∂ t -ε∆ x + div x ( u) = 0, (4.1) (0, x) = 0 (x), (-ε∇ x + u) • n| I×∂Ω = B u B • n if [u B • n](x) ≤ 0, x ∈ ∂Ω, u B • n if [u B • n](x) > 0, x ∈ ∂Ω (4.2) ∂ t ( u) + div x ( u ⊗ u) + ∇ x p δ ( ) = div x S(∇ x u) -ε∇ x • ∇ x u + εdiv |∇ x (u -u ∞ )| 2 ∇ x (u -u ∞ ) (4.3) u(0, x) = u 0 (x), u| I×∂Ω = u B , (4.4) with positive parameters ε > 0, δ > 0, where we have denoted p δ ( ) = p( ) + δ β , β > max{γ, 9/2} (4.5) and where u ∞ is an extension of u B from Lemma 2.2. The exact choice of β is irrelevant from the point of view of the final result provided it is sufficiently large. It is guided by convenience in proofs; it might not be optimal. Anticipating the future development, we denote: H δ ( ) = H( ) + δH (β) ( ), H δ ( ) = H( ) + δH (β) ( ), H (β) ( ) = 1 z β-2 dz = 1 β -1 β (4.6) and E δ ( |r) = E( |r) + δE (β) ( |r), E δ ( |r) = E( |r) + δE (β) ( |r), (4.7) E (β) ( |r) = H (β) ( ) -[H (β) ] (r)( -r) -H (β) (r). Generalized solutions of the approximate problem Definition 4.1 A couple ( ε , u ε ) and associated tensor field Z ε is a generalized solution of the sequence of problems (4.1-4.4) ε>0 iff the following holds: 1. It belongs to the functional spaces: ε ∈ L ∞ (0, T ; L β (Ω)) ∩ L 2 (0, T ; W 1,2 (Ω)), 0 ≤ ε a.a. in (0, T ) × Ω, (4.8) u ε ∈ L 2 (0, T ; W 1,2 (Ω; R 3 )) ∩ L 4 (0, T ; W 1,4 (Ω; R 3 )), u ε | I×∂Ω = u B , Z ε → 0 in L 4/3 (Q T ; R 3 ) as ε → 0. 2. Function ε ∈ C weak ([0, T ], L β (Ω) ) and the integral identity Ω ε (τ, x)ϕ(τ, x) dx - Ω 0 (x)ϕ(0, x) dx = (4.9) τ 0 Ω ε ∂ t ϕ + ε u ε • ∇ x ϕ -ε∇ x ε • ∇ x ϕ dxdt - τ 0 Γ in B u B • nϕ dS x dt holds for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )); 3. 
Function ε u ε ∈ C weak ([0, T ], L 2β β+1 (Ω; R 3 )) , and the integral identity Ω ε u ε (τ, •) • ϕ(τ, •) dx - Ω 0 u 0 (•)ϕ(0, •) dx = - τ 0 Ω Z ε : ∇ x ϕ dxdt (4.10) + τ 0 Ω ( ε u ε ∂ t ϕ + ε u ε ⊗ u ε : ∇ x ϕ + p δ ( ε )div x ϕ -ε∇ x ε • ∇ x u ε • ϕ -S(∇ x u ε ) : ∇ x ϕ) dxdt holds for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R 3 ). Energy inequality Ω 1 2 ε |u ε -u ∞ | 2 + H δ ( ε ) + Σ 2 2 ε (τ ) dx (4.11) +δ β -1 2 τ 0 Γ in β ε |u B • n|dS x dt + δ 1 β -1 τ 0 Γout β ε |u B • n|dS x dt + Σ 2 τ 0 Γ in 2 ε |u B • n|dS x dt + τ 0 Ω S(∇ x (u ε -u ∞ )) : ∇ x (u ε -u ∞ ) + εH δ ( ε )|∇ x ε | 2 + εΣ|∇ x ε | 2 +ε|∇ x (u ε -u ∞ )| 4 dxdt ≤ Ω 1 2 0 |u 0 -u ∞ | 2 + H δ ( 0 ) + Σ 2 2 0 dx -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 0 Γ in H δ ( B )u B • ndS x dt + Σ τ 0 Γ in ε B |u B • n|dS x dt - τ 0 Ω p δ ( ε )divu ∞ dxdt - τ 0 Ω ε u ε • ∇ x u ∞ • (u ε -u ∞ ) dxdt +ε τ 0 Ω ∇ x ε • ∇ x (u ε -u ∞ ) • u ∞ dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x (u ε -u ∞ ) dxdt - Σ 2 τ 0 Ω 2 ε divu ε dxdt, A = 1 β -1 , B = 2 β-1 β -1 β B holds for a.a. τ ∈ (0, T ), any Σ > 0 with some continuous extension u ∞ of u B in the class (2.11). In the above, H δ is defined in (4.6), H is defined in (2.10) and -∞ < E := inf >0, B <r< B E(r| ), where E(•|•) is defined in (2.4). (4.12) The main achievement of this section is existence theorem for approximating problem (4.1-4.4). It is announced in the next lemma. u 0 ∈ L 2 (Ω), 0 ∈ W 1,2 (Ω), 0 < ≤ 0 ≤ < ∞. (4.13) 0 < B ≤ B ≤ B < ∞ (4.14) Then for any continuous extension u ∞ of u B in class (2.11) there exists a generalized solution ( , u ) and Z ε to the sequence of approximate problems (4.1 -4.4) ε∈(0,1) -which belongs to functional spaces (4.8), satisfies weak formulations (4.9-4.10) and verifies energy inequality (4.11) -with the following extra properties: (i) In addition to (4.8) it belongs to functional spaces: ε ∈ L 5 3 β (Q T ), √ ε , β 2 ∈ L 2 (I, W 1,2 (Ω)), ∂ t ∈ L 4/3 (Q T ), ∇ 2 ∈ L 4/3 (Q T ). 
( 4.15) (ii) In addition to the weak formulation (4.9), the couple ( ε , u ε ) satisfies equation (4.9) in the strong sense, meaning it verifies equation (4.1) with ( ε , u ε ) a.e. in Q T , boundary identity (4.2) with ( ε , u ε ) a.e. in (0, T ) × ∂Ω and initial conditions in the sense lim t→0+ ε (t) -0 L 4/3 (Ω) = 0. (iii) The couple ( ε , u ε ) satisfies identity ∂ t b( ε ) + εb ( ε )|∇ x ε | 2 -εdiv x (b ( ε )∇ x ε ) + div x (b( ε )u ε ) + [b ( ε ) ε -b( ε )] div x u ε = 0 (4.16) a.e. in (0, T ) × Ω with any b ∈ C 2 [0, ∞), where the space-time derivatives have to be understood in the sense a.e. Remark 4.2. Identity (4.16) holds in the weak sense Ω b( ε (τ ))ϕ(τ ) dx - Ω b( 0 )ϕ(0) dx = τ 0 ∂Ω εb ( ε )∇ x ε -b( ε )u ε • ndS x dt + τ 0 Ω b( ε )∂ t ϕ + (b( ε )u ε -εb ( ε )∇ x ε ) • ∇ x ϕ -ϕ εb ( ε )|∇ x ε | 2 + ( ε b ( ε ) -b( ε ))divu ε dxdt with any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × Ω) with any b whose growth (and that one of its derivatives) in combination with (4.15) guarantees b( ε ) ∈ C weak ([0, T ]; L 1 (Ω)), existence of traces and integrability of all terms appearing at the r.h.s. Solvability of the approximating equations This section is devoted to the proof of Lemma 4.1. We adopt the nowadays standard procedure based on computing the approximate density in terms of u in (4.1), (4.2), calculating u via a Galerkin approximation, and applying a fixed point argument, see [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF]Chapter 7]. We proceed in several steps. These steps are described in the following subsections. Construction of a (strong) solution of problem (4.1-4.2) Here we consider the problem (4.1-4.2), with fixed ε > 0 and with fixed sufficiently regular u. We may suppose without loss of generality that u B • n on Γ in , 0 on ∂Ω \ Γ in ≡ v ∈ C 1 (∂Ω), B v ≡ g ∈ C 1 (∂Ω). 
(4.17) Now, problem (4.1-4.2) may be rewritten as parabolic problem: ∂ t -ε∆ + div( u) = 0 in (0, T ) × Ω, (4.18) -ε∇ • n + v = g in (0, T ) × ∂Ω, (0) = 0 in Ω. Applying to problem (4.18) the maximal regularity theory for parabolic systems, we get in particular: Lemma 4.3. Suppose that Ω is a bounded domain of class C 2 and assume further that 0 ∈ W 1,2 (Ω), u ∈ L ∞ (0, T ; W 1,∞ (Ω)), u| (0,T )×∂Ω = u B , v, g ∈ C 1 (∂Ω) are given by (4.17). Then we have: 1. The parabolic problem (4.18) admits a unique solution in the class 3. In the sequel we denote = S 0 , B ,u B (u) ≡ S(u). Then ∈ L 2 (0, T ; W 2,2 (Ω)) ∩ W 1,2 (0, T ; L 2 (Ω)). (4.19) The following estimates hold: There is c = c(T, ε, |Ω|, |∂Ω| 2 , K) > 0 (independent of u, 0 , B , u B ) such that (τ ) 2 L 2 (Ω) ≤ c 0 2 L 2 (Ω) + τ 0 ∂Ω 2 B |v|dS x dt , (4.20) τ 0 ∇ x 2 L 2 (Ω) dt + τ 0 ∂Ω 2 |v|dSdt ≤ c 0 2 L 2 (Ω) + τ 0 ∂Ω 2 B |v|dSdt , for all τ ∈ [0, T ], provided u L ∞ (Q T ) + divu L ∞ (Q T ) ≤ K. (4.21) 2. Let ≤ 0 (x) ≤ for a. a. x ∈ Ω, ≤ B (x) ≤ for all x ∈ Γ in . Then exp - τ 0 divu(s) L ∞ (Ω) ds ≤ (τ, x) ≤ exp τ 0 divu(s) L ∞ (Ω) ds , (4.22 [S 0 , B ,u B (u 1 ) -S 0 , B ,u B (u 2 )](τ ) L 2 (Ω) ≤ Γ u 1 -u 2 L ∞ (0,τ ;W 1,∞ (Ω)) (4.23) with some number ). The reader wishing to read more about the maximal regularity to parabolic equations is referred to the monograph [START_REF] Denk | Fourier multipliers and problems of elliptic and parabolic type[END_REF]. Γ = Γ(T, ε, |Ω|, |∂Ω| 2 , K, 0 L 2 (Ω) , 2 B v L 1 (0;T ;L 1 (∂Ω)) ) > 0, provided both u 1 , u 2 verify (4. Proof of statement 2. Estimates (4.20) is obtained testing (4.18) by and using first the Hölder and Young inequalities and next the Gronwall inequality. 
Indeed, this testing gives 1 2 Ω 2 (τ ) dx - τ 0 ∂Ω 2 vdS x dt + ε τ 0 Ω |∇ x | 2 dxdt = 1 2 Ω 2 0 dx - τ 0 ∂Ω B vdS x dt - τ 0 Ω ∇ x • u + 2 divu dxdt, where 1 2 τ 0 ∂Ω B vdS x dt ≤ 1 2 τ 0 ∂Ω 2 |v|dS x dt + 1 2 τ 0 ∂Ω 2 B |v|dS x dt and τ 0 Ω ∇ x •u + 2 divu dxdt ≤ ε 2 τ 0 Ω |∇ x | 2 dxdt+ τ 0 1 ε u 2 L ∞ (Ω) + divu L ∞ (Ω) Ω 2 dxdt; whence application of the standard Gronwall inequality yields (τ ) 2 L 2 (Ω) ≤ 0 2 L 2 (Ω) + τ 0 ∂Ω 2 B |v|dS x dt exp (K + 1 ε K 2 )τ which is first inequality in (4.20). Once this inequality is known, the second inequality is immediate. Proof of statement 3. We shall proceed to the proof of upper bound (4.22). To this end we define R(t) = exp t 0 divu(s) L ∞ (Ω) ds, i.e ∂ t R + div(Ru) ≥ 0, R(0) = . We further set ω(t, x) = (t, x) -R(t), so that ω satisfies ∂ t ω -ε∆ω + div(ωu) ≤ 0, in (0, T ) × Ω, -ε∂ω • n + ωv = ( B -R)v in (0, T ) × ∂Ω. Testing the latter inequality by ω + (the positive part of ω) we get while reasoning in the same way as in the proof of estimate (4.20), 1 2 Ω |ω + | 2 (τ ) dx + τ 0 ∂Ω |ω + | 2 |v|dS x dt + ε τ 0 Ω |∇ x ω + | 2 dxdt ≤ 1 2 Ω |ω + | 2 (0) dx + τ 0 ∂Ω ω + ( B -R)|v|dS x dt - τ 0 Ω ∇ x ω + • uω + + |ω + | 2 divu dxdt. Now we employ the fact that the first term at the right hand side is zero, while the second term is non positive, and handle the last term as in the proof of (4.20) to finally get ω + (τ ) L 2 (Ω) = 0 which yields the upper bound in (4.22). To derive the lower bound we shall repeat the same procedure with R(t) = exp - t 0 divu(s) L ∞ (Ω) ds and ω(t, x) = R(t) -(t, x). Proof of statement 4. The difference η ≡ 1 -2 , where i = S 0 , B ,u B (u i ), verifies equation: ∂ t η -ε∆η = F a.e. in Q T , -ε∇ x η • n + ηv = 0 in (0, T ) × ∂Ω with zero initial data, where F = -1 div(u 1 -u 2 ) -∇ x 1 • (u 1 -u 2 ) -ηdivu 2 -∇ x η • u 2 , | Ω F η dx| ≤ 1 2 u 1 -u 2 2 W 1,∞ (Ω) + 1 2 1 2 W 1,2 (Ω) + K + 1 ε K 2 η 2 L 2 (Ω) + ε 2 ∇ x η 2 L 2 (Ω) . 
Therefore, by the same token as before, after testing the above equation by η, we get η(τ ) 2 L 2 (Ω) + ε 2 τ 0 ∇ x η 2 L 2 (Ω) dt + τ 0 ∂Ω η 2 |v|dS x dt ≤ τ 2 u 1 -u 2 2 W 1,∞ (Qτ ) + τ 0 1 2 1 2 W 1,2 (Ω) + K + 1 ε K 2 η 2 L 2 (Ω) dt; whence Gronwall lemma and bounds (4.20) yield the desired result (4.23). Galerkin approximation for approximate problem (4.1-4.4) We start by introducing notations and gathering some preliminary material: 1. We denote X = span{Φ i } N i=1 where B := {Φ i ∈ C ∞ c (Ω) | i ∈ N * } is an orthonormal basis in L 2 (Ω; R 3 ) (4. 24) a finite dimensional real Hilbert space with scalar product (•, •) X induced by the scalar product in L 2 (Ω; R 3 ) and • X the norm induced by this scalar product. We denote by P N the orthogonal projection of L 2 (Ω; R 3 ) to X. Since X is finite-dimensional, norms on X induced by W k,p -norms, k ∈ N , 1 ≤ p ≤ ∞ are equivalent. In particular, there are universal numbers (depending solely on the dimension N of X) 0 < d < d < ∞ such that d v W 1,∞ (Ω) ≤ v X ≡ v L 2 (Ω) ≤ d v W 1,∞ (Ω) , for all v ∈ X. (4.25) 2. Let g ∈ L 1 (Q T ), infess (t,x)∈Q T g ≥ a > 0. We define for a.a. t ∈ (0, T ), M g(t) ∈ L(X, X), < M g(t) Φ, Ψ > X := Ω g(t, x)ΦΨ dx (4.26) With this definition at hand, we easily see that there holds for a.a. t ∈ (0, T ), M g(t) L(X,X) ≤ δ Ω g(t, x) dx, < M g(t) Φ, Φ > X ≥ a Φ 2 X , (4.27) M -1 g(t) (t) ∈ L(X, X), M -1 g(t) (t) L(X,X) ≤ δ a , M g 1 (t) -M g 2 (t) L(X,X) ≤ δ g 1 (t) -g 2 (t) L 1 (Ω) , (4.28) M -1 g 1 (t) -M -1 g 2 (t) L(X,X) ≤ δ a 2 g 1 (t) -g 2 (t) L 1 (Ω) . In the above formulas δ is a positive universal number dependent solely of N . Moreover, if in addition g ∈ C([0, T ]; L 1 (Ω)) then M g(•) , M -1 g(•) ∈ C([0, T ]; L(X, X)). (4.29) Finally, if in addition ∂ t g ∈ L 1 (Q T ), then ∂ t M g(.) (t) = M ∂tg(t) ∈ L(X, X), ∂ t M -1 g(.) (t) = -M -1 g(t) M ∂tg(t) M -1 g(t) ∈ L(X, X) (4.30) for a.a. t ∈ (0, T ). 
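The operator M_g of (4.26) and the bounds (4.27) can be checked on a toy Galerkin space (our own example: N = 3 sine modes on (0,1) and a fabricated density g ≥ a > 0): the matrix of M_g in the orthonormal basis is ∫ g Φ_i Φ_j dx, and coercivity ⟨M_g Φ, Φ⟩ ≥ a‖Φ‖^2 follows from g ≥ a together with orthonormality of the basis.

```python
import math

N, a = 3, 0.5
def phi(i, x):                     # orthonormal basis of L^2(0,1)
    return math.sqrt(2.0)*math.sin((i + 1)*math.pi*x)

def g(x):                          # density with g >= a > 0
    return a + x*x

def mass_matrix(dens, n=4000):     # M_ij = int_0^1 dens*phi_i*phi_j dx, cf. (4.26)
    M = [[0.0]*N for _ in range(N)]
    h = 1.0/n
    for j in range(n):
        x = (j + 0.5)*h            # midpoint quadrature
        w = dens(x)*h
        vals = [phi(i, x) for i in range(N)]
        for r in range(N):
            for c in range(N):
                M[r][c] += w*vals[r]*vals[c]
    return M

M = mass_matrix(g)
cvec = [1.0, -2.0, 0.5]            # coefficients of a trial element Phi of X
quad = sum(M[r][c]*cvec[r]*cvec[c] for r in range(N) for c in range(N))
norm2 = sum(c*c for c in cvec)     # = ||Phi||_{L^2}^2 by orthonormality
print(quad >= a*norm2)             # coercivity <M_g Phi, Phi> >= a ||Phi||^2: prints True
```

The same quadrature also makes the Lipschitz bound ‖M_{g_1} − M_{g_2}‖ ≤ c‖g_1 − g_2‖_{L^1} from (4.28) plausible, since each entry is linear in the density.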
We shall look for T ∈ (0, T ] and u N = u ∞ + v N , v N ∈ C([0, T ]; X), (4.31) satisfying Ω ∂ t ( N u N ) • Φ dx = Ω divS(∇ x u N )+εdiv |∇ x (u N -u ∞ )| 2 ∇ x (u N -u ∞ ) (4.32) -∇ x p δ ( N ) -div( N u N ⊗ u N ) -ε∇ x N • ∇ x u N dx, or equivalently, omitting index N at v N , u N and N Ω v(t) • Φ dx - Ω 0 v 0 Φ dx = t 0 Ω divS(∇ x u)+εdiv |∇ x (u -u ∞ )| 2 ∇ x (u -u ∞ ) (4.33) -∇ x p δ ( ) -div( u ⊗ u) -ε∇ x • ∇ x u-∂ t u ∞ • Φ dxdt, where v 0 = u 0 -u ∞ , (t) = S(u)(t), Φ ∈ X, t ∈ (0, T ) with S being defined in item 3. of Lemma 4.3. The latter equation is equivalent to the integral formulation v(t) = T(v) := M -1 S(u)(t) P ( 0 v 0 ) + M -1 S(u)(t) t 0 P N(S(u)(s), u(s)) ds , (4.34) where N( , u) = divS(∇ x u)+εdiv |∇ x (u -u ∞ )| 2 ∇ x (u -u ∞ ) -∇ x p δ ( ) -div( u ⊗ u) -ε∇ x • ∇ x u-∂ t u ∞ and where here and in the sequel, P states for P N . Clearly, N(S(u), u) ∈ L 2 (Q T ) due to (4.19), (4.22), (4.31). This implies that 1) the map t → P N(S(u)(t), u(t)) ∈ L 2 (0, T ; X), 2) the map t → t 0 P N(S(u)(s), u(s))ds ∈ C([0, T ]; X), 3) operator T maps C([0, T ]; X) to C([0, T ]; X), 4) we have a bound P N(S(u), u) X ≤ d 2 (K, , T ), provided u verifies (4.21). (4.35) 5) Likewise, using in addition (4.23), P N(S(u 1 ), u 1 ) -P N(S(u 2 ), u 2 ) X ≤ d 3 (K, , T ) u 1 -u 2 X , Tv 1 -Tv 2 C([0,t];X) ≤ t Γδd 2 ( ) 2 + δd 3 e Kt v 1 -v 2 C([0,t];X) provided u ∞ + v i verifies (4.21). Now we take K > 0 sufficiently large and T sufficiently small, so that K d 2 ≥ max δ e KT P v 0 X + d 2 (K, , T )T , d u ∞ W 1,∞ (Ω) and T Γδd 2 ( ) 2 + δd 3 e KT < 1 With this choice, we easily check employing (4.25) that u ∞ + v verifies condition (4.21), provided v C([0,T ];X ≤ K d 2 . We have thus showed that T is a contraction maping from the (closed) ball B(0; Kd/2) ⊂ C([0, T ], X) into itself. It therefore admits a unique fixed point v ∈ C([0, T ], X), and u = u ∞ + v solves problem (4.33). Denote now T = {T ∈ (0, T ) | problem (4. 
33) admits a solution v ∈ C([0, T ]; X)}, and set T max = sup T. We have already proved that T is not empty. In what follows we prove that T max = T . In fact, if T max < T , then necessarily lim T →Tmax v C([0,T ];X) → ∞. (4.37) We shall show that (4.37) cannot happen. To this end we derive in the next section the uniform estimates. Uniform bounds independent of the Galerkin approximation We first integrate equation (4.1) ( N ,u N ) in order to obtain the conservation of total mass. Omitting subscript N in order to simplify notatio, we get Ω (τ ) dx + τ 0 Γout u B • ndS x dt = Ω 0 dx + τ 0 Γ in |u B • ∂ t H δ ( ) + εH δ ( )|∇ x | 2 -εdiv H δ ( )∇ x + div(H δ ( )u) + p δ ( )divu = 0 a.e. in Q τ , τ ∈ (0, T ), or, after using boundary conditions (4.18), ∂ t Ω H δ ( ) dx + ε Ω H δ ( )|∇ x | 2 dx + ∂Ω H δ ( )( B -)v + H δ ( )u B • n dS x = - Ω p δ ( )divu dx or further, after employing the definition (4.17) of v, ∂ t Ω H δ ( ) dx+ε Ω H δ ( )|∇ x | 2 dx+ Γ in H δ ( B )-H δ ( )( B -)-H δ ( ) |v|dS x + Γout H δ ( )u B •ndS x = - Ω p δ ( )divu dx - Γ in H δ ( B )u B • ndS x . (4.39) Next, we deduce from (4.32) τ 0 Ω ∂ t ( v) • v -u ⊗ u : ∇ x v dxdt + τ 0 Ω S(∇ x u) : ∇ x v dx +ε τ 0 Ω |∇ x v| 4 dxdt - τ 0 Ω p δ ( )divv dxdt + τ 0 Ω ε∇ x • ∇ x u • v dxdt = 0, where by virtue of (4.18) (after several integrations by parts and recalling that u = u ∞ + v), Ω ∂ t ( u) • v -u ⊗ u : ∇ x v dx = Ω ∂ t v 2 + 1 2 ∂ t v 2 + ∂ t u ∞ • v + 1 2 div( u)v 2 -u • ∇ x v • u ∞ dx = Ω 1 2 ∂ t ( v 2 ) + ε∆ u ∞ • v + ε 2 ∆ v 2 -div( u)u ∞ • v -u • ∇ x v • u ∞ dx = ∂ t Ω 1 2 v 2 dx + Ω u • ∇ x u ∞ • v -ε∇ x • ∇ x u • v -ε∇ x • ∇ x v • u ∞ dx. 
Consequently, Ω 1 2 v 2 + H δ ( ) (τ ) dx + τ 0 Γ in E δ ( B | )|u B • n|dS x dt + τ 0 Γout H δ ( )|u B • n|dS x dt (4.40) +ε τ 0 Ω |∇ x v| 4 dxdt + ε τ 0 Ω H δ ( )|∇ x | 2 dxdt + τ 0 Ω S(∇ x v) : ∇ x v dxdt ≤ Ω 1 2 0 v 2 0 + H δ ( 0 ) dx - τ 0 Γ in H δ ( B )u B • ndS x dt + τ 0 Ω -p δ ( )divu ∞ -S(∇ x u ∞ ) : ∇ x v -u • ∇ x u ∞ • v + ε∇ x • ∇ x v • u ∞ dxdt, where, recall, H δ , E δ are defined in (4.6), (4.7), respectively. Further, we test equation (4.1) ( N ,u N ) by N in order to get, after several integrations by parts, 1 2 Ω 2 (τ ) dx + 1 2 τ 0 ∂Ω 2 |u B • n|dS x dt + ε τ 0 Ω |∇ x | 2 dxdt (4.41) = 1 2 Ω 2 0 dx + τ 0 Γ in B |u B • n|dS x dt - 1 2 τ 0 Ω 2 divudxdt where as in (4.40) we have omitted indexes N . Now, our goal is to derive from (4.40) and (4.41) useful bounds for the sequence ( N , u N ). The coefficients of derived bounds depend tacitly on the parameters of the problem (as γ, β, µ, λ, a, b, T, Ω, functions p, p) and on "data", where "data" stands for Ω 1 2 0 v 2 0 + H δ ( 0 ) dx, u ∞ W 1,∞ (Ω) , B , B ≡ B C(∂Ω) , H, E. In particular, they are always independent N . If they depend on ε or δ this dependence is also always indicated in their argument as well as the dependence on other quantities (notably T ) if it is necessary for the understanding of proofs. The coefficients may take different values even in the same formulas. Before attacking estimates we shall list several consequences of structural assumptions (2.2), (4.5) and formulas (2.3), (2.4), (4.6), (4.7) needed for the derivation of those bounds. A brief excursion to (2.3) and (4.6) yields -∞ < - δ β -1 + H ≤ H δ . ( 4 E (β) (r| ) ≥ 1 2 β - 2 β-1 β -1 r β , where E (β) is defined in (4.7). Finally, using namely conditions p (0) = 0 (cf. last line in formula (2.2)), we get E := inf >0; B <r< B E(r| ) > -∞ Putting together these three observations we infer (see (4.6-4.7) for the notation), E δ ( B | ) ≥ δ 2 β + E -δ 2 β-1 β -1 β B . (4.43) 3. 
Recalling again definition (4.7) of E δ , E δ we find identity H δ ( ) = E δ ( | ) + D δ ( , ), (4.44) where D δ ( , ) = H δ ( )( -) + H δ ( ) + H( ) , = 1 |Ω| Ω 0 dx. Thanks to (4.38), sup t∈(0,T ) Ω |D( , )| dx ≤ c(data), (4.45) where we have also employed regularity of p near zero to show that the map → H( ) is bounded near 0. 4. We have E δ ( |r) = E( |r) + δE (β) ( |r), where, due to convexity of H and H (β) on (0, ∞), E δ enjoys the following coercivity property: There is c = c( ) > 0, such that E δ ( | ) ≥ c δ β 1 Ores ( ) + 1 Ores ( ) (4.46) +1 Oess ( )( -) 2 + γ 1 Ores ( ) + 1 Ores ( ) + 1 Oess ( )( -) 2 for all ≥ 0, where O ess = ( 1 2 , 2 ) while O res = [0, ∞) \ O ess , and where we have used the growth condition for p from (2.2). 5. We deduce from the Korn and Poincaré inequalities, v 2 W 1,2 (Ω) ≤ c S(∇ x v) : ∇ x v L 1 (Ω) . ( 4 .47) 6. There holds H δ = H + δH (β) + H, where H ≥ 0, [H (β) ] ( ) = β β-2 (4.48) T 0 Ω |H ( )||∇ x | 2 dxdt ≤ sup >0 p ( ) T 0 Ω |∇ x | 2 dxdt, (4.49) where sup >0 p ( ) < ∞ namely thanks to assumption p (0) = 0 (cf. again last line in formula (2.2)). 7. The absolute value of the right hand side of inequality (4.40) is bounded by 1 + α α c(data, δ) τ 0 Ω E δ ( | ) + 1 2 v 2 dxdt + αε ∇ x 2 L 2 (Qτ ) (4.50) + α + ε c(data, T ) α ∇ x v 2 L 2 (Qτ ) + c(data) α , with arbitrary α > 0, where we have used several times the Hölder and Young inequalities, and coercivity (4.46) of E δ ( | ). 8. By the same token, the absolute value of the right hand side of equality (4.41) is bounded by 1 δ 1 + α α c(data, T ) + c 1 + α α 1 δ τ 0 Ω E δ ( | ) dxdt (4.51) +αδ τ 0 Γ in β |u B • n|dS x dt + α ∇ x v 2 L 2 (Qτ ) with arbitrary α > 0. Next, we multiply equation (4.41) by a positive number Σ and add it to inequality (4.40). 
With notably (4.42-4.43) at hand, we deduce from this operation the following inequality which will be our departure point: Ω 1 2 v 2 + H δ ( ) + Σ 2 2 (τ ) dx (4.52) +δ β -1 2 τ 0 Γ in β |u B • n|dS x dt + δ 1 β -1 τ 0 Γout β |u B • n|dS x dt + Σ 2 τ 0 ∂Ω 2 |u B • n|dS x dt + τ 0 Ω S(∇ x v) : ∇ x v + εH δ ( )|∇ x | 2 + εΣ|∇ x ε | 2 + ε|∇ x v| 4 dxdt ≤ Ω 1 2 0 v 2 0 + H δ ( 0 ) + Σ 2 2 0 dx -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 0 Γ in H δ ( B )u B • ndS x dt + Σ τ 0 Γ in B |u B • n|dS x dt - τ 0 Ω p δ ( )divu ∞ dxdt - τ 0 Ω u • ∇ x u ∞ • v dxdt +ε τ 0 Ω ∇ x ε • ∇ x v • u ∞ dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x v dxdt - Σ 2 t∈(0,T ) Ω v 2 (t) dx ≤ K(data, T, δ) (4.53) v L 2 (0,T ;W 1,2 (Ω)) ≤ K(data, T, δ), (4.54) sup t∈(0,T ) Ω E δ ( | )|(t) dx ≤ L(data, T, δ), (4.55) ε T 0 Ω H δ ( )|∇ x | 2 dx ≤ L(data, T, δ), (4.56) ε T 0 Ω |∇ x | 2 dx ≤ L(data, T, δ), (4.57) ≥ exp - T 0 ( u ∞ (s) W 1,∞ (Ω) + c v(s) W 1,2 (Ω) )ds ≥ K 1 ( , T, data). Coming back to (4.53), and using v L 2 (Ω) ≥ d v W 1,∞ (Ω) , we finally obtain v C([0,T ];W 1,∞ (Ω)) ≤ 1 d K K 1 for any T < T max . This contradicts (4.37). We have thus proved that T max = T . From ( N = S(u N ), u N = u ∞ + v N ) of Galerkin solutions to the problem (4.33): N |u N | 2 L ∞ (I,L 1 (Ω)) ≤ L(data, δ), (4.60) u N L 2 (I,W 1,2 (Ω)) ≤ L(data, δ), (4.61) N L ∞ (I,L β (Ω)) ≤ L(data, δ), (4.62) ε ∇ N 2 L 2 (Q T ) + ε ∇( β/2 N ) 2 L 2 (Q T ) ≤ L(data, δ), (4.63) |u B • n| 1/β L β ((0,T )×∂Ω) ≤ L(data, δ), (4.64) ε u N -u ∞ 4 L 4 (0,T ;W 1,4 (Ω)) ≤ L(data, δ). (4.65) From these bounds we find by Hölder inequalities, Sobolev embeddings and interpolation N u N L ∞ (I,L 2β β+1 (Ω)) + N u N L 2 (I,L 6β β+6 (Ω)) ≤ L(data, δ), (4.66) N |u N | 2 L 2 (I,L 6β 4β+3 (Ω)) ≤ L(data, δ), (4.67) N L 5 3 β (Q T ) ≤ L(data, δ, ε), (4.68) div( N u N ) L 4/3 (Q T ) ≤ L(data, δ, ε). ( 4 ∂ t N L 4/3 (Q T ) + N L 4/3 (0,T ;W 2,4/3 (Ω)) ≤ L(data, δ, ε). 
(4.70) The above bounds imply, via several classical convergence theorems, existence of a chosen subsequence (not relabeled) whose limits and way of convergence will be specified in the following text. We deduce from (4.70), N in L 4/3 (0, T ; W 2,4/3 (Ω)) and in L 2 (0, T ; W 1,2 (Ω)), ∂ t N ∂ t in L 4/3 (Q T ) (4.71) and also in addition with help of (4.62-4.63) by Lions-Aubin Lemma, N → in L 2 (Q T ), ∇ x N → ∇ x in L 4/3 (Q T ); whence, in particular, N → a.e. in Q T and in L p (Q T ), 1 ≤ p < 5 3 β, (4.72) ∇ x N → ∇ x a.e. in Q T and in L p (Q T ), 1 ≤ p < 2, (4.73) where we have used (4.68) and (4.63). Consequently, in particular p δ ( N ), H δ ( N ) → p δ ( ), H δ ( ) in L p/β (Q T ), β < p < 5 3 β. (4.74) Further, due to (4.63), trace theorem and (4.64) N in L 2 ((0, T ) × ∂Ω), N |u B • n| 1/β |u B • n| 1/β in L β ((0, T ) × ∂Ω). (4.75) Next, we derive from (4.18) written with ( N , u N ) that the sequences of functions t → Ω N (t)ϕ dx are for any ϕ ∈ C 1 c (Ω) uniformly bounded and equi-continuous in C[0, T ]; whence the Arzela-Ascoli theorem in combination with the separability of L β (Ω) furnishes, N → in C weak ([0, T ], L β (Ω)). (4.76) Estimate (4.65) yields u N u (weakly) in L 4 (0, T ; W 1,4 (Ω)), (4.77) and in combination with (4.72) N u N u e.g. in L 2 (0, T ; L 6β β+6 (Ω)) (4.78) and finally, together with (4.69), div( N u N ) div( u) in L 4/3 Q T ). (4.79) Estimate (4.65) furnishes further ε|∇ x (u N -u ∞ )| 2 ∇ x (u N -u ∞ ) Z ≡ Z ε weakly in L 4/3 (Q T ; R 9 ), (4.80) where Z ε L 4/3 (Q T ) → 0 as ε → 0. Second convergence in (4.73) and (4.77) yield ∇ x N • ∇ x u N ∇ x • ∇ x u in L 4/3 (Q T ). (4.81) Returning with estimates (4.60-4.67) and with (4.81) to (4.33), we infer that the sequences of functions t → N u N (t)Φ i are for any Φ i ∈ B uniformly bounded and equi-continuous in C[0, T ]. 
We may thus combine the Arzela-Ascoli theorem with the fact that the linear hull of B is dense in L^{2β/(β-1)}(Ω) to deduce that

ϱ_N u_N → ϱu in C_weak([0, T]; L^{2β/(β+1)}(Ω)), (4.82)

where we have used the second convergence in (4.77) in order to identify the limit. Seeing the compact imbedding L^{2β/(β+1)}(Ω) →→ W^{-1,2}(Ω), we deduce ϱ_N u_N(t) → ϱu(t) (strongly) in W^{-1,2}(Ω) for all t ∈ [0, T]. This implies, in particular, ϱ_N u_N → ϱu in L^2(0, T; W^{-1,2}(Ω)). Combining the weak convergence (4.77) with the just obtained strong convergence of ϱ_N u_N, and with estimate (4.67), we get

ϱ_N u_N ⊗ u_N ⇀ ϱu ⊗ u in L^2(I, L^{6β/(4β+3)}(Ω)). (4.83)

Relations (4.72-4.83) guarantee that (ϱ, u) belongs to class (4.8), while (4.63), (4.68) and (4.70) guarantee the additional regularity (4.15). Relation (4.71) guarantees that equation (4.9) is satisfied in the strong sense (4.1). Equation (4.1)_{(ϱ_ε,u_ε)} tested by ϱ_ε yields inequality (4.41)_{(ϱ_ε,u_ε)} by the same manipulations as presented during the derivation of inequality (4.41). Equation (4.16) is obtained by multiplying (4.1) by b'(ϱ). The limits (4.72-4.78), (4.81-4.83) employed in (4.32) lead to equation (4.10). It remains to pass to the limit from inequality (4.52)_{(ϱ_N,u_N)} to inequality (4.11). To this end, we use at the left hand side the lower weak semi-continuity of norms and of convex functionals. The right hand side converges to its due limit (the same expression with (ϱ, v)) due to (4.72), (4.77), (4.81), (4.83) and (4.75). We postpone the details of the limit passage in the energy inequality to the next Section, where similar reasoning will be employed. We have thus established Lemma 4.1.

5 Limit ε → 0

The aim in this section is to pass to the limit in the weak formulation (4.9-4.11) of the problem (4.1-4.4)_{(ϱ_ε,u_ε)} in order to recover the weak formulation of problem (1.1-1.6) written with p_δ, H_δ instead of p, H (cf. (4.8-4.11)).
We expect that there is a (weak) limit (ϱ, u) of a conveniently chosen subsequence (ϱ_ε, u_ε) that represents a weak solution of problem (1.1-1.6)_{(p=p_δ, H=H_δ)} (cf. (2.5-2.9)). More exactly, we want to prove the following lemma:

Lemma 5.1. Under the assumptions of Lemma 4.1, there exists a subsequence (ϱ_ε, u_ε) (not relabeled) and a couple (ϱ, u) such that

ϱ_ε ⇀* ϱ (weakly-*) in L^∞(0, T; L^β(Ω)), u_ε ⇀ u in L^2(0, T; W^{1,2}(Ω; R^3)), (5.1)

ϱ ≥ 0 a.a. in (0, T) × Ω, u|_{(0,T)×∂Ω} = u_B,

satisfying:

1. Function ϱ ∈ C_weak([0, T], L^β(Ω)) and the integral identity

∫_Ω ϱ(τ, ·)ϕ(τ, ·)dx − ∫_Ω ϱ_0(·)ϕ(0, ·)dx (5.2)
= ∫_0^τ ∫_Ω (ϱ∂_t ϕ + ϱu · ∇_x ϕ) dxdt − ∫_0^τ ∫_{Γ_in} ϱ_B u_B · n ϕ dS_x dt

holds for any τ ∈ [0, T] and ϕ ∈ C^1_c([0, T] × (Ω ∪ Γ_in));

2. The renormalized continuity equation holds:

∫_Ω b(ϱ)ϕ(τ)dx − ∫_Ω b(ϱ_0)ϕ(0)dx = (5.3)
∫_0^τ ∫_Ω (b(ϱ)∂_t ϕ + b(ϱ)u · ∇_x ϕ − ϕ(b'(ϱ)ϱ − b(ϱ))div_x u) dxdt − ∫_0^τ ∫_{Γ_in} b(ϱ_B)u_B · n ϕ dS_x dt

for any ϕ ∈ C^1_c([0, T] × (Ω ∪ Γ_in)), and any continuously differentiable b with b' having a compact support in [0, ∞).

3. Function ϱu ∈ C_weak([0, T], L^{2β/(β+1)}(Ω; R^3)), and the integral identity

∫_Ω ϱu(τ, ·) · ϕ(τ, ·)dx − ∫_Ω ϱ_0 u_0(·)ϕ(0, ·)dx (5.4)
= ∫_0^τ ∫_Ω (ϱu · ∂_t ϕ + ϱu ⊗ u : ∇_x ϕ + p_δ(ϱ)div_x ϕ − S(∇_x u) : ∇_x ϕ) dxdt

holds for any τ ∈ [0, T] and any ϕ ∈ C^1_c([0, T] × Ω; R^3).

4. The energy inequality

∫_Ω (½ϱ|u − u_∞|^2 + H_δ(ϱ))(τ)dx + ∫_0^τ ∫_Ω S(∇_x(u − u_∞)) : ∇_x(u − u_∞)dxdt (5.5)
≤ ∫_Ω (½ϱ_0|u_0 − u_∞|^2 + H(ϱ_0))dx − ∫_0^τ ∫_{Γ_in} H_δ(ϱ_B)u_B · n dS_x dt
− (E − δB) ∫_0^τ ∫_{Γ_in} |u_B · n|dS_x dt − (H − δA) ∫_0^τ ∫_{Γ_out} |u_B · n|dS_x dt
− ∫_0^τ ∫_Ω p_δ(ϱ)divu_∞ dxdt − ∫_0^τ ∫_Ω ϱu · ∇_x u_∞ · (u − u_∞)dxdt − ∫_0^τ ∫_Ω S(∇_x u_∞) : ∇_x(u − u_∞)dxdt

holds for a.a. τ ∈ (0, T). Numbers A, B are defined in (4.11) and H, E in (2.10) and (4.12), respectively. Vector field u_∞ is a given continuous extension of u_B in class (2.11).

The remaining part of Section 5 will be devoted to the proof of Lemma 5.1. The proof will be performed in the following subsections.
We shall obtain estimate (5.11) by taking in the momentum equation (4.10) with (ϱ_ε, u_ε) the test function

ϕ = η(t)B[ψϱ_ε − (1/|Ω|)∫_Ω ψϱ_ε dx],

where η ∈ W^{1,∞}_0(0, T) and ψ ∈ C^1_c(Ω) are convenient cut off functions. This testing provides an identity of the form

∫_0^T ∫_Ω ηψ p_δ(ϱ_ε)ϱ_ε dxdt = ∫_0^T ∫_Ω R(ϱ_ε, u_ε, η, ψ) dxdt,

where the right hand side may be bounded from above via the uniform estimates (5.6-5.10), by virtue of the Hölder, Sobolev and interpolation inequalities and Lemma 5.3, by a positive number dependent on ∇_x ψ, but independent, in particular, of η, η' (and, of course, independent of ε). In order to obtain this formula, one must perform several integrations by parts and employ conveniently the continuity equation (4.9). We notice that the most disagreeable terms involving integration over the boundary vanish due to the fact that ϕ and ψ vanish at the boundary. This is nowadays a standard and well understood procedure. We refer the reader for more details to [11, Section 3.2], or to the monographs [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]. Seeing decomposition (2.2) of the pressure, and seeing that its non-monotone component is bounded, the latter formula provides bound (5.11).

Weak limits in continuity and momentum equations

We shall first pass to the limit in the weak formulations of the continuity equation (4.9) and momentum equation (4.10). Estimates (5.6) and (5.8) yield convergence (5.1), and estimate (5.11) together with (2.2), (4.5) implies p_δ(ϱ_ε) ⇀ \overline{p_δ(ϱ)} weakly in L^{(β+1)/β}((0, T) × K) for any compact K ⊂ Ω. Here, and in the sequel, \overline{g(ϱ, u)} denotes a weak limit in L^1(Q_T) of the sequence g(ϱ_ε, u_ε). By virtue of (5.10) (and (5.6)) the terms multiplied by ε will vanish in the limit. The sequence Z_ε → 0 in L^{4/3}(Q_T; R^9) by construction, see (4.80).
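The operator B appearing in the test function above is the Bogovskii operator of Lemma 5.3; its standard properties, as assumed here, read:

```latex
% Standard properties of the Bogovskii operator \mathcal{B} (Lemma 5.3):
% for f \in L^p(\Omega), 1 < p < \infty, with \int_\Omega f\,dx = 0,
\mathrm{div}_x\,\mathcal{B}[f] = f \ \text{in } \Omega,
\qquad \mathcal{B}[f]\big|_{\partial\Omega} = 0,
\qquad \|\mathcal{B}[f]\|_{W^{1,p}_0(\Omega;\mathbb{R}^3)}
  \le c(p,\Omega)\,\|f\|_{L^p(\Omega)}.
% Since \mathcal{B}[f] vanishes on \partial\Omega, the test function \varphi
% above is admissible, and pairing it with the pressure produces the product
% p_\delta(\varrho_\varepsilon)\,\varrho_\varepsilon, i.e. one extra power of
% the density in the local pressure estimate (5.11).
```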
Seeing that ϱ_ε → ϱ in C_weak([0, T]; L^β(Ω)) (as one can show by means of the Arzela-Ascoli type argument from equation (4.9) and the uniform bounds (5.6-5.10)), we deduce from the compact imbedding L^β(Ω) →→ W^{-1,2}(Ω) and from u_ε ⇀ u in L^2(0, T; W^{1,2}(Ω)) the weak-* convergence ϱ_ε u_ε ⇀* ϱu in L^∞(0, T; L^{2β/(β+1)}(Ω)), that may be consequently improved, thanks to the momentum equation (4.10) and estimates (5.6-5.10), to ϱ_ε u_ε → ϱu in C_weak(0, T; L^{2β/(β+1)}(Ω)), again by the Arzela-Ascoli type argument. With this observation at hand, employing the compact imbedding L^{2β/(β+1)}(Ω) →→ W^{-1,2}(Ω) and u_ε ⇀ u in L^2(0, T; W^{1,2}(Ω)), we infer that ϱ_ε u_ε → ϱu in L^2(0, T; W^{-1,2}(Ω; R^3)) and consequently ϱ_ε u_ε ⊗ u_ε ⇀ ϱu ⊗ u weakly e.g. in L^1(Q_T; R^9), at least for a chosen subsequence (not relabeled). Having the above, we get the following limits in equations (4.9-4.10):

∫_Ω ϱ(τ, x)ϕ(τ, x)dx − ∫_Ω ϱ_0(x)ϕ(0, x)dx = ∫_0^τ ∫_Ω (ϱ∂_t ϕ + ϱu · ∇_x ϕ) dxdt (5.12)
− ∫_0^τ ∫_{Γ_in} ϱ_B u_B · n ϕ dS_x dt

for any τ ∈ [0, T] and ϕ ∈ C^1_c([0, T] × (Ω ∪ Γ_in));

∫_Ω ϱu(τ, ·) · ϕ(τ, ·)dx − ∫_Ω ϱ_0 u_0(·)ϕ(0, ·)dx (5.13)
= ∫_0^τ ∫_Ω (ϱu · ∂_t ϕ + ϱu ⊗ u : ∇_x ϕ + \overline{p_δ(ϱ)}div_x ϕ − S(∇_x u) : ∇_x ϕ) dxdt

for any τ ∈ [0, T] and any ϕ ∈ C^1_c([0, T] × Ω; R^3). It remains to show that \overline{p_δ(ϱ)} = p_δ(ϱ). The rest of this section is devoted to the proof of this identity. This is equivalent to showing that ϱ_ε → ϱ a.e. in Q_T.

Effective viscous flux identity

We denote by ∇_x ∆^{-1} the pseudodifferential operator with Fourier symbol iξ/|ξ|^2 and by R the Riesz transform with Fourier symbol ξ⊗ξ/|ξ|^2. Following Lions [START_REF] Lions | Mathematical topics in fluid dynamics[END_REF], we shall use in the approximating momentum equation (4.10) the test function

ϕ(t, x) = ψ(t)φ(x)∇_x ∆^{-1}(ϱ_ε φ), ψ ∈ C^1_c(0, T), φ ∈ C^1_c(Ω),

and in the limiting momentum equation (5.13) the test function

ϕ(t, x) = ψ(t)φ(x)∇_x ∆^{-1}(ϱφ), ψ ∈ C^1_c(0, T), φ ∈ C^1_c(Ω),

subtract both identities and perform the limit ε → 0.
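The admissibility of these test functions rests on the mapping properties of ∇_x∆^{-1} and R granted by the Hörmander-Mikhlin multiplier theorem; in the form used here (a standard recollection) they read:

```latex
% Mapping properties following from the Hormander-Mikhlin multiplier theorem:
\|\nabla_x\Delta^{-1}[f]\|_{W^{1,p}(\mathbb{R}^3)}
  \le c(p)\,\|f\|_{L^p(\mathbb{R}^3)},
\qquad
\|\mathcal{R}[f]\|_{L^p(\mathbb{R}^3;\mathbb{R}^{3\times 3})}
  \le c(p)\,\|f\|_{L^p(\mathbb{R}^3)},
\qquad 1 < p < \infty.
% Consequently \varrho \mapsto \psi\phi\,\nabla_x\Delta^{-1}(\varrho\phi) is
% linear and continuous from L^p(\Omega) to W^{1,p}(\Omega), which makes the
% above test functions admissible in the momentum equation.
```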
This is a laborious, but nowadays standard calculation (whose details can be found e.g. in [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]Lemma 3.2], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF] or [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]Chapter 3]) leading to the identity T 0 Ω ψφ 2 p δ ( ) -(2µ + λ)divu dxdt - T 0 Ω ψφ 2 p δ ( ) -(2µ + λ) divu dxdt (5.14) = T 0 Ω ψφu • R • ( uφ) -u • R( φ) dxdt -lim ε→0 T 0 Ω ψφu ε • ε R • ( ε u ε φ) -ε u ε • R( ε φ) dxdt. This process involves several integrations by parts and exploits continuity equation in form (4.9) and (5.12). We notice that the non homogenous data do not play any role due to the presence of compactly supported cut-off functions ψ and φ. The essential observation for getting (5.14) is the fact that the map → ϕ defined above is a linear and continuous from L p (Ω) to W 1,p (Ω), 1 < p < ∞ as a consequence of classical Hörmander-Michlin's multiplier theorem of harmonic analysis. The most non trivial moment are convex. Consequently, Λ ln -ln ≥ p( ) -p( ) (5.17) and Λ ln -ln ≥ p( ) -p( ). (5.18) Coming now back to (5.15), we obtain by using monotonicity of p, (2µ + λ) divu -divu ≤ p( ) -p( ) ≤ p( ) -p( ) + p( ) -p( ) . Employing (5.17) and (5.18) further yields divu -divu ≤ cΛ(1 + r) ln -ln , (5.19) provided supp p ⊂ [0, r]. This is the crucial inequality that plays in the case of non monotone pressure the same role as would be played by the inequality divu -divu ≥ 0 in the case of monotone pressure law. Strong convergence of density sequence Since verifies continuity equation (5.12) and since it belongs to to L 2 (Q T ) we may employ Lemma 3.1 in order to conclude that it verifies also renormalized continuity equation (2.13). 
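Returning for a moment to inequalities (5.17)-(5.18): the convexity mechanism behind them can be sketched as follows, where q stands for the compactly supported non-monotone component of decomposition (2.2), assumed twice differentiable:

```latex
% Choose \Lambda \ge \sup_{0 < \varrho \le r}\varrho\,|q''(\varrho)|, which is
% finite since supp q \subset [0,r]; as (\varrho\ln\varrho)'' = 1/\varrho,
% both maps below are then convex on (0,\infty):
\varrho \mapsto \Lambda\,\varrho\ln\varrho - q(\varrho),
\qquad
\varrho \mapsto \Lambda\,\varrho\ln\varrho + q(\varrho).
% Weak limits of convex functions of \varrho_\varepsilon dominate the same
% functions of the weak limit (cf. [9, Theorem 10.20]), whence a.e. in Q_T
\Lambda\,\overline{\varrho\ln\varrho} \mp \overline{q(\varrho)}
  \;\ge\; \Lambda\,\varrho\ln\varrho \mp q(\varrho),
% i.e., after moving the pressure terms to the right hand side,
\Lambda\big(\overline{\varrho\ln\varrho} - \varrho\ln\varrho\big)
  \;\ge\; \pm\big(\overline{q(\varrho)} - q(\varrho)\big),
% which is exactly the pair of inequalities (5.17)-(5.18).
```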
In view of Remark 2.5, identity (2.13) is valid for any b belonging to class (2.14). In particular, for b( ) ≡ L( ) = log , it reads Ω L( (τ, x))ϕ(τ, x)dx - Ω L( 0 )ϕ(0, x)dx (5.20) = τ 0 Ω L( )∂ t ϕ + L( )u • ∇ x ϕ -ϕ div x u dxdt + τ 0 ∂Ω L( B )u B • nϕdS x dt. We continue with the renormalized version of the approximate equation of continuity (4.16). In particular, for b( ) = L( ) ≡ log( ), when passing to the weak formulation, we obtain Ω L( ε (τ, x))ϕ(τ, x) dx - Ω L( 0 (x))ϕ(0, x) dx (5.21) - τ 0 Ω L( ε )∂ t ϕ + L( ε )u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε ϕ dxdt + τ 0 Γ in ϕL( ε )u B • n dS x dt -ε τ 0 Γ in ϕL ( ε )∇ x ε • n dS x dt ≤ o(ε) for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )), ϕ ≥ 0, where the inequality sign appears due to the omission of the (non negative) term containing εL ( ε )|∇ x ε | 2 (recall that L is convex) and o(ε), lim ε→0 o(ε) = 0 corresponds to the terms of (4.16) containing ε as multiplier. Finally, we use the boundary conditions (4.10) obtaining Ω L( ε (τ, x))ϕ(τ, x) dx - Ω L( 0 (x))ϕ(0, x) dx (5.22) - τ 0 Ω L( ε )∂ t ϕ + L( ε )u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε ϕ dxdt + τ 0 Γ in L( ε )u B • n + L ( ε )( B -ε )u B • n dS x dt ≤ o(ε) for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )), ϕ ≥ 0. Subtracting (5.20) from (5.22) while taking ϕ independent of t ∈ [0, T ], we get Ω L( ε (τ, x))ϕ(x) dx - Ω L( (τ, x))ϕ(x) dx (5.23) - τ 0 Ω L( ε ) -L( ) u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε -div x u ϕ dxdt + τ 0 Γ in ϕ [L( B ) -L ( ε )( B -ε ) -L( ε )] |u B • n| dS x ≤ o(ε) for any τ ∈ [0, T ] and any ϕ ∈ C 1 c (Ω ∪ Γ in )), ϕ ≥ 0. As L is convex, we deduce, Ω L( ε (τ, x))ϕ(x) dx - Ω L( (τ, x))ϕ(x) dx - τ 0 Ω L( ε ) -L( ) u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε -div x u ϕ dxdt ≤ o(ε). Whence, letting ε → 0 yields Ω log -log )(τ, x)ϕ(x) dx + τ 0 Ω log -log u • ∇ x ϕ dxdt (5.24) ≤ cΛ(1 + r) τ 0 Ω log -log dxdt for any τ ∈ [0, T ] and any ϕ ∈ C 1 c (Ω ∪ Γ in ), ϕ ≥ 0, where we have used (5.19). 
Let now ũB ∈ W 1,∞ (Ω) be a Lipschitz extension of u B to Ω constructed in Lemma 2.2. Since Γ out is in class C 2 , function x → dist(x, Γ out ) belongs to C 2 (U - ε 0 (Γ out ) ∪ Γ out ) for some "'small"' ε 0 > 0, where U - ε 0 (Γ out ) ≡ {x = x 0 -zn(x 0 ) | x 0 ∈ Γ out , 0 < z < ε 0 } ∩ Ω, Energy inequality We shall pass to the limit in the energy inequality (4.11) with the goal to deduce from it energy inequality (5.5). To this end we first take identity (4.16) with b(z) = z 2 , ϕ = 1 in order to get identity (4.41) ( ε,uε) . This is justified by virtue of Remark 4.2. Due to this operation, all terms in (4.11) multiplied by Σ vanish. Once this is done, we integrate the resulting inequality over τ from 0 < τ 1 < τ 2 < T to get τ 2 τ 1 Ω 1 2 ε |u ε -u ∞ | 2 + H δ ( ε ) (τ ) dxdt + τ 2 τ 1 τ 0 Ω S(∇ x (u ε -u ∞ )) : ∇ x (u ε -u ∞ ) dxdt (5.32) ≤ τ 2 τ 1 Ω 1 2 0 |u 0 -u ∞ | 2 + H( 0 ) dxdτ - τ 2 τ 1 τ 0 Γ in H δ ( B )u B • ndS x dtdτ -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 2 τ 1 τ 0 Ω p δ ( ε )divu ∞ dxdt - τ 2 τ 1 τ 0 Ω ε u ε • ∇ x u ∞ • (u ε -u ∞ ) dxdtdτ - τ 2 τ 1 τ 0 Ω S(∇ x u ∞ ) : ∇ x (u ε -u ∞ ) -ε∇ x ε • ∇ x (u ε -u ∞ ) • u ∞ dxdtdτ, where, at the left hand side we have omitted the non negative terms multiplied by ε and the non negative terms involving integrals over the Γ in and Γ out portions of the boundary. We can now use the convergences established in Section 5.2 and in (5.31) in combination with the lower weak semi-continuity of convex functionals at the left hand side (see e.g. 
[START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]Theorem 10.20]) -to this end we write H δ = H δ + H and realize that H δ is convex and H is bounded on (0, ∞)-to get τ 2 τ 1 Ω 1 2 |u -u ∞ | 2 + H δ ( ) (τ ) dxdt + τ 2 τ 1 τ 0 Ω S(∇ x (u -u ∞ )) : ∇ x (u -u ∞ ) dxdt ≤ τ 2 τ 1 Ω 1 2 0 |u 0 -u ∞ | 2 + H( 0 ) dxdτ - τ 2 τ 1 τ 0 Γ in H δ ( B )u B • ndS x dtdτ -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt -lim inf ε→0 τ 2 τ 1 τ 0 Ω p δ ( ε )divu ∞ dxdt - τ 2 τ 1 τ 0 Ω u • ∇ x u ∞ • (u -u ∞ ) dxdtdτ - τ 2 τ 1 τ 0 Ω S(∇ x u ∞ ) : ∇ x (u -u ∞ ) dxdtdτ. We observe that due to (2.11), τ 0 Û- h (∂Ω) p( ε ) + δ( β ε + ε ) divu ∞ dxdt ≥ 0 (if h > 0 is sufficiently small) and lim inf ε→0 τ 2 τ 1 τ 0 Ω\ Û- h (∂Ω) p( ε ) + δ( β ε + ε ) divu ∞ dtdτ = τ 2 τ 1 τ 0 Ω\ Û- h (∂Ω) p( ) + δ( β + ) divu ∞ dxdtdτ, while τ 0 Ω p( ε )divu ∞ dxdt → τ 0 Ω p( )divu ∞ dxdt by virtue of (5.31). Using these facts in (5.32), letting h → 0 and τ 1 → τ 2 while applying the Theorem on Lebesgue points yields the desired inequality (5.5). Lemma 5.1 is thus proved. 6 Limit δ → 0. Proof of Theorem 2.4 Our ultimate goal is to perform limit δ → 0. We will prove the following: Lemma 6.1. Let ( δ , u δ ) be a sequence of functions constructed in Lemma 5.1. Then there is a subsequence (not relabeled) such that δ weakly-* in L ∞ (0, T ; L γ (Ω)), (6.1) u δ u in L 2 (0, T ; W 1,2 (Ω; R 3 )), where the couple ( , u) is a weak solution of problem (1.1-1.6). The remaining part of this section is devoted to the proof of Lemma 6.1, which is nothing but Theorem 2.4. Uniform estimates We shall start with estimates for weak solutions ( δ , u δ ) constructed in Lemma 5.1. They are collected in the following lemma. Lemma 6.2. Let ( δ , u δ ) be a couple constructed in Lemma 5.1. 
Then, the following estimates hold:

‖u_δ‖_{L^2(I,W^{1,2}(Ω;R^3))} ≤ L(data), (6.2)
‖ϱ_δ‖_{L^∞(I,L^γ(Ω))} ≤ L(data), (6.3)
‖ϱ_δ|u_δ|^2‖_{L^∞(I,L^1(Ω))} ≤ L(data), (6.4)
δ^{1/β}‖ϱ_δ‖_{L^∞(I,L^β(Ω))} ≤ L(data). (6.5)

There is α(γ) > 0 such that

‖ϱ_δ‖_{L^{γ+α}((0,T)×K)} ≤ L(data, K), with any compact K ⊂ Ω. (6.6)

In the above, "data" stands for ∫_Ω (½ϱ_0 u_0^2 + H(ϱ_0)) dx, ‖u_∞‖_{W^{1,∞}(Ω)}, ϱ_B, ϱ̄_B ≡ ‖ϱ_B‖_{C(∂Ω)}.

Proof of Lemma 6.2. Similarly as before, continuity equation (5.12)_{(ϱ_δ,u_δ)} yields an L^∞(0, T; L^1(Ω)) bound for the sequence ϱ_δ. Now, the uniform estimates (6.2-6.5) follow directly from energy inequality (5.5), the structural assumptions on the pressure, and the definitions of p_δ and H_δ, see (2.2), (4.5), (4.6). The last estimate (6.6) is obtained by testing the momentum equation (5.4) with (ϱ_δ, u_δ) by the test function

ϕ = η(t)B[ψϱ_δ^α − (1/|Ω|)∫_Ω ψϱ_δ^α dx],

where α > 0 is sufficiently small, and where η ∈ W^{1,∞}_0(0, T) and ψ ∈ C^1_c(Ω) are convenient cut off functions. After several integrations by parts, using the renormalized equation (5.3) (with (ϱ_δ, u_δ) and b(ϱ) = ϱ^α, cf. Remark 2.3), we arrive finally at

∫_0^T ∫_Ω ηψ p_δ(ϱ_δ)ϱ_δ^α dxdt = ∫_0^T ∫_Ω R(ϱ_δ, u_δ, η, ψ) dxdt,

where the right hand side may be bounded from above due to estimates (6.2-6.5), in the same way as in Section 5.1. We refer the reader for more details of this standard but laborious procedure again to [11, Section 4.1], or to the monographs [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF].

Weak limits in the field equations

Estimates (6.2-6.3) yield immediately the weak convergence announced in (6.1), and estimate (6.6) together with (2.2) implies p(ϱ_δ) ⇀ \overline{p(ϱ)} weakly in L^{(γ+α)/γ}((0, T) × K) for any compact K ⊂ Ω. The terms multiplied by δ in the momentum equation will vanish due to estimate (6.5).
Repeating carefully the (standard) reasoning of Section 5.2, we deduce that ∈ C weak ([0, T ]; L γ (Ω)), u ∈ C weak ([0, T ]; L 2γ γ+1 (Ω; R 3 )), and the the limit in equations (5.12) and (4.10) reads Ω (τ, •)ϕ(τ, •)dx - Ω 0 (•)ϕ(0, •)dx = τ 0 Ω ∂ t ϕ + u • ∇ x ϕ dxdt (6.7) = - τ 0 Γ in B u B • nϕ dS x dt for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )); Ω u(τ, •) • ϕ(τ, •)dx - Ω 0 u 0 (•)ϕ(0, •)dx (6.8) = τ 0 Ω u∂ t ϕ + u ⊗ u : ∇ x ϕ + p( )div x ϕ dxdt - τ 0 Ω S(∇ x u) : ∇ x ϕdxdt for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R 3 ). We can perform the weak limit in the renormalized continuity equation (5.3) for ( δ , u δ ). We obtain, by the same token, Ω (b( )u)(τ )ϕ(τ )dx - Ω b( 0 )u 0 ϕ(0)dx = (6.9) τ 0 Ω b( )u • ∇ x ϕ -ϕ(b ( ) -b( ))div x u dxdt - τ 0 Γ in b( B )u B • nϕ dS x dt for any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )), ( δ , u δ ) (in L 1 (Q T ).)) It remains to show that p( ) = p( ). The rest of this section is devoted to the proof of this identity. This is equivalent to show that δ → a.e. in Q T . Effective viscous flux identity We now perform similar reasoning as in Section 5.3. Since however functions and log do not possess enough summability, we shall replace them by convenient truncations T k ( ) and L k ( ), where T k ( ) is defined in (3.11) and L k ( ) = 1 T k (z) z dz. (6.10) We shall repeat the process described in Section 5.3 with T k ( δ ) resp. T k ( ) instead of δ , : Following [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF], we shall use in the approximating momentum equation (5.4) (where ( , u) = ( δ , u δ )) test function ϕ(t, x) = ψ(t)φ(x)∇ x ∆ -1 (T k ( δ )φ), ψ ∈ C 1 c (0, T ), φ ∈ C 1 c (Ω) and in the limiting momentum equation (6.8) test function ϕ(t, x) = ψ(t)φ(x)∇ x ∆ -1 (T k ( )φ), ψ ∈ C 1 c (0, T ), φ ∈ C 1 c (Ω) subtract both identities and perform the limit δ → 0. 
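Let us note the identity behind the appearance of the T_k(ϱ)div_x u terms in (6.18)-(6.19), assuming the classical form of the truncation pair in (3.11) and (6.10):

```latex
% Classical truncation pair (assumed form of (3.11) and (6.10)):
T_k(z) = k\,T\!\left(\tfrac{z}{k}\right), \quad T \ \text{smooth and concave},
\quad T(z) = z \ \text{for } z \le 1, \quad T \equiv 2 \ \text{on } [3,\infty),
\qquad
L_k(\varrho) = \varrho \int_1^{\varrho} \frac{T_k(z)}{z^2}\,dz.
% A direct computation with L_k(\varrho) = \varrho F(\varrho) then gives
\varrho\,L_k'(\varrho) - L_k(\varrho) = T_k(\varrho),
% so the renormalized continuity equation with b = L_k produces exactly the
% T_k(\varrho)\,\mathrm{div}_x u terms of (6.18)-(6.19). Moreover
% T_k(\varrho) = \varrho for \varrho \le k, hence
% L_k(\varrho) \to \varrho\ln\varrho pointwise as k \to \infty,
% recovering the logarithmic renormalization of Section 5.4.
```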
This leads to equation Finally, we employ formulas (5.17), (5.18), similarly as when deriving (5.19), in order to get (2µ + λ) τ 0 Ω T k ( )divu -T k ( )divu dxdt ≤ cΛ(1 + r) τ 0 Ω ln -ln dxdt. (6.13) Oscillations defect measure The main achievement of the present section is the following lemma. Lemma 6.3. Let ( δ , u δ ) be a sequence constructed in Lemma 5.1. Then osc γ+1 [ δ ](Q T ) < ∞. ( 6 .14) The quantity osc γ+1 [ δ ](Q T ) is defined in (3.10) It is well known that Lemma 6.3 follows from the effective viscous flux identity, see [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]Lemma 4.3], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF] for the detailed proof. To see this fact, we observe that there is non-decreasing p I i δ , (6.16) where I 1 δ = 2µ + λ T 0 Ω T k ( δ ) -T k ( ) div x u δ dxdt, I 2 δ = 2µ + λ T 0 Ω T k ( ) -T k ( ) div x u δ dxdt, I 3 δ = T 0 Ω p b ( )T k ( ) -p b ( ) T k ( ) dxdt We first observe that the second integral at the left hand side is non negative (indeed, p m is nondecreasing and we can use Theorem 10.19 in [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]). Second, we employ the Hölder inequality and interpolation together with the lower weak semi-continuity of norms and bounds (6.2 -6.3) to estimate integrals I 1 δ , I 2 δ in order to get |I 1 δ + I 2 δ | ≤ c osc γ+1 [ δ ](Q T ) 1 2γ (6.17 Inserting the last inequality into (6.16) yields (in combination with estimates of integrals I 1 δ -I 3 δ ) the statement of Lemma 6.3. Strong convergence of density Since couple ( , u) verifies continuity equation (2.6), it verifies also renormalized continuity equation (2.13) in view of Lemma 3.2. In view of Remark 2.5 we can take in the latter equation b = L k . 
We get, in particular, where the first term converges to 0 as k → ∞ by virtue of (6.14) and interpolation estimate (indeed, T k ( ) -T k ( ) L 1 (Q T ) → 0 as k → ∞ by virtue of definition of T k and lower weak semi-continuity of norms), while the second term is bounded from above by the expression at the right hand side of (6.13). Remark 2 . 3 . 23 and any continuously differentiable b with b having a compact support in [0, ∞). Weak solution to problem (1.1-1.6) satisfying in addition renormalized continuity equation (2.13) is called renormalized weak solution. It can be shown easily by using the Lebesgue dominated convergence theorem that the the family of test functions b in the previous definition can be extended to b . 15 )Remark 2 . 5 . 1 . 15251 Then for any Lipschitz extension u ∞ of u B verifying (2.8) problem (1.1-1.6) possesses at least one bounded energy renormalized weak solution ( , u). Theorem 2.4 holds regardless the (d -1)-Hausdorff measure of Γ in or Γ out is zero. If the Hausdorff measure |Γ in | d-1 = 0 then all conditions on B become irrelevant. The standard theory developed in [11, Theorem 1. where we have used algebraic relation zT k (z) ≤ 2T k (z) and the Minkowski inequality. Equation (3.16) implies (2.13) with any b ∈ C 1 [0, ∞), b with compact support by virtue of the Lebesgue dominated convergence theorem. Lemma 3.2 is proved. Lemma 4 . 1 . 41 Let Ω be a domain of class C 2 . Let ( B , u B ) verify assumptions (2.1) and let initial and boundary data verify ) in particular, e -Kτ ≤ (t, x) ≤ e Kτ for all τ ∈ [0, T ] provided u verifies condition (4.21). (Here and in the sequel |A| is a Lebesgue measure of set A ⊂ R 3 while |A| 2 denotes its 2 -D Hausdorf measure.) 21). Proof of Lemma 4.3 Proof of statement 1. 
The parabolic boundary value problem (with elliptic differential operator A = -ε∆ + u • ∇ x + divu on Ω and boundary operator B = -εn • ∇ x + v on the boundary ∂Ω) satisfies all assumptions of the maximal regularity theorem by Denk, Hieber, Prüss [4, Theorem 2.1] with p = 2 (the coefficients are sufficiently regular in order to verify conditions [4, conditions (SD), (SB)], principal part of operator A is (normally) elliptic, see [4, condition (E)], while the principal parts of operators A and B verify the Shapiro-Lopatinskii conditions [4, condition (LS)], and the data 0 and g verify conditions [4, condition (D)], and eventually Bergh and Löfström [3, Theorem 6.4.4] for the identification of the Sobolev space W 1,2 (Ω) with the Besov space B 1 2,2 (Ω)). Under these circumstances, Theorem 2.1 in [4] yields the statement in item 1. of Lemma 4.3, in particular (4.19 numbers A, B are define in (4.11). Now we take in (4.52) Σ = 2 sup >0 p ( ) and use estimates (4.44-4.45) when dealing with Ω H δ ( )(τ ) dx, (4.47) when dealing with τ 0 Ω S(∇ x v) : ∇ x v dxdt , (4.48-4.49) when treating ε τ 0 Ω H δ ( )|∇ x | 2 dxdt, and (4.50-4.51) to treat the right hand side (while taking first α > 0 sufficiently small and then ε > 0 also sufficiently small in order to let the terms αε ∇ x L 2 (Qτ ) , (α + ε c α ) ∇ x v 2 L 2 (Qτ ) and αδ τ 0 Γ in β |u • n|dS x "absorb" in the left hand side) with the goal to obtain with help of the Gronwall inequality, sup |u B • n|dS x dt ≤ L(data, T, δ). (4.58) ε v 4 L 4 (0,T ;W 1,4 (Ω)) ≤ L(data, T, δ). (4.59) At this immediate stage, we shall use the first two estimates. Employing (4.25), namely v W 1,∞ (Ω) ≤ c v W 1,2 (Ω) , v ∈ X, and (4.22), we get .69) Now we return to (4.18) -with ( N , u N )-and consider it as parabolic problem with operatot ∂ t -ε∆ in (0, T ) × Ω with right hand side -div( N u N ), and boundary operator -εn • ∇ x + v in (0, T ) × ∂Ω with right hand side B v. The maximal parabolic regularity theory, as e.g. 
[4, Theorem 2.1], yields that with any b satisfying conditions (2.14) p=γ . (Here again b( , u) denotes weak limit of the sequence b ψφ 2 2 p( )T k ( ) -(2µ + λ)T k ( )divu dxdt -T 0 Ω ψφ 2 p δ ( )T k ( ) -(2µ + λ)T k ( )divu dxdt = T 0 Ω ψφu • T k ( )R • ( uφ) -u • R(T k ( )φ) dxdt δ • T k ( δ )R • ( δ u δ φ)δ u δ • R(T k ( δ )φ) dxdt.Renormalized continuity equation (5.3) ( δ ,u δ ) and its weak limit (6.9) with b = T k play in this calculation an important role. Due to the compact support of φ, the non homogeneity of the boundary data is irrelevant. The right hand side of the last identity is zero by div-curl lemma. Consequently, we get the effective viscous flux identityp( )T k ( ) -p( ) T k ( ) = (2µ + λ) T k ( )divu -T k ( )divu . (6.11)The details of this calculus and reasoning can be found in [11, Lemma 3.2],[START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF],[START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF] or[START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF] Chapter 3]. If p would be non decreasing we would have (2µ + λ) T k ( )divu -T k ( )divu ≥ 0 (according to e.g.[START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF] Theorem 10.19]) and we could stop this part of argumentation at this place. In the general case, we must continue. Writing p = pp and recalling that p is non decreasing, we deduce from identity (6.11),(2µ + λ) T k ( )divu -T k ( )divu ≤ p( )T k ( ) -p( ) T k ( ). 
2018
https://polytechnique.hal.science/hal-01756743/file/InflamJTB.pdf
Toward a model-free feedback control synthesis for treating acute inflammation: a mathematical perspective

Ouassim Bara (email: [email protected]), Michel Fliess (email: [email protected]), Cédric Join (email: [email protected]), Judy Day (email: [email protected]), Seddik M. Djouadi (email: [email protected])

Keywords: immune systems, inflammatory response, model-free control, intelligent controllers

Introduction

Inflammation is a key biomedical subject (see, e.g., [START_REF] Nathan | Points of control in inflammation[END_REF][START_REF] Vodovotz | Solving immunology?[END_REF]) with fascinating connections to diseases like cancer (see, e.g., [START_REF] Balkwill | Inflammation and cancer: back to Virchow?[END_REF]), AIDS (see, e.g., [START_REF] Deeks | HIV infection, inflammation, immunosenescence, and aging[END_REF]) and psychiatry (see, e.g., [START_REF] Miller | The role of inflammation in depression: from evolutionary imperative to modern treatment target[END_REF]). Mathematical and computational models investigating these biological systems have provided a greater understanding of the dynamics and key mechanisms of these processes (see, e.g., [START_REF]Complex Systems and Computational Biology Approaches to Acute Inflammation[END_REF]).
The very nature of this particular study leads to citations of mathematical models where differential equations play a prominent role (see, e.g., [START_REF] Arazi | Modeling immune complex-mediated autoimmune inflammation[END_REF][START_REF] Arciero | Using a mathematical model to analyze the role of probiotics and inflammation in necrotizing enterocolitis[END_REF][START_REF] Asachenkov | Disease Dynamics[END_REF][START_REF] Barber | A three-dimensional mathematical and computational model of necrotizing enterocolitis[END_REF][START_REF] Day | Modeling the immune rheostat of macrophages in the lung in response to infection[END_REF][START_REF] Day | Mathematical modeling of early cellular innate and adaptive immune responses to ischemia/reperfusion injury and solid organ allotransplantation[END_REF][START_REF] Day | A reduced mathematical model of the acute inflammatory response II. Capturing scenarios of repeated endotoxin administration[END_REF][START_REF] Russo | A mathematical model of inflammation during ischemic stroke[END_REF][START_REF] Dunster | The resolution of inflammation: A mathematical model of neutrophil and macrophage interactions[END_REF][START_REF] Eftimie | Mathematical models for immunology: Current state of the art and future research directions[END_REF][START_REF] Ho | A model of neutrophil dynamics in response to inflammatory and cancer chemotherapy challenges[END_REF][START_REF] Kumar | The dynamics of acute inflammation[END_REF][START_REF] Mathew | Global sensitivity analysis of a mathematical model of acute inflammation identifies nonlinear dependence of cumulative tissue damage on host interleukin-6 responses[END_REF][START_REF] Perelson | Immunology for physicists[END_REF][START_REF] Prodanov | A model of space-fractional-order diffusion in the glial scar[END_REF][START_REF] Reynolds | Mathematical Models of Acute Inflammation and Full Lung Model of Gas exchange under inflammatory stress[END_REF][START_REF] Reynolds | A mathematical model
of pulmonary gas exchange under inflammatory stress[END_REF][START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF][START_REF] Song | Ensemble models of neutrophil trafficking in severe sepsis[END_REF][START_REF] Torres | Mathematical modelling of posthemorrhage inflammation in mice: Studies using a novel, computer-controlled, closed-loop hemorrhage apparatus[END_REF][START_REF] Yiu | Dynamics of a cytokine storm[END_REF]). The usefulness of those equations for simulation, for prediction and, more generally, for understanding the underlying mechanisms is indisputable. Some models have also been used to provide a real-time feedback control synthesis (see, e.g., [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF] for an excellent introduction to this important engineering topic) for treating acute inflammation due to severe infection.
Insightful results were obtained via two main model-based approaches: optimal control [START_REF] Bara | Optimal control of an inflammatory immune response model[END_REF][START_REF] Bara | Immune Therapy using optimal control with L 1 type objective[END_REF][START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF][START_REF] Kirschner | Optimal control of the chemotherapy of HIV[END_REF][START_REF] Stengel | Stochastic optimal therapy for enhanced immune response[END_REF][START_REF] Stengel | Optimal enhancement of immune response[END_REF][START_REF] Stengel | Optimal control of innate immune response[END_REF][START_REF] Tan | Optimal control strategy for abnormal innate immune response[END_REF] and model predictive control [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF][START_REF] Hogg | Acute inflammation treatment via particle filter state estimation and MPC[END_REF][START_REF] Radosavljevic | A data-driven acute inflammation therapy[END_REF][START_REF] Zitelli | Combining robust state estimation with nonlinear model predictive control to regulate the acute inflammatory response to pathogen[END_REF].
Our work in [START_REF] Bara | Optimal control of an inflammatory immune response model[END_REF][START_REF] Bara | Immune Therapy using optimal control with L 1 type objective[END_REF][START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF][START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF][START_REF] Zitelli | Combining robust state estimation with nonlinear model predictive control to regulate the acute inflammatory response to pathogen[END_REF] made use of the low-dimensional system of ordinary differential equations (ODE) derived in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF] (see also [START_REF] Day | A reduced mathematical model of the acute inflammatory response II. Capturing scenarios of repeated endotoxin administration[END_REF]). This four-variable model possesses the following characteristics:
- The model is based on biological first principles: the non-specific mechanisms of the innate immune response to a generic gram-negative bacterial pathogen.
- A variable representing anti-inflammatory mediators (e.g., Interleukin-10, Transforming Growth Factor-β) is included and plays an important role in mitigating the negative effects of inflammation to avoid excessive tissue damage.
- Though a qualitative model of acute inflammation, it reproduces several clinically relevant outcomes: a healthy resolution and two death outcomes.

The calibration of a system of differential equations can be quite difficult since the identification of various rate parameters requires specific data in sufficient quantities, which may not be feasible.
Additionally, there is much heterogeneity to account for between patient responses, such as the initiating circumstances, patient co-morbidities and personal characteristics like genetics, age and gender. In spite of promising preliminary results in [START_REF] Bara | Nonlinear state estimation for complex immune responses[END_REF][START_REF] Bara | Parameter estimation for nonlinear immune response model using EM[END_REF][START_REF] Zitelli | Combining robust state estimation with nonlinear model predictive control to regulate the acute inflammatory response to pathogen[END_REF], state estimation and parameter identification of highly nonlinear models may still require more data than can be reasonably collected. These roadblocks hamper the use of model-based control strategies in clinical practice in spite of recent mathematical advances. Here, another route, i.e., model-free control (MFC) and the corresponding "intelligent" feedback controllers [START_REF] Fliess | Model-free control[END_REF], is therefore explored.¹ We briefly introduce the method before discussing its application to the scenario of controlling the inflammatory response to pathogen. We begin by replacing the poorly known global description by the ultra-local model

ẏ = F + αu, (1)

where
- the control and output variables are u and y, respectively;
- the derivation order of y is 1, as in most concrete situations;
- α ∈ R is chosen by the practitioner such that αu and ẏ are of the same magnitude;
- F is estimated via the measurements of u and y;
- F subsumes not only the unknown system structure but also any perturbation.

Remark 1 The following comparison with computer graphics is borrowed from [START_REF] Fliess | Model-free control[END_REF]. To produce an image of a complex curve in space, the equations defining that curve are not actually used; instead, the curve is approximated with short straight line segments.
Equation (1), which might be viewed as an analogue of such a segment, should hence not be considered as a global description but instead as a rough linear approximation.

Remark 2 The estimation of the fundamental quantity F in Equation (1) via the control and output variables u and y will be detailed in Section 2.2. It connects our approach to the data-driven viewpoint which has been adopted in control engineering (see, e.g., [START_REF] Formentin | A comparison of model-based and data-driven controller tuning[END_REF][START_REF] Hou | From model-based control to data-driven control: Survey, classification and perspective[END_REF][START_REF] Roman | Multi-input multi-output system experimental validation of model-free control and virtual reference feedback tuning techniques[END_REF][START_REF] Roman | Data-driven model-free adaptive control tuned by virtual reference feedback tuning[END_REF]) and in studies about inflammation (see, e.g., [START_REF] Azhar | Integrating data-driven and mechanistic models of the inflammatory response in sepsis and trauma[END_REF][START_REF] Brause | Data driven automatic model selection and parameter adaptation -a case study for septic shock[END_REF][START_REF] Radosavljevic | A data-driven acute inflammation therapy[END_REF][START_REF] Vodovotz | Computational modelling of the inflammatory response in trauma, sepsis and wound healing: implications for modelling resilience[END_REF]). Ideally, data associated with the time courses of the inflammatory response variables would be generated by measurements from real patients. In our case, the pro-inflammatory and anti-inflammatory variables of the model (patient) are the quantities we want to track; they therefore define the reference trajectories (available data) for the model-free setup.
Once the quantity F_est is obtained, the loop is closed by an intelligent proportional controller, or iP:

u = −(F_est − ẏ* + K_P e)/α, (2)

where
- F_est is an estimate of F;
- y* is the reference trajectory;
- e = y − y* is the tracking error; and
- K_P is a usual tuning gain.

With a "good" estimate F_est of F, i.e., F − F_est ≈ 0, Equations (1)-(2) yield

ė + K_P e = F − F_est ≈ 0.

¹ This new viewpoint in control engineering has been successfully illustrated in many concrete case studies (see, e.g., the references in [START_REF] Fliess | Model-free control[END_REF], and [START_REF] Abouaïssa | Energy saving for building heating via a simple and efficient model-free control design: First steps with computer simulations[END_REF][START_REF] Abouaïssa | On ramp metering: Towards a better understanding of ALINEA via model-free control[END_REF][START_REF] Agee | Tip trajectory control of a flexible-link manipulator using an intelligent proportional integral (iPI) controller[END_REF][START_REF] Agee | Intelligent proportional-integral (iPI) control of a single link flexible joint manipulator[END_REF][START_REF] Bara | Model-free load control for high penetration of solar photovoltaic generation[END_REF][START_REF] Chand | Non-linear model-free control of flapping wing flying robot using iPID[END_REF][START_REF] Join | A simple and efficient feedback control strategy for wastewater denitrification[END_REF][START_REF] Lafont | A model-free control strategy for an experimental greenhouse with an application to fault accommodation[END_REF][START_REF] Li | Direct power control of DFIG wind turbine systems based on an intelligent proportional-integral sliding mode control[END_REF][START_REF] Madoński | Model-free control of a two-dimensional system based on uncertainty reconstruction and attenuation[END_REF][START_REF] Menhour | An efficient modelfree setting for longitudinal and lateral vehicle control.
Validation through the interconnected pro-SiVIC/RTMaps prototyping platform[END_REF][START_REF] Michel | Commande "sans modèle" pour l'asservissement numérique d'un banc de caractérisation magnétique[END_REF][START_REF] Michel | Model-free based digital control for magnetic measurements[END_REF][START_REF] Mohammadridha | Model free iPID control for glycemia regulation of type-1 diabetes[END_REF][START_REF] De Miras | Active magnetic bearing: A new step for model-free control[END_REF][START_REF] Rodriguez-Fortun | Model-free control of a 3-DOF piezoelectric nanopositioning platform[END_REF][START_REF] Schwalb Moraes | Model-free control of magnetic levitation systems through algebraic derivative estimation[END_REF][START_REF] Tebbani | Model-based versus model-free control designs for improving microalgae growth in a closed photobioreactor: Some preliminary comparisons[END_REF][START_REF] Ticherfatine | Model-free approach based intelligent PD controller for vertical motion reduction in fast ferries[END_REF][START_REF] Wang | ZMP theory-based gait planning and model-free trajectory tracking control of lower limb carrying exoskeleton system[END_REF][START_REF] Wang | Event-driven model-free control in motion control with comparisons[END_REF][START_REF] Wang | Model-free based terminal SMC of quadrotor attitude and position[END_REF][START_REF] Xu | Robustness study on the model-free control and the control with restricted model of a high performance electro-hydraulic system[END_REF][START_REF] Yaseen | Attack-tolerant networked control system: an approach for detection the controller stealthy hijacking attack[END_REF][START_REF] Al-Younes | Robust model-free control applied to a quadrotor UAV[END_REF][START_REF] Zhou | Model-free deadbeat predictive current control of a surfacemounted permanent magnet synchronous motor drive systems[END_REF]). 
Some of the methods have been patented and some have been applied to life sciences [START_REF] Fliess | Dynamic compensation and homeostasis: a feedback control perspective[END_REF][START_REF] Join | A simple and efficient feedback control strategy for wastewater denitrification[END_REF][START_REF] Lafont | A model-free control strategy for an experimental greenhouse with an application to fault accommodation[END_REF][START_REF] Mohammadridha | Model free iPID control for glycemia regulation of type-1 diabetes[END_REF][START_REF] Tebbani | Model-based versus model-free control designs for improving microalgae growth in a closed photobioreactor: Some preliminary comparisons[END_REF]. Thus

e(t) ≈ e(0) exp(−K_P t),

which implies that y(t) ≈ y*(t) as t → +∞ if and only if K_P > 0. In other words, the scheme ensures an excellent tracking of the reference trajectory. This tracking is moreover quite robust with respect to the uncertainties and disturbances which can be numerous in a medical setting such as the one considered here. This robustness feature is explained by the fact that F in Equation (1) encompasses "everything," without trying to distinguish between its different components. In our application, sensorless outputs must be driven in order to correct dysfunctional immune responses of the patient. Here, this difficult problem is solved by assigning suitable reference trajectories to those system variables which can be measured. This feedforward viewpoint is borrowed from the flatness-based control setting [START_REF] Fliess | Flatness and defect of non-linear systems: introductory theory and examples[END_REF] (see also [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF][START_REF] Lévine | Analysis and Control of Nonlinear Systems -A flatness-based approach[END_REF][START_REF] Sira-Ramírez | Differentially Flat Systems[END_REF]).
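To make the closed loop (1)-(2) concrete, here is a minimal discrete-time sketch in Python. The plant below is a toy nonlinear first-order system invented purely for illustration (it is not the inflammation model), and F is estimated by the crude one-step formula F_est = ẏ_measured − αu implied by Equation (1) at the previous sample; the values of α, K_P, the reference y* and the plant itself are all assumptions of this sketch, not values from the paper.

```python
import numpy as np

def simulate_ip(T=10.0, dt=0.01, alpha=1.0, Kp=5.0):
    """Model-free iP control of an unknown toy plant.

    Plant (unknown to the controller): dy/dt = -y + 0.5*sin(y) + u + d(t).
    Controller (2): u = -(F_est - ystar_dot + Kp*e)/alpha, with
    F_est = y_dot_measured - alpha*u from the previous step, as in (1)."""
    n = int(T / dt)
    t = np.linspace(0.0, T, n)
    ystar = 1.0 - np.exp(-t)            # reference trajectory y*
    ystar_dot = np.exp(-t)              # its derivative
    y = np.zeros(n)
    u_prev, y_prev = 0.0, 0.0
    for k in range(1, n):
        y_dot_meas = (y[k-1] - y_prev) / dt       # crude derivative estimate
        F_est = y_dot_meas - alpha * u_prev       # ultra-local model (1)
        e = y[k-1] - ystar[k-1]                   # tracking error
        u = -(F_est - ystar_dot[k-1] + Kp * e) / alpha   # iP controller (2)
        d = 0.2 * np.sin(3.0 * t[k-1])            # unmeasured disturbance
        y_prev = y[k-1]
        # unknown plant, integrated with explicit Euler
        y[k] = y[k-1] + dt * (-y[k-1] + 0.5 * np.sin(y[k-1]) + u + d)
        u_prev = u
    return t, y, ystar

t, y, ystar = simulate_ip()
print("final tracking error:", abs(y[-1] - ystar[-1]))
```

In practice the backward-difference derivative would be replaced by noise-robust integral estimators such as formulas (7)-(8); the point of the sketch is only that the controller needs no model of the plant, since the disturbance and the nonlinearity are absorbed into F_est.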
After justifying model-free control in Section 2, Section 3 presents results from applying the method to a heterogeneous in silico virtual patient population generated in [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF]. The cohort of virtual patients is summarized in Section 3.1. The computer simulations demonstrate the great robustness of the model-free control strategy with respect to noise corruption, as shown in Section 3.4. Concluding remarks in Section 4 discuss some of the potential as well as the remaining challenges of the approach in the setting of controlling complex immune responses. A first draft has already been presented in [START_REF] Bara | Model-free immune therapy: A control approach to acute inflammation[END_REF].

2 Justification of the model-free approach: A brief sketch

Justification of the ultra-local model

We first justify the ultra-local model (1). For notational simplicity, we restrict ourselves to a system with a single control variable u and a single output variable y. Assume that the system is a causal, or non-anticipative, functional; in other words, for any time instant t > 0, let

y(t) = F(u(τ) | 0 ≤ τ ≤ t), (3)

where F depends on the past and present but not the future, on various perturbations, and on initial conditions at t = 0.

Example 1 A representation of rather general nonlinear functionals, also popular in the biological sciences, is provided by a Volterra series (see, e.g., [START_REF] Korenberg | The identification of nonlinear biological systems: Volterra kernel approaches[END_REF]):

y(t) = h_0(t) + ∫_0^t h_1(t, τ)u(τ) dτ + ∫_0^t ∫_0^t h_2(t, τ_2, τ_1)u(τ_2)u(τ_1) dτ_2 dτ_1 + · · · + ∫_0^t · · · ∫_0^t h_ν(t, τ_ν, . . . , τ_1)u(τ_ν) · · · u(τ_1) dτ_ν · · · dτ_1 + · · ·
Solutions of quite arbitrary ordinary differential equations, related to input-output behaviors, may be expressed as a Volterra series (see, e.g., [START_REF] Fliess | An algebraic approach to nonlinear functional expansions[END_REF]). Let
- I ⊂ [0, +∞[ be a compact subset and
- C ⊂ C^0(I) be a compact subset, where C^0(I) is the space of continuous functions I → R, equipped with the topology of uniform convergence.

Consider the Banach R-algebra S of continuous causal functionals (3) I × C → R. If a subalgebra contains a non-zero constant element and separates points in I × C, then it is dense in S according to the classic Stone-Weierstraß theorem (see, e.g., [START_REF] Rudin | Functional Analysis[END_REF]). Let A ⊂ S be the set of functionals which satisfy an algebraic differential equation of the form

E(y, ẏ, . . . , y^(a), u, u̇, . . . , u^(b)) = 0, (4)

where E is a polynomial function of its arguments with real coefficients. It has been proven in [START_REF] Fliess | Model-free control[END_REF] that the conditions of the Stone-Weierstraß theorem are then satisfied and, therefore, A is dense in S. Assume therefore that our system is "well" approximated by a system defined by Equation (4). Let ν be an integer, 1 ≤ ν ≤ a, such that ∂E/∂y^(ν) ≢ 0. The implicit function theorem yields locally

y^(ν) = 𝓔(y, ẏ, . . . , y^(ν−1), y^(ν+1), . . . , y^(a), u, u̇, . . . , u^(b)).

This may be rewritten as

y^(ν) = F + αu. (5)

In most concrete situations, such as the one here, the derivation order ν = 1, as in Equation (1), is enough. See [START_REF] Fliess | Model-free control[END_REF] for an explanation and for some examples where ν = 2.

Closing the loop

If ν = 1 in Equation (5), we are back to Equation (1). The loop is closed with the intelligent proportional controller (2).
Estimation of F

Any rather general function [a, b] → R, a, b ∈ R, a < b, may be approximated by a step function F_approx, i.e., a piecewise constant function (see, e.g., [START_REF] Rudin | Real and Complex Analysis[END_REF]). Therefore, for estimating a suitable approximation of F in Equation (5), the question reduces to the identification of the constant parameter Φ in

ẏ = Φ + αu. (6)

Here, recent real-time algebraic estimation/identification techniques are employed ([START_REF] Fliess | Closed-loop parametric identification for continuous-time linear systems via new algebraic techniques[END_REF][START_REF] Sira-Ramírez | Algebraic Identification and Estimation Methods in Feedback Control Systems[END_REF]). With respect to the well-known notations of operational calculus (see, e.g., [START_REF] Erdélyi | Operational Calculus and Generalized Functions[END_REF][START_REF] Yosida | Operational Calculus -A Theory of Hyperfunctions[END_REF]), which are identical to those of the classic Laplace transform taught everywhere in engineering (see, e.g., [START_REF] Doetsch | Introduction to the Theory and Application of the Laplace Transformation (translated from the German)[END_REF], [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF]), Equation (6) yields

sY = Φ/s + αU + y(0),

where U and Y are the operational analogues of u and y. In the literature, U and Y are often called the Laplace transforms of u and y, and s is the Laplace variable (see, e.g., [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF]). We eliminate the initial condition y(0) by left-multiplying both sides by d/ds or, in other words, by differentiating both sides with respect to s:

Y + s dY/ds = −Φ/s² + α dU/ds.

The product by s corresponds in the time domain to the derivation with respect to time. Such a derivation is known to be most sensitive to noise corruptions.
Therefore, multiply both sides on the left by s^(−2) in order to replace derivations by integrations with respect to time, which are quite robust with respect to noise (see [START_REF] Fliess | Analyse non standard du bruit[END_REF] for more explanations). Recall that d^ι/ds^ι, where ι ≥ 1 is an integer, corresponds in the time domain to the multiplication by (−t)^ι. Then

F_est(t) = −(6/τ³) ∫_{t−τ}^{t} [(τ − 2σ)y(σ) + ασ(τ − σ)u(σ)] dσ, (7)

where τ > 0 might be quite small. This integral may of course be replaced in practice by a classic digital filter. There are other formulas one can use for obtaining an estimate of F. For instance, closing the loop with the iP (2) yields

F_est(t) = (1/τ) ∫_{t−τ}^{t} (ẏ* − αu − K_P e) dσ. (8)

Remark 3 Measurement devices are always corrupted by various noise sources (see, e.g., [START_REF] Tagawa | Biomedical Sensors and Instruments[END_REF]). The noise is usually described via probabilistic/statistical laws that are difficult to write down in most concrete situations. Following [START_REF] Fliess | Analyse non standard du bruit[END_REF], where nonstandard analysis is used, the noise is related to quick fluctuations around zero [START_REF] Cartier | Integration over finite sets[END_REF]. Such a fluctuation is a Lebesgue-integrable real-valued time function 𝓕 which is characterized by the following property: the integral of 𝓕 over any finite time interval, ∫_{τ_i}^{τ_f} 𝓕(τ) dτ, is infinitesimal. Therefore, noise is attenuated thanks to the integrals in formulas (7)-(8).

3 Computer Simulation

Virtual patients

In [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF], a cohort of virtual patients was defined by using the ODE model of [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I.
Derivation of model and analysis of antiinflammation[END_REF] for the underlying immune response dynamics for each patient, with each patient differing in the value of six of the rate parameters and two of the initial conditions. This same cohort was used in this study as well. The ODE model describes an abstract dynamical representation of an acute inflammatory response to pathogenic infection:

Ṗ(t) = k_pg P(t)(1 − P(t)/p_∞) − k_pm s_m P(t)/(µ_m + k_mp P(t)) − k_pn f(N(t))P(t), (9)

Ṅ(t) = s_nr R(P(t), N(t), D(t))/(µ_nr + R(P(t), N(t), D(t))) − µ_n N(t) + u_p(t), (10)

Ḋ(t) = k_dn f(N(t))^6/(x_dn^6 + f(N(t))^6) − µ_d D(t), (11)

Ċ_a(t) = s_c + k_cn f(N(t) + k_cnd D(t))/(1 + f(N(t) + k_cnd D(t))) − µ_c C_a(t) + u_a(t), (12)

where R(P, N, D) = f(k_np P(t) + k_nn N(t) + k_nd D(t)) and f(x) = x/(1 + (C_a(t)/c_∞)^2).

- Equation (9) represents the evolution of the bacterial pathogen population P that causes the inflammation.
- Equation (10) governs the dynamics of N, the concentration of a collection of early pro-inflammatory mediators such as activated phagocytes and the pro-inflammatory cytokines they produce.
- Equation (11) corresponds to tissue damage (D), which helps to determine response outcomes.
- Equation (12) describes the evolution of the concentration of a collection of anti-inflammatory mediators, C_a.

As explained in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF], f(x) represents a Hill function that models the impact of activated phagocytes and their by-products (N) on the creation of damaged tissue. With this modeling construct, tissue damage (D) increases in a switch-like sigmoidal fashion as N increases, such that it takes sufficiently high levels of N to incite a moderate increase in damage and the increase in damage saturates with sufficiently elevated and sustained N levels.
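For readers who want to experiment with Equations (9)-(12), the following sketch integrates the model with SciPy. The parameter values are the reference set reported in Table I of the Reynolds et al. paper; they are copied here from the literature and should, together with the chosen initial pathogen load, be treated as assumptions of this illustration. The therapy inputs u_p and u_a are set to zero (the uncontrolled case).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reference parameter values (Table I of Reynolds et al.); copied from the
# literature and therefore an assumption of this sketch.
par = dict(k_pg=0.6, p_inf=20.0, k_pm=0.6, s_m=0.005, mu_m=0.002, k_mp=0.01,
           k_pn=1.8, s_nr=0.08, mu_nr=0.12, mu_n=0.05, k_np=0.1, k_nn=0.01,
           k_nd=0.02, k_dn=0.35, x_dn=0.06, mu_d=0.02,
           s_c=0.0125, k_cn=0.04, k_cnd=48.0, mu_c=0.1, c_inf=0.28)

def rhs(t, x, p, u_p=0.0, u_a=0.0):
    """Right-hand side of Eqs. (9)-(12); u_p, u_a are the therapy inputs."""
    P, N, D, Ca = x
    f = lambda v: v / (1.0 + (Ca / p['c_inf'])**2)   # anti-inflammatory inhibition
    R = f(p['k_np']*P + p['k_nn']*N + p['k_nd']*D)
    dP = (p['k_pg']*P*(1.0 - P/p['p_inf'])
          - p['k_pm']*p['s_m']*P/(p['mu_m'] + p['k_mp']*P)
          - p['k_pn']*f(N)*P)                                     # Eq. (9)
    dN = p['s_nr']*R/(p['mu_nr'] + R) - p['mu_n']*N + u_p          # Eq. (10)
    dD = p['k_dn']*f(N)**6/(p['x_dn']**6 + f(N)**6) - p['mu_d']*D  # Eq. (11)
    dCa = (p['s_c'] + p['k_cn']*f(N + p['k_cnd']*D)/(1.0 + f(N + p['k_cnd']*D))
           - p['mu_c']*Ca + u_a)                                   # Eq. (12)
    return [dP, dN, dD, dCa]

# Uncontrolled response to a small initial pathogen load, starting from the
# healthy anti-inflammatory background level Ca = s_c/mu_c = 0.125.
x0 = [0.5, 0.0, 0.0, 0.125]
sol = solve_ivp(rhs, (0.0, 200.0), x0, args=(par,), rtol=1e-8, atol=1e-10)
print("state at t = 200 h:", sol.y[:, -1])
```

A quick sanity check on these values: the healthy background level C_a = s_c/µ_c = 0.0125/0.1 = 0.125 agrees with the reference value for C_a(0) used for the virtual patients.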
The Hill coefficient (exponent) 6 was chosen to model this aspect; it also ensured that the healthy equilibrium had a reasonable basin of attraction for the N/D subsystem. For the reference set of parameter values, which is given in Table I of [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF], the above model possesses three (positive) stable equilibria which can be qualitatively interpreted as the following clinical outcomes:
- Healthy outcome: equilibrium in which P = N = D = 0 and C_a is at a background level.
- Aseptic death outcome: equilibrium in which all mediators N, C_a, and D are at elevated levels, while pathogen, P, has been eliminated.
- Septic death outcome: equilibrium in which all mediators N, C_a, and D together with the pathogen P are at elevated levels (higher than in the aseptic death equilibrium).

Fig. 1 Diagram of the mediators of the acute inflammatory response to pathogen as abstractly modeled in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF]. Solid lines with arrow heads and dashed lines with nodes/circular heads represent up-regulation and inhibition, respectively. P: replicating pathogen, N: early pro-inflammatory immune mediators, D: marker of tissue damage/dysfunction caused by the inflammatory response, C_a: inhibitory anti-inflammatory mediators, u_a and u_p: time-varying input controls for the anti- and pro-inflammatory therapy, respectively.

Note that the model was formulated to represent a most abstract form of the complex processes involved in the acute inflammatory response. Hence, as explained in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I.
Derivation of model and analysis of antiinflammation[END_REF] the variables N and C_a represent multiple mediators with similar inflammatory characteristics, and D is an abstract representation of collateral tissue damage caused by inflammatory by-products. This abstraction reduces the description to four essential variables, which allows for tractable mathematical analysis. Therefore, the units of these variables are arbitrary N-units, C_a-units, and D-units, since they represent various types of cells, and they thus qualitatively, rather than quantitatively, describe the response of the inflammatory mediators and their by-products. Pathogen, P, units are more closely related to numbers of pathogens or colony forming units (CFU), but abstract P-units are simply used as well, and this population is scaled by 10^6/cc. More details about the model development can be found in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF].

The diagram in Figure 1 characterizes the different interactions between the states of the inflammatory model. A solid line with an arrow head indicates an up-regulation, whereas a dashed line with a circular head indicates inhibition or down-regulation of a process. For instance, early pro-inflammatory mediators, N, respond to the presence of pathogen, P, by initiating self-recruitment of additional inflammatory mediators, and N is therefore up-regulated by the interaction with P in an attempt to efficiently eliminate the pathogen. The self up-regulation that exists for P is due to replication. Furthermore, N inhibits P by eliminating it at some rate. The inflammation caused by N, however, results in tissue damage, D, which can provide a positive feedback into the early inflammatory mediators depending on their intensity.
To balance this, anti-inflammatory mediators, such as cortisol, IL-10, and TGF-β, can mitigate the inflammation and its harmful effects by suppressing the response by N and the effects of D in various ways. The C_a variable maintains a small positive background level at equilibrium in the absence of pathogen. Following the setup used in [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF], the reference parameter value for C_a(0) is set to 0.125 and virtual patients have a value that is ±25% of the reference value. In addition, the values of six other parameters as well as the initial condition for P are set to have differing (positive) values from the reference set. In particular, the values of these parameters and initial conditions were generated from a uniform distribution on defined parameter ranges or on a range that was ±25% of the (mean) reference value. The remaining parameters retained the same values as those in the reference set. These differences distinguish one virtual patient from another. We use the set of 1000 virtual patients generated by [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF] in the way described above to evaluate the performance of the proposed control strategy. The set of patients was classified with respect to their outcome after an open-loop simulation run long enough to numerically determine the outcome without ambiguity. Of the 1000 virtual patients, 369 did not resolve the infection and/or inflammatory response on their own and succumbed to a septic (141) or aseptic (228) death outcome. On the other hand, 631 exhibited a healthy outcome, of which there were two distinct subsets: 1.
379 of the 631 healthy virtual patients did not necessitate treatment intervention because their inflammatory levels did not exceed a specified threshold (defined to be N(t) ≤ 0.05, set in [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF]). These virtual patients were excluded from receiving treatment and from our in silico study.

2. The remaining 252 of these virtual patients did surpass the specified threshold, N(t) ≥ 0.05, and are included in the cohort that receives treatment. However, these virtual patients would be able to resolve to health on their own in the absence of treatment intervention. An important issue for these particular virtual patients is not to harm them with treatment.

Thus, 621 of the 1000 generated virtual patients receive treatment via our control design. Once a suitable reference trajectory is provided for the states with sensors, the derivation of the control part is straightforward, which we now discuss.

Control design

As in previous control studies using this model, we assume that the state components P and D in Equations (9) and (11) are not measurable, whereas the states N and C_a in Equations (10) and (12), respectively, are easily measured and influenced by the control variables u_p and u_a, respectively. We then introduce two equations of type (1):

\dot{N} = F_1 + \alpha_p u_p(t)   (13)

\dot{C}_a = F_2 + \alpha_a u_a(t).   (14)

We emphasize that, like in [START_REF] Lafont | A model-free control strategy for an experimental greenhouse with an application to fault accommodation[END_REF], the above two ultra-local systems may be "decoupled" so that they can be considered as monovariable systems. It should nevertheless be clear from a purely mathematical standpoint that F_1 (resp. F_2) is not necessarily independent of u_a (resp. u_p). The two corresponding iPs (2) then read

u_p = -\frac{F_{1,est} - \dot{N}^* + K_{P1} e_p}{\alpha_p}   (15)

u_a = -\frac{F_{2,est} - \dot{C}_a^* + K_{P2} e_a}{\alpha_a}.   (16)

The tracking errors are defined by e_p = N - N^* and e_a = C_a - C_a^*, where N^* and C_a^* represent the reference trajectories corresponding to the pro- and anti-inflammatory measurements N and C_a, respectively. Knowing that F encapsulates all the model uncertainties and disturbances, as already explained in the introduction, a good estimate F_est provides exponential local stability of the closed-loop system. The following Algorithm 1 summarizes the functioning of the proposed methodology for immune regulation:

Algorithm 1 Model-free Control
Step 1: Initialization, k = 0: set u_p(0) = 0, define the reference trajectory N^*, initialize K_P and α, fix the sampling time T_e;
For 1 ≤ k ≤ T_f:
  Step 2: Get measurements of N and u_p;
  Step 3: Estimation of F: estimate F according to a discrete implementation of equation (7);
  Step 4: Close the loop according to equation (15) and return to Step 2.

Note that the same design procedure can lead to the derivation of the control u_a; this time we associate the measurement C_a with the control u_a (see equation (14)). The interesting fact about this approach is that we do not need to control the state variables P and D, which are not measurable. Solving the tracking problem, consisting of closely following the reference trajectories of N and C_a, is enough to drive the pathogen and damage to values in the basin of attraction of the healthy equilibrium, where they would converge to this state as time progressed, thereby 'curing' the patient.

Results without noise corruption

We first examine the performance of the control approach with respect to the set of virtual patients and their individual corresponding initial conditions. The robustness of the control law with the addition of corrupting measurement noise will be discussed afterward.
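The estimate/control loop of Algorithm 1 above can be sketched on a scalar ultra-local system. Everything in the snippet below (the toy plant, the gains, the constant reference) is an invented illustration; only the structure is taken from the text: the backward-difference estimate of F and the iP law of equation (15).

```python
# Toy illustration of the iP loop of Algorithm 1 on a scalar ultra-local
# system dy/dt = F + alpha*u; the plant and all numbers are assumptions.
import numpy as np

alpha, Kp, dt = 1.0, 3.0, 0.01
y_ref, ydot_ref = 1.0, 0.0           # constant reference trajectory

def plant(y, u, t):
    # "Unknown" dynamics lumped into F: here F = -2y + 5 sin t
    return -2.0*y + 5.0*np.sin(t) + alpha*u

y, y_prev, u_prev = 0.0, 0.0, 0.0
for k in range(3000):
    t = k*dt
    F_est = (y - y_prev)/dt - alpha*u_prev               # discrete estimate of F
    u = -(F_est - ydot_ref + Kp*(y - y_ref))/alpha       # iP law, cf. eq. (15)
    y_prev, u_prev = y, u
    y = y + dt*plant(y, u, t)                            # Euler step of the true plant
```

After the transient, y tracks the reference despite the controller never knowing the plant; the residual error comes from the one-step lag in the F estimate.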
In what follows, the reference trajectories, which are inspired by [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF], correspond to the measurable states N and C_a. They will be highlighted with dashed lines. The simulations for all patients were performed under the following conditions:

- a sampling time of 1 minute,
- α_p = 1, α_a = 10 in Equations (13)-(14),
- K_P1 = K_P2 = 0.5 in Equations (15)-(16), and
- 250 hours of simulation time to numerically determine outcomes without ambiguity; though we stress that our control objectives were reached in less than 250 hours.

The use of the same reference trajectory for all simulations emphasizes the robustness of the proposed control approach with respect to the variability among virtual patient parameter values and initial conditions. Figure 2 represents the successful outcomes related to 92 out of the 141 septic patients who were cured when applying the control given by Figure 3.

Fig. 3 Time evolution of the control u_p and u_a for the set of septic patients on which the strategy was implemented. The zoomed-in plot for u_p provides more details on the duration of the control dose, where the x-axis is shown for only two hours since it is zero for the remaining time.

The patients that converged to the septic death equilibrium (as explained in Section 3.1) are obviously the ones who were not cured with the approach. The criterion to classify successful therapeutic control is to determine whether the levels of pathogen (P) and damage (D) are reduced to very low values (< 0.2). All virtual patients not meeting this criterion were classified either as septic death outcomes if, in addition, the pathogen state did not approach zero, or as aseptic death outcomes otherwise. These two latter cases correspond to virtual patients not saved by the applied dosage.
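The classification rule just stated can be written down directly. The function below is our own small codification of it; the 0.2 threshold is from the text, while the function name and the use of the same threshold as the "approach zero" test for the pathogen are our reading.

```python
# Classify a simulated end state per the stated criterion (< 0.2 on P and D);
# the function name and the zero-test tolerance are our own choices.
def classify_outcome(P_end, D_end, tol=0.2):
    if P_end < tol and D_end < tol:
        return "healthy"
    # Not cured: septic if the pathogen did not approach zero, aseptic otherwise.
    return "septic" if P_end >= tol else "aseptic"
```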
A closer look during the first hours in Figure 3 shows that the amplitude of the control variables is the main difference between the different dosing profiles. Similar remarks apply for u_a. Analyzing u_a shows that it is applied for a longer period of time than u_p, but with smaller amplitude. This was not observed in the optimal control setting of [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF], where the dosing strategy ended at 30 hrs. It is thus interesting to see if purposefully restraining the dose quantity would have a sizable impact on the result. Surprisingly, however, we observe that forcing the anti-inflammatory control input, u_a, to be zero after 28 hours does not affect the number of cured patients in this current study. This is an important insight to have in order to prevent unnecessary and lengthy dosing protocols. Whereas the maximum duration of the derived optimal control doses in [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF] is 30 hours, it is much longer in the model-free control simulations. An extended duration in the model-free setting is the price to pay. On the other hand, the model-free control tracks a single given reference trajectory for all the virtual patients, whereas the optimal control strategy [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF] strives to infer the trajectories from a mathematical model that is required to be a 'good' model for all the virtual patients. Table 1 displays the results from our study for the 621 patients that qualified for therapy because of sufficiently elevated inflammation. The first column displays the outcomes in the absence of intervention, labeled the placebo outcome.
Without intervention, 40% (252) will resolve to a healthy outcome, while the remaining 60% (369) fall into one of the two unhealthy outcome categories. We use the total of 369 unhealthy placebo outcomes to determine the percentage of those that the treatment rescued. Likewise, we use the total of 252 healthy placebo outcomes to determine the percentage of those harmed (i.e. they would have resolved to healthy without treatment but converged to one of the death states instead after receiving treatment). Figures 2 and 3 display the time courses for the sensorless states, P and D. These were guided via the reference trajectories for the states with sensors, N and C_a, along with the corresponding control input. The results are reminiscent of [START_REF] Bara | Optimal control of an inflammatory immune response model[END_REF][START_REF] Bara | Immune Therapy using optimal control with L 1 type objective[END_REF][START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF]: first apply a large dose of pro-inflammatory therapy, u_p, followed by an anti-inflammatory dose, u_a. The latter attempts to prevent excessive tissue damage resulting from the additional pro-inflammatory signals from the first dose. The information we can derive from Table 1 is that the control strategy obviously improves the percentage of cured patients when compared to the placebo case. Our therapy resulted in a healthy outcome for 85.66% of the total patient population (621) and rescued 75.88% of the combined septic and aseptic population (369). Additionally, 0% of the healthy patients are harmed. Figure 4 shows the evolution of the unobservable states P and D together with the measured states corresponding to N and C_a for a set of 228 aseptic patients. Of these, 188 were able to recover from an aseptic placebo outcome, when the generated controls in Figure 5 are applied, driving the pathogen P and the level of damage D to zero.
Again, one can observe from Figure 4 that some trajectories diverge to the unhealthy aseptic region, where the pathogen is known to have a zero value but the other state variables remain elevated. Overall, the simulation results with respect to successful control of the number of outcomes for both the septic and aseptic placebo outcomes are very encouraging when one considers that only a unique reference trajectory was used for the heterogeneous population. The absence of perfect tracking should not be seen as a weakness of the model-free control approach, since the control objective has been attained in most scenarios. One of the important features of the presented data-driven control approach is the necessity of a suitable choice for the reference trajectories. To be more explicit, consider a naive choice of the reference trajectories: a trajectory exponentially decaying to zero for N^* and another trajectory exponentially decaying to the C_a steady-state value 0.125. This would not satisfy the control objective, since the generated control doses are negative and the level of pathogen will converge to its maximum allowable value. The reason for this behavior is that the iP controller is only concerned with reducing the tracking error without imposing any constraints on the control inputs. Constraints on the control are not implemented simply because the model-free approach is not formulated as an optimization problem. 2 However, choosing a reference trajectory that accounts for the correct time-varying dynamics of the inflammatory response will eventually generate the correct doses. That is, if we chose, for example, a reference trajectory with a smaller amplitude or with slower rising dynamics, other than what is presented in this work, then it is highly probable that the patient would not converge to the healthy state with respect to the generated control doses received.
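To make the shape requirement concrete, one hedged possibility is a hump-shaped parametric family for the N reference that first rises and then decays, instead of a monotone exponential decay. The functional form and numbers below are our own illustration, not the trajectory actually used in the simulations.

```python
# One plausible hump-shaped reference family for N*(t); the functional form
# and the parameters are illustrative assumptions, not the paper's trajectory.
import numpy as np

def n_ref(t, peak=0.5, t_peak=10.0):
    # Gamma-like pulse: rises to `peak` at t = t_peak, then decays toward zero.
    return peak * (t / t_peak) * np.exp(1.0 - t / t_peak)

t = np.linspace(0.0, 100.0, 2001)
N_star = n_ref(t)
```

The two shape parameters let one probe exactly the sensitivities discussed above: a smaller `peak` or a slower rise (larger `t_peak`) yields a qualitatively different dosing profile.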
Similar remarks can be made about Figure 5 as discussed previously for the set of placebo septic patients. It is not surprising to notice a very close pattern with respect to the generated control doses for both sets of septic and aseptic patients. This can be explained in part by a common control objective, consisting of tracking the same reference trajectory, and also by what has been discussed before regarding how the inflammatory immune system needs to react in order to eliminate the pathogen without incurring significant damage.

Results with noise corruption

Consider the effects of corrupting measurement noise on our control problem. Here, white Gaussian noise is taken into account, as in many academic studies (see, e.g., [START_REF] Blanc-Lapierre | Théorie des fonctions aléatoires -Applications à divers phénomènes de fluctuation[END_REF][START_REF] Rabiner | Theory and Application of Digital Signal Processing[END_REF]). Otherwise, the same setting as in the previous section is kept. Figures 6 and 7 display the states and the corresponding controls for the set of 141 septic placebo patients, of which 90 were cured. The addition of measurement noise with a standard deviation equal to 10^-3 only changes the outcome for two of the septic patients, when compared to the initial simulations where no noise was included. However, for the aseptic set of patients, there is a difference of 16 additional patients that did not survive when measurement noise is considered.

Remark 4 For the model-free simulation with measurement noise, there are mainly two important remarks to make with regard to the discussion of the previous section. First, for the case of septic patients, restraining the controls u_p and u_a to be zero after 2 hours and 28 hours, respectively, will not considerably affect the number of cured patients, since 90 patients were cured. One would fail to obtain a similar result when altering the control in the same way for the aseptic case.
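A side note on why the noisy case is harder: a finite-difference estimate of F divides the measurement noise by the sampling step, so white noise of standard deviation σ enters the estimate with standard deviation σ√2/T_e. The check below is our own illustration, with an assumed 10^-3 noise level and a one-minute step expressed in hours.

```python
# Illustration (our own) of noise amplification in a finite-difference
# estimate: differencing white noise of std sigma gives std sigma*sqrt(2)/dt.
import numpy as np

rng = np.random.default_rng(0)
sigma, dt = 1e-3, 1.0/60.0           # assumed noise level; 1-minute step in hours
noise = rng.normal(0.0, sigma, 100_000)
deriv_noise = np.diff(noise) / dt    # error injected into (y_k - y_{k-1})/dt
ratio = deriv_noise.std() / sigma    # empirically close to sqrt(2)/dt
```

This is one reason smoothing or low-pass filtering of the measurements is commonly paired with such estimators.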
Although not shown here, a decrease of around 45 patients was observed when compared to the 172 who were cured without restraining the control inputs.

Concluding remarks

In this study we propose a new data-driven control approach in order to appropriately regulate the state of an inflammatory immune response in the presence of pathogenic infection. The performance of the proposed control strategy is investigated in the context of a heterogeneous model-based virtual-patient population of 621 patients with model rate parameter variability. The results of the model-free strategy presented here in the presence of measurement noise are also explored and discussed. The robustness of the approach to parameter variability and noise disturbances is seen in the fact that a single reference trajectory was used to inform the approach about desirable inflammatory dynamics, and from this, the individual dosing strategies found largely produced healthy outcomes. The downside of the proposed control approach to this specific application is the necessity to apply the control for a longer period of time, although with small doses. However, we have seen that artificially restricting this small dose from being provided does not affect the outcome of the states when no measurement noise is used; though it did in the scenarios with measurement noise. We want to emphasize the importance of a suitable choice for the reference trajectory, and further studies may provide better insights in this direction. Past successes of the model-free control feedback approach in other realistic case studies should certainly be viewed as encouraging for the future development of our approach to the treatment of inflammation. Additionally, the model-free control approach seems to be both theoretically and practically simpler when compared to model-based control designs.
This newer viewpoint for control problems in biomedicine needs to be further analyzed in order to confirm its applicability in these complex dynamic systems where the ability to realistically obtain frequent measurement information is limited.

Fig. 2 Dashed (--) curves in the panels for the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed loop state responses for the set of 141 septic patients, of which 92 resolved to the healthy outcome.

Fig. 4 Dashed (--) curves in the panels for the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed loop state responses for the set of 228 aseptic placebo patients, of which 188 were cured.

Fig. 5 Time evolution of the control inputs u_p and u_a for the set of aseptic patients shown in Figure 4. The zoomed-in plot for u_p provides a better perspective on the duration of the control dose, where the x-axis is shown for two hours only since it is zero afterward.

Fig. 6 Dashed (--) curves in the panels for the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed loop state responses for the set of 141 septic placebo patients, of which 90 were cured. Note that the measurements N and C_a were corrupted with Gaussian noise.

Fig. 7 Time evolution of the control u_p and u_a for the set of septic placebo patients when the measurements N and C_a were corrupted with Gaussian noise. The zoomed-in plot for u_p provides more detail on the duration of the control dose, with an x-axis shown only for two hours since the doses are zero afterward.

Fig. 8 Dashed (--) curves in the panels showing the time courses of the variables N and C_a denote the reference trajectories used in the simulation.
The various colored curves display the closed loop state responses for the set of 228 aseptic placebo patients, 172 of which were cured. Note that the measurements N and C_a were corrupted with Gaussian noise.

Fig. 9 Time evolution of the control u_p and u_a for the set of 228 aseptic placebo patients when the measurements N and C_a were corrupted with Gaussian noise. The zoomed-in plot for u_p provides more details on the duration of the control dose, with an x-axis shown only for two hours since the doses are zero afterward.

Table 1 Results of the model-free immune therapy strategy without measurement noise compared to the placebo outcomes.

Therapy Type:                      Placebo      Model-free control therapy
Percentage Healthy:                40% (252)    85.66% (518)
Percentage Aseptic:                37% (228)    6.4% (40)
Percentage Septic:                 23% (141)    7.8% (49)
Percentage Harmed (out of 252):    n/a          0% (0/252)
Percentage Rescued (out of 369):   n/a          75.88% (280/369)

Table 2 Results of the model-free immune therapy strategy with measurement noise compared to the placebo outcomes.

Therapy Type:                      Placebo      Model-free control therapy
Percentage Healthy:                40% (252)    82.76% (514)
Percentage Aseptic:                37% (228)    9.02% (56)
Percentage Septic:                 23% (141)    8.21% (51)
Percentage Harmed (out of 252):    n/a          0% (0/252)
Percentage Rescued (out of 369):   n/a          71% (262/369)

Allowing the control to be only positive semidefinite will result in a zero control all the time.

Work partially supported by the NSF-DMS Award 1122462 and a Fulbright Scholarship.
2010
https://shs.hal.science/halshs-00175675/file/span.mood.pdf
Brenda Laca

Introduction

This paper is mainly concerned with the uses of the subjunctive in Modern Spanish (section 3). 1 Section 2 gives a brief sketch of those aspects of the temporal-aspectual system of Spanish that constitute a necessary background for the interpretation of subjunctive forms. Section 4 briefly describes the conditional, which exhibits very close links to some subjunctive forms. The imperative mood is discussed in the subsection devoted to the subjunctive in root contexts (3.3.4).

The temporal-aspectual system of Spanish

The neo-Reichenbachian system proposed by Demirdache & Uribe-Etxeberria (2007) proves particularly useful for representing the tense-aspect system of Spanish. In this system, tense is modelled as a relation between the time of evaluation (Ast-T, a direct descendant of the Reichenbachian R understood as an interval) and a highest anchor, which is normally the time of speech (Utt-T) in matrix contexts. Possible relations are anteriority (anchor after anchored), inclusion or coincidence, and posteriority (anchor before anchored). These relations are replicated for aspect, which expresses a relation between the time of the event (Ev-T) and Ast-T. The analysis I propose for Spanish is summarized in Table 1. Tenses are illustrated with the 1st Pers. Sing. of the verb cantar 'sing', aspects with aspectualized infinitival forms.

Table 1: Tense and grammatical aspect in Spanish

I assume that aspect is not expressed by simple tenses in Spanish, with the notable exception of the preterite, which is a perfective tense requiring that Ast-T includes Ev-T [START_REF] Laca | Périphrases aspectuelles et temps grammatical dans les langues romanes[END_REF]. All other simple forms leave the relation between Ast-T and Ev-T unspecified and are in this sense "aspectually neutral" ([START_REF] Smith | The parameter of aspect[END_REF][START_REF] Reyle | Ups and downs in the theory of temporal reference[END_REF], Schaden 2007: Chap. 3).
"Aspectually neutral" forms are not totally unconstrained, but whatever preferences they exhibit result (a) from polarisation effects due to the existence of an aspectually marked competing form -thus, an imperfect will strongly prefer imperfective interpretations in the contexts in which it contrasts with the preterite, a simple future will prefer perfective interpretations by contrast with a progressive future [START_REF] Laca | Périphrases aspectuelles et temps grammatical dans les langues romanes[END_REF]; (b) from the temporal structure of the eventuality description, according to a very general pattern which essentially excludes imperfective interpretations of bona-fide telic descriptions (Demirdache & Uribe-Etxeberria 2007). Deictic and anaphoric (or zero) tenses are distinguished by supposing that the latter do not have Utt-T, but an interval not identical with Utt-T, dubbed here Tx, as anchor. The notion of anaphoric tense used here is very restrictive, in that this anchor can only be provided by an embedding predicate of propositional attitude: in the sense used here, an anaphoric tense can only appear in reported speech or reported thought contexts. The introduction of Tx in the system 2 is designed to provide a unified interpretation of forms exhibiting imperfect morphology (the imperfect itself, as well as the pluperfect and the conditional) and to solve the well known problem posed by perfect conditional forms (habría cantado 'would have sung') without assuming a third layer of temporal relations next to Tense and Aspect. It can also prove useful in accounting for the fact that such forms consistently develop counterfactual uses. The splitting of the imperfect into two uses, a bona fide past tense and an anaphoric "present of the past", is justified by the behavior of the imperfect with modal verbs -a point that cannot be developed here. 
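The relational analysis just described can be recorded in a small data structure. The encoding below is our own illustration of the scheme (anchor, tense relation, aspect relation), with None marking the aspectually neutral case; the selection of tenses is indicative rather than a reproduction of Table 1.

```python
# Toy encoding (our own) of the neo-Reichenbachian analysis sketched above:
# tense = relation(anchor, Ast-T); aspect = relation(Ast-T, Ev-T); None = neutral.
TENSES = {
    "present":               {"anchor": "Utt-T", "tense": "coincide",  "aspect": None},
    "preterite":             {"anchor": "Utt-T", "tense": "anterior",  "aspect": "Ast-T includes Ev-T"},
    "imperfect (deictic)":   {"anchor": "Utt-T", "tense": "anterior",  "aspect": None},
    "imperfect (anaphoric)": {"anchor": "Tx",    "tense": "coincide",  "aspect": None},
    "future":                {"anchor": "Utt-T", "tense": "posterior", "aspect": None},
}

def is_perfective(tense):
    # Only the preterite fixes the Ast-T/Ev-T relation among the simple forms.
    return TENSES[tense]["aspect"] == "Ast-T includes Ev-T"
```

The split of the imperfect into a deictic past and an anaphoric "present of the past" is encoded here as two entries differing only in their anchor.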
Aspect is explicitly expressed in Spanish by a set of periphrastic combinations exhibiting a characteristic behavior [START_REF] Laca | Périphrases aspectuelles et temps grammatical dans les langues romanes[END_REF]. Periphrastic combinations formed with haber + PP are uniformly treated as compound or perfect tenses (perfect, pluperfect, future and conditional perfect) in the Spanish grammatical tradition, and carry the main bulk of the expression of secondary anteriority relations. The Spanish progressive shows a very similar distribution to that of the English progressive, and the prospective closely parallels the English be-going-to construction.

The subjunctive

Subjunctive morphology

In Modern Spanish, the subjunctive exhibits two simple forms, the present and the imperfect, as well as two compound forms, the perfect and the pluperfect. The present is built on the present stem of the verb by a change of the thematic vowel, a > e for verbs of the first conjugation class and e/i > a for verbs of the second and third classes. With the exception of a handful of irregular cases, the stem normally appears in the form it takes in the 1st pers. sing. present indicative. The imperfect is built on the preterite/perfect stem (the one of the preterite indicative), and exhibits the peculiarity of having two distinct markers, -ra- and -se-, which are traditionally held to be in allomorphic variation. Person marking corresponds to the general pattern of the language, with -∅ for 1st and 3rd pers. sing., -s for 2nd pers. sing., -mos for 1st pers. pl., -is for 2nd pers. pl., and -n for 3rd pers. pl. [START_REF] Boyé | The structure of allomorphy in Spanish verbal inflection[END_REF]. Compound forms are built with the past participle and the auxiliary haber, which appears in the present subjunctive in the formation of the perfect (haya cantado), and in the imperfect subjunctive in the pluperfect (hubiera/ hubiese cantado).
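The thematic-vowel rule for regular verbs lends itself to a mechanical illustration. The toy function below handles regular verbs only, ignores the irregular stems just mentioned, and adds the orthographic accent of the 2nd person plural; it is a sketch, not a full conjugator.

```python
# Toy present-subjunctive generator for REGULAR Spanish verbs, following the
# thematic-vowel change (a > e for -ar; e/i > a for -er/-ir) and the person
# markers -∅, -s, -∅, -mos, -is, -n described above. Irregular stems ignored.
def present_subjunctive(infinitive):
    stem, theme = infinitive[:-2], infinitive[-2:]
    assert theme in ("ar", "er", "ir"), "regular -ar/-er/-ir verbs only"
    vowel = "e" if theme == "ar" else "a"
    accented = "é" if vowel == "e" else "á"   # orthographic accent of 2pl
    endings = ["", "s", "", "mos", "is", "n"] # 1sg 2sg 3sg 1pl 2pl 3pl
    return [stem + (accented if i == 4 else vowel) + e
            for i, e in enumerate(endings)]
```

For cantar this yields cante, cantes, cante, cantemos, cantéis, canten, matching the first-conjugation pattern described in the text.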
The set of forms of the subjunctive is radically reduced in comparison to that of the indicative, on account of the lack of a perfective/imperfective (neutral) contrast in the past forms, on the one hand, and of the lack of a present/future contrast on the other. Medieval and Classical Spanish had a form for the future subjunctive. Built on the preterite/perfect stem with the marker -re- (cantare/ quisiere/ saliere), this simple form was flanked by a corresponding compound form with haber in the future subjunctive (hubiere cantado). Although surviving in some set expressions (sea como fuere 'be as it may'), in juridical language, and possibly in some reduced dialectal areas, future subjunctive forms seem to have disappeared from general usage as far back as the 18th century [START_REF] Ridruejo | ¿Cambios iterados en el subjuntivo español?[END_REF][START_REF] Camus Bergareche | El futuro de subjuntivo en español[END_REF][START_REF] Eberenz | Sea como fuere. En torno a la historia del futuro de subjuntivo español[END_REF]/1990). A comparison with the original Latin subjunctive paradigm shows that the main differences are directly or indirectly related to the loss of a conjugation system based on the contrast between infectum and perfectum and to the concomitant generalization of compound forms for perfects. The main reinterpretation processes are the following:

(i) the Latin pluperfect subjunctive is reinterpreted as a general past (imperfect) subjunctive (canta(vi)sse > cantase).

(ii) the Latin perfect subjunctive conflates with the Latin future perfect indicative to give the form of the future subjunctive (canta(ve)rim/ canta(ve)ro > cantare).
The resulting form is indistinguishable from the Latin imperfect subjunctive for all verbs lacking a perfect stem form distinct from the present stem (cantare(m)), so that the latter can be held either to have been entirely given up or to have concurred in the formation of the future subjunctive [START_REF] Ridruejo | ¿Cambios iterados en el subjuntivo español?[END_REF].

(iii) the Latin pluperfect indicative is reinterpreted as a subjunctive form (canta(ve)ram > cantara), and ends up being largely equivalent to the imperfect subjunctive that arose from the Latin perfect subjunctive. This process sets in in Medieval Spanish and stretches well into the contemporary language. The details of this semantic development are extremely complex, though clearly linked to a cross-linguistically widespread phenomenon which consists in exploiting past morphology for the expression of counterfactuality [START_REF] Ridruejo | ¿Cambios iterados en el subjuntivo español?[END_REF][START_REF] Iatridou | The grammatical ingredients of counterfactuality[END_REF].

In the contemporary language, the -ra- form preserves some of its etymological uses in contexts from which the -se- form is excluded, most notably as a pluperfect or preterite indicative in subordinate clauses, and with the modals deber 'must', poder 'can', and querer 'wish' in independent clauses. It tends to fully replace the -se- form for the expression of the imperfect subjunctive in a large number of regional, especially American, varieties. Although lacking any direct impact on the stock of subjunctive forms, the emergence of a conditional in Romance, together with changes in the uses of the imperfect indicative, has profoundly affected the distribution and interpretation of the subjunctive.
Temporal and aspectual relations

The comparatively poor stock of subjunctive forms and the fact that the subjunctive is a dependent mood appearing mainly in sequence-of-tense contexts have given rise to a debate as to the temporal interpretation of subjunctive forms. This debate concentrates on the contrast between the present and the imperfect subjunctive and is formulated in the generative tradition as a question concerning the existence of an independent Tense feature in subjunctive clauses. The issue carries over to the contrast between the compound forms of the subjunctive: although both convey a secondary anteriority relation, they contrast as to the possible highest anchors for this relation. On the basis of the distribution in (1a-b), [START_REF] Picallo | El nudo FLEX y el parámetro de sujeto nulo[END_REF][START_REF] Picallo | El nudo FLEX y el parámetro de sujeto nulo[END_REF] has argued that subjunctive clauses lack independent Tense, and that subjunctive forms are selected via a necessary anaphoric link with the temporal features of the matrix sentence, in such a way that a PAST in the matrix determines an imperfect/pluperfect subjunctive, whereas a NON-PAST in the matrix determines a present/perfect subjunctive. This claim has been challenged on a number of grounds. Strict temporal selection holds only in a restricted type of context, particularly those involving subjunctive selection by a forward-shifting predicate 3 or in causative constructions, and subjunctive licensing in subject clauses of copular sentences. Even in these contexts, it often takes the form of a constraint banning certain crossed combinations, but not others.
Thus, forward-shifting predicates exclude an imperfect subjunctive under a matrix NON-PAST (2c), but allow a present subjunctive under a matrix PAST (2b), whereas copular sentences allow an imperfect subjunctive under a matrix NON-PAST (3b), but exclude a present subjunctive under a matrix PAST (3c) (for further details, see Suñer; Kempchinsky; Quer). The conclusion seems thus inescapable that subjunctive forms make a temporal contribution of their own: what appears as strict temporal selection is a result of the interaction between the semantic properties of the context and this temporal contribution. The interpretation of the PAST / NON-PAST combinations in (2b) and (3b) offers an immediate clue as to what this contribution is. (2b) is an instance of a double access configuration, in which the time of the subordinate clause is calculated with Utt-Time as anchor (Kempchinsky): the requested arrival must follow Utt-Time. On the other hand, (3b) contrasts with (3c): the arrival must precede the time of epistemic evaluation in (3b), which reports present epistemic uncertainty about an already settled matter, whereas it follows the time of evaluation in (3c), which reports past metaphysical uncertainty about a matter not yet settled at that time. I would like to suggest that the contrast between the present and the imperfect subjunctive parallels that between the corresponding indicative tenses. The present is a deictic tense, always anchored with regard to Utt-Time. The imperfect can be an anaphoric tense, taking Tx as anchor ("present of the past"), but it can also have a deictic interpretation, in which case it signals anteriority with regard to Utt-Time.
This latter interpretation becomes prominent whenever the matrix context does not provide a past temporal anchor, i.e. a suitable Tx. The temporal contrast between the present and the imperfect subjunctive is somewhat obscured by the fact that the latter gives rise to interpretations in which the event time is simultaneous with or forward-shifted with regard to Utt-T. The imperfect subjunctive cannot be understood either as a deictic past or as an anaphoric "present of the past" in main clauses expressing wishes (with ojalá as a licensing adverb), nor in the antecedent of conditionals. It does not contrast in temporal location with the present subjunctive or with the present indicative, respectively:

5  a. Ojalá estuvieran/ estén en casa.
      hopefully be.IMPF.SBJ.3PL/ be.PRS.SBJ.3PL in house
      'I wish they were/ I hope they are at home'
   b. Si estuvieran/ están en casa...
      if be.IMPF.SBJ.3PL/ be.PRS.IND.3PL in house
      'If they were/ are at home...'

Such cases can be assimilated to the numerous instances of past tenses being used for signaling counterfactuality or non-realistic modal bases (see Iatridou).[4] By contrast with the present subjunctive resp. indicative versions, which only indicate epistemic uncertainty, in the imperfect subjunctive versions the world of evaluation w0 is assumed not to be a world in which they are at home in (5a-b). In fact, counterfactual uses of the imperfect subjunctive rather reinforce the analogy with the imperfect indicative, which in some so-called modal uses locates event-time simultaneously or subsequently to Utt-T, but does not locate it in the world of evaluation:

6  Yo que tú no se lo contaba.
   I that you not him/her it tell.IMPF.IND.1SG
   'If I were you, I wouldn't tell him/her'

The simple forms of the subjunctive are aspectually neutral.
The compound forms convey an anteriority relationship whose highest anchor can be Utt-T in the case of the perfect subjunctive, and is normally a Tx preceding Utt-T in the case of the pluperfect. The subjunctive is compatible with the periphrastic expression of prospective aspect, but prospective subjunctives are excluded in forward-shifting matrix contexts such as volitionals and directives. To sum up, the temporal-aspectual organization of the subjunctive does not differ radically from that of the indicative. It has a deictic form indicating coincidence with Utt-T, the present, and a form that can function anaphorically, indicating coincidence with Tx, or deictically, indicating precedence with regard to Utt-T, the imperfect. This latter form is exploited for counterfactual uses, signaling coincidence with Utt-T in a "world history" different from w0. Forms indicating coincidence regularly give rise to forward-shifted readings, sometimes as a function of the forward-shifting properties of the matrix context, but often simply as a result of the type of eventuality described in the clause. Compound forms indicate a secondary anteriority relation. When the highest anchor for this secondary anteriority relationship is Utt-T, the perfect subjunctive is very close to a deictically functioning imperfect subjunctive.

The meaning and uses of the subjunctive

General semantic characterizations of mood are notoriously difficult. The subjunctive is clearly an expression of modality, in as far as all its uses involve consideration of sets of alternative possible worlds, i.e. non-totally realistic modal bases. However, this characterization captures a necessary, but not a sufficient condition for subjunctive use. Whereas the indicative corresponds to the default mood, appearing in main assertions, but also in questions and in a number of dependent clauses, the subjunctive is a dependent mood, which is subject to specific licensing conditions.
This does not mean that the subjunctive is restricted to subordinate clauses, although the widest array of its uses does involve syntactic subordination. We will first discuss the subjunctive in dependent clauses and then the subjunctive in root contexts.

Argument clauses

Two distinctions have proven particularly useful when describing uses of the subjunctive. The first opposes intensional contexts to polarity contexts (Quer) as subjunctive licensors. In intensional contexts, the subjunctive is triggered by the lexical properties of a predicate, which can be a verb, but also an adjective or a noun:

9  Quiere que hablen de él.
   want.PRS.IND.3SG that talk.PRS.SBJ.3PL of him
   'He wants people to talk about him'

In polarity contexts, it is essentially a negation in the matrix context that licenses a subjunctive which would be otherwise excluded.

10 Nunca dijo que estuviera enfermo.
   never say.PRT.IND.3SG that be.IMPF.SBJ.3SG ill
   'S/he never said that he was ill'

The second, more traditional distinction opposes contexts of rigid subjunctive selection to contexts in which mood alternation is possible. Thus, the subjunctive is the only possible option in (9), whereas (10) also admits the indicative. However, the two distinctions do not overlap: (11a-c) show cases of mood alternation for "intensional" subjunctives. The clear meaning differences between the subjunctive and the indicative versions in (11a-c) give precious clues as to the semantic contribution of the subjunctive. In (11a), the subjunctive version reports a directive speech act, whereas the indicative version reports a statement of fact. This sort of contrast extends to a large class of verbs of communication. In (11b), the indicative version asserts that the subject of the main verb checked a fact.
The subjunctive version signals that the subject has a vested interest in this fact, and has possibly contributed to its coming about, for instance by closing the door herself. Finally, in (11c), the indicative version conveys acknowledgement of the truth of the propositional content of the object clause, whereas the subjunctive version indicates acquiescence or agreement with a suggestion. What is common to all the subjunctive versions is (a) an "element of will" on the side of the subject of the propositional attitude verb as to the coming about of the state of affairs described in the subordinate clause, and (b) the fact that the subject is involved as a causal factor that can possibly favor or prevent this coming about. Bouletic modality and causation are involved in most cases of rigid subjunctive selection,[5] namely with volitionals, as in (9), and with directives, implicatives, and causatives (12). Note that the latter two cases assert the truth of the propositional content of the subjunctive clause, thus undermining the widely held view that the subjunctive signals lack of assertion. However, some emotive-factive predicates exhibit uses as verbs of communication. They report speech acts which convey at the same time the assertion of a fact and an evaluation of this fact by the subject of the propositional attitude. In such uses, they lose their factive status, in as far as they do not presuppose the truth of their complement, and they occasionally give rise to mood alternation:

15 Se lamenta -injustificadamente- de que nadie lo comprende/ comprenda.
   REFL complain.PRS.IND.3SG unjustifiedly of that nobody him understand.PRS.IND.3SG/ understand.PRS.SBJ.3SG
   'He unjustifiedly complains about not being understood by anybody'

Mood alternation is sensitive, in such contexts, to the foregrounding of the propositional content of the subjunctive clause (indicative) or of the emotive-factive predicate (subjunctive), as shown by the fact that the focus of pseudo-cleft structures allows the indicative even in the absence of reported-speech readings (Quer; GRAE). As argued by Quer, the causation component in the semantics of emotive-factive predicates is a decisive factor in mood selection. At the same time, these predicates convey the (positive or negative) evaluation of a fact on the side of the Experiencer. Evaluative predicates constitute another major class of subjunctive selectors. They include a couple of verbs such as bastar 'suffice', convenir 'be advisable', urgir 'be urgent', and a large class of adjectives and nouns, as well as the adverbs bien 'well, right, proper' and mal 'bad, unfair, inappropriate' (GRAE). The subjunctive is triggered whenever the propositional content of the argument clause is not merely asserted, but located in a space of possibilities. This is the case with modal predicates expressing epistemic or metaphysical possibility or necessity, but also with predicates expressing frequency and with those expressing falsity:

18 Es probable/ usual/ erróneo que surjan conflictos.
   be.PRS.IND.3SG likely/ usual/ mistaken that arise.PRS.SBJ.3PL conflicts
   'It is likely/ usual/ false that conflicts (should) ensue'

Among predicates of propositions, only those that are equivalent to the assertion of the proposition, as for example es verdad/ cierto/ exacto/ seguro 'it is true/ correct/ exact/ sure', consistently select the indicative mood.
Note that with modal predicates, the truth of the subjunctive proposition may be entailed in some cases. Together with the implicative subjunctive triggers mentioned above (12), this fact casts some doubt on the role of nonveridicality (Giannakidou) in the distribution of the subjunctive. To sum up, subjunctive triggering in intensional contexts is intimately related to the notions of causation and evaluation. Mood selection is usually rigid in such contexts, which is probably an indication of the fact that argument clauses in such configurations cannot escape the scope of the selecting predicate. Note that the more complex scope configurations involved in pseudo-clefts, possibly disrupting subordination, permit the indicative, as in (16) above, and that when causal relations do not involve the embedding of an argument clause, no subjunctive is licensed (19a-b). Possible subjunctive licensors include first and foremost sentential negation, but also non-upward-entailing environments, such as contexts containing downward-entailing elements, questions and conditional antecedents (Ridruejo; GRAE). This can be taken to mean that indicative clauses in polarity contexts convey the speaker's endorsement of the truth of the complement. The indicative version of (21), in which subject of belief and speaker coincide, seems to report contradictory beliefs. By contrast, the subjunctive in polarity contexts does not convey any attitude of the speaker as to the truth of the complement clause: it indicates that the complement clause is under the scope of the propositional attitude verb and the operator affecting it.
This scopal dependency of the subjunctive - contrasting with the outscoping effects of the indicative - is further confirmed by the fact that polarity contexts license negative polarity items in subjunctive, but not in indicative complement clauses (Bosque).

Relative clauses

As stated above, in polarity contexts the subjunctive indicates that the clause containing it is in the scope of the licensing context. Mood alternation in relative clauses follows an analogous interpretive pattern. The descriptive content of an indicative relative is evaluated in w0 (the world in which non-modalized assertions are evaluated). By contrast, the descriptive content of a subjunctive relative is evaluated in a non-totally realistic modal base contributed by an intensional environment. This explains the well-known fact that noun phrases containing subjunctive relatives are typically interpreted non-specifically (23a) or attributively (23b):

23 a. Pidieron un libro que fuera fácil de leer.
      ask.PRT.IND.3PL a book that be.IMPF.SBJ.3SG easy of read.INF
      'They asked for a book that was easy to read'
   b. Le dieron un libro a cada cliente que hubiera gastado más de 10 euros.
      him give.PRT.IND.3PL a book to every customer that have.IMPF.SBJ.3SG spend.PP more of 10 euros
      'They gave a book to any customer having spent over 10 euros'

Non-specific relatives do not entail the existence in w0 of an object verifying the description. Attributive relatives are characterized by the fact that the link between the content of the nominal description and the property denoted by the rest of the sentence is a law-like one, grounded in generalizations that extend to counterfactual cases and usually involve causality. Mood alternation is excluded in appositive relatives. Since these constitute independent subsidiary assertions, they only take the indicative (Ridruejo).[6]
The licensing environments for subjunctive relatives share, to a certain extent, the properties of the environments licensing subjunctive argument clauses. As a matter of fact, restrictive relatives contained in subjunctive argument clauses admit themselves the subjunctive (GRAE). As for subjunctive relatives not contained in subjunctive clausal environments, they are excluded in contexts involving totally realistic modal bases (25a), and they are licensed in modal environments such as those involving bouletic modality (25b), but also in those containing modal verbs (25c), future tense or prospective aspect, or exhibiting a habitual/generic interpretation (Quer; GRAE). The problem is that, in relative clauses, the subjunctive itself can be the only overt element triggering a non-totally realistic interpretation of the environment. Usually, unexpected subjunctives are linked to the possibility of establishing an intentional link between the will of an agent and the descriptive content of the noun phrase, and are thus assimilable to bouletic modality. This is particularly clear in the case of so-called "purpose relatives" exemplified in (26) (Ridruejo), but also extends to subtler cases:

26 Hicieron un cobertizo que los protegiera de la lluvia.
   build.PRT.IND.3PL a shed that them protect.IMPF.SBJ.3SG of the rain
   'They built a shed as a protection against the rain'

Note that such cases are analogous to subjunctive-triggering with implicative verbs, in as far as entailments of existence are not suspended by the subjunctive, which only adds a forward-shifting element of will.
Although we have exemplified subjunctive relatives mainly in indefinite noun phrases, all determiners, with the notable exception of demonstratives, are compatible with subjunctive relatives (Quer). Occasional difficulties with the definite article should probably be attributed to a mismatch between the presuppositions of the article and the descriptive content of the noun phrase. Semantic definites - those in which the unicity presupposition is guaranteed by the descriptive content of the noun phrase, such as superlatives or descriptions containing ordinals - pose no problem for the subjunctive (Ridruejo):

27 Iban a comprar el libro que contuviera #(más) ilustraciones.
   go.IMPF.IND.3PL to buy.INF the book that contain.IMPF.SBJ.3SG more illustrations
   'They were going to buy the book with the greatest number of illustrations'

Bare plurals (28), but also count singular algún 'some', free choice items, and negative indefinites strongly favor subjunctive relatives (Quer). In the first case, this is a consequence of the scopal dependency of bare plurals; in the other cases, scopal dependency is reinforced by the fact that the items in question require licensors roughly corresponding to those required by the subjunctive:

28 Buscan libros que ??contienen/ contengan ilustraciones.
   search.PRS.IND.3PL books that contain.PRS.IND.3PL/ contain.PRS.SBJ.3PL illustrations
   'They are looking for books containing illustrations'

Free relatives also clearly favor the subjunctive, possibly as a consequence of the tendency to interpret them attributively and of their proximity to free-choice items (Giannakidou; Quer). To sum up, relative clauses exhibit mood alternation.
The subjunctive requires that the descriptive content of the clause be evaluated in a non-totally realistic modal base, which is more often than not guaranteed by its dependence on an intensional context, and gives rise to non-specific or attributive readings for the NP containing it.

Adverbial and/or adjunct clauses

Due to space limitations, only information concerning some prominent types of subjunctive adverbial/adjunct clauses and some limited types of mood alternation will be given in this section. Subjunctive use in these contexts is sensitive to roughly the same type of semantic factors we have been discussing. Thus, for instance, purpose clauses (30a), which involve bouletic modality, and clauses negating concomitance (30b), in which the proposition expressed is necessarily under the scope of the negative sin 'without', take the subjunctive. Both types of interclausal relations are expressed by a preposition governing a complement clause (GRAE 2008):[7]

30 a. Lo hice para que se enterara.
      it do.PRT.IND.1SG for that REFL inform.IMPF.SBJ.3SG
      'I did it so that he would notice it'
   b. Lo hice sin que se enterara.
      it do.PRT.IND.1SG without that REFL inform.IMPF.SBJ.3SG
      'I did it without his noticing it'

Modern Spanish exhibits the peculiarity that all forward-shifted temporal clauses - whose time of evaluation is ordered after the highest anchor Utt-T or Tx - take the subjunctive. This holds of temporal clauses introduced by any syntactic type of subordinating expression, and expressing simultaneity, posteriority or anteriority:

31 Cuando llegue, se lo decimos.
   when arrive.PRS.SBJ.3SG him/her it tell.PRS.IND.1PL
   'When s/he arrives, we'll tell him/her'

Some authors classify these uses of the subjunctive as "suppletive" future tenses, but the assumption of a "different" subjunctive seems unwarranted.
Furthermore, before-temporal clauses always take the subjunctive (i.e., not only when they are forward-shifted), whereas after-temporal clauses only take it in European Spanish (GRAE). Conditional antecedents and subjunctive concessive clauses figure prominently among the contexts in which the temporal contrast between present and imperfect subjunctive forms is reinterpreted, with imperfect subjunctive forms being used for the expression of non-realistic modal bases. Thus, both (32a-b) and (33a-b) locate the time of the subordinate after resp. before Utt-T. But (32b) and (33b) signal that the speaker views Pedro's confession as improbable resp. as contrary to fact.

The factors linked to the presence of the subjunctive in main clauses parallel those we find in dependent clauses, in as far as they involve evaluation with regard to non-totally realistic modal bases.

The conditional

The verbal form built on the infinitive/future stem by adding to it the desinences of the imperfect (cantar-ía/ habr-ía cantado) is predominantly classified as a temporal form of the indicative mood. It is not surprising that it should have modal uses: the future and imperfect indicative are known to exhibit a number of modal uses, which clearly predominate over temporal uses in the case of the former, so that it is only to be expected that a form combining the morphology of both tenses will have a still more pronounced modal profile. I would like to suggest, however, that there are good reasons for assuming a split in the uses of this form, with some of them corresponding to a tense ("future of the past" and, more interestingly, "past of the future"), and others constituting the mood of choice when non-realistic modal bases are involved, i.e. when w0 is excluded from the domain of quantification in a modal environment.
In "future of the past" uses, the conditional behaves as a strictly anaphoric tense: it requires a past anchor contributed by a verb of thinking or speaking (41a), which may be implicit in free indirect speech contexts and in so-called quotative or evidential uses of the conditional (Squartini). What I would like to label "past of the future" readings are practically equivalent to future perfects in contexts expressing a conjecture (Squartini). Spanish makes abundant use of future morphology for indicating that the propositional content is advanced as a possibility, and not as an unqualified assertion. If the propositional content concerns a time preceding Utt-T, anteriority can be expressed by the future perfect, but also by the conditional:

42 No vino a la fiesta.

What the semantics of conditionals, want-verbs, and modals have in common is the fact that they require consideration of non-totally realistic modal bases. It is thus natural to assume that "conditional mood" requires sets of alternative worlds to operate on. To judge from its effects in conditional sentences, what it does in such contexts is to exclude the world of evaluation from the domain of quantification, signaling that w0 does not belong to the modal base. The non-totally realistic modal base contributed by the modal element on which the conditional is grafted becomes a non-realistic modal base. When talking about the past - by means of perfect morphology on the conditional or on an embedded infinitive - non-realistic modal bases result in clearly counterfactual interpretations involving non-realized possibilities or unfulfilled wishes. "Conditional mood" - by contrast with the temporal conditional - is a counterfactual form.
As such, it interferes in a number of contexts with imperfect and pluperfect subjunctives, which have been shown to exhibit counterfactual interpretations. As stated above, pluperfect subjunctives compete with perfect conditionals in the consequent of past counterfactuals. The same competition exists with modals and with verbs of wish. This connection is reinforced by the use of the imperfect subjunctive in root clauses containing the modals poder, querer and deber.

Notes

1 I am greatly indebted to Ignacio Bosque (GRAE, Madrid). The materials and analyses proposed in GRAE (2008/to appear) have profoundly influenced my views on the Spanish subjunctive. I gratefully acknowledge the support of the Fédération Typologie et Universaux CNRS for the program Temporalité: typologie et acquisition.

2 A temporal anchor different from Utt-T, labelled Tx, is introduced, albeit with different characterizations, by Giorgi & Pianesi and by Iatridou. Adoption of Tx could lead to a more precise formulation of the intuition regarding "inactual" tenses on which Coseriu based his analysis of the Romance verbal system.

3 Forward-shifting predicates are characterized by the fact that the clauses they introduce are evaluated at a time that cannot precede the matrix time. Volitionals, directives, and verbs of planning belong to this class. For a discussion, see Abusch; for an analysis of modal verbs as forward-shifting, see Condoravdi.

4 Non-realistic modal bases are domains excluding the world of evaluation (w0).
They are contrasted in the text with non-totally realistic modal bases, which contain w0 but are non-singleton sets of worlds, and with totally realistic modal bases, which are singleton sets whose only member is w0. The latter form the background for factual, non-modalized statements. For a discussion, see Kaufmann, as well as Giorgi & Pianesi (1997: 205-217).

5 Assertions as to rigid subjunctive selection or exclusion should be taken with a pinch of salt whenever the verb involved is a modal (GRAE), since modals can appear in the indicative in subjunctive-selecting contexts, and in the subjunctive in indicative-selecting contexts.

6 This means that the -ra- forms appearing in appositive relatives in certain registers should be analysed as indicative forms. As for their role in restrictive relatives, it is subject to debate (see Rivero).

7 Mood alternation distinguishes purpose (subjunctive) from result clauses (indicative) with prepositional expressions such as de manera/modo/forma tal (que) 'so as/ so that'.

8 A possible exception is that of counterfactual suggestions or wishes, for instance:

(i) Me lo hubieras dicho antes.
    me it have.IMPF.SBJ.2SG say.PP before
    'You should have told me before'

9 3rd person imperatives are not usually acknowledged as such in the Spanish descriptive tradition, which assimilates the sentences containing them to desideratives (GRAE). However, some of their uses cannot be semantically assimilated to desideratives:

(i) Que hagan el menor error, y los denuncio.
    that make.PRS.SBJ.3PL the least mistake and them report.PRS.IND.1SG
    'Let them commit the slightest mistake, and I'll report them'

11 Habrías podido/ tenido que prestar atención.
   have.COND.2SG can.PP/ have.PP that lend.INF attention
   'You could/ should have paid attention'

b. Preferiría haberme enterado inmediatamente.
   prefer.COND.1SG have.INF-me inform immediately
   'I'd have rather learnt about it right away'

Table 2: The morphology of the simple forms of the subjunctive

2  a. Les pidió que llegaran a tiempo.
      them ask.PRT.IND.3SG that arrive.IMPF.SBJ.3PL to time
      'S/he asked them to arrive on time'
   b. Les pidió que lleguen a tiempo.
      them ask.PRT.IND.3SG that arrive.PRS.SBJ.3PL to time
      'S/he asked them to arrive on time'
   c. Les pide que lleguen/ *llegaran a tiempo.
      them ask.PRS.IND.3SG that arrive.PRS.SBJ.3PL/ arrive.IMPF.SBJ.3PL to time
      'S/he asks them to arrive on time'

3  a. Es probable que lleguen a tiempo.
      be.PRS.IND.3SG likely that arrive.PRS.SBJ.3PL to time
      'It's likely that they will arrive on time'
   b. Es probable que llegaran a tiempo.
      be.PRS.IND.3SG likely that arrive.IMPF.SBJ.3PL to time
      'It is likely that they arrived on time'
   c. Era probable que *lleguen/ llegaran a tiempo.
      be.IMPF.IND.3SG likely that arrive.PRS.SBJ.3PL/ arrive.IMPF.SBJ.3PL to time
      'It was likely that they would arrive on time'

7  a. Me sorprende que lo hayan/ *hubieran visto.
      me surprise.PRS.IND.3SG that it have.PRS.SBJ.3PL/ have.IMPF.SBJ.3PL see.PP
      'I'm surprised that they (should) have seen it'
   b. Me sorprendió que lo hayan/ hubieran visto.
      me surprise.PRT.IND.3SG that it have.PRS.SBJ.3PL/ have.IMPF.SBJ.3PL see.PP
      'I was surprised that they had/ should have seen it'

However, just like the imperfect subjunctive can locate event-time simultaneously with Utt-T in counterfactually interpreted contexts, the pluperfect subjunctive can express a single anteriority relation, locating the eventuality before Utt-T in such contexts. Pluperfect and perfect do not contrast in temporal location in (8a-b), but the pluperfect versions indicate that w0 is not assumed to be a world in which they arrived on time, whereas the perfect versions merely express epistemic uncertainty as to w0 being or not such a world:

8  a. Ojalá hubieran/ hayan llegado a tiempo.
      hopefully have.IMPF.SBJ.3PL/ have.PRS.SBJ.3PL arrive.PP to time
      'I wish they had/ I hope they have arrived on time'
   b. Si hubieran/ han llegado a tiempo...
      if have.IMPF.SBJ.3PL/ have.PRS.IND.3PL arrive.PP to time
      'If they had/ have arrived on time...'

11 a. Insiste en que lleguen/ llegan a las tres.
      insist.PRS.IND.3SG in that arrive.PRS.SBJ.3PL/ arrive.PRS.IND.3PL at three
      'S/he insists on their arriving/ that they arrive at 3 o'clock'
   b. Se aseguró de que la puerta estuviera/ estaba cerrada.
      REFL make-sure.PRT.IND.3SG of that the door be.IMPF.SBJ.3SG/ be.IMPF.IND.3SG closed
      'S/he saw to it/ checked that the door was closed'
   c. Admitió que no le pagaran/ pagaban.
      admit.PRT.IND.3SG that not him/her pay.IMPF.SBJ.3PL/ pay.IMPF.IND.3PL
      'S/he consented not to be paid/ admitted that s/he was not being paid'

12 Exigió/ Consiguió/ Hizo que le pagaran.
   demand.PRT.IND.3SG/ obtain.PRT.IND.3SG/ make.PRT.IND.3SG that him/her pay.IMPF.SBJ.3PL
   'S/he demanded/ managed to be paid'/ 'S/he made them pay him/her'

In some cases, causation alone triggers rigid subjunctive selection. This is the case when a causal relation between two eventualities is established by means of a verbal predicate (13a-b), but also in the complement clauses of nouns and adjectives denoting causal relations:

13 a. El mal tiempo explica que llegara tarde.
      the bad weather explains that arrive.IMPF.SBJ.3SG late
      'The bad weather explains his/her late arrival'
   b. Que se negaran a pagarle dio lugar a una disputa.
      that REFL refuse.IMPF.SBJ.3PL to pay-him/her gave place to a quarrel
      'Their refusal to pay him/her caused a quarrel'

Emotive-factive predicates express a relationship between an Experiencer and a Stimulus, such that the Stimulus causes a psychological reaction in the Experiencer. They consistently select the subjunctive in their argument clauses:

14 Le sorprende que haya llegado tarde.
   him/her surprise.PRS.IND.3SG that have.PRS.SBJ.3SG arrive.PP late
   'S/he is surprised that s/he should have arrived late'

16 Lo que le sorprende es que haya llegado/ llegó tarde.
   that.N.SG that him/her surprise.PRS.IND.3SG be.PRS.IND.3SG that have.PRS.SBJ.3SG arrive.PP/ arrive.PRT.IND.3SG late
   'What surprises him/her is that s/he (should have) arrived late'

Subjunctive selection in argument clauses is much less rigid in polarity contexts:

19 a. ¿Le molesta si fumo?
      you/him/her bother.PRS.IND.3SG if smoke.PRS.IND.1SG
      'Do you/ Does s/he mind if I smoke?'
   b. Se aburrió porque siempre lo criticaban.
      REFL annoy.PRT.IND.3SG because always him criticize.IMPF.IND.3PL
      'S/he got fed up because he was always being criticized'

Mood alternation in polarity contexts produces extremely subtle effects which involve the attitude of the speaker towards the propositional content of the argument clause. Thus, the indicative is the only possible choice in (20a), but the subjunctive is allowed in (20b). Note that first person present negated belief reports select the subjunctive (Quer):

20 a. Creían/ Afirmaban que Juan *estuviera/ estaba enfermo.
      believe.IMPF.IND.3PL/ claim.IMPF.IND.3PL that Juan be.IMPF.SBJ.3SG/ be.IMPF.IND.3SG ill
      'They believed/ claimed that Juan was ill'
   b. No creían/ afirmaban que Juan estuviera/ estaba enfermo.
      not believe.IMPF.IND.3PL/ claim.IMPF.IND.3PL that Juan be.IMPF.SBJ.3SG/ be.IMPF.IND.3SG ill
      'They didn't believe/ claim that Juan was ill'

21 No creo que estuviera/ *estaba enfermo.
   not believe.PRS.IND.1SG that be.IMPF.SBJ.3SG/ be.IMPF.IND.3SG ill
   'I don't believe s/he was ill'

32 a. Aunque Pedro confiese, yo seguiré negando.
      even-that Pedro confess.PRS.SBJ.3SG I follow.FUT.IND.1SG deny.GER
      'Even if Pedro confesses, I'll go on denying it'
   b. Aunque Pedro confesara, yo seguiría negando.
      even-that Pedro confess.IMPF.SBJ.3SG I follow.COND.1SG deny.GER
      'Even if Pedro confessed, I would go on denying it'

33 a. Aunque Pedro haya confesado, yo seguiré negando.
      even-that Pedro have.PRS.SBJ.3SG confess.PP I follow.FUT.IND.1SG deny.GER
      'Even if Pedro has confessed, I'll go on denying it'
   b. Aunque Pedro hubiera confesado, yo seguiría negando.
      even-that Pedro have.IMPF.SBJ.3SG confess.PP I follow.COND.1SG deny.GER
      'Even if Pedro had confessed, I would go on denying it'

Conditionals and subjunctive concessives show parallel patterns in tense-mood distribution (Quer), with one important exception: conditionals introduced by the conjunction si 'if' never accept present/perfect subjunctive forms (34a). This restriction does not hold of other expressions, as shown by (34b):

34 a. Si Pedro confiesa/ *confiese, yo también confesaré.
      if Pedro confess.PRS.IND.3SG/ confess.PRS.SBJ.3SG I also confess.FUT.IND.1SG
      'If Pedro confesses, I will confess too'
   b. En caso de que Pedro *confiesa/ confiese, yo también confesaré.
      in case of that Pedro confess.PRS.IND.3SG/ confess.PRS.SBJ.3SG I also confess.FUT.IND.1SG
      'If Pedro confesses, I will confess too'

Counterfactual conditionals and subjunctive concessives with an imperfect or a pluperfect subjunctive normally have conditional forms in the main clause. However, there is a marked tendency to replicate a pluperfect subjunctive in the main clause:

35 Si/ Aunque hubiera confesado, lo habrían/ %hubieran condenado.
   if/ even-that have.IMPF.SBJ.3SG confess.PP him have.COND.3PL/ have.IMPF.SBJ.3PL condemn.PP
   'If/ Even if he had confessed, he would have gotten a sentence'

Causal subordinates do not of themselves license the subjunctive (36a). However, under negation, as well as under emotive-factive or evaluative predicates, the subjunctive is licensed.

a. tenme/ tenedme/ téngame/ ténganme/ tengámosle la mano
   hold-IMP.2SG-me/ hold-IMP.2PL-me/ hold-IMP.3SG-me/ hold-IMP.3PL-me/ hold-IMP.1PL-him/her the hand
   'Hold my hand/ Let's hold his/her hand'
b. me tienes/ me tenéis/ me tenga/ me tengan/ le tengamos la mano
   me hold-IND.2SG/ me hold-IND.2PL/ me hold-SBJ.1/3SG/ me hold-SBJ.3PL/ him/her hold-SBJ.1PL the hand
   'You/ I/ S/he/ They hold my hand// We hold his/her hand'

Thus, clitic position is held to discriminate between subjunctive and imperative in cases such as le tenga/ téngale, etc. (GRAE). Apart from certain set expressions and set patterns expressing wishes (39), desiderative sentences require a licensing element preceding the subjunctive. The most usual are the complementizer que and the particle ojalá 'hopefully' illustrated above.

39 Dios te ayude.
   God you help.PRS.SBJ.3SG
   '(May) God help you'

Adverbs expressing uncertainty license the subjunctive in main clauses when they precede the verb, but never when they follow it, as shown by the following contrast:

40 a. Quizás/ Probablemente esté/ está enfermo.
      perhaps/ probably be.PRS.SBJ.3SG/ be.PRS.IND.3SG ill
      'Maybe/ Probably s/he is ill'
   b. *Esté/ Está enfermo, quizás/ probablemente.
      be.PRS.SBJ.3SG/ be.PRS.IND.3SG ill maybe/ probably
      'S/he is ill, maybe/ probably'

It thus contrasts with prospective aspect, whose past anchor can be contributed by an adverbial (41b) or by the tense of an independent previous sentence (41c):

   not go-out.PRT.IND.1PL because rain.COND.3SG/ go.IMPF.IND.3SG to rain
   'We didn't go out because it would rain/ was going to rain'

41 a. Pensó/ Afirmó que llovería/ iba a llover.
think.PRT.IND.3SG/claim PRT..3SG that rain.COND.3SG/ go.IMPF.IND.3SG to rain 'S/he thought/claimed that it would rain/ it was going to rain' b. Ayer *llovería / iba a llover. yesterday rain. COND.3SG / go IMPF.IND.3SG to rain 'Yesterday it would rain/ was going to rain' c. No salimos porque *llovería / iba a llover. not come PRT.IND.3SG to the party Estaría/ Habrá estado enfermo. be.COND.3SG/ have.FUT.IND.3SG be.PP ill 'He didn't come to the party. He might have been ill' Modal uses of the conditional, on the other hand, are only licensed in a particular subset of modal environments, comprising (a) modal verbs; (b) verbs expressing wishes or preferences; (c) the consequent of counterfactual or hypothetical conditional sentences (Laca 2006). 43 a. Podrías/ Tendrías que prestar atención. can.COND.2SG/ have. COND.2SG that lend.INF attention 'You could/ should pay attention' b. Querría/ Preferiría/ Me gustaría want. COND.1SG/ prefer.COND.1SG/ me like.COND.3SG que prestaras atención. that lend.IMPF.SBJ.2SG attention 'I wish/ I'd prefer you would pay attention/ I'd like it for you to pay attention' c. Si te importara, prestarías atención. if you.DAT mind. IMPF.SBJ.3SG lend.COND..2SG attention 'If you minded, you would pay attention' Root contexts In main clauses, the subjunctive invariably signals that the propositional content is not being asserted. This is the case in directive (37a) and desiderative sentences (37b), but also in sentences expressing some forms of epistemic modality (37c Note that in all cases, the subjunctive requires a licensor that precedes it: negation in (37a), the complementizer que or the particle ojalá (37b), and an adverb in ((37c). 8 The subjunctive alternates with the imperative in directives. Negative directives cannot be expressed in the imperative, and 3 rd pers. imperatives are indistinguishable from subjunctive forms. 9 Since the politeness form of address is a 3 rd pers. 
form, and the only form for plural addressees in American Spanish is the politeness form, this leads to considerable overlap between imperative and subjunctive. Table 3 shows that there are only two distinct forms for the imperative in European Spanish, and only one in American Spanish : Although the wisdom of maintaining a separate mood for two, resp. one distinct inflection may be questioned, imperative sentences not introduced by negation or by a complementizer share with infinitives and gerunds the peculiarity of not allowing proclitics:
halid: 01756773
lang: en
domain: ["info.info-rb", "info.info-cv"]
timestamp: 2024/03/05 22:32:10
year: 2018
url: https://hal.science/hal-01756773/file/ICRA2018_FinalVersion.pdf
Visual odometry using a homography formulation with decoupled rotation and translation estimation using minimal solutions

Banglei Guan ([email protected]), Pascal Vasseur ([email protected]), Cédric Demonceaux ([email protected]), Friedrich Fraundorfer ([email protected])

Abstract — In this paper we present minimal solutions for two-view relative motion estimation based on a homography formulation. By assuming a known vertical direction (e.g. from an IMU) and a dominant ground plane, we demonstrate that rotation and translation estimation can be decoupled. This result allows us to reduce the number of point matches needed to compute a motion hypothesis. We then derive different algorithms based on this decoupling that allow an efficient estimation. We also demonstrate how these algorithms can be used to compute an optimal inlier set efficiently, using exhaustive search or histogram voting instead of a traditional RANSAC step. Our methods are evaluated on synthetic data and on the KITTI data set, demonstrating that they are well suited for visual odometry in road driving scenarios.

I. INTRODUCTION

Visual odometry and visual SLAM [Scaramuzza and Fraundorfer] play an immensely important role in mobile robotics. Many different approaches to visual odometry have been proposed, and visual odometry has been used successfully in a wide variety of applications. However, the reliability, long-term stability and accuracy of visual odometry algorithms are still a topic of research, as can be seen from the many contributions to the KITTI visual odometry benchmark [Geiger et al.].

Most approaches to visual odometry follow a scheme in which feature correspondences between subsequent views are first established, then screened for outliers, and egomotion estimation is finally done on the inliers only [Scaramuzza and Fraundorfer]. The reliability and robustness of such a scheme depend heavily on the outlier screening step, which in addition has to be fast and efficient. The use of RANSAC [Fischler and Bolles] is widely accepted for this step. Because the number of RANSAC iterations grows exponentially with the minimal number of points needed to estimate a solution, reducing this number is very attractive. For instance, a standard solution for two-view egomotion estimation is to use the essential matrix with 5 matching points [Nistér] within a RANSAC process to increase robustness; any reduction of the minimal sample size directly reduces the number of iterations required. One way to achieve such a reduction is to take motion constraints into account, e.g. planar motion (2pt algorithm) or the Ackermann steering motion of self-driving cars (1pt algorithm [Scaramuzza]). Another is to utilize an additional sensor such as an inertial measurement unit (IMU).
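The exponential dependence on the minimal sample size can be made concrete with the standard RANSAC trial-count formula (not stated explicitly in the paper): N = log(1 − p) / log(1 − w^s), where w is the inlier ratio, s the minimal sample size and p the desired success probability. A minimal sketch:

```python
import math

def ransac_trials(sample_size: int, inlier_ratio: float, success_prob: float = 0.99) -> int:
    """Number of RANSAC iterations needed to draw at least one all-inlier
    sample of `sample_size` points with probability `success_prob`,
    given the expected `inlier_ratio`."""
    p_good_sample = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - success_prob) / math.log(1.0 - p_good_sample))

# With 50% inliers, a 5pt solver needs far more trials than a 1pt solver.
print(ransac_trials(1, 0.5))  # 7
print(ransac_trials(5, 0.5))  # 146
```

This is why the paper's reduction from 5pt (essential matrix) down to 1pt/0.5pt hypotheses pays off so strongly inside a RANSAC loop.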
Traditional sensor fusion methods [Weiss] perform a late fusion of the individual vision and IMU measurements. However, it is possible to utilize the IMU measurements much earlier, to aid the visual odometry algorithm in outlier screening. This idea has already been exploited in [Fraundorfer], [Gim], [Naroditsky], [Saurer et al.], where partial IMU measurements are used to design more efficient motion estimation algorithms for outlier screening. In this paper we follow this idea and propose a low-complexity algorithm for unconstrained two-view motion estimation that can be used for efficient outlier screening and initial motion estimation. Our method assumes a known gravity vector (measured by an IMU) and is based on a homography relation between two views. In [Saurer et al.] a 2pt algorithm has been proposed for exactly this case. In this work we improve on [Saurer et al.] and show that an algorithm can be found that needs fewer than 2 data points for a motion hypothesis. To achieve this, the first step is to separate the rotation and translation estimation. This is possible if the scene contains features that are far away. Such features are only influenced by rotation, and the x-coordinate of a single feature point is sufficient to find the remaining rotational degree of freedom (DOF); we therefore call this the 0.5pt method. After this, the remaining 3 DOFs of the translation t_x, t_y, t_z are computed. We present a linear solution that needs 1.5 point correspondences. More important, however, is our proposal to use discrete sampling to determine one of the remaining parameters and then a 1pt algorithm for the remaining 2 parameters. This makes it possible to determine a complete motion hypothesis from a single point correspondence, yielding an extremely fast algorithm even within a RANSAC loop. The motion hypotheses can be computed exhaustively for each point correspondence and the best solution found by a voting scheme.

The proposed methods are evaluated experimentally on synthetic and real data sets. We test the algorithms under different image noise and IMU measurement noise. We demonstrate the proposed algorithms on the KITTI data set [Geiger et al.] and evaluate their accuracy against the ground truth. These experiments also show that the assumptions taken hold very well in practice, and the results on the KITTI data set show that the proposed methods are useful in the self-driving car context.

II. RELATED WORK

With known intrinsic parameters, a minimum of 5 point correspondences is sufficient to estimate the essential matrix [Nistér], and a minimum of 4 point correspondences is required to estimate the homography if all the 3D points lie on a plane [Hartley and Zisserman]. The essential matrix or the homography can then be decomposed into the motion of the camera between the two views, i.e.
a relative rotation and a translation direction. Reducing the number of point correspondences needed between views is important in terms of computational efficiency as well as robustness and reliability. Such a reduction is possible if additional information is available or assumptions about the scene and camera motion are made. If, for instance, the motion is constrained to a plane, which is typical for ground-based robots or self-driving cars, only 2 point correspondences are needed to compute the 3-DOF motion [Ortin]. If the motion is further constrained by the Ackermann steering typical of cars, only 1 point correspondence is necessary [Scaramuzza]. In contrast, if additional information, e.g. from an IMU, is available and the complete rotation between the two views is provided by the IMU, the remaining translation can be recovered up to scale using only 2 points [Kneip]. Following this concept, a variety of algorithms have recently been proposed for egomotion estimation with a known common direction [Fraundorfer], [Kalantari], [Naroditsky]. The common direction between the two views can be given by an IMU (measuring the gravity direction) or by vanishing point extraction in the images. All these works propose different algorithms for solving the essential matrix with 3 point correspondences: they start with a simplified essential matrix (due to the known common direction) and then derive a polynomial equation system for the solution.

To reduce the number of point correspondences further, the homography relation between two views can be used instead of the epipolar constraint expressed by the essential matrix. Under the assumption that the scene contains a large enough plane that is normal or parallel to the gravity vector measured by an IMU (a typical case for indoor or road driving scenarios), the egomotion can be computed from 2 point correspondences [Saurer et al.]. This idea can be extended even further, which is what we propose in this work. We start from the formulation of [Saurer et al.], where the cameras are aligned to the gravity vector and the remaining DOFs are one rotation parameter and three translation parameters, and solve for rotation and translation separately. This exploits the fact that for far scene points the parallax shift (induced by translation) between two views is hardly noticeable. The motion of these far points is close enough to the pure rotation case that the rotation between the two views can be estimated first (independently of translation) using these far points. Every single point correspondence produces a hypothesis for the remaining rotation parameter, which can be used in a 1pt RANSAC algorithm for rotation estimation, or for histogram voting by computing the hypothesis from all point matches. This step also allows the correspondences to be separated into two sets, a far set and a near set. The further processing for translation estimation can then be continued on the smaller near set only, as the effect of translation is not noticeable in the far set. Such a configuration is typical for road driving imagery.
For estimating the remaining translation parameters we propose a linear 1.5pt algorithm. In practice, however, this solution does not give a direct computational advantage over the 2pt algorithm of Saurer et al. Instead, we propose a combination of discrete sampling and parameter estimation: one of the remaining 3 parameters is sampled in discrete steps, and for each sampled value the remaining parameters can be estimated from a single point correspondence. This can be done efficiently using a 1pt RANSAC step or an exhaustive search for the globally optimal value. The great benefit of this approach is that instead of a 2pt RANSAC, a sequence of 1pt RANSAC steps with a constant overhead for the bounded discrete sampling is used. This exhaustive search gives us an efficient way to find the globally optimal solution.

III. BASICS AND NOTATIONS

With known intrinsic camera parameters, a general homography relation between two different views is written as [Hartley and Zisserman]:

λ x_j = H x_i,    (1)

where x_i = [x_i, y_i, 1]^T and x_j = [x_j, y_j, 1]^T are the normalized homogeneous image coordinates of the points in views i and j, and λ is a scale factor. The homography matrix H is given by:

H = R - (1/d) t N^T,    (2)

where R = R_y R_x R_z and t = [t_x, t_y, t_z] are respectively the rotation and the translation from view i to view j, and R_y, R_x and R_z are the rotation matrices around the y-, x- and z-axis, respectively. With knowledge of the vertical direction, the rotation matrix R can be simplified to R = R_y by pre-rotating the feature points with R_x R_z, which can be measured by the IMU (or alternatively obtained from vanishing points [Bazin]).
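As a minimal sketch of this pre-rotation step, gravity alignment of a normalized image point could look as follows. The axis and sign conventions (pitch about x, roll about z) are illustrative assumptions; they depend on the actual IMU frame:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_vec(M, v):
    return [sum(M[r][k] * v[k] for k in range(3)) for r in range(3)]

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)] for r in range(3)]

def gravity_align(x, y, roll, pitch):
    """Pre-rotate a normalized image point (x, y, 1) by R_x(pitch) R_z(roll)
    so that the remaining inter-view rotation is a pure yaw R_y."""
    v = mat_vec(mat_mul(rot_x(pitch), rot_z(roll)), [x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Zero roll/pitch leaves the point unchanged.
print(gravity_align(0.1, -0.2, 0.0, 0.0))  # (0.1, -0.2)
```

After this step, all derivations below can treat the two camera frames as differing only by a yaw rotation and a translation.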
After this rotation, the cameras are in a configuration such that the camera plane is vertical to the ground plane. d is the distance between the view i frame and the 3D plane, and N = [n_1, n_2, n_3]^T is the unit normal vector of the 3D plane with respect to the view i frame. For this gravity-aligned camera configuration, the plane normal N of the ground plane is [0, 1, 0]^T. Consequently, Equation 2, considering only points on the ground plane, can be written as (matrix rows separated by semicolons):

H = [ cos θ  0  sin θ ;  0  1  0 ;  -sin θ  0  cos θ ] - (1/d) t [0 1 0]    (3)

The homography relation is defined up to scale, so 4 DOFs remain: the rotation θ around the y-axis and the translation parameters t_x, t_y, t_z.

IV. 0.5PT ROTATION ESTIMATION METHOD

The rotation angle can be computed from a single point correspondence; in fact, it can be computed from only the x-coordinate of a single feature point. Rotation estimation can be done independently of the translation estimation if the scene contains far points. These points, which can be considered at infinity, are no longer affected by a translation. Consequently, the translation component is zero for these points and Equation 3 simplifies to a rotation matrix:

H = [ cos θ  0  sin θ ;  0  1  0 ;  -sin θ  0  cos θ ]    (4)

In order to eliminate the unknown scale factor λ, multiplying both sides of Equation 1 by the skew-symmetric matrix [x_j]_× yields:

[x_j]_× H x_i = 0.    (5)

Substituting Equation 4 into the above equation and expanding it:

[ 0  -1  y_j ;  1  0  -x_j ;  -y_j  x_j  0 ] [ cos θ  0  sin θ ;  0  1  0 ;  -sin θ  0  cos θ ] [x_i, y_i, 1]^T = 0    (6)

By rewriting the equation, we obtain:

-x_i y_j sin θ + y_j cos θ - y_i = 0    (7)
(x_i x_j + 1) sin θ + (x_i - x_j) cos θ = 0    (8)
-y_j sin θ - x_i y_j cos θ + x_j y_i = 0    (9)

These equations are derived for points that lie on a plane normal to the y-axis of the cameras and are infinitely far away. In this case, all the points lie on the horizon, which means that the y-coordinate of such a point in normalized coordinates is 0 (normalized coordinates are image coordinates in pixels after multiplication with the inverse calibration matrix). With y_i = y_j = 0, Equations 7 and 9 vanish identically and only Equation 8 remains. Considering the trigonometric constraint sin^2 θ + cos^2 θ = 1, the rotation parameter sin θ is obtained:

sin θ = ± (x_i - x_j) / sqrt(x_i^2 x_j^2 + x_i^2 + x_j^2 + 1)    (10)

Due to the sign ambiguity of sin θ, we obtain two possible solutions for the rotation angle. Every point correspondence yields a rotation hypothesis. A 1pt RANSAC loop can be used to find a consistent hypothesis with only a few samples. Alternatively, the globally optimal solution can be computed by exhaustive search or histogram voting. The exhaustive search is linear in the number of point correspondences: a hypothesis is computed for every correspondence, and the hypothesis with the maximum number of inliers is the globally optimal solution. To avoid computing the inliers and outliers for every hypothesis, a histogram voting method can be used: all hypotheses are collected in a histogram with discrete bins (e.g. a bin size of 0.1 degree) and the bin with the maximum count is selected as the best solution. Alternatively, the mean of a window around the peak can be computed for a more accurate result. The inliers of the pure rotation formulation belong to scene points that are very far away and do not influence the translation. For the subsequent translation estimation these point correspondences can be removed to reduce the number of data points to process: translation estimation only needs to consider the outlier set of the rotation estimation.
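A sketch of the 0.5pt hypothesis generation with histogram voting follows. Function names and the synthetic check are our own; the per-point hypothesis uses θ = atan(x_j) − atan(x_i), which is algebraically equivalent to Equation 10 with the sign ambiguity resolved by the pure-rotation forward model:

```python
import math
import random

def rotation_hypothesis(xi, xj):
    """0.5pt yaw hypothesis from the x-coordinates of one horizon-point
    correspondence (equivalent to Equation 10 with the sign resolved)."""
    return math.atan(xj) - math.atan(xi)

def vote_rotation(matches, bin_deg=0.1):
    """Histogram voting over per-point hypotheses; returns the winning angle."""
    hist = {}
    for xi, xj in matches:
        b = round(math.degrees(rotation_hypothesis(xi, xj)) / bin_deg)
        hist[b] = hist.get(b, 0) + 1
    best = max(hist, key=hist.get)
    return math.radians(best * bin_deg)

# Synthetic check: horizon points rotated by a known yaw, plus gross outliers.
random.seed(0)
theta = math.radians(5.0)
matches = []
for _ in range(200):
    xi = random.uniform(-0.5, 0.5)
    xj = math.tan(math.atan(xi) + theta)   # pure-rotation forward model
    matches.append((xi, xj))
matches += [(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)) for _ in range(50)]
est = vote_rotation(matches)
print(abs(math.degrees(est) - 5.0) < 0.2)  # True
```

The 200 consistent hypotheses fall into one bin, so the vote is unaffected by the 50 random outliers, which also illustrates how the voting step separates the far (inlier) from the near (outlier) correspondences.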
V. TRANSLATION ESTIMATION METHOD

After estimation of the rotation parameter as described in the previous section, the feature points in view j can be rotated by the rotation matrix around the yaw axis:

x̃_j = R_y^T x_j,    (11)

This aligns both views such that they differ only by a translation t̃ = [t̃_x, t̃_y, t̃_z] for x_i ↔ x̃_j. Equation 3 is therefore written as:

H = I - (1/d) t̃ [0 1 0]    (12)

In the following subsections we describe 4 different methods to estimate the translation parameters.

A. 1.5pt linear solution

This subsection describes a linear method to compute the remaining translation parameters. Three equations from two point correspondences are used to set up a linear equation system for the translation. The camera-plane distance d is unknown, but since the translation can only be known up to scale, d can be absorbed into t̃. We then obtain:

H = [ 1  -t̃_x  0 ;  0  1 - t̃_y  0 ;  0  -t̃_z  1 ]    (13)

Substituting Equation 13 into Equation 5, the homography constraint between x_i and x̃_j can be expressed as:

[ 0  -1  ỹ_j ;  1  0  -x̃_j ;  -ỹ_j  x̃_j  0 ] [ 1  -t̃_x  0 ;  0  1 - t̃_y  0 ;  0  -t̃_z  1 ] [x_i, y_i, 1]^T = 0    (14)

By rewriting the equation, we obtain:

y_i t̃_y - y_i ỹ_j t̃_z = y_i - ỹ_j
-y_i t̃_x + x̃_j y_i t̃_z = x̃_j - x_i
y_i ỹ_j t̃_x - x̃_j y_i t̃_y = x_i ỹ_j - x̃_j y_i    (15)

Even though Equation 15 has three rows, it only imposes two independent constraints on t̃, because the skew-symmetric matrix [x̃_j]_× has only rank 2. To solve for the 3 unknowns of t̃ = [t̃_x, t̃_y, t̃_z], one more equation is required, which has to be taken from a second point correspondence. In principle, an arbitrary equation can be chosen from Equation 15; for example, the second and third rows of the first point x_i ↔ x̃_j and the second row of the second point x'_i ↔ x̃'_j are stacked into 3 equations in 3 unknowns:

-y_i t̃_x + x̃_j y_i t̃_z = x̃_j - x_i
y_i ỹ_j t̃_x - x̃_j y_i t̃_y = x_i ỹ_j - x̃_j y_i
-y'_i t̃_x + x̃'_j y'_i t̃_z = x̃'_j - x'_i    (16)

Solving this system gives the linear solution for t̃ = [t̃_x, t̃_y, t̃_z]; in closed form (primes denoting the second correspondence):

t̃_x = [x̃_j y_i (x'_i - x̃'_j) - x̃'_j y'_i (x_i - x̃_j)] / [y_i y'_i (x̃_j - x̃'_j)]
t̃_z = [y_i (x'_i - x̃'_j) - y'_i (x_i - x̃_j)] / [y_i y'_i (x̃_j - x̃'_j)]
t̃_y = 1 + ỹ_j (y_i t̃_x - x_i) / (x̃_j y_i)    (17)

The translation t from view i to view j is then obtained as:

t = R_y^T t̃.    (18)

For finding the best-fitting translation t, a RANSAC step should be used, and here it is possible to evaluate the full point set: from the estimated rotation and translation parameters an essential matrix can be constructed (E = [t]_× R_y) and the inliers can be tested against the epipolar geometry. This test is not limited to points on the ground plane, so the final inlier set contains all scene points. For the most accurate result, a non-linear optimization of the Sampson distance over all inliers is advised. The techniques of constructing the epipolar geometry and performing the non-linear optimization are also applicable to the other translation estimation methods.

B. 1pt method by discrete sampling of the relative height change

Translation estimation as explained in the previous section needs 1.5 point correspondences. However, if one of the remaining parameters is known, only a single point correspondence is needed to compute the remaining two parameters. This leads to a 1pt algorithm for the translation. It is possible to perform a discrete sampling of a suitable parameter within a suitable bounded range.
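The 1.5pt linear solution (Equations 16-18, here with the yaw already removed so only t̃ = t/d remains) can be sketched as follows; the helper names and the synthetic check are our own:

```python
def translation_1p5pt(p1, p2):
    """p1 = (xi, yi, xj, yj): ground-plane correspondence from view i to the
    de-rotated view j; p2 likewise.  Returns t~ = (tx, ty, tz), i.e. the
    translation scaled by the unknown plane distance d (Equation 17)."""
    xi, yi, xj, yj = p1
    xi2, yi2, xj2, _ = p2
    det = yi * yi2 * (xj - xj2)
    tx = (xj * yi * (xi2 - xj2) - xj2 * yi2 * (xi - xj)) / det
    tz = (yi * (xi2 - xj2) - yi2 * (xi - xj)) / det
    ty = 1.0 + yj * (yi * tx - xi) / (xj * yi)
    return tx, ty, tz

def project(xi, yi, t):
    """Forward model of Equation 13: map a ground-plane point from view i
    to the de-rotated view j under H = I - t~ [0 1 0]."""
    tx, ty, tz = t
    w = 1.0 - tz * yi
    return (xi - tx * yi) / w, (1.0 - ty) * yi / w

# Synthetic check: two ground-plane points under a known translation.
t_true = (0.10, 0.05, 0.20)
a, q = (0.3, 0.6), (-0.4, 0.8)
p1 = a + project(*a, t_true)
p2 = q + project(*q, t_true)
t_est = translation_1p5pt(p1, p2)
print(all(abs(e - g) < 1e-9 for e, g in zip(t_est, t_true)))  # True
```

Note that the formulation degenerates when y_i = 0 (horizon points) or x̃_j = x̃'_j, which is consistent with the paper's split into far points (rotation) and near ground-plane points (translation).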
This allows an exhaustive search for the globally optimal solution, i.e. the one that produces the highest number of inliers. The time complexity of this exhaustive search is linear in the number of point correspondences if the number of discrete samples is significantly smaller than the number of point correspondences. In order to use the sampling method, Equation 12 can be written as:

H = I - (t̃_y / d) [t̃_x/t̃_y, 1, t̃_z/t̃_y]^T [0 1 0]    (19)

In this variant the relative height change over ground, a = t̃_y/d, is sampled in discrete steps, which leads to an equation system with only 2 unknowns, b = t̃_x/t̃_y and c = t̃_z/t̃_y. Only 1 point is needed to compute a solution b and c for a given a. In the same way as before, we choose the second and third rows of Equation 15 to compute b and c:

b = (a x̃_j y_i + x_i ỹ_j - x̃_j y_i) / (a y_i ỹ_j)
c = (a b y_i + x̃_j - x_i) / (a x̃_j y_i)    (20)

From b and c we recover t̃ = [b, 1, c]^T up to scale, and the estimated translation t between views i and j is then also recovered up to scale by Equation 18.

C. 1pt method by discrete sampling of the x-z translation direction

The sampling method in the previous section worked by discretizing the relative height change between two views. Since this quantity is only defined up to scale, there is no obvious choice for the step size and bounds. However, if one samples the direction vector of the translation in the x-z plane, this amounts to discretizing an angle between 0 and 360 degrees, for which a meaningful step size is easily defined. For this variant, Equation 12 can be written as:

H = I - (sqrt(t̃_x^2 + t̃_z^2) / d) [cos δ, t̃_y / sqrt(t̃_x^2 + t̃_z^2), sin δ]^T [0 1 0]    (21)

The translation direction can be represented by an angle δ and sampled e.g. in steps of 1 degree from 0° to 360°, which leads to an equation system with only 2 unknowns, a = sqrt(t̃_x^2 + t̃_z^2)/d and b = t̃_y / sqrt(t̃_x^2 + t̃_z^2). Only 1 point is needed to compute a solution a and b for a given angle δ.
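Method B can be sketched as follows: each sampled relative height change a fixes (b, c) via Equation 20 from one correspondence, and the sample whose hypothesis best explains the remaining matches wins. The scoring function and names here are illustrative:

```python
def solve_bc(a, xi, yi, xj, yj):
    """Equation 20: given a sampled relative height change a = ty/d, recover
    b = tx/ty and c = tz/ty from a single ground-plane correspondence."""
    b = (a * xj * yi + xi * yj - xj * yi) / (a * yi * yj)
    c = (a * b * yi + xj - xi) / (a * xj * yi)
    return b, c

def project(xi, yi, a, b, c):
    # Forward model of Equation 19: H = I - a [b, 1, c]^T [0 1 0].
    w = 1.0 - a * c * yi
    return (xi - a * b * yi) / w, (1.0 - a) * yi / w

# One correspondence fixes (b, c) for every sampled a; score each sample by
# how many of the other correspondences its hypothesis reprojects correctly.
a_true, b_true, c_true = 0.05, 2.0, 4.0
pts = [(0.2, 0.5), (-0.3, 0.7), (0.4, 0.9)]
obs = [project(x, y, a_true, b_true, c_true) for x, y in pts]

def inliers(a, tol=1e-6):
    b, c = solve_bc(a, pts[0][0], pts[0][1], obs[0][0], obs[0][1])
    n = 0
    for (x, y), (u, v) in zip(pts[1:], obs[1:]):
        up, vp = project(x, y, a, b, c)
        if abs(up - u) < tol and abs(vp - v) < tol:
            n += 1
    return n

samples = [0.01 * k for k in range(1, 11)]       # discrete sampling of a
best_a = max(samples, key=inliers)
print(abs(best_a - a_true) < 1e-12)  # True
```

Since the sample count is fixed, the whole sweep stays linear in the number of correspondences, as claimed in the text.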
In the same way, we choose the second and third rows of Equation 15 to compute a and b:

a = (x̃_j - x_i) / (x̃_j y_i sin δ - y_i cos δ)
b = (a y_i ỹ_j cos δ + x̃_j y_i - x_i ỹ_j) / (a x̃_j y_i)    (22)

From the vector [cos δ, b, sin δ]^T we recover t̃ up to scale, and then obtain t by Equation 18.

D. 1pt method by discrete sampling of the in-plane scale change

The method described in this section is another variant of choosing a meaningful parameter for discrete sampling. For an easy explanation of the idea, imagine a camera setup with downward-looking cameras whose camera plane is parallel to the ground plane. The previously aligned camera setup, with the camera plane normal to the ground plane, can easily be transformed into such a setup by rotating the feature points by 90° around the x-axis of the camera, i.e. by multiplication with a rotation matrix R_d. Moving a camera that looks down at a plane (e.g. the street) up and down results in an in-plane scale change of the image: the points move inwards to or outwards from the center. The scale change directly corresponds to the effect of a translation in the z-direction. This makes the scale change a good parameter for discrete sampling, as the in-plane scale change can be expressed in pixel distances. In this approach, discrete values of the scale change are sampled, and the remaining translation direction in the x-y plane can be computed for every point correspondence from a single point. Points lying on the same plane have exactly the same translation shift for all feature matches.

We now derive the formula in detail. Let (x_i, y_i, 1) ↔ (x_j, y_j, 1) be the normalized homogeneous image coordinates of the points in the downward-looking views i and j, and let h_i and h_j be the heights of the downward-looking views i and j, respectively. The ground points can be represented in the camera coordinate systems of views i and j as X_i = h_i [x_i, y_i, 1]^T and X_j = h_j [x_j, y_j, 1]^T. The translation between views i and j can be computed directly:

t̃_d = h_j [x_j, y_j, 1]^T - h_i [x_i, y_i, 1]^T = [x_j h_j - x_i h_i,  y_j h_j - y_i h_i,  h_j - h_i]^T    (23)

We set the in-plane scale κ = f (h_i / h_j), where f is the focal length. Substituting κ into the above equation, we obtain the translation vector (up to scale) directly as a difference of image coordinates:

t̃_d = [f x_j - κ x_i,  f y_j - κ y_i,  f - κ]^T    (24)

By sampling the in-plane scale κ, we can compute the translation vector t̃_d from one point and choose the solution with the maximum number of inliers. The sampling interval is defined in pixels and allows a meaningful step size (e.g. 1 pixel) to be set. The final translation t between the two views is obtained as:

t = R_y^T R_d^T t̃_d    (25)

VI. EXPERIMENTS

We validate the performance of the proposed methods using both synthetic and real scene data. The experiments with synthetic scenes demonstrate the behavior of our derivations in the presence of image noise and IMU noise. The experiments on the KITTI data set [Geiger et al.] demonstrate the suitability of the methods for road driving scenarios, and also that the assumptions taken hold in real scenarios.

A. Experiments with synthetic data

To evaluate the algorithms on synthetic data we choose the following setup. The distance of the ground to the first camera center is set to 1. The baseline between the two cameras is set to 0.2, with a direction either along the x-axis of the first camera (sideways) or along the z-axis of the first camera (forward). Additionally, the second camera is rotated around every axis; the three rotation angles vary from -90° to 90°. The roll angle (around the z-axis) and the pitch angle (around the x-axis) are known. The generated scene points can be set to lie on the ground plane or be distributed freely in space.
We evaluate the accuracy of the presented algorithms on synthetic data under different image noise and IMU noise. The focal length is set to 1000 pixels. The solutions for relative rotation and translation are obtained by RANSAC or histogram voting. We assess the rotation and translation accuracy by the root mean square of the errors. We report the results on the data points within the first two intervals of a 5-quantile partitioning (quintiles) of 1000 trials. The proposed methods are also compared against the 2pt method [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. In all of the experiments, we compare the relative rotation and translation between views i and j separately. The error measure compares the angle difference between the true rotation and the estimated rotation. Since the estimated translation between views i and j is only known up to scale, we compare the angle difference between the true translation and the estimated translation. The errors are computed as follows:

• Rotation error: ξ_R = arccos((Tr(R_gt R^T) - 1)/2)
• Translation error: ξ_t = arccos((t_gt^T t)/(||t_gt|| ||t||))

R_gt, t_gt denote the ground-truth transformation and R, t are the corresponding estimated transformations.

1) Pure planar scene setting ("PLANAR"): In this setting the generated scene points are constrained to lie on the ground plane. The scene points consist of two parts: near points (0 to 5 meters) and far points (5 to 500 meters). Both parts have 200 randomly generated points. Figure 1(a) and (b) show the results of the 0.5pt method and histogram voting for rotation estimation for gradually increased image noise levels with perfect IMU data. It is interesting to see that our method performs better for forward motion than for sideways motion. The histogram voting has a higher error, because of the binning, which has more effect than the image noise. Figure 1(b) does not show a clear trend with increased image noise levels.
It seems that the influence of the sideways motion is stronger than the influence of the image noise. Figure 1(c)-(f) show the influence of increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. Figure 2 shows the results of the 1.5pt linear solution method, the 1pt method by sampling for the x-z translation direction and the 2pt method, for gradually increased image noise levels with perfect IMU data, or increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. Note that we use the histogram voting method to compute the rotation first, in order to compare the accuracy of the different translation estimation methods. The 1.5pt linear solution method and the 1pt method by sampling for the x-z translation direction are robust to the increased image noise and IMU data noise.

2) Mixed scene setting ("MIXED"): In this experiment not all the scene points were constrained to lie on the ground plane. Far points (from 5 to 500 meters distance) do not lie on the ground only: they are generated at heights that vary from 0 to 10 meters. The near points, however (from 0 to 5 meters distance), are constrained to lie on the ground plane. Both sets, near points and far points, consist of 200 randomly generated points each. Figure 3 shows the results of the 0.5pt method and histogram voting for rotation estimation for gradually increased image noise levels with perfect IMU data, or increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. Figure 4 shows the results of the 1.5pt linear solution method, the 1pt method by sampling for the x-z translation direction and the 2pt method, for gradually increased image noise levels with perfect IMU data or increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. In Figure 4(f) a sensitivity of the 1.5pt linear solution method to noise in the Pitch angle can be seen for sideways motion.
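The error measures ξ_R and ξ_t used throughout these comparisons translate directly into code (a sketch; the clipping guards against round-off leaving the domain of arccos):

```python
import numpy as np

def rotation_error_deg(R_gt, R):
    # xi_R = arccos((Tr(R_gt R^T) - 1) / 2)
    c = (np.trace(R_gt @ R.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error_deg(t_gt, t):
    # xi_t = arccos(t_gt^T t / (||t_gt|| ||t||)); invariant to the
    # unknown scale of the estimated translation.
    c = float(t_gt @ t) / (np.linalg.norm(t_gt) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```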
The experiments on synthetic data validate the derivation of the minimal solution solvers and quantify the stability of the solvers with respect to image noise and IMU noise. The synthetic data did not contain outliers. The use of the methods within a RANSAC loop for outlier detection is part of the experiments using real data.

B. Experiments on real data

Experiments on real data were performed on the KITTI data set [START_REF] Geiger | Are we ready for autonomous driving? the kitti vision benchmark suite[END_REF]. For the evaluation we utilized all the available 11 sequences which have ground truth data (labeled from 0 to 10 on the KITTI webpage) and together consist of around 23000 images. The KITTI data set provides a challenging environment for our experiments; however, such a road driving scenario fits our method very well. In all the images a large scene plane is visible (the road) and features at far distances are present as well. For our experiments we performed SIFT feature matching [START_REF] Lowe | Distinctive image features from scale-invariant keypoints[END_REF] between consecutive frames. The ground truth data of the sequences is used to pre-rotate the feature points by R_x R_z, basically simulating IMU measurements. Then the remaining relative rotation and translation are estimated with our methods. We perform 3 sets of experiments with the KITTI data set. In a first experiment we test the effectiveness of our proposed quick test for rotation inliers. In a second experiment we compute rotation and translation using all our proposed methods and compare them to the ground truth. We also compare the results to the 5pt method [START_REF] Nistér | An efficient solution to the five-point relative pose problem[END_REF] and the 2pt method [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. In a third experiment we test the quality of the inlier detection by using the different methods.
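The pre-rotation of the feature points by the known roll and pitch can be sketched as follows (conventions assumed here: roll about the z-axis, pitch about the x-axis, applied to normalized homogeneous image points):

```python
import numpy as np

def pre_rotate(points_h, roll, pitch):
    """Remove the known roll (R_z) and pitch (R_x) from normalized
    homogeneous points (N x 3 array), so that only the yaw and the
    translation remain to be estimated. Angles are in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_z = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    R_x = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    rotated = points_h @ (R_x @ R_z).T   # rotate every row vector
    return rotated / rotated[:, 2:3]     # re-normalize to z = 1
```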
1) Rotation estimation inlier selection using y-coordinate test: The 0.5pt method for rotation estimation works under the assumption that a scene point is far away. Using RANSAC or histogram voting, the inliers of this assumption can be found. However, even before computing rotation hypotheses with the 0.5pt method, the point correspondences can already be checked for whether they stem from far points, in order to remove all the near points. For a far point in this setting, the y coordinate of a point feature does not change. So any point correspondence where the y coordinate changes is a near feature that can be discarded for the rotation estimation. This allows a large fraction of the feature points to be discarded before rotation estimation, making it more efficient. Table I shows how many of the feature points can be removed based on this simple criterion. For this test a feature was classified as a far feature if its y coordinate does not change by more than 1 pixel. NumSIFT is the average number of point correspondences within a sequence, NumRemove is the average number of outliers removed, and Ratio = NumRemove/NumSIFT is the average percentage of removed outliers. It can be seen that at least 52% of the feature matches can be removed by this criterion.

2) Comparison of rotation and translation estimation to ground truth: In this experiment we compare the rotation and translation estimates of our methods to the ground truth and also to the results of the 5pt method [START_REF] Nistér | An efficient solution to the five-point relative pose problem[END_REF] and the 2pt method [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. The rotation value at the peak in the histogram is then selected. Both the 0.5pt method and the histogram voting method provide better results than the 5pt method and the 2pt method. The histogram voting method is slightly more accurate than the 0.5pt method.
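The y-coordinate pre-filter described above amounts to a one-line test per correspondence (a sketch with an assumed data layout: each match is a pair of (u, v) pixel coordinates in the pre-rotated views):

```python
def split_far_near(matches, threshold_px=1.0):
    """Keep matches whose y (v) coordinate moves by at most threshold_px
    as candidate far points for rotation estimation; the rest are near
    features and are discarded from that step."""
    far, near = [], []
    for (u_i, v_i), (u_j, v_j) in matches:
        (far if abs(v_j - v_i) <= threshold_px else near).append(
            ((u_i, v_i), (u_j, v_j)))
    return far, near
```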
In subsequent experiments, we use the histogram voting method to estimate the rotation first, then estimate the translation using the different methods. The translation error for all sequences is shown in Table III. All four methods for translation estimation, the 1.5pt linear solution method (1.5pt lin), the 1pt method by discrete sampling of the relative height change (1pt h), the 1pt method by discrete sampling for the x-z translation direction (1pt d) and the 1pt method by discrete sampling of the in-plane scale change (1pt s), are compared to the ground truth. The 1.5pt method is used within a RANSAC loop with a fixed number of 100 iterations and an inlier threshold of 2 pixels. For the 1pt methods an exhaustive search is performed and the solution with the highest number of inliers is used. The table shows that all of our methods provide better results than the 2pt method. The 1pt methods for translation estimation and the 5pt method are more accurate than the linear solution using 1.5pt. The 1pt d method offers the best overall performance among all the translation estimation methods.

3) Inlier recovery rate: The main usage of our proposed algorithms should be to efficiently find a correct inlier set, which can then be used for accurate motion estimation using e.g. non-linear optimization (possibly also using our motion estimates as initial values). We therefore perform an experiment that tests how many of the real inliers (calculated from the ground truth) can be found by our methods. This inlier recovery rate is shown in Table IV as an average over all sequences (with an inlier threshold of 2 pixels). All of our four methods can be used to find a correct inlier set, and they provide a more complete inlier set than the 2pt method. The inlier recovery rate of the 1pt d method is slightly better than that of the 5pt method. Inlier detection using the 1pt d method is shown in Figure 5.
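The histogram-voting scheme used for the rotation in these experiments can be sketched as follows (a minimal sketch; the bin width is an assumed parameter):

```python
import numpy as np

def histogram_voting(angle_hypotheses_deg, bin_width_deg=0.1):
    """One rotation hypothesis per correspondence is entered into a
    histogram; the rotation at the peak bin is selected."""
    bins = np.round(np.asarray(angle_hypotheses_deg) / bin_width_deg).astype(int)
    values, counts = np.unique(bins, return_counts=True)
    return values[np.argmax(counts)] * bin_width_deg
```

Hypotheses from near points scatter over the histogram, while hypotheses from far points concentrate in one bin, so the peak is a robust consensus without random sampling.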
VII. CONCLUSION

The presented algorithms allow computing motion estimates and inlier sets by exhaustive search or by histogram voting. This is an interesting alternative to the traditional RANSAC method: RANSAC finds an inlier set with high probability, but there is no guarantee that it is really a good one. Also, our experiments demonstrate that the assumptions taken in these algorithms are commonly met in road driving scenes (e.g. the KITTI data set), which could be a very interesting application area for them.

Fig. 1. Rotation error with the "PLANAR" setting: evaluation of the 0.5pt method, histogram voting and the 2pt method. Left: forward motion; right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation: (c) and (d) with Roll angle noise, (e) and (f) with Pitch angle noise.

Fig. 2. Translation error with the "PLANAR" setting: evaluation of the 1.5pt linear solution method, the 1pt method by sampling for the x-z translation direction and the 2pt method. Left: forward motion; right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation: (c) and (d) with Roll angle noise, (e) and (f) with Pitch angle noise.

Fig. 3. Rotation error with the "MIXED" setting: evaluation of the 0.5pt method, histogram voting and the 2pt method. Left: forward motion; right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation: (c) and (d) with Roll angle noise, (e) and (f) with Pitch angle noise.

Fig. 4. Translation error with the "MIXED" setting: evaluation of the 1.5pt linear solution method, the 1pt method by sampling for the x-z translation direction and the 2pt method. Left: forward motion; right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation: (c) and (d) with Roll angle noise, (e) and (f) with Pitch angle noise.

Fig. 5. Inlier detection example. Left: previous frame; right: current frame. (a) Ground truth inliers: 885 matches. (b) Inliers detected by the 1pt d method: 884 matches.

TABLE I. EFFECT OF THE Y-COORDINATE TEST FOR OUTLIER REMOVAL.

Sequence            NumSIFT  NumRemove  Ratio
00 (4541 images)    634      463        74.15%
01 (1101 images)    398      213        52.26%
02 (4661 images)    648      508        78.74%
03 (801 images)     886      572        65.50%
04 (271 images)     561      421        74.83%
05 (2761 images)    672      450        69.82%
06 (1101 images)    539      383        71.32%
07 (1101 images)    750      446        63.97%
08 (4071 images)    658      460        71.65%
09 (1591 images)    581      434        75.27%
10 (1201 images)    652      454        73.35%

Table II lists the results of the rotation estimation and Table III lists the results for the translation estimation. In this experiment the relative rotations and translations between two consecutive images are compared to the ground truth relative poses.
The tables show the median error for each individual sequence. For rotation estimation both the RANSAC variant and the histogram voting scheme were tested. For the RANSAC variant a fixed number of 100 iterations with an inlier threshold of 2 pixels has been used. For the histogram voting, a rotation hypothesis is computed exhaustively for every point correspondence and entered into a histogram.

TABLE II. ROTATION ERROR FOR KITTI SEQUENCES [DEGREES].

Seq.  0.5pt method  Histogram voting  5pt   2pt
00    0.060         0.051             0.13  0.24
01    0.073         0.063             0.12  0.23
02    0.068         0.057             0.12  0.26
03    0.068         0.057             0.11  0.21
04    0.074         0.030             0.11  0.17
05    0.049         0.032             0.11  0.21
06    0.073         0.050             0.11  0.21
07    0.052         0.034             0.11  0.19
08    0.051         0.037             0.12  0.19
09    0.079         0.094             0.12  0.24
10    0.059         0.048             0.12  0.22

TABLE III. TRANSLATION ERROR FOR KITTI SEQUENCES [DEGREES].

Seq.  1.5pt lin  1pt h  1pt d  1pt s  5pt   2pt
00    4.23       1.90   1.64   1.58   1.93  8.03
01    7.34       2.01   1.18   1.20   1.41  10.74
02    3.68       1.83   1.53   1.54   1.53  6.47
03    4.69       2.13   1.88   2.58   2.12  8.61
04    2.64       0.95   0.88   0.92   1.19  5.45
05    3.92       1.57   1.37   1.34   1.67  7.90
06    4.02       1.27   1.20   1.12   1.37  5.57
07    4.89       2.20   1.82   1.89   2.37  10.09
08    4.23       2.17   1.86   1.84   2.06  7.41
09    4.20       2.04   1.53   1.53   1.54  7.20
10    3.90       1.78   1.61   1.58   1.73  7.39

TABLE IV. INLIER RECOVERY RATE FOR ALL KITTI SEQUENCES.

Seq.  1.5pt lin  1pt h   1pt d   1pt s   5pt     2pt
all   88.47%     96.97%  98.29%  96.51%  98.27%  84.37%

ACKNOWLEDGMENT

This work has been partially funded by the CopTer Project of Grands Réseaux de Recherche Haut-Normands.
01756817
en
[ "sdv.bv.ap", "info" ]
2024/03/05 22:32:10
2017
https://institut-agro-rennes-angers.hal.science/hal-01756817/file/ELVIS_JOBIM2017.pdf
Fabrice Dupuis, Aurélie Lelièvre, Sandra Pelletier, Tatiana Thouroude, Julie Bourbeillon, Sylvain Gaillard

PREMS / ELVIS: A local plant biological resource management system

The management of biological resource collections is a major key to the quality of research results. It is also mandatory to keep a trace from the data back to the studied organism through every experimental step, including sampling, culture conditions and the description of the subject of the study. To achieve this goal in the context of plant science, there is some notable software such as Doriane, LabKey and GreenGlobal. After evaluating these solutions with other laboratories we came to the conclusion that, for different reasons, none of them fulfills our needs, especially when dealing with perennial plants such as fruit trees (apple or pear trees) or ornamental bushes (rose). We then decided to build our own solution based on the needs and feedback from the different teams of the IRHS in the fields of biological resource management, breeding, genetics, molecular biology, physiology and phenotyping. We introduce here two pieces of software:
- ELVIS: an information system that takes care of data management and provides an extensible set of Python libraries to interact with. These libraries are used here to provide a JSON-RPC API exposed as web services. This is the foundation of the LIMS of our laboratory.
- PREMS: a dynamic web interface for biological resource management. This is one of the interfaces developed to interact with ELVIS.

PREMS, the plant resource management system: a multi-criteria Variety search gives a list of Varieties. A selection in that list displays a detailed view of the Lots and Accessions for that Variety.
This view provides quick access to denomination and passport data. On Lots, the view gives access to phenotyping notations and to the location of the plants or the seed bags. The user can also enter new notations. ELVIS and PREMS are made available to the community through the SourceSup forge under the CeCILL license. ELVIS can provide access to the data by implementing compatible APIs to interact with other tools. For instance, we are deploying a PHIS installation with the integration of the IRHS teams in the PHENOM project and are working on ways to make the two applications interact.

https://sourcesup.renater.fr/projects/elvis/
https://sourcesup.renater.fr/projects/prems/

IRHS, INRA, AGROCAMPUS-Ouest, Université d'Angers, SFR 4207 QUASAV, 42 rue Georges Morel, 49071 Beaucouzé Cedex
01756824
en
[ "spi", "spi.signal", "spi.nrj", "math.math-pr" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01756824/file/Uncertainty%20Analysis_renewable_energy_2nd%20review_bf.pdf
Xingyu Yan, Dhaker Abbes, Bruno Francois (email: [email protected])

Uncertainty Analysis for Day Ahead Power Reserve Quantification in an Urban Microgrid Including PV Generators

Keywords: Power reserve scheduling, renewable energy sources, forecast errors, uncertainty analysis, reliability.

Setting an adequate operating power reserve (PR) to compensate unpredictable imbalances between generation and consumption is essential for power system security. The operating power reserve should be carefully sized, but also ideally minimized and dispatched, to reduce operation costs with a satisfying security level. Although several energy generation and load forecasting tools have been developed, decision-making methods are required to estimate the operating power reserve amount and its dispatch over generators during small time windows, with adaptive capabilities to markets such as new ancillary service markets. This paper proposes an uncertainty analysis method for power reserve quantification in an urban microgrid with a high penetration ratio of PV (photovoltaic) power. First, forecasting errors of PV production and load demand are estimated one day ahead by using artificial neural networks. Then two methods are proposed to calculate the net demand error one day ahead. The first performs a direct forecast of the error; the second calculates it from the available PV power and load demand forecast errors. This remaining net error is analyzed with dedicated statistical and stochastic procedures. Hence, according to an accepted risk level, a method is proposed to calculate the required PR for each hour.
To maintain the security and reliability of grids with a high share of renewable generators, primary, secondary and tertiary regulation as well as spinning reserve are now required from renewable generators in more and more grid codes [START_REF] Ipakchi | Grid of the future[END_REF][START_REF] Galiana | Scheduling and pricing of coupled energy and primary, secondary, and tertiary reserves[END_REF]. This operating power reserve should be ideally minimized to reduce system costs with a satisfying security level. Typically, PV power generation forecasting is needed to optimize the operation and to reduce the cost of power systems, especially for the scheduling and dispatching of the required hourly operating reserve [START_REF] Mills | Integrating Solar PV in Utility System Operations[END_REF]. However, the uncertainty associated with forecasts cannot be eliminated, even with the best model tools. In addition to the load demand uncertainty, the combination of power generation and consumption variability with forecast uncertainty makes it more difficult for power system operators to schedule and to set the power reserve level. Therefore, the uncertainties from both generation and consumption must be taken into account by an accurate stochastic model for power system management. In addition, forecasting errors from system uncertainty analysis can be used to set the power reserve [START_REF] Sobu | Dynamic optimal schedule management method for microgrid system considering forecast errors of renewable power generations[END_REF]. Historically, most conventional utilities have adopted deterministic criteria for the reserve requirement: the operating rules required the PR to be greater than the capacity of the largest on-line generator, or a fraction of the load, or equal to some function of both of them. Those deterministic criteria are widely used because of their simplicity and ease of use.
However, these deterministic calculation methods are gradually being replaced by probabilistic methods that respond to the stochastic factors governing system reliability. Several research works have focused on calculating the total system uncertainty from all the variable sources. Based on dynamic simulations, the study in [START_REF] Delille | Dynamic frequency control support by energy storage to reduce the impact of wind and solar generation on isolated power system's inertia[END_REF] focuses on the dynamic frequency control of an isolated system and on the reduction of the impact of large shares of wind and PV power. However, this work did not consider other aspects, such as variability and forecast accuracy. A deterministic approach is proposed in [START_REF] Vosa | Revision of reserve requirements following wind power integration in island power systems[END_REF] to analyze the flexibility of thermal generation to balance wind power variations and prediction errors. A stochastic analysis could improve it in order to quantify the power reserve with a risk index. A stochastic model was developed in [START_REF] Sansavini | A stochastic framework for uncertainty analysis in electric power transmission systems with wind generation[END_REF] to simulate the operations and the line disconnection events of the transmission network due to overloads beyond the rated capacity. The issue is that system states, in terms of power request and supply, are critical for network vulnerability and may induce a cascade of line disconnections leading to a massive network blackout. An insurance strategy is proposed in [START_REF] Yang | Insurance strategy for mitigating power system operational risk introduced by wind power forecasting uncertainty[END_REF] to cover the possible imbalance cost that a wind power producer may incur in electricity markets.
Monte Carlo simulations have been used to estimate the insurance premiums; such analyses, however, require a significant calculation time. Our previous works in [START_REF] Buzau | Quantification of operating power reserve through uncertainty analysis of a microgrid operating with wind generation[END_REF][START_REF] Yan | Operating power reserve quantification through PV generation uncertainty analysis of a microgrid[END_REF] showed that forecasting errors from system uncertainty analysis can be used for PR setting. Following these promising results and experiences, we have carried out further investigations on rigorous methods to quantify the required PR. The task is to calculate it by considering the uncertainties from the PV prediction and the load forecast, or with a direct uncertainty estimation. In the second part of this paper, PV power and load uncertainty and variability are analyzed. Then, artificial neural network based prediction methods are applied to forecast PV power, load demand and errors. In the third part, the forecasted Net Demand (ND) uncertainty is obtained, for each hour of the next day, as the difference between the forecasted production uncertainty and the forecasted load uncertainty. Two methods are detailed to calculate the ND forecast errors. An hourly probability density function of all predicted ND forecast errors is used for the error analysis. In the fourth part, a method is explained to assess the accuracy of these predictions and to quantify the required operating PR to compensate the system power imbalance due to these errors. The power reserve is obtained by choosing a risk level related to two reliability assessment indicators: the loss of load probability (LOLP) and the expected energy not served (EENS). Finally, this management tool is demonstrated through an illustrative example.
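As a sketch of how an accepted risk level can map to a reserve level (an illustration of the idea, not the paper's exact procedure): with a chosen LOLP, the hourly PR can be read off as a quantile of the ND forecast-error sample.

```python
import numpy as np

def power_reserve_from_errors(nd_errors_kw, lolp=0.05):
    """Take the (1 - LOLP) quantile of the hourly net-demand forecast
    errors as the required upward reserve: the reserve then fails to
    cover the error only with probability `lolp`."""
    return float(np.quantile(np.asarray(nd_errors_kw), 1.0 - lolp))
```

Accepting a larger LOLP lowers the quantile and hence the scheduled reserve, which is the cost/security trade-off discussed above.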
II. METHODOLOGY

A. PV Power and Load Uncertainty Analysis

The PV power variability is the expected change in generation, while the PV power uncertainty is the unexpected change from what was anticipated, such as sudden cloud cover. The former depends on the latitude and the rotation of the Earth, while the latter is mostly caused by uncertain conditions, such as cloud variations over the PV plant. The movement of clouds introduces a significant uncertainty that can result in rapid fluctuations in solar irradiance and therefore in PV power output. However, the influence of a moving cloud and, hence, the shading of an entire PV site depend on the PV area, cloud speed, cloud height and many other factors. Data from solar installations covering a large spatial extent have an hourly temporal dynamic, while individual zones have instantaneous dynamics, as in local distribution networks or microgrids. The daily operation of a power system should be matched to load variations to maintain system reliability. This reliability refers to two areas:
- system adequacy, which depends on sufficient facilities within the system to satisfy system operational constraints and load demand, and
- system security, which is the system's ability to respond to dynamic disturbances.
When RES represent a significant part of the power generation, the system operating power reserve must be larger to regulate the variations and maintain the security level. This additional power is required to stabilize the electrical network. Classically this power reserve is provided by controllable generators (gas turbines, diesel plants, etc.). Today, the increase of balancing power reserves leads to a significant increase in the power system operating cost, and a system may limit the PV power penetration because of the variability and uncertainty over short time scales. There are different ways to manage variability and uncertainty.
In general, system operators and planners use mechanisms including forecasting, scheduling, economic dispatch and power reserves to ensure performances that satisfy reliability standards at the least cost. The earlier system operators and planners know what kind of variability and uncertainty they will have to deal with, the more options they will have to accommodate it and the cheaper it will be. The key task of variability and uncertainty management is to maintain a reliable operation of the power system (grid-connected or isolated) while keeping down costs. Energy management of electrical systems is usually implemented over different time scales. One day ahead, system operators have to balance the load demand with the electrical generation by planning the starting and set points of controllable generators on an hourly time step. Risks are also considered, and thus a power reserve also has to be planned hourly. During the day, an unexpected PV power lack is compensated by injecting a primary power reserve. The PV variability can be separated into different time scales associated with different impacts on grid management and costs. Consequently, more capacity to compensate errors in forecasts or unexpected events must be accommodated. The instantaneous PV power output is affected by many correlated external and physical inputs, such as irradiance, humidity, pressure, cloud cover percentage, air/panel temperature and wind speed. The power output per unit surface is modeled by [START_REF] Fuentes | Application and validation of algebraic methods to predict the behaviour of crystalline silicon PV modules in Mediterranean climates[END_REF]:

P_PV(t) = η · A · I_r(t) · (1 - C_p · (T(t) - 25))    (1)

where η is the power conversion efficiency of the module (%), A is the surface area of the PV panels (m²), I_r is the global solar radiation (kW/m²), T is the outside air temperature (°C), and C_p is the cell maximum power temperature coefficient (equal to 0.0035, although it can vary from 0.005 to 0.003 per °C in crystalline silicon). The PV power, solar irradiance and temperature of our lab PV plant have been recorded during three continuous days (22/06/2010 - 24/06/2010) and are presented in Fig. 1. The PV power variability is highly correlated with the irradiance, as well as with the temperature, while the PV power uncertainty is mostly caused by irradiance changes. Sensed PV power data points can be drawn against the sensed irradiance and temperature data points in order to highlight correlations (Fig. 2). The local load consumption demand is also highly unpredictable and quite random. It depends on different factors, such as the economy, the time, the weather, and other random effects. However, for power system planning and operation, load demand variation and uncertainty analysis are crucial for power flow studies or contingency analysis. As for PV production, load demand variations exist on all time scales and system actions are needed for power control in order to maintain the balance.

B. Power Forecasting Methodology

1) PV Power Forecasting
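Before turning to the forecasting methods, the output model of Eq. (1) can be implemented in a few lines (a sketch; the efficiency value used in the example is an assumed figure):

```python
def pv_power_kw(irradiance_kw_m2, temp_c, area_m2, efficiency, c_p=0.0035):
    """Eq. (1): PV output for a module of given area and conversion
    efficiency, derated linearly by c_p per degC above 25 degC."""
    return efficiency * area_m2 * irradiance_kw_m2 * (1.0 - c_p * (temp_c - 25.0))
```

For a 10 m² module with an assumed 15% efficiency under 1 kW/m², the output is 1.5 kW at 25 °C and drops to about 1.45 kW at 35 °C, illustrating the temperature derating.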
For PV power, one method consists in forecasting solar radiation and then deriving the PV power with a mathematical model of the PV generator. A second one directly predicts the PV power output from environmental data (irradiance, temperature, etc.). Statistical analysis tools are generally used, such as linear/multiple-linear/non-linear regression and autoregressive models that are based on time series regression analysis [START_REF] De Giorgi | Photovoltaic power forecasting using statistical methods: impact of weather data[END_REF]. These forecasting models rely on modeling relationships between influential inputs and the produced output power. Consequently, mathematical model calibration and the parameter adjustment process take a long time. Meanwhile, some intelligence-based electrical power generation forecast methods, such as expert systems, fuzzy logic and neural networks, are widely used to deal with uncertainties of RES power generation and load demand [START_REF] Espinar | Photovoltaic Forecasting: A state of the art[END_REF][START_REF] Ghods | Different methods of long-term electric load demand forecasting; a comprehensive review[END_REF]. In daily markets, the hourly PV power output for the next day (day D+1) at time step h is represented as the sum of a day-ahead hourly forecast PV power (P̃v_h) and the forecast error (ε_h^PV):

Pv_h = P̃v_h + ε_h^PV    (2)

2)Load Demand Forecasting
For load demand forecast, numerous variables directly or indirectly affect the accuracy. Until now, many methods and models have already been tried out. In [START_REF] Ghods | Different methods of long-term electric load demand forecasting; a comprehensive review[END_REF], several long-term (month or year) load forecasting methods are introduced; they are very important for planning and developing future generation, transmission and distribution systems.
In [START_REF] Hong | Long term probabilistic load forecasting and normalization with hourly information[END_REF], a long-term probabilistic load forecasting method is proposed with three modernized elements: predictive modeling, scenario analysis, and weather normalization. Long-term and short-term load forecasts play important roles in the formulation of secure and reliable operating strategies for the electrical power system. The objective is to improve the forecast accuracy in order to optimize power system planning and to reduce costs. The day-ahead actual load demand at time step h (L_h) is assumed to be the sum of the day-ahead forecasted load (L̃_h) and an error (ε_h^L):

L_h = L̃_h + ε_h^L    (3)

3)Net Demand Forecasting
Knowing the PV power forecast and the load demand forecast, the net demand forecast (ÑD_h) for a given time step h is expressed as:

ÑD_h = L̃_h - P̃v_h    (4)

The real net demand (ND_h) is composed of the forecasted day-ahead ND and a forecast error (ε_h^ND):

ND_h = ÑD_h + ε_h^ND    (5)

C.Application of Back-Propagation ANN to Forecast
In order to predict the net demand errors, as well as the PV and load forecast errors, we have developed several back-propagation (BP) Artificial Neural Networks (ANN) [START_REF] Francois | Orthogonal considerations in the design of neural networks for function approximation[END_REF]. Compared with conventional statistical forecasting schemes, an ANN has some additional advantages, such as simple adaptability to online measurements, data error tolerance and no requirement for extra information. Since the fundamentals of ANN-based predictors can be found in many sources, they are not recalled here.

III.NET DEMAND UNCERTAINTY ANALYSIS
A.Net Demand Uncertainty
In order to simplify the study, the uncertainties coming from conventional generators and network outages are ignored and only load and PV power uncertainties are considered. The ND forecasting error then represents the ND uncertainty. Two possible methods are proposed to calculate the forecasted net demand error.
1)First Method: Forecast of the Day-ahead Net Demand Error
The real ND is the difference between the sensed load and the sensed PV power. Based on the historical sensed and forecasted database of the load demand and PV production, the past forecasted ND is calculated as the difference between the past forecasted load and the past forecasted PV power at time step h. The past ND forecast errors are then deduced by comparison with the past real ND. These data are used to calculate the day-ahead forecast of ND errors ε̃_h^ND(D+1) (Fig. 3). The obtained ND error forecast can be characterized by its mean and standard deviation (respectively μ_h^ND and σ_h^ND).
2)Second Method: Calculation from the PV Power and the Load Forecast Errors Estimation
A second method is to define the ND uncertainty as the combination of PV power and load uncertainties. It is generally assumed that PV power and load forecast errors are unrelated random variables. So, firstly the day-ahead PV power and load forecasting errors (ε̃_h^PV and ε̃_h^L) are estimated independently. Then, the last 24-hour load forecast errors and PV power forecast errors are calculated as the differences between the sensed and forecasted load, and between the sensed and forecasted PV power, respectively (Fig. 4). The mean values and standard deviations of those forecasting errors can be obtained. The ND forecasting error is then obtained as a new random variable built from these two independent variables.
Since ND_h = L_h - Pv_h, eqs. (2), (3) and (5) give ε_h^ND = ε_h^L - ε_h^PV. The resulting pdf is also a normal distribution with the following mean and standard deviation [START_REF] Ortega-Vazquez | Estimating the spinning reserve requirements in systems with significant wind power generation penetration[END_REF][START_REF] Bouffard | Stochastic security for operations planning with significant wind power generation[END_REF]:

μ_h^ND = μ_h^L - μ_h^PV    (6)

σ_h^ND = √((σ_h^L)² + (σ_h^PV)²)    (7)

Equations (6) and (7) combine the mean and standard deviation of the load and PV power forecast error distributions N(μ_h^L, (σ_h^L)²) and N(μ_h^PV, (σ_h^PV)²).

B.Assessment of the Forecasting Uncertainty
The predicted errors (ε̃_h^ND) of the ND forecast (ÑD_h) can be characterized with the normal probability density function (Fig. 5):

F(B | μ_h^ND, σ_h^ND) = ∫_{-∞}^{B} (1/(σ_h^ND √(2π))) exp(-(ε - μ_h^ND)² / (2(σ_h^ND)²)) dε    (8)

The forecasting uncertainty can be represented as upper and lower bound margins around the ND forecast. Bound margins (B) are extracted by the normal inverse cumulative distribution function for a desired probability index x (Fig. 5):

B = F⁻¹(x | μ_h^ND, σ_h^ND), i.e. F(B | μ_h^ND, σ_h^ND) = x    (9)

IV.POWER RESERVE QUANTIFICATION
A.Reliability Assessment
Resulting from the uncertainty assessment, the pdf of the forecasted ND errors in a given time step is considered for the calculation of the power reserve [START_REF] Holttinen | Methodologies to determine operating reserves due to increased wind power[END_REF].
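The second method can be sketched with Python's standard library: since ND = L - Pv, the ND error is the difference of two independent normal errors, and the bound margins of eq. (9) come from the inverse normal cdf. All numeric values below are illustrative assumptions.

```python
from statistics import NormalDist

def nd_error_moments(mu_l, sigma_l, mu_pv, sigma_pv):
    """Eqs. (6)-(7): moments of the ND error eps_L - eps_PV for
    independent normal load and PV forecast errors."""
    mu_nd = mu_l - mu_pv
    sigma_nd = (sigma_l ** 2 + sigma_pv ** 2) ** 0.5
    return mu_nd, sigma_nd

def uncertainty_bounds(mu_nd, sigma_nd, prob=0.90):
    """Eq. (9): symmetric quantile bounds such that the ND error lies
    in [b_lower, b_upper] with probability `prob`."""
    dist = NormalDist(mu_nd, sigma_nd)
    return dist.inv_cdf((1.0 - prob) / 2.0), dist.inv_cdf((1.0 + prob) / 2.0)

# Illustrative hourly error statistics (kW), not measured values:
mu_nd, sigma_nd = nd_error_moments(mu_l=0.1, sigma_l=2.0, mu_pv=0.2, sigma_pv=1.5)
b_low, b_up = uncertainty_bounds(mu_nd, sigma_nd, prob=0.90)
```

Repeating `uncertainty_bounds` for each of the next 24 hourly moments draws the uncertainty envelopes of Figs. 11-13.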
To estimate the impact of forecast ND uncertainty, two common reliability assessment parameters are used: the loss of load probability (LOLP) and the expected energy not served (EENS) [START_REF] Li | A multi-state model for the reliability assessment of a distributed generation system via universal generating function[END_REF][START_REF] Wang | Spinning reserve estimation in microgrids[END_REF][START_REF] Liu | Quantifying spinning reserve in systems with significant wind power penetration[END_REF]. LOLP represents the probability that the load demand (L_h) exceeds the PV power (P_h) at time step h:

LOLP = prob(L_h - P_h > 0) = ∫_R^{+∞} pdf(ε_h) dε_h    (10)

prob(L_h - P_h > 0) is also the probability that the power reserve (R) is insufficient to satisfy the load demand at time step h. Meanwhile, EENS measures the magnitude of the load demand not served:

EENS_h = prob(L_h - P_h > 0) · (L_h - P_h)    (11)

where (L_h - P_h) is the missed power at time step h. In this situation, the grid operator can either disconnect a part of the loads or use the power reserve to increase the power production. After obtaining each of the next 24 hourly forecast ND pdfs, an hourly day-ahead reliability assessment can be attained. Electrical system operators can use this reliability to calculate the system security level.

B.Risk-constrained Energy Management
A reserve characteristic according to a risk level and for each time step can be obtained. With a fixed risk index, the operator can then easily quantify the power reserve [START_REF] Vosa | Revision of reserve requirements following wind power integration in island power systems[END_REF]. As shown in Fig. 6, the sum of the shaded areas represents the accepted risk of violation with x% of LOLP. R is the power reserve needed to compensate the remaining power unbalance.
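Under the normal-error model, the link between the risk index and the reserve reduces to quantile evaluations of the ND error distribution. The sketch below also adds a truncated-normal EENS estimate, which is a finer alternative to the point approximation of eq. (11); the error statistics are illustrative assumptions.

```python
from statistics import NormalDist

def reserve_for_lolp(mu_nd, sigma_nd, lolp):
    """Smallest reserve R with prob(ND error > R) = LOLP,
    i.e. R = F^-1(1 - LOLP | mu, sigma)."""
    return NormalDist(mu_nd, sigma_nd).inv_cdf(1.0 - lolp)

def lolp_of_reserve(mu_nd, sigma_nd, r):
    """Eq. (10): probability that the ND error exceeds the reserve R."""
    return 1.0 - NormalDist(mu_nd, sigma_nd).cdf(r)

def eens_of_reserve(mu_nd, sigma_nd, r):
    """Truncated-normal estimate of the expected shortfall above R:
    E[(eps - R)+] = (mu - R) * Phi(z) + sigma * phi(z), z = (mu - R)/sigma."""
    std = NormalDist()
    z = (mu_nd - r) / sigma_nd
    return (mu_nd - r) * std.cdf(z) + sigma_nd * std.pdf(z)

# Illustrative ND error statistics (kW):
r_10pct = reserve_for_lolp(0.0, 2.5, lolp=0.10)   # ~3.2 kW of reserve
r_1pct = reserve_for_lolp(0.0, 2.5, lolp=0.01)    # tighter risk, more reserve
```

As in Fig. 6, lowering the accepted LOLP pushes the reserve further into the tail of the ND error distribution.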
So, the reliability assessment can be done with the hourly cumulative distribution function (cdf) obtained from the normal difference distribution of ND errors. The cdf represents the probability that the random variable (here the ND error) is less than or equal to x. This assessment has been made under the assumption of a positive hourly forecasted ND. Otherwise, if the forecasted ND is negative, a power reserve for the same reliability level is unnecessary (the power generation exceeds the load demand). So the reliability has been assessed by considering only positive forecasted ND errors (L_h - P_h) for each time step. LOLP is then deduced with:

LOLP = 1 - prob(L_h - P_h ≤ 0) = 1 - ∫_{-∞}^{R} pdf(ε_h) dε_h    (12)

When the LOLP equals the risk index x%, the reserve power (R) covers the remaining probability that the load demand exceeds the PV power generation (blue part in Fig. 6).

V.ILLUSTRATIVE CASE STUDY
A.Presentation and Data Collection
The studied urban microgrid has a 110 kW load peak and is powered with 17 kW of PV panels and three micro-gas turbines of 30 kW, 30 kW and 60 kW. Sensed data from our 17 kW PV plant located on the lab roof have been recorded in 2010 and 2013. For the load forecasting, past daily French power consumptions have been scaled to obtain per-unit values of local power consumption with the same characteristics and dynamics. A part of this database has been used to design the ANN-based forecasting tool, a part to assess the estimation quality and a third one to implement the application of the proposed method in a real situation [START_REF] De Rocha | Photovoltaic forecasting with artificial neural networks[END_REF]. The ANN has been trained with past recorded data from the training set to predict the hourly PV output power.
Forecast quality is assessed with the normalized root mean square error (nRMSE) and the normalized mean absolute error (nMAE), where ŷ_k denotes the forecast and y_k the measurement:

nRMSE = √((1/n) Σ_{k=1}^{n} (ŷ_k - y_k)²)    (13)

nMAE = (1/n) Σ_{k=1}^{n} |ŷ_k - y_k|    (14)

B.ANN Based Power Forecast and Net Demand Forecast
1)ANN based PV Power Forecasting
A three-layer ANN has been developed for the PV power generation prediction with:
- one input layer including the last n hours of measured PV power, of irradiance and of forecasted average temperature (obtained from our local weather information service) (Fig. 7);
- one hidden layer with 170 neurons;
- one output layer with the 24 predicted PV power points (one for each hour).
Various numbers of hidden-layer neurons have been tested until an nRMSE below 5% was obtained. First, 60% of previously sensed data (representing one year of data) have been used for training the ANN-based PV power forecasting tool. The next 20% of sensed data are used to create a validation pattern set in order to assess the prediction quality. The test set (with the remaining 20% of the data) is used to implement the forecast error calculation. Obtained nRMSE and nMAE for the next 24 hours of PV power predictions are given in Table I. Predicted errors for 120 test days are given in Fig. 8. Absolute values are less than 0.4 p.u. of the PV power output. The largest errors occur in the middle of the day, when the PV power production is the highest.
2)ANN based Load Forecasting
Another neural network has been used for the load forecast. The load demand prediction model includes: an input layer with the last 48 hours of load demand measurements and the predicted temperatures for the next 24 hours, one hidden layer with 70 neurons (in order to get an nRMSE below 4%) and an output layer that predicts the next 24 hours of load demand. 60% of the available data are used for the neural network training, 20% for the validation and 20% for tests. The predicted errors for 120 test days are shown in Fig. 9. As can be seen, the largest forecast errors occur at 8:00 and 18:00. Yet the total absolute errors are less than 0.2 p.u. of the load demand. Obtained nRMSE and nMAE results are listed in Table II.
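A minimal implementation of the accuracy metrics of eqs. (13)-(14) used in Tables I and II. Normalization by a rated value is assumed here (per-unit data makes it 1), since the normalization convention is not spelled out in the equations.

```python
def nrmse(y_true, y_pred, y_rated=1.0):
    """Eq. (13): RMSE, normalized by an assumed rated value."""
    n = len(y_true)
    mse = sum((yp - yt) ** 2 for yt, yp in zip(y_true, y_pred)) / n
    return mse ** 0.5 / y_rated

def nmae(y_true, y_pred, y_rated=1.0):
    """Eq. (14): MAE, normalized by an assumed rated value."""
    n = len(y_true)
    return sum(abs(yp - yt) for yt, yp in zip(y_true, y_pred)) / n / y_rated

# Illustrative per-unit measurements and forecasts:
y_true = [0.0, 0.5, 1.0]
y_pred = [0.1, 0.5, 0.9]
err_rmse = nrmse(y_true, y_pred)
err_mae = nmae(y_true, y_pred)
```

As in the tables, nRMSE ≥ nMAE always holds, since the quadratic mean dominates the arithmetic mean of the absolute errors.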
3)Net Demand Uncertainty
a) First Method: Direct Net Demand Forecast
Following the method highlighted in Fig. 3, another ANN is applied for the ND error forecast: an input layer with the last 24 hours of predicted net demand errors, one hidden layer with 70 neurons and an output layer that predicts the next 24 hours of forecasted net demand errors. Application of the first method (Fig. 3) for the time step at 12:00 gives a mean of -0.1282 and a standard deviation of 1.781.
C.Forecasting Uncertainty Assessment
By applying both proposed methods, the uncertainties of the PV power forecast, the load forecast and the net demand forecast with various probability indices (from 90% to 60%) on a random day are represented as a function of the forecast data and the predicted forecast errors. In order to simplify the explanation, results are given with the second method (corresponding to Fig. 5). As shown in Fig. 11, the uncertainty of the PV power forecast is higher in the middle of the day, when the PV system generates the highest power, while in the morning (from 6:00 to 10:00) and afternoon (from 17:00 to 21:00) the uncertainty is smaller. Obviously, the PV power forecast uncertainty increases and decreases with the PV power. The uncertainty also increases with the time horizon. For example, at 10:00 and at 17:00 the power outputs are almost at the same level (about 6.5 kW), but the uncertainty is larger at 17:00 than at 10:00. The load forecast shows the same variation trend (Fig. 12). Fig. 13 depicts the obtained ND uncertainty with the first method. If the forecasted ND is positive, then additional power sources have to be programmed to cover the difference. Otherwise, if the forecasted ND is negative, then three actions can be considered to meet the low forecasted demand:
- A part of the PV power generators must be switched off (or can work at a sub-optimal level).
- Controllable loads (such as electric vehicles or heating loads) must be switched on to absorb the excess available power.
- The available excess energy can be exported to the main grid.
D.Power Reserve Calculation with Fixed Risk Indices
The forecasted ND uncertainty assessment has been done with the hourly cumulative distribution function (cdf) obtained from the ND forecast errors. The hourly risk/reserve curve then takes into account all the errors from the cdfs. Since the forecast ND errors can be expressed as an x% of the rated power, the PR can be drawn according to the LOLP. Fig. 14 shows the required PR variation according to LOLP and EENS (with the second method in Section III). Therefore, an operating PR under x% of LOLP would cover a part of the forecast ND uncertainty. For example, with 10% of LOLP, the reserve power will be 7 kW and the EENS will be 0.2 kWh. In general, this operating reserve is limited not only by the risk indices but also by the availability of micro-gas turbines. In Fig. 15, an assessment of the hourly reserve power required with the second method for different LOLP values has been deduced. Much more reserve is needed when the LOLP rate is very low, which means a high security level, while less reserve power is needed with a high LOLP rate, but then the risk is higher. For example with 1% of LOLP, the necessary PR at 12:00 will be 14 kW (EENS is almost zero), while with 10% of LOLP the necessary reserve power will be 7 kW and the EENS increases to 0.25 kWh. If a constant LOLP rate is set, the power reserve for each hour can be obtained. As shown in Fig. 16, with 1% of LOLP, more power reserve is needed in the middle of the day, when more PV power is generated. Moreover, the power reserve with the second method is higher than with the direct ND forecast method. The most likely explanation of this result is that the load forecast uncertainty and the PV forecast uncertainty are not totally independent: since they share a common temperature dependence, the combined PV power and load uncertainty is greater than the direct ND forecast uncertainty. This result can be used for power dispatch management.
VI.CONCLUSION
This work proposed a new technique to quantify the power reserve of a microgrid by taking into account the PV power forecasting uncertainty and the load forecasting uncertainty. In order to assess these uncertainties, a three-layer BP ANN is used to estimate the errors of the PV power and load forecasts. Two methods are proposed to obtain the ND forecast uncertainties. With the first method, a probabilistic model is proposed to forecast the ND uncertainty distribution by integrating the uncertainties from both PV power and load. The second method directly forecasts the ND errors. The power reserve quantification results demonstrate that, with a fixed risk index, the power reserve for the next day or next 24 hours can be evaluated to cover the risk. As the uncertainty from forecasting errors increases with the time horizon, future research works are oriented toward the implementation of intraday adjustment. The dispatch of the calculated power reserve onto micro-gas turbines, controllable loads and also new "PV based active power generators" is also an interesting avenue to explore.

Fig. 1. PV power, solar irradiance and temperature in three continuous days.
Fig. 3. Net demand uncertainty calculation from ND error forecast.
Fig. 4. Net demand uncertainty calculation from PV power and load forecasting errors prediction.
Fig. 5. Net uncertainty calculation at hour h with a given probability.
Fig. 6. Calculation of power reserve requirements (R) based on forecast ND uncertainty (ε̃_h^ND) with x% of LOLP, at time step h.
Fig. 7. PV power, load forecasting and errors prediction with ANN.
Fig. 8. PV power prediction errors on 120 test days.
Fig. 9. Load prediction on 120 test days.
Fig. 10(c) with obtained parameters.
Fig. 11.
PV forecasting with uncertainty (a random day).
Fig. 12. Load forecasting with uncertainty (a random day).
Fig. 13. Next 24 hours NFD with uncertainty (a random day).
Fig. 14. Risk/reserve curve for LOLP_{h+12} and EENS_{h+12} at 12:00.
Fig. 15. Required power reserve for each hour with x% LOLP (axes: time in hours, x% of LOLP, power reserve in kW).

TABLE I. Errors of the PV power forecast with ANN
                 nRMSE [%]   nMAE [%]
Training Set       4.67        2.69
Validation Set     5.58        3.13
Test Set           5.95        3.12

TABLE II. Errors of the load demand forecast with ANN
                 nRMSE [%]   nMAE [%]
Training Set       3.18        2.45
Validation Set     3.57        2.76
Test Set           3.67        2.84

ACKNOWLEDGMENT
The authors would like to thank the China Scholarship Council and Centrale Lille for their co-funding support.
01756831
en
[ "info.info-au" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756831/file/NIMFA-INFOCOM-final.pdf
Continuous time opinion dynamics of agents with multi-leveled opinions and binary actions
Vineeth S. Varma, Irinel-Constantin Morȃrescu, Yezekael Hayel

Keywords: Opinion dynamics, Social computing and networks, Markov chains, agent based models

Abstract—This paper proposes and analyzes a stochastic multi-agent opinion dynamics model. We are interested in a multi-leveled opinion of each agent which is randomly influenced by the binary actions of its neighbors. It is shown that, as long as the number of agents in the network is finite, the model asymptotically produces consensus. The consensus value corresponds to one of the absorbing states of the associated Markov system. However, when the number of agents is large, we emphasize that partial agreements are reached and these transient states are metastable, i.e., the expected persistence duration is arbitrarily large. These states are characterized using an N-intertwined mean field approximation (NIMFA) of the Markov system. Numerical simulations validate the proposed analysis.

I. INTRODUCTION
Understanding opinion dynamics is a challenging problem that has received an increasing amount of attention over the past few decades. The main motivation of these studies is to provide reliable tools to fight against various addictions as well as the propagation of undesired antisocial behaviors/beliefs. One of the major difficulties related to opinion dynamics is the development of models that can capture the many features of a real social network [START_REF] Kerckhove | Modelling influence and opinion evolution in online collective behaviour[END_REF].
Most of the existing models assume that individuals are influenced by the opinions of their neighbors [START_REF] Ising | Contribution to the theory of ferromagnetism[END_REF], [START_REF] Degroot | Reaching a consensus[END_REF], [START_REF] Sznajd-Weron | Opinion evolution in closed community[END_REF], [START_REF] Deffuant | Mixing beliefs among interacting agents[END_REF], [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF], [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF]. Nevertheless, it is very hard to estimate these opinions. In order to relax this constraint of measuring the opinions and to model more realistic behaviors, a mix of continuous opinions with discrete actions (CODA) was proposed in [START_REF] Martins | Continuous opinions and discrete actions in opinion dynamics problems[END_REF]. This model reflects the fact that, even if we often face binary choices or actions which are visible to our neighbors, the opinions evolve in a continuous space of values which are not explicitly visible to the neighbors. A multi-agent system with a CODA model was proposed and analyzed in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. It was shown that this deterministic model leads to a variety of asymptotic behaviors including consensus. Due to the complexity of opinion dynamics, we believe that stochastic models are more suitable than deterministic ones. Indeed, we can propose a realistic deterministic update rule, but many random events will still influence the interaction network and consequently the opinion dynamics.
For this reason, we consider that it is important to reformulate the model from [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF] in a stochastic framework, specifically as an interactive Markov chain. Similar approaches for the Deffuant and Hegselmann-Krause models have been considered in the literature (see for instance [START_REF] Lima | Agent based models and opinion dynamics as markov chains[END_REF], [START_REF] Lorenz | Consensus strikes back in the hegselmann-krause model of continuous opinion dynamics under bounded confidence[END_REF]). Although the asymptotic behavior of the model can be given by characterizing the absorbing states of the Markov chain, the convergence time can be arbitrarily large. Moreover, transient but persistent local agreements, called metastable equilibria, are very interesting because they describe the finite-time behavior of the network. Consequently, we consider in this paper an N-intertwined mean field approximation (NIMFA) based approach in order to characterize the metastable equilibria of the Markov system. It is noteworthy that NIMFA was successfully used to analyze and validate some epidemiological models [START_REF] Van Mieghem | Virus spread in networks[END_REF], [START_REF] Trajanovski | Decentralized protection strategies against sis epidemics in networks[END_REF]. In this work, we model the social network as a multi-agent system in which each agent represents an individual whose state is his opinion. This opinion can be understood as the preference of the agent towards performing a binary action, i.e., the action can be 0 or 1. These agents are interconnected through an interaction directed graph, whose edge weights represent the trust given by an agent to his neighbor. We propose continuous-time opinion dynamics in which the opinions are discrete and belong to a given set that is fixed a priori.
Each agent is randomly influenced by the actions of his neighboring agents and consequently influences his neighbors. Therefore the opinion of an agent is an intrinsic variable that is hidden from the other agents; the only visible variable is the action. As an example, consider the opinion of users regarding two products, Coca-Cola and Pepsi. A user may prefer Coca-Cola strongly, while other users might be more indifferent. However, what the other users see (and are therefore influenced by) is only what the user buys, which is the action taken. Our goal here is to analyze the behavior of the opinions in the network under the proposed stochastic dynamics. One of the main results states that the opinions always asymptotically reach a consensus defined by one of the extreme opinions. Nevertheless, for large networks, we emphasize that intermediate local agreements are reached and preserved for a long duration of time. The contributions of this paper can be summarized as follows. Firstly, we formulate and analyze a stochastic version of the CODA model proposed in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. Secondly, we characterize the local agreements which are persistent for a long duration by using the NIMFA of the original Markov system. Thirdly, we give a complete characterization of the system behavior under a symmetric all-to-all connection assumption. Finally, we provide conditions for the preservation of the main action inside one cluster as well as for the propagation of actions. The rest of the paper is organized as follows. Section II introduces the main notation and concepts and provides a description of the model used throughout the paper. The analysis of the asymptotic behavior of opinions described by this stochastic model is provided in Section III. The presented results are valid for any connected network with a finite number of agents.
Moreover, Section III contains the description of the NIMFA model and a method to compute its equilibria. In Section IV, we analyze the particular network in which each agent is connected to all the others. In this case, we show that only three equilibria exist; two of them are stable and correspond to the absorbing states of the system. Section V presents the theoretical analysis of the system under generic interaction networks. It also emphasizes conditions for the preservation of the main action (corresponding to a metastable state) in some clusters as well as conditions for action propagation. The results of our work are numerically illustrated in Section VI. The paper ends with some concluding remarks and perspectives for further developments.

Preliminaries: We use E for the expectation of a random variable, 1_A(x) for the indicator function which takes the value 1 when x ∈ A and 0 otherwise, R_+ for the set of non-negative reals and N = {1, 2, ...} for the set of natural numbers.

II. MODEL
Throughout the paper we consider N ∈ N an even number of possible opinion levels, and the set of agents K = {1, 2, ..., K} with K ∈ N. Each agent i is characterized at time t ∈ R_+ by its opinion represented as a scalar X_i(t) ∈ Θ, where Θ = {θ_1, θ_2, ..., θ_N} is the discrete set of possible opinions, such that θ_n ∈ (0,1)\{0.5} and θ_n < θ_{n+1} for all n ∈ {1, 2, ..., N}. Moreover, Θ is constructed such that θ_{N/2} < 0.5 and θ_{N/2+1} > 0.5. In the following, let us introduce some graph notions allowing us to define the interaction structure in the social network under consideration.

Definition 1 (Directed graph): A weighted directed graph G is a couple (K, A) with K a finite set denoting the vertices and A a K × K matrix whose elements a_ij denote the trust given by agent i to agent j. We say that agent j is a neighbor of agent i if a_ij > 0. We denote by τ_i the total trust in the network for agent i, τ_i = Σ_{j=1}^{K} a_ij.
Agent i is said to be connected with agent j if G contains a directed path from i to j, i.e. if there exists at least one sequence (i = i_1, i_2, ..., i_{p+1} = j) such that a_{i_k, i_{k+1}} > 0 for all k ∈ {1, 2, ..., p}.

Definition 2 (Strongly connected): The graph G is strongly connected if any two distinct agents i, j ∈ K are connected.

In the sequel we suppose the following holds true.

Assumption 1: The graph (K, A) modeling the interactions in the network is strongly connected.

TABLE I: Notations used
Q_i(t) = ⌊X_i(t)⌉ ∈ {0, 1} : action of agent i
a_ij : trust of agent i in agent j
τ_i : total trust of agent i in the network, τ_i = Σ_{j∈K} a_ij
v_{i,n}(t) : probability that X_i(t) = θ_n
R_i(t) : influence on i to shift its opinion towards 1, R_i(t) = Σ_{j∈K} a_ij Q_j(t)
L_i(t) : influence on i to shift its opinion towards 0, L_i(t) = Σ_{j∈K} a_ij (1 - Q_j(t))
r_i(t) : expected influence on i to shift towards 1, r_i(t) = Σ_{j∈K} a_ij Σ_{n=N/2+1}^{N} v_{j,n}(t)
l_i(t) : expected influence on i to shift towards 0, l_i(t) = Σ_{j∈K} a_ij Σ_{n=1}^{N/2} v_{j,n}(t)
ν_n(t) : expected fraction of the population with opinion θ_n, ν_n(t) = Σ_{i∈K} v_{i,n}(t) / K
ν_+(t), ν_-(t) : expected fraction of the population with action 1, ν_+(t) = Σ_{i∈K} Σ_{n=N/2+1}^{N} v_{i,n}(t) / K, or action 0, ν_-(t) = Σ_{i∈K} Σ_{n=1}^{N/2} v_{i,n}(t) / K
ν_n^C(t) : expected fraction of the population in C ⊆ K with opinion θ_n, ν_n^C(t) = Σ_{i∈C} v_{i,n}(t) / |C|
ν_+^C(t), ν_-^C(t) : expected fraction of the population in C ⊆ K with action 1 or with action 0

The action Q_i(t) taken by agent i at time t is defined by the opinion X_i(t) through the relation Q_i(t) = ⌊X_i(t)⌉, where ⌊·⌉ is the nearest integer function. This means that if an agent has an opinion greater than 0.5, it takes the action 1, and 0 otherwise. This kind of opinion quantization is suitable for many practical applications.
For example, an agent may support the left or the right political party, with various opinion levels (opinions close to 0 or 1 represent a stronger preference); however, in an election, the agent's action is to vote, with exactly two choices (left or right). Similarly, an agent might have to choose between two cars or other types of merchandise, like the cola mentioned in the introduction. Although its preference for one product is not of the type 0 or 1, its action will be, since it cannot buy fractions of cars, but one of them. For ease of exposition, we provide Table I, a list of notations and their meanings.

A. Opinion dynamics
In this work, we look at the evolution of the opinions of the agents based on their mutual influence. We also account for the inertia of opinion, i.e., when the opinion of an agent is closer to 0.5, he is more likely to shift as he is less decisive, whereas someone with a strong opinion (close to 1 or 0) is less likely to shift his opinion as he is more convinced by it. The opinion of agent j may shift towards the actions of its neighbors with a rate β_n while X_j(t) = θ_n. If no action is naturally preferred by the opinion dynamics, then we construct θ_n = 1 - θ_{N+1-n} and assume that β_n = β_{N+1-n} for all n ∈ {1, 2, ..., N}. At each time t ∈ R_+ we denote the vector collecting all the opinions in the network by X(t) = (X_1(t), ..., X_K(t)). Notice that the evolution of X(t) is described by a continuous-time Markov process with N^K states, and its analysis is complicated even for a small number of opinion levels and a relatively small number of agents. The stochastic transition rate of agent i shifting its opinion to the right, i.e. to opinion θ_{n+1} when at opinion θ_n, with n ∈ {1, 2, ..., N-1}, is given by

β_n Σ_{j=1}^{K} a_ij 1_{(0.5,1]}(X_j(t)) = β_n Σ_{j=1}^{K} a_ij Q_j(t) = β_n R_i(t).

Similarly, the transition rate to the left, i.e.
to shift from θ_n to θ_{n−1}, is given by

β_n Σ_{j=1}^{K} a_{ij} 1_{[0,0.5)}(X_j(t)) = β_n Σ_{j=1}^{K} a_{ij} (1 − Q_j(t)) = β_n L_i(t),

for n ∈ {2, ..., N}. Therefore, we can write the infinitesimal generator M_{i,t} (a tri-diagonal matrix of size N × N) for an agent i as:

M_{i,t} =
[ −β_1 R_i(t)   β_1 R_i(t)    0           ...
  β_2 L_i(t)   −β_2 τ_i       β_2 R_i(t)  ...
  ...                                       ]   (1)

with the element in row n and column m denoted M_{i,t}(n, m), where for all n ∈ {1, ..., N−1}, M_{i,t}(n, n+1) = β_n R_i(t); for all n ∈ {2, ..., N}, M_{i,t}(n, n−1) = β_n L_i(t); M_{i,t}(n, m) = 0 whenever |m − n| > 1; and

M_{i,t}(n, n) = −β_1 R_i(t) for n = 1, −β_n τ_i for n ∈ {2, ..., N−1}, −β_N L_i(t) for n = N.

Let v_{i,n}(t) := E[1_{{θ_n}}(X_i(t))] = Pr(X_i(t) = θ_n) be the probability of opinion level θ_n for agent i at time t. Then, in order to propose an analysis of the stochastic process introduced above, we may consider the mean-field approximation obtained by replacing the transitions by their expectations. The expected transition rate from state n to state n+1, for K → ∞, is then given by:

β_n Σ_{j=1}^{K} a_{ij} E[1_{(0.5,1]}(X_j(t))] = β_n Σ_{j=1}^{K} a_{ij} Σ_{m=N/2+1}^{N} v_{j,m}(t).

We have a similar expression for the transition between states n and n−1.

III. STEADY STATE ANALYSIS

Define by θ_− = (θ_1, ..., θ_1) and θ_+ = (θ_N, ..., θ_N) the states where all the agents in the network have an identical opinion, corresponding to the two extreme opinions.

Proposition 1: Under Assumption 1, the continuous-time Markov process X(t), with (1) as the infinitesimal generator of each agent, has exactly two absorbing states, X(t) = θ_+ and X(t) = θ_−.

Proof: We can verify that θ_+ and θ_− are absorbing states by evaluating the transition rates using (1). If X(t) = θ_−, then X_i(t) = θ_1 for all i, and the transition rate vector is

(1, 0, ..., 0) M_{i,t} = (−β_1 R_i(t), β_1 R_i(t), 0, ..., 0)
(2)

But as X_i = θ_1 for all i, Q_i(t) = 0 for all i, and so R_i(t) = 0 for all i. Therefore the transition rate out of this state is 0. We can similarly show that θ_+ is also an absorbing state.

Next, we show that no other state can be absorbing. Consider any state with at least one agent i such that X_i(t) = θ_n with 1 < n < N. The transition rate out of such a state is never 0, which is easy to see, as we have M_{i,t}(n, n) = −β_n τ_i < 0. This implies that as long as such an agent exists, the global state is not an absorbing state. The only states which are not θ_+, θ_−, or of this form are those satisfying the following property: X_i(t) = θ_1 for all i ∈ S and X_i(t) = θ_N for all i ∈ K \ S, with S ⊂ K and 1 < |S| < K. As the graph is strongly connected, there is at least one agent k in S which is directly connected to some agent l in K \ S, i.e. a_{k,l} > 0. The transition rate vector of this agent k is given by

(−β_1 R_k(t), β_1 R_k(t), 0, ..., 0).

As a_{k,l} > 0 and Q_l(t) = 1 (all agents outside S have opinion θ_N), we have R_k(t) > 0. Therefore such a state is also never absorbing, which concludes the proof that θ_+ and θ_− are the two absorbing states and no other state is absorbing.

Considering the NIMFA approximation, the dynamics of the opinion of an agent i are given by:

v̇_{i,1} = −β_1 r_i v_{i,1} + β_2 l_i v_{i,2},
v̇_{i,n} = −β_n r_i v_{i,n} − β_n l_i v_{i,n} + β_{n+1} l_i v_{i,n+1} + β_{n−1} r_i v_{i,n−1},
v̇_{i,N} = −β_N l_i v_{i,N} + β_{N−1} r_i v_{i,N−1},   (3)

for all i ∈ K and 1 < n < N, where

l_i = Σ_{j∈K} a_{ij} E[1 − Q_j] = Σ_{j∈K} Σ_{n=1}^{N/2} a_{ij} v_{j,n},
r_i = Σ_{j∈K} a_{ij} E[Q_j] = Σ_{j∈K} Σ_{n=N/2+1}^{N} a_{ij} v_{j,n},   (4)

and Σ_n v_{i,n} = 1. We can easily verify that X_i = θ_1, i.e. v_{i,1} = 1, for all i is an equilibrium of the above set of equations: when v_{i,1} = 1 for all i, we have v_{i,n} = 0 for all n ≥ 2 and, as a result, l_i = τ_i and r_i = 0 for all i, which gives v̇_{i,n} = 0 for all i, n.
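A minimal forward-Euler integration of the NIMFA system (3)-(4) can be sketched as follows (our code, not the authors'; the step size is illustrative, and beta is 0-indexed so that beta[n] stands for β_{n+1} in the text):

```python
import numpy as np

def nimfa_step(V, A, beta, dt):
    # One forward-Euler step of (3)-(4). V[i, n] = v_{i,n}; A is the trust matrix.
    K, N = V.shape
    r = A @ V[:, N // 2:].sum(axis=1)   # r_i: expected push toward action 1, eq. (4)
    l = A @ V[:, :N // 2].sum(axis=1)   # l_i: expected push toward action 0, eq. (4)
    dV = np.zeros_like(V)
    for n in range(N):
        if n > 0:                       # inflow from the left, outflow to the left
            dV[:, n] += beta[n - 1] * r * V[:, n - 1] - beta[n] * l * V[:, n]
        if n < N - 1:                   # inflow from the right, outflow to the right
            dV[:, n] += beta[n + 1] * l * V[:, n + 1] - beta[n] * r * V[:, n]
    return V + dt * dV
```

The left/right flows telescope over n, so each row of V keeps summing to 1, and the consensus profile v_{i,1} = 1 is indeed a fixed point of the step.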
Apart from the extreme solutions θ_+ and θ_−, the non-linearity of system (3) can give rise to interior rest points which are locally stable. Such rest points are referred to as metastable states in physics. Metastability of Markov processes is precisely defined in [Huisinga et al.], where the exit times from metastable states are shown to approach infinity. For a given r_i = E[R_i(t)], the equilibrium state v*_{i,n} must satisfy the following conditions:

0 = −β_1 (r_i/τ_i) v*_{i,1} + β_2 ((τ_i − r_i)/τ_i) v*_{i,2},
0 = −β_n v*_{i,n} + β_{n+1} ((τ_i − r_i)/τ_i) v*_{i,n+1} + β_{n−1} (r_i/τ_i) v*_{i,n−1},
0 = −β_N ((τ_i − r_i)/τ_i) v*_{i,N} + β_{N−1} (r_i/τ_i) v*_{i,N−1}.   (5)

We can write any v*_{i,n} in terms of v*_{i,1}, by simplification, as

v*_{i,n} = (β_1/β_n) (r_i/(τ_i − r_i))^{n−1} v*_{i,1}.   (6)

As the sum of v*_{i,n} over n must be 1, we can solve for v*_{i,1} as

v*_{i,1} = 1 / Σ_{n=1}^{N} (β_1/β_n) (r_i/(τ_i − r_i))^{n−1}.   (7)

We can then use this relationship to construct a fixed-point algorithm that computes a rest point of the global opinion dynamics for all users.

Algorithm outline: The algorithm involves initializing v to a random value. This v is used to compute the corresponding r_i and l_i with (4), which are then used to compute the associated v with (6)-(7). Repeating this recursively results in a fixed point of the NIMFA, which is a potential metastable equilibrium. Other potential equilibria can be found by initializing with another random v.

Additionally, we can obtain some useful properties of the relation between r_i and v_{i,n} by studying the following function.

Lemma 1: Consider the function f : [0, 1] → [0, 1] defined as

f(x) := [Σ_{n=N/2+1}^{N} (β_1/β_n) (x/(1−x))^{n−1}] / [Σ_{n=1}^{N} (β_1/β_n) (x/(1−x))^{n−1}]   (8)

for all x ∈ [0, 1), and with f(1) = 1.
We have that f(x) is a monotonically increasing continuous function taking the values f(0) = 0, f(0.5) = 0.5, and lim_{x→1} f(x) = 1.

Proof: We can easily verify that f(0) = 0/(1 + 0 + ...) = 0. As β_n is assumed to be symmetric around N/2, i.e. β_1 = β_N, etc., we have

f(0.5) = Σ_{n=N/2+1}^{N} (β_1/β_n) / Σ_{n=1}^{N} (β_1/β_n) = 0.5.

In order to simplify the differentiation, we introduce ξ = x/(1−x) and split the denominator of (8) into

p = Σ_{n=1}^{N/2} (β_1/β_n) ξ^{n−1} and q = Σ_{n=N/2+1}^{N} (β_1/β_n) ξ^{n−1},

which gives f(x) = q/(p + q). Since p and q are polynomials in ξ, f is continuous at all points except possibly where ξ has a pole, i.e., at x = 1. Therefore, if we show that f(x) is continuous at x = 1, then f(x) is continuous on [0, 1]. For this, we use L'Hôpital's rule to calculate

lim_{x→1} f(x) = lim_{ξ→∞} q/(p + q).   (9)

Applying L'Hôpital's rule recursively, we get this limit to be 1, since by symmetry the leading term of q and of p + q is the same. Therefore, we have shown that f(x) is continuous on [0, 1].

Next, we show that it is monotonic. We have

f′ = [q′(p + q) − (p′ + q′)q]/(p + q)² = (q′p − p′q)/(p + q)² = (q/p)′ p²/(p + q)²,   (10)

where (·)′ = d(·)/dξ. We see that dξ/dx ≥ 0, so it suffices to show that (q/p)′ ≥ 0, which gives the monotonically increasing property. This can be verified by writing

q/p = Σ_{n=N/2+1}^{N} (β_1/β_n) ξ^{n−1} / p.   (11)

Each term ξ^{n−1}/p in this sum is non-decreasing in ξ (its exponent n − 1 ≥ N/2 exceeds the degree N/2 − 1 of p), and so the first derivative of f(x) with respect to x is non-negative. □

We can use f(r_i/τ_i) to calculate the probability that an agent i takes the action 1 at a rest point, i.e., Σ_{n=N/2+1}^{N} v*_{i,n} = f(r_i/τ_i), from (6) and (7).

IV. COMPLETE GRAPH WITH IDENTICAL CONNECTIONS

Let us first consider a simple graph structure in which each agent is identically influenced by all the other agents, i.e., a_{i,j} = 1 for all i, j ∈ K with i ≠ j, and a_{i,i} = 0. We use ν_n to denote the expected fraction of the population of agents with opinion θ_n.
This is simply

ν_n = E[Σ_{i∈K} 1(X_i = θ_n)] / K = Σ_{i∈K} v_{i,n} / K.

Also recall the notation ν_− := Σ_{n=1}^{N/2} ν_n and ν_+ := Σ_{n=N/2+1}^{N} ν_n. Under this specific graph structure, we have

lim_{K→∞} r_i/τ_i = lim_{K→∞} (1/τ_i) Σ_{j∈K} a_{ij} Σ_{n=N/2+1}^{N} v_{j,n} = lim_{K→∞} ( ν_+ − (1/K) Σ_{n=N/2+1}^{N} v_{i,n} ) = ν_+   (12)

for any i ∈ K. Therefore, using (3) and the fact that ν_n = Σ_{i∈K} v_{i,n}/K, we can obtain the dynamics of ν_n for all n. Moreover, following [Van Mieghem et al.], for large values of K we can approximate these dynamics as:

ν̇_1/(K−1) = −β_1 ν_1 ν_+ + β_2 ν_2 ν_−,
ν̇_n/(K−1) = −β_n ν_n + β_{n+1} ν_{n+1} ν_− + β_{n−1} ν_{n−1} ν_+, for all n ∈ {2, 3, ..., N−1},
ν̇_N/(K−1) = −β_N ν_N ν_− + β_{N−1} ν_{N−1} ν_+,   (13)

where ν_+ = Σ_{n=N/2+1}^{N} ν_n and ν_− = Σ_{n=1}^{N/2} ν_n.

Theorem 1: When N ≥ 4, β_n > 0 and β_n = β_{N−n+1} for all n ∈ {1, 2, ..., N}, the dynamics described in (13) have exactly three equilibrium points: at (1, 0, ...), at (0, ..., 0, 1), and at a point with ν_+ = ν_− = 0.5. The first two equilibrium points correspond to the absorbing states and are stable, while the third fixed point is an unstable equilibrium (not metastable).

Proof: Let us notice that:
• if ν_1 = 1 and ν_n = 0 for all n > 1, one gets ν_+ = 0;
• if ν_N = 1 and ν_n = 0 for all n < N, one gets ν_− = 0.

Consequently it is straightforward to verify that (1, 0, ...) and (0, ..., 0, 1) are equilibria of (13). Another equilibrium point of (13) is obtained for ν_n = ω/β_n, with 1/ω := Σ_{i=1}^{N} 1/β_i. Indeed, for this point one has ν_+ = ν_− = 0.5 and consequently the right-hand side of (13) becomes −ω/2 + ω/2 or −ω + ω/2 + ω/2, which are all 0. Next, we show that no other equilibrium exists. For this, we suppose by contradiction that there exists some 0 < ν_+ < 1, ν_+ ≠ 0.5, such that ν_+ = f(ν_+). If such a ν_+ exists, then it corresponds to another equilibrium point.
However, notice that

ν_+/(1 − ν_+) = [Σ_{i=N/2+1}^{N} (β_1/β_i) (ν_+/(1−ν_+))^{i−1}] / [Σ_{i=1}^{N/2} (β_1/β_i) (ν_+/(1−ν_+))^{i−1}].   (14)

Replacing ν_+/(1−ν_+) by ξ and accounting for β_n = β_{N−n+1} (symmetry), we must have

Σ_{i=1}^{N/2} (β_1/β_i) (ξ − 1) ξ^{i−1} = 0   (15)

for an equilibrium point. However, ξ = 1 corresponds to ν_+ = 0.5. As ξ > 0, the above equation is never satisfied unless β_1 = 0 (which, by symmetry, also means β_N = 0). Hence, by contradiction, we have proved that the only three equilibrium points are those described.

In order to study the stability of these equilibrium points, we look at the Jacobian and its eigenvalues. If we denote the Jacobian elements by J_{i,j} = ∂ν̇_i/∂ν_j, then for all 1 < i ≤ N/2 and all N/2 < j < N we have:

J_{1,1} = β_2 ν_2 − β_1 ν_+, J_{i,i} = β_{i+1} ν_{i+1} − β_i, J_{j,j} = β_{j−1} ν_{j−1} − β_j, J_{N,N} = β_{N−1} ν_{N−1} − β_N ν_−.

We also have, for all 2 < i ≤ N/2, J_{1,i} = β_2 ν_2, and for all N/2 < i ≤ N−2, J_{1,i} = −β_1 ν_1; for all 2 < i ≤ N/2, J_{N,i} = −β_N ν_N, and for all N/2 < i ≤ N−2, J_{N,i} = β_{N−1} ν_{N−1}. For all i, j ∈ {2, 3, ..., N−1} such that |i−j| > 1, we have J_{i,j} = β_{i+1} ν_{i+1} when j ≤ N/2 and J_{i,j} = β_{i−1} ν_{i−1} when j > N/2. Next, we have J_{1,2} = β_2(ν_2 + ν_−) and J_{N−1,N} = β_{N−1}(ν_{N−1} + ν_+). For all i, j ∈ {2, 3, ..., N−1} such that |i−j| = 1, we have J_{i,i+1} = β_k ν_k + β_{i+1} ν_−, where k = i+1 if i+1 ≤ N/2 and k = i−1 otherwise; and J_{i,i−1} = β_k ν_k + β_{i−1} ν_+, where k = i+1 if i−1 ≤ N/2 and k = i−1 otherwise.

At the absorbing state (1, 0, ..., 0), the Jacobian has diagonal elements 0, −β_2, −β_3, etc. Additionally, the Jacobian becomes an upper triangular matrix, as ν_n = 0 for all n > 1. Therefore, the eigenvalues are the elements of the diagonal, which are non-positive, and this corresponds to a stable equilibrium. By symmetry, we have the same result for the other absorbing state.

When Σ_{m=1}^{N/2} ν_m = 0.5, the fixed point corresponding to this distribution satisfies ν_n β_n = ω for some ω > 0.
Thus, the first column of the Jacobian can be written as

(ω − β_1/2, ω + β_1/2, ω, ..., ω, −ω)^T.

The columns j for 2 ≤ j ≤ N/2 are of the form

(ω, ..., ω + β_j/2, ω − β_j, ω + β_j/2, ..., ω, −ω)^T,

where ω − β_j is the diagonal term of the Jacobian. The j-th column for N/2 < j ≤ N−1 is of the form

(−ω, ω, ..., ω + β_j/2, ω − β_j, ω + β_j/2, ..., ω)^T.

Finally, the N-th column is given by

(−ω, ω, ..., ω, ω + β_N/2, ω − β_N/2)^T.

The above matrix is such that each column has exactly one element equal to −ω, either in the first row (when the column index is more than N/2) or in the last row. If we subtract ω(N−2)I from this matrix, its determinant becomes 0. This can be verified using the property of determinants that adding a scalar multiple of a row to another row does not change the determinant: replacing the first row with the sum of all rows results in the first row becoming all zeroes. This implies that one of the eigenvalues of the Jacobian is ω(N−2), which is positive. Therefore, the equilibrium at ν_+ = ν_− = 0.5 is unstable. □

The previous theorem characterizes the behavior of the agents' opinions in all-to-all networks with identical connections. Basically, this result states that, besides the two stable equilibria in which the agents reach consensus, there is only the unstable equilibrium in which the opinions are symmetrically distributed with respect to 0.5.

V. GENERIC INTERACTION NETWORKS

A way to model generic interaction networks is to consider that they are the union of a number of clusters (see for instance [Morărescu et al.] for a cluster detection algorithm). Basically, a cluster C is a group of agents in which the opinion of any agent in C is influenced more by the other agents in C than by agents outside C.
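Definition 3 below makes this precise through a cluster coefficient λ. As a sketch (ours), the largest λ satisfying inequality (16) for a candidate set C can be computed directly; we exclude i = j, assuming no self-trust as in the paper's numerical examples:

```python
import numpy as np

def cluster_coefficient(A, C):
    # Largest lambda with a_ij >= lambda * tau_i / |C| for all distinct i, j in C.
    A = np.asarray(A, dtype=float)
    tau = A.sum(axis=1)
    return min(A[i, j] * len(C) / tau[i] for i in C for j in C if i != j)
```

For a set whose internal links carry essentially all of each member's trust, this value approaches 1, which is the regime exploited in Corollary 1.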
When the interactions between clusters are deterministic and very weak, we can use a two-time-scale modeling as in [Martin et al.] to analyze the overall behavior of the network. In the stochastic framework, and knowing only a quantized version of the opinions, we propose here a development aiming at characterizing the majority actions in clusters. The notion of cluster can be mathematically formalized as follows.

Definition 3 (Cluster): A subset of agents C ⊂ K defines a cluster when, for all i, j ∈ C and some λ > 0.5, the following inequality holds:

a_{ij} ≥ λ τ_i / |C|.   (16)

The maximum λ which satisfies this inequality for all i, j ∈ C is called the cluster coefficient.

One question that can be asked in this context is: what are sufficient conditions for the preservation of actions in a cluster, i.e., for the agents to preserve the majority action inside the cluster C for a long time, regardless of external opinions? In the limit, all the agents will have identical opinions, corresponding to an absorbing state of the network, but clusters with large enough λ may preserve their action in metastable states (for a long time). For any given set of agents C ⊂ K, let us denote

ν^C_− = Σ_{j∈C} Σ_{n=1}^{N/2} v_{j,n} / |C| and ν^C_+ = Σ_{j∈C} Σ_{n=N/2+1}^{N} v_{j,n} / |C|.

These values represent the expected fractions of agents within the set C with action 0 and action 1, respectively. We also denote by ν^C_n the average probability for agents in the cluster to have opinion θ_n, i.e., ν^C_n = Σ_{i∈C} v_{i,n} / |C|. We can now use the definition of a cluster given in (16) to obtain the following proposition.
Proposition 2: The dynamics of the average opinion probabilities in a cluster C ⊂ K can be written, for some κ > 0 and δ ∈ [0, 1], as:

ν̇^C_1/κ = −β_1 ν^C_1 (λν^C_+ + (1−λ)δ) + β_2 ν^C_2 (λν^C_− + (1−λ)(1−δ)),
ν̇^C_n/κ = −β_n ν^C_n + β_{n+1} ν^C_{n+1} (λν^C_− + (1−λ)(1−δ)) + β_{n−1} ν^C_{n−1} (λν^C_+ + (1−λ)δ),
ν̇^C_N/κ = −β_N ν^C_N (λν^C_− + (1−λ)(1−δ)) + β_{N−1} ν^C_{N−1} (λν^C_+ + (1−λ)δ).   (17)

Proof: Since C is a cluster with coefficient λ, for any i ∈ C,

r_i/τ_i = Σ_{j∈K} (a_{ij}/τ_i) Σ_{n=N/2+1}^{N} v_{j,n} ≥ Σ_{j∈C} (a_{ij}/τ_i) Σ_{n=N/2+1}^{N} v_{j,n} ⇒ r_i/τ_i ≥ (λ/|C|) Σ_{j∈C} Σ_{n=N/2+1}^{N} v_{j,n} ⇒ r_i/τ_i ≥ λν^C_+.   (18)

Applying the same derivation for l_i, we get l_i/τ_i ≥ λν^C_−. Since l_i + r_i = τ_i, we always have λν^C_+ ≤ r_i/τ_i ≤ λν^C_+ + (1−λ). Thus, we can always rewrite the dynamics of a single agent in the cluster as

v̇_{i,1}/τ_i = −β_1 v_{i,1} (λν^C_+ + (1−λ)δ_i) + β_2 v_{i,2} (λν^C_− + (1−λ)(1−δ_i)),
v̇_{i,n}/τ_i = −β_n v_{i,n} + β_{n+1} v_{i,n+1} (λν^C_− + (1−λ)(1−δ_i)) + β_{n−1} v_{i,n−1} (λν^C_+ + (1−λ)δ_i),
v̇_{i,N}/τ_i = −β_N v_{i,N} (λν^C_− + (1−λ)(1−δ_i)) + β_{N−1} v_{i,N−1} (λν^C_+ + (1−λ)δ_i),   (19)

where δ_i ∈ [0, 1]. By taking the sum of each equation over the cluster and dividing by |C|, we get the averages. Each term on the right-hand side has a constant factor of the type (l_i + r_i) β_m v_{i,n} λν^C_+ and an additional perturbation term of the type (l_i + r_i) β_m v_{i,n} (1−λ) δ_i. The addition of the first type of terms and division by |C| simply becomes κ β_m ν^C_n λν^C_+, with m being n−1, n or n+1. The averaging of the perturbation terms results in a value between 0 and κ β_m ν^C_n (1−λ), as all δ_i are in [0, 1]. This can therefore be written as in (17) with a new δ ∈ [0, 1]. □

The result above shows that instead of looking at the individual opinions of agents inside a cluster, we can use equation (17) for the dynamics of the expected fraction of agents in the cluster with a given opinion. Using Proposition 2 and Theorem 1, we can immediately obtain some interesting properties for a cluster.

Corollary 1: Let C be a cluster with coefficient λ → 1. Then the cluster has two metastable equilibria, at ν^C = (1, 0, ..., 0) and ν^C = (0, ..., 0, 1), and one unstable equilibrium corresponding to ν^C_+ = 0.5.
This result holds since, as λ → 1, (17) simply becomes (13), and therefore the equilibrium points of (13) must correspond to those of (17), with the population fractions now being fractions of agents within the cluster rather than of the whole graph. For example, consider K = C_1 ∪ C_2 with C_1 ∩ C_2 = ∅ and |C_1|, |C_2| → ∞. Additionally, a_{ij} = 1 for all i, j ∈ C_1 and for all i, j ∈ C_2. Finally, we have sets A_1 ⊂ C_1 and A_2 ⊂ C_2 such that a_{ij} = a_{ji} = 0 for all i ∈ C_1 \ A_1 and j ∈ C_2 \ A_2, but a_{ij} = a_{ji} = 1 for all i ∈ A_1 and j ∈ A_2. If |A_1| < ∞ and |A_2| < ∞, then λ_1 → 1 and λ_2 → 1, where λ_1, λ_2 are the cluster coefficients of C_1 and C_2. In this example, the graph is connected, and so the only two absorbing states are those in which all opinions are θ_1 or all are θ_N, as shown in Proposition 1. Corollary 1 implies that if there are two clusters, each with its λ coefficient close to 1 but not equal to 1, then each cluster can hold a contradicting action in a metastable state. Such a state is not an absorbing state for the system, but can be held for an arbitrarily long duration.

A. Action preservation

We have shown that if a cluster has its coefficient λ → 1, then it can preserve its action: if it starts with all agents at opinion θ_1 or θ_N, this state is stable, and therefore the local action of the cluster is preserved regardless of the external opinions. However, one problem we want to investigate is whether this property holds for other λ < 1. The following proposition provides a necessary condition for a cluster to preserve its action and not collapse under external perturbations.

Proposition 3: A necessary condition for a cluster C with coefficient λ to preserve its action in a metastable state is that there exists x ∈ (0.5, 1) such that x = f(λx).
If no such x exists, then the only equilibrium of (17) when the perturbation term is δ = 1 is at ν^C_+ = 1, and when δ = 0 the only equilibrium is at ν^C_+ = 0.

Proof: The dynamics of the cluster average opinions follow equation (17). We look for equilibrium points of these dynamics under certain values of δ by setting the left-hand side of (17) to 0. For a given δ, by definition of f(·), we obtain that if ν^C_+ is the fraction of agents with action 1 in the cluster at equilibrium, then it must satisfy

ν^C_+ = f(λν^C_+ + (1−λ)δ).   (20)

Having δ = 0 implies that the agents interacting with the agents in the cluster from outside all have action 0. If ν^C_+ > 0 is an equilibrium point even with δ = 0, this means that, regardless of external opinion, the cluster can preserve action 1. This is true because f(·) is a monotonic function, and therefore f(λν^C_+ + (1−λ)δ) ≥ f(λν^C_+) for all δ ≥ 0. From the properties of f(·), we know that it is monotonic and takes f(x) = x only at x = 0, 0.5, 1. Additionally, since f(x) < x as x → 0 and f(·) is continuous, f(λx) < x for all x < 0.5 except at x = 0. However, x = 0 corresponds to the state ν^C_+ = 0, which means that the action is not preserved at this equilibrium. If an x > 0.5 exists such that x = f(λx), then, regardless of the actions outside C, i.e. of δ, we will have ν^C_+ ≥ x as a possible equilibrium. By studying the opposite case, with the majority action inside the cluster being 0 and the external opinions being 1, we obtain the symmetric condition, which means that the same condition holds for preserving the action 0 as well, giving ν^C_− ≥ x. □

B. Propagation of actions

In the previous subsection, we have seen that a cluster C can preserve its majority action regardless of external opinion if it has a sufficiently large λ. If there are agents outside C with some connections to agents in C, then this action can be propagated. Let τ^C_i = Σ_{j∈C} a_{ij} denote the total trust of agent i in the cluster C.
Let the cluster C be such that λ is large enough for ν̄^C_+ > 0.5 to exist, where ν̄^C_+ = f(λν̄^C_+).

Proposition 4: If the cluster C preserves an action 1 with at least a fraction ν̄^C_+ of the population in C having action 1, then the probability that any agent i ∈ K \ C chooses action 1 at equilibrium is lower bounded as follows:

Pr(Q_i = 1) ≥ f(ν̄^C_+ τ^C_i / τ_i).   (21)

Proof: We know that at equilibrium we must have

Σ_{n=N/2+1}^{N} v_{i,n} = f(r_i/τ_i)   (22)

by definition of f(·), (7) and (6). However, we can lower bound r_i as

r_i ≥ Σ_{j∈C} a_{ij} Σ_{n=N/2+1}^{N} v_{j,n}.   (23)

Since the cluster C has action 1 for at least a fraction ν̄^C_+ of its population, we have

Σ_{n=N/2+1}^{N} v_{j,n} ≥ f(λν^C_+)   (24)

for any j ∈ C. As f(λν̄^C_+) = ν̄^C_+, we have

r_i ≥ ν̄^C_+ Σ_{j∈C} a_{ij}.   (25)

Therefore r_i ≥ ν̄^C_+ τ^C_i, and so

Σ_{n=N/2+1}^{N} v_{i,n} ≥ f(ν̄^C_+ τ^C_i / τ_i).   (26)



VI. NUMERICAL RESULTS

For all simulations studied, unless otherwise mentioned, we take Θ = {0.2, 0.4, 0.6, 0.8}, β_1 = β_4 = 0.01 per unit of time and β_2 = β_3 = 0.02 per unit of time.

A. General graph

First, we perform some simulations to validate our NIMFA model for the opinion dynamics. For this purpose, we construct a graph with K = 120 partitioned into two sets B_1 = {1, ..., 40} and B_2 = {41, ..., 120}. We take a_{ij} = 1 if a connection exists and 0 otherwise. Connections between agents belonging to the same set (B_1 or B_2) are established randomly with a probability of 0.3, and connections from B_1 to B_2 and vice versa are established with a probability of 0.02. For one such randomly generated graph (K, A), we study the opinion dynamics both by simulating a continuous-time Markov chain and by looking at the equilibrium points generated by the recursive search algorithm described in Section III.
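The recursive search of Section III can be sketched as follows (our implementation, not the authors' code; the stopping tolerance, the iteration cap, and the guard against the pole at r_i = τ_i are arbitrary choices). It alternates between (4) and the profile (6)-(7):

```python
import numpy as np

def fixed_point(A, beta, V0, iters=2000, tol=1e-10):
    # Recursive search: v -> r_i via (4) -> v* via (6)-(7), repeated until convergence.
    A = np.asarray(A, dtype=float)
    beta = np.asarray(beta, dtype=float)
    K, N = V0.shape
    tau = A.sum(axis=1)
    V = V0.copy()
    powers = np.arange(N)                               # exponent n-1 in (6), 0-indexed
    for _ in range(iters):
        r = A @ V[:, N // 2:].sum(axis=1)               # eq. (4)
        ratio = r / np.maximum(tau - r, 1e-12)          # r_i / (tau_i - r_i), guarded at the pole
        W = (beta[0] / beta) * ratio[:, None] ** powers # eq. (6), unnormalized profile
        Vn = W / W.sum(axis=1, keepdims=True)           # eq. (7) fixes the normalization
        if np.allclose(Vn, V, atol=tol):
            return Vn
        V = Vn
    return V
```

Different random initializations of V0 can land on different rest points, which is how the candidate metastable states reported below were found.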
We always start with the initial state given by X_i(0) = 0.2 for all i ∈ B_1 and X_i(0) = 0.8 for all i ∈ B_2. With this initial condition, the algorithm gives a population fraction of agents with action 1 at equilibrium of ν_+ = 0.693. To validate this, we run several simulations of the continuous-time Markov process with the same initial state and graph structure, and plot the resulting Σ_{i=1}^{K} Q_i(t). This plot is shown in Figure 1. We observe that, on average over time, the population with action 1 in all simulations is close to the value obtained from the NIMFA model. Next, we focus on the NIMFA approximation of individual agent opinions. Table II shows the estimated probability of choosing action 1 for some selected agents. This estimate is computed in each simulation by averaging the action taken over a large time horizon during which the system is in the metastable state. We also observed that the system collapsed to an absorbing state 16 times out of 1000 simulations. Therefore, we estimate Pr(Q_i = 1) by averaging over the 984 simulations for which the system stayed in the metastable state. Notice that, as a result of the starting conditions and the graph structure, most agents in B_1 have a high probability of choosing action 0 in the metastable state (the equilibrium of the NIMFA), while most agents in B_2 choose action 1. Some agents, like 9 and 10, trust some agents of B_1 as well as of B_2. Consequently, these agents constantly shift their opinions, resulting in more random behavior of their actions.

B. Graph with clusters

For the next set of simulations, we take another graph structure, with the same K, as indicated in Figure 2. We first randomly create links between any i, j ∈ K with a probability of 0.05. When such a link exists, a_{ij} = 1, and a_{ij} = 0 otherwise. Then we construct a cluster C_1, with the agents i = 1, 2, ..., 40, and C_2, with the agents i = 81, 82, ..., 120.
We also label agents 40 < i ≤ 60 as set B_1 and 60 < i ≤ 80 as set B_2. To provide the relevant cluster structure, we set the edge weights a_{ij} = 1 for all:
• i, j ∈ C_1 or i, j ∈ C_2, making C_1 and C_2 clusters with coefficients λ_1 = 0.833 and λ_2 = 0.816 (for the particular random graph generated for this simulation);
• 40 < i ≤ 60 and 1 ≤ j ≤ 20, making agents in B_1 trust C_1, with τ^{C_1}_i/τ_i ≥ 0.714 for all 40 < i ≤ 60;
• 60 < i ≤ 80 and 1 ≤ j ≤ 20 or 80 < j ≤ 120, making agents in B_2 trust both C_1 and C_2, with the corresponding trust ratios satisfying τ^C_i/τ_i ≥ 0.444 for all 60 < i ≤ 80.

We find that the largest x satisfying x = f(λ_1 x) is 0.95 and that satisfying x = f(λ_2 x) is 0.94. Therefore, if all agents in C_1 start with opinion 0.2 and all agents in C_2 start with opinion 0.8, we predict from Proposition 3 that ν^{C_1}_− ≥ 0.95 and ν^{C_2}_+ ≥ 0.94 in the metastable state. Additionally, applying Proposition 4 yields ν^{B_1}_− ≥ f(0.95 × 0.714) = 0.85, ν^{B_2}_− ≥ f(0.95 × 0.444) = 0.324 and ν^{B_2}_+ ≥ f(0.94 × 0.444) = 0.315. Simulations of the continuous-time Markov chain show that our theoretical results are valid even when the cluster size is 40. Figure 3 plots the population fraction of agents with action 1 within each set for one simulation. We look at this value in the clusters C_1 and C_2 as well as in the sets B_1 and B_2. C_1 and C_2 are seen to preserve their actions, which are opposite to each other. Since B_1 has significant trust in C_1 alone, the action of C_1 is propagated to B_1. However, as B_2 trusts both C_1 and C_2, its opinion is influenced by the two contradicting actions, resulting in some agents with action 1 and the rest with action 0. We repeat this for several simulations with the same graph structure and initial opinions to validate our results.

VII. CONCLUSION

In this paper, we have proposed a stochastic multi-agent opinion dynamics model with binary actions. Agents interact through a network, and individual opinions are influenced by neighbors' actions. Our analysis, based on a Markov model of opinion dynamics, shows a consensus-like limiting behavior for a finite number of agents, whereas, when this number becomes large enough, the stochastic system can enter a quasi-stationary regime in which partial agreements are reached.
This type of phenomenon has been observed in all-to-all and cluster-type topologies. Currently, we have studied the dynamics of opinion without any external control of the network. In the future, we will extend this by accounting for an external entity who has a preferred action (a company selling a product, for example) and tries to control the actions of the users in a social network by controlling the opinions of a certain subset of agents. This can be interpreted as a company advertising its product in order for the other agents in the social network to choose its product over that of a rival company.

Fig. 1: Simulation of Σ_{i=1}^{K} Q_i(t) compared to the NIMFA metastable state. The NIMFA works remarkably well even for K = 120.

Fig. 2: Structure of the graph. Any two agents in K may be connected with a 0.05 probability. All agents within a cluster are connected, and the arrows indicate directed connections.

Fig. 3: Simulation of Σ_{i∈S} Q_i(t) for S = C_1, C_2, B_1, B_2. We see that C_1 and C_2 preserve their initial actions, as given by Proposition 3. We also see that, as B_1 follows only C_1, its action is close to that of C_1. As B_2 follows both C_1 and C_2, which have contradicting actions, it has a very mixed opinion which keeps changing randomly in time.
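The continuous-time Markov simulations reported above can be reproduced with a Gillespie-style event-driven sketch (ours, not the authors' code; parameter names are illustrative), using the rates of generator (1):

```python
import numpy as np

def gillespie(A, beta, theta, x0, t_end, rng):
    # Event-driven simulation of the opinion CTMC with generator (1).
    # x holds each agent's opinion index (0-based); theta[x] gives the opinions.
    A = np.asarray(A, dtype=float)
    beta = np.asarray(beta, dtype=float)
    theta = np.asarray(theta, dtype=float)
    x = np.array(x0, dtype=int)
    K, N = len(x), len(theta)
    t = 0.0
    while t < t_end:
        Q = (theta[x] > 0.5).astype(float)
        R = A @ Q                                   # push toward action 1
        L = A @ (1.0 - Q)                           # push toward action 0
        up = np.where(x < N - 1, beta[x] * R, 0.0)  # rate of moving right: beta_n * R_i
        down = np.where(x > 0, beta[x] * L, 0.0)    # rate of moving left: beta_n * L_i
        total = up.sum() + down.sum()
        if total == 0.0:                            # absorbing state theta+ or theta- reached
            break
        t += rng.exponential(1.0 / total)
        rates = np.concatenate([up, down])
        k = rng.choice(2 * K, p=rates / total)
        x[k % K] += 1 if k < K else -1
    return theta[x]
```

Averaging the resulting actions over time and over realizations yields the estimates compared against the NIMFA in Tables II and III.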
TABLE II: Pr(Q_i = 1) at equilibrium for some selected agents. The Pr(Q_i = 1) values for simulation realizations 1, 2 and 3 are calculated by time averaging over t = 1000 to t = 3000 hours, and the simulation average is taken over 984 realizations.

[Figure 2 annotations: Cluster C_1 (|C_1| = 40), Set B_1 (|B_1| = 20), Cluster C_2 (|C_2| = 40), Set B_2 (|B_2| = 20)]

Table III compares the predictions based on Propositions 3 and 4 with the values obtained in three of our simulations.

TABLE III: ν^S_+ at equilibrium for the indicated set S. Simulation values are calculated by time averaging over t = 100 to t = 1000 hours.

Set S | Bound from Propositions | Simulations 1, 2, 3
C_1   | ≤ 0.05                  | 0.002, 0.004, 0.002
C_2   | ≥ 0.94                  | 0.994, 0.995, 0.995
B_1   | ≤ 0.15                  | 0.008, 0.006, 0.005
B_2   | ≥ 0.315, ≤ 0.685        | 0.476, 0.471, 0.483
https://hal.science/hal-01756843/file/CEFC2016_Mohamodhosen.pdf
Bilquis Mohamodhosen (email: [email protected]), Frédéric Gillon, Abdelmounaïm Tounzi, Loïc Chevallier

Topology Optimisation of a 3D Electromagnetic Device using the SIMP Density-Based Method

Keywords: gradient-based algorithm, relaxation of variables, SIMP density method, weighted objective sum

Abstract—This paper proposes a topology optimisation methodology based on the density-based SIMP method, applied to a numerical example to validate it. The approach and methodology are detailed, and the results for a basic 3D electromagnetic example are presented. The non-linear B(H) curve is also taken into account.

I. INTRODUCTION

Topology Optimisation (TO) arouses growing interest in the electromagnetic community, as it proposes a new way of tackling the general engineering problem: finding the most optimal design to maximize performance and minimize cost. TO offers the free will of finding the best design without any a priori layout, given an appropriate formulation of the optimization objectives and constraints consistent with the problem. Various authors have investigated the topic, presenting different methods to deal with it, but our main interest in this paper is focused on the SIMP (density-based) method [Rozvany] due to its ease of application and reproducibility. However, this method presents some weaknesses related to the unavoidable relaxation of variables (discrete to continuous) while using a gradient-based algorithm, giving rise to undesired intermediate variables. This paper presents a methodology devised to solve this problem, and a numerical example using Finite Element (FE) analysis for validation.

II. PROBLEM FORMULATION

In electromagnetic applications, it is usually desired to maximise an objective function such as force or energy, while using the least Material Quantity (MQ) in the design.
The structure is modelled by FE, and calculations are done by a laboratory-developed calculation code (code_Carmel) coupled with the gradient-based fmincon SQP algorithm via a laboratory-developed optimisation platform (Sophemis). This has the advantage of outputting an electromagnetically coherent design at each iteration, and hence increases the probability of obtaining the best 'feasible' design. In this paper, it is desired to produce the best structure made of iron and air as materials, represented by n variables ρ varying from 0 to 1, where 0 is air and 1 is iron. To prevent the existence of undefined intermediate materials (e.g. at 0.5) in the final design, we propose to add a Feasibility Factor (FF) to force the values to 0 and 1. The optimisation problem is defined as in equation (1).

(1)

The main objective g(ρ) is maximised, which can also be written as the minimisation of the negative value of g(ρ). The coefficient α is used to weigh the prominence of FF with respect to the main objective in the problem. If α is less than 0.5, FF will have a reduced, yet substantial, importance in problem solving as compared to g(ρ). MQ is constrained to β, where β is the maximum amount of iron allowed in the design.

III. NUMERICAL EXAMPLE

A basic 3D numerical example (Fig. 1) is used to validate the methodology. The initial domain is a cube meshed into 4096 hexahedral elements (Fig. 1a), where a magnetic potential difference is applied to the nodes highlighted with white dots. The number of variables used is 64, with therefore 64 hexahedra per variable (green outlined cube). The aim is to find the most optimal design that maximises the energy (hence minimises the negative value of the energy), with the MQ constrained to 75%. The weighting α is set to 0.25. The final design is given in Fig. 1b, where the white zones represent air and the black zones iron, and the magnetic flux density B is given in Fig. 1c.
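Since the paper does not reproduce the expression of equation (1) or of the Feasibility Factor, the following is only a plausible sketch (ours) of the weighted formulation it describes: a hypothetical FF that equals 1 on pure 0/1 designs and 0 on uniformly intermediate ones, a weighted objective to minimise, and the material-quantity constraint. The exact FF expression and weighting scheme are assumptions.

```python
import numpy as np

def feasibility_factor(rho):
    # Hypothetical FF (not given in the paper): 1 when every rho_i is 0 or 1,
    # 0 when all rho_i = 0.5, penalizing undefined intermediate densities.
    rho = np.asarray(rho, dtype=float)
    return 1.0 - 4.0 * np.mean(rho * (1.0 - rho))

def objective(rho, g, alpha=0.25):
    # One possible reading of (1): minimise -(1 - alpha) * g(rho) - alpha * FF(rho),
    # so maximising the main objective g while weighting FF by alpha.
    return -(1.0 - alpha) * g(rho) - alpha * feasibility_factor(rho)

def material_constraint(rho, beta=0.75):
    # MQ <= beta: mean density (fraction of iron) may not exceed beta; >= 0 when satisfied.
    return beta - float(np.mean(rho))
```

These three callables are the pieces an SQP solver (such as fmincon, as used in the paper) would receive, with bound constraints 0 ≤ ρ_i ≤ 1 on each density variable.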
Optimisations are done using linear as well as non-linear B(H) behaviour assumptions with the same initial parameters. Both yield the same topology, with a calculation time of 5 min and a maximum B of 3.5 T for the linear case, and 20 min with a maximum B of 1.3 T for the non-linear case. In the extended version of the paper, more details will be given on the application of the methodology to optimise the topology of an electromagnet as in [START_REF] Okamoto | Improvements in Material-Density-Based Topology Optimization for 3-D Magnetic Circuit Design by FEM and Sequential Linear Programming Method[END_REF].

Fig. 1. (a) FE model, (b) final design, (c) magnetic flux density B (T)
01756849
en
[ "sde", "sde.mcg", "sde.be" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756849/file/Ols%20et%20al.%20GLOPLACHA.pdf
Clémentine Ols (email: [email protected]), Valérie Trouet, Martin P. Girardin, Annika Hofgaard, Yves Bergeron, Igor Drobyshev

Post-1980 shifts in the sensitivity of boreal tree growth to North Atlantic Ocean dynamics and seasonal climate

Short title: Tree growth responses to North Atlantic Ocean dynamics

Keywords: Climate change, Dendrochronology, Climate-growth interactions, Response functions, Teleconnections, Arctic amplification

Highlights:
- A significant boreal tree growth response to oceanic and atmospheric indices emerged during the 1980s.
- This was particularly observed in western and central boreal Quebec and in central and northern boreal Sweden.
- The post-1980 sensitivity to large-scale indices synchronized with changes in tree growth responses to local climate.
- Future large-scale dynamics may impact forest growth and carbon sequestration to a greater extent than previously thought.

Introduction

Terrestrial biomes on both sides of the North Atlantic Ocean are strongly influenced by Arctic and Atlantic oceanic and atmospheric dynamics [START_REF] D'arrigo | NAO and sea surface temperature signatures in tree-ring records from the North Atlantic sector[END_REF][START_REF] Ottersen | Ecological effects of the North Atlantic Oscillation[END_REF][START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF]. Some mid-20th century changes in the dynamics of the North Atlantic Ocean have been considered as early signs of tipping points in the Earth climate system [START_REF] Lenton | Tipping elements in the Earth's climate system[END_REF][START_REF] Lenton | Early warning of climate tipping points[END_REF].
The Atlantic Meridional Overturning Circulation (AMOC) exhibited an exceptional slow-down in the 1970s [START_REF] Rahmstorf | Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation[END_REF]. The cause of this slow-down is still under debate, but possible explanations include the weakening of the vertical structure of surface waters through the discharge of low-salinity fresh water into the North Atlantic Ocean, due to the disintegration of the Greenland ice sheet and the melting of Canadian Arctic glaciers. A further weakening of the AMOC may possibly lead to a wide-spread cooling and decrease in precipitation in the North Atlantic region [START_REF] Sgubin | Abrupt cooling over the North Atlantic in modern climate models[END_REF], subsequently lowering the productivity of land vegetation both over northeastern North America and northern Europe [START_REF] Zickfeld | Carbon-cycle feedbacks of changes in the Atlantic meridional overturning circulation under future atmospheric CO2[END_REF][START_REF] Jackson | Global and European climate impacts of a slowdown of the AMOC in a high resolution GCM[END_REF]. Despite increasing research efforts in monitoring climate-change impacts on ecosystems, effects of late 20 th century changes in North Atlantic Ocean dynamics on mid-to high-latitude terrestrial ecosystems remain poorly understood. 
The dynamics of North Atlantic oceanic and atmospheric circulation, as measured through the AMOC, North Atlantic Oscillation (NAO) and Arctic Oscillation (AO) indices, strongly influence climate variability in northeastern North America (NA) and northern Europe (NE) [START_REF] Hurrell | Decadal trends in the north atlantic oscillation: regional temperatures and precipitation[END_REF][START_REF] Baldwin | Propagation of the Arctic Oscillation from the stratosphere to the troposphere[END_REF][START_REF] Wettstein | The influence of the North Atlantic-Arctic Oscillation on mean, variance, and extremes of temperature in the Northeastern United States and Canada[END_REF]. NAO and AO indices integrate differences in sea-level pressure between the Iceland Low and the Azores High [START_REF] Walker | Correlation in seasonal variation of weather. IX. A further study of world weather[END_REF], with high indices representative of increased west-east air circulation over the North Atlantic. Variability in AMOC, NAO and AO indices affects climate dynamics, both in terms of temperatures and precipitation regimes. Periods of high winter NAO and AO indices are associated with below-average temperatures and more sea ice in NA and a warmer-and wetter-than-average climate in NE. Periods of low winter NAO and AO indices are, in turn, associated with above-average temperatures and less sea ice in NA and a colder-and dryer-than-average climate in NE [START_REF] Wallace | Teleconnections in the geopotential height field during the Northern Hemisphere Winter[END_REF][START_REF] Hellström | The influence of the North Atlantic Oscillation on the regional temperature variability in Sweden: spatial and temporal variations[END_REF]. Low AMOC indices induce a wide-spread cooling and decrease of precipitation across the high latitudes of the North Atlantic region [START_REF] Jackson | Global and European climate impacts of a slowdown of the AMOC in a high resolution GCM[END_REF]. 
Boreal forests cover most of mid-and high-latitude terrestrial regions of NA and NE and play an important role in terrestrial carbon sequestration and land-atmosphere energy exchange [START_REF] Betts | Offset of the potential carbon sink from boreal forestation by decreases in surface albedo[END_REF][START_REF] Bala | Combined climate and carbon-cycle effects of large-scale deforestation[END_REF][START_REF] De Wit | Climate warming feedback from mountain birch forest expansion: Reduced albedo dominates carbon uptake[END_REF]. Boreal forests are sensitive to climate change [START_REF] Gauthier | Boreal forest health and global change[END_REF]. Despite general warming and lengthening of the growing season at mid-and high-latitudes [START_REF] Karlsen | Growing-season trends in Fennoscandia 1982-2006, determined from satellite and phenology data[END_REF]IPCC, 2014), tree growth in many boreal regions lost its positive response to rising temperatures during the late-20 th century [START_REF] Briffa | Reduced sensitivity of recent tree-growth to temperature at high northern latitudes[END_REF]). An increasing dependence on soil moisture in the face of the rapid rise in summer temperatures may counterbalance potential positive effects on boreal forest growth of increased atmospheric CO2 concentrations [START_REF] Girardin | No growth stimulation of Canada's boreal forest under half-century of combined warming and CO2 fertilization[END_REF]. During the late 20 th century, large-scale growth declines [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. 
Direction du patrimoine écologique et du développement durable[END_REF] and more frequent low growth anomalies [START_REF] Ols | Previous growing season climate controls the occurrence of black spruce growth anomalies in boreal forests of Eastern Canada[END_REF]-in comparison with the early 20 th century-have been reported for pristine boreal spruce forests of NA. In coastal NE, climatic changes over the 20 th century have triggered shifts from negative significant to non-significant spruce responses to winter precipitation [START_REF] Solberg | Shifts in radial growth responses of coastal Picea abies induced by climatic change during the 20th century, central Norway[END_REF]. Annual variability in boreal forest tree growth patterns have shown sensitivity to sea ice conditions [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF][START_REF] Drobyshev | Atlantic SSTs control regime shifts in forest fire activity of Northern Scandinavia[END_REF] and variability in SSTs [START_REF] Lindholm | Growth indices of North European Scots pine record the seasonal North Atlantic Oscillation[END_REF]. All changes in boreal tree growth patterns and climate-growth interactions listed above may be driven by the dynamics of the North Atlantic Ocean. Understanding current and projected future impacts of North Atlantic Ocean dynamics on boreal forest ecosystems and their carbon sequestration capacity calls for a deeper spatiotemporal analysis of tree growth sensitivity to large-scale oceanic and atmospheric dynamics. The present study investigates tree growth responses to changes in North Atlantic Ocean dynamics of two widely distributed tree species in the boreal forests of northeastern North America (black spruce) and northern Europe (Norway spruce). 
We investigated tree-growth sensitivity to seasonal large-scale indices (AMOC, NAO, AO) and seasonal climate (temperature and precipitation) over the second half of the 20th century. We hypothesize that shifts in tree growth sensitivity to large-scale indices and local climate are linked to major changes in North Atlantic Ocean dynamics. This study aims to answer two questions: (i) has boreal tree growth shown sensitivity to North Atlantic Ocean dynamics? and (ii) does tree growth sensitivity to such dynamics vary through space and time, both within and across NA and NE?

2 Material and methods

Study areas

We studied two boreal-forest-dominated areas under the influence of large-scale atmospheric circulation patterns originating in the North Atlantic: the northern boreal biome of the Canadian province of Quebec (50°N-52°N, 58°W-82°W) in NA and the boreal biome of Sweden (59°N-68°N, 12°E-24°E) in NE (Fig. 1a). The selection of the study areas was based on the availability of accurate annually-resolved tree growth measurements acquired from forest inventories. In northern boreal Quebec, mean annual temperature increases from north to south (-5 to 0.8 °C) and total annual precipitation increases from west to east (550 to 1300 mm), mainly due to winter moisture advection from the North Atlantic Ocean [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF]. In boreal Sweden, annual mean temperature increases from north to south (-2 to 6 °C) and annual total precipitation decreases from west to east (900 to 500 mm), mostly because of winter moisture advection from the North Atlantic Ocean that condenses and precipitates over the Scandinavian mountains in the west (Sveriges meteorologiska och hydrologiska institut (SMHI), 2016).
The topography in northern boreal Quebec reveals a gradient from low plains in the west (200-350 m above sea level [a.s.l.]) to hills in the east (400-800 m a.s.l.). In boreal Sweden, the topography varies from high mountains (1500-2000 m a.s.l.) in the west to low lands (50-200 m a.s.l.) in the east along the Baltic Sea. However, mountainous coniferous forests are only found up to ca. 400 m a.s.l. in the north (68°N) and ca. 800 m a.s.l. in the south (61°N).

Tree growth data

We studied tree growth patterns of the most common and widely distributed spruce species in each study area: black spruce (Picea mariana (Mill.) Britton) in Quebec and Norway spruce (P. abies (L.) H. Karst) in Sweden. A total of 6,876 and 14,438 tree-ring width series were retrieved from the Quebec (Ministère des Ressources naturelles du Québec, 2014) and Swedish forest inventory databases (Riksskogstaxeringen, 2016), respectively. We adapted data selection procedures to each database to provide as high local coherence in growth patterns as possible. For Quebec, core series were collected from dominant trees on permanent plots (three trees per plot, four cores per tree) between 2007 and 2014. Permanent plots were situated in unmanaged old-growth black spruce forests north of the northern limit for timber exploitation. Core series were aggregated into individual tree series using a robust bi-weighted mean (a robust average unaffected by outliers; [START_REF] Affymetrix | Statistical Algorithms Description Document[END_REF]). To enhance growth coherence at the local level, we further selected tree series presenting strong correlation (r > 0.4) with their respective local landscape unit master chronology.
This master chronology corresponds to the average of all other tree series within the same landscape unit (landscape units are 6341 km² on average and delimit a territory characterized by specific bioclimatic and physiographic factors [START_REF] Robitaille | Paysages régionaux du Québec méridional[END_REF]). This resulted in the selection of 790 tree series that were averaged at the plot level using a robust bi-weighted mean. The obtained 444 plot chronologies had a common period of 1885-2006 (Table 1). Plot chronologies were detrended using a log transformation and a 32-year spline detrending, and pre-whitened using autocorrelation removal [START_REF] Cook | The smoothing spline: an approach to standardizing forest interior tree-ring width series for dendroclimatic studies Tree-Ring[END_REF]. Detrending aims at removing the low-frequency age-linked variability in tree-ring series (decreasing tree-ring width with increasing age) while keeping most of the high-frequency variability (mainly linked to climate). Pre-whitening removes all but the high-frequency variation in the series by fitting an auto-regressive model to the detrended series. The order of the auto-regressive model was selected by the Akaike Information Criterion [START_REF] Akaike | A new look at the statistical model identification[END_REF]. For Sweden, core series were collected within the boreal zone of the country (59°N-68°N) on temporary plots between 1983 and 2010. Temporary plots were situated in productive forests, i.e. those with an annual timber production of at least 1 m³/ha. These forests encompass protected, semi-natural and managed forests. In each plot, one to three trees were sampled, with two cores per tree. Swedish inventory procedures do not include any visual and statistical cross-dating of core series at the plot level.
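The detrending and pre-whitening chain described above (log transform, smoothing-spline detrending, auto-regressive pre-whitening with AIC order selection) can be sketched as follows. The series is synthetic, and a generic SciPy smoothing spline stands in for the exact 32-year, 50%-frequency-cutoff spline of dendrochronology; the AR fit is a plain least-squares implementation, not the original software.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
years = np.arange(1885, 2007)
n = years.size

# Synthetic ring widths: exponential age trend times AR(1) noise
# (illustrative data, not inventory measurements).
trend = np.exp(-0.01 * (years - years[0]))
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.5 * noise[t - 1] + rng.normal(scale=0.1)
rw = trend * np.exp(noise)

# 1) Log-transform, then remove the age trend with a smoothing spline.
log_rw = np.log(rw)
spline = UnivariateSpline(years, log_rw, s=n * 0.05)
rwi = log_rw - spline(years)          # detrended ring-width index

# 2) Pre-whiten: fit AR(p) by least squares, choosing p with AIC.
def fit_ar(x, p):
    # Lag matrix: column k holds x lagged by k+1 years.
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    aic = len(y) * np.log(np.mean(resid**2)) + 2 * (p + 1)
    return coef, resid, aic

best = min((fit_ar(rwi, p) for p in range(1, 6)), key=lambda r: r[2])
prewhitened = best[1]                 # residuals = pre-whitened chronology
print(len(prewhitened))
```

After pre-whitening, the lag-1 autocorrelation of the residual series should be close to zero, which is exactly the property the procedure is meant to deliver.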
To filter out misdated series, we aggregated core series into 4067 plot chronologies using a robust bi-weighted mean, and compared them to Norway spruce reference chronologies from the International Tree-Ring Data Base (International Tree Ring Data Bank (ITRDB), 2016). In total, seven ITRDB reference chronologies were selected (Fig. 1b), all representative of tree growth at mesic sites in boreal Sweden. Plot and reference chronologies were detrended and pre-whitened using the same standard procedures used for the Quebec data. Each plot chronology was then compared with its geographically nearest reference chronology (determined based on Euclidean distance) using Student's t-test analysis [START_REF] Student | The probable error of the mean[END_REF]. Plot chronologies with a t-test value lower than 2.5 with their respective nearest reference chronology were removed from further analyses (the t-test value threshold was set according to the mean length of plot chronologies (Table 1)). A total of 1256 plot chronologies passed this quality test (Table 1).

Spatial aggregation of plot chronologies into regional chronologies in each study area

Quality-checked chronologies at the plot level were aggregated into 1° x 1° latitude-longitude grid cell chronologies within each study area (Fig. 1b). Grid cell chronologies were calculated as the robust bi-weighted mean of all plot chronologies within each grid cell. Grid cells containing less than three plot chronologies were removed from further analyses. This resulted in a total of 36 and 56 grid cell chronologies in Quebec and Sweden, respectively (Fig. 1b, Table 1). Grid cells contained on average 12 and 23 plot chronologies in Quebec and Sweden, respectively (Table 1). To investigate the influence of spatial scale in climate-growth sensitivity analyses, we performed an ordination of grid cell chronologies within each study area over their common period (Fig. 1c).
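The robust bi-weighted mean used throughout these aggregation steps (Tukey's biweight) can be sketched as follows; the one-pass form and the tuning constant c = 9 are conventional choices assumed here, not taken from the paper.

```python
import numpy as np

def biweight_mean(x, c=9.0, eps=1e-8):
    """One-pass Tukey bi-weight mean: a robust average unaffected by
    outliers. c = 9 is the conventional tuning constant."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))     # median absolute deviation
    u = (x - med) / (c * mad + eps)      # standardised distances
    w = (1.0 - u**2) ** 2                # bi-weight weights
    w[np.abs(u) >= 1] = 0.0              # outliers get zero weight
    return np.sum(w * x) / np.sum(w)

# Five coherent ring-width indices plus one gross outlier (e.g. a
# misdated or damaged core): the outlier is effectively ignored.
rings = np.array([1.02, 0.98, 1.05, 0.97, 1.01, 5.0])
print(np.mean(rings), biweight_mean(rings))
```

The arithmetic mean of this sample is pulled above 1.6 by the outlier, while the bi-weight mean stays near 1.0, which is why it is the standard aggregation choice for chronologies.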
The common period between grid cell chronologies was 1885-2006 and 1936-1995 in Quebec and Sweden, respectively. Ordination analyses were performed in R using Euclidean dissimilarities matrices (dist function) and Ward agglomeration (hclust function) methods. Three main clusters were identified in each study area (Fig. 1c). Spatial extents of all clusters were consistent with well-defined bioclimatic regions, providing support to data selection procedures. In Quebec, clusters identified in the West (Q_W) and the East (Q_E) corresponded well to the drier and wetter northern boreal region, respectively (Fig. 1b & c). In Sweden, the cluster identified in the South (S_S) corresponded to a combination of the nemo-boreal and southern boreal zones [START_REF] Moen | National Atlas of Norway. Vegetation[END_REF]. The Swedish central (S_C) and northern (S_N) clusters corresponded to the mid-boreal and northern boreal zones, respectively (Fig. 1b & c) [START_REF] Moen | National Atlas of Norway. Vegetation[END_REF]. Regional chronologies were built as the average of all grid cell chronologies within a cluster. In Sweden, inter-cluster correlations were all significant and ranged from 0.77 (S_S vs S_N) to 0.94 (S_C vs S_N). In Quebec, inter-cluster correlations were all significant and ranged from 0.44 (Q_W vs Q_E) to 0.52 (Q_C vs Q_E) (see Appendix S1-S3 in Supporting Information). Henceforward, the terms 'local level' and 'regional level' refer to analyses focusing on the grid cell chronologies and the six regional chronologies, respectively. (Fig. 1 caption fragment: ⋇ ITRDB reference chronologies swed011, swed012, swed013, swed014, swed015, swed017 and swed312; the grey shading indicates the boreal zone delimitation according to [START_REF] Brant | An introduction to Canada's boreal zone: ecosystem processes, health, sustainability, and environmental issues[END_REF].)
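The ordination step described above (Euclidean dissimilarity matrices with Ward agglomeration, R's dist/hclust) has a direct SciPy counterpart. The sketch below uses synthetic stand-ins for grid cell chronologies, built so that three groups share distinct regional signals.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)

# Synthetic grid cell chronologies (rows = cells, columns = years):
# three groups of five cells, each group sharing a common signal plus
# cell-level noise, mimicking regional clusters of growth patterns.
years, cells_per_group = 60, 5
signals = rng.normal(size=(3, years))
chronologies = np.vstack([
    s + rng.normal(scale=0.4, size=(cells_per_group, years))
    for s in signals
])

# Euclidean dissimilarity matrix + Ward agglomeration,
# the SciPy equivalents of R's dist() and hclust(method = "ward").
d = pdist(chronologies, metric="euclidean")
tree = linkage(d, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")
print(clusters)
```

With a clear shared signal per group, cutting the Ward tree at three clusters recovers the grouping exactly, which mirrors how the three regional clusters per study area were identified.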
Climate data

For each grid cell, we extracted local seasonal mean temperature and total precipitation data from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF], with seasons spanning from the previous (pJJA) through the current summer (JJA). Climate data were further aggregated at the regional level as the robust bi-weighted mean of climate data of all grid cells contained in each regional cluster (Fig. 1b & c). Seasonal AMOC indices (1961-2005; first AMOC measurements in 1961) were extracted from the European Center for Medium-Range Weather Forecast (Ocean Reanalysis System ORA-S3). Seasonal AO and NAO indices (1950-2008) were extracted from the Climate Prediction Center database (NOAA, 2016). Seasonal AMOC, NAO, and AO indices included previous summer, winter (DJF), and current summer. All seasonal climate data were downloaded using the KNMI Climate Explorer [START_REF] Trouet | KNMI Climate Explorer: A web-based research tool for high-resolution paleoclimatology[END_REF] and were detrended using linear regression and thereafter pre-whitened (autocorrelation of order 1 removed from time series).

Links between seasonal climate and growth patterns

Analyses were run over the 1950-2008 period (the longest common period between tree growth and climate data), except with AMOC indices, which were only available for 1961-2005. Tree growth patterns were correlated with seasonal climate variables (previous-to-current summer temperature averages and precipitation sums) and seasonal indices (previous summer, winter, and current summer AMOC, NAO, and AO) at the regional and local levels. To minimize type I errors, each correlation analysis was tested for 95% confidence intervals using 1000 bootstrap samples. In addition, moving correlation analyses (21-yr windows moved one year at a time) were performed at the regional level using the same procedures as above.
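The bootstrapped significance test and the 21-yr moving correlations described above can be sketched as follows; this is a simplified illustration on synthetic data, not the treeclim implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def boot_corr(x, y, n_boot=1000, alpha=0.05):
    """Pearson correlation with a paired-bootstrap percentile CI:
    significant when the 95% CI excludes zero."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample year-pairs together
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return r, (lo, hi), (lo > 0) or (hi < 0)

def moving_corr(x, y, years, window=21):
    """21-yr windows moved one year at a time; returns (centre year,
    r, CI, significant) per window."""
    out = []
    for start in range(len(x) - window + 1):
        sl = slice(start, start + window)
        r, ci, sig = boot_corr(x[sl], y[sl], n_boot=200)
        out.append((years[start + window // 2], r, ci, sig))
    return out

# Synthetic growth/climate pair over 1950-2008 with a real shared signal.
years = np.arange(1950, 2009)
climate = rng.normal(size=years.size)
growth = 0.6 * climate + rng.normal(scale=0.8, size=years.size)
r, ci, sig = boot_corr(growth, climate)
print(round(r, 2), sig)
```

Resampling the (growth, climate) pairs jointly preserves their dependence, which is what makes the percentile interval a valid significance screen for each correlation.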
All calculations were performed using the R package treeclim [START_REF] Zang | treeclim: an R package for the numerical calibration of proxy-climate relationships[END_REF]. For more details regarding bootstrapping procedures please see the description of the "dcc" function of this package.

Results

Tree growth responses to seasonal climate

Some significant climate-growth associations were observed at the regional level (Fig. 2). Significant associations at the local level displayed strong spatial patterns and revealed heterogeneous within-region growth responses (Figs. 3 and 4). Moving correlations revealed numerous shifts in the significance of climate-growth associations around 1980 (Fig. 5).

Quebec

No significant climate-growth associations were observed at the regional level in western boreal Quebec over the entire study period (Fig. 2). Some significant positive responses to previous winter and current spring temperatures were observed at the local level, but these concerned a minority of cells (Fig. 3). Moving correlations revealed that Q_W significantly correlated with previous summer precipitation (negatively) before the 1970s, with previous winter temperatures (positively) from the 1970s and with current spring temperatures (positively) from 1980 (Fig. 5). Tree growth in central boreal Quebec significantly and positively correlated with current summer temperatures at the regional and local levels (Figs. 2 and 3). Numerous negative correlations between tree growth and spring precipitation were observed at the local level (Fig. 3). Moving correlations revealed an emerging correlation between Q_C and previous winter temperatures in the early 1970s (significant during most intervals up to most recent years) (Fig. 5). No significant climate-growth associations were observed in eastern boreal Quebec at the regional level (Fig. 2). At the local level, some positive significant correlations with current summer temperatures were observed (Fig. 3).
Moving correlations revealed that Q_E correlated significantly and positively with current summer temperatures up to the early 1970s (Fig. 5). (Figure caption fragment: Significant correlations (P < 0.05) are marked with a star. Analyses were computed between grid cell chronologies and local seasonal climate data extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF), current spring (MAM) and current summer (JJA). To visualize separation between regional clusters (Q_W, Q_C, and Q_E, cf. Fig. 1), correlation values at Q_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot.)

Sweden

Tree growth in southern boreal Sweden correlated significantly and negatively with previous summer and winter temperatures at the regional and local levels, the correlation with winter temperatures concerning, however, only a minority of cells (Figs. 2 and 4). Moving correlations indicated that the negative association with previous summer temperatures remained significant up to the early 1990s and that the negative association with winter temperatures emerged after 1980 (Fig. 5). In central boreal Sweden, tree growth significantly and negatively correlated with previous summer temperatures both at the regional and local levels (Figs. 2 and 4). Some additional significant correlations with winter temperatures (negative) and with current summer temperatures (positive) were observed at the local level (Fig. 4). Moving correlation analyses revealed a significant positive correlation between S_C and current summer temperatures that dropped and became non-significant at the end of the study period (Fig. 5). In addition, the correlation between S_C and previous summer precipitation shifted from significantly negative to significantly positive during the 1980s (Fig. 5).
S_C became significantly and negatively correlated with previous summer temperatures after the 1980s and stopped being significantly and negatively correlated with previous autumn precipitation and with winter temperatures at the end of the 1970s (Fig. 5). Tree growth in northern boreal Sweden correlated significantly with previous summer (negatively) and current summer temperatures (positively) both at the regional and local levels (Figs. 2 and 4). At the local level, tree growth in some cells significantly and negatively correlated with winter temperatures (Fig. 4). Significant and negative responses to current summer precipitation were observed at northernmost cells (Fig. 4). Moving correlations revealed that the positive association with current summer temperatures was only significant at the beginning and at the end of the study period (Fig. 5). After the 1980s, significant positive associations with previous autumn temperatures emerged (Fig. 5) and the significant negative association with winter temperatures disappeared. (Figure caption fragment: Analyses were computed between grid cell chronologies and local seasonal climate data extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF), current spring (MAM) and current summer (JJA). To visualize the separation between regional clusters (S_S, S_C, and S_N, cf. Fig. 1), correlation values at S_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot.) (Figure caption fragment: […] extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF] and then aggregated at the regional level by robust bi-weighted mean.)
(Figure caption fragment: Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF), current spring (MAM) and current summer (JJA). Moving correlations were calculated using 21-yr windows moved one year at a time and are plotted using the central year of each window. Windows of significant correlations (P < 0.05) are marked with a dot.)

Links between tree growth patterns and large-scale indices

Some significant associations were found between tree growth and large-scale indices (Figs. 6, 7, and 8). Moving correlation analyses revealed some shifts from pre-1980 insignificant to post-1980 significant correlations (Fig. 9). The seasonal indices involved in these shifts varied across regional chronologies.

Quebec

Tree growth in western boreal Quebec was significantly and negatively associated with the winter AMOC and the winter AO indices at the regional level (Fig. 6). At the local level, these associations concerned, however, a minority of cells (Fig. 7). Moving correlations revealed that the regional negative association with winter AMOC was only significant in the most recent part of the study period (Fig. 9). Significant negative correlations between Q_W and current summer NAO and AO indices were observed from the 1980s up to the most recent years, at which point they show a steep increase and become non-significant (Fig. 9). In central boreal Quebec, no significant associations between tree growth and seasonal indices were identified at the regional or local level (Figs. 6 and 7). Moving correlations indicated significant negative correlations with previous summer NAO and AO indices during the 1970s, with winter NAO and AO indices during the 1980s and with current summer NAO and AO indices from the 1980s up to the most recent years (Fig. 9). No significant association was identified between large-scale indices and tree growth in eastern boreal Quebec (Figs. 6, 7, and 9).
Sweden

No significant association between tree growth in southern boreal Sweden and seasonal large-scale indices was identified at the regional or local level (Figs. 6 and 8). Moving correlations revealed, however, significant negative associations between S_S and the winter AMOC index before the 1980s (Fig. 9). In central boreal Sweden, tree growth significantly and positively correlated with the current summer NAO index at the regional level (Fig. 6). At the local level, this correlation concerned, however, a minority of cells (Fig. 8). Moving correlations revealed that the significant positive association with the current summer NAO index emerged in the early 1980s (Fig. 9) and that S_C significantly correlated with the current summer AMOC index during the 1980s (Fig. 9). In northern boreal Sweden, tree growth significantly correlated with the current summer NAO index (positively) and with the winter AO index (negatively) at the regional level (Fig. 6). At the local level, the positive association with summer NAO concerned a large majority of cells and the negative association with the winter AO index concerned only very few cells (Fig. 8). Moving correlation analyses indicated that the positive association between S_N and the current summer NAO index was only significant after the 1980s and that S_N significantly correlated with current summer AMOC during most of the 1980s (Fig. 9). (Figure caption fragment: Significant correlations (P < 0.05) are marked with a black dot.)

Spatial aggregation of tree growth data

The high correlation between the regional chronologies in NE (Appendix S1), especially between the central and northern chronologies, could have supported the construction of one single boreal Sweden-wide regional chronology. Climate-growth analyses at the regional and local level revealed, nevertheless, clear differences across space in tree growth sensitivity to climate (Fig. 4) and to large-scale indices (Fig. 8), with a higher sensitivity in northernmost forests.
The aggregation of tree growth data across space, even if based on objective similarity statistics (Appendix S1), may, therefore, mask important local differences in climate-growth interactions [START_REF] Macias | Growth variability of Scots pine (Pinus sylvestris) along a west-east gradient across northern Fennoscandia: A dendroclimatic approach[END_REF]. Our results demonstrate that spatial aggregation should not be performed without accounting for bioclimatic domains, especially when studying climate-growth interactions. In practice, one should at least check that a spatial similarity in tree growth patterns is associated with spatial similarity in seasonal climate. The use of both the regional and local scales regarding climate-growth interactions, as in the present study, is, therefore, recommended to exhaustively and more precisely capture cross-scale diverging and emerging tree growth patterns and sensitivity to climate.

Post-1980 shifts towards significant influence of large-scale indices on boreal tree growth

The emergence of a post-1980 significant positive tree growth response to the current summer NAO index in central and northern boreal Sweden (Fig. 9) appears to be linked to spatial variability in the NAO influence on seasonal climate (Fig. 10). Summer NAO has had little to no influence on summer climate variability over the entire period 1950-2008 in boreal Quebec or Sweden (Appendix S4). However, the partitioning of the period into two sub-periods of similar length (1950-1980 and 1981-2008) revealed a northeastward migration of the significant-correspondence field between the summer NAO index and local climate, particularly in NE (Fig. 10). Over the 1981-2008 period, the summer NAO index was significantly and positively associated with temperature and negatively with precipitation in boreal Sweden (Fig. 10).
Higher growing-season temperatures, induced by a higher summer NAO, might have promoted the growth of temperature-limited Swedish boreal forest ecosystems, explaining the recent positive response of tree growth to this large-scale index in the central and northern regions (Fig. 9). The northeastward migration of the NAO-climate spatial field may be an early sign of a northward migration of the North Atlantic Gulf Stream [START_REF] Taylor | The North Atlantic Oscillation and the latitude of the Gulf Stream[END_REF] or a spatial reorganization of the Icelandic-low and Azores-high pressure NAO's nodes [START_REF] Portis | Seasonality of the North Atlantic Oscillation[END_REF][START_REF] Wassenburg | Reorganization of the North Atlantic Oscillation during early Holocene deglaciation[END_REF]. The August Northern Hemisphere Jet over NE reached its northernmost position in 1976 but thereafter moved southward, despite increasing variability in its position [START_REF] Trouet | Recent enhanced high-summer North Atlantic Jet variability emerges from three-century context[END_REF]. This southward migration of the jet may weaken the strength of the observed post-1980 positive association between boreal tree growth and the summer NAO index in NE in the coming decades.
The post-1980 significant negative associations between tree growth and summer NAO and AO indices in boreal Quebec are more challenging to interpret. There was no evident significant tree growth response to summer temperature in these regions when analyzed over the full 1950-2008 period (Fig. 4). Yet, some significant positive associations between tree growth and temperatures were observed with winter temperatures from the 1970s (in central Quebec) and with spring temperatures from the 1980s (in western Quebec only) (Fig. 5). These associations indicate that tree growth in boreal Quebec has been limited by winter and spring climate since the 1970s and 1980s, respectively. Below-average summer temperatures induced by high summer NAO and AO may exacerbate the sensitivity of tree growth to low temperatures. Noting that no significant post-1980 association was observed between temperature and summer NAO and AO indices in Quebec (Fig. 10), the emerging negative tree growth response to summer NAO and AO indices may indicate a complex interplay between large-scale indices and air mass dynamics and lagged effects over several seasons [START_REF] Boucher | Decadal variations in eastern Canada's taiga wood biomass production forced by ocean-atmosphere interactions[END_REF]. In western Quebec, tree growth was negatively influenced by the winter AMOC index at the regional level (Fig. 6). This relationship appears to be linked to a significant positive association between tree growth and spring temperature (Figs. 5 and 9). Positive winter AMOC indices are generally associated with cold temperatures in Quebec, and particularly so in the West (Appendix S4). Positive winter AMOC indices are associated with the dominance of dry winter air masses of Arctic origin over Quebec, and may thereby delay the start of the growing season and reduce tree-growth potential. Forest dynamics in NA have been reported to correlate with Pacific Ocean indices such as the Pacific Decadal Oscillation (PDO) or the El-Nino Southern Oscillation (ENSO), particularly through their control upon fire activities (Macias Fauria & Johnson 2006; [START_REF] Goff | Historical fire regime shifts related to climate teleconnections in the Waswanipi area, central Quebec, Canada[END_REF]). These indices have not been investigated in the present study but might present some additional interesting features.
Contrasting climate-growth associations among boreal regions

Post-1980 shifts in tree growth sensitivity to seasonal climate differed among boreal regions. In NA, we observed the emergence of significantly positive growth responses to winter and spring temperatures. In NE, observed post-1980 shifts mainly concerned the significance of negative growth responses to previous summer and winter temperatures. Warmer temperatures at boreal latitudes have been reported to trigger contrasting growth responses to climate [START_REF] Wilmking | Recent climate warming forces contrasting growth responses of white spruce at treeline in Alaska through temperature thresholds[END_REF] and to enhance the control of site factors upon growth [START_REF] Nicklen | Local site conditions drive climate-growth responses of Picea mariana and Picea glauca in interior Alaska[END_REF]. This is particularly true with site factors influencing soil water retention, such as soil type, micro-topography, and vegetation cover [START_REF] Düthorn | Influence of micro-site conditions on tree-ring climate signals and trends in central and northern Sweden[END_REF]. Despite a generalized warming at high latitudes [START_REF] Serreze | The emergence of surface-based Arctic amplification[END_REF], no increased sensitivity of boreal tree growth to precipitation was identified in the present study, except in central Sweden where tree growth became positively and significantly correlated to previous summer precipitation (Fig. 5). This result underlines that temperature remains the major growth-limiting factor in our study regions. The observed differences in tree growth response to winter temperature highlight diverging non-growing season temperature constraints on boreal forest growth. While warmer winters appear to promote boreal tree growth in NA, they appear to constrain tree growth in boreal NE.
Such opposite responses to winter climate from two boreal tree species of the same genus might be linked to different winter conditions between Quebec and Sweden. In NA, winter conditions are more continental and harsher than in NE (Appendix S5). Warmer winters may therefore stimulate an earlier start of the growing season and increase growth potential [START_REF] Rossi | Lengthening of the duration of xylogenesis engenders disproportionate increases in xylem production[END_REF]. However, warmer winters, combined with shallower snow-pack, have been shown to induce a delay in the spring tree growth onset, through lower thermal inertia and a slower transition from winter to spring [START_REF] Contosta | A longer vernal window: the role of winter coldness and snowpack in driving spring transitions and lags[END_REF]. This phenomenon might explain the negative association between tree growth and winter temperatures observed in NE. The post-1970s growth-promoting effects of winter and spring temperature in NA (Fig. 5) suggest, as earlier reported by [START_REF] Charney | Observed forest sensitivity to climate implies large changes in 21st century North American forest growth[END_REF] and [START_REF] Girardin | No growth stimulation of Canada's boreal forest under half-century of combined warming and CO2 fertilization[END_REF], that, under sufficient soil water availability and limited heat stress conditions, tree growth at mid- to high latitudes can increase in the future. However, warmer winters may also negatively affect growth by triggering an earlier bud break and increasing risks of frost damages to developing buds [START_REF] Cannell | Climatic warming, spring budburst, and frost damage on trees[END_REF] or by postponing the start of the growing season (see above; [START_REF] Contosta | A longer vernal window: the role of winter coldness and snowpack in driving spring transitions and lags[END_REF]).
This might provide an argument against a sustained growth-promoting effect of higher seasonal temperatures [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF].

Gradients in the sensitivity of tree growth to North Atlantic Ocean dynamics across boreal Quebec and Sweden

Trees in western and central boreal Quebec, despite being furthest away from the North Atlantic Ocean in comparison to trees in eastern boreal Quebec, were the most sensitive to oceanic and atmospheric dynamics, and particularly to current summer NAO and AO indices after the 1970s. In these two boreal regions, tree growth responses to large-scale indices were stronger and more spatially homogeneous than tree growth responses to regional climate. This suggests that growth dynamics in western and central boreal Quebec, despite being mainly temperature-limited, can be strongly governed by large-scale oceanic and atmospheric dynamics [START_REF] Boucher | Decadal variations in eastern Canada's taiga wood biomass production forced by ocean-atmosphere interactions[END_REF]. The tree growth sensitivity to the winter AMOC index observed at the regional level in western boreal Quebec might directly emerge from the correspondence between AMOC and winter snowfall. Western boreal Quebec is the driest and most fire-prone of the Quebec regions studied here. Soil water availability in this region strongly depends on winter precipitation. High winter AMOC indices are associated with the dominance of Arctic air masses over NA and lead to decreased snowfall (Appendix S4).
Large-scale indices, through their correlation with regional fire activity, can also possibly override the direct effects of climate on boreal forest dynamics [START_REF] Drobyshev | Environmental controls of the northern distribution limit of yellow birch in eastern Canada[END_REF][START_REF] Zhang | Stand history is more important than climate in controlling red maple (Acer rubrum L.) growth at its northern distribution limit in western Quebec, Canada[END_REF]. Fire activity in NA strongly correlates with variability in atmospheric circulation, with summer high-pressure anomalies promoting the drying of forest fuels and increasing fire hazard ([START_REF] Skinner | The Association Between Circulation Anomaliesin the Mid-Troposphere and Area Burnedby Wildland Fire in Canada[END_REF]; Macias Fauria & Johnson 2006) and low-pressure anomalies bringing precipitation and decreasing fire activity. In Sweden, the northernmost forests were the most sensitive to North Atlantic Ocean dynamics, particularly to the summer NAO (Fig. 8). These high-latitude forests, considered to be 'Europe's last wilderness' [START_REF] Kuuluvainen | North Fennoscandian mountain forests: History, composition, disturbance dynamics and the unpredictable future[END_REF], are experiencing the fastest climate changes [START_REF] Hansen | Global surface temperature change[END_REF].
Numerous studies have highlighted a correspondence between tree growth and NAO (both winter and summer) across Sweden [START_REF] D'arrigo | NAO and sea surface temperature signatures in tree-ring records from the North Atlantic sector[END_REF][START_REF] Cullen | Multiproxy reconstructions of the North Atlantic Oscillation[END_REF][START_REF] Linderholm | Dendroclimatology in Fennoscandiafrom past accomplishments to future potential[END_REF], with possible shifts in the sign of this correspondence along north-south [START_REF] Lindholm | Growth indices of North European Scots pine record the seasonal North Atlantic Oscillation[END_REF] and west-east gradients [START_REF] Linderholm | Tree-ring records from central Fennoscandia: the relationship between tree growth and climate along a west-east transect[END_REF]. Our results identified a post-1980 positive correspondence between tree growth and summer NAO, spatially restricted to the northernmost regions (Figs. 8 and 9). This emerging correspondence appears linked to the combination of a growth-promoting effect of higher temperature at these latitudes (Fig. 5) and a northeastward migration of the spatial correspondence between NAO and local climate (Fig. 10). Boreal forests of Quebec (western and central) and Sweden (central and northern) emerged as regions sensitive to large-scale climate dynamics. We, therefore, consider them as suitable for a long-term survey of impacts of ocean-atmosphere dynamics on boreal forest ecosystems.

Fig. 1. a: Location of the two study areas (black frame); b & c: Clusters identified in each study area by ordination of 1° x 1° latitude-longitude grid cell chronologies. Ordination analyses were performed over the common period between grid cell chronologies in each study area using Euclidean dissimilarities matrices and Ward agglomeration methods. The common period was 1885-2006 for Quebec and 1936-1995 for Sweden.
Ordinations included 36 and 56 grid cell chronologies in Quebec and Sweden, respectively. A western (Q_W), central (Q_C) and eastern (Q_E) cluster were identified in Quebec and a southern (S_S), central (S_C) and northern (S_N) cluster were identified in Sweden.

Fig. 2. Tree growth responses to seasonal temperature averages (a) and precipitation sums (b) at the regional level over the 1950-2008 period, as revealed by correlation analyses. Analyses were computed between the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE) and seasonal climate data. Climate data were first extracted from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF] for each grid cell and then aggregated at the regional level by a robust bi-weighted mean. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA).

Fig. 3. Tree growth responses to seasonal temperature averages (a) and precipitation sums (b) at the local level over the 1950-2008 period in Quebec, as revealed by correlation analyses.

Fig. 4. Tree growth responses to seasonal temperature averages (a) and precipitation sums (b) at the local level over the 1950-2008 period in Sweden, as revealed by correlation analyses.

Fig. 5. Moving correlations between regional seasonal temperature averages (red lines) and precipitation sums (blue lines), and the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE) over the 1950-2008 period. Climate data were first extracted from the CRU TS 3.24 1° x 1° for each grid cell and then aggregated at the regional level by a robust bi-weighted mean.

Fig. 6. Correlation between seasonal AMOC (a), NAO (b), and AO (c) indices and the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE). Seasonal indices include previous summer (pJJA), winter (DJF), and current summer (JJA), and were calculated as mean of monthly indices.
Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. Significant correlations (P < 0.05) are marked with a star.

Fig. 7. Correlation between seasonal AMOC (a), NAO (b), and AO (c) indices, and growth patterns at the local level in Quebec. Seasonal indices include previous summer (left-hand panels), winter (middle panels), and current summer (right-hand panels), and were calculated as mean of monthly indices. Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. To visualize the separation between regional clusters, correlation values at Q_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot.

Fig. 8. Correlation between seasonal AMOC (a), NAO (b), and AO (c) indices, and growth patterns at the local level in Sweden. Seasonal indices were calculated as mean of monthly indices and include previous summer (left-hand panels), winter (middle panels), and current summer (right-hand panels). Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. To visualize the separation between regional clusters, correlation values at S_C grid cells are plotted with circles.

Fig. 9. Moving correlations between previous summer (pJJA; left-hand panels), winter (DJF; middle panels) and current summer (JJA; right-hand panels) large-scale indices, and the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE). Large-scale indices include AMOC (black), NAO (red), and AO (blue). Moving correlations were calculated using 21-yr windows moved one year at a time and are plotted using the central year of each window. Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. Windows of significant correlations (P < 0.05) are marked with a dot.

Fig. 10.
Correspondence between summer NAO (a) and AO (b) indices and local summer climate (mean temperature and total precipitation) between 1950 and 1980 (left-hand panels) and between 1981 and 2008 (right-hand panels). NAO and AO indices over the 1950-2008 period were extracted from NOAA's climate prediction center. Summer mean temperature and total precipitation are those of CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. All correlations were computed in the KNMI Climate Explorer (https://climexp.knmi.nl [START_REF] Trouet | KNMI Climate Explorer: A web-based research tool for high-resolution paleoclimatology[END_REF]). Indices and climate variables were normalized (linear regression) prior to analyses. Only correlations significant at P < 0.05 are plotted.

Table 1. Characteristics of tree-ring width chronologies*.
*Data for Q_W, Q_C and Q_E chronologies respectively. **Data for S_S, S_C and S_N chronologies respectively.

Acknowledgements

This study was financed by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the project 'Natural disturbances, forest resilience and forest management: the study case of the northern limit for timber allocation in Quebec in a climate change context' (STPGP 41344-11). We acknowledged financial support from the Nordic Forest Research Cooperation Committee (SNS) through the network project entitled 'Understanding the impacts of future climate change on boreal forests of northern Europe and eastern Canada', from the EU Belmont Forum (project PREREAL), NINA's strategic institute program portfolio funded by the Research Council of Norway (grant no. 160022/F40), the Forest Complexity Modelling (FCM), an NSERC funded program in Canada and a US National Science Foundation CAREER grant (AGS-1349942).
We are thankful to the Ministry of Forests, Wildlife and Parks (MFFP) in Quebec and to the Swedish National Forest Inventory (Bertil Westerlund, Riksskogstaxeringen, SLU) in Sweden for providing tree-growth data. ID thanks the Swedish Institute for support of this study done within the framework of the CLIMECO project.

Appendix A - Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1902/j.gloplacha.2018.03.006
Source: https://shs.hal.science/halshs-01609518/file/RM%20article%20politics%20reimbursement%20Final.pdf (2017)
Keywords: Reimbursement, regenerative medicine, valuation, publications, trade organisations, orphan drugs, risk-sharing agreements

Aims: This paper aims to map the trends and analyse key institutional dynamics that are constituting the policies for reimbursement of Regenerative Medicine (RM), especially in the UK.

Materials & Methods: Two quantitative publication studies using Google Scholar and a qualitative study based on a larger study of 43 semi-structured interviews.

Results: Reimbursement has been a growing topic of publications specific to RM and independent from orphan drugs. Risk-sharing schemes receive attention amongst others for dealing with RM reimbursement. Trade organisations have been especially involved in RM reimbursement issues and have proposed solutions.

Conclusion: The policy and institutional landscape of reimbursement studies in RM is a highly variegated and conflictual one and in its infancy.

We present two quantitative studies of the publications landscape, one overarching and one more specific, combined with one in-depth study of the most contested aspects of the RM reimbursement debate. Thus the publication studies describe the emerging forms and sites of RM reimbursement analysis, while the in-depth (interview-based) study presents the most crucial content of the conflictual debates in the field.

II - Methodology and Results

In this section, we present the methods and results of each of the three complementary studies. We undertook two quantitative studies of publications using Google Scholar and a qualitative study based on interviews. The time periods the publication searches covered are 2015 and/or 2016. Practically, these were the most recent periods available at the time of our study. Scientifically, the question of RM reimbursement emerged as a prominent issue from 2015. Hence, our systematic searches do not cover earlier publications, although a few relevant publications previous to 2015 are referred to in our discussion.
Google Scholar (GS) was chosen as most relevant for the purposes of this research, following review of options [START_REF] Harzing | Google Scholar as a new source for citation analysis?[END_REF][START_REF] Winter | The expansion of Google Scholar versus Web of Science: a longitudinal study[END_REF][START_REF]Google Scholar compared to Web of Science: A Literature Review[END_REF][14]. GS is interdisciplinary and met our supposition that publications on RM reimbursement will be found in various fields of research, and probably more in Social and Human Sciences (SHS) or economics. GS also is not confined to peer-reviewed articles, and it is free to use and thus accessible beyond academia. We supposed transparency and thus wider accessibility are key aspects of RM's reimbursement politics. However, GS also has many limitations; notably, it is not as comprehensive or precise as other search interfaces. Therefore, our set of selected articles includes both very detailed publications on RM reimbursement as well as others where it is referred to without being the main focus. Our results are therefore broadly indicative rather than definitive. We present the results of the three studies below.

1) Reimbursement topics in the publications landscape of RM

For the first study, GS was used for systematic searches to identify trends in RM publications (referring here to every publication whatever its format: journal, book, doctoral dissertation…) during 6 months from January 2016 to July 2016. The full search strategy is available from Supplementary file 1. In summary, the following keywords were used: "regenerative medicine" OR "advanced therapy" OR "gene therapy" OR "cell therapy" OR "tissue engineered product" OR "innovative therapies" (that we call "regenerative medicine based products": RMPs), combined with and without the word "UK", and with combinations including or excluding "reimbursement", "risk-sharing", and "orphan drugs".
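The keyword combinations just described can be enumerated programmatically. The sketch below only illustrates how such boolean query strings might be assembled; the authors' actual search strategy (including monthly repetition and country terms) is in Supplementary file 1, and the function name and simplifications here are assumptions.

```python
from itertools import product

# The six "RMP" terms quoted in the text, OR-ed together as the base query.
RMP_TERMS = ['"regenerative medicine"', '"advanced therapy"', '"gene therapy"',
             '"cell therapy"', '"tissue engineered product"',
             '"innovative therapies"']
BASE = " OR ".join(RMP_TERMS)

def build_queries(extra_terms=("reimbursement", "risk-sharing", "orphan drugs"),
                  with_uk=(False, True)):
    """Enumerate query strings: base terms, with/without "UK", with/without
    each extra term (a simplified enumeration, not the full strategy)."""
    queries = []
    for uk, term in product(with_uk, (None,) + tuple(extra_terms)):
        q = f"({BASE})"
        if term:
            q += f' AND "{term}"'
        if uk:
            q += ' AND "UK"'
        queries.append(q)
    return queries

qs = build_queries()
print(len(qs))  # 2 UK options x 4 term options = 8 query strings
```

Generating the strings this way makes the search grid explicit and reproducible, which matters when monthly result counts are later averaged and compared.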
The names of different countries were also included: UK, France, Germany, Japan, South Korea, USA (United States). Various adjustments, such as averaging of results over 6-monthly periods, were made to allow for GS's performance (Supplementary file 1). New systematic searches were added in February 2016 or in March 2016. (Where averages have been calculated over 6 months or over 5 months instead of 7 months, they are respectively marked with a single star (6-month average, marked *) or a double star (5-month average, marked **) in the tables presented.) The results are integrated in an Excel table (Supplementary file 2). This analysis addressed questions including: Is the question of reimbursement prominent among RM publications generally? Is reimbursement discussed to similar extents in the countries that are the most active in the RM field? Is RM reimbursement specifically considered compared to other expensive medicinal products, especially orphan drugs? Is risk-sharing the most discussed managed entry reimbursement option? The results of our analysis are below. First, reimbursement of RMPs has been a growing topic of publications between 2015 and 2016, even excluding orphan drugs. Publications on reimbursement/risk-sharing were more numerous in the first six months of 2016 than in the whole year 2015. However, reimbursement/risk-sharing is not a very prominent primary topic, judged by the use of these terms explicitly in publications' titles. This is even more true for the UK, as there was no publication including these terms in UK-origin titles. However, when including orphan drugs the picture changes. Risk-sharing is associated with orphan drugs in more publications: there were generally more than twice as many publications when orphan drugs was included in association with risk-sharing. This difference was smaller when orphan drugs was associated with reimbursement as a general term
(Table 1 and Supplementary file 2), although there were fewer publications on RM risk-sharing than on RM reimbursement generally. Thus, the orphan drug focus is greater than that on RMPs, though the latter is attracting an independent set of publications, both on reimbursement generally and on risk-sharing in particular. Finally, regarding the 2015 publications related to reimbursement of RMPs, the UK had the fifth most references (183 results), behind the USA (416 results), Japan (239 results) and Germany (237 results), close to France (188 results) and ahead of South Korea (56 results). However, it should be noted that, given GS limitations, these countries could be mentioned in the publications' data or as countries of affiliation of the authors (Table 2). Thus, UK RM reimbursement is a recent and growing topic of publications, and it appears less developed than in the USA, Japan, Germany, and France, both related to and independent from orphan drugs, and not only linked to risk-sharing agreements as a main policy option.

2) Profiling political characteristics of the RM reimbursement publications landscape

The second study is based on a deeper quantitative analysis of targeted publications identified from the first study. Out of 182 publications found in February 2015, 5 appear twice and 127 have been excluded following inspection as being out of our scope. Thus, our working material is based on 50 publications in 2015. These 50 publications have been classified in different Excel tables to distinguish the various research fields they come from, for each type of publication: journals, book chapters, books, theses and other types of publications (see Supplementary file 3). This analysis addressed questions including: In which types of publications (journals, book chapters, books, theses or other kinds of publications) is reimbursement of RM considered? (We hypothesise that books and theses are less accessible than journals for most stakeholders in the field.)
In which field of research (clinical, SHS, economics, public health, business, other) is reimbursement of RM considered? In which clinical areas is reimbursement of RM considered? Are there dominant journals? Is reimbursement of RM considered by few publishers or many? What is the UK position and authorship in these publications? Our analysis shows that publications related to RM reimbursement are mainly found in journals (74%; N = 37/50), especially in public health (100%; N = 4/4), clinical (95.7%; N = 22/23) and economics (75%; N = 3/4) fields, showing the predominance of the journal format in these disciplines. Nevertheless, a sizeable proportion appears in other formats and so may be less accessible. In addition, the sharing of publications between journals, book chapters, books, doctoral dissertations, and other types of publications is much more balanced in the SHS and business fields. The clinical and economics areas have a similar sharing of publication types regarding RM reimbursement: mainly journals (95.7% for the clinical area and 75% for the economics area) and few in book chapters (4.3% (N = 1/23) for the clinical area and 25% (N = 1/4) for the economics area). In the public health area, 100% (N = 4/4) of publications on RM reimbursement are in journals. On the other hand, the business discipline is an exception, with no publication in journals and more equally represented (together with the SHS area) across other formats of publications (Table 3), suggesting that this discipline may be a less visible area of the landscape. Reimbursement of RM is also considered by a wide range of different journals. Indeed, 83.7% (N = 31/37) appeared in separate journals across all fields, suggesting a very diffuse and emerging picture.
Nevertheless, the public health and economics fields appeared as significant exceptions, with 3/4 of public health articles published in the "Journal of Market Access and Policy" and 2/3 of economics journals' publications published in the "Value in Health" journal (Table 7). Thus, it appears RM reimbursement publications are mainly in clinical journals although reimbursement might be considered primarily an SHS or economics topic. This may be linked to the overall numeric domination of clinical publications compared to SHS or economics publications, or to the interest of medical practitioners and researchers in scenarios for clinical translation. While the question of RM reimbursement was often considered generally, where specific disease areas are targeted, it corresponded to those in which RM is closer to or already in the clinic: skin and respiratory, haematological, orthopaedic, neurologic and ophthalmologic diseases. Finally, two journals (Journal of Market Access and Policy and Value in Health) and two publishers (Elsevier and universities as a group) seem to be the most active on the topic of RM reimbursement.

3) Trade associations' positions on RM reimbursement issues

We now turn from the forms and disciplines with which reimbursement is being studied and debated to the actual content of the key debates and proposals. Thus, the third analysis is part of a study based on 43 semi-structured interviews of stakeholders from key institutions in the field of RM in the UK, conducted in 2015 and 2016: national bodies (5), service providers (3), consultancy companies/law academics (4), regulatory agencies/Institute (4), innovation networks (organisations that promote relevant innovation) (4), trade organisations (4), funders (2), health professional organisations (4), research charities [START_REF] Helgesson | Values and Valuations in Market Practice[END_REF], other institutions (4). (Details of the organisations are available in Supplementary file 4.)
Interviews lasted around 1 hour, generally at the offices of the interviewees, and covered both general questions on the interviewees' current perceptions of the RM field and of its prospects, and specific questions modified according to each interviewee/institution. Informed consent was obtained and interview transcripts were anonymised, coded and analysed using NVivo qualitative analysis software. The analysis of the whole set of interviews is the subject of another paper. In this manuscript we target trade associations, as they were the category of stakeholders most involved in and most critical of existing protocols on RM reimbursement. We supplemented the interviews with in-depth Internet searches, especially on the websites of the trade associations. We highlight key results and comments from four interviews in three associations. Although a small number, representatives of the associations held key senior positions and, of course, represent the views of many member firms and other organisations. To preserve anonymity, we identify the trade organisations/interviewees by a number. All the trade associations considered reimbursement to be a current issue: "So I think we have the academic excellence, some significant infrastructure, a growing community on the positive side. What we don't have is a route to reimbursement." (Trade organisation 2) One trade association highlighted the uncertainty of reimbursement decisions: unclear remits of relevant institutions in the context of multiple reimbursement pathways: "At the moment, we have some theoretical routes and we have a number of European companies that have gone bust trying to solve this problem." (Trade organisation 2) Trade associations specifically highlighted the contradictions or lack of collaboration between the National Institute for Health and Care Excellence (NICE) and National Health Service (NHS) England.
Specific issues identified included: different criteria used by NICE and NHS England, such as the gap between marketing authorisation and the reimbursement decision; a perceived need for NICE and NHS England integration in a 'holistic system'; tension between NICE and NHS England, although with recognition that the NICE Office for Market Access has been established to provide clarifications regarding procedures; alleged problems with NICE methodologies; NHS England's specialised commissioning (i.e. many innovations competing for access to the specialised commissioning budget, NHSE not being seen as good at commercial access models); difficulties relating to the cost and choices to be made, and the long-term nature and uncertainty of evidence; routes to adoption in the clinic; the differences between devolved nations; the difficulty of reaching a common product assessment at the European level; and the impact of (NICE) reimbursement decisions: "For things that go to NICE, whatever they are, that absolutely affects their fortune in the market and through the rest of their lifecycle. So the signal that NICE sends absolutely affects the commercial success of medicines and that will be true of regenerative medicines." (Trade Organisation 4) Moreover, three out of four trade organisations expressed views on reimbursement methodologies for RM. One trade association considered NHSE methodologies had already changed but that NICE methodologies should be revised for RMPs. Indeed, three trade organisations explicitly said there is a need for change in reimbursement methodologies. Regarding the possible establishment of a specific government-supported fund for RM, one trade organisation was clearly in favour of it, while two others were not opposed to it. A fund for innovative medicines was seen as more acceptable than a disease-type fund such as the Cancer Drugs Fund [START_REF] Littlejohns | Challenges for the new Cancer Drugs Fund[END_REF], but it should be a 'transition' fund.
They highlighted that the main advantage of a specific fund for RM would be to provide flexibility. Moreover, as a budget is necessary to make a system change, the fund would be a powerful and potent mechanism for market access, although it raised issues: "There's no doubt that if you want to get something done and you want to make a system change, you create a ring-fenced budget. (…) So in terms of market access it's (a specific fund is) a very powerful and potent mechanism, but it goes against the grain of travel of the system, which is not to have centralised budgets for things, but to have devolved responsibility and decision-making spread out across the system." (Trade organisation 4) Finally, trade associations have made proposals for the reimbursement of RM based products. First, collaborations should be enhanced between the marketing authorisation and reimbursement steps, that is, between regulatory agencies and reimbursement bodies. Indeed, "A partnership approach (between developers and gatekeepers) has the potential to reduce the time and cost of development, while improving clinical relevance of studies and their assessment of cost-effectiveness. Improving public health through appropriate uptake of medicines at lower cost and improved cost-effectiveness, without any reduction in development standards or scrutiny, is an important incentive to develop new methodology. Real time database utilisation, new analytical methods and adaptive approaches can underpin this." [START_REF]Reengineering Medicine Development: A stakeholder discussion document for cost-effective development of affordable innovative medicine[END_REF] Second, one trade association emphasised that changes should occur at NHS England: a perceived need for direct producer engagement with NHS England, a key aspect being 'good horizon scanning' at NHS England, whilst approving the new clinically focused routes for reimbursement (i.e. Clinical Reference Groups).
Third, one trade organisation highlighted a need for general systemic changes to achieve a healthcare service with a good way of both monitoring and assessing patient performance over time, and the need for industry to have a coherent business strategy. Fourth, two trade organisations underlined that a broader or specific view should be taken for RMPs, such as a need for different thinking around the benefits models, especially where there is a curative effect. Fifth, one trade organisation has envisaged 'adaptive pathways' [17] as a solution for RM based products, as they could take into account their specific issues, while another referred to risk-sharing or annual/staged payment models: "The key thing is there has to be recognition that the risk has to be shared between the company and the system. So it's no good if the system says, 'Okay, you'll have to supply the medicine for free through this period and we'll look at it again in two years.' There needs to be more flexibility to say, 'Okay, so maybe the medicine will be provided at a different in-market commercial price for this period,' but then, if the data supports it, the price should go up. (...) and it's only if we accept that that we're properly accepting that value should be linked to price." (Trade organisation 4) However, one trade organisation highlighted, as above, that these reimbursement models need connectivity between regulatory and HTA bodies, and between NICE and NHS England, early discussions with HTA bodies, and a structured framework for data collection. Finally, one trade organisation welcomed the Japanese model for faster access to market and conditional reimbursement [START_REF] Sipp | Conditional approval: Japan lowers the bar for regenerative medicine products[END_REF], although it recognised it needs to be tested. Thus, beyond recognising reimbursement issues for RMPs, trade organisations generally have views on what these issues are and how they could be solved.
Indeed, BIA and ABPI recognise these issues on their websites or position papers, especially from 2015 [START_REF]Delivering value to the UK-The contribution of the pharmaceutical industry to the patients, the NHS and the economy[END_REF][START_REF]Manifesto: our agenda for change[END_REF][START_REF] Abpi | Adapting the innovation landscape, UK Biopharma R&D sourcebook[END_REF][START_REF] Bia | Update, Influencing and shaping our sector[END_REF][START_REF] Bia | Update, Influencing and shaping our sector[END_REF][START_REF]Briefing Paper: Advanced Therapy Medicinal Products and Regenerative Medicine[END_REF], in collaboration together [START_REF] Abpi | Early Access Medicines Scheme[END_REF] and with other trade associations as well [START_REF] Casmi | Adaptive Licensing[END_REF][START_REF] Bivda | From vision to action: Delivery of the Strategy for UK Life Sciences[END_REF][START_REF] Bia | One nucleus, Mediwales. UK Life sciences manifesto[END_REF], following some earlier statements [START_REF]BIA policy and parliamentary impact in Quarters Two and Three[END_REF][START_REF] Mestre-Ferrandiz | The many faces of Innovation, A report for ABPI by the Office of Health Economics[END_REF]. ABPI was represented on this topic at the 2015 conference of the International Society For Pharmacoeconomics and Outcomes Research (ISPOR) [31]. The general views of the trade organisations could be summarised by this statement: "Ultimately a major consideration is whether payors -especially the NHS in the UK -can afford to use the medicine. Biological medicines, especially advanced therapies like cell and gene therapies, have particularly high development and manufacture costs. But they may also provide healthcare benefits that ultimately save the NHS money down the line. There is a need for policymakers to consider short, versus long-term, trade-offs and to propose models for realistic reimbursement plans." [START_REF] Bia | One nucleus, Mediwales. UK Life sciences manifesto[END_REF]

4) Discussion

In this section, we summarise and discuss the main results of the three studies. The valuation of regenerative medicine involves a politics of stakeholder institutions and emerging policy discourse, evident in the interview positions and publication profiles that we have presented. It could be considered that 2015 constituted a turning-point, in that reimbursement and adoption in the NHS became a key issue in new national reports [START_REF]General proposals to solve the RM reimbursement challenges 33 NICE. Exploring the assessment and appraisal of regenerative medicines and cell therapy products[END_REF][33]. Moreover, the first authorized Advanced Therapy Medicinal Product (ATMP), Chondrocelect (authorized in 2009), was turned down for reimbursement both in France and in the UK. In 2016, its marketing authorization holder, Tigenix NV, decided to withdraw its marketing authorization, as did Dendreon/Valeant for Provenge in 2015, for commercial reasons. Thus, the commercial viability of ATMP and RM products, as linked to the decisions of reimbursement by national bodies, became a key challenge. Indeed, "the reimbursement point is the keystone from which an allowable COGs [Cost of Goods] is determined by subtracting business costs." [START_REF] Mount | Cell-based therapy technology classifications and translational challenges[END_REF] These developments show the volatile environment in which the valuation and reimbursement of RMPs is being debated. We have shown that it has been a growing topic of publications. Our publications analysis and interviews suggest the UK's position in the emerging RM reimbursement landscape is similar. Clearly, trade organisations have been very involved in RMPs reimbursement debates, as one would expect. Indeed, the industry is the most critical of RM valuation issues. Industry generally will not develop medicines lacking likely wide reimbursement and thus with uncertain return on investment.
Trade organisations consider more flexibility is needed, notably regarding NICE methodologies for assessment. In the context of limited budgets for healthcare, we showed in our interview and internet study that several key trade organisations argue that new flexible routes for reimbursement are needed to ensure patient access to the latest medical advances, including RMPs. Beyond acceptance of risk-sharing schemes, while highlighting their limits, trade organisations emphasised the need for more collaboration between key stakeholders as a main solution to reimbursement issues. Some measures have been taken toward this objective, such as the promotion of early contact with regulators and HTA bodies, notably through the NICE Office for Market Access. The latter's objectives include defining acceptable evidence in a context of uncertainty with a curative treatment, and supporting navigation between the different gatekeepers. This is seen as particularly necessary given the challenge of an "increase in demand for 'real world' evidence by HTA, payers and regulators", i.e. their "growing interest in relative effectiveness" [START_REF] Abpi | Securing a future for innovative medicines: A discussion paper[END_REF].

5) Conclusion

We conclude that the policy and institutional landscape of reimbursement studies in RM is a highly variegated one and in its infancy. The two publication studies gave details on the amount of activity going on, potential gaps in the field, and signs of both general and niche trends. The volume of publications is growing, as researchers and analysts in a wide variety of disciplines and types of organisation start to grapple with reimbursement challenges. The interview study highlights trade associations as closely engaged in debating, at a high level, the possible reimbursement scenarios for RM, and in pointing to ways in which current technology assessment and healthcare infrastructures could be improved to favour RM enterprise.
The analysis that we have provided is particularly relevant to the stakeholders involved in policy making in RM, and to industry and academia. It offers a picture of the emerging landscape of RM reimbursement actors and issues that can inform the various stakeholders' participation in its future analysis, potential, and development.

Summary Points
• Reimbursement of RM based products has been a growing topic of publications between 2015 and 2016, independently from orphan drugs.
• Risk-sharing schemes are only one strategy for dealing with RM reimbursement, albeit a widely debated one.
• Reimbursement and risk-sharing are distinct issues for RM, although there is an overlap with the same issues for orphan drugs.
• The UK's position in the RM reimbursement publication landscape is in keeping with several reports on the global dynamics of RM.
• Trade organisations have been very involved in RM based products reimbursement issues.
• Trade organisations have detailed views on reimbursement issues for RM, especially the high cost versus the uncertainty regarding long-term evidence.
• Trade organisations have various proposals to solve RM reimbursement issues, emphasising a need for more collaboration between several key national-level actors.

Table 2: 2015 publications related to reimbursement of RMPs per country

                                    UK    France  Germany  Japan  South Korea  United States
Reimbursement anywhere, year 2015   183   188*    237*     239*   56*          416*

Table 3: Different types of publication formats by discipline (columns: Clinical, SHS, Economics, Public Health, Business, Total)

One of our first suppositions has been verified in that publications related to RM reimbursement are found in a range of different fields of research. However, the dominant field is not SHS (28.0%; N= 14/50) nor economics (8.0%; N= 4/50) but clinical (46.0%; N= 23/50); public health being more or less equivalent to economics (8.0%; N= 4/50) and business (10.0%; N= 5/50). Thus, the questions of RM reimbursement are being formulated mainly in clinical and SHS disciplinary publications (Table 4).

Table 4: Range of different publication subject areas/types

          Clinical  SHS    Economics  Public Health  Business  Total
          23        14     4          4              5         N= 50
          46.0%     28.0%  8.0%       8.0%           10.0%     100%

Of the clinical articles, most were in generalist medical journals (45.5%; N= 10/22). When specific disease areas were the focus, skin and respiratory diseases had received most attention (13.6% each; N= 3/22), followed by haematological and orthopaedic (9.1% each; N= 2/22), and finally neurologic and ophthalmologic diseases (4.5% each; N= 1/22). It is notable that publications had not targeted important clinical fields such as cardiovascular, gastroenterological and cancers other than blood diseases (Table 5).

Table 5: Disease areas in clinical publications

                 Number of articles  Number of books' chapters  Total
                 N= 22               N= 1                       N= 23
General          10 (45.5%)          0 (0%)                     10 (43.5%)
Haematological   2                   1                          3
Skin             3                   0                          3
Respiratory      3                   0                          3
Orthopaedic      2                   0                          2
Neurologic       1                   1                          2
Ophthalmologic   1                   0                          1

(Among the book chapters, the Gaucher disease has been considered both as an haematologic disease (Type 2) and as a neurologic disease (Types 2 and 3).)

Furthermore, while "United Kingdom" was one of our selection criteria, first authors are from the UK in 40% (N= 20/50) of all the publications. The clinical and SHS areas are the main fields with UK first author's affiliation when we consider all types of publications (clinical area (20%; N= 10/50) and SHS area (12%; N= 6/50)), or journals only (clinical (66.7%; N= 10/15) and SHS (20%; N= 3/15) journals) (Table 6).
Table 6: UK first author affiliation

                                      Clinical  SHS     Economics  Public Health  Business  Total
UK first in journals                  10/15     3/15    1/15       1/15           0/15      15/50
                                      66.7%     20%     6.7%       6.7%           0%        30%
UK first in book chapters             0/2       1/2     0/2        N/A            1/2       2/50
                                      0%        50%     0%                        50%       4%
UK first in books                     N/A       0/1     N/A        N/A            1/1       1/50
                                                0%                                100%      2%
UK first in thesis                    N/A       2/2     N/A        N/A            0         2/50
                                                100%                              0%        4%
UK first in other publication         N/A       N/A     N/A        N/A            0/0       0/50
                                                                                  0%        0%
UK first author in all publications   10/50     6/50    1/50       1/50           2/50      20/50
(N=50)                                20%       12%     2%         2%             4%        40%

Table 7: Publications in range of different journals

                                     Clinical  SHS    Economics  Public Health  Business  Total
Publications in different journals   19/22     8/8    2/3        2/4            0/0       31/37
                                     86.3%     100%   66.7%      50.0%          0%        83.7%

A wide spread of publishers was also evident. Four publishers shared 52% (N=26/50) of RM reimbursement publications, and Elsevier and universities covered the widest range of different fields (4): clinical, SHS, economics and public health for Elsevier, and clinical, SHS, economics and business for universities, suggesting a mix of commercial and non-profit commitment to the field. Most other publishers were specific to one or two fields.

RM reimbursement has been a growing topic of focused publications between 2015 and 2016, the vast majority of which appear in very disparate avenues or 'spaces' geared to various disciplinary audiences and interested parties. Nevertheless, clinical and especially generalist medical journals were shown to be dominating, and at least two specialist journals have appeared recently, which are likely to see more RMP reimbursement contributions. Many of the reimbursement challenges are not specific to RMPs, because other fields such as orphan drugs can also have high up-front costs [START_REF] Gardner | Are there specific translational challenges in regenerative medicine? Lessons from other fields[END_REF][START_REF] Nice | Exploring the assessment and appraisal of regenerative medicines and cell therapy products[END_REF].
However, we maintain that some kinds of RMPs raise specific challenges, such as gene therapies when they are curative [START_REF] Carr | Gene therapies: the challenge of super-high-cost treatments and how to pay for them[END_REF]. We have shown that risk-sharing specifically is far less discussed than reimbursement generally. This result accords with risk-sharing schemes being just one strategy for dealing with RM reimbursement, albeit a widely debated one. Indeed, these schemes can be considered as one way of addressing the uncertainties regarding the alternative approaches to the valuation of RM between different actors, especially the NHS and the producer/manufacturer. Reimbursement and risk-sharing are distinct issues for RM, although there is an overlap with the same issues for orphan drugs. RMPs can be medicinal products, especially ATMPs. For instance, Holoclar, the first stem cell-based medicinal product approved for use in the EU, is both an ATMP and an orphan medicinal product, and as such benefits from the incentives of both regulatory frameworks [START_REF] Gerke | EU Marketing Authorisation of Orphan Medicinal Products and Its Impact on Related Research[END_REF]. More globally, among the eight ATMPs authorised on the EU market to date, four are orphan drugs. As those cases show, orphan drugs and RM based products often share the two main features of high cost and uncertainties around evidence and value [START_REF] Bubela | Bringing regenerative medicines to the clinic: the future for regulation and reimbursement[END_REF]40]. However, these same uncertainties are also seen in the weak long-term evidence for RMPs that are not orphan drugs. 
Even though there was an increase in using risk-sharing schemes in Europe generally [START_REF] Adamski | Risk sharing arrangements for pharmaceuticals: Potential considerations and recommendations for European payers[END_REF] and they have been considered suitable for orphan drugs [START_REF] Campillo-Artero | Risk sharing agreements: with orphan drugs?[END_REF], there should be further exploration of whether such schemes might be more applicable to orphan drugs than RM products, as our findings imply. Such considerations are important to the political design of the markets and health system adoption of different subsectors of RM and related enterprise. We showed in the first study the UK's position in the RM reimbursement publication landscape. Those results are in keeping with several reports in the field of RM evidencing different countries' positions addressing RM challenges broadly: "The UK needs to be ambitious and act quickly to get ahead. The USA, Canada and Japan are particularly active in this space and, although the UK is preeminent in Europe; Germany, Italy, France and Spain, in particular are rapidly reviewing how they can also capture these investments." [START_REF]Advanced Therapies Manufacturing Task Force Action Plan. Retaining and attracting advanced therapies manufacture in the UK[END_REF] Regarding the distribution of RM tissue engineering firms and research institutes: "When we look at the geographic distribution of tissue engineering firms and research institutes, the U.S. with 52% leads the market followed by Germany (21%), Japan (16%), the UK (7%), and Sweden (4%)." [START_REF]Stem Cell Regenerative Medicine Market : Global Demand Analysis & Opportunity Outlook 2021[END_REF] This pattern has been established for some time across the biopharmaceutical sector said to reflect 'longstanding problems: limited venture capital finance, a fragmented patent system, and relatively weak relations between academia and industry.' 
[START_REF] Hogarth | Regenerative medicine in Europe: global competition and innovation governance[END_REF].
halid: 01756881
lang: en
domain: [ "chim.othe" ]
timestamp: 2024/03/05 22:32:10
year: 2016
url: https://theses.hal.science/tel-01756881/file/Meshkov_Ivan_2016_ED222.pdf
Prof. Gilles Lemercier, Prof. Mir Wais Hosseini, Prof. Sylvie Ferlay, Dr. Abdelaziz Jouaiti, Dr. Stéphane Baudron, Dr. Aurélie Guenet, Dr. Alexander Martynov, Dr. Kirill Birin, Dr. Yulia Enakieva, Dr. Anna Sineltchikova

1 Introduction

General introduction

The world around us includes a multitude of objects. They can be of different shapes, colors, sizes etc., but they are all in permanent motion relative to each other. Giant planetary systems rotate around galactic centers. Simultaneously, galaxies move with enormous speeds relative to supermassive black holes, quasars and other galaxies. A planetary system consists of complex movements of planets, satellites, asteroids… which themselves include moving parts. This series can be continued down to micro-objects, such as atoms and subatomic particles. Even at 0 K, the movement of microparticles does not stop, due to quantum effects. Thus motion pierces all the matter around us, and it never stops. Since ancient times, humanity has been concerned with the control of movement.
This process started with the construction of simple mechanical devices like the wheel, before becoming more and more sophisticated. The evolution of movement control proceeded in two directions, based on increasing and on decreasing the size of moving parts. Today, for example, ships larger than medieval cities, or cargo airplanes capable of transporting hundreds of tons, have been constructed. In more recent years, the opposite trend, i.e. constructing ever smaller devices, has been under active investigation. Where are the limits of device miniaturization? Is it possible to construct operational devices based on the control of single molecules or atoms? These questions are topics of current research. In 1959, Richard Feynman, an American physicist, gave a famous lecture, "There is plenty of room at the bottom". 1 This lecture postulated the principle of manipulating individual atoms. At that time, this proposal was visionary. Years later, in 1981, Gerd Karl Binnig and Heinrich Rohrer from the Swiss department of IBM invented the scanning tunneling microscope (STM) with atomic resolution. This instrument not only allowed samples to be analyzed with nanometric resolution but also single atoms to be moved. In 1989, Don Eigler made an IBM logo with 35 Xe atoms on a surface. 2,3 In 2012, again, IBM researchers showed the cartoon "A Boy and His Atom", based on the motion animation of 65 molecules of carbon monoxide on a copper substrate. 4 At the same time, biologists interested in movements in biological systems initiated research on biological machines within living organisms. Single molecules or clusters of molecules play a major role in the dynamics of movement in complex biological systems. This domain, although well investigated today, still remains in its infancy and requires further studies. Our understanding of complex biological systems at different length scales has increased dramatically as our experimental ability to observe nature has expanded from the macro to the molecular scale.
Today, we see the emergence of biomimetic approaches, i.e. the bio-inspired design of devices and systems for solving technological problems in medicine and engineering.

Kinesin, a walking protein

One of the most striking examples of biological motors is kinesin. The latter is a protein-based motor present in eukaryotic cells. This protein moves along cytoskeleton microtubules. Kinesin plays very important roles in cellular functions, in particular as a cellular cargo carrier responsible for the transport of matter inside cells. The energy source required for kinesin motion is ATP (adenosine triphosphate). The structure of the kinesin family is variable, with common features. In general, kinesin-1 is a heterotetramer (Fig. 1). It contains two motor subunits (called "heavy chains", green in the figure) and two "light chains" (blue in the figure). The heavy chains of the protein form two mobile heads. Each head is connected through its amino terminus to a flexible linker, the "stalk" (grey part). The stalk includes a carboxy tail that binds to the light chains of kinesin. The stalk forms a coiled-coil domain. Usually the cargo section is connected to the light chains (blue). The protein head possesses two binding sites: one for the microtubule, the other for ATP. ATP binding and hydrolysis, with simultaneous release of ADP (adenosine diphosphate), lead to changes in the conformation of the microtubule-binding part and then to movement of the protein along the microtubule (Fig. 2). The direction of movement is controlled: microtubules are polar in nature, kinesin heads are connected to microtubules with an imposed orientation, and ATP controls the direction of each step. The two heads function as two legs. Two mechanisms have been suggested for the movement. The "hand-over-hand" mechanism assumes that the leading head changes at each step. The other mechanism, "inchworm", postulates that one of the two heads leads and the second one follows.
However, it has been established that the first mechanism prevails. 5 Some examples of artificial systems with translational motion are described below.

Translatory motion

Rotaxanes are one of the well-known peculiar molecular architectures. The first prototype was described in 1967. 6 A typical rotaxane contains two parts: a dumbbell molecule and a macrocycle as the ring (Fig. 3). Two bulky groups (stoppers) are located at both ends of the dumbbell to prevent dissociation of the macrocycle. The first attempts were based on the reaction of the two halves of the dumbbell with the macrocycle. However, more effective synthetic strategies were elaborated subsequently. 7 For [2]rotaxanes, the dumbbell contains two interaction sites. Thus the macrocycle may be positioned at two possible stations. The interaction between the two parts may be ensured either by coordination bonding 8 or by hydrogen bonding. 9

Fig. 3 Translatory motion in a [2]rotaxane.

In such a system, the ring may be regarded as a shuttle travelling between two terminal stations. In the context of binary logic, this may be seen as "0" and "1" states. This concept was tested as a molecular electronic memory in 2007. 10 The rotaxane design principle was extended to three-component architectures. Examples of so-called "molecular elevators" were described (Fig. 4). 11,12 The system was based on two "floors" based on amine and viologen moieties. The elevator was shown to move between the two floors depending on basicity/acidity. Rotaxanes are close to another class of molecular motors, namely catenanes, which undergo rotatory motion (Fig. 6). Switching between these two classes of molecular motors has been described. 13 Indeed, using anthracene-based stoppers connected to the dumbbell, irradiation with UV light triggers their reaction, leading to a catenane (Fig. 5). Interestingly, irradiation at another wavelength, or heating, breaks the bonds between the anthracenes, leading back to the initial rotaxane.
This process is reversible and can be repeated several times.

Rotary motion

The catenanes mentioned above are described more precisely hereafter. In general, a catenane contains two similar interlocked parts. For [2]catenanes, the two parts are macrocyclic rings (Fig. 6).

Fig. 6 Rotary motion in a [2]catenane.

The principles used to control the motion are similar to those described for rotaxanes. For example, the process may be based on hydrogen bonding between the two rings (Fig. 7). 14 The authors described a [2]catenane for which the movement can be induced by changing the polarity of the solvent. Thus, in a halogenated solvent such as chloroform, the two rings of the catenane interact with each other by H-bonds. In DMSO, a polar solvent breaking H-bonds, a conformational change occurs. Indeed, the amide moieties of the two rings do not form H-bonds with each other, leading to the location of the hydrophobic alkyl chains in the center of the architecture.

Fig. 7 The [2]catenane motion based on hydrogen bonding.

A similar design based on coordination bonding was also described (Fig. 8). [15][16][17][18] The interaction of phenanthroline coordinating sites with Cu(I) leads to the formation of bonds between the two rings. Removal of the metal cation induces a conformational change leading back to the initial state of the catenane.

Fig. 8 The [2]catenane motion based on copper(I)-ligand interactions.

The catenane rings may possess different sizes. Using a larger ring, more than one binding site may be introduced (Fig. 9). This type of system undergoes two dynamic processes: shuttle motion owing to its rotaxane-like architecture, and rotation within the catenane part. Switching between three different stations allows a controlled directional motion to be achieved. 19 The directionality of the movement is based on differences in the binding affinities of stations A, B and C, leading to selective coordination which induces unidirectional rotation of the smaller ring.
The two fumaramide stations A and B display different coordinative properties due to the methylation of B. Furthermore, A, connected to the benzophenone unit, undergoes photoisomerization at 350 nm, whereas station B requires 254 nm. Station C is photo-inactive and its coordination propensity lies between those of A and B. These features are responsible for the stepwise motion of the small ring in the sequential manner A → B → C.

Fig. 9 Unidirectional motion in a [2]catenane.

Several other molecular rotors have also been reported. For example, metallocene-based molecular rotors (Fig. 10) were described. 20 Both species (triptycene and metallocene) cannot rotate independently, due to the bulkiness of the two mobile parts, which interact with each other like a gear. No precise mechanism for this mobile system was described. Subsequently, the triptycene fragment was equipped with a recognition site (Fig. 11). 21 The system, called "molecular brakes", undergoes free rotation of the two parts. For steric reasons, a conformational change is induced by the binding of a Hg(II) cation, which locks the rotational movement. This system was further modified (Fig. 12) through the incorporation of coordinating sites on both fragments. 22,23 Since the energetic and steric barriers are rather substantial, only an oscillation of the system is expected. Reaction with a phosgene molecule leads to the formation of an isocyanate moiety. Owing to the close proximity of the hydroxy and isocyano groups, they react together. The system may be switched to the next position by thermal events. After hydrolysis of the amide group, the final conformation of the machine is shifted by 120° compared to the initial state. More recent examples describe a "molecular robotic arm" (Fig. 13). 24 The system is based on two interconnected units. The rotation of the two parts is induced by a double protonation process. The first protonation leads to disconnection of the pyridine moiety and isomerization of the system around the double bond.
The second protonation leads to the disconnection of the quinoline unit and rotation around the N-quinoline bond. Deprotonation affords a metastable isomer, which slowly transforms into the initial state. Thus the pyridine fragment (red) may be considered as the arm that moves the cargo from one side of the machine to the other. Fig. 13 A hydrazone-based molecular switch. Porphyrin-based compounds have also been widely applied as molecular machines. For example, Ce(IV) double-decker porphyrin complexes undergo a rotary motion which can be interrupted by binding with bidentate external molecules (Fig. 14). 25,26 In place of dicarboxylic acids, several other stoppers were reported. [27][28][29][30] Fig. 14 Double-decker porphyrin based molecular rotor. A very inspiring porphyrin-based caterpillar was synthesized by Harry Anderson's group (Fig. 15). 31 For this rather elaborate system, the motions of several movable parts were demonstrated by NMR exchange spectroscopy (EXSY). These experiments show that these complexes exhibit correlated motion, for which the conrotatory rotation of the two template wheels is coupled to the rotation of the nanoring track. In the case of the 10-porphyrin system, the correlated motion can be locked by binding palladium(II) dichloride between the two templates.

Molecular devices on surfaces

Surface-confined molecular motion is a topic of interest. Several examples of molecular rotors deposited on different substrates have been described. Metallocene-based fluorinated and non-fluorinated rotors equipped with thioether moieties as "legs" were reported. 32 The rotors (Fig. 16) were adsorbed on a gold surface. According to NMR data, almost no barrier for the rotation process is detected for the polar fluorinated rotor in solution. The surface-deposited samples were investigated by X-ray photoelectron microscopy, STM and grazing-incidence IR microscopy.
Only for the polar fluorinated rotor does the electric field of the STM tip induce rotation of the system on the surface. Fig. 16 Nonpolar and polar surface-deposited molecular motors. Another molecular rotor was reported by Jian and Tour (Fig. 17). 33 Four caltrop-shaped molecules that might be useful as surface-bound, electric-field-driven molecular motors have been synthesized. The caltrops are based on a pair of electron donor-acceptor arms and a tripod base. A monolayer of such a compound deposited on a gold substrate was generated. The movement of the zwitterionic molecular arms was induced by electric fields applied around the caltrops. Another organometallic molecular rotor designed to be deposited on a surface was synthesized. 34,35 Due to the five ferrocene groups at the ends of the propeller blades, the system is expected to be electroactive (Fig. 18). The stator (red) was placed on a platinum grid, itself located between an anode and a cathode. The proposed mechanism implies selective oxidation of one of the ferrocene units and repulsion of the resulting cation by the anode. Upon rotation, the oxidized ferrocene unit is replaced by the second one, which is oxidized in its turn. However, it was not clearly demonstrated that the conductivity observed was due to the rotation of the molecule, since direct tunneling from cathode to anode could also take place. Fig. 18 Ferrocene molecular rotor and proposed mechanism for its rotation. Other similar systems have also been investigated (Fig. 19). 36,37 Direct tunneling electron transfer from the STM tip to the ferrocene (position 2) and to the empty arms (position 1) of the rotor induced anticlockwise and clockwise rotation, respectively. This behaviour was explained by modeling of the different excited electronic states.
It was determined that tunneling through different arms involves different electronic states with opposite potential landscapes, which lead to different directions of rotation. Fig. 19 Direct STM control of the directional motion of an unsymmetrical ferrocene-based rotor. A very important step in the development of molecular machines is the shift to materials (from solution to the solid state). Unidirectional rotating devices (Fig. 20) and liquid-crystal films were reported by Feringa et al. [38][39][40] Fig. 20 Operation of a unidirectional molecular rotor. The rotor in its trans-1 conformation transforms into the cis-1 conformation under irradiation (λ > 280 nm). This form is not stable at temperatures above -55 °C because the methyl substituents are in unfavorable equatorial positions. This isomer relaxes to the stable cis-2 form by helix inversion around the double bond. The anticlockwise rotation then continues: irradiation leads to the formation of the trans-2 isomer, in which the methyl groups are again in energetically unfavorable positions. Heating to ~60 °C completes the 360° rotation of the upper part of the molecule upon reaching the initial trans-1 isomer. Since each turn of the system induces a change in helicity, all conformations can be monitored by CD spectroscopy. This unique molecular rotor was incorporated in liquid-crystal films. 40 Three isomers of the rotor can be isolated at room temperature. A liquid-crystalline film doped with the trans-1 conformer gave a right-handed cholesteric violet phase (Fig. 21). UV irradiation at room temperature switches the trans-1 conformer to the cis-1 isomer. Further irradiation leads to the trans-2 isomer with a left-handed helical twist. This process is observed directly through the film's color change from violet to red. Heating of the film restores the initial violet color owing to relaxation to the trans-1 isomer. Fig. 21 Color change of a liquid-crystalline film containing the unidirectional molecular rotor.
There are many other examples of molecular machines controlled by different external and internal factors. Among them, considerable interest has been focused over the last years on molecular "turnstiles". In particular, during the last decade, several studies have been reported on molecular turnstiles based on porphyrin backbones. The main topic of this PhD thesis was to develop approaches towards new turnstiles based on phosphorus(V) porphyrin derivatives. The state of the art and detailed investigations of this topic are described in the following chapters.

Conclusion

Although abiotic molecular dynamic systems are still much less sophisticated and complex than biological motors, combinations of molecular design, synthetic procedures and theoretical calculations should make it possible to increase the sophistication of synthetic molecular devices and to extend their use to a variety of applications such as molecular computing, catalysis, sensors, transport of matter, etc.

Chapter I. Molecular Turnstiles Based on Phosphorus (V) Porphyrins

1. Introduction

1.1 The general idea of multiple-station molecular turnstiles

During the last decade, our group has focused on the control of intramolecular movements. We have mainly studied a special class of dynamic molecular devices called "molecular turnstiles". 1-16 These mobile systems may be described as molecules composed of two interconnected parts rotating with respect to each other. A schematic representation is given in Fig. 1.1. The grey part, the so-called "handle", contains one recognition site (mobile station) (dark grey box in Fig. 1.1). The central purple part acts as the "stator". It contains a variable number of static stations (green parts in Fig. 1.1). Both types of stations may take part in interactions with effectors or external stimuli. The connection between the handle and the stator is achieved through a "hinge" (red sphere in the center of the stator).
The handle either rotates freely around the stator (open state, Fig. 1.2) or may be blocked (closed state) by a simultaneous interaction between one of the static stations and the dynamic station through an effector, using molecular recognition events. The recognition processes between the two parts and the effector may be based on different types of intermolecular interactions such as electrostatic interactions, coordination bonds or H-bonds. By introducing at least three different stations at the periphery of the stator, one may control the direction of rotation through appropriate choices of interaction sites and their relative localisations on the stator. The design proposed above may be based on the different propensities of the different stations to bind a metal and/or a proton. For example, if dealing with a handle bearing a pyridine moiety, one may introduce sequentially 2,6-lutidine, pyridine and benzonitrile units on the stator. In the presence of a metal cation as the effector, the first locked state may be generated through its binding to the lutidine, which possesses the highest affinity for the metallic centre. Then a decrease of pH should lead to the gradual protonation of the different stations, from the most basic (lutidine) to the least basic one (benzonitrile), and thus to the movement of the handle (Fig. 1.3). Obviously, one must take into account the stability of the "hinge" at different pH, in particular in acidic media. During this work, our aim was to synthesize a four-station molecular turnstile based on P(V) meso-substituted porphyrins. The handle would be connected to the porphyrin stator through the axial positions of the P(V) atom located in the cavity of the porphyrin and acting as the "hinge". In order to study the stability of the system under acidic conditions, we first studied an analogue, a porphyrin bearing only one station at the meso position.
NMR techniques are perfectly suited for studying the behaviour of turnstiles in solution, namely the open and closed states. Indeed, during the movement process, we should observe a change in symmetry together with shifts of the resonances of specific protons of the handle. In addition, 2D NMR techniques should reveal specific correlations between protons of the handle and of the porphyrin due to their close proximity in the closed state. The molecular turnstiles previously obtained in the laboratory are described in the following parts, divided into two categories depending on the nature of the stator.

Non-porphyrin turnstiles

A series of turnstiles with two coordinating sites was synthesized and described a few years ago. 10,11,13-16 The design principles are represented in Fig. 1.4. In the absence of any external stimulus, the handle rotates freely around the stator. In the presence of the metal ion, the rotation is blocked and the turnstile is in its closed state. Two alternative handles combined with two types of stator were elaborated.

Organometallic Pt(II) turnstiles

Systems with a pyridyl unit as the coordinating site within the handle were synthesized (Fig. 1.5). 10,11,16 Two of them (A and B) are symmetrical whereas the turnstile C is unsymmetrical. The turnstile B, even in the absence of any external effector, is in its locked state owing to the formation of a weak hydrogen bond between the phenol and the pyridyl moieties. In this case, the turnstile oscillates between two closed states owing to the presence of two phenol units. All attempts to distinguish between the two different phenols at low temperature were unsuccessful. However, ROESY NMR demonstrated the formation of a closed state even at room temperature. 16 Both states were identified at room temperature by NMR techniques, and switching between the different states was achieved. However, since the system bears only two interaction sites, it is impossible to impose the direction of the rotation. Fig.
1.8 Organometallic turnstiles equipped with a pyridine-amide type handle. In addition to the above-mentioned turnstiles, another series based on a different handle was reported. 10,15 Instead of a pyridyl as the coordinating site, an amide moiety is used (Fig. The turnstile E, for which the pyridyl coordinating sites were replaced by benzonitrile groups, was also reported. 15 The benzonitrile unit possesses a weaker binding propensity than the pyridyl group. As expected, this indeed facilitated the opening process. Again, the turnstile was successfully closed using Pd(OAc)2 (Fig. 1.10). The turnstile F, with two phenols attached to the stator, was also investigated. In solution, the hydroxy group of the phenol is in fast exchange with water (always present in solvents). Thus, no closed state was observed. Attempts to deprotonate the phenol moiety led to partial decomposition of the turnstile. 10

Organic turnstiles

A series of turnstiles mainly organic in nature was also synthesized (Fig. 1.12). 10,13,14 An amide-based handle was used. All turnstiles G-K have been closed using Pd(II) as the locking agent, as described above for turnstiles D and E. 10 Again, both closed and open states were investigated by multidimensional NMR techniques, which revealed the reversibility of the opening and closing processes for turnstiles I and J. 13,14 The major advantage of the non-porphyrin based turnstiles described above is their simplicity and the high yields of their syntheses. However, by principle, only a maximum of two interaction sites may be introduced in the turnstiles mentioned above. This implies that with such design principles, the directionality of the movement, which requires at least three different sites, cannot be controlled. Fig. 1.12 Organic turnstiles G-K.
Porphyrin-based turnstiles

In the laboratory, several studies dealing with molecular turnstiles based on porphyrins have been undertaken over the last decade. As stated above, the Pt(II) as well as the organic systems reported may be equipped with only two stations. In contrast, for porphyrin-based turnstiles, one may introduce up to four stations using the meso positions. The porphyrin-based turnstiles synthesized in the group may be separated into two groups: strapped-porphyrin and Sn(IV) porphyrin based systems.

Strapped-porphyrin based turnstiles

The schematic principle of the turnstile is represented in Fig. 1.13. The purple part represents the porphyrin backbone; the handle (in grey) is connected to the porphyrin via the meta positions of two trans meso-substituents. Several related systems were investigated (Fig. 1.14). 4,9,12 Fig. 1.13 Schematic representation of strapped-porphyrin turnstiles. Depending on the metal cation located within the cavity of the porphyrin, the turnstiles may contain up to four stations. Indeed, the pyridyl moiety of the handle may interact with two meso substituents (Ph-X and Ph-Y) for M = 2H or Pd(II), while the presence of ligands in the apical positions of Zn(II) and Sn(IV) leads to one or two additional stations, respectively. Due to the structure of the strapped-porphyrin based turnstiles (connection of the handle to the stator at the meso positions), full rotation may be expected, depending on the length of the spacers used to connect the handle to the porphyrin. Hence, one of the two opposite axial sites may be involved in the interactions with the free or metallated handle.

Sn-based turnstiles

The most interesting results have been obtained with Sn(IV) porphyrin based turnstiles. In the latter case, the handle is connected to the stator via the two axial positions of the octahedron around the Sn(IV) cation. The first Sn(IV) based turnstile synthesized in our laboratory (Fig.
1.15) [1][2][3] was composed of a meso-monopyridylporphyrin as the stator (AP). The turnstile was found to be stable under the conditions used. Concerning the molecular turnstile BP, the addition of Pd(II) leads to the formation of the closed state, while the reopening could be performed by adding an excess of CN- anion or DMAP. 6 The next step was to extend the principle to turnstiles offering two stations. This was achieved using trans-A2B2 Sn(IV) porphyrins as stators (Fig. 1.18). 4,5,7,8 Four new systems have been obtained. They differ by the presence of amide linkages in the handle (DP-FP) when compared to CP and by the nature of the stations at the meso positions: pyridyl, methoxyphenyl and benzonitrile. Similarly to AP, the turnstile CP is locked in the presence of the Ag+ cation. However, at room temperature an oscillation between the two equivalent meso-pyridyl units was observed. 8 The "amide" turnstile DP was also investigated. 4 Its closed state was generated in the presence of Pd(II) (Fig. 1.19). For the turnstile GP, the aim was to achieve the movement of the handle from the meso-pyridyl to the meso-benzonitrile (HP) station in the presence of an external stimulus. Unfortunately, the addition of the Pt(II) complex L does not lead to the cleavage of the Pd-pyridyl coordination bond and the turnstile remained locked at the pyridyl station. Overall, in all systems studied, it was not possible to observe the movement of the handle from one station to the adjacent one. Moreover, although these Sn(IV) molecular turnstiles showed high efficiency, the Sn-O bond was found to be unstable under acidic conditions, even in the presence of weak acids, thus preventing the use of acid/base tuning of the movement. In order to increase the stability of the system under acidic conditions, it appeared compulsory to replace the Sn(IV) center by other metals or metalloids to generate less reactive bonds for the connection of the handle to the stator.
To achieve that, phosphorus appeared to be the candidate of choice. The main topic of this PhD thesis is to synthesize new turnstiles based on phosphorus(V) porphyrins. The development of synthetic approaches for the preparation of such turnstiles as well as their properties are described in the following chapters.

Model turnstile based on P(V) tetraphenylporphyrin

As stated above, the Sn(IV) porphyrin based turnstiles were shown to be effective dynamic systems. However, their potential could not be fully exploited owing to the reactivity of the Sn-O bond under acidic conditions. Due to the ability of the phosphorus cation to form strong P-O bonds, as well as the coordination number of 6 for P(V), phosphorus porphyrin based turnstiles appeared to be promising candidates to replace the Sn(IV) based derivatives. We planned to use the same synthetic strategy as for the Sn(IV) based turnstiles. This multistep strategy involves several key synthetic issues:
- synthesis of the free-base porphyrins
- insertion of P(V) within the porphyrins
- synthesis of the handles
- connection of the handle to the P(V) porphyrin
- investigation of the turnstile's dynamic properties.
The previously described approaches toward the synthesis of phosphorus porphyrins can be divided into two main groups: β-substituted porphyrins and meso-arylporphyrins. Phosphorus β-porphyrins (octaethylporphyrin, OEP) were first described in 1977 by M. Gouterman. 17 The reaction was achieved with PCl3 in boiling pyridine. This method was used afterwards with some modifications. [18][19][20][21] PBr3 was also used for the metallation of mesoporphyrin diethyl ester. 18 Unfortunately, these techniques are not appropriate for the meso-arylporphyrins targeted in our project. The metallation of meso-tetraphenylporphyrin (H2TPP) with phosphorus(V) was first described by Carrano and Tsutsui, also in 1977. 22 The method consists in the reaction of the porphyrin with POCl3 in boiling pyridine. This method is the most used one to date.
[23][24][25][26][27] For the series of meso-alkylporphyrins, a mixture of PCl3 and lutidine was found to be successful. 28 It is interesting to note that the insertion of phosphorus into meso-pyridylporphyrins had not been described previously.

Synthesis of phosphorus tetraphenylporphyrins

Insertion of phosphorus (V) into the porphyrin cavity

The insertion of a metal cation into a porphyrin cavity usually proceeds under mild conditions and in high yields. 29 However, in the case of phosphorus(V), the insertion takes place only under harsh conditions. At room temperature, the reaction does not proceed. To complete the metallation reaction, it is compulsory to carry out the reaction for almost 24 h in refluxing pyridine in the presence of POCl3 as the phosphorus source. The pyridine must be dried by distillation, since residual water promotes the formation of hydroxy complexes as impurities. To avoid this, dry pyridine or PCl5 should be used. Following this procedure, the dichloro complex [P(TPP)Cl2]+Cl- is obtained in rather high yield (up to 82%) (Fig. 1.21). The crude product is purified by column chromatography on SiO2 and the desired product is eluted with a CH2Cl2-MeOH (5%) mixture, along with partial substitution of Cl- by MeO- and a small amount of side products. Further purification by gel-permeation chromatography (GPC), using Bio-Beads polymeric gel and chloroform as the eluent, allows separation of the products, which differ in the nature of their axial ligands, based on their size. The desired product is collected as the second fraction. The complex has been characterized by 1H- and 31P-NMR spectroscopy. The axial chloride ligands are labile and the complex slowly hydrolyses in air due to the presence of moisture. As already noted, exchange of the Cl- anion is also observed in the presence of MeOH. This method is the only one described for meso-arylporphyrins and it works rather well for H2TPP.
However, as will be described below, this procedure proceeds with rather low yields for porphyrins bearing pyridyl groups at the meso positions. The insertion of phosphorus into phthalocyanines has been described in the literature 30 and consists in the reaction of free-base phthalocyanines with POBr3 as the phosphorus source. We applied this methodology to our case and found that the use of POBr3 instead of POCl3 or PCl5 presents several advantages (Fig. 1.23). The reaction was monitored by UV-Vis spectroscopy by observing the disappearance of the four Q-bands of TPP and the rise of the new bands of the P(V) porphyrin. It was found that full metallation occurs in 80 min, in contrast to the 24 h required for the previous method, and the yield reaches 95%. In addition, this approach requires a smaller amount of reagents. It must be pointed out that the reaction with POBr3 is very tricky. If the porphyrin is dissolved in pyridine under argon and solid POBr3 is added directly, the reaction does not take place, even under reflux. POBr3 should instead be dissolved in pyridine and added dropwise to the free-base porphyrin in pyridine at room temperature. After refluxing, the reaction mixture cannot be evaporated: if pyridine is evaporated using a rotary evaporator, the concentration of POBr3 increases, which leads to partial decomposition of the P(V) porphyrin. Evaporating to dryness, in particular, decreases the yield to ~15%. Moreover, the purification of the crude product should be performed carefully and, as already mentioned, the evaporation of pyridine should be avoided. We found that the best way to proceed is to dilute the mixture directly with DCM (or chloroform) and mix it with water. This mixture is then stirred for approximately 2 days to complete the hydrolysis of the dibromo complex [P(TPP)Br2]+ to the hydroxy analogue [P(TPP)(OH)2]+. Further extraction and washing with water removes most of the pyridine.
However, in addition to the desired product, some phosphorus-containing compounds as well as pyridine remain in the organic layer and cannot be removed by further washing. Even at that stage, evaporation of the solvent leads to a dramatic decrease of the yield due to decomposition of the complex. We were unable to recrystallize the complex from hexane or other non-polar solvents; the purple solid complex instantly turns brown-green in air. This is the reason why the organic layer is mixed with petroleum ether (or hexane) and the 1:1 mixture is loaded directly on a SiO2 column and eluted with a 1:1 DCM-petroleum ether mixture. The free-base porphyrin is eluted with DCM while the highly polar targeted P(V) complex remains at the top of the column. Gradual addition of methanol to the eluent removes the brown fractions containing impurities. Up to 10% of methanol is required to elute the desired phosphorus complex. An additional purification by gel-permeation chromatography (GPC) removes small residual impurities and affords the pure product. Following this precise procedure allows the yield to be increased to 95% for TPP. Finally, the nature of the counter ion is crucial: the Br- anion must be used as the counter ion. For the characterization of the complex in the solid state, see the single-crystal X-ray diffraction study hereafter.

Substitution of the axial ligands

In the next step, we studied the exchange of the axial ligands on P(V). In porphyrin chemistry, this type of reaction is quite common 31 and the substitution occurs in most cases quite smoothly. However, as already mentioned, the case of phosphorus is peculiar and poorly documented. A precise and systematic study is thus needed in order to find the best conditions to insert the handle in the last step of the synthesis. As mentioned in the previous part, the axial ligands present in [P(TPP)X2]+X- (X = Cl- or Br-) can be exchanged in the presence of water.
Refluxing [P(TPP)Cl2]+Cl- with water and pyridine affords the complex [P(TPP)(OH)2]+ with hydroxy axial ligands. In order to mimic the coordination of the handle, we studied the introduction of m-methoxyphenol at the axial positions of the P(V) (Fig. 1.24). In the case of the Sn(IV) porphyrin based systems, starting with the dihydroxy complex, it had been shown that 3-methoxyphenol could be introduced at both axial positions in CHCl3 under reflux using a small excess of m-methoxyphenol (~2.4 eq.). 1 Fig. 1.24 Exchange of the axial ligands in the P(V) porphyrin. In the case of phosphorus, these conditions did not lead to any substitution. Thus, more drastic conditions have been used. The typical procedure for the exchange of axial ligands in P(V) porphyrins requires the presence of pyridine under refluxing conditions. 24,31,32 However, starting from [P(TPP)(OH)2]+, this reaction did not lead to the desired product but mainly to the decomposition of the complex after 2 h under reflux (Fig. 1.24). Nevertheless, starting from [P(TPP)Cl2]+Cl-, the desired bis(3-methoxyphenoxo) complex was obtained in 50% yield after overnight refluxing in the presence of 3 eq. of 3-methoxyphenol. The formation of the new complex was confirmed by 31P-NMR, which revealed a shift of the signal from -229 to -196 ppm (Fig. 1.25). This value is, however, close to the value observed for the dihydroxy complex (-193 ppm). Fig. 1.25 31P-NMR spectra of P(V) TPP complexes with chloride and methoxyphenoxy axial ligands in CDCl3. The crude product was found to contain, in addition to the desired complex [P(TPP)(OPhOMe)2]+, a mixture of partially substituted complexes with high molecular mass, the free-base porphyrin and some unidentified compounds. The mixture could be separated by SiO2 column chromatography. The final purification requires GPC. The 1H-NMR spectrum of the target product in CDCl3 (Fig.
It should be noted that the yield could be increased upon increasing the amount of methoxyphenol. However, our goal here was to define the optimal conditions for the insertion of the handle. Apart from the fact that the preparation of the handle requires a multi-step and quite expensive synthesis, the use of the bidentate handle in excess could also lead to the formation of the 2:1 side product represented in Fig. 1.27. As described above, [P(TPP)Br2]+ is a highly reactive species and hydrolyses rapidly. We decided to check its reactivity in the presence of alcohols. First, we added ethanol to the crude product and stirred the mixture at room temperature for 2 days. The diethoxy complex [P(TPP)(OEt)2]+ was obtained in rather high yield (64%) (Fig. 1.30). The exchange process was monitored by 31P-NMR. In addition to the solution characterization, the solid-state structure of [P(TPP)(OEt)2]+ was also studied by single-crystal X-ray diffraction (Fig. 1.32). Purple crystals of the complex were obtained at 25 °C upon slow diffusion of pentane into a solution of the complex in chloroform with a drop of methanol. Due to the low quality of the single crystals, the structure was not finalized and it was not possible to refine the solvent present in the asymmetric unit. The P(V) atom is localized in the center of the cavity and its small ionic radius induces a substantial deformation of the porphyrin ring. The macrocycle is strongly "ruffled". The Cmeso atoms are located out of the N4 porphyrin plane, two of them above the main plane and the other two below it. The distance between the meso carbons and the plane defined by the four N atoms and the P atom is in the 0.931-0.958 Å range. The phosphorus is hexacoordinated, with the four nitrogen atoms of the porphyrin in the equatorial plane and the two ethoxy groups located in the axial positions. Indeed, the P-N and P-O distances are short, i.e. ca. 1.84 Å and ca. 1.63 Å, respectively.
These values are in good accordance with the data reported previously for similar complexes. [34][35][36] The quality of the crystal did not allow the counter ion to be determined with precision. Taking into account the substantial deformation of the P(V) porphyrin macrocycle, synthetic difficulties for the preparation of the turnstile can be expected. Table 1.1. Selected X-ray data for [P(TPP)(OEt)2]+.

Synthesis of the handle#1

Dealing with the choice of the handle, the Sn(IV) porphyrin based turnstiles were equipped with the handle#1 (Fig. 1.33), since its metrics and shape are perfectly suited. For the phosphorus(V) porphyrin based turnstiles, the same handle was chosen. The synthetic pathway for the handle#1 was described in previous publications. [1][2][3] Its preparation, based on a multi-step procedure, was inspired by the published procedure; however, some of the steps have been modified (Fig. 1.33). In a first step, pyridine-2,6-diyldimethanol was converted to the dichloro derivative 3 in 95% yield by reaction with SOCl2. The triethylene glycol "spacer" of the handle was first monoprotected with a 2-tetrahydropyranyl group to prevent the formation of cyclic compounds in the next step. The monoprotection reaction using dihydropyran is not selective and leads to a mixture of unprotected, monoprotected (4) and diprotected compounds, even with a threefold excess of the glycol. The mixture was separated by column chromatography. The compound 4 reacts with 3 in dry refluxing THF in the presence of NaH, used to deprotonate the OH group of the glycol fragment. The compound 5 was isolated in 58% yield. After removal of the THP protecting group, the compound 6 was obtained. It was then transformed into the bis-mesylate compound 7. The monoprotected resorcinol 8 was synthesized using a modification of the described synthesis.
[1][2][3] It consisted in the direct solvent-free condensation of resorcinol with DHP 37,38 in the presence of AlCl3·6H2O as a Lewis acid catalyst. The reaction leads to the formation of a mixture of free resorcinol and the monoprotected (8) and diprotected derivatives. This mixture could be separated by column chromatography despite the close polarities of the compounds. Derivative 9 is obtained by the reaction of 7 with 8 in dry THF under reflux in the presence of NaH. There are two ways to isolate the product, column chromatography being one of them. The handle#1 (10) is obtained upon deprotection of 9 under acidic conditions. The overall yield is 20%. Fig. 1.33 The synthesis of the handle#1.

Synthesis of the turnstile with handle#1

2.3.1 Application of the standard method

First, we tried to use the conditions found for the introduction of 3-methoxyphenol (see the previous part). As described above, the aromatic fragment can be added to the complex in refluxing pyridine. This method was applied to the reaction of [P(TPP)Cl2]+Cl- with the handle#1, but the targeted turnstile was not obtained. To avoid the hydrolysis of [P(TPP)Cl2]+, we also tried to use pyridine dried by distillation over CaH2; 39 unfortunately, we observed the same result and the targeted turnstile could not be obtained. One explanation for the different behaviour of the handle compared to 3-methoxyphenol could be the severe deformation of the porphyrin macrocycle, which prevents the handle from acting as a bidentate ligand and forming a 1:1 complex with the phosphorus porphyrin. In order to overcome this issue, the reaction was performed under microwave irradiation.

Microwave synthesis

Microwave irradiation and the use of special tubes allow temperatures above the boiling point of the solvent to be reached. Cooling the system with an airflow allows the level of incoming energy to be kept constant without overheating the solvent. The microwave region of the electromagnetic spectrum spans from ~300 MHz to ~300 GHz. In science and industry, 2450 MHz is usually used.
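As an illustrative aside (not part of the original experimental work), the energy carried by microwave photons at this frequency can be estimated from E = hν and compared with typical bond energies; the mid-IR frequency used below (≈1000 cm⁻¹, i.e. 3×10¹³ Hz) is an assumed representative value for a molecular vibration.

```python
# Back-of-the-envelope comparison of microwave vs mid-IR photon energies.
# Constants: CODATA exact values.
H_PLANCK = 6.62607015e-34    # Planck constant, J*s
N_AVOGADRO = 6.02214076e23   # Avogadro constant, 1/mol

def photon_energy_per_mole(frequency_hz: float) -> float:
    """Energy of one mole of photons (J/mol), E = h * nu * N_A."""
    return H_PLANCK * frequency_hz * N_AVOGADRO

e_microwave = photon_energy_per_mole(2.45e9)   # 2450 MHz magnetron
e_mid_ir = photon_energy_per_mole(3.0e13)      # assumed ~1000 cm^-1 vibration

print(f"microwave photons: {e_microwave:.2f} J/mol")      # ~1 J/mol
print(f"mid-IR photons   : {e_mid_ir / 1000:.1f} kJ/mol")  # ~12 kJ/mol
# A typical C-C bond (~350 kJ/mol) exceeds the microwave photon energy by
# more than five orders of magnitude, so microwaves cannot break bonds
# directly; they only drive molecular rotation ("molecular heating").
```

This simple estimate supports the qualitative statement that microwave quanta are far too weak to cleave covalent bonds, which is why only rotational excitation (and hence bulk heating) results.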
Compared to the IR region, microwaves carry less energy. Because microwaves can transfer energy directly to the reactive species ("molecular heating"), they can promote transformations that are not possible using conventional heating. Only rotational processes of the molecules are involved, which excludes local overheating of the reaction mixture and thus prevents decomposition of the reaction products.40 The main advantage of microwave heating when compared to conventional techniques is the direct delivery of energy from the source to the molecules. In an ordinary procedure, the heat must first pass through the walls of the vessel and then heat the whole mixture evenly; in microwave synthesis there is no such threshold. There are two main mechanisms of microwave energy transfer: dipole rotation and ionic conduction. The first mechanism operates with polar molecules: in the presence of the microwave field, the molecules rotate, following the quickly changing electric field of the source. The second mechanism is similar but involves ions instead of dipoles.40 The solvent has a great influence on the reaction. In the case of a polar solvent, the advantage of microwave synthesis is less significant; thus, owing to its polarity, pyridine is not the best choice. However, the deprotonation of the handle requires a basic (and hence polar) substance in the mixture, and we therefore decided to keep pyridine as the solvent. Compared to conventional heating, which in terms of energy offers mainly two parameters to be varied (time and temperature), microwave heating also includes the power.

This turnstile displays rather interesting NMR spectra, which need to be described. The resonance for the β-pyrrolic protons a appears as a doublet (4JP-H = 3.4 Hz), as expected for P(V) porphyrins. The signals of protons e and f are clearly upfield shifted (Δδ of ~4 ppm and ~5 ppm, respectively).
Some of the proton signals of the handle are also upfield shifted, however to a much lesser extent. The signals of the glycol protons i-n appear, as expected, in the 3.5 ppm range. However, the shape of the observed multiplets differs from the one observed for the Sn(IV) based turnstiles: here they are split into several multiplets with different chemical shifts and without overlaps.2,3 The reason for this difference could be the difference in the geometry of the complexes: the tin systems are planar while the phosphorus systems are not. Several conclusions may be drawn from these observations.

Synthesis of the single station turnstile

In order to adopt two distinct "open" and "closed" states, the turnstile#1 must contain one coordinating site on the stator (Fig. 1.2). The pyridyl group is the most suitable coordinating site, as previously demonstrated with the Sn(IV) porphyrin based turnstiles. The pyridyl unit offers two features: binding of metal cations and protonation in the presence of acids. The latter is crucial for the control of the movement by acid/base reactions.

Synthesis of P(V) meso-pyridylporphyrins

Insertion of the phosphorus atom into the pyridyl-containing porphyrin

As already described, the standard POCl3&PCl5 method can be applied for TPP metallation. Our aim was to prepare a turnstile bearing a coordination site on the stator. Thus, 5-pyridyl-10,15,20-triphenylporphyrin (H2MPyP) was chosen as the target (Fig. 1.37).

Fig. 1.37 Standard method for the insertion of phosphorus into H2MPyP.

The standard metallation method was found to be less effective for this porphyrin. Indeed, it requires 72 h instead of the 24 h needed for TPP. Even after 72 h, full conversion is not achieved, i.e. the mixture still contains the free-base porphyrin, and increasing the reaction time did not lead to an increase of the yield. The final yield is only 40%, which is lower than for TPP. Moreover, the dichloro-complex [P(MPyP)Cl2]+ is more sensitive to moisture than [P(TPP)Cl2]+.
It is slowly hydrolyzed in air, and the chloride axial ligands are partially substituted by OMe during column chromatography in the presence of methanol, so that further purification by GPC is required after the silica column. The pure product must be stored under argon in the absence of light; however, even under these conditions, a slow decomposition of the product over time was observed. The metallation of (5,15)-dipyridyl-(10,20)-diphenylporphyrin (H2DPyP) was also investigated (Fig. 1.38). The metallation is even more difficult than for the monopyridyl H2MPyP: refluxing the reaction mixture for 7 days is necessary to obtain some metallated P(V) porphyrin, and only a small part of the porphyrin is converted to the desired P(V) complex. The purification is more tedious since this complex is more sensitive to moisture. The final yield for H2DPyP was found to be only 5%. To confirm the influence of the meso-pyridyl substituents on the yield of P(V) complex formation, the reaction was carried out with tetrapyridylporphyrin (H2TPyP); no metallation was observed. Thus, it appears that the standard metallation with POCl3&PCl5 works well only with H2TPP and, with lower efficiency, with the monopyridyl porphyrin H2MPyP. The insertion of P(V) into the cavity of porphyrins bearing more than one meso-pyridyl unit requires another metallation method. The metallation of phthalocyanines usually requires harsher conditions than that of porphyrins.41 The P(V) phthalocyaninates are described30 and were obtained using POBr3 as the metallation agent. POBr3 had already been successfully applied for the insertion of P(V) into H2TPP (see previous part, Fig. 1.23). We also applied this procedure to H2MPyP (Fig. 1.39). An increased reaction time (+10 min when compared to H2TPP) and a larger POBr3 excess (+15 equivalents) were used.
After hydrolysis, the yield of the target compound reached 85%, which is lower than the one observed for H2TPP (95%). It should be noted that full hydrolysis of the Br-complex requires only one day of stirring with water, instead of two for [P(TPP)Br2]+, indicating that [P(MPyP)Br2]+ is more reactive. The insertion of phosphorus into H2DPyP can also be achieved using POBr3. Similar trends are observed: an increase in the number of meso-pyridyl groups leads to a decrease of the reaction rate, even when larger amounts of POBr3 than for H2TPP are used. The incorporation of P(V) into H2DPyP was achieved in an acceptable yield (69%). It appeared interesting to test these conditions with tetrapyridylporphyrin H2TPyP. After optimization of the conditions, the best yield obtained for the incorporation of P(V) was only 13%. The [P(TPyP)(OH)2]+ complex displayed some unusual properties, such as low solubility in chloroform and in more polar solvents. During purification on SiO2, up to a 1:1 mixture of methanol and DCM was required for elution. Increasing the eluent polarity using acetic acid must be avoided, since it leads to the protonation of the complex. Moreover, the use of any base such as Et3N to increase the polarity must also be avoided: although the complex is stable in the DCM-methanol-triethylamine mixture, it decomposes upon evaporation. The P(V) tetrapyridylporphyrin is highly unstable under basic conditions. Moreover, even at low temperature, under argon and in the absence of light, this compound is unstable. Because of these features, i.e. synthetic difficulties, purification problems and instability, this compound cannot be used for the generation of the turnstile.

Axial ligands exchange

The X-ray diffraction data of phosphorus (V) MPyP complexes

In addition to the solution characterization, the solid-state structure of some of the phosphorus(V) MPyP complexes was determined by X-ray diffraction. As expected, the complex shows a pronounced "ruffled" deformation of the porphyrin ring.
The degree of the deformation is similar to what was observed for the P(V) TPP complexes. The structure correlates well with the literature data reported for the similar [P(TPP)(OH)2]+ complex.27 A bromide anion is the counterion of the cationic complex. Selected X-ray data: P-O 1.638(5) Å; O-P-O 178.1(3)°. Since the structure of the complex is similar to the one determined for [P(TPP)(OH)2]+,27 it should be possible to prepare the turnstile based on this platform. Attempts to obtain single crystals of [P(TPP)(OPhOMe)2]+ and [P(MPyP)(OPhOMe)2]+ failed. Very small needle-type crystals of [P(MPyP)(OPhOMe)2]+ were obtained once, but unfortunately they were not suitable for X-ray diffraction studies. The complex [P(MPyP)(OPhOH)2]+, which was considered for the synthesis of the turnstile (see below), displays similar structural features, apart from the absence of methyl groups on the axial ligands. The 1H- and 31P-NMR spectra are similar to the ones observed for [P(MPyP)(OPhOMe)2]+. As shown in Table 1.3, the N-P bond lengths here are shorter than in [P(MPyP)(OH)2]+. Thus the deformation is slightly more pronounced; however, the porphyrin core remains "ruffled". The phenyl rings do not form parallel planes as in the similar Sn(IV) system.3 A mixture of Cl- and Br- counterions is present in a 1:1 ratio. The chloride anions found in the crystal arise most likely from chloroform decomposition (since the crystallization was carried out over a period of almost two months). The resorcinol OH groups form hydrogen bonds with the counterions. From the structural studies in solution and in the solid state, several conclusions can be made:
- the geometry of the P(V) MPyP complexes is similar to that of the P(V) TPP complexes; the porphyrin plane is in all cases strongly distorted
- the electronic properties are different and depend on the number of pyridyl substituents
- the [P(MPyP)(OPhOH)2]+ structure is different with respect to the similar Sn(IV) analogue.
Approaches toward the turnstile#1

For the synthesis of the turnstile, we faced some difficulties in connecting the handle#1 to the porphyrin using the method described for TPP (for details see below). The first attempt to connect the handle#1 to [P(MPyP)Cl2]+ was performed using conventional heating in pyridine (approach 1) (Fig. 1.45). Owing to the higher reactivity of the axial ligands in P(V) MPyP when compared to TPP, the exchange process should have been facilitated. Unfortunately, as with [P(TPP)Cl2]+, the formation of the one-station turnstile was unsuccessful. Microwave conditions were also tested; under the same conditions as those used for the model turnstile#1, the targeted turnstile was not obtained.

As we already noted, for both the P(V) TPP and MPyP complexes, the targeted axial ligands may be introduced directly starting from the dibromo-complexes: the alcohol can be added directly in a second step after the treatment with POBr3. By using an excess of resorcinol with respect to [P(MPyP)Br2]+, it was possible to obtain the dihydroxyphenoxy complex [P(MPyP)(OPhOH)2]+ in 55% yield (Fig. 1.46). Subsequently, in order to connect the handle#1 directly to the porphyrin, the complex was condensed with compound 7 (approach 2). This process requires basic conditions for resorcinol deprotonation. As already mentioned, owing to the instability of P(V) porphyrins under basic conditions, the reaction leads to the demetallation of the porphyrin.

Fig. 1.46 The explored pathway to obtain the turnstile#1 (approach 2).

Based on the above-mentioned investigations, it appears that the synthetic approach developed for the model turnstile cannot be applied for the preparation of the turnstile bearing coordinating pyridyl sites. As an alternative, we have undertaken another approach for the connection of the handle to the porphyrin backbone. This interconnection strategy is based on O-alkylation at the P(V) centre using handles bearing two terminal O-alkyl groups. It has been shown that this reaction proceeds under mild conditions.
42 For this approach, the leaving group should be incorporated within the handle. The O-alkylation of P(V) porphyrins has already been described42-44 and is based on the dihydroxy complex as the precursor. To test this reaction with MPyP, a model reaction with ethyl tosylate was investigated (Fig. 1.47). Subsequently, this new approach was applied to the synthesis of the turnstile using a handle bearing an O-alkyl fragment (approach 3). For this purpose, another handle (14) was synthesized (Fig. 1.48, 1.49).

Fig. 1.48 Suggested pathway to obtain the turnstile#1 (approach 3). X = leaving group

Synthesis of the handle#2

In order to facilitate the reaction between the phosphorus atom and the handle, the tosylate derivative was chosen as the leaving group. The multistep synthesis was inspired by the previously described procedure (Fig. 1.49).11

The signal corresponding to the molecular ion of the model turnstile#2 is observed at m/z = 1132.46 (calculated for [M-OTs]+: 1132.47). Thus, we can confirm that the [P(TPP)handle#2]+ turnstile was successfully obtained; however, we were not able to isolate it in a pure form. Each purification step led to partial decomposition of the complex and did not decrease the amount of the impurities. We suspected photo-destruction processes of the phosphorus turnstile to be responsible for this instability. Consequently, we have investigated this process in detail; the results are described in the second chapter of the manuscript.

Synthesis of the turnstile#2

To overcome this failure, we explored other conditions to prepare the turnstile. Several attempts to obtain the MPyP turnstile#2 were made. The reaction of the dihydroxy complex with the handle#2 was performed in acetonitrile at 50 °C in the presence of Cs2CO3 under argon (Fig. 1.53). The complex was purified with the same procedure as for the model turnstile#2. Unfortunately, again the pure turnstile could not be isolated.
Similarly to the model turnstile#2, the compound undergoes decomposition during GPC purification. This fact is probably related to the photostability of the turnstile#2; this phenomenon is described in the second chapter of the manuscript. The structure and composition of the turnstile#2 in solution were determined by 1H- and 31P-NMR. The peak corresponding to the molecular ion of the turnstile#2 is observed at m/z = 1133.4598 (calculated for [M-OTs]+: 1133.4573). In addition, several high-mass peaks were also detected. Again, although the turnstile#2 was formed, it was not possible to isolate it in a pure form. Several conclusions may be drawn:
- the O-alkylation reaction was found to be effective for the formation of the turnstile
- the purification was found to be unfeasible and the compound appeared to be unstable, probably due to photodecomposition

Turnstile#1 synthesis through the Br-derivative

As stated above, the synthetic pathway leading to the turnstile#1 was operational for TPP but not for MPyP. It has also been demonstrated that the bromide axial ligands in P(V) porphyrins exhibit high reactivity and may be substituted directly during the complexation process; this reaction requires an excess of the incoming molecules. Considering the effective reaction with SOCl2, the use of SOBr2 should lead to the Br-complex. As expected, stirring [P(MPyP)(OH)2]+ in a small amount of chloroform with SOBr2 leads to the dibromo-complex. The excess of SOBr2 was removed under vacuum. However, attempts to isolate the compound failed: traces of water in the solvents or in air quickly hydrolyze the complex. The mixture of hydroxy and bromo complexes was characterized by 31P-NMR. Based on these results, the synthesis of the turnstile#1 was attempted by a two-step reaction.

The purification of the reaction mixture revealed the presence of several substances.
They may be divided into two main groups: the P(V) porphyrins, and the products of decomposition of the porphyrins and of the handle. The first group contains the turnstile#1, the complex with the handle connected by only one side, [P(MPyP)monohandle#1]+, and the complexes with hydroxy, ethoxy and methoxy axial ligands. The other group includes the free-base porphyrin, the free handle, several unidentified compounds (probably corroles) and high-molecular-weight substances. The purification process is quite similar to the one used for the model turnstile#1. The two main groups of compounds were separated by column chromatography. The products of decomposition are less polar and can be eluted with DCM and MeOH (gradually increasing the MeOH content from 0% to 5%); these fractions also contain some pyridine. All the P(V) porphyrins and high-molecular-weight species remain at the start of the column and can be eluted with a DCM-MeOH mixture (12-15 v% MeOH).

1D NMR investigations

The study of the dynamic behaviour of the P(V) porphyrin based turnstile is inspired by the one carried out with the Sn(IV) based turnstile.2 As previously reported, the dynamic behaviour of the turnstile in solution was studied by NMR techniques. The silver cation1 was used as the locking agent45,46 because of:
- its possible linear coordination geometry
- its sufficiently high binding constant when bound to pyridyl moieties as coordinating sites
- its diamagnetic nature, required for the NMR investigations
The triflate anion was used for solubility reasons and for its poor binding capacity to the silver cation.2 The experiments were carried out in polar solvents (methanol-d4 or acetonitrile-d3). The concentration of the complex was ca 10⁻³ M unless specified otherwise. Owing to the simultaneous binding of the silver cation by both pyridyl moieties, this cation should behave as the locking agent (Fig. 1.61). The aromatic region of the spectra (7.1-9.5 ppm) is presented in Fig. 1.62. Addition of 0.5 eq.
of AgOTf had no effect, while addition of 1 equivalent caused a few small but observable changes in the spectrum. This behaviour is expected, since the counterion of the turnstile is the Br⁻ anion and thus the first equivalent of silver cation is required to remove it by precipitation of AgBr. Addition of 2 eq. of AgOTf leads to substantial changes. The signals of the glycol protons o-u appear in the 3.3-5.0 ppm region (Fig. 1.63). In the presence of 2 equivalents of silver triflate, the signal corresponding to u shifts downfield by 0.42 ppm and the multiplet splits into five distinct multiplets. The signals of protons t and s are broad. Although the signals of protons o-r are expected to be triplets, they appear as multiplets, which may be a superposition of two triplets arising from the non-equivalence of the protons of the two glycol chains of the handle. To prove that the closed state of the turnstile#1 was reached, an excess of AgOTf was added: the spectrum in the presence of 3 eq. is the same as the one with 2 eq. The binding constant for silver was calculated using the ChemEqui software.47,48 The signals of the e, v, t and s protons were used for the calculations. The binding constant is log K = 2.83 ± 0.8.

Usually ROESY is applied for compounds with a molar mass around 1000 g/mol. The molar mass of the turnstile#1 without silver and without the bromide counterion is 1310 g/mol; thus both NOESY and ROESY were tried. Since the ROESY technique did not show any correlations, NOESY was used. It is worth noting that, in order to observe through-space correlations, the distance between the two interacting 1H nuclei must be at most around 5 Å.49

Since silver cations form insoluble precipitates in the presence of halides, this feature, as previously exploited,2 was used to open the closed state of the turnstile. Tetraethylammonium bromide (Et4NBr) was used as the opening agent (Fig. 1.67). The protocol used was the following.
Starting with the closed state of the turnstile generated by addition of two equivalents of AgOTf, two equivalents of Et4NBr are added; this indeed leads to the open state of the turnstile. In order to regenerate the closed state, two equivalents of AgOTf are added. Four and a half opening-closing cycles were performed in methanol-d4 and followed by 1H-NMR (400 MHz, 25 °C) (Fig. 1.67). Although each cycle leads to the precipitation of AgBr, this had no influence on the quality of the NMR spectra.

1-D NMR investigations

Based on the design of the turnstile#1, i.e. the presence of two coordinating and basic pyridyl moieties, instead of the Ag⁺ cation one may use H⁺ to lock the system (Fig. 1.68). Owing to the presence of two pyridyl units (one on the handle and one on the stator), addition of one equivalent of a strong acid should lead to the monoprotonation of one of the two basic units, probably the pyridyl located on the handle. Further addition of acid should lead to the diprotonated derivative. Taking into account the rather low basicity of the pyridyl moiety, the use of a strong acid is required. Thus triflic acid, one of the strongest organic acids (pKa = -12 in water50), was used. The stepwise addition of the acid to the turnstile#1 in solution was monitored by 1H-NMR. We also carried out the experiment at a higher concentration (c = 9.8·10⁻³ M) and found that only 3 equivalents of HOTf were then required to block the rotation. The binding constant of the proton by the turnstile#1, leading to its closed state, was also calculated using the ChemEqui software. The signals of the e, u, t and s protons were used for the calculations. The obtained value is log K = 3.69 ± 0.06.
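The binding constants above were computed with the ChemEqui software from the titration-induced chemical-shift changes. As a minimal illustration of the underlying 1:1 host-guest model (not the actual ChemEqui algorithm), the sketch below simulates a hypothetical NMR titration at the ca 10⁻³ M turnstile concentration used here and fits the isotherm back; the chemical-shift values and the number of points are assumptions, and only log K = 2.83 (the Ag⁺ value) is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

H0 = 1.0e-3  # total turnstile (host) concentration, M (ca 10^-3 M, as in the text)

def delta_obs(G0, K, d_free, d_bound):
    """Observed chemical shift for 1:1 binding under fast exchange.

    [HG] is the exact root of K = [HG]/([H][G]) combined with mass balance."""
    s = H0 + G0 + 1.0 / K
    HG = 0.5 * (s - np.sqrt(s * s - 4.0 * H0 * G0))
    return d_free + (d_bound - d_free) * HG / H0

# hypothetical titration: 0 to 3 equivalents of AgOTf, 13 points, noiseless
G0 = np.linspace(0.0, 3.0, 13) * H0
delta = delta_obs(G0, 10 ** 2.83, 3.30, 3.72)  # synthetic shift data, ppm

(K_fit, d_f, d_b), _ = curve_fit(delta_obs, G0, delta, p0=(500.0, 3.3, 3.7))
log_K = float(np.log10(K_fit))
print(f"log K = {log_K:.2f}")  # recovers the input value of 2.83
```

In real use, `delta` would be the measured shifts of a well-resolved reporter proton (e.g. e, v, t or s), and several protons can be fitted globally to a shared K, which is what dedicated software such as ChemEqui does.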
The results obtained may be summarized as follows:
- the turnstile#1 can be closed using a proton (addition of HOTf)
- the closing process depends on the concentration of H⁺
- the changes in the 1H-NMR spectrum are similar to those observed for locking with silver
As in the case of the silver cation, the closing process by H⁺ locks the rotation of the handle around the stator. Again, the pyridyl group should be placed between the two glycol chains of the handle. The meta-pyridyl proton f is located near the handle protons s, t and r, as evidenced by the observed correlations. Interestingly, several correlations between the glycol protons and the phenyl groups of the porphyrin are also detected. This may be due to the much smaller size of the proton compared to the Ag⁺ cation.

Conclusion of the chapter

In this chapter we described the synthetic pathways for the preparation of four different molecular turnstiles and studied their dynamic behaviour. The synthesis of P(V) porphyrin derivatives was found to be delicate. We have prepared a model compound based on TPP derivatives. The synthesis of P(V) porphyrins bearing two monodentate axial ligands was achieved under the reported standard conditions. However, the reaction with the bidentate handle#1 required specific conditions to be found; microwave synthesis was found to be effective. In the case of the Sn(IV) porphyrin based turnstile investigated previously in the group, it was possible to model the reactivity of the pyridyl-bearing derivative MPyP using TPP. For the P(V) porphyrin derivatives, we observed large differences depending on the number of pyridyl substituents at the meso positions of the macrocycle. For the incorporation of the phosphorus atom into the cavity of the porphyrin, it was necessary to elaborate a new method using a shorter reaction time and a lower amount of metallation agent. The new method was found to be effective for porphyrins that cannot be metallated by standard techniques.
The number of pyridyl substituents on the porphyrin backbone was found to be crucial for the axial ligand exchange. The method elaborated for the synthesis of the model turnstile#1 was, however, found to be ineffective for the synthesis of the turnstile#1. In order to overcome this issue, another strategy was developed, and the turnstile#1 bearing one pyridyl coordinating site was obtained and isolated. The use of mild conditions for the connection of the handle to the porphyrin was investigated: a modified method based on the O-alkylation of P(V) porphyrins was developed. Two turnstiles were prepared in this way: the model turnstile#2 (without coordinating sites) and the turnstile#2 (with a pyridyl group). Unfortunately, all attempts to isolate the pure products failed: the compounds prepared were found to be photo-unstable and underwent decomposition during purification. The turnstile#1 was finally successfully obtained and characterized by mass spectrometry and by 1D and 2D 1H-, 31P- and 13C-NMR techniques. The switching between the open and closed states of the turnstile was investigated using the silver cation as the locking agent. The molecular turnstile#1 was shown to be stable, and the dynamic opening/closing process was shown to be reversible. Interestingly, the closing of the turnstile using a proton was also demonstrated, using triflic acid. Furthermore, the dynamic reversible opening/closing process using acid (triflic acid)/base (Et3N) was demonstrated. Although our distant goal was to prepare P(V) porphyrin based turnstiles equipped with four stations at the meso positions, owing to the synthetic difficulties as well as the reactivity of the phosphorus-containing backbone, this was not achieved. Following this design principle seems currently elusive.

Chapter II. Phosphorus (V) Porphyrins

Introduction

In the first chapter, we suggested that the instability of some of the P(V) porphyrins was due to their photoreactivity.
Most probably it arises from the generation of singlet oxygen by this type of porphyrins, and their behaviour was therefore studied in more detail. In this chapter, we describe some photochemical properties of P(V) porphyrins. The role played by the nature of the substituents and of the axial ligands was investigated.

The porphyrins are highly coloured compounds that possess strong absorptions in two main spectral regions (the Soret band and the Q bands). The relaxation processes of porphyrins are well documented4 and depend on the nature of the porphyrin ring and of the metal centre. Several main photophysical relaxation processes take place. Two types of fluorescence are observed (S2-S0 and S1-S0). The S2 state usually relaxes rapidly into S1, but the S2-S0 transition can be detected and described.5 Usually, the fluorescence is observed in the 650-800 nm spectral range and corresponds to the S1-S0 transition. Another pathway for the relaxation of the absorbed energy in porphyrins is non-radiative processes; the efficiency of this pathway also strongly depends on the nature of the porphyrin. The residual energy participates in intersystem crossing, through which an energy transfer to the triplet state of the porphyrin occurs (Fig. 2.4). In the absence of oxygen, the porphyrin energy relaxes either by phosphorescence or by non-radiative pathways.

Molecular oxygen possesses two low-lying excited singlet states: ¹Δg and ¹Σg⁺. The ¹Δg state is relatively long-lived (lifetime of 10⁻⁶-10⁻³ s in solution), since the transition to the triplet ground state is spin-forbidden; in contrast, the ¹Σg⁺ state has an extremely short lifetime (10⁻¹¹-10⁻⁹ s in solution).6-9 The ¹Δg→³Σg⁻ transformation leads to a very weak phosphorescence at 1268.7 nm,10 which may be used as a direct method for the detection of singlet oxygen. Alternatively, chemical traps may be used; 1,3-diphenylisobenzofuran (DPBF) is convenient since it reacts solely with singlet oxygen (there is no physical quenching of ¹Δg).11 The photochemical behaviour of DPBF in the presence of a photosensitizer has been described,15 and the pathway of DPBF oxidation has also been determined.15 As already stated, DPBF is insoluble in water.
Many water-soluble chemical traps are also known.11,14,17-19 Mostly, they include an anthracene core sensitive to singlet oxygen and react with ¹O2 to yield endoperoxides (Fig. 2.7).

Fig. 2.7 Singlet oxygen reaction with anthracene derivatives as chemical traps.

Water-soluble traps have been extensively investigated.19 Simple traps such as furfuryl alcohol or 2,5-dimethylfuran show low selectivity and thus cannot be used as efficient singlet oxygen traps. Water-soluble naphthalene, anthracene and tetracene derivatives show better results; however, these compounds are usually anionic species, such as the commercially available anionic ADMA (anthracene-9,10-bis-methylmalonate) (Fig. 2.8). The commercially available Singlet Oxygen Sensor Green (SOSG) (Fig. 2.9) is another trap used for singlet oxygen detection. In the presence of singlet oxygen, the anthracene core forms an endoperoxide, leading to the quenching of the intramolecular PET, and the fluorescein moiety then shows an intense fluorescence at 530 nm (Fig. 2.10, blue dashed line).

Fig. 2.10 Fluorescence of SOSG (red) and SOSG-EP (blue).13

The SOSG behaviour was described precisely by the Majima group.13 SOSG has been applied in biological studies to determine the presence of singlet oxygen in cells12 or to quantitatively monitor its quantum yield.16 We also used SOSG for the determination of the singlet oxygen quantum yield in this work. Photosensitizers can be divided into two groups based on their stability in the presence of singlet oxygen; unstable compounds undergo self-oxidation. Concerning the photosensitizing properties of P(V) porphyrins,23-27 mainly their interaction with biological systems has been investigated. DNA damage under visible irradiation in the presence of P(V) porphyrins was reported,23,25 and protein damage in the presence of P(V) porphyrins under irradiation was also described.24 Two probable mechanisms were proposed: electron transfer, and oxidation by generation of active oxygen species.
The latter mechanism was shown to be the predominant one. It is worth noting that the reported studies were based on the use of tetraphenylporphyrin or similar symmetric P(V) derivatives. The quantum yields of singlet oxygen generation were also determined: a value of 0.28 was reported for [P(TPP)(OH)2]+ in ethanol,25 the singlet oxygen being detected through its phosphorescence at 1270 nm. Two other contributions reported quantum yields of 0.62-0.73 for the generation of singlet oxygen by phosphorus porphyrin derivatives (axially substituted alkoxy complexes) in water.24,26 Within the framework of this thesis, we have undertaken an investigation of two series of phosphorus porphyrins, TPP and MPyP, differing in the nature of the axial ligands, as well as of the role played by the solvent. It should be mentioned that, whereas the turnstile#1 is stable and can be isolated, the P(V) porphyrins bearing aryloxy axial ligands behave differently.

Photophysical properties of P(V) porphyrins

P(V) porphyrins as photosensitizers

In the first chapter we described the synthetic efforts to prepare P(V) porphyrin based turnstiles. Attempts to synthesize and isolate the turnstile#2 and the model turnstile#2 were unsuccessful: these compounds undergo decomposition either during the purification procedure or directly in the solid state. We proposed that their instability is due to the photoreactivity of the P(V) porphyrin core in the presence of oxygen. To prove this hypothesis, comparative photochemical experiments were carried out in the presence and in the absence of oxygen. Two solutions of [P(MPyP)(OEt)2]+ in chloroform were prepared; one of them was left for 4 h in an open vessel, while the other was kept free of oxygen. It can be deduced that O2 is involved in the decomposition process under irradiation. Moreover, keeping the solution of the first sample (open vessel) in the dark does not lead to rapid bleaching of the porphyrin.
Thus, we can propose that the decomposition of our series of P(V) porphyrins is due to the formation of singlet oxygen under irradiation and to its reaction with the porphyrin macrocycle. Based on these experiments, the prepared P(V) porphyrins (TPP or MPyP) were systematically studied. The roles played by the axial ligands (hydroxide, chloride, ethoxide or phenolate) and by the solvent (CHCl3, DMSO and water) were investigated.

Photophysical properties of P(V) porphyrins in chloroform

2.2.1 Experimental setup#1

As described above, two main methods, based on the observation of the phosphorescence at 1268.7 nm or on the use of chemical traps, may be employed to detect singlet oxygen generation. We used the standard method with DPBF as the trap. The experimental setup#1 depicted in Fig. 2.12 was designed and constructed. The sample was irradiated by a xenon lamp beam passed through a green filter. Simultaneously, a low-intensity halogen lamp beam (modulated by the attenuator) covering the visible region was passed through the sample and analysed by the detector. Using this device, the UV-Vis spectrum could be recorded every 50 ms; in the majority of cases, however, spectra were recorded every 1-10 s. Each sample was prepared in 2.4 ml of chloroform. The concentration of the porphyrin was ~10⁻⁶ M, corresponding to a Soret band absorbance of ~1. Then 7 µl of a 0.017 M DPBF solution in chloroform was added; the DPBF concentration in the sample was thus 5·10⁻⁵ M, corresponding to ca one order of magnitude excess of the trap.

Singlet oxygen generation measurements

Two series of P(V) porphyrins (TPP and MPyP) were chosen and, for each, four different axial ligands (hydroxide, chloride, ethoxide or phenolate) were studied. The absorption spectra of the P(V) MPyP complexes in chloroform are similar to those of the P(V) TPP complexes (Fig. 2.13). H2TPP was used as a standard; the quantum yield of singlet oxygen generation in chloroform for H2TPP is Ф = 0.50.29 Equation 1 was used for the calculation of the singlet oxygen generation quantum yield.
Ф and Фst are the quantum yields, R and Rst the DPBF bleaching rates, and I and Ist the integral light absorptions, for the sample and for the standard respectively:

Ф = Фst • (R/Rst) • (Ist/I) (1)

The absorption spectrum of the DPBF solution is presented in Fig. 2.14a. In most cases, an overlap of the porphyrin and trap spectra is observed after addition of the DPBF solution into the sample (Fig. 2.14b). In the case of [P(TPP)Cl2] + , two distinct peaks are observed (Fig. 2.14b) and the process was monitored directly by the decrease of the intensity of the DPBF peak (417 nm). Data are given in table 2.1. Although this method leads to large experimental errors, since all data were collected under similar conditions their comparison is reliable.

Fluorescence measurements

The fluorescence of the complexes was measured at room temperature in freshly distilled chloroform. H2TPP was chosen as a standard (Ф = 0.11 in toluene). 30 All samples, excited at 550 nm, showed fluorescence in the 620-700 nm range corresponding to the S1-S0 transition with characteristic Stokes shifts (Fig. 2.16). The P(V) TPP spectra are similar and are not presented. During the fluorescence measurements, excitation spectra were also recorded; their similarity with the absorption spectra indicated the purity of the complexes studied. As presented in table 2.2, the fluorescence quantum yields of the P(V) porphyrins studied are small. The lowest value (less than 1%) is observed for the aryloxy complexes. The data collected are in good agreement with literature data reported for similar P(V) TPP complexes. 31 A low fluorescence quantum yield implies an efficient intersystem crossing (ISC) for P(V) porphyrins. The photochemical behaviour of the [P(TPP)(OH)2] + complex in ethanol was described by A. Harriman in 1983. It was stated that 80% of the absorbed energy is engaged in ISC for the generation of the triplet state.
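For clarity, the relative-method calculations used throughout this chapter (Equation 1 for singlet oxygen via DPBF bleaching, and the analogous relative expression for fluorescence, referred to later as Equation 2 but not reproduced in this excerpt) can be sketched in a few lines of Python. The numerical inputs in the example are hypothetical, chosen only to illustrate the arithmetic; the standards' quantum yields (0.50 for singlet oxygen generation by H2TPP in CHCl3, 0.11 for its fluorescence in toluene) are those quoted in the text.

```python
# Relative-method quantum yields. Eq. 1 (singlet oxygen) as defined in the
# text; the fluorescence relation is the standard relative expression.

def phi_delta(phi_std, R, R_std, I, I_std):
    """Eq. 1: singlet oxygen generation quantum yield vs. a standard.
    R, R_std -- DPBF bleaching rates; I, I_std -- integral light absorptions."""
    return phi_std * (R / R_std) * (I_std / I)

def phi_fluo(phi_std, F, F_std, A, A_std, n=1.0, n_std=1.0):
    """Relative fluorescence quantum yield (one common form of Eq. 2).
    F -- integrated emission; A -- absorbance at the excitation wavelength;
    n -- solvent refractive index (cancels for identical solvents)."""
    return phi_std * (F / F_std) * (A_std / A) * (n / n_std) ** 2

# Hypothetical sample bleaching DPBF 1.5x faster than H2TPP (phi = 0.50)
# while absorbing the same amount of light:
print(phi_delta(0.50, 1.5, 1.0, 1.0, 1.0))   # 0.75
# Hypothetical sample emitting half as much as H2TPP (phi = 0.11):
print(phi_fluo(0.11, 0.5, 1.0, 1.0, 1.0))    # 0.055
```

Since sample and standard are measured under identical conditions, instrument factors cancel in the ratios, which is why only relative rates, emissions and absorptions are needed.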
Based on the above mentioned investigations, several conclusions may be drawn:
- the P(V) porphyrins are efficient photosensitizers in chloroform
- their behaviour as photosensitizers depends on the meso-substituents of the porphyrin backbone as well as on the axial ligands
- complexes with aryloxy axial ligands display low fluorescence quantum yields as well as a low singlet oxygen generation propensity
- potentially, these series of compounds may be used as photosensitizers in photodynamic therapy (PDT) due to their solubility in water
- additional flash-photolysis experiments should be performed for a better understanding of the energy transfer processes in these complexes.

Photophysical properties of P(V) porphyrins in DMSO

Singlet oxygen generation measurements

Using DPBF as the chemical trap, the same experimental setup described above for measurements in CHCl3 was used in the case of DMSO as the solvent. All techniques and concentrations were also identical. DMSO (m.p. = 18.5 °C) was frozen in the fridge (+6 °C) and the excess of water was removed; the procedure was repeated until no liquid was detected. The residual solid DMSO was melted and used for the measurements. DMSO was chosen since it is nontoxic and widely applied in medicine. Furthermore, potential compounds for PDT were tested in this solvent. 32 Again, H2TPP was used as the standard. The literature data for the singlet oxygen generation quantum yield of H2TPP in DMSO vary between 5% and 50% 33,34 depending on the oxygen concentration. Thus, we remeasured the quantum yield of H2TPP in DMSO using the same technique as for the P(V) porphyrins in chloroform and obtained a ФΔ value of 17%. Taking this into account, we calculated ФΔ for all prepared complexes in DMSO. Calculations were made using Equation 1 and the data are presented in table 2.3.
The dramatic decrease in the quantum yield upon switching from chloroform to DMSO may be due to the presence of a higher number of hydrogen atoms in DMSO, responsible for the quenching of singlet oxygen. [35][36][37] Interestingly, the MPyP complexes are more effective than the TPP ones, with the following sequence: (OPhOMe) < (OH) < (OEt) < Cl. The lower apparent quantum yields in DMSO when compared to chloroform may also be explained by the lower stability of chloroform under irradiation: indeed, its decomposition leads to the formation of radicals which could promote the destruction of the trap.

Fluorescence measurements

As in chloroform, the fluorescence experiments were performed at room temperature and under irradiation at λex = 550 nm. The complexes studied again displayed fluorescence in the 620-700 nm range with characteristic Stokes shifts (Fig. The fluorescence quantum yields were calculated using Equation 2 and the values are given in table 2.4. The fluorescence quantum yield increases dramatically for the complexes with ethoxy and hydroxy axial ligands when compared to the values obtained in chloroform. Very weak fluorescence was observed for the chloro and aryloxy porphyrin derivatives. Excitation spectra were found to be similar to the absorption spectra. The following conclusions may be drawn:
- the solvent has a substantial influence on the photochemical properties of P(V) porphyrins
- quantum yield values for singlet oxygen generation are higher in chloroform, in contrast to the fluorescence quantum yields
- complexes with aryloxy axial ligands do not generate singlet oxygen in DMSO
- since P(V) porphyrins may be of interest for PDT, experiments in aqueous solution must be carried out

Photophysical properties of phosphorus porphyrins in aqueous solutions

Experimental setup#2

The setup of the spectrometer had to be changed due to the use of SOSG. In setup#1, the sample was irradiated at 547 nm, which is no longer possible due to the possible overlap with the emission peak of SOSG at 530 nm.
Thus, the Xenon lamp was replaced by a violet diode laser with a wavelength of 405 nm (STAR405F10, Roithner Laser Technik). This laser excites both the porphyrin core and SOSG. Since we detect the emission of SOSG, the setup does not require a second light source. The scheme of the fiber-optic spectrometer is presented in Fig. 2.19.

Fig. 2.19 Experimental setup#2 for singlet oxygen determination in aqueous solutions using SOSG.

The sample was stirred at 30 °C. Only stirrers covered with Teflon can be used because of the strong oxidation power of the solution. The detector was placed orthogonally to the excitation beam to measure the emission.

Singlet oxygen generation measurements

An increase in the number of meso-pyridyl substituents on P(V) porphyrins enhances their solubility in water. The free-base H2TPP porphyrin is totally insoluble in water and cannot be used as a standard. The water-soluble 5,10,15,20-tetra(4-sulfophenyl)porphyrin (H2TSP) was thus used; its singlet oxygen generation quantum yield is reported to be 0.64. 38 All absorption spectra were recorded (Fig. 2.20). For the calculations, the rate of increase of the SOSG emission peak was used; only the linear part was considered. The plot of the emission at 530 nm against time is presented in Fig. 2.21 and the data are given in table 2.5.

Fluorescence measurements

For the fluorescence experiments, we used the same conditions and protocol as for the experiments carried out in chloroform (λex = 550 nm, room temperature). The fluorescence spectra of the TPP and MPyP complexes were found to be rather similar; the fluorescence spectra of the MPyP series are given in Fig. 2.22. The complexes with ethoxy and hydroxy axial ligands display fluorescence quantum yields of ca 10%, the hydroxy complex being slightly more efficient.
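The rate extraction described above (slope of the linear part of the SOSG emission at 530 nm vs. time) amounts to a simple least-squares fit. A minimal sketch with synthetic data (not measured values):

```python
import numpy as np

# Synthetic SOSG emission trace at 530 nm (a.u.) vs. irradiation time (s),
# restricted to the early, linear growth region.
t  = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
em = np.array([100.0, 121.0, 139.0, 161.0, 179.0])

slope, intercept = np.polyfit(t, em, 1)   # least-squares straight line
print(round(slope, 2))                    # initial rate in a.u./s -> 1.98
```

The rates R and Rst entering Equation 1 (or the DPBF bleaching analysis in chloroform) are obtained in the same way, the only difference being that DPBF bleaching gives a negative slope.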
The following conclusions and trends may be drawn:
- P(V) porphyrins are efficient photosensitizers in water
- the efficiency of singlet oxygen generation and the fluorescence dramatically depend on:
  - the axial ligands, with the following sequence: Cl > OEt > OH >> OAr
  - the nature of the porphyrin substituents: MPyP better than TPP
  - the solvent: CHCl3 better than DMSO or H2O
- since P(V) porphyrins are water-soluble, they may be used for PDT
- P(V) porphyrins may also be used for photocatalysis (see part 3 of this chapter)
- complexes with aryloxy axial ligands display a different behaviour; an attempt to explain this difference is presented in the following part.

Electronic structure of aryloxy complexes

[P(TPP)(OPhOMe)2] + and [P(MPyP)(OPhOMe)2] + complexes show specific photophysical features when compared to other P(V) porphyrins. Indeed, they display extremely low fluorescence and singlet oxygen generation quantum yields. Since these complexes were used for the design of turnstiles (see chapter I), we tried to find an explanation for their peculiar properties. Similar compounds with pyrene moieties in axial positions were described previously. 39 Owing to the presence of the phosphorus atom in the cavity, the porphyrin ring is an electron acceptor. On the other hand, aromatic axial ligands are electron donors. For the pyrene complex described, since the energy of the HOMOs of the pyrene ligands is higher than that of the porphyrin, a photoinduced electron transfer is expected. Upon excitation of the complex, 3 possible pathways for electron and energy transfer were described (Fig. 2.23). After excitation of the pyrene absorption band (337 nm), two possible processes (2 and 3) were found to occur. It was shown that in the case of a connection of the aromatic axial ligand directly to the O-P group, only process 2 (oxidative electron transfer) takes place. Upon excitation of the porphyrin moiety, process 1 occurs.
Both processes 1 and 2 lead to the formation of a "charge-transferred" state of the porphyrin, for which an electron from the pyrene moiety is transferred to the porphyrin. This electron transfer leads to the quenching of the fluorescence as depicted in Fig. 2.23. Similar phosphorus complexes with N-containing axial aromatic ligands were also described 40 and PET from the axial aromatic ligands to the porphyrin ring was also observed. As we can see, both the HOMO and HOMO-1 are located on the phenoxy axial ligands and both LUMOs on the porphyrin ring. This confirms the possibility of electron transfer, in agreement with literature results, and offers a possible explanation for the specific behaviour of our compounds under irradiation. The major part of the absorbed energy goes to PET; therefore, this type of P(V) porphyrins are poor photosensitizers. As described in chapter I, the handle#1 contains an aromatic part connected directly to the porphyrin. Consequently, the PET stabilizes the P(V) porphyrins towards irradiation. Indeed, the turnstile#1 bearing phenolate axial ligands was found to be more stable than the turnstile#2 bearing alkoxides in the axial positions. This feature allowed us to isolate the turnstile#1 and to study its dynamic behaviour in solution.

Application of P(V) porphyrins in photocatalysis

3.1 Experiment description

Since P(V) porphyrins are effective photosensitizers, it appeared interesting to investigate their reactivity in photooxidation processes. Porphyrins (free-base or Zn(II) complexes) have already been used for this purpose. 41,42 They showed high efficiency in the oxidation of unsaturated cyclic organic compounds. To study the behaviour of P(V) complexes, the hydroquinone (HQ) oxidation was chosen as a model reaction since its photooxidation mechanism is well documented. [43][44][45] The oxidative process can be easily monitored by UV-Vis spectroscopy, since the main product of the photodegradation is hydroxybenzoquinone (BQ) (Fig. 2.25).
The oxidation of HQ was monitored by UV-Vis spectrophotometry. Owing to the higher ionic strength, the solubility of P(V) porphyrins decreases considerably in the buffered solution when compared to pure water. The experiments were performed on buffered porphyrin solutions (1.4•10- M) in open quartz cells. The final HQ:porphyrin ratio was 270:1. The solution was irradiated at room temperature and then analysed by UV-Vis spectroscopy. The percentages of HQ and porphyrin were determined from the intensities of the corresponding bands.

Photooxidation of hydroquinone

All four samples were irradiated simultaneously in the chamber. Significant changes in the UV domain of the spectra were detected. All porphyrins displayed similar behaviours. The complexes with ethoxy axial ligands showed a better efficiency in comparison with their hydroxy analogues. After 170 min of irradiation, only ~40% of HQ remained (Fig. 2.28a). Further irradiation leads to further oxidation of BQ to carbonic acids, as evidenced by the decrease of the BQ peak and the appearance of other bands in the spectrum. Consequently, it appeared difficult to quantify the processes after 170 min of exposure. [P(TPP)(OH)2] + appeared to be the least efficient photocatalyst: after oxidation of ~30% of HQ, the oxidation process stopped. Simultaneously, the intensity of the [P(TPP)(OH)2] + peak decreased by 60% when compared to the initial intensity (Fig. 2.28b). The best result was obtained with [P(MPyP)(OEt)2] + : more than 80% of the initial porphyrin was still in solution after 170 min of irradiation.
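The HQ and porphyrin percentages quoted above follow from the linearity of absorbance with concentration (Beer-Lambert law): the fraction of a species remaining is the ratio of its band intensity at time t to its initial intensity. A minimal illustration with hypothetical absorbance values (not the measured ones):

```python
def fraction_remaining(A_t, A_0):
    """Beer-Lambert linearity: the concentration ratio equals the
    absorbance ratio for a band of a single absorbing species."""
    return A_t / A_0

# e.g. an HQ band falling from an absorbance of 0.80 to 0.32 after
# irradiation would correspond to ~40% of HQ left:
print(round(100 * fraction_remaining(0.32, 0.80)))   # 40
```

This simple ratio only holds while the monitored band is free of overlap from the oxidation products, which is why quantification becomes unreliable once the BQ peak decays and new bands appear after 170 min.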
- MPyP complexes are more stable than their TPP analogues
- [P(MPyP)(OEt)2] + is the best photocatalyst among the studied compounds and shows the highest stability under irradiation
- as expected, the hydroxy P(V) porphyrins are less photostable

Conclusions of the chapter

In this chapter, we have investigated in detail the photophysical properties of a series of P(V) porphyrins (TPP and MPyP) and found this type of compounds to be efficient photosensitizers. This observation explains the low stability of the P(V) porphyrins described in chapter I. Indeed, it was shown that self-oxidizing processes take place under visible light irradiation in the presence of oxygen. P(V) porphyrin derivatives (except complexes with aryloxy axial ligands) display high quantum yields for singlet oxygen generation in chloroform. In DMSO and in water, the efficiency is noticeably lower. As might be expected, an increase of the fluorescence quantum yields in DMSO and water with respect to chloroform was observed. For a better understanding of these processes, flash-photolysis experiments are needed. It was also found that the fluorescence and the singlet oxygen generation of P(V) porphyrin complexes bearing aryloxy axial ligands are quenched by electron transfer from the electron-donating phenyl ring to the porphyrin macrocycle. This leads to a higher photostability when compared to other P(V) porphyrins. Consequently, P(V) porphyrin complexes bearing aryloxy axial ligands are suitable for the design of molecular turnstiles as described in chapter I. Attempts to investigate the photocatalytic properties of P(V) porphyrins were carried out. The dependence of the photocatalytic activity on the substituents of the macrocycle and on the nature of the axial ligands was investigated. The [P(MPyP)(OEt)2] + complex was shown to be the best catalyst for hydroquinone oxidation in water. The P(V) porphyrin derivatives prepared may thus be used as catalysts for the photooxidation of other classes of compounds.
Furthermore, P(V) porphyrins may also be of interest for photodynamic therapy (PDT). These features need to be further investigated.

Contents

Introduction

As mentioned in the introduction part of this manuscript, the porphyrin backbone has been widely used for the design of dynamic systems. [1][2][3][4][5][6][7] Free-base porphyrin derivatives as well as their zinc complexes have been studied both in fundamental research and in materials science. Two porphyrin moieties are linked via a non-flexible bridge (green parts, Fig. 3.1). The bridge allows the free motion of the two parts with respect to each other. Usually in solution, the zinc cation located within the porphyrin cavity is pentacoordinated and adopts a square-based pyramidal coordination geometry. Thus, in addition to the four nitrogen centres of the tetraaza core of the porphyrin, it can bind one additional peripheral ligand in its axial position (violet moiety). The motion of the system can be blocked by a locker (blue sphere) between the two axial ligands or by a bis-monodentate ligand binding simultaneously to both Zn centres. Among many possibilities, the porphyrin dimer may be formed with either flexible or rigid connectors. For the latter design, one may use aromatic linkers or ethynyl type fragments. However, an aromatic bridge, for steric reasons, would impose a barrier to the motion. Hence, an ethynyl type connector was chosen. A single ethynyl spacer, owing to π-conjugation with the porphyrin core, would also hinder the rotation. 8 For that reason, butadiyne was used as the bridge between the two porphyrin units. Concerning the design of the porphyrin macrocycle, three meso-positions of both macrocycles were occupied by phenyl rings and only one meso-position on each porphyrin was used to interconnect them through the butadiyne moiety. The structure of the targeted porphyrin dimer is represented in Fig. 3.2.

Fig. 3.2 The porphyrin dimer designed to behave as a molecular gear.
One of the first studies dealing with this subject was reported by Harry Anderson in 1994. 9 In this publication, butadiyne-linked β-substituted porphyrin dimers were reported. Since this type of linker allows the rotation of the two porphyrin cores, several conformers in solution may be expected. However, porphyrin dimers were found to aggregate via π-π stacking interactions 9,10 (Fig. 3.3), leading to a planar conformation of the dimers with two parallel adjacent porphyrin molecules. In the case of metallated porphyrins, an additional way to form planar complexes exists: using a bis-monodentate rigid ligand capable of binding two metal centres simultaneously, two dimers may be bridged, thus imposing the planar conformation. Indeed, it was demonstrated that the Zn(II) dimer in DCM solution predominantly exists in aggregated form. Binding of DABCO as a bridging ligand leads to a stable 2:2 complex with "ladder"-type structures (Fig. 3.4). In the presence of pyridine, the dimer forms a 1:2 porphyrin-pyridine complex (Fig. 3.5). In that case, non-planar oligomers can be formed for steric reasons. Thus, in solution, the pyridine complex occurs as a mixture of torsionally distinguished isomers. The UV-Vis spectrum of the mixture is different from that of the "planar DABCO complex". An explanation can be given by the point-dipole coupling theory developed by Kasha. 11 The imposition of an orthogonal conformation was also described by Aida et al. 12 A monopyridyl meso-substituted dimer depicted in Fig. 3.7 was synthesized. In the absence of any external coordinating agent, the compound forms orthogonal cages (tetramers of dimers) in solution due to self-assembling processes, as confirmed by the UV-Vis absorption spectrum (Fig. 3.7). Later, Bo Albinsson et al. investigated similar dimers, however without coordinating sites at the meso-positions (Fig. 3.8). 13 It was found that the butadiyne bridge allows the motion of the two moieties of the dimer.
Interestingly, DFT calculations revealed a considerable energetic barrier to free rotation about the butadiyne axis. 14 At room temperature, a free 360° rotation is not expected. Several structures of butadiyne-linked porphyrin Zn(II) dimers were investigated by other groups. [15][16][17][18] In the solid state, the dimer displays a planar (or close to planar) conformation. The zinc cation is penta-coordinated in almost all compounds. The two axial ligands are located either on the same side of the dimer 15 or on opposite sides. [16][17][18] The butadiyne linker is strictly linear. No examples of dimers bearing a bidentate handle in the axial position have been reported. The dimer bearing two pyridyl axial ligands 15 will be described in more detail and compared to the compounds obtained during this work.

Synthesis of the targeted Zn(II) dimer

According to a described procedure, 19,20 the first step involves a direct cross-condensation of a mixture of aldehydes affording a mixture of 6 porphyrins including the targeted porphyrin in rather low yield (Fig. 3.9). The TMS protective group was removed by TBAF in quantitative yield. The porphyrin was then metallated using Zn acetate under standard conditions, affording the desired metallated porphyrin. The dimer was obtained in 65% yield by a Pd-catalysed coupling reaction between two monomer molecules. The overall yield was 3.25%. 21 The product was isolated by column chromatography (alumina, cyclohexane-toluene 1:1 as eluent) and analysed by NMR and HR-ESI MS techniques. The dimer appeared to be only slightly soluble in non-coordinating solvents due to π-π stacking interactions between porphyrins. However, it was found to be well soluble in coordinating solvents such as THF, DMSO and pyridine.
Indeed, owing to their coordination in the axial positions of the Zn(II) and the resulting breaking of the π-π interactions between dimers, the addition of coordinating solvents to chloroform or toluene solutions of the dimer increased its solubility.

Behaviour of the Zn(II) dimer

Investigation of the axial ligand coordination

The system described above (Fig. 3.1) requires the presence of one ligand in the axial position per Zn(II) centre. The most common axial ligands for such complexes are pyridine-based derivatives. 22 The binding constant for the coordination of pyridine by the monomeric Zn(II) tetraphenylporphyrin (Zn(TPP)) was reported to be logK = 3.84 and 2.79 in DCM 23 and chloroform, 24 respectively. The binding of pyridine to the dimer was investigated by NMR titration experiments performed in CDCl3 with traces of DMSO-d6 (to solubilize the dimer) (Fig. The dimer is soluble enough in chloroform and DCM at concentrations below 10-5 M to perform UV-Vis experiments. Thus, the titration of the dimer solution by pyridine was carried out in both solvents (Fig. 3.12). During the titration, we noticed a significant increase (4 times) in the intensity of all bands due to the disaggregation of the stacked dimers in DCM. Furthermore, the binding of axial pyridines leads to a red shift of ca 10 nm of the Soret and Q-bands. These changes correspond to two processes (Fig. 3.12): the disaggregation of the aggregated dimer (KDis) and the concomitant binding of pyridine (KL). All titration curves were treated with the HypSpec software. 25,26 Values of logK = 6.941±0.0033 in DCM and logK = 7.507±0.0131 in chloroform were calculated. These constants can be represented as the result of the two processes:

K = KDis•KL (eq. 3.1)
logK = logKDis + logKL (eq. 3.2)

In a first approximation, we assume that the binding constant KL of the system with pyridine is the same as for Zn(II) tetraphenylporphyrin (as described previously, 9,23,24 logKL = 3.84 in DCM and 2.79 in CHCl3), since both binding processes occur independently.
The disaggregation constant KDis can be deduced from eqs. 3.1 and 3.2; thus logKDis is 4.15 in DCM and 3.66 in chloroform. The titration of the dimer by DMAP (4-dimethylaminopyridine) in DCM solution was then performed. The calculated logK appeared to be 9.1489±0.0119. This value is about three orders of magnitude higher than the value obtained for pyridine and is consistent with the stronger basicity of DMAP when compared to pyridine (pKa = 9.60 vs pKa = 5.17 for pyridine 27). A value of logKL = 5.45 28 for DMAP binding to Zn(TPP) was reported. Thus, the logKDis for the dimer in DCM is equal to 3.68. These systems with monodentate axial ligands, pyridine or DMAP, are free to rotate. The spectra recorded are averaged and correspond to mixtures of all conformers in solution. As already described, 9 the addition of DABCO to a similar compound (Fig. 3.4), leading to the formation of a 2:2 complex, blocks the rotation and imposes a planar conformation. We also performed the titration of the dimer with DABCO. The resulting UV-Vis spectrum (Fig. 3.13) is similar to the one described earlier and corresponds to the planar conformation of the dimer after the addition of 15 eq. of DABCO. This spectrum will be used as a reference for further experiments. The crystal of the DMAP complex (triclinic, P-1 space group) is composed of the dimer and 3 chloroform molecules which form short contacts with the dimer molecules. The X-ray structure is depicted in Fig. 3.14. Selected structural parameters are given in table 3.1.

Table 3.1 Selected X-ray data for the dimers with DMAP and pyridine molecules.

This structure should be compared to the structure of the Dimer(pyridine)2 reported previously. 15 The structural parameters of both crystals are similar, with only slight differences. The angle between the two porphyrin planes is 1.64° in the DMAP structure vs 5.51° in the case of the Dimer(pyridine)2 complex.
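As a quick numerical check of eq. 3.2 (logK = logKDis + logKL) for the DMAP titration discussed above, using the constants reported in the text (arithmetic only):

```python
# eq. 3.2 rearranged: logKDis = logK(overall) - logKL
logK_overall = 9.1489   # dimer + DMAP in DCM (this work)
logK_L = 5.45           # DMAP binding to Zn(TPP), literature value
logK_dis = logK_overall - logK_L
print(round(logK_dis, 2))   # 3.7, close to the reported value of 3.68
```

The small difference from the reported 3.68 presumably reflects rounding of the constants quoted in the text.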
The Zn atom displacements out of the porphyrin planes are almost the same: 0.327 Å and 0.424 Å for the DMAP complex and 0.394 Å and 0.401 Å for the pyridine complex. Since pyridine is less basic than DMAP, the distance between Zn and the N atom of the axial ligand is larger for the pyridine complex (see table 3.1). The distance between two dimer molecules is 3.423 Å for the DMAP complex, while it is larger for the pyridine complex (4.9 Å).

Handle#3 based system

To stop the rotation of the two porphyrin moieties within the dimer, a bidentate locker, handle#3, was used. By comparison with the DABCO 2:2 complex, the blue-shifted Bx component of the Soret band of the Dimer(handle#3) is more intense and sharper, thus indicating that there is still some "flipping" in the presence of the handle. There are three possible explanations for that: i) the handle#3 is flexible enough to allow the flipping of the dimer, leading to a decrease in the conjugation of the π-electrons between the two porphyrin cores; ii) opposite to the reason mentioned above, the handle is too rigid or too short and only one of the two pyridyl units is involved in the binding process. However, this is less likely since the distance between the two Zn(II) is close to 13.5 Å and the expected distance between the two nitrogen atoms in handle#3 (obtained by ChemDraw 3D software) is ca 13-14 Å; moreover, the compatibility between the handle#3 and the Zn(dimer) is supported by examination of CPK models. iii) The third possibility is the formation of a mixture containing partially bonded dimers. Among the three possibilities, the first one seems the most probable. The re-opening of the system requires the disconnection of the handle#3. For this purpose, the competition between DMAP and the handle was investigated. Gradual addition of DMAP to the dimer in the presence of the handle#3 (preformed complex) leads to the formation of the mixture of "torsionally free" Dimer(DMAP)2. This process was monitored by UV-Vis spectroscopy (Fig. 3.18).
Disconnection of the handle leads to an increase of the intensity of the blue-shifted Soret and Q-bands, while the intensity of the red-shifted Soret and Q-bands decreases simultaneously, indicating that the system is reopened. Interestingly, this process is reversible and the closed state could be regenerated (Fig. 3.18). Indeed, stepwise addition of TFA leads to the protonation of the stronger base, DMAP, leading to its decoordination from Zn(II) and to the binding of the handle#3. This process was also followed by UV-Vis spectroscopy (Fig. 3.18). In addition to the study of the Dimer(handle#3) complex in solution, its characterization in the solid state by X-ray diffraction was also carried out. Single crystals were obtained at 25 °C by vapour diffusion of pentane into a solution of the dimer in chloroform (containing drops of toluene and methanol) in the presence of 1.5 equivalents of the handle#3. The Dimer(handle#3) complex crystallizes (monoclinic, C2/c space group) with one water molecule (Fig. 3.19). Both Zn atoms are penta-coordinated, surrounded by the four nitrogen atoms of the porphyrin cores and one nitrogen of the pyridyl axial ligand. They are both displaced out of the porphyrin plane (the distances between Zn and the porphyrin planes (20 C and 4 N atoms) are 0.232 Å and 0.296 Å, which is twice less than in the structure of the dimer with DMAP). Unlike the structure of Dimer(DMAP)2, the torsional angle between the two halves of the dimer is not 0°. In comparison with the previously described DMAP complex, the dimer is not planar. The butadiyne linker is not linear, the angle between the two porphyrin planes (20 C and 4 N atoms) is about 36.66° and the Zn-Zn distance is 13.11 Å. Since we did not determine the positions of all the atoms in the structure, we cannot explain why the butadiyne linker is not linear.
Most likely, however, the convergent orientation of the two porphyrin mean planes and the bending of the dimer are due to the coordination of the handle to the Zn(II), the handle probably not being long enough to allow a planar conformation of the dimer in the solid state. To test this proposal, we modelled the geometry of the handle#3 complex. The calculations were performed using the Spartan'14 software (B3LYP, 6-31G* basis set) (Fig. 3.20). The optimized geometry is in good agreement with the crystal data, and the butadiyne linker was also found to be non-linear. The torsional angle between the two porphyrin rings is even larger than the angle observed experimentally. However, this difference could arise from the fact that the calculations were performed for the molecule in vacuum, while in the single-crystal X-ray structure one has to consider the influence of the crystal packing on the final structure (Fig. 3.21). Several conclusions can be drawn:
- the motion of the two moieties of the dimer can be switched off, or at least partially, by the addition of the handle#3
- the process may be monitored by UV-Vis absorption spectroscopy
- single crystals of the dimer with handle#3 and with DMAP illustrate their structural peculiarities: the DMAP complex is planar, the handle#3 complex is not
- the handle#3 could not be located in the crystal structure, but the bending it imposes on the dimer in the solid state is in good accordance with the calculated structure

Acid/base trigger

Another possibility to stop the motion of the two porphyrin moieties within the dimer may be based on a system bearing two small axial ligands carrying acid/base active entities, together with auxiliary molecules to lock the movement, as shown in Fig. 3.1. We first tried to introduce two nicotinic acids at the axial positions of the two Zn(II) centres of the dimer. Unfortunately, nicotinic acid is poorly soluble in most organic solvents; consequently it cannot be used, as the dimer itself is not soluble in water.
Another way to proceed is to use an aminopyridine as an axial ligand and a diacid as a locker. Thus, 4-aminomethylpyridine (AP) was used as the axial ligand: the pyridyl unit is coordinated to the Zn(II) while the amino groups can further interact, through H-bonds, with an appropriate diacid acting as a locker and thus switch off the flipping of the system. Due to its rigidity and its pKa (2.554 for benzenesulfonic acid 29), 4,4'-biphenyldisulfonic acid (SA) seems perfectly suited to lock the system. However, this acid is insoluble in DCM, so for the titration SA was added as a solution in methanol (Fig. 3.22). The addition of AP to the solution of the dimer in DCM was monitored by UV-Vis spectroscopy. Further addition of SA closes the system and switches off the motion; the formation of the planar conformation is shown by UV-Vis spectroscopy (Fig. 3.22). Moreover, the system can be reopened by the addition of Et3N: indeed, the addition of Et3N leads to the deprotonation of the ammonium and thus to the reopening of the system (Fig. 3.22) and the return to the open initial state. Further addition of SA to the solution leads to the formation of a precipitate. As conclusions for this part, we have shown that:
- the dimer can be closed by the AP/SA system
- only one open/close cycle can be achieved due to the precipitation

We tried to characterize the open and/or closed state of the molecular gear in the solid state by X-ray diffraction. Unfortunately, for the moment, we did not succeed in growing single crystals.

Conclusion of the chapter

In this chapter we reported the formation of molecular gears based on a Zn(II) dimer composed of two porphyrin moieties connected by a butadiyne bridge. In order to control the motion within the dimer, two approaches have been undertaken. The first one consisted in the coordination of a bis-monodentate handle (handle#3), bearing two pyridyl moieties, to the axial positions of the two Zn(II) centres of the dimer.
The second strategy involved electrostatic interactions and hydrogen bonding between ammonium groups located at the axial pyridyls and the 4,4'-biphenyldisulfonate anion. Due to the low solubility of the dimer in non-coordinating solvents, it appeared difficult to follow the processes by NMR techniques. Thus, we monitored the conformational changes by UV-Vis spectroscopy, since the spectrum of the dimer is sensitive to the torsional angle between the two moieties. Both systems studied are reversible; however, for the second system based on ammonium and sulfonate units, only one cycle could be achieved due to precipitation. In order to increase the solubility, one may replace the meso-phenyl groups by 3,5-tBuPhenyl groups or alkyl chains.

The research carried out within the framework of this PhD training period was centred on the design and synthesis of new types of molecular turnstiles, photosensitizers and molecular gears. The first chapter describes new molecular turnstiles based on P(V) porphyrin derivatives. Two families of molecular dynamic systems were prepared and their behaviour in solution was investigated. Both are based on P(V) porphyrin derivatives and differ by the nature of the handle. Turnstile#1 is based on a bidentate handle (handle#1) bearing two coordinating phenolate moieties, while for turnstile#2 the bidentate handle (handle#2) relies on the coordination of non-aromatic alkoxide units at the axial positions of P(V). Turnstiles#2 were found to be photochemically unstable and were not further studied. Concerning the two turnstiles#1, the one based on phosphorus(V) tetraphenylporphyrin was investigated as a model compound in order to test the synthetic pathway and to check the stability of the compounds under acidic conditions. The dynamics of the movement in the one-station turnstile#1, based on P(V) [5-monopyridyl-(10,15,20)-triphenylporphyrin] and handle#1, was studied.
The presence of a pyridyl group in handle#1 allows the movement to be stopped by the addition of either Ag+ or H+. Both processes are reversible: the addition of NEt4Br or NEt3, respectively, allows switching between the closed and open states. Moreover, a new approach to the synthesis of P(V) porphyrins was elaborated: the use of POBr3 to metallate the porphyrin cavity was found to be more efficient than the standard POCl3/PCl5 method. The molecular structures of the phosphorus(V) complexes of 5-monopyridyl-(10,15,20)-triphenylporphyrin bearing hydroxyl as well as oxyphenol axial groups were studied by single-crystal X-ray diffraction. The [P(MPyP)(OH)2]+ complex forms a 1D zig-zag chain in the crystal through hydrogen bonds between adjacent porphyrins. The geometry of the P(V) MPyP complexes is similar to that of the P(V) TPP complexes; the porphyrin core is in all cases not planar but strongly distorted (ruffled). In the future, the formation of similar turnstiles bearing more than two coordinating sites at the meso positions of the porphyrin could in principle allow control of the directionality of the movement, which is the final goal of this project. However, we noticed that the synthesis of such turnstiles might be rather difficult and would require new experimental strategies and conditions. During the investigation of the P(V)-based turnstiles, we noticed the rather low photostability of the porphyrin-based compounds in the presence of oxygen under irradiation. The second chapter of the manuscript describes the photochemical properties of P(V) porphyrins. It was demonstrated that P(V) porphyrins are efficient photosensitizers. The studied compounds (except the complexes with aryloxy axial ligands) display high quantum yields of singlet oxygen generation in chloroform. Lower values have been observed in DMSO and in water. In parallel, higher fluorescence quantum yields were observed in DMSO and water with respect to chloroform.
The photophysical parameters depend on three main factors: the nature of the meso-substituents, the axial ligands and the solvent. However, P(V) porphyrins bearing two aryloxy axial ligands behave differently, probably due to photoinduced electron transfer (PET). This explains the difference in photostability between turnstile#1 and turnstile#2. Flash-photolysis experiments are required to better understand the relaxation processes in the P(V) porphyrin derivatives upon excitation (especially the ISC processes). Some preliminary attempts to study the photocatalytic properties of P(V) porphyrins were carried out.

Materials and instruments

Unless otherwise noted, all chemicals and starting materials were obtained commercially from Acros® or Aldrich® and used without further purification. Pyrrole was purified by chromatography on aluminum oxide. Dry pyridine was obtained by distillation over CaH2. THF was dried and distilled over metallic sodium or by chromatography on aluminum oxide in the drying station. Et3N was dried over molecular sieves. Aluminum Oxide 90 (Merck) and silica (Geduran Silica Gel Si 60, Merck) were used for column chromatography. BioRad Bio-Beads S-X1 and S-X3 were used for gel-permeation chromatography (GPC). Analytical thin-layer chromatography (TLC) was carried out using Merck silica gel 60 plates (precoated sheets, 0.2 mm thick, with fluorescence indicator F254). All reactions were performed under argon unless stated otherwise. Several reactions were carried out under microwave irradiation using a Discover BenchMate microwave apparatus (CEM Corporation); these reactions were performed in glass tubes equipped with a magnetic stirrer. The photoluminescence spectra (λex = 450 nm) at 25 °C were recorded on a Horiba Scientific Fluorolog spectrofluorimeter with a FluoroHub photon counter for lifetime measurements. Fluorescence quantum yields (ΦF) were determined by a comparative method using Eq. (1).
ΦF = ΦF(St) · (F/FSt) · (ASt/A) · (n²/n²St)  (1)

with F and FSt the areas under the fluorescence emission peaks of the sample and the standard, respectively; A and ASt the absorbances of the sample and the standard at the excitation wavelength, respectively; n² and n²St the squared refractive indices of the solvents used for the sample and the standard, respectively. H2TPP in toluene (ΦF = 0.11) 1 was used as the standard. Singlet oxygen quantum yields (ΦΔ) were determined using the fiber-optic setup (Ocean Optics) described in more detail in Chapter II, with H2TPP and H2TSP as references (see Chapter II). DPBF was used as a chemical quencher for singlet oxygen in chloroform and DMSO; SOSG was used as a chemical quencher in distilled water. Eq. (2) was used for the calculations:

ΦΔ = ΦΔ(St) · (R/RSt) · (ISt/I)  (2)

with R and RSt the photobleaching rates of the quencher for the sample and the standard, respectively, and I and ISt the rates of light absorption by the sample and the standard, respectively. Elemental analysis was performed on a Thermo Scientific Flash 2000. Quantum chemical calculations were carried out with the Spartan'14 software (Wavefunction Inc., CA, USA) with the B3LYP(6-31G*) basis set. Single-crystal X-ray diffraction experiments were carried out on a Bruker APEX8 CCD diffractometer equipped with an Oxford Cryosystem liquid N2 device at 173(2) K, using a molybdenum microfocus sealed-tube generator with mirror-monochromated Mo-Kα radiation (λ = 0.71073 Å), operated at 50 kV/600 μA. The structures were solved using SHELX-97 and refined by full-matrix least squares on F² using SHELX-97 with anisotropic thermal parameters for all non-hydrogen atoms. Hydrogen atoms were introduced at calculated positions and not refined (riding model).

Synthetic procedures and characterizations.

H2MPyP was prepared according to a modified literature protocol 2 that leads to the formation of a mixture of porphyrins, including H2TPP and H2DPyP. Pyrrole (4 ml, 57.65 mmol, 4 eq.), 4-pyridinecarboxaldehyde (1.35 ml, 14.33 mmol, 1 eq.) and benzaldehyde (4.40 ml, 43.29 mmol, 3 eq.)
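As an illustration of the comparative method of Eqs. (1) and (2), the short Python sketch below implements both expressions. All numerical inputs (peak areas, absorbances, rates) are hypothetical and serve only to show the arithmetic; the refractive indices of chloroform and toluene and the reference value ΦF = 0.11 for H2TPP in toluene are the only quantities taken from the text.

```python
def fluorescence_qy(F, A, n, F_st, A_st, n_st, qy_st):
    """Comparative fluorescence quantum yield, Eq. (1).

    F, F_st: areas under the emission peaks (sample, standard)
    A, A_st: absorbances at the excitation wavelength
    n, n_st: refractive indices of the two solvents
    qy_st:   known quantum yield of the standard
    """
    return qy_st * (F / F_st) * (A_st / A) * (n ** 2 / n_st ** 2)


def singlet_oxygen_qy(R, I, R_st, I_st, qy_st):
    """Comparative singlet oxygen quantum yield, Eq. (2).

    R, R_st: photobleaching rates of the chemical quencher
    I, I_st: rates of light absorption by sample and standard
    """
    return qy_st * (R / R_st) * (I_st / I)


# Illustrative numbers only (not measured values):
phi_f = fluorescence_qy(F=1.2e6, A=0.05, n=1.4458,        # sample in chloroform
                        F_st=1.0e6, A_st=0.05, n_st=1.4969,  # H2TPP in toluene
                        qy_st=0.11)
print(round(phi_f, 3))  # → 0.123
```

With matched absorbances (A = ASt), Eq. (1) reduces to the ratio of emission areas corrected by the solvent refractive indices, which is why samples and standard are usually measured at the same optical density.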
were added to a round-bottom flask (0.5 L) previously filled with 250 ml of propionic acid, and the mixture was bubbled with argon during 15 minutes. The reaction mixture was refluxed for 2 h, then cooled to room temperature, diluted with ethanol (400 ml) and left in the fridge for 1 night. The purple solid was filtered and washed several times with ethanol until the filtrate was colorless. The resulting crude solid was dried under vacuum and purified by column chromatography on silica gel. H2TPP (1a) was isolated (500 mg, 2.5%) as the first fraction using a cyclohexane-DCM mixture (1:1). H2MPyP (1b) was isolated (440 mg, 5%) as the second fraction using DCM. Increasing the polarity of the eluent (DCM + 1 vol% of MeOH) afforded H2DPyP (1c) (130 mg, 3%). H2TPP: 1H-NMR (CDCl3, 600 MHz) δ, ppm: -2.77 (s, 2H, NH), 7.77 (m, 12H), 8.22 (d, 3J = 6.8 Hz, 8H), 8.84 (s, 8H). The porphyrin 2 was synthesized according to a described procedure. 3 Pyrrole (5 ml, 72.07 mmol, 4 eq.), pentafluorobenzaldehyde (3.53 g, 18.02 mmol, 1 eq.) and benzaldehyde (5.5 ml, 54.05 mmol, 3 eq.) were added to a round-bottom flask (0.5 L) previously filled with 300 ml of propionic acid, and the mixture was bubbled with argon during 15 minutes. The reaction mixture was refluxed under argon during 2 h. After cooling to room temperature, ethanol was added (300 ml) and the mixture was left in the fridge overnight. The purple precipitate was filtered and washed several times with ethanol until the filtrate was colorless. It was then dried under vacuum and purified by column chromatography on silica gel using a toluene-petroleum ether mixture (3:7). The 5th fraction was collected and the solvent was evaporated to afford the target product 2 (270 mg, 2.5%) as a purple solid. 2,6-bis(chloromethyl)pyridine (3). This compound was synthesized following a described procedure. 4 2,6-bis(hydroxymethyl)pyridine (4 g, 28.75 mmol, 1 eq.) was dissolved in dry THF (100 ml).
The mixture was bubbled with argon during 15 min and then SOCl2 (5.5 ml, 71.87 mmol, 2.5 eq.) dissolved in THF (30 ml) was added dropwise during 1 h. The mixture was stirred during 24 h at room temperature. The solvent was then evaporated and the solid was dissolved in dichloromethane. The mixture was washed with 0.1 M NaHCO3 (4 x 300 ml) and water (5 x 600 ml). The organic layer was evaporated and the residue was dried under vacuum to afford the target product 3 in 93% yield (4.91 g). 2-(2-(2-((Tetrahydro-2H-pyran-2-yl)oxy)ethoxy)ethoxy)ethanol (4). This compound was synthesized using a described procedure. 4 Triethylene glycol (20 ml, 146.50 mmol, 3 eq.) and 3,4-dihydro-2H-pyran (DHP) (4.46 ml, 48.83 mmol, 1 eq.) were mixed in a 25 ml round-bottom flask and 7 drops of concentrated HCl were added as a catalyst. The mixture was stirred at room temperature during 24 h. It was then diluted with 300 ml of chloroform and washed with 4 x 700 ml of 0.1 M NaHCO3. The organic layer was dried under vacuum and purified by SiO2 column chromatography using methanol-ethyl acetate (1:9), affording compound 4 in 56% yield (6.40 g) as a colorless oil. This compound was synthesized according to a described procedure. 4 A suspension of NaH (1.09 g (60% dispersion in mineral oil), 27.32 mmol, 2.2 eq.) in dry THF (200 ml) was cooled to 0 °C and compound 4 (6.4 g, 27.32 mmol, 2.2 eq.) dissolved in dry THF (50 ml) was added to the suspension. Then compound 3 (2.2 g, 12.42 mmol, 1 eq.) dissolved in dry THF (50 ml) was added dropwise and the reaction mixture was refluxed during 5 days. The solvent was removed under vacuum. The brown solid was dissolved in 300 ml of chloroform and washed with water (4 x 300 ml). The organic layer was isolated, the solvents were removed under vacuum and the brown oil was purified by Al2O3 column chromatography using ethyl acetate-n-hexane (1:1), affording compound 5 in 58% yield (4.10 g) as a colorless oil.
This compound was synthesized using a described procedure. 4 A solution of 5 (4.10 g, 7.18 mmol, 1 eq.) in methanol (500 ml) with 10 drops of HCl was stirred at room temperature for 7 h. Solid NaHCO3 was added to the mixture until pH 7 was reached. The solvent was evaporated and the solid residue was dissolved in diethyl ether and filtered. Evaporation of the solvent afforded compound 6 in quantitative yield (2.90 g) as a pale yellow oil. 1H-NMR (CDCl3, 600 MHz) δ, ppm: (m, 24H, CH2O), 4.67 (s, 4H, PyCH2), 7.36 (d, 3J = 7.6 Hz, 2H), 7.72 (t, 3J = 7.8 Hz, 1H). This compound was synthesized using a described procedure. 4 A solution of 6 (2.90 g, 7.18 mmol, 1 eq.) in dry THF (200 ml) was bubbled with argon during 30 min. Then triethylamine (3.50 ml, 25.17 mmol, 3.5 eq.) and methanesulfonyl chloride (1.95 ml, 25.17 mmol, 3.5 eq.) were added under argon and the mixture was stirred during 5 h at room temperature. Chloroform (200 ml) was added and the mixture was washed with 0.1 M NaHCO3 (4 x 400 ml) and water (4 x 400 ml). Chloroform was evaporated and the yellow oil thus obtained was dried under vacuum. Compound 7 was obtained in quantitative yield (4.03 g). 3-((Tetrahydro-2H-pyran-2-yl)oxy)phenol (8). This compound was obtained using a described procedure. 5 Resorcinol (5 g, 45.41 mmol, 1 eq.) was dissolved in 3,4-dihydro-2H-pyran (DHP) (4.55 ml, 49.95 mmol, 1.1 eq.). Then AlCl3·6H2O (121 mg, 0.45 mmol, 0.01 eq.) was added and the mixture was stirred during 20 h. The reaction mixture was purified by column chromatography on SiO2 using ethyl acetate-n-hexane (1:9) to give 2.11 g of compound 8 (yield 24%). This compound was synthesized according to a described procedure. 4 A suspension of NaH (0.87 g, 21.73 mmol, 4.4 eq. (60% dispersion in mineral oil)) in 150 ml of dry THF was bubbled with argon for 30 min and cooled down to 0 °C. A degassed solution of 8 (2.11 g, 10.87 mmol, 2.2 eq.) in 75 ml of dry THF was added dropwise and the mixture was stirred at 0 °C for 1.5 h.
Then a degassed solution of 7 (2.76 g, 4.94 mmol, 1 eq.) in 80 ml of THF was added dropwise. The reaction mixture was refluxed under argon during 3 days. After evaporation of the solvent, the brown residue was purified by column chromatography (Al2O3, n-hexane-ethyl acetate, 1:1) to give 2.63 g (70%) of compound 9 as a pale yellow oil. This compound was synthesized using a described procedure. 4 A solution of 9 (2.63 g, 3.48 mmol, 1 eq.) in methanol (250 ml) in the presence of 25 drops of HCl was stirred at room temperature for 6 h. Solid NaHCO3 was added to the solution until pH 7 was reached. After evaporation of methanol, the crude product was purified by column chromatography (SiO2, methanol-ethyl acetate, 1:9) to give pure 10 as a pale yellow oil (1.94 g, 95%). 1H-NMR (CDCl3, 600 MHz) δ, ppm: (m, 16H, CH2O), 3.80 (m, 4H, CH2O), 4.06 (m, 4H, CH2O), 4.70 (s, 4H, PyCH2), 6.41 (dd, 3J = 8.1 Hz, 4J = 2.2 Hz, 2H, CHres), 6.44 (dd, 3J = 8.1 Hz, 4J = 2.2 Hz, 2H, CHres), 6.42 (dd, 4J = 2.3 Hz, 4J = 2.3 Hz, 2H, CHres), 7.07 (dd, 3J = 8.1 Hz, 3J = 8.1 Hz, 2H, CHres), 7.40 (d, 3J = 7.7 Hz, 2H), 7.67 (t, 3J = 7.8 Hz, 1H). 2-(2-(2-(2-((Tetrahydro-2H-pyran-2-yl)oxy)ethoxy)ethoxy)ethoxy)ethanol (11). This compound was synthesized using a described procedure. 6 Tetraethylene glycol (41.4 ml, 239.80 mmol, 3 eq.) and 3,4-dihydro-2H-pyran (DHP) (7.30 ml, 79.93 mmol, 1 eq.) were mixed in a 25 ml round-bottom flask and 10 drops of concentrated HCl were added as a catalyst. The mixture was stirred at room temperature for 20 h, diluted with 120 ml of chloroform and washed with 0.1 M NaHCO3 (4 x 700 ml). The organic layer was dried under vacuum at 60 °C overnight to afford 17 g (yield 73%) of compound 11 as a pale yellow oil. This compound was synthesized using a described procedure. 6 A suspension of NaH (2.44 g, 61.08 mmol, 2.2 eq., 60% dispersion in mineral oil) in dry THF (300 ml) was cooled to 0 °C under argon and compound 11 (17 g, 61.08 mmol, 2.2 eq.)
dissolved in dry THF (100 ml) was added dropwise to the suspension. After 45 min of stirring, compound 3 (4.89 g, 27.76 mmol, 1 eq.) dissolved in dry THF (100 ml) was added dropwise. The reaction mixture was refluxed for 120 h. After evaporation of the solvent under vacuum, the brown solid was dissolved in DCM (400 ml) and washed with water (4 x 500 ml). The organic layer was isolated and DCM was evaporated under vacuum. The brown oil thus obtained was purified by column chromatography (Al2O3, ethyl acetate-petroleum ether, 6:4) to afford 11.82 g (yield 64%) of 12 as a colorless oil. 1H-NMR (CDCl3, 400 MHz) δ, ppm: (m, 12H, CH2THP), (m, 32H, CH2O), 4.62 (m, 2H, CHTHP), 4.66 (s, 4H, PyCH2), 7.37 (d, 3J = 7.7 Hz, 2H), 7.70 (t, 3J = 7.7 Hz, 1H). This compound was synthesized using a described procedure. 6 A solution of 12 (11.32 g, 17.91 mmol) in methanol (1 L) with 10 ml of HCl was stirred at room temperature for 36 h. Solid NaHCO3 was added to the mixture until pH 7 was reached. After evaporation of methanol, the solid residue was dissolved in diethyl ether. The mixture was filtered and the ether was evaporated. Compound 13 was isolated as a colorless oil in 88% yield (7.37 g). A solution of 13 (7.07 g, 14.38 mmol, 1 eq.) in dry THF (300 ml) was bubbled with argon during 30 min. Triethylamine (6.80 ml, 48.90 mmol, 3.4 eq.) and tosyl chloride (8.22 g, 43.15 mmol, 3.0 eq.) were added under argon and the reaction mixture was stirred at room temperature for 5 h. After addition of chloroform (600 ml), the reaction mixture was washed several times with 0.1 M NaHCO3 (4 x 500 ml) and water (4 x 500 ml). The organic layer was isolated and, after evaporation of chloroform, the yellow oil was dried under vacuum, affording compound 14 (2.77 g, 25% yield). Porphyrin 15 was obtained according to a modified literature procedure. 7 The free-base 5,10,15,20-tetraphenylporphyrin H2TPP (1a) (0.1 g, 0.16 mmol, 1 eq.) was dissolved in 18 ml of pyridine under argon and POCl3 (2 ml, 21.96 mmol, 135 eq.)
was added dropwise to the mixture under stirring. A solution of PCl5 (0.1 g, 0.48 mmol, 3 eq.) in 2 ml of pyridine was then added dropwise and the mixture was refluxed under argon during 24 h. After cooling to room temperature and evaporation of pyridine under vacuum, the dark green solid was dissolved in 50 ml of CH2Cl2 and washed with distilled water (3 x 400 ml) to remove residual pyridine. The organic layer was isolated and, after evaporation of DCM, the green solid was purified by column chromatography on alumina using DCM-methanol (95:5) as eluent. The main fraction was isolated and evaporated to dryness before being further purified by chromatography on a Bio-Beads S-X3 column (eluent: chloroform), affording 0.1 g (yield 82%) of pure compound 15 as a green solid. A previously described procedure was applied. 7 The porphyrin [P(TPP)Cl2]+Cl- (15) (0.05 g, 0.07 mmol, 1 eq.) was dissolved in a mixture of distilled water (15 ml) and pyridine (25 ml) and the mixture was refluxed overnight. Water and pyridine were removed under vacuum. The residue was dissolved in DCM (80 ml) and washed with distilled water (3 x 400 ml) to remove residual pyridine. The solvent was evaporated under vacuum and the purple residue was purified by column chromatography on alumina with a mixture of DCM and methanol (from 0% to 5%) as the eluent, affording 0.031 g (yield 65%) of a purple solid. A procedure described for phthalocyanines 8 was modified and adapted for porphyrins. The porphyrin H2TPP (1a) (0.1 g, 0.16 mmol, 1 eq.) was dissolved in pyridine (60 ml) under argon and POBr3 (1.1 g, 4.06 mmol, 25 eq.) previously dissolved in pyridine (20 ml) was added dropwise to the mixture under stirring. The reaction mixture was refluxed during 80 min under argon and then cooled to room temperature.
The green mixture was then poured into 150 ml of CH2Cl2, 2 L of distilled water was added and the mixture was stirred during 2 days at room temperature until full hydrolysis of [P(TPP)(Br)2]+Br- to the dihydroxy complex [P(TPP)(OH)2]+Br- was completed. The organic layer was isolated and diluted with petroleum ether (150 ml). The mixture was poured directly onto a SiO2 chromatography column without evaporation of the solvents. Increasing the polarity of the eluent using a CH2Cl2-MeOH mixture (90:10) gave the crude product. Further purification of the product by Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2) afforded 0.115 g of the pure purple compound 16b in 95% yield. The porphyrin [P(TPP)Cl2]+Cl- (15) (0.04 g, 0.053 mmol, 1 eq.) was dissolved in pyridine (10 ml), then 3-methoxyphenol (17 µl, 0.155 mmol, 3 eq.) previously dissolved in 5 ml of pyridine was added dropwise. The porphyrin [P(TPP)Cl2]+Cl- (15) (0.02 g, 0.027 mmol, 1 eq.) and handle#1 (10) (0.018 g, 0.029 mmol, 1.1 eq.) were placed into a microwave tube and distilled pyridine (3 ml) was added under argon. The mixture was heated to 150 °C at 120 W during 2 h in the microwave oven using a dry-air cooler. The reaction mixture was cooled to room temperature, chloroform (20 ml) was added and the solution (without evaporation of the solvents) was purified by silica gel column chromatography. The 3rd fraction (10% methanol and 90% chloroform) was isolated and evaporated to give a green-brown solid. This fraction was further purified using Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2). 10 mg (30% yield) of pure product 19 was obtained as a brown-green solid. The porphyrin [P(MPyP)Cl2]+Cl- (21) (50 mg, 0.067 mmol, 1 eq.) was dissolved in pyridine (10 ml) and then a solution of 3-methoxyphenol (22 µl, 0.20 mmol, 3 eq.) in pyridine (5 ml) was added dropwise. The reaction mixture was refluxed overnight under argon.
Pyridine was removed under vacuum and the solid residue was purified by column chromatography on silica gel using a methanol-CH2Cl2 mixture as the eluent (from 0% to 15% of MeOH). Further purification by Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2) afforded 30 mg (yield 50%) of compound 23 as a green solid. added dropwise under stirring. The reaction mixture was refluxed for 80 min under argon. It was then diluted with propane-1,3-diol (250 ml) and stirred at room temperature for 6 h. Approximately 80% of the solvents were evaporated under vacuum and the mixture was diluted with DCM (200 ml). The reaction mixture was washed 4 times with 400 ml of distilled water. The organic layer was isolated and poured directly onto an alumina chromatography column without evaporation of DCM. Increasing the methanol ratio in the eluent (DCM-methanol) up to 8% gave a dark red fraction which was passed through a Bio-Beads S-X3 GPC column (eluent: chloroform-methanol 98:2), affording 172 mg (yield 67%) of the pure compound 26 as a purple solid. The porphyrin H2MPyP (1b) (100 mg, 0.162 mmol, 1 eq.) was dissolved in pyridine (50 ml) under argon and then a solution of POBr3 (1.86 g, 6.50 mmol, 40 eq.) in pyridine (20 ml) was added dropwise under stirring. The reaction mixture was refluxed for 1.5 h under argon and then cooled to room temperature. The suspension was diluted with chloroform (200 ml), resorcinol (5 g, 45 mmol, 454 eq.) was added and the mixture was stirred at room temperature for 48 h until the axial ligand exchange was achieved. The organic layer was washed three times with distilled water (3 x 500 ml). The green-brown organic layer was poured onto a silica gel chromatography column without evaporation of chloroform.
Increasing the polarity of the eluent (DCM-methanol) up to 15% of MeOH gave a crude product which was further purified on a Bio-Beads S-X3 GPC column (eluent: chloroform-methanol 98:2), affording 82 mg (yield 55%) of the pure compound 27 as a green-brown solid. Turnstile#1, [P(MPyP)handle#1]+Br- (28b). A 2-necked 250 ml round-bottom flask was filled with argon and charged with the porphyrin. 1H-NMR δ, ppm: (2H, p-phenoxy), 5.92 (td, 3J = 8.2 Hz, 4J = 1.7 Hz, 2H, m-phenoxy), 7.17 (d, 3J = 7.7 Hz, 2H, m-pyridylhandle), 7.30 (t, 3J = 7.8 Hz, 1H, p-pyridylhandle), 7.75 (m, 9H, m- and p-phenyl), 7.82 (m, 6H, o-phenyl), 7.90 (m, 2H, o-pyridyl), 8.95 (m, 2H, m-pyridyl), 9.13 (m, 8H, β-pyrr). [P(MPyP)(OH)2]+Br- (22) (54 mg, 0.07 mmol, 1 eq.) and handle#2 (14) (68 mg, 0.085 mmol, 1.2 eq.) were dissolved in MeCN (80 ml). The mixture was bubbled for 30 min with argon, Cs2CO3 (82 mg, 0.25 mmol, 3.5 eq.) was added and the reaction mixture was stirred at 50 °C for 36 h. Approximately half of the MeCN was evaporated under vacuum and the mixture was diluted with DCM (100 ml). The reaction mixture was poured directly onto a SiO2 chromatography column and eluted with a DCM-methanol mixture. The brown-red fraction (15% of MeOH) was isolated and purified by Bio-Beads S-X1 GPC with chloroform as the eluent, affording 10 mg of compound 29 along with impurities. The (5,15)-dipyridyl-(10,20)-diphenylporphyrin, H2DPyP (1c) (75 mg, 0.12 mmol, 1 eq.) was dissolved in pyridine (25 ml) under argon, solutions of POCl3 (1.8 ml, 19.37 mmol, 160 eq.) and PCl5 (90 mg, 0.43 mmol, 3.5 eq.) in pyridine (2 ml) were added dropwise under argon, and the mixture was then refluxed during 7 days. The mixture was cooled to room temperature and the pyridine was evaporated under vacuum. The green solid was purified by column chromatography on silica gel. A mixture of MeOH and CH2Cl2 was used as the eluent and the green solid was isolated at 8% of methanol.
The residue was purified by Bio-Beads S-X1 GPC with chloroform as the eluent, affording 5 mg (yield 5%) of compound 30. This complex hydrolyzes very rapidly in air and should be kept under argon in the absence of light and moisture. 1H-NMR (300 MHz) δ, ppm: 7.78 (m, 6H), 7.93 (m, 4H), 8.07 (m, 4H), (m, 12H). The (5,15)-dipyridyl-(10,20)-diphenylporphyrin, H2DPyP (1c) (73 mg, 0.118 mmol, 1 eq.) was dissolved in pyridine (50 ml) under argon and then a solution of POBr3 (1.8 g, 6.3 mmol, 53 eq.) in pyridine (10 ml) was added dropwise under stirring. The reaction mixture was refluxed for 1.5 h under argon and then cooled to room temperature. After dissolution in DCM (200 ml), 2 L of distilled water was added to the green mixture, which was stirred at room temperature during 1 day until full hydrolysis of [P(DPyP)(Br)2]+Br- to the dihydroxy complex [P(DPyP)(OH)2]+Br- was achieved and the mixture became purple. The organic layer was then isolated and purified by column chromatography (silica gel, DCM-methanol mixture as the eluent). Increasing the methanol ratio in the eluent up to 25% afforded the crude product. Further purification by Bio-Beads S-X3 GPC (eluent: chloroform-methanol 98:2) afforded 62 mg (yield 69%) of the pure compound 31 as a purple solid. 1H-NMR (CDCl3, 500 MHz) δ, ppm: (m, 6H), 7.89 (d, 3J = 5.4 Hz, 2H), 7.97 (d, 3J = 7.4 Hz, 2H), 8.53 (d, 3J = 4.8 Hz, 2H), 8.72 (dd, 3J = 5.3 Hz, 4JP-H = 1.9 Hz, 4H, β-pyrr.), 8.89 (dd, 3J = 5.3 Hz, 4H). This synthesis was achieved by modification of a reported procedure. 10 The porphyrin [P(DPyP)(OH)2]+Br- (31) (7 mg, 0.009 mmol, 1 eq.) was dissolved in acetonitrile (15 ml) and then 1-iodopropane (3 µl, 0.03 mmol, 3.3 eq.) and Cs2CO3 (7.5 mg, 0.023 mmol, 2.5 eq.) were added. The reaction mixture was stirred at 50 °C during 24 h under argon. After cooling to room temperature, the solvent was evaporated under vacuum. The residue was purified by column chromatography (silica gel, DCM-methanol).
The product was isolated at 20% of methanol in the eluent. It was then passed through Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2), affording 1.5 mg (yield 20%) of compound 32. 5,10,15,20-Tetrapyridylporphyrin, H2TPyP (105 mg, 0.17 mmol, 1 eq.) was dissolved in pyridine (50 ml) under argon, then a solution of POBr3 (4.80 g, 17 mmol, 100 eq.) in pyridine (10 ml) was added dropwise under stirring and the reaction mixture was refluxed for 2.5 h before being cooled to room temperature. After dissolution in DCM (200 ml), 2 L of distilled water was added to the green mixture and the mixture was stirred at room temperature during 1 day until full hydrolysis of [P(TPyP)(Br)2]+Br- to the dihydroxy complex [P(TPyP)(OH)2]+Br- was achieved. The organic layer was isolated and poured directly onto a silica gel chromatography column without evaporation of the solvents. Increasing the methanol ratio in the eluent up to 50% gave the crude product. Further purification by Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2) afforded 16 mg (yield 13%) of the pure compound 33 as a purple solid. This compound is unstable and decomposes even under argon, and thus must be kept in the absence of light in the fridge. 1H-NMR (500 MHz) δ, ppm: 8.17 (m, 8H), 8.97 (m), 9.04 (d, 4JP-H = 1.7 Hz, 8H, β-pyrr). 1H-NMR (CD3OD, 500 MHz) δ, ppm: 7.80 (m, 9H, m- and p-phenyl), 8.04 (m, 6H, o-phenyl), 9.06 (m, 4H, β-pyrr), 9.08 (m, 2H, β-pyrr), 9.14 (m, 2H, β-pyrr). The compound was synthesized following a described procedure. 11,12 To a mixture of pyrrole (1.18 ml, 17 mmol, 2.5 eq.), benzaldehyde (1.04 ml, 10.2 mmol, 1.5 eq.) and (trimethylsilyl)propiolaldehyde (1 ml, 6.8 mmol, 1 eq.) in chloroform (1.5 L), boron trifluoride diethyl etherate (0.14 ml, 1.23 mmol, 0.18 eq.) was added. The reaction mixture was stirred at room temperature for 6 h in the dark. DDQ (1.93 g, 8.5 mmol, 1.25 eq.) was added and the resulting mixture was further stirred for 1 h. The solvent was evaporated to dryness under reduced pressure.
The desired porphyrin was separated from the crude residue by column chromatography (silica gel, eluting with a DCM-cyclohexane mixture 3:7). The 5th green fraction was collected. After recrystallization from DCM and cyclohexane, the product was isolated in 5% yield (0.1 g). The compound was synthesized according to a described procedure. 12,11 A solution of TBAF (0.2 ml, 0.2 mmol, 1.3 eq., 1 M in THF) was slowly added to a solution of porphyrin 35 (100 mg, 0.16 mmol, 1 eq.) in a mixture of dry THF-DCM (4:1). A spatula of CaCl2 was added to quench the reaction after 1 h. The reaction mixture was washed with water (3 x 200 ml), and the organic layer was dried over anhydrous Na2SO4 and concentrated. The solvent was evaporated. The solid was recrystallized from chloroform-methanol, affording 90 mg (quantitative yield) of pure compound 36. 1H-NMR (400 MHz) δ, ppm: (s, 2H, NH), 4.19 (s, 1H, ethynyl), 7.79 (m), 8.20 (m, 6H), 8.79 (s, 4H), 8.92 (d, 3J = 4.3 Hz, 2H), 9.70 (d, 3J = 4.5 Hz, 2H). Zinc dimer (37). The compound was synthesized according to a described procedure. 13 The porphyrin 36 (90 mg, 0.16 mmol, 1 eq.) was dissolved in a chloroform-methanol mixture (150/30 ml). Zinc acetate dihydrate (0.5 g, 2.28 mmol, 14 eq.) was added. The mixture was stirred during 4 h before the solvents were evaporated under reduced pressure. The solid was dissolved in CHCl3 and washed with water; the organic fraction was concentrated, passed through a pad of silica and evaporated. The violet solid was dissolved in 50 ml of toluene. Triethylamine (10 ml) and bis(triphenylphosphine)palladium(II) dichloride (56 mg, 0.08 mmol, 0.5 eq.) were added and the mixture was stirred during 24 h at 50 °C in air. The solvent was removed under reduced pressure and the crude solid was purified by column chromatography (alumina, cyclohexane-toluene 1:1), affording 65 mg (yield 65%) of the pure product 37 as a green solid. The control of molecular motion has been a subject of study for chemists for two decades.
The use of various external stimuli [1][2][3][4][5][6][7][8][9] has been exploited to control motion, notably the formation or breaking of a coordination bond 4, a pH variation 9 or photoisomerization phenomena 8. In recent years, our group has studied the synthesis of molecular turnstiles based notably on Sn(IV) porphyrins [10][11][12][13][14][15] and on strapped porphyrins. 16,17 Non-porphyrinic Pt(II) complexes and purely organic systems have also been studied. [18][19][20][21][22] The goal pursued with the tin porphyrins is depicted schematically in Fig. 1. In such an entity, the motion of the turnstile should be controllable through the introduction of four differentiated stations at the meso positions of the porphyrin macrocycle. It was indeed possible to generate a molecular turnstile using a symmetric porphyrin of A2B2 type [13][14][15]. Unfortunately, lowering the pH led to the destruction of the turnstile owing to the lability of the Sn-O bond under these conditions 9. The first part of this thesis work was devoted to the synthesis of a molecular turnstile based on phosphorus(V) porphyrins, substituting Sn(IV) by P(V). The second chapter of the manuscript addresses the particular photochemical properties of the P(V) porphyrins studied. Indeed, irradiation of the P(V) complexes in the presence of oxygen generates singlet oxygen with nearly quantitative quantum yields under certain conditions, which could allow their use as highly efficient photosensitizers. [23][24][25] We studied the influence of the solvent and of the ligands present at the axial positions of the P(V) on the photophysical characteristics of the complexes, as well as on their efficiency as photosensitizers.
The last part of the manuscript concerns the control of molecular motion in a dimer of meso-substituted Zn(II) porphyrins linked via an acetylenic bridge.26-28 The goal is to control the setting in motion, or conversely the stopping, of the two porphyrin macrocycles by exploiting the basic properties of the ligands present in the apical positions of the Zn(II) ions. The stimulus used here would be chemical: the addition of a "sandwich" molecule interacting simultaneously with the two axial ligands of the Zn(II) ions would stop the rotation, while a pH variation should restart the motion of the two macrocycles. We would thus have a bis-porphyrin system for which both the onset and the arrest of motion are under control.

Chapter I. Molecular turnstiles based on P(V) porphyrins.

The first chapter of the manuscript deals with the synthesis, the characterization and the study of the motion of molecular turnstiles based on P(V) porphyrins.

1) Synthesis

Articles describing the synthesis and characterization of meso-substituted P(V) porphyrins are scarce.29-33 P(V) porphyrins display a particular reactivity that can partly be explained by a significant deformation of the porphyrin macrocycle due to the small ionic radius of P(V). The usual procedure for the metallation of meso-substituted porphyrins with P(V) uses a mixture of POCl3 and PCl5.29 It also appears possible to exchange the ligands present in the axial positions of the metal.30,32,33 We first developed the synthesis of a reference compound based on meso-tetraphenylporphyrin. Access to such a complex first requires the insertion of P(V) into the porphyrin cavity, followed by the coordination of the handle in the axial position of the P(V) (Scheme 1).
The handle used is obtained via a multi-step synthesis developed in the laboratory during the study of tin porphyrin based turnstiles.10 It bears a pyridine group whose function is to block the rotation of the handle in acidic medium or in the presence of metal cations. The phenol groups present at each end of the handle should each form a coordination bond in the axial positions of the P(V). The insertion of P(V) into the porphyrin cavity was carried out using a POCl3/PCl5 mixture in refluxing pyridine.29 The complex [PCl2(TPP)]Cl was thus isolated in 82% yield. The introduction of the handle in the axial position of the phosphorus could only be achieved using microwaves at high temperatures (Scheme 1). The optimal yield obtained is only 30%, and numerous side products are observed during this reaction, arising notably from dephosphorylation of the porphyrin or from destruction of the macrocycle.

Fig. 1 Representation of the structure of kinesin-1.
Fig. 2 Principle of kinesin-1 "walking" along the cell microtubule.
Fig. 4 Translatory motion in a molecular elevator.
Fig. 5 Rotaxane-catenane transformations.
Fig. 10 Molecular gear.
Fig. 11 Molecular brakes: stopping the motion by coordination of mercury.
Fig. 12 Unidirectional molecular oscillating rotor.
Fig. 15 Supramolecular caterpillar.
Fig. 17 Dipolar rotor designed to be attached to gold surfaces.
Fig. 1.1 Schematic representation of a turnstile bearing four static and one mobile stations.
Fig. 1.2 Schematic representation of the open and the close states of the turnstile.
Fig. 1.3 The motion control of the system with three coordination sites (blue: lutidine, yellow: pyridine, red: benzonitrile) by selective binding and protonation.
Moreover, since these processes are achieved in solution, the equilibrium constants involved in the different processes should be high enough to avoid the formation of mixtures of closed and open states. In order to shift the equilibrium, one may use non-coordinating and apolar solvents and/or low temperature.

Fig. 1.4 Schematic representation of the open and the close states of the non-porphyrin based turnstile.
Fig. 1.5 Organometallic turnstiles equipped with a handle bearing a pyridyl unit.

The Pt(II) turnstile A was combined with silver cations. In the presence of AgSbF6, it forms a 1:1 complex (Fig. 1.6). At room temperature, both pyridyl groups of the stator were found to be equivalent according to 1H-NMR investigations. Owing to the presence of two ethynylpyridine moieties on the rotor, at room temperature the turnstile oscillates between two closed states through rotation around the P-Pt-P axis. Cooling the system to -70 °C locks the oscillatory movement. The reversibility of the opening and closing process was studied in solution using 1H- and 31P-NMR. At RT, the addition of Et4NBr to a solution of A-Ag+ caused the precipitation of AgBr, which was filtered off, leaving the turnstile A in its open state. The process can be repeated several times.11

Fig. 1.6 Close and open states of the organometallic turnstile A.

The unsymmetrical turnstile C shows an interesting behaviour in solution. In the absence of an effector, it first forms a closed state by hydrogen bonding between the phenol moiety of the stator and the pyridyl unit of the handle. The presence of Ag+ cation in the solution leads to the second closed state (Fig. 1.7).16

Fig. 1.7 Switching between two states of the organometallic turnstile C.
Fig. 1.9 Open and closed states of the organometallic turnstile D.

In comparison with turnstile A, no oscillation between the two pyridyl units was observed.
Indeed, even at room temperature, two different sets of signals were observed for the two pyridyl units of the stator. It should be noted that in the solid state, in the absence of Pd(II), the turnstile is in its closed state owing to the formation of hydrogen bonds between the pyridyl moiety of the stator and the H atoms of the amide unit of the handle.10

Fig. 1.10 Open and closed states of the organometallic turnstile E and its reversible opening.

At room temperature, the turnstile E forms a stable 1:1 complex with Pd(II). No oscillation between the two benzonitrile moieties was observed. For the reopening of the system, DMAP (para-dimethylaminopyridine) was used: DMAP, binding Pd(II) more strongly than benzonitrile, displaces the metal from the benzonitrile station and thus reopens the turnstile.

Fig. 1.11 Acid-base opening and closing of the organometallic turnstile E.

Distinct luminescence properties have been observed for the open and closed states. Whereas the open state is strongly luminescent, in the closed state the emission is quenched by the presence of Pd(II), allowing an optical readout of the two states. Thus, it was demonstrated that the dynamics of the turnstile can be monitored either by NMR or by luminescence techniques.

Fig. 1.14 Strapped porphyrin turnstiles.
Fig. 1.15 Sn(IV) porphyrin based turnstiles.

The handle, containing a pyridyl moiety, is connected to the porphyrin via the two axial positions of the octahedron around the Sn(IV) center, which acts as a hinge. A similar system, BP, bearing two amide linkers within the handle, has also been reported (Fig. 1.15).6 The molecule AP can be viewed as a molecular gate: in its open state, the handle is free to rotate around the porphyrin.

Fig. 1.17 Portions of the ROESY NMR map for the AP closed state.

The addition of an excess of silver ion in solution did not lead to further changes. The stability constant was calculated to be logK = 3.75 in CH3CN.
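The stability constant quoted above can be put on a linear scale. For the assumed 1:1 binding equilibrium between the turnstile (T) and Ag+ (the 1:1 stoichiometry is confirmed later by ESI mass spectrometry), logK = 3.75 corresponds to:

```latex
% 1:1 binding equilibrium of the turnstile (T) with Ag+
\mathrm{T} + \mathrm{Ag}^{+} \rightleftharpoons \mathrm{T{\cdot}Ag}^{+},
\qquad
K = \frac{[\mathrm{T{\cdot}Ag}^{+}]}{[\mathrm{T}][\mathrm{Ag}^{+}]}
  = 10^{3.75} \approx 5.6 \times 10^{3}\ \mathrm{M^{-1}}
```

At the NMR concentrations used for the titrations (c of the order of 10-3 M), a constant of this magnitude implies that a modest excess of Ag+ drives the system essentially fully to its closed state.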
The addition of Et4NBr leads to the reopening of the system and to the precipitation of AgBr (Fig. 1.16). Several close-open cycles of the system were performed without any decomposition. Locking of the turnstile at a single station was reached at -70 °C. The close-open processes were again achieved by precipitation of AgBr from the solution.

Fig. 1.18 Several examples of A2B2 Sn(IV) based turnstiles.
Fig. 1.19 Attempt to switch the turnstile from the pyridyl to the benzonitrile station.
Fig. 1.21 Standard method for the insertion of the phosphorus atom into the porphyrin cavity.
Fig. 1.22 1H-NMR spectrum of the dichloro complex (300 MHz, CDCl3, 25 °C).
Fig. 1.23 The phosphorus insertion method developed in this work.

The 1H-NMR spectrum of [P(TPP)(OPhOMe)2]+ (Fig. 1.26), as expected, contains the two signals corresponding to protons in the ortho positions of the phenoxy axial ligands (e and f), strongly upfield shifted (Δδ ~ 5 ppm) when compared to free 3-methoxyphenol. The 8 β-pyrrolic protons appear as a doublet due to their coupling with the phosphorus located in the cavity (4JP-H = 3.4 Hz).

Fig. 1.26 1H-NMR (CDCl3, 500 MHz, 25 °C) spectrum of [P(TPP)(OPhOMe)2]+ and assignment of the signals; proton attributions are given in Fig. 1.24.
Fig. 1.27 Side product which can be formed in the presence of an excess of the handle.

The main differences between the reactions of methoxyphenol with Sn(IV) and P(V) porphyrins concern the mechanisms of the axial ligand substitution. The one concerning Sn porphyrins33 is represented in Fig. 1.28. In the case of Sn(IV), the reaction proceeds smoothly due to the weakness of the Sn-O bond.

Fig. 1.30 Synthetic pathway for [P(TPP)(OEt)2]+.

The complex was characterized by 31P-NMR spectroscopy: the phosphorus signal was observed at -179 ppm. The 1H-NMR spectrum is also informative (Fig. 1.31).
Two multiplets corresponding to the resonance of the 20 phenyl protons and a doublet corresponding to the 8 β-pyrrolic protons "a" (4JP-H = 2.8 Hz) are observed. The axial ligands are located in the shielding region of the porphyrin ring and thus appear strongly upfield shifted: for EtOH, Δδ ~ 3 ppm for the CH3 protons (f) and Δδ ~ 5.6 ppm for the CH2 protons (e). Coupling with 31P was also observed for both signals: the signal corresponding to the 6 protons f appears as a triplet of doublets (td, 3J = 7.3 Hz, 4JP-H = 2.1 Hz), while the signal corresponding to the 4 protons e appears as a doublet of quadruplets (dq, 3JP-H = 14.0 Hz, 3J = 7.0 Hz).

Fig. 1.31 1H-NMR (CDCl3, 500 MHz, 25 °C) spectrum of [P(TPP)(OEt)2]+ and assignment of the signals; proton attributions are given in Fig. 1.30.
Fig. 1.32 X-ray structure of [P(TPP)(OEt)2]+ (hydrogen atoms are omitted for clarity).

The standard method was first applied to the reaction of [P(TPP)Cl2]+ with the handle#1 (Fig. 1.34) in a 1:1 ratio. Unfortunately, we observed only the formation of a mixture of [P(TPP)(OH)2]+, free-base porphyrin and high-molecular mass compounds upon overnight refluxing of [P(TPP)Cl2]+ with the handle#1. Attempts to optimize the conditions failed.

Fig. 1.34 Application of the standard method to obtain the model turnstile#1.

A mixture of [P(TPP)Cl2]+ and handle#1 in dry pyridine heated to 150 °C at 120 W for 2 h leads to the formation of the desired turnstile [P(TPP)handle#1]+. However, under these conditions we also observed the formation of [P(TPP)(OH)2]+, free-base porphyrin, high-molecular compounds, some unidentified substances and [P(TPP)monohandle#1]+ (Fig. 1.35). To separate the mixture, a tedious step-by-step purification was applied: standard adsorption chromatography and several GPC columns. The main issue was to separate the target model turnstile#1 from the similar compound with the handle acting as a monodentate ligand, [P(TPP)monohandle#1]+ (Fig.
1.35).

Fig. 1.35 [P(TPP)monohandle#1]+ and model turnstile#1.

These compounds have approximately the same polarity and size. The size of the partially opened complex is a little bigger than that of the desired turnstile [P(TPP)handle#1]+, and this difference, even small, allows their separation by several GPC runs. After optimization of the conditions, the desired model turnstile#1 could be obtained in 30% yield. 31P-NMR shows a signal at -195 ppm in CDCl3 and in CD3OD, indicating the stability of the axial bonds in these (not dried) solvents. Concerning the 1H-NMR investigation, spectra of [P(TPP)handle#1]+ obtained in CDCl3 and in CD3OD are given in Fig. 1.36 together with the spectrum of the "free" handle#1 in CDCl3. Broadening of all signals is observed in CDCl3.

Fig. 1.36 1H NMR spectra (400 MHz, 25 °C) of the model turnstile#1 in CDCl3 (A), CD3OD (B) and of the handle#1 in CDCl3 (C). Proton designations are presented in Fig. 1.35.

- It is possible to obtain the model turnstile based on phosphorus(V) porphyrin;
- The model turnstile#1 displays a different solution behaviour when compared to the similar compound [P(TPP)(OPhOMe)2]+;
- The introduction of similar axial ligands (methoxyphenol and handle#1) requires different reaction conditions, probably for steric reasons, since the handle#1 is rather bulky.

Fig. 1.38 Standard method of insertion of phosphorus into H2DPyP.
Fig. 1.39 Insertion of phosphorus into H2MPyP using POBr3.
Fig. 1.40 Exchange of the axial ligands in the MPyP phosphorus(V) system.

As already found for H2TPP, [P(MPyP)(OH)2]+ cannot be used as starting material, and the hydroxy ligands have to be replaced by more labile ones such as Cl-. The formation of [P(MPyP)Cl2]+ is carried out with an excess of SOCl2, a strong chlorinating agent (Fig. 1.40). The reaction is carried out at room temperature and proceeds with 100% conversion. The excess of SOCl2 is removed under reduced pressure.
The product does not require further purification; it only needs to be filtered through a Bio-Beads S-X1 column to remove traces of SOCl2.

Fig. 1.41 Portions of the 1H-NMR spectra of [P(MPyP)(OPhOMe)2]+ (1), [P(MPyP)(OH)2]+ (2) and [P(MPyP)Cl2]+ (3) (CDCl3, 25 °C).

The complexes with the mono-pyridyl containing porphyrin were studied by single-crystal X-ray diffraction. Single crystals of [P(MPyP)(OH)2]+ were obtained at 25 °C upon vapor diffusion of n-pentane into a chloroform solution of the complex. The latter crystallizes (triclinic, P-1 space group) with 2 chloroform molecules. The structure of [P(MPyP)(OH)2]+Br- is presented in Fig. 1.42; selected structural parameters are presented in Table 1.2. The phosphorus atom, adopting a distorted octahedral environment, is coordinated by four nitrogen atoms of the porphyrin core (P-N distances in the 1.859-1.878 Å range) and two oxygen atoms of the axial hydroxyl ligands (P-O distances of 1.615 and 1.638 Å). The meso-substituents are tilted with respect to the porphyrin plane, with CCC dihedral angles of 118.43° for the pyridine moiety and from 120.75° to 121.69° for the phenyl groups.

Fig. 1.42 X-ray structure of [P(MPyP)(OH)2]+ (hydrogen atoms and bromine counterions are omitted for clarity).

In the crystal packing, the nitrogen atom of the pyridine moiety is engaged in specific intermolecular interactions: it forms a hydrogen bond with the OH axial ligand of the adjacent porphyrin molecule (Fig. 1.43), leading to the formation of a 1D zig-zag H-bonded network. This interaction helps to distinguish the pyridine moiety from the three other phenyl substituents. The second axial OH ligand forms a hydrogen bond with a chloroform molecule.

Fig. 1.43 Fragments of the crystal packing of [P(MPyP)(OH)2]+Br- showing the interconnection of adjacent porphyrins into a 1D zig-zag H-bonded network (phenyl meso-substituents, hydrogen atoms, solvent molecules and bromide counterion are omitted for clarity).
The red-brown single crystals of [P(MPyP)(OPhOH)2]+ were obtained upon slow diffusion of n-hexane into a chloroform solution of the complex in the presence of traces of methanol and toluene. The complex crystallizes (triclinic, P-1 space group) with one CHCl3 molecule. The molecular structure of the complex is shown in Fig. 1.44; selected crystal parameters are presented in Table 1.3. Similarly to [P(MPyP)(OH)2]+Br-, the phosphorus atom is hexacoordinated, surrounded by four nitrogen atoms of the porphyrin core (P-N distances in the 1.82-1.84 Å range) and two oxygen atoms of the axial ligands (P-O distances of 1.658 and 1.664 Å). The meso-substituents are tilted with respect to the porphyrin plane, with CCC dihedral angles in the 116.7° to 119.8° range. The pyridyl unit is disordered over all four meso-substituents of the macrocycle and cannot be identified. No hydrogen bonds have been observed.

Fig. 1.44 X-ray structure of [P(MPyP)(OPhOH)2]+ (hydrogen atoms, solvent molecules and counterion are omitted for clarity).
Table 1.3 Selected X-ray data for [P(MPyP)(OPhOH)2]+.

Under the microwave conditions used for the formation of [P(TPP)handle#1]+ (model turnstile#1) (120 W, 150 °C, 2 h), only the demetallated products were obtained. The synthesis was successful only once, when the reaction was performed at 140 °C and 120 W for 45 min. Unfortunately, the procedure was not reproducible.

Fig. 1.45 The suggested pathway to obtain the turnstile#1 (approach 1).
Fig. 1.47 O-alkylation of [P(MPyP)(OH)2]+.
Fig. 1.49 Synthetic pathway to obtain the handle#2.

Compared to handle#1, several changes were made. Instead of triethylene glycol, tetraethylene glycol was used as spacer. The handle#2 does not possess the aromatic part; the difference in length may thus be compensated by the addition of one glycol unit within the chain. All synthetic steps are similar to those used for the preparation of handle#1, except the final tosylation reaction.
The reaction of compound 13 with tosyl chloride in the presence of triethylamine was carried out in THF at room temperature upon stirring the mixture for 5 h. After purification, i.e. washing with NaHCO3 to remove the excess of tosyl chloride, the handle#2 was obtained in a rather low yield (25%) for the last step. The overall yield for the preparation of the handle#2, bearing two active leaving groups, was ca 10%.

Fig. 1.50 [P(TPP)monohandle#2]+ complex and model turnstile#2 [P(TPP)handle#2]+.

The model turnstile#2 was characterized by 1H- and 31P-NMR spectroscopy (Fig. 1.51).

Fig. 1.51 1H-NMR spectra of handle#2 (1) (CDCl3, 600 MHz, 25 °C) and model turnstile#2 (2) (MeOD, 400 MHz, 25 °C). Proton assignment is presented in Fig. 1.50.

A strong signal corresponding to impurities is observed at 3.5 ppm. The latter cannot correspond to the handle connected to the porphyrin, since protons e-l give individual signals. Most likely, it corresponds to glycol moieties of the decomposed handle. Several other unidentified peaks were also detected in the spectrum. Additional characterization of the compound was made by mass spectrometry (MALDI-TOF) (Fig. 1.52).

Fig. 1.53 Synthesis of the turnstile#2 [P(MPyP)handle#2]+.

The turnstile#2 was characterized by 1H- and 31P-NMR spectroscopy (Fig. 1.54). The 31P NMR signal appeared at -181 ppm, as for the model turnstile#2. The 1H-NMR spectrum is also similar to the one recorded for the model turnstile#2. The signal for the proton closest to the phosphorus atom appears as a doublet of doublets (3JP-H = 10 Hz, 3J = 5.7 Hz) with the same chemical shift. The shapes and positions of the signals of the other glycol protons k-r are similar to those of the model turnstile#2. The signals corresponding to the β-pyrrolic protons a-d appeared as a multiplet (δ = 9.12-9.19 ppm), while resonances for the pyridyl protons e and f appeared as two signals in the aromatic region of the spectrum.
The signals of impurities appeared in the same ppm range as for the model turnstile#2.

Fig. 1.54 1H-NMR spectrum of turnstile#2 (MeOD, 400 MHz, 25 °C). Proton assignment is presented in Fig. 1.53.

Additional characterization by mass spectrometry (HR ESI) was also carried out (Fig. 1.55).

Fig. 1.55 HR ESI spectrum of the turnstile#2.
Fig. 1.56 31P-NMR spectrum of the mixture of [P(MPyP)Br2]+ and [P(MPyP)(OH)2]+ (162 MHz, CDCl3, 25 °C).

The synthesis proceeds in two steps (Fig. 1.57). The first step consists in the bromination reaction for 3 h, followed by storage under vacuum overnight to remove the excess of thionyl bromide. The handle#1 was also kept under reduced pressure to remove all traces of moisture. In order to avoid the presence of moisture, pyridine freshly distilled over CaH2 was added through a cannula to both flasks containing [P(MPyP)Br2]+ and the handle#1.

Fig. 1.57 The novel synthetic pathway used to synthesize the turnstile#1.

During the second step, the two solutions were mixed and stirred at 60 °C for 24 h. At this temperature, the demetallation and decomposition of the P(V) porphyrin are minimized. However, the handle#1 connects to the phosphorus center only from one side (Fig. 1.58). Further refluxing of the mixture for 2 h is necessary to fully connect the handle#1 to the phosphorus atom. Unfortunately, the reflux activates the decomposition processes, thus leading to the formation of the turnstile#1 in only 10% yield. It should be mentioned that direct refluxing (without the first step) leads only to the decomposition of the compound.

Fig. 1.58 Scheme for the synthesis of the turnstile#1.
Fig. 1.59 Comparison of the 31P-NMR spectra of turnstile#1 and its non-cyclic analog (162 MHz, MeOD, 25 °C).

The phosphorus signals appear at -188 ppm and -195 ppm for the open compound [P(MPyP)monohandle#1]+ and for the turnstile#1, respectively (Fig. 1.59).
It was previously described that the signal for [P(MPyP)(OEt)2]+ appears at -179 ppm; the signal for [P(MPyP)monohandle#1]+ is therefore located between those of the ethoxy complex and of the turnstile#1. The ethoxy axial ligand can also be easily identified by 1H-NMR through the resonances of protons x and y (Fig. 1.60).

Fig. 1.60 Comparison of the 1H-NMR spectra of handle#1 (A, CDCl3), turnstile#1 (B, MeOD), and [P(MPyP)monohandle#1]+ (C, MeOD), 400 MHz, 25 °C. Proton assignment is presented in Fig. 1.58.

Signals of the porphyrin protons of both complexes display similar chemical shifts and shapes. The turnstile#1 1H-NMR spectrum is similar to that of the model turnstile; the differences observed are related to the structures of the porphyrins TPP and MPyP. One feature should be noted: the signals corresponding to protons j and k of the turnstile#1 display the same chemical shifts and shape as in the model turnstile. The absence of signals in the upfield region of the 1H-NMR spectrum confirms the absence of aliphatic chains connected to the phosphorus atom.

In summary, the following conclusions may be drawn:
- The turnstile#1 with one coordination site was synthesized;
- The compound is stable and was isolated and fully characterized;
- A reaction mechanism for its synthesis is proposed;
- Probably due to the distortion of the porphyrin ring, the handle#1 preferentially undergoes a single connection to the phosphorus atom.

Fig. 1.61 The closing of turnstile#1 induced by the silver cation.

The locking process was investigated by NMR (c = 2.6·10-3 M) upon addition of a solution of silver triflate in deuterated methanol to a solution of turnstile#1. Two regions of the 1H-NMR spectra (aromatic and aliphatic) were analyzed. Owing to the binding of the silver cation, one would expect shifts of the protons of the pyridyl moieties and a change in the symmetry of the turnstile. Fig.
1.62 1H-NMR (MeOD, 400 MHz, 25 °C) spectrum (7.1-9.5 ppm region) of the turnstile#1 in the presence of different equivalents of AgOTf. Proton assignment is presented in Fig. 1.61.

Signals of the porphyrin pyridyl protons e and f shift downfield by 0.30 ppm and 0.43 ppm, respectively. Signals of the handle pyridyl protons y and w follow the same trend (Δδ = 0.78 ppm for both). A splitting of signals a-d (β-pyrrolic protons) into two groups is also observed: signals of protons a and b display close chemical shifts and appear as a multiplet, while signals of protons c and d appear as a doublet of doublets (3J = 5.3 Hz, 4JP-H = 3.1 Hz). These changes are due to the binding of the silver cation.

Fig. 1.63 1H-NMR (MeOD, 400 MHz, 25 °C) spectrum (3.3-5.0 ppm region) of turnstile#1 in the presence of different equivalents of AgOTf. Proton assignment is presented in Fig. 1.61.

4.1.2 Analysis by mass spectrometry

The closed state of the turnstile#1, obtained in the presence of 3 equivalents of AgOTf, was analyzed by mass spectrometry. The HR electrospray ionization technique was used since it is one of the softest methods. Two main peaks were observed (Fig. 1.64). The molecular-ion peak (m/z 615.2309, calc. 615.2323) corresponds to the monoprotonated turnstile#1 [M+H]2+. The lower-intensity peak at m/z 668.1781 (calc. mass 668.1809) corresponds to the closed state of the turnstile, [M+Ag]2+, demonstrating the 1:1 stoichiometry of the complex.

Fig. 1.64 HR ESI spectrum of the turnstile#1 closed by the Ag+ cation.
Fig. 1.65 NOESY spectrum (CD3CN, 500 MHz, 25 °C) of the open state of the turnstile#1.

Although methanol-d4 always contains residual water, no overlap with the turnstile#1 signals was observed in the 1D 1H-NMR spectra. However, its presence leads to noisy 2D NOESY correlation maps; the correlations observed are therefore rather weak and hidden by the high-intensity water signal. Deuterated acetonitrile may overcome this shortcoming.
Thus, the investigation was performed in acetonitrile-d3. First, the turnstile in the absence of silver ions (open state) was analyzed (Fig. 1.65). Several weak correlations were observed. The correlations between the porphyrin protons are marked with black dotted lines: the β-pyrrolic protons show cross-peaks with both the phenyl and pyridyl protons, and interactions between protons of the pyridyl and phenyl moieties are also observed. Cross-peaks between protons v and u, and between o and m, indicate through-space interactions in the handle. No specific correlations between the handle and the porphyrin were detected.

The closed state of the turnstile (c = 2.6·10-3 M) was investigated by the same technique, in the presence of 3 equivalents of AgOTf. Correlations between the handle protons and the porphyrin protons similar to those of the open state were detected. In addition, other significant correlations have been observed (Fig. 1.66).

Fig. 1.66 NOESY spectrum (CD3CN, 500 MHz, 25 °C) of the closed form of the turnstile#1 in the presence of 3 equivalents of AgOTf.

Owing to the binding of the Ag+ cation, the rotation of the handle around the stator is blocked. The locking process imposes that the pyridyl group be located between the two glycol chains of the handle; the rather short distance between them allows the detection of 1H-1H through-space correlations. Indeed, the m-pyridyl protons f, being close to the handle protons s, t and r, give rise to observable NOESY correlations, and the o-pyridyl protons e show interactions with the handle proton o. The observed correlations thus clearly demonstrate the closing of the turnstile by the silver cation.

Fig. 1.67 Selected proton chemical shifts for 4.5 "open-close" cycles.
Fig. 1.68 Proposed behaviour of the turnstile#1 in the presence of triflic acid.

The experiment was carried out in methanol-d4 at room temperature (c = 2.6·10-3 M). HOTf was gradually added.
Changes in the 1H-NMR spectrum were observed in the presence of 1 eq. of the acid. Two regions of the spectrum were analyzed (described separately below). The aromatic region (7.1-9.4 ppm) is presented in Fig. 1.69. In general, the behaviour is similar to the one observed when Ag+ was used as the locking agent. However, 100% closing was only achieved in the presence of 8 equivalents of triflic acid. Further addition of HOTf did not lead to any change in the spectrum; we stopped the process after the addition of 30 equivalents of the acid. Probably, by increasing the number of equivalents of acid, the second protonation could be reached, although such a large excess of acid could also lead to the decomposition of the turnstile.

Fig. 1.69 1H-NMR (MeOD, 400 MHz, 25 °C) spectrum (7.1-9.5 ppm region) of the turnstile#1 in the presence of different equivalents of HOTf. Proton assignment is presented in Fig. 1.68.
Fig. 1.70 1H-NMR (MeOD, 400 MHz, 25 °C) spectrum (3.3-5.0 ppm region) of the turnstile#1 in the presence of different equivalents of HOTf. Proton assignment is presented in Fig. 1.68.

The NOESY spectrum was recorded in the presence of 3 equivalents of HOTf (c = 9.8·10-3 M) in acetonitrile-d3. In general, the correlations observed are similar to those previously observed in the case of the silver cation (Fig. 1.71).

Fig. 1.71 NOESY spectrum (CD3CN, 400 MHz, 25 °C) of turnstile#1 in its closed state.

Furthermore, HR ESI investigations performed on the closed state of the turnstile showed, as expected, a peak corresponding to the protonated turnstile.

Fig. 1.72 Selected proton chemical shifts over 4.5 "open-close" cycles.
Fig. 1.74 Portions of the 1H-NMR spectra for the competition experiment.
Fig. 2.1 Gouterman 4-orbital model.

Hence, a strong Soret band corresponds to both Bx and By transitions. Since it involves the LUMO+1 level, it may be called the S0-S2 transition.
Due to the small energy difference, the Soret band usually appears as one strong peak. The lower-energy Qx and Qy transitions correspond to the Q bands, resulting from the electronic transition to the first excited state (S0-S1). A standard free-base porphyrin such as H2TPP displays D2h symmetry; the Qx and Qy transitions thus appear as four different Q bands3 (Fig. 2.2). Two of them correspond to the 0-0 Qx and Qy transitions; two supplementary bands appear in the case of 0-1 processes. Metallated, deprotonated and diprotonated porphyrins possess D4h symmetry; hence, generally only two Q bands are observed. Indeed, insertion of a metal ion (Zn(II) for example) into the porphyrin cavity leads to only two Q bands in the spectrum (Fig. 2.3).

Fig. 2.2 Typical UV-vis spectrum of a free-base porphyrin.
Fig. 2.3 Typical UV-vis spectrum of a metallated porphyrin.
Fig. 2.4 Jablonski diagram of energy transitions in porphyrins and oxygen: A - absorption, F - fluorescence, P - phosphorescence.

Under normal conditions, oxygen exists in its triplet state. Thus, if the porphyrin triplet state is located close to the oxygen triplet state, energy transfer occurs. The efficiency of this process19 […]

Fig. 2.5 Representation of singlet and triplet oxygen states.

Singlet oxygen can be generated either by chemical or by photochemical processes. The first method is based on the degradation of hydrogen trioxide20 or on the reaction of hydrogen peroxide with sodium hypochlorite;21 we will not focus on that pathway. The second possibility is based on the use of photosensitizers. This method is simple and controllable; it requires only the oxygen molecule, photons of appropriate wavelength, and a photosensitizer capable of exciting the oxygen molecule from its triplet to its singlet state.
In order to promote the energy transfer from T1 to 3O2 leading to the excited 1Δg state, photosensitizer molecules must possess high quantum yields, and the lifetime of the T1 state (ms) must be longer than that of the S1 state (ns). In fact, there are two competitive mechanisms of interaction between the photosensitizer molecule and oxygen. The type I mechanism involves hydrogen atom abstraction or electron transfer between the excited sensitizer and a substrate, yielding free radicals. These radicals can react with O2, leading to the formation of the "superoxide" anion radical O2•-; it should be noted that superoxide is a very reactive species. The type II mechanism proceeds only through energy transfer with formation of the 1Δg state. This pathway is presented in the Jablonski diagram (Fig. 2.4).

Fig. 2.6 The reaction between DPBF and singlet oxygen.

ADMA (Fig. 2.8) is the most commonly used water-soluble trap; another water-soluble trap (…-BPAA) was also described.18

Fig. 2.8 Water-soluble trap ADMA reacts with singlet oxygen.
Fig. 2.9 Formation of the SOSG endoperoxide in the presence of singlet oxygen.

The SOSG molecule contains both a fluorescein and an anthracene moiety. Under irradiation, a photoinduced electron transfer (PET) occurs between the anthracene and the fluorescein, and a weak fluorescence band at ~540 nm is observed (Fig. 2.10, red dashed line).

Fig. 2.10 Normalized absorption (solid lines) and emission spectra (dashed lines, λex = 465 nm) of SOSG (red) and SOSG-EP (blue).13

Two samples were prepared (Fig. 2.11). The first was irradiated for 4.5 h under a laboratory UV lamp (BioBlock Scientific VL-4C, 254 nm) in an open vessel. The second one was bubbled with argon for 15 min and then irradiated for 4.5 h in a closed vessel. Both samples were monitored by UV-Vis spectroscopy during the irradiation. The solution of the first sample was almost bleached (Fig. 2.11) after irradiation for 4.5 h, whereas for the second sample no change of the UV-vis spectrum was observed. Fig.
2.11 Photobleaching of [P(MPyP)(OEt)2] + chloroform solution under UV irradiation in air. Fig. 2 . 2 Fig. 2.12 Experimental setup#1 for singlet oxygen detection. complexes were studied: [P(TPP)Cl2] + , [P(TPP)(OH)2] + , [P(TPP)(OEt)2] + , [P(TPP)(OPhOMe)2] + ; [P(MPyP)Cl2] + , [P(MPyP)(OH)2] + , [P(MPyP)(OEt)2] + , [P(MPyP)(OPhOMe)2] + . All compounds were passed through Bio-Beads S-X3 column to remove all impurities formed during the storage.For each compound, experiments were carried out at least twice. Fig. 2 . 2 Fig. 2.13 UV-Vis absorption spectra of [P(MPyP)(OH)2] + (blue), [P(MPyP)(OEt)2] + (orange), [P(MPyP)Cl2] + (grey) and [P(MPyP)(OPhOMe)2] + (yellow) in chloroform at room temperature. 2.14b), except for [P(TPP)Cl2] + for which two different peaks (417 nm and 440 nm) were observed (Fig. 2.14c). Fig. 2 . 2 Fig. 2.14 UV-Vis absorption spectra of a) DPBF solution in chloroform; b) P(MPyP)(OEt)2] + (blue), mixed with DPBF (red) and c) [P(TPP)Cl2] + (black), mixed with DPBF (green) solutions at 30 o C in chloroform. The irradiation of the sample at 547 nm induces the decrease of the intensity of the "overlapped peak" (Fig. 2.15). After complete bleaching of DPBF, the spectrum corresponds to the spectrum of porphyrin. Thus, for further calculation we focused on the "overlapped peak" located at 425-430 nm corresponding to the porphyrin Soret band and the DPBF bands. The plot of the intensity of the peak against time displays a linear dependence of the DPBF bleaching (Fig. 2.15). The appearance of an invariable behaviour after 2000 s indicates the completion of DPBF bleaching and the presence of the porphyrin in solution. Fig. 2 . 2 Fig. 2.15 Plot of DPBF + [P(MPyP(OEt)2] + peak (428 nm) bleaching vs time in chloroform at 30 o C (left) and changes of the UV-Vis spectra with time (right). The P(V)TPP derivatives leads to an increase in the quantum yield of the singlet oxygen generation. 
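The bleaching rate R obtained from the linear region of such a trace is what enters the relative quantum-yield calculation described in the Experimental part. A minimal sketch of how such a rate could be extracted, using the standard library only; the data points are synthetic illustrative values, not measurements from this work:

```python
def linear_slope(times, values):
    """Least-squares slope of a value-vs-time trace."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Synthetic absorbance of the 428 nm "overlapped peak" during irradiation,
# decreasing linearly until DPBF is consumed (cf. the plateau after ~2000 s).
times_s = [0, 300, 600, 900, 1200, 1500, 1800]
absorbance = [1.20, 1.05, 0.90, 0.75, 0.60, 0.45, 0.30]

bleach_rate = -linear_slope(times_s, absorbance)  # positive rate, a.u./s
print(f"R = {bleach_rate:.2e} a.u./s")
```

Only the linear region (before DPBF exhaustion) should be fed to such a fit; including the plateau would bias the slope.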
The following trend was determined: [P(TPP)(OPhOMe)2]+ < [P(TPP)(OH)2]+ < [P(TPP)(OEt)2]+ < [P(TPP)Cl2]+. The same sequence holds for the P(V)MPyP series, with values ca. 10-20% higher. Such high singlet-oxygen quantum yields are consistent with the very weak fluorescence observed for almost all the investigated complexes. On the other hand, the complexes with aryloxy axial ligands ([P(TPP)(OPhOMe)2]+ and [P(MPyP)(OPhOMe)2]+) display extremely low quantum yields compared to the other complexes (an explanation is given below).

Fig. 2.16 (a) Comparison of normalized absorption (red) and emission (blue) spectra of the [P(MPyP)(OEt)2]+ complex in chloroform (λex = 550 nm); (b) fluorescence spectra (λex = 550 nm) of [P(MPyP)(OH)2]+ (blue), [P(MPyP)(OEt)2]+ (orange), [P(MPyP)Cl2]+ (grey) and [P(MPyP)(OPhOMe)2]+ (yellow) complexes in chloroform at room temperature.

Quantum yields of fluorescence were calculated using equation 2.

Fig. 2.17 UV-Vis absorption spectra of [P(MPyP)(OH)2]+ (blue), [P(MPyP)(OEt)2]+ (orange), [P(MPyP)Cl2]+ (grey) and [P(MPyP)(OPhOMe)2]+ (yellow) complexes in DMSO at room temperature.

Fig. 2.18 Fluorescence spectra (λex = 550 nm) of [P(MPyP)(OH)2]+ (blue), [P(MPyP)(OEt)2]+ (orange), [P(MPyP)Cl2]+ (grey) and [P(MPyP)(OPhOMe)2]+ (yellow) complexes in DMSO at room temperature.

Fig. 2.20 UV-Vis absorption spectra of [P(MPyP)(OH)2]+ (blue), [P(MPyP)(OEt)2]+ (orange) and [P(MPyP)(OPhOMe)2]+ (grey) complexes in distilled water at room temperature.

Since the [P(TPP)Cl2]+ and [P(MPyP)Cl2]+ complexes are easily hydrolyzed in water, they were not studied; thus only six P(V) porphyrin derivatives were investigated.

Fig. 2.21 Plot of the SOSG-EP emission increase at 530 nm vs time.

Fig. 2.22 Fluorescence spectra (λex = 550 nm) of [P(MPyP)(OH)2]+ (blue), [P(MPyP)(OEt)2]+ (orange) and [P(MPyP)(OPhOMe)2]+ (grey) complexes in water at room temperature.

Fig. 2.23 Pyrene-substituted P(V) porphyrin and possible relaxation processes in the molecule.

We suggest that a similar phenomenon could occur in the P(V) complexes with aryloxy axial ligands described in this manuscript. According to literature data, we may propose an electron transfer from the phenyl ring to the porphyrin core. To test this hypothesis, we performed quantum chemical calculations using the Spartan'14 software (Wavefunction Inc., CA, USA) with the B3LYP/6-31G* basis set for the [P(MPyP)(OPhOMe)2]+ complex (Fig. 2.24).

Fig. 2.25 HQ oxidation by active forms of oxygen.

To exclude any influence of the pH in the experiment, the reaction was carried out in 0.1 M potassium phosphate buffer (pH = 7). Four complexes were used in this study: [P(TPP)(OH)2]+, [P(TPP)(OEt)2]+, [P(MPyP)(OH)2]+ and [P(MPyP)(OEt)2]+. It was difficult to predict the results, since HQ is sensitive to all active forms of oxygen (not only singlet oxygen) and probably to other photodestructive mechanisms. The experiment was performed under visible-light irradiation in a BS-02 irradiation chamber (Opsytec Dr. Gröbel). The spectrum of the D-65 lamp is presented in Fig. 2.26. The spectral changes for the [P(MPyP)(OEt)2]+ complex are presented in Fig. 2.27 as an example.

Fig. 2.27 Photooxidation of HQ in the presence of the [P(MPyP)(OEt)2]+ complex in phosphate buffer.

The band at 290 nm corresponds to the initial HQ and the peak at 246 nm to the BQ formed during the irradiation. Owing to the substantial difference in molar absorption coefficients (εHQ(290 nm) = 2500 L·mol-1·cm-1; εBQ(246 nm) = 25000 L·mol-1·cm-1), the intensity of the BQ band increases rapidly. To visualise the efficiency of [P(TPP)(OH)2]+, [P(TPP)(OEt)2]+, [P(MPyP)(OH)2]+ and [P(MPyP)(OEt)2]+, degradation curves of the P(V) porphyrins and of HQ were plotted (Fig. 2.28).

Fig. 3.1 Schematic representation of a Zn(II) porphyrin dimer as a molecular gear.

Fig. 3.3 π-π stacking in porphyrin dimers (β-substituents in dimers are omitted for clarity) and UV-Vis spectrum of the system in DCM at room temperature. 9

The dimers are shifted relative to each other, i.e. the centre of one molecule is located under the pyrrole part of the other. Owing to these interactions, the rotational motion is hindered and only the planar conformer is observed.

The NMR spectrum of the porphyrin dimer in the presence of DABCO in CD2Cl2 (c = 10-3 M) displays sharp signals for all porphyrin protons, while the DABCO signals are significantly upfield-shifted (-4.6 ppm) due to the shielding effect of the neighbouring porphyrins. Increasing the porphyrin-DABCO ratio to 1:2 (or more) leads to the formation of a 1:2 porphyrin-DABCO complex with fast exchange between coordinated and uncoordinated DABCO molecules. These processes were also monitored by UV-Vis spectroscopy.

Fig. 3.4 Porphyrin dimer-DABCO complex (2:2) and its UV-Vis absorption spectrum in DCM at room temperature.

Fig. 3.5 Porphyrin dimer-pyridine 1:2 complex and its UV-Vis absorption spectrum in DCM at room temperature.

The Soret band of the porphyrin comprises Bx and By components (Fig. 3.6). In the spectrum of a metallated porphyrin monomer they are degenerate. In the spectrum of the dimer they are split into two bands, the Bx transition corresponding to the head-to-tail and the By transition to the face-to-face arrangement. Since these are not identical, two Soret bands are observed. From the data collected, 9,12 it was proposed that the blue-shifted By component mainly corresponds to the non-planar conformation (a mixture of torsionally free dimers) and the red-shifted Bx band to the planar conformation.

Fig. 3.7 Formation of an orthogonal dimer conformer and its UV-Vis spectrum in DCM at room temperature. 12

In order to impose a planar conformation of the dimer in solution, a non-flexible bidentate handle was prepared and used.
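The Bx/By picture above can be illustrated numerically. In the sketch below, the split Soret band is modelled as the sum of two Gaussian components centred at 451 nm and 483 nm (the band positions observed for this dimer in THF); the widths and amplitudes are hypothetical, chosen only for illustration. Shifting intensity from the blue-shifted By component to the red-shifted Bx component, as happens when a planar conformation is imposed, moves the apparent maximum:

```python
import math

def gaussian(x, amp, centre, width):
    """Gaussian band shape."""
    return amp * math.exp(-((x - centre) / width) ** 2)

def soret(x, a_by, a_bx):
    """Split Soret band: blue-shifted By (451 nm) + red-shifted Bx (483 nm).
    Widths and amplitudes are hypothetical, for illustration only."""
    return gaussian(x, a_by, 451.0, 8.0) + gaussian(x, a_bx, 483.0, 8.0)

wavelengths = list(range(430, 501))
mixture = [soret(x, 1.0, 0.6) for x in wavelengths]  # mixture of conformers
planar = [soret(x, 0.2, 1.2) for x in wavelengths]   # planar, handle-locked

peak_mix = wavelengths[mixture.index(max(mixture))]
peak_planar = wavelengths[planar.index(max(planar))]
print(peak_mix, peak_planar)  # apparent maximum moves from 451 nm to 483 nm
```

A real band deconvolution would fit such components to the measured spectrum rather than assume them.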
Results obtained correlate well with the above-mentioned data. In solution, a mixture of several conformers is observed, and upon gradual addition of the handle significant changes in the absorption spectrum occur (Fig. 3.8): the intensity of the blue-shifted By component of the Soret band decreases dramatically, while the intensity of the red-shifted Bx component simultaneously increases. The Q bands demonstrate the same behaviour.

Fig. 3.8 Structure of the dimer bound to a handle and UV-Vis spectra in DCM during the titration.

Fig. 3.9 Synthetic strategy toward the target Zn(II) dimer.

Fig. 3.10 1H-NMR spectrum of the dimer in THF-d8 at different temperatures.

Due to the high symmetry of the molecule, all conformers should display the same 1H-NMR spectrum and consequently cannot be distinguished by this method at room temperature. To overcome this, 1H-NMR spectra were recorded at different temperatures in order to distinguish between the conformers. Unfortunately, cooling a THF-d8 solution of the dimer to -70 °C did not afford any significant change in the spectrum (Fig. 3.10).

The addition of one or two equivalents of pyridine leads to the appearance of signals corresponding to the pyridine protons, significantly upfield-shifted due to the porphyrin ring current, indicating the binding of pyridine at the axial position of Zn(II) (Fig. 3.11). Owing to fast exchange between coordinated and non-coordinated pyridine molecules, further addition of pyridine shifts these signals downfield, up to the usual chemical shifts of free pyridine (at 30 eq.).

Fig. 3.11 1H-NMR titration of the dimer (c = 2.1·10-3 M) with pyridine in CDCl3 solution containing a drop of DMSO-d6.

Fig. 3.12 Titration of the dimer with pyridine: a) in DCM, c = 4.7·10-6 M, 0-220 eq. of pyridine; b) in chloroform, c = 8.0·10-6 M, 0-400 eq. of pyridine.

Fig. 3.13 Formation of a planar 2:2 complex with DABCO in DCM (c = 8.0·10-6 M, 40 eq. of pyridine, 0-15 eq. of DABCO).

Single crystals of the dimer have been obtained and studied by X-ray diffraction techniques. The crystals were grown at 25 °C by vapour diffusion of pentane into a solution of the dimer in chloroform containing a drop of toluene and methanol, in the presence of 2.5 equivalents of DMAP.

Fig. 3.14 X-ray structure of the dimer complex with two DMAP molecules (solvent molecules and hydrogen atoms are omitted for clarity).

Both porphyrin rings are almost planar and the butadiyne linker is linear. Both Zn atoms are pentacoordinated, surrounded by the four nitrogen atoms of the porphyrin core (Zn-Npor distances 2.067(5)-2.115(5) Å) and one nitrogen of the DMAP axial ligand (Zn-NDMAP distances 2.115(5)-2.135(5) Å). The zinc cations are slightly displaced out of the porphyrin plane (distances between Zn and the mean plane of the 20 C and 4 N atoms of the porphyrin ring: 0.314 Å and 0.453 Å).

Fig. 3.16 The handle#3 used to lock the dimer.

Addition of the handle to the dimer in DCM leads to the coordination of the bis-monodentate ligand to the apical positions of the two Zn(II) centres of the dimer, thus switching off the flipping of the porphyrin macrocycles. The coordination of the two pyridyls of handle#3 could be monitored by UV-Vis spectroscopy. Indeed, the addition of handle#3 to a DCM solution of the dimer leads to the formation of a planar 1:1 complex (Fig. 3.17) with log K = 4.665 ± 0.022.

Fig. 3.17 Comparison of the structures and UV-Vis spectra (in DCM) of the three different complexes of the dimer: Dimer(pyr)2 (a, green), [Dimer(DABCO)]2 (b, red), Dimer(handle#3) (c, blue).

Fig. 3.18 Opening and closing of the dimer (c = 4.7·10-6 M) with handle#3 (5 eq.), DMAP (0-20 eq.) (upper spectrum) and TFA (5-15 eq.) (lower spectrum) in DCM at room temperature.

Single crystals of the Dimer(handle#3) complex were also obtained (Fig. 3.19). Due to the low quality of the crystal, the structure refinement could not be completed. In particular, it was not possible to determine the position of the flexible chain connecting the two pyridyl units, which is therefore not represented in Fig. 3.19; one of the two pyridyl moieties was also found to be disordered between two positions. It is difficult to say whether the handle#3 is connected to one dimer molecule or to two neighbouring ones.

Fig. 3.19 Partial X-ray structure of the dimer complex with handle#3 (solvent molecules are removed for clarity; one of the pyridine moieties is disordered).

Fig. 3.20 Calculated geometry of the Dimer(handle#3) complex (B3LYP/6-31G* basis set).

Fig. 3.21 Crystal packing of the single crystal of the Dimer(handle#3) complex (meso-phenyl substituents are omitted for clarity).

The crystal packing is significantly different from that of Dimer(DMAP)2. The molecules form pairs, oriented in a convergent manner: the pyridyl moieties of one dimer form π-π contacts with the pyridyl units of the other, while each porphyrin forms π-π contacts with the adjacent porphyrin ring of another dimer pair.

Fig. 3.22 Closing/opening processes in the AP-SA dimer system (c = 8.0·10-6 M) in DCM at room temperature (AP: 40 eq., SA: 0-40 eq., upper; Et3N: 0-90 eq., lower).

Fig. 3.23 Schematic representation of a potential bistable gear.

The [P(MPyP)(OEt)2]+ complex showed the best results for hydroquinone oxidation in water. This type of P(V) porphyrin derivative could be used for the photooxidation of other classes of compounds; they might also be of interest as PDT reagents.

The last part of the manuscript deals with a new type of molecular gear based on a Zn(II) porphyrin dimer, in which the two porphyrin moieties are linked by a non-flexible butadiyne bridge.
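Association constants such as the log K = 4.665 quoted above for the Dimer(handle#3) complex are typically obtained by fitting a UV-Vis titration to a 1:1 binding isotherm. A self-contained sketch of such a fit, on synthetic data generated from a hypothetical K; a real treatment would fit the absorbances by nonlinear least squares and account for ligand depletion:

```python
def fraction_bound(ligand_conc, K):
    """1:1 isotherm with the free-ligand approximation: f = K[L]/(1 + K[L])."""
    return K * ligand_conc / (1.0 + K * ligand_conc)

# Synthetic titration generated with a hypothetical K_true.
K_true = 4.6e4  # M^-1
ligand = [1e-6, 5e-6, 1e-5, 5e-5, 1e-4, 5e-4]  # M
data = [fraction_bound(c, K_true) for c in ligand]

# Crude grid search over log K (step 0.001); a real fit would use
# nonlinear least-squares optimisation instead.
best_logK, best_err = 0.0, float("inf")
for i in range(3000, 6001):
    K = 10.0 ** (i / 1000.0)
    err = sum((fraction_bound(c, K) - f) ** 2 for c, f in zip(ligand, data))
    if err < best_err:
        best_logK, best_err = i / 1000.0, err

print(f"log K = {best_logK:.3f}")
```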
In solution, the compound exists as a mixture of rotational conformers resulting from the flipping of the two porphyrins. We succeeded in locking the system by blocking the flipping process. The first approach required the addition of a bis-monodentate ligand (handle#3) that coordinates simultaneously to the axial positions of both Zn(II) cations of the dimer. The reversibility of the process was demonstrated by the alternate addition of DMAP and TFA. The second design involves acid/base processes for the locking and unlocking of the molecular gear. The dynamic molecule is composed of the Zn(II) dimer and two axial monodentate ligands bearing either an acidic (carboxylic) or a basic (amine) site; the locking is achieved through electrostatic interactions and H-bonding. Although the opening/closing processes are in principle reversible, only a single cycle could be achieved because of precipitation. Further functionalization of the porphyrin meso-positions of the dimer could open new perspectives in mastering the movement of such dimeric species.

1H-NMR, 13C-NMR, 31P-NMR and 19F-NMR spectra were acquired at 25 °C on Bruker Avance NMR spectrometers: AV 300 (300 MHz), AV 400 (400 MHz), AV 500 (500 MHz) or AV 600 (600 MHz). NMR spectra were referenced to the residual solvent signal.

UV-Vis absorption spectra were obtained at room temperature on a Lambda 650S spectrometer (PerkinElmer) and a Thermo Evolution 210 spectrometer in quartz cells with a 1 cm optical path. Spectrophotometric titrations were performed in 1 cm quartz cells with a Teflon stopper. Wavelengths (λ) are given in nm; molar absorption coefficients (ε) are given in L·mol-1·cm-1.

R and RSt are the photobleaching rates in non-aqueous solutions (or the SOSG fluorescence growth rates in water) in the presence of the sample and the standard, respectively. I and ISt are the integral light absorption values of the sample and the standard, respectively.
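The comparative expression itself did not survive the extraction; with the symbols defined above it takes the standard form of the relative method (a reconstruction, not a verbatim quote of the thesis equation; St denotes the reference photosensitizer):

```latex
\Phi_\Delta \,=\, \Phi_\Delta^{St}\,\frac{R\,I^{St}}{R^{St}\,I}
```

The same comparative scheme, with emission intensities in place of bleaching rates, underlies the fluorescence quantum yields quoted earlier ("equation 2").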
To avoid chain reactions induced by DPBF in the presence of singlet oxygen, the concentration of trap was lowered to 510 -5 M. Photodestruction experiments were carried out in an irradiation chamber BS-02 (Opsytec Dr. Gröbel) with D65 lamps in time-controlling mode. MALDI TOF mass-spectra were measured on a Bruker Daltonics Ultraflex(III) spectrometer without matrix. High-resolution mass spectra (HRMS) ESI-TOF were recorded on the microTOF LC mass spectrometer (Bruker Daltonics). 1 H-NMR (CDCl3, 600 MHz) δ, ppm: -2.80 (s, 2H, NH), 7.77 (m, 9H, m-and pphenyl), 8.18 (m, 2H, o-pyridyl), 8.21 (m, 6H, o-phenyl), 8.80 (d, 3 J = 4.4 Hz, 2H, β-pyrr), 8.86 (s, 4H, β-pyrr), 8.89 (d, 3 J = 4.4 Hz, 2H, β-pyrr), 9.04 (m, 2H, m-pyridyl). H2DPyP -1 H-NMR (CDCl3, 600 MHz) δ, ppm: -2.84 (s, 2H, NH), 7.78 (m, 6H, m-and pphenyl), 8.18 (m, 4H, o-pyridyl), 8.21 (d, 3 J = 6.9 Hz, 4H, o-phenyl), 8.81 (d, 3 J = 4.3 Hz, 4H, βpyrr), 8.91 (d, 3 J = 4.3 Hz, 4H, β-pyrr), 9.04 (m, 4H, m-pyridyl). 1 H-NMR (CDCl3, 500 MHz) δ, ppm: -2.73 (s, 2H, NH), 7.79 (m, 9H, m-and p-phenyl), 8.23 (m, 6H, o-phenyl), 8.79 (d, 3 J = 4.9 Hz, 2H, β-pyrr), 8.87 (s, 4H, β-pyrr), 8.96 (d, 3 J = 4.9 Hz, 2H, β-pyrr). 1 H-NMR (CDCl3, 300 MHz) δ, ppm 4.67 (s, 4H, CH2), 7.44 (d, 3 J = 7.7 Hz, 2H, m-pyridyl), 7.77 (t, 3 J = 7.7 Hz, 1H, p-pyridyl). 1 H-NMR (CDCl3, 600 MHz) δ, ppm: 1.43 (m, 4H, CH2 THP), 1.62 (m, 1H, CH2 THP), 1.74 (m, 1H, CH2 THP), 3.41 (m, 2H, CH2O), 3.54 (m, 10H, CH2O), 3.77 (m, 2H, CH2O), 4.54 (s, 1H, CH). 13 C 13 -NMR (CDCl3, 75 MHz) δ, ppm: 19.5 (CH2 THP), 25.3 (CH2 THP), 30.5 (CH2 THP), 61.7 (CH2O), 62.1 (CH2O), 62.4 (CH2O), 66.6 (CH2O), 66.8 (CH2O), 70.4 (CH2O), 72.4 (CH2O), 99.1 (CHTHP). 1 H-NMR (CDCl3, 300 MHz) δ, ppm 1.44-1.81 (m, 12H, CH2 THP), 3.42-3.86 (m, 28H, CH2O and CH2OTHP), 4.59 (m, 2H, CH), 4.63 (s, 4H, PyCH2), 7.34 (d, 3 J = 7.7 Hz, 2H, m-pyridyl), 7.66 (t, 3 J = 7.8 Hz, p-pyridyl). 
13 C 13 -NMR (CDCl3, 75 MHz) δ, ppm: 19.4 (CH2 THP), 25.3 (CH2 THP), 30.4 (CH2 THP), 62.1 (CH2O), 66.5 (CH2O), 70.2 (CH2O), 70.4 (CH2O), 70.5 (CH2O), 70.6 (CH2O), 73.9 (PyCH2), 98.8 (CH THP), 119.8 (CHPy), 137.1 (CHPy), 157.8 (C). 13 C 13 -NMR (CDCl3, 150 MHz) δ, ppm: 61.5 (CH2O), 70.0 (CH2O), 70.2 (CH2O), 70.4 (CH2O), 70.5 (CH2O), 72.6 (CH2O), 73.7 (PyCH2), 120.4 (CHPy), 137.4 (CHPy), 157.5 (C). 1 H-NMR (CDCl3, 600 MHz) δ, ppm: 2.99 (s, 6H, SCH3), 3.57-3.72 (m, 20H, CH2O), 4.30 (m, 4H, CH2O), 4.61 (s, 4H, PyCH2), 7.33 (d, 3 J = 7.8 Hz, 2H, m-pyridyl), 7.70 (t, 3 J = 7.8 Hz, ppyridyl). 13 C 13 -NMR (CDCl3, 150 MHz) δ, ppm: 37.3 (SCH3), 69.0 (CH2O), 69.2 (CH2O), 70.2 (CH2O), 70.5 (CH2O), 70.6 (CH2O), 70.7 (CH2O), 74.0 (PyCH2), 120.0 (CHPy), 137.3 (CHPy), 157.8 (C). 1 H-NMR (CDCl3, 600 MHz) δ, ppm: 1.67 (m, 4H, CH2 THP), 1.84 (m, 1H, CH2 THP), 1.99 (m, 1H, CH2 THP), 3.62 (m, 1H, OCH2 THP), 3.93 (m, 1H, OCH2 THP), 5.38 (t, 3 J = 3.2 Hz, 1H, CHTHP), 6.46 (dd 3 J = 8.0 Hz, 4 J = 2.2 Hz, 1H, CHres), 6.58 (dd, 4 J = 2.6 Hz, 4 J = 2.6 Hz, 1H, CHres), 6.60 (dd, 3 J = 7.8 Hz, 4 J = 2.2 Hz, 1H, CHres), 6.67 (s, 1H, OH), 7.09 (dd, 3 J = 8.1 Hz, 3 J = 8.1 Hz, 1H, CHres). 13 C 13 -NMR (CDCl3, 150 MHz) δ, ppm: 18.8 (CH2 THP), 25.2 (CH2 THP), 30.4 (CH2 THP), 62.2 (OCH2 THP), 96.5 (CHTHP), 104.1 (CHres), 108.7 (CHres), 109.0 (CHres), 130.1 (CHres), 157.1 (C), 158.2 (C). 1 H-NMR (CDCl3, 600 MHz) δ, ppm: 1.66 (m, 8H, CH2 THP), 1.83 (m, 2H, CH2 THP), 1.99 (m, 2H, CH2 THP), 3.55-3.92 (m, 24H, CH2O), 4.10 (m, 4H, OCH2 THP), 4.65 (s, 4H, PyCH2), 5.37 (t, 3 J = 3.4 Hz, 2H, CH THP), 6.53 (m, 2H, CHres), 6.64 (m, 4H, CHres), 7.14 (dd, 3 J = 8.6 Hz, 3 J = 8.6 Hz, 2H, CHres), 7.36 (d, 3 J = 7.7 Hz, 2H, m-pyridyl), 7.66 (t, 3 J = 7.8 Hz, 1H, p-pyridyl). 
13 C 13 -NMR (CDCl3, 150 MHz) δ, ppm: 18.8 (CH2 THP), 25.2 (CH2 THP), 30.9 (CH2 THP), 62.1 (CH2O), 67.4 (CH2O), 69.8 (CH2O), 70.3 (CH2O), 70.6 (CH2O), 70.7 (CH2O), 70.8 (CH2O), 74.1, 96.4 (PyCH2), 103.6 (CHres), 107.9 (CHres), 109.0 (CHres), 119.9 (CHPy), 129.7 (CHres), 137.3 (CHPy), 157.9 (C), 158.3 (C), 159.9 (C). 13 C 13 -NMR (CDCl3, 150 MHz) δ, 67.4 (CH2O), 69.7 (CH2O), 70.4 (CH2O), 70.7 (CH2O), 70.8 (CH2O), 70.9 (CH2O), 73.0 (PyCH2), 102.8 (CHres), 106.4 (CHres), 108.5 (CHres), 121.1 (CHPy), 130.0 (CHres), 157.2 (CHPy), 157.6 (C), 159.9 (C), 159.9 (C). 1 H-NMR (CDCl3, 300 MHz) δ, ppm 1.20-1.72 (m, 6H, CH2 THP), 3.10-3.75 (m, 16H, CH2O), 4.40 (s, 1H, OCHTHP). 13 C 13 -NMR (CDCl3, 150 MHz) δ, 19.5 (CH2 THP), 25.4 (CH2 THP), 30.6 (CH2 THP), 62.2 (CH2O), 66.6 (CH2O), 70.3 (CH2O), 70.5 (CH2O), 70.6 (CH2O), 70.7 (CH2O), 74.0 (PyCH2), 98.9 (CHTHP), 119.9 (CHPy), 137.2 (CHPy), 157.9 (C). 1 H-NMR (CDCl3, 300 MHz) δ, ppm 3.51-3.73 (m, 32H, CH2O), 4.63 (s, 4H, PyCH2), 7.36 (d, 3 J = 7.7, 2H, m-pyridyl), 7.69 (t, 3 J = 7.7 Hz, 1H, p-pyridyl). 1 H-NMR (CDCl3, 300 MHz) δ, ppm: 2.40 (s, 6H, CH3), 3.64 (m, 28H, CH2O), 4.12 (m, 4H, CH2O), 4.62 (s, 4H, ArCH2), 7.30 (d, 3 J = 8.0 Hz, 4H, CHtosyl), 7.33 (d, 3 J = 7.8 Hz, 2H, m-pyridyl), 7.66 (t, 3 J = 7.9, 1H, p-pyridyl),7.75 (d, 3 J = 8.4, 4H, CHtosyl). 13 C 13 -NMR (CDCl3, 125 MHz) δ, 21.6 (CH3), 68.1 (CH2O), 68.6 (CH2O), 69.3 (CH2O), 69.4 (CH2O), 70.1 (CH2O), 70.5 (CH2O), 70.6 (CH2O), 70.7 (CH2O), 72.4 (PyCH2), 125.9 (CHPy), 127.9 (CHTs), 128.8 (CHPy), 129.9 (CHTs), 133.7 (C), 144.9 (C), 154.2 (C). Elemental analysis, %: calculated for C37H55NO15S2•0.71(CH2Cl2)•0.71(H2O)•0.29(N3Et): C,52.48; H, 6.72; N, 2.00. Found C, 52.43; H, 6.76; N, 2 1 H-NMR (CDCl3, 300 MHz) δ, ppm: 7.80 (m, 12H, m-and p-phenyl), 7.99 (m, 8H, o-phenyl), 9.14 (d, 4 JP-H = 4.5 Hz, 8H, β-pyrr). 
31 P- 31 NMR (CDCl3, 121 MHz) δ, ppm: -229 13 C-NMR (CDCl3, 125 MHz) δ, ppm: 117.7 (C, 3 JP-C = 3.3 Hz), 128.8 (CH), 130.5 (CH), 132.8 (CH, 3 JP-C = 6.5 Hz), 133.3 (CH), 134.2 (C), 140.0 (C). HR-ESI MS: m/z calcd. for [C44H28Cl2N4P] + : 713.1423; found: 713.1407 [15-Cl] + . UV-vis (CHCl3): λ, nm (log ε) 440 (5.44), 568 (4.13), 613 (4.00). 1 H 1 -NMR (CDCl3, 400 MHz) δ, ppm: 7.69 (m, 12H, m-and p-phenyl), 8.01 (m, 8H o-phenyl), 8.89 (d, 4 JP-H = 2.7 Hz, 8H, β-pyrr). 31 P- 31 NMR (CDCl3, 162 MHz) δ, ppm: -193. UV-vis (CHCl3) λ, nm (logε): 428(5.42), 556 (4.21), 596 (3.92). 1 H-NMR (CDCl3, 400 MHz) δ, ppm: 7.69 (m, 12H, m-and p-phenyl), 8.01 (m, 8H o-phenyl), 8.89 (d, 4 JP-H = 2.7 Hz, 8H, β-pyrr). 31 P- 31 NMR (CDCl3, 162 MHz) δ, ppm: -193. 13 C 13 -NMR (CDCl3, 125 MHz) δ, ppm: 115.9 (C), 127.7 (CH), 128.7 (CH), 132.0 (CH), 133.7 (CH), 134.7 (C), 139.5 (C). HR-ESI MS: m/z calcd. for [C44H30N4O2P] + : 677.2101; found: 677.2084 [16b-Br] + . UV-vis (CHCl3) λ, nm (logε): 428 (5.42), 556 (4.21), 596 (3.92). 13 C 13 -NMR (CDCl3, 125 MHz) δ, ppm: 13.1 (CH3, 3 JP-C = 16.4 Hz), 56.9 (CH2, 2 JP-C = 15.0 Hz), 115.4 (C, 3 JP-C = 2.0 Hz), 128.6 (CH), 129.9 (CH), 132.2 (CH, 3 JP-C = 5.0 Hz), 133.3 (CH), 135.4 (C), 139.2 (C). HR-ESI MS: m/z calcd. for [C48H38N4O2P] + : 733.2727; found: 733.2737 [18-Br] + . UV-vis (CHCl3) λ, nm (logε): 431 (5.44), 560 (4.23), 599 (4.02). 1 H 1 -NMR (CD3OD, 400 MHz) δ, ppm: 1.40 (td,4 JP-H =2.2 Hz, 4 J = 2.2 Hz, 2H, o-phenoxy), 2.27 (br. d, 3 J = 8.2 Hz, 2H, o-phenoxy), 3.34 (t, 3 J = 5.0 Hz, 4H, CH2O), 3.51 (t, 3 J = 5.0 Hz, 4H, CH2O), 3.56-3.69 (m, 16H, CH2O), 4.51 (s, 4H, CH2O), 5.74 (br. d, 3 J = 8.2 Hz, 2H, p-phenoxy), 5.90 (ddd, 3 J = 8.3 Hz, 3 J = 8.3 Hz, 4 J = 1.7 Hz, 2H, 7.22 (d, 3 J = 7.6 Hz, 2H, 7.31 (t 3 J = 7.8 Hz, 1H, 7.75 (m, 12H, 7.81 (m, 8H, 9.08 (d, 8H,. 31 P- 31 NMR (CD3OD, 162 MHz): δ, ppm, -195. 
13 C 13 -NMR (CD3OD, 100 MHz): δ, ppm 68.3 (CH2O), 70.3 (CH2O), 71.7 (CH2O), 71.8 (CH2O), 72.0 (CH2O), 74.5 (CH2O), 79.5 (CH2O), 101.7 (CHphenoxy, 3 JP-C = 8.1 Hz), 108.9 (CH), 109.7 (CH), 118.4 (CH), 121.7 (C), 129.5 (CH), 130.9 (CH) 134.7 (CH), 134.8 (CH, 3 JP-C = 6.0 Hz), 136.3 (CH), 138.9 (C), 140.8 (CH), 151.6 (C) 151.7 (C), 159.0 (C), 159. 13 C 13 -NMR (CDCl3, 125 MHz) δ, ppm: 111.8 (C), 116.2 (C), 116.4 (C), 126.9 (CH,), 127.8 (CH,), 128.3 (CH), 129.0 (CH), 129.5 (CH), 131.4 (CH), 132.4 (C), 132.5 (CH), 132.9 (CH), 133.6 (CH), 134.7 (CH), 138.3 (C), 139.4 (C), 139.5 (C), 147.6 (CH), 148.6 (CH). HR-ESI MS: m/z calcd. for [C43H30N5O2P] 2+ : 339.6063; found: 339.6031 [22+H-Br] 2+ ; m/z calcd. for [C43H29N5O2P] + : 678.2053; found: 678.2036 [22-Br] + . UV-vis (CHCl3) λ, nm (logε): 428 (5.24), 556 (4.02), 598 (3.32). 1 H-NMR (CDCl3, 400 MHz) δ, ppm: 1.24 (br., 2H, o-phenoxy),1.75 (br., 2H o-phenoxy), 3.11 (s, 6H, O-CH3), 5.68 (d, 3 J = 8.0 Hz, 2H, p-phenoxy), 5.84 (dd, 3 J = 8.0 Hz, 3 J = 8.0 Hz, 2H, mphenoxy),7.69-7.81 (m, 17H, phenyl + o-pyridyl), 9.06 (m, 8H, β-pyrr), 9.12 (m, 2H, m-pyridyl). 31 P- 31 NMR (CDCl3, 162 MHz) δ ppm: -195. 13 C 13 -NMR (CDCl3, 125 MHz) δ, ppm: 55.0 (CH3), 101.7 (CH, 3 JP-C = 9.0 Hz), 106.8 (CH, 3 JP-C = 9.1 Hz), 107.6, (CH), 117.8 (C) 128.5 (CH), 128.6 (CH), 130.2 (CH), 132.7 (CH), 133.6 (CH), 133.8 (CH), 134.4 (CH), 134.7 (CH), 139.5 (C), 139.7 (C), 139.8 (C), 150.0 (C), 158.7 (C). HR-ESI MS: m/z calcd. for [C57H42N5O4P] 2+ : 445.6482; found: 445.6458 [23H-Cl] 2+ : m/z calcd. for [C57H41N5O4P] + : 890.2891; found: 890.2873 [23-Cl] + . UV-vis (CHCl3) λ, nm (logε): 434 (5.00), 564 (4.00), 606 (3.61). 1 H-NMR (CDCl3, 400 MHz) δ, ppm: -2.26 (dt,3 JP-H = 12.0 Hz, 3 J = 5.9 Hz, 4H, P-O-CH2), -1.27 (m, 4H, CH2), 1.54 (t, 3 J = 5.8 Hz, 4H, -CH2-OH), 7.75 (m, 9H, m-and p-phenyl), 8.00 (m, 6H, o-phenyl),8.08 (m, 2H, o-pyridyl), 8.95 (m, 2H, m-pyridyl), 8.98 (m, 2H, β-pyrr), 9.03 (m, 6H, β-pyrr). 31 P- 31 NMR (CDCl3, 162 MHz) δ, ppm: -180. 
13 C 13 -NMR (CDCl3, 125 MHz) δ, ppm: 33.3 (CH2, 2 JP-C = 16.6 Hz), 57.0 (CH2), 59.4 (CH2, 2 JP-C = 15.3 Hz), 113.6 (C), 117.9 (C), 118.3 (C), 129.4 (CH), 129.9 (CH), 130.9 (CH), 133.8 (CH), 134.6 (CH), 134.7 (CH), 135.0 (CH), 136.8 (C), 139.6 (C), 140.7 (C), 140.8 (C), 140.9 (C), 146.1 (C), 150.1 (CH) HR ESI MS: m/z calcd. for [C49H41N5O4P] + : 794.2891; found: 794.2821 [26-Br] + . UV-vis (CHCl3) λ, nm (logε): 431 (5.26), 560 (4.00), 603 (3.39). [ P ( P MPyP)(OH)2] + Br -(22) (100 mg, 0.13 mmol, 1 eq.) was dissolved in 5 ml of dry DCM and placed into the flask together with SOBr2 (0.39 ml, 5.03 mmol, 38 eq.). The mixture was stirred at room temperature under argon during 3 h. DCM and the excess of SOBr2 were removed under low pressure. The reaction mixture was left under vacuum overnight. Freshly distilled pyridine (25 ml) was added and the solution was transferred to a 0.5L 2-necked round-bottom flask, previously vacuumed and filled with argon and equipped with a reflux condenser. The handle#1 (10) (90 mg, 0.153 mmol, 1.16 eq.) previously dissolved in freshly distilled pyridine (25 ml) was added under argon under stirring. After addition of freshly distilled pyridine (100 ml), the mixture was stirred at 50 o C during 24 h and further refluxed for 2 h. The reaction mixture was cooled to room temperature and dissolved in cyclohexane (200 ml). Without evaporation of solvents, it was purified by column chromatography on SiO2. Using different eluents: DCM-cyclohexane (1:1), DCM and gradually increasing methanol ratio in DCM-methanol mixtures up to 12% give set of complexes. Further purification by Bio-Beads S-X3 (eluent -chloroform-methanol 98:2) removed small impurities from the mixture. Extra purification on Bio-Beads S-X1 column (eluent -98/2 chloroform-methanol 98:2) afforded 18 mg (yield 10%) of the pure compound 28b as a green solid. 
1 H-NMR (CD3OD, 400 MHz) δ, ppm: 1.41 (ddd, 4 J = 2.2 Hz, 4 J = 2.2 Hz, 4 JP-H = 2.2 Hz, 2H, o-phenoxy), 2.24 (ddd, 3 J = 8.2 Hz, 4 J = 2.2 Hz, 4 JP-H = 2.2 Hz, 2H o-phenoxy), 3.35 (m, CH2O), 3.51 (m, CH2O), 3.56-3.68 (m, 16H, CH2O), 4.48 (s, 4H, CH2O), 5.74 (dt, 3 J = 8.2 Hz, 4 J = 1.4 31 P- 31 NMR (CD3OD, 162 MHz) δ, ppm: -195. 13 C 13 -NMR (CD3OD, 125 MHz) δ, ppm: 68.3 (CH2O), 70.2 (CH2O), 71.6 (CH2O), 71.7 (CH2O), 72.0 (CH2O), 74.4 (CH2O), 74.4 (CH2O), 101.8 (CH, 3 JP-C = 8.2 Hz), 106.7 (CH), 108.8 (CH, 3 JP-C = 8.3 Hz), 109.0 (CH), 109.7 (CH), 118.8 (C), 119.1 (C), 121.6 (CH), 129.6 (CH), 129.7 (CH), 131.0 (CH), 134.2 (CH, 3 JP-C = 5.2 Hz), 134.7 (CH), 135.0 (CH, 3 JP-C = 5.4 Hz), 135.2, (CH, 3 JP-C = 5.6 Hz), 135.4 (CH, 3 JP-C = 5.3 Hz), 136.1 (C), 138.9 (CH), 139.7 (C), 140.7 (C), 140.8 (C), 141.0 (C), 145.2 (C), 150.3 (CH), 151.5 (C), 151.6 (C), 159.0 (C), 159.2(C). HR-ESI MS: m/z calcd. for [C74H67N6O10P] 2+ : 615.2323; found: 615.2312 [28bH-Br] 2+ ; m/z calcd. for [C74H66N6O10P] + : 1229.4573; found 1229.4590 [28b-Br] + . UV-vis (CHCl3) λ, nm (logε): 434 (4.92), 565 (3.86), 607 (3.45). 1 H-NMR (CD3OD, 400 MHz) δ, ppm: -2.23 (m, 4H, P-O-CH2), 0.65 (br., 4H, CH2O), 2.12 (m, 4H, CH2O), 2.63 (m, 4H, CH2O), 2.94 (m, 4H, CH2O), 3.18 (m, 4H, CH2O), 3.38 (m, 4H, CH2O), 3.49 (m, 4H, CH2O), 4.46 (s, 4H, CH2O), 7.23 (d, 3 J = 7.9 Hz, 2H, m-pyridylhandle), 7.55 (t 3 J = 7.6 Hz, 1H, p-pyridylhandle), 7.82 (m, 9H, m and p-phenyl), 8.04 (m, 6H, o-phenyl), 8.15 (m, 2H, opyridyl), 8.98 (m, 2H, m-pyridyl), 9.14 (m, 8H, β-pyrr). Spectra are described more precisely in Chapter I. 31 P- 31 NMR (CD3OD, 162 MHz) δ, ppm: -181. MALDI-TOF MS: m/z calcd. for [C67H67N5O10P] + : 1132.47; found 1133.48 [29-OTs] + . UV-vis (CHCl3) λ, nm: 433, 563, 602. 31 P- 31 NMR (DMSO-d6 + CDCl3, 162 MHz) δ, ppm: -229. 
13 C 13 -NMR (DMSO-d6 + CDCl3, 125 MHz) δ, ppm: 113.4 (C), 116.5 (C), 117.5 (C), 127.8 (CH), 128.2 (CH), 129.7 (CH), 129.9 (CH), 132.9 (CH), 133.3 (CH), 138.2 (C), 139.6 (C), 142.8, (C), 148.2 (CH). HR-ESI MS: m/z calcd. for [C42H27Cl2N6P] 2+ : 358.0700; found: 358.0716 [30H-Cl] 2+ ; m/z calcd. for [C42H26Cl2N6P] + : 715.1328; found: 715.1315 [30-Cl] + . UV-vis (CHCl3) λ, nm: 438, 570, 608. 31 P- 31 NMR (CDCl3, 121 MHz) δ: ppm -180. 13 C 13 -NMR (CDCl3, 125 MHz): 112.4 (C), 116.5 (C), 128.0 (CH,), 128.2 (CH,), 129.2 (CH), 131.6 (CH), 133.0 (CH), 133.5 (CH), 137.0 (C), 138.3 (C), 139.6 (C), 145.8 (C), 148.6 (CH). HR-ESI MS: m/z calcd. for [C42H29N6O2P] 2+ : 340.1039; found: 340.1026 [31H-Br] 2+ ; m/z calcd. for [C42H28N6O2P] + :679.2006; found: 679.2008 [31-Br] + . UV-vis (CHCl3) λ, nm (logε): 428 (5.07), 557(3.93), 598 (3.48). 1 H-NMR (CDCl3, 400 MHz) δ, ppm -2.48 (br., 4H, P-O-CH2), -1.49 (br., 4H CH2), -1.14 (br., 6H, CH3) 7.81 (m, 6H, m-and p-phenyl), 8.00 (m, 8H, o-phenyl + o-pyridyl), 9.13 (m, 10H, mpyridyl + β-pyrr). 31 P- 31 NMR (CDCl3, 162 MHz) δ, ppm, -179. UV-vis (CHCl3) λ, nm: 432, 560, 600. 31 P- 31 NMR (CD3OD, 121 MHz) δ, ppm: -175. 13 C 13 -NMR (CD3OD, 125 MHz) δ, ppm: 114.3 (C), 130.0 (CH), 133.5 (CH, 3 JP-C = 3.3 Hz), 139.9 (C), 147.9 (C), 149.7 (CH). HR-ESI MS: m/z calcd. for [C40H26N8O2P] + : 681.1911; found: 681.1821 [33-Br] + . UV-vis (CHCl3) λ, nm (logε): 427 (5.13), 557 (3.92), 594 (3.50). pentafluorophenyl-10,15,20-triphenylporphyrin, H2MpFP (2) (270 mg, 0.38 mmol, 1 eq.) was dissolved in pyridine (150 ml) under argon and then a solution of POBr3 (4.39 g, 15.3 mmol, 40 eq.) in pyridine (10 ml) was added dropwise under stirring. The mixture was refluxed for 1.5 h under argon and then cooled to room temperature. After dissolving in DCM (200 ml) to the green mixture, 2L of distilled water was added and the mixture stirred at room temperature during 1 day until it became purple. 
The organic layer was isolated and diluted with petroleum ether (200 ml). The mixture was poured directly on a silica gel chromatography column without evaporation of the solvents. Increasing the methanol ratio in the eluent up to 15% give the crude product. Further purification by Bio-Beads S-X3 GPC column (eluent -chloroform-methanol 98:2) afforded 170 mg (yield 55%) of the pure compound 34. 31 P- 31 NMR (CD3OD, 121 MHz) δ, ppm: -189.19 F-NMR (CD3OD, 282 MHz) δ, ppm: -164.3 (td, 3 JF-F = 21.7 Hz, 4 JF-F = 6.5 Hz, 2F, o-phenyl), -151.1 (t, 3 JF-F = 21.0 Hz, 1F, p-phenyl), -140.6 (m, 2F, m-phenyl). 13 C 13 -NMR (CD3OD, 125, MHz) δ, ppm: 117.6 (C, 3 JP-C = 1.8 Hz), 118.5 (C, 3 JP-C = 1.8 Hz), 120.0 (CH), 120.1 (CH), 130.5 (CH), 132.8 (CH, 3 JP-C = 4.1 Hz), 133.9 (CH, 3 JP-C = 4.7 Hz), 134.3 (CH, 3 JP-C = 4.7 Hz), 134.6 (CH), 134.7 (CH), 135.0 (CH, 3 JP-C = 4.7 Hz), 137.7 (CH), 137.8 (C), 140.0 (C), 140.2 (C), 140.6 (C), 140.9 (C). HR-ESI MS: m/z calcd. for [C44H25F5N4O2P] + : 767.1630; found: 767.1563 [34-Br] + . UV-vis (CHCl3) λ, nm (logε): 425 (5 1 H 1 -NMR (CDCl3, 400 MHz) δ, ppm: -2.39 (s, 2H, NH), 0.63 (s, 9H, TMS), 7.78 (m, 9H m-and p-phenyl.), 8.21 (m, 6H, o-phenyl), 8.78 (s, 4H, β-pyrr), 8.90 (d, 3 J = 4.7 Hz, 2H, β-pyrr), 9.68 (d, 3 J = 4.8 Hz, 2H, β-pyrr). 13 C 13 -NMR (CDCl3, 75 MHz) δ, ppm: 0.8 (CH3), 99.4 (C), 102.3 (C), 107.5 (C), 121.4 (C), 122.4 (C), 127.0 (CH), 127.1 (CH), 128.2 (CH), 134.7 (CH), 134.8 (CH), 142.1 (C), 142.3 (C). 13 C 13 -NMR (CDCl3, 100 MHz) δ, ppm: 84.0 (CH), 121.0 (C), 126.7 (CH), 126.8 (CH), 127.9 (CH), 134.4 (CH), 134.5 (CH), 141.7 (C), 142.0 (C). 1 H-NMR (THF-d8, 400 MHz) δ, ppm: 7.85 (m, 18H m-and p-phenyl.), 8.27 (m, 12H, ophenyl), 8.83 (s, 8H, β-pyrr), 9.06 (d, 3 J = 4.6 Hz, 4H, β-pyrr), 10.01 (d, 3 J = 4.7 Hz, 4H, β-pyrr). 
13C-NMR (CDCl3 + DMSO-d6, 100 MHz) δ, ppm: 102.9 (C), 121.7 (C), 126.0 (CH), 126.1 (CH), 127.1 (CH), 131.2 (CH), 131.7 (CH), 132.7 (CH), 133.7 (CH), 133.8 (CH), 142.0 (C), 148.8 (C), 149.0 (C), 149.9 (C), 152.4 (C). HR-ESI MS: m/z calcd. for [C80H46N8Zn2]+: 1246.2423; found: 1246.2414 [37]+. UV-vis (THF) λ, nm (logε): 451 (5.58), 483 (5.36), 569 (4.43), 637 (4.80), 690 (4.83).
Fig. 1 Schematic representation of the turnstile.
The idea was to use Sn(IV) as a hinge between a rotor (the handle) and a stator (the porphyrin). The motion is based on the rotation of the porphyrin around the O-Sn-O bond. The porphyrin cavity was expected to increase the stability of the bond between the rotor and the stator in acidic medium. The ultimate goal is the formation of a molecular turnstile whose direction of rotation would be controlled through the sequential involvement of at least three differentiated meso positions. The motion would be controlled by the successive formation of coordination bonds and/or ionic interactions involving coordinating and/or basic substituents present on adjacent meso positions.
Scheme 1. Synthesis of the tetraphenylporphyrin-based system (model turnstile #1).
We then attempted to adapt the phosphorus-insertion conditions to porphyrin macrocycles bearing an increasing number of pyridine groups at the meso positions (Scheme 2). The use of the POCl3/PCl5 mixture gave access to the complex bearing a single meso-pyridine, but in only 40% yield. In contrast, the complexes of porphyrins bearing two or four pyridines at the meso positions could not be obtained by this protocol. On the other hand, the use of POBr3, followed by exchange of the Br- ions for hydroxides by hydrolysis, allowed all four targeted P(V) complexes to be isolated (Scheme 2).
It should nevertheless be noted that increasing the number of pyridyl groups at the meso positions leads to a drastic decrease in the yields, despite an increase in the amount of phosphorus reagent used and longer reaction times. Thus, the best yield obtained with meso-tetrapyridylporphyrin remains very low, even with 100 eq. of POBr3 and nearly 2 h 30 min of reflux at 130 °C.
We were notably able to show that adding [1,1'-biphenyl]-4,4'-disulfonic acid to the system, after prior coordination of pyridin-4-ylmethylamine at the axial positions of the Zn(II) centers, led to locking of the system (Scheme 7). The system can be unlocked by simple addition of triethylamine.
Scheme 7. Control of the motion in the dimer.
3. Conclusion
During this thesis work, we developed the synthesis of a molecular turnstile based on a P(V) porphyrin. The chemistry of phosphorus porphyrins is far from trivial, notably because of the small size of the P(V) cation, which leads to a very significant deformation of the macrocycle as well as to photodecomposition of the resulting complexes. We developed a method for inserting P(V) into the porphyrin cavity that gave access to porphyrin complexes substituted at the meso positions by pyridyl groups. We studied ligand exchange at the axial positions of P(V) and were able to isolate two new molecular turnstiles. One of them proved to be sufficiently stable and perfectly suited to respond to a chemical stimulus such as a pH change or the addition of a metal cation such as Ag+. In both cases, the reversibility of the processes could be demonstrated.
We also carried out a photophysical study of the P(V) porphyrin complexes obtained, in order to highlight the parameters leading to the instability of the resulting systems. Finally, we developed a second system based on a Zn(II) porphyrin dimer in which the motion can be controlled by a chemical stimulus involving the acid-base properties of the ligands present at the axial positions of the Zn(II) ions.

Table 1.2. Selected X-ray data for [P(MPyP)(OH)2]+.
Bond    Length (Å)    Bond Angle (degree)
P-O     1.615(5)

Photochemical Properties

Contents
1. Introduction
2. Photophysical properties of P(V) porphyrins
2.1 P(V) porphyrins as photosensitizers
2.2 Photophysical properties of P(V) porphyrins in chloroform
2.2.1 Experimental setup #1
2.2.2 Singlet oxygen generation measurements
2.2.3 Fluorescence measurements
2.3 Photophysical properties of P(V) porphyrins in DMSO
2.3.1 Singlet oxygen generation measurements
2.3.2 Fluorescence measurements
2.4 Photophysical properties of phosphorus porphyrins in aqueous solutions
2.4.1 Experimental setup #2
2.4.2 Singlet oxygen generation measurements
2.4.3 Fluorescence measurements
2.5 Electronic structure of aryloxy-complexes
3. Application of P(V) porphyrins in photocatalysis
3.1 Experiment description
3.2 Photooxidation of hydroquinone
4. Conclusions of the chapter
5. References

Table 2.1. Singlet oxygen generation quantum yields of P(V) porphyrins in chloroform.
Name                     ΦΔ
[P(TPP)(OH)2]+           0.73
[P(TPP)(OEt)2]+          0.77
[P(TPP)Cl2]+             0.93
[P(TPP)(OPhOMe)2]+       0.19
[P(MPyP)(OH)2]+          0.99
[P(MPyP)(OEt)2]+         0.99
[P(MPyP)Cl2]+            1.00
[P(MPyP)(OPhOMe)2]+      0.16

Table 2.2. Photophysical data of P(V) porphyrins in chloroform.
Φ = Φst · (F · Ast · n²) / (Fst · A · nst²)

(relative method: F is the integrated fluorescence, A the absorbance and n the refractive index, with the subscript "st" referring to the standard).

Name                     λF, nm      ΦF       Stokes shift, nm
[P(TPP)(OH)2]+           609, 664    0.027    106
[P(TPP)(OEt)2]+          610, 663    0.036    106
[P(TPP)Cl2]+             622, 677    0.014    109
[P(TPP)(OPhOMe)2]+       619, 672    0.009    107
[P(MPyP)(OH)2]+          611, 663    0.029    107
[P(MPyP)(OEt)2]+         616, 669    0.060    110
[P(MPyP)Cl2]+            621, 676    0.012    110
[P(MPyP)(OPhOMe)2]+      621, 671    0.005    107

Table 2.3. Singlet oxygen generation quantum yields of P(V) porphyrins in DMSO.
Name                     ΦΔ
[P(TPP)(OH)2]+           0.24
[P(TPP)(OEt)2]+          0.34
[P(TPP)Cl2]+             0.36
[P(TPP)(OPhOMe)2]+       0.00
[P(MPyP)(OH)2]+          0.31
[P(MPyP)(OEt)2]+         0.37
[P(MPyP)Cl2]+            0.40
[P(MPyP)(OPhOMe)2]+      0.00

In the presence of DPBF, the [P(TPP)(OPhOMe)2]+ and [P(MPyP)(OPhOMe)2]+ complexes did not lead to any bleaching in DMSO solution.

Table 2.4. Photophysical data for P(V) porphyrins in DMSO.
Name                     λFmax, nm   ΦF       Stokes shift, nm
[P(TPP)(OH)2]+           614, 667    0.126    109
[P(TPP)(OEt)2]+          614, 666    0.125    109
[P(TPP)Cl2]+             625, 679    0.016    111
[P(TPP)(OPhOMe)2]+       618, 668    0.005    103
[P(MPyP)(OH)2]+          612, 667    0.119    110
[P(MPyP)(OEt)2]+         618, 671    0.104    111
[P(MPyP)Cl2]+            624, 678    0.029    111
[P(MPyP)(OPhOMe)2]+      617, 666    0.006    100

Table 2.5. Singlet oxygen generation quantum yields of P(V) porphyrins in water.
Name                     ΦΔ
[P(TPP)(OH)2]+           0.11
[P(TPP)(OEt)2]+          0.12
[P(TPP)(OPhOMe)2]+       0.00
[P(MPyP)(OH)2]+          0.25
[P(MPyP)(OEt)2]+         0.46
[P(MPyP)(OPhOMe)2]+      0.00

Trends observed in CHCl3 and DMSO are again observed in water. Indeed, the photosensitizing efficiency of MPyP is higher than the one observed for TPP complexes, and the difference is even more pronounced in water than in organic solvents. ΦΔ for [P(TPP)(OH)2]+ and [P(TPP)(OEt)2]+ is only 0.11 and 0.12 respectively, whereas the values obtained for the corresponding MPyP complexes are 0.25 and 0.46.

Table 2.6. Photophysical data of P(V) porphyrins in water.
Name Fmax, nm ФF Stokes shift, nm [P(TPP)(OH)2] + 614, 667 0.119 112 [P(TPP)(OEt)2] + 621, 673 0.073 115 [P(TPP)(OPhOMe)2] + 619, 671 0.008 107 [P(MPyP)(OH)2] + 614, 667 0.113 113 [P(MPyP)(OEt)2] + 618, 674 0.083 117 [P(MPyP)(OPhOMe)2] + 616, 667 0.007 104 table 3 3 .1). In both structures, phenyl substituents are oriented almost orthogonally to porphyrin ring. Strong π-π stacking is observed in crystal of both Bond Length (Å) Bond Angle ( degree) Dimer (DMAP) Dimer (pyridine) 15 Dimer (DMAP) Dimer (pyridine) 15 Zn-NPyr. 2.115(5), 2.144(4), NPyr-Zn-NPorph 95.0(2), 96.68, 2.135(5) 2.163(4) 94.2(2) 96.41 Zn- 2.067(5) - 2.052 -2.080 N-Zn-Ntrans 158.28(19) 159.82 - NPorph. 2.115(5) N-Zn-Ncis -163.3(2); 161.22; 87.7(2) - 88.11 - 89.0(2) 88.74 M). 10 l of the HQ solution in phosphate buffer (0.09 M) was added to 2.4 ml of the porphyrin Wavelength, nm Acknowledgements Abbreviations DABCO 1,4-diazabicyclo[2.2.2]octane Chapter III. A Molecular Break Based on a Zn(II) Porphyrin Dimer Within this chapter, only a first step toward a functional molecular gear based on the Zn(II) dimer was investigated. One could introduce additional coordinating sites at the periphery of the porphyrin ring. The additional functionalization of one of the meso-positions would afford a system that could in principle be switched between two conformations: orthogonal and planar, as represented in Fig. 3.23. Further functionalization of the meso-positions of porphyrins could also leads to gears capable to interact with specific the surfaces. More complex gears composed of porphyrin trimer or tetramer. General Conclusions Experimental Part added dropwise. The reaction mixture was refluxed overnight under argon. Pyridine was removed under vacuum and the solid residue was purified by column chromatography on alumina using a mixture of methanol-DCM as the eluent (from 0% to 10% of MeOH). 
Further purification by Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2) afforded 24 mg of the pure green product 17 (50% yield).
1H-NMR (CDCl3, 500 MHz) δ, ppm: 1.25 (br., 2H, o-phenoxy), 1.80 (d, 3J = 8.2 Hz, 2H, o-phenoxy), 3.11 (s, 6H), 5.68 (d, 3J = 8.3 Hz, 2H, p-phenoxy), 5.84 (dd, 3J = 8.2 Hz, 3J = 8.2 Hz, 2H), 7.76 (m, 20H, phenyl), 9.04 (d, 4JP-H = 2.7 Hz, 8H, β-pyrr).
The porphyrin H2TPP (1a) (0.1 g, 0.16 mmol, 1 eq.) was dissolved in pyridine (60 ml) under argon before POBr3 (1.1 g, 4.06 mmol, 25 eq.), previously dissolved in pyridine (20 ml), was added dropwise under stirring. The reaction mixture was refluxed during 80 min under argon and then cooled to room temperature. After addition of 100 ml of ethanol, the green mixture was stirred at room temperature for 48 h until full axial ligand substitution was completed and the reaction mixture became purple. The mixture was diluted with DCM (200 ml) and washed with distilled water (3 x 500 ml) to remove pyridine and ethanol. The organic layer was isolated, diluted with 200 ml of petroleum ether and poured directly on a silica gel chromatography column without evaporation of the solvents. Increasing the polarity of the eluent using a DCM-MeOH mixture (90:10) gave the crude product. Further purification by Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2) afforded 84 mg of the pure purple compound 18 in 64% yield.
1H-NMR (CDCl3, 300 MHz) δ, ppm: -2.34 (dq, 3JP-H = 14.0 Hz, 3J = 7.0 Hz, 4H, CH2), -1.79 (td, 3J = 7.3 Hz, 4JP-H = 2.1 Hz, 6H, CH3), 7.79 (m, 12H), 7.95 (m, 8H, o-phenyl), 9.07 (d, 4JP-H = 2.7 Hz, 8H, β-pyrr).
Model turnstile #2, [P(TPP)handle#2]+OTs- (20). The synthesis was adapted from a described procedure for similar compounds. 9,10 The porphyrin [P(TPP)(OH)2]+Br- (16b) (0.04 g, 0.053 mmol, 1 eq.) and handle#2 (14) (0.046 g, 0.058 mmol, 1.1 eq.) were dissolved in acetonitrile (150 ml). The mixture was bubbled with argon during 30 min and then Cs2CO3 (0.023 g, 0.07 mmol, 1.32 eq.)
and DMF (2 drops) were added. The reaction mixture was stirred overnight at 60 °C. Approximately 100 ml of MeCN was evaporated under vacuum and the mixture was diluted with 100 ml of DCM. The suspension was directly poured (without evaporation of the solvents) onto a SiO2 column with a mixture of DCM-methanol as eluent. A brown-red fraction (15% of methanol) was isolated and purified by Bio-Beads S-X1 GPC with chloroform as the eluent to afford 9 mg of a red-brown solid still containing a small amount of impurities.
Method A. The porphyrin H2MPyP (5-monopyridyl-10,15,20-triphenylporphyrin) (1b) (0.1 g, 0.16 mmol, 1 eq.) was dissolved in pyridine (20 ml) and solutions of POCl3 (2 ml, 21.93 mmol, 135 eq.) and PCl5 (0.1 g, 0.49 mmol, 3 eq.) in 2 ml of pyridine were added dropwise under argon. The mixture was refluxed during 72 h under argon. The pyridine was evaporated under vacuum and the green residue was purified by column chromatography on alumina. Gradual addition of methanol up to 5% afforded the crude compound. After evaporation, the solid was further purified by Bio-Beads S-X1 GPC with chloroform as the eluent, affording 48 mg (yield 40%) of compound 21 as a green solid.
Method B. The porphyrin [P(MPyP)(OH)2]+Br- (22) (19 mg, 0.038 mmol, 1 eq.) (see below) was dissolved in chloroform (5 ml) under argon and SOCl2 (2 ml, 27 mmol, 720 eq.) was added. The mixture was stirred during 12 h at room temperature and the solvent, as well as the excess of thionyl chloride, was removed under vacuum. Chloroform (5 ml) was added and the mixture was passed through a Bio-Beads S-X1 GPC column with chloroform as eluent, affording 19 mg (100% yield) of the compound 21 as a green solid.
1H-NMR (CDCl3 + DMSO-d6, 300 MHz) δ, ppm: 7.48 (m, 9H, m- and p-phenyl), 7.63 (d, 3J = 6.7 Hz, 6H, o-phenyl), 7.79 (m, 2H, o-pyridyl), 8.78-8.88 (m, 10H, β-pyrr, m-pyridyl).
The porphyrin H2MPyP (5-monopyridyl-10,15,20-triphenylporphyrin) (1b) (63 mg, 0.1 mmol, 1 eq.)
was dissolved in pyridine (30 ml) under argon and a solution of POBr3 (1.17 g, 4.09 mmol, 40 eq.) in pyridine (20 ml) was added dropwise under stirring. The reaction mixture was refluxed during 1.5 h under argon and then cooled to room temperature. After adding CH2Cl2 (150 ml) to the green suspension, 2 L of distilled water was added and the mixture was stirred at room temperature during 1 day until full hydrolysis of [P(MPyP)(Br)2]+Br- to the dihydroxy complex.
This synthesis was achieved by modifying a reported procedure. 10 The porphyrin [P(MPyP)(OH)2]+Br- (22) (10 mg, 0.013 mmol, 1 eq.) was dissolved in acetonitrile (30 ml) and then ethyl tosylate (6 mg, 0.03 mmol, 2.3 eq.) and Cs2CO3 (5.4 mg, 0.016 mmol, 1.3 eq.) were added. The reaction mixture was stirred at 50 °C for 12 h under argon. After cooling to room temperature, the solvent was evaporated under vacuum and the residue was purified by column chromatography (silica gel, DCM-methanol). Gradually increasing the eluent polarity up to 20% of methanol afforded a crude product that was passed through Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2), affording 4 mg (yield 30%) of the complex 24 as a purple solid.
Method B. The porphyrin H2MPyP (1b) (95 mg, 0.15 mmol, 1 eq.) was dissolved in pyridine (30 ml) under argon and then a solution of POBr3 (1.77 g, 6.17 mmol, 40 eq.) in pyridine (20 ml) was added dropwise under stirring. The mixture was refluxed for 1.5 h under argon and then cooled to room temperature. After addition of ethanol (200 ml), the green mixture was stirred at room temperature for 24 h until full axial ligand substitution occurred and the mixture became purple. DCM (500 ml) was added and the mixture was washed with distilled water several times (5 x 500 ml). The organic layer was poured on a silica gel chromatography column without evaporation of DCM. Increasing the methanol ratio in the eluent up to 15% gave the crude product.
Further purification of the product by Bio-Beads S-X1 GPC (eluent: chloroform-methanol 98:2) afforded 84 mg (yield 66%) of the pure compound 24 as a purple solid.
Approximately 80% of the solvents was evaporated under vacuum and then the mixture was diluted with CH2Cl2 (200 ml). The reaction mixture was washed 4 times with 400 ml of distilled water. The organic layer was isolated and poured directly on an Al2O3 chromatography column without evaporation of CH2Cl2. Increasing the methanol ratio in the eluent (DCM-methanol) up to 8% gave a dark red fraction, which was isolated and passed through a Bio-Beads S-X3 GPC column (eluent: chloroform-methanol 98:2), affording 180 mg (yield 73%) of the compound 25 as a purple solid.
Résumé
After an introduction devoted to the state of the art of molecular machines, the first chapter deals with the design of molecular turnstiles based on P(V) porphyrin complexes. The molecular motion could be controlled reversibly either by using the coordination sites present at the periphery of the system or by pH changes. The second chapter deals with the photophysical properties of the P(V) porphyrins obtained, and more particularly with their ability to generate singlet oxygen, with a potential application in Photodynamic Therapy (PDT). The third chapter concerns the development of a complex containing two Zn(II) porphyrins whose relative motion could be reversibly blocked by using the axial positions of the metal cations.
Keywords: molecular machine, molecular turnstile, supramolecular chemistry, P(V) porphyrin, coordination chemistry, singlet oxygen, Zn(II) porphyrin.
Abstract
The manuscript focuses on molecular machines and the control of their movement. Two different devices have been designed, synthesized and characterized. Moreover, a series of new potential photosensitizers was obtained.
The introduction gives a general overview of the molecular machines reported during the past 20 years. The first chapter describes the synthesis of molecular turnstiles based on P(V) porphyrins. The molecular motion was controlled reversibly either by coordination chemistry or by changing the pH. The second part is dedicated to the study of the photophysical properties of P(V) porphyrins and especially their capacity to generate singlet oxygen under irradiation, making them potential photosensitizers that can be used in Photodynamic Therapy (PDT) or as catalysts. The third chapter is devoted to the study of a molecular break based on a Zn(II) porphyrin dimer. The control of the movement was performed using the coordination of a bidentate ligand in the axial position of the metal cations.
Keywords: molecular machines, molecular turnstiles, supramolecular chemistry, P(V) porphyrins, coordination chemistry, singlet oxygen, Zn(II) porphyrins.
Analysis and control of multi-leveled opinions spreading in social networks
Vineeth S. Varma, Irinel Constantin Morarescu, Yezekael Hayel

Keywords: Opinion dynamics, Social computing and networks, Markov chains, agent-based models

Abstract: This paper proposes and analyzes a stochastic multi-agent opinion dynamics model. We are interested in a multi-leveled opinion of each agent which is randomly influenced by the binary actions of its neighbors. It is shown that, as far as the number of agents in the network is finite, the model asymptotically produces consensus. The consensus value corresponds to one of the absorbing states of the associated Markov system. However, when the number of agents is large, we emphasize that partial agreements are reached and these transient states are metastable, i.e., the expected persistence duration is arbitrarily large. These states are characterized using an N-intertwined mean field approximation (NIMFA) for the Markov system. Moreover, we analyze a simple and easily implementable way of controlling the opinion in the network. Numerical simulations validate the proposed analysis.

I. INTRODUCTION

The analysis and control of complex sociological phenomena such as consensus, clustering and propagation is a challenging scientific problem. In the past decades much progress has been made both on the development and on the analysis of new models that capture more features characterizing the social network behavior. A possible classification of the existing models can be done by looking at the evolution space of the opinions. More precisely, we find models in which the opinions evolve in a discrete set of values; these come from statistical physics, and the most employed are the Ising [Ising, "Contribution to the theory of ferromagnetism"], voter [Clifford, "A model for spatial conflict"] and Sznajd [Sznajd-Weron, "Opinion evolution in closed community"] models.
A second class is given by the models that consider a continuous set of opinion values [Degroot, "Reaching a consensus"], [Hegselmann, "Opinion dynamics and bounded confidence models, analysis, and simulation"], [Morȃrescu, "Opinion dynamics with decaying confidence: Application to community detection in graphs"]. While in some models the interaction network is fixed, in some others it is state-dependent. Although some studies propose repulsive interactions [Altafini, "Consensus problems on networks with antagonistic interactions"], the predominant tendency of empirical studies is to emphasize the attractive action of the neighbors' opinions. We can also note that many studies in the literature focus on the emergence of consensus in social networks [Galam, "Towards a theory of collective phenomena: Consensus and attitude changes in groups"], [Axelrod, "The dissemination of culture: A model with local convergence and global polarization"], [Fortunato, "Vector opinion dynamics in a bounded confidence consensus model"], while some others point out local agreements leading to clustering [Hegselmann, "Opinion dynamics and bounded confidence models, analysis, and simulation"], [Morȃrescu, "Opinion dynamics with decaying confidence: Application to community detection in graphs"]. Most of the existing models, including the aforementioned ones, share the idea that an individual's opinion is influenced by the opinions of his neighbors. Nevertheless, it is very hard to estimate these opinions and often one may access only a quantized version of them. Following this idea, a mix of continuous opinions with discrete actions (CODA) was proposed in [Martins, "Continuous opinions and discrete actions in opinion dynamics problems"].
This model reflects the fact that even if we often face binary choices or actions which are visible by our neighbors, the opinions evolve in a continuous space of values which are not explicitly visible to the neighbors. A multi-agent system with a CODA model was proposed and analyzed in [Chowdhury, "Continuous opinions and discrete actions in social networks: a multi-agent system approach"]. It was shown that this deterministic model leads to a variety of asymptotic behaviors including consensus and clustering. In [Varma, "Modeling stochastic dynamics of agents with multi-leveled opinions and binary actions"], the model in [Chowdhury] was reformulated as a discrete interactive Markov chain. One advantage of this approach is that it also allows analysis of the behavior of infinite populations partitioned into a certain number of opinion classes. Due to the complexity of the opinion dynamics, we believe that stochastic models are more suitable than deterministic ones. Indeed, we can propose a realistic deterministic update rule, but many random events will still influence the interaction network and consequently the opinion dynamics. Following the development in [Varma], we propose here a continuous-time interactive Markov chain model that approximates the model in [Chowdhury].

* CRAN (CNRS-Univ. of Lorraine), Nancy, France, {vineeth.satheeskumar-varma,constantin.morarescu}@univ-lorraine.fr
‡ University of Avignon, Avignon, France, [email protected]
This work was partially funded by the CNRS PEPS project YPSOC.
Although the asymptotic behavior of the model can be given by characterizing the absorbing states of the Markov chain, the convergence time can be arbitrarily large and transient but persistent local agreements, called metastable equilibria, can appear. These equilibria are very interesting because they describe the finite-time behavior of the network. Consequently, we consider in this paper an N-intertwined mean field approximation (NIMFA) based approach in order to characterize the metastable equilibria of the Markov system. It is noteworthy that NIMFA was successfully used to analyze and validate some epidemiological models [Van Mieghem, "Virus spread in networks"], [Trajanovski, "Decentralized protection strategies against SIS epidemics in networks"].
In this work, we model the social network as a multi-agent system in which each agent represents an individual whose state is his opinion. This opinion can be understood as the preference of the agent towards performing a binary action, i.e., the action can be 0 or 1. These agents are interconnected through a directed interaction graph, whose edge weights represent the trust given by an agent to his neighbor. We propose continuous-time opinion dynamics in which the opinions are discrete and belong to a given set that is fixed a priori. Each agent is influenced randomly by the actions of his neighboring agents and consequently influences its neighbors. Therefore the opinions of the agents are an intrinsic variable that is hidden from the other agents; the only visible variable is the action. As an example, consider the opinion of users regarding two products: red and blue cars. A user may prefer red cars strongly, while some other users might be more indifferent. However, what the other users see (and are therefore influenced by) is only what the user buys, which is the action taken. The contributions of this paper can be summarized as follows.
Firstly, we formulate and analyze a stochastic version of the CODA model proposed in [Chowdhury]. Secondly, we characterize the local agreements which are persistent for a long duration by using the NIMFA for the original Markov system. Thirdly, we provide conditions for the preservation of the main action inside one cluster as well as for the propagation of actions. Finally, we study how an external entity can control the opinion dynamics by manipulating the network edge weights. In particular, we study how such a control can be applied to a cluster for the preservation or propagation of its opinion.
The rest of the paper is organized as follows. Section II introduces the main notation and concepts and provides a description of the model used throughout the paper. The analysis of the asymptotic behavior of opinions described by this stochastic model is provided in Section III. The presented results are valid for any connected network with a finite number of agents. Moreover, Section III contains the description of the NIMFA model and an algorithm to compute its equilibria. In Section IV we emphasize conditions for the preservation of the main action (corresponding to a metastable state) in some clusters as well as conditions for action propagation, both without any control and in the presence of some control. The results of our work are numerically illustrated in Section V. The paper ends with some concluding remarks and perspectives for further developments.
Preliminaries: We use E for the expectation of a random variable, 1_A(x) for the indicator function which takes the value 1 when x ∈ A and 0 otherwise, R_+ for the set of non-negative reals, and N = {1, 2, ...} for the set of natural numbers.

II. MODEL

Throughout the paper we consider N ∈ N, an even number of possible opinion levels, and the set of agents K = {1, 2, ..., K} with K ∈ N.
Each agent i is characterized at time t ∈ R_+ by its opinion represented as a scalar X_i(t) ∈ Θ, where Θ = {θ_1, θ_2, ..., θ_N} is the discrete set of possible opinions, such that θ_n ∈ (0,1)\{0.5} and θ_n < θ_{n+1} for all n ∈ {1, 2, ..., N}. Moreover, Θ is constructed such that θ_{N/2} < 0.5 and θ_{N/2+1} > 0.5.
In the following, let us introduce some graph notions allowing us to define the interaction structure in the social network under consideration.
Definition 1 (Directed graph): A weighted directed graph G is a couple (K, A) with K being a finite set denoting the vertices, and A being a K × K matrix with elements a_ij denoting the trust given by agent i to agent j. We say that agent j is a neighbor of agent i if a_ij > 0. We denote by τ_i the total trust in the network for agent i, τ_i = Σ_{j=1}^{K} a_ij. Agent i is said to be connected with agent j if G contains a directed path from i to j, i.e., if there exists at least one sequence (i = i_1, i_2, ..., i_{p+1} = j) such that a_{i_k, i_{k+1}} > 0 for all k ∈ {1, 2, ..., p}.
Definition 2 (Strongly connected): The graph G is strongly connected if any two distinct agents i, j ∈ K are connected.
In the sequel we suppose the following holds true.
Assumption 1: The graph (K, A) modeling the interactions in the network is strongly connected.
The action Q_i(t) taken by agent i at time t is defined by the opinion X_i(t) through the relation Q_i(t) = [X_i(t)], where [·] denotes the nearest integer function. This means that if an agent has an opinion greater than 0.5, it will take the action 1, and 0 otherwise. This kind of opinion quantization is suitable for many practical applications. For example, an agent may support the left or right political party, with various opinion levels (opinions close to 0 or 1 represent a stronger preference); however, in an election, the agent's action is to vote, with exactly two choices (left or right). Similarly, an agent might have to choose between two cars or other types of merchandise like cola, as mentioned in the introduction.
Similarly, an agent might have to choose between two cars or other types of merchandise like cola as mentioned in the introduction. Although its preference for one product is not of the type 0 or 1, its action will be, since it cannot buy fractions of cars, but one of them. A. Opinion dynamics In this work, we look at the evolution of opinions of the agents based on their mutual influence. We also account for the inertia of opinion, i.e., when the opinion of the agent is closer to 0.5, he is more likely to shift as he is less decisive, whereas someone with a strong opinion (close to 1 or 0) is less likely to shift his opinion as he is more convinced by his opinion. The opinion of agent j may shift towards the actions of its neighbors with a rate β n while X j (t) = θ n . If no action is naturally preferred by the opinion dynamics, then we construct θ n = 1 -θ N +1-n and assume that β n = β N +1-n for all n ∈ {1, 2, . . . , N }. At each time t ∈ R + we denote the vector collecting all the opinions in the network by X(t) = (X 1 (t), . . . , X K (t)). Notice that the evolution of X(t) is described by a continuous time Markov process with N K states and its analysis is complicated even for small number opinion levels and relatively small number of agents. The stochastic transition rate of agent i shifting its opinion to the right, i.e. to have opinion θ n+1 when at opinion θ n , with n ∈ {1, 2, . . . , N -1}, is given by β n N j=1 a ij 1 (0.5,1] (X j (t)) = β n K j=1 a ij Q j (t) = β n R i (t). Similarly, the transition rate to the left, i.e. to shift from θ n to θ n-1 is given by β n N j=1 a ij 1 [0,0.5) (X j (t)) = β n N j=1 a ij (1-Q j (t)) = β n L i (t). for n ∈ {2, . . . , N }. Therefore, we can write the infinitesimal generator M i,t (a tri-diagonal matrix of size N × N ) for an agent i as: M i,t =    -β 1 R i (t) β 1 R i (t) 0 . . . β 2 L i (t) -β 2 τ i β 2 R i (t) . . . . . . 
(1)

with the element in the n-th row and m-th column denoted M_{i,t}(n, m), where

M_{i,t}(n, n+1) = β_n R_i(t) for all n ∈ {1, ..., N-1},
M_{i,t}(n, n-1) = β_n L_i(t) for all n ∈ {2, ..., N},
M_{i,t}(m, n) = 0 for all |m - n| > 1,

and

M_{i,t}(n, n) = -β_1 R_i(t) for n = 1,
M_{i,t}(n, n) = -β_n τ_i for n ∈ {2, ..., N-1},
M_{i,t}(n, n) = -β_N L_i(t) for n = N.

Let v_{i,n}(t) := E[1_{θ_n}(X_i(t))] = Pr(X_i(t) = θ_n) be the probability of opinion level θ_n for agent i at time t. Then, in order to propose an analysis of the stochastic process introduced above, we may consider the mean-field approximation obtained by replacing the transitions by their expectations. The expected transition rate from state n to state n+1, for K → ∞, is then given by:

β_n Σ_{j=1}^{K} a_ij E[1_{(0.5,1]}(X_j(t))] = β_n Σ_{j=1}^{K} a_ij Σ_{m=N/2+1}^{N} v_{j,m}(t).

We have a similar expression for the transition between states n and n-1.

III. STEADY STATE ANALYSIS

Define θ- = (θ_1, ..., θ_1) and θ+ = (θ_N, ..., θ_N), the states where all the agents in the network have an identical opinion, corresponding to the two extreme opinions.
Proposition 1: Under Assumption 1, the continuous-time Markov process X(t), with (1) as the infinitesimal generator corresponding to each agent, has exactly two absorbing states, X(t) = θ+ and X(t) = θ-.
Proof: Due to space limitation the proof is omitted.
Considering the NIMFA approximation, we get that the dynamics of the opinion for an agent i are given by:

v̇_{i,1} = -β_1 r_i v_{i,1} + β_2 l_i v_{i,2}
v̇_{i,n} = -β_n r_i v_{i,n} + β_{n+1} l_i v_{i,n+1} - β_n l_i v_{i,n} + β_{n-1} r_i v_{i,n-1}        (2)
v̇_{i,N} = -β_N l_i v_{i,N} + β_{N-1} r_i v_{i,N-1}

for all i ∈ K and 1 < n < N, where

l_i = Σ_{j∈K} a_ij E[1 - Q_j] = Σ_{j∈K} Σ_{n=1}^{N/2} a_ij v_{j,n},
r_i = Σ_{j∈K} a_ij E[Q_j] = Σ_{j∈K} Σ_{n=N/2+1}^{N} a_ij v_{j,n},        (3)

and Σ_n v_{i,n} = 1. We can easily verify that X_i = θ_1, i.e., v_{i,1} = 1 for all i, is an equilibrium for the above set of equations.
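The transition rates above fully specify the finite-K Markov process, which can be simulated with a standard Gillespie-type algorithm. The following Python sketch is ours (not from the paper; function and variable names are hypothetical); opinion levels are encoded as integers 1..N and actions follow the quantization rule Q_j = 1 iff the level exceeds N/2:

```python
import random

def simulate(A, beta, N, x0, t_max, rng=random):
    """Gillespie simulation of the multi-level opinion process.

    A     : K x K nested lists, A[i][j] = trust a_ij of agent i in agent j
    beta  : list of the N jump rates beta_1..beta_N (0-indexed)
    N     : even number of opinion levels
    x0    : initial opinion levels, integers in 1..N
    t_max : time horizon
    """
    K = len(A)
    x = list(x0)
    t = 0.0
    while t < t_max:
        # binary actions derived from the current levels
        Q = [1 if xj > N // 2 else 0 for xj in x]
        events = []  # tuples (rate, agent, +1 for right / -1 for left)
        for i in range(K):
            R = sum(A[i][j] * Q[j] for j in range(K))        # R_i(t)
            L = sum(A[i][j] * (1 - Q[j]) for j in range(K))  # L_i(t)
            b = beta[x[i] - 1]
            if x[i] < N:
                events.append((b * R, i, +1))  # shift to theta_{n+1}
            if x[i] > 1:
                events.append((b * L, i, -1))  # shift to theta_{n-1}
        total = sum(r for r, _, _ in events)
        if total == 0:  # absorbing state theta+ or theta- reached
            break
        t += rng.expovariate(total)            # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for r, i, d in events:                 # pick one event proportionally
            acc += r
            if u <= acc:
                x[i] += d
                break
    return x
```

For a finite network the trajectory eventually hits one of the two absorbing states of Proposition 1, but the excursions before absorption can be very long; these long-lived partial agreements are the metastable states that the NIMFA rest points characterize.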
When v_{i,1} = 1 for all i, we have v_{i,n} = 0 for all n ≥ 2 and, as a result, l_i = τ_i and r_i = 0 for all i, which gives v̇_{i,n} = 0 for all i, n. Excluding the extreme solutions θ^+ and θ^-, the non-linearity of system (2) could give rise to the existence of interior rest points which are locally stable. Such rest points are referred to as metastable states in Physics. Metastability of Markov processes is precisely defined in (Huisinga et al.), where the exit times from these metastable states are shown to approach infinity when the network size is arbitrarily large.

A. Rest points of the dynamics

For a given r_i = E[R_i(t)], the equilibrium state v*_{i,n} must satisfy the following conditions:

0 = -β_1 (r_i/τ_i) v*_{i,1} + β_2 ((τ_i - r_i)/τ_i) v*_{i,2}
0 = -β_n v*_{i,n} + β_{n+1} ((τ_i - r_i)/τ_i) v*_{i,n+1} + β_{n-1} (r_i/τ_i) v*_{i,n-1}
0 = -β_N ((τ_i - r_i)/τ_i) v*_{i,N} + β_{N-1} (r_i/τ_i) v*_{i,N-1}   (4)

We can express any v*_{i,n} in terms of v*_{i,1}; after simplification,

v*_{i,n} = (β_1/β_n) (r_i/(τ_i - r_i))^{n-1} v*_{i,1}.   (5)

As the sum of v_{i,n} over n must be 1, we can solve for v*_{i,1} as

v*_{i,1} = 1 / Σ_{n=1}^{N} (β_1/β_n) (r_i/(τ_i - r_i))^{n-1}.   (6)

We can then use this relationship to construct a fixed-point algorithm that computes a rest point of the global opinion dynamics for all users.

Data: Number of agents K, the edge weights a_{i,j} for all i, j ∈ K, initial values v_{i,n}(0), a convergence factor ε << 1, the number of opinion levels N and the jump rates β_n for all n ∈ {1, . . . , N}.
Result: v(m) at the end of the loop is close to a fixed point of the opinion dynamics.
do
  m ← m + 1;
  Set r_i(m) = Σ_{k∈K} Σ_{n=N/2+1}^{N} a_{i,k} v_{k,n}(m-1)   (7)   for all i ∈ K;
  Set v_{i,n}(m) = [(β_1/β_n) (r_i(m)/(τ_i - r_i(m)))^{n-1}] / [Σ_{l=1}^{N} (β_1/β_l) (r_i(m)/(τ_i - r_i(m)))^{l-1}]   (8)   for all n ∈ {1, . . . , N}, i ∈ K;
while ||v(m) - v(m-1)|| ≥ ε;
Algorithm 1: Algorithm to find a fixed point of the NIMFA.

Additionally, we can obtain some nice properties on the relation between r_i and v_{i,n} by studying the following function.
Lemma 1: Consider the function f : [0, 1] → [0, 1] defined as

f(x) := [Σ_{n=N/2+1}^{N} (β_1/β_n) (x/(1-x))^{n-1}] / [Σ_{n=1}^{N} (β_1/β_n) (x/(1-x))^{n-1}]   (9)

for all x ∈ [0, 1), and with f(1) = 1. We have that f(x) is a monotonically increasing continuous function and takes the values f(0) = 0, f(0.5) = 0.5 and lim_{x→1} f(x) = 1.

Proof: Due to space limitation the proof is omitted.

From (6) and (5), we can use f(r_i/τ_i) to calculate the probability that an agent i will take the action 1, i.e., Σ_{n=N/2+1}^{N} v*_{i,n} = f(r_i/τ_i).

IV. OPINION SPREADING

A way to model generic interaction networks is to consider that they are the union of a number of clusters (see for instance (Morărescu and Girard) for a cluster detection algorithm). Basically, a cluster C is a group of agents in which the opinion of any agent in C is influenced more by the other agents in C than by agents outside C. When the interactions between clusters are deterministic and very weak, we can use a two-time-scale modeling as in (Martin et al.) to analyze the overall behavior of the network. In the stochastic framework, knowing only a quantized version of the opinions, we propose here a development aiming at characterizing the majority actions in clusters. The notion of cluster can be mathematically formalized as follows.

Definition 3 (Cluster): A subset of agents C ⊂ K defines a cluster when, for all i, j ∈ C and some λ > 0.5, the following inequality holds:

a_ij ≥ λ τ_i / |C|.   (10)

The maximum λ which satisfies this inequality for all i, j ∈ C is called the cluster coefficient. For any given set of agents C ⊂ K, let us denote

ν^C_- = Σ_{j∈C} Σ_{n=1}^{N/2} v_{j,n} / |C|,   and   ν^C_+ = Σ_{j∈C} Σ_{n=N/2+1}^{N} v_{j,n} / |C|.

These values represent the expected fraction of agents within a set C with action 0 and 1, respectively.
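The function f of (9) and the fixed-point condition x = f(λx), used below for action preservation, can be explored numerically. The sketch below is ours; the function names, the iteration started just below 1, and the iteration count are assumptions:

```python
import numpy as np

def f(x, beta):
    """The function f of Lemma 1, eq. (9), for x in [0, 1)."""
    N = len(beta)
    w = (beta[0] / np.asarray(beta)) * (x / (1.0 - x)) ** np.arange(N)
    return w[N // 2:].sum() / w.sum()

def metastable_fraction(lam, beta, iters=2000):
    """Iterate x <- f(lam * x) from just below 1; if the largest fixed point
    exceeds 0.5, the cluster preserves its action in a metastable state."""
    x = 0.999
    for _ in range(iters):
        x = f(lam * x, beta)
    return x if x > 0.5 else None
```

With the rates used in the numerical section, β = (0.01, 0.02, 0.02, 0.01), a cluster coefficient λ ≈ 0.83 yields a fixed point close to the value 0.95 reported there, while λ = 0.636 yields no fixed point above 0.5, matching the preservation threshold λ > 0.8 reported for the controlled experiment.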
We also denote by ν^C_n the average probability of agents in a cluster to have opinion θ_n, i.e., ν^C_n = Σ_{i∈C} v_{i,n} / |C|. Now we can use the cluster definition (10) to obtain the following proposition.

Proposition 2: The dynamics of the average opinion probabilities in a cluster C ⊂ K can be written as:

ν̇^C_1 / κ = -β_1 ν^C_1 [λ ν^C_+ + (1-λ)δ] + β_2 ν^C_2 [λ ν^C_- + (1-λ)(1-δ)]
ν̇^C_n / κ = -β_n ν^C_n + β_{n+1} ν^C_{n+1} [λ ν^C_- + (1-λ)(1-δ)] + β_{n-1} ν^C_{n-1} [λ ν^C_+ + (1-λ)δ]
ν̇^C_N / κ = -β_N ν^C_N [λ ν^C_- + (1-λ)(1-δ)] + β_{N-1} ν^C_{N-1} [λ ν^C_+ + (1-λ)δ]   (11)

where κ = Σ_{i∈C} τ_i / |C| and δ ∈ [0, 1].

Proof: Due to space limitation the proof is omitted.

The result above shows that, instead of looking at the individual opinions of agents inside a cluster, we can use (11) to describe the dynamics of the expected fraction of agents in a cluster with certain opinions.

A. Action preservation

One question that can be asked in this context is what are sufficient conditions for the preservation of actions in a cluster, i.e., for the agents to preserve the majority action inside the cluster C for a long time, regardless of external opinions. In the limit, all the agents will have identical opinions corresponding to an absorbing state of the network, but clusters with large enough λ may preserve their action in metastable states (for a long time).

Proposition 3: If ∃ x ∈ (0.5, 1) such that x = f(λx), then a cluster C with coefficient λ preserves its action in a metastable state. If no such x exists, then the only equilibrium when the perturbation term δ = 1 in (11) is at ν^C_+ = 1, and when δ = 0, the equilibrium is at ν^C_+ = 0.

Proof: Due to space limitation the proof is omitted.
B. Propagation of actions

In the previous subsection, we have seen that a cluster C can preserve its majority action regardless of external opinion if it has a sufficiently large λ. If there are agents outside C with some connections to agents in C, then this action can be propagated. Let τ^C_i = Σ_{j∈C} a_ij denote the total trust of agent i in the cluster C. Let the cluster C be such that its λ is large enough for a fixed point ν̄^C_+ > 0.5 of ν̄^C_+ = f(λ ν̄^C_+) to exist.

Proposition 4: If the cluster C is preserving an action 1 with at least a fraction ν̄^C_+ of the population in C having action 1, then the probability of any agent i ∈ K \ C to choose action 1 at equilibrium is bounded as follows:

f(ν̄^C_+ τ^C_i / τ_i) ≤ Pr(Q*_i = 1) ≤ 1 - f(ν̄^C_- τ^C_i / τ_i),   (12)

where Q*_i is the action taken by i when the system is in a non-absorbing NIMFA equilibrium state.

Proof: Due to space limitation the proof is omitted.

C. Control of opinion spreading

Consider that an external entity, a firm for example, wants to control the actions of the agents in the network. In particular, it wants to ensure that a cluster C preserves its initial action. Practically, a set of consumers in C might prefer the product of the firm, and the firm wants to ensure that competing firms do not sway and convert their opinions. For this purpose, the firm tries to reinforce the opinion spread within C and from C to its followers, by strengthening the influence of agents in C. As an example, the firm might pay the social network, such as Facebook or Twitter, to make messages, in the form of wall posts or tweets, more visible when the source of these messages is within C. This implies that a_ij, for any i ∈ K and j ∈ C, is now modified to ā_ij = a_ij(1 + u), where u denotes the control parameter.
Here, u = 0 represents the normal state of the network and u > 0 implies that agents in the network will view messages from agents in C more frequently or more easily (with a higher priority). Thus, the cluster coefficient of C is no longer the initial cluster coefficient λ; the new coefficient is increased depending on u.

Proposition 5: Let λ be the cluster coefficient of C. Then a control which strengthens the connections from C, from a_ij to ā_ij = a_ij(1 + u) for all j ∈ C and i ∈ K, results in a new cluster coefficient λ̄ ≥ λ given by

λ̄ ≥ λ (1 + u) / (1 + λu).   (13)

This yields a threshold on the control required for preservation as the smallest u* ≥ 0 satisfying

x = f(λ [(1 + u*) / (1 + λu*)] x)   (14)

for some x ∈ (0.5, 1]. Additionally, any follower of C has its probability of action 1 modified to

f(ν̄^C_+ τ^C_i (1 + u) / (τ_i + τ^C_i u)) ≤ Pr(Q*_i = 1) ≤ 1 - f(ν̄^C_- τ^C_i (1 + u) / (τ_i + τ^C_i u)).   (15)

Proof: Due to space limitation the proof is omitted.

As λ ≤ 1, we always have λ̄ ≥ λ, and lim_{u→∞} λ̄ = 1. This implies that as long as we apply a sufficiently large control u, any cluster C will be able to preserve its action (as a result of Corollary 1). Additionally, this also means that agents who are followers of C will now be influenced to a greater degree by C.

V. NUMERICAL RESULTS

For all simulations studied, unless otherwise mentioned, we take Θ = {0.2, 0.4, 0.6, 0.8}, β_1 = β_4 = 0.01 per unit of time and β_2 = β_3 = 0.02 per unit of time. For the first set of simulations, we take the graph structure with K = 120 agents indicated in Figure 1. We first randomly create links between any i, j ∈ K with a probability of 0.05; when such a link exists, a_ij = 1, and a_ij = 0 otherwise. Then we construct a cluster C_1 with the agents i = 1, 2, . . . , 40, and C_2 with agents i = 81, 82, . . . , 120. We also label agents 40 < i ≤ 60 as set B_1 and agents 60 < i ≤ 80 as set B_2.
To provide the relevant cluster structure (|C_1| = |C_2| = 40 and |B_1| = |B_2| = 20, see Fig. 1), we set the edge weights a_ij = 1 within C_1 and C_2 and on the trust links from B_1 to C_1 and from B_2 to C_1 and C_2; in particular, τ^{C_1}_i/τ_i, τ^{C_2}_i/τ_i ≥ 0.444 for all 60 < i ≤ 80.

A. Propagation of actions

We find that the largest x satisfying x = f(λ_1 x) is 0.95 and that satisfying x = f(λ_2 x) is 0.94. Therefore, if all agents in C_1 start with opinion 0.2 and all agents in C_2 start with opinion 0.8, we predict from Proposition 3 that ν^{C_1}_- ≥ 0.95 and ν^{C_2}_+ ≥ 0.94 in the metastable state. Additionally, applying Proposition 4 yields ν^{B_1}_- ≥ f(0.95 × 0.714) = 0.85, ν^{B_2}_- ≥ f(0.95 × 0.444) = 0.324 and ν^{B_2}_+ ≥ f(0.94 × 0.444) = 0.315.

Simulations of the continuous-time Markov chain show that our theoretical results are valid even when the cluster size is 40. Figure 2 plots the population fraction of agents with action 1 within a certain set for one simulation. We look at this value in the clusters C_1 and C_2 as well as the sets B_1 and B_2. C_1 and C_2 are seen to preserve their actions, which are opposite to each other. Since B_1 has a significant trust in C_1 alone, the opinion of C_1 is propagated to B_1. However, as B_2 trusts both C_1 and C_2, its opinion is influenced by the two contradicting actions, resulting in some agents with action 1 and the rest with action 0.

B. Control of opinion spreading

We consider the same graph structure used in Fig. 2, illustrated in Fig. 1, but with cluster C_2 having its first 25 agents removed. As a result, the cluster coefficient of this cluster becomes λ_2 = 0.636, which no longer allows for the preservation of its action, as seen in Figure 3a. From Proposition 3, we find that a λ > 0.8 ensures preservation.
Therefore, we introduce a control u = 1.5 which enhances the visibility of actions spread by agents in C_2 by a factor of 2.5, resulting in ā_ij = 2.5 a_ij for all j ∈ C_2, i ∈ K. This results in a new λ̄_2 ≥ 0.8 according to Proposition 5, verified by numerical calculation on the graph to be λ̄_2 = 0.814. As a result, we observe from Figure 3b that the cluster C_2 action is preserved for a long duration.

With a finite number of agents, the stochastic system eventually reaches one of the two absorbing states, whereas, when this number becomes large enough, it can enter into a quasi-stationary regime in which partial agreements are reached. This type of phenomenon has been observed in all-to-all and cluster-type topologies. Additionally, we have also studied the impact of an external entity which tries to control the actions of the users in a social network by manipulating the network connections. This can be interpreted as a company paying a social platform to make content from certain groups of agents more visible or frequent. We have shown how such a control can enable a community or cluster of agents to preserve or propagate its opinion better.

Fig. 1: Structure of the graph. Any two agents in K may be connected with a 0.05 probability. All agents within a cluster are connected, and the arrows indicate directed connections.

Fig. 2: Simulation of Σ_{i∈S} Q_i(t) for S = C_1, C_2, B_1, B_2. We see that C_1 and C_2 preserve their initial actions as given by Proposition 3. We also see that, as B_1 follows only C_1, its action is close to that of C_1. As B_2 follows both C_1 and C_2, which have contradicting actions, it has a very mixed opinion which keeps changing randomly in time.

(a) Simulation with no control implemented. (b) Simulation with u = 1 on cluster 2.
Fig. 3: Average population with action 1 plotted vs time for each set.

In the construction of Fig. 1, the edge weights are set to a_ij = 1 for all:
• i, j ∈ C_1 or i, j ∈ C_2, making C_1 and C_2 clusters with coefficients λ_1 = 0.833 and λ_2 = 0.816 (for the particular random graph generated for this simulation);
• 40 < i ≤ 60 and 1 ≤ j ≤ 20, making agents in B_1 trust C_1, with τ^{C_1}_i/τ_i ≥ 0.714 for all 40 < i ≤ 60;
• 60 < i ≤ 80 and (1 ≤ j ≤ 20 or 80 < j ≤ 120), making agents in B_2 trust both C_1 and C_2, with τ^{C_1}_i/τ_i, τ^{C_2}_i/τ_i ≥ 0.444 for all 60 < i ≤ 80.
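For reproducibility, one possible reading of this construction in code; the sketch is ours, and the seed, helper names, and the exact handling of background links overlapping the prescribed blocks are assumptions:

```python
import numpy as np

def build_graph(seed=0):
    """Sketch of the Fig. 1 construction: p = 0.05 background links plus
    unit weights inside C1, C2 and on the B1/B2 trust links."""
    rng = np.random.default_rng(seed)
    K = 120
    A = (rng.random((K, K)) < 0.05).astype(float)   # random background links
    C1, C2 = np.arange(0, 40), np.arange(80, 120)
    B1, B2 = np.arange(40, 60), np.arange(60, 80)
    A[np.ix_(C1, C1)] = 1.0                         # intra-cluster trust
    A[np.ix_(C2, C2)] = 1.0
    A[np.ix_(B1, np.arange(20))] = 1.0              # B1 follows part of C1
    A[np.ix_(B2, np.arange(20))] = 1.0              # B2 follows C1 ...
    A[np.ix_(B2, C2)] = 1.0                         # ... and C2
    np.fill_diagonal(A, 0.0)                        # no self-trust
    return A, C1, C2, B1, B2

def cluster_coefficient(A, C):
    """Largest lambda with a_ij >= lambda * tau_i / |C| for all i != j in C."""
    tau = A.sum(axis=1)
    return min(A[i, j] * len(C) / tau[i] for i in C for j in C if i != j)
```

On a typical draw, the coefficients come out close to the λ_1 = 0.833 and λ_2 = 0.816 reported above, and the B_2 trust ratios satisfy the 0.444 bound.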
2018
https://hal.science/hal-01745260v2/file/IFAC_ADHS.pdf
Irinel-Constantin Morărescu, Vineeth Satheeskumar Varma (email: [email protected]), Lucian Buşoniu (email: [email protected]), Samson Lasaulce (email: [email protected])

Space-time budget allocation for marketing over social networks

Keywords: Social networks, hybrid systems, optimal control

We address formally the problem of opinion dynamics when the agents of a social network (e.g., consumers) are not only influenced by their neighbors but also by an external influential entity referred to as a marketer. The influential entity tries to sway the overall opinion to its own side by using a specific influence budget during discrete-time advertising campaigns; consequently, the overall closed-loop dynamics becomes a linear-impulsive (hybrid) one. The main technical issue addressed is finding how the marketer should allocate its budget over time (through marketing campaigns) and over space (among the agents) such that the agents' opinion be as close as possible to a desired opinion; for instance, the marketer may prioritize certain agents over others based on their influence in the social graph. The corresponding space-time allocation problem is formulated and solved for several special cases of practical interest. Valuable insights can be extracted from our analysis. For instance, for most cases we prove that the marketer has an interest in investing most of its budget at the beginning of the process and that the budget should be shared among agents according to the famous water-filling allocation rule. Numerical examples illustrate the analysis.

INTRODUCTION

The last decades have witnessed an increasing interest in the study of opinion dynamics in social networks. This is mainly motivated by the fact that people's opinions are increasingly influenced through digital social networks. Therefore, governmental institutions but also private companies consider marketing over social networks a key tool for promoting new products or ideas.
However, most of the existing studies focus on the analysis of models without control, i.e., they study the convergence, dynamical patterns or asymptotic configurations of the open-loop dynamics. Various mathematical models (Degroot, 1974; Friedkin and Johnsen, 1990; Deffuant et al.; Hegselmann and Krause; Altafini; Chowdhury et al.) have been proposed to capture more features of these complex dynamics. Empirical models based on in vitro and in vivo experiments have also been developed (Davis; Ohtsubo; Kerckhove et al.). The emergence of consensus received particular attention in opinion dynamics (Axelrod; Galam and Moscovici).
While some mathematical models naturally lead to consensus (Degroot, 1974; Friedkin and Johnsen, 1990), others lead to network clustering (Hegselmann and Krause; Altafini; Morărescu and Girard). In order to enforce consensus, some recent studies propose the control of one or a few agents, see (Caponigro et al.; Dietrich et al.). Besides these methods of controlling opinion dynamics towards consensus, we also find recent attempts to control the discrete-time dynamics of opinions such that as many agents as possible reach a certain set after a finite number of influences (Hegselmann et al.). In (Masucci et al.), the authors consider multiple influential entities competing to control the opinion of consumers in a game-theoretical setting. However, this work assumes an undirected graph and a voter model for opinion dynamics, resulting in strategies that are independent of the node centrality. On the other hand, (Varma et al.) considers a similar competition with opinion dynamics over a directed graph and no budget constraints. In this paper, we consider a different problem that requires minimizing the distance between opinions and a desired value using a given control/marketing budget.
Moreover, we assume that the maximal marketing influence cannot instantaneously shift the opinion of one individual to the desired value. Basically, we consider continuous-time opinion dynamics and we want to design a marketing strategy that minimizes the distance between the opinions and the desired value after a given finite number of discrete-time campaigns under budget constraints. To solve this control design problem we write the overall closed-loop dynamics as a linear-impulsive system and we show that the optimal strategy is to influence as much as possible the most central/popular individuals of the network (see (Bonacich) for a formal definition of centrality). To the best of our knowledge, our work is different from all the existing results on opinion dynamics control. Unlike the few previous works on the control of opinions in social networks, we do not control the state of the influencing entity. Instead, we consider that value as fixed and we control the influence weight that the marketer has on different individuals of the social network. By doing so, we emphasize the advantages of targeted marketing with respect to broadcasting strategies when budget constraints have to be taken into account. Moreover, we show that, although the individual control action u_i(t_k) at time t_k can be chosen in the interval [0, ū], the optimal choice is discrete: either 0 or ū. The rest of the paper is organized as follows. Section 2 formulates the opinion dynamics control problem under consideration. A useful preliminary result for solving a specific optimization problem with constraints is given in Section 3. To motivate our analysis, we emphasize in Section 4 the improvements that can be obtained by targeted advertising with respect to a uniform/broadcasting control. Section 5 contains the results related to the optimal control strategy.
We first analyze the case when the campaign budget is given a priori and must be optimally partitioned among the network agents. Secondly, we look at the case when the campaign budget is unknown but the campaigns are distanced in time. Both cases point out that the optimal control contains only 0 or ū actions. These results motivate us to study in Section 6 the spatio-temporal distribution of the budget under the assumption that all the components of u(t_k) are either 0 or ū. Numerical examples and concluding remarks end the paper.

PROBLEM STATEMENT

We consider an entity (for example, a governmental institution or a company) that is interested in attracting consumers to a certain opinion. Consumers belong to a social network and we refer to any consumer as an agent. For the sake of simplicity, we consider a fixed social network over the set of vertices V = {1, 2, . . . , N} of N agents. In other words, we identify each agent with its index in the set V. To agent n ∈ V we assign a normalized scalar opinion x_n(t) ∈ [0, 1] that represents its opinion level and can be interpreted, e.g., as the probability for the agent to act as desired. We use x(t) = (x_1(t), x_2(t), . . . , x_N(t)) to denote the state of the network at any time t, where x(t) ∈ X and X = [0, 1]^N. In order to obtain a larger market share with a minimum investment, the external entity applies an action vector u(t_k) = (u_1(t_k), . . . , u_N(t_k)) ∈ U in marketing campaigns at discrete time instants t_k ∈ T, k ∈ N. A given action therefore corresponds to a given marketing campaign aiming at influencing the consumers' opinion. The instants corresponding to the campaigns are known and are collected in the set T = {t_0, t_1, . . . , t_M}. Between two consecutive campaigns, the consumers' opinion is only influenced by the other consumers of the network. We assume that t_k - t_{k-1} = δ_k ∈ [δ_m, δ_M], where δ_m < δ_M are two fixed real numbers.
Throughout the paper we refer to d ∈ {0, 1} as the desired opinion that the external entity wants to be adopted. We consider, ∀i ∈ V, the following dynamics:

ẋ_i(t) = Σ_{j=1}^{N} a_ij (x_j(t) - x_i(t)),   t ∈ [t_k, t_{k+1})
x_i(t_k) = u_i(t_k) d + (1 - u_i(t_k)) x_i(t_k^-),   ∀k ∈ N,   (1)

where u_i(t_k) ∈ [0, ū] with ū < 1, ∀i ∈ V, and Σ_{k=0}^{M} Σ_{i=1}^{N} u_i(t_k) ≤ B, where B represents the total budget of the external entity for the marketing campaigns. Dynamics (1) can be rewritten using the collective variable X(t) = (d, x(t)) as:

Ẋ(t) = -L̄ X(t)
X(t_k) = P X(t_k^-),   (2)

where

L̄ = [ 0  0_{1,N} ; 0_{N,1}  L ],   P = [ 1  0_{1,N} ; u(t_k)  I_N - diag(u(t_k)) ],

with diag(u(t_k)) ∈ R^{N×N} being the diagonal matrix having the components of u(t_k) on the diagonal.

Remark 1. It is noteworthy that:
• L̄ is a Laplacian matrix corresponding to a network of N + 1 agents. The first agent represents the external entity and is not connected to any other agent, while the rest of the agents represent the consumers and are connected through the social network defined by the influence weights a_ij.
• P is a row-stochastic matrix that can be interpreted as a Perron matrix associated with the tree having the external entity as a parent of all the other nodes. Consequently, without budget constraints, the network reaches, at least asymptotically, the value d.

Several space-time control strategies can be implemented under the budget constraints. For instance, we can spend the same budget for each agent, i.e., u_i(t_k) = u_j(t_k), ∀i, j ∈ V; we can also allocate the entire budget to specific agents of the network. Moreover, the budget can be spent either on few or many campaigns. Our objective is to design a space-time control strategy that minimizes the following cost function:

J_T = Σ_{i=1}^{N} |x_i(T) - d|   (3)

for some T > t_M, and we have the cost associated with the asymptotic opinion given by

J_∞ = Σ_{i=1}^{N} lim_{t→∞} |x_i(t) - d|.   (4)

This can be interpreted as follows.
If the entity (a governmental institution, for example) is interested in convincing the public to buy some product or change their habits (practice sports or quit smoking, for instance), it will try to move the asymptotic consensus value of the network as close as possible to the desired value, i.e., minimize J_∞. In some other cases, like an election campaign which targets getting the opinions close to d within a finite time T, we will minimize J_T. It is worth mentioning that between campaigns and after the last campaign the opinions evolve according to consensus dynamics.

PRELIMINARIES

We first state a very useful lemma which will help us find the optimal solutions for many sub-cases of our problem.

Lemma 1. Given an optimization problem (OP) of the form

minimize_{y ∈ R^L}  C(y)
subject to  0 ≤ y_i ≤ ȳ < 1,   Σ_{i=1}^{L} y_i ≤ B,   (5)

where C(y) is a decreasing convex function in y_i such that one of the following two conditions holds:

Case 1: ∀i ∈ {1, . . . , L}, ∃ g(y) ≥ 0 such that ∂C(y)/∂y_i = -c_i g(y);
Case 2: ∂C(y)/∂y_i = -1/(1 - y_i) for all i ∈ {1, . . . , L}.

Then a solution y* to this OP is given by water-filling as follows:

y*_{R(i)} = ȳ  if i ≤ ⌊B/ȳ⌋;   B - ȳ⌊B/ȳ⌋  if i = ⌊B/ȳ⌋ + 1;   0  otherwise,   (6)

where R : {1, . . . , L} → {1, . . . , L} is any bijection for Case 2 and, for Case 1, a bijection satisfying c_{R(1)} ≥ c_{R(2)} ≥ • • • ≥ c_{R(L)}.

THE BROADCASTING CASE STUDY

To emphasize the relevance of the problem under consideration, we will show that for some particular network topologies we can obtain a significant improvement of the revenue by using targeted marketing instead of broadcasting-based marketing, in which the marketer allocates the same amount of resource to all the agents. First, we derive the optimal revenue that can be obtained by implementing a broadcasting strategy, i.e., u_i(t_k) = u_j(t_k) =: α_k, ∀i, j ∈ V. We suppose that the graph representing the social network contains a spanning tree.
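Lemma 1's allocation (6) is simple to implement; a sketch (ours; the function name is an assumption, and ties between equal coefficients are broken arbitrarily by the sort):

```python
import numpy as np

def water_fill(c, B, y_bar):
    """Water-filling solution (6) of Lemma 1: saturate the components with the
    largest coefficients c_i until the budget B is exhausted.

    c     : (L,) gains used for sorting (Case 1); any order works for Case 2
    B     : total budget
    y_bar : per-component cap, 0 <= y_i <= y_bar
    """
    L = len(c)
    y = np.zeros(L)
    order = np.argsort(-np.asarray(c, dtype=float))   # the bijection R
    full = int(B // y_bar)                            # floor(B / y_bar) saturated slots
    y[order[:min(full, L)]] = y_bar
    if full < L:
        y[order[full]] = B - y_bar * full             # leftover on the next component
    return y
```

For instance, with c = (3, 1, 2), B = 1.2 and ȳ = 0.5, the first and third components are saturated at 0.5 and the second receives the remaining 0.2.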
Let v be the left eigenvector of L associated with the eigenvalue 0 and satisfying v^⊤ 1_N = 1. Therefore, in the absence of any control action, one has that lim_{t→∞} x(t) = (v^⊤ x(0)) 1_N =: x^∞_0. Let us also introduce the following notation:

x^∞_k = lim_{t→∞} e^{-L(t - t_k)} x(t_k) = (v^⊤ x(t_k)) 1_N,   ∀k ∈ N.

Following (2) and using δ_k = t_{k+1} - t_k and D_k = diag(u(t_k)), one deduces that:

x^∞_{k+1} = (v^⊤ x(t_{k+1})) 1_N = v^⊤ [u(t_{k+1}) d + (I_N - D_{k+1}) x(t_{k+1}^-)] 1_N = v^⊤ [u(t_{k+1}) d + (I_N - D_{k+1}) e^{-Lδ_k} x(t_k)] 1_N.

Since v^⊤ L = 0_N, one has that v^⊤ e^{-Lδ_k} = v^⊤ and consequently one obtains

x^∞_{k+1} - x^∞_k = v^⊤ [u(t_{k+1}) d - D_{k+1} e^{-Lδ_k} x(t_k)] 1_N.   (7)

In the case of broadcasting one has u(t_k) = α_k 1_N and D_k = α_k I_N, where α_k ∈ [0, ū] for all k ∈ {0, . . . , M}. Therefore, using v^⊤ 1_N = 1, (7) becomes

x^∞_{k+1} - x^∞_k = α_{k+1} (d 1_N - x^∞_k),

which can be equivalently rewritten as

d 1_N - x^∞_{k+1} = (1 - α_{k+1})(d 1_N - x^∞_k).   (8)

Using (8) recursively, one obtains

d 1_N - x^∞_M = Π_{ℓ=0}^{M} (1 - α_ℓ) (d 1_N - x^∞_0),   (9)

and we denote by J_∞(α) the cost associated with a broadcasting strategy using α_k at stage t_k.

Proposition 1. The broadcasting cost J_∞(α) is minimized by using the maximum possible investments as soon as possible, i.e.,

α_k = ū  if k ≤ ⌊B/(Nū)⌋ - 1;   (B - ūN⌊B/(Nū)⌋)/N  if k = ⌊B/(Nū)⌋;   0  otherwise.   (10)

Proof. Minimizing J_∞(α) under the broadcasting strategy assumption is equivalent to minimizing Π_{ℓ=0}^{M} (1 - α_ℓ). This is equivalent to minimizing C(α) = log Π_{ℓ=0}^{M} (1 - α_ℓ), and we have

∂C/∂α_ℓ = -1/(1 - α_ℓ),   (11)

resulting in an OP which satisfies the conditions of Lemma 1, Case 2. It is noteworthy that for α_ℓ ∈ [0, 1) one has

Π_{ℓ=0}^{k+1} (1 - α_ℓ) ≥ 1 - Σ_{ℓ=0}^{k+1} α_ℓ ≥ 1 - B/N.   (12)

The last inequality in (12) comes from the broadcasting hypothesis u_i(t_ℓ) = α_ℓ, ∀i ∈ V, which means that the budget spent in the ℓ-th campaign is N·α_ℓ. Therefore, the total budget for k + 2 campaigns is N Σ_{ℓ=0}^{k+1} α_ℓ and has to be smaller than B.
Thus

J_∞ = 1_N^⊤ |d 1_N - x^∞_{k+1}| ≥ (1 - B/N) 1_N^⊤ |d 1_N - x^∞_0|.

The interpretation of (12) is that for the broadcasting strategy the minimal cost J_∞ is obtained when the whole budget is spent in one marketing campaign (provided this is possible, i.e., B ≤ Nū); otherwise the first inequality in (12) becomes strict, meaning that J_∞ > (1 - B/N) 1_N^⊤ |d 1_N - x^∞_0|.

Let us now suppose that the graph under consideration is a tree having the first node as root. Then, using a targeted marketing in which the external entity influences only the root, we will show that, under the same budget constraints, the cost J_∞ will be smaller. Indeed, for this graph topology one has v = (1, 0, . . . , 0), yielding x^∞_k = x_1(t_k) 1_N. Moreover, the dynamics of x_1(•) writes as:

ẋ_1(t) = 0,   t ∈ [t_k, t_{k+1})
x_1(t_k) = u_1(t_k) d + (1 - u_1(t_k)) x_1(t_k^-),   ∀k ∈ N.   (13)

Therefore, x_1(t_k) = u_1(t_k) d + (1 - u_1(t_k)) x_1(t_{k-1}), yielding d - x_1(t_k) = (1 - u_1(t_k))(d - x_1(t_{k-1})), which is equivalent to (8). As we have seen before, in the broadcasting strategy one has Σ_{ℓ=0}^{k+1} α_ℓ ≤ B/N, whereas when targeting only the root the constraint becomes Σ_{ℓ=0}^{k+1} u_1(t_ℓ) ≤ B. Therefore, for any given broadcasting strategy (α_0, α_1, . . . , α_{k+1}) there exists a strategy targeted on the root that consists of repeating it N times. Doing so, one obtains

d 1_N - x^∞_{k+1} = [Π_{ℓ=0}^{k+1} (1 - α_ℓ)]^N (d 1_N - x^∞_0),

which leads to a much smaller cost J_∞, i.e., the strategy is more efficient.

GENERAL OPTIMAL SPACE-TIME CONTROL STRATEGY

First, we rewrite the optimal control problem as an optimization problem by treating the control u(t_k) as an N(M+1)-dimensional vector to optimize. We denote u_{i,k} = u_i(t_k) to represent the control for agent i ∈ V at time t_k. Then our problem can be rewritten as

Minimize_{u ∈ R^{N(M+1)}}  J_T(u)
Subject to  0 ≤ u_{i,k} ≤ ū,  ∀i ∈ V, k ∈ {0, . . . , M},  and  Σ_{i=1}^{N} Σ_{k=0}^{M} u_{i,k} ≤ B.   (14)

Here, J_T(u) is a multilinear function in u.
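The broadcasting-versus-targeting comparison of the previous section can be checked by direct simulation of dynamics (1). The sketch below is ours; the forward-Euler discretization, step size, function name and the toy star/tree instance are assumptions, and campaign instants are assumed to lie on the time grid:

```python
import numpy as np

def simulate(A, x0, d, campaigns, T, dt=0.01):
    """Euler simulation of the hybrid dynamics (1).

    A         : (N, N) influence weights a_ij
    x0        : (N,) initial opinions in [0, 1]
    d         : desired opinion
    campaigns : dict {t_k: u_k}, with u_k an (N,) control vector in [0, u_bar]
    """
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    x = np.array(x0, dtype=float)
    for t in np.arange(0.0, T, dt):
        for tk, u in campaigns.items():
            if abs(t - tk) < dt / 2:          # impulse: x(t_k) = u d + (1 - u) x(t_k^-)
                x = u * d + (1.0 - u) * x
        x = x + dt * (-L @ x)                 # consensus flow between campaigns
    return x

# Toy tree: node 0 is the root, every other node only listens to the root.
N, u_bar, B, d = 5, 0.5, 1.0, 1.0
A = np.zeros((N, N)); A[1:, 0] = 1.0
# Broadcasting: one campaign with alpha = B/N for every agent (Proposition 1).
xb = simulate(A, np.zeros(N), d, {0.0: np.full(N, B / N)}, T=20.0)
# Targeting: the whole budget on the root, u_bar per campaign.
xt = simulate(A, np.zeros(N), d, {0.0: np.r_[u_bar, np.zeros(N - 1)],
                                  1.0: np.r_[u_bar, np.zeros(N - 1)]}, T=20.0)
```

Both strategies spend the same budget B = 1; broadcasting leaves the consensus gap at 1 - B/N = 0.8, while targeting the root shrinks it to (1 - ū)² = 0.25, illustrating the product-of-factors advantage derived above.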
Before solving problem (14) we want to get further insight into the solution's structure, which will lead to important simplifications. Therefore, instead of solving the general optimization problem (14), we consider splitting our problem into time allocation and space allocation. For a given time allocation, i.e., if we know that for stage k a maximum budget of β_k ≤ B has been allocated, we find the optimal control strategy for the k-th stage. Moreover, for long stage durations (i.e., t_{k+1} - t_k large) and a given temporal budget allocation (β_0, . . . , β_M), we characterize the optimal space allocation of the budget. Based on these results, we propose a discrete-action spatio-temporal control strategy.

Minimizing the per-stage cost

In this section we consider that the budget β_k for each campaign is a priori given, and optimize the corresponding |d 1_N - x^∞_k|. Denote the budget for stage k by β_k such that

Σ_{i=1}^{N} u_i(t_k) ≤ β_k.   (15)

The corresponding cost for stage k is written as

J^∞_k(u(t_k)) = |d - Σ_{i=1}^{N} v_i x_i(t_k)| = |d - Σ_{i=1}^{N} v_i (u_i(t_k) d + (1 - u_i(t_k)) x_i(t_k^-))|.   (16)

We use γ_i = v_i |d - x_i(t_k^-)| to denote the gain from investing in agent i ∈ V. Define by R : V → V a bijection which sorts the agents by decreasing γ_i, i.e., γ_{R(1)} ≥ γ_{R(2)} ≥ • • • ≥ γ_{R(N)}.

Proposition 2. The cost J^∞_k(u(t_k)) is minimized by the following investment profile:

u*_{R(i)}(k) = ū  if i ≤ ⌊β_k/ū⌋;   β_k - ū⌊β_k/ū⌋  if i = ⌊β_k/ū⌋ + 1;   0  otherwise.   (17)

Proof. Due to space limitation the proof is omitted.

Space allocation for long stage duration

In the following we consider that a finite number of marketing campaigns with a priori fixed budgets are scheduled such that t_{k+1} - t_k is very large for each k ∈ {0, 1, . . . , M-1}. In this case, we can assume that x_i(t_{k+1}^-) = x^∞_k for all i ∈ V and k ∈ {0, 1, . . . , M-1}.
Under this assumption, we write

x_i(t_1^−) = x^∞_0(u(t_0)) = Σ_{i=1}^{N} v_i (d u_i(t_0) + x_i(t_0^−)(1 − u_i(t_0))) (18)

for any i ∈ V. Subsequently, we have

x^∞_k(u(t_0), u(t_1), . . . , u(t_k)) = Σ_{i=1}^{N} v_i [d u_i(t_k) + x^∞_{k−1}(u(t_0), . . . , u(t_{k−1}))(1 − u_i(t_k))] (19)

for all k ∈ {1, 2, . . . , M}. Our objective is to minimize J^∞ = |x^∞_M(u(t_0), . . . , u(t_M)) − d|, and this can be done using the proposition below. First, let us define S_k : V → V a bijection such that S_0 = R and, for all k ∈ {1, 2, . . . , M}, S_k gives the agent index after sorting over v_i, i.e., v_{S_k(1)} ≥ v_{S_k(2)} ≥ • • • ≥ v_{S_k(N)}.

Proposition 3. Let the temporal budget allocation be given by β = (β_0, . . . , β_M) such that Σ_{k=1}^{M} β_k ≤ B and β_k ≤ N ū. Then, the optimal allocation per agent minimizing the cost J(u) is given by

u*_{S_k(i)}(k) = ū if i ≤ ⌊β_k/ū⌋; β_k − ū⌊β_k/ū⌋ if i = ⌈β_k/ū⌉; 0 otherwise. (20)

Proof. Due to space limitation the proof is omitted.

DISCRETE-ACTION SPACE-TIME CONTROL STRATEGY

Motivated by the results in Propositions 2 and 3, in this section we consider that u_i(t_k) ∈ {0, ū}, ∀i ∈ V, k ∈ ℕ, and B = K ū with K ∈ ℕ given a priori. The objective is to numerically find the best space-time control strategy for a given initial state x_0 of the network.

Algorithms

Let us consider in turn the cases of short and long stages. In the short-stage case, given a time allocation consisting of the budgets β_k = b_k ū at each stage, Proposition 2 tells us how to allocate each stage budget optimally across the agents. Denote all possible budgets at one stage by B = {0, . . . , min{N, K}}. A very simple algorithm is then to search in a brute-force manner all possible time allocations b = (b_0, . . . , b_M) ∈ B^{M+1}, subject to the constraint Σ_k b_k ≤ K.
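A minimal sketch of the recursion (18)-(19) follows (hypothetical function name; it assumes the centrality weights v sum to one, so each campaign maps the current consensus value to a new one):

```python
def long_stage_final_opinion(v, x0, d, allocations):
    """Propagate Eq. (18)-(19): after each long-spaced campaign the
    network reaches consensus at the v-weighted average, which then
    seeds the next stage. `allocations` is a list of per-agent control
    vectors u(t_k), one per campaign."""
    # first campaign acts on the heterogeneous initial state x0
    u0 = allocations[0]
    y = sum(vi * (d * ui + (1 - ui) * xi) for vi, ui, xi in zip(v, u0, x0))
    # later campaigns act on the scalar consensus value y
    for u in allocations[1:]:
        y = sum(vi * (d * ui + (1 - ui) * y) for vi, ui in zip(v, u))
    return y
```

With v = (0.5, 0.5), x_0 = (0, 0), d = 1 and full influence on the first agent each time, one campaign yields y = 0.5 and two campaigns yield y = 0.75.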
For each such vector b, we simulate the system from x_0 with dynamics (1), where the budget b_k is allocated with Proposition 2, and we obtain a final state x_F(b) = v^T x(t_M) 1_N (the infinite-time state of the network after the last campaign). We retain a solution with the best cost: min_b |x_{1,F}(b) − d|, where subscript 1 denotes the first agent (recall that the agents all have the same opinion at infinite time). Note that this cost is J^∞/N; we do not sum it over all the agents because this version is easier to interpret as a deviation of each agent from the target state. Furthermore, the simulation can be done in closed form, using the fact that x^−(t_{k+1}) = e^{−L δ_k} x(t_k). The complexity of this search is O(N³(M+1)(min{N, K}+1)^{M+1}), dominated by the exponential term. Therefore, this approach will only be feasible for small values of N or K, and especially of M.

Considering now the long-stage case, we could still implement a similar brute-force search, but using dynamics (19) for interstage propagation and Proposition 3 for allocation over agents. However, now we can do better by taking advantage of the fact that, for all k > 1, the opinions of all the agents reach identical values. Using this, we will derive a more efficient, dynamic programming solution to the optimal control problem: min_b |x_{1,F}(b) − d|, where the long-stage dynamics apply but, by a slight abuse, we keep the notation the same as in the previous section. To obtain the DP algorithm, define for k = 1, . . . , M, M + 1 =: F a new state signal z_k = [y_k, r_k] ∈ Z := [0, 1] × {0, . . . , K}. In this signal, y_k = x^∞_{k−1}, the opinion resulting from long-term propagation after the (k−1)-th campaign, and r_k is the remaining budget to apply (we will start from r_0 = K). Here, F is associated with the infinite-time state of the network.
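The short-stage brute-force search described above can be sketched as follows. This is a hypothetical helper, not the authors' code: it assumes L is the graph Laplacian of the opinion dynamics, recovers the consensus weights v as the normalized left eigenvector of L for eigenvalue 0, and restricts itself to binary allocations u_i ∈ {0, ū}.

```python
import itertools
import numpy as np
from scipy.linalg import expm

def brute_force_time_allocation(L, x0, d, K, M, delta, u_bar=1.0):
    """Enumerate budget vectors b = (b_0, ..., b_M) with sum(b) <= K;
    between campaigns propagate x(t_{k+1}^-) = expm(-L*delta) x(t_k);
    within a campaign give u_bar to the b_k agents with the largest
    gains gamma_i = v_i |d - x_i| (Proposition 2, integer budgets).
    Returns the best cost |x_{1,F}(b) - d| and the best allocation."""
    N = len(x0)
    P = expm(-L * delta)                       # interstage propagator
    w, V = np.linalg.eig(L.T)                  # consensus weights v:
    v = np.real(V[:, np.argmin(np.abs(w))])    # left null eigenvector,
    v = v / v.sum()                            # normalized to sum 1
    best_cost, best_b = np.inf, None
    for b in itertools.product(range(min(N, K) + 1), repeat=M + 1):
        if sum(b) > K:
            continue
        x = x0.astype(float).copy()
        for k, bk in enumerate(b):
            gains = v * np.abs(d - x)
            top = np.argsort(-gains)[:bk]
            x[top] = u_bar * d + (1.0 - u_bar) * x[top]
            if k < M:
                x = P @ x
        cost = abs(v @ x - d)                  # consensus deviation
        if cost < best_cost:
            best_cost, best_b = cost, b
    return best_cost, best_b
```

On a two-agent path graph with x_0 = (0, 0), d = 1 and K = 1, the search spends the single budget unit in the only campaign, reaching a consensus deviation of 0.5.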
We will compute a value function V_k(z_k), representing the best cost attainable from stage k, if the agent state is y_k and budget r_k remains:

V_F(z_F) = |y_F − d|, ∀r_F,
V_k(z_k) = min_{b_k = 0, . . . , min{r_k, N}} V_{k+1}(g(z_k, b_k)), k = M, . . . , 1.

Here, the dynamics g : Z × B → Z are given by:

y_{k+1} = v^T x(t_k), r_{k+1} = r_k − b_k, where x_i(t_k) = u_i(b_k) d + (1 − u_i(b_k)) y_k,

in which the agent allocations u_i(b_k) are computed by distributing budget b_k with Proposition 3. At stage 0, special dynamics g_0 : X × B → Z apply, because the initial state of the network cannot be represented by a single number:

y_1 = v^T x(t_0), r_1 = K − b_0, where x_i(t_0) = u_i(b_0) d + (1 − u_i(b_0)) x_{i,0},

and — differently from the other steps — u_i(b_0) is found with Proposition 2. Once V_k is available, an optimal solution is found by a forward pass, as follows:

b*_0 = arg min_{b_0 = 0, . . . , min{K, N}} V_1(g_0(x_0, b_0)), z*_1 = g_0(x_0, b*_0),
b*_k = arg min_{b_k = 0, . . . , min{r_k, N}} V_{k+1}(g(z*_k, b_k)), z*_{k+1} = g(z*_k, b*_k), for k = 1, . . . , M,

and the optimal cost of the overall solution is simply the minimum value at the first step. To implement this algorithm in practice, we will discretize the continuous state y into Y points, and interpolate the value function on the grid formed by these points; see [START_REF] Buşoniu | Approximate dynamic programming with a fuzzy parameterization[END_REF] for details. The complexity of the backward pass for value function computation is O(M Y (min{N, K}+1) N) (we disregard the complexity of the forward pass since it is much smaller). To develop an intuition, take the case N < K; then the algorithm is quadratic in N and linear in M and Y. This allows us to apply the algorithm to much larger problems than the brute-force search above. Finally, note that in principle we could develop such a DP algorithm for the short-stage problem, but there we cannot condense the network state into a single number. Each agent state would have to be discretized instead, leading to a memory and time complexity proportional to Y^N, which makes the algorithm unfeasible for more than a few agents.
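A compact sketch of the backward pass follows (hypothetical names, not the authors' implementation; it simplifies by taking ū = 1, ignoring the special stage-0 dynamics g_0, and assuming the centrality weights are already sorted in decreasing order and sum to one):

```python
import numpy as np

def dp_backward_pass(v_sorted, d, K, M, Y=101):
    """Backward DP over z = (y, r): y is the consensus opinion on a
    Y-point grid, r the remaining (integer) budget. Spending b moves
    y to s_b*d + (1 - s_b)*y, with s_b the summed centrality of the b
    most central agents (Proposition 3 with binary actions, u_bar=1).
    Returns the value function and greedy per-stage policies."""
    grid = np.linspace(0.0, 1.0, Y)
    s = np.concatenate(([0.0], np.cumsum(v_sorted)))     # s[b], b = 0..N
    V = np.abs(grid - d)[:, None] * np.ones((1, K + 1))  # terminal cost
    policies = []
    for _ in range(M):                       # stages M, M-1, ..., 1
        Vnew = np.full_like(V, np.inf)
        act = np.zeros(V.shape, dtype=int)
        for r in range(K + 1):
            for b in range(min(r, len(v_sorted)) + 1):
                ynext = s[b] * d + (1.0 - s[b]) * grid
                q = np.interp(ynext, grid, V[:, r - b])  # interpolate V_{k+1}
                better = q < Vnew[:, r]
                Vnew[better, r] = q[better]
                act[better, r] = b
        V = Vnew
        policies.append(act)
    policies.reverse()
    return V, policies
```

For instance, with two agents of centrality (0.6, 0.4), d = 1 and budget 2 over one stage, starting from y = 0 the DP finds cost 0 (influence both agents) with budget 2, and cost 1 with budget 0.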
Numerical results

We begin by evaluating the brute-force algorithm on a small-scale problem with short stages. Then, we move to the long-stage case, where for the same network we compare DP and the brute-force method, confirming that they obtain the same result. Consider N = 15 agents connected on the graph from Figure 1, where the size of a node corresponds to its centrality. There are 4 stages, corresponding to M = 3, and the budget is K = 15 = N. The initial states of the agents are random. For a short stage length δ_k = 0.5 ∀k, the brute-force approach obtains the results in Figure 2. The final cost (each individual agent's deviation from the desired opinion) is 0.2485. Table 1, left, shows the agents influenced at each stage. Reexamining Figure 1, we see that most of these agents have a large centrality, which is also the reason for which the algorithm selects them. An exception is agent 6, which has a low centrality but is still influenced at many stages, as x_6(0) ≈ 0.1. We now take the same problem and make a single change: the stages become long (i.e., t_{k+1} − t_k → ∞). We apply DP, with a discretization of y into 10 points. The results are in Figure 3, with the specific agents being controlled shown on the right side of Table 1. Note that the solution is different from the short-stage case, and the final cost is 0.2553, slightly worse, which indicates that giving up the fine-grained control of the agents over time leads to some losses, but they are small. To evaluate the impact of function approximation (interpolation), we also run the brute-force search, since in this problem it is still feasible. It gives the same strategy and cost as DP, so we do not show the result. Note that, unlike before, agent 6 is only influenced at k = 0, as the stage durations are long and its opinion value plays a role only at the first stage.

CONCLUSIONS

In its full generality, the space-time budget allocation problem over a social network is seen to be non-trivial.
However, it can be solved in several special cases of practical interest. If for every marketing campaign the budget is allocated uniformly over the agents, the problem becomes a pure time budget control and can be solved. On the other hand, for a given time budget control, the problem becomes a pure space problem, and the optimal way of allocating the budget is proved to be a water-filling allocation policy. Thirdly, if one goes for a binary budget allocation, i.e., the marketer either allocates a given amount of budget to an agent or nothing, the space-time budget allocation problem can be solved by dynamic-programming-based numerical techniques. Numerical results illustrate how the available budget should be used by the marketer to reach its objective in terms of desired opinion for the network.

This work was supported by projects PEPS INS2I IODINE and PEPS S2IH INS2I YPSOC funded by the CNRS.

Figure 1. Small-scale graph.

Figure 2. Results for short stages. The bottom plot shows the budget allocated by the algorithm at each stage. The top plot shows the opinions of the agents, with an additional, long stage converging to the average opinion (so the last stage duration is not to scale). The circles indicate the opinions right before applying the control at each stage; note the discontinuous transitions of the opinions after control.

Figure 3. Results for long stages. The continuous opinion dynamics is plotted for t ∈ [t_k, t_k + 25) per stage k, which is sufficient to observe the long-stage behavior, i.e., the convergence of the opinions of the agents.

Table 1. Agents influenced in each campaign. Left: short stages. Right: long stages.
Campaign | Agents (short stages) | Agents (long stages)
0 | 3, 5, 6, 7, 8, 14 | 3, 6, 7, 8
1 | 3, 6, 7 | 2, 3, 7, 9
2 | 3, 6, 7, 9 | 2, 3, 7, 9
3 | 3, 9 | 2, 3, 9
Enriching Sparse Mobility Information in Call Detail Records

Guangshuo Chen (email: [email protected]), Sahar Hoteit (email: [email protected]), Aline Carneiro Viana, Marco Fiore (email: [email protected]), Carlos Sarraute

Keywords: Call Detail Records, spatiotemporal trajectories, data sparsity, cellular networks, mobility, movement inference

Abstract: Call Detail Records (CDR) are an important source of information in the study of diverse aspects of human mobility. The accuracy of mobility information granted by CDR strongly depends on the radio access infrastructure deployment and on the frequency of interactions between mobile users and the network. As cellular network deployment is highly irregular and interaction frequencies are typically low, CDR are often characterized by spatial and temporal sparsity, which, in turn, can bias mobility analyses based on such data. In this paper, we precisely address this subject. First, we evaluate the spatial error in CDR, caused by approximating user positions with cell tower locations. Second, we assess the impact of the limited spatial and temporal granularity of CDR on the estimation of standard mobility metrics. Third, we propose novel and effective techniques to reduce temporal sparsity in CDR by leveraging regularity in human movement patterns. Tests with real-world datasets show that our solutions can reduce temporal sparsity in CDR by recovering 75% of daytime hours, while retaining a spatial accuracy within 1 km for 95% of the completed data.

Introduction

Urbanization challenges the development and sustainability of city infrastructures in a variety of ways, and telecommunications networks are no exception. Understanding human habits becomes essential for managing the available resources in complex smart urban environments.
Specifically, a number of network-related functions, such as paging [START_REF] Zang | Mining call and mobility data to improve paging efficiency in cellular networks[END_REF], caching [START_REF] Lai | Supporting user mobility through cache relocation[END_REF], dimensioning [START_REF] Paul | Understanding traffic dynamics in cellular data networks[END_REF], or network-driven location-based recommender systems [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF], have been shown to benefit from insights on the movements of mobile network subscribers. More generally, the investigation of human mobility patterns has attracted significant attention across disciplines [START_REF] González | Understanding individual human mobility patterns[END_REF][START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF][START_REF] Song | Limits of Predictability in Human Mobility[END_REF][START_REF] Iovan | Moving and Calling: Mobile Phone Data Quality Measurements and Spatiotemporal Uncertainty in Human Mobility Studies[END_REF][START_REF] Ficek | Inter-Call Mobility model: A spatio-temporal refinement of Call Data Records using a Gaussian mixture model[END_REF]. Motivation: Human mobility studies strongly rely on actual human footprints, usually provided by spatiotemporal datasets, as the raw material for investigating mobility patterns. In this context, using specialized spatiotemporal datasets such as GPS logs seems a direct solution, but collecting such detailed data at scale entails a huge overhead. Hence, Call Detail Records (CDR) have lately been considered as a primary source of data for large-scale mobility studies. CDR contain information about when, where and how a mobile network subscriber generates voice calls and text messages, and are collected by mobile network operators for billing purposes.
These records usually cover large populations [START_REF] Naboulsi | Large-scale Mobile Traffic Analysis: a Survey[END_REF], which makes them a practical choice for performing large-scale human mobility analyses. CDR can be regarded as footprints of individual mobility and can thus be used to infer visited locations, to learn recurrent movement patterns, and to measure mobility-related features. Despite the significant benefits that CDR bring to human mobility analyses, an indiscriminate use of CDR may undermine the validity of research conclusions. Indeed, CDR have limited accuracy in the spatial dimension (as the user's location is only known at the cell sector or base station level) and in the temporal dimension (since the device's position is only recorded when it sends or receives a voice call or text message). This is a severe limitation, as a cell (sector) typically spans thousands of square meters at least, and even a very active mobile network subscriber only generates a few tens of voice or text events per day. Overall, CDR are characterized by spatiotemporal sparsity, and understanding whether and to what extent such sparsity affects mobility studies is a critical issue. Existing studies and limitations: A few previous works have investigated the validity of mobility studies based on CDR. An influential analysis [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF] observed that using CDR makes it possible to correctly identify popular locations that account for 90% of each subscriber's activity; however, biases may arise when measuring individual human mobility features.
Works such as [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF] or the later [START_REF] Zhang | Exploring human mobility with multi-source data at extremely large metropolitan scales[END_REF] discussed biases introduced by the incompleteness of positioning information, i.e., the fact that CDR do not capture every location a user has travelled through. Nevertheless, another important bias of CDR, caused by the use of cell tower locations in subscribers' footprints instead of their actual positions, has been overlooked in the literature. Another open research problem is that of completing spatiotemporal gaps in CDR. The most intuitive solution is to consider that the location in a CDR entry stays representative for a time interval (e.g., one hour) centered on the actual event timestamp [START_REF] Song | Limits of Predictability in Human Mobility[END_REF][START_REF] Jo | Spatiotemporal correlations of handset-based service usages[END_REF]. So far, and to the best of our knowledge, no more advanced solution has been proposed in the literature to fill the spatiotemporal gaps in CDR. Our work and contributions: In this paper, we explore the following research questions. First, we investigate how the spatiotemporal sparsity of CDR affects the accuracy and incompleteness of mobility information, by leveraging CDR and cell tower deployments in metropolitan areas. Second, we evaluate the biases caused by such spatiotemporal sparsity in identifying important locations and measuring individual movements. Third, we study the capability of CDR to locate a user continuously in time, i.e., the degree of completeness of the data.
Answering these questions leads to the following main contributions:

• We show that the geographical shifts, caused by the mapping of user locations to cell tower positions, are less than 1 kilometer in most cases (i.e., 85%-95% in the entire country, or over 99% in the metropolitan areas in France), and the median of the shifts is around 200-500 meters (varying across cellular operators). This result substantiates the validity of many large-scale analyses of human mobility that employ CDR.

• We provide a confirmation of previous findings in the literature regarding the capability of CDR to model individual movement patterns: (1) CDR provide limited suitability for the assessment of the spread of human mobility and the study of short-term mobility patterns; (2) CDR yield enough details to detect significant locations in users' visiting patterns and to estimate the ranking among such locations.

• We implement different techniques for CDR completion proposed in the literature and assess their quality in the presence of ground-truth GPS data. Our evaluation sheds light on the quality of the results provided by each approach.

• We propose original CDR completion approaches that outperform existing ones, and carry out extensive tests on their performance with substantial real-world datasets collected by mobile network operators and mobility tracing initiatives. Validations against ground-truth movement information of individual users show that, on average, our proposed adaptive techniques can achieve an increased temporal completion of CDR data (75% of daytime hours) while retaining significant spatial accuracy (errors below 1 km in 95% of the completed time). Compared with the most common proposal in the literature, our best adaptive approach improves accuracy by 5% and completion by 50%.

The rest of the paper is organized as follows. Related works are introduced in Sec. 2. In Sec. 3, we present the datasets used in our study. In Sec. 4, we introduce and explore the biases of using CDR for human mobility analyses. In Sec. 5, we discuss the rationale for CDR completion and the errors introduced by common approaches from the literature. In Sec. 6 and 7, we describe original CDR completion solutions that achieve improved accuracy, during nighttime and daytime, respectively. Finally, Sec. 8 concludes the paper.

Related works

Our work aims at measuring and evaluating possible biases induced by the use of CDR. Understanding whether and to what extent these biases affect human mobility studies is a subject that has been only partly addressed. The early paper by Isaacman [START_REF] Isaacman | Ranges of human mobility in los angeles and new york[END_REF] unveiled that using CDR as positioning information may lead to a distance error within 1 km compared to ground-truth collected from 5 users. In a seminal work, Ranjan et al. [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF] showed that CDR are capable of identifying important locations, but they can bias results when more complex mobility metrics are considered; the authors leveraged CDR of very active mobile network subscribers as ground-truth. In our previous study [START_REF] Hoteit | Filling the gaps: On the completion of sparse call detail records for mobility analysis[END_REF], we confirmed these observations using a GPS dataset encompassing 84 users. In the present work, we confirm the observations in [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF], and push them one step further by also considering the spatial bias introduced by CDR. For the sake of completeness, we mention that results are instead more promising when mobility is constrained to transportation networks: Zhang et al.
[START_REF] Zhang | Exploring human mobility with multi-source data at extremely large metropolitan scales[END_REF] found CDR-based individual trajectories to match reference information from public transport data, i.e., GPS logs of taxis and buses, as well as subway transit records. Also relevant to our study are attempts at mitigating the spatiotemporal sparsity of CDR through completion techniques. The legacy approach in the literature consists in assuming that a user remains static for some time before and after each communication activity. The span of the static period, which we will refer to as temporal cell boundary hereinafter, is a constant system parameter that is often fairly arbitrary [START_REF] Jo | Spatiotemporal correlations of handset-based service usages[END_REF][START_REF] Hoteit | Filling the gaps: On the completion of sparse call detail records for mobility analysis[END_REF]. In this paper, we extend previously proposed solutions [START_REF] Hoteit | Filling the gaps: On the completion of sparse call detail records for mobility analysis[END_REF][START_REF] Chen | Towards an adaptive completion of sparse call detail records for mobility analysis[END_REF], and introduce two adaptive approaches to complete subscribers' trajectories inferred from CDR.

Datasets

We leverage two types of datasets in our study. Coarse-grained datasets are typical CDR data and feature significant spatiotemporal sparsity as well as user locations mapped to cell tower positions. Fine-grained datasets describe the mobility of the same user populations as the coarse-grained datasets with a much higher level of detail and spatial accuracy. The coarse-grained datasets are treated as CDR in our experiments, while the corresponding fine-grained datasets are used as ground-truth to validate the results. We have access to one coarse-grained (CDR) and three fine-grained (Internet flow, MACACO, and Geolife) datasets.
The CDR and Internet flow datasets share the same set of subscribers, and thus represent a readily usable pair of coarse- and fine-grained datasets. Coarse-grained counterparts of the MACACO and Geolife datasets are instead artificially generated, by downsampling the original fine-grained data. The exact process is detailed in Sec. 3.5. As a result, we have three pairs of fine- and coarse-grained datasets. The following sections describe each dataset in detail.

CDR coarse-grained dataset

This dataset consists of actual Call Detail Records (CDR), i.e., time-stamped and geo-referenced logs of network events associated to voice calls placed or received by mobile network subscribers. Specifically, each record contains the hashed identifiers of the caller and the callee, the call duration in seconds, the timestamp of the call, and the location of the cell tower to which the caller's device is connected when the call is first started. The CDR are collected by a major cellular network operator. They capture the communication activities of 1.6 million users over a consecutive 3-month period in 2015, resulting in 681 million CDR in the selected period of study. We carry out a preliminary analysis of the CDR dataset, by extracting the experimental statistical distributions of the inter-event time (i.e., the time between consecutive events). These distributions will be later leveraged in Sec. 3.5 to downsample the fine-grained datasets. The resulting cumulative distribution functions (CDF) are shown, for different hours of the day, in Fig. 1. We observe that a majority of events occur at a temporal distance of a few minutes, but a non-negligible amount of events are spaced by hours. This observation confirms results in the literature on the burstiness of human digital communication activities, with rapidly occurring events separated by long periods of inactivity [START_REF] Barabasi | The origin of bursts and heavy tails in human dynamics[END_REF].
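The per-hour inter-event statistics behind Fig. 1 can be collected with a sketch like the following (hypothetical helper; it assumes per-user, time-ordered event timestamps in seconds and buckets each gap by the hour of day of the earlier event):

```python
from collections import defaultdict

def hourly_interevent_times(records):
    """Collect inter-event times from per-user, time-ordered CDR
    timestamps (seconds since epoch), bucketed by the hour of day of
    the earlier event, mirroring the per-hour CDFs of Fig. 1."""
    buckets = defaultdict(list)
    for user, stamps in records.items():
        for t0, t1 in zip(stamps, stamps[1:]):
            hour = (t0 // 3600) % 24
            buckets[hour].append(t1 - t0)
    return buckets
```

Sorting each bucket then directly yields the empirical CDF for that hour of the day.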
The curves in Fig. 1 show the longer inter-event times during low-activity hours (e.g., midnight to 6 am), which become progressively shorter during the day.

Internet flow fine-grained dataset

This dataset is composed of mobile Internet session records, termed flows in the following. These records are generated and stored by the operator every time a mobile device establishes a TCP/UDP session for certain services (e.g., Facebook, Google services, WhatsApp, etc.). Each flow entry contains the hashed device identifier, the type of service, the volume of exchanged upload and download data, the timestamps denoting the starting and ending time of the session, and the location of the cell tower handling the session. The dataset refers to a two-day period consisting of a Sunday and a Monday in 2015. In each day, the data covers a constant time interval, i.e., from 10 am to 6 pm. The flows in the Internet flow dataset have a considerably higher time granularity than the original CDR. Namely, at least one flow (i.e., one location) is provided within every 20 minutes, for all users. The statistical distribution of the per-user inter-flow time is shown in Fig. 2(a). We note that in 98% of cases, the inter-event time is less than 5 minutes, and in less than 1% of cases, the inter-event time is higher than 10 minutes. We also plot in Fig. 2(b) the CDF of the number of flows (solid lines) and CDR (dashed lines) for each user appearing in both datasets: the number of events per user in the Internet flow case is more than two orders of magnitude larger than that observed in the CDR case. We conclude that the Internet flows represent a suitable fine-grained dataset that can be associated to the coarse-grained CDR dataset. Tab. 1 summarizes the number of users in the Internet flow dataset.
In particular, the over 10K and 14K subscribers recorded on Sunday and Monday, respectively, are separated into two similarly sized categories based on their CDR, as follows:

• Rare CDR users are not very active in placing or receiving voice calls and thus have limited records in the CDR dataset. As in [START_REF] Song | Limits of Predictability in Human Mobility[END_REF], we use the threshold of 0.5 event/hour, below which a user is considered to belong to this category.

• Frequent CDR users are more active callers or callees and have more than 0.5 event/hour in the CDR dataset.

This distinction will be leveraged later on in our performance evaluation.

MACACO fine-grained dataset

This dataset is obtained through an Android mobile phone application, MACACOApp, developed in the context of the EU CHIST-ERA MACACO project [START_REF]EU CHIST-ERA Mobile context-Adaptive CAching for COntent-centric networking (MACACO) project[END_REF]. The application collects data related to the user's digital activities, such as used mobile services, generated uplink/downlink traffic, available network connectivity, and visited GPS locations. These activities are logged with a fixed periodicity of 5 minutes. We remark that this sampling approach differs from those employed by popular GPS tracking projects, such as MIT Reality Mining [START_REF] Eagle | Reality mining: Sensing complex social systems[END_REF] or GeoLife [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF], where users' positions are sometimes irregularly sampled.

Geolife fine-grained dataset

This is the latest version of the Geolife dataset [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF], which provides timestamped GPS locations of 182 individuals, mostly in Beijing [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF].
The dataset spans the period from April 2007 to August 2012. Unfortunately, the Geolife dataset is often characterized by large temporal gaps between subsequent data records. As a result, not all users present a number of locations or a mobility level sufficient for our analysis. We thus select users according to the criterion that the entropy rate of each individual's data points falls below the theoretical maximum entropy rate, which was used in [START_REF] Smith | A refined limit on the predictability of human mobility[END_REF] to select the Geolife users for analyzing individual human mobility.

Generating coarse-grained equivalents for MACACO and Geolife

We do not have access to CDR datasets for users in the MACACO or Geolife datasets. We thus generate CDR-equivalent coarse-grained datasets, by leveraging the experimental distributions of the inter-event time in the CDR dataset (shown in Fig. 1, cf. Sec. 3.1). Specifically, we downsample the MACACO and Geolife datasets so that the inter-event times match those in the experimental distributions. Therefore, we first randomly choose one GPS record of the user as the seed CDR entry. We then randomly choose an inter-event time value from the distribution for the corresponding hour of the day, and use such interval to sample the second GPS record for the same user, mimicking a new CDR entry. We repeat this operation through the whole fine-grained trajectories of all users, and obtain datasets of downsampled GPS records that follow the actual inter-event time distributions of CDR. Note that tailoring the inter-event distribution to a specific hour of the day allows taking into account the daily variability of CDR sampling. Also, upon downsampling, we filter out users who have an insufficient number of records, i.e., users with less than 30 records per day on average or less than 3 days of activity. The final CDR-like coarse-grained versions of the MACACO and Geolife datasets contain 32 and 42 users, respectively.
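The downsampling procedure above can be sketched as follows (hypothetical helper, not the authors' code; it draws gaps uniformly from per-hour lists of observed inter-event times, standing in for sampling from the hourly empirical distributions):

```python
import bisect
import random

def downsample_trace(trace, hourly_gaps, seed=0):
    """Turn a dense, time-ordered GPS trace [(t, loc), ...] into a
    CDR-like record sequence: from a random seed point, repeatedly
    draw an inter-event time from the empirical gap list of the
    current hour of day, and keep the nearest later trace point."""
    rng = random.Random(seed)
    times = [t for t, _ in trace]
    i = rng.randrange(len(trace))
    sampled = [trace[i]]
    t = times[i]
    while True:
        hour = int(t // 3600) % 24
        t += max(1, rng.choice(hourly_gaps[hour]))   # empirical draw
        j = bisect.bisect_left(times, t)
        if j >= len(times):
            return sampled
        sampled.append(trace[j])
        t = times[j]
```

When the drawn gaps align with the trace's sampling grid, consecutive retained records are spaced exactly by the drawn inter-event times.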
Summary

By matching or downsampling the original data, we obtain three combinations of coarse-grained and fine-grained datasets for the same sets of users. The second and third data combinations, derived from the MACACO and Geolife datasets, instead cover all times. We thus employ them to overcome the limitations of the CDR and Internet flow pair, and to study CDR completion during night hours. Details are provided in Sec. 6.

Biases in CDR-based mobility analyses

Before delving into CDR completion, we present an updated analysis of the suitability of CDR data for the characterization of human mobility. Indeed, as anticipated in Sec. 1, CDR are typically sparse in space and time, which may affect the validity of results obtained from CDR mining.

Cell tower locations

In most CDR datasets, the position information is actually represented by the location of the cell tower handling the corresponding communication. Hence, a shift from the user's actual location to the cell tower location exists in every CDR entry. Such a shift may impact the accuracy of individual mobility measurements. Usually, CDR are collected in metropolitan areas. In this case, the precision of human locations provided by CDR is related to the local deployment of base stations. Fig. 4 shows the deployment of cell towers in the metropolitan area where our CDR dataset was collected. The presence of cell towers is far from uniform, with a higher density in downtown areas, where a cell tower covers an approximately 2 km² area on average: in these cases, the cell coverage grants a fair granularity in the localization of mobile network subscribers. The same may not be true for cells in the city outskirts, which cover areas of several tens of km². We evaluate how the cell deployment can bias human mobility studies. To this end, we first extract 718,987 GPS locations in the mainland of France from the MACACO dataset.
Among these locations, 74% are collected from the major metropolitan areas in France, including the Paris region, Lyon, and Toulouse. (The study focuses on the area within the latitude and longitude ranges of (43.005, 49.554) and (-1.318, 5.999), respectively.) We then extract the cell tower locations of the four major cellular network operators in France (i.e., Orange, SFR, Free, and Bouygues) from the open government data [START_REF]France Open Data[END_REF]. Fig. 5(a) shows the CDF of the distance between each GPS location in the MACACO dataset and its nearest cell tower. We observe that most of the locations lie less than 1 km from their nearest cell (i.e., 95% for Orange, 91% for SFR, 86% for Free, and 91% for Bouygues). Nevertheless, when we focus on the metropolitan areas, as shown in Fig. 5(b), almost all the shifts (i.e., over 99%) are below 1 km, and all operators have median shifts of around 200-500 meters. This indicates that the shifts above 1 km are all observed in rural areas. Still, most of the shifts exceed 100 meters, indicating that using cell tower locations introduces some bias. We stress that these values provide an upper bound on the positioning error incurred by CDR, as mobile network subscribers may be associated with antennas that are not the nearest ones, due to signal propagation phenomena or load balancing policies enacted by the operator. Still, the level of accuracy in Fig. 5, although far from that obtained from GPS logs, is largely sufficient for a variety of metropolitan-level or inter-city mobility analyses. For instance, it was shown that a spatial resolution of 2-7 km is sufficient to track the vast majority of mobility flows in a large dual-pole metropolitan region [START_REF] Coscia | Optimal spatial resolution for the analysis of human mobility[END_REF]. Human movement span We then examine whether mining CDR data is a suitable means for measuring the geographical span of movement of individuals.
For that, we compute for each user u in the set of study U the radius of gyration, i.e., the deviation of the user's positions from their centroid. Formally, $R_g^u = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left\| r_i^u - r_{centroid}^u \right\|_{geo}^2}$, where $r_{centroid}^u$ is the center of mass of the locations of user u, i.e., $r_{centroid}^u = \frac{1}{n}\sum_{i=1}^{n} r_i^u$. This metric reflects how widely the subscribers move and is a popular measure used in human mobility studies [START_REF] Paul | Understanding traffic dynamics in cellular data networks[END_REF][START_REF] González | Understanding individual human mobility patterns[END_REF][START_REF] Song | Limits of Predictability in Human Mobility[END_REF][START_REF] Hoteit | Estimating human trajectories and hotspots through mobile phone data[END_REF]. An individual who repeatedly moves among several fixed nearby locations still yields a small radius of gyration, even if she may total a large traveled distance. We are able to compute both estimated (due to the temporal sparsity of the actual or the equivalent CDR data) and real (due to the finer granularity in the ground-truth provided by the Internet flow, MACACO, and Geolife datasets) values of the radius of gyration. The three distributions are quite similar, indicating that one can get a reliable distribution of $R_g^u$ from a certain number of users even if they are rare CDR users, i.e., have a limited number of mobile communication activities. When considering the error between real and estimated radius of gyration, in Fig. 6(b) for the CDR and Internet flow datasets, and in Fig. 6(c) and 6(d) for the MACACO and Geolife datasets, respectively, we observe the following: • The distribution of large errors is similar in all cases, and outlines a decent accuracy of the coarse-grained CDR or CDR-like datasets. For approximately 90% of the Internet flow users, 95% of the MACACO users, and 70% of the Geolife users, the errors between the real and the estimated radius of gyration are less than 5 km.
The higher errors obtained from the Geolife dataset may be explained by the irregular sampling in the original data and the presence of very large gaps between consecutive logs. • A more accurate radius of gyration can be obtained for the CDR users who are especially active: 92% of the frequent CDR users have errors lower than 5 km, while the percentage decreases to 86% for the rare CDR users. • When considering small errors, the distributions tend to differ, with far lower errors in the case of CDR than of MACACO or Geolife. This is in fact an artifact of considering cell tower locations as the ground-truth user positions in the fine-grained Internet flow dataset (cf. Sec. 4.1). In the more accurate GPS data of MACACO and Geolife, around 30% and 10% of the users, respectively, have errors lower than 100 meters, while around 35% of the users in the CDR dataset have errors below 1 meter. Overall, these results confirm the previous findings on the limited suitability of CDR for the assessment of the spread of human mobility [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF]. They also unveil how different datasets can affect data reliability at diverse scales. Missing locations Due to spatiotemporal sparsity, the mobility information provided by CDR is usually incomplete. We investigate the phenomenon in the case of users in the CDR dataset, and plot in Fig. 7 the ratio between the number of unique locations visited according to the CDR data and according to the Internet flow data: $r_{N_L} = N_L^{CDR} / N_L^{Flow}$. (1) We notice that 42% of the population of study (i.e., all users) have $r_{N_L}$ higher than 0.8. For these users, 80% of their unique visited locations appear in the CDR data. The percentage of users meeting this criterion is slightly higher for the frequent CDR users (50%) and lower for the rare CDR users (37%). These results confirm that using CDR to study very short-term mobility patterns is ill-advised, due to the high temporal sparsity and the missing locations in CDR.
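The two metrics above, the radius of gyration and the unique-location ratio $r_{N_L}$, can be computed as in the following sketch (planar distances are used for brevity; geodesic distances should be substituted in practice, and all names are ours):

```python
import math

def radius_of_gyration(points):
    """Radius of gyration of a list of (x, y) positions."""
    n = len(points)
    cx = sum(p[0] for p in points) / n  # centroid (center of mass)
    cy = sum(p[1] for p in points) / n
    return math.sqrt(sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2
                         for p in points) / n)

def location_ratio(cdr_cells, flow_cells):
    """Ratio of unique locations seen in CDR over those in flow data."""
    return len(set(cdr_cells)) / len(set(flow_cells))
```

For instance, two positions at (0, 0) and (2, 0) have centroid (1, 0) and a radius of gyration of 1.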
Important locations The identification of significant places where people live and work is generally regarded as an important step in the characterization of human mobility. Here, we focus on home and work locations: we separate the period of study into two time windows, mapping to work time (9 am to 5 pm) and night time (10 pm to 7 am), for both the CDR-like and ground-truth datasets. For each user, the place where the majority of work time records occur is considered a proxy for the work location; the equivalent at night time is considered a proxy for the home location [START_REF] Phithakkitnukoon | Socio-geography of human mobility: A study using longitudinal mobile phone data[END_REF]. It is worth noting that, as the Internet flow dataset covers only (10am, 6pm), we only infer work locations for this dataset. Formally, let us consider a user u from the user set. The visiting pattern of the user u is a sequence of samples $(\ell_i^u, t_i^u)$, where $\ell_i^u$ is the location recorded at time $t_i^u$. The home location $H^u$ of the user u is then defined as the most frequent location during night time: $H^u = \mathrm{mode}(\ell_i^u \mid t_i^u \in t_H)$, (2) where $t_H$ is the night time interval. The definition is equivalent for the work location $W^u$ of the user u, computed as $W^u = \mathrm{mode}(\ell_i^u \mid t_i^u \in t_W)$, (3) where $t_W$ is the work time interval. We use the definitions in (2) and (3) to determine home and work locations, and then evaluate the accuracy of the CDR-based significant locations by measuring the geographical distance that separates them from the equivalent locations estimated via the corresponding fine-grained ground-truth datasets. The results are shown in Fig. 7(b)-(f) as the CDF of the spatial error in the position of home and work places for different user groups across the three datasets. We observe the following: • The errors related to home locations are fairly small in the MACACO dataset, but relatively higher in the Geolife dataset. For the MACACO users, the errors are always below 1 km and 94% are within 100 meters.
For the Geolife users, we observe that 17% of the errors are higher than 10 km. A possible interpretation is that some Geolife users are highly active and do not stay in a stable location during nighttime. • For both MACACO and Geolife users, the errors associated with work locations are noticeably higher than those measured for home locations. For instance, as shown in Fig. 7(d), while 75% of the MACACO users have an error of less than 300 meters, the work places of a significant portion of individuals (around 12%) are identified at a distance greater than 10 km from the positions extracted from the GPS data. A similar behavior can be observed for the Internet flow and Geolife users, as shown in Fig. 7(b) and Fig. 7(f). These large errors typically occur for users who do not seem to have a stable work location and may be working in different places depending on, e.g., the time of day. • The errors are significantly reduced when using cell tower locations, as in the Internet flow dataset, instead of actual GPS positions, as in the MACACO or Geolife datasets. For the Internet flow users in Fig. 7(b), the errors between the real and the estimated significant locations are null for approximately 85% of all users, indicating that the use of the coarse-grained dataset is fairly sufficient for inferring these significant locations. • The errors are non-null for the remaining Internet flow users (15%). Among them, 10% have relatively small errors (less than 5 km), while 5% have errors larger than 5 km. • There is only a slight difference in the distribution of the errors associated with work locations between the rare and the frequent CDR users, as shown in Fig. 7(b). The reason is that most CDR are generated at significant locations, and hence the most frequent location obtained from the CDR of a user is likely to be her actual work location during daytime. Still, it is relatively difficult to capture actual location frequencies if a user has only a few CDR.
Hence, the rare CDR users have higher errors. Overall, these results confirm previous findings [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF], and further prove that CDR yield enough details to detect significant locations in users' visiting patterns. Besides, the results reveal a small risk of incorrectly estimating the ranking among such locations. Current approaches to CDR completion The previous results confirm the quality of mobility information inferred from CDR, regarding the span of users' movement and significant locations. They also indicate that some biases are present: specifically, although transient and less important visited places may be lost in CDR data, most of one's historically visited locations can still be captured. The good news is that, even in those cases, the error induced by CDR is relatively small. A major issue remains that CDR only provide instantaneous information about a user's location at a few time instants over a whole day. Overcoming the problem would help the already significant efforts in mobility analyses with CDR [START_REF] Naboulsi | Large-scale Mobile Traffic Analysis: a Survey[END_REF], allowing the exploration of scales much larger than those enabled by GPS datasets. Temporal CDR completion aims at filling the time gaps in CDR, by estimating users' locations in between their mobile communication activities. Several strategies for CDR completion have been proposed to date. In this section, we introduce and discuss the two most popular solutions adopted in the literature. Baseline static solution A simple solution is to hypothesize that a user remains static at the same location where she is last seen in her CDR. This methodology is adopted, e.g., by Khodabandelou et al. [START_REF] Khodabandelou | Population estimation from mobile network traffic metadata[END_REF] to compute subscribers' presence in mobile traffic meta-data used for population density estimation.
We will refer to this approach as the static solution and will use it as a basic benchmark for more advanced techniques. It is worth noting that this solution has no spatiotemporal flexibility; its performance only depends on the number of CDR a user generates in the period of study: i.e., the higher the number of CDR, the lower the spatial error in the data completed by the static solution. In other words, there is no configurable setting or initial parameter for customizing this solution to obtain better accuracy. Baseline stop-by solution Building on in-depth studies showing that individuals stay most of the time in the vicinity of their voice call places [START_REF] Ficek | Inter-call mobility model: A spatio-temporal refinement of call data records using a gaussian mixture model[END_REF], Jo et al. [START_REF] Jo | Spatiotemporal correlations of handset-based service usages[END_REF] assume that users can be found at the locations where they generate digital activities for an hour-long interval centered at the time when each activity is recorded. If the time between consecutive activities is shorter than one hour, the inter-event interval is equally split between the two locations where the bounding events occur. This solution will be denoted as stop-by in the remaining sections. The drawback of the stop-by solution is that it uses a constant hour-long interval for all calls and all users in CDR, which may not always be suitable. This solution lacks flexibility in dealing with varied human mobility behaviors. As exemplified in Fig. 8, a single CDR is observed at time $t_{CDR}$ at cell C. Following the stop-by solution, the user is considered to be stable at this cell C during the period $d = (t_{CDR} - |d|/2,\ t_{CDR} + |d|/2)$, while in fact the user has moved to two other cell towers during this period. We call the period estimated from an instantaneous CDR entry a temporal cell boundary. In the example of Fig. 8, this temporal cell boundary is overestimated.
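The stop-by completion rule can be sketched as follows (our own naming; times are in minutes, and `d_minutes` plays the role of the interval |d|):

```python
def stop_by_complete(cdr, d_minutes=60):
    """Stop-by completion: expand each CDR record (t, cell) into an
    interval of presence at that cell.

    cdr: list of (t, cell) with t in minutes, sorted by time.
    Returns a list of (start, end, cell) intervals.
    """
    half = d_minutes / 2.0
    intervals = []
    for i, (t, cell) in enumerate(cdr):
        start, end = t - half, t + half
        # If the previous/next record is closer than half the window,
        # split the inter-event gap equally between the two cells.
        if i > 0:
            start = max(start, (cdr[i - 1][0] + t) / 2.0)
        if i < len(cdr) - 1:
            end = min(end, (t + cdr[i + 1][0]) / 2.0)
        intervals.append((start, end, cell))
    return intervals
```

For example, two records 30 minutes apart share the gap at its midpoint, while isolated records keep the full hour-long window.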
Nevertheless, this solution has more flexibility than the static solution: the time interval |d| affects its performance and is configurable. Although a one-hour interval (|d| = 60 minutes) is usually adopted in the literature, we are interested in evaluating the performance of the stop-by solution over different intervals, which has never been explored before. Intuitively, a spatial error occurs if the user moves to different cells during the temporal cell boundary. To quantify this error, we define the spatial error of a temporal cell boundary with a period d as follows: $\mathrm{error}(d) = \frac{1}{|d|} \int_{d} \left\| c^{(CDR)} - c^{(real)}_t \right\|_{geo} \, dt$. (4) This measure represents the average spatial error between a user's real cell location over time, $c^{(real)}_t$, and her estimated cell location, $c^{(CDR)}$, during the time period d. The interpretation of the spatial error is straightforward, as follows: • When error(d) = 0, the user stays at the cell $c^{(CDR)}$ during the whole temporal cell boundary. Still, the estimation of d may be conservative, since a larger |d| could be more appropriate in this case. • When error(d) > 0, the temporal cell boundary is oversized: the user in fact moves to other cells in the corresponding time period. Thus, a smaller |d| should be used for the cell. Due to the relevance of this parameter to the model performance, in the following we evaluate the impact of |d| on the spatial error. Impact of parametrization on stop-by accuracy We evaluate the performance of the stop-by approach by considering the CDR and ground-truth Internet flow datasets (cf. Sec. 3). CDR are used to generate temporal cell boundaries, while locations in the fine-grained flow data are adopted as actual locations and used to compute the spatial errors. We first observe that static users, i.e., users who never leave a single cell, incur a null spatial error under any |d|. To account for this aspect, we exclude the static users in the following, and only consider the mobile users, i.e., those having $R_g^u > 0$.
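The spatial error in (4) admits a simple discrete approximation when the ground-truth trajectory is piecewise constant; the sketch below uses our own function names, and `dist` stands for any geographical distance function:

```python
def spatial_error(cdr_cell, trajectory, dist):
    """Discrete approximation of the spatial error of a temporal cell
    boundary (Eq. 4).

    cdr_cell: location of the cell recorded in the CDR entry.
    trajectory: list of (duration, cell_location) pairs describing the
        user's real cell over the boundary period d.
    dist: function returning the geographical distance between two cells.
    """
    total = sum(duration for duration, _ in trajectory)
    # Time-weighted average distance between the estimated and real cells.
    return sum(duration * dist(cdr_cell, cell)
               for duration, cell in trajectory) / total
```

If the user stays at `cdr_cell` for the whole period, the result is 0; otherwise it grows with the time spent in, and the distance to, other cells.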
An interesting consideration is that the spatial error incurred by the stop-by approach is not uniform across cells. Intuitively, a cell tower covering a larger area is expected to determine longer user dwelling times, and hence better estimates with stop-by. We thus compute for each cell its coverage as the cell radius: specifically, we assume a homogeneous propagation environment and an isotropic radiation of power in all directions at each cell tower, and roughly estimate each cell radius as that of the smallest circle encompassing the Voronoi polygon of the cell tower. We remark that this approach yields overlapping coverage at cell borders, which reflects what happens in real-world deployments. In the target area under study, shown in Fig. 4, 70% of the cells have radii within 3 km, and the median radius is approximately 1 km. We can now evaluate the probability of having a temporal cell boundary with a null spatial error, as $P_{e0} = \Pr\{\mathrm{error}(d) = 0\}$. Fig. 10(a) and 10(b) present the probabilities $P_{e0}$ grouped by the cell radius, when applying varying sizes of temporal cell boundary on the days of study. We notice the following. • The probability $P_{e0}$ decreases as |d| increases, indicating that using a large period for the temporal cell boundary increases the chances of generating some spatial error. For instance, for |d| = 30 minutes, the probability of having a null spatial error is around 0.7, depending on the date and on the cell radius. When a larger |d| is used, the probability significantly decreases (e.g., for |d| = 60 minutes, the probability $P_{e0}$ reduces to around 0.6). • The probability $P_{e0}$ correlates positively with the cell radius r. This trend is seen on both Monday and Sunday (with a few exceptions), indicating that the cell size has an impact on the time interval during which the user stays within the cell coverage.
Intuitively, handovers are frequent for users moving among small cells and less so for users traveling across large cells. The results support the idea that there is a strong correlation between the temporal cell boundary and the cell coverage. Nevertheless, since CDR are usually sparse in time, using a small temporal cell boundary can only cover an insignificant amount of cell visiting time, while using a large temporal cell boundary increases the risk of having a non-null spatial error. To investigate this trade-off, we plot the variation of the statistical distribution of the spatial errors after excluding the null errors (i.e., keeping only cases with non-null error(d)) in Fig. 10(c) and 10(d). We observe that: • The spatial error varies widely: it goes from less than 1 km to very large values (up to 3.6 km on Monday and 7.5 km on Sunday). Hence, for some users, the stop-by solution is unsuitable for reconstructing visiting patterns due to the presence of such high spatial errors. • The spatial error grows with the cell radius: when the cell size increases, the variation of the error becomes wider, while the mean value also increases. This is reasonable because the larger the cell radius, the farther the cell is from its neighbors. Hence, when a spatial error occurs, the user is actually in a distant cell, far from $c^{(CDR)}$. Key insights Overall, we assert that the temporal cell boundary estimates users' locations with high accuracy when |d| is small. This validates the previous finding that users usually stay in the proximity of call locations for a certain time. The accuracy decreases significantly, giving rise to spatial errors, when |d| increases. Hence, the trade-off between completion and accuracy should be carefully considered when completing CDR using temporal cell boundaries. Using a constant |d| over all users, as in the stop-by solution, is unlikely to be an appropriate approach.
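The per-cell radius estimation described above (the radius of the smallest circle encompassing a tower's Voronoi polygon) can be sketched with SciPy; this is an illustration under the stated isotropic assumption, and it only returns radii for towers whose Voronoi regions are bounded:

```python
import numpy as np
from scipy.spatial import Voronoi

def cell_radii(towers):
    """Estimate each cell's radius as the distance from the tower to the
    farthest vertex of its (bounded) Voronoi polygon.

    towers: (n, 2) array of tower coordinates.
    Returns a dict {tower_index: radius}; unbounded regions are skipped.
    """
    vor = Voronoi(towers)
    radii = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:  # unbounded region at the border
            continue
        verts = vor.vertices[region]
        radii[i] = float(np.max(np.linalg.norm(verts - towers[i], axis=1)))
    return radii
```

In a real deployment, border cells with unbounded polygons would need to be clipped against the study area before the same computation applies.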
Building on these considerations, we propose enhancements to the stop-by and static solutions in the remainder of the paper. The data completion strategies introduced in the following leverage common trends in human mobility, in terms of (1) attachment to a specific location during night periods, and (2) a tendency to stay for some time in the vicinity of locations where digital activities take place. In particular, we distinguish between strategies for CDR completion at nighttime and at daytime: Sec. 6 presents nighttime completion strategies inferring the home location of users; Sec. 7 introduces our adaptive temporal cell boundary strategies leveraging human mobility regularity during daytime. Identifying temporal home boundaries The main goal of our strategies for CDR completion during nighttime is to infer temporal boundaries during which users are located, with high probability, at their home locations. We refer to this problem as the identification of the user's temporal home boundary. Gaps in CDR occurring within the home boundary of each user are then filled with the identified home location. The rationale for this approach stems from our previous observations that CDR allow identifying the home location of individuals with high accuracy. Proposed solutions We extend the stop-by solution (cf. Sec. 5.2) in the following ways. Note that all techniques below assume that the home location is the user's most active location during some night time interval h, and that CDR outside h are completed via the legacy stop-by. • The stop-by-home strategy adds fixed temporal home boundaries to the stop-by technique. If a user's location is unknown during h = (10pm, 7am) due to the absence of CDR in that period, the user will be considered at her home location during h. • The stop-by-flexhome strategy refines the previous approach by exploiting the diversity in the habits of individuals.
The fixed night time temporal home boundaries are relaxed and become flexible, which allows adapting them on a per-user basis. Specifically, instead of considering h = (10pm, 7am) as the fixed home boundary for all users, we compute for each user u the most probable interval of time $h^{(u)}_{flex} \subseteq h$ during which the user is at her home location. Then, as for stop-by-home, the user will be considered at her home location to fill gaps in her CDR data during $h^{(u)}_{flex}$. • The stop-by-spothome strategy augments the previous technique by accounting for positioning errors that can derive (1) from users who are far from home during some nights, or (2) from ping-pong effects in the association to base stations when the user is within their overlapping coverage region. In this approach, if a user's location during $h^{(u)}_{flex}$ is not identified, and if she is last seen at no more than 1 km from her home location, she is considered to be at her home location. We compare the above strategies with the static and the legacy stop-by solutions introduced in Sec. 5, assuming |d| = 60 min. Our evaluation considers two perspectives. The first is accuracy, i.e., the spatial error between mobility metrics computed from ground-truth GPS data and from CDR completed with the different techniques above. The second is completion, i.e., the percentage of time during which the position of a user is determined. Note that the static solution (cf. Sec. 5) provides user locations at all times, but this is not true for stop-by or the derived techniques above. In this case, the CDR is completed only for a portion of the total period of study, and the users' whereabouts remain unknown in the remaining time. Accuracy and completion results We first compute the geographical distance between the positions in the GPS records in MACACO and Geolife and those in their equivalent CDR-like coarse-grained datasets.
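One plausible way to compute a per-user flexible home boundary, sketched under our own assumptions (hourly granularity, and a threshold on the fraction of nights the user is observed at home in each hour), is:

```python
def flexible_home_boundary(night_records, home_cell, night_hours, min_frac=0.8):
    """Estimate a per-user flexible home boundary h_flex.

    night_records: list of (day, hour, cell) observations during night time.
    home_cell: the user's inferred home location.
    night_hours: ordered list of hours composing the fixed boundary h,
        e.g., [22, 23, 0, 1, 2, 3, 4, 5, 6].
    Returns the hours in which the user is at home on at least min_frac
    of the nights in which she is observed.
    """
    at_home, seen = {}, {}
    for day, hour, cell in night_records:
        seen.setdefault(hour, set()).add(day)
        if cell == home_cell:
            at_home.setdefault(hour, set()).add(day)
    return [h for h in night_hours
            if h in seen
            and len(at_home.get(h, set())) / len(seen[h]) >= min_frac]
```

The threshold `min_frac` and the hourly granularity are our own choices for illustration; the paper only requires that the interval be the most probable one per user.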
Except for the static solution, these strategies are not designed to provide positioning information at all times; hence, distances are only measured for GPS samples whose timestamps fall within the time periods for which completed data is available. • The static approach provides the worst accuracy in both datasets. • The stop-by-flexhome technique largely improves the data precision, with an error that is lower than 100 meters in 90-92% of cases for the MACACO users, and with a median error around 250 meters for the Geolife users. • The stop-by-spothome technique provides the best performance for both datasets. For instance, about 95% of samples lie within 100 meters of the ground-truth locations in the MACACO dataset, while the median error is 250 meters (the lowest result) in the Geolife dataset. These results confirm that a model where the user remains static for a limited temporal interval around each measurement timestamp is fairly reliable when it comes to accuracy of the completed data. They also support previous observations on the largely static behavior of mobile network subscribers [START_REF] Ficek | Inter-call mobility model: A spatio-temporal refinement of call data records using a gaussian mixture model[END_REF]. More importantly, information on home locations can be successfully included in such models, by accounting for the specificity of each user's habits overnight. The stop-by and derived solutions do not provide full completion by design. Overall, the combination of the results in Fig. 11 indicates that the stop-by-spothome solution achieves the best combination of high accuracy and fair completion among the different completion techniques considered. Identifying temporal cell boundaries We now consider the possibility of completing CDR during daytime. Our strategy is again based on inferring temporal boundaries of users. However, unlike what has been done with nighttime periods in Sec.
6, here we leverage the communication context of human mobility habits and extend the time span of the position associated with each communication activity to so-called temporal cell boundaries. Factors impacting temporal cell boundaries Hereafter, we aim to answer the following question: how to choose a proper, adaptive period for a temporal cell boundary instead of a fixed period common to all users? To answer the question, we need to understand the correlation between the routine behavior of users in terms of mobile communications and their movement patterns. For this, we first study how human behavior factors that can be extracted from CDR may affect daytime temporal cell boundaries. We categorize the factors into three classes, i.e., event-related, long-term behavior, and location-related, as detailed next. Then, we leverage them to design novel approaches to estimate temporal cell boundaries. The location-related factors build on a clustering algorithm in the spirit of Isaacman et al.: the algorithm applies Hartigan's clustering [START_REF] Hartigan | Clustering[END_REF] on the cell locations visited by users in CDR, and uses logistic regression to estimate a location's importance to the user from factors extracted from the cluster that the location belongs to. To start with, the clustering approach takes the cell tower of the first CDR entry as the first cluster. Then, it recursively checks all cell towers in the remaining CDR. If a cell tower is within the distance threshold (we use 1 km) of the centroid of a certain cluster, the cell tower is added to the cluster, and the centroid of the cluster is moved to the weighted average of the locations of all the cell towers in the cluster; otherwise, the cell tower starts a new cluster. The weights assigned to locations are the fractions of days on which they are visited over the whole observation period. The clustering process finishes once all cell towers are assigned to clusters.
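The Hartigan-style leader clustering described above can be sketched as follows (our own naming; planar coordinates in km are used for brevity instead of geodesic distances):

```python
import math

def hartigan_clusters(towers, threshold_km=1.0):
    """Leader-style clustering of visited cell towers.

    towers: list of (x_km, y_km, weight), where weight is the fraction of
        days on which the tower is visited.
    A tower joins the first cluster whose centroid is within threshold_km;
    otherwise it starts a new cluster. Centroids are weighted averages.
    """
    clusters = []  # each cluster: {"members": [...], "centroid": (x, y)}
    for x, y, w in towers:
        placed = False
        for c in clusters:
            cx, cy = c["centroid"]
            if math.hypot(x - cx, y - cy) <= threshold_km:
                c["members"].append((x, y, w))
                tot = sum(m[2] for m in c["members"])
                c["centroid"] = (
                    sum(m[0] * m[2] for m in c["members"]) / tot,
                    sum(m[1] * m[2] for m in c["members"]) / tot)
                placed = True
                break
        if not placed:
            clusters.append({"members": [(x, y, w)], "centroid": (x, y)})
    return clusters
```

Note that, as in any leader clustering, the result depends on the order in which towers are processed, which here follows the CDR order.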
Once clusters are defined, the importance of each cluster is identified according to the following observable factors: (i) the number of days on which any cell tower in the cluster was contacted (CDay); (ii) the number of days elapsing between the first and the last contact with any location in the cluster (CDur); (iii) the sum, over the cell towers in the cluster, of the number of days each tower was contacted (CTDay); (iv) the number of cell towers inside the cluster (CTower); (v) the distance from the registered location of the activity to the centroid of the cluster (CDist). These cluster-derived factors correlate with the time that the user spends at the cluster's locations, as shown by Isaacman et al. via their logistic regression model [START_REF] Isaacman | Identifying important places in peopleś lives from cellular network data[END_REF]. It is worth noting that we cannot reproduce the exact model in [START_REF] Isaacman | Identifying important places in peopleś lives from cellular network data[END_REF], since the ground truth used there is not publicly available. However, we can still use the same factors for our objective, i.e., identifying temporal cell boundaries. Supervised temporal cell boundary estimation So far, we have introduced human behavior factors that might be directly or indirectly related to temporal cell boundaries. In order to use them for our purpose, we need a reliable model linking them to actual temporal cell boundaries. In the following, we introduce two approaches to do so, both based on supervised machine learning. Symmetric and asymmetric temporal cell boundaries We define two kinds of temporal cell boundaries: symmetric and asymmetric. Given a CDR entry at time t, determining its temporal cell boundary means expanding the instantaneous time t to a time interval d, during which the user is assumed to remain within the coverage of the same cell.
For a symmetric temporal cell boundary, this period is generated from a CDR-based parameter $d^{\pm}$ as $d = (t - d^{\pm},\ t + d^{\pm})$, i.e., it is symmetric with respect to the CDR time t. Instead, the period of an asymmetric temporal cell boundary is generated from two independent parameters $d^{+}$ and $d^{-}$ as $d = (t - d^{-},\ t + d^{+})$. We design the sym-adaptive and asym-adaptive approaches, both of which receive a CDR entry as input and return an estimate of its associated temporal cell boundary. More precisely, the factors discussed in Sec. 7.1 are extracted for each user and CDR record, and converted to an input vector x, under the following rules: (i) the categorical factor type is converted to two binary features by one-hot encoding; (ii) the time is converted to the distances (in seconds) separating it from 10am and from 6pm; (iii) the other factors are used as plain scalar values. Given a CDR entry and its input vector x, we have the following approaches: • The sym-adaptive approach contains one model that accepts the input vector and predicts the parameter $d^{\pm}$ as a symmetric estimation of the corresponding temporal cell boundary, i.e., $d^{\pm} = F_{sym}(x)$. • The asym-adaptive approach contains two models that separately predict the parameters $d^{+}$ and $d^{-}$ as a joint asymmetric estimation of the corresponding temporal cell boundary, i.e., $d^{+} = F^{+}_{asym}(x)$ and $d^{-} = F^{-}_{asym}(x)$. We use supervised machine learning techniques to build the models. It is worth noting that the user identifier is not in the input vector x, because we do not want to train models bound to any particular user. This gives our models better flexibility and ensures higher potential for applying the trained models to other mobile phone datasets where the same factors can be derived.
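The conversion rules (i)-(iii) can be sketched as follows (our own function name; we assume the two activity types are calls and text messages, and use absolute time distances):

```python
def build_input_vector(activity_type, t_seconds, scalars):
    """Build the input vector x for a CDR entry.

    activity_type: "call" or "text" (one-hot encoded into two binaries).
    t_seconds: time of day of the activity, in seconds since midnight.
    scalars: remaining scalar factors (e.g., cell radius, radius of
        gyration, cluster-derived features).
    """
    one_hot = [1.0, 0.0] if activity_type == "call" else [0.0, 1.0]
    t_10am, t_6pm = 10 * 3600, 18 * 3600
    # Distances (in seconds) from 10am and from 6pm.
    time_feats = [abs(t_seconds - t_10am), abs(t_seconds - t_6pm)]
    return one_hot + time_feats + list(scalars)
```

For instance, an activity at noon yields time features of 7200 and 21600 seconds.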
Estimating temporal cell boundaries via supervised learning We detail our methodology and results by (i) formalizing the optimization problems that capture our goal, (ii) discussing how they can be addressed via supervised machine learning, and (iii) presenting a complete experimental evaluation. Optimization problems. All the models are generalized from a training set X consisting of CDR entries (as input vectors) and their real temporal cell boundaries (which are originally asymmetric), i.e., $X = \{(x_i, d^{+}_i, d^{-}_i)\}$. To build the asym-adaptive approach, the objective is to find two separate approximations, $F^{+}_{asym}(x)$ and $F^{-}_{asym}(x)$, to functions $F^{+}(x)$ and $F^{-}(x)$ that respectively minimize the expected values of the two losses $L(d^{+}, F^{+}(x))$ and $L(d^{-}, F^{-}(x))$, i.e., $F^{+}_{asym}(x) = \arg\min_{F^{+}} \mathbb{E}_{d^{+},x}[L(d^{+}, F^{+}(x))]$, (5) $F^{-}_{asym}(x) = \arg\min_{F^{-}} \mathbb{E}_{d^{-},x}[L(d^{-}, F^{-}(x))]$, (6) where L is the squared error loss function, i.e., $L(x, y) = \frac{1}{2}(x - y)^2$. To build the sym-adaptive approach, a modified training set $X^{\pm} = \{(x_i, d^{\pm}_i)\}$ is first generated from the original X by applying $d^{\pm}_i = \min\{d^{+}_i, d^{-}_i\}$ to each real asymmetric temporal cell boundary. Then, as our objective, we need to find an approximation $F_{sym}(x)$ to a function $F^{\pm}(x)$ that minimizes the expected value of the loss $L(d^{\pm}, F^{\pm}(x))$. The first step is to build the training sets: using the CDR and Internet flow datasets, we first extract, for each CDR entry of the users selected for training (a randomly chosen 50% of the users), its corresponding input vector x as well as the parameters $d^{+}$, $d^{-}$ of its real temporal cell boundary. We then build the two training sets X and $X^{\pm}$. The second step is to build the approximation functions (i.e., $F^{+}_{asym}$, $F^{-}_{asym}$, and $F_{sym}$) from the training sets. For that, we first have to tune the M and ν parameters of Alg. 1 for each approximation function. To this end, we use a cross-validation procedure, splitting the training set into three equal-sized subsets. For each combination of M and ν, we train the model corresponding to each approximation function on one subset and validate it on the other two subsets.
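The grid search over (M, ν) with three-fold rotation can be sketched as follows, assuming scikit-learn's GradientBoostingRegressor as the GBRT implementation (with `n_estimators` in the role of M and `learning_rate` in that of ν; this is an illustration, not the paper's exact Alg. 1):

```python
import numpy as np
from itertools import product
from sklearn.ensemble import GradientBoostingRegressor

def tune_gbrt(X, y, M_grid, nu_grid, n_folds=3, seed=0):
    """Grid search over (M, nu): train on one fold, validate on the
    remaining folds, rotate over all folds, and keep the pair with the
    lowest average squared-error loss."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_folds)
    best, best_loss = None, np.inf
    for M, nu in product(M_grid, nu_grid):
        losses = []
        for k in range(n_folds):
            train = folds[k]
            valid = np.concatenate(
                [folds[j] for j in range(n_folds) if j != k])
            model = GradientBoostingRegressor(
                n_estimators=M, learning_rate=nu, random_state=seed)
            model.fit(X[train], y[train])
            err = model.predict(X[valid]) - y[valid]
            losses.append(np.mean(0.5 * err ** 2))  # loss L(x, y)
        avg = float(np.mean(losses))
        if avg < best_loss:
            best, best_loss = (M, nu), avg
    return best, best_loss
```

Note that training on one fold and validating on the other two mirrors the scheme described in the text, which inverts the usual cross-validation split.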
We repeat this operation three times, with each of the subsets used as training data. We select as our actual parameters the M and ν values that achieve the lowest loss in the cross-validation. Finally, we use the training sets X and X± and the tuned parameters to build the functions F_asym^+, F_asym^-, and F_sym corresponding to the asym-adaptive and sym-adaptive approaches.

Fig. 12 shows the relative importance of factors with respect to the estimation of a temporal cell boundary in the training procedure of the GBRT technique. For each factor, its importance is computed as a relative value of the sum of its corresponding importance in all three approximations. The importance indicates the degree to which a feature contributes to the construction of the regression trees. This figure allows us to draw the following main conclusions, valid for both approaches.

• The three most important factors are the timestamp of the activity, the cell radius, and the radius of gyration. This indicates that the time spent by a user within coverage of a same cell mainly depends on the cell size, the precise time when the activity occurred, and the user's long-term mobility.
• Surprisingly, the activity's type is the least relevant factor, indicating that knowing whether a user generates a call or a message is useless in determining a temporal cell boundary.

Accuracy and completion results

We compare our two trained approaches with the stop-by and static approaches using the CDR from the remaining 50% of the randomly-selected users. For the two sym-adaptive and asym-adaptive approaches, we build two testing sets from the CDR entries of the remaining users, and then let the two approaches generate adaptive symmetric and asymmetric temporal cell boundaries using the input vectors in the testing sets. Besides, we let the stop-by approach generate temporal cell boundaries using |d| = {10, 60, 180} minutes. As in Sec.
6, we make a comparative study by evaluating the solutions regarding accuracy and completion, where the accuracy is measured by evaluating the spatial error in (4) (cf. Sec. 5). Recall that a good data completion approach should cover as much of the observing period as possible, as precisely as possible, i.e., it should achieve high accuracy and high completion simultaneously.

Fig. 13(a) and 13(b) display the distribution of the spatial errors over all temporal cell boundaries. Our results confirm that the spatial error increases as |d| becomes larger when using the stop-by approach. More importantly, the two adaptive approaches perform slightly better than the stop-by approach does with its most common setting (|d| = 60 minutes) in terms of the spatial error. As expected, the static solution has the worst performance, similarly to what was observed in the case of home boundaries using the MACACO and Geolife datasets.

Fig. 13(c) and 13(d) plot the distribution of the completion per user over all approaches except static (whose completed data always covers the whole period). The x-axis of the figures spans 8 hours because the Internet flow dataset only covers an eight-hour day time, i.e., (10am, 6pm). We remark that both our adaptive approaches score a significant performance improvement in terms of completion: the amount of time during which users' locations stay unidentified is substantially reduced with respect to the legacy stop-by approach. On average, only approximately 2 hours (25% of the period of study) of the user's day time remain unidentified after applying the asym-adaptive approach, while 3 hours remain unidentified after using the sym-adaptive and stop-by (|d| = 180 minutes) approaches. The stop-by approach with its most common setting (|d| = 60 minutes) has the same degree of accuracy as the adaptive approaches but a far lower degree of completion (i.e., a median of 6 unidentified hours).
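The completion metric discussed above can be sketched as the share of the (10am, 6pm) window left uncovered by the union of a user's temporal cell boundaries; the exact union-based definition and the interval merging are our assumptions.

```python
def unidentified_hours(boundaries, day_start=10 * 3600, day_end=18 * 3600):
    """Hours of the (10am, 6pm) window NOT covered by any temporal cell
    boundary. `boundaries` are (start, end) tuples in seconds of day."""
    # clip boundaries to the observation window and sort them
    clipped = sorted((max(s, day_start), min(e, day_end))
                     for s, e in boundaries if e > day_start and s < day_end)
    covered, cursor = 0, day_start
    for s, e in clipped:
        if e > cursor:
            # count only the part not already covered by a previous interval
            covered += e - max(s, cursor)
            cursor = e
    return (day_end - day_start - covered) / 3600.0

# two overlapping boundaries covering 10:00-12:00 and 11:00-13:00
gaps = unidentified_hours([(10 * 3600, 12 * 3600), (11 * 3600, 13 * 3600)])
```

With the two overlapping boundaries above, 3 of the 8 hours are covered, leaving 5 unidentified hours.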
Overall, these results highlight a clear advantage provided by adaptive approaches for CDR completion based on supervised learning. The adaptive approaches achieve a slightly better performance in terms of accuracy and a far better performance in terms of completion. The asym-adaptive approach has an obvious advantage over its competitors: it completes 75% of the day hours with a fairly good accuracy.

Conclusion

In this paper, we leveraged real-world CDR and GPS datasets to characterize the bias induced by the use of CDR for the study of human mobility, and evaluated CDR completion techniques to reduce some of the emerging limitations of this type of data. Our results confirm previous findings on the sparsity of CDR, and, more importantly, provide a first comprehensive investigation of techniques for CDR completion. In this context, we propose solutions that (i) dynamically extend the time intervals spent by users at locations where they are pinpointed by the CDR data during daytime, and (ii) sensibly place users at their home locations during nighttime. Extensive tests with heterogeneous real-world datasets prove that our approaches can achieve excellent combinations of accuracy and completion. On average, for daytime hours, our approaches can complete 75% of the time, in which 95% of the completed boundaries have errors below 1 km; for nighttime hours, our refinements of the legacy solution yield a completion gain of 4-5 or 3-7 hours on the two datasets and an accuracy gain of up to 10%. In particular, compared with the most common proposal in the literature, our best adaptive approach outperforms it by 5% in accuracy and 50% in completion.

Figure 1: Distributions of the inter-event time in the CDR dataset at different day times.

Figure 2: (a) CDF of the inter-event time in the Internet flow fine-grained dataset; (b) CDF of the number of records (flows or CDR) per user in a weekend and a weekday.
Figure 3: Combinations of corresponding coarse- and fine-grained datasets.

Fig. 3 outlines them. An important remark is that, as already mentioned in Sec. 3.2, the Internet flow dataset only covers working hours, from 10 am to 6 pm. As a result, the first data combination is well suited to the investigation of CDR completion during daytime. The relevant analysis is presented in Sec. 7.

Figure 4: Deployment of cell towers in the target metropolitan area. Purple dots represent the base stations, whose coverage is approximated by a Voronoi tessellation.

Figure 5: Distributions of the distances to the nearest cell tower (shifts), for 718,987 GPS locations in the MACACO data of users in (a) the whole area and (b) major metropolitan areas (Paris Region, Lyon, Toulouse) in France.

Figure 6: (a) CDF of the radius of gyration of two categories (Rare and Frequent) of CDR users in the Internet flow dataset. (b)(c)(d) CDF of the distance between the real and the estimated radius of gyration from CDR over the users of the (b) Internet flow, (c) MACACO, and (d) Geolife datasets.

(a) the ratio r_NL of unique locations detected from CDR (N_L^CDR) to those from the ground-truth (N_L^Flow), i.e., Internet flow data.

Figure 7: (a) CDF of the ratio r_NL of the number of locations in each user's coarse-grained trajectory to the one in her fine-grained trajectory. (b)(c)(d)(e)(f) CDF of the distances between each user's real and estimated important locations located by her CDR and ground-truth: (b) work locations over the Internet flow users; (c) home and (d) work locations over the MACACO users; (e) home and (f) work locations over the Geolife users.

Figure 8: An example of a temporal cell boundary in the stop-by approach: a period (t_CDR - |d|/2, t_CDR + |d|/2) is given as a temporal cell boundary at the cell C attached with a CDR entry at time t_CDR.
In this temporal cell boundary, the user is assumed to be at the cell C, while actually she moves from the cell B to D: this leads to a spatial error.

Fig. 9(a) and 9(b) show the CDF of the spatial error of temporal cell boundaries on Monday and Sunday, respectively. We observe that error(d) = 0 for 80% of CDR on Monday (cf. 75% on Sunday) when |d| = 60 minutes, and for 60% of CDR on Monday (cf. 53% on Sunday) when |d| = 240 minutes. This result is a strong indicator that users tend to remain in cell coverage areas for long intervals around their instant locations recorded by CDR. It is also true that many users are simply static, i.e., only appear at one single location in their Internet flows, and consequently have an associated radius of gyration R_g^u = 0: this behavior accounts for approximately 35% and 40% on Monday and Sunday, respectively. The high percentage of temporal cell boundaries with error(d) = 0 in Fig. 9 may be due to these static users, since they will not entail any spatial error.

Figure 10: Spatial errors of temporal cell boundaries of CDR generated by the stop-by solution over users with R_g > 0: (a)(b) the probability (P_e0) of having a non-error temporal cell boundary (-|d|, |d|), where |d| ∈ {10, 30, 60, 120, 180, 240} minutes, under several groups of cell radius on (a) Monday and (b) Sunday; (c)(d) box plot of non-zero spatial errors, grouped by the cell radius and the time period of temporal cell boundary on (c) Monday and (d) Sunday. Each box denotes the median and the 25th-75th percentiles, and the whiskers denote the 5th-95th percentiles.

Fig. 11(a) and 11(b) summarize the results of our comparative evaluation of accuracy, and allow drawing the following main conclusions:

Fig. 11(c) and 11(d) show the CDF of the hours per day during which a user's location remains unidentified.

7.1.1. Event-related factors

We include in this class the meta-data contained in records of common CDR datasets.
They include the activity time, type (i.e., voice call or text message), and duration. Intuitively, these factors have direct effects on temporal cell boundaries. For instance, in terms of time, a user may stay within a fixed cell during her whole working period. In terms of type and duration, a long phone call may imply that the user is static, while a single text message may indicate that the user is on the move. Besides, these factors are commonly found in, and easily extracted from, any common CDR entries.

7.1.2. Long-term behavior factors

This class includes factors describing users' activities over extended time intervals. They are the radius of gyration (URg), the number of unique visited locations (ULoc), and the number of active days during which at least one event is recorded (UDAY). These factors characterize a user by giving indications of (i) her long-term mobility and (ii) her habits of generating calls and text messages, which may be indirectly related to her temporal cell boundaries. For each user, these factors are computed from our CDR dataset (cf. Sec. 3.1) by aggregating data over the whole 3-month period of study.

7.1.3. Location-related factors

Factors in this class relate to positioning information. The first factor is the cell radius (CR), which we already proved to affect the reliability of CDR completion schemes in Sec. 5. The other location-related factors account for the relevance that different places have for each user's activities. The intuition is that individuals spend long time periods at their important places. Specifically, we explore it by applying the algorithm presented by Isaacman et al. (Isaacman et al., "Identifying important places in people's lives from cellular network data"), which determines prominent locations where the user usually spends a large amount of time or that she visits frequently.

Figure 12: Relative importance of features in determining accurate temporal cell boundaries.
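The long-term factor URg (radius of gyration) follows the standard definition, the root mean squared distance of a user's visited locations to their center of mass. A minimal sketch, assuming planar coordinates (real CDR would use geographic distances between cell towers):

```python
import math

def radius_of_gyration(points):
    """Radius of gyration of a user's visited locations:
    sqrt(mean squared distance to the center of mass)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    msd = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n
    return math.sqrt(msd)

# a static user (a single repeated location) has R_g = 0
rg_static = radius_of_gyration([(3.0, 4.0), (3.0, 4.0)])
rg_mobile = radius_of_gyration([(0.0, 0.0), (2.0, 0.0)])
```

This makes explicit why static users, discussed in Sec. 6, have R_g = 0: all their points coincide with the center of mass.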
Figure 13: CDF of the spatial errors of temporal cell boundaries computed on (a) Sunday and (b) Monday; CDF of the completion of completed data on (c) Sunday and (d) Monday, across the stop-by, static, sym-adaptive, and asym-adaptive approaches.

Table 1: Overview of the Internet flow dataset.

Day of the week | Users  | Rare CDR users | Frequent CDR users
Sunday          | 10,856 | 6,154          | 4,702
Monday          | 14,353 | 7,215          | 7,138

Footnotes:
- Due to a non-disclosure agreement with the data owner, we cannot reveal the geographical area or the exact collecting period of this dataset.
- Available at https://macaco.inria.fr/MACACOApp/.
- We set the duration of text messages to 0 seconds.
- Used to deal with the unbalanced occurrence of the types.
- Daytime interval covered by the used dataset (cf. Sec. 3.2).

Acknowledgment

This work is supported by the EU FP7 ERANET program under grant CHIST-ERA-2012 MACACO and is performed in the context of the EMBRACE Associated Team of Inria.

Learning technique. In order to compute the approximations, we utilize a typical supervised machine learning technique, i.e., Gradient Boosted Regression Trees (GBRT) (Friedman, "Greedy function approximation: a gradient boosting machine"; Friedman et al., "The elements of statistical learning").
Although several supervised learning techniques could be adopted, we pick the GBRT technique because (i) it is a well-understood approach with thoroughly-tested implementations, (ii) it has advantages over alternative techniques in terms of predictive power, training speed, and flexibility to accommodate heterogeneous input, which is our case ("Ensemble methods"), and (iii) it returns quantitative measures of the contribution of each factor to the overall approximation (Friedman, "Greedy function approximation: a gradient boosting machine").

In the GBRT technique, an approximation function is the weighted sum of an ensemble of regression trees. Each tree divides the input space (i.e., the vector x of factors) into disjoint regions and predicts a constant value in each region. The GBRT technique combines the predictive power of many regression trees, each with weak predicting performance, into a joint predictor: it is proved that the performance of such a joint predictor is better than that of each single regression tree (Friedman et al., "The elements of statistical learning"). The ensemble is initialized with a single-leaf tree (i.e., a constant value). During each iteration, a new regression tree is added to the ensemble by minimizing the loss function via gradient descent. An algorithm of the GBRT technique for building the approximation of the function F_sym in the sym-adaptive approach is given in Alg. 1. In the algorithm, the function FitRegrTree is used to build a regression tree based on the input and the gradients of the function in the last iteration; for the details, we refer the reader to [31, Chapter 9.2.2].
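The boosting loop of Alg. 1 can be sketched as follows, with a one-dimensional input and depth-1 regression trees (stumps) standing in for FitRegrTree; with the squared-error loss, the negative gradient fitted at each iteration is simply the current residual. This is an illustrative sketch, not the paper's implementation; M and ν keep their meaning as the number of trees and the learning rate.

```python
def fit_stump(xs, residuals):
    """FitRegrTree stand-in: best single-split regression tree on 1-D input."""
    best = (float("inf"), None)
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if sse < best[0]:
            best = (sse, (thr, lm, rm))
    thr, lm, rm = best[1]
    return lambda x: lm if x <= thr else rm

def gbrt_predict(model, x):
    f0, trees, nu = model
    return f0 + nu * sum(t(x) for t in trees)

def gbrt_fit(xs, ys, M=50, nu=0.1):
    """Alg. 1 sketch: start from a constant model, then add M shrunken
    trees, each fitted to the current residuals (the negative gradient
    of the squared-error loss)."""
    f0 = sum(ys) / len(ys)
    trees = []
    for _ in range(M):
        preds = [gbrt_predict((f0, trees, nu), x) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        trees.append(fit_stump(xs, residuals))
    return (f0, trees, nu)

model = gbrt_fit([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 10.0, 10.0])
```

Each added tree contributes only a ν-scaled correction, which is why M and ν trade off against each other and are tuned jointly by cross-validation.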
The algorithm has two important tuning parameters, i.e., the number of iterations M (the number of regression trees to be added to the ensemble) and the learning rate ν (the level of contribution expected from a new regression tree), which we determine via cross-validation and discuss later. In the asym-adaptive approach, the same algorithm is used, except that the training set X± is replaced by X.

Experiments. The first step is to build the training sets. For that, we randomly select 50% of the users from the two available days (i.e., a Monday and a Sunday) in the Internet flow dataset (cf. Sec. 3.2).
[HAL record 01756934, 2018: https://hal.science/hal-01756934/file/MMSJ.2018.pdf]
Performance Evaluation of TcpHas: TCP for HTTP Adaptive Streaming

Chiheb Ben Ameur, Emmanuel Mory, Bernard Cousin, Eugen Dedu

Keywords: HTTP Adaptive Streaming, TCP Congestion Control, Cross-layer Optimization, Traffic Shaping, Quality of Experience, Quality of Service

Abstract: HTTP Adaptive Streaming (HAS) is a widely used video streaming technology that suffers from a degradation of the user's Quality of Experience (QoE) and the network's Quality of Service (QoS) when many HAS players share the same bottleneck link and compete for bandwidth. The two major factors of this degradation are: the large OFF period of HAS, which causes false bandwidth estimations, and the TCP congestion control, which is not suitable for HAS given that it does not consider the different video encoding bitrates of HAS. This paper proposes a HAS-based TCP congestion control, TcpHas, that minimizes the impact of the two aforementioned issues. It does this by using traffic shaping on the server. Simulations indicate that TcpHas improves both QoE, mainly by reducing instability and improving convergence speed, and QoS, mainly by reducing queuing delay and packet drop rate.

Introduction

Video streaming is a widely used service. According to the 2016 Sandvine report ("Global internet phenomena report"), in North America, video and audio streaming in fixed access networks accounts for over 70% of the downstream bandwidth in evening hours. Given this high usage, it is of extreme importance to optimize its use. This is usually done by adapting the video to the available bandwidth.
Numerous adaptation methods have been proposed in the literature and by major companies; their differences mainly rely on the entity that does the adaptation (client or server), the variable used for adaptation (the network, or the sender or client buffers), and the protocols used, the major companies having finally opted for HTTP (Dedu et al., "A taxonomy of the parameters used by decision methods for adaptive video transmission"). HTTP Adaptive Streaming (HAS) is a streaming technology where video contents are encoded and stored at different qualities at the server, and where players (clients) can periodically choose the quality according to the available resources. Popular HAS-based methods are Microsoft Smooth Streaming, Apple HTTP Live Streaming, and MPEG DASH (Dynamic Adaptive Streaming over HTTP). Still, this technology is not optimal for video streaming, mainly because its HTTP data is transported using the TCP protocol. Indeed, video data is encoded at distinct bitrates, and TCP does not increase the throughput sufficiently quickly when the bitrate changes. TCP variants specific to high bandwidth-delay product networks (such as Cubic, Illinois, and Westwood+) achieve high bandwidth more quickly and seem to give better performance for the HAS service than classical TCP variants such as NewReno and Vegas (Ben Ameur et al., "Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming"), but the improvement is limited. Another reason for this suboptimality is the highly periodic ON-OFF activity pattern specific to HAS (Akhshabi et al., "What happens when HTTP adaptive streaming players compete for bandwidth?"). Currently, a HAS player estimates the available bandwidth by computing the download bitrate of each chunk at the end of its download (for the majority of players, this estimation is done by dividing the chunk size by its download duration).
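The per-chunk estimation just described, followed by a typical bitrate selection, can be sketched as follows; the quality ladder and the selection rule are illustrative, not a specific player's algorithm.

```python
def estimate_bandwidth(chunk_size_bits, download_duration_s):
    """Per-chunk bandwidth estimate used by most players:
    chunk size divided by its download duration (bits per second)."""
    return chunk_size_bits / download_duration_s

def select_quality(estimated_bps, encoding_bitrates):
    """Pick the highest encoding bitrate not exceeding the estimate
    (a common, simplified bitrate-controller rule)."""
    feasible = [b for b in sorted(encoding_bitrates) if b <= estimated_bps]
    return feasible[-1] if feasible else min(encoding_bitrates)

# a 1.2 Mb chunk downloaded in 0.4 s -> 3 Mb/s estimate
bwe = estimate_bandwidth(1_200_000, 0.4)
ladder = [500_000, 1_000_000, 2_500_000, 5_000_000]
q = select_quality(bwe, ladder)
```

This computation only runs while data is actually flowing, which is why the player is blind during OFF periods.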
As such, it is impossible for a player to estimate the available bandwidth when no data is being received, i.e. during OFF periods. Moreover, when several HAS streams compete in the same home network, bandwidth estimation becomes more difficult. For example, if the ON period of a player coincides with the OFF period of a second player, the first player will overestimate its available bandwidth, which makes it select for the next chunk a quality level higher than what is really sustainable. This, in turn, could lead to a congestion event if the sum of the downloading bitrates of the two players exceeds the available bandwidth of the bottleneck. An example is given in (Ben Ameur et al., "Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming", table 4): the congestion rate for two competing HAS clients is considerably reduced when using traffic shaping. Finally, unstable quality levels are harmful to the user's Quality of Experience (QoE) (Seufert et al., "A survey on quality of experience of HTTP adaptive streaming"). Traffic shaping, which expands the ON periods and shrinks the OFF periods, can considerably limit the drawbacks mentioned above (Abdallah et al., "Cross layer optimization architecture for video streaming in WiMAX networks"; Houdaille et al., "Shaping HTTP adaptive streams for a better user experience"; Ben Ameur et al., "Shaping HTTP adaptive streams using receive window tuning method in home gateway"; Villa et al., "Group based traffic shaping for adaptive HTTP video streaming by segment duration control"; Akhshabi et al., "Server-based traffic shaping for stabilizing oscillating adaptive streaming players"; Ben Ameur et al., "Evaluation of gateway-based shaping methods for HTTP adaptive streaming").
One method to reduce occurrences of ON-OFF patterns is to use server-based shaping at the application layer (Akhshabi et al., "Server-based traffic shaping for stabilizing oscillating adaptive streaming players"). This approach is cross-layer because it interacts with the TCP layer and its parameters, such as the congestion window, cwnd, and the round-trip time estimation, RTT. Hence, implementing HAS traffic shaping at the TCP level is naturally more practical and easier to manage; in addition, this should offer a better bandwidth share among HAS streams, reduce congestion events and improve the QoE of HAS users. Despite the advantages of using a transport layer-based method for HAS, and in contrast with other types of streaming, where methods at the transport layer have already been proposed (RTP, Real-time Transport Protocol, and TFRC, TCP Friendly Rate Control (Floyd et al., "TCP Friendly Rate Control (TFRC): Protocol specification")), to the best of our knowledge there is no proposition at the transport level specifically designed for HAS. For the commercial video providers YouTube, Dailymotion, Vimeo and Netflix, according to (Hoque et al., "Mobile multimedia streaming techniques: QoE and energy saving perspective"), "The quality switching algorithms are implemented in the client players. A player estimates the bandwidth continuously and transitions to a lower or to a higher quality stream if the bandwidth permits." The streaming depends on many parameters, such as the player, video quality, device, video service provider, etc., and uses various techniques such as several TCP connections, variable chunk sizes, different processing for audio and video flows, different throttling factors, etc. To conclude, all these providers use numerous techniques, all of them client-based.
Therefore, in this paper, we extend our previous work (Ben Ameur et al., "TcpHas: TCP for HTTP adaptive streaming") by proposing a HAS-oriented TCP congestion control variant, TcpHas, that aims to minimize the aforementioned issues (insufficient TCP throughput increase and the ON-OFF pattern) and to unify all the techniques given in the previous paragraph. It uses four sub-modules: a bandwidth estimator, an optimal quality level estimator, ssthresh adjustment, and cwnd adjustment to the shaping rate. Simulation results show that TcpHas considerably improves both QoS (queuing delay, packet drop rate) and QoE (stability, convergence speed), performs well with several concurrent clients, and does not cause stalling events.

The remainder of this paper is organized as follows: Section 2 presents server-based shaping methods and describes possible optimizations at the TCP level. Then, Section 3 describes the TcpHas congestion control and Section 4 evaluates it. Section 5 concludes the article.

Background and related works

Our article aims to increase QoE and QoS by fixing the ON-OFF pattern. Many server-based shaping methods have been proposed in the literature to improve the QoE and QoS of HAS. Their functioning is usually separated into two modules:

1. Estimation of the optimal quality level, based on network conditions (such as bandwidth, delay, and/or the history of selected quality levels) and the available encoding bitrates of the video.
2. The shaping function of the sending rate, which should be suitable to the encoding bitrate of the estimated optimal quality level.

The next two subsections describe constraints and proposed solutions for each module. The last subsection presents some possible ways of optimization, which provides the basis for the TcpHas design.

Optimal Quality Level Estimation

A major constraint of optimal quality level estimation is that the server has no visibility on the set of flows that share the bottleneck link. Ramadan et al.
("Avoiding quality oscillations during adaptive streaming of video") propose an algorithm to reduce the oscillations of quality during video adaptation. During streaming, it marks each quality as unsuccessful or successful, depending on whether it has led to lost packets or not. A successfulness value is thus attached to each quality and is updated regularly using an EWMA (Exponentially Weighted Moving Average) algorithm. The next quality increase is allowed if and only if its successfulness value does not exceed some threshold. We note that, to discover the available bandwidth, this method increases the throughput until packets are dropped, which is different from our proposed method, where the available bandwidth is computed using an algorithm.

Akhshabi et al. ("Server-based traffic shaping for stabilizing oscillating adaptive streaming players") propose a server-based shaping method that aims to stabilize the quality level sent by the server by detecting oscillation events. The shaping function is activated only when oscillations are detected. The optimal quality level is based on the history of quality level oscillations. Then, the server shapes its sending rate based on the encoding bitrate of the estimated optimal quality level. However, when the end-to-end available bandwidth increases, the HAS player cannot increase its quality level while the shaper is activated. This is because the sending bitrate is limited on the server side: when the end-to-end available bandwidth increases, the player is still stable on the same quality level that matches the shaping rate. To cope with that, the method deactivates the shaping function for some chunks and uses two TCP parameters, RTT and cwnd, to compute the connection throughput, cwnd/RTT, which corresponds to the end-to-end available bandwidth.
If the estimated bandwidth is higher than the shaping rate, the optimal quality level is increased to the next higher quality level and the shaping rate is increased to follow its encoding bitrate. We note that this method is implemented in the application layer. It takes as inputs the encoding bitrates of delivered chunks and two TCP parameters (RTT and cwnd). The authors indicate that their method stabilizes the competing players inside the same home network without significant bandwidth utilization loss.

Accordingly, the optimal quality estimation process is based on two different techniques: quality level oscillation detection and bandwidth estimation using the throughput measurement. The former is based on application layer information (i.e., the encoding bitrates of the actual and previously sent chunks) and is sufficient to activate the shaping function (i.e., the shaper). However, to verify whether the optimal quality level has increased or not, the server is obliged to deactivate the shaper to let the TCP congestion control algorithm occupy the remaining capacity available for the HAS stream. Although this proposed method offers performance improvements on both QoE and QoS, the concept of activating and deactivating the shaper is not sufficiently solid, especially against unstable network conditions, and raises a number of open questions about the duration of deactivation of the traffic shaping and its impact on increasing the OFF period duration. In addition, this method is not proactive: the shaper is activated only in the case of quality level oscillation. What is missing in this proposed method is a good estimation of the available bandwidth for the HAS flow. This method relies on the throughput measurement during non-shaped phases.
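The probing step just described can be sketched as follows: during a non-shaped phase, the server estimates the end-to-end available bandwidth as cwnd/RTT and moves up one quality level when the estimate exceeds the current shaping rate. The quality ladder and the return convention are illustrative assumptions.

```python
def maybe_raise_quality(cwnd_bytes, rtt_s, shaping_rate_bps, ladder, level):
    """Server-side probing check in the spirit of the method above:
    estimate the connection throughput as cwnd/RTT (in bits per second)
    and step to the next higher quality level if it exceeds the current
    shaping rate. `ladder` lists the encoding bitrates, lowest first."""
    throughput_bps = 8.0 * cwnd_bytes / rtt_s
    if throughput_bps > shaping_rate_bps and level + 1 < len(ladder):
        level += 1
    return level, ladder[level]   # new level and its shaping rate

ladder = [500_000, 1_000_000, 2_500_000]       # encoding bitrates, bps
lvl, rate = maybe_raise_quality(30 * 1460, 0.1, 1_000_000, ladder, 1)
```

Note how the estimate depends entirely on cwnd, which is precisely the weakness discussed next.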
If a bandwidth estimation less dependent on cwnd could be given, we could keep the shaper activated during the whole HAS stream and adapt the estimation of the optimal quality level to the estimation of the available bandwidth.

Traffic Shaping Methods

Ghobadi et al. propose a shaping method on the server side called Trickle ("Trickle: Rate limiting YouTube video streaming"). It was proposed for YouTube in 2011, when it adopted progressive download technology. Its key idea is to place a dynamic upper bound on cwnd such that TCP itself limits the overall data rate. The server application periodically computes the cwnd bound from the product of the round-trip time (RTT) and the target streaming bitrate. Then it uses a socket option to apply it to the TCP socket. Their results show that Trickle reduces the average RTT by up to 28% and the average TCP loss rate by up to 43%. However, HAS differs from progressive download by the change of encoding bitrate during streaming. Nevertheless, Trickle can also be used with HAS by adapting the cwnd bound to the encoding bitrate of each chunk.

We note that the selection of the shaping rate by server-based shaping methods does not mean that the player will automatically start requesting the next higher quality level (Akhshabi et al., "Server-based traffic shaping for stabilizing oscillating adaptive streaming players"). The transition to another shaping rate may take place several chunks later, depending on the player's bitrate controller and the server-based shaping method's efficiency. Furthermore, it was reported (Ben Ameur et al., "Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming") that ssthresh has a predominant effect on the convergence speed of the HAS client toward the desired optimal quality level.
Indeed, when ssthresh is set higher than the product of the shaping rate and RTT, the server becomes aggressive, causing congestion and a reduction of the quality level selected on the player side. In contrast, when ssthresh is set lower than this product, cwnd takes several RTTs to reach the value of this product, because in the congestion avoidance phase the increase of cwnd is relatively slow (one MSS, Maximum Segment Size, each RTT). Consequently, the server becomes conservative and needs a long time to occupy its selected shaping rate. Hence, the player would have difficulties reaching its optimal quality level. Accordingly, shaping the sending rate by limiting cwnd, as described in Trickle, has a good effect on improving the QoE of HAS. However, it is still insufficient to increase the reactivity of the HAS player and, consequently, to accelerate the convergence speed. Hence, to improve the performance of the shaper, ssthresh needs to be modified too. The value of ssthresh should be set to the right value that allows the server to quickly reach the desired shaping rate.

Optimization of Current Solutions

What can be noted from the different methods proposed for estimating the optimal quality level is that an efficient end-to-end estimator of the available bandwidth can improve their performance, as shown in Subsection 2.1. In addition, the only parameter from the application layer needed for shaping the HAS traffic is the encoding bitrate of each available quality level of the corresponding HAS stream. As explained in Subsection 2.2, the remaining parameters are found in the TCP layer: the congestion window cwnd, the slow-start threshold ssthresh, and the round-trip time RTT. We are particularly interested in adjusting ssthresh to accelerate the convergence speed. This is summed up in Figure 1. Naturally, what is missing here is an efficient TCP-based method for end-to-end bandwidth estimation.
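The two TCP knobs discussed in this subsection can be set together as sketched below: the Trickle-style cwnd upper bound and an ssthresh equal to the shaping-rate x RTT product in segments, so the sender neither overshoots into congestion nor crawls through congestion avoidance. The ceiling division and the 2-segment floor are our assumptions, not prescribed by the papers.

```python
def shaper_params(shaping_rate_bps, rtt_s, mss_bytes=1460):
    """Compute, in MSS-sized segments, (i) the upper bound on cwnd so
    that TCP itself paces the flow at the shaping rate and (ii) an
    ssthresh matching the same bandwidth-delay product."""
    bytes_per_rtt = shaping_rate_bps * rtt_s / 8.0
    segments = max(2, -int(-bytes_per_rtt // mss_bytes))  # ceiling division
    cwnd_bound = segments
    ssthresh = segments
    return cwnd_bound, ssthresh

# 2.5 Mb/s shaping rate over a 100 ms round-trip
cwnd_bound, ssthresh = shaper_params(2_500_000, 0.1)
```

Setting both values to the same product is the middle ground argued for above: higher ssthresh makes the server aggressive, lower ssthresh makes it slow to reach the shaping rate.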
We also need a mechanism that adjusts ssthresh based on the output of the bandwidth estimator scheme. Both this mechanism and the estimation schemes used by various TCP variants are introduced in the following. Adaptive Decrease Mechanism In the literature, we found a specific category of TCP variants that set ssthresh using bandwidth estimation. Even if the estimation is updated over time, TCP uses it only when a congestion event is detected. The usefulness of this mechanism, known as the adaptive decrease mechanism, is described in [START_REF] Mascolo | Testing TCP Westwood+ over transatlantic links at 10 gigabit/second rate[END_REF] as follows: "the adaptive window setting provides a congestion window that is decreased more in the presence of heavy congestion and less in the presence of light congestion or losses that are not due to congestion, such as in the case of losses due to unreliable links". This low frequency of ssthresh updating (only after congestion detection) is justified in [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF] by the fact that, in contrast, a frequent updating of ssthresh tends to force TCP into the congestion avoidance phase, preventing it from following the variations in the available bandwidth. Hence, the only difference between this category and the classical TCP congestion control variants is the adaptive decrease mechanism applied when detecting a congestion, i.e., when receiving three duplicated ACKs or when the retransmission timeout expires. This mechanism is described in Algorithm 1. Algorithm 1 TCP adaptive decrease mechanism. We remark that the algorithm uses the estimated bandwidth, Bwe, multiplied by RTT_min to update the ssthresh value. The use of RTT_min instead of the actual RTT is justified by the fact that RTT_min can be considered as an estimation of the RTT of the connection when the network is not congested.
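Since the body of Algorithm 1 is not reproduced above, the mechanism can be sketched as follows in Python; the exact cwnd reaction per congestion type and the 2 × MSS floor on ssthresh are assumptions based on standard TCP practice, not taken verbatim from the cited algorithm:

```python
MSS = 1460  # assumed maximum segment size, in bytes

def adaptive_decrease(bwe_bps, rtt_min_s, event):
    """Sketch of the adaptive decrease mechanism: on congestion,
    ssthresh is set to Bwe x RTT_min.

    bwe_bps:   low-pass-filtered bandwidth estimate, in bits/s
    rtt_min_s: minimum observed RTT, in seconds (an estimate of the
               RTT of the connection when the network is not congested)
    event:     'dupack' (three duplicated ACKs) or 'timeout'
    Returns (ssthresh, cwnd), both in bytes.
    """
    ssthresh = max(int(bwe_bps / 8 * rtt_min_s), 2 * MSS)
    if event == 'dupack':
        cwnd = ssthresh      # light congestion: resume near the estimate
    else:                    # retransmission timeout: heavy congestion
        cwnd = MSS
    return ssthresh, cwnd
```

With an estimate of 8 Mbps and RTT_min = 100 ms, ssthresh is set to about 100 kB, while cwnd collapses to one MSS only on a timeout, illustrating the "decreased more in heavy congestion, less in light congestion" behavior quoted above.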
Bandwidth Estimation Schemes The most common TCP variant that uses bandwidth estimation to set ssthresh is Westwood. Other newer variants have been proposed, such as Westwood+ and TIBET (Time Intervals-based Bandwidth Estimation Technique). The only difference between them is the bandwidth estimation scheme used. In the following, we introduce the different schemes and describe their performance. Westwood estimation scheme [START_REF] Mascolo | TCP Westwood: Bandwidth estimation for enhanced transport over wireless links[END_REF]: The key idea of Westwood is that the source performs an end-to-end estimation of the bandwidth available along a TCP connection by measuring the rate of returning acknowledgments [START_REF] Mascolo | TCP Westwood: Bandwidth estimation for enhanced transport over wireless links[END_REF]. It consists of estimating this bandwidth by properly filtering the flow of returning ACKs. A sample of available bandwidth Bwe_k is computed each time t_k the sender receives an ACK:

Bwe_k = d_k / (t_k − t_{k−1}) (1)

where d_k is the amount of data acknowledged by the ACK that is received at time t_k. d_k is determined by an accurate counting procedure by taking into consideration delayed ACKs, duplicate ACKs and selective ACKs. Then, the bandwidth samples Bwe_k are low-pass filtered by using a discrete-time low-pass filter to obtain the smoothed bandwidth estimation Bwe. The low-pass filter employed is generally the exponentially-weighted moving average function:

Bwe = γ × Bwe + (1 − γ) × Bwe_k (2)

where 0 ≤ γ ≤ 1. Low-pass filtering is necessary because congestion is due to low-frequency components of the available bandwidth, and because of the delayed ACK option [START_REF] Li | Link capacity allocation and network control by filtered input rate in high-speed networks[END_REF][START_REF] Mascolo | Additive increase early adaptive decrease mechanism for TCP congestion control[END_REF].
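Equations (1) and (2) can be sketched as follows; the value of γ and the class interface are illustrative only:

```python
class WestwoodEstimator:
    """Sketch of per-ACK bandwidth sampling (Eq. 1) followed by
    exponentially-weighted moving average filtering (Eq. 2)."""

    def __init__(self, gamma=0.9):      # gamma chosen for illustration
        self.gamma = gamma
        self.bwe = 0.0                  # smoothed estimate Bwe, bits/s
        self.last_t = None              # time of the previous ACK

    def on_ack(self, t, acked_bytes):
        """Update the estimate on ACK reception at time t (seconds)."""
        if self.last_t is not None and t > self.last_t:
            sample = acked_bytes * 8 / (t - self.last_t)          # Eq. (1)
            self.bwe = self.gamma * self.bwe \
                       + (1 - self.gamma) * sample                # Eq. (2)
        self.last_t = t
        return self.bwe
```

For example, one 1460-byte segment acknowledged every 10 ms corresponds to 1.168 Mbps, and the filter converges toward that value; a burst of closely spaced ACKs, in contrast, produces inflated samples, which is the ACK compression problem discussed next.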
However, this estimation scheme is affected by the ACK compression phenomenon. This phenomenon occurs when the time spacing between the received ACKs is altered by the congestion of the routers on the return path [START_REF] Zhang | Observations on the dynamics of a congestion control algorithm: The effects of two-way traffic[END_REF]. In fact, when ACKs pass through one congested router, which generates additional queuing delay, they lose their original time spacing because during forwarding they are spaced by the short ACK transmission time [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF]. The result is ACK compression that can lead to bandwidth overestimation when computing the bandwidth sample Bwe_k. Moreover, the low-pass filtering process is also affected by ACK compression because it cannot filter bandwidth samples that contain a high-frequency component [START_REF] Mascolo | Additive increase early adaptive decrease mechanism for TCP congestion control[END_REF]. Accordingly, ACK compression causes a systematic bandwidth overestimation when using the Westwood bandwidth estimation scheme. ACK compression is commonly observed in real network operation [START_REF] Mogul | Observing TCP dynamics in real networks[END_REF] and thus should not be neglected in the estimation scheme. Another phenomenon that distorts the Westwood estimation scheme is clustering: As already noted [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF][START_REF] Zhang | Observations on the dynamics of a congestion control algorithm: The effects of two-way traffic[END_REF], the packets belonging to different TCP connections that share the same link do not intermingle. As a consequence, many consecutive packets of the same connection can be observed on a single channel. This means that each connection uses the full bandwidth of the link for the time needed to transmit its cluster of packets.
Hence, a problem of fairness between TCP connections is experienced when the estimation scheme does not take the clustering phenomenon into consideration and continues to estimate the bandwidth of the whole shared bottleneck link instead of their available bandwidth. Westwood+ estimation scheme [START_REF] Mascolo | Performance evaluation of Westwood+ TCP congestion control[END_REF]: To estimate correctly the bandwidth and alleviate the effect of ACK compression and clustering, a TCP source should observe its own link utilization for a time longer than the time needed for entire cluster transmission. For this purpose, Westwood+ modifies the bandwidth estimation (Bwe) mechanism to perform the sampling every RTT instead of at every ACK reception, as follows:

Bwe = d_RTT / RTT (3)

where d_RTT is the amount of data acknowledged during one RTT. As indicated in [START_REF] Mascolo | Performance evaluation of Westwood+ TCP congestion control[END_REF], the result is a more accurate bandwidth measurement that ensures better performance when compared with NewReno, and it is still fair when sharing the network with other TCP connections. Bwe is updated once per RTT. The bandwidth estimation samples are low-pass filtered to give a better smoothed estimation of Bwe. However, the amount of acknowledged data during one RTT (d_RTT) is bounded by the sender's window size, min(cwnd, rwnd), which is defined by the congestion control algorithm. In fact, min(cwnd, rwnd) defines the maximum amount of data that can be transmitted during one RTT. Consequently, the bandwidth estimation of Westwood+, given by each sample Bwe, is always lower than the sender's sending rate (Bwe ≤ min(cwnd, rwnd) / RTT). Hence, although the Westwood+ estimation scheme reduces the side effects of ACK compression and clustering, it is still dependent on the sender's sending rate rather than on the available bandwidth of the corresponding TCP connection.
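Equation (3) can be sketched as follows (method names are illustrative; in Westwood+ the resulting samples would then be low-pass filtered as in Equation 2):

```python
class WestwoodPlusEstimator:
    """Sketch of Westwood+ sampling (Eq. 3): one bandwidth sample per
    RTT, computed from the data acknowledged during that RTT."""

    def __init__(self):
        self.acked_in_rtt = 0           # bytes ACKed in the current RTT

    def on_ack(self, acked_bytes):
        """Accumulate acknowledged data within the current RTT."""
        self.acked_in_rtt += acked_bytes

    def end_of_rtt(self, rtt_s):
        """Produce the per-RTT sample d_RTT / RTT, in bits/s."""
        sample = self.acked_in_rtt * 8 / rtt_s    # Eq. (3)
        self.acked_in_rtt = 0
        return sample
```

The aggregation over a full RTT is precisely what averages out compressed ACKs and packet clusters, but, as noted above, the sample can never exceed min(cwnd, rwnd) / RTT.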
TIBET estimation scheme [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF][START_REF] Capone | Bandwidth estimates in the TCP congestion control scheme[END_REF]: TIBET (Time Interval-based Bandwidth Estimation Technique) is another technique that gives a good estimation of bandwidth even in the presence of packet clustering and ACK compression. The basic idea of TIBET is to perform a run-time sender-side estimate of the average packet length and the average inter-arrival time separately. The bandwidth estimation scheme is applied to the stream of the received ACKs and is described in Algorithm 2 [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF], where acked is the number of segments acknowledged by the last ACK, packet_size is the average segment size in bytes, now is the current time and last_ack_time is the time of the previous ACK reception. Average_packet_length and Average_interval are the low-pass filtered measures of the packet length and the interval between sending times. Algorithm 2 Bandwidth estimation scheme. alpha (0 ≤ alpha ≤ 1) is the pole of the two low-pass filters. The value of alpha is critical to TIBET performance: If alpha is set to a low value, TIBET is highly responsive to changes in the available bandwidth, but the oscillations of Bwe are quite large. In contrast, if alpha approaches 1, TIBET produces more stable estimates, but is less responsive to network changes. Here, we note that if alpha is set to zero we have the Westwood bandwidth estimation scheme, where the sample Bwe varies between 0 and the bottleneck bandwidth. The TIBET estimation scheme uses a second low-pass filtering, with parameter γ, on the bandwidth samples Bwe to give a better smoothed estimation, as described in Equation 2. γ is a variable parameter, equal to e^(−T_k), where T_k = t_k − t_{k−1} is the time interval between the two last received ACKs.
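Since the listing of Algorithm 2 is not reproduced above, the following sketch follows the textual description; the variable names mirror those in the text, the class interface is illustrative, and the second filtering stage of Equation 2 is omitted for brevity:

```python
class TibetEstimator:
    """Sketch of TIBET: separate low-pass filtering of the average
    packet length and the average ACK inter-arrival time; their ratio
    gives the bandwidth sample."""

    def __init__(self, alpha=0.8):
        self.alpha = alpha              # pole of the two low-pass filters
        self.average_packet_length = 0.0   # bits
        self.average_interval = 0.0        # seconds
        self.last_ack_time = None

    def on_ack(self, now, acked, packet_size):
        """acked: segments acknowledged by this ACK;
        packet_size: average segment size in bytes."""
        if self.last_ack_time is None:
            self.last_ack_time = now
            return 0.0
        sample_length = acked * packet_size * 8
        sample_interval = now - self.last_ack_time
        self.last_ack_time = now
        a = self.alpha
        self.average_packet_length = (a * self.average_packet_length
                                      + (1 - a) * sample_length)
        self.average_interval = (a * self.average_interval
                                 + (1 - a) * sample_interval)
        return self.average_packet_length / self.average_interval  # bits/s
```

Because the numerator and denominator are filtered with the same pole, a regular ACK stream yields the correct rate from the second sample onward, while isolated compressed ACKs only perturb the estimate by a factor (1 − alpha).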
This means that bandwidth estimation samples Bwe with high T_k values are given more importance than those with low T_k values. Simulations [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF] indicate that TIBET gives bandwidth estimations very close to the correct values, even in the presence of other UDP flows with variable rates or other TCP flows. TcpHas Description As shown in the previous section, a protocol specific to HAS needs to modify several TCP parameters and consists of several algorithms. Our HAS-based TCP congestion control, TcpHas, is based on the two modules of a server-based shaping solution: optimal quality level estimation and the sending traffic shaping itself, each with two submodules. The first module uses a bandwidth estimator submodule inspired by the TIBET scheme and adapted to the HAS context, and an optimal quality level estimator submodule that defines the quality level, QLevel, based on the estimated bandwidth. The second module uses QLevel in two submodules that update respectively the values of ssthresh and cwnd over time. This section progressively presents TcpHas by describing the four submodules, i.e., the bandwidth estimator, the optimal quality level estimator, the ssthresh updating process, and the cwnd value adaptation to the shaping rate. Bandwidth Estimator of TcpHas As described in Section 2, TIBET performs better than the other proposed schemes. It reduces the effect of ACK compression and packet clustering and is less dependent on the congestion window than Westwood+. The parameter γ used by TIBET to smooth Bwe estimations (see Equation 2) is variable and equal to e^(−T_k). However, this variability is not suited to HAS. Indeed, when the HAS stream has a large OFF period, the HTTP GET request packet sent from client to server to ask for a new chunk is considered by the server as a new ACK. As a consequence, the new bandwidth estimation sample, Bwe, will have an underestimated value and γ will be reduced.
Hence, this filter gives higher importance to an underestimated value to the detriment of the previous better estimations. For example, if the OFF period is equal to 1 second, γ will be equal to 0.36, which means that a factor of 0.64 is given to the new underestimated value in the filtering process. Consequently, the smoothed bandwidth estimation, Bwe, will be reduced after each high OFF period. However, the objective is rather to maintain a good estimation of available bandwidth, even in the presence of large OFF periods. For this purpose, we propose to make the parameter γ constant. Hence, the bandwidth estimator of TcpHas is the same as the TIBET bandwidth estimation scheme, except for the low-pass filtering process: we use a constant value of γ instead of e^(−T_k) as defined by TIBET. Optimal Quality Level Estimator of TcpHas TcpHas' optimal quality level estimator is based on the estimated bandwidth, Bwe, described in Subsection 3.1. This estimator is a function that adapts HAS features to TCP congestion control by mapping the Bwe value to the encoding bitrate of the estimated optimal quality level, QLevel. One piece of information from the application layer is needed: the available video encoding bitrates, which are specified in the index file of the HAS stream. In TcpHas they are specified, in ascending order, in the EncodingRate vector. TcpHas' estimator is defined by the function QLevelEstimator, described in Algorithm 3, which selects the highest quality level whose encoding bitrate is equal to or lower than the estimated bandwidth, Bwe.

Algorithm 3 QLevelEstimator function.
1: for i = length(EncodingRate) − 1 downto 0 do
2:   if EncodingRate[i] ≤ Bwe then
3:     QLevel = i
4:     return
5:   end if
6: end for
7: QLevel = 0

The QLevel parameter is updated only by this function.
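A runnable version of the QLevelEstimator function follows; the five-level ladder is a hypothetical example (only the 4256 kbps top level is quoted later in the article):

```python
def qlevel_estimator(encoding_rate, bwe):
    """Algorithm 3: return the highest quality level whose encoding
    bitrate is equal to or lower than the estimated bandwidth Bwe."""
    for i in range(len(encoding_rate) - 1, -1, -1):
        if encoding_rate[i] <= bwe:
            return i
    return 0  # Bwe below the lowest bitrate: fall back to level 0

# Hypothetical 5-level ladder, in kbps, listed in ascending order as
# required for the EncodingRate vector:
ENCODING_RATE = [248, 456, 928, 1632, 4256]
```

For instance, with an estimated bandwidth of 1000 kbps this ladder yields level 2, consistent with the optimal level n°2 observed in Section 4 when 8 clients share an 8 Mbps bottleneck.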
However, the time and frequency of its updating are a delicate issue:
- We need to use the adaptive decrease mechanism (see Algorithm 1), because when a congestion occurs QLevel needs to be updated to the new network conditions. Hence, this function is called after each congestion detection.
- Given that TcpHas performs a shaping rate that reduces OFF occupancy, when TcpHas detects an OFF period, it may mean that some network conditions have changed (e.g. an incorrect increase of the shaping rate). Accordingly, to better estimate the optimal quality level, this function is called after each OFF period.
The EncodingRate vector is also used by TcpHas during application initialization to differentiate between a HAS application and a normal one: when the application returns an empty vector, it is a normal application, and TcpHas just makes this application be processed by classical TCP, without being involved at all. Ssthresh Modification of TcpHas The TCP variants that use the adaptive decrease mechanism use RTT_min multiplied by the estimated bandwidth, Bwe, to update ssthresh. However, given that the value of ssthresh affects the convergence speed, it should correspond to the desired shaping rate instead of Bwe. Also, the shaping rate is defined in Trickle [START_REF] Ghobadi | Trickle: Rate limiting youtube video streaming[END_REF] to be 20% higher than the encoding bitrate, which allows the server to deal better with transient network congestion. Hence, for TcpHas we decided to replace Bwe by EncodingRate[QLevel] × 1.2, which represents its shaping rate:

ssthresh = EncodingRate[QLevel] × RTT_min × 1.2 (4)

The timing of ssthresh updating is the same as that of QLevel: when detecting a congestion event and just after an idle OFF period. Moreover, the initial value of ssthresh should be modified to correspond to the context of HAS. These three points are presented in the following.
Congestion Events Inspired by Algorithm 1, the TcpHas algorithm used when detecting a congestion event is described in Algorithm 4. It covers the two cases of congestion events: three duplicated ACKs, and retransmission timeout. In both cases, QLevel is updated from Bwe using the QLevelEstimator function. Then, ssthresh is updated according to Equation 4. The update of cwnd is as in Algorithm 1. Algorithm 4 TcpHas algorithm when congestion occurs. Idle Periods As explained in [START_REF] Allman | TCP congestion control[END_REF][START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF], the congestion window is reduced when the idle period exceeds the retransmission timeout RTO, and ssthresh is updated to max(ssthresh, 3/4 × cwnd). In the HAS context, the idle period coincides with the OFF period. In addition, we denote by OFF the OFF period whose duration exceeds RTO. Accordingly, reducing cwnd after an OFF period forces cwnd to switch to the slow-start phase although the server is asked to deliver the video content at the optimal shaping rate. To avoid this, we propose to remove the cwnd reduction after the OFF period. Instead, as presented in Algorithm 5, TcpHas updates QLevel and ssthresh, then sets cwnd to ssthresh. This modification is very useful in the context of HAS. On the one hand, it eliminates the sending rate reduction after each OFF period, which adds additional delay to deliver the next chunk and may cause a reduction of quality level selection on the player side. On the other hand, the update of ssthresh after each OFF period allows the server to adjust its sending rate more correctly, especially when the client generates a high OFF period between two consecutive chunks. Algorithm 5 TcpHas algorithm after an OFF period.
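The bodies of Algorithms 4 and 5 are not reproduced above; the following sketch implements the behavior just described. The state dictionary, the cwnd reaction per congestion type, and the encoding ladder used in the test are assumptions for illustration:

```python
MSS = 1460  # assumed segment size, in bytes

def qlevel_from_bwe(rates_bps, bwe_bps):
    """QLevelEstimator: highest level whose bitrate is <= Bwe."""
    return max([i for i, r in enumerate(rates_bps) if r <= bwe_bps],
               default=0)

def shaping_ssthresh(rates_bps, qlevel, rtt_min_s):
    """Eq. (4): shaping rate (1.2 x encoding bitrate) x RTT_min, bytes."""
    return rates_bps[qlevel] / 8 * rtt_min_s * 1.2

def on_congestion(state, event):
    """Algorithm 4 (sketch): refresh QLevel and ssthresh, then reduce
    cwnd according to the congestion type ('dupack' or 'timeout')."""
    state['qlevel'] = qlevel_from_bwe(state['rates'], state['bwe'])
    state['ssthresh'] = shaping_ssthresh(state['rates'], state['qlevel'],
                                         state['rtt_min'])
    state['cwnd'] = state['ssthresh'] if event == 'dupack' else MSS

def after_off_period(state):
    """Algorithm 5 (sketch): no cwnd collapse after an idle OFF period;
    refresh QLevel and ssthresh and resume directly at the shaping rate."""
    state['qlevel'] = qlevel_from_bwe(state['rates'], state['bwe'])
    state['ssthresh'] = shaping_ssthresh(state['rates'], state['qlevel'],
                                         state['rtt_min'])
    state['cwnd'] = state['ssthresh']
```

With a level-2 bitrate near 928 kbps and RTT_min = 100 ms, Equation 4 gives ssthresh ≈ 13.9 kB, which matches the ~14 kB value reported for TcpHas in Section 4.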
Initialization By default, TCP congestion control uses an initial value of ssthresh, initial_ssthresh, of 65535 bytes. The justification comes from the classical TCP goal of quickly (exponentially) occupying the whole available end-to-end bandwidth. However, in the HAS context, it is better for initial_ssthresh to match an encoding bitrate. We decided to set it to correspond to the highest quality level at the beginning of streaming, for two reasons: 1) to give a similar initial aggressiveness as classical TCP, and 2) to avoid setting it higher than the highest encoding bitrate, so as to maintain the HAS traffic shaping concept. This initialization should be done in conformity with Equation 4, hence the computation of RTT is needed. Consequently, TcpHas just updates ssthresh when the first RTT is computed. In this case, our updated ssthresh serves the same purpose as initial_ssthresh. TcpHas initialization is presented in Algorithm 6. Cwnd Modification of TcpHas for Traffic Shaping As shown in Subsection 2.2, Trickle does traffic shaping on the server side by setting a maximum threshold for cwnd, equal to the shaping rate multiplied by the current RTT. However, during the congestion avoidance phase (i.e., when cwnd > ssthresh), cwnd is increased very slowly, by one MSS each RTT. Consequently, when cwnd is lower than this threshold, it takes several RTTs to reach it, i.e. a slow reactivity.

Algorithm 6 TcpHas initialization.
1: QLevel = length(EncodingRate) − 1 (the highest quality level)
2: cwnd = initial_cwnd
3: ssthresh = initial_ssthresh (i.e. 65535 bytes)
4: RTT = 0
5: if new ACK is received then
6:   if RTT ≠ 0 then (i.e. when the first RTT is computed)
7:     ssthresh = EncodingRate[QLevel] × RTT × 1.2
8:   end if
9: end if

To increase its reactivity, we modify the TCP congestion avoidance algorithm by directly tuning cwnd to match the shaping rate.
Given that TcpHas does not change its shaping rate (EncodingRate[QLevel] × 1.2) during the congestion avoidance phase (see Subsection 3.2), we update cwnd according to the RTT variation. However, in this case, we are faced with the following dilemma related to RTT variation:
- On the one hand, an increase of RTT means that queuing delay increases and could cause congestion while the congestion window is still increasing. Worse, if the standard deviation of RTT is important (e.g., in the case of a wireless home network, or unstable network conditions), an important jitter of RTT would force cwnd to increase suddenly and cause heavy congestion.
- On the other hand, the increase of RTT over time should be taken into account by the server in its cwnd updating process. In fact, during the ON period of a HAS stream, the RTT value is increasing [START_REF] Mansy | Sabre: A client based technique for mitigating the buffer bloat effect of adaptive video flows[END_REF]. Consequently, using a constant value of RTT (such as RTT_min) does not take this increase of RTT into consideration and may result in a shaping rate lower than the desirable rate.
One way to mitigate RTT fluctuation and to take into account the increase of RTT during the ON period is to use smoothed RTT computations. We propose to employ a low-pass filter for this purpose. The smoothed RTT, sRTT, which is updated at each ACK reception, is:

sRTT = ψ × sRTT + (1 − ψ) × RTT_k (5)

where RTT_k is the RTT sample computed at the k-th ACK reception and 0 ≤ ψ ≤ 1. The TcpHas algorithm during the congestion avoidance phase is described in Algorithm 7, where EncodingRate[QLevel] × 1.2 is the shaping rate.

Algorithm 7 TcpHas algorithm in congestion avoidance phase.
1: if new ACK is received and cwnd ≥ ssthresh then
2:   cwnd = EncodingRate[QLevel] × sRTT × 1.2
3: end if

To sum up, TcpHas is a congestion control optimized for video streaming of the HAS type. It is implemented in the server, at the transport layer; no other modifications are needed.
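Equation (5) and Algorithm 7 can be sketched together as follows; the class interface is illustrative, and the 928 kbps bitrate in the usage note is an assumption consistent with the ~14 kB ssthresh reported in Section 4:

```python
class CwndShaper:
    """Sketch of Algorithm 7: during congestion avoidance, cwnd is set
    directly to shaping rate x smoothed RTT (Eq. 5)."""

    def __init__(self, encoding_rate_bps, psi=0.99):
        self.rate = encoding_rate_bps * 1.2   # shaping rate, bits/s
        self.psi = psi
        self.srtt = None                      # smoothed RTT, seconds

    def on_ack(self, rtt_sample_s):
        """Update the smoothed RTT (Eq. 5) and return cwnd in bytes."""
        if self.srtt is None:                 # first sample
            self.srtt = rtt_sample_s
        else:                                 # Eq. (5)
            self.srtt = self.psi * self.srtt \
                        + (1 - self.psi) * rtt_sample_s
        return self.rate / 8 * self.srtt      # Algorithm 7
```

For a 928 kbps level and a steady 100 ms RTT, cwnd settles around 13.9 kB; an isolated RTT spike shifts the smoothed RTT, and thus cwnd, by only 1% of the spike, illustrating how ψ = 0.99 damps jitter while still following the slow RTT drift during ON periods.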
TcpHas needs only one piece of information from the application layer: the encoding bitrates of the quality levels of the selected video. It coexists gracefully with TCP on the server, the transport layer simply checking whether the EncodingRate vector returned by the application is empty or not, as explained in Subsection 3.2. It is compatible with all TCP clients. There is no direct interaction between the client and the server to make the adaptation decision. When TcpHas performs bandwidth estimation (at the server), it is independent of the estimation made in the HAS player on the client side. The only objective of the bandwidth estimation on the server is to shape the sending bitrate in order to prevent the HAS player from selecting a quality level higher than the optimal one. Hence, the bandwidth estimation in the server provides a proactive optimization: it limits the sending bitrate before the client may select an inappropriate quality level. TcpHas Evaluation The final goal of our work is to implement our idea in real software. However, at this stage of the work, we preferred instead to use a simulated player because of the classical advantages of simulation over experimentation, such as reproducibility of results and measurement of individual parameters for better parameter tuning. More precisely, we preferred to use a simulated player instead of a commercial player for the following main reasons:
- First, the commercial players have complex implementations with many parameters. Besides, the bitrate controller differs among commercial players and even between different versions of the same player. Accordingly, using our own well-controlled player allows better evaluations than a "black box" commercial player that could give incomprehensible behaviors.
- Second, some image-oriented perceptual factors used in a real player (e.g. video spatial resolution, frame rate or type of video codec) are of no interest for HAS evaluation.
- Third, objective metrics are increasingly employed for HAS evaluation. In reliable flows, such as those using TCP, objective metrics lead to the same results no matter the file content. Hence, with a fully controlled simulated player we can easily get the variation of its parameters over time and use them for objective QoE metric computation.
- Fourth, for our simulations, we need to automate events such as triggering the beginning and the end of HAS flows at precise moments. Using a simulated player offers easier manipulation than a real player, especially when many players need to be launched simultaneously.
In this section, we evaluate TcpHas using the classical ns-3 simulator, version 3.17. In our scenario, several identical HAS players share the same bottleneck link and compete for bandwidth inside a home network. We first describe the network setup used in all of the simulations. Then, we describe the parameter settings of TcpHas. Afterwards, we show the behavior of TcpHas compared to the other methods. Finally, after describing the QoE and QoS metrics used, we analyze results for 1 to 9 competing HAS flows in the home network and with background traffic. Simulation setup Fig. 2 presents the architecture we used, which is compliant with the fixed broadband access network architecture used by Cisco to present its products [START_REF]Broadband network gateway overview[END_REF]. The HAS clients are located inside the home network, a local network with 100 Mbps bandwidth. The Home Gateway (HG) is connected to the DSLAM. The bottleneck link is located between the HG and the DSLAM and has a capacity of 8 Mbps. The queue of the DSLAM uses the Drop Tail discipline with a length that corresponds to the bandwidth-delay product.
Nodes BNG (Broadband Network Gateway) and IR (Internet Router), and links AggLine (that simulates the aggregate line), ISPLink (that simulates the Internet Service Provider core network) and NetLink (that simulates the route between the IR and the HAS server) are configured so that their queues are large enough (1000 packets) to support a large bandwidth of 100 Mbps and a high delay of 100 ms without causing significant packet losses. We generate Internet traffic that crosses ISPLink and AggLine, because these two simulated links are supposed to support heavy traffic from ISP networks. For Internet traffic, we use the Poisson Pareto Burst Process (PPBP) model [START_REF] Zukerman | Internet traffic modeling and future technology implications[END_REF], considered a simple and accurate traffic model that matches statistical properties of real-life IP networks (such as their bursty behavior). PPBP is a process based on the overlapping of multiple bursts with heavy-tailed distributed lengths. Events in this process represent points of time at which one of an infinite population of users begins or stops transmitting a traffic burst. PPBP is closely related to the M/G/∞ queue model [START_REF] Zukerman | Internet traffic modeling and future technology implications[END_REF]. We use the PPBP implementation in ns-3 [START_REF] Ammar | PPBP in ns-3[END_REF][START_REF] Ammar | A new tool for generating realistic internet traffic in ns-3[END_REF]. In our configuration, the overall rate of PPBP traffic is 40 Mbps, which corresponds to 40% of ISPLink capacity. Emulated Player The ns-3 simulated TcpHas player we use is similar to the emulated player described in [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF], with a chunk duration of 2 seconds and a playback buffer of 30 seconds (maximum video size the buffer can hold).
Note that [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF] compares four TCP variants and two router-based traffic shaping methods, whereas the current article proposes a new congestion control to be executed on the server. Our player is classified as a Rate and Buffer based (RBB) player, following the classification proposed in [START_REF] Yin | A control-theoretic approach for dynamic adaptive video streaming over HTTP[END_REF][START_REF] Yin | Toward a principled framework to design dynamic adaptive streaming algorithms over HTTP[END_REF]. Using buffer occupancy information is increasingly proposed and used due to its advantages for reducing stalling events. In addition, the bandwidth estimator we used consists of dividing the size of the received chunk by its download duration. The buffer occupancy information is used only to define an aggressiveness level of the player, which allows the player to ask for a quality level higher than the estimated bandwidth. The player uses HTTP GET requests to ask for each chunk. It has two phases: buffering and steady state. During the buffering phase it fills up its playback buffer by asking for chunks of the lowest video quality level, each chunk immediately after the other. When the playback buffer fills up, the player switches to the steady-state phase. In this phase, it asks for the next chunk of the estimated quality level each time the playback buffer occupancy drops by more than 2 seconds (i.e. less than 28 seconds of video remain in the buffer). When the playback buffer is empty, the player re-enters the buffering phase. All tests use five video quality levels with constant encoding rates, presented in Table 1, which correspond to the quality levels usually used by many video service providers. Since objective metrics are used, cf.
section 4, and given that the use of TCP ensures that all packets arrive at the destination, the exact video type used does not influence the results (in our case we used random bits for the video file). We also use the HTTP traffic generator module given in [START_REF] Cheng | HTTP traffic generator[END_REF][START_REF] Cheng | Transactional traffic generator implementation in ns-3[END_REF]. This module allows communication between two nodes using the HTTP protocol, and includes all features that generate and control HTTP GET Request and HTTP response messages. We wrote additional code into this HTTP module by integrating the simulated HAS player. We call this implementation the HAS module, as presented in Fig. 2. Streaming is done from S_f to C_f, where 0 ≤ f < N. The round-trip propagation delay between S_f and C_f is 100 ms. We show results for TcpHas, Westwood+, TIBET and Trickle. We do not consider Westwood because Westwood+ is supposed to replace Westwood, since it performs better in case of ACK compression and clustering. Concerning Trickle, it is a traffic shaping method that was proposed in the context of progressive download, as described in Subsection 2.2. In order to adapt it to HAS, we added to it the estimator of optimal quality level of TcpHas and the adaptive decrease mechanism of Westwood+ (the same as TIBET), and applied the Trickle traffic shaping based on the estimated optimal quality level. This HAS adaptation of Trickle is simply denoted by "Trickle" in the remainder of this article. For all evaluations, we use competing players that play simultaneously for K seconds. We set K = 180 seconds, which allows the HAS players to reach stationary behavior when they are competing for bandwidth [START_REF] Houdaille | Shaping HTTP adaptive streams for a better user experience[END_REF]. TcpHas Parameter Settings The parameter γ of the Bwe low-pass filter is constant, in conformity with Subsection 3.1.
We set γ = 0.99 to reduce the oscillation of the bandwidth estimations, Bwe, over time. We set initial_cwnd = 2 × MSS. The initial bandwidth estimation value of Bwe is set to the highest encoding bitrate. If the first value were set to zero or to the first estimation sample, the low-pass filtering process with parameter γ = 0.99 would be too slow to reach the correct estimation. In addition, we want TcpHas to quickly reach the highest quality level at the beginning of the stream, as explained in Subsection 3.3. The parameter alpha of the TIBET estimation scheme (see Algorithm 2) is chosen empirically in our simulations and is set to 0.8. A higher value produces more stable estimations but is less responsive to network changes, whereas a lower value makes TcpHas more aggressive, with a tendency to select the quality level that corresponds to the whole bandwidth (unfairness). The parameter ψ used for low-pass filtering the RTT measurements in Subsection 3.4 is set to 0.99. The justification for this value is that it better reduces the RTT fluctuations, and consequently reduces the cwnd fluctuation during the congestion avoidance phase.

Fig. 3 Quality level selection over time for the four methods compared.

TcpHas Behavior Compared to the Other Methods We present results for a scenario with 8 competing identical clients. Figure 3 shows the quality level selection over time of one of the competing players for the above methods. The optimal quality level that should be selected by the competing players is n°2. During the buffering phase, all players select the lowest quality level, as allowed by the slow-start phase. However, during the steady-state phase the results diverge: the Westwood+ player frequently changes the quality level between n°0 and n°3, which means not only that the player produces an unstable HAS stream, but also that there is a high risk of generating stalling events. The TIBET player is more stable and presents less risk of stalling events.
The Trickle player has improved performance and becomes more stable around the optimal quality level n°2, with some oscillations between quality levels n°1 and n°3. In contrast, the TcpHas player is stable at the optimal quality level during the steady-state phase, hence it performs the best among all the methods.

Given that the congestion control algorithms of these four methods use the bandwidth estimation Bwe to set ssthresh, it is interesting to present the Bwe variation over time, shown in Figure 4. The optimal Bwe estimation should be equal to the bottleneck capacity (8 Mbps) divided by the number of competing HAS clients (8), i.e., 1 Mbps. For Westwood+, Bwe varies between 500 kbps and 2 Mbps. For TIBET, Bwe is more stable but varies between 1.5 Mbps and 2 Mbps, which is greater than the average of 1 Mbps; this means that an unfairness in bandwidth sharing occurred because this player is more aggressive than the other 7 competing players. For TcpHas, Bwe starts from the initial estimation that corresponds to the encoding bitrate of the highest quality level (4256 kbps), as described in Algorithm 6, then Bwe converges rapidly to the optimal estimation value of 1 Mbps. Both Trickle and TcpHas present a similar Bwe shape because they use the same bandwidth estimator.

The dissimilarity between the four algorithms is more visible in Figure 5, which presents the variation of cwnd and ssthresh. Westwood+ and TIBET yield unstable ssthresh and even more unstable cwnd. In contrast, Trickle and TcpHas provide stable ssthresh values. For Trickle, cwnd is able to increase to high values during the congestion avoidance phase because Trickle limits the congestion window by setting an upper bound in order to have a sending bitrate close to the encoding bitrate of the optimal quality level. For TcpHas, ssthresh is stable at around 14 kB, which corresponds to the result of Equation 4 when QLevel = 2 and RTT_min = 100 ms.
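Equation 4 itself is not reproduced in this excerpt, but the quoted 14 kB value is consistent with sizing ssthresh to carry the optimal encoding bitrate, inflated by the same 1.2 overhead factor used in Equation 6, over one RTT_min. The following check is a reconstruction under that assumption, not the paper's exact formula:

```python
# Hypothetical reconstruction of Equation 4: ssthresh sized to sustain the
# optimal encoding bitrate (with a 1.2 overhead factor) over one RTT_min.
def ssthresh_bytes(encoding_rate_bps, rtt_min_s, overhead=1.2):
    return encoding_rate_bps * overhead * rtt_min_s / 8  # bits -> bytes

# QLevel = 2 -> 928 kbps, RTT_min = 100 ms  ->  about 13920 bytes, i.e. ~14 kB
s = ssthresh_bytes(928_000, 0.100)
```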
Besides, cwnd is almost in the same range as ssthresh and increases during ON periods because it takes into account the increase of RTT, as presented in Algorithm 7.

QoE and QoS Metrics

This subsection describes the specific QoE and QoS metrics we selected to evaluate TcpHas objectively, and justifies their importance in our evaluation.

QoE Metrics

We use three QoE metrics described by formulas in [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF]: the instability of the video quality level [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF][START_REF] Jiang | Improving fairness, efficiency, and stability in HTTPbased adaptive video streaming with festive[END_REF] (0% means the quality stays the same, 100% means the quality changes every period), the infidelity to the optimal quality level [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF] (the percentage of time, in seconds, during which a quality level other than the optimal one is used), and the convergence speed to the optimal quality level [8, 9, 21] (the time needed to stabilize on the optimal quality level and remain stable on it over at least 1 minute).
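The exact formulas for these metrics are given in the cited papers; as a simplified illustration, instability and infidelity can be computed from a per-period record of selected quality levels:

```python
# Simplified illustrative versions of the instability and infidelity QoE
# metrics (the exact formulas are in the cited papers).
def instability(levels):
    """% of consecutive periods where the selected quality level changes."""
    changes = sum(1 for a, b in zip(levels, levels[1:]) if a != b)
    return 100.0 * changes / (len(levels) - 1)

def infidelity(levels, optimal):
    """% of time spent on a quality level other than the optimal one."""
    return 100.0 * sum(1 for q in levels if q != optimal) / len(levels)

trace = [0, 1, 2, 2, 2, 3, 2, 2]    # one selected level per period
# instability(trace): 4 changes over 7 transitions, about 57.1 %
# infidelity(trace, optimal=2): 3 of 8 periods off-optimal, i.e. 37.5 %
```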
The optimal quality level L_{C-S,opt} used in our evaluation is the highest quality level whose encoding bitrate, with a 1.2 overhead factor, is lower than or equal to the ratio between the bottleneck bandwidth, avail_bw, and the number of competing HAS players, N:

L_{C-S,opt} = max { L_{C-S} ∈ {0, ..., 4} : EncodingRate(L_{C-S}) × 1.2 ≤ avail_bw / N }    (6)

This formula applies to all the flows, i.e., we attach the same optimal quality level to all flows; this is because our focus is on fairness among flows. We acknowledge that this does not use the maximum achievable bandwidth in some cases: for example, for six clients sharing an 8 Mbps bottleneck link, the above formula gives 928 kbps for each client, and not 928 kbps for five clients and 1632 kbps for the sixth client (see Table 1 for the bitrates). We however noticed that TcpHas does maximize the bandwidth use in some cases, as presented in the next section.

The fourth metric is the initial delay, a metric adopted by many authors [START_REF] Krogfoss | Analytical method for objective scoring of HTTP adaptive streaming (HAS)[END_REF][START_REF] Shuai | Olac: An open-loop controller for low-latency adaptive video streaming[END_REF], which accounts for the fact that the user dislikes waiting a long time before the beginning of video display. The fifth metric is the stalling event rate; the user is highly disturbed when the video display is interrupted while concentrating on watching [START_REF] Hoßfeld | Initial delay vs. interruptions: Between the devil and the deep blue sea[END_REF]. We define the stalling event rate, StallingRate(K), as the number of stalling events during a K-second test duration, divided by K and multiplied by 100:

StallingRate(K) = (number of stalling events during K seconds) / K × 100    (7)

The greater the StallingRate(K), the greater the dissatisfaction of the user. A streaming technology must try as much as possible to have a zero stalling event rate.
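Equation 6 and the six-client worked example above can be checked with a short sketch (bitrates from Table 1; function names are ours):

```python
# Sketch of Equation 6: highest level whose encoding bitrate (with the 1.2
# factor) fits in an equal share of the bottleneck bandwidth.
BITRATES = [248, 456, 928, 1632, 4256]  # kbps, levels 0..4 (Table 1)

def optimal_level(avail_bw_kbps, n_clients):
    share = avail_bw_kbps / n_clients
    candidates = [L for L, rate in enumerate(BITRATES) if rate * 1.2 <= share]
    return max(candidates) if candidates else 0

# Six clients on an 8 Mbps bottleneck: each gets ~1333 kbps, and the rule
# selects level 2 (928 kbps), as in the worked example above.
level = optimal_level(8000, 6)
```

For a single client on the same bottleneck, the rule returns level 4 (4256 kbps), matching the single-client case discussed in the results section.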
QoS Metrics

We use four QoS metrics, described in the following. The first metric is the frequency of large OFF periods [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF]. A large OFF period is an OFF period whose duration exceeds the TCP retransmission timeout duration (RTO); such periods lead to a reduction of bitrate and potentially to a degradation of performance [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF]. This metric is defined as the total number of large OFF periods divided by the total number of downloaded chunks of one HAS flow:

fr_OFF = (number of large OFF periods) / (number of chunks)    (8)

A high queuing delay is harmful to HAS and to real-time applications [START_REF] Yang | Opportunities and challenges of HTTP adaptive streaming[END_REF]. We noticed in our tests that this delay, and hence the RTT of the HAS flow, could vary considerably, so we use as the second metric the average queuing delay, defined as:

Delay_{C-S}(K) = RTT_{C-S,mean}(K) - RTT^0_{C-S}    (9)

where RTT_{C-S,mean}(K) is the average among all RTT_{C-S} samples of the whole HAS session between client C and server S for a K-second test duration, and RTT^0_{C-S} is the initial round-trip propagation delay between the client C and the server S.

Congestion detection events greatly influence both the QoS and the QoE of HAS because the server decreases its sending rate at each such event. Such an event is always accompanied by a ssthresh reduction.
Hence, we use a third metric, the congestion rate, which we define as the rate of congestion events detected on the server side:

CNG_{C-S}(K) = D^{ssthresh}_{C-S}(K) / K × 100    (10)

where D^{ssthresh}_{C-S}(K) is the number of times the ssthresh has been decreased for the C-S HAS session during the K-second test duration.

The fourth metric we use is the average packet drop rate. The rationale is that the number of dropped packets at the bottleneck gives an idea of the congestion severity of the bottleneck link. We define this metric as the average packet drop rate at the bottleneck during a K-second test duration:

DropPkt(K) = (number of dropped packets during K seconds) / K × 100    (11)

Note that this metric is different from the congestion rate described above, because the TCP protocol at the server can detect a congestion event even when there is no packet loss.

Performance Evaluation

In this subsection we evaluate objectively the performance of TcpHas compared to Westwood+, TIBET and Trickle. For this, we give and comment on the results of the evaluation in two scenarios: when increasing the number of competing HAS flows in the home network and when increasing the background traffic in the access network. We use 16 runs for each simulation. We present the mean value of the QoE and QoS metrics among the competing players and among the number of runs for each simulation. We present the performance unfairness measurements among HAS clients with vertical error bars. We chose 16 runs because the relative difference between the mean values of instability and infidelity over 16 and 64 runs is less than 4%.

Effect of Increasing the Number of HAS Flows

Here, we vary the number of competing players from 1 to 9. We select a maximum of 9 competing HAS clients because in practice the number of users inside a home network does not exceed 9. QoE results are given in Figure 6.
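Before turning to the results, note that the four QoS metrics of Equations 8 to 11 reduce to simple normalized counts; a minimal sketch (function names are ours):

```python
# Sketch of the four QoS metrics (Equations 8-11) from raw event counts.
def freq_large_off(n_large_off, n_chunks):          # Eq. 8
    return n_large_off / n_chunks

def queuing_delay(rtt_mean_ms, rtt0_ms):            # Eq. 9
    return rtt_mean_ms - rtt0_ms

def congestion_rate(n_ssthresh_decreases, K):       # Eq. 10
    return n_ssthresh_decreases / K * 100

def drop_rate(n_dropped_packets, K):                # Eq. 11
    return n_dropped_packets / K * 100

# e.g. 3 ssthresh reductions in a 180 s run give a congestion rate of ~1.67
```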
In this figure, the lowest instability rate is that of TcpHas (less than 4%), with a negligible instability unfairness between players. Trickle shows a similar instability rate when the number of competing players is between 4 and 7, but in the other cases it has a high instability rate. The cause is that Trickle does not take into consideration the reduction of cwnd during OFF periods, which causes a low sending rate after each OFF period. Hence, Trickle is sensitive to large OFF periods: we can see in the figure a correlation between the instability and the frequency of large OFF periods. In contrast, the instability of Westwood+ and TIBET is much greater and increases with the number of competing players.

The infidelity and convergence speed of TcpHas are satisfactory, as presented in the figure: the infidelity rate is less than 30% and the convergence speed is smaller than 50 seconds in all but two cases. When there are 5 or 9 competing HAS clients, TcpHas selects a quality level higher than the optimal quality level that we defined (Equation 6); TcpHas is thus able to select a higher quality level, converge to it, and be stable on it for the whole duration of the simulation. This result is rather positive, because TcpHas is able to maximize the occupancy of the home bandwidth to almost 100% in these two particular cases. In contrast, Westwood+ and TIBET present high infidelity to the optimal quality level and have difficulties converging to it. For Trickle, due to its traffic shaping algorithm, the infidelity rate is lower than 45%, and lower than 25% when the large OFF frequency (hence the instability rate) is low.

Fig. 6 Values of QoE metrics when increasing the number of competing HAS clients for the four methods compared.

The initial delay of the four methods increases with the number of competing HAS clients, as presented in Figure 6.
The reason is that during the buffering phase the HAS player asks for chunks successively, and when the number of competing HAS clients increases, the bandwidth share of each flow decreases, thus generating additional delay. We also notice that Westwood+, TIBET and TcpHas present initial delays in the same range of values. However, Trickle has a lower delay; the reason is that, as shown in Figures 4 and 5, during the buffering state Trickle is able to maintain the initial bandwidth estimation that corresponds to the encoding bitrate of the highest quality level and does not provoke congestion. In other words, Trickle is able to send video chunks at a high sending bitrate without causing congestion, which reduces the initial delay.

Finally, Figure 6 shows that TcpHas and Trickle generate no stalling events, whereas Westwood+ and TIBET do starting from 7 competing HAS clients. The result for TcpHas and Trickle comes from their high stability rate even for a high number of competing HAS clients.

QoS results are given in Figure 7. It shows that the large OFF period frequency of TcpHas is kept near zero and much lower than for Westwood+, TIBET and Trickle, except in the case where the home network has only one HAS client. In this case, the optimal quality level is n°4, whose encoding rate is 4.256 Mbps. Hence, the chunk size is equal to 8.512 Mbits. Consequently, when TcpHas shapes the sending rate according to this encoding rate while delivering chunks of such large sizes, it is difficult to keep OFF periods below the retransmission timeout duration, RTO. Note that we have taken this case into account when proposing TcpHas by eliminating the initialization of cwnd after idle periods, as explained in Algorithm 5, to preserve high QoE.
As presented in Figure 7, although the queuing delay of the four methods increases with the number of competing HAS clients, TcpHas and Trickle present a lower queuing delay than Westwood+ and TIBET. The reason is that both TcpHas and Trickle shape the HAS flows by reducing the sending rate of the server, which reduces queue overflow in the bottleneck. Additionally, we observe that TcpHas reduces the queuing delay more than Trickle; TcpHas has roughly half the queuing delay of Westwood+ and TIBET. Besides, TcpHas does not increase its queuing delay above 25 ms even for 9 competing players, while Trickle increases it to about 50 ms. This result is mainly due to the high stability of the HAS quality level generated by TcpHas, which offers better fluidity of HAS flows inside the bottleneck. The same reason applies to the very low congestion detection rate and packet drop rate at the bottleneck for TcpHas, given in Figure 7. Furthermore, the congestion rate of Trickle is correlated to its frequency of large OFF periods; this means that the ssthresh reduction of Trickle is principally caused by the detection of OFF periods. In addition, due to its traffic shaping method that reduces the sending rate of the HAS server, the packet drop rate of Trickle is quite similar to that of TcpHas, as shown in Figure 7.

To summarize, TcpHas is not affected by the increase of the number of competing HAS clients in the same home network. From the QoE point of view, it preserves high stability and high fidelity to the optimal quality level, and has a tendency to increase the occupancy of the home bandwidth. From the QoS point of view, it maintains low OFF period durations, a low queuing delay, and a low packet drop rate.

Background Traffic Effect

Here we vary the burst arrival rate λp of the Poisson Pareto Burst Process (PPBP) that simulates the traffic crossing the Internet Router (IR) and the DSLAM from 10 to 50.
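A PPBP source is commonly modeled as Poisson burst arrivals, each burst transmitting at a fixed rate for a Pareto-distributed (heavy-tailed) duration. The sketch below illustrates this model with arbitrary parameter values, which are not those of our simulation setup:

```python
import random

# Illustrative PPBP model: Poisson burst arrivals of rate lam (bursts/s),
# each burst lasting a Pareto(alpha, x_m)-distributed duration.
def expected_load_bps(lam, burst_rate_bps, alpha, x_m):
    """Mean offered load of the process (requires alpha > 1)."""
    mean_duration = alpha * x_m / (alpha - 1)   # mean of Pareto(alpha, x_m)
    return lam * burst_rate_bps * mean_duration

def sample_bursts(lam, alpha, x_m, horizon, rng):
    """Generate (start, duration) burst pairs over [0, horizon)."""
    bursts, t = [], 0.0
    while True:
        t += rng.expovariate(lam)               # Poisson inter-arrivals
        if t >= horizon:
            return bursts
        bursts.append((t, x_m * rng.paretovariate(alpha)))

rng = random.Random(1)
bursts = sample_bursts(lam=10, alpha=1.4, x_m=0.1, horizon=60, rng=rng)
```

With lam = 10 bursts/s, a 1 Mbps burst rate, alpha = 1.4 and x_m = 0.1 s, the mean offered load is 3.5 Mbps.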
Table 2 shows the percentage of occupancy of the 100 Mbps ISP network (ISPLink in our network) for each selected λp value (without counting HAS traffic). Hence, we simulate a background traffic in the ISP network ranging from 20% to 100% of the network capacity. In these simulations, we used two competing HAS clients inside the same home network.

QoE results are given in Figure 8. It shows that the instability, infidelity, convergence speed and stalling event rate curves of TcpHas present two different regimes. When λp < 35, TcpHas keeps practically the same satisfying measurements, much better than Westwood+, TIBET and Trickle. However, when λp > 35, the four measurements degrade suddenly and stabilize around the same high values; for infidelity and stalling rate, TcpHas even yields worse values than Westwood+, TIBET and Trickle. We deduce that TcpHas is sensitive to additional load in the ISP network, and could then be more harmful than the three other methods. Trickle presents relatively better performance than Westwood+ and TIBET in terms of average values. However, it presents higher unfairness between clients, as shown by its large vertical error bars. In addition, we observe that TcpHas presents the same initial delay as the other methods, which is around 3 seconds; it does not exceed 5 seconds and does not disturb the user's QoE.

QoS results are presented in Figure 9. Westwood+, TIBET and Trickle present a high frequency of large OFF periods, which decreases when increasing the ISP network load, whereas TcpHas presents a low large OFF frequency. The average queuing delay generated by TcpHas is lower than that of Westwood+, TIBET and Trickle for λp < 40. The reason for this is visible in the figure: the congestion detection rate increases with λp (especially above 40), while the packet drop rate at the bottleneck is still null for TcpHas. Hence, we deduce
that the bottleneck is no longer located in the link between the DSLAM and the IR, but has moved inside the loaded ISP link.

To summarize, TcpHas yields very good QoE and QoS results when the ISP link is not too loaded. Beyond a 70% load of the ISP link, the congestion rate increases, which degrades the QoS and forces TcpHas to frequently update its estimated optimal quality level, which in turn degrades the QoE.

Conclusion

This paper presents and analyses server-based shaping methods that aim to stabilize the video quality level and improve the QoE of HAS users. Based on this analysis, we propose and describe TcpHas, a HAS-based TCP congestion control that acts like a server-based HAS traffic shaping method. It is inspired by the TCP adaptive decrease mechanism and uses the end-to-end bandwidth estimation of TIBET to estimate the optimal quality level. Then, it shapes the sending rate to match the encoding bitrate of the estimated optimal quality level. The traffic shaping process is based on updating ssthresh when detecting a congestion event or after an idle period, and on modifying cwnd during the congestion avoidance phase.

We evaluate TcpHas in the case of HAS clients that share the bottleneck link of the same home network and are competing under various conditions. Simulation results indicate that TcpHas considerably improves both HAS QoE and network QoS. Concerning QoE, it offers high stability, high fidelity to the optimal quality level, a rapid convergence speed, and an acceptable initial delay. Concerning QoS, it reduces the frequency of large OFF periods that exceed the TCP retransmission timeout, reduces the queuing delay, and considerably reduces the packet drop rate in the shared bottleneck queue. TcpHas performs well when increasing the number of competing HAS clients and does not cause stalling events. It shows excellent performance for small and medium loads of the ISP network.
As future work, we plan to implement TcpHas in real DASH servers of a video content provider, and to perform a large-scale evaluation over long test durations in real and variable network conditions, with hundreds of DASH players located in different access networks asking for video content.

Fig. 1 TCP parameter setting process for quality level selection.

Algorithm 2 Bandwidth estimation (TIBET):
1: if ACK is received then
2:   sample_length = acked × packet_size × 8
3:   sample_interval = now - last_ack_time
4:   Average_packet_length = alpha × Average_packet_length + (1 - alpha) × sample_length
5:   Average_interval = alpha × Average_interval + (1 - alpha) × sample_interval
6:   Bwe = Average_packet_length / Average_interval
7: end if

Fig. 2 Network architecture used in ns-3 for the evaluation.
Fig. 4 Estimated bandwidth Bwe over time for the four methods compared.
Fig. 5 cwnd and ssthresh values over time for the four methods compared.

Chiheb Ben Ameur et al.

Fig. 7 Values of QoS metrics when increasing the number of competing HAS clients for the four methods compared.
Fig. 8 Values of QoE metrics when increasing the burst arrival rate for the four methods compared.
Fig. 9 Values of QoS metrics when increasing the burst arrival rate for the four methods compared.

Table 1 Available encoding bitrates for the video file used in simulations.
Video quality level L_{C-S}: 0 | 1 | 2 | 3 | 4
Encoding bitrate (kbps): 248 | 456 | 928 | 1632 | 4256

Table 2 ISP network load when varying the burst arrival rate λp.
λp: 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50
ISP network load (%): 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
halid: 01625993 · lang: en · year: 2017
https://hal.science/hal-01625993/file/article.pdf
Benjamin Béouche-Hélias, David Helbert (email: [email protected]), Cynthia De Malézieu, Nicolas Leveziel, Christine Fernandez-Maloigne

Neovascularization Detection in Diabetic Retinopathy from Fluorescein Angiograms

Keywords: Diabetic retinopathy, neovascularization, classification, anti-VEGF, diabetes

Even if a lot of work has been done on Optical Coherence Tomography (OCT) and color images in order to detect and quantify diseases such as diabetic retinopathy, exudates or neovascularizations, none of it is able to evaluate the diffusion of neovascularizations in retinas. Our work has been to develop a tool able to quantify a neovascularization and the fluorescein leakage during an angiography. The proposed method has been developed following a clinical trial protocol; images are taken by a Spectralis (Heidelberg Engineering) device. Detections are done using a supervised classification with specific features. Images and their detected neovascularizations are then spatially matched by an image registration. We compute the expansion speed of the liquid, which we call the diffusion index. The latter specifies the state of the disease, indicates the activity of the neovascularizations, and allows a follow-up of patients. The method proposed in this paper has been built to be robust, even in the presence of laser impacts, in order to compute a diffusion index.

Introduction

The detection and follow-up of diabetic retinopathy, an increasingly important cause of blindness, is a public health issue. Indeed, loss of vision can be prevented by early detection of diabetic retinopathy and increased monitoring by regular examination. There are now many algorithms for the automatic detection of common anomalies of the retina (microaneurysms, haemorrhages, exudates, spots, ...). However, very little research has been done on the detection of a major pathology, which is neovascularization, corresponding to the growth of new blood vessels due to a large lack of oxygen in the retinal capillaries.
Our goal has not been to substitute for the manual detections of experts but to help them by suggesting which areas of the retina could or could not be considered as having neovascularizations (NVs), and by providing quantitative and qualitative information on proliferative diabetic retinopathy, such as the area of a NV, its location relative to the optic nerve, and the activity of the NV (diffusion index). The main goal has been to provide a diffusion index of the injected fluorescent liquid, which indicates the severity of the pathology, in order to follow the patient over the years.

Diabetic retinopathy is one of the leading causes of visual impairment worldwide, due to the increasing incidence of diabetes. Proliferative diabetic retinopathy (PDR) is defined by the outgrowth of preretinal vessels leading to retinal complications, i.e., intravitreous hemorrhages and retinal detachments. Today, laser photocoagulation is the standard-of-care treatment of proliferative diabetic retinopathy, leading to a decrease of growth factor secretion in photocoagulated areas of the retina. Vascular endothelial growth factor (VEGF) is responsible for the growth of healthy vessels, but also of the NVs due to diabetes. Research is active on finding a specific type of anti-VEGF that could stop the growth of the NVs specifically. A clinical trial (ClinicalTrials.gov Identifier: NCT02151695), called "Safety and Efficacy of Aflibercept in Proliferative Diabetic Retinopathy", is in progress at the CHU of Poitiers, testing the effects of a specific anti-VEGF: Aflibercept. This drug has been approved by the European Medicines Agency (EMA) and the United States Food and Drug Administration (FDA) for the treatment of exudative age-related macular degeneration, another retinal disease characterized by choroidal new vessels. The aim of this pilot study is to evaluate the efficacy and the safety of Aflibercept intravitreal injections compared to panretinal photocoagulation for proliferative diabetic retinopathy.
In ophthalmology, most of the work on retinal diseases concerns the detection of exudates, [START_REF] Zhang | Exudate detection in color retinal images for mass screening of diabetic retinopathy[END_REF][START_REF] Niemeijer | Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for early diagnosis of diabetic retinopathy[END_REF][START_REF] Abramoff | Automated detection of diabetic retinopathy: barriers to translation into clinical practice[END_REF][START_REF] Osareh | Automated identification of diabetic retinal exudates in digital colour images[END_REF][START_REF] Imani | Fully automated diabetic retinopathy screening using morphological component analysis[END_REF] the segmentation of healthy vessels, [START_REF] Staal | Ridgebased vessel segmentation in color images of the retina[END_REF][START_REF] Mendona | Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction[END_REF][START_REF] Nguyen | An effective retinal blood vessel segmentation method using multi-scale line detection[END_REF] and the detection of the optic disc, [START_REF] Sinthanayothin | Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images[END_REF][START_REF] Duanggate | Parameter-free optic disc detection[END_REF] but none of it addresses the detection of proliferative diabetic retinopathy within angiograms. Some work has also been done on image registration for retinal images. Can et al. [START_REF] Can | A featurebased, robust, hierarchical algorithm for registering pairs of images of the curved human retina[END_REF] have proposed the registration of a pair of retinal images by using branching points and cross-over points in the vasculature.
Zheng et al. [START_REF] Zheng | Salient feature region: a new method for retinal image registration[END_REF] have developed a registration algorithm using salient feature region descriptions. Legg et al. [START_REF] Legg | Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation[END_REF] have illustrated the efficiency of mutual information for the registration of fundus colour photographs and colour scanning laser ophthalmoscope images.

A few steps are needed to compute the diffusion index. It measures a growth in time, which means that we have to detect and quantify the pathology at both injection times and compare the corresponding areas. As it is nearly impossible to have exactly the same conditions during the acquisitions (eye movements, focus of the camera, angle), we need an image registration to estimate the deformations and to correctly spatially correlate the NVs. For the segmentation, we used a supervised classification by Random Forests [START_REF] Breiman | Random forests[END_REF] using intensity, textural and contextual features, and a database of trained images from the clinical trial. These steps are shown in Fig. 1.

The paper is organized as follows. In Section 2 we present the microscope and the acquisition protocol. The image registration on both injection times is proposed in Section 3. We then propose a novel neovascularization detection method in Section 4 and an automatic diffusion index computation in Section 5.

Materials

Our database is made of images taken by the Spectralis (Heidelberg Engineering) microscope with an ultra-widefield lens covering a 102° field of view. It delivers undistorted images of a great part of the retina, making detection easier and allowing the monitoring of abnormal peripheral changes. Images are in gray levels; the areas that are bright are mainly 1/ those leaking during fluorescein angiography due to NV, 2/ normal retinal vessels or 3/ laser impacts.
Some of our images were taken from patients who had been treated by laser photocoagulation, visible in Fig. 2. These impacts are troublesome because some parts of them are also very bright. Blood still spreads through some impacts, and these can be big enough to be wrongly assimilated to a NV. To qualify the PDR by the leakage index, different acquisition times during fluorescein angiography were used. The protocol presented below is the clinical trial's protocol, which is composed of two image acquisitions:

1. fluorescein injection into the patient's arm;
2. acquisition of the early time injection (t0);
3. acquisition of the late time injection (tf).

The few minutes left between the different acquisition times allow the visualization of the fluorescein leakage, defined as a progressive increase of the NV area with blurring edges of the NV. No leakage is observed on normal retinal vessels. In Fig. 3 we can see pictures of the same eye with acquisitions at t0 and tf, where the fluorescein spreads first into the arteries and then bleeds into the neovascularizations. As the images are taken three minutes apart, some spatial differences occur and we need to spatially correlate both images with an image registration, which is presented in the next part.

15 diabetic patients were included in the analysis and ophthalmologists have identified 60 different NVs from fluorescein angiographies on wide field imaging.

Image registration

The image registration does not aim to be perfect but to allow a spatial comparison between the NVs in both images in order to compute quantitative data. The best registration model would be a local method, but for the reason just explained, a global method is amply sufficient for the comparison.
Some of them are very popular and have been tested by many experts like Scale Invariant Feature Transform (SIFT), [START_REF] Lowe | Distinctive image features from scaleinvariant keypoints[END_REF] Maximally Stable Extremal Regions (MSER) [START_REF] Forssen | Shape descriptors for maximally stable extremal regions[END_REF] or Speeded Up Robust Features (SURF). [START_REF] Bay | Surf : speeded up robust features[END_REF] We found that SIFT was robust and fast enough for the deformations we have on images. Constraints Images are taken with a manually movable camera with no spacial landmarks to help. Moreover the eye of the patient slightly moves between each capture, even when focusing a specific direction, which means that the images for the two injection times can be geometrically different, with translations (x and y), scaling (z) and some small rotations (θ). Futhermore, tissues on the retina can slightly be different over the time, depending on several biological factors, like the heat, the light or the blood flood. We then have global and local geometrical deformations. The brightness of the images mainly depends on the diffusion of the fluorescent liquid injected in the patient.Some tissues will appear more or less bright between both images and sometimes will simply be or not present onto them. For example, healthy arteries will appear darker on the late time injection because the liquid first flood into them (t 0 ) and then spread into different tissues like neovessels (t f ). That is why NVs appear brighter and are easier to detect on t f . We finally have global colorimetry changes, which impact on the general contrast of the image, and very local changes. Deformation computation First steps are the extraction and the description of keypoints on the image. These keypoints need to be invari-ant to image scaling and rotation, and mostly invariant to change in illumination, they also need to be highly distinctive by their description to be matched further. 
To match the extracted keypoints, we use a brute-force method. For each keypoint, we take the two nearest points in terms of Euclidean distance and keep only the matches for which the distance to the first nearest neighbor is less than 0.8 times the distance to the second nearest neighbor (as proposed in [START_REF] Lowe | Distinctive image features from scaleinvariant keypoints[END_REF] ). The deformation matrix is finally computed with the RANSAC algorithm (Random Sample Consensus). [START_REF] Fischler | Random sample consensus : A paradigm model fitting with applications to image analysis and automated cartography[END_REF] Results and discussion We know that the deformations between the two images are relatively small. Even with small movements of the eye or the camera, the lens used takes wide enough images to avoid large deformations, because it keeps the margin of movement very small; we therefore removed matching points that are obviously too far from each other. In accordance with the experts and after visual inspection of the different images, we set the distance threshold to a ratio r of the diagonal of the image, where r is a constant that can be adjusted depending on the strength of the deformations. For example, r can be set to 0.5 to place the threshold at half the length of the diagonal. We can see in Fig. 4 that the registration process works well. It is still a global image registration that could be made more precise with a dedicated local non-rigid algorithm, but since the aim is only to pair NVs and to be spatially consistent when comparing the leakage areas, a perfect registration is not needed. Once the image registration is done, we can process both images. The segmentation method we used is explained in the next part. Neovascularizations detection Principle The aim of supervised classification is to learn the rules that let the algorithm assign objects to classes from features describing these objects.
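Lowe's ratio test used above is straightforward to sketch. The snippet below is an illustrative pure-NumPy version (the descriptor arrays are hypothetical stand-ins for SIFT descriptors; in practice the surviving matches would then be passed to a RANSAC homography estimator such as OpenCV's findHomography):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Brute-force matching with Lowe's ratio test.

    desc_a, desc_b: (n, d) float arrays of keypoint descriptors.
    Returns a list of (i, j) index pairs kept by the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

Each kept pair then contributes one point correspondence to the deformation-matrix estimation.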
The algorithm is first trained with a portion of the available data to learn the classification rules (see Fig. 5). Since supervised classification tends to give better results with a well-built training database, we chose the Random Forests of decision trees [START_REF] Breiman | Random forests[END_REF] (RF), a supervised classification algorithm that performs well even with a small database. The noise level is very high in most images, notably because of the laser impacts that some patients can have (see Fig. 2). These impacts share several properties with NVs, such as a very high brightness, and they sometimes have the same shape and size. Some noise is also due to the acquisition itself: for example, eyelashes can blur part of the images and the automated settings of the camera can introduce varying amounts of blur. Classification Algorithm 4.2.1 Features Supervised classification can be used with as many features as desired, but it can perform poorly if too many weak features are included. To prune the feature set, one can start from multiple candidate features and determine the best ones through tests; once the most important features are identified, the others can be discarded or kept depending on the required accuracy and computation time. Note that the images are only in gray level. We chose to start from enough candidate features to allow a meaningful pruning, because our database is not large enough to support many features and still provide a good predictor. NVs being very bright, we rely on several features based on the intensity; we also add textural features and one contextual feature, as listed below. Intensity Because leakages are bright, we put a lot of weight on intensity-based features: mean, maximum and minimum intensity in the neighborhood. We also take into account the single value of the classified pixel. The values are normalized. Mean, maximum and minimum values are computed in a 5 × 5 and a 9 × 9 neighborhood, which leads to six features.
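The six neighborhood intensity features can be computed without any image-processing library. The following sketch is our own illustration (not the authors' code), deriving the per-pixel mean, minimum and maximum over a square window, with edge-replicated borders as an assumption:

```python
import numpy as np

def neighborhood_stats(img, k):
    """Per-pixel mean, min and max intensity over a k x k window.

    img: 2-D float array (normalized gray levels); k: odd window size.
    Edges are handled by replicating the border pixels.
    """
    r = k // 2
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    # Stack every shifted view of the padded image: one slice per window offset.
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(k) for dx in range(k)])
    return stack.mean(axis=0), stack.min(axis=0), stack.max(axis=0)
```

Calling it with k = 5 and k = 9 yields the six intensity features described above.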
Texture Texture can discriminate between laser impacts and NVs because laser impacts are more likely to be heterogeneous. For that, we calculate the variance on a 5 × 5 and a 9 × 9 neighborhood and we compute an isotropic gradient with a 3 × 3 and a 5 × 5 Sobel operator. We also add some of Haralick's texture features: angular second moment, contrast and correlation. [START_REF] Robert | Textural features for image classification[END_REF] Contextual Contextual features are very important because intensity alone is often not enough and is very sensitive to noise. We add a vessel segmentation to our process, which we translate into a feature: healthy vessels could sometimes be classified as NVs if only intensity features were taken into account, because they look very similar. Our segmentation is based on the method proposed in [START_REF] Nguyen | An effective retinal blood vessel segmentation method using multi-scale line detection[END_REF] It is a morphological segmentation based on the width and the homogeneity of the vessels, weighted by the luminance (see figure 6). With our dataset, the importances of the proposed features are listed in Fig. 7 (values have been rounded for readability). The most important features are the minimum intensity and the mean intensity in the 9 × 9 neighborhood. As expected, the intensity of the classified pixel alone is a poor feature because several noise sources are also bright (e.g. laser impacts and healthy vessels). Fig. 8 shows an example of a classification with the Random Forests algorithm. Compared to the ground truth, the true positives are in green, false positives in red, false negatives in blue and true negatives in white. Post processing Because the classification is pixel-wise, it is not sufficient by itself to produce compact and filled regions, so we added a few post-processing steps.
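Likewise, the local variance and the Sobel gradient magnitude used above as texture features can be sketched in pure NumPy (an illustrative implementation, with edge-replicated borders as an assumption):

```python
import numpy as np

def local_variance(img, k):
    """Per-pixel variance over a k x k window (edge-replicated borders)."""
    r = k // 2
    p = np.pad(img, r, mode='edge')
    h, w = img.shape
    win = np.stack([p[dy:dy + h, dx:dx + w]
                    for dy in range(k) for dx in range(k)])
    return win.var(axis=0)

def sobel_magnitude(img):
    """Isotropic gradient magnitude with the 3 x 3 Sobel operator."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # Correlate the image with the horizontal (kx) and vertical (kx.T) kernels.
    gx = sum(kx[i, j] * p[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * p[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)
```

A flat region gives zero variance and zero gradient, while edges (e.g. laser impact borders) give strong responses.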
As the leakage is almost isotropic in the vitreous, it is reasonable to compare the leakage to a cloud that is more or less dense but mostly filled (i.e. without holes). The classification sometimes yields regions with small holes, which can easily be filled with a morphological closing; moreover, some thin false detections can appear on laser impact edges or healthy vessels, for example, and can be removed with a morphological opening. Thus, after the classification, a morphological opening and a morphological closing are applied: the opening removes thin false detections and the closing fills the holes of the detected NVs. Results and discussion The Random Forests algorithm gives, for each pixel, a probability of belonging to the class "NV" or to the class "other". Because the results vary with these probabilities, we ran the algorithm with different probability thresholds (λ). Results are obtained using a cross-validation process on our database: for each image, the training set is composed of all the data except those extracted from the current image. In this way, the data of the image are not taken into account for the training and the statistical model is not wrongly influenced. As results, we compare the expert manual segmentation and the automated segmentation to classify the resulting pixels into four classes: true positive (TP), false positive (FP), true negative (TN) and false negative (FN). Given these classes, we can compute the sensitivity (S), the specificity (Sp) and the pixel prediction value (PPV) as follows: S = TP / (TP + FN) (1) Sp = TN / (FP + TN) (2) PPV = TP / (TP + FP) (3) However, NVs are generally small compared to the size of the image, which results in a large disparity between the numbers of positive and negative pixels. The specificity is then always very close to 1 because the number of background pixels is much larger than the number of positives, so we do not report this metric in our results.
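Equations (1)-(3) translate directly into code; a minimal helper computing the three metrics from the pixel counts could look like this (our own sketch, not the authors' implementation):

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and pixel prediction value from pixel counts.

    tp, fp, tn, fn: true/false positive/negative pixel counts of one image
    compared to the expert ground truth.
    """
    s = tp / (tp + fn)     # sensitivity, Eq. (1)
    sp = tn / (fp + tn)    # specificity, Eq. (2)
    ppv = tp / (tp + fp)   # pixel prediction value, Eq. (3)
    return s, sp, ppv
```

With a large background, sp stays close to 1 regardless of detection quality, which is why only S and PPV are reported.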
Detection at t f Results of the detection for the t f images are given in figure 9. We can see that the pixel prediction value is strongly influenced by the probability threshold λ, whereas the sensitivity is much less affected. A λ below 0.8 gives a good detection of the NVs (high S, but very poor PPV). When λ is above 0.8, S decreases a bit but stays very high, whereas PPV becomes more reliable around a λ of 0.8 and exceeds 90% for λ above 0.9. Results of the detection for the t 0 images are given in figure 10. They are not as high as for the t f images, as expected, because NVs are hard to distinguish from healthy vessels before most of the spread has occurred. As for the t f images, PPV is very poor below a λ of 0.8 and becomes very high above it. The problem is that above this threshold, S decreases more than expected, down to 60% for a λ of 0.99. 5 Diffusion index Methodology and Results The diffusion index has to give an indication of the severity of the diabetic retinopathy, which means that it has to compare two liquid spread volumes. As we only work with two-dimensional images, we assume that the spreading is isotropic, so that an index computed only from the surfaces is enough to quantify the strength of the leakage. Figure 11 recalls the processing: we detect the NV surfaces at time t f and, inside these surfaces, we detect the NV surfaces at time t 0 . The diffusion index is then computed by differentiating the NV areas at t 0 and at t f . Results and discussion The detection of NVs in the t 0 and t f images is quite complex and really depends on many parameters. These parameters are linked to the fact that the eyes of the patient move between captures, so the images of the two injection times can be geometrically different. The computed diffusion indices are close to the ground truth (cf. Tab. 1); indeed the error is only 0.1 or 0.5%.
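The diffusion index computation can be sketched as follows. Note that the exact formula is our assumption (a simple area ratio, which is consistent with ground-truth values around 2 in Tab. 1), since the text only states that the index differentiates the two areas:

```python
def diffusion_index(area_t0, area_tf):
    """Hypothetical diffusion index: ratio of the leakage area detected at
    final time t_f to the NV area detected at initial time t_0.

    A value around 2 would mean the leakage roughly doubled the NV surface.
    The exact formula used in the paper is not given explicitly; this ratio
    is an illustrative assumption.
    """
    return area_tf / area_t0
```

In practice area_t0 and area_tf would be pixel counts of the registered NV masks at each injection time.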
Moreover, the retina can change slightly over time, depending on biological factors, and healthy arteries appear darker depending on the time of the capture. In our experience, for neovascularizations of diabetic retinopathy, the algorithm shows sensitivity and pixel prediction values that are effective to describe the lesions. The detection of NVs in the t 0 images is quite complex and really depends on many parameters. We obtain a low Mean Square Error for a probability threshold λ equal to 0.8. Conclusion We propose to compute diffusion indices after detecting neovascularizations in noisy angiogram images at initial time t 0 and at final time t f . First we extract the NV areas at time t f , and we use these areas to detect the NV areas at time t 0 . We also need to register the images between the two acquisitions: we choose to detect interest points using SIFT and we estimate the geometrical transformation for each neovascularization. To detect neovascularizations, we learn features that characterize the NVs. We therefore chose a random forest of decision trees; this approach gives good detection results and the computed diffusion index is close to the ground truth. A clinical study comparing this algorithm with the manual method is now necessary to evaluate its clinical effectiveness and to propose a software solution for the ophthalmologists.
Fig 4 :Fig 5 : 45 Fig 4: Registration of the NVs from a t f image to a t 0 image. Fig 6 : 6 Fig 6: Detection of the healthy vessels. Fig 7 : 7 Fig 7: List of the feature importance. Result of the supervised classification Fig 8 : 8 Fig 8: Result of the RF classification on an image. Green regions represent the true positives, red regions the false positives and blue the false negatives. Fig 9 : 9 Fig 9: Sensibility (blue dots) and Pixel Prediction Value (red squares) results for the classification on t f depending on λ used for the probabilities. 4. 4 . 2 42 Detection at t0 Fig 10 : 10 Fig 10: Sensibility (blue dots) and Pixel Prediction Value (red squares) results for the classification on t 0 depending on λ used for the probabilities. 11 : 11 Methodology of diffusion index computation. Fig 12 : 12 Fig 12: Mean Square Errors for each computed diffusion index according to probability threshold λ used for the classification. Table 1 : 1 Diffusion index results. Ground Truth Automated Difference Mean 2.09 2.10 0.01 σ 0.33 0.54 0.52 Acknowledgments This research is financially supported by the clinical trial (ClinicalTrials.gov Identifier: NCT02151695) called "Safety and Efficacy of Aflibercept in Proliferative Diabetic Retinopathy" in progress at the CHU of Poitiers, France. The authors state no conflict of interest and have nothing to disclose.
2018
https://hal.science/hal-01756957/file/1-hal-preprints.pdf
Laurent Perrier, Gilles Pijaudier-Cabot, David Grégoire (email: [email protected]) Extended poromechanics for adsorption-induced swelling prediction in double porosity media: modeling and experimental validation on activated carbon Keywords: Adsorption, swelling, double porosity media, poromechanical modelling Natural and synthesised porous media are generally composed of a double porosity: a microporosity where the fluid is trapped as an adsorbed phase and a meso or a macro porosity required to ensure the transport of fluids to and from the smaller pores. Zeolites, activated carbon, tight rocks, coal rocks, source rocks, cement paste and construction materials are among these materials. In nanometer-scale pores, the molecules of fluid are confined. This effect, denoted as molecular packing, implies that fluid-fluid and fluid-solid interactions sum at the pore scale and have significant consequences at the macroscale, such as instantaneous deformation, which are not predicted by classical poromechanics. If adsorption in nanopores induces instantaneous deformation at a higher scale, the matrix swelling may close the transport porosity, reducing the global permeability of the porous system. This is important for applications in petroleum oil and gas recovery, gas storage, separation, catalysis or drug delivery. This study aims at characterizing the influence of an adsorbed phase on the instantaneous deformation of micro-to-macro porous media presenting distinct and well-separated porosities. A new incremental poromechanical framework with varying porosity is proposed, allowing the prediction of the swelling induced by adsorption without any fitting parameters. This model is validated by experimental comparison performed on a highly micro- and macro-porous activated carbon. It is also shown that a single porosity model cannot predict the adsorption-induced strain evolution observed during the experiment.
After validation, the double porosity model is used to discuss the evolution of the poromechanical properties under free and constrained swelling. Introduction Following the IUPAC recommendations [START_REF] Sing | Reporting physisorption data for gas/solid systems with special reference to the deter-mination of surface area and porosity[END_REF][START_REF] Thommes | Physisorption of gases, with special reference to the evaluation of surface area and pore size distribution (IUPAC Technical Report)[END_REF], the pore space in porous materials is divided into three groups according to the pore diameters: macropores of widths greater than 50 nm, mesopores of widths between 2 and 50 nm and micropores (or nanopores) of widths less than 2 nm. Zeolites, activated carbon, tight rocks, coal rocks, source rocks, cement paste and construction materials are among these materials. In recent years, major attention has been paid to these microporous materials because the surface-to-volume ratio (i.e., the specific pore surface) increases with decreasing characteristic pore size. Consequently, these materials can trap an important quantity of fluid molecules as an adsorbed phase. This is important for applications in petroleum and oil recovery, gas storage, separation, catalysis or drug delivery. For these microporous materials, a deviation from standard poromechanics [START_REF] Biot | General theory of three-dimensional consolidation[END_REF][START_REF] Coussy | Poromechanics[END_REF] is expected. In nanometer-scale pores, the molecules of fluid are confined. This effect, denoted as molecular packing, implies that fluid-fluid and fluid-solid interactions sum at the pore scale and have significant consequences at the macroscale, such as instantaneous deformation.
A lot of natural and synthesised porous media are composed of a double porosity: the microporosity where the fluid is trapped as an adsorbed phase and a meso or a macro porosity required to ensure the transport of fluids to and from the smaller pores. If adsorption in nanopores induces instantaneous deformation at a higher scale, the matrix swelling may close the transport porosity, reducing the global permeability of the porous system or annihilating the functionality of synthesised materials. In different contexts, this deformation may be critical. For instance, in situ adsorption-induced coal swelling has been identified [START_REF] Larsen | The effects of dissolved CO2 on coal structure and properties[END_REF][START_REF] Pan | A theoretical model for gas adsorption-induced coal swelling[END_REF][START_REF] Sampath | Ch4co2 gas exchange and supercritical co2 based hydraulic fracturing as cbm production-accelerating techniques: A review[END_REF] as the principal factor leading to a rapid decrease in CO 2 injectivity during coal bed methane production enhanced by CO 2 injection. Conversely, gas desorption can lead to matrix shrinkage and microcracking, which may help oil and gas recovery in the context of unconventional petroleum engineering [START_REF] Levine | Model study of the influence of matrix shrinkage on absolute permeability of coal bed reservoirs[END_REF]. The effects of adsorbent deformation on physical adsorption has also been identified by [START_REF] Thommes | Physical adsorption characterization of nanoporous materials: Progress and challenges[END_REF] as one of the next major challenges concerning gas porosimetry in nano-porous non-rigid materials (e.g. metal organic framework). In conclusion, there is now a consensus in the research community that major attention has to be focused on the coupled effects appearing at the nanoscale within microporous media because they may have significant consequences at the macroscale. 
Experimentally, different authors have tried to combine gas adsorption results and volumetric swelling data (see e.g. [START_REF] Gor | Adsorption-induced deformation of nanoporous materials -A review[END_REF] for a review). The pioneering work of [START_REF] Meehan | The Expansion of Charcoal on Sorption of Carbon Dioxide[END_REF] showed the effect of carbon dioxide sorption on the expansion of charcoal, but only the mechanical deformation was reported and adsorption quantities were not measured. Later on, different authors [START_REF] Briggs | Expansion and contraction of coal caused respectively by the sorption and discharge of gas[END_REF][START_REF] Levine | Model study of the influence of matrix shrinkage on absolute permeability of coal bed reservoirs[END_REF][START_REF] Day | Swelling of australian coals in supercritical co2[END_REF][START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF][START_REF] Pini | Role of adsorption and swelling on the dynamics of gas injection in coal[END_REF][START_REF] Hol | Competition between adsorption-induced swelling and elastic compression of coal at co 2 pressures up to 100mpa[END_REF][START_REF] Espinoza | Measurement and modeling of adsorptive-poromechanical properties of bituminous coal cores exposed to co2: Adsorption, swelling strains, swelling stresses and impact on fracture permeability[END_REF] performed tests on bituminous coal, because it is of utmost importance in the context of CO 2 geological sequestration and coal bed reservoir exploitation. However, most results were not complete, in the sense that adsorption and swelling were not measured simultaneously [START_REF] Meehan | The Expansion of Charcoal on Sorption of Carbon Dioxide[END_REF][START_REF] Robertson | Measuring and modeling sorption-induced coal strain[END_REF] or the experiments were not performed on exactly the same coal samples [START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF].
Other authors presented simultaneous in situ adsorption and swelling results but the volumetric strain was extrapolated from a local measurement -using strain gauges [START_REF] Levine | Model study of the influence of matrix shrinkage on absolute permeability of coal bed reservoirs[END_REF][START_REF] Harpalani | Influence of matrix shrinkage and compressibility on gas production from coalbed methane reservoirs[END_REF][START_REF] Battistutta | Swelling and sorption experiments on methane, nitrogen and carbon dioxide on dry selar cornish coal[END_REF] or LVDT sensors [START_REF] Chen | Method for simultaneous measure of sorption and swelling of the block coal under high gas pressure[END_REF][START_REF] Espinoza | Measurement and modeling of adsorptive-poromechanical properties of bituminous coal cores exposed to co2: Adsorption, swelling strains, swelling stresses and impact on fracture permeability[END_REF] -or by monitoring the silhouette expansion [START_REF] Day | Swelling of australian coals in supercritical co2[END_REF]. [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] presented an experimental setup providing simultaneous in situ measurements of both adsorption and deformation for the same sample in the exact same conditions, which can be directly used for model validation. Gas adsorption measurements are performed using a custom-built manometric apparatus and deformation measurements are performed using a digital image correlation set-up. This set-up allows full-field displacement measurements, which may be crucial for heterogeneous, anisotropic or cracked samples. As far as modeling is concerned, molecular simulations are the classical tools at the nanoscale. 
Important efforts have been devoted to molecular simulations in order to characterise adsorption-induced deformation in nanoporous materials [START_REF] Vandamme | Adsorption and strain: The CO2-induced swelling of coal[END_REF]Brochard et al., 2012a;[START_REF] Hoang | Couplings between swelling and shear in saturated slit nanopores : A molecular simulation study[END_REF] and these investigations showed, on a few configurations, that the pressures applied on the pore surfaces may be very high (a few hundred MPa), depending on the thermodynamic conditions and on the pore sizes. Note that an alternative approach based on a non-local density functional theory can be used to obtain highly resolved evolutions of pore pressure versus pore width and bulk pressure in slit-shaped pores for a large spectrum of thermodynamic conditions over the whole range of micropore widths, even for complex fluids [START_REF] Grégoire | Estimation of adsorption-induced pore pressure and confinement in a nanoscopic slit pore by a density functional theory[END_REF]. However, if macroscopic adsorption isotherms may be reconstructed in a consistent way from molecular simulations through the material pore size distribution [START_REF] Khaddour | A fully consistent experimental and molecular simulation study of methane adsorption on activated carbon[END_REF], molecular simulation tools are not tractable for predicting the resulting deformation at a macroscale due to the fluid confinement in nanopores (pore sizes below 2 nm). Note that [START_REF] Kulasinski | Impact of hydration on the micromechanical properties of the polymer composite structure of wood investigated with atomistic simulations[END_REF] proposed a molecular dynamics study where macroscopic swelling may be reconstructed from water adsorption in mesoporous wood (pore sizes in [4 -10] nm).
If adsorption is essentially controlled by the amount and size of the pores, the mechanical effect of the pressure build up inside the pores due to fluid confinement requires some additional description about the topology and spatial organization of the porous network which is not easy to characterize, for sub-nanometric pores especially. Such a result motivates the fact that swelling is usually related to the adsorption isotherms instead of the pore pressure directly, the mechanical effect of the pore pressure being hidden in the poromechanical description. In this context, different enhanced thermodynamical or poromechanical frameworks have been proposed within the last ten years to link adsorption, induced deformation and permeability changes (e.g [START_REF] Pan | A theoretical model for gas adsorption-induced coal swelling[END_REF] 2014)). For instance, Brochard et al. (2012b) (resp. Vermorel and[START_REF] Vermorel | Enhanced continuum poromechanics to account for adsorption induced swelling of saturated isotropic microporous materials[END_REF]) proposed enhanced poromechanical frameworks where swelling volumetric deformation may be estimated as a function of the bulk pressure and a coupling (resp. a confinement) coefficient, which may be deduced from adsorption measurements. However, if these models are consistent with experimental results from the literature, they cannot be considered as truly predictive because the model parameters have to be identified to recover the experimental loading path. An incremental poromechanical framework with varying porosity has been proposed by [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF] allowing the full prediction of the swelling induced by adsorption for isotropic nano-porous solids saturated with a single phase fluid, in reversible and isothermal conditions. 
This single porosity model has been compared with experimental data obtained by [START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF] on bituminous coal samples filled with pure CH 4 and pure CO 2 at T = 45 o C, and a fair agreement was observed for these low porosity coals, but these types of models have to be enhanced to take into account the intrinsic double porosity features of such materials. This study aims at characterizing the influence of an adsorbed phase on the instantaneous deformation of micro-to-macro porous media presenting distinct and well-separated porosities in terms of pore size distribution. A model accounting for double porosity is proposed and validated by in-situ and simultaneous experimental comparisons. The novelty of the approach is to propose an extended poromechanical framework taking into account the intrinsic double porosity features of such materials, capable of predicting adsorption-induced swelling for highly porous materials without any fitting parameters. 1. An incremental poromechanical framework with varying porosity for double porosity media In this section, the incremental poromechanical framework with varying porosity proposed by [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF] for single porosity media is extended to double porosity media. We consider here a double porosity medium with distinct and separated porosities. The small porosity is called adsorption porosity (φ µ ) and the larger one transport porosity (φ M ). This medium is considered as isotropic with a linear poro-elastic behaviour and it is immersed and saturated by a surrounding fluid at bulk pressure P b under isothermal conditions. Confinement effects may change the thermodynamic properties of the interstitial fluids in both porosities.
The adsorption porosity is saturated by an interstitial fluid of density ρ µ at pressure P µ . The transport porosity is fully saturated by an interstitial fluid (single-phase) of density ρ M at pressure P M (see Fig. 1). For saturated isotropic porous solids, in reversible and isothermal conditions and under small displacement-gradient assumptions, classical poromechanics may be rewritten for double porosity media [START_REF] Coussy | Poromechanics[END_REF] :

dG s = dΨ s + dW s (1)
     = [σ i j : dε i j + P M dφ M + P µ dφ µ ] + d(-P M φ M - P µ φ µ ) , with dΨ s = σ i j : dε i j + P M dφ M + P µ dφ µ and dW s = d(-P M φ M - P µ φ µ ) (2)
     = σ i j : dε i j - φ M dP M - φ µ dP µ . (3)

In Eqs. (1)-(3), G s denotes the Gibbs energy of the skeleton. The state variables (ε i j , φ M , φ µ ) are respectively the infinitesimal strain tensor, the transport porosity and the adsorption porosity. The associated thermodynamical forces (σ i j , P M , P µ ) are respectively the Cauchy stress tensor and the fluid pore pressures in both porosities. For an isotropic linear poro-elastic medium, the state equations are then given by:

σ i j = ∂G s /∂ε i j , φ M = -∂G s /∂P M , φ µ = -∂G s /∂P µ ,

and then:

dσ = K(φ M , φ µ ) dε - b M (φ M , φ µ ) dP M - b µ (φ M , φ µ ) dP µ
dφ M = b M (φ M , φ µ ) dε + dP M /N MM (φ M , φ µ ) - dP µ /N Mµ (φ M , φ µ )
dφ µ = b µ (φ M , φ µ ) dε - dP M /N µM (φ M , φ µ ) + dP µ /N µµ (φ M , φ µ ) . (4)

In Eq. 4, (σ = σ kk /3) and (ε = ε kk ) are respectively the total mean stress and the volumetric strain. (K, b M , b µ , N MM , N Mµ , N µM , N µµ ) are respectively the apparent modulus of incompressibility and six poromechanical properties, which depend on the two evolving porosities φ M and φ µ and on the constant skeleton matrix modulus. Considering a single cylindrical porosity, homogenization models [START_REF] Halpin | The Halpin-Tsai Equations: A Review[END_REF] yield:

K(φ) = K s G s (1-φ) / (G s + K s φ) , with G s = 3K s (1-2ν s ) / (2(1+ν s ))
b(φ) = 1 - K(φ)/K s , N(φ) = K s / (b(φ) - φ) .
(5) In Eq. 5, φ is the porosity and (G s , ν s ) are respectively the shear modulus and the Poisson ratio of the skeleton matrix. Practically, and for high porosity media, an iterative process of homogenization is chosen to avoid discrepancies in the estimation of the apparent properties, as noticed by [START_REF] Barboura | Modélisation micromécanique du comportement de milieux poreux non linéaires : Applications aux argiles compactées[END_REF]. The iterative process of homogenization for a cylindrical porosity is detailed in Appendix A. Full details on the iterative processes for both spherical and cylindrical porosities are presented in [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF]. Considering that the two porosities are distinct, well separated and both cylindrical, the iterative process can be used in two successive steps to determine the different moduli of incompressibility. Note that this two-step homogenization process may be reversed as well, to estimate the skeleton properties knowing the apparent ones:

(K, G) = F n (F n (K s , G s , φ µ ), φ M ) and (K s , G s ) = R n (R n (K, G, φ M ), φ µ ) . (6)

In Eq. 6, (F n , R n ) stand for the standard and the reverse iterative processes of homogenization defined in Eqs. A.1 and A.2 respectively. K s and G s (resp. K and G) are the skeleton (resp. apparent) incompressibility and shear moduli.
Based on stress/strain partitions [START_REF] Coussy | Poromechanics[END_REF] and on the response of the medium saturated by a non-adsorbable fluid [START_REF] Nikoosokhan | CO2 Storage in Coal Seams: Coupling Surface Adsorption and Strain[END_REF], the six poromechanical properties (b M , b µ , N MM , N Mµ , N µM , N µµ ) may be identified:

b M = 1 - K/K µ , b µ = K (1/K µ - 1/K s )
1/N MM = (b M - φ M )/K µ , 1/N Mµ = 1/N µM = (b M - φ M )(1/K µ - 1/K s ) ,
1/N µµ = (b µ - φ µ )/K s + (b M - φ M )(1/K µ - 1/K s ) , with K µ = F n (K s , G s , φ µ ) . (7)

For a porous medium saturated by a fluid under isothermal conditions (isotropic surrounding/bulk pressure: P b , density: ρ b ), dσ = -dP b and Eq. 4 yields:

dε = -dP b /K s
dφ M = -(φ M /K s ) dP b
dφ µ = -(φ µ /K s ) dP b . (8)

Therefore, classical poromechanics predicts a shrinkage of the porous matrix and a decrease of the porosity under bulk pressure. This has been confirmed by experimental measurements (e.g. [START_REF] Reucroft | Gas-induced swelling in coal[END_REF] on a natural coal with a non-adsorbable gas). Considering that the fluid is confined within both porosities, the thermodynamic properties (pressures: P M , P µ ; densities: ρ M , ρ µ ) of the interstitial fluids may be written following the framework proposed for single porosity media (2015):

dP M = ρ M dP b /ρ b = dP b /(1-χ M )
dP µ = ρ µ dP b /ρ b = dP b /(1-χ µ ) . (9)

In Eq. 9, (χ M = 1 - ρ b /ρ M ) and (χ µ = 1 - ρ b /ρ µ ) are the confinement degrees in the transport and in the adsorption porosities respectively, which characterize how confined the interstitial fluid is, through the numbers of adsorbate moles n ex M and n ex µ that exceed the number of fluid moles at bulk conditions in porosities φ M and φ µ respectively:

χ M = n ex M /n tot M with n tot M = n ex M + ρ b V φ M /M
χ µ = n ex µ /n tot µ with n tot µ = n ex µ + ρ b V φ µ /M . (10)

In Eq.
10, V_φM and V_φµ are the connected porous volumes corresponding to the transport porosity φ_M and to the adsorption porosity φ_µ respectively, n_ex_M and n_ex_µ are the numbers of adsorbate moles that exceed the number of fluid moles at bulk conditions, and n_tot_M and n_tot_µ are the total numbers of moles of interstitial fluid in the porosities φ_M and φ_µ respectively. Generally, there is no way to link separately the two confinement degrees χ_M and χ_µ to quantities that can be measured experimentally, because the partition between the two porosities of the excess number of adsorbate moles n_ex, which can be measured experimentally, is unknown (n_ex = n_ex_M + n_ex_µ). However, assuming that the two scales of porosity are well separated, one can consider that most of the adsorption phenomenon occurs in the adsorption porosity (n_ex_µ >> n_ex_M) and that the interstitial fluid is not confined in the transport porosity:

χ_µ ≈ n_ex / n_tot_µ  and  dP_µ = dP_b / (1 − χ_µ) ,
χ_M ≈ 0  and  dP_M = dP_b . (11)

Finally, a new incremental poromechanical framework with varying porosities for double porosity media is proposed:

dε = ( b_µ/(1 − χ_µ) + b_M − 1 ) dP_b / K ,

dφ_µ = [ ( b_µ/(1 − χ_µ) + b_M − 1 ) b_µ/K + 1/((1 − χ_µ) N_µµ) − 1/N_µM ] dP_b , whose three successive terms are denoted T_1, T_2 and T_3,

dφ_M = [ ( b_µ/(1 − χ_µ) + b_M − 1 ) b_M/K − 1/((1 − χ_µ) N_µM) + 1/N_MM ] dP_b , whose three successive terms are denoted T_4, T_5 and T_6,

χ_µ = n_ex / n_tot_µ  with  n_tot_µ = n_ex + ρ_b V_φµ / M = n_ex + (m_s/M)(ρ_b/ρ_s) φ_µ/(1 − φ_µ − φ_M) . (12)

In Eq. 12, K(φ_M, φ_µ) is given by Eq. 6, and (b_M, b_µ, N_MM, N_µM, N_µµ), which all depend on (φ_M, φ_µ), are given by Eq.
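The incremental framework of Eq. 12 can be integrated step by step in bulk pressure. The sketch below is a strongly simplified illustration: the poromechanical properties K, b_M and b_µ are kept constant (in the actual model they are re-homogenized at each step as the porosities evolve), the bulk fluid is treated as an ideal gas, and the power-law coefficients of the excess adsorption fit are hypothetical.

```python
# Step-by-step integration of the volumetric strain of Eq. 12.
# Simplifying assumptions (not part of the paper's full model):
#  - poromechanical properties K, b_M, b_mu are kept constant;
#  - n_ex(P_b) follows a power-law fit with hypothetical coefficients;
#  - the bulk fluid is treated as an ideal gas for rho_b.

R, T = 8.314, 318.15             # J/mol/K, K
M = 44.01e-3                     # kg/mol (CO2)
m_s, rho_s = 4.137e-3, 2400.0    # kg, kg/m^3 (Table 1)
phi_mu, phi_M = 0.32, 0.41       # initial porosities (Table 1)
K, b_M, b_mu = 120.0e6, 0.5, 0.45  # Pa and hypothetical Biot coefficients

def n_ex(P_b):                   # mol, hypothetical power-law fit
    return 4.0e-3 * (P_b / 1.0e6) ** 0.6

V_phi_mu = (m_s / rho_s) * phi_mu / (1.0 - phi_mu - phi_M)  # m^3 (Eq. 12)

eps, P_b, dP = 0.0, 1.0e4, 1.0e4     # start at low pressure, 0.01 MPa steps
while P_b < 4.6e6:                   # up to 46 bar
    rho_b = P_b * M / (R * T)        # ideal-gas bulk density (kg/m^3)
    n_tot = n_ex(P_b) + rho_b * V_phi_mu / M
    chi_mu = n_ex(P_b) / n_tot       # confinement degree (Eq. 12)
    eps += (b_mu / (1.0 - chi_mu) + b_M - 1.0) * dP / K
    P_b += dP
print(eps)   # positive: adsorption-induced swelling
```

The sign structure is the important point: since b_M + b_µ < 1, a non-adsorbable fluid (χ_µ ≈ 0) produces shrinkage, whereas a confined adsorbed fluid (χ_µ close to 1) makes the factor b_µ/(1 − χ_µ) dominate and produces swelling.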
7, (n_ex, P_b, ρ_b) are experimentally measurable, and (m_s, M, ρ_s) are respectively the adsorbent sample mass, the molar mass of the adsorbed gas and the density of the material composing the solid matrix of the porous adsorbent.

Validation by experimental comparisons on a double porosity synthetic activated carbon

The experimental results obtained by [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] on a double porosity synthetic activated carbon (Chemviron) are used in this study for validation purposes. The main advantage of the proposed method is to provide simultaneous in situ measurements of both adsorption and deformation for the same sample under the exact same conditions. The material and the adsorption-induced strain measurements are briefly recalled in section 2.1. The model input parameters are identified in section 2.2 and, finally, comparisons between experimental and model results are performed in section 2.3.

Material description and adsorption-induced strain measurements

In this section, the experimental results obtained by [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] on a double porosity synthetic activated carbon are briefly recalled. Full details may be found in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF]. An activated carbon (Chemviron) is used as adsorbent material. The sample is a cylinder and its main characteristics are collected in Table 1.
The geometrical dimensions have been measured with a caliper, the mass has been measured with a Precisa scale (XT 2220 M-DR), and the specific pore surface has been measured with a gas porosimeter (Micromeritics ASAP 2020) according to the BET theory [START_REF] Brunauer | Adsorption of Gases in Multimolecular Layers[END_REF]. The specific micropore volume has been estimated according to the IUPAC classification [START_REF] Thommes | Physisorption of gases, with special reference to the evaluation of surface area and pore size distribution (IUPAC Technical Report)[END_REF] (pore diameter below 2 nm), based on a pore size distribution deduced from a low-pressure adsorption isotherm (N2 at 77 K, from 8·10⁻⁸ to 0.99 P/P0 in relative pressure) measured with the same gas porosimeter according to the HK theory [START_REF] Horvath | Method for the calculation of effective pore size distribution in molecular sieve carbon[END_REF]. The specific macropore volume has been estimated according to the IUPAC classification [START_REF] Thommes | Physisorption of gases, with special reference to the evaluation of surface area and pore size distribution (IUPAC Technical Report)[END_REF] (pore diameter above 50 nm), based on a pore size distribution deduced from mercury intrusion porosimetry. Both porosimetry techniques show that there are almost no pores with diameters between 2 nm and 50 nm in this material: the two porosities are well separated in terms of pore size distribution. The adsorbates, CO2 and CH4, as well as the calibrating gas, He, are provided with minimum purities of 99.995%, 99.995% and 99.999% respectively. Fig. 3.a presents the results in terms of excess adsorption/desorption isotherms.
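The porosities reported in Table 1 can be cross-checked from the specific pore volumes, the sample mass and the sample volume, all taken from Table 1:

```python
# Cross-check of the porosities in Table 1:
# porosity = specific pore volume x sample mass / sample volume.
m_s = 4.137              # g, adsorbent sample mass
V_ech = 6.57             # cm^3, sample volume
v_mu, v_M = 0.51, 0.66   # cm^3/g, specific micro- and macropore volumes

phi_mu = v_mu * m_s / V_ech   # adsorption porosity
phi_M = v_M * m_s / V_ech     # transport porosity
phi_tot = phi_mu + phi_M      # total porosity
# consistent with Table 1: about 32%, 41% and 73%
print(phi_mu, phi_M, phi_tot)
```

The computed values (≈ 0.321, ≈ 0.416 and ≈ 0.737) match the 32 ± 1%, 41 ± 1% and 73 ± 2% reported in Table 1.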
CO2 and CH4 gas sorption in activated carbon is a reversible phenomenon and no hysteresis is observed between the adsorption and desorption paths, as previously reported in the literature [START_REF] Khaddour | A fully consistent experimental and molecular simulation study of methane adsorption on activated carbon[END_REF]. Noting that the adsorbed quantity increases when the temperature decreases, Fig. 3.a shows that CO2 is preferentially adsorbed in carbon compared to CH4, as previously reported in the literature [START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF][START_REF] Battistutta | Swelling and sorption experiments on methane, nitrogen and carbon dioxide on dry selar cornish coal[END_REF]. This is the reason why CO2 injection is used to increase CH4 recovery in Enhanced Coal Bed Methane production. Fig. 3.b presents the results in terms of adsorption-induced volumetric strain. CO2 and CH4 gas adsorption-induced deformation is a reversible phenomenon, but a small hysteresis is observed between the adsorption and the desorption paths. This hysteresis is not linked to the adsorption-deformation couplings but is due to an elastic compaction of the carbon matrix grains [START_REF] Perrier | Coupling between adsorption and deformation in microporous media[END_REF]. Cycling effects and material compaction are detailed in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF]. For a given pressure, CO2 adsorption produces more volumetric deformation than CH4 adsorption, which is the source of the rapid decrease in CO2 injectivity during coal bed methane production enhanced by CO2 injection.

Identification of model input parameters

The input parameters of the incremental poromechanical framework with varying porosities for double porosity media presented in Eq.
(12) are:

• The adsorbent sample mass (m_s), the molar masses of the adsorbed gases (M_CO2, M_CH4), the density of the material composing the solid matrix of the porous adsorbent (ρ_s), the initial transport porosity (φ0_M) and the initial adsorption porosity (φ0_µ), which are all given in Table 1.

• The surrounding fluid bulk pressure (P_b), the excess adsorbed quantities (n_ex) and the bulk density (ρ_b), which are experimentally measured or deduced. From the experimental measurements of the excess adsorption isotherm (Fig. 3.a), a power-law fit is identified and used as an input in the incremental estimation of Eq. (12). From the bulk pressure and the temperature, the bulk density (ρ_b) of the surrounding fluid is estimated by its equation of state using the AGA8 software [START_REF] Starling | Compressibility and super compressibility for natural gas and other hydrocarbon gases[END_REF].

• The skeleton incompressibility (K_s) and shear (G_s) moduli, which are deduced from the apparent ones through the two-step reversed homogenization process presented in Eq. (6). The apparent properties are experimentally measured using an ultrasonic technique where longitudinal and transverse waves are generated by a piezoelectric source and detected by a laser Doppler vibrometer [START_REF] Shen | Seismic wave propagation in heterogeneous limestone samples[END_REF]:

K = ρ_s (V_p² − (4/3) V_s²) ,   G = ρ_s V_s² . (13)

In Eq. 13, V_p and V_s are respectively the velocities of the longitudinal and the transverse waves. With V_p = (302 ± 2) m.s⁻¹ and V_s = (176 ± 1) m.s⁻¹, we get K = (120 ± 15) MPa and G = (75 ± 8) MPa, and then K_s = (6.0 ± 0.6) GPa and G_s = (3.5 ± 0.4) GPa.
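The numerical values reported above can be checked directly from Eq. 13, using the measured wave velocities and the solid matrix density of Table 1:

```python
# Apparent elastic moduli from ultrasonic wave velocities (Eq. 13).
rho_s = 2400.0           # kg/m^3, solid matrix density (Table 1)
V_p, V_s = 302.0, 176.0  # m/s, longitudinal and transverse wave velocities

G = rho_s * V_s**2                          # shear modulus
K = rho_s * (V_p**2 - 4.0 / 3.0 * V_s**2)   # incompressibility modulus
print(K / 1e6, G / 1e6)  # ~ 119.8 MPa and ~ 74.3 MPa, i.e. K ~ 120 MPa, G ~ 75 MPa
```

Both values fall within the reported uncertainty ranges K = (120 ± 15) MPa and G = (75 ± 8) MPa.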
Note that the experimental technique developed by [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF], allowing simultaneous measurements of adsorption-induced swelling, may also be used to characterize K_s directly, as previously reported by [START_REF] Hol | Competition between adsorption-induced swelling and elastic compression of coal at co 2 pressures up to 100mpa[END_REF]. Indeed, if a non-adsorbable gas (such as helium) is used, the skeleton incompressibility modulus may be deduced from bulk pressure and volumetric shrinkage strain measurements using Eq. 8. Figure 2 presents a typical result of the direct identification of K_s. An experimental value of K_s = (6 ± 1) GPa is then obtained, which is in good agreement with the latter one. Note that dynamic and static mechanical properties may differ for a lot of materials, so a perfect match is not expected here. However, for this material with a low rigidity, the difference between the static and the dynamic properties is relatively small and may stand within the measurement uncertainty. As discussed in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF], activated carbon is subjected to cycling effects and material compaction. The process to produce the activated carbon is composed of three phases: first the carbon is ground, then it is activated, and finally it is compacted to obtain a cylindrical sample. During the first cycle of gas adsorption, there is a competition between the grain compaction shrinkage and the adsorption-induced volumetric swelling, and a large hysteresis is observed because of the material compaction. This compaction is mostly irreversible and, after the first cycle, the second and the third cycles are reversible. Fig.
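The direct identification of K_s described above (Eq. 8 and Fig. 2) amounts to a linear fit of the volumetric shrinkage strain against the helium bulk pressure, ε = −P_b/K_s. The sketch below illustrates this fit on synthetic data points fabricated around K_s = 6 GPa (the measurements of Fig. 2 themselves are not reproduced here):

```python
# Direct K_s identification (Eq. 8): under a non-adsorbable gas,
# eps = -P_b / K_s, so -1/slope of a linear fit gives K_s.
# The data points below are synthetic, generated around K_s = 6 GPa.
P = [0.5e5, 1.0e5, 2.0e5, 3.0e5, 4.0e5, 5.0e5]  # Pa, helium bulk pressures
noise = [0.02, -0.01, 0.01, -0.02, 0.0, 0.01]    # +/- 2% synthetic scatter
eps = [-p / 6.0e9 * (1 + n) for p, n in zip(P, noise)]

# Least-squares slope of a line through the origin: sum(P*eps) / sum(P*P)
slope = sum(p * e for p, e in zip(P, eps)) / sum(p * p for p in P)
K_s = -1.0 / slope
print(K_s / 1e9)  # ~ 6 GPa
```

The slope of the strain-pressure line directly provides −1/K_s, which is how Fig. 2 is exploited.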
3 presents the results in terms of adsorption-induced volumetric strain obtained during the third cycle, when the activated carbon is fully compacted and the swelling strain fully reversible. The two other cycles are presented in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF]. However, the ultrasonic technique used to identify the apparent elastic properties was performed on the sample before compaction, and the skeleton elastic properties may differ after compaction. Therefore, a second direct K_s identification has been performed after the adsorption-induced swelling test and the value K_s = (7.0 ± 0.8) GPa is obtained. Assuming that the shear skeleton modulus is affected by the compaction in the same proportion as the incompressibility one, i.e. assuming that the Poisson's ratio is not affected by the compaction, the following skeleton moduli are identified and further used in the comparisons with experimental data:

K_s = (7.0 ± 0.8) GPa ,   G_s = (4.1 ± 0.4) GPa . (14)

Comparisons between experimental and model results

Fig. 3 also presents the results obtained with the double porosity adsorption-induced deformation model presented in part 1. All the parameters being identified in section 2.2, the volumetric strain induced by gas adsorption is estimated step by step, as well as the evolutions of the transport and adsorption porosities and of the poromechanical properties, without any fitted parameter. Fig. 3.b shows that the double porosity adsorption-induced deformation model presented in part 1 is capable of predicting the swelling induced by both CH4 and CO2 gas adsorption without any additional fitting parameter. The input parameters of the model are those collected in Table 1, the skeleton elastic moduli corrected as explained in the latter section, and the adsorption isotherms.
For this activated carbon, a swelling strain of ≈ 2% is recovered for a CO2 bulk pressure up to 46 bar, and a swelling strain of ≈ 1.5% is recovered for a CH4 bulk pressure up to 107 bar. Fig. 4 shows the same results in terms of excess adsorption quantities versus adsorption-induced swelling. One can note that the relationship between excess adsorbed quantities and resulting swelling is not linear. Moreover, the two evolutions for the two different gases are close together, showing that the volumetric swelling is directly linked to the excess adsorbed quantity. Fig. 5 shows that, for this challenging highly micro- and macro-porous activated carbon, a single porosity model, such as the one presented in [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF], greatly overestimates the swelling deformation induced by gas adsorption. The coupling appearing between the evolving adsorption porosity and the evolving transport porosity limits the macroscopic swelling of the material. This can only be captured with a double porosity model.

Evolution of the poromechanical properties under free swelling

The proposed double porosity model having been validated by experimental comparisons in the latter section, we study here the evolution of the poromechanical properties under free swelling. Fig. 6 presents the evolution of the confinement degree in the adsorption porosity under free swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively. At the early adsorption stage, the confinement degree is high (≥ 0.9) for both CO2 and CH4. This is due to the fact that, at the onset of adsorption, the interstitial fluid density is much higher than the bulk density (ρ_b << ρ_µ).
Figure 2: Activated carbon K_s direct identification based on Eq. 8, from helium bulk pressure and volumetric shrinkage strain measurements (the slope directly provides K_s).

Upon swelling, the confinement degree decreases for both CO2 and CH4. This may be due to two main reasons: firstly, the ratio of the bulk density to the interstitial fluid density increases; secondly, the adsorption porosity increases due to the activated carbon swelling, and therefore the confinement of the fluid decreases. In this way, the functional φ_µ/(1 − φ_µ − φ_M) in Eq. 12 increases even faster. Therefore, the total number of adsorbed gas moles increases faster than the excess number of moles and the confinement degree decreases, the CO2 interstitial fluid being more confined than the CH4 one. Fig. 7 presents the evolution of the poromechanical properties in terms of relative variations under free swelling. Fig. 7.a shows that all porosities increase with increasing bulk pressure under free swelling and, for this activated carbon, a relative variation of the total porosity of ≈ 2% is recovered for a CO2 bulk pressure up to 46 bar, and a relative variation of ≈ 1.5% is recovered for a CH4 bulk pressure up to 107 bar. Due to the increase of porosities, the incompressibility moduli evolve as well (Fig. 7.b).
Even if it may be counter-intuitive, Fig. 7.a shows that, under free swelling, the transport porosity does not decrease even though the adsorption porosity increases; it simply increases less than the adsorption porosity. Note that this cannot be generalised to other materials or thermodynamic conditions, because it results from the complex couplings appearing between the transport and the adsorption porosities in Eq. 12. For the conditions considered here, Fig. 8 shows that, whatever the bulk pressure, the different contributions of the terms T_1 to T_6 in Eq. 12 lead to a positive derivative of both porosities with respect to the bulk pressure, and therefore both porosities increase upon swelling (see Eq. 12 for the expressions of the terms T_1 to T_6). In the figures, δ_r X = (X − X0)/X0 denotes the relative variation of X.

with  n_tot_µ = n_ex + ρ_b V_φµ / M = n_ex + (m_s/M)(ρ_b/ρ_s) φ_µ/(1 − φ_µ − φ_M) . (15)

Fig. 9 presents the evolution of the poromechanical properties in terms of relative variations under constrained swelling. Fig. 9.a shows that the total porosity and the transport porosity now decrease, whereas the adsorption porosity still increases. Indeed, for the conditions considered here, Fig. 10 shows that, whatever the bulk pressure, the terms T_7 and T_8 in Eq. 15 lead to a positive derivative of the adsorption porosity with respect to the bulk pressure, whereas the terms T_9 and T_10 in Eq. 15 lead to a negative derivative of the transport porosity. Figs. 9.b to 9.d show the corresponding evolutions of the other poromechanical properties with increasing bulk pressure. Fig. 11 shows the evolution of the total mean stress under constrained swelling.
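The porous-volume term appearing in Eqs. 12 and 15 can be checked against the independently measured adsorption porous volume of Table 1:

```python
# Consistency check of V_phi_mu = (m_s / rho_s) * phi_mu / (1 - phi_mu - phi_M)
# (the porous-volume term of Eqs. 12 and 15) against Table 1.
m_s, rho_s = 4.137, 2.4        # g, g/cm^3 (Table 1)
phi_mu, phi_M = 0.322, 0.416   # porosities (section 2.2)

V_phi_mu = (m_s / rho_s) * phi_mu / (1.0 - phi_mu - phi_M)
print(V_phi_mu)  # ~ 2.12 cm^3, close to the measured 2.115 cm^3 (Table 1)
```

The ≈ 0.2% discrepancy is well within the stated measurement uncertainties, which supports the internal consistency of the confinement-degree expression.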
The volumetric strain being imposed equal to zero, the continuum is subjected to compressive total mean stresses.

Concluding remarks

• A new incremental poromechanical framework with varying porosity has been proposed, allowing the prediction of the swelling induced by adsorption. Within this framework, the adsorption-induced strains are incrementally estimated based on experimental adsorption isotherm measurements only. The evolution of the porosity and the evolutions of the poromechanical properties, such as the apparent incompressibility modulus, the apparent shear modulus, the Biot modulus and the Biot coefficient, are also predicted by the model.

• A double porosity model has been proposed, in which the adsorption porosity and the transport porosity are distinguished. These two scales of porosity are supposed to be well separated and a two-step homogenization process is used to incrementally estimate the evolution of the poromechanical properties, which couples the evolutions of both porosities.

• An existing custom-built experimental set-up has been used to test the relevance of this double porosity model. A challenging, highly micro- and macro-porous activated carbon has been chosen for this purpose. An adsorption porosity of 32.2 ± 0.2% and a transport porosity of 41.6 ± 0.2% have been characterized, as well as its apparent and skeleton elastic properties. In situ adsorption-induced swelling has been measured for pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively, and the corresponding model responses have been estimated. It has been shown that the double porosity model is capable of accurately predicting the swelling induced by both CH4 and CO2 gas adsorption without any fitting parameter. Conversely, it has been shown that a single porosity model greatly overestimates the swelling deformation induced by gas adsorption for this highly micro- and macro-porous activated carbon.
The coupling appearing between the evolving adsorption porosity and the evolving transport porosity limits the macroscopic swelling of the material. This can only be captured with a double porosity model.

• After validation, the double porosity model has been used to discuss the evolution of the poromechanical properties under free and constrained swelling. The case-study of constrained swelling consists here in assuming a global volumetric strain equal to zero. It has been shown that, for the considered material, all porosities increase with increasing bulk pressure under free swelling, whereas under constrained swelling the total porosity and the transport porosity decrease while the adsorption porosity still increases.

The reverse process determines the local skeleton properties (K_s, G_s) knowing the homogenized properties (K_m, G_m) and a given number of increments n:

(K_s, G_s) = R_n(K_m, G_m, φ) with:
K(0) = K_m , G(0) = G_m , Δφ = φ/n , φ(i) = Δφ/(1 − φ + iΔφ) ,
K(i) = H_K(φ(i), K(i−1), G(i−1)) , G(i) = H_G(φ(i), K(i−1), G(i−1)) ,
K_s = K(n) , G_s = G(n) ,

where:

H_K(φ, K′_s, G′_s) = K′_s + φ K′_s / (1 − (1 − φ)(K′_s/(K′_s + G′_s))) ,
H_G(φ, K′_s, G′_s) = G′_s + φ G′_s / (1 − (1 − φ)((K′_s + 2G′_s)/(2K′_s + 2G′_s))) . (A.2)

Bibliography

Figure 1: Schematic of a double porosity medium.

The interstitial fluid in the two porosities (φ_M, φ_µ) differs from the surrounding one (P_b, ρ_b), but thermodynamic equilibrium imposes that the three fluids are chemically balanced (equality of the chemical potentials µ_b, µ_M and µ_µ).
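The reverse iterative process R_n of Eq. A.2 can be sketched as below. This is an illustrative transcription of the scheme, not the authors' code; the number of increments n = 1000 is an arbitrary choice, and the two-step reversed homogenization of Eq. 6 is then applied to the measured apparent moduli.

```python
# Reverse iterative homogenization R_n (Eq. A.2, cylindrical porosity):
# recover the skeleton moduli (K_s, G_s) from the homogenized ones (K_m, G_m).
def H_K(phi, K, G):
    return K + phi * K / (1.0 - (1.0 - phi) * K / (K + G))

def H_G(phi, K, G):
    return G + phi * G / (1.0 - (1.0 - phi) * (K + 2.0 * G) / (2.0 * K + 2.0 * G))

def R_n(K_m, G_m, phi, n=1000):
    K, G = K_m, G_m
    dphi = phi / n
    for i in range(1, n + 1):
        phi_i = dphi / (1.0 - phi + i * dphi)   # incremental porosity (Eq. A.2)
        K, G = H_K(phi_i, K, G), H_G(phi_i, K, G)
    return K, G

# Two-step reversed homogenization of Eq. 6 with the measured apparent moduli:
K, G = 120.0e6, 75.0e6           # Pa (section 2.2)
K_s, G_s = R_n(*R_n(K, G, 0.416), 0.322)
print(K_s / 1e9, G_s / 1e9)      # skeleton moduli, in GPa
```

Removing porosity step by step necessarily stiffens the medium, so the recovered skeleton moduli are always larger than the apparent ones.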
Assuming that the Gibbs-Duhem equation (dP = ρdµ) still applies for both the surrounding fluid and the interstitial ones, a macroscopic relation between the interstitial pore pressures and the surrounding one may be derived, similarly to the relation initially proposed by [START_REF] Vermorel | Enhanced continuum poromechanics to account for adsorption induced swelling of saturated isotropic microporous materials[END_REF] and used in Perrier et al. (2015) for single porosity media.

Fig. 3 presents the results of these simultaneous measurements for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively. Full-field deformation maps and collected experimental data are reported in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF].

Figure 3: Simultaneous adsorption and induced swelling measurements for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively: (a) experimental excess adsorption isotherms; (b) comparison between experimental and modeling adsorption-induced swelling results.

Figure 4: Comparison between experimental and modeling results in terms of excess adsorption quantities versus adsorption-induced swelling for an activated carbon: (a) CO2 at T = 318.15 K; (b) CH4 at T = 303.15 K.

Figure 6: Evolution of the confinement degree in the adsorption porosity under free swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively.

Fig. 7.b shows that the incompressibility moduli decrease under free swelling, K decreasing faster than K_µ. Consequently, b_M is increasing upon free swelling, and b_µ evolves as shown in Fig. 7.c. The evolutions of the coupling poromechanical properties N_MM, N_µµ and N_µM are more difficult to anticipate, but Fig. 7.d shows that they all decrease upon swelling.
Figure 7: Evolution of the poromechanical properties in terms of relative variations under free swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively.

Figure 9: Evolution of the poromechanical properties in terms of relative variations under constrained swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively.

Case-study of constrained swelling

In this section, one case-study of constrained swelling is considered. The volumetric strain is then assumed to be equal to zero and Eq. 12 may be rewritten as Eq. 15.

Iterative process of homogenization

Following Smaoui-Barboura [START_REF] Barboura | Modélisation micromécanique du comportement de milieux poreux non linéaires : Applications aux argiles compactées[END_REF], an iterative homogenization process may be applied with the linear homogenization functions used for a cylindrical porosity (Eq. 5).
Acknowledgements

Financial support from the Région Aquitaine through the grant CEPAGE (20121105002), from the Conseil Départemental 64 through the grant CEPAGE2 (2015 0768), from the Institut Carnot ISIFoR and from the Université de Pau et des Pays de l'Adour through the grant Bonus Qualité Recherche is gratefully acknowledged. We also gratefully acknowledge Dr. Frédéric Plantier and Dr. Christelle Miqueu for their advice and our discussions, and Dr. Valier Poydenot for his help concerning the ultrasonic technique and the identification of the apparent properties. D. Grégoire and G. Pijaudier-Cabot are fellows of the Institut Universitaire de France.

Figure 5: Comparison between the adsorption-induced swelling results provided by the single and the double porosity models for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively. (1) Experiment [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF], (2) Double porosity model (this study), (3) Single porosity model [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF].
Considering a given cylindrical porosity φ, the local skeleton properties (K_s, G_s) and a given number of increments n, the global homogenized properties (K_m, G_m) are determined step by step using a scheme of the form K(i) = H_K(φ(i), K(i−1), G(i−1)), G(i) = H_G(φ(i), K(i−1), G(i−1)). (A.1) Moreover, the latter iterative process may be reversed to determine step by step the local skeleton properties (Eq. A.2). For a full description of the iterative processes for both spherical and cylindrical porosities, see [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF].
01756975
en
[ "info.info-cv" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756975/file/asprs_2017_range.pdf
P. Biasutti (email: [email protected]), J.-F. Aujol, M. Brédif, A. Bugeau

RANGE-IMAGE: INCORPORATING SENSOR TOPOLOGY FOR LiDAR POINT CLOUD PROCESSING

This paper proposes a novel methodology for LiDAR point cloud processing that takes advantage of the implicit topology of various LiDAR sensors to derive 2D images from the point cloud while bringing spatial structure to each point. The interest of such a methodology is then demonstrated by addressing the problems of segmentation and disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle those problems directly in the 3D space. This work promotes an alternative approach by using this image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. Using the image derived from the sensor data by exploiting the sensor topology, a semi-automatic segmentation procedure based on depth histograms is presented. Then, a variational image inpainting technique is introduced to reconstruct the areas that are occluded by objects. Experiments and validation on real data prove the effectiveness of this methodology both in terms of accuracy and speed.

INTRODUCTION

Over the past decade, street-based Mobile Mapping Systems (MMS) have encountered a large success, as the onboard 3D sensors are able to map full urban environments with a very high accuracy.
These systems are now widely used for various applications, from urban surveying to city modeling [START_REF] Serna | Urban accessibility diagnosis from mobile laser scanning data[END_REF][START_REF] Hervieu | Road marking extraction using a Model&Data-driven RJ-MCMC[END_REF][START_REF] El-Halawany | Detection of road curb from mobile terrestrial laser scanner point cloud[END_REF][START_REF] Hervieu | Semi-automatic road/pavement modeling using mobile laser scanning[END_REF][START_REF] Goulette | An integrated onboard laser range sensing system for on-the-way city and road modelling[END_REF]. Several systems have been proposed in order to perform these acquisitions. They mostly consist of optical cameras, a 3D LiDAR sensor and a GPS receiver combined with an Inertial Measurement Unit (IMU), mounted on a vehicle for mobility purposes [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF][START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF]. They provide multi-modal data that can be merged in several ways, such as LiDAR point clouds colored by optical images or LiDAR depth maps aligned with optical images. Although these systems lead to very complete 3D mappings of urban scenes by capturing optical and 3D details (pavements, walls, trees, etc.), providing billions of 3D points and RGB pixels per hour of acquisition, they often require further processing to suit their ultimate usage. For example, MMS tend to acquire mobile objects that are not persistent in the scene. This often happens in urban environments with objects such as cars, pedestrians, traffic cones, etc. As LiDAR sensors cannot penetrate opaque objects, these mobile objects cast shadows behind them where no point has been acquired (Figure 1, left). Therefore, merging optical data with the point cloud can be ambiguous, as the point cloud might represent objects that are not present in the optical image.
Moreover, these shadows are also largely visible when the point cloud is not viewed from the original acquisition point of view. This might end up being distracting and confusing for visualization. Thus, the segmentation of mobile objects and the reconstruction of their background remain strategic issues in order to improve the understanding of urban 3D scans. We argue that working on simplified representations of the point cloud enables specific problems such as disocclusion to be solved not only using traditional 3D techniques but also using techniques brought by other communities (image processing in our case). Exploiting the sensor topology also brings spatial structure into the point cloud that can be used for other applications such as segmentation, remeshing, colorization or registration. The main contribution of this paper is a novel methodology for point cloud processing that exploits the implicit topology of various LiDAR sensors to infer a simplified representation of the LiDAR point cloud while bringing spatial structure between all points. The utility of such a methodology is here demonstrated by two applications. First, a fast segmentation technique for dense and sparse point clouds to extract full objects from the scene is presented (Figure 1, center). Then, we introduce a fast and efficient variational method for the disocclusion of a point cloud using the range image representation while taking advantage of a horizontal prior, without any knowledge of the color or texture of the represented objects (Figure 1, right). This paper is an extension of [START_REF] Biasutti | Disocclusion of 3D LiDAR point clouds using range images[END_REF] with improved technical details of the methodology as well as a complete validation of the proposed applications and a discussion about its limitations.
The paper is organized as follows: after a review of the state of the art for the two application scenarios (Section 2), we detail how the topology of various sensors can be exploited to turn a regular LiDAR point cloud into a range image (Section 3). In Section 4, a point cloud segmentation model using range images is introduced with corresponding results and a validation on several datasets. Then, a disocclusion method for point clouds is presented in Section 5 as well as results and validation on various datasets. Finally, conclusions are drawn and potential future work is identified in Section 6.

Related Works

The growing interest in MMS over the past decade has led to many works and contributions for solving problems that could be tackled using range images. In this part, we present a state of the art on both segmentation and disocclusion.

Point cloud segmentation

The problem of point cloud segmentation has been extensively addressed in the past years. Three types of methods have emerged: geometry-based techniques, statistical techniques and techniques based on simplified representations of the point cloud. Geometry-based segmentation. The first well-known method in this category is region growing, where the point cloud is segmented into various geometric shapes based on the neighboring area of each point [START_REF] Huang | Automatic data segmentation for geometric feature extraction from unorganized 3D coordinate points[END_REF]. Later, techniques that aim at fitting primitives (cones, spheres, planes, cubes ...) in the point cloud using RANSAC [START_REF] Schnabel | RANSAC based out-of-core point-cloud shape detection for city-modeling[END_REF] have been proposed. Others look for smooth surfaces [START_REF] Rabbani | Segmentation of point clouds using smoothness constraint[END_REF]. Although these methods do not need any prior on the number of objects, they often suffer from over-segmenting the scene, resulting in objects split into several parts.
Semantic segmentation. The methods in this category analyze point cloud characteristics [START_REF] Demantke | Dimensionality based scale selection in 3D LiDAR point clouds[END_REF][START_REF] Weinmann | Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers[END_REF][START_REF] Landrieu | Comparison of belief propagation and graph-cut approaches for contextual classification of 3D LiDAR point cloud data[END_REF]. They examine the geometric neighborhood of each point in order to perform a point-wise classification, possibly with spatial regularisation, which, in turn, yields a semantic segmentation. This leads to a good separation of points belonging to static and mobile objects, but not to the distinction between different objects of the same class. Simplified model for segmentation. MMS LiDAR point clouds typically represent massive amounts of unorganized data that are difficult to handle. Different segmentation approaches based on a simplified representation of the point cloud have been proposed. [START_REF] Papon | Voxel cloud connectivity segmentationsupervoxels for point clouds[END_REF] propose a method in which the point cloud is first turned into a set of voxels which are then merged using a variant of the SLIC algorithm for super-pixels in 2D images [START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF]. This representation leads to a fast segmentation but it might fail when the scales of the objects in the scene are too different. [START_REF] Gehrung | An approach to extract moving objects from MLS data using a volumetric background representation[END_REF] propose to extract moving objects from MLS data by using a probabilistic volumetric representation of the MLS data in order to cluster points into mobile and static objects. However, this technique can only be used with 3D sensors.
Another simplified model of the point cloud is presented by [START_REF] Zhu | Segmentation and classification of range image from an intelligent vehicle in urban environment[END_REF]. The authors take advantage of the implicit topology of the sensor to simplify the point cloud in order to segment it before performing classification. The segmentation is done through a graph-based method, as the notion of neighborhood is easily computable on a 2D image. Although the provided segmentation algorithm is fast, it suffers from the same issues as geometry-based algorithms, such as over-segmentation or incoherent segmentation. Finally, an approach for urban object segmentation using elevation images is proposed in [START_REF] Serna | Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning[END_REF]. There, the point cloud is simplified by projecting its statistics onto a horizontal grid. Advanced morphological operators are then applied on the horizontal grid and objects are segmented using a watershed approach. Although this method provides good results, the overall precision of the segmentation is limited by the resolution of the projection grid, which leads to the occurrence of artifacts at object borders. Moreover, none of those categories of segmentation techniques is able to efficiently handle both dense and sparse LiDAR point clouds, i.e. point clouds acquired with high or low sampling rates compared to the real-world feature sizes (e.g. macroscopic objects such as cars, pedestrians, etc.). For example, one sensor turn in the KITTI dataset [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF] corresponds to 10^5 points (sparse) whereas, for a scene of similar size in the Stereopolis-II dataset [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF], the scene contains more than 4 × 10^6 points (dense).
In this paper, we present a novel simplified model for segmentation based on histograms of depth in range images, leveraging the grid-like sensor topology without suffering from the accuracy loss that is often caused by projection/rasterization.

Disocclusion

Disocclusion of a scene has only scarcely been investigated for 3D point clouds [START_REF] Sharf | Context-based surface completion[END_REF][START_REF] Park | Shape and appearance repair for incomplete point surfaces[END_REF][START_REF] Becker | LiDAR inpainting from a single image[END_REF]. These methods generally work on complete point clouds (with homogeneous sampling) rather than LiDAR point clouds. This task, also referred to as inpainting, has been much more studied in the image processing community. Over the past decades, various approaches have emerged to solve the problem in different manners. Patch-based methods such as the one proposed by Criminisi et al. (2004) (and more recently [START_REF] Lorenzi | Inpainting strategies for reconstruction of missing data in VHR images[END_REF] and Buyssens et al. (2015b)) have proven their strengths. They have been extended to RGB-D images (Buyssens et al., 2015a) and to LiDAR point clouds [START_REF] Doria | Filling large holes in LiDAR data by inpainting depth gradients[END_REF] by considering an implicit topology in the point cloud. Variational approaches represent another type of inpainting algorithms [START_REF] Weickert | Anisotropic diffusion in image processing[END_REF][START_REF] Bertalmio | Image inpainting[END_REF][START_REF] Bredies | Total generalized variation[END_REF][START_REF] Chambolle | A first-order primal-dual algorithm for convex problems with applications to imaging[END_REF].
They have been extended to RGB-D images by taking advantage of the bi-modality of the data [START_REF] Ferstl | Image guided depth upsampling using anisotropic total generalized variation[END_REF][START_REF] Bevilacqua | Joint inpainting of depth and reflectance with visibility estimation[END_REF]. Even if the results of the disocclusion are quite satisfying, these models require the point cloud to carry color information in addition to the 3D data. In this work, we introduce an improvement to a variational disocclusion technique by taking advantage of a horizontal prior.

Range images derived from the sensor topology

In this paper, we demonstrate that a simplified model of the point cloud can be directly derived from it using the intrinsic topology of the sensing pattern during acquisition. This section introduces this sensor topology and shows how it can be exploited on various kinds of sensors. Examples of its use are presented.

Sensor topology

Most modern LiDAR sensors offer an intrinsic 2D topology in raw acquisitions. However, this feature is rarely considered in recent works. Namely, LiDAR points may obviously be ordered along scanlines, yielding the first dimension of the sensor topology, linking each LiDAR pulse to the immediately preceding and succeeding pulses within the same scanline. For most LiDAR devices, one can also order the consecutive scanlines. This amounts to considering a second dimension of the sensor topology across the scanlines, as can be seen in Figure 2.

From sensor topology to range image

The sensor topology often varies with the type of LiDAR sensor that is being used.
2D LiDAR sensors (i.e., featuring a single simultaneous scanline acquisition) such as the one used in [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF] generally send an almost constant number H of pulses per scanline (or per turn for 360 degree 2D LiDARs), where each pulse is emitted at a certain angle value θ. Therefore, any measurement of the sensor may be organized in an image of size W × H, where W is the number of consecutive scanlines and thus a temporal dimension. This is illustrated in Figure 3, in which one can see how the 2D image is spanned by the sensor topology. In this work, such images are built using only the range measurement as pixel intensity, and are later referred to as range images. Note that these range images differ from typical range images (Kinect, RGB-D), as the origin of acquisition is not the same for each pixel and the 3D directions of the pixels are not regularly spaced along the image, but warped by the orientation changes of the sensor trajectory. 3D LiDAR sensors are based on multiple simultaneous scanline acquisitions (e.g. H = 64 fibers), such as in the MMS proposed in [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF]. Again, each scanline contains the same number of points and each scanline may be stacked horizontally to form the same type of structure, as illustrated in Figure 4. Note that Figures 3 and 4 are simplified for better understanding, but realistic cases can be more chaotic, as discussed later in this section. Whereas LiDAR pulses are emitted somewhat regularly, many pulses yield no range measurement due, for instance, to reflective surfaces, absorption or absence of target objects (e.g. in the sky direction), or to measurements being ignored whenever they are too uncertain.
Therefore, the sensor topology is only a relevant approximation for emitted pulses but not for echo returns, such that the range image is sparse, with undefined values where the sensor measured no echoes (or where further processing was performed on the acquisition, leading to the removal of points with too uncertain a measurement). This is illustrated in Figure 5.b, in which pulses with no echoes appear in dark. Note that considering multi-echo datasets as a multi-layer depth image is beyond the scope of this paper, which only considers first returns. This 2D sensor topology encodes an implicit neighborhood between LiDAR measurement pulses. Whereas the implicit topology of pixels in optical images is supported by a regular geometry of rays (shared origin and regular grid of directions if geometric distortion is neglected), the proposed 2D sensor topology for LiDAR point clouds is supported by the trajectory-warped geometry of 3D rays. However, it readily provides, with minimal effort, an approximation of the immediate 3D point neighborhoods, especially if the sensor moves or turns slowly compared to its sensing rate. We argue, however, that this approximation is sufficient for most purposes, as it has the added advantage of providing pulse neighborhoods that are reasonably local both in terms of space and time, thus being robust to misregistrations, and very efficient to handle (constant-time access to neighbors). Moreover, as LiDAR sensor designs evolve to higher sampling rates within and/or across scanlines, the sensor topology will better approximate spatio-temporal neighborhoods, even in the case of mobile acquisitions. We argue that most raw LiDAR datasets contain all the information (scanline ordering, pulses with no echo, number of points per turn...) needed to access a well-defined implicit sensor topology.
However, it sometimes occurs that the dataset underwent further processing (points were reordered or filtered, or pulses with no return were discarded) or that the sensor does not acquire neighbouring points consecutively. The sensor topology may then only be approximated using auxiliary point attributes (time, θ, fiber id...) and guesses about acquisition settings (e.g. guessing approximate ∆time or ∆θ values between successive pulse emissions). Using this information, one can recreate the range map by stacking points even if some points were discarded. Defining a grid-like topology is a good approximation if the number of pulses per scanline/per turn is close to an integer constant, with relatively stable rotation offsets between pulses.

Interest and applications

The use of range images as the simplified representation of a point cloud directly brings spatial structure to the point cloud. Therefore, retrieving the neighbors of a point, which was formerly done using advanced data structures [START_REF] Muja | Scalable nearest neighbor algorithms for high dimensional data[END_REF], is now a trivial operation, given without any ambiguity. This has proved very useful in applications such as remeshing, since faces can be directly associated to the grid structure of the range image. As shown in this paper, considering a point cloud as a range image supported by its implicit sensor topology enables the adaptation of many existing image processing approaches to LiDAR point cloud processing (e.g. segmentation, disocclusion). Moreover, when optical data is acquired along with LiDAR point clouds, the range image can be used to improve point cloud colorization and texture registration on the point cloud, as the silhouettes present in the range image are likely to be aligned with the gradients of the optical images.
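The scanline-stacking construction described in this section can be sketched as follows. This is a minimal sketch, assuming each pulse carries a scanline index and an in-scanline position (attributes that may need to be recovered from time or θ as discussed above); all names are illustrative:

```python
import numpy as np

def build_range_image(ranges, scanline_ids, pulse_ids, H):
    """Stack scanlines into an H x W range image.

    ranges:       range of each first-return echo (one entry per pulse)
    scanline_ids: scanline (or turn) index of each pulse, 0 .. W-1
    pulse_ids:    position of each pulse within its scanline, 0 .. H-1
    H:            (roughly constant) number of pulses per scanline
    """
    W = int(scanline_ids.max()) + 1
    img = np.full((H, W), np.nan)        # undefined where no echo was measured
    img[pulse_ids, scanline_ids] = ranges  # constant-time scatter into the grid
    return img
```

Pulses with no echo simply remain undefined (NaN), yielding a sparse range image such as the one in Figure 5.b.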
In the following sections, the LiDAR measurements, equipped with this implicit 2D topology, are denoted as the sparse range image $u_R$.

Application to point cloud segmentation

In this section, a simple yet efficient segmentation technique that takes advantage of the range image will be introduced. Results will be presented and a quantitative analysis will be performed to validate the model.

Range Histogram Segmentation technique

We now propose a segmentation technique based on range histograms. For the sake of simplicity, we assume that the ground is relatively flat and we remove ground points, which are identified by plane fitting. Instead of segmenting the whole range image $u_R$ directly, we first split this image into $S$ sub-windows $u_R^s$, $s = 1 \dots S$, of size $W_s \times H$ along the horizontal axis, to prevent each sub-window from representing several objects at the same range. For each $u_R^s$, a depth histogram $h_s$ of $B$ bins is built. This histogram is automatically segmented into $C_s$ classes using the a-contrario technique presented in [START_REF] Delon | A nonparametric approach for histogram segmentation[END_REF]. This technique presents the advantage of segmenting a 1D histogram without any prior assumption, e.g. on the underlying density function or the number of objects. Moreover, it aims at segmenting the histogram following an accurate definition of an admissible segmentation, preventing over- and under-segmentation. An example of a segmented histogram is given in Figure 6. Once the histograms of successive sub-images have been segmented, we merge together the corresponding classes by checking the distance between their centroids in order to obtain the final segmentation labels. Let us define the centroid $C_s^i$ of the $i$-th class $\mathcal{C}_s^i$ in the histogram $h_s$ of the sub-image $u_R^s$ as follows:

$$C_s^i = \frac{\sum_{b \in \mathcal{C}_s^i} b \times h_s(b)}{\sum_{b \in \mathcal{C}_s^i} h_s(b)} \qquad (1)$$

where $b$ ranges over all bins belonging to class $\mathcal{C}_s^i$.
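The per-window histogram and centroid computation of Equation (1) can be sketched as follows. This is a minimal illustration, assuming the class bin-ranges are already given by the histogram segmentation step (the a-contrario method of Delon et al. is not re-implemented here); names are illustrative:

```python
import numpy as np

def window_centroids(depths, B, classes):
    """Per-class centroids of a depth histogram (Eq. 1).

    depths:  depth values of one sub-window (ground points removed beforehand)
    B:       number of histogram bins
    classes: bin-index ranges [(lo, hi), ...] produced by a 1D histogram
             segmentation step
    """
    h, _ = np.histogram(depths, bins=B)
    centroids = []
    for lo, hi in classes:
        b = np.arange(lo, hi)                 # bin indices of this class
        mass = h[lo:hi].sum()
        centroids.append((b * h[lo:hi]).sum() / mass if mass > 0 else np.nan)
    return centroids
```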
The distance between two classes $C_s^i$ and $C_r^j$ of two consecutive windows $r$ and $s$ can be defined as follows:

$$d(C_s^i, C_r^j) = |C_s^i - C_r^j| \qquad (2)$$

Finally, we can set a threshold $\tau$ such that if $d(C_s^i, C_r^j) \le \tau$, classes $C_s^i$ and $C_r^j$ should be merged (i.e. they now share the same label). If two classes of the same window are eligible to be merged with the class of another window, then only the one with the lower depth should be merged. Results of this segmentation procedure can be found in the next subsection. The choice of $W_s$, $B$ and $\tau$ mostly depends on the type of data that is being treated (sparse or dense). For sparse point clouds (a few thousand points per turn), $B$ has to remain small (e.g. 50), whereas for dense point clouds (> 10^5 points per turn) this value can be increased (e.g. 200). In practice, we found that good segmentations may be obtained on various kinds of data by setting $W_s = 0.5 \times B$ and $\tau = 0.2 \times B$. Note that the windows are not required to overlap in most cases, but for very sparse point clouds an overlap of 10% is enough to achieve good segmentation. For example, in our experiments on the KITTI dataset [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF], for range images of size 2215 × 64px, $W_s = 50$, $B = 100$, $\tau = 20$ with no overlap.

Results & Analysis

Figure 7 shows two examples of segmentations obtained using our method on different point clouds from the KITTI dataset [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF]. Each object, of different scale, is correctly distinguished from all others as an individual entity. Moreover, both results appear to be visually plausible. Apart from the visual inspection, we also performed a quantitative analysis on the IQmulus dataset [START_REF] Vallet | TerraMobilita/IQmulus urban point cloud analysis benchmark[END_REF].
The IQmulus dataset consists of a manually annotated point cloud of 12 million points in which points are clustered into several classes corresponding to typical urban entities (cars, walls, pedestrians, etc.). Our aim is to compare the quality of our segmentation on several objects to the ground truth provided by this dataset. First, the point cloud is segmented using our technique, with 100px-wide windows, a 10px overlap and a merging threshold set to 50. After that, we manually select the labels that correspond to the wanted objects (hereafter: cars). We then compare the result of the segmentation to the ground truth in the same area, and compute the Jaccard index (Intersection over Union) between our result and the ground truth. Figure 8 presents the result of such a comparison. The overall score shows that the segmentation matches 97.09% of the ground truth, for a total of 59021 points. Although the result is very satisfying, our result differs in some ways from the ground truth. Indeed, in the first zoom of Figure 8, one can see that our model better succeeds in catching the points of the cars that are close to the ground (we remind here that the ground truth on IQmulus was manually labelled and thus subject to errors). In the second zoomed-in part, one can see that points belonging to the windows of the car were not correctly retrieved by our model. This is due to the fact that the measure in areas where the beam was highly deviated (i.e. beams that were not reflected in the same direction as the one they were emitted along) is not reliable, as the range estimation is not realistic. Therefore, our model fails in areas where the estimated 3D point is not close to the actual 3D surface. Note that a similar case appears for the rear-view mirror (Figure 8, on the left), which is made of a specular material that leads to bad measurements. In some extreme cases, the segmentation is not able to separate objects that are too close from the sensor's point of view.
Figure 9.a shows a result of the segmentation in a scene where two cars are segmented with the same label (symbolised by the same color). In order to better distinguish the different objects, one can simply compute the connected components of the points with respect to their 3D neighborhood (which can be computed using K-NN, for example). Figure 9.b shows the result of such post-processing on the same two cars. We can notice how both cars are distinguished from one another.

Application to disocclusion

In this section, we show that the problem of disocclusion in a 3D point cloud can be addressed using basic image inpainting techniques.

Range map disocclusion technique

The segmentation technique introduced above provides labels that can be manually selected in order to build masks. As mentioned in the beginning, we propose a variational approach to the problem of disocclusion of the point cloud that leverages its range image representation. By considering the range image representation of the point cloud rather than the point cloud itself, the problem of disocclusion can be reduced to the estimation of a set of 1D ranges instead of a set of 3D points, where each range is associated with the ray direction of the pulse. The Gaussian diffusion algorithm provides a very simple way of disoccluding objects in 2D images by solving partial differential equations. This technique is defined as follows:

$$\begin{cases} \dfrac{\partial u}{\partial t} - \Delta u = 0 & \text{in } (0, T) \times \Omega \\ u(t = 0, x, y) = u_R(x, y) & \text{in } \Omega \end{cases} \qquad (3)$$

where $u$ is an image defined on $\Omega$, $t$ is the time variable and $\Delta$ the Laplacian operator. As the diffusion is performed in every direction, the result of this algorithm is often very smooth. Therefore, the result in 3D lacks coherence, as shown in Figure 10.b. In this work, we show that the structures that require disocclusion are likely to evolve smoothly along the $x_W$ and $y_W$ axes of the real world, as defined in Figure 11.a.
Therefore, we set $\eta$, for each pixel, to be a unitary vector orthogonal to the projection of $z_W$ in the $u_R$ range image (Figure 11.b). This vector defines the direction in which the diffusion should be done to respect this prior. Note that most MLS systems provide georeferenced coordinates of each point that can be used to define $\eta$. For example, using a 2D LiDAR sensor that is orthogonal to the path of the vehicle, one can define $\eta$ from the projection of the pitch angle of the acquisition vehicle. We aim at extending the level lines of $u$ along $\eta$. This can be expressed as $\langle \nabla u, \eta \rangle = 0$. Therefore, we define the energy $F(u) = \frac{1}{2} \langle \nabla u, \eta \rangle^2$. The disocclusion is then computed as a solution of the minimization problem $\inf_u F(u)$. The gradient of this energy is given by $\nabla F(u) = -\langle (\nabla^2 u)\,\eta, \eta \rangle = -u_{\eta\eta}$, where $u_{\eta\eta}$ stands for the second-order derivative of $u$ with respect to $\eta$ and $\nabla^2 u$ for the Hessian matrix. The minimization of $F$ can be done by gradient descent. If we cast it into a continuous framework, we end up with the following equation to solve our disocclusion problem:

$$\begin{cases} \dfrac{\partial u}{\partial t} - u_{\eta\eta} = 0 & \text{in } (0, T) \times \Omega \\ u(t = 0, x, y) = u_R(x, y) & \text{in } \Omega \end{cases} \qquad (4)$$

using the notations introduced earlier. We recall that $\Delta u = u_{\eta\eta} + u_{\eta^T \eta^T}$, where $\eta^T$ stands for a unitary vector orthogonal to $\eta$. Thus, Equation (4) can be seen as an adaptation of the Gaussian diffusion equation (3) with respect to the diffusion prior in the direction $\eta$. Figure 10 shows a comparison between the original Gaussian diffusion algorithm and our modification. The Gaussian diffusion leads to an over-smoothing of the scene, creating an aberrant surface, whereas our modification provides a result that is more plausible. The equation proposed in (4) can be solved iteratively. The number of iterations simply depends on the size of the area that needs to be filled in.

Results & Analysis

In this part, the results of the segmentation of various objects and the disocclusion of their background are detailed.
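Before turning to the results, the iterative resolution of Equation (4) can be sketched as follows. This is a minimal explicit scheme, assuming $\eta$ is given as a per-pixel unit vector field and the masked pixels hold an initial guess; the neighbour lookup along $\eta$, the step size and the function names are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def directional_diffusion(u, mask, eta, n_iter=1500, dt=0.2):
    """Explicit scheme for Eq. (4): diffuse u along eta inside the mask.

    u:    range image; masked pixels hold an initial guess (e.g. zeros)
    mask: boolean image, True where the range must be reconstructed
    eta:  (2, H, W) array, per-pixel unit direction (rows, cols) of diffusion
    """
    u = u.copy()
    H, W = u.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # neighbour coordinates one step forward / backward along eta
    yf = np.clip(np.round(ys + eta[0]).astype(int), 0, H - 1)
    xf = np.clip(np.round(xs + eta[1]).astype(int), 0, W - 1)
    yb = np.clip(np.round(ys - eta[0]).astype(int), 0, H - 1)
    xb = np.clip(np.round(xs - eta[1]).astype(int), 0, W - 1)
    for _ in range(n_iter):
        u_etaeta = u[yf, xf] - 2.0 * u + u[yb, xb]  # second derivative along eta
        u[mask] += dt * u_etaeta[mask]              # update occluded pixels only
    return u
```

With a horizontal $\eta$, a masked pixel converges to the average of its left and right neighbours, extending the level lines as intended.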
Sparse point cloud. A first result is shown in Figure 12. This result is obtained on a sparse point cloud (≈ 10^5 pts) of the KITTI database. A pedestrian is segmented out of the scene using our proposed segmentation technique (with the parameters introduced in Section 4.1) and a manual selection of the corresponding label. This is used as a mask for the disocclusion of its background using our modified variational technique. Figure 12.a shows the original range image. In Figure 12.b, the dark region corresponds to the result of the segmentation step for the pedestrian. For practical purposes, a very small dilation is applied to the mask (radius of 2px in the sensor topology) to ensure that no outlier points (near the occluder's silhouette with low accuracy, or on the occluder itself) bias the reconstruction. Finally, Figure 12.c shows the range image after the reconstruction. We can see that the disocclusion performs very well, as the pedestrian has completely disappeared and the result is visually plausible in the range image. Notice how the implicit sensor topology of the range image has allowed us here to use a standard 2D image processing technique from mathematical morphology to filter mislabelled and inaccurate points near silhouettes. In this scene, η has a direction that is very close to the x axis of the range image, and the 3D point cloud is acquired using a 3D LiDAR sensor. Therefore, the coherence of the reconstruction can be checked by looking at how the acquisition lines are connected. Figure 13 shows the reconstruction of the same scene in three dimensions. This reconstruction simply consists of projecting the depth of each pixel along the axis formed by the corresponding point and the sensor origin. We can see that the acquisition lines are properly retrieved after removing the pedestrian. This result was generated in 4.9 seconds using Matlab on a 2.7GHz processor. Note that a similar analysis can be done on the results presented in Figure 1.
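The 3D reconstruction used above, projecting each pixel's depth along the ray joining the sensor origin to the corresponding point, can be sketched as follows. This assumes per-pixel ray origins and unit directions are stored alongside the range image (illustrative names; the actual storage layout depends on the MMS):

```python
import numpy as np

def range_image_to_points(ranges, origins, directions):
    """Back-project a range image to 3D: point = origin + range * direction.

    ranges:     (H, W) range image (NaN where no echo)
    origins:    (H, W, 3) per-pixel sensor origin (varies along the trajectory)
    directions: (H, W, 3) per-pixel unit ray direction
    """
    pts = origins + ranges[..., None] * directions
    return pts[~np.isnan(ranges)]  # drop pulses with no measurement
```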
Dense point cloud. In this work, we aim at presenting a model that performs well on both sparse and dense data. Figure 14 shows a result of the disocclusion of a car in a dense point cloud. This point cloud was acquired using the Stereopolis-II system [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF] and contains over 4.9 million points. In Figure 14.a, the original point cloud is displayed with a color based on the reflectance of the points, for a better understanding of the scene. Figure 14.b highlights the segmentation of the car using our model (with the same parameters as in Section 4.2), dilated to prevent aberrant points. Finally, Figure 14.c depicts the result of the disocclusion of the car using our method. We can note that the car is perfectly removed from the scene. It is replaced by the ground, which could not be measured during the acquisition. Although the reconstruction is satisfying, some gaps are left in the point cloud. Indeed, in the data used for this example, pulses returned with large deviation values were discarded. Therefore, the windows and the roof of the car are not present in the point cloud, neither before nor after the reconstruction, as no data is available. We could have added these no-return pulses to the inpainting mask in order to reconstruct these holes as well. Quantitative analysis. To conclude this section, we perform a quantitative analysis of our disocclusion model: areas are manually removed from various point clouds and then reconstructed using our model. Therefore, the original point clouds can serve as ground truth. Note that the areas are removed while taking care that no objects are present in those locations. Indeed, this test aims at showing how the disocclusion step behaves when reconstructing the backgrounds of objects. The size of the removed areas corresponds to an approximation of a pedestrian's size at 8 meters from the sensor in the range image (20 × 20px).
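The error metric used in this test, the mean absolute error between the ground-truth and reconstructed range images, can be computed as follows (a minimal sketch; the optional validity mask is an illustrative convenience for restricting the comparison to pixels with echoes):

```python
import numpy as np

def mean_absolute_error(u1, u2, valid=None):
    """MAE between two range images, optionally restricted to valid pixels."""
    diff = np.abs(u1 - u2)
    if valid is not None:
        diff = diff[valid]
    return float(diff.mean())
```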
The test was done on 20 point clouds in which an area was manually removed and then reconstructed. After that, we computed the MAE (Mean Absolute Error) between the ground truth and the reconstruction (where the occlusion was simulated) using both the Gaussian disocclusion and our model. We recall that the MAE is expressed as follows:

$$\mathrm{MAE}(u_1, u_2) = \frac{1}{N} \sum_{i,j \in \Omega} |u_1(i,j) - u_2(i,j)| \qquad (5)$$

where $u_1$, $u_2$ are images defined on $\Omega$ with $N$ pixels, each pixel intensity representing a depth value. Table 1 sums up the results of our experiment. We can note that our method provides a great improvement compared to the Gaussian disocclusion, with an average MAE lower than 3cm. These results are obtained on scenes where objects are located from 12 to 25 meters away from the sensor. The result obtained using our method is very close to the sensor accuracy stated by the manufacturer (about 2cm). Overlapping objects. Although the proposed disocclusion method performs well in realistic scenarios, as demonstrated above, in some specific contexts the reconstruction quality can be debatable. Indeed, when two small objects (pedestrians, poles, cars, etc.) overlap in front of the 3D sensor (i.e. one object is in front of the other), the disocclusion of the closest object may not fully recover the farthest object. Figure 16.a shows an example of such a scenario where the goal is to remove the cyclist (highlighted in green). In this case, a pole (Figure 16.a, in orange) is situated between the cyclist and the background.

Conclusion

In this paper, we have proposed a novel methodology for LiDAR point cloud processing that relies on the implicit topology brought by most recent LiDAR sensors. Considering the range image derived from the sensor topology has enabled a simplified formulation of the problem, from having to determine an unknown number of 3D points to estimating only the 1D range in the ray directions of a fixed set of range image pixels.
Beyond simplifying drastically the search space, it also provides directly a reasonable sampling pattern for the reconstructed point set. Moreover, it also directly provides a robust estimation of the neighborhood of each point according to the acquisition, while improving the computational time and the memory usage. To highlight the relevance of this methodology, we have proposed novel approaches for the segmentation and the disocclusion of objects in 3D point clouds acquired using MMS. These models take advantage of range images. We have also proposed an improvement of a classical imaging technique that takes the nature of the point cloud into account (horizontality prior on the 3D embedding), leading to better results. The segmentation step can be done online any time a new window is acquired, leading to great speed improvement, constant memory requirements and the possibility of online processing during the acquisition. Moreover, our model is designed to work semi-automatically with very few parameters in reasonable computational time. We have validated both the segmentation and the disocclusion methods by visual inspection as well as quantitative analysis against ground truth, and we have proved their effectiveness in terms of accuracy. In the future, we will focus on extending the methodology to other point cloud processing tasks such as LiDAR point cloud colorization / registration using range images and optical images through variational models. Acknowledgement J-F. Aujol is a member of Institut Universitaire de France. This work was funded by the ANR GOTMI (ANR-16-CE33-0010-01) grant. We would like to thank the anonymous reviewer for his/her useful comments. Figure 1: Result of the segmentation and the disocclusion of a pedestrian in a point cloud using range images. (left) original point cloud, (center) segmentation using range image, (right) disocclusion using range image.
The pedestrian is correctly segmented and its background is then reconstructed in a plausible way.
Figure 2: Example of the intrinsic topology of a 2D LiDAR sensor built on a plane.
Figure 5: Example of a point cloud from the KITTI database (Geiger et al., 2013) (a) turned into a range image (b). Note that the dark area in (b) corresponds to pulses with no returns.
Figure 6: Result of the histogram segmentation using the approach of Delon et al. (2007). (a) segmented histogram (bins of 50cm), (b) result in the range image using the same colors. We can see how well the segmentation follows the different modes of the histogram.
Figure 7: Example of point cloud segmentation using our model on various scenes. We can note how each label strictly corresponds to a single object (pedestrian, poles, walls).
Figure 8: Quantitative analysis of the segmentation of cars. Our segmentation result only slightly differs from the ground truth in areas close to the ground or for points that were largely deviated such as points through windows.
Figure 9: Result of the segmentation of a point cloud where two objects end up with the same label (a), and the labeling after considering the connected components (b).
Figure 10: Comparison between disocclusion algorithms. (a) is the original point cloud (white points belong to the object to be disoccluded), (b) the result after Gaussian diffusion and (c) the result with our proposed algorithm (1500 iterations). Note that the Gaussian diffusion oversmoothes the background of the object whereas our proposed model respects the coherence of the scene.
Figure 11: (a) is the definition of the different frames between the LiDAR sensor (xL, yL, zL) and the real world (xW, yW, zW), (b) is the definition and the visualization of η.
Figure 12: Result of disocclusion on a pedestrian on the KITTI database (Geiger et al., 2013).
(a) is the original range image, (b) the segmented pedestrian (dark), (c) the final disocclusion. Depth scale is given in meters. After disocclusion, the pedestrian completely disappears from the image, and its background is reconstructed accordingly to the rest of the scene.
Figure 13: 3D representation of the disocclusion of the pedestrian presented in Figure 12. (a) is the original mask highlighted in 3D, (b) is the final reconstruction.
Figure 14: Result of the disocclusion on a car in a dense point cloud. (a) is the original point cloud colorized with the reflectance, (b) is the segmentation of the car highlighted in orange, (c) is the result of the disocclusion. The car is entirely removed and the road is correctly reconstructed.
Figure 15: Example of results obtained for the quantitative experiment. (a) is the original point cloud (ground truth), (b) the artificial occlusion in dark, (c) the disocclusion result with the Gaussian diffusion, (d) the disocclusion using our method, (e) the Absolute Difference of the ground truth against the Gaussian diffusion, (f) the Absolute Difference of the ground truth against our method. Scales are given in meters.
Figure 15 shows an example of disocclusion following this protocol. The result of our proposed model is visually very plausible whereas the Gaussian diffusion ends up oversmoothing the reconstructed range image, which increases the MAE.
Figure 16.b presents the disocclusion of the cyclist. The background is reconstructed in a plausible way; however, details of the occluded part of the pole are not recovered.
Figure 16: Example of a scene where two objects overlap in the acquisition.
(a) is the original point cloud colored with depth towards the sensor, with the missing part of a pole highlighted with a dashed pink contour, (b) shows the two objects that overlap: a pole (highlighted in orange) and a cyclist (highlighted in green), (c) shows the disocclusion of the cyclist. Although the background is reconstructed in a plausible way, details of the occluded part of the pole are missing.

Table 1: Comparison of the average MAE (Mean Absolute Error) on the reconstruction of occluded areas.

                                Gaussian    Proposed model
Average MAE (meters)            0.591       0.0279
Standard deviation of MAEs      0.143       0.0232
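Eq. (5) above is straightforward to implement; the sketch below computes the MAE between two depth images using NumPy (the sample values are illustrative, not the paper's data):

```python
import numpy as np

def mae(u1, u2):
    """Mean Absolute Error between two range images of equal shape,
    as in Eq. (5): (1/N) * sum over (i,j) of |u1(i,j) - u2(i,j)|."""
    u1, u2 = np.asarray(u1, dtype=float), np.asarray(u2, dtype=float)
    assert u1.shape == u2.shape
    return np.abs(u1 - u2).mean()

# Illustrative 2 x 2 depth maps (meters), not the paper's data
ground_truth = np.array([[15.0, 15.2], [15.1, 15.3]])
reconstruction = np.array([[15.0, 15.0], [15.2, 15.3]])
error = mae(ground_truth, reconstruction)  # about 0.075 m
```

In the paper's protocol, `u1` is the untouched point cloud rendered as a range image and `u2` is the same image after artificial occlusion and reconstruction.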
2017
https://theses.hal.science/tel-01757038/file/SIVARAMAN_2017_diffusion.pdf
Thesis for the degree of Docteur by Nimisha Sivaraman, directed by Prof. Fabien Ndagijimana. Jury: Prof. Daniela Dragomirescu (rapporteur), Prof. Eduardo Motta Cruz, Prof. Christian Vollaire, Sébastien Serpaud (engineer), Léonce Mutel (engineer, examiner), Prof. Fabien Ndagijimana, M. Zouheir.
Keywords: 2-D cross-correlation method, Electromagnetic compatibility, near field scanning, shielded magnetic probe, source reconstruction, Transmission line matrix method
As the number of components in a confined volume is increasing, there is a strong demand for identifying the sources of radiation in PCBs and for predicting the EMC of electronic circuits. Electromagnetic near field scanning is a general method of identifying the radiating sources in a PCB. The first part of the thesis consists of the design and characterization of printed circuit magnetic probes with high sensitivity and high spatial resolution. Conventional probes based on microstrip and coplanar configurations are studied. As the length of the transmission line connected to the probe increases, the probe output contains noise due to common mode voltages, which are mainly induced by the electric field. In order to suppress the voltage induced by the electric field, a shielded magnetic probe is designed and fabricated using low cost printed circuit board (PCB) technology. The performance of the passive probe is calibrated and validated from 1 MHz to 1 GHz. The shielded probe is fabricated on an FR4 substrate of thickness 0.8 mm and consists of 3 layers, with the signal in the middle layer and the top and bottom layers dedicated to ground planes. The aperture size of the loop is 800 µm x 800 µm, with an expected spatial resolution of 400 µm. The high sensitivity of the probe is achieved by integrating a low noise amplifier at the output of the probe, hence making an active probe.
The performance of the shielded probe with different lengths of transmission lines is studied. When the probe has to be operated above 100 MHz, it is found that the transmission lines connected to the probe should be short (around 1.5 cm). For frequencies below 100 MHz, the length of the lines can be up to 12 cm. A three-axis probe which is able to measure the three components of the magnetic field is also designed and validated by near field scanning above a standard wire-over-ground structure. In the second part, the inverse transmission line matrix (Inv-TLM) method is used to reconstruct the source distribution from the near field scan (NFS) data above a single plane on the PCB. Even though the resolution of the reconstruction depends on the wavelength and the mesh parameter, the inverse propagation increases the width of the reconstructed wave. As this method is found to be ill-posed and results in multiple solutions, we have developed a new method based on the two-dimensional cross-correlation, which represents the near field scan data in terms of the equivalent electric currents of the dipole. With the new method, we are able to identify and locate the current sources in the PCB, which are represented by an equivalent source. The method is validated for current sources with different orientations. The near field data simulated using CST Microwave Studio is used to validate both methods. The radiated far field from these equivalent sources is compared with the simulated fields.
Introduction
1.1 Need of EMC in electronic components
The widespread use of electronic circuits for communication, computation, automation, etc. makes it necessary for different electronic circuits to operate in close proximity to each other. Quite often, these circuits affect each other adversely. Electromagnetic interference (EMI) has become a major problem for circuit designers, and it is likely to become even more severe in the future.
The large number of electronic devices in common use is partly responsible for this trend. In addition, the use of integrated circuits and large-scale integration has reduced the size of electronic equipment. As electronic circuits have become smaller and more sophisticated, more circuits are being confined into less space, which increases the probability of interference. The products must be designed to work in the "real world," with other equipment nearby, and to comply with the electromagnetic compatibility (EMC) regulations. On the other hand, the equipment should not be affected by external electromagnetic sources and should not itself be a source of electromagnetic noise that can pollute the environment. In an electronic system, the most common sources of radio frequency interference (RFI) can be listed as below:
(1) The electromagnetic spectrum that is used extensively by radio transmitters for communications
(2) High frequency pulse trains in digital systems
(3) Intended high frequency oscillator circuits used in sub-systems
(4) Circuit transients caused by simple switching operations
In digital circuit design, EMC gathers attention because of three different technological trends. First, in the process of achieving higher processing speeds, shorter pulse rise-times are used, contributing a significant amount of energy at high frequencies, which is capable of propagating by radiative mechanisms over long distances. Second, the modern physical design of equipment is based increasingly on the use of plastics in preference to metals. This significantly reduces the electromagnetic shielding inherent to an all-metal cabinet. The third one is due to the trend for compact design as a result of miniaturization, which in turn contributes to EMC problems.
Electromagnetic compatibility (EMC) is the ability of an electronic system to (1) function properly in its intended electromagnetic environment and (2) not be a source of pollution to that electromagnetic environment. The electromagnetic environment is composed of both radiated and conducted energy. EMC, therefore, has two aspects, emission and susceptibility. Susceptibility is the capability of a device or circuit to respond to unwanted electromagnetic energy (i.e., noise). The opposite of susceptibility is immunity. The immunity level of a circuit or device is the electromagnetic environment in which the equipment can operate satisfactorily, without degradation, and with a defined margin of safety. One difficulty in determining immunity (or susceptibility) levels is defining what constitutes performance degradation. Emission pertains to the interference-causing potential of a product. The purpose of controlling emissions is to limit the electromagnetic energy emitted and thereby to control the electromagnetic environment in which other products must operate. The three aspects that form the basic framework of EMC design are the generation, the transmission, and the reception of electromagnetic energy (shown in Figure 1-1). Whether the source or receptor is intended or unintended depends on the type of the coupling path and on the source or receiver. Figure 1-1 Universal interference model The transfer of electromagnetic energy can be divided into four subgroups: radiated emissions, radiated susceptibility, conducted emissions and conducted susceptibility. These four aspects are shown in Figure 1-2 [1]. An electronic system normally consists of one or more subsystems that communicate with each other via cables. These cables have the potential of emitting or picking up electromagnetic energy. The longer the cable, the more efficient it is in picking up or emitting electromagnetic radiation.
The factor that produces intended or unintended radiation is the currents on the cables or traces. Electromagnetic emission, or susceptibility to it, not only occurs by radiation through the air; interference signals also pass between the subsystems directly via conduction. There are also some other factors concerned with EMC. Some of them are depicted in Figure 1-3 [2]. There is significant interest in the military in hardening communications against the electromagnetic pulse (EMP). The electromagnetic interference from the intense current caused by lightning couples to electronic power systems, and subsequently it is conducted into the device through the ac power cord. There are instances of direct interception of the radiated emissions from which the content of the communication can be determined. This problem is called TEMPEST and is imperative for the military. Figure 1-2 Emission and susceptibility in an electronic system [1] Figure 1-3 Some other aspects of EMC (a) Electro Static Discharge, ESD (b) electromagnetic pulse, EMP (c) lightning (d) TEMPEST (secure communication and data processing) [2] Controlling the emission from one product may eliminate an interference problem for many other products. Therefore, it is desirable to control emission in an attempt to produce an electromagnetically compatible environment. There are two different approaches to predict the emission or immunity of an electronic circuit: analytical and numerical methods. Although the analytical methods can be applied only to very simplified structures, they are still important as they provide insight into the significance of various parameters, as well as benchmarks against which numerical methods can be checked and evaluated. For more complex configurations, numerical computer-based methods are suitable for studying the behavior of the system.
The numerical methods are mainly classified into two groups:
(1) Frequency-domain methods such as the finite element method (FEM) and the method of moments (MOM).
(2) Time-domain methods such as the finite-difference time-domain (FD-TD) method and the transmission line matrix (TLM) method.
In another classification, the numerical methods are divided into the following groups:
(1) Integral equation methods and
(2) Differential equation methods
Measurement methods of PCB emissions
The electromagnetic emissions from a PCB can be classified as conducted and radiated emissions, or as common mode and differential mode emissions. The conducted emission measurements are either a voltage-capacitive-tap type of measurement or a current-clamp type of measurement. On the other hand, the radiated emission measurements are unique in that they must always state "the horizontal distance from the Device-under-Test (DUT) to the receiving antenna." This horizontal distance can be 1.5, 5, 10 or 30 meters [3]. TEM cell method: The radiation level of a device under test can be measured using a TEM cell, as shown in Figure 1-4 [4]. The DUT faces the interior of the cell while the support circuitry is maintained outside the cell. The RF voltage appearing at the input of the connected spectrum analyzer is related to the electromagnetic potential of the DUT. The measurements are repeated for at least two orientations of the DUT in order to capture all the radiations. Figure 1-4 Setup of TEM cell method for measuring radiated emissions [4] Surface scan method: The radiated emissions from the DUT can be measured by scanning with a probe above the DUT, as shown in Figure 1-5. The measurement result of the surface scan method provides not only the electromagnetic fields from the DUT but also the relative strength of the sources. A variety of probes such as electric, magnetic and optic probes are used for surface scanning.
Figure 1-5 Surface scan method [4] Method of measuring the radiation pattern: The radiation pattern of a DUT in the far field can be measured using a receiving antenna. Figure 1-6 Radiation pattern measurement method [4] The DUT is mounted on a turntable and rotated through 360° to find the maximum emission direction, as shown in Figure 1-6. The receiving antenna is scanned in height from 1 to 4 m to find the maximum level. The DUT-to-antenna azimuth and polarization are varied through 360° during the measurement to record the radiation pattern of the DUT. The standard test procedure requires that the measurements be done on an open area test site or in a semi-anechoic chamber. A balanced dipole is used as the receiving antenna below 1 GHz, and a log-periodic antenna or a horn antenna should be used for tests above 1 GHz. Reverberation chamber measurements: A reverberation chamber is an electrically large, highly conductive enclosed cavity equipped with one or more metallic tuners/stirrers whose dimensions are a significant fraction of the chamber dimensions. A typical measurement setup using a reverberation chamber is illustrated in Figure 1-7. It is used to measure the total radiated power of a DUT. The DUT should be at least λ/4 from the chamber walls. The stirrers are rotated very slowly compared to the sweep time of the EMI receiver in order to obtain a sufficient number of samples. The mechanical tuners/stirrers can "stir" the multi-mode field in the chamber to achieve a statistically uniform and statistically isotropic electromagnetic environment. The receiving antenna measures either the maximum received power or the averaged received power during a cycle of the stirrers. The recorded signals are then converted to the total radiated power and the free space field strength. The advantage of the reverberation chamber method is that it is able to measure the total field on all sides of a DUT without multiple test positions and orientations.
Figure 1-7 Reverberation chamber method [4] Direct coupling method: The simplified configuration of the 1Ω method for measuring the sum current in the common ground path is shown in Figure 1-8. The direct coupling method determines the conducted emissions from power and signal ports of a small electronic module, especially an IC. The RF currents developed across a standardized load are measured to allow an indirect estimation of the emission level. Figure 1-8 1Ω direct coupling measurement for conducted emissions [4] Workbench Faraday Cage (WBFC) method: IEC 61967-5 defines a method of measuring the conducted electromagnetic emissions at defined common-mode points in order to estimate the emissions radiated by connected cables. The Faraday cage is typically a metallic box of 500 mm x 300 mm x 150 mm. A typical workbench Faraday cage setup is shown in Figure 1-9. The DUT is mounted on either an IC EMC test board or an application board if it fits inside the WBFC. With all input, output, and power connections to the test board filtered and connected to common-mode chokes, the conducted noise is measured at PCB locations specified by the standard. [4] Magnetic probe method: IEC 61967-6 defines a method for calculating the conducted emissions from an IC pin using a magnetic field probe to measure the magnetic field associated with a connected PCB trace. The schematic of the measurement setup is shown in Figure 1-10. Figure 1-10 Schematic of magnetic probe method for conducted emissions [4] Standard and method comparison of emission measurements IEC-61967
Near field measurements Methods such as modulated scattering technique, electro-optic sampling, electron beam probing (modified scanning electron microscope), and perturbation method are already available for the identification and localization of sources. [5], [6] proposes source location estimation technique with the modulated scattering technique (MST) for indoor wireless environments. The uniform circular scatterer array (UCSA) that consist of five optically modulated stutterers as array elements and a dipole antenna at the center of the UCSA is employed for estimating a source location from the impinging signal, but these techniques require expensive components, complicated configuration and low signal to noise ratio. Electromagnetic near field scanning is a general method for identifying the source of electromagnetic interference (EMI) in electronic circuits. Various methods have been developed by many authors for the calculation of far field pattern leading to the source identification from the near field measurements [7]- [11]. Thus, it is necessary to know the electromagnetic fields in their close environment. Near field and far field The space surrounding an antenna is divided into three (1) reactive near field (2) radiative near field and (3) far field. Reactive near field is that portion of the near field region immediately surrounding the antenna whereas in the reactive near field predominates. For most antennas, the outer boundary of this region is taken into exist as at a distance R<0.62D3/ λ. Radiating near field region is defined as that region of the field of an antenna between the reactive near field region and the far field region where the radiative field predominates and the angular field distribution is dependent upon the distance from the antenna. Far field region is defined as that region of the field of an antenna where the angular distribution is independent of the distance from the antenna. 
If D is the maximum overall dimension of the antenna, the far field region is at a distance greater than 2D²/λ, where λ is the wavelength [12]. Why near field measurements? The electromagnetic emissions can be measured in either the near field or the far field. The near field measurements have advantages in accuracy, reliability, cost and application range compared with far field measurements.
- The effect of some uncertain factors such as weather, scattering and electromagnetic interference has less influence on the measurement because the probe and the DUT are very close to each other, and hence it gives a more accurate measurement [13][14].
- The near field measurement is less dependent on test conditions. It can be conducted in normal lab environments rather than an OATS or an anechoic chamber. This makes the technique highly feasible.
- Unlike the far field measurements, which give the radiation in the far field, near field measurements can be used not only to obtain electromagnetic fields, but also to provide emission tests and source diagnostics in EMC studies of PCBs and ICs. They can be used to estimate the current on the microstrip trace of a PCB or to locate a fault in a high frequency chip [15].
- Near field-far field transformation methods and equivalent source methods enhance the application and popularity of the near field measurement method.
Depending on the application, near field measurements can be classified into antenna near field measurements and EMC/EMI near field measurements. Antenna near field measurements are performed in the radiating near field range, typically in the range 3λ to 5λ, and are focused on the determination of the far field pattern from the near field. EMC near field measurements are performed in the highly reactive region (typically < λ/6) and are focused on the determination of real or equivalent radiating sources in the DUT.
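As a worked example of the region boundaries quoted above (reactive near field up to R = 0.62√(D³/λ), far field beyond 2D²/λ), the following sketch computes both distances; the antenna dimension and frequency used below are arbitrary illustrative values:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def field_regions(D, f):
    """Return the two commonly used boundaries (in meters) for an antenna
    of largest dimension D (m) radiating at frequency f (Hz):
    reactive / radiating near field at 0.62 * sqrt(D**3 / lambda),
    radiating near field / far field at 2 * D**2 / lambda."""
    lam = C / f
    reactive = 0.62 * math.sqrt(D ** 3 / lam)
    far = 2 * D ** 2 / lam
    return reactive, far

# Example: a 0.1 m PCB observed at 1 GHz (lambda is roughly 0.3 m)
r, ff = field_regions(0.1, 1e9)
```

For this example the reactive region ends a few centimeters above the board, which is why EMC near field scans (typically < λ/6 away) sit well inside it.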
Objective of the thesis
The major objective of this thesis is to predict the radiated emissions from a PCB from near field scan measurements. The steps toward this can be divided into two main tasks.
1) Design of a high resolution and high sensitivity printed circuit magnetic probe for near field scan measurements.
2) Development of the inverse algorithm for equivalent source representation and prediction of the electromagnetic compatibility.
The state of the art of near field probes and source reconstruction methods is described in the next chapter (chapter 2) of this thesis. Chapter 3 of this thesis deals with the design and characterization of printed circuit magnetic field probes for near field measurements. The main goal is the design of low cost shielded magnetic probes in printed circuit technology toward achieving high spatial resolution (of the order of 100 µm). The thesis describes the problems encountered during miniaturization and the solutions we put forward toward this goal. The required frequency band of operation is from 1 MHz to 1 GHz, which is set by the specifications of the project (LOCRAY). The thesis also focuses on the design and characterization of active probes in order to achieve high sensitivity. The effects of the position of the LNA on the probe and of the gain of the LNA are also studied. The performance of a 3-axis probe which can measure the 3 components of the magnetic field is also analyzed. Chapter 4 describes the source reconstruction methods from near field scan data for the prediction of radiated emissions from the PCB. The inverse transmission line matrix method, based on the inverse propagation of electromagnetic waves, is used to find the field distribution at the plane of the source. The inverse algorithm is meant to use the near field scan data measured by the designed probe. This chapter also describes a new method based on the 2D cross-correlation.
This method represents the current sources on the PCB by an equivalent current. This chapter contains the validation for elementary current sources and simple PCBs in air. Finally, the radiated far field of the circuit is predicted using the equivalent current. In this, we use the time reversal property of the TLM method in order to reverse-propagate the electromagnetic waves and reconstruct the radiating source. Chapter 5 concludes the work of the thesis and also gives directions for future developments.
State of the art
Near Field Probes
Based on the configuration of the probe, there are two near field measurement setups as shown in
The near-field probes (NFPs) have been widely studied in recent years because of their ability to quantify the near-field (optical, electrical or magnetic field) strength in the space close to the noise sources. Most of the studies on NFPs have focused on improving the spatial resolution and the available bandwidth, and on suppressing the field coupled in an orthogonal direction or an adverse field. Electro-optical probes Optical magnetic field probes which have a loop antenna element and an optoelectronic crystal are presented in [4]-[7]. The study in [4] analyzes the invasiveness of optical magnetic field probes quantitatively by experiment and by finite difference time domain (FDTD) simulation. The metallic cables in shielded loop coils introduce a disturbance to the original magnetic field; this can be eliminated by replacing the metallic cables with optical fiber cables. An optical probe for simultaneous measurements of electric and magnetic near fields is developed by Kapteos [8]. The spatial resolution of these probes is around 1 mm or below.
Various standard antennas for measuring radio-frequency electric and magnetic fields have been developed. The standard probes described are an electrically short dipole, a resistively-loaded dipole, a half-wave dipole, an electrically small loop, and a resistively-loaded loop. A single-turn loop designed for simultaneous measurement of the electric and magnetic components of near fields and other complex electromagnetic environments is described in [9]. Each type of antenna demonstrates a different compromise between broadband frequency response and sensitivity. A common design for an electric probe consists of an electrically-short antenna with a diode across its terminals; a resistive, parallel-wire transmission line transmits the detected signal from the diode to the monitoring instrumentation (Figure 2-4 (a)). Small dipoles are desirable because they provide a high spatial resolution of the field, and because they permit a frequency-independent response at higher microwave frequencies [10]. The important problem associated with electric dipole probes is the disturbance introduced to the device under test by the probe itself. The use of resistive tapering, by means of thin-film deposition and photoetching techniques, has been shown to yield significant improvements over conventional dipole elements (Figure 2-4 (b)). These include broader bandwidth, flatter frequency response, wider dynamic range, smaller size, reduced EM-field perturbation and distortion, and complete absence of in-band resonances [11]. Waveguide probes are also used for near field measurements (shown in Figure 2-5) [12]. Open-ended waveguides measure the E field component. Figure 2-6 Electric probes [14] The electric probe fabricated by [14] from an open-ended coaxial cable is shown in Figure 2-6.
In the case of an electric dipole probe, the induced current can be calculated based on the capacitive model as

i = jωε₀E_z    (2-1)

Figure 2-7 Output voltage for different diameters of probe in microstrip line configuration [14]

The capacitive model does not take into account coupling with external conductors. The simulated output voltage of the probe for different diameters of the inner conductor, obtained by inserting it into a rectangular waveguide to avoid coupling with external conductors, is shown in Figure 2-7. The output voltage increases when the dimensions of the probe increase. A loop operates according to Faraday's law. A voltage is induced at the terminals of the loop when the magnetic flux through the loop changes [15]:

V_emf = -dΦ/dt = -(d/dt) ∬_S B · dS    (2-2)

As is obvious from equation 2-2, the voltage induced on the probe depends on the size of the loop; hence the sensitivity of the loop is directly proportional to its size. The finite loop probe contains not only the usual circulating current, which is proportional to the magnetic field, but also certain currents which do not depend on the magnetic field component, but rather on the average electric field in the plane of the loop. Very large errors are possible when a singly-loaded loop is used to measure magnetic fields unless its diameter is less than 0.01λ. The doubly-loaded probe may be used with comparable accuracy when its diameter is as large as 0.15λ [16]. Electromagnetic probes are fabricated using different technologies such as CMOS integration technology, LTCC, thin film technology and PCB technology. The most classic magnetic probe is a circular or elliptical loop fabricated by hand and connected to the ground and signal lines of a semi-rigid coaxial cable, as shown in Figure 2-8 (a) [17]. These probes are not suitable when there is a requirement for high spatial resolution because they are difficult to fabricate manually.
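For a uniform sinusoidal field over an electrically small loop, equation 2-2 reduces to a peak voltage |V| = 2πf·B·A, which makes the size/sensitivity trade-off explicit. The sketch below evaluates this relation for an 800 µm × 800 µm aperture (the size of the shielded probe of this thesis); the field strength used is an arbitrary example value:

```python
import math

def loop_emf(f_hz, b_tesla, area_m2):
    """Peak voltage induced in an electrically small loop by a uniform
    sinusoidal magnetic field normal to the loop: |V| = 2*pi*f * B * A."""
    return 2 * math.pi * f_hz * b_tesla * area_m2

area = 800e-6 * 800e-6            # 800 um x 800 um loop aperture
v = loop_emf(100e6, 1e-6, area)   # 1 uT field at 100 MHz, about 0.4 mV
```

Sub-millivolt outputs like this are why the later chapters integrate a low noise amplifier at the probe output to obtain an active, high-sensitivity probe.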
Different miniaturized probes are fabricated using thin film technology. A miniaturized thin film magnetic probe for LSI measurement proposed by [18] is shown in Figure 2-8 (b). Figure 2-8 (c) shows an LTCC probe fabricated by [19]. To miniaturize further and improve the frequency band, integrated probes and probes on LTCC (Low Temperature Co-fired Ceramics) are used [18]-[22]. A silicon integrated RF magnetic field probe is shown in Figure 2-8 (d) [20].

Figure 2-8 (a) aperture probe [17] (b) thin film probe [18] (c) LTCC probe [19] (d) CMOS SOI probe [20]

Figure 2-9 (a) CPW loop probe [23] (b) probes with a bare loop and a stripline loop [24] (c) loop with microstrip filter [25]

There is relatively little literature on printed circuit magnetic field probes. Since the LTCC, thin film and integrated technologies are quite expensive, this work focuses on probes based on printed circuit technology. A rectangular loop probe with a CPW feed line is proposed in [23]. The length and width of the loop are 5mm x 5mm with a conductor width of 0.5mm. A tapered transition is adopted in this paper, in addition to the air bridge between the two ground planes (Figure 2-9 (a)). The probe is fabricated on a Teflon substrate with permittivity 2.5 and height 0.508mm. The authors claim that the proposed probe minimizes the spurious radiation and resonances by connecting the ground planes together and adopting a tapered transition. The probe operates from 1 GHz to 7 GHz. Two types of printed circuit magnetic probe for the GHz band are proposed in [24], shown in Figure 2-9 (b). Type A is a bare loop and Type B is a stripline loop. The loop aperture has an area of 1 square millimeter and is fabricated on an FR4 substrate. Later in 2009, a printed circuit magnetic probe shown in Figure 2-9 (c), with a set of quasi-periodic notches which act as microstrip filters, was proposed in [25]. This probe was fabricated on an FR4 substrate with a 0.8mm thickness; the aperture of the loop is 8mm (R=4mm).
The sensitivity of the probe is improved by using a carrier suppression technique in [26]. An array of active magnetic probes is designed by M. Yamaguchi using CMOS integration technology [3]. An active electro-optical probe for medical MRI is proposed by [5]. These works do not provide any information about the amplifier characteristics or the influence of the integrated amplifier on the measurement result. Measurements in the near field and the far field require separate probes [27]. The disturbance of the probe in the far field is negligible when a probe with maximal sensitivity is used. This is not the case in the near field. The smaller the dimensions of the probe, the smaller the disturbance to the device under test and the higher the spatial resolution.

Synthesis of radiating sources

At present, inverse problems arise in many branches of applied sciences, such as medical diagnostics, atmospheric sounding, radar and sonar target estimation, seismology, radio astronomy, and microscopy. In a general context, an inverse problem can be broadly described as a problem of determining the internal structure or past state of a system from external measurements, which corresponds to the inverse of the usual cause-effect sequence of events. In the domain of electromagnetism, inverse problems are divided into two categories: (1) inverse scattering problems (inverse medium problems), which obtain the characteristics of a scattering object from its scattered wave due to external illumination, and (2) inverse source problems, which determine the constitution of a source from measured values of its emitted radiation. Other common applications of inverse source problems are concealed weapon detection, nondestructive evaluation or testing, and remote sensing [28]-[30]. The inverse source problems, in contrast, have received little attention, partially because of the severely non-unique nature of their solutions.
Despite this ill-posedness, inverse source problems are still applied directly in antenna design [31] and holographic imaging [32]. Inverse source problems have already been studied in the frequency domain [29] and in the time domain [30]. The earliest works on modelling the far field emissions were direct near-field to far-field transformations based on modal expansion methods, as shown in Figure 2-10 (a) [33][34]. The fields radiated by a DUT are expanded in terms of planar, cylindrical, or spherical wave functions in order to obtain the far field, and the measured near-field data are used to determine the coefficients of the wave functions. An equivalent source representation composed of both magnetic and electric dipoles placed over a fictitious sphere surrounding the AUT is obtained in [35]. A linear relation between the transmission coefficients of the AUT and the transmission coefficients of each dipole is derived by a spherical wave expansion (SWE) of the NF measurements. Dipole transmission coefficients are determined using the translational and rotational addition theorems, and a least squares method is employed to compute the excitation of each current source. The equivalent currents are placed in the aperture in front of the antenna, so these methods do not apply to emissions in the other half space behind the antenna. The equivalent source reconstruction from the radiated field is applied in antenna synthesis to reconstruct the equivalent source of antennas from its radiated near field, as illustrated in Figure 2-10 (b).
A method for computing the far field of an antenna from near field measurements taken over an arbitrary geometry is presented in [36]. An integral equation is used to relate the near field data to the equivalent current, and the MoM method is used to solve the equation by transforming it into a matrix equation. The authors in [37] present a method that directly reconstructs the sources from knowledge of the electric field amplitude data over some region. The sources are reconstructed in terms of an equivalent magnetic current density, and the equivalence principle has been used to represent the antenna under test. With these methods, the correct far field in front of the antenna can be produced regardless of the geometry of the near-field measurement. Another way is to identify the real or equivalent sources bound to the actual DUT surface in order to locate the radiating source. The approach of source reconstruction is harnessed to model the radiated emissions from PCBs and electronic devices. The equivalent source of the source distribution is obtained by numerical methods. After obtaining this equivalent source, it is quite easy to conduct source diagnostics, near-field prediction, evaluation of the radiation performance regulated by the EMC standards, such as the radiated emission limits at 3 m, 10 m or even 30 m away from the DUT, and investigation of interactions between the PCBs and the shielding cavities [37][38].
A recent development in source reconstruction for EMC is by Ping Li and Lijun Jiang [38], in which near field data are used to characterize the radiated emissions from PCBs. The TLM method has been a powerful numerical technique for solving electromagnetic structure problems, including high-speed RF/microwave circuits and antennas [39][40]. The time-reversal property of the TLM method was utilized for the localization of scattering objects from external field measurements in the time domain. In [41], the non-uniqueness of the inverse source problem is addressed by additionally imposing smoothness constraints. Both the source solution and the field measurements are performed in the time domain. The inverse problem in their paper deals with the reconstruction of a transient source distribution inside a closed region in free space.

Conclusion

The near field scanning probes in the literature are mostly for frequencies above 1 GHz. The higher resolution probes are reported to operate at frequencies above 3 GHz. Probes with high spatial resolution in the range of micrometers (µm) were always achieved with expensive CMOS, LTCC or thin film technologies. There was no information in the literature about near field probes which operate at frequencies below 1 GHz with a spatial resolution below 1 mm using printed circuit technology. The influence of amplifier factors such as gain, noise figure and position on the probe on the performance of the active probe has to be analyzed. The source reconstruction incorporates the electric or magnetic field integral equations with numerical methods such as MoM.
Most of these methods need information about the device under test in order to predict the radiated field, and they require large computational time and memory. Use of near field data from two planes, above and below the device under test, requires a long scanning time. So, a method is required for the reconstruction of sources from near field scan data on a single plane, in order to reduce the scanning time and the memory of computation.

3. Design and characterization of printed circuit magnetic probes

Introduction

Near field scanning is a general method for the measurement of radiated emissions from PCBs. The accuracy of this measurement depends on the probes used for the near field scan. As seen in the literature, the sensitivity and selectivity of the probe are inversely proportional. Therefore, the objective of this chapter is to design a magnetic probe with high sensitivity and high spatial resolution. The design and characterization of high-resolution magnetic probes using printed circuit technology are discussed in this chapter. The sensitivity of the probe is enhanced by integrating a low noise amplifier at the probe output. The active probe discussed here is the first active probe in the literature which is fabricated using printed circuit technology. The validation of active probes and three-axis probes is also presented in the following sections.

Parameters of the probe

The important parameters of the probe are listed in Figure 3-1.

Selectivity: the ability of the probe to distinguish between the normal component and the tangential component of the electric or magnetic field.

Spatial resolution: the Rayleigh criterion defines the spatial resolution as the ability to discriminate two RF sources in close proximity. That amounts to determining a distance δ below which the system will be unable to distinguish these two sources. The spatial resolution in the case of the dipole is estimated to be equal to half of the length of the dipole probe.
Antenna factor: the antenna factor of a dipole is defined by the ratio between the incident field and the voltage induced at the input of a receiver. Here, the antenna factor is defined as the ratio between the electric or magnetic field of the circuit under test and the voltage induced at the probe when it is placed over the device under test:

AF [S/m] = H [A/m] / V [V]    (3-1)

and the antenna factor in dB is expressed as

AF [dBS/m] = H [dBA/m] − V [dBV]    (3-2)

For calculating this parameter, the theoretical or simulated magnetic field of the DUT is divided by the measured or simulated voltage at the probe. The antenna factor (AF) of a loop can be derived theoretically from Faraday's law as

AF = 1 / (jωμnS)    (3-3)

where n is the number of turns, S is the area of the loop and ω is the angular frequency. The AF is important for knowing the level of the field above the DUT when using the probe in the near-field measurement system.

The near field test bench

The characterization of the probe is performed with the near field test bench. A near field test bench has been developed by IRSEEM (Research Institute for Electronic Embedded Systems, the laboratory at Rouen, France) to collect the electromagnetic field close to devices of various sizes. The system is based on a direct measurement method and its schematic is shown in Figure 3-2.

Figure 3-2 Schematic of the near field characterization setup

The 3D positioning system can operate in two scanning modes: (1) a constant height mode, and (2) a mode with constant distance between the probe and the device under test. The distance between the probe and the DUT is kept constant by using a relief model of the object to regulate the probe position. The method used to acquire the relief model of objects is based on laser triangulation. The object is illuminated from one direction with a laser line projector and viewed with the camera from another one. Laser and camera are fixed on the robot.
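A minimal numeric sketch of equations 3-1 to 3-3 (magnitudes only). The 2 mm × 2 mm single-turn loop and the 1 mV output used below are illustrative assumptions, not values of the fabricated probe:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def antenna_factor(freq_hz, n_turns, area_m2):
    """|AF| = 1 / (omega * mu0 * n * S) in S/m (magnitude of eq. 3-3)."""
    return 1.0 / (2 * math.pi * freq_hz * MU0 * n_turns * area_m2)

def h_from_voltage(v_probe, freq_hz, n_turns, area_m2):
    """H = AF * V (eq. 3-1)."""
    return antenna_factor(freq_hz, n_turns, area_m2) * v_probe

# Example: 2 mm x 2 mm single-turn loop at 100 MHz, 1 mV probe output
af = antenna_factor(100e6, 1, 4e-6)      # ~317 S/m
h = h_from_voltage(1e-3, 100e6, 1, 4e-6)  # ~0.32 A/m
```

Equation 3-2 then follows in logarithmic form: H [dBA/m] = 20·log10(AF) + V [dBV], i.e. a frequency-dependent constant is simply added to the measured voltage spectrum.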
The relief model of the device under test is acquired by translating the positioning system along the x-axis of the robot. Z data provided by the camera are recorded on a computer and converted into probe coordinates. Finally, these data are used to position the probe over the device under test during the electromagnetic measurement. A low noise amplifier having a gain of 37 dB is associated with the arm of the robot. The theoretical fields above the reference trace are:

E_y,Theory = 8K y z n / [(y² + (z+n)²)(y² + (z−n)²)]    (3-4)

E_z,Theory = 4K n (y² − z² + n²) / [(y² + (z+n)²)(y² + (z−n)²)]    (3-5)

H_y,Theory = −(1/η) E_z    (3-6)

H_z,Theory = (1/η) E_y    (3-7)

where η = √(μ₀/ε₀), K = √(2PZ_c) / ln((h+n)/(h−n)) and n = √(h² − a²).

Printed magnetic field probes

An ideal magnetic probe is only a loop. In reality, however, this configuration is not possible, because an interface is required to carry the voltage induced at the probe to the receiver system. A printed circuit magnetic probe consists of a loop followed by a transmission line, as shown in the schematic in Figure 3-7.

Figure 3-9 Simulation model of the probe above the reference circuit (view from HFSS)

The probe is moved along the y-axis as shown in dotted lines in the figure, keeping the reference circuit fixed. Port 2 is assigned a power of 10 dBm. The scattering parameter S12 gives the ratio of the voltage at port 1 to the voltage at port 2, that is, the ratio between the voltage at the probe and the input voltage. Knowing the input power at port 2, the voltage induced at the probe can be calculated:

V_p = S₁₂ V_in    (3-8)

where V_p is the output voltage of the probe, and V_in is the input voltage at port 2 of the reference circuit.
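Equations 3-4 to 3-7 can be evaluated directly to generate the reference field profile used for comparison. The sketch below assumes illustrative trace dimensions and drive power (h, a, P, Z_c are placeholders, not the actual reference circuit's values):

```python
import math

# Free-space wave impedance, eta = sqrt(mu0/eps0) ~ 376.7 ohm
ETA0 = math.sqrt(4e-7 * math.pi / 8.854e-12)

def microstrip_fields(y, z, P, Zc, h, a):
    """Theoretical E/H above a trace over ground (eqs. 3-4 to 3-7).
    y: lateral offset, z: height, P: input power, Zc: line impedance,
    h: conductor height above ground, a: conductor half-width."""
    n = math.sqrt(h**2 - a**2)
    K = math.sqrt(2 * P * Zc) / math.log((h + n) / (h - n))
    d = (y**2 + (z + n)**2) * (y**2 + (z - n)**2)
    Ey = 8 * K * y * z * n / d
    Ez = 4 * K * n * (y**2 - z**2 + n**2) / d
    Hy = -Ez / ETA0   # eq. 3-6
    Hz = Ey / ETA0    # eq. 3-7
    return Ey, Ez, Hy, Hz

# Profile along y at a fixed 5 mm height, 10 dBm into a 50-ohm trace:
profile = [microstrip_fields(yy * 1e-4, 5e-3, 0.01, 50.0, 1.6e-3, 0.5e-3)
           for yy in range(-100, 101)]
```

Note the symmetry this encodes: E_y (and hence H_z) is odd in y and vanishes above the trace center, while E_z (and H_y) is even, which explains the single maximum of |H_y| at y = 0 seen in the scans.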
Simulated results

Figure 3-10 Simulation results of the magnetic field measured by the probe MP-1 at different frequencies

The simulation results of the probe in Figure 3-8 are plotted in Figure 3-10 along with the original magnetic fields present at the same measurement position, obtained using the analytical equation (3-6). The measured H field of the probe is calculated by multiplying the Antenna Factor (AF), or Calibration Factor (CF), with the output voltage of the probe. The comparison is made at different frequencies from 10 kHz to 1 GHz. From the figure, it is seen that the probe detected the profile of the magnetic field at all frequencies from 10 kHz to 1 GHz. The received magnetic field is in very good agreement with the theoretical magnetic field from 10 kHz to 200 MHz. All the peaks are detected perfectly in this frequency band. At frequencies above 200 MHz, we can see that the detected magnetic field has the shape of the original magnetic field, but the two minima at the side lobes could not reach the level of the original field. It is also observed that as the frequency increases, the level of the detected minima reduces. At 300 MHz, the detected minimum level of the magnetic field was 30 dBA/m, while the detected minimum decays to -22 dBA/m at 1 GHz. The reason for this behavior is investigated in the following sections. As the probe consists of a loop and a transmission line, the parameters considered for analysis are the effect of the structure of the loop and the effect of the transmission line. However, with the addition of the line, the minimum of the magnetic field is not detected correctly from 100 MHz with MP-4 and MP-5. The profile of MP-5 at frequencies above 500 MHz is much more affected by noise than that of MP-4. This shows that the probe output contains the voltage induced in the loop plus the voltage induced on the transmission line, which can be regarded as noise.
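The conversion from simulated S12 and the 10 dBm port power to a probe voltage (eq. 3-8) is a one-line calculation; the −40 dB coupling value below is purely illustrative:

```python
import math

def dbm_to_vrms(p_dbm, z0=50.0):
    """RMS voltage of a power p_dbm delivered into z0: V = sqrt(P*Z)."""
    p_w = 10 ** ((p_dbm - 30) / 10)
    return math.sqrt(p_w * z0)

def probe_voltage(s12_db, p_in_dbm, z0=50.0):
    """V_p = |S12| * V_in (magnitude form of eq. 3-8), S12 given in dB."""
    return 10 ** (s12_db / 20) * dbm_to_vrms(p_in_dbm, z0)

# 10 dBm drive, hypothetical -40 dB probe-to-trace coupling:
vp = probe_voltage(-40.0, 10.0)  # ~7.1 mV
```

Multiplying this V_p by the antenna factor of eq. 3-3 then yields the H-field value that is plotted against the analytical profile.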
These noise voltages corrupted the frequencies above 100 MHz. When the length of the transmission line increases, the highest operating frequency of the probe is reduced.

Figure 3-14 Simulation results of the magnetic field measured by the probes MP-1, MP-4, and MP-5 at different frequencies

The features of all the microstrip probes are tabulated in Table 3-1 along with their simulated operating frequency bandwidth. It is obvious from the table that when the number of layers of the loop increases, the frequency bandwidth reduces. Also, the number of turns of the loop is inversely proportional to the frequency bandwidth. It is also noted that the frequency bandwidth reduced with the increase in the length of the transmission line connected to the probe. The setup in Figure 3-4 is used for the characterization. The fields are plotted from 100 kHz, as the probe is not able to detect signals below 100 kHz. The probe has a good profile up to a few MHz, and around 50 MHz the shape of the detected profile is lost. The probe is not able to detect the magnetic field minimum positions above 50 MHz, even though the profile of the probe is good up to 100 MHz in the simulation. So, in the measurement, there are some factors other than the electric field which cause the deviation from the ideal response. The reasons for this behaviour are investigated in the following sections.

Printed circuit probes type 2 (with CPW line)

The design and characterization of probes based on the coplanar waveguide (CPW) configuration are presented in this section. A new probe with a grounded CPW (GCPW) transmission line is proposed and compared with the conventional probe with a CPW transmission line.

Coplanar waveguide (CPW) design

Coplanar waveguide (CPW) is an alternative to microstrip and stripline that places the signal and ground currents on the same layer. It is a printed circuit analogue of the three-wire transmission line (Figure 3-16).
A center strip which acts as the signal line is separated by a narrow gap from the ground plane on each side. The gap in the waveguide is usually very small and supports electric fields primarily concentrated in the dielectric. With little fringing field in the air, the coplanar waveguide exhibits low dispersion. In order to concentrate the fields in the substrate area and to minimize radiation, the dielectric substrate thickness is usually set equal to about twice the gap width. A CPW has a zero cut-off frequency, but its lowest-order propagation mode is quasi-TEM. Some electric and magnetic field lines for the quasi-TEM mode in CPW are indicated in Figure 3-17 for a defined cross-section and a defined time [2]. At higher frequencies, dispersion arises and the field becomes less TEM and more TE in nature. The two ground planes of the CPW must be maintained at the same potential to prevent unwanted modes from propagating. If the grounds are at different potentials, the CPW mode will become uneven, with a higher field in one gap than the other. The two fundamental modes in CPW are the coplanar mode and the parasitic slotline mode. Air bridges between the ground planes have to be applied to suppress the undesired slotline mode. If wire bonds are used to connect the ground planes, the wires should be spaced one-quarter wavelength apart or less. With substrates of high dielectric constant (recommended values greater than 10), the electromagnetic field is mainly concentrated inside the dielectric, which avoids field radiation into the air. Spurious modes (mainly the microstrip mode) can easily be generated if the separation between the CPW structure and the backing metallization is too close. The characteristic impedance is determined by the ratio of the centre strip width a to the gap width b, so size reduction is possible without limit, at the expense of higher losses.
Design of coplanar probes

Based on the CPW and GCPW configurations, two types of probes are fabricated. The probe with the CPW transmission line is a conventional probe, which is said to operate in the frequency band of 2 GHz to 7 GHz [3][4]. We have designed and fabricated the conventional CPW probe and a new GCPW probe, which has the same loop dimensions as in [4]. Both probes are fabricated on Rogers RO3210, which has a dielectric constant of 10.2; the thickness of the substrate h is 0.635mm. The structure of both probes is shown in Figure 3-21. The width of the conductor of the loop is 0.5mm, and the aperture of each loop is 4mm x 4mm. Metallic wire bonding is provided between the grounds of the CPW probe. As the wire bond is a bridge in the air, it breaks easily and must be handled carefully. In order to have an equipotential ground and avoid the air bridge of the conventional CPW probe, a new probe configuration employing a grounded coplanar waveguide is fabricated. In the GCPW probe, metallic vias having a diameter of 0.2mm are used to connect the grounds from the top to the bottom.

Proposed GCPW probe, l=5mm, w=5mm, h=0.635mm

Both probes have a very poor profile at frequencies below 2 GHz. Even though the detected profiles of the GCPW probe at frequencies above 600 MHz are found to be better than those of the CPW probe, these profiles prove that the probes are not suitable candidates for near field scanning below 1 GHz. The magnetically induced part of the probe output is proportional to the trace current, so we can write

V_pI = C_I I_tr    (3-9)

where C_I is the proportionality constant. In addition, the loop is also exposed to a vertical electric field which leads to charge redistributions on the loop and on the transmission line connected to the loop. As the field alternates, a current will flow on the loop, causing a magnetic flux coupling into the loop, which in turn induces a secondary voltage V_pV at the probe output. At any given frequency, this induced voltage is proportional to the trace voltage V_tr.
With a proportionality constant C_V, the induced voltage V_pV is

V_pV = C_V V_tr    (3-10)

For an ideal probe, C_V would be zero. In practice, it has a non-zero value. This effect can be associated with the common mode currents on the ground of the receiver or the outer shield of the coaxial cable. So, the total voltage V_p can be expressed as follows:

V_p = V_pI + V_pV = C_I I_tr + C_V V_tr    (3-11)

C_I and C_V are frequency-dependent complex values, which can be derived from measurement and simulation using two different values of load impedance Z_L on the trace [5]. When referenced to the local common or ground, a common-mode signal appears on both lines of a 2-wire cable, in phase and with equal amplitudes [6]. Such signals can arise from one or more of the following sources:  radiated signals coupled equally to both lines,  an offset from signal common created in the driver circuit, or  a ground differential between the transmitting and receiving locations. The fundamental deviations from the ideal response of the probe are related to the excitation of higher-order current modes in the loop, i.e., current distributions other than the ideal uniform current around the loop periphery. The first higher-order mode corresponds to a short electric dipole response, giving rise to electric-field coupling. Any electric field present at the probe location causes a receiver response in addition to the desired pure magnetic field response. Further higher-order terms in the loop current give rise to additional unwanted responses. All these higher-order terms are related to the loop dimensions (loop radius, for a circular loop) measured in wavelengths. In the case of an ideal loop isolated from ground, the worst-case plane-wave E-field to H-field coupling ratio equals twice the ratio between probe perimeter and wavelength [7]. Quite a few works in the literature have treated the common mode response of the probe.
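Since eq. 3-11 is linear in C_I and C_V, two measurements with different trace loads Z_L suffice to determine both constants at each frequency. A sketch of that 2×2 solve follows; the numeric values in the usage example are made up for illustration:

```python
def extract_coupling_constants(meas):
    """Solve eq. 3-11, V_p = C_I*I_tr + C_V*V_tr, for C_I and C_V from
    probe voltages measured under two different trace load impedances.
    meas: two tuples (I_tr, V_tr, V_p); entries may be complex phasors."""
    (i1, v1, p1), (i2, v2, p2) = meas
    det = i1 * v2 - i2 * v1  # Cramer's rule determinant
    if det == 0:
        raise ValueError("the two load conditions must be independent")
    c_i = (p1 * v2 - p2 * v1) / det
    c_v = (i1 * p2 - i2 * p1) / det
    return c_i, c_v

# Example: matched load vs. open-ish load (hypothetical phasor values)
ci, cv = extract_coupling_constants([
    (0.02 + 0j, 1.0 + 0j, 0.0201 + 0.002j),
    (0.005 + 0j, 1.5 + 0j, 0.0175 + 0.0005j),
])
```

The independence requirement is physical: a short maximizes I_tr while an open maximizes V_tr, so those two conditions give a well-conditioned system.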
because of the length of the robot arm used to hold the probe, and the width is W1=10mm. The grounds at layer 1 and layer 2 are connected by cylindrical vias of 300µm diameter at regular intervals in order to avoid the parallel plate mode of operation of the stripline and to increase the isolation from the electric field. A rectangular metallic pad of 500µm x 500µm is provided at the end of the loop to allow the connection of the via during fabrication.

Design of the transmission line

The transmission line used is a stripline structure having a strip conductor sandwiched between two ground layers [8]. A stripline is essentially a printed circuit version of a coaxial transmission line. A stripline has three layers of conductors (as seen in Figure 3-26). The internal conductor is commonly called the "hot conductor". The other two conductors, which are always connected to the signal ground, are called the "cold conductors" or "ground conductors". The hot conductor is embedded in a homogeneous isotropic dielectric of dielectric constant ε_r. Its dominant mode is pure TEM, assuming perfect conductors. The stripline is more suitable for use at low microwave frequencies because of the possibility of exciting the parallel plate mode at higher frequencies.

Figure 3-26 Cross section of a stripline

Figure 3-27 Electric E and magnetic H field lines of a stripline

The electric and magnetic field lines of the stripline are indicated in Figure 3-27 for a defined cross-section and a defined time. As the region between the two conductor plates of the stripline contains only a single medium, the phase velocity and the characteristic impedance of the dominant TEM mode do not vary with frequency. In the fundamental mode, the hot conductor is equipotential, i.e., every point on it is at the same potential. If the top and bottom ground plates are not at the same potential, a parallel-plate mode can propagate between them.
If excited, this mode will not remain confined to the region near the strip but will propagate wherever the two ground planes exist.

Choice of the via and the via spacing

Striplines are less sensitive to the lateral ground planes of a metallic enclosure because the electromagnetic field is strongly contained near the center conductor and the top and bottom ground planes. The return current path for a high frequency signal trace is located directly above and below the signal trace on the ground planes. The high frequency signal is thus contained entirely inside the PCB, minimizing emissions and providing natural shielding against incoming spurious signals. The metallized via holes connecting the two ground planes suppress the parallel plate mode of the stripline. The vias should be placed closely; a spacing s of one-eighth of a wavelength in the dielectric is recommended to prevent a potential difference from forming between the ground planes. Such vias form a cage around the strip, making it behave like a basic coaxial line. When the vias are placed too close to the edge of the strip, they can perturb the characteristic impedance. The via separation w should be a minimum of three strip widths, and five is preferable. If the via separation is too large, a pseudo-rectangular waveguide mode can be excited. This mode has a cut-off frequency given by

w < c / (2 f_max)    (3-12)

where c is the speed of light in the dielectric. Thus, at the highest frequency of operation f_max, the via separation should be less than the ratio of the velocity of light to twice the maximum frequency.

Considerations for the thickness of the shield

The thickness of the shield t_s is chosen to be greater than the skin depth δ of the metallic shield over the frequency range of interest.
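Equation 3-12 gives the widest admissible via separation. A quick check, using the RO3210 permittivity (ε_r = 10.2) mentioned earlier and an example 1 GHz upper frequency, shows the constraint is very loose at these frequencies:

```python
import math

def max_via_separation(f_max_hz, eps_r):
    """Upper bound on via separation from eq. 3-12: w < c/(2*f_max),
    with c the speed of light in the dielectric."""
    c = 3e8 / math.sqrt(eps_r)  # phase velocity in the dielectric (m/s)
    return c / (2 * f_max_hz)

# RO3210 (eps_r = 10.2) at an example f_max of 1 GHz:
w_max = max_via_separation(1e9, 10.2)
print(w_max * 1e3, "mm")  # ~47 mm
```

In practice the tighter λ/8 common-potential spacing and the "three to five strip widths" lateral rule quoted in the text dominate the design, not this waveguide-mode limit.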
t_s > δ    (3-13)

The skin depth δ is calculated as [9]

δ = √(2ρ / (ωμ))    (3-14)

where ρ is the resistivity of the conductor, ω is the angular frequency of the current, μ = μ_r μ_0, μ_r is the relative permeability of the conductor, and μ_0 is the permeability of free space. For copper, the resistivity is ρ = 1.68 × 10⁻⁸ Ω·m. At 50 MHz, copper has a skin depth of 9.2 µm, and at 1 GHz, δ is about 2 µm.

Equivalent circuit model

A simple circular loop isolated from the ground is used to explain the reception of the electric and magnetic fields by the loop [10]. Assume that a plane wave impinges on the loop with the magnetic field perpendicular to the plane of the loop and the electric field along the plane of the loop. This configuration maximizes the interaction between the loop, the H field, and the E field. The radiation resistance in series with L and C is neglected in this case. An ideally pure magnetic response and a first-order E-field response are considered:

V_E = πbE    (3-17)

V_H = −jωμπb²H,  2πb ≪ λ₀    (3-18)

The ratio of the H-field response to the E-field response is

V_0H / V_0E = c / (4πbf)    (3-19)

Z_L is the impedance of the receiving device, C_e represents the effect of the E-field coupling, C is the capacitance of the loop, L_s is the self-inductance of the loop, L_g is the self-inductance of the ground, and M is the mutual inductance [11].

Characterization

The results of the simulation and measurement of the probe are discussed in this section. The design of the probe is done using HFSS. A photograph of the fabricated probe is shown in Figure 3-30.

Simulated near field scan results

The probe in Figure 3-29 is simulated above the reference circuit in order to get the magnetic field received by the probe along the y-axis. The simulated tangential magnetic field received by the probe agrees with the theoretical field up to 100 MHz. The levels of the minima do not coincide with the minimum level of the theoretical magnetic field for frequencies above 100 MHz, even though the profile has a similar shape to the original magnetic field.
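Equation 3-14 reproduces the quoted copper skin depths directly; the sketch below uses the standard copper resistivity of 1.68 × 10⁻⁸ Ω·m:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def skin_depth(rho, freq_hz, mu_r=1.0):
    """delta = sqrt(2*rho / (omega*mu)) (eq. 3-14)."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * rho / (omega * mu_r * MU0))

RHO_CU = 1.68e-8  # copper resistivity (ohm*m)
print(skin_depth(RHO_CU, 50e6) * 1e6)  # ~9.2 um
print(skin_depth(RHO_CU, 1e9) * 1e6)   # ~2.1 um
```

Since δ scales as 1/√f, the shield thickness only has to satisfy t_s > δ at the lowest frequency of interest; higher frequencies are then automatically covered.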
This is because, as the dimensions of the probe are very small, the proposed probe has very low sensitivity in the entire frequency band from 1 MHz to 1 GHz, and the noise floor of the vector network analyzer is too high to receive the signals at all positions. Even at frequencies above 1 GHz (the profile at 2 GHz is shown in Figure 3-36), the detected voltage is below the noise floor of the receiver.

Near field scanning with external amplifier

In order to increase the sensitivity of the probe, an amplifier with a gain of 37 dB is used. The field detected at frequencies below 1 MHz remains close to the minimum level, which indicates that the 37 dB gain of the amplifier is not enough to detect frequencies below 1 MHz. The fields are measured only up to 1 GHz because the amplifier is usable up to 1 GHz. The profile of the measured tangential field is in good agreement with the theoretical magnetic field from 10 MHz to 400 MHz. In the frequency band of 400 MHz to 900 MHz, even though the maximum and minimum are correctly detected, there is a small deviation from the theoretical values at the positions of the side lobes. This is expected, due to the 10 cm length of the transmission line in the probe, which is not calibrated. This length of line was unavoidable because of the length of the arm of the robot in the near field test bench.

Effect of the length of the probe and arm of the measurement system

In order to measure probes with a length less than 10 cm, another scanning system was used because of the mechanical limitations of the length of the arm holding the probe.

3.8.4.1 An alternate near field scanning system

A scanning system has been developed at the Pheline lab (at CSTB Grenoble), which consists of a robot, a Teledyne LeCroy WaveRunner 640Zi oscilloscope, a PC and the designed 3D H-field probes. The robot is able to move in 3 axes with a step size of 1mm. As this step size is large, the precision of this scanning system is lower compared to the one presented in section 3.3.
The scanning robot and the oscilloscope are connected to the PC. The PC is programmed to control the movement of the robot and the data collection from the spectrum analyzer. The plastic arm of the robot holds the probe above the DUT. The movement of the robot arm can be precisely adjusted to 0.1 mm/step by a 3D movable controller system. The output of the probe can be connected to any of the channels of the oscilloscope; the amplified output of the probe is connected to the spectrum analyzer inside the oscilloscope. Near field scanning is performed at every desired frequency. The input power from the signal generator is 10 dBm. The tangential field measured by this configuration is plotted in Figure 3-41. Unlike probe SP1-A with its long line, the scanned profiles of probe SP1-B at frequencies above 400 MHz are in very good agreement with the theoretical field. So it is the length of the line in SP1-A which caused the small deviations from the theoretical field in some frequency bands, as suspected. It is not well understood, however, why at frequencies below 400 MHz the side-lobe levels deviate more from the theoretical fields, even though the shape of the profile is preserved.

Design and characterization of 3-axis magnetic field probes

All three components of the magnetic or electric field are necessary for an accurate prediction of radiated emissions [13]. Measuring them, however, requires a long scanning time to complete one surface of a PCB during near field measurements. In [14] a 3D near field scanner for IC chip level measurements is proposed; it consists of a 3D near field scanner with a magnetic probe head. Two separate magnetic field probes (one measuring the normal and one the tangential component of the magnetic field) for measuring three-dimensional fields in high frequency planar circuits are proposed in [15]. Still, these methods do not reduce the scanning time because each component has to be measured separately.
In order to minimize the scanning time, we propose a three-dimensional (3D) scanning system consisting of a near field scanner and a three-axis probe which measures the three components of the magnetic field at the same time. Jomaa has already realized a scanning system with a 3-axis probe consisting of conventional circular loops [16]. These probes suffer from poor spatial resolution because of their larger size and are also difficult to fabricate manually. The proposed 3-axis probe is a printed circuit magnetic probe which has three loops printed inside a substrate in order to measure the three components of the magnetic field simultaneously. Each loop, with dimensions of 3.2 mm x 3.2 mm, is fabricated on both sides of an FR4 substrate with a thickness of 3.2 mm. The probe has an overall dimension of 0.9 mm x 0.9 mm x 3.2 mm. In order to form the loops in the X and Y directions, the top and bottom conductors are connected through a cylindrical via with a radius of 500 µm. The connecting pads for all individual loops are provided on the top surface so that they can be connected to the coaxial cable.

Calibration

The calibration setup used in this work is shown in Figure 3-43. An FCC-TEM-JM1 cell was used to produce calculable electric and magnetic field strengths. It operates in the transverse electromagnetic mode, so that both the E and H field components generated between the septum and the outer conductor have the characteristics of a wave propagating in free space. The field strength can be calculated from the dimensions of the cell, its impedance at the measurement plane and the input power, as shown in the equations below [17]:

E_z [V/m] = √(2 Z_c P_in [watt]) / h_tem [m]   (3-20)

H [A/m] = E [V/m] / η [Ω]   (3-21)

where Z_c is the input impedance of the measuring instrument in ohms, P_in is the input power in watts, h_tem is the distance in meters between the septum and the outer conductor of the TEM cell, and η is the intrinsic impedance of free space in ohms.
The antenna factor can then be calculated by placing the probe inside the TEM cell and measuring the induced voltage V_ind across the loop terminals, according to the following equation:

AF [dB A/m/V] = 20 log(H [A/m] / V_ind [V])   (3-22)

where V_ind is the induced voltage in the loop (in volts), given by V_ind = √(2Z_c …). Differences between the antenna factors of the individual loops are attributed to the coupling between the individual probes in the 3D probe.

Near field scanning

The near field scanning setup of this 3-axis probe is shown in Figure 3. The results show that the field measurements are in good agreement with the theoretical calculations for the 3D probe. A good spatial resolution is also obtained for the probes when measuring the magnetic field components. The volume of the proposed probe is only 15% of that of the previous 3D probe [16]. Compared to conventional circular loop probes, printed circuit probes are easier to miniaturize and less susceptible to damage. The proposed probes also have the advantage of being low cost, compact and easy to handle. The electrical connection between the loop and the coaxial cables is found to be very critical in near field scanning. Long connecting lines printed on the probe are found to distort the scanned profile of the probe. The connecting pad that joins the loop to the coaxial cable is therefore made very small in order to reduce the coupling effects.

Active magnetic probes

There is always a trade-off between the sensitivity and the spatial resolution of the probe. The sensitivity of the probe is reduced when the size of the probe is reduced, especially at low frequencies. Figure 3-48 shows the relation between the size of the probe, the spatial resolution and the sensitivity of the probe at different frequencies. The spatial resolution is directly related to the size of the probe, in such a way that the spatial resolution is half the size of the probe. At low frequencies there is a gradual reduction in the sensitivity of the probe as the size of the probe is reduced.
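Equations 3-20 to 3-22 can be chained numerically. In the sketch below the septum height (45 mm) and the measured induced voltage (1 mV) are hypothetical placeholders, since the actual FCC-TEM-JM1 dimensions are not given in the text:

```python
import math

ETA0 = 376.73  # intrinsic impedance of free space (Ohm)

def tem_cell_field(p_in_w, z_c=50.0, h_tem=0.045):
    """E between septum and outer conductor (eq. 3-20) and H = E/eta (eq. 3-21)."""
    e = math.sqrt(2 * z_c * p_in_w) / h_tem
    return e, e / ETA0

def antenna_factor_db(h_field, v_ind):
    """AF = 20*log10(H / V_ind), eq. 3-22, in dB(A/m per V)."""
    return 20 * math.log10(h_field / v_ind)

# 10 dBm (0.01 W) input; h_tem = 45 mm and V_ind = 1 mV are assumptions.
e, h = tem_cell_field(p_in_w=0.01)
print(round(e, 2), round(h, 4))  # ~22.22 V/m, ~0.059 A/m
print(antenna_factor_db(h, 1e-3))
```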
The active probe AP1-A is characterized using a vector network analyzer from 1 MHz to 10 GHz. The near field map of the tangential magnetic field received by the active probe is plotted. At frequencies up to 10 MHz the detected signal is completely below the noise floor, as was the case with AP1-A. But, unlike AP1-A, the profiles at all frequencies are distorted. So it can be concluded that there is an interaction between the magnetic field of the microstrip and the amplifier circuit. When the amplifier is away from the loop, it is also away from the device under test, so the amplifier disturbs the device under test less. It is therefore not wise to keep the amplifier circuit close to the loop. At the same time it is not possible to keep the amplifier very far away, because this increases the length of the line between the loop and the amplifier and hence the total length of the probe, which in turn reduces the highest operating frequency of the probe. The choice of the position of the amplifier has to be made according to the frequency band in which the probe is required to operate. The reason for the deviation of the performance of the active probes from the passive probes can be explained from the measurement of the scattering parameters of all these probes above the device under test. The S-parameter S21 is a measure of the output voltage of the probe. When the probe is placed at the center of the cylindrical conductor, as in Figure 3-53, the detected output voltage has its maximum value. S21 as a function of frequency is plotted for both active probes and the passive probe in Figure 3-55. The curve of the passive probe plotted here belongs to SP1-A with the external amplifier (ZPUL-30). The oscillations observed are due to the interaction between the fields of the microstrip line and the probe.
For the probe AP1-B there are more oscillations, which are expected to come from reflections between the output of the amplifier and the semi-rigid coaxial cable, because the LNA has a poor output reflection coefficient.

Conclusion

In this chapter we have designed and characterized probes based on microstrip lines, CPW lines and shielded structures. The probes with a single-turn loop have the highest frequency band of operation. The output voltage of the probe is not due to the magnetic field alone; it is the sum of a voltage due to the magnetic field and a voltage due to the electric field. The highest frequency of operation of the probes decreases as the length of the transmission line increases. E-field coupling into the probe has very undesirable effects. Common mode voltages are a major problem and cause large errors in the probe measurements; the electric (E) field coupling into the probe is the major cause of common mode voltages at the probe output. The probes with microstrip and CPW configurations suffer from a common mode response and fail to detect the minimum of the magnetic field in the 100 MHz-1 GHz band. The shielded probe suppresses the voltage due to the electric field, and this rejection of the electric field has also reduced the common mode response. It is observed that, for a single-turn loop, it is not necessary to use shielding below 100 MHz, as such loops are not susceptible to the electric field response. The introduction of the shielded probe resolves this problem to some extent and increases the operating frequency range of the probe. Even with the shielded probe, the total length of the probe (loop + transmission line) plays an important role in the measured results. As the size of the probe decreases, the sensitivity of the probe is reduced, which requires either a receiver with high sensitivity or the use of an amplifier in the measurement.
The frequency range of operation of the probe then depends on the characteristics of the amplifier. For a loop with 800 µm x 800 µm dimensions, all frequencies below 1 GHz require the use of an amplifier. With an external amplifier connected to the probe, the sensitivity of the probe is increased and the profile of the magnetic field is detected correctly. But the active probe introduces more errors in the measurement, depending on the characteristics of the amplifier and on where the amplifier is placed. It may not be necessary to use an active probe to increase the sensitivity, as it introduces other problems and causes spurious errors in the measurement.

Source reconstruction methods for the prediction of EMC

4.1 Introduction

It is necessary to predict the far field radiated from PCBs for the prediction of electromagnetic interference, and the prediction of the far field requires knowledge of the radiating sources. The electromagnetic inverse problem can be defined as the reconstruction of the electromagnetic source from its radiated field. This source distribution can be used to calculate the far field, which is more cost efficient than a direct far field measurement requiring a large anechoic chamber. This chapter deals with methods for reconstructing the sources from near field scan data. Two methods, based on the inverse transmission line matrix and on a 2D cross-correlation method, are presented in the following sections.

EM source reconstruction using the inverse-TLM method

Time reversal: an overview

A device that produces an outgoing wave equal to the time-reversed replica of an incoming wave is defined as a time reversal mirror. It is an array of transceivers and dedicated electronics. In a first step, the time dependence of the electric field is recorded on each transceiver. In a second step, each transceiver plays the signal back in reverse order.
In the case of a time-symmetric propagation medium, the wave propagates back and finally focuses on the initial source location. The time reversal mirror was experimentally realized by D. Cassereau and M. Fink in their laboratory for the 3D time reversal of ultrasonic fields [1][2]. A theoretical model of time reversal cavities in homogeneous and inhomogeneous media is described; it used impulse diffraction theory to obtain the impulse response of the cavity in any inhomogeneous medium. After that, TRMs were developed for acoustic applications such as sound focusing, ultrasonic non-destructive testing, ultrasonic hyperthermia for medical therapy and seismology [2][3]. The acoustic wave equation in a non-dissipative heterogeneous medium is invariant under a time reversal operation; indeed, it only contains a second-order time-derivative operator. Therefore, for every burst of sound ϕ(r, t) diverging from a source (and possibly reflected, refracted or scattered by any heterogeneous medium) there exists in theory a set of waves ϕ(r, -t) that precisely retraces all of these complex paths and converges, in synchrony, at the original source, as if time were going backwards. This is the basic idea behind time reversal in acoustics. Both time-reversal invariance and spatial reciprocity are required to reconstruct the time-reversed wave in a whole volume. In acoustics, a closed time-reversal mirror (TRM) consists of a two-dimensional piezoelectric transducer array that samples the wavefield over a closed surface, as shown in Figure 4-1 [4]. An array pitch of the order of λ/2, where λ is the smallest wavelength of the pressure field, is needed to ensure the recording of all the information in the wavefield. The electronic circuitry connected to each transducer consists of a receiving amplifier, an A/D converter, a storage memory and a programmable transmitter able to synthesize a time-reversed version of the stored signal.
The time reversal technique has more recently been applied in electromagnetics [5][6]. The first experimental demonstration of time reversal focusing of electromagnetic waves was done by G. Lerosey et al. [7]. The theory of monochromatic time reversal for electromagnetic waves was developed in [8]. Figure 4-2 shows a simplified sketch of time reversal with dipolar sources. The applications of time reversal mirrors include radio communications, terahertz imaging and medical imaging [9][10][11]. The decomposition of the time reversal operator (DORT) method is used to detect scatterers in the electromagnetic field [12]. The converging wave is followed by a diverging one [7]. A numerical synthesis technique based on the inverse transmission line matrix was first proposed in [13] to reconstruct scattering sources. It was later used in the synthesis of microwave filters and antennas and to deal with the problem of resolution [14].

Transmission line matrix (TLM) method

The transmission line matrix method is an approach to the simulation of wave motion and other physical phenomena. It is a physical model with an exact computer solution: a modelling process rather than a numerical method for solving differential or integral equations. The TLM method sets up a spatial mesh of interconnected transmission lines on which voltage impulses scatter; at each node the reflected impulses are related to the incident impulses by

V^r = S V^i   (4-1)

where S is a 12x12 matrix with elements S_mn in the m-th row and n-th column.

Figure 4-3 TLM symmetrical condensed node (SCN)

For a condensed node without stubs, the scattering matrix is given as follows:

S = [ 0 1 1 1 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 -1 0 0 0 0 -1 0 1 0 1 0 0 0 -1 0 0 1 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 -1 0 1 0 -1 0 0 0 -1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 -1 0 0 -1 0 1 0 0 0 -1 -1 0 1 0 -1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 0 1 0 ]   (4-2)

In order to excite the separate components of E and H, a possible set of incident pulses is given by the following equations.
V_1^i = (uE_x + wZ_0H_z)/2   (4-3)

V_2^i = (uE_x − vZ_0H_y)/2   (4-4)

V_3^i = (vE_y − wZ_0H_z)/2   (4-5)

The output E and H fields are recovered from the port pulses as follows. The pulses at ports 1, 2, 9 and 12 contribute to the Ex field; the pulses at ports 3, 4, 8 and 11 contribute to the Ey field; and the pulses contributing to the Ez field are at ports 5, 6, 7 and 10. Similarly, the Hx field receives contributions from the pulses at ports 4, 5, 7 and 8, Hy from the pulses at ports 2, 6, 9 and 10, and Hz from the pulses at ports 1, 3, 11 and 12.

Inverse TLM method

The scalar wave equation is given by

(∇² − (1/c²) ∂²/∂t²) Ψ(r̄, r̄₀, t) = 0   (4-22)

The theory of wave-propagation TR is based on the invariance of the scalar wave equation 4-22 under the TR transformation in a lossless space. Here Ψ(r̄, r̄₀, t) is the scalar radiated field, r is the distance vector between the source and the observation point, r₀ is the source position, t is the time and c is the speed of light. This differential equation contains only a second-order derivative with respect to time. Therefore, if Ψ(r̄, r̄₀, t) is a solution of this equation, then Ψ(r̄, r̄₀, −t) is also a solution [23], [24]. In other words, the scalar wave equation remains invariant under the TR transformation in a non-absorbing medium. Maxwell's equations are also symmetrical under time reversal. The radiated field can be written as Ψ(r̄, r̄₀, t) = G(r̄, r̄₀, t) * f(t), where G(r̄, r̄₀, t) is the Green's function, f(t) is the initial time-domain excitation and * is the convolution operator in the time domain. So a change in the initial condition generates the dual solution Ψ(r̄, r̄₀, −t). The impulse scattering matrix of the two-dimensional TLM shunt node, which relates the reflected voltage impulses at time (k + 1)Δt to the incident voltage impulses at the previous time step kΔt, is identical to its inverse.
So we can write

[V1 V2 V3 V4]^r_{k+1} = (1/2) [ −1  1  1  1
                                 1 −1  1  1
                                 1  1 −1  1
                                 1  1  1 −1 ] [V1 V2 V3 V4]^i_k   (4-24)

The scattering matrix is equal to its inverse:

S⁻¹ = S   (4-25)

where

S = (1/2) [ −1  1  1  1
             1 −1  1  1
             1  1 −1  1
             1  1  1 −1 ]   (4-26)

This property of the scattering matrix implies that the TLM process can be reversed without any change in the scattering algorithm. Practically speaking, if a TLM network has been excited by a single voltage impulse at time t = 0, and if after k computational steps the impulses on all branches of the mesh have been computed and stored, one can return to the initial state at t = 0 by reversing the direction of all impulses at t = k and iterating k times. By virtue of the unique property of the scattering matrix, this reverse simulation is identical to a forward simulation in which the initial excitation has been obtained by re-injecting the result of a previous simulation. So one can reconstruct a source distribution from a field distribution by reversing the TLM process in time. This is true for three-dimensional distributed and condensed node algorithms. Thus, an EM radiation process can theoretically be time-reversed without any change; the only difference is that the initial conditions change. The reconstruction of the source is affected only by round-off error, which is typically less than 10⁻⁵ for several thousand computation steps in single precision [13]. They reconstructed the induced sources from the scattered field using the inverse TLM method. Figure 4-5 shows two identical sections of parallel plate waveguide terminated with absorbing boundaries at their ends; the upper one is empty and the lower one contains a thin perfectly conducting septum. Figure 4-6 shows the different steps in the reconstruction process from the scattered field. A Gaussian pulse is used as the excitation signal.
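The self-inverse property S⁻¹ = S and the resulting reversibility of the TLM process can be demonstrated on a small mesh. The sketch below is a minimal 2-D shunt-node TLM run; the port ordering, mesh size and short-circuit walls are choices made for this illustration, not taken from the text. An impulse is propagated forward for k steps, the direction of every pulse is reversed, and the same algorithm run for another k steps recovers the initial excitation:

```python
import numpy as np

# 2-D shunt-node scattering matrix (eq. 4-26); S is its own inverse (eq. 4-25).
S = 0.5 * np.array([[-1,  1,  1,  1],
                    [ 1, -1,  1,  1],
                    [ 1,  1, -1,  1],
                    [ 1,  1,  1, -1]])
assert np.allclose(S @ S, np.eye(4))

# Port order chosen for this sketch: 0 = -x, 1 = +x, 2 = -y, 3 = +y.
def connect(vr, gamma=-1.0):
    """Exchange pulses between facing ports of neighbouring nodes;
    the mesh is closed by short-circuit walls (reflection coefficient gamma)."""
    vi = np.empty_like(vr)
    vi[1:, :, 0] = vr[:-1, :, 1]      # pulse leaving +x arrives at -x of next node
    vi[:-1, :, 1] = vr[1:, :, 0]
    vi[:, 1:, 2] = vr[:, :-1, 3]
    vi[:, :-1, 3] = vr[:, 1:, 2]
    vi[0, :, 0] = gamma * vr[0, :, 0]  # wall reflections
    vi[-1, :, 1] = gamma * vr[-1, :, 1]
    vi[:, 0, 2] = gamma * vr[:, 0, 2]
    vi[:, -1, 3] = gamma * vr[:, -1, 3]
    return vi

k = 12
vi0 = np.zeros((7, 7, 4))
vi0[3, 3, :] = 1.0                    # impulse excitation at the centre node

vi = vi0.copy()
for _ in range(k):                    # forward TLM run: scatter, then connect
    vi = connect(vi @ S)

vr = vi.copy()                        # reverse the direction of every pulse
for _ in range(k):                    # the same algorithm run backwards
    vr = connect(vr) @ S

assert np.allclose(vr, vi0)           # the initial excitation is recovered
print("impulse refocused at node (3, 3)")
```

Because both the scattering step and the connection step are their own inverses, the reversed run retraces the forward run exactly, up to round-off.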
This procedure differs from the previous one in that the forward response is not simply reversed at all nodes in the network; instead, the response is first picked up at a limited number of output points and then re-injected after subtracting the incident field. The resolution of the reconstruction was directly proportional to the mesh parameter Δl. ((d)-(e): the perspective, side and front views of the maximum field after re-injection [13].) Another synthesis procedure for conducting scatterers is proposed in [27]. They also demonstrated source reconstruction for a simple discontinuity inside a parallel plate waveguide, shown in Figure 4-7, which consists of a thin conducting obstacle at the center. Their synthesis method consists of the following stages: (1) a time-sampled Gaussian pulse is injected into the empty waveguide and the pulses incident on both absorbing boundaries are stored using a forward analysis; (2) the same pulses are stored with the discontinuity in place, again using a forward analysis; (3) as a final step, the two impulse responses are subtracted from each other, yielding a particular solution, and this output function is then time reversed and applied to both ends of the waveguide. The number of computational steps was kept the same in all stages. Different aspects of the contour of the obstacle were determined from different field components: the highest value of the normal electric field occurs at the surface of the obstacle; the position of the sharp edges of the scatterer was determined from the longitudinal magnetic field components; and the shape of the reconstructed obstacle was extracted from the Poynting vector distribution [27]. A. Ungureanu used the time-reversed TLM method to reconstruct an unknown source from its far field radiation [18]. An additional resolution improvement step is performed in order to overcome the spatial resolution limit of half a wavelength.
In order to validate the source reconstruction using the inverse TLM algorithm, the developed method is applied to ideal sources with different polarizations. The sources are considered to be above a ground plane, since a real PCB will have a large ground plane.

Ey polarization

Excitation by an electric field corresponds to a voltage source. A distributed voltage source Vy is excited at the center of the XZ plane. The metallic ground is provided at 1. The reconstructed field has a positive and a negative maximum along the x-axis with the same magnitude. Along the z-axis the field has only one peak, and there are no alternating maxima and minima.

Ez polarization

Similarly, a voltage source with z polarization is excited, as in Figure 4-16. The length of the source is 2 mm, the same as before. We can now conclude that, for a tangential voltage source, the distribution of the normal component can be used to determine the orientation of the source. If the source is oriented along the x-direction, the normal component of the E field has alternating maxima and minima along the x-axis; if the source is oriented along the z-axis, the alternating maxima and minima of the normal electric field component are found along the z-axis.

Current source Jy

A circulating magnetic field produces a current at its center, and vice versa. A current source Jy is excited at the center of the TLM mesh, as shown in Figure 4-18. The length of the current source is 3 mm and it is oriented along the y-direction. The current source is simulated using the direct TLM method. The E and H fields 1 mm above the current source are recorded in the XZ plane. These fields are time reversed and inverse-propagated towards the source. The number of time steps in the forward and reverse computations is kept the same. As it is a vertical current source, the tangential components of the magnetic field give the information about the current source.
The reconstructed tangential magnetic fields Hx and Hz are plotted at 2.5 mm above the ground plane. We can see that the Hx component alternates along the x-direction and the Hz component alternates along the z-direction, forming a circulating magnetic field. From this we can determine that the current source is a vertical current Jy located at the center of the alternating magnetic fields Hx and Hz.

The time reversal mirror is located 1 mm above the current source, and the electric and magnetic fields recorded at 1 mm are applied in the TRM. We know that the plane in which the source is reconstructed is tangential to the source. As the source is a tangential current source, the circulating magnetic field cannot be found in the tangential plane. The maximum amplitude of the magnetic field is found for the reconstructed normal component Hy in the plane containing the source. The reconstructed normal component of the magnetic field is shown in Figure 4-21. From the cross-sectional views along the X and Z axes along the dotted lines (Figure 4-21 (b)), it is clear that the magnetic field has a positive and a negative maximum along the x-axis. This implies that there is an alternating magnetic field along the x-direction, which in turn indicates a current along the z-direction. These results show that the normal component of the current can be identified from the two tangential magnetic field components, and that a tangential current can be identified from the normal component of the magnetic field, depending on the direction in which the magnetic field alternates. For a current along the x-direction, the normal component alternates along the z-direction, and vice versa.

Effect of near field scanning height on source reconstruction

In order to investigate the effect of the height between the source and the near field scanning plane, the simple example of the voltage source Vy discussed before is considered here.
The electromagnetic fields are recorded at different heights (d = 1 mm, 2 mm, 3 mm, 4 mm) using the forward simulation. The source is reconstructed using the data from each height separately with the inverse propagation. The reconstructed E field in the plane containing the source is plotted in Figure 4-24 (a)-(d). The original field distribution at the position of the source, obtained with the forward simulation, is compared with the reconstructed field at the same position. It is clearly seen from the figures that the width of the reconstructed pulse increases as the height d increases from 1 mm to 4 mm, and that the increase in width of the reconstructed field is proportional to the height d.

Spatial resolution

The spatial resolution is the smallest distance between two sources that can still be reconstructed by the inverse method. Two voltage sources Ey, separated by a distance s as shown in Figure 4-25, are simulated using forward propagation. Inverse propagations are made using the fields recorded 1 mm above the sources for each value of s used in the forward propagation. The number of time steps in the forward and inverse propagations is the same in all cases. The reconstructed field is normalized with respect to its maximum value for comparison. When the separation between the sources is 2u, which is equal to 2 cm, the reconstructed E field is not able to distinguish between the two sources: there is only one peak, in the middle of the two sources, which indicates a single voltage source located midway between the two actual sources. When the distance between them is increased to 3 cm (3u), two peaks are found in the reconstructed field at the positions of the sources. When the distance between the sources is increased further, the sources are reconstructed more accurately. The factor which limits the resolution of the reconstruction is the mesh parameter (u and v in this example).
In order to reconstruct the sources more accurately, the mesh parameter has to be reduced; that means the number of spatial sampling points in the near field recording has to be increased. A signal and its discrete Fourier transform form a pair, in the sense that there is a one-to-one correspondence between the two [28]. EMC measurements are carried out in the frequency domain. The near field scan (NFS) data contain the magnitude and phase of the magnetic field components at different frequencies. As the inverse transmission line matrix method is based on the time reversal of electromagnetic waves, it is not possible to apply the scanned data directly to its mesh. The discrete Fourier transform (DFT) of a time domain signal gives the corresponding frequency domain signal, and this is applicable in the case of the TLM method as well [28], [29]. By the properties of the Fourier transform, a delayed signal corresponds to a linear phase shift in the frequency domain. The 2-D cross-correlation between two functions X and H is defined as

C(k, l) = Σ_m Σ_n X(m, n) H̄(m − k, n − l)   (4-29)

where the bar over H denotes complex conjugation. For example, consider the 2-D cross-correlation of

X = [1 1; 1 1; 1 1],   H = [1 4; 2 5; 3 6]

The cross-correlation between X and H gives the matrix C by evaluating the double sum for every shift (k, l); only the overlapping terms contribute, all other terms in the double sum being equal to zero.

Formulation of the equations

The cross-correlation of the measured magnetic field with the magnetic field produced by a known current element indicates the presence of that current element in the measured magnetic field. So, in equation 4-29, the function X can be regarded as the measured magnetic field and the function H as a standard function. By the equivalence theorem, any current source can be determined from its tangential magnetic field components [30]:

J = n̂ × H   (4-30)

where H is the tangential magnetic field. Equation 4-29 can then be modified as

C(k, l) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} X_pt(m, n) H̄_pt(m − k, n − l)   (4-31)

where X_pt is the tangential magnetic field radiated from the DUT and H̄_pt is the complex conjugate of the standard function.
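A minimal sketch of the cross-correlation of eq. 4-29/4-31: the "measured" field X below is a hypothetical map containing a shifted copy of the template H, and the peak of |C| recovers the shift:

```python
import numpy as np

def xcorr2(X, H):
    """Full 2-D cross-correlation C(k,l) = sum_{m,n} X(m,n)*conj(H(m-k, n-l)), eq. 4-29."""
    M, N = X.shape
    P, Q = H.shape
    C = np.zeros((M + P - 1, N + Q - 1), dtype=complex)
    for k in range(-(P - 1), M):
        for l in range(-(Q - 1), N):
            s = 0j
            for m in range(M):
                for n in range(N):
                    mk, nl = m - k, n - l
                    # only overlapping terms contribute to the double sum
                    if 0 <= mk < P and 0 <= nl < Q:
                        s += X[m, n] * np.conj(H[mk, nl])
            C[k + P - 1, l + Q - 1] = s
    return C

# Hypothetical "measured" field: a copy of the template H placed at offset (2, 1).
H = np.array([[1, 2], [3, 4]], dtype=float)
X = np.zeros((5, 5))
X[2:4, 1:3] = H

C = xcorr2(X, H)
k, l = np.unravel_index(np.argmax(np.abs(C)), C.shape)
print(k - (H.shape[0] - 1), l - (H.shape[1] - 1))  # -> 2 1
```

The peak of |C| sits at the shift for which the template fully overlaps its copy, which is how the method localizes a known current element in the scan.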
In 3-D space a current element has two tangential magnetic field components, since magnetic monopoles do not exist. The above equation can then be written as

C(k, l) = [Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} X_pt1(m, n) H̄_pt1(m − k, n − l)] × [Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} X_pt2(m, n) H̄_pt2(m − k, n − l)]   (4-32)

where H̄_pt1 and H̄_pt2 are the standard functions for the two tangential magnetic field components, each being the complex conjugate of the tangential magnetic field calculated using 4-29. The suffix p indicates the orientation of the dipole (x, y or z), and t1 and t2 stand for the two tangential magnetic field components for each dipole configuration. So, for a dipole oriented along the z-axis, the suffixes t1 and t2 correspond to the x and y components of the magnetic field. As the current is oriented along the z-axis, the vector potential of the current element can be written in spherical coordinates as

A_p = (r̂ cos θ − θ̂ sin θ) (μ₀ d / 4πr) I e^{−jkr}   (4-34)

The magnetic field is then calculated by applying the curl of the vector potential:

H = (1/μ₀) ∇ × A_p, with the curl evaluated in spherical coordinates   (4-35)

H⃗ = φ̂ j (k I d / 4πr) e^{−jkr} [1 + 1/(jkr)] sin θ   (4-36)

The E field is then obtained by applying Faraday's law:

E⃗ = j (k I d η₀ / 4πr) e^{−jkr} { r̂ [1/(jkr) + 1/(jkr)²] 2 cos θ + θ̂ [1 + 1/(jkr) + 1/(jkr)²] sin θ }   (4-37)

The standard function in rectangular coordinates can be calculated by applying a spherical-to-Cartesian conversion. The standard function on the observation plane is then calculated using equation 4-36. The standard function used here is only one-dimensional (1-D); that is, the magnetic fields are calculated either along the x-direction or along the y-direction, depending on the current source to be found. The position of the source used for the standard function is shown in Figure 4-42. The source is assumed to be at (5, 5). The window size of the standard function is chosen as 9 mm along each direction.
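A sketch of evaluating the standard function on the observation plane: the magnetic field of a short z-directed current element, H_φ = j kId/(4πr) e^{−jkr}(1 + 1/(jkr)) sin θ, sampled over a 9 mm window 14 mm above the element as in the text. The frequency, current and element length below are hypothetical values chosen only for the illustration:

```python
import numpy as np

def h_phi(r, theta, k, current=1.0, d=1e-3):
    """H_phi of a short current element I*d oriented along z:
    j*k*I*d/(4*pi*r) * exp(-j*k*r) * (1 + 1/(j*k*r)) * sin(theta)."""
    return (1j * k * current * d / (4 * np.pi * r) * np.exp(-1j * k * r)
            * (1 + 1 / (1j * k * r)) * np.sin(theta))

f = 1e9                                   # hypothetical frequency
k = 2 * np.pi * f / 3e8                   # free-space wavenumber
z = 14e-3                                 # observation plane 14 mm above the element
x = np.linspace(-4.5e-3, 4.5e-3, 19)      # 9 mm window, as in the text
r = np.sqrt(x**2 + z**2)
theta = np.arccos(z / r)                  # angle from the z-axis
standard = np.conj(h_phi(r, theta, k))    # complex conjugate -> standard function
print(standard.shape)                     # -> (19,)
```

In the far zone the 1/(jkr) term becomes negligible and |H_φ| falls off as 1/r, which is a quick sanity check on the implementation.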
The fields at 14 mm above the current element are plotted in Figure 4-43 for all three orientations, i.e., for a current source along the x-direction (Jx), a current source along the y-direction (Jy) and a current source along the z-direction (Jz). The developed algorithm is able to identify and localize the elementary current sources. As the method uses scalar multiplication, the phase of the detected current may not be the same as that of the original current, but the polarization and the position of the elementary current sources are found accurately. In order to validate the feasibility of the developed source reconstruction method, a bend in a monopole on an air substrate is taken as a PCB and is simulated using CST Microwave Studio. The structure is excited by 1 V through a coaxial cable below the PCB (Figure 4). To validate the procedure, the current source above a ground plane discussed before is again considered here as the device under test. The equivalent source is calculated using the 2D cross-correlation method.

Two current sources

Conclusions

The inverse TLM method can be used to reconstruct the fields in any plane above the source and below the near field scanning plane. In the case of a known distribution of sources, this method can be used to identify the sources. In the case of voltage sources, the source can be found from the maximum value of the normal component of the electric field. An alternating normal component of the electric field is found only for a tangential voltage source, and the direction in which it alternates gives the orientation of the source. With this method, the equivalent current could only be found accurately for a vertical current source in an arbitrary PCB, where there is no knowledge about the sources. The newly developed cross-correlation method overcomes this limitation of the inverse propagation method.
The cross-correlation method is applied directly to the near field scan data. The method is not applied to the inverse-propagated fields because the electric and magnetic fields become wider than the original field after inverse propagation. The spatial resolution of the reconstruction depends on the mesh size and the wavelength. So, as the number of spatial points in the near field scan data increases, the sources are reconstructed more accurately. This illustrates the need for a probe with high spatial resolution.

The developed cross-correlation method is validated for ideal current sources. It is found that the procedure can accurately determine the position and orientation of the source. The method is also validated for multiple current sources. The window size of the standard function is optimized to a one-dimensional window of size 9 mm. The accuracy of the equivalent source decreases when the window size of the standard function increases.

The final objective of the developed source reconstruction methods is to apply them to real PCBs. In order to validate the procedure experimentally, we have designed a few demonstrators. Two of them are shown in Figure 5-2. The demonstrator in Figure 5-2 (a) is a loop above a metallic ground plane, fed by a coaxial cable at one end and terminated by a 50 Ohm impedance at the other end. The 2-D cross-correlation will not be able to distinguish between the positive and negative currents. So, the equivalent current source for this PCB may appear as two vertical dipoles with the same phase (Jz) rather than as one horizontal dipole. As the phases of the sources are the same, the currents due to the vertical dipoles (+Jz and -Jz) will not cancel in the calculation of the far field, which will lead to some errors in the predicted radiated field. B.
Grounded CPW transmission line

The effective dielectric constant and characteristic impedance for the conductor-backed CPW are calculated as follows for zero strip thickness (t=0). For a finite strip thickness t, the characteristic impedance is calculated as follows [5].

Résumé

As the number of components increases, there is a strong demand to identify the radiating sources for the electromagnetic compatibility prediction of

Figure 1-9 Setup of Faraday cage measurement for conducted emissions [4]
Figure 2-1 [1]

In the first configuration, the measurements are done with a single probe controlled by a precise positioning system. The second one uses a planar array of probes terminated with 50 Ohm impedances. With this probe array, the DUT can be scanned in a single measurement or in a small number of measurements, depending on the area of the DUT and of the probe array. In this setup, the presence of a large number of probes induces first-order disturbances to the measured field. The former configuration, which uses a single probe, is the widely used method for near field scanning [2][3].

Figure 2-1 Probe setup in near field measurement [1]
Figure 2-2 Schematic layout of active electro-optical dipole antenna [6]

The probe of Figure 2-2 is presented in [6]. The probe consists of a loop antenna element doubly loaded with LiNbO3 EO crystals. One of the LiNbO3 crystals has two domains whose optical axes are directly opposite to each other. Using optical technology, the probe can work as a conventional double-loaded loop probe without metallic cables and electrical hybrid junctions.
An electro-optical probe which uses an electro-optical crystal (shown in Figure 2-3) is available commercially.

Figure 2-4 Model for dipole receiving probe [10][11]
Figure 2-5 Waveguide probe [13]
Figure 2-8 Magnetic probes fabricated on different technologies: (a) coaxial probe with elliptical
Figure 2-9 Different printed circuit probes: (a) loop with a tapered CPW transmission line [23]
Figure 2-10 Different ways of modelling emissions from near field measurement [1]
Figure 2-11 Equivalent source reconstruction problem: (a) measurement schematic; (b) the equivalent current source domain is the whole surface of the original PCB, and it is solved by matching the measured tangential magnetic fields over two planes S1 and S2 [Li, "Source reconstruction method-based radiated emission characterization for PCBs"]
Figure 3-1 Parameters of the probe
Figure 3-2

The probe is connected to the spectrum analyzer through the arm of the robot, on which it is mounted; the 5-axis robot is shown in the photograph of the near field test bench in Figure 3-3. A computer monitors the probe displacement over the device under test and acquires the data provided by the spectrum analyzer. The maximum scanning area is of size 200 cm (x) x 100 cm (y) x 60 cm (z), with a mechanical resolution of 10 µm in the three directions (x, y and z) and 0.009° for the two rotations. To ensure high resolution, the probe must be close to the surface of the device under test due to the coupling with evanescent waves.

Figure 3-3 Near field test bench
Figure 3-5 The tangential component of magnetic field (Hy) at 10 MHz
Figure 3-6 The normal component of magnetic field (Hz) at 10 MHz

3.5 Shielded magnetic probes

Figure 3-7 Schematic of a probe
Figure 3-8 Structure and geometrical dimensions of the probe MP-1: l=3mm, w=3mm, g=0.2mm, h=0.8mm, εr=4.4
Figure 3-11 Structure of the probes: (a) single layer double loop probe MP-2; (b) single layer single loop probe MP-3; l=3mm, w=3mm, g=0.2mm, h=0.8mm, εr=4.4
Figure 3-13 Effect of transmission lines of the probes MP-4 and MP-5

3.5.5 Characterization of microstrip probes

The probe MP-4 is characterized in the frequency range of 30 kHz-1 GHz. The measured tangential magnetic field Hy of the probe is plotted in Figure 3-15 by applying the antenna factor.

Figure 3-15 Measured tangential field components of the microstrip probe at different frequencies
Figure 3-16 Cross section of a CPW transmission line
Figure 3-18 Air bridge connection between the grounds

Figure 3-22 shows the fabricated prototypes of the CPW probe and the GCPW probe.

Figure 3-21 Structure of the coplanar probes (simulated view): (a) conventional CPW probe; (b)
Figure 3-22 Fabricated probes with CPW and GCPW transmission lines
Figure 3-23 Measured tangential magnetic field components of the CPW and GCPW probes at different frequencies
Figure 3-24 Schematic of the loop above the reference circuit
Figure 3-28 (a) EM field reception; (b) simplified equivalent circuit model of the receiving loop
Figure 3-29 Equivalent circuit representation of the shielded magnetic probe: (a) shielded probe above the microstrip trace; (b) equivalent circuit; (c) alternative representation

Figure 3-29 (a) shows the simulated view of the probe above the reference circuit. Figure 3-29 (b) shows the equivalent circuit representation of this configuration. Rs is the source impedance and RL is the load impedance connected to the microstrip line.
Figure 3-30 Photograph of the fabricated probe SP1-A
Figure 3-31 Gain curve of the amplifier ZPUL-30 [12]
Figure 3-32 Measured and simulated antenna factor of the probe SP1

The fields are plotted in Figure 3-34 by applying the simulated antenna factor. The detected profile of the tangential magnetic field is in good agreement with the theoretical fields of the microstrip at all frequencies from 10 MHz to 3 GHz. The simulated fields below 10 MHz are not plotted because the antenna factor below 1 MHz is very high and the probe is expected to have very low sensitivity.

Figure 3-34 Simulated tangential magnetic field of probe SP1 at 1.7mm
Figure 3-36 Measured tangential field received by the probe at different frequencies without any amplifier

The amplifier is connected immediately at the terminals of the probe as shown in Figure 3-37. The near field scanning is again performed at the same height (at d=1.7mm).

Figure 3-37 Near field scanning setup with an external amplifier
Figure 3-38 Measured tangential component of magnetic field (Hy) with amplifier
Figure 3-39 Alternate near field scanning system

3.9.1 Design and characterization

3.9.1.1 Structure of the probe

The 3-axis printed circuit probe consists of 3 rectangular loops oriented in 3 different directions X, Y and Z. Every single loop is designed to measure one component of the magnetic field. Figure 3-42 shows the fabricated 3-axis probe along with the 3D view. The loop oriented in the y-direction measures the x component of the magnetic field and the loop oriented along the x-direction measures the y component of the magnetic field. The loop in the XY plane measures the normal component of the magnetic field.
Figure 3-42 Structure of the 3-axis probe, l = w = z1 = 3.2 mm
Figure 3-43 Calibration setup with TEM cell

During the near field scan, the z-axis is normal to the scanned plane. Measurements were done at different heights and frequencies. The scanning is done along the y-axis with a step width of 1 mm. The computed theoretical field values are compared with the measured ones at 13 MHz and 400 MHz, and they are presented in Figure 3-46 and Figure 3-47 respectively. The normal and tangential components of the magnetic field are plotted at d = 0.5mm and 2.5mm, where d is the distance from the top of the cylindrical conductor to the bottom side of the probe when placed as shown in Figure 3-45.

Figure 3-45 Configuration of near field scanning
Figure 3-48 Graph showing the relation between the size of the probe and the sensitivity at various frequencies
Figure 3-50 Measured gain and reflection coefficient of the amplifier

The measured fields are plotted in Figure 3-52 at different frequencies. The detected profile up to 100 MHz is under the noise of the network analyzer. The 27 dB gain of the amplifier is not enough to detect the signals below 100 MHz. The distorted profile at 10 MHz is completely below the noise floor of the receiver. From 100 MHz-500 MHz, even though the profile is detected, the points near the minimum are close to the noise floor. But the profiles of the magnetic field from 800 MHz-1 GHz are lost. This is due to the bad input and output reflection coefficients of the amplifier from 500 MHz to 1 GHz.
Figure 3-52 Measured tangential magnetic field of active probe AP1-A
Figure 3-53 Active probe (AP1-B) and measurement configuration
Figure 3-55 Comparison of the measured S-parameters of the active and passive probes at the centre of the microstrip line
Figure 4-1 Time reversal in acoustics: (a) Recording step: a closed surface is filled with transducer elements. A point-like source generates a wavefront which is distorted by heterogeneities. The distorted pressure field is recorded on the cavity elements. (b) Time-reversed or reconstruction step: the recorded signals are time-reversed and re-emitted by the cavity elements. The time-reversed pressure field propagates back and refocuses exactly on the initial source. [4]
Figure 4-2 Schematic illustration of the far field time reversal by dipolar sources: (a) A source generates a wave. The electric components are recorded on a closed surface. (b) This electric field is time reversed and re-emitted by dipolar sources, and the back-propagated wave is generated. (c) The converging wave is followed by a diverging one [7].

[15][16][17][18][19] and time discretization scheme. It substitutes the original medium by an analogous transmission line mesh in which voltage and current pulses propagate in the same way as the electromagnetic waves would propagate in the original medium.

The expanded node network was used for a variety of applications until the 1990s. It is a network of transmission lines developed by interconnecting two-dimensional series and shunt nodes. As these nodes have a time delay of half a time step, they are called expanded node networks. The topology of the expanded node network and the finite difference method is quite complicated. The nodes that calculate the different field components are spatially separated.
Due to this, the data preparation for the modelling of boundaries is difficult and liable to error, and the problem is acute in the implementation of automatic data preparation schemes. The inconvenience of an expanded node structure led to the development of a condensed node structure by P. Saguet and E. Pic in 1982 [20]. Depending on the direction of view, the first connection in the node is either shunt or series. Hence, the node is said to be an asymmetrical condensed node. Even though the effect of these asymmetries is insignificant, the boundaries viewed in one direction have slightly different properties when viewed in another direction. Although it involves quite lengthy arithmetic, the asymmetrical condensed node technique uses fewer resources than the expanded node technique. The symmetrical condensed node (SCN) developed by P.B. Johns eliminates the disadvantages of asymmetry and lengthy arithmetic while preserving the advantages of condensed node operation. He developed the scattering matrix for the SCN with and without stubs. The absence of stubs means that the node represents only a cubic block of the Cartesian mesh: extra inductance and capacitance cannot be added locally to the node. The node with stubs can be used in inhomogeneous problems [21][22]. The symmetrical condensed node without stubs is shown in Figure 4-3. The two polarizations in any direction of propagation are carried on two pairs of non-coupling transmission lines. There are 12 transmission lines, and each of them has a characteristic impedance Z0 equal to that of free space. Since these lines link the Cartesian mesh of nodes together, they are called link transmission lines. The incident and reflected pulses appear on the terminals of the transmission lines. The 12 incident pulses on the link transmission lines produce scattering into 12 reflected pulses. The scattering is defined by
4.2 EM source synthesis with the reverse TLM method: state of the art

EM point source reconstruction using the radiated fields outside the region containing the sources has been studied in [25][26]. The different phases of forward and backward simulation of an impulsive point source using inverse TLM are shown in Figure 4-4.

Figure 4-4 Four phases of a forward/backward simulation involving (a) the excitation of a lossless structure by a single impulse after the first iteration, (b) the field in the structure after 20 iterations, (c) the field in the structure after 100 iterations, and (d) the reconstructed source after 100 inverse computation steps [13].
Figure 4-5 Top views of two parallel plate waveguide sections matched at both ends. The upper section is empty and the lower section contains a conducting septum [13]

Figure 4-8 (a) shows the image from which the conducting septum is reconstructed.

Figure 4-7 Parallel plate waveguide: (a) empty waveguide yielding the homogeneous field solution; (b) waveguide with an obstacle [27]

Figure 4-9 shows the result of the reconstruction of two point sources. The first figure (a) is obtained after the first reconstruction process and the second figure (b) is obtained after the resolution improvement step. With the resolution improvement procedure, they have overcome the spatial resolution limit of half a wavelength. They have also applied the inverse TLM method to reconstruct a monopole antenna from its far field radiation pattern. The reconstructed normal magnetic field component of the monopole is shown in Figure 4-10. The position and orientation of the current distribution are determined from the reconstructed field.

Figure 4-9 Reconstructed sources: (a) after the coarse reconstruction step; (b) after the resolution improvement step [18]
Figure 4-11 Excitation signal used in the TLM mesh

5 mm below the source (Figure 4-12).
Figure 4-12 TLM cell with an ideal voltage source Ey

Figure 4-17 shows the reconstructed electric fields at the plane of the source.

Figure 4-16 TLM cell with ideal voltage source Ez
Figure 4-18 The vertical current source in the TLM mesh, d=1mm
Figure 4-20 Horizontal current source Jz in the TLM mesh, d=1mm
Figure 4-22 Horizontal current source Jx in the TLM mesh, d=1mm
Figure 4-24 Results of the reconstruction of the voltage source Vy using fields from a different height of the near field plane, d
Figure 4-25 Two voltage sources separated by a distance s

Figure 4-26 (a) to (c) shows the real component of the reconstructed E fields for S=2mm, 3mm and 4mm respectively. The reconstructed fields are compared with the original field present at the source position, which is obtained using direct propagation. The E field values at the position containing the source, obtained after direct and inverse propagation, are

Figure 4-26 Reconstructed normal component of the E field of two voltage sources Vy with a distance of separation s between them: (a) S=2cm; (b) S=3cm; (c) S=4cm

A time shift in the time domain corresponds to a phase-shifted signal in the frequency domain:

F[x(t ± t0)] = X(jω) e^{±jωt0}

Figure 4-27 (a) Sinusoidal Gaussian pulse used for superposition; (b) frequency spectrum

Figure 4-27 shows the sinusoidal Gaussian pulse used as the reference signal. The time period of this

Figure 4-30 Monopole above a ground plane
Figure 4-31 Simulated magnetic fields at 2mm above the monopole
Figure 4-37 Circulation of the magnetic field in the XZ and YZ planes: (a) XZ plane; (b) YZ plane

The C(1,1) element in the output corresponds to C(1-3, 1-2) = C(-2, -1) in the defining equation, which uses zero-based indexing. The C(1,1) element is computed by shifting H two rows upward and one column to the left. Accordingly, the only product in the cross-correlation sum is X(1,1)*H(3,2) = 6. Using the defining equation, we obtain

C(-2, -1) = Σ_m Σ_n X(m, n) H̄(m + 2, n + 1) = X(0, 0) H̄(2, 1) = 1 × 6 = 6

pt1 and pt2 correspond to the x and y components. The individual summation terms in 4-32 are 2-D cross-correlations. The position of the equivalent source is the position at which the product of the cross-correlations has a peak value.

4.4.2.1 Derivation of the standard function

Consider the short current filament illustrated in Figure 4-38. A current with amplitude I and angular frequency ω extends from -dl/2 to dl/2. Of course, a real current element couldn't start and stop abruptly like this; however, realistic current distributions can be modeled by a superposition of current elements like this. The potential due to this current element at a point p can be expressed as [31].

Figure 4-38 A current element oriented along the z axis
Figure 4-41 For a current source Jy
Figure 4-39 Parameters of the source in the standard function
Figure 4-40 Simulated structure of the current source (Jz) from CST
Figure 4-42 Position of the source used for the standard function
Figure 4-45 Calculated equivalent source
Figure 4-50 Two current sources separated by a distance of 3cm

4.4.4.5 Effect of the window size of the standard function

In order to understand the influence of the window size of the standard function on the developed algorithm, the example of two current sources discussed in the previous section is taken as the case study.
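The zero-based worked example above, C(-2, -1) = X(0, 0) H̄(2, 1) = 6, can be checked numerically. The matrices below are hypothetical, chosen only so that X(0, 0) = 1 and H(2, 1) = 6 and hence the single overlap term matches the one in the text:

```python
import numpy as np
from scipy.signal import correlate2d

X = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])          # X[0, 0] = 1
H = np.array([[9, 8],
              [7, 5],
              [4, 6]])             # H[2, 1] = 6  (H real, so H-bar = H)

# Full cross-correlation: size (M+K-1, N+L-1) = (3+3-1, 3+2-1) = (5, 4).
# Its [0, 0] element is C(-2, -1): H shifted two rows up and one column
# left, leaving the single product X[0, 0] * H[2, 1] = 1 * 6 = 6.
C = correlate2d(X, H, mode="full")
print(C.shape, C[0, 0])           # -> (5, 4) 6
```

This confirms that the corner elements of the full cross-correlation correspond to the maximally shifted overlaps of the defining equation.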
At first, a 2-D window of size 9mm x 9mm, as shown in Figure 4-52, is taken for the standard function. It shows the normalized value of the magnetic fields at 14mm above the ideal currents. In this case, the source is assumed to be at the center of the window (5, 5). The equivalent source obtained with this window is shown in Figure 4-53. It is able to detect the equivalent sources correctly, in the same position as that of the original sources. The amplitude of the equivalent source is very high compared to that of Figure 4-51, which is obviously due to the larger number of cells taken for the cross-correlation. Another important fact to be noted is that the width of the calculated equivalent source has increased.

Figure 4-52 Standard functions for a window size of 9mm x 9mm
Figure 4-57

The equivalent source is identified as a Jx and a Jy, i.e., a current along the x axis and a current along the y axis. The amplitude of the reconstructed current is not the same as the original source current, but the position and orientation of the source are identified correctly. The two major peaks appear as equivalent currents Jx and Jz. But there are also two small peaks found for the equivalent current Jy which do not correspond to the original source. In this case, it is required to incorporate knowledge about the device under test in order to determine the equivalent sources correctly. The electric field radiated by this equivalent source at 1 m is calculated in the next section.

Figure 4-56 Simulated magnetic field at 2mm above the PCB
Figure 4-58 Equivalent sources on PCB

The equivalent source represents the source distribution which gives the same far field as that radiated by the original source. The equivalent sources are identified either as currents Jx, Jy, Jz or as a combination of these currents, as in Figure 4-58.
Knowing the equivalent currents, it is possible to calculate the fields radiated from the sources at any distance away from the source. In order to validate the reconstructed equivalent source, it is necessary to predict the radiated fields, which involves the calculation of the far field radiated by the equivalent source. The radiated field is calculated from the current density using the far field integrals discussed in the following section.

Figure 4-59 Coordinate system for far field calculation

The radiated far field can be found using the far field approximations.

Figure 4-61 Calculated E field of the monopole at 1m

The radiated E field at 1m is plotted in Figure 4-61. The E field is normalized in each plane. The maximum radiation is in the XY plane. The radiation pattern is the same as that of a dipole oriented

Figure 4-62 Comparison of the calculated magnetic field with the simulated magnetic field of a bent monopole
Figure 5-1 Layout of some passive probes under fabrication (view from ADS Momentum)
Figure 5-2 Validation circuits under fabrication, 3D view: (a) loop above a ground plane; (b) PCB with an oscillator

In reality, the PCBs are assumed to have more current loops. In order to have an accurate equivalent model of these loops, it is necessary to represent them in terms of the magnetic dipole current. So, modifying the developed method to represent the sources in terms of the magnetic dipole current can accurately predict the behavior of these demonstrators. After validating the method for the demonstrator in Figure 5-2 (a), we plan to apply it to more complex PCBs containing oscillator circuits, as shown in Figure 5-2 (b).

Its characteristic impedance for a zero-thickness strip is determined by the following conformal mapping formulas [4]:

k' = √(1 − k²)

electronic circuits.
Scanning a probe in close proximity to the circuit is a general method for identifying the radiating sources in a PCB. The first part of the thesis consists of designing and characterizing magnetic probes with high sensitivity and high spatial resolution. Conventional probes based on microstrip line and coplanar configurations are studied. As the length of the transmission line connected to the probe increases, the noise on the output signal increases because of common-mode voltages induced by the electric field. In order to suppress this voltage induced by the electric field, a shielded magnetic probe is designed and fabricated using a low-cost printed circuit board (PCB) technology. The performance of the passive probe is validated in the 1 MHz-1 GHz band. The shielded probe is fabricated on an FR4 substrate with a thickness of 0.8 mm and consists of 3 layers, with the signal in the middle layer and the top and bottom layers dedicated to ground planes. The aperture size of the loop is 800 µm x 800 µm, with an expected spatial resolution of 400 µm. The high sensitivity of the probe is obtained by integrating a low-noise amplifier at the output of the probe, making it an active probe. The performance of the shielded probe with different lengths of transmission lines is studied. A three-axis probe capable of measuring the three components of the magnetic field is also designed and validated by near field scanning above a standard ground-plane structure.

In the second part, the inverse transmission line matrix (Inv-TLM) method is used to reconstruct the source distribution from the near fields (NFS) measured above a plane of the PCB.
Even though the resolution of the reconstruction depends on the wavelength and on the mesh parameters, the inverse propagation increases the width of the reconstructed wave. As this method corresponds to an ill-posed problem and leads to multiple solutions, we have developed a new method based on two-dimensional cross-correlation, which represents the near field scan data in the form of equivalent dipoles. With this new method, we were able to identify and localize the current sources in the PCB and represent them with equivalent sources. The method is validated for sources with different orientations. Near field data simulated using the commercial software CST are used to validate both methods. The far field predicted from these equivalent sources is compared with the simulated fields.

Table 1-1

| Standard (method)                     | Emission type                 | Frequency     | Speed  | Accuracy | Complexity |
|---------------------------------------|-------------------------------|---------------|--------|----------|------------|
| IEC-61967-2 (TEM cell)                | Radiated emissions            | 150 kHz-1 GHz | Fast   | Medium   | Low        |
| IEC-61967-3 (surface scan)            | Radiated emissions            | 10 MHz-1 GHz  | Slow   | High     | High       |
| IEC-61967-4 (direct coupling)         | Conducted CM and DM emissions | 150 kHz-1 GHz | Medium | Medium   | Medium     |
| IEC-61967-5 (Faraday cage)            | Conducted CM emissions        | 150 kHz-1 GHz | Medium | Medium   | Medium     |
| IEC-61967-6 (magnetic probe)          | Conducted CM and DM emissions | 150 kHz-1 GHz | Medium | Medium   | Low        |
| EN-55022 (radiation pattern)          | Radiated emissions            | >30 MHz       | Medium | Low      | High       |
| EN-61000-4-21 (reverberation chamber) | Radiated emissions            | >LUF          | Medium | Medium   | High       |

Table 3-1 Comparison of different microstrip probes

| Name of the probe | Loop size | Number of layers | Number of turns in a layer | Line length | Simulated operating frequency bandwidth |
|-------------------|-----------|------------------|----------------------------|-------------|-----------------------------------------|
| MP-1              | 3mm x 3mm | 2                | 2                          | 1mm         | <800 MHz                                |
| MP-2              | 3mm x 3mm | 1                | 2                          | 1mm         | >2 GHz                                  |
| MP-3              | 3mm x 3mm | 1                | 1                          | 1mm         | >2 GHz                                  |
| MP-4              | 3mm x 3mm | 2                | 2                          | 6.7cm       | <100 MHz                                |
| MP-5              | 3mm x 3mm | 2                | 2                          | 14cm        | <100 MHz                                |

List of abbreviations

Ex polarization

A voltage source with x polarization Vx is placed above the ground plane (as shown in Figure 4-14).
The length of the source is 2mm. The ground plane is situated 1.5mm below the source. The electric and magnetic fields in the XZ plane at 1mm above the source are recorded using the direct TLM process and time-reversed for inverse propagation. This is because we took the fields for reconstruction from the plane which is tangential to the source.

For a current source Jx

The resultant matrix after cross-correlation will be of size (M+K-1) x (N+L-1). The results of the cross-correlation will have a maximum value when the analytical function matches the simulated field.

4. Fit the results of the correlation to the scanning plane of the PCB. The size of the matrix after performing the correlation will be larger than the dimensions of the PCB. To locate the sources on the DUT, the size of the resultant matrix should be identical to that of the DUT. As the source is assumed to be at (K+1)/2, (L+1)/2 and the size of the standard function is (K, L), the first (K-1)/2 points and the last (K-1)/2 points don't represent any sources. So, this part has to be eliminated from the matrix; the resultant matrix will then have (M, N) points, and each point corresponds to that of the device under test. So, the number of points in the matrix after subtraction is

Validation by application to elementary currents

In order to validate the proposed method, the algorithm is applied to 3 current sources oriented in 3 different directions. As this is a theoretical problem, the near field scan data is the simulated magnetic field instead of the measured magnetic field. The simulations here are carried out using the simulation software CST Microwave Studio. The procedure is explained in detail for a current source oriented along the z-direction in the following section, and later the results for the current sources along the x and y directions, obtained following the same procedure, are given.
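The matrix-size bookkeeping of the fitting step above can be checked numerically (a sketch with hypothetical sizes): the full cross-correlation of an (M, N) scan with a (K, L) standard function has size (M+K-1, N+L-1), and dropping (K-1)/2 and (L-1)/2 points on each side maps the result back onto the (M, N) grid of the DUT.

```python
import numpy as np
from scipy.signal import correlate2d

M, N, K, L = 20, 20, 9, 9                          # assumed scan / window sizes
scan = np.random.default_rng(0).normal(size=(M, N))
std_fn = np.ones((K, L))                           # placeholder standard function

c_full = correlate2d(scan, std_fn, mode="full")    # (M+K-1, N+L-1) = (28, 28)
k2, l2 = (K - 1) // 2, (L - 1) // 2
c_dut = c_full[k2:k2 + M, l2:l2 + N]               # trimmed to (M, N) = (20, 20)
print(c_full.shape, c_dut.shape)                   # -> (28, 28) (20, 20)
```

For odd window sizes this trimming is exactly what `mode="same"` computes, so the trimmed map can be indexed one-to-one against the DUT scan points.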
In order to validate the algorithm for tangential currents, the same structure has been simulated in CST, with the direction of the current element along the x-axis, as shown in Figure 4-46. The three magnetic field components have been recorded at 14mm above the structure from the centre of the coaxial connector. The magnetic fields have been recorded in the same XY plane as before. The coaxial connector is excited with a current of magnitude 1A. As the current-carrying conductor is oriented along the x-direction, the expected equivalent current is Jx. After applying the recorded magnetic field to the developed algorithm, the equivalent currents obtained have been plotted in Figure 4-47.

The voltage source is determined from the direction of alternation of the normal E field. In the case of the normal voltage source, the normal component of E will have a peak at the position of the source. The current source is determined from the magnetic field components. In the case of the normal current, the equivalent current can be calculated by calculating the circulation of the magnetic field. But in the case of the tangential current sources, the circulating current is not found in the reconstructed plane, because the plane is tangential to the current source. Even though we are able to identify the source positions from the near field scan data and the reconstructed fields, we need an algorithm which represents the radiation from the device under test with an equivalent source. In order to calculate the equivalent current in complex PCBs, both tangential components of the magnetic field are needed. So, the equivalent current can only be found accurately for a vertical current source in an arbitrary PCB, where there is no knowledge about the sources. The newly developed cross-correlation method overcomes this limitation of the inverse propagation method. The cross-correlation method is applied directly to the near field scan data.
The method is not applied to the inverse-propagated fields because the electric and magnetic fields become wider than the original field after inverse propagation. The spatial resolution of the reconstruction depends on the mesh size and the wavelength. So, as the number of spatial points in the near field scan data increases, the sources are reconstructed more accurately. The developed cross-correlation method is validated for ideal current sources. It is found that the procedure can accurately determine the position and orientation of the source. The method is also validated for multiple current sources. The window size of the standard function is optimized to a one-dimensional window of size 9mm. The accuracy of the equivalent source decreases when the window size of the standard function increases. The prediction of the far field requires both the magnitude and phase of the equivalent current. Even though the source position and orientation are correctly identified, accurate information about the exact amplitude and phase of the source is needed in order to calculate the radiated far field.

Conclusions and perspectives

In this thesis, we have designed and validated magnetic probes based on microstrip lines, CPW lines, and printed shielded structures. The single loop probe is found to have higher selectivity than multiple loop probes, and hence it has the highest frequency band of operation. The transmission line connected to the loop has a significant influence on the performance of the probe. The voltage at the output of the probe contains not only the voltage due to the magnetic field but also common-mode voltages induced by the electric field. As the length of the transmission line increases, the common-mode voltage increases and the highest frequency of operation of the probe decreases. The shielded magnetic probes reduce the voltage due to the electric field and increase the frequency band of operation.
For the frequency range from 1 MHz to 1 GHz, the shielded magnetic probes with the stripline configuration are found to be a better choice than the conventional probes. It is necessary to use the shielded probes at frequencies above 100 MHz. As the aperture size of the loop is 800 µm x 800 µm, the sensitivity of the probe is very low, and it is not able to detect the magnetic field at all positions without an amplifier. The use of an external amplifier has increased the sensitivity of the probe. With an external amplifier of 37 dB, the probe could measure the magnetic field at all frequencies from 10 MHz to 1 GHz. It is found that the amplifier should not be placed very close to the probe because it introduces noise at the probe output.
APPENDIX
Design equations of the transmission lines used in this thesis are provided below.
A. CPW transmission line
Closed-form expressions for the effective dielectric constant and the characteristic impedance for zero strip thickness (t = 0) are given according to [1]. The effective dielectric constant and the characteristic impedance of the CPW considering the thickness t of the strip (t > 0) are calculated as in [3].
List of publications
International conferences
01756840
en
[ "shs.droit" ]
2024/03/05 22:32:10
2018
https://shs.hal.science/halshs-01756840/file/JWIP_Stem%20Cells%20AWAM-2018.pdf
Stem cell technology is undergoing rapid development owing to its high potential in versatile therapeutic applications. Patent protection is a vital factor affecting the development and commercial success of life sciences inventions; yet human stem cells-based inventions have been encountering significant restrictions, particularly with respect to patentable subject matter. This article looks into the patentability limits and unique challenges for human stem cells-based patents in four regions: Europe, the United States, China and Japan. We will also provide suggestions for addressing the emerging issues in each region.
Introduction
Stem cell technology is an eye-catching and fast-growing research area with huge therapeutic potential. Human stem cells in particular are the focus of much interest in research, and much hope and hype surround their clinical therapy potential. However, ethical considerations and regulatory restrictions over human stem cells have framed the development of regenerative medicine and drug discovery. While the legal protection of life sciences inventions via patents has been recognized for years, it continuously raises challenges worldwide. This is especially true regarding patents based on human stem cells, particularly human Embryonic Stem Cells (hESCs). For instance, the recent Courts' decisions restricting patents based on hESCs in Europe and excluding natural products in the United States have undoubtedly hampered patent protection for many human stem cells products. On the other hand, methods for the treatment or diagnosis of diseases practiced on humans are generally not patentable in most major jurisdictions.
While it is difficult to have a clear idea of how the patentability limits are affecting the human stem cells markets, it is obvious that developers of stem cells-based products or processes, whether they be academics or companies, have to adapt their research and commercial strategies to the scope of patentability. Looking into the context of in-force patent laws, rules, regulations and relevant court decisions, this paper examines the emerging issues in the patentability of inventions based on human stem cells in four regions: Europe, the United States (U.S.), China and Japan, which account for the majority of patent applications (World Intellectual Property Organization, 2015, p. 23). However, this article does not cover the patentability of human stem cells under the Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement (World Trade Organization, 1994), even though the U.S., China, Japan, all Member States of the European Union (EU) as well as the EU itself are contracting parties to the TRIPS agreement. It should be noted that, although each region has its own patent system imposing various patentability requirements, most countries build their patentability framework around five limbs of patentability, namely patentable subject matter, novelty, inventiveness, written description and enablement. This article not only discusses these requirements and their standards in each of the four jurisdictions as far as stem cells are concerned. More importantly, the authors aim to provide an overview of the patent framework and exclusions of patentability in the four jurisdictions, and highlight the unique challenges for human stem cells-based patents in each region. We will also provide suggestions for overcoming these obstacles and adapting to the changing landscape. Last but not least, specific aspects of the four patent systems are compared to provide a glimpse of global protection for stem cell inventions.
I -Europe
A) Background
In Europe, patent law is relatively uniform although, in addition to each State, two different organizations regulate the field: the European Patent Organization (EPO) and the European Union (EU). The European Patent Organization covers 28 Member States of the EU and 10 non-EU countries. 1 It is based on the European Patent Convention (EPC), a multilateral Treaty signed in Munich on October 5, 1973. The EPO is mainly in charge of granting European patents. Its organizational structure notably includes 28 independent Technical Boards of Appeal that can refer questions to the Enlarged Board of Appeal to ensure a uniform application of the law. A European patent confers protection in all the contracting States that have been designated by the applicant, as long as it has been validated by their national patent offices. The European Union regulates stem cells patents on the basis of the Directive on the legal protection of biotechnological inventions of July 6, 1998 (European Parliament and Council, 1998). The Directive harmonizes national patent laws: it has been transposed in every EU Member State and it is applied by their national patent offices. Contrary to the EPO, which has been established to regulate patents only, the EU's competency goes beyond patentability. Thus, it has not had a specific court for patent disputes and these matters have fallen under the remit of the general Court of Justice of the European Union. However, in 2012, the Member States of the EU (except Spain, Poland and Croatia) decided to establish an enhanced cooperation and to adopt the so-called "patent package". It includes a regulation creating a European patent with unitary effect (hereafter the "unitary patent") (European Parliament and Council, 2012), a regulation on the language regime applicable to the unitary patent (European Council, 2012), as well as an agreement between the EU countries to set up a specialized Unified Patent Court (European Council, 2013).
This "patent package" will enter into force once ratified by any thirteen Member States including France, Germany and the United Kingdom (UK). 2 However, following the UK's vote to exit the EU, at a minimum new delays could be expected before such entry into force (Grubb et al., 2016; Jaeger, 2017). Besides the already existing national patents (regulated by national laws that have been harmonized by the European Directive 98/44/EC) and the classical European patents (regulated by the EPC), the unitary patent will be a third option. Granted by the EPO under the provisions of the EPC, a unitary patent will be a European patent to which a unitary effect for the territories of the participating States will be given, at the patentee's request, after grant. Finally, it should be highlighted that even though the EPO and the EU are two distinct organizations, the contracting States of the former decided to incorporate the Directive 98/44/EC as secondary legislation into the Implementing Regulations to the EPC. This Directive has been used as a supplementary means of interpretation of the EPC since 1999. Thus, there is a trend towards a global "uniformization" of European patent laws in the field of biotechnological inventions, although a European patent relies on the EPC framework and a national patent relies on both the Directive 98/44/EC and the national law implementing it. While the articulation between the different kinds of patents, notably with the future unitary patent, and patent laws in Europe is raising many concerns (Kaesling, 2013; Kaisi, 2014; Mahne, 2012; Pila and Wadlow, 2014; Plomer, 2015), issues going beyond the patentability of inventions based on human stem cells are not considered in this article. In Europe, there are four basic criteria for patentability. 3 First, there must be an invention belonging to any field of technology that has both a technical and a concrete character. Second, the invention must be susceptible of industrial application, i.e.
it can be made or used in any kind of industry, as any physical activity of "technical character". Third, it must be new, not forming part of the state of the art. In the absence of a grace period, 4 the invention must not have been made available to the public before the date of filing of the patent application. Fourth, it must involve an inventive step, such that it is not obvious to a person skilled in the art. Finally, the "sufficient disclosure" requirement implies that the full scope of a claim must be adequately enabled by disclosing methods of practicing the invention in the specification (Marty et al., 2014). It should be highlighted that there is no clear distinction between "enablement" and "clear written description", but both count for patentability in Europe (Schuster, 2007).
B) The exclusions of patentability
In Europe two types of exclusions can be distinguished: "moral exclusions" and "ineligible subject matters".
Moral exclusions
European patent law provides "moral exceptions" (Min, 2012) from patentability; that is, patents are not granted where the commercial exploitation of the inventions is contrary to ordre public or morality, beyond the straightforward prohibition by law or regulation. 5 Regarding the exclusions specific to the patentability of biotechnological inventions, the wording of the EPC and of the Directive 98/44/EC is the same. On one hand, "the human body, at the various stages of its formation and development, and the simple discovery of one of its elements, including the sequence or partial sequence of a gene, cannot constitute patentable inventions." 6 However, it is specified that "an element isolated from the human body or otherwise produced by means of a technical process, 7 including the sequence or partial sequence of a gene, 8 may constitute a patentable invention, even if the structure of that element is identical to that of a natural element."
9 Similarly, a microbiological or other technical process, or a product obtained by means of such a process, can be patented. 10 On the other hand, European patent law provides a non-exhaustive list of inventions whose commercial exploitation is contrary to "ordre public" or morality: processes for cloning human beings; processes for modifying the germ line genetic identity of human beings; and uses of human embryos for industrial or commercial purposes. 11
Ineligible subject matters
Article 52 (2) of the EPC provides general exclusions that are not specific to biotechnological inventions: (a) discoveries, scientific theories and mathematical methods; (b) aesthetic creations; (c) schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers; (d) presentations of information. Moreover, and apart from plant or animal varieties that are not covered in this article, European patent law also excludes "methods for treatment of the human or animal body by surgery or therapy and diagnostic methods practiced on the human or animal body"; this provision shall not apply to products, in particular substances or compositions, for use in any of these methods. 12
C) Main challenges for stem cells patents
Exclusion of the uses of human embryos for industrial or commercial purposes
In Europe, the main challenge to stem cells patents relates to embryonic stem cells and to the moral exclusions provided both by the Directive 98/44/EC and the EPC, especially the exclusion of uses of human embryos for industrial or commercial purposes. As interpreted by the EPO and the Court of Justice of the European Union, this provision has been applied to exclude from patentability inventions using human embryonic stem cells (hESCs) obtained either by de novo destruction of human embryos, or from publicly available hESC lines initially derived by a process destroying the human embryo (Mahalatchimy et al., 2015a; Mahalatchimy et al., 2015b).
First, on November 25, 2008, in the Wisconsin Alumni Research Foundation case, the Enlarged Board of Appeal of the European Patent Office decided that European patents with "claims directed to products which-as described in the application could be prepared, at the filing date, exclusively by a method which necessarily involved the destruction of human embryos" are prohibited. 13 Second, the Court of Justice of the European Union has gone a step further in the exclusion with the Brüstle v Greenpeace eV (hereafter the Brüstle) case. 14 On the one hand, it has provided a wide definition of the human embryo: any human ovum after fertilization, and any non-fertilized human ovum into which the cell nucleus from a mature human cell has been transplanted or whose division and further development have been stimulated by parthenogenesis. However, the Court of Justice of the European Union recently revisited its definition of the human embryo to allow the patentability of inventions using embryonic stem cells made from parthenotes (Baeyens and Goffin, 2015; Bonadio and Rovati, 2015; Kirwin, 2015; Mansnérus, 2015; Stazi, 2015). Henceforth, "an unfertilized human ovum whose division and further development have been stimulated by parthenogenesis does not constitute a 'human embryo', (…) if, in the light of current scientific knowledge, it does not, in itself, have the inherent capacity of developing into a human being, this being a matter for the national court to determine." 15 (Faeh, 2015; Ribbons and Lynch, 2014) On the other hand, the Brüstle case has given an extensive interpretation of the exclusion from patentability of uses of human embryos for commercial or industrial purposes: an invention is excluded from patentability where it involves the prior destruction of human embryos or their use as base material. This holds whatever the stage at which such destruction takes place, and even if the claim does not refer to the use of human embryos.
The Brüstle case has been widely commented on both by scientists (Wilmut, 2011; Koch et al, 2011; Vrtovec and Scott, 2011) and lawyers (Bonadio, 2012; Davies and Denoon, 2011; Plomer, 2012). Third, on February 4, 2014, in the Technion Research and Development Foundation case, the European Patent Office followed the Brüstle case, excluding from patentability inventions using hESCs obtained by destruction of human embryos, whenever such destruction takes place.
Exclusion of surgery, therapy and diagnostic methods
Even though methods for the treatment of the human or animal body by surgery or therapy and diagnostic methods practiced on the human or animal body are excluded from patentability in accordance with the EPC, such exclusion has been limited (Ventose, 2010). First, it does not cover products used in such methods (European Patent Office, 2015). Second, clarifications have been provided regarding the interpretations to be given to "treatments by surgery" and "treatment and diagnostic methods". For surgery, the EPO clarified that "treatments by surgery" are not confined to surgical methods pursuing a therapeutic purpose. 16 Consequently, beyond surgical treatment for therapeutic purposes, methods of treatment by surgery for embryo transfer or for cosmetic purposes are also excluded from patentability. As for treatment and diagnostic methods, exclusions are generally limited to those that are carried out on living human (or animal) bodies. Thus, where these treatment or diagnostic methods are carried out on dead bodies, they are not excluded from patentability. Similarly, they are not excluded from patentability if they are carried out in vitro, i.e. on tissues or fluids that have been removed from living bodies, as long as they are not returned to the same body. 17 Further, as in China and Japan, the aim of the methods is determinative of their patent eligibility.
Methods of treatment of living human beings or animals, such as pure cosmetic treatment of a human by administration of a chemical product 18 or methods of measuring or recording characteristics of the human or animal body, are patentable where they are of a technical and not essentially biological character. 19
Perspectives
European patent law excludes from patentability products and processes based on hESCs obtained by the destruction of human embryos, whenever such destruction takes place. According to the interpretation of the European Courts, this exclusion is not restricted to de novo destruction of human embryos and covers the use of publicly available hESC lines initially derived by a process destroying human embryos (Mahalatchimy et al., 2015a). This wide interpretation and the extensive definition given to the "human embryo" imply that no patent could be obtained on inventions based on hESCs in Europe. However, several elements are limiting or could limit such broad exclusion. First, divergences have appeared in the national implementations of the Brüstle case regarding the proof that hESCs were obtained via methods that do not involve the destruction of human embryos (Mahalatchimy, 2014). The EPO and the UK patent office have considered that the inventor should prove that hESCs have been obtained by methods other than the destruction of human embryos. 20 By contrast, the German Federal Court has considered that such proof is not required. 21 Consequently, a general claim of non-destruction of human embryos would be sufficient to obtain a patent for an invention based on hESCs, as long as the other criteria are satisfied. On the basis of such a wide interpretation, it may be easier to obtain a patent for an invention based on hESCs than in other countries having a stricter interpretation of the Brüstle case, as long as the Court of Justice of the European Union has not clarified this question.
Second, the International Stem Cell Corporation case law 22 of the Court of Justice of the European Union, which is generally seen as a clarification of European patent law on ESCs (Moore and Wells, 2015), is nevertheless questionable (Norberg and Minseen, 2016): should it be considered as an exception to the Brüstle case or as an opening to a wider reversal of jurisprudence of the Court of Justice of the European Union? Indeed, parthenotes are no longer considered to be human embryos and are consequently not excluded, as long as they do not have the inherent capacity of developing into a human being. However, one can expect that other techniques (that were previously interpreted as giving rise to human embryos in the Brüstle case) could be claimed as not having the inherent capacity of developing into a human being. For instance, it could be considered that hESCs obtained by somatic cell nuclear transfer would need to be implanted in utero (although this is forbidden in accordance with the prohibition of human cloning) to have the capacity to develop into a human being. The same could be claimed for induced pluripotent stem cells (iPSCs), even though they have not been mentioned in these cases of the European Courts and they have not been included in the wide definition of human embryos. It seems the International Stem Cell Corporation case law should be seen as a clarification that narrows the extent of the exclusion from patentability of the uses of human embryos for industrial or commercial purposes. Indeed, most of the members of the Expert Group on the development and implications of patent law in the field of biotechnology and genetic engineering have recommended that the European Commission take no further action for clarification following the recent jurisprudence (Expert Group on the development and implications of patent law in the field of biotechnology and genetic engineering of the European Commission, 2016).
Beyond hESCs, patents could be obtained on products and manufacturing methods based on allogeneic stem cells. However, for autologous cell-based regenerative therapies that are based on the patient's own cells (the donor of the cells is also the recipient of the final product made from these cells), it is not the product that is manufactured at industrial scale (the product is patient-specific, being autologous); only the manufacturing process is industrially applicable. Moreover, treatment processes that generally occur by surgery in the field of cell therapy are excluded from patentability under the exclusion of surgery, therapy and diagnostic methods. Consequently, treatment methods based on stem cells would not be patentable. Thus, although European patent law is relatively uniform, national divergences remain in the interpretation of the most recent jurisprudence on stem cells, and new issues need to be solved regarding the future unitary patent, especially on the implementation of the moral exclusions by both the Court of Justice of the European Union and the Unified Patent Court (Aerts, 2014; McMahon, 2017).
Inventiveness
The inventive step criterion might be a hurdle in stem cell patenting. Inventions are opposed to mere discoveries, which are not patentable. Indeed, discoveries as such have no technical effect and are therefore not inventions in accordance with European patent laws. 23 Moreover, it should be noted that in Europe, secondary indicators such as an unexpected technical effect or a long-felt need may be regarded as indications of inventiveness. However, commercial success alone is not to be regarded as an indication of inventive step, except when coupled with evidence of a long-felt want, provided that "the success derives from the technical features of the invention and not from other influences (e.g. selling techniques or advertising)". 24 The authors will discuss this requirement in detail in the following U.S.
section, given its seemingly higher standard compared with the other three regions.
II -United States
A) Background
The United States Patent and Trademark Office (USPTO) is responsible for examining and granting patents under the U.S. Patent Law (Title 35 of United States Code, 35 U.S.C.) and Rules (Title 37 of Code of Federal Regulations, 37 C.F.R.). A utility patent is granted for any "new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof". 25 An invention must be useful, 26 new 27 and non-obvious 28 to be patentable in the United States. Moreover, the invention must have a specific and substantial utility that one skilled in the art would find credible. 29 Further, the statute mandates that an invention shall be fully described and enabled by the specification; 30 and that the specification must disclose the best mode of making and practicing the invention as the inventors contemplate, although it is not necessary to point out the best mode embodiment (MPEP §2165.01). 31 Notably, the written description requirement is separate and distinct from the enablement requirement. The former requires that an invention shall be adequately described to show possession of the invention, whereas the enablement requirement is satisfied where a person skilled in the art can make and use the claimed invention without undue experimentation. More specifically, the specification should disclose at least one method for making and using the claimed invention (MPEP §2164). Unlike the mainstream absolute novelty standard, the U.S. offers a one-year grace period for inventors to file a patent application after disclosure of the invention. 34
B) The exclusions of patentability
One notable feature which imparts a significant difference between the U.S. and the other three regions is that U.S.
patent law does not impose any moral consideration in determining whether a subject matter should be excluded from patent protection. Laws of nature, natural phenomena and abstract ideas have historically been excluded from U.S. patent protection (Lesser, 2016). 35 Living organisms, including animals, plants and microorganisms, are all patent-eligible, 36 but patents directed to or encompassing a human organism, including a human embryo and a fetus, are prohibited (MPEP §2105, Part III). The U.S. has issued a wide range of stem cell patents, from products (e.g. hESCs, iPSCs and regenerated tissues) to methods (e.g. manufacturing processes and therapeutic applications). However, the patent eligibility landscape changed dramatically in the past few years in light of several Supreme Court decisions: Mayo in 2012, Myriad in 2013 39 and Alice in 2014. 40 Mayo and Myriad respectively touched on claims involving the natural correlation between metabolite levels and drug effectiveness/toxicity and claims for human genes. They have significantly expanded the scope of exclusion to many biotech and pharmaceutical inventions. Very briefly, Mayo invalidated diagnostic claims that determine whether a particular dosage of a drug is ineffective or harmful to a subject based on the level of metabolites in the subject's blood. 41 Two physical steps were recited in the claim, namely a step of administering the drug to a subject and a step of determining the level of a specific metabolite in the subject. However, the Court regarded these steps as "well-understood, routine and conventional" activities in which researchers are already engaged in the field to apply the natural correlation, and hence the claim as a whole does not amount to "significantly more" than the natural law itself (Dutra, 2012; Chan et al., 2014; Selness, 2017).
Myriad held that an isolated nucleic acid having a sequence identical to the breast cancer susceptibility gene BRCA is not patent-eligible because it is a product of nature; whereas a complementary DNA (cDNA), having the non-coding introns of the gene removed, is eligible because the cDNA is not naturally occurring and is distinct from the natural gene. 42 Following the precedent ruling in Chakrabarty that upheld the patentability of a genetically engineered microorganism, 43 the Court looked for "markedly different characteristics from any found in nature" in the isolated gene to determine patent eligibility. The Court noted: "separating [the] gene from its surrounding genetic material is not an act of invention". 44 (Chan et al., 2014) Even if the claimed DNA molecules are somehow different from the genes in the genome in terms of chemical structure, the Court gave no deference to that, because the claims were not relying on the chemical aspect of the DNA but on the genetic information of the DNA, which was neither created nor altered by the patentee. On the face of Myriad, isolated products such as chemicals, genes, proteins and even cells have to be different from the natural substances to a certain extent to be patent-eligible (Wong and Chan, 2014; Chan et al., 2014). Alice concerns abstract ideas and, adopting the Mayo framework, ruled that "[t]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention", and that claims directed to an abstract idea are ineligible when the computer or software feature adds nothing more than generic or conventional functions to the invention. 45 Alice has had a dramatic impact on inventions related to software or business methods (Stern, 2014; Jesse, 2014; Ford, 2016).
New criteria for eligibility
Evolving with the development of Court cases, the USPTO successively issued four guidelines during 2012-2014 for Examiners on how to apply Mayo, Myriad, Alice and other precedent cases to examine the eligibility of claims related to natural phenomena, laws of nature or abstract ideas. The in-effect guideline was released on December 16, 2014 (United States Patent and Trademark Office, 2014a), and supplemented by two updates (United States Patent and Trademark Office, 2015; United States Patent and Trademark Office, 2016a). For natural products, eligibility is determined principally on whether the claimed product possesses any structural, functional and/or other properties that represent "markedly different characteristics" from the natural counterparts. Importantly, neither innate characteristics of the natural product nor characteristics resulting irrespective of the inventor's intervention qualify as "markedly different characteristics" (United States Patent and Trademark Office, 2016a). 46,47 A combination of natural products is examined as a whole rather than as individual components (United States Patent and Trademark Office, 2014a). For a claim which sets forth or describes an exception (in contrast to one which "is based on" or "involves" an exception), the claim must contain additional elements that "add significantly more" to the exception such that it is "more than a drafting effort designed to monopolize the exception" (United States Patent and Trademark Office, 2014a). General applications of natural products or natural laws employing well-understood, routine and conventional activities known in the field are not patent-eligible (United States Patent and Trademark Office, 2014a). For example, a process claim is eligible if it is focused on a process of practically applying the product for treating a particular disease that does not seek to tie up the natural product (United States Patent and Trademark Office, 2014b).
Most patents invalidated after Mayo, Myriad and Alice have been business method or software-related inventions. Among these patents, Ariosa is a notable case whose decision has sparked intense debate and worry in the field of biotechnology, and more specifically molecular diagnosis. In Ariosa, the Federal Circuit upheld the invalidation of a patent for a method of detecting cell-free fetal DNA (cffDNA) in maternal serum or plasma under the Mayo framework. 48 Although the Court acknowledged that the discovery of the presence of cffDNA in maternal plasma or serum was new and useful, it recognized that the steps of amplifying and detecting cffDNA with methods such as Polymerase Chain Reaction (PCR) were well-understood, routine and conventional activities in 1997; the claimed method amounts to a general instruction to doctors to apply routine and conventional techniques to detect cffDNA and hence is not eligible for patent. The Court also noted that while "preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility", meaning that a claim is not eligible merely for not blocking all other alternative uses of the natural product or law. In March 2016, Sequenom filed a Petition for Writ of Certiorari in the Supreme Court to challenge the Federal Circuit's decision in Ariosa; however, the highest Court declined to review the case and the Ariosa decision thus became final (Selness, 2017). 49 The unfavorable disposition was slightly relieved in Rapid Litigation Management, 50 in which the Federal Circuit, for the first time since the decisions in Mayo and Alice, upheld a patent that was drawn to a law of nature. In Rapid Litigation Management, the inventors of the patent concerned developed an improved method of preserving hepatocytes through repeated steps of freezing and thawing.
The claims at issue were drawn to a method of preparing multi-cryopreserved hepatocytes in which the resulting hepatocytes are capable of being frozen and thawed at least twice and exhibit 70% viability after the final thaw. The Federal Circuit found the claims patent-eligible because they were not directed to the ability of hepatocytes to survive multiple freeze-thaw cycles but to a new and useful laboratory technique for preserving hepatocytes, noting that the inventors "employed their natural discovery to create a new and improved way of preserving hepatocyte cells for later use". 51 Although the individual steps of freezing and thawing were well known, the Court recognized that, at the time of the invention, it was believed that a single round of freezing severely damaged hepatocyte cells and resulted in lower cell viability, and that the prior art therefore actually taught away from multiple freeze-thaw cycles. As such, the Court concluded of the claimed method that "[r]epeating a step that the art taught should be performed only once can hardly be considered routine or conventional". 52 Specifically, the Court looked into whether the end result of the method was directed to a patent-ineligible concept. The Court said no because the end result was "not simply an observation or detection of the ability of hepatocytes to survive multiple freeze-thaw cycles"; rather, the claims recited a number of steps that manipulate the hepatocytes in accordance with their ability to survive multiple freeze-thaw cycles to achieve the "desired preparation". 53 (Sanzo, 2017; United States Patent and Trademark Office, 2016b)
More §101 rejections
The impacts of Myriad and Mayo on the biotech and pharmaceutical industries are far-reaching and tremendous.
At the USPTO's Patent Technology Centre 1600, the service responsible for biotechnology and organic chemistry, it was estimated that the percentage of Office Actions with a U.S.C. §101 rejection for ineligible subject matter in May 2015 had nearly doubled compared with two months before the March 2012 Mayo decision (11.86% vs. 6.81%) (Sachs, 2015). Patent Technology Centre 1630, designated for molecular biology and nucleic acid related inventions, was profoundly affected, with its §101 rejection rate boosted from 16.8% to 52.3% (Sachs, 2015). The total rejection rate in Technology Centre 1600 rose from 10.4% before Alice to 13.1% in December 2015, followed by a notable drop to 10.9% in July 2016 and 10.0% in May 2016 (Sachs, 2016). While it is too early to conclude that the situation is becoming less adverse to patentees, it may signal that the stringent situation has started to relax while uncertainties remain (Leung, 2015).
C) Main challenges for stem cells patents
Major hurdles in stem cell patenting are the expanded scope of ineligible subject matters and obviousness rejections.
Patent eligibility
Product claims
The usefulness of patents is limited if the patented inventions cannot be commercialized. The applicant has to resolve a dilemma: non-native features weigh toward the eligibility of stem cells that already exist in nature; however, stem cells for medicinal uses should be native enough to be as safe and effective as natural stem cells. Myriad held that purification or isolation is not an act of invention. Stem cells, be they ESCs or adult stem cells, or produced by a new, ground-breaking method, are not patentable absent any distinctive structural, functional or other properties from the natural cells in our body.
54 iPSCs obtained using exogenous genes are more likely to survive given their artificial nature; however, iPSCs produced otherwise may be ineligible if the cells are indistinguishable from naturally occurring stem cells except for the production method. Conversely, regenerated tissues and organs would typically be patentable because they are usually not exactly the same as the actual tissues and organs (Tran, 2015). Intrinsic properties of stem cells add no weight to eligibility; hence stem cells identified by natural biomarkers are likely to be construed as products of nature and deemed not patentable. In contrast, new traits resulting from the inventor's efforts, such as extended lifespan, higher self-renewal ability and expression of new biomarkers, may open up patentability for isolated stem cells. For example, U.S. Patent 9,175,264 claims an isolated population of human postnatal deciduous dental pulp multipotent stem cells expanded ex vivo. This application was initially rejected under U.S.C. §101 because the Examiner opined that the claimed cell is a product of nature, but was later allowed after the applicant limited the claims to cells expressing CD146, which is absent in the natural counterpart. 55 Lastly, it is worth noting that a product-by-process claim is examined based on the product itself, not the manufacturing process; hence stem cells pursued under a product-by-process claim still need to satisfy the "markedly different characteristics" requirement for natural products (United States Patent and Trademark Office, 2014a).
Method claims
Method claims may be more favorable given the unclear prospect of stem cell patents. The USPTO's records indicate that methods for stem cell production, maintenance or differentiation remain patentable post-Mayo and post-Myriad. Methods of producing iPSCs by reprogramming somatic cells are likely patentable (e.g. U.S.
Patent 9,234,179), but differentiation methods that do not differ from natural differentiation processes could be unpatentable. As discussed, under the "significantly more" requirement, applications of a natural phenomenon must possess additional features that transform the claims into eligible processes amounting to more than the natural phenomenon itself. Thus, for a method of differentiating stem cells using components of a signaling pathway (e.g. basic fibroblast growth factor (bFGF) and epidermal growth factor (EGF) for neural differentiation) that recites no additional feature amounting to "significantly more", the examiner may regard the method as a general application of the natural phenomenon and hence unpatentable (Morad, 2012). Methods of identifying or selecting stem cells based on the detection of natural biomarkers may be rejected if the claims recite only routine and conventional techniques to detect the biomarkers (Chan et al., 2014). As implied in Ariosa, patentability is not justified even though the inventor newly discovered the presence of biomarkers in these cells. Diagnostic methods hinging on the detection of natural biomarkers may likewise be rejected if specified at a high level of generality. As exemplified by the USPTO, a diagnostic method relying on the detection of a natural human biomarker in a plasma sample by an antibody is patent-eligible if the antibody has not been routinely or conventionally used for detecting human proteins (United States Patent and Trademark Office, 2016c).
Notably, the Office suggests that it is feasible to limit the claim to the detection of a biomarker, without reciting any step of diagnosing the disease or analyzing the results, such that the claim would not be regarded as ineligible for describing a natural correlation between the presence of the biomarker and the presence of the disease (United States Patent and Trademark Office, 2016c); yet, the authors take the position that this contradicts Ariosa, which ruled that methods of detecting cffDNA in maternal serum are not eligible. On the other hand, as learnt from Rapid Litigation Management, for claims which are based on a natural law or natural phenomenon, it may be useful to focus on the end result of the claims and emphasize that the claims are directed to the manipulation of something (e.g. a pool of mesenchymal stem cells) to achieve a desired end result (e.g. exhibiting specific therapeutic functions), in order to argue that the claims are not directed to a patent-ineligible concept. When it comes to methods for treatment or for screening compounds using natural stem cells, they are eligible if the methods per se are specific enough that they do not preempt the use of the natural cells. The USPTO exemplified that using a natural purified amazonic acid compound for treating breast or colon cancer (United States Patent and Trademark Office, 2014a) and using an antibody against tumor necrosis factor (TNF) for treating julitis ("julitis" is a hypothetical autoimmune disease given by the USPTO) are both patent-eligible. As an analogy, a method of treating leukemia by administering an effective amount of natural hematopoietic stem cells to a leukemia patient is likely patentable.
Perspectives
Many practitioners have urged Congress to make clear whether the ruling of Myriad applies beyond nucleic acids, and to what extent Mayo should be applied to the field of diagnostics.
The USPTO is continuously seeking public comment on its latest update of May 2016 (United States Patent and Trademark Office, 2016d) and may issue a new update that better sorts out how Myriad and Mayo are applied to various disciplines. The Office held two roundtables in late 2016 to solicit public views, respectively, on its subject matter eligibility guidance and on larger questions concerning the legal contours of eligible subject matter in the U.S. patent system (United States Patent and Trademark Office, 2016e, 2017a, 2017b). The life science industry, along with supporters from the computer-related industry, is calling for new legislation to replace the Mayo/Alice test with a technological or useful arts test, and to clearly define exceptions to eligibility or clearly separate eligibility from other patentability requirements (United States Patent and Trademark Office, 2017c). While law and policy could change depending on the subsequent measures of the USPTO and Congress, it is beneficial to pursue both product claims (stem cells) and method claims (producing processes and applications), although the former are likely to be rejected for their native nature. Inventors may concentrate on non-§101 rejections such as obviousness and enablement while postponing the subject matter arguments, to buy time to get a clearer picture from additional guidance or court decisions (Gaudry et al., 2015). Inventors may also look into the prosecution history of patented applications to learn what is eligible and vice versa, and the rationale behind it, so as to enhance their chances of surviving under §101. Notably, patent eligibility could depend heavily on how the claim is structured (Smith, 2014). Inventors should examine and describe in the application any distinctive features between the isolated stem cells and their natural forms, and include these characteristics in the claims when necessary. It is also useful to emphasize the association of human effort with these characteristics.
Inventors may also concurrently pursue culture media or systems, compositions or treatment kits comprising stem cells and non-natural components, and so on, for multiple levels of protection. As for applications of stem cells or natural principles, the fact that the inventor discovered a new and useful natural product or law does not weigh toward the patent eligibility of their uses. Although the framework of Mayo and Alice appears not to have been changed or reshaped by Ariosa and Rapid Litigation Management, the two cases did provide additional guidance on eligible subject matters concerning natural matters in the field of life sciences. The general advice is that method claims should not merely read on the natural products or laws and should be scrutinized for any preemptive effect on the natural matters, and that specific and inventive steps should be added to the claims to reduce the level of generality of the methods. While it is undisputed that simply using the word "apply" does not avail, the standard of "significantly more" is general yet unclear. It is not easy to predict whether an additional step will be treated as a "well-understood, routine and conventional activity practiced in the field", or as an element capable of transforming the natural matter into something eligible. Importantly, the USPTO noted that a technique that is known (or even has been used by a few scientists) "does not necessarily show that an element is well-understood, routine and conventional activity" practiced in the field; rather, the evaluation turns on whether the use of that particular known technique was a well-understood, routine and conventional activity previously engaged in by scientists in the relevant field (United States Patent and Trademark Office, 2016a). Hence, applicants may argue that the recited step was not a technique prevalently used in the field at the time the application was filed, in order to overcome the rejection.
Obviousness
Obviousness is a big challenge to stem cell patenting common to all four regions. The four regions share a similar framework for determining obviousness, but the U.S. appears to adopt a higher standard than the other three. The authors chose the U.S. system to illustrate this topic for two reasons: 1) readers may benefit more if we discuss the topic at the higher standard; 2) the U.S. has a large volume of case law and administrative decisions touching upon this topic, including some useful examples in the area of stem cells. The U.S. in general adopts a similar approach for determining obviousness as other jurisdictions, i.e., determining the scope and content of the prior art, ascertaining the differences between the invention and the prior art, and resolving the level of ordinary skill in the art (the "Graham" factors) (Manual of Patent Examining Procedure (MPEP) §2141). The "teaching, suggestion, or motivation" test (the "TSM test") has long been the standard for obviousness determination, under which a claim would only be proved obvious if some motivation or suggestion to combine the prior art teachings could be found in the prior art, the nature of the problem, or the knowledge of a person having ordinary skill in the art (Davidson and Myles, 2008). However, in 2007, KSR held that precise teachings in the prior art are not required to prove obviousness; rather, an invention is obvious if one skilled in the art has good reasons to combine the prior art elements to arrive at the claimed invention with an anticipated success. 59 Since then, more rationales can be used to prove obviousness, thus significantly lowering the threshold for proving obviousness.
The USPTO has provided a non-exhaustive list of rationales for supporting an obviousness conclusion (Dorsey, 2008); 60 the examples include "combining prior art elements according to known methods to yield predictable results", "obvious to try -choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success" and the TSM test (United States Patent and Trademark Office, 2010). As compared to the TSM test, the "obvious to try" rationale is more subjective, thus leaving uncertainty in winning the obviousness battle. Seemingly, an invention that is "obvious to try" with a "reasonable expectation of success" would be deemed obvious and hence not patentable. Nonetheless, the examiner must provide articulated reasoning to support the obviousness conclusion; mere conclusory statements are not acceptable (Manual of Patent Examining Procedure (MPEP) §2142). One feasible approach to overcoming an obviousness rejection is to carefully study the entirety of the prior art and the examiner's rationale, and to rebut by pointing out the insufficiency of the examiner's reasoning. For example, one may argue that teaching away exists in the prior art, that the examiner failed to address every claimed element with sound reasons, or that the conclusions rest on hindsight bias. Another approach is to identify all possible differences between the claim and the prior art so as to locate any feature unique to the invention but absent from the references, or any unexpected effect achieved by the claimed invention, to show that the combination of the prior art elements would not arrive at the claimed invention (Davidson and Myles, 2008). A declaration under 37 C.F.R. §1.132 can be an effective tool to traverse the obviousness rejection by virtue of objective expert opinion and/or evidence (Messinger and Horn, 2010).
The declaration can be, for example, an expert's opinion on the inoperability of references or their combination, or an expert's justification of the level of ordinary skill in the art at the time of invention. Secondary considerations such as a long-felt but unmet need and the failure of others, unexpected results and commercial success can also be substantiated using a declaration (Manual of Patent Examining Procedure (MPEP) §716). For example, direct comparative results between the claimed invention and the closest prior art may be presented to show unexpected results of the invention, while a series of patents and publications that attempted to solve the same problem without success may be used to show a long-felt but unsolved need. The power of substantiated expert opinions and objective evidence in addressing obviousness may be illustrated by the reexamination proceeding of U.S. Patent No. 7,029,913 (hereafter the '913 patent). Claiming an in vitro culture of hESCs and granted to Dr. James Thomson, the '913 patent was challenged in 2006. One of the debates was whether the claimed hESCs would have been obvious in view of several references which teach, among other things, the isolation or derivation of ESCs from mice and the culture of these murine ESCs using feeder cell layers. The proceeding took more than 6.5 years for the Board to affirm patentability after a minor claim amendment (The Foundation For Taxpayer & Consumer Rights case). During the reopened prosecution, the patentee submitted a declaration from an expert in the field of mouse embryogenesis and stem cells, along with scientific publications, to testify that the prior art method of isolating stem cells without feeder cells was not enabled to produce hESCs, and that no one could derive stem cells from rats until 27 years after the first isolation of murine ESCs even though mice and rats are closely related.
One strong proof of non-obviousness was a research paper which reported the failure to isolate a replicating in vitro cell culture of pluripotent hESCs following the same prior art method. The declaration also pointed out that the invention was widely recognized as a breakthrough and highly praised by scientists in the field. The Board finally concluded that the above presented strong evidence of non-obviousness and thereby affirmed the patentability of the claims. Lastly, while the '913 patent concerns a post-grant proceeding, it is very important that a §1.132 declaration be timely submitted before the final Office Action for consideration in the prosecution and appeal procedures, and that it be supported by more than an attorney's assertion. In sum, the laws governing subject matter eligibility are evolving and clearly unsettled; the actual impacts on the stem cell landscape remain to be seen, but applications directed to natural stem cells and their applications have been rejected. Case law implies that the claim language could be determinative of patent eligibility. Stakeholders are advised to learn from court cases and to seek advice from practitioners with biotech expertise to overcome the high patentability hurdles.
III -China
A) Background
The State Intellectual Property Office of China (SIPO) is the administrative agency overseeing patents under the Patent Law and its implementing regulations. In addition to design patents, which are not used in the case of stem cells, two types of patents, namely an invention patent and a utility model, are available (Zhang and Yu, 2008; Chen, 2010). An invention patent protects new technical solutions for a product, a process or their improvement, while a utility model is exclusive to products, protecting the shape or structure of a product, or their combination (Patent Law of the People's Republic of China). Hence, both types of patents may protect a device.
A utility model differs significantly from an invention patent in that the former is generally not examined for inventiveness (Guidelines for Patent Examination, Section I, Chapter II) and only grants a 10-year patent term, as compared to 20 years for an invention patent. 66 An invention patent is preferable for protecting stem cell inventions, hence the following sections refer to invention patents unless specified otherwise. China adopts the common patentability requirements of novelty, inventiveness and industrial applicability, 67 and likewise requires an enabling description that clearly and fully describes the invention. 68 Further, the direct and original sources of genetic resources on which the invention relies must be identified. 69 Embracing the absolute novelty standard, China allows a six-month grace period for public disclosure only in three very limited circumstances: (1) the invention is exhibited for the first time at an international exhibition sponsored or recognized by the Chinese Government; (2) the invention is published for the first time at a specified academic or technological conference; and (3) the contents are divulged by others without the consent of the applicant. 70
B) The exclusions of patentability
Similar to Europe and Japan, China denies patents on moral grounds and precludes patents on therapeutic and diagnostic methods.
Moral exclusion
The moral exception under Article 5 of the Patent Law forbids patents on inventions that violate the laws or social ethics, or harm the public interest. Also, inventions accomplished by relying on genetic resources obtained or used in violation of laws and administrative regulations are prohibited. 71 The authority has made clear that any industrial or commercial use of human embryos is contrary to social ethics and should not be patented; thus hESCs and their production methods are not patentable.
Furthermore, the human body at various forms and developmental stages, including germ cells, fertilized eggs, embryos and individuals, is also prohibited from patenting for moral reasons. 74
Ineligible subject matters
Article 25 of the Patent Law explicitly excludes patents for six subject matters: 1) scientific discoveries; 2) rules and methods for mental activities; 3) methods for the diagnosis or treatment of diseases; 4) animal or plant varieties; 5) substances obtained by means of nuclear transformation; and 6) designs that are mainly used for marking the pattern, color or the combination of the two of prints. 75 (Chen, 2002) Animal varieties are interpreted to exclude humans but include whole animals, animal ESCs, germ cells, fertilized eggs and embryos; thus all of the above cannot be patented (Guidelines for Patent Examination, §9.1.2; animal somatic cells, organs and tissues, however, remain patentable). Notwithstanding, methods for producing an animal or plant variety are allowed, 78 and natural genes and microorganisms in their isolated forms are patent-eligible. 79 Methods for the treatment or diagnosis of diseases practiced on living humans or animals are strictly prohibited; nonetheless, devices for practicing the treatment or diagnosis, or materials used in these methods, are patentable (Guidelines for Patent Examination). Similar to the European framework, treatment and diagnostic methods practiced on dead bodies are patentable. 81 (Chen, 2002) As for diagnostic methods, even if the tested item is a sample isolated from a living subject, the method is not patentable if its immediate intention is to obtain diagnostic results for a disease or a health condition of the same subject.
82 For instance, tests based on genetic screening or prognosis of disease susceptibility are interpreted to be diagnostic methods and hence patent-ineligible. 83 Treatment methods encompass methods for the prevention of disease and for immunization. Notably, although surgical methods practiced on living humans or animals for non-therapeutic purposes are not forbidden under Article 25, they cannot be used (or used for production) industrially and hence are not patentable for lack of industrial applicability. 84
C) Main challenges for stem cells patents
The SIPO has granted patents on stem cells and on methods for producing stem cells that do not involve human embryos or hESCs. The major barriers in stem cell patenting are the moral exclusion and the treatment exclusion.
Moral exclusion
The scope of the moral exclusion is largely unclear because terminologies including "social ethics", "human embryo" and "industrial or commercial use" are not explicitly defined. Claims can be rejected for moral reasons even if they are not directed to human embryos or hESCs. We may infer from Chinese application no. 03816184.2 how SIPO exercises the moral provision to exclude patents that may involve an industrial or commercial use of human embryos. Directed to the production of glial cells using undifferentiated primate pluripotent stem (pPS) cells, CN application no. 03816184.2 was rejected in 2011 for violating social ethics and lacking industrial applicability. Upon reexamination, the Patent Reexamination Board reversed all the rejections (Tao and Duan, 2013). This application claimed an in vitro system for producing glial cells that comprises an established cell lineage of undifferentiated pPS cells and a population of cells differentiated from the pPS cell lineage. The claims also covered, among other related inventions, the population of cells differentiated from the pPS cell lineage, and producing methods and uses of the glial cells.
Although all the claims were limited to established cell lineages of either primate pluripotent stem cells or hESCs, the Examiner opined that these established cell lineages have to be obtained from human embryos (given that pPS cells could include hESCs), and therefore the claims are directed to the industrial or commercial use of human embryos, which is prohibited by the Patent Law. Further, the Examiner rejected all the claims for lacking industrial applicability because, in the case where the claimed pluripotent stem cells are derived from non-embryonic tissues, human or animal bone marrow or other tissues must first be obtained using non-therapeutic surgical methods. Thus, the invention pertains to non-therapeutic surgical methods applied on living humans, cannot be used industrially, and hence should be rejected. During prosecution, the applicant set forth the following to address the objections: the invention relies upon established cell lineages of hESCs that were readily available before the filing date of the application; the acquisition of established hESC lineages does not necessarily violate social ethics; and the use of established hESC lineages is not an industrial or commercial use of human embryos. The applicant further amended the description to delete passages involving the acquisition of hESCs; however, the application was finally rejected. Upon appeal, the claims were amended to explicitly exclude pPS cells or hESCs that are directly disaggregated from human embryos or blastocysts. The applicant further provided evidence showing that the initial cell lineages derived from human embryos had been widely employed and patented, and that the cell lineages H1 and H7 used in the examples were commercially available before the priority date; therefore the invention does not require the use of human embryos. 86 In May 2012, the Board ruled in favor of the applicant, concluding that the invention does not violate the moral provision.
The Board reasoned that the description and claims had already precluded the direct use or disaggregation of human embryos and blastocysts, and that hESC lineages are available in public depositories; thus the invention apparently uses established and commercialized cell lineages and does not pertain to an industrial or commercial use of human embryos. In response to the Examiner's proposition that the established cell lineages H1 and H7 must have been obtained through the destruction of human embryos, the Board held that it is inappropriate to incessantly trace the acquisition of the established cell lineages back to their initial origin (i.e., human embryos), given that cell lineages H1 and H7 have been publicly available, can be indefinitely proliferated in vitro and can be obtained with known techniques. 87 As for industrial applicability, the Board noted that none of the claims are directed to the isolation of pluripotent stem cells from non-embryonic tissues, and that the claims are limited to established lineages of pPS cells or hESCs; hence non-therapeutic surgical methods for the isolation of the pluripotent stem cells are not compulsory for practicing the invention, and the Board accordingly reversed the rejection for lacking industrial applicability. 88
Perspectives
In practice, examiners tended to strictly apply the moral exclusion to exclude inventions which read on hESCs (e.g. hESCs per se and methods of preparing them), and also inventions which are not directed to but are related to hESCs. However, it appears that China has loosened its restrictions, as indicated by a number of Board decisions upholding patents over hESC downstream technologies (Peng, 2016). Although these Board decisions are not binding, they illustrate that an invention which does not indispensably use human embryos can traverse the moral exclusion.
Inventions in which the destruction or use of human embryos or fertilized eggs is not requisite, such as those on somatic cell nuclear transfer (SCNT) (e.g. CN1280412C and CN1209457C) and iPSCs (e.g. CN103429732B), would have a higher chance of success. It is thus advised to explicitly exclude the use of human embryos in the claims and the description, and to make necessary clarifications to avoid rejections on moral grounds. All classes of inventions, including stem cells and differentiated cells, and the manufacturing methods and uses of these cells, should be put on guard.
Exclusion of therapeutic methods
Pharmaceutical compositions made of stem cells and their manufacturing methods are patentable, but therapeutic methods using stem cells are prohibited. As in Japan, applicants may pursue Swiss-type claims in the format of "use of a composition in the preparation of a medicament (or kit) for treating a disease" to protect medicinal uses of stem cell products (Chen and Feng, 2002). In short, morality is the biggest consideration in granting a patent to stem cell inventions, and claims that read on hESCs are likely to be rejected. While SIPO has started to relax its rules and has allowed the patenting of inventions which involve but are not directed to hESCs, it is essential to show with objective evidence why the claimed invention is free of ethical issues to have a higher chance of success.
Inventiveness
China also takes into account secondary considerations in justifying the inventiveness of an invention. As provided by the patent examination guidelines, China considers long-felt but unsolved needs, unexpected results and commercial success (Guidelines for Patent Examination).
IV -Japan
A) Background
The Japan Patent Office (JPO) examines patents under the Patent Act and the Utility Model Act. Inventions that can be protected by a patent are defined as "highly advanced creation[s] of technical ideas utilizing the laws of nature".
90 (Borowski, 1999; Kariyawasam et al., 2015) Generally, subject matter eligible for a patent can be a product, a device or a process. In contrast, a utility model is designed to protect a device that is related to the shape or structure of an article, or a combination of articles, which is industrially applicable; 91 it does not go through substantive examination as a patent does and confers a 10-year right. Sharing similar rules with other jurisdictions, novelty, inventiveness, industrial applicability and a sufficient and enabling description 94 are the basic patentability requirements in Japan. Japan provides a six-month grace period, which is stricter than that of the U.S. but more lenient than that of China. Inventions that were tested, 95 were disclosed through presentations, in printed publications or through an electric telecommunication line, 96 or were disclosed against the will of the person having the right to obtain a patent 97 are eligible for the grace period if the application is a direct national application or an international patent application under the Patent Cooperation Treaty (PCT) designating Japan. Notably, the international filing date is interpreted as the filing date for cutting off the six-month grace period for a PCT application (Examination Guidelines for Patent and Utility Model in Japan, Part III). Except for third-party disclosures, a proof document identifying the relevant disclosure has to be submitted within 30 days from the filing date of the application. 99,100

B) The exclusions of patentability

Moral exclusion

Codified at Article 32 of the Patent Act, Japan has an explicit moral provision that excludes inventions liable to injure public order, morality or public health from patent protection. 101 (Borowski, 1999) One example of such inventions is a human produced through genetic manipulation.
102 However, whether a human body at various stages of its development is interpreted to be a human is not specified in the Patent Act and Examination Guidelines. Although the Patent Act and Examination Guidelines do not specify that inventions involving human embryos are not patentable, applications involving a step of destroying human embryos have been rejected under Article 32 (Sugimura and Chen, 2013).

Ineligible subject matter

A statutory invention must be a creation of a technical idea utilizing a law of nature. Thus, laws of nature, discoveries per se (e.g. natural products and phenomena), inventions contrary to the laws of nature, and inventions that do not use the laws of nature (e.g. economic laws, mathematical methods and mental activities) are not regarded as inventions. 103 However, natural products and microorganisms that are artificially isolated from their surroundings are patentable. 104 Similar to Europe and China, Japan interprets methods for surgery, treatment or diagnosis practiced on humans as incapable of industrial application, and they therefore cannot be patented (Sato, 2011). 105 However, patents are possible if these methods are applied to animals and explicitly exclude humans. Materials used in these methods, and products of these methods, are patentable. 106 Notably, any method that processes or analyzes a sample taken from a human body is not patentable if the sample is presumed to be returned to the same body. 107 As for diagnostic methods, Japan adopts a similar approach to China, defining any method for the judgment of the physical or mental conditions of a human body as a diagnostic method and hence not patentable. 108 Further, methods designed for the purpose of prescription, treatment or surgery plans are regarded as diagnosis of humans and hence disallowed.
109 Thus, in a general sense, methods for extracting or analyzing a sample, or for gathering data from a human body, which are not for judging physical or mental conditions or for the planning of drug prescription, treatment or surgery, are patent eligible. The Examination Guidelines set forth some examples of medical activity that are patent eligible. 110 For example, methods of determining susceptibility to a disease by determining a gene sequence and comparing it with a standard can be patented. 111 There are certain exceptions to methods involving a "sample extracted from a human body and presumed to be returned to the same body". Specifically in the stem cell area, methods for manufacturing a medicinal product or material using raw materials from a human body are patent eligible. 112 Thus, a method for preparing a cell or an artificial skin sheet is patentable even if these articles are intended to be returned to the same person. Methods for differentiating or purifying a cell using raw materials from a human body, or for analyzing medicinal products or materials using raw materials from a human body, are also eligible. 113

C) Main challenges for stem cell patents

Japan appears to be less restrictive than Europe and China in granting stem cell patents, even though Japan has comparable stances on moral violation and on the industrial applicability of medicinal activity in its patent framework. Japan has issued patents on stem cell lines, manufacturing methods, uses of stem cells for drug production and so on, and was the pioneer in granting patents on iPSCs (Simon et al., 2010).

Moral exclusion

Although the JPO sets no explicit rule on human embryos and hESCs, it is likely that inventions that rely on human embryos will be rejected. Therefore, it is advised to take an approach similar to that proposed for the Chinese landscape; that is, to preclude the possibility of destruction of human embryos for practicing the invention (supra).
Following the JPO guidance, which exemplifies that methods of differentiating stem cells are patentable, methods for producing stem cells based on established embryonic stem cell lines are likely allowed (Sugimura and Chen, 2013). For example, JP 5,862,061 claims a method of culturing hESCs, and JP 5,841,926 was granted on a method of producing ESCs using a blastomere and on compositions comprising the ESCs, of which the description specifies that the cells can be derived without embryo destruction.

Exclusion of therapeutic and diagnostic methods

Treatment methods are generally not patentable. While methods for manufacturing medicinal products (e.g. vaccines or cells) or artificial substitutes using raw materials collected from a human body are patentable, uses of these medicinal products on humans are likely regarded as treatment methods and hence not patentable. Testing or assaying methods should be devoid of any step pertaining to an evaluation or determination of a physical or mental condition of a human, such that the method would not be interpreted as a diagnosis practiced on humans. Methods for the collection of data and/or the comparison of data with a control do not correspond to a diagnostic method and hence are likely patentable. Devices for practicing a treatment or diagnosis, as well as methods for controlling the operation of these devices, are largely patentable as long as the function of the medical device itself is represented as a method. 114 The JPO specifies that a method for controlling the operation of a medical device is not a method of surgery, treatment or diagnosis of humans, 115 given that the method does not involve a step with an action of a physician on the human body or a step with an influence on the human body by the device (e.g. incision or excision of the patient's body by an irradiating device).
116 Thus, it may be feasible to redraft forbidden therapeutic or diagnostic claims as methods for controlling the operation of the therapeutic or diagnostic devices or systems, without any steps involving a physician's action on the human body or steps affecting the human body by the device. For instance, "a method for irradiating X-rays onto the human body by changing the tube voltage and the tube current of the X-ray generator each time the generator rotates one lap inside the gantry" is considered to be a method of surgery, therapy or diagnosis of humans, while "a method for controlling the X-ray generator by control means of the X-ray device; wherein the control means change the tube voltage and the tube current of the said X-ray generator each time the generator rotates one lap inside the gantry" would be patent eligible. 117 In short, Japan is more liberal than Europe and China in granting stem cell patents provided that the claimed invention does not involve the destruction of human embryos. The Japanese Examination Guidelines helpfully provide many examples of eligible and ineligible claims; stakeholders are advised to read through these examples and craft their claims accordingly.

Inventiveness

As in the other three regions, secondary considerations can be used in Japan for justifying the inventiveness of an invention. For example, commercial success and long-felt need may be considered, provided that these result from the technical features of the claimed inventions and are supported by the applicant's arguments and evidence. 118

V -Discussion

Patent systems in each jurisdiction are standalone yet have similarities. Consistent with the above discussion, the most notable difference in the landscape of stem cell patents is that the U.S. neither establishes a moral exclusion nor excludes inventions that involve the destruction of human embryos.
Furthermore, although patentability requirements in the four regions are similar in the broadest sense, they are subject to disparate interpretations and standards. That is why it is not uncommon for an invention to be awarded a patent in one region but not in another. Seeking patent protection for the same invention in multiple jurisdictions is commonplace. Very often, patent applications filed in different jurisdictions share the same or substantially the same disclosure, while claims can be tailor-made to comply with the local rules and meet the interests of the stakeholders. Hence, to set up a favourable global and regional strategy for patent protection, it would be advantageous to look into the aspect of patent procurement in each of the jurisdictions of interest, and to deal with issues that may disfavour patent protection at the very beginning. The aspect of patent enforcement should not be overlooked but goes beyond the scope of this chapter. Focusing on patent procurement, the following section highlights the main similarities and differences in patentability issues between the four regions that may be worthy of attention. While a side-by-side comparison and analysis are not feasible due to limited space here, we summarize a few general and specific aspects of the four systems for a quick and easy comparison (Table 1 -Comparison of stem cell patent systems).

A) The unique territorial patent system in Europe

Europe is a region that includes both national patent laws and European patent laws, whereas the U.S., China and Japan are sovereign countries that each have a single national patent law. The co-existence of national and European patent law systems brings inherent complexity which should not be overlooked.
As discussed, there are several possibilities for obtaining patents in Europe: a European patent at the EPO, national patents at each national patent office, and in the future a European patent with unitary effect in all the Member States of the European Union at the EPO. This is especially complex in the evolving scientific and technical field of stem cells, as inferred from the definition of human embryos by the Court of Justice of the European Union. Although the EPO and the EU have been generally successful in providing a fairly uniform patent law that overcomes national heterogeneities, small national divergences with potentially high consequences can always appear, as has been shown by the different national implementations of the Brüstle case, especially on whether or not it should be proved that hESCs have been obtained without prior destruction of human embryos (Mahalatchimy, 2014).

B) Utility vs industrial applicability

Among the five general criteria for patentability, the utility or industrial applicability requirement appears to be the most distinctive. Unlike the industrial applicability requirement of Europe, China and Japan, the U.S. does not specify that an invention must be susceptible of industrial application. Rather, it requires that the invention have a specific and substantial utility. While the utility requirement is not usually an issue for stem cell patents in the U.S., the situation is totally different for the industrial application criterion, which precludes from patentability certain types of methods that are not industrially applicable. As discussed, therapeutic, diagnostic and surgical methods practiced on humans are mostly not patentable in Europe, China and Japan.

C) Moral exclusion

Absent a moral exclusion, the U.S. appears to be the most liberal among the four regions in granting human stem cell patents, while Europe is the strictest.
Europe appears as the region that places the strongest emphasis on the moral exclusion, as evidenced by the extensive coverage and specific examples of the moral exclusions in the patent law and rules. Firstly, Europe is the region where a definition of the human embryo has been provided in the field of patents. Secondly, Europe has explicitly defined that uses of human embryos for industrial or commercial purposes fall within the moral exclusion and has provided the most extensive interpretation of the exclusion: it covers the destruction of human embryos whenever it takes place, not only de novo destruction. China is the closest to Europe (Farrand, 2016) but more lenient regarding the interpretation of the uses of human embryos for commercial or industrial purposes. Although the decisions of the Chinese Patent Reexamination Board are not necessarily binding, the Board considers that the exclusion is limited to the de novo destruction of human embryos. Thus, the use of hESCs from publicly available cell lines deposited in biobanks does not prevent the grant of a patent. It also provides a clearer answer than the European Courts, as it specified that it is inappropriate to trace the acquisition of established cell lines back to their initial origin as long as they are publicly available. While the European courts adopted the opposite view, they did not clarify whether the non-destruction of human embryos should be proved in the claims, and by whom. Japan has placed a moral exclusion in its patent law but does not specify whether the involvement of human embryos would render an invention injurious to public order, morality or public health. It has consequently been considered an attractive country with a more liberal policy on stem cell patents (Kariyawasam et al., 2015). On the face of it, stem cell products are largely patent eligible in China and Japan provided that the de novo destruction of human embryos is excluded from the claimed invention in view of the description.
D) Limited eligibility for natural products, laws and phenomena in the U.S.

The recent changes in the interpretation of patent eligibility in the U.S. have imposed a unique and huge challenge on the stem cell arena. As discussed, a natural product, be it synthetic or isolated from natural sources, is not patentable absent any "markedly different characteristics" from the naturally occurring product. Hence, a purified population of stem cells may be patent eligible in the other three regions but not in the U.S. if it is essentially the same as the cells in the human body. Although diagnostic methods using stem cells are not excluded for lacking industrial applicability, the prospect of getting a patent is unclear unless more solid criteria for the "significantly more" standard are provided. Finally, as the U.S. has significantly narrowed the scope of eligible subject matter, one can foresee a convergence of the consequences of the different US and EU laws regarding stem cell patent eligibility (Davey et al., 2015).

E) Patent -a double-edged sword?

Undoubtedly, patents are awards granted by the government to innovators for their intellectual efforts by conferring on them an exclusive right in their innovation; however, in essence, the primary objective of the patent system is to promote innovation and economic development through the encouragement of information exchange within the community. On the one hand, a patent protects the interests of the innovators, allowing them to generate revenue and gain capital to foster their research and business. Market exclusivity, including patent rights and data exclusivity, is particularly important to the pharmaceutical and medical devices industries to offset the huge yet disproportionate risks and investment in the development of drugs, diagnostic kits or medical devices. Such risks and the difficulty of finding investment are particularly true in the field of stem cell patents.
Monopoly status, even if it only lasts for a limited period of time, is crucial for the industries and venture capitalists to invest in the development and commercialization of innovations. First and foremost, patents help to minimise the risk of infringement. Secondly, patents can effectively suppress competition and permit firms to earn revenue as a return on their investment. This second point, as well as the public impact resulting from the suppression of competition by patents, can be illustrated by the well-known story of Myriad. Before the Supreme Court decision which invalidated Myriad's claims over the natural BRCA genes, genetic tests for breast cancer based on the evaluation of the BRCA1 and BRCA2 genes cost about $3,000-4,000 in the U.S. (Cartwright-Smith, 2014). Holding the patents which claimed the natural sequence of the BRCA genes, Myriad was the sole company that could administer the BRCA1/2 tests. Laboratories which provided BRCA tests were forced to terminate their services after Myriad accused them of patent infringement, thereby barring patients from obtaining a second diagnostic opinion from an independent laboratory. The cost of the BRCA tests soon fell to around $1,000-2,300 after the Supreme Court decision (Cartwright-Smith, 2014). As for academia, patent protection also plays a significant role insofar as commercialization is concerned. Whether the technology is to be licensed to a third party or commercialized by the researchers (e.g. in the form of a spin-off), patents can provide some level of comfort to the potential licensees and investors in favour of the deal. Capital investment and revenue generated from royalties and licensing fees received by the universities or companies can be used in subsequent research and thereby promote innovation. It also brings renown to academia and may facilitate the publication of papers, the latter being the main means of assessing research and, consequently, proof of its quality.
On the other hand, although patent information is in open access, the public is prohibited by law from using the patented products or methods in research without the consent of the right holders. Nonetheless, in addition to anti-competition or similar laws, the patent system itself does provide certain filtering mechanisms to prevent overly broad patents. The first filtering mechanism is the exclusion of common goods from patentability. Scientific discoveries, natural phenomena and natural products per se are generally barred from patent protection in most if not all jurisdictions, but applications of these natural matters are still patentable provided that they meet the other patentability requirements. While the preclusion of patentability is an effective means to prevent the preemption of uses of natural goods, it is noteworthy that the Mayo and Myriad decisions have been heavily criticized by the industries and patent practitioners for hindering the development of biotechnology, especially the diagnostic field, since these decisions have effectively excluded many biotech and diagnostic inventions from patent protection. The second filtering mechanism is imposing restrictions on patent rights. Patent law precludes certain activities from patent infringement and thereby waives one's liability for patent infringement as a result of these exempted activities. Examples include the prior use defense, the research exemption and the regulatory review exemption (the so-called "Bolar exemption", an exemption of patent infringement for the use of patented products in experiments for the purpose of obtaining regulatory approval for drugs, established to enhance public access to generic drugs) (Misati and Adachi, 2010; Kappos and Rea, 2012). Therefore, patents can be seen as a double-edged sword which simultaneously promotes innovation and suppresses competition.
The availability of new information and the incentives given by patents may promote research, yet existing patent rights may discourage people from developing basic research into commercial embodiments that are practical and beneficial to the community. Doubts as to whether a patent system promotes or suppresses innovation, and to what magnitude, always exist. Particularly in the arena of stem cells, any monopoly over the use of natural human stem cells likely inhibits the research and development of stem cell-based technologies. Yet, the first filtering mechanism may operate to prevent a party from tying up natural stem cells at some level, for example stem cells obtained directly through the destruction of the human embryo (e.g. in China, Europe and Japan) and stem cells isolated from natural sources (in the U.S.). As for patented technologies, the research exemption does leave room for research using stem cell technologies covered by patents, although the permissible scope may be narrowly limited to noncommercial research. While the authors do not have an affirmative answer as to whether patents promote research on stem cells or not, we believe that the patent itself is a very useful tool to promote the exchange of information. The free flow of information is essential to enrich our knowledge, invoke our creativity and prompt the emergence of ground-breaking or disruptive technologies. Indeed, issues of patent infringement may arise when research involving the use of a patented technology matures into some sort of commercial activity; however, negotiations or collaborations between the owners/exclusive licensees of the patented technology and the innovators of follow-on inventions are often possible to allow commercialization of the follow-on inventions without patent infringement. It should be highlighted that patent policy is more than an issue of free market and academic freedom.
As with other areas of law, public policy reasons always play a role in formulating patent laws and rules. It is for the executive branch and the legislature to strike a balance between the general public interest and individual rights and liberties, and to adjust the laws and policies to achieve the best overall interest.

VI -Conclusion

Definitely, human stem cells have great potential in the fields of regenerative medicine and personalized medicine. Stem cell technologies must be timely and comprehensively protected by all means of intellectual property, particularly patents. Even though the field of human stem cells has been the object of clarifications by courts or guidelines in each region, complexity and uncertainties remain (Schwartz and Minssen, 2015). How will the ISSC case on the destruction of human embryos be applied in Europe? How will Myriad and Mayo on natural genes and diagnostic methods be applied in the U.S.? Will the non-binding decisions of the Chinese Patent Reexamination Board upholding the patentability of inventions that use hESCs from publicly available cell lines deposited in biobanks be followed in the process of patent examination and various patent proceedings? Will Japan provide more explanation and examples on its moral exclusion? All these uncertainties, among other challenges such as regulatory requirements, could be of utmost importance to the commercialization and success of stem cell inventions. Practitioners should closely follow the development of the patent landscape, from patent law to court decisions, with a comparative view, and researchers and the industry should adjust their strategy to strive for success through the challenges.
8 Regarding sequences and partial sequences of genes, the industrial application requirement has a specific form: "The industrial application of a sequence or a partial sequence of a gene must be disclosed in the patent application." Article 5.3 Directive 98/44/EC and Rule 29 (3) EPC. Both the Court of Justice of the European Union (ECJ, gr. ch., July 6, 2010, Monsanto Technology LLC v. Cefetra BV and a., C-428/08, Rec. I-06765) and the EPO (Board of Appeal of the EPO, The University of Utah Research Foundation v. Institut Curie, Assistance Publique-Hôpitaux de Paris, Institut Gustave Roussy-IGR, Vereniging van Stichtingen Klinische Genetica, et al., De Staat der Nederlanden, Greenpeace e.V., November 13, T 0666/05 (2008)) have specified that the patent protection is limited to the function for which the sequences and partial sequences of genes have been patented.
9 Article 5.2 Directive 98/44/EC and Rule 29 (2) EPC.
10 Rule 27 EPC; Article 4.3 Directive 98/44/EC.
11 The list also includes "processes for modifying the genetic identity of animals which are likely to cause them suffering without any substantial medical benefit to man or animal, and also animals resulting from such processes." Rule 28 of the Implementing Regulations to the European Patent Convention and Article 6(2) of Directive 98/44/EC.
12 Article 53 (c) of the European Patent Convention and Recital (35) of Directive 98/44/EC.
13 Enlarged Board of Appeal of the EPO, Wisconsin Alumni Research Foundation (WARF), November 25, G02/06 (2008).
14 Court of Justice of the European Union, Grand Chamber, Brüstle v Greenpeace eV, October 18, C-34/10 (2011).
15 Court of Justice of the European Union, Grand Chamber, International Stem Cell Corporation v Comptroller General of Patents, Designs and Trade Marks, 18 December 2014, Case C-364/13 (2014).
16 Enlarged Board of Appeal of the EPO, Medi-Physics Inc., February 15, 2010, G 1/07.
17 Ibid.
18 European Patent Office, Technical Board of Appeal, March 27, 1986, E.I. du Pont de Nemours and Company, T 0144/83.
19 Ibid.
20 European Patent Office, Board of Appeal, T2221/10 Technion Research and Development Foundation Ltd, February 4, 2014; UK Intellectual Property Office, International Stem Cell Corporation, August 16, 2012, BL O/316/12.
21 German Federal Court, BGH, Urteil vom November 27, 2012, Az. X ZR 58/07 (German Federal Court case law, 2012; http://openjur.de/u/596870.html).
22 Court of Justice of the European Union (2014), Grand Chamber, International Stem Cell Corporation v Comptroller General of Patents, Designs and Trade Marks, 18 December 2014, Case C-364/13, op. cit.
23 Article 52 (1) (a) EPC; recitals (13), (16), (34) of Directive 98/44/EC. The criterion of inventiveness has been detailed by the EPO with clarifications regarding the state of the art, the non-evident concept and the definition of the person skilled in the art notably; the obviousness being included within the inventive step criterion. EPO, Part G Chapter VII, Guidelines for Examination.
24 EPO, Part G Chapter VII, 10.3, Guidelines for Examination.
United States Patent and Trademark Office (2016e)
7 This is to be understood regarding the distinction between a mere discovery (such as the finding of a previously unrecognized substance occurring in nature) that is not patentable and an invention (such as a substance found in nature that is shown to produce a technical effect) that is patentable.
Fadia Al Hajj ([email protected]), Gilles Dufrénot ([email protected]), Benjamin Keddad ([email protected])

Exchange Rate Policy and External Vulnerabilities in Sub-Saharan Africa: Nominal, Real or Mixed Targeting?

Keywords: African Countries, Exchange Rate Policy, External Vulnerabilities, Regime-Switching Model
JEL classification: C32, F31, O24

Abstract: This paper discusses the theoretical choice of exchange rate anchors in Sub-Saharan African countries that are facing external vulnerabilities. To reduce instability, policymakers choose among promoting external competitiveness using a real anchor, lowering the burden of external debt using a nominal anchor, or using a policy mix of both anchors. We observe that these countries tend to adopt mixed anchor policies. We solve a state space model to explain the determinants of and the strategy behind this policy. We find that the choice of policy mix is a two-step strategy: first, authorities choose the degree of nominal exchange rate flexibility according to the velocity of money, trade openness, foreign debt, the degree of exchange rate pass-through and the exchange rate target zone. Second, authorities seek to stabilize the real exchange rate depending on the degree of trade integration with the rest of the world and the degree of foreign exchange interventions. We conclude with regime-switching estimations to provide empirical evidence of how these economic fundamentals influence exchange rate policy in Sub-Saharan Africa.

Introduction

This paper discusses a new exchange rate policy issue in Sub-Saharan African (SSA) countries. A large number of governments have, in the past years, adopted an exchange rate anchor based on a mix between a real and a nominal target. This means that they have sought to simultaneously achieve stable real and nominal exchange rates.
Few papers have discussed such a strategy, as discussions of exchange rate regimes have focused on the choice between so-called corner solutions (pure floating or fixed exchange rates) and intermediate regimes (see, for instance, Qureshi, "Hard or soft pegs? Choice of exchange rate regime and trade in Africa"; Harrigan, "Time to Exchange the Exchange-Rate Regime: Are Hard Pegs the Best Option for Low-Income Countries?"; Husain, "Exchange rate regime durability and performance in developing versus advanced economies"; Calvo, "The mirage of exchange rate regimes for emerging market countries"). However, SSA countries face two challenges that can explain the new stylized facts that we highlight. First, they seek external competitiveness to achieve trade surpluses or limit their trade deficits (see UNCTAD, 2016a; Allen, "The Effects of the financial crisis on Sub-Saharan Africa"). Second, they seek to alleviate the burden of their external debt, a significant part of which is denominated in foreign currencies (see UNCTAD, 2016b). We propose a theoretical model that brings to light the factors that influence a policymaker's anchor strategy because many SSA countries face balance of payments crises due to high trade dependence (measured as the ratio of imports to GDP) and a lack of export diversification (see Nicita and Rollo, 2015; Iwanow, "Trade Facilitation and Manufactured Exports: Is Africa Different"). This imbalance has fueled the growth of foreign indebtedness. To reduce the pre-eminence of imported goods in the household's consumption basket, policymakers can adopt trade controls by either raising customs taxes or restricting imports. Alternatively, they can target the internal real exchange rate to influence the consumer's trade-off between locally produced and imported goods.
Targeting the real exchange rate by controlling fluctuations in the nominal exchange rate can be achieved through various types of intermediate exchange rate regimes that lie between free floating and strict fixed exchange rate regimes: hard and soft pegs, basket pegs, target zones, crawling bands, etc. Second, we hold that policymakers are also concerned about stabilizing international reserves. Foreign reserves are needed for three purposes. One is to respond to changes in the current account balance and to stabilize the nominal exchange rate. The second is to meet the country's foreign liabilities by servicing external debt. The third motivation is to maintain access to foreign borrowing, which might be difficult in cases when reserves are depleted (credibility and reputation). Regardless of the motivation, the accumulation of foreign reserves in the African countries has served as insurance against recurrent balance of payments crises (Pina, "The recent growth of international reserves in developing economies: A monetary perspectives"). The determination of both the real exchange rate and foreign reserves results from consumers' decisions in the goods markets (i.e., traded versus non-traded goods, demand for money and foreign borrowing) and from macroeconomic constraints such as interest rates on debt service or changes in foreign exchange rates. In this paper, we consider a deterministic version of the choice of an exchange rate regime, in the sense that we do not consider the exchange rate regime as an optimal response to a shock. Thus, the determination of the exchange rate regime is presented as the solution in a "state-space" model in which the policymaker's decision is conditioned by the state of the economy, captured by the agent's decisions and the macroeconomic environment.
With regard to the existing literature on the choice of exchange rate regimes, our proposed model is based upon three strands of the literature on exchange rates to which we add several new assumptions. First, the exchange rate policy is described by a target zone model, but unlike the usual literature, the exchange rate is derived neither from the assumption of monetary determination of the exchange rate nor from the assumption of purchasing power parity (PPP). We instead consider a general equilibrium-based model with tradable and non-tradable goods. This allows us to examine how monetary authority interventions in foreign exchange markets can influence the real and nominal exchange rates. Second, unlike many papers in the literature on exchange rate regimes, we do not consider inflation or output stabilization as the final goals of monetary policy. This is not to say that their roles are negligible. We develop our argument for given levels of inflation and economic growth. We focus instead on external imbalances considering that heavy indebtedness has become crucial in many African countries. We introduce debt into our model through the consumer's budget constraint by assuming that they hold two assets: money and an IOU asset indicating how much money is owed to foreign lenders. Third, many models of currency basket pegs assume that the assumption of uncovered interest rate parity (UIP) holds, which seems inconsistent for SSA countries (see [START_REF] Thomas | Exchange rate and foreign interest rate linkages for Sub-Saharan Africa floaters[END_REF]). Contrary to the usual UIP assumption, we add a premium to the interest rate incurred on foreign debt. This premium, by making debt service more costly, may limit indebtedness, which could in turn reduce the pressure on and the depletion of foreign reserves that are used to maintain the peg.
The above assumptions are considered in a microeconomic-based model in which the real and nominal exchange rates are determined endogenously, taking into account some of the specific characteristics of the African countries: high levels of external debt, shallow domestic financial markets that do not allow capital mobility with developed countries, sharp price competition between locally produced goods and imported goods, and the existence of enclaves in some rent sectors (e.g., oil, minerals). We propose an empirical extrapolation of our findings to assess which domestic factors influence the decision to increase nominal exchange rate stability. Our econometric framework is based on a Markov switching model to allow regime-dependent dynamics of the exchange rate (flexibility versus stability). The remainder of the paper proceeds as follows. Section 2 presents general anchoring behavior in Sub-Saharan Africa, Section 3 presents the determinants of anchoring to both nominal and real exchange rates. Section 4 proceeds with the empirical application. Section 5 concludes.

Preliminary assessment

Policy mix: some evidence

A country's exchange rate policy can be defined according to the nature of its peg. Two extreme cases are those in which a country fully fixes either the nominal or the real exchange rate. The in-between cases are defined as a policy mix in which a country combines both objectives by weighting the real and nominal pegs, reflecting a trade-off between decreasing the cost of external debt and enhancing external trade competition (see, for instance, [START_REF] Gervais | Current account dynamics, real exchange rate adjustment, and the exchange rate regime in emerging-market economies[END_REF][START_REF] Staveley-O'carroll | Exchange rate targeting in presence of foreign debt obligations[END_REF]). The fully nominal peg applies when authorities want to maintain nominal stability of the currency.
This objective can be reached by fixing the nominal exchange rate against a single currency or a currency basket. The third option is to control exchange rate variation explicitly or implicitly within a specific margin. The monetary authority chooses to stabilize the variation of reserves in order to stabilize or decrease the cost of external debt. The fully real peg applies when authorities target variations in the real effective exchange rate (REER). This can be done through a free floating nominal exchange rate or through a managed float. By choosing a real peg, policymakers seek to increase external competition or fight inflation. Figure 1 describes these two choices by plotting pairs of changes in the REER and foreign reserves for SSA countries over the period 2000-2015 (on a monthly average basis). The x axis displays changes in foreign reserves, while the y axis describes changes in the real exchange rate. Any point located on the y axis means that the policymaker has adopted a pure real peg (since there are no foreign exchange interventions). The exchange rate is defined in such a way that a positive change in the real exchange rate reflects real depreciation, and vice-versa for real appreciation. Any point on the x axis illustrates a pure nominal peg policy, since the central bank uses foreign reserves to stabilize the nominal exchange rate. A policy mix of real and nominal pegs is illustrated by any point located in one of the four areas labeled I, II, III and IV (including the area delineated by the dotted lines). The results are displayed in Table 1. In area I, the policymaker favors an objective of external competitiveness over foreign debt cost reduction. Indeed, it seeks to depreciate the real exchange rate through nominal depreciation (by buying foreign currency). In area II, the policymaker faces a trade-off between external competitiveness and the cost of external debt.
By allowing nominal exchange rate appreciation, the policymaker reduces the cost of debt service. Appreciation of the nominal exchange rate leads to depreciation of the real exchange rate if the income effect of the changes in relative prices on income outweighs the substitution effect. In area III, the reduction of the cost of foreign debt is preferred and achieved through nominal appreciation of the currency, although this reduces competitiveness. Area IV illustrates both an increase in the cost of debt (due to nominal depreciation of the domestic currency, as reflected by an increase in foreign exchange reserves) and a loss of competitiveness as a consequence of real appreciation of the domestic currency. Finally, the dotted lines delineate small variations in the real exchange rate (within a margin of ±1 percent) and illustrate situations where a nominal peg was preferred by policymakers (through foreign exchange interventions). This strategy seems to characterize more than half of the country sample, including the WAEMU and CEMAC countries, as well as Botswana, Gambia, Malawi, Mauritius, Mozambique, Namibia, Swaziland, Tanzania and Uganda. Moreover, many of these countries are in area I and area IV, suggesting that their exchange rate regimes are very costly in terms of foreign debt cost and represent significant burdens in terms of their external stability. As a whole, Figure 1 shows that nominal pegs have detrimental effects on the external competitiveness and debt of many SSA countries (area IV), whereas others have adopted policy mixes with higher weights assigned to real exchange rate targeting and, thus, trade competitiveness (area I). When countries favor real depreciation through nominal depreciation of the domestic currency, they must then bear the costs of this strategy in terms of higher foreign debt servicing due to higher inflation and higher disbursements in the domestic currency.
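The four-quadrant reading of Figure 1 can be written as a simple classification rule. The sketch below is a hypothetical helper (the function name is ours; the ±1 percent margin follows the dotted-line convention described above):

```python
def classify_policy_mix(d_reserves, d_reer, margin=1.0):
    """Map a (change in reserves, change in REER) pair, both in percent,
    onto the areas of Figure 1. A positive d_reer is a real depreciation;
    a positive d_reserves reflects nominal depreciation (the central bank
    buys foreign currency)."""
    if abs(d_reer) <= margin:
        return "nominal peg"   # inside the dotted lines (small REER moves)
    if d_reserves >= 0 and d_reer > 0:
        return "I"             # real depreciation via nominal depreciation
    if d_reserves < 0 and d_reer > 0:
        return "II"            # competitiveness vs. debt-cost trade-off
    if d_reserves < 0 and d_reer < 0:
        return "III"           # debt-cost reduction via nominal appreciation
    return "IV"                # higher debt cost and real appreciation
```

Applied to the monthly averages behind Figure 1, such a rule reproduces the area assignments discussed in the text.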
[Figure 1: Policy mix of real and nominal pegs. x axis: variation of international reserves (nominal depreciation (+), nominal appreciation (-)); y axis: variation of the real effective exchange rate (real depreciation (+), real appreciation (-)); areas I-IV as defined in the text.]

In this sub-section, we propose a simple approach to assessing African exchange rate policy over the period 2000-2017. Our aim is to illustrate the trade-offs that many African countries are facing. That is, they must tighten their nominal pegs given transaction costs (especially foreign debt transactions) associated with greater exchange rate volatility or loosen their peg in the face of balance of payments and trade competitiveness issues. Here, we are interested in estimating the way African countries peg their currencies, which can provide important insight into the mixed peg strategy. As described in the previous sections, for a typical small open economy, a managed float (or any other managed arrangement) might be interpreted as the desire of policymakers to reduce downward pressure on their currency. In this view, the nominal exchange rate is considered a tool with which to control the real exchange rate and, therefore, trade competitiveness. As explained above, this strategy should imply an increase in foreign debt cost. The evidence regarding a de facto mixed peg policy (real and nominal) can be illustrated using an econometric approach allowing diversity in exchange rate regimes: pure floating, pegged to a basket of currencies and managed float. Our empirical investigation is based on an extended version of the following Frankel-Wei model:

$\Delta e_t = \alpha + \beta_1 \Delta e^{USD}_t + \beta_2 \Delta e^{EUR}_t + \beta_3 \Delta e^{GBP}_t + \beta_4 \Delta e^{YEN}_t + \varepsilon_t$   (1)

where $\Delta e$ is the first difference of the natural logarithm of the respective exchange rates (US dollar, euro, British pound, yen and the dependent currency) against an independent numeraire (i.e., the Swiss franc). Estimates of $\beta$ reflect the respective weight of the right-hand side currencies in the implicit basket peg of the left-hand side currency.
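Equation (1) can be estimated by ordinary least squares. The sketch below runs the regression on synthetic monthly data with hypothetical basket weights (0.7 on the USD, 0.2 on the EUR, 0.05 each on the GBP and the yen), purely to illustrate the mechanics rather than any actual country:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 216                                   # monthly observations (illustrative)
# Log-changes of the USD, EUR, GBP and JPY against the Swiss franc numeraire
d_major = rng.normal(0.0, 0.02, size=(T, 4))
true_beta = np.array([0.7, 0.2, 0.05, 0.05])   # hypothetical basket weights
d_home = d_major @ true_beta + rng.normal(0.0, 0.004, size=T)

# OLS estimation of Eq. (1): a constant plus the four anchor currencies
X = np.column_stack([np.ones(T), d_major])
coef, *_ = np.linalg.lstsq(X, d_home, rcond=None)
beta_hat = coef[1:]                       # estimated basket weights
```

A weight close to one on a single currency signals a tight peg to that currency; here the regression recovers weights close to the simulated basket because the data-generating process matches the specification.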
For instance, a $\beta_1$ coefficient close to one implies that fluctuations in the exchange rate are mainly explained by movements in the USD. In this case, the USD can be qualified as the main anchor currency. Obviously, many peg configurations are possible, depending on the exchange rate policy of each country. We propose a structural version of the [START_REF] Frankel | Yen bloc or dollar bloc? Exchange rate policies of the East Asian economies[END_REF] model to estimate the degree of nominal exchange rate flexibility as well as the weights on the USD, EUR, GBP and CNY in the implicit basket peg of each African country. To assess the relative importance of shocks from the USD, EUR, GBP and CNY, we estimate the following VAR model for each African currency:

$Y_t = \mu_0 + \sum_{k=1}^{P} \Phi_k Y_{t-k} + \varepsilon_t$   (2)

where $Y_t$ represents the vector of variables $(\Delta e^{USD}, \Delta e^{EUR}, \Delta e^{GBP}, \Delta e^{CNY}, \Delta e^{AFC})$, $\Phi_k$ is a $5 \times 5$ matrix, and $\mu_0$ is a vector of constants. Specifically, we simulate shocks to the USD, EUR, GBP, CNY and the domestic currency to determine the respective weights of their innovations in the fluctuation of each African currency. The share of home currency shocks is then interpreted as the degree of flexibility because it represents demand shocks to the currency that the authorities allow to be partially reflected in the exchange rate [START_REF] Frankel | Assessing China's exchange rate regime[END_REF]. Consequently, the higher this share, the lower the share of major currencies, suggesting that the home currency fluctuates more freely. The dataset obtained from the IMF IFS database covers monthly nominal exchange rates over the period from January 2000 to October 2017 for a sample of 23 African countries: Angola, Botswana, Burundi, Eritrea, Ethiopia, The Gambia, Ghana, Guinea, Kenya, Liberia, Madagascar, Malawi, Mauritius, Mozambique, Nigeria, Rwanda, São Tomé, Seychelles, Sierra Leone, South Africa, Uganda, and Zambia.
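The variance decomposition exercise can be sketched with a one-lag VAR estimated by OLS and a Cholesky identification. This is a simplified stand-in for Eq. (2), which allows up to P lags; the function and its defaults are illustrative:

```python
import numpy as np

def fevd_var1(Y, horizon=12):
    """Forecast-error variance decomposition for a VAR(1) estimated by
    OLS with a Cholesky identification. Row i of the result gives, for
    variable i, the share of forecast-error variance attributable to
    each (ordered) structural shock."""
    T, n = Y.shape
    X = np.column_stack([np.ones(T - 1), Y[:-1]])   # constant + first lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    Phi = B[1:].T                                   # n x n coefficient matrix
    U = Y[1:] - X @ B                               # reduced-form residuals
    Sigma = U.T @ U / (T - 1 - (n + 1))
    P = np.linalg.cholesky(Sigma)                   # orthogonalized impacts
    mse = np.zeros((n, n))
    Psi = P.copy()
    for _ in range(horizon):                        # cumulate squared IRFs
        mse += Psi ** 2
        Psi = Phi @ Psi
    return mse / mse.sum(axis=1, keepdims=True)     # each row sums to one
```

With the home currency ordered last, the diagonal entry of its row corresponds to the "degree of flexibility" share discussed in the text.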
Our sample was constrained by data availability, but we took care to reflect the various characteristics of exchange rate policies within the region (see Table 1). The estimations are carried out over two sub-periods, before and after the 2008-2009 financial crisis. Table 2 displays the variance decomposition for each African currency over the two sub-periods. For each sub-sample, each column gives the percentage of forecast error variance due to currency innovations. Accordingly, each row adds up to 100. The first four columns show the de facto implicit basket weights and the degree of exchange rate flexibility of each African currency before the 2008-2009 financial crisis. Angola, for instance, is mainly driven by country-specific shocks, which account for more than 57% of Angola's exchange rate variations. USD shocks explain more than 26% of Angola's exchange rate variations, while the euro, pound sterling and yuan have a limited impact (less than 10%). We can conclude that Angola's monetary authority manages, to some extent, the nominal exchange rate against the USD, although it allows a high degree of flexibility. As a whole, the GBP and CNY shares are both negligible, amounting to less than 9% in all African countries. The euro share is also quite low, except in Cabo Verde (43%), which has adopted a soft peg to a currency basket composed mainly of the euro and the USD. Furthermore, we find that the USD accounts for an important percentage of African currency variations. Indeed, the role of the USD was particularly high (more than 40%) during the 2000s in Eritrea, Ethiopia, Ghana, Kenya, Mauritius, Nigeria, Rwanda, São Tomé, Sierra Leone and Uganda. In the second sub-sample, we note that the USD plays an increasing role in many countries. For instance, the USD weight has more than doubled in seven countries (Angola, Botswana, Burundi, Guinea, Liberia, Madagascar and Sierra Leone), while for 10 countries, the weight has increased by at least 37%.
The USD weight is now higher than 50% for 12 countries compared to 8 countries in the first sub-sample. Conversely, the USD weight has decreased significantly for only four countries (Ethiopia, Malawi, Nigeria, São Tomé). These higher USD shares have resulted in smaller shares of country-specific shocks, although the degree of exchange rate flexibility remains significant (higher than 40%) for 10 countries and higher than 70% for 3 countries (Gambia, Zambia and Malawi). The euro displays a share higher than 50% for Cabo Verde and Mauritius. As a whole, the CNY, GBP and EUR shares remain weak. This evidence is in line with the fear of floating phenomenon. As African countries move towards floating exchange rate regimes, they are strongly affected by commodity shocks. Accordingly, they tend to avoid currency appreciation by pegging their currencies to the commodity currency, i.e., the USD [START_REF] Chan | Commodity currencies[END_REF]. Furthermore, African countries tend to favor managed arrangements to minimize transaction costs associated with greater exchange rate flexibility, such as debt securities transactions. As transaction costs increase with exchange rate volatility, it is optimal for countries to tighten their pegs on the USD. However, they tend to preserve a degree of exchange rate stability in order to address balance of payments issues and depreciation pressure and, thus, avoid real exchange rate misalignment (see [START_REF] Gnimassoun | The importance of the exchange rate regime in limiting current account imbalances in Sub-Saharan African countries[END_REF]). This duality in exchange rate behavior clearly shows that African countries face a trade-off between foreign debt costs and trade competitiveness. In the next section, we propose a theoretical framework that explains how African countries can cope with these conflicting objectives. Notes: The optimal lag lengths were selected according to the Akaike criterion.
The lag lengths range between 1 and 3, depending on the country and the sample period.

[Table 2: Variance decomposition of each African currency (shares of innovations to the USD, EUR, GBP, CNY and the home currency), before and after the 2008-2009 financial crisis.]

Theoretical model

General features

To represent these African economies, we consider a small open economy (the domestic country) vis-à-vis the rest of the world. The latter is divided into two areas: a euro zone and a dollar zone. The domestic country may choose to peg its currency to the euro and the US dollar. Although we limit the basket peg to two currencies for simplicity, our arguments are valid for a higher number of currencies in the basket. The central bank chooses from among a variety of exchange rate regimes. In the case of an intermediate regime, we consider a soft peg in the sense that the policymaker allows the exchange rate to fluctuate around a central parity within a given band. Our goal is to examine the main motivations that drive an African country to adopt such an intermediate regime rather than a corner solution (pure float or hard peg). An essential feature of the model is that the domestic country issues debt denominated in foreign currencies and therefore accumulates foreign liabilities. The central bank decides the optimal weights on the euro and the US dollar in the basket and chooses the degree of flexibility of the exchange rate. It does so while seeking to minimize the variance of the internal real exchange rate and international reserves. The model is built upon several assumptions that characterize the situations of developing or emerging African countries: i) Their foreign debt is high and a pure float regime may increase the debt burden if the domestic currency depreciates. For this reason, limiting exchange rate fluctuations may be advantageous. ii) In many articles, the optimal design of basket-peg arrangements relies on the UIP assumption to relate the domestic interest rate to foreign interest rates.
This assumption, however, is at odds with the empirically low integration of African capital markets with international financial markets. One reason for deviation from UIP is the existence of additional lending costs due to the scarcity of funds in these domestic financial markets. We use this assumption in our model. iii) In addition to debt problems, we examine the role of the real exchange rate as a factor in external imbalances when people consume both locally produced (non-tradable) goods and imported goods (tradable) that are imperfect substitutes. We consider an economy populated by identical households that own firms. They consume locally produced goods (non-tradable goods and services) and tradable goods imported from abroad. In the domestic country, there are three categories of firms: i) firms in the export sector sell commodity goods abroad whose prices and quantities sold are exogenous and fixed by the rest of the world; ii) firms in the tradable imported good sector import commodity goods and sell them in the domestic market; and iii) finally, in the non-tradable goods sector, firms hire workers to produce domestic goods and supply domestic services. In addition to the real sector, the model includes financial markets and a monetary sector. Demand for money is introduced through a cash-in-advance constraint. Moreover, we assume that the exchange rate between the domestic currency and the foreign countries is not given by the UIP condition (which amounts to assuming that there is no perfect capital mobility between the domestic country and its foreign partners). The central parity of the domestic currency is described by a peg to a two-currency basket and, since it is a small country, foreign interest rates are exogenous. Net borrowing is assumed to be positive, so that consumers hold neither foreign assets nor domestic assets. They issue domestic assets that are held by foreign countries.
The model

The households

We consider a cash and credit economy. The representative household gains utility from the consumption of a non-tradable good $C_{Nt}$ and a tradable (imported) good $C_{Tt}$. The consumer-worker obtains disutility from working ($l_t$ is the labor supply) and maximizes the present discounted value of utility:

$\max E_0 \left[ \sum_{\tau=0}^{\infty} \beta^{t+\tau} U(C_{Nt+\tau}, C_{Tt+\tau}, l_{t+\tau}) \right]$,   (3a)

$U(C_{Nt}, C_{Tt}, l_t) = (1-\alpha) \ln C_{Nt} + \alpha \ln C_{Tt} - \gamma l_t$,   (3b)

where $\beta = (1+\rho)^{-1}$ is the discount factor, $\rho$ is the time preference rate, and we assume that $0 < \alpha < 1$ and $\gamma > 0$. The non-tradable good is purchased using cash $m_t$. The tradable good is purchased on credit. The households hold two assets: money and IOU bonds. They borrow money from abroad for a one-period maturity and in turn hold a bond indicating the amount of debt owed. At time t, the amount owed in foreign currency is $-B_{t+1}$ (with $B_{t+1} > 0$). We use the index $t+1$ to indicate that this is the amount borrowed at time t to be reimbursed at time $t+1$. The equivalent amount in terms of the domestic currency is calculated by taking into account the "composite" nominal exchange rate prevailing at the time of reimbursement, $-e^a_{t+1} B_{t+1}$. $e_t$ is the price of the domestic composite exchange rate in terms of the foreign currencies, to be defined below (an increase in $e_t$ means depreciation of the domestic currency). $e^a_{t+1}$ is the expectation at time t of the nominal exchange rate that will prevail at time $t+1$. The term "composite" is used because there are multiple foreign lenders (here, the US and European countries) and the domestic country may adopt a basket peg regime. Finally, we assume that the IOU asset costs $r_t$, the interest rate paid on foreign debt between time $t-1$ and time t. The household faces cash-in-advance and budget constraints for t = 1, . . .
, ∞:

$P_{Nt} C_{Nt} \le m_t - (1+r_t) e_t B_t + e^a_{t+1} B_{t+1}$,   (4)

$P_{Nt} C_{Nt} + P_{Tt} C_{Tt} - e^a_{t+1} B_{t+1} + m_{t+1} \le w_t l_t + [m_t - e_t B_t (1+r_t)] + \Pi_t + \Omega_t$,   (5)

where $\Omega_t = P_{Xt} X_t$ and $\Pi_t = prof_{Nt} + prof_{Tt}$. $P_{Nt}$ is the unit price of the non-tradable good at time t. $P_{Tt}$ is the unit price of the tradable good. We assume that the households own the firms. $w_t$ is the wage rate. The household's income has several components: labor income $w_t l_t$, domestic profits $\Pi_t$, and export revenues $\Omega_t$. $prof_{Nt}$ and $prof_{Tt}$ (defined below) are the profits from activities in the non-tradable goods sector and the tradable goods sector, respectively. $P_{Xt} X_t$ is the value of commodity exports invoiced in the domestic currency. $P_{Xt}$ is the unit export price, and $X_t$ is the volume of exports. Exports refers here to a commodity good extracted from the soil and directly sold abroad (e.g., primary goods, oil, minerals). The household optimization problem is to choose a sequence $C_{Tt}$, $C_{Nt}$, $m_t$, $l_t$ and $B_t$ to maximize Eq. (3a) subject to constraints (4) and (5), taking the composite exchange rate $e_t$, prices $P_{Xt}$, $P_{Tt}$, $P_{Nt}$, $w_t$, profits $prof_{Nt}$ and $prof_{Tt}$, the volume of exports $X_t$, the initial stock of debt $B_0$ and cash $m_0$ as given. We define $\varphi_t = P_{Nt}/P_{Tt}$ as the real exchange rate at time t. From standard first-order conditions, we obtain the following relationships. The shares of non-tradable and tradable consumption in total consumption are given by:

$\frac{C_{Nt}}{C_{Nt}+C_{Tt}} = \frac{1-\alpha}{(1-\alpha)+\alpha \varphi_t (1+r_t)}, \qquad \frac{C_{Tt}}{C_{Nt}+C_{Tt}} = \frac{\alpha (1+r_t) \varphi_t}{(1-\alpha)+\alpha \varphi_t (1+r_t)}$.   (6)

A higher interest rate makes the current tradable good less expensive relative to the non-traded good (because debt becomes more costly) and, thus, increases the current consumption of this good (a substitution effect).
At the same time, the increase in the interest rate reduces the consumer's total income, which leads to less consumption of both goods (an income effect). Because there is a distinction in the model between cash and credit goods, any change in a given price has an impact on the credit good through both substitution and income effects, while the impact on the cash good comes only from income effects. As a consequence, the non-tradable good share of total consumption diminishes. Similarly, an increase in the real exchange rate reduces the non-tradable good share of total consumption. Here, $P_t Y_t = P_{Tt} C_{Tt} + P_{Nt} C_{Nt}$ denotes total consumption spending on cash and credit goods, and noting that the only money held will finance cash goods ($m_t = P_{Nt} C_{Nt}$), the velocity of money is

$v_t = \frac{P_t Y_t}{m_t} = 1 + \frac{1}{\varphi_t} \frac{C_{Tt}}{C_{Nt}} = 1 + \frac{\alpha}{1-\alpha} (1+r_t)$.   (7)

Eq. (7) implies that real money demand varies negatively with the interest rate (for instance, an increase in the interest rate raises the opportunity cost of holding money, making the IOU asset, or equivalently selling domestic assets abroad, relatively more attractive than holding money).

Firms in the non-tradable goods sector

We assume that the production function in the competitive non-tradable goods sector is linear in labor demand:

$Y_{Nt} = L_t$.   (8)

Capital is fixed (and normalized to 1), and a similar assumption applies to productivity. The representative firm maximizes the following profit with respect to $Y_{Nt}$ ($L_t$ is labor demand):

$prof_{Nt} = P_{Nt} Y_{Nt} - w_t L_t$.   (9)

Profit maximization implies

$P_{Nt} = w_t$.   (10)

At the competitive equilibrium, profits equal zero.

Firms in the tradable goods sector

The tradable goods sector consists of two representative firms. Firm 1 produces and exports commodity goods at a price $P_{Xt}$ that is set by international markets. The volume of exports $X_t$ is also determined by foreign demand and is exogenous. We assume that export activities are performed by self-employed people who are paid the income from their exports, $P_{Xt} X_t$.
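Before turning to the import firm, the comparative statics of the household block, Eqs. (6) and (7), can be checked numerically. The sketch below uses illustrative parameter values, not a calibration:

```python
def shares_and_velocity(phi, r, alpha):
    """Consumption shares from Eq. (6) and the velocity of money from
    Eq. (7); phi is the real exchange rate P_N/P_T and r the interest
    rate on foreign debt."""
    denom = (1.0 - alpha) + alpha * phi * (1.0 + r)
    share_n = (1.0 - alpha) / denom             # non-tradable (cash) good
    share_t = alpha * (1.0 + r) * phi / denom   # tradable (credit) good
    velocity = 1.0 + alpha / (1.0 - alpha) * (1.0 + r)
    return share_n, share_t, velocity
```

The two shares always sum to one; raising r lowers the non-tradable share and raises velocity (i.e., lowers real money demand), as stated in the text.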
Firm 2 imports goods and sells them in the domestic markets. People in these firms are also self-employed. Self-employment activities are common in the tradable goods sector in Africa, since small firms in this sector belong to diaspora communities (e.g., Lebanese, Indian or, more recently, Chinese merchants). The import sector is described by a representative monopolistic firm. The latter sells a good imported from outside in the domestic market. Profits are:

$prof_{Tt} = (P_{Tt} - e_t P_{Mt}) Y_{Tt}, \qquad Y_{Tt} = C_{Tt}$.   (11)

The firm in the import sector faces a demand for traded goods and sets its price $P_{Tt}$. $P_{Mt}$ is the price of imports denominated in foreign currencies. The way in which the nominal exchange rate affects import prices and the prices of tradable goods (exchange rate pass-through) depends upon the exchange rate regime. In a hard peg regime ($e_t$ is fixed), there is full pass-through. In a floating exchange rate (partial or free float) regime, nominal exchange rate fluctuations can dampen the effects of changes in import prices. The monopoly chooses to maximize Eq. (11) subject to the demand function for traded goods, resulting in an optimal price:

$P_{Tt} = (1+\mu_t) e_t P_{Mt}, \qquad \mu_t = \frac{1}{|\epsilon_t| - 1}, \qquad \epsilon_t = \frac{P_{Tt}}{C_{Tt}} \frac{\partial C_{Tt}}{\partial P_{Tt}} = -\frac{\alpha}{\gamma} \varphi_t Y_{Nt}$.   (12)

We assume that $\epsilon_t < -1$, since traded and non-traded goods are imperfect substitutes.

Exchange rate regimes

SSA countries may use some forms of intermediate exchange rate regimes (soft pegs or managed floats), because their capital accounts are still not completely open to international capital flows. They borrow money in international capital markets, but their own domestic financial markets are still shallow and illiquid.
An intermediate regime (between the corner solutions of a hard peg or a free float) can be a first step towards deeper financial integration with the rest of the world through a mechanism similar to the former European Exchange Rate Mechanism (ERM) (fixed exchange rate currency margins or a semi-pegged regime). Exchange rate movements depend upon the current account. The balance of payments and foreign reserves are the sum of the trade balance, debt service and foreign liabilities (positive official transfers):

$-r_t e_t B_t + e^a_{t+1} B_{t+1} = -\Delta RES_t$,   (13)

where $\Delta RES_t$ denotes changes in net international reserves at time t. Moreover, the rate of nominal exchange rate adjustment is limited by the central bank's interventions in the exchange rate market. The resulting changes in foreign reserves are as follows:

$\Delta RES_t = -\theta_1 \lambda_t + \Delta L_t + \Delta U_t, \qquad \lambda_t = e_t - e^c_t$,   (14)

where $\lambda_t$ is the deviation of the exchange rate from its central parity $e^c_t$. Eq. (14) is useful for studying policymakers' choices in several situations.

• $\theta_1 = 0$ and $\Delta L_t = \Delta U_t = 0$. This case corresponds to a free floating regime. The central bank does not intervene in the exchange rate market to stabilize the nominal exchange rate. In this case, $\lambda_t \to \pm\infty$ (the exchange rate is allowed to deviate from the central parity with no limit).

• $\theta_1 = 0$ and $\Delta L_t \neq 0$, $\Delta U_t \neq 0$. This case corresponds to a managed float regime. Foreign reserves are used to monitor exchange rate movements within a band $[e_{min}, e_{max}]$. The central bank sets upper and lower limits to the exchange rate changes by defining a target band. This target remains unchanged ($\theta_1 = 0$ means that there are no intra-band interventions; therefore, $\Delta L_t = \Delta L < \infty$, $\Delta U_t = \Delta U < \infty$).

• $\theta_1 \to \infty$ and $\Delta L_t \to \infty$, $\Delta U_t \to \infty$. This case describes a hard peg, a regime characterized by unlimited central bank interventions to maintain the fixed exchange rate (in this case, $\lambda_t = 0$).

• $0 < \theta_1 < \infty$ (crawling band). This case is similar to the managed float regime, but this time, $\Delta L_t$ and $\Delta U_t$ are time varying and remain bounded.
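The intervention rule of Eq. (14) can be simulated to see how θ1 governs the regime. The parameter values below are purely illustrative:

```python
def reserve_change(e, e_central, theta1, d_lower=0.0, d_upper=0.0):
    """Change in net international reserves implied by Eq. (14):
    Delta RES_t = -theta1 * (e_t - e_c_t) + Delta L_t + Delta U_t."""
    return -theta1 * (e - e_central) + d_lower + d_upper

# Free float (theta1 = 0, no band): reserves never move
free_float = reserve_change(1.2, 1.0, theta1=0.0)
# Harder pegs: a larger theta1 means larger reserve losses when the
# currency trades above (i.e., depreciated relative to) the central parity
soft_peg = reserve_change(1.2, 1.0, theta1=5.0)
hard_peg = reserve_change(1.2, 1.0, theta1=50.0)
```

As θ1 grows without bound, any deviation from the central parity is met with (in the limit, unlimited) intervention, which is the hard peg case described above.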
Interest rates

We now present our assumptions about the determination of the domestic interest rate $r_t$. Since they are constrained in their domestic markets and must borrow money from abroad, households have to incur an additional cost to service their debt. This additional cost represents an amount they have to pay to foreign lenders for making foreign funds available. It represents an interest rate margin for the lenders over the interest rate that would correspond to UIP. Using $\xi$ as the interest rate differential, we write

$\xi^{us}_t = (r_t - r^{us}_t) - (E_t e^{us}_{t+1} - e^{us}_t), \qquad \xi^{us}_t > 0$,   (15a)

$\xi^{euro}_t = (r_t - r^{euro}_t) - (E_t e^{euro}_{t+1} - e^{euro}_t), \qquad \xi^{euro}_t > 0$,   (15b)

where $\xi^{us}_t$ and $\xi^{euro}_t$ would equal zero under the UIP assumption, which is not our assumption. One could also imagine that funds borrowed from abroad are not given directly to the households but to a bank (a subsidiary of an international financial company could borrow the funds at an interest rate corresponding to that of UIP and lend them at a higher interest rate). In that case, UIP would not be satisfied because domestic financial markets are thin, illiquid, and shallow, so the domestic interest rate is higher than foreign interest rates in international capital markets. The rate is so high that even with appreciation of the foreign currency, the expected rate of appreciation would not be enough to offset the savings accrued from the positive interest rate differential. $E_t e^{us}_{t+1}$ and $E_t e^{euro}_{t+1}$ are the expectations at time t of the domestic exchange rate relative to the US dollar and the euro, respectively, at time $t+1$. Assuming perfect capital mobility between the US and the euro area implies that

$\xi^{us}_t = \xi^{euro}_t = \xi_t$.   (16)

This means that there is no borrowing-cost arbitrage by borrowers between the US and European financial markets. Otherwise, at a given period t, the households would choose to issue foreign debt in the country with the lowest $\xi_t$.
We thus write

$r_t = r^{us}_t + (E_t e^{us}_{t+1} - e^{us}_t) + \xi_t = r^{euro}_t + (E_t e^{euro}_{t+1} - e^{euro}_t) + \xi_t$,   (17)

where $\xi_t$ can be thought of as either a finance premium that compensates the foreign lender for bearing verification costs due to informational asymmetries or as a premium due to the possibility of default on the debt.

Ranking the exchange rate regimes

The choice of the exchange rate regime

Policymakers are concerned about stabilizing fluctuations in the real exchange rate and in foreign reserves. We solve the model and obtain the following expressions for the real exchange rate and foreign reserves (see the details in Appendix (A)):

$\varphi_t = D_t \frac{1}{e_t} = \frac{D_t [\theta_1 - C_t]}{A_t + \theta_1 [\lambda_1 e^{us}_t + (1-\lambda_1) e^{euro}_t]}$,   (18)

and

$\Delta RES_t = \frac{\theta_1 [A_t + C_t (\lambda_1 e^{us}_t + (1-\lambda_1) e^{euro}_t)] + X_t [\theta_1 - C_t]}{\theta_1 - C_t}$,   (19)

where $A_t$, $D_t$, $C_t$ and $X_t$ are defined by:

$A_t = \frac{\alpha w_t Y_{Nt} Y_{Tt}}{\delta (1-\alpha)(Y_t - Y_{Nt})} + (1+\delta) P_{Xt} X_t + (1-\delta\omega) P_{Nt} Y_{Nt} - m_{t+1} + X_t$,   (20a)

$D_t = \frac{w_t}{P_{Mt}} \frac{(1-\alpha)\,\delta}{\gamma} (Y_t - Y_{Nt})$,   (20b)

$C_t = B_t - P_{Mt} Y_{Tt} (1 - Y_{Nt})(1 + \delta\omega)$,   (20c)

$X_t = \Delta L_t + \Delta U_t$,   (20d)

$\delta = \frac{1}{1+r_t}, \qquad \omega = 1 + \frac{1-\alpha}{1+r_t}$.   (20e)

We now consider the main determinants of $A_t$, $D_t$, $C_t$ and $X_t$. They describe the macroeconomic environment that influences the policymaker's optimal choice to be studied in the next sections. The first component of $A_t$ can be rewritten as:

$\frac{\alpha w_t Y_{Nt} Y_{Tt}}{\frac{1-\alpha}{1+r_t} (Y_t - Y_{Nt})} = \frac{\gamma}{2} \frac{\mu_t}{1+\mu_t} e_t P_{Mt} Y_{Tt}$.   (21)

The key variable is $\mu_t/(1+\mu_t)$, a proxy for the degree of pass-through from the import price to the price of tradable goods. If $\mu_t = 0$ (the demand for traded goods is infinitely price elastic), firms in the traded goods sector fully pass through changes in import prices to the prices of the tradable goods. If $|\mu_t| > 0$, pass-through is incomplete. The ratio $\mu_t/(1+\mu_t)$ can be thought of as a measure of the degree of trade integration with the rest of the world (an indicator of competition in the domestic goods market).
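The markup rule of Eq. (12) and the pass-through proxy just discussed can be illustrated numerically. The elasticity values below are hypothetical:

```python
def traded_price_and_passthrough(e, p_import, eps):
    """Optimal traded-good price from Eq. (12) for a demand elasticity
    eps < -1, together with the pass-through proxy mu/(1+mu)."""
    mu = 1.0 / (abs(eps) - 1.0)            # monopoly markup
    p_traded = (1.0 + mu) * e * p_import   # P_T = (1 + mu) e P_M
    return p_traded, mu / (1.0 + mu)
```

As |eps| grows (stronger competition associated with deeper trade integration), the markup and the proxy both shrink toward zero and the traded-good price approaches e P_M.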
Equation (18) shows that a higher A_t stemming from this first component is consistent with a lower real exchange rate ϕ_t. The remainder in Eq. (20a) can be written as follows:

P_{Xt}X_t + [P_t Y_t/(1 + r_t)] open_t - [α/(1 - α)][ν_t/(ν_t - 1)]P_{Nt}Y_{Nt} + [1 + (1 - α)/(1 + r_t)^2]P_{Nt}Y_{Nt} - m_{t+1} + X_t,   (22)

where open_t is the degree of trade openness defined as the sum of exports P_{Xt}X_t and imports P_{Tt}Y_{Tt} as a share of GDP P_t Y_t. One expects higher openness to be associated with lower prices of non-tradable goods and therefore with a lower real exchange rate. The ratio ν_t/(ν_t - 1) is an index of the velocity of money. Insofar as non-traded goods are cash goods, high money demand (high ν_t) indicates a preference for these goods relative to traded goods. Conversely, when ν_t → 0, low money demand reflects a preference for tradable goods. An increase in ν_t is associated with a higher real exchange rate ϕ_t, since higher money demand increases the relative price of non-traded goods. A higher ν_t means a lower A_t and, by Eq. (18), this implies a higher ϕ_t. The nominal interest rate represents the cost of debt servicing. An increase implies a decrease in the demand for the credit (tradable) good and therefore a higher real exchange rate. From Eq. (17), we deduce that r_t depends upon the nominal foreign interest rates, the financial risk premium and the expected changes in the nominal exchange rate of the domestic currency against the US dollar (or the euro). As for the other variables, C_t has a one-to-one relationship with the value of debt denominated in a foreign currency. Higher debt resulting from greater borrowing allows the greater consumption of traded goods (credit goods), thereby implying an increase in the relative price of this good and therefore a decrease in the real exchange rate. X_t is the band width of the "composite" nominal exchange rate e_t.
Narrowing the band for permissible exchange rate fluctuations can prevent excessive nominal exchange rate appreciation and depreciation. We therefore expect any change in X_t to be negatively correlated with changes in e_t. From Eq. (18), we expect a positive correlation with any change in the real exchange rate. Using Eq. (20b), it can be shown that

D_t = 2(1 + μ_t)(α/γ)C_{Nt}.   (23)

D_t is therefore a function of the price elasticity of the demand for the traded good. Its impact on the real exchange rate is similar to that of the ratio μ_t/(1 + μ_t) examined above.

Solving the model for the exchange rate and foreign reserves

From Eqs. (18) and (19), we obtain

ϕ_t = D_t / (X_t - ΔRES_t + θ_1[λ_1 e^{us}_t + (1 - λ_1)e^{euro}_t]).   (24)

This expression of the real effective exchange rate (ϕ_t in level) explains the choice of the country's anchor regime according to two important variables: a = D_t and b = X_t + θ_1[λ_1 e^{us}_t + (1 - λ_1)e^{euro}_t]. D_t is a function of the monopolistic power in the traded goods market and can be considered a proxy for the degree of trade integration with the rest of the world (see Eq. (23)). A positive value of D_t indicates a relatively high degree of integration, reflected by lower monopolistic power in the traded goods market. Conversely, a low degree of trade integration is captured by a negative value of D_t. Regarding b, the expression in the denominator captures the degree of exchange market intervention. Frequent interventions, suggesting a harder peg, are associated with positive values of b, while infrequent interventions are associated with negative values. Figure 2 shows appreciation or depreciation of the real effective exchange rate with regard to changes in reserves and, thus, the nominal exchange rate. An increase in changes in reserves reflects nominal exchange rate depreciation. The dotted lines show how the curve moves after an increase in b. The upper two cases show countries with high integration (a > 0).
The two lower cases show countries with low integration (a < 0). When countries have low integration with the rest of the world (a < 0), the curve shows that an increase in changes in reserves (or a slowing of the decrease in changes in reserves) would cause appreciation of the real effective exchange rate. As changes in reserves increase, the curve decreases, reflecting the policy mix anchor. Indeed, in a real anchor policy, the monetary authority would depreciate the rate sufficiently to cause real depreciation. This is reflected in the case where countries have high integration with the rest of the world (a > 0). Therefore, the mixed policy case is constrained by the degree of integration and the amount of local monopolistic power. The lower world trade integration and the higher monopolistic power are, the higher the probability that a country chooses a policy mix anchor. Finally, the sign of b does not impact the choice of anchor, but it does impact the degree of the anchor. For a given sign of a, the different signs and values of b are reflected by the same curves. Since b reflects the degree of floating or the hardness of a peg in a given country, this explains why the choice of exchange rate regime does not affect the choice of the policy mix anchor. This confirms what we found in Figure 1, where multiple countries undertake a policy mix disregarding their exchange rate regime. In sum, our model explains that the choice of policy anchor is made separately and not jointly, as suggested in the previous literature (see, for instance, Savvides). Authorities start by choosing the nominal anchor policy according to the country's specificities: the degree of pass-through, money velocity, openness, debt denominated in foreign currency, the interest rate and the targeted exchange rate band.
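The (a, b) discussion above can be sketched as a small classifier. The labels are illustrative readings of the text (a > 0: high integration, low monopolistic power; b > 0: frequent intervention, harder peg), and `classify` is a hypothetical helper with made-up inputs.

```python
# Sketch of the (a, b) classification behind Figure 2, where a = D_t proxies
# trade integration and b = X_t + theta1 * [lam1 * e_us + (1 - lam1) * e_euro]
# proxies the degree of exchange market intervention.

def classify(D, X, theta1, lam1, e_us, e_euro):
    a = D
    b = X + theta1 * (lam1 * e_us + (1.0 - lam1) * e_euro)
    integration = "high integration" if a > 0 else "low integration"
    intervention = "harder peg" if b > 0 else "more floating"
    return a, b, f"{integration}, {intervention}"

a, b, label = classify(D=-1.5, X=0.2, theta1=2.0, lam1=0.6, e_us=1.0, e_euro=1.2)
print(label)  # low integration, harder peg
```

Consistent with the text, the sign of b only grades the hardness of the peg within a given integration case; it does not flip the integration label, which depends on a alone.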
The stronger the impacts of openness, velocity, exchange rate pass-through, debt and the interest rate, the more likely authorities are to choose a harder peg. Once the nominal anchor is set, authorities choose the real anchor with regard to the degree of trade integration and the degree of intervention on the exchange market. In the next section, we explore whether these domestic fundamentals impact exchange rate stability. To do so, we look at which factors are statistically significant in the first regime. For instance, assuming that the coefficients associated with external debt are positive and significant in the first regime implies that an increase in external debt leads to greater exchange rate stability against the anchor currency. Negative coefficients would suggest that a decrease in external debt is associated with a tighter peg. Finally, a non-significant coefficient implies that the variable does not affect exchange rate stability. The estimations are reported in Tables 3 and 4, and the plots of regime probabilities are shown in Figures 3 and 4. When the smoothed probability of state 1 is greater than 0.5, the exchange rate is considered to be in the low flexibility regime (i.e., episodes during which the monetary authority increases exchange rate stability against the USD). First, we compute a likelihood ratio to check whether exchange rate volatility evolves according to a non-linear process. The constrained model is an OLS version of Eq. (25), where all coefficients are linear. The results clearly indicate that exchange rates evolve through two distinct regimes in which the degree of floating differs significantly. This confirms our previous empirical findings of the trade-off between tightening and loosening the nominal currency peg. Moreover, the estimates provide evidence that the influence of the explanatory variables is regime dependent.
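The likelihood ratio test mentioned above, defined in the table notes as LR = 2 × [LL_MS(Θ) - LL_OLS(Θ)], can be sketched with hypothetical log-likelihood values; the critical value shown assumes a chi-square reference distribution with 10 degrees of freedom at the 5% level, purely for illustration.

```python
# Sketch of the LR test comparing the Markov switching model against its
# constrained OLS counterpart. Log-likelihoods are made up; the degrees of
# freedom equal the number of extra parameters in the MS model.

ll_ms, ll_ols = -310.2, -325.7      # hypothetical log-likelihoods
lr = 2.0 * (ll_ms - ll_ols)
critical = 18.31                    # chi-square 5% critical value, 10 df
print(round(lr, 1), lr > critical)  # reject the linear (one-regime) model
```

A rejection, as in this made-up case, supports the two-regime specification in which the degree of floating differs across states.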
We find that the constant terms are lower in the first regime (i.e., α_1 < α_2), implying that regime 1 corresponds to episodes where exchange rate fluctuations against the USD are weaker. Concerning global factors, we find that non-energy price volatility is significant in regime 1 and positively impacts exchange rate volatility for only four countries (Botswana, Cabo Verde, Ghana and South Africa). This means that higher non-energy price volatility strengthens the incentives of these countries to stabilize their exchange rates against the USD. One explanation is that lower exchange rate flexibility allows these countries to reduce transaction costs linked to exchange rate fluctuations. Accordingly, increasing exchange rate stability against the USD allows them to preserve revenues for export firms. For other countries, a non-significant effect could be explained by the fact that the commodities they export are under-weighted in the index (footnote 15). However, we find that energy price index volatility is significant for most of these countries. The negative sign suggests that lower energy price volatility leads to a decrease in exchange rate volatility against the USD. This finding is in line with Cashin et al., who identify co-movement between world commodity prices and exchange rates among commodity-exporting countries. In the same way, when the VIX volatility index decreases, fluctuations of African currencies against the USD are weaker. In regime 2, we find that the index is positive, implying that during episodes of financial stress, exchange rates become more volatile. For the interest rate differential in regime 1, we find a positive and significant coefficient for all countries except Cabo Verde. A possible explanation for this is that when domestic rates increase faster than US rates, domestic borrowing becomes more costly.
This induces a rise in external debt and provides a stronger incentive to stabilize the exchange rate.

INSERT TABLE 3

Regarding domestic factors, we find that many coefficients are significant in regime 1 but non-significant in regime 2. This is strong evidence that domestic factors are important drivers of exchange rate stability, as suggested by our theoretical results. Starting with the external debt-to-GDP ratio, previous studies demonstrate that debt levels would have different effects on countries (see, for instance, Meissner and Oomes). On the one hand, a large body of literature explains that since these countries are severely indebted, an increase in the debt level would increase debt service payments, which exert pressure on budgetary resources and crowd out domestic investment. This causes a decline in economic growth and puts downward pressure on the currency, which eventually increases the external debt burden. Intuitively, the more liabilities are denominated in a foreign currency, the greater is the propensity to peg to that currency. On the other hand, an increase in debt can induce investment productivity, leading to an increase in GDP and currency appreciation. Our results suggest that the first view seems more intuitive in the African context. Indeed, we find that external debt is significant in regime 1 for almost all countries (except Cabo Verde and Sierra Leone). This suggests that any increase in external debt implies that African countries are more likely to peg their currency to the USD. For Burundi, the negative sign can be explained by the fact that external debt has sharply decreased since 2004. This has allowed Burundi to reduce downward pressure on its currency and, thus, reduce exchange rate volatility against the USD.
For the velocity of money, quantitative money theory suggests that any increase in velocity would put some upward pressure on the value of the currency (Agbeyege et al., 2004; Bleaney and Greenaway, 2001). With an increase in the velocity of money, agents will choose to increase their investments and to spend more on consumption, which will lead to higher economic growth and inflationary pressure. For African countries, the results are mixed, as the velocity of money is significant in about half of the cases. One potential explanation is the greater financial development of some African countries, such as Botswana, South Africa and Cabo Verde (IMF, 2016b). Any increase in the velocity of money would positively impact investments, growth and inflation, which would encourage authorities to contain upward pressure on the exchange rate. The more financially developed the country, the greater the impact.

INSERT TABLE 4

For the degree of openness indicator, we expect that the more integrated these countries are with the rest of the world, the more they accumulate foreign reserves (for net exporters), thus putting upward pressure on their currencies. In such a case, authorities face a trade-off between letting the currency appreciate to alleviate the external debt burden (at the expense of the real exchange rate and trade competitiveness) and intervening on the foreign exchange market to stabilize the exchange rate. For net importers, the dilemma is reversed. We observe that the degree of openness significantly explains exchange rate stability for all countries except Liberia and South Africa. In the latter case, this is explained by the high flexibility of the South African exchange rate and limited foreign exchange intervention.
However, a positive and significant coefficient implies that when the degree of openness increases, exchange rate stability increases as well because authorities are more willing to intervene in the foreign exchange market to offset pressure on the domestic currency (Burundi, Cabo Verde, Ethiopia, Ghana, Guinea, Kenya and Madagascar). When the sign is negative, any decrease in the degree of openness contributes to increased exchange rate stability without requiring authorities to intervene in the foreign exchange market. This is particularly true for Guinea and Nigeria, for which the degree of exchange rate flexibility was found to be large. Finally, a higher degree of ERPT would induce higher imported inflation, thus putting downward pressure on the domestic currency. Accordingly, monetary authorities are more likely to intervene in the foreign exchange market to avoid currency depreciation and an increase in external debt. If trade competitiveness prevails, authorities would let the currency depreciate. The latter argument seems consistent for Burundi, Ethiopia, Madagascar and Mauritius, since the degree of ERPT does not impact exchange rate stability. A positive sign implies that monetary authorities will tighten their nominal peg when the degree of ERPT increases to contain inflationary pressure (Botswana, Liberia and Nigeria). For the other countries, the degree of ERPT is negatively linked to exchange rate stability.

Conclusion

In conclusion, when policymakers choose to anchor the exchange rate, it is essential to consider whether they are anchoring real, nominal or both rates. Anchoring the nominal exchange rate occurs by stabilizing variation in reserves in order to 1) respond to current account imbalances, 2) meet foreign liabilities by servicing external debt, and 3) maintain access to foreign borrowing.
Anchoring the real exchange rate through controls of the nominal exchange rate has the objectives of influencing the consumer trade-off between locally produced and imported goods and promoting external competitiveness. Having a policy mix anchor would satisfy these objectives simultaneously while promoting both internal and external stability. Our results suggest that states follow a two-step strategy. Policymakers choose their exchange rate regime (nominal target) with regard to the degree of openness, the ratio of debt to GDP denominated in foreign currencies and the fluctuation margin band of the exchange rate. Then, authorities choose the real anchor according to the extent of trade integration with the rest of the world and the degree of intervention in the exchange market. The impact of the strategy therefore depends on the behavior of the real anchor with regard to the nominal anchor, constrained by the amount of monopolistic power. We find that the strategic behavior of Sub-Saharan African countries is not efficient in terms of reducing external imbalances and that they will always face a trade-off between objectives. The main cause of such behavior is a high degree of monopolistic power that is explained either by diaspora communities controlling some services sectors or by institutions controlling their natural resources. To pursue a more efficient strategy in terms of reducing external imbalances, Sub-Saharan African countries need to create more competitive markets.

where (using the consumer's budget constraint and Eqs. (6) and (7) to compute the demand for money):

e^a_{t+1} B_{t+1} = 0.   (32)

Setting

a = 1/(1 + r_t),  b = 1 + (1 - α)/(1 + r_t),   (33)-(34)

and assuming that the discriminant is null, we have, defining:

A_t = α w_t Y_{Nt} Y_{Tt} a(1 - α)(Y_t - Y_{Nt}) + (1 + a)P_{Xt}X_t + (1 - ab)P_{Nt}Y_{Nt} - m_{t+1} + X_t,   (35)

C_t = B_t - P_{Mt}Y_{Tt}(1 - Y_{Nt})(1 + ab),   (36)

we rewrite

e_t = (A_t + θ_1[λ_1 e^{us}_t + (1 - λ_1)e^{euro}_t]) / (θ_1 - C_t).   (37)
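The null-discriminant assumption used in the derivation above can be illustrated numerically: when a quadratic P e² + Q e + R = 0 has Q² - 4PR = 0, its root is unique and equals -Q/(2P). The coefficients below are made up solely to satisfy that condition.

```python
# The appendix obtains e_t as the root of a quadratic P*e^2 + Q*e + R = 0 and
# imposes a null discriminant (Q^2 - 4*P*R = 0), so the root is unique:
# e_t = -Q / (2P). Coefficients are hypothetical and chosen to meet the
# condition exactly.

P, Q = 2.0, -6.0
R = Q * Q / (4.0 * P)          # chosen so the discriminant is exactly zero

disc = Q * Q - 4.0 * P * R
e_t = -Q / (2.0 * P)

print(disc, e_t)  # 0.0 1.5
```

This is the same mechanism behind Eq. (30): the steady-state nominal rate is the ratio of (minus) the linear coefficient to twice the quadratic coefficient.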
Setting:

D_t = w_t P_{Mt} [(1 - α)/γ] a (Y_t - Y_{Nt}),   (38)

we obtain:

ϕ_t = D_t/e_t = D_t (θ_1 - C_t) / (A_t + θ_1[λ_1 e^{us}_t + (1 - λ_1)e^{euro}_t]),   (39)

where the explicit root e_t is given in Eq. (30) of the appendix.

Figure 2: Policy behavior choices

Table 1: Policy mix and de facto exchange rate arrangements in SSA (IMF, 2016a)

Case I: Nominal depreciation is accompanied by real depreciation. Country promotes external competitiveness over lower cost of foreign debt.
  Floating: Madagascar, Mauritius, Kenya, Uganda, Zambia, Tanzania and Sierra Leone
  Conventional peg: Guinea-Bissau (WAEMU), Central African Rep., Chad, Rep. of Congo, Gabon (CEMAC), Cabo Verde, São Tomé and Eritrea
  Other managed arrangement: Angola, Liberia, Rwanda
  Crawl-like arrangement: Ethiopia
  Stabilized arrangement: Burundi, Nigeria

Case II: Nominal appreciation is accompanied by real depreciation. Country achieves both objectives, simultaneously lowering the cost of debt and promoting external competitiveness.
  Other managed arrangement: Guinea

Case III: Nominal appreciation is accompanied by real appreciation. Country promotes lowering the cost of foreign debt over external competitiveness.
  Conventional peg: Senegal (WAEMU)

Case IV: Nominal depreciation is accompanied by real appreciation. Country disregards both the cost of foreign debt and external competitiveness.
  Floating: Ghana, Malawi, Mozambique, South Africa, Seychelles
  Conventional peg: Benin, Burkina Faso, Ivory Coast, Niger, Togo (WAEMU), Cameroon (CEMAC), Comoros, Lesotho, Namibia and Swaziland
  Crawling peg: Botswana, the Gambia

2.2. Exchange rate policy in SSA: a simple empirical model

Table 2: Variance decomposition of forecast errors as a % of the total variance of African exchange rates. Estimates are carried out for the following sub-samples: 2000:01-2008:09 (sub-sample 1) and 2010:01-2017:10 (sub-sample 2), to reduce the effects of the 2008-2009 financial crisis.
Table 2 columns: 2000:01-2008:09 (sub-sample 1); 2010:01-2017:10 (sub-sample 2).

Table 3: Estimates of regime-dependent correlations among exchange rate, domestic and global factors (sample 1). Columns: Botswana, Burundi, Cabo Verde, Ethiopia, Ghana, Guinea, Kenya.

Notes: *, **, *** denote significance at the 10%, 5% and 1% levels, respectively. The standard errors of the parameters are reported in parentheses (.), while p-values are displayed in brackets [.]. The LR aims to test whether the Markov switching model outperforms the simple linear regression model. The LR test statistic is computed as follows: LR = 2 × [LL_MS(Θ) - LL_OLS(Θ)], where Θ indicates the parameters of the model. The null hypothesis is that the MS model does not fit significantly better than the OLS model. The estimates DEBT, OD, PT, V, VIX, NE, EN and I correspond to the correlations associated with external debt, trade openness, ERPT, velocity of money, the VIX, the non-energy price index, the energy price index, and the interest rate differential, respectively. Blank cells indicate that data are not available.

Table 4: Estimates of regime-dependent correlations among exchange rate, domestic and global factors (sample 2).

Notes: as in Table 3.
Figure 3: Smooth probabilities of being in the low-volatility regime (sample 1)

Table 4 columns: Liberia, Madagascar, Mauritius, Nigeria, Sierra Leone, South Africa.

e_t = λ_1 e^{us}_t + λ_2 e^{euro}_t + λ_t, and r_t = r^{us}_t + (E_t e^{us}_{t+1} - e^{us}_t) + ξ_t. Moreover, from Eq. (13), we obtain

e_t = [ΔRES_t + P_{Xt}X_t + e^a_{t+1}B_{t+1} - r_t B_t] / ([1 - (α/γ)ϕ_t Y_{Nt}]P_{Mt}Y_{Tt}),

with P_t Y_t = P_{Tt}Y_{Tt} + P_{Nt}Y_{Nt}. Changes in foreign reserves are given by the money supply equation:

ΔRES_t = -θ_1(e_t - λ_1 e^{us}_t - (1 - λ_1)e^{euro}_t) + ΔL_t + ΔU_t.

The money demand equation gives:

(1/(1 + r_t)){P_t Y_t - m_{t+1} - m_t} - m_t = (1/(1 + r_t))(1 - α)P_t Y_t - w_t Y_{Nt} - P_{Xt}X_t + (α/γ)ϕ_t Y_{Nt}Y_{Tt}.   (28)-(29)

Solving Eqs. (26b), (27) and (29) through Eq. (31), we obtain a quadratic in e_t:

e_t^2 [(θ_1 - B_t) + P_{Mt}Y_{Tt}(1 - Y_{Nt})(1 + (1/(1 + r_t))(1 + (1 - α)/(1 + r_t)))]
+ e_t [-α w_t Y_{Nt}Y_{Tt}((1 - α)/(1 + r_t))(Y_t - Y_{Nt}) - (1 + 1/(1 + r_t))P_{Xt}X_t + (1/(1 + r_t))(1 + (1 - α)/(1 + r_t))P_{Nt}Y_{Nt} + m_{t+1} - w_t Y_{Nt} - θ_1[λ_1 e^{us}_t + (1 - λ_1)e^{euro}_t] - ΔL_t - ΔU_t]
+ P_{Mt} w_t Y_{Nt}Y_{Tt}((1 - α)/(1 + r_t))(Y_t - Y_{Nt}) = 0.

Solving the system of equations, our steady state is:

e_t = -[-α w_t Y_{Nt}Y_{Tt}((1 - α)/(1 + r_t))(Y_t - Y_{Nt}) - (1 + 1/(1 + r_t))P_{Xt}X_t + (1/(1 + r_t))(1 + (1 - α)/(1 + r_t))P_{Nt}Y_{Nt} + m_{t+1} - w_t Y_{Nt} - θ_1[λ_1 e^{us}_t + (1 - λ_1)e^{euro}_t] - ΔL_t - ΔU_t] / (2[(θ_1 - B_t) + P_{Mt}Y_{Tt}(1 - Y_{Nt})(1 + ab)]),   (30)

e^a_{t+1}B_{t+1} = (1 + r_t)e_t B_t - P_{Xt}X_t + (α/γ)ϕ_t Y_{Nt}Y_{Tt} + (1 - (α/γ)ϕ_t Y_{Nt})e_t P_{Mt}Y_{Tt}(1 + (1 - α)/(1 + r_t)) - m_{t+1} + w_t Y_{Nt}(1 + r_t).   (31)

The country sample includes the following countries: Madagascar, Mauritius, Kenya, Uganda, Zambia, Tanzania, Sierra Leone, Guinea-Bissau, Central African Rep., Chad, Rep. of Congo, Gabon, Cabo Verde, São Tomé and Eritrea, Angola, Liberia, Rwanda, Ethiopia, Burundi, Nigeria, Guinea, Senegal, Ghana, Malawi, Mozambique, South Africa, Seychelles, Benin, Burkina Faso, Ivory Coast, Niger, Togo, Cameroon, Comoros, Lesotho, Namibia and Swaziland, Botswana and Gambia.
Mali, Sudan, Zimbabwe and Equatorial Guinea are excluded due to missing data.

We include the CNY given China's increasing share of total African trade and financial transactions. This causal ordering reflects their level of exogeneity, with the assumption that the USD is exogenous to contemporaneous shocks on the CNY. This allows us to avoid simultaneity bias, since the CNY and African currencies can be simultaneously affected by the USD. Note that alternative causal orderings have been used, with Δe^{EUR} exogenous to Δe^{USD}. These estimation results, which are not presented here but are available upon request, do not lead to different conclusions. These innovations are orthogonalized using the Cholesky decomposition, while the weight computations are done through VAR-based variance decomposition of forecast errors.

We use monthly data because higher-frequency data are not available for all countries over the study period. Moreover, we use the Swiss franc (CHF) as an independent numeraire to measure exchange rate movements. Special Drawing Rights (SDR) and the Australian dollar have also been considered, but no significant changes in the results have been observed.

To avoid collinearity issues with the euro, we exclude from the sample the African countries belonging to the following African monetary unions: the West African Economic and Monetary Union (WAEMU), the Economic and Monetary Community of Central Africa (EMCCA) and the Common Monetary Area (CMA). For Namibia, Lesotho and Swaziland (CMA), which are pegged to the South African rand, the results are logically identical to those of South Africa.

In what follows, we shall use the terms "non-tradable" and "non-traded" for locally produced goods and "tradable" or "traded" for imported goods. In the model, consumption includes both private and public consumption, which means that we do not distinguish the government from the households.
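The weight computation described in the footnotes above (Cholesky-orthogonalized innovations, then a VAR-based forecast-error variance decomposition) can be sketched on synthetic data. The snippet fits a VAR(1) by OLS and decomposes the African currency's forecast-error variance by shock; the ordering here is USD, EUR, then the African currency, and the CNY is omitted for brevity. All data and loadings are made up.

```python
# Sketch of the weight computation behind Table 2: fit a VAR on exchange rate
# returns (measured against an independent numeraire), orthogonalize the
# innovations with a Cholesky decomposition in the causal ordering
# USD -> EUR -> AFC, and read anchor weights off the forecast-error variance
# decomposition of the African currency. Data are synthetic.

import numpy as np

rng = np.random.default_rng(0)
T, k = 500, 3                       # observations, variables (USD, EUR, AFC)

# Synthetic returns: the African currency loads heavily on the USD shock.
shocks = rng.normal(size=(T, k))
usd = shocks[:, 0]
eur = 0.3 * shocks[:, 0] + shocks[:, 1]
afc = 0.9 * shocks[:, 0] + 0.2 * shocks[:, 1] + 0.3 * shocks[:, 2]
Y = np.column_stack([usd, eur, afc])

# Fit a VAR(1) by OLS: Y_t = c + A Y_{t-1} + u_t.
X = np.column_stack([np.ones(T - 1), Y[:-1]])
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
A = B[1:].T                                   # k x k coefficient matrix
U = Y[1:] - X @ B                             # residuals
Sigma = U.T @ U / (T - 1 - (k + 1))           # residual covariance
P = np.linalg.cholesky(Sigma)                 # orthogonalization

# FEVD at horizon H: the MA coefficients of a VAR(1) are Psi_h = A^h.
H = 10
contrib = np.zeros((k, k))
Psi = np.eye(k)
for _ in range(H):
    theta = Psi @ P
    contrib += theta ** 2
    Psi = A @ Psi

weights = contrib[2] / contrib[2].sum()       # decomposition of the AFC row
print(np.round(weights, 2))                   # shares of USD, EUR, own shocks
```

By construction, most of the simulated African currency's variance is attributed to the USD shock, which is the kind of pattern the paper reads as a de facto dollar anchor.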
An agent's debt is therefore equivalent to the domestic country's sovereign debt. We choose a logarithmic function for consumption for convenience. For instance, more than 36% of Liberia's exports are composed of iron ore and rubber, but these two commodities represent less than 10% of the index. Regime 1 (s t = 1): Constant 1 -0,117*** -0,115*** -0,146*** -0,174*** -0,116*** -0,201*** -0,063*** (0,040) (0,018) (0,023) (0,016) (0,019) (0,022) (0,016) DEBT 1 0,296** -0,131*** -0,075 -0,405*** 0,069** 0,071** 0,144*** (0,128) (0,044) (0,055) (0,062) (0,034) (0,029) (0,036) OD 1 -0,041** 0,1*** 0,05*** 0,096*** 0,06** -0,053** 0,052** (0,017) (0,038) (0,012) (0,026) (0,028) (0,021) (0,023) PT 1 2,25** 0,048 -0,385*** -1,355*** -0,547 0,599* (1,141) (0,336) (0,107) (0,259) (0,447) (0,389) V 1 0,188** 0,223* 0,396*** 0,049 -0,078 (0,093) (0,115) (0,084) (0,090) (0,049) V IX 1 -0,295* 0,036 -0,075 -0,14 -0,485** -0,521*** -0,087 (0,226) (0,058) (0,063) (0,096) (0,201) (0,142) (0,087) NE 1 0,424*** 0,066 0,924*** 0,099 0,304** 0,11 -0,001 (0,117) (0,263) (0,175) (0,192) (0,148) (0,127) (0,090) EN 1 -0,068* 0,072 -0,097 -0,487*** 0,121 -0,22 0,121 (0,173) (0,151) (0,115) (0,095) (0,175) (0,141) (0,093) I 1 0,513* 3,577*** -0,137 (0,759) (1,107) (0,610) Regime 2 (s t = 2): Constant 2 -0,079*** -0,084*** -0,075*** -0,107*** -0,07*** -0,051*** -0,038** (0,012) (0,007) (0,011) (0,007) 51* 0,797*** 0,053 0,002 -0,11 -0,119 (0,600) (0,227) (0,107) (0,117) (0,298) (0,333) V 2 0,004* -0,023 -0,034 0,017 0,13** (0,057) (0,020) (0,028) (0,036) (0,057) V IX 2 0,031*** 0,87*** 0,62** 0,412** 0,01 -0,012 0,256** (0,087) (0,294) (0,297) (0,163) (0,084) (0,065) (0,111) NE 2 -0,057 -0,004 0,291*** -0,045 0,011 -0,094 -0,084 (0,110) (0,059) (0,072) (0,058) (0,069) (0,073) (0,123) EN 2 0,046 0,076 0,008 0,056 0,169** 0,004 0,243* (0,099) (0,063) (0,069) (0,087) (0,082) (0,063) (0,134) I 2 -2,716*** 0,015 1,061* (1,045) (0,119) (0,599) Common: σ -4,155*** -4,279*** -4,138*** -4,084*** 
-4,152*** -4,232*** -4,238*** (0,094) (0,072) (0,069) (0,080) (0,063) (0,078) (0,087) Transition matrix θ 11 -13,712*** -19,484 0,3 -14,333** -1,54* -0,453 -0,284 (4,134) (603,393) (0,655) (6,111) (0,835) (0,707) (0,482) θ 21 -0,979* -1,76*** -2,738*** -1,907*** -1,656*** -1,641*** 0,792 (0,695) (0,329) (0,458) (0,343) (0,451) (0,390) (0,754) Regime 1 (s t = 1): Constant 1 -0,139*** -0,126*** -0,133*** -0,143*** -0,207*** -0,129*** (0,019) (0,016) (0,014) (0,037) (0,019) (0,017) DEBT 1 0,026*** 0,221*** -0,027 0,568*** 0,008 0,109* (0,007) (0,055) (0,021) (0,095) (0,020) (0,060) OD 1 -0,002 0,055*** -0,033 -0,408*** 0,02* 0,131 (0,001) (0,013) (0,027) (0,128) (0,011) (0,096) PT 1 2,78*** 1,505 -0,68 3,069*** -0,168*** -3,253*** (0,992) (1,276) (0,577) (1,160) (0,062) (1,172) V 1 -0,099 0,077 0,164 0,405*** 0,26*** (0,091) (0,049) (0,122) (0,124) (0,068) V IX 1 -0,302 -0,312*** -0,021 -0,153 -0,448* -0,788*** (0,202) (0,116) (0,071) (0,175) (0,232) (0,166) NE 1 0,237 0,157 -0,144 0,271 -0,337 0,478*** (0,191) (0,176) (0,138) (0,460) (0,240) (0,133) EN 1 -0,542** -0,212** -0,192** -0,527*** -0,471** -0,383*** (0,232) (0,101) (0,093) (0,183) (0,197) (0,124) I 1 1,134** 0,938 1,82*** 3,621*** (0,518) (0,759) (0,399) (1,931) Regime 2 (s t = 2): Constant 2 -0,09*** -0,082*** -0,095*** -0,1*** -0,081*** -0,071*** (0,009) (0,013) (0,009) (0,010) (0,008) (0,010) DEBT 2 -0,003 0,032*** 0,009 0,019 -0,02* 0,04 (0,002) (0,012) (0,016) (0,034) (0,011) (0,039) OD 2 0,001 0,006 0,015** -0,001 0,013 0,013 (0,001) (0,013) (0,006) (0,042) (0,009) (0,022) PT 2 -0,017 -1,246** -0,362 -0,498 -0,074** 0,465 (0,190) (0,610) (0,301) (0,381) (0,034) (0,667) V 2 0,066 0,039 0,007 0,004 -0,054 (0,058) (0,035) (0,022) (0,027) (0,075) V IX 2 0,009 -0,003 0,362** 0,043 0,053 0,07 (0,074) (0,086) (0,167) (0,101) (0,079) (0,052) NE 2 -0,018 0,048 -0,023 -0,013 0,048 -0,045 (0,067) (0,083) (0,075) (0,094) (0,068) (0,055) EN 2 -0,014 -0,034 0,01 -0,055 0,006 -0,028 (0,069) (0,095) (0,086) (0,095) 
(0,068) (0,055)
I 2: 0,601* -0,081 0,317 -0,151 (0,364) (0,217) (0,374) (0,614)
Common: σ -4,107*** -4,3*** -4,172*** -3,912*** -4,234*** -4,276*** (0,062) (0,125) (0,073) (0,103) (0,068) (0,081)
Transition matrix:
θ 11: -0,288 -0,455 1,487*** -1,932 -0,905 -14,83*** (0,919) (0,703) (0,487) (2,459) (0,777) (1,241)
θ 21: -2,275*** -1,543*** -2,266*** -1,969*** -1,661*** -2,026*** (0,555) (0,518) (0,463) (0,629) (0,424) (0,628)

Determinants of SSA currency pegs

In the last stage, we empirically explore whether the domestic factors that we highlighted in the theoretical model provide incentives for African countries to tighten their nominal pegs. Our aim is to explain the way the African countries peg their currencies. We focus on a sample of 13 African countries: Botswana, Burundi, Cabo Verde, Ethiopia, Ghana, Guinea, Kenya, Liberia, Madagascar, Mauritius, Nigeria, Sierra Leone and South Africa. Our choice is based on data availability and quality. We consider the following Markov switching (MS) model augmented with a set of economic variables:

σe^{AFC}_t = α_{s_t} + β'_{s_t} X_{t-1} + ε_t,  ε_t ∼ N(0, σ²),   (25)

The estimation procedure is detailed in Appendix (B). The estimates are carried out for the period January 2004-November 2016. The dependent variable is defined as the exchange rate volatility between the African currency and the main anchor currency. Note that the latter corresponds to the currency with the higher estimated weight according to the estimation procedure described in Section 2.2 (in most cases, the US dollar). The factors that are likely to affect the degree of the nominal anchor are divided into domestic and global factors. The variables vel, deb, pt, do are the domestic factors and correspond to the velocity of money (computed as the ratio between base money and GDP), the total external debt-to-GDP ratio, the evolving degree of exchange rate pass-through (ERPT), and the evolving degree of trade openness (computed as the ratio between GDP and total trade), respectively.
We take the first difference of these five variables to ensure stationarity. Moreover, all exogenous variables are lagged by one period to avoid endogeneity issues. All data are extracted from the World Bank, IMF DOTS and IMF IFS databases. Moreover, we control for the effects of global factors: energy and non-energy price volatility (World Bank Commodity Price Data), the CBOE Volatility Index (VIX) and the interest rate differential (computed as the difference between the policy rates of the domestic country and the United States), which are denoted by en, nen, vix and i, respectively. We use the log of squared returns of energy and non-energy prices as a proxy for volatility. We take advantage of the flexible parametric structure of the MS model to allow exchange rate volatility to evolve across two distinct regimes (pegging versus floating). Under the condition that α_1 < α_2, the first regime is defined by low flexibility against the anchor currency; the second, by high flexibility. Moreover, the correlation coefficients are regime dependent, implying that the influence of economic factors differs across exchange rate regimes. The main advantage is that this allows us to identify the determinants of the decision to tighten a currency peg. In this view, we need to look at which factors are statistically significant in the first regime.

11. We exclude countries belonging to monetary unions, since the degree of exchange rate flexibility is zero.
12. As a proxy for volatility, we take the log volatility computed as the log of squared returns of the exchange rates.
13. We estimate a dynamic regression model for the ERPT to African consumer prices. We follow the literature by augmenting the bivariate relationship between the NEER and domestic inflation with trade-weighted foreign prices and domestic real GDP. We take the year-to-year differences of the variables expressed in logarithmic terms.
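The variable construction described above (log of squared returns as the volatility proxy, first differences of the domestic factors, one-period lags of all regressors) can be sketched as follows; the data are synthetic and the single regressor stands in for the full set.

```python
# Sketch of the variable construction: log volatility as the log of squared
# exchange rate returns, first differences of the domestic factors, and a
# one-period lag of all regressors. Data are synthetic.

import numpy as np

rng = np.random.default_rng(1)
T = 120
rate = np.cumprod(1.0 + 0.01 * rng.normal(size=T))   # synthetic exchange rate
debt = np.cumsum(0.1 * rng.normal(size=T)) + 50.0    # synthetic debt-to-GDP

returns = np.diff(np.log(rate))
log_vol = np.log(returns ** 2)          # dependent variable: log volatility
d_debt = np.diff(debt)                  # first difference for stationarity

# Align: regress log_vol[t] on regressors lagged one period.
y = log_vol[1:]
x = d_debt[:-1]
print(y.shape, x.shape)                 # equal lengths after alignment
```

The alignment step is what implements the one-period lag: observation t of the dependent variable is paired with observation t-1 of each regressor.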
Rolling estimation of the ERPT is performed for each African country over the period from March 1998 to January 2016 with a window size of 72 observations and a step size equal to one. Accordingly, for each African country, we obtain 155 coefficients that represent the dynamic ERPT. The NEER and trade-weighted foreign prices are computed with trade data from the IMF DOTS database. The results, which are not presented here, are available upon request.
14 We favor time series over panel data for two main reasons. First, we may think that the role of economic determinants can vary from one country to another. Detecting this heterogeneity is important given the prospects for deeper monetary and financial integration in Africa. Second, the MS model allows us to discriminate between two regimes according to different time series properties (here, the mean of the volatility) and then to observe specific schemes or patterns in a given regime.

Appendix A: Solving for nominal and real exchange rates

The export sector is an enclave in the sense that the domestic country is assumed to export minerals, natural resources, oil, cocoa, coffee, etc., directly to foreign countries. Equilibrium in the non-tradable and tradable markets implies a set of equalities. Using one of these equalities and the expressions in (6), one finds the expression of the real exchange rate.

Appendix B

This appendix briefly details the estimation procedure for the general Markov switching model. The maximum likelihood method is employed to provide estimates of the parameters, and the BFGS algorithm is used to perform the non-linear optimization. The parameters of Eq. (25) depend upon a hidden variable s_t ∈ {1, 2} representing a particular state of exchange rate volatility. Since the states are unobservable, the inference of s_t takes the form of a probability given observations on the volatility series σe_t^AFC.
The state-generating process is an ergodic two-regime Markov chain of order 1 with transition probabilities P_ij(z_t) = P(s_t = i | s_{t-1} = j, z_t). The conditional likelihood function for the observed data is defined over ξ_t = (y_t, y_{t-1}, ..., y_1) and Ω_t = (X_t, X_{t-1}, ..., X_1), the vectors containing observations through date t, with Θ the vector of model parameters. Considering the normality assumption, the regime-dependent densities are Gaussian, where Φ is the standard logistic cumulative distribution function and φ is the standard normal probability density function. The model is estimated using a maximum likelihood estimator for mixtures of Gaussian distributions, which provides efficient and consistent estimates under the normality assumption (see, e.g., [START_REF] Kim | Estimation of Markov regime-switching regression models with endogenous switching[END_REF]). Applying Bayes' rule, the weighting probabilities are computed recursively:

P(s_t = i, s_{t-1} = j | Ω_t, ξ_{t-1}; Θ) = P(s_t = i, s_{t-1} = j | z_t; Θ) P(s_{t-1} = j | Ω_t, ξ_{t-1}; Θ) = P_ij(z_t) P(s_{t-1} = j | Ω_t, ξ_{t-1}; Θ),

P(s_t = i | Ω_{t+1}, ξ_t; Θ) = Σ_j f(y_t | s_t = i, s_{t-1} = j, Ω_t, ξ_{t-1}; Θ) P(s_t = i, s_{t-1} = j | Ω_t, ξ_{t-1}; Θ) / f(y_t | Ω_t, ξ_{t-1}; Θ).  (46)
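The recursion above can be sketched numerically. The following is a minimal illustration, assuming a two-regime Gaussian model with constant transition probabilities (the paper lets the P_ij depend on z_t through a logistic link; holding them constant keeps the sketch short). Function and variable names are ours, not the authors'.

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a 2-regime Gaussian MS model.

    y     : (T,) observed series (e.g. log exchange-rate volatility)
    mu    : (2,) regime-dependent means
    sigma : (2,) regime-dependent standard deviations
    P     : (2, 2) transition matrix, P[j, i] = P(s_t = i | s_{t-1} = j)
    """
    T = len(y)
    xi = np.empty((T, 2))            # P(s_t = i | information up to t)
    prob = np.array([0.5, 0.5])      # flat prior over the two regimes
    loglik = 0.0
    for t in range(T):
        # regime-dependent Gaussian densities f(y_t | s_t = i)
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        # prediction step: P(s_t = i | info up to t-1) = sum_j P[j, i] * prob_j
        pred = prob @ P
        # update step (Bayes' rule, as in the recursion ending with Eq. (46))
        joint = dens * pred
        f_y = joint.sum()
        prob = joint / f_y
        xi[t] = prob
        loglik += np.log(f_y)
    return xi, loglik
```

Maximizing `loglik` over (mu, sigma, P) with a numerical optimizer such as BFGS mirrors the estimation strategy described in the appendix.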
2000
https://insep.hal.science//hal-01757060/file/127-%20Local%20muscular%20fatigue%20and.pdf
Local muscular fatigue and attentional processes in a fencing task

Marie-Françoise Devienne 1, Michel Audiffren 2, Hubert Ripoll 3, Jean-François Stein 1
1 Mission Recherche, INSEP Paris, France
2 Faculté des Sciences du Sport, Université de Poitiers, France
3 Faculté des Sciences du Sport, Université de la Méditerranée, France

Summary

Study of the effects of brief exercise on mental processes by [START_REF] Tomporowski | Effects of exercise on cognitive processes: a review[END_REF] has shown that moderate muscular tension improves cognitive performance while low or high tension does not. Improvements in performance induced by exercise are commonly associated with an increase in arousal, while impairments are generally attributed to the effects of muscular or central fatigue. To test two hypotheses, namely (1) that submaximal muscular exercise would decrease premotor time and increase motor time in a subsequent choice-RT task and (2) that submaximal muscular exercise would increase the attentional and preparatory effects observed in premotor time, 9 men aged 20 to 30 years performed an isometric test at 50% of their maximum voluntary contraction between blocks of a 3-choice reaction-time fencing task. Analysis showed that (1) physical exercise did not improve postexercise premotor time, (2) muscular fatigue induced by isometric contractions did not increase motor time, and (3) there was no effect of exercise on the attentional and preparatory processes involved in the postexercise choice-RT task. The invalidation of the hypotheses was mainly explained by the disparity in directional effects across subjects and by the use of an exercise that was not really fatiguing.
Studies of the effects of brief exercise on mental processes have shown that moderate muscular exercise improves cognitive performance while low or high muscular exercise neither improves nor impairs it (for a review, see [START_REF] Tomporowski | Effects of exercise on cognitive processes: a review[END_REF]; Brisswalter & Legros, 1996). Improvement is commonly associated with increased arousal and activation, while impairment is generally attributed to the effects of muscular or central fatigue. The present aim was to study the influence of repeated isometric contractions on cognitive performance. To distinguish the effects of local muscular fatigue from those of increased arousal and activation, we used the fractionated reaction-time technique [START_REF] Botwinick | Premotor and motor components of reaction time[END_REF], which splits reaction time into a premotor component (from stimulus onset to EMG onset) and a motor component (from EMG onset to movement onset). According to the discrete stage model of [START_REF] Sternberg | The discover of processing stages: extensions of Donders' method[END_REF], we hypothesized that a factor which selectively influences the central stages of information processing would affect premotor time but not motor time. Thus, increased arousal induced by physical exercise would decrease premotor time without affecting motor time. In other respects, [START_REF] Stull | Effects of variable fatigue levels on reaction-time components[END_REF] showed that motor time increased with fatigue. Thus, muscular fatigue should negatively influence motor time without affecting premotor time. In addition, we hypothesized that an increase in arousal and activation induced by exercise enhances the orienting of visual attention and the motor preparation involved in a priming procedure [START_REF] Posner | Orienting of attention[END_REF][START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF].
In this procedure, information (a prime) related to the forthcoming location of a response signal and to the forthcoming movement is presented to the subject before the response signal. This information allows the subject to set, during the preparatory period, visual attention and motor preparation processes with the aim of reducing reaction time. The preliminary information can be valid or invalid; that is, it can provide exact or erroneous information on the location of the response signal and on the parameter values of the forthcoming movement. The difference between invalid and valid primes has been described in terms of the mental operations involving engagement, disengagement, switching, and reengagement of attention [START_REF] Posner | Orienting of attention[END_REF] or the programming, deprogramming, and reprogramming of movement [START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF]. We expected that an increase in arousal and activation would increase the availability of attentional and preparatory resources, and hence the differences between performances recorded for valid and invalid priming.

Method

Nine male volunteers (M age = 25 yr., SD = 3.8) participated. Subjects confronted a 3-choice reaction time task which consisted of reaching a target with a sword when a light-emitting diode was illuminated. The task was executed using ARVIMEX [START_REF] Nougier | Covert orienting of attention and motor preparation processes as a factor of success in fencing[END_REF], which analyzes visuomotor reactions in fencing. The fatigue of the triceps brachii was measured by the median frequency obtained from electromyography [START_REF] De Luca | Myoelectrical manifestations of localized muscular fatigue in humans[END_REF]. The experiment was composed of a baseline and an exercise session. The baseline session was composed of five blocks of 90 trials with a rest period between blocks.
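The median-frequency fatigue index mentioned in the Method can be illustrated with a short sketch (ours, not the authors' analysis code): the median frequency is the frequency that splits the EMG power spectrum into two halves of equal power, and a downward shift of this index across repeated contractions is the classical electromyographic sign of local muscular fatigue.

```python
import numpy as np

def median_frequency(emg, fs):
    """Median frequency (Hz) of the power spectrum of an EMG epoch.

    emg : 1-D array of samples; fs : sampling frequency in Hz.
    """
    emg = np.asarray(emg, dtype=float)
    emg = emg - emg.mean()                         # remove DC offset
    power = np.abs(np.fft.rfft(emg)) ** 2          # one-sided power spectrum
    freqs = np.fft.rfftfreq(emg.size, d=1.0 / fs)
    cumulative = np.cumsum(power)
    # first frequency bin below which half of the total spectral power lies
    idx = int(np.searchsorted(cumulative, cumulative[-1] / 2.0))
    return freqs[idx]
```

Computed on successive contraction epochs, a fatiguing protocol should make this index drift toward lower frequencies; the present paper reports that this shift was not significant.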
The exercise session was composed of five blocks of 90 trials with an isometric contraction between blocks, performed at 50% of the maximal voluntary contraction. The order of the two sessions was randomized. To obtain attentional effects, each block of 90 trials included 24 trials with a valid prime and six trials with an invalid prime for each of the three targets.

Results and discussion

We predicted that physical exercise would induce two different effects on the choice-reaction time task. On the one hand, muscular fatigue should impair motor time without affecting premotor time. On the other hand, an increase in arousal and activation induced by physical exercise should decrease premotor time without affecting motor time. Together, these two effects should decrease premotor time and increase motor time. The shift of the median frequency toward low frequencies was not significant, although the time to exhaustion in a 50% isometric contraction of the triceps brachii decreased significantly between the pre- (M = 86 sec., SD = 26) and posttest conditions (M = 61 sec., SD = 14). Contrary to our expectations and to the results of a previous study [START_REF] Stull | Effects of variable fatigue levels on reaction-time components[END_REF], no deterioration in motor time was observed in the last block of trials after the isometric contraction. As the sample was composed of well-conditioned athletes, the 50% isometric task was probably not severe enough to induce marked fatigue. Studies of the effect of brief exercise on mental processes have shown that moderate muscular tension improves cognitive performance whereas high or low tension has no effect [START_REF] Tomporowski | Effects of exercise on cognitive processes: a review[END_REF]. It is important to emphasize that in all these studies the mental tasks were carried out simultaneously with the muscular contractions.
The only study of postexercise effects showed an increase in visual threshold after an isometric contraction (Krus, Wapner, & Werner, 1958). Despite the choice of a moderate physical exercise, we observed no variation in performance on the reaction-time task. This lack of a positive effect on premotor time could reflect greater variability in the direction of the effect across subjects or a lack of effect for all subjects. An analysis of individual records showed that during the first trial block, premotor time increased for six subjects whereas it decreased for three subjects. Such an analysis clearly shows that the lack of a significant effect of exercise reflected between-subjects variability. Perhaps subjects' initial arousal varied with their different personality traits, motivation, or the time of day at which they participated. In accordance with [START_REF] Posner | Orienting of attention[END_REF] and [START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF], voluntary orienting of attention and preparation induced by a valid prime before the response signal facilitated subjects' reaction time, premotor time, and movement time (F 1,8 = 83.2, 12.5, and 55.6, respectively). This result can be interpreted in two ways. From Posner's perspective (1980), the engagement of attention during the stimulus onset asynchrony, given a valid prime, decreases reaction time; given an invalid prime, the disengagement, switching, and reengagement of attention increases reaction time. In contrast, [START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF] would argue that the programming of movement during the foreperiod, given a valid prime, facilitates reaction time, whereas the deprogramming and reprogramming of movement that take place, given an invalid prime, impair reaction time. The priming effect did not interact with block or session.
Consequently, our second hypothesis, that a submaximal muscular exercise would increase the attentional and preparatory effects observed in premotor time, was not validated. As expected, motor time, which reflects the muscular electromechanical transduction time, was not affected by the nature of the prime.
2018
https://univ-lyon1.hal.science/hal-01757186/file/TiTiC_COMETTi_insituXRD_v20.pdf
Synthesis of Ti matrix composites reinforced with TiC particles: in situ synchrotron X-ray diffraction and modeling

Jérôme Andrieux (email: [email protected]), Bruno Gardiola, Olivier Dezellus

Keywords: Metal matrix composites (MMCs), Solid state reaction, Interface interdiffusion, Synchrotron diffraction, Theory and modeling (kinetics, transport, diffusion)

Introduction

The high specific mechanical properties of Metal Matrix Composites (MMCs) have led to much consideration over the past decades for structural lightening in aerospace applications or wear resistance in car brakes. Among the different materials used as reinforcement for titanium-based MMCs, titanium carbide (TiC) was selected for its excellent chemical compatibility with the matrix [START_REF] Clyne | An introduction to Metal Matrix Composites[END_REF][START_REF] Vk | Recent advances in metal matrix composites[END_REF][START_REF] Miracle | Metal matrix composites -From science to technological significance[END_REF][START_REF] Huang | In situ TiC particles reinforced Ti6Al4V matrix composite with a network reinforcement architecture[END_REF]. The powder metallurgy route, which is widely used to produce Ti-based MMCs [START_REF] Liu | Design of powder metallurgy titanium alloys and composites[END_REF], leads to the preparation of green compacts where Ti and stoichiometric TiC are not in thermodynamic equilibrium before the consolidation of the MMC. During the heating step, an interfacial reaction between Ti and TiC occurs, as already shown and studied in the literature [START_REF] Quinn | Solid-State Reaction Between Titanium Carbide and Titanium Metal[END_REF][START_REF] Wanjara | Evidence for stable stoichiometric Ti2C at the interface in TiC particulate reinforced Ti alloy composites[END_REF].
More precisely, a preceding paper gave details of the general scenario of chemical interaction between the Ti matrix and the TiC particles during the consolidation step at high temperature [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. Four key steps were proposed, starting with dissolution of the smallest particles to reach C saturation in the Ti matrix, followed by a change in the TiC stoichiometry in order to reach thermodynamic equilibrium between matrix and reinforcement following reaction (1):

y TiC + (1-y) Ti -> TiC_y    (1)

According to the assessment of the C-Ti binary system by Dumitrescu et al., the phase equilibrium is characterized by a y value ranging from 0.55 to 0.595 at 1000°C and 600°C, respectively [START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF]. Reaction 1 corresponds to the incorporation of Ti from the metallic matrix into the carbide phase. The reaction is therefore associated with two major phenomena: an increase in the total amount of reinforcement in the composite material and an increase in the mean particle radius by a factor of 1.19 [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. As a consequence, the average distance between the particles decreases during the course of the reaction and contact between individual particles occurs in the most reinforced domains of the MMC.

(To cite this paper: J. Andrieux, B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X-ray diffraction and modeling, Journal of Materials Science, 2018, accepted for publication, doi 10.1007/s10853-018-2258-8.)
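The growth factor of 1.19 quoted for reaction (1) can be recovered from carbon conservation in the carbide: each formula unit of TiC0.96 supplies enough carbon for 0.96/0.57 unit cells of TiC0.57, and, at a nearly unchanged lattice parameter, the particle radius scales as the cube root of that ratio. A short check, using only numerical values given in the text:

```python
# Check of the particle growth factor for reaction (1): y TiC + (1-y) Ti -> TiC_y.
# Carbon is conserved inside the carbide, so the number of carbide unit cells
# grows by y_initial / y_final when the stoichiometry drops from 0.96 to 0.57.
y_initial, y_final = 0.96, 0.57
a_initial, a_final = 4.330, 4.314      # room-temperature lattice parameters (angstrom)

growth = (y_initial / y_final) ** (1.0 / 3.0)        # constant-lattice estimate
growth_corrected = growth * (a_final / a_initial)    # with the small lattice shrinkage

print(round(growth, 3))             # 1.19, the factor quoted in the text
print(round(growth_corrected, 3))   # barely changed by the ~0.4% lattice contraction
```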
This initiates an aggregation phenomenon that is responsible for the high growth rate of particles observed for heat treatment times shorter than 1 h. Finally, Ostwald ripening is responsible for the change in particle size for longer heat treatment times [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. The present work is focused on the first two steps, i.e. C saturation of the Ti matrix and the change in TiC stoichiometry, through which the Ti-TiC composite material tends towards its thermodynamic equilibrium. The main objectives are an experimental determination of the kinetics of these first two steps, supported by modeling of the diffusion phenomena occurring at the interface between a particle and the matrix. For this purpose, the reaction was investigated by in-situ synchrotron X-ray Diffraction (XRD) with a high-energy monochromatic X-ray beam at the European Synchrotron Radiation Facility (ESRF - ID15B). Nowadays, it is quite common in metallurgy to perform simulations of phase formation and transformations by using software packages such as DICTRA [START_REF] Andersson | Thermo-Calc & DICTRA, computational tools for materials science[END_REF]. Such calculations require the use of two databases. The first is a thermodynamic database that can provide a good description of the system, from the point of view of both phase equilibria and the energetic description of the different phases of the system. The second is a mobility database that is needed to calculate the diffusion coefficients inside the main phases of the system. To the authors' knowledge, the same approach has not been used for the synthesis of Metal Matrix Composites, despite the great success of such calculations in describing metallurgical processes and the fact that the two types of materials can be considered as very similar.
In the present study, the change in the Ti matrix and its TiC reinforcement over several heat treatment periods is simulated with the DICTRA package in order to compare the results with the experimental study by in-situ X-ray diffraction. As usual, if the simulations are able to reproduce experimental results obtained in well-controlled conditions, then they can also be used to extrapolate and predict microstructure changes in much more severe conditions.

Experimental procedures

Sample preparation

The starting materials used to prepare powder compacts were commercial TiC 0.96 and Ti powders (see Tables 1 to 3 for their compositions). The small cylinders of powder compacts were inserted under argon in a graphite crucible previously filled with a layer of Ti powder that was used as a getter for gaseous impurities such as oxygen or nitrogen. The graphite crucible was heated by an induction coil under vacuum (~10 -4 mbar) using an induction furnace available at ESRF through the Sample Environment Support Service (SESS). The temperature was measured and controlled using a K-type thermocouple placed in the crucible and connected to a 2408i Eurotherm controller. In step mode, the heating rate was measured at 60 K.s -1 with a temperature stabilization of ±1 K at 1273 K (1000°C). The starting time of the experiment, t=0, corresponds to the beginning of the heating step.

Synchrotron XRD

Transmission X-Ray Diffraction (XRD) experiments were performed on beamline ID15B at ESRF (Grenoble) using a high-energy monochromatic X-ray beam (300x300 µm 2 ) with a wavelength of 0.14370 Å (~86.3 keV) and a ∆E/E~10 -4 . To follow small variations in the diffraction peak position and improve the angular resolution (∆(2θ)=0.003 °(2θ)), the sample-detector distance was increased to 4.358 meters, leading to a cell parameter resolution of ∆a=0.002 Å.
In addition, the detector was off-centered by 200 mm with respect to the direct beam to select a 2θ range from 2.8 to 5°(2θ) and to focus the acquisition on the main TiC diffraction peaks, i.e. the (111) peak at 3.295°2θ and the (200) peak at 3.806°2θ. Continuous acquisitions were performed using a MAR345 detector with an exposure time of 90 s. The diffraction rings were radially integrated using the Fit2D software [START_REF] Hammersley | Two-dimensional detector software: From real detector to idealised image or two-theta scan[END_REF]. An example of a diffraction pattern collected on the sample at room temperature, with peak identification, is given in Figure 1. The structural study was performed by sequential Rietveld refinement using the FullProf Suite software with the WinPlotr graphical interface [START_REF] Roisnel | WinPLOTR: A windows tool for powder diffraction pattern analysis[END_REF][START_REF] Rodríguez-Carvajal | Recent advances in magnetic structure determination by neutron powder diffraction[END_REF]. Profile function 7 in FullProf, a pseudo-Voigt function with axial divergence correction, was used. The instrument resolution file was defined using LaB 6 as a standard. Owing to the reduced 2θ range of acquisition imposed by the experimental configuration and to the aim of the present paper, the Rietveld refinement was only performed on the TiC diffraction peaks. For each refinement process, the refinement weighting factors were found at maximum R p <15%, R wp <20%, R exp <10%, χ 2 <5. More details on the sequential Rietveld refinement procedure are given in Section 3.1.

Experimental Results

Rietveld refinement

Titanium carbide has a halite structure (two face-centered cubic (fcc) sublattices with Ti and C, respectively).
It displays deviations in stoichiometry over a wide range of homogeneity, from TiC 0.47 to TiC 0.99 , because of the presence of vacancies in the carbon sublattice [START_REF] Seifert | Thermodynamic optimization of the Ti-C system[END_REF]. Consequently, this phase is usually modeled in Calphad assessments by using the compound energy formalism with two sublattices having the same number of sites, one totally filled by Ti atoms, the other filled by a random mixture of C atoms and vacancies [START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF][START_REF] Jonsson | Assessment of the Ti-C system[END_REF][START_REF] Frisk | A revised thermodynamic description of the Ti-C system[END_REF] leading to a chemical formula that can be expressed as TiC y in order to illustrate both the stoichiometry range and the location of vacancies on the C sublattice. Experimentally, the number of vacancies has a conspicuous effect on the lattice parameter. In a review of the Group 4a carbides, Storms [START_REF] Storms | A critical review of refractories. Part I. 
Selected properties of Group 4a, 5a and 6a carbides[END_REF] reported that the lattice parameter expands for 0.99 > y > 0.85 and shrinks for 0.85 > y > 0.5, the total change in lattice parameter being less than 1%, from 4.330 Å to 4.298 Å respectively [START_REF] Bittner | Magnetische untersuchungen der carbide TiC, ZrC, HfC, VC, NbC und TaC[END_REF][START_REF] Norton | Properties of non-stoichiometric metallic carbides[END_REF][START_REF] Rudy | Ternary phase equilibria in transition metal-boroncarbon-silicon systems[END_REF][START_REF] Storms | The refractory carbides[END_REF][START_REF] Ramqvist | Variation of Lattice Parameter and Hardness with Carbon Content of Group 4 B Metal Carbides[END_REF][START_REF] Vicens | Contribution to study of system Titanium-Carbon-Oxygen[END_REF][START_REF] Kiparisov | Effect of titanium carbide composition on the properties of titanium carbide-steel materials[END_REF][START_REF] Frage | Iron-titanium-carbon system. II. Microstructure of titanium carbide (TiCx) of various stoichiometries infiltrated with iron-carbon alloy[END_REF][START_REF] Fernandes | Characterisation of solar-synthesised TiCx (x=0.50, 0.625, 0.75, 0.85, 0.90 and 1.0by X-ray diffraction, density and Vickers microhardness[END_REF]. The unusual presence of a maximum in the lattice parameter vs. composition curve, corresponding to the anomalous volume behavior of TiC at small vacancy concentrations, was recently explained by Hugosson as resulting from an effect of local relaxation of the atoms surrounding the vacancy sites [START_REF] Hugosson | Phase stabilities and structural relaxations in substoichiometric TiC1-x[END_REF]. Consequently, changes in y of the TiC y phase are associated with a slight modification in both the lattice parameter (about 1%) and the peak intensities because of variations in the number of vacancies in the C sublattice.
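The quoted peak positions and the room-temperature extrapolation of the lattice parameter both follow from elementary relations: Bragg's law with the cubic interplanar spacing, and subtraction of the thermal expansion measured on the stoichiometric population. A quick consistency check of the numbers given in the text (all input values are from the paper):

```python
import math

def two_theta(a, hkl, wavelength):
    """2-theta (degrees) of reflection (h, k, l) for a cubic lattice parameter a."""
    h, k, l = hkl
    d = a / math.sqrt(h * h + k * k + l * l)   # cubic interplanar spacing
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

wl = 0.14370          # wavelength in angstrom (~86.3 keV)
a_rt = 4.330          # room-temperature lattice parameter of TiC0.96

print(round(two_theta(a_rt, (1, 1, 1), wl), 3))   # ~3.29, cf. the quoted 3.295 deg
print(round(two_theta(a_rt, (2, 0, 0), wl), 3))   # ~3.80, cf. the quoted 3.806 deg

# Room-temperature lattice parameter of the TiC_y population, assuming the same
# thermal expansion as TiC0.96 (delta-a = 4.353 - 4.330 = 0.023 angstrom):
a_ticy_rt = 4.336 - (4.353 - a_rt)
print(round(a_ticy_rt, 3))                        # ~4.313, cf. the quoted 4.314
```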
Thanks to the improved angular resolution of the present in-situ XRD study (∆a=0.002 Å), these variations were captured and analyzed by Rietveld refinement to follow in situ the pathway to equilibrium during heat treatment of a Ti-TiC composite material. The TiC 0.96 particles used as starting material present a continuous particle size distribution ranging over three decades, from 15-20 nm for the smallest to 7-10 µm for the biggest [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF], leading to different crystallite sizes with a constant starting stoichiometry (Table 2) and lattice parameter. The diffraction peaks of such a population of particles are characterized by a convolution of different contributions at the same 2θ position: a sharp and intense peak due to the biggest particles (i.e. a mean diameter of about 1 µm) and a broader peak due to the smallest particles (a few tens of nm), leading to a broadening at the bottom of the TiC diffraction peak, as evidenced in Figure 2.a. This specific peak shape was fitted during Rietveld refinement by using two populations of TiC particles: the first, labeled "TiC 0.96 _SC", presents a small crystallite size and is related to the smallest particles, whereas the second, labeled "TiC 0.96 _BC", is associated with the biggest particles. An example of a Rietveld refinement result for the (200) TiC peak is given in Figure 2.a. During the sequential Rietveld refinement, the chemical composition of these two populations, corresponding to the starting material (i.e. C and Ti site occupancies), was fixed and kept constant, whereas the intensity scale factor, the peak shape profile parameters and the lattice parameters were refined independently. With increasing temperature, the diffraction peaks associated with TiC 0.96 _BC and TiC 0.96 _SC shifted to lower 2θ values due to thermal expansion.
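The decomposition into a sharp "big crystallite" peak and a broad foot from the nanometric fraction is consistent with size broadening. As an order-of-magnitude illustration (not part of the paper's refinement), the Scherrer relation with the usual shape factor K ≈ 0.9 gives:

```python
import math

def scherrer_fwhm(size_nm, two_theta_deg, wavelength_nm, K=0.9):
    """Approximate size-broadening FWHM in degrees 2-theta (Scherrer relation)."""
    theta = math.radians(two_theta_deg / 2.0)
    beta_rad = K * wavelength_nm / (size_nm * math.cos(theta))
    return math.degrees(beta_rad)

wl_nm = 0.014370                 # 0.14370 angstrom expressed in nm
for size in (1000.0, 20.0):      # ~1 um "big" vs ~20 nm "small" crystallites
    fwhm = scherrer_fwhm(size, 3.806, wl_nm)
    print(f"{size:6.0f} nm -> {fwhm:.4f} deg 2-theta")
```

The ~20 nm crystallites broaden the (200) peak by a few hundredths of a degree, an order of magnitude above the instrumental resolution of 0.003°(2θ), while the micron-sized population remains resolution-limited — hence the two-population fit.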
In addition, new diffraction peaks were observed at higher 2θ values and associated with the formation of the substoichiometric TiC y phase, labeled "TiC y ". During the sequential Rietveld refinement, the composition, the intensity scale factor, the peak shape profile parameters and the lattice parameters of the TiC y phase were refined. An example of Rietveld refinement of the (200) TiC diffraction peaks acquired at 900°C is given in Figure 2.b. The time-dependent change in cell parameter is given in Figure 4. Note that t=0 on Figure 4 corresponds to the beginning of the heating step. First of all, this figure confirms the presence during the whole process of only two populations of TiC, characterized by two distinct values of the lattice parameter. From a lattice parameter a RT =4.330 Å at room temperature, the initial population corresponding to TiC 0.96 has a parameter of a 800°C =4.353 Å at 800°C, and this cell parameter remains stable during the course of reaction 1. It may be deduced that the cell expansion of TiC 0.96 due to the temperature increase is ∆a=0.023 Å. Concerning the TiC y population that forms at high temperature, Figure 4 shows an almost constant lattice parameter of a TiCy@800°C =4.336 Å, the variation being less than 0.02% during the experiment. Assuming that the thermal expansion is the same for TiC 0.96 and TiC y , the lattice parameter of the TiC y population at RT can be estimated to be a TiCy@RT =4.314 Å. Following the correlation between the cell parameter and the stoichiometry of TiC y reported by Storms [START_REF] Storms | A critical review of refractories. Part I. Selected properties of Group 4a, 5a and 6a carbides[END_REF], the composition of the TiC y population is found to be TiC 0.57 . This experimental determination of the TiC y composition that forms at high temperature is in good agreement with the value expected when thermodynamic equilibrium is reached between the carbide and the Ti phase at 1073 K (800°C) [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF][START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF]. The formation of a sub-stoichiometric TiC y phase is confirmed at the interface between the TiC 0.96 particles and the titanium matrix, as illustrated in the STEM micrograph of a selected zone of the MMC microstructure (Figure 5). It leads to a core-shell structure of the particles, with a stoichiometric TiC core and a TiC y shell, in accordance with the bright-field mode contrasts. The total mass fraction of reinforcing TiC in the composite, calculated using equation (1) and based on the experimental results obtained during the isothermal experiment at 1073 K (800°C), is presented in Figure 7 (filled circles). The calculated mass fraction of reinforcing TiC in the composite is of major concern as it is a key parameter governing the expected mechanical properties of the MMC. As the in-situ experiment time was set to 3600 s, final equilibrium was not reached, but an extrapolation based on the total conversion of the initial stoichiometric population to the substoichiometric composition determined by XRD provides a means of estimating the equilibrium mass fraction of TiC, which is found to be equal to 0.2303 (dash-dotted horizontal line on Figure 7). This value is in good agreement with that expected after thermodynamic equilibrium is reached (0.2306).

Modeling

Simulation conditions

In order to model the changes that take place in the Ti-TiC metal matrix composite during high temperature heat treatment, the DICTRA software package [START_REF] Andersson | Computer simulation of multicomponent diffusional transformations in steel[END_REF] was used.
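The extrapolated equilibrium fraction can be cross-checked with a simple carbon mass balance over reaction (1). The sketch below takes the initial loading of 16.23 mass% TiC0.96 and the equilibrium stoichiometry TiC0.57 from the text; the carbon content dissolved in the α-Ti matrix at 800°C, `w_c_matrix`, is not given in the paper and is treated here as a free parameter — a few tenths of a mass percent, of the order of the known C solubility in α-Ti, reproduces the quoted value.

```python
# Carbon mass balance across reaction (1), per 1 g of composite (closed system).
M_TI, M_C = 47.867, 12.011           # molar masses (g/mol)

def equilibrium_tic_fraction(w0_tic, y0, y_eq, w_c_matrix):
    """Equilibrium mass fraction of TiC_y from conservation of carbon."""
    n_tic0 = w0_tic / (M_TI + y0 * M_C)              # moles of initial carbide
    m_c_total = n_tic0 * y0 * M_C                    # all C starts in the carbide
    w_c_carbide = y_eq * M_C / (M_TI + y_eq * M_C)   # C mass fraction of TiC_y
    # solve  w_c_carbide * m + w_c_matrix * (1 - m) = m_c_total  for m
    return (m_c_total - w_c_matrix) / (w_c_carbide - w_c_matrix)

for w_c_matrix in (0.0, 0.002, 0.0035):              # assumed matrix C contents
    m = equilibrium_tic_fraction(0.1623, 0.96, 0.57, w_c_matrix)
    print(f"w_c_matrix = {w_c_matrix:.4f} -> TiC_y mass fraction = {m:.4f}")
```

With no carbon left in the matrix the balance overshoots (≈0.252); a matrix content near 0.35 mass% C brings the result to the 0.2303 value read from Figure 7.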
DICTRA stands for diffusion-controlled transformation and is based on a numerical solution of the multicomponent diffusion equations with local thermodynamic equilibrium at the phase interfaces. The program is suitable for treating phenomena such as growth [START_REF] Borgenstam | DICTRA, a tool for simulation of diffusional transformations in alloys[END_REF], dissolution [START_REF] Agren | Kinetics of Carbide Dissolution[END_REF] and coarsening of particles in a matrix phase [START_REF] Gustafson | Coarsening of TiC in austenitic stainless steel -experiments and simulations in comparison[END_REF]. The DICTRA software package is built on databases for thermodynamics and diffusion. The present simulations were performed using the MOB2 mobility database provided by the Thermo-Calc company and the thermodynamic assessment of the C-Ti system published by Dumitrescu et al. [START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF]. The simulations were performed considering a spherical geometry, representative of reinforcing particles embedded in a metallic matrix. The carbide particle was defined as being spherical with an initial radius R TiC at the center of a spherical cell of radius R Mat defined by the surrounding metallic matrix (see Figure 8). The mean diameter of the TiC particles in the composite powder after the ball-milling step, before any high temperature treatment, is estimated to be D TiC = 128 nm. However, the residual presence of a tail corresponding to micron-sized particles must be taken into account. Note that the volume of matter corresponding to a 1 µm particle is equivalent to that of 1000 particles or 8000 particles of respectively 100 and 50 nm in size. It is difficult to obtain the real size distribution by counting particles from SEM observations because the quantity of the biggest particles will be systematically overestimated.
Therefore, the size distribution will be used as a guide giving the main trends, but exact numerical values cannot be used for modeling.

Fig. 9 Diameter distribution of TiC0.96 particles removed from the composite powder by selective acid etching after a ball-milling step and before any high temperature treatment.

As a consequence, the initial size of TiC 0.96 particles in the model was estimated from Figure 6 and more precisely from the volume ratio, r, between untransformed and transformed populations of TiC, respectively the TiC 0.96 and TiC 0.57 populations. During the heat treatment, each particle can be considered as a core-shell structure with the residual TiC 0.96 composition as a core of radius R and the modified TiC 0.57 stoichiometry as a shell of thickness e. Given that the transformation occurs by a solid state diffusion process through the external shell, its thickness e depends only on the time of interaction and can be estimated by equation (2) regardless of the initial particle size (D_inter is the interdiffusion coefficient in TiC 0.57):

e ≈ √(D_inter · t)   (Eq. 2)

The ratio r of the core to shell volumes for a spherical particle, corresponding to the advancement of the reaction transforming the particle, is then given by equation (3):

r = R³ / [(R + e)³ − R³]   (Eq. 3)

The radius of the residual inner TiC 0.96 core can then be estimated for a given diffusion time t and advancement of reaction r following equation (4):

R = e / [((1 + r)/r)^(1/3) − 1]   (Eq. 4)

This equation can be used to estimate, for a given interaction time t, the radius of the residual TiC 0.96 core corresponding to a given advancement of reaction r. According to Figure 6, the residual amount of the initial TiC 0.96 population is about 50% (r = 0.5) after t = 400s at 1073K (800°C).
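The core-shell geometry can be checked numerically; the closed forms below are a reading of the core-to-shell volume-ratio definition in the text, not equations copied from the paper:

```python
def core_radius(e_nm, r):
    """Residual TiC0.96 core radius R for a shell of thickness e and a
    core-to-shell volume ratio r, inverting r = R^3 / ((R+e)^3 - R^3)."""
    return e_nm / (((1.0 + r) / r) ** (1.0 / 3.0) - 1.0)

def core_shell_ratio(R_nm, e_nm):
    """Core-to-shell volume ratio of a spherical core-shell particle."""
    return R_nm ** 3 / ((R_nm + e_nm) ** 3 - R_nm ** 3)

R = core_radius(10.0, 0.5)  # e = 10 nm, 50% TiC0.96 remaining -> R ~ 22.6 nm
```

With e = 10 nm and r = 0.5 this gives R ≈ 22.6 nm, consistent with the R = 23 nm quoted below for the numerical application.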
Using the interdiffusion coefficient in TiC 0.57 at this temperature reported by van Loo et al. [START_REF] Van Loo | On the diffusion of carbon in titanium carbide[END_REF], numerical application leads to a shell thickness e = 10 nm and a core radius R = 23 nm. From these simple calculations, it may be concluded that, considering a single particle size distribution, the very fast transformation rate of the initial TiC 0.96 composition can only be obtained for particles with an initial radius smaller than the typical value of about 30 nm. As a consequence, an initial TiC particle radius of 30 nm is used for modeling (diameter equal to 60 nm). In the model, the thickness of the metallic shell in which the carbide particle is embedded is fixed in order to obtain a mass fraction of reinforcement equal to 16.23 mass%, i.e. a value corresponding to the composite material experimentally studied. In the case of carbide particles with a diameter of 60 nm, this consideration leads to a Ti matrix shell thickness of 29 nm. To model the isothermal treatment at 1073K (800°C), the matrix and particle domains are considered as being respectively the HCP_A3 phase with an initial C mass fraction of 10⁻⁶ (see Table 2) and the carbide TiC 0.96 phase with a starting C composition of 19.74 mass% C (Table 3).

Figures 10.a and 10.b indicate that the dissolution of the smallest particles, required to allow carbon saturation of the Ti matrix, is expected to be achieved after a typical isothermal time at 1073K (800°C) of about 1 s. This result should be compared with Figure 6, where the Rietveld refinement of diffraction peaks shows the disappearance of the population of smallest particles after about 100 s. Obviously, some discrepancies are to be expected for such short times as the experimental heating rate is finite (i.e. 60 K.s⁻¹), even if a step function toward 1073K (800°C) was used both as the induction heater setpoint and in the calculations. Therefore, the typical times of the experimental results are expected to be slightly higher than those obtained from the kinetics calculations.
More precisely, a careful analysis of Figure 4 reveals that the thermal expansion of the initial population of stoichiometric TiC 0.96 particles is observed experimentally in the center of the sample a few hundreds of seconds after starting the induction heater. Considering these limitations in the comparison of the shortest characteristic times for experiments and calculations, the agreement is relatively good and illustrates that chemical exchanges and solid state diffusion of C in the Ti matrix are extremely fast. The solid line in Figure 7 presents the calculated change in the TiC mass fraction as a function of treatment time at 800°C in the case of TiC 0.96 particles with a unique size of 60 nm diameter. The experimental mass fraction of TiC determined from the in-situ experiments is also reported (filled circles). The model correctly reproduces the fast increase in the TiC mass fraction observed by XRD in-situ measurements at the beginning of heat treatment. However, with a unique size of 60 nm, the final equilibrium is almost reached after only 1000 s whereas, experimentally, equilibrium is still not obtained after 3600s. As previously noted, the presence after milling of residual micron-sized particles is responsible for the experimentally observed two-step change: fast conversion of the smallest particles and much slower conversion of the biggest particles. Given that diffusion processes control the change in the amount of TiC particles, this change is therefore limited by the presence of the few big particles with the initial TiC 0.96 composition remaining in the TiC population after milling. Therefore, in order to improve the modeling process and to illustrate the drastic influence of the residual presence of big particles, the same calculation was performed with a 1 µm size particle and associated with the preceding results by a simple linear combination. The relative amount of the two classes of particles is defined in order to get a 50% volume fraction of the smallest class, a value that should allow the initial fast reaction rate to be captured. Calculation results are reported in Figure 7 with a dashed line. As expected, the presence of one micron-sized particle, in association with 6000 particles with a diameter of 60 nm, allows the capture of both the initial transformation rate and the asymptotic trend towards the equilibrium state. Note that just one micron-sized particle represents 38% of the total volume of particles in the sample. Finally, to illustrate the potential of thermokinetic modeling, a third class is added with an intermediate size of 160 nm in diameter and the results are presented in Figure 7 with a dotted line (populations in number are 6000, 200 and 2 for the 60, 160 and 1000 nm sizes respectively).

Discussion

The present results highlight the fast kinetics of the trend towards thermodynamic equilibrium of a Ti-based matrix composite reinforced by TiC 0.96 particles. During isothermal treatment at 1073K (800°C), the dissolution of the smallest TiC 0.96 particles to reach saturation of the Ti matrix is obtained after a few tenths of a second, while the amount of the equilibrium carbide composition, TiC 0.57, concomitantly increases sharply. Moreover, after only 6 min of isothermal treatment at 1073K (800°C), 50% of the conversion of TiC particles from their initial TiC 0.96 composition towards the equilibrium value, TiC 0.57, is achieved. It is to be expected that such high reaction rates will have major consequences on the MMC synthesis process. First of all, reaching the C saturation of the Ti matrix induces the dissolution of about 10% of the initial TiC 0.96 particles (Figure 10.b). This dissolution process affects the smallest particles, which means that most of the effort made during milling to decrease the size of the initial particles is cancelled out. Next, the change in initial TiC composition (TiC 0.96) towards the equilibrium value (TiC 0.57) is achieved by partial conversion of the Ti matrix into carbide phase.
This process therefore leads to an increase in the total amount of carbide in the composite from 16 mass% to about 19 mass% after 6 min at 1073K (800°C) (Figure 7) and to an increase in particle size [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. These changes induce a trend towards the formation of particle clusters that might have detrimental effects on the mechanical properties because of local embrittlement. These two main processes, i.e. dissolution of the smallest particles and increase in the total amount of carbide, occur during the industrial high temperature consolidation step of a Ti-based matrix composite reinforced by TiC particles. It is difficult to avoid them for two reasons: the driving force is thermodynamic and their kinetics is very fast. As a consequence, they have to be considered upstream in the process design: for example by reducing the initial quantity of TiC particles. The reliability of thermokinetic modeling to simulate the change in TiC reinforcement in Ti-based composites has been demonstrated for the case of isothermal treatment (Figure 7). It can thus now be used to predict the change in mass fraction of reinforcement during the heat treatment that is performed prior to the consolidation step. Figure 11 presents a typical industrial heat treatment that involves inserting a billet, containing the cold compacted composite powder, inside a furnace heated to 1173K (900°C) for 1h. According to temperature measurements inside the consolidation furnace used for the experiment, the duration of the heating step is considered to be 10 min while the isothermal holding time is 50 min. The calculated change in TiC mass fraction, considering three particle size classes (as in Figure 7), is reported in Figure 11, as well as the experimental value determined just after the consolidation step.
Once again, this figure highlights the fast kinetics of the reaction occurring inside the MMC, as most of the transformation of the TiC stoichiometry, associated with the increase in particle mass fraction, is already achieved after a holding time of 10 min.

Fig. 11 Change in the calculated TiC mass fraction during heat treatment of a composite powder compact prior to the consolidation step (solid line). Temperature vs time is also reported (dashed line), as well as the experimental mass fraction determined after heat treatment and the consolidation step (filled circle).

Conclusion

The reaction tending towards thermodynamic equilibrium during the synthesis of Ti/TiC MMC prepared by the powder metallurgy route was studied by in-situ synchrotron X-ray diffraction. It was found that the carbide composition changes rapidly from its initial stoichiometric composition TiC 0.96 towards a sub-stoichiometric value (TiC 0.57) corresponding to the thermodynamic equilibrium with the C-saturated Ti matrix. The reaction is almost complete after only a few minutes at 800°C for the smallest particles, the particle size being the rate-limiting factor. Modeling of the diffusion processes during isothermal heat treatment of the MMC at 800°C was performed using three particle size classes. First, dissolution of the smallest particles (about 10% of the initial TiC 0.96 particles) is expected to be achieved after only 1s at 800°C. Second, the change in TiC composition leads to an increase in the total amount of carbide in the composite from 16 mass% to 19 mass%. The consequences for the industrial process of Ti/TiC MMC synthesis have also been considered. A typical industrial heat treatment of an MMC billet, 1h at 900°C, was modeled, and the results, showing an increase of the total amount of carbide in the composite from 16 mass% to 22 mass%, are in rather good agreement with the experimental value (21 mass%).
This illustrates the potential benefits of thermodynamic and kinetic modeling, combined with in-situ X-ray diffraction, in understanding and optimizing industrial processes for MMC synthesis.

Fig. 1 Example of diffraction pattern collected from the sample at room temperature with peak identification

Fig. 2.a During the sequential

3.2 Isothermal heating at 1073K (800°C) for 2h

3.2.1. Raw data
The time-dependent change in the TiC(200) peak during isothermal heating at 1073K (800°C) is shown in Figure 3. The colorscale is related to the peak intensity. Note that the darker area around 4500s was due to synchrotron beam refill. It can be clearly seen from Figure 3 that a new diffraction peak appears at higher 2θ values for the TiC(200) peak, corresponding to the formation of a TiC population with smaller cell parameters, i.e. a sub-stoichiometric composition TiC y. The peak position of the TiC y composition remains constant as a function of time at 1073K (800°C), meaning that the lattice parameter, and therefore the stoichiometry of the phase, remain constant. Finally, the intensity of the TiC y diffraction peak increases whereas that of the starting TiC 0.96 population decreases. Note that the same results were observed with the TiC(111) peak.

Fig. 3 Time-dependent change in the TiC(200) during isothermal treatment at 1073K (800°C). The colorscale indicates the peak intensity (on-line version).

Fig. 4 Time-dependent change in cell parameter during isothermal treatment at 800°C. t=0 corresponds to the beginning of the heating step.

Fig. 5 Bright field STEM view of core-shell microstructure of TiC particles in Ti-TiC MMC, obtained after 1 min at 900°C.
Figure 6 presents the time-dependent variation in the amount of the three identified populations of TiC particles (see Section 3.1) during isothermal annealing at 1073K (800°C). As already observed in Figure 3, the quantity of the TiC y population increases whereas that of the initial TiC 0.96 composition decreases. More interestingly, the population of the initial smallest crystallites (TiC 0.96_SC) is consumed after only 3 min of heat treatment. In addition, 50% conversion of TiC 0.96_BC into the TiC y phase is reached after only 6 min of heat treatment. Finally, after 1h30 at 1073K (800°C), a complete reaction is not observed as ~25% of TiC 0.96_BC remains in the sample.

Fig. 6 Changes in the quantity of different populations of TiC calculated using Rietveld refinement during isothermal treatment at 800°C. t=0 corresponds to the beginning of the heating step.

Fig. 7 Calculated time-dependent change in the mass fraction of TiC particles in a Ti matrix during isothermal heat treatment at 1073K (800°C). The solid line corresponds to one class of particles with a unique TiC diameter of 60nm. The dashed line shows the results obtained with two classes of TiC particles with different diameters: 6000 particles of 60nm, and one particle of 1µm. The dotted line corresponds to three classes of particles: 6000 particles of 60nm, 200 particles of 160nm and two particles of 1µm. The experimental values and the final asymptotic amount of TiC are also reported (filled circles and dashed-dotted horizontal line respectively).

Fig. 8 Schematic model of the initial configuration of the system used in the DICTRA simulations

Fig. 10 (a) Time-dependent change in the C content at the external Ti matrix interface during isothermal treatment at 1073K (800°C). (b) Time-dependent change in the calculated mass fraction of TiC particles in the Ti matrix at the very beginning of the isothermal treatment, during the dissolution step.

Table 1 Chemical analysis of the Titanium powder used as starting material (SCA-CNRS Solaize)

Small cylinders of powder compacts (3 mm

Table 2 Chemical analysis of the TiC0.96 powder used as starting material (SCA-CNRS, Solaize)
Element concentration (ppm in mass) in TiC 0.96 (starting powder):
Fe: 1200; Cr: 118; V: 170; Ni: 116; Ca: 77; Cu, Cl, K, Zn, S, Er, As: traces

Table 3 X-ray fluorescence analysis of the TiC0.96 powder used as starting material (SCA-CNRS, Solaize)

2.2 STEM characterization
The sample for STEM characterization was placed in a liquid-tight graphite crucible and then immersed for 1 min in a liquid aluminium bath held at 900°C. After 1 min, the crucible was quenched in water. STEM characterization was performed on a JEOL JEM-2100F microscope in bright field mode, under an accelerating voltage of 200 kV and with a magnification of ×40000.
3.3.2. Rietveld refinement results
Acknowledgements
This work was undertaken in the framework of the COMETTi project funded by the French national research agency (ANR) [grant number ANR-09-MAPR-0021]. O.D. is very grateful to Dr. S. Fries and Pr. I. Steinbach from ICAMS institute at Bochum University (Germany) for allowing DICTRA calculations on their informatics cluster. J.A. thanks ID15B beamline staff for their help during beam time and ESRF for the provision of beamtime through in-house research during his post-doctoral position. The authors thank Gilles RENOU for the STEM observation performed at the "Consortium des Moyens Technologiques Communs" (CMTC, http://cmtc.grenoble-inp.fr). Composite powders were provided by the Mecachrome company (www.mecachrome.fr).

Conflicts of interest statement
The authors declare that they have no conflict of interest.
01757191
en
[ "sdv.sp.pharma", "sdv.mhep.rsoa" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757191/file/APS-DOAC%20Letter%20to%20the%20editor%20Rheumatology%20_%20REF.pdf
Quentin Scanvion, Sandrine Morell-Dubois, Cécile M Yelnik, Johana Bene, Sophie Gautier, Marc Lambert
Correspondence: Pr Marc Lambert, email: [email protected]

Comment on: Failure of rivaroxaban to prevent thrombosis in four patients with anti-phospholipid syndrome

Letter to the Editor

Sir, we read with great interest the letter published recently by Dufrost et al. [START_REF] Dufrost | Failure of rivaroxaban to prevent thrombosis in four patients with anti-phospholipid syndrome[END_REF] on the failure of rivaroxaban to prevent thrombosis in patients with APS, as well as the interview between Dr. Hannah Cohen and Prof. Bernard Lauwerys [START_REF]Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome. Dr. Hannah Cohen about the results of the RAPS trial[END_REF]. We think that this subject is important because we have observed growing use of direct oral anticoagulants (DOAC) in APS patients. The Rivaroxaban in Anti-Phospholipid Syndrome (RAPS) study is the only randomized controlled trial studying the use of rivaroxaban versus warfarin to prevent thrombosis recurrence [START_REF] Cohen | Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome, with or without systemic lupus erythematosus (RAPS): a randomised, controlled, open-label, phase 2/3, non-inferiority trial[END_REF]. The primary outcome was the percentage change in endogenous thrombin potential, selected because of the low frequency of clinical outcomes and the ability of this particular test to measure the biological activity of both drugs, according to Cohen's interview. The authors conclude that rivaroxaban could be an effective alternative to vitamin K antagonist (VKA) therapy in APS patients since no thrombotic event was observed.
Although we appreciate the randomized controlled nature of the study, we think that the authors' conclusion is an overstatement because of the short duration of the study follow-up (210 days) and the nature of the study, which was not designed to demonstrate clinical non-inferiority. We also think that predicting a drug's efficacy in APS only on the basis of anticoagulation does not reflect the complexity of APS pathophysiology, which involves platelets, endothelial cells, monocytes, and immune and inflammatory mechanisms as well [START_REF] Giannakopoulos | The pathogenesis of the antiphospholipid syndrome[END_REF]. To illustrate, we report two new cases of APS patients; both relapsed under DOAC after having been stable under VKA therapy.

The first patient is a 66 year-old SLE female diagnosed in 1967 (oral ulcers, polyarthritis, ANA, immune thrombocytopenic purpura). SLE-associated APS was diagnosed in 2007 after a deep vein thrombosis of the right arm in the context of LA and anti-beta2gp1 positivity. She remained relapse-free under VKA therapy during 8 years and then switched to rivaroxaban (20mg) in 2015. Because of refractory ITP (corticosteroids, rituximab and splenectomy), eltrombopag was started. Six months later (one year after the switch to rivaroxaban), she experienced chest pain associated with elevated troponin, leading to the diagnosis of myocardial micro-thrombosis on MRI of the heart. Concomitantly, multiple cerebral ischemia and cutaneous arterial micro-thrombosis (confirmed by biopsy) occurred within a week, which led to the diagnosis of catastrophic APS. She improved after corticosteroids, intravenous immunoglobulin, curative heparin therapy, plasmapheresis and rivaroxaban/eltrombopag withdrawal.

The second patient is a 44 year-old female diagnosed with primary triple-positive APS in 2007 (stroke with Libman-Sacks endocarditis and obstetrical morbidity) and treated by warfarin. Despite subtherapeutic INR, no relapse was observed on yearly-repeated echocardiography and cerebral MRI during the 7 years of follow-up. Rivaroxaban (20mg) was initiated in 2014. Compliance was suboptimal. In May 2017, Libman-Sacks endocarditis relapsed with new ischemic strokes. Remission was obtained after switching to heparin, then warfarin. An aortic valve replacement by mechanical prosthesis was required.

We notice that DOACs are increasingly prescribed to treat APS patients, whatever the expression of their disease. Among the DOACs, rivaroxaban is primarily prescribed for the treatment of APS according to the initial RAPS results [START_REF] Cohen | Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome, with or without systemic lupus erythematosus (RAPS): a randomised, controlled, open-label, phase 2/3, non-inferiority trial[END_REF]. However, evidence-based medicine inspires caution. The Task Force on APS Treatment Trends of the 15th International Congress on Antiphospholipid Antibodies stated that there was insufficient evidence to make recommendations at this time regarding the use of these DOACs in APS, which can only be considered when there is known VKA allergy/intolerance, in patients with only venous APS. Thus VKA remains the mainstay of anticoagulation in thrombotic APS, and non-adherence is not a reason to switch [START_REF] Erkan | 14th International Congress on Antiphospholipid Antibodies Task Force Report on Antiphospholipid Syndrome Treatment Trends[END_REF]. In the APS-ACTION registry, an international multicenter prospective database, 19 of 428 thrombotic APS patients were under DOAC, of whom 6 relapsed during the 2-year follow-up (15.8% annual thrombosis risk), compared to a 1.5% risk in VKA-receiving patients [START_REF] Unlu | Antiphospholipid syndrome alliance for clinical,trials,and international networking (APS action) clinical database and repository analysis: direct oral anticoagulant use among among antiphospholipid syndrome patient[END_REF], suggesting that DOAC and VKA do not have similar efficacy. Furthermore, Cohen's study included only patients at low risk (neither prior arterial thrombotic event nor previous relapse with INR between 2.0 and 3.0). Because it is difficult to evaluate, before starting, whether "standard intensity" [START_REF]Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome. Dr. Hannah Cohen about the results of the RAPS trial[END_REF] anticoagulation by VKA will be safe, DOAC should be considered only as a second-line treatment. Moreover, an inaugural venous event does not exclude later arterial thromboses: our first patient had no prior arterial history before the catastrophic APS. Thus we endorse the conclusion of Dufrost et al. [START_REF] Dufrost | Failure of rivaroxaban to prevent thrombosis in four patients with anti-phospholipid syndrome[END_REF] and do not agree that rivaroxaban offers an effective, safe and convenient alternative to warfarin in APS patients, as Cohen suggested [START_REF]Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome. Dr. Hannah Cohen about the results of the RAPS trial[END_REF].

The manifestations of APS span a heterogeneous clinical spectrum. Awaiting randomized controlled trials with clinical outcomes [START_REF] Pengo | Efficacy and safety of rivaroxaban vs warfarin in high-risk patients with antiphospholipid syndrome: Rationale and design of the Trial on Rivaroxaban in AntiPhospholipid Syndrome (TRAPS) trial[END_REF][START_REF] Woller | Apixaban for the Secondary Prevention of Thrombosis Among Patients With Antiphospholipid Syndrome: Study Rationale and Design (ASTRO-APS)[END_REF] and prolonged follow-up to clarify whether DOACs are efficient alternatives to VKAs, caution and stringent clinical review are especially necessary in known high-risk patients (Table 1).

Table 1: Clinical phenotypes of APS require caution with the use of direct oral anticoagulants.

Lack of evidence-based medicine:
- Venous thrombotic APS without previous relapse with INR between 2.0 and 3.0
- Association with pro-thrombotic treatment
- Poor compliance (no biological monitoring and short half-life)

No evidence-based medicine:
- Arterial and small vessel thrombotic APS
- Libman-Sacks endocarditis
- Triple-positive aPL profile
- Relapsing patient despite anticoagulant

Conflicts of interest: Prof. LAMBERT receives fees from BAYER, BMS-PFIZER and DAIICHI-SANKYO. QS, SMD, CMY, JB and SG declare no conflicts of interest.

Funding source: No specific funding was received from any bodies in the public, commercial or not-for-profit sectors to carry out the work described in this manuscript.
01757201
en
[ "sdu.stu.gc" ]
2024/03/05 22:32:10
2018
https://brgm.hal.science/hal-01757201/file/EUG2018-nitrate.pdf
Wolfram Kloppmann, Neus Otero, Daren C. Gooddy, Dan Lapworth, Ben Surridge, Emmanuelle Petelet, Christine Flehoc, Nicole Baran

Limitations of the isotopic composition of nitrates as a tracer of their origin

Nitrogen and oxygen isotopes are traditionally considered and frequently used as tracers of nitrate sources in watersheds used for drinking water production. The enrichment of synthetic nitrate-containing fertilizers in 18O, due to the contribution of atmospheric oxygen in the production process, confers a specific isotopic fingerprint on mineral fertilizers. In spite of the still widespread use of nitrate-containing synthetic fertilizers, their characteristic N and O isotope signatures are rarely unambiguously observed in nitrate-contaminated groundwater. We postulate, in line with Mengis et al. (2001), that fertilizer-derived nitrate is not directly and rapidly transferred to groundwater but rather retained in the soil-plant system as organic N and then mineralized and re-oxidized (termed the mineralization-immobilization turnover, MIT), thereby re-setting the oxygen isotope composition of nitrate and also changing its N isotope ratios. We show examples from watersheds on diverse alluvial/clastic and carbonate aquifers in eastern and northern France where, in spite of the use of mineral fertilizers, evidenced also through other isotopic tracers (boron isotopes), both N and O isotope ratios are very homogeneous and compatible with nitrification of ammonium in which 2/3 of the oxygen is derived from soil water and 1/3 from atmospheric O2. These field data are corroborated by lysimeter data from Canada.
Even if δ15N values in areas where ammonium is derived from chemical fertilizers still tend to be lower than in areas where ammonium is derived from manure/sewage, this is clearly a limitation of the dual isotope method (N, O) for nitrate source identification, but it has important implications for nitrogen mobility and residence time in soils amended with synthetic fertilizers (Sebilo et al., 2013).
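The 2:1 oxygen sourcing invoked above corresponds to the classical isotope mass balance for microbial nitrification, δ18O(NO3) ≈ (2/3)·δ18O(H2O) + (1/3)·δ18O(O2). A minimal sketch, where the +23.5‰ value for atmospheric O2 and the −8‰ soil water are assumed, illustrative figures rather than data from this study:

```python
def d18o_nitrified(d18o_water, d18o_o2=23.5):
    """Expected d18O (per mil) of nitrate produced by nitrification,
    with 2/3 of the O taken from soil water and 1/3 from atmospheric O2."""
    return (2.0 * d18o_water + d18o_o2) / 3.0

d18o_nitrified(-8.0)  # -> 2.5 per mil, far below fertilizer-nitrate d18O
```

Such low values explain why the elevated 18O fingerprint of synthetic nitrate is erased once the nitrogen has cycled through the soil MIT pool.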